WO2021196108A1 - Method, device, equipment and storage medium for calibration while scanning the field in a large-space environment


Info

Publication number: WO2021196108A1
Application number: PCT/CN2020/082886
Authority: WO (WIPO, PCT)
Prior art keywords: camera, optical, data, cameras, information
Other languages: English (en), French (fr)
Inventors: 王越, 许秋子
Original assignee: 深圳市瑞立视多媒体科技有限公司
Application filed by 深圳市瑞立视多媒体科技有限公司

Priority and related applications:
  • CN202111008244.0A (granted as CN113744346B)
  • CN202080000455.7A (granted as CN111566701B)
  • CN202111008457.3A (granted as CN113744347B)
  • PCT/CN2020/082886 (published as WO2021196108A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02E REDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E10/00 Energy generation through renewable energy sources
    • Y02E10/50 Photovoltaic [PV] energy

Definitions

  • This application relates to the field of computer vision technology, and in particular to a method, device, equipment and storage medium for calibration while scanning the field in a large-space environment.
  • In order to locate and track an object, the cameras must first be calibrated. In an optical motion capture system, the calibration process requires continuously swinging a calibration rod in the middle of the field so that the data collected by every camera can be recorded; this data collection process is called sweeping the field. In a multi-camera environment, calibration must determine not only the parameters of each camera but also the positional relationships between cameras. The calibration process therefore needs to collect a large amount of camera data and optimize it with complex algorithms in order to achieve high calibration accuracy.
  • This sweep-and-calibrate procedure brings considerable inconvenience to users. First, the calibration process requires a large amount of data and complex computation, and takes too long. Second, the calibration algorithm only starts after the data has been collected; to ensure the accuracy of the computation, the user must collect a large amount of data in one pass, and this data contains much useless, redundant data, which in turn aggravates the complexity and running time of the algorithm. Third, if the result of one calibration run does not meet the user's expectations, the user must sweep the field again, wasting a great deal of manpower and material resources.
  • The main purpose of this application is to provide a method, device, equipment and storage medium for calibration while scanning the field in a large-space environment, aiming to solve the technical problem that calibrating multiple optical cameras in a large-space environment is time-consuming and labor-intensive.
  • To achieve this, the present application provides a method for calibration while scanning the field in a large-space environment. The method includes the following steps:
  • acquiring the camera serial numbers of multiple optical cameras, collecting the multiple frames of data captured by each optical camera of the swinging calibration rod, and classifying the frames containing coordinate data frame by frame, each frame yielding multiple corresponding initial data, where each initial datum includes a camera serial number and the corresponding coordinate data;
  • among the multiple initial data, when at least two optical cameras have collected a preset number of frames of coordinate data, determining the main camera among the multiple optical cameras according to the initial data, and obtaining the target external parameters of each optical camera according to the main camera; acquiring the hardware resolution of each optical camera, and obtaining the target internal parameters of each optical camera according to the hardware resolution;
  • calculating the reprojection error of each optical camera according to its target internal parameters, target external parameters and all collected coordinate data, and recording the reprojection error as the calibration accuracy of the optical camera; if the calibration accuracy of any optical camera is greater than a preset accuracy threshold, repeating the first step for the optical cameras whose calibration accuracy exceeds the threshold, until the calibration accuracy of all optical cameras is no greater than the threshold;
  • filtering out the unique main camera from all main cameras, defining the rotation information of the unique main camera as an identity matrix and the translation information of that main camera as a zero matrix, obtaining the rotation and translation information of each main camera according to the rotation and translation information of the unique main camera, and obtaining the rotation and translation information of each optical camera according to the rotation and translation information of each main camera, this rotation and translation information being the target external parameters of the optical cameras.
  • This application also provides a device for calibration while scanning the field in a large-space environment, including:
  • an initial data acquisition module, used to acquire the camera serial numbers of multiple optical cameras, collect the multiple frames of data captured by each optical camera of the swinging calibration rod, and classify the frames containing coordinate data frame by frame, each frame yielding multiple corresponding initial data, each initial datum including a camera serial number and the corresponding coordinate data;
  • a parameter determination module, used to determine, when at least two optical cameras have collected a preset number of frames of coordinate data among the initial data, the main camera among the multiple optical cameras according to the initial data, obtain the target external parameters of each optical camera according to the main camera, acquire the hardware resolution of each optical camera, and obtain the target internal parameters of each optical camera according to the hardware resolution;
  • a calibration accuracy feedback module, used to calculate the reprojection error of each optical camera according to its target internal parameters, target external parameters and all collected coordinate data, record the reprojection error as the calibration accuracy of the optical camera and, if the calibration accuracy of any optical camera is greater than the preset accuracy threshold, repeat the first step for the cameras whose calibration accuracy exceeds the threshold until the calibration accuracy of all optical cameras is no greater than the threshold;
  • an overall optimization module, used to filter out the unique main camera from all main cameras, define the rotation information of the unique main camera as an identity matrix and the translation information of that main camera as a zero matrix, obtain the rotation and translation information of each main camera according to the rotation and translation information of the unique main camera, and obtain the rotation and translation information of each optical camera according to the rotation and translation information of each main camera, this rotation and translation information being the target external parameters of the optical cameras.
  • A computer device includes a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause one or more processors to perform the steps of the above method for calibration while scanning the field in a large-space environment.
  • A storage medium stores computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the above method for calibration while scanning the field in a large-space environment.
  • FIG. 1 is a flowchart of a method for calibration while scanning the field in a large-space environment according to an embodiment of the application;
  • FIG. 2 is a schematic structural diagram of the calibration rod in an embodiment of the application;
  • FIG. 3 is a structural diagram of a device for calibration while scanning the field in a large-space environment in an embodiment of the application.
  • A method for calibration while scanning the field in a large-space environment includes the following steps.
  • Step S1, acquire initial data: acquire the camera serial numbers of multiple optical cameras, collect the multiple frames of data captured by each optical camera of the swinging calibration rod, and classify the frames containing coordinate data frame by frame, each frame yielding multiple corresponding initial data, each initial datum including a camera serial number and the corresponding coordinate data.
  • The calibration rod of this step is a two-dimensional calibration rod carrying multiple marker points coated with a highly reflective material so that the optical cameras can recognize them. The positional relationship of the marker points is preset, so the positional relationship data between the marker points can be obtained directly. As shown in FIG. 2, five marker points 11 are provided on the calibration rod 1.
  • The data structure of this step takes the coordinate data collected by each optical camera as the bottom layer; the coordinate data forms one frame of data of each optical camera in the current frame, and the current-frame data of all optical cameras is finally integrated into one complete frame of data. While the calibration rod is swung, each complete frame of data of all optical cameras is recorded as a Frame; each complete frame Frame contains the current-frame initial data of each optical camera, recorded as a View; and each per-camera View contains the camera serial number Camera_id and the coordinate data Points.
  • Not every optical camera captures the calibration rod in every frame, so each View does not contain the current-frame data of all optical cameras, but only the current-frame data of the optical cameras that have coordinate data. The advantage of this design is that it saves a great deal of storage space.
  • The data finally collected is thus many frames of Frame data recorded while the calibration rod was swung, each View containing the two-dimensional spatial coordinate data Points of each optical camera Camera_id for the current frame; a sketch of this layout follows below.
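  • The Frame/View/Points hierarchy maps naturally onto a small set of record types. Below is a minimal sketch of that storage layout in Python; the type names mirror the Camera_id/Points terminology above, but everything else (field names, the list-based recording) is an illustrative assumption, not the patent's actual implementation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Point2D = Tuple[float, float]  # one 2D marker detection in image coordinates

@dataclass
class View:
    """Current-frame data of ONE optical camera: its serial number plus
    the 2D coordinates it detected (the Camera_id / Points of the text)."""
    camera_id: int
    points: List[Point2D]

@dataclass
class Frame:
    """One complete frame: only cameras that actually detected coordinate
    data this frame get a View, which is what saves storage space."""
    views: List[View] = field(default_factory=list)

    def add_observation(self, camera_id: int, points: List[Point2D]) -> None:
        # Cameras that saw nothing are simply never added to the frame.
        if points:
            self.views.append(View(camera_id, list(points)))

# The swept-field recording is just a growing list of frames.
recording: List[Frame] = []
```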
  • Step S2, determine the internal and external parameters of the optical cameras: among the multiple initial data, when at least two optical cameras have collected a preset number of frames of coordinate data, determine the main camera among the multiple optical cameras according to the initial data, and obtain the target external parameters of each optical camera according to the main camera; acquire the hardware resolution of each optical camera, and obtain the target internal parameters of each optical camera according to the hardware resolution.
  • After the multiple initial data have been obtained in step S1, once at least two cameras have collected the preset number of frames of coordinate data, for example 500 frames, another thread is started, namely the subsequent regional calibration algorithm process.
  • In one embodiment, in step S2, determining the main camera among the multiple optical cameras when at least two optical cameras have collected the preset number of frames of coordinate data includes:
  • Step S201, periodically check whether the initial data contains the preset number of frames of coordinate data collected by at least two optical cameras; if not, continue the step of collecting the multiple frames of data captured by each optical camera of the swinging calibration rod.
  • The regional calibration algorithm process is started only when at least two optical cameras have collected the preset number of frames of coordinate data; otherwise the initial data collection of step S1 continues.
  • Step S202, if the preset number of frames of coordinate data collected by at least two optical cameras is present, eliminate the initial data in which the number of coordinate points in a frame's coordinate data is less than a preset minimum number, and eliminate the initial data in which the number of coordinate points in a frame's coordinate data is greater than a preset maximum number, each frame yielding the filtered initial data.
  • Because the calibration rod is constantly swung during the data collection of step S1, not every frame of data is complete, that is, contains the coordinates of the multiple marker points on the rod; and even when multiple coordinate points are present, it cannot be concluded that they are the marker points of the calibration rod. This step therefore checks the coordinate data of every collected frame. The preset number of this step equals the number of marker points on the calibration rod. If the rod carries five marker points, the preset minimum number is 5: first, the data whose per-frame coordinate data contains fewer than 5 coordinate points is excluded; then, since the positional relationship of the five marker points on the rod is fixed, the remaining coordinate data of each frame can be checked for five coordinate points belonging to the calibration rod. If they exist, these five coordinates are recorded; if not, the "incomplete" coordinate data of that frame is removed. This is the first round of elimination, applied to coordinate data with fewer than 5 points.
  • In the coordinate data remaining after the first round of elimination, it is then judged whether the number of coordinate points is greater than the preset maximum number, which may be 500; in that case the coordinate points obtained by the current optical camera in this frame are considered to contain too much clutter and an excess of useless data, and the data is removed in a second round of elimination.
  • Step S203, obtain the positional relationship data of the multiple marker points on the calibration rod; in the filtered initial data, check whether the coordinate data contains multiple coordinate points matching the positional relationship data. If it does, record these coordinate points together with the corresponding camera serial number to form valid data; otherwise eliminate the initial data. Each frame thus yields multiple corresponding valid data, and the camera serial number with the most valid data is determined to be the main camera.
  • Since the positions of the marker points on the calibration rod are known and fixed, position calculations can be performed on the coordinate points in each camera's per-frame coordinate data according to this positional relationship data, finally deciding whether the data contains coordinate points matching it. For example, the five marker points in FIG. 2 have fixed positional relationship data: among the coordinate points one checks for a line segment formed by three coordinate points and another line segment formed by three coordinate points, such that the middle coordinate points of the two segments coincide and the two segments are perpendicular. If five coordinate points with this positional relationship exist, the coordinate data is considered to contain the coordinate points of the positional relationship data (see the sketch after this passage).
  • To determine the external parameters of the optical cameras, this step first needs to determine one main camera and its external parameters; the external parameters of the other associated optical cameras are then computed through the main camera. When determining the main camera, the camera serial number that appears most often in the coordinate data across all valid data of all frames is identified, and the optical camera with that serial number is recorded as the main camera.
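  • The geometric test just described (two three-point segments sharing their middle point, mutually perpendicular) can be brute-forced over five candidate points. The sketch below is one illustrative way to do it in Python; the tolerance value, the treatment of the shared point as lying anywhere strictly between the endpoints, and the implicit assumption that perpendicularity survives projection well enough to threshold are all assumptions of the sketch, not statements from the patent.

```python
import itertools
import numpy as np

def _on_segment(p, a, b, tol):
    """True if p lies approximately on the open segment a-b."""
    ab = b - a
    L = np.linalg.norm(ab)
    if L < 1e-9:
        return False
    d = ab / L
    w = p - a
    off_line = abs(d[0] * w[1] - d[1] * w[0])  # perpendicular distance to the line
    t = np.dot(w, d) / L                       # fractional position along a-b
    return off_line < tol * L and 0.0 < t < 1.0

def is_rod_pattern(pts, tol=0.05):
    """Rough check that five 2D points form the rod pattern described above."""
    pts = np.asarray(pts, dtype=float)
    if pts.shape != (5, 2):
        return False
    for c in range(5):                                  # candidate shared middle point
        rest = [i for i in range(5) if i != c]
        for a1, a2 in itertools.combinations(rest, 2):  # endpoints of segment 1
            b1, b2 = [i for i in rest if i not in (a1, a2)]
            if not (_on_segment(pts[c], pts[a1], pts[a2], tol)
                    and _on_segment(pts[c], pts[b1], pts[b2], tol)):
                continue
            u = pts[a2] - pts[a1]
            v = pts[b2] - pts[b1]
            u = u / np.linalg.norm(u)
            v = v / np.linalg.norm(v)
            if abs(np.dot(u, v)) < tol:                 # segments roughly perpendicular
                return True
    return False
```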
  • In one embodiment, in step S2, obtaining the target external parameters of each optical camera according to the main camera includes:
  • Step S211, define the rotation information of the main camera as an identity matrix and the translation information of the main camera as a zero matrix; the identity matrix and the zero matrix are the target external parameters of the main camera.
  • After the main camera has been determined, its rotation information can be set to the identity matrix and its translation information to the zero matrix. At this point the main camera has the greatest degree of association with the other optical cameras, and the external parameters of the other cameras are their rotation and translation relative to the main camera.
  • Step S212, match the other optical cameras against the main camera according to the multiple valid data of each frame, record the optical cameras that have matching data as target cameras, and obtain the rotation and translation information of each target camera through the rotation and translation information of the main camera; this rotation and translation information constitutes the target external parameters of the target camera.
  • Before the external parameters of the other optical cameras are computed, each of them must be data-matched against the main camera: sufficient matching data is found in the valid data, the essential matrix is obtained with the eight-point method from the matching data and the external parameters of the main camera, singular value decomposition (SVD) is then performed, and the external parameters of the target camera are finally obtained. The target external parameters of a target camera are obtained from the main camera through the following steps.
  • Step S21201, search the valid data frame by frame for the camera serial number of the main camera; if a frame does not contain the main camera's serial number, continue searching the next frame.
  • Step S21202, if a frame contains the main camera's serial number, continue checking camera by camera whether the coordinate data of the other optical cameras in the valid data provides enough matching data: if the valid data of at least a preset number of frames contains the coordinate data of both the main camera and the current optical camera, the main camera and the current camera are considered to have sufficient matching data.
  • Here the preset number of frames is 50; that is, if the valid data of more than 50 frames contains both the coordinate data of the main camera and that of the current optical camera, there is considered to be sufficient matching data between the main camera and the current optical camera, and the valid data of those frames is the matching data of the two.
  • Step S21203, if a camera has no matching data, continue searching the next optical camera; if it has matching data, mark the optical camera as a target camera. Each frame finally yields multiple target cameras and their corresponding coordinate data.
  • Step S21204, in any frame of the matching data, obtain the coordinate data of the main camera and of the target camera respectively, obtain the positional relationship data of the multiple marker points on the calibration rod and, according to this positional relationship data, match the coordinate data of the main camera with that of the target camera to obtain multiple pairs of two-dimensional spatial features; construct a system of linear equations from these feature pairs and the parameters of the two optical cameras, and solve it for the essential matrix.
  • This step obtains the essential matrix based on the eight-point method. Before solving for the essential matrix, the coordinate data must be matched. Because the positional relationship data of the marker points is fixed, and the coordinate data of the main camera and of the target camera in the matching data both necessarily contain coordinate points with the same positional relationship as the marker points, multiple pairs of two-dimensional spatial features can be obtained from every frame of matching data according to the positional relationship data. With five marker points on the calibration rod, each frame of matching data yields five pairs of two-dimensional spatial features.
  • Step S21205, decompose the essential matrix with a singular value decomposition algorithm to obtain the rotation and translation information of the target camera.
  • The essential matrix E is a 3*3 matrix. According to the formula E = U*W*VT, E can be decomposed into three 3*3 matrices U, W and VT, where U is the left singular matrix, V the right singular matrix, VT the transpose of V, and W the singular value matrix, which has values (the singular values) only on its diagonal, all other elements being 0. Two auxiliary matrices M and N are defined, where (in the standard formulation):
    M = [ 0 -1  0 ]      N = [ 0  1  0 ]
        [ 1  0  0 ]          [-1  0  0 ]
        [ 0  0  1 ]          [ 0  0  0 ]
  • The rotation matrix of the target camera relative to the main camera then has two possible values, RA = U*MT*VT or RB = U*M*VT, and the translation of the target camera relative to the main camera likewise has two possible values, TA = U*N*UT or TB = -U*N*UT, where MT is the transpose of M and UT the transpose of U. The pairwise combinations give four possibilities in total, but only one combination makes the depth of the three-dimensional points formed from the matched two-dimensional feature pairs positive; that combination is the rotation matrix and translation matrix of the target camera.
  • In one embodiment, after step S212, the method further includes:
  • jointly optimizing, through iterative optimization, the target internal parameters and target external parameters of the main camera, the target internal parameters and target external parameters of the target camera, and all the matching data of the main camera and the target camera, the cost function of the iterative optimization being the reprojection error, to obtain the optimized target internal and external parameters of the main camera and of the target camera. The iterative optimization process is as follows.
  • Transform the world coordinate p into camera coordinates:
    P' = R*p + T = {X, Y, Z}
    where R and T are the external parameters of the optical camera.
  • Project P' onto the normalized plane to obtain the normalized coordinates:
    Pc = {u, v, 1} = {X/Z, Y/Z, 1}
  • Taking the distortion of the normalized coordinates into account, apply the distortion model:
    u' = u*(1 + k1*r*r + k2*r*r*r*r)
    v' = v*(1 + k1*r*r + k2*r*r*r*r)
    where k1, k2 and r are all distortion coefficients.
  • Compute the pixel coordinates M(Us, Vs):
    Us = fx*u' + cx
    Vs = fy*v' + cy
    where fx, fy, cx and cy are the internal parameters of the optical camera.
  • Let N(U0, V0) be the pixel coordinates detected by the optical camera; the reprojection error e of the world coordinate p is:
    e = ||N - M||^2
  • Substituting all the matching data of the main camera and the target camera, the overall cost function is the sum of the reprojection errors of all matched points:
    min Σ_i ||N_i - M_i||^2
  • Solving this least-squares formula is equivalent to simultaneously adjusting the internal and external parameters of the optical cameras as well as the world coordinate points, which yields very high calibration accuracy. As the number of iterations grows, the overall error keeps decreasing; when the error falls within a preset threshold range that satisfies the requirements, the computation is stopped and the optimized camera internal parameters, external parameters and other calibration information are output, completing the entire iterative optimization process.
  • To obtain accurate internal and external parameters, this embodiment substitutes all the matching data corresponding to the main camera and the target camera, together with the target internal and external parameters of the two cameras, into the optimization process; the cost function of the optimization is the reprojection error, and iterative optimization finally yields relatively accurate camera internal and external parameters.
  • Step S213, record the target camera whose target external parameters have been obtained as a main camera, and repeat the previous operation with the other optical cameras that have not yet been matched, until the target external parameters of all optical cameras are obtained.
  • When the other optical cameras are matched against the main camera in step S212, an optical camera may not have sufficient matching data with the main camera; in that case the main camera must be redefined, and the matching and external parameter computation repeated for the optical cameras that failed to match. This step defines an optical camera whose external parameters have already been computed as another main camera, and repeats the operations of step S212 between this main camera and the unmatched optical cameras, until all optical cameras have obtained their external parameters.
  • In one embodiment, in step S2, acquiring the hardware resolution of each optical camera and obtaining the target internal parameters of each optical camera according to the hardware resolution includes:
  • Step S221, the target internal parameters of the optical camera include the imaging length, the imaging width and the focal length; acquire the hardware resolution of the optical camera, record the larger value of the hardware resolution as the imaging length of the optical camera and the smaller value as the imaging width of the optical camera.
  • Because this step runs the regional calibration algorithm process only on the preset number of frames of coordinate data, which may cover only a certain part of an optical camera's view, that data cannot be used to initialize the internal parameters; this step therefore uses the hardware resolution directly to determine the target internal parameters. For example, with a hardware resolution of 2048*1024, the imaging length of the optical camera is 2048 and the imaging width is 1024.
  • Step S222, the focal length of the optical camera is obtained by the following formulas.
  • Let the imaging length be W and the imaging width be H; the imaging length ratio alpha and the imaging width ratio beta are:
    alpha = W/(W+H)
    beta = H/(W+H)
  • The value fx of the focal length of the optical camera along the imaging length direction and the value fy along the imaging width direction are:
    fx = W*0.5/alpha
    fy = H*0.5/beta
  • where fx and fy are the focal lengths of the optical camera. Having obtained the imaging length and imaging width, the focal length of the optical camera follows from these formulas (a worked sketch follows below).
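  • Worked through, the two formulas reduce to fx = fy = (W+H)/2. The short Python sketch below just applies them; the principal-point initialization at the image center is an added assumption of the sketch, not stated in this passage.

```python
def initial_intrinsics(resolution):
    """Initialize target internal parameters from the hardware resolution.
    Example: 2048*1024 -> W=2048, H=1024, alpha=2/3, beta=1/3,
    fx = 2048*0.5/(2/3) = 1536, fy = 1024*0.5/(1/3) = 1536."""
    W, H = max(resolution), min(resolution)  # imaging length / imaging width
    alpha = W / (W + H)
    beta = H / (W + H)
    fx = W * 0.5 / alpha                     # algebraically (W + H) / 2
    fy = H * 0.5 / beta                      # algebraically (W + H) / 2
    cx, cy = W / 2.0, H / 2.0                # assumption: principal point at center
    return {"width": W, "height": H, "fx": fx, "fy": fy, "cx": cx, "cy": cy}

print(initial_intrinsics((2048, 1024)))      # fx = fy = 1536.0
```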
  • In one embodiment, after step S2, the method further includes:
  • inputting the target internal parameters, target external parameters and all collected coordinate data of all optical cameras into a preset bundle adjustment model, the output of which is the optimized target internal and external parameters of all optical cameras.
  • After step S2, relatively accurate internal and external parameters of all optical cameras have been obtained; but because these parameters were all computed by pairwise matching of optical cameras, without considering the overall relationship of all optical cameras, they require an overall optimization. This step uses the bundle adjustment model (Bundle_Adjustment, BA for short) from the Ceres nonlinear optimization library. The goal of the entire BA is to minimize the reprojection error; the input data of BA is the coordinate data collected by all optical cameras, with the coordinate data already matched, together with the internal and external parameters of all cameras, and the output of BA is high-precision camera internal parameter information.
  • Step S3, calibration accuracy feedback: calculate the reprojection error of each optical camera according to its target internal parameters, target external parameters and all collected coordinate data, and record the reprojection error as the calibration accuracy of the optical camera. If the calibration accuracy of any optical camera is greater than the preset accuracy threshold, repeat the first step for the optical cameras whose calibration accuracy exceeds the threshold, until the calibration accuracy of all optical cameras is no greater than the threshold.
  • The reprojection error is the error obtained by comparing the 2D pixel coordinates (the observed camera coordinates) with the position obtained by projecting the computed 3D point according to the current camera internal and external parameter information. For example, suppose the two-dimensional image point coordinates are A(a1, a2), the three-dimensional space point coordinates are P(p1, p2, p3), the rotation matrix of camera a is Rcam and its translation matrix is Tcam. The reprojection coordinates of the three-dimensional point P are obtained as follows: first compute P' = P*Rcam + Tcam, where P'(p1', p2', p3') is a three-dimensional coordinate; normalizing P' gives the reprojection coordinates of the point in camera a, A'(a1', a2') = (p1'/p3', p2'/p3'). The difference between the camera image coordinates A(a1, a2) and the reprojection coordinates A'(a1', a2') is the reprojection error:
    error = A - A' = (a1 - a1', a2 - a2')
  • Once the internal and external parameters of all optical cameras are available, the reprojection error of each optical camera can be calculated from these parameters and the preset number of frames of coordinate data, and recorded as that camera's calibration accuracy. This calibration accuracy can be fed back to the user through an interactive interface; the user can decide, according to the current calibration accuracy of each optical camera, whether to end the calibration, and if the calibration accuracy of all cameras reaches the ideal state, the calibration computation can be ended. It is also possible to compare the calibration accuracy directly with the accuracy threshold to determine whether to end the calibration computation. If another round of the regional calibration algorithm process is needed, the optical cameras whose calibration accuracy has not reached the accuracy threshold can be treated as key cameras, and the calibration rod is swung in the areas where the key cameras are located to collect coordinate data.
  • Step S4, overall optimization: filter out the unique main camera from all main cameras, define the rotation information of the unique main camera as an identity matrix and the translation information of that main camera as a zero matrix, obtain the rotation and translation information of each main camera according to the rotation and translation information of the unique main camera, and obtain the rotation and translation information of each optical camera according to the rotation and translation information of each main camera; this rotation and translation information constitutes the target external parameters of the optical cameras.
  • When the calibration accuracy of all optical cameras reaches an ideal state, for example falls below the preset accuracy threshold, the calibration process can be ended; at this point all previous regional calibration information is collected to perform an overall optimization of the internal and external parameters of all optical cameras. First, the main camera information of all regional calibrations is compared to determine the unique main camera among all optical cameras, whose rotation information is defined as the identity matrix and whose translation information is defined as the zero matrix. Then, taking the unique main camera as the reference, the rotation and translation information of all regional main cameras is converted, and from it the rotation and translation information of all optical cameras.
  • In one embodiment, step S4 includes:
  • Step S401, obtain the multiple main cameras determined according to the multiple initial data, and take the main camera with the largest number of appearances in the initial data as the candidate main camera.
  • Step S402, if any other optical camera is connected to both the candidate main camera and the other main cameras, record the optical camera with these simultaneous connections as the unique main camera.
  • When judging whether a connection exists between cameras, the number of calibration rod sightings shared between cameras in the same frames of data can be used: if the number of times two cameras both see the calibration rod is greater than a preset threshold, a connection exists between the two cameras.
  • Step S403, if there are multiple other optical cameras connected to both the candidate main camera and the other main cameras, select the optical camera with the smallest calibration accuracy value as the unique main camera.
  • This embodiment determines the unique main camera according to each main camera of the regional calibration processes, covering the two special cases above; the unique main camera finally determined serves as the reference for converting the other optical cameras.
  • In step S4, the rotation and translation information of each main camera is obtained according to the rotation and translation information of the unique main camera; the computation is the same as obtaining the rotation and translation information of each optical camera according to that of each main camera, namely as follows: first, the cameras that have sufficient matching information with the main camera are determined; these cameras are then matched with the main camera pairwise, one after another. In each pairwise computation, the essential matrix is first obtained with the eight-point method, and SVD decomposition yields the initial rotation and translation information. To obtain accurate internal and external parameter data, all the matching coordinate data of the two cameras and their initial internal and external parameters are substituted together into the optimization process, whose cost function is the reprojection error; iterative optimization then yields relatively accurate camera internal and external parameters.
  • After step S4, the fused internal and external parameters of all optical cameras have been obtained, and an overall optimization of the internal and external parameters of all optical cameras is also performed, using the same optimization method as in step S2: the target internal parameters, target external parameters and all collected coordinate data of all optical cameras are input into the bundle adjustment model, whose output is the optimized target internal and external parameters of all optical cameras. This step uses the bundle adjustment model (Bundle_Adjustment, BA for short) from the Ceres nonlinear optimization library; the goal of the entire BA is to minimize the reprojection error, its input is the matched coordinate data collected by all optical cameras together with the internal and external parameters of all cameras, and its output is high-precision camera internal parameter information.
  • Step S5, calibrate the center point: high-precision internal and external parameters of all optical cameras have been obtained, but the target external parameters of these cameras are all rotations and translations relative to the main camera, whereas in practical applications the target external parameters should be relative to the center point of the field. The two-dimensional calibration rod therefore needs to be placed at the center point of the field.
  • Step S501, define the height of the multiple marker points of the calibration rod as zero, acquire the position coordinate information of the multiple marker points, and obtain the three-dimensional space coordinates of the marker points from this position coordinate information.
  • In this step the calibration rod is treated as a rigid body whose marker point coordinate positions are known; defining the height as 0 immediately gives the three-dimensional space coordinates of the marker points. If the rod carries five marker points, their three-dimensional coordinates are recorded as P = {p1, ..., p5}.
  • Step S502, compute the three-dimensional space coordinates of the unique main camera according to the target external parameters of the unique main camera.
  • From the data collected by the unique main camera and the external parameter data obtained by the optimization of step S4, the three-dimensional space coordinates under the unique main camera's parameters can be computed, recorded as P' = {P'1, ..., P'5}; the problem to be solved thus becomes a 3D-3D pose estimation.
  • Step S503, substitute the three-dimensional space coordinates of the multiple marker points and the three-dimensional space coordinates of the unique main camera into the following equation, and solve for the Euclidean transformation rotation matrix and translation matrix by iterating closest points:
    P = R*P' + T
  • where P is the three-dimensional space coordinates of the multiple marker points, P' the three-dimensional space coordinates of the optical camera, R the Euclidean transformation rotation matrix, and T the translation matrix. This step can use the Iterative Closest Point (ICP) method to solve for R and T, performing the ICP solution by the SVD decomposition method, which yields the pose information of the current calibration rod.
  • Step S504, the Euclidean transformation rotation matrix and the translation matrix are the pose information of the calibration rod. Record the Euclidean transformation rotation matrix in the rod's pose information as R and the translation matrix in the rod's pose information as T, and record the target external parameters of any optical camera as R0 and T0; after the rod pose information is applied to that optical camera, the rotation matrix in the camera's target external parameters becomes R*R0 and the translation matrix becomes R*T0 + T.
  • After the calibration rod pose information computed in step S503 has been applied to the target external parameters of each optical camera, the external parameter data of each optical camera relative to the center point of the field is obtained.
  • In the method for calibration while scanning the field in a large-space environment of this embodiment, multiple optical cameras capture the marker points of the moving calibration rod and a preset number of frames of coordinate data is acquired; this coordinate data is used to perform one round of regional calibration. If the result is unsatisfactory, a new round of calibration is started and fused with the regional calibration results of the previous round, so that the system receives real-time calibration feedback during the field sweep, reducing the calibration time of traditional calibration methods. Compared with traditional calibration methods, the calibration technique of the present application has very clear advantages: it provides high-precision computation results while saving a great deal of manpower and material resources.
  • In one embodiment, a device for calibration while scanning the field in a large-space environment is proposed. As shown in FIG. 3, the device includes:
  • an initial data acquisition module, used to acquire the camera serial numbers of multiple optical cameras, collect the multiple frames of data captured by each optical camera of the swinging calibration rod, and classify the frames containing coordinate data frame by frame, each frame yielding multiple corresponding initial data, each initial datum including a camera serial number and the corresponding coordinate data;
  • a parameter determination module, used to determine, when at least two optical cameras have collected a preset number of frames of coordinate data among the initial data, the main camera among the multiple optical cameras according to the initial data, obtain the target external parameters of each optical camera according to the main camera, acquire the hardware resolution of each optical camera, and obtain the target internal parameters of each optical camera according to the hardware resolution;
  • a calibration accuracy feedback module, used to calculate the reprojection error of each optical camera according to its target internal parameters, target external parameters and all collected coordinate data, record the reprojection error as the camera's calibration accuracy and, if the calibration accuracy of any optical camera is greater than the preset accuracy threshold, repeat the first step for the cameras whose calibration accuracy exceeds the threshold until the calibration accuracy of all optical cameras is no greater than the threshold;
  • an overall optimization module, used to filter out the unique main camera from all main cameras, define the rotation information of the unique main camera as an identity matrix and the translation information of that main camera as a zero matrix, obtain the rotation and translation information of each main camera according to the rotation and translation information of the unique main camera, and obtain the rotation and translation information of each optical camera according to the rotation and translation information of each main camera, this rotation and translation information being the target external parameters of the optical cameras.
  • In one embodiment, the parameter determination module is further configured to:
  • periodically check whether the initial data contains the preset number of frames of coordinate data collected by at least two optical cameras, and if not, continue the step of collecting the multiple frames of data captured by each optical camera of the swinging calibration rod;
  • if it does, eliminate the initial data in which the number of coordinate points in a frame's coordinate data is less than the preset minimum number, and eliminate the initial data in which the number of coordinate points is greater than the preset maximum number, each frame yielding the filtered initial data;
  • obtain the positional relationship data of the multiple marker points on the calibration rod and, in the filtered initial data, check whether the coordinate data contains multiple coordinate points matching the positional relationship data; if it does, record these coordinate points and the corresponding camera serial number to form valid data, otherwise eliminate the initial data; each frame thus yields multiple corresponding valid data, and the camera serial number with the most valid data is determined to be the main camera.
  • In one embodiment, the parameter determination module is further configured to:
  • define the rotation information of the main camera as an identity matrix and the translation information of the main camera as a zero matrix, the identity matrix and the zero matrix being the target external parameters of the main camera;
  • match the other optical cameras against the main camera according to the multiple valid data of each frame, record the optical cameras that have matching data as target cameras, and obtain the rotation and translation information of each target camera through the rotation and translation information of the main camera, this rotation and translation information being the target external parameters of the target camera; record the target camera whose target external parameters have been obtained as a main camera, and repeat the previous operation with the other optical cameras not yet matched, until the target external parameters of all optical cameras are obtained.
  • In one embodiment, the calibration accuracy feedback module is further configured to:
  • treat the target internal parameters of the optical camera as including the imaging length, the imaging width and the focal length; acquire the hardware resolution of the optical camera, record the larger value of the hardware resolution as the imaging length and the smaller value as the imaging width; and obtain the focal length of the optical camera by the following formulas.
  • Let the imaging length be W and the imaging width be H; the imaging length ratio alpha and the imaging width ratio beta are:
    alpha = W/(W+H)
    beta = H/(W+H)
  • The value fx of the focal length of the optical camera along the imaging length direction and the value fy along the imaging width direction are:
    fx = W*0.5/alpha
    fy = H*0.5/beta
  • where fx and fy are the focal lengths of the optical camera.
  • In one embodiment, the overall optimization module is further configured to:
  • obtain the multiple main cameras determined according to the multiple initial data, and take the main camera with the largest number of appearances in the initial data as the candidate main camera;
  • if any other optical camera is connected to both the candidate main camera and the other main cameras, record the optical camera with these simultaneous connections as the unique main camera;
  • if there are multiple other optical cameras connected to both the candidate main camera and the other main cameras, select the optical camera with the smallest calibration accuracy value as the unique main camera.
  • In one embodiment, the device for calibration while scanning the field in a large-space environment is further configured to:
  • after determining the main camera among the multiple optical cameras according to the multiple initial data, obtaining the target external parameters of each optical camera according to the main camera, acquiring the hardware resolution of each optical camera and obtaining the target internal parameters of each optical camera according to the hardware resolution, input the target internal parameters, target external parameters and all collected coordinate data of all optical cameras into the preset bundle adjustment model, the output of which is the optimized target internal and external parameters of all optical cameras;
  • after obtaining the rotation and translation information of each main camera according to the rotation and translation information of the unique main camera, obtaining the rotation and translation information of each optical camera according to the rotation and translation information of each main camera, and taking this rotation and translation information as the target external parameters of the optical cameras, input the target internal parameters, target external parameters and all collected coordinate data of all optical cameras into the bundle adjustment model, the output of which is the optimized target internal and external parameters of all optical cameras.
  • In one embodiment, the device for calibration while scanning the field in a large-space environment is further configured to:
  • define the height of the multiple marker points of the calibration rod as zero, acquire the position coordinate information of the multiple marker points, and obtain the three-dimensional space coordinates of the marker points from this position coordinate information;
  • compute the three-dimensional space coordinates of the unique main camera according to the target external parameters of the unique main camera;
  • substitute the three-dimensional space coordinates of the multiple marker points and of the unique main camera into the following equation, and solve for the Euclidean transformation rotation matrix and translation matrix by iterating closest points:
    P = R*P' + T
  • where P is the three-dimensional space coordinates of the multiple marker points, P' the three-dimensional space coordinates of the optical camera, R the Euclidean transformation rotation matrix, and T the translation matrix;
  • the Euclidean transformation rotation matrix and the translation matrix are the pose information of the calibration rod; record the Euclidean transformation rotation matrix in the rod's pose information as R and the translation matrix as T, record the target external parameters of any optical camera as R0 and T0, and after the rod pose information is applied to that optical camera, the rotation matrix in the camera's target external parameters is R*R0 and the translation matrix is R*T0 + T.
  • In one embodiment, an equipment for calibration while scanning the field in a large-space environment includes a memory, a processor, and a calibration-while-scanning program stored in the memory and runnable on the processor; when the program is executed by the processor, the steps of the method for calibration while scanning the field in a large-space environment of the foregoing embodiments are implemented.
  • In one embodiment, a computer-readable storage medium stores a calibration-while-scanning program for a large-space environment; when the program is executed by a processor, the steps of the method for calibration while scanning the field in a large-space environment of the above embodiments are implemented. The storage medium may be a volatile storage medium, or it may be a non-volatile storage medium.

Landscapes: Engineering & Computer Science; Computer Vision & Pattern Recognition; Physics & Mathematics; General Physics & Mathematics; Theoretical Computer Science; Studio Devices

Abstract

A method, device, equipment and storage medium for calibration while scanning the field in a large-space environment. The method includes: acquiring the camera serial numbers of multiple optical cameras and collecting multiple frames of data, each frame yielding multiple corresponding initial data; among the multiple initial data, when at least two optical cameras have collected a preset number of frames of coordinate data, obtaining the target external parameters and target internal parameters of each optical camera according to the initial data; calculating the reprojection error of each optical camera and recording it as the calibration accuracy; and, once the calibration accuracy of all optical cameras is no greater than an accuracy threshold, performing an overall optimization of the target external parameters of all optical cameras. This calibrate-while-scanning technique not only reduces the calibration time of traditional calibration methods, but its real-time feedback also saves a great deal of manpower and material resources and makes the system run more smoothly and conveniently.

Description

Method, device, equipment and storage medium for calibration while scanning the field in a large-space environment

Technical Field

This application relates to the field of computer vision technology, and in particular to a method, device, equipment and storage medium for calibration while scanning the field in a large-space environment.

Background

As machine vision applications become increasingly widespread, the demand for multi-camera vision systems in large-space environments keeps growing, the main direction being high-precision positioning and tracking within a large space. To locate and track an object, the cameras must first be calibrated. In an optical motion capture system, the calibration process requires continuously swinging a calibration rod in the middle of the field so as to record the data collected by all cameras; this data collection process is called sweeping the field. In a multi-camera environment, calibration must determine not only the parameters of each camera but also the positional relationships between cameras, so the calibration process has to collect a large amount of camera data and optimize it with complex algorithms before high calibration accuracy can be achieved.

Such a sweep-and-calibrate procedure brings users considerable inconvenience. First, the calibration process requires a great deal of data and complex computation and takes too long. Second, the calibration algorithm only starts after the data has been collected; to ensure the accuracy of the computation, the user must collect a large amount of data in one pass, and this data contains much useless, redundant data, which in turn causes the algorithm's complexity and running-time problems. Third, if the result of one calibration run does not meet the user's expectations, the user must sweep the field again, wasting a great deal of manpower and material resources.
Summary

The main purpose of this application is to provide a method, device, equipment and storage medium for calibration while scanning the field in a large-space environment, aiming to solve the technical problem that calibrating multiple optical cameras in a large-space environment is time-consuming and labor-intensive.

To achieve the above purpose, this application provides a method for calibration while scanning the field in a large-space environment, the method including the following steps:

acquiring the camera serial numbers of multiple optical cameras, collecting the multiple frames of data captured by each optical camera of the swinging calibration rod, and classifying the frames containing coordinate data frame by frame, each frame yielding multiple corresponding initial data, each initial datum including a camera serial number and the corresponding coordinate data;

among the multiple initial data, when at least two optical cameras have collected a preset number of frames of coordinate data, determining the main camera among the multiple optical cameras according to the multiple initial data, and obtaining the target external parameters of each optical camera according to the main camera; acquiring the hardware resolution of each optical camera, and obtaining the target internal parameters of each optical camera according to the hardware resolution;

calculating the reprojection error of each optical camera according to its target internal parameters, target external parameters and all collected coordinate data, and recording the reprojection error as the calibration accuracy of the optical camera; if the calibration accuracy of any optical camera is greater than a preset accuracy threshold, repeating the first step for the optical cameras whose calibration accuracy is greater than the accuracy threshold, until the calibration accuracy of all optical cameras is no greater than the accuracy threshold;

filtering out the unique main camera from all main cameras, defining the rotation information of the unique main camera as an identity matrix and the translation information of that main camera as a zero matrix, obtaining the rotation and translation information of each main camera according to the rotation and translation information of the unique main camera, and obtaining the rotation and translation information of each optical camera according to the rotation and translation information of each main camera, the rotation information and translation information being the target external parameters of the optical cameras.

This application provides a device for calibration while scanning the field in a large-space environment, including:

an initial data acquisition module, used to acquire the camera serial numbers of multiple optical cameras, collect the multiple frames of data captured by each optical camera of the swinging calibration rod, and classify the frames containing coordinate data frame by frame, each frame yielding multiple corresponding initial data, each initial datum including a camera serial number and the corresponding coordinate data;

a parameter determination module, used to determine, among the multiple initial data and when at least two optical cameras have collected a preset number of frames of coordinate data, the main camera among the multiple optical cameras according to the multiple initial data, obtain the target external parameters of each optical camera according to the main camera, acquire the hardware resolution of each optical camera, and obtain the target internal parameters of each optical camera according to the hardware resolution;

a calibration accuracy feedback module, used to calculate the reprojection error of each optical camera according to its target internal parameters, target external parameters and all collected coordinate data, record the reprojection error as the calibration accuracy of the optical camera and, if the calibration accuracy of any optical camera is greater than the preset accuracy threshold, repeat the first step for the optical cameras whose calibration accuracy is greater than the accuracy threshold, until the calibration accuracy of all optical cameras is no greater than the accuracy threshold;

an overall optimization module, used to filter out the unique main camera from all main cameras, define the rotation information of the unique main camera as an identity matrix and the translation information of that main camera as a zero matrix, obtain the rotation and translation information of each main camera according to the rotation and translation information of the unique main camera, and obtain the rotation and translation information of each optical camera according to the rotation and translation information of each main camera, the rotation information and translation information being the target external parameters of the optical cameras.

A computer device includes a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause one or more processors to perform the steps of the above method for calibration while scanning the field in a large-space environment.

A storage medium stores computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the above method for calibration while scanning the field in a large-space environment.
Brief Description of the Drawings

Various other advantages and benefits will become clear to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered a limitation of this application.

FIG. 1 is a flowchart of a method for calibration while scanning the field in a large-space environment in an embodiment of this application;

FIG. 2 is a schematic structural diagram of the calibration rod in an embodiment of this application;

FIG. 3 is a structural diagram of a device for calibration while scanning the field in a large-space environment in an embodiment of this application.

Detailed Description of the Embodiments

To make the purpose, technical solution and advantages of this application clearer, this application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain this application and not to limit it.

Those skilled in the art will understand that, unless specifically stated, the singular forms "a", "an", "the" and "said" used here may also include the plural forms. It should be further understood that the word "comprising" used in the specification of this application refers to the presence of the stated features, integers, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof.
Referring to FIG. 1, which is a flowchart of a method for calibration while scanning the field in a large-space environment in an embodiment of this application, the method includes the following steps.

Step S1, acquire initial data: acquire the camera serial numbers of multiple optical cameras, collect the multiple frames of data captured by each optical camera of the swinging calibration rod, and classify the frames containing coordinate data frame by frame, each frame yielding multiple corresponding initial data, each initial datum including a camera serial number and the corresponding coordinate data.

The calibration rod of this step is a two-dimensional calibration rod carrying multiple marker points coated with a highly reflective material so that they can be recognized by the optical cameras. The positional relationship of the marker points is preset, so the positional relationship data between the marker points can be obtained directly. As shown in FIG. 2, five marker points 11 are provided on the calibration rod 1. During use, the calibration rod 1 is swung within the large space under the multi-camera environment; the optical cameras recognize the marker points on the rod, obtain the two-dimensional spatial coordinate data of every frame, and record and store these coordinate data.

Because the calibration algorithm needs to collect a large amount of data, the data must be clearly organized and stored in a standardized data structure. The data structure of this step takes the coordinate data collected by each optical camera as the bottom layer; the coordinate data forms one frame of data of each optical camera in the current frame, and the current-frame data of all optical cameras is finally integrated into one complete frame of data. First, while the calibration rod is swung, each complete frame of data of all optical cameras is recorded as a Frame; each complete frame Frame then contains the current-frame initial data of each optical camera, recorded as a View; finally, each per-camera View contains the camera serial number Camera_id and the coordinate data Points.

Not every optical camera captures the calibration rod in every frame, that is, not every optical camera has coordinate data in every frame, so each View does not contain the current-frame data of all optical cameras but only that of the optical cameras which have coordinate data. Clearly, the advantage of this design is that it saves a great deal of storage space. With this data structure, the data finally collected consists of many frames of Frame data recorded while the calibration rod was swung, each View containing the two-dimensional spatial coordinate data Points of each optical camera Camera_id for the current frame.
Step S2, determine the internal and external parameters of the optical cameras: among the multiple initial data, when at least two optical cameras have collected a preset number of frames of coordinate data, determine the main camera among the multiple optical cameras according to the multiple initial data, and obtain the target external parameters of each optical camera according to the main camera; acquire the hardware resolution of each optical camera and obtain the target internal parameters of each optical camera according to the hardware resolution.

After the multiple initial data have been obtained in step S1, once at least two cameras have collected the preset number of frames of coordinate data, for example 500 frames, another thread is started, namely the subsequent regional calibration algorithm process.

In one embodiment, in step S2, determining the main camera among the multiple optical cameras according to the multiple initial data, when at least two optical cameras have collected a preset number of frames of coordinate data, includes:

Step S201, periodically check whether the initial data contains the preset number of frames of coordinate data collected by at least two optical cameras; if not, continue the step of collecting the multiple frames of data captured by each optical camera of the swinging calibration rod.

The regional calibration algorithm process is started only when at least two optical cameras have collected the preset number of frames of coordinate data; otherwise the initial data collection process of step S1 continues.

Step S202, if the preset number of frames of coordinate data collected by at least two optical cameras is present, eliminate the initial data in which the number of coordinate points in a frame's coordinate data is less than a preset minimum number, and eliminate the initial data in which the number of coordinate points in a frame's coordinate data is greater than a preset maximum number, each frame yielding the filtered initial data.

Because the calibration rod is constantly swung during the data collection of step S1, not every frame of data is complete, that is, contains the coordinates of the multiple marker points on the rod; and even when multiple coordinate points are present, it cannot be concluded that they are the marker points of the calibration rod. This step therefore needs to check the coordinate data of every collected frame. The preset number of this step equals the number of marker points on the calibration rod. If the rod carries five marker points, the preset minimum number is 5: first, the data whose per-frame coordinate data contains fewer than 5 coordinate points is excluded; then, since the positional relationship of the five marker points on the rod is fixed, the remaining coordinate data can be checked frame by frame for five coordinate points belonging to the calibration rod. If they exist, these five coordinates are recorded; if not, the "incomplete" coordinate data of the frame is removed. This is the first round of elimination, applied to coordinate data with fewer than 5 points.

In the coordinate data remaining after the first round of elimination, it is then judged whether the number of coordinate points is greater than the preset maximum number, which may be 500; in that case the coordinate points obtained by the current optical camera in this frame are considered to contain too much clutter and an excess of useless data, and a second round of elimination is performed.

Step S203, acquire the positional relationship data of the multiple marker points on the calibration rod; in the filtered initial data, check whether the coordinate data contains multiple coordinate points matching the positional relationship data. If it does, record these coordinate points and the corresponding camera serial number to form valid data; otherwise eliminate the initial data. Each frame thus yields multiple corresponding valid data, and the camera serial number with the most valid data is determined to be the main camera.

Since the positions of the marker points on the calibration rod are known and fixed, position calculations can be performed on the coordinate points in each camera's coordinate data for every frame according to this positional relationship data, finally deciding whether the frame contains multiple coordinate points matching the positional relationship data. For example, the five marker points in FIG. 2 have fixed positional relationship data: it is checked whether, among the coordinate points, there is one line segment formed by three coordinate points and another line segment formed by three coordinate points such that the middle coordinate points of the two segments coincide and the two segments are perpendicular. If five coordinate points with such positional relationship data exist, the coordinate data is considered to contain the multiple coordinate points of the positional relationship data.

To determine the external parameters of the optical cameras, this step first needs to determine one main camera and its external parameters, and to compute the external parameters of the other associated optical cameras through the main camera. When determining the main camera, this step analyzes, over all valid data of all frames, the camera serial number that appears most often in the coordinate data; the optical camera corresponding to this serial number is recorded as the main camera.

Through the above way of eliminating coordinate data, this embodiment can, based on the known coordinate position information of the marker points, accurately determine whether the coordinate data of each frame is complete, so as to provide accurate and complete computation data for the subsequent internal and external parameters of the optical cameras.
In one embodiment, in step S2, obtaining the target external parameters of each optical camera according to the main camera includes:

Step S211, define the rotation information of the main camera as an identity matrix and the translation information of the main camera as a zero matrix; the identity matrix and the zero matrix are the target external parameters of the main camera.

After the main camera has been determined, its rotation information can be set to the identity matrix and its translation information to the zero matrix. At this point the main camera has the greatest degree of association with the other optical cameras, and the external parameters of the other cameras are their rotation and translation relative to the main camera.

Step S212, match the other optical cameras against the main camera according to the multiple valid data of each frame, record the optical cameras having matching data as target cameras, and obtain the rotation and translation information of each target camera through the rotation and translation information of the main camera; the rotation information and translation information are the target external parameters of the target camera.

Before the external parameters of the other optical cameras are computed, the other optical cameras must be data-matched against the main camera: sufficient matching data is found in the valid data, the essential matrix is obtained with the eight-point method from the matching data and the external parameters of the main camera, singular value decomposition (SVD) is then performed, and the external parameters of the target camera are finally obtained.

When the target external parameters of a target camera are obtained according to the main camera, the following steps are used:

Step S21201, search the valid data frame by frame for the camera serial number of the main camera; if it is not present, continue searching the next frame.

Step S21202, if the camera serial number of the main camera is present, continue checking one by one whether the coordinate data of the other optical cameras in the valid data provides sufficient matching data: if the valid data of at least the preset number of frames contains the coordinate data of both the main camera and the current optical camera, the main camera and the current camera are considered to have sufficient matching data.

Here the preset number of frames is 50; that is, if the valid data of more than 50 frames contains both the coordinate data of the main camera and the coordinate data of the current optical camera, there is considered to be sufficient matching data between the main camera and the current optical camera, and the valid data of these frames is the matching data of the two.

Step S21203, if a camera has no matching data, continue searching the next optical camera; if it has matching data, mark the optical camera as a target camera. Each frame finally yields multiple target cameras and their corresponding coordinate data.

Step S21204, in any frame of the matching data, obtain the coordinate data of the main camera and of the target camera respectively, acquire the positional relationship data of the multiple marker points on the calibration rod and, according to this positional relationship data, match the coordinate data of the main camera with the coordinate data of the target camera to obtain multiple pairs of two-dimensional spatial features; construct a system of linear equations from the multiple feature pairs and the parameters of the two optical cameras, and solve for the essential matrix.

This step obtains the essential matrix based on the eight-point method. Before solving for the essential matrix, the coordinate data must be matched. Because the positional relationship data of the marker points is fixed, and the coordinate data of the main camera and the coordinate data of the target camera in the matching data both necessarily contain coordinate points with the same positional relationship as the marker points, multiple pairs of two-dimensional spatial features can be obtained from every frame of matching data according to the positional relationship data. With five marker points on the calibration rod, each frame of matching data yields five pairs of two-dimensional spatial features.
A system of linear equations is constructed from the multiple pairs of two-dimensional spatial features and the optical camera parameters, from which the essential matrix is solved. To solve for the essential matrix, the fundamental matrix F is computed first, from the epipolar constraint satisfied by every matched feature pair (p, p'):

p'^T * F * p = 0

The fundamental matrix F is obtained from the multiple pairs of two-dimensional spatial features; then, from F = M^(-T) * E * M^(-1), since the matrix M corresponding to the camera parameters is known, the essential matrix E can be obtained.
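For reference, a minimal eight-point solver is sketched below in Python/NumPy: it stacks one epipolar constraint per feature pair into a linear system, takes the null vector by SVD, and enforces the rank-2 constraint. This is a generic textbook implementation rather than the patent's own code, and it omits Hartley coordinate normalization; since each frame of matching data yields only five feature pairs, pairs from several frames must be pooled to reach the eight required.

```python
import numpy as np

def eight_point_fundamental(p1, p2):
    """Estimate F from N >= 8 matched 2D points (p1 from the main camera,
    p2 from the target camera), so that p2^T F p1 = 0 for every pair."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    assert p1.shape == p2.shape and p1.shape[0] >= 8
    x1, y1 = p1[:, 0], p1[:, 1]
    x2, y2 = p2[:, 0], p2[:, 1]
    # One row of the homogeneous linear system A f = 0 per correspondence.
    A = np.column_stack([x2 * x1, x2 * y1, x2,
                         y2 * x1, y2 * y1, y2,
                         x1, y1, np.ones_like(x1)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)          # null vector = least-squares solution
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0                        # enforce the rank-2 constraint on F
    return U @ np.diag(S) @ Vt

def essential_from_fundamental(F, M):
    """Recover E from F = M^(-T) E M^(-1), i.e. E = M^T F M, where M is the
    (known) camera parameter matrix, assumed shared by both cameras here."""
    return M.T @ F @ M
```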
Step S21205, decompose the essential matrix with a singular value decomposition algorithm to obtain the rotation and translation information of the target camera.

The essential matrix E is a 3*3 matrix. According to the formula E = U*W*VT, the matrix E can be decomposed into three 3*3 matrices U, W and VT, where U is called the left singular matrix, V the right singular matrix, VT is the transpose of V, and W is called the singular value matrix; W has values (the singular values) only on its diagonal, all other elements being 0. Two auxiliary matrices M and N are defined, where (in the standard formulation):

M = [ 0 -1  0 ]      N = [ 0  1  0 ]
    [ 1  0  0 ]          [-1  0  0 ]
    [ 0  0  1 ]          [ 0  0  0 ]

The rotation matrix of the target camera relative to the main camera then has two possible values, RA = U*MT*VT or RB = U*M*VT, and the translation of the target camera relative to the main camera likewise has two possible values, TA = U*N*UT or TB = -U*N*UT, where MT is the transpose of matrix M and UT the transpose of matrix U. The pairwise combinations give four possibilities in total, but only one combination makes the depth of the three-dimensional points formed from the matched two-dimensional feature pairs positive; this combination is the rotation matrix and translation matrix of the target camera.

By matching each optical camera against the main camera frame by frame in this way, enough matching data can be obtained; solving for and decomposing the essential matrix from this matching data finally yields accurate external parameters of the target camera.
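The decomposition and the positive-depth test can be sketched as follows, using the auxiliary matrices given above in their standard form; OpenCV's cv2.recoverPose wraps equivalent logic. The sign fix-ups on U and VT and the DLT-style triangulation helper are implementation choices of the sketch, not prescribed by the patent.

```python
import numpy as np

M_AUX = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
N_AUX = np.array([[0., 1., 0.], [-1., 0., 0.], [0., 0., 0.]])

def decompose_essential(E):
    """Return the four (R, t) candidates RA/RB x TA/TB described above."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:          # keep U, V proper so R is a rotation
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    RA = U @ M_AUX.T @ Vt             # RA = U * MT * VT
    RB = U @ M_AUX @ Vt               # RB = U * M * VT
    T_hat = U @ N_AUX @ U.T           # skew-symmetric [t]x = U * N * UT
    t = np.array([T_hat[2, 1], T_hat[0, 2], T_hat[1, 0]])
    return [(RA, t), (RA, -t), (RB, t), (RB, -t)]

def pick_by_positive_depth(candidates, x1, x2):
    """Keep the one candidate whose triangulated point lies in front of both
    cameras; x1, x2 are one matched pair in normalized image coordinates."""
    def depth_ok(R, t):
        P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = np.hstack([R, t.reshape(3, 1)])
        A = np.vstack([x1[0] * P1[2] - P1[0], x1[1] * P1[2] - P1[1],
                       x2[0] * P2[2] - P2[0], x2[1] * P2[2] - P2[1]])
        X = np.linalg.svd(A)[2][-1]
        X = X / X[3]                  # homogeneous -> Euclidean
        return X[2] > 0 and (P2 @ X)[2] > 0
    return next((R, t) for R, t in candidates if depth_ok(R, t))
```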
In one embodiment, after step S212, the method further includes:

jointly optimizing, through iterative optimization, the target internal parameters and target external parameters of the main camera, the target internal parameters and target external parameters of the target camera, and all the matching data of the main camera and the target camera, the cost function of the iterative optimization process being the reprojection error, to obtain the optimized target internal and external parameters of the main camera and of the target camera. The iterative optimization process is as follows.

Transform the world coordinate p into camera coordinates:

P' = R*p + T = {X, Y, Z}

where R and T are the external parameters of the optical camera.

Project P' onto the normalized plane to obtain the normalized coordinates:

Pc = {u, v, 1} = {X/Z, Y/Z, 1}

Taking the distortion of the normalized coordinates into account, apply the distortion model:

u' = u*(1 + k1*r*r + k2*r*r*r*r)
v' = v*(1 + k1*r*r + k2*r*r*r*r)

where k1, k2 and r are all distortion coefficients.

Compute the pixel coordinates M(Us, Vs):

Us = fx*u' + cx
Vs = fy*v' + cy

where fx, fy, cx and cy are the internal parameters of the optical camera.

Let N(U0, V0) be the pixel coordinates detected by the optical camera; the reprojection error e of the world coordinate p is:

e = ||N - M||^2

Substituting all the matching data of the main camera and the target camera, the overall cost function is the sum of the reprojection errors of all matched points:

min Σ_i ||N_i - M_i||^2

During the iteration, when the error falls within the preset threshold range, the computation is stopped and the iteratively optimized internal and external parameters of all optical cameras are output.

Solving the above least-squares formula is equivalent to simultaneously adjusting the internal and external parameters of the optical cameras as well as the world coordinate points, which yields very high calibration accuracy. As the number of optimization iterations grows, the overall error keeps decreasing; when the error falls within the preset threshold range that satisfies the requirements, the computation is stopped and the optimized camera internal parameters, external parameters and other calibration information are output, completing the entire iterative optimization process.

To obtain accurate internal and external parameter data, this embodiment substitutes all the matching data corresponding to the main camera and the target camera, together with the target internal and external parameters of the two cameras, into the optimization process; the cost function of the optimization process is the reprojection error, and iterative optimization finally yields relatively accurate camera internal and external parameters.
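The projection chain above translates almost line by line into a residual function; a sketch follows. Here r is interpreted as the radius on the normalized plane (r*r = u*u + v*v), the usual radial-distortion convention, whereas the passage lists r among the coefficients; that interpretation is an assumption of the sketch.

```python
import numpy as np

def project(p_world, R, T, fx, fy, cx, cy, k1, k2):
    """World point -> camera frame -> normalized plane -> radial
    distortion -> pixel coordinates M(Us, Vs), as in the formulas above."""
    X, Y, Z = R @ p_world + T            # P' = R*p + T
    u, v = X / Z, Y / Z                  # normalized coordinates Pc
    r2 = u * u + v * v                   # squared radius on the normalized plane
    d = 1.0 + k1 * r2 + k2 * r2 * r2     # radial distortion factor
    return np.array([fx * u * d + cx, fy * v * d + cy])

def reprojection_error(observed_px, p_world, R, T, fx, fy, cx, cy, k1, k2):
    """e = ||N - M||^2 for one observation, as in the cost function above."""
    M_px = project(np.asarray(p_world, float), R, T, fx, fy, cx, cy, k1, k2)
    return float(np.sum((np.asarray(observed_px, float) - M_px) ** 2))
```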
Step S213, record the target camera whose target external parameters have been obtained as a main camera, and repeat the previous operation with the other optical cameras that have not been matched to matching data, until the target external parameters of all optical cameras are obtained.

When the other optical cameras are matched against the main camera in step S212, an optical camera may not have sufficient matching data with the main camera; in that case the main camera must be redefined, and the matching and external parameter computation repeated for the optical cameras that failed to match. This step defines an optical camera whose external parameters have already been computed as another main camera, and repeats the operations of step S212 between this main camera and the optical cameras that failed to match, performing matching and external parameter computation until all optical cameras have obtained external parameters.
In one embodiment, in step S2, acquiring the hardware resolution of each optical camera and obtaining the target internal parameters of each optical camera according to the hardware resolution includes:

Step S221, the target internal parameters of the optical camera include the imaging length, the imaging width and the focal length; acquire the hardware resolution of the optical camera, record the larger value of the hardware resolution as the imaging length of the optical camera and the smaller value of the hardware resolution as the imaging width of the optical camera.

Because this step runs the regional calibration algorithm process only on the preset number of frames of coordinate data, and these coordinates may cover only a certain part of the optical camera's view, they cannot be used to initialize the internal parameters; this step therefore uses the hardware resolution directly to determine the target internal parameters. For example, with a hardware resolution of 2048*1024, the imaging length of the optical camera is 2048 and the imaging width is 1024.

Step S222, the focal length of the optical camera is obtained by the following formulas.

Let the imaging length be W and the imaging width be H; the imaging length ratio alpha and the imaging width ratio beta are:

alpha = W/(W+H)
beta = H/(W+H)

The value fx of the focal length of the optical camera along the imaging length direction and the value fy along the imaging width direction are:

fx = W*0.5/alpha
fy = H*0.5/beta

where fx and fy are the focal lengths of the optical camera.

Having obtained the imaging length and imaging width of the optical camera, its focal length can be obtained through the above formulas.

Through the two calculation steps above, this embodiment can finally determine relatively accurate internal parameters for each optical camera.
In one embodiment, after step S2, the method further includes:

inputting the target internal parameters, target external parameters and all collected coordinate data of all optical cameras into a preset bundle adjustment model, the output of the bundle adjustment model being the optimized target internal parameters and target external parameters of all optical cameras.

After step S2, relatively accurate internal and external parameters of all optical cameras have been obtained; but because these parameters were all computed by pairwise matching of optical cameras, without considering the overall relationship of all optical cameras, these parameters require an overall optimization. This step uses the bundle adjustment model (Bundle_Adjustment, BA for short) from the Ceres nonlinear optimization library. The goal of the entire BA is to minimize the reprojection error; the input data of BA is the coordinate data collected by all optical cameras, with the coordinate data already matched, together with the internal and external parameters of all cameras; the output of BA is high-precision camera internal parameter information.
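The patent performs this overall optimization with the bundle adjustment model of the Ceres nonlinear optimization library (a C++ library). As a rough Python stand-in with the same minimize-total-reprojection-error structure, the joint refinement can be sketched with scipy.optimize.least_squares; the parameter packing and names are illustrative assumptions, and project is the helper sketched earlier.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def ba_residuals(x, n_cams, n_pts, observations):
    """Stack one 2-vector residual (observed - projected) per observation.
    x packs, per camera: rvec(3), T(3), fx, fy, cx, cy, k1, k2 (10 values),
    followed by all 3D world points (3 values each)."""
    cams = x[:n_cams * 10].reshape(n_cams, 10)
    pts = x[n_cams * 10:].reshape(n_pts, 3)
    res = []
    for cam_idx, pt_idx, uv in observations:   # uv: observed pixel (2-array)
        rvec, T = cams[cam_idx, :3], cams[cam_idx, 3:6]
        fx, fy, cx, cy, k1, k2 = cams[cam_idx, 6:]
        R = Rotation.from_rotvec(rvec).as_matrix()
        res.append(uv - project(pts[pt_idx], R, T, fx, fy, cx, cy, k1, k2))
    return np.concatenate(res)

def bundle_adjust(x0, n_cams, n_pts, observations):
    """Jointly refine all camera parameters and world points by minimizing
    the total squared reprojection error (least_squares squares residuals)."""
    sol = least_squares(ba_residuals, x0, args=(n_cams, n_pts, observations),
                        method="trf")
    return sol.x
```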
Step S3, calibration accuracy feedback: calculate the reprojection error of each optical camera according to its target internal parameters, target external parameters and all collected coordinate data, and record the reprojection error as the calibration accuracy of the optical camera; if the calibration accuracy of any optical camera is greater than the preset accuracy threshold, repeat the first step for the optical cameras whose calibration accuracy is greater than the accuracy threshold, until the calibration accuracy of all optical cameras is no greater than the accuracy threshold.

The reprojection error is the error obtained by comparing the 2D pixel coordinates (the observed camera coordinates) with the position obtained by projecting the computed 3D point according to the current camera internal and external parameter information. For example, suppose the two-dimensional image point coordinates are A(a1, a2), the three-dimensional space point coordinates are P(p1, p2, p3), the rotation matrix of camera a is Rcam and its translation matrix is Tcam; the reprojection coordinates of the three-dimensional point P are obtained by the following formulas.

First compute P' = P*Rcam + Tcam, where P'(p1', p2', p3') is a three-dimensional coordinate; normalizing P' gives the reprojection coordinates of the three-dimensional space point in camera a, A'(a1', a2') = (p1'/p3', p2'/p3').

Computing the difference between the camera image coordinates A(a1, a2) and the reprojection coordinates A'(a1', a2') gives the reprojection error:

error = A - A' = (a1 - a1', a2 - a2')

After the internal and external parameters of all optical cameras have been obtained, the reprojection error of each optical camera can be calculated from these parameters and the preset number of frames of coordinate data and recorded as each camera's calibration accuracy. This calibration accuracy can be fed back to the user for viewing through an interactive interface; the user can decide whether to end the calibration according to the current calibration accuracy of each optical camera, and if the calibration accuracy of all cameras has reached the ideal state, the calibration computation can be ended. It is also possible to compare the calibration accuracy directly with the accuracy threshold to determine whether to end the calibration computation. If another round of the regional calibration algorithm process is needed, the optical cameras whose calibration accuracy has not reached the accuracy threshold can be treated as key cameras, and the calibration rod is swung in the areas where the key cameras are located to collect coordinate data.
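A sketch of this feedback loop: compute each camera's error as its calibration accuracy and flag the cameras above threshold as the key cameras for the next sweep. Aggregating per-camera error as an RMS value, and the dictionary-based data layout, are illustrative choices of the sketch; reprojection_error is the helper sketched earlier.

```python
import numpy as np

def calibration_accuracy(per_camera_obs, params, threshold):
    """per_camera_obs: {camera_id: [(observed_px, world_point), ...]};
    params: {camera_id: (R, T, fx, fy, cx, cy, k1, k2)}.
    Returns each camera's accuracy and the key cameras to re-sweep."""
    accuracy = {}
    for cam_id, obs in per_camera_obs.items():
        errs = [reprojection_error(px, pw, *params[cam_id]) for px, pw in obs]
        accuracy[cam_id] = float(np.sqrt(np.mean(errs)))   # RMS pixel error
    key_cameras = [c for c, acc in accuracy.items() if acc > threshold]
    return accuracy, key_cameras
```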
Step S4, overall optimization: filter out the unique main camera from all main cameras, define the rotation information of the unique main camera as an identity matrix and the translation information of that main camera as a zero matrix, obtain the rotation and translation information of each main camera according to the rotation and translation information of the unique main camera, and obtain the rotation and translation information of each optical camera according to the rotation and translation information of each main camera; the rotation information and translation information are the target external parameters of the optical cameras.

When the calibration accuracy of all optical cameras reaches an ideal state, for example is smaller than the preset accuracy threshold, the calibration process can be chosen to end; at this point all previous regional calibration information is collected to perform an overall optimization of the internal and external parameters of all optical cameras. First, the main camera information of all regional calibrations is compared to determine the unique main camera among all optical cameras; the rotation information of the unique main camera is defined as the identity matrix and its translation information as the zero matrix. Then, with the unique main camera as the reference, the rotation and translation information of all regional calibration main cameras is converted, and the rotation and translation information of all optical cameras is obtained by further conversion.

In one embodiment, step S4 includes:

Step S401, obtain the multiple main cameras determined according to the multiple initial data, and take the main camera with the largest number of appearances in the initial data as the candidate main camera.

Step S402, if any other optical camera is connected to both the candidate main camera and the other main cameras, record the optical camera with these simultaneous connections as the unique main camera.

When judging whether a connection exists between cameras, the number of calibration rod sightings shared between cameras in the same frames of data can be used: if the number of times two cameras both see the calibration rod is greater than a preset threshold, a connection exists between the two cameras.

Step S403, if there are multiple other optical cameras connected to both the candidate main camera and the other main cameras, select the optical camera with the smallest calibration accuracy value as the unique main camera.

This embodiment determines the unique main camera according to each main camera of the regional calibration processes, covering the two special cases above; the unique main camera is finally determined and used as the reference for converting the other optical cameras.
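The selection logic of steps S401 to S403 can be sketched as follows. Counting the frames in which two cameras both saw the calibration rod stands in for the patent's notion of a "connection"; the names, the threshold, and the fallback when no bridging camera exists are assumptions of the sketch.

```python
from collections import Counter
from itertools import combinations

def covisibility(frames):
    """frames: iterable of sets of camera ids that had valid data in a frame.
    Count, per camera pair, the frames in which both saw the rod."""
    counts = Counter()
    for cams in frames:
        for a, b in combinations(sorted(cams), 2):
            counts[(a, b)] += 1
    return counts

def connected(counts, a, b, min_shared):
    return counts.get((min(a, b), max(a, b)), 0) > min_shared

def unique_main_camera(frames, main_cams, appearances, accuracy, min_shared=50):
    """S401: candidate = most frequent main camera; S402/S403: pick a camera
    connected to the candidate and to every other main camera, breaking ties
    by the smallest calibration accuracy value."""
    counts = covisibility(frames)
    candidate = max(main_cams, key=lambda c: appearances[c])          # S401
    others = [m for m in main_cams if m != candidate]
    bridges = [c for c in accuracy
               if c not in main_cams
               and connected(counts, c, candidate, min_shared)
               and all(connected(counts, c, m, min_shared) for m in others)]
    if len(bridges) == 1:                                             # S402
        return bridges[0]
    if bridges:                                                       # S403
        return min(bridges, key=lambda c: accuracy[c])
    return candidate    # fallback when no bridge exists (assumption)
```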
In step S4, the rotation and translation information of each main camera is obtained according to the rotation and translation information of the unique main camera; the computation is the same as obtaining the rotation and translation information of each optical camera according to the rotation and translation information of each main camera, namely as follows.

First, the cameras that have sufficient matching information with the main camera are determined. These cameras are then matched with the main camera pairwise, one after another. In each pairwise matching computation, the essential matrix is first obtained with the eight-point method, and SVD decomposition then yields the initial rotation and translation information. To obtain accurate internal and external parameter data, all the matching coordinate data of the two cameras and the initial internal and external parameters of the two cameras are substituted together into the optimization process, whose cost function is the reprojection error; iterative optimization yields relatively accurate camera internal and external parameters. After step S4, the fused internal and external parameters of all optical cameras are obtained, and an overall optimization of the internal and external parameters of all optical cameras is also performed, using the same optimization method as in step S2: the target internal parameters, target external parameters and all collected coordinate data of all optical cameras are input into the bundle adjustment model, whose output is the optimized target internal parameters and target external parameters of all optical cameras. This step uses the bundle adjustment model (Bundle_Adjustment, BA for short) from the Ceres nonlinear optimization library; the goal of the entire BA is to minimize the reprojection error, the input data of BA is the matched coordinate data collected by all optical cameras together with the internal and external parameters of all cameras, and the output of BA is high-precision camera internal parameter information.
Step S5, calibrate the center point: high-precision internal and external parameters of all optical cameras have been obtained, but the target external parameters of these optical cameras are all rotations and translations relative to the main camera, whereas in practical applications the target external parameters should be relative to the center point of the field; the two-dimensional calibration rod therefore needs to be placed at the center point of the field.

Step S501, define the height of the multiple marker points of the calibration rod as zero, acquire the position coordinate information of the multiple marker points, and obtain the three-dimensional space coordinates of the multiple marker points according to the position coordinate information.

In this step the calibration rod is treated as a rigid body, and the coordinate position information of the multiple marker points on the rod is known; defining the height as 0 immediately gives the three-dimensional space coordinates of the marker points. If the rod carries five marker points, the three-dimensional space coordinates of the five marker points are recorded as P = {p1, ..., p5}.

Step S502, compute the three-dimensional space coordinates of the unique main camera according to the target external parameters of the unique main camera.

From the data collected by the unique main camera and the external parameter data obtained by the optimization of step S4, the three-dimensional space coordinates under the unique main camera's parameters can be computed, recorded as P' = {P'1, ..., P'5}; the problem to be solved thus becomes a 3D-3D pose estimation.

Step S503, substitute the three-dimensional space coordinates of the multiple marker points and the three-dimensional space coordinates of the unique main camera into the following equation, and solve for the Euclidean transformation rotation matrix and translation matrix by iterating closest points:

P = R*P' + T

where P is the three-dimensional space coordinates of the multiple marker points, P' is the three-dimensional space coordinates of the optical camera, R is the Euclidean transformation rotation matrix, and T is the translation matrix.

This step can use the Iterative Closest Point (ICP) method to solve for R and T, performing the ICP solution by the SVD decomposition method, which yields the pose information of the current calibration rod.

Step S504, the Euclidean transformation rotation matrix and the translation matrix are the pose information of the calibration rod. Record the Euclidean transformation rotation matrix in the rod's pose information as R and the translation matrix in the rod's pose information as T, and record the target external parameters of any optical camera as R0 and T0; after the rod pose information is applied to any optical camera, the rotation matrix in the target external parameters of that optical camera is R*R0 and the translation matrix is R*T0 + T.

After the calibration rod pose information computed in step S503 has been applied to the target external parameters of each optical camera, the external parameter data of each optical camera relative to the center point of the field is obtained.
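Steps S503 and S504 amount to one closed-form SVD (Kabsch) alignment between the known rod points P and their reconstructed counterparts P', followed by re-anchoring every camera's external parameters; a sketch follows. It shows only the SVD step the passage uses inside ICP, under the stated correspondence P = R*P' + T, with the point correspondence assumed already known.

```python
import numpy as np

def rigid_align(P, P_prime):
    """Solve P = R*P' + T for R and T given corresponding 3D points
    (one per row), via the SVD (Kabsch) closed form."""
    P, Q = np.asarray(P, float), np.asarray(P_prime, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (Q - cq).T @ (P - cp)                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # proper rotation (det = +1)
    T = cp - R @ cq
    return R, T

def reanchor_extrinsics(cameras, R, T):
    """Apply the rod pose to every camera's target external parameters:
    R0 -> R*R0 and T0 -> R*T0 + T, as in step S504."""
    return {cam: (R @ R0, R @ T0 + T) for cam, (R0, T0) in cameras.items()}
```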
In the method for calibration while scanning the field in a large-space environment of this embodiment, multiple optical cameras capture the marker points of the moving calibration rod and a preset number of frames of coordinate data is acquired; this coordinate data is used to perform one round of regional calibration. If the calibration result is unsatisfactory, a new round of calibration is started and fused with the regional calibration results of the previous round, so that real-time calibration feedback from the system can be received during the field sweep, reducing the calibration time of traditional calibration methods. Compared with traditional calibration methods, the calibration technique of this application has very clear advantages: it provides high-precision computation results while saving a great deal of manpower and material resources.
In one embodiment, a device for calibration while scanning the field in a large-space environment is proposed. As shown in FIG. 3, the device includes:

an initial data acquisition module, used to acquire the camera serial numbers of multiple optical cameras, collect the multiple frames of data captured by each optical camera of the swinging calibration rod, and classify the frames containing coordinate data frame by frame, each frame yielding multiple corresponding initial data, each initial datum including a camera serial number and the corresponding coordinate data;

a parameter determination module, used to determine, among the multiple initial data and when at least two optical cameras have collected a preset number of frames of coordinate data, the main camera among the multiple optical cameras according to the multiple initial data, obtain the target external parameters of each optical camera according to the main camera, acquire the hardware resolution of each optical camera, and obtain the target internal parameters of each optical camera according to the hardware resolution;

a calibration accuracy feedback module, used to calculate the reprojection error of each optical camera according to its target internal parameters, target external parameters and all collected coordinate data, record the reprojection error as the calibration accuracy of the optical camera and, if the calibration accuracy of any optical camera is greater than the preset accuracy threshold, repeat the first step for the optical cameras whose calibration accuracy is greater than the accuracy threshold until the calibration accuracy of all optical cameras is no greater than the accuracy threshold;

an overall optimization module, used to filter out the unique main camera from all main cameras, define the rotation information of the unique main camera as an identity matrix and the translation information of that main camera as a zero matrix, obtain the rotation and translation information of each main camera according to the rotation and translation information of the unique main camera, and obtain the rotation and translation information of each optical camera according to the rotation and translation information of each main camera, the rotation information and translation information being the target external parameters of the optical cameras.
In one embodiment, the parameter determination module is further configured to:

periodically check whether the initial data contains the preset number of frames of coordinate data collected by at least two optical cameras, and if not, continue the step of collecting the multiple frames of data captured by each optical camera of the swinging calibration rod;

if the preset number of frames of coordinate data collected by at least two optical cameras is present, eliminate the initial data in which the number of coordinate points in a frame's coordinate data is less than the preset minimum number, and eliminate the initial data in which the number of coordinate points in a frame's coordinate data is greater than the preset maximum number, each frame yielding the filtered initial data;

acquire the positional relationship data of the multiple marker points on the calibration rod; in the filtered initial data, check whether the coordinate data contains multiple coordinate points matching the positional relationship data, and if it does, record the coordinate points and the corresponding camera serial number to form valid data, otherwise eliminate the initial data; each frame thus yields multiple corresponding valid data, and the camera serial number with the most valid data is determined to be the main camera.
In one embodiment, the parameter determination module is further configured to:

define the rotation information of the main camera as an identity matrix and the translation information of the main camera as a zero matrix, the identity matrix and the zero matrix being the target external parameters of the main camera;

match the other optical cameras against the main camera according to the multiple valid data of each frame, record the optical cameras having matching data as target cameras, and obtain the rotation and translation information of each target camera through the rotation and translation information of the main camera, the rotation information and translation information being the target external parameters of the target camera; record the target camera whose target external parameters have been obtained as a main camera, and repeat the previous operation with the other optical cameras that have not been matched to matching data, until the target external parameters of all optical cameras are obtained.
In one embodiment, the calibration-accuracy feedback module is further configured so that:

the target intrinsic parameters of an optical camera include the imaging length, the imaging width and the focal length; the hardware resolution of the optical camera is obtained, the larger value of the hardware resolution is recorded as the camera's imaging length, and the smaller value as the camera's imaging width;

the focal length of the optical camera is obtained by the following formulas:

let the imaging length be W and the imaging width be H; then the imaging-length ratio alpha and the imaging-width ratio beta are respectively:

alpha=W/(W+H)

beta=H/(W+H);

the value fx of the focal length in the imaging-length direction and the value fy in the imaging-width direction are:

fx=W*0.5/alpha

fy=H*0.5/beta;

where fx and fy are the focal length of the optical camera.
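Written out, both formulas reduce to (W + H)/2, so the initial focal guess is the same in both directions; a sketch of this initialization, with the resolution tuple as an assumed input:

```python
def initial_intrinsics(resolution):
    """Initial intrinsic guess from the hardware resolution alone.
    The larger dimension is the imaging length W, the smaller the width H;
    under the formulas above, fx and fy both reduce to (W + H) / 2."""
    W, H = max(resolution), min(resolution)
    alpha = W / (W + H)
    beta = H / (W + H)
    fx = W * 0.5 / alpha    # == (W + H) / 2
    fy = H * 0.5 / beta     # == (W + H) / 2
    return W, H, fx, fy

# Example: a 2048x1088 sensor gives fx = fy = 1568.0 as the starting focal length.
```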
In one embodiment, the overall optimization module is further configured to:

obtain the multiple master cameras determined from the multiple sets of initial data, and take the master camera appearing most often in the initial data as the candidate master camera;

if any other optical camera is linked simultaneously with the candidate master camera and the other master cameras, record that optical camera as the unique master camera;

if more than one other optical camera is linked simultaneously with the candidate master camera and the other master cameras, select the optical camera with the smallest calibration accuracy as the unique master camera.
In one embodiment, the apparatus for calibration while scanning the field in a large-space environment is further configured to:

after the master camera among the optical cameras has been determined from the multiple sets of initial data and the target extrinsic parameters of each optical camera obtained from the master camera, and after the hardware resolution of each optical camera has been obtained and the target intrinsic parameters derived from it, input the target intrinsic parameters, target extrinsic parameters and all collected coordinate data of all optical cameras into a preset bundle adjustment model, whose output is the optimized target intrinsic and extrinsic parameters of all optical cameras;

after the rotation and translation information of each master camera has been obtained from that of the unique master camera, and the rotation and translation information of each optical camera from that of each master camera, the rotation and translation information being the target extrinsic parameters of the optical camera, input the target intrinsic parameters, target extrinsic parameters and all collected coordinate data of all optical cameras into the bundle adjustment model, whose output is the optimized target intrinsic and extrinsic parameters of all optical cameras.
In one embodiment, the apparatus for calibration while scanning the field in a large-space environment is further configured to:

define the heights of the multiple marker points of the calibration rod as zero, obtain the position coordinate information of the marker points, and obtain the three-dimensional spatial coordinates of the marker points from the position coordinate information;

compute the three-dimensional spatial coordinates under the unique master camera from the target extrinsic parameters of the unique master camera;

substitute the three-dimensional spatial coordinates of the marker points and the three-dimensional spatial coordinates under the unique master camera into the following equation, and solve the Euclidean-transformation rotation matrix and translation matrix by iterative closest point:

P=RP'+T

where P denotes the three-dimensional spatial coordinates of the marker points, P' the three-dimensional spatial coordinates under the optical camera, R the Euclidean-transformation rotation matrix, and T the translation matrix;

the Euclidean-transformation rotation matrix and the translation matrix are the pose information of the calibration rod; recording the Euclidean-transformation rotation matrix in the rod's pose information as R, the translation matrix as T, and the target extrinsic parameters of any optical camera as R0 and T0, after the rod's pose information is applied to that optical camera the rotation matrix in the camera's target extrinsic parameters is R*R0 and the translation matrix is R*T0+T.
In one embodiment, a device for calibration while scanning the field in a large-space environment is provided, the device including a memory, a processor, and a program for calibration while scanning the field in a large-space environment stored in the memory and executable on the processor; when executed by the processor, the program implements the steps of the method for calibration while scanning the field in a large-space environment of each of the embodiments above.
In one embodiment, a computer-readable storage medium is provided, on which a program for calibration while scanning the field in a large-space environment is stored; when executed by a processor, the program implements the steps of the method for calibration while scanning the field in a large-space environment of each of the embodiments above. The storage medium may be a volatile storage medium or a non-volatile storage medium.
The technical features of the embodiments described above may be combined arbitrarily. For brevity, not all possible combinations of the technical features of the above embodiments have been described; nevertheless, as long as a combination of these technical features involves no contradiction, it should be considered within the scope of this specification.

The embodiments described above express only some exemplary embodiments of this application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the patent scope of this application. It should be noted that a person of ordinary skill in the art may make several variations and improvements without departing from the concept of this application, all of which fall within the protection scope of this application. The protection scope of this patent shall therefore be subject to the appended claims.

Claims (20)

  1. A method for calibration while scanning the field in a large-space environment, wherein the method comprises the following steps:
    obtaining camera indices of a plurality of optical cameras, collecting multiple frames of data captured by each of the optical cameras of a calibration rod being waved, and classifying the multiple frames of data containing coordinate data frame by frame, so that a plurality of sets of initial data are obtained for each frame, each set of the initial data comprising a camera index and corresponding coordinate data;
    among the plurality of sets of the initial data, when at least two of the optical cameras have collected coordinate data of a preset number of frames, determining a master camera among the plurality of optical cameras according to the plurality of sets of the initial data, and obtaining target extrinsic parameters of each of the optical cameras according to the master camera; obtaining a hardware resolution of each of the optical cameras, and obtaining target intrinsic parameters of each of the optical cameras according to the hardware resolution;
    computing a reprojection error of each of the optical cameras according to the target intrinsic parameters, the target extrinsic parameters and all collected coordinate data of each of the optical cameras, and recording the reprojection error as the calibration accuracy of the optical camera; if the calibration accuracy of any of the optical cameras is greater than a preset accuracy threshold, repeating the first step for the optical cameras whose calibration accuracy is greater than the accuracy threshold, until the calibration accuracy of all of the optical cameras is not greater than the accuracy threshold;
    selecting a unique master camera from all master cameras, defining rotation information of the unique master camera as an identity matrix and translation information of the unique master camera as a zero matrix, obtaining rotation information and translation information of each of the master cameras according to the rotation information and the translation information of the unique master camera, and obtaining rotation information and translation information of each of the optical cameras according to the rotation information and the translation information of each of the master cameras, the rotation information and the translation information being the target extrinsic parameters of the optical camera.
  2. The method for calibration while scanning the field in a large-space environment according to claim 1, wherein said determining, among the plurality of sets of the initial data, when at least two of the optical cameras have collected coordinate data of a preset number of frames, a master camera among the plurality of optical cameras according to the plurality of sets of the initial data comprises:
    periodically judging whether the initial data contain coordinate data of a preset number of frames collected by at least two of the optical cameras, and if not, continuing the step of collecting the multiple frames of data captured by each of the optical cameras of the calibration rod being waved;
    if the initial data contain coordinate data of a preset number of frames collected by at least two of the optical cameras, discarding the initial data in which the number of coordinate points in the coordinate data of a frame is less than a preset minimum number, and discarding the initial data in which the number of coordinate points in the coordinate data of a frame is greater than a preset maximum number, so that filtered initial data are obtained for each frame;
    obtaining positional relationship data of a plurality of marker points on the calibration rod; in the filtered initial data, detecting whether the coordinate data contain a plurality of coordinate points satisfying the positional relationship data; if so, recording the plurality of coordinate points and the corresponding camera index as valid data, otherwise discarding the initial data, so that a plurality of sets of the valid data are obtained for each frame; and determining the camera index with the most valid data as the master camera.
  3. The method for calibration while scanning the field in a large-space environment according to claim 2, wherein said obtaining the target extrinsic parameters of each of the optical cameras according to the master camera comprises:
    defining rotation information of the master camera as an identity matrix and translation information of the master camera as a zero matrix, the identity matrix and the zero matrix being the target extrinsic parameters of the master camera;
    matching other optical cameras against the master camera according to the plurality of sets of the valid data of each frame, recording the optical cameras having matching data as target cameras, and obtaining rotation information and translation information of the target cameras from the rotation information and the translation information of the master camera, the rotation information and the translation information being the target extrinsic parameters of the target cameras;
    recording the target cameras whose target extrinsic parameters have been obtained as master cameras, and repeating the previous operation with the other optical cameras not yet matched with matching data, until the target extrinsic parameters of all optical cameras are obtained.
  4. The method for calibration while scanning the field in a large-space environment according to claim 1, wherein said obtaining the hardware resolution of each of the optical cameras and obtaining the target intrinsic parameters of each of the optical cameras according to the hardware resolution comprises:
    the target intrinsic parameters of the optical camera comprising an imaging length, an imaging width and a focal length, obtaining the hardware resolution of the optical camera, recording the larger value of the hardware resolution as the imaging length of the optical camera, and recording the smaller value of the hardware resolution as the imaging width of the optical camera;
    the focal length of the optical camera being obtained by the following formulas:
    letting the imaging length be W and the imaging width be H, the imaging-length ratio alpha and the imaging-width ratio beta are respectively:
    alpha=W/(W+H)
    beta=H/(W+H);
    the value fx of the focal length of the optical camera in the imaging-length direction and the value fy in the imaging-width direction being:
    fx=W*0.5/alpha
    fy=H*0.5/beta;
    wherein fx and fy are the focal length of the optical camera.
  5. The method for calibration while scanning the field in a large-space environment according to claim 1, wherein said selecting a unique master camera from all master cameras, defining the rotation information of the unique master camera as an identity matrix and defining the translation information of the unique master camera as a zero matrix comprises:
    obtaining a plurality of master cameras determined from the plurality of sets of the initial data, and taking the master camera appearing most often in the initial data as a candidate master camera;
    if any other optical camera is linked simultaneously with the candidate master camera and the other master cameras, recording the optical camera having the simultaneous links as the unique master camera;
    if more than one other optical camera is linked simultaneously with the candidate master camera and the other master cameras, selecting the optical camera with the smallest calibration accuracy as the unique master camera.
  6. The method for calibration while scanning the field in a large-space environment according to claim 1, further comprising:
    after determining the master camera among the plurality of optical cameras according to the plurality of sets of the initial data, obtaining the target extrinsic parameters of each of the optical cameras according to the master camera, obtaining the hardware resolution of each of the optical cameras, and obtaining the target intrinsic parameters of each of the optical cameras according to the hardware resolution, inputting the target intrinsic parameters, the target extrinsic parameters and all collected coordinate data of all of the optical cameras into a preset bundle adjustment model, the output of the bundle adjustment model being optimized target intrinsic parameters and target extrinsic parameters of all of the optical cameras;
    after obtaining the rotation information and the translation information of each of the master cameras according to the rotation information and the translation information of the unique master camera, and obtaining the rotation information and the translation information of each of the optical cameras according to the rotation information and the translation information of each of the master cameras, the rotation information and the translation information being the target extrinsic parameters of the optical camera, inputting the target intrinsic parameters, the target extrinsic parameters and all collected coordinate data of all of the optical cameras into the bundle adjustment model, the output of the bundle adjustment model being the optimized target intrinsic parameters and target extrinsic parameters of all of the optical cameras.
  7. The method for calibration while scanning the field in a large-space environment according to claim 1, wherein after said selecting a unique master camera from all master cameras, defining the rotation information of the unique master camera as an identity matrix and the translation information of the unique master camera as a zero matrix, obtaining the rotation information and translation information of each of the master cameras according to the rotation information and the translation information of the unique master camera, and obtaining the rotation information and translation information of each of the optical cameras according to the rotation information and the translation information of each of the master cameras, the rotation information and the translation information being the target extrinsic parameters of the optical camera, the method further comprises:
    defining heights of a plurality of marker points of the calibration rod as zero, obtaining position coordinate information of the plurality of marker points, and obtaining three-dimensional spatial coordinates of the plurality of marker points according to the position coordinate information;
    computing three-dimensional spatial coordinates under the unique master camera according to the target extrinsic parameters of the unique master camera;
    substituting the three-dimensional spatial coordinates of the plurality of marker points and the three-dimensional spatial coordinates under the unique master camera into the following equation, and solving a Euclidean-transformation rotation matrix and a translation matrix by iterative closest point:
    P=RP'+T
    wherein P denotes the three-dimensional spatial coordinates of the plurality of marker points, P' denotes the three-dimensional spatial coordinates under the optical camera, R denotes the Euclidean-transformation rotation matrix, and T denotes the translation matrix;
    the Euclidean-transformation rotation matrix and the translation matrix being pose information of the calibration rod; recording the Euclidean-transformation rotation matrix in the pose information of the calibration rod as R, the translation matrix in the pose information of the calibration rod as T, and the target extrinsic parameters of any of the optical cameras as R0 and T0, after the pose information of the calibration rod is applied to any of the optical cameras, the rotation matrix in the target extrinsic parameters of the optical camera is R*R0 and the translation matrix is R*T0+T.
  8. An apparatus for calibration while scanning the field in a large-space environment, wherein the apparatus comprises:
    an initial-data acquisition module, configured to obtain camera indices of a plurality of optical cameras, collect multiple frames of data captured by each of the optical cameras of a calibration rod being waved, and classify the multiple frames of data containing coordinate data frame by frame, so that a plurality of sets of initial data are obtained for each frame, each set of the initial data comprising a camera index and corresponding coordinate data;
    a parameter determination module, configured to determine, among the plurality of sets of the initial data, when at least two of the optical cameras have collected coordinate data of a preset number of frames, a master camera among the plurality of optical cameras according to the plurality of sets of the initial data, and obtain target extrinsic parameters of each of the optical cameras according to the master camera; and to obtain a hardware resolution of each of the optical cameras and obtain target intrinsic parameters of each of the optical cameras according to the hardware resolution;
    a calibration-accuracy feedback module, configured to compute a reprojection error of each of the optical cameras according to the target intrinsic parameters, the target extrinsic parameters and all collected coordinate data of each of the optical cameras, and record the reprojection error as the calibration accuracy of the optical camera; if the calibration accuracy of any of the optical cameras is greater than a preset accuracy threshold, the first step is repeated for the optical cameras whose calibration accuracy is greater than the accuracy threshold, until the calibration accuracy of all of the optical cameras is not greater than the accuracy threshold;
    an overall optimization module, configured to select a unique master camera from all master cameras, define rotation information of the unique master camera as an identity matrix and translation information of the unique master camera as a zero matrix, obtain rotation information and translation information of each of the master cameras according to the rotation information and the translation information of the unique master camera, and obtain rotation information and translation information of each of the optical cameras according to the rotation information and the translation information of each of the master cameras, the rotation information and the translation information being the target extrinsic parameters of the optical camera.
  9. The apparatus for calibration while scanning the field in a large-space environment according to claim 8, wherein the parameter determination module is further configured to:
    periodically judge whether the initial data contain coordinate data of a preset number of frames collected by at least two of the optical cameras, and if not, continue the step of collecting the multiple frames of data captured by each of the optical cameras of the calibration rod being waved;
    if the initial data contain coordinate data of a preset number of frames collected by at least two of the optical cameras, discard the initial data in which the number of coordinate points in the coordinate data of a frame is less than a preset minimum number, and discard the initial data in which the number of coordinate points in the coordinate data of a frame is greater than a preset maximum number, so that filtered initial data are obtained for each frame;
    obtain positional relationship data of a plurality of marker points on the calibration rod; in the filtered initial data, detect whether the coordinate data contain a plurality of coordinate points satisfying the positional relationship data; if so, record the plurality of coordinate points and the corresponding camera index as valid data, otherwise discard the initial data, so that a plurality of sets of the valid data are obtained for each frame; and determine the camera index with the most valid data as the master camera.
  10. The apparatus for calibration while scanning the field in a large-space environment according to claim 9, wherein the parameter determination module is further configured to:
    define rotation information of the master camera as an identity matrix and translation information of the master camera as a zero matrix, the identity matrix and the zero matrix being the target extrinsic parameters of the master camera;
    match other optical cameras against the master camera according to the plurality of sets of the valid data of each frame, record the optical cameras having matching data as target cameras, and obtain rotation information and translation information of the target cameras from the rotation information and the translation information of the master camera, the rotation information and the translation information being the target extrinsic parameters of the target cameras;
    record the target cameras whose target extrinsic parameters have been obtained as master cameras, and repeat the previous operation with the other optical cameras not yet matched with matching data, until the target extrinsic parameters of all optical cameras are obtained.
  11. The apparatus for calibration while scanning the field in a large-space environment according to claim 8, wherein the calibration-accuracy feedback module is further configured so that:
    the target intrinsic parameters of the optical camera comprise an imaging length, an imaging width and a focal length; the hardware resolution of the optical camera is obtained, the larger value of the hardware resolution is recorded as the imaging length of the optical camera, and the smaller value of the hardware resolution is recorded as the imaging width of the optical camera;
    the focal length of the optical camera is obtained by the following formulas:
    letting the imaging length be W and the imaging width be H, the imaging-length ratio alpha and the imaging-width ratio beta are respectively:
    alpha=W/(W+H)
    beta=H/(W+H);
    the value fx of the focal length of the optical camera in the imaging-length direction and the value fy in the imaging-width direction are:
    fx=W*0.5/alpha
    fy=H*0.5/beta;
    wherein fx and fy are the focal length of the optical camera.
  12. The apparatus for calibration while scanning the field in a large-space environment according to claim 8, wherein the overall optimization module is further configured to:
    obtain a plurality of master cameras determined from the plurality of sets of the initial data, and take the master camera appearing most often in the initial data as a candidate master camera;
    if any other optical camera is linked simultaneously with the candidate master camera and the other master cameras, record the optical camera having the simultaneous links as the unique master camera;
    if more than one other optical camera is linked simultaneously with the candidate master camera and the other master cameras, select the optical camera with the smallest calibration accuracy as the unique master camera.
  13. The apparatus for calibration while scanning the field in a large-space environment according to claim 8, further configured to:
    after the master camera among the plurality of optical cameras has been determined according to the plurality of sets of the initial data, the target extrinsic parameters of each of the optical cameras obtained according to the master camera, the hardware resolution of each of the optical cameras obtained, and the target intrinsic parameters of each of the optical cameras obtained according to the hardware resolution, input the target intrinsic parameters, the target extrinsic parameters and all collected coordinate data of all of the optical cameras into a preset bundle adjustment model, the output of the bundle adjustment model being optimized target intrinsic parameters and target extrinsic parameters of all of the optical cameras;
    after the rotation information and the translation information of each of the master cameras have been obtained according to the rotation information and the translation information of the unique master camera, and the rotation information and the translation information of each of the optical cameras obtained according to the rotation information and the translation information of each of the master cameras, the rotation information and the translation information being the target extrinsic parameters of the optical camera, input the target intrinsic parameters, the target extrinsic parameters and all collected coordinate data of all of the optical cameras into the bundle adjustment model, the output of the bundle adjustment model being the optimized target intrinsic parameters and target extrinsic parameters of all of the optical cameras.
  14. The apparatus for calibration while scanning the field in a large-space environment according to claim 8, further configured to:
    define heights of a plurality of marker points of the calibration rod as zero, obtain position coordinate information of the plurality of marker points, and obtain three-dimensional spatial coordinates of the plurality of marker points according to the position coordinate information;
    compute three-dimensional spatial coordinates under the unique master camera according to the target extrinsic parameters of the unique master camera;
    substitute the three-dimensional spatial coordinates of the plurality of marker points and the three-dimensional spatial coordinates under the unique master camera into the following equation, and solve a Euclidean-transformation rotation matrix and a translation matrix by iterative closest point:
    P=RP'+T
    wherein P denotes the three-dimensional spatial coordinates of the plurality of marker points, P' denotes the three-dimensional spatial coordinates under the optical camera, R denotes the Euclidean-transformation rotation matrix, and T denotes the translation matrix;
    the Euclidean-transformation rotation matrix and the translation matrix being pose information of the calibration rod; recording the Euclidean-transformation rotation matrix in the pose information of the calibration rod as R, the translation matrix in the pose information of the calibration rod as T, and the target extrinsic parameters of any of the optical cameras as R0 and T0, after the pose information of the calibration rod is applied to any of the optical cameras, the rotation matrix in the target extrinsic parameters of the optical camera is R*R0 and the translation matrix is R*T0+T.
  15. A computer device, comprising a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the following steps:
    obtaining camera indices of a plurality of optical cameras, collecting multiple frames of data captured by each of the optical cameras of a calibration rod being waved, and classifying the multiple frames of data containing coordinate data frame by frame, so that a plurality of sets of initial data are obtained for each frame, each set of the initial data comprising a camera index and corresponding coordinate data;
    among the plurality of sets of the initial data, when at least two of the optical cameras have collected coordinate data of a preset number of frames, determining a master camera among the plurality of optical cameras according to the plurality of sets of the initial data, and obtaining target extrinsic parameters of each of the optical cameras according to the master camera; obtaining a hardware resolution of each of the optical cameras, and obtaining target intrinsic parameters of each of the optical cameras according to the hardware resolution;
    computing a reprojection error of each of the optical cameras according to the target intrinsic parameters, the target extrinsic parameters and all collected coordinate data of each of the optical cameras, and recording the reprojection error as the calibration accuracy of the optical camera; if the calibration accuracy of any of the optical cameras is greater than a preset accuracy threshold, repeating the first step for the optical cameras whose calibration accuracy is greater than the accuracy threshold, until the calibration accuracy of all of the optical cameras is not greater than the accuracy threshold;
    selecting a unique master camera from all master cameras, defining rotation information of the unique master camera as an identity matrix and translation information of the unique master camera as a zero matrix, obtaining rotation information and translation information of each of the master cameras according to the rotation information and the translation information of the unique master camera, and obtaining rotation information and translation information of each of the optical cameras according to the rotation information and the translation information of each of the master cameras, the rotation information and the translation information being the target extrinsic parameters of the optical camera.
  16. A storage medium storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the following steps:
    obtaining camera indices of a plurality of optical cameras, collecting multiple frames of data captured by each of the optical cameras of a calibration rod being waved, and classifying the multiple frames of data containing coordinate data frame by frame, so that a plurality of sets of initial data are obtained for each frame, each set of the initial data comprising a camera index and corresponding coordinate data;
    among the plurality of sets of the initial data, when at least two of the optical cameras have collected coordinate data of a preset number of frames, determining a master camera among the plurality of optical cameras according to the plurality of sets of the initial data, and obtaining target extrinsic parameters of each of the optical cameras according to the master camera; obtaining a hardware resolution of each of the optical cameras, and obtaining target intrinsic parameters of each of the optical cameras according to the hardware resolution;
    computing a reprojection error of each of the optical cameras according to the target intrinsic parameters, the target extrinsic parameters and all collected coordinate data of each of the optical cameras, and recording the reprojection error as the calibration accuracy of the optical camera; if the calibration accuracy of any of the optical cameras is greater than a preset accuracy threshold, repeating the first step for the optical cameras whose calibration accuracy is greater than the accuracy threshold, until the calibration accuracy of all of the optical cameras is not greater than the accuracy threshold;
    selecting a unique master camera from all master cameras, defining rotation information of the unique master camera as an identity matrix and translation information of the unique master camera as a zero matrix, obtaining rotation information and translation information of each of the master cameras according to the rotation information and the translation information of the unique master camera, and obtaining rotation information and translation information of each of the optical cameras according to the rotation information and the translation information of each of the master cameras, the rotation information and the translation information being the target extrinsic parameters of the optical camera.
  17. The storage medium storing computer-readable instructions according to claim 16, wherein when the computer-readable instructions are executed by the one or more processors and cause the one or more processors to perform the step of determining, among the plurality of sets of the initial data, when at least two of the optical cameras have collected coordinate data of a preset number of frames, a master camera among the plurality of optical cameras according to the plurality of sets of the initial data, the following steps are also performed:
    periodically judging whether the initial data contain coordinate data of a preset number of frames collected by at least two of the optical cameras, and if not, continuing the step of collecting the multiple frames of data captured by each of the optical cameras of the calibration rod being waved;
    if the initial data contain coordinate data of a preset number of frames collected by at least two of the optical cameras, discarding the initial data in which the number of coordinate points in the coordinate data of a frame is less than a preset minimum number, and discarding the initial data in which the number of coordinate points in the coordinate data of a frame is greater than a preset maximum number, so that filtered initial data are obtained for each frame;
    obtaining positional relationship data of a plurality of marker points on the calibration rod; in the filtered initial data, detecting whether the coordinate data contain a plurality of coordinate points satisfying the positional relationship data; if so, recording the plurality of coordinate points and the corresponding camera index as valid data, otherwise discarding the initial data, so that a plurality of sets of the valid data are obtained for each frame; and determining the camera index with the most valid data as the master camera.
  18. The storage medium storing computer-readable instructions according to claim 17, wherein when the computer-readable instructions are executed by the one or more processors and cause the one or more processors to perform the step of obtaining the target extrinsic parameters of each of the optical cameras according to the master camera, the following steps are also performed:
    defining rotation information of the master camera as an identity matrix and translation information of the master camera as a zero matrix, the identity matrix and the zero matrix being the target extrinsic parameters of the master camera;
    matching other optical cameras against the master camera according to the plurality of sets of the valid data of each frame, recording the optical cameras having matching data as target cameras, and obtaining rotation information and translation information of the target cameras from the rotation information and the translation information of the master camera, the rotation information and the translation information being the target extrinsic parameters of the target cameras;
    recording the target cameras whose target extrinsic parameters have been obtained as master cameras, and repeating the previous operation with the other optical cameras not yet matched with matching data, until the target extrinsic parameters of all optical cameras are obtained.
  19. The storage medium storing computer-readable instructions according to claim 16, wherein when the computer-readable instructions are executed by the one or more processors and cause the one or more processors to perform the step of obtaining the hardware resolution of each of the optical cameras and obtaining the target intrinsic parameters of each of the optical cameras according to the hardware resolution, the following steps are also performed:
    the target intrinsic parameters of the optical camera comprising an imaging length, an imaging width and a focal length, obtaining the hardware resolution of the optical camera, recording the larger value of the hardware resolution as the imaging length of the optical camera, and recording the smaller value of the hardware resolution as the imaging width of the optical camera;
    the focal length of the optical camera being obtained by the following formulas:
    letting the imaging length be W and the imaging width be H, the imaging-length ratio alpha and the imaging-width ratio beta are respectively:
    alpha=W/(W+H)
    beta=H/(W+H);
    the value fx of the focal length of the optical camera in the imaging-length direction and the value fy in the imaging-width direction being:
    fx=W*0.5/alpha
    fy=H*0.5/beta;
    wherein fx and fy are the focal length of the optical camera.
  20. The storage medium storing computer-readable instructions according to claim 16, wherein when the computer-readable instructions are executed by the one or more processors and cause the one or more processors to perform the step of selecting a unique master camera from all master cameras, defining the rotation information of the unique master camera as an identity matrix and the translation information of the unique master camera as a zero matrix, the following steps are also performed:
    obtaining a plurality of master cameras determined from the plurality of sets of the initial data, and taking the master camera appearing most often in the initial data as a candidate master camera;
    if any other optical camera is linked simultaneously with the candidate master camera and the other master cameras, recording the optical camera having the simultaneous links as the unique master camera;
    if more than one other optical camera is linked simultaneously with the candidate master camera and the other master cameras, selecting the optical camera with the smallest calibration accuracy as the unique master camera.
PCT/CN2020/082886 2020-04-02 2020-04-02 Method, apparatus, device and storage medium for calibration while scanning the field in a large-space environment WO2021196108A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN202111008244.0A CN113744346B (zh) 2020-04-02 2020-04-02 Method, apparatus, device and storage medium for calibration while scanning the field in a large-space environment
CN202080000455.7A CN111566701B (zh) 2020-04-02 2020-04-02 Method, apparatus, device and storage medium for calibration while scanning the field in a large-space environment
CN202111008457.3A CN113744347B (zh) 2020-04-02 2020-04-02 Method, apparatus, device and storage medium for calibration while scanning the field in a large-space environment
PCT/CN2020/082886 WO2021196108A1 (zh) 2020-04-02 2020-04-02 Method, apparatus, device and storage medium for calibration while scanning the field in a large-space environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/082886 WO2021196108A1 (zh) 2020-04-02 2020-04-02 Method, apparatus, device and storage medium for calibration while scanning the field in a large-space environment

Publications (1)

Publication Number Publication Date
WO2021196108A1 true WO2021196108A1 (zh) 2021-10-07

Family

ID=72074012

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/082886 WO2021196108A1 (zh) 2020-04-02 2020-04-02 Method, apparatus, device and storage medium for calibration while scanning the field in a large-space environment

Country Status (2)

Country Link
CN (3) CN111566701B (zh)
WO (1) WO2021196108A1 (zh)


Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112215896B (zh) * 2020-09-01 2024-01-30 深圳市瑞立视多媒体科技有限公司 Camera frame data processing method and apparatus for multi-camera calibration, and computer device
CN113031620A (zh) * 2021-03-19 2021-06-25 成都河狸智能科技有限责任公司 Robot localization method for complex environments
CN114283203B (zh) * 2021-12-08 2023-11-21 北京元客方舟科技有限公司 Calibration method and system for a multi-camera system
CN114399554B (zh) * 2021-12-08 2024-05-03 北京元客视界科技有限公司 Calibration method and system for a multi-camera system
CN114202588B (zh) * 2021-12-09 2022-09-23 纵目科技(上海)股份有限公司 Fast automatic calibration method and apparatus for vehicle-mounted surround-view cameras
CN114205483B (zh) * 2022-02-17 2022-07-29 杭州思看科技有限公司 Scanner accuracy calibration method and apparatus, and computer device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101226638A (zh) * 2007-01-18 2008-07-23 中国科学院自动化研究所 Calibration method and apparatus for a multi-camera system
CN103035008A (zh) * 2012-12-15 2013-04-10 北京工业大学 Weighted calibration method for a multi-camera system
US20150271483A1 (en) * 2014-03-20 2015-09-24 Gopro, Inc. Target-Less Auto-Alignment Of Image Sensors In A Multi-Camera System
CN107358633A (zh) * 2017-07-12 2017-11-17 北京轻威科技有限责任公司 Multi-camera intrinsic and extrinsic calibration method based on a three-point calibration object
CN110689584A (zh) * 2019-09-30 2020-01-14 深圳市瑞立视多媒体科技有限公司 Pose positioning method for active rigid bodies in a multi-camera environment, and related device

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107633536B (zh) * 2017-08-09 2020-04-17 武汉科技大学 Camera calibration method and system based on a two-dimensional planar template
CN107767420B (zh) * 2017-08-16 2021-07-23 华中科技大学无锡研究院 Calibration method for an underwater stereo vision system
CN108564617B (zh) * 2018-03-22 2021-01-29 影石创新科技股份有限公司 Three-dimensional reconstruction method and apparatus for multi-view cameras, VR camera and panoramic camera
CN108510551B (zh) * 2018-04-25 2020-06-02 上海大学 Method and system for calibrating camera parameters under long-distance, large-field-of-view conditions
CN110689580B (zh) * 2018-07-05 2022-04-15 杭州海康机器人技术有限公司 Multi-camera calibration method and apparatus
CN109754432B (zh) * 2018-12-27 2020-09-22 深圳市瑞立视多媒体科技有限公司 Automatic camera calibration method and optical motion capture system
CN110310338B (zh) * 2019-06-24 2022-09-06 西北工业大学 Light-field camera calibration method based on a multi-center projection model
CN110288713B (zh) * 2019-07-03 2022-12-23 北京机械设备研究所 Fast three-dimensional model reconstruction method and system based on multi-view vision
CN110473262A (zh) * 2019-08-22 2019-11-19 北京双髻鲨科技有限公司 Extrinsic calibration method and apparatus for multi-view cameras, storage medium and electronic device
CN110689577B (zh) * 2019-09-30 2022-04-01 深圳市瑞立视多媒体科技有限公司 Pose positioning method for active rigid bodies in a single-camera environment, and related device


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114022370A (zh) * 2021-10-13 2022-02-08 山东大学 Galvanometer laser machining distortion correction method and system
CN114022370B (zh) * 2021-10-13 2022-08-05 山东大学 Galvanometer laser machining distortion correction method and system
CN113959335A (zh) * 2021-10-20 2022-01-21 武汉联影智融医疗科技有限公司 Optical locator accuracy detection apparatus, system and method, electronic apparatus, and medium
CN113959335B (zh) * 2021-10-20 2023-12-12 武汉联影智融医疗科技有限公司 Optical locator accuracy detection apparatus, system and method, electronic apparatus, and medium
CN115375772A (zh) * 2022-08-10 2022-11-22 北京英智数联科技有限公司 Camera calibration method, apparatus, device and storage medium
CN115375772B (zh) * 2022-08-10 2024-01-19 北京英智数联科技有限公司 Camera calibration method, apparatus, device and storage medium
CN115423863A (zh) * 2022-11-04 2022-12-02 深圳市其域创新科技有限公司 Camera pose estimation method and apparatus, and computer-readable storage medium
CN116128981A (zh) * 2023-04-19 2023-05-16 北京元客视界科技有限公司 Optical system calibration method and apparatus, and calibration system

Also Published As

Publication number Publication date
CN113744347B (zh) 2023-06-16
CN111566701B (zh) 2021-10-15
CN113744346A (zh) 2021-12-03
CN113744347A (zh) 2021-12-03
CN113744346B (zh) 2023-06-23
CN111566701A (zh) 2020-08-21

Similar Documents

Publication Publication Date Title
WO2021196108A1 (zh) Method, apparatus, device and storage medium for calibration while scanning the field in a large-space environment
WO2021129791A1 (zh) Multi-camera calibration method in large-space environments based on optical motion capture, and related device
US8493459B2 (en) Registration of distorted images
CN116433737A (zh) Method and apparatus for registering LiDAR point clouds with images, and intelligent terminal
KR20200023211A (ko) Method and system for rectification of stereo images
CN108801218A (zh) High-accuracy orientation and orientation-accuracy evaluation method for large-scale dynamic photogrammetry systems
CN113706635B (zh) Long-focal-length camera calibration method based on fusion of point and line features
JP2022151676A (ja) System and method for image stitching using robust camera pose estimation
CN113034565B (zh) Depth computation method and system for monocular structured light
CN116385347A (zh) Visual inspection method for aircraft skin surface patterns based on deformation analysis
WO2022252362A1 (zh) Online matching optimization method combining geometry and texture, and three-dimensional scanning system
CN115063394A (zh) Depth estimation method fusing image rectification and disparity estimation
CN110232715B (zh) Method, apparatus and system for self-calibration of multiple depth cameras
CN112700504A (zh) Disparity measurement method for multi-view telecentric cameras
CN112819901B (zh) Self-calibration method for infrared cameras based on image edge information
CN111462321A (zh) Point cloud map processing method and apparatus, electronic apparatus, and vehicle
CN110599504B (zh) Image processing method and apparatus
CN112183171B (zh) Method and apparatus for building a beacon map based on visual beacons
Rupp et al. Robust camera calibration using discrete optimization
Garcia et al. Fusion of Low-Density LiDAR Data with RGB Images for Plant 3D Modeling
Dou et al. An Unmanned Aerial Vehicle Pose Estimation System Guided by Desired Shot
CN116524019A (zh) Camera pose determination and recovery method, apparatus, device and storage medium
KR20210141266A (ko) Method and apparatus for correcting two-dimensional multi-view images
CN117934635A (zh) Camera calibration method and apparatus, computer device and storage medium
CN115205359A (zh) Robust depth estimation method and apparatus based on scanned light fields

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20928533

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20928533

Country of ref document: EP

Kind code of ref document: A1