WO2022120567A1 - Automatic calibration system based on visual guidance - Google Patents

Automatic calibration system based on visual guidance

Info

Publication number
WO2022120567A1
WO2022120567A1 · PCT/CN2020/134521 · CN 2020134521 W
Authority
WO
WIPO (PCT)
Prior art keywords
calibration
camera
cameras
faceted
calibration plate
Prior art date
Application number
PCT/CN2020/134521
Other languages
French (fr)
Chinese (zh)
Inventor
程俊 (Cheng Jun)
张能波 (Zhang Nengbo)
郭海光 (Guo Haiguang)
Original Assignee
深圳先进技术研究院 (Shenzhen Institutes of Advanced Technology)
Priority date
Filing date
Publication date
Application filed by 深圳先进技术研究院 (Shenzhen Institutes of Advanced Technology)
Priority to PCT/CN2020/134521
Publication of WO2022120567A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Definitions

  • the invention relates to the technical field of graphics and image processing, and more particularly, to an automatic calibration system based on vision guidance.
  • In multi-robot collaboration, the multi-camera joint calibration technique is used to solve the relative pose relationship (R, T) between the robots.
  • camera calibration technology can be roughly divided into two categories:
  • The first category uses a specially made calibration board to determine the camera parameters.
  • Commonly used traditional camera calibration methods include: Faugeras calibration method, Tsai two-step method, and Zhang Zhengyou plane calibration method.
  • The linear-model camera calibration of the Faugeras method is based on a least-squares solution of a system of linear equations.
  • the Tsai calibration method needs to obtain some parameter values in advance, first solve some parameters by linear method, and then solve the remaining camera parameters by nonlinear optimization.
  • the Zhang Zhengyou calibration method utilizes multiple images of the plane calibration plate at different viewing angles, and calibrates the camera parameters according to the designed homography matrix.
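Zhang's method exploits the fact that, for points on the planar plate (Z = 0), the pinhole projection collapses to a homography H = K·[r1 r2 t]. A minimal numpy sketch of this relation follows; all numeric values (K, the plate pose, the plate corner) are illustrative, not from the patent:

```python
import numpy as np

# Synthetic intrinsics and plate pose (illustrative values)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
theta = np.deg2rad(10)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([0.1, -0.05, 2.0])

# For plate points with Z = 0, projection reduces to a homography:
#   s * [u, v, 1]^T = K [r1 r2 t] [X, Y, 1]^T
H = K @ np.column_stack((R[:, 0], R[:, 1], t))

# Project one plate corner (X, Y) both ways and compare
X, Y = 0.03, 0.06
p_full = K @ (R @ np.array([X, Y, 0.0]) + t)   # full pinhole projection
p_homo = H @ np.array([X, Y, 1.0])             # homography shortcut
print(np.allclose(p_full / p_full[2], p_homo / p_homo[2]))  # True
```

Estimating H from detected corners in several views, and then decomposing the homographies, is what lets the method recover K together with the per-view (R, t).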
  • The second category comprises self-calibration methods, which calibrate from the correspondences between images generated as the camera moves.
  • Examples include self-calibration based on the plane at infinity or the absolute quadric, and self-calibration based on the Kruppa equations.
  • Self-calibration does not depend on a calibration reference and is highly flexible, but it relies on strong constraint conditions, and its calibration accuracy and robustness are low.
  • Patent application CN110689585A discloses a joint calibration method, apparatus, device, and medium for multi-camera extrinsic parameters.
  • The specific implementation is as follows: determine the common viewing area of the cameras and obtain a 2D verification point set in images of that area; calibrate the extrinsic parameters of each camera separately; compute the 3D coordinates of each verification point using each camera's current extrinsic parameters and calculate a loss function from these 3D coordinates; and perform joint calibration according to the loss function to obtain the final extrinsic parameters of each camera.
  • Patent application CN110766759A discloses a multi-camera calibration method and device for cameras without overlapping fields of view.
  • The specific implementation is: place a calibration board in the field of view of each camera; compute the pose relationship between each camera coordinate system and its calibration board; use a dual-theodolite three-dimensional coordinate measurement system to measure the 3D coordinates, in theodolite coordinates, of any n points on each calibration board; solve the pose relationship between the two calibration boards with the 3D-3D iterative-closest-point pose estimation method; and finally combine the camera-to-board poses measured by the cameras with the theodolite-calibrated board-to-board pose to compute the relationship between the two cameras.
  • Multi-camera joint calibration is a technology that uses computer vision to obtain camera intrinsic parameters and the relative poses between multiple cameras.
  • This technology is widely used in multi-robot collaborative systems, but existing solutions are often limited to specific scenarios, involve complex calibration procedures that are time-consuming and labor-intensive, and cannot handle well the case where cameras share no overlapping field of view.
  • The purpose of the present invention is to overcome the above defects of the prior art and to provide an automatic calibration system based on vision guidance that performs joint calibration of multiple cameras without requiring overlapping fields of view between all cameras, and that automates the calibration process.
  • An automated calibration system based on vision guidance includes: a calibration vehicle, a rotation-control pan-tilt, a multi-faceted calibration board, multiple cameras, and a data acquisition control module. The multi-faceted calibration board is mounted on the rotation-control pan-tilt. The data acquisition control module controls the calibration vehicle to move while carrying the multi-faceted calibration board and, when the board reaches the overlapping field of view of two adjacent cameras, controls the corresponding cameras to photograph the board and acquire its image data. The module stores the captured image data indexed by camera number and board position number, and detects the corner points of the multi-faceted calibration board to obtain the relative poses between cameras.
  • an automated calibration method based on vision guidance includes the following steps:
  • The captured image data are stored by camera number and board position number, and the corner points of the multi-faceted calibration board are detected to obtain the relative poses between the cameras.
  • Compared with the prior art, the present invention can solve the problem of cameras having no overlapping field of view, automates the entire calibration process, and achieves high calibration accuracy while keeping the procedure convenient and fast.
  • FIG. 1 is a schematic diagram of an automated calibration system based on vision guidance according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of a calibration trolley according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a five-sided three-dimensional checkerboard calibration plate according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of solving relative poses between cameras with overlapping fields of view according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of solving the relative pose between cameras without overlapping fields of view according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of an image partition according to an embodiment of the present invention.
  • As shown in FIG. 1, the system as a whole includes: a calibration vehicle 10 (a four-wheeled trolley is taken as an example), a rotation-control pan-tilt 20, a multi-faceted calibration plate 30, multiple cameras (three are shown; high-resolution industrial cameras can be used), and a data acquisition control module (not shown). The multi-faceted calibration plate 30 is mounted on the rotation-control pan-tilt 20. The data acquisition control module controls the calibration vehicle 10 to move while carrying the multi-faceted calibration plate 30 and, when the plate reaches the overlapping field of view of two adjacent cameras, controls the corresponding cameras to photograph the plate and acquire its image data. The module stores the captured images by camera number and plate position number, and detects the corners of the multi-faceted calibration plate to obtain the relative poses between the cameras.
  • The calibration process of the provided automatic calibration system mainly includes: step S1, collecting calibration images; step S2, solving the initial relative poses between cameras; step S3, performing re-acquisition control on the images and computing the reprojection error from the initial relative poses; step S4, establishing the pose error over the global data and optimizing it to obtain the final relative poses between cameras.
  • Step S1: collect and acquire calibration images
  • the calibration vehicle is a four-wheel smart car
  • As shown in FIG. 2, the four-wheeled smart car carries the multi-faceted calibration board and the rotation-control pan-tilt, where the multi-faceted calibration board is connected to the pan-tilt, and a signal transceiver device is also provided. Over wireless Wi-Fi, commands can be sent to the smart car and the pan-tilt, so that the calibration board can be rotated through multiple angles and faces.
  • As shown in FIG. 3, the calibration plate is a five-sided three-dimensional checkerboard calibration plate. Each side is a 7 × 10 checkerboard, and the angle between each of calibration plates 2, 3, 4 and 5 and calibration plate 1 is 45°, so that cameras in different orientations can detect more corner information.
  • Behind the calibration board is an intelligent pan-tilt with an adjustable angle, which is fixed on the smart car for movement.
  • The cameras capture corner data from the five faces in a single shot, and the common corner data of two cameras are then used to compute their relative pose; thus, more data can be obtained in less time for later optimization.
  • the data acquisition software can integrate multiple programs such as smart car control, camera shooting control, and data storage and visualization.
  • The process of collecting the calibration images is shown in FIG. 4 and includes: the data acquisition software controls the smart car to move while carrying the calibration plate; when the calibration plate reaches the overlapping field of view of two adjacent cameras (e.g., camera 1 and camera 2), the smart car is stopped and a shooting command is sent to the cameras; the cameras photograph the calibration board to obtain its image data; the data acquisition software stores the captured pictures by camera number and board position number, and uses a visualization program to detect and visualize the corners of the calibration board; the trolley then continues moving with the calibration board (e.g., to the overlapping field of view of camera 2 and camera 3), and the above steps are repeated until data collection is complete.
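The collection loop above can be sketched as follows. The trolley and camera interfaces (`move_trolley`, `capture`) are hypothetical stubs standing in for the data acquisition software's wireless commands; they are not APIs named in the patent, and the zone list is illustrative for three cameras:

```python
# Sketch of the Figure-4 acquisition loop with hypothetical hardware stubs.
overlap_zones = [(1, 2), (2, 3)]  # adjacent camera pairs with a shared view

def move_trolley(zone):
    """Stub: drive the car into the overlap zone and stop (hypothetical)."""
    return True

def capture(camera_id):
    """Stub: trigger a shot and return placeholder image data (hypothetical)."""
    return f"image_from_camera_{camera_id}"

storage = {}
for plate_pos, (cam_a, cam_b) in enumerate(overlap_zones, start=1):
    move_trolley((cam_a, cam_b))            # car stops in the overlap zone
    for cam in (cam_a, cam_b):
        # store by camera number and plate position number, as the patent states
        storage[(cam, plate_pos)] = capture(cam)

print(sorted(storage))  # [(1, 1), (2, 1), (2, 2), (3, 2)]
```

The (camera number, plate position number) key mirrors the storage scheme described in the text, so later corner detection can pair images of the same plate position seen by two cameras.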
  • Step S2: solve the initial relative poses between cameras
  • The intrinsic parameter matrix K of each camera, and the single-camera rotation-translation matrix (R, t) between each camera and the calibration plate at its different positions, are obtained by Zhang Zhengyou's calibration method.
  • R 12 = R 1 ⁻¹ × R 2  (1)
  • R 12 denotes the rotation-translation matrix between camera 1 and camera 2, two cameras with an overlapping field of view; R 1 and R 2 denote the rotation-translation matrices between camera 1 and the calibration plate and between camera 2 and the calibration plate, respectively, in their overlapping field of view.
  • R 13 = R 12 ⁻¹ × R 23  (2)
  • R 13 denotes the rotation-translation matrix between camera 1 and camera 3, two cameras with no overlapping field of view; R 12 and R 23 are the matrices of the overlapping-field-of-view camera pairs computed by formula (1).
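The two formulas can be sketched with homogeneous 4×4 transforms. The exact composition order depends on whether R maps plate-to-camera or camera-to-plate (the patent writes R 12 = R 1 ⁻¹ × R 2); the sketch below uses the plate-to-camera convention with synthetic poses, which is one consistent reading:

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(deg):
    a = np.deg2rad(deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0, 0.0, 1.0]])

# Plate-to-camera transforms at shared plate positions (synthetic values)
T1 = make_T(rot_z(15), [0.2, 0.0, 1.5])    # plate -> camera 1
T2 = make_T(rot_z(40), [-0.1, 0.1, 1.8])   # plate -> camera 2
T3 = make_T(rot_z(70), [0.3, -0.2, 2.1])   # plate -> camera 3

# Formula (1) analogue: relative pose from a jointly observed plate
T12 = T2 @ np.linalg.inv(T1)               # camera 1 -> camera 2
T23 = T3 @ np.linalg.inv(T2)               # camera 2 -> camera 3

# Formula (2) analogue: chaining covers cameras with no shared field of view
T13 = T23 @ T12                            # camera 1 -> camera 3
print(np.allclose(T13, T3 @ np.linalg.inv(T1)))  # True
```

The chained T13 agrees with the transform computed directly from the plate, which is exactly why a calibration plate driven through successive overlap zones can link cameras that never see each other.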
  • Step S3: perform re-acquisition control on the images and calculate the reprojection error using the initial relative poses between the cameras.
  • First, the shooting area of each camera picture is divided into regions. As shown in FIG. 7, the image is divided into 9 regions. Owing to perspective imaging, a camera always has high calibration accuracy within its calibration acquisition area, but low calibration accuracy in non-acquisition areas.
  • The core idea of this embodiment is to let the calibrated positions of the trolley traverse these nine areas: in each area, the car is guaranteed to reach the designated region, after which the camera takes the picture.
  • Suppose C cameras are set up and each camera is associated with p positions; then at least C × p position pictures need to be collected.
  • As an example, p is set to 9.
  • the re-acquisition algorithm process includes:
  • Step S11: construct the full position queue T from the regions and labels in sequence.
  • Step S12: check the existing images and construct a queue K of already-acquired positions.
  • Step S13: obtain the difference set L of set T and set K.
  • Step S14: traverse the set L, search the camera pictures, obtain the current position of the car, and command the car to move forward or backward from its current position.
  • T represents the labels of all image blocks; for example, with nine blocks per image, ten pictures give ninety blocks.
  • Each newly acquired position adds an element to queue K, and the uncollected set L is updated accordingly.
  • Repeat the above process until all data is collected.
  • New calibration-plate image data D add are thus obtained.
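The bookkeeping in steps S11 to S14 can be sketched with plain set operations. The camera count and region labels below are illustrative (three cameras, nine regions each), not values fixed by the patent:

```python
# Re-acquisition bookkeeping: full queue T, acquired queue K, difference L.
num_cameras, num_regions = 3, 9

# S11: full corpus of (camera, region) positions to cover
T = [(cam, region) for cam in range(num_cameras) for region in range(num_regions)]

# S12: positions already covered by existing images (example subset)
K = [(0, r) for r in range(9)] + [(1, 0), (1, 1), (2, 4)]

# S13: difference set of still-missing positions
L = [pos for pos in T if pos not in set(K)]

# S14: traverse L, driving the trolley to each missing (camera, region) slot
for cam, region in L:
    pass  # move the car, trigger camera `cam`, store the image for `region`

print(len(T), len(K), len(L))  # 27 12 15
```

As each missing slot is filled, its label moves from L into K, and the loop repeats until L is empty, which is the "repeat until all data is collected" condition in the text.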
  • P 3D is the 3-dimensional coordinate of the target point obtained from the calibration image
  • P 2D is the 2-dimensional coordinates of the target point obtained from the single camera calibration
  • P 3D_2D is the two-dimensional coordinate of the estimated point generated by using the initial inter-camera RT matrix and the camera internal parameter matrix K
  • R represents the rotation matrix
  • t is the translation vector
  • Here u and v denote the pixel coordinates on the image, and xw, yw and zw denote the three-dimensional world coordinates.
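The reprojection of P 3D through K[R, t] and the resulting pixel error against the detected P 2D can be sketched as follows; all numeric values (K, R, t, the 3D point, and the detected corner) are illustrative, not from the patent:

```python
import numpy as np

# Illustrative intrinsics and an identity inter-camera pose
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 0.0])

P_3D = np.array([0.1, -0.05, 2.0])      # target point in 3D
p = K @ (R @ P_3D + t)                  # homogeneous pixel coordinates
P_3D_2D = p[:2] / p[2]                  # estimated (u, v) of the point

P_2D = np.array([361.0, 219.5])         # detected corner (illustrative)
reproj_error = np.linalg.norm(P_3D_2D - P_2D)
print(P_3D_2D, reproj_error)
```

Summing this per-point error over all corners of all plate positions gives the objective that step S4 minimizes.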
  • Step S4: establish the attitude error over the global data and optimize
  • At this stage, the calculated reprojection error is still relatively large.
  • Therefore, the obtained relative poses between the multiple cameras are modeled globally, and the Levenberg-Marquardt algorithm is used to iteratively optimize the reprojection error, improving calibration accuracy and yielding the final relative poses between cameras.
  • f(x) is the two-dimensional coordinate of the generated estimated point P 3D_2D :
  • P 3D_2D = K[R, t]P 3D
  • y is the two-dimensional coordinate of the target point P 2D obtained by single-camera calibration. Since the images in the present invention are collected at a resolution of 600 × 800, 0 ≤ f(x) ≤ 800; the exact range is determined by the image data format used during actual calibration.
  • Levenberg-Marquardt algorithm is used to optimize the projection error.
  • the execution process of the algorithm is as follows:
  • Step S21: give an initial value x 0 and an initial optimization radius μ.
  • Step S22: for the k-th iteration, solve the trust-region subproblem min over Δx k of ½‖f(x k ) + J(x k )Δx k ‖², subject to ‖DΔx k ‖² ≤ μ, where μ is the radius of the confidence region and D is the coefficient matrix.
  • Step S27: judge whether the algorithm has converged; if not, return to step S22, otherwise end.
  • x k represents the relative attitude data after k optimizations
  • x k+1 represents the relative attitude data after k+1 optimizations
  • ⁇ x k represents the k+1 optimization process
  • the obtained pair x k The correction amount of , f(x k ) represents the two-dimensional coordinates of the estimated point after the kth optimization; J[x k ] represents the first derivative of f(x k ) with respect to x, and D is the coefficient matrix.
  • the given initial value x 0 is the initial relative pose data R 12 between the first camera and the second camera.
  • the set initial optimization radius ⁇ can be specifically set according to the actual situation.
  • If the gain ratio of the iteration is greater than the preset threshold, the step is accepted: x k+1 = x k + Δx k . At this point, the reprojection error after the iterative optimization is compared with the preset reprojection-error threshold to judge whether the algorithm has converged. If the reprojection error after iterative optimization is less than or equal to the preset threshold, the iterative optimization is deemed complete, the optimized reprojection error is taken as the second reprojection error, and x k+1 is the relative pose data between the first camera and the second camera. If the reprojection error after the iterative optimization is greater than the preset threshold, the (k+2)-th iterative optimization is continued.
  • the finally obtained x k+1 is the required multi-camera (R, t) matrix.
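The accept/reject damping loop of steps S21 to S27 can be sketched on a toy one-parameter problem (refining a single rotation angle against point residuals); this is a simplified stand-in for the full pose optimization, with all numeric values illustrative:

```python
import numpy as np

# Minimal Levenberg-Marquardt sketch: refine a rotation angle so that rotated
# 2D points match their observed targets (a toy analogue of refining x_k
# against reprojection residuals f(x_k)).
pts = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
theta_true = 0.3
c, s = np.cos(theta_true), np.sin(theta_true)
targets = pts @ np.array([[c, s], [-s, c]])    # observed points (noise-free)

def residual(theta):
    c, s = np.cos(theta), np.sin(theta)
    return (pts @ np.array([[c, s], [-s, c]]) - targets).ravel()

theta, lam = 0.0, 1e-3                          # initial value x0 and damping
for _ in range(20):
    f = residual(theta)
    J = (residual(theta + 1e-6) - f) / 1e-6     # numerical Jacobian df/dtheta
    # Damped normal equation (J^T J + lam) dx = -J^T f, scalar-parameter case
    dx = -(J @ f) / (J @ J + lam)
    if residual(theta + dx) @ residual(theta + dx) < f @ f:
        theta += dx                             # accept the step, relax damping
        lam *= 0.5
    else:
        lam *= 2.0                              # reject the step, grow damping
    if abs(dx) < 1e-12:
        break

print(abs(theta - theta_true) < 1e-6)  # True
```

Growing the damping on a rejected step plays the role of shrinking the trust region μ; accepting the step corresponds to x k+1 = x k + Δx k in the text.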
  • The present invention then uses the optimized relative poses between cameras to calculate the reprojection error again. It has been verified that, compared with the initial relative poses obtained in step S2, the optimized error is reduced to about 1/10 of the initial error, a substantial improvement in calibration accuracy.
  • In summary, the present invention utilizes a five-sided stereo calibration plate and an intelligent trolley, which greatly simplifies the calibration process and saves time and effort; it uses the movement of the trolley-borne calibration plate and the relationships among multiple cameras to solve the problem of cameras with no overlapping field of view; and it uses global modeling and optimization to significantly improve calibration accuracy. The whole calibration process is automated, the operation is simple, and high calibration accuracy is obtained.
  • The proposed multi-camera joint calibration system calibrates the multiple image sensors installed on the robots and thereby obtains the relative pose relationships between the robots.
  • the present invention may be a system, method and/or computer program product.
  • the computer program product may include a computer-readable storage medium having computer-readable program instructions loaded thereon for causing a processor to implement various aspects of the present invention.
  • a computer-readable storage medium may be a tangible device that can hold and store instructions for use by the instruction execution device.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • A non-exhaustive list of computer-readable storage media includes: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile discs (DVD), memory sticks, floppy disks, mechanically encoded devices such as punch cards or raised structures in grooves with instructions stored thereon, and any suitable combination of the above.
  • Computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber-optic cables), or electrical signals transmitted through wires.
  • the computer readable program instructions described herein may be downloaded to various computing/processing devices from a computer readable storage medium, or to an external computer or external storage device over a network such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • A network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium in that device.
  • The computer program instructions for carrying out the operations of the present invention may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • The remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
  • In some implementations, custom electronic circuits, such as programmable logic circuits, field-programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), may execute the computer-readable program instructions to implement various aspects of the present invention.
  • These computer-readable program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • These computer-readable program instructions may also be stored in a computer-readable storage medium. The instructions cause a computer, programmable data processing apparatus, and/or other equipment to operate in a specific manner, so that the computer-readable medium storing the instructions comprises an article of manufacture that includes instructions implementing various aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • Computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other equipment, causing a series of operational steps to be performed thereon to produce a computer-implemented process, so that the instructions executing on the computer, other programmable apparatus, or other device implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • Each block in the flowcharts or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks therein, can be implemented by dedicated hardware-based systems that perform the specified functions or actions, or by a combination of dedicated hardware and computer instructions. It is well known to those skilled in the art that implementation in hardware, implementation in software, and implementation in a combination of software and hardware are all equivalent.

Abstract

Disclosed in the present invention are an automatic calibration system and method based on visual guidance. The system comprises: a calibration vehicle, a rotation control gimbal, a multi-face calibration target, a plurality of cameras and a data acquisition control module. The multi-face calibration target is connected to the rotation control gimbal, and the data acquisition control module is used for controlling the calibration vehicle to carry the multi-face calibration target to move and controlling corresponding cameras to photograph the multi-face calibration target when the multi-face calibration target is moved to an overlapped field of view of two adjacent cameras so as to obtain image data of the multi-face calibration target. The data acquisition control module stores, according to camera numbers and the position number of the multi-face calibration target, the image data obtained by means of photography, and detects corner points of the multi-face calibration target to obtain relative poses between the cameras. By means of the present invention, the automation of the whole calibration process can be achieved, the problem of there being no overlapped field of view between cameras can be solved, the operation is simple and convenient, and a relatively high calibration precision is also achieved.

Description

一种基于视觉引导的自动化标定系统An automatic calibration system based on vision guidance 技术领域technical field
本发明涉及图形和图像处理技术领域,更具体地,涉及一种基于视觉引导的自动化标定系统。The invention relates to the technical field of graphics and image processing, and more particularly, to an automatic calibration system based on vision guidance.
背景技术Background technique
面对工业制造生产线更具柔性,加工任务更加复杂多变的趋势,多机器人协作系统正在快速发展。在多机器人协作问题中,多相机联合标定技术被用于求解多机器人之间的相对位姿关系(R,T)。Facing the trend of more flexible industrial production lines and more complex and changeable processing tasks, multi-robot collaborative systems are developing rapidly. In the multi-robot collaboration problem, the multi-camera joint calibration technique is used to solve the relative pose relationship (R, T) between the multi-robots.
目前相机标定技术大致可分为两类:At present, camera calibration technology can be roughly divided into two categories:
第一类:利用专门制作的标定板来确定相机参数。常用的传统相机标定法包括:Faugeras标定法,Tsai两步法,张正友平面标定法。Faugeras标定法的线性模型相机标定是基于线性方程组的最小二乘问题。Tsai标定法需要预先得到一部分参数值,先通过线性方法求解出部分参数,再通过非线性优化求解出剩余相机参数。张正友标定法利用平面标定板在不同视角下的多幅图像,依据设计的单应矩阵,标定得到相机参数。The first category: use a specially made calibration board to determine the camera parameters. Commonly used traditional camera calibration methods include: Faugeras calibration method, Tsai two-step method, and Zhang Zhengyou plane calibration method. The linear model camera calibration of the Faugeras calibration method is based on the least squares problem of a system of linear equations. The Tsai calibration method needs to obtain some parameter values in advance, first solve some parameters by linear method, and then solve the remaining camera parameters by nonlinear optimization. The Zhang Zhengyou calibration method utilizes multiple images of the plane calibration plate at different viewing angles, and calibrates the camera parameters according to the designed homography matrix.
第二类:自标定法,即根据相机运动过程中所产生的两图像间的对应关系进行标定。如基于无穷远平面、绝对二次曲面的自标定方法、基于Kruppa方程的自标定方法等。自标定法不依赖于标定参考物,灵活性较高,但是约束条件强,且标定精度低,鲁棒性不足。The second category: self-calibration method, that is, calibration is performed according to the corresponding relationship between the two images generated during the camera movement. Such as self-calibration method based on infinite plane, absolute quadratic surface, self-calibration method based on Kruppa equation, etc. The self-calibration method does not depend on the calibration reference, and has high flexibility, but the constraints are strong, and the calibration accuracy is low and the robustness is insufficient.
经分析,尽管传统相机标定法精度较高,操作较为简便,但对标定物的精度要求较高。此外,对于多相机标定,目前的方案一般需要相机之间存在重叠视场,或者是在一个已知结构的场景中。这并不能满足复杂工况下,标定场景复杂且相机之间无重叠视场的情况。After analysis, although the traditional camera calibration method has high accuracy and simple operation, it has high requirements on the accuracy of the calibration object. Furthermore, for multi-camera calibration, current schemes generally require overlapping fields of view between cameras, or in a scene with a known structure. This does not meet the complex working conditions, where the calibration scene is complex and there is no overlapping field of view between cameras.
例如,专利申请CN110689585A公开了一种多相机外参的联合标定方法、装置、设备和介质。具体实现方案为:确定各个相机的共视区域,并 在共视区域的图像中获得2D验证点集;对每个相机分别进行外参标定;分别利用所述各相机的当前外参计算每隔验证点的3D坐标,并根据3D坐标计算损失函数;依据所述损失函数进行联合标定,得到各相机最终的外参。For example, patent application CN110689585A discloses a joint calibration method, device, device and medium for multi-camera external parameters. The specific implementation scheme is as follows: determine the common viewing area of each camera, and obtain a 2D verification point set in the image of the common viewing area; perform external parameter calibration for each camera respectively; The 3D coordinates of the points are verified, and the loss function is calculated according to the 3D coordinates; the joint calibration is performed according to the loss function to obtain the final external parameters of each camera.
又如,专利申请CN110766759A公开了一种无重叠视场的多相机标定方法及装置。具体实现方案为:将标定板放置于每个相机的视野中;计算每个相机坐标系到标定板的位姿关系;利用双经纬仪三维坐标测量系统分别测量每个标定板在上任意n个点在经纬仪坐标下的三维坐标;然后利用3D-3D位姿估计迭代最近点法求解两个标定板之间的位姿关系;利用相机测量的相机与标定板之间的位姿关系,以及经纬仪标定的两个标定板之间的位姿关系计算两个相机之间的关系。For another example, the patent application CN110766759A discloses a multi-camera calibration method and device without overlapping fields of view. The specific implementation scheme is: place the calibration board in the field of view of each camera; calculate the pose relationship between each camera coordinate system and the calibration board; use the dual theodolite three-dimensional coordinate measurement system to measure any n points on each calibration board respectively The three-dimensional coordinates under the theodolite coordinates; then use the 3D-3D pose estimation iterative closest point method to solve the pose relationship between the two calibration boards; use the camera to measure the pose relationship between the camera and the calibration board, and the theodolite calibration The pose relationship between the two calibration plates is calculated to calculate the relationship between the two cameras.
In summary, multi-camera joint calibration is a technique that uses computer vision to obtain camera intrinsic parameters and the relative poses between multiple cameras. It is widely used in multi-robot cooperative systems, but existing solutions are often limited to specific scenarios, involve complex, time-consuming and labor-intensive calibration procedures, and do not handle cameras without overlapping fields of view well.
SUMMARY OF THE INVENTION
The purpose of the present invention is to overcome the above-mentioned defects of the prior art and to provide an automatic calibration system based on visual guidance which, for multi-camera joint calibration, does not require overlapping fields of view between all cameras and automates the calibration process.
According to a first aspect of the present invention, an automatic calibration system based on visual guidance is provided. The system includes a calibration vehicle, a rotation-control pan-tilt, a multi-faceted calibration board, multiple cameras, and a data acquisition control module. The multi-faceted calibration board is mounted on the rotation-control pan-tilt. The data acquisition control module controls the calibration vehicle to move while carrying the multi-faceted calibration board and, when the board moves into the overlapping field of view of two adjacent cameras, controls the corresponding cameras to photograph it and acquire image data of the board. The data acquisition control module stores the captured image data indexed by camera number and board position number, detects the corner points of the multi-faceted calibration board, and obtains the relative poses between the cameras.
According to a second aspect of the present invention, an automatic calibration method based on visual guidance is provided. The method includes the following steps:
controlling a calibration vehicle to move while carrying a multi-faceted calibration board and, when the board moves into the overlapping field of view of two adjacent cameras, controlling the corresponding cameras to photograph the board and acquire its image data;
storing the captured image data indexed by camera number and board position number, detecting the corner points of the multi-faceted calibration board, and obtaining the relative poses between the cameras.
Compared with the prior art, the present invention has the advantage that it handles cameras without overlapping fields of view and automates the entire calibration process, making the calibration procedure convenient and fast while achieving high calibration accuracy.
Other features and advantages of the present invention will become apparent from the following detailed description of exemplary embodiments of the present invention with reference to the accompanying drawings.
DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.
FIG. 1 is a schematic diagram of an automatic calibration system based on visual guidance according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a calibration trolley according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a five-faced stereo checkerboard calibration board according to an embodiment of the present invention;
FIG. 4 is a flowchart of data collection and acquisition according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of solving the relative pose between cameras with overlapping fields of view according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of solving the relative pose between cameras without overlapping fields of view according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of image partitioning according to an embodiment of the present invention.
DETAILED DESCRIPTION
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that, unless specifically stated otherwise, the relative arrangement of components and steps, the numerical expressions, and the numerical values set forth in these embodiments do not limit the scope of the invention.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or its uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but, where appropriate, such techniques, methods, and apparatus should be considered part of the specification.
In all examples shown and discussed herein, any specific value should be construed as merely illustrative and not limiting. Accordingly, other instances of the exemplary embodiments may have different values.
It should be noted that similar reference numerals and letters denote similar items in the following figures; therefore, once an item is defined in one figure, it need not be discussed further in subsequent figures.
The present invention provides an automatic calibration system based on visual guidance that automates multi-camera joint calibration. Referring to FIG. 1, the system as a whole includes: a calibration vehicle 10 (a four-wheeled trolley is taken as an example), a rotation-control pan-tilt 20, a multi-faceted calibration board 30, multiple cameras (three cameras are shown; high-resolution industrial cameras may be used), and a data acquisition control module (not shown). The multi-faceted calibration board 30 is mounted on the rotation-control pan-tilt 20. The data acquisition control module controls the calibration vehicle 10 to move while carrying the multi-faceted calibration board 30 and, when the board moves into the overlapping field of view of two adjacent cameras, controls the corresponding cameras to photograph it and acquire image data of the board. The data acquisition control module stores the captured image data indexed by camera number and board position number, detects the corner points of the multi-faceted calibration board, and obtains the relative poses between the cameras.
In short, the calibration process of the provided automatic calibration system mainly includes: step S1, collecting and acquiring calibration images; step S2, solving the initial relative poses between cameras; step S3, controlling image re-acquisition and computing the reprojection error using the initial relative poses between cameras; and step S4, building and optimizing a pose error over the global data to obtain the final relative poses between cameras. Each stage of the calibration process is described in detail below.
Step S1: collecting and acquiring calibration images
With reference to FIG. 1, image data are first collected using the smart trolley and the automated data acquisition software (i.e., the data acquisition control module), and the initial relative poses are then computed using Zhang Zhengyou's calibration method.
In one embodiment, as shown in FIG. 2, the calibration vehicle is a four-wheeled smart trolley that carries the multi-faceted calibration board and the pan-tilt control platform. The multi-faceted calibration board is mounted on the rotation-control pan-tilt, and a wireless WiFi transceiver is provided so that commands and pan-tilt control operations can be sent to the trolley from a wireless device, allowing the calibration board to be rotated through multiple angles and orientations.
For example, as shown in FIG. 3, the calibration board is a five-faced stereo checkerboard calibration board. Each face is a 7×10 checkerboard; the four faces ②, ③, ④ and ⑤ are angled at 45° to face ①, so that cameras in different orientations can detect more corner information. The calibration board is mounted on an angle-adjustable smart pan-tilt, which is fixed on the smart trolley for movement.
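As an illustration of this board geometry, the sketch below builds the 3D inner-corner coordinates of a frontal face and of one 45°-tilted face in the board's own frame (NumPy). The grid size follows the embodiment, but the corner count interpretation (7×10 inner corners), the unit square size, the tilt axis, and the shared origin are assumptions made only for this example.

```python
import numpy as np

def board_corners(rows=7, cols=10, square=1.0):
    """3D coordinates of the inner corners of one planar checkerboard
    face, lying in the z = 0 plane of that face's own frame."""
    xs, ys = np.meshgrid(np.arange(cols), np.arange(rows))
    pts = np.zeros((rows * cols, 3))
    pts[:, 0] = xs.ravel() * square
    pts[:, 1] = ys.ravel() * square
    return pts

def rotate_x(deg):
    """Rotation matrix about the x-axis by `deg` degrees."""
    a = np.deg2rad(deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])

front = board_corners()                       # face 1, frontal plane
tilted = board_corners() @ rotate_x(45.0).T   # e.g. face 2, tilted by 45 deg
```

In a real setup each tilted face would additionally be translated to its mounting position on the stereo board; only the 45° orientation relative to face ① is taken from the description above.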
When the calibration board moves into the overlapping field of view between two adjacent cameras, each camera captures the corner data of the five faces in a single shot, and the relative pose is then computed from the corner data common to the two cameras. More data are thus obtained in a shorter time for later optimization.
The data acquisition software can integrate several programs, including smart trolley control, camera shooting control, and data storage and visualization.
The process of collecting and acquiring calibration images is shown in FIG. 4 and specifically includes: the data acquisition software controls the smart trolley to move while carrying the calibration board; when the board moves into the overlapping field of view of two adjacent cameras (e.g., camera 1 and camera 2), the trolley is commanded to stop and the cameras are instructed to shoot; the cameras photograph the calibration board to acquire its image data; the data acquisition software stores the captured pictures indexed by camera number and board position number and uses a visualization program to detect and visualize the corner points of the board; the trolley then continues moving with the board (e.g., into the overlapping field of view of camera 2 and camera 3), and the above steps are repeated until data collection is complete.
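The acquisition loop just described can be sketched as follows. The `Car` and `Camera` classes are placeholder stand-ins invented for this example (the real system issues commands over WiFi to the trolley and to industrial cameras); only the control flow and the (camera number, board position number) storage scheme follow step S1.

```python
class Car:
    """Stand-in for the WiFi-controlled calibration trolley."""
    def move_to(self, pos):
        self.pos = pos
    def stop(self):
        pass

class Camera:
    """Stand-in for one fixed industrial camera."""
    def __init__(self, cam_id):
        self.cam_id = cam_id
    def capture(self):
        return f"image_cam{self.cam_id}"

def collect(cameras, positions):
    """Drive the board through each overlap position and store shots
    keyed by (camera number, board position number), as in step S1."""
    car, store = Car(), {}
    for pos in positions:
        car.move_to(pos)
        car.stop()                                    # halt before shooting
        for cam in (cameras[pos], cameras[pos + 1]):  # the adjacent pair
            store[(cam.cam_id, pos)] = cam.capture()
    return store

cams = {i: Camera(i) for i in range(3)}
# overlap position 0 is seen by cameras 0 and 1, position 1 by cameras 1 and 2
data = collect(cams, positions=[0, 1])
```

Corner detection and visualization, which in the real software run after each capture, are omitted here for brevity.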
Step S2: solving the initial relative poses between cameras
For any two of the multiple cameras, two cases are discussed: cameras with an overlapping field of view and cameras without one.
(a) Solving the relative pose between cameras with an overlapping field of view
From the calibration board data captured by the cameras in step S1, the intrinsic matrix K of each camera and the single-camera rotation-translation matrices (R, t) between each camera and the calibration board at its various positions are obtained, for example with Zhang Zhengyou's calibration method.
For two adjacent cameras with an overlapping field of view, as shown in FIG. 5, the single-camera rotation-translation matrices are used to compute the relative pose between them, expressed as:
R_12 = R_1^(-1) · R_2    (1)
In formula (1), R_12 denotes the rotation-translation matrix between the two cameras with an overlapping field of view, namely camera 1 and camera 2, while R_1 and R_2 denote the rotation-translation matrices between camera 1, respectively camera 2, and the calibration board in their overlapping field of view.
(b) Solving the relative pose between cameras without an overlapping field of view
As shown in FIG. 6, for cameras with no overlapping field of view between them, the relative poses of the two pairs of overlapping-view cameras obtained above are used as intermediate transformations to solve the rotation-translation matrix, giving the relative pose between cameras without an overlapping field of view, expressed as:
R_13 = R_12^(-1) · R_23    (2)
In formula (2), R_13 denotes the rotation-translation matrix between the two cameras without an overlapping field of view, namely camera 3 and camera 1, and R_12 and R_23 are the rotation-translation matrices between the overlapping-view camera pairs obtained with formula (1).
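The chaining in formulas (1) and (2), applied exactly as written above, can be sketched numerically for the rotation part (NumPy). The board-to-camera rotations below are illustrative values chosen for the example, not measured data, and translations are omitted for brevity.

```python
import numpy as np

def rot_z(deg):
    """Rotation about the z-axis by `deg` degrees."""
    a = np.deg2rad(deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Board-to-camera rotations, e.g. from Zhang's method (illustrative values):
R1, R2 = rot_z(10.0), rot_z(40.0)   # cameras 1 and 2 see the same board
R2b, R3 = rot_z(5.0), rot_z(65.0)   # cameras 2 and 3 see a second board

R12 = np.linalg.inv(R1) @ R2        # formula (1): overlap of cameras 1 and 2
R23 = np.linalg.inv(R2b) @ R3       # formula (1): overlap of cameras 2 and 3
R13 = np.linalg.inv(R12) @ R23      # formula (2): cameras 1 and 3 never overlap
```

For pure rotation matrices `np.linalg.inv` could be replaced by the transpose; the inverse is kept here to mirror the formulas literally.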
Step S3: controlling image re-acquisition and computing the reprojection error using the initial relative poses between cameras.
To traverse all regions of each camera's image effectively, the camera image is divided into shooting regions. As shown in FIG. 7, the image is divided into 9 regions. Because of perspective imaging, a vision camera is calibrated accurately in the image regions where calibration data were collected and poorly in the regions where none were. The core idea of this embodiment is to let the calibration positions of the trolley traverse these nine regions, ensuring for each region that the trolley can reach the designated area, after which the camera takes the picture.
As in FIG. 7, C cameras can be set up, each with p positions, so picture data must be collected for at least C×p positions. In FIG. 7, p is set to 9 as an example.
The re-acquisition algorithm proceeds as follows:
Step S11: construct the full position queue T, ordered by region and label:
T = {t | t ∈ C×p}
Step S12: inspect the existing images and construct the queue K of already-captured positions:
K = {k | k ∈ C×p}
Step S13: compute L, the difference set of T and K:
L = T \ K = {l | l ∈ C×p}
Step S14: traverse the set L, search the camera pictures, obtain the current trolley position, and move the trolley forward or backward from its current position.
In the above steps, T holds the labels of all image blocks; for example, one image has nine blocks, so ten pictures give ninety blocks. Each time a picture is collected, an element is added to the queue K and the uncollected set L is recomputed. A command is then sent to the calibration trolley, instructing it to collect a picture in the next region. This process is repeated until all data have been collected, yielding the new calibration board image data D_add.
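Steps S11-S13 amount to a set difference over (camera, region) pairs, which can be sketched directly:

```python
import itertools

C, p = 3, 9                                       # cameras, regions per image
T = set(itertools.product(range(C), range(p)))    # step S11: full position queue

def next_targets(collected):
    """Steps S12-S13: positions still missing a shot, L = T \\ K."""
    K = set(collected)                            # already-captured (cam, region)
    return sorted(T - K)

# Example: camera 0 fully covered, camera 1 has regions 0 and 1 done.
done = [(0, r) for r in range(9)] + [(1, 0), (1, 1)]
todo = next_targets(done)                         # what the trolley must still visit
```

Each element of `todo` would then be turned into a move command for the trolley (step S14) until the list is empty.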
The three-dimensional coordinates of the corner points in the calibration image data D_add are obtained and converted into two-dimensional coordinates using the computed initial inter-camera RT matrices and the computed single-camera intrinsic matrix K:

P_3D_2D = K [R, t] P_3D    (3)

The two-dimensional coordinates of the generated points obtained with formula (3) are compared with the two-dimensional coordinates of the target points to compute the initial reprojection error:

error = ||P_2D − P_3D_2D||^2    (4)

In the above formulas, P_3D = (x_w, y_w, z_w, 1)^T is the 3D coordinate of a target point obtained from a calibration picture, where x_w, y_w and z_w are its three-dimensional coordinates; P_2D = (u, v, 1)^T is the 2D coordinate of the target point obtained from single-camera calibration, where u and v are pixel coordinates in the image; and P_3D_2D is the 2D coordinate of the estimated point generated with the initial inter-camera RT matrix and the camera intrinsic matrix K, R being the rotation matrix and t the translation vector.
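Formulas (3) and (4) can be sketched with NumPy as follows. The intrinsic matrix, extrinsics, and point coordinates below are illustrative values, not results from the described system.

```python
import numpy as np

K = np.array([[800.0,   0.0, 400.0],     # assumed intrinsic matrix
              [  0.0, 800.0, 300.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                            # assumed rotation (identity for brevity)
t = np.zeros((3, 1))                     # assumed translation

def project(P3d, K, R, t):
    """Formula (3): P_3D_2D = K [R, t] P_3D, then dehomogenise to (u, v)."""
    P = K @ (R @ P3d.reshape(3, 1) + t)
    return (P[:2] / P[2]).ravel()

P_3d = np.array([0.1, -0.2, 2.0])        # corner in world coordinates
P_2d = np.array([442.0, 219.0])          # its detected pixel position (illustrative)
gen = project(P_3d, K, R, t)             # generated (estimated) point
error = np.sum((P_2d - gen) ** 2)        # formula (4): squared residual
```

Summing this residual over all corners in D_add gives the initial reprojection error that step S4 then minimizes.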
Step S4: building and optimizing a pose error over the global data
When reprojection is performed with the initial inter-camera relative poses obtained in step S2, the computed projection error is relatively large. To reduce the reprojection error and improve calibration accuracy, the obtained inter-camera relative poses are preferably modeled globally, and the reprojection error is then iteratively optimized with the Levenberg-Marquardt algorithm, yielding the final relative poses between cameras with improved calibration accuracy.
Global modeling:
min_{R,t} F(x) = ||f(x) − y||^2    (5)
s.t. 0 ≤ f(x) ≤ 800
In formula (5), f(x) is the two-dimensional coordinate of the generated estimated point P_3D_2D, with P_3D_2D = K[R, t]P_3D, and y is the two-dimensional coordinate of the target point P_2D obtained from single-camera calibration. Since the images in the present invention are collected at a resolution of 600×800, 0 ≤ f(x) ≤ 800; the exact bound depends on the image data format used in the actual calibration.
Further, the Levenberg-Marquardt algorithm is used to optimize the projection error. The algorithm executes as follows:
Step S21: give an initial value x_0 and an initial optimization radius μ.
Step S22: for the k-th iteration, solve:

min_{Δx_k} (1/2) ||f(x_k) + J(x_k) Δx_k||^2,  s.t. ||D Δx_k||^2 ≤ μ

where μ is the radius of the trust region and D is a coefficient matrix.
Step S23: compute

ρ = (f(x + Δx_k) − f(x)) / (J(x)^T Δx_k)
Step S24: if ρ is greater than an upper threshold (e.g., 3/4), set μ = 2μ.
Step S25: if ρ is less than a lower threshold (e.g., 1/4), set μ = 0.5μ.
Step S26: if ρ is greater than a certain threshold, the approximation is considered feasible; let x_{k+1} = x_k + Δx_k.
Step S27: judge whether the algorithm has converged. If it has not converged, return to step S22; otherwise, end.
For clarity, the above process is explained as follows:
x_k denotes the initial relative pose data after k optimizations, x_{k+1} the data after k+1 optimizations, and Δx_k the correction to x_k found in the (k+1)-th optimization; f(x_k) denotes the two-dimensional coordinates of the estimated point after the k-th optimization; J[x_k] denotes the first derivative of f(x_k) with respect to x; and D is a coefficient matrix.
The given initial value x_0 is the initial relative pose data R_12 between the first camera and the second camera. The initial optimization radius μ can be set according to the actual situation.
After k iterative optimizations of the reprojection error f(x), the value of

ρ = (f(x + Δx_k) − f(x)) / (J(x)^T Δx_k)

is computed. In the Levenberg-Marquardt algorithm, ρ is an indicator of how good the approximate second-order Taylor expansion used in the Gauss-Newton method is: the numerator f(x + Δx_k) − f(x) is the decrease of the actual function, and the denominator J(x)^T Δx_k is the decrease predicted by the approximate model.
When checking the magnitude of ρ: if the computed ρ is small, the optimization radius μ is reduced (in this embodiment, when ρ falls below a lower threshold such as 1/4, μ is reduced by setting μ = 0.5μ); if the computed ρ is large, the optimization radius μ is enlarged (in this embodiment, when ρ exceeds an upper threshold such as 3/4, μ is enlarged by setting μ = 2μ).
When ρ is detected to be greater than the preset threshold, the approximation used in this iteration is judged feasible, and x_{k+1} = x_k + Δx_k is set. Convergence is then judged by comparing the iteratively optimized reprojection error with a preset reprojection-error threshold. If the optimized reprojection error is less than or equal to the preset threshold, the iterative optimization is deemed complete, the optimized reprojection error is taken as the second reprojection error, and x_{k+1} is the relative pose data between the first camera and the second camera. If the optimized reprojection error is greater than the preset threshold, the (k+2)-th iterative optimization is carried out.
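The iteration of steps S21-S27 can be sketched on a toy problem. This is a simplified damped variant of Levenberg-Marquardt: the damping factor plays the role of μ (growing on rejected steps, shrinking on accepted ones), the identity matrix stands in for D, the acceptance test is a plain cost decrease rather than the ρ ratio, and the residual is a made-up two-parameter function, not the reprojection error of the described system.

```python
import numpy as np

def residual(x):
    """Toy residual standing in for f(x) - y; minimized at x = (-1, 1)."""
    return np.array([x[0] + 1.0, 2.0 * x[1] - 2.0])

def jacobian(x):
    """Jacobian of the toy residual (constant here)."""
    return np.array([[1.0, 0.0],
                     [0.0, 2.0]])

def lm(x0, mu=1.0, iters=50, tol=1e-10):
    """Simplified LM loop mirroring steps S21-S27: solve a damped normal
    equation, accept steps that reduce the cost (then damp less),
    reject otherwise (then damp more), stop when the step is tiny."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        f, J = residual(x), jacobian(x)
        # Step S22 (damped form): (J^T J + mu I) dx = -J^T f
        dx = np.linalg.solve(J.T @ J + mu * np.eye(len(x)), -J.T @ f)
        if np.sum(residual(x + dx) ** 2) < np.sum(f ** 2):
            x, mu = x + dx, mu * 0.5     # accepted step: shrink damping
        else:
            mu *= 2.0                    # rejected step: grow damping
        if np.linalg.norm(dx) < tol:     # step S27: convergence check
            break
    return x

x_opt = lm([5.0, -3.0])
```

In the actual system, x would hold the inter-camera pose parameters, `residual` would stack the reprojection residuals of formula (4) over all corners, and the accepted x_{k+1} would be the optimized (R, t).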
After the optimization algorithm has finished, the final x_{k+1} is the required inter-camera (R, t) matrix. The present invention computes the reprojection error again using the optimized inter-camera relative poses; it has been verified that, compared with the initial relative poses obtained in step S2, the optimized error is reduced to about 1/10 of the initial error, greatly improving calibration accuracy.
In summary, the present invention uses a five-faced stereo calibration board and a smart trolley, which greatly simplifies the calibration process and saves time and labor; it uses the movement of the board-carrying trolley together with the relations among multiple cameras to handle cameras without overlapping fields of view; and it uses global modeling and optimization to significantly improve calibration accuracy. The entire calibration process is automated, achieving high calibration accuracy with simple operation.
Simulation experiments on real scenes show that the present invention achieves good results: the proposed multi-camera joint calibration system calibrates the image sensors mounted on the robots and thereby obtains the relative pose relationships between the robots.
The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer-readable storage medium carrying computer-readable program instructions for causing a processor to implement various aspects of the present invention.
A computer-readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of computer-readable storage media include: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., a light pulse through a fiber-optic cable), or an electrical signal transmitted through a wire.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to respective computing/processing devices or, via a network such as the Internet, a local area network, a wide area network and/or a wireless network, to an external computer or external storage device. The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium within the respective computing/processing device.
The computer program instructions for carrying out the operations of the present invention may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the C language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA) can be personalized with state information of the computer-readable program instructions, and this electronic circuitry can execute the computer-readable program instructions to implement various aspects of the present invention.
Aspects of the present invention are described herein with reference to flowcharts and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data-processing apparatus to produce a machine, such that the instructions, when executed via the processor of the computer or other programmable data-processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, a programmable data-processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium having the instructions stored therein comprises an article of manufacture including instructions that implement aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, other programmable data-processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other devices to produce a computer-implemented process, such that the instructions that execute on the computer, other programmable apparatus, or other devices implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowcharts or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions. It is well known to those skilled in the art that implementation in hardware, implementation in software, and implementation in a combination of software and hardware are all equivalent.
Various embodiments of the present invention have been described above. The foregoing descriptions are exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or technical improvements over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.

Claims (10)

  1. An automated calibration system based on visual guidance, comprising: a calibration vehicle, a rotation-control gimbal, a multi-faceted calibration board, a plurality of cameras, and a data-acquisition control module, wherein the multi-faceted calibration board is mounted on the rotation-control gimbal; the data-acquisition control module is configured to control the calibration vehicle, carrying the multi-faceted calibration board, to move and, when it moves into the overlapping field of view of two adjacent cameras, to control the corresponding cameras to photograph the multi-faceted calibration board and acquire its image data; and the data-acquisition control module stores the captured image data indexed by camera number and board-position number, and detects the corner points of the multi-faceted calibration board to obtain the relative poses between the cameras.
  2. The system according to claim 1, wherein the multi-faceted calibration board is a five-faced three-dimensional checkerboard target, each face being a 7×10 checkerboard calibration plate, wherein four of the faces are arranged around the fifth face, each at an angle of 45° to that fifth face.
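For illustration only (not part of the claim), the planar inner-corner coordinates of a single 7×10 checkerboard face, as consumed by Zhang-style calibration, might be generated as follows; the 25 mm square size is an assumed value, since the claim does not specify one:

```python
import numpy as np

def checkerboard_object_points(rows=7, cols=10, square_size=0.025):
    """Return the (rows*cols, 3) grid of inner-corner coordinates, in metres,
    lying in the Z=0 plane of the board's own coordinate frame."""
    objp = np.zeros((rows * cols, 3), np.float64)
    objp[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square_size
    return objp

pts = checkerboard_object_points()  # 70 corners for one 7x10 face
```

One such grid per face, together with the detected image corners, is what a planar-target calibration routine takes as input.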
  3. The system according to claim 1, wherein the data-acquisition control module obtains the relative poses between the cameras according to the following steps:
    from the captured calibration-board data, using Zhang Zhengyou's calibration method to obtain the intrinsic matrix of each camera and the single-camera rotation-translation matrix between each camera and the calibration board at each position;
    for two adjacent cameras with an overlapping field of view, using the single-camera rotation-translation matrices to compute the relative pose between the cameras with the overlapping field of view;
    for cameras with no overlapping field of view, solving the rotation-translation matrix from the relative poses already obtained for two pairs of cameras that do have overlapping fields of view;
    dividing the camera image into shooting regions, controlling the calibration positions of the calibration vehicle to traverse the divided regions, and computing the initial reprojection error from the re-collected shooting data;
    iteratively optimizing the reprojection error to obtain the final relative poses between the cameras.
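The second step above can be sketched in numpy, under the usual board-to-camera pose convention x_ci = R_i·X + t_i (the function name and conventions here are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def relative_pose(R1, t1, R2, t2):
    """Given board-to-camera transforms x_ci = R_i @ X + t_i for two cameras
    viewing the same board position, return (R_12, t_12) mapping camera-1
    coordinates into camera-2 coordinates: x_c2 = R_12 @ x_c1 + t_12."""
    R12 = R2 @ R1.T
    t12 = t2 - R12 @ t1
    return R12, t12
```

Because both cameras observe the same board, the board frame cancels out and the inter-camera pose follows from one matrix composition per overlapping pair.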
  4. The system according to claim 3, wherein computing the initial reprojection error comprises:
    providing C cameras, each with p positions, and collecting image data for at least C×p positions;
    constructing, in turn by region and label, the full position queue T, expressed as:
    T = {t | t ∈ C×p}
    examining the existing images and constructing the already-acquired position queue K, expressed as:
    K = {k | k ∈ C×p}
    computing the difference set L of set T and set K, expressed as:
    L = {l | l ∈ C×p}
    traversing the set L, searching the camera images to obtain the current position of the calibration vehicle, and moving the calibration vehicle forward and backward about that position;
    obtaining the three-dimensional coordinates of the corner points in the calibration image, and converting the three-dimensional coordinates into two-dimensional coordinates using the computed initial inter-camera rotation-translation matrix and the computed single-camera intrinsic matrix;
    comparing the two-dimensional coordinates of the generated points with the two-dimensional coordinates of the target points to compute the initial reprojection error:
    error = ||P_2D − P_3D_2D||_2
    where P_3D is the three-dimensional coordinate of the target point obtained from the calibration image (Figure PCTCN2020134521-appb-100001), P_2D is the two-dimensional coordinate of the target point obtained from single-camera calibration (Figure PCTCN2020134521-appb-100002), and P_3D_2D is the two-dimensional coordinate of the estimated point generated using the initial inter-camera rotation-translation matrix and the camera intrinsic matrix.
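The reprojection-error computation of this claim can be sketched as a pinhole projection followed by a Euclidean-distance comparison (a numpy illustration under assumed conventions, not the patented implementation):

```python
import numpy as np

def reproject(K, R, t, X):
    """Project 3-D points X (N, 3) through a pinhole model: p ~ K @ (R @ X + t)."""
    Xc = X @ R.T + t               # points in the camera frame, (N, 3)
    uv = Xc @ K.T                  # homogeneous pixel coordinates
    return uv[:, :2] / uv[:, 2:3]  # perspective divide

def reprojection_error(p2d, K, R, t, X):
    """Mean Euclidean distance between detected corners p2d (N, 2) and the
    reprojection of their 3-D positions X, i.e. error = ||P_2D - P_3D_2D||_2."""
    return float(np.linalg.norm(p2d - reproject(K, R, t, X), axis=1).mean())
```

Here K is the single-camera intrinsic matrix and (R, t) the initial inter-camera rotation-translation estimate being evaluated.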
  5. The system according to claim 1, wherein iteratively optimizing the reprojection error comprises:
    performing global modeling (Figure PCTCN2020134521-appb-100003):
    s.t. 0 ≤ f(x) ≤ m
    iteratively optimizing the reprojection error with the Levenberg-Marquardt algorithm to obtain the final relative poses between the cameras;
    where f(x) is the two-dimensional coordinate of the generated estimated point P_3D_2D: P_3D_2D = K[R, t]P_3D, y is the two-dimensional coordinate of the target point P_2D obtained from single-camera calibration, and m is determined by the image resolution.
  6. The system according to claim 5, wherein iteratively optimizing the reprojection error with the Levenberg-Marquardt algorithm comprises:
    step S61: giving an initial value x_0 and an initial optimization radius μ;
    step S62: for the k-th iteration, solving the subproblem of Figure PCTCN2020134521-appb-100004;
    step S63: computing ρ (Figure PCTCN2020134521-appb-100005); if the condition of Figure PCTCN2020134521-appb-100006 holds, setting μ = 2μ; if the condition of Figure PCTCN2020134521-appb-100007 holds, setting μ = 0.5μ;
    step S64: if ρ is greater than a certain threshold, setting x_{k+1} = x_k + Δx_k;
    where μ is the radius of the trust region, x_{k+1} is the initial relative-pose data after k+1 optimizations, Δx_k is the correction to x_k obtained in the (k+1)-th optimization, f(x_k) is the two-dimensional coordinate of the estimated point after the k-th optimization, J[x_k] is the first derivative of f(x_k) with respect to x, and D is the coefficient matrix.
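Steps S61–S64 describe a damped/trust-region Levenberg-Marquardt loop. A simplified numpy sketch follows; since the claim's exact acceptance conditions are given in figures not reproduced here, it uses the common rule of halving the damping term on an accepted step and doubling it on a rejected one:

```python
import numpy as np

def levenberg_marquardt(f, J, x0, mu=1e-2, iters=50):
    """Simplified damped Levenberg-Marquardt (a sketch of steps S61-S64).
    f: residual function R^n -> R^m; J: its Jacobian (m x n)."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        r, Jx = f(x), J(x)
        # Step S62: solve the damped normal equations (J^T J + mu I) dx = -J^T r
        dx = np.linalg.solve(Jx.T @ Jx + mu * np.eye(x.size), -Jx.T @ r)
        # Steps S63/S64: accept the step only if it reduces the cost,
        # and adapt mu (halve on success, double on failure)
        if np.sum(f(x + dx) ** 2) < np.sum(r ** 2):
            x, mu = x + dx, 0.5 * mu
        else:
            mu = 2.0 * mu
    return x
```

In the patented system the residual would stack the reprojection errors of all corner points, with x holding the inter-camera pose parameters; the toy residual below is only for demonstration.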
  7. The system according to claim 3, wherein, for cameras with no overlapping field of view, solving the rotation-translation matrix from the relative poses obtained for two pairs of cameras with overlapping fields of view is expressed as:
    R_13 = R_12^{-1}·R_23
    where R_13 is the rotation-translation matrix between the third camera, which shares no field of view with the first camera, and the first camera; R_12 is the rotation-translation matrix between the first and second cameras, which have an overlapping field of view; and R_23 is the rotation-translation matrix between the second and third cameras, which have an overlapping field of view.
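In homogeneous coordinates, chaining pairwise extrinsics to reach a camera with no shared field of view can be sketched as below. The composition order depends on the pose convention; here T_ij is assumed to map camera-i coordinates into camera-j coordinates, which is one convention consistent with composing two overlapping pairs:

```python
import numpy as np

def make_T(R, t):
    """Pack rotation R (3x3) and translation t (3,) into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def chain_extrinsics(T12, T23):
    """Compose camera-1->camera-2 and camera-2->camera-3 transforms into
    camera-1->camera-3 (convention: x_c2 = T12 @ x_c1)."""
    return T23 @ T12
```

The same composition extends along any chain of overlapping camera pairs, so every camera's pose can be expressed in one common reference frame.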
  8. An automated calibration method based on visual guidance, comprising the following steps:
    controlling a calibration vehicle, carrying a multi-faceted calibration board, to move and, when it moves into the overlapping field of view of two adjacent cameras, controlling the corresponding cameras to photograph the multi-faceted calibration board to acquire its image data;
    storing the captured image data indexed by camera number and board-position number, and detecting the corner points of the multi-faceted calibration board to obtain the relative poses between the cameras.
  9. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the method according to claim 8.
  10. A computer device comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor, when executing the program, implements the steps of the method according to claim 8.
PCT/CN2020/134521 2020-12-08 2020-12-08 Automatic calibration system based on visual guidance WO2022120567A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/134521 WO2022120567A1 (en) 2020-12-08 2020-12-08 Automatic calibration system based on visual guidance

Publications (1)

Publication Number Publication Date
WO2022120567A1 true WO2022120567A1 (en) 2022-06-16

Family

ID=81973937

Country Status (1)

Country Link
WO (1) WO2022120567A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106097300A (en) * 2016-05-27 2016-11-09 西安交通大学 A kind of polyphaser scaling method based on high-precision motion platform
US20160343136A1 (en) * 2014-01-27 2016-11-24 Xylon d.o.o. Data-processing system and method for calibration of a vehicle surround view system
CN107527336A (en) * 2016-06-22 2017-12-29 北京疯景科技有限公司 Relative position of lens scaling method and device
CN109118547A (en) * 2018-11-01 2019-01-01 百度在线网络技术(北京)有限公司 Multi-cam combined calibrating system and method
CN111758120A (en) * 2019-10-18 2020-10-09 深圳市大疆创新科技有限公司 Calibration method and system of camera device, three-dimensional calibration device and storage medium

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114897997A (en) * 2022-07-13 2022-08-12 星猿哲科技(深圳)有限公司 Camera calibration method, device, equipment and storage medium
CN115170674B (en) * 2022-07-20 2023-04-14 禾多科技(北京)有限公司 Camera principal point calibration method, device, equipment and medium based on single image
CN115170674A (en) * 2022-07-20 2022-10-11 禾多科技(北京)有限公司 Camera principal point calibration method, device, equipment and medium based on single image
CN114964316A (en) * 2022-07-27 2022-08-30 湖南科天健光电技术有限公司 Position and attitude calibration method and device, and method and system for measuring target to be measured
CN114964316B (en) * 2022-07-27 2022-11-01 湖南科天健光电技术有限公司 Position and attitude calibration method and device, and method and system for measuring target to be measured
CN115345943A (en) * 2022-08-08 2022-11-15 恩纳基智能科技无锡有限公司 Calibration method based on differential mode concept
CN115345943B (en) * 2022-08-08 2024-04-16 恩纳基智能装备(无锡)股份有限公司 Calibration method based on differential mode concept
CN115541611A (en) * 2022-09-29 2022-12-30 武汉大学 Parameter calibration method and device for concrete wall appearance image acquisition system
CN115541611B (en) * 2022-09-29 2024-04-16 武汉大学 Method and device for checking parameters of concrete wall appearance image acquisition system
CN115564847A (en) * 2022-11-17 2023-01-03 歌尔股份有限公司 Visual calibration method and device of visual assembly system and storage medium
CN115830148A (en) * 2023-02-23 2023-03-21 深圳佑驾创新科技有限公司 Calibration plate and calibration method
CN116071438A (en) * 2023-03-06 2023-05-05 航天宏图信息技术股份有限公司 Incremental SfM method and device for RigCamera images of unmanned aerial vehicle
CN116228831A (en) * 2023-05-10 2023-06-06 深圳市深视智能科技有限公司 Method and system for measuring section difference at joint of earphone, correction method and controller
CN116228831B (en) * 2023-05-10 2023-08-22 深圳市深视智能科技有限公司 Method and system for measuring section difference at joint of earphone, correction method and controller
CN116503493A (en) * 2023-06-27 2023-07-28 季华实验室 Multi-camera calibration method, high-precision equipment and computer readable storage medium
CN116503493B (en) * 2023-06-27 2023-10-20 季华实验室 Multi-camera calibration method, high-precision equipment and computer readable storage medium
CN116912333B (en) * 2023-09-12 2023-12-26 安徽炬视科技有限公司 Camera attitude self-calibration method based on operation fence calibration rod
CN117830439A (en) * 2024-03-05 2024-04-05 南昌虚拟现实研究院股份有限公司 Multi-camera system pose calibration method and device

Similar Documents

Publication Publication Date Title
WO2022120567A1 (en) Automatic calibration system based on visual guidance
Asadi et al. Real-time image localization and registration with BIM using perspective alignment for indoor monitoring of construction
CN106408612B (en) Machine vision system calibration
US9547802B2 (en) System and method for image composition thereof
CN110070564B (en) Feature point matching method, device, equipment and storage medium
CN111127422A (en) Image annotation method, device, system and host
CN110111388B (en) Three-dimensional object pose parameter estimation method and visual equipment
US20120268567A1 (en) Three-dimensional measurement apparatus, processing method, and non-transitory computer-readable storage medium
JP6324025B2 (en) Information processing apparatus and information processing method
Albl et al. Rolling shutter absolute pose problem with known vertical direction
CN104715479A (en) Scene reproduction detection method based on augmented virtuality
US10825249B2 (en) Method and device for blurring a virtual object in a video
CN109993798B (en) Method and equipment for detecting motion trail by multiple cameras and storage medium
JP2022501684A (en) Shooting-based 3D modeling systems and methods, automated 3D modeling equipment and methods
JP5781682B2 (en) Method for aligning at least a portion of a first image and at least a portion of a second image using a collinear transform warp function
CN111801198A (en) Hand-eye calibration method, system and computer storage medium
WO2014168848A1 (en) Multi-sensor camera recalibration
CN109613974B (en) AR home experience method in large scene
Yan et al. Joint camera intrinsic and lidar-camera extrinsic calibration
Zhou et al. Semi-dense visual odometry for RGB-D cameras using approximate nearest neighbour fields
CN112669389A (en) Automatic calibration system based on visual guidance
Yahyanejad et al. Incremental, orthorectified and loop-independent mosaicking of aerial images taken by micro UAVs
CN111829522B (en) Instant positioning and map construction method, computer equipment and device
CN113496503B (en) Point cloud data generation and real-time display method, device, equipment and medium
Silveira Photogeometric direct visual tracking for central omnidirectional cameras

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20964512

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20964512

Country of ref document: EP

Kind code of ref document: A1