WO2022007886A1 - Camera automatic calibration optimization method and related system and device - Google Patents

A camera automatic calibration optimization method and related system and device

Info

Publication number
WO2022007886A1
WO2022007886A1 (PCT/CN2021/105195, CN2021105195W)
Authority
WO
WIPO (PCT)
Prior art keywords
camera
dimensional coordinates
projected
calibration
optimized
Prior art date
Application number
PCT/CN2021/105195
Other languages
English (en)
French (fr)
Inventor
王越
郭胜男
许秋子
Original Assignee
深圳市瑞立视多媒体科技有限公司
深圳市瑞立视智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市瑞立视多媒体科技有限公司, 深圳市瑞立视智能科技有限公司 filed Critical 深圳市瑞立视多媒体科技有限公司
Publication of WO2022007886A1 publication Critical patent/WO2022007886A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image

Definitions

  • the invention relates to the technical field of camera calibration, in particular to an automatic calibration and optimization method for a camera, and related systems and equipment.
  • For example, in an optical motion capture system, optical images of moving objects are collected by multiple cameras.
  • During optical motion capture, the tracking and positioning software applies the principles of multi-view computer vision: from the matching relationships between the two-dimensional point clouds of the images and the relative positions and orientations of the cameras, it computes the coordinates and orientation of the point cloud in the three-dimensional capture space. Based on the three-dimensional point-cloud coordinates, the rigid-body structures bound to different parts of a moving object are identified, the position and orientation of each rigid body in the motion space are solved, and the motion trajectory of the moving object is thereby determined.
  • the motion capture system needs to determine the state of all cameras and their mutual positional relationship before running, which requires camera calibration.
  • the calibration of camera parameters is a very critical link.
  • The accuracy of the calibration result and the stability of the algorithm directly affect the accuracy of the results the cameras produce.
  • The calibration accuracy directly determines the capture accuracy of the entire optical motion capture system; a tiny calibration error can lead to a huge deviation in the result. Good camera calibration is therefore the prerequisite for all subsequent work.
  • the technical problem that the present invention mainly solves is how to perform automatic camera calibration on multiple cameras in a computer vision system in a timely manner.
  • the present application provides a camera automatic calibration optimization method and related systems and equipment.
  • an embodiment provides a camera automatic calibration optimization method, including the following steps:
  • Step S1: match the first projected two-dimensional coordinates of multiple marker points in space against the first two-dimensional coordinates of those marker points received by the camera, and calculate the Euclidean distance between them. The first projected two-dimensional coordinates are obtained by projecting the three-dimensional marker-point coordinates, obtained through camera calibration, into the camera.
  • Step S2: determine whether the Euclidean distance is greater than a preset second threshold; if so, go to step S3, otherwise go to step S4.
  • Step S3: if the Euclidean distance is greater than the preset second threshold, re-calibrate the camera and return to step S1.
  • Step S4: if the Euclidean distance is greater than a preset first threshold and less than or equal to the second threshold, the corresponding camera is determined to be a camera to be optimized (the first threshold is smaller than the second); if the Euclidean distance is less than or equal to the first threshold, the corresponding camera is determined to be a normal camera, i.e. one that requires no calibration optimization.
  • Step S5: project the three-dimensional marker-point coordinates obtained by the normal cameras into the camera to be optimized to obtain the corresponding second projected two-dimensional coordinates, and match these against the second two-dimensional coordinates received by the camera to be optimized, yielding the optimized camera calibration result.
  • Step S3 includes:
  • if the Euclidean distance is greater than the preset second threshold, re-calibrating the cameras to obtain a new positional relationship between them, computing new three-dimensional coordinates of the marker points from that relationship, and projecting the new three-dimensional coordinates into the camera to obtain the corresponding new first projected two-dimensional coordinates;
  • then returning to step S1, so that the Euclidean distance between the new first projected two-dimensional coordinates obtained through camera calibration and the first two-dimensional coordinates is not greater than the preset second threshold.
  • Step S5 includes obtaining the optimized camera calibration result by gradient descent, specifically:
  • calculating the projection difference between the second projected two-dimensional coordinates and the second two-dimensional coordinates, and obtaining a rotation matrix and a translation matrix by inverse projection from that projection difference;
  • updating the rotation matrix and translation matrix into the camera's current calibration data, and cyclically recomputing the current projection difference from the current calibration data until it is less than the preset threshold; the rotation matrix and translation matrix at that point are taken as the optimized camera calibration result.
  • In one embodiment, the projection difference between the second projected two-dimensional coordinates and the second two-dimensional coordinates is calculated according to the matching relationship, and the rotation matrix and translation matrix are obtained by inverse projection from the projection difference.
  • the calibration data includes rotation information and/or position information, and the rotation information and the position information are respectively used to calibrate the rotation state and offset state of any camera relative to the space coordinate system.
  • Step S5 also includes:
  • shielding the cameras to be optimized one by one, or shielding multiple cameras to be optimized at the same time, so that they do not participate in the calculation of the marker points' three-dimensional coordinates; the three-dimensional marker-point coordinates obtained by the normal cameras are then projected into each camera to be optimized to obtain the corresponding second projected two-dimensional coordinates, which are matched against the second two-dimensional coordinates received by that camera to obtain the optimized camera calibration result.
  • After step S5, the method also includes:
  • a judgment step: repeat steps S1-S5 iteratively, judging after each pass whether the calculated Euclidean distance is less than or equal to the preset first threshold; if so, go to the ending step, otherwise return to step S1;
  • an ending step: stop the automatic calibration and optimization process, i.e. the automatic calibration optimization of the cameras to be optimized is complete.
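The two-threshold decision of steps S2-S4 can be sketched as follows (a minimal illustration; the function name and return labels are hypothetical, not part of the patent):

```python
def classify_camera(euclidean_distance, first_threshold, second_threshold):
    """Classify one camera by the Euclidean distance between its projected
    and received 2D marker coordinates (first_threshold < second_threshold)."""
    if euclidean_distance > second_threshold:
        return "recalibrate"    # step S3: full re-calibration, then back to S1
    if euclidean_distance > first_threshold:
        return "to_optimize"    # step S4: candidate for automatic optimization (S5)
    return "normal"             # step S4: no calibration optimization needed
```

Each camera is classified independently, so normal cameras keep running while only the cameras flagged `to_optimize` enter step S5.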
  • an embodiment provides an optical motion capture system, including a plurality of marker points to be captured, a plurality of cameras for photographing the marker points, and a processor;
  • a plurality of the marker points are arranged on a preset rigid body
  • a plurality of the cameras are distributed in a preset motion space, and are all connected in communication with the processor, so as to photograph the marked points on the rigid body;
  • the processor is configured to periodically calibrate each of the cameras according to the camera automatic calibration optimization method described in the first aspect.
  • An embodiment provides an automatic camera calibration optimization processing device, including a memory and at least one processor, wherein the memory stores instructions and the memory and the at least one processor are interconnected by a line;
  • the at least one processor invokes the instructions in the memory, so that the automatic camera calibration optimization processing device executes the camera automatic calibration optimization processing method described in the first aspect.
  • an embodiment provides a computer-readable storage medium, including a program that can be executed by a processor to implement the method described in the first aspect.
  • The camera automatic calibration optimization method includes: matching the first projected two-dimensional coordinates of multiple marker points in space against the first two-dimensional coordinates of those marker points received by the camera, and calculating the Euclidean distance between them; judging whether the Euclidean distance is greater than a preset second threshold and, if so, re-calibrating the camera.
  • If the Euclidean distance is greater than a preset first threshold and not greater than the second threshold, the corresponding camera is determined to be a camera to be optimized; if it is less than or equal to the first threshold, the camera is determined to be normal. The three-dimensional marker-point coordinates obtained by the normal cameras are projected into the camera to be optimized to obtain the corresponding second projected two-dimensional coordinates, which are matched against the second two-dimensional coordinates received by the camera to be optimized to obtain the optimized camera calibration result. In this way, without affecting normal operation of the camera system, matching data between projected 2D coordinates and camera 2D coordinates can be collected to determine whether the current calibration information is correct; if there is an obvious deviation, the calibration data of the current system's cameras is corrected automatically, improving the accuracy and smoothness of system operation.
  • FIG. 1 is a flowchart of the camera automatic calibration optimization method;
  • FIG. 2 is a schematic diagram of forming a projection point by projection;
  • FIG. 3 is a schematic diagram of forming a collection point by image processing;
  • FIG. 4 is a schematic structural diagram of an optical motion capture system;
  • FIG. 5 is a schematic structural diagram of an embodiment of a camera calibration processing device.
  • Unless otherwise specified, the terms "connection" and "connected" mentioned in this application include both direct and indirect connections.
  • The inventive concept of the present application is as follows: in a computer vision system, especially an optical motion capture system, to solve the problem that a camera cannot be re-calibrated in time when its internal or external parameters change, the present application proposes an "automatic calibration" concept and method.
  • Without affecting normal operation of the optical motion capture system, matching data between three-dimensional space coordinates and the cameras' two-dimensional coordinates is collected to judge whether each camera's current calibration information is correct; if there is an obvious deviation, the camera calibration file of the current system is modified automatically.
  • The purpose is to determine, in real time, the relationship between the three-dimensional geometric position of a point on the surface of a spatial object and its corresponding point in the image, and to establish the geometric model of camera imaging (the parameters of this geometric model are the camera parameters).
  • Automatic camera calibration helps improve the accuracy and smoothness of system operation, and at the same time spares the user from having to re-calibrate the cameras from time to time, saving the user's time. It should be clear that the function of automatic calibration is to find errors in the cameras' original calibration data while the system is running and then adjust and optimize automatically; when a large number of cameras have shifted position, the space must be re-swept and fully re-calibrated, which automatic calibration alone cannot resolve.
  • the present application discloses an automatic camera calibration optimization method for calibrating multiple cameras in a computer vision system.
  • the claimed camera automatic calibration optimization method includes steps S100-S500, which will be described separately below.
  • Step S100: match the first projected two-dimensional coordinates of multiple marker points in space against the first two-dimensional coordinates of those marker points received by the camera, and calculate the Euclidean distance between them; the first projected two-dimensional coordinates are obtained by projecting the three-dimensional marker-point coordinates, obtained through camera calibration, into the camera.
  • the method for implementing the step S100 is as follows:
  • In an optical motion capture scene, multiple cameras continuously shoot images of multiple marker points (such as multiple points on one or more captured objects), and a spatial coordinate system (or world coordinate system) is established for the motion space where the captured objects are located; the spatial coordinates of a captured object in this coordinate system are then obtained by processing multiple images taken at the same moment. Since obtaining spatial coordinates from multiple images is a common technique in optical motion capture, it is not described in detail here.
  • The three-dimensional coordinates of each marker point are projected to each camera according to that camera's current calibration parameters, yielding the first projected two-dimensional coordinates of each marker point in each camera's coordinate system. The current calibration parameters are known parameter data obtained from a prior camera calibration (sweeping the space in advance), through which the three-dimensional coordinates of the marker points can be determined.
  • a space coordinate system Xw-Yw-Zw and a camera coordinate system Xc-Yc of a camera are constructed, and a three-dimensional coordinate W1 of a marker point is obtained in the space coordinate system. Projecting the three-dimensional coordinate W1 into the camera coordinate system of the camera forms the projected point C1. Since there are differences in rotation angle and offset position between the two coordinate systems, the current calibration parameters of the camera are used in the projection process, so that the three-dimensional coordinate W1 can obtain the projection point C1 under the action of the current calibration parameters. When there are multiple marker points in the space coordinate system, the three-dimensional coordinates of each marker point are projected into the camera coordinate system of the camera to form projection points similar to C1 respectively.
  • each projected point formed by projection is two-dimensional coordinate data, that is, the orthographic projection process from three-dimensional coordinates to two-dimensional coordinates is realized.
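The projection from the three-dimensional coordinate W1 to the projected point C1 can be sketched with a standard pinhole camera model (an illustrative assumption: the patent does not spell out the projection equations, and lens distortion is ignored here):

```python
import numpy as np

def project_point(W, R, t, K):
    """Project a 3D marker point W (space/world coordinates) into a camera.

    R (3x3) and t (3,) are the camera's extrinsic calibration data;
    K is the intrinsic matrix [[fx, 0, cx], [0, fy, cy], [0, 0, 1]].
    Returns the projected 2D coordinates (the projection point, e.g. C1)."""
    Xc = R @ W + t           # space coordinate system -> camera coordinate system
    uvw = K @ Xc             # camera coordinates -> homogeneous image coordinates
    return uvw[:2] / uvw[2]  # perspective division -> (u, v)
```

Here R and t play the role of the current calibration parameters: they encode the rotation angle and offset between the space coordinate system Xw-Yw-Zw and the camera coordinate system.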
  • the spatial coordinate system is a mapping relationship of real objects in space, and the origin of the spatial coordinate system is usually Ow.
  • the camera coordinate system takes the optical axis of the camera as the Z axis.
  • the center position of the light in the camera optical system is the origin Oc (actually the center of the lens).
  • The horizontal axis Xc and the vertical axis Yc are generally not parallel to the corresponding axes of the space coordinate system, but differ from them by a certain rotation angle and a certain translation.
  • The first two-dimensional coordinates refer to the two-dimensional points formed by image processing of the marker points captured by each camera; that is, each marker point is directly mapped onto the camera by light.
  • For example, construct the camera coordinate system of one camera. When the camera takes an image of a marker point, the marker point is mapped by light directly onto the camera's lens, forming a collection point (i.e., the marker point as received by the camera) in the camera. The collection point appears as a two-dimensional point on the captured image, such as Kc, and the collection point Kc formed by this mapping is two-dimensional coordinate data.
  • The first projected two-dimensional coordinates and the first two-dimensional coordinates are matched, and the Euclidean distance between them is calculated. In a specific embodiment, when the Euclidean distance between a first projected two-dimensional coordinate and a first two-dimensional coordinate is the minimum, the two coordinates are considered to have a matching relationship.
  • The first projected two-dimensional coordinates formed in the camera coordinate system include the projected coordinates of every marker point, and the first two-dimensional coordinates formed by mapping in the camera coordinate system likewise include the mapped coordinates of every marker point; initially, however, there is no known correspondence between the projected coordinates and the mapped coordinates of any given marker point. The minimum Euclidean distance is therefore used as the judgment criterion: if the Euclidean distance between a projection point and a collection point is the smallest, the two are considered to correspond to the same marker point, i.e. they have a matching relationship. Once the matching relationship is determined, the Euclidean distance between the two can be calculated.
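The minimum-Euclidean-distance matching criterion can be sketched as a simple nearest-neighbour loop (an illustration only; a production system might add a distance cutoff or enforce one-to-one assignment, details the patent does not specify):

```python
import math

def match_points(projected, collected):
    """Match each projection point to the collection point with the minimum
    Euclidean distance (the matching criterion of step S100).

    projected, collected: lists of (x, y) tuples.
    Returns a list of (proj_index, coll_index, distance) triples."""
    matches = []
    for i, (px, py) in enumerate(projected):
        j, d = min(
            ((k, math.hypot(px - cx, py - cy))
             for k, (cx, cy) in enumerate(collected)),
            key=lambda kd: kd[1],
        )
        matches.append((i, j, d))
    return matches
```

The returned distances are exactly the Euclidean distances that steps S200-S400 compare against the first and second thresholds.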
  • Step S200 determine whether the Euclidean distance is greater than the preset second threshold, if yes, go to step S300, if not, go to step S400;
  • Step S300 if the Euclidean distance is greater than the preset second threshold, re-calibrate the camera, and return to step S100;
  • Camera calibration is performed again to obtain a new positional relationship between the cameras, and new three-dimensional coordinates of the marker points are obtained according to this new positional relationship; the new three-dimensional coordinates are projected into the camera to obtain the corresponding new first projected two-dimensional coordinates.
  • After re-calibrating the cameras, return to step S100 and repeat steps S100-S300, so that the Euclidean distance between the new first projected two-dimensional coordinates obtained through camera calibration and the first two-dimensional coordinates is not greater than the preset second threshold.
  • Step S300 ensures that the pre-calibrated data errors of all cameras in the swept space are within a reasonable range, i.e. do not exceed the preset second threshold; if the error exceeds the second threshold, the automatic calibration optimization of steps S400-S500 cannot proceed.
  • Step S400 if the Euclidean distance is greater than a preset first threshold and less than or equal to a preset second threshold, the corresponding camera is determined as the camera to be optimized, wherein the first threshold is less than the second threshold;
  • the corresponding camera is determined as a normal camera, and the normal camera is a camera that does not require calibration and optimization;
  • If the Euclidean distance does not exceed the preset second threshold, the calibration parameters of the multiple cameras in the space are usable for the automatic calibration optimization process. On this premise, it is then determined whether the Euclidean distance exceeds the preset first threshold, where the first threshold is smaller than the second threshold.
  • If the Euclidean distance is greater than the preset first threshold and less than or equal to the preset second threshold, the corresponding camera is a camera to be optimized.
  • If the Euclidean distance is less than or equal to the preset first threshold, the measurement error caused by the camera's current calibration parameters is still within a controllable range, so its calibration parameters need not be re-optimized; the camera is determined to be a normal camera.
  • Step S500 project the three-dimensional coordinates of the marker points obtained by the normal camera into the camera to be optimized to obtain the corresponding second projected two-dimensional coordinates, and match the second projected two-dimensional coordinates with the second two-dimensional coordinates received by the camera to be optimized , the optimized camera calibration results are obtained.
  • the gradient descent method is further used to obtain the optimized camera calibration result.
  • the specific method is as follows:
  • the projection difference between the second projected two-dimensional coordinates and the second two-dimensional coordinates is calculated, and the rotation matrix and the translation matrix are obtained by inverse projection according to the projected difference.
  • the method for obtaining the above-mentioned rotation matrix and translation matrix may specifically be:
  • the projection difference between the second projected two-dimensional coordinate and the second two-dimensional coordinate is calculated according to the matching relationship, and the rotation matrix and the translation matrix are obtained by inverse projection according to the projected difference.
  • For the method of projecting the three-dimensional marker-point coordinates obtained by the normal cameras into the camera to be optimized to obtain the corresponding second projected two-dimensional coordinates, refer to the method of obtaining the first projected two-dimensional coordinates in step S100 above; it is not described in detail here.
  • For the method of obtaining the second two-dimensional coordinates received by the camera to be optimized, refer to the method of obtaining the first two-dimensional coordinates in step S100 above, which is likewise not described in detail here.
  • The Euclidean distances between each projection point and the respective collection points are compared, and the projection point and collection point with the minimum Euclidean distance are determined to have a matching relationship. In this way, the matching relationship between projection points and collection points is obtained.
  • For example, suppose the projection points formed in one camera by marker point 1 through marker point n are L1(xa1, ya1), ..., Ln(xan, yan), and the collection points formed by image processing in the same camera are N1(xb1, yb1), ..., Nn(xbn, ybn). Calculate the Euclidean distances d1, d2, ..., dn between L1 and each of N1(xb1, yb1), ..., Nn(xbn, ybn); if d1 is the smallest of d1, d2, ..., dn, then d1 is the minimum Euclidean distance, indicating that projection point L1 and collection point N1 have a matching relationship.
  • the above calibration data includes rotation information and/or position information, and the rotation information and position information are respectively used to calibrate the rotation state and offset state of any camera relative to the space coordinate system.
  • The calibration data of a camera can include internal and external parameters. The internal parameters are usually unique, often composed of a parameter matrix (fx, fy, cx, cy) and distortion coefficients (three radial coefficients k1, k2, k3 and two tangential coefficients p1, p2).
  • the external parameters are usually not unique and are determined by the relative pose relationship between the camera and the space coordinate system, and are often composed of rotation matrices (eg, rotation matrix R3x3) and translation matrices (eg, Tx, Ty, Tz).
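As a sketch, the calibration data described above might be grouped as follows (the field names and defaults are illustrative; the patent specifies only the parameter categories, not a data layout):

```python
from dataclasses import dataclass, field

@dataclass
class CameraCalibration:
    """Per-camera calibration data. Intrinsics are usually fixed per camera;
    extrinsics depend on the camera's pose relative to the space coordinate
    system (rotation state and offset state)."""
    fx: float; fy: float; cx: float; cy: float          # parameter matrix
    k1: float = 0.0; k2: float = 0.0; k3: float = 0.0   # radial distortion
    p1: float = 0.0; p2: float = 0.0                    # tangential distortion
    # extrinsics: rotation matrix R (3x3) and translation (Tx, Ty, Tz)
    R: list = field(default_factory=lambda: [[1, 0, 0], [0, 1, 0], [0, 0, 1]])
    T: list = field(default_factory=lambda: [0.0, 0.0, 0.0])
```

The automatic optimization of step S500 only rewrites the extrinsic fields R and T; the intrinsics are left as calibrated.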
  • In step S500, the camera to be optimized can be data-isolated to eliminate its interference, and the three-dimensional coordinates of each marker point in the space coordinate system are obtained through the other, normal cameras.
  • The cameras to be optimized may be shielded one by one: the three-dimensional marker-point coordinates obtained by the normal cameras are projected into one camera to be optimized, the two-dimensional coordinates received by that camera are obtained, and all cameras to be optimized are then cyclically optimized in sequence. Alternatively, multiple cameras to be optimized may be shielded at the same time: the three-dimensional marker-point coordinates obtained by the normal cameras are projected into the multiple cameras to be optimized, the two-dimensional coordinates received by those cameras are obtained, and the multiple cameras are optimized in one pass.
  • In either case the camera to be optimized does not participate in the calculation of the marker points' three-dimensional coordinates; the three-dimensional marker-point coordinates obtained by the normal cameras are projected into the camera to be optimized to obtain the corresponding second projected two-dimensional coordinates, which are matched against the second two-dimensional coordinates received by that camera to obtain the optimized camera calibration result.
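The gradient-descent refinement of step S500 can be illustrated with a deliberately simplified numeric sketch that updates only the translation component using a finite-difference gradient of the reprojection error (an assumption made for brevity: a full implementation would also update the rotation matrix, typically via an axis-angle or quaternion parameterization, and would use analytic Jacobians):

```python
import numpy as np

def refine_translation(points3d, observed2d, K, R, t0, lr=1e-5, iters=200):
    """Refine a camera's translation t by gradient descent on the squared
    reprojection error (a simplified sketch of step S500).

    points3d: 3D marker coordinates from the normal cameras.
    observed2d: matched 2D coordinates received by the camera to be optimized.
    K, R, t0: current intrinsics, rotation, and translation of that camera."""
    t = t0.astype(float).copy()
    for _ in range(iters):
        grad = np.zeros(3)
        for W, obs in zip(points3d, observed2d):
            uvw = K @ (R @ W + t)
            err = uvw[:2] / uvw[2] - obs          # projection difference
            # numerical gradient of the squared error w.r.t. each component of t
            for a in range(3):
                dt = np.zeros(3); dt[a] = 1e-6
                uvw2 = K @ (R @ W + t + dt)
                err2 = uvw2[:2] / uvw2[2] - obs
                grad[a] += (err2 @ err2 - err @ err) / 1e-6
        t -= lr * grad / len(points3d)            # descend the mean gradient
    return t
```

Each pass computes the projection difference for every matched marker point and steps the calibration data against the gradient, mirroring the loop "recompute the current projection difference until it is less than the preset threshold".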
  • the camera automatic calibration optimization method of the present application further includes a judging step and an ending step.
  • Judgment step: repeat steps S100-S500 iteratively until the calculated Euclidean distance is less than or equal to the preset first threshold. That is, if the Euclidean distance is less than or equal to the preset first threshold, all cameras are normal cameras and no further automatic calibration optimization is required, so the ending step is entered; otherwise, return to step S100.
  • Ending step: stop the automatic calibration and optimization process, i.e. the automatic calibration optimization of the cameras to be optimized is complete.
  • Embodiment 2:
  • The present application also discloses an optical motion capture system, which includes a plurality of marker points to be captured, a plurality of cameras for photographing the marker points, and a processor 12.
  • A plurality of marker points are set on one or more captured objects 11 in the motion space, as shown in FIG. 4.
  • A plurality of cameras (e.g., camera 1, camera 2, ..., camera i, ..., camera m, 1 ≤ i ≤ m) are distributed in the preset motion space and are all communicatively connected to the processor 12, so as to photograph the marker points on the captured objects.
  • The marker points mentioned in this embodiment may be reflective or fluorescent marker points commonly used in optical motion capture systems for configuring rigid bodies, or may be the light-emitting sources of active optical rigid bodies.
  • The processor 12 is configured to periodically calibrate each camera according to the camera automatic calibration optimization method disclosed in the first embodiment. For example, the working state of each camera is periodically judged according to steps S100-S400; if camera 1 is determined to be a camera to be optimized, new calibration data for camera 1 is calculated according to step S500 and the current calibration data is updated, until the Euclidean distance obtained in the last iteration is not greater than the preset first threshold, after which the ending step is entered and the automatic calibration optimization process ends.
  • FIG. 5 is a schematic structural diagram of a camera calibration optimization processing device provided by an embodiment of the present invention.
  • The camera calibration processing device 500 may vary greatly with configuration or performance, and may include one or more processors (central processing units, CPUs) 510 (e.g., one or more processors), a memory 520, and one or more storage media 530 (e.g., one or more mass storage devices) storing applications 533 or data 532.
  • the memory 520 and the storage medium 530 may be short-term storage or persistent storage.
  • the program stored in the storage medium 530 may include one or more modules (not shown in the figure), and each module may include a series of instruction operations on the camera calibration processing device 500 .
  • the processor 510 may be configured to communicate with the storage medium 530 to execute a series of instruction operations in the storage medium 530 on the camera calibration processing device 500 .
  • The camera calibration processing device 500 may also include one or more power supplies 540, one or more wired or wireless network interfaces 550, one or more input/output interfaces 560, and/or one or more operating systems 531, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and more.
  • The present application also provides a storage medium, which may be non-volatile or volatile; a camera calibration optimization processing program is stored in the storage medium, and when executed by a processor it implements the steps of the camera calibration optimization processing method described above.
  • For the method implemented and the beneficial effects achieved when the camera calibration optimization processing program runs on the above processor, refer to the embodiments of the camera calibration optimization processing method of the present application; they are not repeated here.
  • The program can also be stored in a server, another computer, a magnetic disk, an optical disc, a flash drive, a removable hard disk, or other storage media, and saved by downloading or copying; when the program in memory is executed by a processor, all or part of the functions in the above embodiments can be implemented.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A camera automatic calibration optimization method and a related system and device. The camera automatic calibration optimization method comprises: matching first projected two-dimensional coordinates of multiple marker points in space against first two-dimensional coordinates of the multiple marker points received by a camera, computing the Euclidean distance between the two, determining the camera to be optimized by judging whether the Euclidean distance falls within preset thresholds, and deriving an optimized camera calibration result. By continuously optimizing the calibration state of the cameras, the function of automatic calibration is achieved, which helps keep the system in an optimal calibration state at all times and thereby improves the accuracy and smoothness of system operation. In addition, after the camera to be optimized is identified, it is placed in a masked state; in this way, the normal operation of the other cameras is not affected, and the normally working cameras can support the camera to be optimized in performing automatic calibration.

Description

Camera automatic calibration optimization method and related system and device — Technical Field
The present invention relates to the technical field of camera calibration, and in particular to a camera automatic calibration optimization method and a related system and device.
Background
In image measurement and machine-vision applications, in order to determine the relationship between the three-dimensional geometric position of a point on the surface of an object in space and its corresponding point in an image, a geometric model of camera imaging must be established; the parameters of this geometric model are the camera parameters. In most cases these parameters can only be obtained through experiment and computation, and this process of solving for the parameters is called camera calibration.
For example, an optical motion capture system collects optical images of moving objects with multiple cameras. During optical motion capture, the tracking and positioning software applies the principles of computer multi-view vision: from the matching relationships between the two-dimensional point clouds of the images and the relative positions and orientations of the cameras, it computes the coordinates and directions of the point cloud in the three-dimensional capture space. Based on the three-dimensional coordinates of the point cloud, rigid-body structures bound to different parts of a moving object are identified, the position and orientation of each rigid body in the motion space are solved, and the trajectory of the moving object in that space is thereby determined. To compute the point-cloud coordinates and rigid-body poses in the three-dimensional capture space accurately, the state of every camera and the relative positional relationships among the cameras must be determined before the motion capture system runs; this requires camera calibration.
Whether in optical motion capture, image measurement, or machine-vision applications, camera parameter calibration is a critical step: the accuracy of the calibration result and the stability of the algorithm directly affect the accuracy of everything the cameras produce. The precision of camera calibration even directly determines the capture precision of the entire optical motion capture system, where a tiny initial error can propagate into a large one; good camera calibration is therefore a prerequisite for all subsequent work.
However, in the application of optical motion capture systems, camera calibration still faces the following problems: (1) the operating environment changes constantly, for example temperature differences between morning and evening, which affects a camera's own state, i.e., the camera intrinsics; (2) the installation environment inevitably suffers vibrations, which shift cameras from their initial mounting positions and change the positional relationships among them, i.e., the camera extrinsics; (3) in practice the system cannot be recalibrated at any moment, as that wastes a great deal of time and greatly reduces the smoothness of system operation.
Summary of the Invention
The technical problem mainly solved by the present invention is how to perform timely automatic calibration of the multiple cameras in a computer-vision system. To solve the above technical problem, the present application provides a camera automatic calibration optimization method and a related system and device.
According to a first aspect, an embodiment provides a camera automatic calibration optimization method, comprising the following steps:
S1: matching first projected two-dimensional coordinates of multiple marker points in space against first two-dimensional coordinates of the multiple marker points received by a camera, and computing the Euclidean distance between the two, the first projected two-dimensional coordinates being obtained by projecting the marker-point three-dimensional coordinates obtained through camera calibration into the camera;
S2: determining whether the Euclidean distance is greater than a preset second threshold; if so, proceeding to step S3; if not, proceeding to step S4;
S3: if the Euclidean distance is greater than the preset second threshold, performing the camera calibration again and returning to step S1;
S4: if the Euclidean distance is greater than a preset first threshold and less than or equal to the preset second threshold, determining the corresponding camera to be a camera to be optimized, wherein the first threshold is smaller than the second threshold;
if the Euclidean distance is less than or equal to the preset first threshold, determining the corresponding camera to be a normal camera, a normal camera being a camera that needs no calibration optimization;
S5: projecting the marker-point three-dimensional coordinates obtained by the normal cameras into the camera to be optimized to obtain corresponding second projected two-dimensional coordinates, and matching the second projected two-dimensional coordinates against second two-dimensional coordinates received by the camera to be optimized to derive an optimized camera calibration result.
Step S3 comprises:
if the Euclidean distance is greater than the preset second threshold, performing the camera calibration again to obtain new positional relationships among the cameras, obtaining new three-dimensional coordinates of the marker points according to the new positional relationships, and projecting the new three-dimensional coordinates into the cameras to obtain corresponding new first projected two-dimensional coordinates;
returning to step S1, so that the Euclidean distance between the new first projected two-dimensional coordinates obtained through camera calibration and the first two-dimensional coordinates is not greater than the preset second threshold.
Step S5 comprises deriving the optimized camera calibration result by gradient descent, specifically:
computing, according to the matching relationship between the second projected two-dimensional coordinates and the second two-dimensional coordinates, the projection difference between the two, and back-projecting according to the projection difference to obtain a rotation matrix and a translation matrix;
updating the rotation matrix and translation matrix as the camera's current calibration data, and iteratively recomputing the current projection difference according to the current calibration data until the current projection difference is smaller than a preset threshold, whereupon the corresponding rotation matrix and translation matrix are taken as the optimized camera calibration result.
Computing, according to the matching relationship between the second projected two-dimensional coordinates and the second two-dimensional coordinates, the projection difference between the two, and back-projecting according to the projection difference to obtain the rotation matrix and translation matrix, comprises:
computing Euclidean distance values between the second projected two-dimensional coordinates of the multiple marker points and the second two-dimensional coordinates, wherein the smaller the Euclidean distance value, the more certain it is that a matching relationship exists between the corresponding second projected two-dimensional coordinates and second two-dimensional coordinates;
computing the projection difference between the second projected two-dimensional coordinates and the second two-dimensional coordinates according to the matching relationship, and back-projecting according to the projection difference to obtain the rotation matrix and translation matrix.
The calibration data comprise rotation information and/or position information, the rotation information and the position information being used, respectively, to calibrate the rotation state and the offset state of any camera relative to the spatial coordinate system.
Step S5 further comprises:
masking each camera to be optimized one by one, or masking multiple cameras to be optimized simultaneously, so that the cameras to be optimized do not participate in computing the marker-point three-dimensional coordinates; projecting the marker-point three-dimensional coordinates obtained by the normal cameras into the cameras to be optimized to obtain corresponding second projected two-dimensional coordinates; and matching the second projected two-dimensional coordinates against the second two-dimensional coordinates received by the cameras to be optimized to derive the optimized camera calibration result.
After step S5, the method further comprises:
a judging step: repeating steps S1-S5 multiple times for iterative updating, until it is determined whether the computed Euclidean distance is less than or equal to the preset first threshold; if so, proceeding to the ending step; if not, proceeding to step S1;
an ending step: stopping the camera automatic calibration optimization process, i.e., the automatic calibration optimization of the cameras to be optimized is complete.
According to a second aspect, an embodiment provides an optical motion capture system, comprising multiple marker points to be captured and multiple cameras that photograph the marker points, and further comprising a processor;
the multiple marker points are arranged on a preset rigid body;
the multiple cameras are distributed in a preset motion space and are all communicatively connected to the processor so as to photograph the marker points on the rigid body;
the processor is configured to periodically calibrate each of the cameras according to the camera automatic calibration optimization method of the first aspect.
According to a third aspect, an embodiment provides a camera automatic calibration optimization processing device, comprising a memory and at least one processor, the memory storing instructions, and the memory and the at least one processor being interconnected by a line;
the at least one processor invokes the instructions in the memory so that the camera automatic calibration optimization processing device executes the camera automatic calibration optimization processing method of the first aspect.
According to a fourth aspect, an embodiment provides a computer-readable storage medium comprising a program executable by a processor to implement the method of the first aspect.
The beneficial effects of the present application are as follows:
According to the camera automatic calibration optimization method and the related system and device of the above embodiments, the method comprises: matching the first projected two-dimensional coordinates of multiple marker points in space against the first two-dimensional coordinates of the marker points received by a camera and computing the Euclidean distance between the two; determining whether the Euclidean distance is greater than a preset second threshold; if so, performing camera calibration again; if not, performing automatic optimization as follows: if the Euclidean distance is greater than a preset first threshold and less than or equal to the preset second threshold, determining the corresponding camera to be a camera to be optimized, and if the Euclidean distance is less than or equal to the preset first threshold, determining the corresponding camera to be a normal camera; projecting the marker-point three-dimensional coordinates obtained by the normal cameras into the camera to be optimized to obtain corresponding second projected two-dimensional coordinates, and matching the second projected two-dimensional coordinates against the second two-dimensional coordinates received by the camera to be optimized to derive an optimized camera calibration result. In this way, without affecting the normal operation of the camera system, matching data between the projected two-dimensional coordinates and the camera two-dimensional coordinates are collected to judge whether the current calibration information is correct; if an obvious deviation exists, the system's current camera calibration data are corrected automatically, which improves the accuracy and smoothness of system operation.
Brief Description of the Drawings
Fig. 1 is a flowchart of the camera automatic calibration optimization method;
Fig. 2 is a schematic diagram of projection points formed by projection;
Fig. 3 is a schematic diagram of acquisition points formed by image processing;
Fig. 4 is a schematic structural diagram of the optical motion capture system;
Fig. 5 is a schematic structural diagram of an embodiment of the camera calibration processing device.
Detailed Description
The present invention is described in further detail below through specific embodiments with reference to the drawings. Similar elements in different embodiments use associated similar reference numbers. In the following embodiments, many details are described so that the present application can be better understood. However, those skilled in the art will readily recognize that some of these features may be omitted in different cases, or may be replaced by other elements, materials, or methods. In some cases, certain operations related to the present application are not shown or described in the specification, in order to avoid the core of the present application being overwhelmed by excessive description; for those skilled in the art, a detailed description of these operations is unnecessary, as they can fully understand them from the description in the specification together with general technical knowledge in the field.
In addition, the features, operations, or characteristics described in the specification may be combined in any suitable manner to form various embodiments. Meanwhile, the steps or actions in the method description may be reordered or adjusted in ways obvious to those skilled in the art. Therefore, the various orders in the specification and drawings serve only to describe a particular embodiment clearly and do not imply a required order, unless it is otherwise stated that a certain order must be followed.
The ordinal numbers assigned to components herein, such as "first" and "second", are used only to distinguish the described objects and carry no ordering or technical meaning. "Connection" and "coupling" in this application, unless otherwise specified, include both direct and indirect connection (coupling).
The inventive concept of the present application is as follows: in computer-vision systems, and in optical motion capture systems in particular, camera calibration cannot be performed in time when a camera's intrinsic or extrinsic parameters change. The present application therefore proposes the concept and method of "automatic calibration": without affecting the normal operation of the optical motion capture system, matching data between three-dimensional spatial coordinates and camera two-dimensional coordinates are collected to judge whether a camera's current calibration information is correct; if an obvious deviation exists, the system's current camera calibration file is corrected automatically. The aim is to determine, in real time, the relationship between the three-dimensional geometric position of a point on an object's surface in space and its corresponding point in the image, and to establish the geometric model of camera imaging (whose parameters are the camera intrinsics and extrinsics), thereby guaranteeing positioning accuracy and user experience. Automatic calibration helps improve the accuracy and smoothness of system operation and, to a certain extent, spares users from having to recalibrate the cameras from time to time, saving their time. One point must be made clear: the role of automatic calibration is to detect errors in the cameras' existing calibration data while the system is running and then adjust and optimize them automatically; when a large number of cameras have shifted position, only a full re-scan calibration of the field can solve the problem.
Embodiment 1:
Referring to Fig. 1, the present application discloses a camera automatic calibration optimization method for calibrating multiple cameras in a computer-vision system. The claimed camera automatic calibration optimization method comprises steps S100-S500, described below.
Step S100: match first projected two-dimensional coordinates of multiple marker points in space against first two-dimensional coordinates of the multiple marker points received by a camera, and compute the Euclidean distance between the two; the first projected two-dimensional coordinates are obtained by projecting the marker-point three-dimensional coordinates obtained through camera calibration into the camera.
In one embodiment, step S100 is implemented as follows:
compute the three-dimensional coordinates of each marker point in the spatial coordinate system from the images captured by the cameras;
It should be noted that, for example in an optical motion capture system, the multiple cameras continuously capture images of multiple marker points (e.g., multiple points on one or more capture objects), a spatial coordinate system (also called the world coordinate system) of the motion space containing the capture objects is established, and the spatial coordinates of the capture objects in that coordinate system are then obtained by processing multiple images captured at the same moment. Since obtaining spatial coordinates from multiple images is a common technique in optical motion capture, it is not described in detail here.
project the three-dimensional coordinates of each marker point into each camera according to the camera's current calibration parameters, obtaining the first projected two-dimensional coordinates formed by each marker point in each camera's camera coordinate system, where the current calibration parameters are known parameter data obtained by scanning the field in advance, i.e., by prior camera calibration, from which the three-dimensional coordinates of the marker points can be determined;
In a specific embodiment, see Fig. 2: a spatial coordinate system Xw-Yw-Zw and a camera coordinate system Xc-Yc of one camera are constructed, and the three-dimensional coordinate W1 of one marker point is obtained in the spatial coordinate system. Projecting the three-dimensional coordinate W1 into that camera's coordinate system forms a projection point C1. Because the two coordinate systems differ in rotation angle and offset position, the camera's current calibration parameters are used in the projection, so that the three-dimensional coordinate W1 yields the projection point C1 under those parameters. When multiple marker points exist in the spatial coordinate system, the three-dimensional coordinates of each are projected into the camera's coordinate system, each forming a projection point similar to C1. What is described here is projecting the three-dimensional coordinate W1 into the camera coordinate system Xc-Yc; at the same time, other three-dimensional coordinates are projected into other camera coordinate systems, forming a set of multiple projection points. Moreover, because the three-dimensional coordinates of each marker point are projected onto the Xc-Yc plane of the camera coordinate system, the projection points are all two-dimensional coordinate data, i.e., a forward projection from three-dimensional coordinates to two-dimensional coordinates is achieved.
It should be noted that the spatial coordinate system is a mapping of real objects in space, with its origin usually denoted Ow. The camera coordinate system takes the camera's optical axis as the Z axis, with the origin Oc at the center of the camera's optical system (in effect, the center of the lens); its horizontal axis Xc and vertical axis Yc are not parallel to the corresponding axes of the spatial coordinate system but differ by a certain rotation and a certain translation.
obtain the first two-dimensional coordinates of the multiple marker points, where the first two-dimensional coordinates are the two-dimensional coordinates of the two-dimensional points formed by each camera through image processing of the captured marker points, i.e., the two-dimensional coordinates formed in the camera coordinate system when each marker point is mapped directly into each camera;
In a specific embodiment, see Fig. 3: a camera coordinate system of one camera is constructed. When the camera images a marker point, the marker point is mapped directly onto the camera's lens by light rays, forming an acquisition point (i.e., a marker point received by the camera) in the camera's coordinate system; the acquisition point appears as a two-dimensional point, e.g., Kc, in the captured image. Moreover, because the marker point is mapped directly onto the Xc-Yc plane of the camera coordinate system, the resulting acquisition point Kc is two-dimensional coordinate data.
match the first projected two-dimensional coordinates against the first two-dimensional coordinates and compute the Euclidean distance between them; in a specific embodiment, when the minimum Euclidean distance exists between a first projected two-dimensional coordinate and a first two-dimensional coordinate, the two coordinates are considered to have a matching relationship.
It should be noted that the first projected two-dimensional coordinates formed in the camera coordinate system include the projection coordinates of every marker point, and the first two-dimensional coordinates mapped in the camera coordinate system also include the mapping coordinates of every marker point, but the correspondence between the projection coordinate and the mapping coordinate of any given marker point cannot yet be established. The minimum Euclidean distance is therefore used as the criterion: if the Euclidean distance between a projection point and an acquisition point is the smallest, the two points are considered to correspond to the same marker point, i.e., a matching relationship exists between them. Once the matching relationship is determined, the Euclidean distance between the two can be computed.
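The forward projection described above can be sketched under a simple pinhole model. The intrinsic matrix K and the extrinsic rotation R and translation t stand in for the camera's "current calibration parameters"; all names and numeric values here are illustrative, not taken from the patent:

```python
import numpy as np

def project_point(K, R, t, Xw):
    """Project a 3D world point Xw into a camera's 2D image plane.

    K : 3x3 intrinsic matrix, R : 3x3 rotation, t : 3-vector translation.
    Returns the 2D pixel coordinates (the 'projection point', e.g. C1).
    """
    Xc = R @ Xw + t          # world coordinates -> camera coordinates
    x, y, z = Xc
    return np.array([K[0, 0] * x / z + K[0, 2],
                     K[1, 1] * y / z + K[1, 2]])

# Example: identity pose, marker 2 m in front of the camera
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
W1 = np.array([0.1, -0.05, 2.0])
C1 = project_point(K, R, t, W1)
```

Repeating this call for every marker point and every camera yields the set of projection points described in the paragraph above.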
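The minimum-Euclidean-distance matching rule above can be sketched as a nearest-neighbour search (names illustrative): each projection point is paired with the acquisition point at minimum Euclidean distance, and that distance is reported.

```python
import numpy as np

def match_points(projected, acquired):
    """Match each projected 2D point to its nearest acquired 2D point.

    Returns a list of (proj_index, acq_index, distance) triples, using
    minimum Euclidean distance as the matching criterion.
    """
    matches = []
    for i, p in enumerate(projected):
        d = np.linalg.norm(acquired - p, axis=1)  # distance to every acquisition point
        j = int(np.argmin(d))
        matches.append((i, j, float(d[j])))
    return matches

projected = np.array([[100.0, 100.0], [200.0, 50.0]])
acquired = np.array([[201.0, 51.0], [101.0, 99.0]])
# projection 0 matches acquisition 1; projection 1 matches acquisition 0
```

A real system would also need to handle occluded markers (no plausible match within some gate distance), which this sketch omits.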
Step S200: determine whether the Euclidean distance is greater than the preset second threshold; if so, proceed to step S300; if not, proceed to step S400.
Step S300: if the Euclidean distance is greater than the preset second threshold, perform the camera calibration again and return to step S100.
It should be noted that if a camera has changed drastically, i.e., when a large number of cameras have shifted position, the field must be re-scanned and recalibrated before the automatic calibration optimization step can proceed.
Re-scanning the field is equivalent to treating all camera positions as unknown; recalibrating the positional relationships among the cameras from this unknown state overcomes the problem of excessive calibration-parameter error, after which calibration optimization can proceed.
That is, if the Euclidean distance is greater than the preset second threshold, camera calibration is performed again to obtain new positional relationships among the cameras; new three-dimensional coordinates of the marker points are obtained from the new positional relationships and projected into the cameras to obtain corresponding new first projected two-dimensional coordinates.
After recalibrating, return to step S100 and repeat steps S100-S300, so that the Euclidean distance between the new first projected two-dimensional coordinates obtained through camera calibration and the first two-dimensional coordinates is not greater than the preset second threshold.
It should be noted that step S300 ensures that the prior calibration-data error of all cameras in the scanned space stays within a reasonable threshold range, i.e., does not exceed the preset second threshold; if it does, the automatic calibration optimization of steps S400-S500 cannot proceed.
Step S400: if the Euclidean distance is greater than the preset first threshold and less than or equal to the preset second threshold, determine the corresponding camera to be a camera to be optimized, wherein the first threshold is smaller than the second threshold;
if the Euclidean distance is less than or equal to the preset first threshold, determine the corresponding camera to be a normal camera, i.e., a camera that needs no calibration optimization.
If the Euclidean distance does not exceed the preset second threshold, the calibration parameters of the multiple cameras in the space can be used for the automatic calibration optimization process; on this premise, it is then determined whether the Euclidean distance exceeds the preset first threshold, wherein the first threshold is smaller than the second threshold.
If the Euclidean distance is greater than the preset first threshold and less than or equal to the preset second threshold, the corresponding camera is a camera to be optimized.
If the Euclidean distance is less than or equal to the preset first threshold, the measurement error caused by the camera's current calibration parameters is considered still within a controllable range, so the camera's calibration parameters need not be recalibrated and optimized, i.e., the camera is determined to be a normal camera.
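The two-threshold decision of steps S200-S400 can be sketched as a small classifier over each camera's reprojection Euclidean distance (the threshold values below are illustrative; the patent does not fix them):

```python
def classify_camera(distance, t1, t2):
    """Classify a camera from its reprojection Euclidean distance.

    distance <= t1        -> 'normal'      (no optimization needed)
    t1 < distance <= t2   -> 'optimize'    (automatic-calibration candidate)
    distance > t2         -> 'recalibrate' (field must be re-scanned)
    Requires t1 < t2, as the method specifies.
    """
    assert t1 < t2, "first threshold must be smaller than second"
    if distance > t2:
        return "recalibrate"
    if distance > t1:
        return "optimize"
    return "normal"

# Example with illustrative thresholds of 0.5 px and 3.0 px
states = [classify_camera(d, 0.5, 3.0) for d in (0.2, 1.7, 4.1)]
```

In practice the distance fed in would be an aggregate (e.g., a mean over matched marker points) rather than a single measurement.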
Step S500: project the marker-point three-dimensional coordinates obtained by the normal cameras into the camera to be optimized to obtain corresponding second projected two-dimensional coordinates, and match the second projected two-dimensional coordinates against the second two-dimensional coordinates received by the camera to be optimized to derive the optimized camera calibration result.
This step further derives the optimized camera calibration result by gradient descent, specifically:
according to the matching relationship between the second projected two-dimensional coordinates and the second two-dimensional coordinates, compute the projection difference between the two, and back-project according to the projection difference to obtain a rotation matrix and a translation matrix;
In one embodiment, the rotation matrix and translation matrix may be obtained as follows:
compute Euclidean distance values between the second projected two-dimensional coordinates of the multiple marker points and the second two-dimensional coordinates; the smaller the Euclidean distance value, the more certain it is that a matching relationship exists between the corresponding second projected two-dimensional coordinates and second two-dimensional coordinates;
compute the projection difference between the second projected two-dimensional coordinates and the second two-dimensional coordinates according to the matching relationship, and back-project according to the projection difference to obtain the rotation matrix and translation matrix.
Update the rotation matrix and translation matrix as the camera's current calibration data, and iteratively recompute the current projection difference from the current calibration data until the current projection difference is smaller than a preset threshold, whereupon the corresponding rotation matrix and translation matrix are taken as the optimized camera calibration result.
It should be noted that the method of projecting the marker-point three-dimensional coordinates obtained by the normal cameras into the camera to be optimized to obtain the corresponding second projected two-dimensional coordinates may follow the method of obtaining the first projected two-dimensional coordinates in step S100 above, and is not detailed again here. Likewise, the method of obtaining the second two-dimensional coordinates received by the camera to be optimized may follow the method of obtaining the first two-dimensional coordinates in step S100 above, and is not detailed again here.
Compute the Euclidean distance between the second projected two-dimensional coordinates and the second two-dimensional coordinates; for each projection point, determine the matching relationship between the second projected two-dimensional coordinates and the second two-dimensional coordinates by the rule that the acquisition point with the smallest Euclidean distance to the projection point is taken as that projection point's matching point.
In one embodiment, the Euclidean distances between each projection point and every acquisition point are compared, and a matching relationship is determined between the projection point and the acquisition point at minimum Euclidean distance. The matching relationships between projection points and acquisition points are thus obtained.
For example, suppose marker points 1 to n form projection points L1(x_a1, y_a1), ..., Ln(x_an, y_an) in a camera, and form acquisition points N1(x_b1, y_b1), ..., Nn(x_bn, y_bn) in that camera through image processing. Compute the Euclidean distances d1, d2, ..., dn between L1 and each of N1(x_b1, y_b1), ..., Nn(x_bn, y_bn). If d1 is the smallest among d1, d2, ..., dn, then d1 is the minimum Euclidean distance, indicating a matching relationship between projection point L1 and acquisition point N1.
The above calibration data comprise rotation information and/or position information, used respectively to calibrate the rotation state and the offset state of any camera relative to the spatial coordinate system.
A camera's calibration data may comprise intrinsic and extrinsic parameters. The intrinsic parameters are usually unique, typically consisting of a parameter matrix (f_x, f_y, c_x, c_y) and distortion coefficients (including three radial coefficients k1, k2, k3 and two tangential coefficients p1, p2). The extrinsic parameters are usually not unique; they are determined by the relative pose between the camera and the spatial coordinate system, and typically consist of a rotation matrix (e.g., a 3x3 rotation matrix R) and a translation matrix (e.g., Tx, Ty, Tz).
In this embodiment, the calibration data comprise rotation information and/or position information, the rotation information and the position information being used, respectively, to calibrate the rotation state and the offset state of any camera relative to the spatial coordinate system.
When performing step S500, the cameras to be optimized may be data-isolated to exclude their interference, and the three-dimensional coordinates of each marker point in the spatial coordinate system are obtained from the other, normal cameras.
In one embodiment, each camera to be optimized is masked one at a time: the marker-point three-dimensional coordinates obtained by the normal cameras are projected into one camera to be optimized, the two-dimensional coordinates received by that camera are obtained, and all cameras to be optimized are then cyclically optimized in turn. Alternatively, multiple cameras to be optimized are masked simultaneously: the marker-point three-dimensional coordinates obtained by the normal cameras are projected into the multiple cameras to be optimized, the two-dimensional coordinates received by them are obtained, and the multiple cameras are optimized in a single pass. In either case the cameras to be optimized do not participate in computing the marker-point three-dimensional coordinates; the marker-point three-dimensional coordinates obtained by the normal cameras are projected into the cameras to be optimized to obtain corresponding second projected two-dimensional coordinates, which are matched against the second two-dimensional coordinates received by the cameras to be optimized to derive the optimized camera calibration result.
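The iterate-until-below-threshold loop can be sketched as a gradient descent on reprojection error. For brevity this sketch refines only the translation part of the extrinsics with numerical gradients; a full implementation would also parameterize and refine the rotation. All names, step sizes, and thresholds are illustrative assumptions:

```python
import numpy as np

def reproj_error(t, K, R, pts3d, pts2d):
    """Mean squared pixel error between projected and observed 2D points."""
    err = 0.0
    for Xw, uv in zip(pts3d, pts2d):
        Xc = R @ Xw + t
        proj = np.array([K[0, 0] * Xc[0] / Xc[2] + K[0, 2],
                         K[1, 1] * Xc[1] / Xc[2] + K[1, 2]])
        err += float(np.sum((proj - uv) ** 2))
    return err / len(pts3d)

def refine_translation(t0, K, R, pts3d, pts2d, lr=3e-6, tol=0.25, iters=2000):
    """Gradient-descent refinement of the translation part of the extrinsics.

    Central-difference numerical gradients are used for brevity; the loop
    stops once the reprojection error drops below the preset threshold `tol`.
    The step size `lr` depends on focal length and scene depth and would need
    tuning in a real system.
    """
    t = np.asarray(t0, dtype=float).copy()
    for _ in range(iters):
        if reproj_error(t, K, R, pts3d, pts2d) < tol:
            break
        grad = np.zeros(3)
        for k in range(3):
            dt = np.zeros(3)
            dt[k] = 1e-6
            grad[k] = (reproj_error(t + dt, K, R, pts3d, pts2d)
                       - reproj_error(t - dt, K, R, pts3d, pts2d)) / 2e-6
        t -= lr * grad
    return t

# Illustrative setup: four markers about 2 m from the camera, true offset t_true
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R = np.eye(3)
pts3d = [np.array(p) for p in ([0.0, 0.0, 2.0], [0.5, 0.0, 2.0],
                               [0.0, 0.5, 2.0], [0.5, 0.5, 2.0])]
t_true = np.array([0.02, -0.01, 0.0])
pts2d = []
for X in pts3d:
    Xc = R @ X + t_true
    pts2d.append(np.array([800.0 * Xc[0] / Xc[2] + 320.0,
                           800.0 * Xc[1] / Xc[2] + 240.0]))
t_opt = refine_translation(np.zeros(3), K, R, pts3d, pts2d)
```

Production systems typically use an analytic Jacobian and a least-squares solver rather than raw numerical gradient descent, but the stopping logic is the same: iterate until the projection difference falls below the preset threshold.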
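As a sketch of how the two parameter groups combine (symbols as in the paragraph above; the pinhole model here ignores the distortion coefficients k1, k2, k3, p1, p2):

```python
import numpy as np

def projection_matrix(fx, fy, cx, cy, R, t):
    """Compose the 3x4 projection matrix P = K [R | t] from the intrinsic
    parameters (fx, fy, cx, cy) and the extrinsic pose (R, t).
    Lens distortion is ignored in this sketch.
    """
    K = np.array([[fx, 0.0, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]])
    Rt = np.hstack([R, t.reshape(3, 1)])
    return K @ Rt

P = projection_matrix(800.0, 800.0, 320.0, 240.0,
                      np.eye(3), np.array([0.0, 0.0, 0.1]))
# Homogeneous projection of a world point on the optical axis:
Xw_h = np.array([0.0, 0.0, 2.0, 1.0])
u, v, w = P @ Xw_h
pixel = (u / w, v / w)
```

Because the intrinsics K are fixed per camera while [R | t] changes whenever the camera moves, it is the extrinsic block that the automatic calibration described here updates.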
In another embodiment, to obtain a better automatic calibration optimization effect, the camera automatic calibration optimization method of the present application further comprises a judging step and an ending step.
Judging step: repeat steps S100 to S500 multiple times for iterative updating, until it is determined whether the computed Euclidean distance is less than or equal to the preset first threshold. That is, if the Euclidean distance is less than or equal to the preset first threshold, all cameras are normal cameras and no automatic calibration optimization is needed, so proceed to the ending step; otherwise, proceed to step S100.
Ending step: stop the camera automatic calibration optimization process, i.e., the automatic calibration optimization of the cameras to be optimized is complete.
Embodiment 2:
Referring to Fig. 4, on the basis of the camera automatic calibration optimization method claimed in Embodiment 1, the present application further discloses an optical motion capture system, which comprises not only multiple marker points to be captured and multiple cameras that image the marker points, but also a processor 12.
The multiple marker points are arranged on one or more capture objects 11 in the motion space, as shown in Fig. 4. Multiple cameras (e.g., camera 1, camera 2, ..., camera i, ..., camera m, 1<i<m) are distributed in the motion space and are all communicatively connected to the processor 12 to image the marker points on the capture objects.
It should be noted that the marker points mentioned in this embodiment may be the reflective or fluorescent marker points commonly used to configure rigid bodies in optical motion capture systems, or marker points for the light sources of active-light rigid bodies.
The processor 12 is configured to periodically calibrate each camera according to the camera automatic calibration optimization method disclosed in Embodiment 1. For example, the working state of each camera is judged periodically according to steps S100-S400; if camera 1 is judged to be a camera to be optimized, new calibration data for camera 1 are computed according to step S500 and the current calibration data are updated, until the error of the new calibration data obtained in the last iteration is not greater than the preset first threshold, after which the ending step is entered to conclude the camera automatic calibration optimization process.
Referring to Fig. 5, Fig. 5 is a schematic structural diagram of the camera calibration optimization processing device provided by an embodiment of the present invention. The camera calibration processing device 500 may vary considerably with configuration or performance, and may include one or more central processing units (CPUs) 510 (e.g., one or more processors), a memory 520, and one or more storage media 530 (e.g., one or more mass storage devices) storing application programs 533 or data 532. The memory 520 and the storage medium 530 may be transient or persistent storage. The programs stored in the storage medium 530 may include one or more modules (not shown in the figure), each of which may include a series of instruction operations on the camera calibration processing device 500. Further, the processor 510 may be configured to communicate with the storage medium 530 to execute the series of instruction operations in the storage medium 530 on the camera calibration processing device 500.
The camera calibration processing device 500 may also include one or more power supplies 540, one or more wired or wireless network interfaces 550, one or more input/output interfaces 560, and/or one or more operating systems 531, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and so on. Those skilled in the art will understand that the structure of the camera calibration processing device shown in Fig. 5 does not limit the device, which may include more or fewer components than shown, combine certain components, or arrange components differently.
The present application further provides a storage medium, which may be a non-volatile storage medium or a volatile storage medium, storing a camera calibration optimization processing program that, when executed by a processor, implements the steps of the camera calibration optimization processing method described above.
For the method implemented, and the beneficial effects achieved, when the camera calibration optimization processing program running on the above processor is executed, refer to the embodiments of the camera calibration optimization processing method of the present application; they are not repeated here.
Those skilled in the art will understand that all or part of the functions of the various methods in the above embodiments may be implemented in hardware or by computer programs. When all or part of the functions in the above embodiments are implemented by a computer program, the program may be stored in a computer-readable storage medium, which may include read-only memory, random-access memory, magnetic disk, optical disc, hard disk, and the like; the program is executed by a computer to achieve the above functions. For example, the program is stored in a device's memory, and all or part of the above functions are achieved when the processor executes the program in memory. In addition, when all or part of the functions in the above embodiments are implemented by a computer program, the program may also be stored in a storage medium such as a server, another computer, a magnetic disk, an optical disc, a flash drive, or a removable hard disk, downloaded or copied into the memory of a local device, or used to update the local device's system; all or part of the functions of the above embodiments are achieved when the processor executes the program in memory.
The present invention has been described above with specific examples, which are intended only to aid understanding and not to limit the invention. Those skilled in the art may make several simple deductions, variations, or substitutions based on the idea of the present invention.

Claims (10)

  1. A camera automatic calibration optimization method, characterized by comprising the following steps:
    S1: matching first projected two-dimensional coordinates of multiple marker points in space against first two-dimensional coordinates of the multiple marker points received by a camera, and computing the Euclidean distance between the two, the first projected two-dimensional coordinates being obtained by projecting the marker-point three-dimensional coordinates obtained through camera calibration into the camera;
    S2: determining whether the Euclidean distance is greater than a preset second threshold; if so, proceeding to step S3; if not, proceeding to step S4;
    S3: if the Euclidean distance is greater than the preset second threshold, performing the camera calibration again and returning to step S1;
    S4: if the Euclidean distance is greater than a preset first threshold and less than or equal to the preset second threshold, determining the corresponding camera to be a camera to be optimized, wherein the first threshold is smaller than the second threshold;
    if the Euclidean distance is less than or equal to the preset first threshold, determining the corresponding camera to be a normal camera, the normal camera being a camera that needs no calibration optimization;
    S5: projecting the marker-point three-dimensional coordinates obtained by the normal cameras into the camera to be optimized to obtain corresponding second projected two-dimensional coordinates, and matching the second projected two-dimensional coordinates against second two-dimensional coordinates received by the camera to be optimized to derive an optimized camera calibration result.
  2. The camera automatic calibration optimization method according to claim 1, characterized in that step S3 comprises:
    if the Euclidean distance is greater than the preset second threshold, performing the camera calibration again to obtain new positional relationships among the cameras, obtaining new three-dimensional coordinates of the marker points according to the new positional relationships, and projecting the new three-dimensional coordinates into the cameras to obtain corresponding new first projected two-dimensional coordinates;
    returning to step S1, so that the Euclidean distance between the new first projected two-dimensional coordinates obtained through camera calibration and the first two-dimensional coordinates is not greater than the preset second threshold.
  3. The camera automatic calibration optimization method according to claim 1, characterized in that step S5 comprises deriving the optimized camera calibration result by gradient descent, specifically:
    computing, according to the matching relationship between the second projected two-dimensional coordinates and the second two-dimensional coordinates, the projection difference between the two, and back-projecting according to the projection difference to obtain a rotation matrix and a translation matrix;
    updating the rotation matrix and translation matrix as the camera's current calibration data, and iteratively recomputing the current projection difference according to the current calibration data until the current projection difference is smaller than a preset threshold, whereupon the corresponding rotation matrix and translation matrix are taken as the optimized camera calibration result.
  4. The camera automatic calibration optimization method according to claim 3, characterized in that computing, according to the matching relationship between the second projected two-dimensional coordinates and the second two-dimensional coordinates, the projection difference between the two, and back-projecting according to the projection difference to obtain the rotation matrix and translation matrix, comprises:
    computing Euclidean distance values between the second projected two-dimensional coordinates of the multiple marker points and the second two-dimensional coordinates, wherein the smaller the Euclidean distance value, the more certain it is that a matching relationship exists between the corresponding second projected two-dimensional coordinates and second two-dimensional coordinates;
    computing the projection difference between the second projected two-dimensional coordinates and the second two-dimensional coordinates according to the matching relationship, and back-projecting according to the projection difference to obtain the rotation matrix and translation matrix.
  5. The camera automatic calibration optimization method according to claim 3, characterized in that the calibration data comprise rotation information and/or position information, the rotation information and the position information being used, respectively, to calibrate the rotation state and the offset state of any camera relative to the spatial coordinate system.
  6. The camera automatic calibration optimization method according to claim 1, characterized in that step S5 comprises:
    masking each camera to be optimized one by one, or masking multiple cameras to be optimized simultaneously, so that the cameras to be optimized do not participate in computing the marker-point three-dimensional coordinates; projecting the marker-point three-dimensional coordinates obtained by the normal cameras into the cameras to be optimized to obtain corresponding second projected two-dimensional coordinates; and matching the second projected two-dimensional coordinates against the second two-dimensional coordinates received by the cameras to be optimized to derive the optimized camera calibration result.
  7. The camera automatic calibration optimization method according to any one of claims 1-6, characterized by further comprising, after step S5:
    a judging step: repeating steps S1-S5 multiple times for iterative updating, until it is determined whether the computed Euclidean distance is less than or equal to the preset first threshold; if so, proceeding to the ending step; if not, returning to step S1;
    an ending step: stopping the camera automatic calibration optimization process.
  8. An optical motion capture system, comprising multiple marker points to be captured and multiple cameras that photograph the marker points, characterized by further comprising a processor;
    the multiple marker points being arranged on a preset rigid body;
    the multiple cameras being distributed in a preset motion space and all communicatively connected to the processor so as to photograph the marker points on the rigid body;
    the processor being configured to periodically calibrate each of the cameras according to the camera automatic calibration optimization method of any one of claims 1-7.
  9. A camera automatic calibration optimization processing device, characterized by comprising a memory and at least one processor, the memory storing instructions, and the memory and the at least one processor being interconnected by a line;
    the at least one processor invoking the instructions in the memory so that the camera automatic calibration optimization processing device executes the camera automatic calibration optimization processing method of any one of claims 1-7.
  10. A storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, implements the camera automatic calibration optimization processing method of any one of claims 1-7.
PCT/CN2021/105195 2020-07-08 2021-07-08 一种相机自动标定优化方法及相关系统、设备 WO2022007886A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010654537.5 2020-07-08
CN202010654537.5A CN111899305A (zh) 2020-07-08 2020-07-08 一种相机自动标定优化方法及相关系统、设备

Publications (1)

Publication Number Publication Date
WO2022007886A1 true WO2022007886A1 (zh) 2022-01-13

Family

ID=73193014

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/105195 WO2022007886A1 (zh) 2020-07-08 2021-07-08 一种相机自动标定优化方法及相关系统、设备

Country Status (2)

Country Link
CN (1) CN111899305A (zh)
WO (1) WO2022007886A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115375772A (zh) * 2022-08-10 2022-11-22 北京英智数联科技有限公司 相机标定方法、装置、设备及存储介质
CN116342662A (zh) * 2023-03-29 2023-06-27 北京诺亦腾科技有限公司 基于多目相机的追踪定位方法、装置、设备及介质
CN116934871A (zh) * 2023-07-27 2023-10-24 湖南视比特机器人有限公司 一种基于标定物的多目系统标定方法、系统及存储介质

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111899305A (zh) * 2020-07-08 2020-11-06 深圳市瑞立视多媒体科技有限公司 一种相机自动标定优化方法及相关系统、设备
CN112489133A (zh) * 2020-11-17 2021-03-12 北京京东乾石科技有限公司 手眼系统的标定方法、装置及设备
CN113263499B (zh) * 2021-04-19 2022-12-30 深圳瀚维智能医疗科技有限公司 机械臂手眼标定方法、装置、系统及计算机可读存储介质
CN113283543B (zh) * 2021-06-24 2022-04-15 北京优锘科技有限公司 一种基于WebGL的图像投影融合方法、装置、存储介质和设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102982548A (zh) * 2012-12-11 2013-03-20 清华大学 多目立体视频采集系统及其相机参数标定方法
CN109754432A (zh) * 2018-12-27 2019-05-14 深圳市瑞立视多媒体科技有限公司 一种相机自动标定方法及光学动作捕捉系统
CN109816736A (zh) * 2019-02-01 2019-05-28 上海蔚来汽车有限公司 车辆摄像头的自动标定方法、系统、车载控制设备
CN110689580A (zh) * 2018-07-05 2020-01-14 杭州海康机器人技术有限公司 多相机标定方法及装置
KR20200064947A (ko) * 2018-11-29 2020-06-08 (주)코어센스 광학식 위치 트래킹 시스템 기반의 위치 추적 장치 및 그 방법
CN111899305A (zh) * 2020-07-08 2020-11-06 深圳市瑞立视多媒体科技有限公司 一种相机自动标定优化方法及相关系统、设备

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101826206B (zh) * 2010-03-31 2011-12-28 北京交通大学 一种相机自定标的方法
US10096161B2 (en) * 2010-06-15 2018-10-09 Live Nation Entertainment, Inc. Generating augmented reality images using sensor and location data
CN103745474B (zh) * 2014-01-21 2017-01-18 南京理工大学 基于惯性传感器和摄像机的图像配准方法
CN105222788B (zh) * 2015-09-30 2018-07-06 清华大学 基于特征匹配的飞行器航路偏移误差的自校正方法
CN109636903B (zh) * 2018-12-24 2020-09-15 华南理工大学 一种基于抖动的双目三维重建方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102982548A (zh) * 2012-12-11 2013-03-20 清华大学 多目立体视频采集系统及其相机参数标定方法
CN110689580A (zh) * 2018-07-05 2020-01-14 杭州海康机器人技术有限公司 多相机标定方法及装置
KR20200064947A (ko) * 2018-11-29 2020-06-08 (주)코어센스 광학식 위치 트래킹 시스템 기반의 위치 추적 장치 및 그 방법
CN109754432A (zh) * 2018-12-27 2019-05-14 深圳市瑞立视多媒体科技有限公司 一种相机自动标定方法及光学动作捕捉系统
CN109816736A (zh) * 2019-02-01 2019-05-28 上海蔚来汽车有限公司 车辆摄像头的自动标定方法、系统、车载控制设备
CN111899305A (zh) * 2020-07-08 2020-11-06 深圳市瑞立视多媒体科技有限公司 一种相机自动标定优化方法及相关系统、设备

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115375772A (zh) * 2022-08-10 2022-11-22 北京英智数联科技有限公司 相机标定方法、装置、设备及存储介质
CN115375772B (zh) * 2022-08-10 2024-01-19 北京英智数联科技有限公司 相机标定方法、装置、设备及存储介质
CN116342662A (zh) * 2023-03-29 2023-06-27 北京诺亦腾科技有限公司 基于多目相机的追踪定位方法、装置、设备及介质
CN116342662B (zh) * 2023-03-29 2023-12-05 北京诺亦腾科技有限公司 基于多目相机的追踪定位方法、装置、设备及介质
CN116934871A (zh) * 2023-07-27 2023-10-24 湖南视比特机器人有限公司 一种基于标定物的多目系统标定方法、系统及存储介质
CN116934871B (zh) * 2023-07-27 2024-03-26 湖南视比特机器人有限公司 一种基于标定物的多目系统标定方法、系统及存储介质

Also Published As

Publication number Publication date
CN111899305A (zh) 2020-11-06

Similar Documents

Publication Publication Date Title
WO2022007886A1 (zh) 一种相机自动标定优化方法及相关系统、设备
US10666934B1 (en) Camera automatic calibration method and optical motion capture system
WO2021115331A1 (zh) 基于三角测量的坐标定位方法、装置、设备及存储介质
JP6975929B2 (ja) カメラ校正方法、カメラ校正プログラム及びカメラ校正装置
US7023473B2 (en) Camera calibration device and method, and computer system
WO2021004416A1 (zh) 一种基于视觉信标建立信标地图的方法、装置
CN106457562A (zh) 用于校准机器人的方法和机器人系统
US10540813B1 (en) Three-dimensional point data alignment
JP5615055B2 (ja) 情報処理装置及びその処理方法
JP7462769B2 (ja) 物体の姿勢の検出および測定システムを特徴付けるためのシステムおよび方法
JP2015090298A (ja) 情報処理装置、情報処理方法
CN112308925A (zh) 可穿戴设备的双目标定方法、设备及存储介质
JP6860620B2 (ja) 情報処理装置、情報処理方法、及びプログラム
CN113284083A (zh) 执行自动相机校准的方法和系统
CN111311682A (zh) 一种led屏校正过程中的位姿估计方法、装置及电子设备
US10628968B1 (en) Systems and methods of calibrating a depth-IR image offset
WO2022222291A1 (zh) 光轴检测系统的光轴标定方法、装置、终端、系统和介质
CN115457147A (zh) 相机标定方法、电子设备及存储介质
CN109544642B (zh) 一种基于n型靶标的tdi-ccd相机参数标定方法
WO2023201578A1 (zh) 单目激光散斑投影系统的外参数标定方法和装置
JP2019020778A (ja) 情報処理装置、情報処理方法
CN114758011B (zh) 融合离线标定结果的变焦相机在线标定方法
JP7427370B2 (ja) 撮像装置、画像処理装置、画像処理方法、撮像装置の校正方法、ロボット装置、ロボット装置を用いた物品の製造方法、制御プログラムおよび記録媒体
CN115187612A (zh) 一种基于机器视觉的平面面积测量方法、装置及系统
CN115018922A (zh) 畸变参数标定方法、电子设备和计算机可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21836760

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 07/06/2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21836760

Country of ref document: EP

Kind code of ref document: A1