WO2020259106A1 - 相机和惯性测量单元相对姿态的标定方法及装置 - Google Patents

相机和惯性测量单元相对姿态的标定方法及装置 Download PDF

Info

Publication number
WO2020259106A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
measurement unit
inertial measurement
calibration
data
Prior art date
Application number
PCT/CN2020/089868
Other languages
English (en)
French (fr)
Inventor
庞敏健
刘贤焯
黄志明
曾杰
王晓梦
杨洪飞
Original Assignee
深圳奥比中光科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳奥比中光科技有限公司 filed Critical 深圳奥比中光科技有限公司
Publication of WO2020259106A1

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C25/00 - Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
    • G01C25/005 - Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass; initial alignment, calibration or starting-up of inertial devices
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 - Stereo camera calibration

Definitions

  • the invention relates to the technical field of calibration, and in particular to a method and device for calibrating the relative posture of a camera and an inertial measurement unit.
  • the embodiments of the present invention provide a method and device for calibrating the relative posture of a camera and an inertial measurement unit to solve the technical problem of how to accurately obtain the posture relationship between the IMU and the camera.
  • the first aspect of the embodiments of the present invention provides a method for calibrating the relative attitude of a camera and an inertial measurement unit, including:
  • when the camera is a rolling shutter camera, perform rolling shutter calibration on the camera and obtain the corresponding parameter value;
  • when the parameter value is within the first preset range, calculate the relative external parameters of the camera and the inertial measurement unit according to the first image data and the first inertial measurement unit data.
  • a second aspect of the embodiments of the present invention provides a device for calibrating the relative attitude of a camera and an inertial measurement unit, including:
  • the internal parameter calibration module is used to calibrate the internal parameters of the camera and the inertial measurement unit respectively; the camera and the inertial measurement unit are both set on the robotic arm;
  • the drive and acquisition execution module is used to drive the robotic arm to carry the camera and the inertial measurement unit to move on a preset trajectory, and simultaneously collect the first image data of the calibration board through the camera and collect the first image data through the inertial measurement unit One inertial measurement unit data;
  • the first external parameter calibration module is used to perform rolling shutter calibration on the camera and obtain the corresponding parameter value when the camera is a rolling shutter camera;
  • when the parameter value is within the first preset range, the relative external parameters of the camera and the inertial measurement unit are calculated according to the first image data and the first inertial measurement unit data.
  • a third aspect of the embodiments of the present invention provides a terminal device, including a memory and a processor.
  • the memory stores a computer program that can run on the processor;
  • when the processor executes the computer program, the steps of the method described in the first aspect are implemented.
  • a fourth aspect of the embodiments of the present invention provides a computer-readable storage medium, the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps of the method described in the first aspect are implemented.
  • in the embodiments of the present invention, the internal parameters of the camera and the inertial measurement unit are calibrated separately, with the camera and the inertial measurement unit both set on the robotic arm; the robotic arm is then driven to carry the camera and the inertial measurement unit along a preset trajectory while the first image data of the calibration board is collected by the camera and the first inertial measurement unit data is collected by the inertial measurement unit; when the camera is a rolling shutter camera, rolling shutter calibration is performed on the camera and the corresponding parameter value is obtained.
  • when the parameter value is within the first preset range, the relative external parameters of the camera and the inertial measurement unit are calculated according to the first image data and the first inertial measurement unit data.
  • the method provided by the embodiments of the present invention adds a rolling shutter calibration step when the camera is a rolling shutter camera.
  • only when the corresponding parameter value of the rolling shutter calibration is within the first preset range are the relative external parameters of the camera and the IMU calculated; the method therefore breaks the limitations of the prior art and improves the accuracy of calibrating the relative external parameters of the camera and the IMU.
  • FIG. 1 is an implementation flowchart of a method for calibrating the relative attitude of a camera and an inertial measurement unit according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of a calibration system for the relative attitude of a camera and an inertial measurement unit according to an embodiment of the present invention
  • FIG. 3 is an implementation flowchart of a camera internal parameter calibration method provided by an embodiment of the present invention;
  • FIG. 4 is an implementation flowchart of an inertial measurement unit internal parameter calibration method provided by an embodiment of the present invention;
  • FIG. 5 is an implementation flowchart of another inertial measurement unit internal parameter calibration method provided by an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of Allan standard deviation provided by an embodiment of the present invention.
  • FIG. 7 is an implementation flowchart of another method for calibrating the relative attitude of a camera and an inertial measurement unit according to an embodiment of the present invention.
  • FIG. 8 is a structural block diagram of a calibration device for the relative posture of a camera and an inertial measurement unit provided by an embodiment of the present invention
  • Fig. 9 is a schematic diagram of a terminal device provided by an embodiment of the present invention.
  • Fig. 1 shows an implementation process of a method for calibrating the relative posture of a camera and an IMU provided by an embodiment of the present invention.
  • the process of the method includes steps S101 to S103.
  • This method is suitable for calibrating the relative posture of the camera and IMU.
  • the method is executed by a calibration device for the relative posture of the camera and the IMU.
  • the device for calibrating the relative posture of the camera and the IMU is configured in a terminal device and can be implemented by software and/or hardware.
  • the specific implementation principle of each step is as follows.
  • in practical applications, the camera and the IMU are usually installed on the same printed circuit board (PCB) in use;
  • the PCB is installed on a movable terminal (capable of translation and rotation) to ensure that the positional relationship between the two sensors in the world coordinate system is fixed.
  • the terminal can be a mobile phone, a tablet computer, a personal digital assistant, a wearable device (such as glasses, a watch, or a bracelet), and so on.
  • the camera and the inertial measurement unit are both set on a robotic arm; the robotic arm can be driven by an external terminal device and can rotate around the X axis, Y axis, and Z axis to adapt to actual application scenarios.
  • the calibration method provided by the embodiment of the present invention is implemented by the calibration system of the relative attitude of the camera and the inertial measurement unit shown in FIG. 2.
  • the calibration system includes a robotic arm 210, a camera 211, an IMU 212, and a terminal device 213, wherein the camera 211 and the IMU 212 are both set on the robotic arm 210, and the terminal device 213 is connected to the robotic arm 210, the camera 211, and the IMU 212, respectively.
  • the specific structure of the calibration system and the terminal device will be described in detail in the subsequent embodiments, and will not be repeated here.
  • before calibrating the relative external parameters of the camera and the IMU, the terminal device needs to calibrate the internal parameters of the camera and the internal parameters of the IMU.
  • in general, the camera's internal parameters, also called the internal parameter matrix, include the image center coordinates, the focal length, the lens distortion parameters, and so on.
  • the IMU is a device that measures the three-axis attitude angle (or angular rate) and acceleration of an object, and can include multi-axis accelerometers and multi-axis gyroscopes; its internal parameters generally refer to systematic errors and random errors, the more common random errors being Gaussian white noise and random walk noise.
  • methods for calibrating the internal parameters of the camera include, but are not limited to, linear calibration methods, nonlinear optimization calibration methods, and two-step calibration methods.
  • the two-step calibration methods include Tsai's classic two-step method and Zhang Zhengyou's calibration method.
  • Fig. 3 shows a flowchart of a method for calibrating the internal parameters of the camera; how to calibrate the internal parameters of the camera is described below, including the following steps S301 to S302.
  • S301: Drive the robotic arm to carry the camera through multiple rotation angles, and collect second image data of the calibration plate at the multiple angles through the camera.
  • the calibration board with the calibration pattern is fixed on a plane in front of the robotic arm.
  • the robotic arm can rotate around the X axis, Y axis, and Z axis, and the end of the robotic arm can rotate through more than 360 degrees, so that the camera can collect calibration images at multiple different angles; in order to improve the robustness of the image data, it is generally appropriate to collect 10 to 20 calibration images.
  • in order to collect a static calibration image at a given angle, the robotic arm should remain stationary for a short period of time after rotating to that angle, for example 500 ms, so that the camera is completely still before the image is captured.
  • in order to cover the camera's entire field of view, the size of the calibration board should be designed adaptively; generally a small calibration board is adopted, and the distance between the calibration board and the camera should also be appropriate, for example about 3 m. During the acquisition of calibration images, the distance from the camera to the calibration board can be changed. In the design of the calibration plate, its accuracy and flatness should be ensured.
  • the calibration plate can be an alumina calibration plate with an accuracy of 0.01 mm; in another embodiment, a photolithographically etched calibration plate with an accuracy of 0.001 mm can be used.
  • S302: Calculate the internal parameters of the camera by using the corner points extracted from the second image data and the parameter characteristics of the calibration plate.
  • the parameter characteristics of the calibration board include, but are not limited to, the actual side length of the calibration board.
  • the internal parameters of the camera are calculated by using the corner points extracted from the multiple sets of image data and the actual side length of the calibration plate.
  • the above-mentioned camera may be a monocular camera, a binocular camera, a depth camera, or an infrared camera; if it is a binocular camera, in addition to calculating its internal parameters, the position transformation matrix between the two cameras must also be calculated.
  • FIG. 4 is a flowchart of the method for calibrating the internal parameters of the IMU; how to calibrate the internal parameters of the IMU is described below, including the following steps S401 to S402.
  • S401: When the inertial measurement unit is stationary, collect second inertial measurement unit data through the inertial measurement unit over a preset collection duration t; IMU data are sampled at an interval t0, and if M samples are collected within the duration t, then t = M·t0.
  • the preset collection duration t is an empirical value and can be selected as required; in order to obtain enough data and improve the robustness of the data, it is appropriate to set the preset collection duration t to more than two hours.
  • S402 Perform Allan analysis of variance on the second inertial measurement unit data to obtain error parameters of the inertial measurement unit.
  • the Allan variance analysis method is used to perform error modeling on the IMU data to obtain the error parameters of the IMU.
  • the error parameters include random error parameters, which include but are not limited to Gaussian white noise and random walk noise.
  • step 402 includes the following steps 501 to 504.
  • S501: Divide the second inertial measurement unit data into N equal groups, and calculate the mean value of each group of second inertial measurement unit data.
  • S502: Calculate the Allan variance σ²(T) from the mean values, where T is the time span of each group, T = t/N; S503: Calculate the Allan standard deviation σ(T) from the Allan variance; S504: Plot the log-log curve of the Allan standard deviation σ(T) against the time span T to obtain the error parameters.
  • Figure 6 shows the Allan standard deviation graph; for example, the value of the line with slope -1/2 at time span T = 1 corresponds to the Gaussian white noise, and the value of the line with slope +1/2 at time span T = 3 corresponds to the random walk noise.
  • S102 Drive the mechanical arm to carry the camera and the inertial measurement unit to move along a preset trajectory, and at the same time collect the first image data of the calibration board through the camera and collect the first inertial measurement unit data through the inertial measurement unit.
  • the preset trajectory should be designed so that the robotic arm can carry the IMU through sufficient translation and rotation. Since the calibration board is stationary and the two sensors move relative to it, the movement speed of the robotic arm should be designed reasonably to avoid blurring of the collected calibration images: it should not be too fast, while the acceleration and angular velocity of the robotic arm's motion should not be too small, so that each axis of the IMU can be fully excited. It is understandable that in order to reduce image blur, especially the blur of a rolling shutter camera, it is usually necessary to reduce the image exposure time and at the same time increase the ambient light, for example by using a highly bright and highly uniform external light source.
  • when the camera is a rolling shutter camera, rolling shutter calibration is performed on it and the corresponding parameter value is obtained, and it is determined whether the parameter value is within the first preset range; when the parameter value is within the first preset range, the relative external parameters of the camera and the inertial measurement unit are calculated according to the image data and the inertial measurement unit data. It should be noted that when the parameter value is not within the first preset range, an error prompt is issued.
  • the two common exposure modes of cameras include global exposure (Global Shutter) and rolling shutter exposure (Rolling Shutter).
  • Global Shutter: global exposure
  • Rolling Shutter: rolling shutter exposure
  • in global exposure, all the pixels of the photosensitive element are exposed simultaneously for a certain time, and the image is then formed.
  • in rolling shutter exposure, the pixels of the photosensitive element are exposed row by row for a certain time; one row is exposed and read out before the next, and the image is then formed.
  • when shooting in rolling shutter mode, if the line-by-line scan speed is not sufficient, the captured result may appear "skewed", "wobbly", or "partially exposed"; this phenomenon of rolling shutter shooting is defined as the jelly effect.
  • rolling shutter calibration calibrates the row readout time of the rolling shutter camera and compares it with the first preset range to determine whether it is valid, that is, whether it is within the first preset range; if it is within the first preset range, it is valid, where the first preset range is an empirical value that can be set as required. If it is valid, the calibration of the relative external parameters of the camera and the IMU continues. It should be noted that if the calibrated readout time is invalid, an error prompt is issued; that is, once there is a jelly effect, it can be detected in time and an error prompt is issued.
  • in the process of calculating the relative external parameters of the camera and the IMU from the first image data and the first inertial measurement unit data, it should be noted that the acquisition frequencies of the two sensors differ: the acquisition frequency of the IMU is higher and the acquisition frequency of the camera is lower.
  • suppose the robotic arm starts moving along the preset trajectory at time t1 and stops at time t2; during this period, if the camera shoots video data at a sampling frame rate of 20 Hz, the camera captures 20 image frames per second.
  • at the same time, the IMU collects its own attitude information at a frequency of 200 Hz, that is, the IMU outputs measurement results at 200 Hz; in other words, over the same time period the number of image frames captured by the camera is small while the number of measurement results output by the IMU is large, so the IMU data from t1 to t2 cannot simply be integrated in one pass. In order to ensure that the integrated IMU data and the calibration images are collected over the same time period, the following two methods can be adopted.
  • one is to apply pre-integration to the IMU, that is, to integrate the IMU data only between two image moments; for example, the IMU data are integrated over the period from the first exposure time of a first image frame to the second exposure time of a second image frame separated from it by a preset number of frames in the video data, where the first image frame and the second image frame may or may not be adjacent.
  • the other is to use the system time as a reference: record the timestamp of each image and the timestamp of each set of IMU data, and for each image find the IMU data with the smallest time difference to it; since time increases linearly, and the IMU starts collecting data before the camera, in theory every image can be matched to corresponding IMU data.
  • in the embodiment of the present invention, a rolling shutter calibration step is added when the camera is a rolling shutter camera.
  • only when the corresponding parameter value of the rolling shutter calibration is within the first preset range are the relative external parameters of the camera and the IMU calculated.
  • the method provided by the present invention therefore breaks the limitations of the prior art and improves the accuracy of calibrating the relative external parameters of the camera and the IMU.
  • a rolling shutter calibration step is added to the rolling shutter camera to prevent the jelly effect and improve the accuracy of the relative posture calibration of the camera and the IMU.
  • FIG. 7 shows a flowchart of another method for calibrating the relative posture of the camera and the inertial measurement unit.
  • in the embodiment shown in FIG. 7, the situation where the camera is a global shutter camera is added; steps that are the same as those in the foregoing embodiment are not repeated here, please refer to the foregoing description.
  • S701 Perform internal parameter calibration on the camera and the inertial measurement unit respectively, and both the camera and the inertial measurement unit are set on a mechanical arm.
  • S702 Drive the mechanical arm to carry the camera and the inertial measurement unit to move along a preset trajectory, and simultaneously collect the first image data of the calibration board through the camera and collect the first inertial measurement unit data through the inertial measurement unit.
  • S703: According to the type of the camera, determine whether to perform rolling shutter calibration on the camera.
  • when the camera is a global shutter camera, no rolling shutter calibration is performed on it, and the relative external parameters of the camera and the inertial measurement unit are calculated according to the first image data and the first inertial measurement unit data;
  • when the camera is a rolling shutter camera, rolling shutter calibration is performed on it and the corresponding parameter value is obtained, and it is determined whether the parameter value is within the first preset range; when the parameter value is within the first preset range, the relative external parameters of the camera and the inertial measurement unit are calculated according to the image data and the inertial measurement unit data.
  • it should be noted that the terminal device obtains the performance parameters of the camera in advance, and these include parameters that characterize the camera type; the terminal device can therefore determine, according to the camera type, whether to perform rolling shutter calibration on the camera.
  • the present invention adds the step of determining, according to the camera type, whether to perform rolling shutter calibration, and can calibrate global shutter cameras and rolling shutter cameras with different methods, which further improves the accuracy of calibrating the relative external parameters of the camera and the inertial measurement unit, breaks the limitations of the prior art, and gives the present invention a wider range of application scenarios.
  • FIG. 8 shows a structural block diagram of a device for calibrating the relative posture of the camera and the inertial measurement unit provided by an embodiment of the present invention; for ease of description, only the parts related to the embodiment of the present invention are shown.
  • the device for calibrating the relative attitude of the camera and the inertial measurement unit includes:
  • the internal parameter calibration module 81 is used to calibrate the internal parameters of the camera and the inertial measurement unit respectively; the camera and the inertial measurement unit are both set on the robotic arm;
  • the driving and acquisition execution module 82 is used to drive the robotic arm to carry the camera and the inertial measurement unit to move in a preset trajectory, and simultaneously collect the first image data of the calibration board through the camera and collect the inertial measurement unit Data of the first inertial measurement unit;
  • the first external parameter calibration module 83 is configured to perform rolling shutter calibration on the camera and obtain the corresponding parameter value when the camera is a rolling shutter camera;
  • when the parameter value is within the first preset range, the relative external parameters of the camera and the inertial measurement unit are calculated according to the first image data and the first inertial measurement unit data.
  • the device for calibrating the relative attitude of the camera and the inertial measurement unit further includes:
  • the second external parameter calibration module is used to calculate the relative external parameters of the camera and the inertial measurement unit according to the first image data and the first inertial measurement unit data when the camera is a global shutter camera .
  • the device for calibrating the relative attitude of the camera and the inertial measurement unit further includes:
  • the prompting module is configured to perform an error prompt when the parameter value is not within the first preset range; and also used to perform an error prompt when the relative external reference is not within the second preset range.
  • Fig. 9 is a schematic diagram of a terminal device provided by an embodiment of the present invention.
  • the terminal device 9 of this embodiment includes a processor 90, a memory 91, and a computer program 92 stored in the memory 91 and runnable on the processor 90, for example a program for calibrating the relative attitude of a camera and an inertial measurement unit.
  • when the processor 90 executes the computer program 92, the steps in the embodiments of the method for calibrating the relative posture of the camera and the inertial measurement unit are implemented, for example steps S101 to S103 shown in FIG. 1.
  • alternatively, when the processor 90 executes the computer program 92, the functions of the modules/units in the foregoing device embodiments, for example the functions of modules 81 to 83 shown in FIG. 8, are realized.
  • the computer program 92 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 91 and executed by the processor 90 to complete the present invention.
  • the one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program 92 in the terminal device 9.
  • the terminal device 9 may be a computer, a tablet, or the like.
  • the terminal device 9 may include, but is not limited to, a processor 90 and a memory 91.
  • FIG. 9 is only an example of the terminal device 9 and does not constitute a limitation on the terminal device 9; it may include more or fewer components than shown in the figure, or combine certain components, or have different components.
  • the terminal device may also include input and output devices, network access devices, buses, etc.
  • the so-called processor 90 may be a central processing unit (Central Processing Unit, CPU), other general-purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (ASIC), Field-Programmable Gate Array (FPGA) or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the memory 91 may be an internal storage unit of the terminal device 9, such as a hard disk or memory of the terminal device 9.
  • the memory 91 may also be an external storage device of the terminal device 9, for example, a plug-in hard disk equipped on the terminal device 9, a smart memory card (Smart Media Card, SMC), or a Secure Digital (SD) Card, Flash Card, etc. Further, the memory 91 may also include both an internal storage unit of the terminal device 9 and an external storage device.
  • the memory 91 is used to store the computer program and other programs and data required by the terminal device.
  • the memory 91 can also be used to temporarily store data that has been output or will be output.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • if the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • all or part of the processes in the methods of the above embodiments of the present invention can also be implemented by instructing the relevant hardware through a computer program.
  • the computer program can be stored in a computer-readable storage medium, and when the program is executed by a processor, the steps of the foregoing method embodiments can be implemented.
  • the calibration system includes a robotic arm 210, a camera 211, an IMU 212, and a terminal device 213, wherein the camera 211 and the IMU 212 are both installed on the robotic arm 210.
  • the terminal device 213 is respectively connected to the robotic arm 210, the camera 211 and the IMU 212.
  • the calibration system also includes an illuminating light source.
  • the illuminating light source can be, for example, a light box to provide a light source.
  • in practical applications, external factors such as uneven ambient light cause the calibration image to show different gray levels when the calibration board is turned to different angles; this calibration system therefore provides good illumination, which can reduce the influence of external factors such as uneven ambient light on the calibration.
  • the terminal device 213 can be connected to the robotic arm 210 via a wired connection such as USB, or can be connected wirelessly via a wireless network, to control the robotic arm 210.
  • the terminal device 213 can be connected to the camera 211 and the IMU 212 via a wireless network to control them, or the terminal device can be electrically connected to the camera 211 and the IMU 212, respectively.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Manufacturing & Machinery (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Studio Devices (AREA)

Abstract

The present invention is applicable to the technical field of calibration, and provides a method and device for calibrating the relative attitude of a camera and an inertial measurement unit. The method includes: performing internal parameter calibration on a camera and an inertial measurement unit separately, the camera and the inertial measurement unit both being set on a robotic arm; driving the robotic arm to carry the camera and the inertial measurement unit along a preset trajectory while collecting first image data of a calibration board through the camera and collecting first inertial measurement unit data through the inertial measurement unit; and, when the camera is a rolling shutter camera, performing rolling shutter calibration on the camera and obtaining a corresponding parameter value, and when the parameter value is within a first preset range, calculating the relative external parameters of the camera and the inertial measurement unit according to the first image data and the first inertial measurement unit data. The method provided by the present invention improves the accuracy of calibrating the relative attitude of the camera and the IMU.

Description

Calibration method and device for the relative attitude of a camera and an inertial measurement unit
TECHNICAL FIELD
The present invention relates to the technical field of calibration, and in particular to a method and device for calibrating the relative attitude of a camera and an inertial measurement unit.
BACKGROUND
In fields such as simultaneous localization and mapping (SLAM), drone navigation, motion capture, and augmented reality, a camera and an inertial measurement unit (IMU) are often bound to the same moving object, and a relatively precise pose is determined by fusing the information of the two. Because there is a certain deviation between the coordinate system of the IMU and the coordinate system of the camera, a certain attitude relationship exists between the IMU and the camera. It is therefore necessary to calibrate the attitude relationship between the IMU and the camera.
How to obtain the attitude relationship between the IMU and the camera simply and accurately has become a problem that urgently needs to be solved.
SUMMARY
In view of this, embodiments of the present invention provide a method and device for calibrating the relative attitude of a camera and an inertial measurement unit, to solve the technical problem of how to accurately obtain the attitude relationship between the IMU and the camera.
A first aspect of the embodiments of the present invention provides a method for calibrating the relative attitude of a camera and an inertial measurement unit, including:
performing internal parameter calibration on a camera and an inertial measurement unit separately, the camera and the inertial measurement unit both being set on a robotic arm;
driving the robotic arm to carry the camera and the inertial measurement unit along a preset trajectory, while collecting first image data of a calibration board through the camera and collecting first inertial measurement unit data through the inertial measurement unit;
when the camera is a rolling shutter camera, performing rolling shutter calibration on the camera and obtaining a corresponding parameter value, and when the parameter value is within a first preset range, calculating the relative external parameters of the camera and the inertial measurement unit according to the first image data and the first inertial measurement unit data.
A second aspect of the embodiments of the present invention provides a device for calibrating the relative attitude of a camera and an inertial measurement unit, including:
an internal parameter calibration module, used to perform internal parameter calibration on a camera and an inertial measurement unit separately, the camera and the inertial measurement unit both being set on a robotic arm;
a drive and acquisition execution module, used to drive the robotic arm to carry the camera and the inertial measurement unit along a preset trajectory, while collecting first image data of a calibration board through the camera and collecting first inertial measurement unit data through the inertial measurement unit;
a first external parameter calibration module, used to perform rolling shutter calibration on the camera and obtain a corresponding parameter value when the camera is a rolling shutter camera, and, when the parameter value is within a first preset range, to calculate the relative external parameters of the camera and the inertial measurement unit according to the first image data and the first inertial measurement unit data.
A third aspect of the embodiments of the present invention provides a terminal device, including a memory and a processor, the memory storing a computer program that can run on the processor, wherein the steps of the method described in the first aspect are implemented when the processor executes the computer program.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing a computer program, wherein the steps of the method described in the first aspect are implemented when the computer program is executed by a processor.
In the embodiments of the present invention, internal parameter calibration is performed on the camera and the inertial measurement unit separately, with the camera and the inertial measurement unit both set on the robotic arm; the robotic arm is then driven to carry the camera and the inertial measurement unit along a preset trajectory while first image data of the calibration board are collected through the camera and first inertial measurement unit data are collected through the inertial measurement unit; when the camera is a rolling shutter camera, rolling shutter calibration is performed on the camera and a corresponding parameter value is obtained, and when the parameter value is within the first preset range, the relative external parameters of the camera and the inertial measurement unit are calculated according to the first image data and the first inertial measurement unit data. The method provided by the embodiments of the present invention adds a rolling shutter calibration step when the camera is a rolling shutter camera, and the relative external parameters of the camera and the IMU are calculated only when the corresponding parameter value of the rolling shutter calibration is within the first preset range; the method therefore breaks the limitations of the prior art and improves the accuracy of calibrating the relative external parameters of the camera and the IMU.
BRIEF DESCRIPTION OF THE DRAWINGS
To explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and other drawings can be obtained from them by those of ordinary skill in the art without creative effort.
FIG. 1 is an implementation flowchart of a method for calibrating the relative attitude of a camera and an inertial measurement unit provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of a system for calibrating the relative attitude of a camera and an inertial measurement unit provided by an embodiment of the present invention;
FIG. 3 is an implementation flowchart of a camera internal parameter calibration method provided by an embodiment of the present invention;
FIG. 4 is an implementation flowchart of an inertial measurement unit internal parameter calibration method provided by an embodiment of the present invention;
FIG. 5 is an implementation flowchart of another inertial measurement unit internal parameter calibration method provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of the Allan standard deviation provided by an embodiment of the present invention;
FIG. 7 is an implementation flowchart of another method for calibrating the relative attitude of a camera and an inertial measurement unit provided by an embodiment of the present invention;
FIG. 8 is a structural block diagram of a device for calibrating the relative attitude of a camera and an inertial measurement unit provided by an embodiment of the present invention;
FIG. 9 is a schematic diagram of a terminal device provided by an embodiment of the present invention.
DETAILED DESCRIPTION
To explain the technical solutions of the present invention, the following description is given with reference to the drawings and in combination with embodiments.
To enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the scope of protection of the present invention. It should be noted that the embodiments of the present application and the features in the embodiments may be combined with one another provided there is no conflict.
In the following description, specific details such as particular system structures and techniques are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present invention. However, it should be clear to those skilled in the art that the present invention can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so that unnecessary detail does not obscure the description of the present invention.
It should be noted that when an element is described as being "fixed to" or "arranged on" another element, it may be directly on the other element or indirectly on the other element. When an element is described as being "connected to" another element, it may be connected to the other element by wire or wirelessly, the connection serving for data transmission.
In addition, descriptions involving "first" or "second" and the like in the specification, claims, and drawings of the present invention are only used to distinguish similar objects, and cannot be understood as indicating or implying their relative importance or implicitly specifying the number of the indicated technical features; that is, these descriptions are not necessarily used to describe a specific order or sequence. Furthermore, it should be understood that these descriptions may be interchanged where appropriate in order to describe the embodiments of the present invention.
FIG. 1 shows the implementation flow of the method for calibrating the relative attitude of a camera and an IMU provided by an embodiment of the present invention; the method flow includes steps S101 to S103. The method is suitable for calibrating the relative attitude of a camera and an IMU. The method is executed by a device for calibrating the relative attitude of the camera and the IMU, which is configured in a terminal device and can be implemented by software and/or hardware. The specific implementation principle of each step is as follows.
S101: Perform internal parameter calibration on the camera and the IMU separately.
In practical applications, as a non-limiting illustration, the camera and the IMU are usually installed on the same printed circuit board (PCB) in use, and the PCB is installed on a movable terminal (capable of translation and rotation), so that the positional relationship between the two sensors in the world coordinate system is fixed. The terminal can be a mobile phone, a tablet computer, a personal digital assistant, a wearable device (such as glasses, a watch, or a bracelet), and so on.
Therefore, in the embodiment of the present invention, the camera and the inertial measurement unit are both set on a robotic arm; the robotic arm can be driven by an external terminal device and can rotate around the X axis, Y axis, and Z axis to suit actual application scenarios. Referring to FIG. 2, the calibration method provided by the embodiment of the present invention is implemented by the system for calibrating the relative attitude of a camera and an inertial measurement unit shown in FIG. 2. The calibration system includes a robotic arm 210, a camera 211, an IMU 212, and a terminal device 213, wherein the camera 211 and the IMU 212 are both set on the robotic arm 210, and the terminal device 213 is connected to the robotic arm 210, the camera 211, and the IMU 212, respectively. The specific structures of the calibration system and the terminal device will be described in detail in subsequent embodiments and are not repeated here.
In the embodiment of the present invention, before calibrating the relative external parameters of the camera and the IMU, the terminal device needs to calibrate the internal parameters of the camera and the internal parameters of the IMU.
In general, the internal parameters of the camera, also called the internal parameter matrix, include the image center coordinates, the focal length, the lens distortion parameters, and so on. The IMU is a device that measures the three-axis attitude angle (or angular rate) and acceleration of an object, and may include multi-axis accelerometers and multi-axis gyroscopes; its internal parameters generally refer to systematic errors and random errors, the more common random errors being Gaussian white noise and random walk noise.
In the embodiment of the present invention, methods for calibrating the internal parameters of the camera include, but are not limited to, linear calibration methods, nonlinear optimization calibration methods, and two-step calibration methods, where the two-step calibration methods include Tsai's classic two-step method and Zhang Zhengyou's calibration method.
As a non-limiting example of the present invention, FIG. 3 shows a flowchart of a method for performing internal parameter calibration on the camera; how to calibrate the internal parameters of the camera is described below, including the following steps S301 to S302.
S301: Drive the robotic arm to carry the camera through multiple rotation angles, and collect second image data of the calibration board at the multiple angles through the camera.
The calibration board bearing the calibration pattern is fixed on a plane in front of the robotic arm. The robotic arm can rotate around the X axis, Y axis, and Z axis, and the end of the robotic arm can rotate through more than 360 degrees, so that the camera can collect calibration images at multiple different angles; to improve the robustness of the image data, collecting 10 to 20 calibration images is generally appropriate. In order to collect a static calibration image at a given angle, the robotic arm may first remain stationary for a short period of time, for example 500 ms, after rotating to that angle, so that the camera is completely still before capture begins.
It is understandable that, in order to cover the camera's entire field of view, the size of the calibration board should be designed adaptively; a small calibration board is generally adopted, and the distance between the calibration board and the camera should also be appropriate, for example about 3 m. Of course, during the acquisition of calibration images, the distance from the camera to the calibration board can be changed. In the design of the calibration board, its accuracy and flatness should be ensured. In one implementation example, the calibration board can be an alumina calibration board with an accuracy of 0.01 mm; in another implementation example, a photolithographically etched calibration board with an accuracy of 0.001 mm is used.
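To make the capture procedure above concrete, the following is a minimal Python sketch of the settle-then-capture loop. The `arm` and `camera` driver objects and their `move_to`/`grab` methods are hypothetical placeholders rather than an API described in the patent; only the pattern of pausing about 500 ms at each of 10 to 20 poses before grabbing a still frame is taken from the text.

```python
import time

def capture_calibration_images(arm, camera, poses, settle_time_s=0.5):
    """Drive the arm through a set of poses and grab one still image at each.

    `arm` and `camera` are hypothetical driver objects; only the
    settle-then-capture pattern described above is illustrated here.
    """
    images = []
    for pose in poses:                 # e.g. 10 to 20 rotations about X/Y/Z
        arm.move_to(pose)              # assumed blocking move command
        time.sleep(settle_time_s)      # let the camera come fully to rest (~500 ms)
        images.append(camera.grab())   # assumed single-frame capture
    return images
```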
S302: Calculate the internal parameters of the camera by using the corner points extracted from the second image data and the parameter characteristics of the calibration board.
The parameter characteristics of the calibration board include, but are not limited to, the actual side length of the calibration board. The internal parameters of the camera are calculated by using the corner points extracted from the multiple sets of image data and the actual side length of the calibration board.
It is understandable that the above camera may be a monocular camera, a binocular camera, a depth camera, an infrared camera, or the like. If it is a binocular camera, in addition to calculating its internal parameters, the position transformation matrix between the two cameras must also be calculated.
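As an illustration of step S302, the sketch below estimates the internal parameter matrix and distortion coefficients from checkerboard images using OpenCV's Zhang-style calibration. The board dimensions and square size are example values, and the use of OpenCV is an implementation assumption; the patent itself does not prescribe a specific library or routine.

```python
import cv2
import numpy as np

def calibrate_intrinsics(images, board_size=(11, 8), square_size_m=0.03):
    """Zhang-style intrinsic calibration from checkerboard images.

    board_size (inner corners) and square_size_m (the board's real square
    side length) are illustrative values, not taken from the patent.
    """
    # 3-D corner coordinates on the planar board, scaled by the real square size
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)
    objp *= square_size_m

    obj_pts, img_pts = [], []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if not found:
            continue
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

    rms, K, dist, _, _ = cv2.calibrateCamera(
        obj_pts, img_pts, gray.shape[::-1], None, None)
    return K, dist, rms   # internal parameter matrix, distortion, reprojection error
```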
As a non-limiting example of the present invention, FIG. 4 is a flowchart of a method for performing internal parameter calibration on the IMU; how to calibrate the internal parameters of the IMU is described below, including the following steps S401 to S402.
S401: When the inertial measurement unit is stationary, collect second inertial measurement unit data through the inertial measurement unit over a preset collection duration t.
Specifically, when the inertial measurement unit is stationary, IMU data are collected at a sampling interval t0; assuming that a total of M IMU samples are collected within the preset collection duration t, then t = M·t0. The preset collection duration t is an empirical value and can be selected as required; in order to obtain enough data and improve the robustness of the data, setting the preset collection duration t to more than two hours is appropriate.
S402: Perform Allan variance analysis on the second inertial measurement unit data to obtain the error parameters of the inertial measurement unit.
In this embodiment, the Allan variance analysis method is used to perform error modeling on the IMU data to obtain the error parameters of the IMU, where the error parameters include random error parameters, and the random error parameters include but are not limited to Gaussian white noise and random walk noise.
Specifically, referring to FIG. 5, step S402 includes the following steps S501 to S504.
S501: Divide the second inertial measurement unit data into N equal groups, and calculate the mean value of each group of second inertial measurement unit data.
S502: Calculate the Allan variance σ²(T) from the mean values, where T is the time span of each group of second inertial measurement unit data, T = t/N.
S503: Calculate the Allan standard deviation σ(T) from the Allan variance σ²(T).
S504: Plot the log-log curve of the Allan standard deviation σ(T) against the time span T to obtain the error parameters.
In this embodiment, the M IMU samples are divided into N groups, each group containing M/N samples, and the time span of each group of IMU data is T = t/N. The mean value of each group of IMU data is first calculated, the Allan variance σ²(T) is then calculated from the mean values, the Allan standard deviation σ(T) is calculated from the variance, and the log-log curve of the Allan standard deviation σ(T) against the time span T is plotted to obtain the error parameters. FIG. 6 shows the Allan standard deviation plot; illustratively, the value of the line with slope -1/2 at time span T = 1 represents the Gaussian white noise, and the value of the line with slope +1/2 at time span T = 3 represents the random walk noise.
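The following Python sketch shows one way to compute the group means and the Allan deviation described in steps S501 to S504, assuming non-overlapping grouping of a single static IMU channel; sweeping several group sizes yields the points of the log-log curve of FIG. 6. The sampling interval and group sizes in the usage comment are illustrative values, not figures from the patent.

```python
import numpy as np

def allan_deviation(samples, t0, group_sizes):
    """Non-overlapping Allan deviation of one static IMU channel.

    samples: 1-D array of M gyro (or accelerometer) readings taken at interval t0.
    group_sizes: iterable of samples-per-group values n; each gives one point
    (T = n*t0, sigma(T)) on the log-log curve described above.
    """
    samples = np.asarray(samples, dtype=float)
    taus, sigmas = [], []
    for n in group_sizes:
        n_groups = len(samples) // n
        if n_groups < 2:
            break
        # group means: average of each block of n consecutive samples
        means = samples[:n_groups * n].reshape(n_groups, n).mean(axis=1)
        # Allan variance: half the mean squared difference of adjacent group means
        avar = 0.5 * np.mean(np.diff(means) ** 2)
        taus.append(n * t0)
        sigmas.append(np.sqrt(avar))
    return np.array(taus), np.array(sigmas)

# Example (illustrative values): read white noise off the sigma(T) curve where
# its log-log slope is -1/2.
# taus, sigmas = allan_deviation(gyro_x, t0=0.005, group_sizes=2 ** np.arange(1, 16))
```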
S102: Drive the robotic arm to carry the camera and the inertial measurement unit along a preset trajectory, while collecting first image data of the calibration board through the camera and collecting first inertial measurement unit data through the inertial measurement unit.
In order to fully excite each axis of the IMU, that is, to make the IMU translate and rotate, the preset trajectory should be designed so that the robotic arm can carry the IMU through sufficient translation and rotation. Since the calibration board is stationary and the two sensors move relative to it, the movement speed of the robotic arm should be designed reasonably to avoid blurring of the collected calibration images: it must not be too fast, while at the same time the acceleration and angular velocity of the robotic arm's motion must not be too small, lest the axes of the IMU cannot be fully excited. It is understandable that, in order to reduce image blur, especially the blur of a rolling shutter camera, it is usually necessary to reduce the image exposure time and at the same time increase the ambient light, for example by using a highly bright and highly uniform external light source.
S103: When the camera is a rolling shutter camera, perform rolling shutter calibration on the camera and obtain the corresponding parameter value; when the parameter value is within the first preset range, calculate the relative external parameters of the camera and the inertial measurement unit according to the first image data and the first inertial measurement unit data.
Specifically, when the camera is a rolling shutter camera, rolling shutter calibration is performed on it and the corresponding parameter value is obtained, and it is determined whether the parameter value is within the first preset range; when the parameter value is within the first preset range, the relative external parameters of the camera and the inertial measurement unit are calculated according to the image data and the inertial measurement unit data. It should be noted that when the parameter value is not within the first preset range, an error prompt is issued.
It is understandable that the two common exposure modes of a camera are global exposure (global shutter) and rolling shutter exposure (rolling shutter). In global exposure, all the pixels of the photosensitive element are exposed simultaneously for a certain time and the image is then formed; in rolling shutter exposure, the pixels of the photosensitive element are exposed row by row for a certain time, one row being exposed and read out before the next, after which the image is formed. When shooting in rolling shutter mode, if the line-by-line scan speed is not sufficient, the captured result may appear "skewed", "wobbly", or "partially exposed"; this phenomenon of rolling shutter shooting is defined as the jelly effect.
Therefore, before the relative external parameters of the camera and the IMU are calibrated, a rolling shutter camera must additionally undergo rolling shutter calibration. In a non-limiting embodiment of the present invention, rolling shutter calibration calibrates the row readout time of the rolling shutter camera and compares it with the first preset range to determine whether it is valid, that is, whether it is within the first preset range; if it is within the first preset range, it is valid, where the first preset range is an empirical value that can be set as required. If it is valid, calibration of the relative external parameters of the camera and the IMU continues. It should be noted that if the calibrated readout time is invalid, an error prompt is issued; in other words, once there is a jelly effect, it can be detected in time and an error prompt is issued.
It should be noted that, in the process of calculating the relative external parameters of the camera and the IMU from the first image data and the first inertial measurement unit data, the acquisition frequencies of the camera and the IMU are different: the acquisition frequency of the IMU is higher and that of the camera is lower. From time t1, when the robotic arm starts moving along the preset trajectory, to time t2, when it stops, suppose the camera samples image information at a frame rate of 20 Hz while shooting video data, that is, the camera captures 20 image frames per second. At the same time, the IMU collects its own attitude information at a frequency of 200 Hz, that is, the IMU outputs measurement results at 200 Hz. In other words, over the same time period the number of image frames captured by the camera is small while the number of measurement results output by the IMU is large, so the IMU data from t1 to t2 cannot simply be integrated in a single pass. To ensure that the integrated IMU data and the calibration images are collected over the same time period, the following two approaches can be adopted.
The first is to apply pre-integration to the IMU, that is, to integrate the IMU data only between two image moments, for example over the period from the first exposure time of a first image frame to the second exposure time of a second image frame separated from it by a preset number of frames in the video data, where the first image frame and the second image frame may or may not be adjacent.
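A minimal sketch of this first approach is given below: only the IMU samples whose timestamps fall between the exposure times of the two chosen image frames are integrated. Gravity compensation and bias handling are deliberately omitted, and the use of SciPy's rotation utilities is an implementation choice, not something specified in the patent.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def preintegrate_imu(imu_stamps, gyro, accel, t_i, t_j):
    """Integrate only the IMU samples between two image exposure times t_i and t_j.

    Returns the relative rotation, velocity, and position increments expressed in
    the IMU frame at t_i; gravity and bias handling are omitted for brevity.
    """
    imu_stamps = np.asarray(imu_stamps)
    gyro, accel = np.asarray(gyro), np.asarray(accel)
    mask = (imu_stamps >= t_i) & (imu_stamps < t_j)
    stamps, w, a = imu_stamps[mask], gyro[mask], accel[mask]

    delta_R = R.identity()
    delta_v = np.zeros(3)
    delta_p = np.zeros(3)
    for k in range(len(stamps) - 1):
        dt = stamps[k + 1] - stamps[k]
        acc_i = delta_R.apply(a[k])          # rotate acceleration into the frame at t_i
        delta_p += delta_v * dt + 0.5 * acc_i * dt ** 2
        delta_v += acc_i * dt
        delta_R = delta_R * R.from_rotvec(w[k] * dt)   # compose incremental rotation
    return delta_R, delta_v, delta_p
```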
The second is to use the system time as a reference: record the timestamp of each group of images and the timestamp at which the IMU returns each group of data, and for each group of images find the IMU data with the smallest relative time difference. Since time increases linearly, and the IMU starts collecting data before the camera, in theory every group of images can be matched to corresponding IMU data.
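This second approach reduces to a nearest-timestamp search; a small sketch is shown below. The function name and the NumPy-based implementation are illustrative assumptions, not details given in the patent.

```python
import numpy as np

def match_nearest_imu(image_stamps, imu_stamps):
    """For each image timestamp, return the index of the IMU sample whose
    timestamp is closest, as in the timestamp-matching strategy above."""
    image_stamps = np.asarray(image_stamps)
    imu_stamps = np.asarray(imu_stamps)
    idx = np.searchsorted(imu_stamps, image_stamps)   # first IMU sample not earlier
    idx = np.clip(idx, 1, len(imu_stamps) - 1)
    # pick whichever neighbour (before/after) is closer in time
    before = np.abs(image_stamps - imu_stamps[idx - 1])
    after = np.abs(image_stamps - imu_stamps[idx])
    return np.where(before <= after, idx - 1, idx)
```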
In the embodiment of the present invention, a rolling shutter calibration step is added when the camera is a rolling shutter camera, and the relative external parameters of the camera and the IMU are calculated only when the corresponding parameter value of the rolling shutter calibration is within the first preset range. The method provided by the present invention therefore breaks the limitations of the prior art and improves the accuracy of calibrating the relative external parameters of the camera and the IMU. In other words, a rolling shutter calibration step is added for rolling shutter cameras to guard against the jelly effect, which improves the accuracy of the relative attitude calibration of the camera and the IMU.
On the basis of the foregoing embodiment, FIG. 7 shows an implementation flowchart of another method for calibrating the relative attitude of a camera and an inertial measurement unit. As shown in FIG. 7, the case where the camera is a global shutter camera is added in the embodiment shown in FIG. 7. It should be noted that steps identical to those of the foregoing embodiment are not repeated here; please refer to the foregoing description.
S701: Perform internal parameter calibration on the camera and the inertial measurement unit separately, the camera and the inertial measurement unit both being set on a robotic arm.
S702: Drive the robotic arm to carry the camera and the inertial measurement unit along a preset trajectory, while collecting first image data of the calibration board through the camera and collecting first inertial measurement unit data through the inertial measurement unit.
S703: According to the camera type, determine whether to perform rolling shutter calibration on the camera.
S704: When the camera is a rolling shutter camera, perform rolling shutter calibration on the camera and obtain the corresponding parameter value; when the parameter value is within the first preset range, calculate the relative external parameters of the camera and the inertial measurement unit according to the first image data and the first inertial measurement unit data.
S705: When the camera is a global shutter camera, calculate the relative external parameters of the camera and the inertial measurement unit according to the first image data and the first inertial measurement unit data.
Specifically, whether to perform rolling shutter calibration on the camera is determined according to the camera type, as sketched below. When the camera is a global shutter camera, no rolling shutter calibration is performed on it, and the relative external parameters of the camera and the inertial measurement unit are calculated according to the first image data and the first inertial measurement unit data. When the camera is a rolling shutter camera, rolling shutter calibration is performed on it and the corresponding parameter value is obtained, and it is determined whether the parameter value is within the first preset range; when the parameter value is within the first preset range, the relative external parameters of the camera and the inertial measurement unit are calculated according to the image data and the inertial measurement unit data.
It should be noted that the terminal device obtains the performance parameters of the camera in advance, and these include parameters that characterize the camera type; the terminal device can therefore determine, according to the camera type, whether to perform rolling shutter calibration on the camera.
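The decision flow of steps S703 to S705 can be summarized as in the sketch below. `calibrate_readout_time` and `solve_camera_imu_extrinsics` are hypothetical placeholders for the rolling shutter calibration and the external parameter solver, which the patent does not spell out; only the branching on camera type, the range check, and the error prompt follow the text.

```python
def calibrate_relative_extrinsics(camera_info, readout_time_range, image_data, imu_data):
    """Decision flow of steps S703 to S705: only rolling shutter cameras get the
    extra readout-time (rolling shutter) calibration step before the solve."""
    if camera_info["shutter"] == "rolling":                   # type known from performance parameters
        readout_time = calibrate_readout_time(image_data)     # placeholder: per-row readout time
        lo, hi = readout_time_range                           # the first preset range (empirical)
        if not (lo <= readout_time <= hi):
            # invalid readout time indicates a jelly effect; raise the error prompt
            raise RuntimeError("rolling shutter calibration failed: "
                               f"readout time {readout_time:.6f} s out of range")
    # global shutter cameras skip the rolling shutter check entirely
    return solve_camera_imu_extrinsics(image_data, imu_data)  # placeholder solver
```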
In the embodiment of the present invention, a step of determining whether to perform rolling shutter calibration according to the camera type is added, so that global shutter cameras and rolling shutter cameras can be calibrated with different methods. This further improves the accuracy of calibrating the relative external parameters of the camera and the inertial measurement unit, breaks the limitations of the prior art, and gives the present invention a wider range of application scenarios.
It should be understood that the numbering of the steps in the above embodiments does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
Corresponding to the method for calibrating the relative attitude of a camera and an inertial measurement unit described in the above embodiments, FIG. 8 shows a structural block diagram of a device for calibrating the relative attitude of a camera and an inertial measurement unit provided by an embodiment of the present invention; for ease of description, only the parts related to the embodiment of the present invention are shown.
Referring to FIG. 8, the device for calibrating the relative attitude of a camera and an inertial measurement unit includes:
an internal parameter calibration module 81, used to perform internal parameter calibration on a camera and an inertial measurement unit separately, the camera and the inertial measurement unit both being set on a robotic arm;
a drive and acquisition execution module 82, used to drive the robotic arm to carry the camera and the inertial measurement unit along a preset trajectory, while collecting first image data of a calibration board through the camera and collecting first inertial measurement unit data through the inertial measurement unit;
a first external parameter calibration module 83, used to perform rolling shutter calibration on the camera and obtain the corresponding parameter value when the camera is a rolling shutter camera, and, when the parameter value is within a first preset range, to calculate the relative external parameters of the camera and the inertial measurement unit according to the first image data and the first inertial measurement unit data.
Optionally, the device for calibrating the relative attitude of a camera and an inertial measurement unit further includes:
a second external parameter calibration module, used to calculate the relative external parameters of the camera and the inertial measurement unit according to the first image data and the first inertial measurement unit data when the camera is a global shutter camera.
Optionally, the device for calibrating the relative attitude of a camera and an inertial measurement unit further includes:
a prompt module, used to issue an error prompt when the parameter value is not within the first preset range, and also used to issue an error prompt when the relative external parameters are not within a second preset range.
FIG. 9 is a schematic diagram of a terminal device provided by an embodiment of the present invention. As shown in FIG. 9, the terminal device 9 of this embodiment includes a processor 90, a memory 91, and a computer program 92 stored in the memory 91 and runnable on the processor 90, for example a program for calibrating the relative attitude of a camera and an inertial measurement unit. When the processor 90 executes the computer program 92, the steps in the above embodiments of the method for calibrating the relative attitude of a camera and an inertial measurement unit are implemented, for example steps S101 to S103 shown in FIG. 1. Alternatively, when the processor 90 executes the computer program 92, the functions of the modules/units in the above device embodiments, for example the functions of modules 81 to 83 shown in FIG. 8, are realized.
Illustratively, the computer program 92 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 91 and executed by the processor 90 to complete the present invention. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program 92 in the terminal device 9.
The terminal device 9 may be a computer, a tablet, or the like. The terminal device 9 may include, but is not limited to, the processor 90 and the memory 91. Those skilled in the art can understand that FIG. 9 is only an example of the terminal device 9 and does not constitute a limitation on the terminal device 9; it may include more or fewer components than shown, combine certain components, or have different components. For example, the terminal device may also include input and output devices, network access devices, buses, and so on.
The so-called processor 90 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 91 may be an internal storage unit of the terminal device 9, such as a hard disk or memory of the terminal device 9. The memory 91 may also be an external storage device of the terminal device 9, for example a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the terminal device 9. Further, the memory 91 may include both an internal storage unit and an external storage device of the terminal device 9. The memory 91 is used to store the computer program and other programs and data required by the terminal device. The memory 91 may also be used to temporarily store data that has been or will be output.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the division of the above functional units and modules is used only as an example; in practical applications, the above functions can be allocated to different functional units and modules as required, that is, the internal structure of the device can be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit; the above integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from one another and are not used to limit the scope of protection of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not detailed or described in a certain embodiment, reference may be made to the relevant descriptions of other embodiments.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the various embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of the present invention can also be completed by instructing the relevant hardware through a computer program; the computer program can be stored in a computer-readable storage medium, and when executed by a processor, can implement the steps of each of the above method embodiments.
Referring again to FIG. 2, another embodiment of the present invention provides a calibration system, which includes a robotic arm 210, a camera 211, an IMU 212, and a terminal device 213, wherein the camera 211 and the IMU 212 are both set on the robotic arm 210, and the terminal device 213 is connected to the robotic arm 210, the camera 211, and the IMU 212, respectively.
Optionally, the calibration system further includes an illumination light source, which may for example be a light box, to provide lighting. In practical applications, external factors such as uneven ambient light cause the calibration image to show different gray levels when the calibration board is turned to different angles; the calibration system therefore provides good illumination, which can reduce the influence of external factors such as uneven ambient light on the calibration.
Optionally, the terminal device 213 can be connected to the robotic arm 210 by a wired connection such as USB, or wirelessly via a wireless network, to control the robotic arm 210; the terminal device 213 can be connected to the camera 211 and the IMU 212 via a wireless network to control them, or the terminal device can be electrically connected to the camera 211 and the IMU 212, respectively.
The above embodiments are only used to illustrate the technical solutions of the present invention and are not intended to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions described in the foregoing embodiments or make equivalent replacements for some of the technical features therein, and that these modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all be included within the scope of protection of the present invention.

Claims (10)

  1. A method for calibrating the relative attitude of a camera and an inertial measurement unit, comprising:
    performing internal parameter calibration on a camera and an inertial measurement unit separately, the camera and the inertial measurement unit both being set on a robotic arm;
    driving the robotic arm to carry the camera and the inertial measurement unit along a preset trajectory, while collecting first image data of a calibration board through the camera and collecting first inertial measurement unit data through the inertial measurement unit; and
    when the camera is a rolling shutter camera, performing rolling shutter calibration on the camera and obtaining a corresponding parameter value, and when the parameter value is within a first preset range, calculating relative external parameters of the camera and the inertial measurement unit according to the first image data and the first inertial measurement unit data.
  2. The method according to claim 1, wherein after the collecting first image data of a calibration board through the camera and collecting first inertial measurement unit data through the inertial measurement unit, the method further comprises: when the camera is a global shutter camera, calculating the relative external parameters of the camera and the inertial measurement unit according to the first image data and the first inertial measurement unit data.
  3. The method according to claim 1 or 2, wherein performing internal parameter calibration on the camera comprises:
    driving the robotic arm to carry the camera through multiple rotation angles, and collecting second image data of the calibration board at the multiple angles through the camera; and
    calculating internal parameters of the camera by using corner points extracted from the second image data and parameter characteristics of the calibration board.
  4. The method according to claim 1 or 2, wherein performing internal parameter calibration on the inertial measurement unit comprises:
    when the inertial measurement unit is stationary, collecting second inertial measurement unit data through the inertial measurement unit over a preset collection duration t; and
    performing Allan variance analysis on the second inertial measurement unit data to obtain error parameters of the inertial measurement unit.
  5. The method according to claim 4, wherein the performing Allan variance analysis on the second inertial measurement unit data to obtain error parameters of the inertial measurement unit comprises:
    dividing the second inertial measurement unit data into N equal groups, and calculating a mean value of each group of second inertial measurement unit data;
    calculating an Allan variance σ²(T) from the mean values, where T is the time span of each group of second inertial measurement unit data, T = t/N;
    calculating an Allan standard deviation σ(T) from the Allan variance σ²(T); and
    plotting a log-log curve of the Allan standard deviation σ(T) against the time span T to obtain the error parameters.
  6. The method according to claim 1 or 2, wherein after the performing rolling shutter calibration on the camera and obtaining a corresponding parameter value, the method further comprises: issuing an error prompt when the parameter value is not within the first preset range;
    and after the calculating the relative external parameters of the camera and the inertial measurement unit, the method further comprises: ending the current calibration when the relative external parameters are within a second preset range, and issuing an error prompt when the relative external parameters are not within the second preset range.
  7. A device for calibrating the relative attitude of a camera and an inertial measurement unit, comprising:
    an internal parameter calibration module, configured to perform internal parameter calibration on a camera and an inertial measurement unit separately, the camera and the inertial measurement unit both being set on a robotic arm;
    a drive and acquisition execution module, configured to drive the robotic arm to carry the camera and the inertial measurement unit along a preset trajectory, while collecting first image data of a calibration board through the camera and collecting first inertial measurement unit data through the inertial measurement unit; and
    a first external parameter calibration module, configured to perform rolling shutter calibration on the camera and obtain a corresponding parameter value when the camera is a rolling shutter camera, and, when the parameter value is within a first preset range, to calculate relative external parameters of the camera and the inertial measurement unit according to the first image data and the first inertial measurement unit data.
  8. A terminal device, comprising a memory and a processor, the memory storing a computer program runnable on the processor, wherein the steps of the method according to any one of claims 1 to 6 are implemented when the processor executes the computer program.
  9. A system for calibrating the relative attitude of a camera and an inertial measurement unit, comprising a robotic arm, a camera, an inertial measurement unit, and the terminal device according to claim 8, wherein the camera and the inertial measurement unit are both set on the robotic arm, and the terminal device is connected to the robotic arm, the camera, and the inertial measurement unit, respectively.
  10. A computer-readable storage medium storing a computer program, wherein the steps of the method according to any one of claims 1 to 7 are implemented when the computer program is executed by a processor.
PCT/CN2020/089868 2019-06-24 2020-05-12 Calibration method and device for the relative attitude of a camera and an inertial measurement unit WO2020259106A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910547763.0A CN110378968B (zh) 2019-06-24 2019-06-24 Calibration method and device for the relative attitude of a camera and an inertial measurement unit
CN201910547763.0 2019-06-24

Publications (1)

Publication Number Publication Date
WO2020259106A1 true WO2020259106A1 (zh) 2020-12-30

Family

ID=68249155

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/089868 WO2020259106A1 (zh) 2019-06-24 2020-05-12 Calibration method and device for the relative attitude of a camera and an inertial measurement unit

Country Status (2)

Country Link
CN (1) CN110378968B (zh)
WO (1) WO2020259106A1 (zh)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112837382A (zh) * 2021-02-20 2021-05-25 中国铁建重工集团股份有限公司 一种多相机标定方法
CN112902988A (zh) * 2021-03-12 2021-06-04 Oppo广东移动通信有限公司 参数标定方法、装置、终端和存储介质
CN112985311A (zh) * 2021-02-09 2021-06-18 上海同陆云交通科技有限公司 一种车载便携轻量化智能巡检方法与系统
CN113655453A (zh) * 2021-08-27 2021-11-16 阿波罗智能技术(北京)有限公司 用于传感器标定的数据处理方法、装置及自动驾驶车辆
CN113763479A (zh) * 2021-07-19 2021-12-07 长春理工大学 一种折反射全景相机与imu传感器的标定方法
CN113838149A (zh) * 2021-10-09 2021-12-24 智道网联科技(北京)有限公司 自动驾驶车辆的相机内参标定方法、服务器及系统
CN114549656A (zh) * 2022-02-14 2022-05-27 希姆通信息技术(上海)有限公司 Ar眼镜相机与imu的标定方法
CN115342806A (zh) * 2022-07-14 2022-11-15 歌尔股份有限公司 头戴显示设备的定位方法、装置、头戴显示设备及介质

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110378968B (zh) * 2019-06-24 2022-01-14 奥比中光科技集团股份有限公司 相机和惯性测量单元相对姿态的标定方法及装置
CN112204946A (zh) * 2019-10-28 2021-01-08 深圳市大疆创新科技有限公司 数据处理方法、装置、可移动平台及计算机可读存储介质
CN111060138B (zh) * 2019-12-31 2022-01-28 上海商汤智能科技有限公司 标定方法及装置、处理器、电子设备、存储介质
CN111275769B (zh) * 2020-01-17 2023-10-24 联想(北京)有限公司 一种单目视觉参数的校正方法及装置
CN111739102B (zh) * 2020-05-27 2023-07-21 杭州易现先进科技有限公司 电子设备的内外参标定方法、装置和计算机设备
CN112325905B (zh) * 2020-10-30 2023-02-24 歌尔科技有限公司 一种用于识别imu的测量误差的方法、装置及介质
WO2022193318A1 (zh) * 2021-03-19 2022-09-22 深圳市大疆创新科技有限公司 外参标定方法、装置、可移动平台及计算机可读存储介质
CN113686269B (zh) * 2021-08-24 2024-01-16 浙江西大门新材料股份有限公司 一种卷帘面料平整度测试及评价方法
CN114413929A (zh) * 2021-12-06 2022-04-29 阿波罗智能技术(北京)有限公司 定位信息的校验方法、装置、系统、无人车及存储介质
CN114500842A (zh) * 2022-01-25 2022-05-13 维沃移动通信有限公司 视觉惯性标定方法及其装置
CN114526746A (zh) * 2022-03-15 2022-05-24 智道网联科技(北京)有限公司 高精地图车道线的生成方法、装置、设备及存储介质
CN114833821A (zh) * 2022-03-29 2022-08-02 高德软件有限公司 相机参数标定方法、系统及计算机程序产品

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013092871A (ja) * 2011-10-25 2013-05-16 Secom Co Ltd カメラ姿勢算出装置
US20160379365A1 (en) * 2015-06-26 2016-12-29 Kabushiki Kaisha Topcon Camera calibration device, camera calibration method, and camera calibration program
CN109074664A (zh) * 2017-10-26 2018-12-21 深圳市大疆创新科技有限公司 姿态标定方法、设备及无人飞行器
CN109388874A (zh) * 2018-09-28 2019-02-26 深圳市欢创科技有限公司 一种imu仿真方法及imu仿真模型
CN110378968A (zh) * 2019-06-24 2019-10-25 深圳奥比中光科技有限公司 相机和惯性测量单元相对姿态的标定方法及装置

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9243931B2 (en) * 2012-11-28 2016-01-26 Drs Sustainment Systems, Inc. AZ/EL gimbal housing characterization
JP6852355B2 (ja) * 2016-11-09 2021-03-31 セイコーエプソン株式会社 プログラム、頭部装着型表示装置
CN107341831B (zh) * 2017-07-06 2020-10-27 青岛海通胜行智能科技有限公司 一种imu辅助的视觉特征鲁棒跟踪方法及装置
CN107314778B (zh) * 2017-08-04 2023-02-10 广东工业大学 一种相对姿态的标定方法、装置及系统
CN107767425A (zh) * 2017-10-31 2018-03-06 南京维睛视空信息科技有限公司 一种基于单目vio的移动端AR方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013092871A (ja) * 2011-10-25 2013-05-16 Secom Co Ltd カメラ姿勢算出装置
US20160379365A1 (en) * 2015-06-26 2016-12-29 Kabushiki Kaisha Topcon Camera calibration device, camera calibration method, and camera calibration program
CN109074664A (zh) * 2017-10-26 2018-12-21 深圳市大疆创新科技有限公司 姿态标定方法、设备及无人飞行器
CN109388874A (zh) * 2018-09-28 2019-02-26 深圳市欢创科技有限公司 一种imu仿真方法及imu仿真模型
CN110378968A (zh) * 2019-06-24 2019-10-25 深圳奥比中光科技有限公司 相机和惯性测量单元相对姿态的标定方法及装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHAO, GUONAN ET AL.: "Chapter 3. Comprehensive and Modern Physics Experiments", PHYSICAL EXPERIMENT OF COLLEGE, 28 February 1996 (1996-02-28), pages 227, XP009525244, ISBN: 7-5635-0148-7 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112985311A (zh) * 2021-02-09 2021-06-18 上海同陆云交通科技有限公司 一种车载便携轻量化智能巡检方法与系统
CN112837382A (zh) * 2021-02-20 2021-05-25 中国铁建重工集团股份有限公司 一种多相机标定方法
CN112902988A (zh) * 2021-03-12 2021-06-04 Oppo广东移动通信有限公司 参数标定方法、装置、终端和存储介质
CN113763479A (zh) * 2021-07-19 2021-12-07 长春理工大学 一种折反射全景相机与imu传感器的标定方法
CN113763479B (zh) * 2021-07-19 2024-04-12 长春理工大学 一种折反射全景相机与imu传感器的标定方法
CN113655453A (zh) * 2021-08-27 2021-11-16 阿波罗智能技术(北京)有限公司 用于传感器标定的数据处理方法、装置及自动驾驶车辆
CN113655453B (zh) * 2021-08-27 2023-11-21 阿波罗智能技术(北京)有限公司 用于传感器标定的数据处理方法、装置及自动驾驶车辆
CN113838149A (zh) * 2021-10-09 2021-12-24 智道网联科技(北京)有限公司 自动驾驶车辆的相机内参标定方法、服务器及系统
CN113838149B (zh) * 2021-10-09 2023-08-18 智道网联科技(北京)有限公司 自动驾驶车辆的相机内参标定方法、服务器及系统
CN114549656A (zh) * 2022-02-14 2022-05-27 希姆通信息技术(上海)有限公司 Ar眼镜相机与imu的标定方法
CN115342806A (zh) * 2022-07-14 2022-11-15 歌尔股份有限公司 头戴显示设备的定位方法、装置、头戴显示设备及介质

Also Published As

Publication number Publication date
CN110378968B (zh) 2022-01-14
CN110378968A (zh) 2019-10-25

Similar Documents

Publication Publication Date Title
WO2020259106A1 (zh) 相机和惯性测量单元相对姿态的标定方法及装置
US20170127045A1 (en) Image calibrating, stitching and depth rebuilding method of a panoramic fish-eye camera and a system thereof
CN112655024B (zh) 一种图像标定方法及装置
US9781405B2 (en) Three dimensional imaging with a single camera
CN108805938B (zh) 一种光学防抖模组的检测方法、移动终端及存储介质
WO2017020150A1 (zh) 一种图像处理方法、装置及摄像机
CN110296717B (zh) 一种事件数据流的处理方法及计算设备
CN110268445A (zh) 利用陀螺仪的相机自动校准
CN112750168B (zh) 事件相机内参的标定方法、装置、计算机设备和存储介质
US11042984B2 (en) Systems and methods for providing image depth information
CN109906471B (zh) 实时三维相机校准
CN111316325B (zh) 拍摄装置参数标定方法、设备及存储介质
CN111279354A (zh) 图像处理方法、设备及计算机可读存储介质
CN107560637B (zh) 头戴显示设备校准结果验证方法及头戴显示设备
CN111711756A (zh) 一种图像防抖方法、电子设备及存储介质
CN106210505A (zh) 图像校正电路、图像校正方法及相机模块
CN111800589A (zh) 图像处理方法、装置和系统,以及机器人
CN108260360B (zh) 场景深度计算方法、装置及终端
CN113436267B (zh) 视觉惯导标定方法、装置、计算机设备和存储介质
WO2020224199A1 (zh) 鱼眼相机标定系统、方法、装置、电子设备及存储介质
CN117288151B (zh) 一种投影设备的三维姿态确定方法、装置和电子设备
WO2020019175A1 (zh) 图像处理方法和设备、摄像装置以及无人机
CN113256728B (zh) Imu设备参数的标定方法及装置、存储介质、电子装置
CN111353945B (zh) 鱼眼图像校正方法、装置及存储介质
CN116309881A (zh) 一种云台相机外参测算方法、装置、设备及介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20830906

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20830906

Country of ref document: EP

Kind code of ref document: A1