CN110782496B - Calibration method, calibration device, aerial photographing equipment and storage medium - Google Patents


Info

Publication number
CN110782496B
CN110782496B (application CN201910843584.1A)
Authority
CN
China
Prior art keywords
camera
sensor
angular velocity
acceleration
calibration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910843584.1A
Other languages
Chinese (zh)
Other versions
CN110782496A (en)
Inventor
谢青青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Autel Intelligent Aviation Technology Co Ltd
Original Assignee
Shenzhen Autel Intelligent Aviation Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Autel Intelligent Aviation Technology Co Ltd
Priority: CN201910843584.1A
Publication of CN110782496A
Priority: PCT/CN2020/113256 (WO2021043213A1)
Application granted
Publication of CN110782496B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30204 - Marker
    • G06T 2207/30208 - Marker matrix

Abstract

The application relates to a calibration method, a calibration apparatus, an aerial photography device, and a storage medium. The calibration method includes: acquiring a plurality of camera poses of the camera in a world coordinate system while the camera captures a plurality of images of a calibration board during motion; obtaining camera angular velocities and camera accelerations of the camera at different times according to the plurality of camera poses; acquiring the sensor angular velocity and sensor acceleration measured by an inertial sensor during the motion of the camera; transforming the camera angular velocity and camera acceleration from the camera coordinate system of the camera to the sensor coordinate system of the inertial sensor according to preset spatial extrinsic parameters of the camera and the inertial sensor, to obtain a first predicted angular velocity and a first predicted acceleration; constructing an acceleration error term according to the first predicted angular velocity, the first predicted acceleration, the sensor angular velocity, and the sensor acceleration; and optimizing the spatial extrinsic parameters and the time offset to obtain the spatial extrinsic parameters and time offset that minimize the acceleration error term.

Description

Calibration method, calibration device, aerial photographing equipment and storage medium
Technical Field
The present application relates to the field of machine vision technologies, and in particular, to a calibration method and apparatus, an aerial photography device, and a storage medium.
Background
Smart devices increasingly include one or more cameras or other image-capture devices that enable a user to capture images. For example, a smartphone or an aircraft includes a camera that can capture images in a variety of scenes. Many camera systems are calibrated during manufacturing. However, existing calibration methods typically calibrate the time offset and the sensor extrinsic parameters in separate steps; without joint estimation the accuracy is low, because the latent relationship between the two parameters is ignored, resulting in low calibration accuracy.
Therefore, how to provide a scheme with high calibration accuracy is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
The application provides a calibration method, a calibration apparatus, an aerial photography device, and a storage medium, aiming to address the technical problem that existing calibration schemes are not sufficiently accurate.
In a first aspect, the present application provides a calibration method applied to an aerial device, where the aerial device includes a camera and an inertial sensor, and the method includes:
acquiring a plurality of camera poses of the camera under a world coordinate system when the camera shoots a plurality of images of a calibration plate in the motion process;
according to the camera poses, obtaining camera angular velocities and camera accelerations of the camera at different moments;
acquiring sensor angular velocity and sensor acceleration measured by the inertial sensor in the motion process of the camera;
transforming the camera angular velocity and the camera acceleration from a camera coordinate system of the camera to a sensor coordinate system of the inertial sensor according to preset spatial extrinsic parameters of the camera and the inertial sensor, to obtain a first predicted angular velocity and a first predicted acceleration;
constructing an acceleration error term according to the first predicted angular velocity, the first predicted acceleration, the sensor angular velocity, and the sensor acceleration;
and optimizing the spatial extrinsic parameters and a time offset to obtain the spatial extrinsic parameters and time offset that minimize the acceleration error term, wherein the time offset is the offset between the time defined by the inertial sensor and the time defined by the camera.
Preferably, when the camera captures a plurality of images of a calibration plate in the motion process, the acquiring a plurality of camera poses of the camera in the world coordinate system includes:
respectively extracting angular points of the calibration plate in the plurality of images;
and calculating the camera poses in the world coordinate system according to the corner points of the calibration board in each of the plurality of images.
Preferably, the obtaining of the camera angular velocities and camera accelerations of the camera at different times according to the plurality of camera poses includes:
spline fitting the plurality of camera poses to obtain a pose curve;
differentiating the pose curve to obtain the camera angular velocity and the camera acceleration of the camera at different times.
Preferably, the respectively extracting the corner points of the calibration board in the plurality of images includes:
respectively carrying out binarization processing on the multiple images;
performing pixel expansion on white pixels in the multiple images after binarization processing;
extracting quadrangles in the plurality of images after pixel expansion, wherein each image in the plurality of images comprises a plurality of quadrangles;
and extracting, from the plurality of quadrilaterals in each image, the corner point as the midpoint between two adjacent vertices of two adjacent quadrilaterals whose diagonals lie on the same straight line.
Preferably, the calculating the camera poses of the camera in the world coordinate system according to the corner points of the calibration plate in each of the plurality of images respectively includes:
and calculating the poses of the camera under the world coordinate system by using a camera calibration algorithm according to the corner points of the calibration plate in each image.
Preferably, the method further comprises:
continuously changing the time offset within a preset time-offset range to obtain the corresponding sensor angular velocity;
calculating a cross-correlation coefficient between the norm of the camera angular velocity and the norm of the sensor angular velocity, and taking the time offset that maximizes the cross-correlation coefficient as an initial estimate of the time offset;
wherein the initial estimate of the time offset is used as the initial value of the time offset when optimizing the time offset.
Preferably, the method further comprises:
transforming the camera angular velocity from a camera coordinate system of the camera to a sensor coordinate system of the inertial sensor according to the rotation component of the preset spatial extrinsic parameters, to obtain a second predicted angular velocity;
calculating the difference between the sensor angular velocity and the second predicted angular velocity to obtain an angular velocity error term;
optimizing the rotation component, and taking the rotation component that minimizes the angular velocity error term as an initial rotation component;
wherein the initial rotation component is used as an initial value when optimizing the spatial extrinsic parameters.
In a second aspect, the present invention further provides a calibration apparatus applied to an aerial photographing device, where the aerial photographing device includes a camera and an inertial sensor, and the calibration apparatus includes:
the pose acquisition module is used for acquiring a plurality of camera poses of the camera in a world coordinate system when the camera captures a plurality of images of the calibration board during motion;
the camera speed acquisition module is used for obtaining camera angular velocities and camera accelerations of the camera at different times according to the plurality of camera poses;
the sensor speed acquisition module is used for acquiring the sensor angular speed and the sensor acceleration measured by the inertial sensor in the motion process of the camera;
the prediction module is used for transforming the camera angular velocity and the camera acceleration from the camera coordinate system of the camera to the sensor coordinate system of the inertial sensor according to preset spatial extrinsic parameters of the camera and the inertial sensor, so as to obtain a first predicted angular velocity and a first predicted acceleration;
the error module is used for constructing an acceleration error term according to the first predicted angular velocity, the first predicted acceleration, the sensor angular velocity, and the sensor acceleration;
and the optimization module is used for optimizing the spatial extrinsic parameters and the time offset to obtain the spatial extrinsic parameters and time offset that minimize the acceleration error term, wherein the time offset is the offset between the time defined by the inertial sensor and the time defined by the camera.
In a third aspect, the present application further provides an aerial photography device, including:
a camera;
an inertial sensor;
a memory for storing a calibration program; and
and a processor communicatively connected to the camera and the inertial sensor, the processor being configured to implement the calibration method according to the embodiments of the first aspect of the present application when executing the calibration program.
In a fourth aspect, the present application further provides a storage medium, where the storage medium is a computer-readable storage medium, and a calibration program is stored in the storage medium, where the calibration program, when executed by a processor, implements the calibration method described in the embodiment of the first aspect of the present application.
Compared with the prior art, the calibration method, the calibration device, the aerial photography equipment and the storage medium can optimize the spatial external parameter and the time deviation for calibration, fully consider the potential relation between the time deviation and the spatial external parameter, and effectively improve the calibration precision.
Drawings
Fig. 1 is a flowchart of a calibration method according to a first embodiment of the present application.
Fig. 2 is a flowchart of step S11 in fig. 1.
Fig. 3A is a schematic diagram of an image after binarization processing in the present application.
Fig. 3B is a schematic diagram of an image after expansion of white pixels.
Fig. 3C is a schematic diagram of the image after quadrilateral extraction.
Fig. 3D is a schematic diagram of corner positions in an image according to the present application.
Fig. 4 is a flowchart of step S111 in fig. 2.
Fig. 5 is a flowchart of step S12 in fig. 1.
Fig. 6 is a flowchart of a calibration method according to a second embodiment of the present application.
Fig. 7 is a block diagram of a calibration apparatus according to a third embodiment of the present application.
Fig. 8 is a schematic structural diagram of an aerial photography device according to a fourth embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that the descriptions relating to "first", "second", etc. in this application are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In addition, technical solutions between the embodiments may be combined with each other, but must be based on the realization of the technical solutions by a person skilled in the art, and when the technical solutions are contradictory to each other or cannot be realized, such a combination should not be considered to exist, and is not within the protection scope claimed in the present application.
Referring to fig. 1, fig. 1 illustrates a calibration method provided in a first embodiment of the present application. The method may be performed by a calibration apparatus, which may be implemented in hardware and/or software, and serves to accurately calibrate a camera and an inertial measurement unit (IMU). It is applied to an aerial photography device that includes the camera and the inertial sensor, the camera and the inertial sensor being fixed relative to each other. The calibrated result can be applied to an unmanned aerial vehicle. The calibration method comprises the following steps:
S11: Acquire a plurality of camera poses of the camera in a world coordinate system while the camera captures a plurality of images of the calibration board during motion.
When calibration is performed, a calibration board with known physical dimensions is required; in this embodiment, it is a black-and-white checkerboard calibration board. The calibration board determines a world coordinate system comprising an x axis, a y axis, and a z axis. The camera determines a camera coordinate system, the inertial sensor determines a sensor coordinate system, and the images captured by the camera determine an image coordinate system and a pixel coordinate system. The camera and the inertial sensor move together; the camera photographs the calibration board during the motion, and the camera poses at different times and positions can be computed by a camera calibration algorithm, where a camera pose comprises a camera position and a camera attitude. Preferably, the simultaneous motion of the camera and the inertial sensor fully exercises the rotational and translational degrees of freedom.
During the motion, the camera and the inertial sensor move simultaneously, and the camera captures an image of the calibration board once every first time interval. The first time interval is not limited; for example, if it is 0.1 s, the camera captures an image of the calibration board every 0.1 s. Over a period of time, multiple images are captured. Preferably, the number of images captured of the calibration board is greater than 10.
Referring to fig. 2, step S11 may include:
S111: Extract the corner points of the calibration board in each of the plurality of images.
Referring to figs. 3A, 3B, 3C, 3D and 4, step S111 may include:
S1111: Perform binarization processing on each of the plurality of images.
That is, each image of the calibration board is binarized; fig. 3A shows an image after binarization.
S1112: and performing pixel expansion on the white pixels in the plurality of images after the binarization processing.
In a plurality of images, each white pixel of the binarized image is subjected to pixel expansion. And the expansion of the white pixel points can separate the connection of each black block quadrangle. Fig. 3B is a schematic diagram of the white pixel after expansion.
S1113: and extracting quadrangles in the plurality of images after pixel expansion, wherein each image in the plurality of images comprises a plurality of quadrangles.
Extracting the quadrangles in the multiple images after pixel expansion, extracting outlines, calculating a convex hull of each outline, and judging whether the extracted polygons have four vertexes or not; if only four vertices are detected, it is a quadrilateral. Interfering quadrilaterals may be removed, such as by removing some interfering quadrilaterals using constraints such as aspect ratio, perimeter, area, etc. Fig. 3C is a schematic diagram after the quadrangle is extracted.
S1114: and extracting a midpoint between points of two adjacent corners of two adjacent quadrangles with opposite angles on the same straight line as the corner point from the plurality of quadrangles in each image.
Two nonadjacent corners in the quadrangles are opposite corners, the opposite corners are on the same straight line, the two adjacent quadrangles are only adjacent by the two corners. Two adjacent corners of the two quadrangles are opposite corners. Fig. 3D is a schematic diagram of corner locations.
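As a rough illustration, steps S1111, S1112 and S1114 can be sketched in Python. This is a minimal sketch under assumed thresholds, not the patent's implementation; a real pipeline would typically use an image-processing library such as OpenCV for the contour and convex-hull work of S1113.

```python
import numpy as np

def binarize(img, thresh=128):
    # S1111: threshold a grayscale image into a binary (0/255) image
    return np.where(img >= thresh, 255, 0).astype(np.uint8)

def dilate_white(binary, iterations=1):
    # S1112: grow white regions by one pixel per iteration (4-neighbourhood),
    # separating the black checkerboard quadrilaterals from one another
    # (edges wrap around here, which is acceptable for a sketch)
    out = binary.copy()
    for _ in range(iterations):
        shifted = [np.roll(out, s, axis=a) for a in (0, 1) for s in (1, -1)]
        out = np.maximum.reduce([out] + shifted)
    return out

def corner_from_vertices(v1, v2):
    # S1114: the calibration corner is the midpoint between the two nearly
    # touching vertices of two diagonally adjacent quadrilaterals
    return (np.asarray(v1, dtype=float) + np.asarray(v2, dtype=float)) / 2.0
```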
S112: and calculating the poses of the cameras under the world coordinate system according to the corner points of the calibration plate in each image of the images.
Specifically, step S112 includes:
and calculating the poses of the camera under the world coordinate system by using a camera calibration algorithm according to the corner points of the calibration plate in each image.
Each image determines one camera pose of the camera. Using a camera calibration algorithm to compute the camera pose is well-known prior art in the field, so this embodiment describes it only briefly.
Specifically, a pose is a variable describing the motion of an object in three-dimensional space: the position is the (x, y, z) coordinates of the object in a specified coordinate system, and the attitude is the rotation about the x, y, and z axes, so a pose may be written as (tx, ty, tz, rx, ry, rz), where tx, ty, tz represent translation and rx, ry, rz represent rotation. The calibration board is a known object in the world coordinate system: the three-dimensional coordinates of each corner point on the board are known, and the pixel coordinates of the detected corner points are known. The projection of points in the world coordinate system to two-dimensional points is determined by the three-dimensional corner coordinates, the pixel coordinates of the corners, and the camera's calibration parameters, and can be described by the following formula:
P = P1 · P2 · P3    (1)

where

P1 = [ R  T
       0  1 ]    (2)

P2 = [ f  0  0  0
       0  f  0  0
       0  0  1  0 ]    (3)

P3 = [ 1/dx    0    u0
        0    1/dy   v0
        0      0     1 ]    (4)

[u, v, 1]^T ∝ P · [Xw, Yw, Zw, 1]^T    (5)
the matrix P1 is a rigid transformation matrix that transforms points from the world coordinate system to the camera coordinate system, the matrix P2 is a transformation relation between the camera coordinate system and the image coordinate system, wherein f denotes a focal length of the camera, three-dimensional coordinates in the camera coordinate system can be transformed into two-dimensional coordinates on an imaging plane by using the matrix P2, the matrix P3 is a transformation relation between the image coordinate system and the pixel coordinate system, that is, the matrix P3 is an internal reference matrix of the camera, K denotes a scaling factor between the focal length and a physical size, u and v denote horizontal and vertical axes of the image, the two-dimensional coordinates on the imaging plane can be transformed into standard pixel coordinates, and dx and dy respectively represent physical sizes of each pixel on the horizontal and vertical axes x and y. The matrix P is a matrix representation that transforms points on the world coordinate system to the pixel coordinate system. The matrix P2 and the matrix P3 are determined by parameters of the camera, the matrix P1 can perform equivalent transformation with the pose, and therefore a series of corner point projection equations are listed, namely the unknowns in the first matrix can be solved, so that the poses of the camera under the world coordinate system determined by the calibration board can be determined.
S12: and obtaining the angular speed and the acceleration of the camera at different moments according to the poses of the cameras.
Referring to fig. 5, in some embodiments, step S12 includes:
S121: Spline fitting is performed on the plurality of camera poses to obtain a pose curve.
Each camera pose is equivalent to one point, and a pose curve can be obtained by spline fitting the plurality of camera poses. In this embodiment, B-spline fitting is performed on the poses; B-spline fitting of a series of points is prior art and is not described in detail in this application. A B-spline is chosen to express the camera pose curve because B-spline curves have local support, so adjusting a single parameter affects only part of the spline, and because B-spline basis functions are polynomials that can be differentiated and integrated simply, making the error term easy to evaluate for correct calibration. Spline fitting yields a pose curve, converting a discrete problem into a continuous one.
S122: differentiating the pose curve to obtain the camera angular velocity and the camera acceleration of the camera at different time instants.
Differentiating the pose curve yields the camera angular velocities and camera accelerations at different times; these may be continuous, forming a camera angular-velocity curve and a camera acceleration curve. The pose curve has six dimensions, comprising the position and attitude along the x, y, and z axes: the camera acceleration is obtained by twice differentiating the position components, and the camera angular velocity is obtained by differentiating the attitude components.
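Steps S121 and S122 can be sketched with SciPy's B-spline routines. The one-dimensional track x(t) = t^2 is hypothetical data standing in for a single component of the pose curve; differentiating the fitted spline recovers the velocity 2t and the constant acceleration 2.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# hypothetical samples of one pose component: x(t) = t**2
t = np.linspace(0.0, 1.0, 20)
x = t ** 2

# S121: fit a B-spline through the sampled poses (quintic, so it stays
# smooth under two differentiations)
spline = make_interp_spline(t, x, k=5)

# S122: differentiate the pose curve to obtain velocity and acceleration
vel = spline.derivative(1)
acc = spline.derivative(2)
```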
S13: and acquiring the angular velocity and acceleration of the inertial sensor during the motion of the camera.
The inertial sensor is fixed relative to the camera, so when the camera moves, the inertial sensor moves synchronously with it. While the camera is in motion, the inertial sensor operates, so the sensor angular velocity and sensor acceleration measured during the camera's motion can be acquired. The sensor angular velocity is the angular velocity measured by the inertial sensor, and the sensor acceleration is the acceleration it measures. Preferably, the sensor angular velocity and sensor acceleration are acquired continuously, that is, as a sensor angular-velocity curve and a sensor acceleration curve. The sensor angular velocities and sensor accelerations are vectors, with both direction and magnitude. It will be appreciated that the time defined by the inertial sensor and the time defined by the camera may differ, i.e., there is a time offset. For example, the camera considers that it captured image A at time T1, and the inertial sensor's system considers that the sensor angular velocity B and sensor acceleration C were acquired at time T1, when in fact there is a difference between the two clocks, such as B and C actually being acquired at time T1 + 0.05 s. The time offset is therefore defined in this application as the offset between the time defined by the inertial sensor and the time defined by the camera. The time offset is subsequently optimized to obtain an accurate calibration.
S14: and according to preset spatial parameters of the camera and the inertial sensor, transforming the camera angular velocity and the camera acceleration from a camera coordinate system of the camera to a sensor coordinate system of the inertial sensor to obtain a first predicted angular velocity and a first predicted acceleration.
The spatial extrinsic parameters of the camera and the inertial sensor describe the rigid-body transformation from the camera coordinate system to the sensor coordinate system. Because the relative positions of the camera and the inertial sensor are fixed and set as needed, this transformation is known in advance; that is, preset spatial extrinsic parameters are available. Using these preset parameters, the camera angular velocity is transformed from the camera coordinate system of the camera to the sensor coordinate system of the inertial sensor to obtain the first predicted angular velocity, and the camera acceleration is transformed from the camera coordinate system to the sensor coordinate system to obtain the first predicted acceleration. Transforming all camera angular velocities in the camera angular-velocity curve yields a first predicted angular-velocity curve, and transforming all camera accelerations in the camera acceleration curve yields a first predicted acceleration curve. Because the first predicted angular velocity and the first predicted acceleration are obtained by differentiating the pose curve, integration of the inertial sensor's measurements is avoided, which improves calibration accuracy.
It will be appreciated that although the spatial extrinsic parameters are nominally known, they may contain errors; for example, the measured distance between the camera and the inertial sensor may be inaccurate, which introduces errors into the preset spatial extrinsic parameters. This application therefore subsequently optimizes the spatial extrinsic parameters of the camera and the inertial sensor to obtain an accurate calibration.
S15: an acceleration error term is constructed from the first predicted angular velocity, the predicted acceleration, the sensor angular velocity, and the sensor acceleration.
The first predicted angular velocity, first predicted acceleration, sensor angular velocity, and sensor acceleration used here may be all of the values obtained, i.e., the values along the first predicted angular-velocity curve, the first predicted acceleration curve, the sensor angular-velocity curve, and the sensor acceleration curve. The acceleration error term is the sum of the squared differences between the first predicted angular velocity and the sensor angular velocity, and between the first predicted acceleration and the sensor acceleration. These quantities are vectors; letting the first predicted angular velocity be a1, the first predicted acceleration a2, the sensor angular velocity a3, and the sensor acceleration a4, the acceleration error term can be formulated as:
(a1 - a3)^T (a1 - a3) + (a2 - a4)^T (a2 - a4)    (6)
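Formula (6) can be evaluated directly; the sketch below uses the same symbols a1-a4 as the text above.

```python
import numpy as np

def acceleration_error(a1, a2, a3, a4):
    # formula (6): a1 first predicted angular velocity, a2 first predicted
    # acceleration, a3 sensor angular velocity, a4 sensor acceleration
    dw = np.asarray(a1, dtype=float) - np.asarray(a3, dtype=float)
    da = np.asarray(a2, dtype=float) - np.asarray(a4, dtype=float)
    return float(dw @ dw + da @ da)
```

The error is zero exactly when both predictions match the measurements.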
s16: and optimizing the spatial external parameter and the time deviation to obtain the spatial external parameter and the time deviation which enables the acceleration error term to be minimum, wherein the time deviation is the time defined by the inertial sensor and is opposite to the time deviation defined by the camera shooting.
Specifically, changing the time offset translates the sensor angular-velocity curve and the sensor acceleration curve as a whole along the time axis. For example, suppose the camera believes that the camera angular velocity H and camera acceleration I were acquired at absolute time T2 when they were actually acquired at T2 - 0.01 s, while the inertial sensor believes that the sensor angular velocity J and sensor acceleration K were acquired at absolute time T2 when they were actually acquired at T2 + 0.02 s. To reduce the acceleration error term, the time axis of the inertial sensor then needs to be shifted forward by 0.03 s; once it is shifted, the sensor angular velocity and sensor acceleration attributed to time T2 change. In practice, the exact difference between the two clocks is unknown, so the time offset must be varied continuously and judged by the resulting value of the acceleration error term. Likewise, when the spatial extrinsic parameters change, the first predicted angular velocity and the first predicted acceleration change accordingly. The spatial extrinsic parameters and the time offset are therefore the independent variables, and the acceleration error term is the dependent variable that changes with them.
When optimizing the spatial external parameter and the time offset, a nonlinear optimization method can be adopted. Taking the gradient descent method as an example, for a cost function whose minimum has no analytical solution, the gradient can be computed at an initial value, the independent variables moved a step in the direction opposite to the gradient, and these steps repeated until the gradient is small enough. The gradient descent method is prior art, so this application does not describe in detail how it is used to optimize the spatial external parameter and the time offset; it finally yields the spatial external parameter and time offset that minimize the acceleration error term. Obtaining suitable spatial external parameters and a suitable time offset completes the calibration.
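As a concrete illustration of such a nonlinear optimization, the sketch below runs finite-difference gradient descent on a toy stand-in for the acceleration error term, with one hypothetical extrinsic degree of freedom and the time offset as the two independent variables. The cost function and its minimum at (0.1, 0.03) are invented for the demo; the real error term is the one constructed in the preceding steps.

```python
import numpy as np

def error_term(params):
    """Toy stand-in for the acceleration error term.

    params[0] plays the role of one spatial-extrinsic degree of freedom,
    params[1] the time offset; the (invented) minimum is at (0.1, 0.03).
    """
    d_ext, d_t = params[0] - 0.1, params[1] - 0.03
    return d_ext**2 + 4.0 * d_t**2 + 0.5 * d_ext * d_t

def gradient_descent(f, x0, lr=0.1, tol=1e-8, max_iter=10000, h=1e-6):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # Numerical gradient by central differences.
        g = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h)
                      for e in np.eye(len(x))])
        if np.linalg.norm(g) < tol:   # stop when the gradient is small enough
            break
        x = x - lr * g                # move against the gradient direction
    return x

opt = gradient_descent(error_term, [0.0, 0.0])
```

A production implementation would typically use a library solver (e.g. a Gauss-Newton or Levenberg-Marquardt routine) rather than hand-rolled descent, but the loop above shows the iterate-until-small-gradient idea the text describes.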
The calibration method provided by the embodiment optimizes the spatial external parameter and the time deviation for calibration, fully considers the potential relation between the time deviation and the spatial external parameter, and effectively improves the calibration precision.
Referring to fig. 6, a second embodiment of the present application also provides a calibration method. Based on the foregoing embodiment, this embodiment provides a scheme for supplying initial values for the spatial external parameter and the time offset. The calibration method includes:
S21: when the camera shoots a plurality of images of the calibration plate in the motion process, acquiring a plurality of camera poses of the camera under a world coordinate system.
S22: and obtaining the angular speed and the acceleration of the camera at different moments according to the poses of the cameras.
S23: and acquiring the angular velocity and acceleration of the inertial sensor during the motion of the camera.
S24: and continuously changing the time deviation within a preset time deviation range to obtain the corresponding angular speed of the sensor.
Continuously changing the time offset within the preset time-offset range means continuously translating the time axis of the inertial sensor, so that the sensor angular velocity associated with a given time changes. For example, if sensor angular velocity D was originally obtained at time T1+0.05s and the time offset is changed by 0.02s, then sensor angular velocity D now corresponds to time T1+0.03s, and the sensor angular velocity at time T1+0.05s is no longer D. The preset time-offset range can be set as required.
S25: and calculating a cross-correlation coefficient between the module length of the angular velocity of the camera and the module length of the angular velocity of the sensor, and taking the time deviation with the maximum cross-correlation coefficient as an initial time deviation estimation.
The cross-correlation coefficient is a statistical index reflecting how closely two variables are correlated, here the camera angular velocity and the sensor angular velocity. Computing the cross-correlation coefficient between two sequences is prior art and is only briefly described in this application. Specifically, assuming the camera angular velocity is (ax1, ay1, az1) and the sensor angular velocity is (ax2, ay2, az2), the modulo length of the camera angular velocity is denoted {Xn} and the modulo length of the sensor angular velocity is denoted {Yn}, where

Xn = √(ax1² + ay1² + az1²)

Yn = √(ax2² + ay2² + az2²)

The cross-correlation coefficient of the camera angular velocity modulo length Xn and the sensor angular velocity modulo length Yn is formulated as

R(m) = Σn Xn · Y(n+m) / √( (Σn Xn²) · (Σn Y(n+m)²) )

where m is the time offset, and -3 ≤ m ≤ 3 is an assumed preset search range.
By continuously sliding the time offset, different cross-correlation coefficients are obtained, and the time offset that maximizes the cross-correlation coefficient is taken as the initial estimate of the time offset. This initial estimate is used as the initial value of the time offset when the time offset is optimized.
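The sliding search can be sketched as follows: take the modulo lengths of the two angular-velocity sequences, evaluate a normalized cross-correlation over a preset range of sample lags, and keep the lag with the maximum coefficient. Synthetic signals, a uniform 10 ms sample period, and the lag range are illustrative assumptions.

```python
import numpy as np

dt = 0.01                                    # assumed uniform sample period (s)
t = np.arange(0, 2, dt)
true_lag = 5                                 # sensor lags camera by 5 samples (0.05 s)

# Made-up angular-velocity magnitudes: the sensor sees the same motion,
# delayed and slightly noisy.
base = np.abs(np.sin(2 * np.pi * 1.3 * t)) + 0.2 * np.sin(2 * np.pi * 0.4 * t)
cam_norm = base
sen_norm = np.roll(base, true_lag) + 0.01 * np.random.default_rng(0).normal(size=t.size)

def ncc(x, y, m):
    """Normalized cross-correlation of x against y shifted by m samples."""
    ys = np.roll(y, -m)
    return np.dot(x, ys) / np.sqrt(np.dot(x, x) * np.dot(ys, ys))

lags = range(-20, 21)                        # preset time-offset search range
best = max(lags, key=lambda m: ncc(cam_norm, sen_norm, m))
t_offset_init = best * dt                    # initial time-offset estimate (s)
```

Angular-velocity magnitudes are used (rather than the raw vectors) so that the search does not depend on the still-unknown extrinsic rotation between the two devices.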
S26: transforming the camera angular velocity from the camera coordinate system of the camera to a sensor coordinate system of an inertial sensor according to an external reference rotation component in the preset spatial external reference to obtain a second predicted angular velocity;
the preset spatial parameters of the camera and the inertial sensor are known. Wherein the spatial external reference comprises an external reference rotation component and an external reference translation component. The external reference rotation component is a conversion relation from the rotation of a camera coordinate system through rigid body rotation conversion to the direction consistent with the direction of a sensor coordinate system. The external reference translation component is translated from a camera coordinate system to a transformation relation consistent with the position of a sensor coordinate system through rigid translation transformation. Wherein the extrinsic rotational component and the extrinsic translational component are both known. The camera angular velocity can be transformed from the camera coordinate system of the camera to the sensor coordinate system of the inertial sensor based on the external reference rotational component to obtain a second predicted angular velocity.
S27: calculating a difference between the camera angular velocity and a second predicted angular velocity to obtain the angular velocity error term;
the angular velocity error term is a difference between the camera angular velocity and the second predicted angular velocity.
S28: optimizing the external reference rotation component, and taking the external reference rotation component which enables the angular speed error term to be minimum as an initial rotation component;
when the external reference rotation component is changed, the angular velocity error term is changed. Therefore, the external reference rotation component is an independent variable, the angular velocity error term is a dependent variable, and the angular velocity error term changes along with the change of the external reference rotation component.
When optimizing the external reference rotation component, a nonlinear optimization method can be used. Taking the gradient descent method as an example, for a cost function whose minimum has no analytical solution, the gradient can be computed at an initial value, the independent variable moved a step in the direction opposite to the gradient, and these steps repeated until the gradient is small enough. The gradient descent method is prior art, so this application does not describe in detail how it is used to optimize the external reference rotation component; it finally yields the external reference rotation component that minimizes the angular velocity error term. The initial rotation component is used as an initial value for optimizing the spatial external parameter.
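The patent optimizes the rotation component iteratively; a common closed-form alternative for this rotation-only subproblem (not taken from the patent) is the Kabsch/Wahba SVD solution, which directly aligns paired camera and sensor angular-velocity vectors. A sketch on noise-free synthetic data, with a made-up ground-truth rotation:

```python
import numpy as np

def align_rotation(v_cam, v_sen):
    """Closed-form rotation R minimizing sum ||R @ v_cam_i - v_sen_i||^2
    (Kabsch / Wahba solution via SVD)."""
    H = v_cam.T @ v_sen                       # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

rng = np.random.default_rng(1)
# Hypothetical extrinsic rotation: 30 degrees about z.
a = np.deg2rad(30)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
w_cam = rng.normal(size=(50, 3))              # camera angular velocities
w_sen = w_cam @ R_true.T                      # same motion seen in the sensor frame

R_est = align_rotation(w_cam, w_sen)
```

With noisy real data the same SVD solution gives a least-squares estimate, which can then serve as the initial rotation component for the joint optimization.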
S29: and according to preset spatial parameters of the camera and the inertial sensor, transforming the camera angular velocity and the camera acceleration from a camera coordinate system of the camera to a sensor coordinate system of the inertial sensor to obtain a first predicted angular velocity and a first predicted acceleration.
S210: an acceleration error term is constructed from the first predicted angular velocity, the predicted acceleration, the sensor angular velocity, and the sensor acceleration.
S211: optimizing the spatial external parameter and the time offset to obtain the spatial external parameter and time offset that minimize the acceleration error term, wherein the time offset is the deviation of the time defined by the inertial sensor relative to the time defined by the camera.
The initial estimate of the time offset is used as an initial value of the time offset when optimizing the time offset, and the initial rotational component is used as an initial value for optimizing the spatial external parameter.
The calibration method provided by the embodiment can respectively determine the initial values of the spatial external parameter and the time deviation, and can effectively accelerate the calibration speed.
Referring to fig. 7, a third embodiment of the present application provides a calibration device 30 for calibrating a camera and an inertial sensor, where the calibration device 30 may be applied to an aerial photographing apparatus, the aerial photographing apparatus includes the camera and the inertial sensor, the camera and the inertial sensor are fixedly connected, the calibration device 30 may implement the calibration method of the foregoing embodiment, and the calibration device 30 includes:
a pose acquisition module 31, configured to acquire multiple camera poses of the camera in a world coordinate system when the camera captures multiple images of a calibration board during a motion process;
the camera speed acquisition module 32 is configured to obtain camera angular speeds and camera accelerations of the camera at different times according to the multiple camera poses;
a sensor speed acquisition module 33, configured to acquire a sensor angular speed and a sensor acceleration measured by the inertial sensor during a motion process of the camera;
a prediction module 34, configured to transform the camera angular velocity and the camera acceleration from a camera coordinate system of the camera to a sensor coordinate system of the inertial sensor according to preset spatial parameters of the camera and the inertial sensor, so as to obtain a first predicted angular velocity and a first predicted acceleration;
an error module 35, configured to construct an acceleration error term according to the first predicted angular velocity, the predicted acceleration, the sensor angular velocity, and the sensor acceleration;
and an optimizing module 36, configured to optimize the spatial external parameter and the time offset to obtain the spatial external parameter and time offset that minimize the acceleration error term, wherein the time offset is the deviation of the time defined by the inertial sensor relative to the time defined by the camera.
The calibration device provided by the third embodiment of the application can optimize the spatial external parameter and the time deviation for calibration, fully considers the potential relation between the time deviation and the spatial external parameter, and effectively improves the calibration precision.
Preferably, the calibration device 30 further comprises:
the changing module is used for continuously changing the time deviation within a preset time deviation range to obtain the corresponding angular speed of the sensor;
a first calculation module configured to calculate a cross-correlation coefficient between a modulo length of the camera angular velocity and a modulo length of the sensor angular velocity, and to use a time offset that maximizes the cross-correlation coefficient as an initial time offset estimate; wherein the time offset initial estimate is used as an initial value of the time offset when optimizing the time offset;
a transformation module, configured to transform the camera angular velocity from a camera coordinate system of the camera to a sensor coordinate system of an inertial sensor according to an external parameter rotation component in the preset spatial external parameter to obtain a second predicted angular velocity;
a second calculation module for calculating a difference between the camera angular velocity and a second predicted angular velocity to obtain the angular velocity error term;
and the initial rotation module is used for optimizing the external reference rotation component and taking the external reference rotation component which minimizes the angular velocity error term as the initial rotation component, wherein the initial rotation component is used as an initial value for optimizing the spatial external parameter.
Preferably, the pose acquisition module 31 includes:
the extraction unit is used for respectively extracting the angular points of the calibration plate in the plurality of images;
and the pose determining unit is used for calculating the poses of the cameras under the world coordinate system according to the corner points of the calibration plate in each image of the images.
Preferably, the camera speed acquisition module 32 includes:
a fitting unit for spline fitting the plurality of camera poses to obtain a pose curve;
a camera speed acquisition unit for differentiating the pose curve to obtain the camera angular speed and the camera acceleration of the camera at different times.
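The fitting-and-differentiation step above can be sketched for a single position coordinate (orientation in practice needs a spline on SO(3)/quaternions, which is omitted here). The timestamps and position track below are invented for illustration:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical camera positions along one axis, sampled from the per-frame poses.
t = np.linspace(0.0, 1.0, 50)     # frame timestamps (s)
x = 0.5 * t**2 + 0.2 * t          # made-up position track (m)

cs = CubicSpline(t, x)            # spline-fit the discrete pose samples
vel = cs(t, 1)                    # first derivative of the pose curve: velocity
acc = cs(t, 2)                    # second derivative: acceleration
```

Differentiating the smooth fitted curve rather than the raw samples avoids amplifying per-frame pose noise, which is why the pose curve is fitted before taking derivatives.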
Preferably, the extraction unit comprises:
a binarization subunit, configured to perform binarization processing on the multiple images respectively;
the expansion subunit is used for performing pixel expansion on the white pixels in the plurality of images after the binarization processing;
a quadrilateral extracting subunit, configured to extract a quadrilateral from the plurality of images after pixel expansion, where each of the plurality of images includes a plurality of quadrilaterals;
and the angular point extraction subunit is used for extracting, from the plurality of quadrangles in each image, the midpoint between the two nearest corner vertices of two diagonally adjacent quadrangles lying on the same straight line, and taking this midpoint as the angular point.
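The first two subunits (binarization and white-pixel dilation) can be sketched with plain NumPy; the quadrilateral and midpoint extraction that follow are omitted. The threshold value, test image, and 3x3 structuring element are illustrative assumptions, not the patent's parameters:

```python
import numpy as np

def binarize(img, thresh=128):
    """Threshold a grayscale image to {0, 255}."""
    return np.where(img >= thresh, 255, 0).astype(np.uint8)

def dilate_white(binary, it=1):
    """Grow white regions by one pixel per iteration (3x3 square structuring
    element), implemented as a max over shifted copies."""
    out = binary.copy()
    for _ in range(it):
        padded = np.pad(out, 1, mode="constant")
        h, w = out.shape
        stack = [padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
        out = np.max(stack, axis=0)
    return out

img = np.zeros((7, 7), dtype=np.uint8)
img[3, 3] = 200                   # single bright pixel
b = binarize(img)                 # -> one white pixel
d = dilate_white(b)               # -> 3x3 white block around it
```

Dilating the white background shrinks the black calibration squares so that touching quadrilaterals separate, which makes the subsequent per-quadrilateral extraction possible.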
The product can execute the method provided by any embodiment of the application, and has the corresponding functional module and the beneficial effect of the execution method.
Referring to fig. 8, a fourth embodiment of the present application provides an aerial photography device 40, and the aerial photography device 40 can perform the calibration method described in the foregoing embodiments. The aerial photography device 40 includes:
a camera (not shown); inertial sensors (not shown); one or more processors 41 and memory 42. Here, one processor 41 is illustrated as an example.
The processor 41 and the memory 42 may be connected by a bus or other means, and are illustrated as being connected by a bus. Processor 41 is also communicatively coupled to the camera and the inertial sensor.
The memory 42 is a non-volatile computer readable storage medium, and can be used for storing a non-volatile software program, a non-volatile computer executable program, and program instructions corresponding to a calibration method in the above embodiments of the present application. Processor 41 executes non-volatile software program instructions stored in memory 42 to perform various functional applications of a calibration method and data processing, i.e., to implement one of the above-described method embodiments.
The memory 42 may include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function, and the like.
Further, the memory 42 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, memory 42 may optionally include memory located remotely from processor 41, which may be connected to processor 41 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The program instructions are stored in the memory 42 and, when executed by the one or more processors 41, perform the steps of a calibration method in any of the method embodiments described above.
The product can execute the method provided by the embodiment of the application, and has corresponding beneficial effects of the execution method. For technical details that are not described in detail in this embodiment, reference may be made to the methods provided in the above embodiments of the present application.
Embodiments of the present application also provide a non-transitory computer-readable storage medium storing computer-executable instructions, which are executed by one or more processors, such as a processor 41 in the figure, to enable the computer to perform the steps of a calibration method in any of the above-mentioned method embodiments.
Embodiments of the present application further provide a computer program product, which includes a computer program stored on a non-volatile computer-readable storage medium, where the computer program includes program instructions, which, when executed by one or more processors, such as the processor 41 in the figure, can cause the computer to perform the steps of one of the calibration methods in any of the above-described method embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the embodiments may be implemented by software plus a general hardware platform, and may also be implemented by hardware. Those skilled in the art will appreciate that all or part of the flow of the method implementing the above embodiments may be performed by hardware associated with computer program instructions.
The present application also provides a storage medium, which is a computer-readable storage medium; the calibration program may be stored in the computer-readable storage medium, and when the program is executed it may include the flows of the embodiments of the calibration methods above. The storage medium may include flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a magnetic memory, a magnetic disk, an optical disk, and the like. The memory 107 may in some embodiments be an internal storage unit of the aircraft 10, illustratively a hard disk of the aircraft 10. The memory 107 may also be, in other embodiments, an external storage device of the aircraft 10, illustratively a plug-in hard drive provided on the aircraft 10, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash memory card (Flash Card), or the like.
Compared with the prior art, the calibration method, the calibration device, the aerial photography equipment and the storage medium optimize the space external parameter and the time deviation for calibration, fully consider the potential relation between the time deviation and the space external parameter, and effectively improve the calibration precision.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (9)

1. A calibration method for an aerial device, the aerial device comprising a camera and an inertial sensor, the method comprising:
acquiring a plurality of camera poses of the camera under a world coordinate system when the camera shoots a plurality of images of a calibration plate in the motion process;
according to the camera poses, obtaining camera angular velocities and camera accelerations of the camera at different moments;
acquiring sensor angular velocity and sensor acceleration measured by the inertial sensor in the motion process of the camera;
transforming the camera angular velocity and the camera acceleration from a camera coordinate system of the camera to a sensor coordinate system of the inertial sensor according to preset spatial parameters of the camera and the inertial sensor to obtain a first predicted angular velocity and a predicted acceleration;
constructing an acceleration error term according to the first predicted angular velocity, the predicted acceleration, the sensor angular velocity, and the sensor acceleration;
optimizing the spatial external parameter and the time deviation to obtain the spatial external parameter and the time deviation which minimize the acceleration error term, wherein the time deviation is the deviation of the time defined by the inertial sensor relative to the time defined by the camera;
continuously changing the time deviation within a preset time deviation range to obtain a corresponding sensor angular speed;
calculating a cross-correlation coefficient of a modulo length of the camera angular velocity and a modulo length of the sensor angular velocity, and taking a time deviation that maximizes the cross-correlation coefficient as an initial estimate of time deviation;
wherein the initial estimate of the time offset is used as an initial value of the time offset when optimizing the time offset.
2. The calibration method according to claim 1, wherein when the camera captures a plurality of images of the calibration plate during the movement, the acquiring a plurality of camera poses of the camera in the world coordinate system comprises:
respectively extracting angular points of the calibration plate in the plurality of images;
and calculating the poses of the cameras under the world coordinate system according to the corner points of the calibration plate in each image of the images.
3. The calibration method according to claim 2, wherein the obtaining of the camera angular velocity and the camera acceleration of the camera at different times according to the plurality of camera poses comprises:
spline fitting the plurality of camera poses to obtain a pose curve;
differentiating the pose curve to obtain the camera angular velocity and the camera acceleration of the camera at different times.
4. The calibration method as claimed in claim 2, wherein said separately extracting the corner points of the calibration plate in the plurality of images comprises:
respectively carrying out binarization processing on the multiple images;
performing pixel expansion on white pixels in the multiple images after the binarization processing;
extracting quadrangles in the plurality of images after pixel expansion, wherein each image in the plurality of images comprises a plurality of quadrangles;
and extracting, from the plurality of quadrangles in each image, the midpoint between the two nearest corner vertices of two diagonally adjacent quadrangles lying on the same straight line as the corner point.
5. The calibration method according to claim 2, wherein the calculating the plurality of camera poses of the camera in the world coordinate system according to the corner points of the calibration plate in each of the plurality of images respectively comprises:
and calculating the poses of the cameras under the world coordinate system by using a camera calibration algorithm according to the corner points of the calibration plate in each image.
6. A calibration method according to claim 1, characterized in that the method further comprises:
transforming the camera angular velocity from the camera coordinate system of the camera to a sensor coordinate system of an inertial sensor according to an external reference rotation component in the preset spatial external reference to obtain a second predicted angular velocity;
calculating a difference between the camera angular velocity and a second predicted angular velocity to obtain the angular velocity error term;
optimizing the external reference rotation component, and taking the external reference rotation component which enables the angular speed error term to be minimum as an initial rotation component;
wherein the initial rotation component is used as an initial value for optimizing the spatial outlier.
7. The utility model provides a calibration device, is applied to equipment of taking photo by plane, equipment of taking photo by plane includes camera and inertial sensor, its characterized in that, calibration device includes:
the pose acquisition module is used for acquiring a plurality of camera poses of the camera under a world coordinate system when the camera shoots a plurality of images of a calibration plate in the motion process;
the camera speed acquisition module is used for acquiring the camera angular speed and the camera acceleration of the camera at different moments according to the plurality of camera poses;
the sensor speed acquisition module is used for acquiring the sensor angular speed and the sensor acceleration measured by the inertial sensor in the motion process of the camera;
the prediction module is used for transforming the camera angular velocity and the camera acceleration from a camera coordinate system of the camera to a sensor coordinate system of the inertial sensor according to preset spatial parameters of the camera and the inertial sensor so as to obtain a first predicted angular velocity and a first predicted acceleration;
an error module for constructing an acceleration error term based on the first predicted angular velocity, the predicted acceleration, the sensor angular velocity, and the sensor acceleration;
the optimization module is used for optimizing the spatial external parameter and the time deviation to obtain the spatial external parameter and the time deviation which minimize the acceleration error term, wherein the time deviation is the deviation of the time defined by the inertial sensor relative to the time defined by the camera;
continuously changing the time deviation within a preset time deviation range to obtain a corresponding sensor angular speed;
calculating a cross-correlation coefficient of a modulo length of the camera angular velocity and a modulo length of the sensor angular velocity, and taking a time deviation that maximizes the cross-correlation coefficient as an initial estimate of time deviation;
wherein the initial estimate of the time offset is used as an initial value of the time offset when optimizing the time offset.
8. An aerial device, comprising:
a camera;
an inertial sensor;
a memory for storing a calibration program; and
a processor communicatively connected to the camera and the inertial sensor, the processor being configured to implement the calibration method as claimed in any one of claims 1 to 6 when executing the calibration procedure.
9. A storage medium, which is a computer-readable storage medium, characterized in that the storage medium stores therein a calibration program, which when executed by a processor implements a calibration method according to any one of claims 1 to 6.
CN201910843584.1A 2019-09-06 2019-09-06 Calibration method, calibration device, aerial photographing equipment and storage medium Active CN110782496B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910843584.1A CN110782496B (en) 2019-09-06 2019-09-06 Calibration method, calibration device, aerial photographing equipment and storage medium
PCT/CN2020/113256 WO2021043213A1 (en) 2019-09-06 2020-09-03 Calibration method, device, aerial photography device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910843584.1A CN110782496B (en) 2019-09-06 2019-09-06 Calibration method, calibration device, aerial photographing equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110782496A CN110782496A (en) 2020-02-11
CN110782496B true CN110782496B (en) 2022-09-09

Family

ID=69384056

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910843584.1A Active CN110782496B (en) 2019-09-06 2019-09-06 Calibration method, calibration device, aerial photographing equipment and storage medium

Country Status (2)

Country Link
CN (1) CN110782496B (en)
WO (1) WO2021043213A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110782496B (en) * 2019-09-06 2022-09-09 深圳市道通智能航空技术股份有限公司 Calibration method, calibration device, aerial photographing equipment and storage medium
CN111351487A (en) * 2020-02-20 2020-06-30 深圳前海达闼云端智能科技有限公司 Clock synchronization method and device of multiple sensors and computing equipment
CN111551191B (en) * 2020-04-28 2022-08-09 浙江商汤科技开发有限公司 Sensor external parameter calibration method and device, electronic equipment and storage medium
CN113701745B (en) * 2020-05-21 2024-03-08 杭州海康威视数字技术股份有限公司 External parameter change detection method, device, electronic equipment and detection system
CN111951314B (en) * 2020-08-21 2021-08-31 贝壳找房(北京)科技有限公司 Point cloud registration method and device, computer readable storage medium and electronic equipment
CN112362084A (en) * 2020-11-23 2021-02-12 北京三快在线科技有限公司 Data calibration method, device and system
CN112598749B (en) * 2020-12-21 2024-02-27 西北工业大学 Calibration method for large-scene non-common-view multi-camera
CN116558545A (en) * 2022-01-29 2023-08-08 北京三快在线科技有限公司 Calibration method and device for sensor data
CN115235527B (en) * 2022-07-20 2023-05-12 上海木蚁机器人科技有限公司 Sensor external parameter calibration method and device and electronic equipment
CN115388914B (en) * 2022-10-28 2023-02-03 福思(杭州)智能科技有限公司 Parameter calibration method and device for sensor, storage medium and electronic device

Citations (1)

Publication number Priority date Publication date Assignee Title
CN107255476A (en) * 2017-07-06 2017-10-17 青岛海通胜行智能科技有限公司 A kind of indoor orientation method and device based on inertial data and visual signature

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
US10306206B2 (en) * 2013-07-23 2019-05-28 The Regents Of The University Of California 3-D motion estimation and online temporal calibration for camera-IMU systems
CN105606127A (en) * 2016-01-11 2016-05-25 北京邮电大学 Calibration method for relative attitude of binocular stereo camera and inertial measurement unit
WO2019080052A1 (en) * 2017-10-26 2019-05-02 深圳市大疆创新科技有限公司 Attitude calibration method and device, and unmanned aerial vehicle
CN109685852B (en) * 2018-11-22 2020-08-21 上海肇观电子科技有限公司 Calibration method, system, equipment and storage medium for camera and inertial sensor
CN109949370B (en) * 2019-03-15 2023-05-26 苏州天准科技股份有限公司 Automatic method for IMU-camera combined calibration
CN110782496B (en) * 2019-09-06 2022-09-09 深圳市道通智能航空技术股份有限公司 Calibration method, calibration device, aerial photographing equipment and storage medium

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN107255476A (en) * 2017-07-06 2017-10-17 青岛海通胜行智能科技有限公司 A kind of indoor orientation method and device based on inertial data and visual signature

Non-Patent Citations (1)

Title
New Strategies for Time Delay Estimation during System Calibration for UAV-Based GNSS/INS-Assisted Imaging Systems;Lisa LaForest et al.;《MDPI remote sensing》;20190801;第1-36页 *

Also Published As

Publication number Publication date
WO2021043213A1 (en) 2021-03-11
CN110782496A (en) 2020-02-11

Similar Documents

Publication Publication Date Title
CN110782496B (en) Calibration method, calibration device, aerial photographing equipment and storage medium
CN109993113B (en) Pose estimation method based on RGB-D and IMU information fusion
CN107255476B (en) Indoor positioning method and device based on inertial data and visual features
CN104596502B (en) Object posture measuring method based on CAD model and monocular vision
KR102016551B1 (en) Apparatus and method for estimating position
CN109658457B (en) Method for calibrating arbitrary relative pose relationship between laser and camera
EP3753685B1 (en) Control system and control method
CN103020952B (en) Messaging device and information processing method
KR100855657B1 (en) System and method for estimating the self-position of a mobile robot using a monocular zoom camera
CN110261870A (en) Simultaneous localization and mapping method based on vision-inertial-laser fusion
CN111354042A (en) Method and device for extracting features of robot visual image, robot and medium
JP6370038B2 (en) Position and orientation measurement apparatus and method
CN103759716A (en) Dynamic target position and attitude measurement method based on monocular vision at tail end of mechanical arm
CN110176032B (en) Three-dimensional reconstruction method and device
WO2015134794A2 (en) Method and system for 3d capture based on structure from motion with simplified pose detection
EP2912631A1 (en) Determination of position from images and associated camera positions
WO2011105522A1 (en) Three-dimensional measurement apparatus, processing method, and non-transitory computer-readable storage medium
CN112880687A (en) Indoor positioning method, device, equipment and computer readable storage medium
TW201904643A (en) Control device, flight vehicle and recording medium
US20180075614A1 (en) Method of Depth Estimation Using a Camera and Inertial Sensor
CN111968228B (en) Augmented reality self-positioning method based on aviation assembly
KR20220054582A (en) Visual positioning method and related apparatus, device and computer readable storage medium
TWI599987B (en) System and method for combining point clouds
CN110720113A (en) Parameter processing method and device, camera equipment and aircraft
JP5698815B2 (en) Information processing apparatus, information processing apparatus control method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 518055 9th Floor, Building B1, No. 1001, Zhiyuan Road, Xili Street, Nanshan District, Shenzhen, Guangdong

Applicant after: Shenzhen daotong intelligent Aviation Technology Co.,Ltd.

Address before: 518055 9th Floor, Building B1, No. 1001, Zhiyuan Road, Xili Street, Nanshan District, Shenzhen, Guangdong

Applicant before: AUTEL ROBOTICS Co.,Ltd.

GR01 Patent grant