WO2021195939A1 - Method for calibrating external parameters of a binocular camera, movable platform, and system - Google Patents

Method for calibrating external parameters of a binocular camera, movable platform, and system

Info

Publication number
WO2021195939A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
point cloud
rotation
translation
Prior art date
Application number
PCT/CN2020/082359
Other languages
English (en)
French (fr)
Inventor
熊策
周游
徐彬
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2020/082359
Publication of WO2021195939A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration

Definitions

  • This application relates to the field of vision application technology, and in particular to a calibration method, a movable platform and a system for external parameters of a binocular camera.
  • the external parameters of the binocular camera may indicate the spatial relationship between the first camera and the second camera in the binocular camera, and the spatial relationship may include a translation relationship and a rotation relationship.
  • the rotation relationship may include the rotation of the first camera and the second camera of the binocular camera in the epipolar direction and the rotation in the parallax direction.
  • However, the existing method can only calibrate the rotation of the binocular camera in the epipolar direction and cannot calibrate its rotation in the parallax direction, which makes the calibration of the external parameters of the binocular camera insufficiently accurate. How to calibrate the rotation of the binocular camera in the parallax direction is therefore a problem that urgently needs to be solved.
  • In view of this, the embodiments of the present application provide a method for calibrating the external parameters of a binocular camera, a movable platform, and a system, which can calibrate the rotation in the parallax direction between the first camera and the second camera by means of the point cloud sensor, the first camera, and the second camera.
  • an embodiment of the present application provides a method for calibrating external parameters of a binocular camera.
  • the method is applied to a movable platform.
  • the movable platform includes a point cloud sensor and a binocular camera.
  • the binocular camera includes a first camera and a second camera, and the method includes:
  • obtaining the point cloud of the surrounding environment of the movable platform output by the point cloud sensor; obtaining the first image of the surrounding environment of the movable platform output by the first camera and the second image of the surrounding environment of the movable platform output by the second camera; and calibrating the rotation between the first camera and the second camera in the parallax direction according to the point cloud, the first image, and the second image.
  • an embodiment of the present application provides a calibration system for external parameters of a binocular camera, including:
  • a memory, a processor, a point cloud sensor, and a binocular camera, wherein the binocular camera includes a first camera and a second camera;
  • the memory is used to store program code
  • the processor calls the program code and, when the program code is executed, is used to perform the following operations:
  • obtaining the point cloud of the surrounding environment of the movable platform output by the point cloud sensor; obtaining the first image of the surrounding environment of the movable platform output by the first camera and the second image of the surrounding environment of the movable platform output by the second camera; and calibrating the rotation between the first camera and the second camera in the parallax direction according to the point cloud, the first image, and the second image.
  • an embodiment of the present application provides a movable platform, including: a body; and the calibration system for the external parameters of a binocular camera provided in the second aspect, wherein the calibration system is carried on the body.
  • an embodiment of the present application provides a computer-readable storage medium that stores one or more instructions, and the one or more instructions are suitable for being loaded by a processor to execute the method for calibrating the external parameters of a binocular camera described in the first aspect above.
  • In the embodiments of the present invention, the movable platform uses a point cloud sensor as an aid to perform an accurate optimization calculation of the rotation of the binocular camera in the parallax direction, specifically calibrating that rotation. Therefore, by implementing the method for calibrating the external parameters of the binocular camera, the movable platform, and the system described in the embodiments of this application, a complete calibration of the external parameters of the binocular camera can be achieved, the difficulty of the calibration process is reduced, and the accuracy of the external parameter calibration of the binocular camera is improved.
  • FIG. 1 is a schematic diagram of the rotation of a binocular camera in the parallax direction according to an embodiment of the present application
  • FIG. 2 is a schematic flowchart of a method for calibrating external parameters of a binocular camera provided by an embodiment of the present invention
  • FIG. 3 is a schematic flowchart of a method for determining rotation and translation between a point cloud sensor and a first camera according to an embodiment of the present invention
  • FIG. 4 is a schematic flowchart of a method for determining rotation between a first camera and a second camera in a parallax direction according to an embodiment of the present invention
  • Figure 5a is a schematic diagram of a planar area provided by an embodiment of the present invention.
  • Figure 5b is a schematic diagram of a non-planar area provided by an embodiment of the present invention.
  • Fig. 6a is a schematic diagram of an initial planar image area provided by an embodiment of the present invention.
  • Fig. 6b is a schematic diagram of a planar image area provided by an embodiment of the present invention.
  • FIG. 7 is a structural diagram of a calibration system for external parameters of a binocular camera provided by an embodiment of the present invention.
  • the method for calibrating the external parameters of the binocular camera provided in the embodiment of the present application can be applied to a movable platform.
  • the movable platform may include: binocular camera, point cloud sensor, etc.
  • the movable platform provided in the embodiments of the present invention may include unmanned aerial vehicles, VR glasses, self-driving vehicles, smart phones and other smart devices with computer vision modules;
  • the binocular camera may include a first camera and a second camera.
  • The first camera or the second camera may be a camera module or a camera, or the like;
  • the point cloud sensor may be a 3D-ToF (Time of Flight) camera, a laser radar (lidar), a high-resolution millimeter-wave radar, or the like;
  • the present invention does not limit this.
  • the external parameters of the binocular camera may indicate the spatial relationship between the first camera and the second camera in the binocular camera, and the spatial relationship may include a translation relationship and a rotation relationship.
  • the rotation relationship may include the rotation of the first camera and the second camera of the binocular camera in the parallax direction.
  • FIG. 1 is a schematic diagram of the rotation of a binocular camera in the parallax direction according to an embodiment of the present application.
  • If the external parameters of the binocular camera can be calibrated online in real time while the movable platform moves through natural scenes, this is particularly important for the stereo vision module, i.e., multiple cameras facing the same direction.
  • In the actual production stage, after the movable platform is calibrated at the factory, the external parameters will change due to thermal expansion and contraction, collisions, vibration, and the like; the rotation of the binocular camera in the parallax direction is the most prone to change, and this rotation has a relatively large impact on the accuracy of the device's results.
  • The general approach is to use a self-calibration algorithm. Self-calibration can be divided into two parts: the epipolar direction and the parallax direction. The rotation in the epipolar direction can be optimized through the epipolar constraint, whereas the rotation in the parallax direction is difficult to calibrate during flight in natural scenes. The rotation in the parallax direction is shown as H = H2·H1 in FIG. 1.
  • the embodiments of the present invention provide a calibration method, a movable platform, and a system for external parameters of a binocular camera.
  • To avoid this situation, the external parameters of the binocular camera can be calibrated as follows: obtain the point cloud of the surrounding environment of the movable platform output by the point cloud sensor; obtain the first image of the surrounding environment of the movable platform output by the first camera and the second image of the surrounding environment of the movable platform output by the second camera; and calibrate, according to the point cloud, the first image, and the second image, the rotation between the first camera and the second camera in the parallax direction.
  • In the embodiments of the present invention, the movable platform uses a point cloud sensor as an aid to perform an accurate optimization calculation of the rotation of the binocular camera in the parallax direction, specifically calibrating that rotation. Therefore, by implementing the method for calibrating the external parameters of the binocular camera described in the embodiments of the present application, the rotation of the first camera and the second camera in the parallax direction can be calibrated online, so that the external parameters of the binocular camera can be calibrated accurately, the difficulty of the calibration process is reduced, and the accuracy of the external parameter calibration of the binocular camera is improved.
  • the following takes the calibration method of the external parameters of the binocular camera as an example for description.
  • FIG. 2 is a schematic flowchart of a method for calibrating external parameters of a binocular camera according to an embodiment of the present invention.
  • the method is applied to a movable platform.
  • the movable platform includes a point cloud sensor and a binocular camera.
  • the binocular camera includes a first camera and a second camera.
  • As shown in FIG. 2, the method for calibrating the external parameters of the binocular camera may include steps S210 to S230, as follows:
  • Step S210: The movable platform obtains the point cloud of the surrounding environment of the movable platform output by the point cloud sensor.
  • A point cloud is a massive set of points that expresses the spatial distribution of a target and the characteristics of the target's surface under the same spatial reference system; after the spatial coordinates of each sampling point on the object's surface are obtained, the result is a set of points, which is called a "point cloud".
  • Step S220 The movable platform acquires the first image of the surrounding environment of the movable platform output by the first camera and the second image of the surrounding environment of the movable platform output by the second camera.
  • The first camera is the reference camera, and may specifically be a left-eye camera. For example, if a mobile phone has multiple cameras, such as one front camera and two rear cameras, the first camera may be the front camera, or it may be either of the two rear cameras.
  • Optionally, the first camera may also be a right-eye camera; in the same example, the first camera may likewise be the front camera or either of the two rear cameras.
  • The second camera is any camera in the movable platform other than the reference camera, and may specifically be a right-eye camera. In the mobile phone example with one front camera and two rear cameras, if the first camera is the front camera, the second camera may be either of the two rear cameras; optionally, the second camera may also be a left-eye camera.
  • Step S230 The movable platform calibrates the rotation between the first camera and the second camera in the parallax direction according to the point cloud, the first image and the second image.
  • In one implementation, the movable platform determines the deviation of the parallax between the first camera and the second camera based on the point cloud, the rotation and translation between the point cloud sensor and the first camera, the first image, and the second image; the movable platform then calibrates the rotation between the first camera and the second camera in the parallax direction according to the deviation of the parallax.
  • The deviation of the parallax is calculated from the disparity between the point cloud sensor and the first camera and the disparity between the first camera and the second camera. For example, the deviation of the parallax is the difference between the disparity between the point cloud sensor and the first camera and the disparity between the first camera and the second camera.
  • When the deviation of the parallax is zero, the spatial point in the surrounding environment of the movable platform corresponding to a feature point in the first image output by the first camera and the spatial point corresponding to the matching feature point in the second image output by the second camera are the same point. When the deviation of the parallax is greater than zero, these two spatial points are not the same point.
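  • As a toy numeric illustration of the definition above (the pixel values below are assumed, not taken from the publication), the deviation is simply the absolute difference of the two disparities:

```python
# Deviation of the parallax: delta_d = |delta_d1 - delta_d2| (values assumed).
d_tof_first = 12.0     # disparity between the point cloud sensor and the first camera (px)
d_first_second = 10.5  # disparity between the first and the second camera (px)
deviation = abs(d_tof_first - d_first_second)
print(deviation)       # 1.5 px; a value of 0 would mean both map to the same spatial point
```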
  • Through the method for calibrating the external parameters of the binocular camera described in this embodiment of the present invention, the rotation of the first camera and the second camera in the parallax direction can be calibrated online with the assistance of the point cloud sensor, so that the external parameters of the binocular camera can be calibrated accurately, which reduces the difficulty of the calibration process and improves the accuracy of the external parameter calibration of the binocular camera.
  • FIG. 3 is a schematic flowchart of a method for determining rotation and translation between a point cloud sensor and a first camera according to an embodiment of the present invention.
  • As shown in FIG. 3, the schematic flowchart may include steps S310 to S360, as follows:
  • Step S310 The movable platform determines multiple depth jump points in the point cloud.
  • In one implementation, the movable platform traverses the point cloud to determine multiple target points, where the difference between the depth value of a target point and the depth values of its multiple adjacent points is greater than or equal to a preset depth threshold; the movable platform then determines the multiple target points as the multiple depth jump points.
  • The positional relationship between a target point and its adjacent points may be that of the target point and the points in its eight-neighborhood, i.e., a 3x3 grid in which the eight points other than the target point are called the eight-neighborhood.
  • Optionally, the positional relationship between a target point and its adjacent points may be that of the target point and the points in its three-neighborhood, i.e., a 2x2 grid in which the three points other than the target point are called the three-neighborhood.
  • The present invention does not specifically limit the "adjacent" relationship between a target point and its adjacent points; it can be set in advance, and different "adjacent" relationships can be set in different application scenarios.
  • For example, the movable platform traverses the point cloud collected by the point cloud sensor; when the difference between the depth value of a certain point and the depth value of any point in its eight-neighborhood is greater than a preset threshold, it can be determined that this point is a depth jump point, which may be an edge point of an object.
  • The movable platform records the determined depth jump points as P, a set of points; for example, P contains P1, P2, P3, and so on.
  • Suppose the neighborhood is the 3x3 grid, the depth value of a point P1 in the point cloud is 3 m, the point to the left of P1 is P2 with a depth value of 2.9 m, and the point to the right of P1 is P3 with a depth value of 1.0 m; then P1 can be determined to be a point where the depth jumps, i.e., a depth jump point.
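  • The following is a minimal sketch of this traversal for an organized point cloud stored as an H x W depth map, assuming an eight-neighborhood comparison; the function name and the 0.5 m threshold are illustrative choices, not values from the publication:

```python
import numpy as np

def find_depth_jump_points(depth, threshold=0.5):
    """Flag pixels whose depth differs from any 8-neighbor by >= threshold (meters)."""
    H, W = depth.shape
    jumps = np.zeros((H, W), dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = depth.copy()  # border pixels compare against themselves
            ys_dst = slice(max(dy, 0), H + min(dy, 0))
            xs_dst = slice(max(dx, 0), W + min(dx, 0))
            ys_src = slice(max(-dy, 0), H + min(-dy, 0))
            xs_src = slice(max(-dx, 0), W + min(-dx, 0))
            shifted[ys_dst, xs_dst] = depth[ys_src, xs_src]
            jumps |= np.abs(depth - shifted) >= threshold
    return np.argwhere(jumps)  # (row, col) indices of the depth jump points P

# Mirrors the P2 (2.9 m), P1 (3.0 m), P3 (1.0 m) example from the text:
depth = np.array([[2.9, 3.0, 1.0]])
print(find_depth_jump_points(depth))  # flags both sides of the 3.0 m -> 1.0 m jump
```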
  • Step S320: The movable platform obtains the original translation and original rotation between the point cloud sensor and the first camera.
  • In one implementation, after determining the multiple depth jump points in the point cloud, the movable platform obtains the original translation and original rotation between the point cloud sensor and the first camera.
  • For example, taking the point cloud sensor as a 3D-ToF camera and the first camera as the left-eye camera, the rotation and translation between the 3D-ToF camera and the left-eye camera can be written as the transform T = [R, t; 0, 1], where R refers to the rotation between the 3D-ToF camera and the left-eye camera and t refers to the translation between them. Specifically, R denotes the rotation from the 3D-ToF coordinate system to the coordinate system of the left-eye camera, and t denotes the position of the origin of the 3D-ToF coordinate system in the coordinate system of the left-eye camera.
  • The original translation and original rotation between the point cloud sensor and the first camera may be the translation and rotation calibrated before the movable platform leaves the factory, or they may be a preset translation and rotation applied after the movable platform leaves the factory; the preset translation and rotation can be set based on experience.
  • Step S330 The movable platform determines the pixel points of the multiple depth jump points in the first image according to the multiple depth jump points and the original translation and original rotation.
  • The depth jump point is a point in three-dimensional space, and the pixel point is the point corresponding to the depth jump point in the two-dimensional image coordinate system.
  • Step S340 The movable platform obtains the edge response values of the pixel points of the multiple depth jump points in the first image.
  • the movable platform runs an edge detection algorithm on the first image to obtain edge response values of pixels in the first image.
  • the edge detection algorithm may include: Sobel edge detection algorithm, Isotropic Sobel edge detection algorithm, Roberts edge detection algorithm, Prewitt edge detection algorithm, Laplacian edge detection algorithm, Canny edge detection algorithm, etc.
  • Taking the Canny edge detection algorithm as an example, the movable platform detects the edge information in the first image collected by the first camera and can obtain the edge information of the pixel corresponding to each depth jump point in the first image; specifically, the movable platform obtains the edge response values of the pixels of the multiple depth jump points in the first image.
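  • A minimal sketch of such an edge response map, assuming OpenCV and a grayscale input. Because the optimization in step S350 minimizes the summed response, the map below is built so that it is low on edges: Canny edges are converted into a distance transform. This particular construction is an assumption for illustration, not the publication's prescribed detector output:

```python
import cv2

def edge_cost_map(gray):
    """Response map that is LOW on image edges, so that minimizing the
    summed response pulls projected depth-jump pixels onto edges."""
    edges = cv2.Canny(gray, 50, 150)  # Canny thresholds are assumed values
    return cv2.distanceTransform(255 - edges, cv2.DIST_L2, 3)

gray = cv2.imread("first_image.png", cv2.IMREAD_GRAYSCALE)  # path is illustrative
E = edge_cost_map(gray)
# The edge response of a projected pixel p = (u, v) is then E[v, u].
```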
  • Step S350: The movable platform performs an optimization operation with the minimization of the sum of the edge response values of the pixels of the multiple depth jump points in the first image as the optimization target, and with the original translation and the original rotation as the optimization objects.
  • For example, according to the original rotation R and the original translation t between the 3D-ToF sensor and the left-eye camera, the movable platform projects the set P of depth jump points onto the first image. The pixel coordinates of the pixels corresponding to the depth jump points in the first image are recorded as p, a set of pixels that may include p1, p2, p3, and so on. Denoting this projection process as pi, then: p_i = pi(R*P_i + t), where P, p, and t are vectors and R is a matrix.
  • In one implementation, the movable platform successively finds the pixel p1 corresponding to P1, the pixel p2 corresponding to P2, the pixel p3 corresponding to P3, and so on. After obtaining multiple pairs (P, p), the sum of the edge response values E(p) over the pixels p is used as the cost function, and the accurate rotation R and translation t are obtained by solving the optimization: (R, t) = argmin_{R,t} sum_i E(pi(R*P_i + t)), where arg indicates that the optimized parameters are the rotation R and the translation t.
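  • A compact sketch of this projection and optimization, assuming a pinhole intrinsic matrix K for the first camera, the depth jump points P (N x 3, in the point cloud sensor frame), and the edge cost map E from the previous sketch; the Rodrigues-vector parameterization and the Nelder-Mead solver are illustrative choices, not mandated by the publication:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def project(P, rvec, t, K):
    """pi: p_i = K (R P_i + t), returned as (u, v) pixel coordinates."""
    R = Rotation.from_rotvec(rvec).as_matrix()
    cam = P @ R.T + t                      # sensor frame -> first camera frame
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]          # perspective divide

def cost(x, P, K, E):
    """Sum of edge responses E(p_i) over all projected depth jump points."""
    uv = np.rint(project(P, x[:3], x[3:], K)).astype(int)
    u = np.clip(uv[:, 0], 0, E.shape[1] - 1)
    v = np.clip(uv[:, 1], 0, E.shape[0] - 1)
    return E[v, u].sum()

# Illustrative inputs; in practice K, P, E, rvec0, t0 come from the pipeline.
K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
P = np.array([[0.1, 0.2, 3.0], [-0.3, 0.1, 2.5]])
E = np.zeros((480, 640))
rvec0, t0 = np.zeros(3), np.array([0.05, 0.0, 0.0])  # original rotation/translation
res = minimize(cost, np.concatenate([rvec0, t0]), args=(P, K, E),
               method="Nelder-Mead")
rvec_opt, t_opt = res.x[:3], res.x[3:]     # refined extrinsics
```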
  • Step S360 The movable platform determines the optimized original translation and original rotation as the rotation and translation between the point cloud sensor and the first camera.
  • In one implementation, the movable platform performs the optimization operation on the original translation and the original rotation according to the positions of the multiple depth jump points, the original translation, the original rotation, and the edge response values, and then determines the optimized original translation and original rotation as the rotation and translation between the point cloud sensor and the first camera.
  • Through the method for calibrating the external parameters of the binocular camera described in this embodiment of the present invention, the rotation and translation between the point cloud sensor and the first camera in the movable platform after it leaves the factory can be accurately determined.
  • FIG. 4 is a schematic flowchart of a method for determining the rotation between the first camera and the second camera in the parallax direction according to an embodiment of the present invention.
  • As shown in FIG. 4, the schematic flowchart may include steps S410 to S450, as follows:
  • Step S410 The movable platform determines a plane area in the surrounding environment of the movable platform according to the point cloud.
  • the movable platform projects the plane area into the first image according to the rotation and translation between the point cloud sensor and the first imaging device to obtain an initial plane image area.
  • In this embodiment of the application, first, the movable platform executes a fitting algorithm on the calculated reference three-dimensional information (e.g., three-dimensional coordinates) of each spatial point to determine an initial plane area. Then, the movable platform can calculate the distance from each spatial point to the initial plane area; each spatial point whose distance to the initial plane area is less than or equal to a first distance threshold is marked as a target spatial point. If the number of target spatial points is greater than or equal to a first preset number threshold, or the ratio of the number of target spatial points to the number of the multiple spatial points is greater than or equal to a first ratio threshold, it is determined that the initial plane area is a plane area.
  • As shown in Fig. 5a and Fig. 5b, the different gray levels in the figures represent the different distances from each target spatial point to the grid plane.
  • In Fig. 5a, the gray levels of the target spatial points are approximately the same, so it can be judged that the distances from the target spatial points to the grid plane are approximately the same and the points can be fitted to a horizontal plane; this is a planar area.
  • In Fig. 5b, the gray levels of the target spatial points differ, so it can be judged that the distances from the target spatial points to the grid plane differ and the points cannot be fitted to a plane, indicating that the current area is uneven; this is a non-planar area.
  • In one implementation, the movable platform each time selects, from the spatial points, three points that are not on the same straight line and determines one candidate plane from the selected three points; for example, from 4 spatial points the movable platform can determine 4 candidate planes. Then, the movable platform separately calculates the sum of the distances from the spatial points to each candidate plane and determines the candidate plane with the smallest sum of distances as the initial plane area. Next, the distance from each spatial point to the initial plane area is calculated, and the spatial points whose distance is less than the first distance threshold are determined as target spatial points. If the number of target spatial points is greater than or equal to the first preset number threshold, or the ratio of the number of target spatial points to the number of spatial points is greater than or equal to the first ratio threshold, the initial plane area is determined to be a plane area.
  • Exemplarily, a candidate plane v_k can be calculated from three spatial points P_k, P_{k+1}, P_{k+2} that are not on the same straight line, with the candidate plane normal vector n_k = (P_{k+1} - P_k) x (P_{k+2} - P_k).
  • The distance d_ki from a spatial point P_i to the candidate plane can then be calculated as d_ki = |n_k . (P_i - P_k)| / |n_k|.
  • For example, suppose there are 4 spatial points, no 3 of which lie on a straight line. The movable platform can choose 3 of the 4 spatial points at a time to form a candidate plane, determining 4 candidate planes in total. The movable platform then calculates the distances from the 4 spatial points to each candidate plane and determines the candidate plane with the smallest sum of distances as the initial plane area. Next, the distances from the 4 spatial points to the initial plane area are calculated. Suppose the first distance threshold is 5 cm and the first preset number threshold is 3.
  • If the distances from the 4 spatial points to the initial plane area are 1 cm, 3 cm, 4 cm, and 2 cm, all of them are less than the 5 cm threshold, so all 4 spatial points are determined to be target spatial points; their number, 4, is greater than the threshold 3, and the initial plane area is therefore determined to be a plane area.
  • For another example, if the distances from the 4 spatial points to the initial plane area are 10 cm, 15 cm, 4 cm, and 8 cm, only the spatial point at 4 cm is determined to be a target spatial point; the number of target spatial points, 1, is less than the threshold 3, so the initial plane area may contain pits or convex hulls and is a non-planar area.
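  • The following sketch reproduces this candidate-plane search for a small set of points; the threshold values match the worked example above, while the helper names are illustrative:

```python
import numpy as np
from itertools import combinations

def point_plane_dist(points, n, p0):
    """Unsigned distances from points to the plane through p0 with unit normal n."""
    return np.abs((points - p0) @ n)

def fit_initial_plane(points, dist_thresh=0.05, min_inliers=3):
    best = None
    for i, j, k in combinations(range(len(points)), 3):
        n = np.cross(points[j] - points[i], points[k] - points[i])
        if np.linalg.norm(n) < 1e-9:          # skip collinear triples
            continue
        n = n / np.linalg.norm(n)
        total = point_plane_dist(points, n, points[i]).sum()
        if best is None or total < best[0]:   # keep smallest summed distance
            best = (total, n, points[i])
    _, n, p0 = best
    inliers = point_plane_dist(points, n, p0) <= dist_thresh
    return (n, p0) if inliers.sum() >= min_inliers else None  # plane area or None

# Four nearly coplanar points (offsets of 1-3 cm) -> a plane area is found.
pts = np.array([[0, 0, 0.00], [1, 0, 0.01], [0, 1, 0.03], [1, 1, 0.02]])
print(fit_initial_plane(pts))
```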
  • In one implementation, after the movable platform projects the plane area into the first image according to the rotation and translation between the point cloud sensor and the first camera to obtain the initial plane image area, the movable platform performs a connectivity detection operation on the initial plane image area to obtain the plane image area.
  • The connectivity detection operation is an image segmentation algorithm; image segmentation algorithms include, but are not limited to, the region-growing segmentation algorithm, the mean-iteration segmentation algorithm, the maximum between-class variance segmentation algorithm, the maximum-entropy segmentation algorithm, and so on.
  • As shown in Fig. 6a and Fig. 6b, the movable platform projects the plane area into the first image according to the rotation and translation between the point cloud sensor and the first camera to obtain the initial plane image area shown in Fig. 6a, and then optimizes the initial plane image area with an image segmentation algorithm to obtain the precise plane image area shown in Fig. 6b.
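  • A minimal sketch of one such refinement, assuming OpenCV: the initial plane image area is treated as a binary mask and only its largest connected component is kept. This is a single simple instance of the segmentation step; the publication equally permits region growing, mean iteration, maximum between-class variance, or maximum-entropy segmentation:

```python
import cv2
import numpy as np

def largest_component(mask):
    """mask: uint8 image, 255 where the initial plane image area was projected."""
    n_labels, labels = cv2.connectedComponents(mask)
    if n_labels <= 1:               # nothing but background
        return mask
    counts = np.bincount(labels.ravel())
    counts[0] = 0                   # ignore the background label
    keep = counts.argmax()
    return np.where(labels == keep, 255, 0).astype(np.uint8)
```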
  • Step S420 The movable platform determines the planar image area of the planar area in the first image according to the rotation and translation between the point cloud sensor and the first camera.
  • The plane area is an area in the real spatial environment, i.e., a three-dimensional plane area; the plane image area refers to a two-dimensional plane area.
  • Step S430 The movable platform acquires multiple sets of feature point pairs in the first image and the second image, wherein the feature points in the first image in the multiple sets of feature point pairs are located in the planar image area.
  • In one implementation, the movable platform first extracts feature points from the first image.
  • The feature points can be extracted with the Moravec corner detection algorithm or the Harris corner detection algorithm, among others; the present invention does not limit the feature point extraction algorithm here.
  • the movable platform tracks and matches the feature points between the first image and the second image, where the second image is the image collected by the second camera.
  • In one implementation, the tracking and matching of feature points between the first image and the second image can be implemented by the movable platform with the Kanade-Lucas-Tomasi (KLT) feature tracker, and the movable platform obtains all the spatial points between the first image and the second image.
  • In one implementation, the movable platform uses a stereo vision algorithm to analyze the first image and the second image collected by the second camera; after all the spatial points are obtained, the movable platform uses the epipolar constraint to calibrate the two rotation angles in the epipolar direction.
  • The basic principle of the binocular stereo vision algorithm is to capture the same object or scene in space simultaneously with the left and right cameras and to calculate the position of a spatial point from its coordinates on the imaging planes of the two cameras; determining, from the known image coordinates of a point in the left-eye image, the image coordinates of the same point in the other image is point matching.
  • It should be noted that the epipolar constraint is a point-to-line constraint rather than a point-to-point constraint. Nevertheless, the epipolar constraint imposes an important condition on the corresponding point: it compresses the search for the corresponding point from the entire image to a single straight line. In other words, for the mapping of the same spatial point onto the first image and the second image, given the mapped point p1 on the first image, it is the process of solving for the mapped point p2 on the second image.
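  • A sketch of step S430 with OpenCV: Harris-style corners are detected inside the plane image area of the first image and tracked into the second image with the KLT tracker; the parameter values are assumptions for illustration:

```python
import cv2

def match_plane_features(img1, img2, plane_mask):
    """Harris-style corners inside the plane image area of img1, tracked
    into img2 with the Kanade-Lucas-Tomasi (KLT) tracker."""
    pts1 = cv2.goodFeaturesToTrack(img1, maxCorners=200, qualityLevel=0.01,
                                   minDistance=7, mask=plane_mask,
                                   useHarrisDetector=True)
    pts2, status, _err = cv2.calcOpticalFlowPyrLK(img1, img2, pts1, None)
    ok = status.ravel() == 1        # keep only successfully tracked pairs
    return pts1[ok].reshape(-1, 2), pts2[ok].reshape(-1, 2)
```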
  • Step S440: The movable platform determines the deviation of the parallax between the first camera and the second camera according to the positions, in the first image and the second image, of the feature points of the multiple sets of feature point pairs.
  • In one implementation, based on the positions of the feature points of the multiple sets of feature point pairs in the first image and the second image and an initial value of the deviation of the parallax, the movable platform determines the distances between the plane area and the spatial points in the surrounding environment of the movable platform corresponding to the multiple sets of feature point pairs.
  • The initial value of the deviation of the parallax may be set in advance or may be an empirical value; for example, it may be five pixels.
  • In one implementation, the movable platform performs an optimization operation on the initial value of the deviation of the parallax with the minimization of the sum of the distances between the multiple spatial points and the plane area as the optimization target, and determines the initial value of the deviation of the parallax obtained by the optimization calculation as the deviation of the parallax between the first camera and the second camera.
  • In a specific implementation, taking the sum of the distances between the multiple spatial points and the plane area as the optimization target may mean minimizing the sum of the distances from the multiple spatial points to the three-dimensional plane area corresponding to those spatial points.
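  • A sketch of this optimization, assuming a rectified pinhole pair with intrinsics K and baseline b (so that depth z = f*b/d), the matched plane features with per-feature disparities, and a fitted plane (n, p0) as in the earlier sketch; the bounded scalar minimizer and the sign convention for applying the deviation are assumptions:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def back_project(pts, disparity, K, baseline):
    """pi^(-1): pixels plus disparity -> 3D points in the first camera frame."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    z = fx * baseline / disparity
    x = (pts[:, 0] - cx) * z / fx
    y = (pts[:, 1] - cy) * z / fy
    return np.stack([x, y, z], axis=1)

def solve_disparity_deviation(pts, disp, K, baseline, n, p0, init=5.0):
    """Find the deviation minimizing the summed point-to-plane distance."""
    def total_dist(dd):
        P = back_project(pts, disp - dd, K, baseline)
        return np.abs((P - p0) @ n).sum()   # sum of D(P_i', PLANE_i)
    # The text suggests e.g. five pixels as the initial value of the deviation.
    return minimize_scalar(total_dist, bounds=(-init, init), method="bounded").x
```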
  • Step S450: The movable platform calibrates the rotation between the first camera and the second camera in the parallax direction according to the deviation of the parallax.
  • In a specific implementation, the movable platform obtains the focal length of the first camera or the second camera; according to the deviation of the parallax and the focal length, the movable platform obtains the ratio of the deviation of the parallax to the focal length and performs an arctangent operation on the ratio to obtain the rotation in the parallax direction between the first camera and the second camera.
  • For example, first, each precise plane area determined by the movable platform is converted into the camera coordinate system; the three-dimensional plane area obtained in the camera coordinate system is denoted PLANE_i, and the corresponding two-dimensional plane image area is denoted AREA_i.
  • For each feature point p_i within the plane image area AREA_i of an image, the movable platform can calculate the disparity Δd1 between the 3D-ToF sensor and the left-eye camera and the disparity Δd2 between the left-eye camera and the right-eye camera through the BlockMatching disparity estimation algorithm or the SemiGlobal disparity estimation algorithm; the initial value of the deviation of the parallax is Δd = |Δd1 - Δd2|.
  • Then, the movable platform calculates the three-dimensional position P_i' of the feature point p_i in the coordinate system of the binocular camera; this process is expressed as P_i' = pi^(-1)(p_i, Δd).
  • Finally, Δd is solved by optimization so that the sum of the distances from all the points to the corresponding plane areas is minimized: Δd = argmin_Δd sum_i D(P_i', PLANE_i), where D(P_i', PLANE_i) denotes the distance from the point P_i' to the plane area PLANE_i.
  • After Δd is solved, the movable platform obtains the focal length of the left-eye camera or the right-eye camera, assumed to be f, and calculates the rotation angle θ in the parallax direction between the left-eye camera and the right-eye camera through θ = arctan(Δd / f).
  • In one implementation, the movable platform combines the two rotation angles θ' and θ'' calibrated in the epipolar direction through the epipolar constraint with the rotation angle θ in the parallax direction between the left-eye camera and the right-eye camera, and converts the combination of the three rotation angles into a rotation matrix to obtain the rotation between the left-eye camera and the right-eye camera. In this way, the complete calibration of the external parameters of the binocular camera can be achieved accurately, which reduces the difficulty of the calibration process and improves the accuracy of the external parameter calibration of the binocular camera.
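  • A final sketch of this conversion, assuming the deviation Δd and the focal length f are both in pixels and that the three angles map onto the x/y/z Euler axes; the axis assignment and the numeric values are assumptions, since the publication does not spell them out:

```python
import numpy as np
from scipy.spatial.transform import Rotation

f = 700.0                       # focal length in pixels (illustrative)
delta_d = 2.0                   # optimized deviation of the parallax (px)
theta = np.arctan2(delta_d, f)  # rotation angle in the parallax direction

theta_p, theta_pp = 0.001, -0.002   # epipolar-direction angles (assumed values)
R = Rotation.from_euler("xyz", [theta_p, theta_pp, theta]).as_matrix()
print(np.degrees(theta), R)     # rotation between left-eye and right-eye cameras
```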
  • the rotation of the first camera and the second camera in the parallax direction is calibrated online with the assistance of the point cloud sensor.
  • the embodiment of the present invention provides a calibration system for external parameters of a binocular camera.
  • the system can run on a mobile platform with a computer vision module, such as unmanned aerial vehicles, VR glasses, autonomous vehicles, and smart phones.
  • the computer vision module may specifically be a binocular camera, and the binocular camera includes a first camera and a second camera.
  • FIG. 7 is a structural diagram of a calibration system for external parameters of a binocular camera provided by an embodiment of the present invention.
  • As shown in FIG. 7, the calibration system 700 for the external parameters of a binocular camera includes a memory 701, a processor 702, a point cloud sensor 703, and a binocular camera 704, where the binocular camera includes a first camera 705 and a second camera 706. The memory 701 stores program code; the processor 702 calls the program code in the memory 701 and, when the program code is executed, performs the following operations:
  • obtain the point cloud of the surrounding environment of the movable platform output by the point cloud sensor; obtain the first image of the surrounding environment of the movable platform output by the first camera and the second image of the surrounding environment of the movable platform output by the second camera; and calibrate the rotation between the first camera and the second camera in the parallax direction according to the point cloud, the first image, and the second image.
  • In one implementation, when calibrating the rotation between the first camera and the second camera in the parallax direction according to the point cloud, the first image, and the second image, the processor 702 performs the following operations: determine the rotation and translation between the point cloud sensor and the first camera according to the point cloud and the first image; and calibrate the rotation between the first camera and the second camera in the parallax direction according to the point cloud, the rotation and translation between the point cloud sensor and the first camera, the first image, and the second image.
  • In one implementation, when determining the rotation and translation between the point cloud sensor and the first camera according to the point cloud and the first image, the processor 702 performs the following operations: determine multiple depth jump points in the point cloud; obtain the original translation and original rotation between the point cloud sensor and the first camera; determine the edge response values of pixels in the first image; and perform an optimization operation on the original translation and the original rotation according to the positions of the multiple depth jump points, the original translation, the original rotation, and the edge response values, to determine the rotation and translation between the point cloud sensor and the first camera.
  • In one implementation, when performing the optimization operation on the original translation and the original rotation according to the positions of the multiple depth jump points, the original translation, the original rotation, and the edge response values, the processor 702 performs the following operations: determine the pixels of the multiple depth jump points in the first image according to the multiple depth jump points and the original translation and original rotation; obtain the edge response values of the pixels of the multiple depth jump points in the first image; perform the optimization operation with the minimization of the sum of those edge response values as the optimization target and with the original translation and original rotation as the optimization objects; and determine the optimized original translation and original rotation as the rotation and translation between the point cloud sensor and the first camera.
  • In one implementation, when determining the edge response values of pixels in the first image, the processor 702 performs the following operation:
  • An edge detection algorithm is run on the first image to obtain edge response values of pixels in the first image.
  • In one implementation, when determining multiple depth jump points in the point cloud, the processor 702 performs the following operations: traverse the point cloud to determine multiple target points, where the difference between the depth value of a target point and the depth values of its multiple adjacent points is greater than or equal to a preset depth threshold; and determine the multiple target points as the multiple depth jump points.
  • In one implementation, when calibrating the rotation between the first camera and the second camera in the parallax direction according to the point cloud, the rotation and translation between the point cloud sensor and the first camera, the first image, and the second image, the processor 702 performs the following operations: determine the deviation of the parallax between the first camera and the second camera according to the point cloud, the rotation and translation between the point cloud sensor and the first camera, the first image, and the second image; and calibrate the rotation between the first camera and the second camera in the parallax direction according to the deviation of the parallax.
  • In one implementation, when determining the deviation of the parallax between the first camera and the second camera according to the point cloud, the rotation and translation between the point cloud sensor and the first camera, the first image, and the second image, the processor 702 performs the following operations: determine a plane area in the surrounding environment of the movable platform according to the point cloud; determine the plane image area of the plane area in the first image according to the rotation and translation between the point cloud sensor and the first camera; obtain multiple sets of feature point pairs in the first image and the second image, where the feature points of the multiple sets of feature point pairs in the first image are located in the plane image area; and determine the deviation of the parallax between the first camera and the second camera according to the positions, in the first image and the second image, of the feature points of the multiple sets of feature point pairs.
  • In one implementation, when determining the deviation of the parallax between the first camera and the second camera according to the positions of the feature points of the multiple sets of feature point pairs in the first image and the second image, the processor 702 performs the following operations:
  • determine, according to the positions of the feature points of the multiple sets of feature point pairs in the first image and the second image and an initial value of the deviation of the parallax, the distances between the plane area and the spatial points in the surrounding environment of the movable platform corresponding to the multiple sets of feature point pairs; and
  • perform an optimization operation on the initial value of the deviation of the parallax according to the distances, and determine the initial value of the deviation of the parallax obtained by the optimization calculation as the deviation of the parallax between the first camera and the second camera.
  • In one implementation, when performing the optimization operation on the initial value of the deviation of the parallax according to the distances, the processor 702 performs the following operation: perform the optimization operation on the initial value of the deviation of the parallax with the minimization of the sum of the distances between the multiple spatial points and the plane area as the optimization target.
  • In one implementation, when determining the plane image area of the plane area in the first image according to the rotation and translation between the point cloud sensor and the first camera, the processor 702 performs the following operations: project the plane area into the first image according to the rotation and translation between the point cloud sensor and the first camera to obtain an initial plane image area; and perform a connectivity detection operation on the initial plane image area to obtain the plane image area.
  • In one implementation, the point cloud sensor includes a 3D-ToF camera, a lidar, or a millimeter-wave radar.
  • In one implementation, when calibrating the rotation between the first camera and the second camera in the parallax direction according to the point cloud, the first image, and the second image, the processor 702 performs the following operation: when a preset condition is met, calibrate the rotation between the first camera and the second camera in the parallax direction according to the point cloud, the first image, and the second image.
  • In one implementation, meeting the preset requirement includes receiving a calibration instruction sent by a control terminal or detecting that the movable platform is powered on.
  • The calibration system for the external parameters of the binocular camera provided in this embodiment can execute the method for calibrating the external parameters of the binocular camera provided in the foregoing embodiments; its manner of execution and beneficial effects are similar and will not be repeated here.
  • The embodiment of the present invention further provides a movable platform, including: a body; and the calibration system for the external parameters of a binocular camera provided in the foregoing embodiment, where the calibration system is carried on the body.
  • The calibration system for the external parameters of the binocular camera provided in this embodiment can execute the method for calibrating the external parameters of the binocular camera provided in the foregoing embodiments; its manner of execution and beneficial effects are similar and will not be repeated here.
  • The embodiment of the present application further provides a readable storage medium that stores a computer program; when the computer program is executed by a processor, it can be used to implement the method for calibrating the external parameters of the binocular camera described in the embodiment corresponding to FIG. 2 of the present application, which will not be repeated here.
  • The above-mentioned software functional units are stored in a storage medium and include several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute some of the steps of the methods described in the various embodiments of the present invention.
  • The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present invention provide a method for calibrating the external parameters of a binocular camera, a movable platform, and a system. The method includes: obtaining the point cloud of the surrounding environment of the movable platform output by a point cloud sensor; obtaining the first image of the surrounding environment of the movable platform output by the first camera and the second image of the surrounding environment of the movable platform output by the second camera; and calibrating the rotation between the first camera and the second camera in the parallax direction according to the point cloud, the first image, and the second image. In the embodiments of the present invention, the point cloud sensor provides assistance while the binocular camera is in motion, so that the rotation in the parallax direction between the first camera and the second camera can be calibrated through the point cloud sensor, the first camera, and the second camera.


Claims (30)

  1. A calibration method for external parameters of a binocular camera, wherein the method is applied to a movable platform, the movable platform includes a point cloud sensor and a binocular camera, the binocular camera includes a first camera and a second camera, and the method includes:
    acquiring a point cloud of the environment surrounding the movable platform output by the point cloud sensor;
    acquiring a first image of the environment surrounding the movable platform output by the first camera and a second image of the environment surrounding the movable platform output by the second camera;
    calibrating the rotation between the first camera and the second camera in the parallax direction according to the point cloud, the first image, and the second image.
  2. The method according to claim 1, wherein calibrating the rotation between the first camera and the second camera in the parallax direction according to the point cloud, the first image, and the second image includes:
    determining the rotation and translation between the point cloud sensor and the first camera according to the point cloud and the first image;
    calibrating the rotation between the first camera and the second camera in the parallax direction according to the point cloud, the rotation and translation between the point cloud sensor and the first camera, the first image, and the second image.
  3. The method according to claim 2, wherein determining the rotation and translation between the point cloud sensor and the first camera according to the point cloud and the first image includes:
    determining a plurality of depth jump points in the point cloud;
    acquiring the original translation and original rotation between the point cloud sensor and the first camera;
    determining the edge response values of pixels in the first image;
    performing an optimization operation on the original translation and the original rotation according to the positions of the plurality of depth jump points, the original translation, the original rotation, and the edge response values, so as to determine the rotation and translation between the point cloud sensor and the first camera.
  4. The method according to claim 3, wherein
    performing the optimization operation on the original translation and the original rotation according to the positions of the plurality of depth jump points, the original translation, the original rotation, and the edge response values, so as to determine the rotation and translation between the point cloud sensor and the first camera, includes:
    determining the pixels of the plurality of depth jump points in the first image according to the plurality of depth jump points and the original translation and original rotation;
    acquiring the edge response values of the pixels of the plurality of depth jump points in the first image;
    performing the optimization operation with the minimization of the sum of the edge response values of the pixels of the plurality of depth jump points in the first image as the optimization objective and the original translation and original rotation as the optimization variables;
    determining the optimized original translation and original rotation as the rotation and translation between the point cloud sensor and the first camera.
  5. The method according to claim 3 or 4, wherein determining the edge response values of pixels in the first image includes:
    running an edge detection algorithm on the first image to obtain the edge response values of pixels in the first image.
  6. The method according to any one of claims 3-5, wherein determining the plurality of depth jump points in the point cloud includes:
    traversing the point cloud to determine a plurality of target points, where the difference between the depth value of a target point and the depth values of its multiple neighboring points is greater than or equal to a preset depth threshold;
    determining the plurality of target points as the plurality of depth jump points.
  7. The method according to any one of claims 2-6, wherein calibrating the rotation between the first camera and the second camera in the parallax direction according to the point cloud, the rotation and translation between the point cloud sensor and the first camera, the first image, and the second image includes:
    determining the deviation of the parallax between the first camera and the second camera according to the point cloud, the rotation and translation between the point cloud sensor and the first camera, the first image, and the second image;
    calibrating the rotation between the first camera and the second camera in the parallax direction according to the deviation of the parallax.
  8. The method according to claim 7, wherein determining the deviation of the parallax between the first camera and the second camera according to the point cloud, the rotation and translation between the point cloud sensor and the first camera, the first image, and the second image includes:
    determining a plane region in the environment surrounding the movable platform according to the point cloud;
    determining the plane image region of the plane region in the first image according to the rotation and translation between the point cloud sensor and the first camera;
    acquiring a plurality of feature point pairs in the first image and the second image, where the feature points of the plurality of feature point pairs in the first image are located in the plane image region;
    determining the deviation of the parallax between the first camera and the second camera according to the positions in the first image and the second image of the feature points of the plurality of feature point pairs.
  9. The method according to claim 8, wherein determining the deviation of the parallax between the first camera and the second camera according to the positions in the first image and the second image of the feature points of the plurality of feature point pairs includes:
    determining the distances between the plane region and the spatial points in the environment surrounding the movable platform corresponding to the plurality of feature point pairs, according to the positions in the first image and the second image of the feature points of the plurality of feature point pairs and an initial value of the deviation of the parallax;
    performing an optimization operation on the initial value of the deviation of the parallax according to the distances, and determining the initial value of the deviation of the parallax obtained by the optimization as the deviation of the parallax between the first camera and the second camera.
  10. The method according to claim 9, wherein performing the optimization operation on the initial value of the deviation of the parallax according to the distances includes:
    performing the optimization operation on the initial value of the deviation of the parallax with the minimization of the sum of the distances between the plurality of spatial points and the plane region as the optimization objective.
  11. The method according to any one of claims 8-10, wherein determining the plane image region of the plane region in the first image according to the rotation and translation between the point cloud sensor and the first camera includes:
    projecting the plane region into the first image according to the rotation and translation between the point cloud sensor and the first camera to obtain an initial plane image region;
    performing a connectivity detection operation on the initial plane image region to obtain the plane image region.
  12. The method according to any one of claims 1-11, wherein the point cloud sensor includes a 3D-TOF camera, a lidar, or a millimeter-wave radar.
  13. The method according to any one of claims 1-12, wherein calibrating the rotation between the first camera and the second camera in the parallax direction according to the point cloud, the first image, and the second image includes:
    when a preset condition is met, calibrating the rotation between the first camera and the second camera in the parallax direction according to the point cloud, the first image, and the second image.
  14. The method according to claim 13, wherein meeting the preset condition includes receiving a calibration instruction sent by a control terminal or detecting that the movable platform is powered on.
  15. A calibration system for external parameters of a binocular camera, including a memory, a processor, a point cloud sensor, and a binocular camera, wherein the binocular camera includes a first camera and a second camera;
    the memory is configured to store program code;
    the processor calls the program code and, when the program code is executed, is configured to perform the following operations:
    acquiring a point cloud of the environment surrounding the movable platform output by the point cloud sensor;
    acquiring a first image of the environment surrounding the movable platform output by the first camera and a second image of the environment surrounding the movable platform output by the second camera;
    calibrating the rotation between the first camera and the second camera in the parallax direction according to the point cloud, the first image, and the second image.
  16. The system according to claim 15, wherein when calibrating the rotation between the first camera and the second camera in the parallax direction according to the point cloud, the first image, and the second image, the processor performs the following operations:
    determining the rotation and translation between the point cloud sensor and the first camera according to the point cloud and the first image;
    calibrating the rotation between the first camera and the second camera in the parallax direction according to the point cloud, the rotation and translation between the point cloud sensor and the first camera, the first image, and the second image.
  17. The system according to claim 16, wherein when determining the rotation and translation between the point cloud sensor and the first camera according to the point cloud and the first image, the processor performs the following operations:
    determining a plurality of depth jump points in the point cloud;
    acquiring the original translation and original rotation between the point cloud sensor and the first camera;
    determining the edge response values of pixels in the first image;
    performing an optimization operation on the original translation and the original rotation according to the positions of the plurality of depth jump points, the original translation, the original rotation, and the edge response values, so as to determine the rotation and translation between the point cloud sensor and the first camera.
  18. The system according to claim 17, wherein
    when performing the optimization operation on the original translation and the original rotation according to the positions of the plurality of depth jump points, the original translation, the original rotation, and the edge response values, so as to determine the rotation and translation between the point cloud sensor and the first camera, the processor performs the following operations:
    determining the pixels of the plurality of depth jump points in the first image according to the plurality of depth jump points and the original translation and original rotation;
    acquiring the edge response values of the pixels of the plurality of depth jump points in the first image;
    performing the optimization operation with the minimization of the sum of the edge response values of the pixels of the plurality of depth jump points in the first image as the optimization objective and the original translation and original rotation as the optimization variables;
    determining the optimized original translation and original rotation as the rotation and translation between the point cloud sensor and the first camera.
  19. The system according to claim 17 or 18, wherein when determining the edge response values of pixels in the first image, the processor performs the following operation:
    running an edge detection algorithm on the first image to obtain the edge response values of pixels in the first image.
  20. The system according to any one of claims 17-19, wherein when determining the plurality of depth jump points in the point cloud, the processor performs the following operations:
    traversing the point cloud to determine a plurality of target points, where the difference between the depth value of a target point and the depth values of its multiple neighboring points is greater than or equal to a preset depth threshold;
    determining the plurality of target points as the plurality of depth jump points.
  21. The system according to any one of claims 16-20, wherein when calibrating the rotation between the first camera and the second camera in the parallax direction according to the point cloud, the rotation and translation between the point cloud sensor and the first camera, the first image, and the second image, the processor performs the following operations:
    determining the deviation of the parallax between the first camera and the second camera according to the point cloud, the rotation and translation between the point cloud sensor and the first camera, the first image, and the second image;
    calibrating the rotation between the first camera and the second camera in the parallax direction according to the deviation of the parallax.
  22. The system according to claim 21, wherein when determining the deviation of the parallax between the first camera and the second camera according to the point cloud, the rotation and translation between the point cloud sensor and the first camera, the first image, and the second image, the processor performs the following operations:
    determining a plane region in the environment surrounding the movable platform according to the point cloud;
    determining the plane image region of the plane region in the first image according to the rotation and translation between the point cloud sensor and the first camera;
    acquiring a plurality of feature point pairs in the first image and the second image, where the feature points of the plurality of feature point pairs in the first image are located in the plane image region;
    determining the deviation of the parallax between the first camera and the second camera according to the positions in the first image and the second image of the feature points of the plurality of feature point pairs.
  23. The system according to claim 22, wherein when determining the deviation of the parallax between the first camera and the second camera according to the positions in the first image and the second image of the feature points of the plurality of feature point pairs, the processor performs the following operations:
    determining the distances between the plane region and the spatial points in the environment surrounding the movable platform corresponding to the plurality of feature point pairs, according to the positions in the first image and the second image of the feature points of the plurality of feature point pairs and an initial value of the deviation of the parallax;
    performing an optimization operation on the initial value of the deviation of the parallax according to the distances, and determining the initial value of the deviation of the parallax obtained by the optimization as the deviation of the parallax between the first camera and the second camera.
  24. The system according to claim 23, wherein when performing the optimization operation on the initial value of the deviation of the parallax according to the distances, the processor performs the following operation:
    performing the optimization operation on the initial value of the deviation of the parallax with the minimization of the sum of the distances between the plurality of spatial points and the plane region as the optimization objective.
  25. The system according to any one of claims 22-24, wherein when determining the plane image region of the plane region in the first image according to the rotation and translation between the point cloud sensor and the first camera, the processor performs the following operations:
    projecting the plane region into the first image according to the rotation and translation between the point cloud sensor and the first camera to obtain an initial plane image region;
    performing a connectivity detection operation on the initial plane image region to obtain the plane image region.
  26. The system according to any one of claims 15-25, wherein the point cloud sensor includes a 3D-TOF camera, a lidar, or a millimeter-wave radar.
  27. The system according to any one of claims 15-26, wherein when calibrating the rotation between the first camera and the second camera in the parallax direction according to the point cloud, the first image, and the second image, the processor performs the following operation:
    when a preset condition is met, calibrating the rotation between the first camera and the second camera in the parallax direction according to the point cloud, the first image, and the second image.
  28. The system according to claim 27, wherein meeting the preset condition includes receiving a calibration instruction sent by a control terminal or detecting that the movable platform is powered on.
  29. A movable platform, including:
    a body;
    and the calibration system for the external parameters of a binocular camera according to any one of claims 15-28, wherein the calibration system is carried on the body.
  30. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when the computer program is executed, the calibration method for the external parameters of a binocular camera according to any one of claims 1-14 is implemented.
PCT/CN2020/082359 2020-03-31 2020-03-31 Calibration method for external parameters of binocular camera, movable platform, and system WO2021195939A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/082359 WO2021195939A1 (zh) 2020-03-31 2020-03-31 Calibration method for external parameters of binocular camera, movable platform, and system


Publications (1)

Publication Number Publication Date
WO2021195939A1 true WO2021195939A1 (zh) 2021-10-07

Family

ID=77927782


Country Status (1)

Country Link
WO (1) WO2021195939A1 (zh)



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20928782; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20928782; Country of ref document: EP; Kind code of ref document: A1)