WO2021016854A1 - Calibration method and device, movable platform, and storage medium - Google Patents

Calibration method and device, movable platform, and storage medium

Info

Publication number
WO2021016854A1
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
cloud data
projected
dimensional space
data
Prior art date
Application number
PCT/CN2019/098354
Other languages
English (en)
Chinese (zh)
Inventor
李威
刘天博
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2019/098354
Priority to CN201980030471.8A (published as CN112106111A)
Publication of WO2021016854A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/08 Projecting images onto non-planar surfaces, e.g. geodetic screens
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration

Definitions

  • The present invention relates to the field of control technology, and in particular to a calibration method, device, movable platform, and storage medium.
  • Calibration methods between a lidar and a camera mainly comprise external parameter calibration with and without targets.
  • Target-based external parameter calibration relies on specific markers such as calibration plates or tags, and the calibration process is mostly performed offline. Such methods can achieve high-precision external parameter calibration when specific markers are available, and the calibration results show good consistency.
  • The embodiments of the present invention provide a calibration method, device, movable platform, and storage medium, which achieve calibration using the surrounding environment of the movable platform when no specific marker is present, and improve calibration accuracy.
  • an embodiment of the present invention provides a calibration method, which is applied to a movable platform on which a laser scanning device and a camera are provided, and the method includes:
  • an embodiment of the present invention provides a calibration device, including a memory and a processor
  • the memory is used to store programs
  • the processor is used to call the program, and when the program is executed, it is used to perform the following operations:
  • an embodiment of the present invention provides a movable platform, and the movable platform includes:
  • the power system configured on the fuselage is used to provide mobile power for the movable platform
  • an embodiment of the present invention provides a computer-readable storage medium that stores a computer program that, when executed by a processor, implements the method described in the first aspect.
  • The calibration device obtains the first point cloud data of the surrounding environment of the movable platform collected by the laser scanning device and the image data collected by the camera, determines the second point cloud data from the first point cloud data, and projects the second point cloud data into a three-dimensional grid space in the camera coordinate system to obtain a projected three-dimensional space.
  • The projected three-dimensional space is then projected onto the image data to obtain the optimal position of the projection on the image data, thereby achieving calibration from the surrounding environment of the movable platform when no specific marker is present and improving calibration accuracy.
  • Figure 1 is a schematic structural diagram of a calibration system provided by an embodiment of the present invention.
  • FIG. 2 is a schematic flowchart of a calibration method provided by an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a three-dimensional grid space provided by an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a discontinuous point cloud provided by an embodiment of the present invention.
  • FIG. 5 is a schematic flowchart of an offline calibration method provided by an embodiment of the present invention.
  • FIG. 6 is a schematic flowchart of an online calibration method provided by an embodiment of the present invention.
  • Fig. 7 is a schematic structural diagram of a calibration device provided by an embodiment of the present invention.
  • the calibration method provided in the embodiment of the present invention may be executed by a calibration system, and specifically, may be executed by a calibration device in the calibration system.
  • the calibration system includes a calibration device and a movable platform.
  • The calibration device may be installed on the movable platform; in some embodiments, the calibration device may be spatially independent of the movable platform; in some embodiments, the calibration device may be a component of the movable platform, that is, the movable platform includes the calibration device.
  • the calibration method can also be applied to other mobile devices, such as mobile devices that can move autonomously, such as robots, unmanned vehicles, and unmanned ships.
  • The calibration device in the calibration system can obtain the first point cloud data corresponding to the surrounding environment of the movable platform collected by the laser scanning device and the image data collected by the camera. In some embodiments, the laser scanning device and the camera are each detachably connected to the movable platform; in other embodiments, the laser scanning device and the camera may be fixedly mounted on the movable platform, which is not limited herein. Further, in some embodiments, the laser scanning device includes any one or more of a laser radar (lidar), a millimeter-wave radar, and an ultrasonic radar; the first point cloud data may thus be acquired through a lidar, a millimeter-wave radar, an ultrasonic radar, or the like on the movable platform, which is not specifically limited in the embodiments of the present invention.
  • the lidar is a perceptual sensor that can obtain three-dimensional information of the scene.
  • Its basic principle is to actively emit laser pulse signals toward the detected object and receive the reflected pulse signals.
  • The depth (distance) of the measured object relative to the detector is calculated from the pulse's time of flight; knowing the emission direction, the angle of the measured object relative to the lidar is obtained; combining the depth and angle information yields a large number of detection points (called a point cloud), from which the three-dimensional spatial information of the measured object relative to the lidar can be reconstructed.
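The range-plus-angle reconstruction above can be sketched as follows; the spherical-coordinate convention (azimuth in the x-y plane, elevation out of it) is an illustrative assumption, not the patent's specification:

```python
import math

def lidar_point_to_xyz(distance, azimuth_deg, elevation_deg):
    """Convert one lidar return (range plus emission angles) into a 3D point
    in the lidar frame. The angle convention here is illustrative."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance * math.cos(el) * math.cos(az)
    y = distance * math.cos(el) * math.sin(az)
    z = distance * math.sin(el)
    return (x, y, z)

# A return at 10 m, straight ahead and level with the sensor, maps to (10, 0, 0).
point = lidar_point_to_xyz(10.0, 0.0, 0.0)
```

Repeating this for every received pulse yields the point cloud described above.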
  • the invention provides a method for calibration of lidar and camera in a natural scene without relying on specific markers, and also provides a solution for online detection and correction of calibration results.
  • this solution can calibrate the camera and lidar offline; in some embodiments, the solution can also calibrate the camera and lidar online, and detect the calibration error between the lidar and the camera , To correct the calibration error to improve the calibration accuracy.
  • the calibration system provided by the embodiment of the present invention will be schematically described below with reference to FIG. 1.
  • FIG. 1 is a schematic structural diagram of a calibration system provided by an embodiment of the present invention.
  • the calibration system includes: a calibration device 11 and a movable platform 12.
  • a communication connection can be established between the movable platform 12 and the calibration device 11 through a wireless communication connection.
  • a communication connection between the movable platform 12 and the calibration device 11 may also be established through a wired communication connection.
  • the movable platform 12 may be a movable device such as an unmanned vehicle, an unmanned ship, and a movable robot.
  • the movable platform 12 includes a power system 121, and the power system 121 is used to provide the movable platform 12 with moving power.
  • the movable platform 12 and the calibration device 11 are independent of each other.
  • the calibration device 11 is set in a cloud server and establishes a communication connection with the movable platform 12 through a wireless communication connection.
  • the calibration device may obtain the first point cloud data of the surrounding environment of the movable platform collected by the laser scanning device and the image data collected by the camera, and determine the second point according to the first point cloud data Cloud data, where the second point cloud data is used to indicate invalid point cloud data and/or discontinuous point cloud data.
  • the calibration device can project the second point cloud data to a three-dimensional grid space in the camera coordinate system to obtain a projected three-dimensional space.
  • The projected three-dimensional space is projected onto the image data collected by the camera to obtain the optimal position of the projection on the image data, thereby realizing a calibration method that does not rely on a calibration object and improving the consistency of the calibration results.
  • FIG. 2 is a schematic flowchart of a calibration method provided by an embodiment of the present invention.
  • the method may be executed by a calibration device, and the specific explanation of the calibration device is as described above.
  • the method of the embodiment of the present invention includes the following steps.
  • S201 Acquire the first point cloud data of the surrounding environment of the movable platform collected by the laser scanning device and the image data collected by the camera.
  • the calibration equipment can obtain the first point cloud data of the surrounding environment of the movable platform collected by the laser scanning device and the image data collected by the camera.
  • the laser scanning device includes any one or more of laser radar, millimeter wave radar, and ultrasonic radar.
  • The camera may be mounted on the movable platform. In some embodiments, the camera can also be independent of the movable platform and installed in the environment where the movable platform is located. In some embodiments, the camera includes, but is not limited to, a binocular camera, a monocular camera, a TOF (time-of-flight) camera, or other imaging devices.
  • The calibration device may convert the first point cloud data into the camera coordinate system based on a preset conversion matrix, to obtain, in the camera coordinate system, the first point cloud data corresponding to the surrounding environment of the movable platform.
  • The preset conversion matrix includes an internal (intrinsic) parameter matrix and an external (extrinsic) parameter matrix, and the external parameter matrix includes a rotation matrix and/or a translation vector.
  • When the origin of the camera coordinate system is set on the movable platform, the external parameter matrix includes only a rotation matrix.
  • the internal parameter matrix is determined based on a plurality of internal parameters, and the internal parameters may be parameters of the camera, such as focal length, image principal point coordinates, and so on.
  • The external parameter matrix holds the parameters calibrated between the camera and the laser scanning device. For example, it may include a rotation matrix and/or a translation vector, where the rotation matrix may be determined by the attitude of the camera, and the translation vector by the positioning information of the camera.
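The intrinsic/extrinsic projection chain described above can be sketched as follows; the pinhole model and all numeric values (`fx`, `cx`, the identity rotation) are illustrative assumptions, not parameters from the patent:

```python
def project_to_pixel(point_lidar, R, t, fx, fy, cx, cy):
    """Project a lidar-frame point into pixel coordinates: first the extrinsic
    transform p_cam = R @ p + t, then the pinhole intrinsics (fx, fy, cx, cy)."""
    x = sum(R[0][i] * point_lidar[i] for i in range(3)) + t[0]
    y = sum(R[1][i] * point_lidar[i] for i in range(3)) + t[1]
    z = sum(R[2][i] * point_lidar[i] for i in range(3)) + t[2]
    u = fx * x / z + cx   # perspective division onto the image plane
    v = fy * y / z + cy
    return u, v, z

R_identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
# A point on the optical axis lands at the principal point (cx, cy).
u, v, depth = project_to_pixel((0.0, 0.0, 5.0), R_identity, (0.0, 0.0, 0.0),
                               500.0, 500.0, 320.0, 240.0)
```

Estimating `R` and `t` is exactly the external parameter calibration the method addresses.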
  • The calibration device may determine that the movable platform is in an offline low-speed state when the moving speed of the movable platform is less than a preset speed threshold, and obtain the first point cloud data of the surrounding environment collected by the laser scanning device and the image data collected by the camera while the movable platform is in that state, so as to perform offline calibration.
  • Through offline calibration, enough calibration data can be collected quickly in one pass, the impact of motion on calibration accuracy is reduced, and the calibration accuracy is improved.
  • the calibration device may establish a three-dimensional grid space relative to the camera coordinate system before acquiring the first point cloud data of the surrounding environment when the movable platform is offline and low-speed collected by the laser scanning device .
  • The first point cloud data may be projected, through the external parameters, into the three-dimensional grid space shown in FIG. 3.
  • Figure 3 is a schematic diagram of a three-dimensional grid space provided by an embodiment of the present invention.
  • The calibration device may determine that the movable platform is in a moving state when the moving speed of the movable platform is greater than or equal to the preset speed threshold, and obtain the first point cloud data of the surrounding environment collected by the laser scanning device and the image data collected by the camera, so as to realize online error detection.
  • Calibration data that meets the requirements of a given scene is collected continuously, and the current calibration is checked for optimality; whenever a better calibration result is found, the current calibration is updated, ensuring the consistency of the calibration results.
  • S202 Determine second point cloud data according to the first point cloud data, where the second point cloud data is used to indicate invalid point cloud data and/or discontinuous point cloud data.
  • the calibration device may determine second point cloud data according to the first point cloud data, where the second point cloud data is used to indicate invalid point cloud data and/or discontinuous point cloud data.
  • the second point cloud data is used to indicate discontinuous point cloud data.
  • The calibration device may determine the distance between two adjacent points in the first point cloud data, and determine the discontinuous second point cloud data according to that distance.
  • When determining the discontinuous second point cloud data from the distance between two adjacent points of the first point cloud data, the calibration device may determine whether that distance is greater than a first preset threshold; when the distance between the two adjacent points is greater than the first preset threshold, the two adjacent points are determined to be discontinuous second point cloud data.
  • The data collected by the lidar is continuous; if the distance between two successive point cloud points changes greatly, this indicates a depth jump, i.e., discontinuous point cloud data.
  • For example, the distance between the two points can be computed from the depth information of the two point cloud points.
  • FIG. 4 is taken as an example for illustration.
  • FIG. 4 is a schematic diagram of a discontinuous point cloud provided by an embodiment of the present invention.
  • For two adjacent points of the first point cloud data, point cloud 41 and point cloud 42, if the distance between point cloud 41 and point cloud 42 is determined to be greater than the first preset threshold, it can be determined that point cloud 41 and point cloud 42 are discontinuous second point cloud data.
  • The first preset threshold may be a fixed value.
  • The distance between each point of the first point cloud data and the origin can also be obtained; then, from the distances to the origin together with the distance between two adjacent points, it is determined whether the two adjacent points are discontinuous second point cloud data.
  • the first point cloud data whose distance from the origin is greater than the preset value can be determined, and from the first point cloud data whose distance from the origin is greater than the preset value, it is determined that two adjacent points Whether the distance between the first point cloud data is greater than a preset distance threshold.
  • the distance between two adjacent first point cloud data is greater than the preset distance threshold, it is determined that the two adjacent first point cloud data are discontinuous second point cloud data.
  • the distance between two adjacent first point cloud data whose distance to the origin is greater than the preset value may be set as the preset distance threshold.
  • The preset distance threshold may be a function of the distance from the origin: as the distance from the origin grows, the preset distance threshold gradually increases, and as the distance shrinks, the threshold gradually decreases. In this way, the error caused by the beam divergence angle can be compensated, the probability of false detection reduced, and the calibration accuracy improved.
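The depth-jump detection with a range-dependent threshold described above can be sketched as follows; the linear threshold function and its constants are illustrative choices, not the patent's formula:

```python
import math

def find_discontinuities(points, base_threshold=0.5, growth=0.05):
    """Flag adjacent point pairs whose spacing jumps, using a threshold that
    grows with range to compensate for beam-divergence spreading."""
    flagged = set()
    for i in range(len(points) - 1):
        p, q = points[i], points[i + 1]
        gap = math.dist(p, q)
        rng = math.dist((0.0, 0.0, 0.0), p)        # distance to the sensor origin
        threshold = base_threshold + growth * rng  # farther points tolerate larger gaps
        if gap > threshold:
            flagged.update((i, i + 1))             # both neighbours are discontinuous
    return sorted(flagged)

# A scan stepping from a surface at ~5 m to one at ~12 m: the jump is flagged.
scan = [(5.0, 0.0, 0.0), (5.1, 0.0, 0.0), (12.0, 0.0, 0.0), (12.1, 0.0, 0.0)]
edges = find_discontinuities(scan)
```

Flagging both neighbours mirrors the text's statement that the two adjacent points are jointly the discontinuous second point cloud data.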
  • the second point cloud data is used to indicate invalid point cloud data.
  • When the calibration device determines the second point cloud data from the first point cloud data, it may determine whether depth information exists in the first point cloud data, and according to the depth information determine the invalid second point cloud data within the first point cloud data.
  • invalid point cloud data can be determined in a scene without radar echo.
  • the scene without radar echo includes sky, water, etc. in the background.
  • When the calibration device determines the second point cloud data according to the depth information, it may identify, from the first point cloud data, the points without depth information as the invalid second point cloud data.
  • The lidar actively emits laser pulse signals toward the detected object in order to receive the reflected pulse signals.
  • When the lidar collects the first point cloud data and the background is the sky, the lidar cannot receive a pulse signal returned by a detected object, so no depth information can be obtained for those points; therefore, if an acquired first point cloud point has no depth information, it can be determined to be invalid second point cloud data.
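The no-echo case above reduces to a simple filter; representing a missing return as None or NaN is an illustrative convention, not the patent's data format:

```python
import math

def drop_no_return_points(raw_ranges):
    """Split a scan into valid ranges and invalid ones that carry no depth,
    e.g. beams that hit the sky or open water and never echoed back."""
    valid, invalid = [], []
    for rng in raw_ranges:
        if rng is None or math.isnan(rng):
            invalid.append(rng)   # no echo: invalid second point cloud data
        else:
            valid.append(rng)
    return valid, invalid

valid, invalid = drop_no_return_points([4.2, None, 7.7, float("nan"), 3.1])
```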
  • When the calibration device determines the second point cloud data according to the depth information, it can acquire the change in the depth information of the first point cloud data; when that change is greater than a second preset threshold, the corresponding first point cloud data is determined to be invalid second point cloud data.
  • When the background of the first point cloud data collected by the camera and lidar consists of scenes such as fences or grass, the lidar sweeping across them obtains strongly fluctuating depth information, which is invalid point cloud data.
  • For example, as the lidar sweeps across fences, grass, and the like, a large amount of first point cloud data is acquired; if the variation in the depth information of these points is greater than the second preset threshold, i.e., the acquired depth information fluctuates greatly, the corresponding first point cloud data can be determined to be invalid second point cloud data.
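One way to sketch the fluctuation test above; measuring the fluctuation with the standard deviation of a depth window, and the threshold value itself, are illustrative choices, not the patent's criterion:

```python
import statistics

def is_fluctuating(depths, threshold=1.0):
    """Treat a local window of returns as invalid when its depth values
    fluctuate strongly, e.g. a beam raking across a fence or grass."""
    return statistics.pstdev(depths) > threshold

wall  = [5.0, 5.02, 4.98, 5.01]      # flat surface: nearly constant depth
grass = [1.2, 7.9, 2.4, 9.6, 0.8]    # porous clutter: wildly varying depth
wall_invalid = is_fluctuating(wall)
grass_invalid = is_fluctuating(grass)
```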
  • the second point cloud data is used to indicate invalid point cloud data and discontinuous point cloud data.
  • The calibration device determines the second point cloud data according to the first point cloud data; the methods for identifying invalid and discontinuous point cloud data within the first point cloud data are as described above and will not be repeated here.
  • The calibration device may match the first point cloud data of the current frame against the first point cloud data already acquired, and determine the similarity between the spatial distribution of the current frame's first point cloud data and that of the already acquired first point cloud data.
  • If the similarity is greater than a preset similarity threshold, the calibration device may delete the first point cloud data of the current frame; if the similarity is less than or equal to the preset similarity threshold, it may add the current frame's first point cloud data to the first point cloud data already acquired.
  • the data of repeated scenes can be prevented from being repeatedly detected, so that the amount of invalid point cloud data can be reduced and the calculation efficiency can be improved.
  • The first point cloud data detected in each frame is compared with the first point cloud data already acquired; if the spatial distributions are similar, the frame's first point cloud data is deleted, ensuring that the selected frames' first point cloud data cover as many different scenes as possible.
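The frame-similarity filter above can be sketched with a coarse occupancy signature; the voxel/Jaccard similarity measure and the 0.8 threshold are illustrative stand-ins, since the patent text does not specify the similarity metric:

```python
def voxel_keys(points, cell=1.0):
    """Coarse spatial-distribution signature: the set of occupied grid cells."""
    return {(int(x // cell), int(y // cell), int(z // cell)) for x, y, z in points}

def keep_frame(new_frame, kept_frames, sim_threshold=0.8):
    """Drop a frame whose occupied-cell pattern overlaps an already kept
    frame too strongly (a repeated scene)."""
    new_keys = voxel_keys(new_frame)
    for frame in kept_frames:
        old_keys = voxel_keys(frame)
        overlap = len(new_keys & old_keys) / max(len(new_keys | old_keys), 1)
        if overlap > sim_threshold:
            return False   # too similar: discard the current frame
    return True            # novel scene: add it to the calibration set

frame_a = [(0.5, 0.5, 0.5), (1.5, 0.5, 0.5)]
frame_b = [(0.6, 0.4, 0.5), (1.4, 0.6, 0.5)]   # occupies the same cells as frame_a
frame_c = [(9.5, 9.5, 0.5)]                     # a different scene entirely
```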
  • S203 Project the second point cloud data to the three-dimensional grid space in the camera coordinate system to obtain the projected three-dimensional space.
  • the calibration device may project the second point cloud data to the three-dimensional grid space in the camera coordinate system to obtain the projected three-dimensional space.
  • When the calibration device projects the second point cloud data into the three-dimensional grid space in the camera coordinate system to obtain the projected three-dimensional space, it can determine the relative position information between the laser scanning device and the camera, and project the second point cloud data into the three-dimensional grid space in the camera coordinate system according to that relative position information.
  • Before projecting the second point cloud data into the three-dimensional grid space according to the relative position information, the calibration device may determine the spatial-distribution similarity between the second point cloud data and the point cloud data already present in the three-dimensional grid space, and delete the second point cloud data whose spatial-distribution similarity is greater than a preset similarity threshold. In this way, redundant point cloud data is removed in advance, improving computing efficiency.
  • When the calibration device projects the second point cloud data into the three-dimensional grid space in the camera coordinate system according to the relative position information to obtain the projected three-dimensional space, it can project the remaining (post-deletion) second point cloud data into the three-dimensional grid space based on the relative position information.
  • The calibration device may determine the location information of the second point cloud data and the location information of the point cloud data already present in the three-dimensional grid space, and based on these determine the spatial-distribution similarity between the second point cloud data and the point cloud data already present in the three-dimensional grid space.
  • the calibration device before the calibration device projects the second point cloud data to the three-dimensional grid space in the camera coordinate system, it may be determined whether the angle of view of the camera is smaller than the angle of view of the laser scanning device. When the angle of view of the camera is smaller than the angle of view of the laser scanning device, the step of projecting the second point cloud data to the three-dimensional grid space in the camera coordinate system may be performed.
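The field-of-view check and grid projection above can be sketched as follows; the symmetric frustum test and the cell size are illustrative assumptions, not the patent's exact construction:

```python
def project_into_grid(points_cam, cell=0.5, fov_tan=1.0):
    """Insert camera-frame points into a 3D grid, keeping only those inside
    a symmetric pinhole frustum (|x| and |y| within z * fov_tan)."""
    grid = {}
    for x, y, z in points_cam:
        if z <= 0 or abs(x) > z * fov_tan or abs(y) > z * fov_tan:
            continue   # behind the camera or outside its field of view
        key = (int(x // cell), int(y // cell), int(z // cell))
        grid.setdefault(key, []).append((x, y, z))
    return grid

# Two points fall into the same cell; one lies outside the frustum and is dropped.
grid = project_into_grid([(0.1, 0.1, 2.0), (5.0, 0.0, 2.0), (0.2, 0.1, 2.1)])
```

The frustum test implements the idea that when the camera's angle of view is smaller than the lidar's, only points the camera can see are worth projecting.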
  • The order of step S202 and step S203 can also be reversed.
  • That is, the first point cloud data can first be projected into the camera's three-dimensional grid space, and the second point cloud data then determined from the first point cloud data, where the second point cloud data is used to indicate invalid point cloud data and/or discontinuous point cloud data. This is only an exemplary description and is not limiting.
  • The calibration device may project the projected three-dimensional space onto the image data collected by the camera, and obtain the optimal position of the projection on the image data. Specifically, at the optimal position, the projected three-dimensional space matches the image data best.
  • satisfying the preset condition includes that the quantity of the second point cloud data in each grid area in the projected three-dimensional space is greater than a preset quantity threshold.
  • When the calibration device projects the projected three-dimensional space onto the image data collected by the camera to obtain the optimal position, it can determine the gradient image corresponding to the image data from the image data collected by the camera, and project the second point cloud data of the projected three-dimensional space onto that gradient image.
  • The calibration device can thereby determine the optimal position of the projected three-dimensional space on the image data.
  • When the second point cloud data of the projected three-dimensional space is projected onto the gradient image and fully fuses with the gradient image, the calibration device can determine the optimal position of the projection on the image data according to the following formula (1).
  • D p is the gradient of the corresponding projection point on the image.
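Since formula (1) itself is not reproduced in this text, the following is an illustrative stand-in objective: summing the gradient magnitude D_p over all projected points and keeping the candidate alignment with the highest score. Discontinuous lidar points tend to land on image edges when the extrinsics are correct:

```python
def gradient_score(projected_pixels, gradient_image):
    """Score one candidate alignment: the sum of image-gradient magnitudes
    D_p at the pixels the point cloud projects to (out-of-frame points skipped)."""
    h, w = len(gradient_image), len(gradient_image[0])
    return sum(gradient_image[v][u] for u, v in projected_pixels
               if 0 <= u < w and 0 <= v < h)

# Toy gradient image with a strong vertical edge in column 2.
grad = [[0, 0, 9, 0],
        [0, 0, 9, 0],
        [0, 0, 9, 0]]
aligned    = [(2, 0), (2, 1), (2, 2)]   # projections landing on the edge
misaligned = [(0, 0), (0, 1), (0, 2)]   # projections one column off
best = max([aligned, misaligned], key=lambda p: gradient_score(p, grad))
```

In practice the search would iterate over candidate extrinsic perturbations and re-project the cloud for each, keeping the maximizer.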
  • When the calibration device determines the gradient image corresponding to the image data, it may convert the image data collected by the camera into a grayscale image, and extract gradient information and/or edge information from the grayscale image, so as to determine the gradient image from that gradient information and/or edge information.
  • When the calibration device projects the projected three-dimensional space onto the image data collected by the camera to obtain the optimal position, it can obtain the target image produced by projecting the projected three-dimensional space onto the image data collected by the camera, determine the reflectivity of the second point cloud data in the target image and the gray values of the grayscale image corresponding to the target image, and from that reflectivity and those gray values determine the optimal position of the projection on the image data.
  • The calibration device evaluates the projection of the three-dimensional space onto the target image according to the reflectivity of the second point cloud data in the target image and the gray values of the grayscale image corresponding to the target image; the optimal position of the projection on the image data can be determined according to formula (2).
  • I p is the gray value of the corresponding projection point on the image.
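Formula (2) is likewise not reproduced in this text; as an illustrative stand-in, the match between point reflectivity and the image gray value I_p at the projected pixel can be scored with a Pearson correlation. Highly reflective surfaces (e.g. lane paint) tend to be bright in the image, so a correct alignment scores high:

```python
def reflectivity_score(samples):
    """Pearson correlation between point reflectivities and the gray values
    I_p of the pixels they project to. samples is a list of (reflectivity, gray)."""
    n = len(samples)
    rs = [r for r, _ in samples]
    gs = [g for _, g in samples]
    mr, mg = sum(rs) / n, sum(gs) / n
    cov = sum((r - mr) * (g - mg) for r, g in samples)
    sd_r = sum((r - mr) ** 2 for r in rs) ** 0.5
    sd_g = sum((g - mg) ** 2 for g in gs) ** 0.5
    return cov / (sd_r * sd_g)

# Reflective points hit bright pixels when aligned, dark pixels when shifted.
good = [(0.9, 250), (0.1, 20), (0.8, 230), (0.2, 40)]
bad  = [(0.9, 20), (0.1, 250), (0.8, 40), (0.2, 230)]
```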
  • When the calibration device projects the projected three-dimensional space onto the image data to obtain the optimal position, it can obtain the movement information of the movable platform during motion, determine compensation information for the second point cloud data according to that movement information, compensate the second point cloud data in the projected three-dimensional space accordingly, and then project the compensated second point cloud data onto the image data collected by the camera to obtain the optimal position of the projection on the image data.
  • The motion information includes any one or more of position information, speed information, and acceleration information.
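A minimal sketch of the motion compensation described above, under an illustrative constant-velocity assumption (the patent's compensation model is not specified in this text):

```python
def compensate(point, velocity, dt):
    """Shift a point scanned dt seconds before the camera exposure by the
    platform's motion, so scan and image refer to the same instant."""
    return tuple(p - v * dt for p, v in zip(point, velocity))

# Platform moving forward at 10 m/s; a point scanned 0.1 s early sits
# 1 m closer along the motion axis at the moment the image is taken.
corrected = compensate((20.0, 0.0, 0.0), (10.0, 0.0, 0.0), 0.1)
```

With acceleration information available, a second-order term could be added in the same way.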
  • When obtaining the optimal position, the calibration device can also acquire the second point cloud data collected while the movable platform moves within a preset time range, and project the second point cloud data of the projected three-dimensional space acquired within that time range onto the image data collected by the camera to obtain the optimal position of the projection on the image data.
  • the image data collected by the camera is not limited to grayscale images, and this embodiment is only an exemplary description and is not limited herein.
  • the color image data collected by the camera can also be processed.
  • Algorithms such as machine learning can be used to first identify specific objects in the scene, such as lane lines or telephone poles, and then determine the optimal position of the projected three-dimensional space on the image data based on physical information such as the reflectivity and brightness of the identified objects; this reduces the probability of false detection and improves calibration accuracy.
  • When the calibration device projects the projected three-dimensional space onto the image data collected by the camera to obtain the optimal position of the projected three-dimensional space on the image data, it can obtain, during the movement of the movable platform, multiple target images produced by projecting the projected three-dimensional space onto the image data collected by the camera, and compare the data of each target image. If the data of each target image are determined to be consistent, the position information of the target images can be determined to be the optimal position of the projected three-dimensional space on the image data.
  • When the calibration device compares the data of each target image and determines that the data of the target images are inconsistent, it can determine that the external parameters of the laser scanning device have changed; further, the external parameters of the laser scanning device are updated.
  • When the calibration device compares the data of each target image and determines that the data of the target images are inconsistent, it can trigger a preset alarm device to give an alarm to remind the user that the external parameters of the laser scanning device have changed; further, the user may be prompted to check the laser scanning device, or the laser scanning device may be checked automatically, which is not limited here.
  • The calibration device obtains the first point cloud data of the surrounding environment of the movable platform collected by the laser scanning device and the image data collected by the camera, determines the second point cloud data according to the first point cloud data, and projects the second point cloud data into a three-dimensional grid space in the camera coordinate system to obtain a projected three-dimensional space. When each grid area in the projected three-dimensional space meets a preset condition, the projected three-dimensional space is projected onto the image data, and the optimal position of the projected three-dimensional space on the image data is obtained, thereby achieving calibration in the surrounding environment of the movable platform without any specific marker and improving the calibration accuracy.
  • FIG. 5 is a schematic flowchart of an offline calibration method provided by an embodiment of the present invention.
  • In the offline calibration process, the first point cloud data of the surrounding environment of the movable platform is collected by the lidar, and image data is collected by the camera. Point cloud depth-discontinuity points are detected according to the first point cloud data and determined to be the second point cloud data. The second point cloud data is projected into the three-dimensional grid space to obtain the projected three-dimensional space, which is then compared with the existing data: if it is similar, the frame data is discarded; if not, the projected three-dimensional space data is added to a database. When it is determined that the data in the database is sufficient, the optimal position of the projected three-dimensional space on the image data is obtained.
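A minimal sketch of the depth-discontinuity detection step in the offline flow above might look as follows. The jump threshold is an assumed value; the patent does not prescribe a specific operator, only that points where the depth is discontinuous are taken as the second point cloud data.

```python
import numpy as np

def depth_discontinuities(ranges, jump_thresh=0.5):
    """Flag scan points adjacent to a sharp range jump between neighbouring
    returns as depth-discontinuity (edge) points."""
    jumps = np.abs(np.diff(ranges)) > jump_thresh
    mask = np.zeros(len(ranges), dtype=bool)
    mask[1:] |= jumps      # the point just after the jump
    mask[:-1] |= jumps     # the point just before the jump
    return mask

# a wall at ~5 m followed by a wall at ~9 m: the transition forms an edge
mask = depth_discontinuities(np.array([5.0, 5.1, 9.0, 9.1]))
```

Only the two returns straddling the 5 m / 9 m transition are flagged; smooth surfaces produce no discontinuity points.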
  • FIG. 6 is a schematic flowchart of an online calibration method provided by an embodiment of the present invention.
  • the online calibration process includes the offline calibration process.
  • The offline calibration process included therein will not be repeated here.
  • The difference between the online calibration process and the offline calibration process is that, in the online calibration process, after the projected three-dimensional space is obtained and projected onto the image data to determine the optimal position, a consistency check can be performed on the optimal position.
  • The consistency detection includes: storing the result of the optimal position in a result queue, storing multiple optimal positions in the result queue, detecting whether the optimal positions are consistent, and outputting the detection result. Whether the external parameters have changed is judged according to the detection result: if the optimal positions are inconsistent, the external parameters have changed and the optimal position needs to be updated; for example, the mounting structure may have become loose, so that calibration cannot be completed. Further, when the optimal positions are inconsistent, the preset alarm device can be triggered to give an alarm to remind the user that the external parameters of the laser scanning device have changed, or the user may be prompted to check the laser scanning device, or the laser scanning device may be checked automatically, which is not limited here.
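The result-queue consistency detection described above can be sketched as follows. The queue length, the tolerance, and the spread-based agreement test are assumptions for illustration, not values from the patent.

```python
from collections import deque
import numpy as np

class CalibrationMonitor:
    """Result queue for online consistency detection: keep the last N
    optimal-position estimates and flag drift when they disagree by more
    than `tol` per parameter (N and tol are assumed values)."""

    def __init__(self, maxlen=5, tol=0.05):
        self.queue = deque(maxlen=maxlen)
        self.tol = tol

    def add(self, estimate):
        self.queue.append(np.asarray(estimate, dtype=float))

    def is_consistent(self):
        if len(self.queue) < self.queue.maxlen:
            return None                          # not enough results yet
        spread = np.ptp(np.stack(self.queue), axis=0)  # per-parameter range
        return bool(np.all(spread <= self.tol))

monitor = CalibrationMonitor(maxlen=3, tol=0.05)
for est in ([1.00, 2.00], [1.01, 2.00], [1.02, 2.01]):
    monitor.add(est)
stable = monitor.is_consistent()        # estimates agree: extrinsics stable
monitor.add([1.50, 2.00])               # a drifted estimate arrives
drifted = not monitor.is_consistent()   # disagreement: update / raise alarm
```

When `is_consistent()` turns false, an implementation would update the external parameters and, as described above, optionally trigger the alarm device.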
  • FIG. 7 is a schematic structural diagram of a calibration device provided by an embodiment of the present invention.
  • the calibration device includes: a memory 701 and a processor 702.
  • the calibration device further includes a data interface 703, and the data interface 703 is used to transfer data information between the calibration device and other devices.
  • the memory 701 may include a volatile memory (volatile memory); the memory 701 may also include a non-volatile memory (non-volatile memory); the memory 701 may also include a combination of the foregoing types of memories.
  • the processor 702 may be a central processing unit (CPU).
  • the processor 702 may further include a hardware chip.
  • the aforementioned hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD) or a combination thereof.
  • the foregoing PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), or any combination thereof.
  • the memory 701 is used to store a program, and the processor 702 can call the program stored in the memory 701 to perform the following steps:
  • When the processor 702 determines the second point cloud data according to the first point cloud data, it is specifically configured to:
  • the discontinuous second point cloud data is determined.
  • When the processor 702 determines the discontinuous second point cloud data according to the distance between the two adjacent first point cloud data, it is specifically configured to:
  • The processor 702 is further configured to:
  • the discontinuous second point cloud data is determined according to the distance between the first point cloud data and the origin and the distance between the two adjacent first point cloud data.
  • When the processor 702 determines the discontinuous second point cloud data according to the distance between the first point cloud data and the origin and the distance between the two adjacent first point cloud data, it is specifically configured to:
  • When the processor 702 determines the second point cloud data according to the first point cloud data, it is specifically configured to:
  • the invalid second point cloud data in the first point cloud data is determined according to the depth information.
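One plausible reading of determining invalid points from depth information is a simple range gate, sketched below; the thresholds are assumed values, not taken from the patent.

```python
import numpy as np

def split_invalid_by_depth(points, min_depth=0.5, max_depth=100.0):
    """Separate points whose depth lies outside the sensor's reliable range;
    these are treated as invalid second point cloud data (thresholds are
    assumed values for illustration)."""
    depth = np.linalg.norm(points, axis=1)
    bad = (depth < min_depth) | (depth > max_depth)
    return points[bad], points[~bad]

pts = np.array([[0.1, 0.0, 0.0],     # too close: likely self-reflection
                [10.0, 0.0, 0.0],    # within the reliable range
                [200.0, 0.0, 0.0]])  # beyond the reliable range
invalid, valid = split_invalid_by_depth(pts)
```

A real lidar driver may additionally mark zero-return or saturated points as invalid; the depth gate here is only the simplest case.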
  • When the processor 702 determines the second point cloud data according to the depth information, it is specifically configured to:
  • When the processor 702 determines the second point cloud data according to the depth information, it is specifically configured to:
  • the processor 702 is further configured to:
  • the similarity is less than or equal to the preset similarity threshold, it is determined to add the first point cloud data of the current frame to the first point cloud data that has been acquired.
  • When the processor 702 projects the second point cloud data into the three-dimensional grid space in the camera coordinate system to obtain the projected three-dimensional space, it is specifically configured to:
  • the second point cloud data is projected to the three-dimensional grid space in the camera coordinate system to obtain the projected three-dimensional space.
  • Before the processor 702 projects the second point cloud data into the three-dimensional grid space in the camera coordinate system according to the relative position information to obtain the projected three-dimensional space, it is further configured to:
  • When the processor 702 projects the second point cloud data into the three-dimensional grid space in the camera coordinate system according to the relative position information to obtain the projected three-dimensional space, it is specifically configured to:
  • When the processor 702 determines the spatial distribution similarity between the second point cloud data and the point cloud data that already exists in the three-dimensional grid space, it is specifically configured to:
  • The spatial distribution similarity between the second point cloud data and the point cloud data that already exists in the three-dimensional grid space is determined according to the position information of the second point cloud data and the position information of the point cloud data that already exists in the three-dimensional grid space.
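One plausible way to realize the spatial distribution similarity above is the Jaccard overlap of the grid cells occupied by the two point sets, sketched below; the patent does not prescribe a specific formula, and the cell size is an assumed value.

```python
import numpy as np

def grid_similarity(points_a, points_b, cell=0.2):
    """Spatial-distribution similarity as the Jaccard overlap of occupied
    grid cells (an illustrative metric, not the patent's exact formula)."""
    def occupied(points):
        return set(map(tuple, np.floor(np.asarray(points) / cell).astype(int)))
    ca, cb = occupied(points_a), occupied(points_b)
    if not ca and not cb:
        return 1.0
    return len(ca & cb) / len(ca | cb)

frame = [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]
same = grid_similarity(frame, frame)                # identical frames overlap fully
moved = grid_similarity(frame, [[5.0, 5.0, 5.0]])   # disjoint frames do not
```

A frame whose similarity to the stored data exceeds a preset threshold would be discarded, matching the "discard if similar" step of the offline flow.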
  • Before the processor 702 projects the second point cloud data into the three-dimensional grid space in the camera coordinate system, it is further configured to:
  • the step of projecting the second point cloud data to the three-dimensional grid space in the camera coordinate system is performed.
  • The preset condition includes:
  • the quantity of the second point cloud data in each grid area in the projected three-dimensional space is greater than a preset quantity threshold.
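The preset condition above can be checked, for example, by bucketing the projected points into grid cells and verifying that every occupied cell holds more points than the threshold; the cell size and threshold below are assumed values.

```python
import numpy as np
from collections import Counter

def preset_condition_met(points, cell=1.0, min_count=3):
    """Check the preset condition: every occupied grid area of the projected
    three-dimensional space must hold more than `min_count` points
    (cell size and threshold are assumed values)."""
    cells = map(tuple, np.floor(np.asarray(points) / cell).astype(int))
    return all(n > min_count for n in Counter(cells).values())

dense = np.random.default_rng(0).normal(0.5, 0.05, size=(20, 3))  # one well-filled cell
ready = preset_condition_met(dense)
sparse = np.vstack([dense, [[9.5, 9.5, 9.5]]])  # plus a cell with a single point
not_ready = not preset_condition_met(sparse)
```

Until the condition holds, the device keeps accumulating frames into the database rather than running the projection search.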
  • When the processor 702 projects the projected three-dimensional space onto the image data collected by the camera and obtains the optimal position of the projected three-dimensional space on the image data, it is specifically configured to:
  • When the processor 702 determines the gradient image corresponding to the image data according to the image data collected by the camera, it is specifically configured to:
  • the gradient image is determined according to the gradient information and/or edge information.
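A gradient image as referred to above can be computed, for instance, with central differences; the choice of operator here (`np.gradient` rather than, say, a Sobel filter) is an assumption, since the text only requires gradient and/or edge information.

```python
import numpy as np

def gradient_image(gray):
    """Gradient-magnitude image of a grayscale frame via central
    differences; np.gradient returns derivatives along axis 0 (rows)
    and axis 1 (columns), combined here into a magnitude."""
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy)

img = np.zeros((4, 4))
img[:, 2:] = 10.0            # vertical step edge between columns 1 and 2
grad = gradient_image(img)
```

Pixels along the step edge carry a large gradient magnitude while flat regions stay at zero, which is what the projected edge points are matched against.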
  • When the processor 702 projects the projected three-dimensional space onto the image data collected by the camera and obtains the optimal position of the projected three-dimensional space on the image data, it is specifically configured to:
  • the optimal position of the projection three-dimensional space projected onto the image data is determined.
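Determining the optimal position typically amounts to scoring how well the projected edge points land on strong image gradients; the sketch below scores one extrinsic hypothesis under an assumed pinhole model without distortion, and a search over hypotheses (e.g. small rotations/translations) would keep the highest-scoring one.

```python
import numpy as np

def projection_score(points_cam, gradient_img, K):
    """Score one extrinsic hypothesis: project 3-D edge points already
    expressed in the camera frame through intrinsics K (pinhole, no
    distortion assumed) and sum the gradient magnitudes at the pixels
    they hit; the optimal position maximizes this score."""
    uvw = (K @ np.asarray(points_cam).T).T          # homogeneous pixel coords
    uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)
    h, w = gradient_img.shape
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return float(gradient_img[uv[inside, 1], uv[inside, 0]].sum())

K = np.array([[1.0, 0.0, 1.0],   # toy intrinsics for illustration
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
grad = np.zeros((3, 3))
grad[1, 1] = 7.0                 # a strong image edge at pixel (1, 1)
score = projection_score([[0.0, 0.0, 1.0]], grad, K)  # point lands on (1, 1)
```

A hypothesis that places lidar edges on image edges collects a high score; a misaligned hypothesis scatters the points onto flat regions and scores low.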
  • When the processor 702 projects the projected three-dimensional space onto the image data collected by the camera and obtains the optimal position of the projected three-dimensional space on the image data, it is specifically configured to:
  • the motion information includes any one or more of position information, speed information, and acceleration information.
  • When the processor 702 projects the projected three-dimensional space onto the image data collected by the camera and obtains the optimal position of the projected three-dimensional space on the image data, it is specifically configured to:
  • When the processor 702 projects the projected three-dimensional space onto the image data collected by the camera and obtains the optimal position of the projected three-dimensional space on the image data, it is specifically configured to:
  • It is determined that the position information of the target image is the optimal position of the projected three-dimensional space on the image data.
  • The processor 702 is further configured to:
  • The processor 702 is further configured to:
  • a preset alarm device is triggered to give an alarm to prompt the user to check the laser scanning device.
  • the laser scanning device includes any one or more of laser radar, millimeter wave radar, and ultrasonic radar.
  • The calibration device obtains the first point cloud data of the surrounding environment of the movable platform collected by the laser scanning device and the image data collected by the camera, determines the second point cloud data according to the first point cloud data, and projects the second point cloud data into a three-dimensional grid space in the camera coordinate system to obtain a projected three-dimensional space. When each grid area in the projected three-dimensional space meets a preset condition, the projected three-dimensional space is projected onto the image data, and the optimal position of the projected three-dimensional space on the image data is obtained, thereby achieving calibration in the surrounding environment of the movable platform without any specific marker and improving the calibration accuracy.
  • An embodiment of the present invention further provides a movable platform, which includes: a fuselage; a power system configured on the fuselage to provide moving power for the movable platform; and the above-described calibration device.
  • The movable platform obtains the first point cloud data of the surrounding environment of the movable platform collected by the laser scanning device and the image data collected by the camera, determines the second point cloud data according to the first point cloud data, and projects the second point cloud data into a three-dimensional grid space in the camera coordinate system to obtain a projected three-dimensional space. When each grid area in the projected three-dimensional space meets a preset condition, the projected three-dimensional space is projected onto the image data, and the optimal position of the projected three-dimensional space on the image data is obtained, so as to realize calibration in the surrounding environment of the movable platform without any specific marker and improve the calibration accuracy.
  • An embodiment of the present invention also provides a computer-readable storage medium that stores a computer program. When the computer program is executed by a processor, it implements the method described in the embodiment corresponding to FIG. 2 of the present invention, and can also implement the device of the embodiment corresponding to FIG. 7 of the present invention, which will not be repeated here.
  • the computer-readable storage medium may be an internal storage unit of the device described in any of the foregoing embodiments, such as a hard disk or memory of the device.
  • The computer-readable storage medium may also be an external storage device of the device, such as a plug-in hard disk equipped on the device, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, etc.
  • the computer-readable storage medium may also include both an internal storage unit of the device and an external storage device.
  • the computer-readable storage medium is used to store the computer program and other programs and data required by the terminal.
  • the computer-readable storage medium can also be used to temporarily store data that has been output or will be output.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

Embodiments of the present invention relate to a calibration method and device, a movable platform, and a storage medium. The method comprises the steps of: (S201) obtaining first point cloud data of the environment surrounding a movable platform, collected by a laser scanning device, and image data collected by a camera; (S202) determining second point cloud data according to the first point cloud data, the second point cloud data being used to indicate invalid point cloud data and/or discontinuous point cloud data; (S203) projecting the second point cloud data into a three-dimensional grid space in a camera coordinate system to obtain a projected three-dimensional space; and (S204) when each grid region in the projected three-dimensional space meets a preset condition, projecting the projected three-dimensional space onto the image data collected by the camera and obtaining the optimal position of the projected three-dimensional space projected onto the image data. The environment surrounding the movable platform can thus be calibrated without any specific marker, and the calibration accuracy is improved.
PCT/CN2019/098354 2019-07-30 2019-07-30 Calibration method and device, movable platform, and storage medium WO2021016854A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/098354 WO2021016854A1 (fr) Calibration method and device, movable platform, and storage medium
CN201980030471.8A CN112106111A (zh) Calibration method and device, movable platform, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/098354 WO2021016854A1 (fr) Calibration method and device, movable platform, and storage medium

Publications (1)

Publication Number Publication Date
WO2021016854A1 true WO2021016854A1 (fr) 2021-02-04

Family

ID=73748811

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/098354 WO2021016854A1 (fr) Calibration method and device, movable platform, and storage medium

Country Status (2)

Country Link
CN (1) CN112106111A (fr)
WO (1) WO2021016854A1 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ES2858148A1 (es) * 2021-05-27 2021-09-29 Univ Madrid Politecnica Orientable multi-panel equipment for calibrating movements from point clouds obtained in the field with a terrestrial laser scanner (TLS)
CN114529884A (zh) * 2022-02-23 2022-05-24 广东汇天航空航天科技有限公司 Binocular-camera-based obstacle detection and processing method, apparatus, device, and system
CN115267746A (zh) * 2022-06-13 2022-11-01 广州文远知行科技有限公司 Method for locating lidar point cloud projection errors and related device
CN117011503A (zh) * 2023-08-07 2023-11-07 青岛星美装饰服务有限公司 Machining data determination method, apparatus, device, and readable storage medium
CN117523105A (zh) * 2023-11-24 2024-02-06 哈工大郑州研究院 Three-dimensional scene reconstruction method fusing lidar and multi-camera data

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112578356B (zh) * 2020-12-25 2024-05-17 上海商汤临港智能科技有限公司 Extrinsic parameter calibration method and apparatus, computer device, and storage medium
CN114756162B (zh) * 2021-01-05 2023-09-05 成都极米科技股份有限公司 Touch system and method, electronic device, and computer-readable storage medium
CN113639685B (zh) * 2021-08-10 2023-10-03 杭州申昊科技股份有限公司 Displacement detection method, apparatus, device, and storage medium
CN113740829A (zh) * 2021-11-05 2021-12-03 新石器慧通(北京)科技有限公司 Extrinsic parameter monitoring method and apparatus for environment sensing device, medium, and driving apparatus

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150310135A1 (en) * 2014-04-24 2015-10-29 The Board Of Trustees Of The University Of Illinois 4d vizualization of building design and construction modeling with photographs
CN107564069A (zh) * 2017-09-04 2018-01-09 北京京东尚科信息技术有限公司 Method and apparatus for determining calibration parameters, and computer-readable storage medium
CN108406731A (zh) * 2018-06-06 2018-08-17 珠海市微半导体有限公司 Depth-vision-based positioning apparatus, method, and robot
CN109300162A (zh) * 2018-08-17 2019-02-01 浙江工业大学 Joint calibration method for multi-line lidar and camera based on refined radar-scan edge points
WO2019035049A1 (fr) * 2017-08-16 2019-02-21 Mako Surgical Corp. Bone registration in ultrasound imaging with learning-based sound speed calibration and segmentation

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9430822B2 (en) * 2013-06-14 2016-08-30 Microsoft Technology Licensing, Llc Mobile imaging platform calibration
JP6318576B2 (ja) * 2013-11-22 2018-05-09 株式会社リコー Image projection system, image processing apparatus, image projection method, and program
CN109949371A (zh) * 2019-03-18 2019-06-28 北京智行者科技有限公司 Calibration method for lidar and camera data

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150310135A1 (en) * 2014-04-24 2015-10-29 The Board Of Trustees Of The University Of Illinois 4d vizualization of building design and construction modeling with photographs
WO2019035049A1 (fr) * 2017-08-16 2019-02-21 Mako Surgical Corp. Bone registration in ultrasound imaging with learning-based sound speed calibration and segmentation
CN107564069A (zh) * 2017-09-04 2018-01-09 北京京东尚科信息技术有限公司 Method and apparatus for determining calibration parameters, and computer-readable storage medium
CN108406731A (zh) * 2018-06-06 2018-08-17 珠海市微半导体有限公司 Depth-vision-based positioning apparatus, method, and robot
CN109300162A (zh) * 2018-08-17 2019-02-01 浙江工业大学 Joint calibration method for multi-line lidar and camera based on refined radar-scan edge points

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ES2858148A1 (es) * 2021-05-27 2021-09-29 Univ Madrid Politecnica Orientable multi-panel equipment for calibrating movements from point clouds obtained in the field with a terrestrial laser scanner (TLS)
CN114529884A (zh) * 2022-02-23 2022-05-24 广东汇天航空航天科技有限公司 Binocular-camera-based obstacle detection and processing method, apparatus, device, and system
CN115267746A (zh) * 2022-06-13 2022-11-01 广州文远知行科技有限公司 Method for locating lidar point cloud projection errors and related device
CN117011503A (zh) * 2023-08-07 2023-11-07 青岛星美装饰服务有限公司 Machining data determination method, apparatus, device, and readable storage medium
CN117011503B (zh) * 2023-08-07 2024-05-28 青岛星美装饰服务有限公司 Machining data determination method, apparatus, device, and readable storage medium
CN117523105A (zh) * 2023-11-24 2024-02-06 哈工大郑州研究院 Three-dimensional scene reconstruction method fusing lidar and multi-camera data
CN117523105B (zh) * 2023-11-24 2024-05-28 哈工大郑州研究院 Three-dimensional scene reconstruction method fusing lidar and multi-camera data

Also Published As

Publication number Publication date
CN112106111A (zh) 2020-12-18

Similar Documents

Publication Publication Date Title
WO2021016854A1 (fr) Calibration method and device, movable platform, and storage medium
WO2021189468A1 (fr) Attitude correction method, apparatus, and system for laser radar
WO2020215172A1 (fr) Obstacle detection method and device, movable platform, and storage medium
CN112017251B (zh) Calibration method and apparatus, roadside device, and computer-readable storage medium
Kang et al. Automatic targetless camera–lidar calibration by aligning edge with gaussian mixture model
KR102159376B1 (ko) Laser scanning system, laser scanning method, movable laser scanning system, and program
TWI624170B (zh) Image scanning system and method thereof
CN111080784B (zh) Ground three-dimensional reconstruction method and apparatus based on ground image texture
CN111684382B (zh) Movable platform state estimation method and system, movable platform, and storage medium
CN111142514B (zh) Robot and obstacle avoidance method and apparatus thereof
WO2021195939A1 (fr) Calibration method for external parameters of binocular photographing device, movable platform, and system
CN113111513B (zh) Sensor configuration scheme determination method and apparatus, computer device, and storage medium
WO2022179207A1 (fr) Window occlusion detection method and apparatus
CN114581480B (zh) Multi-UAV cooperative target state estimation control method and application thereof
CN112146848A (zh) Method and apparatus for determining distortion parameters of a camera
KR20200076628A (ko) Method for measuring position of mobile device, position measuring apparatus, and electronic device
CN111553342B (zh) Visual positioning method and apparatus, computer device, and storage medium
CN114782556B (zh) Camera and lidar registration method and system, and storage medium
US20220291009A1 (en) Information processing apparatus, information processing method, and storage medium
CN113014899B (zh) Binocular image disparity determination method, apparatus, and system
CN112598736A (zh) Visual positioning method and apparatus based on map construction
CN113409376A (zh) Method for filtering lidar point cloud based on camera depth estimation
JP2021012043A (ja) Information processing apparatus, information processing method, and information processing program for machine learning
US20230419468A1 (en) Image processing apparatus, image processing method, and image processing program
US20210404843A1 (en) Information processing apparatus, control method for information processing apparatus, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19939805

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19939805

Country of ref document: EP

Kind code of ref document: A1