WO2021097807A1 - Method and apparatus for calibrating external parameters of a detection device, and movable platform - Google Patents

Method and apparatus for calibrating external parameters of a detection device, and movable platform

Info

Publication number
WO2021097807A1
Authority
WO
WIPO (PCT)
Prior art keywords
detection device
detection
target
relative
area
Prior art date
Application number
PCT/CN2019/120278
Other languages
English (en)
French (fr)
Inventor
刘天博 (LIU Tianbo)
李威 (LI Wei)
Original Assignee
深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority to CN201980038511.3A priority Critical patent/CN112272757A/zh
Priority to PCT/CN2019/120278 priority patent/WO2021097807A1/zh
Publication of WO2021097807A1 publication Critical patent/WO2021097807A1/zh

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01DMEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D18/00Testing or calibrating apparatus or arrangements provided for in groups G01D1/00 - G01D15/00
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C25/00Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
    • G01C25/005Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass initial alignment, calibration or starting-up of inertial devices
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497Means for monitoring or calibrating
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Definitions

  • the embodiments of the present invention relate to the technical field of mobile platforms, and in particular to a method, a device and a movable platform for calibrating external parameters of a detection device.
  • External parameter calibration is used to calculate the transformation relationship between the positions and orientations of multiple different detection devices.
  • detection devices include, but are not limited to, inertial measurement units, cameras, and lidars. After calibration of these detection devices, the displacement and rotation parameters between any two detection devices can be calculated, which are external parameters. Using these external parameters, the position and posture between any two different detection devices can be converted mutually.
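As an illustration of how such external parameters are used, the following sketch converts a point observed by one detection device into the coordinate frame of another. It is a minimal 2D example; the sensor pairing and the rotation and displacement values are hypothetical, not taken from the patent:

```python
import math

def rot2d(theta):
    """2x2 rotation matrix for angle theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def apply(R, t, p):
    """Transform point p by rotation R and translation t."""
    return [R[0][0] * p[0] + R[0][1] * p[1] + t[0],
            R[1][0] * p[0] + R[1][1] * p[1] + t[1]]

# Hypothetical extrinsics between a camera and a lidar:
# a 90-degree rotation and a displacement of (1.0, 0.5) metres.
R_cl = rot2d(math.pi / 2)
t_cl = [1.0, 0.5]

# A point observed at (2, 0) in the camera frame maps into the lidar frame:
p_cam = [2.0, 0.0]
p_lidar = apply(R_cl, t_cl, p_cam)
print(p_lidar)  # → [1.0, 2.5]
```

Once calibrated, the same pair (R, t) converts any observation between the two devices, which is what makes the extrinsic parameters the key output of the method.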
  • at present, the calibration method for detection devices on a movable platform usually relies on special environmental information: calibration objects must be arranged in the movable platform's operating environment in advance, so external parameter calibration can only be carried out in a specifically prepared area and not at other sites, which greatly limits the flexibility of external parameter calibration.
  • the embodiment of the present invention provides a method, device, mobile platform and storage medium for calibrating external parameters of a detection device, which can conveniently complete external parameter calibration.
  • the embodiment of the present invention provides a method for calibrating the external parameters of a detection device, the method is suitable for a movable platform, and at least a first detection device and a second detection device are arranged at different positions of the movable platform
  • the first detection device and the second detection device are respectively used to collect environmental detection information, and the method includes:
  • acquire first environment detection information detected by the first detection device and acquire second environment detection information detected by the second detection device, wherein the first environment detection information and the second environment detection information each include information about a target environment area, and the target environment area is a part of the environment detection area corresponding to the movable platform;
  • determine the pose data of the first detection device relative to the target environment area according to the first environment detection information, and determine the pose data of the second detection device relative to the target environment area according to the second environment detection information;
  • determine the target external parameters between the first detection device and the second detection device based on the pose data of the first detection device relative to the target environment area and the pose data of the second detection device relative to the target environment area.
  • an embodiment of the present invention provides an external parameter calibration device of a detection device, the device is configured on a movable platform, and at least a first detection device and a second detection device are configured at different positions of the movable platform, the first detection device and the second detection device being respectively used to collect environmental detection information, and the device includes:
  • the acquiring module is configured to acquire the first environmental detection information detected by the first detection device, and acquire the second environmental detection information detected by the second detection device, wherein the first environmental detection information and the The second environment detection information includes information about a target environment area, and the target environment area is a part of the environment area in the environment detection area corresponding to the movable platform;
  • the processing module is configured to determine the pose data of the first detection device relative to the target environment area according to the first environment detection information, and determine the pose data of the second detection device relative to the target environment area according to the second environment detection information;
  • the processing module is further configured to determine the target external parameters between the first detection device and the second detection device based on the pose data of the first detection device relative to the target environment area and the pose data of the second detection device relative to the target environment area.
  • an embodiment of the present invention provides a movable platform, and at least a first detection device and a second detection device are configured at different positions of the movable platform, the first detection device and the second detection device being respectively used to collect environmental detection information;
  • the movable platform includes a processor and a communication interface, the processor and the communication interface being connected to each other, wherein the communication interface is controlled by the processor for sending and receiving instructions, and the processor is used for:
  • acquiring first environment detection information detected by the first detection device and acquiring second environment detection information detected by the second detection device, wherein the first environment detection information and the second environment detection information each include information about a target environment area, and the target environment area is a part of the environment detection area corresponding to the movable platform;
  • determining the pose data of the first detection device relative to the target environment area according to the first environment detection information, determining the pose data of the second detection device relative to the target environment area according to the second environment detection information, and determining the target external parameters between the first detection device and the second detection device based on the pose data.
  • the embodiment of the present invention provides another method for calibrating the external parameters of the detection device.
  • the method is suitable for a movable platform.
  • the detection device is configured at different positions of the movable platform, and the detection device includes a first detection device and a third detection device, the third detection device includes an inertial sensor, and the method includes:
  • determine the relative translation and relative rotation of the first detection device between different positions; determine the acceleration and angular velocity of the first detection device based on the relative translation and relative rotation; acquire the acceleration and angular velocity of the third detection device; and obtain the target external parameters between the first detection device and the third detection device by comparing the acceleration of the third detection device, the angular velocity of the third detection device, the acceleration of the first detection device, and the angular velocity of the first detection device.
  • the embodiment of the present invention provides another external parameter calibration device of a detection device, the device is suitable for a movable platform, and detection devices are arranged at different positions of the movable platform, and the detection device includes The first detection device and the third detection device, the third detection device includes an inertial sensor, and the device includes:
  • a processing module configured to determine the relative translation and relative rotation of the first detection device between different positions based on the detected target environment area;
  • the processing module is further configured to determine the acceleration and angular velocity of the first detection device based on the relative translation and the relative rotation of the first detection device between the different positions;
  • An acquisition module for acquiring the acceleration and angular velocity of the third detection device
  • the processing module is further configured to compare the acceleration of the third detection device, the angular velocity of the third detection device, the acceleration of the first detection device, and the angular velocity of the first detection device to obtain the target external parameters between the first detection device and the third detection device.
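The derivation of acceleration and angular velocity from the relative translations and rotations between positions can be sketched with simple finite differences. This is a hedged 1D illustration with hypothetical sample values; the patent does not prescribe a particular numerical scheme:

```python
def finite_diff(samples, dt):
    """First-order finite difference of a list of scalar samples."""
    return [(b - a) / dt for a, b in zip(samples, samples[1:])]

# Hypothetical positions (m) and headings (rad) of the first detection
# device at 0.1 s intervals, recovered from its relative translations
# and rotations between different positions.
dt = 0.1
xs     = [0.0, 0.05, 0.20, 0.45]   # accelerating along x
thetas = [0.0, 0.02, 0.04, 0.06]   # constant-rate yaw

vx = finite_diff(xs, dt)           # velocities
ax = finite_diff(vx, dt)           # accelerations (approx. 10 m/s^2 each)
wz = finite_diff(thetas, dt)       # angular velocities (approx. 0.2 rad/s)
print(ax, wz)
```

The resulting acceleration and angular-velocity profiles can then be compared against those reported by the inertial sensor of the third detection device to estimate the extrinsic between the two.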
  • an embodiment of the present invention provides another movable platform of a detection device, and detection devices are arranged at different positions of the movable platform, and the detection device includes a first detection device and a third detection device,
  • the third detection device includes an inertial sensor
  • the movable platform includes a processor and a communication interface
  • the processor is configured to:
  • determining the relative translation and relative rotation of the first detection device between different positions; determining the acceleration and angular velocity of the first detection device based on the relative translation and relative rotation; acquiring the acceleration and angular velocity of the third detection device; and obtaining the target external parameters between the first detection device and the third detection device by comparing the acceleration of the third detection device, the angular velocity of the third detection device, the acceleration of the first detection device, and the angular velocity of the first detection device.
  • an embodiment of the present invention provides a computer storage medium that stores computer program instructions, and the computer program instructions are used to implement the above-mentioned external parameter calibration method of the detection device when the computer program instructions are executed.
  • in the embodiments of the present invention, the movable platform can respectively determine the pose data of the first detection device and the second detection device relative to the target environment area according to the first environmental detection information detected by the first detection device and the second environmental detection information detected by the second detection device, and determine the target external parameters between the first detection device and the second detection device according to the pose data.
  • Such an external parameter calibration method does not depend on special calibration equipment or a specific calibration environment, and can improve the flexibility and efficiency of external parameter calibration for the detection devices.
  • FIG. 1 is a schematic diagram of a scene of external parameter calibration of a detection device provided by an embodiment of the present invention
  • FIG. 2 is a schematic diagram of a scene of external parameter calibration of another detection device provided by an embodiment of the present invention
  • FIG. 3 is a schematic flowchart of a method for calibrating external parameters of a detection device according to an embodiment of the present invention
  • FIG. 4 is a schematic flowchart of another method for calibrating external parameters of a detection device according to an embodiment of the present invention.
  • FIG. 5 is a schematic flowchart of another external parameter calibration method of a detection device provided by an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of an external parameter calibration device of a detection device provided by an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of a movable platform provided by an embodiment of the present invention.
  • Calibration of external parameters can be applied to many fields, such as the field of autonomous driving. Calibration of external parameters of detection devices mounted on autonomous vehicles is a necessary and critical link in the development and production of autonomous vehicles.
  • the embodiment of the present invention proposes a method for calibrating the external parameters of the detection devices configured at different positions of the movable platform. The external parameters of those detection devices can be calibrated in real time during the movement of the movable platform, or before the movable platform moves; this is not specifically limited in the present invention.
  • the above-mentioned movable platform may be some mobile devices that can be driven on public transportation roads, such as autonomous vehicles, smart electric vehicles, scooters, balance vehicles and other vehicles.
  • At least a first detection device and a second detection device are configured at different positions of the movable platform.
  • the first detection device may be any sensor of a target type, where the target type includes image sensors (such as, but not limited to, camera devices) and perception sensors (such as, but not limited to, lidars), and the second detection devices may be a plurality of sensors of different or the same types.
  • for example, when N detection devices are configured on the movable platform (N is an integer greater than 1), one of them may be selected as the first detection device and the other N-1 detection devices are determined to be the second detection devices.
  • for example, 4 detection devices may be configured at different positions of the movable platform: binocular camera A, binocular camera B, lidar C, and lidar D.
  • a certain sensor at a certain position can be pre-selected as the main sensor (that is, the first detection device).
  • the binocular camera A or the binocular camera B may be determined as the main sensor, or the lidar C or D may also be determined as the main sensor, which is not specifically limited in the embodiment of the present application.
  • a detection device is provided on the front, rear, left, and right sides of the outside of the movable platform 10.
  • the detection device provided on the front side of the movable platform 10 can be selected as the first detection device.
  • the detection devices on the rear, left and right sides of the movable platform 10 are selected as the second detection devices.
  • the detection data of all detection devices can be continuously collected and buffered in the memory, and within a short time period the first detection device can be made to observe the same environmental area (such as environmental area 1 and environmental area 2 in FIG. 1) from different positions and rotations.
  • the movement trajectory of the first detection device can thus be determined. It is necessary to ensure that the first detection device and each second detection device can detect the same environmental area at least once during the movement.
  • both the first detection data and the second detection data may include Point cloud data (for example, a frame of point cloud collected by lidar) and/or image data (for example, a frame of picture collected by a camera).
  • the target environmental area is a part of the environmental detection area corresponding to the movable platform, for example, the environmental area 1 or the environmental area 2 in FIG. 1.
  • the pose data of the first detection device relative to the target environment area is determined according to the first detection data
  • the pose data of the second detection device relative to the target environment area is determined according to the second detection data
  • then, based on the pose data of the first detection device relative to the target environment area and the pose data of the second detection device relative to the target environment area, the target external parameters between the first detection device and the second detection device are determined. It can be seen that such an external parameter calibration method does not rely on special calibration equipment or a specific calibration environment, and does not require a shared field of view between sensors, which enables more flexible and efficient external parameter calibration for the detection devices.
  • the first detection device is arranged on the front side of the movable platform 10
  • a second detection device is arranged on each of the other sides.
  • for example, the first detection device and the second detection device located on the right side of the movable platform both detect the same target environmental area (environmental area 1) at different moments. The first detection data when the first detection device detects environmental area 1 and the second detection data when the second detection device on the right side detects environmental area 1 can be obtained; from the first detection data and the second detection data, the pose data of the first detection device relative to the target environment area and the pose data of the second detection device on the right side relative to the target environment area are determined, respectively. Further, according to these two sets of pose data, the target external parameters between the first detection device and the second detection device located on the right side of the movable platform are calculated.
  • the pose data of the first detection device relative to the target environment area includes first position data and first pose data of the first detection device relative to the target environment area
  • the pose data of the second detection device relative to the target environment area includes The second position data and the second attitude data of the second detection device relative to the target environment area.
  • the difference between the first position data and the second position data, and the difference between the first posture data and the second posture data, can be determined as the target external parameters between the first detection device and the second detection device.
  • the target external parameters include displacement and rotation parameters between the first detection device and the second detection device.
  • the movable platform 10 in FIG. 1 and FIG. 2 is only an example.
  • the movable platform shown in FIG. 1 and FIG. 2 may also be another mobile device, such as a competition robot, a drone, or an unmanned vehicle; the present invention does not limit this.
  • FIG. 3 is a schematic flowchart of a method for calibrating external parameters of a detection device provided by an embodiment of the present invention.
  • the method of the embodiment of the present invention may be executed by a movable platform. At least a first detection device and a second detection device are configured at different positions of the movable platform, and the first detection device and the second detection device are respectively used to collect environmental detection information.
  • the movable platform can obtain the first environment detection information detected by the first detection device and the second environment detection information detected by the second detection device in S301 .
  • the first environment detection information and the second environment detection information both include information about the target environment area, and the target environment area is a part of the environment area in the environment detection area corresponding to the movable platform.
  • the first detection device and the second detection device detect the target environment area at different times.
  • the movable platform can detect multiple environmental areas through the first detection device and the second detection device during the movement.
  • the multiple environmental areas constitute the environmental detection area corresponding to the movable platform's current movement process, and the target environmental area is any one of the multiple environmental areas.
  • the environmental detection area corresponding to the current movement process of the movable platform includes environmental area 1 and environmental area 2.
  • the target environmental area is a part of the environmental detection area, for example, environmental area 1 or environmental area 2.
  • the movable platform may call the first detection device and the second detection device to separately collect environmental detection information during the movement, and store the collected environmental detection information in a preset storage area. Further, after the movement of the movable platform is completed and all the environmental detection information has been collected, the movable platform can automatically check all of it; when the same target environmental area is detected in the collected information, the first environmental detection information detected by the first detection device and the second environmental detection information detected by the second detection device are obtained.
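The buffering-and-matching procedure above can be sketched as follows; the device labels, area identifiers, and storage layout are hypothetical stand-ins for the preset storage area:

```python
from collections import defaultdict

def find_common_area(records):
    """Scan buffered detection records and, for the first environmental
    area seen by both devices, return one record from each.

    `records` is a list of (device, area_id, detection_info) tuples.
    Returns None when no common area exists.
    """
    by_area = defaultdict(dict)
    for device, area, info in records:
        by_area[area].setdefault(device, info)
    for area, seen in by_area.items():
        if "first" in seen and "second" in seen:
            return area, seen["first"], seen["second"]
    return None

buffered = [
    ("first",  "area_2", "cloud_f2"),
    ("first",  "area_1", "cloud_f1"),
    ("second", "area_1", "cloud_s1"),
]
print(find_common_area(buffered))  # → ('area_1', 'cloud_f1', 'cloud_s1')
```

Only areas detected by both devices qualify as a target environment area, which mirrors the requirement that each second detection device shares at least one observed area with the first detection device.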
  • after the movable platform obtains the first environmental detection information detected by the first detection device and the second environmental detection information detected by the second detection device, in step S302 it determines the pose data of the first detection device relative to the target environment area according to the first environmental detection information, and determines the pose data of the second detection device relative to the target environment area according to the second environmental detection information.
  • first detection device and second detection device may both be image sensors (for example, camera devices) or perception sensors.
  • both the first environment detection information detected by the first detection device and the second environment detection information detected by the second detection device may include point cloud data about the target environment area or image data about the target environment area.
  • the above-mentioned perception sensor may be, for example, a lidar, which can obtain three-dimensional information of the scene.
  • the basic principle is to actively emit laser pulse signals toward the detected object and receive the reflected laser pulse signals. According to the time difference between the emitted laser pulse signal and the received reflected laser pulse signal, together with the propagation speed of the laser pulse signal, the depth information of the measured object is calculated; the angle information of the measured object relative to the lidar is obtained from the lidar's emission direction; combining the depth information and the angle information yields a large number of detection points.
  • the data set of these detection points is called a point cloud; based on the point cloud, the three-dimensional information of the measured object relative to the lidar can be reconstructed.
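The depth-from-time-difference computation described above can be sketched as follows. This is a minimal 2D illustration; the function name and sample values are hypothetical:

```python
import math

C = 299_792_458.0  # propagation speed of the laser pulse (speed of light), m/s

def detection_point(t_emit, t_receive, azimuth_rad):
    """Convert one lidar echo into a 2D detection point.

    Depth is half the round-trip distance; the emission direction
    gives the angle of the measured object relative to the lidar.
    """
    depth = C * (t_receive - t_emit) / 2.0
    return (depth * math.cos(azimuth_rad), depth * math.sin(azimuth_rad))

# An echo returning after ~66.7 ns at azimuth 0 lies ~10 m straight ahead.
x, y = detection_point(0.0, 2 * 10.0 / C, 0.0)
print(round(x, 6), round(y, 6))  # → 10.0 0.0
```

Accumulating such points over many emission directions produces the point cloud from which the three-dimensional scene is reconstructed.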
  • the second environment detection information includes image data about the target environment area.
  • in one embodiment, the pose data of the second detection device relative to the target environment area may be determined as follows: the image data about the target environment area is processed based on an image algorithm to obtain the pose data of the second detection device relative to the target environment area.
  • the pose data of the second detection device relative to the target environment area includes second position data and second pose data of the second detection device relative to the target environment area.
  • the second position data may be the world coordinates of the second detection device relative to the target environment area
  • the second posture data may be the rotation angle of the second detection device relative to the target environment area.
  • the second detection device is a camera
  • the second environment detection information includes image data about the target environment area.
  • the image data may be a frame of picture J1 about the target environment area.
  • the image algorithm may be a Perspective-n-Point (PnP) algorithm.
  • the movable platform can use PnP to combine the world coordinates of the feature points in picture J1 (in the world coordinate system) with the imaging of those feature points in picture J1 (i.e., their pixel coordinates) to solve for the camera's pose at the moment it collected picture J1: the world coordinates and rotation angle relative to the target environment area, which can be expressed as a translation matrix (t1) and a rotation matrix (R1), respectively.
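PnP recovers the camera pose from pairs of world points and their pixel coordinates; the forward model it inverts is the pinhole projection. A minimal sketch of that forward model, assuming hypothetical intrinsics and an example pose (R1, t1) not taken from the patent:

```python
def project(R, t, K, pw):
    """Project world point pw into pixel coordinates using pose (R, t)
    and intrinsic matrix K (pinhole model, no distortion)."""
    # Camera-frame coordinates: pc = R * pw + t
    pc = [sum(R[i][j] * pw[j] for j in range(3)) + t[i] for i in range(3)]
    # Perspective division and application of the intrinsics
    u = K[0][0] * pc[0] / pc[2] + K[0][2]
    v = K[1][1] * pc[1] / pc[2] + K[1][2]
    return (u, v)

# Hypothetical intrinsics (fx = fy = 500, principal point at 320, 240)
K = [[500.0, 0.0, 320.0],
     [0.0, 500.0, 240.0],
     [0.0, 0.0, 1.0]]
# Identity rotation, camera 5 m from the target area along the optical axis
R1 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t1 = [0.0, 0.0, 5.0]

print(project(R1, t1, K, [1.0, 0.0, 0.0]))  # → (420.0, 240.0)
```

Given several such (world point, pixel) pairs from picture J1, PnP searches for the (R1, t1) that reproduces the observed pixel coordinates.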
  • similarly, the pose data of the first detection device relative to the target environment area is determined by processing the image data about the target environment area based on the image algorithm.
  • the second environment detection information includes point cloud data about the target environment area.
  • in one embodiment, the pose data of the second detection device relative to the target environment area may be determined as follows: the point cloud data about the target environment area is processed based on the iterative closest point (ICP) algorithm to obtain the pose data of the second detection device relative to the target environment area.
  • the second detection device is a lidar
  • the second environment detection information includes point cloud data about the target environment area.
  • the point cloud data may be a frame of point cloud about the target environment area.
  • the movable platform may use ICP to process the point cloud, obtaining the world coordinates and rotation angle of the lidar relative to the target environment area at the moment the point cloud was collected; these can be expressed as a translation matrix (t2) and a rotation matrix (R2), respectively.
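The alignment step at the core of ICP can be illustrated in closed form for 2D point sets with known correspondences. Full ICP re-estimates the correspondences and iterates; this toy example and its values are illustrative, not from the patent:

```python
import math

def align_2d(src, dst):
    """Closed-form rigid alignment of matched 2D point sets.

    Returns the rotation angle and translation that map src onto dst
    (the inner alignment step of ICP).
    """
    n = len(src)
    cs = [sum(p[i] for p in src) / n for i in (0, 1)]  # src centroid
    cd = [sum(p[i] for p in dst) / n for i in (0, 1)]  # dst centroid
    # Accumulate cross-covariance terms of the centred point sets
    sxx = sxy = syx = syy = 0.0
    for (ax, ay), (bx, by) in zip(src, dst):
        ax, ay = ax - cs[0], ay - cs[1]
        bx, by = bx - cd[0], by - cd[1]
        sxx += ax * bx; sxy += ax * by
        syx += ay * bx; syy += ay * by
    theta = math.atan2(sxy - syx, sxx + syy)
    c, s = math.cos(theta), math.sin(theta)
    t = [cd[0] - (c * cs[0] - s * cs[1]), cd[1] - (s * cs[0] + c * cs[1])]
    return theta, t

# A point set rotated by 30 degrees and shifted by (1, 2):
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
ang = math.radians(30)
c, s = math.cos(ang), math.sin(ang)
dst = [(c * x - s * y + 1.0, s * x + c * y + 2.0) for x, y in src]

theta, t = align_2d(src, dst)
print(round(math.degrees(theta), 3), [round(v, 3) for v in t])  # → 30.0 [1.0, 2.0]
```

The recovered angle and translation play the role of the rotation matrix (R2) and translation matrix (t2) of the lidar relative to the target environment area.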
  • when the first environmental detection information includes point cloud data about the target environmental area, the pose data of the first detection device relative to the target environmental area may likewise be determined by processing the point cloud data about the target environment area based on ICP.
  • in step S303, the target external parameters between the first detection device and the second detection device are determined based on the pose data of the first detection device relative to the target environment area and the pose data of the second detection device relative to the target environment area.
  • the pose data of the first detection device relative to the target environment area includes first position data and first pose data of the first detection device relative to the target environment area
  • the pose data of the second detection device relative to the target environment area includes The second position data and the second attitude data of the second detection device relative to the target environment area
  • the target external parameters may be the displacement and rotation parameters between the first detection device and the second detection device
  • the first position data and the first attitude data may be, respectively, the position coordinate S1 and the rotation angle θ1 when the first detection device detects the target environment area, and the second position data and the second attitude data may be, respectively, the position coordinate S2 and the rotation angle θ2 when the second detection device detects the target environment area.
  • the movable platform can determine the difference between the position coordinate S1 and the position coordinate S2 as the displacement between the first detection device and the second detection device, and determine the difference between the rotation angle θ1 and the rotation angle θ2 as the rotation parameter between the first detection device and the second detection device.
  • the above-mentioned external target parameter may be a translation matrix and a rotation matrix between the first detection device and the second detection device.
  • the first position data and the first attitude data may be, respectively, the first translation matrix and the first rotation matrix relative to the target environment area when the first detection device detects the target environment area, and the second position data and the second attitude data may be, respectively, the second translation matrix and the second rotation matrix relative to the target environment area when the second detection device detects the target environment area.
  • the movable platform can calculate the translation matrix between the first detection device and the second detection device based on the first translation matrix and the second translation matrix, and calculate the rotation matrix between the first detection device and the second detection device based on the first rotation matrix and the second rotation matrix.
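One way such a computation could look, assuming each pose maps device coordinates into the target-area frame (a 2D sketch; the poses and the frame convention are hypothetical, as the patent does not fix one):

```python
import math

def rot2d(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def transpose(R):
    return [[R[0][0], R[1][0]], [R[0][1], R[1][1]]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]

def relative_extrinsic(R1, t1, R2, t2):
    """Extrinsic (R, t) mapping first-device coordinates into
    second-device coordinates, given each device's pose (Ri, ti)
    relative to the same target environment area.

    If p_area = R1 p1 + t1 and p_area = R2 p2 + t2, then
    p2 = R2^T R1 p1 + R2^T (t1 - t2).
    """
    R2t = transpose(R2)
    R = matmul(R2t, R1)
    d = [t1[0] - t2[0], t1[1] - t2[1]]
    return R, matvec(R2t, d)

# Hypothetical poses relative to the same target environment area
R1, t1 = rot2d(0.0), [0.0, 0.0]
R2, t2 = rot2d(math.pi / 2), [1.0, 0.0]
R, t = relative_extrinsic(R1, t1, R2, t2)
angle = math.atan2(R[1][0], R[0][0])
print(round(math.degrees(angle), 3), [round(v, 3) + 0.0 for v in t])  # → -90.0 [0.0, 1.0]
```

The resulting (R, t) is the target external parameter pair: it no longer references the target environment area, only the two devices.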
  • the movable platform can determine a first external parameter between the first detection device and the second detection device based on the pose data of the first detection device relative to the target environment area and the pose data of the second detection device relative to the target environment area. Further, a second external parameter between the first detection device and the second detection device may be determined based on the pose data of the first detection device relative to a reference environment area and the pose data of the second detection device relative to the reference environment area, and data processing is then performed on the first external parameter and the second external parameter to obtain the target external parameters between the first detection device and the second detection device.
  • the above-mentioned specific implementation manner of performing data processing on the first external parameter and the second external parameter to obtain the target external parameter between the first detection device and the second detection device may be: the first external parameter and the second external parameter Calculate the average, and determine the obtained average value as the external target parameter between the first detection device and the second detection device.
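The averaging step might be sketched as below. This is an illustrative assumption in which each external-parameter estimate is a (displacement, rotation angle) pair, with the angle averaged on the unit circle so that wrap-around (e.g. 359° and 1°) does not skew the mean; the function name is hypothetical:

```python
import numpy as np

def average_extrinsics(estimates):
    """Average several (displacement, rotation_angle) extrinsic
    estimates, e.g. the first and second external parameters."""
    disps = np.array([d for d, _ in estimates])
    angles = np.array([a for _, a in estimates])
    mean_disp = disps.mean(axis=0)
    # Average the angle on the unit circle rather than arithmetically.
    mean_angle = np.arctan2(np.sin(angles).mean(), np.cos(angles).mean())
    return mean_disp, mean_angle
```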
  • the above-mentioned target environment area and the reference environment area may be different.
  • the target environment area is a part of the environment detection area corresponding to the movable platform, and the reference environment area is another part of the environment detection area.
  • For example, suppose the environment detection area corresponding to this movement of the movable platform includes environmental area 1 and environmental area 2. If the target environment area is environmental area 1, then the reference environment area can be environmental area 2.
  • the first detection device and the second detection device both detected the environmental area 1 and the environmental area 2 at least once at different times.
  • the above-mentioned target environmental area and the reference environmental area may be the same.
  • the first detection device and the second detection device both detect the target environment area at least twice at different times.
  • the first detection device and the second detection device both detected the target environment area twice at different times.
  • the specific detection time is shown in Table 1.
  • The movable platform can determine the first external parameter between the first detection device and the second detection device based on the pose data of the first detection device relative to the target environment area at the 5th minute and the pose data of the second detection device relative to the target environment area at the 10th minute; it can determine the second external parameter between the first detection device and the second detection device based on the pose data of the first detection device relative to the target environment area at the 20th minute and the pose data of the second detection device relative to the target environment area at the 25th minute.
  • Data processing is then performed on the first external parameter and the second external parameter to obtain the target external parameter between the first detection device and the second detection device.
  • Table 1:

    Detection time (minutes)    Detection device
    5th minute                  First detection device
    10th minute                 Second detection device
    20th minute                 First detection device
    25th minute                 Second detection device
  • The movable platform can determine, according to the first environmental detection information detected by the first detection device and the second environmental detection information detected by the second detection device, the pose data of the first detection device and of the second detection device relative to the target environment area, respectively, and determine the target external parameter between the first detection device and the second detection device according to the pose data.
  • Such an external parameter calibration method does not depend on special calibration equipment or a specific calibration environment, and can improve the flexibility and efficiency of calibrating the external parameters of the detection devices.
  • Figure 4 is a schematic flow chart of another method for calibrating external parameters of a detection device according to an embodiment of the present invention.
  • The method of this embodiment of the present invention can be executed by a movable platform, at different positions of which at least a first detection device and a second detection device are arranged, the first detection device and the second detection device being respectively used to collect environmental detection information.
  • The movable platform can obtain the first environmental detection information detected by the first detection device and the second environmental detection information detected by the second detection device in step S401. Further, in step S402, the pose data of the first detection device relative to the target environment area is determined according to the first environment detection information, and the pose data of the second detection device relative to the target environment area is determined according to the second environment detection information.
  • For the specific implementation of step S401 to step S402, reference may be made to the related description of step S301 to step S302 in the foregoing embodiment, which will not be repeated here.
  • The movement trajectory of the first detection device can be acquired in step S403, and the target external parameters between the first detection device and the second detection device are calculated according to the movement trajectory, the pose data of the first detection device relative to the target environment area, and the pose data of the second detection device relative to the target environment area.
  • Before the movable platform obtains the movement trajectory of the first detection device, it can obtain the respective second environmental detection information detected by the first detection device at different positions, each piece of second environmental detection information including information about the target environment area. Further, based on the respective second environmental detection information, the relative translation and relative rotation of the first detection device between the different positions are determined, and the movement trajectory of the first detection device is then determined based on the relative translation and relative rotation.
  • Specifically, the first detection device can be called to observe the same environment area (such as the target environment area) at different positions and rotation angles within a relatively short period of time; by comparing the corresponding second environment detection information of the first detection device, the relative translation and relative rotation of the first detection device between the above-mentioned different positions are determined, and these relative translations and relative rotations are accumulated to determine the movement trajectory of the first detection device.
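The accumulation of relative translations and rotations into a movement trajectory can be sketched in the plane as a simplified 2-D odometry chain; the (dx, dy, dθ) parameterisation and the function name are illustrative assumptions, not the patent's notation:

```python
import numpy as np

def accumulate_trajectory(relative_motions, start=(0.0, 0.0, 0.0)):
    """Chain per-step (dx, dy, dtheta) increments, each expressed in
    the device's own frame at that step, into global (x, y, theta)
    poses forming the movement trajectory."""
    x, y, th = start
    trajectory = [(x, y, th)]
    for dx, dy, dth in relative_motions:
        # Rotate the body-frame increment into the global frame.
        x += dx * np.cos(th) - dy * np.sin(th)
        y += dx * np.sin(th) + dy * np.cos(th)
        th += dth
        trajectory.append((x, y, th))
    return trajectory
```

For instance, moving forward one unit while turning 90°, then forward one more unit, ends at (1, 1) facing 90°.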
  • the movable platform is further provided with a third detection device, which includes an inertial sensor.
  • After the movable platform determines the relative translation and relative rotation of the first detection device between different positions, it may determine the acceleration and angular velocity of the first detection device based on that relative translation and relative rotation, obtain the acceleration and angular velocity of the inertial sensor, and compare the acceleration and angular velocity of the inertial sensor with the acceleration and angular velocity of the first detection device to obtain the target external parameters between the first detection device and the inertial sensor.
  • the inertial sensor can output the measured acceleration and angular velocity in real time.
  • The acceleration of the first detection device can be determined by a second-order difference of its position (that is, of the relative translation of the first detection device between different positions), and the angular velocity of the first detection device can be determined by a first-order difference of its attitude; in this way, the acceleration and angular velocity of the first detection device are obtained.
  • By comparing them with the acceleration and angular velocity output by the inertial sensor, the relative position and posture between the inertial sensor and the first detection device, that is, the target external parameter, can be obtained.
  • The movable platform can determine, according to the first environmental detection information detected by the first detection device and the second environmental detection information detected by the second detection device, the pose data of the first detection device and of the second detection device relative to the target environment area, respectively, and determine the target external parameter between the first detection device and the second detection device according to the pose data.
  • Such an external parameter calibration method does not depend on special calibration equipment or a specific calibration environment, and can improve the flexibility and efficiency of calibrating the external parameters of the detection devices.
  • FIG. 5 is a schematic flowchart of another method for calibrating external parameters of a detection device according to an embodiment of the present invention.
  • The method of this embodiment of the present invention may be executed by a movable platform, at different positions of which detection devices are arranged; the detection devices include a first detection device and a third detection device, and the third detection device includes an inertial sensor.
  • The relative translation and relative rotation of the first detection device between different positions at which it detected the target environment area can be determined in step S501, and the acceleration and angular velocity of the first detection device are determined based on that relative translation and relative rotation.
  • Determining the relative translation and relative rotation of the first detection device between different positions at which it detected the target environment area includes: when the movable platform is moving, calling the first detection device to detect the target environment area in different positions and attitudes, and acquiring each piece of second environment detection information detected by the first detection device at the different positions, each piece of second environment detection information including information about the target environment area. Further, based on the respective second environment detection information, the relative translation and relative rotation of the first detection device between the different positions are determined.
  • the target environmental area is environmental area 1 as shown in Fig. 1.
  • Suppose the movable platform detects the same environmental area 1 at three positions A, B, and C during its movement, and each piece of second environment detection information detected by the first detection device at the different positions records the detection time and position coordinates corresponding to each position, as shown in Table 2.
  • The relative translation S 2 -S 1 and relative rotation θ 2 -θ 1 of the first detection device between detection position A and detection position B can be calculated based on the above-mentioned respective second environmental detection information.
  • By taking the second-order difference of the position of the first detection device, the acceleration of the first detection device is determined; correspondingly, the angular velocity of the first detection device can be determined by a first-order difference of the attitude of the first detection device.
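The two differencing operations can be written out as follows; this is an illustrative numpy sketch assuming uniformly spaced samples and a scalar attitude angle, whereas a real implementation would use the actual detection timestamps and full 3-D rotations:

```python
import numpy as np

def numeric_rates(positions, angles, dt):
    """Estimate acceleration by a second-order difference of position,
    and angular velocity by a first-order difference of attitude,
    for samples taken at a fixed interval dt."""
    positions = np.asarray(positions, dtype=float)
    angles = np.asarray(angles, dtype=float)
    accel = (positions[2:] - 2.0 * positions[1:-1] + positions[:-2]) / dt ** 2
    omega = (angles[1:] - angles[:-1]) / dt
    return accel, omega
```

Matching these estimates against the acceleration and angular velocity reported by the inertial sensor is what allows the external parameter between the first detection device and the inertial sensor to be estimated.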
  • The acceleration and angular velocity of the third detection device are obtained in step S502, and in step S503 the acceleration and angular velocity of the third detection device are compared with the acceleration of the first detection device and the angular velocity of the first detection device to obtain the target external parameters between the first detection device and the third detection device.
  • Specifically, the inertial sensor (that is, the third detection device) can output the measured acceleration and angular velocity to the movable platform in real time, and the movable platform can store the acceleration and angular velocity in a preset area after receiving them.
  • In step S502, the acceleration and angular velocity measured by the inertial sensor can be obtained from the foregoing preset area.
  • In this way, the movable platform does not rely on special calibration equipment or a specific calibration environment, and can calibrate the external parameters between the first detection device and the inertial sensor more efficiently and flexibly.
  • the embodiment of the present invention also provides an external parameter calibration device of the detection device as shown in FIG. 6.
  • The calibration device may be configured on, but is not limited to, a movable platform, at different positions of which at least a first detection device and a second detection device are configured, the first detection device and the second detection device being respectively used to collect environmental detection information. The external parameter calibration device includes:
  • The obtaining module 60 is configured to obtain first environmental detection information detected by the first detection device, and obtain second environmental detection information detected by the second detection device, where the first environmental detection information and the second environmental detection information both include information about a target environment area, the target environment area being a part of the environment detection area corresponding to the movable platform;
  • The processing module 61 is configured to determine the pose data of the first detection device relative to the target environment area according to the first environment detection information, and determine the pose data of the second detection device relative to the target environment area according to the second environment detection information;
  • The processing module 61 is further configured to determine the target external parameter between the first detection device and the second detection device based on the pose data of the first detection device relative to the target environment area and the pose data of the second detection device relative to the target environment area.
  • the first detection device is any one of target-type sensors, and the target-type sensor includes an image sensor and a perception sensor.
  • When the second environment detection information includes image data about the target environment area, the processing module 61 is specifically configured to process the image data about the target environment area based on an image algorithm to obtain the pose data of the second detection device relative to the target environment area.
  • When the second environment detection information includes point cloud data about the target environment area, the processing module 61 is specifically configured to process the point cloud data about the target environment area based on an iterative closest point algorithm to obtain the pose data of the second detection device relative to the target environment area.
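The iterative closest point algorithm referred to here alternates between matching points and solving a point-to-point alignment subproblem. One standard closed-form solution of that subproblem, the Kabsch/SVD method, is sketched below as a hypothetical illustration; the patent does not prescribe this particular implementation:

```python
import numpy as np

def best_fit_transform(src, dst):
    """Find the rotation R and translation t minimising the distance
    between matched 3-D point sets: dst ~ (R @ src.T).T + t.  This is
    the alignment step performed inside each ICP iteration."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```

With exact correspondences, the function recovers the rigid transform between the two point sets in one shot; inside ICP it is applied repeatedly as the nearest-neighbour matches are refined.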
  • The processing module 61 is specifically configured to determine the first external parameter between the first detection device and the second detection device based on the pose data of the first detection device relative to the target environment area and the pose data of the second detection device relative to the target environment area; determine the second external parameter between the first detection device and the second detection device based on the pose data of the first detection device relative to the reference environment area and the pose data of the second detection device relative to the reference environment area; and perform data processing on the first external parameter and the second external parameter to obtain the target external parameter between the first detection device and the second detection device.
  • the target extrinsic parameter is a translation matrix and a rotation matrix between the first detection device and the second detection device.
  • the first detection device and the second detection device detect the target environment area at different times.
  • The processing module 61 is specifically configured to obtain the movement trajectory of the first detection device, and calculate the target external parameter between the first detection device and the second detection device according to the movement trajectory, the pose data of the first detection device relative to the target environment area, and the pose data of the second detection device relative to the target environment area.
  • The acquisition module 60 is further configured to acquire each piece of second environmental detection information detected by the first detection device at different positions, each piece of second environmental detection information including information about the target environment area; the processing module 61 is further configured to determine, based on the respective second environmental detection information, the relative translation and relative rotation of the first detection device between the different positions, and to determine the movement trajectory of the first detection device based on the relative translation and the relative rotation.
  • In an embodiment, the movable platform is further provided with a third detection device, the third detection device includes an inertial sensor, and the processing module 61 is further configured to determine the acceleration and angular velocity of the first detection device based on the relative translation and relative rotation of the first detection device between the different positions, obtain the acceleration and angular velocity of the inertial sensor, and compare them to obtain the target external parameter between the first detection device and the inertial sensor.
  • When the detection device includes a first detection device and a third detection device, and the third detection device includes an inertial sensor, the processing module 61 is further configured to determine the relative translation and relative rotation of the first detection device between different positions at which it detected the target environment area, and to determine the acceleration and angular velocity of the first detection device based on that relative translation and relative rotation;
  • the acquisition module 60 is also used to acquire the acceleration and angular velocity of the third detection device;
  • The processing module 61 is also used to compare the acceleration of the third detection device, the angular velocity of the third detection device, the acceleration of the first detection device, and the angular velocity of the first detection device to obtain the target external parameter between the first detection device and the third detection device.
  • The processing module 61 is specifically configured to call the first detection device to detect the target environment area in different positions and postures when the movable platform is moving, acquire each piece of second environmental detection information detected by the first detection device at the different positions, each piece including information about the target environment area, and determine, based on the respective second environmental detection information, the relative translation and relative rotation of the first detection device between the different positions.
  • FIG. 7 is a schematic block diagram of the structure of a movable platform according to an embodiment of the present invention.
  • A first detection device and a second detection device are arranged at different positions of the movable platform, the first detection device and the second detection device being respectively used to collect environmental detection information, and the movable platform includes a processor and a communication interface.
  • the mobile platform may include a processor 70, a communication interface 71, and a memory 72.
  • The processor 70, the communication interface 71, and the memory 72 are connected by a bus, and the memory 72 is used to store program instructions and environmental detection information.
  • The memory 72 may include a volatile memory, such as a random-access memory (RAM); the memory 72 may also include a non-volatile memory, such as a flash memory or a solid-state drive (SSD); the memory 72 may also be a double data rate synchronous dynamic random-access memory (DDR SDRAM); the memory 72 may also include a combination of the foregoing types of memories.
  • The memory 72 is used to store a computer program including program instructions, and the processor 70 is configured to execute the following when the program instructions are invoked:
  • Obtain the first environment detection information detected by the first detection device, and obtain the second environment detection information detected by the second detection device, where the first environment detection information and the second environment detection information both include information about a target environment area, the target environment area being a part of the environment detection area corresponding to the movable platform;
  • Determine the pose data of the first detection device relative to the target environment area according to the first environment detection information, and determine the pose data of the second detection device relative to the target environment area according to the second environment detection information;
  • Determine the target external parameter between the first detection device and the second detection device based on the pose data of the first detection device relative to the target environment area and the pose data of the second detection device relative to the target environment area.
  • the first detection device is any one of target-type sensors, and the target-type sensor includes an image sensor and a perception sensor.
  • When the second environment detection information includes image data about the target environment area, the processor 70 is specifically configured to process the image data about the target environment area based on an image algorithm to obtain the pose data of the second detection device relative to the target environment area.
  • When the second environment detection information includes point cloud data about the target environment area, the processor 70 is further specifically configured to process the point cloud data about the target environment area based on an iterative closest point algorithm to obtain the pose data of the second detection device relative to the target environment area.
  • The processor 70 is further specifically configured to determine the first external parameter between the first detection device and the second detection device based on the pose data of the first detection device relative to the target environment area and the pose data of the second detection device relative to the target environment area; determine the second external parameter between the first detection device and the second detection device based on the pose data of the first detection device relative to the reference environment area and the pose data of the second detection device relative to the reference environment area; and perform data processing on the first external parameter and the second external parameter to obtain the target external parameter between the first detection device and the second detection device.
  • the target extrinsic parameter is a translation matrix and a rotation matrix between the first detection device and the second detection device.
  • the first detection device and the second detection device detect the target environment area at different times.
  • The processor 70 is further specifically configured to obtain the movement trajectory of the first detection device, and calculate the target external parameters between the first detection device and the second detection device according to the movement trajectory, the pose data of the first detection device relative to the target environment area, and the pose data of the second detection device relative to the target environment area.
  • The processor 70 is further configured to obtain each piece of second environmental detection information detected by the first detection device at different positions, each piece of second environmental detection information including information about the target environment area; determine, based on the respective second environmental detection information, the relative translation and relative rotation of the first detection device between the different positions; and determine the movement trajectory of the first detection device based on the relative translation and the relative rotation.
  • When the movable platform is further provided with a third detection device including an inertial sensor, the processor 70 is further configured to determine the acceleration and angular velocity of the first detection device based on the relative translation and relative rotation of the first detection device between the different positions; obtain the acceleration and angular velocity of the inertial sensor; and compare the acceleration of the inertial sensor, the angular velocity of the inertial sensor, the acceleration of the first detection device, and the angular velocity of the first detection device to obtain the target external parameter between the first detection device and the inertial sensor.
  • When the detection device further includes a third detection device and the third detection device includes an inertial sensor, the processor 70 may also be configured to execute the following when the program instructions are invoked: determine the relative translation and relative rotation of the first detection device between different positions at which it detected the target environment area; determine the acceleration and angular velocity of the first detection device based on that relative translation and relative rotation; obtain the acceleration and angular velocity of the third detection device; and compare the acceleration of the third detection device, the angular velocity of the third detection device, the acceleration of the first detection device, and the angular velocity of the first detection device to obtain the target external parameter between the first detection device and the third detection device.
  • The processor 70 is further specifically configured to: when the movable platform is moving, call the first detection device to detect the target environment area in different positions and postures; obtain each piece of second environmental detection information detected by the first detection device at the different positions, each piece including information about the target environment area; and determine, based on the respective second environmental detection information, the relative translation and relative rotation of the first detection device between the different positions.
  • the specific implementation of the above-mentioned processor 70 may refer to the description of related content in the embodiment corresponding to FIG. 3, FIG. 4, or FIG. 5.
  • The program can be stored in a computer-readable storage medium, and when executed, it may include the procedures of the above-mentioned method embodiments.
  • the storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.


Abstract

A method for calibrating external parameters of detection devices, a calibration apparatus, and a movable platform (10). At least a first detection device and a second detection device are arranged at different positions of the movable platform (10). The method includes: obtaining first environment detection information detected by the first detection device and second environment detection information detected by the second detection device (S301); determining, from the first environment detection information detected by the first detection device and the second environment detection information detected by the second detection device, the pose data of the first detection device and of the second detection device relative to a target environment area, respectively (S302); and determining the target external parameter between the first detection device and the second detection device according to the pose data (S303). The method does not depend on special calibration equipment or a specific calibration environment, and can improve the flexibility and efficiency of calibrating the external parameters of detection devices.

Description

Method and Apparatus for Calibrating External Parameters of a Detection Device, and Movable Platform

Technical Field

The embodiments of the present invention relate to the technical field of mobile platforms, and in particular to a method and apparatus for calibrating external parameters of a detection device, and a movable platform.

Background Art

External parameter calibration, extrinsic calibration for short, is used to compute the transformation between the positions and orientations of multiple different detection devices. Commonly used detection devices include, but are not limited to, inertial measurement units, cameras, and lidars. After these detection devices are calibrated, the displacement and rotation parameters between any two of them, namely the external parameters, can be computed. With these external parameters, the position and attitude of any two different detection devices can be converted into each other.

At present, calibration methods for the detection devices on a movable platform usually depend on special environment information: calibration targets must be arranged in the movable platform's operating environment in advance, so that external parameter calibration can only be carried out in a specially constructed area and cannot be performed at other sites, which greatly limits the flexibility of external parameter calibration.
Summary of the Invention

The embodiments of the present invention provide a method and apparatus for calibrating external parameters of a detection device, a mobile platform, and a storage medium, with which external parameter calibration can be completed conveniently.

In one aspect, an embodiment of the present invention provides a method for calibrating external parameters of a detection device. The method is applicable to a movable platform, at different positions of which at least a first detection device and a second detection device are arranged, the first detection device and the second detection device being respectively used to collect environment detection information. The method includes:

obtaining first environment detection information detected by the first detection device, and obtaining second environment detection information detected by the second detection device, where the first environment detection information and the second environment detection information both include information about a target environment area, the target environment area being a part of the environment detection area corresponding to the movable platform;

determining the pose data of the first detection device relative to the target environment area according to the first environment detection information, and determining the pose data of the second detection device relative to the target environment area according to the second environment detection information;

determining the target external parameter between the first detection device and the second detection device based on the pose data of the first detection device relative to the target environment area and the pose data of the second detection device relative to the target environment area.
In another aspect, an embodiment of the present invention provides an apparatus for calibrating external parameters of a detection device. The apparatus is arranged on a movable platform, at different positions of which at least a first detection device and a second detection device are arranged, the first detection device and the second detection device being respectively used to collect environment detection information. The apparatus includes:

an obtaining module, configured to obtain first environment detection information detected by the first detection device and obtain second environment detection information detected by the second detection device, where the first environment detection information and the second environment detection information both include information about a target environment area, the target environment area being a part of the environment detection area corresponding to the movable platform;

a processing module, configured to determine the pose data of the first detection device relative to the target environment area according to the first environment detection information, and determine the pose data of the second detection device relative to the target environment area according to the second environment detection information;

the processing module being further configured to determine the target external parameter between the first detection device and the second detection device based on the pose data of the first detection device relative to the target environment area and the pose data of the second detection device relative to the target environment area.
In yet another aspect, an embodiment of the present invention provides a movable platform. At different positions of the movable platform, at least a first detection device and a second detection device are arranged, the first detection device and the second detection device being respectively used to collect environment detection information. The movable platform includes a processor and a communication interface connected to each other, where the communication interface is controlled by the processor to send and receive instructions, and the processor is configured to:

obtain first environment detection information detected by the first detection device, and obtain second environment detection information detected by the second detection device, where the first environment detection information and the second environment detection information both include information about a target environment area, the target environment area being a part of the environment detection area corresponding to the movable platform;

determine the pose data of the first detection device relative to the target environment area according to the first environment detection information, and determine the pose data of the second detection device relative to the target environment area according to the second environment detection information;

determine the target external parameter between the first detection device and the second detection device based on the pose data of the first detection device relative to the target environment area and the pose data of the second detection device relative to the target environment area.
In yet another aspect, an embodiment of the present invention provides another method for calibrating external parameters of a detection device. The method is applicable to a movable platform, at different positions of which detection devices are arranged; the detection devices include a first detection device and a third detection device, and the third detection device includes an inertial sensor. The method includes:

determining the relative translation and relative rotation of the first detection device between different positions at which it detected a target environment area;

determining the acceleration and angular velocity of the first detection device based on the relative translation and the relative rotation of the first detection device between the different positions;

obtaining the acceleration and angular velocity of the third detection device;

obtaining the target external parameter between the first detection device and the third detection device by comparing the acceleration of the third detection device, the angular velocity of the third detection device, the acceleration of the first detection device, and the angular velocity of the first detection device.
In yet another aspect, an embodiment of the present invention provides another apparatus for calibrating external parameters of a detection device. The apparatus is applicable to a movable platform, at different positions of which detection devices are arranged; the detection devices include a first detection device and a third detection device, and the third detection device includes an inertial sensor. The apparatus includes:

a processing module, configured to determine the relative translation and relative rotation of the first detection device between different positions at which it detected a target environment area;

the processing module being further configured to determine the acceleration and angular velocity of the first detection device based on the relative translation and the relative rotation of the first detection device between the different positions;

an obtaining module, configured to obtain the acceleration and angular velocity of the third detection device;

the processing module being further configured to obtain the target external parameter between the first detection device and the third detection device by comparing the acceleration of the third detection device, the angular velocity of the third detection device, the acceleration of the first detection device, and the angular velocity of the first detection device.
In yet another aspect, an embodiment of the present invention provides another movable platform, at different positions of which detection devices are arranged; the detection devices include a first detection device and a third detection device, and the third detection device includes an inertial sensor. The movable platform includes a processor and a communication interface, where the processor is configured to:

determine the relative translation and relative rotation of the first detection device between different positions at which it detected a target environment area;

determine the acceleration and angular velocity of the first detection device based on the relative translation and the relative rotation of the first detection device between the different positions;

obtain the acceleration and angular velocity of the third detection device;

obtain the target external parameter between the first detection device and the third detection device by comparing the acceleration of the third detection device, the angular velocity of the third detection device, the acceleration of the first detection device, and the angular velocity of the first detection device.

In yet another aspect, an embodiment of the present invention provides a computer storage medium that stores computer program instructions, which, when executed, are used to implement the above method for calibrating external parameters of a detection device.
本发明实施例中,可移动平台可以根据第一探测装置探测到的第一环境探测信息和第二探测装置探测到的第二环境探测信息分别确定第一探测装置和第二探测装置相对于目标环境区域的位姿数据,并根据该位姿数据确定第一探测装置与第二探测装置之间的目标外参数。采用这样的外参数标定方式,不依赖于特殊的标定设备和特定的标定环境,可以提高针对探测装置外参数标定的灵活性和高效性。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本发明实施例提供的一种探测装置的外参数标定的场景示意图;
图2是本发明实施例提供的另一种探测装置的外参数标定的场景示意图;
图3是本发明实施例提供的一种探测装置的外参数标定方法的流程示意图;
图4是本发明实施例提供的另一种探测装置的外参数标定方法的流程示意图;
图5是本发明实施例提供的又一种探测装置的外参数标定方法的流程示意图;
图6是本发明实施例提供的一种探测装置的外参数标定装置的结构示意图;
图7是本发明实施例提供的一种可移动平台的结构示意图。
具体实施方式
针对外参数的标定可以应用于多种领域,例如自动驾驶领域,针对自动驾驶车辆所挂载探测装置的外参数标定是自动驾驶车辆的研发和生产过程中的必要且关键环节。本发明实施例提出了一种探测装置的外参数标定方法,该方法用于对可移动平台的不同位置处配置的探测装置进行外参数标定,可以在可移动平台移动过程中实时针对不同位置处配置的探测装置进行外参数标定,也可以在可移动平台移动之前,预先对不同位置处配置的探测装置进行外参数标定,本发明对此不作具体限定。
其中,上述可移动平台可以为一些能够行驶在公共交通道路上的移动装置,例如自动驾驶车辆、智能电动车、滑板车、平衡车等车辆。可移动平台的不同位置处至少配置有第一探测装置和第二探测装置,该第一探测装置可以为目标类型的传感器中的任一个传感器,该目标类型的传感器包括图像传感器(例如但不限于摄像装置)或者感知传感器(例如但不限于激光雷达),第二探测装置可以为多个不同或者相同类型的传感器。
在一个实施例中，当可移动平台的不同位置处配置有N(N为大于1的整数)个目标类型的探测装置时，可以将N个探测装置中的任一个确定为第一探测装置，其它的N-1个探测装置确定为第二探测装置。示例性地，假设可移动平台的不同位置处设置有4个探测装置，分别为双目摄像头A、双目摄像头B、激光雷达C和激光雷达D，可以预先选定某一位置的某一个传感器作为主传感器(即第一探测装置)。例如，可以将双目摄像头A确定为主传感器，也可以将双目摄像头B确定为主传感器，还可以将激光雷达C或D确定为主传感器，本申请实施例对此不作具体限定。
示例性地,参见图1,在可移动平台10外部的前后左右各侧均设置有一个探测装置,其中,可以将设置于可移动平台10前侧的探测装置选取为第一探测装置,将设置于可移动平台10后侧、左侧和右侧的探测装置选取为第二探测装置。可移动平台10在移动过程中,可连续采集所有探测装置(包括但不限于惯性测量单元,所有相机,所有激光雷达等)的探测数据并缓存在内存中,并调用第一探测装置在较短的时间内,以不同的位置和旋转观察到相同的环境区域(例如图1中的环境区域1和环境区域2),通过对比探测到相同的环境区域时第一探测装置的探测数据,可以确定出第一探测装置在上述两个不同位置之间的相对平移和相对旋转,积累这些相对平移和相对旋转,可以确定出第一探测装置的运动轨迹。其中,在移动过程中需保证第一探测装置和各个第二探测装置能够探测到相同的环境区域至少一次。
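上段所述"积累相对平移和相对旋转，确定第一探测装置的运动轨迹"的过程，可以用如下示意代码说明。该代码仅是在假定位姿约定 X_cam = R·X_world + t 下的最小草图，其中的函数名与数据组织方式均为示例性假设，并非本发明实施例的具体实现：

```python
import numpy as np

def accumulate_trajectory(initial_pose, relative_motions):
    """由相邻观测之间的相对旋转/相对平移, 递推出每一时刻的全局位姿。
    位姿约定(示例假设): X_cam = R @ X_world + t。"""
    R, t = initial_pose
    trajectory = [(R.copy(), t.copy())]
    for R_rel, t_rel in relative_motions:
        # 帧k到帧k+1的相对运动: X_{k+1} = R_rel @ X_k + t_rel
        R = R_rel @ R
        t = R_rel @ t + t_rel
        trajectory.append((R.copy(), t.copy()))
    return trajectory
```

每求得一对相对平移/相对旋转，即可向轨迹末尾追加一个新位姿，轨迹由此逐步积累得到。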
在可移动平台10移动完毕，所有探测数据采集完成后，可移动平台10可以自动检查所有采集到的探测数据，从中筛选出探测到相同环境区域（即目标环境区域）时，第一探测装置所采集到的第一探测数据（即第一环境探测信息）和第二探测装置所采集到的第二探测数据（即第二环境探测信息），该第一探测数据和第二探测数据均可以包括点云数据（例如激光雷达采集到的一帧点云）和/或图像数据（例如相机采集到的一帧图片）。在本申请实施例中，对于所有探测到某一环境区域的第二探测装置的探测数据，均可以查找到不同时刻下探测到该某一环境区域的第一探测装置对应的探测数据。其中，该目标环境区域为可移动平台对应环境探测区域中的部分区域，例如为图1中的环境区域1或者环境区域2。
进一步地，根据第一探测数据确定第一探测装置相对于目标环境区域的位姿数据，根据第二探测数据确定第二探测装置相对于目标环境区域的位姿数据，进而基于第一探测装置相对于目标环境区域的位姿数据和第二探测装置相对于目标环境区域的位姿数据，确定第一探测装置与第二探测装置之间的目标外参数。可以看出，采用这样的外参数标定方式，不依赖于特殊的标定设备和特定的标定环境，也不需要传感器之间存在共享视野，可以更加灵活高效地实现针对探测装置的外参数标定。
示例性地,再参见图2,可移动平台10的四周配置有4个不同的探测装置,可移动平台10的前侧配置有1个第一探测装置,其余各侧各配置有一个第二探测装置,在不同时刻下第一探测装置和位于移动平台右侧的第二探测装置均探测到相同的目标环境区域:环境区域1。这种情况下,可以获取第一探测装置探测到环境区域1时的第一探测数据,位于移动平台右侧的第二探测装置探测到环境区域1时的第二探测数据,进而基于该第一探测数据和第二探测数据分别确定出第一探测装置相对于目标环境区域的位姿数据,以及该位于移动平台右侧的第二探测装置相对于目标环境区域的位姿数据。进一步地,根据第一探测装置相对于目标环境区域的位姿数据和上述位于移动平台右侧的第二探测装置相对于目标环境区域的位姿数据,计算得到第一探测装置与该位于移动平台右侧的第二探测装置之间的目标外参数。
其中，第一探测装置相对于目标环境区域的位姿数据包括第一探测装置相对于目标环境区域的第一位置数据和第一姿态数据，第二探测装置相对于目标环境区域的位姿数据包括第二探测装置相对于目标环境区域的第二位置数据和第二姿态数据。在一个实施例中，可以将第一位置数据与第二位置数据的差值，以及第一姿态数据和第二姿态数据的差值确定为第一探测装置与第二探测装置之间的目标外参数。该目标外参数包括第一探测装置与第二探测装置之间的位移和旋转参数。
其中，图1和图2中的可移动平台10仅为举例说明，在其他例子中，图1和图2中所示的可移动平台也可以为其它移动设备，还可以挂载在竞技机器人、无人机、无人驾驶汽车等移动设备上，本发明对此不作限定。
参见图3,图3是本发明实施例提供的一种探测装置的外参数标定方法的流程示意图,本发明实施例的所述方法可以由可移动平台来执行,在可移动平台的不同位置处至少配置有第一探测装置和第二探测装置,该第一探测装置和第二探测装置分别用于采集得到环境探测信息。
在图3所示的探测装置的外参数标定方法中，可移动平台可以在S301中获取第一探测装置探测到的第一环境探测信息，并获取第二探测装置探测到的第二环境探测信息。其中，第一环境探测信息和第二环境探测信息中均包括关于目标环境区域的信息，该目标环境区域是可移动平台对应环境探测区域中的部分环境区域。在一个实施例中，第一探测装置和第二探测装置在不同时刻下探测到该目标环境区域。
可移动平台在移动过程中可以通过第一探测装置和第二探测装置探测到多个环境区域,多个环境区域组成了移动平台本次移动过程对应的环境探测区域,目标环境区域为多个环境区域中的任一个。示例性地,参见图1所示,可移动平台本次移动过程对应的环境探测区域包括环境区域1和环境区域2,目标环境区域是该环境探测区域中的部分环境区域,例如可以为环境区域1或者环境区域2。
在一个实施例中，可移动平台在移动过程中可调用第一探测装置和第二探测装置分别采集环境探测信息，并将采集到的环境探测信息存储在预设存储区域。进一步地，在可移动平台移动结束，所有环境探测信息采集完成后，可移动平台可以自动检查所有采集到的环境探测信息，从所有的环境探测信息中获取探测到同一个目标环境区域时，第一探测装置探测到的第一环境探测信息和第二探测装置探测到的第二环境探测信息。
可移动平台获取到第一探测装置探测到的第一环境探测信息和第二探测装置探测到的第二环境探测信息之后,在步骤S302中根据第一环境探测信息确定第一探测装置相对于目标环境区域的位姿数据,根据第二环境探测信息确定第二探测装置相对于目标环境区域的位姿数据。
其中,上述第一探测装置和第二探测装置均可以为图像传感器(例如摄像装置)或者感知传感器。相应地,第一探测装置探测到的第一环境探测信息和第二探测装置探测到的第二环境探测信息均可以包括关于目标环境区域的点云数据或者关于目标环境区域的图像数据。
其中，上述感知传感器例如可以为激光雷达，激光雷达可以获得场景的三维信息。其基本原理为主动对被探测对象发射激光脉冲信号，并接收其反射回来的激光脉冲信号，根据发射的激光脉冲信号和接收的反射回来的激光脉冲信号之间的时间差和激光脉冲信号的传播速度，计算被测对象的深度信息；根据激光雷达的发射方向，获得被测对象相对激光雷达的角度信息；结合前述深度信息和角度信息得到海量的探测点，该探测点的数据集称为点云，基于点云即可以重建被测对象相对激光雷达的空间三维信息。
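上段所述"由时间差计算深度，结合发射方向角得到三维探测点"的原理，可以用如下示意代码表达。其中函数名与角度约定（azimuth为水平方位角、elevation为俯仰角，单位弧度）均为本文示例性假设：

```python
import numpy as np

C = 299792458.0  # 光速 (m/s)

def lidar_point(t_emit, t_recv, azimuth, elevation):
    """由激光脉冲的发射/接收时间差计算深度,
    再结合发射方向角得到探测点在激光雷达坐标系下的三维坐标。"""
    depth = C * (t_recv - t_emit) / 2.0  # 往返时间除以2得到单程距离
    x = depth * np.cos(elevation) * np.cos(azimuth)
    y = depth * np.cos(elevation) * np.sin(azimuth)
    z = depth * np.sin(elevation)
    return np.array([x, y, z])
```

对一帧扫描中的每个脉冲重复上述计算，即得到该帧点云。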
在一个实施例中，假设第二环境探测信息包括关于目标环境区域的图像数据，根据第二环境探测信息确定第二探测装置相对于目标环境区域的位姿数据的具体实施方式可以为：基于图像算法对关于目标环境区域的图像数据进行处理，得到第二探测装置相对于目标环境区域的位姿数据。其中，第二探测装置相对于目标环境区域的位姿数据包括第二探测装置相对于目标环境区域的第二位置数据和第二姿态数据。该第二位置数据可以为第二探测装置相对于目标环境区域的世界坐标，第二姿态数据可以为第二探测装置相对于目标环境区域的旋转角度。
示例性地，第二探测装置为摄像头，第二环境探测信息包括关于目标环境区域的图像数据，该图像数据可以为关于目标环境区域的一帧图片J1，该图像算法可以为透视n点算法(Perspective-n-Point，PnP)。针对这种情况，可移动平台可以利用PnP结合图片J1中特征点在世界坐标系下的世界坐标和上述特征点在图片J1中的成像(即像素坐标)，求解出上述摄像头采集到图片J1时相对于目标环境区域的世界坐标以及旋转角度，可以分别用平移矩阵t1和旋转矩阵R1表示。
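作为参考，PnP问题存在多种求解方法（如EPnP、迭代优化等），下面给出一个基于直接线性变换(DLT)的最小示意实现，用于说明"由世界坐标与像素坐标求解相机的旋转矩阵和平移矩阵"的过程。函数名与实现细节均为示例性假设，工程中通常使用更稳健的PnP求解器：

```python
import numpy as np

def solve_pose_dlt(world_pts, pixel_pts, K):
    """由 n>=6 个非共面的 世界坐标-像素坐标 对应点和内参矩阵K,
    用DLT求解相机位姿(R, t), 满足 像素 ~ K(R X + t)。"""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, pixel_pts):
        P = [X, Y, Z, 1.0]
        A.append(P + [0.0] * 4 + [-u * p for p in P])
        A.append([0.0] * 4 + P + [-v * p for p in P])
    # 齐次最小二乘: 取最小奇异值对应的奇异向量作为投影矩阵
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P_mat = Vt[-1].reshape(3, 4)
    M = np.linalg.inv(K) @ P_mat          # = lambda * [R | t]
    scale = np.cbrt(np.linalg.det(M[:, :3]))  # det(lambda*R) = lambda^3
    M = M / scale
    U, _, Vt2 = np.linalg.svd(M[:, :3])
    R = U @ Vt2                           # 投影回正交矩阵, 得到旋转矩阵
    t = M[:, 3]
    return R, t
```

对图片J1中的特征点调用一次该求解过程，即可得到一组相对于目标环境区域的平移与旋转估计。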
与之相似的，假设第一环境探测信息包括关于目标环境区域的图像数据，根据第一环境探测信息确定第一探测装置相对于目标环境区域的位姿数据的具体实施方式可以为：基于图像算法对关于目标环境区域的图像数据进行处理，得到第一探测装置相对于目标环境区域的位姿数据。
在一个实施例中，假设第二环境探测信息包括关于目标环境区域的点云数据，根据第二环境探测信息确定第二探测装置相对于目标环境区域的位姿数据的具体实施方式可以为：基于迭代最近点算法(Iterative Closest Point，ICP)对关于目标环境区域的点云数据进行处理，得到第二探测装置相对于目标环境区域的位姿数据。
示例性地，第二探测装置为激光雷达，第二环境探测信息包括关于目标环境区域的点云数据，该点云数据可以为关于目标环境区域的一帧点云，可移动平台可以利用ICP对点云进行处理，求解出上述激光雷达采集到关于目标环境区域的那帧点云时，相对于目标环境区域的世界坐标以及旋转角度，可以分别用平移矩阵t2和旋转矩阵R2表示。
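ICP算法的核心是交替执行"最近点匹配"与"已知对应关系下的刚体变换求解"两步。下面给出后一步（Kabsch/SVD方法）的示意实现，完整的ICP在此基础上迭代即可收敛；函数名与实现均为示例性草图：

```python
import numpy as np

def align_kabsch(src, dst):
    """已知点对应关系时, 求使 dst ≈ R @ src + t 的刚体变换(Kabsch/SVD方法)。
    src/dst 均为 (n, 3) 点云, 行与行一一对应。"""
    c_src = src.mean(axis=0)
    c_dst = dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)       # 3x3 互协方差矩阵
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # 修正行列式符号, 防止得到反射
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t
```

在完整的ICP中，每轮先用最近邻搜索建立对应关系，再调用一次上述求解并更新源点云，直至变换收敛。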
与之相似的，假设第一环境探测信息包括关于目标环境区域的点云数据，根据第一环境探测信息确定第一探测装置相对于目标环境区域的位姿数据的具体实施方式可以为：基于ICP对关于目标环境区域的点云数据进行处理，得到第一探测装置相对于目标环境区域的位姿数据。
进一步地,可移动平台确定出第一探测装置和第二探测装置分别相对于目标环境区域的位姿数据后,可以在步骤S303中基于第一探测装置相对于目标环境区域的位姿数据和第二探测装置相对于目标环境区域的位姿数据,确定第一探测装置与第二探测装置之间的目标外参数。
其中,第一探测装置相对于目标环境区域的位姿数据包括第一探测装置相对于目标环境区域的第一位置数据和第一姿态数据,第二探测装置相对于目标环境区域的位姿数据包括第二探测装置相对于目标环境区域的第二位置数据和第二姿态数据。
在一个实施例中，目标外参数可以为第一探测装置与第二探测装置之间的位移和旋转参数，该第一位置数据和第一姿态数据可以分别为第一探测装置探测到目标环境区域时的位置坐标S1和旋转角度α1，第二位置数据和第二姿态数据可以分别为第二探测装置探测到目标环境区域时的位置坐标S2和旋转角度α2。这种情况下，可移动平台可以将位置坐标S1和位置坐标S2的差值确定为第一探测装置与第二探测装置之间的位移，将旋转角度α1和旋转角度α2的差值确定为第一探测装置与第二探测装置之间的旋转参数。
在一个实施例中,上述目标外参数可以为第一探测装置和第二探测装置之间的平移矩阵和旋转矩阵。该第一位置数据和第一姿态数据可以分别为第一探测装置探测到目标环境区域时相对于目标环境区域的第一平移矩阵和第一旋转矩阵,该第二位置数据和第二姿态数据可以分别为第二探测装置探测到目标环境区域时相对于目标环境区域的第二平移矩阵和第二旋转矩阵,进一步地,移动平台可以基于第一平移矩阵和第二平移矩阵计算出第一探测装置和第二探测装置之间的平移矩阵,基于第一旋转矩阵和第二旋转矩阵计算出第一探测装置和第二探测装置之间的旋转矩阵。
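上述"由两个装置各自的平移矩阵和旋转矩阵计算装置间外参数"的一种常见做法如下。此处采用本文假设的位姿约定 X_i = R_i·X_w + t_i（即世界点变换到装置i坐标系），仅为示意性草图，并非本发明实施例的唯一实现：

```python
import numpy as np

def relative_extrinsic(R1, t1, R2, t2):
    """由两个探测装置各自相对目标环境区域的位姿 (X_i = R_i @ X_w + t_i),
    计算把第二装置坐标系下的点变换到第一装置坐标系的外参数 (R_12, t_12):
    X_1 = R_12 @ X_2 + t_12。"""
    R12 = R1 @ R2.T
    t12 = t1 - R12 @ t2
    return R12, t12
```

该外参数与世界坐标系的选取无关，只取决于两个装置之间的相对安装关系。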
在一个实施例中,可移动平台可以基于第一探测装置相对于目标环境区域的位姿数据和第二探测装置相对于目标环境区域的位姿数据,确定第一探测装置与第二探测装置之间的第一外参数。进一步地,可以基于第一探测装置相对于参考环境区域的位姿数据和第二探测装置相对于参考环境区域的位姿数据,确定第一探测装置与第二探测装置之间的第二外参数,进而对第一外参数和第二外参数进行数据处理,得到第一探测装置与第二探测装置之间的目标外参数。
其中,上述对第一外参数和第二外参数进行数据处理,得到第一探测装置与第二探测装置之间的目标外参数的具体实施方式可以为:对第一外参数和第二外参数求平均,并将求得的平均值确定为第一探测装置与第二探测装置之间的目标外参数。
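上述"对第一外参数和第二外参数求平均"的做法，对平移部分可直接取算术平均；对旋转矩阵，一种常见的近似是先取矩阵平均、再经SVD投影回旋转矩阵（弦平均）。以下为示意性草图，函数名为示例性假设，且仅在各次估计之间差异较小时近似成立：

```python
import numpy as np

def average_extrinsics(params):
    """对多组外参数估计 (R_k, t_k) 求平均:
    平移取算术平均; 旋转取矩阵平均后经SVD投影回旋转矩阵(弦平均)。"""
    t_avg = np.mean([t for _, t in params], axis=0)
    M = np.mean([R for R, _ in params], axis=0)
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # 保证行列式为+1
    R_avg = U @ D @ Vt
    return R_avg, t_avg
```

对绕同一轴、角度相近的两次旋转估计，该平均结果即为角度取中值的旋转。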
在一个实施例中,上述目标环境区域与参考环境区域可以不同,该目标环境区域为可移动平台对应环境探测区域中的部分环境区域,则参考环境区域为环境探测区域中的另一部分环境区域。示例性地,参见图1所示,可移动平台本次移动过程对应的环境探测区域包括环境区域1和环境区域2,目标环境区域为环境区域1,那么参考环境区域可以为环境区域2,在本次可移动平台的移动过程中,第一探测装置和第二探测装置均在不同时刻下探测到环境区域1和环境区域2至少一次。
在一个实施例中,上述目标环境区域与参考环境区域可以相同。这种情况下,在本次可移动平台的移动过程中,第一探测装置和第二探测装置均在不同时刻下探测到目标环境区域至少两次。示例性地,在本次可移动平台的移动过程中第一探测装置和第二探测装置均在不同时刻下探测到目标环境区域两次,具体探测时间如表1所示。这种情况下,可移动平台可以基于第5分钟时第一探测装置相对于目标环境区域的位姿数据和第10分钟时第二探测装置相对于目标环境区域的位姿数据,确定第一探测装置与第二探测装置之间的第一外参数;基于第20分钟时第一探测装置相对于目标环境区域的位姿数据和第25分钟时第二探测装置相对于目标环境区域的位姿数据,确定第一探测装置与第二探测装置之间的第二外参数。进一步地,对第一外参数和第二外参数进行数据处理,得到第一探测装置与第二探测装置之间的目标外参数。
表1
探测时间(分钟) 探测装置
第5分钟 第一探测装置
第10分钟 第二探测装置
第20分钟 第一探测装置
第25分钟 第二探测装置
本发明实施例中,可移动平台可以根据第一探测装置探测到的第一环境探测信息和第二探测装置探测到的第二环境探测信息分别确定第一探测装置和第二探测装置相对于目标环境区域的位姿数据,并根据该位姿数据确定第一探测装置与第二探测装置之间的目标外参数。采用这样的外参数标定方式,不依赖于特殊的标定设备和特定的标定环境,可以提高针对探测装置外参数标定的灵活性和高效性。
参见图4,图4是本发明实施例提供的另一种探测装置的外参数标定方法的流程示意图,本发明实施例的所述方法可以由可移动平台来执行,在可移动平台的不同位置处至少配置有第一探测装置和第二探测装置,该第一探测装置和第二探测装置分别用于采集得到环境探测信息。
在图4所示的探测装置的外参数标定方法中,可移动平台可以在S401中获取第一探测装置探测到的第一环境探测信息,并获取第二探测装置探测到的第二环境探测信息。进一步地,在步骤S402中根据第一环境探测信息确定第一探测装置相对于目标环境区域的位姿数据,根据第二环境探测信息确定第二探测装置相对于目标环境区域的位姿数据。其中,步骤S401~步骤S402的具体实施方式可以参见上述实施例中步骤S301~步骤S302的相关描述,此处不再赘述。
进一步地,可移动平台确定出第一探测装置和第二探测装置分别相对于目标环境区域的位姿数据后,可以在步骤S403中获取第一探测装置的运动轨迹,并根据运动轨迹、第一探测装置相对于目标环境区域的位姿数据和第二探测装置相对于目标环境区域的位姿数据,计算第一探测装置与第二探测装置之间的目标外参数。
在一个实施例中,可移动平台获取第一探测装置的运动轨迹之前,可以获取第一探测装置在不同位置下探测到的各个第二环境探测信息,该各个第二环境探测信息均包括关于目标环境区域的信息,进一步地,基于各个第二环境探测信息,确定第一探测装置在不同位置之间的相对平移和相对旋转,进而基于该相对平移和相对旋转,确定出第一探测装置的运动轨迹。
示例性地,可移动平台在移动过程中,可调用第一探测装置在较短的时间内,以不同的位置和旋转角度观察到相同的环境区域(例如目标环境区域),通过对比第一探测装置在不同位置下探测到目标环境区域时,对应的第二环境探测信息,确定出第一探测装置在上述不同位置之间的相对平移和相对旋转,积累这些相对平移和相对旋转,可以确定出第一探测装置的运动轨迹。
在一个实施例中,可移动平台还设置了第三探测装置,该第三探测装置包括惯性传感器,可移动平台确定第一探测装置在不同位置之间的相对平移和相对旋转之后,可以基于第一探测装置在不同位置之间的相对平移和相对旋转,确定第一探测装置的加速度和角速度,获取惯性传感器的加速度和角速度,通过对比惯性传感器的加速度、惯性传感器的角速度、第一探测装置的加速度和第一探测装置的角速度,得到第一探测装置与惯性传感器之间的目标外参数。
示例性地,在可移动平台移动过程中,惯性传感器可以实时输出测量到的加速度与角速度。同样的,在运算第一探测装置的运动轨迹时,可以通过二阶差分第一探测装置的位置(即第一探测装置在不同位置之间的相对平移)和一阶差分第一探测装置的姿态(即第一探测装置在不同位置之间的相对旋转),获得第一探测装置的加速度与角速度。进一步地,通过比对惯性传感器的加速度、角速度和探测装置的加速度和角速度,可以获得惯性传感器和第一探测装置之间的相对位置和姿态,即目标外参数。
本发明实施例中,可移动平台可以根据第一探测装置探测到的第一环境探测信息和第二探测装置探测到的第二环境探测信息分别确定第一探测装置和第二探测装置相对于目标环境区域的位姿数据,并根据该位姿数据确定第一探测装置与第二探测装置之间的目标外参数。采用这样的外参数标定方式,不依赖于特殊的标定设备和特定的标定环境,可以提高针对探测装置外参数标定的灵活性和高效性。
参见图5,图5是本发明实施例提供的又一种探测装置的外参数标定方法的流程示意图,本发明实施例的所述方法可以由可移动平台来执行,在该可移动平台的不同位置处配置有探测装置,该探测装置包括第一探测装置和第三探测装置,该第三探测装置包括惯性传感器。
在图5所示的探测装置的外参数标定方法中,可以在步骤S501中确定第一探测装置在探测到目标环境区域下的不同位置之间的相对平移和相对旋转,并基于第一探测装置在不同位置之间的相对平移和相对旋转,确定第一探测装置的加速度和角速度。
在一个实施例中,确定第一探测装置在探测到目标环境区域下的不同位置之间的相对平移和相对旋转,包括:在可移动平台移动时,调用第一探测装置以不同位置和姿态探测目标环境区域,获取第一探测装置在不同位置下探测到的各个第二环境探测信息,该各个第二环境探测信息均包括关于目标环境区域的信息。进一步地,基于各个第二环境探测信息,确定第一探测装置在不同位置之间的相对平移和相对旋转。
示例性地，假设目标环境区域如图1所示的环境区域1，可移动平台在移动过程中，分别在3个位置：A、B、C探测到相同的环境区域1，第一探测装置在不同位置下探测到的各个第二环境探测信息，表征了每个位置对应的探测时间和位置坐标，如表2所示。这种情况下，可以基于上述各个第二环境探测信息，计算得到第一探测装置在探测位置A和探测位置B之间的相对平移S2-S1和相对旋转α2-α1；计算得到第一探测装置在探测位置B和探测位置C之间的相对平移S3-S2和相对旋转α3-α2。进一步地，通过二阶差分第一探测装置的位置：
a = ((S3-S2)/(T3-T2) - (S2-S1)/(T2-T1)) / ((T3-T1)/2)
确定出第一探测装置的加速度,相应地,可以通过一阶差分第一探测装置的姿态,确定出第一探测装置的角速度。
表2
探测位置 探测时间 位置坐标 旋转角度
A T1 S1 α1
B T2 S2 α2
C T3 S3 α3
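上述二阶差分求加速度、一阶差分求角速度的计算，可以用如下示意代码表达（以一维位置/角度序列为例，函数名为示例性假设）：

```python
import numpy as np

def accel_by_second_difference(S, T):
    """对位置序列做二阶差分估计加速度: 先求相邻区间的平均速度,
    再对速度做一阶差分并除以速度采样中点之间的时间间隔。"""
    S = np.asarray(S, dtype=float)
    T = np.asarray(T, dtype=float)
    v = (S[1:] - S[:-1]) / (T[1:] - T[:-1])   # 相邻区间平均速度
    t_mid = (T[1:] + T[:-1]) / 2.0            # 各速度对应的中点时刻
    return (v[1:] - v[:-1]) / (t_mid[1:] - t_mid[:-1])

def angular_rate_by_first_difference(alpha, T):
    """对姿态角序列做一阶差分估计角速度。"""
    alpha = np.asarray(alpha, dtype=float)
    T = np.asarray(T, dtype=float)
    return (alpha[1:] - alpha[:-1]) / (T[1:] - T[:-1])
```

将由此得到的加速度、角速度序列与惯性传感器输出对齐比对，即可估计两者之间的外参数。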
在步骤S502中获取第三探测装置的加速度和角速度,并在步骤S503中通过对比第三探测装置的加速度、第三探测装置的角速度、第一探测装置的加速度和第一探测装置的角速度,得到第一探测装置与第三探测装置之间的目标外参数。
示例性地,在可移动平台移动过程中,惯性传感器(即第三探测装置)可以实时向可移动平台输出测量到的加速度与角速度,可移动平台接收到该加速度与角速度之后,可以将测量到的加速度与角速度存储在预设区域。后续执行步骤S502时,可以从上述预设区域中获取该惯性传感器测量到的加速度与角速度。
本发明实施例中,可移动平台可以不依赖于特殊的标定设备和特定的标定环境,更加高效灵活地实现第一探测装置和惯性传感器之间的外参数标定。
基于上述方法实施例的描述，在一个实施例中，本发明实施例还提供了一种如图6所示的探测装置的外参数标定装置。其中，所述外参数标定装置可以配置于但不限于可移动平台，在所述可移动平台的不同位置处至少配置有第一探测装置和第二探测装置，所述第一探测装置和所述第二探测装置分别用于采集得到环境探测信息，所述外参数标定装置包括：
获取模块60,用于获取所述第一探测装置探测到的第一环境探测信息,并获取所述第二探测装置探测到的第二环境探测信息,其中,所述第一环境探测信息和所述第二环境探测信息中均包括关于目标环境区域的信息,所述目标环境区域是所述可移动平台对应环境探测区域中的部分环境区域;
处理模块61,用于根据所述第一环境探测信息确定所述第一探测装置相对于所述目标环境区域的位姿数据,根据所述第二环境探测信息确定所述第二探测装置相对于所述目标环境区域的位姿数据;
所述处理模块61,还用于基于所述第一探测装置相对于所述目标环境区域的位姿数据和所述第二探测装置相对于所述目标环境区域的位姿数据,确定所述第一探测装置与所述第二探测装置之间的目标外参数。
在一个实施例中,所述第一探测装置为目标类型的传感器中的任一个传感器,所述目标类型的传感器包括图像传感器和感知传感器。
在一个实施例中,所述第二环境探测信息包括关于所述目标环境区域的图像数据,所述处理模块61,具体用于基于图像算法对所述关于所述目标环境区域的图像数据进行处理,得到所述第二探测装置相对于所述目标环境区域的位姿数据。
在一个实施例中,所述第二环境探测信息包括关于所述目标环境区域的点云数据,所述处理模块61,具体用于基于迭代最近点算法对所述关于所述目标环境区域的点云数据进行处理,得到所述第二探测装置相对于所述目标环境区域的位姿数据。
在一个实施例中,所述处理模块61,具体用于基于所述第一探测装置相对于所述目标环境区域的位姿数据和所述第二探测装置相对于所述目标环境区域的位姿数据,确定所述第一探测装置与所述第二探测装置之间的第一外参数;
基于所述第一探测装置相对于参考环境区域的位姿数据和所述第二探测装置相对于所述参考环境区域的位姿数据,确定所述第一探测装置与所述第二探测装置之间的第二外参数;对所述第一外参数和所述第二外参数进行数据处理,得到所述第一探测装置与所述第二探测装置之间的目标外参数。
在一个实施例中,所述目标外参数为所述第一探测装置和所述第二探测装置之间的平移矩阵和旋转矩阵。
在一个实施例中,所述第一探测装置和所述第二探测装置在不同时刻下探测到所述目标环境区域。
在一个实施例中,所述处理模块61,具体用于获取所述第一探测装置的运动轨迹;根据所述运动轨迹、所述第一探测装置相对于所述目标环境区域的位姿数据和所述第二探测装置相对于所述目标环境区域的位姿数据,计算所述第一探测装置与所述第二探测装置之间的目标外参数。
在一个实施例中,所述获取模块60,还用于获取所述第一探测装置在不同位置下探测到的各个第二环境探测信息,所述各个第二环境探测信息均包括关于所述目标环境区域的信息;所述处理模块61,还用于基于所述各个第二环境探测信息,确定所述第一探测装置在所述不同位置之间的相对平移和相对旋转;基于所述相对平移和所述相对旋转,确定出所述第一探测装置的运动轨迹。
在一个实施例中,所述可移动平台还设置了第三探测装置,所述第三探测装置包括惯性传感器,所述处理模块61,还用于基于所述第一探测装置在所述不同位置之间的所述相对平移和所述相对旋转,确定所述第一探测装置的加速度和角速度;获取所述惯性传感器的加速度和角速度;通过对比所述惯性传感器的加速度、所述惯性传感器的角速度、所述第一探测装置的加速度和所述第一探测装置的角速度,得到所述第一探测装置与所述惯性传感器之间的目标外参数。
在一个实施例中,所述探测装置包括第一探测装置和第三探测装置,所述第三探测装置包括惯性传感器,所述处理模块61,还用于确定所述第一探测装置在探测到目标环境区域下的不同位置之间的相对平移和相对旋转,基于所述第一探测装置在所述不同位置之间的所述相对平移和所述相对旋转,确定所述第一探测装置的加速度和角速度;获取模块60,还用于获取所述第三探测装置的加速度和角速度;所述处理模块61,还用于通过对比所述第三探测装置的加速度、所述第三探测装置的角速度、所述第一探测装置的加速度和所述第一探测装置的角速度,得到所述第一探测装置与所述第三探测装置之间的目标外参数。
在一个实施例中,所述处理模块61,具体用于在所述可移动平台移动时,调用所述第一探测装置以不同位置和姿态探测所述目标环境区域;获取所述第一探测装置在所述不同位置下探测到的各个第二环境探测信息,所述各个第二环境探测信息均包括关于所述目标环境区域的信息;基于所述各个第二环境探测信息,确定所述第一探测装置在所述不同位置之间的相对平移和相对旋转。
在本发明实施例中,上述各个模块的具体实现可参考前述附图3、图4或者图5所对应的实施例中相关内容的描述。
请参见图7，是本发明实施例提供的一种可移动平台的结构示意性框图。其中，在所述可移动平台的不同位置处至少配置有第一探测装置和第二探测装置，所述第一探测装置和所述第二探测装置分别用于采集得到环境探测信息。所述可移动平台可包括处理器70、通信接口71和存储器72，处理器70、通信接口71和存储器72通过总线相连接，所述存储器72用于存储程序指令和环境探测信息。
所述存储器72可以包括易失性存储器(volatile memory),如随机存取存储器(random-access memory,RAM);存储器72也可以包括非易失性存储器(non-volatile memory),如快闪存储器(flash memory),固态硬盘(solid-state drive,SSD)等;存储器72也可以是双倍速率同步动态随机存储器(Double Data Rate SDRAM,DDR);存储器72还可以包括上述种类的存储器的组合。
本发明实施例中,所述存储器72用于存储计算机程序,所述计算机程序包括程序指令,所述处理器70被配置用于调用所述程序指令时执行:获取所述第一探测装置探测到的第一环境探测信息,并获取所述第二探测装置探测到的第二环境探测信息,其中,所述第一环境探测信息和所述第二环境探测信息中均包括关于目标环境区域的信息,所述目标环境区域是所述可移动平台对应环境探测区域中的部分环境区域;根据所述第一环境探测信息确定所述第一探测装置相对于所述目标环境区域的位姿数据,根据所述第二环境探测信息确定所述第二探测装置相对于所述目标环境区域的位姿数据;基于所述第一探测装置相对于所述目标环境区域的位姿数据和所述第二探测装置相对于所述目标环境区域的位姿数据,确定所述第一探测装置与所述第二探测装置之间的目标外参数。
在一个实施例中,所述第一探测装置为目标类型的传感器中的任一个传感器,所述目标类型的传感器包括图像传感器和感知传感器。
在一个实施例中,所述第二环境探测信息包括关于所述目标环境区域的图像数据,所述处理器70,具体用于基于图像算法对所述关于所述目标环境区域的图像数据进行处理,得到所述第二探测装置相对于所述目标环境区域的位姿数据。
在一个实施例中，所述第二环境探测信息包括关于所述目标环境区域的点云数据，所述处理器70，还具体用于基于迭代最近点算法对所述关于所述目标环境区域的点云数据进行处理，得到所述第二探测装置相对于所述目标环境区域的位姿数据。
在一个实施例中,所述处理器70,还具体用于基于所述第一探测装置相对于所述目标环境区域的位姿数据和所述第二探测装置相对于所述目标环境区域的位姿数据,确定所述第一探测装置与所述第二探测装置之间的第一外参数;基于所述第一探测装置相对于参考环境区域的位姿数据和所述第二探测装置相对于所述参考环境区域的位姿数据,确定所述第一探测装置与所述第二探测装置之间的第二外参数;对所述第一外参数和所述第二外参数进行数据处理,得到所述第一探测装置与所述第二探测装置之间的目标外参数。
在一个实施例中,所述目标外参数为所述第一探测装置和所述第二探测装置之间的平移矩阵和旋转矩阵。
在一个实施例中,所述第一探测装置和所述第二探测装置在不同时刻下探测到所述目标环境区域。
在一个实施例中,所述处理器70,还具体用于获取所述第一探测装置的运动轨迹;根据所述运动轨迹、所述第一探测装置相对于所述目标环境区域的位姿数据和所述第二探测装置相对于所述目标环境区域的位姿数据,计算所述第一探测装置与所述第二探测装置之间的目标外参数。
在一个实施例中,所述处理器70,还用于获取所述第一探测装置在不同位置下探测到的各个第二环境探测信息,所述各个第二环境探测信息均包括关于所述目标环境区域的信息;基于所述各个第二环境探测信息,确定所述第一探测装置在所述不同位置之间的相对平移和相对旋转;基于所述相对平移和所述相对旋转,确定出所述第一探测装置的运动轨迹。
在一个实施例中，所述可移动平台还设置了第三探测装置，所述第三探测装置包括惯性传感器，所述处理器70，还用于基于所述第一探测装置在所述不同位置之间的所述相对平移和所述相对旋转，确定所述第一探测装置的加速度和角速度；获取所述惯性传感器的加速度和角速度；通过对比所述惯性传感器的加速度、所述惯性传感器的角速度、所述第一探测装置的加速度和所述第一探测装置的角速度，得到所述第一探测装置与所述惯性传感器之间的目标外参数。
在一个实施例中,所述探测装置还包括第三探测装置,所述第三探测装置包括惯性传感器,所述处理器70还可以被配置用于调用所述程序指令时执行:确定所述第一探测装置在探测到目标环境区域下的不同位置之间的相对平移和相对旋转;基于所述第一探测装置在所述不同位置之间的所述相对平移和所述相对旋转,确定所述第一探测装置的加速度和角速度;获取所述第三探测装置的加速度和角速度;通过对比所述第三探测装置的加速度、所述第三探测装置的角速度、所述第一探测装置的加速度和所述第一探测装置的角速度,得到所述第一探测装置与所述第三探测装置之间的目标外参数。
在一个实施例中,所述处理器70还具体用于:在所述可移动平台移动时,调用所述第一探测装置以不同位置和姿态探测所述目标环境区域;获取所述第一探测装置在所述不同位置下探测到的各个第二环境探测信息,所述各个第二环境探测信息均包括关于所述目标环境区域的信息;基于所述各个第二环境探测信息,确定所述第一探测装置在所述不同位置之间的相对平移和相对旋转。
在本发明实施例中,上述处理器70的具体实现可参考前述附图3、图4或者图5所对应的实施例中相关内容的描述。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,所述的程序可存储于一计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。其中,所述的存储介质可为磁碟、光盘、只读存储记忆体(Read-Only Memory,ROM)或随机存储记忆体(Random Access Memory,RAM)等。
以上所揭露的仅为本发明的部分实施例而已，当然不能以此来限定本发明之权利范围，本领域普通技术人员可以理解实现上述实施例的全部或部分流程，并依本发明权利要求所作的等同变化，仍属于本发明所涵盖的范围。

Claims (28)

  1. 一种探测装置的外参数标定方法,其特征在于,所述方法适用于可移动平台,在所述可移动平台的不同位置处至少配置有第一探测装置和第二探测装置,所述第一探测装置和所述第二探测装置分别用于采集得到环境探测信息,该方法包括:
    获取所述第一探测装置探测到的第一环境探测信息,并获取所述第二探测装置探测到的第二环境探测信息,其中,所述第一环境探测信息和所述第二环境探测信息中均包括关于目标环境区域的信息,所述目标环境区域是所述可移动平台对应环境探测区域中的部分环境区域;
    根据所述第一环境探测信息确定所述第一探测装置相对于所述目标环境区域的位姿数据,根据所述第二环境探测信息确定所述第二探测装置相对于所述目标环境区域的位姿数据;
    基于所述第一探测装置相对于所述目标环境区域的位姿数据和所述第二探测装置相对于所述目标环境区域的位姿数据,确定所述第一探测装置与所述第二探测装置之间的目标外参数。
  2. 根据权利要求1所述的方法,其特征在于,所述第一探测装置为目标类型的传感器中的任一个传感器,所述目标类型的传感器包括图像传感器和感知传感器。
  3. 根据权利要求1或2所述的方法,其特征在于,所述第二环境探测信息包括关于所述目标环境区域的图像数据,所述根据所述第二环境探测信息确定所述第二探测装置相对于所述目标环境区域的位姿数据,包括:
    基于图像算法对所述关于所述目标环境区域的图像数据进行处理,得到所述第二探测装置相对于所述目标环境区域的位姿数据。
  4. 根据权利要求1或2所述的方法，其特征在于，所述第二环境探测信息包括关于所述目标环境区域的点云数据，所述根据所述第二环境探测信息确定所述第二探测装置相对于所述目标环境区域的位姿数据，包括：
    基于迭代最近点算法对所述关于所述目标环境区域的点云数据进行处理,得到所述第二探测装置相对于所述目标环境区域的位姿数据。
  5. 根据权利要求1所述的方法，其特征在于，所述基于所述第一探测装置相对于所述目标环境区域的位姿数据和所述第二探测装置相对于所述目标环境区域的位姿数据，确定所述第一探测装置与所述第二探测装置之间的目标外参数，包括：
    基于所述第一探测装置相对于所述目标环境区域的位姿数据和所述第二探测装置相对于所述目标环境区域的位姿数据,确定所述第一探测装置与所述第二探测装置之间的第一外参数;
    基于所述第一探测装置相对于参考环境区域的位姿数据和所述第二探测装置相对于所述参考环境区域的位姿数据,确定所述第一探测装置与所述第二探测装置之间的第二外参数;
    对所述第一外参数和所述第二外参数进行数据处理,得到所述第一探测装置与所述第二探测装置之间的目标外参数。
  6. 根据权利要求1所述的方法,其特征在于,所述目标外参数为所述第一探测装置和所述第二探测装置之间的平移矩阵和旋转矩阵。
  7. 根据权利要求1所述的方法,其特征在于,所述第一探测装置和所述第二探测装置在不同时刻下探测到所述目标环境区域。
  8. 根据权利要求1所述的方法,其特征在于,所述基于所述第一探测装置相对于所述目标环境区域的位姿数据和所述第二探测装置相对于所述目标环境区域的位姿数据,确定所述第一探测装置与所述第二探测装置之间的目标外参数,包括:
    获取所述第一探测装置的运动轨迹;
    根据所述运动轨迹、所述第一探测装置相对于所述目标环境区域的位姿数据和所述第二探测装置相对于所述目标环境区域的位姿数据，计算所述第一探测装置与所述第二探测装置之间的目标外参数。
  9. 根据权利要求8所述的方法,其特征在于,所述获取所述第一探测装置的运动轨迹之前,所述方法包括:
    获取所述第一探测装置在不同位置下探测到的各个第二环境探测信息,所述各个第二环境探测信息均包括关于所述目标环境区域的信息;
    基于所述各个第二环境探测信息,确定所述第一探测装置在所述不同位置之间的相对平移和相对旋转;
    基于所述相对平移和所述相对旋转,确定出所述第一探测装置的运动轨迹。
  10. 根据权利要求1所述的方法,其特征在于,所述可移动平台还设置了第三探测装置,所述第三探测装置包括惯性传感器,所述方法还包括:
    基于所述第一探测装置在不同位置之间的相对平移和相对旋转,确定所述第一探测装置的加速度和角速度;
    获取所述惯性传感器的加速度和角速度;
    通过对比所述惯性传感器的加速度、所述惯性传感器的角速度、所述第一探测装置的加速度和所述第一探测装置的角速度,得到所述第一探测装置与所述惯性传感器之间的目标外参数。
  11. 一种探测装置的外参数标定方法,其特征在于,所述方法适用于可移动平台,在所述可移动平台的不同位置处配置有探测装置,所述探测装置包括第一探测装置和第三探测装置,所述第三探测装置包括惯性传感器,该方法包括:
    确定所述第一探测装置在探测到目标环境区域下的不同位置之间的相对平移和相对旋转;
    基于所述第一探测装置在所述不同位置之间的所述相对平移和所述相对旋转,确定所述第一探测装置的加速度和角速度;
    获取所述第三探测装置的加速度和角速度;
    通过对比所述第三探测装置的加速度、所述第三探测装置的角速度、所述第一探测装置的加速度和所述第一探测装置的角速度，得到所述第一探测装置与所述第三探测装置之间的目标外参数。
  12. 根据权利要求11所述的方法,其特征在于,所述确定所述第一探测装置在探测到目标环境区域下的不同位置之间的相对平移和相对旋转,包括:
    在所述可移动平台移动时,调用所述第一探测装置以不同位置和姿态探测所述目标环境区域;
    获取所述第一探测装置在所述不同位置下探测到的各个第二环境探测信息,所述各个第二环境探测信息均包括关于所述目标环境区域的信息;
    基于所述各个第二环境探测信息,确定所述第一探测装置在所述不同位置之间的相对平移和相对旋转。
  13. 一种探测装置的外参数标定装置,其特征在于,所述装置配置于可移动平台,在所述可移动平台的不同位置处至少配置有第一探测装置和第二探测装置,所述第一探测装置和所述第二探测装置分别用于采集得到环境探测信息,该装置包括:
    获取模块,用于获取所述第一探测装置探测到的第一环境探测信息,并获取所述第二探测装置探测到的第二环境探测信息,其中,所述第一环境探测信息和所述第二环境探测信息中均包括关于目标环境区域的信息,所述目标环境区域是所述可移动平台对应环境探测区域中的部分环境区域;
    处理模块,用于根据所述第一环境探测信息确定所述第一探测装置相对于所述目标环境区域的位姿数据,根据所述第二环境探测信息确定所述第二探测装置相对于所述目标环境区域的位姿数据;
    所述处理模块,还用于基于所述第一探测装置相对于所述目标环境区域的位姿数据和所述第二探测装置相对于所述目标环境区域的位姿数据,确定所述第一探测装置与所述第二探测装置之间的目标外参数。
  14. 一种可移动平台，其特征在于，在所述可移动平台的不同位置处至少配置有第一探测装置和第二探测装置，所述第一探测装置和所述第二探测装置分别用于采集得到环境探测信息，所述可移动平台包括处理器和通信接口，所述处理器用于：
    获取所述第一探测装置探测到的第一环境探测信息,并获取所述第二探测装置探测到的第二环境探测信息,其中,所述第一环境探测信息和所述第二环境探测信息中均包括关于目标环境区域的信息,所述目标环境区域是所述可移动平台对应环境探测区域中的部分环境区域;
    根据所述第一环境探测信息确定所述第一探测装置相对于所述目标环境区域的位姿数据,根据所述第二环境探测信息确定所述第二探测装置相对于所述目标环境区域的位姿数据;
    基于所述第一探测装置相对于所述目标环境区域的位姿数据和所述第二探测装置相对于所述目标环境区域的位姿数据,确定所述第一探测装置与所述第二探测装置之间的目标外参数。
  15. 根据权利要求14所述的可移动平台,其特征在于,所述第一探测装置为目标类型的传感器中的任一个传感器,所述目标类型的传感器包括图像传感器和感知传感器。
  16. 根据权利要求14或15所述的可移动平台,其特征在于,所述第二环境探测信息包括关于所述目标环境区域的图像数据,所述处理器,具体用于基于图像算法对所述关于所述目标环境区域的图像数据进行处理,得到所述第二探测装置相对于所述目标环境区域的位姿数据。
  17. 根据权利要求14或15所述的可移动平台,其特征在于,所述第二环境探测信息包括关于所述目标环境区域的点云数据,所述处理器,还具体用于基于迭代最近点算法对所述关于所述目标环境区域的点云数据进行处理,得到所述第二探测装置相对于所述目标环境区域的位姿数据。
  18. 根据权利要求14所述的可移动平台，其特征在于，所述处理器，还具体用于基于所述第一探测装置相对于所述目标环境区域的位姿数据和所述第二探测装置相对于所述目标环境区域的位姿数据，确定所述第一探测装置与所述第二探测装置之间的第一外参数；基于所述第一探测装置相对于参考环境区域的位姿数据和所述第二探测装置相对于所述参考环境区域的位姿数据，确定所述第一探测装置与所述第二探测装置之间的第二外参数；对所述第一外参数和所述第二外参数进行数据处理，得到所述第一探测装置与所述第二探测装置之间的目标外参数。
  19. 根据权利要求14所述的可移动平台,其特征在于,所述目标外参数为所述第一探测装置和所述第二探测装置之间的平移矩阵和旋转矩阵。
  20. 根据权利要求14所述的可移动平台,其特征在于,所述第一探测装置和所述第二探测装置在不同时刻下探测到所述目标环境区域。
  21. 根据权利要求14所述的可移动平台,其特征在于,所述处理器,还具体用于获取所述第一探测装置的运动轨迹;根据所述运动轨迹、所述第一探测装置相对于所述目标环境区域的位姿数据和所述第二探测装置相对于所述目标环境区域的位姿数据,计算所述第一探测装置与所述第二探测装置之间的目标外参数。
  22. 根据权利要求21所述的可移动平台,其特征在于,所述处理器,还用于获取所述第一探测装置在不同位置下探测到的各个第二环境探测信息,所述各个第二环境探测信息均包括关于所述目标环境区域的信息;基于所述各个第二环境探测信息,确定所述第一探测装置在所述不同位置之间的相对平移和相对旋转;基于所述相对平移和所述相对旋转,确定出所述第一探测装置的运动轨迹。
  23. 根据权利要求14所述的可移动平台，其特征在于，所述可移动平台还设置了第三探测装置，所述第三探测装置包括惯性传感器，所述处理器，还用于基于所述第一探测装置在不同位置之间的相对平移和相对旋转，确定所述第一探测装置的加速度和角速度；获取所述惯性传感器的加速度和角速度；通过对比所述惯性传感器的加速度、所述惯性传感器的角速度、所述第一探测装置的加速度和所述第一探测装置的角速度，得到所述第一探测装置与所述惯性传感器之间的目标外参数。
  24. 一种探测装置的外参数标定装置,其特征在于,所述装置适用于可移动平台,在所述可移动平台的不同位置处配置有探测装置,所述探测装置包括第一探测装置和第三探测装置,所述第三探测装置包括惯性传感器,该装置包括:
    处理模块,用于确定所述第一探测装置在探测到目标环境区域下的不同位置之间的相对平移和相对旋转;
    所述处理模块,还用于基于所述第一探测装置在所述不同位置之间的所述相对平移和所述相对旋转,确定所述第一探测装置的加速度和角速度;
    获取模块,用于获取所述第三探测装置的加速度和角速度;
    所述处理模块,还用于通过对比所述第三探测装置的加速度、所述第三探测装置的角速度、所述第一探测装置的加速度和所述第一探测装置的角速度,得到所述第一探测装置与所述第三探测装置之间的目标外参数。
  25. 根据权利要求24所述的装置,其特征在于,所述处理模块,具体用于在所述可移动平台移动时,调用所述第一探测装置以不同位置和姿态探测所述目标环境区域;通过所述获取模块获取所述第一探测装置在所述不同位置下探测到的各个第二环境探测信息,所述各个第二环境探测信息均包括关于所述目标环境区域的信息;基于所述各个第二环境探测信息,确定所述第一探测装置在所述不同位置之间的相对平移和相对旋转。
  26. 一种可移动平台,其特征在于,在所述可移动平台的不同位置处配置有探测装置,所述探测装置包括第一探测装置和第三探测装置,所述第三探测装置包括惯性传感器,所述可移动平台包括:处理器和通信接口,所述处理器用于:
    确定所述第一探测装置在探测到目标环境区域下的不同位置之间的相对平移和相对旋转；
    基于所述第一探测装置在所述不同位置之间的所述相对平移和所述相对旋转,确定所述第一探测装置的加速度和角速度;
    获取所述第三探测装置的加速度和角速度;
    通过对比所述第三探测装置的加速度、所述第三探测装置的角速度、所述第一探测装置的加速度和所述第一探测装置的角速度,得到所述第一探测装置与所述第三探测装置之间的目标外参数。
  27. 根据权利要求26所述的可移动平台,其特征在于,所述处理器具体用于:在所述可移动平台移动时,调用所述第一探测装置以不同位置和姿态探测所述目标环境区域;获取所述第一探测装置在所述不同位置下探测到的各个第二环境探测信息,所述各个第二环境探测信息均包括关于所述目标环境区域的信息;基于所述各个第二环境探测信息,确定所述第一探测装置在所述不同位置之间的相对平移和相对旋转。
  28. 一种计算机存储介质,其特征在于,该计算机存储介质中存储有程序指令,该程序指令被执行时,用于实现如权利要求1-12任一项所述的方法。
PCT/CN2019/120278 2019-11-22 2019-11-22 一种探测装置的外参数标定方法、装置及可移动平台 WO2021097807A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980038511.3A CN112272757A (zh) 2019-11-22 2019-11-22 一种探测装置的外参数标定方法、装置及可移动平台
PCT/CN2019/120278 WO2021097807A1 (zh) 2019-11-22 2019-11-22 一种探测装置的外参数标定方法、装置及可移动平台

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/120278 WO2021097807A1 (zh) 2019-11-22 2019-11-22 一种探测装置的外参数标定方法、装置及可移动平台

Publications (1)

Publication Number Publication Date
WO2021097807A1 true WO2021097807A1 (zh) 2021-05-27

Family

ID=74349512

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/120278 WO2021097807A1 (zh) 2019-11-22 2019-11-22 一种探测装置的外参数标定方法、装置及可移动平台

Country Status (2)

Country Link
CN (1) CN112272757A (zh)
WO (1) WO2021097807A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113655453B (zh) * 2021-08-27 2023-11-21 阿波罗智能技术(北京)有限公司 用于传感器标定的数据处理方法、装置及自动驾驶车辆

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017080715A1 (en) * 2015-10-19 2017-05-18 Continental Automotive Gmbh Adaptive calibration using visible car details
CN107850901A (zh) * 2015-05-23 2018-03-27 深圳市大疆创新科技有限公司 使用惯性传感器和图像传感器的传感器融合
CN109100741A (zh) * 2018-06-11 2018-12-28 长安大学 一种基于3d激光雷达及图像数据的目标检测方法
CN109143205A (zh) * 2018-08-27 2019-01-04 深圳清创新科技有限公司 一体化传感器外参数标定方法、装置
CN109767475A (zh) * 2018-12-28 2019-05-17 广州小鹏汽车科技有限公司 一种传感器的外部参数标定方法及系统
CN109946680A (zh) * 2019-02-28 2019-06-28 北京旷视科技有限公司 探测系统的外参数标定方法、装置、存储介质及标定系统

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9123135B2 (en) * 2012-06-14 2015-09-01 Qualcomm Incorporated Adaptive switching between vision aided INS and vision only pose
CN207923150U (zh) * 2017-08-04 2018-09-28 广东工业大学 一种深度相机和惯性测量单元相对姿态的标定系统
CN107747941B (zh) * 2017-09-29 2020-05-15 歌尔股份有限公司 一种双目视觉定位方法、装置及系统
CN108375775B (zh) * 2018-01-17 2020-09-29 上海禾赛光电科技有限公司 车载探测设备及其参数的调整方法、介质、探测系统

Also Published As

Publication number Publication date
CN112272757A (zh) 2021-01-26


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19953608

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19953608

Country of ref document: EP

Kind code of ref document: A1