WO2019037484A1 - Method, apparatus, device, and storage medium for calibrating a laser scanning device - Google Patents

Method, apparatus, device, and storage medium for calibrating a laser scanning device

Info

Publication number
WO2019037484A1
WO2019037484A1 (PCT/CN2018/087251)
Authority: WIPO (PCT)
Prior art keywords: coordinate, cloud data, point cloud, frame, coordinate system
Application number: PCT/CN2018/087251
Other languages: English (en), French (fr)
Inventor: 曾超 (Zeng Chao)
Original Assignee: 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Priority to KR1020197030956A (KR102296723B1)
Priority to EP18847637.8A (EP3686557A4)
Priority to JP2020511185A (JP6906691B2)
Publication of WO2019037484A1
Priority to US16/383,358 (US20190235062A1)

Classifications

    • G01C 15/002: Active optical surveying means
    • G01C 21/28: Navigation specially adapted for a road network, with correlation of data from several navigational instruments
    • G01C 21/30: Map- or contour-matching
    • G01C 25/00: Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
    • G01S 17/42: Simultaneous measurement of distance and other co-ordinates
    • G01S 17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S 17/89: Lidar systems specially adapted for mapping or imaging
    • G01S 17/931: Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G01S 7/4808: Evaluating distance, position or velocity data
    • G01S 7/4817: Constructional features, e.g. arrangements of optical elements, relating to scanning
    • G01S 7/497: Means for monitoring or calibrating
    • G01S 7/4972: Alignment of sensor

Definitions

  • the present application relates to the field of driverless technology, and in particular, to a method, apparatus, device, and storage medium for calibrating a laser scanning device.
  • a navigation system in an unmanned vehicle can provide a navigation path so that the driverless vehicle can travel along that path.
  • the unmanned vehicle can also scan the surrounding environment in real time through the laser scanning device to obtain a three-dimensional image of the surrounding environment, so that the vehicle can travel by combining the surrounding environment with the navigation path, avoiding obstacles in the surrounding environment and further ensuring driving safety.
  • the laser scanning device therefore needs to be calibrated before it is used.
  • conventionally, the laser scanning device is calibrated by setting up a marker in a calibration field and marking a plurality of clearly positioned calibration points on the marker, thereby establishing a calibration field containing a plurality of calibration points.
  • a vehicle coordinate system with the unmanned vehicle as the coordinate origin is established in the calibration field, and the coordinates of each calibration point in the vehicle coordinate system are measured manually using traditional surveying and mapping methods.
  • a laser coordinate system with the laser scanning device as the origin is then established, and the calibration field is scanned by the laser scanning device to obtain one frame of point cloud data; the frame of point cloud data includes the set of surface points of the marker in the calibration field and the coordinates of each point in that set in the laser coordinate system.
  • a plurality of calibration points in the set of surface points are selected manually, and the coordinates of each of the calibration points in the laser coordinate system are obtained.
  • the SVD (Singular Value Decomposition) algorithm is then used to calculate the offset pose of the laser coordinate system relative to the vehicle coordinate system.
  • the offset pose includes the value of the offset position of the laser coordinate system with respect to the vehicle coordinate system and the value of the yaw angle, and the offset pose is used directly as the value of the laser external parameter of the laser scanning device.
  • the yaw angle is the angle between the x-axis of the laser coordinate system (pointing straight ahead of the laser scanning device) and the x-axis of the vehicle coordinate system (pointing straight ahead of the driverless vehicle).
  • the laser scanning device is then calibrated with the value of the laser external parameter.
  • the above method, however, requires manually establishing a calibration field and subsequently determining, by manual measurement or manual identification, the coordinates of each calibration point in the vehicle coordinate system and in the laser coordinate system, which makes this calibration method inefficient.
  • a method, apparatus, apparatus, and storage medium for laser scanning device calibration are provided.
  • a method of calibrating a laser scanning device comprising:
  • a device for calibrating a laser scanning device comprising:
  • An acquiring module configured to acquire, according to at least two frames of point cloud data obtained by scanning a target area with the laser scanning device, a first coordinate of a feature element in each frame of point cloud data, the first coordinate being the coordinates of the feature element in the laser coordinate system;
  • a first determining module configured to determine, according to the map data of the target area, a second coordinate, in the vehicle coordinate system, of the feature element in each frame of point cloud data;
  • a second determining module configured to determine, for each frame of point cloud data, an offset pose of that frame of point cloud data according to the first coordinate and the second coordinate of the feature element;
  • a calculating module configured to calculate a value of the laser external parameter of the laser scanning device according to the offset pose of the at least two frames of point cloud data to calibrate the laser scanning device.
  • a computer apparatus comprising a memory and a processor, the memory storing computer readable instructions, the computer readable instructions being executed by the processor such that the processor performs the following steps:
  • a non-transitory computer readable storage medium storing computer readable instructions, when executed by one or more processors, causes the one or more processors to perform the following steps:
  • FIG. 1 is a schematic diagram of a driving system provided by an embodiment of the present application.
  • FIG. 2 is a flow chart of a method for calibration of a laser scanning device according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a preset scanning route provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a first distance provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a second distance provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of an apparatus for calibrating a laser scanning device according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a computer device according to an embodiment of the present application.
  • the embodiment of the present application discloses a method of calibrating a laser scanning device.
  • the laser scanning device can be any laser scanning device installed in a traveling device that requires navigation, such as an unmanned vehicle, a drone, or a robot, which is not specifically limited in the embodiment of the present application.
  • the embodiment of the present application is described by taking a laser scanning device installed in a vehicle as an example.
  • the driving system includes: a laser scanning device 101 and a navigation system 102.
  • Map data is pre-stored in the navigation system 102, and the map data includes at least position coordinates of each feature element in the target area in the map coordinate system.
  • the navigation system 102 includes a GPS (Global Positioning System) and an IMU (Inertial Measurement Unit).
  • the navigation system 102 can receive satellite signals via GPS and locate the current position coordinates of the vehicle in the map coordinate system in real time.
  • the navigation system 102 can determine a navigation path for the vehicle in the map data according to the current position coordinates of the vehicle and the coordinates of the destination, and convert the corresponding path coordinates in the map coordinate system, via the geocentric coordinate system and the station-center coordinate system, into the vehicle coordinate system, so that the vehicle travels according to the navigation path in the vehicle coordinate system.
  • an accelerometer and a gyroscope are integrated in the IMU.
  • the navigation system 102 can also acquire the heading angle and the traveling speed of the vehicle in the vehicle coordinate system in real time through the IMU, thereby monitoring the running state of the vehicle in real time.
  • the driving system further includes a laser scanning device 101.
  • the vehicle can also scan the surrounding environment in real time through the laser scanning device 101 to obtain multiple frames of point cloud data of the surrounding environment; each frame of point cloud data includes the position coordinates, in the laser coordinate system, of each obstacle in the surrounding environment, which are converted into the vehicle coordinate system. The vehicle can then travel according to the navigation path in the vehicle coordinate system while taking each obstacle in the surrounding environment into account, further ensuring driving safety.
  • the map data may be map data of a to-be-traveled area that is set and stored in advance according to user needs. Further, the map data may be high-precision map data.
  • high-precision map data is next-generation navigation map data with centimeter-level positioning accuracy that includes road auxiliary facility information (such as traffic lights, electronic eyes, and traffic signs) and dynamic traffic information; navigation can be performed more accurately with high-precision map data.
  • the vehicle may be an unmanned vehicle that acquires a navigation path through the navigation system 102 and acquires multiple frames of point cloud data of the surrounding environment through the laser scanning device 101, so that the unmanned vehicle can travel safely by combining the navigation path in the vehicle coordinate system with each obstacle in the surrounding environment.
  • the map coordinate system is generally WGS84 (World Geodetic System 1984), and the position coordinates of each feature element are the latitude and longitude coordinates and elevation coordinate of the feature element in the WGS84 coordinate system.
  • the vehicle coordinate system takes the vehicle as the coordinate origin, with the direction straight ahead of the vehicle as the positive x-axis direction, the direction horizontally to the left and perpendicular to the x-axis as the positive y-axis direction, and the vertically upward direction as the positive z-axis direction.
  • the laser coordinate system takes the laser scanning device as the coordinate origin, with the direction straight ahead of the laser scanning device as the positive x-axis direction, the direction horizontally to the left and perpendicular to the x-axis as the positive y-axis direction, and the vertically upward direction as the positive z-axis direction.
  • the geocentric coordinate system takes the Earth's centroid as the coordinate origin o, with the direction from the origin toward the intersection of the prime meridian plane and the equatorial plane as the positive x-axis direction and the direction along the Earth's rotation axis toward the North Pole as the positive z-axis direction; the positive y-axis direction is perpendicular to the xoz plane and determined by the right-hand rule, thereby establishing a spatial rectangular coordinate system.
  • the station-center coordinate system takes the vehicle as the coordinate origin, with the tangential direction along the Earth ellipsoid toward the east (the east direction) as the positive x-axis direction, the tangential direction toward the north (the north direction) as the positive y-axis direction, and the normal direction of the Earth ellipsoid (the sky direction) as the positive z-axis direction, thereby establishing a spatial rectangular coordinate system.
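  • For illustration, the conversion chain used throughout this description (map/WGS84 coordinates to geocentric coordinates to station-center coordinates) can be sketched as follows. This is a minimal sketch, not the disclosure's own implementation; it assumes the standard WGS84 ellipsoid constants, and the function names are illustrative:

```python
import math

# WGS84 ellipsoid constants
A = 6378137.0                 # semi-major axis, meters
E2 = 6.69437999014e-3         # first eccentricity squared

def wgs84_to_geocentric(lat_deg, lon_deg, h):
    """Latitude/longitude/elevation -> geocentric (ECEF) coordinates."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)  # prime-vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + h) * math.sin(lat)
    return x, y, z

def geocentric_to_station_center(p, ref_lat_deg, ref_lon_deg, ref_h):
    """Geocentric point -> station-center (east, north, up) coordinates,
    with the vehicle position (ref_lat, ref_lon, ref_h) as the origin."""
    lat, lon = math.radians(ref_lat_deg), math.radians(ref_lon_deg)
    ox, oy, oz = wgs84_to_geocentric(ref_lat_deg, ref_lon_deg, ref_h)
    dx, dy, dz = p[0] - ox, p[1] - oy, p[2] - oz
    east = -math.sin(lon) * dx + math.cos(lon) * dy
    north = (-math.sin(lat) * math.cos(lon) * dx
             - math.sin(lat) * math.sin(lon) * dy
             + math.cos(lat) * dz)
    up = (math.cos(lat) * math.cos(lon) * dx
          + math.cos(lat) * math.sin(lon) * dy
          + math.sin(lat) * dz)
    return east, north, up
```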
  • the laser external parameters of the laser scanning device are the offset position and the yaw angle between the laser coordinate system and the vehicle coordinate system.
  • the offset position is the offset distance of the laser coordinate system relative to the vehicle coordinate system in the x-axis and y-axis directions, and the yaw angle is the angle between the x-axis of the laser coordinate system and the x-axis of the vehicle coordinate system, that is, the angle between the front of the laser scanning device and the front of the vehicle.
  • the present application also relates to the heading angle of the vehicle.
  • the heading angle refers to the angle between the front of the vehicle and the true north direction.
  • the execution body of the method is a terminal, and the terminal may be an in-vehicle terminal or any terminal having a data processing function.
  • the method includes:
  • the terminal scans the target area based on the preset scan route by using the laser scanning device to obtain at least two frames of point cloud data, where the target area is any area including the feature element.
  • the laser scanning device is installed in the vehicle and can be disposed on the front side or the side of the vehicle for scanning the environment around the vehicle.
  • the preset scan route may be a travel route designed to scan the target area.
  • the step may be: the terminal acquires a preset scan route, and uses the preset scan route as a travel route of the vehicle to control the vehicle to travel along the preset scan route.
  • the terminal controls the laser scanning device to scan the target area once every preset time period to obtain a frame of point cloud data of the target area.
  • the terminal controls the laser scanning device to perform at least two scans to obtain point cloud data of at least two frames of the target area.
  • the per-frame point cloud data includes, but is not limited to, a set of surface points of each obstacle in the target area, and position coordinates of each surface point in the laser coordinate system.
  • the preset duration may be set and changed according to the needs of the user, which is not specifically limited in this embodiment of the present application. For example, the preset duration may be 100 milliseconds, 5 seconds, or the like.
  • the feature elements include, but are not limited to, fixed curbs, road guardrails, pole-shaped features, and traffic signs in the target area. Since a feature element is an object with a fixed position in the target area, the feature elements in the target area are used as basic elements in place of calibration points, and the laser scanning device can finally be calibrated by determining the different coordinates of the feature elements in the respective coordinate systems.
  • the target area may be any area including a feature element.
  • the terminal may select an open area with few pedestrians as the target area. In such an area there is less unwanted noise data from other vehicles and the like, which reduces the interference of environmental noise and improves the accuracy of the subsequent extraction of the first coordinates of the feature elements from the point cloud data.
  • the preset scan route may be a scan route determined based on the target area.
  • the determined preset scan route is a circular route around the target area.
  • during travel, the traveling direction of the vehicle may be any direction such as east, south, west, or north. The terminal can therefore control the vehicle to travel along the loop route, so that point cloud data of the target area can be obtained in every traveling direction. Moreover, since the vehicle travels on one side of the road while observing the traffic rules, the point cloud data collected by the terminal is biased toward the left side or the right side. The terminal can thus control the vehicle to travel back and forth along the loop route, that is, to first travel clockwise along the loop route and then counterclockwise, so that scanning is performed both while the vehicle travels biased to the left side of the road and while it travels biased to the right side, which improves the accuracy of the value of the laser external parameter subsequently determined from the offset pose of each frame of point cloud data.
  • for example, if the target area is area A, the preset scan route may be a loop route around area A; that is, the terminal controls the vehicle to travel clockwise along the loop route from starting point B back to starting point B, and then to travel one lap counterclockwise along the loop route from starting point B.
  • for each frame of point cloud data, the terminal extracts the first coordinate of the feature element in the laser coordinate system. Since each frame of point cloud data includes the set of surface points of each obstacle in the target area and the position coordinates of each surface point in the laser coordinate system, the terminal needs to extract from each frame of point cloud data the first coordinate of the feature element, the first coordinate being the coordinates of the feature element in the laser coordinate system.
  • for each frame of point cloud data, the terminal extracts the point set corresponding to each feature element from the point cloud data by using a preset extraction algorithm; for each feature element, the set of position coordinates, in the laser coordinate system, of the point set corresponding to the feature element is used as the first coordinate of the feature element. The first coordinates of the feature elements included in each frame of point cloud data are thereby obtained.
  • the preset extraction algorithm may be set and changed according to the user's needs, which is not specifically limited in this embodiment of the present application.
  • the preset extraction algorithm may be a segmentation-based extraction algorithm or a detection-based extraction algorithm.
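  • As a hedged illustration of what a segmentation-based extraction might look like (the disclosure does not fix a particular algorithm), the following sketch groups laser points into a horizontal grid and keeps cells whose points span a large vertical extent, a crude signature of pole-shaped features; all names and thresholds are illustrative:

```python
import numpy as np

def extract_pole_like_point_sets(points, cell=0.5, min_pts=30, min_height=1.5):
    """points: (N, 3) array in the laser coordinate system.
    Returns a list of point sets, each usable as a first coordinate."""
    keys = np.floor(points[:, :2] / cell).astype(np.int64)  # grid cell per point
    point_sets = []
    for key in np.unique(keys, axis=0):
        cluster = points[np.all(keys == key, axis=1)]
        # keep dense clusters with a tall vertical extent (pole-like)
        if len(cluster) >= min_pts and np.ptp(cluster[:, 2]) >= min_height:
            point_sets.append(cluster)
    return point_sets
```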
  • the foregoing steps 201-202 are one specific implementation of the terminal obtaining, based on at least two frames of point cloud data obtained by scanning the target area with the laser scanning device, the first coordinate of the feature element in each frame of point cloud data. This specific implementation may be replaced by other implementations: while the foregoing implementation obtains point cloud data through real-time scanning, in a practical scenario the at least two frames of point cloud data of the target area may also be obtained from historical data collected by a previous scan, which is not specifically limited in this embodiment of the present application.
  • the terminal acquires map data of the target area from the navigation system, where the map data includes latitude and longitude coordinates and elevation coordinates of the feature element in a map coordinate system.
  • since the navigation system of the vehicle stores the map data of the target area, the terminal may acquire the map data of the target area from the navigation system according to the area information of the target area. The navigation system may also store map data of areas other than the target area; the map data here is actually high-precision map data of the target area, and therefore the map data of the target area includes at least the position coordinates, in the map coordinate system, of each feature element in the target area.
  • the area information may be an area identifier or a latitude and longitude range of the target area.
  • the area identifier can be the name of the area.
  • since the terminal needs to obtain the deviation between the vehicle coordinate system and the laser coordinate system for the target area, after acquiring the first coordinate of a feature element in the target area the terminal also needs to acquire the position coordinates of the feature element in the map coordinate system, so that it can subsequently determine the second coordinate of the feature element in the vehicle coordinate system.
  • since the terminal can locate the current position coordinates of the vehicle in the map coordinate system through the navigation system, for each frame of point cloud data, when the terminal acquires that frame of point cloud data it obtains, through the map data in the navigation system, the position coordinates in the map coordinate system of the feature element included in that frame of point cloud data, and converts those position coordinates into the second coordinate in the vehicle coordinate system.
  • if the area information is an area identifier, the terminal may store the correspondence between area identifiers and map data. Accordingly, the step in which the terminal acquires the map data of the target area from the navigation system may be: the terminal acquires the area identifier of the target area, and obtains the map data corresponding to the target area from the correspondence between area identifiers and map data according to the area identifier of the target area.
  • if the area information is a latitude and longitude range, the terminal stores the correspondence between latitude and longitude ranges and map data. Accordingly, the step in which the terminal acquires the map data of the target area from the navigation system may be: the terminal acquires the latitude and longitude range of the target area, and obtains the map data corresponding to the target area from the correspondence between latitude and longitude ranges and map data according to the latitude and longitude range of the target area.
  • the terminal determines, according to the map data of the target area, the second coordinate of the feature element in the vehicle coordinate system.
  • since each frame of point cloud data is acquired while the vehicle is running, the vehicle coordinate system with the vehicle as origin moves accordingly. Therefore, to determine the second coordinate, in the vehicle coordinate system, of the feature element corresponding to each frame of point cloud data, whenever the terminal acquires a frame of point cloud data it obtains from the map data, according to the feature elements included in that frame of point cloud data, the latitude and longitude coordinates and elevation coordinates of those feature elements in the map coordinate system.
  • the terminal determines the second coordinate of the feature element in the vehicle coordinate system according to the latitude and longitude coordinates and the elevation coordinate of the feature element in the map coordinate system.
  • the process in which the terminal determines the second coordinate of the feature element in the vehicle coordinate system according to the latitude and longitude coordinates and elevation coordinate of the feature element in the map coordinate system may be: the terminal first converts the latitude and longitude coordinates and elevation coordinate of the feature element in the map coordinate system into position coordinates in the geocentric coordinate system with the Earth's centroid as origin, and then converts the position coordinates of the feature element in the geocentric coordinate system into position coordinates in the station-center coordinate system.
  • then, the terminal acquires the heading angle of the vehicle through the IMU in the navigation system, and converts the position coordinates of the feature element in the station-center coordinate system into the second coordinate in the vehicle coordinate system according to the heading angle.
  • the station-center coordinate system and the vehicle coordinate system share the same coordinate origin; they differ only in the positive directions of the x-axis and y-axis, and the angle between the positive x-axis of the vehicle coordinate system and the positive y-axis of the station-center coordinate system is the heading angle of the vehicle. Therefore, the terminal may first convert the feature element from the map coordinate system, via the geocentric coordinate system, into the station-center coordinate system, and finally obtain the second coordinate of the feature element according to the heading angle of the vehicle.
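  • A minimal sketch of this final rotation by the heading angle (station-center east/north components into vehicle x/y), under the axis conventions stated above; the function name is illustrative:

```python
import math

def station_center_to_vehicle(east, north, heading_rad):
    """Rotate station-center (east, north) into the vehicle frame
    (x straight ahead, y to the left), given the heading angle between
    the front of the vehicle and true north."""
    x = east * math.sin(heading_rad) + north * math.cos(heading_rad)
    y = -east * math.cos(heading_rad) + north * math.sin(heading_rad)
    return x, y
```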
  • there may be a system deviation in the map data, namely a displacement deviation between the position coordinates of a feature element in the map coordinate system as recorded in the map data and the actual position coordinates of the feature element in the map coordinate system. Therefore, to improve the accuracy of determining the second coordinate, the terminal also needs to consider the influence of the system deviation on the second coordinate.
  • the process in which the terminal converts the position coordinates of the feature element in the station-center coordinate system into the second coordinate in the vehicle coordinate system according to the heading angle may be: the terminal acquires the initial system deviation of the map data and adjusts the position coordinates in the station-center coordinate system according to the initial system deviation.
  • the terminal converts the adjusted position coordinates into the second coordinates in the vehicle coordinate system according to the heading angle.
  • the process of adjusting the position coordinates may be expressed as follows: the initial system deviation may be represented by (x'0, y'0); that is, the terminal offsets the position coordinates of the feature element in the station-center coordinate system by x'0 unit distances in the positive x-axis direction and by y'0 unit distances in the positive y-axis direction.
  • the foregoing is one specific implementation in which the terminal determines, based on the map data of the target area, the second coordinate in the vehicle coordinate system of the feature element in each frame of point cloud data. This implementation obtains the second coordinate by acquiring the map data of the target area from the navigation system; in actual operation, the terminal may also obtain the map data of the target area from the navigation system in advance, store it in the terminal, and determine the second coordinate based on the stored map data, which is not specifically limited in this embodiment of the present application.
  • the offset pose of each frame of point cloud data is the offset pose between the laser coordinate system and the vehicle coordinate system at the moment the terminal acquires that frame of point cloud data. Because the laser coordinate system, with the laser scanning device as coordinate origin, and the vehicle coordinate system, with the vehicle as coordinate origin, both move as the vehicle moves, the offset poses of different frames of point cloud data may be the same or different. Therefore, the terminal needs to determine the offset pose of each frame of point cloud data through the following steps 205-207.
  • the terminal acquires an initial offset pose between the vehicle coordinate system and the laser coordinate system.
  • the offset pose includes a value of an offset position between the vehicle coordinate system and the laser coordinate system and a value of a yaw angle.
  • the offset position between the vehicle coordinate system and the laser coordinate system may be represented by the position coordinates, in the vehicle coordinate system, of the coordinate origin of the laser coordinate system, and the yaw angle may be represented by the angle between the x-axis of the laser coordinate system and the x-axis of the vehicle coordinate system.
  • the initial offset pose of each frame point cloud data is determined by step 205, and then the offset pose of each frame point cloud data is determined through steps 206-207.
  • the initial offset pose includes a value of an initial offset position and a value of an initial yaw angle.
  • the terminal may pre-acquire, by measurement, and store the initial offset pose between the vehicle coordinate system and the laser coordinate system, and use that initial offset pose as the initial offset pose of each frame of point cloud data.
  • for example, the terminal can measure, with a measuring tool such as a tape measure, the coordinates of the laser scanning device in the vehicle coordinate system and the angle between the x-axis of the laser coordinate system and the x-axis of the vehicle coordinate system, using the measured coordinates as the value of the initial offset position and the measured angle as the value of the initial yaw angle.
  • the terminal determines, according to the initial offset pose and the second coordinate of the feature element, a third coordinate of the feature element, where the third coordinate is the coordinates of the feature element in the laser coordinate system.
  • this step may be: for each frame of point cloud data, the terminal offsets the second coordinate of the feature element according to the value of the initial offset position in the initial offset pose of that frame of point cloud data, then angularly offsets the position-offset second coordinate according to the value of the initial yaw angle in the initial offset pose of that frame of point cloud data, and uses the coordinates after the position offset and the angular offset as the third coordinate of the feature element.
  • for example, if the value of the initial offset position is represented by (dx'', dy'') and the initial yaw angle by dyaw'', the terminal offsets the second coordinate of the feature element by dx'' unit distances in the positive x-axis direction and by dy'' unit distances in the positive y-axis direction, and rotates the offset second coordinate counterclockwise by dyaw'' unit angles.
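  • In code, applying the initial offset pose to a second coordinate to predict the third coordinate might look like the following sketch (a plain 2-D translate-then-rotate; names are illustrative):

```python
import math

def apply_initial_offset_pose(x, y, dx2, dy2, dyaw2):
    """Offset a second coordinate (x, y) by (dx'', dy''), then rotate the
    result counterclockwise by dyaw'' to obtain the third coordinate."""
    tx, ty = x + dx2, y + dy2
    c, s = math.cos(dyaw2), math.sin(dyaw2)
    return c * tx - s * ty, s * tx + c * ty
```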
  • the terminal determines, according to the first coordinate and the third coordinate of the feature element, an offset pose of the point cloud data of each frame.
  • the terminal may first determine, through step 207, the offset pose corresponding to each frame of point cloud data, so that an offset pose reflecting the general law can subsequently be determined from the offset poses corresponding to the individual frames of point cloud data. This step can be implemented through the following steps 2071-2072.
  • the terminal calculates, according to the first coordinate and the third coordinate of the feature element, a first distance between each first point element and an adjacent second point element, and a second distance between each first point element and an adjacent line element.
  • each feature element in each frame of point cloud data is composed of point elements and line elements, where a first point element is a point element among the feature elements corresponding to the first coordinate, a second point element is a point element among the feature elements corresponding to the third coordinate, and a line element is a linear element among the feature elements corresponding to the third coordinate.
  • the distance between a first point element and an adjacent element can be calculated by either of the following two methods.
  • in the first method, the first distance between each first point element among the feature elements in each frame of point cloud data and a second point element is calculated and used as the reference distance for the subsequent matching of the first coordinate and the third coordinate.
  • in this step, for each first point element, the terminal calculates the first distance between the first point element and the second point element according to the position coordinates of the first point element in the laser coordinate system and the position coordinates, in the laser coordinate system, of the second point element adjacent to the first point element.
  • the second point element adjacent to the first point element is, among the plurality of second point elements, the second point element closest to the first point element.
  • for example, point C is a first point element and point D is the second point element adjacent to point C; the terminal can calculate the first distance between point C and point D.
  • in the second method, the second distance between each first point element among the feature elements of each frame of point cloud data and a line element is calculated and used as the reference distance for matching the first coordinate and the third coordinate.
  • the second distance between a first point element and an adjacent line element is the normal distance from the first point element to the line element. Therefore, in this step, for each first point element, the terminal calculates the normal distance between the first point element and the line element according to the position coordinates of the first point element in the laser coordinate system and the position coordinates, in the laser coordinate system, of the line element adjacent to the first point element, and uses the normal distance as the second distance.
  • the line element adjacent to the first point element is, among the plurality of line elements, the line element closest to the first point element.
  • for example, point C is a first point element and line L is the line element adjacent to point C; the terminal can calculate the normal distance between point C and line L, thereby obtaining the second distance.
  • the terminal determines a plurality of first distances by using the position coordinates of the plurality of first point elements and the position coordinates of the plurality of second point elements, and determines a plurality of second distances by using the position coordinates of the plurality of first point elements and the position coordinates of the plurality of line elements.
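  • Both reference distances are elementary geometry. A sketch, assuming the point sets are NumPy arrays and using a k-d tree for the nearest-neighbor search (the disclosure does not prescribe a particular data structure; names are illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def first_distances(first_pts, second_pts):
    """Point-to-point: distance from each first point element to its
    nearest (adjacent) second point element."""
    dist, idx = cKDTree(second_pts).query(first_pts)
    return dist, idx

def second_distance(p, a, b):
    """Point-to-line: normal distance from a first point element p to the
    line element through points a and b."""
    p, a, b = np.asarray(p), np.asarray(a), np.asarray(b)
    d = b - a
    t = np.dot(p - a, d) / np.dot(d, d)     # foot of the perpendicular
    return np.linalg.norm(p - (a + t * d))
```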
  • the terminal determines, according to the first distance and the second distance, an offset pose of the point cloud data of each frame.
  • the terminal may determine the offset pose of the point cloud data of each frame by iteratively matching the first coordinate and the third coordinate of the feature element.
  • the process includes the following steps a-g:
  • Step a: For each frame of point cloud data, the terminal selects, according to the first distances and the second distances, the first point elements whose first distance is smaller than a first preset threshold together with their corresponding second point elements, and the first point elements whose second distance is smaller than the first preset threshold together with their corresponding line elements.
  • the second point element corresponding to the first point element is a second point element adjacent to the first point element when the terminal calculates the first distance.
  • the linear element corresponding to the first point element is a linear element adjacent to the first point element when the terminal calculates the second distance.
  • Step b: According to the selected first point elements and second point elements, and the selected first point elements and line elements, the terminal determines, based on a mean square error expression between the first coordinate and the third coordinate, the offset matrix that minimizes the value of the mean square error, and uses that offset matrix as the intermediate offset matrix between the first coordinate and the third coordinate.
  • Step c: The terminal updates the initial offset matrix of the frame of point cloud data according to the intermediate offset matrix between the first coordinate and the third coordinate, and multiplies the updated initial offset matrix by the second coordinate to obtain a fourth coordinate, thus completing the first iteration of matching.
  • the step of updating the initial offset matrix of the frame of point cloud data according to the intermediate offset matrix between the first coordinate and the third coordinate may be: the terminal multiplies the intermediate offset matrix between the first coordinate and the third coordinate by the initial offset matrix of the frame of point cloud data to obtain the updated initial offset matrix. Step c is in fact a process of converting the second coordinate in the vehicle coordinate system into the laser coordinate system again; the implementation is the same as in step 206 and is not repeated here.
  • Step d: The terminal calculates, according to the first coordinate and the fourth coordinate of the feature element, a third distance between each first point element and the adjacent second point element, and a fourth distance between each first point element and the adjacent line element.
  • step d is in fact a process of recalculating the first distance and the second distance according to the first coordinate and the fourth coordinate converted into the laser coordinate system again; the implementation is consistent with step 2071 and is not repeated here.
  • Step e: Determine the updated initial offset matrix again through the implementation in steps a-c, thereby completing the second iteration of matching.
  • Step f: Perform multiple iterations of matching through the implementation in the above steps a-e. When the minimum mean square error corresponding to the intermediate offset matrix is smaller than a second preset threshold, the initial offset matrix updated according to that intermediate offset matrix is obtained and used as the offset matrix of the frame of point cloud data; when the number of iterations reaches a third preset threshold, the updated initial offset matrix from the last iteration of matching is obtained and used as the offset matrix of the frame of point cloud data.
  • Step g The terminal determines an offset pose of the frame point cloud data according to the offset matrix of the frame point cloud data.
  • the step b may be specifically: according to the selected first point elements and the second point elements corresponding to them, and the selected first point elements and the line elements corresponding to them, the terminal uses the following mean square error expression (Formula 1) to find the offset matrix that minimizes the value of the mean square error, and uses it as the intermediate offset matrix between the first coordinate and the third coordinate:

    E(X, Y) = (1/m) · Σ (i = 1 to m) ‖x_i − M·y_i‖²    (Formula 1)

  • where X is the first coordinate of the feature element; Y is the third coordinate of the feature element; E(X, Y) is the mean square error between the first coordinate and the third coordinate of the feature element; x_i is the i-th first point element whose first distance or second distance is not greater than the preset threshold; y_i is the second point element or line element corresponding to the i-th first point element; m is the number of first point elements whose first distance or second distance is not greater than the preset threshold; and M is the intermediate offset matrix between the first coordinate and the third coordinate.
  • the intermediate offset matrix between the first coordinate and the third coordinate may be represented by M.
  • the intermediate offset matrix includes a numerical value (dx', dy') of the offset position between the first coordinate and the third coordinate and a numerical value dyaw' of the yaw angle.
  • the first preset threshold, the second preset threshold, and the third preset threshold may be set and changed according to the user's needs, which is not specifically limited in this embodiment of the present application.
  • the first preset threshold may be 1 meter, 0.5 meter, or the like.
  • the second preset threshold may be 0.1, 0.3, or the like.
  • the third preset threshold may be 20, 100, or the like.
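  • For point-to-point pairs, the per-iteration fit in step b has a closed form: the rigid transform minimizing the mean square error can be obtained by SVD, the same decomposition mentioned in the background. A sketch of the fit and of the iteration loop of steps a-f, restricted to point pairs for brevity; thresholds and names are illustrative, not the disclosure's own implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def fit_intermediate_offset(x, y):
    """SVD (Kabsch) closed form for the 2-D rotation R and translation t
    minimizing (1/m) * sum ||x_i - (R @ y_i + t)||^2."""
    cx, cy = x.mean(axis=0), y.mean(axis=0)
    h = (y - cy).T @ (x - cx)              # 2x2 cross-covariance
    u, _, vt = np.linalg.svd(h)
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:               # enforce a proper rotation
        vt[-1] *= -1
        r = vt.T @ u.T
    return r, cx - r @ cy

def iterate_offset_matrix(first_pts, second_pts, r, t,
                          dist_th=1.0, mse_th=0.1, max_iter=100):
    """Steps a-f: re-match, refit, and update until the minimum mean
    square error falls below the second preset threshold or the iteration
    count reaches the third preset threshold."""
    for _ in range(max_iter):
        third = second_pts @ r.T + t                   # current third coords
        d, idx = cKDTree(third).query(first_pts)
        sel = d < dist_th                              # first preset threshold
        dr, dt = fit_intermediate_offset(first_pts[sel], third[idx[sel]])
        r, t = dr @ r, dr @ t + dt                     # update offset matrix
        mse = np.mean(np.sum((first_pts[sel]
                              - (third[idx[sel]] @ dr.T + dt)) ** 2, axis=1))
        if mse < mse_th:                               # second preset threshold
            break
    return r, t
```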
  • the foregoing steps 205-207 are one specific implementation in which the terminal determines the offset pose of each frame of point cloud data according to the first coordinate and the second coordinate of the feature element. This implementation converts the second coordinate in the vehicle coordinate system into the laser coordinate system, and determines the offset pose of each frame of point cloud data according to the first coordinate and the converted third coordinate. Alternatively, the terminal may convert the first coordinate in the laser coordinate system into the vehicle coordinate system to obtain a converted fourth coordinate, and determine the offset pose of each frame of point cloud data according to the second coordinate and the converted fourth coordinate, which is not specifically limited in this embodiment of the present application.
  • the terminal establishes an observation equation between the offset poses of the at least two frames of point cloud data and the offset position, the yaw angle, and the system deviation; for each frame of point cloud data, the terminal acquires the heading angle of the vehicle corresponding to that frame of point cloud data.
  • the laser external parameter of the laser scanning device includes an offset position and a yaw angle between the vehicle coordinate system and the laser coordinate system.
  • as described in steps 203-204, due to the system deviation in the map data, the second coordinate of the feature element deviates from the actual coordinates of the feature element in the vehicle coordinate system, and the influence of the system deviation on the second coordinate was considered when determining the offset pose of each frame of point cloud data. Therefore, in this step, the terminal also needs to consider the influence of the system deviation when establishing the observation equation.
  • the terminal establishes the observation equation according to the offset poses of the at least two frames of point cloud data, the offset position, the yaw angle, and the system deviation. Let the system deviation be (x0, y0), the offset position be (dx, dy), and the yaw angle be dyaw; let (dx'_i, dy'_i) be the value of the offset position of the i-th frame of point cloud data in the at least two frames of point cloud data, dyaw'_i the value of the yaw angle of the i-th frame of point cloud data, and yaw_i the heading angle of the vehicle corresponding to the i-th frame of point cloud data. Projecting the system deviation onto the x-axis direction and the y-axis direction of the vehicle coordinate system, the observation equation may be written in a form such as:

    dx'_i = dx + x0·sin(yaw_i) + y0·cos(yaw_i)
    dy'_i = dy − x0·cos(yaw_i) + y0·sin(yaw_i)
    dyaw'_i = dyaw

  • this projection is needed because the system deviation is an error in the map data and is expressed in the station-center coordinate system, so in actual operation it must be converted into the vehicle coordinate system. The station-center coordinate system and the vehicle coordinate system both take the vehicle as the coordinate origin and differ only in the positive directions of the x-axis and y-axis; the angle between the positive y-axis direction of the station-center coordinate system and the positive x-axis direction of the vehicle coordinate system is equal to the heading angle of the vehicle.
  • therefore, for each frame of point cloud data, the terminal needs to obtain the heading angle of the vehicle corresponding to that frame of point cloud data. The process may be: when the terminal acquires each frame of point cloud data, it obtains, through the IMU in the navigation system, the heading angle of the vehicle corresponding to that frame of point cloud data.
  • the terminal calculates a value of the offset position and a value of the yaw angle in the observation equation according to the heading angle and an offset posture of the point cloud data of each frame.
  • the terminal may substitute the offset poses of the at least two frames of point cloud data into the observation equation, and thereby calculate, according to the offset poses of the at least two frames of point cloud data, the value of the offset position, the value of the yaw angle, and the value of the system deviation in the observation equation, obtaining a relatively robust value of the laser external parameter.
  • it should be noted that the terminal may acquire the offset poses of n frames of point cloud data (n being a positive integer greater than 2) and the heading angle of the vehicle corresponding to each frame of point cloud data, substitute the offset pose of each frame of point cloud data and the corresponding heading angle into the observation equation, and calculate the value of the offset position, the value of the yaw angle, and the value of the system deviation in the observation equation using the least squares method. Because the offset poses of n frames of point cloud data are used, the interference of random noise that may exist in any single frame of point cloud data is reduced and the error is decreased, which in turn makes the determined value of the laser external parameter more accurate.
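  • Under the projected form of the observation equation given above (the exact equation in the original disclosure may differ), the least squares calculation over the n frames reduces to a single linear solve. A minimal sketch with illustrative names:

```python
import numpy as np

def solve_observation_equation(offset_poses, headings):
    """offset_poses: (n, 3) array of per-frame (dx'_i, dy'_i, dyaw'_i);
    headings: (n,) array of heading angles yaw_i in radians.
    Solves for the laser external parameters (dx, dy, dyaw) and the
    system deviation (x0, y0) by least squares."""
    n = len(headings)
    a = np.zeros((3 * n, 5))               # columns: dx, dy, dyaw, x0, y0
    b = np.zeros(3 * n)
    s, c = np.sin(headings), np.cos(headings)
    a[0::3, 0] = 1.0; a[0::3, 3] = s;  a[0::3, 4] = c    # dx'_i rows
    a[1::3, 1] = 1.0; a[1::3, 3] = -c; a[1::3, 4] = s    # dy'_i rows
    a[2::3, 2] = 1.0                                      # dyaw'_i rows
    b[0::3], b[1::3], b[2::3] = offset_poses.T
    sol, *_ = np.linalg.lstsq(a, b, rcond=None)
    dx, dy, dyaw, x0, y0 = sol
    return (dx, dy, dyaw), (x0, y0)
```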
  • the foregoing is one specific implementation in which the terminal calculates the value of the laser external parameter of the laser scanning device according to the offset poses of the at least two frames of point cloud data so as to calibrate the laser scanning device.
  • the above specific implementation manner may also be replaced by other implementation manners.
  • the specific implementation manner described above actually determines the value of the external laser parameter by establishing an observation equation between the offset pose and the offset position, the yaw angle, and the system deviation.
  • the terminal may also pre-establish and store the observation equation, or pre-write and store a program instruction equivalent to the observation equation; the terminal then directly obtains the observation equation to determine the value of the laser external parameter, or directly obtains and executes the program instruction to determine the value of the laser external parameter.
  • the laser scanning device in the vehicle is calibrated with the value of the laser external parameter, and the navigation system is calibrated with the determined value of the system deviation, so that the vehicle can travel safely by combining the point cloud data provided by the calibrated laser scanning device with the map data provided by the calibrated navigation system.
  • in the embodiment of the present application, the terminal may acquire, according to at least two frames of point cloud data obtained by scanning the target area with the laser scanning device, the first coordinate of the feature element in each frame of point cloud data, the first coordinate being the coordinates of the feature element in the laser coordinate system; the terminal determines the second coordinate, in the vehicle coordinate system, of the feature element in each frame of point cloud data directly based on the map data of the target area; and the subsequent processes are performed with the first coordinate and the second coordinate. The processes of manually establishing a calibration field and of manual measurement are omitted, improving the efficiency of determining the first coordinate and the second coordinate and thereby the efficiency of calibrating the laser scanning device.
  • further, the terminal determines the offset pose of each frame of point cloud data according to the first coordinate and the second coordinate of the feature element, and then calculates the value of the laser external parameter of the laser scanning device based on the offset poses of the at least two frames of point cloud data to calibrate the laser scanning device in the vehicle. Since the terminal calculates the value of the laser external parameter from multiple frames of point cloud data, the interference of random noise in each frame of point cloud data is reduced and the error is decreased, improving the accuracy of determining the laser external parameter.
  • FIG. 6 is a schematic structural diagram of an apparatus for calibrating a laser scanning device according to an embodiment of the present application.
  • the apparatus includes: an obtaining module 601, a first determining module 602, a second determining module 603, and a calculating module 604.
  • the acquiring module 601 is configured to acquire, according to at least two frames of point cloud data obtained by scanning the target area with the laser scanning device, a first coordinate of the feature element in each frame of point cloud data, the first coordinate being the coordinates of the feature element in the laser coordinate system;
  • the first determining module 602 is configured to determine, according to the map data of the target area of the vehicle, a second coordinate, in the vehicle coordinate system, of the feature element in each frame of point cloud data;
  • the second determining module 603 is configured to determine, for each frame of point cloud data, the offset pose of that frame of point cloud data according to the first coordinate and the second coordinate of the feature element;
  • the calculating module 604 is configured to calculate a value of the laser external parameter of the laser scanning device according to the offset pose of the at least two frames of point cloud data to calibrate the laser scanning device.
  • the obtaining module 601 includes:
  • a scanning unit configured to scan the target area based on a preset scan route by using a laser scanning device, to obtain the at least two frames of point cloud data, where the target area is any area including the feature element;
  • an extracting unit configured to extract, for each frame of point cloud data, a first coordinate of the feature element in the laser coordinate system.
  • Optionally, the first determining module 602 includes:
  • a first acquiring unit configured to acquire the map data of the target area from a navigation system of the vehicle, the map data including latitude-longitude coordinates and elevation coordinates of the feature elements in a map coordinate system; and
  • a first determining unit configured to determine, for each frame of point cloud data, the second coordinates of the feature elements in the vehicle coordinate system according to the map data of the target area.
  • Optionally, the second determining module 603 includes:
  • a second acquiring unit configured to acquire an initial offset pose between the vehicle coordinate system and the laser coordinate system;
  • a second determining unit configured to determine, for each frame of point cloud data, third coordinates of the feature elements according to the initial offset pose and the second coordinates of the feature elements, the third coordinates being the coordinates of the feature elements in the laser coordinate system; and
  • a third determining unit configured to determine the offset pose of each frame of point cloud data according to the first coordinates and the third coordinates of the feature elements.
  • Optionally, the third determining unit includes:
  • a calculating subunit configured to calculate, according to the first coordinates and the third coordinates of the feature elements, a first distance between each first point element and an adjacent second point element, and a second distance between each first point element and an adjacent line element, the first point element being a point element among the feature elements corresponding to the first coordinates, the second point element being a point element among the feature elements corresponding to the third coordinates, and the line element being a line element among the feature elements corresponding to the third coordinates; and
  • a determining subunit configured to determine the offset pose of each frame of point cloud data according to the first distance and the second distance.
  • Optionally, the extrinsic laser parameters of the laser scanning device include an offset position and a yaw angle between the vehicle coordinate system and the laser coordinate system, and the calculating module 604 includes:
  • an establishing unit configured to establish observation equations between the offset poses of the at least two frames of point cloud data and the offset position, the yaw angle, and a system deviation, the system deviation being the deviation between the vehicle's navigation system and the map data;
  • a third acquiring unit configured to obtain, for each frame of point cloud data, a heading angle of the vehicle corresponding to that frame; and
  • a calculating unit configured to calculate the value of the offset position and the value of the yaw angle in the observation equations according to the heading angle and the offset pose of each frame of point cloud data.
  • In the embodiments of this application, the terminal can acquire, based on at least two frames of point cloud data obtained by scanning the target area with the laser scanning device, the first coordinates of the feature elements in each frame of point cloud data, the first coordinates being the coordinates of the feature elements in the laser coordinate system; and the terminal can determine the second coordinates of the feature elements in the vehicle coordinate system directly from the vehicle's map data of the target area, so that the subsequent steps are performed directly with the first and second coordinates. This omits the manual construction of a calibration field and the manual measurements, improving the efficiency of determining the first and second coordinates and thereby the efficiency of calibrating the laser scanning device.
  • Moreover, for each frame of point cloud data, the terminal determines the offset pose of that frame according to the first and second coordinates of the feature elements, and then calculates, from the offset poses of the at least two frames of point cloud data, the values of the extrinsic laser parameters of the laser scanning device so as to calibrate the laser scanning device in the vehicle. Because the terminal calculates these values from multiple frames of point cloud data, the interference of random noise in each frame is reduced, which reduces the error and improves the accuracy of the determined extrinsic laser parameters.
  • It should be noted that when the apparatus for calibrating a laser scanning device provided in the foregoing embodiments performs calibration, the division into the functional modules above is merely used as an example for description; in practical applications, the functions may be assigned to different functional modules as required.
  • That is, the internal structure of the terminal is divided into different functional modules to complete all or some of the functions described above.
  • In addition, the apparatus for calibrating a laser scanning device provided in the foregoing embodiments belongs to the same concept as the embodiments of the method for calibrating a laser scanning device; for the specific implementation process, refer to the method embodiments, and details are not described herein again.
  • FIG. 7 is a schematic structural diagram of a computer device 700 according to an embodiment of the present application.
  • Referring to FIG. 7, the computer device 700 includes a processor and a memory, may further include a communication interface and a communication bus, and may further include an input/output interface and a display device, the processor, the memory, the input/output interface, the display device, and the communication interface communicating with one another through the communication bus.
  • The memory includes a non-volatile storage medium and an internal memory.
  • The non-volatile storage medium of the computer device stores an operating system and may also store computer-readable instructions that, when executed by the processor, cause the processor to implement the method for calibrating a laser scanning device.
  • The internal memory may also store computer-readable instructions that, when executed by the processor, cause the processor to perform the method for calibrating a laser scanning device.
  • The communication bus is a circuit that connects the described elements and implements transmission between them.
  • For example, the processor receives commands from the other elements through the communication bus, decrypts the received commands, and performs computation or data processing according to the decrypted commands.
  • The memory may include program modules, for example a kernel, middleware, an application programming interface (API), and applications.
  • A program module may be composed of software, firmware, or hardware, or of at least two of them.
  • The input/output interface forwards commands or data entered by a user through an input/output device (for example, a sensor, a keyboard, or a touchscreen).
  • The display device displays various kinds of information to the user.
  • The communication interface connects the computer device 700 to other network devices, user equipment, and networks.
  • For example, the communication interface may be connected to a network in a wired or wireless manner to connect to other external network devices or user equipment.
  • The wireless communication may include at least one of the following: Wireless Fidelity (WiFi), Bluetooth (BT), Near Field Communication (NFC), Global Positioning System (GPS), and cellular communication (for example, LTE, LTE-A, CDMA, WCDMA, UMTS, WiBro, or GSM).
  • The wired communication may include at least one of the following: Universal Serial Bus (USB), High-Definition Multimedia Interface (HDMI), Recommended Standard 232 (RS-232), and plain old telephone service (POTS).
  • The network may be a telecommunications network or a communications network.
  • The communications network may be a computer network, the Internet, the Internet of Things, or a telephone network.
  • The computer device 700 may be connected to the network through the communication interface, and the protocol by which the computer device 700 communicates with other network devices may be supported by at least one of an application, an application programming interface (API), middleware, a kernel, and the communication interface.
  • In an exemplary embodiment, a computer-readable storage medium storing a computer program is further provided, for example a memory storing computer-readable instructions that, when executed by a processor, implement the method for calibrating a laser scanning device in the foregoing embodiments.
  • For example, the computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, or an optical data storage device.
  • A person of ordinary skill in the art may understand that all or some of the steps of the foregoing embodiments may be implemented by hardware, or by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium.
  • The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Manufacturing & Machinery (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Navigation (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A method for calibrating a laser scanning device (101), comprising: acquiring, based on at least two frames of point cloud data obtained by scanning a target area with the laser scanning device (101), first coordinates of feature elements in each frame of point cloud data, the first coordinates being the coordinates of the feature elements in a laser coordinate system; determining, based on map data of the target area held by a vehicle, second coordinates of the feature elements in each frame of point cloud data in a vehicle coordinate system; for each frame of point cloud data, determining an offset pose of that frame according to the first coordinates and the second coordinates of the feature elements; and calculating, according to the offset poses of the at least two frames of point cloud data, values of extrinsic laser parameters of the laser scanning device (101), so as to calibrate the laser scanning device (101).

Description

Method, apparatus, and device for calibrating a laser scanning device, and storage medium
This application claims priority to Chinese Patent Application No. 201710731253.X, entitled "Method, apparatus, and device for calibrating a laser scanning device, and storage medium", filed with the China Patent Office on August 23, 2017, which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of driverless-driving technologies, and in particular, to a method, apparatus, and device for calibrating a laser scanning device, and a storage medium.
Background
With the development of driverless-driving technology, the navigation system in a driverless vehicle can provide a navigation path so that the vehicle drives along that path. At the same time, the vehicle can scan the surrounding environment in real time with a laser scanning device to obtain a three-dimensional image of the surroundings, so that it can drive with reference to both the surroundings and the navigation path, avoid obstacles in the surroundings, and further ensure driving safety. However, there is a certain position offset and angle offset between the laser coordinate system in which the three-dimensional image lies and the vehicle coordinate system in which the navigation path lies; therefore, before the laser scanning device is used, it must be calibrated.
In the related art, the process of calibrating a laser scanning device is as follows. Markers are usually built in a calibration field, and a number of clearly positioned calibration points are set on the markers, thereby establishing a calibration field containing multiple calibration points. A vehicle coordinate system with the driverless vehicle as the coordinate origin is established in the calibration field, and the coordinates of each calibration point in the vehicle coordinate system are measured manually by traditional surveying. Then a laser coordinate system with the laser scanning device as the origin is established, and the calibration field is scanned by the laser scanning device to obtain one frame of point cloud data, which includes the set of surface points of the markers in the calibration field and the coordinates of each point of that set in the laser coordinate system. Based on this frame of point cloud data, multiple calibration points are manually selected from the set of surface points, and the coordinates of each calibration point in the laser coordinate system are obtained. From the coordinates of each calibration point in the vehicle coordinate system and its coordinates in the laser coordinate system, the offset pose of the laser coordinate system relative to the vehicle coordinate system is computed by the SVD (Singular Value Decomposition) algorithm; the offset pose includes the value of the offset position and the value of the yaw angle of the laser coordinate system relative to the vehicle coordinate system, and is used directly as the values of the extrinsic laser parameters of the laser scanning device. Here, the yaw angle is the angle between the x-axis of the laser coordinate system (straight ahead of the laser scanning device) and the x-axis of the vehicle coordinate system (straight ahead of the driverless vehicle). The laser scanning device is calibrated with these extrinsic-parameter values.
In implementing the embodiments of this application, the inventor found that the related art has at least the following problem:
The above method requires manually building a calibration field, and afterwards the coordinates of each calibration point in the vehicle coordinate system and in the laser coordinate system can only be determined by manual measurement or manual identification; as a result, the above method for calibrating a laser scanning device is inefficient.
Summary
According to various embodiments of this application, a method, an apparatus, and a device for calibrating a laser scanning device, and a storage medium are provided.
A method for calibrating a laser scanning device, the method including:
acquiring, based on at least two frames of point cloud data obtained by scanning a target area with a laser scanning device, first coordinates of feature elements in each frame of point cloud data, the first coordinates being the coordinates of the feature elements in a laser coordinate system;
determining, based on map data of the target area, second coordinates of the feature elements in each frame of point cloud data in a vehicle coordinate system;
for each frame of point cloud data, determining an offset pose of that frame of point cloud data according to the first coordinates and the second coordinates of the feature elements; and
calculating, according to the offset poses of the at least two frames of point cloud data, values of extrinsic laser parameters of the laser scanning device, so as to calibrate the laser scanning device.
An apparatus for calibrating a laser scanning device, the apparatus including:
an obtaining module configured to acquire, based on at least two frames of point cloud data obtained by scanning a target area with a laser scanning device, first coordinates of feature elements in each frame of point cloud data, the first coordinates being the coordinates of the feature elements in a laser coordinate system;
a first determining module configured to determine, based on map data of the target area, second coordinates of the feature elements in each frame of point cloud data in a vehicle coordinate system;
a second determining module configured to determine, for each frame of point cloud data, an offset pose of that frame of point cloud data according to the first coordinates and the second coordinates of the feature elements; and
a calculating module configured to calculate, according to the offset poses of the at least two frames of point cloud data, values of extrinsic laser parameters of the laser scanning device, so as to calibrate the laser scanning device.
A computer device, including a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the following steps:
acquiring, based on at least two frames of point cloud data obtained by scanning a target area with a laser scanning device, first coordinates of feature elements in each frame of point cloud data, the first coordinates being the coordinates of the feature elements in a laser coordinate system;
determining, based on map data of the target area, second coordinates of the feature elements in each frame of point cloud data in a vehicle coordinate system;
for each frame of point cloud data, determining an offset pose of that frame of point cloud data according to the first coordinates and the second coordinates of the feature elements; and
calculating, according to the offset poses of the at least two frames of point cloud data, values of extrinsic laser parameters of the laser scanning device, so as to calibrate the laser scanning device.
A non-volatile computer-readable storage medium storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps:
acquiring, based on at least two frames of point cloud data obtained by scanning a target area with a laser scanning device, first coordinates of feature elements in each frame of point cloud data, the first coordinates being the coordinates of the feature elements in a laser coordinate system;
determining, based on map data of the target area, second coordinates of the feature elements in each frame of point cloud data in a vehicle coordinate system;
for each frame of point cloud data, determining an offset pose of that frame of point cloud data according to the first coordinates and the second coordinates of the feature elements; and
calculating, according to the offset poses of the at least two frames of point cloud data, values of extrinsic laser parameters of the laser scanning device, so as to calibrate the laser scanning device.
Details of one or more embodiments of this application are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of this application will become apparent from the specification, the accompanying drawings, and the claims.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the following briefly introduces the accompanying drawings needed for describing the embodiments. Apparently, the accompanying drawings in the following description show merely some embodiments of this application, and a person of ordinary skill in the art may still derive other drawings from them without creative efforts.
FIG. 1 is a schematic diagram of a driving system according to an embodiment of this application;
FIG. 2 is a flowchart of a method for calibrating a laser scanning device according to an embodiment of this application;
FIG. 3 is a schematic diagram of a preset scan route according to an embodiment of this application;
FIG. 4 is a schematic diagram of a first distance according to an embodiment of this application;
FIG. 5 is a schematic diagram of a second distance according to an embodiment of this application;
FIG. 6 is a schematic structural diagram of an apparatus for calibrating a laser scanning device according to an embodiment of this application;
FIG. 7 is a schematic structural diagram of a computer device according to an embodiment of this application.
Detailed Description
The technical solutions in the embodiments of this application are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are merely some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative efforts shall fall within the protection scope of this application.
The embodiments of this application disclose a method for calibrating a laser scanning device. The laser scanning device may be installed in any craft that needs navigation; for example, it may be installed in a driverless vehicle, an unmanned aerial vehicle, a robot that needs navigation, or the like, which is not specifically limited in the embodiments of this application. The embodiments of this application are described using only a laser scanning device installed in a vehicle as an example.
FIG. 1 is a schematic diagram of a driving system according to an embodiment of this application. The driving system includes a laser scanning device 101 and a navigation system 102.
Map data is pre-stored in the navigation system 102 and includes at least the position coordinates, in a map coordinate system, of each feature element in a target area. The navigation system 102 includes a GPS (Global Positioning System) and an IMU (Inertial Measurement Unit). Through the GPS, the navigation system 102 can receive satellite signals and locate, in real time, the current position coordinates of the vehicle in the map coordinate system. From the vehicle's current position coordinates and its destination coordinates, the navigation system 102 can determine a navigation path in the map data and convert the path coordinates of that navigation path in the map coordinate system into the vehicle coordinate system via the geocentric coordinate system and the station-center (topocentric) coordinate system, so that the vehicle drives along the navigation path in the vehicle coordinate system. The IMU integrates an accelerometer and a gyroscope; while the vehicle is driving, the navigation system 102 can also obtain, in real time through the IMU, the vehicle's heading angle and speed in the vehicle coordinate system, thereby monitoring the driving state in real time.
The driving system further includes the laser scanning device 101. While the vehicle is driving, it can scan the surrounding environment in real time with the laser scanning device 101 to obtain multiple frames of point cloud data of the surroundings. Each frame of point cloud data includes the position coordinates, in the laser coordinate system, of every obstacle in the surroundings; obstacles include, without limitation, fixed feature elements as well as moving vehicles, pedestrians, and the like. Based on the extrinsic laser parameters of the laser scanning device 101, the position coordinates of each obstacle in the surroundings are converted from the laser coordinate system into the vehicle coordinate system; the vehicle can then drive with reference to both the navigation path in the vehicle coordinate system and the obstacles in the surroundings, further ensuring driving safety.
The terms appearing in the above driving system, together with some of the coordinate systems and parameters involved, are introduced below.
The map data may be map data of the area to be driven that is pre-set and stored according to user needs. Further, the map data may be high-precision map data, that is, a next-generation navigation map with centimeter-level positioning accuracy that includes road-facility information (such as traffic lights, traffic-enforcement cameras, and traffic signs) and dynamic traffic information, and that enables more accurate navigation.
The vehicle may be a driverless vehicle, which obtains the navigation path through the navigation system 102 and the multiple frames of point cloud data of the surroundings through the laser scanning device 101, so that it can drive with reference to both the navigation path in the vehicle coordinate system and the obstacles in the surroundings, further ensuring that it drives safely.
The map coordinate system is generally WGS84 (World Geodetic System for 1984); the position coordinates of each feature element are its latitude-longitude coordinates and elevation coordinate in the WGS84 coordinate system.
The vehicle coordinate system takes the vehicle as the coordinate origin, the direction straight ahead of the vehicle as the positive x-axis, the horizontal direction to the left and perpendicular to the x-axis as the positive y-axis, and the vertically upward direction as the positive z-axis.
The laser coordinate system takes the laser scanning device as the coordinate origin, the direction straight ahead of the laser scanning device as the positive x-axis, the horizontal direction to the left and perpendicular to the x-axis as the positive y-axis, and the vertically upward direction as the positive z-axis.
The geocentric coordinate system is a spatial rectangular coordinate system with the earth's center of mass as origin o, the eastward direction along the intersection line of the prime meridian plane and the equatorial plane as the positive x-axis, the northward direction along the earth's rotation axis as the positive z-axis, and the positive y-axis perpendicular to the xoz plane and determined by the right-hand rule.
The station-center (topocentric) coordinate system is a spatial rectangular coordinate system with the station center as the origin, the eastward direction along the semi-major axis of the earth ellipsoid (east) as the positive x-axis, the northward direction along the semi-minor axis of the earth ellipsoid (north) as the positive y-axis, and the upward ellipsoid normal (up) as the positive z-axis.
The extrinsic laser parameters of the laser scanning device are the offset position and the yaw angle between the laser coordinate system and the vehicle coordinate system. The offset position is the offset distance of the laser coordinate system relative to the vehicle coordinate system along the x-axis and y-axis directions; the yaw angle is the angle between the x-axis of the laser coordinate system and the x-axis of the vehicle coordinate system, that is, the angle between the direction straight ahead of the laser scanning device and the direction straight ahead of the vehicle. This application also involves the heading angle of the vehicle, which is the angle between the direction straight ahead of the vehicle and due north.
FIG. 2 is a flowchart of a method for calibrating a laser scanning device according to an embodiment of this application. The method is performed by a terminal, which may be an in-vehicle terminal or any terminal with data-processing capability. Referring to FIG. 2, the method includes the following steps.
201. The terminal scans, through the laser scanning device and along a preset scan route, a target area to obtain at least two frames of point cloud data, the target area being any area that includes feature elements.
The laser scanning device is installed in the vehicle and may be mounted on the front or the side of the vehicle to scan the environment around it. The preset scan route may be a driving route designed for scanning the target area.
In this embodiment of this application, this step may be as follows. The terminal acquires the preset scan route, uses it as the vehicle's driving route, and controls the vehicle to drive along it. While the vehicle drives along the preset scan route, the terminal controls the laser scanning device to scan the target area once every preset interval, obtaining one frame of point cloud data of the target area; over the whole drive, the terminal controls the laser scanning device to scan at least twice, obtaining at least two frames of point cloud data of the target area. Each frame of point cloud data includes, without limitation, the set of surface points of every obstacle in the target area and the position coordinates of each surface point in the laser coordinate system. The preset interval can be set and changed as needed and is not specifically limited in this embodiment; for example, it may be 100 milliseconds or 5 seconds. A minimal acquisition sketch follows.
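The acquisition loop of step 201 can be sketched as follows; `scanner.scan()` and the frame count are hypothetical stand-ins for a real device driver, used only to illustrate the scan-every-preset-interval control.

```python
import time

def collect_frames(scanner, num_frames=2, interval_s=0.1):
    """While the vehicle follows the preset scan route, trigger one scan of the
    target area every preset interval; each frame holds the obstacles' surface
    points and their coordinates in the laser coordinate system."""
    frames = []
    for _ in range(num_frames):
        frames.append(scanner.scan())  # hypothetical driver call, one frame per scan
        time.sleep(interval_s)         # preset interval, e.g., 100 ms
    return frames
```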
The feature elements include, without limitation, fixed curbs, road guardrails, pole-like features, traffic signs, and the like in the target area. Because feature elements are objects with fixed positions in the target area, they can serve as the basic elements for calibration: the laser scanning device is ultimately calibrated by determining the different coordinates of these feature elements in the respective coordinate systems.
In this embodiment, the target area may be any area that includes feature elements. To avoid interference from environmental noise, the terminal may choose an open area with few pedestrians as the target area. The multiple frames of point cloud data obtained by scanning such a target area contain less unnecessary noise data such as other vehicles, which reduces environmental-noise interference and improves the accuracy of subsequently extracting the first coordinates of the feature elements from the point cloud data.
In this embodiment, the preset scan route may be a scan route determined based on the target area; generally, the determined preset scan route is a ring route around the target area. The inventor recognized that in actual operation the vehicle's driving direction may be any of east, south, west, north, and so on; the terminal can therefore control the vehicle to drive along a ring road, obtaining point cloud data of the target area in every driving direction. Moreover, because a vehicle must obey traffic rules and drive on one side of the road, each frame of point cloud data the terminal collects is biased toward the left or the right side. The terminal can therefore control the vehicle to drive the ring road in both directions, that is, one lap clockwise and then one lap counterclockwise along the same ring road, so that scanning is performed both when the vehicle drives on the left side and when it drives on the right side, improving the accuracy of subsequently determining the values of the extrinsic laser parameters from the offset poses of the frames.
As shown in FIG. 3, the target area is area A, and the preset scan route may be a ring route around area A; that is, the terminal controls the vehicle to drive one lap clockwise along the ring road from starting point B back to B, and then one lap counterclockwise along the same ring road from B.
202. For each frame of point cloud data, the terminal extracts the first coordinates of the feature elements in the laser coordinate system.
In this embodiment, because each frame of point cloud data includes the set of surface points of every obstacle in the target area and the position coordinates of each surface point in the laser coordinate system, the terminal further needs to extract from each frame the first coordinates of the feature elements, the first coordinates being the coordinates of the feature elements in the laser coordinate system.
For each frame of point cloud data, the terminal extracts the point sets corresponding to the feature elements from the point cloud data with a preset extraction algorithm. For each feature element, the set of position coordinates, in the laser coordinate system, of the point set corresponding to that feature element is taken as its first coordinates; the first coordinates of the feature elements included in each frame are thereby obtained. The preset extraction algorithm can be set and changed as needed and is not specifically limited in this embodiment; for example, it may be a segmentation-based extraction algorithm or a detection-based extraction algorithm.
It should be noted that steps 201-202 above are in fact a specific implementation in which the terminal acquires the first coordinates of the feature elements in each frame of point cloud data based on at least two frames obtained by scanning the target area with the laser scanning device. This implementation may be replaced by others: it acquires the point cloud data by real-time scanning, whereas in a practical scenario the at least two frames of point cloud data of the target area may also be obtained from historical data produced by earlier scans, which is not specifically limited in this embodiment.
203. The terminal acquires the map data of the target area from the navigation system, the map data including the latitude-longitude coordinates and elevation coordinates of the feature elements in the map coordinate system.
In this embodiment, the map data of the target area is stored in the vehicle's navigation system, and the terminal can acquire it from the navigation system according to the target area's region information. Of course, the navigation system may also store map data of any area other than the target area. The map data here is in fact high-precision map data of the target area, so it includes at least the position coordinates, in the map coordinate system, of the feature elements in the target area. The region information may be the target area's region identifier or its latitude-longitude range; for example, the region identifier may be the area's name.
In this embodiment, the terminal needs to obtain the difference of the target area between the vehicle coordinate system and the laser coordinate system. Therefore, after acquiring the first coordinates of the feature elements in the target area, the terminal also needs to obtain the position coordinates of these feature elements in the map coordinate system, so that it can subsequently determine the second coordinates of the feature elements in the vehicle coordinate system.
Because the terminal can locate the vehicle's current position coordinates in the map coordinate system through the navigation system, for each frame of point cloud data the terminal, when acquiring that frame, also needs to obtain from the map data in the navigation system the position coordinates, in the map coordinate system, of the feature elements included in that frame, and convert these position coordinates into second coordinates in the vehicle coordinate system.
In one possible implementation, the region information may be a region identifier, and the terminal may store correspondences between region identifiers and map data. Accordingly, the step of acquiring the map data of the target area from the navigation system may be: the terminal acquires the region identifier of the target area and, according to that identifier, retrieves the map data corresponding to the target area from the correspondences between region identifiers and map data.
In another possible implementation, the region information may be a latitude-longitude range, and the terminal stores correspondences between latitude-longitude ranges and map data. Accordingly, the step may be: the terminal acquires the latitude-longitude range of the target area and, according to that range, retrieves the map data corresponding to the target area from the correspondences between latitude-longitude ranges and map data.
204. For each frame of point cloud data, the terminal determines, according to the map data of the target area, the second coordinates of the feature elements in the vehicle coordinate system.
Because the vehicle coordinate system, whose origin is the vehicle, moves along with the vehicle while it drives, the terminal must determine, for each frame, the second coordinates of that frame's feature elements in the vehicle coordinate system. For each frame of point cloud data, while acquiring the frame, the terminal obtains from the map data, according to the feature elements included in the frame, the latitude-longitude coordinates and elevation coordinates of those feature elements in the map coordinate system, and then determines from them the second coordinates of the feature elements in the vehicle coordinate system.
The process by which the terminal determines the second coordinates from the latitude-longitude and elevation coordinates may be as follows. The terminal first converts the feature elements' latitude-longitude and elevation coordinates in the map coordinate system into position coordinates in the geocentric coordinate system whose origin is the earth's center of mass, and then converts those geocentric position coordinates into position coordinates in the station-center coordinate system. The terminal obtains the vehicle's heading angle through the IMU in the navigation system and, according to the heading angle, converts the feature elements' position coordinates in the station-center coordinate system into second coordinates in the vehicle coordinate system.
In this embodiment, the station-center coordinate system and the vehicle coordinate system share the same coordinate origin and differ only in the positive directions of their x- and y-axes; the angle between the positive x-axis of the vehicle coordinate system and the positive y-axis of the station-center coordinate system equals the vehicle's heading angle. The terminal can therefore first convert the feature elements from the map coordinate system into the station-center coordinate system via the geocentric coordinate system, and then obtain the feature elements' second coordinates according to the vehicle's heading angle.
In this embodiment, the map data acquired through the navigation system contains a system deviation, namely the displacement deviation between a feature element's position coordinates in the map coordinate system as recorded in the map data and its actual position coordinates in the map coordinate system. To determine the second coordinates more accurately, the terminal must also account for the effect of this system deviation. Specifically, the step of converting the station-center position coordinates into second coordinates according to the heading angle may be: the terminal obtains an initial system deviation of the map data, adjusts the position coordinates in the station-center coordinate system according to the initial system deviation, and then converts the adjusted position coordinates into second coordinates in the vehicle coordinate system according to the heading angle.
The adjustment of the position coordinates can be expressed as follows. Denoting the initial system deviation by (x′₀, y′₀), the terminal shifts the feature elements' position coordinates in the station-center coordinate system by x′₀ unit distances along the positive x-axis and by y′₀ unit distances along the positive y-axis. The whole coordinate chain is sketched in code below.
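The coordinate chain of steps 203-204 (map coordinates to geocentric, geocentric to station-center, station-center to vehicle, with the system-deviation adjustment) can be sketched in Python. This is a minimal sketch under stated assumptions: the WGS84 constants are standard values, while the function names, the east/north/up axis order, and the heading convention (angle measured clockwise from due north to the vehicle's forward direction) are illustrative choices, not taken from the patent.

```python
import math

# WGS84 ellipsoid constants (standard values)
A = 6378137.0          # semi-major axis, meters
E2 = 6.69437999014e-3  # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Map coordinates (latitude, longitude, elevation) -> geocentric (ECEF) x, y, z."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)  # prime-vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + h) * math.sin(lat)
    return x, y, z

def ecef_to_enu(p, origin_geodetic):
    """Geocentric coordinates -> station-center (east, north, up) coordinates
    relative to the vehicle's current position (the station origin)."""
    lat = math.radians(origin_geodetic[0])
    lon = math.radians(origin_geodetic[1])
    ox, oy, oz = geodetic_to_ecef(*origin_geodetic)
    dx, dy, dz = p[0] - ox, p[1] - oy, p[2] - oz
    e = -math.sin(lon) * dx + math.cos(lon) * dy
    n = (-math.sin(lat) * math.cos(lon) * dx
         - math.sin(lat) * math.sin(lon) * dy + math.cos(lat) * dz)
    u = (math.cos(lat) * math.cos(lon) * dx
         + math.cos(lat) * math.sin(lon) * dy + math.sin(lat) * dz)
    return e, n, u

def enu_to_vehicle(e, n, yaw_deg, sys_dev=(0.0, 0.0)):
    """Station-center -> vehicle coordinates: first shift by the initial system
    deviation (x0', y0'), then rotate by the heading angle (vehicle x = forward)."""
    e, n = e + sys_dev[0], n + sys_dev[1]         # adjust for the map system deviation
    yaw = math.radians(yaw_deg)                   # angle between forward and due north
    x_v = e * math.sin(yaw) + n * math.cos(yaw)   # projection onto vehicle x (forward)
    y_v = -e * math.cos(yaw) + n * math.sin(yaw)  # projection onto vehicle y (left)
    return x_v, y_v
```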
It should be noted that steps 203-204 above are in fact a specific implementation in which the terminal determines, based on the map data of the target area, the second coordinates of the feature elements in each frame of point cloud data in the vehicle coordinate system. This implementation may be replaced by others: here the second coordinates are obtained by fetching the target area's map data from the navigation system, whereas in actual operation the terminal may also fetch the map data from the navigation system in advance, store it locally, and then determine the second coordinates from the stored map data, which is not specifically limited in this embodiment.
In this embodiment, the offset pose of each frame of point cloud data is the offset pose between the laser coordinate system and the vehicle coordinate system at the time the terminal acquires that frame. As the vehicle moves, both the laser coordinate system, whose origin is the laser scanner, and the vehicle coordinate system, whose origin is the vehicle, move along with it, so the offset poses of different frames may or may not be the same. The terminal therefore determines the offset pose of each frame through the following steps 205-207.
205. The terminal acquires the initial offset pose between the vehicle coordinate system and the laser coordinate system.
In this embodiment, an offset pose includes the value of the offset position and the value of the yaw angle between the vehicle coordinate system and the laser coordinate system. The offset position may be represented by the position coordinates of the laser coordinate system's origin in the vehicle coordinate system, and the yaw angle by the angle between the x-axis of the laser coordinate system and the x-axis of the vehicle coordinate system.
In this embodiment, step 205 first determines the initial offset pose of each frame of point cloud data, and steps 206-207 then determine the offset pose of each frame. The initial offset pose includes the value of the initial offset position and the value of the initial yaw angle.
In this step, the terminal may acquire and store the initial offset pose between the vehicle coordinate system and the laser coordinate system in advance by measurement, and use it as the initial offset pose of each frame of point cloud data. Specifically, the terminal may measure, with a measuring tool such as a tape measure, the coordinates of the laser scanning device in the vehicle coordinate system and the angle between the x-axis of the laser coordinate system and the x-axis of the vehicle coordinate system, taking the measured coordinates as the value of the initial offset position and the measured angle as the value of the initial yaw angle.
206. For each frame of point cloud data, the terminal determines, according to the initial offset pose and the second coordinates of the feature elements, third coordinates of the feature elements, the third coordinates being the coordinates of the feature elements in the laser coordinate system.
This step may be as follows. For each frame of point cloud data, the terminal shifts the second coordinates of the feature elements according to the value of the initial offset position in that frame's initial offset pose, and then rotates the position-shifted second coordinates according to the value of the initial yaw angle in that frame's initial offset pose. The terminal takes the position coordinates after the position shift and the angular rotation as the third coordinates of the feature elements.
Denoting the value of the initial offset position by (dx″, dy″) and the initial yaw angle by dyaw″, the terminal shifts the second coordinates of the feature elements by dx″ unit distances along the positive x-axis and dy″ unit distances along the positive y-axis, and rotates the shifted second coordinates counterclockwise by dyaw″ unit angles, as sketched below.
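Step 206 is a plain two-dimensional rigid transform. A minimal sketch, assuming the translate-then-rotate-counterclockwise convention stated above (the function name and degree units are illustrative):

```python
import math

def second_to_third(points_xy, dx2, dy2, dyaw2_deg):
    """Apply the initial offset pose to the second coordinates (vehicle frame)
    to obtain the third coordinates (laser frame): translate by (dx'', dy''),
    then rotate the shifted points counterclockwise by dyaw''."""
    c, s = math.cos(math.radians(dyaw2_deg)), math.sin(math.radians(dyaw2_deg))
    third = []
    for x, y in points_xy:
        xs, ys = x + dx2, y + dy2                         # offset position
        third.append((c * xs - s * ys, s * xs + c * ys))  # angular offset
    return third
```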
207. The terminal determines the offset pose of each frame of point cloud data according to the first coordinates and the third coordinates of the feature elements.
In this embodiment, each frame of point cloud data corresponds to one offset pose, so the terminal can first determine, through step 207, the offset pose corresponding to each frame, and can subsequently determine, from the offset poses corresponding to the multiple frames, one offset pose that reflects the general pattern.
This step may be implemented through the following steps 2071-2072.
2071. The terminal calculates, according to the first coordinates and the third coordinates of the feature elements, a first distance between each first point element and its adjacent second point element, and a second distance between each first point element and its adjacent line element.
In this embodiment, in each frame of point cloud data every feature element is composed of point elements and line elements. A first point element is a point element among the feature elements corresponding to the first coordinates; a second point element is a point element among the feature elements corresponding to the third coordinates; and a line element is a line element among the feature elements corresponding to the third coordinates.
For each frame of point cloud data, the distance between a first point element and its adjacent element may be calculated in either of the following ways.
In the first way, the first distance between a first point element of the feature elements in each frame and a second point element is calculated and used as a reference distance for the subsequent matching of the first coordinates against the third coordinates.
In this step, the terminal calculates the first distance between each first point element and its adjacent second point element from the position coordinates of the first point element in the laser coordinate system and the position coordinates of the adjacent second point element in the laser coordinate system.
It should be noted that the second point element adjacent to a first point element is, taking the first point element as the center, the second point element closest to it among the multiple second point elements.
As shown in FIG. 4, point C is a first point element and point D is the second point element adjacent to point C; the terminal can calculate the first distance between points C and D.
In the second way, the second distance between a first point element of the feature elements in each frame and a line element is calculated and used as a reference distance for the subsequent matching of the first coordinates against the third coordinates.
The second distance between a first point element and its adjacent line element is the normal distance from the first point element to that line element. In this step, the terminal therefore calculates the normal distance between each first point element and its adjacent line element from the position coordinates of the first point element in the laser coordinate system and the position coordinates of the adjacent line element in the laser coordinate system, and takes this normal distance as the second distance.
It should be noted that the line element adjacent to a first point element is, taking the first point element as the center, the line element closest to it among the multiple line elements.
As shown in FIG. 5, point C is a first point element and line L is the line element adjacent to point C; the terminal can calculate the normal distance between point C and line L, thereby obtaining the second distance.
In this step, the terminal determines multiple first distances from the position coordinates of the multiple first point elements and of the multiple second point elements, and multiple second distances from the position coordinates of the multiple first point elements and of the multiple line elements. Both distances are sketched in code below.
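The two reference distances of step 2071 can be sketched as follows; the brute-force nearest-neighbor search and the representation of a line element by two of its points are assumptions made for illustration.

```python
import math

def first_distance(p, second_points):
    """Distance from a first point element p to its nearest second point element."""
    return min(math.dist(p, q) for q in second_points)

def point_to_line_distance(p, a, b):
    """Normal (perpendicular) distance from point p to the line through a and b,
    a line element taken from the third coordinates."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    ux, uy = bx - ax, by - ay              # direction of the line element
    norm = math.hypot(ux, uy)
    return abs(ux * (py - ay) - uy * (px - ax)) / norm

def second_distance(p, line_elements):
    """Distance from p to its nearest adjacent line element."""
    return min(point_to_line_distance(p, a, b) for a, b in line_elements)
```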
2072. The terminal determines the offset pose of each frame of point cloud data according to the first distances and the second distances.
In this embodiment, the terminal may determine the offset pose of each frame by iteratively matching the first coordinates and the third coordinates of the feature elements multiple times.
The process includes the following steps a-g.
Step a: For each frame of point cloud data, the terminal selects, according to the first distances and the second distances, the first point elements whose first distance is smaller than a first preset threshold together with the second point elements corresponding to them, and the first point elements whose second distance is smaller than the first preset threshold together with the line elements corresponding to them.
The second point element corresponding to a first point element is the second point element that was adjacent to it when the terminal calculated the first distance; the line element corresponding to a first point element is the line element that was adjacent to it when the terminal calculated the second distance.
Step b: From the selected first point elements and second point elements, and the selected first point elements and line elements, and based on the mean-square-error expression between the first coordinates and the third coordinates, the terminal determines the offset matrix that minimizes the value of this mean square error, and takes it as the intermediate offset matrix between the first coordinates and the third coordinates.
Step c: The terminal updates the frame's initial offset matrix with the intermediate offset matrix between the first and third coordinates, and multiplies the updated initial offset matrix with the second coordinates to obtain fourth coordinates, thereby completing the first iteration of matching.
The step of updating the frame's initial offset matrix with the intermediate offset matrix may be: the terminal multiplies the intermediate offset matrix between the first and third coordinates by the frame's initial offset matrix to obtain the updated initial offset matrix.
It should be noted that step c is in fact the process of converting the second coordinates in the vehicle coordinate system into the laser coordinate system again; it is implemented in the same way as step 206 and is not repeated here.
Step d: The terminal calculates, from the first coordinates and the fourth coordinates of the feature elements, a third distance between each first point element and its adjacent second point element, and a fourth distance between each first point element and its adjacent line element.
Step d is in fact the process of recomputing the first distance and the second distance from the first coordinates and from the fourth coordinates converted back into the laser coordinate system; it is implemented in the same way as step 2071 and is not repeated here.
Step e: Through the implementations of steps a-c, the terminal determines the initial offset matrix updated once more, thereby completing the second iteration of matching.
Step f: Through the implementations of steps a-e, multiple iterations of matching are completed. During the iterations, when the minimum mean square error corresponding to an intermediate offset matrix is smaller than a second preset threshold, the terminal takes the initial offset matrix updated by that intermediate offset matrix as the frame's offset matrix. Alternatively, when the number of iterations reaches a third preset threshold, the terminal takes the initial offset matrix updated in the last iteration as the frame's offset matrix.
Step g: The terminal determines the frame's offset pose from the frame's offset matrix.
Step b may specifically be as follows. From the selected first point elements and their corresponding second point elements, and the selected first point elements and their corresponding line elements, the terminal takes, through the following Formula 1 (the mean-square-error expression), the offset matrix that minimizes the value of the mean square error as the intermediate offset matrix between the first coordinates and the third coordinates.
Formula 1:

$$E(X, Y) = \frac{1}{m} \sum_{i=1}^{m} \left\lVert x_i - M \, y_i \right\rVert^2$$

where X denotes the first coordinates of the feature elements; Y denotes the third coordinates of the feature elements; E(X, Y) is the mean square error between the first coordinates and the third coordinates; x_i is the i-th first point element among the first point elements whose first or second distance is not greater than the preset threshold; y_i is the second point element or line element corresponding to the i-th first point element; m is the number of first point elements whose first or second distance is not greater than the preset threshold; and M is the intermediate offset matrix between the first coordinates and the third coordinates.
In this embodiment, the intermediate offset matrix between the first coordinates and the third coordinates may be written as M; as a homogeneous two-dimensional rigid transform consistent with the definitions above, it takes the form

$$M = \begin{bmatrix} \cos(dyaw') & -\sin(dyaw') & dx' \\ \sin(dyaw') & \cos(dyaw') & dy' \\ 0 & 0 & 1 \end{bmatrix}$$

The intermediate offset matrix contains the value (dx′, dy′) of the offset position and the value dyaw′ of the yaw angle between the first coordinates and the third coordinates.
The first, second, and third preset thresholds can be set and changed as needed and are not specifically limited in this embodiment. For example, the first preset threshold may be 1 meter or 0.5 meter, the second preset threshold may be 0.1 or 0.3, and the third preset threshold may be 20 or 100. The whole iteration is sketched in code below.
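The iterative matching of steps a-g can be sketched as follows. This is a simplified, point-to-point-only sketch: it fits the matrix M of Formula 1 in closed form with the 2-D Kabsch/SVD method and omits the point-to-line terms, so it illustrates the structure of the iteration rather than reproducing the patent's exact procedure.

```python
import numpy as np

def fit_offset_matrix(x_pts, y_pts):
    """Closed-form minimizer of Formula 1 for point-to-point pairs: the 3x3
    homogeneous matrix M (yaw dyaw', translation (dx', dy')) that best maps
    the y_i onto the x_i in the least-squares sense (2-D Kabsch algorithm)."""
    x, y = np.asarray(x_pts, float), np.asarray(y_pts, float)
    cx, cy = x.mean(axis=0), y.mean(axis=0)
    h = (y - cy).T @ (x - cx)               # 2x2 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against a reflection
    r = vt.T @ np.diag([1.0, d]) @ u.T      # optimal rotation
    t = cx - r @ cy                         # optimal translation
    m = np.eye(3)
    m[:2, :2], m[:2, 2] = r, t
    return m

def iterate_offset_pose(first_pts, second_pts, m_init,
                        dist_thresh=1.0, mse_thresh=0.1, max_iter=20):
    """Steps a-g for one frame, point-to-point branch only: select close pairs,
    fit an intermediate offset matrix, update the frame's matrix, and stop when
    the mean square error or the iteration count crosses its preset threshold."""
    first = np.asarray(first_pts, float)
    second = np.asarray(second_pts, float)
    m = np.asarray(m_init, float).copy()
    second_h = np.c_[second, np.ones(len(second))]  # homogeneous coordinates
    for _ in range(max_iter):
        third = (m @ second_h.T).T[:, :2]           # step c analogue of step 206
        d = np.linalg.norm(first[:, None, :] - third[None, :, :], axis=2)
        nearest = d.argmin(axis=1)                  # adjacent second point elements
        keep = d[np.arange(len(first)), nearest] < dist_thresh   # step a
        if keep.sum() < 3:
            break
        x_sel, y_sel = first[keep], third[nearest[keep]]
        m_mid = fit_offset_matrix(x_sel, y_sel)     # step b
        m = m_mid @ m                               # step c: update the matrix
        resid = x_sel - (m_mid[:2, :2] @ y_sel.T).T - m_mid[:2, 2]
        if np.mean(np.sum(resid ** 2, axis=1)) < mse_thresh:     # step f
            break
    return m                                        # step g: the frame's offset matrix
```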
It should be noted that steps 205-207 above are in fact a specific implementation in which the terminal determines, for each frame of point cloud data, the frame's offset pose according to the first and second coordinates of the feature elements. This implementation may be replaced by others: here the second coordinates in the vehicle coordinate system are converted into the laser coordinate system, and the offset pose of each frame is determined from the first coordinates and the converted third coordinates; in actual operation, the terminal may instead convert the first coordinates in the laser coordinate system into the vehicle coordinate system to obtain converted fourth coordinates, and determine each frame's offset pose from the second coordinates and the converted fourth coordinates, which is not specifically limited in this embodiment.
208. The terminal establishes observation equations between the offset poses of the at least two frames of point cloud data and the offset position, the yaw angle, and the system deviation; for each frame of point cloud data, the terminal obtains the vehicle's heading angle corresponding to that frame.
In this embodiment, the extrinsic laser parameters of the laser scanning device include the offset position and the yaw angle between the vehicle coordinate system and the laser coordinate system. In steps 203-204, because a system deviation exists in the map data, the second coordinates of the feature elements deviate from their actual coordinates in the vehicle coordinate system, and the effect of the system deviation on the second coordinates was taken into account when determining each frame's offset pose. Therefore, when establishing the observation equations in this step, the terminal must likewise take the system deviation into account.
In this step, the terminal establishes, from the offset poses of the at least two frames of point cloud data, the offset position, the yaw angle, and the system deviation, observation equations of the following form, in which the projection of the system deviation onto the vehicle axes follows the heading-angle relation noted below, for i = 1, …, k:

$$\begin{cases} dx'_i = dx + x_0 \sin(yaw_i) + y_0 \cos(yaw_i) \\ dy'_i = dy - x_0 \cos(yaw_i) + y_0 \sin(yaw_i) \\ dyaw'_i = dyaw \end{cases}$$
where (x₀, y₀) is the system deviation, (dx, dy) is the offset position, dyaw is the yaw angle, (dx′_i, dy′_i) is the value of the offset position of the i-th frame among the at least two frames of point cloud data, dyaw′_i is the value of the yaw angle of the i-th frame, yaw_i is the heading angle corresponding to the i-th frame, and k is the total number of frames of point cloud data.
It should be noted that, in the laser coordinate system, the system deviation can be decomposed into its projection on the x-axis direction and its projection on the y-axis direction. Because the system deviation is an error in the map data, in actual operation it is converted into the vehicle coordinate system via the station-center coordinate system; the station-center coordinate system and the vehicle coordinate system both take the vehicle as the coordinate origin and differ in the positive directions of their x- and y-axes, the angle between the positive y-axis of the station-center coordinate system and the positive x-axis of the vehicle coordinate system being equal to the vehicle's heading angle.
Therefore, for each frame of point cloud data, the terminal also needs to obtain the vehicle's heading angle corresponding to that frame. This process may be: while acquiring each frame of point cloud data, the terminal obtains, through the IMU in the navigation system, the vehicle's heading angle corresponding to that frame.
209. The terminal calculates the value of the offset position and the value of the yaw angle in the observation equations according to the heading angles and the offset poses of the frames of point cloud data.
In this step, the terminal can substitute the offset poses of the at least two frames into the observation equations and thereby calculate the value of the offset position, the value of the yaw angle, and the value of the system deviation in the observation equations.
Although in theory the values of the offset position, the yaw angle, and the system deviation in the observation equations can be determined from the offset poses of only two frames, in this embodiment, to reduce the influence of random noise and obtain more robust values of the extrinsic laser parameters, the terminal may obtain the offset poses of n frames of point cloud data (n being a positive integer greater than 2) together with the vehicle heading angle corresponding to each of those frames, substitute each frame's offset pose and corresponding heading angle into the observation equations, and calculate the value of the offset position, the value of the yaw angle, and the value of the system deviation by the least-squares method. Because the offset poses of n frames are used, the interference of random noise that may exist in individual frames is reduced, the error is reduced, and the determined values of the extrinsic laser parameters are more accurate. A least-squares sketch follows.
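Assuming the observation-equation form written out above (whose sign conventions are themselves a reconstruction, since the published equation is an image), the five unknowns can be estimated from n frames by ordinary least squares, for example:

```python
import numpy as np

def solve_extrinsics(offset_poses, headings_deg):
    """Least-squares solve of the observation equations for
    (dx, dy, dyaw, x0, y0) from n per-frame offset poses.
    offset_poses: list of (dx'_i, dy'_i, dyaw'_i); headings_deg: list of yaw_i."""
    rows, rhs = [], []
    for (dxi, dyi, dyawi), yaw_deg in zip(offset_poses, headings_deg):
        s, c = np.sin(np.radians(yaw_deg)), np.cos(np.radians(yaw_deg))
        #            dx   dy   dyaw  x0   y0
        rows.append([1.0, 0.0, 0.0,  s,   c]);   rhs.append(dxi)
        rows.append([0.0, 1.0, 0.0, -c,   s]);   rhs.append(dyi)
        rows.append([0.0, 0.0, 1.0, 0.0, 0.0]);  rhs.append(dyawi)
    sol, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    dx, dy, dyaw, x0, y0 = sol
    return dx, dy, dyaw, x0, y0
```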
It should be noted that steps 208-209 above are in fact a specific implementation in which the terminal calculates, from the offset poses of the at least two frames of point cloud data, the values of the extrinsic laser parameters of the laser scanning device so as to calibrate it. This implementation may be replaced by others: here the values of the extrinsic parameters are determined by establishing observation equations between the offset poses and the offset position, the yaw angle, and the system deviation; in actual operation, the terminal may also establish and store the observation equations in advance, or write and store in advance program instructions with the same function as the observation equations, and determine the values of the extrinsic laser parameters either by directly retrieving the observation equations or by directly retrieving and executing those program instructions.
After determining the values of the extrinsic laser parameters, the terminal calibrates the laser scanning device in the vehicle with those values and calibrates the navigation system with the determined value of the system deviation, so that the vehicle drives using the point cloud data provided by the calibrated laser scanning device and the map data provided by the calibrated navigation system, improving driving safety.
In this embodiment of this application, the terminal can acquire, based on at least two frames of point cloud data obtained by scanning the target area with the laser scanning device, the first coordinates of the feature elements in each frame, the first coordinates being their coordinates in the laser coordinate system; and the terminal can determine the second coordinates of the feature elements in the vehicle coordinate system directly from the vehicle's map data of the target area, performing the subsequent steps directly with the first and second coordinates. This omits the manual construction of a calibration field and the manual measurements, improving the efficiency of determining the first and second coordinates and thereby the efficiency of calibrating the laser scanning device. Moreover, for each frame of point cloud data, the terminal determines the frame's offset pose from the first and second coordinates of the feature elements, and then calculates the values of the extrinsic laser parameters from the offset poses of the at least two frames so as to calibrate the laser scanning device in the vehicle. Because the terminal calculates these values from multiple frames of point cloud data, the interference of random noise in individual frames is reduced, which reduces the error and improves the accuracy of the determined extrinsic laser parameters.
FIG. 6 is a schematic structural diagram of an apparatus for calibrating a laser scanning device according to an embodiment of this application. Referring to FIG. 6, the apparatus includes an obtaining module 601, a first determining module 602, a second determining module 603, and a calculating module 604.
The obtaining module 601 is configured to acquire, based on at least two frames of point cloud data obtained by scanning a target area with a laser scanning device, first coordinates of feature elements in each frame of point cloud data, the first coordinates being the coordinates of the feature elements in a laser coordinate system.
The first determining module 602 is configured to determine, based on the vehicle's map data of the target area, second coordinates of the feature elements in each frame of point cloud data in a vehicle coordinate system.
The second determining module 603 is configured to determine, for each frame of point cloud data, an offset pose of that frame according to the first coordinates and the second coordinates of the feature elements.
The calculating module 604 is configured to calculate, according to the offset poses of the at least two frames of point cloud data, values of extrinsic laser parameters of the laser scanning device, so as to calibrate the laser scanning device.
Optionally, the obtaining module 601 includes:
a scanning unit configured to scan the target area through the laser scanning device along a preset scan route to obtain the at least two frames of point cloud data, the target area being any area that includes the feature elements; and
an extracting unit configured to extract, for each frame of point cloud data, the first coordinates of the feature elements in the laser coordinate system.
Optionally, the first determining module 602 includes:
a first acquiring unit configured to acquire the map data of the target area from the vehicle's navigation system, the map data including the latitude-longitude coordinates and elevation coordinates of the feature elements in a map coordinate system; and
a first determining unit configured to determine, for each frame of point cloud data, the second coordinates of the feature elements in the vehicle coordinate system according to the map data of the target area.
Optionally, the second determining module 603 includes:
a second acquiring unit configured to acquire an initial offset pose between the vehicle coordinate system and the laser coordinate system;
a second determining unit configured to determine, for each frame of point cloud data, third coordinates of the feature elements according to the initial offset pose and the second coordinates of the feature elements, the third coordinates being the coordinates of the feature elements in the laser coordinate system; and
a third determining unit configured to determine the offset pose of each frame of point cloud data according to the first coordinates and the third coordinates of the feature elements.
Optionally, the third determining unit includes:
a calculating subunit configured to calculate, according to the first coordinates and the third coordinates of the feature elements, a first distance between each first point element and an adjacent second point element, and a second distance between each first point element and an adjacent line element, the first point element being a point element among the feature elements corresponding to the first coordinates, the second point element being a point element among the feature elements corresponding to the third coordinates, and the line element being a line element among the feature elements corresponding to the third coordinates; and
a determining subunit configured to determine the offset pose of each frame of point cloud data according to the first distance and the second distance.
Optionally, the extrinsic laser parameters of the laser scanning device include the offset position and the yaw angle between the vehicle coordinate system and the laser coordinate system, and the calculating module 604 includes:
an establishing unit configured to establish observation equations between the offset poses of the at least two frames of point cloud data and the offset position, the yaw angle, and a system deviation, the system deviation being the deviation between the vehicle's navigation system and the map data;
a third acquiring unit configured to obtain, for each frame of point cloud data, the vehicle's heading angle corresponding to that frame; and
a calculating unit configured to calculate the value of the offset position and the value of the yaw angle in the observation equations according to the heading angle and the offset pose of each frame of point cloud data.
In this embodiment of this application, the terminal can acquire, based on at least two frames of point cloud data obtained by scanning the target area with the laser scanning device, the first coordinates of the feature elements in each frame, the first coordinates being their coordinates in the laser coordinate system; and the terminal can determine the second coordinates of the feature elements in the vehicle coordinate system directly from the vehicle's map data of the target area, performing the subsequent steps directly with the first and second coordinates. This omits the manual construction of a calibration field and the manual measurements, improving the efficiency of determining the first and second coordinates and thereby the efficiency of calibrating the laser scanning device. Moreover, for each frame of point cloud data, the terminal determines the frame's offset pose from the first and second coordinates of the feature elements, and then calculates the values of the extrinsic laser parameters from the offset poses of the at least two frames so as to calibrate the laser scanning device in the vehicle. Because the terminal calculates these values from multiple frames of point cloud data, the interference of random noise in individual frames is reduced, which reduces the error and improves the accuracy of the determined extrinsic laser parameters.
All of the optional technical solutions above may be combined in any manner to form optional embodiments of the present disclosure, and they are not described one by one here.
It should be noted that when the apparatus for calibrating a laser scanning device provided in the foregoing embodiments performs calibration, the division into the functional modules above is merely used as an example for description; in practical applications, the functions may be assigned to different functional modules as required, that is, the internal structure of the terminal is divided into different functional modules to complete all or some of the functions described above. In addition, the apparatus provided in the foregoing embodiments belongs to the same concept as the embodiments of the method for calibrating a laser scanning device; for the specific implementation process, refer to the method embodiments, and details are not repeated here.
FIG. 7 is a schematic structural diagram of a computer device 700 according to an embodiment of this application. Referring to FIG. 7, the computer device 700 includes a processor and a memory, may further include a communication interface and a communication bus, and may further include an input/output interface and a display device, the processor, the memory, the input/output interface, the display device, and the communication interface communicating with one another through the communication bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store computer-readable instructions that, when executed by the processor, cause the processor to implement the method for calibrating a laser scanning device. The internal memory may also store computer-readable instructions that, when executed by the processor, cause the processor to perform the method for calibrating a laser scanning device.
The communication bus is a circuit that connects the described elements and implements transmission between them. For example, the processor receives commands from the other elements through the communication bus, decrypts the received commands, and performs computation or data processing according to the decrypted commands. The memory may include program modules, for example a kernel, middleware, an application programming interface (API), and applications. A program module may be composed of software, firmware, or hardware, or of at least two of them. The input/output interface forwards commands or data entered by a user through an input/output device (for example, a sensor, a keyboard, or a touchscreen). The display device displays various kinds of information to the user. The communication interface connects the computer device 700 to other network devices, user equipment, and networks; for example, it may be connected to a network in a wired or wireless manner to connect to other external network devices or user equipment. The wireless communication may include at least one of the following: Wireless Fidelity (WiFi), Bluetooth (BT), Near Field Communication (NFC), Global Positioning System (GPS), and cellular communication (for example, Long Term Evolution (LTE), Long Term Evolution-Advanced (LTE-A), Code Division Multiple Access (CDMA), Wideband CDMA (WCDMA), Universal Mobile Telecommunication System (UMTS), Wireless Broadband (WiBro), and Global System for Mobile communication (GSM)). The wired communication may include at least one of the following: Universal Serial Bus (USB), High-Definition Multimedia Interface (HDMI), Recommended Standard 232 (RS-232), and plain old telephone service (POTS). The network may be a telecommunications network or a communications network; the communications network may be a computer network, the Internet, the Internet of Things, or a telephone network. The computer device 700 may be connected to the network through the communication interface, and the protocol by which the computer device 700 communicates with other network devices may be supported by at least one of an application, an application programming interface (API), middleware, a kernel, and the communication interface.
In an exemplary embodiment, a computer-readable storage medium storing a computer program is further provided, for example a memory storing computer-readable instructions that, when executed by a processor, implement the method for calibrating a laser scanning device in the foregoing embodiments. For example, the computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
A person of ordinary skill in the art may understand that all or some of the steps of the foregoing embodiments may be implemented by hardware, or by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing descriptions are merely preferred embodiments of this application and are not intended to limit this application. Any modification, equivalent replacement, or improvement made within the spirit and principle of this application shall fall within the protection scope of this application.

Claims (20)

  1. A method for calibrating a laser scanning device, applied to a computer device, the method comprising:
    acquiring, based on at least two frames of point cloud data obtained by scanning a target area with a laser scanning device, first coordinates of feature elements in each frame of point cloud data, the first coordinates being the coordinates of the feature elements in a laser coordinate system;
    determining, based on map data of the target area, second coordinates of the feature elements in each frame of point cloud data in a vehicle coordinate system;
    for each frame of point cloud data, determining an offset pose of that frame of point cloud data according to the first coordinates and the second coordinates of the feature elements; and
    calculating, according to the offset poses of the at least two frames of point cloud data, values of extrinsic laser parameters of the laser scanning device, so as to calibrate the laser scanning device.
  2. The method according to claim 1, wherein the acquiring, based on at least two frames of point cloud data obtained by scanning a target area with a laser scanning device, first coordinates of feature elements in each frame of point cloud data comprises:
    scanning the target area through the laser scanning device along a preset scan route to obtain the at least two frames of point cloud data, the target area being any area comprising the feature elements; and
    extracting, for each frame of point cloud data, the first coordinates of the feature elements in the laser coordinate system.
  3. The method according to claim 1, wherein the determining, based on map data of the target area, second coordinates of the feature elements in each frame of point cloud data in a vehicle coordinate system comprises:
    acquiring the map data of the target area from a navigation system, the map data comprising latitude-longitude coordinates and elevation coordinates of the feature elements in a map coordinate system; and
    determining, for each frame of point cloud data, the second coordinates of the feature elements in the vehicle coordinate system according to the map data of the target area.
  4. The method according to claim 3, wherein the determining, for each frame of point cloud data, the second coordinates of the feature elements in the vehicle coordinate system according to the map data of the target area comprises:
    converting the latitude-longitude coordinates and elevation coordinates of the feature elements in the map coordinate system into position coordinates in a geocentric coordinate system;
    converting the position coordinates of the feature elements in the geocentric coordinate system into position coordinates in a station-center (topocentric) coordinate system; and
    converting, according to an acquired heading angle of the vehicle, the position coordinates of the feature elements in the station-center coordinate system into the second coordinates in the vehicle coordinate system.
  5. The method according to claim 1, wherein the determining, for each frame of point cloud data, an offset pose of that frame of point cloud data according to the first coordinates and the second coordinates of the feature elements comprises:
    acquiring an initial offset pose between the vehicle coordinate system and the laser coordinate system;
    for each frame of point cloud data, determining third coordinates of the feature elements according to the initial offset pose and the second coordinates of the feature elements, the third coordinates being the coordinates of the feature elements in the laser coordinate system; and
    determining the offset pose of each frame of point cloud data according to the first coordinates and the third coordinates of the feature elements.
  6. The method according to claim 5, wherein the determining, for each frame of point cloud data, third coordinates of the feature elements according to the initial offset pose and the second coordinates of the feature elements comprises:
    for each frame of point cloud data, shifting the second coordinates of the feature elements according to a value of an initial offset position in the initial offset pose, and angularly rotating the position-shifted second coordinates according to a value of an initial yaw angle in the initial offset pose; and
    taking the position coordinates after the position shift and the angular rotation as the third coordinates of the feature elements.
  7. The method according to claim 5, wherein the determining the offset pose of each frame of point cloud data according to the first coordinates and the third coordinates of the feature elements comprises:
    calculating, according to the first coordinates and the third coordinates of the feature elements, a first distance between each first point element and an adjacent second point element, and a second distance between each first point element and an adjacent line element, the first point element being a point element among the feature elements corresponding to the first coordinates, the second point element being a point element among the feature elements corresponding to the third coordinates, and the line element being a line element among the feature elements corresponding to the third coordinates; and
    determining the offset pose of each frame of point cloud data according to the first distance and the second distance.
  8. The method according to claim 1, wherein the extrinsic laser parameters of the laser scanning device comprise an offset position and a yaw angle between the vehicle coordinate system and the laser coordinate system, and the calculating, according to the offset poses of the at least two frames of point cloud data, values of extrinsic laser parameters of the laser scanning device comprises:
    establishing observation equations between the offset poses of the at least two frames of point cloud data and the offset position, the yaw angle, and a system deviation, the system deviation being a systematic error in the map data;
    obtaining, for each frame of point cloud data, a heading angle of the vehicle corresponding to that frame; and
    calculating the value of the offset position and the value of the yaw angle in the observation equations according to the heading angle and the offset pose of each frame of point cloud data.
  9. A computer device, comprising a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the following steps:
    acquiring, based on at least two frames of point cloud data obtained by scanning a target area with a laser scanning device, first coordinates of feature elements in each frame of point cloud data, the first coordinates being the coordinates of the feature elements in a laser coordinate system;
    determining, based on map data of the target area, second coordinates of the feature elements in each frame of point cloud data in a vehicle coordinate system;
    for each frame of point cloud data, determining an offset pose of that frame of point cloud data according to the first coordinates and the second coordinates of the feature elements; and
    calculating, according to the offset poses of the at least two frames of point cloud data, values of extrinsic laser parameters of the laser scanning device, so as to calibrate the laser scanning device.
  10. The computer device according to claim 9, wherein, when performing the step of acquiring, based on at least two frames of point cloud data obtained by scanning a target area with a laser scanning device, first coordinates of feature elements in each frame of point cloud data, the computer-readable instructions cause the processor to perform the following steps:
    scanning the target area through the laser scanning device along a preset scan route to obtain the at least two frames of point cloud data, the target area being any area comprising the feature elements; and
    extracting, for each frame of point cloud data, the first coordinates of the feature elements in the laser coordinate system.
  11. The computer device according to claim 9, wherein, when performing the step of determining, based on map data of the target area, second coordinates of the feature elements in each frame of point cloud data in a vehicle coordinate system, the computer-readable instructions cause the processor to perform the following steps:
    acquiring the map data of the target area from a navigation system, the map data comprising latitude-longitude coordinates and elevation coordinates of the feature elements in a map coordinate system; and
    determining, for each frame of point cloud data, the second coordinates of the feature elements in the vehicle coordinate system according to the map data of the target area.
  12. The computer device according to claim 9, wherein, when performing the step of determining, for each frame of point cloud data, an offset pose of that frame of point cloud data according to the first coordinates and the second coordinates of the feature elements, the computer-readable instructions cause the processor to perform the following steps:
    acquiring an initial offset pose between the vehicle coordinate system and the laser coordinate system;
    for each frame of point cloud data, determining third coordinates of the feature elements according to the initial offset pose and the second coordinates of the feature elements, the third coordinates being the coordinates of the feature elements in the laser coordinate system; and
    determining the offset pose of each frame of point cloud data according to the first coordinates and the third coordinates of the feature elements.
  13. The computer device according to claim 12, wherein, when performing the step of determining the offset pose of each frame of point cloud data according to the first coordinates and the third coordinates of the feature elements, the computer-readable instructions cause the processor to perform the following steps:
    calculating, according to the first coordinates and the third coordinates of the feature elements, a first distance between each first point element and an adjacent second point element, and a second distance between each first point element and an adjacent line element, the first point element being a point element among the feature elements corresponding to the first coordinates, the second point element being a point element among the feature elements corresponding to the third coordinates, and the line element being a line element among the feature elements corresponding to the third coordinates; and
    determining the offset pose of each frame of point cloud data according to the first distance and the second distance.
  14. The computer device according to claim 11, wherein the extrinsic laser parameters of the laser scanning device comprise an offset position and a yaw angle between the vehicle coordinate system and the laser coordinate system, and, when performing the step of calculating, according to the offset poses of the at least two frames of point cloud data, values of extrinsic laser parameters of the laser scanning device, the computer-readable instructions cause the processor to perform the following steps:
    establishing observation equations between the offset poses of the at least two frames of point cloud data and the offset position, the yaw angle, and a system deviation, the system deviation being a systematic error in the map data;
    obtaining, for each frame of point cloud data, a heading angle of the vehicle corresponding to that frame; and
    calculating the value of the offset position and the value of the yaw angle in the observation equations according to the heading angle and the offset pose of each frame of point cloud data.
  15. A non-volatile computer-readable storage medium storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps:
    acquiring, based on at least two frames of point cloud data obtained by scanning a target area with a laser scanning device, first coordinates of feature elements in each frame of point cloud data, the first coordinates being the coordinates of the feature elements in a laser coordinate system;
    determining, based on map data of the target area, second coordinates of the feature elements in each frame of point cloud data in a vehicle coordinate system;
    for each frame of point cloud data, determining an offset pose of that frame of point cloud data according to the first coordinates and the second coordinates of the feature elements; and
    calculating, according to the offset poses of the at least two frames of point cloud data, values of extrinsic laser parameters of the laser scanning device, so as to calibrate the laser scanning device.
  16. The computer-readable storage medium according to claim 15, wherein, when performing the step of acquiring, based on at least two frames of point cloud data obtained by scanning a target area with a laser scanning device, first coordinates of feature elements in each frame of point cloud data, the computer-readable instructions cause the processor to perform the following steps:
    scanning the target area through the laser scanning device along a preset scan route to obtain the at least two frames of point cloud data, the target area being any area comprising the feature elements; and
    extracting, for each frame of point cloud data, the first coordinates of the feature elements in the laser coordinate system.
  17. The computer-readable storage medium according to claim 15, wherein, when performing the step of determining, based on map data of the target area, second coordinates of the feature elements in each frame of point cloud data in a vehicle coordinate system, the computer-readable instructions cause the processor to perform the following steps:
    acquiring the map data of the target area from a navigation system, the map data comprising latitude-longitude coordinates and elevation coordinates of the feature elements in a map coordinate system; and
    determining, for each frame of point cloud data, the second coordinates of the feature elements in the vehicle coordinate system according to the map data of the target area.
  18. The computer-readable storage medium according to claim 15, wherein, when performing the step of determining, for each frame of point cloud data, an offset pose of that frame of point cloud data according to the first coordinates and the second coordinates of the feature elements, the computer-readable instructions cause the processor to perform the following steps:
    acquiring an initial offset pose between the vehicle coordinate system and the laser coordinate system;
    for each frame of point cloud data, determining third coordinates of the feature elements according to the initial offset pose and the second coordinates of the feature elements, the third coordinates being the coordinates of the feature elements in the laser coordinate system; and
    determining the offset pose of each frame of point cloud data according to the first coordinates and the third coordinates of the feature elements.
  19. The computer-readable storage medium according to claim 18, wherein, when performing the step of determining the offset pose of each frame of point cloud data according to the first coordinates and the third coordinates of the feature elements, the computer-readable instructions cause the processor to perform the following steps:
    calculating, according to the first coordinates and the third coordinates of the feature elements, a first distance between each first point element and an adjacent second point element, and a second distance between each first point element and an adjacent line element, the first point element being a point element among the feature elements corresponding to the first coordinates, the second point element being a point element among the feature elements corresponding to the third coordinates, and the line element being a line element among the feature elements corresponding to the third coordinates; and
    determining the offset pose of each frame of point cloud data according to the first distance and the second distance.
  20. The computer-readable storage medium according to claim 15, wherein the extrinsic laser parameters of the laser scanning device comprise an offset position and a yaw angle between the vehicle coordinate system and the laser coordinate system, and, when performing the step of calculating, according to the offset poses of the at least two frames of point cloud data, values of extrinsic laser parameters of the laser scanning device, the computer-readable instructions cause the processor to perform the following steps:
    establishing observation equations between the offset poses of the at least two frames of point cloud data and the offset position, the yaw angle, and a system deviation, the system deviation being a systematic error in the map data;
    obtaining, for each frame of point cloud data, a heading angle of the vehicle corresponding to that frame; and
    calculating the value of the offset position and the value of the yaw angle in the observation equations according to the heading angle and the offset pose of each frame of point cloud data.
PCT/CN2018/087251 2017-08-23 2018-05-17 Method, apparatus, and device for calibrating laser scanning device, and storage medium WO2019037484A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
KR1020197030956A KR102296723B1 (ko) 2017-08-23 2018-05-17 레이저 스캐닝 디바이스 교정 방법, 장치, 디바이스 및 저장 매체
EP18847637.8A EP3686557A4 (en) 2017-08-23 2018-05-17 LASER SCAN DEVICE CALIBRATION METHOD, APPARATUS AND DEVICE, AND STORAGE MEDIA
JP2020511185A JP6906691B2 (ja) 2017-08-23 2018-05-17 レーザー走査デバイスの標定方法、装置、デバイス及び記憶媒体
US16/383,358 US20190235062A1 (en) 2017-08-23 2019-04-12 Method, device, and storage medium for laser scanning device calibration

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710731253.X 2017-08-23
CN201710731253.XA CN109425365B (zh) 2017-08-23 2017-08-23 激光扫描设备标定的方法、装置、设备及存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/383,358 Continuation US20190235062A1 (en) 2017-08-23 2019-04-12 Method, device, and storage medium for laser scanning device calibration

Publications (1)

Publication Number Publication Date
WO2019037484A1 true WO2019037484A1 (zh) 2019-02-28

Family

ID=65439766

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/087251 WO2019037484A1 (zh) 2017-08-23 2018-05-17 激光扫描设备标定的方法、装置、设备及存储介质

Country Status (7)

Country Link
US (1) US20190235062A1 (zh)
EP (1) EP3686557A4 (zh)
JP (1) JP6906691B2 (zh)
KR (1) KR102296723B1 (zh)
CN (1) CN109425365B (zh)
MA (1) MA50182A (zh)
WO (1) WO2019037484A1 (zh)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111402328A (zh) * 2020-03-17 2020-07-10 北京图森智途科技有限公司 一种基于激光里程计的位姿计算方法及装置
CN111784836A (zh) * 2020-06-28 2020-10-16 北京百度网讯科技有限公司 高精地图生成方法、装置、设备及可读存储介质
CN111986472A (zh) * 2019-05-22 2020-11-24 阿里巴巴集团控股有限公司 车辆速度确定方法及车辆
CN112100900A (zh) * 2020-06-30 2020-12-18 北京控制工程研究所 一种空间非合作目标点云初始姿态测量方法
CN112164138A (zh) * 2020-10-30 2021-01-01 上海商汤临港智能科技有限公司 一种点云数据筛选方法及装置
CN112596063A (zh) * 2020-11-27 2021-04-02 北京迈格威科技有限公司 点云描述子构建方法及装置,闭环检测方法及装置
CN112639882A (zh) * 2019-09-12 2021-04-09 华为技术有限公司 定位方法、装置及系统
CN112684432A (zh) * 2019-10-18 2021-04-20 北京万集科技股份有限公司 激光雷达标定方法、装置、设备及存储介质
CN113034685A (zh) * 2021-03-18 2021-06-25 北京百度网讯科技有限公司 激光点云与高精地图的叠加方法、装置及电子设备
CN113238202A (zh) * 2021-06-08 2021-08-10 上海海洋大学 光子激光三维成像系统的坐标系点云计算方法及其应用
CN113721227A (zh) * 2021-08-06 2021-11-30 上海有个机器人有限公司 一种激光器的偏移角度计算方法
CN113739774A (zh) * 2021-09-14 2021-12-03 煤炭科学研究总院 基于移动激光与标靶协作的掘进机位姿纠偏方法
CN113984072A (zh) * 2021-10-28 2022-01-28 阿波罗智能技术(北京)有限公司 车辆定位方法、装置、设备、存储介质及自动驾驶车辆
CN114353807A (zh) * 2022-03-21 2022-04-15 沈阳吕尚科技有限公司 一种机器人的定位方法及定位装置
CN114581379A (zh) * 2022-02-14 2022-06-03 浙江华睿科技股份有限公司 一种密封胶的检测方法及装置

Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109946732B (zh) * 2019-03-18 2020-12-01 李子月 一种基于多传感器数据融合的无人车定位方法
CN110298103A (zh) * 2019-06-25 2019-10-01 中国电建集团成都勘测设计研究院有限公司 基于无人机机载三维激光扫描仪的高陡危岩体调查方法
CN112212871B (zh) * 2019-07-10 2024-07-19 浙江未来精灵人工智能科技有限公司 一种数据处理方法、装置及机器人
CN112241016B (zh) * 2019-07-19 2024-07-19 北京初速度科技有限公司 一种泊车地图地理坐标的确定方法和装置
CN110780325B (zh) * 2019-08-23 2022-07-19 腾讯科技(深圳)有限公司 运动对象的定位方法及装置、电子设备
CN110736456B (zh) * 2019-08-26 2023-05-05 广东亿嘉和科技有限公司 稀疏环境下基于特征提取的二维激光实时定位方法
CN112630751B (zh) * 2019-10-09 2024-06-18 中车时代电动汽车股份有限公司 一种激光雷达的标定方法
CN110794392B (zh) * 2019-10-15 2024-03-19 上海创昂智能技术有限公司 车辆定位方法、装置、车辆及存储介质
CN110837080B (zh) * 2019-10-28 2023-09-05 武汉海云空间信息技术有限公司 激光雷达移动测量系统的快速标定方法
CN110888120B (zh) * 2019-12-03 2023-04-07 华南农业大学 一种基于组合导航系统矫正激光雷达点云数据运动畸变的方法
CN111207762B (zh) * 2019-12-31 2021-12-07 深圳一清创新科技有限公司 地图生成方法、装置、计算机设备和存储介质
CN111508021B (zh) * 2020-03-24 2023-08-18 广州视源电子科技股份有限公司 一种位姿确定方法、装置、存储介质及电子设备
CN116930933A (zh) * 2020-03-27 2023-10-24 深圳市速腾聚创科技有限公司 激光雷达的姿态校正方法和装置
CN111949816B (zh) * 2020-06-22 2023-09-26 北京百度网讯科技有限公司 定位处理方法、装置、电子设备和存储介质
CN113866779A (zh) * 2020-06-30 2021-12-31 上海商汤智能科技有限公司 点云数据的融合方法、装置、电子设备及存储介质
CN112068108A (zh) * 2020-08-11 2020-12-11 南京航空航天大学 一种基于全站仪的激光雷达外部参数标定方法
CN112595325B (zh) * 2020-12-21 2024-08-13 武汉汉宁轨道交通技术有限公司 初始位置确定方法、装置、电子设备和存储介质
CN112578356B (zh) * 2020-12-25 2024-05-17 上海商汤临港智能科技有限公司 一种外参标定方法、装置、计算机设备及存储介质
CN112904317B (zh) * 2021-01-21 2023-08-22 湖南阿波罗智行科技有限公司 一种多激光雷达与gnss_ins系统标定方法
CN112509053B (zh) * 2021-02-07 2021-06-04 深圳市智绘科技有限公司 机器人位姿的获取方法、装置及电子设备
JP2022152629A (ja) * 2021-03-29 2022-10-12 株式会社トプコン 測量システム及び点群データ取得方法及び点群データ取得プログラム
CN113124777B (zh) * 2021-04-20 2023-02-24 辽宁因泰立电子信息有限公司 车辆尺寸确定方法、装置、系统及存储介质
CN113247769B (zh) * 2021-04-28 2023-06-06 三一海洋重工有限公司 一种集卡定位方法及其定位系统、岸桥
CN113237896B (zh) * 2021-06-08 2024-02-20 诚丰家具有限公司 一种基于光源扫描的家具板材动态监测系统及方法
CN113671527B (zh) * 2021-07-23 2024-08-06 国电南瑞科技股份有限公司 一种提高配网带电作业机器人的精准作业方法及装置
CN113362328B (zh) * 2021-08-10 2021-11-09 深圳市信润富联数字科技有限公司 点云图生成方法、装置、电子设备和存储介质
CN113721255B (zh) * 2021-08-17 2023-09-26 北京航空航天大学 基于激光雷达与视觉融合的列车站台停车点精准检测方法
CN113743483B (zh) * 2021-08-20 2022-10-21 浙江省测绘科学技术研究院 一种基于空间平面偏移分析模型的道路点云误差场景分析方法
CN113884278B (zh) * 2021-09-16 2023-10-27 杭州海康机器人股份有限公司 一种线激光设备的系统标定方法和装置
CN113959397B (zh) * 2021-10-19 2023-10-03 广东电网有限责任公司 一种电力杆塔姿态监测方法、设备及介质
CN114018228B (zh) * 2021-11-04 2024-01-23 武汉天测测绘科技有限公司 一种移动式轨道交通三维数据获取方法及系统
CN114399550B (zh) * 2022-01-18 2024-06-07 中冶赛迪信息技术(重庆)有限公司 一种基于三维激光扫描的汽车鞍座提取方法及系统
CN116246020B (zh) * 2023-03-07 2023-09-08 武汉理工大学 一种多激光点云技术三维重建系统及方法
CN117269939B (zh) * 2023-10-25 2024-03-26 北京路凯智行科技有限公司 用于传感器的参数标定系统、方法及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104019829A (zh) * 2014-06-09 2014-09-03 武汉克利福昇科技有限责任公司 一种基于pos系统的车载全景相机和线阵激光扫描仪的外参标定方法
CN104833372A (zh) * 2015-04-13 2015-08-12 武汉海达数云技术有限公司 一种车载移动测量系统高清全景相机外参数标定方法
CN105180811A (zh) * 2015-09-21 2015-12-23 武汉海达数云技术有限公司 基于同名特征地物的移动测量系统激光扫描仪标定方法
CN105203023A (zh) * 2015-07-10 2015-12-30 中国人民解放军信息工程大学 一种车载三维激光扫描系统安置参数的一站式标定方法
US20160070981A1 (en) * 2014-09-08 2016-03-10 Kabushiki Kaisha Topcon Operating device, operating system, operating method, and program therefor
CN106996795A (zh) * 2016-01-22 2017-08-01 腾讯科技(深圳)有限公司 一种车载激光外参标定方法和装置

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3395393B2 (ja) * 1994-08-05 2003-04-14 日産自動車株式会社 車両周囲表示装置
JP5069439B2 (ja) * 2006-09-21 2012-11-07 パナソニック株式会社 自己位置認識システム
JP2011191239A (ja) * 2010-03-16 2011-09-29 Mazda Motor Corp 移動体位置検出装置
EP2523017A1 (de) * 2011-05-13 2012-11-14 Hexagon Technology Center GmbH Kalibrierverfahren für ein Gerät mit Scanfunktionalität
GB201116961D0 (en) * 2011-09-30 2011-11-16 Bae Systems Plc Fast calibration for lidars
US9043069B1 (en) * 2012-11-07 2015-05-26 Google Inc. Methods and systems for scan matching approaches for vehicle heading estimation
CN105164549B (zh) * 2013-03-15 2019-07-02 优步技术公司 用于机器人的多传感立体视觉的方法、系统和设备
KR102003339B1 (ko) * 2013-12-06 2019-07-25 한국전자통신연구원 정밀 위치 설정 장치 및 방법
EP3129807B1 (de) * 2014-04-09 2018-06-13 Continental Teves AG & Co. oHG Positionskorrektur eines fahrzeugs durch referenzierung zu objekten im umfeld
DE102014211176A1 (de) * 2014-06-11 2015-12-17 Continental Teves Ag & Co. Ohg Verfahren und System zur Korrektur von Messdaten und/oder Navigationsdaten eines Sensorbasissystems
US20150362587A1 (en) * 2014-06-17 2015-12-17 Microsoft Corporation Lidar sensor calibration using surface pattern detection
US9823059B2 (en) * 2014-08-06 2017-11-21 Hand Held Products, Inc. Dimensioning system with guided alignment
JP6442193B2 (ja) * 2014-08-26 2018-12-19 株式会社トプコン 点群位置データ処理装置、点群位置データ処理システム、点群位置データ処理方法およびプログラム
CN104180793A (zh) * 2014-08-27 2014-12-03 北京建筑大学 一种用于数字城市建设的移动空间信息获取装置和方法
CN104657464B (zh) * 2015-02-10 2018-07-03 腾讯科技(深圳)有限公司 一种数据处理方法及装置
CN106546260B (zh) * 2015-09-22 2019-08-13 腾讯科技(深圳)有限公司 一种移动测量数据的纠正方法及系统
US9916703B2 (en) * 2015-11-04 2018-03-13 Zoox, Inc. Calibration for autonomous vehicle operation
EP3182065A1 (de) * 2015-12-14 2017-06-21 Leica Geosystems AG Handhaltbares entfernungsmessgerät und verfahren zum erfassen relativer positionen
KR102373926B1 (ko) * 2016-02-05 2022-03-14 삼성전자주식회사 이동체 및 이동체의 위치 인식 방법
US10837773B2 (en) * 2016-12-30 2020-11-17 DeepMap Inc. Detection of vertical structures based on LiDAR scanner data for high-definition maps for autonomous vehicles
CN108732582B (zh) * 2017-04-20 2020-07-10 百度在线网络技术(北京)有限公司 车辆定位方法和装置

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104019829A (zh) * 2014-06-09 2014-09-03 武汉克利福昇科技有限责任公司 一种基于pos系统的车载全景相机和线阵激光扫描仪的外参标定方法
US20160070981A1 (en) * 2014-09-08 2016-03-10 Kabushiki Kaisha Topcon Operating device, operating system, operating method, and program therefor
CN104833372A (zh) * 2015-04-13 2015-08-12 武汉海达数云技术有限公司 一种车载移动测量系统高清全景相机外参数标定方法
CN105203023A (zh) * 2015-07-10 2015-12-30 中国人民解放军信息工程大学 一种车载三维激光扫描系统安置参数的一站式标定方法
CN105180811A (zh) * 2015-09-21 2015-12-23 武汉海达数云技术有限公司 基于同名特征地物的移动测量系统激光扫描仪标定方法
CN106996795A (zh) * 2016-01-22 2017-08-01 腾讯科技(深圳)有限公司 一种车载激光外参标定方法和装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3686557A4

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111986472A (zh) * 2019-05-22 2020-11-24 阿里巴巴集团控股有限公司 车辆速度确定方法及车辆
CN111986472B (zh) * 2019-05-22 2023-04-28 阿里巴巴集团控股有限公司 车辆速度确定方法及车辆
CN112639882A (zh) * 2019-09-12 2021-04-09 华为技术有限公司 定位方法、装置及系统
CN112639882B (zh) * 2019-09-12 2021-12-14 华为技术有限公司 定位方法、装置及系统
CN112684432A (zh) * 2019-10-18 2021-04-20 北京万集科技股份有限公司 激光雷达标定方法、装置、设备及存储介质
CN112684432B (zh) * 2019-10-18 2024-04-16 武汉万集光电技术有限公司 激光雷达标定方法、装置、设备及存储介质
CN111402328A (zh) * 2020-03-17 2020-07-10 北京图森智途科技有限公司 一种基于激光里程计的位姿计算方法及装置
CN111402328B (zh) * 2020-03-17 2023-11-10 北京图森智途科技有限公司 一种基于激光里程计的位姿计算方法及装置
CN111784836B (zh) * 2020-06-28 2024-06-04 北京百度网讯科技有限公司 高精地图生成方法、装置、设备及可读存储介质
CN111784836A (zh) * 2020-06-28 2020-10-16 北京百度网讯科技有限公司 高精地图生成方法、装置、设备及可读存储介质
CN112100900B (zh) * 2020-06-30 2024-03-26 北京控制工程研究所 一种空间非合作目标点云初始姿态测量方法
CN112100900A (zh) * 2020-06-30 2020-12-18 北京控制工程研究所 一种空间非合作目标点云初始姿态测量方法
CN112164138A (zh) * 2020-10-30 2021-01-01 上海商汤临港智能科技有限公司 一种点云数据筛选方法及装置
CN112596063A (zh) * 2020-11-27 2021-04-02 北京迈格威科技有限公司 点云描述子构建方法及装置,闭环检测方法及装置
CN112596063B (zh) * 2020-11-27 2024-04-02 北京迈格威科技有限公司 点云描述子构建方法及装置,闭环检测方法及装置
CN113034685A (zh) * 2021-03-18 2021-06-25 北京百度网讯科技有限公司 激光点云与高精地图的叠加方法、装置及电子设备
CN113034685B (zh) * 2021-03-18 2022-12-06 北京百度网讯科技有限公司 激光点云与高精地图的叠加方法、装置及电子设备
CN113238202A (zh) * 2021-06-08 2021-08-10 上海海洋大学 光子激光三维成像系统的坐标系点云计算方法及其应用
CN113238202B (zh) * 2021-06-08 2023-08-15 上海海洋大学 光子激光三维成像系统的坐标系点云计算方法及其应用
CN113721227A (zh) * 2021-08-06 2021-11-30 上海有个机器人有限公司 一种激光器的偏移角度计算方法
CN113739774A (zh) * 2021-09-14 2021-12-03 煤炭科学研究总院 基于移动激光与标靶协作的掘进机位姿纠偏方法
CN113984072A (zh) * 2021-10-28 2022-01-28 阿波罗智能技术(北京)有限公司 车辆定位方法、装置、设备、存储介质及自动驾驶车辆
CN113984072B (zh) * 2021-10-28 2024-05-17 阿波罗智能技术(北京)有限公司 车辆定位方法、装置、设备、存储介质及自动驾驶车辆
CN114581379B (zh) * 2022-02-14 2024-03-22 浙江华睿科技股份有限公司 一种密封胶的检测方法及装置
CN114581379A (zh) * 2022-02-14 2022-06-03 浙江华睿科技股份有限公司 一种密封胶的检测方法及装置
CN114353807B (zh) * 2022-03-21 2022-08-12 沈阳吕尚科技有限公司 一种机器人的定位方法及定位装置
CN114353807A (zh) * 2022-03-21 2022-04-15 沈阳吕尚科技有限公司 一种机器人的定位方法及定位装置

Also Published As

Publication number Publication date
CN109425365B (zh) 2022-03-11
CN109425365A (zh) 2019-03-05
KR20190129978A (ko) 2019-11-20
JP2020531831A (ja) 2020-11-05
US20190235062A1 (en) 2019-08-01
JP6906691B2 (ja) 2021-07-21
EP3686557A1 (en) 2020-07-29
EP3686557A4 (en) 2021-08-04
MA50182A (fr) 2020-07-29
KR102296723B1 (ko) 2021-08-31

Similar Documents

Publication Publication Date Title
WO2019037484A1 (zh) 激光扫描设备标定的方法、装置、设备及存储介质
US11802769B2 (en) Lane line positioning method and apparatus, and storage medium thereof
WO2021139590A1 (zh) 基于蓝牙与slam的室内定位导航装置及其方法
US9378558B2 (en) Self-position and self-orientation based on externally received position information, sensor data, and markers
KR101444685B1 (ko) 영상기반 멀티센서 데이터를 이용한 차량의 위치자세 결정 방법 및 장치
WO2020146102A1 (en) Robust lane association by projecting 2-d image into 3-d world using map information
CN110160545B (zh) 一种激光雷达与gps的增强定位系统及方法
KR20200064542A (ko) 드론을 이용한 지상기준점 측량장치 및 그 방법
CN113295174B (zh) 一种车道级定位的方法、相关装置、设备以及存储介质
KR102239562B1 (ko) 항공 관측 데이터와 지상 관측 데이터 간의 융합 시스템
US20160379365A1 (en) Camera calibration device, camera calibration method, and camera calibration program
CN110031880B (zh) 基于地理位置定位的高精度增强现实方法及设备
US20210208608A1 (en) Control method, control apparatus, control terminal for unmanned aerial vehicle
CN109541570B (zh) 毫米波扫描设备标定的方法及设备
JP2022130588A (ja) 自動運転車両の位置合わせ方法、装置、電子機器及び車両
CN114296097A (zh) 基于GNSS和LiDAR的SLAM导航方法及系统
CN113900517A (zh) 线路导航方法和装置、电子设备、计算机可读介质
JP2021143861A (ja) 情報処理装置、情報処理方法及び情報処理システム
KR102136924B1 (ko) 피테스트 카메라 모듈의 상대 위치 산출 정확도를 평가하기 위한 평가 방법 및 시스템
TW201812338A (zh) 旋翼飛行器的定位方法
US20230266483A1 (en) Information processing device, information processing method, and program
CN113566847B (zh) 导航校准方法和装置、电子设备、计算机可读介质
JP7125927B2 (ja) 情報端末装置、方法及びプログラム
CN117953007B (zh) 一种基于图像匹配的线性运动补偿控制方法
KR101282917B1 (ko) 모바일 프로젝터를 이용한 길안내 방법 및 길안내 장치

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18847637

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20197030956

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2020511185

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018847637

Country of ref document: EP

Effective date: 20200323