WO2022007504A1 - Position determination method, apparatus, system and computer-readable storage medium - Google Patents

Position determination method, apparatus, system and computer-readable storage medium

Info

Publication number
WO2022007504A1
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
cloud data
transformation matrix
ground
reference point
Prior art date
Application number
PCT/CN2021/094394
Other languages
English (en)
French (fr)
Inventor
孔旗
张金凤
Original Assignee
北京京东乾石科技有限公司
Priority date
Filing date
Publication date
Application filed by 北京京东乾石科技有限公司
Priority to US18/004,508 priority Critical patent/US20230252674A1/en
Priority to EP21837484.1A priority patent/EP4152052A1/en
Publication of WO2022007504A1 publication Critical patent/WO2022007504A1/zh

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • G01S17/42Simultaneous measurement of distance and other co-ordinates
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4808Evaluating distance, position or velocity data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10044Radar image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle

Definitions

  • the present disclosure relates to the field of computer technology, and in particular, to a location determination method, apparatus, system, and computer-readable storage medium.
  • a vehicle in an autonomous driving state needs to know its position on the map in real time; the initial position is particularly important. In scenarios where the vehicle departs at or near a preset departure point, the vehicle also needs to be positioned before departure to determine its precise position.
  • vehicles generally use GPS (Global Positioning System, global positioning system) or GPS and INS (Inertial Navigation System, inertial navigation system) combined equipment to determine the initial position.
  • a method for determining a position, including: acquiring laser point cloud data measured by a vehicle at the current position point as reference point cloud data, and acquiring point cloud data corresponding to a preset starting point of the vehicle in a point cloud map as target point cloud data; matching the target point cloud data with the reference point cloud data to determine a transformation matrix between the target point cloud data and the reference point cloud data; and determining coordinate information of the current position point according to the coordinate information of the preset starting point and the transformation matrix.
  • matching the target point cloud data with the reference point cloud data and determining the transformation matrix between them includes: dividing the target point cloud data into ground target point cloud data and non-ground target point cloud data; dividing the reference point cloud data into ground reference point cloud data and non-ground reference point cloud data; determining a first transformation matrix according to the ground target point cloud data and the ground reference point cloud data; and determining a second transformation matrix according to the first transformation matrix, the non-ground target point cloud data and the non-ground reference point cloud data.
  • determining the first transformation matrix according to the ground target point cloud data and the ground reference point cloud data includes: performing down-sampling processing on the ground reference point cloud data to obtain down-sampled ground reference point cloud data; and matching the ground target point cloud data with the down-sampled ground reference point cloud data to obtain a rotation and translation matrix from the ground target point cloud data to the down-sampled ground reference point cloud data as the first transformation matrix.
  • determining the second transformation matrix according to the first transformation matrix, the non-ground target point cloud data and the non-ground reference point cloud data includes: transforming the non-ground target point cloud data according to the first transformation matrix to obtain transformed non-ground target point cloud data; performing down-sampling processing on the non-ground reference point cloud data to obtain down-sampled non-ground reference point cloud data; and matching the transformed non-ground target point cloud data with the down-sampled non-ground reference point cloud data to obtain a rotation and translation matrix from the transformed non-ground target point cloud data to the down-sampled non-ground reference point cloud data as the second transformation matrix.
  • determining the coordinate information of the current position point according to the coordinate information of the preset starting point and the transformation matrix includes: transforming the coordinate information of the preset starting point according to the first transformation matrix to obtain first coordinate information, and using the z-axis coordinate value in the first coordinate information as the z-axis coordinate value of the current position point; and transforming the first coordinate information according to the second transformation matrix to obtain second coordinate information, and using the x-axis coordinate value and the y-axis coordinate value in the second coordinate information as the x-axis coordinate value and the y-axis coordinate value of the current position point.
  • the method further includes: transforming preset attitude information corresponding to the preset starting point according to the first transformation matrix to obtain first attitude information, and using the roll angle value and the pitch angle value in the first attitude information as the current roll angle value and pitch angle value of the vehicle; and transforming the first attitude information according to the second transformation matrix to obtain second attitude information, and using the heading angle value in the second attitude information as the current heading angle value of the vehicle.
  • acquiring the point cloud data corresponding to the preset starting point of the vehicle in the point cloud map as the target point cloud data includes: according to the lidar measurement range corresponding to the reference point cloud data, acquiring, in the point cloud map, the point cloud data within the range corresponding to the lidar measurement range centered on the preset starting point as the target point cloud data.
  • a position determination device, comprising: an acquisition module configured to acquire laser point cloud data measured by the vehicle at the current position point as reference point cloud data, and to acquire point cloud data corresponding to a preset starting point of the vehicle in a point cloud map as target point cloud data; a matching module configured to match the target point cloud data with the reference point cloud data and determine a transformation matrix between the target point cloud data and the reference point cloud data; and a determination module configured to determine coordinate information of the current position point according to the coordinate information of the preset starting point and the transformation matrix.
  • the matching module is configured to divide the target point cloud data into ground target point cloud data and non-ground target point cloud data, and divide the reference point cloud data into ground reference point cloud data and non-ground reference point cloud data;
  • the first transformation matrix is determined according to the ground target point cloud data and the ground reference point cloud data;
  • the second transformation matrix is determined according to the first transformation matrix, the non-ground target point cloud data and the non-ground reference point cloud data.
  • the matching module is configured to perform down-sampling processing on the ground reference point cloud data to obtain down-sampled ground reference point cloud data; and match the ground target point cloud data with the down-sampled ground reference point cloud data , to obtain the rotation and translation matrix from the ground target point cloud data to the down-sampled ground reference point cloud data, as the first transformation matrix.
  • the matching module is configured to transform the non-ground target point cloud data according to the first transformation matrix to obtain transformed non-ground target point cloud data; perform down-sampling processing on the non-ground reference point cloud data to obtain down-sampled non-ground reference point cloud data; and match the transformed non-ground target point cloud data with the down-sampled non-ground reference point cloud data to obtain a rotation and translation matrix from the transformed non-ground target point cloud data to the down-sampled non-ground reference point cloud data as the second transformation matrix.
  • the determination module is configured to transform the coordinate information of the preset starting point according to the first transformation matrix to obtain first coordinate information, and use the z-axis coordinate value in the first coordinate information as the z-axis coordinate value of the current position point; and to transform the first coordinate information according to the second transformation matrix to obtain second coordinate information, and use the x-axis coordinate value and the y-axis coordinate value in the second coordinate information as the x-axis coordinate value and the y-axis coordinate value of the current position point.
  • the determination module is further configured to transform the preset attitude information corresponding to the preset starting point according to the first transformation matrix to obtain first attitude information, and use the roll angle value and the pitch angle value in the first attitude information as the current roll angle value and pitch angle value of the vehicle; and to transform the first attitude information according to the second transformation matrix to obtain second attitude information, and use the heading angle value in the second attitude information as the current heading angle value of the vehicle.
  • the acquisition module is configured to acquire, in the point cloud map, the point cloud data within the range corresponding to the lidar measurement range centered on the preset starting point according to the lidar measurement range corresponding to the reference point cloud data, as the target point cloud data.
  • a position determination apparatus comprising: a processor; and a memory coupled to the processor for storing instructions, and when the instructions are executed by the processor, the processor executes any of the foregoing The position determination method of an embodiment.
  • a non-transitory computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the position determination method of any of the foregoing embodiments.
  • a position determination system, comprising: the position determination device of any of the foregoing embodiments; a vehicle body, on which the position determination device is disposed; and a lidar device disposed on the vehicle body and configured to scan at the current position point to obtain laser point cloud data.
  • FIG. 1 shows a schematic flowchart of a location determination method according to some embodiments of the present disclosure.
  • FIG. 2 shows a schematic flowchart of a location determination method according to other embodiments of the present disclosure.
  • FIG. 3 shows a schematic structural diagram of a position determination apparatus according to some embodiments of the present disclosure.
  • FIG. 4 shows a schematic structural diagram of a position determination apparatus according to other embodiments of the present disclosure.
  • FIG. 5 shows a schematic structural diagram of a position determination apparatus according to further embodiments of the present disclosure.
  • FIG. 6 shows a schematic structural diagram of a position determination system according to some embodiments of the present disclosure.
  • the determination of the initial position of the vehicle relies heavily on GPS.
  • if the GPS signal at the initial position is poor or absent, the position given by GPS will have a large error, or no position information can be given at all. This leads to failure in determining the initial position and affects the subsequent automatic driving process of the vehicle.
  • a technical problem to be solved by the present disclosure is to improve the accuracy of the position determination of the vehicle for a scene where the vehicle departs at or near the preset departure point.
  • the present disclosure provides a method for determining the position of the vehicle, which will be described below with reference to FIG. 1 .
  • FIG. 1 is a flowchart of some embodiments of the disclosed location determination method. As shown in FIG. 1, the method of this embodiment includes steps S102-S106.
  • step S102: the laser point cloud data measured by the vehicle at the current position is obtained as the reference point cloud data, and the point cloud data corresponding to the preset starting point of the vehicle in the point cloud map is obtained as the target point cloud data.
  • the location information of the preset starting point can be measured in advance, and it only needs to be measured once and can be used permanently.
  • the position information of the preset departure point can be represented in different ways according to the coordinate system used by the vehicle. For example, if the vehicle uses the WGS84 coordinate system, the position information of the preset departure point can be expressed as latitude and longitude; if the vehicle uses a SLAM map, the position information of the preset departure point can be expressed as the position of the vehicle relative to the origin of the map.
  • the position information of the preset departure point can be bound to a client command. For example, name the preset departure point "home point" and the client command "home pose". Before the vehicle enters the automatic driving state, it is placed at or near the preset departure point. With the vehicle powered on, the client sends the home pose command to the vehicle. After the position determination device receives the position information of the preset departure point, it can use the laser point cloud data obtained by the lidar scanning the surrounding environment as the reference point cloud data, look up the high-precision point cloud map according to the position information of the preset departure point, and take the point cloud data corresponding to the preset departure point in the point cloud map as the target point cloud data.
  • point cloud data within a range corresponding to the lidar measurement range centered on the preset starting point is acquired in the point cloud map as the target point cloud data. If the location information of the preset departure point is different from the coordinate system of the point cloud map, the location information of the preset departure point may be converted to the coordinate system of the point cloud map first.
  • the range of the target point cloud data can be equal to or greater than the lidar measurement range.
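The target-cloud extraction described above can be sketched as follows. This is a minimal illustration with hypothetical names, assuming the map is an N×3 NumPy array and approximating the lidar measurement range by a radial crop in the x-y plane (the patent does not fix the shape of the cropped region):

```python
import numpy as np

def crop_target_cloud(map_points, start_xyz, lidar_range):
    """Select map points within the lidar measurement range of the
    preset starting point (simple radial crop in the x-y plane)."""
    d = np.linalg.norm(map_points[:, :2] - np.asarray(start_xyz)[:2], axis=1)
    return map_points[d <= lidar_range]

# Hypothetical tiny "map": two points near the start, one far away.
map_points = np.array([[0.0, 0.0, 0.1],
                       [10.0, 0.0, 0.2],
                       [100.0, 5.0, 0.3]])
target = crop_target_cloud(map_points, start_xyz=(0.0, 0.0, 0.0),
                           lidar_range=50.0)  # keeps the two nearby points
```

A square crop or a per-sector range mask would work the same way; only the distance test changes.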
  • the laser point cloud data can be converted from the lidar coordinate system to the point cloud map coordinate system as a reference point cloud data.
  • step S104: the target point cloud data and the reference point cloud data are matched to determine the transformation matrix between the target point cloud data and the reference point cloud data.
  • a point cloud registration algorithm can be used to match the target point cloud data with the reference point cloud data, for example, the ICP (Iterative Closest Point) algorithm, the GICP (Generalized Iterative Closest Point) algorithm, the NDT (Normal Distributions Transform) algorithm, or a positioning algorithm based on multiresolution Gaussian mixture maps; the choice is not limited to these examples.
  • the transformation matrix from the target point cloud data to the reference point cloud data can be obtained.
  • the transformation matrix may be a 4 ⁇ 4 rotation and translation matrix, and the error between the target point cloud data and the reference point cloud data after the rotation and translation transformation is performed according to the transformation matrix satisfies a preset condition.
  • step S106: the coordinate information of the current position point is determined according to the coordinate information of the preset starting point and the transformation matrix.
  • the coordinate information of the preset starting point is expressed as (x, y, z), and the transformation matrix M is a 4×4 rotation and translation matrix. The coordinate information (x', y', z') of the current position point can then be determined according to [x', y', z', 1]^T = M · [x, y, z, 1]^T in homogeneous coordinates.
  • the coordinate values (x, y, z) and (x', y', z') of the x, y, z axes in the map coordinate system can represent longitude, latitude and altitude, respectively.
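The coordinate determination in step S106 is a homogeneous-coordinate multiplication. A minimal sketch with hypothetical names; the matrix here is a pure translation chosen only for illustration, whereas the real matrix comes from point cloud matching:

```python
import numpy as np

def transform_point(T, xyz):
    """Apply a 4x4 rotation-translation matrix to a 3D point using
    homogeneous coordinates: p' = T @ [x, y, z, 1]."""
    p = np.append(np.asarray(xyz, dtype=float), 1.0)
    return (T @ p)[:3]

# Hypothetical transformation matrix: a pure translation by (1, 2, 3).
T = np.eye(4)
T[:3, 3] = [1.0, 2.0, 3.0]
moved = transform_point(T, (10.0, 20.0, 30.0))  # -> [11., 22., 33.]
```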
  • the laser point cloud data measured by the vehicle at the current position is used as the reference point cloud data, and the point cloud data corresponding to the preset starting point of the vehicle in the point cloud map is used as the target point cloud data;
  • the point cloud data and the reference point cloud data are matched to determine the transformation matrix between the target point cloud data and the reference point cloud data;
  • the coordinate information of the current position point is determined according to the coordinate information and transformation matrix of the preset starting point.
  • the method of the above embodiment is suitable for scenarios where the vehicle departs at or near the preset starting point. It does not rely on a GPS device, solves the problem that the vehicle's position cannot be determined when the GPS signal is poor or absent, enables the vehicle to obtain an accurate initial position quickly in various complex environments, and improves the accuracy of position determination.
  • the present disclosure improves the process of matching the target point cloud data with the reference point cloud data, which will be described below with reference to FIG. 2 .
  • FIG. 2 is a flowchart of other embodiments of the disclosed location determination method. As shown in FIG. 2, the method of this embodiment includes steps S202-S216.
  • step S202: the laser point cloud data measured by the vehicle at the current position is obtained as the reference point cloud data, and the point cloud data corresponding to the preset starting point of the vehicle in the point cloud map is obtained as the target point cloud data.
  • step S204: the target point cloud data is divided into ground target point cloud data and non-ground target point cloud data, and the reference point cloud data is divided into ground reference point cloud data and non-ground reference point cloud data.
  • the method of dividing point cloud data into ground point cloud data and non-ground point cloud data (that is, dividing the target point cloud data into ground target point cloud data and non-ground target point cloud data, and dividing the reference point cloud data into ground reference point cloud data and non-ground reference point cloud data) may adopt the prior art, and is not repeated here.
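Since the patent leaves the segmentation method to the prior art, the stand-in below uses a naive height threshold; production systems typically fit a ground plane (e.g. with RANSAC) instead. All names and the threshold value are hypothetical:

```python
import numpy as np

def split_ground(points, z_threshold=0.2):
    """Naive ground/non-ground split: points whose z value lies below
    the threshold are treated as ground. Real pipelines usually fit a
    plane (e.g. RANSAC) rather than thresholding raw heights."""
    mask = points[:, 2] < z_threshold
    return points[mask], points[~mask]

cloud = np.array([[1.0, 0.0, 0.05],   # road surface
                  [2.0, 1.0, 0.10],   # road surface
                  [1.5, 0.5, 1.80]])  # obstacle
ground, non_ground = split_ground(cloud)  # 2 ground points, 1 obstacle
```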
  • step S206: a first transformation matrix is determined according to the ground target point cloud data and the ground reference point cloud data.
  • the ground reference point cloud data may be down-sampled first to obtain down-sampled ground reference point cloud data; the ground target point cloud data and the down-sampled ground reference point cloud data are matched to obtain The rotation and translation matrix from the ground target point cloud data to the down-sampled ground reference point cloud data is used as the first transformation matrix.
  • after down-sampling, the ground reference point cloud data can match the ground target point cloud data in density, distance between points, and so on.
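Down-sampling is commonly implemented with a voxel grid, which keeps one centroid per occupied voxel so the two clouds end up with comparable density. A minimal NumPy sketch with hypothetical names; the patent does not prescribe a specific down-sampling method:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Voxel-grid down-sampling: bucket points into cubic voxels of the
    given size and keep the centroid of each occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel key and average the members of each group.
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1.0)
    return sums / counts[:, None]

dense = np.array([[0.01, 0.0, 0.0],   # these two share a voxel ...
                  [0.02, 0.0, 0.0],
                  [1.50, 0.0, 0.0]])  # ... this one is alone
sparse = voxel_downsample(dense, voxel_size=1.0)  # 2 centroids remain
```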
  • the ICP algorithm may be used to match the ground target point cloud data with the down-sampled ground reference point cloud data to obtain the first transformation matrix M1.
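For reference, a bare-bones ICP loop alternates nearest-neighbour pairing with an SVD-based (Kabsch) rigid fit. This is a simplified sketch with hypothetical names and brute-force nearest neighbours, not the patent's implementation; real systems use spatial indices and convergence checks:

```python
import numpy as np

def rigid_fit(src, dst):
    """Best-fit 4x4 rotation-translation matrix mapping paired points
    src onto dst (Kabsch / SVD alignment)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = c_dst - R @ c_src
    return T

def icp(src, dst, iters=20):
    """Minimal ICP: alternate nearest-neighbour pairing and rigid fitting."""
    T_total, cur = np.eye(4), src.copy()
    for _ in range(iters):
        # Brute-force nearest neighbour in dst for every current point.
        idx = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1).argmin(axis=1)
        T = rigid_fit(cur, dst[idx])
        cur = cur @ T[:3, :3].T + T[:3, 3]
        T_total = T @ T_total
    return T_total

# Recover a small known shift between two copies of a synthetic cloud.
xs = np.arange(0.0, 5.0)
pts = np.array([[x, y, 0.3 * x + 0.1 * y * y] for x in xs for y in xs])
shift = np.array([0.05, -0.03, 0.02])
T = icp(pts, pts + shift)   # T[:3, 3] recovers the applied shift
```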
  • step S208: a second transformation matrix is determined according to the first transformation matrix, the non-ground target point cloud data and the non-ground reference point cloud data.
  • the non-ground target point cloud data is transformed according to the first transformation matrix to obtain transformed non-ground target point cloud data; the non-ground reference point cloud data is down-sampled to obtain down-sampled non-ground reference point cloud data; and the transformed non-ground target point cloud data is matched with the down-sampled non-ground reference point cloud data to obtain a rotation and translation matrix from the transformed non-ground target point cloud data to the down-sampled non-ground reference point cloud data as the second transformation matrix. After being transformed by the first transformation matrix, the non-ground target point cloud data is closer to the non-ground reference point cloud data.
  • further matching the transformed non-ground target point cloud data with the down-sampled non-ground reference point cloud data therefore yields higher accuracy.
  • the amount of data processing can be reduced and the matching efficiency can be improved.
  • algorithms such as positioning algorithms based on multiresolution Gaussian mixture maps can be used to match the transformed non-ground target point cloud data with the down-sampled non-ground reference point cloud data to obtain the second transformation matrix M2.
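Numerically, the two stages compose as in this toy example. The matrices here are hypothetical (M1 corrects only height, M2 only the x-y offset); the real ones come from the two matching stages above:

```python
import numpy as np

# Hypothetical stage matrices: M1 mainly fixes height, M2 the x-y offset.
M1 = np.eye(4); M1[2, 3] = 0.30                    # +0.30 m in z
M2 = np.eye(4); M2[0, 3], M2[1, 3] = 0.50, -0.20   # x-y correction

start = np.array([100.0, 200.0, 10.0, 1.0])  # preset start, homogeneous
first = M1 @ start    # z of the current position is read from this stage
second = M2 @ first   # x and y are read from this stage
x, y, z = second[0], second[1], first[2]
```

Taking z from the first stage and x, y from the second mirrors steps S210 and S214.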
  • step S210: the coordinate information of the preset starting point is transformed according to the first transformation matrix to obtain first coordinate information, and the z-axis coordinate value in the first coordinate information is used as the z-axis coordinate value of the current position point.
  • the z-axis coordinate value of the current position point can represent the height of the current position point.
  • step S212: the preset attitude information corresponding to the preset starting point is transformed according to the first transformation matrix to obtain first attitude information, and the roll angle value and pitch angle value in the first attitude information are used as the current roll angle value and pitch angle value of the vehicle.
  • by matching the ground point cloud data, the transformation of the height, roll angle and pitch angle can be determined more accurately. Therefore, determining the z-axis coordinate value of the current position point and the current roll angle value and pitch angle value of the vehicle according to the first transformation matrix M1 is more accurate.
  • the preset attitude information corresponding to the preset starting point may be attitude information when the point cloud data corresponding to the preset starting point in the point cloud map is generated.
  • a pose (position and attitude) matrix of the preset starting point can be constructed from its coordinate information and preset attitude information; the pose matrix is a 4×4 matrix. The first pose matrix of the current position can be obtained by multiplying it by the first transformation matrix M1, and the first coordinate information and first attitude information can be obtained from the first pose matrix, thereby obtaining the z-axis coordinate value of the current position point and the current roll angle value and pitch angle value of the vehicle.
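A sketch of this pose-matrix bookkeeping, with hypothetical values and an assumed Z-Y-X (yaw-pitch-roll) Euler convention, which the patent does not specify:

```python
import numpy as np

def pose_matrix(x, y, z, roll, pitch, yaw):
    """4x4 pose matrix from a position and Z-Y-X (yaw-pitch-roll) Euler
    angles -- an assumed convention for illustration."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [x, y, z]
    return T

def euler_from_pose(T):
    """Recover (roll, pitch, yaw) from a pose matrix (non-degenerate case)."""
    R = T[:3, :3]
    return (np.arctan2(R[2, 1], R[2, 2]),   # roll
            -np.arcsin(R[2, 0]),            # pitch
            np.arctan2(R[1, 0], R[0, 0]))   # yaw

# Hypothetical first stage: multiply the preset start pose by M1, then
# read off z, roll and pitch of the current position from the result.
P0 = pose_matrix(100.0, 200.0, 10.0, 0.0, 0.0, 0.3)
M1 = pose_matrix(0.0, 0.0, 0.5, 0.02, -0.01, 0.0)  # small height/tilt fix
P1 = M1 @ P0
z_now = P1[2, 3]
roll_now, pitch_now, _ = euler_from_pose(P1)
```

The second stage is identical bookkeeping: multiply P1 by M2 and read off x, y and the heading angle.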
  • Steps S210 and S212 may be performed in parallel after step S206.
  • step S214: the first coordinate information is transformed according to the second transformation matrix to obtain second coordinate information, and the x-axis coordinate value and y-axis coordinate value in the second coordinate information are used as the x-axis coordinate value and y-axis coordinate value of the current position point.
  • the x-axis coordinate value and the y-axis coordinate value of the current position point can respectively represent the longitude and latitude of the current position point in the point cloud map coordinate system.
  • step S216: the first attitude information is transformed according to the second transformation matrix to obtain second attitude information, and the heading angle value in the second attitude information is taken as the current heading angle value of the vehicle.
  • the first pose matrix can be multiplied by the second transformation matrix M2 to obtain a second pose matrix, and the second coordinate information and second attitude information can be obtained from the second pose matrix, so as to obtain the x-axis coordinate value and y-axis coordinate value of the current position point and the current heading angle value of the vehicle.
  • by matching the non-ground point cloud data, the transformation of longitude, latitude and heading angle can be determined more accurately. Therefore, determining the x-axis coordinate value and y-axis coordinate value of the current position point and the current heading angle value of the vehicle according to the second transformation matrix M2 is more accurate.
  • Step S214 and step S216 may be performed in parallel.
  • for the specific transformation process, reference may be made to the prior art, and details are not described here again.
  • the method of the above embodiment divides the point cloud data into ground point cloud data and non-ground point cloud data and performs two matchings: one on the ground target point cloud data and the ground reference point cloud data, and one on the non-ground target point cloud data and the non-ground reference point cloud data, to determine the first transformation matrix and the second transformation matrix respectively.
  • matching the ground point cloud data can more accurately determine the changes in height, pitch angle and roll angle between the preset starting point and the current position point.
  • matching the non-ground point cloud data can more accurately determine the changes in longitude, latitude and heading between the preset departure point and the current position. Therefore, the method of the above embodiment can determine the pose information of the vehicle at the current position more accurately.
  • the present disclosure also provides a position determination device, which will be described below with reference to FIG. 3 .
  • FIG. 3 is a block diagram of some embodiments of the disclosed position determination apparatus. As shown in FIG. 3 , the apparatus 30 in this embodiment includes: an acquisition module 310 , a matching module 320 , and a determination module 330 .
  • The acquisition module 310 is configured to acquire the laser point cloud data measured at the current position point of the vehicle as reference point cloud data, and to acquire the point cloud data corresponding to the preset starting point of the vehicle in the point cloud map as target point cloud data.
  • In some embodiments, the acquisition module 310 is configured to acquire, in the point cloud map, according to the lidar measurement range corresponding to the reference point cloud data, the point cloud data within the range corresponding to the lidar measurement range centered on the preset starting point, as the target point cloud data.
  • the matching module 320 is configured to match the target point cloud data with the reference point cloud data, and determine the transformation matrix between the target point cloud data and the reference point cloud data.
  • the matching module is configured to divide the target point cloud data into ground target point cloud data and non-ground target point cloud data, and divide the reference point cloud data into ground reference point cloud data and non-ground reference point cloud data;
  • the first transformation matrix is determined according to the ground target point cloud data and the ground reference point cloud data;
  • the second transformation matrix is determined according to the first transformation matrix, the non-ground target point cloud data and the non-ground reference point cloud data.
  • The matching module 320 is configured to down-sample the ground reference point cloud data to obtain down-sampled ground reference point cloud data, and to match the ground target point cloud data with the down-sampled ground reference point cloud data to obtain the rotation-translation matrix from the ground target point cloud data to the down-sampled ground reference point cloud data as the first transformation matrix.
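The matching step above can be sketched in a few lines of NumPy. This is a hedged, minimal illustration (brute-force nearest neighbours plus a Kabsch/SVD alignment per iteration), not the implementation used by the disclosure; a production system would use a k-d tree and a library registration routine such as ICP from a point cloud toolkit.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid motion (Kabsch/SVD) mapping src onto dst,
    returned as a 4x4 homogeneous rotation-translation matrix."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = c_dst - R @ c_src
    return M

def icp(target, reference, iters=20):
    """Tiny ICP: brute-force nearest neighbours + one Kabsch step per iteration."""
    M = np.eye(4)
    cur = target.copy()
    for _ in range(iters):
        # nearest reference point for every target point (O(n*m), demo only)
        d = np.linalg.norm(cur[:, None, :] - reference[None, :, :], axis=2)
        step = best_rigid_transform(cur, reference[d.argmin(axis=1)])
        cur = cur @ step[:3, :3].T + step[:3, 3]
        M = step @ M
    return M

# synthetic check: the target cloud is the reference cloud moved by a known
# small rotation about z plus a translation, so ICP should undo that motion
rng = np.random.default_rng(0)
ground_reference = rng.uniform(-5, 5, (200, 3))
a = 0.05
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
ground_target = ground_reference @ Rz.T + np.array([0.2, -0.1, 0.05])
M1 = icp(ground_target, ground_reference)   # plays the role of the first transformation matrix
aligned = ground_target @ M1[:3, :3].T + M1[:3, 3]
err = np.abs(aligned - ground_reference).max()
```

The recovered 4×4 matrix plays the role of the first transformation matrix; the same machinery applies to the non-ground pass.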
  • The matching module 320 is configured to transform the non-ground target point cloud data according to the first transformation matrix to obtain transformed non-ground target point cloud data; to down-sample the non-ground reference point cloud data to obtain down-sampled non-ground reference point cloud data; and to match the transformed non-ground target point cloud data with the down-sampled non-ground reference point cloud data to obtain the rotation-translation matrix from the transformed non-ground target point cloud data to the down-sampled non-ground reference point cloud data as the second transformation matrix.
  • the determining module 330 is configured to determine the coordinate information of the current position point according to the coordinate information of the preset starting point and the transformation matrix.
  • The determination module 330 is configured to transform the coordinate information of the preset starting point according to the first transformation matrix to obtain first coordinate information, and to use the z-axis coordinate value in the first coordinate information as the z-axis coordinate value of the current position point; and to transform the first coordinate information according to the second transformation matrix to obtain second coordinate information, and to use the x-axis and y-axis coordinate values in the second coordinate information as the x-axis and y-axis coordinate values of the current position point.
  • The determination module 330 is further configured to transform the preset attitude information corresponding to the preset starting point according to the first transformation matrix to obtain first attitude information, and to use the roll angle value and pitch angle value in the first attitude information as the current roll angle value and pitch angle value of the vehicle; and to transform the first attitude information according to the second transformation matrix to obtain second attitude information, and to use the heading angle value in the second attitude information as the current heading angle value of the vehicle.
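The division of labour above (the first matrix supplying z, roll and pitch; the second supplying x, y and heading) can be illustrated numerically. The numeric matrices and the z-y-x Euler convention below are assumptions made for this sketch, not values taken from the patent.

```python
import numpy as np

def euler_from_rotation(R):
    """Recover (roll, pitch, yaw) from R = Rz(yaw) @ Ry(pitch) @ Rx(roll)
    (one common convention; the text does not fix one)."""
    pitch = -np.arcsin(R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return roll, pitch, yaw

# hypothetical pose matrix of the preset starting point (identity attitude)
T0 = np.eye(4)
T0[:3, 3] = [10.0, 20.0, 1.5]

# hypothetical transformation matrices from the two matching passes
M1 = np.eye(4); M1[2, 3] = 0.3                       # ground pass: height correction
yaw = 0.1                                            # non-ground pass: yaw + x/y correction
M2 = np.eye(4)
M2[:2, :2] = [[np.cos(yaw), -np.sin(yaw)], [np.sin(yaw), np.cos(yaw)]]
M2[0, 3], M2[1, 3] = 0.5, -0.2

first_pose = M1 @ T0                  # read z, roll, pitch from here
second_pose = M2 @ first_pose         # read x, y, heading from here

z = first_pose[2, 3]
roll, pitch, _ = euler_from_rotation(first_pose[:3, :3])
x, y = second_pose[0, 3], second_pose[1, 3]
_, _, heading = euler_from_rotation(second_pose[:3, :3])
```

Here `z` carries only the first pass's height correction, while `x`, `y` and `heading` additionally reflect the second pass, mirroring how the bullets assign the outputs.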
  • the position determination apparatuses in the embodiments of the present disclosure may be implemented by various computing devices or computer systems, which will be described below in conjunction with FIG. 4 and FIG. 5 .
  • FIG. 4 is a block diagram of some embodiments of the disclosed position determination apparatus.
  • The apparatus 40 of this embodiment includes a memory 410 and a processor 420 coupled to the memory 410, the processor 420 being configured to execute the position determination method in any of the embodiments of the present disclosure based on instructions stored in the memory 410.
  • the memory 410 may include, for example, a system memory, a fixed non-volatile storage medium, and the like.
  • the system memory stores, for example, an operating system, an application program, a boot loader (Boot Loader), a database, and other programs. Both the memory and the processor may be implemented in hardware.
  • FIG. 5 is a structural diagram of other embodiments of the position determination apparatus of the present disclosure.
  • the apparatus 50 in this embodiment includes: a memory 510 and a processor 520 , which are similar to the memory 410 and the processor 420 , respectively. It may also include an input-output interface 530, a network interface 540, a storage interface 550, and the like. These interfaces 530 , 540 , 550 and the memory 510 and the processor 520 can be connected, for example, through a bus 560 .
  • the input and output interface 530 provides a connection interface for input and output devices such as a display, a mouse, a keyboard, and a touch screen.
  • the network interface 540 provides a connection interface for various networked devices, for example, it can be connected to a database server or a cloud storage server.
  • The storage interface 550 provides a connection interface for external storage devices such as SD cards and USB flash drives.
  • the present disclosure also provides a position determination system, which will be described below in conjunction with FIG. 6 .
  • The system 6 of this embodiment includes: the position determination apparatus 30/40/50 of any of the foregoing embodiments; a vehicle body 62, the position determination apparatus 30/40/50 being arranged on the vehicle body 62; and a lidar device 64 arranged on the vehicle body and configured to scan at the current position point to obtain laser point cloud data.
  • the present disclosure also provides a non-transitory computer-readable storage medium on which a computer program is stored, wherein when the program is executed by a processor, the position determination method of any of the foregoing embodiments is implemented.
  • Embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing device, causing a series of operational steps to be performed on the computer or other programmable device to produce a computer-implemented process, such that the instructions executed on the computer or other programmable device provide steps configured to implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

A position determination method, apparatus and system, and a computer-readable storage medium, relating to the field of computer technology. The method includes: acquiring laser point cloud data measured at the current position point of a vehicle as reference point cloud data, and acquiring the point cloud data corresponding to a preset starting point of the vehicle in a point cloud map as target point cloud data (S102); matching the target point cloud data with the reference point cloud data to determine a transformation matrix between the target point cloud data and the reference point cloud data (S104); and determining the coordinate information of the current position point according to the coordinate information of the preset starting point and the transformation matrix (S106).

Description

Position determination method, apparatus and system, and computer-readable storage medium
Cross-reference to related applications
This application is based on and claims priority to CN application No. 202010658041.5, filed on July 9, 2020, the disclosure of which is hereby incorporated into this application in its entirety.
Technical field
The present disclosure relates to the field of computer technology, and in particular to a position determination method, apparatus and system, and a computer-readable storage medium.
Background
A vehicle in an autonomous driving state needs to know its position in a map in real time, and the initial position is particularly important. For the scenario in which a vehicle departs at or near a preset starting point, the vehicle also needs to locate itself and determine its precise position before departure.
At present, vehicles generally use GPS (Global Positioning System) devices, or combined GPS and INS (Inertial Navigation System) devices, to determine the initial position.
Summary
According to some embodiments of the present disclosure, a position determination method is provided, including: acquiring laser point cloud data measured at the current position point of a vehicle as reference point cloud data, and acquiring the point cloud data corresponding to a preset starting point of the vehicle in a point cloud map as target point cloud data; matching the target point cloud data with the reference point cloud data to determine a transformation matrix between the target point cloud data and the reference point cloud data; and determining the coordinate information of the current position point according to the coordinate information of the preset starting point and the transformation matrix.
In some embodiments, matching the target point cloud data with the reference point cloud data to determine the transformation matrix between them includes: dividing the target point cloud data into ground target point cloud data and non-ground target point cloud data, and dividing the reference point cloud data into ground reference point cloud data and non-ground reference point cloud data; determining a first transformation matrix according to the ground target point cloud data and the ground reference point cloud data; and determining a second transformation matrix according to the first transformation matrix, the non-ground target point cloud data and the non-ground reference point cloud data.
In some embodiments, determining the first transformation matrix according to the ground target point cloud data and the ground reference point cloud data includes: down-sampling the ground reference point cloud data to obtain down-sampled ground reference point cloud data; and matching the ground target point cloud data with the down-sampled ground reference point cloud data to obtain the rotation-translation matrix from the ground target point cloud data to the down-sampled ground reference point cloud data as the first transformation matrix.
In some embodiments, determining the second transformation matrix according to the first transformation matrix, the non-ground target point cloud data and the non-ground reference point cloud data includes: transforming the non-ground target point cloud data according to the first transformation matrix to obtain transformed non-ground target point cloud data; down-sampling the non-ground reference point cloud data to obtain down-sampled non-ground reference point cloud data; and matching the transformed non-ground target point cloud data with the down-sampled non-ground reference point cloud data to obtain the rotation-translation matrix from the transformed non-ground target point cloud data to the down-sampled non-ground reference point cloud data as the second transformation matrix.
In some embodiments, determining the coordinate information of the current position point according to the coordinate information of the preset starting point and the transformation matrix includes: transforming the coordinate information of the preset starting point according to the first transformation matrix to obtain first coordinate information, and using the z-axis coordinate value in the first coordinate information as the z-axis coordinate value of the current position point; and transforming the first coordinate information according to the second transformation matrix to obtain second coordinate information, and using the x-axis and y-axis coordinate values in the second coordinate information as the x-axis and y-axis coordinate values of the current position point.
In some embodiments, the method further includes: transforming the preset attitude information corresponding to the preset starting point according to the first transformation matrix to obtain first attitude information, and using the roll angle value and pitch angle value in the first attitude information as the current roll angle value and pitch angle value of the vehicle; and transforming the first attitude information according to the second transformation matrix to obtain second attitude information, and using the heading angle value in the second attitude information as the current heading angle value of the vehicle.
In some embodiments, acquiring the point cloud data corresponding to the preset starting point of the vehicle in the point cloud map as the target point cloud data includes: according to the lidar measurement range corresponding to the reference point cloud data, acquiring, in the point cloud map, the point cloud data within the range corresponding to the lidar measurement range centered on the preset starting point, as the target point cloud data.
According to other embodiments of the present disclosure, a position determination apparatus is provided, including: an acquisition module configured to acquire laser point cloud data measured at the current position point of a vehicle as reference point cloud data, and to acquire the point cloud data corresponding to a preset starting point of the vehicle in a point cloud map as target point cloud data; a matching module configured to match the target point cloud data with the reference point cloud data to determine a transformation matrix between the target point cloud data and the reference point cloud data; and a determination module configured to determine the coordinate information of the current position point according to the coordinate information of the preset starting point and the transformation matrix.
In some embodiments, the matching module is configured to divide the target point cloud data into ground target point cloud data and non-ground target point cloud data, and to divide the reference point cloud data into ground reference point cloud data and non-ground reference point cloud data; to determine a first transformation matrix according to the ground target point cloud data and the ground reference point cloud data; and to determine a second transformation matrix according to the first transformation matrix, the non-ground target point cloud data and the non-ground reference point cloud data.
In some embodiments, the matching module is configured to down-sample the ground reference point cloud data to obtain down-sampled ground reference point cloud data, and to match the ground target point cloud data with the down-sampled ground reference point cloud data to obtain the rotation-translation matrix from the ground target point cloud data to the down-sampled ground reference point cloud data as the first transformation matrix.
In some embodiments, the matching module is configured to transform the non-ground target point cloud data according to the first transformation matrix to obtain transformed non-ground target point cloud data; to down-sample the non-ground reference point cloud data to obtain down-sampled non-ground reference point cloud data; and to match the transformed non-ground target point cloud data with the down-sampled non-ground reference point cloud data to obtain the rotation-translation matrix from the transformed non-ground target point cloud data to the down-sampled non-ground reference point cloud data as the second transformation matrix.
In some embodiments, the determination module is configured to transform the coordinate information of the preset starting point according to the first transformation matrix to obtain first coordinate information, and to use the z-axis coordinate value in the first coordinate information as the z-axis coordinate value of the current position point; and to transform the first coordinate information according to the second transformation matrix to obtain second coordinate information, and to use the x-axis and y-axis coordinate values in the second coordinate information as the x-axis and y-axis coordinate values of the current position point.
In some embodiments, the determination module is further configured to transform the preset attitude information corresponding to the preset starting point according to the first transformation matrix to obtain first attitude information, and to use the roll angle value and pitch angle value in the first attitude information as the current roll angle value and pitch angle value of the vehicle; and to transform the first attitude information according to the second transformation matrix to obtain second attitude information, and to use the heading angle value in the second attitude information as the current heading angle value of the vehicle.
In some embodiments, the acquisition module is configured to acquire, in the point cloud map, according to the lidar measurement range corresponding to the reference point cloud data, the point cloud data within the range corresponding to the lidar measurement range centered on the preset starting point, as the target point cloud data.
According to still other embodiments of the present disclosure, a position determination apparatus is provided, including: a processor; and a memory coupled to the processor and configured to store instructions which, when executed by the processor, cause the processor to perform the position determination method of any of the foregoing embodiments.
According to yet other embodiments of the present disclosure, a non-transitory computer-readable storage medium is provided, on which a computer program is stored, wherein the program, when executed by a processor, implements the position determination method of any of the foregoing embodiments.
According to still other embodiments of the present disclosure, a position determination system is provided, including: the position determination apparatus of any of the foregoing embodiments; a vehicle body, the position determination apparatus being arranged on the vehicle body; and a lidar device arranged on the vehicle body and configured to scan at the current position point to obtain laser point cloud data.
Other features and advantages of the present disclosure will become clear from the following detailed description of exemplary embodiments of the present disclosure with reference to the accompanying drawings.
Brief description of the drawings
In order to explain the embodiments of the present disclosure or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present disclosure; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic flowchart of a position determination method according to some embodiments of the present disclosure.
FIG. 2 is a schematic flowchart of a position determination method according to other embodiments of the present disclosure.
FIG. 3 is a schematic structural diagram of a position determination apparatus according to some embodiments of the present disclosure.
FIG. 4 is a schematic structural diagram of a position determination apparatus according to other embodiments of the present disclosure.
FIG. 5 is a schematic structural diagram of a position determination apparatus according to still other embodiments of the present disclosure.
FIG. 6 is a schematic structural diagram of a position determination system according to some embodiments of the present disclosure.
Detailed description
The technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the drawings in the embodiments of the present disclosure. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The following description of at least one exemplary embodiment is merely illustrative and in no way limits the present disclosure or its application or use. Based on the embodiments of the present disclosure, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present disclosure.
The inventors found that the determination of a vehicle's initial position currently depends heavily on GPS. When the GPS signal at the initial position is poor or absent, the position given by GPS will have a large error, or no position information can be given at all, which causes the initial position determination to fail and affects the vehicle's subsequent autonomous driving process.
One technical problem to be solved by the present disclosure is: for the scenario in which a vehicle departs at or near a preset starting point, improving the accuracy of determining the vehicle's position.
For the scenario in which a vehicle departs at or near a preset starting point, the present disclosure provides a method for determining the position of the vehicle, described below with reference to FIG. 1.
FIG. 1 is a flowchart of some embodiments of the position determination method of the present disclosure. As shown in FIG. 1, the method of this embodiment includes steps S102 to S106.
In step S102, the laser point cloud data measured at the current position point of the vehicle is acquired as reference point cloud data, and the point cloud data corresponding to the preset starting point of the vehicle in the point cloud map is acquired as target point cloud data.
The position information of the preset starting point can be measured in advance; it only needs to be measured once and can be used permanently. The position information of the preset starting point can be expressed in different ways depending on the coordinate system used by the vehicle. For example, if the vehicle uses the WGS84 coordinate system, the position information of the preset starting point can be expressed as longitude and latitude; if the vehicle uses a SLAM map, it can be expressed as the position of the vehicle relative to the map origin. These examples are not limiting.
The position information of the preset starting point can be bound to a client instruction. For example, the preset starting point is named the home point, and the client instruction is named home pose. Before the vehicle enters the autonomous driving state, it is placed at or near the preset starting point. With the vehicle powered on, the client sends the home pose instruction to the vehicle. After receiving the position information of the preset starting point, the position determination apparatus can use the laser point cloud data obtained by scanning the surrounding environment with the lidar as the reference point cloud data, obtain a high-precision point cloud map according to the position information of the preset starting point, and acquire the point cloud data corresponding to the preset starting point in the point cloud map as the target point cloud data.
In some embodiments, according to the lidar measurement range corresponding to the reference point cloud data, the point cloud data within the range corresponding to the lidar measurement range centered on the preset starting point is acquired in the point cloud map as the target point cloud data. If the position information of the preset starting point uses a coordinate system different from that of the point cloud map, the position information of the preset starting point can first be converted into the point cloud map coordinate system. The range of the target point cloud data can be equal to or larger than the lidar measurement range.
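Selecting the map points within the lidar measurement range centered on the preset starting point can be sketched as a simple distance mask. This is an illustration only; `map_points`, `home` and the 80 m range are hypothetical values, not taken from the text.

```python
import numpy as np

def crop_by_range(map_points, center, lidar_range):
    """Keep map points whose horizontal (x, y) distance to `center`
    lies within the lidar measurement range."""
    d = np.linalg.norm(map_points[:, :2] - center[:2], axis=1)
    return map_points[d <= lidar_range]

map_points = np.array([[0.0, 0.0, 0.1],
                       [30.0, 40.0, 0.2],    # 50 m away, inside range
                       [200.0, 0.0, 0.3]])   # outside an 80 m range
home = np.array([0.0, 0.0, 0.0])             # preset starting point
target_cloud = crop_by_range(map_points, home, lidar_range=80.0)
```

As the text notes, the crop radius may also be chosen larger than the measurement range.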
If the coordinate system of the laser point cloud data measured at the current position point differs from the point cloud map coordinate system, for example the lidar coordinate system, the laser point cloud data can be converted from the lidar coordinate system to the point cloud map coordinate system and used as the reference point cloud data.
In step S104, the target point cloud data is matched with the reference point cloud data, and the transformation matrix between the target point cloud data and the reference point cloud data is determined.
A point cloud registration algorithm can be used to match the target point cloud data with the reference point cloud data, for example, the ICP (Iterative Closest Point) algorithm, the GICP (Generalized Iterative Closest Point) algorithm, the NDT (Normal Distributions Transform) algorithm, or a localization algorithm based on multiresolution Gaussian mixture maps, among others; these examples are not limiting. After the target point cloud data is matched with the reference point cloud data, the transformation matrix from the target point cloud data to the reference point cloud data can be obtained. The transformation matrix can be a 4×4 rotation-translation matrix, such that the error between the target point cloud data, after being rotated and translated according to the transformation matrix, and the reference point cloud data satisfies a preset condition.
In step S106, the coordinate information of the current position point is determined according to the coordinate information of the preset starting point and the transformation matrix.
For example, denote the coordinate information of the preset starting point as (x, y, z) and write the 4×4 rotation-translation transformation matrix in homogeneous form as M = [[R, T], [0, 1]], where R is the 3×3 rotation part and T is the 3×1 translation part. The coordinate information (x', y', z') of the current position point can then be determined from [x', y', z', 1]^T = M · [x, y, z, 1]^T.
The coordinate values (x, y, z) and (x', y', z') on the x, y and z axes of the map coordinate system can represent longitude, latitude and height, respectively.
In the method of the above embodiment, the laser point cloud data measured at the current position point of the vehicle is used as the reference point cloud data, and the point cloud data corresponding to the preset starting point of the vehicle in the point cloud map is used as the target point cloud data; the target point cloud data is matched with the reference point cloud data to determine the transformation matrix between them; and the coordinate information of the current position point is determined according to the coordinate information of the preset starting point and the transformation matrix. The method of the above embodiment is applicable to scenarios in which the vehicle departs at or near the preset starting point and does not depend on GPS devices, solving the problem that the vehicle's position cannot be determined when the GPS signal is poor or absent. It enables the vehicle to quickly obtain an accurate initial position in various complex environments, improving the accuracy of position determination.
To further improve the accuracy of position determination, the present disclosure improves the process of matching the target point cloud data with the reference point cloud data, described below with reference to FIG. 2.
FIG. 2 is a flowchart of other embodiments of the position determination method of the present disclosure. As shown in FIG. 2, the method of this embodiment includes steps S202 to S216.
In step S202, the laser point cloud data measured at the current position point of the vehicle is acquired as reference point cloud data, and the point cloud data corresponding to the preset starting point of the vehicle in the point cloud map is acquired as target point cloud data.
In step S204, the target point cloud data is divided into ground target point cloud data and non-ground target point cloud data, and the reference point cloud data is divided into ground reference point cloud data and non-ground reference point cloud data.
Existing techniques can be used to divide point cloud data into ground point cloud data and non-ground point cloud data, that is, to divide the target point cloud data into ground target point cloud data and non-ground target point cloud data, and to divide the reference point cloud data into ground reference point cloud data and non-ground reference point cloud data; details are not repeated here.
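Step S204 defers the ground/non-ground split to existing techniques. As a hedged sketch of one common, simple approach (a fixed height threshold; real systems more often fit a ground plane, e.g. with RANSAC), assuming the cloud is already expressed in a frame whose z axis is vertical:

```python
import numpy as np

def split_ground(points, ground_z=0.0, tol=0.15):
    """Naive ground/non-ground split: points whose z coordinate lies within
    `tol` of the assumed ground height are labelled ground. `ground_z` and
    `tol` are illustrative parameters, not values from the disclosure."""
    mask = np.abs(points[:, 2] - ground_z) <= tol
    return points[mask], points[~mask]

cloud = np.array([[1.0, 2.0, 0.02],
                  [3.0, 1.0, -0.10],
                  [2.0, 2.0, 1.60],    # e.g. a wall point
                  [0.5, 4.0, 0.90]])   # e.g. a vehicle point
ground, non_ground = split_ground(cloud)
```

The same split is applied to both the target cloud and the reference cloud before the two matching passes.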
In step S206, the first transformation matrix is determined according to the ground target point cloud data and the ground reference point cloud data.
In some embodiments, the ground reference point cloud data can first be down-sampled to obtain down-sampled ground reference point cloud data; the ground target point cloud data is then matched with the down-sampled ground reference point cloud data to obtain the rotation-translation matrix from the ground target point cloud data to the down-sampled ground reference point cloud data as the first transformation matrix. Down-sampling reduces the amount of data to be processed and improves matching efficiency. The down-sampled ground reference point cloud data can match the ground target point cloud data in density, point spacing, and so on. The ICP algorithm can be used to match the ground target point cloud data with the down-sampled ground reference point cloud data to obtain the first transformation matrix M1.
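The down-sampling step can be illustrated with a small voxel-grid filter that keeps one centroid per cell. This is one common choice, not an algorithm mandated by the text, and `voxel` is an assumed parameter.

```python
import numpy as np

def voxel_downsample(points, voxel=0.5):
    """Keep one representative point (the centroid) per cubic voxel cell."""
    cells = {}
    for p in points:
        key = tuple(np.floor(p / voxel).astype(int))   # integer voxel index
        cells.setdefault(key, []).append(p)
    return np.array([np.mean(c, axis=0) for c in cells.values()])

dense = np.array([[0.0, 0.0, 0.0],
                  [0.1, 0.1, 0.0],   # same 0.5 m voxel as the first point
                  [1.0, 1.0, 0.0]])
sparse = voxel_downsample(dense, voxel=0.5)
```

Choosing the voxel size close to the target cloud's point spacing matches the densities of the two clouds, as the text suggests.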
In step S208, the second transformation matrix is determined according to the first transformation matrix, the non-ground target point cloud data and the non-ground reference point cloud data.
In some embodiments, the non-ground target point cloud data is transformed according to the first transformation matrix to obtain transformed non-ground target point cloud data; the non-ground reference point cloud data is down-sampled to obtain down-sampled non-ground reference point cloud data; and the transformed non-ground target point cloud data is matched with the down-sampled non-ground reference point cloud data to obtain the rotation-translation matrix from the transformed non-ground target point cloud data to the down-sampled non-ground reference point cloud data as the second transformation matrix. After being transformed with the first transformation matrix, the non-ground target point cloud data is closer to the non-ground reference point cloud data, so further matching the transformed non-ground target point cloud data with the down-sampled non-ground reference point cloud data gives higher accuracy. Down-sampling reduces the amount of data to be processed and improves matching efficiency. An algorithm such as a localization algorithm based on multiresolution Gaussian mixture maps can be used to match the transformed non-ground target point cloud data with the down-sampled non-ground reference point cloud data to obtain the second transformation matrix M2.
In step S210, the coordinate information of the preset starting point is transformed according to the first transformation matrix to obtain first coordinate information, and the z-axis coordinate value in the first coordinate information is used as the z-axis coordinate value of the current position point. The z-axis coordinate value of the current position point can represent the height of the current position point.
In step S212, the preset attitude information corresponding to the preset starting point is transformed according to the first transformation matrix to obtain first attitude information, and the roll angle value and pitch angle value in the first attitude information are used as the current roll angle value and pitch angle value of the vehicle.
Based on the matching of the ground target point cloud data and the ground reference point cloud data, the changes in height, roll angle and pitch angle can be determined more accurately. Therefore, using the first transformation matrix M1 to determine the z-axis coordinate value of the current position point and the current roll angle value and pitch angle value of the vehicle is more accurate.
The preset attitude information corresponding to the preset starting point can be the attitude information at the time the point cloud data corresponding to the preset starting point in the point cloud map was generated. The preset attitude information includes a preset roll angle value, a preset pitch angle value and a preset heading angle value, which can generally all default to 0. Suppose the coordinate information of the preset starting point is expressed as P0 = (x, y, z), and the attitude matrix R (for example, a 3×3 matrix) is obtained from the preset attitude information corresponding to the preset starting point; the pose (position and attitude) matrix of the preset starting point can then be written as T = [[R, P0^T], [0, 1]].
The pose matrix is a 4×4 matrix; multiplying it by the first transformation matrix M1 gives the first pose matrix of the current position point, from which the first coordinate information and the first attitude information can be obtained, and thus the z-axis coordinate value of the current position point and the current roll angle value and pitch angle value of the vehicle.
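The construction of the pose matrix T = [[R, P0^T], [0, 1]] and its multiplication by M1 can be sketched as follows. The z-y-x Euler convention and the numeric M1 below are assumptions made for the sketch, not values fixed by the disclosure.

```python
import numpy as np

def pose_matrix(xyz, roll, pitch, yaw):
    """4x4 pose matrix T = [[R, p], [0, 1]] with R = Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = xyz
    return T

# preset starting point with the default all-zero attitude, as in the text
T0 = pose_matrix([10.0, 20.0, 1.5], 0.0, 0.0, 0.0)

# hypothetical first transformation matrix: a pure 0.4 m height correction
M1 = np.eye(4)
M1[2, 3] = 0.4

first_pose = M1 @ T0            # first pose matrix of the current position point
z_current = first_pose[2, 3]    # z-axis coordinate value read from it
```

With the default zero attitude, R is the identity and the first pose matrix simply shifts P0 by the correction encoded in M1.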
Steps S210 and S212 can be executed in parallel after step S206.
In step S214, the first coordinate information is transformed according to the second transformation matrix to obtain second coordinate information, and the x-axis and y-axis coordinate values in the second coordinate information are used as the x-axis and y-axis coordinate values of the current position point. The x-axis and y-axis coordinate values of the current position point can represent the longitude and latitude, respectively, of the current position point in the point cloud map coordinate system.
In step S216, the first attitude information is transformed according to the second transformation matrix to obtain second attitude information, and the heading angle value in the second attitude information is used as the current heading angle value of the vehicle.
The first pose matrix can be multiplied by the second transformation matrix M2 to obtain the second pose matrix, from which the second coordinate information and the second attitude information can be obtained, and thus the x-axis coordinate value and y-axis coordinate value of the current position point and the current heading angle value of the vehicle. Based on the matching of the non-ground target point cloud data and the non-ground reference point cloud data, the changes in longitude, latitude and heading angle can be determined more accurately. Therefore, determining the x-axis coordinate value and y-axis coordinate value of the current position point and the current heading angle value of the vehicle according to the second transformation matrix M2 is more accurate.
Steps S214 and S216 can be executed in parallel. For the conversion between coordinate systems and the transformation of pose matrices in the above embodiments, reference can be made to the prior art; details are not repeated here.
The method of the above embodiment divides the point cloud data into ground point cloud data and non-ground point cloud data and performs two matching passes, one on the ground target point cloud data and the ground reference point cloud data and one on the non-ground target point cloud data and the non-ground reference point cloud data, to determine the first transformation matrix and the second transformation matrix respectively. Matching the ground target point cloud data with the ground reference point cloud data allows the changes in height, pitch angle and roll angle between the preset starting point and the current position point to be determined more accurately; matching the non-ground target point cloud data with the non-ground reference point cloud data allows the changes in longitude, latitude and heading between them to be determined more accurately. Therefore, the method of the above embodiment can determine the pose information of the vehicle at the current position point more accurately.
The present disclosure also provides a position determination apparatus, described below with reference to FIG. 3.
FIG. 3 is a structural diagram of some embodiments of the position determination apparatus of the present disclosure. As shown in FIG. 3, the apparatus 30 of this embodiment includes: an acquisition module 310, a matching module 320, and a determination module 330.
The acquisition module 310 is configured to acquire the laser point cloud data measured at the current position point of the vehicle as reference point cloud data, and to acquire the point cloud data corresponding to the preset starting point of the vehicle in the point cloud map as target point cloud data.
In some embodiments, the acquisition module 310 is configured to acquire, in the point cloud map, according to the lidar measurement range corresponding to the reference point cloud data, the point cloud data within the range corresponding to the lidar measurement range centered on the preset starting point, as the target point cloud data.
The matching module 320 is configured to match the target point cloud data with the reference point cloud data and determine the transformation matrix between the target point cloud data and the reference point cloud data. In some embodiments, the matching module is configured to divide the target point cloud data into ground target point cloud data and non-ground target point cloud data, and to divide the reference point cloud data into ground reference point cloud data and non-ground reference point cloud data; to determine the first transformation matrix according to the ground target point cloud data and the ground reference point cloud data; and to determine the second transformation matrix according to the first transformation matrix, the non-ground target point cloud data and the non-ground reference point cloud data.
In some embodiments, the matching module 320 is configured to down-sample the ground reference point cloud data to obtain down-sampled ground reference point cloud data, and to match the ground target point cloud data with the down-sampled ground reference point cloud data to obtain the rotation-translation matrix from the ground target point cloud data to the down-sampled ground reference point cloud data as the first transformation matrix.
In some embodiments, the matching module 320 is configured to transform the non-ground target point cloud data according to the first transformation matrix to obtain transformed non-ground target point cloud data; to down-sample the non-ground reference point cloud data to obtain down-sampled non-ground reference point cloud data; and to match the transformed non-ground target point cloud data with the down-sampled non-ground reference point cloud data to obtain the rotation-translation matrix from the transformed non-ground target point cloud data to the down-sampled non-ground reference point cloud data as the second transformation matrix.
The determination module 330 is configured to determine the coordinate information of the current position point according to the coordinate information of the preset starting point and the transformation matrix.
In some embodiments, the determination module 330 is configured to transform the coordinate information of the preset starting point according to the first transformation matrix to obtain first coordinate information, and to use the z-axis coordinate value in the first coordinate information as the z-axis coordinate value of the current position point; and to transform the first coordinate information according to the second transformation matrix to obtain second coordinate information, and to use the x-axis and y-axis coordinate values in the second coordinate information as the x-axis and y-axis coordinate values of the current position point.
In some embodiments, the determination module 330 is further configured to transform the preset attitude information corresponding to the preset starting point according to the first transformation matrix to obtain first attitude information, and to use the roll angle value and pitch angle value in the first attitude information as the current roll angle value and pitch angle value of the vehicle; and to transform the first attitude information according to the second transformation matrix to obtain second attitude information, and to use the heading angle value in the second attitude information as the current heading angle value of the vehicle.
The position determination apparatuses in the embodiments of the present disclosure can each be implemented by various computing devices or computer systems, described below with reference to FIG. 4 and FIG. 5.
FIG. 4 is a structural diagram of some embodiments of the position determination apparatus of the present disclosure. As shown in FIG. 4, the apparatus 40 of this embodiment includes: a memory 410 and a processor 420 coupled to the memory 410, the processor 420 being configured to execute the position determination method in any of the embodiments of the present disclosure based on instructions stored in the memory 410.
The memory 410 can include, for example, system memory, fixed non-volatile storage media, and the like. The system memory stores, for example, an operating system, application programs, a boot loader, a database, and other programs. Both the memory and the processor can be implemented in hardware.
FIG. 5 is a structural diagram of other embodiments of the position determination apparatus of the present disclosure. As shown in FIG. 5, the apparatus 50 of this embodiment includes: a memory 510 and a processor 520, which are similar to the memory 410 and the processor 420, respectively. It can also include an input/output interface 530, a network interface 540, a storage interface 550, and so on. These interfaces 530, 540 and 550, the memory 510 and the processor 520 can be connected, for example, through a bus 560. The input/output interface 530 provides a connection interface for input/output devices such as a display, a mouse, a keyboard and a touch screen. The network interface 540 provides a connection interface for various networked devices; for example, it can be connected to a database server or a cloud storage server. The storage interface 550 provides a connection interface for external storage devices such as SD cards and USB flash drives.
The present disclosure also provides a position determination system, described below with reference to FIG. 6.
FIG. 6 is a structural diagram of some embodiments of the position determination system of the present disclosure. As shown in FIG. 6, the system 6 of this embodiment includes: the position determination apparatus 30/40/50 of any of the foregoing embodiments; a vehicle body 62, the position determination apparatus 30/40/50 being arranged on the vehicle body 62; and a lidar device 64 arranged on the vehicle body and configured to scan at the current position point to obtain laser point cloud data.
The present disclosure also provides a non-transitory computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the position determination method of any of the foregoing embodiments.
Those skilled in the art should understand that the embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
The present disclosure is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to embodiments of the present disclosure. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations thereof, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus configured to implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be loaded onto a computer or other programmable data processing device, causing a series of operational steps to be performed on the computer or other programmable device to produce a computer-implemented process, such that the instructions executed on the computer or other programmable device provide steps configured to implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The above are only preferred embodiments of the present disclosure and are not intended to limit it. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present disclosure shall be included in its protection scope.

Claims (11)

  1. A position determination method, comprising:
    acquiring laser point cloud data measured at the current position point of a vehicle as reference point cloud data, and acquiring the point cloud data corresponding to a preset starting point of the vehicle in a point cloud map as target point cloud data;
    matching the target point cloud data with the reference point cloud data to determine a transformation matrix between the target point cloud data and the reference point cloud data; and
    determining the coordinate information of the current position point according to the coordinate information of the preset starting point and the transformation matrix.
  2. The position determination method according to claim 1, wherein matching the target point cloud data with the reference point cloud data to determine the transformation matrix between the target point cloud data and the reference point cloud data comprises:
    dividing the target point cloud data into ground target point cloud data and non-ground target point cloud data, and dividing the reference point cloud data into ground reference point cloud data and non-ground reference point cloud data;
    determining a first transformation matrix according to the ground target point cloud data and the ground reference point cloud data; and
    determining a second transformation matrix according to the first transformation matrix, the non-ground target point cloud data and the non-ground reference point cloud data.
  3. The position determination method according to claim 2, wherein determining the first transformation matrix according to the ground target point cloud data and the ground reference point cloud data comprises:
    down-sampling the ground reference point cloud data to obtain down-sampled ground reference point cloud data; and
    matching the ground target point cloud data with the down-sampled ground reference point cloud data to obtain a rotation-translation matrix from the ground target point cloud data to the down-sampled ground reference point cloud data as the first transformation matrix.
  4. The position determination method according to claim 2, wherein determining the second transformation matrix according to the first transformation matrix, the non-ground target point cloud data and the non-ground reference point cloud data comprises:
    transforming the non-ground target point cloud data according to the first transformation matrix to obtain transformed non-ground target point cloud data;
    down-sampling the non-ground reference point cloud data to obtain down-sampled non-ground reference point cloud data; and
    matching the transformed non-ground target point cloud data with the down-sampled non-ground reference point cloud data to obtain a rotation-translation matrix from the transformed non-ground target point cloud data to the down-sampled non-ground reference point cloud data as the second transformation matrix.
  5. The position determination method according to claim 2, wherein determining the coordinate information of the current position point according to the coordinate information of the preset starting point and the transformation matrix comprises:
    transforming the coordinate information of the preset starting point according to the first transformation matrix to obtain first coordinate information, and using the z-axis coordinate value in the first coordinate information as the z-axis coordinate value of the current position point; and
    transforming the first coordinate information according to the second transformation matrix to obtain second coordinate information, and using the x-axis coordinate value and y-axis coordinate value in the second coordinate information as the x-axis coordinate value and y-axis coordinate value of the current position point.
  6. The position determination method according to claim 2, further comprising:
    transforming preset attitude information corresponding to the preset starting point according to the first transformation matrix to obtain first attitude information, and using the roll angle value and pitch angle value in the first attitude information as the current roll angle value and pitch angle value of the vehicle; and
    transforming the first attitude information according to the second transformation matrix to obtain second attitude information, and using the heading angle value in the second attitude information as the current heading angle value of the vehicle.
  7. The position determination method according to claim 1, wherein acquiring the point cloud data corresponding to the preset starting point of the vehicle in the point cloud map as the target point cloud data comprises:
    according to the lidar measurement range corresponding to the reference point cloud data, acquiring, in the point cloud map, the point cloud data within the range corresponding to the lidar measurement range centered on the preset starting point, as the target point cloud data.
  8. A position determination apparatus, comprising:
    an acquisition module configured to acquire laser point cloud data measured at the current position point of a vehicle as reference point cloud data, and to acquire the point cloud data corresponding to a preset starting point of the vehicle in a point cloud map as target point cloud data;
    a matching module configured to match the target point cloud data with the reference point cloud data to determine a transformation matrix between the target point cloud data and the reference point cloud data; and
    a determination module configured to determine the coordinate information of the current position point according to the coordinate information of the preset starting point and the transformation matrix.
  9. A position determination apparatus, comprising:
    a processor; and
    a memory coupled to the processor and configured to store instructions which, when executed by the processor, cause the processor to perform the position determination method according to any one of claims 1-7.
  10. A non-transitory computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the steps of the method according to any one of claims 1-7.
  11. A position determination system, comprising: the position determination apparatus according to claim 8 or 9;
    a vehicle body, the position determination apparatus being arranged on the vehicle body; and
    a lidar device arranged on the vehicle body and configured to scan at the current position point to obtain laser point cloud data.
PCT/CN2021/094394 2020-07-09 2021-05-18 Position determination method, apparatus and system, and computer-readable storage medium WO2022007504A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US18/004,508 US20230252674A1 (en) 2020-07-09 2021-05-18 Position determination method, device, and system, and computer-readable storage medium
EP21837484.1A EP4152052A1 (en) 2020-07-09 2021-05-18 Location determination method, device, and system, and computer readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010658041.5 2020-07-09
CN202010658041.5A CN111812658B (zh) Position determination method, apparatus and system, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2022007504A1 true WO2022007504A1 (zh) 2022-01-13

Family

ID=72842070

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/094394 WO2022007504A1 (zh) Position determination method, apparatus and system, and computer-readable storage medium

Country Status (4)

Country Link
US (1) US20230252674A1 (zh)
EP (1) EP4152052A1 (zh)
CN (1) CN111812658B (zh)
WO (1) WO2022007504A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114353807A (zh) Robot positioning method and positioning apparatus
CN115877349A (zh) Lidar-based intersection vehicle positioning method and system

Families Citing this family (11)

Publication number Priority date Publication date Assignee Title
CN111812658B (zh) Position determination method, apparatus and system, and computer-readable storage medium
CN112835085B (zh) Method and apparatus for determining a vehicle position
CN112382116A (zh) Method and system for acquiring a point cloud map of a vehicle
CN112348897A (zh) Pose determination method and apparatus, electronic device, and computer-readable storage medium
CN112802111B (zh) Object model construction method and apparatus
CN115220009A (zh) Data processing method and apparatus, electronic device, and computer storage medium
CN113465606A (zh) End-station positioning method and apparatus, and electronic device
CN113744409B (zh) Workpiece positioning method, apparatus, system, device and medium
CN114549605B (zh) Point cloud matching-based image optimization method, apparatus, device and storage medium
CN114459471B (zh) Positioning information determination method and apparatus, electronic device and storage medium
CN115410173B (zh) Multi-modal fusion high-precision map element recognition method, apparatus, device and medium

Citations (6)

Publication number Priority date Publication date Assignee Title
CN109887028A (zh) Unmanned vehicle auxiliary positioning method based on point cloud data registration
CN109900298A (zh) Vehicle positioning calibration method and system
US20190226852A1 (en) Simultaneous localization and mapping methods and apparatus
CN110082779A (zh) Vehicle pose positioning method and system based on 3D lidar
CN110927740A (zh) Mobile robot positioning method
CN111812658A (zh) Position determination method, apparatus and system, and computer-readable storage medium

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
US9292913B2 (en) Augmented three dimensional point collection of vertical structures
CN106969763B (zh) Method and apparatus for determining the yaw angle of a driverless vehicle
CN109059906B (zh) Vehicle positioning method and apparatus, electronic device, and storage medium
CN110609290B (zh) Lidar matching positioning method and apparatus
CN110794392B (zh) Vehicle positioning method and apparatus, vehicle, and storage medium
CN110988894B (zh) Real-time positioning method for driverless vehicles with multi-source data fusion for port environments

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
US20190226852A1 (en) Simultaneous localization and mapping methods and apparatus
CN109887028A (zh) Unmanned vehicle auxiliary positioning method based on point cloud data registration
CN109900298A (zh) Vehicle positioning calibration method and system
CN110082779A (zh) Vehicle pose positioning method and system based on 3D lidar
CN110927740A (zh) Mobile robot positioning method
CN111812658A (zh) Position determination method, apparatus and system, and computer-readable storage medium

Cited By (3)

Publication number Priority date Publication date Assignee Title
CN114353807A (zh) Robot positioning method and positioning apparatus
CN114353807B (zh) Robot positioning method and positioning apparatus
CN115877349A (zh) Lidar-based intersection vehicle positioning method and system

Also Published As

Publication number Publication date
CN111812658A (zh) 2020-10-23
US20230252674A1 (en) 2023-08-10
EP4152052A1 (en) 2023-03-22
CN111812658B (zh) 2021-11-02

Similar Documents

Publication Publication Date Title
WO2022007504A1 (zh) Position determination method, apparatus and system, and computer-readable storage medium
CN109270545B (zh) Positioning ground-truth verification method, apparatus, device and storage medium
JP6918885B2 (ja) Relative position and orientation calibration method and apparatus, device and medium
JP7179110B2 (ja) Positioning method, apparatus, computing device, computer-readable storage medium and computer program
CN109059906B (zh) Vehicle positioning method and apparatus, electronic device, and storage medium
CN110927708B (zh) Calibration method, apparatus and device for intelligent roadside units
WO2022007602A1 (zh) Method and apparatus for determining a vehicle position
CN108279670B (zh) Method, device and computer-readable medium for adjusting a point cloud data acquisition trajectory
WO2022179094A1 (zh) Joint calibration method, system, medium and device for vehicle-mounted lidar extrinsic parameters
CN114111774B (zh) Vehicle positioning method, system, device and computer-readable storage medium
WO2023131048A1 (zh) Pose information determination method and apparatus, electronic device and storage medium
CN113933818A (zh) Method, device, storage medium and program product for calibrating lidar extrinsic parameters
WO2021016806A1 (zh) High-precision map positioning method, system, platform and computer-readable storage medium
CN111469781A (zh) Method and apparatus for outputting information
CN113436233A (zh) Registration method and apparatus for autonomous vehicles, electronic device, and vehicle
CN112154355B (zh) High-precision map positioning method, system, platform and computer-readable storage medium
CN112835086B (zh) Method and apparatus for determining a vehicle position
CN112665579B (zh) Star map recognition method and apparatus based on geometric verification
CN116295466A (zh) Map generation method and apparatus, electronic device, storage medium, and vehicle
CN113538699A (zh) 3D point cloud-based positioning method, apparatus, device and storage medium
US20210405197A1 (en) GLOBAL LOCALIZATION APPARATUS AND METHOD IN DYNAMIC ENVIRONMENTS USING 3D LiDAR SCANNER
CN110647591A (zh) Method and apparatus for testing vector maps
CN113777635B (zh) Global navigation satellite data calibration method and apparatus, terminal and storage medium
Wang et al. Updating Smartphone's Exterior Orientation Parameters by Image-based Localization Method Using Geo-tagged Image Datasets and 3D Point Cloud as References
Sousa et al. Extrinsic sensor calibration methods for mobile robots: a short review

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21837484

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021837484

Country of ref document: EP

Effective date: 20221212

NENP Non-entry into the national phase

Ref country code: DE