CN113538699A - Positioning method, device and equipment based on three-dimensional point cloud and storage medium


Info

Publication number
CN113538699A
CN113538699A (application CN202110687387.2A)
Authority
CN
China
Prior art keywords
vehicle
point cloud
lidar
pose
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110687387.2A
Other languages
Chinese (zh)
Inventor
李晓欢
苏昭宇
陈倩
唐欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangxi Comprehensive Transportation Big Data Research Institute
Guilin University of Electronic Technology
Original Assignee
Guangxi Comprehensive Transportation Big Data Research Institute
Guilin University of Electronic Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangxi Comprehensive Transportation Big Data Research Institute and Guilin University of Electronic Technology
Priority to CN202110687387.2A
Publication of CN113538699A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/003 Navigation within 3D models or images
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88 Lidar systems specially adapted for specific applications
    • G01S 17/93 Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S 17/931 Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10032 Satellite or aerial image; Remote sensing
    • G06T 2207/10044 Radar image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present application provides a positioning method, apparatus, device and storage medium based on a three-dimensional point cloud, relating to the technical field of autonomous driving. The method includes: acquiring integrated navigation data and point cloud data measured by a lidar, and processing the point cloud data and the integrated navigation data separately; first determining current pose information of the vehicle based on the integrated navigation data and determining the transformation matrix and translation matrix used in point cloud positioning; then determining lidar pose information of the vehicle based on the transformation matrix and translation matrix; then determining the pose state calculation equation of the vehicle based on the lidar pose information and the attitude information of the vehicle; and finally positioning the vehicle with an extended Kalman filter method based on the pose state calculation equation. This better matches the actual motion characteristics of the vehicle and ensures the accuracy of vehicle positioning.

Figure 202110687387 (abstract drawing)

Description

Positioning method, apparatus, device and storage medium based on a three-dimensional point cloud

Technical Field

The present application relates to the technical field of autonomous driving, and in particular to a positioning method, apparatus, device and storage medium based on a three-dimensional point cloud.

Background Art

In positioning based on high-precision maps, the non-real-time nature of the map and sensor noise cause inconsistencies between the data collected in real time and the map. This problem typically degrades the accuracy of map-matching-based localization algorithms, and can even lead to local optima or to localization that fails to converge. To address this, many approaches fuse different sensors for positioning, and most of these fusion methods are based on the Kalman filter (KF). However, real vehicle motion is not a linear system, and because point clouds are noisy, classical registration algorithms such as ICP or NDT accumulate error, so the point cloud localization may fail to converge and the final position estimate becomes inaccurate.

Summary of the Invention

The purpose of the present application is to address at least one of the technical defects described above, in particular the accumulated error in the prior art, which causes non-convergence during point cloud localization and ultimately results in inaccurate positioning.

In a first aspect, a positioning method based on a three-dimensional point cloud is provided, including:

acquiring integrated navigation data of a vehicle and point cloud data measured by a lidar;

parsing the integrated navigation data to obtain current integrated-navigation pose information of the vehicle, and determining, based on the current integrated-navigation pose information, a point cloud transformation matrix and a translation matrix of the vehicle for point cloud positioning;

matching a current frame point cloud of the vehicle against a pre-built map based on the transformation matrix and the translation matrix, to obtain lidar pose information of the vehicle;

parsing the integrated navigation data to obtain current attitude information of the vehicle, and combining the current attitude information with the lidar pose information to determine a pose state calculation equation of the vehicle;

positioning the vehicle based on the pose state calculation equation using an extended Kalman filter method.

In a possible implementation of the present application, the lidar pose information of the vehicle is:

X = [L^T, W^T, V^T]^T

where L = (l_x, l_y, l_z)^T denotes the position of the lidar, with l_x, l_y, l_z its x-, y- and z-axis coordinates; W = (φ, θ, ψ)^T denotes the Euler angles of the lidar, where φ is the pitch angle, θ is the roll angle and ψ is the heading angle; and V = (v_x, v_y, v_z)^T denotes the velocity of the lidar, with v_x, v_y, v_z its x-, y- and z-axis velocities.

In a possible implementation of the present application, the current attitude information of the vehicle is:

U = [w^T, a^T]^T

where w = (w_x, w_y, w_z)^T denotes the angular velocity of the vehicle pose, with w_x, w_y, w_z the angular velocities about the x-, y- and z-axes; and a = (a_x, a_y, a_z)^T denotes the acceleration of the vehicle pose, with a_x, a_y, a_z the accelerations along the x-, y- and z-axes.

In a possible implementation of the present application, the pose state calculation equation of the vehicle is:

[pose state calculation equation, given as an equation image (Figure BDA0003125244110000023) in the original]

where one sub-equation is the vehicle pose velocity state equation (Figure BDA0003125244110000024), in which V is the vehicle velocity, and another sub-equation is the vehicle pose angular-velocity state equation (Figure BDA0003125244110000025), in which w is the vehicle pose angular velocity; the remaining terms are the direction cosine matrix (Figure BDA0003125244110000026), the state transition matrix (Figure BDA0003125244110000031) and the gravity matrix (Figure BDA0003125244110000032), where g is the gravitational acceleration.
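The equation and matrix images above are not reproduced in this text extraction. A minimal reconstruction, assuming the state equation takes the standard INS kinematic form implied by the definitions above (position driven by velocity, Euler angles by the mapped angular rate, velocity by the rotated acceleration plus gravity, using the symbols E_w, R_w and G introduced later in the detailed description), would be:

```latex
\dot{L} = V, \qquad
\dot{W} = E_w\, w, \qquad
\dot{V} = R_w\, a + G, \qquad
G = \begin{pmatrix} 0 \\ 0 \\ -g \end{pmatrix}
```

The exact entries of E_w and R_w and the sign convention for G are given only in the original images, so the form above is an assumption rather than a transcription.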

In a possible implementation of the present application, before matching the current frame point cloud of the vehicle against the pre-built map based on the transformation matrix and the translation matrix, the method further includes:

performing down-sampling on the point cloud data measured by the lidar, and performing feature extraction on the down-sampled point cloud data.

In a possible implementation of the present application, positioning the vehicle based on the pose state calculation equation using the extended Kalman filter method includes:

calculating the prior estimated covariance according to the formula (P^+)^{-1} = k · P^{-1} + (1 - k) · C^T R^{-1} C;

determining the attitude prediction equation (given as an equation image, Figure BDA0003125244110000033, in the original);

determining the Kalman gain as K = P C^T (k · R + C^T P C)^{-1};

updating the lidar pose using the update formula (given as an equation image, Figure BDA0003125244110000034, in the original);

determining the updated covariance as P^+ = P - (1 - k)^{-1} · K C P;

where P^+ is the prior estimated covariance matrix, P is the covariance matrix, the prior estimated pose is given as Figure BDA0003125244110000035 in the original, R is the system error matrix, C is the projection matrix of the pose-estimation information at the initial point cloud registration, F is the attitude information matrix of the lidar, and k is the matching degree of the point clouds between frames (its defining formula is given as Figures BDA0003125244110000036 and BDA0003125244110000037 in the original), where P_j denotes a point in the current frame point cloud and P_i denotes the corresponding point in the map closest to P_j.
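For illustration, a minimal NumPy sketch of the correction step described above is given below. It transcribes the covariance, gain and covariance-update formulas literally, assumes the lidar measurement covers the full pose state so that C and R are square, and uses the standard innovation form x + K(z - Cx) for the pose update, since the exact update formula is only available as an image in the original. All names are illustrative.

```python
import numpy as np

def ekf_correct(x_prior, P, z_lidar, C, R, k):
    """One matching-degree-weighted EKF correction step (illustrative sketch).

    x_prior : predicted pose state, shape (n,)
    P       : covariance matrix, shape (n, n)
    z_lidar : lidar pose measurement from map matching, shape (n,)
    C       : projection matrix of the pose-estimation information, shape (n, n)
    R       : system error matrix, shape (n, n)
    k       : inter-frame point cloud matching degree, assumed in (0, 1)
    """
    # Prior estimated covariance: (P+)^-1 = k * P^-1 + (1 - k) * C^T R^-1 C
    P_plus = np.linalg.inv(k * np.linalg.inv(P) + (1.0 - k) * C.T @ np.linalg.inv(R) @ C)

    # Kalman gain: K = P C^T (k R + C^T P C)^-1, as written in the text
    K = P @ C.T @ np.linalg.inv(k * R + C.T @ P @ C)

    # Pose update (assumed innovation form; the original gives this only as an image)
    x_post = x_prior + K @ (z_lidar - C @ x_prior)

    # Updated covariance: P+ = P - (1 - k)^-1 K C P  (k = 1 would be degenerate here)
    P_post = P - (1.0 / (1.0 - k)) * (K @ C @ P)

    return x_post, P_post, P_plus
```

In this weighting, k interpolates between the inverse of the propagated covariance and the measurement information term C^T R^{-1} C, so the inter-frame matching degree directly modulates how strongly the lidar measurement constrains the estimate.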

In a second aspect, a positioning apparatus based on a three-dimensional point cloud is provided, the apparatus including:

a data acquisition module, configured to acquire integrated navigation data of the vehicle and point cloud data measured by the lidar;

a matrix determination module, configured to parse the integrated navigation data to obtain current integrated-navigation pose information of the vehicle, and to determine, based on the current integrated-navigation pose information, a point cloud transformation matrix and a translation matrix of the vehicle for point cloud positioning;

a pose determination module, configured to match the current frame point cloud of the vehicle against the pre-built map based on the transformation matrix and the translation matrix, to obtain lidar pose information of the vehicle;

an equation determination module, configured to parse the integrated navigation data to obtain current attitude information of the vehicle, and to combine the current attitude information with the lidar pose information to determine the pose state calculation equation of the vehicle;

a positioning module, configured to position the vehicle based on the pose state calculation equation using the extended Kalman filter method.

In a third aspect, an electronic device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the positioning method based on a three-dimensional point cloud described above.

In a fourth aspect, a computer-readable storage medium is provided, storing at least one instruction, at least one program, a code set or an instruction set, which is loaded and executed by a processor to implement the positioning method based on a three-dimensional point cloud described above.

In the embodiments of the present application, the integrated navigation data and the point cloud data measured by the lidar are acquired and processed separately. The current pose information of the vehicle is first determined based on the integrated navigation data, and the transformation matrix and translation matrix used in point cloud positioning are determined; the lidar pose information of the vehicle is then determined based on the transformation matrix and translation matrix; the pose state calculation equation of the vehicle is then established from the lidar pose information and the attitude information of the vehicle; and finally the vehicle is positioned with the extended Kalman filter method based on the pose state calculation equation. This better matches the actual motion characteristics of the vehicle and ensures the accuracy of vehicle positioning.

Brief Description of the Drawings

To describe the technical solutions in the embodiments of the present application more clearly, the accompanying drawings required in the description of the embodiments are briefly introduced below.

FIG. 1 is a schematic flowchart of a positioning method based on a three-dimensional point cloud according to an embodiment of the present application;

FIG. 2 is a flow block diagram of a positioning method according to an embodiment of the present application;

FIG. 3 is a schematic structural diagram of a positioning apparatus based on a three-dimensional point cloud according to an embodiment of the present application;

FIG. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.

The above and other features, advantages and aspects of the embodiments of the present application will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numerals denote the same or similar elements. It should be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale.

Detailed Description of the Embodiments

The embodiments of the present application are described in detail below, and examples of the embodiments are illustrated in the accompanying drawings, in which the same or similar reference numerals denote the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the accompanying drawings are exemplary, are only used to explain the present application, and are not to be construed as limiting the present application.

Those skilled in the art will understand that, unless specifically stated otherwise, the singular forms "a", "an", "said" and "the" used herein may also include the plural forms. It should be further understood that the word "comprising" used in the specification of the present application refers to the presence of the stated features, integers, steps, operations, elements and/or components, but does not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. It should be understood that when an element is said to be "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intervening elements may also be present. Furthermore, "connected" or "coupled" as used herein may include a wireless connection or wireless coupling. The term "and/or" as used herein includes all or any unit of, and all combinations of, one or more of the associated listed items.

To make the objectives, technical solutions and advantages of the present application clearer, the embodiments of the present application are further described in detail below with reference to the accompanying drawings.

The positioning method, apparatus, electronic device and computer-readable storage medium based on a three-dimensional point cloud provided by the present application are intended to solve the above technical problems of the prior art.

The technical solutions of the present application, and how they solve the above technical problems, are described in detail below with specific embodiments. The following specific embodiments may be combined with one another, and the same or similar concepts or processes may not be repeated in some of them. The embodiments of the present application are described below with reference to the accompanying drawings.

An embodiment of the present application provides a positioning method based on a three-dimensional point cloud. As shown in FIG. 1, the method includes the following steps (a minimal interface sketch of these steps is given after the list):

Step S101: acquiring integrated navigation data of the vehicle and point cloud data measured by the lidar;

Step S102: parsing the integrated navigation data to obtain current integrated-navigation pose information of the vehicle, and determining, based on the current integrated-navigation pose information, a point cloud transformation matrix and a translation matrix of the vehicle for point cloud positioning;

Step S103: matching the current frame point cloud of the vehicle against a pre-built map based on the transformation matrix and the translation matrix, to obtain lidar pose information of the vehicle;

Step S104: parsing the integrated navigation data to obtain current attitude information of the vehicle, and combining the current attitude information with the lidar pose information to determine a pose state calculation equation of the vehicle;

Step S105: positioning the vehicle based on the pose state calculation equation using the extended Kalman filter method.
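The following Python interface sketch maps steps S101 to S105 onto one processing cycle; all class and method names are illustrative and not part of the patent.

```python
from abc import ABC, abstractmethod

class PointCloudLocalizer(ABC):
    """Interface sketch for one localization cycle (steps S101-S105)."""

    @abstractmethod
    def acquire_data(self):
        """S101: return (integrated navigation data, lidar point cloud) for this frame."""

    @abstractmethod
    def initial_transform(self, nav_data):
        """S102: parse the navigation pose and return the initial (R0, t0) for registration."""

    @abstractmethod
    def match_to_map(self, point_cloud, R0, t0):
        """S103: register the current frame against the pre-built map, return the lidar pose."""

    @abstractmethod
    def parse_attitude(self, nav_data):
        """S104: return the current attitude information (angular velocity, acceleration)."""

    @abstractmethod
    def ekf_localize(self, lidar_pose, attitude):
        """S105: fuse the lidar pose and the attitude with the extended Kalman filter."""

    def run_once(self):
        nav_data, point_cloud = self.acquire_data()
        R0, t0 = self.initial_transform(nav_data)
        lidar_pose = self.match_to_map(point_cloud, R0, t0)
        attitude = self.parse_attitude(nav_data)
        return self.ekf_localize(lidar_pose, attitude)
```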

The positioning method based on a three-dimensional point cloud provided in the embodiments of the present application is applicable to the field of autonomous driving or navigation. In positioning based on high-precision maps, the non-real-time nature of the map and sensor noise cause inconsistencies between the data collected in real time and the map. This problem typically degrades the accuracy of map-matching-based localization algorithms, and can even lead to local optima or to localization that fails to converge. Many existing approaches therefore fuse different sensors for positioning, mostly based on the Kalman filter (KF). However, real vehicle motion is not a linear system, and because point clouds are noisy, classical registration algorithms such as ICP or NDT accumulate error, so the point cloud localization may fail to converge and the final position estimate becomes inaccurate. For this reason, a point cloud positioning method based on the extended Kalman filter (EKF) is proposed.

For ease of description, a specific embodiment is taken as an example. The lidar and the GPS integrated navigation unit are mounted on an autonomous vehicle, or on a vehicle with an automatic navigation function. The on-board sensors receive the integrated navigation data and the point cloud data measured by the lidar, and the two data streams are processed separately. The integrated navigation data is first parsed to obtain the pose information of the vehicle at the current time, Z_0 = (x_0, y_0, z_0, yaw_0, pitch_0, roll_0), which serves as the initial positioning pose of the vehicle. From this current pose, the point cloud rotation matrix R_0 and translation matrix t_0 used for point cloud positioning of the vehicle in the lidar point cloud data are computed. Based on this transformation matrix and translation matrix, the current frame point cloud is matched against the pre-built map, and the lidar pose information of the vehicle is determined as

X = [L^T, W^T, V^T]^T

where L = (l_x, l_y, l_z)^T denotes the position of the lidar, W (given as Figure BDA0003125244110000071 in the original) denotes the Euler angles of pitch, roll and heading of the lidar, and V = (v_x, v_y, v_z)^T denotes the velocity of the lidar.
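A minimal sketch of how the initial rotation R_0 and translation t_0 could be assembled from the parsed navigation pose Z_0 is shown below. The Z-Y-X (yaw-pitch-roll) Euler convention and the assumption that x_0, y_0, z_0 are already expressed in map (e.g. UTM/local) coordinates are illustrative choices; the patent does not specify them.

```python
import numpy as np

def initial_registration_guess(x0, y0, z0, yaw0, pitch0, roll0):
    """Build the initial rotation R0 and translation t0 from the navigation pose Z0.

    Angles are in radians; the yaw-pitch-roll (Z-Y-X) convention is assumed.
    """
    cy, sy = np.cos(yaw0), np.sin(yaw0)
    cp, sp = np.cos(pitch0), np.sin(pitch0)
    cr, sr = np.cos(roll0), np.sin(roll0)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])   # yaw about z
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])   # pitch about y
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])   # roll about x
    R0 = Rz @ Ry @ Rx                 # rotation guess used to seed the registration
    t0 = np.array([x0, y0, z0])       # translation guess used to seed the registration
    return R0, t0
```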

The integrated navigation data is parsed to obtain the current attitude information of the vehicle, and the current attitude information is combined with the lidar pose information to determine the pose state calculation equation of the vehicle, in which the direction cosine matrix (Figure BDA0003125244110000072), the state transition matrix (Figure BDA0003125244110000073) and the gravity matrix (Figure BDA0003125244110000074) appear; these matrices are given as equation images in the original.

Positioning the vehicle based on the pose state calculation equation using the extended Kalman filter method includes: calculating the prior estimated covariance according to (P^+)^{-1} = k · P^{-1} + (1 - k) · C^T R^{-1} C; determining the attitude prediction equation (given as Figure BDA0003125244110000075 in the original); determining the Kalman gain as K = P C^T (k · R + C^T P C)^{-1}; updating the lidar pose using the update formula (given as Figure BDA0003125244110000076 in the original); and determining the updated covariance as P^+ = P - (1 - k)^{-1} · K C P; where P is the prior estimated covariance matrix, R is the system error matrix, C is the projection matrix of the pose-estimation information at the initial point cloud registration, F is the attitude information matrix of the lidar, and k is the matching degree of the point clouds between frames (its defining formula is given as Figure BDA0003125244110000077 in the original), where P_j denotes a point in the current frame point cloud and P_i denotes the corresponding point in the map closest to P_j.

In this embodiment of the present application, the integrated navigation data and the point cloud data measured by the lidar are acquired and processed separately. The current pose information of the vehicle is first determined based on the integrated navigation data, and the transformation matrix and translation matrix used in point cloud positioning are determined; the lidar pose information of the vehicle is then determined based on the transformation matrix and translation matrix; the pose state calculation equation of the vehicle is then established from the lidar pose information and the attitude information of the vehicle; and finally the vehicle is positioned with the extended Kalman filter method based on the pose state calculation equation. This better matches the actual motion characteristics of the vehicle and ensures the accuracy of vehicle positioning.

In a possible implementation of the present application, the integrated navigation data is parsed to obtain the current attitude information of the vehicle,

U = [w^T, a^T]^T

where w = (w_x, w_y, w_z)^T denotes the angular velocity of the vehicle pose, and a = (a_x, a_y, a_z)^T denotes the acceleration of the vehicle pose.

In a possible implementation of the present application, before matching the current frame point cloud of the vehicle against the pre-built map based on the transformation matrix and the translation matrix, the method further includes:

performing down-sampling on the point cloud data measured by the lidar, and performing feature extraction on the down-sampled point cloud data.

In this embodiment of the present application, the vehicle acquires the point cloud data measured by the lidar and the integrated navigation data, and the two sets of data are processed differently. The acquired point cloud data is down-sampled; the purpose of down-sampling is to filter out points that are too far from the lidar and points that are too far from the lidar center, which improves the accuracy and efficiency of registration. Feature points are then extracted from the point cloud data: the points of one scan are classified by their curvature values, with feature points whose curvature is greater than a threshold treated as edge points and feature points whose curvature is smaller than the threshold treated as planar points. To distribute the feature points evenly over the environment, one scan is divided into 4 independent sub-regions, and each sub-region provides at most 2 edge points and 4 planar points. In addition, unstable feature points (blemish points) are excluded. At the same time, the acquired integrated navigation data is parsed to obtain GPS data, and a UTM transformation is applied to the GPS data, i.e. the latitude and longitude are converted into local coordinates; the result of this transformation is used to compute the rotation matrix R and translation matrix t for the initial point cloud registration. Once the current frame point cloud has been matched to the pre-built map, the lidar pose measurement of the current vehicle can be computed, and the GPS data is no longer needed.
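A minimal NumPy sketch of the range-based down-sampling and the curvature-based edge/planar feature selection described above is given below. The curvature estimate (range variation over five neighbors on each side along the scan), the threshold value and the range bounds are illustrative assumptions; the patent only specifies the classification rule, the four sub-regions, and the per-region limits of 2 edge points and 4 planar points.

```python
import numpy as np

def downsample_by_range(scan_xyz, min_range=1.0, max_range=80.0):
    """Drop points outside an assumed usable range window around the lidar."""
    r = np.linalg.norm(scan_xyz, axis=1)
    return scan_xyz[(r > min_range) & (r < max_range)]

def extract_features(scan_xyz, curvature_threshold=0.1,
                     n_subregions=4, max_edge=2, max_planar=4):
    """Classify scan points into edge and planar features by curvature.

    scan_xyz : (N, 3) points of one lidar scan, in scan order.
    Returns (edge_points, planar_points) as arrays of 3D points.
    """
    ranges = np.linalg.norm(scan_xyz, axis=1)
    n = len(scan_xyz)
    k = 5                           # neighbors per side for the curvature estimate (assumed)
    curvature = np.full(n, np.nan)
    for i in range(k, n - k):
        window = ranges[i - k:i + k + 1]
        curvature[i] = float(np.sum(window - ranges[i]) ** 2) / (ranges[i] ** 2 + 1e-9)

    edge_idx, planar_idx = [], []
    valid = np.arange(k, n - k)
    # Four independent sub-regions keep the features spread over the scene.
    for seg in np.array_split(valid, n_subregions):
        order = seg[np.argsort(curvature[seg])]
        # At most 2 edge points per region: largest curvature above the threshold.
        edge_idx.extend(int(i) for i in order[::-1][:max_edge]
                        if curvature[i] > curvature_threshold)
        # At most 4 planar points per region: smallest curvature below the threshold.
        planar_idx.extend(int(i) for i in order[:max_planar]
                          if curvature[i] < curvature_threshold)
    return scan_xyz[edge_idx], scan_xyz[planar_idx]
```

Exclusion of unstable feature points (for example points on occluded edges or on surfaces nearly parallel to the beam) would be an additional filtering pass and is omitted here.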

In this embodiment of the present application, as shown in FIG. 2, after the point cloud data measured by the lidar and the integrated navigation data have been processed, EKF fusion is performed and an extended Kalman filter (EKF) state and observation model is established. The lidar pose state of the vehicle is X = [L^T, W^T, V^T]^T, where L = (l_x, l_y, l_z)^T denotes the position of the lidar, the Euler angle vector (given as Figure BDA0003125244110000081 in the original) denotes the pitch, roll and heading angles of the lidar, and V = (v_x, v_y, v_z)^T denotes the velocity of the vehicle. After parsing the raw integrated navigation data, the attitude information U = [w^T, a^T]^T is obtained, where w = (w_x, w_y, w_z)^T denotes the angular velocity of the lidar and a = (a_x, a_y, a_z)^T denotes the acceleration of the lidar. The state equation for vehicle pose estimation can then be established (given as Figure BDA0003125244110000091 in the original), where E_w is the state transition matrix, R_w is the direction cosine matrix, and G represents gravity. During vehicle motion the x-axis directions of the integrated navigation and the lidar navigation remain aligned, so the corresponding matrices take the forms given as Figures BDA0003125244110000092, BDA0003125244110000093 and BDA0003125244110000094 in the original.

Finally, the EKF prediction and update are carried out: the prior estimated covariance is (P^+)^{-1} = k · P^{-1} + (1 - k) · C^T R^{-1} C; the lidar attitude prediction equation is given as Figures BDA0003125244110000095 and BDA0003125244110000096 in the original; the Kalman gain is K = P C^T (k · R + C^T P C)^{-1}; the lidar attitude update is given as Figure BDA0003125244110000097 in the original; and the posterior estimated covariance is updated as P^+ = P - (1 - k)^{-1} · K C P; where P is the prior estimated covariance matrix, R is the system error matrix, C is the projection matrix of the pose-estimation information at the initial point cloud registration, F is the attitude information matrix output by the lidar, and k is the matching degree of the point clouds between frames (its defining formula is given as Figure BDA0003125244110000098 in the original), with P_j a point in the current frame point cloud and P_i the corresponding point in the map closest to P_j. The EKF finally yields the optimal estimate of the lidar-measured pose (Figure BDA0003125244110000099 in the original), and the positioning information of the current vehicle is obtained through a coordinate transformation.
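The defining formula for the matching degree k is available only as an equation image in the original. As one plausible realization, the sketch below scores the registered current frame against the map from the nearest-neighbor residuals between each point P_j and its closest map point P_i; the mapping from the mean residual to a score in (0, 1] is an illustrative assumption, not the patent's formula.

```python
import numpy as np
from scipy.spatial import cKDTree

def matching_degree(current_frame_pts, map_pts, max_dist=1.0):
    """Illustrative matching degree k between the registered frame and the map.

    current_frame_pts : (N, 3) current frame, already transformed into map coordinates
    map_pts           : (M, 3) pre-built map point cloud
    max_dist          : saturation distance for the residuals (assumed)
    """
    tree = cKDTree(map_pts)
    # For each point P_j of the current frame, find the closest map point P_i.
    dists, _ = tree.query(current_frame_pts, k=1)
    mean_residual = float(np.mean(np.minimum(dists, max_dist)))
    # 1.0 for perfect overlap, approaching 0.0 as the residuals approach max_dist.
    return 1.0 - mean_residual / max_dist
```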

In this embodiment of the present application, the integrated navigation data and the point cloud data measured by the lidar are acquired and processed separately. The current pose information of the vehicle is first determined based on the integrated navigation data, and the transformation matrix and translation matrix used in point cloud positioning are determined; the lidar pose information of the vehicle is then determined based on the transformation matrix and translation matrix; the pose state calculation equation of the vehicle is then established from the lidar pose information and the attitude information of the vehicle; and finally the vehicle is positioned with the extended Kalman filter method based on the pose state calculation equation. This better matches the actual motion characteristics of the vehicle and ensures the accuracy of vehicle positioning.

An embodiment of the present application provides a positioning apparatus based on a three-dimensional point cloud. As shown in FIG. 3, the positioning apparatus 30 based on a three-dimensional point cloud may include: a data acquisition module 301, a matrix determination module 302, a pose determination module 303, an equation determination module 304, and a positioning module 305, wherein:

the data acquisition module 301 is configured to acquire integrated navigation data of the vehicle and point cloud data measured by the lidar;

the matrix determination module 302 is configured to parse the integrated navigation data to obtain current integrated-navigation pose information of the vehicle, and to determine, based on the current integrated-navigation pose information, a point cloud transformation matrix and a translation matrix of the vehicle for point cloud positioning;

the pose determination module 303 is configured to match the current frame point cloud of the vehicle against the pre-built map based on the transformation matrix and the translation matrix, to obtain lidar pose information of the vehicle;

the equation determination module 304 is configured to parse the integrated navigation data to obtain current attitude information of the vehicle, and to combine the current attitude information with the lidar pose information to determine the pose state calculation equation of the vehicle;

the positioning module 305 is configured to position the vehicle based on the pose state calculation equation using the extended Kalman filter method.

In a possible implementation of the present application, the current attitude information of the vehicle is:

U = [w^T, a^T]^T

where w = (w_x, w_y, w_z)^T denotes the angular velocity of the vehicle pose, and a = (a_x, a_y, a_z)^T denotes the acceleration of the vehicle pose.

In a possible implementation of the present application, the pose state calculation equation of the vehicle is given as an equation image (Figure BDA0003125244110000111) in the original, in which the direction cosine matrix (Figure BDA0003125244110000112), the state transition matrix (Figure BDA0003125244110000113) and the gravity matrix (Figure BDA0003125244110000114) appear.

In a possible implementation of the present application, before matching the current frame point cloud of the vehicle against the pre-built map based on the transformation matrix and the translation matrix, the method further includes:

performing down-sampling on the point cloud data measured by the lidar, and performing feature extraction on the down-sampled point cloud data.

In a possible implementation of the present application, positioning the vehicle based on the pose state calculation equation using the extended Kalman filter method includes:

calculating the prior estimated covariance according to the formula (P^+)^{-1} = k · P^{-1} + (1 - k) · C^T R^{-1} C;

determining the attitude prediction equation (given as Figure BDA0003125244110000115 in the original);

determining the Kalman gain as K = P C^T (k · R + C^T P C)^{-1};

updating the lidar pose using the update formula (given as Figure BDA0003125244110000116 in the original);

determining the updated covariance as P^+ = P - (1 - k)^{-1} · K C P;

where P is the prior estimated covariance matrix, R is the system error matrix, C is the projection matrix of the pose-estimation information at the initial point cloud registration, F is the attitude information matrix of the lidar, and k is the matching degree of the point clouds between frames (its defining formula is given as Figure BDA0003125244110000117 in the original), where P_j denotes a point in the current frame point cloud and P_i denotes the corresponding point in the map closest to P_j.

In this embodiment of the present application, the integrated navigation data and the point cloud data measured by the lidar are acquired and processed separately. The current pose information of the vehicle is first determined based on the integrated navigation data, and the transformation matrix and translation matrix used in point cloud positioning are determined; the lidar pose information of the vehicle is then determined based on the transformation matrix and translation matrix; the pose state calculation equation of the vehicle is then established from the lidar pose information and the attitude information of the vehicle; and finally the vehicle is positioned with the extended Kalman filter method based on the pose state calculation equation. This better matches the actual motion characteristics of the vehicle and ensures the accuracy of vehicle positioning.

An embodiment of the present application provides an electronic device, including a memory and a processor, and at least one program stored in the memory that, when executed by the processor, performs the following: acquiring integrated navigation data of the vehicle and point cloud data measured by the lidar; parsing the integrated navigation data to obtain current integrated-navigation pose information of the vehicle, and determining, based on the current integrated-navigation pose information, a point cloud transformation matrix and a translation matrix of the vehicle for point cloud positioning; matching the current frame point cloud of the vehicle against a pre-built map based on the transformation matrix and the translation matrix, to obtain lidar pose information of the vehicle; parsing the integrated navigation data to obtain current attitude information of the vehicle, and combining the current attitude information with the lidar pose information to determine the pose state calculation equation of the vehicle; and positioning the vehicle based on the pose state calculation equation using the extended Kalman filter method.

Compared with the prior art, this achieves the following: the integrated navigation data and the point cloud data measured by the lidar are acquired and processed separately; the current pose information of the vehicle is first determined based on the integrated navigation data, and the transformation matrix and translation matrix used in point cloud positioning are determined; the lidar pose information of the vehicle is then determined based on the transformation matrix and translation matrix; the pose state calculation equation of the vehicle is then established from the lidar pose information and the attitude information of the vehicle; and finally the vehicle is positioned with the extended Kalman filter method based on the pose state calculation equation, which better matches the actual motion characteristics of the vehicle and ensures the accuracy of vehicle positioning.

In an optional embodiment, an electronic device is provided. As shown in FIG. 4, the electronic device 4000 includes a processor 4001 and a memory 4003. The processor 4001 is connected to the memory 4003, for example through a bus 4002. Optionally, the electronic device 4000 may further include a transceiver 4004. It should be noted that, in practical applications, the number of transceivers 4004 is not limited to one, and the structure of the electronic device 4000 does not limit the embodiments of the present application.

The processor 4001 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various exemplary logical blocks, modules and circuits described in connection with the present disclosure. The processor 4001 may also be a combination implementing computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.

The bus 4002 may include a path for transferring information between the above components. The bus 4002 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus 4002 may be divided into an address bus, a data bus, a control bus, and so on. For ease of presentation, only one thick line is shown in FIG. 4, but this does not mean that there is only one bus or one type of bus.

The memory 4003 may be a ROM (Read Only Memory) or another type of static storage device capable of storing static information and instructions, a RAM (Random Access Memory) or another type of dynamic storage device capable of storing information and instructions, an EEPROM (Electrically Erasable Programmable Read Only Memory), a CD-ROM (Compact Disc Read Only Memory) or other optical disc storage (including compressed discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.

The memory 4003 is used to store the application program code for executing the solutions of the present application, and execution is controlled by the processor 4001. The processor 4001 is configured to execute the application program code stored in the memory 4003 to implement the content shown in the foregoing method embodiments.

An embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when it runs on a computer, the computer can execute the corresponding content of the foregoing method embodiments. Compared with the prior art, in the embodiments of the present application the integrated navigation data and the point cloud data measured by the lidar are acquired and processed separately; the current pose information of the vehicle is first determined based on the integrated navigation data, and the transformation matrix and translation matrix used in point cloud positioning are determined; the lidar pose information of the vehicle is then determined based on the transformation matrix and translation matrix; the pose state calculation equation of the vehicle is then established from the lidar pose information and the attitude information of the vehicle; and finally the vehicle is positioned with the extended Kalman filter method based on the pose state calculation equation, which better matches the actual motion characteristics of the vehicle and ensures the accuracy of vehicle positioning.

It should be understood that, although the steps in the flowchart of the accompanying drawings are shown in the order indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be executed in other orders. Moreover, at least some of the steps in the flowchart of the accompanying drawings may include multiple sub-steps or stages, which are not necessarily completed at the same time but may be executed at different times, and their execution order is not necessarily sequential; they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.

The above is only a part of the embodiments of the present application. It should be noted that, for those of ordinary skill in the art, several improvements and refinements can be made without departing from the principles of the present application, and these improvements and refinements should also be regarded as falling within the protection scope of the present application.

Claims (9)

1. A positioning method based on a three-dimensional point cloud, characterized by comprising:

acquiring integrated navigation data of a vehicle and point cloud data measured by a lidar;

parsing the integrated navigation data to obtain current integrated-navigation pose information of the vehicle, and determining, based on the current integrated-navigation pose information, a point cloud transformation matrix and a translation matrix of the vehicle for point cloud positioning;

matching a current frame point cloud of the vehicle against a pre-built map based on the transformation matrix and the translation matrix, to obtain lidar pose information of the vehicle;

parsing the integrated navigation data to obtain current attitude information of the vehicle, and combining the current attitude information with the lidar pose information to determine a pose state calculation equation of the vehicle;

positioning the vehicle based on the pose state calculation equation using an extended Kalman filter method.

2. The positioning method based on a three-dimensional point cloud according to claim 1, characterized in that the lidar pose information of the vehicle is:

X = [L^T, W^T, V^T]^T

where L = (l_x, l_y, l_z)^T denotes the position of the lidar, with l_x, l_y, l_z its x-, y- and z-axis coordinates; W = (φ, θ, ψ)^T denotes the Euler angles of the lidar, where φ is the pitch angle, θ is the roll angle and ψ is the heading angle; and V = (v_x, v_y, v_z)^T denotes the velocity of the lidar, with v_x, v_y, v_z its x-, y- and z-axis velocities.
3. The positioning method based on a three-dimensional point cloud according to claim 1, wherein the current attitude information of the vehicle is:

U = [w^T, a^T]^T

where w = (wx, wy, wz)^T represents the angular velocity of the vehicle pose, with wx, wy and wz the angular velocities about the x, y and z axes; and a = (ax, ay, az)^T represents the acceleration of the vehicle pose, with ax, ay and az the accelerations along the x, y and z axes.

4. The positioning method based on a three-dimensional point cloud according to claim 1, wherein the pose state calculation equation of the vehicle is given by the formula rendered as an image in the original (FDA0003125244100000021), in which:

one sub-equation (image FDA0003125244100000022) is the vehicle pose velocity state equation, with V the vehicle velocity, and another (image FDA0003125244100000023) is the vehicle pose angular velocity state equation, with w the vehicle pose angular velocity;

the matrix shown in image FDA0003125244100000024 is the direction cosine matrix; the matrix shown in image FDA0003125244100000025 is the state transition matrix; and the matrix shown in image FDA0003125244100000026 is the gravity matrix, g being the gravitational acceleration.
5. The positioning method based on a three-dimensional point cloud according to claim 1, further comprising, before matching the current frame point cloud of the vehicle with the pre-established map based on the transformation matrix and the translation matrix:

down-sampling the point cloud data measured by the lidar, and performing feature extraction on the down-sampled point cloud data.

6. The positioning method based on a three-dimensional point cloud according to claim 1, wherein positioning the vehicle based on the pose state calculation equation using the extended Kalman filter method comprises:

calculating the a priori estimated covariance according to the formula (P^+)^-1 = k·P^-1 + (1-k)·C^T·R^-1·C;

determining the attitude prediction equation (rendered as an image in the original, FDA0003125244100000027);

determining the Kalman gain as K = P·C^T·(k·R + C^T·P·C)^-1;

updating the lidar pose with the update formula (rendered as an image in the original, FDA0003125244100000028);

determining the updated covariance as P^+ = P - (1-k)^-1·K·C·P;

where P^+ is the a priori estimated covariance matrix, P is the covariance matrix, the a priori estimated pose is the quantity shown in image FDA0003125244100000029, R is the system error matrix, C is the projection matrix of the pose estimation information at the initial point cloud registration, F is the attitude information matrix of the lidar, and k is the matching degree of the point cloud between frames, defined by the formulas shown in images FDA00031252441000000210 and FDA00031252441000000211, where Pj denotes a point in the current frame point cloud and Pi denotes the corresponding point in the map closest to Pj.
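The three formulas that claim 6 spells out in text can be exercised numerically as below. The linear measurement model z ≈ C·x, the matrix shapes, and the way k would be computed are assumptions added for illustration; the prediction and update formulas that the claim renders as images are not reproduced.

import numpy as np


def adaptive_ekf_update(x_prior, P, C, R, z, k):
    """Adaptive update weighted by the inter-frame point cloud matching degree k (assumed 0 < k < 1)."""
    # (P+)^-1 = k * P^-1 + (1 - k) * C^T R^-1 C   -- a priori estimated covariance
    P_prior = np.linalg.inv(k * np.linalg.inv(P) + (1.0 - k) * C.T @ np.linalg.inv(R) @ C)
    # K = P C^T (k R + C^T P C)^-1                 -- Kalman gain as written in the claim
    K = P @ C.T @ np.linalg.inv(k * R + C.T @ P @ C)
    # State correction with an assumed linear measurement z ~= C x (not stated in the claim).
    x_post = x_prior + K @ (z - C @ x_prior)
    # P+ = P - (1 - k)^-1 K C P                    -- updated covariance
    P_post = P - (1.0 / (1.0 - k)) * (K @ C @ P)
    return x_post, P_post, P_prior


# Tiny usage example with square, well-conditioned placeholder matrices.
n = 9
x_post, P_post, P_prior = adaptive_ekf_update(
    x_prior=np.zeros((n, 1)), P=np.eye(n), C=np.eye(n), R=0.01 * np.eye(n),
    z=np.zeros((n, 1)), k=0.8)

A larger k (better inter-frame match) keeps the update closer to the prior covariance, while a smaller k gives more weight to the measurement terms, which is the adaptive behaviour the claim describes.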
7. A positioning device based on a three-dimensional point cloud, comprising:

a data acquisition module, configured to acquire integrated navigation data of a vehicle and point cloud data measured by a lidar;

a matrix determination module, configured to parse the integrated navigation data to obtain current integrated navigation pose information of the vehicle, and to determine, based on the current integrated navigation pose information, a point cloud transformation matrix and a translation matrix of the vehicle for point cloud positioning;

a pose determination module, configured to match the current frame point cloud of the vehicle with a pre-established map based on the transformation matrix and the translation matrix, to obtain lidar pose information of the vehicle;

an equation determination module, configured to parse the integrated navigation data to obtain current attitude information of the vehicle, and to combine the current attitude information with the lidar pose information to determine a pose state calculation equation of the vehicle;

a positioning module, configured to position the vehicle based on the pose state calculation equation using an extended Kalman filter method.

8. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the positioning method based on a three-dimensional point cloud according to any one of claims 1 to 6.

9. A computer-readable storage medium storing at least one instruction, at least one program, a code set or an instruction set, wherein the at least one instruction, the at least one program, the code set or the instruction set is loaded and executed by a processor to implement the positioning method based on a three-dimensional point cloud according to any one of claims 1 to 6.
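As a structural illustration only, the five modules of the claim-7 device map naturally onto a class with one method per module; the class and method names below are assumptions and none of the bodies are implemented.

class PointCloudLocalizationDevice:
    """Illustrative skeleton mirroring the module split in claim 7 (names assumed)."""

    def acquire_data(self):
        # Data acquisition module: integrated navigation data + lidar point cloud.
        raise NotImplementedError

    def determine_matrices(self, nav_data):
        # Matrix determination module: transformation and translation matrices from the nav pose.
        raise NotImplementedError

    def determine_pose(self, scan, prebuilt_map, T, t):
        # Pose determination module: match the current frame against the pre-established map.
        raise NotImplementedError

    def determine_equation(self, nav_data, lidar_pose):
        # Equation determination module: build the pose state calculation equation.
        raise NotImplementedError

    def localize(self, state_equation):
        # Positioning module: run the extended Kalman filter.
        raise NotImplementedError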
CN202110687387.2A 2021-06-21 2021-06-21 Positioning method, device and equipment based on three-dimensional point cloud and storage medium Pending CN113538699A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110687387.2A CN113538699A (en) 2021-06-21 2021-06-21 Positioning method, device and equipment based on three-dimensional point cloud and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110687387.2A CN113538699A (en) 2021-06-21 2021-06-21 Positioning method, device and equipment based on three-dimensional point cloud and storage medium

Publications (1)

Publication Number Publication Date
CN113538699A true CN113538699A (en) 2021-10-22

Family

ID=78125442

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110687387.2A Pending CN113538699A (en) 2021-06-21 2021-06-21 Positioning method, device and equipment based on three-dimensional point cloud and storage medium

Country Status (1)

Country Link
CN (1) CN113538699A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108320329A (en) * 2018-02-02 2018-07-24 维坤智能科技(上海)有限公司 A kind of 3D map creating methods based on 3D laser
CN111161353A (en) * 2019-12-31 2020-05-15 深圳一清创新科技有限公司 Vehicle positioning method and device, readable storage medium and computer equipment
CN111915675A (en) * 2020-06-17 2020-11-10 广西综合交通大数据研究院 Particle filter point cloud positioning method based on particle drift, and device and system thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
严小意: "Research on Navigation and Positioning Technology of Mobile Robots Based on LiDAR/IMU Fusion", China Master's Theses Full-text Database, Information Science and Technology Series, pages 16-47 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116165652A (en) * 2021-11-25 2023-05-26 中国移动通信有限公司研究院 Target tracking method and device and communication equipment
CN114429515A (zh) * 2021-12-21 2022-05-03 北京地平线机器人技术研发有限公司 A point cloud map construction method, apparatus and device
CN114924288A (en) * 2022-05-26 2022-08-19 浙江孔辉汽车科技有限公司 Construction system and construction method of front three-dimensional digital elevation map
WO2024001649A1 (en) * 2022-06-29 2024-01-04 深圳市海柔创新科技有限公司 Robot positioning method, apparatus and computing readable storage medium
CN115792931A (en) * 2022-10-25 2023-03-14 上海有个机器人有限公司 Robot positioning method, device, equipment and storage medium
CN118226401A (en) * 2024-05-22 2024-06-21 南京航空航天大学 Fixed-wing aircraft posture measurement method and device based on laser radar three-dimensional point cloud

Similar Documents

Publication Publication Date Title
CN113538699A (en) Positioning method, device and equipment based on three-dimensional point cloud and storage medium
WO2022007504A1 (en) Location determination method, device, and system, and computer readable storage medium
CN112835085B (en) Method and device for determining vehicle position
JP7179110B2 (en) Positioning method, device, computing device, computer-readable storage medium and computer program
CN112379352B (en) Laser radar calibration method, device, equipment and storage medium
CN113147738A (en) Automatic parking positioning method and device
CN114034307A (en) Lane line-based vehicle pose calibration method and device and electronic equipment
CN113933818A (en) Method, device, storage medium and program product for calibration of external parameters of lidar
KR101890612B1 (en) Method and apparatus for detecting object using adaptive roi and classifier
CN113763549B (en) Simultaneous positioning and mapping method and device integrating laser radar and IMU and storage medium
CN113945937A (en) Precision detection method, device and storage medium
CN110160528B (en) Mobile device pose positioning method based on angle feature recognition
CN112964291A (en) Sensor calibration method and device, computer storage medium and terminal
CN112835086B (en) Method and device for determining vehicle position
CN109916417B (en) Map establishing method, map establishing device, computer equipment and storage medium thereof
CN110779514B (en) Hierarchical Kalman fusion method and device for auxiliary attitude determination of bionic polarization navigation
CN113720349A (en) Odometer information smoothing method based on Kalman filtering
CN115560744A (en) Robot, multi-sensor-based three-dimensional mapping method and storage medium
CN117516535A (en) Three-dimensional laser radar positioning method based on key frame map
CN116859424A (en) Vehicle positioning method, device, equipment and storage medium
CN115164885A (en) Image inverse perspective transformation method, image inverse perspective transformation device, electronic device and storage medium
CN114842224A (en) An absolute visual matching positioning scheme for monocular UAV based on geographic basemap
CN116380056B (en) Inertial positioning method, inertial positioning device, electronic equipment and storage medium
JP7462847B1 (en) Position estimation device, position estimation method, and position estimation program
CN117232550A (en) Multi-sensor space reference calibration method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20211022