CN109633665A - Sparse laser point cloud stitching method for traffic scenes - Google Patents

Sparse laser point cloud stitching method for traffic scenes Download PDF

Info

Publication number
CN109633665A
CN109633665A CN201811539606.7A CN201811539606A
Authority
CN
China
Prior art keywords
point cloud
gps
denoted
frame
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811539606.7A
Other languages
Chinese (zh)
Inventor
王超
张天雷
何贝
刘鹤云
郑思仪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Mainline Technology Co Ltd
Original Assignee
Beijing Mainline Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Mainline Technology Co Ltd filed Critical Beijing Mainline Technology Co Ltd
Priority to CN201811539606.7A priority Critical patent/CN109633665A/en
Publication of CN109633665A publication Critical patent/CN109633665A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/87Combinations of systems using electromagnetic waves other than radio waves
    • G01S17/875Combinations of systems using electromagnetic waves other than radio waves for determining attitude

Abstract

The present invention relates to a sparse laser point cloud stitching method for traffic scenes, characterized in that: sparse point cloud data scanned by a low line-count lidar and high-precision position and attitude information acquired by a GPS-IMU unit are used to perform point cloud feature extraction and point cloud registration; combined with the position information acquired by the GPS-IMU unit, the local coordinate data are transformed into a global coordinate system and stitched to form a high-precision map. The advantage is that the method of the invention overcomes the bottleneck that acquiring dense point cloud data with an expensive high line-count lidar makes high-precision map production costly, while the point cloud data acquired by a low line-count lidar are sparser than those of a high line-count lidar and therefore difficult to stitch well. The method is optimized for traffic scenes, stitches point cloud data acquired with a low line-count lidar, and efficiently stitches three-dimensional point clouds to create a high-precision map in a global coordinate system.

Description

Sparse laser point cloud stitching method for traffic scenes
Technical field
The invention belongs to the technical field of driverless vehicles, and more particularly to a sparse laser point cloud stitching method for traffic scenes.
Background art
A high-precision map is a very important component in the driverless field; it provides accurate road structure, road element information, surrounding scene information, and so on. The current mainstream approach creates a high-precision map by extracting the corresponding elements from a three-dimensional laser point cloud, assisted by visual images, and comprises two stages: data acquisition and map building. In the data acquisition stage, a vehicle-mounted data acquisition platform obtains the laser point cloud and image data of the relevant environment and records the GPS position information of the vehicle. In the map building stage, the single-frame laser point cloud data are stitched together according to temporal and spatial information to obtain the laser point cloud of the relevant environment in a global coordinate system; the environmental elements of interest are then extracted to create the high-precision map.
In the data acquisition stage, existing methods usually acquire dense point cloud data with expensive high line-count lidars, such as 32-line, 40-line and 64-line lidars, which makes high-precision map production costly. A method is therefore needed that can build high-precision maps with a low-cost, low line-count lidar (e.g. 16-line). However, the point cloud data acquired by a low line-count lidar are sparser than those of a high line-count lidar, and existing dense point cloud matching methods, such as 4PCS (4-Points Congruent Sets) and ICP (Iterative Closest Point), have difficulty achieving good results when stitching sparse point clouds. In addition, existing laser point cloud stitching methods are not optimized for traffic scenes, which is also one of the major factors affecting the stitching result. Efficient stitching of three-dimensional point clouds in a global coordinate system is therefore an important prerequisite for creating high-precision maps.
Summary of the invention
It is an object of the present invention to overcome the shortcomings of the above technology and to provide a sparse laser point cloud stitching method for traffic scenes that acquires point cloud data with a low line-count lidar, extracts point, line, surface and cylinder features of the traffic scene from the point cloud data in a targeted manner, and then efficiently stitches the sparse laser point clouds to create a high-precision three-dimensional point cloud map in a global coordinate system.
To achieve the above object, the present invention adopts the following technical scheme: a sparse laser point cloud stitching method for traffic scenes, characterized in that: sparse point cloud data scanned by a low line-count lidar and high-precision position and attitude information acquired by a GPS-IMU unit are used to perform point cloud feature extraction and point cloud registration; combined with the position information acquired by the GPS-IMU unit, the local coordinate data are transformed into a global coordinate system and stitched to form a high-precision map. The specific steps are as follows:
Step 1: data acquisition
A low line-count lidar mounted on the roof or at the front of the vehicle acquires initial data frames; an initial data frame comprises the sparse point cloud data scanned by the low line-count lidar and the high-precision position and attitude information acquired by the GPS-IMU unit;
Step 2: point cloud feature extraction
Extract the point, line, surface and cylinder features of the laser point cloud data in the k-th initial data frame. In addition to the conventional point, line, surface and cylinder features, features are designed specifically for the particularities of the traffic scene, including line features of lane lines; surface features of paved roads of different reflectivity, green belts, building facades and traffic sign boards; and cylinder features of tree trunks, lampposts and traffic sign brackets. Specifically,
A. Point features: the original point cloud data, denoted as the set P_k = {p}_k;
B. Line features: straight lines related to lane lines are extracted according to the differences in laser point reflectivity, denoted as L_k = {l}_k;
C. Surface features: the ground and building facades are extracted, denoted as S_k = {s}_k;
D. Cylinder features: lampposts, tree trunks and traffic sign brackets are extracted, denoted as C_k = {c}_k.
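As an illustrative sketch (not part of the patent), the reflectivity-based line-feature extraction of item B could look as follows; the intensity threshold of 0.8 and the simple 2-D line model y = a*x + b are assumptions, not values given in the text:

```python
import numpy as np

def extract_lane_line(points, intensities, thresh=0.8):
    """Keep high-reflectivity returns (lane paint reflects more strongly
    than asphalt) and fit a 2-D line y = a*x + b to them.
    `thresh` is an assumed, sensor-specific value."""
    mask = intensities > thresh            # candidate lane-paint points
    x, y = points[mask, 0], points[mask, 1]
    a, b = np.polyfit(x, y, 1)             # least-squares line fit
    return a, b, mask
```

A real implementation would additionally cluster the high-intensity returns and fit one line per lane marking; this sketch assumes a single marking is present.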
Step 3: point cloud registration
A. For the stitched point cloud of the first k-1 frames, extract the point, line, surface and cylinder features as described in step 2, denoted respectively as
P_{1_k-1} = {p}_{1_k-1}, L_{1_k-1} = {l}_{1_k-1}, S_{1_k-1} = {s}_{1_k-1}, C_{1_k-1} = {c}_{1_k-1};
B. Based on the extracted point cloud structural features, solve for the translation vector and rotation matrix that minimize the distance between the k-th frame point cloud and the stitched point cloud of the first k-1 frames; the translation vector is denoted T_k and the rotation matrix R_k.
Specifically, the position and attitude information acquired by the GPS-IMU unit comprises the position and heading of the vehicle, denoted G = {x, y, z, roll, pitch, yaw}, where (x, y, z) are the three-dimensional position coordinates, roll is the roll angle, denoted γ, pitch is the pitch angle, denoted β, and yaw is the heading angle, denoted α. From the position and attitude information recorded by the GPS-IMU unit for the k-th and (k-1)-th frames, the initial values of the vehicle motion translation vector and rotation matrix are computed, denoted T_{k,gps} and R_{k,gps}, where:
T_{k,gps} = (x, y, z)
The rotation order is first a rotation of γ about the x-axis, then β about the y-axis, then α about the z-axis, so the rotation matrix is:
R_{k,gps} = R_x(γ) R_y(β) R_z(α)
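A minimal sketch of this rotation composition, assuming angles in radians and the column-vector convention:

```python
import numpy as np

def rotation_from_euler(roll, pitch, yaw):
    """Compose R = Rx(roll) @ Ry(pitch) @ Rz(yaw), matching
    R_k,gps = Rx(gamma) Ry(beta) Rz(alpha); angles in radians."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz
```

The result is an orthonormal matrix with determinant 1, which is what the registration step below takes as its initial value.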
Taking T_{k,gps} and R_{k,gps} as the initial values of the point cloud translation vector T_k and rotation matrix R_k, and taking the minimization of the distance between corresponding feature elements of the two point clouds as the optimization objective, the optimal T_k and R_k are solved with the Levenberg-Marquardt method.
The distance between point clouds is defined as the sum of the distances over the different features, i.e.
Dis = dis(P) + dis(L) + dis(S) + dis(C)    (1)
where
dis(P) = Σ_i ‖p̂_k^i − p_{1_k-1}^j‖    (2)
in which p_k^i is a point in the set P_k, the point after rotation and translation is denoted p̂_k^i = R_k p_k^i + T_k, and p_{1_k-1}^j is the point in the set P_{1_k-1} closest to p̂_k^i. The distance definitions for lines, surfaces and cylinders are analogous to that for points. The problem of solving for the rotation matrix R_k and translation vector T_k is described as:
(R_k, T_k) = argmin Dis    (3)
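A sketch of this registration step with SciPy, under the simplifying assumption that all features are treated as plain points (the patent sums separate distance terms for points, lines, surfaces and cylinders as in formula (1)); the parameter vector x = [roll, pitch, yaw, tx, ty, tz] is an assumed parameterization:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial import cKDTree

def euler_to_R(r, p, y):
    """Rx(r) @ Ry(p) @ Rz(y), angles in radians."""
    cr, sr, cp, sp = np.cos(r), np.sin(r), np.cos(p), np.sin(p)
    cy, sy = np.cos(y), np.sin(y)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def register(src, dst, x0):
    """Solve min_(R,T) sum ||R p + T - nn(R p + T)||^2 starting from the
    GPS-IMU initial value x0, as in formulas (1)-(3) restricted to
    point features."""
    tree = cKDTree(dst)                       # nearest neighbours in stitched cloud
    def residuals(x):
        moved = src @ euler_to_R(*x[:3]).T + x[3:]
        _, idx = tree.query(moved)            # correspondences re-estimated each call
        return (moved - dst[idx]).ravel()
    sol = least_squares(residuals, x0, method='lm')
    return euler_to_R(*sol.x[:3]), sol.x[3:]
```

`least_squares(..., method='lm')` is SciPy's Levenberg-Marquardt solver; re-estimating the nearest-neighbour correspondences at every residual evaluation gives an ICP-style refinement around the GPS-IMU initial value.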
Step 4: stitch the k-th frame laser point cloud, after the coordinate transformation, together with the point cloud of the first k-1 frames; combined with the position information acquired by the GPS-IMU unit in the initial data frame, transform the k-th frame local coordinate data into the global coordinate system and update the map. Repeat steps 2, 3 and 4 until the point cloud registration of the entire traffic scene is completed.
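Step 4 might be sketched as below; the split of the global placement into a rotation R_g and translation T_g taken from the GPS-IMU data of the initial data frame is an assumed parameterization, since the patent does not fix one:

```python
import numpy as np

def update_map(global_map, frame_k, R_k, T_k, R_g, T_g):
    """Apply the refined frame-to-map alignment (R_k, T_k), then the
    GPS-IMU pose (R_g, T_g) taking local map coordinates to the global
    coordinate system, and append the result to the stitched map."""
    aligned = frame_k @ R_k.T + T_k        # k-th frame -> local map frame
    global_pts = aligned @ R_g.T + T_g     # local map frame -> global frame
    return np.vstack([global_map, global_pts])
```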
The travel speed of the vehicle on which the low line-count lidar is mounted is less than 30 km/h.
Advantageous effects: compared with the prior art, the method of the invention overcomes the bottleneck that acquiring dense point cloud data with an expensive high line-count lidar makes high-precision map production costly, while the point cloud data acquired by a low line-count lidar are sparser than those of a high line-count lidar and therefore difficult to stitch well. The method is optimized for traffic scenes, stitches point cloud data acquired with a low line-count lidar, and efficiently stitches three-dimensional point clouds to create a high-precision map in a global coordinate system.
Description of the drawings
Fig. 1 is a flow diagram of the method of the embodiment;
Fig. 2 is an example diagram of traffic scene features in the k-th frame of point cloud data acquired with a 16-line lidar;
Fig. 3 is an example diagram of traffic scene features of the stitched point cloud data in the embodiment.
In the figures: A-2 is a tree trunk, B-2 is a lamppost, C-2 is a building facade, D-2 is a lane line, and E-2 is the ground;
A-3 is a tree trunk, B-3 is a lane line, C-3 is a lamppost, D-3 is a building facade, E-3 is a lane line, and F-3 is the ground.
Specific embodiment
The specific embodiments of the present invention are described in detail below with reference to the preferred embodiment.
As detailed in the drawings, this embodiment provides a sparse laser point cloud stitching method for traffic scenes, characterized in that: sparse point cloud data scanned by a low line-count lidar and high-precision position and attitude information acquired by a GPS-IMU unit are used to perform point cloud feature extraction and point cloud registration; combined with the position information acquired by the GPS-IMU unit, the local coordinate data frames are transformed into the global coordinate system and stitched to form a high-precision map. The specific steps are as follows:
Step 1: data acquisition
A low line-count lidar mounted on the roof or at the front of the vehicle acquires initial data frames; an initial data frame comprises the sparse point cloud data scanned by the low line-count lidar and the high-precision position and attitude information acquired by the GPS-IMU unit. The travel speed of the vehicle on which the low line-count lidar is mounted is less than 30 km/h.
Step 2: point cloud feature extraction
Extract the point, line, surface and cylinder features of the laser point cloud data in the k-th initial data frame. In addition to the conventional point, line, surface and cylinder features, features are designed specifically for the particularities of the traffic scene, including line features of lane lines; surface features of paved roads of different reflectivity, green belts, building facades and traffic sign boards; and cylinder features of tree trunks, lampposts and traffic sign brackets. Specifically,
A. Point features: the original point cloud data, denoted as the set P_k = {p}_k;
B. Line features: straight lines related to lane lines are extracted according to the differences in laser point reflectivity, denoted as L_k = {l}_k;
C. Surface features: the ground and building facades are extracted, denoted as S_k = {s}_k;
D. Cylinder features: lampposts, tree trunks and traffic sign brackets are extracted, denoted as C_k = {c}_k.
Step 3: point cloud registration
A. For the stitched point cloud of the first k-1 frames, extract the point, line, surface and cylinder features as described in step 2, denoted respectively as
P_{1_k-1} = {p}_{1_k-1}, L_{1_k-1} = {l}_{1_k-1}, S_{1_k-1} = {s}_{1_k-1}, C_{1_k-1} = {c}_{1_k-1};
B. Based on the extracted point cloud structural features, solve for the translation vector and rotation matrix that minimize the distance between the k-th frame point cloud and the stitched point cloud of the first k-1 frames; the translation vector is denoted T_k and the rotation matrix R_k.
Specifically, the position and attitude information acquired by the GPS-IMU unit comprises the position and heading of the vehicle, denoted G = {x, y, z, roll, pitch, yaw}, where (x, y, z) are the three-dimensional position coordinates, roll is the roll angle, denoted γ, pitch is the pitch angle, denoted β, and yaw is the heading angle, denoted α. From the position and attitude information recorded by the GPS-IMU unit for the k-th and (k-1)-th frames, the initial values of the vehicle motion translation vector and rotation matrix are computed, denoted T_{k,gps} and R_{k,gps}, where:
T_{k,gps} = (x, y, z)
The rotation order is first a rotation of γ about the x-axis, then β about the y-axis, then α about the z-axis, so the rotation matrix is:
R_{k,gps} = R_x(γ) R_y(β) R_z(α)
Taking T_{k,gps} and R_{k,gps} as the initial values of the point cloud translation vector T_k and rotation matrix R_k, and taking the minimization of the distance between corresponding feature elements of the two point clouds as the optimization objective, the optimal T_k and R_k are solved with the Levenberg-Marquardt method.
The distance between point clouds is defined as the sum of the distances over the different features, i.e.
Dis = dis(P) + dis(L) + dis(S) + dis(C)    (1)
where
dis(P) = Σ_i ‖p̂_k^i − p_{1_k-1}^j‖    (2)
in which p_k^i is a point in the set P_k, the point after rotation and translation is denoted p̂_k^i = R_k p_k^i + T_k, and p_{1_k-1}^j is the point in the set P_{1_k-1} closest to p̂_k^i. The distance definitions for lines, surfaces and cylinders are analogous to that for points. The problem of solving for the rotation matrix R_k and translation vector T_k is described as:
(R_k, T_k) = argmin Dis    (3)
Step 4: stitch the k-th frame laser point cloud, after the coordinate transformation, together with the point cloud of the first k-1 frames; combined with the position information acquired by the GPS-IMU unit in the initial data frame, transform the k-th frame local coordinate data into the global coordinate system and update the map. Repeat steps 2, 3 and 4 until the point cloud registration of the entire traffic scene is completed.
Embodiment 1
The flow of the lidar-based sparse point cloud stitching method for traffic scenes is shown in Fig. 1; the method comprises:
1) Data are acquired using a vehicle platform: sparse laser point cloud data are acquired with a 16-line lidar mounted at the right front of the vehicle, and GPS data, i.e. the high-precision position and attitude information of the vehicle, are obtained with a NovAtel SPAN-CPT;
2) Traffic scene feature extraction is performed on the 2nd frame of laser point cloud data using the method proposed by the present invention, including point, line, surface and cylinder features such as lane lines, tree trunks, lampposts, the ground and building facades; specifically,
A. Point features: the original point cloud data, denoted as the set P_2 = {p}_2;
B. Line features: straight lines related to lane lines are extracted according to the differences in laser point reflectivity, denoted as L_2 = {l}_2;
C. Surface features: the ground and building facades are extracted, denoted as S_2 = {s}_2;
D. Cylinder features: lampposts and tree trunks are extracted, denoted as C_2 = {c}_2.
3) Traffic scene feature extraction is performed on the preceding frame, i.e. the 1st frame of laser point cloud data, using the method proposed by the present invention, including point, line, surface and cylinder features such as lane lines, tree trunks, lampposts, the ground and building facades; specifically,
A. Point features: the original point cloud data, denoted as the set P_{1_1} = {p}_{1_1};
B. Line features: straight lines related to lane lines are extracted according to the differences in laser point reflectivity, denoted as L_{1_1} = {l}_{1_1};
C. Surface features: the ground and building facades are extracted, denoted as S_{1_1} = {s}_{1_1};
D. Cylinder features: lampposts and tree trunks are extracted, denoted as C_{1_1} = {c}_{1_1}.
4) The initial values of the translation vector and rotation matrix for the point cloud matching are computed from the GPS data using the method proposed by the present invention. The distances between the extracted features are computed with a nearest-neighbour method; according to formula (3), the optimal translation vector and rotation matrix of the point cloud matching are solved;
Specifically, solve for the translation vector and rotation matrix that minimize the distance between the 2nd frame point cloud and the stitched 1st frame point cloud; the translation vector is denoted T_2 and the rotation matrix R_2. The vehicle position and attitude information acquired by the GPS-IMU unit (NovAtel SPAN-CPT), including the position and heading of the vehicle, is denoted G = {x, y, z, roll, pitch, yaw}, where (x, y, z) are the three-dimensional position coordinates, roll is the roll angle, denoted γ, pitch is the pitch angle, denoted β, and yaw is the heading angle, denoted α. From the position and attitude information of the 2nd and 1st frames, the initial values of the vehicle motion translation vector and rotation matrix are computed, denoted T_{2,gps} and R_{2,gps}, where:
T_{2,gps} = (x, y, z)
The rotation order is first a rotation of γ about the x-axis, then β about the y-axis, then α about the z-axis, so the rotation matrix is:
R_{2,gps} = R_x(γ) R_y(β) R_z(α)
Taking T_{2,gps} and R_{2,gps} as the initial values of the point cloud translation vector T_2 and rotation matrix R_2, and taking the minimization of the distance between corresponding feature elements of the two point clouds as the optimization objective, the optimal T_2 and R_2 are solved with the Levenberg-Marquardt method. The distance between point clouds is defined as the sum of the distances over the different features, i.e.
Dis = dis(P) + dis(L) + dis(S) + dis(C)
where
dis(P) = Σ_i ‖p̂_2^i − p_{1_1}^j‖
in which p_2^i is a point in the set P_2, the point after rotation and translation is denoted p̂_2^i = R_2 p_2^i + T_2, and p_{1_1}^j is the point in the set P_{1_1} closest to p̂_2^i. The distance definitions for lines, surfaces and cylinders are analogous to that for points. Then, according to
(R_2, T_2) = argmin Dis
the rotation matrix R_2 and translation vector T_2 are solved.
5) Using the solved translation vector T_2 and rotation matrix R_2, stitch the 2nd frame laser point cloud, after the coordinate transformation, together with the 1st frame point cloud; combined with the point cloud position information in the initial data frame, transform the 2nd frame local coordinate data into the global coordinate system and update the stitched point cloud map.
6) For the 3rd frame of point cloud data, extract the point, line, surface and cylinder features. Using the method proposed by the present invention again, solve, according to formula (3), for the translation vector T_3 and rotation matrix R_3 that minimize the distance between the 3rd frame point cloud and the stitched 2nd frame point cloud. Using the solved T_3 and R_3, stitch the 3rd frame laser point cloud, after the coordinate transformation, together with the 2nd frame point cloud; combined with the point cloud position information in the initial data frame, transform the 3rd frame local coordinate data into the global coordinate system and update the stitched point cloud map. Repeat this step until the point cloud registration of the entire traffic scene is completed.
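The per-frame loop of items 4) to 6) reduces to the following data flow; for brevity the feature-based refinement of each pose is omitted here, and each frame is assumed to come with an already-solved pose (R, T) into the global coordinate system, so this is a sketch of the accumulation step only:

```python
import numpy as np

def stitch_sequence(frames, poses):
    """frames: list of (N_i, 3) point arrays; poses: list of (R, T) pairs
    mapping each frame into the global coordinate system. Returns the
    accumulated point cloud map, frame by frame."""
    stitched = []
    for pts, (R, T) in zip(frames, poses):
        stitched.append(pts @ R.T + T)     # transform frame, then append
    return np.vstack(stitched)
```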
The collected k-th frame of laser point cloud data and some of the extracted features are shown schematically in Fig. 2. Specifically, A-2 is a tree trunk used as a cylinder feature, B-2 is a lamppost used as a cylinder feature, C-2 is a building facade used as a surface feature, D-2 is a lane line used as a line feature, E-2 is the ground used as a surface feature, and the original point cloud is used as point features.
The point cloud data stitched with the proposed method and some of its features are shown in Fig. 3. Specifically, A-3 is a tree trunk used as a cylinder feature, B-3 is a lane line used as a line feature, C-3 is a lamppost used as a cylinder feature, D-3 is a building facade used as a surface feature, E-3 is a lane line used as a line feature, F-3 is the ground used as a surface feature, and the original point cloud is used as point features.
The above detailed description of a sparse laser point cloud stitching method for traffic scenes with reference to the embodiment is illustrative rather than restrictive; several embodiments may be enumerated within the limited scope, so changes and modifications that do not depart from the general concept of the invention shall fall within the protection scope of the invention.

Claims (2)

1. A sparse laser point cloud stitching method for traffic scenes, characterized in that: sparse point cloud data scanned by a low line-count lidar and high-precision position and attitude information acquired by a GPS-IMU unit are used to perform point cloud feature extraction and point cloud registration; combined with the position information acquired by the GPS-IMU unit, the local coordinate data are transformed into a global coordinate system and stitched to form a high-precision map, the specific steps being as follows:
Step 1: data acquisition
A low line-count lidar mounted on the roof or at the front of the vehicle acquires initial data frames; an initial data frame comprises the sparse point cloud data scanned by the low line-count lidar and the high-precision position and attitude information acquired by the GPS-IMU unit;
Step 2: point cloud feature extraction
Extract the point, line, surface and cylinder features of the laser point cloud data in the k-th initial data frame; in addition to the conventional point, line, surface and cylinder features, features are designed specifically for the particularities of the traffic scene, including line features of lane lines; surface features of paved roads of different reflectivity, green belts, building facades and traffic sign boards; and cylinder features of tree trunks, lampposts and traffic sign brackets; specifically,
A. Point features: the original point cloud data, denoted as the set P_k = {p}_k;
B. Line features: straight lines related to lane lines are extracted according to the differences in laser point reflectivity, denoted as L_k = {l}_k;
C. Surface features: the ground and building facades are extracted, denoted as S_k = {s}_k;
D. Cylinder features: lampposts, tree trunks and traffic sign brackets are extracted, denoted as C_k = {c}_k;
Step 3: point cloud registration
A. For the stitched point cloud of the first k-1 frames, extract the point, line, surface and cylinder features as described in step 2, denoted respectively as
P_{1_k-1} = {p}_{1_k-1}, L_{1_k-1} = {l}_{1_k-1}, S_{1_k-1} = {s}_{1_k-1}, C_{1_k-1} = {c}_{1_k-1};
B. Based on the extracted point cloud structural features, solve for the translation vector and rotation matrix that minimize the distance between the k-th frame point cloud and the stitched point cloud of the first k-1 frames; the translation vector is denoted T_k and the rotation matrix R_k;
Specifically, the position and attitude information acquired by the GPS-IMU unit comprises the position and heading of the vehicle, denoted G = {x, y, z, roll, pitch, yaw}, where (x, y, z) are the three-dimensional position coordinates, roll is the roll angle, denoted γ, pitch is the pitch angle, denoted β, and yaw is the heading angle, denoted α; from the position and attitude information recorded by the GPS-IMU unit for the k-th and (k-1)-th frames, the initial values of the vehicle motion translation vector and rotation matrix are computed, denoted T_{k,gps} and R_{k,gps}, where:
T_{k,gps} = (x, y, z)
The rotation order is first a rotation of γ about the x-axis, then β about the y-axis, then α about the z-axis, so the rotation matrix is:
R_{k,gps} = R_x(γ) R_y(β) R_z(α)
Taking T_{k,gps} and R_{k,gps} as the initial values of the point cloud translation vector T_k and rotation matrix R_k, and taking the minimization of the distance between corresponding feature elements of the two point clouds as the optimization objective, the optimal T_k and R_k are solved with the Levenberg-Marquardt method.
The distance between point clouds is defined as the sum of the distances over the different features, i.e.
Dis = dis(P) + dis(L) + dis(S) + dis(C)    (1)
where
dis(P) = Σ_i ‖p̂_k^i − p_{1_k-1}^j‖    (2)
in which p_k^i is a point in the set P_k, the point after rotation and translation is denoted p̂_k^i = R_k p_k^i + T_k, and p_{1_k-1}^j is the point in the set P_{1_k-1} closest to p̂_k^i. The distance definitions for lines, surfaces and cylinders are analogous to that for points. The problem of solving for the rotation matrix R_k and translation vector T_k can be described as:
(R_k, T_k) = argmin Dis    (3)
Step 4: stitch the k-th frame laser point cloud, after the coordinate transformation, together with the point cloud of the first k-1 frames; combined with the position information acquired by the GPS-IMU unit in the initial data frame, transform the k-th frame local coordinate data into the global coordinate system and update the map; repeat steps 2, 3 and 4 until the point cloud registration of the entire traffic scene is completed.
2. The sparse laser point cloud stitching method for traffic scenes according to claim 1, characterized in that: the travel speed of the vehicle on which the low line-count lidar is mounted is less than 30 km/h.
CN201811539606.7A 2018-12-17 2018-12-17 Sparse laser point cloud stitching method for traffic scenes Pending CN109633665A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811539606.7A CN109633665A (en) 2018-12-17 2018-12-17 Sparse laser point cloud stitching method for traffic scenes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811539606.7A CN109633665A (en) 2018-12-17 2018-12-17 Sparse laser point cloud stitching method for traffic scenes

Publications (1)

Publication Number Publication Date
CN109633665A true CN109633665A (en) 2019-04-16

Family

ID=66074534

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811539606.7A Pending CN109633665A (en) Sparse laser point cloud stitching method for traffic scenes

Country Status (1)

Country Link
CN (1) CN109633665A (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109828592A (en) * 2019-04-22 2019-05-31 深兰人工智能芯片研究院(江苏)有限公司 A kind of method and apparatus of detection of obstacles
CN110196044A (en) * 2019-05-28 2019-09-03 广东亿嘉和科技有限公司 It is a kind of based on GPS closed loop detection Intelligent Mobile Robot build drawing method
CN110322553A (en) * 2019-07-10 2019-10-11 广州建通测绘地理信息技术股份有限公司 The method and system of laser radar point cloud mixed reality scene implementation setting-out
CN110728623A (en) * 2019-08-27 2020-01-24 深圳市华讯方舟太赫兹科技有限公司 Cloud point splicing method, terminal equipment and computer storage medium
CN110927743A (en) * 2019-12-05 2020-03-27 武汉理工大学 Intelligent vehicle positioning method based on multi-line laser point cloud polarization representation
CN111161353A (en) * 2019-12-31 2020-05-15 深圳一清创新科技有限公司 Vehicle positioning method and device, readable storage medium and computer equipment
CN111223136A (en) * 2020-01-03 2020-06-02 三星(中国)半导体有限公司 Depth feature extraction method and device for sparse 2D point set
CN111862219A (en) * 2020-07-29 2020-10-30 上海高仙自动化科技发展有限公司 Computer equipment positioning method and device, computer equipment and storage medium
CN112710318A (en) * 2020-12-14 2021-04-27 深圳市商汤科技有限公司 Map generation method, route planning method, electronic device, and storage medium
CN113359141A (en) * 2021-07-28 2021-09-07 东北林业大学 Forest fire positioning method and system based on unmanned aerial vehicle multi-sensor data fusion
CN113495278A (en) * 2020-04-02 2021-10-12 北京京东乾石科技有限公司 Method and apparatus for enhancing point cloud data
CN113748693A (en) * 2020-03-27 2021-12-03 深圳市速腾聚创科技有限公司 Roadbed sensor and pose correction method and device thereof
CN113768419A (en) * 2021-09-17 2021-12-10 安克创新科技股份有限公司 Method and device for determining sweeping direction of sweeper and sweeper
CN114359463A (en) * 2022-03-20 2022-04-15 宁波博登智能科技有限公司 Point cloud marking system and method for ground identification
CN110728623B (en) * 2019-08-27 2024-04-19 深圳市重投华讯太赫兹科技有限公司 Cloud point splicing method, terminal equipment and computer storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090041314A1 (en) * 2007-08-02 2009-02-12 Tom Vercauteren Robust mosaicing method. notably with correction of motion distortions and tissue deformations for a vivo fibered microscopy
US20130054187A1 (en) * 2010-04-09 2013-02-28 The Trustees Of The Stevens Institute Of Technology Adaptive mechanism control and scanner positioning for improved three-dimensional laser scanning
US20130155058A1 (en) * 2011-12-14 2013-06-20 The Board Of Trustees Of The University Of Illinois Four-dimensional augmented reality models for interactive visualization and automated construction progress monitoring
CN105260988A (en) * 2015-09-09 2016-01-20 百度在线网络技术(北京)有限公司 High-precision map data processing method and high-precision map data processing device
CN107167815A (en) * 2017-04-28 2017-09-15 上海华测导航技术股份有限公司 The automatic creation system and method for a kind of highway road surface line number evidence
CN107179086A (en) * 2017-05-24 2017-09-19 北京数字绿土科技有限公司 A kind of drafting method based on laser radar, apparatus and system
CN107918753A (en) * 2016-10-10 2018-04-17 腾讯科技(深圳)有限公司 Processing Method of Point-clouds and device
CN108053432A (en) * 2017-11-14 2018-05-18 华南理工大学 The method for registering of indoor sparse cloud scene based on local I CP
CN108647646A (en) * 2018-05-11 2018-10-12 北京理工大学 The optimizing detection method and device of low obstructions based on low harness radar
CN108732584A (en) * 2017-04-17 2018-11-02 百度在线网络技术(北京)有限公司 Method and apparatus for updating map
CN108955702A (en) * 2018-05-07 2018-12-07 西安交通大学 Based on the lane of three-dimensional laser and GPS inertial navigation system grade map creation system
CN108955670A (en) * 2017-05-25 2018-12-07 百度在线网络技术(北京)有限公司 Information acquisition method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Andrew W. Fitzgibbon: "Robust registration of 2D and 3D point sets", Image and Vision Computing *
Wu Xiaoqing: "Scene 3D reconstruction based on point cloud processing", China Master's Theses Full-text Database, Information Science and Technology Series *
Wei Wei: "Research on orthographic panorama stitching methods for low-altitude aerial images", China Master's Theses Full-text Database, Engineering Science and Technology II Series *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109828592A (en) * 2019-04-22 2019-05-31 深兰人工智能芯片研究院(江苏)有限公司 Obstacle detection method and apparatus
CN110196044A (en) * 2019-05-28 2019-09-03 广东亿嘉和科技有限公司 Mapping method for an intelligent mobile robot based on GPS loop-closure detection
CN110322553A (en) * 2019-07-10 2019-10-11 广州建通测绘地理信息技术股份有限公司 Method and system for lofting implementation in a laser radar point cloud mixed reality scene
CN110322553B (en) * 2019-07-10 2024-04-02 广州建通测绘地理信息技术股份有限公司 Method and system for lofting implementation of laser radar point cloud mixed reality scene
CN110728623A (en) * 2019-08-27 2020-01-24 深圳市华讯方舟太赫兹科技有限公司 Point cloud stitching method, terminal device and computer storage medium
CN110728623B (en) * 2019-08-27 2024-04-19 深圳市重投华讯太赫兹科技有限公司 Point cloud stitching method, terminal device and computer storage medium
CN110927743A (en) * 2019-12-05 2020-03-27 武汉理工大学 Intelligent vehicle positioning method based on multi-line laser point cloud polarization representation
CN111161353B (en) * 2019-12-31 2023-10-31 深圳一清创新科技有限公司 Vehicle positioning method, device, readable storage medium and computer equipment
CN111161353A (en) * 2019-12-31 2020-05-15 深圳一清创新科技有限公司 Vehicle positioning method and device, readable storage medium and computer equipment
CN111223136A (en) * 2020-01-03 2020-06-02 三星(中国)半导体有限公司 Depth feature extraction method and device for sparse 2D point set
CN111223136B (en) * 2020-01-03 2024-04-23 三星(中国)半导体有限公司 Depth feature extraction method and device for sparse 2D point set
CN113748693A (en) * 2020-03-27 2021-12-03 深圳市速腾聚创科技有限公司 Roadbed sensor and pose correction method and device thereof
CN113748693B (en) * 2020-03-27 2023-09-15 深圳市速腾聚创科技有限公司 Position and pose correction method and device of roadbed sensor and roadbed sensor
CN113495278A (en) * 2020-04-02 2021-10-12 北京京东乾石科技有限公司 Method and apparatus for enhancing point cloud data
CN113495278B (en) * 2020-04-02 2024-04-16 北京京东乾石科技有限公司 Method and device for enhancing point cloud data
CN111862219A (en) * 2020-07-29 2020-10-30 上海高仙自动化科技发展有限公司 Computer equipment positioning method and device, computer equipment and storage medium
CN112710318A (en) * 2020-12-14 2021-04-27 深圳市商汤科技有限公司 Map generation method, route planning method, electronic device, and storage medium
CN113359141A (en) * 2021-07-28 2021-09-07 东北林业大学 Forest fire positioning method and system based on unmanned aerial vehicle multi-sensor data fusion
CN113768419A (en) * 2021-09-17 2021-12-10 安克创新科技股份有限公司 Method and device for determining sweeping direction of sweeper and sweeper
CN114359463A (en) * 2022-03-20 2022-04-15 宁波博登智能科技有限公司 Point cloud marking system and method for ground identification

Similar Documents

Publication Publication Date Title
CN109633665A (en) The sparse laser point cloud joining method of traffic scene
CN108802785B (en) Vehicle self-positioning method based on high-precision vector map and monocular vision sensor
CN106651953B (en) Vehicle position and orientation estimation method based on traffic signs
CN109631887B (en) Inertial navigation high-precision positioning method based on binocular, acceleration and gyroscope
CN108868268B (en) Unmanned parking space posture estimation method based on point-to-surface distance and cross-correlation entropy registration
CN107194989B (en) Traffic accident scene three-dimensional reconstruction system and method based on unmanned aerial vehicle aerial photography
CN104280036B (en) Traffic information detection and localization method, device and electronic equipment
JP2021508815A (en) Systems and methods for correcting high-definition maps based on the detection of obstructing objects
JP2021508027A (en) Systems and methods for positioning vehicles under poor lighting conditions
EP3650814A1 (en) Vision augmented navigation
WO2020083103A1 (en) Vehicle positioning method based on deep neural network image recognition
CN109166140A (en) Vehicle motion trajectory estimation method and system based on multi-line lidar
CN104865578A (en) Indoor parking lot high-precision map generation device and method
CN110119698A (en) Method, apparatus, device and storage medium for determining object state
JP5286653B2 (en) Stationary object map generator
CN106373088A (en) Fast mosaicking method for aerial images with high tilt rate and low overlap rate
CN110223354A (en) Camera self-calibration method based on SfM three-dimensional reconstruction
WO2021017211A1 (en) Vehicle positioning method and device employing visual sensing, and vehicle-mounted terminal
CN106705962A (en) Method and system for acquiring navigation data
CN112749584B (en) Vehicle positioning method based on image detection and vehicle-mounted terminal
CN112446915B (en) Picture construction method and device based on image group
CN116152342A (en) Guideboard registration positioning method based on gradient
CN209214563U (en) Real-time road surface roughness spectrum testing system based on multi-information fusion
CN113296133A (en) Device and method for realizing position calibration based on binocular vision measurement and high-precision positioning fusion technology
CN113673386A (en) Method for labeling traffic signal lights in a prior map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190416