CN114445799A - Automatic lane line element extraction method based on point cloud base map - Google Patents

Automatic lane line element extraction method based on point cloud base map Download PDF

Info

Publication number
CN114445799A
CN114445799A (application CN202210009275.6A)
Authority
CN
China
Prior art keywords
point cloud
lane line
base map
cloud base
lane
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210009275.6A
Other languages
Chinese (zh)
Inventor
郭林栋
何贝
刘鹤云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sinian Zhijia Technology Co ltd
Original Assignee
Beijing Sinian Zhijia Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sinian Zhijia Technology Co ltd filed Critical Beijing Sinian Zhijia Technology Co ltd
Priority to CN202210009275.6A
Publication of CN114445799A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an automatic extraction method for lane line elements based on a point cloud base map, comprising the following steps: performing image processing on the point cloud base map; constructing a convolutional neural network and inputting the processed point cloud base map into the trained network to extract lane lines; optimizing the extracted lane lines; performing topology analysis on the optimized lane lines to optimize lane line pairs; and performing parameter estimation on the lane line pairs. The method accurately extracts a lane line parameter model from point cloud data using deep learning, improving the efficiency of high-precision map production.

Description

Automatic lane line element extraction method based on point cloud base map
Technical Field
The invention belongs to the technical fields of computer vision, autonomous driving, and high-precision maps, and particularly relates to a method for automatically extracting lane line elements based on a point cloud base map.
Background
High-precision maps are a foundational research direction in the field of autonomous driving. They assist autonomous driving by providing high-precision map element information, including but not limited to lane lines, ground markings, traffic signs, and curbs. With this information, an autonomous driving system can constrain the vehicle's autonomous behavior and guarantee the safety and stability of the whole system. Producing a high-precision map is a complete production chain with several key elements: acquiring and maintaining mapping qualifications, designing the map production tool chain, formulating the high-precision map format and release strategy, and managing production staff and automation. Because producing a high-precision map is far more complex than producing a traditional electronic map, strengthening automation is essential to accelerating production.
Automated approaches typically include the following:
1. Because image acquisition is simple and intuitive, companies represented by Momenta extract lane line elements visually by processing images. First, deep learning detection and recognition on the image yields the position, type, and other attributes of the lane lines in the image; second, inverse projection transformation yields coordinates in three-dimensional space under the vehicle body coordinate system; finally, the vehicle's integrated navigation transforms these into the real three-dimensional space to obtain absolute coordinates. Limited by illumination changes, extrinsic parameter jitter, and the accuracy of the vehicle's integrated navigation, this method struggles to guarantee mapping precision.
2. Traditional map makers such as Baidu, NavInfo (SiWei), and AutoNavi (Gaode) produce high-precision maps by building a base map from laser point clouds and updating it through image crowd-sourcing. Because the reflectivity of lane lines differs markedly from that of the ordinary road surface, candidate lane line points can be extracted by simple binarization, a lane line parameter equation is then fitted with a B-spline interpolation scheme, and finally lane lines and lanes are confirmed manually. However, real road conditions and lane line models are complex; simple processing and model estimation rarely yield stable automatic extraction results, and may increase the workload of subsequent manual confirmation.
Disclosure of Invention
The invention provides a point cloud base map-based automatic extraction method for lane line elements, which solves the technical problems in the prior art and adopts the following technical scheme:
a method for automatically extracting lane line elements based on a point cloud base map comprises the following steps:
carrying out image processing on the point cloud base map;
inputting the processed point cloud base map into a trained convolutional neural network to extract lane lines;
optimizing the extracted lane line;
carrying out topology analysis on the optimized lane lines to optimize lane line pairs;
and carrying out parameter estimation on the lane line pair.
Further, the specific method for performing image processing on the point cloud base map is as follows:
removing the dynamic and static obstacles from the point cloud base map;
stitching the point cloud base map after obstacle removal;
and cutting the stitched large map into a number of point cloud patches.
Further, the dynamic and static obstacles are removed from the point cloud base map by the following operation:

$$\{p_i \mid z_i \le T_z \ \text{and}\ (x_i, y_i) \in \mathrm{Region}_{road}\},$$

where $p_i = (x_i, y_i, z_i)$ denotes point cloud point $i$, $T_z$ denotes the z-direction height threshold, and $\mathrm{Region}_{road}$ denotes the road surface region.
Further, the specific method for stitching the point cloud base map after obstacle removal is as follows:

the point clouds are stitched using front-end GNSS + RTK and back-end GTSAM loop closure to obtain a pose sequence $L = \{l_1, l_2, \dots, l_n\}$;

a pose sequence $L' = \{l'_1, l'_2, \dots, l'_n\}$ matched to the laser point cloud is obtained by linear interpolation:

$$l'_i = (1 - \alpha)\, l_a + \alpha\, l_b,$$

where $\alpha$ is a time coefficient, and $l_a$ and $l_b$ are the two temporally nearest poses, with recording times $t_a$ and $t_b$ respectively. Let $t_i$ be the recording time of $l'_i$; then

$$\alpha = \frac{t_i - t_a}{t_b - t_a}.$$
further, the specific method for cutting the spliced large map into a plurality of point cloud patches comprises the following steps:
the original mosaic is cropped to a picture sequence of 1024 x 1024 size with a minimum lateral and longitudinal resolution of 0.25 m/pixel.
Further, the specific method for performing image processing on the point cloud base map additionally comprises:
rotating each point cloud patch according to its corresponding vehicle body pose so that the vehicle body orientation in every patch points vertically upward.
Further, the convolutional neural network comprises an initialization initial layer, three sets of downsampling bottleneck layers, two sets of upsampling bottleneck layers, and a convolutional layer.
Further, the specific method for optimizing the extracted lane lines is as follows:
performing a closing operation and an opening operation on the lane lines output by the convolutional neural network;
traversing each region and performing lateral skeletonization on each region;
and estimating the lane line parameters with a random sample consensus algorithm.
Further, the specific method for performing topology analysis on the optimized lane lines and optimizing the lane line pairs is as follows:

constructing a global optimization model whose optimization formula is

$$\max_{L_i} \sum_{(l, m) \in L_i} w_{amo} \cdot w_{len} \cdot w_{link},$$

where $l$ and $m$ are the two lane lines of a lane line pair in $L_i$, and $w_{amo}$, $w_{len}$, and $w_{link}$ are the weights for number, length, and connection relationship respectively:

$$w_{amo} = \exp\{-n^2 / \sigma^2\},$$

$$w_{len} = \frac{1}{1 + e^{-s_l / H}},$$

$$w_{link} = \begin{cases} 1, & |\rho_l - \rho_m| \le \rho \ \text{and}\ r_{min} \le |r_l - r_m| \le r_{max}, \\ 0, & \text{otherwise}, \end{cases}$$

where $n$ is the number of detected lane lines, $\sigma$ is the given ideal number of lane lines, $s_l$ and $H$ denote the length of lane line $l$ and the height of the image, $r_l$ and $\rho_l$ denote the radius and angle of lane line $l$ in parameter space, $r_m$ and $\rho_m$ denote the radius and angle of lane line $m$ in parameter space, $r_{min}$ and $r_{max}$ are the minimum and maximum thresholds on the parameter space radius, and $\rho$ is the lane line angle threshold.
Further, the specific method for performing parameter estimation on the lane line pairs is as follows:
a B-spline model is used to fit and discretize the lane line pairs, and the discretized points are used as control points to estimate the lane line parameter equation.
The method has the advantage that, based on the point cloud base map, lane line parameter models are accurately extracted from point cloud data using deep learning, which improves the efficiency of high-precision map production.
Drawings
FIG. 1 is a schematic diagram of an automatic lane line element extraction method based on a point cloud base map according to the present invention;
FIG. 2 is a schematic diagram of a lane point cloud patch of the present invention;
FIG. 3 is a schematic diagram of an operator of the convolutional neural network of the present invention;
FIG. 4 is a schematic diagram of the overall convolutional neural network of the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and the embodiments.
Fig. 1 shows the automatic lane line element extraction method based on a point cloud base map, which comprises the following steps. S1: perform image processing on the point cloud base map. S2: input the processed point cloud base map into a trained convolutional neural network to extract lane lines. S3: optimize the extracted lane lines. S4: perform topology analysis on the optimized lane lines and optimize the lane line pairs. S5: perform parameter estimation on the lane line pairs. The method accurately extracts a lane line parameter model from point cloud data using deep learning, improving the efficiency of high-precision map production: the recall rate is 99.1%, the precision is 97.2%, and the mapping accuracy is 20 cm, which meets the requirements of a high-precision map. The steps are described in detail below.
For step S1: and carrying out image processing on the point cloud base map.
In step S1, the specific method for performing image processing on the point cloud base map is as follows:
and removing the moving and static obstacles in the point cloud base map.
Specifically, through algorithms such as height difference in the z direction and depth learning, dynamic and static obstacles including vehicles, pedestrians, guardrails, trees, buildings and the like can be removed from the single-frame point cloud. Therefore, on one hand, the number of point clouds is reduced, and the time and space complexity of subsequent calculation are saved; and on the other hand, the influence of the similarity of the reflectivity after rasterization on the identification result of the lane line is avoided. The following formula is specifically adopted for calculation,
$$\{p_i \mid z_i \le T_z \ \text{and}\ (x_i, y_i) \in \mathrm{Region}_{road}\},$$

where $p_i = (x_i, y_i, z_i)$ denotes point cloud point $i$, $T_z$ denotes the z-direction height threshold, and $\mathrm{Region}_{road}$ denotes the road surface region.
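For illustration, a minimal Python sketch of this height-and-region filter follows; the (N, 3) array layout and the polygon-based test for Region_road are assumptions, not details from the patent:

```python
import numpy as np
from matplotlib.path import Path  # used only for a point-in-polygon test

def filter_road_points(points, t_z, road_polygon):
    """Keep points p_i = (x_i, y_i, z_i) with z_i <= T_z and (x_i, y_i)
    inside the road region.

    points       : (N, 3) array of x, y, z coordinates
    t_z          : z-direction height threshold T_z
    road_polygon : (M, 2) array of vertices outlining Region_road
    """
    below = points[:, 2] <= t_z                                  # z_i <= T_z
    inside = Path(road_polygon).contains_points(points[:, :2])   # (x_i, y_i) in Region_road
    return points[below & inside]
```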
Stitch the point cloud base map after obstacle removal.
Specifically, the point clouds are stitched using front-end GNSS + RTK and back-end GTSAM loop closure. GNSS includes GPS, GLONASS, and BeiDou; empowered by differential techniques, the positioning error can be kept to about ±2 cm. Considering possible GNSS jumps and the ghosting caused by poor satellite signals, an optimization model is built with GTSAM, and dead reckoning from a gyroscope, wheel speed meter, and inter-frame matching is used to guarantee overall positioning accuracy. This process yields a pose sequence $L = \{l_1, l_2, \dots, l_n\}$. Because the positioning timestamps do not coincide exactly with those of the laser point cloud, a pose sequence $L' = \{l'_1, l'_2, \dots, l'_n\}$ matched to the laser point cloud is obtained by linear interpolation:

$$l'_i = (1 - \alpha)\, l_a + \alpha\, l_b,$$

where $\alpha$ is a time coefficient weighted by the time differences between the sampling time and the two adjacent pose timestamps, and $l_a$ and $l_b$ are the two temporally nearest poses, with recording times $t_a$ and $t_b$ respectively. Let $t_i$ be the recording time of $l'_i$; then

$$\alpha = \frac{t_i - t_a}{t_b - t_a}.$$
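A minimal Python sketch of this interpolation follows; poses are treated here as plain vectors, whereas a production system would interpolate the rotational part on SO(3), e.g. with quaternion slerp:

```python
import numpy as np

def interpolate_pose(t_i, t_a, pose_a, t_b, pose_b):
    """Linearly interpolate a pose for laser timestamp t_i between the two
    temporally nearest recorded poses (t_a, pose_a) and (t_b, pose_b)."""
    alpha = (t_i - t_a) / (t_b - t_a)  # time coefficient in [0, 1]
    return (1.0 - alpha) * np.asarray(pose_a) + alpha * np.asarray(pose_b)
```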
and cutting the spliced large map into a plurality of point cloud surface patches.
In this application, the original stitched image is cropped into a sequence of 1024 × 1024 pictures with a minimum lateral and longitudinal resolution of 0.25 m/pixel, i.e., each picture covers a 256 m × 256 m road surface area. It will be appreciated that the size of the cropped point cloud patches can be adjusted by changing the minimum lateral and longitudinal resolution. After the original large stitched image is cropped, the area covered by each small picture forms a grid. Once the grid space is prepared, the reflectivity of the original point cloud (i.e., the point cloud used to generate the large image) is counted, normalized, and mapped linearly onto the value range [0, 255].
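A minimal Python sketch of the rasterization and reflectivity normalization follows; the patch origin convention and the per-cell aggregation by maximum are assumptions for illustration:

```python
import numpy as np

def rasterize_patch(points, reflectivity, origin, size_px=1024, res=0.25):
    """Rasterize one 256 m x 256 m point cloud patch into a 1024 x 1024
    grayscale image whose pixel values are normalized reflectivities.

    points       : (N, 3) array; only x and y are used for the grid
    reflectivity : (N,) raw reflectivity values of the original point cloud
    origin       : (x0, y0) of the patch's lower-left corner in map coordinates
    """
    img = np.zeros((size_px, size_px), dtype=np.uint8)
    # Linearly map the reflectivity range onto [0, 255]
    r = reflectivity.astype(np.float64)
    r = (r - r.min()) / max(r.max() - r.min(), 1e-9) * 255.0
    cols = ((points[:, 0] - origin[0]) / res).astype(int)
    rows = ((points[:, 1] - origin[1]) / res).astype(int)
    ok = (cols >= 0) & (cols < size_px) & (rows >= 0) & (rows < size_px)
    # Keep the maximum normalized reflectivity per cell
    np.maximum.at(img, (rows[ok], cols[ok]), r[ok].astype(np.uint8))
    return img
```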
Since a vehicle usually drives along the lane lines, for the convenience of subsequent topology analysis, after the stitched large map is cut into point cloud patches, each patch is rotated according to its corresponding vehicle body pose so that the vehicle body orientation in every patch points vertically upward, i.e., parallel to the image y axis and pointing up. In other words, the road angle shown in each point cloud patch differs, but the vehicle orientation is consistent across all patches, as shown in Fig. 2.
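A short sketch of the patch rotation with OpenCV follows; the sign convention of the yaw angle relative to the image axes is an assumption:

```python
import cv2

def rotate_patch_to_heading(img, yaw_deg):
    """Rotate a point cloud patch so the vehicle heading points straight up
    (parallel to the image y axis). yaw_deg is the vehicle yaw in degrees,
    measured counter-clockwise from the image's +y direction (assumed)."""
    h, w = img.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), -yaw_deg, 1.0)
    return cv2.warpAffine(img, m, (w, h))
```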
For step S2: and inputting the processed point cloud base map into a trained convolutional neural network to extract lane lines.
The input of this step is the point cloud base map, and the output is the recognized image, in which pixels containing lane lines are assigned special RGB values to distinguish them from the original input.
In the present application, a convolutional neural network is constructed as follows.
First, the initialization initial layer is built, as shown in Fig. 3a. Two operators are used: a max-pooling layer and a conv layer with stride 2, which guarantee the same output scale; their outputs are then merged with a concat layer. This both increases speed and extracts the necessary feature information.
Next, the bottleneck operator is constructed, as shown in Fig. 3b. It has two modes, with and without downsampling; the main difference is that the former contains a max-pooling layer and its conv layer has stride 2. The conv layers follow the classic ResNet construction: dimension reduction of the feature map through 1×1 operators, with PReLU nonlinear activations to guarantee the generalization ability of the system.
Finally, the whole network is constructed, as shown in Fig. 4. The convolutional neural network comprises an initialization initial layer, three sets of downsampling layers, two sets of upsampling layers, and a convolutional layer. The initialization initial layer first performs a preliminary coarse extraction on the point cloud image. It is followed by 3 sets of downsampling layers, of which sets 2 and 3 include the classic DeepLab dilated convolution layers to enlarge the receptive field, and asymmetric convolution operators are used to reduce parameters and increase speed. Then come 2 sets of upsampling layers, which mainly use the classic ParseNet-style up-convolution operator to guarantee the quality of the final output. Finally, an ordinary convolutional layer changes the feature dimension so that the output categories meet the requirements.
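The structure described reads like an ENet-style encoder-decoder design. A minimal PyTorch sketch of the two operators of Fig. 3 follows; channel counts, kernel sizes, and the exact layer arrangement are illustrative assumptions, since the patent specifies the topology only at the level of the figures:

```python
import torch
import torch.nn as nn

class InitialBlock(nn.Module):
    """Initialization initial layer (Fig. 3a): a stride-2 conv branch and a
    max-pooling branch produce the same output scale and are merged by concat."""
    def __init__(self, in_ch=1, out_ch=16):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch - in_ch, kernel_size=3, stride=2, padding=1)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.act = nn.PReLU()

    def forward(self, x):
        return self.act(torch.cat([self.conv(x), self.pool(x)], dim=1))

class Bottleneck(nn.Module):
    """Non-downsampling bottleneck (Fig. 3b): ResNet-style 1x1 reduction,
    a 3x3 conv (optionally dilated for the DeepLab-style layers), 1x1
    expansion, a residual connection, and PReLU activations."""
    def __init__(self, ch, dilation=1):
        super().__init__()
        mid = ch // 4  # 1x1 dimension reduction of the feature map
        self.branch = nn.Sequential(
            nn.Conv2d(ch, mid, 1), nn.PReLU(),
            nn.Conv2d(mid, mid, 3, padding=dilation, dilation=dilation), nn.PReLU(),
            nn.Conv2d(mid, ch, 1),
        )
        self.act = nn.PReLU()

    def forward(self, x):
        return self.act(x + self.branch(x))
```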
The classes of lane line elements output by the convolutional neural network include but are not limited to: white lane lines, yellow lane lines, bus lane lines, flow guide lines, ground arrows, and ground text.
For step S3: and optimizing the extracted lane lines.
The specific method for optimizing the extracted lane lines comprises the following steps:
and performing closing operation and opening operation on the lane line output by the convolutional neural network. In the step, the broken lane line area is sewed, and some wild point areas are removed.
Traverse each region and perform lateral skeletonization on it, i.e., keep only the pixel with the maximum value in each row to construct a set of candidate points. Specifically, the lateral skeletonization uses the formula

$$p_i^* = \max_j v_{ij},$$

where $p$ is the selected pixel, $v$ is the image to be processed, and $i$ and $j$ are the corresponding pixel indices.
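A minimal Python sketch of this row-wise selection follows, assuming each region is given as a 2D response array:

```python
import numpy as np

def lateral_skeleton(region):
    """Keep, in each image row, only the pixel with the maximum value
    (p_i* = max_j v_ij). Returns (row, col) pairs of the kept pixels."""
    cols = region.argmax(axis=1)           # j maximizing v_ij for each row i
    rows = np.arange(region.shape[0])
    keep = region[rows, cols] > 0          # skip rows with no lane pixels
    return np.stack([rows[keep], cols[keep]], axis=1)
```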
Estimate the lane line parameters with the RANSAC (RANdom SAmple Consensus) algorithm.
Specifically, a straight line is chosen as the initialization model. In each iteration, 2 non-repeating points are randomly selected from the local region and a straight line is fitted through them. The number of inliers is counted, and the line equation with the most inliers is finally selected; these inliers are used for the final straight line fit. In this application, the final fit uses the least squares method:

$$\min_{a, b} \sum_k \left(y_k - a x_k - b\right)^2,$$

where $a$ and $b$ are the parameters of the line equation.
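A compact Python sketch of this RANSAC line fit follows; the iteration count and inlier tolerance are illustrative assumptions:

```python
import numpy as np

def ransac_line(pts, iters=200, inlier_tol=0.2):
    """Fit y = a*x + b with RANSAC: sample 2 distinct points, count inliers,
    keep the model with the most inliers, then refine it by least squares.

    pts : (N, 2) array of candidate lane line points
    """
    best = None
    rng = np.random.default_rng()
    for _ in range(iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if x1 == x2:
            continue  # skip degenerate (vertical) sample pairs
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = np.abs(pts[:, 1] - (a * pts[:, 0] + b)) < inlier_tol
        if best is None or inliers.sum() > best.sum():
            best = inliers
    # Final least-squares fit min_{a,b} sum (y_k - a*x_k - b)^2 on the inliers
    a, b = np.polyfit(pts[best, 0], pts[best, 1], 1)
    return a, b
```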
For step S4: and carrying out topology analysis on the optimized lane lines, and optimizing lane line pairs.
Step S3 described how the "lane lines" are obtained; step S4 processes "lane line pairs". A lane line pair consists of two mutually parallel lines among the lane lines, and step S4 selects the optimal lane line pairs from the extracted lane lines.
in the application, a batch optimization method of lane line pairs is designed to construct road surface topology. Lane lines are the basic elements that make up a roadway, their linearity, length, width, orientation and number are all relevant, so we can design the following rules to mask unreasonable lane line pairs:
1) it is desirable to include as many lane lines as possible, but the most combinations are not necessarily the best ones, but rather are the best ones where appropriate, and the best definition requires a limit on the number of lane lines;
2) the length of the lane line is a clear factor, but the length can cause a long false detection to influence the whole result, so the influence degree of the length of the lane line is limited;
3) there are also many restrictions as a lane line pair. For example, the direction of any two lane lines cannot be greatly different from each other, and the distance between two adjacent lane lines cannot be smaller than a certain value.
Accordingly, a global optimization model is constructed from the above restrictions, with the optimization formula

$$\max_{L_i} \sum_{(l, m) \in L_i} w_{amo} \cdot w_{len} \cdot w_{link},$$

where $L_i$ is the set of lane line pairs to be optimized, $l$ and $m$ are the two lane lines of a pair, and $w_{amo}$, $w_{len}$, and $w_{link}$ are the weights for number, length, and connection relationship respectively:

$$w_{amo} = \exp\{-n^2 / \sigma^2\},$$

$$w_{len} = \frac{1}{1 + e^{-s_l / H}},$$

$$w_{link} = \begin{cases} 1, & |\rho_l - \rho_m| \le \rho \ \text{and}\ r_{min} \le |r_l - r_m| \le r_{max}, \\ 0, & \text{otherwise}. \end{cases}$$

The constraint is that, for the two lane lines $l$ and $m$ in a pair, their orientations cannot deviate by more than a specific value and their distance must lie within a specific range; only then does the optimization term take effect. Here $n$ is the number of detected lane lines and $\sigma$ is the given ideal number of lane lines, used to constrain the line count; $s_l$ and $H$ denote the length of lane line $l$ and the height of the image; $r_l$ and $\rho_l$ denote the radius and angle of lane line $l$ in parameter space; $r_m$ and $\rho_m$ denote the radius and angle of lane line $m$ in parameter space; $r_{min}$ and $r_{max}$ are the minimum and maximum thresholds on the parameter space radius; and $\rho$ is the lane line angle threshold.
The length term is balanced with a sigmoid function, which increases the weight of long lane lines and helps avoid falling into a local optimum.
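To make the scoring concrete, a small Python sketch of one pair's score follows; the forms of $w_{len}$ and $w_{link}$ are the reconstructions given above (sigmoid-balanced length, indicator-style link constraint) and should be read as assumptions consistent with the patent's description rather than its verbatim formulas:

```python
import numpy as np

def pair_score(n, sigma, s_l, h, rho_l, rho_m, r_l, r_m, rho_tol, r_min, r_max):
    """Score one lane line pair (l, m) as w_amo * w_len * w_link."""
    w_amo = np.exp(-n**2 / sigma**2)                # penalize too many lines
    w_len = 1.0 / (1.0 + np.exp(-s_l / h))          # sigmoid-balanced length
    w_link = float(abs(rho_l - rho_m) <= rho_tol    # similar orientation
                   and r_min <= abs(r_l - r_m) <= r_max)  # plausible spacing
    return w_amo * w_len * w_link
```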
For step S5: and carrying out parameter estimation on the lane line pair.
The specific method for performing parameter estimation on the lane line pairs is as follows: fit and discretize the lane line pairs with a B-spline model, and use the discretized points as control points to estimate the lane line parameter equation.
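A minimal sketch of this step with SciPy follows; the smoothing factor and the number of resampled control points are illustrative choices, not values from the patent:

```python
import numpy as np
from scipy.interpolate import splprep, splev

def fit_lane_spline(xs, ys, smooth=1.0, n_control=20):
    """Fit a B-spline to a lane line, then resample it into discrete points
    that serve as control points for the parametric lane line equation."""
    tck, _ = splprep([xs, ys], s=smooth)   # cubic B-spline by default
    u = np.linspace(0.0, 1.0, n_control)
    cx, cy = splev(u, tck)
    return np.stack([cx, cy], axis=1)      # (n_control, 2) control points
```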
The foregoing illustrates and describes the principles, general features, and advantages of the present invention. It should be understood by those skilled in the art that the above embodiments do not limit the present invention in any way, and all technical solutions obtained by using equivalent alternatives or equivalent variations fall within the scope of the present invention.

Claims (10)

1. A method for automatically extracting lane line elements based on a point cloud base map is characterized by comprising the following steps:
carrying out image processing on the point cloud base map;
inputting the processed point cloud base map into the trained convolutional neural network to extract a lane line;
optimizing the extracted lane line;
carrying out topology analysis on the optimized lane lines to optimize lane line pairs;
and performing parameter estimation on the lane line pair.
2. The method for automatically extracting lane line elements based on point cloud base map as claimed in claim 1,
the specific method for processing the point cloud base map comprises the following steps:
removing moving and static obstacles in the point cloud base map;
carrying out image splicing on the point cloud base map after the obstacles are removed;
and cutting the spliced large map into a plurality of point cloud surface patches.
3. The method for automatically extracting lane line elements based on point cloud base map as claimed in claim 2,
the point cloud base map is operated by the following formula to remove the dynamic and static obstacles in the point cloud base map:
{pi|zi≤Tzand(xi,yi)∈Regionroad},
wherein p isi=(xi,yi,zi) Representing a point cloud point i, TzRepresenting a z-direction height threshold, RegionroadIndicating the road surface area.
4. The method for automatically extracting lane line elements based on point cloud base map as claimed in claim 3,
the specific method for carrying out image splicing on the point cloud base map after the obstacles are removed comprises the following steps:
the point cloud is spliced in a mode of front-end GNSS + RTK and rear-end gstam closed loop to obtain a pose sequence L ═ { L ═1,l2,…,ln};
Obtaining a pose sequence L' ═ { L ═ matched with the laser point cloud through linear interpolation1′,l2′,…,ln', wherein,
Figure FDA0003456728910000011
wherein, alpha is a time coefficient,
Figure FDA0003456728910000012
and
Figure FDA0003456728910000013
respectively representing two poses that are temporally closest, corresponding recordsAt times respectively of
Figure FDA0003456728910000014
And
Figure FDA0003456728910000015
let liRecording time of' tiThen, the first step is executed,
Figure FDA0003456728910000016
5. The method for automatically extracting lane line elements based on point cloud base map as claimed in claim 4,
the specific method for cutting the stitched large map into point cloud patches is as follows:
the stitched large picture is cropped into a sequence of 1024 × 1024 pictures with a minimum lateral and longitudinal resolution of 0.25 m/pixel.
6. The method for automatically extracting lane line elements based on point cloud base map as claimed in claim 5,
the specific method for processing the point cloud base map further comprises the following steps:
and rotating the point cloud surface patches according to the vehicle body poses corresponding to the point cloud surface patches to enable the direction of the vehicle body corresponding to each point cloud surface patch to be vertical upwards.
7. The method for automatically extracting lane line elements based on point cloud base map as claimed in claim 1,
the convolutional neural network comprises an initialization initial layer, three groups of downsampling bottleeck layers, two groups of upsampling bottleeck layers and a convolutional layer.
8. The method for automatically extracting lane line elements based on point cloud base map as claimed in claim 1,
the specific method for performing optimization processing on the extracted lane line comprises the following steps:
performing closing operation and opening operation on the lane line output by the convolutional neural network;
traversing each region, and performing transverse bone drying on each region;
and estimating the lane line parameters by a random consistency sampling algorithm.
9. The method for automatically extracting lane line elements based on point cloud base map as claimed in claim 8,
the method for optimizing the lane line pair comprises the following steps of carrying out topology analysis on the optimized lane line, wherein the specific method for optimizing the lane line pair comprises the following steps:
constructing a global optimization model, wherein the corresponding optimization formula is as follows:
Figure FDA0003456728910000021
wherein L and m are LiTwo lane lines of a lane line pair, wamo、wlenAnd wlinkThe weights respectively representing the number, length and connection relationship are respectively:
wamo=exp{-n22},
Figure FDA0003456728910000022
Figure FDA0003456728910000023
where n is the number of lane lines detected, σ is the given number of ideal lane lines, slAnd H represents the length of the ith lane line and the height of the image, rlAnd ρlRadius and angle of a parameter space representing a lane line lDegree, rmAnd ρmRadius and angle, r, of a parameter space representing a lane line mminAnd rmaxThe minimum threshold and the maximum threshold of the radius of the parameter space, and rho is a lane line angle threshold.
10. The method for automatically extracting lane line elements based on point cloud base map as claimed in claim 9,
the specific method for performing parameter estimation on the lane line pair comprises the following steps:
and adopting a B spline model to fit and break up the lane line pairs, and taking the broken points as control points to carry out lane line parameter equation estimation.
CN202210009275.6A 2022-01-05 2022-01-05 Automatic lane line element extraction method based on point cloud base map Pending CN114445799A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210009275.6A CN114445799A (en) 2022-01-05 2022-01-05 Automatic lane line element extraction method based on point cloud base map

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210009275.6A CN114445799A (en) 2022-01-05 2022-01-05 Automatic lane line element extraction method based on point cloud base map

Publications (1)

Publication Number Publication Date
CN114445799A (en) 2022-05-06

Family

ID=81365847

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210009275.6A Pending CN114445799A (en) 2022-01-05 2022-01-05 Automatic lane line element extraction method based on point cloud base map

Country Status (1)

Country Link
CN (1) CN114445799A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination