CN113313081B - Road traffic rod object classification method integrating vehicle-mounted three-dimensional laser point cloud and image - Google Patents


Info

Publication number
CN113313081B
CN113313081B · CN202110852461.1A
Authority
CN
China
Prior art keywords
rod
point
point cloud
shaped object
points
Prior art date
Legal status
Active
Application number
CN202110852461.1A
Other languages
Chinese (zh)
Other versions
CN113313081A (en)
Inventor
李发帅
肖建华
李海亭
王诗云
段梦梦
郭明武
刘剑
王闪
Current Assignee
Wuhan Geomatics Institute
Original Assignee
Wuhan Geomatics Institute
Priority date
Filing date
Publication date
Application filed by Wuhan Geomatics Institute filed Critical Wuhan Geomatics Institute
Priority to CN202110852461.1A
Publication of CN113313081A
Application granted
Publication of CN113313081B
Legal status: Active

Classifications

    • G06F 18/24 — Pattern recognition; classification techniques
    • G06F 18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06T 7/73 — Image analysis; determining position or orientation of objects or cameras using feature-based methods
    • G06V 10/255 — Image or video recognition or understanding; detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • G06V 20/56 — Scenes; context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/64 — Scenes; type of objects; three-dimensional objects


Abstract

The invention provides a road traffic rod-shaped object classification method that fuses vehicle-mounted three-dimensional laser point clouds and images, comprising the following steps: acquiring the point cloud and the corresponding image data; removing the ground points in the point cloud to obtain pre-segmentation bodies, slicing them, applying feature constraints to the continuous slice sets, and taking the continuous slice sets that satisfy the feature constraints as the original seed points of the rod-shaped objects; growing the original seed points of the rod-shaped objects with a growing algorithm to obtain the complete rod-shaped object point clouds and thus their accurate position information; projecting the rod-shaped object point clouds onto the panoramic image according to the correspondence between the point cloud and the panoramic image to obtain the image range corresponding to each rod-shaped object point cloud; and performing instance segmentation on the image with a trained MaskRCNN to obtain the instance segmentation information of the rod-shaped objects, and finely classifying the rod-shaped objects by using the range of the rod-shaped object point clouds in the panoramic image. This road traffic rod-shaped object classification method fusing vehicle-mounted three-dimensional laser point clouds and images achieves accurate positioning and fine classification of the rod-shaped objects in the point cloud.

Description

Road traffic rod object classification method integrating vehicle-mounted three-dimensional laser point cloud and image
Technical Field
The invention belongs to the field of image recognition and mapping, and particularly relates to a road traffic rod classification method fusing vehicle-mounted three-dimensional laser point cloud and images.
Background
With the rapid advance of urbanization and the wide adoption of private automobiles in recent years, urban traffic congestion has become increasingly serious, causing great inconvenience to the daily life of urban residents, and it has become an important issue in urban management. To address this problem, developed countries in Europe and America have developed and deployed intelligent transportation systems, which have greatly relieved urban traffic congestion. Road traffic rod-shaped objects are important components of an intelligent transportation system, and their fine classification plays an important role in building such a system. In addition, road traffic rod-shaped objects are important mounting points for 5G base stations, so their automatic identification is also significant for 5G base station layout. Vehicle-mounted three-dimensional laser scanning systems are widely used for three-dimensional data acquisition of urban road scenes owing to their good maneuverability and high data acquisition accuracy, and the high-precision three-dimensional point clouds and vehicle-mounted images they acquire provide powerful support for automatic identification of road traffic rod-shaped objects. However, urban road scenes are complex, contain many object types, and suffer from severe occlusion, noise and adhesion, which seriously affect the accuracy of automatic identification of road traffic rod-shaped objects. At the present stage, the fine classification of road traffic rod-shaped objects still depends heavily on manual interpretation, so the fine and automatic classification of road traffic rod-shaped objects in vehicle-mounted laser point clouds and vehicle-mounted images of large-scale urban scenes is of great significance for building intelligent transportation systems and for 5G base station layout.
At present, research on the identification of road traffic rod-shaped objects mainly falls into three categories: methods based on knowledge reasoning, methods based on traditional machine learning, and methods based on deep neural networks. Knowledge-reasoning methods first extract high-order features of the point cloud or image, then manually construct a series of feature constraint rules according to the characteristics of the target observed in practice, and finally identify the point cloud or image according to the extracted features and rules. Brenner et al. first proposed a concentric cylinder model for detecting traffic rod-shaped objects according to their characteristics; Lehtomäki et al. improved the concentric cylinder detection method with scan-line segmentation preprocessing; and Cabo et al. extended and accelerated the method by constructing voxels. Bremer et al. proposed a method based on eigenvalue vectors to extract traffic rod-shaped objects. Pu et al. proposed a method that slices the upper part of the rod to extract traffic rod-shaped objects and classifies them by the shape information of the attachments. Yang et al. extracted features based on supervoxels and formulated semantic rules to identify road traffic rod-shaped objects. The disadvantage of rule-based methods is that many rules must be designed when there are many target classes and the portability of the method is poor; the advantage is that they do not require many training samples. Methods based on traditional machine learning operators first extract low-order or high-order features of the point cloud or image, then feed the extracted point cloud or image features and the corresponding labels into a traditional machine learning operator, such as a support vector machine or a random forest, for training, and finally use the trained model to automatically identify the point cloud or image from its extracted features. Deep-learning-based methods feed the point cloud into a pre-designed deep neural network for training and classify the objects. Representative networks include voxel-based convolutional neural networks, convolutional neural networks based on multi-view projection images, and PointNet: the first is trained by feeding pre-generated voxels into a three-dimensional convolutional neural network; the second is trained by feeding images generated from the point cloud at different viewpoints into a convolutional neural network; and, unlike the first two, PointNet takes the point cloud directly as input and constructs a symmetric function to normalize the point cloud features into the same space. The advantage of deep-learning-based methods is that high-level features do not need to be designed by hand; the disadvantages are that they require a large number of training samples and laborious training sample annotation.
In view of this, a method for classifying road traffic rods by fusing vehicle-mounted three-dimensional laser point clouds and images is urgently needed.
Disclosure of Invention
Therefore, the technical problem to be solved by the invention is to provide a road traffic rod-shaped object classification method that fuses vehicle-mounted three-dimensional laser point clouds and images.
The road traffic rod-shaped object classification method fusing vehicle-mounted three-dimensional laser point clouds and images according to the invention comprises the following steps:
s1, acquiring point clouds and image data corresponding to the point clouds;
s2, removing ground points in the point cloud to obtain a pre-segmentation body, horizontally slicing the pre-segmentation body along the vertical direction according to a preset height to obtain a continuous slice set, performing feature constraint on the continuous slice set through two-dimensional features of slices and concentric cylinder features, and taking the continuous slice set which meets the feature constraint as a rod-shaped object original seed point;
s3, growing the original seed points of the rod-shaped objects through a growing algorithm to obtain complete rod-shaped object point clouds so as to obtain accurate position information of the rod-shaped object point clouds;
s4, projecting the rod-shaped object point cloud to the panoramic image according to the corresponding relation between the point cloud and the panoramic image to obtain an image range corresponding to the rod-shaped object point cloud;
and S5, carrying out instance segmentation on the image by using the trained MaskRCNN to obtain the instance segmentation information of the rod-shaped objects, and finely classifying the rod-shaped objects in the panoramic image by using the range of the rod-shaped object point cloud obtained in the step S4.
Further, step S2 specifically includes the following steps:
S201, screening the ground points in the point cloud by the following condition:

Ground(p) ⟺ ( |z(p) − (H_traj(p) − H_s)| < 0.25 m ) ∧ ( Δh_local(p) < 0.15 m )

wherein Ground(p) indicates that the point p belongs to the ground; p is a point randomly selected from the point cloud; z(p) is the elevation value of p; H_traj(p) is the height of the track point matched to p through the time tag recorded during point cloud acquisition; H_s is the height of the vehicle-mounted three-dimensional laser scanner above the ground; the height threshold is 0.25 m; Δh_local(p) is the local elevation change of the neighbourhood of p, which must be smaller than 0.15 m and is calculated by the following formula:

Δh_local(p) = max{ |z(p_i) − z(p_j)| : p_i, p_j ∈ N_k(p) }

wherein N_k(p) is the set of the k nearest neighbour points of the point p in the point cloud; p_i and p_j are any two points in this set; z(·) is the elevation value of a point;
S202, traversing all points in the point cloud, and removing all ground points to obtain the pre-segmentation body set;
S203, horizontally slicing each pre-segmentation body in the pre-segmentation body set at a height of 0.25 m in the vertical direction;
S204, extracting the two-dimensional maximum distance between the points in each slice and the offset between slices in the two-dimensional plane direction, constructing a concentric cylinder model comprising an inner cylinder and an outer cylinder, and then filtering the slices through the following filtering formula based on the two-dimensional features and the concentric cylinder features:

Pole(O) ⟺ ( n(S) > 5 ) ∧ ( Δ_xy(S) < 0.25 m ) ∧ ( |P_C1| / |P_C2| ≥ λ )

wherein Pole(O) indicates that the pre-segmentation body O is a rod-shaped object; S is the set of vertically continuous slices of O whose two-dimensional maximum distance is smaller than the distance threshold, the distance threshold being 0.2-0.5 m; n(S) is the number of slices in the continuous slice set, which must be greater than 5; Δ_xy(S) is the maximum offset of the continuous slice set in the two-dimensional plane direction, which must be smaller than 0.25 m; C1 and C2 are the inner and outer cylinders fitted on the basis of the continuous slice set, r1 being the inner cylinder radius and r2 the outer cylinder radius; P_C1 is the set of points of the continuous slice set falling inside the inner cylinder C1 and P_C2 is the set of points falling inside the outer cylinder C2; λ < 1 is the concentric cylinder model proportion threshold, which requires that at least 85% of the points in the continuous slice set fall inside the inner cylinder;
S205, traversing the pre-segmentation body set; the continuous slice set of each pre-segmentation body that passes the filtering is taken as the original seed points of the rod-shaped object pre-segmentation body.
Further, the growing algorithm in step S3 includes the following steps:
S301, taking the original seed point set Seed from the continuous slice set S of a rod-shaped object pre-segmentation body O, and adding the seed point set Seed to the rod-shaped object segment Seg;
S302, randomly taking one point from Seed as a seed point s, deleting s from the seed point set Seed, and searching the K nearest neighbours of s in the point cloud C;
S303, diffusing from the K nearest neighbours of the seed point to the periphery of the rod-shaped object point cloud: if the K-nearest-neighbour domain contains points whose intensity differs from that of the seed point s by less than 20% of the intensity of s, adding these points to the seed point set Seed and to the rod-shaped object segment Seg, and deleting the seed point s and these diffusion seed points from the point cloud C;
S304, circularly executing S302 and S303 until the set Seed is empty;
S305, the finally obtained rod-shaped object segment Seg is the complete point cloud of the rod-shaped object, so that the accurate position information of each rod-shaped object can be obtained.
Further, step S4 specifically includes the following steps:
S401, mapping the vehicle-mounted three-dimensional laser point cloud coordinate system corresponding to the rod-shaped object point cloud to the panoramic image coordinate system through the IMU/GNSS data, so as to obtain the panoramic image data corresponding to the rod-shaped object point cloud;
S402, calculating the coordinates of each vertex of the outer bounding box of the rod-shaped object point cloud and mapping them into the simulated unit sphere coordinate system by the following formula:

P_s = R⁻¹ · (P − O_c) / d

wherein O_c is the coordinate of the camera centre point in the vehicle-mounted three-dimensional laser point cloud coordinate system; R is the rotation matrix from the simulated unit sphere coordinate system to the vehicle-mounted three-dimensional laser point cloud coordinate system, obtained from the pitch angle, roll angle and yaw angle recorded by the IMU; P is a point in the vehicle-mounted three-dimensional laser point cloud; d is the distance from the point P to the camera centre point O_c; P_s is the point on the simulated unit sphere, in the simulated unit sphere coordinate system, onto which the point P is projected by the above formula;
S403, mapping the coordinates of each vertex of the outer bounding box of the rod-shaped object point cloud from the simulated unit sphere coordinate system to the panoramic image coordinate system by the following formulas:

θ = arctan(y_s / x_s),  φ = arccos(z_s)
u = W · θ / (2π),  v = H · φ / π

wherein (u, v) are the pixel coordinates in the panoramic image coordinate system; W and H are respectively the length and width of the panoramic image; φ and θ are respectively the latitudinal angle and the longitudinal angle of the simulated unit sphere, the radius of the simulated unit sphere being 1; (x_s, y_s, z_s) are the simulated unit sphere coordinates;
S404, mapping the coordinates of the outer bounding box of the rod-shaped object point cloud into the panoramic image by using the conversion relations of steps S402 and S403, and calculating the range of the circumscribed rectangle in the panoramic image.
Further, the specific process of step S5 is as follows:
S501, image data including the rod-shaped object categories and instance information are annotated in the panoramic images, and the MaskRCNN neural network is trained on the annotated image data to obtain a neural network model;
S502, automatic instance segmentation is carried out on the panoramic image data by using the trained MaskRCNN neural network to obtain the categories and instance information of the rod-shaped objects;
S503, the panoramic image range calculated in step S4 and the categories and instance information calculated in step S502 are subjected to superposition analysis to obtain the fine categories and instance information of the rod-shaped objects;
S504, the correspondence between the fine categories and instance information of the rod-shaped objects in step S503 and the accurate position information of the rod-shaped objects in step S3 is established, completing the classification.
Compared with the prior art, the technical scheme of the invention has the following advantages:
According to the method, a set of high-order point cloud features is extracted with feature extraction algorithms from the point cloud processing field; road traffic rod-shaped objects are automatically extracted through these high-order features and knowledge-reasoning rules, and their position information is thereby obtained; the automatically extracted road traffic rod-shaped object point cloud is projected onto the vehicle-mounted image to crop the corresponding image block; the road traffic rod-shaped object is then finely identified with a deep learning model from the image recognition field; and the position and fine category information of the road traffic rod-shaped object are finally output. In this way, road traffic rod-shaped objects in large-scale urban road scene point clouds and images are classified automatically and finely, and the efficiency and degree of automation of the fine classification of traffic rod-shaped objects in complex urban road scenes are improved.
Drawings
FIG. 1 is a schematic flow chart of a method provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of a concentric cylinder model in a method provided by an embodiment of the invention;
FIG. 3 is a schematic diagram of a growing algorithm in the method provided by the embodiment of the invention;
FIG. 4 is a schematic diagram of a cloud of rod points detected by a growing algorithm in a method provided by an embodiment of the invention;
FIG. 5 is a schematic diagram illustrating a conversion relationship between a point cloud coordinate of a rod and a panoramic coordinate and a simulation unit sphere coordinate in the method according to the embodiment of the present invention;
FIG. 6 is a schematic diagram illustrating a point cloud projected onto a panoramic image according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of instance segmentation in the method provided by the embodiment of the present invention;
FIG. 8 is a schematic diagram of the instance segmentation result in the method provided by the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The method for classifying road traffic rod-shaped objects by fusing vehicle-mounted three-dimensional laser point clouds and images in this embodiment, as shown in FIG. 1, comprises the following steps.
And S1, acquiring the point cloud and the image data corresponding to the point cloud. Specifically, the point cloud is collected by a vehicle-mounted three-dimensional laser scanner, the image data is collected by a vehicle-mounted camera, and corresponding IMU/GNSS data is collected by the IMU/GNSS while the two data are collected.
S2, removing ground points in the point cloud to obtain pre-segmentation bodies, horizontally slicing each pre-segmentation body along the vertical direction at a preset height to obtain continuous slice sets, performing feature constraint on the continuous slice sets through the two-dimensional features of the slices and the concentric cylinder features, and taking the continuous slice sets that meet the feature constraint as the original seed points of the rod-shaped objects. Because the installation position of the vehicle-mounted three-dimensional laser scanner is relatively fixed, the height of its centre point, i.e. of the vehicle-mounted track point, above the ground is close to a fixed value H_s. The ground of a city is generally flat, and because the elevation does not change much within a small range, the local elevation change of a ground point is small.
Therefore, step S2 includes the following steps.
S201, firstly screening the ground points in the point cloud by the following condition:

Ground(p) ⟺ ( |z(p) − (H_traj(p) − H_s)| < 0.25 m ) ∧ ( Δh_local(p) < 0.15 m )

wherein Ground(p) indicates that the point p belongs to the ground; p is a point randomly selected from the point cloud; z(p) is the elevation value of p; H_traj(p) is the height of the track point matched to p through the time tag recorded during point cloud acquisition; H_s is the height of the vehicle-mounted three-dimensional laser scanner above the ground; the height threshold is 0.25 m; Δh_local(p) is the local elevation change of p, which must be smaller than 0.15 m. The point cloud local elevation change feature describes the flatness of the local neighbourhood of each point. To calculate it, a KD tree is first constructed so that the k nearest neighbour points of each point in the point cloud can be searched quickly; the elevations of these neighbour points are then analysed, and the difference between the maximum and the minimum elevation value is the local elevation change feature of the point, calculated by the following formula:

Δh_local(p) = max{ |z(p_i) − z(p_j)| : p_i, p_j ∈ N_k(p) }

wherein N_k(p) is the set of the k nearest neighbour points of the point p in the point cloud; p_i and p_j are any two points in this set; z(·) is the elevation value of a point.
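By way of illustration only, the following Python sketch shows one possible implementation of the ground-point screening described in step S201, assuming the point cloud is held in a NumPy array and each point has already been matched to its trajectory height by time tag; the function name, the neighbourhood size k and the use of SciPy's KD tree are assumptions of this sketch rather than part of the described method.

```python
import numpy as np
from scipy.spatial import cKDTree

def ground_mask(points, traj_height, scanner_height,
                k=10, height_tol=0.25, local_tol=0.15):
    """Return a boolean mask marking the points screened as ground (step S201).

    points         : (N, 3) array of x, y, z coordinates.
    traj_height    : (N,) trajectory-point height matched to each point by time tag.
    scanner_height : mounting height of the laser scanner above the ground.
    """
    # Condition 1: the point's elevation is close to the expected ground level,
    # i.e. the trajectory height minus the scanner mounting height (|diff| < 0.25 m).
    expected_ground = traj_height - scanner_height
    near_ground_level = np.abs(points[:, 2] - expected_ground) < height_tol

    # Condition 2: the local elevation change over the k nearest neighbours
    # (max neighbour elevation minus min neighbour elevation) is below 0.15 m.
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    neigh_z = points[:, 2][idx]                      # (N, k) neighbour elevations
    locally_flat = (neigh_z.max(axis=1) - neigh_z.min(axis=1)) < local_tol

    return near_ground_level & locally_flat
```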
S202, traversing all points in the point cloud, removing all ground points from the point cloud, and performing three-dimensional connected-component analysis on the remaining points to obtain the pre-segmentation body set.
S203, horizontally slicing each pre-segment in the set of pre-segments at a height of 0.25m in the vertical direction.
S204, extracting the two-dimensional maximum distance between each point in the slice and the offset between the slices in the two-dimensional plane direction, and simultaneously constructing a concentric cylinder model comprising an inner cylinder and an outer cylinder, wherein the concentric cylinder model is shown in FIG. 2. Due to its morphological characteristics, the shaft exhibits a small amount of offset in the set of successive slices, while the number of points of the shaft falling in the inner cylinder will be greater than in the outer cylinder, based on the concentric cylinder model.
Then the slices are filtered through the following filtering formula based on the two-dimensional features and the concentric cylinder features:

Pole(O) ⟺ ( n(S) > 5 ) ∧ ( Δ_xy(S) < 0.25 m ) ∧ ( |P_C1| / |P_C2| ≥ λ )

wherein Pole(O) indicates that the pre-segmentation body O is a rod-shaped object; S is the set of vertically continuous slices of O whose two-dimensional maximum distance is smaller than the distance threshold; the continuous slice set refers to a set formed by slices that are adjacent in the vertical direction, and the distance threshold is 0.2-0.5 m, determined from actually measured rod-shaped object point cloud data; n(S) is the number of slices in the continuous slice set, which must be greater than 5; Δ_xy(S) is the maximum offset of the continuous slice set in the two-dimensional plane direction, which must be smaller than 0.25 m; C1 and C2 are the inner and outer cylinders fitted on the basis of the continuous slice set, r1 being the inner cylinder radius and r2 the outer cylinder radius; P_C1 is the set of points of the continuous slice set falling inside the inner cylinder C1 and P_C2 is the set of points falling inside the outer cylinder C2; λ < 1 is the concentric cylinder model proportion threshold, which requires that at least 85% of the points in the continuous slice set fall inside the inner cylinder.
S205, traversing the pre-segmentation body set; the continuous slice set of each pre-segmentation body that passes the filtering is taken as the original seed points of the rod-shaped object pre-segmentation body.
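Purely as an illustrative sketch of steps S203 to S205, the following Python function slices one pre-segmentation body at a 0.25 m height and tests the two-dimensional and concentric-cylinder constraints; the simple axis estimate, the outer-radius choice and all names are assumptions of this sketch, not the claimed implementation.

```python
import numpy as np

def is_pole_candidate(body, slice_h=0.25, max_diameter=0.4, max_offset=0.25,
                      min_slices=5, inner_ratio=0.85, outer_scale=2.0):
    """Test whether one pre-segmentation body (an (N, 3) array) looks like a pole."""
    z = body[:, 2]
    labels = np.floor((z - z.min()) / slice_h).astype(int)   # S203: 0.25 m slices

    centers, kept_pts = [], []
    for s in range(labels.max() + 1):
        sl = body[labels == s]
        if sl.shape[0] == 0:
            continue
        xy = sl[:, :2]
        # two-dimensional maximum distance between the points of the slice
        d_max = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1).max()
        if d_max < max_diameter:            # distance threshold (0.2-0.5 m)
            centers.append(xy.mean(axis=0))
            kept_pts.append(sl)

    if len(kept_pts) <= min_slices:         # more than 5 slices required
        return False

    centers = np.asarray(centers)
    # maximum offset of the slice set in the two-dimensional plane direction
    if np.linalg.norm(centers - centers.mean(axis=0), axis=1).max() >= max_offset:
        return False

    # concentric cylinders around the common axis (inner radius r1, outer r2)
    axis_xy = centers.mean(axis=0)
    pts_xy = np.vstack(kept_pts)[:, :2]
    dist = np.linalg.norm(pts_xy - axis_xy, axis=1)
    r1 = max_diameter / 2.0
    r2 = outer_scale * r1
    n_inner = np.count_nonzero(dist <= r1)
    n_outer = np.count_nonzero(dist <= r2)
    return n_outer > 0 and (n_inner / n_outer) >= inner_ratio
```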
S3, growing the original seed points of the rod-shaped object through a growing algorithm to obtain the complete rod-shaped object point cloud and thus the accurate position information of the rod-shaped object, wherein the growing algorithm grows according to the reflection intensity of each point in the point cloud.
Specifically, as shown in fig. 3, the growing algorithm in step S3 includes the following steps.
S301, taking the original seed point set Seed from the continuous slice set S of the rod-shaped object pre-segmentation body O, and adding the seed point set Seed to the rod-shaped object segment Seg.
S302, randomly taking one point from Seed as a seed point s, deleting s from the seed point set Seed, and searching the K nearest neighbours of s in the point cloud C.
S303, diffusing from the K nearest neighbours of the point to the periphery of the rod-shaped object point cloud: if the K-nearest-neighbour domain contains points whose intensity differs from that of the seed point s by less than 20% of the intensity of s, these points are added to the seed point set Seed and to the rod-shaped object segment Seg, and the seed point s and these diffusion seed points are deleted from the point cloud C.
S304, circularly executing S302 and S303 until the set Seed is empty.
S305, as shown in FIG. 4, the finally obtained rod-shaped object segment Seg is the complete point cloud of the rod-shaped object, so that the accurate position information of each rod-shaped object can be obtained.
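As a minimal sketch only, the following Python function mirrors the intensity-constrained growing of steps S301 to S305, assuming each point carries a reflection intensity in its fourth column; the neighbourhood size, the data layout and the function name are assumptions of this sketch.

```python
import numpy as np
from scipy.spatial import cKDTree

def grow_pole(cloud, seed_idx, k=20, intensity_tol=0.20):
    """cloud: (N, 4) array of x, y, z, intensity; seed_idx: indices of the
    original seed points taken from the continuous slice set."""
    tree = cKDTree(cloud[:, :3])
    available = np.ones(len(cloud), dtype=bool)      # points still in the cloud
    available[seed_idx] = False
    seeds = list(seed_idx)                           # seed point set
    segment = list(seed_idx)                         # rod-shaped object segment

    while seeds:                                     # S304: loop until empty
        s = seeds.pop()                              # S302: take one seed point
        _, neigh = tree.query(cloud[s, :3], k=k)     # its K nearest neighbours
        for n in np.atleast_1d(neigh):
            if not available[n]:
                continue
            # S303: accept neighbours whose intensity differs from the seed's
            # intensity by less than 20 % of that intensity.
            if abs(cloud[n, 3] - cloud[s, 3]) < intensity_tol * cloud[s, 3]:
                available[n] = False                 # remove from the cloud
                seeds.append(n)
                segment.append(n)
    return np.asarray(segment)                       # complete pole point cloud
```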
And S4, projecting the rod-shaped object point cloud onto the panoramic image according to the corresponding relation between the point cloud and the panoramic image to obtain an image range corresponding to the rod-shaped object point cloud.
Step S4 specifically includes the following steps:
s401, as shown in fig. 5, mapping the vehicle-mounted three-dimensional laser point cloud coordinate system corresponding to the rod-like object point cloud to the panoramic image coordinate system through the IMU/GNSS, so as to obtain panoramic image data of the rod-like object point cloud.
S402, as shown in FIG. 5, calculating the coordinates of each vertex of the outer bounding box of the rod-shaped object point cloud and mapping them into the simulated unit sphere coordinate system by the following formula:

P_s = R⁻¹ · (P − O_c) / d

wherein O_c is the coordinate of the centre point of the vehicle-mounted camera in the vehicle-mounted three-dimensional laser point cloud coordinate system; R is the rotation matrix from the simulated unit sphere coordinate system to the vehicle-mounted three-dimensional laser point cloud coordinate system, obtained from the pitch angle, roll angle and yaw angle recorded by the IMU; P is a point in the vehicle-mounted three-dimensional laser point cloud; d is the distance from the point P to the camera centre point O_c; P_s is the point on the simulated unit sphere, in the simulated unit sphere coordinate system, onto which the point P is projected by the above formula.
S403, as shown in FIG. 6, mapping the coordinates of each vertex of the outer bounding box of the rod-shaped object point cloud from the simulated unit sphere coordinate system of the previous step into the panoramic image coordinate system by the following formulas:

θ = arctan(y_s / x_s),  φ = arccos(z_s)
u = W · θ / (2π),  v = H · φ / π

wherein (u, v) are the pixel coordinates in the panoramic image coordinate system; W and H are respectively the length and width of the panoramic image; φ and θ are respectively the latitudinal angle and the longitudinal angle of the simulated unit sphere, the radius of the simulated unit sphere being 1; (x_s, y_s, z_s) are the simulated unit sphere coordinates.
S404, mapping the coordinates of the outer bounding box of the rod-shaped object point cloud into the panoramic image by using the conversion relations of steps S402 and S403, and calculating the range of the circumscribed rectangle in the panoramic image.
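As an illustration under the same coordinate conventions, the following Python sketch projects the bounding-box vertices of a rod-shaped object onto an equirectangular panorama through the simulated unit sphere; the rotation matrix R and the camera centre O_c are assumed to be given, and the equirectangular convention used here is one common choice rather than necessarily the one used by the described system.

```python
import numpy as np

def project_to_panorama(vertices, O_c, R, width, height):
    """vertices: (M, 3) bounding-box corners in the point-cloud frame."""
    # S402: map to the simulated unit sphere centred on the camera.
    rel = (vertices - O_c) @ np.linalg.inv(R).T      # into the sphere frame
    sphere = rel / np.linalg.norm(rel, axis=1, keepdims=True)

    # S403: spherical angles -> panorama pixels (equirectangular model).
    lon = np.arctan2(sphere[:, 1], sphere[:, 0])     # longitudinal angle
    lat = np.arccos(np.clip(sphere[:, 2], -1, 1))    # latitudinal angle
    u = (lon / (2 * np.pi) + 0.5) * width
    v = (lat / np.pi) * height

    # S404: circumscribed rectangle of the projected vertices in the panorama.
    return (u.min(), v.min(), u.max(), v.max())
```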
S5, as shown in FIG. 7, performing instance segmentation on the image by using the trained MaskRCNN to obtain the instance segmentation information of the rod-shaped objects, and finely classifying the rod-shaped object point clouds obtained in step S4 by using their range in the panoramic image.
The specific process of step S5 is:
S501, image data including the rod-shaped object categories and instance information are annotated in the panoramic images, and the MaskRCNN neural network is trained on the annotated image data to obtain a neural network model.
S502, automatic instance segmentation is carried out on the panoramic image data by using the trained MaskRCNN neural network to obtain the categories and instance information of the rod-shaped objects (an illustrative inference sketch is given after step S504).
S503, the panoramic image range calculated in step S4 and the categories and instance information calculated in step S502 are subjected to superposition analysis, so that the fine categories and instance information of the rod-shaped objects are obtained.
S504, the correspondence between the fine categories and instance information of the rod-shaped objects in step S503 and the accurate position information of the rod-shaped objects in step S3 is established, completing the classification.
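The inference sketch referred to in step S502 is given below; it uses the Mask R-CNN implementation shipped with torchvision as a stand-in for the trained network described here, and the number of classes, the weight file name and the score threshold are assumptions of this sketch.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# num_classes counts the background plus the pole categories; 5 is an assumption.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(num_classes=5)
model.load_state_dict(torch.load("pole_maskrcnn.pth"))   # hypothetical weight file
model.eval()

def segment_poles(panorama, score_thresh=0.5):
    """Run instance segmentation on one panoramic image (a PIL image)."""
    with torch.no_grad():
        out = model([to_tensor(panorama)])[0]
    keep = out["scores"] > score_thresh
    return out["boxes"][keep], out["labels"][keep], out["masks"][keep]
```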
As shown in FIG. 8, instance segmentation of the image yields the instance information of the road traffic rod-shaped object in the panoramic image, including the category and number of each component. The image range projected from the road traffic rod-shaped object in step S404 is superimposed with the panorama instance segmentation information to filter out erroneous segmentation results, so that the accurate range and instance information of the road traffic rod-shaped object are obtained. Combining this with the position information of the road traffic rod-shaped object obtained in step S305, the refined instance information and the position information of the rod-shaped object are obtained; finally, both are displayed in the image, and the fine classification of the rod-shaped object is completed.
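A minimal sketch of the superposition analysis of steps S503 and S504 follows; the rectangle-IoU matching rule, the threshold and the returned dictionary structure are illustrative assumptions, not the patented procedure.

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two rectangles (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = (a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter
    return inter / union if union > 0 else 0.0

def classify_pole(pole_rect, pole_position, instance_boxes, instance_labels,
                  min_iou=0.3):
    """Attach the best-overlapping instance category to one pole's 3-D position."""
    if len(instance_boxes) == 0:
        return None
    scores = [iou(pole_rect, box) for box in instance_boxes]
    best = int(np.argmax(scores))
    if scores[best] < min_iou:              # filter erroneous segmentations
        return None
    return {"category": int(instance_labels[best]),
            "position": pole_position}      # fine class linked to 3-D position
```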
According to the method, a set of high-order point cloud features is extracted with feature extraction algorithms from the point cloud processing field; road traffic rod-shaped objects are automatically extracted through these high-order features and knowledge-reasoning rules, and their position information is thereby obtained; the automatically extracted road traffic rod-shaped object point cloud is projected onto the vehicle-mounted image to crop the corresponding image block; the road traffic rod-shaped object is then finely identified with a deep learning model from the image recognition field; and the position and fine category information of the road traffic rod-shaped object are finally output. In this way, road traffic rod-shaped objects in large-scale urban road scene point clouds and images are classified automatically and finely, and the efficiency and degree of automation of the fine classification of traffic rod-shaped objects in complex urban road scenes are improved.
It should be understood that the above examples are given only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaustively list all embodiments here, and obvious variations or modifications derived therefrom remain within the protection scope of the invention.

Claims (4)

1. A road traffic rod object classification method fusing vehicle-mounted three-dimensional laser point cloud and images is characterized by comprising the following steps:
s1, acquiring point clouds and image data corresponding to the point clouds;
s2, removing ground points in the point cloud to obtain a pre-segmentation body, horizontally slicing the pre-segmentation body along the vertical direction according to a preset height to obtain a continuous slice set, performing feature constraint on the continuous slice set through two-dimensional features of slices and concentric cylinder features, and taking the continuous slice set which meets the feature constraint as a rod-shaped object original seed point;
s3, growing the original seed points of the rod-shaped objects through a growing algorithm to obtain complete rod-shaped object point clouds so as to obtain accurate position information of the rod-shaped object point clouds;
s4, projecting the rod-shaped object point cloud to the panoramic image according to the corresponding relation between the point cloud and the panoramic image to obtain an image range corresponding to the rod-shaped object point cloud;
S5, carrying out instance segmentation on the image by using the trained MaskRCNN to obtain the instance segmentation information of the rod-shaped object, and finely classifying the rod-shaped object in the panoramic image by using the range of the rod-shaped object point cloud obtained in the step S4;
S201, screening the ground points in the point cloud by the following condition:

Ground(p) ⟺ ( |z(p) − (H_traj(p) − H_s)| < 0.25 m ) ∧ ( Δh_local(p) < 0.15 m )

wherein Ground(p) indicates that the point p belongs to the ground; p is a point randomly selected from the point cloud; z(p) is the elevation value of p; H_traj(p) is the height of the track point matched to p through the time tag recorded during point cloud acquisition; H_s is the height of the vehicle-mounted three-dimensional laser scanner above the ground; the height threshold is 0.25 m; Δh_local(p) is the local elevation change of the neighbourhood of p, which must be smaller than 0.15 m and is calculated by the following formula:

Δh_local(p) = max{ |z(p_i) − z(p_j)| : p_i, p_j ∈ N_k(p) }

wherein N_k(p) is the set of the k nearest neighbour points of the point p in the point cloud; p_i and p_j are any two points in this set; z(·) is the elevation value of a point;
S202, traversing all points in the point cloud, and removing all ground points to obtain the pre-segmentation body set;
S203, horizontally slicing each pre-segmentation body in the pre-segmentation body set at a height of 0.25 m in the vertical direction;
S204, extracting the two-dimensional maximum distance between the points in each slice and the offset between slices in the two-dimensional plane direction, constructing a concentric cylinder model comprising an inner cylinder and an outer cylinder, and then filtering the slices through the following filtering formula based on the two-dimensional features and the concentric cylinder features:

Pole(O) ⟺ ( n(S) > 5 ) ∧ ( Δ_xy(S) < 0.25 m ) ∧ ( |P_C1| / |P_C2| ≥ λ )

wherein Pole(O) indicates that the pre-segmentation body O is a rod-shaped object; S is the set of vertically continuous slices of O whose two-dimensional maximum distance is smaller than the distance threshold, the distance threshold being 0.2-0.5 m; n(S) is the number of slices in the continuous slice set, which must be greater than 5; Δ_xy(S) is the maximum offset of the continuous slice set in the two-dimensional plane direction, which must be smaller than 0.25 m; C1 and C2 are the inner and outer cylinders fitted on the basis of the continuous slice set, r1 being the inner cylinder radius and r2 the outer cylinder radius; P_C1 is the set of points of the continuous slice set falling inside the inner cylinder C1 and P_C2 is the set of points falling inside the outer cylinder C2; λ < 1 is the concentric cylinder model proportion threshold, which requires that at least 85% of the points in the continuous slice set fall inside the inner cylinder;
S205, traversing the pre-segmentation body set; the continuous slice set of each pre-segmentation body that passes the filtering is taken as the original seed points of the rod-shaped object pre-segmentation body.
2. The method according to claim 1, wherein the growing algorithm in step S3 comprises the steps of:
S301, taking the original seed point set Seed from the continuous slice set S of a rod-shaped object pre-segmentation body O, and adding the seed point set Seed to the rod-shaped object segment Seg;
S302, randomly taking one point from Seed as a seed point s, deleting s from the seed point set Seed, and searching the K nearest neighbours of s in the point cloud C;
S303, diffusing from the K nearest neighbours of the seed point to the periphery of the rod-shaped object point cloud: if the K-nearest-neighbour domain contains points whose intensity differs from that of the seed point s by less than 20% of the intensity of s, adding these points to the seed point set Seed and to the rod-shaped object segment Seg, and deleting the seed point s and these diffusion seed points from the point cloud C;
S304, circularly executing S302 and S303 until the set Seed is empty;
S305, the finally obtained rod-shaped object segment Seg is the complete point cloud of the rod-shaped object, so that the accurate position information of each rod-shaped object can be obtained.
3. The method according to claim 1, wherein step S4 comprises the following steps:
S401, mapping the vehicle-mounted three-dimensional laser point cloud coordinate system corresponding to the rod-shaped object point cloud to the panoramic image coordinate system through the IMU/GNSS data to obtain the panoramic image data of the rod-shaped object point cloud;
S402, calculating the coordinates of each vertex of the outer bounding box of the rod-shaped object point cloud and mapping them into the simulated unit sphere coordinate system by the following formula:

P_s = R⁻¹ · (P − O_c) / d

wherein O_c is the coordinate of the camera centre point in the vehicle-mounted three-dimensional laser point cloud coordinate system; R is the rotation matrix from the simulated unit sphere coordinate system to the vehicle-mounted three-dimensional laser point cloud coordinate system, obtained from the pitch angle, roll angle and yaw angle recorded by the IMU; P is a point in the vehicle-mounted three-dimensional laser point cloud; d is the distance from the point P to the camera centre point O_c; P_s is the point on the simulated unit sphere, in the simulated unit sphere coordinate system, onto which the point P is projected by the above formula;
S403, mapping the coordinates of each vertex of the outer bounding box of the rod-shaped object point cloud from the simulated unit sphere coordinate system of the previous step into the panoramic image coordinate system by the following formulas:

θ = arctan(y_s / x_s),  φ = arccos(z_s)
u = W · θ / (2π),  v = H · φ / π

wherein (u, v) are the pixel coordinates in the panoramic image coordinate system; W and H are respectively the length and width of the panoramic image; φ and θ are respectively the latitudinal angle and the longitudinal angle of the simulated unit sphere, the radius of the simulated unit sphere being 1; (x_s, y_s, z_s) are the simulated unit sphere coordinates;
S404, mapping the coordinates of the outer bounding box of the rod-shaped object point cloud into the panoramic image by using the conversion relations of steps S402 and S403, and calculating the range of the circumscribed rectangle in the panoramic image.
4. The method according to claim 1, wherein the specific process of step S5 is as follows:
S501, image data including the rod-shaped object categories and instance information are annotated in the panoramic images, and the MaskRCNN neural network is trained on the annotated image data to obtain a neural network model;
S502, automatic instance segmentation is carried out on the panoramic image data by using the trained MaskRCNN neural network to obtain the categories and instance information of the rod-shaped objects;
S503, the panoramic image range calculated in step S4 and the categories and instance information calculated in step S502 are subjected to superposition analysis to obtain the fine categories and instance information of the rod-shaped objects;
S504, the correspondence between the fine categories and instance information of the rod-shaped objects in step S503 and the accurate position information of the rod-shaped objects in step S3 is established, completing the classification.
CN202110852461.1A 2021-07-27 2021-07-27 Road traffic rod object classification method integrating vehicle-mounted three-dimensional laser point cloud and image Active CN113313081B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110852461.1A CN113313081B (en) 2021-07-27 2021-07-27 Road traffic rod object classification method integrating vehicle-mounted three-dimensional laser point cloud and image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110852461.1A CN113313081B (en) 2021-07-27 2021-07-27 Road traffic rod object classification method integrating vehicle-mounted three-dimensional laser point cloud and image

Publications (2)

Publication Number Publication Date
CN113313081A (en) 2021-08-27
CN113313081B (en) 2021-11-09

Family

ID=77382356

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110852461.1A Active CN113313081B (en) 2021-07-27 2021-07-27 Road traffic rod object classification method integrating vehicle-mounted three-dimensional laser point cloud and image

Country Status (1)

Country Link
CN (1) CN113313081B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116310849B (en) * 2023-05-22 2023-09-19 Shenzhen University Tree point cloud monomerization extraction method based on three-dimensional morphological characteristics

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107516077A (en) * 2017-08-17 2017-12-26 武汉大学 Traffic sign information extracting method based on laser point cloud and image data fusion
CN111815776A (en) * 2020-02-04 2020-10-23 山东水利技师学院 Three-dimensional building fine geometric reconstruction method integrating airborne and vehicle-mounted three-dimensional laser point clouds and streetscape images
CN111899302A (en) * 2020-06-23 2020-11-06 武汉闻道复兴智能科技有限责任公司 Point cloud data-based visual detection method, device and system
CN112446343A (en) * 2020-12-07 2021-03-05 苏州工业园区测绘地理信息有限公司 Vehicle-mounted point cloud road rod-shaped object machine learning automatic extraction method integrating multi-scale features

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10679338B2 (en) * 2017-08-23 2020-06-09 General Electric Company Three-dimensional modeling of an object

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107516077A (en) * 2017-08-17 2017-12-26 武汉大学 Traffic sign information extracting method based on laser point cloud and image data fusion
CN111815776A (en) * 2020-02-04 2020-10-23 山东水利技师学院 Three-dimensional building fine geometric reconstruction method integrating airborne and vehicle-mounted three-dimensional laser point clouds and streetscape images
CN111899302A (en) * 2020-06-23 2020-11-06 武汉闻道复兴智能科技有限责任公司 Point cloud data-based visual detection method, device and system
CN112446343A (en) * 2020-12-07 2021-03-05 苏州工业园区测绘地理信息有限公司 Vehicle-mounted point cloud road rod-shaped object machine learning automatic extraction method integrating multi-scale features

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Pole-Like Road Furniture Detection and Decomposition in Mobile Laser Scanning Data Based on Spatial Relations; Fashuai Li et al.; Remote Sensing; 2018; 1-28 *
Research on a hierarchical extraction method for pole-like ground objects in vehicle-mounted laser point clouds; Zhu Yanbin et al.; Geomatics World; 2019; 56-60 *

Also Published As

Publication number Publication date
CN113313081A (en) 2021-08-27

Similar Documents

Publication Publication Date Title
CN110263717B (en) Method for determining land utilization category of street view image
Ke et al. A review of methods for automatic individual tree-crown detection and delineation from passive remote sensing
Chen et al. Vehicle detection in high-resolution aerial images via sparse representation and superpixels
Wu et al. Rapid localization and extraction of street light poles in mobile LiDAR point clouds: A supervoxel-based approach
Chen et al. Hierarchical object oriented classification using very high resolution imagery and LIDAR data over urban areas
CN112381861B (en) Forest land point cloud data registration and segmentation method based on foundation laser radar
Alidoost et al. A CNN-based approach for automatic building detection and recognition of roof types using a single aerial image
CN109858450B (en) Ten-meter-level spatial resolution remote sensing image town extraction method and system
CN111046880A (en) Infrared target image segmentation method and system, electronic device and storage medium
CN111598856B (en) Chip surface defect automatic detection method and system based on defect-oriented multipoint positioning neural network
CN106610969A (en) Multimodal information-based video content auditing system and method
CN104318051B (en) The rule-based remote sensing of Water-Body Information on a large scale automatic extracting system and method
CN111898688A (en) Airborne LiDAR data tree species classification method based on three-dimensional deep learning
CN108509950B (en) Railway contact net support number plate detection and identification method based on probability feature weighted fusion
Karsli et al. Automatic building extraction from very high-resolution image and LiDAR data with SVM algorithm
CN115049640B (en) Road crack detection method based on deep learning
Kumar et al. A deep learning paradigm for detection of harmful algal blooms
CN113313081B (en) Road traffic rod object classification method integrating vehicle-mounted three-dimensional laser point cloud and image
Husain et al. Detection and thinning of street trees for calculation of morphological parameters using mobile laser scanner data
CN113033386B (en) High-resolution remote sensing image-based transmission line channel hidden danger identification method and system
Zheng et al. Single shot multibox detector for urban plantation single tree detection and location with high-resolution remote sensing imagery
Kazimi et al. Semantic segmentation of manmade landscape structures in digital terrain models
CN111597939B (en) High-speed rail line nest defect detection method based on deep learning
CN112200248B (en) Point cloud semantic segmentation method, system and storage medium based on DBSCAN clustering under urban road environment
Kim et al. Generation of a DTM and building detection based on an MPF through integrating airborne lidar data and aerial images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant