CN113313081A - Road traffic rod object classification method integrating vehicle-mounted three-dimensional laser point cloud and image - Google Patents

Road traffic rod object classification method integrating vehicle-mounted three-dimensional laser point cloud and image

Info

Publication number
CN113313081A
CN113313081A (application CN202110852461.1A)
Authority
CN
China
Prior art keywords
rod
point
point cloud
shaped object
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110852461.1A
Other languages
Chinese (zh)
Other versions
CN113313081B (en)
Inventor
李发帅
肖建华
李海亭
王诗云
段梦梦
郭明武
刘剑
王闪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Geomatics Institute
Original Assignee
Wuhan Geomatics Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Geomatics Institute filed Critical Wuhan Geomatics Institute
Priority to CN202110852461.1A
Publication of CN113313081A
Application granted
Publication of CN113313081B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/255Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a road traffic rod-shaped object classification method fusing vehicle-mounted three-dimensional laser point clouds and images, which comprises the following steps: acquiring point cloud and image data; removing ground points from the point cloud to obtain pre-segmentation bodies, slicing them, applying feature constraints to the continuous slice sets, and taking the continuous slice sets that satisfy the feature constraints as the original seed points of rod-shaped objects; growing the original seed points with a growing algorithm to obtain the complete rod-shaped object point clouds and thus their accurate position information; projecting the rod-shaped object point clouds onto the panoramic image according to the correspondence between the point cloud and the panoramic image to obtain the image ranges of the rod-shaped object point clouds; and performing instance segmentation on the image with a trained MaskRCNN to obtain instance segmentation information of the rod-shaped objects, and finely classifying the rod-shaped objects using the ranges of their point clouds in the panoramic image. The road traffic rod-shaped object classification method integrating vehicle-mounted three-dimensional laser point clouds and images achieves accurate positioning and fine classification of the rod-shaped objects in the point cloud.

Description

Road traffic rod object classification method integrating vehicle-mounted three-dimensional laser point cloud and image
Technical Field
The invention belongs to the field of image recognition and mapping, and particularly relates to a road traffic rod classification method fusing vehicle-mounted three-dimensional laser point cloud and images.
Background
With the rapid advance of urbanization and the wide adoption of private automobiles in recent years, urban traffic congestion has become increasingly serious, greatly troubling the daily life of urban residents and becoming an important issue in urban management. To address it, developed countries in Europe and America have developed and deployed intelligent traffic systems, which have greatly relieved urban congestion. The road traffic rod-shaped object is an important component of an intelligent traffic system, and its fine classification plays an important role in constructing such a system. In addition, road traffic rods are important mounting points for 5G base stations, so their automatic identification also matters for 5G base station layout. Vehicle-mounted three-dimensional laser scanning systems are widely applied to three-dimensional data acquisition of urban road scenes owing to their good maneuverability and high acquisition precision, and the high-precision three-dimensional point clouds and vehicle-mounted images they collect provide powerful support for automatic identification of road traffic rod-shaped objects. However, urban road scenes are complex, contain many object types, and suffer from severe occlusion, noise and adhesion, which seriously affect the automatic identification accuracy of road traffic rod-shaped objects. At present the fine classification of road traffic rods depends heavily on manual interpretation, so the fine automatic classification of road traffic rods in vehicle-mounted laser point clouds and vehicle-mounted images of large-scale urban scenes is of great significance for constructing intelligent traffic systems and laying out 5G base stations.
Current research on road traffic rod identification falls mainly into three families: methods based on knowledge reasoning, methods based on traditional machine learning, and methods based on deep neural networks. Knowledge-reasoning methods first extract high-order features of the point cloud or image, then manually construct a series of feature-constraint rules from the observed characteristics of the target, and finally identify the point cloud or image from the extracted features and rules. Brenner et al. first proposed a concentric cylinder model for detecting traffic rods based on their characteristics; Lehtomäki et al. improved the concentric cylinder detection method with a scan-line segmentation preprocessing step; and Cabo et al. extended and accelerated the method with voxels. Bremer et al. proposed a method based on eigenvalue vectors to extract traffic rods. Pu et al. proposed slicing the upper end of the shaft to extract traffic rods and classifying them with shape information of their attachments. Yang et al. extracted features based on supervoxels and formulated semantic rules to identify road traffic rods. The drawback of rule-based approaches is that many rules must be designed when there are many target classes and the methods port poorly; their advantage is that they do not need many training samples. Methods based on traditional machine learning first extract low- or high-order features of the point cloud or image, then feed the extracted features and corresponding labels into a traditional machine learning operator such as a support vector machine or a random forest for training, and finally use the trained model to automatically identify point clouds or images whose features have been extracted. Deep-learning methods input the point cloud into a pre-designed deep neural network for training and classify objects with it. Representative networks include voxel-based convolutional neural networks, convolutional neural networks based on multi-view projection images, and PointNet: the first is trained by feeding pre-generated voxels into a three-dimensional convolutional neural network, the second by feeding images rendered from the point cloud at different viewpoints into a convolutional neural network, while PointNet, unlike the first two, takes the point cloud directly as input and constructs a symmetric function to normalize point cloud features into the same space. The advantage of deep-learning methods is that no high-level features need to be designed; the drawbacks are the large number of training samples required and the laborious work of labeling them.
In view of this, a method for classifying road traffic rods by fusing vehicle-mounted three-dimensional laser point clouds and images is urgently needed to be proposed.
Disclosure of Invention
Therefore, the technical problem to be solved by the invention is to provide a road traffic rod classification method integrating vehicle-mounted three-dimensional laser point clouds and images.
The road traffic rod classification method integrating vehicle-mounted three-dimensional laser point cloud and images of the invention comprises the following steps:
s1, acquiring point clouds and image data corresponding to the point clouds;
s2, removing ground points in the point cloud to obtain a pre-segmentation body, horizontally slicing the pre-segmentation body along the vertical direction according to a preset height to obtain a continuous slice set, performing feature constraint on the continuous slice set through two-dimensional features of slices and concentric cylinder features, and taking the continuous slice set which meets the feature constraint as a rod-shaped object original seed point;
s3, growing the original seed points of the rod-shaped objects through a growing algorithm to obtain complete rod-shaped object point clouds so as to obtain accurate position information of the rod-shaped object point clouds;
s4, projecting the rod-shaped object point cloud to the panoramic image according to the corresponding relation between the point cloud and the panoramic image to obtain an image range corresponding to the rod-shaped object point cloud;
and s5, performing instance segmentation on the image by using the trained MaskRCNN to obtain instance segmentation information of the rod-shaped objects, and finely classifying the rod-shaped objects by using the ranges of the rod-shaped object point clouds obtained in step S4 in the panoramic image.
Further, step S2 specifically includes the following steps,
s201, screening ground points in the point cloud through the following formula:

$$\mathrm{Ground}(p) \iff \left| \left( z_{tr(p)} - z_p \right) - H_s \right| < 0.25\,\mathrm{m} \;\wedge\; \Delta h_p < 0.15\,\mathrm{m}$$

wherein: $\mathrm{Ground}(p)$ denotes the condition that point $p$ is a ground point; the first term requires that the absolute difference between the height of the trajectory point above $p$, $z_{tr(p)} - z_p$, and the installation height of the three-dimensional laser scanner be smaller than a threshold of 0.25 m; the second term requires that the local elevation variation of the neighborhood points of $p$ be less than 0.15 m; $p$ is a point randomly selected from the point cloud; $z_{tr(p)}$ is the trajectory point height of $p$, obtained by matching the trajectory information of the point with the time tag recorded during point cloud acquisition; $H_s$ is the height of the vehicle-mounted three-dimensional laser scanner above the ground; $\Delta h_p$ is the local elevation variation of the point cloud at $p$, calculated by the following formula:

$$\Delta h_p = \max_{p_i, p_j \in N_p} \left( z_{p_i} - z_{p_j} \right)$$

wherein: $N_p$ is the set of the $K$ nearest neighbor points of $p$ in the point cloud; $p_i$ and $p_j$ are any two points in that set; $z_{p_i}$ is the elevation value of point $p_i$;
s202, traversing all points in the point cloud, and removing all ground points to obtain a pre-segmentation body set;
s203, horizontally slicing each pre-segmentation body in the pre-segmentation body set at a height of 0.25m in the vertical direction;
s204, extracting the two-dimensional maximum distance between each point in the slice and the offset between slices in the two-dimensional plane direction, simultaneously constructing a concentric cylinder model comprising an inner cylinder and an outer cylinder, and then filtering the slices through a filtering formula based on the two-dimensional features and the concentric cylinder features, the filtering formula being:

$$\mathrm{Pole}(B_i) \iff n(S_i) > 5 \;\wedge\; d_{max}(S_i) < 0.25\,\mathrm{m} \;\wedge\; \frac{N(C_1)}{N(C_2)} \geq 0.85$$

wherein: $\mathrm{Pole}(B_i)$ denotes the condition that pre-segmentation body $B_i$ is a rod-shaped object; $n(S_i) > 5$ requires that the number of slices in the continuous slice set be greater than 5; $d_{max}(S_i) < 0.25\,\mathrm{m}$ requires that the maximum offset of the continuous slice set in the two-dimensional plane direction be less than 0.25 m; $N(C_1)/N(C_2) \geq 0.85$ requires that at least 85% of the points of the continuous slice set falling into the outer cylinder also fall into the inner cylinder; $S_i$ is the vertical-direction continuous slice set of $B_i$ whose two-dimensional maximum distance is smaller than the distance threshold, the distance threshold being 0.2-0.5 m; $d_{max}(S_i)$ is the maximum offset of the continuous slice set $S_i$ in the two-dimensional plane direction; $C_1$ and $C_2$ are the inner and outer cylinders fitted from the continuous slice set $S_i$, $N(C_1)$ being the number of points falling inside the inner cylinder $C_1$ and $N(C_2)$ the number of points falling inside the outer cylinder $C_2$, with $r_1 / r_2 = \tau < 1$, where $\tau$ is the concentric cylinder model proportional threshold, $r_1$ the inner cylinder radius and $r_2$ the outer cylinder radius;
s205, traversing the pre-segmentation body set $B$; all points of the filtered continuous slice set $S_i$ of each pre-segmentation body serve as the original seed points of the rod-shaped object segment $P_i$.
Further, the growing algorithm in step S3 includes the following steps:
s301, taking the original seed point set $Q$ from the continuous slice set $S_i$ of a rod-shaped object pre-segmentation body, and adding the seed point set $Q$ to the rod-shaped object segment $P_i$;
s302, randomly taking one point from $Q$ as the seed point $s$, deleting $s$ from the seed point set $Q$, and searching the $K$ nearest neighbors of $s$ in the point cloud $D$;
s303, diffusing from the $K$ nearest neighbors of the seed point to the surroundings of the rod-shaped object point cloud; if there are points in the $K$-nearest-neighbor domain whose reflection intensity differs from that of the seed point $s$ by less than 20%, adding these points to the seed point set $Q$ and to the rod-shaped object segment $P_i$, and deleting the seed point $s$ and these diffusion seed points from the point cloud $D$;
s304, executing s302 and s303 circularly until the set $Q$ is empty;
s305, the finally obtained rod-shaped object segment $P_i$ is the complete point cloud of the rod-shaped object, so that accurate position information of each rod-shaped object can be obtained.
Further, step S4 specifically includes the following steps;
s401, mapping an on-board three-dimensional laser point cloud coordinate system corresponding to the rod-shaped object point cloud to a panoramic image coordinate system through IMU/GNSS to obtain panoramic image data of the rod-shaped object point cloud;
s402, calculating the coordinates of each vertex of the outer bounding box of the rod-shaped object point cloud and mapping them into the simulated unit sphere coordinate system through the following formula:

$$P_s = \frac{R^{-1}\left( P - O_c \right)}{d}$$

wherein: $O_c$ is the coordinate of the camera center point in the vehicle-mounted three-dimensional laser point cloud coordinate system; $R$ is the rotation matrix from the simulated unit sphere coordinate system to the vehicle-mounted three-dimensional laser point cloud coordinate system, obtained from the pitch, roll and yaw angles recorded by the IMU; $d$ is the distance from a point $P$ of the vehicle-mounted three-dimensional laser point cloud to the camera center point $O_c$; $P_s$ is the point of the simulated unit sphere in the simulated unit sphere coordinate system onto which the point of the vehicle-mounted three-dimensional laser point cloud coordinate system is projected, calculated by the formula of step S402.
s403, mapping the coordinates of each vertex of the outer bounding box of the rod-shaped object point cloud from the simulated unit sphere coordinate system of the previous step into the panoramic image coordinate system through the following formulas:

$$\theta = \operatorname{atan2}\left( Y_s, X_s \right), \qquad \varphi = \arccos\left( Z_s \right)$$

$$u = \frac{\theta}{2\pi} W, \qquad v = \frac{\varphi}{\pi} H$$

wherein: $(u, v)$ are the pixel coordinates in the panoramic image coordinate system; $W$ and $H$ are respectively the length and width of the panoramic image; $\theta$ and $\varphi$ are respectively the longitudinal and latitudinal angles of the simulated unit sphere, whose radius is 1; $(X_s, Y_s, Z_s)$ are the simulated unit sphere coordinates;
s404, mapping the coordinates of the outer bounding box of the rod-shaped object point cloud into the panoramic image by using the conversion relations of steps S402 and S403, and calculating the range of the circumscribed rectangle in the panoramic image.
Further, the specific process of step S5 is:
s501, labeling image data including the rod-shaped object categories and instance information in the panoramic images, and training a MaskRCNN neural network on these image data to obtain a neural network model;
s502, performing automatic instance segmentation on the panoramic image data by using the trained MaskRCNN neural network to obtain the category and instance information of the rod-shaped objects;
s503, performing overlay analysis on the panoramic image range calculated in step S4 and the category and instance information calculated in step S502 to obtain the fine category and instance information of the rod-shaped object;
s504, establishing the correspondence between the fine category and instance information of the rod-shaped object from step S503 and the accurate position information of the rod-shaped object from step S3, completing the classification.
Compared with the prior art, the technical scheme of the invention has the following advantages:
According to the method, a high-order feature set of the point cloud is extracted with feature extraction algorithms from the point cloud processing field, and road traffic rods are automatically extracted through the high-order features and the grammar rules of knowledge reasoning, thereby acquiring their position information; the automatically extracted road traffic rod point cloud is projected onto the vehicle-mounted image to crop the corresponding image block, the road traffic rod is then finely identified with a deep learning model from the image recognition field, and the position and fine-category information of the road traffic rod is finally output. Road traffic rods in large-scale urban road scene point clouds and images are thus classified automatically and finely, which improves the efficiency and degree of automation of the fine classification of traffic rods in complex urban road scenes.
Drawings
FIG. 1 is a schematic flow chart of a method provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of a concentric cylinder model in a method provided by an embodiment of the invention;
FIG. 3 is a schematic diagram of a growing algorithm in the method provided by the embodiment of the invention;
FIG. 4 is a schematic diagram of a cloud of rod points detected by a growing algorithm in a method provided by an embodiment of the invention;
FIG. 5 is a schematic diagram illustrating a conversion relationship between a point cloud coordinate of a rod and a panoramic coordinate and a simulation unit sphere coordinate in the method according to the embodiment of the present invention;
FIG. 6 is a schematic diagram illustrating a point cloud projected onto a panoramic image according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of instance segmentation in the method provided by an embodiment of the present invention;
FIG. 8 is a schematic diagram of the instance segmentation results in the method provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The method for classifying road traffic rods integrating vehicle-mounted three-dimensional laser point clouds and images in this embodiment, as shown in fig. 1, comprises the following steps.
And S1, acquiring the point cloud and the image data corresponding to the point cloud. Specifically, the point cloud is collected by a vehicle-mounted three-dimensional laser scanner, the image data is collected by a vehicle-mounted camera, and corresponding IMU/GNSS data is collected by the IMU/GNSS while the two data are collected.
S2, removing ground points in the point cloud to obtain pre-segmentation bodies, horizontally slicing the pre-segmentation bodies along the vertical direction according to a preset height to obtain continuous slice sets, performing feature constraint on the continuous slice sets through the two-dimensional features of slices and the concentric cylinder features, and taking the continuous slice sets satisfying the feature constraint as the original seed points of rod-shaped objects. The installation position of the vehicle-mounted three-dimensional laser scanner is relatively fixed, so the height of its center point, i.e. the vehicle-mounted trajectory point, above the ground is close to a fixed value $H_s$; the ground of a city is generally flat, without large elevation changes within a small range, so the local elevation variation of ground points is small.
Therefore, step S2 includes the following steps.
S201. Firstly, screening ground points in the point cloud by the following formula:

$$\mathrm{Ground}(p) \iff \left| \left( z_{tr(p)} - z_p \right) - H_s \right| < 0.25\,\mathrm{m} \;\wedge\; \Delta h_p < 0.15\,\mathrm{m}$$

wherein: $\mathrm{Ground}(p)$ denotes the condition that point $p$ is a ground point; the first term requires that the absolute difference between the height of the trajectory point above $p$, $z_{tr(p)} - z_p$, and the installation height of the three-dimensional laser scanner be smaller than a threshold of 0.25 m; the second term requires that the local elevation variation of the neighborhood points of $p$ be less than 0.15 m; $p$ is a point randomly selected from the point cloud; $z_{tr(p)}$ is the trajectory point height of $p$, obtained by matching the trajectory information of the point with the time tag recorded when the point cloud was acquired; $H_s$ is the height of the vehicle-mounted three-dimensional laser scanner above the ground; $\Delta h_p$ is the local elevation variation of the point cloud at $p$. The local elevation variation feature describes the flatness of the local neighborhood of each point; to compute it, a k-d tree is first constructed to quickly search the $K$ nearest neighbors of each point in the point cloud, then the elevations of the $K$ neighbors of each point are analyzed, and the difference between the maximum and minimum elevation is taken as the local point cloud elevation variation of point $p$, calculated by the following formula:

$$\Delta h_p = \max_{p_i, p_j \in N_p} \left( z_{p_i} - z_{p_j} \right)$$

wherein: $N_p$ is the set of the $K$ nearest neighbor points of $p$ in the point cloud; $p_i$ and $p_j$ are any two points in that set; $z_{p_i}$ is the elevation value of point $p_i$.
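For illustration only, the ground screen of step S201 can be sketched as follows in Python; the sketch assumes the point cloud as an (N, 3) NumPy array with per-point trajectory heights already matched by time tag, and uses SciPy's cKDTree for the K-nearest-neighbor search. Function and parameter names are ours, not the patent's.

```python
# A minimal sketch of the S201 ground-point screen (not the patent's code).
import numpy as np
from scipy.spatial import cKDTree

def screen_ground_points(points, traj_heights, scanner_height,
                         k=10, height_tol=0.25, flat_tol=0.15):
    """Return a boolean mask of ground points per the two S201 conditions.

    points: (N, 3) array; traj_heights: (N,) trajectory-point heights
    matched to each point by time tag; scanner_height: H_s in metres."""
    tree = cKDTree(points)                    # k-d tree over the whole cloud
    _, nbr_idx = tree.query(points, k=k)      # K nearest neighbors per point
    z = points[:, 2]
    z_nbr = z[nbr_idx]
    local_var = z_nbr.max(axis=1) - z_nbr.min(axis=1)   # Δh_p per point
    # condition 1: |(z_tr(p) - z_p) - H_s| < 0.25 m
    cond_height = np.abs((traj_heights - z) - scanner_height) < height_tol
    # condition 2: Δh_p < 0.15 m
    return cond_height & (local_var < flat_tol)
```

Points passing both conditions are the ground points removed in step S202 before the connected-component analysis.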
S202, traversing all points in the point cloud, removing all ground points from the point cloud, and performing three-dimensional connected-component analysis on the remaining points to obtain the pre-segmentation body set $B$.
S203, horizontally slicing each pre-segment in the set of pre-segments at a height of 0.25m in the vertical direction.
S204, extracting the two-dimensional maximum distance between each point in the slice and the offset between slices in the two-dimensional plane direction, and simultaneously constructing a concentric cylinder model comprising an inner cylinder and an outer cylinder, as shown in FIG. 2. Owing to its morphological characteristics, a shaft exhibits only a small offset across the continuous slice set, and under the concentric cylinder model most of the shaft points falling into the outer cylinder also fall into the inner cylinder.
Then the slices are filtered through a filtering formula based on the two-dimensional features and the concentric cylinder features, the filtering formula being:

$$\mathrm{Pole}(B_i) \iff n(S_i) > 5 \;\wedge\; d_{max}(S_i) < 0.25\,\mathrm{m} \;\wedge\; \frac{N(C_1)}{N(C_2)} \geq 0.85$$

wherein: $\mathrm{Pole}(B_i)$ denotes the condition that pre-segmentation body $B_i$ is a rod-shaped object; $n(S_i) > 5$ requires that the number of slices in the continuous slice set be greater than 5; $d_{max}(S_i) < 0.25\,\mathrm{m}$ requires that the maximum offset of the continuous slice set in the two-dimensional plane direction be less than 0.25 m; $N(C_1)/N(C_2) \geq 0.85$ requires that at least 85% of the points of the continuous slice set that fall into the outer cylinder also fall into the inner cylinder; $S_i$ is the continuous slice set of $B_i$ whose two-dimensional maximum distance is smaller than the distance threshold, where the continuous slice set refers to the set formed by adjacent slices in the vertical direction and the distance threshold of 0.2-0.5 m is determined from actually measured rod point cloud data; $d_{max}(S_i)$ is the maximum offset of the slice set $S_i$ in the two-dimensional plane direction; $C_1$ and $C_2$ are the inner and outer cylinders fitted from the continuous slice set $S_i$, $N(C_1)$ being the number of points falling inside the inner cylinder $C_1$ and $N(C_2)$ the number of points falling inside the outer cylinder $C_2$, with $r_1 / r_2 = \tau < 1$, where $\tau$ is the concentric cylinder model proportional threshold, $r_1$ the inner cylinder radius and $r_2$ the outer cylinder radius.
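As a rough illustration of how these constraints compose, the following Python sketch tests one pre-segmentation body; the patent fits the inner and outer cylinders to the continuous slice set, whereas the sketch simply centers both on the mean slice center, and the radii r1 and r2 are illustrative values rather than values given in the patent.

```python
# An illustrative reading of the S204 constraints (not the patent's code);
# the cylinders are centered on the mean slice center instead of being
# fitted, and r1/r2 are example radii with r1/r2 < 1.
import numpy as np

def is_pole(slices, r1=0.15, r2=0.6, min_slices=5,
            max_offset=0.25, inner_ratio=0.85):
    """slices: list of (M_i, 3) arrays, the consecutive 0.25 m slices."""
    if len(slices) <= min_slices:             # n(S) > 5
        return False
    centers = np.array([s[:, :2].mean(axis=0) for s in slices])
    # d_max(S) < 0.25 m: maximum 2D offset between any two slice centers
    pairwise = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    if pairwise.max() >= max_offset:
        return False
    pts = np.vstack(slices)[:, :2]
    d = np.linalg.norm(pts - centers.mean(axis=0), axis=1)
    n_inner = np.count_nonzero(d < r1)        # points inside inner cylinder C1
    n_outer = np.count_nonzero(d < r2)        # points inside outer cylinder C2
    # at least 85% of the outer-cylinder points must lie in the inner cylinder
    return n_outer > 0 and n_inner / n_outer >= inner_ratio
```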
S205, traversing the pre-segmentation body set $B$; all points of the filtered continuous slice set $S_i$ of each pre-segmentation body serve as the original seed points of the rod-shaped object segment $P_i$.
And S3, growing the original seed points of the rod-shaped object through a growing algorithm to obtain complete rod-shaped object point cloud so as to obtain accurate position information of the rod-shaped object point cloud, wherein the growing algorithm grows through the reflection intensity of each point in the point cloud.
Specifically, as shown in fig. 3, the growing algorithm in step S3 includes the following steps.
S301, dividing the body by a rod
Figure 877979DEST_PATH_IMAGE091
In a continuous slice set
Figure 983338DEST_PATH_IMAGE092
Set of primitive seed points
Figure 729577DEST_PATH_IMAGE093
Collecting the seed points
Figure 802445DEST_PATH_IMAGE093
Added to a rod-shaped object partition
Figure 659542DEST_PATH_IMAGE094
In (1).
S302, from
Figure 884987DEST_PATH_IMAGE093
Randomly taking one point as a seed point
Figure 802127DEST_PATH_IMAGE095
From the collection of seed points
Figure 581865DEST_PATH_IMAGE093
Deletion of seeds
Figure 242653DEST_PATH_IMAGE095
And in the cloud
Figure 322605DEST_PATH_IMAGE096
Searching seed point
Figure 410646DEST_PATH_IMAGE095
K nearest neighbors.
S303, diffusing from K nearest neighbors of the point to the periphery of the point cloud of the rod-shaped object, if the K nearest neighbors exist in the point domain, determining the type of the pointSub-dots
Figure 943259DEST_PATH_IMAGE095
Is less than the seed point
Figure 158471DEST_PATH_IMAGE095
Points of 20% intensity, then add these points to the seed point set
Figure 827349DEST_PATH_IMAGE093
And a rod-shaped object division body
Figure 86292DEST_PATH_IMAGE094
In and at the point cloud
Figure 106201DEST_PATH_IMAGE096
Deleting seed points
Figure 374371DEST_PATH_IMAGE095
And these diffusion seed points.
S304, circularly executing S302 and S303 until the collection
Figure 163336DEST_PATH_IMAGE093
Is empty.
S305, as shown in FIG. 4, the rod-shaped object split body finally obtained
Figure 593180DEST_PATH_IMAGE094
Is a complete point cloud of the shaft, so that accurate position information of each shaft can be obtained, as shown in fig. 4.
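A minimal sketch of this intensity-guided growth, under the reading of S301-S305 above, might look as follows in Python; the seed set Q, segment P and cloud D of the description are modeled with index sets, and deletion from the cloud is approximated by a visited set.

```python
# A sketch of the S301-S305 intensity-guided growth (our reading, not the
# patent's code); Q, P and D are modeled with Python index sets.
import numpy as np
from scipy.spatial import cKDTree

def grow_pole(points, intensity, seed_idx, k=10, tol=0.20):
    """points: (N, 3); intensity: (N,) reflection intensities;
    seed_idx: indices of the original seed points from step S205."""
    tree = cKDTree(points)
    seeds = set(int(i) for i in seed_idx)     # seed point set Q
    segment = set(seeds)                      # rod-shaped object segment P
    visited = set()                           # stands in for deletion from D
    while seeds:                              # S304: loop until Q is empty
        s = seeds.pop()                       # S302: take one seed point
        visited.add(s)
        _, nbrs = tree.query(points[s], k=k)  # K nearest neighbors of s
        for j in np.atleast_1d(nbrs):         # S303: diffuse to the neighbors
            j = int(j)
            if j in visited or j in segment:
                continue
            # accept points whose intensity differs from the seed's by < 20%
            if abs(intensity[j] - intensity[s]) < tol * abs(intensity[s]):
                seeds.add(j)
                segment.add(j)
    return np.fromiter(segment, dtype=int)    # the complete rod point indices
```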
And S4, projecting the rod-shaped object point cloud onto the panoramic image according to the corresponding relation between the point cloud and the panoramic image to obtain an image range corresponding to the rod-shaped object point cloud.
Step S4 specifically includes the following steps:
S401, as shown in fig. 5, mapping the vehicle-mounted three-dimensional laser point cloud coordinate system corresponding to the rod-shaped object point cloud to the panoramic image coordinate system through the IMU/GNSS, so as to obtain the panoramic image data of the rod-shaped object point cloud.
S402, as shown in fig. 5, calculating the coordinates of each vertex of the outer bounding box of the rod-shaped object point cloud and mapping them into the simulated unit sphere coordinate system through the following formula:

$$P_s = \frac{R^{-1}\left( P - O_c \right)}{d}$$

wherein: $O_c$ is the coordinate of the center point of the vehicle-mounted camera in the vehicle-mounted three-dimensional laser point cloud coordinate system; $R$ is the rotation matrix from the simulated unit sphere coordinate system to the vehicle-mounted three-dimensional laser point cloud coordinate system, obtained from the pitch, roll and yaw angles recorded by the IMU; $d$ is the distance from a point $P$ of the vehicle-mounted three-dimensional laser point cloud to the camera center point $O_c$; $P_s$ is the point of the simulated unit sphere in the simulated unit sphere coordinate system onto which the point of the vehicle-mounted three-dimensional laser point cloud coordinate system is projected, calculated by the formula of step S402.
S403, as shown in fig. 6, mapping the coordinates of each vertex of the outer bounding box of the rod-shaped object point cloud from the simulated unit sphere coordinate system of the previous step into the panoramic image coordinate system through the following formulas:

$$\theta = \operatorname{atan2}\left( Y_s, X_s \right), \qquad \varphi = \arccos\left( Z_s \right)$$

$$u = \frac{\theta}{2\pi} W, \qquad v = \frac{\varphi}{\pi} H$$

wherein: $(u, v)$ are the pixel coordinates in the panoramic image coordinate system; $W$ and $H$ are respectively the length and width of the panoramic image; $\theta$ and $\varphi$ are respectively the longitudinal and latitudinal angles of the simulated unit sphere, whose radius is 1; $(X_s, Y_s, Z_s)$ are the simulated unit sphere coordinates.
S404, mapping the coordinates of the outer bounding box of the rod-shaped object point cloud into the panoramic image by using the conversion relations of steps S402 and S403, and calculating the range of the circumscribed rectangle in the panoramic image.
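The projection chain S402-S404 can be sketched as below; this is a reconstruction of the described mapping under the usual equirectangular convention, with the rotation matrix R, camera center and image size supplied by the caller, and R assumed orthonormal so that its inverse is its transpose.

```python
# A reconstruction of the S402-S404 projection chain under the usual
# equirectangular convention; R, cam_center and the image size are inputs
# the caller must supply, and R is assumed orthonormal (so R^-1 = R^T).
import numpy as np

def lidar_to_pixel(P, cam_center, R, width, height):
    """Project LiDAR points P (N, 3) onto the panoramic image."""
    v = P - cam_center                           # shift to the camera center
    d = np.linalg.norm(v, axis=1, keepdims=True) # distance to camera center
    sphere = (v / d) @ R                         # S402: rows become R^T (P - O_c)/d
    Xs, Ys, Zs = sphere[:, 0], sphere[:, 1], sphere[:, 2]
    theta = np.arctan2(Ys, Xs) % (2.0 * np.pi)   # longitudinal angle in [0, 2π)
    phi = np.arccos(np.clip(Zs, -1.0, 1.0))      # latitudinal angle in [0, π]
    u = width * theta / (2.0 * np.pi)            # S403: angles -> pixels
    v_pix = height * phi / np.pi
    return np.stack([u, v_pix], axis=1)

def circumscribed_rect(pixels):
    """S404: axis-aligned rectangle around the projected box vertices."""
    u0, v0 = pixels.min(axis=0)
    u1, v1 = pixels.max(axis=0)
    return u0, v0, u1, v1
```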
S5, as shown in fig. 7, performing instance segmentation on the image by using the trained MaskRCNN to obtain instance segmentation information of the rod-shaped objects, and finely classifying the rod-shaped object point clouds obtained in step S4 by using their ranges in the panoramic image.
The specific process of step S5 is:
S501, labeling image data including the rod-shaped object categories and instance information in the panoramic images, and training a MaskRCNN neural network on these image data to obtain a neural network model.
S502, performing automatic instance segmentation on the panoramic image data by using the trained MaskRCNN neural network to obtain the category and instance information of the rod-shaped objects.
S503, performing overlay analysis on the panoramic image range calculated in step S4 and the category and instance information calculated in step S502 to obtain the fine category and instance information of the rod-shaped object.
S504, establishing the correspondence between the fine category and instance information of the rod-shaped object from step S503 and the accurate position information of the rod-shaped object from step S3, completing the classification.
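A minimal sketch of the overlay analysis of step S503 is given below; the patent only states that the projected range and the instance segmentation results are superimposed, so the intersection-over-union matching and the data layout here are our own assumptions.

```python
# A sketch of the S503 overlay (IoU matching and data layout are assumptions).
def overlay(rect, instances, min_iou=0.5):
    """rect: (u0, v0, u1, v1) projected from the rod point cloud (step S404);
    instances: dicts with 'box' = (u0, v0, u1, v1) plus category/instance info."""
    best, best_iou = None, min_iou
    for inst in instances:
        b = inst['box']
        iw = max(0.0, min(rect[2], b[2]) - max(rect[0], b[0]))
        ih = max(0.0, min(rect[3], b[3]) - max(rect[1], b[1]))
        inter = iw * ih
        union = ((rect[2] - rect[0]) * (rect[3] - rect[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        iou = inter / union if union > 0 else 0.0
        if iou > best_iou:              # keep the best-overlapping instance
            best, best_iou = inst, iou
    return best                         # None means the detection is filtered out
```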
As shown in FIG. 8, instance segmentation of the panoramic image yields the instance information $I_i$ of each road traffic rod-shaped object $P_i$ in the image, including the category and number of the object; the image range projected by the road traffic rod-shaped object in step S404 is superimposed with the panorama instance segmentation information to filter out some erroneous segmentation results, so as to obtain the accurate range and instance information $I_i$ of the road traffic rod-shaped object. Combining this with the position information $L_i$ of the road traffic rod-shaped object obtained in step S305, the refined instance information $I_i$ and position information $L_i$ of the rod-shaped object $P_i$ of interest are obtained; finally both are displayed in the image, and the fine classification of the rod-shaped objects is completed.
According to the method, a high-order feature set of the point cloud is extracted with feature extraction algorithms from the point cloud processing field, and road traffic rods are automatically extracted through the high-order features and the grammar rules of knowledge reasoning, thereby acquiring their position information; the automatically extracted road traffic rod point cloud is projected onto the vehicle-mounted image to crop the corresponding image block, the road traffic rod is then finely identified with a deep learning model from the image recognition field, and the position and fine-category information of the road traffic rod is finally output. Road traffic rods in large-scale urban road scene point clouds and images are thus classified automatically and finely, which improves the efficiency and degree of automation of the fine classification of traffic rods in complex urban road scenes.
It should be understood that the above examples are given only for clarity of illustration and are not intended to limit the embodiments. Other variations or modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaust all embodiments here, and obvious variations or modifications derived therefrom remain within the protection scope of the invention.

Claims (5)

1. The road traffic rod object classification method integrating the vehicle-mounted three-dimensional laser point cloud and the image is characterized by comprising the following steps of:
s1, acquiring point clouds and image data corresponding to the point clouds;
s2, removing ground points in the point cloud to obtain a pre-segmentation body, horizontally slicing the pre-segmentation body along the vertical direction according to a preset height to obtain a continuous slice set, performing feature constraint on the continuous slice set through two-dimensional features of slices and concentric cylinder features, and taking the continuous slice set which meets the feature constraint as a rod-shaped object original seed point;
s3, growing the original seed points of the rod-shaped objects through a growing algorithm to obtain complete rod-shaped object point clouds so as to obtain accurate position information of the rod-shaped object point clouds;
s4, projecting the rod-shaped object point cloud to the panoramic image according to the corresponding relation between the point cloud and the panoramic image to obtain an image range corresponding to the rod-shaped object point cloud;
and s5, performing instance segmentation on the image by using the trained MaskRCNN to obtain instance segmentation information of the rod-shaped objects, and finely classifying the rod-shaped objects by using the ranges of the rod-shaped object point clouds obtained in step S4 in the panoramic image.
2. The method according to claim 1, wherein step S2 comprises the following steps,
s201, screening ground points in the point cloud through the following formula:

$$\mathrm{Ground}(p) \iff \left| \left( z_{tr(p)} - z_p \right) - H_s \right| < 0.25\,\mathrm{m} \;\wedge\; \Delta h_p < 0.15\,\mathrm{m}$$

wherein: $\mathrm{Ground}(p)$ denotes the condition that point $p$ is a ground point; the first term requires that the absolute difference between the height of the trajectory point above $p$, $z_{tr(p)} - z_p$, and the installation height of the three-dimensional laser scanner be smaller than a threshold of 0.25 m; the second term requires that the local elevation variation of the neighborhood points of $p$ be less than 0.15 m; $p$ is a point randomly selected from the point cloud; $z_{tr(p)}$ is the trajectory point height of $p$, obtained by matching the trajectory information of the point with the time tag recorded during point cloud acquisition; $H_s$ is the height of the vehicle-mounted three-dimensional laser scanner above the ground; $\Delta h_p$ is the local elevation variation of the point cloud at $p$, calculated by the following formula:

$$\Delta h_p = \max_{p_i, p_j \in N_p} \left( z_{p_i} - z_{p_j} \right)$$

wherein: $N_p$ is the set of the $K$ nearest neighbor points of $p$ in the point cloud; $p_i$ and $p_j$ are any two points in the point set $N_p$; $z_{p_i}$ is the elevation value of point $p_i$;
s202, traversing all points in the point cloud, and removing all ground points to obtain a pre-segmentation body set;
s203, horizontally slicing each pre-segmentation body in the pre-segmentation body set at a height of 0.25m in the vertical direction;
s204, extracting the two-dimensional maximum distance between each point in the slice and the offset between slices in the two-dimensional plane direction, simultaneously constructing a concentric cylinder model comprising an inner cylinder and an outer cylinder, and then filtering the slices through a filtering formula based on the two-dimensional features and the concentric cylinder features, the filtering formula being:

$$\mathrm{Pole}(B_i) \iff n(S_i) > 5 \;\wedge\; d_{max}(S_i) < 0.25\,\mathrm{m} \;\wedge\; \frac{N(C_1)}{N(C_2)} \geq 0.85$$

wherein: $\mathrm{Pole}(B_i)$ denotes the condition that pre-segmentation body $B_i$ is a rod-shaped object; $n(S_i) > 5$ requires that the number of slices in the continuous slice set be greater than 5; $d_{max}(S_i) < 0.25\,\mathrm{m}$ requires that the maximum offset of the continuous slice set in the two-dimensional plane direction be less than 0.25 m; $N(C_1)/N(C_2) \geq 0.85$ requires that at least 85% of the points of the continuous slice set falling into the outer cylinder also fall into the inner cylinder; $S_i$ is the vertical-direction continuous slice set of $B_i$ whose two-dimensional maximum distance is smaller than the distance threshold, the distance threshold being 0.2-0.5 m; $d_{max}(S_i)$ is the maximum offset of the continuous slice set $S_i$ in the two-dimensional plane direction; $C_1$ and $C_2$ are the inner and outer cylinders fitted from the continuous slice set $S_i$, $N(C_1)$ being the number of points falling inside the inner cylinder $C_1$ and $N(C_2)$ the number of points falling inside the outer cylinder $C_2$, with $r_1 / r_2 = \tau < 1$, where $\tau$ is the concentric cylinder model proportional threshold, $r_1$ the inner cylinder radius and $r_2$ the outer cylinder radius;
s205, traversing the pre-segmentation body set $B$; all points of the filtered continuous slice set $S_i$ of each pre-segmentation body serve as the original seed points of the rod-shaped object segment $P_i$.
3. The method of claim 2, wherein the growing algorithm of step S3 comprises the steps of:
s301, taking the original seed point set $Q$ from the continuous slice set $S_i$ of a rod-shaped object pre-segmentation body, and adding the seed point set $Q$ to the rod-shaped object segment $P_i$;
s302, randomly taking one point from $Q$ as the seed point $s$, deleting $s$ from the seed point set $Q$, and searching the $K$ nearest neighbors of $s$ in the point cloud $D$;
s303, diffusing from the $K$ nearest neighbors of the seed point to the surroundings of the rod-shaped object point cloud; if there are points in the $K$-nearest-neighbor domain whose reflection intensity differs from that of the seed point $s$ by less than 20%, adding these points to the seed point set $Q$ and to the rod-shaped object segment $P_i$, and deleting the seed point $s$ and these diffusion seed points from the point cloud $D$;
s304, executing s302 and s303 circularly until the set $Q$ is empty;
s305, the finally obtained rod-shaped object segment $P_i$ is the complete point cloud of the rod-shaped object, so that accurate position information of each rod-shaped object can be obtained.
4. The method according to claim 1, wherein step S4 comprises the following steps;
s401, mapping an on-board three-dimensional laser point cloud coordinate system corresponding to the rod-shaped object point cloud to a panoramic image coordinate system through IMU/GNSS to obtain panoramic image data of the rod-shaped object point cloud;
s402, calculating the coordinates of each vertex of the outer bounding box of the rod-shaped object point cloud and mapping them into the simulated unit sphere coordinate system through the following formula:

$$P_s = \frac{R^{-1}\left( P - O_c \right)}{d}$$

wherein: $O_c$ is the coordinate of the camera center point in the vehicle-mounted three-dimensional laser point cloud coordinate system; $R$ is the rotation matrix from the simulated unit sphere coordinate system to the vehicle-mounted three-dimensional laser point cloud coordinate system, obtained from the pitch, roll and yaw angles recorded by the IMU; $d$ is the distance from a point $P$ of the vehicle-mounted three-dimensional laser point cloud to the camera center point $O_c$; $P_s$ is the point of the simulated unit sphere in the simulated unit sphere coordinate system onto which the point of the vehicle-mounted three-dimensional laser point cloud coordinate system is projected, calculated by the formula of step S402;
s403, mapping the coordinates of each vertex of the outer bounding box of the rod-shaped object point cloud from the simulated unit sphere coordinate system of the previous step into the panoramic image coordinate system through the following formulas:

$$\theta = \operatorname{atan2}\left( Y_s, X_s \right), \qquad \varphi = \arccos\left( Z_s \right)$$

$$u = \frac{\theta}{2\pi} W, \qquad v = \frac{\varphi}{\pi} H$$

wherein: $(u, v)$ are the pixel coordinates in the panoramic image coordinate system; $W$ and $H$ are respectively the length and width of the panoramic image; $\theta$ and $\varphi$ are respectively the longitudinal and latitudinal angles of the simulated unit sphere, whose radius is 1; $(X_s, Y_s, Z_s)$ are the simulated unit sphere coordinates;
s404, mapping the coordinates of the outer bounding box of the rod-shaped object point cloud into the panoramic image by using the conversion relations of steps S402 and S403, and calculating the range of the circumscribed rectangle in the panoramic image.
5. The method according to claim 1, wherein the specific process of step S5 is as follows:
s501, labeling image data including the rod-shaped object categories and instance information in the panoramic images, and training a MaskRCNN neural network on these image data to obtain a neural network model;
s502, performing automatic instance segmentation on the panoramic image data by using the trained MaskRCNN neural network to obtain the category and instance information of the rod-shaped objects;
s503, performing overlay analysis on the panoramic image range calculated in step S4 and the category and instance information calculated in step S502 to obtain the fine category and instance information of the rod-shaped object;
s504, establishing the correspondence between the fine category and instance information of the rod-shaped object from step S503 and the accurate position information of the rod-shaped object from step S3, completing the classification.
CN202110852461.1A 2021-07-27 2021-07-27 Road traffic rod object classification method integrating vehicle-mounted three-dimensional laser point cloud and image Active CN113313081B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110852461.1A CN113313081B (en) 2021-07-27 2021-07-27 Road traffic rod object classification method integrating vehicle-mounted three-dimensional laser point cloud and image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110852461.1A CN113313081B (en) 2021-07-27 2021-07-27 Road traffic rod object classification method integrating vehicle-mounted three-dimensional laser point cloud and image

Publications (2)

Publication Number Publication Date
CN113313081A true CN113313081A (en) 2021-08-27
CN113313081B CN113313081B (en) 2021-11-09

Family

ID=77382356

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110852461.1A Active CN113313081B (en) 2021-07-27 2021-07-27 Road traffic rod object classification method integrating vehicle-mounted three-dimensional laser point cloud and image

Country Status (1)

Country Link
CN (1) CN113313081B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116310849A (en) * 2023-05-22 2023-06-23 深圳大学 Tree point cloud monomerization extraction method based on three-dimensional morphological characteristics

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107516077A (en) * 2017-08-17 2017-12-26 武汉大学 Traffic sign information extracting method based on laser point cloud and image data fusion
US20190066283A1 (en) * 2017-08-23 2019-02-28 General Electric Company Three-dimensional modeling of an object
CN111815776A (en) * 2020-02-04 2020-10-23 山东水利技师学院 Three-dimensional building fine geometric reconstruction method integrating airborne and vehicle-mounted three-dimensional laser point clouds and streetscape images
CN111899302A (en) * 2020-06-23 2020-11-06 武汉闻道复兴智能科技有限责任公司 Point cloud data-based visual detection method, device and system
CN112446343A (en) * 2020-12-07 2021-03-05 苏州工业园区测绘地理信息有限公司 Vehicle-mounted point cloud road rod-shaped object machine learning automatic extraction method integrating multi-scale features

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107516077A (en) * 2017-08-17 2017-12-26 武汉大学 Traffic sign information extracting method based on laser point cloud and image data fusion
US20190066283A1 (en) * 2017-08-23 2019-02-28 General Electric Company Three-dimensional modeling of an object
CN111815776A (en) * 2020-02-04 2020-10-23 山东水利技师学院 Three-dimensional building fine geometric reconstruction method integrating airborne and vehicle-mounted three-dimensional laser point clouds and streetscape images
CN111899302A (en) * 2020-06-23 2020-11-06 武汉闻道复兴智能科技有限责任公司 Point cloud data-based visual detection method, device and system
CN112446343A (en) * 2020-12-07 2021-03-05 苏州工业园区测绘地理信息有限公司 Vehicle-mounted point cloud road rod-shaped object machine learning automatic extraction method integrating multi-scale features

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FASHUAI LI等: "Pole-Like Road Furniture Detection and Decomposition in Mobile Laser Scanning Data Based on Spatial Relations", 《REMOTE SENSING》 *
朱岩彬等: "A hierarchical method for extracting pole-like objects from vehicle-mounted laser point clouds", 《地理信息世界》(Geomatics World) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116310849A (en) * 2023-05-22 2023-06-23 深圳大学 Tree point cloud monomerization extraction method based on three-dimensional morphological characteristics
CN116310849B (en) * 2023-05-22 2023-09-19 深圳大学 Tree point cloud monomerization extraction method based on three-dimensional morphological characteristics

Also Published As

Publication number Publication date
CN113313081B (en) 2021-11-09

Similar Documents

Publication Publication Date Title
CN110263717B (en) Method for determining land utilization category of street view image
CN106022381B (en) Automatic extraction method of street lamp pole based on vehicle-mounted laser scanning point cloud
Chen et al. Vehicle detection in high-resolution aerial images via sparse representation and superpixels
Wu et al. Rapid localization and extraction of street light poles in mobile LiDAR point clouds: A supervoxel-based approach
Ke et al. A review of methods for automatic individual tree-crown detection and delineation from passive remote sensing
Alidoost et al. A CNN-based approach for automatic building detection and recognition of roof types using a single aerial image
CN108734143A (en) A kind of transmission line of electricity online test method based on binocular vision of crusing robot
CN109858450B (en) Ten-meter-level spatial resolution remote sensing image town extraction method and system
CN106610969A (en) Multimodal information-based video content auditing system and method
Wang et al. Bottle detection in the wild using low-altitude unmanned aerial vehicles
CN111898688A (en) Airborne LiDAR data tree species classification method based on three-dimensional deep learning
CN108509950B (en) Railway contact net support number plate detection and identification method based on probability feature weighted fusion
CN112241692B (en) Channel foreign matter intelligent detection and classification method based on aerial image super-pixel texture
Kumar et al. A deep learning paradigm for detection of harmful algal blooms
CN113313081B (en) Road traffic rod object classification method integrating vehicle-mounted three-dimensional laser point cloud and image
CN113313107A (en) Intelligent detection and identification method for multiple types of diseases on cable surface of cable-stayed bridge
Zheng et al. Single shot multibox detector for urban plantation single tree detection and location with high-resolution remote sensing imagery
CN113033386B (en) High-resolution remote sensing image-based transmission line channel hidden danger identification method and system
CN108765446B (en) Power line point cloud segmentation method and system based on random field and random forest
Li et al. The research on traffic sign recognition based on deep learning
CN111597939B (en) High-speed rail line nest defect detection method based on deep learning
CN112200248B (en) Point cloud semantic segmentation method, system and storage medium based on DBSCAN clustering under urban road environment
CN109117841B (en) Scene text detection method based on stroke width transformation and convolutional neural network
Gao et al. Intelligent crack damage detection system in shield tunnel using combination of retinanet and optimal adaptive selection
CN110889418A (en) Gas contour identification method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant