CN113313081B - Road traffic rod object classification method integrating vehicle-mounted three-dimensional laser point cloud and image - Google Patents
- Publication number
- CN113313081B (application CN202110852461.1A)
- Authority
- CN
- China
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F18/24—Pattern recognition; Classification techniques
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06V10/255—Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/64—Three-dimensional objects
Abstract
The invention provides a road traffic rod classification method fusing vehicle-mounted three-dimensional laser point clouds and images, which comprises the following steps: obtaining point cloud and image data; removing ground points in the point cloud to obtain pre-segmentation bodies, slicing them, applying feature constraints to the continuous slice sets, and taking the continuous slice sets that satisfy the constraints as the original seed points of rod-shaped objects; growing the original seed points with a growing algorithm to obtain the complete rod-shaped object point cloud and thereby its accurate position information; projecting the rod-shaped object point cloud onto the panoramic image according to the correspondence between the point cloud and the panoramic image to obtain the image range corresponding to the point cloud; and carrying out instance segmentation on the image with a trained Mask R-CNN to obtain instance segmentation information of the rod-shaped objects, and finely classifying them using the range of the rod-shaped object point cloud in the panoramic image. The road traffic rod classification method fusing vehicle-mounted three-dimensional laser point clouds and images realizes accurate positioning and fine classification of the rod-shaped objects in the point cloud.
Description
Technical Field
The invention belongs to the field of image recognition and mapping, and particularly relates to a road traffic rod classification method fusing vehicle-mounted three-dimensional laser point cloud and images.
Background
With the rapid advance of urbanization and the wide adoption of private automobiles in recent years, urban traffic congestion has become increasingly serious, greatly disrupting the daily life of urban residents and becoming an important issue in urban management. To address it, developed countries in Europe and America have developed and deployed intelligent transportation systems, which have greatly relieved urban congestion. The road traffic rod is an important component of an intelligent transportation system, and its fine classification plays an important role in constructing such a system. In addition, the road traffic rod serves as an important mounting point for 5G base stations, so its automatic identification is also significant for 5G base-station layout. Vehicle-mounted three-dimensional laser scanning systems are widely used for three-dimensional data acquisition of urban road scenes because of their good maneuverability and high data acquisition precision, and the high-precision three-dimensional point clouds and vehicle-mounted images they collect provide strong support for automatic identification of road traffic rods. However, urban road scenes are complex, contain many object types, and suffer from severe occlusion, noise, and adhesion, all of which seriously impact the accuracy of automatic road traffic rod identification.
The fine classification of road traffic rods at the present stage depends heavily on manual interpretation, so the fine automatic classification of road traffic rods in the vehicle-mounted laser point clouds and vehicle-mounted images of large-scale urban scenes is of great significance for constructing intelligent traffic systems and for 5G base-station layout.
At present, research on road traffic rod identification mainly falls into three categories: methods based on knowledge reasoning, methods based on traditional machine learning, and methods based on deep neural networks. Knowledge-reasoning methods first extract high-order features of the point cloud or image, then manually construct a series of feature constraint rules according to the characteristics of the target observed in practice, and finally identify the point cloud or image according to the extracted features and rules. Brenner et al. first proposed a concentric cylinder model for detecting traffic rods based on their characteristics; Lehtomäki et al. improved the concentric cylinder detection method with scan-line segmentation preprocessing; and Cabo et al. extended and accelerated the method by constructing voxels. Bremer et al. proposed a method based on eigenvalue vectors to extract traffic rods. Pu et al. proposed a method that slices the upper end of the shaft to extract traffic rods and classifies them with shape information of the attachments. Yang et al. extract features based on supervoxels and formulate semantic rules to identify road traffic rods. The disadvantage of rule-based approaches is that many rules must be designed when there are many target classes and their portability is poor; the advantage is that they do not need many training samples.
Methods based on traditional machine learning first extract low-order or high-order features of the point cloud or image, then input the extracted features and corresponding labels into a traditional machine learning operator for training, for example a support vector machine or a random forest, and finally use the trained model to automatically identify point clouds or images from their extracted features. Deep learning-based methods input the point cloud into a pre-designed deep neural network for training and classify objects with it. Representative networks include voxel-based convolutional neural networks, convolutional neural networks based on multi-view projection images, and PointNet: the first is trained by inputting pre-generated voxels into a three-dimensional convolutional neural network; the second is trained by inputting images generated from the point cloud at different viewpoints into a convolutional neural network; and, unlike the first two, PointNet is trained by inputting the point cloud directly and constructing a symmetric function to normalize the point cloud features into the same space. The advantage of deep learning-based methods is that no high-level features need to be designed; the disadvantages are that they require a large number of training samples and laborious training-sample labeling.
In view of this, a method for classifying road traffic rods by fusing vehicle-mounted three-dimensional laser point clouds and images is urgently needed.
Disclosure of Invention
Therefore, the technical problem to be solved by the invention is to provide a road traffic rod classification method fusing vehicle-mounted three-dimensional laser point clouds and images.
The road traffic rod classification method fusing vehicle-mounted three-dimensional laser point cloud and images of the invention comprises the following steps:
s1, acquiring point clouds and image data corresponding to the point clouds;
s2, removing ground points in the point cloud to obtain a pre-segmentation body, horizontally slicing the pre-segmentation body along the vertical direction according to a preset height to obtain a continuous slice set, performing feature constraint on the continuous slice set through two-dimensional features of slices and concentric cylinder features, and taking the continuous slice set which meets the feature constraint as a rod-shaped object original seed point;
s3, growing the original seed points of the rod-shaped objects through a growing algorithm to obtain complete rod-shaped object point clouds so as to obtain accurate position information of the rod-shaped object point clouds;
s4, projecting the rod-shaped object point cloud to the panoramic image according to the corresponding relation between the point cloud and the panoramic image to obtain an image range corresponding to the rod-shaped object point cloud;
and S5, carrying out instance segmentation on the image with the trained Mask R-CNN to obtain instance segmentation information of the rod-shaped object, and finely classifying the rod-shaped object using the range in the panoramic image of the rod-shaped object point cloud obtained in step S4.
Further, step S2 specifically includes the following steps,
s201, screening ground points in the point cloud through the following formula:

    p ∈ Ground  ⇔  |Z_tra(p) − z(p) − H_s| < 0.25 m  and  ΔZ(p) < 0.15 m

wherein:
|Z_tra(p) − z(p) − H_s| < 0.25 m means that the absolute value of the difference between the depth of point p below its track point and the installation height H_s of the three-dimensional laser scanner is smaller than a threshold, the threshold range being 0.25 m;
ΔZ(p) < 0.15 m means that the local elevation change of the neighborhood points of p is less than 0.15 m;
Z_tra(p) is the track point height of point p, obtained by matching the trajectory information recorded during point cloud acquisition with the time tag;
ΔZ(p) is the local elevation change value of point p, calculated by the following formula:

    ΔZ(p) = max over q_i, q_j ∈ N_k(p) of ( z(q_i) − z(q_j) )

wherein N_k(p) is the set of the k nearest neighbor points of p, q_i and q_j are any two points in that set, and z(q) is the elevation value of point q;
s202, traversing all points in the point cloud, and removing all ground points to obtain a pre-segmentation body set;
s203, horizontally slicing each pre-segmentation body in the pre-segmentation body set at a height of 0.25m in the vertical direction;
s204, extracting the two-dimensional maximum distance between the points in each slice and the offset between the slices in the two-dimensional plane direction, simultaneously constructing a concentric cylinder model comprising an inner cylinder and an outer cylinder, and then filtering the slices through a filtering formula based on the two-dimensional features and the concentric cylinder features, the filtering formula being:

    Seed(S)  ⇔  O_max(S) < 0.25 m  and  D_max(S) < d_t  and  N_1(S) / N_2(S) ≥ τ

wherein:
O_max(S) < 0.25 m means that the maximum offset of the continuous slice set S of pre-segmentation body P in the two-dimensional plane direction is less than 0.25 m;
N_1(S) / N_2(S) ≥ τ means that at least 85% of the points in the continuous slice set fall into the inner cylinder;
S is a vertically continuous slice set of pre-segmentation body P whose two-dimensional maximum distance is smaller than the distance threshold d_t, the distance threshold being 0.2-0.5 m;
C_1 and C_2 are the inner and outer cylinders fitted on the basis of the continuous slice set S, N_1(S) is the number of points falling inside the inner cylinder C_1, N_2(S) is the number of points falling inside the outer cylinder C_2, τ < 1 is the concentric cylinder model proportional threshold, r1 is the inner cylinder radius, and r2 is the outer cylinder radius;
s205, traversing the pre-segmentation body set, and taking the filtered continuous slice set S of each pre-segmentation body P as the original seed points of the rod-shaped object pre-segmentation body P.
Further, the growing algorithm in step S3 includes the following steps:
s301, taking the original seed point set S from the continuous slice set of a rod-shaped object pre-segmentation body P, and adding the seed point set S to the rod-shaped object segmentation body R;
s302, randomly taking one point from S as seed point s, deleting seed point s from the seed point set S, and searching the K nearest neighbors of seed point s in the point cloud;
s303, diffusing from the K nearest neighbors of the point outward into the rod-shaped object point cloud: if there are points among the K nearest neighbors whose reflection intensity differs from that of seed point s by less than 20%, these points are added to the seed point set S and the rod-shaped object segmentation body R, and seed point s and these diffusion seed points are deleted from the point cloud;
s305, the finally obtained rod-shaped object segmentation body R is the complete point cloud of the rod-shaped object, so that accurate position information of each rod-shaped object can be obtained.
Further, step S4 specifically includes the following steps;
s401, mapping an on-board three-dimensional laser point cloud coordinate system corresponding to the rod-shaped object point cloud to a panoramic image coordinate system through IMU/GNSS to obtain panoramic image data of the rod-shaped object point cloud;
s402, calculating the coordinates of each vertex of the outer bounding box of the rod-shaped object point cloud and mapping them into the simulated unit sphere coordinate system through the following formula:

    P_s = R⁻¹ · (P_L − X_C) / d,   d = ‖P_L − X_C‖

wherein:
X_C is the coordinate of the camera center point in the vehicle-mounted three-dimensional laser point cloud coordinate system;
R is the rotation matrix converting the simulated unit sphere coordinate system to the vehicle-mounted three-dimensional laser point cloud coordinate system, obtained through the pitch angle, roll angle and yaw angle recorded by the IMU;
d is the distance from a point P_L in the vehicle-mounted three-dimensional laser point cloud to the camera center point X_C;
P_s is the point on the simulated unit sphere, in the simulated unit sphere coordinate system, to which the point P_L is projected, calculated through the formula in step S402;
s403, mapping the coordinates of each vertex of the outer bounding box of the rod point cloud from the simulated unit sphere coordinate system of the previous step into the panoramic image coordinate system through the following formula:

    x_p = W · θ / (2π),   y_p = H · φ / π

wherein:
(x_p, y_p) are the pixel coordinates in the panoramic image coordinate system, W and H are respectively the width and height of the panoramic image, and φ and θ correspond respectively to the latitudinal angle and the longitudinal angle of the simulated unit sphere, whose radius is 1;
s404, mapping the coordinates of the outer bounding box of the rod point cloud into the panoramic image by using the conversion relations in steps S402 and S403, and calculating the range of the circumscribed rectangle in the panoramic image.
Further, the specific process of step S5 is:
s501, marking image data including rod-shaped object category and instance information in the panoramic images, and training a Mask R-CNN neural network on the marked data to obtain a neural network model;
s502, carrying out automatic instance segmentation on the panoramic image data with the trained Mask R-CNN neural network to obtain the category and instance information of the rod-shaped objects;
s503, performing overlay analysis on the panoramic image range calculated in step S4 and the category and instance information calculated in step S502 to obtain the fine category and instance information of the rod-shaped object;
s504, establishing the correspondence between the fine category and instance information of the rod in step S503 and the accurate position information of the rod in step S3, finishing the classification.
Compared with the prior art, the technical scheme of the invention has the following advantages:
according to the method, a point cloud high-order characteristic set is extracted by a characteristic extraction algorithm in the field of point cloud processing, road traffic rods are automatically extracted through high-order characteristics and grammar rules in knowledge inference, position information of the road traffic rods is further acquired, the point cloud of the road traffic rods which are automatically extracted is projected to a vehicle-mounted image to intercept corresponding image blocks, then the road traffic rods are finely identified by a deep learning model in the field of image identification, and the position and fine category information of the road traffic rods are finally output, so that the road traffic rods in large-range urban road scene point cloud and images are automatically and finely classified, and the efficiency and the automation degree of the fine classification process of the traffic rods in complex urban road scenes are improved.
Drawings
FIG. 1 is a schematic flow chart of a method provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of a concentric cylinder model in a method provided by an embodiment of the invention;
FIG. 3 is a schematic diagram of a growing algorithm in the method provided by the embodiment of the invention;
FIG. 4 is a schematic diagram of a cloud of rod points detected by a growing algorithm in a method provided by an embodiment of the invention;
FIG. 5 is a schematic diagram of the conversion relationships among the rod point cloud coordinates, the simulated unit sphere coordinates, and the panorama coordinates in the method according to the embodiment of the present invention;
FIG. 6 is a schematic diagram illustrating a point cloud projected onto a panoramic image according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of instance segmentation in the method provided by an embodiment of the present invention;
fig. 8 is a schematic diagram of the instance segmentation result in the method according to the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The method for classifying road traffic rods fusing vehicle-mounted three-dimensional laser point clouds and images in this embodiment, as shown in fig. 1, comprises the following steps.
And S1, acquiring the point cloud and the image data corresponding to the point cloud. Specifically, the point cloud is collected by a vehicle-mounted three-dimensional laser scanner, the image data is collected by a vehicle-mounted camera, and corresponding IMU/GNSS data is collected by the IMU/GNSS while the two data are collected.
S2, removing ground points in the point cloud to obtain a pre-segmentation body, horizontally slicing the pre-segmentation body along the vertical direction according to a preset height to obtain a continuous slice set, performing feature constraint on the continuous slice set through two-dimensional features of slices and concentric cylinder features, and taking the continuous slice set which meets the feature constraint as the rod-shaped object original seed points. The installation position of the vehicle-mounted three-dimensional laser scanner is relatively fixed, so the height of its center point, i.e., the vehicle-mounted track point, above the ground is close to a fixed value H_s. The ground of a city is generally flat, and because the elevation change within a small range is small, the local elevation change of ground points is small.
Therefore, step S2 includes the following steps.
S201. Firstly, ground points in the point cloud are screened by the following formula:

    p ∈ Ground  ⇔  |Z_tra(p) − z(p) − H_s| < 0.25 m  and  ΔZ(p) < 0.15 m

wherein the two conjuncts are the two conditions for p to be a ground point: |Z_tra(p) − z(p) − H_s| < 0.25 m means that the absolute value of the difference between the depth of the point below its track point and the installation height of the three-dimensional laser scanner is smaller than a threshold, the threshold range being 0.25 m; ΔZ(p) < 0.15 m means that the local elevation change of the neighborhood points is less than 0.15 m; p is a point randomly selected from the point cloud; Z_tra(p) is the track point height of point p, obtained by matching the trajectory information recorded during point cloud acquisition with the time tag; H_s is the height of the vehicle-mounted three-dimensional laser scanner from the ground; ΔZ(p) is the local elevation change value of point p. The local elevation change feature of the point cloud describes the flatness of the local neighborhood of each point. To calculate this feature, a k-d tree is first constructed to search the k nearest neighbor points of each point in the point cloud; then the elevations of the neighbor points of each point are analyzed, and the difference between the maximum and the minimum elevation value is the local elevation change feature of the point, calculated by the following formula:

    ΔZ(p) = max over q_i, q_j ∈ N_k(p) of ( z(q_i) − z(q_j) )

wherein: N_k(p) is the set of the k nearest neighbor points of point p in the point cloud; q_i and q_j are any two points in that set; z(q) is the elevation value of point q.
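The two ground conditions of S201 can be sketched as follows. This is an illustrative reconstruction, not the patent's code: the function names, the neighborhood size k, and the per-point trajectory-height array `traj_z` are our assumptions; only the 0.25 m and 0.15 m thresholds come from the text.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_elevation_change(points, k=10):
    """Elevation spread max(z) - min(z) over each point's k nearest neighbours."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=min(k, len(points)))
    z = points[:, 2][idx]                       # (N, k) neighbour elevations
    return z.max(axis=1) - z.min(axis=1)

def ground_mask(points, traj_z, mount_height, h_tol=0.25, dz_tol=0.15, k=10):
    """Boolean mask: True where a point passes both ground conditions of S201."""
    height_ok = np.abs((traj_z - points[:, 2]) - mount_height) < h_tol
    flat_ok = local_elevation_change(points, k=k) < dz_tol
    return height_ok & flat_ok
```

Flat points roughly one mounting height below the trajectory pass both tests; a point partway up a pole fails the height test and usually the flatness test as well.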
S202, traversing all points in the point cloud, removing all ground points from the point cloud, performing three-dimensional connected-component clustering on the remaining points, and then obtaining the pre-segmentation body set P.
S203, horizontally slicing each pre-segment in the set of pre-segments at a height of 0.25m in the vertical direction.
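The 0.25 m slicing of S203 can be sketched as a simple elevation binning; the floor-binning implementation and function name are our assumptions, only the 0.25 m slice height is from the text.

```python
import numpy as np

def horizontal_slices(points, slice_height=0.25):
    """Cut one pre-segmentation body into horizontal slices along z.

    Returns a list of index arrays, one per slice_height elevation bin,
    ordered from the lowest slice upward (empty bins yield empty arrays).
    """
    z = points[:, 2]
    bins = np.floor((z - z.min()) / slice_height).astype(int)
    return [np.flatnonzero(bins == b) for b in range(bins.max() + 1)]
```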
S204, extracting the two-dimensional maximum distance between each point in the slice and the offset between the slices in the two-dimensional plane direction, and simultaneously constructing a concentric cylinder model comprising an inner cylinder and an outer cylinder, wherein the concentric cylinder model is shown in FIG. 2. Due to its morphological characteristics, the shaft exhibits a small amount of offset in the set of successive slices, while the number of points of the shaft falling in the inner cylinder will be greater than in the outer cylinder, based on the concentric cylinder model.
Then, the slices are filtered through a filtering formula based on the two-dimensional features and the concentric cylinder features, the filtering formula being:

    Seed(S)  ⇔  O_max(S) < 0.25 m  and  D_max(S) < d_t  and  N_1(S) / N_2(S) ≥ τ

wherein: O_max(S) < 0.25 m means that the maximum offset of the continuous slice set S of pre-segmentation body P in the two-dimensional plane direction is less than 0.25 m; N_1(S) / N_2(S) ≥ τ means that at least 85% of the points in the continuous slice set fall into the inner cylinder; S is a continuous slice set of pre-segmentation body P whose two-dimensional maximum distance is smaller than the distance threshold d_t, where a continuous slice set refers to a set formed by slices adjacent in the vertical direction, and the distance threshold is 0.2-0.5 m, determined from actually measured rod point cloud data; O_max(S) is the maximum offset of slice set S in the two-dimensional plane direction; C_1 and C_2 are the inner and outer cylinders fitted on the basis of the continuous slice set S; N_1(S) is the number of points falling inside the inner cylinder C_1 and N_2(S) the number of points falling inside the outer cylinder C_2; τ < 1 is the concentric cylinder model proportional threshold; r1 is the inner cylinder radius and r2 is the outer cylinder radius.
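The concentric-cylinder proportion test above can be illustrated as follows; the radii, the center point, and the helper names are assumptions for the sketch — only the 85% proportion comes from the text.

```python
import numpy as np

def inner_cylinder_ratio(points_xy, center, r1, r2):
    """N1 / N2: fraction of the slice-set points inside the inner cylinder C1,
    relative to those inside the outer cylinder C2 (C2 contains C1)."""
    d = np.linalg.norm(np.asarray(points_xy) - center, axis=1)
    n_outer = np.count_nonzero(d <= r2)
    return np.count_nonzero(d <= r1) / max(n_outer, 1)

def passes_concentric_test(points_xy, center, r1=0.3, r2=0.6, tau=0.85):
    """True when at least tau of the cylinder points fall inside C1."""
    return inner_cylinder_ratio(points_xy, center, r1, r2) >= tau
```

A tight pole-like cluster concentrates in the inner cylinder and passes; a diffuse cluster (e.g. tree foliage) spreads into the outer ring and fails.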
S205, traversing the pre-segmentation body set, and taking the filtered continuous slice set S of each pre-segmentation body P as the original seed points of the rod-shaped object pre-segmentation body P.
And S3, growing the original seed points of the rod-shaped object through a growing algorithm to obtain complete rod-shaped object point cloud so as to obtain accurate position information of the rod-shaped object point cloud, wherein the growing algorithm grows through the reflection intensity of each point in the point cloud.
Specifically, as shown in fig. 3, the growing algorithm in step S3 includes the following steps.
S301, taking the original seed point set S from the continuous slice set of a rod-shaped object pre-segmentation body P, and adding the seed point set S to the rod-shaped object segmentation body R.
S302, randomly taking one point from S as seed point s, deleting seed point s from the seed point set S, and searching the K nearest neighbors of seed point s in the point cloud.
S303, diffusing from the K nearest neighbors of the point outward into the rod-shaped object point cloud: if there are points among the K nearest neighbors whose reflection intensity differs from that of seed point s by less than 20%, these points are added to the seed point set S and the rod-shaped object segmentation body R, and seed point s and these diffusion seed points are deleted from the point cloud.
S305, as shown in FIG. 4, the finally obtained rod-shaped object segmentation body R is the complete point cloud of the rod-shaped object, so that accurate position information of each rod-shaped object can be obtained.
And S4, projecting the rod-shaped object point cloud onto the panoramic image according to the corresponding relation between the point cloud and the panoramic image to obtain an image range corresponding to the rod-shaped object point cloud.
Step S4 specifically includes the following steps;
s401, as shown in fig. 5, mapping the vehicle-mounted three-dimensional laser point cloud coordinate system corresponding to the rod-like object point cloud to the panoramic image coordinate system through the IMU/GNSS, so as to obtain panoramic image data of the rod-like object point cloud.
S402, as shown in fig. 5, the coordinates of each vertex of the outer bounding box of the rod point cloud are calculated and mapped into the simulated unit sphere coordinate system through the following formula:

    P_s = R⁻¹ · (P_L − X_C) / d,   d = ‖P_L − X_C‖

wherein: X_C is the coordinate of the center point of the vehicle-mounted camera in the vehicle-mounted three-dimensional laser point cloud coordinate system; R is the rotation matrix converting the simulated unit sphere coordinate system to the vehicle-mounted three-dimensional laser point cloud coordinate system, obtained through the pitch angle, roll angle and yaw angle recorded by the IMU; d is the distance from a point P_L in the vehicle-mounted three-dimensional laser point cloud to the camera center point X_C; P_s is the point on the simulated unit sphere, in the simulated unit sphere coordinate system, to which the point P_L is projected, calculated through the formula in this step.
S403, as shown in fig. 6, the coordinates of each vertex of the outer bounding box of the rod point cloud in the simulated unit sphere coordinate system of the previous step are mapped into the panoramic image coordinate system through the following formula:

    x_p = W · θ / (2π),   y_p = H · φ / π

wherein: (x_p, y_p) are the pixel coordinates in the panoramic image coordinate system; W and H are respectively the width and height of the panoramic image; φ and θ correspond respectively to the latitudinal angle and the longitudinal angle of the simulated unit sphere, whose radius is 1, computed from the simulated unit sphere coordinates P_s = (x_s, y_s, z_s).
S404, the coordinates of the outer bounding box of the rod point cloud are mapped into the panoramic image by using the conversion relations in steps S402 and S403, and the range of the circumscribed rectangle in the panoramic image is calculated.
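The projection chain of S402-S403 can be sketched for a single point under an equirectangular-panorama assumption; the function name, the angle conventions, and the pixel-origin placement are our assumptions.

```python
import numpy as np

def lidar_point_to_panorama(p, cam_center, R, width, height):
    """Project one LiDAR point to (x, y) pixels of a width x height panorama:
    first onto the simulated unit sphere around the camera centre (S402),
    then by an equirectangular mapping to image coordinates (S403)."""
    v = R.T @ (np.asarray(p, float) - cam_center)   # into the sphere frame (R is a rotation)
    v = v / np.linalg.norm(v)                       # onto the unit sphere
    lon = np.arctan2(v[1], v[0])                    # longitudinal angle in (-pi, pi]
    colat = np.arccos(v[2])                         # latitudinal angle in [0, pi]
    x = (lon / (2 * np.pi) + 0.5) * width
    y = (colat / np.pi) * height
    return x, y
```

Applying this to the eight bounding-box vertices and taking the min/max of the resulting pixels gives the circumscribed rectangle in the panorama.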
S5, as shown in fig. 7, the image is subjected to instance segmentation using the trained Mask R-CNN to obtain instance segmentation information of the rods, and the rod point clouds obtained in step S4 are finely classified using their ranges in the panoramic image.
The specific process of step S5 is:
s501, image data including rod type and example information are marked in the panoramic image, and the image data are trained through a MaskRCNN neural network to obtain a neural network model.
S502, carrying out automatic example segmentation on the panoramic image data by using the trained MaskRCNN neural network to obtain the category and example information of the rod-shaped object.
S503, the panoramic image range calculated in the step S4 and the category and example information calculated in the step S502 are subjected to superposition analysis, and the fine category and example information of the rod-shaped object can be obtained.
S504, establishing the corresponding relation between the fine category of the rod in the step S503 and the example information and the accurate position information of the rod in the step S3, and finishing the classification.
As shown in FIG. 8, instance segmentation of the image yields the instance information of each road traffic rod in the panoramic image, including the corresponding category and number. The image range of the road traffic rod projected in step S4 is superimposed with the panorama instance segmentation information to filter out erroneous segmentation results, so that the accurate range and instance information of the road traffic rod are obtained. Combining this with the position information of the road traffic rod obtained in step S305, we obtain the refined instance information and position information of the rod of interest; finally, the two are displayed together in the image, and the fine classification of the rod-shaped objects is completed.
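The overlay step can be sketched as bounding-box matching; using intersection-over-union with a 0.5 threshold is our assumption — the text only states that the projected range and the instance segmentation are superimposed to filter erroneous results.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def match_rod_instances(projected_boxes, instances, min_iou=0.5):
    """For each projected rod rectangle, return the class of the best-overlapping
    instance, or None when no instance overlaps enough (filtering out
    erroneous segmentations)."""
    labels = []
    for pb in projected_boxes:
        best = max(instances, key=lambda inst: iou(pb, inst[0]), default=None)
        ok = best is not None and iou(pb, best[0]) >= min_iou
        labels.append(best[1] if ok else None)
    return labels
```

Rectangles that match an instance inherit its fine category; unmatched rectangles are rejected, which is the filtering effect described above.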
According to the method, a high-order point cloud feature set is extracted using feature extraction algorithms from the field of point cloud processing, and road traffic rods are automatically extracted through the high-order features and grammar rules in knowledge inference, yielding the position information of the road traffic rods. The automatically extracted point clouds of the road traffic rods are projected onto the vehicle-mounted images to intercept the corresponding image blocks, the road traffic rods are then finely identified using a deep learning model from the field of image recognition, and finally the positions and fine category information of the road traffic rods are output. Road traffic rods in large-scale urban road scene point clouds and images are thus automatically and finely classified, improving the efficiency and degree of automation of the fine classification of traffic rods in complex urban road scenes.
It should be understood that the above examples are given only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaustively list all embodiments here. Obvious variations or modifications derived therefrom remain within the scope of the invention.
Claims (4)
1. A road traffic rod object classification method fusing vehicle-mounted three-dimensional laser point cloud and images is characterized by comprising the following steps:
S1, acquiring point clouds and image data corresponding to the point clouds;
S2, removing ground points in the point cloud to obtain pre-segmentation bodies, horizontally slicing each pre-segmentation body along the vertical direction at a preset height to obtain a continuous slice set, applying feature constraints to the continuous slice set through two-dimensional slice features and concentric cylinder features, and taking a continuous slice set that satisfies the feature constraints as rod-shaped object original seed points;
S3, growing the original seed points of the rod-shaped objects through a growing algorithm to obtain complete rod-shaped object point clouds, thereby obtaining accurate position information of the rod-shaped object point clouds;
S4, projecting the rod-shaped object point clouds onto the panoramic image according to the correspondence between the point cloud and the panoramic image to obtain the image ranges corresponding to the rod-shaped object point clouds;
S5, performing instance segmentation on the image using the trained MaskRCNN to obtain instance segmentation information of the rod-shaped objects, and finely classifying the rod-shaped objects in the panoramic image by using the ranges of the rod-shaped object point clouds obtained in step S4;
S201, screening ground points in the point cloud according to the following conditions:
the absolute value of the difference between the height of a point and the trajectory-point height, less the installation height of the three-dimensional laser scanner, is smaller than a threshold, the threshold being 0.25 m;
the local variation elevation of the neighborhood points of the point is less than 0.15 m;
wherein the trajectory-point height of a point is obtained by matching the trajectory information recorded during point cloud acquisition against the point's time tag;
s202, traversing all points in the point cloud, and removing all ground points to obtain a pre-segmentation body set;
s203, horizontally slicing each pre-segmentation body in the pre-segmentation body set at a height of 0.25m in the vertical direction;
s204, extracting the two-dimensional maximum distance between each point in the slice and the offset between the slices in the two-dimensional plane direction, simultaneously constructing a concentric cylinder model comprising an inner cylinder and an outer cylinder, and then filtering the slices through a filtering formula based on two-dimensional characteristics and concentric cylinder characteristics, wherein the filtering formula is as follows:
wherein:
finger pre-segmentation bodyThe maximum offset of the middle continuous slice set in the two-dimensional plane direction is less than 0.25 m;
finger pre-segmentation bodyAt least 85% of the points in the middle serial section set fall into the inner cylinder;
is a pre-division bodyVertical direction continuous slice set with middle two-dimensional maximum distance smaller than distance thresholdThe distance threshold is 0.2-0.5 m;
C 1 andC 2 is based on a set of serial slicesThe inner and outer circular cylinders are fitted,fall on the inner cylinderC 1 The point of the inner one of the points,is located at the outer cylinderC 2 Inner point, and,<1,is a concentric cylinder model proportional threshold, r1 is the inner cylinder radius, r2 is the outer cylinder radius;
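The two filters of claim 1 — the ground-point screening of step S201 and the slice/concentric-cylinder constraint of step S204 — can be sketched as follows. This is a minimal illustration: the thresholds 0.25 m, 0.15 m and 85% follow the text, while the scanner height, the neighbour count `k`, the inner radius `r1` and the brute-force nearest-neighbour search are assumptions.

```python
import numpy as np

def screen_ground_points(points, traj_heights, scanner_height=2.0,
                         height_tol=0.25, var_tol=0.15, k=8):
    """Step S201 sketch: a point is ground if its height lies within
    height_tol of the local ground level (trajectory height minus the
    scanner installation height) and the elevation spread of its k
    nearest XY neighbours is below var_tol."""
    points = np.asarray(points, dtype=float)
    z = points[:, 2]
    ground_level = np.asarray(traj_heights, dtype=float) - scanner_height
    near_ground = np.abs(z - ground_level) < height_tol
    xy = points[:, :2]
    local_var = np.empty(len(points))
    for i in range(len(points)):   # brute force; a KD-tree would be used at scale
        nn = np.argsort(np.linalg.norm(xy - xy[i], axis=1))[:k]
        local_var[i] = z[nn].max() - z[nn].min()
    return near_ground & (local_var < var_tol)

def is_pole_seed(slices, r1=0.15, max_offset=0.25,
                 dist_thresh=0.4, inner_ratio=0.85):
    """Step S204 sketch on a continuous slice set (each slice an (n, 2)
    array of XY points): slice centres may not drift more than max_offset,
    the 2-D extent must stay below dist_thresh (patent range 0.2-0.5 m),
    and at least inner_ratio of all points must fall inside the inner
    cylinder of radius r1."""
    centers = np.array([s.mean(axis=0) for s in slices])
    axis = centers.mean(axis=0)                       # common vertical axis
    if np.linalg.norm(centers - axis, axis=1).max() >= max_offset:
        return False
    pts = np.vstack(slices)
    radial = np.linalg.norm(pts - axis, axis=1)
    if 2.0 * radial.max() >= dist_thresh:             # 2-D maximum distance
        return False
    return bool(np.mean(radial <= r1) >= inner_ratio)
```

A thin ring of slice points passes the check, while a wide scatter (e.g. a tree crown or wall fragment) is rejected before region growing.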
2. The method according to claim 1, wherein the growing algorithm in step S3 comprises the steps of:
S301, taking the original seed point set from the continuous slice set of a rod-shaped object pre-segmentation body, and adding the seed point set to the rod-shaped object segmentation body;
S302, randomly taking one point from the seed point set as a seed point, deleting the seed point from the seed point set, and searching for the K nearest neighbors of the seed point in the point cloud;
S303, diffusing from the seed point to its K nearest neighbors; if points exist among the K nearest neighbors whose intensity differs from that of the seed point by less than 20%, these points are added to the seed point set and the rod-shaped object segmentation body, and the seed point and these diffusion seed points are deleted from the point cloud.
3. The method according to claim 1, wherein step S4 comprises the following steps:
S401, mapping the vehicle-mounted three-dimensional laser point cloud coordinate system corresponding to the rod-shaped object point cloud to the panoramic image coordinate system through IMU/GNSS to obtain the panoramic image data corresponding to the rod-shaped object point cloud;
s402, calculating coordinates of each vertex in an outer bounding box of the point cloud of the rod-shaped object, mapping the coordinates into a simulation unit spherical coordinate system, and converting the coordinates through the following formula:
wherein:
the coordinate of the center point of the camera in a vehicle-mounted three-dimensional laser point cloud coordinate system;
the method is characterized in that a rotation matrix for converting a unit spherical coordinate system to a vehicle-mounted three-dimensional laser point cloud coordinate system is simulated, and the rotation matrix is obtained through a pitch angle, a rotation angle and a yaw angle recorded by an IMU;
is a point in the vehicle-mounted three-dimensional laser point cloudTo the center point of the cameraThe distance of (d);
projecting points in a vehicle-mounted three-dimensional laser point cloud coordinate system to points of a simulation unit spherical surface in a simulation unit spherical coordinate system, and calculating by using a formula in the step S402;
s403, mapping the coordinates of each vertex in the outer bounding box of the rod point cloud in the unit sphere coordinate system simulated in the previous step to a panoramic image coordinate system, and converting through the following formula:
wherein:
andrespectively corresponding to the latitudinal angle and the longitudinal angle of the simulation unit ball, wherein the radius of the simulation unit ball is 1;
and S403, mapping the coordinates of the outer bounding box of the rod point cloud into the panoramic image by using the conversion relation in the step S402, and calculating the range of the circumscribed rectangle in the panoramic image.
4. The method according to claim 1, wherein the specific process of step S5 is as follows:
S501, labeling image data including rod-shaped object categories and instance information in the panoramic images, and training a MaskRCNN neural network on the labeled image data to obtain a neural network model;
S502, performing automatic instance segmentation on the panoramic image data using the trained MaskRCNN neural network to obtain the categories and instance information of the rod-shaped objects;
S503, performing overlay analysis on the panoramic image ranges calculated in step S4 and the categories and instance information calculated in step S502 to obtain the fine categories and instance information of the rod-shaped objects;
S504, establishing the correspondence between the fine categories and instance information of the rod-shaped objects in step S503 and the accurate position information of the rod-shaped objects in step S3, completing the classification.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110852461.1A CN113313081B (en) | 2021-07-27 | 2021-07-27 | Road traffic rod object classification method integrating vehicle-mounted three-dimensional laser point cloud and image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113313081A CN113313081A (en) | 2021-08-27 |
CN113313081B true CN113313081B (en) | 2021-11-09 |
Family
ID=77382356
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107516077A (en) * | 2017-08-17 | 2017-12-26 | 武汉大学 | Traffic sign information extracting method based on laser point cloud and image data fusion |
CN111815776A (en) * | 2020-02-04 | 2020-10-23 | 山东水利技师学院 | Three-dimensional building fine geometric reconstruction method integrating airborne and vehicle-mounted three-dimensional laser point clouds and streetscape images |
CN111899302A (en) * | 2020-06-23 | 2020-11-06 | 武汉闻道复兴智能科技有限责任公司 | Point cloud data-based visual detection method, device and system |
CN112446343A (en) * | 2020-12-07 | 2021-03-05 | 苏州工业园区测绘地理信息有限公司 | Vehicle-mounted point cloud road rod-shaped object machine learning automatic extraction method integrating multi-scale features |
Non-Patent Citations (2)
Title |
---|
Pole-Like Road Furniture Detection and Decomposition in Mobile Laser Scanning Data Based on Spatial Relations; Fashuai Li et al.; Remote Sensing; 2018; 1-28 *
Research on a Hierarchical Method for Extracting Pole-Like Objects from Vehicle-Mounted Laser Point Clouds; Zhu Yanbin et al.; Geomatics World; 2019; 56-60 *
Legal Events

Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||