CN112433198A - Method for extracting plane from three-dimensional point cloud data of laser radar - Google Patents

Method for extracting plane from three-dimensional point cloud data of laser radar

Info

Publication number
CN112433198A
CN112433198A (application CN201910786166.3A)
Authority
CN
China
Prior art keywords
plane
point cloud
straight line
cloud data
extracting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910786166.3A
Other languages
Chinese (zh)
Other versions
CN112433198B (en)
Inventor
王远鹏
张二阳
葛凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Xunji Zhixing Robot Technology Co ltd
Original Assignee
Suzhou Xunji Zhixing Robot Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Xunji Zhixing Robot Technology Co ltd filed Critical Suzhou Xunji Zhixing Robot Technology Co ltd
Priority to CN201910786166.3A priority Critical patent/CN112433198B/en
Publication of CN112433198A publication Critical patent/CN112433198A/en
Application granted granted Critical
Publication of CN112433198B publication Critical patent/CN112433198B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00

Abstract

The invention belongs to the technical field of artificial intelligence, and particularly discloses a method for extracting a plane from three-dimensional point cloud data of a laser radar, which comprises the following steps: s1, layering 3D point cloud data obtained by scanning of the multi-line laser radar; s2, extracting straight lines from the 2D point cloud data of each plane; s3, extracting face candidates from the plane straight line set obtained by all the planes; and S4, solving a plane equation by using a least square method according to the straight line end points in each group of plane straight line set, and extracting a plane in the three-dimensional point cloud data. The method can extract the straight lines from the 2D laser data of each layer more quickly, can extract the 3D plane by using all the straight lines of each layer, and has the advantages of high efficiency and no need of any priori knowledge about actual terrain. The 3D plane acquired by the method can be used for correcting single laser data falling in the plane, so that the fluctuation of the single laser data can be greatly reduced, and the reliability of the laser data is improved.

Description

Method for extracting plane from three-dimensional point cloud data of laser radar
Technical Field
The invention belongs to the technical field of artificial intelligence, and particularly relates to a method for extracting a plane from 3D data of a laser radar. The typical characteristic of the 3D laser data acquired by the multiline lidar is that one frame of 3D data is actually a combination of a plurality of sets of 2D laser data with different pitch angles.
Background
At present, because 3D laser radars are expensive, there is little technical literature on preprocessing 3D lidar data; most of it concerns higher-level 3D laser data processing. Existing 3D data processing techniques can extract features from the data, but these features are usually 3D objects: they include the geometric information of the actual object and assign it a real-world meaning, such as a car, a house, or a pedestrian. Although these powerful algorithms extract rich information from 3D laser data, they make no effort to improve the quality of the data itself, such as its repeated-measurement accuracy. They also place strict demands on the algorithms: they either consume very large amounts of resources, or are based on artificial-intelligence methods such as neural networks that must be supplied with the required parameters in advance.
Another line of work starts from the huge storage that 3D data requires and tries to approximate the 3D points by planes to reduce storage. It mainly applies the EM algorithm to fit planes to 3D laser scans of a known, simple environment (such as an empty room), converting the massive 3D data into a few simple planes; optimizing with the EM algorithm yields good plane precision, avoiding, for example, the non-parallel front and back walls of a room that earlier algorithms could produce. As for the application scenario, however, using this algorithm requires knowing the general structure of the environment in advance, and the environment cannot be too complex; although the extracted surfaces can help increase confidence in the data, the approach remains quite limited.
The 3D Hough transform is another technique for extracting planes from 3D laser data. The direct 3D Hough transform extends the 2D Hough transform to 3D: over discretized θ and φ, each point (x, y, z) votes, using

ρ = x·cos θ·sin φ + y·sin θ·sin φ + z·cos φ,

in every discretized (θ, φ, ρ) cell; after traversing all points, the votes in each (θ, φ, ρ) cell are counted, and the cells whose votes exceed a threshold represent planes. The optimized 3D Hough transform first preprocesses all data points: the normal vector of each point is estimated from the point and its neighbors, and the point then votes only for the (θ, φ, ρ) represented by that normal vector. Whether direct or optimized, the time complexity is very high: either every laser point must traverse all (θ, φ) pairs in the voting, or the normal vectors must be computed correctly from nearby points during preprocessing. In addition, extracting the plane information from the accumulator obtained by the 3D Hough transform requires further operations.
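As a sketch of the voting scheme just described (the bin counts, range limit, vote threshold, and function name below are illustrative assumptions, not part of the patent), a direct 3D Hough transform can be written as:

```python
import numpy as np

def hough_planes_3d(points, n_theta=36, n_phi=18, n_rho=100, rho_max=20.0, votes_min=50):
    """Direct 3D Hough transform: every point votes in every discretized
    (theta, phi) cell at rho = x cos(t) sin(p) + y sin(t) sin(p) + z cos(p)."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    phis = np.linspace(0.0, np.pi, n_phi, endpoint=False)
    acc = np.zeros((n_theta, n_phi, n_rho), dtype=np.int32)
    for x, y, z in points:                    # O(N * n_theta * n_phi): the costly traversal
        for i, t in enumerate(thetas):
            for j, p in enumerate(phis):
                rho = x*np.cos(t)*np.sin(p) + y*np.sin(t)*np.sin(p) + z*np.cos(p)
                k = int((rho + rho_max) / (2.0*rho_max) * n_rho)
                if 0 <= k < n_rho:
                    acc[i, j, k] += 1
    # cells whose vote count exceeds the threshold are reported as planes
    hits = np.argwhere(acc >= votes_min)
    cell = 2.0*rho_max / n_rho
    return [(thetas[i], phis[j], (k + 0.5)*cell - rho_max) for i, j, k in hits]
```

The triple loop makes the quoted complexity criticism concrete: every point visits every (θ, φ) pair, which the optimized normal-vector variant avoids at the cost of the preprocessing step.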
Disclosure of Invention
In view of the above, an object of the present invention is to solve the problems of the prior art and provide a method for extracting a plane from the 3D point cloud data of a laser radar, which can correct individual laser data falling within the plane without any a priori knowledge of the actual terrain, and improve the reliability of the laser data.
In order to achieve the purpose, the invention adopts the following technical scheme:
a method for extracting a plane from three-dimensional point cloud data of a laser radar is characterized by comprising the following steps:
s1, layering 3D point cloud data obtained by scanning of the multi-line laser radar, wherein each layer comprises a group of 2D point cloud data representing a plane, and the group of 2D point cloud data is scanned by the laser radar at intervals of a fixed angle deltahThe measurement is carried out for one time to obtain;
s2, extracting straight lines from the 2D point cloud data of each plane, specifically: when the absolute value of the difference between two adjacent 2D point cloud data is less than c of the two datamaxThen the two adjacent 2D point cloud data belong to the same straight line, cmaxConverting distance values a and b in two 2D point cloud data into 2D coordinates, storing the 2D coordinates into a linear coordinate set, linearly fitting the 2D coordinates in the linear coordinate set after the 2D coordinates in the linear coordinate set reach a preset number to obtain a straight line, and storing parameters of the straight line, wherein the parameters comprise an open angle interval of a starting position to an origin O and two end points of the straight line;
s3, extracting the candidates of the plane from the plane straight line set obtained from all the planes, specifically: firstly, the minimum layer number N of a judgment plane is determined according to the attribute of the laser radarthresholdTaking a straight line set in the top 2D laser plane, recording an angle interval occupied by each straight line, and storing each interval as a plane straight line set; traversing the rest 2D laser planes from top to bottom, screening all plane linear sets, and deleting angle intervals with the number smaller than NthresholdThe set of planar lines of (a);
and S4, solving a plane equation by using a least square method according to the straight line end points in each group of plane straight line set, and extracting a plane in the three-dimensional point cloud data.
The main function of the invention is to extract planes from 3D laser data; it also includes an algorithm for extracting straight lines from 2D laser data. Compared with the prior art, the method extracts straight lines from each layer of 2D laser data more quickly, and can extract a 3D plane using all the straight lines of every layer, without any a priori knowledge of the actual terrain. One direct use of an acquired 3D plane is to correct the individual laser data falling within it, i.e. to replace the laser datum (distance value) in each corresponding direction with the distance from the origin to the intersection of that direction with the plane. Since the plane is supported by many laser data points, the fluctuation of individual laser data can be greatly reduced, and the reliability of the corrected laser data is therefore much higher than that of uncorrected laser data.
Drawings
FIG. 1 is a schematic diagram of the present invention extracting planes from 2D laser data of each plane.
Detailed Description
The invention is further illustrated by the following figures and examples.
The invention discloses a method for extracting a plane from three-dimensional point cloud data of a laser radar, which takes a multiline laser radar to scan the surrounding environment as an example and comprises the following steps.
(I) Extracting straight lines from the 2D laser data of each plane
S1, layering the 3D point cloud data obtained by scanning with the multi-line laser radar, wherein each layer comprises a group of 2D point cloud data representing one plane, the group of 2D point cloud data being obtained by the laser radar making one measurement at every fixed angle δh. In a 3D multiline laser radar, each layer acquires a group of 2D data, which is generally measured once every small angle; this angle is the (in-plane) angular resolution, denoted δh (for δhorizontal). For a typical 3D multiline radar δh is very small, for example δh = 0.5°.
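A minimal sketch of this layering step (the 2° pitch spacing and the helper name are illustrative assumptions; real multiline radars usually deliver the ring index directly):

```python
import math
from collections import defaultdict

def split_into_layers(points, pitch_res_deg=2.0):
    """Group 3D returns into per-layer 2D scans: each layer is the set of
    (bearing, range) pairs measured at one pitch angle."""
    layers = defaultdict(list)
    for x, y, z in points:
        r_xy = math.hypot(x, y)
        pitch = math.degrees(math.atan2(z, r_xy))
        ring = round(pitch / pitch_res_deg)      # quantize to the nearest scan line
        layers[ring].append((math.atan2(y, x), math.hypot(r_xy, z)))
    return dict(layers)
```

Each value of the returned dict is then one "plane" of 2D laser data for the line-extraction step below.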
S2, extracting straight lines from the 2D point cloud data of each plane, specifically: when the absolute value of the difference between two adjacent 2D point cloud data is less than the cmax of the two data, the two adjacent 2D point cloud data belong to the same straight line. Here cmax is the maximum length of the line segment AB between the two points A and B at which two adjacent laser beams, emitted from the laser radar at the origin O during one measurement, strike the obstacle while still lying on one straight line. The distance values a and b in the two 2D point cloud data are converted into 2D coordinates and stored into a straight-line coordinate set; after the 2D coordinates in the straight-line coordinate set reach a preset number, they are linearly fitted to obtain a straight line, and the parameters of the straight line are stored, the parameters comprising the angle interval subtended at the origin O from the start position to the end position, and the two end points of the straight line.
The range values output by a laser radar generally have a minimum value, which has two meanings: either it is the lower limit of the laser range (data smaller than it do not occur), or laser data smaller than it cannot be guaranteed to be accurate. Moreover, since the laser radar is often mounted at the center of the device, or for reasons of protecting the laser radar, there is also a minimum distance between the laser radar and any obstacle; unless the laser radar is mounted in a more extreme way (for example on the left or right boundary of the device), this distance is also considerable. The larger of these two minimum values is taken as l0; empirically, l0 = 0.5 m.
As shown in FIG. 1, in ΔOAB the point O represents the position of the laser radar, and OA and OB are two adjacent laser paths emitted by the laser radar within one plane, so that ∠AOB = δh. The line segment AB is part of an obstacle. Let l denote the distance between the laser radar and the line AB, and let θ = ∠OAB, so that

l = a·sin θ.

In ΔOAB, by the law of cosines,

c² = a² + b² − 2ab·cos δh.

At the same time, from l ≥ l0,

a·sin θ ≥ l0,

together with the trigonometric identity

cos²θ + sin²θ = 1.

Combining the three relations (equivalently, equating the two expressions for twice the area of ΔOAB, ab·sin δh = c·l) gives

c = ab·sin δh / l ≤ ab·sin δh / l0 ≡ cmax.

Considering that δh is very small, in the extreme case a ≈ b + c, i.e.

|a − b| ≤ c ≤ cmax = (sin δh / l0)·ab.

That is, if two adjacent laser data lie on one straight line, the difference between the two data cannot exceed cmax. With the earlier conventions δh = 0.5° and l0 = 0.5 m, the proportionality coefficient is

sin δh / l0 ≈ 0.0175 m⁻¹.

Therefore cmax allows a very quick judgment of whether adjacent laser data belong to the same straight line.
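The bound can be evaluated directly (the function names are illustrative; the formula is the one reconstructed from the derivation, cmax = ab·sin δh / l0):

```python
import math

def c_max(a, b, delta_h_deg=0.5, l0=0.5):
    """Largest chord AB consistent with a straight line at distance >= l0
    from the sensor, given adjacent range readings a and b (metres)."""
    return a * b * math.sin(math.radians(delta_h_deg)) / l0

def same_line(a, b, delta_h_deg=0.5, l0=0.5):
    # two adjacent returns can lie on one straight line only if |a - b| < c_max
    return abs(a - b) < c_max(a, b, delta_h_deg, l0)
```

With the default δh = 0.5° and l0 = 0.5 m, c_max(1, 1) ≈ 0.0175 m, matching the proportionality coefficient above.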
Using cmax, straight lines are extracted from the 2D laser data by the following algorithm, whose pseudocode is given as figures in the original publication.
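Since the published pseudocode survives only as figures, the following is a hedged Python reconstruction of the flow described in the text (the run handling and the bearing convention are our reading, not the patent's exact listing; min_points = 10 follows claim 4):

```python
import math

def extract_lines(scan, delta_h_deg=0.5, l0=0.5, min_points=10):
    """scan is one layer's range readings, one per bearing step of delta_h
    (the bearing of reading i is taken as i*delta_h for illustration)."""
    dh = math.radians(delta_h_deg)
    lines, run = [], [0]
    for i in range(1, len(scan)):
        a, b = scan[i - 1], scan[i]
        if abs(a - b) < a * b * math.sin(dh) / l0:   # the c_max membership test
            run.append(i)
        else:
            if len(run) >= min_points:
                lines.append(fit_segment(scan, run, dh))
            run = [i]                                # start a new candidate line
    if len(run) >= min_points:
        lines.append(fit_segment(scan, run, dh))
    return lines

def fit_segment(scan, idx, dh):
    """Keep the stored line parameters: the angle interval at the origin
    and the two end points (a full least-squares fit is omitted here)."""
    pts = [(scan[i] * math.cos(i * dh), scan[i] * math.sin(i * dh)) for i in idx]
    return {"angle_interval": (idx[0] * dh, idx[-1] * dh),
            "endpoints": (pts[0], pts[-1])}
```

A flat wall at x = 2 m, for example, yields one line whose first end point is (2, 0), while a short run of distant returns is discarded for having fewer than min_points samples.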
The above algorithm is deliberately simple, so it has certain limitations: it cannot recognize a broken (piecewise) line, which is treated as a single straight line and then discarded as too dispersed, nor can it separate two parallel lines that are very close together. The straight lines it can handle are line segments whose end points are far apart, such as the boundaries of distinct physical objects.
For this drawback of the present algorithm, the following ways can be used to compensate:
A) On line 8 of the pseudocode, instead of comparing the absolute value of the difference between two adjacent data (distance values) with cmax, first determine the sign of the difference between the two data, and then compare the signed difference with cmax directly. As a point of interest moves gradually from one end point of the segment to the other, its distance to a fixed point off the line behaves regularly: the closer the point of interest is to the foot of the perpendicular from the off-line point, the smaller the distance, i.e. away from the foot of the perpendicular the distance changes monotonically, so on this premise the absolute value can be dispensed with entirely.
B) The rough parameters of the line can first be determined from the first few points (for example the first 10 2D coordinate points), and cmax can then be computed further from the angle between the line and the straight line joining the point of interest to the fixed point. This is a predictive technique; to compute cmax more accurately in this way, the parameters of the line can also be updated in real time.
C) Points judged too dispersed would otherwise be discarded outright, but it is entirely possible to try to analyze these 2D coordinate points one by one. First use the first 10 points (the 1st to the 10th) as the initial straight-line point set to determine the rough parameters of the line (if these 10 points are themselves dispersed, slide to the 2nd to the 11th, and repeat until a suitable window of 10 points is found or the end is reached). Then try to add each subsequent point to the line one by one: if the dispersion of all the 2D coordinate points after adding the point does not increase noticeably, the point is considered to belong to the line; if the dispersion increases noticeably, the points before it are taken as the points of the line, so that part of the coordinate points is extracted from the dispersed points and forms a straight line. That point and the next 9 points are then collected as the next initial straight-line point set, and the loop continues. After all the dispersed points have been traversed, a forward extension is attempted for each line: a point of the previous line lying just before a large jump in dispersion is tried in the next line, as long as moving it does not reduce the dispersion of the data points in both lines at the same time.
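Remedy C can be sketched as follows (the RMS-residual criterion and the tolerance value are illustrative stand-ins for the patent's unspecified dispersion measure):

```python
import math

def grow_line(points, seed=10, tol=0.05):
    """Seed a line with the first `seed` points, then accept each following
    point while the total-least-squares RMS residual stays below `tol`."""
    def rms(pts):
        # smaller eigenvalue of the 2x2 scatter matrix = sum of squared
        # perpendicular distances to the best-fit line through the centroid
        n = len(pts)
        mx = sum(p[0] for p in pts) / n
        my = sum(p[1] for p in pts) / n
        sxx = sum((p[0] - mx) ** 2 for p in pts)
        syy = sum((p[1] - my) ** 2 for p in pts)
        sxy = sum((p[0] - mx) * (p[1] - my) for p in pts)
        lam = 0.5 * (sxx + syy) - 0.5 * math.hypot(sxx - syy, 2.0 * sxy)
        return math.sqrt(max(lam, 0.0) / n)
    if len(points) < seed:
        return []
    line = list(points[:seed])
    for p in points[seed:]:
        if rms(line + [p]) < tol:
            line.append(p)      # dispersion did not increase noticeably
        else:
            break               # dispersion jumped: the line ends here
    return line
```

The loop stops at the first point that noticeably increases the dispersion, which is exactly the break point the text uses to start the next candidate line.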
However, these methods tend to have a relatively large time consumption, which conflicts with the high efficiency proposed by the present algorithm.
(II) extracting candidates of the plane from the straight line set obtained from all the M planes
S3, extracting candidates of the plane from the plane straight line sets obtained from all the planes. This step can be done very simply or made very comprehensive; a simple approach that favors horizontal or near-horizontal boundaries is used here. The approach requires introducing a structure, the "plane straight line set", whose body is a set of saved angle intervals; an angle interval describes a line in a certain 2D plane, including the inclination angle of the 2D plane and the angular range the line subtends at the origin. Before the algorithm runs, the minimum number of layers Nthreshold for judging a plane is first determined according to the attributes of the laser radar (the number of beams and the pitch-angle resolution). Its meaning is that a physical plane is recognized as a plane only after it has been detected by at least Nthreshold laser layers; planes below this threshold are ignored. In view of the third step, Nthreshold must satisfy Nthreshold ≥ 3.
The straight lines obtained in (one) all reserve the occupied angle, so that the operation can be carried out as follows:
taking a straight line set in the top 2D laser plane, recording an angle interval occupied by each straight line, and storing each interval as a plane straight line set;
traversing the remaining 2D laser planes from top to bottom;
checking all plane straight line sets, if a certain set does not contain straight lines in adjacent planes above the current plane, identifying the plane straight line set as inactive, otherwise, identifying as active;
for each straight line in the current plane, take its angle interval, compare it with the lowest angle interval in every active plane straight line set, compare the overlaps of these intervals, and put the straight line into the plane straight line set with which the overlapping angle interval is largest;
screening all plane straight line sets, and deleting the plane straight line sets whose number of angle intervals is smaller than Nthreshold.
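A simplified sketch of this matching follows (it greedily assigns each line to the candidate set with the largest interval overlap; the active/inactive bookkeeping is collapsed for brevity, so this is a reading of the step, not the patent's exact procedure):

```python
def overlap(a, b):
    """Length of the overlap of two angle intervals (radians)."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def group_into_planes(layers, n_threshold=3):
    """layers is a list (top to bottom) of per-layer line lists, each line
    given by its angle interval; returns the surviving plane straight line sets."""
    plane_sets = [[iv] for iv in layers[0]]        # seed with the top layer
    for lines in layers[1:]:
        for iv in lines:
            best, best_ov = None, 0.0
            for ps in plane_sets:
                ov = overlap(ps[-1], iv)           # compare with the lowest interval
                if ov > best_ov:
                    best, best_ov = ps, ov
            if best is not None:
                best.append(iv)
            else:
                plane_sets.append([iv])            # no overlap: start a new candidate
    # screening: a plane must span at least n_threshold layers
    return [ps for ps in plane_sets if len(ps) >= n_threshold]
```

Three stacked, overlapping intervals survive the screen with the default n_threshold = 3, while an isolated interval in a single layer is dropped.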
In fact, because of its simplicity this method has some problems: it has difficulty detecting a plane whose lower edge is not horizontal (for example a square plane whose boundary makes a 45° angle with the horizontal, whose lowest part is a single point), and because only a fixed Nthreshold is set, the recognition rate for relatively distant surfaces is relatively low, and so on. Although these problems can be solved (for example by extending already determined faces to find faces without horizontal boundaries, or by setting Nthreshold inversely proportional to distance), the most straightforward approach is chosen here for simplicity.
(III) Calculating plane parameters from each group of plane straight line sets
S4, solving the plane equation by the least squares method from the straight-line end points in each group of plane straight line sets, thereby extracting a plane in the three-dimensional point cloud data. A plane straight line set stores (in the form of their two end points) a number of straight lines that should belong to the same plane; the problem is now to solve for the plane equation from these lines.
Since the 3D laser radar scans at pitch angles, the planes it can scan satisfy the general plane equation Ax + By + Cz + D = 0, but they should not be horizontal planes; that is, A and B cannot be zero at the same time. The plane can therefore be written as

x = By + Cz + D

or

y = Ax + Cz + D
Thus the plane equation can be solved by the least squares method, i.e. by finding the (B, C, D) or (A, C, D) that minimize

Σi (xi − B·yi − C·zi − D)²

or

Σi (yi − A·xi − C·zi − D)²

where the sum runs over all 3D coordinate points. Which of the two equations to choose can be computed directly from the distribution of the 3D coordinates.
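A sketch of this fit (NumPy's `lstsq` stands in for solving the normal equations; the spread-based choice between the two plane forms is an illustrative heuristic):

```python
import numpy as np

def fit_plane(points):
    """Fit x = B*y + C*z + D or y = A*x + C*z + D by least squares over the
    given 3D points; returns the dependent coordinate and (B, C, D) / (A, C, D)."""
    P = np.asarray(points, dtype=float)
    x, y, z = P[:, 0], P[:, 1], P[:, 2]
    solve_for_x = np.ptp(x) <= np.ptp(y)      # fit the coordinate with the smaller spread
    rhs = x if solve_for_x else y
    u, v = (y, z) if solve_for_x else (x, z)
    A = np.column_stack([u, v, np.ones(len(P))])
    coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return ("x" if solve_for_x else "y"), coef
```

Feeding it the end points collected in the plane straight line set recovers the plane coefficients exactly when the points are noise-free.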
To extract the 3D coordinate points from the line equations, only the two end points of each line (segment) need be considered, because the end points can give the largest residual with respect to the plane. Since two points in space determine a straight line, each line contributes only two points to the residual

Σi (xi − B·yi − C·zi − D)²;

every other point on the line is merely a linear combination of the end points, so adding points of a segment other than its two end points contributes little.
After the above steps (I), (II) and (III), the planes in the 3D laser data have been extracted, so the laser data contained in them can be corrected: specifically, in the azimuth of each affected laser datum, the original laser distance is replaced by the distance from the origin to the plane along that azimuth. Such a correction does not change the laser distance values much, because the fluctuation of the laser data is bounded. Since the azimuth of laser data is very accurate, and the plane equation is obtained from many data points, the corrected laser data are more accurate than the original data (their repeated-measurement variance is lower).
Compared with the prior art, the invention has the following advantages:
1. the algorithm is only directed at a very common and very simple three-dimensional structure, namely a plane, and is easy and convenient to implement.
2. The efficiency is high. The algorithm for extracting the straight line has good efficiency and has the time complexity of O (N); respectively extracting straight lines from the laser data of different planes, wherein the straight lines can be calculated in parallel; and the way of acquiring the plane by using the straight line is simpler.
3. No a priori knowledge about the actual terrain is required.
The above embodiments are merely illustrative of the technical ideas and features of the present invention, and the purpose thereof is to enable those skilled in the art to understand the contents of the present invention and implement the present invention, and not to limit the protection scope of the present invention. All equivalent changes and modifications made according to the spirit of the present invention should be covered within the protection scope of the present invention.

Claims (5)

1. A method for extracting a plane from three-dimensional point cloud data of a laser radar is characterized by comprising the following steps:
s1, layering 3D point cloud data obtained by scanning of the multi-line laser radar, wherein each layer comprises a group of 2D point cloud data representing a plane, and the group of 2D point cloud data is scanned by the laser radar at intervals of a fixed angle deltahThe measurement is carried out for one time to obtain;
s2, extracting straight lines from the 2D point cloud data of each plane, specifically: when the absolute value of the difference between two adjacent 2D point cloud data is less than c of the two datamaxThen the two adjacent 2D point cloud data belong to the same straight line, cmaxConverting distance values a and b in two 2D point cloud data into 2D coordinates, storing the 2D coordinates into a linear coordinate set, linearly fitting the 2D coordinates in the linear coordinate set after the 2D coordinates in the linear coordinate set reach a preset number to obtain a straight line, and storing parameters of the straight line, wherein the parameters comprise an open angle interval of a starting position to an origin O and two end points of the straight line;
s3, extracting the candidates of the plane from the plane straight line set obtained from all the planes, specifically: firstly, the minimum layer number N of a judgment plane is determined according to the attribute of the laser radarthresholdTaking a straight line set in the top 2D laser plane, recording an angle interval occupied by each straight line, and storing each interval as a plane straight line set; traversing the rest 2D laser planes from top to bottom, screening all plane linear sets, and deleting angle intervals with the number smaller than NthresholdThe set of planar lines of (a);
and S4, solving a plane equation by using a least square method according to the straight line end points in each group of plane straight line set, and extracting a plane in the three-dimensional point cloud data.
2. The method of extracting a plane from three-dimensional point cloud data of a laser radar according to claim 1, wherein the fixed angle δh is the pitch resolution of the laser radar.
3. Method for extracting a plane from three-dimensional point cloud data of a lidar according to claim 2, wherein δh=0.5°。
4. The method of extracting a plane from the three-dimensional point cloud data of lidar according to claim 1, wherein the predetermined number is at least 10 in step S2.
5. The method of claim 1, wherein in step S3, Nthreshold ≥ 3.
CN201910786166.3A 2019-08-24 2019-08-24 Method for extracting plane from three-dimensional point cloud data of laser radar Active CN112433198B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910786166.3A CN112433198B (en) 2019-08-24 2019-08-24 Method for extracting plane from three-dimensional point cloud data of laser radar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910786166.3A CN112433198B (en) 2019-08-24 2019-08-24 Method for extracting plane from three-dimensional point cloud data of laser radar

Publications (2)

Publication Number Publication Date
CN112433198A true CN112433198A (en) 2021-03-02
CN112433198B CN112433198B (en) 2022-04-12

Family

ID=74689886

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910786166.3A Active CN112433198B (en) 2019-08-24 2019-08-24 Method for extracting plane from three-dimensional point cloud data of laser radar

Country Status (1)

Country Link
CN (1) CN112433198B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107192994A (en) * 2016-03-15 2017-09-22 山东理工大学 Multi-line laser radar mass cloud data is quickly effectively extracted and vehicle, lane line characteristic recognition method
CN108873013A (en) * 2018-06-27 2018-11-23 江苏大学 A kind of road using multi-line laser radar can traffic areas acquisition methods
CN109254586A (en) * 2018-09-19 2019-01-22 绵阳紫蝶科技有限公司 Point and non-thread upper point classification, electric power line drawing and path planning method on line
CN109300162A (en) * 2018-08-17 2019-02-01 浙江工业大学 A kind of multi-line laser radar and camera combined calibrating method based on fining radar scanning marginal point
CN110009029A (en) * 2019-03-28 2019-07-12 北京智行者科技有限公司 Feature matching method based on point cloud segmentation


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
NI HUAN et al.: "Progress and Prospects of 3D Point Cloud Edge Detection and Line Segment Extraction", Bulletin of Surveying and Mapping (《测绘通报》) *
DAI NAN et al.: "Research on Algorithms for Extracting Planar Building Targets from Laser Point Clouds", Microcomputer Information (《微计算机信息》) *

Also Published As

Publication number Publication date
CN112433198B (en) 2022-04-12

Similar Documents

Publication Publication Date Title
CN106886980B (en) Point cloud density enhancement method based on three-dimensional laser radar target identification
CN108846888B (en) Automatic extraction method for fine size information of ancient wood building components
CN107179768B (en) Obstacle identification method and device
Kang et al. Automatic targetless camera–lidar calibration by aligning edge with gaussian mixture model
CN110230998B (en) Rapid and precise three-dimensional measurement method and device based on line laser and binocular camera
CN111986115A (en) Accurate elimination method for laser point cloud noise and redundant data
Cheng et al. Building boundary extraction from high resolution imagery and lidar data
CN110349260B (en) Automatic pavement marking extraction method and device
CN110992356A (en) Target object detection method and device and computer equipment
Jin et al. A point-based fully convolutional neural network for airborne lidar ground point filtering in forested environments
CN102324041B (en) Pixel classification method, joint body gesture recognition method and mouse instruction generating method
WO2020237516A1 (en) Point cloud processing method, device, and computer readable storage medium
CN112819958B (en) Engineering geological mapping method and system based on three-dimensional laser scanning
CN106500594B (en) Merge the railroad track method for semi-automatically detecting of reflected intensity and geometric properties
CN107610174B (en) Robust depth information-based plane detection method and system
Kallwies et al. Triple-SGM: stereo processing using semi-global matching with cost fusion
Lari et al. Alternative methodologies for the estimation of local point density index: Moving towards adaptive LiDAR data processing
CN108520255B (en) Infrared weak and small target detection method and device
CN112232248B (en) Method and device for extracting plane features of multi-line LiDAR point cloud data
CN112433198B (en) Method for extracting plane from three-dimensional point cloud data of laser radar
CN116843742B (en) Calculation method and system for stacking volume after point cloud registration for black coal loading vehicle
Lin et al. $ k $-segments-based geometric modeling of VLS scan lines
Zhu et al. Triangulation of well-defined points as a constraint for reliable image matching
CN107765257A (en) A kind of laser acquisition and measuring method based on the calibration of reflected intensity accessory external
CN111915724A (en) Point cloud model slice shape calculation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant