CN114862788A - Automatic identification method for plane target coordinates of three-dimensional laser scanning - Google Patents


Info

Publication number: CN114862788A (application CN202210473238.0A)
Authority: CN (China)
Prior art keywords: target, point cloud, value, cloud data, plane
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202210473238.0A
Other languages: Chinese (zh)
Inventors:
杨承昆
吴勇生
文言
曾雄鹰
王佳龙
黎凯
任自力
Current assignee (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list): Cccc Central South Engineering Bureau Co ltd; Hunan Lianzhi Technology Co Ltd
Original assignee: Hunan Lianzhi Technology Co Ltd
Application filed by Hunan Lianzhi Technology Co Ltd
Priority to CN202210473238.0A
Publication of CN114862788A

Classifications

    • G06T7/0002: Image analysis; inspection of images, e.g. flaw detection
    • G06T7/13: Segmentation; edge detection
    • G06T7/136: Segmentation; edge detection involving thresholding
    • G06T7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T2207/10028: Image acquisition modality; range image, depth image, 3D point clouds
    • G06T2207/20104: Interactive definition of region of interest [ROI]

Abstract

The invention provides an automatic identification method for the coordinates of planar targets in three-dimensional laser scanning, which comprises the following steps: acquiring point cloud data comprising the complete front pattern of each planar target; converting the point cloud data D into a spherical coordinate system; calculating the maximum horizontal/vertical laser angle range α_i subtended, perpendicular to the laser direction, by a planar target at distance R̄_i (in meters) from the coordinate origin, together with the horizontal angular range ROI_Hi and the vertical angular range ROI_Vi of the target-detection region of interest (ROI); obtaining the target-detection ROI point cloud and projecting it, with reflectivity as gray value, into a gray image p_i under spherical coordinates, and obtaining the binarization critical threshold f′_i-(0.4-0.6) through iterative processing; calculating the corner value C_i-p and its corresponding horizontal angle value H_c-i and vertical angle value V_c-i; obtaining the plane parameters of the target plane E_i in the rectangular coordinate system, E_i: a_i·x + b_i·y + c_i·z + d_i = 0; and solving the plane parameters E_i simultaneously with the angle values H_c-i and V_c-i to obtain the coordinate of the target center in the rectangular coordinate system, C_i = [x_i, y_i, z_i].

Description

Automatic identification method for plane target coordinates of three-dimensional laser scanning
Technical Field
The invention relates to the technical field of engineering measurement, in particular to an automatic identification method for the coordinates of planar targets in three-dimensional laser scanning.
Background
Three-dimensional laser scanning technology is applied in many measurement projects that require high precision, such as cultural relic scanning and modeling, topographic survey, earthwork measurement and tunnel section detection, and its application fields are wide. However, a fixed three-dimensional laser scanner cannot move while working (scanning while moving greatly reduces precision); only after one area has been scanned can the station be moved to scan the next area, and common targets are arranged between adjacent stations to splice the multi-station point clouds. When the coordinates of the scanned object are to be introduced into a construction coordinate system or another geographic coordinate system, the planar target serves as the bridge for converting between the scanner coordinate system and the other coordinate system, so the precision and reliability of identifying the center coordinates of planar targets in the point cloud (hereinafter, target identification) greatly affect the precision of the whole project.
Existing planar-target identification in point clouds is mainly performed with the point cloud processing software supplied with the scanner; it requires manual intervention, and the identification results vary. For example, the dedicated point cloud processing software Geomagic Studio and RiSCAN PRO, the software supplied with Riegl scanners, can only identify sphere targets and have no planar-target identification function. SCENE, the software supplied with Faro scanners, can identify a planar target after it is manually clicked in the point cloud with the mouse, but the identification performance is very poor: in practice only about half of the planar targets can be identified. Z+F Laser Control, the software supplied with Z+F scanners, also identifies a planar target after a manual mouse click, and its identification performance is good. TBC, the software supplied with Trimble scanners, has no planar-target identification function; RealWorks, the professional processing software of the same company, requires manual fitting after the points are box-selected in the three-dimensional point cloud, its identification performance is poor, and its operation steps are complex. Moreover, all these commercial identification methods require purchasing the corresponding scanner to obtain the software license, one instrument comes with only one dongle and cannot run on multiple terminals, the point cloud scanned by one instrument can only be identified by that instrument's own software, and the cost is high.
Therefore, many enterprises choose to develop their own target identification methods. In the prior art, these identification methods still have the following problems:
1) the target identification accuracy is too low because the incidence angle on the target is unpredictable (as in patent CN105423915A);
2) because of the varying postures of the target, the spacing between points can exceed the scanning resolution, causing grid point cloud loss and gray-image distortion (as in the technical schemes disclosed in CN106447715B and CN104007444A);
3) the actual procedure is complicated and requires a fixed geometric relationship between the intersection point of three planes and the target base; even slight vibration changes this relationship, so the planar target coordinates cannot be identified, and a construction coordinate system cannot be introduced, meaning the method can only be used in scanning environments without construction or surveyed control coordinates (as in the technical schemes disclosed in CN109323656A and CN105447855A).
Disclosure of Invention
The invention provides an automatic identification method for the coordinates of planar targets in three-dimensional laser scanning, which comprises the following steps:
step one, obtaining point cloud data containing the complete front pattern of each planar target, recording the minimum resolution angle value θ, and exporting point cloud data D containing point cloud reflectivity information when exporting the data;
step two, converting the point cloud data D into a spherical coordinate system; the point cloud data in the spherical coordinate system is recorded as D_spherical = [H V R f], wherein H is the horizontal direction angle of the point cloud data in the spherical coordinate system, V is the vertical direction angle of the point cloud data in the spherical coordinate system, R is the distance of the point cloud data from the origin in the spherical coordinate system, and f is the reflectivity of the point cloud data;
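The conversion in step two can be sketched as follows. This is a minimal NumPy sketch, not the patent's implementation: the function name, the angle conventions (horizontal angle measured from the x-axis, vertical angle above the horizon, both in degrees) and the input layout are assumptions.

```python
import numpy as np

def to_spherical(xyz, f):
    """Convert Cartesian scanner points (m) plus reflectivity f to
    D_spherical = [H V R f] with angles in degrees (illustrative
    conventions, not taken from the patent)."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    R = np.sqrt(x**2 + y**2 + z**2)      # distance from the scanner origin
    H = np.degrees(np.arctan2(y, x))     # horizontal direction angle
    V = np.degrees(np.arcsin(z / R))     # vertical angle above the horizon
    return np.column_stack([H, V, R, f])

pts = np.array([[1.0, 1.0, 0.0], [0.0, 3.0, 4.0]])
refl = np.array([0.55, 0.72])
D = to_spherical(pts, refl)
```

Each row of D then corresponds to one scanned point in the [H V R f] layout the patent uses.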
step three, according to the point cloud data D_spherical in the spherical coordinate system, calculating the maximum horizontal/vertical laser angle range α_i subtended, perpendicular to the laser direction, by a planar target at distance R̄_i (in meters) from the coordinate origin, the horizontal angular range ROI_Hi of the target-detection region of interest (ROI), and the vertical angular range ROI_Vi of the target-detection ROI;
step four, taking the center (H̄_i, V̄_i) of the target-detection ROI as the base point, extracting the point cloud data within the corresponding horizontal and vertical angular ranges, obtaining the gray image p_i of the target-detection ROI point cloud projected, with reflectivity as gray value, under spherical coordinates, and taking the threshold obtained by iterative processing of the binarization process as the critical threshold f′_i-(0.4-0.6);
step five, binarizing the gray image p_i and then performing corner detection to obtain the corner detection values P_i; computing the standard deviation of the corner detection values P_i, deleting the detections that deviate by more than one standard deviation, and averaging the rest to obtain the corner value C_i-p; then, from the coordinates of C_i-p in the gray image p_i and the minimum resolution angle value θ, inversely calculating through the geometric relation the horizontal angle value H_c-i and the vertical angle value V_c-i corresponding to the corner value C_i-p;
step six, performing plane fitting on the points of D_spherical-i-(0.4-0.6) whose reflectivity is greater than the gray-image binarization threshold f_i-(0.4-0.6), obtaining the plane parameters of the target plane E_i in the rectangular coordinate system, E_i: a_i·x + b_i·y + c_i·z + d_i = 0, wherein a_i, b_i, c_i, d_i are the parameters of the plane E_i, and x, y, z are the plane coordinates in the rectangular coordinate system;
step seven, solving the plane parameters E_i simultaneously with the horizontal angle value H_c-i and the vertical angle value V_c-i to obtain the coordinate of the target center in the rectangular coordinate system, C_i = [x_i, y_i, z_i].
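Step seven amounts to intersecting the ray defined by the corner angles with the fitted plane. A minimal sketch, assuming the same angle conventions as the spherical conversion above (the direction-vector formula and function name are illustrative, not from the patent):

```python
import numpy as np

def target_center(plane, H_deg, V_deg):
    """Intersect the ray from the origin through direction angles (H, V)
    with the plane a*x + b*y + c*z + d = 0; returns [x, y, z]."""
    a, b, c, d = plane
    H, V = np.radians(H_deg), np.radians(V_deg)
    u = np.array([np.cos(V) * np.cos(H),
                  np.cos(V) * np.sin(H),
                  np.sin(V)])                   # unit direction of the ray
    t = -d / (a * u[0] + b * u[1] + c * u[2])   # range along the ray
    return t * u

# plane x = 5 (a=1, b=0, c=0, d=-5), ray looking straight along +x:
center = target_center((1.0, 0.0, 0.0, -5.0), 0.0, 0.0)
```

Substituting the ray x = t·u into the plane equation gives a·t·u_x + b·t·u_y + c·t·u_z + d = 0, hence t = -d / (a·u_x + b·u_y + c·u_z), which is the simultaneous solution the step describes.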
Optionally, the specific method for obtaining the point cloud data containing the complete front pattern of each planar target in step one is as follows:
ensure that the n planar targets are fine-scanned with each target near the center of the fine-scan region of the three-dimensional laser scanner, so as to obtain point cloud data containing the complete front pattern of each planar target.
Optionally, the maximum horizontal/vertical laser angle range α_i subtended, perpendicular to the laser direction, by a planar target at distance R̄_i (in meters) from the coordinate origin is recorded as:

α_i = 2·arctan(d / (2·R̄_i));

the horizontal angular range ROI_Hi of the target-detection ROI is recorded as:

ROI_Hi = [H̄_i - α_i, H̄_i + α_i];

and the vertical angular range ROI_Vi of the target-detection ROI is recorded as:

ROI_Vi = [V̄_i - α_i, V̄_i + α_i];

wherein d is the side length or diameter of the planar target used for the scan, i = 1, 2, … n, H̄_i is the average horizontal direction angle of the point cloud data of the i-th target in the spherical coordinate system, V̄_i is the average vertical direction angle of the point cloud data of the i-th target in the spherical coordinate system, and R̄_i is the average distance of the point cloud data from the origin in the spherical coordinate system.
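The ROI computation above can be sketched in Python. Note that the original formula images are lost from the source, so the subtended-angle expression and the centering of the ROI on the mean direction are a plausible reading, not the patent's verified formulas; the function name is also illustrative.

```python
import numpy as np

def roi_ranges(d, H_mean, V_mean, R_mean):
    """Angular ROI about the target's mean direction.
    alpha is the full angle (deg) subtended by a target of side/diameter
    d (m) at range R_mean (m) -- a reconstruction of the patent's
    unrecoverable formula image."""
    alpha = np.degrees(2.0 * np.arctan(d / (2.0 * R_mean)))
    roi_h = (H_mean - alpha, H_mean + alpha)
    roi_v = (V_mean - alpha, V_mean + alpha)
    return alpha, roi_h, roi_v

# 0.3 m target seen from 10 m away:
alpha, roi_h, roi_v = roi_ranges(d=0.3, H_mean=120.0, V_mean=5.0, R_mean=10.0)
```

A 0.3 m target at 10 m subtends roughly 1.7°, so the ROI is a small angular window around the mean direction rather than the whole scan.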
Optionally, the center (H̄_i, V̄_i) of the target-detection region of interest ROI in step four is obtained as follows:
1) obtain the point cloud data D_spherical-i of the i-th target from the spherical-coordinate data D_spherical;
2) take the average values (H̄_i, V̄_i) of the parameters of the point cloud data D_spherical-i as the center of the target-detection region of interest ROI.
Optionally, the method for obtaining the target-detection ROI point cloud and projecting it, with reflectivity as gray value, into the gray image p_i under spherical coordinates is:
taking the ROI center (H̄_i, V̄_i) as the base point, extract the point cloud data D_spherical-i-(0.7-0.9) = [H_i-(0.7-0.9) V_i-(0.7-0.9) R_i-(0.7-0.9) f_i-(0.7-0.9)] within the ranges [H̄_i ± (0.7-0.9)·α_i] and [V̄_i ± (0.7-0.9)·α_i]; taking the minimum resolution angle value θ as the grid spacing and the extrema of H_i-(0.7-0.9) and V_i-(0.7-0.9) as the maximum and minimum of the X-axis/Y-axis of the grid, generate the grid, and linearly interpolate the grid with the corresponding f_i-(0.7-0.9) as function values to obtain the gray image p_i of the target-detection ROI point cloud projected under spherical coordinates with reflectivity as gray value.
Optionally (preferred), the method for obtaining the target-detection ROI point cloud and projecting it, with reflectivity as gray value, into the gray image p_i under spherical coordinates is:
taking the ROI center (H̄_i, V̄_i) as the base point, extract the point cloud data D_spherical-i-0.85 = [H_i-0.85 V_i-0.85 R_i-0.85 f_i-0.85] within the ranges [H̄_i ± 0.85·α_i] and [V̄_i ± 0.85·α_i]; taking the minimum resolution angle value θ as the grid spacing and the extrema of H_i-0.85 and V_i-0.85 as the maximum and minimum of the X-axis/Y-axis of the grid, generate the grid, and linearly interpolate it with the corresponding f_i-0.85 as function values to obtain the gray image p_i of the target-detection ROI point cloud projected under spherical coordinates with reflectivity as gray value.
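The gridding-and-interpolation step can be sketched with SciPy's scattered-data interpolation. This is a sketch under assumptions: the function name is illustrative, the synthetic two-tone "target" stands in for real reflectivity data, and cells outside the data's convex hull are simply zero-filled.

```python
import numpy as np
from scipy.interpolate import griddata

def roi_gray_image(H, V, f, theta):
    """Project ROI points (H, V in degrees, reflectivity f) onto a regular
    angular grid of spacing theta and linearly interpolate f as gray
    values, as in the patent's gray-image construction."""
    gh = np.arange(H.min(), H.max() + theta, theta)
    gv = np.arange(V.min(), V.max() + theta, theta)
    GH, GV = np.meshgrid(gh, gv)
    p = griddata((H, V), f, (GH, GV), method='linear')
    return np.nan_to_num(p)   # cells outside the convex hull -> 0

rng = np.random.default_rng(0)
H = rng.uniform(0.0, 1.0, 200)
V = rng.uniform(0.0, 1.0, 200)
f = 0.2 + 0.6 * (H > 0.5)     # synthetic two-tone "target" reflectivity
img = roi_gray_image(H, V, f, theta=0.05)
```

Because linear interpolation is a convex combination of neighboring samples, the resulting gray values never leave the range of the input reflectivities, which is why the patent warns against nearest-neighbor interpolation only for quality, not range, reasons.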
Optionally, the method of taking the threshold obtained from the binarization process as the critical threshold is:
(1) taking the ROI center (H̄_i, V̄_i) as the base point, extract the point cloud data D_spherical-i-(0.4-0.6) = [H_i-(0.4-0.6) V_i-(0.4-0.6) R_i-(0.4-0.6) f_i-(0.4-0.6)] within the ranges [H̄_i ± (0.4-0.6)·α_i] and [V̄_i ± (0.4-0.6)·α_i]; take the average value f̄_i-(0.4-0.6) of f_i-(0.4-0.6); within the range [f̄_i-(0.4-0.6) - s·j, f̄_i-(0.4-0.6) + s·j], take 2j+1 equally spaced values (s being the spacing) as thresholds and binarize the gray image p_i with each of them; after deleting, in each binarized image, the spots whose area is less than 0.1 times the area of the gray image p_i, record the number of connected components of each binarized image, and count the total number N of binarized images whose number of connected components equals 1; the gray-image binarization threshold f_i-(0.4-0.6) is then recorded as the mean of the thresholds whose binarized image has exactly one connected component, wherein j = (1, 2, … n);
(2) when N = 0 or N = 2j+1, step (1) needs to be repeated; when N ∈ [1, 2j], the threshold obtained from the binarization process is taken as the critical threshold f′_i-(0.4-0.6).
Optionally (preferred), the method of taking the threshold obtained from the binarization process as the critical threshold is specifically:
① taking the ROI center (H̄_i, V̄_i) as the base point, extract the point cloud data D_spherical-i-0.5 = [H_i-0.5 V_i-0.5 R_i-0.5 f_i-0.5] within the ranges [H̄_i ± 0.5·α_i] and [V̄_i ± 0.5·α_i]; take the average value f̄_i-0.5 of f_i-0.5; within the range [f̄_i-0.5 - s·j, f̄_i-0.5 + s·j], take 2j+1 equally spaced values (s being the spacing) as thresholds and binarize the gray image p_i with each of them; after deleting, in each binarized image, the spots whose area is less than 0.1 times the area of the gray image p_i, record the number of connected components of each binarized image, and count the total number N of binarized images whose number of connected components equals 1; the gray-image binarization threshold f_i-0.5 is then recorded as the mean of the thresholds whose binarized image has exactly one connected component;
② when N = 0 or N = 2j+1, step ① is repeated; when N ∈ [1, 2j], the threshold obtained from the binarization process is taken as the critical threshold f′_i-0.5.
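The iterative critical-threshold search of steps ①-② can be sketched as follows. Several details are assumptions where the patent's formula image is unrecoverable: the acceptance rule (mean of the thresholds that leave exactly one connected component) and the re-centering policy on a failed pass are illustrative, and scipy.ndimage.label stands in for the connected-component count.

```python
import numpy as np
from scipy.ndimage import label

def critical_threshold(p, f_mean, s=0.01, j=15, max_iter=20):
    """Try 2j+1 equally spaced thresholds in [center - s*j, center + s*j],
    keep only blobs >= 0.1x the image area, and accept the mean of the
    thresholds yielding exactly one connected component (reconstruction
    of the patent's lost formula)."""
    center = f_mean
    min_area = 0.1 * p.size
    for _ in range(max_iter):
        thresholds = center + s * np.arange(-j, j + 1)
        good = []
        for t in thresholds:
            labels, _ = label(p > t)
            sizes = np.bincount(labels.ravel())[1:]   # spot areas, no background
            if int(np.sum(sizes >= min_area)) == 1:   # exactly one big spot
                good.append(t)
        N = len(good)
        if 1 <= N <= 2 * j:
            return float(np.mean(good))
        center = center + s   # illustrative re-centering; the patent only says "repeat"
    return center

img = np.zeros((40, 40))
img[5:20, 5:20] = 0.9      # bright target spot
img[25:38, 25:38] = 0.4    # dimmer confusing spot
thr = critical_threshold(img, f_mean=img.mean())
```

On this synthetic image the accepted threshold lands between the two spot intensities, so binarizing with it isolates the bright spot alone.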
Optionally, the method for obtaining the 2j+1 groups of corner detection values P_i is:
within the range [f′_i-0.5 - s·j, f′_i-0.5 + s·j], take 2j+1 values as thresholds and binarize the gray image p_i with each of them; delete the spots whose area is less than 0.1 times the area of the gray image p_i in each binarized image; then perform corner detection on each binarized image with an image sub-pixel corner detection method to obtain the 2j+1 groups of corner detection values P_i, i.e. the initial corner values of the gray image p_i.
Optionally, the specific method for obtaining the plane parameters of the target plane E_i in the rectangular coordinate system is:
extract the points of D_spherical-i-0.5 whose reflectivity is greater than the gray-image binarization threshold f_i-0.5, and perform plane fitting on these reflectivity points with a stable eigenvalue-based plane fitting method to obtain the plane parameters of the target plane E_i in the rectangular coordinate system.
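The core of an eigenvalue-based plane fit is taking the eigenvector of the point covariance with the smallest eigenvalue as the plane normal. The sketch below shows that core only; the robustness weighting of the "stable" method the patent cites is omitted, and the function name is illustrative.

```python
import numpy as np

def fit_plane_eig(points):
    """Fit a*x + b*y + c*z + d = 0 to an Nx3 point array: the plane
    normal is the covariance eigenvector with the smallest eigenvalue
    (plain eigenvalue fit, without the cited method's robust weighting)."""
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = eigvecs[:, 0]                   # direction of least variance
    a, b, c = normal
    d = -normal @ centroid
    return a, b, c, d

rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(-1, 1, 100),
                       rng.uniform(-1, 1, 100),
                       np.full(100, 2.0)])   # points lying on the plane z = 2
a, b, c, d = fit_plane_eig(pts)
```

For exactly planar input the smallest eigenvalue is zero and the recovered plane is exact (up to the sign of the normal); on noisy scans the fit minimizes the sum of squared orthogonal distances.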
Compared with the prior art, the invention has the following beneficial effects:
This high-precision automatic measuring method for planar targets in three-dimensional laser scanning combines a multi-iteration gray-image binarization critical threshold with outlier removal over iterative multi-threshold corner detection results, which greatly improves detection robustness: target coordinates can still be identified stably when the target reflectivity is unstable due to harsh conditions such as dust deposited on the planar target, high humidity, or heavy dust in the scanning environment.
In addition to the objects, features and advantages described above, the invention has other objects, features and advantages, which will be described in further detail below with reference to the drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a schematic flow chart of a method for automatically identifying coordinates of a planar target by three-dimensional laser scanning according to an embodiment of the present invention;
FIG. 2 is a schematic view of a fine scan region during target scanning according to an embodiment of the present invention;
FIG. 3(a) is a gray scale image of a first measurement of point cloud data of target A1 according to an embodiment of the present invention;
FIG. 3(b) is a schematic view of the first measurement identification of the point cloud data of target A1 according to an embodiment of the present invention;
FIG. 4(a) is a gray scale image of a first measurement of point cloud data of target A2 according to an embodiment of the present invention;
FIG. 4(b) is a schematic view of the first measurement identification of the point cloud data of target A2 according to an embodiment of the present invention;
FIG. 5(a) is a gray scale image of a first measurement of target A3 point cloud data in accordance with an embodiment of the present invention;
fig. 5(b) is a schematic diagram of the first measurement and identification of the point cloud data of target a3 according to the embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention more clearly understandable, embodiments of the invention are described in detail below with reference to the accompanying drawings. It should be noted that the drawings are simplified and not to precise scale, and are provided only to assist in describing the embodiments conveniently and clearly; the references in this disclosure are not limited to the particular numbers in the examples of the figures; and the directions or positional relationships indicated by "front", "middle", "rear", "left", "right", "upper", "lower", "top", "bottom", etc. are based on the directions or positional relationships shown in the drawings, do not indicate or imply that the devices or components referred to must have a specific orientation, and should not be construed as limiting the invention.
In this embodiment:
Taking the measurement of the coordinates of targets A1, A2 and A3 as an example, and referring to fig. 1 to 5, the automatic identification method for the coordinates of planar targets in three-dimensional laser scanning comprises the following steps:
Step one: manually box-select the 3 laid planar targets A1, A2 and A3. The three targets differ in placement posture and in distance from the scanner, and their patterns differ: target A1 is a square-pattern target, while targets A2 and A3 are triangular-pattern targets. Set the scanning resolution to the highest and scan, avoiding occlusion of the targets during scanning. From the instrument specification, the minimum angle value θ at the highest point cloud resolution is 0.0018°. Obtain point cloud data containing the complete front pattern of each planar target, and when selecting the fine-scan region ensure that the target is approximately in its central area; as shown in fig. 2, with the target on the wall and the fine-scan region framed, the black rectangular area is the framed fine-scan region and the target is approximately at its center. When exporting the data, export point cloud data D containing point cloud reflectivity information.
Step two: convert the point cloud data D into the spherical coordinate system; the spherical-coordinate point cloud data D_spherical is recorded as:
D_spherical = [H V R f];
H = [H_1 … H_m];
V = [V_1 … V_m];
R = [R_1 … R_m];
f = [f_1 … f_m];
wherein H is the horizontal direction angle of the point cloud data in the spherical coordinate system, V is the vertical direction angle, R is the distance of the point cloud data from the origin, f is the reflectivity of the point cloud data, and 1 … m indexes the points from 1 to m.
Step three: obtain the point cloud data D_spherical-i of the i-th target from the spherical-coordinate data D_spherical, and take the averages (H̄_i, V̄_i) of its parameters as the center point of the target-detection region of interest ROI. The maximum horizontal/vertical laser angle range α_i subtended, perpendicular to the laser direction, by a planar target at distance R̄_i (in meters) from the coordinate origin is recorded as:

α_i = 2·arctan(d / (2·R̄_i));

the horizontal angular range ROI_Hi of the target-detection ROI is recorded as:

ROI_Hi = [H̄_i - α_i, H̄_i + α_i];

and the vertical angular range ROI_Vi of the target-detection ROI is recorded as:

ROI_Vi = [V̄_i - α_i, V̄_i + α_i];

wherein d is the side length or diameter of the planar target used for the scan, i = 1, 2, … n, H̄_i is the average horizontal direction angle of the point cloud data of the i-th target in the spherical coordinate system, and V̄_i is the average vertical direction angle of the point cloud data of the i-th target in the spherical coordinate system.
Step four: taking the ROI center (H̄_i, V̄_i) as the base point, extract the point cloud data D_spherical-i-(0.7-0.9) = [H_i-(0.7-0.9) V_i-(0.7-0.9) R_i-(0.7-0.9) f_i-(0.7-0.9)] within the ranges [H̄_i ± (0.7-0.9)·α_i] and [V̄_i ± (0.7-0.9)·α_i]; taking the minimum resolution angle value θ as the grid spacing and the extrema of H_i-(0.7-0.9) and V_i-(0.7-0.9) as the maximum and minimum of the X-axis/Y-axis of the grid, generate the grid and linearly interpolate it with the corresponding f_i-(0.7-0.9) as function values to obtain the gray image p_i of the target-detection ROI point cloud projected under spherical coordinates with reflectivity as gray value. Preferred here: taking the ROI center (H̄_i, V̄_i) as the base point, extract the point cloud data D_spherical-i-0.85 = [H_i-0.85 V_i-0.85 R_i-0.85 f_i-0.85] within the ranges [H̄_i ± 0.85·α_i] and [V̄_i ± 0.85·α_i]; taking the minimum resolution angle value θ as the grid spacing and the extrema of H_i-0.85 and V_i-0.85 as the maximum and minimum of the X-axis/Y-axis of the grid, generate the grid and interpolate it with the corresponding f_i-0.85 as function values (preferably by the cubic interpolation method, not the nearest-neighbor interpolation method) to obtain the gray image p_i of the target-detection ROI point cloud projected under spherical coordinates with reflectivity as gray value.
Step five: taking the ROI center (H̄_i, V̄_i) as the base point, extract the point cloud data D_spherical-i-(0.4-0.6) = [H_i-(0.4-0.6) V_i-(0.4-0.6) R_i-(0.4-0.6) f_i-(0.4-0.6)] within the ranges [H̄_i ± (0.4-0.6)·α_i] and [V̄_i ± (0.4-0.6)·α_i]; take the average value f̄_i-(0.4-0.6) of f_i-(0.4-0.6); within the range [f̄_i-(0.4-0.6) - s·j, f̄_i-(0.4-0.6) + s·j], take 2j+1 equally spaced values (s being the spacing) as thresholds and binarize the gray image p_i with each of them; after deleting, in each binarized image, the spots whose area is less than 0.1 times the area of the gray image p_i, record the number of connected components of each binarized image, and count the total number N of binarized images whose number of connected components equals 1; the gray-image binarization threshold f_i-(0.4-0.6) is recorded as the mean of the thresholds whose binarized image has exactly one connected component, wherein j = (1, 2, … n). Preferred here: taking the ROI center (H̄_i, V̄_i) as the base point, extract the point cloud data D_spherical-i-0.5 = [H_i-0.5 V_i-0.5 R_i-0.5 f_i-0.5] within the ranges [H̄_i ± 0.5·α_i] and [V̄_i ± 0.5·α_i]; take the average value f̄_i-0.5 of f_i-0.5; within the range [f̄_i-0.5 - s·j, f̄_i-0.5 + s·j], take 2j+1 equally spaced values as thresholds, binarize the gray image p_i with each of them, delete the spots whose area is less than 0.1 times the area of the gray image p_i in each binarized image, record the number of connected components of each binarized image, and count the total number N of binarized images whose number of connected components equals 1; the gray-image binarization threshold f_i-0.5 is recorded as the mean of the thresholds whose binarized image has exactly one connected component.
Step six: when N = 0 or N = 2j+1, repeat step five; when N ∈ [1, 2j], take the obtained binarization threshold as the critical threshold f′_i-0.5. Specifically, within the threshold range [f̄_i-0.5 - s·j, f̄_i-0.5 + s·j], values are taken at equal intervals and each used as a binarization threshold for the gray image p_i, where s is the interval (i.e. the step length) of the equally spaced values, 2sj is the total width of the threshold range (the total step length), and 2j+1 is the total number of values. Taking s = 0.01 as an example, with the total width 2sj of the threshold range taken as 0.3, then 2j+1 = 31. The value of N must be computed iteratively through steps five and six until N ∈ [1, 2j], at which point iteration stops and the critical threshold f′_i-0.5 is obtained. To increase the calculation rate, the fewer iterations the better; to determine the values, before using the method in practice, several targets can be scanned for experiments, iterating steps five and six with the total step length selected in the range 2sj ∈ (0.1, 0.5) and the step length in the range s ∈ (0.005, 0.015), ending when most experimental targets need only one iteration, and choosing the total step length 2sj and step length s with 2sj as small as possible.
Preference is given here as follows: through experiments, 2sj is preferably 0.3 and s is preferably 0.01, with which the iteration finishes in a single pass for most instrument-and-target combinations. Then, in each binarized image, the pattern spots whose area is smaller than 0.1 times the area of the gray-level image p_i are deleted (this step removes target noise: the target pattern consists of four equal-area triangles or squares, two black and two white, each occupying 0.25 times the total pattern area; deleting spots with too large an area threshold would remove the pattern itself, while too small a threshold would fail to remove the noise, and a 0.1-times area threshold removes the noise effectively).
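The iterative threshold search of steps five and six can be sketched as follows. This is an illustrative reading, not the patent's code: the function and parameter names (`critical_threshold`, `min_blob_frac`) are assumptions, scipy's connected-component labelling stands in for whatever the authors used, and the termination rule follows the N ∈ [1, 2j] condition described above.

```python
import numpy as np
from scipy import ndimage

def critical_threshold(gray, f_mean, s=0.01, j=15, min_blob_frac=0.1):
    """Scan 2j+1 equally spaced thresholds centred on the mean
    reflectivity f_mean; for each, binarize, drop blobs smaller than
    min_blob_frac of the image area, and count the thresholds whose
    cleaned image has exactly one connected component."""
    thresholds = f_mean + s * np.arange(-j, j + 1)      # 2j+1 values, spacing s
    single = []
    for t in thresholds:
        binary = gray > t
        labels, n = ndimage.label(binary)
        if n == 0:
            continue
        # remove speckle: components smaller than 0.1x the image area
        sizes = ndimage.sum(binary, labels, np.arange(1, n + 1))
        big = 1 + np.flatnonzero(sizes >= min_blob_frac * gray.size)
        _, n_clean = ndimage.label(np.isin(labels, big))
        if n_clean == 1:
            single.append(t)
    N = len(single)
    if 1 <= N <= 2 * j:          # N in [1, 2j]: the iteration terminates
        return float(np.mean(single))
    return None                  # corresponds to "repeat step five" in step six
```

A `None` return corresponds to the repeat branch of step six: the caller would adjust the total step length 2sj or the spacing s and search again.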
Step seven: taking 21 values in the range [f'_{i-0.5} − sj, f'_{i-0.5} + sj] as thresholds, the gray-level image p_i is binarized with each; the pattern spots whose area is smaller than 0.1 times the area of the gray-level image p_i are deleted from each binarized image; corner detection is then performed on each binarized image by an image sub-pixel corner detection method to obtain 21 groups of corner detection values P_i, i.e., the initial corner values of the gray-level image p_i;
Step eight: calculating the standard deviation of the corner detection values P_i, deleting the detections that deviate by more than one standard deviation and averaging the remaining detections to obtain the corner value C_{i-p}; then, from the geometric relation between the coordinates of the corner value C_{i-p} in the gray-level image p_i and the minimum resolution angle value θ, inversely calculating the horizontal angle value H_{c-i} and the vertical angle value V_{c-i} corresponding to the corner value C_{i-p};
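One plausible reading of step eight's outlier rejection is sketched below. The exact rejection rule is not spelled out in the text; here it is interpreted as discarding any detection whose distance from the mean position exceeds the mean distance plus one standard deviation, and the function name is illustrative.

```python
import numpy as np

def robust_corner(detections):
    """Average the per-threshold corner detections after a 1-sigma
    outlier rejection (an interpretive assumption, not the patent's
    exact formula)."""
    pts = np.asarray(detections, dtype=float)           # shape (k, 2), pixel coords
    d = np.linalg.norm(pts - pts.mean(axis=0), axis=1)  # deviation of each detection
    keep = d <= d.mean() + d.std()                      # drop detections beyond 1 sigma
    return pts[keep].mean(axis=0)                       # corner value C_i-p
```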
Step nine: extracting the points in D_{spherical-i-0.5} whose reflectivity is greater than the gray-level image binarization-process threshold f_{i-0.5}, and performing plane fitting on the obtained reflectivity points by the stable eigenvalue plane-fitting method (for the concrete method, reference may be made to the prior art: Guan Yulan, Cheng Xiang, Shi Gui, A stable point cloud data plane fitting method [J], Journal of Tongji University (Natural Science Edition), 2008(07): 981-) to obtain the plane parameters of the target plane E_i in the rectangular coordinate system:
E_i: a_i·x + b_i·y + c_i·z + d_i = 0;
wherein: a_i, b_i, c_i and d_i are the plane parameters of the plane E_i, and x, y and z are the coordinates of the plane in the rectangular coordinate system;
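The core of the cited stable eigenvalue method is an eigenvalue plane fit, a plain version of which is sketched below: the plane normal is the eigenvector of the centred covariance matrix with the smallest eigenvalue. The cited reference adds robust reweighting of outliers on top of this idea, which is omitted here; the function name is illustrative.

```python
import numpy as np

def fit_plane_eig(points):
    """Eigenvalue plane fit.  Returns (a, b, c, d) for the plane
    a*x + b*y + c*z + d = 0 through the point cloud."""
    P = np.asarray(points, dtype=float)
    centroid = P.mean(axis=0)
    cov = np.cov((P - centroid).T)      # 3x3 covariance of the centred cloud
    w, v = np.linalg.eigh(cov)          # eigenvalues in ascending order
    a, b, c = v[:, 0]                   # normal = eigenvector of smallest eigenvalue
    d = -np.dot(v[:, 0], centroid)      # plane passes through the centroid
    return a, b, c, d
```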
Step ten: solving simultaneously the plane parameters of the plane E_i in the rectangular coordinate system with the horizontal angle value H_{c-i} and the vertical angle value V_{c-i}, and calculating the coordinate C_i = [x_i, y_i, z_i] of the target center in the rectangular coordinate system;
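Geometrically, the simultaneous solution of step ten is a ray-plane intersection: the corner's angular position defines a ray from the scanner origin, and the fitted plane fixes the range along it. A minimal sketch follows; the spherical-angle convention assumed here (V measured from the +z axis, H from +x in the x-y plane) is an assumption, since a given scanner may define its angles differently.

```python
import numpy as np

def target_center(a, b, c, d, H, V):
    """Intersect the scanner ray through the angular position (H, V),
    in radians, with the fitted plane a*x + b*y + c*z + d = 0."""
    ray = np.array([np.sin(V) * np.cos(H),
                    np.sin(V) * np.sin(H),
                    np.cos(V)])                        # unit direction of the ray
    R = -d / (a * ray[0] + b * ray[1] + c * ray[2])    # range where ray meets plane
    return R * ray                                     # C_i = [x_i, y_i, z_i]
```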
Step eleven, repeating the steps 1 to 10, scanning the targets A1, A2 and A3 for five times respectively, and identifying, wherein the calculation time for identifying a single target is about 1.5s, the coordinate identification results of the first scanning of A1, A2 and A3 are respectively shown in the images in the figures 3, 4 and 5, the left images in the figures 3, 4 and 5 are the identification results of the corner points of the target plane, the coordinates of the identification results are marked by the number of x, the right images are the identification results of the three-dimensional center of the target, the coordinates of the identification results are marked by the number of x, and the identification results of the corner points of the target plane are better as shown in the identification results of the corner points of the target plane in the figures 3, 4 and 5, and the identification points are all in the center of the target pattern. Counting the coordinate value of each recognition and the relative distance between the centers of the targets among three targets, calculating the internal coincidence precision, and comparing the calculated internal coincidence precision with the target coordinate obtained by the function recognition of the manual recognition target coordinate in Z + F laser control commercial software, wherein the target coordinate is shown in tables 1 and 2:
Table 1. Comparison of target identification coordinate values and their internal coincidence precision (the table is reproduced as an image in the original publication).
Table 2. Target spacing measurements and comparison of their internal coincidence precision (the table is reproduced as an image in the original publication).
As can be seen from Tables 1 and 2 and Figures 3(a), 3(b), 4(a), 4(b), 5(a) and 5(b), compared with the prior art, the high-accuracy method for automatically measuring plane targets by three-dimensional laser scanning provided by the present invention has the following advantages:
(1) the center coordinates of plane targets in different poses relative to the scanner can be identified; success is not limited by the target pose, and reliability is high;
(2) the approach of obtaining the gray-level image binarization critical threshold through multiple iterations and differencing the iterative multi-threshold corner detection results greatly improves detection robustness; target coordinates can still be identified stably when the target reflectivity is unstable due to harsh conditions such as dust deposition on the plane target, or high humidity and heavy dust in the scanning environment;
(3) the identification of the plane target center coordinate is effective; the identification precision reaches the level of existing commercial software and is higher than that of most commercial software;
(4) the target identification method is not limited to a particular instrument; identification can be carried out provided the target point cloud data is exported in a point cloud file format;
(5) the gray-level image obtained by combining the point cloud in the spherical coordinate system with two-dimensional linear interpolation has high reliability and avoids gray-level image distortion;
(6) the plane target used by the method only requires the target pattern to be a common black-and-white cross pattern; it is simple to manufacture, and there is no rigid stipulation on its shape;
(7) the method has strong process reproducibility and strong universality.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (10)

1. A method for automatically identifying plane target coordinates of three-dimensional laser scanning, characterized by comprising the following steps:
step one: obtaining point cloud data containing the complete front pattern of each plane target, recording the minimum resolution angle value θ when exporting the data, and exporting point cloud data D containing point cloud reflectivity information;
step two: converting the point cloud data D into a spherical coordinate system; the point cloud data converted into the spherical coordinate system is recorded as D_spherical = [H V R f], wherein: H is the horizontal direction angle of the point cloud data in the spherical coordinate system, V is the vertical direction angle of the point cloud data in the spherical coordinate system, R is the distance of the point cloud data from the origin in the spherical coordinate system, and f is the reflectivity of the point cloud data;
step three: according to the mean values of the parameters of the point cloud data D_spherical in the spherical coordinate system, calculating the maximum horizontal/maximum vertical laser angle range α_i on a plane target perpendicular to the laser direction at a distance of R̄_i meters from the coordinate system origin, the horizontal angular range of the target detection region of interest ROI, and the vertical angular range of the target detection region of interest ROI (the symbols denoting these quantities are given as images in the original publication);
step four: taking the center of the target detection region of interest ROI as a base point, extracting the point cloud data within the horizontal and vertical angular ranges of the ROI, obtaining the gray-level image p_i of the target detection ROI point cloud projected under spherical coordinates with reflectivity as the gray value, and obtaining through iterative processing the binarization-process threshold as the critical threshold f'_{i-(0.4-0.6)};
Step five: for gray scale image p i Carrying out binarization processing and then carrying out angular point detection to obtain an angular point detection value P i And calculating the corner point detection value P i And deleting the corner point detection values larger than 1 time of the standard deviation and averaging the rest corner point detection values to obtain a corner point value C i-p Then according to the gray image p i Center point value C i-p The angular point value C is calculated by the inverse geometrical relation of the coordinate and the minimum resolution angular value theta i-p Corresponding horizontal angle value H c-i And vertical angle value V c-i
Step six: to D spherical-i-(0.4-0.6) The middle reflectivity is larger than the threshold f of the binarization process of the gray level image i-(0.4-0.6) Performing plane fitting on the reflectivity points to obtain a plane E of the target i Plane parameter E under rectangular coordinate system i :a i x+b i y+c i z+d i 0, wherein: a is i 、b i 、c i 、d i Are respectively a plane E i X, y and z are respectively coordinates of a plane under a rectangular coordinate system;
step seven: solving simultaneously the plane parameters E_i with the horizontal angle value H_{c-i} and the vertical angle value V_{c-i}, and calculating the coordinate C_i = [x_i, y_i, z_i] of the target center in the rectangular coordinate system.
2. The method for automatically identifying plane target coordinates of three-dimensional laser scanning according to claim 1, wherein the step one of obtaining point cloud data containing the complete front pattern of each plane target comprises:
scanning the n plane targets while ensuring that the three-dimensional laser scanner finely scans the central area of each target, so as to obtain point cloud data containing the complete front pattern of every plane target.
3. The method for automatically identifying plane target coordinates of three-dimensional laser scanning according to claim 1, wherein:
the maximum horizontal/maximum vertical laser angle range α_i on a plane target perpendicular to the laser direction at a distance of R̄_i meters from the coordinate system origin is recorded by a formula given as an image in the original publication;
the horizontal angular range of the target detection region of interest ROI is recorded by a formula given as an image in the original publication;
the vertical angular range of the target detection region of interest ROI is recorded by a formula given as an image in the original publication;
wherein: d is the side length or diameter of the plane target used for the scan; i = 1, 2, … n; H̄_i is the mean horizontal direction angle of the point cloud data of the i-th target in the spherical coordinate system; V̄_i is the mean vertical direction angle of the point cloud data of the i-th target in the spherical coordinate system; and R̄_i is the mean distance of the point cloud data from the origin in the spherical coordinate system.
4. The method for automatically identifying plane target coordinates of three-dimensional laser scanning according to claim 1, wherein the center of the target detection region of interest ROI in step four is obtained as follows:
1) obtaining the point cloud data D_{spherical-i} of the i-th target in the spherical coordinate system D_spherical;
2) taking the mean value of each parameter of the point cloud data D_{spherical-i} as the center of the target detection region of interest ROI.
5. The method for automatically identifying plane target coordinates of three-dimensional laser scanning according to claim 4, wherein obtaining the gray-level image p_i of the target detection ROI point cloud projected under spherical coordinates with reflectivity as the gray value comprises:
taking the center of the target detection region of interest ROI as a base point, extracting the point cloud data within the horizontal and vertical angular ranges of the ROI; taking the minimum resolution angle value θ as the grid spacing and the extrema of H_{i-(0.7-0.9)} and V_{i-(0.7-0.9)} as the maxima and minima of the grid X-axis/Y-axis to generate a grid; and performing linear interpolation on the grid with the f_{i-(0.7-0.9)} corresponding to H_{i-(0.7-0.9)} and V_{i-(0.7-0.9)} as function values, to obtain the gray-level image p_i of the target detection ROI point cloud projected under spherical coordinates with reflectivity as the gray value.
6. The method for automatically identifying plane target coordinates of three-dimensional laser scanning according to claim 5, wherein obtaining the gray-level image p_i of the target detection ROI point cloud projected under spherical coordinates with reflectivity as the gray value specifically comprises:
taking the center of the target detection region of interest ROI as a base point, extracting the point cloud data D_{spherical-i-0.85} = [H_{i-0.85} V_{i-0.85} R_{i-0.85} f_{i-0.85}] within the horizontal and vertical angular ranges of the ROI; taking the minimum resolution angle value θ as the grid spacing and the extrema of H_{i-0.85} and V_{i-0.85} as the maxima and minima of the grid X-axis/Y-axis to generate a grid; and performing linear interpolation on the grid with the f_{i-0.85} corresponding to H_{i-0.85} and V_{i-0.85} as function values, to obtain the gray-level image p_i of the target detection ROI point cloud projected under spherical coordinates with reflectivity as the gray value.
7. The method for automatically identifying plane target coordinates of three-dimensional laser scanning according to claim 6, wherein the method for obtaining the binarization-process threshold as the critical threshold comprises:
(1) taking the center of the target detection region of interest ROI as a base point, extracting the point cloud data within the horizontal and vertical angular ranges of the ROI; taking the average value f̄_{i-(0.4-0.6)} of f_{i-(0.4-0.6)}; taking 2j+1 equally spaced values in the range [f̄_{i-(0.4-0.6)} − sj, f̄_{i-(0.4-0.6)} + sj] as thresholds and binarizing the gray-level image p_i with each, s being the spacing of the values; after deleting, in each binarized image, the pattern spots whose area is smaller than 0.1 times the area of the gray-level image p_i, recording the number of connected components of each binarized image and counting the total number N of binarized images whose connected-component count equals 1; the threshold of the gray-level image binarization process f_{i-(0.4-0.6)} is recorded by a formula given as an image in the original publication; wherein: j = (1, 2, … n);
(2) when N ∉ [1, 2j], repeating step (1); when N ∈ [1, 2j], taking the obtained binarization-process threshold as the critical threshold f'_{i-(0.4-0.6)}.
8. The method for automatically identifying plane target coordinates of three-dimensional laser scanning according to claim 7, wherein the method for obtaining the binarization-process threshold as the critical threshold specifically comprises:
① taking the center of the target detection region of interest ROI as a base point, extracting the point cloud data D_{spherical-i-0.5} = [H_{i-0.5} V_{i-0.5} R_{i-0.5} f_{i-0.5}] within the horizontal and vertical angular ranges of the ROI; taking the average value f̄_{i-0.5} of f_{i-0.5}; taking 2j+1 equally spaced values in the range [f̄_{i-0.5} − sj, f̄_{i-0.5} + sj] as thresholds and binarizing the gray-level image p_i with each, s being the spacing of the values; after deleting, in each binarized image, the pattern spots whose area is smaller than 0.1 times the area of the gray-level image p_i, recording the number of connected components of each binarized image and counting the total number N of binarized images whose connected-component count equals 1; the threshold of the gray-level image binarization process f_{i-0.5} is recorded by a formula given as an image in the original publication;
② when N ∉ [1, 2j], repeating step ①; when N ∈ [1, 2j], taking the obtained binarization-process threshold as the critical threshold f'_{i-0.5}.
9. The method for automatically identifying plane target coordinates of three-dimensional laser scanning according to claim 8, wherein the 2j+1 groups of corner detection values P_i are obtained as follows:
taking 2j+1 values in the range [f'_{i-0.5} − sj, f'_{i-0.5} + sj] as thresholds, binarizing the gray-level image p_i with each; deleting, in each binarized image, the pattern spots whose area is smaller than 0.1 times the area of the gray-level image p_i; and performing corner detection on each binarized image by an image sub-pixel corner detection method to obtain 2j+1 groups of corner detection values P_i, i.e., the initial corner values of the gray-level image p_i.
10. The method for automatically identifying plane target coordinates of three-dimensional laser scanning according to claim 9, wherein the specific method for obtaining the plane parameters of the plane E_i of the target in the rectangular coordinate system comprises:
extracting the points in D_{spherical-i-0.5} whose reflectivity is greater than the gray-level image binarization-process threshold f_{i-0.5}, and performing plane fitting on the obtained reflectivity points by the stable eigenvalue plane-fitting method to obtain the plane parameters of the plane E_i of the target in the rectangular coordinate system.
CN202210473238.0A 2022-04-29 2022-04-29 Automatic identification method for plane target coordinates of three-dimensional laser scanning Pending CN114862788A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210473238.0A CN114862788A (en) 2022-04-29 2022-04-29 Automatic identification method for plane target coordinates of three-dimensional laser scanning


Publications (1)

Publication Number Publication Date
CN114862788A true CN114862788A (en) 2022-08-05

Family

ID=82634807

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210473238.0A Pending CN114862788A (en) 2022-04-29 2022-04-29 Automatic identification method for plane target coordinates of three-dimensional laser scanning

Country Status (1)

Country Link
CN (1) CN114862788A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105447855A (en) * 2015-11-13 2016-03-30 中国人民解放军空军装备研究院雷达与电子对抗研究所 Terrestrial 3D laser scanning point cloud spherical target automatic identification method
CN106447715A (en) * 2016-01-29 2017-02-22 北京建筑大学 Plane reflection target central point position extraction method for laser radar
CN112884902A (en) * 2021-03-17 2021-06-01 中山大学 Point cloud registration-oriented target ball position optimization method
WO2021208442A1 (en) * 2020-04-14 2021-10-21 广东博智林机器人有限公司 Three-dimensional scene reconstruction system and method, device, and storage medium
WO2021259151A1 (en) * 2020-06-24 2021-12-30 深圳市道通科技股份有限公司 Calibration method and apparatus for laser calibration system, and laser calibration system
CN114234832A (en) * 2021-12-21 2022-03-25 中国铁路设计集团有限公司 Tunnel monitoring and measuring method based on target identification


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Wu Chao; Yuan Yongbo; Zhang Mingyuan: "Planar target positioning based on reflection intensity and K-means clustering", Laser Technology, no. 03, 25 May 2015 (2015-05-25) *
Chen Junjie; Yan Weitao: "Research on extracting planar target center coordinates from laser point clouds", Geotechnical Investigation & Surveying, no. 08, 1 August 2013 (2013-08-01) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117495969A (en) * 2024-01-02 2024-02-02 中交四航工程研究院有限公司 Automatic point cloud orientation method, equipment and storage medium based on computer vision
CN117495969B (en) * 2024-01-02 2024-04-16 中交四航工程研究院有限公司 Automatic point cloud orientation method, equipment and storage medium based on computer vision

Similar Documents

Publication Publication Date Title
Remondino et al. A critical review of automated photogrammetric processing of large datasets
CN110163918B (en) Line structure cursor positioning method based on projective geometry
CN111598770B (en) Object detection method and device based on three-dimensional data and two-dimensional image
Herráez et al. 3D modeling by means of videogrammetry and laser scanners for reverse engineering
CN108804714A (en) Point cloud data storage method and device
CN111524168B (en) Point cloud data registration method, system and device and computer storage medium
CN113052881A (en) Automatic registration method for extracting pole point in indoor three-dimensional point cloud
CN110738273A (en) Image feature point matching method, device, equipment and storage medium
CN110047133A (en) A kind of train boundary extraction method towards point cloud data
CN111640158A (en) End-to-end camera based on corresponding mask and laser radar external reference calibration method
CN110780276A (en) Tray identification method and system based on laser radar and electronic equipment
CN112880562A (en) Method and system for measuring pose error of tail end of mechanical arm
CN114862788A (en) Automatic identification method for plane target coordinates of three-dimensional laser scanning
CN116452852A (en) Automatic generation method of high-precision vector map
CN112033385B (en) Pier pose measuring method based on mass point cloud data
CN116524109B (en) WebGL-based three-dimensional bridge visualization method and related equipment
Sumi et al. Multiple TLS point cloud registration based on point projection images
CN115186347B (en) Building CityGML modeling method combining house type plan view and inclination model
CN115908562A (en) Different-surface point cooperation marker and measuring method
CN115880371A (en) Method for positioning center of reflective target under infrared visual angle
CN111145201B (en) Steady and fast unmanned aerial vehicle photogrammetry mark detection and positioning method
CN111521125B (en) Method and system for detecting iron tower deformation based on point cloud data
CN112651427A (en) Image point fast and efficient matching method for wide-base-line optical intersection measurement
Chen et al. A novel artificial landmark for monocular global visual localization of indoor robots
Li et al. Computer-assisted archaeological line drawing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20231023

Address after: No.168, Section 2, Yanhe Road, Wangcheng economic and Technological Development Zone, Changsha City, Hunan Province

Applicant after: Hunan Lianzhi Technology Co.,Ltd.

Applicant after: CCCC Central South Engineering Bureau Co.,Ltd.

Address before: No.168, Section 2, Yanhe Road, Wangcheng economic and Technological Development Zone, Changsha City, Hunan Province

Applicant before: Hunan Lianzhi Technology Co.,Ltd.
