CN107203973B - Sub-pixel positioning method for center line laser of three-dimensional laser scanning system - Google Patents

Sub-pixel positioning method for center line laser of three-dimensional laser scanning system

Info

Publication number
CN107203973B
Authority
CN
China
Prior art keywords
pixel
image
points
target
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610829325.XA
Other languages
Chinese (zh)
Other versions
CN107203973A (en)
Inventor
马国军
赵彬
何康
胡颖
谢丽娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu University of Science and Technology
Original Assignee
Jiangsu University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu University of Science and Technology filed Critical Jiangsu University of Science and Technology
Priority to CN201610829325.XA priority Critical patent/CN107203973B/en
Publication of CN107203973A publication Critical patent/CN107203973A/en
Application granted granted Critical
Publication of CN107203973B publication Critical patent/CN107203973B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T5/70
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • G06T2207/20032Median filtering

Abstract

The invention discloses a sub-pixel positioning method for the line-laser center in a three-dimensional laser scanning system, which comprises the following steps: first, acquiring an original laser-scan image from a camera and preprocessing it to eliminate noise in the image; performing threshold segmentation on the denoised image to coarsely extract the line laser; removing burrs from the coarsely extracted line laser and further reducing its width; obtaining the sub-pixel coordinates of the line-laser center with an improved gravity-center method, and correcting pseudo target pixels in the sub-pixel coordinates with the Hough transform. By combining the improved gravity-center method with the Hough transform, the positioning method meets the requirement of a three-dimensional laser scanning system to acquire the sub-pixel coordinates of the line-laser center accurately and in real time, thereby achieving accurate sub-pixel positioning of the line-laser center.

Description

Sub-pixel positioning method for center line laser of three-dimensional laser scanning system
Technical Field
The invention relates to a sub-pixel positioning method of a line laser center in a three-dimensional laser scanning system.
Background
With the continuous development of machine vision, three-dimensional laser scanning is becoming a core driving force of the field. Owing to its ability to acquire three-dimensional data of the measured object's surface in real time, its high scanning precision and its non-contact nature, three-dimensional laser scanning has become one of the new research hotspots.
A three-dimensional laser scanning system generally comprises a line laser, a camera, a stepping motor and a rotary platform: the line laser emits a linear laser stripe, the camera captures images of the line laser on the surface of the measured object in real time, and the stepping motor and rotary platform acquire scan information of the object from multiple angles.
Three measurement principles are commonly used in three-dimensional laser scanning: pulse ranging, phase ranging and triangulation. The first two are simpler in principle but more demanding in hardware cost, whereas triangulation has low hardware cost but places higher demands on algorithm accuracy. The laser scanning system concerned by the invention is based on the triangulation principle: when the laser is projected onto the surface of the measured object, the surface information can be obtained by accurately locating the sub-pixel position of the line-laser center. Ideally the line laser is an infinitely thin line; in practice, however, interferences such as uneven illumination and variations in the surface properties of the measured object give the imaged line laser a certain width, so the quality of the line-laser center extraction directly affects the scanning result.
Traditional algorithms for extracting the line-laser center include the extreme-value method, the threshold method, the gravity-center method, the Hessian matrix method and curve fitting. The extreme-value method takes, in each row (or column) of the image, the pixel with the maximum gray value; it is fast but easily disturbed by noise. The threshold method sets a gray threshold, finds in each row (or column) the pixels whose gray values reach the threshold, and averages the corresponding coordinates to obtain the pixel coordinate of the center point. The gravity-center method is a commonly used algorithm for sub-pixel extraction of the line-laser center: it treats the gray value of each pixel as the mass of a particle, in analogy with an infinitesimal mass element in physics, and obtains the sub-pixel position of the line-laser center from the center-of-mass formula. The Hessian matrix method uses the Hessian matrix to obtain the normal direction of the line laser in the image and then finds the extreme point along that normal, which is the sub-pixel coordinate; every pixel of the image requires five Gaussian convolutions, so the result is accurate but the computation is heavy and slow, making the method unsuitable for a real-time scanning system. The curve-fitting method exploits the fact that the gray-value profile of the line laser is approximately Gaussian, as shown in fig. 2, with the center of the line laser at the intensity peak; the sub-pixel coordinate of the laser center is computed by fitting a curve, usually a Gaussian or a quadratic, and taking the maximum of the fitted curve as the line-laser center. Curve fitting reaches sub-pixel precision but involves a large amount of computation.
The extreme-value method, the threshold method and the gray gravity-center method are simple and fast and can run in real time, but are seriously affected by noise. The curve-fitting method and the Hessian matrix method reach sub-pixel precision, resist noise well and are robust, but their algorithmic complexity is high and their detection speed is low, so they cannot meet real-time requirements.
Having analyzed the existing line-laser center extraction algorithms and their respective advantages and disadvantages, the invention provides, on this basis, a method capable of accurately positioning the line-laser center.
Disclosure of Invention
The invention aims to provide a sub-pixel positioning method for the line-laser center in a three-dimensional laser scanning system which combines an improved gravity-center method with the Hough transform, meets the requirement of the three-dimensional laser scanning system to acquire the sub-pixel coordinates of the line-laser center accurately and in real time, and achieves accurate sub-pixel positioning of the line-laser center.
In order to achieve the above purpose, the solution of the invention is:
a sub-pixel positioning method for a line laser center in a three-dimensional laser scanning system comprises the following steps:
(1) acquiring an original laser-scan image from a camera and preprocessing the original image to eliminate noise in the image;
(2) performing threshold segmentation on the denoised image to coarsely extract the line laser;
(3) removing burrs from the coarsely extracted line laser and further reducing its width;
(4) obtaining the sub-pixel coordinates of the line-laser center with an improved gravity-center method;
(5) correcting pseudo target pixels in the sub-pixel coordinates with the Hough transform.
The step (1) specifically comprises the following steps:
(11) converting the three-channel original image acquired by the camera into a single-channel grayscale image;
(12) applying median filtering to the grayscale image to eliminate noise in the image.
In the step (12), a 3 × 3 filtering window is used for the median filtering of the grayscale image.
In the step (2), the optimal threshold for segmenting the target and the background in the image is calculated by using the maximum inter-class variance method.
The specific process of the threshold segmentation is as follows:
Let the gray value of each point of the original image be F(x, y), the total number of pixels be N = N_0 + N_1 + … + N_(L-1), and the number of gray levels be L. A threshold T divides the image into a target class O and a background class B. Let i denote a pixel gray value, let the number of pixels with gray value i be N_i, and let the corresponding probability be P_i; then:
P_i = N_i / N
The gray range of the background class is [0, T] and the gray range of the target class is [T+1, L-1]. Denoting the gray-level probability of the background class by P_B and that of the target class by P_O, there are:
P_B = Σ(i=0 to T) P_i
P_O = Σ(i=T+1 to L-1) P_i
P_B + P_O = 1
Denoting the mean gray value of the background-class pixels by μ_B and the mean gray value of the target-class pixels by μ_O, there are:
μ_B = Σ(i=0 to T) i·P_i / P_B
μ_O = Σ(i=T+1 to L-1) i·P_i / P_O
P_O·μ_O + P_B·μ_B = μ_t
The between-class variance is then expressed as:
σ² = P_B·(μ_B - μ_t)² + P_O·(μ_O - μ_t)²
which gives:
σ² = P_B·P_O·(μ_B - μ_O)²
Different values of T give different values of σ²; the T that maximizes σ² is the optimal threshold sought.
The specific steps of the step (3) are as follows: the Zhang parallel thinning algorithm divides the pixels of the image into target points and background points; denote the pixel value of any point in the image by p1, label its neighborhood p2-p9, and perform the following processing:
the first step is as follows: if the following four conditions are satisfied, deleting the point in the image;
A.2≤N(p1)≤6
B.Z0(p1)=1
C.p2*p4*p6=0
D.p4*p6*p8=0
where N(p1) is the number of non-zero points in the neighborhood of p1, and Z0(p1) is the number of transitions of the pixel value from 0 to 1 when traversing from p2 to p9;
the second step: scan the image again, and if the 8-neighborhood of a non-zero point satisfies the following 4 conditions, delete the point from the image;
A.2≤N(p1)≤6
B.Z0(p1)=1
C.p2*p6*p8=0
D.p2*p4*p8=0。
in the step (3), an opening operation is also applied to the image to remove possible noise points, i.e., the image is eroded and then dilated, so as to remove isolated points.
In the step (4), the formula of the improved gravity-center method is as follows:
X = Σ(x=x0 to y0) x·G(x, y)² / Σ(x=x0 to y0) G(x, y)²
where X is the position of strongest light intensity (the sub-pixel center), G(x, y) is the gray value of each pixel after the image is extracted, and x0 and y0 are the position coordinates of the left and right boundary pixels of the target region.
The details of the step (5) are as follows:
a) detecting the number of target pixels in the neighborhood of the sub-pixel coordinate of the line-laser center;
b) if the number of target points meets the requirement, regarding the point as a target point and going to c); otherwise it is a noise point, going to d);
c) continuing to traverse the sub-pixel point of the next row and detecting the number of target points in its neighborhood;
d) taking the target points of the 10 rows above and below the noise point as samples, performing a Hough transform to obtain a straight line, replacing the noise point with the corresponding point on the straight line, and going to c), until all target pixels in the whole image have been traversed.
After the scheme is adopted, the invention has the advantages that:
(1) the method uses the maximum inter-class variance method to calculate a different threshold for each image, and thereby approximately segments the overall profile of the line laser;
(2) the invention uses the Zhang parallel thinning algorithm, which effectively removes the burrs of the line laser, thins away boundary noise points, reduces the width of the line laser and reduces the amount of computation;
(3) the invention combines the improved gravity-center method with the Hough transform for accurate positioning of the line-laser center for the first time; this effectively raises the positioning accuracy of the line laser to the sub-pixel level, strongly suppresses isolated noise points in the image and enhances robustness.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a graph of the intensity profile of a line laser;
FIG. 3 is a neighborhood arrangement diagram of the Zhang parallel refinement algorithm;
fig. 4 is a graph corresponding to the relationship between (x, y) and (r, θ) in the Hough transform.
Detailed Description
The technical scheme of the invention is explained in detail in the following with the accompanying drawings.
As shown in fig. 1, the present invention provides a sub-pixel positioning method for the line-laser center in a three-dimensional laser scanning system, which combines an improved gravity-center method with the Hough transform to realize accurate sub-pixel positioning of the line-laser center, and specifically comprises the following steps:
(1) acquiring an original laser-scan image from the camera and preprocessing the original image to eliminate noise in the image;
the method specifically comprises the following steps:
a) The purpose of image preprocessing is to reduce the effect of noise on extraction of the laser center while retaining useful image information. Because a color image contains more color information and is relatively slow to process, the original three-channel image is converted into a single-channel grayscale image before preprocessing; the most common grayscale conversion formula is:
Gray = R*0.299 + G*0.587 + B*0.114 (1)
where R, G and B are the values of the red, green and blue components, respectively.
b) Median filtering effectively removes salt-and-pepper noise and speckle noise while preserving edge contours and image details. A suitable filtering window must be chosen experimentally so that the filtering effect is guaranteed without excessive computation; a 3 × 3 filtering window is generally used for the median filtering, which eliminates isolated noise points in the image.
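As a minimal sketch of this preprocessing, assuming OpenCV is available and using an illustrative input file name (laser_frame.png) in place of the real camera interface, steps a) and b) could read:

import cv2

# Illustrative input; the real system grabs frames from the camera instead.
frame = cv2.imread("laser_frame.png")            # three-channel BGR image

# a) single-channel grayscale conversion (OpenCV uses 0.299 R + 0.587 G + 0.114 B)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# b) 3 x 3 median filtering to remove salt-and-pepper and speckle noise
denoised = cv2.medianBlur(gray, 3)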
(2) Performing threshold segmentation on the denoised image to coarsely extract the line laser;
In a laser scanning system, subsequent sub-pixel extraction of the target is only possible if the target is effectively segmented from the image. Because the laser has strong contrast against the background, the background can be separated from the line-laser center to be extracted by setting a threshold: the gray value of each pixel in the image is compared with the set threshold to decide whether the pixel is a target point. Threshold extraction methods include fixed-threshold segmentation and adaptive-threshold segmentation. Fixed-threshold segmentation divides the whole image into target and background with one preset threshold chosen from repeated experiments; because different environments require different thresholds, its adaptability is poor. Adaptive-threshold segmentation determines the threshold from the gray values of the actual image, so images under different brightness, contrast and texture conditions get different thresholds, which gives good flexibility. To improve the flexibility of the scanning system, the invention adopts adaptive threshold extraction and uses the maximum inter-class variance method to obtain the optimal segmentation threshold for each image.
The maximum inter-class variance segmentation method divides the acquired image into a target part and a background part according to the pixel gray values and then computes the between-class variance of target and background; when the between-class variance reaches its maximum, the difference between target and background is largest, and the gray value at that point is the optimal threshold for segmenting the image. The specific steps are as follows:
Let the gray value of each point of the original image be F(x, y), the total number of pixels be N = N_0 + N_1 + … + N_(L-1), and the number of gray levels be L (0 to L-1). A threshold T divides the image into a target class O and a background class B. Let i denote a pixel gray value, let the number of pixels with gray value i be N_i, and let the corresponding probability be P_i; then:
P_i = N_i / N (2)
The gray range of the background class is [0, T] and the gray range of the target class is [T+1, L-1]. Denoting the gray-level probability of the background class by P_B and that of the target class by P_O, there are:
P_B = Σ(i=0 to T) P_i (3)
P_O = Σ(i=T+1 to L-1) P_i (4)
P_B + P_O = 1 (5)
Denoting the mean gray value of the background-class pixels by μ_B and the mean gray value of the target-class pixels by μ_O, there are:
μ_B = Σ(i=0 to T) i·P_i / P_B (6)
μ_O = Σ(i=T+1 to L-1) i·P_i / P_O (7)
P_O·μ_O + P_B·μ_B = μ_t (8)
The between-class variance can be expressed as:
σ² = P_B·(μ_B - μ_t)² + P_O·(μ_O - μ_t)² (9)
Combining formula (5) and formula (8) gives:
σ² = P_B·P_O·(μ_B - μ_O)² (10)
Different values of T give different values of σ²; the T that maximizes σ² is the optimal threshold sought, and the line laser can be extracted with this value.
Let the gray value of each point of the original image be F(x, y), let T be the selected adaptive threshold, and let G(x, y) be the gray value of each pixel after the image is extracted; then:
G(x, y) = F(x, y) if F(x, y) > T, and G(x, y) = 0 if F(x, y) ≤ T (11)
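As a sketch of this step, assuming the denoised image from the preprocessing sketch above is available, the maximum inter-class variance threshold of formulas (2)-(10) and the segmentation of formula (11) can be written directly in numpy:

import numpy as np

def otsu_threshold(gray):
    # Maximum inter-class variance: find the T that maximizes
    # sigma^2 = P_B * P_O * (mu_B - mu_O)^2, following formulas (2)-(10).
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    P = hist / hist.sum()                                # formula (2)
    i = np.arange(256, dtype=np.float64)
    best_T, best_sigma2 = 0, -1.0
    for T in range(255):
        P_B, P_O = P[:T + 1].sum(), P[T + 1:].sum()      # formulas (3)-(5)
        if P_B == 0 or P_O == 0:
            continue
        mu_B = (i[:T + 1] * P[:T + 1]).sum() / P_B       # formula (6)
        mu_O = (i[T + 1:] * P[T + 1:]).sum() / P_O       # formula (7)
        sigma2 = P_B * P_O * (mu_B - mu_O) ** 2          # formula (10)
        if sigma2 > best_sigma2:
            best_T, best_sigma2 = T, sigma2
    return best_T

T = otsu_threshold(denoised)                              # adaptive threshold
coarse = np.where(denoised > T, denoised, 0).astype(np.uint8)   # formula (11)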
(3) Removing burrs from the coarsely extracted line laser and further reducing its width;
After threshold segmentation, the line laser in the image still occupies many pixels in width, while the laser scanning system requires sub-pixel precision. Before the sub-pixel line-laser center is extracted, the thresholded image therefore needs edge thinning to eliminate the influence of stripe edge points on sub-pixel center extraction. Edge thinning means deleting unneeded contour points and keeping only the skeleton.
The Zhang parallel thinning algorithm effectively removes points on the contour edge without affecting the connectivity of the pixels inside the stripe; it removes the laser burrs while keeping the structure of the laser center line intact, which meets the system requirements. The invention therefore adopts the Zhang parallel thinning algorithm to thin the line laser: whether the current pixel is deleted is decided from the gray values of its 8-neighborhood. The algorithm is practical, fast, preserves the connectivity of the internal structure, and leaves no burrs in the thinned image.
The Zhang parallel thinning algorithm divides the pixels of the image into target points, i.e. pixels whose gray value is not 0, and background points, i.e. pixels whose gray value is 0. Denote the pixel value of any point in the image by p1 and label its neighborhood p2-p9; the positions corresponding to the labels are shown in FIG. 3.
the detailed steps of the refining are as follows:
the first step is as follows: deleting the point in the image (setting the pixel value to 0) if the following four conditions are satisfied;
A.2≤N(p1)≤6
B.Z0(p1)=1
C.p2*p4*p6=0
D.p4*p6*p8=0
where N(p1) is the number of non-zero points in the neighborhood of p1. Condition A is set because a value of at least 2 guarantees that p1 is neither an end point nor an isolated point, and a value of at most 6 guarantees that p1 is a boundary point rather than an interior point; end points and interior points should not be deleted. Z0(p1) is the number of transitions of the pixel value from 0 to 1 when traversing from p2 to p9, and this value is required to be 1 to preserve the connectivity of the region after the point is deleted. In conditions C and D, p4 and p6 each occur twice, so if either of them is 0, both C and D are satisfied. After all boundary points have been examined, all marked points are deleted together, and the algorithm enters the deletion step of the second stage.
The second step: scan the image again, and if the 8-neighborhood of a non-zero point satisfies the following 4 conditions, delete the point from the image (set its pixel value to 0);
A.2≤N(p1)≤6
B.Z0(p1)=1
C.p2*p6*p8=0
D.p2*p4*p8=0
In conditions C and D, p2 and p8 each occur twice, so if either of them is 0, both C and D are satisfied. A sketch of this two-step thinning procedure is given below.
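The following is a minimal numpy sketch of the two-step thinning described above; the p2-p9 neighborhood layout is assumed to follow the usual Zhang-Suen convention (p2 above p1, then clockwise), since FIG. 3 is not reproduced here, and the 0-to-1 transition count is taken over the cyclic sequence p2, ..., p9, p2:

import numpy as np

def zhang_suen_thinning(binary):
    # `binary`: 2-D array, non-zero = target point, 0 = background point.
    img = (binary > 0).astype(np.uint8)
    changed = True
    while changed:
        changed = False
        for step in (0, 1):                      # first and second sub-step
            to_delete = []
            rows, cols = img.shape
            for y in range(1, rows - 1):
                for x in range(1, cols - 1):
                    if img[y, x] == 0:
                        continue
                    # neighborhood p2..p9 (p2 above p1, then clockwise)
                    p2, p3, p4 = img[y - 1, x], img[y - 1, x + 1], img[y, x + 1]
                    p5, p6, p7 = img[y + 1, x + 1], img[y + 1, x], img[y + 1, x - 1]
                    p8, p9 = img[y, x - 1], img[y - 1, x - 1]
                    nbrs = [p2, p3, p4, p5, p6, p7, p8, p9]
                    N = int(sum(nbrs))                              # condition A
                    Z0 = sum(a == 0 and b == 1                      # condition B
                             for a, b in zip(nbrs, nbrs[1:] + nbrs[:1]))
                    if step == 0:
                        cd = p2 * p4 * p6 == 0 and p4 * p6 * p8 == 0   # C, D (first step)
                    else:
                        cd = p2 * p6 * p8 == 0 and p2 * p4 * p8 == 0   # C, D (second step)
                    if 2 <= N <= 6 and Z0 == 1 and cd:
                        to_delete.append((y, x))
            for y, x in to_delete:
                img[y, x] = 0
            changed = changed or bool(to_delete)
    return img

thinned = zhang_suen_thinning(coarse)   # `coarse` from the threshold-segmentation sketch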
To ensure that the influence of any isolated noise remaining at this stage on the sub-pixel positioning precision is reduced to a minimum: because a line laser with a certain width has already been extracted, median filtering can no longer be used for denoising, since taking the median within a neighborhood would severely damage the extracted skeleton. The more appropriate choice is the opening operation, i.e., eroding and then dilating the image, which removes isolated points.
(4) Accurately positioning the sub-pixel line-laser center by combining the improved gravity-center method with the Hough transform;
After the image is thinned, the burrs of the laser center line are clearly eliminated and the width of the line laser is reduced to a few pixels. Because the line-laser light band obeys a Gaussian distribution, its maximum does not in general fall exactly on a pixel, and directly taking some pixel position as the exact position of the line-laser center would introduce a deviation. The method therefore combines the advantages of the Hough transform and the gravity-center method: the sub-pixel coordinates of the line-laser center are obtained with the improved gravity-center method, the influence of noise points on the improved gravity-center method is corrected with the Hough transform, and the sub-pixel coordinates are finally determined.
The traditional gravity-center method obtains the sub-pixel coordinate of the line-laser center by computing over a whole row (or column). Applied to line-laser extraction, it finds the position of strongest light intensity, namely:
X = Σ(x=1 to n) x·G(x, y) / Σ(x=1 to n) G(x, y) (12)
where n is the number of pixels per row in the image.
The traditional gravity-center method is extremely susceptible to noise. To address this defect, the thinned image is divided into a target part and a background part, the target part being the thinned line laser. Before the sub-pixel line-laser center is extracted with the gravity-center method, the whole row (or column) of pixels is traversed to locate the target region, and the improved gravity-center method is applied only to the target region:
X = Σ(x=x0 to y0) x·G(x, y)² / Σ(x=x0 to y0) G(x, y)² (13)
where x0 is the position coordinate of the left boundary pixel of the target region and y0 is the position coordinate of the right boundary pixel of the target region.
The advantages of this approach are: the light-intensity distribution of the line laser is Gaussian, so a point with a larger gray value lies closer to the center of the line laser and has a larger pixel value; the formula therefore gives pixels with stronger gray values a larger weight, and restricting the computation to the target region narrows the range of the gravity-center calculation, improves the extraction precision and reduces the amount of computation.
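A minimal numpy sketch of this step, under two stated assumptions: the squared-gray weighting as written in formula (13) above (the original formula image is not reproduced in this text), and a target region [x0, y0] taken in each row as the span of non-zero pixels of the segmented image coarse:

import numpy as np

def row_subpixel_center(row):
    # Improved gravity-center method for one image row of gray values G(x, y).
    idx = np.flatnonzero(row)
    if idx.size == 0:
        return None                       # no target pixels in this row
    x0, y0 = idx[0], idx[-1]              # left / right boundary of the target region
    x = np.arange(x0, y0 + 1, dtype=np.float64)
    w = row[x0:y0 + 1].astype(np.float64) ** 2   # heavier weight for stronger gray values
    return float((x * w).sum() / w.sum())

# sub-pixel center candidate for every row of the segmented image
centers = [row_subpixel_center(r) for r in coarse]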
The gravity-center method is easily disturbed by noise; to address this defect, the method uses the Hough transform to correct the pseudo target pixels obtained by the improved gravity-center method.
A straight line has two representations in a two-dimensional coordinate system, namely:
a. in a cartesian coordinate system, it can be represented by a slope and an intercept;
b. in a polar coordinate system, it can be represented by the parameters polar radius and polar angle. For the Hough transform, a straight line is represented in the polar coordinate system; the equation of a straight line in the polar coordinate system is:
y = -(cos θ / sin θ)·x + r / sin θ (14)
which can be rearranged as:
r = x·cos θ + y·sin θ (15)
As shown in fig. 4, this is rewritten as the Hough transform, i.e., a mapping from the independent variables (x, y) to the parameter variables (r, θ). For a point (x0, y0), all straight lines passing through this point can be expressed in polar coordinates as:
r = x0·cos θ + y0·sin θ (16)
For a given fixed point (x0, y0), plotting the polar radius against the polar angle for all straight lines through the point therefore yields a sinusoidal curve.
After the Hough transform, all points lying on the line y = kx + b in image space map to curves that intersect at one point (k, b) in parameter space. Using this property, if the curves obtained from two different fixed points intersect, the two points lie on the same straight line. Through the Hough transform, the detection of straight lines in image space is thus converted into the detection of points in parameter space.
The specific steps are as follows:
1) establish a two-dimensional counter over the parameter space (one dimension for r, one for θ), i.e., a two-dimensional array cnt(r_i, θ_i), and initialize all its values to 0;
2) scan all points (x_i, y_i) in the image space, convert them from image space to parameter space (r_i, θ_i) by the Hough transform, and increment the count cnt(r_i, θ_i) by 1;
3) determine a threshold thr: more than thr collinear points in the image are regarded as forming a straight line, so when cnt(r_i, θ_i) > thr, the pair (r_i, θ_i) is considered to correspond to a straight line in the image.
According to this property, a straight line can be detected by counting the curves that intersect at a point; the more curves intersect at a point, the more image points lie on the straight line represented by that intersection.
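A minimal sketch of the accumulator described in steps 1)-3); the discretization (1-degree angle bins, 1-pixel r bins) and the way candidate lines are returned are illustrative assumptions:

import numpy as np

def hough_lines(points, shape, thr):
    # Vote counter cnt(r_i, theta_i) over the parameter space, steps 1) and 2),
    # then return the (r, theta) pairs whose count exceeds thr, step 3).
    h, w = shape
    thetas = np.deg2rad(np.arange(0, 180))               # 1-degree steps
    r_max = int(np.ceil(np.hypot(h, w)))                 # largest possible |r|
    cnt = np.zeros((2 * r_max + 1, thetas.size), dtype=np.int32)   # step 1)
    for x, y in points:                                   # step 2)
        r = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        cnt[r + r_max, np.arange(thetas.size)] += 1
    r_idx, t_idx = np.nonzero(cnt > thr)                  # step 3)
    return [(int(r) - r_max, float(thetas[t])) for r, t in zip(r_idx, t_idx)]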
The Hough transform determines a straight line from the number of points lying on it, so it is global, has strong anti-interference capability, and is computationally heavy; the gravity-center method is fast but sensitive to noise. The invention therefore combines the advantages of the two: when there is no noise interference, the sub-pixel position of the center point is located directly with the improved gravity-center method; when a noise point exists, it is first deleted, the target pixel points of the 10 rows above and below are taken as samples for a Hough transform to obtain the straight line formed by these points, and the original noise point is replaced by the corresponding point on that straight line. The specific steps are as follows (a sketch of this correction loop is given after the list):
a) obtain the sub-pixel coordinate of the line-laser center with the improved gravity-center method;
b) detect the number of target pixels in the neighborhood of the sub-pixel coordinate of the line-laser center;
c) if the number of target points meets the requirement, regard the point as a target point and go to d); otherwise it is a noise point, go to e);
d) continue traversing the sub-pixel point of the next row and detect the number of target points in its neighborhood;
e) take the target points of the 10 rows above and below the noise point as samples, perform a Hough transform to obtain a straight line, replace the noise point with the corresponding point on the straight line, and go to d), until all target pixels in the whole image have been traversed.
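Reusing row_subpixel_center and hough_lines from the earlier sketches, the correction loop of steps a)-e) could look as follows; the 3 × 3 neighborhood, the minimum target count and the Hough vote threshold are illustrative choices, while the ±10-row sampling window follows the description:

import numpy as np

def correct_centers(coarse, centers, min_targets=3, half_window=10, vote_thr=8):
    # Replace pseudo (noise) sub-pixel centers by the corresponding point on a
    # straight line obtained from a Hough transform over neighboring rows.
    corrected = list(centers)
    for y, cx in enumerate(centers):
        if cx is None:
            continue
        x = int(round(cx))
        nbhd = coarse[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
        if np.count_nonzero(nbhd) >= min_targets:
            continue                                      # genuine target point, step c)
        lo, hi = max(y - half_window, 0), min(y + half_window + 1, len(centers))
        pts = [(centers[j], j) for j in range(lo, hi)     # step e): samples from +/- 10 rows
               if j != y and centers[j] is not None]
        lines = hough_lines(pts, coarse.shape, vote_thr)
        if not lines:
            continue
        r, theta = lines[0]                               # first candidate line
        if abs(np.cos(theta)) > 1e-6:
            # corresponding point on r = x*cos(theta) + y*sin(theta) for this row y
            corrected[y] = (r - y * np.sin(theta)) / np.cos(theta)
    return corrected

corrected_centers = correct_centers(coarse, centers)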
The above embodiments are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereby, and any modifications made on the basis of the technical scheme according to the technical idea of the present invention fall within the protection scope of the present invention.

Claims (2)

1. A sub-pixel positioning method for a line laser center in a three-dimensional laser scanning system is characterized by comprising the following steps:
(1) acquiring an original laser-scan image from a camera and preprocessing the original image to eliminate noise in the image;
(2) performing threshold segmentation on the denoised image to coarsely extract the line laser;
(3) removing burrs from the coarsely extracted line laser and further reducing its width;
(4) obtaining the sub-pixel coordinates of the line-laser center with an improved gravity-center method;
(5) correcting pseudo target pixels in the sub-pixel coordinates with the Hough transform;
wherein the preprocessing in the step (1) comprises the following steps:
(11) converting the three-channel original image acquired by the camera into a single-channel grayscale image;
(12) applying median filtering with a 3 × 3 filtering window to the grayscale image to eliminate noise in the image;
in the step (2), the threshold segmentation is to calculate an optimal threshold for segmenting the target and the background in the image by using a maximum inter-class variance method, and the specific process of the threshold segmentation is as follows:
let the gray value of each point of the original image be F(x, y), the total number of pixels be N = N_0 + N_1 + … + N_(L-1), and the number of gray levels be L; a threshold T divides the image into a target class O and a background class B; let i denote a pixel gray value, let the number of pixels with gray value i be N_i, and let the corresponding probability be P_i; then:
P_i = N_i / N
the gray range of the background class is [0, T] and the gray range of the target class is [T+1, L-1]; denoting the gray-level probability of the background class by P_B and that of the target class by P_O, there are:
P_B = Σ(i=0 to T) P_i
P_O = Σ(i=T+1 to L-1) P_i
P_B + P_O = 1
denoting the mean gray value of the background-class pixels by μ_B and the mean gray value of the target-class pixels by μ_O, there are:
μ_B = Σ(i=0 to T) i·P_i / P_B
μ_O = Σ(i=T+1 to L-1) i·P_i / P_O
P_O·μ_O + P_B·μ_B = μ_t
the between-class variance is then expressed as:
σ² = P_B·(μ_B - μ_t)² + P_O·(μ_O - μ_t)²
which gives:
σ² = P_B·P_O·(μ_B - μ_O)²
different values of T give different values of σ²; the T that maximizes σ² is the optimal threshold sought;
the specific steps of the step (3) are as follows: the Zhang parallel thinning algorithm divides the pixels of the image into target points and background points; denote the pixel value of any point in the image by p1, label its neighborhood p2-p9, and perform the following processing:
the first step is as follows: if the following four conditions are satisfied, deleting the point in the image;
A.2≤N(p1)≤6
B.Z0(p1)=1
C.p2*p4*p6=0
D.p4*p6*p8=0
where N(p1) is the number of non-zero points in the neighborhood of p1, and Z0(p1) is the number of transitions of the pixel value from 0 to 1 when traversing from p2 to p9;
the second step: scan the image again, and if the 8-neighborhood of a non-zero point satisfies the following 4 conditions, delete the point from the image;
A.2≤N(p1)≤6
B.Z0(p1)=1
C.p2*p6*p8=0
D.p2*p4*p8=0;
in the step (4), the formula of the improved gravity-center method is as follows:
X = Σ(x=x0 to y0) x·G(x, y)² / Σ(x=x0 to y0) G(x, y)²
where X is the position of strongest light intensity (the sub-pixel center), G(x, y) is the gray value of each pixel after the image is extracted, and x0 and y0 are the position coordinates of the left and right boundary pixels of the target region;
the correction of step (5) comprises the following steps:
a) detecting the number of target pixels in the neighborhood of the sub-pixel coordinate of the line-laser center;
b) if the number of target points meets the requirement, regarding the point as a target point and going to c); otherwise it is a noise point, going to d);
c) continuing to traverse the sub-pixel point of the next row and detecting the number of target points in its neighborhood;
d) taking the target points of the 10 rows above and below the noise point as samples, performing a Hough transform to obtain a straight line, replacing the noise point with the corresponding point on the straight line, and going to c), until all target pixels in the whole image have been traversed.
2. The method of claim 1, wherein: in the step (3), an opening operation is also applied to the image to remove possible noise points, i.e., the image is eroded and then dilated, so as to remove isolated points.
CN201610829325.XA 2016-09-18 2016-09-18 Sub-pixel positioning method for center line laser of three-dimensional laser scanning system Active CN107203973B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610829325.XA CN107203973B (en) 2016-09-18 2016-09-18 Sub-pixel positioning method for center line laser of three-dimensional laser scanning system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610829325.XA CN107203973B (en) 2016-09-18 2016-09-18 Sub-pixel positioning method for center line laser of three-dimensional laser scanning system

Publications (2)

Publication Number Publication Date
CN107203973A CN107203973A (en) 2017-09-26
CN107203973B true CN107203973B (en) 2020-06-23

Family

ID=59904369

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610829325.XA Active CN107203973B (en) 2016-09-18 2016-09-18 Sub-pixel positioning method for center line laser of three-dimensional laser scanning system

Country Status (1)

Country Link
CN (1) CN107203973B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107784669A (en) * 2017-10-27 2018-03-09 东南大学 A kind of method that hot spot extraction and its barycenter determine
CN108648258A (en) * 2018-04-26 2018-10-12 中国科学院半导体研究所 Image calculating for laser night vision homogenizes Enhancement Method
CN108921864B (en) * 2018-06-22 2022-02-15 广东工业大学 Light strip center extraction method and device
CN109269407A (en) * 2018-09-28 2019-01-25 中铁工程装备集团有限公司 A kind of vertical shaft laser guide localization method based on labview
CN109598738A (en) * 2018-11-12 2019-04-09 长安大学 A kind of line-structured light center line extraction method
CN109559324B (en) * 2018-11-22 2020-06-05 北京理工大学 Target contour detection method in linear array image
CN109712147A (en) * 2018-12-19 2019-05-03 广东工业大学 A kind of interference fringe center line approximating method extracted based on Zhang-Suen image framework
CN109709574B (en) * 2019-01-09 2021-10-26 国家海洋局第一海洋研究所 Seabed microtopography laser scanning imaging system and three-dimensional terrain reconstruction method
CN110292384A (en) * 2019-06-26 2019-10-01 浙江大学 A kind of intelligent foot arch index measurement method based on plantar pressure data distribution
CN110443275B (en) * 2019-06-28 2022-11-25 炬星科技(深圳)有限公司 Method, apparatus and storage medium for removing noise
CN110599539B (en) * 2019-09-17 2022-05-17 广东奥普特科技股份有限公司 Stripe center extraction method of structured light stripe image
CN111539934B (en) * 2020-04-22 2023-05-16 苏州中科行智智能科技有限公司 Extraction method of line laser center
CN112330667B (en) * 2020-11-26 2023-08-22 上海应用技术大学 Morphology-based laser stripe center line extraction method
CN114001671B (en) * 2021-12-31 2022-04-08 杭州思看科技有限公司 Laser data extraction method, data processing method and three-dimensional scanning system
CN115619860A (en) * 2022-09-15 2023-01-17 珠海一微半导体股份有限公司 Laser positioning method based on image information and robot

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100486476C (en) * 2007-11-08 2009-05-13 浙江理工大学 Method and system for automatic generating shoe sole photopolymer coating track based on linear structure optical sensor
US8086026B2 (en) * 2008-06-27 2011-12-27 Waldean Schulz Method and system for the determination of object positions in a volume
CN101504275A (en) * 2009-03-11 2009-08-12 华中科技大学 Hand-hold line laser three-dimensional measuring system based on spacing wireless location
CN102663781A (en) * 2012-03-23 2012-09-12 南昌航空大学 Sub-pixel level welding center extraction method based on visual sense
CN102794763B (en) * 2012-08-31 2014-09-24 江南大学 Systematic calibration method of welding robot guided by line structured light vision sensor
US9123113B2 (en) * 2013-03-08 2015-09-01 Raven Industries, Inc. Row guidance parameterization with Hough transform
CN104657587B (en) * 2015-01-08 2017-07-18 华中科技大学 A kind of center line extraction method of laser stripe
CN104677305B (en) * 2015-02-11 2017-09-05 浙江理工大学 A kind of body surface three-dimensional method for reconstructing and system based on cross structure light

Also Published As

Publication number Publication date
CN107203973A (en) 2017-09-26

Similar Documents

Publication Publication Date Title
CN107203973B (en) Sub-pixel positioning method for center line laser of three-dimensional laser scanning system
CN110569704B (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
CN107463918B (en) Lane line extraction method based on fusion of laser point cloud and image data
CN108921176B (en) Pointer instrument positioning and identifying method based on machine vision
CN111507390B (en) Storage box body identification and positioning method based on contour features
CN110232389B (en) Stereoscopic vision navigation method based on invariance of green crop feature extraction
CN111968144B (en) Image edge point acquisition method and device
CN109801333B (en) Volume measurement method, device and system and computing equipment
CN107248159A (en) A kind of metal works defect inspection method based on binocular vision
CN107392929B (en) Intelligent target detection and size measurement method based on human eye vision model
KR20130030220A (en) Fast obstacle detection
CN104899888B (en) A kind of image sub-pixel edge detection method based on Legendre squares
CN109034017A (en) Head pose estimation method and machine readable storage medium
CN110021029B (en) Real-time dynamic registration method and storage medium suitable for RGBD-SLAM
CN109559324A (en) A kind of objective contour detection method in linear array images
CN108986129B (en) Calibration plate detection method
CN112767359B (en) Method and system for detecting corner points of steel plate under complex background
CN106952262B (en) Ship plate machining precision analysis method based on stereoscopic vision
CN111354047B (en) Computer vision-based camera module positioning method and system
CN109492525B (en) Method for measuring engineering parameters of base station antenna
CN109886935A (en) A kind of road face foreign matter detecting method based on deep learning
CN113643371A (en) Method for positioning aircraft model surface mark points
CN115018785A (en) Hoisting steel wire rope tension detection method based on visual vibration frequency identification
CN104573635B (en) A kind of little height recognition methods based on three-dimensional reconstruction
CN105005985B (en) Backlight image micron order edge detection method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20170926

Assignee: ZHEJIANG BITAI SYSTEM ENGINEERING Co.,Ltd.

Assignor: JIANGSU University OF SCIENCE AND TECHNOLOGY

Contract record no.: X2020980007232

Denomination of invention: A subpixel location method for the center line of a 3D laser scanning system

Granted publication date: 20200623

License type: Common License

Record date: 20201029

EE01 Entry into force of recordation of patent licensing contract
EC01 Cancellation of recordation of patent licensing contract

Assignee: ZHEJIANG BITAI SYSTEM ENGINEERING Co.,Ltd.

Assignor: JIANGSU University OF SCIENCE AND TECHNOLOGY

Contract record no.: X2020980007232

Date of cancellation: 20201223

EC01 Cancellation of recordation of patent licensing contract