CN110866924A - Line structured light center line extraction method and storage medium - Google Patents
- Publication number
- CN110866924A CN110866924A CN201910906682.5A CN201910906682A CN110866924A CN 110866924 A CN110866924 A CN 110866924A CN 201910906682 A CN201910906682 A CN 201910906682A CN 110866924 A CN110866924 A CN 110866924A
- Authority
- CN
- China
- Prior art keywords
- image
- point
- pixel
- light bar
- gray
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G06T5/70—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20104—Interactive definition of region of interest [ROI]
Abstract
The invention claims a line structured light center line extraction method, belonging to the technical field of machine vision, comprising the following steps: an image collected by a CCD industrial camera is subjected to a series of processing such as cropping, graying, image enhancement, denoising, binarization, morphological opening and closing operations, and light bar region segmentation; a thinning algorithm is then applied to obtain an image containing the single-pixel light bar center line; finally, the Steger algorithm is improved: a region of interest is determined and median-filtered; a 1×5 sliding template is moved along each image row under a determined constraint threshold to find the rough center point; the Hessian matrix is solved through the separability and symmetry of the Gaussian function; and a second-order Taylor expansion yields the sub-pixel center coordinates. The algorithm has good connectivity, is free of burrs, is simple to operate, and offers high efficiency, high extraction speed and high precision. The invention can meet the real-time requirement of a visual detection system.
Description
Technical Field
The invention belongs to the technical field of machine vision, and particularly relates to a line structured light center line extraction method.
Background
Three-dimensional measurement technology is the basis of three-dimensional reconstruction and can be divided into contact and non-contact measurement according to the measurement mode. With the rapid development of non-contact measurement, structured-light-based methods in particular are widely used in actual production and daily life.
In three-dimensional reconstruction, extraction of the structured light stripe center line is extremely important: the stripe projection angle must be calculated on the basis of the stripe center line, so the quality of the center line extraction directly affects the precision of the three-dimensional reconstruction result.
Existing approaches each have drawbacks. The geometric center method takes the midpoint of the two end points of each light bar section as that section's center point; the algorithm is simple and extraction is fast, but precision is low and universality poor. The gray centroid method fits the gray distribution curve of the light band and searches for the position of its maximum; it overcomes errors caused by asymmetric gray distribution, but its stability and precision are poor. The skeleton extraction method decides, within the neighborhood of each point on the image boundary, whether the point is retained or deleted, iterating until a single-pixel center line remains; the algorithm is simple to operate but imprecise. The gradient centroid method computes the gradient of the light band region, performs a weighted average according to the gradient, and takes the resulting extreme point as the center position of the light band. The extremum method takes the gray-value maximum of the light bar as a point on the center line; extraction is fast but noise resistance is poor. The Steger algorithm uses the Hessian matrix to obtain the extreme point along the normal direction at each point of the light band, giving the sub-pixel position of the center line; it has high precision and good robustness, but the computation amount is large and it is difficult to meet real-time requirements.
Therefore, a line structured light center line extraction method with simple operation, high efficiency, high extraction speed and high accuracy is needed.
Disclosure of Invention
The present invention is directed to solving the above problems of the prior art by providing a line structured light center line extraction method that is simple to operate, efficient, fast and precise. The technical scheme of the invention is as follows:
a line structured light centerline extraction method, comprising the steps of:
step 1, carrying out image graying, image enhancement and image denoising on an image collected by a CCD industrial camera to obtain a preprocessed light bar image;
step 2, carrying out binarization, morphological opening and closing operation and image light bar region segmentation on the preprocessed light bar image to obtain a binarized closed light bar image;
step 3, thinning the binarized closed light bar image of step 2 by a thinning algorithm to obtain an image containing the single-pixel light bar center line;
step 4, determining a region of interest from the single-pixel light bar center line of step 3 and performing median filtering on it; moving a 1×5 sliding template along each image row according to the determined constraint threshold, and solving the rough center point normal direction with the Steger algorithm; solving the Hessian matrix through the separability and symmetry of the Gaussian function; and finally performing a second-order Taylor expansion to obtain the sub-pixel center coordinates.
Further, the step 1 of preprocessing the light bar image specifically includes: the image graying, image enhancement and image denoising processing are carried out on the image collected by the CCD industrial camera, and the method specifically comprises the following steps:
step 1.1, to reduce the amount of subsequent computation, the color image is first converted into a grayscale image: the average of the R, G and B components of each pixel is calculated and assigned to all three components of that pixel, graying the image;
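Step 1.1 can be sketched as follows with NumPy; the function name and the toy image are illustrative, not from the patent:

```python
import numpy as np

def to_gray_average(img_rgb):
    """Graying by channel averaging: the mean of the R, G, B components
    of each pixel becomes its gray value, as described in step 1.1."""
    return img_rgb.astype(np.float64).mean(axis=2)

# toy 1x2 RGB image
img = np.array([[[30, 60, 90], [120, 120, 120]]], dtype=np.uint8)
gray = to_gray_average(img)  # -> [[60., 120.]]
```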
step 1.2, the gray values of the grayed image are stretched to the whole 0-255 interval by a gray transform, greatly enhancing the image contrast. The following formula maps the gray value of a pixel to a larger gray space:

I'(x, y) = \frac{I(x, y) - I_{min}}{I_{max} - I_{min}} (MAX - MIN) + MIN   (1)

In formula (1), x and y are the horizontal and vertical coordinates of an image pixel (x, y); I(x, y) is the original image, and I_{min} and I_{max} are its minimum and maximum gray values; MIN and MAX are the minimum and maximum of the gray space to be stretched to.
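A minimal sketch of the gray stretch of formula (1), assuming the usual linear mapping; names are illustrative:

```python
import numpy as np

def gray_stretch(I, out_min=0.0, out_max=255.0):
    """Formula (1): map [I_min, I_max] linearly onto [MIN, MAX]."""
    I = I.astype(np.float64)
    i_min, i_max = I.min(), I.max()
    return (I - i_min) / (i_max - i_min) * (out_max - out_min) + out_min

I = np.array([[50, 100], [150, 200]], dtype=np.uint8)
S = gray_stretch(I)  # 50 -> 0, 200 -> 255: contrast stretched to [0, 255]
```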
step 1.3, median filtering is applied to the image: a sliding window containing an odd number of points is used, and the gray value of the center point is replaced by the median of the gray values in the window, i.e. the gray values in the window are sorted and the middle value is assigned to the center point. The specific steps are as follows:
(1) obtaining the first address of a source image and the width and height of the image;
(2) opening up a memory buffer area for temporarily storing the result image and initializing the result image to be 0;
(3) scanning pixel points in the image one by one, sequencing pixel values of all elements in the neighborhood of the pixel points from small to large, and assigning the obtained intermediate value to the pixel point corresponding to the current point in the target image;
(4) repeating step (3) until all pixel points of the source image have been processed;
(5) copying the result from the memory buffer area to the data area of the source image.
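Steps (1)-(5) above amount to the following pure-NumPy sketch (the buffer-and-copy flow is simplified and the names are illustrative):

```python
import numpy as np

def median_filter(src, k=3):
    """Median filtering following steps (1)-(5): a result buffer is
    initialised to 0 (step 2), each pixel's k x k neighbourhood is sorted
    and its middle value written to the corresponding position (steps 3-4);
    border pixels are left at 0 in this sketch."""
    h, w = src.shape                      # (1) image height and width
    dst = np.zeros_like(src)              # (2) result buffer initialised to 0
    r = k // 2
    for i in range(r, h - r):             # (3)-(4) scan pixels one by one
        for j in range(r, w - r):
            win = np.sort(src[i - r:i + r + 1, j - r:j + r + 1], axis=None)
            dst[i, j] = win[len(win) // 2]
    return dst                            # (5) hand the result back

noisy = np.array([[10, 10, 10],
                  [10, 255, 10],
                  [10, 10, 10]], dtype=np.uint8)
clean = median_filter(noisy)  # the 255 impulse at the centre is replaced by 10
```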
Further, step 2 performs binarization, morphological opening and closing operation and image light bar region segmentation processing on the preprocessed light bar image to obtain a binarized closed light bar image, and specifically includes:
step 2.1, setting the image into two different levels respectively by using the difference between the target and the background in the image, and selecting a proper threshold value to determine whether a certain pixel is the target or the background so as to obtain a binary image;
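The patent does not fix how the "proper threshold" of step 2.1 is chosen; a common, assumed choice is Otsu's between-class-variance criterion, sketched here (all names are illustrative):

```python
import numpy as np

def otsu_binarize(gray):
    """Binarisation for step 2.1: pick the threshold maximising the
    between-class variance between background and target, then split
    the image into two levels (0 and 255)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()       # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * p[:t]).sum() / w0  # class means
        m1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2          # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return (gray >= best_t).astype(np.uint8) * 255, best_t

img = np.array([[10, 12, 11], [200, 205, 199]], dtype=np.uint8)
binary, t = otsu_binarize(img)  # threshold falls between the two clusters
```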
step 2.2, a morphological closing operation is applied to the image: through dilation followed by erosion, fine holes inside the object are filled, adjacent objects are connected and object boundaries are smoothed without obviously changing their area, so that the light bar center line position can be determined subsequently;
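A sketch of the closing of step 2.2, i.e. dilation (window maximum) followed by erosion (window minimum); the square window, edge padding and names are implementation assumptions:

```python
import numpy as np

def _window_op(img, k, op):
    # apply op (max or min) over each k x k neighbourhood, edge-padded
    r = k // 2
    pad = np.pad(img, r, mode="edge")
    out = np.zeros_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = op(pad[i:i + k, j:j + k])
    return out

def close_binary(img, k=3):
    """Morphological closing: dilation then erosion, filling small holes
    in the light bar without noticeably changing its area."""
    return _window_op(_window_op(img, k, np.max), k, np.min)

bar = np.array([[255, 255, 255, 255, 255],
                [255, 255, 0, 255, 255],   # one-pixel hole inside the bar
                [255, 255, 255, 255, 255]], dtype=np.uint8)
closed = close_binary(bar)  # the hole is filled
```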
step 2.3, the image is subjected to light bar region segmentation processing, and a specific calculation method for solving the edge point by adopting a Canny operator comprises the following steps:
(1) smoothing the image with a gaussian filter;
(2) calculating gradient amplitude and direction by using first-order partial derivative finite difference;
(3) applying non-maximum suppression to the gradient magnitude;
(4) edges are detected and connected using a dual threshold algorithm.
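The four Canny steps listed above can be condensed into the following sketch; the non-maximum suppression is simplified to four quantised directions and the final hysteresis linking of step (4) is omitted, so this is an assumed illustration rather than the patent's exact operator:

```python
import numpy as np

def canny_sketch(img, low_ratio=0.2, high_ratio=0.5):
    h, w = img.shape
    # (1) smooth with a 3x3 Gaussian approximation
    k = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=np.float64) / 16.0
    pad = np.pad(img.astype(np.float64), 1, mode="edge")
    sm = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            sm[i, j] = (pad[i:i + 3, j:j + 3] * k).sum()
    # (2) gradient magnitude and direction from first-order finite differences
    gy, gx = np.gradient(sm)
    mag = np.hypot(gx, gy)
    ang = (np.rad2deg(np.arctan2(gy, gx)) + 180.0) % 180.0
    # (3) non-maximum suppression along the quantised gradient direction
    offs = {0: (0, 1), 45: (1, 1), 90: (1, 0), 135: (1, -1)}
    nms = np.zeros_like(mag)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            d = min(offs, key=lambda a: min(abs(ang[i, j] - a), 180 - abs(ang[i, j] - a)))
            di, dj = offs[d]
            if mag[i, j] >= mag[i + di, j + dj] and mag[i, j] >= mag[i - di, j - dj]:
                nms[i, j] = mag[i, j]
    # (4) double threshold: strong and weak edge candidates (linking omitted)
    high, low = nms.max() * high_ratio, nms.max() * low_ratio
    return nms >= high, (nms >= low) & (nms < high)

# vertical step edge: dark left half, bright right half
img = np.zeros((4, 8))
img[:, 4:] = 255.0
strong, weak = canny_sketch(img)  # strong responses cluster at the step
```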
Further, in step 3, the binarized closed light bar image from step 2 is thinned by a thinning algorithm to obtain an image containing the single-pixel light bar center line, specifically:
The skeleton of the image is obtained by the Zhang-Suen thinning algorithm; the skeleton, as one of the features of the image, can be used for recognition or pattern matching. The flow of the classic Zhang-Suen parallel fast thinning algorithm is shown in fig. 9.
Further, in the step 4, the process of extracting the rough central point of the optical bar specifically includes:
step 4.1, to guarantee the precision of center point extraction, the light bar image is first dilated with a 3×3 template, ensuring that the processed image region is wider, in pixels, than the original region;
step 4.2, for an image Z of M×N pixels, the gray value of the pixel in row i, column j is Z(i, j); when Z(i, j) > S(T) (0 ≤ j ≤ N), a 1×5 sliding window is moved along row i and the sum of the gray values of the 5 pixels under the window is counted; the position maximising this 5-pixel sum is the rough position of the light band center in that row;
step 4.3, the search for qualifying points continues in row i+1 until i = M terminates the loop.
Further, in step 5, solving the Hessian matrix through the separability and symmetry of the Gaussian function specifically includes:
The Hessian matrix of the two-dimensional image can be represented as

H(x, y) = \begin{bmatrix} r_{xx} & r_{xy} \\ r_{xy} & r_{yy} \end{bmatrix}   (1.2)

In formula (1.2), x and y are the horizontal and vertical coordinates of a point (x, y) on the structured light stripe; H(x, y) is the Hessian matrix function and g(x, y) the two-dimensional Gaussian function with variance σ². The second partial derivatives r_{xx}, r_{xy} and r_{yy} of the image gray function r(x, y) are obtained by convolving the corresponding second derivatives of the Gaussian kernel with the original image:

r_{xx} = \frac{\partial^2 g(x, y)}{\partial x^2} \otimes r(x, y), \quad r_{xy} = \frac{\partial^2 g(x, y)}{\partial x \partial y} \otimes r(x, y), \quad r_{yy} = \frac{\partial^2 g(x, y)}{\partial y^2} \otimes r(x, y)

The normal direction (n_x, n_y) of the image is the eigenvector corresponding to the eigenvalue of largest absolute value of the Hessian matrix, and that eigenvalue is the second-order directional derivative of the image gray function. Obtaining the Hessian matrix requires at least 5 two-dimensional Gaussian convolutions per image pixel.
Any pixel point (x_o, y_o) in the two-dimensional image and its neighboring pixel points can be expressed by a second-order Taylor polynomial:

f(x_o + u, y_o + v) \approx g(x_o, y_o) + u\, g_x(x_o, y_o) + v\, g_y(x_o, y_o) + \frac{1}{2}\left[u^2 g_{xx}(x_o, y_o) + 2uv\, g_{xy}(x_o, y_o) + v^2 g_{yy}(x_o, y_o)\right]

The quantities g(x_o, y_o), g_x(x_o, y_o), g_y(x_o, y_o), g_{xx}(x_o, y_o), g_{xy}(x_o, y_o) and g_{yy}(x_o, y_o) are obtained by convolving the image f(x, y) with the corresponding Gaussian kernels.
In the two-dimensional image f(x, y), the first-order directional derivative along the edge-normal direction n(x, y) is zero, and the center point of the line is the point where the absolute value of the second-order directional derivative is maximal. Let (n_x, n_y) denote the direction of n(x, y), with modulus 1. Setting the first-order directional derivative along (n_x, n_y) to zero gives

t = -\frac{n_x g_x + n_y g_y}{n_x^2 g_{xx} + 2 n_x n_y g_{xy} + n_y^2 g_{yy}}

so that (p_x, p_y) = (x_o + t n_x, y_o + t n_y) is the extreme point of the light bar gray value near the pixel (x_o, y_o). If this zero of the first derivative lies within the current pixel, i.e. (t n_x, t n_y) ∈ [-0.5, 0.5] × [-0.5, 0.5], then (p_x, p_y) is the required sub-pixel level light bar center point.
A storage medium having stored therein a computer program which, when read by a processor, performs any of the methods described above.
The invention has the following advantages and beneficial effects:
based on the line structured light center line extraction method provided by the invention, firstly, a series of processing such as clipping, image graying, image enhancement, image denoising, image binarization, morphological open-close operation, image light bar area segmentation and the like are carried out on an image collected by a CCD industrial camera; thinning processing is carried out by adopting a thinning algorithm to obtain an image containing the central line of the single-pixel light bar; in order to solve the problems of large operation amount and long operation time of the Steger algorithm, the Steger algorithm is improved. Firstly, determining a region of interest, and performing median filtering on the region; secondly, moving on an image line according to the determined constraint threshold and a 1 multiplied by 5 movable template to find out a rough central point; then solving a Hessian matrix through separability and symmetry of the Gaussian function; and finally, performing Taylor secondary expansion to obtain a sub-pixel level central coordinate. The algorithm has good connectivity, no burr, high precision and good robustness, and the calculation amount is reduced to some extent before the improvement, so that the calculation time is relatively shortened, and the extraction speed is also improved to some extent. The invention can meet the real-time requirement of the visual detection system.
Drawings
FIG. 1 is a schematic flow chart of a method for extracting a line structured light center line according to a preferred embodiment of the present invention;
FIG. 2 is an original light bar image collected by a CCD industrial camera;
FIG. 3 is an image of FIG. 2 after graying;
FIG. 4 is an image of FIG. 3 after a contrast stretch process;
FIG. 5 is an image obtained by performing median filtering and denoising processing on the image shown in FIG. 4;
FIG. 6 is an image after binarization processing of FIG. 5;
FIG. 7 is an image of FIG. 6 after morphological closing operations have been performed;
FIG. 8 is an image of FIG. 7 after Canny operator segmentation;
FIG. 9 is a skeleton of an image obtained by a classical Zhang-Suen parallel fast refinement algorithm;
fig. 10 is an image of fig. 8 after structured light stripe centerline extraction using the modified Steger algorithm.
Detailed Description
The technical solutions in the embodiments of the present invention will be described in detail and clearly with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present invention.
The technical scheme for solving the technical problems is as follows:
fig. 1 is a schematic flow chart of a method for extracting a line structured light center line according to an embodiment of the present invention, which includes the following steps:
step 1, cropping and feature extraction are performed on the image collected by the CCD industrial camera to obtain a light bar image, which is then preprocessed to obtain a preprocessed light bar image;
step 2, carrying out background correction on the preprocessed image to obtain a corrected light bar image;
step 3, carrying out binarization, morphological opening and closing operation and image light strip area segmentation processing on the corrected image to obtain a binarized closed light strip image;
step 4, thinning the binary image by adopting a thinning algorithm to obtain an image containing a single-pixel light strip central line;
step 5, determining a region of interest from the single-pixel light bar center line and performing median filtering on it; moving a 1×5 sliding template along each image row according to the determined constraint threshold to find the rough center point; solving the Hessian matrix through the separability and symmetry of the Gaussian function; and finally performing a second-order Taylor expansion to obtain the sub-pixel center coordinates.
1. Preprocessing of light bar images
In actual measurement, the region of interest of the image is determined in advance, which reduces the amount of calculation. The image then undergoes a series of preprocessing: graying, image enhancement, denoising, binarization, morphological opening and closing operations, light bar region segmentation and so on.
2. Light bar rough center point extraction process
The image with light stripes collected by the camera is processed by the preprocessing method above to obtain an image Z. The mean gray value Z_e and the standard deviation σ of Z are calculated, and the constraint threshold S(T) is determined as follows: (Z_e + σ) is subtracted from every pixel of Z; with P' the total number of pixels and P the number of pixels whose difference is greater than zero, S(T) = P/P'. The specific implementation process is as follows:
(1) To ensure the accuracy of center point extraction, the stripe image is dilated with a 3×3 template, ensuring that the processed image region is wider, in pixels, than the original region.
(2) For an image Z of M×N pixels, the gray value of the pixel in row i, column j is Z(i, j). When Z(i, j) > S(T) (0 ≤ j ≤ N), a 1×5 sliding window is moved along row i and the sum of the gray values of the 5 pixels under the window is counted; the position maximising this 5-pixel sum is the rough position of the light band center in that row.
(3) The search for qualifying points continues in row i+1 until i = M terminates the loop.
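The steps above can be sketched as follows; the per-row skip test is a simplification of the per-pixel constraint Z(i, j) > S(T), and all names and the synthetic bar are illustrative:

```python
import numpy as np

def rough_centers(Z, S_T):
    """Coarse light-bar centre per row: a 1x5 window slides over each row,
    and the window position maximising the sum of its 5 gray values gives
    the coarse centre column. Rows whose maximum gray value does not exceed
    the constraint threshold S_T are skipped (simplified constraint)."""
    M, N = Z.shape
    centers = []
    for i in range(M):                       # loop rows until i = M
        if Z[i].max() <= S_T:
            continue
        sums = np.convolve(Z[i].astype(np.float64), np.ones(5), mode="valid")
        j = int(np.argmax(sums)) + 2         # centre column of the best window
        centers.append((i, j))
    return centers

# synthetic light bar: a 5-pixel-wide bright ridge centred on column 6
Z = np.zeros((3, 12), dtype=np.uint8)
Z[:, 4:9] = 200
pts = rough_centers(Z, S_T=50)  # -> [(0, 6), (1, 6), (2, 6)]
```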
3. Method for solving rough center point normal direction by using improved Steger algorithm
The Hessian matrix of the two-dimensional image can be represented as

H(x, y) = \begin{bmatrix} r_{xx} & r_{xy} \\ r_{xy} & r_{yy} \end{bmatrix}   (1.1)

In formula (1.1), x and y are the horizontal and vertical coordinates of a point (x, y) on the structured light stripe; H(x, y) is the Hessian matrix function and g(x, y) the two-dimensional Gaussian function with variance σ². The second partial derivatives r_{xx}, r_{xy} and r_{yy} of the image gray function r(x, y) are obtained by convolving the corresponding second derivatives of the Gaussian kernel with the original image:

r_{xx} = \frac{\partial^2 g(x, y)}{\partial x^2} \otimes r(x, y), \quad r_{xy} = \frac{\partial^2 g(x, y)}{\partial x \partial y} \otimes r(x, y), \quad r_{yy} = \frac{\partial^2 g(x, y)}{\partial y^2} \otimes r(x, y)

The normal direction (n_x, n_y) of the image is the eigenvector corresponding to the eigenvalue of largest absolute value of the Hessian matrix, and that eigenvalue is the second-order directional derivative of the image gray function. Obtaining the Hessian matrix requires at least 5 two-dimensional Gaussian convolutions per image pixel, and this large amount of two-dimensional convolution makes the Steger algorithm difficult to apply to center line extraction with high real-time requirements. However, convolving an image pixel with a two-dimensional Gaussian template has the same effect as convolving it successively with two one-dimensional Gaussian templates. This separability of the Gaussian convolution reduces the amount of convolution computation, and the differential forms of the Gaussian convolution kernel also satisfy this property.
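The separable computation of the second partial derivatives can be sketched as follows; the kernel truncation radius of 3σ, the choice σ = 1.5 and all names are assumptions, and the 1-D derivative kernels are taken up to the normalisation of the 1-D Gaussian:

```python
import numpy as np

def gauss_1d(sigma):
    r = int(3 * sigma)
    x = np.arange(-r, r + 1, dtype=np.float64)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    return x, g / g.sum()

def hessian_images(img, sigma=1.5):
    """r_xx, r_xy, r_yy for the Hessian matrix, each computed with two 1-D
    convolutions instead of one 2-D convolution, using the separability
    g(x, y) = g(x) g(y) of the Gaussian kernel."""
    x, g = gauss_1d(sigma)
    dg = -x / sigma ** 2 * g                        # 1-D first derivative
    ddg = (x ** 2 / sigma ** 2 - 1.0) / sigma ** 2 * g  # 1-D second derivative

    def sep(row_k, col_k):  # convolve rows with row_k, then columns with col_k
        tmp = np.apply_along_axis(np.convolve, 1, img.astype(np.float64), row_k, "same")
        return np.apply_along_axis(np.convolve, 0, tmp, col_k, "same")

    return sep(ddg, g), sep(dg, dg), sep(g, ddg)    # r_xx, r_xy, r_yy

# a bright vertical line: strong negative curvature across it (r_xx < 0)
Z = np.zeros((11, 11))
Z[:, 5] = 100.0
r_xx, r_xy, r_yy = hessian_images(Z)
```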
4. Subpixel level light bar center point extraction
Any pixel point (x_o, y_o) in the two-dimensional image and its neighboring pixel points can be expressed by a second-order Taylor polynomial:

f(x_o + u, y_o + v) \approx g(x_o, y_o) + u\, g_x(x_o, y_o) + v\, g_y(x_o, y_o) + \frac{1}{2}\left[u^2 g_{xx}(x_o, y_o) + 2uv\, g_{xy}(x_o, y_o) + v^2 g_{yy}(x_o, y_o)\right]

The quantities g(x_o, y_o), g_x(x_o, y_o), g_y(x_o, y_o), g_{xx}(x_o, y_o), g_{xy}(x_o, y_o) and g_{yy}(x_o, y_o) are obtained by convolving the image f(x, y) with the corresponding Gaussian kernels.
In the two-dimensional image f(x, y), the first-order directional derivative along the edge-normal direction n(x, y) is zero, and the center point of the line is the point where the absolute value of the second-order directional derivative is maximal. Let (n_x, n_y) denote the direction of n(x, y), with modulus 1. Setting the first-order directional derivative along (n_x, n_y) to zero gives

t = -\frac{n_x g_x + n_y g_y}{n_x^2 g_{xx} + 2 n_x n_y g_{xy} + n_y^2 g_{yy}}

so that (p_x, p_y) = (x_o + t n_x, y_o + t n_y) is the extreme point of the light bar gray value near the pixel (x_o, y_o). If this zero of the first derivative lies within the current pixel, i.e. (t n_x, t n_y) ∈ [-0.5, 0.5] × [-0.5, 0.5], then (p_x, p_y) is the required sub-pixel level light bar center point.
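The sub-pixel step can be sketched numerically as follows; the eigenvector computation via numpy.linalg.eigh and the toy derivative values (taken from a parabolic profile peaking at x = 0.25) are illustrative assumptions, not values from the patent:

```python
import numpy as np

def subpixel_center(gx, gy, gxx, gxy, gyy, x0, y0):
    """Sub-pixel centre: the normal (nx, ny) is the eigenvector of the
    Hessian with the largest-magnitude eigenvalue; t zeroes the first
    directional derivative along it; the offset is accepted only if it
    stays inside the current pixel ([-0.5, 0.5] in each direction)."""
    H = np.array([[gxx, gxy], [gxy, gyy]], dtype=np.float64)
    vals, vecs = np.linalg.eigh(H)
    n = vecs[:, int(np.argmax(np.abs(vals)))]   # unit normal direction
    nx, ny = float(n[0]), float(n[1])
    t = -(nx * gx + ny * gy) / (nx * nx * gxx + 2 * nx * ny * gxy + ny * ny * gyy)
    if abs(t * nx) <= 0.5 and abs(t * ny) <= 0.5:
        return x0 + t * nx, y0 + t * ny
    return None  # the extremum lies outside the current pixel

# profile g = 100 - 4*(x - 0.25)^2 sampled at (0, 0): gx = 2, gxx = -8,
# flat in y apart from weak curvature gyy = -1; true peak at x = 0.25
p = subpixel_center(gx=2.0, gy=0.0, gxx=-8.0, gxy=0.0, gyy=-1.0, x0=0.0, y0=0.0)
```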
The above examples are to be construed as merely illustrative and not limitative of the remainder of the disclosure. After reading the description of the invention, the skilled person can make various changes or modifications to the invention, and these equivalent changes and modifications also fall into the scope of the invention defined by the claims.
Claims (8)
1. A line structured light center line extraction method is characterized by comprising the following steps:
step 1, carrying out image graying, image enhancement and image denoising on an image collected by a CCD industrial camera to obtain a preprocessed light bar image;
step 2, carrying out binarization, morphological opening and closing operation and image light bar area segmentation on the preprocessed light bar image to obtain a binarized closed light bar image;
step 3, thinning the binarized closed light bar image of step 2 by a thinning algorithm to obtain an image containing the single-pixel light bar center line;
step 4, determining a region of interest from the single-pixel light bar center line of step 3 and performing median filtering on it; moving a 1×5 sliding template along each image row according to the determined constraint threshold, and solving the rough center point normal direction with the Steger algorithm; solving the Hessian matrix through the separability and symmetry of the Gaussian function; and finally performing a second-order Taylor expansion to obtain the sub-pixel center coordinates.
2. The method for extracting line structured light centerline according to claim 1, wherein the step 1 of preprocessing the light bar image specifically comprises: the image graying, image enhancement and image denoising processing are carried out on the image collected by the CCD industrial camera, and the method specifically comprises the following steps:
in step 1.1, the color image is first converted into a grayscale image in order to reduce the amount of subsequent calculation: the average of the R, G and B components of each pixel is calculated and assigned to all three components of that pixel, graying the image;
step 1.2, stretching the gray values of the grayed image to the whole 0-255 interval by a gray transform, greatly enhancing the contrast, and mapping the gray value of a pixel to a larger gray space by the following formula:

I'(x, y) = \frac{I(x, y) - I_{min}}{I_{max} - I_{min}} (MAX - MIN) + MIN   (1)

In formula (1), x and y are the horizontal and vertical coordinates of an image pixel (x, y); I(x, y) is the original image, and I_{min} and I_{max} are its minimum and maximum gray values; MIN and MAX are the minimum and maximum of the gray space to be stretched to.
step 1.3, performing median filtering on the image: a sliding window containing an odd number of points is used, and the gray value of the center point is replaced by the median of the gray values in the window, i.e. the gray values in the window are sorted and the middle value is assigned to the center point, specifically comprising the following steps:
(1) obtaining the first address of a source image and the width and height of the image;
(2) opening up a memory buffer area for temporarily storing the result image and initializing the result image to be 0;
(3) scanning pixel points in the image one by one, sequencing pixel values of all elements in the neighborhood of the pixel points from small to large, and assigning the obtained intermediate value to the pixel point corresponding to the current point in the target image;
(4) the step (3) is circulated until all pixel points of the source image are processed;
(5) and copying the result from the memory buffer area to the data area of the source image.
3. The method for extracting the line structured light center line according to claim 1, wherein the step 2 performs binarization, morphological opening and closing operation and image light bar area segmentation on the preprocessed light bar image to obtain a binarized closed light bar image, and specifically comprises:
step 2.1, setting the image into two different levels respectively by using the difference between the target and the background in the image, and selecting a proper threshold value to determine whether a certain pixel is the target or the background so as to obtain a binary image;
step 2.2, performing a morphological closing operation on the image: through dilation followed by erosion, fine holes inside the object are filled, adjacent objects are connected and object boundaries are smoothed without obviously changing their area, so that the light bar center line position can be determined subsequently;
step 2.3, the image is subjected to light bar region segmentation processing, and a specific algorithm for solving the edge point by using a Canny operator comprises the following steps:
(1) smoothing the image with a Gaussian filter;
(2) calculating gradient amplitude and direction by using first-order partial derivative finite difference;
(3) carrying out non-maximum suppression on the gradient amplitude;
(4) edges are detected and connected using a dual threshold algorithm.
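The four Canny steps can be sketched in pure NumPy (a compact illustration, not the claimed implementation; the 3 × 3 Gaussian kernel, the 4-sector direction quantisation and the thresholds 20/60 are all assumptions):

```python
import numpy as np

def canny(img, low=20, high=60):
    """Compact sketch of the four Canny steps listed in step 2.3."""
    img = img.astype(float)
    # (1) smooth with a 3x3 Gaussian approximation
    g = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0
    p = np.pad(img, 1, mode="edge")
    sm = sum(g[a, b] * p[a:a + img.shape[0], b:b + img.shape[1]]
             for a in range(3) for b in range(3))
    # (2) gradient amplitude and direction via first-order finite differences
    gx = np.zeros_like(sm)
    gy = np.zeros_like(sm)
    gx[:, 1:-1] = (sm[:, 2:] - sm[:, :-2]) / 2
    gy[1:-1, :] = (sm[2:, :] - sm[:-2, :]) / 2
    mag = np.hypot(gx, gy)
    ang = (np.rad2deg(np.arctan2(gy, gx)) + 180) % 180
    # (3) non-maximum suppression along the quantised gradient direction
    nms = np.zeros_like(mag)
    for i in range(1, mag.shape[0] - 1):
        for j in range(1, mag.shape[1] - 1):
            a = ang[i, j]
            if a < 22.5 or a >= 157.5:
                n1, n2 = mag[i, j - 1], mag[i, j + 1]
            elif a < 67.5:
                n1, n2 = mag[i - 1, j + 1], mag[i + 1, j - 1]
            elif a < 112.5:
                n1, n2 = mag[i - 1, j], mag[i + 1, j]
            else:
                n1, n2 = mag[i - 1, j - 1], mag[i + 1, j + 1]
            if mag[i, j] >= n1 and mag[i, j] >= n2:
                nms[i, j] = mag[i, j]
    # (4) double-threshold: keep strong edges, keep weak ones next to strong
    strong = nms >= high
    weak = (nms >= low) & ~strong
    out = strong.copy()
    for i in range(1, out.shape[0] - 1):
        for j in range(1, out.shape[1] - 1):
            if weak[i, j] and strong[i - 1:i + 2, j - 1:j + 2].any():
                out[i, j] = True
    return out
```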
4. The method for extracting the line structured light center line according to claim 3, wherein in step 3, the binarized closed light bar image obtained in step 2 is thinned by adopting a thinning algorithm to obtain an image containing a single-pixel light bar center line, and the method specifically comprises the following steps:
and obtaining a skeleton of the image through a Zhang-Suen thinning algorithm, wherein the skeleton is used as one of the characteristics of the image and is used for recognition or pattern matching.
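A sketch of the Zhang-Suen thinning algorithm named in this claim (pure NumPy; treating the one-pixel image border as background is an assumption):

```python
import numpy as np

def zhang_suen_thin(img):
    """Iteratively peel boundary pixels in two sub-passes until the
    binary light bar is reduced to a one-pixel-wide skeleton."""
    img = (np.asarray(img) > 0).astype(np.uint8)
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_del = []
            for i in range(1, img.shape[0] - 1):
                for j in range(1, img.shape[1] - 1):
                    if img[i, j] == 0:
                        continue
                    # 8-neighbours P2..P9, clockwise starting from north
                    p = [img[i-1, j], img[i-1, j+1], img[i, j+1],
                         img[i+1, j+1], img[i+1, j], img[i+1, j-1],
                         img[i, j-1], img[i-1, j-1]]
                    b = sum(p)  # B(P1): number of non-zero neighbours
                    # A(P1): number of 0 -> 1 transitions around the ring
                    a = sum(p[k] == 0 and p[(k + 1) % 8] == 1
                            for k in range(8))
                    if step == 0:
                        cond = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        cond = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if 2 <= b <= 6 and a == 1 and cond:
                        to_del.append((i, j))
            for i, j in to_del:   # batched deletion after each sub-pass
                img[i, j] = 0
            if to_del:
                changed = True
    return img
```

The resulting skeleton is what the claim uses as an image feature for recognition or pattern matching.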
5. The method as claimed in claim 4, wherein the step 4 of extracting the rough center point of the optical stripe includes:
step 4.1, in order to ensure the precision of the center point extraction, firstly, a template with the size of 3 × 3 is used for performing dilation processing on the light bar image, ensuring that the pixel width of the processed light bar region is larger than that of the original region;
step 4.2, for an image Z with M × N pixels, the gray value of the pixel point in the ith row and jth column is expressed as Z(i, j); when Z(i, j) > S(T) (0 ≤ j ≤ N), a movable 1 × 5 window is moved along the ith row of the image, the sum of the gray values of the 5 pixel points under the movable window is counted, and the point that maximizes this 5-pixel sum in the row is the rough position of the light bar center on that row;
step 4.3, continuing to find the eligible points in row i + 1 until i = M, at which point the loop terminates.
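Steps 4.2–4.3 can be sketched as follows (NumPy; skipping rows whose maximum gray value does not exceed the threshold is an assumption, since the claim leaves the behaviour of rows without a light bar unspecified):

```python
import numpy as np

def coarse_centers(img, thresh):
    """Per row, slide a 1x5 window and take the column whose 5-pixel gray
    sum is largest as the coarse light-bar centre on that row."""
    centers = []
    h, w = img.shape
    for i in range(h):
        row = img[i].astype(float)
        if row.max() <= thresh:
            centers.append(None)      # no light bar on this row (assumption)
            continue
        # sums over all 1x5 windows; index k is the window's left column
        sums = np.convolve(row, np.ones(5), mode="valid")
        centers.append(int(np.argmax(sums)) + 2)  # centre of the best window
    return centers
```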
6. The method as claimed in claim 5, wherein the step 5 of obtaining the Hessian matrix according to the separability and symmetry of the Gaussian function includes:
the Hessian matrix of the two-dimensional image may be represented as:
in the formula (1.2), x and y represent horizontal and vertical coordinates of any point (x and y) on the structured light stripe; h (x, y) and g (x, y) respectively represent a Hessian matrix function and a two-dimensional Gaussian function, and the Gaussian variance is setrxx、rxyAnd ryyThe second-order partial derivative of the image gray function r (x, y) is obtained by using the convolution operation of the Gaussian kernel function and the original image to obtain the following formula:
wherein x and y represent the horizontal and vertical coordinates of any point (x and y) on the structural light stripe; g (x, y) represents a two-dimensional Gaussian function, and the variance of the Gaussian is setNormal direction of the image (n)x,ny) The feature vector corresponding to the feature value with the maximum absolute value in the Hessian matrix is obtained, the second-order directional derivative of the image gray function is the feature value with the maximum absolute value in the Hessian matrix, and the Hessian matrix can be obtained only by performing two-dimensional Gaussian convolution on each pixel point of the image for at least 5 times.
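A sketch of this Hessian computation (NumPy; the function names, σ = 2 and the 3σ kernel radius are illustrative assumptions, not from the patent): the Hessian entries come from Gaussian second-derivative convolutions, and the stripe normal is the eigenvector of the largest-magnitude eigenvalue.

```python
import numpy as np

def gaussian_kernels(sigma, radius):
    # 2-D Gaussian g and its second-order partial derivatives g_xx, g_xy, g_yy
    ax = np.arange(-radius, radius + 1)
    x, y = np.meshgrid(ax, ax)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    gxx = (x**2 / sigma**4 - 1 / sigma**2) * g
    gyy = (y**2 / sigma**4 - 1 / sigma**2) * g
    gxy = (x * y / sigma**4) * g
    return gxx, gxy, gyy

def conv2(img, k):
    # direct 'same' correlation with edge padding (the kernels used here
    # are symmetric, so correlation equals convolution)
    r = k.shape[0] // 2
    p = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for a in range(k.shape[0]):
        for b in range(k.shape[1]):
            out += k[a, b] * p[a:a + img.shape[0], b:b + img.shape[1]]
    return out

def stripe_normal(img, i, j, sigma=2.0):
    """H = [[r_xx, r_xy], [r_xy, r_yy]] per formula (1.2); the normal is the
    eigenvector of the eigenvalue with maximum absolute value."""
    radius = int(3 * sigma)
    gxx, gxy, gyy = gaussian_kernels(sigma, radius)
    rxx, rxy, ryy = (conv2(img, k)[i, j] for k in (gxx, gxy, gyy))
    H = np.array([[rxx, rxy], [rxy, ryy]])
    vals, vecs = np.linalg.eigh(H)
    nx, ny = vecs[:, np.argmax(np.abs(vals))]   # unit normal direction
    return nx, ny
```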
7. The method for extracting line structured light centerline as claimed in claim 6, wherein in step 5, the obtaining of the sub-pixel level center coordinates by taylor quadratic expansion specifically comprises:
any pixel point (x_o, y_o) in the two-dimensional image and its adjacent pixel points can be expressed by a quadratic Taylor polynomial as follows:

f(x_o + t·n_x, y_o + t·n_y) ≈ g(x_o, y_o) + t·n_x·g_x(x_o, y_o) + t·n_y·g_y(x_o, y_o) + (t·n_x)²·g_xx(x_o, y_o)/2 + t²·n_x·n_y·g_xy(x_o, y_o) + (t·n_y)²·g_yy(x_o, y_o)/2

g(x_o, y_o), g_x(x_o, y_o), g_y(x_o, y_o), g_xx(x_o, y_o), g_xy(x_o, y_o) and g_yy(x_o, y_o) can be obtained by convolving the image f(x, y) with the Gaussian kernel and its partial derivatives.

In the two-dimensional image f(x, y), the first-order directional derivative along the edge direction n(x, y) is zero at the line center, and the center point of the line edge is the point where the absolute value of the second-order directional derivative is maximum. (n_x, n_y) denotes the edge direction of n(x, y) and has a modulus of 1. Setting the first-order directional derivative along (n_x, n_y) to zero yields

t = −(n_x·g_x(x_o, y_o) + n_y·g_y(x_o, y_o)) / (n_x²·g_xx(x_o, y_o) + 2·n_x·n_y·g_xy(x_o, y_o) + n_y²·g_yy(x_o, y_o))

Thus (p_x, p_y) = (x_o + t·n_x, y_o + t·n_y) is the extreme point of the gray value of the light bar image at the point (x_o, y_o). If the point making the first derivative zero lies within the current pixel, i.e. (t·n_x, t·n_y) ∈ [−0.5, 0.5] × [−0.5, 0.5], then (p_x, p_y) is the required sub-pixel level light bar center point.
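A minimal sketch of this sub-pixel step (pure Python; the function and argument names are illustrative): given the Gaussian derivatives and the unit normal at a pixel, solve for the offset t along the normal and accept the point only if it lies within the current pixel.

```python
def subpixel_center(g_x, g_y, g_xx, g_xy, g_yy, nx, ny, xo, yo):
    """Zero of the first directional derivative along the unit normal
    (nx, ny) at pixel (xo, yo); returns the sub-pixel centre, or None
    if the extremum falls outside the current pixel."""
    denom = nx**2 * g_xx + 2 * nx * ny * g_xy + ny**2 * g_yy
    if denom == 0:
        return None                       # degenerate second derivative
    t = -(nx * g_x + ny * g_y) / denom
    if -0.5 <= t * nx <= 0.5 and -0.5 <= t * ny <= 0.5:
        return (xo + t * nx, yo + t * ny)
    return None                           # extremum not inside this pixel
```

For a purely horizontal normal (nx, ny) = (1, 0), this reduces to the familiar one-dimensional parabolic peak offset t = −g_x / g_xx.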
8. A storage medium having a computer program stored therein, wherein the computer program, when read by a processor, performs the method of any of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910906682.5A CN110866924B (en) | 2019-09-24 | 2019-09-24 | Line structured light center line extraction method and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110866924A true CN110866924A (en) | 2020-03-06 |
CN110866924B CN110866924B (en) | 2023-04-07 |
Family
ID=69652398
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910906682.5A Active CN110866924B (en) | 2019-09-24 | 2019-09-24 | Line structured light center line extraction method and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110866924B (en) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111462214A (en) * | 2020-03-19 | 2020-07-28 | 南京理工大学 | Line structure light stripe central line extraction method based on Hough transformation |
CN111524099A (en) * | 2020-04-09 | 2020-08-11 | 武汉钢铁有限公司 | Method for evaluating geometric parameters of cross section of sample |
CN111932506A (en) * | 2020-07-22 | 2020-11-13 | 四川大学 | Method for extracting discontinuous straight line in image |
CN112017206A (en) * | 2020-08-31 | 2020-12-01 | 河北工程大学 | Directional sliding self-adaptive threshold value binarization method based on line structure light image |
CN112102189A (en) * | 2020-09-14 | 2020-12-18 | 江苏科技大学 | Method for extracting central line of light strip of line structure |
CN112489052A (en) * | 2020-11-24 | 2021-03-12 | 江苏科技大学 | Line structure light central line extraction method under complex environment |
CN113029021A (en) * | 2020-08-04 | 2021-06-25 | 南京航空航天大学 | Light strip refining method for line laser skin butt-joint measurement |
CN113256706A (en) * | 2021-05-19 | 2021-08-13 | 天津大学 | ZYNQ-based real-time light stripe center extraction system and method |
CN113256518A (en) * | 2021-05-20 | 2021-08-13 | 上海理工大学 | Structured light image enhancement method for intraoral 3D reconstruction |
CN113947543A (en) * | 2021-10-15 | 2022-01-18 | 天津大学 | Method for correcting center of curved light bar in unbiased mode |
WO2022016873A1 (en) * | 2020-07-23 | 2022-01-27 | Zhejiang Hanchine Ai Tech. Co., Ltd. | Multi-line laser three-dimensional imaging method and system based on random lattice |
CN114627141A (en) * | 2022-05-16 | 2022-06-14 | 沈阳和研科技有限公司 | Cutting path center detection method and system |
CN115393172A (en) * | 2022-08-26 | 2022-11-25 | 无锡砺成智能装备有限公司 | Method and equipment for extracting light stripe centers in real time based on GPU |
CN115953459A (en) * | 2023-03-10 | 2023-04-11 | 齐鲁工业大学(山东省科学院) | Method for extracting laser stripe center line under complex illumination condition |
US11763473B2 (en) | 2020-07-23 | 2023-09-19 | Zhejiang Hanchine Ai Tech. Co., Ltd. | Multi-line laser three-dimensional imaging method and system based on random lattice |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1763472A (en) * | 2005-11-22 | 2006-04-26 | 北京航空航天大学 | Quick and high-precision method for extracting center of structured light stripe |
CN101109620A (en) * | 2007-09-05 | 2008-01-23 | 北京航空航天大学 | Method for standardizing structural parameter of structure optical vision sensor |
CN101178812A (en) * | 2007-12-10 | 2008-05-14 | 北京航空航天大学 | Mixed image processing process of structure light striation central line extraction |
CN105574869A (en) * | 2015-12-15 | 2016-05-11 | 中国北方车辆研究所 | Line-structure light strip center line extraction method based on improved Laplacian edge detection |
CN106023247A (en) * | 2016-05-05 | 2016-10-12 | 南通职业大学 | Light stripe center extraction tracking method based on space-time tracking |
CN106097430A (en) * | 2016-06-28 | 2016-11-09 | 哈尔滨工程大学 | A kind of laser stripe center line extraction method of many gaussian signals matching |
Non-Patent Citations (2)
Title |
---|
李栋梁: "基于Hessian矩阵的线结构光中心线提取方法研究", 《汽车实用技术》 * |
陈念 等: "基于Hessian矩阵的线机构光光条中心提取", 《数字技术与应用》 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110866924B (en) | Line structured light center line extraction method and storage medium | |
CN108629775B (en) | Thermal state high-speed wire rod surface image processing method | |
CN109580630B (en) | Visual inspection method for defects of mechanical parts | |
CN110717489B (en) | Method, device and storage medium for identifying text region of OSD (on Screen display) | |
CN109961506A (en) | A kind of fusion improves the local scene three-dimensional reconstruction method of Census figure | |
CN107452030B (en) | Image registration method based on contour detection and feature matching | |
CN115170669B (en) | Identification and positioning method and system based on edge feature point set registration and storage medium | |
CN108225319B (en) | Monocular vision rapid relative pose estimation system and method based on target characteristics | |
CN111415376B (en) | Automobile glass subpixel contour extraction method and automobile glass detection method | |
CN110660072B (en) | Method and device for identifying straight line edge, storage medium and electronic equipment | |
US20020158636A1 (en) | Model -based localization and measurement of miniature surface mount components | |
CN112629409A (en) | Method for extracting line structure light stripe center | |
Chen et al. | A color-guided, region-adaptive and depth-selective unified framework for Kinect depth recovery | |
CN115471682A (en) | Image matching method based on SIFT fusion ResNet50 | |
CN114612412A (en) | Processing method of three-dimensional point cloud data, application of processing method, electronic device and storage medium | |
CN108764343B (en) | Method for positioning tracking target frame in tracking algorithm | |
CN113223074A (en) | Underwater laser stripe center extraction method | |
CN113688846A (en) | Object size recognition method, readable storage medium, and object size recognition system | |
CN116503462A (en) | Method and system for quickly extracting circle center of circular spot | |
CN111415365A (en) | Image detection method and device | |
Wang et al. | Rgb-guided depth map recovery by two-stage coarse-to-fine dense crf models | |
CN114882095A (en) | Object height online measurement method based on contour matching | |
Haque et al. | Robust feature-preserving denoising of 3D point clouds | |
CN110490877B (en) | Target segmentation method for binocular stereo image based on Graph Cuts | |
CN112330667A (en) | Morphology-based laser stripe center line extraction method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||