CN110866924B - Line structured light center line extraction method and storage medium - Google Patents

Line structured light center line extraction method and storage medium

Info

Publication number
CN110866924B
CN110866924B (Application No. CN201910906682.5A)
Authority
CN
China
Prior art keywords
image
pixel
point
light bar
gray
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910906682.5A
Other languages
Chinese (zh)
Other versions
CN110866924A (en)
Inventor
杨继平
孟佳佳
冯松
赵立明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN201910906682.5A priority Critical patent/CN110866924B/en
Publication of CN110866924A publication Critical patent/CN110866924A/en
Application granted granted Critical
Publication of CN110866924B publication Critical patent/CN110866924B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]

Abstract

The invention relates to a line structured light center line extraction method, which belongs to the technical field of machine vision and comprises the following steps: the image collected by a CCD industrial camera undergoes a series of processing steps such as clipping, image graying, image enhancement, image denoising, image binarization, morphological opening and closing operations, and light-bar region segmentation; a thinning algorithm is then applied to obtain an image containing the single-pixel light-bar center line; and the Steger algorithm is improved. Firstly, a region of interest is determined and median filtering is applied to it; secondly, a 1 × 5 movable template is moved along each image row according to a determined constraint threshold to find the rough center point; then the Hessian matrix is obtained by exploiting the separability and symmetry of the Gaussian function; finally, a second-order Taylor expansion yields the sub-pixel-level center coordinates. The algorithm has good connectivity, produces no burrs, is simple to operate and computationally efficient, and achieves high extraction speed and accuracy. The invention can meet the real-time requirements of a visual inspection system.

Description

Line structured light center line extraction method and storage medium
Technical Field
The invention belongs to the technical field of machine vision, and particularly relates to a line structured light center line extraction method.
Background
The three-dimensional measurement technology is the basis of three-dimensional reconstruction and, according to the measurement mode, can be divided into contact measurement and non-contact measurement. With the rapid development of non-contact measurement technology, structured-light-based non-contact measurement methods in particular have become widely used in actual production and daily life.
In three-dimensional reconstruction, the extraction of the center line of the structured light stripe is extremely important, because the projection angle of the stripe is calculated from the stripe center line; the quality of the center line extraction therefore directly affects the precision of the three-dimensional reconstruction result.
The geometric center method takes the midpoint of the two end points of each light-bar section as the center point of that section; the algorithm is simple and extraction is fast, but the precision is low and the universality is poor. The gray gravity-center method fits the gray distribution curve of the light band and searches for the position of its maximum; it can overcome the error caused by an asymmetric gray distribution of the light bar, but its stability is poor and its precision is low. The skeleton extraction method decides, within the neighborhood of each point on the image boundary, whether the point is kept or deleted according to certain conditions, and iterates until a single-pixel center line is obtained; the algorithm is simple to operate but of low precision. The gradient gravity-center method computes the gradient of the light-band region and takes the gradient-weighted average as the center position; it is robust but computationally expensive. The extreme-value method takes the point of maximum light-bar gray value as a point on the center line; extraction is fast but noise immunity is poor. The Steger algorithm uses the Hessian matrix to find the extreme point along the normal direction at each point of the light band, thereby obtaining the sub-pixel position of the light-band center line; it has high precision and good robustness, but the amount of computation is large, making it difficult to meet real-time requirements.
Therefore, a line structured light center line extraction method with a simple algorithm, high operation efficiency, high extraction speed and high accuracy is needed.
Disclosure of Invention
The present invention is directed to solving the above problems of the prior art. The method for extracting the line-structured light center line has the advantages of simple algorithm operation, high operation efficiency, high extraction speed and high precision. The technical scheme of the invention is as follows:
a line structured light centerline extraction method, comprising the steps of:
step 1, carrying out image graying, image enhancement and image denoising on an image collected by a CCD industrial camera to obtain a preprocessed light strip image;
step 2, carrying out binarization, morphological opening and closing operation and image light bar area segmentation on the preprocessed light bar image to obtain a binarized closed light bar image;
step 3, thinning the closed light bar image binarized in step 2 by adopting a thinning algorithm to obtain an image containing the single-pixel light-bar center line;
step 4, determining a region of interest from the central line of the single-pixel light bar in step 3, and performing median filtering on the region of interest; moving a 1 × 5 movable template along each image row according to the determined constraint threshold, and solving the normal direction at the rough center point by using the Steger algorithm; solving the Hessian matrix through the separability and symmetry of the Gaussian function; and finally, performing a second-order Taylor expansion to obtain the sub-pixel level center coordinates.
Further, the preprocessing of the light bar image in step 1, namely image graying, image enhancement and image denoising of the image collected by the CCD industrial camera, specifically comprises the following steps:
Step 1.1, converting the color image into a grayscale image to reduce the amount of subsequent calculation: the average of the R, G and B components of each pixel point is computed and assigned to all three components of that pixel, thereby graying the image;
Step 1.2, stretching the gray values of the grayed image to the whole 0–255 interval by a gray-scale transformation, which greatly enhances the image contrast. The following formula maps the gray value of a pixel to a larger gray space:
I'(x, y) = (I(x, y) - I_min) / (I_max - I_min) × (MAX - MIN) + MIN        (1)
In formula (1), x and y represent the horizontal and vertical coordinates of image pixel point (x, y); I(x, y) represents the original image, and I_min and I_max are its minimum and maximum gray values; MIN and MAX are the minimum and maximum values of the gray space to be stretched to.
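As a concrete illustration of formula (1), the following is a minimal Python sketch of the gray-scale stretch, assuming an 8-bit single-channel NumPy image; the function name and the default target range are chosen for illustration only.

```python
import numpy as np

def stretch_gray(img, new_min=0, new_max=255):
    """Linearly stretch the gray values of `img` to [new_min, new_max] (formula (1))."""
    img = img.astype(np.float32)
    i_min, i_max = img.min(), img.max()
    if i_max == i_min:                       # flat image: nothing to stretch
        return np.full(img.shape, new_min, dtype=np.uint8)
    out = (img - i_min) / (i_max - i_min) * (new_max - new_min) + new_min
    return out.astype(np.uint8)
```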
Step 1.3, performing median filtering on the image: a sliding window containing an odd number of points is used, and the gray value of the center point is replaced by the median of the gray values in the window, i.e. the gray values in the window are sorted and the middle value is assigned to the center point. The specific steps are as follows (a code sketch is given after the list):
(1) Obtaining the first address of the source image and the width and height of the image;
(2) Opening up a memory buffer area for temporarily storing the result image and initializing it to 0;
(3) Scanning the pixel points of the image one by one, sorting the pixel values of all elements in the neighborhood of each pixel from small to large, and assigning the obtained middle value to the corresponding pixel of the target image;
(4) Step (3) is repeated until all pixel points of the source image have been processed;
(5) Copying the result from the memory buffer area back to the data area of the source image.
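Below is a minimal sketch of median filtering steps (1)–(5), assuming a 2-D grayscale NumPy image and an odd window size; in practice a library routine such as OpenCV's cv2.medianBlur would typically replace the explicit loops.

```python
import numpy as np

def median_filter(src, ksize=3):
    """Replace each pixel with the median gray value of its ksize x ksize neighborhood."""
    assert ksize % 2 == 1, "the sliding window must contain an odd number of points"
    pad = ksize // 2
    padded = np.pad(src, pad, mode='edge')   # handle the image border
    dst = np.zeros_like(src)                 # result buffer initialized to 0
    rows, cols = src.shape
    for i in range(rows):                    # scan pixel points one by one
        for j in range(cols):
            window = padded[i:i + ksize, j:j + ksize]
            dst[i, j] = np.median(window)    # middle value of the sorted neighborhood
    return dst
```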
Further, step 2 performs binarization, morphological opening and closing operation and image light bar region segmentation processing on the preprocessed light bar image to obtain a binarized closed light bar image, and specifically includes:
Step 2.1, using the difference between the target and the background in the image, dividing the image into two different levels: a proper threshold is selected to decide whether each pixel belongs to the target or to the background, so as to obtain a binarized image;
Step 2.2, performing a morphological closing operation on the image: through dilation followed by erosion, fine holes inside the object are filled, adjacent objects are connected and object boundaries are smoothed without noticeably changing their area, so that the position of the light-bar center line can be determined subsequently;
Step 2.3, performing light-bar region segmentation on the image; the edge points are found with the Canny operator as follows (see the sketch after this list):
(1) Smoothing the image with a Gaussian filter;
(2) Calculating the gradient amplitude and direction using finite differences of the first-order partial derivatives;
(3) Applying non-maximum suppression to the gradient amplitude;
(4) Detecting and connecting edges using a double-threshold algorithm.
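A minimal sketch of steps 2.1–2.3 using OpenCV is given below; the Otsu threshold, the 3 × 3 structuring element and the Canny thresholds are assumptions chosen for illustration, since the text only requires a suitable threshold, a closing operation and the Canny operator.

```python
import cv2

def segment_light_bar(gray):
    # Step 2.1: separate target and background with a threshold (Otsu chosen here)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Step 2.2: morphological closing (dilation then erosion) fills small holes
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    # Step 2.3: Canny edge detection (Gaussian smoothing, gradient, NMS, double threshold)
    edges = cv2.Canny(closed, 50, 150)
    return closed, edges
```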
Further, in step 3, a thinning algorithm is adopted to thin the closed light bar image binarized in step 2, so as to obtain an image containing the single-pixel light-bar center line, specifically:
A skeleton of the image is obtained through the Zhang-Suen thinning algorithm; the skeleton serves as one of the features of the image and is used for recognition or pattern matching. The flow of the classic Zhang-Suen parallel fast thinning algorithm is shown in FIG. 9 (a code sketch follows).
Further, in the step 4, the process of extracting the rough center point of the light bar specifically includes:
step 4.1, in order to guarantee the precision of center point extraction, a 3 × 3 template is used to dilate the light bar image, ensuring that the pixel width of the processed light-bar region is larger than that of the original region;
step 4.2, for an image Z with M × N pixels, the gray value of the pixel point at the ith row and jth column is expressed as Z(i, j); when Z(i, j) > S(T) (0 ≤ j ≤ N) holds, a 1 × 5 movable window is moved along image row i, the sum of the gray values of the 5 pixel points under the window is counted, and the point that maximizes this sum in the row is the rough position of the light-band center on that row;
step 4.3, the search for qualifying points continues in row i + 1 until i = M, at which point the loop terminates.
Further, in the step 4, the solving of the Hessian matrix through the separability and symmetry of the Gaussian function specifically includes:
the Hessian matrix of the two-dimensional image may be represented as:
H(x, y) = [ ∂²(g ⊗ I)/∂x²  ∂²(g ⊗ I)/∂x∂y ; ∂²(g ⊗ I)/∂x∂y  ∂²(g ⊗ I)/∂y² ] = [ r_xx  r_xy ; r_xy  r_yy ]        (1.2)
In formula (1.2), x and y represent the horizontal and vertical coordinates of any point (x, y) on the structured light stripe; H(x, y) and g(x, y) respectively denote the Hessian matrix function and the two-dimensional Gaussian function, whose standard deviation is σ; I(x, y) denotes the original image and ⊗ denotes convolution.
r_xx, r_xy and r_yy are the second-order partial derivatives of the image gray function r(x, y); they are obtained by convolving the original image with the corresponding derivatives of the Gaussian kernel, as given by the following formulas:
g(x, y) = (1 / (2πσ²)) · exp( -(x² + y²) / (2σ²) )
r_xx = ∂²g(x, y)/∂x² ⊗ I(x, y)
r_xy = ∂²g(x, y)/∂x∂y ⊗ I(x, y)
r_yy = ∂²g(x, y)/∂y² ⊗ I(x, y)
where x and y represent the horizontal and vertical coordinates of any point (x, y) on the structured light stripe; I(x, y) represents the original image; g(x, y) represents the two-dimensional Gaussian function with standard deviation σ; and ⊗ denotes convolution.
The normal direction (n_x, n_y) of the image is the eigenvector corresponding to the eigenvalue of largest absolute value of the Hessian matrix, and the second-order directional derivative of the image gray function equals that eigenvalue; obtaining the Hessian matrix requires at least five two-dimensional Gaussian convolutions at every pixel point of the image.
The neighborhood of any pixel point (x_o, y_o) in the two-dimensional image can be represented by a second-order Taylor polynomial as follows:
f(x_o + t·n_x, y_o + t·n_y) ≈ g(x_o, y_o) + t·n_x·g_x(x_o, y_o) + t·n_y·g_y(x_o, y_o)
        + (1/2)·t²·n_x²·g_xx(x_o, y_o) + t²·n_x·n_y·g_xy(x_o, y_o) + (1/2)·t²·n_y²·g_yy(x_o, y_o)
g(x_o, y_o), g_x(x_o, y_o), g_y(x_o, y_o), g_xx(x_o, y_o), g_xy(x_o, y_o) and g_yy(x_o, y_o) are obtained by convolving the image f(x, y) with the Gaussian kernel and its partial derivatives.
In the two-dimensional image f(x, y), the first-order directional derivative along the edge direction n(x, y) is zero, and the center point of the line edge is the point at which the absolute value of the second-order directional derivative is largest. (n_x, n_y) denotes the edge direction n(x, y) and has modulus 1; along this direction the first-order directional derivative of the above expansion can be expressed as:
f'(t) = n_x·g_x(x_o, y_o) + n_y·g_y(x_o, y_o) + t·[ n_x²·g_xx(x_o, y_o) + 2·n_x·n_y·g_xy(x_o, y_o) + n_y²·g_yy(x_o, y_o) ]
Setting f'(t) = 0 gives

t = -( n_x·g_x(x_o, y_o) + n_y·g_y(x_o, y_o) ) / ( n_x²·g_xx(x_o, y_o) + 2·n_x·n_y·g_xy(x_o, y_o) + n_y²·g_yy(x_o, y_o) )
Thus (p_x, p_y) = (x_o + t·n_x, y_o + t·n_y) is the extreme point of the light-bar image gray value at image point (x_o, y_o). If the point at which the first derivative vanishes lies within the current pixel, i.e. (t·n_x, t·n_y) ∈ [-0.5, 0.5] × [-0.5, 0.5], then (p_x, p_y) is the required sub-pixel level light-bar center point.
A storage medium having stored therein a computer program which, when read by a processor, performs any of the methods described above.
The invention has the following advantages and beneficial effects:
Based on the line structured light center line extraction method provided by the invention, the image collected by the CCD industrial camera first undergoes a series of processing steps such as clipping, image graying, image enhancement, image denoising, image binarization, morphological opening and closing operations, and light-bar region segmentation; a thinning algorithm is then applied to obtain an image containing the single-pixel light-bar center line; and, in order to solve the problems of the large amount of computation and long running time of the Steger algorithm, the Steger algorithm is improved. Firstly, a region of interest is determined and median filtering is applied to it; secondly, a 1 × 5 movable template is moved along each image row according to the determined constraint threshold to find the rough center point; then the Hessian matrix is obtained through the separability and symmetry of the Gaussian function; finally, a second-order Taylor expansion yields the sub-pixel-level center coordinates. The algorithm has good connectivity, produces no burrs, and offers high precision and good robustness; compared with the unimproved algorithm, the amount of computation is reduced, so the running time is correspondingly shortened and the extraction speed is improved. The invention can meet the real-time requirements of a visual inspection system.
Drawings
FIG. 1 is a schematic flow chart of a method for extracting a line structured light center line according to a preferred embodiment of the present invention;
FIG. 2 is an original light bar image collected by a CCD industrial camera;
FIG. 3 is the image of FIG. 2 after graying;
FIG. 4 is an image of FIG. 3 after a contrast stretch process;
FIG. 5 is an image obtained by performing median filtering and denoising processing on the image shown in FIG. 4;
FIG. 6 is the image after the binarization processing is performed on FIG. 5;
FIG. 7 is an image of FIG. 6 after morphological closing operations have been performed;
FIG. 8 is the image of FIG. 7 after Canny operator segmentation processing;
FIG. 9 is a skeleton of an image obtained by a classical Zhang-Suen parallel fast refinement algorithm;
FIG. 10 is the image of FIG. 8 after structured light stripe centerline extraction using the improved Steger algorithm.
Detailed Description
The technical solutions in the embodiments of the present invention will be described in detail and clearly with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present invention.
The technical scheme for solving the technical problems is as follows:
fig. 1 is a schematic flow chart of a method for extracting a line structured light centerline according to an embodiment of the present invention, including the following steps:
step 1, cutting and feature extraction are carried out on an image collected by a CCD industrial camera to obtain a light strip image, and then the image is preprocessed to obtain a preprocessed light strip image;
step 2, carrying out background correction on the preprocessed image to obtain a corrected light bar image;
step 3, carrying out binarization, morphological opening and closing operation and image light strip area segmentation processing on the corrected image to obtain a binarized closed light strip image;
step 4, thinning the binary image by adopting a thinning algorithm to obtain an image containing the central line of the single-pixel light bar;
step 5, determining a region of interest from the central line of the single-pixel light bar, and performing median filtering on the region of interest; moving a 1 × 5 movable template along each image row according to the determined constraint threshold to find the rough center point; solving the Hessian matrix through the separability and symmetry of the Gaussian function; and finally, performing a second-order Taylor expansion to obtain the sub-pixel level center coordinates.
1. Preprocessing of light bar images
In actual measurement, the region of interest of the image is determined in advance, which reduces the amount of calculation. The image then undergoes a series of preprocessing steps: image graying, image enhancement, image denoising, image binarization, morphological opening and closing operations, and light-bar region segmentation.
2. Light bar rough center point extraction process
The image with light stripes collected by the camera is processed with the preprocessing method above to obtain an image Z. The gray-value mean Z_e, the standard deviation σ and a constraint threshold S(T) of Z are calculated: (Z_e + σ) is subtracted from every pixel of Z, the number P' of pixels whose difference is greater than zero and the sum P of the gray values of these pixels are counted, and S(T) = P/P' is obtained. Because the light-bar gray intensity distribution is close to a Gaussian distribution, a point whose gray value is greater than or equal to S(T) can be regarded as a rough center point of the light bar. The specific implementation process is as follows (a sketch follows the numbered steps):
(1) To ensure the accuracy of center-point extraction, the stripe image is first dilated with a 3 × 3 template so that the pixel width of the processed light-bar region is larger than that of the original region.
(2) For an image Z with M × N pixels, the gray value of the pixel point at row i and column j is expressed as Z(i, j). When Z(i, j) > S(T) (0 ≤ j ≤ N), a 1 × 5 movable window is moved along image row i, the sum of the gray values of the 5 pixel points under the window is counted, and the point that maximizes this sum in the row is the rough position of the light-band center on that row.
(3) Continue searching for qualifying points in row i + 1 until i = M, at which point the loop terminates.
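The following minimal Python sketch of steps (2)–(3) assumes the interpretation of S(T) given above, i.e. the mean gray value of the pixels brighter than Z_e + σ; the dilation of step (1) is assumed to have been done beforehand, and function names are illustrative.

```python
import numpy as np

def rough_centers(Z, win=5):
    """One rough light-bar center per row of the (already dilated) image Z, using S(T)."""
    Z = Z.astype(np.float32)
    z_e, sigma = Z.mean(), Z.std()
    bright = (Z - (z_e + sigma)) > 0                  # pixels whose difference with (Z_e + sigma) is > 0
    s_t = Z[bright].mean() if bright.any() else z_e   # S(T) = P / P'
    half = win // 2
    centers = []
    for i in range(Z.shape[0]):                       # rows i = 0 .. M-1
        row = Z[i]
        candidates = np.where(row > s_t)[0]           # columns j with Z(i, j) > S(T)
        if candidates.size == 0:
            continue
        best_j, best_sum = -1, -np.inf
        for j in candidates:
            lo, hi = max(0, j - half), min(row.size, j + half + 1)
            s = row[lo:hi].sum()                      # sum of the gray values under the 1 x 5 window
            if s > best_sum:
                best_sum, best_j = s, j
        centers.append((i, best_j))                   # rough center of the light band on row i
    return centers
```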
3. Method for solving rough center point normal direction by using improved Steger algorithm
The Hessian matrix of the two-dimensional image may be represented as:
H(x, y) = [ ∂²(g ⊗ I)/∂x²  ∂²(g ⊗ I)/∂x∂y ; ∂²(g ⊗ I)/∂x∂y  ∂²(g ⊗ I)/∂y² ] = [ r_xx  r_xy ; r_xy  r_yy ]        (1.1)
In formula (1.1), x and y represent the horizontal and vertical coordinates of any point (x, y) on the structured light stripe; H(x, y) and g(x, y) respectively denote the Hessian matrix function and the two-dimensional Gaussian function, whose standard deviation is σ; I(x, y) denotes the original image and ⊗ denotes convolution.
r_xx, r_xy and r_yy are the second-order partial derivatives of the image gray function r(x, y); they are obtained by convolving the original image with the corresponding derivatives of the Gaussian kernel, as given by the following formulas:
g(x, y) = (1 / (2πσ²)) · exp( -(x² + y²) / (2σ²) )
r_xx = ∂²g(x, y)/∂x² ⊗ I(x, y)
r_xy = ∂²g(x, y)/∂x∂y ⊗ I(x, y)
r_yy = ∂²g(x, y)/∂y² ⊗ I(x, y)
where x and y represent the horizontal and vertical coordinates of any point (x, y) on the structured light stripe; I(x, y) represents the original image; g(x, y) represents the two-dimensional Gaussian function with standard deviation σ; and ⊗ denotes convolution.
The normal direction (n_x, n_y) of the image is the eigenvector corresponding to the eigenvalue of largest absolute value of the Hessian matrix, and the second-order directional derivative of the image gray function equals that eigenvalue. Obtaining the Hessian matrix requires at least five two-dimensional Gaussian convolutions at every pixel point, which makes the standard Steger algorithm difficult to apply to center line extraction with high real-time requirements. However, convolving a pixel with a two-dimensional Gaussian template has the same effect as convolving with two one-dimensional Gaussian templates; this separability of the Gaussian convolution, which the derivative forms of the Gaussian kernel also satisfy, reduces the amount of convolution computation.
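A minimal sketch of this separable computation is shown below; scipy.ndimage.gaussian_filter applies one 1-D Gaussian (or Gaussian-derivative) kernel per axis, which is exactly the separability described above. The value of σ is an assumption, and the first derivatives are computed as well because the Taylor step in the next section needs them.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_derivatives(gray, sigma=2.0):
    """Gaussian-smoothed derivatives r_x, r_y, r_xx, r_xy, r_yy via separable 1-D convolutions.

    Axis 0 is y (rows) and axis 1 is x (columns); `order` gives the derivative order
    per axis, so each image costs two 1-D convolutions instead of one 2-D convolution.
    """
    img = gray.astype(np.float32)
    r_x  = gaussian_filter(img, sigma, order=(0, 1))   # d/dx
    r_y  = gaussian_filter(img, sigma, order=(1, 0))   # d/dy
    r_xx = gaussian_filter(img, sigma, order=(0, 2))   # d2/dx2
    r_xy = gaussian_filter(img, sigma, order=(1, 1))   # d2/(dx dy)
    r_yy = gaussian_filter(img, sigma, order=(2, 0))   # d2/dy2
    return r_x, r_y, r_xx, r_xy, r_yy
```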
4. Subpixel level light bar center point extraction
The neighborhood of any pixel point (x_o, y_o) in the two-dimensional image can be represented by a second-order Taylor polynomial as follows:
f(x_o + t·n_x, y_o + t·n_y) ≈ g(x_o, y_o) + t·n_x·g_x(x_o, y_o) + t·n_y·g_y(x_o, y_o)
        + (1/2)·t²·n_x²·g_xx(x_o, y_o) + t²·n_x·n_y·g_xy(x_o, y_o) + (1/2)·t²·n_y²·g_yy(x_o, y_o)
g(x_o, y_o), g_x(x_o, y_o), g_y(x_o, y_o), g_xx(x_o, y_o), g_xy(x_o, y_o) and g_yy(x_o, y_o) are obtained by convolving the image f(x, y) with the Gaussian kernel and its partial derivatives.
In the two-dimensional image f(x, y), the first-order directional derivative along the edge direction n(x, y) is zero, and the center point of the line edge is the point at which the absolute value of the second-order directional derivative is largest. (n_x, n_y) denotes the edge direction n(x, y) and has modulus 1; along this direction the first-order directional derivative of the above expansion can be expressed as:
f'(t) = n_x·g_x(x_o, y_o) + n_y·g_y(x_o, y_o) + t·[ n_x²·g_xx(x_o, y_o) + 2·n_x·n_y·g_xy(x_o, y_o) + n_y²·g_yy(x_o, y_o) ]
Setting f'(t) = 0 gives

t = -( n_x·g_x(x_o, y_o) + n_y·g_y(x_o, y_o) ) / ( n_x²·g_xx(x_o, y_o) + 2·n_x·n_y·g_xy(x_o, y_o) + n_y²·g_yy(x_o, y_o) )
Thus (p_x, p_y) = (x_o + t·n_x, y_o + t·n_y) is the extreme point of the light-bar image gray value at image point (x_o, y_o). If the point at which the first derivative vanishes lies within the current pixel, i.e. (t·n_x, t·n_y) ∈ [-0.5, 0.5] × [-0.5, 0.5], then (p_x, p_y) is the required sub-pixel level light-bar center point.
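Putting the formulas above together, the sketch below refines a single rough center (x_o, y_o) to sub-pixel accuracy; it assumes the derivative images of the previous sketch and takes the normal direction (n_x, n_y) as the eigenvector of the Hessian belonging to the eigenvalue of largest magnitude.

```python
import numpy as np

def subpixel_center(x0, y0, r_x, r_y, r_xx, r_xy, r_yy):
    """Return the sub-pixel light-bar center near (x0, y0), or None if it leaves the pixel."""
    gx, gy = r_x[y0, x0], r_y[y0, x0]
    gxx, gxy, gyy = r_xx[y0, x0], r_xy[y0, x0], r_yy[y0, x0]
    H = np.array([[gxx, gxy],
                  [gxy, gyy]])
    vals, vecs = np.linalg.eigh(H)                 # symmetric 2x2 Hessian
    nx, ny = vecs[:, np.argmax(np.abs(vals))]      # normal: eigenvector of largest |eigenvalue|
    denom = nx * nx * gxx + 2 * nx * ny * gxy + ny * ny * gyy
    if denom == 0:
        return None
    t = -(nx * gx + ny * gy) / denom               # zero of the first directional derivative
    if abs(t * nx) <= 0.5 and abs(t * ny) <= 0.5:  # extremum must lie inside the current pixel
        return (x0 + t * nx, y0 + t * ny)          # (p_x, p_y): sub-pixel center point
    return None
```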
The above examples are to be construed as merely illustrative and not limitative of the remainder of the disclosure. After reading the description of the present invention, the skilled person can make various changes or modifications to the invention, and these equivalent changes and modifications also fall into the scope of the invention defined by the claims.

Claims (7)

1. A line structured light center line extraction method is characterized by comprising the following steps:
step 1, carrying out image graying, image enhancement and image denoising on an image collected by a CCD industrial camera to obtain a preprocessed light bar image;
step 2, carrying out binarization, morphological opening and closing operation and image light bar area segmentation on the preprocessed light bar image to obtain a binarized closed light bar image;
step 3, thinning the image of the closed light bar binarized in the step 2 by adopting a thinning algorithm to obtain an image containing a central line of a single-pixel light bar;
step 4, determining a region of interest from the central line of the single-pixel light bar in step 3, and carrying out median filtering on the region of interest; moving a 1 × 5 movable template along each image row according to the determined constraint threshold, and solving the rough center point normal direction by using the Steger algorithm; solving a Hessian matrix through the separability and symmetry of a Gaussian function; finally, performing a second-order Taylor expansion to obtain the sub-pixel level center coordinates;
in the step 4, the process of extracting the rough center point of the light bar specifically includes:
step 4.1, in order to ensure the precision of center point extraction, firstly, a 3 × 3 template is used to dilate the light bar image, ensuring that the pixel width of the processed light-bar region is larger than that of the original region;
step 4.2, for an image Z with M × N pixels, the gray value of the pixel point at the ith row and jth column is expressed as Z(i, j); when Z(i, j) > S(T) (0 ≤ j ≤ N) is satisfied, where S(T) denotes the constraint threshold, a 1 × 5 movable window is moved along image row i, the sum of the gray values of the 5 pixel points under the movable window is counted, and the point that maximizes this sum of 5 pixels in the row is the rough position of the light-band center on that row;
step 4.3, continuing to search for qualifying points in row i + 1 until i = M, at which point the loop terminates.
2. The method for extracting line structured light centerline according to claim 1, wherein the step 1 of preprocessing the light bar image, namely image graying, image enhancement and image denoising of the image acquired by the CCD industrial camera, specifically comprises the following steps:
step 1.1, converting the color image into a gray image: the average value of the R, G and B components of each pixel point is calculated and assigned to the three components of that pixel, thereby graying the image;
step 1.2, stretching the gray value of the image after graying to the whole interval of 0-255 through gray conversion, greatly enhancing the contrast, and mapping the gray value of a certain pixel to a larger gray space by using the following formula:
I'(x, y) = (I(x, y) - I_min) / (I_max - I_min) × (MAX - MIN) + MIN        (1)
in formula (1), x and y represent the horizontal and vertical coordinates of image pixel point (x, y); I(x, y) represents the original image, and I_min and I_max are its minimum and maximum gray values; MIN and MAX are the minimum and maximum values of the gray space to be stretched to.
Step 1.3, performing median filtering on the image: a sliding window containing an odd number of points is adopted, and the gray value of the center point is replaced by the median of the gray values in the window, namely the gray values in the window are sorted and the middle value is assigned to the center point, which specifically comprises the following steps:
(1) Obtaining the first address of a source image and the width and height of the image;
(2) Opening up a memory buffer area for temporarily storing the result image and initializing the result image to be 0;
(3) Scanning pixel points in the image one by one, sequencing pixel values of all elements in the neighborhood of the pixel points from small to large, and assigning the obtained intermediate value to the pixel point corresponding to the current point in the target image;
(4) Step (3) is repeated until all pixel points of the source image have been processed;
(5) Copying the result from the memory buffer area back to the data area of the source image.
3. The method for extracting the line structured light center line according to claim 1, wherein the step 2 performs binarization, morphological opening and closing operation and image light bar area segmentation on the preprocessed light bar image to obtain a binarized closed light bar image, and specifically comprises:
step 2.1, setting the image into two different levels respectively by using the difference between the target and the background in the image, and selecting a threshold value to determine whether a certain pixel is the target or the background so as to obtain a binary image;
step 2.2, performing morphological closed operation processing on the image, filling fine holes in the object, connecting adjacent objects, smoothing the boundary of the objects and not obviously changing the area of the objects through the process of expansion and corrosion so as to determine the position of the central line of the light bar subsequently;
step 2.3, carrying out light bar region segmentation processing on the image, and solving an edge point by using a Canny operator, wherein the specific algorithm comprises the following steps:
(1) Smoothing the image with a Gaussian filter;
(2) Calculating gradient amplitude and direction by using first-order partial derivative finite difference;
(3) Carrying out non-maximum suppression on the gradient amplitude;
(4) Edges are detected and connected using a dual threshold algorithm.
4. The method for extracting the line structured light center line according to claim 3, wherein in the step 3, the closed light bar image binarized in the step 2 is refined by adopting a refinement algorithm to obtain an image containing a single-pixel light bar center line, and the method specifically comprises the following steps:
and obtaining a skeleton of the image through a Zhang-Suen thinning algorithm, wherein the skeleton is used as one of the characteristics of the image and is used for recognition or pattern matching.
5. The method as claimed in claim 1, wherein the step 4 of obtaining the Hessian matrix through the separability and symmetry of the Gaussian function specifically includes:
the Hessian matrix of the two-dimensional image may be represented as:
H(x, y) = [ ∂²(g ⊗ I)/∂x²  ∂²(g ⊗ I)/∂x∂y ; ∂²(g ⊗ I)/∂x∂y  ∂²(g ⊗ I)/∂y² ] = [ r_xx  r_xy ; r_xy  r_yy ]        (1.2)
in formula (1.2), x and y represent the horizontal and vertical coordinates of any point (x, y) on the structured light stripe; H(x, y) and g(x, y) respectively denote the Hessian matrix function and the two-dimensional Gaussian function, whose standard deviation is σ; I(x, y) denotes the original image and ⊗ denotes convolution;
r_xx, r_xy and r_yy are the second-order partial derivatives of the image gray function r(x, y); they are obtained by convolving the original image with the corresponding derivatives of the Gaussian kernel, as given by the following formulas:
g(x, y) = (1 / (2πσ²)) · exp( -(x² + y²) / (2σ²) )
r_xx = ∂²g(x, y)/∂x² ⊗ I(x, y)
r_xy = ∂²g(x, y)/∂x∂y ⊗ I(x, y)
r_yy = ∂²g(x, y)/∂y² ⊗ I(x, y)
where x and y represent the horizontal and vertical coordinates of any point (x, y) on the structured light stripe; I(x, y) represents the original image; g(x, y) represents the two-dimensional Gaussian function with standard deviation σ; and ⊗ denotes convolution.
The normal direction (n_x, n_y) of the image is the eigenvector corresponding to the eigenvalue of largest absolute value of the Hessian matrix, and the second-order directional derivative of the image gray function equals that eigenvalue; obtaining the Hessian matrix requires at least five two-dimensional Gaussian convolutions at every pixel point of the image.
6. The method for extracting a line structured light centerline as claimed in claim 5, wherein in step 4, obtaining the sub-pixel level center coordinates by a second-order Taylor expansion specifically comprises:
any pixel point (x) in two-dimensional image o ,y o ) The adjacent pixel points can be expressed by quadratic taylor polynomial as follows:
f(x_o + t·n_x, y_o + t·n_y) ≈ g(x_o, y_o) + t·n_x·g_x(x_o, y_o) + t·n_y·g_y(x_o, y_o)
        + (1/2)·t²·n_x²·g_xx(x_o, y_o) + t²·n_x·n_y·g_xy(x_o, y_o) + (1/2)·t²·n_y²·g_yy(x_o, y_o)
g(x_o, y_o), g_x(x_o, y_o), g_y(x_o, y_o), g_xx(x_o, y_o), g_xy(x_o, y_o) and g_yy(x_o, y_o) are obtained by convolving the image f(x, y) with the Gaussian kernel and its partial derivatives;
In the two-dimensional image f(x, y), the first-order directional derivative along the edge direction n(x, y) is zero, and the center point of the line edge is the point at which the absolute value of the second-order directional derivative is largest. (n_x, n_y) denotes the edge direction n(x, y) and has modulus 1; along this direction the first-order directional derivative of the above expansion can be expressed as:
f'(t) = n_x·g_x(x_o, y_o) + n_y·g_y(x_o, y_o) + t·[ n_x²·g_xx(x_o, y_o) + 2·n_x·n_y·g_xy(x_o, y_o) + n_y²·g_yy(x_o, y_o) ]
Setting f'(t) = 0 gives

t = -( n_x·g_x(x_o, y_o) + n_y·g_y(x_o, y_o) ) / ( n_x²·g_xx(x_o, y_o) + 2·n_x·n_y·g_xy(x_o, y_o) + n_y²·g_yy(x_o, y_o) )
Thus (p_x, p_y) = (x_o + t·n_x, y_o + t·n_y) is the extreme point of the light-bar image gray value at image point (x_o, y_o); if the point at which the first derivative vanishes lies within the current pixel, i.e. (t·n_x, t·n_y) ∈ [-0.5, 0.5] × [-0.5, 0.5], then (p_x, p_y) is the required sub-pixel level light-bar center point.
7. A storage medium having a computer program stored therein, wherein the computer program, when read by a processor, performs the method of any one of claims 1 to 6.
CN201910906682.5A 2019-09-24 2019-09-24 Line structured light center line extraction method and storage medium Active CN110866924B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910906682.5A CN110866924B (en) 2019-09-24 2019-09-24 Line structured light center line extraction method and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910906682.5A CN110866924B (en) 2019-09-24 2019-09-24 Line structured light center line extraction method and storage medium

Publications (2)

Publication Number Publication Date
CN110866924A CN110866924A (en) 2020-03-06
CN110866924B (en) 2023-04-07

Family

ID=69652398

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910906682.5A Active CN110866924B (en) 2019-09-24 2019-09-24 Line structured light center line extraction method and storage medium

Country Status (1)

Country Link
CN (1) CN110866924B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111462214B (en) * 2020-03-19 2023-06-09 南京理工大学 Line structure light stripe center line extraction method based on Hough transformation
CN111524099A (en) * 2020-04-09 2020-08-11 武汉钢铁有限公司 Method for evaluating geometric parameters of cross section of sample
CN111932506B (en) * 2020-07-22 2023-07-14 四川大学 Method for extracting discontinuous straight line in image
CN111854642B (en) * 2020-07-23 2021-08-10 浙江汉振智能技术有限公司 Multi-line laser three-dimensional imaging method and system based on random dot matrix
US11763473B2 (en) 2020-07-23 2023-09-19 Zhejiang Hanchine Ai Tech. Co., Ltd. Multi-line laser three-dimensional imaging method and system based on random lattice
CN113029021B (en) * 2020-08-04 2022-08-02 南京航空航天大学 Light strip refining method for line laser skin butt-joint measurement
CN112017206A (en) * 2020-08-31 2020-12-01 河北工程大学 Directional sliding self-adaptive threshold value binarization method based on line structure light image
CN112489052A (en) * 2020-11-24 2021-03-12 江苏科技大学 Line structure light central line extraction method under complex environment
CN113256706A (en) * 2021-05-19 2021-08-13 天津大学 ZYNQ-based real-time light stripe center extraction system and method
CN113256518B (en) * 2021-05-20 2022-07-29 上海理工大学 Structured light image enhancement method for intraoral 3D reconstruction
CN113947543B (en) * 2021-10-15 2024-04-12 天津大学 Curve light bar center unbiased correction method
CN114627141B (en) * 2022-05-16 2022-07-22 沈阳和研科技有限公司 Cutting path center detection method and system
CN115393172B (en) * 2022-08-26 2023-09-05 无锡砺成智能装备有限公司 Method and equipment for extracting light stripe center in real time based on GPU
CN115953459B (en) * 2023-03-10 2023-07-25 齐鲁工业大学(山东省科学院) Method for extracting central line of laser stripe under complex illumination condition

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1763472A (en) * 2005-11-22 2006-04-26 北京航空航天大学 Quick and high-precision method for extracting center of structured light stripe
CN101109620A (en) * 2007-09-05 2008-01-23 北京航空航天大学 Method for standardizing structural parameter of structure optical vision sensor
CN101178812A (en) * 2007-12-10 2008-05-14 北京航空航天大学 Mixed image processing process of structure light striation central line extraction
CN105574869A (en) * 2015-12-15 2016-05-11 中国北方车辆研究所 Line-structure light strip center line extraction method based on improved Laplacian edge detection
CN106023247A (en) * 2016-05-05 2016-10-12 南通职业大学 Light stripe center extraction tracking method based on space-time tracking
CN106097430A (en) * 2016-06-28 2016-11-09 哈尔滨工程大学 A kind of laser stripe center line extraction method of many gaussian signals matching

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Line structured light stripe center extraction based on the Hessian matrix; Chen Nian et al.; Digital Technology and Application; 2019-03-31; Vol. 37, No. 03; full text *
Research on a line structured light center line extraction method based on the Hessian matrix; Li Dongliang; Automobile Applied Technology; 2017-11-30; full text *

Also Published As

Publication number Publication date
CN110866924A (en) 2020-03-06

Similar Documents

Publication Publication Date Title
CN110866924B (en) Line structured light center line extraction method and storage medium
CN108629775B (en) Thermal state high-speed wire rod surface image processing method
CN113781402B (en) Method and device for detecting scratch defects on chip surface and computer equipment
CN109580630B (en) Visual inspection method for defects of mechanical parts
CN107633192B (en) Bar code segmentation and reading method based on machine vision under complex background
CN115170669B (en) Identification and positioning method and system based on edge feature point set registration and storage medium
CN112651968B (en) Wood board deformation and pit detection method based on depth information
CN107784669A (en) A kind of method that hot spot extraction and its barycenter determine
CN114529459B (en) Method, system and medium for enhancing image edge
CN111415376B (en) Automobile glass subpixel contour extraction method and automobile glass detection method
CN109540925B (en) Complex ceramic tile surface defect detection method based on difference method and local variance measurement operator
CN112233116B (en) Concave-convex mark visual detection method based on neighborhood decision and gray level co-occurrence matrix description
CN110647795A (en) Form recognition method
CN110660072B (en) Method and device for identifying straight line edge, storage medium and electronic equipment
CN112991283A (en) Flexible IC substrate line width detection method based on super-pixels, medium and equipment
CN116503462A (en) Method and system for quickly extracting circle center of circular spot
CN113421210B (en) Surface point Yun Chong construction method based on binocular stereoscopic vision
CN114612412A (en) Processing method of three-dimensional point cloud data, application of processing method, electronic device and storage medium
CN108764343B (en) Method for positioning tracking target frame in tracking algorithm
CN113688846A (en) Object size recognition method, readable storage medium, and object size recognition system
CN113223074A (en) Underwater laser stripe center extraction method
CN115797327A (en) Defect detection method and device, terminal device and storage medium
CN113284158B (en) Image edge extraction method and system based on structural constraint clustering
CN115187744A (en) Cabinet identification method based on laser point cloud
CN109949245B (en) Cross laser detection positioning method and device, storage medium and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant