CN114419317A - Light strip center extraction method for line structured light in complex environments

Info

Publication number
CN114419317A
Authority
CN
China
Prior art keywords
light
pixel
light bar
pixels
point
Prior art date
Legal status
Pending
Application number
CN202210002972.9A
Other languages
Chinese (zh)
Inventor
王成琳
罗天洪
孙伟
廖尉捷
范磊
陈婕
李忠涛
杨清
Current Assignee
Chongqing University of Arts and Sciences
Original Assignee
Chongqing University of Arts and Sciences
Priority date
Filing date
Publication date
Application filed by Chongqing University of Arts and Sciences
Priority to CN202210002972.9A
Publication of CN114419317A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Abstract

The invention provides a light strip center extraction method for line structured light in complex environments, comprising a light strip segmentation method and a light strip center calculation method. The method trains a YOLOv5s model with artificially synthesized images to identify light bar targets in complex environments; light bar pixels are then extracted with an adaptive gray threshold method, a gray mean method, and a sub-pixel-based edge detection method; finally, a constructed sliding window is used together with the gray gravity center method to search for the central pixels of the light bar, achieving accurate extraction of the light bar center. The method effectively avoids the influence of complex environmental factors such as noise, uneven illumination, and random occlusion of the light strip during extraction of the center of the line structured light strip, factors that would otherwise cause the light strip to break or its local light intensity to change abruptly, and thereby ensures the accuracy, timeliness, and effectiveness of light strip center extraction.

Description

Light strip center extraction method for line structured light in complex environments
Technical Field
The invention relates to the technical field of optical measurement, and in particular to a light strip center extraction method for line structured light in complex environments.
Background
Line structured light measurement is non-contact, flexible, fast, low-cost, accurate, and simple to operate, and is widely applied in surface quality detection, three-dimensional reconstruction, geometric parameter measurement, and related fields. During line structured light measurement, a laser generator projects a light bar onto the surface of the measured object; after a vision sensor captures an image of the light bar, the geometric parameters of the surface are obtained by image analysis. The width of a line structured light strip generally exceeds one pixel, and the position coordinates of the points on the center line of the light strip carry the most accurate measurement information reflecting the measured geometric parameters. Accurate extraction of the light strip center of line structured light is therefore an essential step for accurate measurement based on line structured light.
Traditional light strip center extraction methods can be divided into methods based on the geometric center of the light strip and methods based on the gray scale characteristics of the light strip. However, under complex environmental factors such as noise, uneven illumination, and random occlusion of the light strip, the light bar easily breaks or its local light intensity changes abruptly, producing broken lines or noise pollution on the surface of the light bar, so the center of the light bar cannot be extracted accurately. The light strip center extraction algorithms commonly used in the prior art include the contour centerline method, the gray threshold method, the gray gravity center method, the Gaussian fitting method, and the Hessian matrix method: the contour centerline method and the gray threshold method have poor robustness and low detection precision; the gray gravity center method and the Gaussian fitting method do not consider the normal direction of the light bar and are only suitable for light bars whose normal direction changes little; and the Hessian matrix method requires a large amount of computation and has poor real-time performance.
Therefore, the above methods all suffer from one or more of the following problems: the centers of line structured light strips cannot be extracted effectively and accurately, the accuracy is low, the extraction efficiency is low, the extraction is difficult, or the application range is narrow.
Disclosure of Invention
In view of the above problems in the prior art, an object of the present invention is to provide a light strip center extraction method for line structured light in complex environments, so as to solve the problems of difficult extraction, low extraction accuracy, low efficiency, and narrow application range in the prior art.
The purpose of the invention is realized by the following technical scheme:
a method for extracting the light strip center of line structured light in complex environment is characterized in that: the method comprises a light strip division method and a light strip center calculation method;
the light bar dividing method comprises the following specific steps:
S101, extracting light strip texture features with an LBP local texture operator, counting the gray value features of the light strip surfaces, and artificially synthesizing a light strip image by combining the texture features and the gray value features;
S102, firstly, training a YOLOv5s model with the artificially synthesized light bar images from step S101; then recognizing the light bars in the image with the trained YOLOv5s model, segmenting the image with the confidence box, and keeping the region inside the confidence box;
the light bar center calculation method specifically comprises the following steps:
S201, firstly, extracting the pixel points inside the confidence box of step S102 with an adaptive gray threshold method;
S202, equalizing the gray values of the pixels in the confidence box with a gray averaging method;
S203, extracting the light strip edge in the confidence box with a sub-pixel edge operator, and refining the edge;
S204, traversing the image with a sliding window and searching for light bar pixel points that satisfy a threshold (the threshold is determined by an AND operation between the window pixels and the pixel under examination); if the threshold is satisfied and the traversal is completed, calculating the light bar center from the found light bar pixel values with the gray gravity center method; if the threshold is not satisfied or the traversal is not completed, returning to continue the sliding window traversal.
For further optimization, in step S101, the LBP local texture operator reflects the texture information within its defined window through its value, which is specifically:

LBP(x_c, y_c) = Σ_{p=0}^{P−1} 2^p · s(I_p − I_c)

where (x_c, y_c) denotes the center pixel; I_c denotes the gray value of the center pixel; I_p denotes the gray value of the p-th pixel adjacent to the center pixel; and P denotes the number of adjacent pixels around the center pixel;

the function s(x) compares a neighboring pixel with the center pixel through their difference, specifically:

s(x) = 1 if x ≥ 0, and s(x) = 0 if x < 0.
for further optimization, the YOLOv5s model structure comprises an input terminal, a backbone network, a neck structure and a prediction layer; and the convolution kernel numbers of the Focus layer and the CBL layer in the YOLOv5s model are respectively 32, 64, 128, 256 and 521.
For further optimization, step S102 specifically comprises: randomly overlaying the light bar image segments synthesized in step S101 at corresponding positions of the original light bar image; labeling the synthesized light bar segment targets and the original light bar targets with the labelImg image annotation tool and feeding them to the YOLOv5s model as training images; then, after the trained YOLOv5s model has identified the light bars in the image under test, setting the region outside the confidence box to black according to the coordinates of the confidence box in the light bar image (i.e., the image recognized by the trained YOLOv5s model; when the YOLOv5s model identifies a target it also outputs a confidence box on the target, and the coordinates of the four vertices and of the center of the confidence box are recorded as the confidence box coordinates), thereby achieving segmentation of the light bar image.
For further optimization, the preset threshold of the gray threshold method in step S201 is 165.
For further optimization, a morphological dilation-erosion algorithm is further adopted between the step S201 and the step S202 to remove redundant noise points and small holes.
For further optimization, in step S203, a Canny edge detector is used to extract the light strip edge in the confidence box.
For further optimization, refining the edge with the sub-pixel edge operator in step S203 specifically comprises:

let (x, y) be an edge point of a light bar in the confidence box and R(x, y) its gray value; then (x−1, y), (x+1, y), (x, y−1) and (x, y+1) are the four neighborhood points of (x, y), with gray values P(x−1, y), P(x+1, y), P(x, y−1) and P(x, y+1), respectively:

when R(x, y) is greater than P(x−1, y) and R(x, y) is greater than P(x+1, y), the sub-pixel point (x0, y0) of point (x, y) is:
x0 = [expression given as an image in the original publication];
y0 = y;

when R(x, y) is greater than P(x, y−1) and R(x, y) is greater than P(x, y+1), the sub-pixel point (x0, y0) of point (x, y) is:
x0 = x;
y0 = [expression given as an image in the original publication];

when R(x, y) is less than P(x, y−1) and R(x, y) is greater than P(x+1, y), the sub-pixel point (x0, y0) of point (x, y) is:
x0 = [expression given as an image in the original publication];
y0 = y;

when R(x, y) is less than P(x, y−1) and R(x, y) is greater than P(x, y+1), the sub-pixel point (x0, y0) of point (x, y) is:
x0 = x;
y0 = [expression given as an image in the original publication].
For further optimization, step S204 specifically comprises:
determining the width of the light bar with a sliding window of 4 pixels, specifically as follows:
S241, first converting the image of the confidence box containing the light bars into a binary image, and traversing the whole confidence box image with a 2×2 sliding window (i.e., 4 pixels) sliding from the upper-left corner of the confidence box as the starting point, to search for the light bar pixels;
S242, if at some point while the window slides through the first row all four pixel values of the sliding window are 1 (i.e., the threshold is met; the same applies below), recording the coordinates of the second row of 1-valued pixels of the sliding window;
when traversing the second row, checking whether the pixels in the row below the previously recorded 1-valued pixels are also all 1: if they are not all 1, removing the previously recorded 1-valued pixels as noise points; if they are all 1, taking that row as a check row and detecting whether all of its pixels are 1:
if the pixels of the check row are all 1, the previously recorded row of 1-valued pixels gives the minimum ordinate of the light bar; meanwhile, the sliding window keeps traversing the following rows until a position is found where the four pixels of the sliding window are not all 1, which gives the maximum ordinate of the light bar;
if the pixels of the check row are not all 1, the sliding window traverses the next row until an entire row of pixels is all 1, and the operation is repeated until the minimum and maximum ordinates of the light bar are found;
S243, obtaining the center of the light bar with the gray gravity center method using the minimum and maximum ordinates of the light bar found by the sliding window in step S242;
the gray gravity center method is specifically:

y_k = Σ_i y_i · f(x_k, y_i) / Σ_i f(x_k, y_i)

where f(x_k, y_i) is the gray value of the pixel at coordinate (x_k, y_i), and the summation over i runs over the rows between the minimum and maximum ordinates of the light bar in column x_k.
The invention has the following technical effects:
the application provides a hybrid algorithm for accurately extracting the light bar center in the complex environment, and the YOLOv5s model is trained by utilizing the synthesized light bar image, so that the identification progress of the light bar in the complex environment is greatly improved, and the accuracy, the recall rate and the F rate of the identification progress are improved1Score and mean reach 96.70%, 9920%, 97.93% and 99.50%, respectively; then, light bar pixels are extracted by adopting a self-adaptive gray threshold method, a gray mean value method and a sub-pixel-based edge detection method, noise is removed by utilizing a constructed sliding window, the light bar pixels are searched, the width of a light bar can be accurately determined, and the interference of complex environmental factors is eliminated; and finally, calculating the center of the light strip by combining the obtained width of the light strip and adopting a gray scale gravity center method. The maximum deviation and the minimum deviation of the centers of the light stripes extracted by the method relative to the centers of the standard light stripes are 0.052 pixels and 0.017 pixels respectively, which shows that the method has robustness to complex environments.
Drawings
Fig. 1 is a flowchart illustrating center extraction of a line structured light bar according to an embodiment of the present invention.
FIG. 2 is a segment of an artificially synthesized light bar in an embodiment of the present invention; wherein fig. 2(a) is a composite light bar segment projected on a background; FIG. 2(b) is a composite light bar segment projected on a gauge block; fig. 2(c) is a composite light bar segment projected on the tool.
FIG. 3 is a schematic diagram of a sliding window based light bar pixel search according to an embodiment of the present invention.
FIG. 4 is a process for extracting the center of a light bar according to the present application; fig. 4(a) is a light bar recognition result of training the YOLOv5s model using a synthesized light bar image; FIG. 4(b) is the image segmentation result based on the edge of the confidence box; FIG. 4(c) shows the result of extracting pixels from the confidence box; FIG. 4(d) is the edge detection result of the pixel in the confidence box; FIG. 4(e) is the edge refinement result of the pixels in the confidence box; fig. 4(f) shows the extraction results of the light bar centers.
FIG. 5 is a schematic diagram of light bar center extraction comparison based on different methods; wherein, fig. 5(a) is a diagram of the result extracted by the method in the embodiment of the present application; FIG. 5(b) is a graph showing the results of extraction using the skeleton method; FIG. 5(c) is a graph showing the result of extraction using the gray scale centroid method; fig. 5(d) is a graph of the result of extraction by the Hessian matrix method.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Embodiment:
As shown in fig. 1 to 3, a method for extracting the light strip center of line structured light in a complex environment is characterized in that: the method comprises a light strip segmentation method and a light strip center calculation method;
the light strip segmentation method comprises the following specific steps:
S101, extracting the texture features of the light bars with an LBP local texture operator, specifically:

LBP(x_c, y_c) = Σ_{p=0}^{P−1} 2^p · s(I_p − I_c)

where (x_c, y_c) denotes the center pixel; I_c denotes the gray value of the center pixel; I_p denotes the gray value of the p-th pixel adjacent to the center pixel; and P denotes the number of adjacent pixels around the center pixel;

the function s(x) compares a neighboring pixel with the center pixel through their difference, specifically:

s(x) = 1 if x ≥ 0, and s(x) = 0 if x < 0.
the LBP local texture operator is adopted to extract the light strip texture characteristics, so that light strip segments can be effectively intercepted from the light strip image of the complex environment, wherein the light strip segments comprise segments projected on the background and segments projected on other objects.
Then, the gray value characteristics of the light strip surface are counted (this can be done with common means in the field), and the texture features and gray value features are combined to artificially synthesize a light strip image; as shown in fig. 2, fig. 2(a) is a composite light bar segment projected on the background; fig. 2(b) is a composite light bar segment projected on a gauge block; fig. 2(c) is a composite light bar segment projected on a tool.
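To make the LBP step concrete, the following minimal Python sketch computes the standard 8-neighbor LBP map defined by the formula above; it is an illustration under common assumptions (8 neighbors, grayscale input), not the patent's exact implementation.

```python
import numpy as np

def lbp_8_neighbors(gray: np.ndarray) -> np.ndarray:
    """Standard 8-neighbor LBP: compare each neighbor I_p with the center I_c
    and pack s(I_p - I_c) into one byte per pixel."""
    # Offsets of the P = 8 neighbors, enumerated clockwise from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = gray.shape
    lbp = np.zeros((h, w), dtype=np.uint8)
    padded = np.pad(gray.astype(np.int16), 1, mode="edge")
    for p, (dy, dx) in enumerate(offsets):
        neighbor = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        # s(x) = 1 if x >= 0 else 0, weighted by 2^p
        lbp += ((neighbor - gray) >= 0).astype(np.uint8) << p
    return lbp
```

LBP histograms of the light bar region and of the background could then be used, together with the gray value statistics mentioned above, to decide where synthesized light bar segments are pasted; this usage is an assumption about the synthesis step.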
S102, firstly, training a YOLOv5s model with the artificially synthesized light bar images from step S101; the YOLOv5s model structure comprises an input end, a backbone network, a neck structure and a prediction layer, and the numbers of convolution kernels of the Focus layer and the CBL layers in the YOLOv5s model are 32, 64, 128, 256 and 512, respectively;
then, recognizing the light bars in the image with the trained YOLOv5s model, segmenting the image with the confidence box, and keeping the region inside the confidence box, specifically: randomly overlaying the light bar image segments synthesized in step S101 at corresponding positions of the original light bar image; labeling the synthesized light bar segment targets and the original light bar targets with the labelImg image annotation tool and feeding them to the YOLOv5s model as training images; then, after the trained YOLOv5s model has identified the light bars in the image under test, setting the region outside the confidence box to black according to the coordinates of the confidence box in the light bar image (i.e., the image recognized by the trained YOLOv5s model; when the YOLOv5s model identifies a target it also outputs a confidence box on the target, and the coordinates of the four vertices and of the center of the confidence box are recorded as the confidence box coordinates), thereby achieving segmentation of the light bar image.
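A minimal sketch of the segmentation step follows: given the confidence box output by a trained detector, everything outside the box is set to black so that only the light bar region remains. The detector call itself is abstracted away, and the function and variable names are illustrative, not taken from the patent.

```python
import numpy as np

def keep_confidence_box(image: np.ndarray, box: tuple) -> np.ndarray:
    """Black out everything outside the detector's confidence box.
    box = (x_min, y_min, x_max, y_max) in pixel coordinates."""
    x_min, y_min, x_max, y_max = box
    segmented = np.zeros_like(image)
    # Copy only the region inside the confidence box; the rest stays black.
    segmented[y_min:y_max, x_min:x_max] = image[y_min:y_max, x_min:x_max]
    return segmented
```

In practice the box coordinates would come from the YOLOv5s detection recorded in step S102; here they are simply passed in as given values.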
The light bar center calculation method specifically comprises the following steps:
S201, firstly, extracting the pixel points inside the confidence box of step S102, i.e., the pixels of the segmented light bar image, with an adaptive gray threshold method, where the preset threshold of the gray threshold method is 165 (conventional gray threshold methods in the field may be used for the extraction);
redundant noise points and small holes are then removed with a morphological dilation-erosion algorithm (a morphological dilation-erosion algorithm conventional in the field may be used).
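Step S201 and the morphological cleanup might be implemented along the following lines with OpenCV; the fixed threshold of 165 is taken from the description, while the 3×3 kernel and the particular closing/opening sequence are assumptions, since the text only names a "dilation-erosion algorithm".

```python
import cv2
import numpy as np

def extract_light_bar_pixels(segmented_gray: np.ndarray,
                             threshold: int = 165,
                             kernel_size: int = 3) -> np.ndarray:
    """Threshold the segmented gray image, then close small holes and drop specks."""
    # Fixed gray threshold from the description (an adaptive variant such as
    # cv2.adaptiveThreshold could replace this step).
    _, binary = cv2.threshold(segmented_gray, threshold, 255, cv2.THRESH_BINARY)
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    # Dilation followed by erosion (closing) fills small holes in the light bar;
    # an opening pass then removes isolated noise points.
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    cleaned = cv2.morphologyEx(closed, cv2.MORPH_OPEN, kernel)
    return cleaned
```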
S202, equalizing the gray values of the pixels in the confidence box with a gray averaging method (a gray averaging method conventional in the prior art may be used for the equalization);
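The description only names a "gray averaging method" for step S202; one plausible reading, sketched below purely as an assumption, is to replace the retained light bar pixels with their mean gray value so that intensity variation along the bar is smoothed before edge detection.

```python
import numpy as np

def equalize_by_mean(segmented_gray: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Set every pixel flagged by the light-bar mask to the mean gray value of
    the masked region (one possible interpretation of the gray averaging method)."""
    out = segmented_gray.copy()
    bar = mask > 0
    if bar.any():
        out[bar] = int(segmented_gray[bar].mean())
    return out
```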
S203, extracting the light strip edge in the confidence box with a Canny edge detection operator, and refining the edge with the proposed sub-pixel edge detection algorithm;
the method specifically comprises the following steps:
setting (x, y) as an edge point of a light bar in the confidence box, wherein R (x, y) represents the gray value of the point, then (x-1, y), (x +1, y), (x, y-1) and (x, y +1) are four field points of the point (x, y), and the gray values are P (x-1, y), P (x +1, y), P (x, y-1) and P (x, y +1), respectively:
when R (x, y) is greater than P (x-1, y) and R (x, y) is greater than P (x +1, y), the sub-pixel point (x, y) of point (x, y)0,y0) Comprises the following steps:
Figure BDA0003454119570000081
y0=y;
when R (x, y) is greater than P (x, y-1) and R (x, y) is greater than P (x, y +1), the sub-pixel point (x, y) of point (x, y)0,y0) Comprises the following steps:
x0=x;
Figure BDA0003454119570000082
when R (x, y) is less than P (x, y-1) and R (x, y) is greater than P (x +1, y), the sub-pixel point (x, y) of point (x, y)0,y0) Comprises the following steps:
Figure BDA0003454119570000083
y0=y;
when R (x, y) is less than P (x, y-1) and R (x, y) is greater than P (x, y +1), the sub-pixel point (x, y) of point (x, y)0,y0) Comprises the following steps:
x0=x;
Figure BDA0003454119570000091
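Because the sub-pixel expressions are only available as images in the original publication, the sketch below substitutes a standard three-point parabolic interpolation along the stronger direction; this is a common sub-pixel refinement used here as a stand-in, not necessarily the patent's exact formula.

```python
import numpy as np

def subpixel_refine(gray: np.ndarray, x: int, y: int) -> tuple:
    """Refine an integer edge point (x, y) to sub-pixel precision by fitting a
    parabola through the pixel and its two neighbors along the dominant axis.
    Assumes (x, y) is not on the image border."""
    g = gray.astype(np.float64)
    r = g[y, x]
    left, right = g[y, x - 1], g[y, x + 1]
    up, down = g[y - 1, x], g[y + 1, x]
    x0, y0 = float(x), float(y)
    if r > left and r > right:
        # Horizontal intensity peak: interpolate x, keep y.
        denom = left - 2.0 * r + right
        if denom != 0.0:
            x0 = x + 0.5 * (left - right) / denom
    elif r > up and r > down:
        # Vertical intensity peak: interpolate y, keep x.
        denom = up - 2.0 * r + down
        if denom != 0.0:
            y0 = y + 0.5 * (up - down) / denom
    return x0, y0
```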
S204, traversing the image with a sliding window and finding the light bar pixel points that satisfy the threshold (the threshold is determined by an AND operation between the window pixels and the pixel under examination), as follows:
the width of the light bar is determined with a sliding window of 4 pixels, as shown in fig. 3, where the dashed box represents the 4-pixel sliding window, the darkest boxes represent noise pixels, the darker boxes represent edge pixels of the confidence box, and the lighter boxes represent light bar pixels; the procedure is specifically:
S241, first converting the image of the confidence box containing the light bars into a binary image, and traversing the whole confidence box image with a 2×2 sliding window (i.e., 4 pixels) sliding from the upper-left corner of the confidence box as the starting point, to search for the light bar pixels;
S242, if at some point while the window slides through the first row all four pixel values of the sliding window are 1 (i.e., the threshold is met; the same applies below), recording the coordinates of the second row of 1-valued pixels of the sliding window;
when traversing the second row, checking whether the pixels in the row below the previously recorded 1-valued pixels are also all 1: if they are not all 1, removing the previously recorded 1-valued pixels as noise points; if they are all 1, taking that row as a check row and detecting whether all of its pixels are 1:
if the pixels of the check row are all 1, the previously recorded row of 1-valued pixels gives the minimum ordinate of the light bar; meanwhile, the sliding window keeps traversing the following rows until a position is found where the four pixels of the sliding window are not all 1, which gives the maximum ordinate of the light bar;
if the pixels of the check row are not all 1, the sliding window traverses the next row until an entire row of pixels is all 1, and the operation is repeated until the minimum and maximum ordinates of the light bar are found;
S243, if the condition is met and the traversal is completed, calculating the center of the light bar from the found light bar pixel values with the gray gravity center method, specifically:
the center of the light bar is obtained with the gray gravity center method using the minimum and maximum ordinates of the light bar found by the sliding window in step S242;
the gray gravity center method is specifically:

y_k = Σ_i y_i · f(x_k, y_i) / Σ_i f(x_k, y_i)

where f(x_k, y_i) is the gray value of the pixel at coordinate (x_k, y_i), and the summation over i runs over the rows between the minimum and maximum ordinates of the light bar in column x_k;
if the threshold is not satisfied or the traversal is not completed, the sliding window continues its traversal.
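A compact sketch of steps S241 to S243: per column of the binary image, the vertical run of 1-valued pixels plays the role of the light bar width found by the 2×2 window, and the gray gravity center over that run gives the sub-pixel center ordinate. The column-wise formulation and the min_run noise filter are simplifications of the row-by-row procedure described above, introduced here only for illustration.

```python
import numpy as np

def light_bar_centers(binary: np.ndarray, gray: np.ndarray,
                      min_run: int = 2) -> list:
    """For each column, find the vertical run of 1-pixels (the light bar width)
    and compute its gray gravity center. Runs shorter than min_run are treated
    as noise, mirroring the sliding-window check; min_run is an assumption."""
    centers = []
    h, w = binary.shape
    for x in range(w):
        ys = np.flatnonzero(binary[:, x] > 0)
        if ys.size < min_run:
            continue  # noise point or empty column
        y_min, y_max = ys.min(), ys.max()
        rows = np.arange(y_min, y_max + 1)
        weights = gray[rows, x].astype(np.float64)
        if weights.sum() == 0:
            continue
        # Gray gravity center: y_k = sum(y_i * f(x_k, y_i)) / sum(f(x_k, y_i))
        y_center = float((rows * weights).sum() / weights.sum())
        centers.append((x, y_center))
    return centers
```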
The process of extracting the light bar centers by using the above method is shown in fig. 4, where fig. 4(a) is the light bar recognition result of training the YOLOv5s model by using the synthesized light bar image, fig. 4(b) is the image segmentation result based on the edge of the confidence frame, fig. 4(c) is the extraction result of the pixels in the confidence frame, fig. 4(d) is the edge detection result of the pixels in the confidence frame, fig. 4(e) is the edge refinement result of the pixels in the confidence frame, and fig. 4(f) is the extraction result of the light bar centers, respectively.
The light stripe center under the complex environment is extracted by adopting the traditional methods such as a skeleton method, a gray scale gravity center method, a Hessian matrix method and the like, and the extraction result is compared with the extraction result of the method in the embodiment. The extraction of the light bar centers based on the method of the present application is shown in fig. 5(a), in which the complete and straight centers of the light bars appear under the conditions of uneven illumination and noise interference, fig. 5(b) is the result of extracting the light bar centers using the skeleton method (broken light bar centers and wrong light bar centers are shown in the figure), fig. 5(c) is the result of extracting the light bar centers based on the gray scale barycenter method (many non-straight lines and dotted lines are shown in the figure), and fig. 5(d) is the result of extracting the light bar centers based on the Hessian matrix method (some wrong light bar centers are extracted in the noise region in the figure).
To further verify the performance of the method, a light bar with a given width was randomly added to a black background image in Matlab, and Gaussian noise and regions of different light intensity were superimposed on the surface of the light bar. Based on the given width, the light bar center was calculated manually and taken as the standard light bar center. The centers extracted by the proposed method, the skeleton method, the gray gravity center method, and the Hessian matrix method (the latter three implemented with conventional means in the field) were then compared with the standard light bar centers; the extraction accuracy of the different methods was evaluated with a deviation index, defined as follows:
deviation = [expression given as an image in the original publication]

where N denotes the 15 repetitions of the comparison test; (X_n, Y_n) are the pixel coordinates of 15 different points on the center of the standard light bar; and (x_n, y_n) are the pixel coordinates of the intersection between the abscissa or ordinate line through a sampling point on the standard light bar and the light bar center line extracted by the compared algorithm.
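The deviation index itself is reproduced only as an image; the sketch below assumes it is the mean Euclidean distance between the N = 15 sampled points on the standard center and the corresponding points on the extracted center line, which is consistent with the reported sub-pixel magnitudes but remains an assumption about the exact formula.

```python
import numpy as np

def mean_deviation(standard_pts: np.ndarray, extracted_pts: np.ndarray) -> float:
    """Assumed deviation index: mean Euclidean distance over the N sampled
    point pairs (standard center vs. extracted center), in pixels."""
    diffs = standard_pts.astype(np.float64) - extracted_pts.astype(np.float64)
    return float(np.mean(np.linalg.norm(diffs, axis=1)))

# Example with N = 15 sampled point pairs (coordinates are illustrative only):
# standard = np.array([[x1, y1], ..., [x15, y15]])
# extracted = np.array([[x1_, y1_], ..., [x15_, y15_]])
# print(mean_deviation(standard, extracted))
```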
The comparison results are shown in Table 1, which records the differences in pixel coordinates between the standard light strip centers and the light strip centers extracted by the compared methods. In Table 1, the first, second, and third light intensities represent three luminance regions of different light intensities, with the first light intensity being the lowest and the third the highest; the Gaussian noise standard deviation σ is set to 0.02, 0.04, and 0.06, respectively.
Table 1: different methods are used to extract the comparison of the light strip centers (unit: pixel)
Figure BDA0003454119570000112
As shown in Table 1, when illumination regions of different light intensities and noise of different parameters are superimposed on the light stripe, the maximum and minimum deviations between the center of the standard light stripe and the center extracted by the method of the present application are 0.052 and 0.017 pixels, respectively; under the same noise conditions, the maximum and minimum deviations of the skeleton method are 0.414 and 0.144 pixels, and those of the Hessian matrix method are 0.265 and 0.106 pixels; the gray gravity center method shows the largest deviations of all methods, with maximum and minimum deviations from the standard light stripe center of 0.562 and 0.290 pixels. This indicates that the method of the present application is robust to complex environments.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (6)

1. A method for extracting the light strip center of line structured light in a complex environment, characterized in that: the method comprises a light strip segmentation method and a light strip center calculation method;
the light strip segmentation method comprises the following specific steps:
S101, extracting light strip texture features with an LBP local texture operator, counting the gray value features of the light strip surfaces, and artificially synthesizing a light strip image by combining the texture features and the gray value features;
S102, firstly, training a YOLOv5s model with the artificially synthesized light bar images from step S101; then recognizing the light bars in the image with the trained YOLOv5s model, segmenting the image with the confidence box, and keeping the region inside the confidence box;
the light bar center calculation method specifically comprises the following steps:
S201, firstly, extracting the pixel points inside the confidence box of step S102 with an adaptive gray threshold method;
S202, equalizing the gray values of the pixels in the confidence box with a gray averaging method;
S203, extracting the light strip edge in the confidence box with a sub-pixel edge operator, and refining the edge;
S204, traversing the image with a sliding window and searching for light bar pixel points that satisfy a threshold; if the threshold is satisfied and the traversal is completed, calculating the light bar center from the found light bar pixel values with the gray gravity center method; if the threshold is not satisfied or the traversal is not completed, returning to continue the sliding window traversal.
2. The method of claim 1, wherein: in step S101, the LBP local texture operator reflects the texture information within its defined window through its value, which is specifically:

LBP(x_c, y_c) = Σ_{p=0}^{P−1} 2^p · s(I_p − I_c)

where (x_c, y_c) denotes the center pixel; I_c denotes the gray value of the center pixel; I_p denotes the gray value of the p-th pixel adjacent to the center pixel; and P denotes the number of adjacent pixels around the center pixel;

the function s(x) compares a neighboring pixel with the center pixel through their difference, specifically:

s(x) = 1 if x ≥ 0, and s(x) = 0 if x < 0.
3. The method of claim 1 or 2, wherein: the YOLOv5s model structure comprises an input end, a backbone network, a neck structure and a prediction layer; and the numbers of convolution kernels of the Focus layer and the CBL layers in the YOLOv5s model are 32, 64, 128, 256 and 512, respectively.
4. The method as claimed in any one of claims 1 to 3, wherein the method comprises the steps of: in step S203, a Canny edge detection operator is used to extract the light strip edge in the confidence frame.
5. The method of claim 4, wherein: refining the edge with the sub-pixel edge operator in step S203 specifically comprises:
let (x, y) be an edge point of a light bar in the confidence box and R(x, y) its gray value; then (x−1, y), (x+1, y), (x, y−1) and (x, y+1) are the four neighborhood points of (x, y), with gray values P(x−1, y), P(x+1, y), P(x, y−1) and P(x, y+1), respectively:
when R(x, y) is greater than P(x−1, y) and R(x, y) is greater than P(x+1, y), the sub-pixel point (x0, y0) of point (x, y) is:
x0 = [expression given as an image in the original publication];
y0 = y;
when R(x, y) is greater than P(x, y−1) and R(x, y) is greater than P(x, y+1), the sub-pixel point (x0, y0) of point (x, y) is:
x0 = x;
y0 = [expression given as an image in the original publication];
when R(x, y) is less than P(x, y−1) and R(x, y) is greater than P(x+1, y), the sub-pixel point (x0, y0) of point (x, y) is:
x0 = [expression given as an image in the original publication];
y0 = y;
when R(x, y) is less than P(x, y−1) and R(x, y) is greater than P(x, y+1), the sub-pixel point (x0, y0) of point (x, y) is:
x0 = x;
y0 = [expression given as an image in the original publication].
6. The method of claim 4 or 5, wherein: step S204 specifically comprises:
determining the width of the light bar with a sliding window of 4 pixels, as follows:
S241, first converting the image of the confidence box containing the light bars into a binary image, and traversing the whole confidence box image with a 2×2 sliding window sliding from the upper-left corner of the confidence box as the starting point, to search for the light bar pixels;
S242, if at some point while the window slides through the first row all four pixel values of the sliding window are 1, recording the coordinates of the second row of 1-valued pixels of the sliding window;
when traversing the second row, checking whether the pixels in the row below the previously recorded 1-valued pixels are also all 1: if they are not all 1, removing the previously recorded 1-valued pixels as noise points; if they are all 1, taking that row as a check row and detecting whether all of its pixels are 1:
if the pixels of the check row are all 1, the previously recorded row of 1-valued pixels gives the minimum ordinate of the light bar; meanwhile, the sliding window keeps traversing the following rows until a position is found where the four pixels of the sliding window are not all 1, which gives the maximum ordinate of the light bar;
if the pixels of the check row are not all 1, the sliding window traverses the next row until an entire row of pixels is all 1, and the operation is repeated until the minimum and maximum ordinates of the light bar are found;
S243, obtaining the center of the light bar with the gray gravity center method using the minimum and maximum ordinates of the light bar found by the sliding window in step S242;
the gray gravity center method is specifically:

y_k = Σ_i y_i · f(x_k, y_i) / Σ_i f(x_k, y_i)

where f(x_k, y_i) is the gray value of the pixel at coordinate (x_k, y_i), and the summation over i runs over the rows between the minimum and maximum ordinates of the light bar in column x_k.
CN202210002972.9A 2022-01-04 2022-01-04 Light strip center extraction method for light with complex environment line structure Pending CN114419317A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210002972.9A CN114419317A (en) 2022-01-04 2022-01-04 Light strip center extraction method for light with complex environment line structure

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210002972.9A CN114419317A (en) 2022-01-04 2022-01-04 Light strip center extraction method for light with complex environment line structure

Publications (1)

Publication Number Publication Date
CN114419317A true CN114419317A (en) 2022-04-29

Family

ID=81271461

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210002972.9A Pending CN114419317A (en) 2022-01-04 2022-01-04 Light strip center extraction method for light with complex environment line structure

Country Status (1)

Country Link
CN (1) CN114419317A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115049654A (en) * 2022-08-15 2022-09-13 成都唐源电气股份有限公司 Method for extracting reflective light bar of steel rail



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination