CN111243009A - Decision-based center extraction method for laser stripes - Google Patents


Info

Publication number
CN111243009A
Authority
CN
China
Prior art keywords
decision
point
pixel point
center
image
Prior art date
Legal status: Granted
Application number
CN202010017870.5A
Other languages
Chinese (zh)
Other versions
CN111243009B (en)
Inventor
彭树学
杨昕欣
刁为民
Current Assignee
Zhiyu Technology Danyang Co Ltd
Original Assignee
Zhiyu Technology Danyang Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhiyu Technology Danyang Co Ltd
Priority to CN202010017870.5A
Publication of CN111243009A
Application granted
Publication of CN111243009B
Current legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/60 - Analysis of geometric attributes
    • G06T 7/66 - Analysis of geometric attributes of image moments or centre of gravity
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/13 - Edge detection
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10004 - Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a decision-based method for extracting the center of a laser stripe. The method converts center-line extraction into a problem similar to walking a maze: starting from a starting point and considering only 4-connectivity, only four directions (up, down, left and right) can be chosen, and a decision rule determines which direction the next pixel point lies in. The next pixel point is then taken as the new decision point and the position of the following pixel point is judged in turn; only three choices remain at that stage, since the path cannot step back. The process iterates step by step, each pixel point deciding the next position, until the edge of the image or the end of the stripe is reached and the algorithm terminates. The set of all visited points is then the extracted stripe center. The method balances speed and accuracy, and extracts the stripe center quickly and precisely.

Description

Decision-based center extraction method for laser stripes
Technical Field
The invention relates to a decision-based method for extracting the center of a laser stripe, and belongs to the technical field of image algorithms.
Background
Laser measurement technology is now widely used in many fields of measurement and surveying. Its main principle is to illuminate the object under test with a line laser, capture the light stripe with an industrial camera at a suitably adjusted exposure time, and obtain a 3D model of the object by scanning. In practice, however, the light stripe has a certain width, so extracting the stripe center becomes a key technology in laser measurement. Among the traditional algorithms, the gray-scale gravity-center (centroid) method takes only the gray level as its criterion; it is fast but its error is large. Curve-fitting methods and the Hessian-matrix extraction algorithm are accurate but slow.
Disclosure of Invention
The invention aims to provide a decision-based method for extracting the center of a laser stripe, so as to address the problem in the prior art that a laser stripe has a certain width. The method balances speed and accuracy: it estimates a reasonable path through the image from the statistical characteristics of the pixels, and thereby extracts the stripe center quickly and accurately.
The decision-based center extraction method converts center-line extraction into a problem similar to walking a maze: starting from a starting point and considering only 4-connectivity, only four directions (up, down, left and right) can be chosen, and a decision rule determines which direction the next pixel point lies in. The next pixel point then becomes the new decision point and the position of the following pixel point is judged in turn; only three choices remain at that stage, since the path cannot step back. The process iterates step by step, each pixel point deciding the next position, until the edge of the image or the end of the stripe is reached, at which point the algorithm terminates. The set of all visited points is the extracted stripe center. The specific process is as follows:
Step 1) preprocess the original image and determine the starting point of the image;
Step 2) from the starting-point coordinates, determine the position of the next pixel point by the decision rule, and extend the path one pixel at a time in the same way until it reaches the edge of the image or the end of the line;
Step 3) locally optimize the extracted line so that the path passes through the brightest pixel points.
The starting point of the image is determined by applying the gray-scale gravity-center (centroid) method to the first column.
The method for determining the image starting point further comprises: taking the current pixel point as the center and determining the position of the next pixel point according to the values of the surrounding pixel points.
The decision rule determines the positional relationship between the next pixel point and the current pixel point at the level of individual pixels, according to the statistical characteristics of the pixels.
The local optimization fine-tunes the line extracted in steps 1) and 2) so that the line as a whole satisfies the maximum-pixel-value condition. That is, the optimization starts from the starting point, enumerates the possible configurations of the three points following the current point, and keeps the configuration that maximizes the objective function.
The objective function of the local optimization is as follows:
maximize Σ_{(x, y) ∈ P} I(x, y) over all paths P from A to B with the same number of path points
that is, among all paths from point A to point B with the same number of path points, the chosen path should make the sum of the pixel values of its points as large as possible.
The decision-based laser stripe center extraction method of the invention has the following advantages and effects: it balances speed and accuracy, and extracts the stripe center quickly and precisely.
Drawings
Fig. 1 shows the four choices available at the current decision point: the pixels above, below, to the left of and to the right of it.
Fig. 2 shows an example of the decision-based algorithm: the arrows start from the current pixel, and the direction of the next step is determined by the distribution of the surrounding pixels.
Fig. 3 shows a real image with laser noise.
Figs. 4a to 4c show the expected gray-value distributions when the next pixel lies in each of the three candidate directions.
Fig. 5 shows how the solution set is located from the gray values; the straight lines are the dividing lines.
Fig. 6 shows the dividing lines of Fig. 5; in the implementation only the gray-level distributions of the marked regions need to be compared.
Fig. 7 shows the possible paths considered when four-point optimization is adopted in the optimization step.
Fig. 8 illustrates an example of the optimization.
Fig. 9 shows the extraction result of the embodiment of the invention.
Fig. 10 is a block diagram of the method of the invention.
Detailed description of the preferred embodiments
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
As shown in Fig. 10, the decision-based laser stripe center extraction method converts center-line extraction into a problem similar to walking a maze. Starting from the starting point and considering only 4-connectivity, only four directions (up, down, left and right) can be chosen, and a decision rule then determines which direction the next pixel point lies in, e.g. ①②③④ in Fig. 1. If the decision is that the next pixel lies to the right, that new point becomes the new decision point and the position of the following pixel is judged in turn (with only three choices this time, since the path cannot step back).
In the embodiment of the invention, the image is first preprocessed and the starting point of the image is determined; the starting point can be extracted in different ways depending on the situation. For the image of Fig. 3, it suffices to apply the gray-scale centroid method to the first column to locate the starting point. Then, taking the starting point as the center and working at the level of individual pixels, the next pixel could in theory lie in any of four directions; but since the starting point is the leftmost pixel and the end point lies at the rightmost side, the path can be assumed never to move leftward. The core step of the decision rule is therefore to take the current pixel point as the center and determine the position of the next pixel point from the values of the surrounding pixel points.
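As a minimal illustration of this starting-point step (not code from the patent), the following NumPy sketch computes the gray-scale centroid of the first image column; the function name and the assumption that the stripe enters at column 0 of a 2D grayscale array are mine.

```python
import numpy as np

def start_point_from_first_column(img):
    """Gray-scale centroid (gravity-center) of the first column.

    A sketch of the starting-point step: `img` is assumed to be a 2D grayscale
    array whose stripe enters the frame at column 0."""
    col = img[:, 0].astype(np.float64)
    total = col.sum()
    if total == 0:
        raise ValueError("first column contains no stripe signal")
    # Intensity-weighted mean row index, rounded to the nearest pixel.
    row = int(round(np.dot(np.arange(col.size), col) / total))
    return row, 0  # (row, column) of the starting point
```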
Different decision rules are needed for different types of wide stripes, because the noise on the cross-section of a wide stripe can follow many models, such as the Rayleigh, uniform and Gaussian distributions, and the decision differs under different noise models. If the noise were uniformly distributed, the decision rule would be close to the idea of the gray-scale centroid; for laser stripes, the gray-value distribution along the normal direction can be approximated as Gaussian. Each pixel point (x0, y0) on the path therefore produces the following effect:
[Formula: a two-dimensional Gaussian gray-value distribution centred on the path point (x0, y0)]
After the path reaches the current pixel point, three choices remain: up, right and down. For these three directions the theoretical pixel distribution should be one of Figs. 4a, 4b and 4c.
In a real image, of course, the next pixel does not lie in exactly one of these three ideal directions relative to the current pixel, but we can determine which of the three directions it is closest to.
Assume the current point has coordinates (x_now, y_now). The distributions for the three directions should then be:
[Formulas: the Gaussian gray-value distributions z_up(x, y), z_right(x, y) and z_down(x, y) expected when the next pixel lies above, to the right of, or below (x_now, y_now)]
solution of zright(x,y)=zup(x,y)
The oblique line running from bottom left to top right in Fig. 5 is the solution of this equation; a point on that line cannot tell us whether its pixel value is due to position ① or position ②, so the line can be regarded as a dividing line. Consequently, if the next pixel actually lies above the current pixel, the pixel values on the upper-left side of this line will be larger than those on its lower-right side. Similarly, if the pixel values on the lower-left side of the other oblique line (running from top right to bottom left) are larger than those on its upper-right side, the next pixel is more likely to lie below. If both the upward and the downward direction turn out to be more probable than the rightward one, the equation z_up(x, y) = z_down(x, y) is solved instead; its solution set is the horizontal line in Fig. 5, and in that case the pixel values above and below the horizontal line are compared.
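The explicit forms of z_right, z_up and z_down appear only as figures in the source. As a hedged reasoning sketch, if one assumes equal-amplitude isotropic Gaussians centred on the candidate pixels (the amplitude A, the width σ and the upward-positive y convention are assumptions, not taken from the patent), the dividing line of Fig. 5 falls out as the perpendicular bisector between the two candidate centres:

```latex
% Assumed model: equal-amplitude isotropic Gaussians centred on the candidates
% "right" = (x_now + 1, y_now) and "up" = (x_now, y_now + 1), with y increasing upward.
z_{\mathrm{right}}(x,y) = A \exp\!\left(-\frac{(x - x_{\mathrm{now}} - 1)^2 + (y - y_{\mathrm{now}})^2}{2\sigma^2}\right),
\qquad
z_{\mathrm{up}}(x,y) = A \exp\!\left(-\frac{(x - x_{\mathrm{now}})^2 + (y - y_{\mathrm{now}} - 1)^2}{2\sigma^2}\right)

% Setting z_right = z_up and cancelling A and sigma:
(x - x_{\mathrm{now}} - 1)^2 + (y - y_{\mathrm{now}})^2
  = (x - x_{\mathrm{now}})^2 + (y - y_{\mathrm{now}} - 1)^2
\;\Longrightarrow\; y - y_{\mathrm{now}} = x - x_{\mathrm{now}}

% i.e. a slope-1 line through the current point: the perpendicular bisector of the
% segment joining the two candidate centres, matching the oblique dividing line of
% Fig. 5.  Comparing "up" against "down" the same way yields the horizontal line.
```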
In conclusion, the specific method comprises the following steps:
Starting from the current pixel point, compute the average pixel values of the regions Δ1, Δ2 and Δ3 in Fig. 6. If Δ1 has the largest average, the next pixel lies above; if Δ3 has the largest average, it lies below; otherwise it lies to the right. Obviously, if the previous pixel point lies below the current one, the value of Δ3 does not need to be evaluated at all. In addition, because a real image contains some noise, thresholds may be added: for example, the next pixel is taken to lie above only if the average of Δ1 exceeds the averages of Δ2 and Δ3 by more than a certain threshold, and to the right otherwise. Traversing the whole image in this way yields the center of the stripe.
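A minimal NumPy sketch of this decision step follows. The exact shapes of the regions Δ1, Δ2 and Δ3, the window size `win` and the noise threshold `thresh` are illustrative assumptions; the patent only specifies that the three regional means are compared, with a threshold to absorb noise and with the direction the path came from excluded.

```python
import numpy as np

UP, RIGHT, DOWN = -1, 0, 1  # candidate moves for a stripe traced left to right

def decide_next(img, row, col, prev_move=None, win=2, thresh=5.0):
    """Compare mean gray values of small regions above (Δ1), right of (Δ2) and
    below (Δ3) the current pixel and return the chosen move.

    Region layout, `win` and `thresh` are assumptions for illustration only."""
    h, w = img.shape
    up_mean = img[max(row - win, 0):row, col:min(col + win, w)].mean() if row > 0 else -np.inf
    right_mean = (img[max(row - 1, 0):min(row + 2, h), col + 1:min(col + 1 + win, w)].mean()
                  if col < w - 1 else -np.inf)
    down_mean = (img[row + 1:min(row + 1 + win, h), col:min(col + win, w)].mean()
                 if row < h - 1 else -np.inf)

    # Never step back onto the pixel the path just came from.
    if prev_move == DOWN:   # came from above, so "up" would backtrack
        up_mean = -np.inf
    if prev_move == UP:     # came from below, so "down" would backtrack
        down_mean = -np.inf

    if up_mean - max(right_mean, down_mean) > thresh:
        return UP
    if down_mean - max(right_mean, up_mean) > thresh:
        return DOWN
    return RIGHT  # default: keep moving toward the right image edge
```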
Finally, for each pixel point the method above only considers, from a local point of view, where the next pixel should lie relative to the current one, and lacks a global view. As shown in Fig. 7, there are three possibilities for going from point A to point B, and under the influence of noise a method that looks only one pixel ahead may misjudge such cases, so the extracted line needs to be optimized.
The optimized objective function should be:
maximize Σ_{(x, y) ∈ P} I(x, y) over all paths P from A to B with the same number of path points
that is, among all paths from point A to point B with the same number of path points, the chosen path should make the sum of the pixel values of its points as large as possible.
Therefore the optimization starts from the starting point, enumerates the possible configurations of the next three points, and keeps the configuration that maximizes the objective function, as shown in Fig. 8.
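The following sketch shows one way to read this look-ahead, consistent with Figs. 7 and 8; the function name `refine_segment`, the move set and the fixed three-move segment length are my assumptions. It enumerates every three-move sequence of up/right/down that starts at A and ends at B, and keeps the one whose visited pixels have the largest gray-value sum.

```python
from itertools import product

MOVES = ((-1, 0), (0, 1), (1, 0))  # up, right, down as (d_row, d_col)

def refine_segment(img, a, b, steps=3):
    """Return the intermediate points of the best three-move path from `a` to `b`.

    Enumerates every `steps`-move sequence over MOVES, keeps only those that end
    exactly at `b` (same endpoints, same number of path points) and picks the one
    maximising the summed gray value of the visited pixels.  Illustrative only."""
    best_sum, best_pts = float("-inf"), None
    h, w = img.shape
    for seq in product(MOVES, repeat=steps):
        pts, (r, c) = [], a
        ok = True
        for dr, dc in seq:
            r, c = r + dr, c + dc
            if not (0 <= r < h and 0 <= c < w):
                ok = False
                break
            pts.append((r, c))
        if not ok or pts[-1] != b:
            continue
        s = sum(float(img[p]) for p in pts)
        if s > best_sum:
            best_sum, best_pts = s, pts
    return best_pts  # None if no admissible re-routing exists
```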
Example:
1) Preprocess the original image: extract the target region and remove the unwanted background.
2) Scan the extracted region longitudinally and compute the starting-point coordinates with the gray-scale gravity-center method.
3) From the starting-point coordinates, determine the position of the next pixel point with the decision rule, and extend the path in the same way until it reaches the edge of the image or the end of the line.
4) Locally optimize the extracted line so that the path passes through the brightest pixel points. The extraction result is shown in Fig. 9.
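Tying the steps of this example together, here is a hedged end-to-end sketch that reuses the hypothetical helpers from the earlier code sketches (start_point_from_first_column, decide_next with its UP/RIGHT/DOWN constants, and refine_segment); the stripe-end test and the default parameters are illustrative, not the patent's.

```python
def extract_stripe_center(img, win=2, thresh=5.0, max_steps=100000):
    """End-to-end sketch: centroid start point, decision walk toward the right
    image edge, then a local-optimisation pass over every three-point segment."""
    h, w = img.shape
    row, col = start_point_from_first_column(img)
    path, prev = [(row, col)], None
    for _ in range(max_steps):
        # Stop at the right image edge or (crude illustrative test) where the stripe ends.
        if col >= w - 1 or img[row, col] == 0:
            break
        move = decide_next(img, row, col, prev, win, thresh)
        row += -1 if move == UP else (1 if move == DOWN else 0)
        col += 1 if move == RIGHT else 0
        path.append((row, col))
        prev = move
    for i in range(len(path) - 3):  # local optimisation of each A -> B segment (Fig. 7)
        better = refine_segment(img, path[i], path[i + 3])
        if better is not None:
            path[i + 1:i + 3] = better[:-1]
    return path  # ordered (row, col) points of the estimated stripe center
```

On a preprocessed grayscale image this returns the ordered pixel coordinates of the estimated center line, i.e. the "set of all walked points" described above.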

Claims (6)

1. A decision-based method for extracting the center of a laser stripe, characterized in that: the method converts center-line extraction into a problem similar to walking a maze, namely, starting from a starting point and considering only 4-connectivity, only four directions (up, down, left and right) can be chosen, and a decision rule determines which direction the next pixel point lies in; the next pixel point is then taken as the new decision point and the position of the following pixel point is judged in turn, with only three choices remaining at that stage; the process iterates step by step, each pixel point deciding the next position, until the edge of the image or the end of the stripe is reached and the algorithm terminates; the set of all visited points is then the extracted stripe center; the specific process is as follows:
step 1) preprocessing the original image and determining the starting point of the image;
step 2) determining the position of the next pixel point from the starting-point coordinates by means of the decision rule, and extending the path one pixel at a time in the same way until it reaches the edge of the image or the end of the line;
step 3) locally optimizing the extracted line so that the path passes through the brightest pixel points.
2. The decision-based method for extracting the center of a laser stripe according to claim 1, characterized in that: the starting point of the image in step 1) is determined by applying the gray-scale gravity-center method to the first column.
3. The decision-based method for extracting the center of a laser stripe according to claim 1, characterized in that: the method for determining the image starting point in step 1) further comprises: taking the current pixel point as the center and determining the position of the next pixel point according to the values of the surrounding pixel points.
4. The decision-based method for extracting the center of a laser stripe according to claim 1, characterized in that: the decision rule in step 2) determines the positional relationship between the next pixel point and the current pixel point at the level of individual pixels, according to the statistical characteristics of the pixels.
5. The decision-based method for extracting the center of a laser stripe according to claim 1, characterized in that: the local optimization in step 3) fine-tunes the line extracted in steps 1) and 2) so that the line as a whole satisfies the maximum-pixel-value condition; that is, the optimization starts from the starting point, enumerates the possible configurations of the three points following the current point, and keeps the configuration that maximizes the objective function.
6. The decision-based method for extracting the center of a laser stripe according to claim 5, characterized in that: the objective function of the local optimization is:
maximize Σ_{(x, y) ∈ P} I(x, y) over all paths P from A to B with the same number of path points;
that is, among all paths from point A to point B with the same number of path points, the chosen path should make the sum of the pixel values of its points as large as possible.
CN202010017870.5A 2020-01-08 2020-01-08 Decision-based laser stripe center extraction method Active CN111243009B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010017870.5A CN111243009B (en) 2020-01-08 2020-01-08 Decision-based laser stripe center extraction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010017870.5A CN111243009B (en) 2020-01-08 2020-01-08 Decision-based laser stripe center extraction method

Publications (2)

Publication Number, Publication Date
CN111243009A, 2020-06-05
CN111243009B, 2023-07-14

Family

ID=70880388

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010017870.5A Active CN111243009B (en) 2020-01-08 2020-01-08 Decision-based laser stripe center extraction method

Country Status (1)

Country Link
CN (1) CN111243009B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111798519A (en) * 2020-07-21 2020-10-20 广东博智林机器人有限公司 Method and device for extracting laser stripe center, electronic equipment and storage medium


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103411562A (en) * 2013-08-22 2013-11-27 电子科技大学 Structured light laser strip center extraction method based on dynamic programming and mean-shift
CN104657587A (en) * 2015-01-08 2015-05-27 华中科技大学 Method for extracting center line of laser stripe
CN109389639A (en) * 2018-07-16 2019-02-26 中国铁道科学研究院集团有限公司基础设施检测研究所 Rail profile laser stripe center extraction method and device under dynamic environment
CN110599539A (en) * 2019-09-17 2019-12-20 广东奥普特科技股份有限公司 Stripe center extraction method of structured light stripe image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
席剑辉; 包辉: "Laser stripe center extraction algorithm based on improved centroid method" (基于改进质心法的激光条纹中心提取算法), 火力与指挥控制 (Fire Control & Command Control) *


Also Published As

Publication number Publication date
CN111243009B (en) 2023-07-14

Similar Documents

Publication Publication Date Title
CN114782691B (en) Robot target identification and motion detection method based on deep learning, storage medium and equipment
CN110310320B (en) Binocular vision matching cost aggregation optimization method
CN103530893B (en) Based on the foreground detection method of background subtraction and movable information under camera shake scene
CN102982545B (en) A kind of image depth estimation method
CN110276264A (en) A kind of crowd density estimation method based on foreground segmentation figure
CN111354047B (en) Computer vision-based camera module positioning method and system
CN105912977B (en) Lane line detection method based on point clustering
CN101114337A (en) Ground buildings recognition positioning method
CN110956078B (en) Power line detection method and device
CN111243009A (en) Decision-based center extraction method for laser stripes
CN110378199B (en) Rock-soil body displacement monitoring method based on multi-period images of unmanned aerial vehicle
CN117557617B (en) Multi-view dense matching method, system and equipment based on plane priori optimization
CN117788693B (en) Stair modeling method and device based on point cloud data, legged robot and medium
CN113298808B (en) Method for repairing building shielding information in tilt-oriented remote sensing image
CN113095164A (en) Lane line detection and positioning method based on reinforcement learning and mark point characterization
CN108629227B (en) Method and system for determining left and right boundaries of vehicle in image
CN110728669B (en) Video mosaic detection method
JP2013080389A (en) Vanishing point estimation method, vanishing point estimation device, and computer program
CN109816710B (en) Parallax calculation method for binocular vision system with high precision and no smear
CN111428538B (en) Lane line extraction method, device and equipment
CN113920065B (en) Imaging quality evaluation method for visual detection system of industrial site
Seo Edge modeling by two blur parameters in varying contrasts
KR102502122B1 (en) Method of calculating and determining the sawing angle consistency of wafers and system performing the same
CN114677428A (en) Power transmission line icing thickness detection method based on unmanned aerial vehicle image processing
CN109862349B (en) Quality detection method and device of disparity map and automatic driving system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant