CN111243009B - Decision-based laser stripe center extraction method - Google Patents


Info

Publication number
CN111243009B
CN111243009B (application CN202010017870.5A)
Authority
CN
China
Prior art keywords
point
decision
pixel point
pixel
center
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010017870.5A
Other languages
Chinese (zh)
Other versions
CN111243009A (en)
Inventor
彭树学 (Peng Shuxue)
杨昕欣 (Yang Xinxin)
刁为民 (Diao Weimin)
Current Assignee
Zhiyu Technology Danyang Co ltd
Original Assignee
Zhiyu Technology Danyang Co ltd
Priority date
Filing date
Publication date
Application filed by Zhiyu Technology Danyang Co ltd filed Critical Zhiyu Technology Danyang Co ltd
Priority to CN202010017870.5A priority Critical patent/CN111243009B/en
Publication of CN111243009A publication Critical patent/CN111243009A/en
Application granted granted Critical
Publication of CN111243009B publication Critical patent/CN111243009B/en
Legal status: Active (granted)

Classifications

    • G06T 7/66 (Physics; Computing; Image data processing) — Image analysis; analysis of geometric attributes: image moments or centre of gravity
    • G06T 7/13 (Physics; Computing; Image data processing) — Image analysis; segmentation and edge detection: edge detection
    • G06T 2207/10004 (Physics; Computing; Image data processing) — Indexing scheme for image analysis or enhancement; image acquisition modality: still image; photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a decision-based laser stripe center extraction method which converts the problem of extracting a center line into a maze-like problem. Starting from a starting point and considering 4-connectivity only, four directions are available for selection: up, down, left and right; a decision means then determines which direction the next pixel point takes. The next pixel point becomes the new decision point, and the position of its successor is judged again; at this moment only three choices exist, because going back is impossible. Iterating step by step in this way, the next position of each pixel point is determined by a decision until the path reaches the end of the image or the end of the stripe and the algorithm finishes. The set of all points walked through is then the center of the extracted stripe. The method balances speed and accuracy, and extracts the stripe center rapidly and precisely.

Description

Decision-based laser stripe center extraction method
Technical Field
The invention relates to a decision-based laser stripe center extraction method and belongs to the technical field of image algorithms.
Background
At present, laser measurement technology is widely applied in many fields of measurement and surveying. Its main principle is to illuminate the object to be measured with a line laser, capture the light stripe with an industrial camera at an adjusted exposure time, and obtain a 3D model of the object by scanning. In practice, however, the light stripe has a certain width, so extracting the center of the stripe becomes a key technology in laser measurement. Among traditional algorithms, the gray-level center-of-gravity method takes only the gray level as its criterion; it is fast but has a relatively large error. The curve-fitting method and the Hessian-matrix extraction algorithm are highly accurate but slow.
Disclosure of Invention
The invention aims to provide a decision-based laser stripe center extraction method to solve the prior-art problem that the light stripe has a certain width. The invention takes speed and accuracy into account at the same time: it estimates a reasonable path through the image from the statistical characteristics of the pixels and extracts the stripe center rapidly and accurately.
The decision-based laser stripe center extraction method of the invention converts the problem of extracting a center line into a maze-like problem. Starting from a starting point and considering 4-connectivity only, four directions are available for selection: up, down, left and right; a decision means then determines the direction of the next pixel point. The next pixel point is taken as a new decision point, and the position of its successor is judged again; at this moment only three choices exist, because going back is impossible. Iterating step by step in this way, the next position of each pixel point is determined by a decision until the path reaches the end of the image or the end of the stripe and the algorithm finishes. The set of all points walked through is then the center of the extracted stripe. The specific process is as follows:
Step 1) preprocessing an original image and determining a starting point of the image;
Step 2) determining the position of the next pixel point from the starting-point coordinates by a decision means, and extending the path to each next pixel point in the same way until the path reaches the image edge or the end of the line;
Step 3) locally optimizing the extracted line so that the path passes through the brightest pixel points.
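Steps 1) and 2) can be sketched as a short greedy walk over the image. This is a sketch only: the function and variable names are illustrative, the simple argmax-over-neighbors rule stands in for the Δ-region decision means described later, and step 3), the local optimization, is omitted:

```python
import numpy as np

def extract_stripe_center(img):
    """Greedy sketch of the decision-based walk (illustrative only).
    img: 2-D array, rows = y, columns = x; the stripe runs left to right."""
    h, w = img.shape
    # Step 1: seed the walk with the gray-level centroid of column 0.
    col = img[:, 0].astype(float)
    y = int(round(float(np.dot(np.arange(h), col)) / max(col.sum(), 1e-9)))
    x, prev = 0, None
    path = [(y, x)]
    # Step 2: repeatedly choose among up / right / down, never reversing.
    while x < w - 1:
        candidates = {}
        if y > 0 and prev != 'down':
            candidates['up'] = (y - 1, x)
        candidates['right'] = (y, x + 1)
        if y < h - 1 and prev != 'up':
            candidates['down'] = (y + 1, x)
        # pick the move whose target pixel is brightest
        prev, (y, x) = max(candidates.items(), key=lambda kv: img[kv[1]])
        path.append((y, x))
    return path
```

Running it on a synthetic image with one bright row returns a path that follows that row from the first to the last column.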
The starting point of the image is determined by applying the gray-level center-of-gravity method to the first column only.
The position of the next pixel point is determined from the values of the surrounding pixel points, taking the current pixel point as the center.
The decision means determines the positional relationship between the next pixel point and the current pixel point at the level of individual (microscopic) pixels, according to the statistical characteristics of the pixels.
The local optimization fine-tunes the extracted line, on the basis of step 1) and step 2), so that the line as a whole attains the maximum pixel sum. Starting from the starting point, the optimization takes the three points after the current point as the optimization target, enumerates the possible cases, and keeps the case that maximizes the objective function.
The objective function of the local optimization is:

$$\max \sum_{i=1}^{n} I(x_i, y_i)$$

where (x_i, y_i), i = 1, ..., n are the points on the path; that is, for the same number of path points from point A to point B, the sum of the pixel values of the points on the path should be as large as possible.
The decision-based laser stripe center extraction method of the invention has the following advantages and effects: it balances speed and accuracy, and extracts the stripe center rapidly and precisely.
Drawings
Fig. 1 shows the four options for the current decision point at the pixel level: up, down, left and right.
Fig. 2 shows an example of the decision-based algorithm; the arrow direction is decided at the current pixel point, according to the surrounding pixel distribution, as the direction of the next step.
Fig. 3 shows a real image with laser noise.
Fig. 4a to 4c show the ideal gray-value distributions for the three possible directions of the next pixel point.
Fig. 5 shows the position of the solution set determined from the gray values; the straight lines in the figure are the boundaries.
Fig. 6 shows the dividing lines according to fig. 5; when programming the algorithm, only the gray-level distributions of the regions in the figure need to be compared.
Fig. 7 shows the possible optimization paths when four-point optimization is used in the optimization method.
Fig. 8 shows an example of optimization.
Fig. 9 is a diagram showing the extraction effect according to an embodiment of the present invention.
Fig. 10 is a flow chart of the method of the present invention.
Description of the preferred embodiments
The technical scheme of the invention is further described below with reference to the accompanying drawings and examples.
As shown in fig. 10, the decision-based laser stripe center extraction method of the present invention converts the problem of extracting a center line into a maze-like problem. That is, starting from a starting point and considering 4-connectivity only, four directions are available for selection: up, down, left and right, as in (1) (2) (3) (4) of fig. 1. A decision means then determines which direction the next pixel point takes. If, for example, the direction of the next pixel point is determined to be right, this new point becomes the new decision point, and the position of its successor is judged again (of course, only three choices remain at this time, because going back is impossible). Iterating step by step in this way, the next position of each pixel point is determined by a decision until the path reaches the end of the image or the end of the stripe and the algorithm finishes. The set of all points walked through is then the center of the extracted stripe; the algorithm is summarized briefly in fig. 2.
In the embodiment of the invention, the image is first preprocessed and its starting point determined; the extraction of the starting point differs under different conditions. For the image shown in fig. 3, the starting-point position is determined by applying the gray-level center-of-gravity method to the first column only. Next, viewed at the level of individual pixels, the next pixel point could theoretically lie in any of four directions from the starting point; but since the starting point is the leftmost pixel point and the end point is the rightmost, it is assumed that no pixel on the path moves leftward. The core step of the decision means is to determine the position of the next pixel point from the values of the surrounding pixels, taking the current pixel point as the center.
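The gray-level center of gravity of the first column can be computed as below; the function name and the all-zero-column fallback are illustrative assumptions, not from the patent:

```python
import numpy as np

def column_centroid(column):
    """Gray-level center of gravity of one image column:
    y_c = sum(y * I(y)) / sum(I(y))."""
    column = np.asarray(column, dtype=float)
    total = column.sum()
    if total == 0:
        return len(column) // 2  # assumed fallback for an all-zero column
    return float(np.dot(np.arange(len(column)), column) / total)
```

For a column [0, 0, 10, 0, 0] the centroid is 2.0; for [0, 4, 4, 0] it is 1.5, a sub-pixel position that would typically be rounded to seed the walk.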
Different types of wide stripe call for different decision methods, because the noise on the cross-section of a wide stripe can follow many distributions, such as a Rayleigh, uniform or Gaussian distribution, and under different noise models the decision means may come closer to the idea of the gray center of gravity. For a laser stripe, the gray-value distribution in the normal direction can be regarded as approximately Gaussian, so a stripe pixel point (x_0, y_0) produces the following influence on the gray value at (x, y):

$$z(x, y) = A \exp\!\left(-\frac{(x - x_0)^2 + (y - y_0)^2}{2\sigma^2}\right)$$
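Under a Gaussian model of this kind, the influence function can be written directly; taking the amplitude A as 1 and sigma as an assumed width parameter (both are illustrative choices, since the original formula is only present as an image):

```python
import numpy as np

def gaussian_influence(x, y, x0, y0, sigma=2.0):
    """Assumed Gaussian contribution of stripe pixel (x0, y0) to the
    gray value at (x, y); sigma models the stripe width (assumption)."""
    return float(np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma ** 2)))
```

The influence is 1 at the pixel itself and decays monotonically with distance, which is what the direction comparisons below rely on.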
When the path reaches the current pixel point, three choices are presented: up, right and down. The theoretical pixel distribution should be one of fig. 4a, 4b, 4c for these three directions.
In the actual image, however, the next pixel point does not lie exactly in one of the three ideal directions relative to the current pixel point; but it can be judged which of the three directions the next pixel point is closer to.
Let the current point coordinates be (x_now, y_now). The ideal distributions for the three directions are:

$$z_d(x, y) = A \exp\!\left(-\frac{(x - x_d)^2 + (y - y_d)^2}{2\sigma^2}\right), \quad d \in \{\text{up}, \text{right}, \text{down}\},$$

where (x_d, y_d) denotes the neighboring pixel of (x_now, y_now) in direction d.

Solving z_right(x, y) = z_up(x, y) gives the diagonal line from lower left to upper right in fig. 5. A point on this line cannot tell whether its pixel value is due to the influence of position (1) or position (2), so the line can be regarded as a dividing line: if the next pixel point lies in the upper direction of the current pixel point, the pixel values to the left of this line are larger than those to its right. Similarly, if the pixel values to the left of the other diagonal line (from upper right to lower left) are larger than those to its right, the direction of the next pixel point is more likely to be downward. If, however, the probabilities of both the upper and the lower direction exceed that of the right direction, the equation z_up(x, y) = z_down(x, y) is solved instead; its solution set is the horizontal line in fig. 5, so the pixel values above and below the horizontal line should be compared.
In summary, the specific method comprises the following steps:
Starting from the current pixel point, compute the average pixel values of the regions Δ1, Δ2 and Δ3 in fig. 6. If Δ1 is the largest, the next pixel point is above; if Δ3 is the largest, the next pixel point is below; otherwise the next pixel point is to the right. Obviously, if the previous pixel point lies below, the value of Δ3 need not be judged. In addition, because the actual image contains some noise, a threshold can be added: for example, the next pixel point is considered to be above only if Δ1 minus the average of Δ2 and Δ3 is larger than a certain threshold, and to be to the right otherwise. Traversing the whole image in this way yields the center stripe.
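The Δ-region comparison with a noise threshold might look as follows. The exact shapes of Δ1, Δ2 and Δ3 are defined by fig. 6, which is not reproduced here, so small blocks above, to the right of, and below the current point are assumed as stand-ins; the function name and threshold value are also illustrative:

```python
import numpy as np

def decide_direction(img, y, x, prev=None, thresh=2.0):
    """One decision step: return 'up', 'down' or 'right'.
    The Delta regions are assumed stand-ins for those in fig. 6."""
    pad = np.pad(img.astype(float), 2, mode='edge')
    yy, xx = y + 2, x + 2                          # coordinates in padded image
    d1 = pad[yy - 2:yy, xx:xx + 2].mean()          # Delta1: block above
    d2 = pad[yy - 1:yy + 2, xx + 1:xx + 3].mean()  # Delta2: block to the right
    d3 = pad[yy + 1:yy + 3, xx:xx + 2].mean()      # Delta3: block below
    # the threshold guards against noise; skip a direction that would backtrack
    if prev != 'down' and d1 - (d2 + d3) / 2 > thresh:
        return 'up'
    if prev != 'up' and d3 - (d1 + d2) / 2 > thresh:
        return 'down'
    return 'right'
```

With a bright band above the current point the rule answers 'up'; with the band below it answers 'down'; on a flat region it defaults to 'right'.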
Finally, for each pixel this method considers the position of the next pixel point only from a microscopic, local point of view, and therefore lacks global consideration. As shown in fig. 7, under the influence of noise, considering only the next pixel point may cause erroneous judgments on the way from point A to point B, so the extracted line needs to be optimized.
The objective function of the optimization should be:

$$\max \sum_{i=1}^{n} I(x_i, y_i)$$

where (x_i, y_i), i = 1, ..., n are the points on the path; that is, for the same number of path points from point A to point B, the sum of the pixel values of the points on the path should be as large as possible.
The optimization step starts from the starting point, takes the following three points as the optimization target, enumerates the possible cases, and keeps the case that maximizes the objective function, as shown in fig. 8.
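The three-point enumeration can be sketched as follows (illustrative names; the candidate set {up, right, down} with no immediate reversal mirrors the walk described above, and the score is the objective function, i.e. the sum of visited pixel values):

```python
import numpy as np
from itertools import product

MOVES = {'up': (-1, 0), 'right': (0, 1), 'down': (1, 0)}
OPPOSITE = {'up': 'down', 'down': 'up', 'right': None}

def best_three_steps(img, y, x, prev=None):
    """Enumerate all 3-move continuations from (y, x) and return the
    move sequence maximizing the sum of visited pixel values."""
    h, w = img.shape
    best, best_score = None, float('-inf')
    for seq in product(MOVES, repeat=3):
        yy, xx, p, score, ok = y, x, prev, 0.0, True
        for m in seq:
            if p is not None and m == OPPOSITE[p]:
                ok = False          # immediate reversal is forbidden
                break
            dy, dx = MOVES[m]
            yy, xx = yy + dy, xx + dx
            if not (0 <= yy < h and 0 <= xx < w):
                ok = False          # path left the image
                break
            score += float(img[yy, xx])
            p = m
        if ok and score > best_score:
            best, best_score = seq, score
    return best, best_score
```

On a 4×4 image whose row 2 holds value 5 and all else 0, starting at (1, 0), the best sequence steps down onto the bright row and then right twice, scoring 15.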
Examples:
1) Preprocess the original image, extract the target region, and remove the useless background.
2) Scan the extracted region column by column and obtain the starting-point coordinates by the gray-level center-of-gravity method.
3) Determine the position of the next pixel point from the starting-point coordinates by the decision means, and extend the path pixel by pixel in the same way until it reaches the image edge or the end of the line.
4) Locally optimize the extracted line so that the path passes through the brightest pixel points. The extraction effect is shown in fig. 9.

Claims (5)

1. A decision-based laser stripe center extraction method, characterized in that: starting from a starting point, only four directions are considered for selection, namely up, down, left and right, and a decision means then determines the direction in which the next pixel point lies; the next pixel point is then taken as a new decision point, and the position of its successor is judged again, at which moment only three choices exist; iterating step by step in this way, the next position of each pixel point is determined by a decision until the path reaches the end of the image or the end of the stripe and the algorithm finishes; the set of all points walked through is then the center of the extracted stripe; the method comprises the following specific processes:
step 1) preprocessing an original image and determining a starting point of the image;
step 2) determining the position of the next pixel point from the starting-point coordinates by a decision means, and extending the path to each next pixel point in the same way until the path reaches the image edge or the end of the line;
step 3) locally optimizing the extracted line so that the path passes through the brightest pixel points;
after the path reaches the current pixel point, three choices are faced: up, right and down;
the next pixel point in the actual image does not lie exactly in one of these three ideal directions relative to the current pixel point, but it can be judged which of the three directions the next pixel point is closer to;
let the coordinates of the current point be (x_now, y_now); the ideal distributions for the three directions are:

$$z_d(x, y) = A \exp\!\left(-\frac{(x - x_d)^2 + (y - y_d)^2}{2\sigma^2}\right), \quad d \in \{\text{up}, \text{right}, \text{down}\},$$

where (x_d, y_d) denotes the neighboring pixel of the current point in direction d;

solve z_right(x, y) = z_up(x, y);

solving this equation gives a diagonal line from the lower left to the upper right, i.e. a point on this line cannot tell whether its pixel value is due to the influence of the upper direction or the right direction, so it is regarded as a dividing line; if the next pixel point lies in the upper direction of the current pixel point, the pixel values to the left of the line are larger than those to its right; similarly, if the pixel values to the left of the other diagonal line are larger than those to its right, the direction of the next pixel point is downward; if, however, the probabilities of both the upper and the lower direction exceed that of the right direction, the equation z_up(x, y) = z_down(x, y) is solved instead; its solution set is a horizontal line, and the pixel values above and below the horizontal line should then be compared.
2. The decision-based laser stripe center extraction method according to claim 1, wherein: the method for determining the starting point of the image in step 1) is to determine the starting-point position by applying the gray-level center-of-gravity method to the first column.
3. The decision-based laser stripe center extraction method according to claim 1, wherein: the decision means described in step 2) determines the positional relationship between the next pixel point and the current pixel point at the level of individual pixels, according to the statistical characteristics of the pixels.
4. The decision-based laser stripe center extraction method according to claim 1, wherein: the local optimization in step 3) fine-tunes the extracted line, on the basis of step 1) and step 2), so that the line as a whole attains the maximum pixel sum; the optimization step takes the three points after the current point as the optimization target, starting from the starting point, enumerates the possible cases, and keeps the case that maximizes the objective function.
5. The decision-based laser stripe center extraction method according to claim 4, wherein the objective function of the local optimization is:

$$\max \sum_{i=1}^{n} I(x_i, y_i)$$

where (x_i, y_i), i = 1, ..., n are the points on the path; it should be ensured that, for the same number of path points from point A to point B, the sum of the pixel values of the points on the path is maximized.
CN202010017870.5A 2020-01-08 2020-01-08 Decision-based laser stripe center extraction method Active CN111243009B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010017870.5A CN111243009B (en) 2020-01-08 2020-01-08 Decision-based laser stripe center extraction method


Publications (2)

Publication Number Publication Date
CN111243009A CN111243009A (en) 2020-06-05
CN111243009B true CN111243009B (en) 2023-07-14

Family

ID=70880388

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010017870.5A Active CN111243009B (en) 2020-01-08 2020-01-08 Decision-based laser stripe center extraction method

Country Status (1)

Country Link
CN (1) CN111243009B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111798519A (en) * 2020-07-21 2020-10-20 广东博智林机器人有限公司 Method and device for extracting laser stripe center, electronic equipment and storage medium

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN103411562B (en) * 2013-08-22 2016-01-13 电子科技大学 A kind of structured light strip center extraction method based on dynamic programming and average drifting
CN104657587B (en) * 2015-01-08 2017-07-18 华中科技大学 A kind of center line extraction method of laser stripe
CN109389639B (en) * 2018-07-16 2021-06-25 中国铁道科学研究院集团有限公司基础设施检测研究所 Method and device for extracting center of laser stripe of steel rail outline in dynamic driving environment
CN110599539B (en) * 2019-09-17 2022-05-17 广东奥普特科技股份有限公司 Stripe center extraction method of structured light stripe image

Non-Patent Citations (1)

Title
Laser stripe center extraction algorithm based on an improved centroid method; Xi Jianhui; Bao Hui; Fire Control & Command Control (Issue 05) *

Also Published As

Publication number Publication date
CN111243009A (en) 2020-06-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant