CN116934826A - Multi-line laser stripe clustering and matching method based on custom window iteration - Google Patents

Multi-line laser stripe clustering and matching method based on custom window iteration

Info

Publication number
CN116934826A
CN116934826A (Application CN202310706045.XA)
Authority
CN
China
Prior art keywords
window
stripes
stripe
coordinates
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310706045.XA
Other languages
Chinese (zh)
Inventor
李文国
邓海波
邓志鹏
吴新刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kunming University of Science and Technology
Original Assignee
Kunming University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunming University of Science and Technology filed Critical Kunming University of Science and Technology
Priority to CN202310706045.XA priority Critical patent/CN116934826A/en
Publication of CN116934826A publication Critical patent/CN116934826A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/254 Projection of a pattern, viewing through a pattern, e.g. moiré
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/564 Depth or shape recovery from multiple images from contours
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/593 Depth or shape recovery from multiple images from stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/68 Analysis of geometric attributes of symmetry
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing


Abstract

The invention relates to a multi-line laser stripe clustering and matching method based on custom window iteration, belonging to the technical field of computer vision measurement. The method comprises the following steps: 1. obtaining stripe projection images of the measured object's segmented region of interest under the viewing angles of two cameras; 2. filtering the image obtained in step 1 and extracting the centerline; 3. clustering the centerline stripes obtained in step 2 by establishing a custom iterative window; 4. calculating the centroid coordinates of the stripes from the stripe point coordinate data obtained by the clustering in step 3 and numbering the stripes, whereby stripes with the same number in the left and right views are the corresponding matched stripes. In a multi-line laser three-dimensional vision measurement system based on binocular vision, the method clusters the centerlines of the multi-line laser stripes projected onto the surface of the measured object in the left and right views without being affected by stripe breakage or noise points, obtains correctly matched stripes, and avoids the problem of mismatching of multi-line laser stripes.

Description

Multi-line laser stripe clustering and matching method based on custom window iteration
Technical Field
The invention relates to a multi-line laser stripe clustering and matching method based on custom window iteration, and belongs to the technical field of computer vision measurement.
Background
Active optical three-dimensional measurement mainly relies on the structured light source provided by the measurement system to emit structured light of a certain form, which is projected onto the surface of an object; a camera collects images of the structured light on the measured surface, from which the three-dimensional coordinates of the surface are calculated. Structured-light three-dimensional measurement is non-contact, highly accurate, fast, and simple in structure, and is therefore widely used in industrial production, biomedicine, aerospace, and construction engineering.
Structured light can be classified by the type of laser projection into point, line, and surface (light-plane) structured light, and line lasers include single-line and multi-line lasers. In a binocular multi-line laser three-dimensional measurement system, the multi-line laser is projected onto the measured object and pictures of the object are collected simultaneously by the left and right cameras. The pictures are preprocessed, the centerline of each light stripe is extracted, the center points extracted from the left and right pictures are stereo-matched using the epipolar constraint, and finally the three-dimensional coordinates on the object are calculated from the matched laser center points of the two pictures according to the binocular parallax principle. In such a binocular multi-line laser three-dimensional measurement system, finding the corresponding matched stripes on the object in the left and right views is a key problem of the binocular multi-line structured-light technique.
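The binocular parallax principle mentioned above can be illustrated with a minimal sketch. It assumes rectified, horizontally aligned cameras (as in the system described here), and the focal length and baseline values in the example are purely illustrative, not parameters from the patent:

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_mm):
    """Depth of a matched point pair under the standard rectified
    binocular model: Z = f * B / d, where d is the horizontal disparity."""
    d = x_left - x_right  # disparity in pixels
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return focal_px * baseline_mm / d

# Example with assumed parameters: f = 1200 px, baseline = 80 mm.
z = depth_from_disparity(x_left=640.0, x_right=600.0, focal_px=1200.0, baseline_mm=80.0)
# z = 1200 * 80 / 40 = 2400 mm
```

This is why correct stripe matching is critical: if the left-view point is paired with a point on the wrong stripe, the disparity d, and hence the reconstructed depth, is wrong.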
When images of the object are captured, continuous and complete stripes are difficult to obtain because of the complex texture of the object's surface, height variations of the surface, the ambient light environment, and image noise during centerline extraction. The light stripes are therefore hard to distinguish, which causes errors in stripe matching: the first stripe on the object in the left image should correspond to the first stripe in the right image but is wrongly matched to another stripe, so correct matching points cannot be obtained and the three-dimensional reconstruction cannot be completed.
Disclosure of Invention
For existing binocular multi-line laser three-dimensional measurement systems, the invention provides a multi-line laser stripe clustering and matching method based on custom window iteration. Exploiting the similarity of the left- and right-view stripes, broken stripes are clustered by picking up points with a custom window, and the numbers of stripes in the left and right views are made to correspond through window iteration. The clustered stripes are then numbered, so that each light stripe in the left view finally corresponds one-to-one with a light stripe in the right view.
The technical scheme adopted by the invention is as follows: a multi-line laser stripe clustering and matching method based on user-defined window iteration comprises the following steps:
step 1: constructing a divergent multi-line laser measurement model based on binocular vision, and obtaining stripe projection images of the measured object's segmented region of interest under the two camera viewing angles;
step 2: filtering the image obtained in step 1, extracting the centerline, and drawing a centerline stripe binary image;
step 3: clustering the centerline stripes in the centerline stripe binary image obtained in step 2 by establishing a custom window, obtaining the stripe point coordinate data during clustering, and deciding, from a comparison of the numbers of clustered stripes in the left and right views, whether to iterate the window and re-cluster the stripes;
step 4: calculating the centroid coordinates of the stripes and the Euclidean distance from each centroid to the pixel-coordinate origin from the stripe point coordinate data obtained by the clustering in step 3, and numbering the stripes according to the results; stripes with the same number in the left and right views are the corresponding matched stripes.
Specifically, when the divergent multi-line laser measurement model based on binocular vision is constructed in step 1, the multi-line laser must be orthographically projected, and the projected stripes must appear vertical or inclined in the view images.
Specifically, in step 1, in order to segment the region accurately when the picture is acquired, a sheet of solid-color background paper whose color differs from the measured object is laid on the measurement platform. This background material effectively reduces the influence of background noise; it only needs to be clearly distinguishable from the measured object and may have various colors and textures. The two cameras first capture a picture of the measured object without laser stripe projection; then, without moving the object, a picture of the measured object with structured-light stripe projection is captured. Edge features are detected and extracted from the picture without stripe projection, the extracted edge contour is dilated to obtain an accurately closed image contour, and the closed contour is filled to obtain a mask image of the ROI of the measured object. This mask is superimposed with the stripe-projected image of the measured object to obtain the stripe projection image of the segmented region of interest.
Specifically, after the centerline is extracted in step 2, the stripe coordinates of the centerline are obtained, and each pixel coordinate is drawn with pixel value 255 onto an image of the same format as the original whose pixel values are all 0, giving a centerline stripe binary image.
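Drawing the centerline coordinates onto a zero-valued image of the original's size reads directly as code (a minimal NumPy sketch; the point format (row, col) is an assumption):

```python
import numpy as np

def centerline_binary(shape, centerline_points):
    """Draw extracted centerline points (row, col) as value 255 on a
    zero image of the same shape as the original picture."""
    binary = np.zeros(shape, dtype=np.uint8)
    for r, c in centerline_points:
        binary[r, c] = 255
    return binary

img = centerline_binary((4, 4), [(0, 1), (1, 1), (2, 2)])
```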
Specifically, the step 3 includes the following steps S1 and S2:
step S1, establishing the custom window: the initial window size is defined as (m+n+1) × (p+q+1), where the coordinate (0, 0) in the window is the anchor point; the numbers of rows above and below the anchor are m and n respectively, and the numbers of columns to its left and right are p and q respectively (m, n, p, q are positive integers). The coordinates of the other pixels in the window increase or decrease in sequence with the anchor coordinate as the reference. The number of rows above takes the initial value m = 1. The number of rows below is set larger so that stripe segments broken along the centerline can still be associated; the initial value n = 7 is taken. The projected line laser stripes are vertical or inclined and do not span widely in the horizontal direction, so the window takes the initial values p = 1, q = 1.
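The (m+n+1) × (p+q+1) window with its anchor at (0, 0) can be generated as a list of row/column offsets in the top-to-bottom, left-to-right scan order used during clustering (a sketch under the stated initial values m=1, n=7, p=1, q=1; the helper name is illustrative):

```python
def window_offsets(m=1, n=7, p=1, q=1):
    """Offsets of every cell of the custom window relative to the
    anchor (0, 0): m rows above, n rows below, p columns to the left,
    q columns to the right, traversed top-to-bottom, left-to-right."""
    return [(dr, dc)
            for dr in range(-m, n + 1)
            for dc in range(-p, q + 1)]

offsets = window_offsets()
# (m+n+1) * (p+q+1) = 9 * 3 = 27 cells in the initial window
```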
Step S2, clustering of left view and right view stripes:
The stripes in the image obtained in step 2 are processed with the custom-window clustering algorithm to obtain stripe 1, stripe 2, stripe 3, and so on. The right image is processed in the same way as the left image and gives the same kind of result. The clustering algorithm comprises the following steps:
1) Finding the starting point P_start of a stripe: the image is traversed from left to right and top to bottom in search of a pixel with gray value 255. If such a pixel is found, it is taken as the first point P_start, its pixel value is set to the background value 0, its coordinate is stored in a temporary container v, and step 2) is executed; otherwise step 6) is executed.
2) Searching for the anchor point P_end of the next window: the given P_start coordinate is taken as the anchor point of the custom window, and the window is built. If there are points with pixel value 255 in the window, their coordinates are recorded in the window's left-to-right, top-to-bottom order. Each coordinate is calculated by summing the anchor pixel coordinate and the point's offset within the window. The pixel values of the recorded points are set to the background value 0, all recorded coordinates are stored in the temporary container v, and the last collected point with pixel value 255 in the window is taken as the anchor point P_end for the next window search; step 3) is then executed. Otherwise, there is no point with pixel value 255 in the window, and step 4) is executed.
3) Window movement and anchor switching: each time the window moves, it moves along the vector from the current anchor point to the next anchor point, and P_start = P_end is set before returning to step 2). By cycling through steps 2) and 3), the coordinates of all points with pixel value 255 on a given centerline stripe are continuously searched and recorded.
4) Deleting noise points: the number of extracted points in the temporary container v is denoted k (k = 1, 2, 3, ...). Noise points and short stripes that arise during extraction are characterized by a small number of points. To reject them, a threshold num is set; if k ≤ num, the extracted stripe is considered a short stripe or noise and its points are deleted, after which step 5) is executed. Here num = 20 is used.
5) Storing stripe points: a stripe whose number of coordinate points in the container v satisfies k > num is a clustered stripe, so its point coordinates are stored in the container v_j (j = 1, 2, 3, ...) and the data in v is then cleared. Each time step 5) is executed, j = j + 1, and the process returns to step 1).
6) After the stripes in the left and right images have been clustered, the stripe counts of the two images are compared. If they are equal, the stripe clustering has succeeded and clustering extraction ends. If not, the window is expanded and the stripes are re-clustered. When the window is expanded, 2 rows are first added below the window (n = n + 2), and then 1 column is added on each side (p = p + 1, q = q + 1). Each expansion counts as one iteration, and the window size after iteration is (m+n+1+2i) × (p+q+1+2i), where i is the iteration count, i = 0, 1, 2, .... After the window is expanded, the process returns to step 1).
The coordinates of points in the window in step 6) are stored by the container; coordinate points are added to the container in each iteration, and the points in the container are ordered in the window's left-to-right, top-to-bottom sequence.
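The clustering steps above can be sketched end-to-end for a single image as follows. This is a simplified sketch: the function name and NumPy representation are assumptions, and the left/right stripe-count comparison of step 6) is reduced to an outer loop run by the caller:

```python
import numpy as np

def cluster_stripes(binary, m=1, n=7, p=1, q=1, num=20):
    """Cluster centerline points of one binary image (values 0/255)
    with the custom window; returns one list of (row, col) points per stripe."""
    img = binary.copy()
    rows, cols = img.shape
    stripes = []
    while True:
        # Step 1: find a starting point P_start (first 255 pixel, row-major scan).
        pts = np.argwhere(img == 255)
        if pts.size == 0:
            break
        r0, c0 = int(pts[0][0]), int(pts[0][1])
        img[r0, c0] = 0
        v = [(r0, c0)]
        anchor = (r0, c0)
        while True:
            # Step 2: collect 255 pixels inside the window around the anchor,
            # top-to-bottom, left-to-right; clear each as it is collected.
            found = []
            for dr in range(-m, n + 1):
                for dc in range(-p, q + 1):
                    r, c = anchor[0] + dr, anchor[1] + dc
                    if 0 <= r < rows and 0 <= c < cols and img[r, c] == 255:
                        found.append((r, c))
                        img[r, c] = 0
            if not found:
                break  # no 255 point left in the window -> step 4
            v.extend(found)
            anchor = found[-1]  # step 3: switch anchor to the last collected point
        # Steps 4/5: drop short stripes and noise, keep real stripes.
        if len(v) > num:
            stripes.append(v)
    return stripes
```

Step 6) then becomes one outer loop: run `cluster_stripes` on both views and, while the stripe counts differ, expand the window (n += 2, p += 1, q += 1) and re-run.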
Specifically, the centroid coordinates (ave_x, ave_y) of each stripe and the Euclidean distance dis from the centroid to the pixel-coordinate origin are calculated as follows:

ave_x = (1/g) Σ_{l=1..g} x_l,  ave_y = (1/g) Σ_{l=1..g} y_l,  dis = √(ave_x² + ave_y²)

where ave_x and ave_y are the horizontal and vertical coordinates of the stripe centroid, g is the number of point coordinates contained in the container v_j of each clustered stripe, x_l and y_l are the abscissa and ordinate of a point in the stripe, and dis is the Euclidean distance from the centroid to the pixel-coordinate origin.
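The centroid and distance computation reads directly as code (a minimal sketch consistent with the definitions above; the function name is illustrative):

```python
import math

def centroid_and_dis(points):
    """Centroid (ave_x, ave_y) of one clustered stripe's g points and
    its Euclidean distance dis to the pixel-coordinate origin."""
    g = len(points)
    ave_x = sum(x for x, _ in points) / g
    ave_y = sum(y for _, y in points) / g
    dis = math.hypot(ave_x, ave_y)
    return ave_x, ave_y, dis

ax, ay, d = centroid_and_dis([(3, 0), (3, 8)])
# ave_x = 3, ave_y = 4, dis = 5
```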
Because of the order consistency of the stripe projections on the object surface, the stripes are numbered in ascending order of the abscissa of the stripe centroid. When the centroid abscissas of two stripes are close, the broken stripes of the same light plane are considered to have been clustered into two or more stripes. Although the two cameras are arranged approximately in parallel, the left and right views still differ, and the longitudinal component of the centroid coordinate is more distinctive than the transverse component. Therefore, among stripes with close centroid abscissas, the stripe with the smaller Euclidean distance dis from the centroid to the origin of the pixel coordinate system is numbered first.
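The numbering rule can be sketched as a sort over centroids (the tolerance tol that decides when two abscissas count as "close" is an assumed parameter, not a value fixed by the patent text):

```python
import math

def number_stripes(centroids, tol=10.0):
    """Order stripes by centroid abscissa ave_x; when two abscissas are
    within tol of each other, the stripe with the smaller distance dis
    to the pixel origin comes first. Returns stripe indices in
    numbering order."""
    def dis(c):
        return math.hypot(c[0], c[1])
    order = sorted(range(len(centroids)), key=lambda i: centroids[i][0])
    # Bubble pass: inside groups of close abscissas, smaller dis first.
    i = 0
    while i < len(order) - 1:
        a, b = order[i], order[i + 1]
        if abs(centroids[a][0] - centroids[b][0]) <= tol and dis(centroids[a]) > dis(centroids[b]):
            order[i], order[i + 1] = b, a
            i = max(i - 1, 0)  # re-check the previous pair after a swap
        else:
            i += 1
    return order
```

Stripes given the same resulting number in the left and right views are then taken as the matched pair.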
The beneficial effects of the invention are as follows: the invention provides a multi-line laser stripe clustering and matching method based on custom window iteration that requires no light-plane calibration. Broken stripes are clustered by picking up points with a custom window, and the iterative window makes the numbers of stripes in the left and right views correspond. When the extracted stripes are broken, clustering gives one of two results: either the broken stripes are associated through window iteration and treated as the same stripe, or the corresponding stripes have large break lengths in the left and right views and are treated as two stripes. Both clustering results can be numbered for multi-stripe completion by calculating the stripe centroid coordinates; stripes extracted from the left and right views with corresponding numbers are corresponding stripes, which completes the matching of corresponding stripes between the two views, and the matching points of all stripes are then obtained from the corresponding stripes using the epipolar constraint.
Drawings
FIG. 1 is a system flow diagram of the present invention;
FIG. 2 is a schematic diagram of a measurement system according to the present invention;
FIG. 3 is a schematic view of the present invention for region segmentation;
FIG. 4 is a diagram illustrating the establishment of a custom initial window according to the present invention;
FIG. 5 is a schematic view of a window iteration of the present invention;
FIG. 6 is a schematic diagram of the experimental results of the present invention.
Detailed Description
The invention will be further described with reference to the accompanying drawings and specific examples.
Example 1: the invention provides a multi-line laser stripe clustering and matching method based on user-defined window iteration. Firstly, dividing a measured object from an object carrying plane, secondly, clustering the extracted central line stripes by using a user-defined iteration window, and finally numbering the stripes clustered by the left view and the right view by utilizing the barycenter coordinates of the stripes based on sequence consistency constraint, and completing three-dimensional matching on the stripes with the same number, wherein the structural block diagram is shown in figure 1.
As shown in fig. 2, the measurement system of the multi-line laser stripe clustering and matching method based on custom window iteration comprises two industrial cameras 1, a divergent multi-line laser 2, a measured object 3, a background plate 4, a support 5 and a workbench 6. The parameters of the two cameras are identical; the cameras are approximately parallel in the horizontal direction and are arranged on the support 5 symmetrically with respect to the multi-line laser.
The multi-line laser stripe clustering and matching method based on the user-defined window iteration comprises the following steps:
step 1: the multiline laser is orthographically projected and the projected fringes need to be vertical or tilted in view imaging. When the picture is acquired, in order to accurately divide the region, a measurement platform needs to be padded with a piece of solid background paper which is different from the measured object in color. The invention provides a background material used on a measuring platform so as to effectively reduce the influence of background noise. The material can be clearly distinguished from the measured object and can be of various colors and textures. The two cameras firstly collect a picture of the measured object without laser stripe projection, the measured object is not moved, and then the picture of the measured object with structured light stripe projection is collected. The method comprises the steps of detecting and extracting edge features of a picture without laser stripe projection, performing image expansion on the extracted edge contour to obtain an accurate closed image contour, and then filling the closed contour to obtain a mask image of the ROI area of the measured object. And superposing the image with the measured object image of the stripe projection to obtain a stripe projection image of the segmented region of interest, as shown in fig. 3.
Step 2: and (3) carrying out filtering treatment and central line extraction on the segmented image obtained in the step (1) to obtain stripe coordinates of a central line, and drawing the coordinates of each pixel point on an image which has the same format as the original image and has the pixel value of 0 by using the pixel value of 255 to obtain a central line stripe binary image.
Step 3: specifically, the method comprises the following steps S1 and S2:
step S1, establishing the custom window: the initial window size is defined as (m+n+1) × (p+q+1), where the coordinate (0, 0) in the window is the anchor point; the numbers of rows above and below the anchor are m and n respectively, and the numbers of columns to its left and right are p and q respectively (m, n, p, q are positive integers). The coordinates of the other pixels in the window increase or decrease in sequence with the anchor coordinate as the reference. The number of rows above takes the initial value m = 1. The number of rows below is set larger so that stripe segments broken along the centerline can still be associated; the initial value n = 7 is taken. The projected line laser stripes are vertical or inclined and do not span widely in the horizontal direction, so the window takes the initial values p = 1, q = 1. The initial window is shown in fig. 4 (a).
Step S2, clustering of left view and right view stripes:
The stripes in the image obtained in step 2 are processed with the custom-window clustering algorithm to obtain stripe 1, stripe 2, stripe 3, and so on. The right image is processed in the same way as the left image and gives the same kind of result. The clustering algorithm comprises the following steps:
1) Finding the starting point P_start of a stripe: the image is traversed from left to right and top to bottom in search of a pixel with gray value 255. If such a pixel is found, it is taken as the first point P_start, its pixel value is set to the background value 0, its coordinate is stored in a temporary container v, and step 2) is executed; otherwise step 6) is executed.
2) Searching for the anchor point P_end of the next window: the given P_start coordinate is taken as the anchor point of the custom window, and the window is built. If there are points with pixel value 255 in the window, their coordinates are recorded in the window's left-to-right, top-to-bottom order, as shown in fig. 4 (b). Each coordinate is calculated by summing the anchor pixel coordinate and the point's offset within the window. The pixel values of the recorded points are set to the background value 0, all recorded coordinates are stored in the temporary container v, and the last collected point with pixel value 255 in the window is taken as the anchor point P_end for the next window search; step 3) is then executed. Otherwise, there is no point with pixel value 255 in the window, and step 4) is executed.
3) Window movement and anchor switching: as shown in fig. 4 (c), each time the window moves, it moves along the vector from the current anchor point to the next anchor point, and P_start = P_end is set before returning to step 2). By cycling through steps 2) and 3), the coordinates of all points with pixel value 255 on a given centerline stripe are continuously searched and recorded.
4) Deleting noise points: the number of extracted points in the temporary container v is denoted k (k = 1, 2, 3, ...). Noise points and short stripes that arise during extraction are characterized by a small number of points. To reject them, a threshold num is set; if k ≤ num, the extracted stripe is considered a short stripe or noise and its points are deleted, after which step 5) is executed. Here num = 20 is used.
5) Storing stripe points: a stripe whose number of coordinate points in the container v satisfies k > num is a clustered stripe, so its point coordinates are stored in the container v_j (j = 1, 2, 3, ...) and the data in v is then cleared. Each time step 5) is executed, j = j + 1, and the process returns to step 1).
6) After the stripes in the left and right images have been clustered, the stripe counts of the two images are compared. If they are equal, the stripe clustering has succeeded and clustering extraction ends. If not, the window is expanded and the stripes are re-clustered. When the window is expanded, 2 rows are first added below the window (n = n + 2), and then 1 column is added on each side (p = p + 1, q = q + 1). Each expansion counts as one iteration, and the window size after iteration is (m+n+1+2i) × (p+q+1+2i), where i is the iteration count, i = 0, 1, 2, .... After the window is expanded, the process returns to step 1). The iterative process of the window is shown schematically in fig. 5.
The coordinates of points in the window in step 6) are stored by the container; coordinate points are added to the container in each iteration, and the points in the container are ordered in the window's left-to-right, top-to-bottom sequence.
Step 4: calculating the barycenter coordinates (ave) of each stripe according to the stripes clustered in the step 3 x ,ave x ) And Euclidean distance dis from centroid coordinates to pixel coordinate origin, a calculation formula such asThe following steps:
wherein ave x 、ave y G is the horizontal and vertical coordinates of the barycenter of the stripes, g is the container v of each clustered stripe j The number of point coordinates, x, contained in the image l 、y l Representing the abscissa value of a point in the stripe, dis is the Euclidean distance from the centroid coordinate to the origin of the pixel coordinate.
Because of the order consistency of the stripe projections on the object surface, the stripes are numbered in ascending order of the abscissa of the stripe centroid. When the centroid abscissas of two stripes are close, the broken stripes of the same light plane are considered to have been clustered into two or more stripes. Although the two cameras are arranged approximately in parallel, the left and right views still differ, and the longitudinal component of the centroid coordinate is more distinctive than the transverse component. Therefore, among stripes with close centroid abscissas, the stripe with the smaller Euclidean distance dis from the centroid to the origin of the pixel coordinate system is numbered first.
When there is breakage in the extracted stripes, clustering gives two kinds of results. First, the broken stripes are associated through window iteration and treated as the same stripe, like stripes 1-7 in FIG. 6. Second, stripes of the same light plane whose corresponding break lengths in the left and right views are large are treated as two stripes, like stripes 8 and 9 in FIG. 6.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, e.g., the model parameters of a divergent multi-line laser, and that the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only a single independent technical solution; this manner of description is adopted for clarity only. The specification should be taken as a whole, and the technical solutions in the respective embodiments may be suitably combined to form other implementations understandable to those skilled in the art.

Claims (6)

1. A multi-line laser stripe clustering and matching method based on custom window iteration, characterized in that the method comprises the following steps:
step 1: constructing a divergent multi-line laser measurement model based on binocular vision, and obtaining stripe projection images after the region of interest of the measured object is segmented under two camera view angles;
step 2: filtering the image obtained in the step 1, extracting a central line, and drawing a central line stripe binary image;
step 3: clustering the central line stripes in the central line stripe binary image obtained in step 2 by establishing a custom window, obtaining the stripe point coordinate data of each cluster, and judging whether to iterate the window and re-cluster the stripes according to a comparison of the numbers of clustered stripes in the left and right views;
step 4: calculating the centroid coordinates of the stripes and the Euclidean distance from the centroid coordinates to the pixel coordinate origin from the stripe point coordinate data obtained by clustering in step 3, and numbering the stripes according to the calculation result; stripes with the same number in the left and right views are the correspondingly matched stripes.
2. The method for clustering and matching multi-line laser stripes based on custom window iteration of claim 1, wherein step 1 specifically comprises the following steps:
when the divergent multi-line laser measurement model based on binocular vision is constructed, the multi-line laser must project orthographically, and the projected stripes must appear vertical or inclined in the view images;
when the pictures are collected, the measurement platform is covered with a sheet of solid-colour background paper whose colour differs from that of the measured object. The two cameras first collect a picture of the measured object without laser stripe projection; with the object left in place, a picture of the measured object with structured-light stripe projection is then collected. The edge features of the stripe-free picture are detected and extracted, and the extracted edge contour is dilated to obtain an accurate closed image contour; the closed contour is then filled to obtain a mask image of the ROI region of the measured object. Superimposing the mask image on the striped image of the measured object yields the stripe projection image with the region of interest segmented.
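The final superposition step of claim 2 can be sketched as below. This is a minimal numpy sketch assuming the filled ROI mask (255 inside the object contour, 0 outside) has already been obtained from the stripe-free picture; the function name apply_roi_mask is illustrative.

```python
import numpy as np

def apply_roi_mask(striped_img, mask):
    """Keep stripe-projection pixels only where the ROI mask is 255,
    zeroing everything outside the measured object's contour."""
    out = np.zeros_like(striped_img)
    inside = mask == 255
    out[inside] = striped_img[inside]
    return out
```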
3. The method for clustering and matching multi-line laser stripes based on custom window iteration of claim 1, wherein after the central line is extracted in step 2, the stripe coordinates of the central line are obtained, and each pixel point coordinate is drawn with pixel value 255 on an image of the same size as the original image whose pixel values are all 0, obtaining the central line stripe binary image.
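The drawing step of claim 3 can be sketched as follows (numpy, function name illustrative): the extracted centre-line coordinates are plotted with value 255 on an all-zero image of the original size.

```python
import numpy as np

def draw_centreline(shape, points):
    """Return the centre-line stripe binary image: background pixels 0,
    extracted centre-line pixels 255."""
    canvas = np.zeros(shape, dtype=np.uint8)
    for r, c in points:
        canvas[r, c] = 255
    return canvas
```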
4. The method for clustering and matching multi-line laser stripes based on custom window iteration of claim 1, wherein step 3 specifically comprises the following steps S1 and S2:
step S1, establishing a custom window: the size of the initial window is defined as (m+n+1) × (p+q+1), where the coordinate (0, 0) in the window is the anchor point; the numbers of rows above and below the anchor are m and n respectively, the numbers of columns to its left and right are p and q respectively, and m, n, p, q are positive integers. The coordinates of the other pixel points in the window increase or decrease sequentially with the anchor coordinate as the reference. The initial values are m = 1, n = 7, p = 1, q = 1;
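Under the definition in step S1, together with the expansion rule of step 6) below, the window's pixel offsets relative to its anchor can be enumerated as follows; a sketch, with window_offsets an illustrative name.

```python
def window_offsets(m=1, n=7, p=1, q=1, i=0):
    """Offsets (d_row, d_col) of the (m+n+1+2i) x (p+q+1+2i) custom
    window around the anchor at (0, 0), listed left-to-right,
    top-to-bottom as used when recording candidate points.
    Iteration i adds 2 rows below (n + 2i) and i columns on each side."""
    rows = range(-m, n + 2 * i + 1)
    cols = range(-(p + i), q + i + 1)
    return [(dr, dc) for dr in rows for dc in cols]
```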
step S2, clustering of left view and right view stripes:
the stripes in the image obtained in step 2 are processed with the custom window clustering algorithm, yielding stripes 1, 2 and 3 respectively.
1) finding the starting point P_start of a stripe: traverse the image from left to right and from top to bottom, searching for a pixel point with gray value 255; if one is found, take that point as the first point P_start, set its pixel value to the background value 0, store the coordinate point in a temporary container v, and execute step 2); otherwise execute step 6);
2) searching for the anchor point P_end of the next window: using the given P_start coordinates as the anchor point, establish the custom window. If points with pixel value 255 exist in the window, record their coordinates in the window's left-to-right, top-to-bottom order; each coordinate is computed as the sum of the anchor's pixel coordinates and the point's window coordinates. At the same time, set the pixel value of every recorded point to the background value 0 and store all recorded coordinate points in the temporary container v; the last acquired point with pixel value 255 in the window serves as the anchor point P_end of the next search window, and step 3) is then executed. Otherwise, no point with pixel value 255 exists in the window, and step 4) is executed;
3) window movement and anchor switching: at each move, the window moves along the vector from the current anchor point to the next anchor point; let P_start = P_end and return to step 2). By cyclically executing steps 2) and 3), the coordinates of the 255-valued points of one centre-line stripe are continuously searched and recorded;
4) deleting noise points: denote the number of all extracted points in the temporary container v as k, k = 1, 2, 3, …, and set a threshold num. If k ≤ num, the extracted stripe is regarded as a short stripe or noise and its points are deleted; then execute step 5);
5) storing stripe points: a stripe whose number of coordinate points in the container v satisfies k > num is a clustered stripe, so the coordinates of these points are stored in the container v_j, j = 1, 2, 3, …, and the data in the container v are then cleared. Each time step 5) is executed, j = j + 1, and the procedure returns to step 1);
6) after the stripes in the left and right images have been clustered, compare the stripe counts of the two images. If the counts are equal, the stripes have been clustered successfully and cluster extraction ends. If they are unequal, the window is expanded and clustering is repeated: first, 2 rows are added below the window, i.e. n = n + 2; second, 1 column is added on each of the left and right sides of the window, i.e. p = p + 1, q = q + 1. Each expansion of the window counts as one window iteration, and the window size after iteration is (m+n+1+2i) × (p+q+1+2i), where i is the iteration number, i = 0, 1, 2, …; after the window is expanded, return to step 1);
the coordinates of the points in the window in step 6) are stored in the container; coordinate points are added to the container at each iteration, and the points in the container are ordered from left to right and from top to bottom of the window.
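Steps 1)-5) above can be sketched for a single view as follows. This is a minimal numpy sketch under assumed parameter values (noise threshold num = 5); the cross-view count comparison and window iteration of step 6) are left to the caller, which would re-run this function with larger n, p, q.

```python
import numpy as np

def cluster_stripes(img, m=1, n=7, p=1, q=1, num=5):
    """Window-based clustering of centre-line stripes in one view.
    img: 2-D uint8 array, centre-line pixels == 255, background == 0.
    Returns a list of per-stripe (row, col) coordinate lists."""
    img = img.copy()
    H, W = img.shape
    stripes = []
    while True:
        # step 1: scan for a stripe starting point P_start
        seeds = np.argwhere(img == 255)
        if seeds.size == 0:
            break
        r0, c0 = seeds[0]
        img[r0, c0] = 0
        v = [(int(r0), int(c0))]
        anchor = (int(r0), int(c0))
        while True:
            # step 2: record 255-pixels inside the window around the anchor,
            # left-to-right, top-to-bottom; zero them as they are recorded
            ar, ac = anchor
            found = []
            for dr in range(-m, n + 1):
                for dc in range(-p, q + 1):
                    r, c = ar + dr, ac + dc
                    if 0 <= r < H and 0 <= c < W and img[r, c] == 255:
                        found.append((r, c))
                        img[r, c] = 0
            if not found:
                break
            v.extend(found)
            anchor = found[-1]   # step 3: last recorded point becomes P_end
        # steps 4/5: discard short stripes / noise, keep clustered stripes
        if len(v) > num:
            stripes.append(v)
    return stripes
```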
5. The method for clustering and matching multi-line laser stripes based on custom window iteration of claim 1, characterized in that the centroid coordinates (ave_x, ave_y) of the stripes and the Euclidean distance dis from the centroid coordinates to the pixel coordinate origin are calculated as follows:

ave_x = (1/g) Σ_{l=1}^{g} x_l,   ave_y = (1/g) Σ_{l=1}^{g} y_l,   dis = √(ave_x² + ave_y²)

where ave_x and ave_y are the horizontal and vertical coordinates of the stripe centroid, g is the number of point coordinates contained in the container v_j of each clustered stripe, x_l and y_l represent the abscissa and ordinate values of a point in the stripe, and dis is the Euclidean distance from the centroid coordinates to the origin of the pixel coordinates.
6. The method for clustering and matching multi-line laser stripes based on custom window iteration of claim 5, characterized in that: in step 4, during numbering, the Euclidean distances dis from the centroid coordinates to the origin of the pixel coordinate system are compared, and the stripe with the smaller dis receives the smaller number.
CN202310706045.XA 2023-06-14 2023-06-14 Multi-line laser stripe clustering and matching method based on custom window iteration Pending CN116934826A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310706045.XA CN116934826A (en) 2023-06-14 2023-06-14 Multi-line laser stripe clustering and matching method based on custom window iteration


Publications (1)

Publication Number Publication Date
CN116934826A true CN116934826A (en) 2023-10-24

Family

ID=88386961


Country Status (1)

Country Link
CN (1) CN116934826A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117433450A (en) * 2023-12-20 2024-01-23 派姆特科技(苏州)有限公司 Cross line three-dimensional camera and modeling method
CN117433450B (en) * 2023-12-20 2024-04-19 派姆特科技(苏州)有限公司 Cross line three-dimensional camera and modeling method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination