CN112819877A - Laser line point cloud generating method and device and computer readable storage medium - Google Patents


Info

Publication number
CN112819877A
Authority
CN
China
Prior art keywords
point
laser line
laser
point set
segment
Prior art date
Legal status
Granted
Application number
CN202110035554.5A
Other languages
Chinese (zh)
Other versions
CN112819877B (en)
Inventor
李楚翘
邓亮
陈先开
冯良炳
Current Assignee
Shenzhen Cosmosvision Intelligent Technology Co ltd
Original Assignee
Shenzhen Cosmosvision Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Cosmosvision Intelligent Technology Co ltd filed Critical Shenzhen Cosmosvision Intelligent Technology Co ltd
Priority to CN202110035554.5A priority Critical patent/CN112819877B/en
Priority claimed from CN202110035554.5A external-priority patent/CN112819877B/en
Publication of CN112819877A publication Critical patent/CN112819877A/en
Application granted granted Critical
Publication of CN112819877B publication Critical patent/CN112819877B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/50: Depth or shape recovery
    • G06T7/521: Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Optics & Photonics (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a laser line point cloud generating method and device and a computer-readable storage medium, belonging to the technical field of point cloud generation in industrial machine vision. According to an embodiment of the application, at least 2N+1 laser-line shots are exposed sequentially from long to short using three exposure times (long, medium, and short); laser-line pixel points are extracted from the acquired images; the laser line is divided into a dark-region point set and a bright-region point set; the laser segments are then classified into standard segments, over-wide segments, and sparse segments; the laser line is restored; and the laser line is converted into a point cloud through a coordinate conversion relation to generate point cloud information. By exposing and imaging the laser line multiple times, the method can cope with working environments with more complex illumination; by classifying the laser lines and correcting them separately, segment by segment and region by region, more accurate laser center-line coordinates and therefore more accurate point clouds are obtained.

Description

Laser line point cloud generating method and device and computer readable storage medium
Technical Field
The invention relates to the technical field of point cloud generation in industrial machine vision, in particular to a laser line point cloud generation method and device and a computer readable storage medium.
Background
Laser-guided seam tracking technology has found many applications in industrial welding. Such a system includes an imaging system consisting of a laser and a camera and works as follows: the laser first scans the workpiece; the camera then photographs the laser line to obtain its coordinates on the image; finally, the image coordinates of the laser line are converted into a point cloud through the calibration relation between the camera and the robot.
The conventional method of generating a point cloud from a laser line requires adjusting the camera exposure according to the irradiance of the scene (workpiece and background) so that the laser line is imaged clearly and accurately. This approach has a problem: in a complex environment where the irradiance span of the scene is large (i.e., the contrast between light and dark is large), a single exposure cannot produce clear laser-line imaging in both bright and dark regions, and an accurate point cloud cannot be obtained.
Disclosure of Invention
In view of the above, the present invention provides a laser line point cloud generating method, a laser line point cloud generating device, and a computer-readable storage medium, and aims to solve the problems that laser line imaging is not clear and accurate point cloud cannot be obtained in a complex environment.
The technical scheme adopted by the invention for solving the technical problems is as follows:
the first aspect of the invention provides a laser line point cloud generating method, which comprises the following steps:
sequentially exposing at least 2N+1 laser-line shots from long to short using three exposure times (long, medium, and short), where N is a natural number;
extracting laser line pixel points of the acquired images;
segmenting the laser line into a dark area point set and a bright area point set;
classifying the laser sections;
restoring the laser line;
and converting the laser line into point cloud through a coordinate conversion relation to generate point cloud information.
In some embodiments, the step of segmenting the laser line into a dark region point set and a bright region point set comprises:
traversing the laser point set cell_i: if the current point p(x, y) is eight-neighborhood-connected with any point in the point set, adding the point to a bright-region point set list1, otherwise adding it to a dark-region point set list2;
eroding the connected domains in the bright-region point set list1, and, if the number of pixels in an eroded connected domain is too small, moving the points of that connected domain into the dark-region point set list2;
dilating the connected domains in the bright-region point set list1; the boundary of a dilated connected domain expands outward, and if a dilated pixel overlaps a point in the dark-region point set list2, the overlapping pixel is moved from list2 into the bright-region point set list1.
In some embodiments, the classifying the laser segments comprises:
classifying the bright area point set list1, calculating the width of each light band in the list1, if the width is smaller than a width threshold value, marking the section as a standard section, and otherwise, marking the section as an over-wide section;
and classifying the dark region point set list2, calculating the density of each segment point set in the list2, marking the segment as a standard segment if the density is greater than a density threshold, and otherwise, marking the segment as a sparse segment.
In some embodiments, the classification of the set of dark points list2 includes the steps of:
dividing the discrete points in the dark-region point set list2 into a plurality of point sets set_i, where each point set represents one light band;
sorting discrete points in the dark region point set list2 from small to large according to x coordinates, and dividing points with the distance in the x direction smaller than n pixels into the same point set;
calculating the point-set density of each light-band segment:

density_set = (number of points in the set) / (light-band area)

light-band area = light-band length × light-band width
light-band length length_set = coordinate_xMax - coordinate_xMin
light-band width width_set = coordinate_yMax - coordinate_yMin

where coordinate_xMax and coordinate_xMin are the maximum and minimum x coordinates of each point set, and coordinate_yMax and coordinate_yMin the maximum and minimum y coordinates;
setting a density threshold th _ density, if the density of the current set is greater than the density threshold, marking the set as a standard segment, and otherwise, marking the set as a sparse segment;
extracting the center line from the standard segments of the imaging image img_{n+1} of the laser line currently needing repair, and adding the center line to the final laser-line pixel point set.
In some embodiments, the restoring of the laser line comprises:
restoring the sparse segments of the imaging image img_{n+1} of the laser line currently needing repair;
adjusting the over-wide segments of the imaging image img_{n+1} of the laser line currently needing repair.
In some embodiments, restoring the sparse segments of the imaging image img_{n+1} of the laser line currently needing repair includes:
extracting the center lines of the standard segments in the dark-region point sets list2 of img_1 to img_n; if the dark-region point sets of several images are all standard segments, taking the one whose point set density is higher;
extracting the center line line of the light band;
lowering the gray threshold of img_{n+1} to obtain the laser point set cell_{n+1} again, determining the sparse segments, and calculating the point density of the sparse segments;
calculating the distance from each point of the laser point set cell_{n+1} to each point on the center line, taking the minimum distance and comparing it with a distance threshold; if the minimum distance is smaller than the distance threshold, adding the point to the corresponding sparse segment set_{n+1};
recalculating the density of the sparse segment set_{n+1}; if it meets the standard, adding set_{n+1} to the standard point set;
and if the density does not meet the standard, lowering the gray threshold again and repeating the steps of determining the sparse segments and calculating their point density.
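The distance constraint in the steps above can be sketched as follows. This is a minimal illustration rather than the patent's implementation; the function name filter_by_centerline and the default threshold are assumptions.

```python
import math

def filter_by_centerline(candidates, centerline, th_dist=2.0):
    """Keep a candidate point only if its minimum distance to the
    reference center line is below th_dist; points admitted by
    lowering the gray threshold but lying far from the line are
    treated as noise and discarded (names/threshold illustrative)."""
    kept = []
    for (x, y) in candidates:
        d_min = min(math.hypot(x - cx, y - cy) for (cx, cy) in centerline)
        if d_min < th_dist:
            kept.append((x, y))
    return kept
```

With a horizontal center line [(0, 0), (1, 0), (2, 0)], the candidate (1, 1) survives while (1, 5) is rejected.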
In some embodiments, adjusting the over-wide segments of the imaging image img_{n+1} of the laser line currently needing repair includes:
selecting a bright-region standard segment in img_{n+2} to img_{2n+1} as the reference segment; if the bright regions of several images are all standard segments, selecting the one with the thinner light band as the reference segment;
extracting the center line line_ref of the reference standard segment; the center line of the over-wide segment is corrected by the center line of the standard segment;
setting the gray value of the reference segment's points to 255 and solving the center line again to obtain line_bias;
determining the offset distance bias_distance from the reference segment's center line line_ref to line_bias;
and correcting the center line line_{n+1} with the offset distance bias_distance to obtain a new center line line_{n+1}.
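The offset-based correction can be illustrated with a small sketch. Averaging the per-column offset and the sign convention are assumptions; the patent only specifies that line_{n+1} is shifted by bias_distance.

```python
def mean_offset(line_ref, line_bias):
    """Estimate bias_distance as the mean y-offset between the
    reference center line and the center line recomputed after
    saturating the reference segment (averaging is an assumption)."""
    pairs = list(zip(line_ref, line_bias))
    return sum(yb - yr for (_, yr), (_, yb) in pairs) / len(pairs)

def correct_centerline(line_n1, bias_distance):
    """Shift the over-wide segment's center line by the offset
    (sign convention is illustrative)."""
    return [(x, y - bias_distance) for (x, y) in line_n1]
```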
In some embodiments, the method of extracting a line in an optical band comprises:
for a two-dimensional image, the Hessian matrix describes the two-dimensional derivative of each point in the principal direction, and for any point p (x, y) in the set seti of light band points seti, the Hessian matrix can be expressed as
Figure BDA0002893128090000051
Wherein rxrx represents the second order partial derivative of the point along the X direction, and ry represents the second order partial derivative of the point along the Y direction;
eigenvector (n) corresponding to maximum eigenvalue λ of the Hessian matrixx,ny) Is the normal direction of the light band;
with point (x0, y0) as the reference point, the sub-pixel (px, py) in the center of the band. Assuming that there is a coefficient t such that (px, py) ═ x0+ tnx, y0+ tny)
In the formula
Figure BDA0002893128090000052
Wherein rx and ry are respectively x and y directional partial derivatives, rxx, ryy is a second-order partial derivative, and rxy is a mixed partial derivative;
when | t | is less than 0.5, (x0, y0) is a point on the centerline.
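This sub-pixel center computation (essentially Steger's curvilinear-structure method) can be sketched for a single point, given the five partial derivatives. numpy's eigh is used for the eigen-decomposition; selecting the eigenvalue of largest magnitude is an interpretation of "maximum eigenvalue", and the function name is an assumption.

```python
import numpy as np

def center_subpixel(rx, ry, rxx, rxy, ryy, x0, y0):
    """Sub-pixel light-band center at (x0, y0) from first- and
    second-order partial derivatives, following the formulas above."""
    H = np.array([[rxx, rxy], [rxy, ryy]], dtype=float)
    vals, vecs = np.linalg.eigh(H)
    nx, ny = vecs[:, np.argmax(np.abs(vals))]  # band normal (nx, ny)
    t = -(rx * nx + ry * ny) / (rxx * nx**2 + 2 * rxy * nx * ny + ryy * ny**2)
    px, py = x0 + t * nx, y0 + t * ny
    return px, py, abs(t) < 0.5  # True: (x0, y0) lies on the center line
```

For a band profile peaking at y = 2.25 (rx = rxx = rxy = 0, ry = 0.5, ryy = -2 at the pixel (0, 2)), this returns the center (0, 2.25).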
The second aspect of the present invention also provides a laser line point cloud generating apparatus for performing the above laser line point cloud generating method, the apparatus comprising: the system comprises a laser line acquisition module, a pixel point acquisition module, a laser line segmentation module, a laser segment classification module, a laser line restoration module and a point cloud generation module;
the laser line acquisition module is used for sequentially exposing at least 2N+1 laser-line shots from long to short using three exposure times (long, medium, and short), where N is a natural number;
the laser line segmentation module is used for segmenting the laser line into a dark area point set and a bright area point set;
the laser segment classification module is used for classifying the laser segments;
the laser line restoration module is used for restoring a laser line;
the point cloud generating module is used for converting the laser line into point cloud through a coordinate conversion relation and generating point cloud information.
In some embodiments, the laser line segmentation module comprises a point set classification unit, a new point set acquisition unit, and a dilation operation unit;
the point set classification unit is used for classifying a bright area point set list1 and a dark area point set list 2;
the new point set acquisition unit is used for performing morphological erosion on the bright-region point set, splitting the pixel blocks adhered to dark regions, and obtaining a new bright-region point set and a new dark-region point set;
the expansion operation unit is used for performing expansion operation on a connected domain in the bright area point set list1 through the updated bright area point set list1 and dark area point set list 2; the boundaries of the dilated connected domain expand outward, and if the newly added dilated pixels overlap with points in the dark region set list2, the overlapping pixels are moved from the dark region set list2 into the light region set list 1.
In some embodiments, the laser line restoration module comprises a sparse segment restoration unit, an over-wide segment adjustment unit, and a midline extraction unit;
the sparse segment restoration unit is used for restoring the sparse segments of the imaging image img_{n+1} of the laser line currently needing repair;
the over-wide segment adjustment unit is used for adjusting the over-wide segments of the imaging image img_{n+1} of the laser line currently needing repair;
the center-line extraction unit extracts the center line line_ref of the reference segment and corrects the center line of the over-wide segment with the center line of the standard segment.
The present application also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs the steps of the method described above.
With the laser line point cloud generating method and device and the computer storage medium described above, at least 2N+1 laser-line shots are exposed sequentially from long to short using three exposure times (long, medium, and short); laser-line pixel points are extracted from the acquired images; the laser line is divided into a dark-region point set and a bright-region point set; the laser segments are then classified into standard segments, over-wide segments, and sparse segments; the laser line is restored; and the laser line is converted into a point cloud through a coordinate conversion relation to generate point cloud information. By exposing and imaging the laser line multiple times, the method can cope with working environments with more complex illumination; by classifying the laser lines and correcting them separately, segment by segment and region by region, more accurate laser center-line coordinates and therefore more accurate point clouds are obtained.
Drawings
FIG. 1 is a flowchart of a method of an embodiment of a laser line point cloud generating method according to the present invention;
fig. 2 is a laser area distribution diagram of a laser line point cloud generating method according to an embodiment of the present invention;
FIG. 3 is a flowchart of a method for dividing a laser line into a dark region point set and a bright region point set according to the laser line point cloud generating method provided in the embodiment of the present invention;
FIG. 4 is a flowchart of a method for classifying laser segments of a laser line point cloud generating method according to an embodiment of the present invention;
FIG. 5 is a flowchart of a classification estimation method of the dark point set list2 of the laser line point cloud generation method according to the embodiment of the present invention;
FIG. 6 is a schematic diagram of the classification of the list2 of the dark point set of the laser point cloud generating method according to the embodiment of the present invention;
fig. 7 is a schematic diagram of the laser points at gray threshold th_gray = 128 according to an embodiment of the present invention;
fig. 8 is a schematic diagram of the laser points when the gray threshold th_gray is lowered to 80 according to an embodiment of the present invention;
FIG. 9 is a flowchart of a method for restoring a laser line according to an embodiment of the present invention;
FIG. 10 is a flowchart of a method for recovering sparse segments in img3, in accordance with an embodiment of the present invention;
FIG. 11 is a schematic view of a centerline of an extracted band of light according to one embodiment of the present invention;
FIG. 12 is a flowchart of a method for adjusting the over-width of an imaged image img3 of a laser line currently in need of repair, in accordance with one embodiment of the present invention;
FIG. 13 is a schematic diagram of an over-width segment of a laser line point cloud generating method according to an embodiment of the present invention;
FIG. 14 is a diagram illustrating a standard segment of a laser line point cloud generating method according to an embodiment of the present invention;
FIG. 15 is a schematic diagram of a sparse segment of a laser line point cloud generating method according to an embodiment of the present invention;
FIG. 16 is a block diagram of an embodiment of a laser line point cloud generating device according to the present invention;
fig. 17 is a block diagram of another embodiment of a laser line point cloud generating apparatus according to the embodiment of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects to be solved by the present invention clearer and clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only for facilitating the explanation of the present invention, and have no specific meaning in itself. Thus, "module", "component" or "unit" may be used mixedly.
The terminal may be implemented in various forms. For example, the terminal described in the present invention may include mobile terminals such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, a pedometer, and the like, and fixed terminals such as a Digital TV, a desktop computer, and the like.
The first embodiment is as follows:
the invention provides a laser line point cloud generating method, please refer to fig. 1, the method comprises the following steps:
S10, sequentially exposing at least 2N+1 laser-line shots from long to short according to a certain gradient, using three exposure times (long, medium, and short);
Specifically, at least 2N+1 laser-line shots are exposed sequentially from long to short according to a certain gradient using three exposure times (long, medium, and short), where N is a natural number. This embodiment takes N = 2 as an example: at least five laser-line shots Q_{n-2}, Q_{n-1}, Q_n, Q_{n+1}, Q_{n+2} are exposed in five exposures whose exposure time decreases by a certain gradient, yielding the corresponding images img_{n-2}, img_{n-1}, img_n, img_{n+1}, img_{n+2}, where img_n is the imaging image of the laser line currently needing repair, Q_n is the laser-line shot of the current target needing repair, and n ≥ 3.
S12, extracting laser line pixel points of the acquired images;
Specifically, compared with other light sources in the scene, the laser line has higher energy and therefore a higher gray value on the image. A gray threshold th_gray is set; the pixels of the image are traversed, and pixels whose gray value is higher than th_gray are added to the laser point set. In this example, the five images img_{n-2}, img_{n-1}, img_n, img_{n+1}, img_{n+2} yield the corresponding laser point sets cell_{n-2}, cell_{n-1}, cell_n, cell_{n+1}, cell_{n+2}.
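Step S12 amounts to a gray threshold over the image; a minimal numpy sketch (function name assumed):

```python
import numpy as np

def extract_laser_points(img, th_gray=128):
    """Return the set of (x, y) pixel coordinates whose gray value
    exceeds th_gray; these points form the laser point set cell."""
    ys, xs = np.nonzero(img > th_gray)
    return set(zip(xs.tolist(), ys.tolist()))
```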
And S14, segmenting the laser line, and specifically dividing the laser line into a dark area point set and a bright area point set.
Specifically, as shown in fig. 2, which is a distribution diagram of laser regions in the embodiment of the present application, laser points are distributed more discretely in dark regions and densely in bright regions. Referring to fig. 3, dividing the laser line into a dark area point set and a bright area point set specifically includes the steps of:
S141, taking the laser point set cell_i = {p0, p1, ..., pn} as an example: if the current point p(x, y) is eight-neighborhood-connected with any point in the point set, the point is added to the bright-region point set list1; otherwise it is added to the dark-region point set list2;
The specific segmentation method is explained with the laser point set cell_i = {p0, p1, ..., pn}, taking i = 3 as an example. Each point in the laser point set is traversed: if a laser point exists in the eight-neighborhood of the current point p(x, y), the point is added to the bright-region point set list1, otherwise to the dark-region point set list2, as shown in the following table. Here cell_i is a laser point set, p_n is the laser point at the current position, and i and n are natural numbers. The table below shows the image coordinates of the current point p(x, y) and its eight neighborhood points.
(x-1, y-1)   (x, y-1)   (x+1, y-1)
(x-1, y)     p(x, y)    (x+1, y)
(x-1, y+1)   (x, y+1)   (x+1, y+1)
The division of the laser line into dark-region and bright-region point sets can be implemented directly as a program over the point sets.
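A minimal sketch of step S141 over a point set; the function and variable names are assumed:

```python
def split_bright_dark(cell):
    """Step S141 sketch: a point that is 8-connected to another laser
    point goes to the bright set list1; isolated points go to list2."""
    cell = set(cell)
    list1, list2 = set(), set()
    for (x, y) in cell:
        neighbors = {(x + dx, y + dy)
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0)}
        if neighbors & cell:
            list1.add((x, y))
        else:
            list2.add((x, y))
    return list1, list2
```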
and S142, splitting the pixel blocks adhered to the dark region point set, corroding the connected region in the bright region point set list1, and if the number of the pixels of the connected region after corrosion is too small, moving the points in the connected region into the dark region point set list 2.
Specifically, after the points are classified, the bright region point set list1 is composed of several connected domains, that is, list1 ═ { area0, area1.. area }, where small connected domains of dark regions are also included, which now needs to be found and re-classified into the dark region point set list 2. The morphological corrosion can reduce the range of a target area, reduce the number of pixel points, eliminate area boundary points and separate connected areas which are adhered. Therefore, the small connected domain of the dark area can be distinguished by the number of pixels in the connected domain after corrosion. And corroding a connected region in the bright region point set list1, and splitting the pixel blocks adhered to the dark regions to obtain a new bright region point set and a new dark region point set.
The erosion of region A by structuring element B is written

A ⊖ B = { z | (B)_z ⊆ A }

that is, the set of positions z at which B, translated to z, fits entirely inside A.
Each connected domain in the bright-region point set list1 is scanned with the structuring element B as a window template and a step length of 1; if the part of B at the current position lies entirely within the bright-region point set list1, the point at that position is kept, otherwise it is deleted. After eroding the bright-region point set list1, a new bright-region point set with connected domains list1_Ero = {area_Ero_0, area_Ero_1, ...} is obtained. A pixel-count threshold th_numb is set; if the pixel count of a connected domain in list1_Ero is less than th_numb, the points of the original connected domain are moved from the bright-region point set list1 into the dark-region point set list2.
The morphological erosion of the bright-region point set list1 and the dark-region point set list2 can likewise be implemented as a program over the point sets.
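Step S142 can be sketched directly on point sets. The cross-shaped structuring element, the BFS component labeling, and the threshold value are assumptions.

```python
from collections import deque

STRUCT = ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1))  # assumed cross-shaped element B

def erode(points, struct=STRUCT):
    """A eroded by B: keep a point only if B placed there fits inside A."""
    points = set(points)
    return {(x, y) for (x, y) in points
            if all((x + dx, y + dy) in points for dx, dy in struct)}

def connected_components(points):
    """8-connected components of a point set, found by BFS."""
    points, comps = set(points), []
    while points:
        seed = points.pop()
        comp, queue = {seed}, deque([seed])
        while queue:
            x, y = queue.popleft()
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    q = (x + dx, y + dy)
                    if q in points:
                        points.discard(q)
                        comp.add(q)
                        queue.append(q)
        comps.append(comp)
    return comps

def demote_small_components(list1, list2, th_numb=1):
    """Step S142 sketch: bright components whose eroded remnant has
    fewer than th_numb pixels are moved into the dark set."""
    list1, list2 = set(list1), set(list2)
    for comp in connected_components(list1):
        if len(erode(comp)) < th_numb:
            list1 -= comp
            list2 |= comp
    return list1, list2
```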
S143, detecting burr pixels of the bright-region point set, and performing a dilation operation on the connected domains in the bright-region point set list1; the boundary of a dilated connected domain expands outward, and if a dilated pixel overlaps a point in the dark-region point set list2, the overlapping pixel is moved from list2 into the bright-region point set list1.
Specifically, the edge burr pixels of a bright-region laser line are judged to be discrete points during segmentation and added to the dark-region point set list2; these burr pixels must be found and moved into the corresponding connected domain of the bright-region point set list1. With the bright-region point set list1 and dark-region point set list2 updated in the previous step, a dilation operation is performed on the connected domains in list1: the boundary of each dilated connected domain expands outward, and if a newly added pixel overlaps the dark-region point set list2, the overlapping pixel is moved from list2 into the corresponding connected domain of list1. For example, if the point p is obtained by dilating area3 and belongs to the dark-region point set list2, then p is added to area3.
The dilation of region A by structuring element B is written

A ⊕ B = { z | (B̂)_z ∩ A ≠ ∅ }

where B̂ is the reflection of B; that is, the set of positions z at which the reflected B, translated to z, overlaps A.
Each connected domain in the bright-region point set list1 is scanned with the structuring element B as a window template and a step length of 1; if the intersection of B at the current position with the connected domain is not empty, the pixel at the current position is added to the dilation point set partition_list. After the scan is complete, the dilation point set is compared with the dark-region point set list2; if a point of the dilation point set also belongs to list2, the point is moved from list2 into the bright-region point set list1.
The dilation operation on the updated bright-region point set list1 and dark-region point set list2 can likewise be implemented as a program over the point sets.
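A point-set sketch of step S143; reusing a cross-shaped structuring element is an assumption (an 8-connected element would also be reasonable).

```python
def dilate(points, struct=((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1))):
    """A dilated by B: union of B translated to every point of A."""
    return {(x + dx, y + dy) for (x, y) in points for dx, dy in struct}

def absorb_burrs(list1, list2):
    """Step S143 sketch: dark-region points covered by the dilated
    bright set are edge burrs; move them from list2 into list1."""
    list1, list2 = set(list1), set(list2)
    overlap = dilate(list1) & list2
    return list1 | overlap, list2 - overlap
```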
s16, classifying laser segments;
and judging whether the bright area point set list1 and the dark area point set list2 of each image meet the standard or not, directly adopting the standard segment in the imaging image img3 of the laser line which needs to be repaired at present as the laser line, and carrying out next deviation correction on the nonstandard segment.
Referring to fig. 4, the laser segment classification specifically includes the following steps:
s161, classifying the bright area point set list1, calculating the width of each light band in the list1, marking the segment as a standard segment if the width is smaller than a width threshold value, and otherwise marking the segment as an over-wide segment;
specifically, a width threshold value th _ width is set, if the width of a certain section of the light band in the bright area point set list1 is greater than the width threshold value, the section of the connected domain is marked as an excessively wide section, and if the width is less than the threshold value, the section of the light band is marked as a standard section.
Calculating the light-band width: with the maximum y coordinate coordinate_yMax and minimum y coordinate coordinate_yMin of the current light-band area, the light-band width is width_area = coordinate_yMax - coordinate_yMin.
Setting a width threshold th _ width _ area, if the width of the optical tape is larger than the width threshold th _ width _ area, marking the optical tape as an over-wide section, otherwise marking the optical tape as a standard section.
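The width test of step S161 as a sketch; the default threshold value is illustrative:

```python
def classify_band(points, th_width_area=5):
    """Mark a bright-region light band as over-wide if its y extent
    exceeds th_width_area, otherwise standard (step S161 sketch)."""
    ys = [y for _, y in points]
    width_area = max(ys) - min(ys)
    return 'over-wide' if width_area > th_width_area else 'standard'
```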
S162, classifying the dark region point set list2, calculating the density of each segment point set in the list2, if the density is greater than a density threshold value, marking the segment as a standard segment, and otherwise, marking the segment as a sparse segment;
and setting a density threshold th _ density, if the density of a segment of the dark region point set list2 is less than the density threshold th _ density, marking the segment of the dark region point set as a sparse segment, and if the density is higher than the density threshold th _ density, marking the segment of the dark region point set as a standard segment.
Specifically, referring to fig. 5 and 6, the classification of the dark point set list2 includes the steps of:
s1621, dividing discrete points in the dark region point set list2 into a plurality of point sets seti, wherein one point set represents a light band, and i is a natural number.
S1622, sorting the discrete points in the dark region point set list2 from small to large according to x coordinates, and dividing the points with the distance in the x direction smaller than n pixels into the same point set.
At the same time, the maximum x coordinate coordinate_xMax, minimum x coordinate coordinate_xMin, maximum y coordinate coordinate_yMax, and minimum y coordinate coordinate_yMin of each point set are recorded. The sorted points are [p0(x0, y0), p1(x1, y1), p2(x2, y2), ..., pn(xn, yn)], with x coordinates [x0, x1, x2, ..., xn] and x0 < x1 < x2 < ... < xn-1 < xn.
Taking p0 as a starting point pstart, wherein the pstart is p0
(The grouping procedure is given as pseudocode images in the original publication; it scans the sorted points from pstart and starts a new point set whenever the x-distance to the previous point reaches n pixels.)
The dark-region point set can now be represented as list2 = {set0, set1, set2, ..., seti}.
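The grouping of steps S1621–S1622 can be sketched as follows; this is an illustrative Python sketch (the function name and the sample points are hypothetical, not part of the embodiment):

```python
def group_by_x_gap(points, n):
    """Split discrete points into light-band point sets: sort by x and start
    a new set whenever the x-distance to the previous point reaches n pixels."""
    pts = sorted(points, key=lambda p: p[0])
    sets = []
    current = [pts[0]]                      # pstart = p0
    for p in pts[1:]:
        if p[0] - current[-1][0] < n:       # x-distance smaller than n pixels
            current.append(p)               # same light band
        else:
            sets.append(current)            # gap reached: close the current set
            current = [p]
    sets.append(current)
    return sets                             # list2 = {set0, set1, ...}

bands = group_by_x_gap([(0, 5), (1, 6), (2, 5), (10, 7), (11, 8)], n=3)
```

With an x-gap threshold of n = 3, the five sample points split into two sets, matching the list2 = {set0, set1, ...} representation.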
S1623, calculating the density of the point set of each segment of the light band;
Density of the point set:
density = (number of points in the set) / band_area
band_area = band_length × band_width
band_length_set = coordinate_xMax - coordinate_xMin
band_width_set = coordinate_yMax - coordinate_yMin
where coordinate_xMax and coordinate_xMin are the maximum and minimum x coordinates of each point set, and coordinate_yMax and coordinate_yMin are the maximum and minimum y coordinates.
S1624, setting a density threshold th _ density, if the density of the current set is greater than the density threshold, marking the set as a standard segment, and otherwise, marking the set as a sparse segment.
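The bounding-box density computation and the classification of step S1624 can be sketched as below; a hypothetical Python sketch using the formulas above (the names are illustrative):

```python
def band_density(point_set):
    """density = point count / (band length * band width), from the bounding box."""
    xs = [p[0] for p in point_set]
    ys = [p[1] for p in point_set]
    band_length = max(xs) - min(xs)          # coordinate_xMax - coordinate_xMin
    band_width = max(ys) - min(ys)           # coordinate_yMax - coordinate_yMin
    area = max(band_length * band_width, 1)  # guard for degenerate bands
    return len(point_set) / area

def classify_dark_segment(point_set, th_density):
    """Standard segment if dense enough, otherwise sparse (step S1624)."""
    return "standard" if band_density(point_set) > th_density else "sparse"

d = band_density([(0, 0), (2, 1), (4, 2)])   # length 4, width 2 -> 3 / 8
```

The same three-point set is a standard segment under a low threshold and a sparse segment under a high one, which is exactly the branch S1624 describes.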
S1625, for img_{n+1}, the imaging image of the laser line currently to be repaired, i.e. the intermediate-exposure image, the center line is extracted from its standard segments and added to the final laser-line pixel point set laser_pixel.
In this embodiment, when n = 2, img_{n+1} is img3; the method for extracting the center line is described in detail in step S1812 below and is not repeated here.
img3 is an example of an imaging image of the laser line currently in need of repair in the embodiments of the present application.
S18, restoring the laser line
Referring to fig. 9, the laser line restoration specifically includes the following steps:
S181, restoring the sparse segments of img_{n+1}, the imaging image of the laser line currently to be repaired;
If sparse segments exist in img3, the gray threshold th_gray of img3 needs to be lowered so that more laser points appear in the sparse segments. Lowering the gray threshold, however, also introduces noise, and the newly appearing noise points must be constrained by the standard segments of the other exposure images. Fig. 7 shows the laser points that appear at th_gray = 128; when the gray threshold is lowered to th_gray = 80, noise points appear, as shown in fig. 8.
Referring to figs. 10 to 15, restoring the sparse segments of img_{n+1} specifically comprises the following steps:
S1811, extracting the center line of the standard segment in the dark-region point set list2 of img1 ~ imgn; if the dark-region point sets of multiple images are all standard segments, the standard segment whose point set has the higher density is used;
In this embodiment, when n = 2, the center line of the standard segment in the dark-region point set list2 of img2 or img1 is extracted; if the dark regions of both img2 and img1 are standard segments, the one with the higher point-set density is adopted.
S1812, extracting the center line of the light band;
For a two-dimensional image, the Hessian matrix describes the second-order derivatives of each point along the principal directions. For any point p(x, y) in the light-band point set seti, the Hessian matrix can be expressed as
H(x, y) = [ rxx  rxy ]
          [ rxy  ryy ]
where rxx denotes the second-order partial derivative of the point along the x direction, ryy the second-order partial derivative along the y direction, and rxy the mixed partial derivative.
The eigenvector (nx, ny) corresponding to the maximum eigenvalue λ of the Hessian matrix gives the normal direction of the light band.
With point (x0, y0) as the reference point, the sub-pixel center of the band is (px, py). Assume there is a coefficient t such that (px, py) = (x0 + t·nx, y0 + t·ny).
In the formula
t = -(nx*rx + ny*ry) / (nx^2*rxx + 2*nx*ny*rxy + ny^2*ryy)
where rx and ry are the first-order partial derivatives in the x and y directions respectively, rxx and ryy are the second-order partial derivatives, and rxy is the mixed partial derivative;
when | t | is less than 0.5, (x0, y0) is a point on the centerline.
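The sub-pixel center computation described above can be sketched as follows; a hypothetical Python sketch in which the derivative values rx, ry, rxx, rxy, ryy are assumed to be already available (in practice they come from convolving the image with Gaussian derivative kernels), and the band normal is taken from the largest-magnitude eigenvalue:

```python
import numpy as np

def steger_subpixel(x0, y0, rx, ry, rxx, rxy, ryy):
    """Return the sub-pixel band center (px, py) and the coefficient t.
    (x0, y0) is accepted as a center-line point when |t| < 0.5."""
    H = np.array([[rxx, rxy], [rxy, ryy]])           # Hessian at (x0, y0)
    eigvals, eigvecs = np.linalg.eigh(H)
    nx, ny = eigvecs[:, np.argmax(np.abs(eigvals))]  # band normal direction
    t = -(nx * rx + ny * ry) / (nx * nx * rxx + 2 * nx * ny * rxy + ny * ny * ryy)
    return (x0 + t * nx, y0 + t * ny), t

(px, py), t = steger_subpixel(0.0, 0.0, rx=0.0, ry=0.5, rxx=0.0, rxy=0.0, ryy=-2.0)
```

For this sample derivative set the normal is the y axis and |t| = 0.25 < 0.5, so (x0, y0) would be kept as a center-line point with sub-pixel center (0, 0.25).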
S1813, lowering the gray threshold of img_{n+1} to obtain a new laser point set cell_{n+1}, determining the sparse segments and calculating their point density;
Specifically, the gray threshold of img3 is lowered, a laser point set cell3 is obtained anew, the sparse segments are determined, and their point density is calculated;
After the gray threshold of img3, the imaging image of the laser line currently to be repaired, is lowered, points whose gray value exceeds the new threshold are added to the laser point set cell3; at this moment cell3 contains bright-area points, dark-area points and noise points. Points of cell3 near the center line (of either img2 or img1) are regarded as newly added points on the sparse segments of img3, and points far from the center line are regarded as invalid points.
S1814, calculating the distance from each point of the laser point set cell_{n+1} to every point on the center line, taking the minimum distance and comparing it with a distance threshold; if the minimum distance is smaller than the threshold, the point is added to the corresponding sparse segment set_{n+1};
Specifically, in this embodiment, the distance from each point of cell3 to every point on the center line is calculated, the minimum distance is obtained and compared with the distance threshold, and if the minimum distance is smaller than the threshold, the point is added to the corresponding sparse segment set3;
specifically, a distance threshold th _ distance is set, the distance between a point of the cell3 and each point on the centerline is calculated, the minimum distance is obtained, and if the minimum distance is smaller than the distance threshold th _ distance, the point is added to the corresponding sparse segment set 3.
S1815, recalculating the density of the sparse segment set_{n+1}; if the standard criterion is met, adding set_{n+1} to the standard point set;
Recalculate the density of the sparse segment set3, and add set3 to the standard point set cell_laser if the standard criterion is met; if the density does not reach the standard, the gray threshold is lowered again and steps S1813-S1815 are repeated.
The point density of the sparse segment is then calculated again; if it reaches the standard, the segment is marked as a standard segment and added to the standard point set cell_laser. The center line of the new standard segment is extracted and added to the final laser-line pixel point set laser_pixel.
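The noise constraint of steps S1813–S1815 can be sketched as follows; a hypothetical Python sketch of the step that keeps only the newly thresholded points lying near a reference center line (function and variable names are illustrative):

```python
def constrain_by_centerline(candidates, centerline, th_distance):
    """Keep candidate points whose minimum distance to the center line is below
    th_distance; the rest are treated as noise introduced by lowering th_gray."""
    kept = []
    for (x, y) in candidates:
        d_min = min(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
                    for (cx, cy) in centerline)
        if d_min < th_distance:
            kept.append((x, y))       # valid point for the sparse segment
    return kept

midline = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
new_points = constrain_by_centerline([(1.0, 0.5), (1.0, 5.0)], midline, th_distance=1.0)
```

Of the two candidates, only the one within the distance threshold of the center line is added to the sparse segment; the far one is rejected as noise.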
S182, adjusting the over-wide segments of img_{n+1}, the imaging image of the laser line currently to be repaired;
When n = 2, img_{n+1} is img3; if over-wide segments exist in img3, they are adjusted by means of the bright-area standard segment of img4 or img5.
The pixels of an over-wide segment are essentially over-exposed, so an accurate light-band center line cannot be obtained from the gray-value variation. Adjusting the over-wide segments of img3, the imaging image of the laser line currently to be repaired, specifically comprises the following steps:
S1821, selecting the bright-area standard segment in img_{n+2} ~ img_{2n+1} as the reference segment; if the bright areas of multiple images are all standard segments, the one with the thinner light band is selected as the reference segment;
Specifically, in this embodiment, the bright-area standard segment in img4 or img5 is selected as the reference segment, and if both are standard segments, the one with the thinner light band is selected as the reference standard segment.
S1822, extracting the center line line_ref of the reference standard segment (e.g. the black line in the middle of the white laser line in fig. 12); the center line of the over-wide segment is corrected by the center line of the standard segment. The center-line extraction method is the same as in step S1812 and is not repeated here.
S1823, setting the gray value of the reference segment points to 255 and calculating the center line line_bias again;
S1824, calculating the offset distance bias_distance from the reference-segment center line line_ref to line_bias;
If the main direction of the laser line on the image is horizontal (i.e. the angle between the laser line and the x axis of the image is smaller than the angle between the laser line and the y axis), the mapping point of each line_ref point in line_bias can be obtained. Using the x coordinate of a line_ref point as reference, the line_bias point with the same x coordinate is found, and the difference between the two y coordinates is taken as bias_distance. If no point with the same x coordinate exists in line_bias, the line_ref point is matched to the line_bias point whose x coordinate is closest.
line_ref = {p_ref_0, p_ref_1, ..., p_ref_i}
line_bias = {p_bias_0, p_bias_1, ..., p_bias_i}
if (p_bias_i.x == p_ref_i.x)
{
    bias_distance = p_ref_i.y - p_bias_i.y
}
else
{
    // if line_bias has no point with the same x coordinate as p_ref_i,
    // suppose p_bias_(i-1).x is closest to p_ref_i.x
    bias_distance = p_ref_i.y - p_bias_(i-1).y
}
p_ref_i is a point on the center line line_ref, and p_bias_i is a point on the center line line_bias.
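The matching rule above can be sketched as follows; a hypothetical Python sketch computing the per-point y-offset between line_ref and line_bias (the names are illustrative):

```python
def centerline_bias(line_ref, line_bias):
    """For each line_ref point, use the line_bias point with the same x
    coordinate, or failing that the closest x, and record the y difference."""
    by_x = {p[0]: p for p in line_bias}
    offsets = []
    for (x, y) in line_ref:
        if x in by_x:
            offsets.append(y - by_x[x][1])
        else:
            nearest = min(line_bias, key=lambda q: abs(q[0] - x))  # closest x
            offsets.append(y - nearest[1])
    return offsets

bias_distance = centerline_bias([(0, 10), (1, 12), (2, 11)], [(0, 9), (2, 10)])
```

For the middle line_ref point there is no line_bias point at x = 1, so the closest x is used, as in the else branch above.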
The center line line3 of the over-wide segment of img3 is found; the extraction method is the same as the center-line extraction method above and is not repeated here.
S1825, correcting the center line line_{n+1} with the offset distance bias_distance to obtain a new center line line_{n+1};
Specifically, the center line is corrected with the offset distance bias_distance obtained in the previous step. When n = 2, line_{n+1} is line3: each point of line3 is shifted by the corresponding bias_distance offset, yielding the new center line line3. The new line3 points are added to the final laser-line pixel point set laser_pixel.
line3 = {p0, p1, p2, ..., pn}
bias_distance = {bias0, bias1, bias2, ..., biasn}
for (int i = 0; i < line3.size(); ++i)
{
    pi.y = pi.y + biasi;   // bias_distance is a y-coordinate difference, so it is applied to y
}
S20, converting the laser line into point cloud through a coordinate conversion relation, and generating point cloud information;
Let the image coordinates of a point on the laser center line obtained in the previous steps be (u, v, z), where z is the known distance from the laser plane to the camera.
Assume the transformation transform_C2R = [R, T] from the camera coordinate system to the robot coordinate system has been obtained by calibration. The laser line can then be converted into a point cloud, i.e.
[X_R, Y_R, Z_R]^T = R · [u, v, z]^T + T
Therefore, the laser line can be converted into point cloud through the formula conversion.
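The conversion of step S20 can be sketched as follows; a hypothetical Python sketch that applies a calibrated rigid transform [R, T] and, for simplicity, treats (u, v, z) directly as camera-frame coordinates (a real system would first back-project the pixel coordinates through the camera intrinsics):

```python
import numpy as np

def to_point_cloud(points_uvz, R, T):
    """Map camera-frame points to the robot frame: P_R = R @ P_C + T."""
    P = np.asarray(points_uvz, dtype=float)   # shape (N, 3)
    return (R @ P.T).T + T

R = np.eye(3)                  # identity rotation (calibration stand-in)
T = np.array([1.0, 0.0, 0.0])  # translation (calibration stand-in)
cloud = to_point_cloud([(0.0, 0.0, 2.0)], R, T)
```

With the stand-in calibration, the single center-line point maps to (1, 0, 2) in the robot frame.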
In the method of the embodiment of the application, clear laser-line images of the dark area and the bright area are obtained by shooting the laser line with three exposure times: long, medium and short. The laser line is divided into a dark-area point set and a bright-area point set through connected domains; the light band is then divided into sparse segments, standard segments and over-wide segments through a density threshold and a width threshold. Denser laser points are obtained by lowering the gray threshold, and the noise is constrained with the center line of a higher-exposure standard segment; the center line of the over-wide segment is corrected with the center line of the standard segment, and a point cloud is generated through the coordinate conversion relation. By exposing and imaging the laser line multiple times, the method can cope with working environments with more complex illumination; by classifying the laser line and correcting it separately by segment and region, more accurate laser center-line coordinates, and therefore a more accurate point cloud, are obtained.
Example two:
The embodiment of the invention provides a laser line point cloud generating device, which uses the laser line point cloud generating method of the first embodiment to generate the laser line point cloud. By exposing and imaging the laser line multiple times, the device can cope with working environments with more complex illumination; by classifying the laser line and correcting it separately by segment and region, more accurate laser center-line coordinates, and therefore a more accurate point cloud, are obtained.
Referring to fig. 16 and 17, the laser line point cloud generating apparatus according to the embodiment of the present invention includes a laser line obtaining module 301, a pixel point obtaining module 302, a laser line segmenting module 303, a laser segment classifying module 304, a laser line restoring module 305, and a point cloud generating module 306.
The laser line obtaining module 301 is configured to sequentially expose at least 2N +1 shot laser lines from high to low by using three types of exposure time, namely long, medium, and short exposure times, where N is a natural number.
The pixel point obtaining module 302 is configured to extract a laser line pixel point of the obtained image.
The laser line segmenting module 303 is configured to segment the laser line into a dark area point set and a bright area point set.
The laser line segmentation module 303 includes a point set classification unit 3031, a new point set acquisition unit 3032, and an expansion operation unit 3033.
The point set classification unit 3031 is configured to classify a bright area point set list1 and a dark area point set list 2;
Specifically, if a laser point exists in the eight-neighborhood of the current point p(x, y), the point is added to the bright-area point set list1; otherwise it is added to the dark-area point set list2.
And the new point set acquisition unit 3032 is configured to perform morphological corrosion on the dark area point set, and split the pixel block adhered to the dark area to obtain a new bright area point set and a new dark area point set.
Specifically, after the points are classified by the point set classification unit 3031, the bright-area point set list1 is composed of a plurality of connected domains, list1 = {area0, area1, ..., areai}. Morphological erosion can reduce the extent of a target area, reduce the number of pixel points, eliminate area boundary points and separate connected regions that are stuck together. Therefore, small connected domains belonging to the dark region can be distinguished by the number of pixels in the eroded connected domain.
The erosion of region A by structuring element B is denoted
A ⊖ B = { z | (B)z ⊆ A }
i.e. the set of positions z at which B, translated to z, is entirely contained in A.
Each connected domain in list1 is scanned with the structuring element B as a window template and a step of 1: if, at the current position, the part of B overlapping the connected domain belongs entirely to list1, the point at that position is kept; otherwise it is deleted. After the bright-area point set list1 is eroded, a new bright-area point set and connected domains list1_Ero = {area_Ero_0, area_Ero_1, ...} are obtained. A pixel-number threshold th_numb is set; if the number of pixels of a connected domain of list1_Ero is less than th_numb, the points of the original connected domain are moved from the bright-area point set list1 to the dark-area point set list2.
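The erosion scan described above can be sketched as follows; a hypothetical NumPy sketch of binary erosion with a window template B and step 1 (a library morphology routine would normally be used instead):

```python
import numpy as np

def erode(mask, B):
    """Binary erosion A ⊖ B: keep a point only when B, centered there,
    lies entirely inside the foreground of mask."""
    kh, kw = B.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(mask, ((ph, ph), (pw, pw)))
    out = np.zeros_like(mask)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            window = padded[i:i + kh, j:j + kw]
            out[i, j] = np.all(window[B == 1] == 1)  # B must fit completely
    return out

A = np.zeros((5, 5), dtype=int)
A[1:4, 1:4] = 1                                   # a 3x3 connected domain
eroded = erode(A, np.ones((3, 3), dtype=int))
```

After erosion only the interior point of the small domain survives, which is why a small connected domain can fall below the pixel-number threshold th_numb and be reassigned to the dark region.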
The expansion operation unit 3033 is configured to perform a dilation operation on the connected domains in list1 using the updated bright-area point set list1 and dark-area point set list2; the boundary of a dilated connected domain expands outward, and if a newly added dilated pixel overlaps a point in list2, the overlapping pixel is moved from the dark-area point set list2 into the bright-area point set list1.
Specifically, during segmentation the edge burr pixels of the bright-area laser line are judged to be discrete points and added to list2; these burr pixels are now found and moved into the corresponding connected domain of the bright-area list1. With the bright-area list1 and dark-area list2 updated in the previous step, a dilation operation is performed on the connected domains in list1. After dilation the boundary of a connected domain expands outward, and if a newly added pixel overlaps list2, the overlapping pixel is moved from list2 to list1. For example, if point p is obtained by dilating area3 and belongs to list2, then p is added to area3.
The dilation of region A by structuring element B is denoted
A ⊕ B = { z | (B)z ∩ A ≠ ∅ }
i.e. the set of positions z at which B, translated to z, overlaps A.
Each connected domain in the bright-area point set list1 is scanned with the structuring element B as a window template and a step of 1; if the intersection of B at the current position with the connected domain is not empty, the pixel at the current position is added to the dilation point set partition_list. After the scan is complete, the dilation point set is compared with the dark-area point set list2; if a point of the dilation point set also belongs to list2, the point is moved from list2 to the bright-area point set list1.
The laser segment classification module 304 is configured to classify laser segments. The laser segment classification module 304 includes a bright area classification unit 3041 and a dark area classification unit 3042.
The bright area classification unit 3041 is configured to classify the bright area point set list1, and calculate the width of each light band in the list1, and if the width is smaller than a width threshold, mark the segment as a standard segment, otherwise, mark the segment as an excessively wide segment.
Specifically, a width threshold th_width is set. If the width of a light-band segment in the bright-area point set list1 is greater than the threshold, the connected domain of that segment is marked as an over-wide segment; if the width is less than the threshold, it is marked as a standard segment.
Calculating the width of the light band:
Take the maximum y coordinate coordinate_yMax and the minimum y coordinate coordinate_yMin of the current light-band area; the band width is width_area = coordinate_yMax - coordinate_yMin.
Set a width threshold th_width_area; if the band width is greater than th_width_area, the band is marked as an over-wide segment, otherwise it is marked as a standard segment.
The dark region classification unit 3042 is configured to classify the set of dark regions list2, and calculate the density of each segment point set in the list2, and if the density is greater than the density threshold, mark the segment as a standard segment, otherwise, mark the segment as a sparse segment.
Specifically, a density threshold th _ density is set, if the density of a segment of the dark region point set list2 is less than the density threshold th _ density, the segment of the dark region point set is marked as a sparse segment, and if the density of the segment of the dark region point set is higher than the density threshold th _ density, the segment of the dark region point set is marked as a standard segment.
The laser line restoration module 305 is used to restore the laser line.
The laser line restoration module 305 includes a sparse segment restoration unit 3051, an excessively wide segment adjustment unit 3052, and a centerline extraction unit 3053.
The sparse segment restoring unit 3051 is used for restoring the sparse segments of img_{n+1}, the imaging image of the laser line currently in need of repair;
the method specifically comprises the following steps:
Extracting the center line of the standard segment in the dark-region point set list2 of img1 ~ imgn; if the dark-region point sets of multiple images are all standard segments, the standard segment whose point set has the higher density is used;
In this embodiment, when n = 2, the center line of the standard segment in the dark-region point set list2 of img2 or img1 is extracted first; if the dark regions of both img2 and img1 are standard segments, the one with the higher point-set density is used.
Extracting a light band central line;
For a two-dimensional image, the Hessian matrix describes the second-order derivatives of each point along the principal directions. For any point p(x, y) in the light-band point set seti, the Hessian matrix can be expressed as
H(x, y) = [ rxx  rxy ]
          [ rxy  ryy ]
where rxx denotes the second-order partial derivative of the point along the x direction, and the other terms are defined similarly. The eigenvector (nx, ny) corresponding to the maximum eigenvalue λ of the Hessian matrix gives the normal direction of the light band.
With point (x0, y0) as the reference point, the sub-pixel center of the band is (px, py). Assume there is a coefficient t such that (px, py) = (x0 + t·nx, y0 + t·ny).
In the formula
t = -(nx*rx + ny*ry) / (nx^2*rxx + 2*nx*ny*rxy + ny^2*ryy)
where rx and ry are the first-order partial derivatives in the x and y directions respectively, rxx and ryy are the second-order partial derivatives, and rxy is the mixed partial derivative;
when | t | is less than 0.5, (x0, y0) is a point on the centerline.
Lowering the gray threshold of img_{n+1} to obtain a new laser point set cell_{n+1}, determining the sparse segments and calculating their point density;
In this embodiment n = 2; that is, after the gray threshold of img3, the imaging image of the laser line currently to be repaired, is lowered, points whose gray value exceeds the new threshold are added to the point set cell3, and at this moment cell3 contains bright-area points, dark-area points and noise points. Points of cell3 near the center line (of either img2 or img1) are regarded as newly added points on the sparse segments of img3, and points far from the center line are regarded as invalid points.
Calculating the distance from each point of the laser point set cell_{n+1} to every point on the center line, taking the minimum distance and comparing it with a distance threshold; if the minimum distance is smaller than the threshold, the point is added to the corresponding sparse segment set_{n+1};
Specifically, a distance threshold th _ distance is set, the distance between a point of the cell3 and each point on the centerline is calculated, the minimum distance is obtained, and if the minimum distance is smaller than the distance threshold th _ distance, the point is added to the corresponding sparse segment.
The point density of the sparse segment is then calculated again, and if it reaches the standard the segment is marked as a standard segment. The center line of the new standard segment is extracted and added to the final laser pixel point set.
The over-wide section adjusting unit 3052 is configured to adjust an over-wide section of the imaging image img3 of the laser line currently needing to be repaired;
The adjustment specifically comprises the following steps: selecting the bright-area standard segment in img_{n+2} ~ img_{2n+1} as the reference segment; if the bright areas of multiple images are all standard segments, the one with the thinner light band is selected as the reference segment;
in this embodiment, if the img3 has too wide a segment, the adjustment is performed by using the bright area standard segment of img4 or img5 as the reference segment.
The pixels of an over-wide segment are essentially over-exposed, so an accurate light-band center line cannot be obtained from the gray-value variation. The bright-area standard segment in img4 or img5 is selected as the reference segment; if both are standard segments, the one with the thinner light band is chosen.
The center-line extraction unit 3053 extracts the center line line_ref of the reference segment; the center line of the over-wide segment is corrected by the center line of the standard segment.
Specifically, the gray value of the reference segment points is set to 255, and the center line line_bias is obtained again (the lower yellow dotted line).
The offset distance bias_distance (the blue dotted line) from the reference-segment center line line_ref to line_bias is determined. If the main direction of the laser line on the image is horizontal (i.e. the angle between the laser line and the x axis of the image is smaller than the angle between the laser line and the y axis), the mapping point of each line_ref point in line_bias can be obtained. Using the x coordinate of a line_ref point as reference, the line_bias point with the same x coordinate is found and the difference of the two y coordinates is taken as bias_distance; if no line_bias point with the same x coordinate exists, the line_ref point is matched to the line_bias point whose x coordinate is closest.
And a point cloud generating module 306 for converting the laser line into a point cloud through a coordinate conversion relationship, and generating point cloud information.
In the device of the embodiment of the application, the laser line acquisition module 301 shoots the laser line with three exposure times (long, medium and short), and the pixel point acquisition module 302 acquires clear laser-line imaging pixel points of the dark area and the bright area respectively; the laser line segmentation module 303 divides the laser line into a dark-area point set and a bright-area point set through connected domains; the laser segment classification module 304 then divides the light band into sparse segments, standard segments and over-wide segments through a density threshold and a width threshold; denser laser points are obtained by lowering the gray threshold, and the noise is constrained with the center line of a higher-exposure standard segment; the center line of the over-wide segment is corrected with the center line of the standard segment, and the point cloud generating module 306 generates a point cloud through the coordinate conversion relation. By exposing and imaging the laser line multiple times, the device can cope with working environments with more complex illumination; by classifying the laser line and correcting it separately by segment and region, more accurate laser center-line coordinates, and therefore a more accurate point cloud, are obtained.
Example three:
According to an embodiment of the present invention, a computer-readable storage medium is provided, on which a computer program is stored; when executed by a processor, the computer program implements the steps of the laser line point cloud generating method above. The specific steps are as described in the first embodiment and are not repeated here.
The memory in the present embodiment may be used to store software programs as well as various data. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function, and the like, and the data storage area may store data created according to the use of the mobile phone, and the like. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
According to an example of this embodiment, all or part of the processes in the methods of the embodiments described above may be implemented by a computer program to instruct related hardware, where the program may be stored in a computer readable storage medium, and in this embodiment of the present invention, the program may be stored in the storage medium of a computer system and executed by at least one processor in the computer system, so as to implement the processes including the embodiments of the methods described above. The storage medium includes, but is not limited to, a magnetic disk, a flash disk, an optical disk, a Read-Only Memory (ROM), and the like.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the above embodiment method can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better embodiment. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes several instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (12)

1. A laser line point cloud generating method is characterized by comprising the following steps:
sequentially exposing at least 2N +1 shot laser lines from high to low by using three kinds of long, medium and short exposure time; n is a natural number;
extracting laser line pixel points of the acquired images;
segmenting the laser line into a dark area point set and a bright area point set;
classifying the laser sections;
restoring the laser line;
and converting the laser line into point cloud through a coordinate conversion relation to generate point cloud information.
2. The laser line point cloud generation method of claim 1, wherein the step of segmenting the laser line into a dark region point set and a bright region point set comprises:
traversing the laser point set cell_i; if the current point p(x, y) is connected in the eight-neighborhood sense with any point in the point set, adding the point to the bright-area point set list1, otherwise adding it to the dark-area point set list2;
corroding the connected domain in the bright region point set list1, and if the number of pixels of the corroded connected domain is too small, moving the points in the connected domain into the dark region point set list 2;
expanding the connected domain in the bright area point set list 1; the boundaries of the dilated connected domain expand outward, and if a dilated pixel overlaps a point in the dark region set list2, the overlapping pixel is moved from the dark region set list2 into the bright region set list 1.
3. The laser line point cloud generation method of claim 1, wherein the classifying the laser segments comprises:
classifying the bright region point set list1: calculating the width of each light band in list1; if the width is smaller than a width threshold, marking the segment as a standard segment, and otherwise marking it as an over-wide segment;
and classifying the dark region point set list2: calculating the point density of each segment's point set in list2; if the density is greater than a density threshold, marking the segment as a standard segment, and otherwise marking it as a sparse segment.
4. The laser line point cloud generation method of claim 3, wherein the dark region point set list2 classification comprises the steps of:
dividing the discrete points in the dark region point set list2 into a plurality of point sets set_i, wherein one point set represents one light band;
sorting the discrete points in list2 from small to large by x coordinate, and dividing points whose x-direction distance is smaller than n pixels into the same point set;
calculating the density of the point set of each segment of the light band;
density of the point set:
density_set = N_set / (band_length_set × band_width_set)
band_length_set = set_xMax − set_xMin
band_width_set = set_yMax − set_yMin
wherein N_set is the number of points in the set, set_xMax and set_xMin are the maximum and minimum x coordinates of each point set, and set_yMax and set_yMin are the maximum and minimum y coordinates;
setting a density threshold th _ density, if the density of the current set is greater than the density threshold, marking the set as a standard segment, and otherwise, marking the set as a sparse segment;
the imaged image of the laser line currently needing repair is img_{n+1}, namely the intermediate-exposure image; extracting a centerline from its standard segments and adding the centerline to the final laser line pixel point set.
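The band splitting and density classification of claims 3-4 can be sketched as follows. The gap threshold `n` and density threshold `th_density` are the patent's unspecified parameters, given illustrative values here:

```python
def split_into_bands(points, n=3):
    """Sort discrete points by x and start a new band wherever the x-gap
    to the previous point is >= n pixels (claim 4; n=3 is illustrative)."""
    pts = sorted(points)
    bands, cur = [], [pts[0]]
    for p in pts[1:]:
        if p[0] - cur[-1][0] < n:
            cur.append(p)
        else:
            bands.append(cur)
            cur = [p]
    bands.append(cur)
    return bands

def band_density(point_set):
    """density = point count / (band length * band width), with length
    and width taken from the set's x/y extents as in claim 4."""
    xs = [p[0] for p in point_set]
    ys = [p[1] for p in point_set]
    area = max((max(xs) - min(xs)) * (max(ys) - min(ys)), 1)  # avoid /0
    return len(point_set) / area

def classify_dark_segments(segments, th_density=0.5):
    """Mark each dark-region segment 'standard' or 'sparse' (claim 3)."""
    return ['standard' if band_density(s) > th_density else 'sparse'
            for s in segments]
```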
5. The laser line point cloud generation method of claim 1, wherein the restoring the laser line comprises:
restoring the sparse segments of the imaged image img_{n+1} of the laser line currently needing repair;
and adjusting the over-wide segments of the imaged image img_{n+1} of the laser line currently needing repair.
6. The laser line point cloud generation method of claim 5, wherein the restoring the sparse segments of the imaged image img_{n+1} of the laser line currently needing repair comprises:
extracting the centerline of the standard segments in the dark region point set list2 of img_1~img_n; if the dark region point sets of multiple images are all standard segments, taking the segment whose point set has the higher density;
extracting the light-band centerline line;
lowering the gray threshold of img_{n+1} to obtain the laser point set cell_{n+1} again, determining the sparse segments, and calculating their point density;
calculating the distance from each point in the laser point set cell_{n+1} to each point on the centerline, comparing the minimum distance with a distance threshold, and if the minimum distance is smaller than the distance threshold, adding the point to the corresponding sparse segment set_{n+1};
recalculating the point density of the sparse segment set_{n+1}; if it reaches the standard, adding set_{n+1} to the standard point set;
and if the density does not reach the standard, lowering the gray threshold again and repeating the steps of determining the sparse segments and calculating their point density.
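One pass of the claim-6 absorption step — adding re-extracted points that lie close to the reference centerline into the sparse segment — might look like this. The threshold values are illustrative, not from the patent:

```python
import math

def restore_sparse_segment(candidates, centerline, sparse_seg, th_dist=2.0):
    """One pass of the claim-6 loop: points re-extracted with a lowered
    gray threshold (`candidates`) are absorbed into the sparse segment
    when their minimum distance to the reference centerline is below
    th_dist (an illustrative threshold)."""
    for p in candidates:
        d = min(math.dist(p, q) for q in centerline)  # brute-force minimum
        if d < th_dist:
            sparse_seg.append(p)
    return sparse_seg
```

The caller would then recompute the segment's density and, if still sparse, lower the gray threshold and repeat, as the claim describes.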
7. The laser line point cloud generation method of claim 5, wherein the adjusting the over-wide segments of the imaged image img_{n+1} of the laser line currently needing repair comprises:
selecting a bright region standard segment in img_{n+2}~img_{2n+1} as a reference segment; if the bright regions of multiple images are all standard segments, selecting the one with the thinner light band as the reference segment;
extracting the centerline line_ref of the reference standard segment, the centerline of the over-wide segment being corrected by means of the standard segment's centerline;
setting the gray values of the reference segment points to 255 and solving the centerline line_bias again;
determining the offset distance bias_distance from the reference segment centerline line_ref to line_bias;
and correcting the centerline line_{n+1} with the offset distance bias_distance to obtain a new centerline line_{n+1}.
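The claim-7 offset correction can be sketched as below. The patent does not fix the axis or direction of the bias, so this sketch assumes a mean y-offset between the two centerlines; that is an illustrative simplification, not the patent's definition:

```python
def correct_wide_centerline(line_ref, line_bias, line_wide):
    """Claim-7 style correction sketch: the offset between the reference
    centerline and the saturated (gray = 255) centerline estimates the
    bias introduced by overexposure; subtracting it shifts the over-wide
    segment's centerline. Assumes a pure y-offset (an illustration)."""
    bias = (sum(y for _, y in line_bias) / len(line_bias)
            - sum(y for _, y in line_ref) / len(line_ref))
    return [(x, y - bias) for x, y in line_wide]
```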
8. The laser line point cloud generation method of claim 6, wherein the extracting the light-band centerline line comprises:
for a two-dimensional image, the Hessian matrix describes the second-order derivatives of each point along the principal directions; for any point p(x, y) in the light-band point set set_i, the Hessian matrix can be represented as

H(x, y) = [ rxx  rxy ]
          [ rxy  ryy ]

wherein rxx denotes the second-order partial derivative of the point along the X direction and ryy the second-order partial derivative along the Y direction;
the eigenvector (nx, ny) corresponding to the maximum eigenvalue λ of the Hessian matrix is the normal direction of the light band;
with point (x0, y0) as the reference point, the sub-pixel center of the light band is (px, py); assuming there is a coefficient t such that (px, py) = (x0 + t·nx, y0 + t·ny), then

t = −(rx·nx + ry·ny) / (rxx·nx² + 2·rxy·nx·ny + ryy·ny²)

wherein rx and ry are the first-order partial derivatives in the x and y directions respectively, rxx and ryy the second-order partial derivatives, and rxy the mixed partial derivative;
when |t| < 0.5, (x0, y0) is a point on the centerline.
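A minimal numerical rendering of the claim-8 Hessian test. It uses central differences in place of the Gaussian derivatives a production implementation (e.g. Steger's method) would use, and takes the eigenvalue of largest magnitude as "maximum" — both are assumptions of this sketch:

```python
import numpy as np

def steger_subpixel(img, x0, y0):
    """Sub-pixel line-center test at pixel (x0, y0) on a float image.

    Returns ((px, py), on_centerline), where on_centerline is True when
    |t| < 0.5, i.e. the sub-pixel center falls inside this pixel.
    Derivatives use central differences (a simplification)."""
    rx  = (img[y0, x0 + 1] - img[y0, x0 - 1]) / 2.0
    ry  = (img[y0 + 1, x0] - img[y0 - 1, x0]) / 2.0
    rxx = img[y0, x0 + 1] - 2.0 * img[y0, x0] + img[y0, x0 - 1]
    ryy = img[y0 + 1, x0] - 2.0 * img[y0, x0] + img[y0 - 1, x0]
    rxy = (img[y0 + 1, x0 + 1] - img[y0 + 1, x0 - 1]
           - img[y0 - 1, x0 + 1] + img[y0 - 1, x0 - 1]) / 4.0
    H = np.array([[rxx, rxy], [rxy, ryy]])
    evals, evecs = np.linalg.eigh(H)
    # band normal: eigenvector of the eigenvalue with largest magnitude
    nx, ny = evecs[:, np.argmax(np.abs(evals))]
    t = -(rx * nx + ry * ny) / (rxx * nx * nx
                                + 2.0 * rxy * nx * ny + ryy * ny * ny)
    return (x0 + t * nx, y0 + t * ny), abs(t) < 0.5
```

For a horizontal Gaussian-profile light band centered at y = 3.2, evaluating at the pixel (3, 3) recovers a sub-pixel center close to the true ridge position.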
9. A laser line point cloud generating apparatus, the apparatus comprising: the system comprises a laser line acquisition module, a pixel point acquisition module, a laser line segmentation module, a laser segment classification module, a laser line restoration module and a point cloud generation module;
the laser line acquisition module is used for capturing at least 2N+1 images of the laser line by exposing sequentially from long to short with three kinds of exposure time (long, medium and short); N is a natural number;
the laser line segmentation module is used for segmenting the laser line into a dark area point set and a bright area point set;
the laser segment classification module is used for classifying the laser segments;
the laser line restoration module is used for restoring a laser line;
the point cloud generating module is used for converting the laser line into point cloud through a coordinate conversion relation to generate point cloud information.
10. The laser line point cloud generation apparatus of claim 9, wherein the laser line segmentation module comprises a point set classification unit, a new point set acquisition unit, and a dilation operation unit;
the point set classification unit is used for classifying the bright region point set list1 and the dark region point set list2;
the new point set acquisition unit is used for performing morphological erosion on the dark region point set, splitting the adhered pixel blocks of the dark regions, and obtaining a new bright region point set and a new dark region point set;
the dilation operation unit is used for performing a dilation operation on the connected components in the bright region point set list1 against the updated list1 and dark region point set list2; the boundaries of the dilated components expand outward, and if a newly added dilated pixel overlaps a point in list2, the overlapping pixel is moved from list2 into list1.
11. The laser line point cloud generation apparatus of claim 9, wherein the laser line restoration module comprises a sparse segment restoration unit, an over-wide segment adjustment unit, and a centerline extraction unit;
the sparse segment restoration unit is used for restoring the sparse segments of the imaged image img_{n+1} of the laser line currently needing repair;
the over-wide segment adjustment unit is used for adjusting the over-wide segments of the imaged image img_{n+1} of the laser line currently needing repair;
the centerline extraction unit is used for extracting the centerline line_ref of the reference segment, the centerline of the over-wide segment being corrected by means of the standard segment's centerline.
12. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium and, when executed by a processor, performs the steps of the method according to any one of claims 1 to 8.
CN202110035554.5A 2021-01-12 Laser line point cloud generation method and device and computer readable storage medium Active CN112819877B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110035554.5A CN112819877B (en) 2021-01-12 Laser line point cloud generation method and device and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN112819877A true CN112819877A (en) 2021-05-18
CN112819877B CN112819877B (en) 2024-07-09


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115512099A (en) * 2022-06-10 2022-12-23 探维科技(北京)有限公司 Laser point cloud data processing method and device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102013110580A1 (en) * 2013-09-24 2015-03-26 Faro Technologies, Inc. Method for optically scanning and measuring a scene
US20180143320A1 (en) * 2016-11-18 2018-05-24 Robert Bosch Start-Up Platform North America, LLC, Series 1 Sensing system and method
CN108307675A (en) * 2015-04-19 2018-07-20 快图凯曼有限公司 More baseline camera array system architectures of depth enhancing in being applied for VR/AR
US20190098233A1 (en) * 2017-09-28 2019-03-28 Waymo Llc Synchronized Spinning LIDAR and Rolling Shutter Camera System
CN110286388A (en) * 2016-09-20 2019-09-27 创新科技有限公司 Laser radar system and the method and medium for using its detection object
US20200018869A1 (en) * 2018-07-16 2020-01-16 Faro Technologies, Inc. Laser scanner with enhanced dynamic range imaging
CN111256587A (en) * 2020-01-20 2020-06-09 南昌航空大学 High-reflectivity surface three-dimensional measurement method based on double-line structured light scanning
US20200394812A1 (en) * 2019-06-11 2020-12-17 Cognex Corporation System and method for refining dimensions of a generally cuboidal 3d object imaged by 3d vision system and controls for the same


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
MAHMOUD AHMED et al.: "Comparison of Point-Cloud Acquisition from Laser-Scanning and Photogrammetry Based on Field Experimentation", 3rd International/9th Construction Specialty Conference, 30 June 2011 (2011-06-30), pages 1-11 *
MAOLIN CHEN et al.: "Classification of Terrestrial Laser Scanning Data With Density-Adaptive Geometric Features", IEEE Geoscience and Remote Sensing Letters, vol. 15, no. 11, 15 August 2018 (2018-08-15), page 1795, XP011698968, DOI: 10.1109/LGRS.2018.2860589 *
ZHANG Xuedong: "Research on preparing holographic plates with blazed gratings by femtosecond laser to realize composite vortex beams", China Master's Theses Full-text Database, Information Science and Technology, 15 October 2018 (2018-10-15), pages 135-31 *
LI Taotao: "Research on high-precision 3D information extraction with multi-view line structured light", China Doctoral Dissertations Full-text Database, Information Science and Technology, 15 December 2018 (2018-12-15), pages 138-61 *


Similar Documents

Publication Publication Date Title
CN109785291B (en) Lane line self-adaptive detection method
CN110020692B (en) Handwriting separation and positioning method based on print template
US8170368B2 (en) Correcting device and method for perspective transformed document images
US20080232715A1 (en) Image processing apparatus
CN112508015A (en) Nameplate identification method, computer equipment and storage medium
CN110717872B (en) Method and system for extracting characteristic points of V-shaped welding seam image under laser-assisted positioning
Zhang et al. A unified framework for document restoration using inpainting and shape-from-shading
CN110674815A (en) Invoice image distortion correction method based on deep learning key point detection
CN110400278B (en) Full-automatic correction method, device and equipment for image color and geometric distortion
CN110647795A (en) Form recognition method
CN108257155B (en) Extended target stable tracking point extraction method based on local and global coupling
CN111784587B (en) Invoice photo position correction method based on deep learning network
CN111553845B (en) Quick image stitching method based on optimized three-dimensional reconstruction
CN111260675B (en) High-precision extraction method and system for image real boundary
WO2023207064A1 (en) Maskrcnn water seepage detection method and system based on weak light compensation
CN115331245A (en) Table structure identification method based on image instance segmentation
CN113065396A (en) Automatic filing processing system and method for scanned archive image based on deep learning
Lu et al. A shadow removal method for tesseract text recognition
CN112819877A (en) Laser line point cloud generating method and device and computer readable storage medium
CN112819877B (en) Laser line point cloud generation method and device and computer readable storage medium
CN112712058A (en) Character recognition and extraction method
CN111105418B (en) High-precision image segmentation method for rectangular targets in image
CN116758266A (en) Reading method of pointer type instrument
CN111126382B (en) Bill correction method based on key point positioning for OCR (optical character recognition)
Pertuz et al. Improving shape-from-focus by compensating for image magnification shift

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant