CN109493418B - Three-dimensional point cloud obtaining method based on LabVIEW - Google Patents


Info

Publication number
CN109493418B
CN109493418B (application number CN201811298233.9A)
Authority
CN
China
Prior art keywords
shooting
imaging
image
point cloud
labview
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811298233.9A
Other languages
Chinese (zh)
Other versions
CN109493418A (en)
Inventor
靳开轩 (Jin Kaixuan)
郭强 (Guo Qiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningxia Juneng Robot Co Ltd
Original Assignee
Ningxia Juneng Robot Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningxia Juneng Robot Co Ltd filed Critical Ningxia Juneng Robot Co Ltd
Priority to CN201811298233.9A priority Critical patent/CN109493418B/en
Publication of CN109493418A publication Critical patent/CN109493418A/en
Application granted granted Critical
Publication of CN109493418B publication Critical patent/CN109493418B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T5/80
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Abstract

The invention discloses a LabVIEW-based three-dimensional point cloud acquisition method comprising the following steps. S1, fixing a shooting system: a laser generator and a camera are fixed in relative position and combined into a shooting system. S2, correcting image distortion: a distortion correction program is written in LabVIEW and applied to the imaged image of the distortion calibration plate. S3, calibrating the position relation: the shooting system images the calibration plate at different shooting heights, and the conversion relation between actual size and imaging size is determined. S4, acquiring imaging images: the shooting system continuously photographs the surface of the object to be detected to obtain surface imaging images. S5, obtaining the three-dimensional point cloud: after distortion correction of the imaging images, the corresponding shooting heights are determined and the images are synthesized in sequence into a three-dimensional point cloud image. The method directly obtains the mapping relation between the point cloud and the actual photographed object, avoids complicated camera calibration, and simplifies the three-dimensional point cloud algorithm program.

Description

Three-dimensional point cloud obtaining method based on LabVIEW
Technical Field
The invention relates to a three-dimensional point cloud obtaining method, in particular to a three-dimensional point cloud obtaining method based on LabVIEW.
Background
Reverse engineering is a technique for obtaining a three-dimensional digital model of a product from a physical prototype by appropriate measurement means, and point cloud data is an indispensable link in reverse engineering. Most point cloud data is generated by 3D vision equipment or acquired with coordinate measuring machines and three-dimensional laser scanners; a point cloud can also be obtained by laser triangulation. Laser triangulation yields high-precision point cloud data and has strong anti-interference capability, so it is widely used for acquiring point cloud data.
In the calibration procedure of the traditional laser triangulation approach to acquiring point cloud data, complicated camera calibration is required, and the point cloud algorithm is complex.
Disclosure of Invention
To overcome the problems in the prior art that the three-dimensional point cloud algorithm is complex and complicated camera calibration is required, the invention provides a LabVIEW-based three-dimensional point cloud acquisition method that avoids complicated camera calibration, directly obtains the mapping relation between the point cloud and the actual photographed object, facilitates subsequent point cloud positioning, and simplifies the point cloud algorithm program.
The invention provides a LabVIEW-based three-dimensional point cloud acquisition method, which comprises the following steps:
s1, fixing a shooting system: the shooting system comprises a laser generator and a camera, and the relative positions of the laser generator and the camera are fixed;
s2, image distortion correction: programming a distortion correction program based on LabVIEW, and performing distortion correction on the imaging image of the distortion calibration plate;
s3, calibrating the position relation: the shooting system shoots and images the calibration plate at different shooting heights, and determines the conversion relation between the actual size and the imaging size;
s4, acquiring an imaging image: the shooting system continuously shoots the surface of the object to be detected to obtain a surface imaging image of the object to be detected;
s5, obtaining three-dimensional point cloud: and after the surface imaging image is subjected to distortion correction, determining the shooting height corresponding to the surface imaging image point, and synthesizing the surface imaging image in sequence to obtain a three-dimensional point cloud image.
The invention provides a LabVIEW-based three-dimensional point cloud acquisition method, and as a preferred mode, the step S1 further comprises the following steps:
S11, the camera is placed at a fixed horizontal distance and angle relative to the laser generator and combined with it into the shooting system; the laser generator emits a laser beam onto the calibration plate, the illuminated position is marked as the current point, and the camera captures an image of the current point;
S12, the shooting system is moved to change the shooting height, and the shooting system captures the current point image again;
s13, the shooting height and the imaging point of the current point on the camera chip have the following geometrical relationship:
tan(theta) = b / h,    tan(theta - alpha) = (a - x) / f
wherein:
theta is an included angle between a connecting line of the current point and the imaging point and the laser line;
alpha is the included angle between the camera lens axis and the laser line;
b is the horizontal distance from the center of the camera lens to the laser line;
a is 1/2 size of the camera chip;
f is the focal length of the camera;
h is the shooting height;
x is the distance from the imaging point to the chip boundary;
and S14, substituting the two shooting heights and the corresponding distances from the two imaging points to the chip boundary into the geometric relation to obtain the values of the constants alpha and b, so that the position of the camera relative to the laser generator is fixed.
The relative position of the laser generator and the camera is thus determined from the laser triangulation imaging principle through a simple geometric relation; few calibration points are needed and the precision is high, the coordinate transformations of the traditional joint calibration of laser generator and camera are avoided, and the algorithm program is simplified.
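For readers who want to experiment with the two-point calibration of S14 outside LabVIEW, the sketch below solves the two resulting equations for alpha and b numerically. It assumes the reconstructed form of the geometric relation given above (tan(theta) = b/h with tan(theta - alpha) = (a - x)/f); the function names and the use of scipy.optimize.fsolve are illustrative choices, not part of the patent.

```python
import math
from scipy.optimize import fsolve  # assumed available

def calibrate_alpha_b(h1, x1, h2, x2, a, f):
    """Solve the S14 two-point calibration for (alpha, b), assuming the
    reconstructed relation b = h * tan(alpha + atan((a - x) / f))."""
    def residuals(params):
        alpha, b = params
        r1 = h1 * math.tan(alpha + math.atan((a - x1) / f)) - b
        r2 = h2 * math.tan(alpha + math.atan((a - x2) / f)) - b
        return [r1, r2]
    alpha0, b0 = math.radians(30.0), max(h1, h2)   # rough initial guess
    alpha, b = fsolve(residuals, [alpha0, b0])
    return alpha, b

def height_from_x(x, alpha, b, a, f):
    """Shooting height h recovered from an imaging-point distance x."""
    return b / math.tan(alpha + math.atan((a - x) / f))
```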
In a preferred mode of the LabVIEW-based three-dimensional point cloud acquisition method provided by the invention, in step S13 the distance x is obtained from the relation between the imaging point and the image pixels, as follows:
x = u · t / z
wherein:
u is the chip boundary length;
t is the number of pixels from the imaging point to the image boundary;
z is the total number of pixels in the length direction in the image;
when the same camera is used to collect the imaging images, the chip length and the total number of pixels along the image length direction are fixed, and the number of pixels from the imaging point to the image boundary can be measured in software, giving high accuracy with a simple algorithm.
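Converting a pixel count into a chip distance is then a single multiplication; a minimal sketch (the function name is an illustrative assumption):

```python
def imaging_point_distance(t, z, u):
    """Distance x from the imaging point to the chip boundary, x = u * t / z.

    t: number of pixels from the imaging point to the image boundary
    z: total number of pixels along the image length direction
    u: chip length along the same direction
    """
    return u * t / z
```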
The invention provides a LabVIEW-based three-dimensional point cloud acquisition method, and as a preferred mode, the step S2 further comprises the following steps:
s21, capturing an imaging image of a distortion calibration plate by the shooting system, wherein the shooting height of the shooting system relative to the distortion calibration plate is consistent with the shooting height of the image of the object to be detected during collection, and the distortion calibration plate is a uniformly distributed dot matrix template;
s22, inputting the imaging image of the distortion calibration plate into LabVIEW, and performing distortion correction treatment, wherein the processed imaging image of the distortion calibration plate is used as a distortion correction template;
s23, the distortion correction template and the distortion calibration plate imaging image have the following distortion correction relationship:
m_corrected = k_m · m,    n_corrected = k_n · n
m is the row number of the pixel point before correction;
n is the column number of the pixel point before correction;
m_corrected is the row number of the pixel point after correction;
n_corrected is the column number of the pixel point after correction;
k m is the correction coefficient of the m-th row of pixels;
k n is the correction coefficient of the n-th column of pixels;
correcting the image distortion and writing the image processing program in LabVIEW simplifies the point cloud algorithm program and streamlines the processing flow.
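As an illustration only, the row/column correction relation reconstructed above (m_corrected = k_m · m, n_corrected = k_n · n) could be applied pixel by pixel as sketched below; the patent performs this step with a LabVIEW program, so the Python stand-in and its interface are assumptions.

```python
import numpy as np

def correct_distortion(img, k_rows, k_cols):
    """Illustrative pixel remap for the reconstructed relation
    m_corrected = k_m * m, n_corrected = k_n * n, where k_rows[m] and
    k_cols[n] are the per-row / per-column coefficients obtained from the
    dot-grid distortion calibration plate (assumed interface)."""
    M, N = img.shape[:2]
    corrected = np.zeros_like(img)
    for m in range(M):
        for n in range(N):
            mc = int(round(k_rows[m] * m))   # corrected row index
            nc = int(round(k_cols[n] * n))   # corrected column index
            if 0 <= mc < M and 0 <= nc < N:
                corrected[mc, nc] = img[m, n]
    return corrected
```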
In a preferred mode of the LabVIEW-based three-dimensional point cloud acquisition method provided by the invention, step S3 further comprises the following steps:
s31, the shooting system shoots the calibration plate to obtain an imaging image with a clear contour line of the calibration plate;
s32, moving the shooting system to change the shooting height to obtain an imaging image with the clear contour line of the calibration plate at the current position;
s33, carrying out data fitting on the shooting height, the actual length of the contour line of the calibration plate and the imaging length of the contour line of the calibration plate to obtain the imaging size data relation as follows:
l=ksh
wherein:
l is the actual length of the contour line of the calibration plate;
s is the number of pixel points occupied by the contour line imaging length of the calibration plate;
k is a constant coefficient;
h is the shooting height.
The mapping relation between the point cloud and the actual photographed object is obtained directly through data fitting, providing a foundation for subsequent point cloud positioning; the operation is simple, complicated camera calibration is avoided, and the point cloud algorithm is simplified.
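The constant coefficient k in l = ksh can be estimated from the recorded (l, s, h) triples by least squares; the patent does not specify the fitting method, so the sketch below is one possible choice.

```python
import numpy as np

def fit_size_coefficient(lengths, pixel_counts, heights):
    """Least-squares fit of the constant k in l = k * s * h (step S33).

    lengths:      actual contour-line lengths l
    pixel_counts: imaged contour-line lengths s in pixels
    heights:      shooting heights h
    """
    l = np.asarray(lengths, dtype=float)
    s = np.asarray(pixel_counts, dtype=float)
    h = np.asarray(heights, dtype=float)
    sh = s * h
    # minimise sum (l - k*s*h)^2  ->  k = sum(l*s*h) / sum((s*h)^2)
    return float(np.dot(l, sh) / np.dot(sh, sh))
```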
In a preferred mode of the LabVIEW-based three-dimensional point cloud acquisition method provided by the invention, the calibration plate is a template with a clear outline and known outline dimensions. A calibration plate with a clear outline makes the data more accurate and reduces error.
In a preferred mode of the LabVIEW-based three-dimensional point cloud acquisition method provided by the invention, the specific method of step S4 is as follows:
the shooting system starts from one side of the longitudinal edge of the object to be detected and moves horizontally along its transverse edge while shooting continuously at a fixed frequency, until the surface imaging images of the entire object have been acquired. Because the shooting system moves horizontally along the edge of the object, the basis for synthesizing the surface images is simplified, and the acquisition method provided by the invention is not affected by the surface shape or size of the object. Choosing a suitable shooting frequency and number of pictures keeps the transverse span covered by each imaging image small, which improves the accuracy of the algorithm.
In a preferred mode of the LabVIEW-based three-dimensional point cloud acquisition method provided by the invention, the position of the shooting system relative to the object to be detected is such that, during the transverse horizontal movement, the shooting system can completely acquire the longitudinal surface image of the object within the corresponding interval. At any position of the movement the complete longitudinal surface image of that interval is captured, so the whole surface of the object is imaged in a single horizontal pass, and the surface images are synthesized in sequence to generate the point cloud.
The invention provides a LabVIEW-based three-dimensional point cloud acquisition method, and as a preferred mode, the step S5 comprises the following steps:
s51, inputting the surface imaging image into LabVIEW, running the distortion correction program, and performing distortion correction;
s52, extracting a pixel matrix from the surface imaging image after the distortion correction;
s53, binarizing the pixel matrix by a certain threshold value, and performing denoising treatment;
s54, extracting and sequencing the points with the pixel value of 255 in each row of the pixel matrix, substituting the maximum column number of the points into the geometric relation in the step S1, and determining the corresponding shooting heights;
and S55, synthesizing each processed surface imaging image according to the shooting sequence to generate a complete image of the three-dimensional point cloud on the surface of the object to be detected.
Each picture is processed individually: after the background and noise are removed, the shooting height corresponding to the picture is determined, and the heights of the object shown in a single picture are approximated as one common height. This provides the basis for picture synthesis and simplifies the algorithm.
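A rough sketch of S52 to S54 for a single distortion-corrected frame is given below; it assumes NumPy and reuses the hypothetical height_from_x helper from the earlier calibration sketch, and is not the patent's LabVIEW implementation.

```python
import numpy as np

def frame_heights(corrected_img, threshold, alpha, b, a, f, u, z):
    """Sketch of S52-S54 for one distortion-corrected frame: binarise the
    pixel matrix, and for every row take the largest column index holding a
    laser pixel (value 255), converting it into a shooting height via the
    S1 geometry (height_from_x is the hypothetical helper sketched earlier)."""
    binary = np.where(corrected_img >= threshold, 255, 0)
    heights = np.full(binary.shape[0], np.nan)
    for row in range(binary.shape[0]):
        cols = np.flatnonzero(binary[row, :] == 255)
        if cols.size == 0:
            continue                      # no laser pixel found in this row
        t = cols.max()                    # maximum column number of the laser points
        x = u * t / z                     # pixel count -> distance on the chip
        heights[row] = height_from_x(x, alpha, b, a, f)
    return heights
```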
According to the pixel coordinates of the laser line, the z-coordinate of the current position is calculated; the x-coordinate is then determined from the current photographing position, and the y-coordinate from the calibration relation between actual length and pixel length at the current height, so that the mapping relation between the three-dimensional point cloud image and the actual photographed object is obtained directly.
The central position of the object to be measured, projected onto the plane in which the shooting system lies, is taken as the origin of the real coordinate system; the direction from the origin toward the object is the Z direction, the moving shooting direction of the shooting system is the X direction, and the direction from the origin toward the shooting system is the Y direction. If the number of imaging images of the object captured by the shooting system is N and the number of resolution lines of a single image is M, the imaging images can be synthesized in shooting order into a two-dimensional matrix A with M rows and N columns:
A = | a_11  a_12  ...  a_1N |
    | a_21  a_22  ...  a_2N |
    | ...   ...   ...  ...  |
    | a_M1  a_M2  ...  a_MN |
wherein:
A is a two-dimensional matrix carrying the height information;
a_mn is the height value at row m, column n of the two-dimensional matrix A;
the point at row m, column n of the two-dimensional matrix A has the following coordinates in the real coordinate system:
x_mn = n · X / N
y_mn = k · (m - M/2) · a_mn
z_mn = a_mn
wherein:
a_mn is the height value at row m, column n of the two-dimensional matrix A;
X is the distance from the starting point to the end point of the horizontal movement;
N is the number of imaging images of the object to be detected;
M is the number of resolution lines of a single image;
k is the constant coefficient in the imaging size relation.
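Under the reconstructed coordinate expression above, the height matrix A could be expanded into real (x, y, z) points as sketched below; the centring convention and the function name are assumptions for illustration.

```python
import numpy as np

def matrix_to_point_cloud(A, X, k):
    """Expand the M x N height matrix A into real (x, y, z) points using the
    reconstructed coordinate expression: x from the frame position along the
    sweep of length X, y from the size calibration l = k * s * h with the
    pixel offset s measured from the image centre, z from the stored height."""
    M, N = A.shape
    points = []
    for m in range(M):
        for n in range(N):
            z = A[m, n]
            if np.isnan(z):
                continue                     # skip rows where no height was found
            x = (n + 1) * X / N              # position of the n-th frame along the sweep
            y = k * (m - M / 2.0) * z        # actual length from pixel offset at height z
            points.append((x, y, z))
    return np.array(points)
```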
In a preferred mode of the LabVIEW-based three-dimensional point cloud acquisition method provided by the invention, shading equipment is arranged around the object to be detected to reduce interference from external light sources. The shading equipment reduces the generation of abnormal point cloud points and makes the point cloud data more accurate.
In the LabVIEW-based three-dimensional point cloud acquisition method provided by the invention, the relative position of the laser generator and the camera is determined through a simple geometric relation and fixed as a shooting system, which avoids the coordinate transformations of the traditional joint calibration of laser generator and camera. The shooting system images the calibration plate and distortion correction is carried out in LabVIEW, so that after the surface imaging images obtained in a single horizontal pass over the object to be detected are input into LabVIEW in sequence, the mapping relation between the point cloud and the actual photographed object is obtained directly. The steps are simple, the acquired point data are accurate, and the three-dimensional point cloud algorithm program is simplified.
Drawings
FIG. 1 is a flow chart of a LabVIEW-based three-dimensional point cloud acquisition method;
FIG. 2 is a flow chart of step S1 of the LabVIEW-based three-dimensional point cloud acquisition method;
FIG. 3 is a flow chart of step S2 of the LabVIEW-based three-dimensional point cloud acquisition method;
FIG. 4 is a flow chart of step S3 of the LabVIEW-based three-dimensional point cloud acquisition method;
FIG. 5 is a flow chart of step S5 of the LabVIEW-based three-dimensional point cloud acquisition method;
FIG. 6 is a schematic diagram of the imaging geometry of the camera and laser generator;
fig. 7 is a schematic view of a calibration plate.
Detailed Description
Example 1
As shown in fig. 1, the invention provides a three-dimensional point cloud obtaining method based on LabVIEW, which comprises the following steps:
s1, fixing a shooting system: the shooting system comprises a laser generator and a camera, the laser generator emits laser to irradiate the calibration plate, the camera captures laser point imaging, the position relation between the laser generator and the camera is determined according to the geometric relation, and the laser generator and the camera with fixed relative positions are combined into the shooting system; as shown in fig. 2, step S1 further includes the steps of:
S11, the camera is placed at a fixed horizontal distance and angle relative to the laser generator and combined with it into the shooting system; the laser generator emits a laser beam onto the calibration plate, the illuminated position is marked as the current point, and the camera captures an image of the current point;
S12, the shooting system is moved to change the shooting height, and the shooting system captures the current point image again;
s13, the shooting height and the imaging point of the current point on the camera chip have the following geometrical relationship:
tan(theta) = b / h,    tan(theta - alpha) = (a - x) / f
wherein:
theta is an included angle between a connecting line of the current point and the imaging point and the laser line;
alpha is the included angle between the camera lens axis and the laser line;
b is the horizontal distance from the center of the camera lens to the laser line;
a is 1/2 size of the camera chip;
f is the focal length of the camera;
h is the shooting height;
x is the distance from the imaging point to the chip boundary;
S14, substituting the two shooting heights and the corresponding distances from the two imaging points to the chip boundary into the geometric relation to obtain the values of the constants alpha and b, so that the position of the camera relative to the laser generator is fixed; wherein the distance x has the following relationship:
x = u · t / z
wherein:
u is the chip boundary length;
t is the number of pixels from the imaging point to the image boundary;
z is the total number of pixels in the image in the length direction.
S2, image distortion correction: the shooting system images the distortion calibration plate, distortion correction processing is carried out on the distortion calibration plate based on LabVIEW, and a distortion correction program is compiled based on LabVIEW; as shown in fig. 3, step S2 further includes the steps of:
s21, capturing an imaging image of the distortion calibration plate by a shooting system, wherein the shooting height of the shooting system relative to the distortion calibration plate is consistent with the shooting height when the image of the object to be detected is collected, and the distortion calibration plate is a uniformly distributed dot matrix template;
s22, compiling a distortion correction program based on LabVIEW, inputting the imaging image of the distortion calibration plate into the LabVIEW, and performing distortion correction treatment, wherein the processed imaging image of the distortion calibration plate is used as a distortion correction template;
s23, the distortion correction template and the distortion calibration plate imaging image have the following distortion correction relationship:
m_corrected = k_m · m,    n_corrected = k_n · n
m is the row number of the pixel point before correction;
n is the column number of the pixel point before correction;
m_corrected is the row number of the pixel point after correction;
n_corrected is the column number of the pixel point after correction;
k m is the correction coefficient of the m-th row of pixels;
k n is the correction coefficient of the nth column of pixels;
s3, calibrating the position relation: the shooting system shoots and images the calibration plate at different shooting heights, and fits an imaging size data relation according to the actual size of the calibration plate and the imaging size data; as shown in fig. 4, step S3 further includes the steps of:
s31, shooting the calibration plate by a shooting system to obtain an imaging image with a clear contour line of the calibration plate;
s32, the mobile shooting system changes the shooting height to obtain an imaging image with a clear contour line of the calibration plate at the current position;
s33, carrying out data fitting on the statistical shooting height, the actual length of the contour line of the calibration plate and the imaging length of the contour line of the calibration plate to obtain the following imaging size data relation:
l=ksh
wherein:
l is the actual length of the contour line of the calibration plate;
s is the number of pixel points occupied by the imaging length of the contour line of the calibration plate;
k is a constant coefficient;
h is the shooting height.
S4, acquiring an imaging image: continuously shooting the surface of the object to be detected by the shooting system until all surface information imaging images of the object to be detected are obtained; the specific method of step S4 is as follows:
the shooting system starts from one side of the longitudinal edge of the object to be detected and horizontally moves along the transverse edge of the object to be detected, and simultaneously the shooting system continuously shoots at a fixed frequency until the complete surface imaging image acquisition of the object to be detected is completed.
S5, obtaining three-dimensional point cloud: extracting a pixel matrix of an image, determining the corresponding shooting height of each point, and sequentially synthesizing surface imaging images to obtain a three-dimensional point cloud image; as shown in fig. 5, step S5 further includes the steps of:
s51, inputting the obtained pictures into LabVIEW by the shooting system in sequence, and carrying out distortion correction treatment;
s52, extracting the pixel matrix of the image after the distortion correction;
s53, binarizing the matrix by a certain threshold value (the threshold value parameter needs to be set according to the brightness), and performing denoising treatment;
S54, extracting and ordering the points with pixel value 255 in each row, and substituting the maximum column number into the formula of step S1 to obtain the actual distance from each row of points to the camera;
and S55, synthesizing the processed surface imaging images according to a shooting sequence to generate a complete image of the three-dimensional point cloud on the surface of the object to be detected.
The actual distances from the points to the camera are then mapped to different colors according to height, giving a colored three-dimensional point cloud image.
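The text does not specify the color palette, so the sketch below uses a simple linear blue-to-red ramp over the height range as one possible way to realise the color mapping.

```python
import numpy as np

def height_to_rgb(z_values):
    """Map point heights to RGB colours for the coloured point cloud
    (simple blue-to-red ramp; the actual palette is not specified)."""
    z = np.asarray(z_values, dtype=float)
    span = max(float(z.max() - z.min()), 1e-9)
    t = (z - z.min()) / span                  # normalise heights to [0, 1]
    return np.stack([t, np.zeros_like(t), 1.0 - t], axis=-1)
```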
As shown in FIG. 6, in the present embodiment step S1 is carried out by shooting once at each of the two heights h1 and h2; substituting h1, h2, x1 and x2 into the geometric relation gives
h1 · tan(theta1) = h2 · tan(theta2) = b
wherein:
theta1 = alpha + arctan((a - x1) / f)
theta2 = alpha + arctan((a - x2) / f)
The values of the constants b and alpha can be deduced from the two shooting heights, and the relation for calculating the shooting height h from x is then:
h = b / tan(alpha + arctan((a - x) / f))
This provides a coordinate-conversion basis for subsequent point cloud positioning.
As shown in fig. 7, in this embodiment the calibration plate is an isosceles trapezoid whose upper and lower base contour lines are 100 mm and 600 mm long, respectively. Images are taken at heights of 415 mm, 665 mm and 915 mm from the highest point of the upper base of the calibration plate, and the correspondence between pixel counts and shooting heights is shown in the following table:
[Table: contour-line pixel counts at the shooting heights 415 mm, 665 mm and 915 mm; the values appear in the original document as an image and are not reproduced here.]
from this table the following relationship can be derived: l = ksh
In this embodiment, the distortion calibration plate is a 7 × 7 dot matrix and distortion correction is carried out in LabVIEW. The object to be measured is a brake disc placed in a material frame. The shooting system starts its horizontal movement at the lower-left outer edge of the material frame; the shooting region is 720 mm wide, the frame is swept in 5 seconds, 72 photos are taken, and the camera has 1200 pixels in the length direction. A colored three-dimensional point cloud image is finally obtained, in which the colors correspond to the Z-coordinate (height) of the point cloud.
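For reference, these figures imply a shooting frequency of 72 photos / 5 s = 14.4 frames per second and a point-cloud column spacing along the sweep direction of roughly 720 mm / 72 = 10 mm per frame.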
The foregoing description is intended to be illustrative rather than limiting, and it will be appreciated by those skilled in the art that various modifications, changes, and equivalents may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (9)

1. A three-dimensional point cloud obtaining method based on LabVIEW is characterized in that: the method comprises the following steps:
s1, fixing a shooting system: the shooting system comprises a laser generator and a camera, and the relative positions of the laser generator and the camera are fixed;
s2, image distortion correction: compiling a distortion correction program based on LabVIEW, and performing distortion correction on the imaging image of the distortion calibration plate;
s3, calibrating the position relation: the shooting system shoots and images the calibration plate at different shooting heights, and determines the conversion relation between the actual size and the imaging size;
s4, acquiring an imaging image: the shooting system continuously shoots the surface of the object to be detected to obtain a surface imaging image of the object to be detected;
s5, obtaining three-dimensional point cloud: and extracting a pixel matrix of the surface imaging image, determining a shooting height corresponding to the surface imaging image, and synthesizing the surface imaging image in sequence to obtain a three-dimensional point cloud image.
2. The LabVIEW-based three-dimensional point cloud acquisition method as claimed in claim 1, wherein: the step S1 further includes the steps of:
s11, placing and combining the camera relative to the laser generator at a fixed horizontal distance and angle to form the shooting system, wherein the laser generator emits laser to irradiate on the calibration plate, the laser irradiation position is marked as a current point, and the camera captures the current point for imaging;
s12, moving the shooting system to change the shooting height, wherein the shooting system captures the current point image;
s13, the shooting height and the imaging point of the current point on the camera chip have the following geometrical relationship:
tan(theta) = b / h,    tan(theta - alpha) = (a - x) / f
wherein:
theta is an included angle between a connecting line of the current point and the imaging point and the laser line;
alpha is the included angle between the camera lens axis and the laser line;
b is the horizontal distance from the center of the camera lens to the laser line;
a is 1/2 size of the camera chip;
f is the focal length of the camera;
h is the shooting height;
x is the distance from the imaging point to the chip boundary;
and S14, substituting the two shooting heights and the corresponding distances from the two imaging points to the chip boundary into the geometric relationship to obtain constant alpha and constant b values, so that the camera is fixed relative to the laser generator.
3. The LabVIEW-based three-dimensional point cloud acquisition method as claimed in claim 2, wherein: in step S13, the distance x is obtained according to the relationship between the imaging point and the image pixel, and the distance x has the following relationship:
x = u · t / z
wherein:
u is the chip boundary length;
t is the number of pixels from the imaging point to the image boundary;
z is the total number of pixels in the image in the length direction.
4. The LabVIEW-based three-dimensional point cloud acquisition method as claimed in claim 1, wherein: the step S2 further comprises the steps of:
s21, capturing an imaging image of a distortion calibration plate by the shooting system, wherein the shooting height of the shooting system relative to the distortion calibration plate is consistent with the shooting height of the image of the object to be detected during collection, and the distortion calibration plate is a uniformly distributed dot matrix template;
s22, inputting the imaging image of the distortion calibration plate into LabVIEW for distortion correction, and taking the processed imaging image of the distortion calibration plate as a distortion correction template;
s23, the distortion correction template and the distortion calibration plate imaging image have the following distortion correction relationship:
m_corrected = k_m · m,    n_corrected = k_n · n
m is the row number of the pixel point before correction;
n is the column number of the pixel point before correction;
m_corrected is the row number of the pixel point after correction;
n_corrected is the column number of the pixel point after correction;
k m is the correction coefficient of the m-th row of pixels;
k n is the correction coefficient of the nth column of pixels;
and S24, writing a distortion correction program based on LabVIEW and the distortion correction relation.
5. The method for acquiring the three-dimensional point cloud based on the LabVIEW as claimed in claim 1, wherein: the step S3 further includes the steps of:
s31, the shooting system shoots the calibration plate to obtain an imaging image with a clear contour line of the calibration plate;
s32, moving the shooting system to change the shooting height to obtain an imaging image with the clear contour line of the calibration plate at the current position;
s33, carrying out data fitting on the shooting height, the actual length of the contour line of the calibration plate and the imaging length of the contour line of the calibration plate to obtain the imaging size data relation as follows:
l=ksh
wherein:
l is the actual length of the contour line of the calibration plate;
s is the number of pixel points occupied by the imaging length of the contour line of the calibration plate;
k is a constant coefficient;
h is the shooting height.
6. The method for acquiring the three-dimensional point cloud based on the LabVIEW as claimed in claim 1, wherein: the specific method of the step S4 is as follows:
the shooting system starts from one side of the longitudinal edge of the object to be detected and horizontally moves along the transverse edge of the object to be detected, and simultaneously the shooting system continuously shoots at a fixed frequency until the complete surface imaging image acquisition of the object to be detected is completed.
7. The LabVIEW-based three-dimensional point cloud acquisition method as recited in claim 6, wherein: the position of the shooting system relative to the object to be detected meets the requirement that the shooting system can completely collect the longitudinal surface image of the object to be detected in the relative interval in the transverse horizontal moving process.
8. The LabVIEW-based three-dimensional point cloud acquisition method as recited in claim 4, wherein: the step S5 includes the steps of:
s51, inputting the surface imaging images into LabVIEW in sequence, operating the distortion correction program, and performing distortion correction;
s52, extracting a pixel matrix from the surface imaging image after the distortion correction;
s53, binarizing the pixel matrix by a certain threshold value, and performing denoising treatment;
s54, extracting and sequencing the points with the pixel value of 255 in each row of the pixel matrix, substituting the maximum column number of the points into the geometric relation in the step S1, and determining the corresponding shooting heights;
and S55, synthesizing each processed surface imaging image according to a shooting sequence to generate a complete image of the three-dimensional point cloud on the surface of the object to be detected.
9. The LabVIEW-based three-dimensional point cloud acquisition method as claimed in claim 1, wherein: and shading equipment is arranged around the object to be detected to reduce the interference of an external light source.
CN201811298233.9A 2018-11-02 2018-11-02 Three-dimensional point cloud obtaining method based on LabVIEW Active CN109493418B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811298233.9A CN109493418B (en) 2018-11-02 2018-11-02 Three-dimensional point cloud obtaining method based on LabVIEW

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811298233.9A CN109493418B (en) 2018-11-02 2018-11-02 Three-dimensional point cloud obtaining method based on LabVIEW

Publications (2)

Publication Number Publication Date
CN109493418A CN109493418A (en) 2019-03-19
CN109493418B true CN109493418B (en) 2022-12-27

Family

ID=65692182

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811298233.9A Active CN109493418B (en) 2018-11-02 2018-11-02 Three-dimensional point cloud obtaining method based on LabVIEW

Country Status (1)

Country Link
CN (1) CN109493418B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111681186A (en) * 2020-06-10 2020-09-18 创新奇智(北京)科技有限公司 Image processing method and device, electronic equipment and readable storage medium
CN111951375B (en) * 2020-07-16 2023-06-30 中国航天空气动力技术研究院 Method for extracting particle images in curved surface visual model
CN113075683B (en) * 2021-03-05 2022-08-23 上海交通大学 Environment three-dimensional reconstruction method, device and system

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107248178A (en) * 2017-06-08 2017-10-13 上海赫千电子科技有限公司 A kind of fisheye camera scaling method based on distortion parameter

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2451659A1 (en) * 2001-06-29 2003-01-09 Melvyn Lionel Smith Overhead dimensioning system and method

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107248178A (en) * 2017-06-08 2017-10-13 上海赫千电子科技有限公司 A kind of fisheye camera scaling method based on distortion parameter

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on image correction of a laser line-scanning three-dimensional imaging system (激光线扫描三维成像系统的图像矫正研究); Zhang Meng et al.; Applied Laser (《应用激光》); 2018-04-15 (No. 02); full text *

Also Published As

Publication number Publication date
CN109493418A (en) 2019-03-19

Similar Documents

Publication Publication Date Title
US10309770B2 (en) Three-dimensional sensor system and three-dimensional data acquisition method
US9886759B2 (en) Method and system for three-dimensional data acquisition
CN109493418B (en) Three-dimensional point cloud obtaining method based on LabVIEW
CN104331896B (en) A kind of system calibrating method based on depth information
US8233041B2 (en) Image processing device and image processing method for performing three dimensional measurements
TWI489082B (en) Method and system for calibrating laser measuring apparatus
CN103257085B (en) Image processing device and method for image processing
CN108562250B (en) Keyboard keycap flatness rapid measurement method and device based on structured light imaging
WO2012053521A1 (en) Optical information processing device, optical information processing method, optical information processing system, and optical information processing program
EP1343332A2 (en) Stereoscopic image characteristics examination system
JP2012215394A (en) Three-dimensional measuring apparatus and three-dimensional measuring method
JP2005326247A (en) Calibrator, calibration method, and calibration program
CN109459984B (en) Positioning and grabbing system based on three-dimensional point cloud and using method thereof
CN110926330A (en) Image processing apparatus, image processing method, and program
CN111637834A (en) Three-dimensional data measuring device and method
CN112509060A (en) CT secondary scanning positioning method and system based on image depth learning and computer readable storage medium
CN112634376A (en) Calibration method and device, calibration equipment and storage medium
CN111932517B (en) Contour mapping method and device for residual plate, electronic equipment and storage medium
US20120158358A1 (en) Three-dimensional shape measurement method and three-dimensional shape measurement system
JP5336325B2 (en) Image processing method
JP5136108B2 (en) 3D shape measuring method and 3D shape measuring apparatus
CN112710662A (en) Generation method and device, generation system and storage medium
CN115684012A (en) Visual inspection system, calibration method, device and readable storage medium
KR20180040316A (en) 3D optical scanner
JP6355544B2 (en) Position measuring apparatus, data correcting apparatus, position measuring method and data correcting method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant