CN115131444B - Calibration method based on monocular vision dispensing platform - Google Patents

Publication number
CN115131444B
Authority
CN
China
Prior art keywords
dot, monocular camera, dimensional coordinates, distortion, coordinates
Legal status
Active
Application number
CN202211050819.XA
Other languages
Chinese (zh)
Other versions
CN115131444A (en)
Inventor
陈辉
窦海波
曲东升
李长峰
张继
郝婷婷
Current Assignee
Changzhou Mingseal Robotic Technology Co Ltd
Original Assignee
Changzhou Mingseal Robotic Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Changzhou Mingseal Robotic Technology Co Ltd filed Critical Changzhou Mingseal Robotic Technology Co Ltd
Priority to CN202211050819.XA priority Critical patent/CN115131444B/en
Publication of CN115131444A publication Critical patent/CN115131444A/en
Application granted granted Critical
Publication of CN115131444B publication Critical patent/CN115131444B/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/28Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V10/763Non-hierarchical techniques, e.g. based on statistics of modelling distributions

Abstract

The invention discloses a calibration method based on a monocular vision dispensing platform, which relies on the open source vision library OpenCV and comprises the following steps: step 1, collecting and preprocessing images; step 2, extracting the two-dimensional coordinates of the corner points; step 3, setting a world coordinate system and extracting the three-dimensional coordinates of the corner points; step 4, calibrating the internal and external parameters of the monocular camera; step 5, calculating the mapping matrix; step 6, distortion correction; step 7, calibrating the hand-eye affine matrix; step 8, dispensing and positioning operation: according to the distortion correction of step 6 and the hand-eye affine matrix calculated in step 7, the image coordinates of the points to be glued, captured and extracted during the dispensing process, are converted into more accurate physical coordinates of the end of the mechanical arm, realizing high-precision dispensing. The calibration method reduces the cost of use while ensuring precision and stability.

Description

Calibration method based on monocular vision dispensing platform
Technical Field
The invention relates to the technical field of calibration methods based on vision, in particular to a calibration method based on a monocular vision dispensing platform.
Background
In the dispensing industry, the requirements on equipment are high. Although traditional automatic dispensing equipment uses high-precision tooling to position the workpiece points to be glued, it cannot read images or make high-precision positioning judgments. It is therefore necessary to introduce vision to achieve efficient positioning, and a vision-based calibration technique is crucial in determining the range of operating scenes to which the equipment applies.
The calibration methods based on the commercial Halcon vision library commonly used in dispensing equipment can guarantee high-precision, stable calibration, but the cost is high, the technology is held by foreign companies, and the underlying code is closed: once an algorithm problem occurs, it is difficult for a vision engineer to quickly locate and solve it. There is therefore an urgent need to develop a calibration method that reduces the cost of use while ensuring precision and stability.
Disclosure of Invention
The present invention is directed to solving at least one of the problems of the prior art.
Therefore, the invention provides a calibration method based on a monocular vision dispensing platform which relies on the open source vision library OpenCV and reduces the cost of use while ensuring precision and stability.
The calibration method based on the monocular vision dispensing platform relies on the open source vision library OpenCV and is characterized by comprising the following steps:
step 1, collecting images, and preprocessing the images: shooting a plurality of round dot calibration plate pictures at multiple angles by using a monocular camera, recording the physical coordinates of the tail end of a mechanical arm of dispensing equipment corresponding to the round dot calibration plate pictures, respectively converting the shot round dot calibration plate pictures into gray images, and preprocessing the gray images;
step 2, extracting two-dimensional coordinates of a corner point: based on the step 1, respectively extracting angular point two-dimensional coordinates in a plurality of round dot calibration board pictures;
step 3, setting a world coordinate system, and extracting three-dimensional coordinates of corner points: setting an origin of a world coordinate system, and calculating a corner three-dimensional coordinate according to dot spacing and dot row and column numbers in a standard dot calibration board;
step 4, calibrating internal and external parameters of the monocular camera: calibrating internal and external parameters of the monocular camera according to the two-dimensional coordinates of the corner points obtained in the step 2 and the three-dimensional coordinates of the corner points obtained in the step 3 to obtain an internal parameter matrix of the monocular camera and a distortion coefficient of the monocular camera;
step 5, mapping matrix: calculating a mapping matrix according to the internal reference matrix of the monocular camera and the distortion coefficient of the monocular camera obtained in the step 4;
step 6, distortion correction: introducing a bilinear interpolation algorithm according to the mapping matrix obtained by calculation in the step 5, and carrying out distortion correction on a plurality of dot calibration board pictures shot by a monocular camera at multiple angles to obtain the dot calibration board pictures after distortion correction;
step 7, calibrating a hand-eye affine matrix: performing hand-eye calibration according to the physical coordinates of the end of the mechanical arm of the dispensing equipment recorded in step 1 and the two-dimensional coordinates of the corner points of the same region in the different dot calibration board pictures extracted in step 2, and calculating the hand-eye affine matrix;
step 8, dispensing and positioning operation: according to the distortion correction of step 6 and the hand-eye affine matrix calculated in step 7, converting the image coordinates of the points to be glued, captured and extracted during the dispensing process, into more accurate physical coordinates of the end of the mechanical arm, thereby realizing high-precision dispensing operation.
The method has the following beneficial effects: (1) relying on OpenCV, it improves on the traditional Zhang Zhengyou calibration method for the internal and external parameters of the monocular camera in the dispensing equipment by adopting a blob detection algorithm on a dot calibration board and by introducing tangential distortion coefficients; (2) relying on OpenCV, it performs distortion correction on the image using bilinear interpolation combined with the mapping matrix, improving the distortion correction precision of the images of the product to be glued captured by the monocular camera in the dispensing equipment.
According to an embodiment of the present invention, in the step 1, the gray scale image preprocessing includes image binarization, color filtering, area filtering, and center point clustering.
According to one embodiment of the invention, each image is binarized by setting the step size and the threshold value.
According to an embodiment of the present invention, in the step 2, the process of extracting the two-dimensional coordinates of the corner points in a dot calibration board picture is: a blob detection algorithm is applied to the dot calibration board picture, interference blobs are filtered out by restricting detection to black blobs and by blob area, and the characteristic blobs are screened to obtain the two-dimensional coordinates of the corner points in the picture.
According to an embodiment of the present invention, in the step 4, the internal and external calibration of the monocular camera according to the two-dimensional corner coordinates obtained in step 2 and the three-dimensional corner coordinates obtained in step 3 also yields a rotation matrix from the dot calibration plate picture to the monocular camera and a translation vector from the dot calibration plate picture to the monocular camera.
According to an embodiment of the present invention, in the 4 th step, the distortion coefficients of the monocular camera include a tangential distortion coefficient and a radial distortion coefficient.
According to an embodiment of the present invention, in the step 2, the number of corner points on each dot calibration board picture is the same among the plurality of dot calibration board pictures.
According to an embodiment of the present invention, in the step 1, the number of the dot calibration board pictures shot by the monocular camera at multiple angles is at least 6.
According to one embodiment of the invention, in said step 3, the dot pitch in the standard dot calibration plate is 2.5mm.
According to an embodiment of the present invention, in the step 3, the number of dot rows in the standard dot calibration board is 7 rows, and the number of dot columns is 7 columns.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments or technical solutions in the prior art of the present invention, the drawings used in the description of the embodiments or prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments described in the present application, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flowchart of a monocular camera based calibration method;
FIG. 2 is a gray scale image of a dot calibration plate after preprocessing;
FIG. 3 is an extracted corner point diagram;
FIG. 4 is a pre-distortion corrected image;
fig. 5 is an image after distortion correction.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The calibration method based on the monocular vision dispensing platform according to the embodiment of the present invention is described in detail below with reference to the accompanying drawings.
Referring to fig. 1, fig. 2, fig. 3, fig. 4, and fig. 5, the calibration method based on the monocular vision dispensing platform of the present invention relies on the OpenCV, which includes the following steps:
step 1, collecting images, and preprocessing the images: shooting a plurality of circular dot calibration board pictures at multiple angles by using a monocular camera, recording the physical coordinates of the tail ends of mechanical arms of the dispensing equipment corresponding to the pictures, respectively converting the shot circular dot calibration board pictures into gray images, and preprocessing the gray images. In step 1, the gray scale image preprocessing comprises image binarization, color filtering, area filtering and center point clustering.
Specifically, each gray level image is binarized by setting a step length and a threshold value, so that image binarization processing is realized.
Color filtering is realized by setting the detection color to be black and detecting only black spots in the gray image.
Area filtering is achieved by setting an area threshold and detecting only blobs in the grayscale image whose area of the blob is within the area threshold.
Center point clustering is performed as follows: first, the edges of the binary image are searched and the centers of the edges are calculated; then the obtained center points are clustered, a minimum distance is set, and the newly clustered center points are screened for points satisfying that distance, which are put into a set; each such set corresponds to the features of one blob.
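The clustering step above can be sketched as a greedy minimum-distance grouping. This is an illustrative reconstruction, not the patent's code; the function name and data layout are assumptions:

```python
# Hypothetical sketch of center-point clustering: edge-center points closer
# than min_dist to an existing cluster's mean are grouped into that cluster,
# and each cluster (one blob's feature set) is reduced to its mean center.

def cluster_centers(points, min_dist):
    """Greedily group 2-D points whose distance to a cluster mean is below min_dist."""
    clusters = []  # each cluster is a list of (x, y) points
    for (px, py) in points:
        placed = False
        for cluster in clusters:
            cx = sum(x for x, _ in cluster) / len(cluster)
            cy = sum(y for _, y in cluster) / len(cluster)
            if ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5 < min_dist:
                cluster.append((px, py))
                placed = True
                break
        if not placed:
            clusters.append([(px, py)])
    # one representative center per blob
    return [(sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            for c in clusters]
```

Two nearby edge centers collapse to one blob center, while distant ones stay separate.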
In step 1, the number of the dot calibration board pictures shot by the monocular camera at multiple angles is at least 6.
Step 2, extracting two-dimensional coordinates of the corner points: based on step 1, the two-dimensional corner coordinates in the dot calibration board pictures are extracted respectively. In step 2, the extraction process is: a blob detection algorithm is applied to each dot calibration board picture, interference blobs are filtered out by restricting detection to black blobs and by blob area, and the characteristic blobs are screened to obtain the two-dimensional coordinates of the corner points in the picture. Across the dot calibration board pictures, the number of corner points in each picture is the same.
Step 3, setting a world coordinate system, and extracting three-dimensional coordinates of corner points: setting the origin of a world coordinate system, and calculating the three-dimensional coordinates of the corner points according to the dot spacing and the row and column numbers of the dots in the standard dot calibration board. Preferably, in step 3, the dot pitch in the standard dot calibration plate is 2.5mm, the dot row number in the standard dot calibration plate is 7 rows, and the dot column number is 7 columns.
Step 4, calibrating internal and external parameters of the monocular camera: the internal and external calibration of the monocular camera is performed with an improved Zhang Zhengyou calibration method, that is, according to the two-dimensional corner coordinates obtained in step 2 and the three-dimensional corner coordinates obtained in step 3, yielding the intrinsic matrix of the monocular camera and the distortion coefficients of the monocular camera. In step 4, this calibration also yields a rotation matrix from each dot calibration plate picture to the monocular camera and a translation vector from each dot calibration plate picture to the monocular camera.
The role of the rotation matrix and the translation vector is to calculate the reprojection error: first, the three-dimensional coordinates of the corner points in each dot calibration board picture are reprojected, using the camera intrinsic matrix and distortion coefficients obtained from the internal and external calibration of the monocular camera, to obtain new two-dimensional coordinates of the corner points in each calibration board picture; then the norms of the differences between the new and old two-dimensional corner coordinates in each picture are summed and averaged, giving the mean reprojection error of each dot calibration board picture, which verifies the quality of the calibration result (the smaller the reprojection error, the more accurate the calibration of the internal and external parameters of the monocular camera). In step 4, the distortion coefficients of the monocular camera include tangential distortion coefficients and radial distortion coefficients.
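The reprojection-error check described above can be sketched numerically. This is a simplified illustration, not the patent's code: distortion terms are omitted for brevity, and the names K, R, t, pts3d, pts2d are assumptions:

```python
import numpy as np

# Mean reprojection error for one calibration image: project the corner
# points with [R | t] and the intrinsic matrix K, then average the
# pixel-space distances to the originally extracted 2-D corners.

def mean_reprojection_error(K, R, t, pts3d, pts2d):
    """pts3d: (n, 3) world corners; pts2d: (n, 2) extracted pixel corners."""
    cam = R @ pts3d.T + t.reshape(3, 1)   # world -> camera coordinates
    proj = K @ cam                        # camera -> homogeneous pixel coords
    uv = (proj[:2] / proj[2]).T           # perspective division
    return float(np.mean(np.linalg.norm(uv - pts2d, axis=1)))
```

A smaller returned value indicates a more accurate calibration of the internal and external parameters.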
If there is a point P in space, the conversion formula between the pixel coordinates of the point P and its world coordinates is as follows:

\[
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \begin{bmatrix} f_x & \gamma & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} R & t \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
\tag{1}
\]

wherein the symbols in formula (1) have the following meanings:
Z_c is the projection of the point P on the Z axis of the camera coordinate system;
(u, v) are the pixel coordinates of the point P;
the 3 × 3 matrix containing f_x, f_y, γ, u_0, v_0 is the monocular camera intrinsic matrix;
f_x is the ratio of the focal length f of the monocular camera lens to the pixel length dx;
f_y is the ratio of the focal length f of the monocular camera lens to the pixel width dy;
(u_0, v_0) are the x, y coordinates of the origin of the image coordinate system in the pixel coordinate system;
γ is a two-axis skew error caused by uncontrollable factors in monocular camera manufacture, and its calculated value is usually 0;
R is the rotation matrix from the dot calibration plate picture to the monocular camera;
t is the translation vector from the dot calibration plate picture to the monocular camera;
(X_w, Y_w, Z_w) are the world coordinates of the point P.
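Formula (1) can be checked with a small numeric example. The intrinsics and plate pose below are made up for illustration, with the skew term γ taken as 0 as stated in the text:

```python
import numpy as np

# Numeric sketch of formula (1): pixel coordinates of a world point P
# under assumed intrinsics K and extrinsics R, t (illustrative values only).

fx, fy, u0, v0, gamma = 800.0, 800.0, 320.0, 240.0, 0.0
K = np.array([[fx, gamma, u0],
              [0.0, fy,   v0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                             # plate parallel to the image plane
t = np.array([[0.0], [0.0], [100.0]])     # plate 100 mm in front of the camera

Pw = np.array([[2.5], [0.0], [0.0], [1.0]])  # one corner in world coords (mm)
Rt = np.hstack([R, t])                    # 3x4 extrinsic matrix [R | t]
p = K @ Rt @ Pw                           # equals Z_c * [u, v, 1]^T
Zc = p[2, 0]
u, v = p[0, 0] / Zc, p[1, 0] / Zc         # perspective division by Z_c
```

With these numbers the corner at world X = 2.5 mm lands at pixel (340, 240), 20 pixels right of the principal point, as expected for fx = 800 and Z_c = 100.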
The distortion coefficients of the monocular camera are calculated according to the following formulas. The three radial distortion coefficients k_1, k_2, k_3 are described by:

\[
\begin{cases}
x_d = x \,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) \\
y_d = y \,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)
\end{cases}
\tag{2}
\]

wherein the symbols in formula (2) have the following meanings:
(x_d, y_d) are the pixel coordinates before distortion correction;
(x, y) are the pixel coordinates after distortion correction;
k_1, k_2, k_3 are the radial distortion coefficients;
r is the distance from the distortion-corrected pixel coordinates to the coordinate origin, i.e. r^2 = x^2 + y^2.
The two tangential distortion coefficients p_1, p_2 are described by:

\[
\begin{cases}
x_d = x + 2 p_1 x y + p_2 (r^2 + 2 x^2) \\
y_d = y + p_1 (r^2 + 2 y^2) + 2 p_2 x y
\end{cases}
\tag{3}
\]

wherein in formula (3), p_1 and p_2 are the tangential distortion coefficients.
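Formulas (2) and (3) can be combined into one small function mapping corrected coordinates to their distorted positions. This is a hedged sketch of the standard radial-plus-tangential model, not the patent's code; coefficient values passed in are arbitrary:

```python
# Sketch of the distortion model of formulas (2) and (3): map corrected
# coordinates (x, y) to their distorted counterparts (x_d, y_d).

def distort(x, y, k1, k2, k3, p1, p2):
    r2 = x * x + y * y                            # r^2, measured from the origin
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd
```

With all five coefficients zero the mapping is the identity, which is a quick sanity check of the model.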
Step 5, mapping matrix: a mapping matrix is calculated from the intrinsic matrix of the monocular camera and the distortion coefficients of the monocular camera obtained in step 4.
Combining formula (2) and formula (3) gives:

\[
\begin{cases}
x_d = x (1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 x y + p_2 (r^2 + 2 x^2) \\
y_d = y (1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1 (r^2 + 2 y^2) + 2 p_2 x y
\end{cases}
\tag{4}
\]

So the mapping matrix M is calculated as follows:

\[
M(u, v) = \begin{pmatrix} f_x\, x_d + u_0 \\ f_y\, y_d + v_0 \end{pmatrix},
\qquad x = \frac{u - u_0}{f_x}, \quad y = \frac{v - v_0}{f_y}
\tag{5}
\]

wherein the symbols in formula (5) have the following meanings:
M is the mapping matrix, which records, for each corrected pixel coordinate (u, v), the corresponding sampling position in the distorted source image;
(u_0, v_0) are the x, y coordinates of the origin of the image coordinate system in the pixel coordinate system.
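A mapping matrix of this kind can be sketched for a tiny image, in the spirit of OpenCV's initUndistortRectifyMap. The sizes and parameter values below are illustrative assumptions, and only a single radial term is kept for brevity:

```python
import numpy as np

# Build map_x, map_y for a 4x4 image: for every corrected pixel (u, v),
# store where to sample in the distorted source image (formula (5)).

h, w = 4, 4
fx = fy = 100.0
u0 = v0 = 2.0
k1 = 0.05                                  # one radial coefficient for brevity

map_x = np.zeros((h, w))
map_y = np.zeros((h, w))
for v in range(h):
    for u in range(w):
        x = (u - u0) / fx                  # normalized corrected coordinates
        y = (v - v0) / fy
        r2 = x * x + y * y
        xd = x * (1.0 + k1 * r2)           # distorted normalized coordinates
        yd = y * (1.0 + k1 * r2)
        map_x[v, u] = fx * xd + u0         # back to pixel coordinates
        map_y[v, u] = fy * yd + v0
```

Sampling the source image at (map_x, map_y), with bilinear interpolation as in step 6, produces the distortion-corrected image.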
Step 6, distortion correction: a bilinear interpolation algorithm is introduced according to the mapping matrix calculated in step 5, and distortion correction is performed on the dot calibration board pictures taken by the monocular camera at multiple angles, obtaining the distortion-corrected dot calibration board pictures.
Bilinear interpolation can be regarded as two successive single linear interpolations. Suppose the value of an unknown function h is desired at the point P = (x, y), and the values of h at the four points Q_{11} = (x_1, y_1), Q_{21} = (x_2, y_1), Q_{12} = (x_1, y_2) and Q_{22} = (x_2, y_2) are known. In the most common case, h is the pixel value of a pixel.

First, linear interpolation is performed in the x direction to obtain:

\[
h(R_1) \approx \frac{x_2 - x}{x_2 - x_1}\, h(Q_{11}) + \frac{x - x_1}{x_2 - x_1}\, h(Q_{21})
\tag{6}
\]

\[
h(R_2) \approx \frac{x_2 - x}{x_2 - x_1}\, h(Q_{12}) + \frac{x - x_1}{x_2 - x_1}\, h(Q_{22})
\tag{7}
\]

Then, linear interpolation is performed in the y direction to obtain:

\[
h(P) \approx \frac{y_2 - y}{y_2 - y_1}\, h(R_1) + \frac{y - y_1}{y_2 - y_1}\, h(R_2)
\tag{8}
\]

where R_1 = (x, y_1) and R_2 = (x, y_2).

Combining formulas (6), (7) and (8), the final result of bilinear interpolation is obtained:

\[
h(x, y) \approx \frac{h(Q_{11})(x_2 - x)(y_2 - y) + h(Q_{21})(x - x_1)(y_2 - y) + h(Q_{12})(x_2 - x)(y - y_1) + h(Q_{22})(x - x_1)(y - y_1)}{(x_2 - x_1)(y_2 - y_1)}
\tag{9}
\]

Bilinear interpolation thus performs two single linear interpolations in the x direction, yielding the two points R_1 and R_2, followed by one in the y direction.
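Formula (9) transcribes directly into code. A minimal sketch (the function and argument names are assumptions for illustration):

```python
# Direct transcription of formula (9): bilinear interpolation of a value h
# known at the four corners of the cell [x1, x2] x [y1, y2].

def bilinear(x, y, x1, x2, y1, y2, h11, h21, h12, h22):
    """h11 = h(x1, y1), h21 = h(x2, y1), h12 = h(x1, y2), h22 = h(x2, y2)."""
    denom = (x2 - x1) * (y2 - y1)
    return (h11 * (x2 - x) * (y2 - y)
            + h21 * (x - x1) * (y2 - y)
            + h12 * (x2 - x) * (y - y1)
            + h22 * (x - x1) * (y - y1)) / denom
```

At a cell corner the result reduces to that corner's value, and at the cell center it is the average-weighted blend of all four, which is the behavior distortion correction relies on when the mapping matrix points between pixels.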
Step 7, calibrating the hand-eye affine matrix: hand-eye calibration is performed according to the physical coordinates of the end of the mechanical arm of the dispensing equipment recorded in step 1 and the two-dimensional coordinates of the corner points of the same region in the different dot calibration board pictures extracted in step 2, and the hand-eye affine matrix is obtained by calculation.
Assume that the physical coordinates of the end effector of the mechanical arm are (X, Y) and that the two-dimensional coordinates of the first-row, first-column corner point in the dot calibration board picture are (x, y). Then the hand-eye affine matrix is as follows:

\[
H = \begin{bmatrix} a & b & c \\ d & e & f \end{bmatrix}
\tag{10}
\]

wherein in formula (10), H is the transformation matrix from the camera to the end of the mechanical arm, namely the hand-eye affine matrix, and a, b, c, d, e, f are all unknown variables.

According to the operation (X, Y)^T = H (x, y, 1)^T, wherein (X, Y) are the physical coordinates of the end effector of the mechanical arm and (x, y) are the two-dimensional coordinates of the first-row, first-column corner point in the dot calibration plate picture, the following is obtained:

\[
\begin{bmatrix} X \\ Y \end{bmatrix}
= \begin{bmatrix} a & b & c \\ d & e & f \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
\tag{11}
\]

wherein in formula (11), (X, Y) are the first two items of the physical coordinates of the end effector, and (x, y) are the first two items of the two-dimensional coordinates of the first-row, first-column corner point in the dot calibration plate picture.

After unfolding, the following is obtained:

\[
\begin{cases}
X = a x + b y + c \\
Y = d x + e y + f
\end{cases}
\tag{12}
\]
the transformation equation of the physical coordinates of the mechanical arm end effector corresponding to the n pictures and the two-dimensional coordinates of the first row and the first column of corner points in the picture of the dot calibration board is as follows:
Figure 883310DEST_PATH_IMAGE049
(13)
wherein, the meaning of each symbol in the formula (13) is specifically as follows:
Figure 278519DEST_PATH_IMAGE050
the first two items of physical coordinates of the mechanical arm end effector corresponding to the 1 st picture;
Figure 92891DEST_PATH_IMAGE051
the first two physical coordinates of the mechanical arm end effector corresponding to the 2 nd picture;
Figure 446512DEST_PATH_IMAGE052
the first two items of physical coordinates of the mechanical arm end effector corresponding to the 3 rd picture;
Figure 713545DEST_PATH_IMAGE053
the first two items of physical coordinates of the mechanical arm end effector corresponding to the 4 th picture;
Figure 178025DEST_PATH_IMAGE054
the first two items of physical coordinates of the mechanical arm end effector corresponding to the 5 th picture;
Figure 581324DEST_PATH_IMAGE055
the first two physical coordinates of the mechanical arm end effector corresponding to the 6 th picture;
Figure 105847DEST_PATH_IMAGE056
the first two items of physical coordinates of the mechanical arm end effector corresponding to the nth picture are shown, wherein n is a positive integer greater than or equal to 6;
Figure 125755DEST_PATH_IMAGE057
the two items are the first two items of two-dimensional coordinates of a first row and a first column of corner points in a 1 st round dot calibration board picture;
Figure 128346DEST_PATH_IMAGE058
the first two items of two-dimensional coordinates of a first row and a first column of corner points in a picture of a 2 nd round dot calibration board;
Figure 651731DEST_PATH_IMAGE059
the first two items of two-dimensional coordinates of a first row and a first column of corner points in a 3 rd round dot calibration board picture;
Figure 347155DEST_PATH_IMAGE060
the first two items of two-dimensional coordinates of a first row and a first column of corner points in a 4 th round dot calibration board picture;
Figure 588780DEST_PATH_IMAGE061
the first two items of two-dimensional coordinates of a first row and a first column of corner points in a 5 th dot calibration board picture;
Figure 129483DEST_PATH_IMAGE062
the first two items of two-dimensional coordinates of a first row and a first column of corner points in a 6 th round dot calibration board picture;
Figure 38533DEST_PATH_IMAGE063
is the first two items of two-dimensional coordinates of the corner points in the first row and the first column in the nth dot calibration board pictureWherein n is a positive integer greater than or equal to 6;
To obtain a, b, c, d, e, f from this over-determined system of equations, a least squares method based on a matrix solution can be used to calculate the result; the calculation formula of the least squares method is:

\[
J = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2
\tag{14}
\]

wherein the symbols in formula (14) have the following meanings:
J is the objective function to be solved;
n is the number of groups of observed data;
y_i is the value of the i-th group of observed data, with i ≤ n;
\hat{y}_i is the observed estimate, which can be understood as the ideal value of y_i.
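The least squares solution of the over-determined system (13) can be sketched with a standard matrix solver. The function name and the synthetic point pairs below are assumptions for illustration, generated from a known affine map:

```python
import numpy as np

# Recover the hand-eye affine parameters a..f from n >= 6 pairs of corner
# pixel coordinates and end-effector physical coordinates (system (13)).

def fit_hand_eye_affine(pix, phys):
    """pix: (n, 2) corner pixel coords; phys: (n, 2) arm coords. Returns 2x3 H."""
    n = pix.shape[0]
    A = np.hstack([pix, np.ones((n, 1))])     # rows [x_i, y_i, 1]
    # least-squares solve A @ P = phys, with P holding [[a, d], [b, e], [c, f]]
    P, *_ = np.linalg.lstsq(A, phys, rcond=None)
    return P.T                                 # 2x3 matrix [[a, b, c], [d, e, f]]

# synthetic ground truth: X = 0.02 x + 1.0, Y = -0.02 y + 2.0
pix = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 25], [25, 75]],
               dtype=float)
phys = np.column_stack([0.02 * pix[:, 0] + 1.0, -0.02 * pix[:, 1] + 2.0])
H = fit_hand_eye_affine(pix, phys)
```

With noise-free synthetic data the recovered H matches the generating affine map exactly; with real measurements the residual of the fit plays the role of the objective J in formula (14).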
Step 8, dispensing and positioning operation: according to the distortion correction of step 6 and the hand-eye affine matrix calculated in step 7, the image coordinates of the points to be glued, captured and extracted during the dispensing process, are converted into more accurate physical coordinates of the end of the mechanical arm, realizing high-precision dispensing operation.
The method relies on OpenCV and reduces the cost of use while ensuring precision and stability, solving the problems that the calibration based on the commercial Halcon vision library commonly used in dispensing equipment is expensive, that its underlying code is closed, and that algorithm problems are difficult for a vision engineer to locate and solve quickly.
The invention is based on a dot calibration plate and introduces tangential distortion coefficients, solving the problems that the traditional Zhang Zhengyou calibration method cannot meet the precision requirements of dispensing equipment and differs markedly from calibration results based on the commercial Halcon vision library.
The distortion correction method based on the mapping matrix and bilinear interpolation addresses the facts that product images taken by the monocular camera in dispensing equipment exhibit a certain distortion, that even the small distortion of the monocular camera lens still affects the dispensing positioning of the product, and that traditional distortion correction based on the mapping matrix alone or on interpolation alone cannot achieve high-precision correction of the image.
Examples
For convenience of explaining the process of the invention, calibration uses a standard dot calibration plate of 7 × 7 dots (49 dots arranged in a matrix on the plate, i.e. 7 dots in each row and 7 in each column), with a dot spacing of 2.5 mm and a dot diameter of 1.25 mm. Referring to figs. 2-5, the plate is marked HC025 and 1.25, where HC denotes the model of the dot calibration plate, 025 denotes its size of 25 mm × 25 mm, and 1.25 denotes the 1.25 mm diameter of the dots. Fig. 4 is a photograph of the dot calibration plate before distortion correction, showing mainly barrel and pincushion distortion, with a dot spacing of 2.4 mm around the edge of the plate. Fig. 5 is a photograph of the dot calibration plate after distortion correction; the corrected dot spacing is 2.45 mm.
The specific operation flow of this embodiment is as follows:
In the first step, the dot calibration plate is fixed, 9 pictures of the dot calibration plate are shot from multiple angles by the monocular camera and stored in a pics folder, and the physical coordinates of the robot-arm end of the dispensing equipment corresponding to each shot picture are recorded and saved in a csv-format file.
Secondly, operations such as gray-scaling and format conversion are performed on the 9 dot calibration plate pictures in the pics folder to generate gray images, as shown in fig. 2.
Thirdly, a blob detection algorithm is applied to the processed images: the gray image is binarized over a set of successive thresholds within a configured threshold range, with a step size of 10. Color filtering and area filtering are set: the color of the detected blob is set to black, and a threshold range is set for the area of the detected blob region. The blob detection algorithm binarizes each two-dimensional dot calibration plate picture to be detected according to the set step size, color, and area thresholds to obtain a series of binary images.
Fourth, after the binary images are obtained, the edges in each binary image are first searched and their centers calculated; the obtained center points are then clustered into blocks with a minimum distance of 15 pixels, and the freshly clustered center points are screened to obtain the points satisfying the distance, which are put into a set, each set corresponding to the features of one blob; the radii of these feature points are then estimated (automatically, by a function in the open-source vision library OpenCV). The principle of radius estimation is as follows: after gray processing and binarization, the areas of the feature points in the dot calibration plate picture are not uniform, so the radius corresponding to a medium area among the feature points is selected as the final radius of the feature point.
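The center-point clustering with a 15-pixel minimum distance described in this step can be illustrated with a simple greedy merge. This is a sketch of the idea only (the function name is hypothetical), not OpenCV's internal implementation:

```python
import numpy as np

def cluster_centers(points, min_dist=15.0):
    """Greedy clustering of detected blob centers: a center closer than
    min_dist pixels to an existing cluster's mean joins that cluster,
    otherwise it starts a new one; each cluster is finally reduced to its
    mean point, corresponding to one feature blob."""
    clusters = []
    for p in np.asarray(points, dtype=float):
        for c in clusters:
            if np.linalg.norm(np.mean(c, axis=0) - p) < min_dist:
                c.append(p)
                break
        else:
            clusters.append([p])
    return [tuple(np.mean(c, axis=0)) for c in clusters]
```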
Fifth, all the classified feature points are screened by the set color filtering and area filtering to obtain the finally required feature points; pixel extraction is performed on these feature points to obtain the two-dimensional corner coordinates, which are marked in the dot calibration plate picture, as shown in fig. 3.
Sixth, the origin of the world coordinate system is set at the center of the dot in the first row and first column of the dot calibration plate picture, and the three-dimensional corner coordinates are calculated from the 2.5 mm dot spacing and the 7 × 7 rows and columns of dots (the three-dimensional coordinates change with the chosen world-coordinate origin; with the origin set at the center of the first-row, first-column dot, the three-dimensional corner coordinates follow from the dot spacing and the numbers of dot rows and columns in the picture).
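For the 7 × 7 plate with a 2.5 mm dot pitch, the three-dimensional corner coordinates of this step can be generated as follows (a sketch; the function name is illustrative):

```python
import numpy as np

def make_object_points(rows=7, cols=7, spacing=2.5):
    """Three-dimensional corner coordinates for the dot calibration plate:
    world origin at the center of the first-row, first-column dot, Z = 0 on
    the plate, units in mm (2.5 mm dot pitch in the embodiment)."""
    return np.array([[c * spacing, r * spacing, 0.0]
                     for r in range(rows) for c in range(cols)])
```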
Seventh, distortion coefficients (k1, k2, k3, p1, p2, where k1, k2, k3 are the radial distortion coefficients and p1, p2 are the tangential distortion coefficients) are introduced according to the two-dimensional and three-dimensional coordinates of the corner points; one radial distortion coefficient is kept unchanged during the optimization process (it is in fact an estimated value computed during the optimization, and the closer the estimate, the better the calibration accuracy). The monocular camera intrinsic matrix cameraMatrix, the monocular camera distortion coefficients distCoeffs, the rotation matrix rvecMat from the dot calibration plate picture to the monocular camera, and the translation vector tvecMat from the dot calibration plate picture to the monocular camera are obtained by calculation.
Eighth, the mapping matrices mapx and mapy are calculated from the monocular camera intrinsic matrix and the monocular camera distortion coefficients distCoeffs, and distortion correction is then performed on the pre-correction image imageSource in combination with bilinear interpolation INTER_LINEAR, as shown in fig. 4, to obtain the distortion-corrected image newimage, as shown in fig. 5. It should be noted that although distortion correction can be performed with the mapping matrices alone, the accuracy is insufficient. To improve the accuracy of distortion correction, more accurate pixel values must be obtained: pixel coordinate values exist as integers, while the mapped (distorted) pixel positions of a normal image are generally fractional, so the pixels of the images before and after correction cannot correspond one-to-one; integer pixel values therefore have to be computed by the bilinear interpolation method.
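The combination of the mapping matrices with bilinear interpolation is what cv2.remap(src, mapx, mapy, cv2.INTER_LINEAR) performs; the sampling step itself can be sketched in plain NumPy for a single-channel image (function name illustrative):

```python
import numpy as np

def remap_bilinear(src, mapx, mapy):
    """Undistort by remapping: for each destination pixel, mapx/mapy give
    the (generally fractional) source coordinates, and the output value is
    the bilinear interpolation of the four surrounding integer pixels."""
    h, w = src.shape
    x0 = np.clip(np.floor(mapx).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(mapy).astype(int), 0, h - 2)
    dx = np.clip(mapx - x0, 0.0, 1.0)   # fractional offsets in [0, 1]
    dy = np.clip(mapy - y0, 0.0, 1.0)
    top = src[y0, x0] * (1 - dx) + src[y0, x0 + 1] * dx
    bot = src[y0 + 1, x0] * (1 - dx) + src[y0 + 1, x0 + 1] * dx
    return top * (1 - dy) + bot * dy
```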
Step nine, the variable name of the two-dimensional coordinates of the first-row, first-column corner point in each dot calibration plate picture is set as points_camera, and the variable name of the physical coordinates of the robot-arm end as points_robot; the two-dimensional coordinates of the first-row, first-column corner points in the 9 pictures are stored into points_camera[i] (i ≤ 9, i ∈ N, N being the set of non-negative integers), the physical coordinates of the robot-arm end in the csv file are stored into points_robot[j] (j ≤ 9, j ∈ N), and the hand-eye affine matrix is obtained by least-squares calculation according to the nine-point calibration algorithm.
The transformation equation of the physical coordinates of the mechanical arm end effector corresponding to the 9 pictures and the two-dimensional coordinates of the first row and the first column of corner points in the picture of the dot calibration board is as follows:
Figure 73671DEST_PATH_IMAGE077
wherein, the meaning of each symbol in the above formula is specifically as follows:
Figure 563559DEST_PATH_IMAGE078
the first two items of physical coordinates of the mechanical arm end effector corresponding to the 7 th picture;
Figure 890635DEST_PATH_IMAGE079
the first two items of physical coordinates of the mechanical arm end effector corresponding to the 8 th picture;
Figure 440565DEST_PATH_IMAGE080
the first two physical coordinates of the mechanical arm end effector corresponding to the 9 th picture;
Figure 118671DEST_PATH_IMAGE081
the first two items of two-dimensional coordinates of the corner points in the first row and the first column in the 7 th round dot calibration board picture;
Figure 146670DEST_PATH_IMAGE082
the first two items of two-dimensional coordinates of a first row and a first column of corner points in an 8 th round dot calibration board picture;
Figure 328252DEST_PATH_IMAGE083
the first two items of two-dimensional coordinates of a first row and a first column of corner points in a 9 th dot calibration board picture;
If the 6 unknown elements of the hand-eye affine matrix are to be obtained, 6 equations are needed, while the current 9 pictures can list 9 equations; the number of equations is larger than the number of unknowns, and for such an overdetermined equation set the result can be calculated by the matrix-based least squares method, that is, the calculation formula of the least squares method is:
S = min Σ_{i=1..9} (y_i − ŷ_i)²
wherein, the meaning of each symbol in the above formula is specifically as follows:
S is the objective function to be solved;
y_i is the value of the i-th group of observation data, and i ≤ 9;
ŷ_i is the observed estimate, which can be understood as the ideal value of y_i.
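The least-squares solution of the overdetermined nine-point system described above can be sketched with NumPy, assuming the 2 × 3 affine model (function names illustrative):

```python
import numpy as np

def nine_point_calibration(points_camera, points_robot):
    """Least-squares estimate of the hand-eye affine matrix from N >= 3
    pixel/robot coordinate pairs (nine in the embodiment).  Solves the
    overdetermined system [u, v, 1] · X = [x_r, y_r] and returns the
    2 x 3 matrix [[a, b, c], [d, e, f]]."""
    cam = np.asarray(points_camera, dtype=np.float64)
    rob = np.asarray(points_robot, dtype=np.float64)
    A = np.hstack([cam, np.ones((len(cam), 1))])  # rows: [u, v, 1]
    X, *_ = np.linalg.lstsq(A, rob, rcond=None)   # 3 x 2 solution
    return X.T

def pixel_to_robot(M, uv):
    """Map an image coordinate (u, v) to robot-end physical coordinates."""
    u, v = uv
    return (M[0, 0] * u + M[0, 1] * v + M[0, 2],
            M[1, 0] * u + M[1, 1] * v + M[1, 2])
```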
Step ten, according to the distortion correction and the hand-eye affine matrix, the host-computer software converts the image coordinates of the points to be glued extracted by the monocular camera in actual operation into more accurate physical coordinates of the robot-arm end, and the mechanical arm moves to the points to be glued for dispensing.
The invention is applicable to various industries requiring machine-vision identification and positioning, such as the dispensing industry.
Based on the open-source vision library OpenCV, the invention calibrates the internal and external parameters of the monocular camera by an improved Zhang Zhengyou calibration method: a blob detection algorithm is applied to the dot calibration plate, each image is binarized with a set step size and threshold, interference blobs are removed through the black-color detection limit and the blob area, and the feature blobs are screened to obtain the two-dimensional coordinates of the corner points in the dot calibration plate; tangential distortion coefficients are introduced while a radial distortion coefficient is kept fixed. This improves the calibration accuracy of the internal and external parameters of the monocular camera in dispensing equipment over the traditional Zhang Zhengyou calibration method.
Based on the open-source vision library OpenCV, the invention calculates the mapping matrices mapx and mapy from the monocular camera intrinsic matrix cameraMatrix and the monocular camera distortion coefficients distCoeffs, and performs distortion correction on the images in combination with the bilinear interpolation INTER_LINEAR algorithm, improving the distortion-correction accuracy of the pictures of products to be glued shot by the monocular camera in the dispensing equipment.
According to the invention, the conversion from image coordinates to physical coordinates of the robot-arm end is realized according to the distortion correction and the hand-eye affine matrix, guiding the mechanical arm to move to the point to be glued for the dispensing operation.
The above description is only a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto; any equivalent substitution or change made, within the technical scope disclosed by the present invention, by a person skilled in the art according to the technical solutions and the inventive concept of the present invention shall fall within the scope of protection of the present invention.

Claims (7)

1. A calibration method based on a monocular vision dispensing platform depends on an open source vision library OpenCV, and is characterized by comprising the following steps:
step 1, collecting images, and preprocessing the images: shooting a plurality of round dot calibration board pictures at multiple angles by using a monocular camera, recording physical coordinates of the tail ends of mechanical arms of dispensing equipment corresponding to the round dot calibration board pictures, respectively converting the shot round dot calibration board pictures into gray images, and preprocessing the gray images;
step 2, extracting two-dimensional coordinates of the corner points: based on the step 1, respectively extracting angular point two-dimensional coordinates in a plurality of round dot calibration board pictures; in the step 2, the process of extracting the two-dimensional coordinates of the corner points in the picture of the dot calibration plate is as follows: denoising interference spots through black detection limitation and spot areas by adopting a spot detection algorithm on a picture of the dot calibration board, and screening characteristic spots to obtain two-dimensional coordinates of corner points in the picture of the dot calibration board; in the step 2, the number of the angular points on each dot calibration board picture is the same in a plurality of dot calibration board pictures;
step 3, setting a world coordinate system, and extracting angular point three-dimensional coordinates: setting an origin of a world coordinate system, and calculating three-dimensional coordinates of an angular point according to dot intervals and dot rows and columns in a standard dot calibration board;
step 4, calibrating internal and external parameters of the monocular camera: calibrating internal and external parameters of the monocular camera according to the two-dimensional coordinates of the corner points obtained in the step 2 and the three-dimensional coordinates of the corner points obtained in the step 3 to obtain an internal parameter matrix of the monocular camera and distortion coefficients of the monocular camera; in the step 4, the distortion coefficients of the monocular camera include tangential distortion coefficients and radial distortion coefficients;
the distortion coefficient of the monocular camera is calculated according to the following formula:
three distortion coefficients k1, k2, k3 for radial distortion are calculated using the following formula:

x = x′ · (1 + k1·r² + k2·r⁴ + k3·r⁶)
y = y′ · (1 + k1·r² + k2·r⁴ + k3·r⁶)    (2)

wherein, the meaning of each symbol in the formula (2) is specifically as follows:
(x, y) are the pixel coordinates before distortion correction;
(x′, y′) are the pixel coordinates after distortion correction;
k1, k2, k3 are all radial distortion coefficients;
r is the distance from the pixel coordinate after distortion correction to the origin of coordinates, with r² = x′² + y′²;
two distortion coefficients p1, p2 for tangential distortion are calculated using the following formula:

x = x′ + 2·p1·x′·y′ + p2·(r² + 2·x′²)
y = y′ + p1·(r² + 2·y′²) + 2·p2·x′·y′    (3)

wherein, the meaning of each symbol in the formula (3) is specifically as follows:
p1, p2 are all tangential distortion coefficients;
step 5, mapping matrix: calculating a mapping matrix according to the internal reference matrix of the monocular camera and the distortion coefficient of the monocular camera obtained in the step 4;
combining the formula (2) and the formula (3) yields the following formula:

x = x′·(1 + k1·r² + k2·r⁴ + k3·r⁶) + 2·p1·x′·y′ + p2·(r² + 2·x′²)
y = y′·(1 + k1·r² + k2·r⁴ + k3·r⁶) + p1·(r² + 2·y′²) + 2·p2·x′·y′    (4)

so the mapping matrices (mapx, mapy) are calculated as follows:

mapx = fx·x + cx
mapy = fy·y + cy    (5)

wherein, the meaning of each symbol in the formula (5) is specifically as follows:
mapx, mapy are the mapping matrices;
fx, fy are the focal lengths of the monocular camera in pixel units, taken from the intrinsic matrix;
(cx, cy) are the x, y coordinates of the origin of the image coordinate system under the pixel coordinate system;
step 6, distortion correction: introducing a bilinear interpolation algorithm according to the mapping matrix obtained by calculation in the step 5, and carrying out distortion correction on a plurality of dot calibration board pictures shot by a monocular camera at multiple angles to obtain the dot calibration board pictures after distortion correction;
7, calibrating a hand-eye affine matrix: performing hand-eye calibration according to the physical coordinates of the tail end of the mechanical arm of the dispensing equipment in the step 1 and the two-dimensional coordinates of the corner points of the same region in the different round dot calibration board pictures extracted in the step 2, and calculating to obtain a hand-eye affine matrix;
step 8, dispensing and positioning operation: and (4) converting the image coordinates of the point to be glued, which are shot and extracted in the glue dispensing process of the glue dispensing equipment, into more accurate physical coordinates of the tail end of the mechanical arm according to the distortion correction in the step 6 and the hand-eye affine matrix obtained by calculation in the step 7, so as to realize high-precision glue dispensing operation.
2. The calibration method based on the monocular vision dispensing platform as recited in claim 1, wherein: in the step 1, the gray level image preprocessing comprises image binarization, color filtering, area filtering and center point clustering.
3. The calibration method based on the monocular vision dispensing platform as recited in claim 2, wherein: and carrying out binarization on each image by setting step length and threshold value.
4. The calibration method based on the monocular vision dispensing platform as recited in claim 1, wherein: in the step 4, according to the two-dimensional coordinates of the corner points obtained in the step 2 and the three-dimensional coordinates of the corner points obtained in the step 3, the inside and outside parameters of the monocular camera are calibrated, and a rotation matrix from the picture of the dot calibration plate to the monocular camera and a translation vector from the picture of the dot calibration plate to the monocular camera are also obtained.
5. The calibration method based on the monocular vision dispensing platform as recited in claim 1, wherein: in the step 1, the number of the round dot calibration board pictures shot by the monocular camera in multiple angles is at least 6.
6. The calibration method based on the monocular vision dispensing platform according to claim 1, characterized in that: in the 3 rd step, the dot pitch in the standard dot calibration plate is 2.5mm.
7. The calibration method based on the monocular vision dispensing platform as recited in claim 1, wherein: in the step 3, the number of dot rows in the standard dot calibration plate is 7, and the number of dot columns is 7.
CN202211050819.XA 2022-08-30 2022-08-30 Calibration method based on monocular vision dispensing platform Active CN115131444B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211050819.XA CN115131444B (en) 2022-08-30 2022-08-30 Calibration method based on monocular vision dispensing platform


Publications (2)

Publication Number Publication Date
CN115131444A CN115131444A (en) 2022-09-30
CN115131444B true CN115131444B (en) 2022-11-15

Family

ID=83387606

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211050819.XA Active CN115131444B (en) 2022-08-30 2022-08-30 Calibration method based on monocular vision dispensing platform

Country Status (1)

Country Link
CN (1) CN115131444B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115930784B (en) * 2023-01-09 2023-08-25 广州市易鸿智能装备有限公司 Point inspection method of visual inspection system
CN115775279B (en) * 2023-02-13 2023-05-16 苏州希盟科技股份有限公司 Dispensing positioning method and device and electronic equipment
CN115830147B (en) * 2023-02-20 2023-04-25 常州铭赛机器人科技股份有限公司 Pad printing dispensing rotation center calibration method based on monocular vision
CN116001438B (en) * 2023-02-20 2023-06-16 常州铭赛机器人科技股份有限公司 Visual calibration device based on movable seal
CN116030202B (en) * 2023-03-29 2023-08-01 四川弘和数智集团有限公司 Three-dimensional image reconstruction method and device, electronic equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3009862A2 (en) * 2014-10-16 2016-04-20 Uniwersystet Slaski w Katowicach Method of determining distortion and/or distortion correction for projection image obtained during computer tomography routine
CN106485757A (en) * 2016-10-13 2017-03-08 哈尔滨工业大学 A kind of Camera Calibration of Stereo Vision System platform based on filled circles scaling board and scaling method
CN108470361A (en) * 2017-02-23 2018-08-31 南宁市富久信息技术有限公司 A kind of angle point automatic identification camera calibration method
CN109285194A (en) * 2018-09-29 2019-01-29 人加智能机器人技术(北京)有限公司 Camera calibration plate and camera calibration collecting method
CN112465912A (en) * 2020-11-18 2021-03-09 新拓三维技术(深圳)有限公司 Three-dimensional camera calibration method and device
CN113253246A (en) * 2021-06-01 2021-08-13 奥特酷智能科技(南京)有限公司 Calibration method for laser radar and camera
CN113506349A (en) * 2021-07-19 2021-10-15 江苏天楹机器人智能科技有限公司 High-precision hand-eye calibration method for garbage sorting robot
CN114332249A (en) * 2022-03-17 2022-04-12 常州铭赛机器人科技股份有限公司 Camera vision internal segmentation type hand-eye calibration method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108876859B (en) * 2018-04-28 2022-06-07 苏州赛腾精密电子股份有限公司 Calibration method, device, equipment and medium of dispenser
CN108965742B (en) * 2018-08-14 2021-01-22 京东方科技集团股份有限公司 Special-shaped screen display method and device, electronic equipment and computer readable storage medium
CN110599548A (en) * 2019-09-02 2019-12-20 Oppo广东移动通信有限公司 Camera calibration method and device, camera and computer readable storage medium


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
A camera calibration technique using targets of circular features;Mateos G;《Proceedings of V Ibero-American Simposium on Pattern Recognition》;20001231;第1-12页 *
Calibration and correction of lens distortion for two-dimensional digital speckle correlation measurement;Jian Zhao等;《Optik》;20131231;第124卷(第23期);第6042-6047页 *
OpenCV 耦合改进张正友算法的相机标定算法;李莉;《轻工机械》;20150831;第33卷(第4期);第60-63+68页 *
基于机器视觉的工业机器人抓取技术研究;林龙彬;《中国优秀硕士学位论文全文数据库 信息科技辑》;20190115;第I138-2927页 *
线结构光检测系统的摄像机镜头畸变校正及标定;汪洋等;《大连海事大学学报》;20111231(第4期);第67-70页 *
视觉点胶机的摄像机标定技术;郑剑斌等;《科技创新与应用》;20171231(第34期);第9-13页 *

Also Published As

Publication number Publication date
CN115131444A (en) 2022-09-30

Similar Documents

Publication Publication Date Title
CN115131444B (en) Calibration method based on monocular vision dispensing platform
CN110555889B (en) CALTag and point cloud information-based depth camera hand-eye calibration method
CN112017225B (en) Depth image matching method based on point cloud registration
CN109118473B (en) Angular point detection method based on neural network, storage medium and image processing system
CN111775152A (en) Method and system for guiding mechanical arm to grab scattered stacked workpieces based on three-dimensional measurement
CN111429533B (en) Camera lens distortion parameter estimation device and method
CN113344931B (en) Plug-in visual detection and identification method, readable storage medium and device
CN108007388A (en) A kind of turntable angle high precision online measuring method based on machine vision
CN112132907B (en) Camera calibration method and device, electronic equipment and storage medium
CN113269762B (en) Screen defect detection method, system and computer storage medium
CN110889829A (en) Monocular distance measurement method based on fisheye lens
CN112907683B (en) Camera calibration method and device for dispensing platform and related equipment
CN109544643A (en) A kind of camera review bearing calibration and device
CN110310305B (en) Target tracking method and device based on BSSD detection and Kalman filtering
CN112161586A (en) Line structured light vision sensor calibration method based on coding checkerboard
CN112381751A (en) Online intelligent detection system and method based on image processing algorithm
CN112184723B (en) Image processing method and device, electronic equipment and storage medium
CN116117800B (en) Machine vision processing method for compensating height difference, electronic device and storage medium
CN115112098A (en) Monocular vision one-dimensional two-dimensional measurement method
CN113222990B (en) Chip counting method based on image data enhancement
CN112819823A (en) Furniture board-oriented circular hole detection method, system and device
CN112935562A (en) Laser precision machining method based on paraxial offline measurement
CN113048899A (en) Thickness measuring method and system based on line structured light
CN111062907A (en) Homography transformation method based on geometric transformation
CN111178111A (en) Two-dimensional code detection method, electronic device, storage medium and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant