CN112348756A - Image distortion correction method - Google Patents

Info

Publication number
CN112348756A
Authority
CN
China
Prior art keywords
calculating
picture
image
distortion
calibration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011216165.4A
Other languages
Chinese (zh)
Inventor
田洪正
武亚飞
张永鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Jieenshi Intelligent Technology Co ltd
Original Assignee
Shenzhen Jieenshi Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Jieenshi Intelligent Technology Co ltd filed Critical Shenzhen Jieenshi Intelligent Technology Co ltd
Priority to CN202011216165.4A
Publication of CN112348756A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/80 Geometric correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image distortion correction method, which comprises the following steps: step S1: calculating the sub-pixel center coordinates of the feature circles in the dot matrix of the calibration picture; step S2: calculating a primary distortion correction mapping table of the image; step S3: calculating a picture grid mapping table; step S4: calculating the undistorted picture. The method is fast to calibrate, achieves high dimensional accuracy, is robust to environmental interference in the camera's working environment, and is well suited to automated measurement in industrial automation.

Description

Image distortion correction method
Technical Field
The invention relates to the field of machine vision and automation, in particular to the field of image distortion correction.
Background
With the development of society, the economy and industry, a wave of industrial automation has arrived: automatic control, automatic adjustment devices and automatic measuring equipment are widely adopted in industrial manufacturing and production to replace manual operation of machines and machine systems. Automation improves the safety of the production process, raises production efficiency and product quality, and reduces the consumption of raw materials and energy during production.
Machine vision plays a crucial role in the development of industrial automation: if automation equipment and machines are compared with the human body, hands and feet, and software programs with the human brain, then machine vision is undoubtedly the equivalent of human eyes. Machine vision is now widely used in industrial manufacturing and production for fitting alignment, dimension measurement, appearance defect detection, 3D modeling, mechanical grasping and similar processes. On the one hand it greatly improves working efficiency: manually measuring a workpiece with many dimensions can easily take several minutes, whereas a machine vision measurement system can complete the measurement within one second and immediately judge whether the part is qualified. On the other hand it reduces labor cost.
However, because of manufacturing errors, most cameras and lenses produce pictures that differ noticeably from the photographed objects; this difference is called distortion. Distortion seriously affects the accuracy and precision of machine vision. In the field of high-precision measurement in particular, an error of a single pixel can cause measurement errors that lead to large amounts of waste or other problems. A method that can quickly and accurately calibrate and correct image distortion is therefore needed to remedy this shortcoming of machine vision in industrial automation.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an image distortion correction method that is fast to calibrate, achieves high dimensional accuracy, is robust to environmental interference in the camera's working environment, and is well suited to industrial automated measurement.
The image distortion correction method is realized by the following technical scheme. Step S1: calculating the sub-pixel center coordinates of the feature circles in the dot matrix of the calibration picture; the sub-pixel center coordinates of the feature circles in the circular dot matrix of the calibration picture are calculated by performing contour analysis and block analysis on the calibration picture.
Step S2: calculating a primary distortion correction mapping table of the image; a mapping table of corresponding pixel positions between the original calibration picture and the real picture is calculated from the mapping relation between the sub-pixel circle center coordinates and the circle center positions of the actual calibration plate.
Step S3: calculating a picture grid mapping table; the primarily undistorted calibration picture is calculated using the primary distortion mapping table, and the grid mapping table is then calculated from the primarily undistorted calibration picture.
Step S4: calculating the undistorted picture; other pictures taken by the same camera and lens under the same conditions are mapped twice in sequence, using the primary distortion correction mapping table and then the picture grid mapping table, with linear interpolation, to obtain an undistorted picture of high dimensional accuracy.
As a further improvement of the above technical solution, in step S1, calculating the sub-pixel center coordinates of the feature circles in the circular dot matrix of the calibration picture by performing contour analysis and block analysis on the calibration picture comprises:
step S1.1: carrying out statistical analysis on the pixel values of the image to calculate a value as a binarization threshold value of the image, and carrying out binarization processing on the image by using the threshold value to obtain a binarization image;
step S1.2: calculating all contours in the image, and then sequentially filtering the contours by using the number of contour pixel points, the roundness of the contours and the contour area;
step S1.3: calculating the area of the residual contour, sequencing according to the contour area, calculating the approximate radius of the circle in the circular lattice, and filtering the contour again through the radius;
step S1.4: calculating the central points of the remaining contours, fitting an optimal matrix, and excluding the contours corresponding to the points deviating from the matrix to exceed a certain threshold;
step S1.5: and analyzing the block of the original image to obtain the coordinate of the central point of the block, solving the position relation between the coordinate of the central point and the outline, wherein the central point within the outline is the center coordinate of the sub-pixel of the characteristic circle in the solved circular lattice.
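As an illustration of how steps S1.1 to S1.5 can be realized, the following is a minimal Python/OpenCV sketch, not the patented implementation as filed: Otsu's method stands in for the histogram-based threshold of step S1.1, the regular-array fit of step S1.4 is omitted, block analysis is approximated with connected components, and the function name and filter limits are illustrative assumptions.

```python
# A minimal sketch of step S1 (contour + block analysis), assuming an 8-bit
# grayscale calibration image with dark feature circles on a light background.
import cv2
import numpy as np

def find_circle_centers(gray, min_pts=10, min_area=50, max_area=50000,
                        round_lo=0.7, round_hi=1.3):
    # S1.1: binarize (inverted so the dark circles become foreground);
    # Otsu stands in for the histogram-derived threshold of the embodiment
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # S1.2/S1.3: extract contours (OpenCV 4 return signature) and filter
    # by pixel count, roundness and area
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    kept = []
    for c in contours:
        area = cv2.contourArea(c)
        if len(c) < min_pts or not (min_area <= area <= max_area):
            continue
        (_, _), r = cv2.minEnclosingCircle(c)
        roundness = area / (np.pi * r * r + 1e-9)   # contour area vs. enclosing circle
        if round_lo <= roundness <= round_hi:
            kept.append(c)

    # S1.5: block (connected-component) analysis gives sub-pixel centroids;
    # keep only centroids that fall inside one of the surviving contours
    _, _, _, centroids = cv2.connectedComponentsWithStats(binary)
    centers = []
    for cx, cy in centroids[1:]:                    # index 0 is the background
        if any(cv2.pointPolygonTest(c, (float(cx), float(cy)), False) >= 0
               for c in kept):
            centers.append((cx, cy))
    return np.array(centers, dtype=np.float32)
```

In practice the returned centers would still have to be sorted into the row-by-row order of the 41 × 41 dot array before they can be used in step S2.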
As a further improvement of the above technical solution, in step S2, calculating the position correspondence mapping table between the original picture and the corrected picture from the calculated sub-pixel circle center coordinates and the circle center positions of the actual calibration plate comprises:
step S2.1: assigning a corresponding physical coordinate to each circle center using the actual spacing between adjacent circle centers on the calibration plate and the positional relation of the centers;
step S2.2: calculating the camera intrinsic matrix and distortion coefficients from the calculated sub-pixel circle center coordinates and the physical coordinates of the calibration plate;
step S2.3: calculating a primary distortion mapping matrix from the camera intrinsic matrix and the distortion coefficients;
step S2.4: computing the primarily undistorted calibration picture from the primary distortion mapping matrix and the input calibration image.
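A minimal sketch of steps S2.1 to S2.4 using OpenCV's standard planar calibration routines is given below; `img_pts` (the sub-pixel centers from step S1, already ordered row by row), the grid size, the pitch and the image size are assumptions for illustration, not values prescribed by the invention.

```python
# A minimal sketch of step S2, assuming img_pts is an (N, 2) float array of
# sub-pixel circle centers ordered row by row to match the calibration plate.
import cv2
import numpy as np

def primary_undistort_map(img_pts, grid=(41, 41), pitch_mm=3.0,
                          img_size=(2448, 2048)):
    # S2.1: assign a physical coordinate to every circle center (planar board, Z = 0)
    ys, xs = np.mgrid[0:grid[1], 0:grid[0]]
    obj_pts = np.stack([xs * pitch_mm, ys * pitch_mm, np.zeros_like(xs)],
                       axis=-1).reshape(-1, 3).astype(np.float32)

    # S2.2: intrinsic matrix K and distortion coefficients
    # (a single planar view is enough to run the solver; more views improve it)
    _, K, dist, _, _ = cv2.calibrateCamera(
        [obj_pts], [img_pts.reshape(-1, 1, 2).astype(np.float32)],
        img_size, None, None)

    # S2.3: primary distortion-correction mapping table (per-pixel x/y lookup)
    map_x, map_y = cv2.initUndistortRectifyMap(
        K, dist, None, K, img_size, cv2.CV_32FC1)
    return map_x, map_y, K, dist
```

Applying `cv2.remap(calib_img, map_x, map_y, cv2.INTER_LINEAR)` to the calibration image then yields the primarily undistorted calibration picture of step S2.4.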
As a further improvement of the above technical solution, in step S3, calculating the primarily undistorted calibration picture using the primary distortion mapping table and then calculating a grid mapping table of higher dimensional accuracy from the primarily undistorted calibration picture comprises:
step S3.1: computing the primarily undistorted calibration picture from the original calibration picture and the primary distortion mapping table;
step S3.2: calculating the sub-pixel center coordinates of the feature circles in the circular dot matrix of the primarily undistorted calibration picture using contour analysis and block analysis;
step S3.3: using the principle of reverse mapping, calculating the sub-pixel position, in the primarily undistorted image, of each point lying between adjacent feature points of the new matrix, and storing these positions in the new mapping matrix;
step S3.4: mapping all remaining points of the new matrix from the primarily undistorted image using the positions and distances of adjacent feature points and linear interpolation, thereby obtaining the new grid mapping matrix.
As a further improvement of the above technical solution, in step S4, mapping other pictures taken by the same camera and lens under the same conditions twice in sequence, using the primary distortion mapping table and then the grid mapping table, with linear interpolation, to obtain an undistorted picture of high dimensional accuracy comprises:
step S4.1: primarily undistorting the picture taken by the camera by linear interpolation, using the mapping between the input picture and the primary distortion mapping table;
step S4.2: computing the higher-precision undistorted picture by linear interpolation, using the mapping between the primarily undistorted picture and the grid mapping table.
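In OpenCV terms the two-pass correction of step S4 reduces to a sketch like the following, assuming the two mapping tables from steps S2 and S3 are float32 per-pixel lookup maps of the same size as the image:

```python
# A minimal sketch of step S4: two successive remaps with linear interpolation.
# The four map arguments are the per-pixel lookup tables from steps S2 and S3.
import cv2

def undistort_twice(img, primary_map_x, primary_map_y, grid_map_x, grid_map_y):
    # first pass: primary distortion removal
    step1 = cv2.remap(img, primary_map_x, primary_map_y, cv2.INTER_LINEAR)
    # second pass: the grid mapping table refines to higher dimensional accuracy
    return cv2.remap(step1, grid_map_x, grid_map_y, cv2.INTER_LINEAR)
```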
The beneficial effects of the invention are as follows: the image distortion correction method provided by the invention is fast to calibrate, achieves high dimensional accuracy, is robust to environmental interference in the camera's working environment, and is well suited to automated measurement in industrial automation.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a white ceramic dot matrix calibration plate of the present invention;
FIG. 3 is the main flow chart for calculating the sub-pixel circle center coordinates according to the present invention;
FIG. 4 shows the feature circle centers detected in a calibration picture according to the present invention;
FIG. 5 is a flow chart of calculating the position correspondence mapping table according to the present invention;
FIG. 6 is a flow chart of calculating the higher-accuracy grid mapping table according to the present invention;
FIG. 7 is a process diagram of the present invention for dividing regions;
fig. 8 is a flow chart of the present invention for high dimensional accuracy distortion removal.
Detailed Description
All of the features disclosed in this specification, or all of the steps in any method or process so disclosed, may be combined in any combination, except combinations of features and/or steps that are mutually exclusive.
Any feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving equivalent or similar purposes, unless expressly stated otherwise. That is, unless expressly stated otherwise, each feature is only an example of a generic series of equivalent or similar features.
In the description of the present invention, it is to be understood that the terms "one end", "the other end", "outside", "upper", "inside", "horizontal", "coaxial", "central", "end", "length", "outer end", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are used only for convenience in describing the present invention and for simplicity in description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed in a particular orientation, and be operated, and thus, should not be construed as limiting the present invention.
Further, in the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
The use of terms such as "upper," "above," "lower," "below," and the like to describe relative spatial positions herein is intended only to facilitate description of one element or feature's relationship to another element or feature as illustrated in the figures. The spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as "below" or "beneath" other elements or features would then be oriented "above" the other elements or features. Thus, the exemplary term "below" can encompass both an orientation of above and below. The device may be otherwise oriented and the spatially relative descriptors used herein interpreted accordingly.
In the present invention, unless otherwise explicitly specified or limited, the terms "disposed," "sleeved," "connected," "penetrating," "plugged," and the like are to be construed broadly, e.g., as a fixed connection, a detachable connection, or an integral part; can be mechanically or electrically connected; they may be directly connected or indirectly connected through intervening media, or they may be connected internally or in any other suitable relationship, unless expressly stated otherwise. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
Fig. 1 is a general flowchart of a camera distortion correction method according to an embodiment. As shown in fig. 1, in the present embodiment, the flow of image distortion correction includes the following steps:
shooting a calibration plate picture;
The calibration plate of this example is a white ceramic dot array calibration plate as shown in fig. 2; the feature circles are black, the circle radius is 1.5 mm, the distance between adjacent circle centers is 3 mm, and the feature circle array is 41 × 41.
In step S1, calculating a sub-pixel value of the coordinates of the center point of the feature circle in the circular lattice of the calibration picture by performing contour analysis and block analysis on the calibration picture;
the main purpose of this step is to quickly and accurately calculate the sub-pixel coordinates of the center of each feature circle, and the main flow chart is shown in fig. 3.
Firstly, a histogram statistics method is used to calculate a binarization threshold for the ROI of the calibration picture; the threshold calculated for this calibration picture is 118.
The calibration picture is binarized with this threshold and its contours are extracted, giving 1748 contours in total, of which the largest is the outer contour of the picture.
The minimum number of pixels per contour is set to 10: on the one hand this filters out non-circular noise contours, and on the other hand a circular contour containing too few pixels would reduce calibration precision. After this filter, 1734 contours remain.
The contour areas are then calculated, and with the maximum and minimum area thresholds set to 50000 and 50 respectively the contours are filtered further, removing the outer-frame contour of the calibration plate and the contours of some small speckles; 1718 contours remain.
The ratio of each contour's area to the area of its minimum enclosing circle is calculated, and with the threshold range set to 0.7-1.3 the contours of dotted lines and rod-shaped objects are filtered out; 1684 contours remain.
The remaining contours are sorted by area in ascending order, and, using the principle that the feature circles all have equal areas, the approximate radius of a feature circle is calculated to be 17.415 pixels.
The upper and lower contour radius thresholds are set to 1.3 times and 0.7 times the approximate radius respectively; contours outside this range are filtered out, and 1681 contours remain.
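A small sketch of the radius estimate and radius filter just described, assuming `kept` holds the contours that survived the earlier filters; taking the median contour area as the "typical" circle area is an illustrative stand-in for the equal-area reasoning of the embodiment.

```python
# A small sketch of the approximate-radius estimate and radius filter described
# above; `kept` is assumed to be the list of contours surviving earlier filters.
import cv2
import numpy as np

areas = sorted(cv2.contourArea(c) for c in kept)
typical_area = areas[len(areas) // 2]              # median as the "typical" circle area
approx_r = float(np.sqrt(typical_area / np.pi))    # equal-area principle (~17.4 px here)

# keep only contours whose enclosing-circle radius lies within 0.7x to 1.3x of approx_r
kept = [c for c in kept
        if 0.7 * approx_r <= cv2.minEnclosingCircle(c)[1] <= 1.3 * approx_r]
```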
Block (blob) analysis is then performed on the image, yielding 1694 block center points in total.
The block center points that fall inside the 1681 remaining contours are taken as the sub-pixel center coordinates of the feature circles of the dot matrix.
The finally obtained feature circle centers of the calibration picture are shown in fig. 4, where green marks the detected center positions.
In step S2, a mapping table of position correspondence between the original picture and the primarily undistorted picture is calculated from the computed circle center coordinates and the circle center coordinates of the actual calibration plate; the main flow is shown in fig. 5.
The physical coordinates of all feature circle centers of the calibration plate are calculated first: the center of the feature circle at the top left corner is assumed to be (0, 0, 0), and, since all feature circles lie in the same plane, their Z coordinates are set to 0; the coordinates of all other centers are then computed from the center-to-center spacing of the calibration plate and the relative positions of the feature circles.
From the center coordinates of all feature circles in the calibration picture and the calculated physical center coordinates of the calibration plate, the camera intrinsic matrix and distortion coefficients are computed.
A primary distortion correction position mapping table is then computed from the camera intrinsic matrix and the distortion coefficients; the mapping table has the same size as the calibration picture, and the x and y coordinates are stored in a single two-channel matrix.
In step S3, the primarily undistorted calibration picture is calculated using the primary distortion mapping table, and a grid mapping table of higher dimensional accuracy is then calculated from the primarily undistorted calibration picture; the main flow is shown in fig. 6.
Using the obtained mapping table and the calibration picture, a new picture is computed by bilinear interpolation; this new picture is the primarily undistorted calibration picture.
Using the method of step S1, the sub-pixel center coordinates of the feature circles in the primarily undistorted calibration picture are calculated.
The average pixel distance between adjacent feature points is computed statistically and rounded, giving 62 pixels. A matrix is then created with the same size as the calibration picture, two channels and single-precision floating-point data, and every position is initialized to -1.
The 41 × 41 feature points divide the image into a 40 × 40 grid of cells, partitioning the whole calibration picture into regions as shown in FIG. 7. The top-left point A (868.158, 242.214) of the grid is used as the reference point and mapped, by rounding, to matrix position A1 (868, 242); B is mapped to B1 (930, 242); and all feature points are mapped to their corresponding matrix positions at a spacing of 62 pixels.
The first cell at the upper left corner is processed first. From the coordinates of points A and B, the pixel length in the x direction is rx = 62.603 and the pixel length in the y direction is ry = -0.475, giving scaling coefficients kx = rx / 62 = 1.009730 and ky = ry / 62 = -0.007663. The x coordinate of the nth point between A1 and B1 is therefore 868 + n * kx, and the y coordinate is 242 + n * ky.
Point C is mapped to matrix position C1 (868, 304); the positions in the primarily undistorted image of the other 61 points between A1 and C1 are obtained in the same way, and then all points in the first cell are computed by applying the A1-to-B1 method with each point between A1 and C1 as a starting point.
By analogy, the positions of all points in the other cells relative to the primarily undistorted image are calculated, and the image produced with this matrix mapping table has higher dimensional accuracy.
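The cell-by-cell construction described above can be sketched as follows, assuming `centers` is a (41, 41, 2) array of the sub-pixel feature circle centers measured in the primarily undistorted calibration picture and ordered row by row; the nested loops mirror the A/B/C stepping of the embodiment and are kept for clarity rather than speed, and all names are illustrative.

```python
# A minimal sketch of step S3: build the grid mapping table by reverse mapping,
# stepping between adjacent feature centers as in the A/B/C example above.
import numpy as np

def grid_mapping_table(centers, img_shape, spacing=62):
    h, w = img_shape
    map_x = np.full((h, w), -1, dtype=np.float32)   # -1 marks unmapped positions
    map_y = np.full((h, w), -1, dtype=np.float32)
    rows, cols = centers.shape[:2]                  # 41 x 41 feature points
    ax, ay = np.rint(centers[0, 0]).astype(int)     # point A rounded to A1

    for j in range(rows - 1):                       # 40 x 40 grid cells
        for i in range(cols - 1):
            tl = centers[j, i]                      # point A of this cell
            kx = (centers[j, i + 1] - tl) / spacing # per-pixel step towards B
            ky = (centers[j + 1, i] - tl) / spacing # per-pixel step towards C
            for v in range(spacing + 1):
                for u in range(spacing + 1):
                    src = tl + u * kx + v * ky      # source position (reverse mapping)
                    x, y = ax + i * spacing + u, ay + j * spacing + v
                    if 0 <= x < w and 0 <= y < h:
                        map_x[y, x] = src[0]
                        map_y[y, x] = src[1]
    return map_x, map_y
```

The resulting pair of maps can be passed directly to `cv2.remap` in step S4, which performs the linear interpolation.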
In step S4, the primary distortion mapping table and the grid mapping table are used to map other pictures taken by the same camera and lens under the same conditions twice in sequence, with linear interpolation, to obtain an undistorted picture of high dimensional accuracy; the main flow is shown in fig. 8, and the method includes:
obtaining a primarily undistorted picture from the input original picture and the primary distortion correction mapping table;
obtaining a high-precision undistorted picture from the primarily undistorted picture and the grid mapping table.
The above description is only one embodiment of the present invention, and the scope of the present invention is not limited thereto; any changes or substitutions that can be conceived without inventive effort shall fall within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope defined by the claims.

Claims (5)

1. A method of image distortion correction, the method comprising the steps of:
step S1: calculating the sub-pixel center coordinates of the feature circles in the dot matrix of the calibration picture; the sub-pixel center coordinates of the feature circles in the circular dot matrix of the calibration picture are calculated by performing contour analysis and block analysis on the calibration picture;
step S2: calculating a primary distortion correction mapping table of the image; a mapping table of corresponding pixel positions between the original calibration picture and the real picture is calculated from the mapping relation between the sub-pixel circle center coordinates and the circle center positions of the actual calibration plate;
step S3: calculating a picture grid mapping table; the primarily undistorted calibration picture is calculated using the primary distortion mapping table, and the grid mapping table is then calculated from the primarily undistorted calibration picture;
step S4: calculating the undistorted picture; other pictures taken by the same camera and lens under the same conditions are mapped twice in sequence, using the primary distortion correction mapping table and then the picture grid mapping table, with linear interpolation, to obtain an undistorted picture of high dimensional accuracy.
2. The method of image distortion correction according to claim 1, characterized in that:
in step S1, calculating the sub-pixel center coordinates of the feature circles in the circular dot matrix of the calibration picture by performing contour analysis and block analysis on the calibration picture comprises:
step S1.1: statistically analyzing the pixel values of the image to obtain a binarization threshold, and binarizing the image with this threshold to obtain a binary image;
step S1.2: extracting all contours in the image, and then filtering the contours in turn by the number of contour pixels, the contour roundness and the contour area;
step S1.3: calculating the areas of the remaining contours, sorting them by area, estimating the approximate radius of the circles in the circular dot matrix, and filtering the contours again by radius;
step S1.4: calculating the center points of the remaining contours, fitting an optimal regular array, and excluding contours whose center points deviate from the array by more than a set threshold;
step S1.5: performing block analysis on the original image to obtain the block center coordinates, and determining the positional relation between each block center and the contours; a block center lying inside a contour is taken as the sub-pixel center coordinate of a feature circle in the circular dot matrix.
3. The method of image distortion correction according to claim 1, characterized in that:
in step S2, calculating the position correspondence mapping table between the original picture and the corrected picture from the calculated sub-pixel circle center coordinates and the circle center positions of the actual calibration plate comprises:
step S2.1: assigning a corresponding physical coordinate to each circle center using the actual spacing between adjacent circle centers on the calibration plate and the positional relation of the centers;
step S2.2: calculating the camera intrinsic matrix and distortion coefficients from the calculated sub-pixel circle center coordinates and the physical coordinates of the calibration plate;
step S2.3: calculating a primary distortion mapping matrix from the camera intrinsic matrix and the distortion coefficients;
step S2.4: computing the primarily undistorted calibration picture from the primary distortion mapping matrix and the input calibration image.
4. The method of image distortion correction according to claim 1, characterized in that:
in step S3, calculating the primarily undistorted calibration picture using the primary distortion mapping table and then calculating a grid mapping table of higher dimensional accuracy from the primarily undistorted calibration picture comprises:
step S3.1: computing the primarily undistorted calibration picture from the original calibration picture and the primary distortion mapping table;
step S3.2: calculating the sub-pixel center coordinates of the feature circles in the circular dot matrix of the primarily undistorted calibration picture using contour analysis and block analysis;
step S3.3: using the principle of reverse mapping, calculating the sub-pixel position, in the primarily undistorted image, of each point lying between adjacent feature points of the new matrix, and storing these positions in the new mapping matrix;
step S3.4: mapping all remaining points of the new matrix from the primarily undistorted image using the positions and distances of adjacent feature points and linear interpolation, thereby obtaining the new grid mapping matrix.
5. The method of image distortion correction according to claim 1, characterized in that:
in step S4, mapping other pictures taken by the same camera and lens under the same conditions twice in sequence, using the primary distortion mapping table and then the grid mapping table, with linear interpolation, to obtain an undistorted picture of high dimensional accuracy comprises:
step S4.1: primarily undistorting the picture taken by the camera by linear interpolation, using the mapping between the input picture and the primary distortion mapping table;
step S4.2: computing the higher-precision undistorted picture by linear interpolation, using the mapping between the primarily undistorted picture and the grid mapping table.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011216165.4A CN112348756A (en) 2020-11-04 2020-11-04 Image distortion correction method


Publications (1)

Publication Number Publication Date
CN112348756A true CN112348756A (en) 2021-02-09

Family

ID=74428255

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011216165.4A Pending CN112348756A (en) 2020-11-04 2020-11-04 Image distortion correction method

Country Status (1)

Country Link
CN (1) CN112348756A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114022370A (en) * 2021-10-13 2022-02-08 山东大学 Galvanometer laser processing distortion correction method and system
CN114022370B (en) * 2021-10-13 2022-08-05 山东大学 Galvanometer laser processing distortion correction method and system

Similar Documents

Publication Publication Date Title
CN111612853B (en) Camera parameter calibration method and device
KR20190028794A (en) GPU-based TFT-LCD Mura Defect Detection Method
CN108895959B (en) Camera calibration plate corner point calculation method based on sub-pixels
CN105205806B (en) A kind of precision compensation method based on machine vision
CN106780623A (en) A kind of robotic vision system quick calibrating method
CN104751458B (en) A kind of demarcation angular-point detection method based on 180 ° of rotation operators
CN114022370B (en) Galvanometer laser processing distortion correction method and system
CN111105466B (en) Calibration method of camera in CT system
CN114331924B (en) Large workpiece multi-camera vision measurement method
CN110793462A (en) Nylon gear reference circle measuring method based on vision technology
CN111633360B (en) Intelligent horizontal surface surfacing method based on vision
CN113222955A (en) Gear size parameter automatic measurement method based on machine vision
CN112348756A (en) Image distortion correction method
CN112802123A (en) Binocular linear array camera static calibration method based on stripe virtual target
CN113610929B (en) Combined calibration method of camera and multi-line laser
CN112634375B (en) Plane calibration and three-dimensional reconstruction method in AI intelligent detection
US10516822B2 (en) Method and device for merging images of calibration devices
CN113538399A (en) Method for obtaining accurate contour of workpiece, machine tool and storage medium
CN111353981B (en) Gear detection method and system based on machine vision and storage medium
CN116880353A (en) Machine tool setting method based on two-point gap
CN108537810B (en) Improved Zernike moment sub-pixel edge detection method
KR100837119B1 (en) A camera calibration method for measuring the image
CN114998417A (en) Method for measuring size of thin-wall stamping part hole group based on secondary curve invariant
CN112207444B (en) Ultrahigh-precision laser marking method for marking defective products of LED lamp beads
CN114998571A (en) Image processing and color detection method based on fixed-size marker

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination