CN113109259B - Intelligent navigation method and device for image - Google Patents
- Publication number
- CN113109259B CN113109259B CN202110359129.1A CN202110359129A CN113109259B CN 113109259 B CN113109259 B CN 113109259B CN 202110359129 A CN202110359129 A CN 202110359129A CN 113109259 B CN113109259 B CN 113109259B
- Authority
- CN
- China
- Prior art keywords
- image
- point
- auxiliary lens
- lens
- correction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/01—Arrangements or apparatus for facilitating the optical investigation
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/02—Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
- G01B11/03—Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness by measuring coordinates of points
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/26—Measuring arrangements characterised by the use of optical techniques for measuring angles or tapers; for testing the alignment of axes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30164—Workpiece; Machine component
Abstract
The invention provides an intelligent navigation method and device for images that are simple to operate and can quickly reach a specified point in an enlarged image. The method comprises the following steps: a. establishing a linkage relation between the main lens center and the auxiliary lens center; b. photographing at each height position according to a set test interval, capturing a correction checkerboard picture and recording the height position at the time of capture; c. calculating, segment by segment, the rotation angle and pixel ratio of each picture from each captured group of correction checkerboard pictures and the recorded corresponding height positions; d. calculating a global correction coefficient from the rotation angles and pixel ratios calculated in segments; e. calculating the target position of the object stage using the global correction coefficient, thereby quickly positioning a target measurement area point on the surface of the workpiece under the auxiliary lens at the main lens image center. The image processing device applies the above method. The invention can be applied to the technical field of image processing.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to an intelligent navigation method and device for an image.
Background
When processing an image, the image often needs to be enlarged to obtain a better result. However, when viewing the enlarged image through a viewing window, it is difficult to quickly find a specified point after enlargement. This increases the difficulty of locating a specified point in the enlarged image and greatly reduces image processing efficiency. One existing image processing apparatus performs camera calibration and correction by a two-point calibration method. The process is as follows:
1) Carrying out image correction on the main lens;
2) Taking a panoramic picture of the machine table with the auxiliary lens, and recording the machine table position at the time of capture;
3) Moving the reference feature point of the workpiece to be measured on the machine table to the image center of the main lens and of the auxiliary lens in turn; actually measuring the reference feature point in the main lens image, and marking it in the auxiliary lens image;
4) Moving the direction feature point of the workpiece to be measured on the machine table to the image center of the main lens and of the auxiliary lens in turn; actually measuring the direction feature point in the main lens image, and marking it in the auxiliary lens image;
5) Connecting the reference feature point and the direction feature point in the main lens image and in the auxiliary lens image, and calculating the related correction parameters with a software algorithm;
6) Clicking the target position point in the auxiliary lens image, and moving that point of the workpiece on the machine table to the center of the main lens image.
However, the above image processing procedure still has the following disadvantages:
(1) The operation is very complicated;
(2) The navigation positioning is inaccurate, and the precision is not high enough;
(3) There is no height information in the correction parameters and a navigation correction operation is required at each measured height level.
Therefore, a new image intelligent navigation processing method is needed to overcome the above problems.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an intelligent navigation method and device that are simple to operate and can quickly reach a specified point in an enlarged image.
The invention discloses an intelligent image navigation method with the following technical scheme. The method is carried out on an image processing platform comprising a machine table and an upper computer. An object stage that moves in the horizontal direction and a vertically arranged support column are provided on the machine table; a main lens and an auxiliary lens are mounted on the support column and can each move up and down along it. The upper computer is connected by electric signals to the object stage, the main lens and the auxiliary lens, and an image processing device is arranged on the upper computer. The intelligent navigation method comprises the following steps:
step a, electrifying an image processing platform, calibrating a camera by using a main lens and an auxiliary lens, and establishing a linkage relation between the center of the main lens and the center of the auxiliary lens;
Step b, placing the correction checkerboard on the object stage below the auxiliary lens; adjusting the height of the auxiliary lens so that it can clearly and completely capture a panoramic image of the checkerboard at both the lowest and the highest position; taking the lowest position as the correction starting point and the highest position as the correction end point; setting a number of test intervals of equal height between the starting point and the end point; and finally photographing from the correction starting point at each height position according to the set test interval, capturing the checkerboard picture and recording the height position at the time of capture;
step c, using each group of correction chessboard pictures captured in the step b and the recorded corresponding height position, and calculating the rotation angle and the pixel proportion of each picture in a segmented manner;
step d, calculating a global correction coefficient by using the rotation angle and the pixel proportion of each picture calculated by the segmentation;
Step e, placing the workpiece to be measured on the object stage; in the image obtained by the auxiliary lens, within the measurable range of the image, arbitrarily selecting a target measurement area point on the surface of the workpiece; and calculating the target position of the object stage using the global correction coefficient obtained in step d, thereby quickly positioning the target measurement area point on the workpiece surface under the auxiliary lens at the center of the main lens image and quickly reaching the specified point in the enlarged image.
Further, when the camera calibration is performed in step a, the same feature point on the correction checkerboard is moved to the main lens image center and to the auxiliary lens image center in turn, the corresponding actual machine table positions are recorded, the offset between the main and auxiliary lens centers is calculated, and the linkage relation between the two centers is established from this offset.
Further, in step c, the specific steps of calculating the rotation angle and pixel ratio of each picture in segments are:
Step c1, finding preliminary corner coordinate positions using corner detection;
Step c2, refining the corner coordinate positions using sub-pixel corner detection;
Step c3, calculating the overall dimensions, rotation angle and pixel ratio of the correction checkerboard.
Further, in step d, the specific step of calculating the global correction coefficient is:
step d1, fitting a straight line;
step d2, calculating a linear coefficient;
d3, detecting the accuracy of the coefficient;
and d4, calculating the average angle to obtain a global correction coefficient.
Further, in the step e, the specific step of calculating the target position of the stage is:
step e1, calculating a real correction coefficient according to the height; e2, calculating the real offset position of the objective table; e3, synchronizing the pixel coordinates of the main lens and the auxiliary lens; e4, calculating the center offset distance; e5, judging the quadrant of the detection area point;
and e6, obtaining the target position of the objective table.
The technical scheme of the image processing device of the invention is: an image processing device that applies the intelligent image navigation method described above.
The invention has the following beneficial effects. The camera is calibrated with the main lens and the auxiliary lens, establishing the image correspondence between them; the auxiliary lens photographs at correction points of different heights set by the test interval, capturing correction checkerboard pictures and recording the height position at the time of capture; a global correction coefficient is calculated from the rotation angle and pixel ratio of each captured picture; a target measurement area point is then clicked in the image formed by the auxiliary lens, and the target position of the object stage is calculated from the global correction coefficient, so that the target measurement area point on the surface of the workpiece under the auxiliary lens is quickly positioned at the image center of the main lens and the specified point in the enlarged image is quickly reached. The method is simple to operate, and the target test point can be reached accurately and quickly in the enlarged image acquired by the main lens, which greatly improves working efficiency and target positioning precision.
Drawings
FIG. 1 is a simplified structural diagram of the image processing platform of the invention;
FIG. 2 is a bottom view of the lens barrel;
FIG. 3 is a simplified diagram of detecting the fine corner coordinate position with sub-pixel corners in the embodiment;
FIG. 4 is a simplified schematic diagram of the inner corner points used when finding the outermost corner coordinates in the embodiment.
Detailed Description
The embodiments of the present invention are specifically as follows.
As shown in FIG. 1, the method is carried out on an image processing platform comprising a machine table 1 and an upper computer. An object stage 3 that moves in the horizontal direction and a vertically arranged support column 4 are provided on the machine table; a main lens 5 and an auxiliary lens 6 are mounted on the support column 4 and can each move up and down along it. The upper computer is connected by electric signals to the object stage 3, the main lens 5 and the auxiliary lens 6. LED lamp beads 7 are distributed in a ring on the lower surface of the auxiliary lens 6 to improve the brightness of the photographing range when the camera works. An image processing device is arranged on the upper computer. The intelligent image navigation method comprises the following steps:
step a, electrifying an image processing platform, calibrating a camera by using a main lens and an auxiliary lens, and establishing a linkage relation between the center of the main lens and the center of the auxiliary lens;
Step b, placing the correction checkerboard on the object stage 3 below the auxiliary lens 6; adjusting the height of the auxiliary lens so that it can clearly and completely capture a panoramic image of the checkerboard at both the lowest and the highest position; taking the lowest position as the correction starting point and the highest position as the correction end point; setting a number of test intervals of equal height between the starting point and the end point; and finally photographing from the correction starting point at each set test interval one by one, capturing the checkerboard picture and recording the height position at the time of capture;
step c, using each group of correction chessboard pictures captured in the step b and the recorded corresponding height position, and calculating the rotation angle and the pixel proportion of each picture in a segmented manner;
step d, calculating a global correction coefficient by using the rotation angle and the pixel proportion of each picture calculated by segmentation;
Step e, placing the workpiece to be measured on the object stage; in the image obtained by the auxiliary lens, within the measurable range of the image, arbitrarily selecting a target measurement area point on the surface of the workpiece; and calculating the target position of the object stage using the global correction coefficient obtained in step d, so that the target measurement area point on the workpiece surface under the auxiliary lens is quickly positioned at the center of the main lens image and the specified point in the enlarged image is quickly reached.
When the camera calibration is carried out in step a, the same feature point on the correction checkerboard is moved to the main lens image center and to the auxiliary lens image center in turn, the corresponding actual machine table positions are recorded, the offset between the main and auxiliary lens centers is calculated, and the linkage relation between the two centers is established from this offset.
In step c, the specific steps of calculating the rotation angle and pixel ratio of each picture in segments are:
Step c1, finding preliminary corner coordinate positions using corner detection;
Step c2, refining the corner coordinate positions using sub-pixel corner detection;
Step c3, calculating the overall dimensions, rotation angle and pixel ratio of the correction checkerboard.
In step d, the specific steps of calculating the global correction coefficient are as follows:
step d1, fitting a straight line;
step d2, calculating a linear coefficient;
d3, detecting the accuracy of the coefficient;
and d4, calculating the average angle to obtain a global correction coefficient.
In step e, the specific step of calculating the target position of the objective table is as follows:
step e1, calculating a real correction coefficient according to the height; e2, calculating the real offset position of the objective table; e3, synchronizing the pixel coordinates of the main lens and the auxiliary lens; e4, calculating the center offset distance; step e5, judging the quadrant of the detection area point;
and e6, obtaining the target position of the objective table.
The image processing device applies the intelligent navigation method of the image.
In order that the invention may be more fully understood, specific examples are set forth below.
As shown in FIG. 1, the method of the present invention is performed on an image processing platform comprising a machine table 1 and an upper computer. An object stage 3 that moves in the horizontal direction and a vertically arranged support column 4 are provided on the machine table; a main lens 5 and an auxiliary lens 6 are mounted on the support column 4 and can each move up and down along it. The upper computer is connected by electric signals to the object stage 3, the main lens 5 and the auxiliary lens 6, and an image processing device is arranged on the upper computer. The intelligent image navigation method specifically comprises the following steps.
Step a, powering on the image processing platform, calibrating the cameras of the main lens and the auxiliary lens, and establishing the center offset relation of the main and auxiliary lenses. The specific steps are as follows:
and respectively moving the same characteristic point on the correction chessboard board to the center of the main lens image area window, recording the corresponding machine station actual position P0, moving the characteristic point to the center of the auxiliary lens image area window, and recording the corresponding machine station actual position P1. The center Offset amounts of the main camera and the sub camera are calculated from P0 and P1, and the Offset amounts in the X-axis direction and the Y-axis direction are calculated from the equation,
Offset.x = P0.x- P1.x,
Offset.y = P0.y- P1.y,
and the width W and height H of the auxiliary lens image are recorded at the same time. Here P0.x and P0.y are the X-axis and Y-axis components of the actual machine table position when the feature point is centered in the main lens image area window, and P1.x and P1.y are the corresponding components when the feature point is centered in the auxiliary lens image area window.
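As a minimal numeric sketch of this offset calculation (the position values below are illustrative, not taken from the patent):

```python
# Hypothetical stage positions (in mm) recorded when the same checkerboard
# feature point is centered first in the main-lens window (P0) and then in
# the auxiliary-lens window (P1).
P0 = (120.0, 85.0)
P1 = (95.5, 60.25)

# Offset.x = P0.x - P1.x,  Offset.y = P0.y - P1.y
offset_x = P0[0] - P1[0]
offset_y = P0[1] - P1[1]
print(offset_x, offset_y)  # 24.5 24.75
```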
Step b, place the correction checkerboard on the object stage 3 below the auxiliary lens 6. Adjust the auxiliary lens to the bottom end at which it can clearly and completely obtain a panoramic image of the checkerboard, and record the actual position of the auxiliary lens at this moment as the correction starting point Z0; in the same way, move the auxiliary lens to the top end and record its actual position as the correction end point Z1. Then set a capture height step PitchZ, generate capture position points from Z0 to Z1 at intervals of PitchZ, and store all capture height position points in an array lsPosZ.
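The generation of the capture height points can be sketched in plain Python (the function and parameter names mirror Z0, Z1 and PitchZ but are otherwise illustrative):

```python
def snap_positions(z0, z1, pitch):
    """Return the capture height points from the correction starting point z0
    up to the end point z1 in steps of pitch, always including z1 itself
    (the array lsPosZ in the description)."""
    points = []
    z = z0
    while z < z1:
        points.append(round(z, 6))
        z += pitch
    points.append(z1)  # ensure the correction end point is always captured
    return points

print(snap_positions(0.0, 10.0, 2.5))  # [0.0, 2.5, 5.0, 7.5, 10.0]
```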
Take each position point out of the array lsPosZ, move the auxiliary lens to it, photograph point by point while climbing, and store each captured checkerboard picture in an array CaptureMats.
Step c, using each group of correction checkerboard pictures captured in step b and the corresponding height positions at capture time, calculate the rotation angle and pixel ratio of each picture in segments; store each calculated pixel ratio coefficient Ratio with its corresponding height PosZ in a coefficient array PointGroup, and store each calculated angle coefficient Angle in an angle array angleGroup.
Further, a specific method for calculating the pixel scale coefficient and the angle coefficient of each height point is as follows:
c.1 First, find the preliminary corner coordinate positions with a corner detection algorithm:
using OpenCV and its built-in image processing functions, find the boundary positions between black pixels (gray value 0) and white pixels (gray value 255) in the checkerboard picture and determine the approximate positions of the inner corner points;
c.2 Refine the corner coordinate positions with sub-pixel corner detection:
using OpenCV and its built-in image processing functions, subdivide the pixel positions of the inner corner points calculated in step c.1 into smaller units to reach higher precision.
Suppose the approximate position p of an inner corner lies near the actual sub-pixel corner q, as shown in FIG. 3. Collect the set of points in the neighborhood of p and compute the vectors q - p: if the direction of q - p coincides with the edge direction, the dot product of q - p with the image gradient vector at p is 0. The exact corner coordinates can therefore be located by finding the q that minimizes these dot products.
c.3 Find the outermost corner coordinates:
as shown in FIG. 4, take the coordinates (Xa, Ya) of the leftmost inner corner point A and the coordinates (Xb, Yb) of the rightmost inner corner point B from the accurate coordinates calculated in step c.2;
c.4 Calculate the rotation angle coefficient Angle and the pixel scale factor Ratio of the checkerboard at each height point from A, B and the known physical distance D between them:
Angle = arctan((Yb - Ya) / (Xb - Xa)),
Ratio = D / sqrt((Xb - Xa)^2 + (Yb - Ya)^2).
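A minimal sketch of this per-height computation, assuming Angle is the inclination of the line A-B and Ratio converts pixels to physical units via the known A-B distance (`board_width` is an assumed parameter name, not from the patent):

```python
import math

def angle_and_ratio(ax, ay, bx, by, board_width):
    """Rotation angle (degrees) and physical-units-per-pixel ratio of one
    checkerboard picture, from the leftmost inner corner A = (ax, ay) and
    the rightmost inner corner B = (bx, by); board_width is the known
    physical distance between A and B."""
    angle = math.degrees(math.atan2(by - ay, bx - ax))
    pixel_dist = math.hypot(bx - ax, by - ay)
    ratio = board_width / pixel_dist
    return angle, ratio

# A level board whose A-B span is 100 px and 50 mm: no rotation, 0.5 mm/px.
print(angle_and_ratio(0.0, 0.0, 100.0, 0.0, 50.0))  # (0.0, 0.5)
```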
and d, calculating a global correction coefficient, namely the slope k and the intercept b of the primary straight line y = kx + b and an angle coefficient a by using the rotation angle and the pixel proportion of each picture calculated by the segmentation.
The specific calculation method is as follows:
Calculate a fitted line with PointGroup as the input parameter (a specific fitting method can use the fitLine algorithm in OpenCV, the open source computer vision library, among others);
extract the slope k and intercept b of the line from the fitted result in point-slope form;
calculate the angle coefficient a as the average value of angleGroup.
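These three steps can be sketched in plain Python (a least-squares stand-in for OpenCV's fitLine; `point_group` holds (PosZ, Ratio) pairs and `angle_group` the per-picture angles, names assumed):

```python
def global_coefficients(point_group, angle_group):
    """Fit Ratio = k*PosZ + b by least squares over the per-height samples
    and average the angles, returning the global coefficients (k, b, a)."""
    n = len(point_group)
    sx = sum(z for z, _ in point_group)
    sy = sum(r for _, r in point_group)
    sxx = sum(z * z for z, _ in point_group)
    sxy = sum(z * r for z, r in point_group)
    k = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    b = (sy - k * sx) / n                          # intercept
    a = sum(angle_group) / len(angle_group)        # average angle
    return k, b, a
```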
Step e, place the workpiece to be measured on the object stage and record the current actual machine table position point as Q. Click the position point P to be measured in the auxiliary lens image area window, and use the global correction coefficients k, b and a calculated in step d to compute the real machine table position point R at which the clicked point appears at the predicted position in the main lens image window. Then call the machine motion control instruction to move the machine table to position point R, so that the measured workpiece quickly reaches the center of the main lens image area, speeding up the measuring operation and raising the degree of automation.
The specific calculation method of the position point R is as follows:
(1) Calculate the real pixel ratio coefficient ratio and the real angle coefficient A from the current actual height of the lens:
A =-a,
ratio = k * realZ + b;
where
a is the angle coefficient of the global correction coefficients calculated in step d,
k is the slope of the global correction coefficients calculated in step d,
b is the intercept of the global correction coefficients calculated in step d, and
realZ is the current real height of the auxiliary lens.
(2) Calculate the real offset of the machine table:
realX = Q.x + Offset.x,
realY = Q.y + Offset.y;
where
Q.x is the X-axis component of the actual machine table position point Q,
Q.y is the Y-axis component of the actual machine table position point Q,
Offset.x is the X-axis offset of the main and auxiliary lens centers calculated in step a,
Offset.y is the Y-axis offset of the main and auxiliary lens centers calculated in step a,
realX is the real offset of the machine table in the X-axis direction, and
realY is the real offset of the machine table in the Y-axis direction.
(3) Synchronize the pixel coordinates of the main and auxiliary lenses:
x0= P.x*cos(A)+ P.y*sin(A),
y0= P.x*sin(A) + P.y*cos(A);
where
P.x is the X-axis component of the position point P to be measured in the auxiliary lens image area,
P.y is the Y-axis component of the position point P to be measured in the auxiliary lens image area, and
x0 and y0 are the X and Y coordinates of point P after the main and auxiliary lenses are synchronized.
(4) Calculate the center offset distance:
dx = abs(x0 - 0.5*W)*ratio,
dy = abs(y0 - 0.5*H)*ratio;
where
dx is the X offset distance of point P from the main lens image center after main-auxiliary lens synchronization, and
dy is the Y offset distance of point P from the main lens image center after main-auxiliary lens synchronization.
(5) Judge the quadrant direction: x_dir = +1 if x0 > 0.5*W, otherwise -1; y_dir = +1 if y0 > 0.5*H, otherwise -1;
where
x_dir is the X-axis direction of the main lens image area after point P is synchronized to the main and auxiliary lenses, y_dir is the Y-axis direction of the main lens image area after point P is synchronized to the main and auxiliary lenses,
W is the width of the auxiliary lens image recorded in step a, and
H is the height of the auxiliary lens image recorded in step a.
(6) Calculate the actual machine table position coordinates corresponding to the final point to be measured:
R.x= realX + x_dir * dx,
R.y= realY + y_dir * dy;
where
R.x is the X coordinate of the real machine table position point R corresponding to the point P to be measured in the main lens image window, and
R.y is the Y coordinate of the real machine table position point R corresponding to the point P to be measured in the main lens image window.
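Steps (1) through (6) can be collected into one sketch. The +/-1 quadrant signs in step (5) are an assumption (the text does not spell out the x_dir/y_dir convention), and all numeric inputs below are illustrative:

```python
import math

def stage_target(P, Q, offset, k, b, a, realZ, W, H):
    """Map a clicked point P (pixels, auxiliary image) to the stage target R.
    Q: current stage position; offset: (Offset.x, Offset.y) from step a;
    k, b, a: global correction coefficients; realZ: current auxiliary-lens
    height; W, H: auxiliary image width and height."""
    A = -a                                 # (1) real angle coefficient
    ratio = k * realZ + b                  #     real pixel-scale coefficient
    realX = Q[0] + offset[0]               # (2) real stage offset
    realY = Q[1] + offset[1]
    x0 = P[0] * math.cos(A) + P[1] * math.sin(A)  # (3) synchronized pixel
    y0 = P[0] * math.sin(A) + P[1] * math.cos(A)  #     coordinates
    dx = abs(x0 - 0.5 * W) * ratio         # (4) center offset distances
    dy = abs(y0 - 0.5 * H) * ratio
    x_dir = 1 if x0 > 0.5 * W else -1      # (5) quadrant signs (assumed)
    y_dir = 1 if y0 > 0.5 * H else -1
    return (realX + x_dir * dx, realY + y_dir * dy)  # (6) target point R

# No rotation (a = 0), ratio = 0.5 at realZ = 5: a click 100 px right of and
# 50 px below image center moves the stage by (+50, +25) from (110, 110).
print(stage_target((600.0, 300.0), (100.0, 100.0), (10.0, 10.0),
                   0.0, 0.5, 0.0, 5.0, 1000.0, 500.0))  # (160.0, 135.0)
```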
The invention establishes the image correspondence between the main lens and the auxiliary lens; the auxiliary lens photographs at correction points of different heights set by the test interval, capturing correction checkerboard pictures and recording the height position at the time of capture; a global correction coefficient is calculated from the rotation angle and pixel ratio of each captured picture; a target measurement area point is then selected by clicking in the image formed by the auxiliary lens, and the target position of the object stage is calculated from the global correction coefficient, so that the target measurement area point on the surface of the workpiece under the auxiliary lens is quickly positioned at the image center of the main lens and the specified point in the enlarged image is quickly reached. The method is simple to operate, and the target test point can be reached accurately and quickly in the enlarged image acquired by the main lens, which greatly improves working efficiency and target positioning precision.
Claims (4)
1. An intelligent navigation method for images, the method being carried out on an image processing platform, the image processing platform comprising a machine table (1) and an upper computer; an object stage (3) that moves in the horizontal direction and a vertically arranged support column (4) being provided on the machine table; a main lens (5) and an auxiliary lens (6) being mounted on the support column (4), the main lens (5) and the auxiliary lens (6) each being movable up and down along the support column (4); the upper computer being connected by electric signals to the object stage (3), the main lens (5) and the auxiliary lens (6); and an image processing device being arranged on the upper computer; characterized in that the intelligent image navigation method comprises the following steps:
step a, electrifying an image processing platform, calibrating a camera by using a main lens and an auxiliary lens, and establishing a linkage relation between the center of the main lens and the center of the auxiliary lens;
Step b, placing the correction checkerboard on the object stage (3) below the auxiliary lens (6); adjusting the height of the auxiliary lens so that it can clearly and completely capture a panoramic image of the checkerboard at both the lowest and the highest position; taking the lowest position as the correction starting point and the highest position as the correction end point; setting a number of test intervals of equal height between the starting point and the end point; and finally moving the auxiliary lens from the correction starting point according to the set test intervals, photographing at each height position one by one, capturing the checkerboard picture and recording the height position at the time of capture;
step c, respectively calculating the rotation angle and the pixel Ratio of each picture by using each group of correction chessboard pictures captured in step b and the recorded corresponding height positions; the specific steps are: step c1, finding the initial coordinate positions of the internal corner points by using a corner detection algorithm; step c2, detecting the fine coordinate positions of the internal corner points by using a sub-pixel corner algorithm; step c3, according to the accurate coordinates detected in step c2, extracting the coordinates (Xa, Ya) of the leftmost inner corner point A and the coordinates (Xb, Yb) of the rightmost inner corner point B; step c4, calculating the rotation angle a and the pixel Ratio of the chessboard at each height point by the following formulas: a = arctan((Yb − Ya)/(Xb − Xa)), Ratio = L/√((Xb − Xa)² + (Yb − Ya)²), where L is the actual distance between corner points A and B on the correction chessboard;
step d, calculating global correction coefficients, namely slope k, intercept b and angle coefficient a, from the calculated rotation angle and pixel Ratio of each picture; specifically, fitting a straight line Ratio = k × PosZ + b using the pixel Ratio and the corresponding height position PosZ to obtain the slope k and the intercept b, and taking the average of the calculated rotation angles as the angle coefficient a;
step e, placing the workpiece to be measured on the object stage; within the measurable range of the image, selecting any target measurement area point on the surface of the workpiece in the image obtained by the auxiliary lens; and using the global correction coefficients obtained in step d to calculate the coordinates, in the main-lens image window, of the real machine-table position point corresponding to the target measurement area point, so that the target measurement area point on the surface of the workpiece under the auxiliary lens is quickly positioned at the center of the main-lens image, achieving quick arrival at the designated point in the enlarged image.
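The calibration of steps b–d can be illustrated with a short sketch. This is not part of the patent: the per-height corner coordinates, the 90 mm board span, and the sample heights below are all hypothetical, and the least-squares fit stands in for whatever fitting routine the implementation actually uses. It computes, for each height, the rotation angle from corners A and B and the pixel Ratio (mm per pixel), then fits Ratio = k × PosZ + b and averages the angles.

```python
import math

# Hypothetical per-height calibration data: at each auxiliary-lens height
# PosZ (mm), the leftmost inner corner A and rightmost inner corner B of the
# correction chessboard were detected at these pixel coordinates. The actual
# distance between A and B on the board is assumed to be 90 mm.
BOARD_SPAN_MM = 90.0
samples = [
    # (PosZ, (Xa, Ya), (Xb, Yb))
    (10.0, (100.0, 202.0), (1000.0, 220.0)),
    (20.0, (110.0, 203.0), (960.0,  220.0)),
    (30.0, (120.0, 204.0), (920.0,  220.0)),
]

def angle_and_ratio(pa, pb, span_mm):
    """Rotation angle (deg) and pixel Ratio (mm/pixel) for one height point."""
    dx, dy = pb[0] - pa[0], pb[1] - pa[1]
    angle = math.degrees(math.atan2(dy, dx))   # chessboard rotation angle a
    ratio = span_mm / math.hypot(dx, dy)       # mm represented by one pixel
    return angle, ratio

def fit_global_coefficients(samples, span_mm):
    """Least-squares fit Ratio = k*PosZ + b; angle coefficient = mean angle."""
    zs, ratios, angles = [], [], []
    for z, pa, pb in samples:
        ang, r = angle_and_ratio(pa, pb, span_mm)
        zs.append(z); ratios.append(r); angles.append(ang)
    n = len(zs)
    zbar = sum(zs) / n
    rbar = sum(ratios) / n
    k = sum((z - zbar) * (r - rbar) for z, r in zip(zs, ratios)) \
        / sum((z - zbar) ** 2 for z in zs)
    b = rbar - k * zbar
    return k, b, sum(angles) / n

k, b, a = fit_global_coefficients(samples, BOARD_SPAN_MM)
print(k, b, a)
```

With these sample points the fitted slope k is positive, reflecting that the field of view widens (more mm per pixel) as the auxiliary lens moves up.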
2. The intelligent image navigation method according to claim 1, wherein: when the camera calibration is carried out in step a, the same feature point on the correction chessboard is moved to the center of the main-lens image and to the center of the auxiliary-lens image respectively, the corresponding actual machine-table positions are recorded, the offset between the main and auxiliary lens centers is calculated, and the linkage relation between the main lens center and the auxiliary lens center is established according to the offset.
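The linkage relation of claim 2 reduces to a fixed stage-coordinate offset between the two optical centers. A minimal sketch, with hypothetical recorded stage positions (the numbers are illustrative only, not from the patent):

```python
# Hypothetical stage positions (mm) recorded when the same chessboard feature
# point sits at the main-lens image center and at the auxiliary-lens image
# center respectively.
main_center_pos = (120.50, 80.25)   # stage position under the main lens
aux_center_pos  = (95.10, 78.05)    # stage position under the auxiliary lens

# Offset between the two optical centers: adding it to a stage position found
# under the auxiliary lens gives the move that brings the same point under
# the main lens.
offset = (main_center_pos[0] - aux_center_pos[0],
          main_center_pos[1] - aux_center_pos[1])
print(round(offset[0], 2), round(offset[1], 2))
```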
3. The method according to claim 1, wherein in step e, the specific step of calculating the coordinates of the target measurement area point corresponding to the real position point of the machine in the main lens image window comprises:
step e1, calculating a real pixel Ratio_real and a true angle coefficient A according to the actual height RealZ of the current auxiliary lens, wherein Ratio_real = k × RealZ + b and A = −a; step e2, calculating the real offset position of the target measurement area point; step e3, synchronizing the pixel coordinates of the main lens and the auxiliary lens; step e4, calculating the offset distance between the target measurement area point and the center of the main-lens image; step e5, judging the quadrant in which the target measurement area point lies;
and step e6, obtaining the coordinates, in the main-lens image window, of the real machine-table position point corresponding to the target measurement area point.
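Steps e1–e5 can be sketched as follows. This is not the patent's implementation: the coefficient values, image-center pixel, and the choice of a plain rotate-then-scale mapping are assumptions; the patent itself only specifies Ratio_real = k × RealZ + b and A = −a for step e1.

```python
import math

# Hypothetical global correction coefficients from the calibration stage:
# slope k, intercept b (Ratio = k*PosZ + b), angle coefficient a in degrees.
k, b, a = 6.25e-4, 0.0936, 1.15

def target_stage_offset(px, py, cx, cy, real_z):
    """Convert a picked pixel (px, py) in the auxiliary-lens image into a
    real stage offset in mm from the image center (cx, cy)."""
    ratio_real = k * real_z + b          # e1: real mm-per-pixel at this height
    A = -a                               # e1: true angle coefficient
    dx_pix, dy_pix = px - cx, py - cy    # e4: pixel offset from image center
    # e2/e5: rotate the pixel offset by A to undo the chessboard rotation,
    # then scale to millimetres; the signs of the result give the quadrant.
    rad = math.radians(A)
    dx_mm = (dx_pix * math.cos(rad) - dy_pix * math.sin(rad)) * ratio_real
    dy_mm = (dx_pix * math.sin(rad) + dy_pix * math.cos(rad)) * ratio_real
    return dx_mm, dy_mm

# Pick pixel (800, 300) in a 1280x960 auxiliary image centered at (640, 480),
# with the auxiliary lens at height 20 mm.
dx, dy = target_stage_offset(800, 300, 640, 480, real_z=20.0)
print(dx, dy)
```

The resulting (dx, dy) is the stage move that centers the picked point; combined with the center offset of claim 2, it places the point at the center of the main-lens image (step e6).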
4. An image processing apparatus, characterized in that: the image processing apparatus is configured to execute the intelligent image navigation method according to claim 1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110359129.1A CN113109259B (en) | 2021-04-02 | 2021-04-02 | Intelligent navigation method and device for image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113109259A CN113109259A (en) | 2021-07-13 |
CN113109259B true CN113109259B (en) | 2023-02-03 |
Family
ID=76713453
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110359129.1A Active CN113109259B (en) | 2021-04-02 | 2021-04-02 | Intelligent navigation method and device for image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113109259B (en) |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002369007A (en) * | 2001-06-05 | 2002-12-20 | Fujitsu Ltd | Method for converting image contrast pattern comprising many parallel lines |
US8792013B2 (en) * | 2012-04-23 | 2014-07-29 | Qualcomm Technologies, Inc. | Method for determining the extent of a foreground object in an image |
CN105234943B (en) * | 2015-09-09 | 2018-08-14 | 大族激光科技产业集团股份有限公司 | A kind of industrial robot teaching device and method of view-based access control model identification |
CA2961921C (en) * | 2016-03-29 | 2020-05-12 | Institut National D'optique | Camera calibration method using a calibration target |
CN106767452A (en) * | 2017-01-03 | 2017-05-31 | 徐兆军 | A kind of wood-based product's width detecting and its detection method |
CN110044291A (en) * | 2019-05-16 | 2019-07-23 | 苏州汇才土水工程科技有限公司 | A kind of method of camera battle array measurement local deformation |
CN111429533B (en) * | 2020-06-15 | 2020-11-13 | 上海海栎创微电子有限公司 | Camera lens distortion parameter estimation device and method |
- 2021-04-02: CN application CN202110359129.1A granted as patent CN113109259B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN113109259A (en) | 2021-07-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103257085B (en) | Image processing device and method for image processing | |
CN106780623A (en) | A kind of robotic vision system quick calibrating method | |
CN108562250B (en) | Keyboard keycap flatness rapid measurement method and device based on structured light imaging | |
CN110660107A (en) | Plane calibration plate, calibration data acquisition method and system | |
CN109739239B (en) | Planning method for uninterrupted instrument recognition of inspection robot | |
CN105783711B (en) | Three-dimensional scanner correction system and correction method thereof | |
CN107869954B (en) | Binocular vision volume weight measurement system and implementation method thereof | |
CN108470356B (en) | Target object rapid ranging method based on binocular vision | |
JP2008014940A (en) | Camera calibration method for camera measurement of planar subject and measuring device applying same | |
CN113538583A (en) | Method for accurately positioning position of workpiece on machine tool and vision system | |
CN108716890A (en) | A kind of high-precision size detecting method based on machine vision | |
CN110889829A (en) | Monocular distance measurement method based on fisheye lens | |
CN111025701A (en) | Curved surface liquid crystal screen detection method | |
CN111586401B (en) | Optical center testing method, device and equipment | |
JP5222430B1 (en) | Dimension measuring apparatus, dimension measuring method and program for dimension measuring apparatus | |
CN113963065A (en) | Lens internal reference calibration method and device based on external reference known and electronic equipment | |
CN113109259B (en) | Intelligent navigation method and device for image | |
CN206583440U (en) | A kind of projected image sighting distance detecting system | |
CN113805304B (en) | Automatic focusing system and method for linear array camera | |
CN115684012A (en) | Visual inspection system, calibration method, device and readable storage medium | |
CN113079318B (en) | System and method for automatically focusing edge defects and computer storage medium | |
CN112710662A (en) | Generation method and device, generation system and storage medium | |
CN107860933B (en) | Digital image-based automatic detection method and device for fiber content in textile | |
CN113063352B (en) | Detection method and device, detection equipment and storage medium | |
CN113983951B (en) | Three-dimensional target measuring method, device, imager and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||