CN115112098B - Monocular vision one-dimensional two-dimensional measurement method - Google Patents

Monocular vision one-dimensional two-dimensional measurement method

Info

Publication number
CN115112098B
CN115112098B
Authority
CN
China
Prior art keywords
circle
feature
standard
points
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211044387.1A
Other languages
Chinese (zh)
Other versions
CN115112098A (en)
Inventor
冀伟
彭彩彤
曲东升
李长峰
张继
储开斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changzhou Mingseal Robotic Technology Co Ltd
Original Assignee
Changzhou Mingseal Robotic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changzhou Mingseal Robotic Technology Co Ltd filed Critical Changzhou Mingseal Robotic Technology Co Ltd
Priority to CN202211044387.1A priority Critical patent/CN115112098B/en
Publication of CN115112098A publication Critical patent/CN115112098A/en
Application granted granted Critical
Publication of CN115112098B publication Critical patent/CN115112098B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Abstract

The invention discloses a monocular vision one-dimensional two-dimensional measurement method which relies on the OpenCV open-source vision library and comprises the following steps: step 1, image acquisition and camera calibration; step 2, customizing ROI region images; step 3, image preprocessing; step 4, standard feature circle extraction; step 5, standard feature straight line extraction; step 6, configuring a standard detection file; step 7, updating contour feature points; step 8, describing the positional relationship among multiple targets; and step 9, planning the fully automatic dispensing path. The monocular vision one-dimensional two-dimensional measurement method extracts the features of the workpiece to be measured through machine vision and offers a high degree of automation, high measurement precision and low cost.

Description

Monocular vision one-dimensional two-dimensional measurement method
Technical Field
The invention relates to the technical field of monocular vision measuring methods, in particular to a monocular vision one-dimensional two-dimensional measuring method.
Background
Traditional online glue dispensing equipment usually acquires images through a camera, positions the workpiece by manually marked points, and then plans the trajectory. The intelligence and precision of this method are both low, the product defect rate is high and production efficiency is low; it cannot meet the index requirements of high-quality production, leaving enterprises short of competitiveness and flexibility.
Machine vision technology has developed rapidly. Compared with human vision, it can meet demanding, high-precision positioning and dimensional measurement requirements; combining vision technology with robots greatly improves production efficiency and safeguards personal safety in the production process to the greatest extent, which is of great significance for the development of industrial automation.
The calibration schemes commonly used in dispensing equipment are based on the commercial Halcon vision library. They can guarantee high-precision and stable calibration, but the cost is high, the technology is held by foreign companies and the underlying code is closed, so there is a technological choke-point risk.
Disclosure of Invention
The present invention is directed to solving at least one of the problems of the prior art.
Therefore, the invention provides a monocular vision one-dimensional two-dimensional measurement method which relies on the OpenCV open-source vision library and extracts the features of the workpiece to be measured through machine vision, offering a high degree of automation, high measurement precision and low cost.
The monocular vision one-dimensional and two-dimensional measurement method provided by the embodiment of the invention depends on an OpenCV open source vision library, and comprises the following steps of:
step 1, image acquisition and camera calibration: acquiring an image of a workpiece by using a monocular camera; calibrating the monocular camera to obtain a monocular camera internal reference matrix and a monocular camera external reference matrix;
step 2, customizing the ROI area image: based on the image collected in the step 1, performing ROI framing on single target feature areas in the image, marking and numbering the single target ROI area images, and establishing a self-defined coordinate mapping relation between each ROI area image and an original image;
step 3, image preprocessing: preprocessing each ROI regional image based on the single-target ROI regional image generated in the step 2;
step 4, standard feature circle extraction: extracting contour feature points, fitting a standard feature circle through a minimum enclosing circle iterative optimization algorithm, and outputting standard feature circle parameter information, wherein the standard feature circle parameter information comprises a standard feature circle center coordinate and a circle radius;
step 5, standard feature straight line extraction: user-defining the starting point and the end point of a straight line according to the standard feature circle centre coordinates extracted in the step 4, determining a sampling domain of feature points, performing bidirectional feature-point scanning and sampling in the horizontal and vertical directions, fitting the standard feature straight line with a RANSAC iterative algorithm, and recording the feature-point position information and the fitted straight-line equation;
step 6, configuring a standard detection file: storing the ROI area image number, the position coordinates of the corresponding standard feature circle or standard feature straight line into a standard detection file;
step 7, updating contour feature points: based on the standard feature circle in the step 4 and the standard feature straight line in the step 5, when the standard feature circle and the standard feature straight line need to be modified, the position and the size of the ROI region image are adjusted, the contour feature points are extracted again, and the updated contour feature point coordinates and ROI region image numbers are automatically stored in the standard detection file;
step 8, describing the position relation among multiple targets: reading a standard detection file, performing feature matching on an image to be detected, obtaining position coordinate information of a target feature circle and position coordinate information of a target feature straight line according to the coordinate mapping relation between each ROI area image obtained in the step 2 and an original image, and obtaining the distance from the circle center to the circle center and the distance from the circle center to the straight line, so as to describe the position relation among multiple targets;
step 9, planning the full-automatic dispensing path: and (3) according to the inside and outside parameter matrix of the monocular camera obtained in the step (1), the position information of the characteristic points obtained in the step (8) and the pixel distance between the characteristic points, the coordinates in the image coordinate system can be converted into the coordinates in the real world coordinate system, and the full-automatic dispensing path planning and the calculation of the positioning precision and the repeated measurement precision are realized.
The invention has the following advantages: (1) feature extraction is performed on the workpiece to be measured through machine vision, achieving pixel-level positioning and measurement precision, greatly reducing the product defect rate, with a high degree of automation and high production efficiency, increasing the competitiveness and flexibility of enterprises; (2) the fully automatic dispensing positioning problem is solved by a fusion method that extracts target contour feature points through processing such as adaptive threshold segmentation and combines them with the multi-target positional relationship; (3) single-target ROI regions are customized and numbered, the OpenCV open-source vision library is used to perform adaptive threshold segmentation, Canny edge detection and other processing on each ROI region, and a coordinate mapping relation between the customized ROI regions and the original image is established, reducing the amount of data to be processed; (4) a feature extraction method based on relative positional relationships is designed, which solves the positioning-accuracy problem caused by differences in workpiece placement, can describe the positional relationship among multi-target features, and improves the universality and positioning-measurement accuracy of the software system.
According to an embodiment of the present invention, in the step 4, the parameter information of the standard feature circle includes two circle center coordinates, two circle radii and a circle center distance.
According to an embodiment of the present invention, in the step 5, the standard feature straight line extraction process is: self-defining a starting point and an end point of a straight line, determining a sampling domain of straight line characteristic points of an upper frame of a workpiece based on two circle center coordinate positions, sampling horizontal and vertical bidirectional characteristic points in the sampling domain, sending the sampling to a sampling queue, and performing iterative optimization through an RANSAC iterative algorithm to obtain a standard characteristic straight line.
According to one embodiment of the invention, in the step 3, the preprocessing includes binarization, adaptive threshold segmentation, Canny edge detection and Gaussian filtering noise reduction.
According to an embodiment of the present invention, the binarization adopts an OTSU algorithm in an OpenCV open source vision library.
According to one embodiment of the invention, the adaptive threshold segmentation adopts an OTSU algorithm in an OpenCV open source vision library.
According to one embodiment of the invention, the Canny edge detection utilizes a Canny edge detection algorithm to obtain edge features on the ROI area image.
According to one embodiment of the invention, contour feature points are extracted using a contour extraction algorithm.
According to one embodiment of the invention, the standard feature circle extracts contour feature points based on the findContours function in OpenCV.
According to one embodiment of the invention, the standard feature straight line is fused with the relative position relationship of the standard feature circle through bidirectional sampling to obtain the contour feature point.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments described in the present application, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of a one-dimensional and two-dimensional measurement method of monocular vision according to the present invention;
FIG. 2 is a diagram of a custom ROI area;
FIG. 3 is a graph of adaptive threshold segmentation effects;
FIG. 4 is a line recognition depiction;
FIG. 5 is a graph of a characteristic circle measurement;
FIG. 6 is a diagram of a multi-target relative positional relationship.
The reference numbers in the figures are: ID0, left optical center; ID1, right optical center; ID2, straight line of upper frame; l0, a reference line; l1, straight line.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The method for measuring monocular vision in one dimension and two dimensions according to the embodiment of the present invention will be described in detail below with reference to the drawings.
The invention is applicable to various industries involving machine vision recognition and positioning, such as the dispensing industry.
The invention discloses a monocular vision one-dimensional two-dimensional measurement method, which depends on an OpenCV open source vision library and comprises the following steps:
step 1, image acquisition and camera calibration: acquiring an image of a workpiece by using a monocular camera; and calibrating the monocular camera to obtain a monocular camera internal reference matrix and a monocular camera external reference matrix.
The monocular camera internal reference (intrinsic) matrix and external reference (extrinsic) matrix are used to compute the coordinate transformation by which the three-dimensional real-world scene is projected through the monocular camera into a two-dimensional image. The intrinsic matrix describes how a point in the camera coordinate system is imaged through the camera's lens and pinhole into a pixel; the extrinsic matrix describes how any point in the real-world coordinate system is mapped by rotation and translation to a point in the camera coordinate system.
Let a certain point P have world coordinates $(X_w, Y_w, Z_w)$. The conversion from world coordinates to the camera coordinate system is as follows.

The coordinates of point P in the camera coordinate system are $(X_c, Y_c, Z_c)$:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = R\begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + t \qquad (1)$$

The symbols in formula (1) are as follows: $(X_w, Y_w, Z_w)$ are the world coordinates of point P; $(X_c, Y_c, Z_c)$ are the coordinates of point P in the camera coordinate system; $R$ is the rotation matrix; $t$ is the translation vector.

Formula (1) may be rewritten in homogeneous form as formula (2):

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ 0^{T} & 1 \end{bmatrix}\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \qquad (2)$$

The mapping from the camera three-dimensional coordinate system to the two-dimensional image coordinate system is as follows. According to the principle of similar triangles, the proportional relationship is:

$$\frac{x}{X_c} = \frac{y}{Y_c} = \frac{f}{Z_c} \qquad (3)$$

The symbols in formula (3) are as follows: $f$ represents the focal length; $(x, y)$ are the two-dimensional coordinates of the imaging plane; $(X_c, Y_c, Z_c)$ are the coordinates of point P in the camera coordinate system.

This yields the two-dimensional imaging-plane coordinates $(x, y)$ corresponding to point P:

$$x = f\,\frac{X_c}{Z_c}, \qquad y = f\,\frac{Y_c}{Z_c} \qquad (4)$$

The mapping from the two-dimensional image coordinate system to the pixel coordinate system is as follows. This process involves no rotation; it only changes the position of the coordinate origin and involves translation and scaling, and the origins of the two systems do not coincide. Let the origin of the image coordinate system have coordinates $(u_0, v_0)$ in the pixel coordinate system, let a point have coordinates $(x, y)$ in the image coordinate system, and let the physical size of one pixel along the x-axis and y-axis directions be $dx$ and $dy$ respectively.

The coordinate relationship between the two systems is:

$$u = \frac{x}{dx} + u_0, \qquad v = \frac{y}{dy} + v_0 \qquad (5)$$

The symbols in formula (5) are as follows: $(x, y)$ are the two-dimensional coordinates of the imaging plane; $(u, v)$ are the coordinates in the pixel coordinate system; $1/dx$ indicates scaling by a factor of $1/dx$ in the $u$ direction; $1/dy$ indicates scaling by a factor of $1/dy$ in the $v$ direction.

A matrix transformation relation can be obtained:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/dx & 0 & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \qquad (6)$$

Combining the above gives the parameters of the camera geometric model:

$$Z_c\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} R & t \end{bmatrix}\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \qquad (7)$$

The symbols in formula (7) are as follows: $f_x = f/dx$ represents the focal length of the sensor in the x-direction (in pixels); $f_y = f/dy$ represents the focal length of the sensor in the y-direction (in pixels).
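As a concrete illustration of step 1, the sketch below calibrates a monocular camera with OpenCV. It is a minimal example assuming a chessboard calibration target; the pattern size, square size and image paths are assumptions, since the patent does not specify the target. `cv2.calibrateCamera` returns the intrinsic matrix of formula (7), and each rvec/tvec pair encodes the rotation $R$ and translation $t$ of formula (1) for one calibration view.

```python
import glob
import cv2
import numpy as np

PATTERN = (9, 6)      # inner corners per row and column (assumed)
SQUARE_MM = 5.0       # chessboard square size in millimetres (assumed)

# 3D object points of the board corners in the board's own frame (Z = 0)
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

obj_pts, img_pts, size = [], [], None
for path in glob.glob("calib/*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

# K is the intrinsic matrix of formula (7); rvecs/tvecs are the extrinsics
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size,
                                                 None, None)
print("reprojection RMS:", rms, "\nintrinsic matrix K:\n", K)
```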
Step 2, self-defining an ROI image: based on the image collected in the step 1, ROI framing is carried out on the single target feature region in the image, the single target ROI region images are marked and numbered, and a self-defined coordinate mapping relation between each ROI region image and the original image is established.
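A minimal sketch of step 2 follows: ROI rectangles are user-defined and numbered, and any point measured inside an ROI is mapped back to original-image coordinates by adding the ROI offset. The rectangle values and ID numbers here are illustrative assumptions, not values from the patent.

```python
import cv2

# ROI rectangles keyed by ID number; the values are illustrative only
ROIS = {0: (120, 80, 200, 200),   # ID0: left optical centre (x, y, w, h)
        1: (520, 80, 200, 200),   # ID1: right optical centre
        2: (100, 20, 640, 60)}    # ID2: upper-frame straight line

def crop_roi(image, roi_id):
    """Cut out one numbered ROI and remember its offset in the original image."""
    x, y, w, h = ROIS[roi_id]
    return image[y:y + h, x:x + w], (x, y)

def roi_to_original(point, offset):
    """Map a point measured inside an ROI back to original-image coordinates."""
    return (point[0] + offset[0], point[1] + offset[1])
```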
Step 3, image preprocessing: and (3) preprocessing each ROI regional image based on the single-target ROI regional image generated in the step (2).
Step 4, standard feature circle extraction: extracting contour feature points, fitting a standard feature circle through a minimum bounding circle iterative optimization algorithm, and outputting standard feature circle parameter information, wherein the standard feature circle parameter information comprises a standard feature circle center coordinate and a circle radius.
It should be noted that, in order to further optimize the target recognition effect and improve the accuracy of circle-centre positioning, the two lens positions of the workpiece (a camera module) are screened by contour search, the 2D point set enclosing each lens in the image is extracted, the circle of minimum area is fitted with the minimum-enclosing-circle function, and the circle-centre coordinates are solved. The minimum-enclosing-circle algorithm takes the recognised contour point set as input and outputs the centre and radius of the circle; its specific steps are explained as follows:
Firstly, all pixel points of the contour are traversed and four boundary points are determined from their position coordinates: the uppermost boundary point H, the lowermost boundary point D, the leftmost boundary point L and the rightmost boundary point R. The minimum enclosing circle of these four points is solved according to their relative positions in the image, and the centre and radius of the circle are output. The specific cases for determining the minimum enclosing circle from four points are as follows:
in the first case: when the four points are all located on the same straight line, the two farthest points are taken as the two end points of the diameter of the circle, which is the minimum coverage circle of the four points.
In the second case: when only three points are on the same straight line, two points which are farthest away on the same straight line are found out, and a circumscribed circle where three corner points formed by the two points and another non-collinear point are located is the minimum covering circle of the group of points.
In a third case: when any three points in the four points are not on the same straight line, and three points in the set of points form a triangle, and another point falls on the inner side of the triangle, the circumscribed circle of the triangle is the smallest coverage circle of the set of points.
In a fourth case: when there is no case where any three points of the four points are collinear, and the four points are connected to form a convex quadrilateral, if there is a diagonal complement, the four points must be on the same circle, which is the smallest coverage circle of the set of points.
In the fifth case: when the condition that any three points of the four points are in the same line does not exist, the four points are connected to form a convex quadrangle, but any group of opposite angles are not complementary, a group of opposite angles and more than 180 degrees are certain to exist, if the two angles are obtuse angles, the remaining two angles and less than 180 degrees are certain to exist, the two points are two end points with the diameter, and the circle is the smallest covering circle of the group of points.
In the sixth case: when the four points do not exist in the condition that any three points are in the same line, and the four points are connected to form a convex quadrangle, but no group of opposite angles are complementary, a group of opposite angles and more than 180 degrees are certain to exist, if one of the two angles is an acute angle and the other angle is an obtuse angle, the vertex of the acute angle and two points of the other group of opposite angles form a triangle, and the circumscribed circle of the triangle is the smallest coverage circle of the group of points.
Then iteration is carried out. In the initial state K = 0 (K is a variable denoting the iteration index), all pixel points on the detected contour are traversed; if no pixel point is detected outside the circle boundary, that circle is the final minimum enclosing circle of the point set. If a pixel point is detected outside the circle boundary, the following operation is performed: the out-of-boundary point farthest from the circle centre is found, three of the four points defining the circle are selected in turn and combined with that out-of-boundary point, and for each combination it is detected whether the remaining point now lies outside the circle; if it does, the next combination is judged. If the remaining point is not outside the circle, that point is eliminated, the centre and radius of the circle in this case are recorded, and the circle is placed in a candidate queue.
A second iteration is performed with K = 1: all pixel points are tested against the circle recorded last in the candidate queue. If no pixel point is detected outside the circle boundary, that circle is the final minimum enclosing circle. Otherwise, let Q be the out-of-boundary point farthest from the current circle centre; three of the four points are combined with Q in turn, the four combinations are processed in sequence, the minimum enclosing circle of the four points in each combination is calculated, and whenever the remaining point is not outside the circle boundary, the centre and radius of that circle are recorded and the circle is placed in the candidate queue.
The next iterations repeat these steps until all pixel points on all contours have been traversed; if every point is found to lie inside the newly solved circle, the iteration ends, and the circle found at that moment is the final solution.
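For illustration, the sketch below realises step 4 with OpenCV's built-in contour search and minimum-enclosing-circle routine; treating `cv2.minEnclosingCircle` as a stand-in for the four-point iterative algorithm described above, and taking the largest contour as the lens outline, are assumptions of this sketch.

```python
import cv2

def fit_feature_circle(roi_edges):
    """roi_edges: preprocessed binary/edge image of one ROI (steps 2-3)."""
    contours, _ = cv2.findContours(roi_edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    pts = max(contours, key=cv2.contourArea)   # assume the largest contour is the lens
    (cx, cy), r = cv2.minEnclosingCircle(pts)  # minimum-area enclosing circle
    return (cx, cy), r
```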
Step 5, standard feature straight line extraction: the starting point and end point of the straight line are user-defined according to the standard feature circle centre coordinates extracted in the step 4, the sampling domain of the feature points is determined, bidirectional feature-point scanning and sampling is performed in the horizontal and vertical directions, the standard feature straight line is fitted with the RANSAC iterative algorithm, and the feature-point position information and the fitted straight-line equation are recorded.
It should be noted that RANSAC (random sample consensus) is a general parameter-estimation method that starts from a set as small as possible and then enlarges it with consistent data points. Feature points in the image are randomly sampled and outliers are eliminated to obtain a balanced result. Outliers are data points far from the rest of the data; they can occur at both ends of the range, well below the other points (near the minimum) or well above them (near the maximum). The working principle is to identify the outliers in the data set while building the model. This resampling technique estimates the model parameters using the minimum number of observations (data points) required to generate a candidate solution. RANSAC is accomplished by the following steps:
Step 1: a data set S is given; the minimum number of points required to determine the model parameters is randomly selected, yielding the subset S1.
Step 2: the model is fitted to the selected subset, i.e. a mathematical model M is constructed for the selected data set S1 and solved.
Step 3: the remaining points of the data set are tested against the computed model M; points within the error tolerance are judged to be inliers, otherwise outliers, and the set S1' consisting of all inliers is called the consensus set of S1.
Step 4: by comparison, the model with the largest number of inliers is recorded.
These steps are repeated until the iterations are exhausted or the current model fits well, the chosen iteration count $k$ being high enough to guarantee the desired probability of success.

Derivation of the iteration count: assume the proportion of inliers in the data is $t$ and each model computation uses $n$ points. The probability that at least one of the selected points is an outlier is $1 - t^{n}$; over $k$ iterations, $(1 - t^{n})^{k}$ is the probability that every one of the $k$ model computations picks up an outlier. The probability that $n$ correct points are sampled at least once, so that a correct model can be computed, is therefore $p = 1 - (1 - t^{n})^{k}$. From this, the iteration-count formula is obtained:

$$k = \frac{\log(1 - p)}{\log(1 - t^{n})} \qquad (8)$$

The symbols in formula (8) are as follows: $k$ represents the number of iterations; $t$ indicates the proportion of inliers in the data, $0 \le t \le 1$; $n$ is a variable representing the minimum number of data points applicable to the model; $1 - t^{n}$ represents the probability that at least one of the selected $n$ points is not an inlier; $p$ represents the probability that RANSAC is expected to obtain a correct model.
The process of fitting the straight line is: first, two points are randomly selected from the given set and the parameters of the straight line through them are calculated; all remaining points are compared against this line, and any point whose distance to the line is less than a given threshold is called an inlier and counted accordingly. This is repeated to obtain the maximum consensus set, and the straight line is fitted on that basis.
RANSAC is very robust: it estimates the model parameters well even when a large number of outliers is present.
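The sketch below is a minimal RANSAC line fit consistent with the steps above, with the iteration count taken from formula (8); the distance tolerance, inlier proportion and success probability are illustrative assumptions, not values from the patent.

```python
import numpy as np

def ransac_line(points, tol=1.0, p=0.99, t=0.6):
    """points: (N, 2) array; returns slope, intercept and the inlier mask."""
    n = 2                                                   # minimal sample for a line
    k = int(np.ceil(np.log(1 - p) / np.log(1 - t ** n)))    # formula (8)
    best = np.zeros(len(points), dtype=bool)
    rng = np.random.default_rng(0)
    for _ in range(k):
        i, j = rng.choice(len(points), size=2, replace=False)
        p1, p2 = points[i], points[j]
        d = p2 - p1
        norm = np.hypot(d[0], d[1])
        if norm == 0:
            continue
        # perpendicular distance of every point to the candidate line
        dist = np.abs(d[0] * (points[:, 1] - p1[1])
                      - d[1] * (points[:, 0] - p1[0])) / norm
        inliers = dist < tol
        if inliers.sum() > best.sum():                      # keep the largest consensus set
            best = inliers
    a, b = np.polyfit(points[best, 0], points[best, 1], 1)  # final fit: y = a*x + b
    return a, b, best
```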
Step 6, configuring a json standard detection file: the ROI region image numbers and the position coordinates of the corresponding standard feature circles or standard feature straight lines are stored in the standard detection file.
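As an illustration of step 6, one possible layout of the standard detection file is sketched below. The patent states only that ROI numbers and feature position coordinates are stored, so the json schema and all numeric values here are assumptions.

```python
import json

# Hypothetical schema; IDs match the customised ROI numbering of step 2
standard = {
    "rois": [
        {"id": 0, "type": "circle", "center": [352.1, 260.4], "radius": 48.7},
        {"id": 1, "type": "circle", "center": [612.8, 259.9], "radius": 48.5},
        {"id": 2, "type": "line", "p0": [300, 120], "p1": [660, 118]},
    ]
}
with open("standard_detection.json", "w") as f:
    json.dump(standard, f, indent=2)
```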
Step 7, updating contour feature points: based on the standard feature circle of the step 4 and the standard feature straight line of the step 5, when the standard feature circle and the standard feature straight line need to be modified, the position and size of the ROI region image are adjusted, the contour feature points are extracted again, and the updated contour feature point coordinates and ROI region image numbers are automatically stored in the standard detection file.
Step 8, describing the position relation among multiple targets: and (3) reading a standard detection file, performing feature matching on the image to be detected, obtaining position coordinate information of a target feature circle and position coordinate information of a target feature straight line according to the coordinate mapping relation between each ROI regional image obtained in the step (2) and the original image, and obtaining the distance from the circle center to the circle center and the distance from the circle center to the straight line, thereby describing the position relation among multiple targets.
Step 9, planning a full-automatic dispensing path: according to the inside and outside parameter matrix of the monocular camera obtained in the step 1, the position information (distance from the circle center to the straight line) of the characteristic points obtained in the step 8 and the pixel distance (circle center distance) between the characteristic points, the coordinates in the image coordinate system can be converted into the coordinates in the real world coordinate system, and the full-automatic dispensing path planning and the calculation of the positioning precision and the repeated measurement precision are realized.
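A minimal sketch of the step-9 conversion follows. It assumes the dispensing plane lies at a known fixed depth Z from the camera (a planar working stage, which is an assumption of this sketch), so a pixel maps to world units through the intrinsic matrix of formula (7) alone; the general case would also apply the extrinsic rotation and translation of formula (1).

```python
import numpy as np

def pixel_to_world(u, v, K, Z):
    """Back-project pixel (u, v) to camera-frame coordinates at known depth Z."""
    fx, fy = K[0, 0], K[1, 1]   # focal lengths in pixels, formula (7)
    u0, v0 = K[0, 2], K[1, 2]   # principal point
    X = (u - u0) * Z / fx
    Y = (v - v0) * Z / fy
    return np.array([X, Y, Z])
```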
In step 4, the parameter information of the standard feature circle comprises two circle center coordinates, two circle radiuses and a circle center distance.
In the 5 th step, the extraction process of the standard feature straight line is as follows: self-defining the starting point and the end point of a straight line, determining a sampling domain of straight feature points of an upper frame of a workpiece based on the coordinate positions of two circle centers, sampling horizontal and vertical bidirectional feature points in the sampling domain, sending the sampling domain into a sampling queue, and performing iterative optimization through an RANSAC iterative algorithm to obtain a standard feature straight line.
In step 3, preprocessing includes binarization, adaptive threshold segmentation, canny edge detection, gaussian filtering noise reduction, and perspective transformation.
The Gaussian filtering noise reduction convolves each pixel point in the image and its neighbourhood with a Gaussian kernel and takes the weighted average as the final grey value. The Gaussian function, like the normal distribution, is large at the centre and decreases towards the sides. Let a pixel in the two-dimensional image have position coordinates $(x, y)$ and grey value $f(x, y)$. After Gaussian filtering, its grey value becomes:

$$g(x, y) = \frac{1}{2\pi\sigma^{2}}\, e^{-\frac{x^{2} + y^{2}}{2\sigma^{2}}} * f(x, y) \qquad (9)$$

The symbols in formula (9) are as follows: $g(x, y)$ represents the grey value after Gaussian filtering; $\sigma$ represents the standard deviation; $e$ represents the constant 2.71...; $*$ denotes convolution.
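A compact sketch of the step-3 preprocessing chain on a single ROI image is given below, using the OpenCV routines named in the text; the kernel size, standard deviation and Canny thresholds are assumptions.

```python
import cv2

def preprocess(roi_gray):
    """Gaussian denoising, OTSU binarization, then Canny edge detection."""
    blurred = cv2.GaussianBlur(roi_gray, (5, 5), sigmaX=1.0)  # formula (9)
    # OTSU selects the threshold that maximises the between-class variance
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(binary, 50, 150)                        # edge map for steps 4-5
    return binary, edges
```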
Owing to angular deviation during image capture, the image can be rectified by a perspective transformation, i.e. by projecting the image onto a new viewing plane. Its essence is that the perspective centre, the image point and the target point are collinear; according to the law of perspective rotation, the bearing (perspective) plane is rotated by a certain angle around the trace line (perspective axis), destroying the original projection beam while keeping the projected geometric figure on the bearing plane unchanged. The transformation formula of the perspective transformation is:

$$\begin{bmatrix} x' & y' & w' \end{bmatrix} = \begin{bmatrix} u & v & w \end{bmatrix}\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \qquad (10)$$

The symbols in formula (10) are as follows: $[x', y', w']$ represents the transformed coordinate vector; $[u, v, w]$ represents the original coordinate vector, with the parameter $w = 1$; the $3 \times 3$ matrix of elements $a_{ij}$ represents the perspective matrix.

The picture coordinates obtained after the perspective transformation are $(x, y)$, where $x = x'/w'$ and $y = y'/w'$. The transformation matrix in the above formula can be split into four parts: the first part, $\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$, is a linear transformation used mainly for scaling and rotating the image; $[a_{31} \;\; a_{32}]$ produces translation; $[a_{13} \;\; a_{23}]^{T}$ produces the perspective effect; and $a_{33} = 1$. A perspective transformation matrix therefore has 8 free parameters, so 4 coordinate pairs (8 equations) are needed to solve it. The expressions for $x$ and $y$ are:

$$x = \frac{x'}{w'} = \frac{a_{11}u + a_{21}v + a_{31}}{a_{13}u + a_{23}v + a_{33}}, \qquad y = \frac{y'}{w'} = \frac{a_{12}u + a_{22}v + a_{32}}{a_{13}u + a_{23}v + a_{33}} \qquad (11)$$
Therefore, Hough line detection (based on the OpenCV open-source vision library) is performed on the area around the frame of the workpiece (camera module) in the picture; the four corner points are calculated from the coordinates of the two endpoints of each line, the corner points are ordered according to the ROI image numbers before the perspective transformation matrix is computed, and the perspective-corrected result is then obtained by applying the transformation matrix.
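The sketch below outlines this perspective correction with OpenCV: Hough line detection on the frame region, corner recovery, then `cv2.getPerspectiveTransform` and `cv2.warpPerspective`. Recovering the corners from the convex hull of the line endpoints, and ordering them by coordinate sums and differences, are simplifications assumed here in place of the patent's corner computation.

```python
import cv2
import numpy as np

def deskew(gray, out_size=(800, 600)):
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=10)
    pts = lines.reshape(-1, 2).astype(np.float32)     # all detected line endpoints
    hull = cv2.convexHull(pts)
    quad = cv2.approxPolyDP(hull, 0.02 * cv2.arcLength(hull, True), True)
    src = quad.reshape(-1, 2)
    assert len(src) == 4, "frame corners not recovered cleanly"
    s, d = src.sum(axis=1), src[:, 1] - src[:, 0]
    src = np.float32([src[np.argmin(s)],              # top-left
                      src[np.argmin(d)],              # top-right
                      src[np.argmax(s)],              # bottom-right
                      src[np.argmax(d)]])             # bottom-left
    w, h = out_size
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    M = cv2.getPerspectiveTransform(src, dst)         # 4 point pairs fix the 8 parameters
    return cv2.warpPerspective(gray, M, out_size)
```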
The binarization adopts an OTSU algorithm in an OpenCV open source vision library.
The adaptive threshold segmentation adopts an OTSU algorithm in an OpenCV open source vision library.
The Canny edge detection utilizes the Canny edge detection algorithm to obtain edge features of the ROI region image. The Canny algorithm detects edges at a low error rate and is one of the most popular edge detection algorithms; it is a technique for extracting useful structural information from different visual objects while greatly reducing the amount of data to be processed, and it is now widely used in various computer vision systems.
Contour feature points are extracted with a contour extraction algorithm; the standard feature circle extracts its contour feature points based on the findContours function in OpenCV; and the standard feature straight line obtains its contour feature points through bidirectional sampling fused with the relative positional relationship of the standard feature circle.
Example (b): for convenience of explaining the process of the present invention, the positioning measurement of the camera module is taken as an example.
In industrial production, the workpiece to be measured (a camera module) is conveyed to the processing platform by an assembly line; the monocular camera acquires an image of the workpiece (camera module) and ROI region frame selection is performed on each single target, as shown in fig. 2: ID0 is the left optical centre, ID1 the right optical centre, and ID2 the upper-frame straight line.
For each ROI region, an adaptive threshold segmentation algorithm (the OTSU algorithm, also known as Otsu's method) is used. It is a global-threshold segmentation method whose advantage is that it operates entirely on the histogram of the image, which is a readily available one-dimensional array. The method divides the grey-level pixels of the image histogram into two classes based on a threshold, calculates the variance between the two classes, and iterates until the between-class variance reaches its maximum, thereby obtaining the threshold. The effect of adaptive threshold segmentation on the optical centre of the workpiece (camera module) is shown in fig. 3.
The process of extracting the straight line of the upper frame of the workpiece is shown in fig. 4. The straight line through the two circle centres is taken as the reference line L0 and the ROI region of the upper frame is determined; within it, a straight line L1 parallel to L0 is selected. The line perpendicular to L0 through the left circle centre P0 intersects L1 at point M0. Starting from M0, pixel points are scanned one by one along the x direction; at each scanned pixel, grey values are compared one by one upward and downward along the direction perpendicular to the x-axis. If two adjacent grey values in that direction are found to differ, the upper of the two pixels is taken as a sampling point and placed in the sampling queue; if no pair of differing pixel values is found within the limit of 10 pixels above and 10 pixels below, the search at that column stops and the next pixel in the x-axis direction is scanned, until point M1 is reached, i.e. the intersection with L1 of the line perpendicular to L0 through the right circle centre P1. The sampling points are placed in the sampling queue in turn and iteratively optimised with the RANSAC iterative algorithm.
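The paragraph above translates into the following sketch: starting at M0, each column is scanned within 10 pixels above and below the reference row for a grey-value transition, and the upper pixel of each transition is kept as a sampling point. The function signature and variable names are assumptions.

```python
import numpy as np

def sample_edge_points(gray, m0, m1, band=10):
    """Scan columns from M0 to M1; keep the upper pixel of each grey transition."""
    samples = []
    y0 = int(m0[1])
    for x in range(int(m0[0]), int(m1[0]) + 1):
        for dy in range(-band, band):
            y = y0 + dy
            if gray[y, x] != gray[y + 1, x]:   # adjacent grey values differ
                samples.append((x, y))          # the upper of the two pixels
                break                           # move on to the next column
    return np.array(samples, dtype=float)
```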
A user-defined coordinate mapping relation is established between each ROI region image, namely one per circle and one per straight line, and the original image. For the circles, contour feature points are extracted and iteratively optimised with the minimum-enclosing-circle algorithm to obtain the circle-centre coordinates, circle radii and centre-to-centre distance. On this basis, feature points are sampled on the straight line, iteratively optimised with the RANSAC algorithm, and the distance from the circle centre to the straight line is calculated. The left circle is marked with number 0 and the right circle with number 1; from the input ID numbers, the centre coordinates of a single target, the circle radius, the pixel distance between the two circles and the converted distance in the world coordinate system can be obtained. The test effect and results are shown in fig. 5.
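A short sketch of the step-8 measurements follows, computing the centre-to-centre pixel distance and the distance from a circle centre to the fitted line y = a*x + b from the primitives obtained above.

```python
import numpy as np

def centre_distance(c0, c1):
    """Pixel distance between two fitted circle centres."""
    return float(np.hypot(c1[0] - c0[0], c1[1] - c0[1]))

def centre_to_line_distance(c, a, b):
    """Distance from centre c = (cx, cy) to the fitted line y = a*x + b."""
    return abs(a * c[0] - c[1] + b) / float(np.hypot(a, 1.0))
```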
The relative positional relationship of multiple targets is obtained by changing the ROI region numbers; setting the ID numbers to 1 and 2 yields the distance from the circle centre to the straight line, and the output result is shown in fig. 6.
According to the specific requirements of users, feature extraction is performed on the workpiece based on the OpenCV open-source vision library and a positioning and measurement algorithm is developed, guaranteeing that both the positioning accuracy and the repeated-measurement accuracy are below 1 pixel. The invention provides a feature-extraction-based monocular vision one-dimensional and two-dimensional measurement method, aiming to improve the automation degree of online dispensers, meet the different workpiece-assembly requirements of different users and further reduce the production costs of enterprises while satisfying the precision requirement.
In industrial production, each workpiece is sent to a processing area in a flow line mode, and the problem of positioning accuracy caused by differences of the placement positions of the workpieces needs to be solved. The invention mainly extracts standard contours such as straight lines, circles and the like to carry out one-dimensional and two-dimensional measurement, and solves the full-automatic dispensing positioning problem by a fusion method of self-defining ROI (region of interest) and multi-target position relation.
The method carries out feature positioning based on an OTSU algorithm, a Canny edge detection algorithm, a contour extraction algorithm, a minimum surrounding circle iterative optimization algorithm and the like in OpenCV, and remarkably improves the accuracy rate of positioning and measuring through a multi-target relative position relationship.
In the method, a single target region is frame-selected and numbered, a coordinate mapping relation between the user-defined ROI region image and the original image is established, target contour feature points are extracted through adaptive threshold segmentation and other processing, the target shape is fitted, and the positional relationship among multi-target features is described, reducing the amount of data processed and improving the universality and operating efficiency of the software system.
The invention adopts the mode of self-defining ROI area images and configuring the relative position relation between the detection file and multiple targets, thereby solving the problem of positioning precision caused by the difference of the workpiece placing positions.
The above description is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any equivalent substitution or change that a person skilled in the art can make according to the technical solution and inventive concept of the present invention, within the technical scope disclosed herein, shall fall within the protection scope of the present invention.

Claims (7)

1. A monocular vision one-dimensional two-dimensional measurement method depends on an OpenCV open source vision library, and is characterized by comprising the following steps:
step 1, image acquisition and camera calibration: acquiring an image of the workpiece by using a monocular camera; calibrating a monocular camera to obtain an internal reference matrix of the monocular camera and an external reference matrix of the monocular camera;
step 2, self-defining an ROI image: based on the image collected in the step 1, performing ROI framing on single target feature areas in the image, marking and numbering the single target ROI area images, and establishing a self-defined coordinate mapping relation between each ROI area image and an original image;
step 3, image preprocessing: preprocessing each ROI regional image based on the single-target ROI regional image generated in the step 2; in the step 3, preprocessing comprises binarization, adaptive threshold segmentation, canny edge detection and Gaussian filtering noise reduction; the binarization adopts an OTSU algorithm in an OpenCV open source vision library; the adaptive threshold segmentation adopts an OTSU algorithm in an OpenCV open source vision library;
step 4, standard feature circle extraction: extracting contour feature points, fitting a standard feature circle through a minimum bounding circle iterative optimization algorithm, and outputting standard feature circle parameter information, wherein the standard feature circle parameter information comprises a standard feature circle center coordinate and a circle radius;
step 5, standard feature straight line extraction: user-defining the starting point and the end point of a straight line according to the standard feature circle centre coordinates extracted in the step 4, determining a sampling domain of feature points, performing bidirectional feature-point scanning and sampling in the horizontal and vertical directions, fitting the standard feature straight line with a RANSAC iterative algorithm, and recording the feature-point position information and the fitted straight-line equation;
step 6, configuring a standard detection file: storing the ROI region image numbers and the position coordinates of the corresponding standard feature circles or standard feature straight lines in a standard detection file;
step 7, updating contour feature points: based on the standard feature circle in the step 4 and the standard feature straight line in the step 5, when the standard feature circle and the standard feature straight line need to be modified, adjusting the position and the size of the ROI region image, extracting the contour feature points again, and automatically storing the updated contour feature point coordinates and ROI region image numbers in the standard detection file;
step 8, describing the position relation among multiple targets: reading a standard detection file, performing feature matching on an image to be detected, obtaining position coordinate information of a target feature circle and position coordinate information of a target feature straight line according to the coordinate mapping relation between each ROI regional image obtained in the step 2 and an original image, and obtaining the distance from the circle center to the circle center and the distance from the circle center to the straight line, thereby describing the position relation among multiple targets;
step 9, planning a full-automatic dispensing path: and (3) converting coordinates in an image coordinate system into coordinates in a real world coordinate system according to the inside and outside parameter matrix of the monocular camera obtained in the step (1), the position information of the characteristic points obtained in the step (8) and the pixel distance between the characteristic points, so that the full-automatic dispensing path planning is realized, and the positioning precision and the repeated measurement precision are calculated.
2. The method for measuring monocular vision in one dimension and two dimensions according to claim 1, wherein: in the step 4, the parameter information of the standard feature circle includes two circle center coordinates, two circle radii and a circle center distance.
3. A monocular vision one-dimensional two-dimensional measurement method according to claim 2, wherein: in the step 5, the standard feature straight line extraction process is as follows: self-defining a starting point and an end point of a straight line, determining a sampling domain of straight line characteristic points of an upper frame of a workpiece based on two circle center coordinate positions, sampling horizontal and vertical bidirectional characteristic points in the sampling domain, sending the sampling to a sampling queue, and performing iterative optimization through an RANSAC iterative algorithm to obtain a standard characteristic straight line.
4. The method for measuring monocular vision in one dimension and two dimensions according to claim 1, wherein: and the Canny edge detection utilizes a Canny edge detection algorithm to obtain edge features on the ROI area image.
5. The method for measuring monocular vision in one dimension and two dimensions according to claim 1, wherein: and extracting the contour feature points by using a contour extraction algorithm.
6. The method for measuring monocular vision in one dimension and two dimensions according to claim 5, wherein: the standard feature circle extracts contour feature points based on the findContours function in OpenCV.
7. The method for measuring monocular vision in one dimension and two dimensions according to claim 5, wherein: and the standard characteristic straight line is fused with the relative position relation of the standard characteristic circle through bidirectional sampling to obtain the contour characteristic point.
CN202211044387.1A 2022-08-30 2022-08-30 Monocular vision one-dimensional two-dimensional measurement method Active CN115112098B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211044387.1A CN115112098B (en) 2022-08-30 2022-08-30 Monocular vision one-dimensional two-dimensional measurement method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211044387.1A CN115112098B (en) 2022-08-30 2022-08-30 Monocular vision one-dimensional two-dimensional measurement method

Publications (2)

Publication Number Publication Date
CN115112098A CN115112098A (en) 2022-09-27
CN115112098B (en) 2022-11-08

Family

ID=83336254

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211044387.1A Active CN115112098B (en) 2022-08-30 2022-08-30 Monocular vision one-dimensional two-dimensional measurement method

Country Status (1)

Country Link
CN (1) CN115112098B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116956640B (en) * 2023-09-19 2024-01-09 深圳市艾姆克斯科技有限公司 Adjusting method and system based on self-adaptive optimization of five-axis dispensing machine

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110197508A (en) * 2019-07-10 2019-09-03 深圳西顺万合科技有限公司 The method and device of the co-melting vision guide movement of 2D, 3D
CN110503638A (en) * 2019-08-15 2019-11-26 上海理工大学 Spiral colloid amount online test method
CN210664460U (en) * 2019-08-12 2020-06-02 盐城市腾辉电子科技有限公司 Novel point UV glues and uses detection tool
CN111460955A (en) * 2020-03-26 2020-07-28 欣辰卓锐(苏州)智能装备有限公司 Image recognition and processing system on automatic tracking dispensing equipment
CN114708338A (en) * 2022-03-29 2022-07-05 博众精工科技股份有限公司 Calibration method, device, equipment and medium of dispenser

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102017110533A1 (en) * 2017-05-15 2018-11-15 Lavision Gmbh Method for calibrating an optical measurement setup
CN109816724B (en) * 2018-12-04 2021-07-23 中国科学院自动化研究所 Three-dimensional feature extraction method and device based on machine vision
CN110110760A (en) * 2019-04-17 2019-08-09 浙江工业大学 A kind of workpiece positioning and recognition methods based on machine vision
CN114494045B (en) * 2022-01-10 2024-04-16 南京工大数控科技有限公司 Large spur gear geometric parameter measurement system and method based on machine vision

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110197508A (en) * 2019-07-10 2019-09-03 深圳西顺万合科技有限公司 The method and device of the co-melting vision guide movement of 2D, 3D
CN210664460U (en) * 2019-08-12 2020-06-02 盐城市腾辉电子科技有限公司 Novel point UV glues and uses detection tool
CN110503638A (en) * 2019-08-15 2019-11-26 上海理工大学 Spiral colloid amount online test method
CN111460955A (en) * 2020-03-26 2020-07-28 欣辰卓锐(苏州)智能装备有限公司 Image recognition and processing system on automatic tracking dispensing equipment
CN114708338A (en) * 2022-03-29 2022-07-05 博众精工科技股份有限公司 Calibration method, device, equipment and medium of dispenser

Also Published As

Publication number Publication date
CN115112098A (en) 2022-09-27

Similar Documents

Publication Publication Date Title
CN100430690C (en) Method for making three-dimensional measurement of objects utilizing single digital camera to freely shoot
Xu et al. Line structured light calibration method and centerline extraction: A review
CN114494045B (en) Large spur gear geometric parameter measurement system and method based on machine vision
CN107292869B (en) Image speckle detection method based on anisotropic Gaussian kernel and gradient search
CN115131444B (en) Calibration method based on monocular vision dispensing platform
CN111123242B (en) Combined calibration method based on laser radar and camera and computer readable storage medium
US6980685B2 (en) Model-based localization and measurement of miniature surface mount components
CN115345822A (en) Automatic three-dimensional detection method for surface structure light of aviation complex part
CN113324478A (en) Center extraction method of line structured light and three-dimensional measurement method of forge piece
CN111311618A (en) Circular arc workpiece matching and positioning method based on high-precision geometric primitive extraction
JPH08136220A (en) Method and device for detecting position of article
CN115112098B (en) Monocular vision one-dimensional two-dimensional measurement method
CN115609591A (en) 2D Marker-based visual positioning method and system and composite robot
CN114331995A (en) Multi-template matching real-time positioning method based on improved 2D-ICP
CN114022542A (en) Three-dimensional reconstruction-based 3D database manufacturing method
CN111222507A (en) Automatic identification method of digital meter reading and computer readable storage medium
CN112257721A (en) Image target region matching method based on Fast ICP
CN112329880A (en) Template fast matching method based on similarity measurement and geometric features
CN112184804A (en) Method and device for positioning high-density welding spots of large-volume workpiece, storage medium and terminal
CN113705564B (en) Pointer type instrument identification reading method
CN110992416A (en) High-reflection-surface metal part pose measurement method based on binocular vision and CAD model
CN114612412A (en) Processing method of three-dimensional point cloud data, application of processing method, electronic device and storage medium
JP2001143073A (en) Method for deciding position and attitude of object
CN108447092B (en) Method and device for visually positioning marker
CN111179271B (en) Object angle information labeling method based on retrieval matching and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant