CN115112098A - Monocular vision one-dimensional two-dimensional measurement method - Google Patents
- Publication number
- CN115112098A (application CN202211044387.1A)
- Authority
- CN
- China
- Prior art keywords
- circle
- feature
- straight line
- points
- standard
- Prior art date
- Legal status: Granted
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20104—Interactive definition of region of interest [ROI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Abstract
The invention discloses a monocular vision one-dimensional and two-dimensional measurement method which relies on the OpenCV open-source vision library and comprises the following steps: step 1, image acquisition and camera calibration; step 2, user-defined ROI region images; step 3, image preprocessing; step 4, standard feature circle extraction; step 5, standard feature straight line extraction; step 6, configuration of a standard detection file; step 7, updating of contour feature points; step 8, description of the positional relation among multiple targets; and step 9, planning of a fully automatic dispensing path. The method extracts the features of the workpiece to be measured through machine vision and offers a high degree of automation, high measurement precision and low cost.
Description
Technical Field
The invention relates to the technical field of monocular vision measuring methods, in particular to a monocular vision one-dimensional two-dimensional measuring method.
Background
Traditional online glue-dispensing equipment usually transmits images through a camera, positioning points are then marked manually, and the trajectory is planned afterwards. The degree of intelligence and the precision of this method are both low, the product defect rate is high, and production efficiency is poor; it cannot meet the index requirements of high-quality production, leaving enterprises short of competitiveness and flexibility.
Machine vision technology has developed rapidly. Compared with human vision, it can meet demanding, high-precision positioning and dimension-measurement requirements; combining vision technology with robots greatly improves production efficiency and protects personal safety in the production process to the greatest extent, which is of great significance for the development of industrial automation.
The calibration schemes commonly used in dispensing equipment are based on the commercial Halcon vision library and can deliver high-precision, stable calibration, but they are costly, the technology is held by foreign companies, and the underlying code is closed, so there is a technical risk of being cut off at a choke point.
Disclosure of Invention
The present invention is directed to solving at least one of the problems of the prior art.
Therefore, the invention provides a monocular vision one-dimensional and two-dimensional measurement method which relies on the OpenCV open-source vision library to extract the features of the workpiece to be measured through machine vision, and which has the advantages of a high degree of automation, high measurement precision and low cost.
The monocular vision one-dimensional and two-dimensional measurement method provided by the embodiment of the invention depends on an OpenCV open source vision library, and comprises the following steps of:
step 1, image acquisition and camera calibration: acquiring an image of a workpiece by using a monocular camera; calibrating the monocular camera to obtain a monocular camera internal reference matrix and a monocular camera external reference matrix;
step 2, customizing the ROI area image: based on the image acquired in the step 1, carrying out ROI framing on the single target feature region in the image, marking and numbering the single target ROI region images, and establishing a self-defined coordinate mapping relation between each single target ROI region image and the original image;
step 3, image preprocessing: preprocessing each ROI regional image based on the single-target ROI regional image generated in the step 2;
step 4, standard feature circle extraction: extracting contour feature points, fitting a standard feature circle through a minimum bounding circle iterative optimization algorithm, and outputting standard feature circle parameter information, wherein the standard feature circle parameter information comprises a standard feature circle center coordinate and a circle radius;
step 5, standard feature straight line extraction: user-define the start point and end point of the straight line according to the standard feature circle centre coordinates extracted in step 4, determine the sampling domain of the feature points, scan and sample feature points bidirectionally in the horizontal and vertical directions, fit the standard feature straight line with the RANSAC iterative algorithm, record the feature-point position information and fit the linear equation;
step 6, configuring a standard detection file: storing the ROI area image number, the position coordinates of the corresponding standard feature circle or standard feature straight line into a standard detection file;
step 7, updating contour feature points: based on the standard feature circle of step 4 and the standard feature straight line of step 5, when they need to be modified, adjust the position and size of the ROI region image, re-extract the contour feature points, and automatically store the updated contour feature point coordinates and the ROI region image number in the standard detection file;
step 8, describing the position relation among multiple targets: reading a standard detection file, performing feature matching on an image to be detected, obtaining position coordinate information of a target feature circle and position coordinate information of a target feature straight line according to the coordinate mapping relation between each ROI area image obtained in the step 2 and an original image, and obtaining the distance from the circle center to the circle center and the distance from the circle center to the straight line, so as to describe the position relation among multiple targets;
step 9, planning the fully automatic dispensing path: according to the intrinsic and extrinsic parameter matrices of the monocular camera obtained in step 1, the feature-point position information obtained in step 8 and the pixel distance between the feature points, the coordinates in the image coordinate system can be converted into coordinates in the real-world coordinate system, realising fully automatic dispensing path planning and the calculation of positioning precision and repeated measurement precision.
The invention has the following advantages: (1) features of the workpiece to be measured are extracted by machine vision, achieving pixel-level positioning and measurement precision, greatly reducing the product reject rate, with a high degree of automation and high production efficiency, which increases the competitiveness and flexibility of an enterprise; (2) the fully automatic dispensing positioning problem is solved by a fusion method that extracts target contour feature points through adaptive threshold segmentation and related processing and combines them with the multi-target positional relation; (3) single-target ROI regions are user-defined and numbered, each ROI region is processed with adaptive threshold segmentation, Canny edge detection and other operations from the OpenCV open-source vision library, and a coordinate mapping relation between the user-defined ROI regions and the original image is established, reducing the amount of data to process; (4) a feature extraction method based on relative positional relations is designed, which solves the positioning-accuracy problem caused by differences in workpiece placement, can describe the positional relation among multi-target features, and improves the universality of the software system and the positioning and measurement accuracy.
According to an embodiment of the present invention, in the step 4, the parameter information of the standard feature circle includes two circle center coordinates, two circle radii and a circle center distance.
According to an embodiment of the present invention, in step 5, the standard feature straight line extraction process is: user-define the start point and end point of the straight line, determine the sampling domain of straight-line feature points on the upper frame of the workpiece based on the two circle-centre coordinate positions, sample feature points bidirectionally in the horizontal and vertical directions within the sampling domain, add the samples to a sampling queue, and perform iterative optimisation with the RANSAC iterative algorithm to obtain the standard feature straight line.
According to one embodiment of the invention, in step 3, preprocessing includes binarization, adaptive threshold segmentation, Canny edge detection and Gaussian filtering noise reduction.
According to an embodiment of the present invention, the binarization uses an OTSU algorithm in an OpenCV open source visual library.
According to one embodiment of the invention, the adaptive threshold segmentation adopts an OTSU algorithm in an OpenCV open source vision library.
According to one embodiment of the invention, the Canny edge detection utilizes a Canny edge detection algorithm to obtain edge features on the ROI area image.
According to one embodiment of the invention, contour feature points are extracted using a contour extraction algorithm.
According to one embodiment of the invention, the standard feature circle extracts contour feature points based on the findContours function in OpenCV.
According to one embodiment of the invention, the standard feature straight line is fused with the relative position relationship of the standard feature circle through bidirectional sampling to obtain the contour feature point.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of a one-dimensional and two-dimensional measurement method of monocular vision according to the present invention;
FIG. 2 is a diagram of a custom ROI area;
FIG. 3 is a graph of adaptive threshold segmentation effects;
FIG. 4 is a line recognition depiction;
FIG. 5 is a graph of a characteristic circle measurement;
FIG. 6 is a diagram of a multi-target relative position relationship.
The reference numbers in the figures are: ID0, left optical center; ID1, right optical center; ID2, upper border line; l0, reference line; l1, straight line.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The monocular vision one-dimensional two-dimensional measurement method according to the embodiment of the present invention will be described in detail below with reference to the drawings.
The invention is suitable for various industries of machine vision identification and positioning, such as: the dispensing industry and the like.
The invention discloses a monocular vision one-dimensional two-dimensional measurement method, which depends on an OpenCV open source vision library and comprises the following steps:
step 1, image acquisition and camera calibration: acquiring an image of the workpiece by using a monocular camera; and calibrating the monocular camera to obtain a monocular camera internal reference matrix and a monocular camera external reference matrix.
The monocular camera internal and external reference matrices describe the coordinate transformation by which a three-dimensional real-world scene is projected onto a two-dimensional image through the monocular camera: the internal reference (intrinsic) matrix describes how a point in the camera coordinate system is imaged through the lens and pinhole of the camera into a pixel; the external reference (extrinsic) matrix describes how any point in the real-world coordinate system is mapped by rotation and translation to a point in the camera coordinate system.
The mapping from the real-world coordinate system to the camera coordinate system is a rigid-body transformation, formula (1):

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + T \tag{1}$$

wherein, the meaning represented by each symbol in the formula (1) is specifically as follows: $(X_w, Y_w, Z_w)$ are the coordinates of a point in the world coordinate system, $(X_c, Y_c, Z_c)$ are its coordinates in the camera coordinate system, $R$ is the $3 \times 3$ rotation matrix and $T$ is the $3 \times 1$ translation vector.

Equation (1) may be modified to form equation (2), its homogeneous form, as follows:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0^{\mathsf T} & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{2}$$

Mapping from the camera three-dimensional coordinate system to the two-dimensional image coordinate system follows the pinhole model. According to the principle of similar triangles, the following proportional relationship can be obtained, formula (3):

$$x = f \frac{X_c}{Z_c}, \qquad y = f \frac{Y_c}{Z_c} \tag{3}$$

wherein, the meaning represented by each symbol in the formula (3) is specifically as follows: $f$ is the focal length of the lens and $(x, y)$ are the coordinates of the projected point in the image coordinate system. In homogeneous matrix form this is formula (4):

$$Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} \tag{4}$$

Mapping from the two-dimensional image coordinate system to the pixel coordinate system involves no rotation transformation, only a change of the coordinate origin together with translation and scaling; the two origins do not coincide. Let the origin of the image coordinate system (the principal point) have coordinates $(u_0, v_0)$ in the pixel coordinate system, and let $dx$ and $dy$ be the physical size of a single pixel along the $x$-axis and $y$-axis respectively.

The coordinate relation of the two systems is formula (5):

$$u = \frac{x}{dx} + u_0, \qquad v = \frac{y}{dy} + v_0 \tag{5}$$

wherein, the meaning represented by each symbol in the formula (5) is specifically as follows: $(u, v)$ are the pixel coordinates of the point and $(u_0, v_0)$ are the pixel coordinates of the principal point.

The matrix transformation relation can be obtained, formula (6):

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/dx & 0 & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \tag{6}$$

Combining equations (2), (4) and (6), the parameters of the camera geometric model are obtained in conclusion, formula (7):

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f/dx & 0 & u_0 \\ 0 & f/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{7}$$

wherein, the meaning represented by each symbol in the formula (7) is specifically as follows: the first matrix on the right-hand side is the monocular camera internal reference (intrinsic) matrix, and $[R \; T]$ is the external reference (extrinsic) matrix.
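As a numerical check of the camera geometric model in formula (7), the projection chain can be sketched in a few lines; the intrinsic matrix and extrinsic pose below are illustrative assumptions, not calibration results:

```python
import numpy as np

def project_point(K, R, t, Pw):
    """Project a 3-D world point to pixel coordinates: Z_c [u, v, 1]^T = K [R | t] [X, Y, Z, 1]^T."""
    Pc = R @ Pw + t           # world -> camera coordinates (extrinsics, Eq. (2))
    uv1 = K @ Pc              # camera -> pixel coordinates (intrinsics, Eqs. (3)-(6))
    return uv1[:2] / uv1[2]   # divide by Z_c (homogeneous normalisation)

K = np.array([[800.0, 0.0, 320.0],    # f/dx,  0,   u0   (assumed values)
              [0.0, 800.0, 240.0],    # 0,    f/dy, v0
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)         # camera aligned with the world frame
print(project_point(K, R, t, np.array([0.1, 0.05, 2.0])))  # -> [360. 260.]
```

A point 0.1 m right and 0.05 m below the optical axis at 2 m depth lands 40 and 20 pixels from the principal point, as formula (3) predicts.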
Step 2, customizing the ROI region image: based on the image acquired in step 1, perform ROI framing of each single-target feature region in the image, mark and number the single-target ROI region images, and establish a user-defined coordinate mapping relation between each single-target ROI region image and the original image.
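The bookkeeping of this step can be sketched as follows; the region coordinates and image size are made-up values for illustration:

```python
import numpy as np

def crop_roi(img, x, y, w, h):
    """Crop a numbered single-target ROI; also return its top-left offset in the original image."""
    return img[y:y + h, x:x + w], (x, y)

def roi_to_image(pt, offset):
    """Map a point found inside an ROI back to original-image coordinates (the mapping of step 2)."""
    return (pt[0] + offset[0], pt[1] + offset[1])

img = np.zeros((480, 640), np.uint8)           # stand-in for the acquired image
roi, off = crop_roi(img, 100, 50, 64, 64)      # ROI number 0, say
print(roi.shape, roi_to_image((10, 20), off))  # -> (64, 64) (110, 70)
```

Keeping the offset with each numbered ROI is what lets step 8 convert feature coordinates found inside an ROI back to the original image.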
Step 3, image preprocessing: preprocess each ROI region image based on the single-target ROI region images generated in step 2.
Step 4, standard feature circle extraction: extracting contour feature points, fitting a standard feature circle through a minimum bounding circle iterative optimization algorithm, and outputting standard feature circle parameter information, wherein the standard feature circle parameter information comprises a standard feature circle center coordinate and a circle radius.
It should be noted that, in order to further optimise the target recognition effect and improve the accuracy of circle-centre positioning, the two lens positions of the workpiece (a camera module) are screened by contour search; the 2D point set surrounding each lens in the image is extracted, the circle of minimum area is fitted with the minimum-enclosing-circle function, and the circle-centre coordinates are solved. The minimum-enclosing-circle algorithm takes the recognised contour point set as input and outputs the centre and radius of the circle. The specific steps are explained as follows:
First, traverse all pixel points of the contour and determine four boundary points from their position coordinates: the uppermost point H, the lowermost point D, the leftmost point L and the rightmost point R. Solve the minimum enclosing circle of these four points according to their relative positions in the image and output its centre and radius. The specific cases for determining the minimum enclosing circle of four points are as follows:
in the first case: when the four points are all located on the same straight line, the two farthest points are taken as the two end points of the diameter of the circle, and the circle is the smallest coverage circle of the four points.
In the second case: when only three points are on the same straight line, two points which are farthest away on the same straight line are found, and a circumscribed circle of three corner points formed by the two points and another non-collinear point is the minimum covering circle of the group of points.
In the third case: when any three points in the four points are not on the same straight line, and three points in the set of points form a triangle, and another point falls on the inner side of the triangle, the circumscribed circle of the triangle is the smallest coverage circle of the set of points.
In a fourth case: when there is no case where any three points of the four points are collinear, and the four points are connected to form a convex quadrilateral, if there is a diagonal complement, the four points must be on the same circle, which is the smallest coverage circle of the set of points.
In the fifth case: when the condition that any three points of the four points are in the same line does not exist, the four points are connected to form a convex quadrangle, but any group of opposite angles are not complementary, a group of opposite angles and more than 180 degrees are certain to exist, if the two angles are obtuse angles, the remaining two angles and less than 180 degrees are certain to exist, the two points are two end points with the diameter, and the circle is the smallest covering circle of the group of points.
In the sixth case: when the four points do not exist in the condition that any three points are in the same line, and the four points are connected to form a convex quadrangle, but no group of opposite angles are complementary, a group of opposite angles and more than 180 degrees are certain to exist, if one of the two angles is an acute angle and the other angle is an obtuse angle, the vertex of the acute angle and two points of the other group of opposite angles form a triangle, and the circumscribed circle of the triangle is the smallest coverage circle of the group of points.
Iteration then begins. In the initial state set K = 0 (K is the iteration counter). Traverse all pixel points on the detected contour; if no pixel point outside the circle boundary is found, the circle is the final minimum enclosing circle of the point set. If a pixel point outside the boundary is found, proceed as follows: find the out-of-boundary point farthest from the circle centre, select three of the four points defining the current circle in turn and combine them with that out-of-boundary point, and for each combination check whether the remaining point now lies outside the circle; if it does, continue to the next combination. If the remaining point is not outside the circle, eliminate it, record the centre and radius of the circle in this case, and put the circle into a candidate queue.

In the second iteration, K = 1, traverse all pixel points against the circle most recently recorded in the candidate queue; if no pixel point outside the circle boundary is found, that circle is the final minimum enclosing circle. Otherwise, take the farthest out-of-boundary point Q, combine three of the four points in turn with Q to form new combinations, process the four combinations in order, compute the minimum enclosing circle of each combination, and whenever the remaining point is not outside the boundary, record the centre and radius of that circle and put it into the candidate queue.

Continue iterating in the same way until all pixel points on all contours have been traversed and every point is found to lie within the newly solved circle; the iteration then ends and the final circle has been found.
Step 5, standard feature straight line extraction: according to the standard feature circle centre coordinates extracted in step 4, user-define the start point and end point of the straight line, determine the sampling domain of the feature points, scan and sample feature points bidirectionally in the horizontal and vertical directions, fit the standard feature straight line with the RANSAC iterative algorithm, record the feature-point position information and fit the linear equation.
It should be noted that RANSAC (RANdom SAmple Consensus) is a general parameter-estimation method: it starts from as small a sample set as feasible and enlarges it with consistent data points. The feature points in the image are sampled randomly and outliers are eliminated to obtain a stable result. Outliers are data points far from the rest of the data; they may lie far below the other points (near the minimum) or far above them (near the maximum). The working principle is to identify the outliers in the data set and build the model from the inliers. This resampling technique estimates the model parameters using the minimum number of observations (data points) required to generate a candidate solution. RANSAC is accomplished by the following steps:

Step 1: given a data set S, randomly select the minimum number of points required to determine the model, and record the selected subset as S1.

Step 2: fit the model to the selected subset, i.e. construct the mathematical model M from the selected data set S1 and solve it.

Step 3: test the remaining points of the data set against the computed model M; a point within the error tolerance is judged an inlier, otherwise an outlier. The set S1' consisting of S1 together with all inliers is called the consensus (compatibility) set of S1.

Step 4: by comparison, record the model with the largest number of inliers.

Repeat the above steps until the iterations are exhausted or the current model is good enough; the chosen number of iterations N must be large enough to guarantee a high success probability.
Iteration number derivation: assume the proportion of "inliers" in the data is $t$ and each model computation uses $N$ points. The probability that all $N$ selected points are inliers is $t^N$, so the probability that the selection contains at least one outlier is $1 - t^N$. After $k$ iterations, $(1 - t^N)^k$ is the probability that every one of the $k$ model computations has picked an "outlier". The probability that $N$ correct points are sampled at least once, so that a correct model can be computed, is then $P = 1 - (1 - t^N)^k$. From the above equation, the iteration number formula (8) can be obtained:

$$k = \frac{\log(1 - P)}{\log(1 - t^N)} \tag{8}$$

wherein, the meaning represented by each symbol in the formula (8) is specifically as follows: $t$ is the inlier ratio, $N$ is the number of points used per model, $P$ is the required confidence and $k$ is the number of iterations.
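Formula (8) can be evaluated directly; for example, with confidence P = 0.99, inlier ratio t = 0.5 and N = 2 points per line hypothesis, about 17 iterations suffice (the sample values are illustrative):

```python
import math

def ransac_iterations(p_confidence, inlier_ratio, n_sample):
    """Number of RANSAC iterations k = log(1 - P) / log(1 - t^N), rounded up (formula (8))."""
    return math.ceil(math.log(1 - p_confidence) / math.log(1 - inlier_ratio ** n_sample))

print(ransac_iterations(0.99, 0.5, 2))  # -> 17
```

The count grows quickly as the inlier ratio drops, which is why the sampling domain of step 5 is restricted around the expected line.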
The process of fitting the straight line is: first, randomly select two points from the given set, compute the parameters of the straight line through them, and evaluate the distance of all remaining points to that line; points whose distance is below a given threshold are called inliers and counted accordingly. The sampling is then repeated to obtain the maximum consensus set, and the straight line is fitted on that basis.
RANSAC has good robustness: it can estimate the model parameters well even when many outliers are present.
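A compact pure-NumPy sketch of the RANSAC line fit described above; the iteration count, tolerance and synthetic data are illustrative assumptions:

```python
import numpy as np

def ransac_line(pts, iters=200, tol=2.0, seed=0):
    """Fit y = k*x + b by RANSAC: sample 2 points, count inliers within tol, keep the best model."""
    rng = np.random.default_rng(seed)
    best_inliers, best = 0, None
    for _ in range(iters):
        i, j = rng.choice(len(pts), 2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if x1 == x2:
            continue                       # skip degenerate (vertical) samples
        k = (y2 - y1) / (x2 - x1)
        b = y1 - k * x1
        d = np.abs(k * pts[:, 0] - pts[:, 1] + b) / np.hypot(k, 1)  # point-line distance
        n = int((d < tol).sum())
        if n > best_inliers:
            best_inliers, best = n, (k, b)
    return best

x = np.arange(50, dtype=float)
y = 2 * x + 1                      # true line y = 2x + 1
y[::5] += 80                       # inject 20% gross outliers
k, b = ransac_line(np.column_stack([x, y]))
print(k, b)  # -> 2.0 1.0
```

Despite one fifth of the points being far off the line, the consensus set recovers the true parameters exactly.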
Step 6, configuring a JSON standard detection file: store the ROI region image number and the position coordinates of the corresponding standard feature circle or standard feature straight line in the standard detection file.
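The patent does not spell out the layout of the standard detection file; one plausible JSON sketch (all field names and values are illustrative assumptions, not the patent's actual schema) is:

```json
{
  "rois": [
    {
      "id": 0,
      "rect": [120, 80, 64, 64],
      "feature": "circle",
      "center": [152.3, 111.8],
      "radius": 21.5
    },
    {
      "id": 2,
      "rect": [60, 10, 200, 40],
      "feature": "line",
      "points": [[65.0, 24.1], [255.0, 26.3]],
      "line": {"k": 0.0116, "b": 23.35}
    }
  ]
}
```

Reading such a file back in step 8 then only requires matching each `id` against the ROI numbering fixed in step 2.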
Step 7, updating contour feature points: based on the standard feature circle of step 4 and the standard feature straight line of step 5, when they need to be modified, adjust the position and size of the ROI region image, re-extract the contour feature points, and automatically store the updated contour feature point coordinates and the ROI region image number in the standard detection file.
Step 8, describing the position relation among multiple targets: and reading a standard detection file, performing feature matching on the image to be detected, obtaining position coordinate information of a target feature circle and position coordinate information of a target feature straight line according to the coordinate mapping relation between each ROI area image obtained in the step 2 and the original image, and obtaining the distance from the circle center to the circle center and the distance from the circle center to the straight line, thereby describing the position relation among multiple targets.
Step 9, planning the fully automatic dispensing path: according to the intrinsic and extrinsic parameter matrices of the monocular camera obtained in step 1, the feature-point position information (distance from circle centre to straight line) and the pixel distance between feature points (circle-centre distance) obtained in step 8, the coordinates in the image coordinate system can be converted into coordinates in the real-world coordinate system, realising fully automatic dispensing path planning and the calculation of positioning precision and repeated measurement precision.
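A simplified sketch of this image-to-world conversion, assuming the measurement plane is perpendicular to the optical axis at a known working distance Z (the general case uses the full extrinsic matrix); the intrinsic values are illustrative assumptions:

```python
import numpy as np

def pixel_to_world(uv, K, Z):
    """Back-project a pixel onto a plane at depth Z in front of the camera (axis-aligned case)."""
    fx, fy = K[0, 0], K[1, 1]
    u0, v0 = K[0, 2], K[1, 2]
    return np.array([(uv[0] - u0) * Z / fx, (uv[1] - v0) * Z / fy, Z])

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])          # assumed intrinsics
p1 = pixel_to_world((360.0, 240.0), K, Z=2.0)
p2 = pixel_to_world((320.0, 240.0), K, Z=2.0)
print(np.linalg.norm(p1 - p2))  # ≈ 0.1 (a 40-pixel spacing is 0.1 m at Z = 2 m with f = 800 px)
```

This is how a pixel distance between feature circles becomes a metric distance for the dispensing path.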
In step 4, the parameter information of the standard feature circle comprises two circle center coordinates, two circle radiuses and a circle center distance.
In step 5, the standard feature straight line extraction process is as follows: user-define the start point and end point of the straight line, determine the sampling domain of straight-line feature points on the upper frame of the workpiece based on the two circle-centre coordinate positions, sample feature points bidirectionally in the horizontal and vertical directions within the sampling domain, add the samples to a sampling queue, and perform iterative optimisation with the RANSAC iterative algorithm to obtain the standard feature straight line.
In step 3, the preprocessing includes binarization, adaptive threshold segmentation, Canny edge detection, Gaussian filtering for noise reduction, and perspective transformation.
In the Gaussian filtering step, each pixel in the image and its neighborhood are multiplied element-wise by a Gaussian weight matrix, and the final gray level is obtained as the weighted average. Like the normal distribution, the Gaussian function is large at the center and falls off toward the edges. In the binarized image, let a pixel have position coordinates $(x, y)$ and gray level $g(x, y)$. After Gaussian filtering, its gray level becomes:

$$g'(x, y) = \sum_{i=-k}^{k} \sum_{j=-k}^{k} w(i, j)\, g(x+i, y+j), \qquad w(i, j) = \frac{1}{2\pi\sigma^{2}} e^{-\frac{i^{2}+j^{2}}{2\sigma^{2}}} \tag{9}$$

The symbols in formula (9) have the following meanings: $(x, y)$ are the coordinates of the pixel; $g$ and $g'$ are its gray levels before and after filtering; $w(i, j)$ is the Gaussian weight at neighborhood offset $(i, j)$ (in practice the discrete weights are renormalized so they sum to 1); $2k+1$ is the side length of the Gaussian matrix; and $\sigma$ is the standard deviation of the Gaussian.
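The weighted-average filtering of formula (9) can be sketched in NumPy as follows; the kernel size and σ are illustrative choices (in production one would call the equivalent `cv2.GaussianBlur`).

```python
import numpy as np

def gaussian_kernel(k=3, sigma=1.0):
    """(k x k) Gaussian weight matrix, renormalized so the weights sum to 1."""
    ax = np.arange(k) - k // 2
    xx, yy = np.meshgrid(ax, ax)
    w = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return w / w.sum()

def gaussian_filter(img, k=3, sigma=1.0):
    """Replace each gray level by the weighted average of its k x k
    neighborhood, per formula (9); borders are handled by reflection."""
    w = gaussian_kernel(k, sigma)
    pad = k // 2
    p = np.pad(np.asarray(img, dtype=float), pad, mode="reflect")
    out = np.empty((p.shape[0] - 2 * pad, p.shape[1] - 2 * pad))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (w * p[i:i + k, j:j + k]).sum()
    return out
```

Because the weights sum to 1, a uniform image passes through unchanged, which is a quick sanity check on the kernel.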
Because of angular deviation during image acquisition, the image can be rectified by a perspective transformation, i.e. projected onto a new viewing plane. Its essence is that the perspective center, the image point, and the target point are collinear: according to the law of perspective rotation, the bearing surface (perspective plane) is rotated by some angle about the trace line (perspective axis), which changes the original bundle of projecting rays while leaving the projective geometric figure on the bearing surface unchanged. The transformation formula of the perspective transformation is:

$$[x', y', w'] = [u, v, w] \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \tag{10}$$

The symbols in formula (10) have the following meanings: $(u, v)$ are the pixel coordinates in the source image (with $w = 1$); $[x', y', w']$ are the transformed homogeneous coordinates; and $a_{11} \ldots a_{33}$ are the elements of the perspective transformation matrix.

The picture coordinates obtained after the perspective transformation are $(x, y)$, where $x = x'/w'$ and $y = y'/w'$. The transformation matrix in the above equation can be split into four parts: the first part, $\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$, represents a linear transformation used mainly for scaling and rotating the image; $[a_{31}, a_{32}]$ produces translation; $[a_{13}, a_{23}]^{T}$ produces the perspective effect; and $a_{33}$ is normally fixed to 1. The perspective transformation matrix therefore has 8 free parameters, so 4 coordinate pairs (8 equations) are needed to solve it. After the transformation, the expressions for $x$ and $y$ are:

$$x = \frac{a_{11}u + a_{21}v + a_{31}}{a_{13}u + a_{23}v + a_{33}}, \qquad y = \frac{a_{12}u + a_{22}v + a_{32}}{a_{13}u + a_{23}v + a_{33}}$$
therefore, Hough line detection (based on an OpenCV open source vision library) is carried out on the peripheral area of the frame of the workpiece (camera module) in the picture, four angular points are calculated and solved through coordinates of two end points of the line, the angular points are sequenced according to the serial numbers of the element images before the perspective transformation matrix is calculated, and then the perspective transformation result is obtained by calculating the transformation matrix.
The binarization adopts an OTSU algorithm in an OpenCV open source vision library.
The adaptive threshold segmentation adopts an OTSU algorithm in an OpenCV open source vision library.
Canny edge detection uses the Canny algorithm to obtain edge features on the ROI area image. The Canny algorithm, one of the most popular edge detection algorithms, extracts useful structural information from visual objects while greatly reducing the amount of data to be processed, and is widely used in computer vision systems.
The contour feature points are extracted by a contour extraction algorithm: for the standard feature circle, contour feature points are extracted with the findContours function in OpenCV; for the standard feature straight line, contour feature points are obtained by bidirectional sampling fused with the relative position of the standard feature circle.
Embodiment: to explain the process of the present invention, the positioning measurement of a camera module is taken as an example.
In industrial production, the workpiece to be measured (a camera module) is delivered to the processing platform by an assembly line. The monocular camera acquires an image of the workpiece (camera module), and ROI regions are frame-selected for each single target, as shown in FIG. 2 (ID 0: left optical center; ID 1: right optical center; ID 2: upper-frame straight line).
For each ROI region, an adaptive threshold segmentation algorithm (the OTSU algorithm, also known as Otsu's method) is used. It is a global thresholding method whose advantage is that it operates entirely on the image histogram, a one-dimensional array that is easy to obtain. The idea is to divide the gray levels of the image histogram into two classes by a candidate threshold, compute the between-class variance of the two classes, and iterate until the between-class variance reaches its maximum, which yields the threshold. The optical center of the workpiece (camera module) segmented by the adaptive threshold is shown in FIG. 3.
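The OTSU criterion just described can be sketched directly on the histogram; this is an exhaustive-search version for illustration (the example gray values are made up).

```python
import numpy as np

def otsu_threshold(gray):
    """Score every candidate threshold t on the 256-bin gray histogram and
    return the t that maximizes the between-class variance
    w0*w1*(m0 - m1)^2, i.e. the OTSU criterion."""
    hist = np.bincount(np.asarray(gray, dtype=np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()       # class probabilities
        if w0 == 0.0 or w1 == 0.0:
            continue
        m0 = (levels[:t] * p[:t]).sum() / w0    # class means
        m1 = (levels[t:] * p[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2          # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal example: two gray populations at 30 and 200
gray = np.array([30] * 50 + [200] * 50, dtype=np.uint8).reshape(10, 10)
t = otsu_threshold(gray)
```

In practice the equivalent `cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)` is used; it should pick a threshold between the two modes, as this sketch does.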
The process of extracting the straight line of the workpiece's upper frame is shown in FIG. 4. The straight line through the two circle centers serves as the reference line L0, and the ROI area of the upper frame is determined; within it, a straight line L1 parallel to L0 is selected. A line perpendicular to L0 through the left circle center P0 intersects L1 at point M0. Starting from M0, pixels are scanned one by one along the x direction, and for each scanned pixel the gray values are compared pairwise up and down along the direction perpendicular to the x axis; if two vertically adjacent pixels have different gray values, the upper pixel of the pair is taken as a sampling point and placed in the sampling queue. If no pair of differing pixel values is found within a limit of 10 pixels above and 10 pixels below, the search at that column stops and the next pixel in the x direction is scanned, until point M1 is reached, i.e. the intersection of L1 with the line perpendicular to L0 through the right circle center P1. The sampling points are placed in the sampling queue in order and iteratively optimized with the RANSAC algorithm.
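The column-by-column scan above can be sketched as follows; the function name, the axis-aligned simplification (scanning strictly along image rows/columns), and the synthetic image are all illustrative assumptions, not the invention's exact implementation.

```python
import numpy as np

def sample_top_edge(binary, x0, x1, y_ref, search=10):
    """For each column x in [x0, x1), search up to `search` px above and below
    the reference row y_ref for two vertically adjacent pixels with different
    values; record the upper pixel of the pair as a sampling point, or skip
    the column if no transition is found within the limit."""
    pts = []
    for x in range(x0, x1):
        for dy in range(-search, search):
            y = y_ref + dy
            if binary[y, x] != binary[y + 1, x]:
                pts.append((x, y))
                break
    return pts

# Synthetic frame edge: background above row 40, workpiece from row 40 down
binary = np.zeros((100, 100), dtype=np.uint8)
binary[40:, :] = 255
pts = sample_top_edge(binary, 10, 90, y_ref=42)
```

The resulting point list is what gets handed to the RANSAC line fit.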
A self-defined coordinate mapping is established between each single-target ROI area image and the original image, the targets being circles and a straight line. For the circles, contour feature points are extracted and iteratively optimized with the minimum enclosing circle algorithm to obtain the circle center coordinates, circle radii, and center-to-center distance. On this basis, feature points are sampled on the straight line, iteratively optimized with the RANSAC algorithm, and the distance from a circle center to the straight line is computed. The left circle is numbered 0 and the right circle 1; given an input ID number, the center coordinates, circle radius, pixel distance between the two circles, and the converted distance in the world coordinate system can be obtained for a single target. The test effect and results are shown in FIG. 5.
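The two distance measures used throughout (center-to-center and center-to-line) reduce to two small formulas; the example coordinates are hypothetical.

```python
import numpy as np

def center_distance(c0, c1):
    """Pixel distance between two fitted circle centers."""
    return float(np.hypot(c1[0] - c0[0], c1[1] - c0[1]))

def center_to_line(c, a, b):
    """Perpendicular distance from center c = (x, y) to a fitted line
    y = a*x + b, i.e. |a*x - y + b| / sqrt(a^2 + 1)."""
    return float(abs(a * c[0] - c[1] + b) / np.hypot(a, 1.0))

print(center_distance((100, 60), (220, 60)))  # 120.0 px between the optical centers
```

Multiplying these pixel distances by the calibrated scale (step 9) yields the world-coordinate distances reported in FIG. 5.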
The relative positional relationship of multiple targets is obtained by changing the ROI area numbers; with the ID numbers set to 1 and 2, the center-to-line distance is obtained, and the output result is shown in FIG. 6.
According to the specific requirements of the user, feature extraction is performed on the workpiece based on the OpenCV open-source vision library and a positioning and measurement algorithm is developed, ensuring that both the positioning accuracy and the repeated measurement accuracy are smaller than 1 pixel. The invention provides a monocular vision one-dimensional and two-dimensional measurement method based on feature extraction, which aims to improve the degree of automation of online dispensers, meet the differing workpiece assembly requirements of different users, and further reduce enterprise production costs while satisfying the accuracy requirements.
In industrial production, each workpiece is delivered to the processing area on an assembly line, and the positioning accuracy problem caused by differences in workpiece placement must be solved. The invention mainly extracts standard contours such as straight lines and circles for one-dimensional and two-dimensional measurement, and solves the fully automatic dispensing positioning problem by fusing self-defined ROI regions with the multi-target positional relationship.
The method performs feature positioning based on the OTSU algorithm, the Canny edge detection algorithm, the contour extraction algorithm, the minimum enclosing circle iterative optimization algorithm, and related OpenCV routines, and significantly improves positioning and measurement accuracy through the multi-target relative positional relationship.
The method frame-selects single target regions, numbers them, and establishes a coordinate mapping between each self-defined ROI area image and the original image; it extracts target contour feature points through adaptive threshold segmentation and related steps, fits the target shapes, and describes the positional relationship among the multi-target features, reducing the amount of data to be processed and improving the generality and operating efficiency of the software system.
The invention adopts self-defined ROI area images and a configured standard detection file describing the relative positional relationship among multiple targets, thereby solving the positioning accuracy problem caused by differences in workpiece placement.
The above description is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any equivalent substitution or modification made by a person skilled in the art within the technical scope disclosed by the present invention, according to the technical solutions of the present invention and the inventive concept thereof, shall fall within the protection scope of the present invention.
Claims (10)
1. A monocular vision one-dimensional two-dimensional measurement method depends on an OpenCV open source vision library, and is characterized by comprising the following steps:
step 1, image acquisition and camera calibration: acquiring an image of the workpiece by using a monocular camera; calibrating the monocular camera to obtain a monocular camera internal reference matrix and a monocular camera external reference matrix;
step 2, customizing the ROI area image: based on the image acquired in the step 1, carrying out ROI framing on the single target feature region in the image, marking and numbering the single target ROI region images, and establishing a self-defined coordinate mapping relation between each single target ROI region image and the original image;
step 3, image preprocessing: preprocessing each ROI regional image based on the single-target ROI regional image generated in the step 2;
step 4, standard feature circle extraction: extracting contour feature points, fitting a standard feature circle through a minimum bounding circle iterative optimization algorithm, and outputting standard feature circle parameter information, wherein the standard feature circle parameter information comprises a standard feature circle center coordinate and a circle radius;
step 5, standard feature straight line extraction: self-defining the starting point and the end point of a straight line according to the standard feature circle center coordinates extracted in the step 4, determining a sampling domain of feature points, scanning and sampling bidirectional feature points in the horizontal and vertical directions, fitting a standard feature straight line by using a RANSAC iterative algorithm, and recording the feature point position information and the fitted linear equation;
step 6, configuring a standard detection file: storing the ROI area image number, the position coordinates of the corresponding standard feature circle or standard feature straight line into a standard detection file;
step 7, updating contour feature points: based on the standard feature circle in the step 4 and the standard feature straight line in the step 5, when the standard feature circle and the standard feature straight line need to be modified, the position and the size of the ROI area image are adjusted, the contour feature points are extracted again, and the updated coordinates of the contour feature points and the ROI area image number are automatically stored in a standard detection file;
step 8, describing the position relation among multiple targets: reading a standard detection file, performing feature matching on an image to be detected, obtaining position coordinate information of a target feature circle and position coordinate information of a target feature straight line according to the coordinate mapping relation between each ROI area image obtained in the step 2 and an original image, and obtaining the distance from the circle center to the circle center and the distance from the circle center to the straight line, so as to describe the position relation among multiple targets;
step 9, planning a full-automatic dispensing path: and (3) converting coordinates in an image coordinate system into coordinates in a real world coordinate system according to the inside and outside parameter matrix of the monocular camera obtained in the step (1), the position information of the characteristic points obtained in the step (8) and the pixel distance between the characteristic points, so that the full-automatic dispensing path planning is realized, and the positioning precision and the repeated measurement precision are calculated.
2. The method for measuring monocular vision in one dimension and two dimensions according to claim 1, wherein: in the step 4, the parameter information of the standard feature circle comprises two circle center coordinates, two circle radii and a center-to-center distance.
3. A monocular vision one-dimensional two-dimensional measurement method according to claim 2, wherein: in step 5, the standard feature straight line extraction process is as follows: self-defining a starting point and an end point of a straight line, determining a sampling domain of straight line feature points of an upper frame of a workpiece based on the two circle center coordinate positions, sampling horizontal and vertical bidirectional feature points in the sampling domain, sending the samples to a sampling queue, and performing iterative optimization through a RANSAC iterative algorithm to obtain a standard feature straight line.
4. The method for measuring monocular vision in one dimension and two dimensions according to claim 1, wherein: in the step 3, preprocessing comprises binarization, adaptive threshold segmentation, Canny edge detection and Gaussian filtering noise reduction.
5. The method for measuring monocular vision in one dimension and two dimensions according to claim 4, wherein: the binarization adopts an OTSU algorithm in an OpenCV open source vision library.
6. The method for measuring monocular vision in one dimension and two dimensions according to claim 4, wherein: the adaptive threshold segmentation adopts an OTSU algorithm in an OpenCV open source vision library.
7. The method for measuring monocular vision in one dimension and two dimensions according to claim 4, wherein: and the Canny edge detection utilizes a Canny edge detection algorithm to obtain edge features on the ROI area image.
8. The method for measuring monocular vision in one dimension and two dimensions according to claim 1, wherein: and extracting the contour feature points by using a contour extraction algorithm.
9. The method for measuring monocular vision in one dimension and two dimensions according to claim 8, wherein: the standard feature circle extracts contour feature points based on the findContours function in OpenCV.
10. The method for measuring monocular vision in one dimension and two dimensions according to claim 8, wherein: the standard feature straight line obtains its contour feature points through bidirectional sampling fused with the relative position of the standard feature circle.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211044387.1A CN115112098B (en) | 2022-08-30 | 2022-08-30 | Monocular vision one-dimensional two-dimensional measurement method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115112098A true CN115112098A (en) | 2022-09-27 |
CN115112098B CN115112098B (en) | 2022-11-08 |
Family
ID=83336254
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211044387.1A Active CN115112098B (en) | 2022-08-30 | 2022-08-30 | Monocular vision one-dimensional two-dimensional measurement method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115112098B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116956640A (en) * | 2023-09-19 | 2023-10-27 | 深圳市艾姆克斯科技有限公司 | Adjusting method and system based on self-adaptive optimization of five-axis dispensing machine |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110110760A (en) * | 2019-04-17 | 2019-08-09 | 浙江工业大学 | A kind of workpiece positioning and recognition methods based on machine vision |
CN110197508A (en) * | 2019-07-10 | 2019-09-03 | 深圳西顺万合科技有限公司 | The method and device of the co-melting vision guide movement of 2D, 3D |
CN110503638A (en) * | 2019-08-15 | 2019-11-26 | 上海理工大学 | Spiral colloid amount online test method |
CN210664460U (en) * | 2019-08-12 | 2020-06-02 | 盐城市腾辉电子科技有限公司 | Novel point UV glues and uses detection tool |
WO2020114035A1 (en) * | 2018-12-04 | 2020-06-11 | 中国科学院自动化研究所 | Three-dimensional feature extraction method and apparatus based on machine vision |
US20200202572A1 (en) * | 2017-05-15 | 2020-06-25 | Lavision Gmbh | Method for calibrating an optical measurement set-up |
CN111460955A (en) * | 2020-03-26 | 2020-07-28 | 欣辰卓锐(苏州)智能装备有限公司 | Image recognition and processing system on automatic tracking dispensing equipment |
CN114494045A (en) * | 2022-01-10 | 2022-05-13 | 南京工大数控科技有限公司 | Large-scale straight gear geometric parameter measuring system and method based on machine vision |
CN114708338A (en) * | 2022-03-29 | 2022-07-05 | 博众精工科技股份有限公司 | Calibration method, device, equipment and medium of dispenser |
Also Published As
Publication number | Publication date |
---|---|
CN115112098B (en) | 2022-11-08 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||