CN116625270A - Machine vision-based full-automatic detection system and method for precisely turned workpiece - Google Patents

Machine vision-based full-automatic detection system and method for precisely turned workpiece Download PDF

Info

Publication number
CN116625270A
Authority
CN
China
Prior art keywords
workpiece
module
contour
digital image
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310530324.5A
Other languages
Chinese (zh)
Inventor
王国锋
李杰峰
户满堂
盛延亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN202310530324.5A priority Critical patent/CN116625270A/en
Publication of CN116625270A publication Critical patent/CN116625270A/en

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/08Measuring arrangements characterised by the use of optical techniques for measuring diameters
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration by the use of local operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a machine vision-based fully automatic detection system and method for precision-turned workpieces. The detection system comprises a hardware unit and a software unit; the hardware unit comprises a lighting unit, an acquisition unit, a workpiece placement table and a computer, and the software unit is integrated in the computer. The system uses backlight measurement to acquire an image with high edge contrast and converts it into a digital image for processing in the computer. The image is first preprocessed by Gaussian filtering and a morphological closing operation for deburring, and its boundary is then filled to obtain a floating-point contour edge vector. The contour feature corresponding to each region of the workpiece is identified, and based on the type of each contour feature the corresponding geometric-feature calculation method is indexed from the database, so that various geometric measurement tasks are completed and the data are analyzed, displayed and stored. The invention is suitable for rapid, large-batch detection of workpieces in fully automatic production scenarios, saving labor and improving efficiency.

Description

Machine vision-based full-automatic detection system and method for precisely turned workpiece
Technical Field
The invention relates to the technical field of detection of turning workpieces, mainly adopts a machine vision technology, and particularly relates to a full-automatic detection system and method for a precision turning workpiece based on machine vision.
Background
In precision turning, the dimensions and the form and position tolerances of the machined workpiece must be strictly controlled to ensure that product quality meets requirements. Rotating-body workpieces generally carry a variety of features, such as the diameter and taper of the outer circle, the radius of the transition fillet, the height of the countersunk head and the length of the bolt. Contact measurement techniques are well established and highly accurate, but they are mostly point-by-point methods, so the measurement speed is relatively slow. Non-contact measurement methods, in turn, often require the positions of the various features to be determined manually before measurement, which also costs considerable manpower and time. An automatic detection means with higher precision and efficiency is therefore required to meet the quality-inspection needs of precision-turned parts.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and, building on a conventional machine-vision detection system, provides a machine vision-based fully automatic detection system and method for precision-turned workpieces that addresses the multi-feature nature of the precision turning process.
The object of the invention is achieved by the following technical scheme:
a full-automatic detection system for precisely turning workpieces based on machine vision comprises a hardware unit and a software unit;
the hardware unit comprises a lighting unit, an acquisition unit, a workpiece placement table and a computer; the lighting unit consists of a voltage controller and a parallel light source, the acquisition unit consists of a bilateral telecentric lens and a CCD camera, and the voltage controller, the parallel light source, the workpiece placement table, the bilateral telecentric lens, the CCD camera and the computer are connected with one another in sequence;
the hardware unit is placed on the detection platform, and the software unit is integrated in the computer;
the software unit comprises an equipment control module, a template making module, an acquisition module, an image processing module, a contour type identification module, a characteristic calculation module, a result display module and a storage module; the device control module and the template making module are respectively connected with the acquisition module, and the acquisition module is sequentially connected with the image processing module, the contour type identification module, the characteristic calculation module, the result display module and the storage module;
the device control module is used for searching and selecting a CCD camera connected with the current PC through a network port and controlling the opening and closing of the CCD camera;
the template making module is used for setting the characteristics to be measured currently and the corresponding tolerance range information thereof, and collecting can be started after the setting of the template making module is completed;
the acquisition module is used for realizing automatic detection of the workpiece and judging whether the workpiece is stably placed or not;
the image processing module is used for receiving the backlight digital image acquired by the acquisition unit; firstly, noise reduction is performed on the backlight digital image; secondly, a closing operation is performed along a determined path following the workpiece edge in the backlight digital image, removing dust, flaws and burrs on the workpiece surface in the image; finally, the image boundary of the backlight digital image of the workpiece is filled;
the profile type recognition module is used for determining profile characteristics corresponding to the profiles of all areas of the workpiece to be detected;
the feature calculation module defines the calculation modes of various workpiece geometric features; based on the result of the contour type identification module, the geometric features required to be measured in the template making module are automatically indexed for each identified contour feature, and the calculation of the geometric features is completed;
the result display module is used for comparing the calculation result of the characteristic calculation module with the information such as the tolerance range in the template making module, analyzing the state of the workpiece, and displaying and recording the result;
the storage module stores the original workpiece image and the measured annotation display image into JPG and BMP formats, and stores the measurement information.
Further, after the automatic detection mode is selected in the acquisition module, the automatic detection process of the workpiece is started; in this process the placement of the workpiece is detected automatically, and once the workpiece is detected at the designated position of the workpiece placement table the subsequent calculation thread is started automatically; when the workpiece moves or leaves the field of view of the CCD camera, the calculation thread is closed.
Further, the image processing module firstly receives the backlight digital image acquired by the CCD camera and performs noise reduction on it with a Gaussian filtering method to remove noise interference; a morphological closing operation is then performed along a determined path following the workpiece edge in the backlight digital image to remove dust, flaws and burrs on the workpiece surface in the image; finally, the boundary of the backlight digital image is filled, a Canny edge extraction method is used to obtain the coarse edge of the image from the gradient relation at the edge, and a Zernike orthogonal moment-based method is used to obtain the sub-pixel edge of the image, giving a point set vector of floating-point coordinates.
Further, the step of determining the contour features corresponding to the contours of the areas of the workpiece to be measured by the contour type recognition module is as follows:
(1) Dividing the point set vector of the complete edge contour, by a Pratt-based binary tree decomposition algorithm, into two kinds of contours, single straight lines and single circular arcs, and storing them in binary tree node format;
(2) Since the previous step divides by dichotomy, a cut may fall in the middle of a straight-line or circular-arc contour and break it abnormally; the whole binary tree therefore needs to be traversed in preorder first, the contours meeting the splicing condition are spliced into one piece, and each spliced result is referred to as a segmented independent contour for short;
(3) Establishing a three-level characteristic tree system, wherein the first level is a segmented independent contour, and the edges of small-segment contours after complete contour segmentation are represented; the second level represents outline characteristics and represents the types of all areas of the workpiece to be tested; the third level represents geometric characteristics and represents geometric quantity or size indexes to be measured of the workpiece to be measured;
(4) The arrangement combination and the azimuth arrangement of the independent contours of each segment are predefined, the correspondence from the contours to the features is realized, and the features corresponding to the contours of each region of the workpiece to be detected are determined.
Furthermore, the Pratt-based binary tree decomposition algorithm unifies the positioning of straight lines and circular arcs by introducing Pratt algebraic fitting; then, based on the property that the third-largest eigenvalue in the Pratt algebraic-fitting solution characterizes the fitting error, a contour segmentation criterion is constructed from this eigenvalue; combining this with the fact that all points of a point set participate simultaneously in the data matrix calculation, a binary tree segmentation algorithm adapted to the fitting characteristics is designed, interrupted contours are re-spliced during binary tree decomposition by a preorder traversal, and contour segmentation by type is realized.
Further, the software unit is designed as four threads: a human-machine interaction thread, a CCD camera acquisition thread, a workpiece in/out judging thread and a calculation thread; in the workpiece in/out judging thread, the criterion for a workpiece entering is that the workpiece enters the field of view and is placed on the workpiece placement table and remains stationary, i.e. the workpiece is placed stably at the designated position; the criterion for a workpiece leaving is that the workpiece moves.
The invention also provides a machine vision-based full-automatic detection method for the precisely turned workpiece, which comprises the following steps:
s1, setting the characteristics to be measured currently and the corresponding tolerance range information thereof through a template making module, and starting to collect after finishing template setting;
s2, selecting an automatic detection mode in the acquisition module, and starting an automatic detection process of the workpiece; when detecting that a workpiece is placed at a designated position of a workpiece placement table, automatically starting a subsequent calculation thread; when the workpiece moves or leaves the field of view of the CCD camera, closing the calculation thread;
s3, the image processing module receives the backlight digital image acquired by the CCD camera, and performs noise reduction processing by using a Gaussian filtering method to remove noise interference;
s4, performing closed operation of a determined path along the edge of the workpiece in the backlight digital image by adopting a closed operation method by the image processing module, and removing dust, flaws and burrs on the surface of the workpiece in the backlight digital image;
s5, carrying out boundary filling of the backlight digital image through an image processing module, obtaining a rough edge of the backlight digital image based on a gradient relation at the edge by using a Canny edge extraction method, and finally obtaining a sub-pixel edge of the backlight digital image by using a Zernike orthogonal moment based method to obtain a floating point level point set vector;
s6, in the contour type identification module, a binary tree decomposition algorithm based on Pratt algebraic fitting is used for dividing the point set vector of the complete edge contour into contours of single straight lines or single arcs, and storing the contours in a binary tree node format;
s7, performing a preorder traversal of the whole binary tree and splicing the contours meeting the splicing condition into one piece; each contour segment obtained by the division in step S6 and the splicing in this step is referred to as a segmented independent contour for short;
s8, establishing a three-level feature tree system, wherein the first level is a segmented independent contour, and the contour edge after the complete contour segmentation is represented; the second level represents outline characteristics and represents the types of all areas of the workpiece to be tested; the third level represents geometric characteristics and represents geometric quantity or size indexes to be measured of the workpiece to be measured;
s9, predefining arrangement combination and azimuth arrangement of each sectional independent contour, realizing the correspondence from the contour to the feature, and determining the feature corresponding to the contour of each region of the workpiece to be detected;
s10, for each identified contour feature, automatically indexing geometric features required to be measured in a template making module, and completing calculation of the geometric features;
s11, comparing the calculated value of the characteristic calculation module with the tolerance range in the template making module, analyzing the state of the workpiece, and marking, displaying and recording the result;
s12, storing the original workpiece image and the measured annotation display image.
Compared with the prior art, the technical scheme of the invention has the following beneficial effects:
1. by introducing Pratt algebraic fitting, the positioning method of the straight line and the circular arc is unified, the problem of positioning the straight line and the circular arc in the Hough transformation or common fitting method is avoided, and the positioning efficiency is effectively improved;
2. Based on the properties of the Pratt algebraic-fitting eigenvalue, a reasonable contour segmentation threshold is set; combined with the fact that all points of a point set participate simultaneously in the data matrix calculation, a binary tree segmentation algorithm adapted to the fitting characteristics is designed, which weakens the dependence of the algorithm's time complexity on contour length and effectively improves segmentation efficiency.
3. The method based on preorder traversal of the binary tree solves the contour interruption caused by the binary-tree bisection, covers all interruption cases with only a few conditions, and can be implemented quickly and simply through recursion.
4. A three-level feature tree system is constructed from the result of segmenting the contour by type; it can effectively represent most revolving-body workpieces that contain no higher-order curves, so that automatic multi-feature measurement of multiple workpiece types is realized and the automation and systematicness of the detection system are improved.
Drawings
FIG. 1a is a functional block diagram of a hardware unit in the detection system of the present invention.
FIG. 1b is a functional block diagram of a software element in the detection system of the present invention.
FIG. 2 is a flow diagram of the contour binary-tree decomposition;
FIG. 3 is a flow chart of the splicing algorithm based on preorder traversal;
FIG. 4 is an illustration of the three-level feature tree.
Detailed Description
The invention is described in further detail below with reference to the drawings and the specific examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In a vision-inspection hardware system the most critical part is the illumination system; its design often determines the success or failure of the whole vision-inspection system. Illuminating the object to be measured is only one aspect: a good illumination system separates the target feature region from the background as well as possible, increases the contrast between target and background, makes the edge features distinct, reduces the complexity of the image-processing algorithms, effectively improves the measurement accuracy, and at the same time guarantees the reliability and stability of the vision-inspection system.
For this reason, this embodiment designs a fully automatic, machine vision-based detection system for precision-turned workpieces, see FIG. 1a and FIG. 1b. The detection system comprises a hardware unit and a software unit; the hardware unit comprises a lighting unit, an acquisition unit, a workpiece placement table 103 and a computer 106; the lighting unit consists of a voltage controller 101 and a parallel light source 102, the acquisition unit consists of a bilateral telecentric lens 104 and a CCD camera 105, and the voltage controller 101, the parallel light source 102, the workpiece placement table 103, the bilateral telecentric lens 104, the CCD camera 105 and the computer 106 are connected with one another in sequence; the hardware unit is placed on the detection platform and the software unit is integrated in the computer 106.
The software units comprise an equipment control module 201, a template making module 202, an acquisition module 203, an image processing module 204, a contour type identification module 205, a feature calculation module 206, a result display module 207 and a storage module 208; the device control module 201 and the template making module 202 are respectively connected with the acquisition module 203, and the acquisition module 203 is sequentially connected with the image processing module 204, the contour type identification module 205, the feature calculation module 206, the result display module 207 and the storage module 208.
The detection method based on the full-automatic detection system for the precisely turned workpiece comprises the following steps of:
hardware unit aspect: and starting the power supply of the CCD camera and the light source, and connecting the CCD camera with a computer through a GIGE interface. The light source is ensured to be coaxial with the lens as much as possible, and the workpiece placement stage 103 is adjusted so that its center is placed on the focal plane of the imaging system, and then fixed.
The implementation process of the software unit is as follows:
s1: the device control module 201 searches for and selects a camera device connected to the current PC through the internet access, and controls the on of the camera device. The template making module 202 can set the information such as the characteristics to be measured currently and the corresponding tolerance range, and can start collecting after the template setting is completed
S2: After the automatic detection mode is selected in the acquisition module 203, the automatic detection process for the revolving-body workpiece starts. In this process the placement of the workpiece is detected automatically; once the workpiece is detected at the designated position of the workpiece placement table, the subsequent calculation thread is started automatically, and when the workpiece moves or leaves the field of view of the CCD camera the calculation thread is closed.
In this embodiment, the software unit is designed as four threads, which are respectively a man-machine interaction thread, a camera acquisition thread, a work-piece in-out judgment thread and a calculation thread. In the in-out judging thread, the judging standard of the workpiece entering is as follows: the workpiece enters the field of view and is placed on the platform to remain stationary, i.e., stable in a designated position. The judgment standard of the workpiece leaving is as follows: the workpiece moves.
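To make the in/out judgment concrete, the following Python sketch (OpenCV and NumPy assumed available) decides from two consecutive grayscale frames whether a workpiece is present and stationary; the thresholds and the function name are illustrative assumptions, not values or identifiers taken from this embodiment.

```python
import cv2
import numpy as np

def workpiece_state(prev_gray, curr_gray,
                    presence_thresh=0.02, motion_thresh=0.001):
    """Illustrative workpiece in/out judgment for a backlit scene.

    presence_thresh: fraction of dark (silhouette) pixels required to
                     consider a workpiece present (assumed value).
    motion_thresh:   fraction of changed pixels below which the workpiece
                     is considered stationary (assumed value).
    """
    h, w = curr_gray.shape

    # In a backlight image the background is bright (~255) while the
    # workpiece silhouette is dark, so dark pixels indicate presence.
    dark_ratio = np.count_nonzero(curr_gray < 128) / (h * w)
    present = dark_ratio > presence_thresh

    # Frame differencing: a stationary workpiece leaves almost no
    # changed pixels between consecutive frames.
    diff = cv2.absdiff(curr_gray, prev_gray)
    moving_ratio = np.count_nonzero(diff > 20) / (h * w)
    stationary = moving_ratio < motion_thresh

    if present and stationary:
        return "start_calculation"   # placed stably: start the calculation thread
    return "stop_calculation"        # absent or moving: close the calculation thread
```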
S3: the image processing module 204 first receives the backlight digital image collected by the CCD camera, the backlight image has extremely high edge contrast, and then performs noise reduction processing by using a Gaussian filtering method to remove noise interference.
S4: The image processing module 204 then applies the morphological closing operation several times along a determined path following the workpiece edge in the backlit digital image to remove dust, flaws and large burrs from the workpiece surface in the image. Since the workpiece edge generally occupies only a small portion of the image, the computation of this path-restricted closing is much smaller than that of a global closing, and the calculation speed is significantly improved.
Edge-contour extraction is strongly affected by uncleaned dust, burrs and minor irregularities that may remain on the workpiece surface, and applying the closing operation several times can remove most of these influences simply and effectively. However, to meet the high-precision requirement the selected CCD camera generally has a very high resolution, and the backlight image contains a large number of blank background pixels and pixels inside the workpiece that are useless for edge extraction; simply using a global closing therefore wastes a great deal of computation and lowers the efficiency of the whole algorithm. This embodiment instead adopts an improved closing operation that sweeps the structuring element only over the pixels near the edge, achieving an effect similar to a global closing while improving the computation speed by hundreds of times.
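As a rough illustration of restricting the closing operation to the neighbourhood of the workpiece edge, the sketch below (Python with OpenCV, assumed available) approximates the described edge-path traversal by applying the closing only inside a padded bounding box of the silhouette; the kernel size and margin are assumed values, and the exact path-following scheme of this embodiment is not reproduced.

```python
import cv2
import numpy as np

def closing_near_edge(backlit_gray, kernel_size=5, margin=20):
    """Closing restricted to a region around the workpiece silhouette
    (kernel_size and margin are illustrative assumptions)."""
    # Binarise: background is bright, workpiece silhouette is dark.
    _, binary = cv2.threshold(backlit_gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Padded bounding box of the silhouette.
    ys, xs = np.nonzero(binary)
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin, binary.shape[0])
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin, binary.shape[1])

    # The closing removes dust, flaws and burrs on the silhouette boundary;
    # computing it only inside the ROI avoids the cost of a global closing
    # over the many useless background pixels of a high-resolution image.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                       (kernel_size, kernel_size))
    roi = binary[y0:y1, x0:x1]
    binary[y0:y1, x0:x1] = cv2.morphologyEx(roi, cv2.MORPH_CLOSE, kernel)
    return binary
```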
S5: the image processing module 204 performs backlight digital image boundary filling, and then obtains a coarse edge of the backlight digital image based on a gradient relation at the edge by using a Canny edge extraction method. And then obtaining the sub-pixel edges of the backlight digital image by using a method based on the Zernike orthogonal moment to obtain the point set vector of the floating point coordinates.
In this step the image boundary is filled first, so that when the object extends beyond one or more edges of the image the edge derivatives can still be computed and the edge can still be obtained. Since in a backlight image the background is generally bright, at a pixel value of 255, while the object has low pixel values, the filled border is set to the pixel value 255.
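A minimal sketch of the remaining preprocessing up to the coarse edge is shown below (Python/OpenCV, assumed available); the border is padded with the bright background value 255 as described above, and the Gaussian kernel, padding width and Canny thresholds are illustrative assumptions. The Zernike-moment sub-pixel refinement is only indicated as a placeholder.

```python
import cv2

def coarse_edge(backlit_gray, pad=8, canny_lo=50, canny_hi=150):
    """Gaussian denoising, bright-border padding and Canny coarse edge
    (pad, canny_lo and canny_hi are assumed values)."""
    # S3: Gaussian filtering to suppress sensor noise.
    smoothed = cv2.GaussianBlur(backlit_gray, (5, 5), 0)

    # S5 (first part): pad with the bright background value 255 so that
    # edge gradients remain computable where the object meets the border.
    padded = cv2.copyMakeBorder(smoothed, pad, pad, pad, pad,
                                cv2.BORDER_CONSTANT, value=255)

    # Coarse, pixel-level edge from the gradient relation at the edge.
    edges = cv2.Canny(padded, canny_lo, canny_hi)

    # Sub-pixel refinement with Zernike orthogonal moments would be applied
    # to these edge pixels to obtain floating-point coordinates (not shown).
    return edges
```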
S6: the contour type recognition module 205 first performs a dichotomy decomposition on the point set vector according to an independent contour algorithm based on Pratt algebraic fitting until the point set vector is divided into independent contours such as a single straight line or a single arc, and stores the independent contours in a binary tree node format, that is, the independent contours are nodes of a binary tree.
The cross-section of most revolving-body workpieces is composed of two basic geometric elements, straight lines and circular arcs; through different orientations, positions, orders and numbers these two elements form the various features such as outer circles and countersunk heads, and the expression of Pratt algebraic fitting can represent the two-dimensional geometry of both circular arcs and straight lines at the same time. In the Pratt calculation, the eigenvector corresponding to the smallest positive eigenvalue gives the best-fit parameters, and that eigenvalue characterizes the total error of the fitting process. The eigenvector can serve as the initial value of the geometric fitting in the subsequent feature calculation, and the eigenvalue can serve as the independent-contour criterion for judging whether the current point set vector is a single straight line or a single circular arc.
See fig. 2, S6 specifically includes the following processes and principles:
s6.1: in Pratt algebraic fitting, the equation for a hypothetical circle is:
A(x² + y²) + Bx + Cy + D = 0  (1-1)
Compared with the most commonly used general least-squares circle fit, the parameter A is added in Equation (1-1) in front of the quadratic term; this avoids singular points when fitting straight lines, and straight lines can be distinguished from circular arcs by setting a suitable threshold on the value of A.
In a geometric fit, the error d_i of each data point is its geometric distance to the circle, namely:
d_i = r_i − R = √((x_i − a)² + (y_i − b)²) − R  (1-2)
where (a, b) is the centre of the circle, R is its radius and r_i is the distance from the data point to the centre. Compared with geometric fitting, algebraic fitting generally uses some quantity f_i that is easy to calculate instead of d_i, and the total error equation of the algebraic fit can then be expressed as:
F = Σ f_i²  (1-3)
where f_i is generally defined as a simple algebraic expression without square roots.
In the commonly used algebraic fit, the single-point error f_i is:
f_i = r_i² − R² = (r_i + R)(r_i − R) = d_i(2R + d_i) ≈ 2R·d_i  (1-4)
The approximate equality holds because, except for points that deviate severely from the sample, the deviation d_i of each point is much smaller than R, for otherwise no reasonable fitting result could be obtained. The main factor in the fitting error is therefore R·d_i. Clearly, to make the error smaller, the fit will tend towards circles with a smaller radius R, and the error multiplied by the coefficient R is significantly amplified, which makes the algorithm more sensitive to errors and less stable.
Therefore, in the Pratt fit, to solve the above problem the error equation is taken as:
F_P = Σ [A(x_i² + y_i²) + Bx_i + Cy_i + D]² / (B² + C² − 4AD)  (1-5)
Since B² + C² − 4AD = 4A²R² for a circle, this division effectively suppresses the influence of the radius seen in Equation (1-4) and guarantees the accuracy of the error estimate in the algebraic fit.
However, if Equation (1-5) is solved directly through the normal equations, a homogeneous system of linear equations is obtained, which has only the zero solution or infinitely many proportional solutions, clearly undesirable. Since scaling all coefficients of Equation (1-1) in proportion does not change the actual shape of the circle, a constraint equation can be set:
R(H) = B² + C² − 4AD = 1  (1-6)
where H = (A, B, C, D). Combining this with Equation (1-5) gives:
F_P = Σ [Az_i + Bx_i + Cy_i + D]²  (1-7)
where z_i = x_i² + y_i².
The problem now reduces to minimizing the error equation (1-7) under the constraint (1-6), which can be solved by the Lagrange multiplier method, namely:
L(H, η) = F_P(H) − η(R(H) − 1)  (1-8)
where F_P(H) is the error equation, R(H) is the constraint equation, and η is the Lagrange multiplier.
In summary, the Pratt fit has two important advantages over the ordinary algebraic fit: on the one hand, its equation contains the coefficient A and can therefore represent both kinds of primitive (circular arcs and straight lines) that occur in the segmented independent contours; on the other hand, its error equation is better conditioned, so the calculation result is comparatively accurate.
S6.2: equations 1-8 are first converted to a matrix form,
let the parameter vector:
H = [A B C D]ᵀ  (1-9)
let the data matrix be expressed as:
the n×4 matrix X whose i-th row is (z_i, x_i, y_i, 1)  (1-10)
where (x_i, y_i) are the coordinates of the i-th point of the fitted data set and z_i = x_i² + y_i². The coefficient matrix can then be defined as:
M = XᵀX  (1-11)
thus, formulas 1-7 can be converted to:
F_P(H) = HᵀMH  (1-12)
For the constraint, let the constraint matrix be:
N =
[  0   0   0  −2
   0   1   0   0
   0   0   1   0
  −2   0   0   0 ]  (1-13)
so that the constraint equation (1-6) can be converted to:
R(H) = HᵀNH  (1-14)
in combination with 1-8, 1-12, 1-14, the Lagrangian equation can be converted to:
L(H, η) = HᵀMH − η(HᵀNH − 1)  (1-15)
Taking the partial derivatives of Equation (1-15) with respect to H and η and setting them to zero yields the characteristic equations:
MH = ηNH, i.e. N⁻¹MH = ηH  (1-16)
HᵀNH = 1  (1-17)
The matrix N in Equation (1-16) is invertible, so the rearranged form holds; H is thus an eigenvector of N⁻¹M and η the corresponding eigenvalue.
S6.3: since M is a symmetric matrix, it may be orthogonalized,
i.e. it admits a symmetric square root Y with
M = YY,  Y = Yᵀ  (1-18)
Substituting into N⁻¹MH = ηH and multiplying on the left by Y gives
YN⁻¹Y(YH) = η(YH)  (1-19)
It is easy to see that YN⁻¹Y and N⁻¹M have the same eigenvalues, and by Sylvester's law of inertia YN⁻¹Y = YᵀN⁻¹Y has the same inertia as N⁻¹, namely three positive eigenvalues and one negative eigenvalue; hence N⁻¹M also has three positive eigenvalues and one negative eigenvalue.
S6.4: Multiplying Equation (1-16) on the left by Hᵀ gives HᵀMH = ηHᵀNH = η. Furthermore, it is known that:
1) Let A be any m×n matrix and x any n×1 vector; then
xᵀAᵀAx = (Ax)ᵀ(Ax) ≥ 0  (1-20)
so AᵀA is necessarily a positive semidefinite matrix.
Since M = XᵀX is therefore positive semidefinite, η ≥ 0 (in practice a very small negative value may occur when a perfect straight line is fitted). The eigenvector of the characteristic equation N⁻¹MH = ηH corresponding to the smallest positive eigenvalue gives the best-fit parameters.
2) Since
F_P(H) = HᵀMH = η  (1-21)
the eigenvalue η characterizes the total error of the fit, and a smaller value indicates that the parameters fit the data better.
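For reference, a compact NumPy sketch of the Pratt fit derived in S6.1–S6.4 is given below: it builds the data matrix, the coefficient matrix M and the constraint matrix N, solves the characteristic equation (1-16), and returns both the parameter vector H = (A, B, C, D) belonging to the smallest positive eigenvalue and that eigenvalue η, which is later used as the fitting-error criterion. The numerical tolerance and the function name are assumptions of this sketch.

```python
import numpy as np

def pratt_fit(points, eps=1e-12):
    """Pratt algebraic fit of A(x^2+y^2)+Bx+Cy+D=0 to an n-by-2 point array.

    Returns (H, eta): H = (A, B, C, D) from the eigenvector of the smallest
    positive eigenvalue of N^-1 M, and eta, the fitting-error eigenvalue.
    """
    x, y = points[:, 0].astype(float), points[:, 1].astype(float)
    z = x * x + y * y

    # Data matrix (1-10): one row (z_i, x_i, y_i, 1) per point.
    X = np.column_stack([z, x, y, np.ones_like(x)])
    M = X.T @ X                                   # coefficient matrix (1-11)

    # Constraint matrix (1-13): H^T N H = B^2 + C^2 - 4AD.
    N = np.array([[0.0, 0.0, 0.0, -2.0],
                  [0.0, 1.0, 0.0,  0.0],
                  [0.0, 0.0, 1.0,  0.0],
                  [-2.0, 0.0, 0.0, 0.0]])

    # Characteristic equation (1-16): N^-1 M H = eta H.
    eigvals, eigvecs = np.linalg.eig(np.linalg.inv(N) @ M)
    eigvals, eigvecs = eigvals.real, eigvecs.real

    # Three eigenvalues are positive and one negative; keep the smallest positive.
    idx = np.argmin(np.where(eigvals > eps, eigvals, np.inf))
    H, eta = eigvecs[:, idx], eigvals[idx]

    # Normalise so that the constraint B^2 + C^2 - 4AD = 1 holds (1-6).
    H = H / np.sqrt(abs(H @ N @ H))
    return H, eta
```

A point column is then accepted as a single independent contour when η falls below the segmentation threshold, and the magnitude of A separates straight lines (A close to zero) from circular arcs.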
S6.5: From S6.1–S6.4, together with the basic properties of the geometric elements, two criteria can be obtained:
1) The eigenvalue η characterizes the magnitude of the fitting error; at the same noise level, the larger η is for a segment of the point column, the less likely that segment is to be a single independent contour.
2) For an actual two-dimensional workpiece contour image it is difficult to establish a stable, resolvable functional relation between the position coordinates x and y. Assuming instead that the curve can be represented in the parametric form {(x(c), y(c)) | c ∈ [a, b]}, its curvature can be expressed as:
κ(c) = |x′(c)y″(c) − y′(c)x″(c)| / (x′(c)² + y′(c)²)^(3/2)  (1-22)
In this form the first and second derivatives of x(c) and y(c) in Equation (1-22) can both be calculated by finite differences, i.e.:
y′(i) ≈ (y(i+1) − y(i−1)) / (2h),  y″(i) ≈ (y(i+1) − 2y(i) + y(i−1)) / h²  (1-23)
where y(i) is the y coordinate of the i-th point and the derivatives of x are calculated in the same way as in Equation (1-23). In a two-dimensional image with integer coordinates the step is h = 1.
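A small sketch of the point-by-point curvature computation of criterion 2), using the central differences of Equation (1-23) with step h = 1, might look as follows (NumPy assumed; the small epsilon guarding the division is an assumption of this sketch).

```python
import numpy as np

def discrete_curvature(contour):
    """Curvature at every interior point of an ordered n-by-2 (x, y) contour,
    using central differences with step h = 1 as in (1-22)/(1-23)."""
    x = contour[:, 0].astype(float)
    y = contour[:, 1].astype(float)

    # First and second derivatives by central differences (h = 1).
    dx  = (x[2:] - x[:-2]) / 2.0
    dy  = (y[2:] - y[:-2]) / 2.0
    ddx = x[2:] - 2.0 * x[1:-1] + x[:-2]
    ddy = y[2:] - 2.0 * y[1:-1] + y[:-2]

    # Curvature of a parametric plane curve, Equation (1-22).
    return np.abs(dx * ddy - dy * ddx) / (np.power(dx * dx + dy * dy, 1.5) + 1e-12)
```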
S6.6: Referring to FIG. 2, in the contour decomposition process a Pratt fit is first applied to the contour point column, and it is then judged whether the third-largest eigenvalue is smaller than the set threshold; if not, the point column is bisected and the Pratt fit is applied again to each half. If the fitting eigenvalues of the left and right halves are both far smaller than the eigenvalue before bisection, the halves can be regarded as independent contour segments; if not, a point-by-point curvature computation is carried out so as to obtain a more accurate contour segmentation.
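The decomposition loop of S6.6 can be sketched as a simple recursion on top of the pratt_fit and discrete_curvature functions above; the threshold, the "far smaller" factor, the minimum segment length and the choice of splitting at the curvature peak are assumptions of this sketch rather than values fixed by this embodiment.

```python
import numpy as np

def decompose(points, threshold=1e-3, improve_factor=0.1):
    """Recursively split an ordered point set until every leaf satisfies the
    independent-contour criterion (illustrative parameter values)."""
    _, eta = pratt_fit(points)
    node = {"points": points, "eta": eta, "left": None, "right": None}

    # Leaf: the fitting-error eigenvalue is below the segmentation threshold.
    if eta < threshold or len(points) < 6:
        return node

    mid = len(points) // 2
    left, right = points[:mid], points[mid:]
    left_eta, right_eta = pratt_fit(left)[1], pratt_fit(right)[1]

    # If halving does not clearly reduce the error, the cut probably landed
    # inside a single primitive; a point-by-point curvature analysis is then
    # used to choose a better split point (here: the curvature peak).
    if not (left_eta < improve_factor * eta and right_eta < improve_factor * eta):
        kappa = discrete_curvature(points)
        mid = int(np.argmax(kappa)) + 1
        left, right = points[:mid], points[mid:]

    node["left"] = decompose(left, threshold, improve_factor)
    node["right"] = decompose(right, threshold, improve_factor)
    return node
```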
S7: The contour type recognition module 205 then performs a preorder traversal of the entire binary tree and splices together the independent contours that satisfy the splicing condition, i.e. re-joins into one more complete independent contour those segments of the same contour that were separated by the dichotomy. Algebraic fitting is computed with all points of a point set participating simultaneously rather than point by point, so the whole point set formed by all contours can be halved repeatedly until every contour segment satisfies the independent-contour criterion. In this process a complete independent contour may be divided into several small independent contours, which then need to be spliced back together. To facilitate the subsequent splicing, the method takes the complete contour as the root of a binary tree, each contour segment separated by the dichotomy as a node of the tree, and each independent contour segment as a terminal node (leaf) of the tree, forming a full binary tree.
See fig. 3, which specifically includes the following processes and principles:
s7.1: the complete outline is firstly taken as the root of a binary tree, each section of outline separated by a dichotomy is taken as a node of the tree, and each section of independent outline is taken as a terminal node (leaf) of the tree, so that a full binary tree is formed.
S7.2: A preorder traversal of the binary tree is performed until a node without children, i.e. a terminal node, is encountered; this terminal node is stored temporarily and the traversal continues until the next terminal node is found. Once the buffer holds two terminal nodes, it is judged whether they are siblings: if so, the left node is taken as an independent contour and the right node is placed in the buffer to wait for the next terminal node; if not, the two are spliced, a Pratt algebraic fit is performed on the spliced contour, and it is judged whether the eigenvalue is smaller than the threshold. If it is, the spliced contour is placed in the buffer to wait for the next terminal node; if not, the left node is taken as an independent contour and the right node is placed in the buffer to wait for the next terminal node.
In summary, from the structure of a full binary tree it is known that: 1. only the terminal nodes of the binary tree are independent contours, so only they can be spliced; 2. all terminal nodes taken together form the initial complete contour; 3. starting from the left side of the binary tree, splicing can only occur between the two nearest terminal nodes, because only adjacent independent contours can be spliced and the independent-contour criterion cannot be satisfied across an intervening independent contour; 4. splicing is impossible between siblings, because siblings were bisected precisely because they did not satisfy the independent-contour criterion together. Therefore, when this embodiment uses the preorder traversal, all splicable independent contours can be visited by taking "not siblings" and "no children (terminal node)" as the conditions. A stricter threshold is then used as the judgment, already-spliced contours can be spliced further, and when the preorder traversal ends all splicing is complete. The whole process runs quickly, the code structure is simple, and the splicing requirement is well satisfied.
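Under the same assumptions, the splicing pass of S7.1/S7.2 can be sketched as follows, reusing pratt_fit and the node dictionaries produced by decompose above; the stricter splicing threshold is an assumed value, and the sibling test between consecutive leaves is a simplification of the buffer-based procedure described in S7.2.

```python
import numpy as np

def stitch_leaves(root, stitch_threshold=5e-4):
    """Preorder traversal that re-joins adjacent leaves wrongly separated by
    the dichotomy (stitch_threshold is an illustrative, stricter value)."""
    leaves, parents = [], []

    def preorder(node, parent):
        if node["left"] is None and node["right"] is None:
            leaves.append(node)
            parents.append(parent)
            return
        preorder(node["left"], node)
        preorder(node["right"], node)

    preorder(root, None)

    merged = [leaves[0]["points"]]
    for prev_parent, leaf, parent in zip(parents, leaves[1:], parents[1:]):
        candidate = np.vstack([merged[-1], leaf["points"]])
        # Siblings were bisected precisely because they failed the criterion
        # together, so they are never merged; otherwise merge whenever the
        # joint Pratt fit still passes the stricter splicing threshold.
        if parent is not prev_parent and pratt_fit(candidate)[1] < stitch_threshold:
            merged[-1] = candidate
        else:
            merged.append(leaf["points"])
    return merged
```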
S8: as shown in fig. 4, in order to raise the result of contour segmentation to the feature recognition level, the contour type recognition module 205 establishes a three-level feature tree system, where the first level is a segmented independent contour, and represents the edges of the small-segment contour after the complete contour segmentation; the second level represents outline characteristics and represents the types of all areas of the workpiece to be measured, such as countersunk heads and fillets; the third level represents geometric features representing geometric or dimensional indicators of the workpiece to be measured, such as the radius of the fillet and the height of the countersunk head.
S9: the profile type recognition module 205 finally predefines the permutation, combination and azimuth arrangement of the individual profiles of each segment, so as to realize the correspondence from the profile to the feature, and determine the feature corresponding to the profile of each region of the workpiece to be measured.
The manner in which the various workpiece profile features are formed from a substantially straight line or circular arc is established in this embodiment. Firstly, the symmetry axis of the revolving body workpiece is obtained, then, based on the symmetry axis, the independent contours are divided into a group of symmetrical contour pairs, and then, the symmetrical contour pairs are compared with a feature library to obtain the features represented by the contours of all positions. For example, the countersunk head of the bolt consists of two symmetrical straight lines with different slopes and a straight line which is not symmetrical about the axis; the fillet of the bolt is composed of two symmetrical circular arcs.
S10: The feature calculation module 206 defines the calculation modes of the various geometric features; based on the result of the contour type recognition module 205, the geometric features required to be measured in the template making module 202 are automatically indexed for each recognized contour feature, and their calculation is completed. For example, the radius of a fillet region is calculated by an arc-approximation algorithm under a tangency constraint, and the diameter of a shaft section is obtained by calculating the distance between two straight lines.
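As one example of the geometric-feature calculations indexed here, the shaft-section diameter can be taken as the distance between the two straight lines fitted to a symmetric contour pair; the sketch below assumes each line is given in the form Bx + Cy + D = 0 (i.e. a Pratt result with A approximately zero) and that the two lines are close to parallel. The pixel result would then be converted to millimetres through the calibration of the bilateral telecentric lens.

```python
import numpy as np

def shaft_diameter(line1, line2):
    """Distance between two (nearly) parallel lines B*x + C*y + D = 0,
    each given as a coefficient triple (B, C, D)."""
    b1, c1, d1 = line1
    b2, c2, d2 = line2

    # Normalise both lines so that (B, C) is a unit normal vector.
    n1, n2 = np.hypot(b1, c1), np.hypot(b2, c2)
    b1, c1, d1 = b1 / n1, c1 / n1, d1 / n1
    b2, c2, d2 = b2 / n2, c2 / n2, d2 / n2

    # Make the two unit normals point the same way before comparing offsets.
    if b1 * b2 + c1 * c2 < 0:
        b2, c2, d2 = -b2, -c2, -d2

    # For parallel lines the diameter is the difference of signed offsets.
    return abs(d1 - d2)
```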
S11: the result display module 207 compares the calculated value of the feature calculation module 206 with the tolerance range in the template formulation module 202, analyzes the state of the workpiece, and displays and records the result as a label.
S12: The storage module 208 can store the original workpiece image and the measured annotation display image in formats such as JPG and BMP, and can store the measurement information in a data format such as Excel.
The invention is not limited to the embodiments described above. The above description of specific embodiments is intended to describe and illustrate the technical aspects of the present invention, and is intended to be illustrative only and not limiting. Numerous specific modifications can be made by those skilled in the art without departing from the spirit of the invention and scope of the claims, which are within the scope of the invention.

Claims (7)

1. The full-automatic detection system for the precision turning workpiece based on the machine vision is characterized by comprising a hardware unit and a software unit;
the hardware unit comprises a lighting unit, an acquisition unit, a workpiece placement table and a computer; the lighting unit consists of a voltage controller and a parallel light source, the acquisition unit consists of a bilateral telecentric lens and a CCD camera, and the voltage controller, the parallel light source, the workpiece placement table, the bilateral telecentric lens, the CCD camera and the computer are connected with one another in sequence;
the hardware unit is placed on the detection platform, and the software unit is integrated in the computer;
the software unit comprises an equipment control module (201), a template making module (202), an acquisition module (203), an image processing module (204), a contour type identification module (205), a characteristic calculation module (206), a result display module (207) and a storage module (208); the device control module (201) and the template making module (202) are respectively connected with the acquisition module (203), and the acquisition module (203) is sequentially connected with the image processing module (204), the contour type identification module (205), the feature calculation module (206), the result display module (207) and the storage module (208);
the device control module (201) is used for searching and selecting a CCD camera connected with the current PC through a network port and controlling the opening and closing of the CCD camera;
the template making module (202) is used for setting the characteristics to be measured currently and the corresponding tolerance range information thereof, and collecting is started after the setting of the template making module (202) is completed;
the acquisition module (203) is used for realizing automatic detection of the workpiece and judging whether the workpiece is stably placed or not;
the image processing module (204) is used for receiving the backlight digital image acquired by the acquisition unit; firstly, noise reduction is performed on the backlight digital image; secondly, a closing operation is performed along a determined path following the workpiece edge in the backlight digital image, removing dust, flaws and burrs on the workpiece surface in the image; finally, the image boundary of the backlight digital image of the workpiece is filled;
the profile type recognition module (205) is used for determining profile characteristics corresponding to the profiles of all areas of the workpiece to be detected;
the feature calculation module (206) defines the calculation modes of various workpiece geometric features; based on the result of the contour type identification module (205), the geometric features required to be measured in the template making module (202) are automatically indexed for each identified contour feature, and the calculation of the geometric features is completed;
the result display module (207) is used for comparing the calculation result of the characteristic calculation module (206) with the tolerance range information in the template making module (202), analyzing the state of the workpiece, and displaying and recording the result;
the storage module (208) stores the original workpiece image and the measured annotation display image into JPG and BMP formats, and stores the measurement information.
2. The full-automatic detection system for precision turning workpieces based on machine vision according to claim 1, wherein the automatic detection process for the workpiece is started after the automatic detection mode is selected in the acquisition module (203); in this process the placement of the workpiece is detected automatically, and once the workpiece is detected at the designated position of the workpiece placement table the subsequent calculation thread is started automatically; when the workpiece moves or leaves the field of view of the CCD camera, the calculation thread is closed.
3. The full-automatic detection system for precisely turning workpieces based on machine vision according to claim 1, wherein the image processing module (204) firstly receives the backlight digital image acquired by the CCD camera and performs noise reduction on it with a Gaussian filtering method to remove noise interference; a morphological closing operation is then performed along a determined path following the workpiece edge in the backlight digital image to remove dust, flaws and burrs on the workpiece surface in the image; finally, the boundary of the backlight digital image is filled, a Canny edge extraction method is used to obtain the coarse edge of the image from the gradient relation at the edge, and a Zernike orthogonal moment-based method is used to obtain the sub-pixel edge of the image, giving a point set vector of floating-point coordinates.
4. A machine vision based full automatic inspection system for precision turning workpieces according to claim 3, wherein the step of determining contour features corresponding to the contours of the respective areas of the workpiece to be inspected by the contour type recognition module (205) is as follows:
(1) Dividing the point set vector of the complete edge contour, by a Pratt-based binary tree decomposition algorithm, into two kinds of contours, single straight lines and single circular arcs, and storing them in binary tree node format;
(2) Performing a preorder traversal of the whole binary tree and splicing the contours that meet the splicing condition into one piece; the spliced result is referred to as a segmented independent contour for short;
(3) Establishing a three-level characteristic tree system, wherein the first level is a segmented independent contour, and the edges of small-segment contours after complete contour segmentation are represented; the second level represents outline characteristics and represents the types of all areas of the workpiece to be tested; the third level represents geometric characteristics and represents geometric quantity or size indexes to be measured of the workpiece to be measured;
(4) The arrangement combination and the azimuth arrangement of the independent contours of each segment are predefined, the correspondence from the contours to the features is realized, and the features corresponding to the contours of each region of the workpiece to be detected are determined.
5. The machine vision-based full-automatic detection system for precision turning workpieces according to claim 4, wherein the Pratt-based binary tree decomposition algorithm unifies the positioning of straight lines and circular arcs by introducing Pratt algebraic fitting; then, based on the property that the third-largest eigenvalue in the Pratt algebraic-fitting solution characterizes the fitting error, a contour segmentation criterion is constructed from this eigenvalue; combining this with the fact that all points of a point set participate simultaneously in the data matrix calculation, a binary tree segmentation algorithm adapted to the fitting characteristics is designed, interrupted contours are re-spliced during binary tree decomposition by a preorder traversal, and contour segmentation by type is realized.
6. The full-automatic detection system for precisely turning workpieces based on machine vision according to claim 1, wherein the software unit is designed as four threads: a human-machine interaction thread, a CCD camera acquisition thread, a workpiece in/out judging thread and a calculation thread; in the workpiece in/out judging thread, the criterion for a workpiece entering is that the workpiece enters the field of view and is placed on the workpiece placement table and remains stationary, i.e. the workpiece is placed stably at the designated position; the criterion for a workpiece leaving is that the workpiece moves.
7. A machine vision-based full-automatic detection method for a precision turning workpiece, based on the full-automatic detection system for the precision turning workpiece according to any one of claims 1 to 6, characterized by comprising the following steps:
s1, setting the characteristics to be measured currently and the corresponding tolerance range information thereof through a template making module, and starting to collect after finishing template setting;
s2, selecting an automatic detection mode in the acquisition module, and starting an automatic detection process of the workpiece; when detecting that a workpiece is placed at a designated position of a workpiece placement table, automatically starting a subsequent calculation thread; when the workpiece moves or leaves the field of view of the CCD camera, closing the calculation thread;
s3, the image processing module receives the backlight digital image acquired by the CCD camera, and performs noise reduction processing by using a Gaussian filtering method to remove noise interference;
s4, performing closed operation of a determined path along the edge of the workpiece in the backlight digital image by adopting a closed operation method by the image processing module, and removing dust, flaws and burrs on the surface of the workpiece in the backlight digital image;
s5, carrying out boundary filling of the backlight digital image through an image processing module, obtaining a rough edge of the backlight digital image based on a gradient relation at the edge by using a Canny edge extraction method, and finally obtaining a sub-pixel edge of the backlight digital image by using a Zernike orthogonal moment based method to obtain a point set vector of floating point coordinates;
s6, in the contour type identification module, a binary tree decomposition algorithm based on Pratt algebraic fitting is used for dividing the point set vector of the complete edge contour into contours of single straight lines or single arcs, and storing the contours in a binary tree node format;
s7, performing a preorder traversal of the whole binary tree and splicing the contours meeting the splicing condition into one piece; each contour segment obtained by the division in step S6 and the splicing in this step is referred to as a segmented independent contour for short;
s8, establishing a three-level feature tree system, wherein the first level is a segmented independent contour, and the contour edges of the complete contour after segmentation and splicing are represented; the second level represents outline characteristics and represents the types of all areas of the workpiece to be tested; the third level represents geometric characteristics and represents geometric quantity or size indexes to be measured of the workpiece to be measured;
s9, predefining arrangement combination and azimuth arrangement of each sectional independent contour, realizing the correspondence from the contour to the feature, and determining the feature corresponding to the contour of each region of the workpiece to be detected;
s10, for each identified contour feature, automatically indexing geometric features required to be measured in a template making module, and completing calculation of the geometric features;
s11, comparing the calculated value of the characteristic calculation module with the tolerance range in the template making module, analyzing the state of the workpiece, and marking, displaying and recording the result;
s12, storing the original workpiece image and the measured annotation display image.
CN202310530324.5A 2023-05-11 2023-05-11 Machine vision-based full-automatic detection system and method for precisely turned workpiece Pending CN116625270A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310530324.5A CN116625270A (en) 2023-05-11 2023-05-11 Machine vision-based full-automatic detection system and method for precisely turned workpiece

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310530324.5A CN116625270A (en) 2023-05-11 2023-05-11 Machine vision-based full-automatic detection system and method for precisely turned workpiece

Publications (1)

Publication Number Publication Date
CN116625270A true CN116625270A (en) 2023-08-22

Family

ID=87641006

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310530324.5A Pending CN116625270A (en) 2023-05-11 2023-05-11 Machine vision-based full-automatic detection system and method for precisely turned workpiece

Country Status (1)

Country Link
CN (1) CN116625270A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117900918A (en) * 2024-03-19 2024-04-19 中船黄埔文冲船舶有限公司 Polishing rule templating method, polishing rule templating system, polishing rule templating terminal and readable storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination