CN113112496A - Sub-pixel shaft part size measurement method based on self-adaptive threshold - Google Patents

Sub-pixel shaft part size measurement method based on self-adaptive threshold

Info

Publication number
CN113112496A
Authority
CN
China
Prior art keywords
edge
pixel
image
pixel point
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110482449.6A
Other languages
Chinese (zh)
Other versions
CN113112496B (en)
Inventor
孔民秀
刘霄朋
李昂
邓晗
姬一明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN202110482449.6A priority Critical patent/CN113112496B/en
Publication of CN113112496A publication Critical patent/CN113112496A/en
Application granted granted Critical
Publication of CN113112496B publication Critical patent/CN113112496B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30164Workpiece; Machine component

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

A sub-pixel shaft part size measurement method based on an adaptive threshold relates to dimension measurement. The vision system is calibrated to obtain the internal and external camera parameters and the camera-robot position relationship. The collected image is preprocessed, and a gamma correction algorithm improved with the gray histogram selects the gamma value automatically. Edge detection is performed with the Canny operator and the pixel-level edge positions are recorded. Neighborhood-based edge connection checks all edge pixels of the image to close gaps in the edges. Region extraction based on contour tracking uses a contour tracking algorithm with topological structure analysis capability to convert the binary image into a contour description. Sub-pixel edge detection based on an adaptive threshold divides the image into regions for edge detection and selects the sub-pixel edge detection threshold adaptively for each region. Finally the linear and angular dimensions are calculated. The method has high precision and efficiency, saves labor, and adapts well to the measurement environment.

Description

Sub-pixel shaft part size measurement method based on self-adaptive threshold
Technical Field
The invention relates to a size measurement method, in particular to a sub-pixel shaft part size measurement method based on a self-adaptive threshold value, and belongs to the technical field of robot visual detection.
Background
In industrial production, the geometric measurement of parts is a necessary means of product quality management and a key link in part machining. A part can be judged qualified only after accurate geometric measurement, and the result not only determines whether the part is acceptable but also governs its subsequent reworking and final assembly. Measurement technology therefore occupies an extremely important position in industrial production, and research on it is of great significance.
Traditional measurement is mainly manual: dimensions and form-and-position errors are measured with tools such as vernier calipers, micrometers, gauges, and knife-edge straightedges, or by comparison against specially made dimensional templates of the part. These means have played a great role in industrial production, but with the rapid rise of the industrial manufacturing level in China, modern industry places stricter demands on measurement technology. On the one hand, part machining accuracy keeps increasing, and precision-machined parts require correspondingly more accurate measurement; on the other hand, modern mass production requires more efficient measurement. Traditional techniques can no longer meet the heavy quality inspection workload and the increasingly strict accuracy requirements.
At present, machine vision measurement technology still has many deficiencies: in practical industrial applications its measurement accuracy is not high enough to satisfy high-precision requirements, and its adaptability to the measurement environment is poor, all of which restrict its application. Further research on machine vision measurement systems is therefore of great significance.
Disclosure of Invention
The invention aims to provide a sub-pixel shaft part size measurement method based on an adaptive threshold that is efficient, saves labor, achieves higher precision than existing visual dimension measurement, and adapts well to the measurement environment.
In order to achieve this aim, the invention adopts the following technical scheme: a sub-pixel shaft part size measurement method based on an adaptive threshold, comprising the following steps:
step one, vision system calibration: the camera is calibrated with Zhang Zhengyou's planar calibration method to obtain its internal and external parameters, and the relative position of the camera and the robot end effector is obtained according to the hand-eye calibration principle, completing the hand-eye calibration;
step two, preprocessing the collected image: random noise is removed with a filtering algorithm while the originally useful image information is preserved, and a gamma correction algorithm improved by combining it with the gray histogram selects the gamma value γ automatically, specifically as follows:
a. arrange the image pixels by gray value from large to small, divide the gray range into L levels, split the L gray levels equally into a high-gray-value region and a low-gray-value region, let the number of pixels at gray level k be n_k, and count the probability P(k) of each gray level among the n pixels of the image;
b. set a threshold P_threshold; in the high-gray-value region, increment the counter i by 1 whenever P(k) > P_threshold and otherwise do nothing; in the low-gray-value region, increment the counter j by 1 whenever P(k) > P_threshold and otherwise do nothing;
c. when i > j, the gamma value γ is taken in the interval (0.2, 1);
d. when i < j, the gamma value γ is taken in the interval (1, 2.4);
the image is then transformed with the determined gamma value γ using the formula f(I) = I^γ, where I is a pixel value of the input image; this realizes the automatic selection of the gamma value γ, highlights image details, recovers information blurred by the filtering algorithm, and adjusts the contrast of the image to be processed to an optimum;
step three, edge detection: preliminary edge positioning is performed with the Canny operator edge detection method, and the pixel-level edge positions are recorded;
step four, neighborhood-based edge connection: to handle edges that are broken after edge detection, a neighborhood-based edge connection method checks all edge pixels of the image and completes the edge connection;
step five, region extraction based on contour tracking: because the binary image carrying the edge information consists only of discrete pixels with no structural relationship between them, a contour tracking algorithm with topological structure analysis capability converts the binary image into a contour description and obtains the surrounding relationship between different contours while extracting them, so that the peripheral contour can be extracted, specifically as follows:
first, define the coordinates of the pixel in the i-th row and j-th column as (i, j) and f(i, j) as the density value at (i, j); next, define two boundary types, the outer boundary and the hole boundary, express each boundary by a sequence number denoted ND, and denote the sequence number of the boundary obtained in the previous scan by LND; set the initial sequence number to 1, scan the whole image line by line, and apply the boundary judgment below to every pixel (i, j) whose density value f(i, j) is not equal to 0;
if the density value f(i, j) of the current pixel is 1 and f(i, j−1) is 0, the pixel (i, j) is the starting point of an outer boundary: ND is incremented, (i2, j2) ← (i, j−1), and the boundary is tracked starting from (i, j); if the density value f(i, j) of the current pixel is greater than or equal to 1 and f(i, j+1) is 0, the pixel (i, j) is the starting point of a hole boundary: ND is incremented, (i2, j2) ← (i, j+1), and the boundary is tracked starting from (i, j);
after tracking, if f(i, j) ≠ 1, then LND ← |f(i, j)|; scanning is resumed from pixel (i, j+1) and stops when it reaches the lower right corner of the image;
the peripheral outline of the workpiece is extracted with this algorithm and enclosed by a minimum rectangle, finally giving the region where the workpiece is located;
step six, sub-pixel edge detection based on an adaptive threshold: divide the image into regions according to the parameters to be measured, perform edge detection on each region separately, and adaptively select the sub-pixel edge detection threshold k_t by taking the maximum between-class variance of target and background in each region as the criterion, which ensures the robustness of the sub-pixel edge localization;
step seven, dimension calculation: a linear dimension is converted into the distance between the two straight lines related to the parameter to be measured, an included-angle dimension is solved with the included-angle formula based on the line slopes, and the lines are obtained from the data structures of the sub-pixel groups by least-squares parallel line cluster fitting.
Compared with the prior art, the invention has the following beneficial effects: it addresses the facts that the geometric quantities of parts have traditionally been measured manually and by spot checks, that detection efficiency is low, and that detection accuracy is hard to guarantee; the method is efficient, saves labor, achieves higher precision than existing visual dimension measurement, and adapts well to the measurement environment.
Detailed Description
The technical solutions in the present invention will be described clearly and completely with reference to the following embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the invention, rather than all embodiments, and all other embodiments obtained by those skilled in the art without any creative work based on the embodiments of the present invention belong to the protection scope of the present invention.
A sub-pixel shaft part size measurement method based on self-adaptive threshold comprises the following steps:
step one, vision system calibration: the camera is calibrated with Zhang Zhengyou's planar calibration method to obtain its internal and external parameters, and the relative position of the camera and the robot end effector is obtained according to the hand-eye calibration principle, completing the hand-eye calibration;
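For illustration only, a minimal Python sketch of this calibration step follows; it assumes OpenCV 4.1 or later, a planar checkerboard target, and robot end-effector poses supplied by the caller, and every function and variable name is illustrative rather than part of the disclosure.

import cv2
import numpy as np

def calibrate_vision_system(images, pattern_size, square_size,
                            R_gripper2base, t_gripper2base):
    # Planar target points for Zhang's method (checkerboard lies in the Z = 0 plane).
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size
    obj_pts, img_pts = [], []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, pattern_size)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    # Internal and external camera parameters from Zhang's planar calibration.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, gray.shape[::-1], None, None)
    # Hand-eye calibration: pose of the camera relative to the robot end effector.
    R_target2cam = [cv2.Rodrigues(r)[0] for r in rvecs]
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base, R_target2cam, tvecs)
    return K, dist, R_cam2gripper, t_cam2gripper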
step two, preprocessing the collected image: on the basis of ensuring that originally useful image information is not influenced, random noise of the image is eliminated through a filtering algorithm. When an image is acquired, if the image is influenced by illumination, the overall gray value of the image is limited within a small range, so that the identification capability of certain information of the image is reduced, such as edges, surface textures and the like. The purpose of image contrast enhancement is to process the image according to actual needs, highlight details and improve information blurred by the filtering algorithm. Therefore, an image contrast algorithm is researched, and the contrast of the image to be processed is adjusted to be optimal.
When the brightness of the image acquired by the camera is high, the gray values are concentrated in a high range while the gradient values used for edge detection fall in a low range; when the brightness is low, the gray values are concentrated in a low range and the gradient values are also small. Both cases make it difficult to select a gradient threshold during edge detection. The chosen algorithm must therefore adapt its contrast enhancement to both high and low brightness, so the gamma correction algorithm is improved by combining it with the gray histogram, specifically:
f(I) = I^γ
the different influence of the gamma value gamma on the contrast enhancement of the image can be seen according to the relation between the input pixel value and the output pixel value of the gamma curve, when the gamma value gamma is smaller than 1, the slope of the gamma curve is gradually reduced, the image can be concentrated in the range of low gray value, and conversely, when the gamma value gamma is larger than 1, the slope of the gamma curve is gradually increased, the image can be concentrated in the range of high gray value.
Traditional gamma correction requires the value to be set manually, so the gamma correction algorithm is improved by combining it with the gray histogram to select the gamma value γ automatically. The algorithm flow is as follows (a code sketch is given after step d):
a. arrange the image pixels by gray value from large to small, divide the gray range into L levels, split the L gray levels equally into a high-gray-value region and a low-gray-value region, let the number of pixels at gray level k be n_k, and count the probability of each gray level among the n pixels of the image, where the probability of gray level k is defined as P(k):
P(k) = n_k / n
b. set a threshold P_threshold; in the high-gray-value region, increment the counter i by 1 whenever P(k) > P_threshold and leave i unchanged when P(k) ≤ P_threshold; similarly, in the low-gray-value region, increment the counter j by 1 whenever P(k) > P_threshold and leave j unchanged when P(k) ≤ P_threshold;
c. when i > j, the image is concentrated in the high-gray-value region and the gamma value γ is taken in the interval (0.2, 1). Let q = i − j, q ∈ (0, L/2). Mapping the values of the interval (0, L/2) onto the interval (0.2, 1) yields:
γ_q1 = 1.6q/L + 0.2
d. when i < j, the image is concentrated in the low-gray-value region and the gamma value γ is taken in the interval (1, 2.4). Let q = j − i, q ∈ (0, L/2). Mapping the values of the interval (0, L/2) onto the interval (1, 2.4) yields:
γ_q2 = 2.8q/L + 1
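A minimal Python sketch of steps a) to d) follows, assuming an 8-bit grayscale input (L = 256); the probability threshold P_threshold and the handling of the balanced case i = j are illustrative assumptions, not values taken from the patent.

import numpy as np

def adaptive_gamma_correction(gray, L=256, p_threshold=0.005):
    n = gray.size
    hist = np.bincount(gray.ravel(), minlength=L)   # n_k: pixel count of each gray level
    p = hist / n                                    # P(k) = n_k / n
    low, high = p[:L // 2], p[L // 2:]              # low and high gray-value halves
    i = int(np.sum(high > p_threshold))             # levels above the threshold, high half
    j = int(np.sum(low > p_threshold))              # levels above the threshold, low half
    q = abs(i - j)                                  # q in (0, L/2)
    if i > j:
        gamma = 1.6 * q / L + 0.2                   # maps (0, L/2) onto (0.2, 1)
    elif i < j:
        gamma = 2.8 * q / L + 1.0                   # maps (0, L/2) onto (1, 2.4)
    else:
        gamma = 1.0                                 # balanced histogram: leave unchanged (assumption)
    # Gamma transform f(I) = I^gamma applied to the normalized image.
    out = np.power(gray.astype(np.float64) / (L - 1), gamma) * (L - 1)
    return gamma, out.astype(np.uint8)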
step three, edge detection: preliminary edge positioning is performed with the Canny operator edge detection method, and the pixel-level edge positions are recorded;
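A minimal Python sketch of step three, assuming OpenCV; the Gaussian kernel size and the hysteresis thresholds 50/150 are example values, not values taken from the patent.

import cv2
import numpy as np

def pixel_level_edges(gray):
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)   # suppress residual random noise
    edges = cv2.Canny(blurred, 50, 150)           # binary pixel-level edge map (0 / 255)
    ys, xs = np.nonzero(edges)                    # record the pixel-level edge positions
    return edges, np.column_stack((xs, ys))       # edge map and (x, y) edge coordinates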
step four, neighborhood-based edge connection: when the Canny algorithm detects the edges of an image, the detected edges can be broken and discontinuous, which is very unfavorable for obtaining the workpiece region, so the discontinuous-edge problem must be addressed. A neighborhood-based edge connection method (prior art) is adopted. Its main idea is to check all edge pixels of the image: taking one edge pixel as the center, all pixels in its 8-neighborhood are searched; if an edge pixel exists in the 8-neighborhood, the 8-neighborhood search moves on to the next edge pixel; if no edge pixel exists in the 8-neighborhood, the 16 pixels of the outer ring around the edge pixel are searched, and if an edge pixel is found there, the pixel between it and the center is changed into an edge pixel, otherwise the 8-neighborhood search is performed on the next edge pixel, until all edge pixels have been searched;
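A rough Python sketch of the neighborhood-based edge connection described above: an edge pixel with no edge neighbor in its 8-neighborhood triggers a search of the 16 outer-ring pixels, and a found edge pixel is linked by marking an intermediate pixel. The choice of intermediate pixel and the border handling are simplifying assumptions.

import numpy as np

def link_broken_edges(edge_map):
    edges = (edge_map > 0).astype(np.uint8)
    h, w = edges.shape
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        y0, y1 = max(y - 1, 0), min(y + 2, h)
        x0, x1 = max(x - 1, 0), min(x + 2, w)
        # 8-neighborhood check: the sum includes the center pixel itself.
        if edges[y0:y1, x0:x1].sum() > 1:
            continue
        # Search the 16 pixels of the outer ring at Chebyshev distance 2.
        for dy in range(-2, 3):
            for dx in range(-2, 3):
                if max(abs(dy), abs(dx)) != 2:
                    continue
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and edges[ny, nx]:
                    # Turn the pixel between the center and the found edge pixel into an edge pixel.
                    edges[y + int(np.sign(dy)), x + int(np.sign(dx))] = 1
    return edges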
step five, region extraction based on contour tracking: edge detection yields a binary image carrying the edge information, but it consists only of discrete pixels with no structural relationship between them, so a contour tracing algorithm is required to convert the binary image into a contour description. The method adopts a contour tracing algorithm with topological structure analysis capability, which obtains the surrounding relationship between different contours while extracting them and thus allows the peripheral contour to be extracted, specifically as follows:
firstly, the coordinates of the pixel points in the ith row and the jth column are defined as (i, j), F (i, j) is the density value of the point (i, j), and the whole image can be expressed as F ═ F (i, j) }. Secondly, two boundary forms are defined, an outer boundary and a hole boundary. And expressing the boundary in a serial number mode, and marking as ND, and marking the rightmost pixel point of the boundary as-ND. The last boundary sequence obtained from the scan is denoted as LND.
The specific algorithm flow is as follows:
1) set the initial ND to 1, scan the whole image line by line, and carry out the following operations for every pixel with f(i, j) ≠ 0; LND is reinitialized to 1 every time a new line scan starts;
2) if the density value f(i, j) of the current pixel is 1 and f(i, j−1) is 0, then by the boundary definition the pixel (i, j) is the starting point of an outer boundary: increment ND and set (i2, j2) ← (i, j−1); if f(i, j) is greater than or equal to 1 and f(i, j+1) is 0, then the pixel (i, j) is the starting point of a hole boundary: increment ND and set (i2, j2) ← (i, j+1); if neither condition is satisfied, go to 5);
3) compare the type of the current border B with the type of the border B' whose sequence number is LND: if the two border types are the same, the current border B has the same parent border as B'; if the two border types differ, the border B' is the parent of the current border B;
4) tracking the boundary by taking the pixel point (i, j) as a starting point, and executing steps a) to e);
a) starting from point (i2, j2), search the neighborhood of point (i, j) in the clockwise direction for a non-zero point and define the first non-zero point found as (i1, j1); if there is no non-zero point in the neighborhood, set f(i, j) = −ND and go to 5);
b) set (i2, j2) ← (i1, j1) and (i3, j3) ← (i, j);
c) in the neighborhood of the current pixel (i3, j3), starting from pixel (i2, j2), search in the counterclockwise direction for the first non-zero point and denote it (i4, j4);
d) update the value f(i3, j3) of pixel (i3, j3): if the pixel (i3, j3+1) examined in step c) is zero, set f(i3, j3) ← −ND; if the pixel (i3, j3+1) is not zero and f(i3, j3) = 1, set f(i3, j3) ← ND; if neither condition is met, leave f(i3, j3) unchanged;
e) if (i4, j4) = (i, j) and (i3, j3) = (i1, j1), the boundary has been traced completely, so go to 5); otherwise set (i2, j2) ← (i3, j3) and (i3, j3) ← (i4, j4) and return to step c);
5) if f(i, j) ≠ 1, then LND ← |f(i, j)|; the scan is resumed from pixel (i, j+1) and stops when it reaches the lower right corner of the image.
The peripheral outline of the workpiece is extracted with this algorithm and enclosed by a minimum rectangle, finally giving the region where the workpiece is located;
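A minimal Python sketch of step five; it relies on OpenCV's findContours (OpenCV 4 assumed), which implements the Suzuki-Abe border-following algorithm with the topological hierarchy analysis described above, rather than re-implementing the tracing loop, and the largest-area heuristic for picking the peripheral outline is an assumption.

import cv2
import numpy as np

def workpiece_region(edge_map):
    binary = (edge_map > 0).astype(np.uint8) * 255
    # RETR_TREE returns the full surrounding (parent/child) relationship between contours.
    contours, hierarchy = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    # Take the contour with the largest area as the peripheral outline of the workpiece.
    outer = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(outer)   # minimum upright rectangle enclosing the outline
    return (x, y, w, h), outer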
step six, sub-pixel edge detection based on an adaptive threshold: to extract the sub-pixel edge in each region, the threshold k_t is calculated by maximizing the between-class variance of target and background in that region. Let the region contain L gray levels and N pixels in total, and let the number of pixels with gray value i be N_i; the probability of a pixel having gray value i is then P_i = N_i / N. Assume a gray threshold T divides the region into the two classes target C1 = [0, 1, 2, …, T] and background C2 = [T+1, T+2, …, L−1]; the between-class variance of C1 and C2 is then:
σ_B²(T) = ω_1(T)·ω_2(T)·[μ_1(T) − μ_2(T)]², where ω_1 = P_0 + … + P_T and ω_2 = P_{T+1} + … + P_{L−1} are the class probabilities and μ_1, μ_2 are the mean gray values of C1 and C2
Since the region is a grayscale image, L is 256, and the threshold k_t obtained by maximizing the above formula is expressed as:
k_t = arg max σ_B²(T), 0 ≤ T ≤ L−1
Taking the adaptively generated gray threshold T as the value of the threshold k_t not only avoids inefficient manual setting but also makes the threshold of each region to be measured more reasonable and flexible. Because the collected workpiece images suffer from uneven illumination, this region-wise adaptive threshold selection makes the sub-pixel edge localization more robust. The image is therefore divided into regions according to the parameters to be measured, edge detection is performed on each region separately, and the sub-pixel edge detection threshold k_t is selected adaptively by taking the maximum between-class variance of target and background in each region as the criterion;
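A minimal Python sketch of the per-region threshold selection of step six, written out to follow the between-class-variance formulas above instead of calling a library thresholding routine; the region is assumed to be an 8-bit grayscale array.

import numpy as np

def adaptive_threshold_kt(region, L=256):
    hist = np.bincount(region.ravel(), minlength=L).astype(np.float64)
    p = hist / hist.sum()                         # P_i = N_i / N
    best_T, best_var = 0, -1.0
    for T in range(L - 1):
        w1 = p[:T + 1].sum()                      # probability of class C1 = [0, ..., T]
        w2 = 1.0 - w1                             # probability of class C2 = [T+1, ..., L-1]
        if w1 == 0.0 or w2 == 0.0:
            continue
        mu1 = (np.arange(T + 1) * p[:T + 1]).sum() / w1        # mean gray value of C1
        mu2 = (np.arange(T + 1, L) * p[T + 1:]).sum() / w2     # mean gray value of C2
        var_between = w1 * w2 * (mu1 - mu2) ** 2               # sigma_B^2(T)
        if var_between > best_var:
            best_var, best_T = var_between, T
    return best_T                                 # adaptive sub-pixel edge detection threshold k_t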
step seven, dimension calculation: first a pixel coordinate system is established, and two coordinate systems are set up so that the line equations can be written conveniently; the edge sub-pixel arrays of the regions relevant to the measurement are then obtained with the edge sub-pixel localization method above.
For linear dimensions such as a diameter, the data structures of the sub-pixel groups on the left and right sides of the relevant dimension are constructed and fitted by least-squares parallel line cluster fitting in the pixel coordinate system (a code sketch follows the derivation), with the set of line equations taken as:
y = k·x + C (left sub-pixel group), y = k·x + D (right sub-pixel group)
the objective function is the sum of the squares of the minimum residuals Q, mathematically:
Q = Σ_{i=1..M} (y_i − k·x_i − C)² + Σ_{j=1..M} (y_j − k·x_j − D)²
order to
Figure BDA0003049774980000093
To obtain
Figure BDA0003049774980000094
Figure BDA0003049774980000095
where M is the number of sub-pixel points, M is not less than 3, and (x_i, y_i) and (x_j, y_j) are sampled from the left and right sub-pixel groups, respectively.
Thus the sub-pixel-level dimension of the workpiece is d1 = C − D. Other linear dimensions are obtained in the same way and are not repeated here.
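A minimal Python sketch of the parallel line cluster fit: a shared slope k and the two intercepts C and D are estimated from the left and right sub-pixel groups by least squares. The closed-form expressions are the standard solution of the model reconstructed above, both groups are assumed to contain the same number M of points, and the perpendicular-distance form |C − D|/sqrt(1 + k²) is used here, which reduces to C − D when k = 0.

import numpy as np

def fit_parallel_lines(left_pts, right_pts):
    # left_pts, right_pts: arrays of shape (M, 2) holding (x, y) sub-pixel coordinates.
    xi, yi = left_pts[:, 0], left_pts[:, 1]
    xj, yj = right_pts[:, 0], right_pts[:, 1]
    M = len(xi)                                   # assumes M >= 3 points in each group
    num = M * (np.dot(xi, yi) + np.dot(xj, yj)) - xi.sum() * yi.sum() - xj.sum() * yj.sum()
    den = M * (np.dot(xi, xi) + np.dot(xj, xj)) - xi.sum() ** 2 - xj.sum() ** 2
    k = num / den                                 # shared slope of the parallel line cluster
    C = (yi.sum() - k * xi.sum()) / M             # intercept fitted to the left edge
    D = (yj.sum() - k * xj.sum()) / M             # intercept fitted to the right edge
    distance = abs(C - D) / np.sqrt(1.0 + k ** 2) # perpendicular distance between the two lines
    return k, C, D, distance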
For solving the included-angle dimension θ, the data structures of the two related sub-pixel groups are fitted by least-squares symmetric line cluster fitting (a code sketch follows the derivation), with the set of line equations taken as:
y = k1·x + b1 (first sub-pixel group), y = k2·x + b2 (second sub-pixel group)
can be obtained by calculation
k1 = [N·Σx_i·y_i − Σx_i·Σy_i] / [N·Σx_i² − (Σx_i)²],  k2 = [N·Σx_j·y_j − Σx_j·Σy_j] / [N·Σx_j² − (Σx_j)²]
where N is the number of sub-pixel points, N is not less than 3, and (x_i, y_i) and (x_j, y_j) are sampled from the two sub-pixel groups, respectively. The value of the included-angle dimension θ is then obtained from the included-angle formula for two line slopes:
θ = arctan |(k1 − k2) / (1 + k1·k2)|
This completes the sub-pixel-level geometric dimension measurement of the shaft part.
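A minimal Python sketch of the included-angle computation: each of the two sub-pixel groups is fitted to its own line by ordinary least squares and the angle follows from the slope-based included-angle formula above; the handling of nearly perpendicular lines is an assumption.

import numpy as np

def included_angle(pts1, pts2):
    def ls_slope(pts):
        x, y = pts[:, 0], pts[:, 1]
        n = len(x)                                # requires n >= 3 sub-pixel points
        return (n * np.dot(x, y) - x.sum() * y.sum()) / (n * np.dot(x, x) - x.sum() ** 2)
    k1, k2 = ls_slope(pts1), ls_slope(pts2)
    denom = 1.0 + k1 * k2
    if np.isclose(denom, 0.0):                    # the two lines are (nearly) perpendicular
        return np.pi / 2
    return np.arctan(abs((k1 - k2) / denom))      # theta = arctan|(k1 - k2) / (1 + k1 * k2)|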
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
Furthermore, it should be understood that although the present description refers to embodiments, not every embodiment may contain only a single embodiment, and such description is for clarity only, and those skilled in the art should integrate the description, and the embodiments may be combined as appropriate to form other embodiments understood by those skilled in the art.

Claims (2)

1. A sub-pixel shaft part size measurement method based on an adaptive threshold, characterized by comprising the following steps:
step one, vision system calibration: the camera is calibrated with Zhang Zhengyou's planar calibration method to obtain its internal and external parameters, and the relative position of the camera and the robot end effector is obtained according to the hand-eye calibration principle, completing the hand-eye calibration;
step two, preprocessing the collected image: random noise is removed with a filtering algorithm while the originally useful image information is preserved, and a gamma correction algorithm improved by combining it with the gray histogram selects the gamma value γ automatically, specifically as follows:
a. arrange the image pixels by gray value from large to small, divide the gray range into L levels, split the L gray levels equally into a high-gray-value region and a low-gray-value region, let the number of pixels at gray level k be n_k, and count the probability P(k) of each gray level among the n pixels of the image;
b. set a threshold P_threshold; in the high-gray-value region, increment the counter i by 1 whenever P(k) > P_threshold and otherwise do nothing; in the low-gray-value region, increment the counter j by 1 whenever P(k) > P_threshold and otherwise do nothing;
c. when i > j, the gamma value γ is taken in the interval (0.2, 1);
d. when i < j, the gamma value γ is taken in the interval (1, 2.4);
the image is then transformed with the determined gamma value γ using the formula f(I) = I^γ, where I is a pixel value of the input image; this realizes the automatic selection of the gamma value γ, highlights image details, recovers information blurred by the filtering algorithm, and adjusts the contrast of the image to be processed to an optimum;
step three, edge detection: preliminary edge positioning is performed with the Canny operator edge detection method, and the pixel-level edge positions are recorded;
step four, neighborhood-based edge connection: to handle edges that are broken after edge detection, a neighborhood-based edge connection method checks all edge pixels of the image and completes the edge connection;
step five, region extraction based on contour tracking: because the binary image carrying the edge information consists only of discrete pixels with no structural relationship between them, a contour tracking algorithm with topological structure analysis capability converts the binary image into a contour description and obtains the surrounding relationship between different contours while extracting them, so that the peripheral contour can be extracted, specifically as follows:
first, define the coordinates of the pixel in the i-th row and j-th column as (i, j) and f(i, j) as the density value at (i, j); next, define two boundary types, the outer boundary and the hole boundary, express each boundary by a sequence number denoted ND, and denote the sequence number of the boundary obtained in the previous scan by LND; set the initial sequence number to 1, scan the whole image line by line, and apply the boundary judgment below to every pixel (i, j) whose density value f(i, j) is not equal to 0;
if the density value f(i, j) of the current pixel is 1 and f(i, j−1) is 0, the pixel (i, j) is the starting point of an outer boundary: ND is incremented, (i2, j2) ← (i, j−1), and the boundary is tracked starting from (i, j); if the density value f(i, j) of the current pixel is greater than or equal to 1 and f(i, j+1) is 0, the pixel (i, j) is the starting point of a hole boundary: ND is incremented, (i2, j2) ← (i, j+1), and the boundary is tracked starting from (i, j);
after tracking, if f(i, j) ≠ 1, then LND ← |f(i, j)|; scanning is resumed from pixel (i, j+1) and stops when it reaches the lower right corner of the image;
the peripheral outline of the workpiece is extracted with this algorithm and enclosed by a minimum rectangle, finally giving the region where the workpiece is located;
step six, sub-pixel edge detection based on an adaptive threshold: divide the image into regions according to the parameters to be measured, perform edge detection on each region separately, and adaptively select the sub-pixel edge detection threshold k_t by taking the maximum between-class variance of target and background in each region as the criterion, which ensures the robustness of the sub-pixel edge localization;
step seven, dimension calculation: a linear dimension is converted into the distance between the two straight lines related to the parameter to be measured, an included-angle dimension is solved with the included-angle formula based on the line slopes, and the lines are obtained from the data structures of the sub-pixel groups by least-squares parallel line cluster fitting.
2. The sub-pixel shaft part size measurement method based on an adaptive threshold according to claim 1, characterized in that: in step four, a neighborhood-based edge connection method is adopted; in the process of checking all edge pixels of the image, one edge pixel is first taken as the center and all pixels in its 8-neighborhood are searched; when an edge pixel exists in the 8-neighborhood, the 8-neighborhood search continues with the next edge pixel; when no edge pixel exists in the 8-neighborhood of the edge pixel, the 16 pixels of the outer ring around it are searched; if an edge pixel is found there, the pixel between it and the center is changed into an edge pixel, and if not, the 8-neighborhood search is performed again on the next edge pixel, until all edge pixels have been searched.
CN202110482449.6A 2021-04-30 2021-04-30 Sub-pixel shaft part size measurement method based on self-adaptive threshold Active CN113112496B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110482449.6A CN113112496B (en) 2021-04-30 2021-04-30 Sub-pixel shaft part size measurement method based on self-adaptive threshold

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110482449.6A CN113112496B (en) 2021-04-30 2021-04-30 Sub-pixel shaft part size measurement method based on self-adaptive threshold

Publications (2)

Publication Number Publication Date
CN113112496A true CN113112496A (en) 2021-07-13
CN113112496B CN113112496B (en) 2022-06-14

Family

ID=76720634

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110482449.6A Active CN113112496B (en) 2021-04-30 2021-04-30 Sub-pixel shaft part size measurement method based on self-adaptive threshold

Country Status (1)

Country Link
CN (1) CN113112496B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120026352A1 (en) * 2009-07-29 2012-02-02 Harman Becker Automotive Systems Gmbh Edge detection with adaptive threshold
US20170098310A1 (en) * 2014-06-30 2017-04-06 Ventana Medical Systems, Inc. Edge-based local adaptive thresholding system and methods for foreground detection
CN104359403A (en) * 2014-11-21 2015-02-18 天津工业大学 Plane part size measurement method based on sub-pixel edge algorithm
CN106204528A (en) * 2016-06-27 2016-12-07 重庆理工大学 A kind of size detecting method of part geometry quality
CN107742289A (en) * 2017-10-15 2018-02-27 哈尔滨理工大学 One kind is based on machine vision revolving body workpieces detection method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CHENG HUANG: "Sub-Pixel Edge Detection Algorithm Based on Canny-Zernike Moment Method", Journal of Circuits, Systems and Computers *
RONGBAO CHEN: "Perpendicularity Identification Method of Image-Based Welding of Components", IEEE *
刘霄朋 (LIU Xiaopeng): "Research on a Vision Inspection System for Hole-Shaft Assembly Robots", China Master's Theses Full-text Database *
殷炜棋 (YIN Weiqi): "Research on Vision Measurement Technology for the Dimensions of Shaft Parts", China Master's Theses Full-text Database *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113538378A (en) * 2021-07-16 2021-10-22 哈尔滨理工大学 Bearing size online detection system based on deep learning
CN114663427A (en) * 2022-04-25 2022-06-24 北京与子成科技有限公司 Boiler part size detection method based on image processing
CN114663427B (en) * 2022-04-25 2023-06-16 北京与子成科技有限公司 Boiler part size detection method based on image processing
CN115096206A (en) * 2022-05-18 2022-09-23 西北工业大学 Part size high-precision measurement method based on machine vision
CN115096206B (en) * 2022-05-18 2024-04-30 西北工业大学 High-precision part size measurement method based on machine vision
CN116433701A (en) * 2023-06-15 2023-07-14 武汉中观自动化科技有限公司 Workpiece hole profile extraction method, device, equipment and storage medium
CN116433701B (en) * 2023-06-15 2023-10-10 武汉中观自动化科技有限公司 Workpiece hole profile extraction method, device, equipment and storage medium
CN117670916A (en) * 2024-01-31 2024-03-08 南京华视智能科技股份有限公司 Deep learning-based edge detection method for coating machine
CN117670916B (en) * 2024-01-31 2024-04-12 南京华视智能科技股份有限公司 Coating edge detection method based on deep learning
CN117689662A (en) * 2024-02-04 2024-03-12 张家港长寿工业设备制造有限公司 Visual detection method and system for welding quality of heat exchanger tube head
CN117689662B (en) * 2024-02-04 2024-04-26 张家港长寿工业设备制造有限公司 Visual detection method and system for welding quality of heat exchanger tube head
CN118071739A (en) * 2024-04-18 2024-05-24 山东北宏新材料科技有限公司 Masterbatch coloring visual detection method based on image enhancement

Also Published As

Publication number Publication date
CN113112496B (en) 2022-06-14

Similar Documents

Publication Publication Date Title
CN113112496B (en) Sub-pixel shaft part size measurement method based on self-adaptive threshold
CN107341802B (en) Corner sub-pixel positioning method based on curvature and gray scale compounding
CN108132017B (en) Planar weld joint feature point extraction method based on laser vision system
CN109003258B (en) High-precision sub-pixel circular part measuring method
CN105930858B (en) Rapid high-precision geometric template matching method with rotation and scaling functions
CN109060836B (en) Machine vision-based high-pressure oil pipe joint external thread detection method
CN110728667A (en) Automatic and accurate cutter wear loss measuring method based on gray level image probability
CN111062940B (en) Screw positioning and identifying method based on machine vision
CN107588721A (en) The measuring method and system of a kind of more sizes of part based on binocular vision
CN113724193B (en) PCBA part size and clearance high-precision visual measurement method
CN114037675B (en) Airplane sample plate defect detection method and device
CN111047588A (en) Imaging measurement method for size of shaft type small part
EP2555157A2 (en) Combining feature boundaries
CN108717692B (en) CCD image processing-based cut material deviation correcting method
CN105783786A (en) Part chamfering measuring method and device based on structured light vision
CN111640154B (en) Vertical needle micro-plane sub-pixel level positioning method based on micro-vision
CN111311618A (en) Circular arc workpiece matching and positioning method based on high-precision geometric primitive extraction
CN111539446A (en) 2D laser hole site detection method based on template matching
CN116188544A (en) Point cloud registration method combining edge features
CN115235375A (en) Multi-circle characteristic parameter measuring method, detecting method and device for cover plate type workpiece
CN107610174A (en) A kind of plane monitoring-network method and system based on depth information of robust
CN114066752A (en) Line-structured light skeleton extraction and burr removal method for weld tracking
CN113538399A (en) Method for obtaining accurate contour of workpiece, machine tool and storage medium
CN110991233B (en) Automatic reading method of pointer type pressure gauge
CN108416790A (en) A kind of detection method for workpiece breakage rate

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant