CN117557588A - Concentricity determination method and device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN117557588A
Authority
CN
China
Prior art keywords
target
pixel points
target object
pixel point
gradient
Prior art date
Legal status
Pending
Application number
CN202210929281.3A
Other languages
Chinese (zh)
Inventor
白照阳
齐金双
李绍青
Current Assignee
Huizhou BYD Electronic Co Ltd
Original Assignee
Huizhou BYD Electronic Co Ltd
Priority date
Filing date
Publication date
Application filed by Huizhou BYD Electronic Co Ltd filed Critical Huizhou BYD Electronic Co Ltd
Priority to CN202210929281.3A priority Critical patent/CN117557588A/en
Publication of CN117557588A publication Critical patent/CN117557588A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to a concentricity determination method and device, a storage medium, and electronic equipment, in the technical field of computers. The method comprises the following steps: detecting a target image corresponding to a first target object by an edge detection algorithm with eight-direction gradient synthesis, to obtain gradient magnitudes of a plurality of first pixel points; determining, from the plurality of gradient magnitudes, a plurality of second pixel points corresponding to gradient magnitudes larger than a dynamic threshold; performing morphological erosion on the plurality of second pixel points to obtain a plurality of target pixel points; fitting the plurality of target pixel points by a least square method to obtain the center of the first target object; and determining the concentricity between the first target object and a second target object according to the distance between their centers. With the concentricity determination method provided by the disclosure, the fitted edge of the first target object is more accurate, and so is the determined concentricity.

Description

Concentricity determination method and device, storage medium and electronic equipment
Technical Field
The disclosure relates to the technical field of computers, and in particular relates to a concentricity determination method, a concentricity determination device, a storage medium and electronic equipment.
Background
Concentricity, also called coaxiality, is an important technical index for evaluating whether two circular workpieces lie on the same axis; concentricity errors directly affect the assembly of the workpieces.
In the related art, the edge of a circular workpiece is detected, the circle center of the circular workpiece is determined based on the edge of the circular workpiece, and finally the concentricity between the circle centers of the two circular workpieces is determined.
In this process, edge detection of the circular workpiece is often insufficiently sensitive, so the fitted edge of the workpiece has low accuracy, which in turn lowers the accuracy of the calculated circle center and of the calculated concentricity.
Disclosure of Invention
The disclosure aims to provide a concentricity determination method and device, a storage medium, and electronic equipment, so as to solve the above technical problems.
To achieve the above object, a first aspect of embodiments of the present disclosure provides a concentricity determination method, including:
detecting a target image corresponding to a first target object by an edge detection algorithm with eight-direction gradient synthesis, to obtain gradient magnitudes of a plurality of first pixel points of the first target object;
determining, from the plurality of gradient magnitudes, a plurality of second pixel points corresponding to gradient magnitudes larger than a dynamic threshold;
performing morphological erosion on the plurality of second pixel points to obtain a plurality of target pixel points;
fitting the plurality of target pixel points by a least square method to obtain the center of the first target object;
and determining the concentricity between the first target object and a second target object according to the distance between the centers of the first target object and the second target object.
Optionally, detecting the target image corresponding to the first target object by using an edge detection algorithm of eight-direction gradient synthesis to obtain gradient magnitudes of a plurality of first pixel points of the first target object, including:
respectively convolving a first pixel point of the target image by using a first convolution kernel to obtain gradients of the first pixel point in eight directions;
and determining the gradient amplitude of the first pixel point according to the gradients of the first pixel point in eight directions.
Optionally, the dynamic threshold is determined by:
determining the dynamic threshold according to a first exponential function when the background brightness of the first pixel point is above a first preset value and below a second preset value;
determining the dynamic threshold according to a cubic curve function under the condition that the background brightness of the first pixel point is larger than a second preset value and smaller than a third preset value;
and determining the dynamic threshold according to a second exponential function under the condition that the background brightness of the first pixel point is above the third preset value.
Optionally, performing morphological erosion on the plurality of second pixel points to obtain the plurality of target pixel points includes:
convolving, with a second convolution kernel, the image edge formed by the plurality of second pixel points, to obtain the local minimum values among the second pixel points covered by the second convolution kernel;
and taking the local minimum values among the plurality of second pixel points as the plurality of target pixel points.
Optionally, fitting the plurality of target pixel points by the least square method to obtain the center of the first target object includes:
taking the center of a reference circle as the center of the first target object when the difference between the squared first distances from the plurality of target pixel points to the reference center and the squared reference radius of the reference circle is close to a first value.
Optionally, the distance between the centers of the first target object and the second target object is determined by:
determining the size of each pixel point through camera calibration;
and determining the distance between the centers of the first target object and the second target object according to the size of each pixel point and the number of pixel points spaced between the two centers.
Optionally, the target image is obtained by:
and smoothing the image to be processed through a third convolution kernel to obtain the target image.
According to a second aspect of embodiments of the present disclosure, there is provided a concentricity determination device, the device comprising:
the edge detection module is configured to detect a target image corresponding to a first target object by adopting an edge detection algorithm of eight-direction gradient synthesis to obtain gradient amplitude values of a plurality of first pixel points of the first target object;
the second pixel point determining module is configured to determine a plurality of second pixel points corresponding to a plurality of gradient amplitude values larger than the dynamic threshold value from the plurality of gradient amplitude values;
the target pixel point determining module is configured to obtain a plurality of target pixel points by performing morphological erosion on the plurality of second pixel points;
the fitting module is configured to fit the plurality of target pixel points by a least square method to obtain the center of the first target object;
and the concentricity determination module is configured to determine concentricity between the first target object and the second target object according to the distance between the centers of the first target object and the second target object.
According to a third aspect of the embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the concentricity determination method provided by the first aspect of the embodiments of the present disclosure.
According to a fourth aspect of embodiments of the present disclosure, there is provided an electronic device comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the concentricity determination method provided by the first aspect of the embodiment of the present disclosure.
According to the above technical scheme, in the first aspect, the edge detection algorithm with eight-direction gradient synthesis convolves each first pixel point in eight directions, so the obtained gradient magnitude of each first pixel point is accurate, and the edge information extracted by the subsequent edge detection is comprehensive and accurate. In the second aspect, the dynamic threshold changes with the background brightness of the first pixel points, which better matches the visual characteristics of human eyes; after the gradient magnitudes are compared with the dynamic threshold, an appropriate number of second pixel points is obtained, neither too many nor too few, which avoids the false edges caused by too many second pixel points and the edge discontinuities caused by too few. In the third aspect, morphological erosion of the second pixel points filters the image edge formed by them, removing some isolated light spots and false edges, so the obtained target pixel points are more accurate. In the fourth aspect, fitting the accurate target pixel points by the least square method yields a clearer and more accurate edge of the first target object; the center obtained from this edge is more accurate, and so is the finally calculated concentricity.
Additional features and advantages of the present disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings are included to provide a further understanding of the disclosure, and are incorporated in and constitute a part of this specification, illustrate the disclosure and together with the description serve to explain, but do not limit the disclosure. In the drawings:
fig. 1 is a flowchart illustrating steps of a concentricity determination method according to an exemplary embodiment.
FIG. 2 is a schematic diagram of a convolution kernel in the horizontal direction shown in an exemplary embodiment.
FIG. 3 is a schematic diagram of a pixel and its 8-neighborhood pixels, as shown in an exemplary embodiment.
FIG. 4 is a schematic diagram of a vertically oriented convolution kernel shown in an exemplary embodiment.
Fig. 5 is a schematic diagram showing a relationship between a dynamic threshold value and a gray value of background luminance according to an exemplary embodiment.
FIG. 6 is a schematic illustration of morphological erosion as shown in an exemplary embodiment.
Fig. 7 is a logic diagram of a concentricity determination method according to an exemplary embodiment.
Fig. 8 is a schematic diagram of a gemstone and HF glue as shown in an exemplary embodiment.
Fig. 9 is a block diagram of a concentricity determination device shown in an exemplary embodiment.
Fig. 10 is a block diagram of an electronic device, as shown in an exemplary embodiment.
Detailed Description
Specific embodiments of the present disclosure are described in detail below with reference to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating and illustrating the disclosure, are not intended to limit the disclosure.
It should be noted that, all actions for acquiring signals, information or data in the present disclosure are performed under the condition of conforming to the corresponding data protection rule policy of the country of the location and obtaining the authorization given by the owner of the corresponding device.
Referring to fig. 1, the disclosure proposes a concentricity determination method, which includes the following steps:
in step S11, an edge detection algorithm of eight-direction gradient synthesis is used to detect a target image corresponding to a first target object, so as to obtain gradient magnitudes of a plurality of first pixel points of the first target object.
In the present disclosure, the first target object may be HF glue surrounding the second target object, and the second target object may be a circular workpiece such as a gemstone. Since the edge of the first target object in a photographed image is generally blurred, its edge needs to be detected; the edge of the second target object in the image is clear, so it does not need to be detected.
The target image contains a plurality of first pixel points, each with a corresponding gradient magnitude, which characterizes the change of the gray value at the coordinates of that pixel point.
In the related art, a Sobel edge detection algorithm uses the differences between adjacent pixels in the horizontal and vertical directions to calculate the gradient magnitude of a pixel point, thereby realizing edge detection of the first target object. In this process, the Sobel edge detection algorithm does not consider the differences between adjacent pixels in the other directions around a pixel point, so some detected pixel points are missed and part of the edge information of the first target object is lost.
In order to reduce partial edge information loss of a first object, the disclosure proposes a Sobel edge detection algorithm adopting gradient synthesis in eight directions to perform edge detection on an edge in an object image corresponding to the first object, which specifically includes the following sub-steps:
substep A1: and respectively convolving the first pixel point of the target image by using a first convolution kernel to obtain gradients of the first pixel point in eight directions.
The eight directions are 0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, and 157.5°. Accordingly, the convolution kernel in the Sobel edge detection algorithm is enlarged from 3×3 to 5×5 and used as the first convolution kernel. After the change to the 5×5 first convolution kernel, the efficiency of convolving the target image also improves.
Specifically, in the related art, when the horizontal-direction Sobel kernel shown in fig. 2 is used to convolve the first pixel point (x, y) of the target image shown in fig. 3, the gradient in the horizontal direction is:
G_x = (z7 + 2z8 + z9) − (z1 + 2z2 + z3) (1)
In formula (1), G_x is the gradient of the first pixel point in the horizontal direction, and z1, z2, z3, z7, z8, z9 are the values of the corresponding neighborhood pixels under the convolution kernel.
When the vertical-direction Sobel kernel shown in fig. 4 is used to convolve the first pixel point (x, y) of the target image shown in fig. 3, the gradient in the vertical direction is:
G_y = (z3 + 2z6 + z9) − (z1 + 2z4 + z7) (2)
In formula (2), G_y is the gradient of the first pixel point in the vertical direction, and z1, z3, z4, z6, z7, z9 are the values of the corresponding neighborhood pixels under the convolution kernel.
In the present disclosure, when the target image is convolved in eight directions with the 5×5 first convolution kernel, the gradient in each direction is:
G_0° = (z5 + 2z6 + 4z7 + 2z8 + z9) − (z15 + 2z16 + 4z17 + 2z18 + z19) (3)
G_22.5° = (2z6 + 4z7 + 2z8 + z10 + 4z11) − (4z13 + z14 + 2z16 + 4z17 + 2z18) (4)
G_45° = (z3 + 2z6 + 4z7 + 4z11 + z15) − (z9 + 4z13 + 4z17 + 2z18 + z21) (5)
G_67.5° = (z2 + 2z6 + 4z7 + 4z11 + 2z16) − (2z8 + 4z13 + 4z17 + 2z18 + z22) (6)
G_90° = (z1 + 2z6 + 4z11 + 2z16 + z21) − (z3 + 2z8 + 4z13 + 2z18 + z23) (7)
G_112.5° = (2z6 + 4z11 + 2z16 + 4z17 + z22) − (z2 + 4z7 + 2z8 + 4z13 + 2z18) (8)
G_135° = (z5 + 4z11 + 2z16 + 4z17 + z23) − (z1 + 4z7 + 2z8 + 4z13 + z19) (9)
G_157.5° = (z10 + 4z11 + 2z16 + 4z17 + 2z18) − (2z6 + 4z7 + 2z8 + 4z13 + z14) (10)
Formulas (3) to (10) give the gradients of the first pixel point in the target image at 0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, and 157.5°, respectively.
Substep A2: and determining the gradient amplitude of the first pixel point according to the gradients of the first pixel point in eight directions.
In the present disclosure, after obtaining the gradients of the first pixel point in eight directions, the gradient magnitude of the first pixel point may be obtained based on the gradients in eight directions.
Specifically, the gradient magnitude of the first pixel point may be computed by the following formula (11) or formula (12):
M_1 = |G_0°| + |G_22.5°| + |G_45°| + |G_67.5°| + |G_90°| + |G_112.5°| + |G_135°| + |G_157.5°| (11)
M_∞ = max{|G_0°|, |G_22.5°|, |G_45°|, |G_67.5°|, |G_90°|, |G_112.5°|, |G_135°|, |G_157.5°|} (12)
In formulas (11) and (12), M_1 and M_∞ are two alternative gradient magnitudes; since their values are close, either of the two may be selected as the gradient magnitude of the first pixel point.
After the calculations of formulas (3) to (12), the gradient magnitude of each first pixel point in the target image is obtained. Since each first pixel point is convolved in eight directions by the first convolution kernel, the obtained gradient magnitudes are more accurate and comprehensive, and the edge of the target image detected from them is likewise more accurate and comprehensive.
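As an illustration of substep A1 and formula (12), the eight-direction magnitude computation can be sketched as follows. This is a minimal sketch: only the 0° kernel is written out, with weights following the (1, 2, 4, 2, 1) pattern of formula (3); the seven rotated 5×5 kernels and the border handling are assumptions of this sketch, not the patent's exact kernels.

```python
import numpy as np

# Hypothetical 5x5 kernel for the 0-degree direction; its weights follow the
# (1, 2, 4, 2, 1) pattern of formula (3). A full implementation would add the
# seven rotated variants for the 22.5-157.5 degree directions.
K0 = np.array([
    [ 0,  0,  0,  0,  0],
    [ 1,  2,  4,  2,  1],
    [ 0,  0,  0,  0,  0],
    [-1, -2, -4, -2, -1],
    [ 0,  0,  0,  0,  0],
], dtype=float)

def directional_gradients(img, kernels):
    """Correlate img with each directional kernel at every valid 5x5 window."""
    h, w = img.shape
    out = np.zeros((len(kernels), h - 4, w - 4))
    for n, k in enumerate(kernels):
        for i in range(h - 4):
            for j in range(w - 4):
                out[n, i, j] = np.sum(img[i:i + 5, j:j + 5] * k)
    return out

def gradient_magnitude(img, kernels):
    """Formula (12): the magnitude is the max absolute directional response."""
    return np.max(np.abs(directional_gradients(img, kernels)), axis=0)
```

For a horizontal step edge, the magnitude peaks at the windows straddling the edge and is zero in flat regions, which is the behavior the edge detection step relies on.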
In step S12, a plurality of second pixel points corresponding to a plurality of gradient magnitudes greater than the dynamic threshold are determined from the plurality of gradient magnitudes.
In the related art, after comparing the gradient magnitude of each first pixel point with a fixed threshold, the first pixel points whose magnitudes exceed the fixed threshold are taken as second pixel points, and the plurality of second pixel points are then fitted to obtain the edge of the first target object, completing edge detection of the first target object.
In the present disclosure, first, in step S11, each first pixel point in the target image is convolved in eight directions by the first convolution kernel, so the obtained gradient magnitudes are more accurate and comprehensive, and after the comparison with a threshold, the edge information extracted by the subsequent edge detection is more comprehensive.
Furthermore, human subjective vision arises when light reflected by an object reaches the retina and stimulates the optic nerves. The subjective brightness is logarithmic in the light intensity entering the eye, and it is determined not only by the brightness of the object itself but also by the background brightness around the object. If a fixed threshold is compared with the gradient magnitudes, then a threshold set too high yields too few second pixel points and the detected edge of the first target object is interrupted, while a threshold set too low yields too many second pixel points and the detected first target object has false edges.
The present disclosure therefore designs a dynamic threshold that conforms to the visual characteristics of human eyes, so that an appropriate number of second pixel points is obtained under different viewing conditions, neither too many nor too few, thereby avoiding edge discontinuities and false edges.
Wherein, under the condition that the background brightness of the first pixel point is above a first preset value and below a second preset value, determining a dynamic threshold according to a first exponential function; under the condition that the background brightness of the first pixel point is larger than the second preset value and smaller than the third preset value, determining a dynamic threshold according to a cubic curve function; and under the condition that the background brightness of the first pixel point is above a third preset value, determining a dynamic threshold according to the second exponential function.
Specifically, this can be expressed by the following formula (13):
ΔI = α0·exp(1/(α1·I + 1)), for 0 ≤ I ≤ a
ΔI = β0·I³ + β1·I² + β2·I + β3, for a < I < b (13)
ΔI = γ0·exp(γ1/(1.0 − I)), for I ≥ b
In formula (13), α0, α1, β0, β1, β2, β3, γ0, γ1 are constants that can be set according to the actual situation; I is the background brightness of the first pixel point; a is the low-dark-region cutoff gradient; b is the highlight-region starting gradient; ΔI is the dynamic threshold.
The background brightness of the first pixel point can be understood as the background gray value at the first pixel point, taken as the mean of the 24 neighborhood pixels around it (a 24-neighborhood, since the first convolution kernel is 5×5). The second preset value may be the low-dark-region cutoff gradient a and the third preset value may be the highlight-region starting gradient b (the first preset value may be 0); the first exponential function is α0·exp(1/(α1·I + 1)), the cubic curve function is β0·I³ + β1·I² + β2·I + β3, and the second exponential function is γ0·exp(γ1/(1.0 − I)). The low-dark-region cutoff gradient may be 0.18, and the highlight-region starting gradient may be 0.71.
In particular, referring to the three-segment curve shown in fig. 5, the first segment [0, a] is the first exponential function, the second segment (a, b) is the cubic curve function, and the third segment [b, 1) is the second exponential function. When the background brightness of the first pixel point is below the low-dark-region cutoff gradient, the larger the background brightness, the smaller the dynamic threshold and the more second pixel points are obtained. When the background brightness is larger than the low-dark-region cutoff gradient and smaller than the highlight-region starting gradient, the larger the background brightness, the larger the dynamic threshold and the fewer second pixel points are obtained. When the background brightness is above the highlight-region starting gradient, the dynamic threshold still grows with the background brightness, and it grows faster than the cubic curve function.
With this dynamic threshold, the threshold changes with the background brightness of the first pixel points, which better matches human visual characteristics; the number of obtained second pixel points is moderate, neither too many nor too few, avoiding the false edges caused by too many second pixel points and the edge discontinuities caused by too few.
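The piecewise threshold of formula (13) can be sketched as below. The constants α0, α1, β0–β3, γ0, γ1 used here are illustrative assumptions (the patent leaves them to be tuned); only the segment boundaries 0.18 and 0.71 come from the description, and the constants were chosen so the three segments exhibit the monotonic behavior described above.

```python
import math

# Segment boundaries from the description; the remaining constants are
# illustrative defaults, not values from the patent.
A_CUT, B_START = 0.18, 0.71  # low-dark cutoff and highlight starting gradients

def dynamic_threshold(i, a0=0.08, a1=8.0, b=(0.5, -0.3, 0.4, 0.05),
                      g0=0.02, g1=1.2):
    """Piecewise dynamic threshold of formula (13) for background brightness
    i normalized to [0, 1)."""
    if i <= A_CUT:                        # first exponential segment
        return a0 * math.exp(1.0 / (a1 * i + 1.0))
    if i < B_START:                       # cubic curve segment
        b0, b1, b2, b3 = b
        return b0 * i**3 + b1 * i**2 + b2 * i + b3
    return g0 * math.exp(g1 / (1.0 - i))  # second exponential segment
```

With these defaults the threshold decreases on the low-dark segment, increases on the cubic segment, and increases steeply on the highlight segment, matching the three regimes described for fig. 5.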
In step S13, a plurality of target pixel points are obtained by performing morphological erosion on the plurality of second pixel points.
The Sobel edge detection algorithm used to determine the first pixel points is sensitive to noise, which causes false edges and isolated light spots in the fitted first target object; isolated light spots are spurious bright points in the target image of the first target object. To avoid these phenomena, morphological erosion is performed on the plurality of second pixel points to obtain the plurality of target pixel points, so that the edge of the first target object fitted from the target pixel points is clearer and the influence of noise on the target image is reduced.
Specifically, the image edge formed by the plurality of second pixel points can be convolved with a second convolution kernel to obtain the local minimum values among the second pixel points covered by the kernel; these local minimum values are taken as the plurality of target pixel points.
The second convolution kernel may be a 3×3 kernel. For example, as shown in fig. 6, the image before erosion is A and the second convolution kernel is B; the five-pointed star in B marks the kernel's origin. After B is convolved along the inner boundary of image A, the eroded image is obtained, which is one ring smaller than image A.
As the second convolution kernel B moves along the inner boundary of image A, only the pixel points of A at which the kernel is completely contained in the image are retained; equivalently, the position of the five-pointed star origin becomes a target pixel point of the eroded image edge. The boundary points among the plurality of second pixel points in image A are thereby filtered out, the target pixel points are retained, and the image edge formed by the target pixel points is clearer.
By performing morphological erosion on the image formed by the plurality of second pixel points, the target pixel points are obtained; the image formed by the target pixel points avoids false edges and isolated light spots, so the image edge is clearer and more accurate.
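The erosion step can be sketched as a minimal binary erosion with a square structuring element. The patent's second convolution kernel B in fig. 6 may differ in shape; a 3×3 all-ones element is assumed here for illustration.

```python
import numpy as np

def erode(mask, k=3):
    """Binary erosion with a k x k all-ones structuring element: a pixel is
    kept only when the whole element fits inside the foreground around it."""
    h, w = mask.shape
    r = k // 2
    out = np.zeros_like(mask)
    for i in range(r, h - r):
        for j in range(r, w - r):
            # The origin survives only if every covered pixel is foreground.
            out[i, j] = mask[i - r:i + r + 1, j - r:j + r + 1].all()
    return out
```

A solid region shrinks by one ring, and an isolated single-pixel light spot is removed entirely, which is exactly the filtering effect described above.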
In step S14, the plurality of target pixel points are fitted by a least square method, so as to obtain a center of the first target object.
In the disclosure, after obtaining a plurality of target pixel points of a first target object, the plurality of target pixel points may be fitted by a least square method to obtain a circular edge of the first target object, and then a center point of the first target object is obtained based on the circular edge of the first target object.
When the difference between the squared first distances from the plurality of target pixel points to the reference center and the squared reference radius of the reference circle is close to a first value, the center of the reference circle is taken as the center of the first target object; the first value may be 0.
Specifically, formula (14) may be set first:
R² = (x − A)² + (y − B)² = x² − 2Ax + A² + y² − 2By + B² (14)
In formula (14), R is the radius of the set reference circle, (A, B) are the coordinates of the reference center of the set reference circle, and (x, y) are the coordinates of a pixel point on the set reference circle.
Then the distance d_i from a target pixel point with coordinates (x_i, y_i) to the reference center (A, B) is:
d_i² = (x_i − A)² + (y_i − B)² (15)
In formula (15), d_i is the distance from the target pixel point to the reference center, (A, B) are the coordinates of the reference center of the set reference circle, and (x_i, y_i) are the coordinates of the target pixel point.
Let a = −2A, b = −2B, c = A² + B² − R². Then the difference between d_i² and R² is:
d_i² − R² = x_i² + y_i² + a·x_i + b·y_i + c (16)
On the basis of formula (16), the sum of squared differences between the squared distances from all target pixel points to the reference center and the squared reference radius is taken as the objective function Q(a, b, c); minimizing this objective makes all target pixel points as close as possible to the reference circle, so that the fitted reference circle is closest to the edge:
Q(a, b, c) = Σ(d_i² − R²)² = Σ(x_i² + y_i² + a·x_i + b·y_i + c)² (17)
In formula (17), Q(a, b, c) is the objective function, d_i is the distance from a target pixel point to the reference center, and R is the radius of the set reference circle.
Since the squared error Q(a, b, c) is greater than or equal to 0, it has a minimum. The partial derivatives of Q with respect to a, b, and c can be set equal to 0 to obtain the extremum points, and the fitted circle parameters A, B, R at the minimum are obtained by comparing the function values at all extremum points:
∂Q/∂a = 2Σ(x_i² + y_i² + a·x_i + b·y_i + c)·x_i = 0 (18)
∂Q/∂b = 2Σ(x_i² + y_i² + a·x_i + b·y_i + c)·y_i = 0 (19)
∂Q/∂c = 2Σ(x_i² + y_i² + a·x_i + b·y_i + c) = 0 (20)
From formulas (18) to (20), the minimizing values of a, b, c are obtained, and A, B, R of the reference circle then follow from a = −2A, b = −2B, c = A² + B² − R², i.e. A = −a/2, B = −b/2, R = √(A² + B² − c).
It can be seen that a reference circle may be fitted from the coordinates (xᵢ, yᵢ) of the plurality of target pixel points; this reference circle is the edge of the first target object, and once the reference circle has been determined, its center (A, B), i.e. the center of the first target object, is also determined.
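The least-squares circle fit described above can be sketched as follows. This is an illustrative implementation of the algebraic fit (minimizing formula (17) via the linear system arising from the zero partial derivatives), not the patent's own code; the sample edge points are hypothetical.

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic least-squares circle fit: minimizes
    Q(a, b, c) = sum((x^2 + y^2 + a*x + b*y + c)^2)."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    # Setting the partial derivatives of Q to zero yields the
    # overdetermined linear system M @ [a, b, c]^T = v.
    M = np.column_stack([xs, ys, np.ones_like(xs)])
    v = -(xs**2 + ys**2)
    (a, b, c), *_ = np.linalg.lstsq(M, v, rcond=None)
    # Recover circle parameters from a = -2A, b = -2B, c = A^2 + B^2 - R^2.
    A, B = -a / 2.0, -b / 2.0
    R = np.sqrt(A**2 + B**2 - c)
    return A, B, R

# Hypothetical target pixel points on a circle of radius 5 centered at (10, 20).
t = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
A, B, R = fit_circle(10 + 5 * np.cos(t), 20 + 5 * np.sin(t))
```

For exact points on a circle the fit recovers the center and radius; for noisy edge pixels it returns the circle minimizing the squared algebraic distance, matching the objective in formula (17).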
In step S15, concentricity between the first target and the second target is determined according to the distance between the centers of the first target and the second target.
In the disclosure, a plurality of target pixel points can be fitted based on the least square method to obtain the center coordinates of the first target object, and the edges of the second target object are clear, so that the center coordinates of the second target object can be directly calculated.
After the center coordinates of the first object and the second object are determined, a distance between the first object and the second object may be calculated, and the distance may be taken as concentricity between the two.
The size of each pixel point can be determined through camera calibration; and determining the distance between the centers of the first object and the second object according to the size of each pixel point and the number of the pixel points spaced between the centers of the first object and the second object.
Specifically, vision processing software may be used to connect a specified camera and collect an image of a standard checkerboard calibration board. A VisionPro checkerboard calibration tool (CogCalibCheckerboard) may be selected for calibrating the camera, with the calibration mode set to Linear, the calibration-board feature finder set to exhaustive checkerboard, the reference symbol set to standard indexes, and the block sizes X and Y of the calibration board set to 3 mm. After an image to be corrected is captured, the correction model is used to preprocess it, so as to obtain a target image containing the first target object and the second target object.
In the process of collecting the target image by the camera, the size and the pixel count of the target image are preset in the camera, so the size of each pixel point can be obtained by dividing the size of the target image by the pixel count. Multiplying the number of pixel points spaced between the centers of the first target object and the second target object in the acquired target image by the size of each pixel point then gives the distance between the center points of the first target object and the second target object.
The smaller the distance between the center points of the first target object and the second target object, the higher the concentricity of the first target object and the second target object; the larger the distance, the lower the concentricity.
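The calibration-based distance computation above can be sketched as follows; the sensor size, resolution, and center coordinates are hypothetical values, and square pixels are assumed.

```python
import math

def center_distance_mm(center1, center2, image_size_mm, image_size_px):
    """Physical distance between two centers.

    The size of each pixel point is the calibrated physical image size
    divided by the pixel count; the pixel-space distance between the
    centers is scaled by that size to obtain millimetres."""
    pixel_size = image_size_mm / image_size_px  # mm per pixel
    dx = center1[0] - center2[0]
    dy = center1[1] - center2[1]
    return math.hypot(dx, dy) * pixel_size

# Hypothetical values: a 12.8 mm field of view imaged over 1280 pixels,
# centers 5 pixels apart -> concentricity of 0.05 mm.
concentricity = center_distance_mm((640, 480), (643, 484), 12.8, 1280)
```

The returned value is the concentricity measure used in step S15: the smaller it is, the better the two targets are aligned.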
According to the concentricity determination method provided by the disclosure, in the first aspect, eight-direction convolution is performed on each first pixel point through the edge detection algorithm of eight-direction gradient synthesis, so that the gradient magnitude obtained for each first pixel point is accurate and the edge information subsequently extracted by the edge detection algorithm is comprehensive and accurate. In the second aspect, through the setting of the dynamic threshold, the threshold changes with the background brightness of the first pixel point in a manner that better matches the visual characteristics of the human eye; after the gradient magnitudes are compared with the dynamic threshold, an appropriate number of second pixel points is obtained, avoiding both the pseudo-edges caused by too many second pixel points and the edge discontinuities caused by too few. In the third aspect, the second pixel points are processed through morphological erosion, which filters isolated light spots and pseudo-edges out of the image edge formed by the second pixel points, so that the obtained target pixel points are more accurate. In the fourth aspect, the plurality of accurate target pixel points are fitted by the least square method, yielding a clearer and more accurate edge of the first target object; the center of the first target object obtained from this edge is more accurate, and the finally calculated concentricity is therefore more accurate.
In a possible implementation manner, the image to be processed may be smoothed by a third convolution kernel, so as to obtain the target image.
A filter can be constructed from one-dimensional Gaussian functions G(x) and G(y), and convolution is performed on the image to be processed f(x, y) along rows and columns respectively to obtain the target image I(x, y).
Specifically, the target image can be obtained by the following formula:
I(x, y) = [G(x)G(y)] * f(x, y)  (23)
In formulas (21) to (23), σ is the standard deviation of the Gaussian function, used to control the smoothness of the output image; f(x, y) is the image to be processed and I(x, y) is the target image.
After the image to be processed is smoothed, the target image is obtained, and the pixel values of the pixel points in the target image are smoother.
In one possible implementation, please refer to the logic diagram of the concentricity determination method shown in fig. 7, which includes the following steps:
In step S21, a lens (the second target object, e.g., a gemstone) is assembled with HAF glue (the first target object).
Specifically, a gemstone feeding mechanism can be used to transfer the gemstone to an assembly position, and an HAF glue feeding mechanism can be used to transfer the HAF glue to the assembly position; an HAF glue attaching mechanism then completes the assembly of the gemstone and the HAF glue. Referring to FIG. 8, the HAF glue is evenly attached to the edge of the gemstone.
In step S22, a camera photographs the assembled first target object and second target object, thereby obtaining the target image.
In step S23, a center of a first object in the target image is fitted based on the target image, and concentricity between the center of the first object and the center of the second object is calculated.
In step S24, if the concentricity is smaller than the set value, the first target object and the second target object are packaged; if the concentricity is greater than or equal to the set value, the first target object and the second target object are screened out.
Specifically, when the concentricity is smaller than the set value, the concentricity between the first target object and the second target object is high and the quality of the produced product is good, so the two can be packaged. When the concentricity is greater than or equal to the set value, the concentricity between the first target object and the second target object is low and the product quality is poor; the two can then be discarded, or photographed again.
Based on the same inventive concept, the present disclosure proposes a concentricity determination device, referring to fig. 9, the concentricity determination device 120 includes:
the edge detection module 121 is configured to detect a target image corresponding to a first target object by adopting an edge detection algorithm of eight-direction gradient synthesis, so as to obtain gradient magnitudes of a plurality of first pixel points of the first target object;
a second pixel determining module 122 configured to determine a plurality of second pixels corresponding to a plurality of gradient magnitudes greater than the dynamic threshold from the plurality of gradient magnitudes;
the target pixel point determining module 123 is configured to obtain a plurality of target pixel points by performing morphological erosion processing on the plurality of second pixel points;
a fitting module 124 configured to fit the plurality of target pixel points by a least square method, so as to obtain a center of the first target object;
a concentricity determination module 125 configured to determine concentricity between the first and second targets based on a distance between centers of the first and second targets.
Optionally, the edge detection module 121 includes:
the gradient determining module is configured to respectively convolve first pixel points of the target image by using a first convolution kernel to obtain gradients of the first pixel points in eight directions;
and the gradient amplitude determining module is configured to determine the gradient amplitude of the first pixel point according to the gradients of the first pixel point in eight directions.
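The eight-direction gradient computation performed by these two modules can be sketched as follows. The 3×3 compass kernels below are Kirsch-style placeholders — the patent's actual first convolution kernel is not reproduced in this excerpt — and synthesizing the magnitude as the maximum absolute response is one common choice.

```python
import numpy as np

# Illustrative Kirsch-style base kernel; rotating its border in 45-degree
# steps yields the eight directional kernels. The patent's actual first
# convolution kernels may differ.
BASE = np.array([[ 5,  5,  5],
                 [-3,  0, -3],
                 [-3, -3, -3]])

def rotate45(k):
    """Rotate a 3x3 kernel's border one step clockwise (45 degrees)."""
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    out = k.copy()
    for i, (r, c) in enumerate(order):
        pr, pc = order[(i - 1) % 8]
        out[r, c] = k[pr, pc]
    return out

def eight_direction_magnitude(img, y, x):
    """Gradient magnitude of first pixel point (y, x): the maximum
    absolute response over the eight rotated compass kernels."""
    patch = img[y - 1:y + 2, x - 1:x + 2].astype(float)
    k, responses = BASE, []
    for _ in range(8):
        responses.append(abs(float((patch * k).sum())))
        k = rotate45(k)
    return max(responses)
```

On a vertical step edge the kernel aligned with the bright side gives the strongest response, so the synthesized magnitude peaks exactly on the edge, which is what makes the subsequently extracted edge information comprehensive.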
Optionally, the concentricity determination device 120 includes:
the first dynamic threshold determining module is configured to determine the dynamic threshold according to a first exponential function when the background brightness of the first pixel point is above a first preset value and below a second preset value;
the second dynamic threshold determining module is configured to determine the dynamic threshold according to a cubic curve function under the condition that the background brightness of the first pixel point is larger than a second preset value and smaller than a third preset value;
and a third dynamic threshold determining module configured to determine the dynamic threshold according to a second exponential function if the background brightness of the first pixel point is above the third preset value.
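The three-segment dynamic threshold implemented by these modules can be sketched as a piecewise function of background brightness. The breakpoints and coefficients below are placeholders, since the patent's concrete preset values and function parameters are not reproduced in this excerpt.

```python
import math

# Placeholder preset values; the patent's actual first, second, and
# third preset values are not given in this excerpt.
FIRST, SECOND, THIRD = 0.0, 85.0, 170.0

def dynamic_threshold(brightness):
    """Piecewise dynamic threshold over background brightness:
    a first exponential segment, a cubic-curve segment, and a
    second exponential segment."""
    if FIRST <= brightness <= SECOND:
        return 20.0 * math.exp(0.005 * brightness)          # first exponential
    if SECOND < brightness < THIRD:
        return 30.0 + 1e-5 * (brightness - SECOND) ** 3     # cubic curve
    return 40.0 * math.exp(0.002 * (brightness - THIRD))    # second exponential
```

The point of the piecewise form is that the threshold rises slowly in dark regions and faster in bright ones, tracking the eye's contrast sensitivity so that neither too many nor too few second pixel points survive the comparison.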
Optionally, the target pixel point determining module 123 includes:
the local minimum determining module is configured to convolve the image edges formed by the plurality of second pixel points through a second convolution kernel to obtain local minimum values in the plurality of second pixel points, wherein the second convolution kernel covers the image edges;
and the erosion module is configured to take the local minimum value of the plurality of second pixel points as the plurality of target pixel points.
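The local-minimum erosion performed by these modules can be sketched as a sliding minimum over the image edge, assuming a 3×3 structuring element as an example.

```python
import numpy as np

def erode_local_minima(edge, ksize=3):
    """Grayscale morphological erosion: each pixel is replaced by the
    local minimum under a ksize x ksize structuring element, which
    suppresses isolated bright spots and thin pseudo-edges."""
    pad = ksize // 2
    padded = np.pad(edge, pad, mode="edge")
    out = np.empty_like(edge)
    h, w = edge.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + ksize, x:x + ksize].min()
    return out

# An isolated bright spot has a dark neighborhood, so erosion removes it.
edge = np.zeros((5, 5)); edge[2, 2] = 1.0
eroded = erode_local_minima(edge)
```

A genuine edge several pixels wide survives erosion (only its border thins), while a one-pixel light spot or pseudo-edge vanishes entirely, which is why the remaining pixels serve as the target pixel points.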
Optionally, the fitting module 124 includes:
a center determination module configured to take the center of the reference circle as the center of the first target object in a case where the difference between the square of a first distance from the plurality of target pixel points to a reference center and the square of a second distance given by the reference radius of the reference circle approaches a first value.
Optionally, the concentricity determination module 125 comprises:
the camera calibration module is configured to determine the size of each pixel point through camera calibration;
and the distance calculation module is configured to determine the distance between the centers of the first object and the second object according to the size of each pixel point and the number of the pixel points spaced between the centers of the first object and the second object.
Optionally, the concentricity determination device 120 includes:
and the smoothing module is configured to carry out smoothing processing on the image to be processed through a third convolution kernel to obtain the target image.
The specific manner in which the various modules perform their operations in the apparatus of the above embodiments has been described in detail in the embodiments of the method, and will not be described again here.
Fig. 10 is a block diagram of an electronic device 700, according to an example embodiment. As shown in fig. 10, the electronic device 700 may include: a processor 701, a memory 702. The electronic device 700 may also include one or more of a multimedia component 703, an input/output (I/O) interface 704, and a communication component 705.
Wherein the processor 701 is configured to control the overall operation of the electronic device 700 to perform all or part of the steps in the concentricity determination method described above. The memory 702 is used to store various types of data to support operation on the electronic device 700, which may include, for example, instructions for any application or method operating on the electronic device 700, as well as application-related data, such as contact data, messages sent and received, pictures, audio, video, and so forth. The Memory 702 may be implemented by any type or combination of volatile or non-volatile Memory devices, such as static random access Memory (Static Random Access Memory, SRAM for short), electrically erasable programmable Read-Only Memory (Electrically Erasable Programmable Read-Only Memory, EEPROM for short), erasable programmable Read-Only Memory (Erasable Programmable Read-Only Memory, EPROM for short), programmable Read-Only Memory (Programmable Read-Only Memory, PROM for short), read-Only Memory (ROM for short), magnetic Memory, flash Memory, magnetic disk, or optical disk. The multimedia component 703 can include a screen and an audio component. Wherein the screen may be, for example, a touch screen, the audio component being for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals. The received audio signals may be further stored in the memory 702 or transmitted through the communication component 705. The audio assembly further comprises at least one speaker for outputting audio signals. The I/O interface 704 provides an interface between the processor 701 and other interface modules, which may be a keyboard, mouse, buttons, etc. These buttons may be virtual buttons or physical buttons. The communication component 705 is for wired or wireless communication between the electronic device 700 and other devices. 
The wireless communication may be, for example, one or a combination of Wi-Fi, Bluetooth, near field communication (NFC), 2G, 3G, 4G, NB-IoT, eMTC, 5G, or the like, and is not limited herein. The corresponding communication component 705 may thus include a Wi-Fi module, a Bluetooth module, an NFC module, and the like.
In an exemplary embodiment, the electronic device 700 may be implemented by one or more application specific integrated circuits (Application Specific Integrated Circuit, abbreviated ASIC), digital signal processor (Digital Signal Processor, abbreviated DSP), digital signal processing device (Digital Signal Processing Device, abbreviated DSPD), programmable logic device (Programmable Logic Device, abbreviated PLD), field programmable gate array (Field Programmable Gate Array, abbreviated FPGA), controller, microcontroller, microprocessor, or other electronic components for performing the concentricity determination method described above.
In another exemplary embodiment, a computer readable storage medium is also provided, comprising program instructions which, when executed by a processor, implement the steps of the concentricity determination method described above. For example, the computer readable storage medium may be the memory 702 including program instructions described above, which are executable by the processor 701 of the electronic device 700 to perform the concentricity determination method described above.
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings; however, the present disclosure is not limited to the specific details of the above embodiments, and various simple modifications can be made to the technical solutions of the present disclosure within the scope of the technical concept of the present disclosure, all of which fall within the scope of protection of the present disclosure.
In addition, the specific features described in the foregoing embodiments may be combined in any suitable manner, and in order to avoid unnecessary repetition, the present disclosure does not further describe various possible combinations.
Moreover, any combination between the various embodiments of the present disclosure is possible as long as it does not depart from the spirit of the present disclosure, and such combinations should likewise be regarded as part of the disclosure of the present disclosure.

Claims (10)

1. A concentricity determination method, the method comprising:
detecting a target image corresponding to a first target object by adopting an edge detection algorithm of eight-direction gradient synthesis to obtain gradient amplitude values of a plurality of first pixel points of the first target object;
determining a plurality of second pixel points corresponding to a plurality of gradient amplitudes larger than a dynamic threshold value from the gradient amplitudes;
performing morphological erosion processing on the plurality of second pixel points to obtain a plurality of target pixel points;
fitting the plurality of target pixel points by using a least square method to obtain the center of the first target object;
and determining concentricity between the first target object and the second target object according to the distance between the centers of the first target object and the second target object.
2. The method of claim 1, wherein the detecting the target image corresponding to the first target object by using an edge detection algorithm of eight-direction gradient synthesis to obtain gradient magnitudes of a plurality of first pixels of the first target object comprises:
respectively convolving a first pixel point of the target image by using a first convolution kernel to obtain gradients of the first pixel point in eight directions;
and determining the gradient amplitude of the first pixel point according to the gradients of the first pixel point in eight directions.
3. The method of claim 1, wherein the dynamic threshold is determined by:
determining the dynamic threshold according to a first exponential function when the background brightness of the first pixel point is above a first preset value and below a second preset value;
determining the dynamic threshold according to a cubic curve function under the condition that the background brightness of the first pixel point is larger than a second preset value and smaller than a third preset value;
and determining the dynamic threshold according to a second exponential function under the condition that the background brightness of the first pixel point is above the third preset value.
4. The method of claim 1, wherein the performing morphological erosion processing on the plurality of second pixel points to obtain a plurality of target pixel points comprises:
convolving an image edge formed by the plurality of second pixel points through a second convolution kernel to obtain a local minimum value in the plurality of second pixel points, wherein the second convolution kernel covers the image edge;
and taking the local minimum value in the plurality of second pixel points as the plurality of target pixel points.
5. The method of claim 1, wherein fitting the plurality of target pixel points by least squares results in a center of the first target object, comprising:
and taking the center of the reference circle as the center of the first target object in a case where the difference between the square of a first distance from the plurality of target pixel points to a reference center and the square of a second distance given by the reference radius of the reference circle approaches a first value.
6. The method of claim 1, wherein the distance between the centers of the first target and the second target is determined by:
determining the size of each pixel point through camera calibration;
and determining the distance between the centers of the first object and the second object according to the size of each pixel point and the number of the pixel points spaced between the centers of the first object and the second object.
7. The method according to claim 1, wherein the target image is obtained by:
and smoothing the image to be processed through a third convolution kernel to obtain the target image.
8. A concentricity determination device, comprising:
the edge detection module is configured to detect a target image corresponding to a first target object by adopting an edge detection algorithm of eight-direction gradient synthesis to obtain gradient amplitude values of a plurality of first pixel points of the first target object;
the second pixel point determining module is configured to determine a plurality of second pixel points corresponding to a plurality of gradient amplitude values larger than the dynamic threshold value from the plurality of gradient amplitude values;
the target pixel point determining module is configured to obtain a plurality of target pixel points by performing morphological erosion processing on the plurality of second pixel points;
the fitting module is configured to fit the plurality of target pixel points by a least square method to obtain the center of the first target object;
and the concentricity determination module is configured to determine concentricity between the first target object and the second target object according to the distance between the centers of the first target object and the second target object.
9. A non-transitory computer readable storage medium having stored thereon a computer program, characterized in that the program when executed by a processor realizes the steps of the method according to any of claims 1-7.
10. An electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method of any one of claims 1-7.
CN202210929281.3A 2022-08-03 2022-08-03 Concentricity determination method and device, storage medium and electronic equipment Pending CN117557588A (en)

CN117557588A true CN117557588A (en) 2024-02-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination