CN116993966A - Casting polishing vision intelligent positioning method and system - Google Patents

Casting polishing vision intelligent positioning method and system

Info

Publication number
CN116993966A
Authority
CN
China
Prior art keywords
casting
edge
pixel point
positioning
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311253246.5A
Other languages
Chinese (zh)
Other versions
CN116993966B (en)
Inventor
闫新华
黎兆星
闫新兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nobot Intelligent Equipment Shandong Co ltd
Original Assignee
Nobot Intelligent Equipment Shandong Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nobot Intelligent Equipment Shandong Co ltd
Priority to CN202311253246.5A
Publication of CN116993966A
Application granted
Publication of CN116993966B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/245 Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/13 Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06V10/28 Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/48 Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G06V10/762 Recognition or understanding using pattern recognition or machine learning using clustering
    • G06T2207/20061 Hough transform
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30164 Workpiece; Machine component
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image processing and provides a visual intelligent positioning method and system for casting polishing. The method comprises the following steps: acquiring a grayscale image of the target casting; acquiring the casting region; combining the gray-level and gradient features of each pixel point of the casting region to obtain each pixel point's casting-edge bipolarization coefficient and casting-edge membership degree; acquiring each pixel point's casting arc-edge direction difference index; acquiring each pixel point's casting positioning-edge saliency value and the casting positioning-edge saliency map; segmenting the casting positioning-edge pixel points; acquiring the gradient magnitudes of the casting positioning-edge pixel points; adaptively selecting the dual thresholds of the Canny edge detection algorithm; acquiring the edge image of the target casting; and completing the intelligent positioning of the casting. The method thereby locates the casting edges accurately and achieves high visual positioning precision.

Description

Casting polishing vision intelligent positioning method and system
Technical Field
The invention relates to the technical field of image processing, in particular to a casting polishing vision intelligent positioning method and system.
Background
Castings are shaped metal articles obtained by a specific casting method: a metal or other alloy material is melted and poured into a previously prepared mould, where it solidifies into metal parts of various shapes and sizes. Improper mould design, problems in the casting process, or defective casting materials can leave burrs, flash, and similar defects on the produced casting, so cleaning and polishing castings is an indispensable step in casting production.
As technology advances, many factories have introduced polishing robots to polish castings, yet most factories still rely on manual work to position them; automated polishing therefore requires positioning the castings through a vision system. The mainstream approach locates a casting by extracting contours from its image, so contour-extraction accuracy directly determines the accuracy of the positioning information. The traditional Canny edge detection algorithm is fast, strongly noise-resistant, and able to extract contours accurately. However, because its dual-threshold parameters are set manually, the algorithm adapts poorly to complex scenes and is prone to missed edges or false edges.
Disclosure of Invention
In order to solve the technical problems, the invention aims to provide a casting polishing vision intelligent positioning method and system, and the adopted technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a method for intelligently positioning polishing vision of a casting, the method comprising the steps of:
acquiring a gray level image of a target casting; binarizing the gray level image of the target casting to obtain a casting area;
acquiring the casting-edge bipolarization coefficient of each pixel point of the casting region according to the gray values of the pixel points of the casting region; acquiring a suspected casting-edge pixel point set according to the gradient magnitude of each pixel point in the casting region; acquiring each pixel point's casting-edge membership degree according to the gradient magnitudes of the suspected casting-edge pixel point set; acquiring the casting arc-edge direction difference index according to the gradient direction of each pixel point in the casting region;
acquiring each pixel point's casting positioning-edge saliency value according to its casting-edge bipolarization coefficient, casting-edge membership degree, and casting arc-edge direction difference index; acquiring the casting positioning-edge pixel points according to the saliency values; completing the adaptive selection of the Canny edge detection dual thresholds according to the gradient magnitudes of the casting positioning-edge pixel points; and acquiring the edge image of the target casting by applying the Canny edge detection algorithm with the selected dual thresholds;
and completing intelligent positioning of the castings according to the edge images of the target castings.
Preferably, the obtaining of the casting-edge bipolarization coefficient of each pixel point of the casting region according to the gray values of the pixel points of the casting region comprises:
obtaining each pixel point's intra-class uniformity index and inter-class difference degree, and taking the product of the two as the pixel point's casting-edge bipolarization coefficient.
Preferably, the obtaining of each pixel point's intra-class uniformity index and inter-class difference degree comprises:
the intra-class uniformity index is specifically:
obtaining, for each cluster category, the product of the standard deviation and the range of the gray values of its pixel points, and taking the reciprocal of the sum of these products over all cluster categories as the pixel point's intra-class uniformity index;
the inter-class difference degree is expressed as follows:

D_i = \sum_{a \in C_1} \sum_{b \in C_2} \left| g_a - g_b \right|

where D_i denotes the inter-class difference degree of pixel point i in its local region; C_1 and C_2 denote the two cluster categories, containing n_1 and n_2 pixel points respectively; a and b denote pixel points in C_1 and C_2 respectively; and g_a and g_b denote their corresponding gray values.
Preferably, the method for obtaining the suspected casting edge pixel point set according to the gradient amplitude of each pixel point in the casting area comprises the following steps:
setting a local window by taking each pixel point in the gray level image of the target casting as the center, calculating the average value of the gradient amplitude values of all the pixel points in the local window, and marking the pixel point set with the gradient amplitude value larger than the average value as a suspected casting edge pixel point set.
Preferably, the obtaining of each pixel point's casting-edge membership degree according to the gradient magnitudes of the suspected casting-edge pixel point set is specifically:

F_i = \begin{cases} \dfrac{\left| \mu_1 - \mu_2 \right|}{\sigma_1 + \sigma_2 + 1}, & n \ge T \\[4pt] 0, & n < T \end{cases}

where F_i denotes the casting-edge membership degree of the centre pixel point i; n denotes the number of pixel points in the suspected casting-edge pixel point set P within the local window; \mu_1 and \sigma_1 denote the mean and standard deviation of the gradient magnitudes of the pixel points in P; \mu_2 and \sigma_2 denote the mean and standard deviation of the gradient magnitudes of the suspected non-casting-edge pixel point set Q; and T is a constant threshold.
Preferably, the casting arc-edge direction difference index is obtained according to the gradient direction of each pixel point in the casting region, with the expression:

R_i = \dfrac{\sum_{k=1}^{n-1} \left| s_i - s_k \right|}{\theta_{\max} - \theta_{\min} + 1}

where R_i denotes the casting arc-edge direction difference index of the centre pixel point i; P is the suspected casting-edge pixel point set; n denotes the number of pixel points of P within the local window of pixel point i; s_k denotes the local gradient-direction change feature value of the k-th pixel point of P other than the centre pixel point; s_i denotes the local gradient-direction change feature value of the centre pixel point i; and \theta_{\max} and \theta_{\min} denote the maximum and minimum gradient-direction angles of the pixel points in P.
Preferably, the obtaining of each pixel point's casting positioning-edge saliency value according to its casting-edge bipolarization coefficient, casting-edge membership degree, and casting arc-edge direction difference index is specifically:
calculating the product of the pixel point's casting-edge bipolarization coefficient and its casting-edge membership degree, and taking the ratio of this product to the casting arc-edge direction difference index as the casting positioning-edge saliency value.
Preferably, the adaptive selection of the Canny edge detection dual thresholds is completed according to the gradient magnitudes of the casting positioning-edge pixel points, the specific method comprising:
arranging the gradient magnitudes of the casting positioning-edge pixel points in ascending order to obtain a gradient sequence; taking the maximum gradient magnitude in the gradient sequence minus 1 as the high threshold, and half of the maximum gradient magnitude, rounded down, as the low threshold.
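By way of illustration, the threshold rule above can be sketched as follows; the helper name and the commented `cv2.Canny` call are hypothetical and not part of the patent.

```python
import numpy as np

def adaptive_canny_thresholds(grad_mags):
    """Adaptive dual thresholds from positioning-edge gradient magnitudes:
    high = (largest magnitude) - 1, low = floor(largest magnitude / 2)."""
    seq = np.sort(np.asarray(grad_mags))  # ascending gradient sequence
    g_max = int(seq[-1])
    high = g_max - 1
    low = g_max // 2                      # half the maximum, rounded down
    return low, high

# edges = cv2.Canny(gray_img, low, high)  # hypothetical downstream use

low, high = adaptive_canny_thresholds([12, 48, 33, 57])
print(low, high)  # 28 56
```

Sorting is kept only to mirror the patent's "gradient sequence"; the rule itself depends only on the maximum magnitude.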
Preferably, the intelligent positioning of the casting is completed according to the edge image of the target casting, the specific steps comprising:
obtaining the centre and radius of each circle in the edge image of the target casting with a Hough circle detection algorithm, selecting the three circles with the largest radii, calculating the centroid of the triangle formed by their centre coordinates, and taking that centroid as the positioning coordinates of the casting in the image.
In a second aspect, an embodiment of the present invention further provides a casting polishing visual intelligent positioning system, including a memory, a processor, and a computer program stored in the memory and running on the processor, where the processor implements the steps of any one of the methods described above when executing the computer program.
The invention has at least the following beneficial effects:
according to the method, according to the pixel points at the edge of the casting positioning, the gray distribution of the pixel points at two sides of the edge of the casting positioning is approximately two-polarization distribution, so that two polarization coefficients of the edge of the casting are constructed, the problem of the positioning of the edge of the casting is solved, and the accuracy of the positioning of the casting is improved; meanwhile, the pixel points on the casting positioning edge in the target casting gray level image are segmented by combining the casting positioning edge salient values, so that the problem of extraction of the casting positioning edge is solved, and the accuracy of the casting positioning edge is improved;
the invention completes the self-adaptive selection of the double-threshold parameter based on the gradient amplitude of the pixel point on the casting positioning edge, overcomes the omission or false edge caused by the manual setting of the high and low thresholds of the traditional Canny operator, can also obtain the high and low thresholds which are most suitable for the current edge detection, obtains a more accurate target casting edge image, and provides better data support for the positioning of the subsequent castings.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of steps of a visual intelligent positioning method for polishing castings according to an embodiment of the present invention.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve the preset aim, the following description refers to the specific implementation, structure, characteristics and effects of the intelligent positioning method and system for polishing a casting according to the invention in detail by combining the accompanying drawings and the preferred embodiment. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of the casting polishing vision intelligent positioning method provided by the invention with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of steps of a method for intelligently positioning polishing vision of a casting according to an embodiment of the present invention is shown, the method includes the following steps:
step S001: and acquiring an image of the target casting, and preprocessing the image.
The casting positioned in this embodiment is a disc-shaped gray iron casting; such disc castings are typically positioned by the cylindrical holes in the casting. A CCD camera is fixed at a set position on the mechanical arm of the polishing robot and photographs the whole casting; the obtained RGB image is converted into a grayscale image, recorded as the casting grayscale image.
A disc-shaped casting may contain several cylindrical holes; the casting positioned in this embodiment has two, and the casting areas where they are located are called the target casting areas. This embodiment positions the two target casting areas through the image information of their cylindrical holes, so that the obtained casting positioning information can later be used to polish both target casting areas.
Specifically, the Hough circle detection algorithm is used to process the casting grayscale image to obtain several circular areas; the Hough circle detection algorithm is prior art and not within the protection scope of this embodiment, so its details are not repeated. The minimum bounding rectangles of the two circular areas with the smallest radii are obtained, and the centre points of these two rectangles serve as the initial positioning points of the two target casting areas. The CCD camera is moved directly above each initial positioning point in turn to capture the images of the two target casting areas, recorded as the target casting images. Practitioners may choose the acquisition operator for the target casting according to the actual situation; this embodiment is not limited in this respect.
During image capture, noise arising from the camera, the environment, and other factors may appear. To avoid its influence on subsequent image processing, the two obtained target casting images are each denoised and then converted into grayscale images, recorded as the target casting grayscale images. Image denoising is prior art, and practitioners may select a denoising method according to the actual situation; this embodiment is not limited in this respect.
Step S002: and acquiring a casting positioning edge saliency map of the target casting, completing self-adaptive selection of double thresholds in a Canny edge detection algorithm, and extracting an edge image of the target casting.
Taking one target casting gray image as an example, the positioning of the target casting area is completed.
In the image of the target casting area, the gray distribution of the casting surface is uniform and differs strongly from the background area, and the interior of the cylindrical hole in the target casting area receives no light and therefore appears as a black background region. The target casting grayscale image is accordingly processed with the Otsu threshold algorithm to obtain the target casting binarized image; the Otsu threshold algorithm is prior art and not within the protection scope of this embodiment, so its details are not repeated. The foreground region in the target casting binarized image is recorded as the casting region, and the casting region is segmented out of the target casting grayscale image. Practitioners may select a segmentation algorithm for extracting the casting region according to the actual situation; this embodiment is not limited.
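The binarization step can be sketched with a minimal textbook implementation of Otsu's method, maximizing the between-class variance over candidate thresholds; the patent only cites the algorithm, so this code is illustrative rather than the patented implementation.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold t maximizing between-class variance over
    8-bit gray values (foreground = pixels >= t)."""
    hist = np.bincount(np.asarray(gray).ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                      # gray-level probabilities
    best_t, best_var = 1, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()      # class weights
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2       # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

img = np.array([10] * 40 + [200] * 60)         # bimodal toy "image"
t = otsu_threshold(img)
```

On a strongly bimodal image like the casting-versus-background case described above, the returned threshold cleanly separates the two gray populations.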
In the image of the casting area, the edges of the cylindrical holes used to locate the casting and the edge of the circular chassis profile used to assist in locating the casting are collectively called the casting positioning edges. To position the casting more accurately, the casting positioning edges must be extracted from the image of the target casting area. In the image, the gray distribution of the pixel points on the two sides of a casting positioning edge is approximately bipolar: the gray values on each side are uniform, but the difference between the two sides is large. Because the colour difference between the casting and the background is large, the degree of bipolarity of the gray values on the two sides of a casting positioning edge exceeds that on the two sides of a non-casting edge. Therefore, an m×m window is centred on each pixel point in the casting region and taken as that pixel point's local region. The size of m can be selected by the practitioner according to the actual situation; here the empirical value m = 7 is used. Taking pixel point i in the casting region as an example, the casting-edge bipolarization coefficient E_i of each pixel point in the casting region is calculated.
Specifically, the K-means algorithm is used to cluster the pixel points in pixel point i's local region according to their gray values. The K-means algorithm is a previously known technique and is not within the protection scope of this embodiment; the number of clusters may be selected by the practitioner according to the actual situation, and this embodiment is not limited, with K = 2 used here. The two resulting cluster categories are recorded as C_1 and C_2, and pixel point i's casting-edge bipolarization coefficient E_i is given by the specific formula:

E_i = U_i \times D_i, \quad U_i = \dfrac{1}{\sigma_1 r_1 + \sigma_2 r_2}, \quad D_i = \sum_{a \in C_1} \sum_{b \in C_2} \left| g_a - g_b \right|

wherein: U_i denotes the intra-class uniformity index of pixel point i's local region; D_i denotes the inter-class difference degree of pixel point i's local region; \sigma_1 and r_1 denote the standard deviation and range of the gray values of the pixel points in cluster category C_1; \sigma_2 and r_2 denote those of cluster category C_2; n_1 and n_2 denote the numbers of pixel points in C_1 and C_2; a and b denote pixel points in C_1 and C_2 respectively; and g_a and g_b denote their corresponding gray values.

The smaller the differences among the gray values of the pixel points within each cluster category, i.e. the smaller the standard deviations and ranges, the more uniform the gray distribution within the two cluster categories and the larger the intra-class uniformity index U_i, meaning that the gray distribution of pixel point i's local region is closer to a bipolar distribution and pixel point i is more likely a casting-edge pixel point. The larger the numbers of pixel points in cluster categories C_1 and C_2 and the gray differences between the two clusters, the larger the inter-class difference degree D_i. The larger both U_i and D_i, the closer the gray distribution of pixel point i's local region is to a bipolar distribution and the stronger its bipolarity, i.e. the larger the casting-edge bipolarization coefficient E_i, the more likely pixel point i lies on a casting positioning edge.
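The bipolarization (two-polarization) coefficient can be sketched on a single local patch as follows. This is a simplified stand-in: the K-means (K = 2) clustering is replaced by a split at the patch mean, the function name is hypothetical, and the +1.0 guard against a zero denominator is an implementation choice, not part of the patent.

```python
import numpy as np

def bipolarization_coefficient(patch):
    """Casting-edge bipolarization coefficient E = U * D of one local patch.
    The patent clusters gray values with K-means (K = 2); splitting at the
    patch mean is used here as a simple stand-in for that step."""
    g = np.asarray(patch, dtype=float).ravel()
    c1, c2 = g[g <= g.mean()], g[g > g.mean()]
    if c1.size == 0 or c2.size == 0:
        return 0.0
    # intra-class uniformity: reciprocal of summed (std * range) per cluster;
    # the +1.0 guards a zero denominator (implementation choice, not in patent)
    u = 1.0 / (c1.std() * np.ptp(c1) + c2.std() * np.ptp(c2) + 1.0)
    # inter-class difference: all cross-cluster pairwise gray differences
    d = np.abs(c1[:, None] - c2[None, :]).sum()
    return u * d

edge_patch = [[10, 10, 200], [11, 200, 201]]     # bipolar grays: likely edge
flat_patch = [[120, 121, 119], [120, 122, 121]]  # uniform grays: non-edge
```

As the description predicts, a patch straddling an edge scores far higher than a uniform interior patch.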
Further, a casting positioning edge borders the background, so the gradient magnitudes of the pixel points on it are large. Burr and flash areas on the edges of the cylindrical holes leave parts of those edges not bordering the background, which hinders the recognition of the cylindrical-hole edges; these parts are recorded as non-adjoining casting positioning edges. Because a non-adjoining casting positioning edge and its neighbouring area face the light source, light is reflected there to different degrees, so the brightness of the pixel points on a non-adjoining positioning edge differs clearly from that of the neighbouring area, and their gradient magnitudes are likewise large. Since the casting positioning edges are circular arcs, the gradient directions of two adjacent pixel points on such an edge are close, whereas for pixel points on a non-casting positioning edge bordering the background, the gradient-direction angles of adjacent pixel points differ clearly.
Specifically, the gradient magnitude G and gradient-direction angle \theta of every pixel point in the casting region are calculated with the Sobel operator; the Sobel operator is a known technique and not within the protection scope of this embodiment, so it is not described in detail. A 3×3 local window is centred on each pixel point of the target casting grayscale image; the window size may be set by the practitioner according to the actual situation, and this embodiment is not limited. Taking pixel point i as an example: if pixel point i lies on an edge in the target casting grayscale image, then within its local window the gradient magnitudes of non-edge pixel points are approximately 0 while those of edge pixel points are greater than 0. Therefore the mean \mu of the gradient magnitudes of all pixel points in pixel point i's local window is calculated; the set of pixel points whose gradient magnitude exceeds \mu is recorded as P, the suspected casting-edge pixel point set, and the set of the remaining pixel points in the window as Q, the suspected non-casting-edge pixel point set. Pixel point i's casting-edge membership degree F_i is then obtained:

F_i = \begin{cases} \dfrac{\left| \mu_1 - \mu_2 \right|}{\sigma_1 + \sigma_2 + 1}, & n \ge T \\[4pt] 0, & n < T \end{cases}

wherein: n denotes the number of pixel points in set P; \mu_1 and \sigma_1 denote the mean and standard deviation of the gradient magnitudes of the pixel points in P; \mu_2 and \sigma_2 denote those of the pixel points in Q; and T is a constant threshold used to judge whether pixel point i is a suspected noise pixel point. The practitioner may select the constant threshold according to the actual situation; in this embodiment the empirical value T = 3 is used.

Further, noise pixel points are usually distributed discretely: if n < T, pixel point i is a suspected noise pixel point. The more uniform the gradient-magnitude distributions of sets P and Q, the smaller \sigma_1 + \sigma_2 and the greater the probability that pixel point i lies on a casting edge; and the larger the difference between the mean gradient magnitudes of P and Q, the larger \left| \mu_1 - \mu_2 \right|. In other words, the larger pixel point i's casting-edge membership degree F_i, the more likely pixel point i is a pixel point on a casting edge.
Secondly, the gradient direction characteristic value of the central pixel point in the local window is acquired; taking the pixel point lying in the gradient direction of the central pixel point as the initial pixel point, the gradient direction characteristic values are acquired in the clockwise direction. In this embodiment the gradient direction characteristic value is extracted using the LBP algorithm; the LBP algorithm and its feature extraction process are known techniques and are not described in detail here. First, the LBP characteristic value of the gradient direction angles of the pixel points in the 8-neighborhood of the pixel point is calculated, and this LBP characteristic value is taken as the local gradient direction change characteristic value, used to represent the distribution of the local gradient direction angles of the pixel point; the casting arc edge direction difference index of the pixel point is thereby obtained as:
wherein the terms denote, respectively: the number of pixel points in the suspected casting edge pixel point set within the local window of the central pixel point; the local gradient direction change characteristic value of each of the other pixel points in that set; the local gradient direction change characteristic value of the central pixel point itself; and the maximum and minimum values of the gradient direction angles of the pixel points in the set.
It should be noted that the more similar the distribution of the local gradient direction angles of a pixel point to those of the other suspected casting edge pixel points in its local window, i.e. the smaller the direction difference term, the greater the likelihood that the pixel point lies on a linear or circular-arc edge of the casting; and the greater the range of gradient direction angles in the set, i.e. the larger the range term, the less likely it is that the pixel point lies on a linear edge of the casting. Thus, the smaller the direction difference term and the larger the range term, the more likely the pixel point is on the circular-arc edge of the casting; that is, the smaller the casting arc edge direction difference index of a pixel point, the greater the likelihood that it lies on the casting positioning edge.
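A rough sketch of this direction feature follows. The LBP-style code over gradient direction angles is an assumed stand-in for the embodiment's LBP variant, and the way the code differences and the angle range are combined into the index is likewise illustrative:

```python
import numpy as np

def lbp_direction_code(ang, i, j):
    """LBP-style code over the gradient direction angles of the 8-neighbourhood
    of an interior pixel (i, j): each neighbour whose angle is >= the centre's
    angle contributes one bit, visited clockwise from the top-left."""
    center = ang[i, j]
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for b, (di, dj) in enumerate(offs):
        if ang[i + di, j + dj] >= center:
            code |= 1 << b
    return code

def direction_difference_index(ang, pts, i, j):
    """Illustrative arc-edge direction difference index for centre (i, j): mean
    absolute difference between the centre's code and the codes of the other
    suspected-edge pixels pts, scaled down by the angle range of the set."""
    c = lbp_direction_code(ang, i, j)
    diffs = [abs(lbp_direction_code(ang, p, q) - c) for (p, q) in pts if (p, q) != (i, j)]
    angles = [ang[p, q] for (p, q) in pts]
    rng = max(angles) - min(angles)
    return (np.mean(diffs) if diffs else 0.0) / (rng + 1.0)
```

With this formulation, a locally consistent direction field yields an index of 0 (strong positioning-edge candidate), while an inconsistent one yields a larger index, matching the monotonic behaviour described above.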
In order to extract the casting positioning edge pixel points more accurately and facilitate subsequent casting positioning detection, the casting positioning edge significant value of each pixel point is further calculated. Taking one pixel point as an example, its casting positioning edge significant value is calculated as follows:
wherein the three factors denote, respectively, the casting edge two-polarization coefficient, the casting edge membership degree, and the casting arc edge direction difference index of the pixel point, and the result is normalized by a normalization function. The larger the two-polarization coefficient and the membership degree, and the smaller the direction difference index, the more likely the pixel point is on the casting positioning edge, i.e. the greater its casting positioning edge significant value.
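A compact sketch of combining the three quantities (the product/ratio form and the min-max normalisation are assumptions standing in for the embodiment's unspecified normalization function):

```python
import numpy as np

def positioning_saliency(bipolar, membership, direction_index, eps=1e-6):
    """Per-pixel saliency: two-polarization coefficient x membership degree,
    divided by the arc-edge direction difference index, then min-max normalised
    so the most salient pixel maps to ~1 and the least salient to 0."""
    raw = np.asarray(bipolar) * np.asarray(membership) / (np.asarray(direction_index) + eps)
    return (raw - raw.min()) / (raw.max() - raw.min() + eps)
```

The pixel with a large coefficient and membership but a small direction difference index ends up with the highest saliency, as the text requires.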
The casting positioning edge significant value of each pixel point in the target casting grayscale image is calculated to obtain a casting positioning edge saliency map. Based on the characteristic that the significant values of pixel points on the casting positioning edge differ greatly from those of other pixel points, the saliency map is processed using the Otsu threshold algorithm: pixel points whose casting positioning edge significant value is higher than the threshold are regarded as pixel points on the casting positioning edge and are segmented out; the Otsu threshold algorithm is prior art and is not within the protection scope of this embodiment. The gradient amplitudes of the casting positioning edge pixel points are then arranged in ascending order to obtain a gradient sequence, as follows:
The maximum gradient amplitude in the gradient sequence has a high probability of being the gradient amplitude of a pixel point on the casting positioning edge. Therefore, the maximum gradient amplitude minus 1 is taken as the high threshold in the Canny edge detection algorithm, and half of the maximum gradient amplitude, rounded down, is taken as the low threshold. The grayscale image of the target casting is then processed using the Canny edge detection algorithm to obtain the edge image of the target casting.
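The dual-threshold rule above is simple enough to state directly in code (NumPy sketch; the final `cv2.Canny` call is shown only as a comment, assuming OpenCV is the edge detector used):

```python
import numpy as np

def adaptive_canny_thresholds(edge_mags):
    """High/low Canny thresholds from the gradient amplitudes of the segmented
    positioning-edge pixels: high = max - 1, low = floor(max / 2)."""
    seq = np.sort(np.asarray(edge_mags, dtype=float))  # ascending gradient sequence
    g_max = seq[-1]
    return np.floor(g_max / 2), g_max - 1  # (low, high)

# low, high = adaptive_canny_thresholds(mags_on_positioning_edge)
# edges = cv2.Canny(gray, threshold1=low, threshold2=high)   # assuming OpenCV
```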
Step S003: casting positioning.
The edge image of the target casting is processed using the Hough circle detection algorithm to obtain the centers and radii of a plurality of circles. The centers of the three circles with the largest radii are selected, and the barycentric coordinates of the triangle enclosed by the three center coordinates are calculated and taken as the positioning coordinates of the casting in the image, completing the visual intelligent positioning of the casting.
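A minimal sketch of this final positioning step (the circle list would come from a Hough transform such as OpenCV's `cv2.HoughCircles`; here it is passed in directly as hypothetical (x, y, r) rows):

```python
import numpy as np

def locate_casting(circles):
    """Pick the three detected circles with the largest radii and return the
    barycentre of the triangle formed by their centres as the casting's
    positioning coordinates."""
    c = np.asarray(circles, dtype=float)   # rows of (x, y, r)
    top3 = c[np.argsort(c[:, 2])[-3:]]     # the three largest radii
    return top3[:, :2].mean(axis=0)        # centroid of the three centres
```

For example, circles with centres (0, 0), (6, 0) and (0, 6) among the three largest radii give the positioning coordinate (2, 2).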
In summary, in the embodiment of the invention, based on the fact that the gray-level distribution of the pixel points on the two sides of a casting edge is approximately bipolar, a casting edge two-polarization coefficient is constructed, which solves the problem of locating the casting edge and improves the accuracy of casting positioning. Meanwhile, the pixel points on the casting positioning edge in the target casting grayscale image are segmented using the casting positioning edge significant values, which solves the problem of extracting the casting positioning edge and improves its accuracy.
In this embodiment, the adaptive selection of the dual-threshold parameters is completed based on the gradient amplitudes of the pixel points on the casting positioning edge, overcoming the missed detections and false edges caused by manually setting the high and low thresholds of the traditional Canny operator. The high and low thresholds best suited to the current edge detection are thus obtained, yielding a more accurate edge image of the target casting and providing better data support for subsequent casting positioning.
It should be noted that the sequence of the embodiments of the present invention is for description only and does not represent the relative merits of the embodiments; the foregoing description has been directed to specific embodiments of this specification. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results; in some embodiments, multitasking and parallel processing are also possible and may be advantageous.
In this specification, the embodiments are described in a progressive manner; identical and similar parts of the embodiments may be referred to one another, and each embodiment mainly describes its differences from the other embodiments.
The foregoing description of the preferred embodiments of the present invention is not intended to limit the invention; any modifications, equivalent substitutions, improvements and the like made within the principles of the present invention shall be included within the scope of the present invention.

Claims (10)

1. A casting polishing visual intelligent positioning method, characterized by comprising the following steps:
acquiring a gray level image of a target casting; binarizing the gray level image of the target casting to obtain a casting area;
acquiring two polarization coefficients of the casting edge of each pixel point of the casting area according to the gray value of each pixel point of the casting area; acquiring a suspected casting edge pixel point set according to the gradient amplitude of each pixel point in the casting area; acquiring the casting edge membership degree of each pixel point according to the gradient amplitude value of the suspected casting edge pixel point set; acquiring a casting arc edge direction difference index according to the gradient direction of each pixel point in the casting area;
acquiring a casting positioning edge significant value of the pixel point according to two polarization coefficients of the casting edge of the pixel point, membership of the casting edge and a casting arc edge direction difference index; acquiring casting positioning edge pixel points according to the casting positioning edge significant value; completing self-adaptive selection of the Canny edge detection double threshold according to the gradient amplitude of the casting positioning edge pixel points; acquiring an edge image of the target casting by using a Canny edge detection algorithm in combination with the edge detection double threshold;
and completing intelligent positioning of the castings according to the edge images of the target castings.
2. The method for intelligently positioning the polishing vision of the casting according to claim 1, wherein the step of obtaining the two polarization coefficients of the casting edge of each pixel point of the casting area according to the gray value of each pixel point of the casting area comprises the following steps:
and obtaining an intra-class uniformity index and an inter-class difference index of each pixel point, and taking the product of the intra-class uniformity index and the inter-class difference degree as two polarization coefficients of the casting edge of each pixel point.
3. The method for intelligently positioning the polishing vision of the casting according to claim 2, wherein the step of obtaining the intra-class uniformity index and the inter-class difference degree of each pixel point comprises the following steps:
the intra-class uniformity index is specifically:
obtaining products of standard deviation and extreme difference of gray values of pixel points in each clustering category, and taking the reciprocal of the sum of the products of all the clustering categories as an intra-category uniformity index of each pixel point;
the degree of difference between classes is expressed as follows:
in the method, in the process of the invention,representing pixel dot +.>Degree of inter-class difference in local region, +.>、/>Respectively represent cluster category->、/>The number of middle pixels, < >>、/>Respectively represent cluster category->、/>Pixel points in->Respectively represent pixel points +>、/>Corresponding gray values.
4. The intelligent positioning method for polishing visual of castings according to claim 1, wherein the acquiring the suspected casting edge pixel point set according to the gradient amplitude of each pixel point in the casting area comprises the following specific steps:
setting a local window by taking each pixel point in the gray level image of the target casting as the center, calculating the average value of the gradient amplitude values of all the pixel points in the local window, and marking the pixel point set with the gradient amplitude value larger than the average value as a suspected casting edge pixel point set.
5. The intelligent positioning method for polishing and visual inspection of castings according to claim 1, wherein the step of obtaining the membership degree of the edges of castings of each pixel point according to the gradient amplitude of the suspected casting edge pixel point set is specifically as follows:
in the method, in the process of the invention,representing the center pixel +.>Suspicious casting edge pixel point set in local window>The number of middle pixels, < >>、/>Respectively representing the pixel point set of the suspected casting edge +.>Mean value, standard deviation, ++of gradient amplitude of middle pixel point>、/>Respectively representing suspected non-casting edge pixel point sets +.>Mean value, standard deviation, ++of gradient amplitude of middle pixel point>Is a constant threshold.
6. The intelligent positioning method for polishing visual of castings according to claim 1, wherein the method is characterized in that the difference index of the arc-shaped edge direction of the castings is obtained according to the gradient direction of each pixel point in the casting area, and the expression is as follows:
in the method, in the process of the invention,representing the center pixel +.>Is a casting arc edge direction difference index; />Is a set of suspected casting edge pixel points; />Representing the center pixel +.>The set in the local window where>The number of the middle pixel points; />Representation set->Center pixel point is divided>The first->Local gradient direction change characteristic values of the pixel points; />Representing the center pixel +.>Is a local gradient direction change characteristic value, +.>、/>Respectively represent the set->Maximum and minimum values of gradient direction angles of the middle pixel point.
7. The method for intelligently positioning the polishing vision of the casting according to claim 1, wherein the method is characterized in that the casting positioning edge significant value of the pixel point is obtained according to the casting edge two polarization coefficients, the casting edge membership degree and the casting arc edge direction difference index of the pixel point, and specifically comprises the following steps:
and calculating the product of the two polarization coefficients of the casting edge of the pixel point and the gradient distribution degree of the edge, and taking the ratio of the product to the difference index of the direction of the arc edge of the casting as a casting positioning edge significant value.
8. The intelligent positioning method for grinding vision of castings according to claim 1, wherein the self-adaptive selection of the Canny edge detection double threshold is completed according to the gradient amplitude of the pixel points of the positioning edges of the castings, and the specific method is as follows:
arranging the gradient amplitudes of the casting positioning edge pixel points in ascending order to obtain a gradient sequence; taking the maximum gradient amplitude in the gradient sequence minus 1 as the high threshold, and taking half of the maximum gradient amplitude, rounded down, as the low threshold.
9. The intelligent positioning method for polishing and visual inspection of castings according to claim 1, wherein the intelligent positioning of castings according to the edge images of the target castings is accomplished by the following specific method:
and combining a Hough circle detection algorithm to obtain the circle center and the radius of each circle in the edge image of the target casting, selecting three circle centers with the largest radius, calculating the barycenter coordinates of a triangle surrounded by the coordinates of the three circle centers, and taking the barycenter coordinates as the positioning coordinates of the casting in the image.
10. A casting sanding visual intelligent positioning system including a memory, a processor, and a computer program stored in the memory and running on the processor, wherein the processor, when executing the computer program, performs the steps of the method of any one of claims 1-9.
CN202311253246.5A 2023-09-27 2023-09-27 Casting polishing vision intelligent positioning method and system Active CN116993966B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311253246.5A CN116993966B (en) 2023-09-27 2023-09-27 Casting polishing vision intelligent positioning method and system

Publications (2)

Publication Number Publication Date
CN116993966A true CN116993966A (en) 2023-11-03
CN116993966B CN116993966B (en) 2023-12-12

Family

ID=88534160


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117237384A (en) * 2023-11-16 2023-12-15 潍坊科技学院 Visual detection method and system for intelligent agricultural planted crops

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114387438A (en) * 2022-03-23 2022-04-22 武汉锦辉压铸有限公司 Machine vision-based die casting machine parameter regulation and control method
CN115272376A (en) * 2022-09-27 2022-11-01 山东鑫科来信息技术有限公司 Floating polishing head control method based on machine vision
CN116433885A (en) * 2023-04-13 2023-07-14 湖南大学 Multi-opening pin positioning method based on sub-pixel edge
WO2023134789A1 (en) * 2022-10-25 2023-07-20 苏州德斯米尔智能科技有限公司 Automatic inspection method for belt-type conveying device
CN116758061A (en) * 2023-08-11 2023-09-15 山东优奭趸泵业科技有限公司 Casting surface defect detection method based on computer vision


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WEN Yangdong; GU Qianyun; CHEN Xuefeng: "LED chip edge detection based on improved Canny operator", Journal of Computer Applications, No. 09


Also Published As

Publication number Publication date
CN116993966B (en) 2023-12-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant