CN117670875B - Visual detection method and system in canning tail sealing process - Google Patents

Visual detection method and system in canning tail sealing process

Info

Publication number: CN117670875B (earlier publication CN117670875A)
Application number: CN202410129217.6A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 林镇杰, 林程光
Original and current assignee: Shenzhen Hengxing Packaging Machinery Co ltd
Legal status: Active (application granted)

Classifications

    • Y02P90/30 — Computing systems specially adapted for manufacturing (under Y02P: climate change mitigation technologies in the production or processing of goods)

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image processing, and provides a visual detection method and system in a canning tail sealing process. The method comprises the following steps: acquiring a tank opening image of a metal tank and the number of tank opening turns; obtaining a USM image together with its gray histogram and gradient histogram; constructing a first screening function and obtaining a plurality of target gray values; performing Hough circle detection on all pixel points of each class of target gray value in the USM image to obtain a plurality of Hough points; constructing a second screening function and acquiring the main Hough point and contour circle of each class of target gray value; obtaining the interference degree of each contour circle; acquiring an adaptive filtering strength; smoothing to obtain an adjusted USM image, and obtaining a tank opening sharpening image from it; and carrying out tank opening positioning and tank opening correction in the tail sealing process according to the tank opening sharpening image. The invention aims to solve the problem that, in tank opening detection by computer vision, sealing threads cause inaccurate tank opening positioning and thereby degrade the tail sealing effect.

Description

Visual detection method and system in canning tail sealing process
Technical Field
The invention relates to the technical field of image processing, in particular to a visual detection method and a visual detection system in a canning tail sealing process.
Background
Canning and end-sealing is a common process in the food and beverage industry, intended to ensure the sealing of food cans or containers and to maintain the freshness, quality and safety of the product. Metal-cover and plastic-cover tail sealing presses the sealing material tightly onto the tank opening by heat sealing or mechanical pressure. By detecting the edge of the tank opening and locating its position through machine vision, automated high-speed production can be realized; however, when the position of the tank opening or container deviates from the position expected by the tail sealing equipment, the seal becomes inaccurate, affecting the sealing performance and appearance of the product.
Can mouth position deviation is mainly caused by mismatched production line speed, conveyor belt vibration, improper operation and similar factors. During image acquisition, the can mouth image is also affected by vibration, which degrades edge definition and contrast; in particular, can mouths with sealing threads may yield multiple detected edge lines, and once image quality deteriorates these edge lines may fuse or deform, greatly interfering with subsequent can mouth positioning and tail sealing.
Disclosure of Invention
The invention provides a visual detection method and system in a canning tail sealing process, aiming to solve the problem that, in existing computer-vision tank opening detection, sealing threads cause inaccurate tank opening positioning and thereby affect the tail sealing effect. The adopted technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a visual detection method in a canning tail sealing process, the method including the steps of:
acquiring a tank opening image of a metal tank, and acquiring the number of tank opening turns;
obtaining a USM image through Gaussian blur on the tank opening image, and obtaining a gray level histogram and a gradient histogram of the USM image; according to the distribution of each gray value in the gray histogram, the distribution of each gradient amplitude in the gradient histogram and the number of can opening circles, a first screening function is constructed, and a plurality of target gray values are obtained;
detecting all pixel points of each class of target gray value in the USM image through Hough circle detection to obtain a plurality of Hough points; constructing a second screening function according to the voting values of the Hough points of the different target gray values and the circles they correspond to in the USM image, and acquiring the main Hough point and contour circle of each target gray value; acquiring the interference degree of each contour circle according to the contour circle and the corresponding target gray value; acquiring the self-adaptive filtering strength according to the interference degree of the contour circles and the corresponding target gray values;
smoothing the USM image according to the self-adaptive filtering strength to obtain an adjusted USM image, and obtaining a tank opening sharpening image according to the adjusted USM image and the tank opening image; and positioning the tank opening according to the tank opening sharpening image.
Preferably, the USM image is obtained by Gaussian blur on the tank opening image, and the gray histogram and the gradient histogram of the USM image are obtained, comprising the following specific steps:
carrying out Gaussian blur on the tank opening image to obtain a blurred tank opening image, and obtaining a USM image through difference between the tank opening image and the blurred tank opening image; and acquiring a gray level histogram of the USM image, acquiring the gradient of each pixel point in the USM image through a Sobel operator, obtaining the gradient amplitude of each pixel point, and obtaining the gradient histogram of the USM image according to the gradient amplitude of the pixel point.
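The USM and histogram step can be sketched as follows. This is an illustrative implementation, not the patent's exact pipeline: the Gaussian sigma, the histogram bin counts, and the use of `scipy.ndimage` in place of whatever imaging library the system uses are all assumptions.

```python
import numpy as np
from scipy import ndimage

def usm_and_histograms(mouth_img, sigma=1.5):
    """Blur the can-mouth image, subtract to keep the high-frequency
    detail (the USM image), then build the gray-level histogram and the
    Sobel gradient-magnitude histogram of the USM image."""
    img = mouth_img.astype(np.float64)
    blurred = ndimage.gaussian_filter(img, sigma=sigma)   # low-frequency version
    usm = np.clip(img - blurred, 0, 255)                  # USM (difference) image

    # Gray-level histogram of the USM image
    gray_hist, _ = np.histogram(usm, bins=256, range=(0, 256))

    # Sobel gradients -> per-pixel gradient amplitude -> gradient histogram
    gx = ndimage.sobel(usm, axis=1)                       # horizontal gradient
    gy = ndimage.sobel(usm, axis=0)                       # vertical gradient
    magnitude = np.hypot(gx, gy)
    grad_hist, _ = np.histogram(magnitude, bins=256,
                                range=(0, magnitude.max() + 1e-6))
    return usm, gray_hist, grad_hist
```

Both histograms cover every pixel of the USM image, which is what the screening step below relies on when it compares frequencies.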
Preferably, the method for constructing the first screening function and obtaining a plurality of target gray values includes the following specific steps:
for any class of gray value, acquiring the frequency of that class of gray value in the gray histogram; combining that gray value with every class of gray value (including itself) to obtain gray value combinations, each corresponding to one class of gradient amplitude, so that one gray value corresponds to a plurality of gradient amplitudes;
acquiring, among all gradient amplitudes corresponding to the class of gray value, the gradient amplitude with the largest frequency in the gradient histogram, and taking it as the characteristic gradient of the class of gray value; the ratio of the frequency of the class of gray value in the gray histogram to the frequency of the characteristic gradient in the gradient histogram is recorded as the screening characteristic value of the class of gray value;
obtaining the screening characteristic value of each class of gray value; randomly selecting $n$ class gray values each time from all class gray values in the gray histogram to form a gray value set, where $n$ is the number of can opening turns; obtaining a plurality of gray value sets, and constructing a first screening function based on the gray values in each gray value set and their screening characteristic values;
and obtaining a corresponding output value for each gray value set through the first screening function, and recording the gray value set corresponding to the minimum output value of the first screening function as the target gray value set, wherein each class of gray value in the target gray value set is a target gray value.
Preferably, the specific formula of the first screening function is:

$$F_1=\frac{1}{n}\sum_{i=1}^{n}\left(g_i-\bar{g}\right)^2$$

wherein $F_1$ is the output value of the first screening function, $n$ is the number of can mouth turns, $g_i$ represents the screening characteristic value of the $i$-th class gray value in the gray value set, and $\bar{g}$ represents the mean of the screening characteristic values of all class gray values in the gray value set.
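A minimal sketch of the subset search this screening step implies. The `gray_hist` and `char_grad_freq` lookup tables and the pre-filtered `candidates` list are assumptions (an exhaustive search over all 256 gray levels would be combinatorially expensive; a real system would prune candidates first):

```python
import numpy as np
from itertools import combinations

def select_target_grays(gray_hist, char_grad_freq, n_turns, candidates):
    """Among all n-element subsets of candidate gray values, keep the one
    whose screening characteristic values (gray-value frequency divided
    by the frequency of the characteristic gradient) have minimum
    variance -- the first screening function."""
    def feature(v):
        return gray_hist[v] / max(char_grad_freq[v], 1)

    best_set, best_val = None, float("inf")
    for subset in combinations(candidates, n_turns):
        feats = np.array([feature(v) for v in subset], dtype=float)
        f1 = float(np.mean((feats - feats.mean()) ** 2))  # variance = F1
        if f1 < best_val:
            best_set, best_val = subset, f1
    return best_set, best_val
```

A low variance means the selected classes have nearly equal characteristic values, matching the patent's observation that the concentric circles of the can mouth share similar gray values and gradient amplitudes.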
Preferably, detecting all pixel points of each class of target gray value in the USM image through Hough circle detection to obtain a plurality of Hough points comprises the following specific method:
for any type of target gray values, all pixel points of the type of target gray values on the USM image are obtained and marked as distributed pixel points of the type of target gray values, hough circle detection is carried out on all distributed pixel points, a plurality of Hough points in Hough parameter space are obtained, and the Hough points are marked as Hough points of the type of target gray values.
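The voting idea behind this step can be illustrated with an explicit accumulator. A production system would use a library routine such as `cv2.HoughCircles`; the grid resolution, radius list, and angular sampling below are illustrative assumptions:

```python
import numpy as np
from collections import Counter

def hough_points_for_gray(usm, target_gray, radii):
    """Hough-circle voting for one class of target gray value: every
    distributed pixel (a pixel whose USM gray value equals the target)
    casts one vote per radius for each candidate centre cell. Each
    accumulator cell (cx, cy, r) is a Hough point; its count is the
    voting value."""
    ys, xs = np.nonzero(usm == target_gray)          # distributed pixels
    votes = Counter()
    thetas = np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False)
    for x, y in zip(xs, ys):
        for r in radii:
            cx = np.rint(x - r * np.cos(thetas)).astype(int)
            cy = np.rint(y - r * np.sin(thetas)).astype(int)
            # at most one vote per (centre, radius) cell from this pixel
            votes.update(set(zip(cx.tolist(), cy.tolist(), [r] * len(thetas))))
    return votes
```

The cell with the highest count corresponds to the circle best supported by the distributed pixels, which is what the second screening function then weighs against the circle's circumference.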
Preferably, the method for constructing the second screening function and obtaining the main Hough point and contour circle of each class of target gray value includes the following specific steps:
obtaining the voting value of each Hough point; acquiring the circumference of the circle corresponding to each Hough point according to its radius; selecting one Hough point from the plurality of Hough points of each target gray value to form a Hough point set, and obtaining a plurality of Hough point sets; constructing a second screening function based on the Hough point sets;
obtaining an output value of a second screening function obtained based on each Hough point set, taking the Hough point set corresponding to the smallest output value as a main Hough point set, taking each Hough point in the main Hough point set as a main Hough point corresponding to a target gray value, restoring the main Hough point in a USM image, obtaining a circle corresponding to each main Hough point in the USM image, marking the circle as a contour circle of each main Hough point, and taking the contour circle as a contour circle of each target gray value.
Preferably, the specific formula of the second screening function is:

$$F_2^{(k)}=\sum_{j=1}^{m}\left(1-\frac{T_j^{(k)}}{C_j^{(k)}}\right)+\frac{1}{m}\sum_{j=1}^{m}\left(C_j^{(k)}-\bar{C}^{(k)}\right)^2$$

wherein $F_2^{(k)}$ represents the output value of the second screening function obtained based on the $k$-th Hough point set, $m$ is the number of target gray values, $T_j^{(k)}$ represents the voting value of the Hough point corresponding to the $j$-th class target gray value in the $k$-th Hough point set, $C_j^{(k)}$ represents the circumference of the circle corresponding to that Hough point, and $\bar{C}^{(k)}$ represents the mean circumference of the circles corresponding to all Hough points in the $k$-th Hough point set.
Preferably, the interference degree of each contour circle is obtained by the following specific method:
for any class of target gray value, obtaining the pixel points on its contour circle whose gray values are not equal to that target gray value, and recording them as the non-target gray value pixel points on the contour circle of that class of target gray value; acquiring the non-target gray value pixel points on the contour circle of each class of target gray value; the interference degree $D_i$ of the contour circle of the $i$-th class target gray value is calculated as:

$$D_i=\frac{N_i}{C_i}\cdot\frac{1}{N_i}\sum_{j=1}^{N_i}\left|g_{i,j}-G_i\right|$$

wherein $C_i$ represents the circumference of the contour circle of the $i$-th class target gray value, $N_i$ represents the number of non-target gray value pixel points on that contour circle, $g_{i,j}$ represents the gray value of the $j$-th non-target gray value pixel point on the contour circle of the $i$-th class target gray value, $G_i$ represents the gray value of the $i$-th class target gray value, and $|\cdot|$ represents the absolute value.
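A sketch of the interference-degree computation described above. The exact way the pixel-count share and the gray deviations combine is an interpretation, since the original formula did not survive extraction; sampling the circle's gray values into a flat list is also an assumption:

```python
import numpy as np

def interference_degree(circle_grays, target_gray, circumference):
    """Interference degree of one contour circle: the share of non-target
    pixels on the circle, weighted by the mean absolute gray deviation
    of those pixels from the target gray value."""
    grays = np.asarray(circle_grays, dtype=float)
    non_target = grays[grays != target_gray]
    n = len(non_target)
    if n == 0:
        return 0.0                         # undisturbed contour circle
    mean_dev = float(np.mean(np.abs(non_target - target_gray)))
    return (n / circumference) * mean_dev
```

A circle whose sampled pixels all carry the target gray value scores 0; heavily broken circles with large gray deviations score high, flagging them for stronger smoothing.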
Preferably, the adaptive filtering strength is obtained by the following specific method:
taking any two target gray values as a target gray value combination; obtaining the absolute difference of the two target gray values in the combination and recording it as the gray difference of the combination; obtaining the absolute difference of the radii of the contour circles of the two target gray values and recording it as the radius difference of the combination; obtaining all target gray value combinations together with the gray difference and radius difference of each; the adaptive filtering strength $\sigma_f$ is calculated as:

$$\sigma_f=\frac{1}{M}\sum_{p=1}^{M}\frac{\Delta r_p}{\Delta g_p}\cdot e^{-S}$$

wherein $M$ represents the number of target gray value combinations, $\Delta g_p$ represents the gray difference of the $p$-th target gray value combination, $\Delta r_p$ represents its radius difference, $S$ represents the standard deviation of the interference degrees of the contour circles of all target gray values, and $e^{x}$ denotes the exponential function with natural base.
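A sketch of the adaptive filtering strength. How the pairwise radius-to-gray ratios and the interference standard deviation combine is an interpretation (the original formula did not survive extraction); the code only assumes the quantities the text defines:

```python
import numpy as np
from itertools import combinations

def adaptive_filter_strength(target_grays, contour_radii, interference):
    """Average, over all pairs of target gray values, the ratio of
    contour-radius difference to gray difference, damped by exp(-S),
    where S is the standard deviation of all interference degrees."""
    pairs = list(combinations(range(len(target_grays)), 2))
    ratios = [abs(contour_radii[i] - contour_radii[j])
              / abs(target_grays[i] - target_grays[j])
              for i, j in pairs]                      # radius diff / gray diff
    s = float(np.std(np.asarray(interference, dtype=float)))
    return float(np.mean(ratios) * np.exp(-s))
```

Tightly spaced contour circles (small radius differences) yield a small strength, so smoothing stays gentle enough not to glue adjacent circles together; the damping term lowers the strength further when the interference degrees vary widely.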
In a second aspect, another embodiment of the present invention provides a visual detection system in a canning tail sealing process, the system including a memory, a processor, and a computer program stored in the memory and running on the processor, where the processor implements the steps of the method described above when executing the computer program.
The beneficial effects of the invention are as follows: according to the invention, the USM sharpening is carried out on the tank opening image, and the self-adaptive filtering denoising is carried out according to the circular distribution in the image in the USM image acquisition process, so that the accurate adjusted USM image is acquired, the tank opening image is sharpened, the accuracy of acquiring the tank opening circle in the tank opening image is further improved, the tank opening positioning is more accurate, and the accuracy of canning and tail sealing is improved. The method comprises the steps of obtaining a USM image from a tank opening image, obtaining a gray level histogram and a gradient histogram from the USM image, and based on the frequency of gray level and gradient amplitude distribution, combining the characteristics of similar gray level values between concentric circles of the tank opening and similar corresponding gradient amplitude, so that the obtaining of a target gray level value is realized, wherein the possibility that the target gray level value is the gray level value of a pixel point on the concentric circle of the tank opening in the USM image is high; acquiring contour circles of each target gray value to represent concentric circles of each tank opening in the USM image by carrying out Hough circle detection on pixel points corresponding to the target gray value; the interference degree of the contour circle is reflected based on the number of non-target gray value pixel points and gray values on the contour circle, so that a basis is provided for subsequent Gaussian filtering, and an accurate contour circle is obtained to further sharpen a contour line of a subsequent tank opening.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the description below are only some embodiments of the invention, and that other drawings can be obtained according to these drawings without inventive faculty for a person skilled in the art.
Fig. 1 is a schematic flow chart of a visual detection method in a canning tail sealing process according to an embodiment of the invention;
fig. 2 is a schematic diagram of a tank mouth image acquisition and a tank mouth concentric circle.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, a flowchart of a visual inspection method in a can end sealing process according to an embodiment of the invention is shown, the method includes the following steps:
and S001, acquiring a tank opening image of the metal tank, and acquiring the number of tank opening turns.
The purpose of this embodiment is to position and correct the tank mouth during the tail sealing process through computer vision, so the image of the tank mouth needs to be acquired first. Meanwhile, a metal can is generally provided with a sealing thread, and covering the sealing thread of the can mouth with the packaging material during tail sealing greatly improves the sealing performance of the can, so the can mouth presents a multi-layer concentric circular profile in a top-down view, as shown in fig. 2.
In this embodiment, taking a metal can as an example, during the tail sealing process the metal cans are sealed in sequence on a conveyor belt; after a metal can is fixed by a fixture, a CCD industrial camera mounted on the tail sealing device photographs the metal can mouth to obtain a can mouth image, the image captured by the CCD industrial camera being a gray-scale image; the number of turns of the metal can mouth is obtained from prior knowledge and is denoted by $n$.
So far, the tank opening image of the metal tank is acquired, and the number of the tank opening turns is acquired.
When the tank opening is positioned, the contour line of the tank opening area needs to be extracted first. The most common edge extraction algorithm is the Canny operator, but several edge contour lines exist on the tank opening, and they are concentric circles. When problems such as conveyor belt vibration and uneven production speed exist, the edge information of the tank opening image becomes blurred to some degree, so the image needs sharpening. However, not all detected edge information on the tank opening image needs to be sharpened, nor should all edge information receive the same sharpening scale: when the tank opening position is located, the centroid of the tank opening must be determined from the tank opening contour, and if the multiple concentric contours are sharpened with a uniform scale, edge adhesion may occur during edge detection, so that an accurate contour and centroid cannot be obtained. For the purpose of tank opening positioning, the multi-layer contour lines of the sealing thread must therefore be sharpened adaptively, ensuring that no abnormal adhesion or deformation appears between the edge lines while the edge information is enhanced.
It should be further noted that USM is a common image processing technique for enhancing edges and details of an image, that is, sharpening the image; firstly, carrying out Gaussian blur on a tank opening image to generate a blurred tank opening image, wherein the blurred tank opening image reduces high-frequency details in the image by carrying out smoothing treatment on an original image, and only most of low-frequency information is reserved; then subtracting the tank opening image from the fuzzy tank opening image to obtain a USM image; because the high-frequency details in the USM image obtained by subtracting a large amount of low-frequency information from the tank opening image are more prominent, the sharpening degree of the image can be controlled by adjusting the edge intensity in the USM image; however, the USM image is used as a differential image, which contains a large amount of noise information, so that it is difficult to directly segment all edge lines to be enhanced in the USM image, and further, the edge of the can opening cannot be adaptively sharpened, so that adaptive filtering smoothing is required to be performed on the USM image before Canny edge detection, so as to ensure that the edge lines to be enhanced can be accurately positioned when the USM sharpening intensity is adjusted later.
Step S002, obtaining a USM image through Gaussian blur on the tank opening image, and obtaining a gray level histogram and a gradient histogram of the USM image; and constructing a first screening function according to the distribution of each gray value in the gray histogram, the distribution of each gradient amplitude in the gradient histogram and the number of can mouth circles, and obtaining a plurality of target gray values.
It should be noted that, after the USM image is obtained, by obtaining the gray histogram and the gradient histogram, if there are several types of gray values, the frequencies distributed on the gray histogram are similar, and gray value combinations are formed by the gray values with the same or similar frequencies, and the frequencies of the gradient amplitude distributions corresponding to the gray value combinations on the gradient histogram are also similar, then these types of gray values may be the gray values of the tank opening edge line, that is, the concentric circular outline on the tank opening, that is, the target gray value.
Specifically, performing gaussian blur on the tank opening image to obtain a blurred tank opening image, and obtaining a USM image through difference between the tank opening image and the blurred tank opening image, wherein the specific method for obtaining the USM image is a known technology, and the embodiment is not repeated; and acquiring a gray level histogram of the USM image, acquiring the gradient of each pixel point in the USM image through a Sobel operator, obtaining the gradient amplitude of each pixel point, and obtaining the gradient histogram of the USM image according to the gradient amplitude of the pixel point.
Further, for any class of gray value, the frequency of that class of gray value in the gray histogram is obtained; that gray value is combined with every class of gray value (including itself) to obtain gray value combinations, each corresponding to one class of gradient amplitude, so that one gray value corresponds to several gradient amplitudes. Among all gradient amplitudes corresponding to the class of gray value, the one with the largest frequency in the gradient histogram is obtained and used as the characteristic gradient of that class of gray value; the ratio of the frequency of the class of gray value in the gray histogram to the frequency of the characteristic gradient in the gradient histogram is recorded as the screening characteristic value of that class of gray value. Note that, when acquiring the gradient amplitudes corresponding to a gray value, the combination of a gray value with itself is included, and its corresponding gradient amplitude is 0.
Further, from all class gray values in the gray histogram, $n$ class gray values are randomly selected each time to form a gray value set, where $n$ is the number of can opening turns; a plurality of gray value sets is obtained, and the first screening function is constructed based on the gray values in each set and their screening characteristic values. The specific formula of the first screening function is:

$$F_1=\frac{1}{n}\sum_{i=1}^{n}\left(g_i-\bar{g}\right)^2$$

wherein $F_1$ is the output value of the first screening function, $n$ is the number of can mouth turns, $g_i$ represents the screening characteristic value of the $i$-th class gray value in the gray value set, and $\bar{g}$ represents the mean of the screening characteristic values of all class gray values in the set. A corresponding output value is obtained for each gray value set through the first screening function; the gray value set with the minimum output value is taken as the target gray value set, and each class of gray value in the target gray value set is recorded as a target gray value.
It should be noted that the screening characteristic value is built from the frequency of a gray value and the frequency of its characteristic gradient; when both frequencies of the several classes of gray values in a set are close, their screening characteristic values are also close, and the variance computed by the first screening function is small. This matches the characteristic that the concentric circles of the can mouth have similar gray values and similar gradient amplitudes, so the gray value set with the smallest output value is taken as the target gray value set.
The gray histogram and gradient histogram are obtained from the USM image; based on the frequencies of the gray values and of the gradient amplitude distribution, combined with the characteristics that the concentric circles of the tank opening have similar gray values and similar gradient amplitudes, target gray values are obtained that are highly likely to be the gray values of the pixel points on the concentric circles of the tank opening in the USM image.
Step S003, detecting all pixel points of each class of target gray value in the USM image through Hough circle detection to obtain a plurality of Hough points; constructing a second screening function according to the voting values of the Hough points of the different target gray values and the circles they correspond to in the USM image, and acquiring the main Hough point and contour circle of each target gray value; acquiring the interference degree of each contour circle according to the contour circle and the corresponding target gray value; and acquiring the self-adaptive filtering strength according to the interference degree of the contour circles and the corresponding target gray values.
After the target gray values are obtained, Hough circle detection is carried out on the pixel points corresponding to each target gray value in the USM image: all pixel points corresponding to each target gray value are mapped into the Hough parameter space to form a plurality of Hough points, each representing a circle. The characteristics of a Hough point comprise its coordinates, radius and voting number; the voting number reflects how many votes the circle with that Hough point as centre and that radius received, i.e. the number of pixel points on the circle in the image. A second screening function is constructed based on the voting numbers of the Hough points of each target gray value and the circumferences of their corresponding circles, and by minimizing the second screening function the main Hough point of each target gray value is obtained, which correspondingly yields a contour circle in the USM image; the contour circle is a possible concentric circle of the tank opening. Based on the comparison of the numbers of target gray value pixel points and non-target gray value pixel points on the contour circle, the interference degree of the contour circle is quantified, so that the filtering strength of the Gaussian filter is corrected to obtain the self-adaptive filtering strength, which removes noise while ensuring the independence of each contour circle and avoiding mutual influence or adhesion between the contour circles.
Specifically, for any class of target gray value, all pixel points of that class on the USM image are obtained and recorded as the distributed pixel points of that class of target gray value; Hough circle detection is carried out on all distributed pixel points to obtain a plurality of Hough points in the Hough parameter space, recorded as the Hough points of that class of target gray value, and the voting value of each Hough point is obtained; the circumference of the circle corresponding to each Hough point is acquired from its radius; the distributed pixel points, Hough points, voting values and corresponding circumferences of each class of target gray value are obtained according to this method. One Hough point is then selected from the plurality of Hough points of each target gray value to form a Hough point set, so that the number of Hough points in a set equals the number of target gray values and each Hough point corresponds to one target gray value; a plurality of Hough point sets is obtained. A second screening function for acquiring the main Hough point of each target gray value is constructed based on the Hough point sets, and its specific formula is:
$$F_2^{(k)}=\sum_{j=1}^{m}\left(1-\frac{T_j^{(k)}}{C_j^{(k)}}\right)+\frac{1}{m}\sum_{j=1}^{m}\left(C_j^{(k)}-\bar{C}^{(k)}\right)^2$$

wherein $F_2^{(k)}$ represents the output value of the second screening function obtained based on the $k$-th Hough point set, $m$ is the number of target gray values, namely the number of can opening turns, $T_j^{(k)}$ represents the voting value of the Hough point corresponding to the $j$-th class target gray value in the $k$-th Hough point set, $C_j^{(k)}$ represents the circumference of the circle corresponding to that Hough point, and $\bar{C}^{(k)}$ represents the mean circumference of the circles corresponding to all Hough points in the $k$-th Hough point set. According to this method, the output value of the second screening function is obtained for each Hough point set; the Hough point set with the smallest output value is taken as the main Hough point set, each Hough point in it is taken as the main Hough point of the corresponding target gray value, the main Hough points are restored in the USM image, and the circle corresponding to each main Hough point in the USM image is recorded as its contour circle, namely the contour circle of each target gray value.
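A sketch of the second screening step described above. The relative weighting between the completeness term (voting value versus circumference) and the circumference-spread term is an assumption; the original only states that both criteria should be small:

```python
import numpy as np
from itertools import product

def pick_main_hough_points(candidates):
    """`candidates[i]` lists the (voting value, circumference) Hough
    points of the i-th class of target gray value. One Hough point is
    drawn per class; a set scores low when every circle is well
    supported (voting value close to circumference) and the
    circumferences within the set are close to one another."""
    best_set, best_val = None, float("inf")
    for combo in product(*candidates):
        votes = np.array([t for t, _ in combo], dtype=float)
        circs = np.array([c for _, c in combo], dtype=float)
        completeness = float(np.sum(1.0 - votes / circs))  # 0 when fully voted
        spread = float(np.mean((circs - circs.mean()) ** 2))
        f2 = completeness + spread
        if f2 < best_val:
            best_set, best_val = combo, f2
    return best_set, best_val
```

The winning set contains one main Hough point per target gray value; restoring each one in the USM image gives the contour circles used in the interference-degree step.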
The closer the voting value of a Hough point is to the circumference of its corresponding circle, the more pixel points on that circle have the target gray value, and the more complete the Hough detection result is; since the circumference is necessarily greater than or equal to the voting value, the closer their ratio is to 1, the more complete the detection result. Meanwhile, the closer the circumferences of the circles corresponding to the Hough points of different target gray values are, the smaller their squared difference is and the closer the radii are, which better matches the distribution of the concentric circles of the can opening.
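The selection of the main Hough point set can be sketched in code. The patent's actual formula is only available as an image, so the function below is a hedged reconstruction from the rationale above: it averages the circumference-to-vote ratios (completeness, ideally close to 1) and adds the variance of the circumferences (concentricity), and the candidate set with the smallest output wins. The function names and the exact way the two terms are combined are assumptions, not taken from the patent.

```python
def second_screening_output(votes, circumferences):
    # Completeness: circumference / voting value is >= 1 in theory and
    # approaches 1 when almost every pixel on the circle voted for it.
    n = len(votes)
    mean_c = sum(circumferences) / n
    completeness = sum(c / t for c, t in zip(circumferences, votes)) / n
    # Concentricity: a small squared deviation of the circumferences means
    # close radii, matching the concentric rings of a can opening.
    concentricity = sum((c - mean_c) ** 2 for c in circumferences) / n
    return completeness + concentricity


def pick_main_set(candidate_sets):
    # candidate_sets: list of (votes, circumferences) pairs, one pair per
    # Hough point set; the set with the smallest output is the main set.
    outputs = [second_screening_output(v, c) for v, c in candidate_sets]
    return min(range(len(outputs)), key=outputs.__getitem__)
```

For example, a set whose circles are nearly fully voted and tightly grouped in circumference beats one with weak votes and widely spread radii.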
It should be further noted that, in the presence of noise, the edge line of each contour circle generally shows discontinuities, and pixel points with both target and non-target gray values exist on it; therefore, the pixel points with non-target gray values on each contour circle are obtained, and the degree of interference suffered by each contour circle is calculated.
Specifically, for any class of target gray value, the pixel points on the contour circle of that class whose gray values are not equal to that target gray value are obtained and marked as the non-target gray value pixel points on the contour circle of that class of target gray value. After the non-target gray value pixel points on the contour circle of each class of target gray value are acquired, the degree of interference R_i of the contour circle of the i-th class of target gray value is calculated as follows:
wherein L_i denotes the circumference of the contour circle of the i-th class of target gray value; m_i denotes the number of non-target gray value pixel points on that contour circle; g_{i,k} denotes the gray value of the k-th non-target gray value pixel point on the contour circle of the i-th class of target gray value; G_i denotes the gray value of the i-th class of target gray value; and |·| denotes the absolute value. The degree of interference of the contour circle of each target gray value is obtained in this way.
The larger the ratio of the number of non-target gray value pixel points to the circumference of the contour circle, the more likely these pixel points are noise on the contour circle, and the greater the degree of interference of the contour circle; meanwhile, the larger the difference between the gray value of a non-target gray value pixel point and the target gray value, the stronger the interference it causes and the greater the degree of interference.
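The interference degree can likewise be sketched. Since the formula itself is an image in the source, the code below is a plausible reconstruction from the stated rationale: the ratio of the non-target pixel count to the circumference, scaled by the average gray-value deviation of those pixels from the target gray value. The function name and the exact combination of the two factors are assumptions.

```python
def interference_degree(circumference, nontarget_grays, target_gray):
    # circumference: L_i of the contour circle; nontarget_grays: gray
    # values g_{i,k} of its non-target pixels; target_gray: G_i.
    m = len(nontarget_grays)
    if m == 0:
        return 0.0  # a clean contour circle suffers no interference
    # Average absolute gray deviation of the interfering pixels.
    mean_dev = sum(abs(g - target_gray) for g in nontarget_grays) / m
    # More interfering pixels per unit of circumference, and larger
    # deviations, both increase the interference degree.
    return (m / circumference) * mean_dev
```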
Further, any two target gray values are taken as a target gray value combination; the absolute value of the difference between the two target gray values in the combination is obtained and recorded as the gray difference of the combination. Meanwhile, the absolute value of the difference between the radii of the contour circles of the two target gray values in the combination is obtained and recorded as the radius difference of the combination. A plurality of target gray value combinations are obtained in this way, together with the gray difference and radius difference of each combination; the adaptive filtering strength σ is then calculated as follows:
wherein m denotes the number of target gray value combinations; Δg_k denotes the gray difference of the k-th target gray value combination; Δr_k denotes the radius difference of the k-th target gray value combination; S denotes the standard deviation of the degrees of interference of the contour circles of all target gray values; and exp(·) denotes an exponential function with the natural constant as its base. This embodiment uses the model exp(-x) to present the inverse proportional relationship and perform normalization, with x as the model input; in other embodiments, the practitioner may choose the inverse proportion function and the normalization function according to the actual situation.
The larger the gray difference in a target gray value combination and the larger the radius difference of the corresponding contour circles, the smaller their mutual influence in the image, and the less the filtering strength needs to be increased. Meanwhile, the standard deviation of the degrees of interference of the contour circles of all target gray values is used as the minimum filtering strength, which minimally satisfies the smoothing compensation requirement of each can opening contour line when the USM image is smoothed by Gaussian filtering; adjusting on top of this minimum strength ensures that, while the smoothing compensation requirement is met, the can opening contour lines neither affect each other nor adhere to each other.
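A hedged sketch of the adaptive filtering strength, again reconstructed from the rationale because the formula is an image in the source: the standard deviation S of the interference degrees serves as the minimum strength, and an exp(-x) term over each combination's gray difference times radius difference adds less extra strength when the circles interact less. The multiplicative form S · (1 + mean(exp(-Δg·Δr))) is an assumption, not the patent's verbatim formula.

```python
import math
import statistics


def adaptive_filter_strength(gray_diffs, radius_diffs, interference_degrees):
    # S: standard deviation of all contour circles' interference degrees,
    # used as the minimum filtering strength.
    S = statistics.pstdev(interference_degrees)
    m = len(gray_diffs)
    # exp(-Δg·Δr) is large (near 1) only when both differences are small,
    # i.e. when neighbouring circles could smear into each other.
    adjust = sum(math.exp(-(dg * dr))
                 for dg, dr in zip(gray_diffs, radius_diffs)) / m
    return S * (1.0 + adjust)
```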
So far, the contour circle of each target gray value has been acquired by carrying out Hough circle detection on the pixel points of the target gray values, representing the concentric circles of the can opening in the USM image; the degree of interference of each contour circle, reflected by the number and gray values of the non-target gray value pixel points on it, provides a basis for the subsequent Gaussian filtering, so that accurate contour circles are obtained for the subsequent sharpening of the can opening contour lines.
Step S004: smoothing the USM image according to the adaptive filtering strength to obtain an adjusted USM image, and obtaining a can opening sharpened image according to the adjusted USM image and the can opening image; then carrying out can opening positioning and can opening correction in the tail sealing process according to the can opening sharpened image.
After the adaptive filtering strength σ is obtained, the USM image is smoothed by Gaussian filtering with strength σ, and the smoothed image is recorded as the adjusted USM image; Gaussian filtering with the adaptive strength removes the noise in the USM image. The adjusted USM image is then superposed on the can opening image to obtain the can opening sharpened image; superposition sharpening of a USM image and the original image is the existing USM sharpening method and is not repeated in this embodiment.
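A minimal NumPy sketch of this step, assuming a separable Gaussian kernel with edge padding; the `amount` gain used in the superposition is a hypothetical parameter, not from the patent.

```python
import numpy as np


def gaussian_kernel1d(sigma, radius=None):
    # Sample and normalise a 1-D Gaussian; separable filtering keeps cost low.
    if radius is None:
        radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-(x * x) / (2.0 * sigma * sigma))
    return k / k.sum()


def gaussian_smooth(img, sigma):
    # Smooth rows then columns with the same 1-D kernel (edge padding).
    k = gaussian_kernel1d(sigma)
    pad = len(k) // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)


def usm_sharpen(can_img, sigma, amount=1.0):
    # USM image = original - blurred; smooth it with the adaptive strength
    # sigma, then superpose it back onto the can opening image.
    blurred = gaussian_smooth(can_img, sigma)
    usm = can_img.astype(float) - blurred
    adjusted_usm = gaussian_smooth(usm, sigma)
    return np.clip(can_img + amount * adjusted_usm, 0, 255)
```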
Preferably, the specific method for positioning the tank opening according to the tank opening sharpening image comprises the following steps:
Edge detection is carried out on the can opening sharpened image through a Canny operator to obtain a plurality of edge lines, and Hough circle detection is carried out on them to obtain n circles, which are marked as can mouth circles, where n is the number of can opening turns. The center of the can mouth circle with the largest radius and the center of the can mouth circle with the smallest radius are acquired; if the two centers coincide, this center is taken as the locating point of the metal can opening; if the two centers do not coincide, the midpoint of the line segment connecting the two centers is taken as the locating point of the metal can opening.
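The locating-point rule described above is simple enough to state directly in code; the coincidence tolerance `tol` is an assumed parameter, since pixel-level circle centers rarely coincide exactly.

```python
import math


def locate_can_mouth(circles, tol=1e-6):
    # circles: list of (cx, cy, r) from Hough circle detection on the
    # sharpened image. Returns the shared centre if the largest and
    # smallest circles coincide (within tol), otherwise the midpoint of
    # the segment joining the two centres.
    largest = max(circles, key=lambda c: c[2])
    smallest = min(circles, key=lambda c: c[2])
    (x1, y1, _), (x2, y2, _) = largest, smallest
    if math.hypot(x1 - x2, y1 - y2) <= tol:
        return (x1, y1)
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
```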
In other embodiments, the can mouth circles may be obtained by other means based on the can opening sharpened image, and the can opening further located based on the can mouth circles, which is not described in detail in this embodiment.
Further, the position of the locating point is compared with the can opening position expected by the tail sealing device, which is determined based on the center of the can opening; the offset of the can opening is determined from the Euclidean distance between the two positions, and the conveyor belt or operating manipulator is controlled based on this offset to correct the position of the can opening, thereby completing the visual detection and correction of the can opening position in the canning tail sealing process.
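A sketch of the offset and correction step, assuming the expected center is known; the `threshold` below, which decides whether any correction is issued at all, is hypothetical and not part of the patent.

```python
import math


def can_mouth_offset(locating_point, expected_center):
    # Euclidean distance between the detected locating point and the
    # position expected by the tail sealing device.
    dx = locating_point[0] - expected_center[0]
    dy = locating_point[1] - expected_center[1]
    return math.hypot(dx, dy)


def correction_vector(locating_point, expected_center, threshold=0.5):
    # If the offset exceeds the (assumed) tolerance, return the (dx, dy)
    # the conveyor belt / manipulator should apply to re-centre the can.
    offset = can_mouth_offset(locating_point, expected_center)
    if offset <= threshold:
        return (0.0, 0.0)
    return (expected_center[0] - locating_point[0],
            expected_center[1] - locating_point[1])
```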
Thus, USM sharpening is carried out on the can opening image, and adaptive filtering denoising is performed according to the circular distribution in the USM image during its acquisition, so that an accurate adjusted USM image is obtained for sharpening the can opening image; this further improves the accuracy with which the can mouth circles are acquired from the can opening image, makes the can opening positioning more accurate, and improves the accuracy of can tail sealing.
Another embodiment of the present invention provides a visual inspection system in a can end capping process, the system including a memory, a processor, and a computer program stored in the memory and running on the processor, the processor implementing steps S001 to S004 described above when executing the computer program.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the invention, but any modifications, equivalent substitutions, improvements, etc. within the principles of the present invention should be included in the scope of the present invention.

Claims (5)

1. A visual detection method in a canning tail sealing process is characterized by comprising the following steps:
acquiring a tank opening image of a metal tank, and acquiring the number of tank opening turns;
obtaining a USM image through Gaussian blur on the tank opening image, and obtaining a gray level histogram and a gradient histogram of the USM image; according to the distribution of each gray value in the gray histogram, the distribution of each gradient amplitude in the gradient histogram and the number of can opening circles, a first screening function is constructed, and a plurality of target gray values are obtained;
the method for constructing the first screening function and obtaining a plurality of target gray values comprises the following specific steps:
for any class of gray values, acquiring the frequency of that class of gray values in the gray histogram; for the pixel points of that class of gray values, acquiring the corresponding gradient amplitudes, wherein one class of gray values corresponds to a plurality of gradient amplitudes;
acquiring the gradient amplitude value with the largest frequency in the gradient histogram in all gradient amplitude values corresponding to the class of gray values, and taking the gradient amplitude value as the characteristic gradient of the class of gray values; the ratio of the frequency of the class of gray values in the gray histogram to the frequency of the characteristic gradient in the gradient histogram is recorded as a screening characteristic value of the class of gray values;
obtaining screening characteristic values of each class of gray values; selecting n class gray values each time from all the class gray values in the gray histogram to form a gray value set, where n is the number of can opening circles, so as to obtain a plurality of gray value sets, and constructing a first screening function based on the gray values in each gray value set and their screening characteristic values;
obtaining a corresponding output value for each gray value set through a first screening function, and marking the gray value set corresponding to the minimum output value of the first screening function as a target gray value set, wherein each class of gray value in the target gray value set is a target gray value;
the specific formula of the first screening function is as follows:
wherein Q denotes the output value of the first screening function; s_i denotes the screening characteristic value of the i-th class of gray values in the gray value set; and s̄ denotes the average value of the screening characteristic values of all the class gray values in the gray value set;
detecting all pixel points of each class of target gray values in the USM image through Hough circle detection to obtain a plurality of Hough points; constructing a second screening function according to the voting values of the Hough points of the different target gray values and the circles they correspond to in the USM image, and acquiring the main Hough point and contour circle of each target gray value;
the specific method for constructing the second screening function and obtaining the main Hough point and the contour circle of each target gray value comprises the following steps:
obtaining the voting value of each Hough point; acquiring the circumference of the circle corresponding to each Hough point according to its radius; selecting one Hough point from the plurality of Hough points of each target gray value to form a Hough point set, thereby obtaining a plurality of Hough point sets; and constructing a second screening function based on the Hough point sets;
acquiring an output value of a second screening function obtained based on each Hough point set, taking the Hough point set corresponding to the smallest output value as a main Hough point set, taking each Hough point in the main Hough point set as a main Hough point corresponding to a target gray value, restoring the main Hough point in a USM image, obtaining a circle corresponding to each main Hough point in the USM image, marking the circle as a contour circle of each main Hough point, and taking the contour circle as a contour circle of each target gray value;
the specific formula of the second screening function is as follows:
wherein F_j denotes the output value of the second screening function obtained based on the j-th Hough point set; T_{i,j} denotes the voting value of the Hough point corresponding to the i-th class of target gray value in the j-th Hough point set; C_{i,j} denotes the circumference of the circle corresponding to that Hough point in the j-th Hough point set; and C̄_j denotes the average circumference of the circles corresponding to all Hough points in the j-th Hough point set;
acquiring the interference degree of each contour circle according to the contour circle and the corresponding target gray value;
the interference degree of each contour circle is obtained by the following specific method:
for any class of target gray value, obtaining the pixel points on the contour circle of that class whose gray values are not equal to that target gray value, and marking them as the non-target gray value pixel points on the contour circle of that class of target gray value; acquiring the non-target gray value pixel points on the contour circle of each class of target gray value; the degree of interference R_i of the contour circle of the i-th class of target gray value is calculated as follows:
wherein L_i denotes the circumference of the contour circle of the i-th class of target gray value; m_i denotes the number of non-target gray value pixel points on that contour circle; g_{i,k} denotes the gray value of the k-th non-target gray value pixel point on that contour circle; G_i denotes the gray value of the i-th class of target gray value; and |·| denotes the absolute value;
acquiring self-adaptive filtering strength according to the interfered degree of the contour circle and the corresponding target gray value;
smoothing the USM image according to the self-adaptive filter strength to obtain an adjusted USM image, and obtaining a tank opening sharpening image according to the adjusted USM image and the tank opening image; and positioning the tank opening according to the tank opening sharpening image.
2. The visual detection method in the canning tail sealing process of claim 1, wherein the USM image is obtained by Gaussian blur of the can opening image, and a gray level histogram and a gradient histogram of the USM image are obtained, comprising the following specific steps:
carrying out Gaussian blur on the tank opening image to obtain a blurred tank opening image, and obtaining a USM image through difference between the tank opening image and the blurred tank opening image; and acquiring a gray level histogram of the USM image, acquiring the gradient of each pixel point in the USM image through a Sobel operator, obtaining the gradient amplitude of each pixel point, and obtaining the gradient histogram of the USM image according to the gradient amplitude of the pixel point.
3. The visual detection method in the canning tail sealing process of claim 1, wherein detecting all pixel points of each class of target gray values in the USM image through Hough circle detection to obtain a plurality of Hough points comprises the following specific steps:
for any type of target gray values, all pixel points of the type of target gray values on the USM image are obtained and marked as distributed pixel points of the type of target gray values, hough circle detection is carried out on all distributed pixel points, a plurality of Hough points in Hough parameter space are obtained, and the Hough points are marked as Hough points of the type of target gray values.
4. The visual detection method in the canning tail sealing process of claim 1, wherein the adaptive filtering strength is obtained by the following specific method:
taking any two target gray values as a target gray value combination, obtaining the absolute value of the difference between the two target gray values in the combination, and recording it as the gray difference of the combination; acquiring the absolute value of the difference between the radii of the contour circles of the two target gray values in the combination, and recording it as the radius difference of the combination; obtaining a plurality of target gray value combinations and the gray difference and radius difference of each; the adaptive filtering strength σ is calculated as follows:
wherein m denotes the number of target gray value combinations; Δg_k denotes the gray difference of the k-th target gray value combination; Δr_k denotes the radius difference of the k-th target gray value combination; S denotes the standard deviation of the degrees of interference of the contour circles of all target gray values; and exp(·) denotes an exponential function with the natural constant as its base.
5. A visual detection system in a canning tail sealing process, comprising a memory, a processor and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the steps of the visual detection method in a canning tail sealing process as claimed in any one of claims 1 to 4.
CN202410129217.6A 2024-01-31 2024-01-31 Visual detection method and system in canning tail sealing process Active CN117670875B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410129217.6A CN117670875B (en) 2024-01-31 2024-01-31 Visual detection method and system in canning tail sealing process

Publications (2)

Publication Number Publication Date
CN117670875A CN117670875A (en) 2024-03-08
CN117670875B true CN117670875B (en) 2024-04-02

Family

ID=90082791


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113902667A (en) * 2021-08-23 2022-01-07 浙大宁波理工学院 Thread turning identification method and system for machine vision
CN116188763A (en) * 2022-12-28 2023-05-30 山西大学 Method for measuring carton identification positioning and placement angle based on YOLOv5
CN117132655A (en) * 2023-10-25 2023-11-28 江苏金旺智能科技有限公司 Filling barrel opening position measuring method based on machine vision



Similar Documents

Publication Publication Date Title
US11741367B2 (en) Apparatus and method for image processing to calculate likelihood of image of target object detected from input image
CN115018853B (en) Mechanical component defect detection method based on image processing
CN103745221B (en) Two-dimensional code image correction method
CN107845086A (en) A kind of detection method, system and the device of leather surface conspicuousness defect
CN113962306A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN116912250B (en) Fungus bag production quality detection method based on machine vision
CN111415339B (en) Image defect detection method for complex texture industrial product
Adatrao et al. An analysis of different image preprocessing techniques for determining the centroids of circular marks using hough transform
CN115619775B (en) Material counting method and device based on image recognition
CN112884746A (en) Character defect intelligent detection algorithm based on edge shape matching
CN114119603A (en) Image processing-based snack box short shot defect detection method
Zhang et al. Robust pattern recognition for measurement of three dimensional weld pool surface in GTAW
CN111539927A (en) Detection process and algorithm of automobile plastic assembly fastening buckle lack-assembly detection device
CN115018846A (en) AI intelligent camera-based multi-target crack defect detection method and device
CN117670875B (en) Visual detection method and system in canning tail sealing process
CN115587966A (en) Method and system for detecting whether parts are missing or not under condition of uneven illumination
Felipe et al. Vision-based liquid level detection in amber glass bottles using OpenCV
CN113019973A (en) Online visual inspection method for manufacturing defects of ring-pull cans
CN113781413B (en) Electrolytic capacitor positioning method based on Hough gradient method
CN113658141A (en) Transparent packaging bag sealing identification method and device, storage medium and electronic equipment
CN112926695A (en) Image recognition method and system based on template matching
CN116883987A (en) Pointer instrument reading identification method for unmanned inspection of transformer substation
CN113313725B (en) Bung hole identification method and system for energetic material medicine barrel
CN115372380A (en) Identification system for extremely-short wave optical detection method of plastic film wrapped outside
CN112381755A (en) Infusion apparatus catheter gluing defect detection method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant