CN117152127A - PTFE finished product appearance defect detection method based on machine vision - Google Patents

PTFE finished product appearance defect detection method based on machine vision

Info

Publication number
CN117152127A
CN117152127A
Authority
CN
China
Prior art keywords
point
pixel
super
pixel block
super pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311394638.3A
Other languages
Chinese (zh)
Other versions
CN117152127B (en)
Inventor
张军辉
黄力钢
赵文强
邓彬
熊光明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Xingdongtai Electronic Co ltd
Original Assignee
Shenzhen Xingdongtai Electronic Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Xingdongtai Electronic Co ltd filed Critical Shenzhen Xingdongtai Electronic Co ltd
Priority to CN202311394638.3A priority Critical patent/CN117152127B/en
Publication of CN117152127A publication Critical patent/CN117152127A/en
Application granted granted Critical
Publication of CN117152127B publication Critical patent/CN117152127B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 Systems specially adapted for particular applications
    • G01N 21/88 Investigating the presence of flaws or contamination
    • G01N 21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G06T 7/0008 Industrial image inspection checking presence/absence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 Systems specially adapted for particular applications
    • G01N 21/88 Investigating the presence of flaws or contamination
    • G01N 21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N 2021/8887 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Signal Processing (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of image processing, in particular to a machine-vision-based method for detecting appearance defects of PTFE finished products, which comprises the following steps: collecting a pipe gray scale map and obtaining super pixel blocks in the pipe gray scale map; obtaining an initial similarity weight from the gray values of the pixel points in each super pixel block; obtaining the included angle between each pixel point and the corresponding seed point from the distances of the pixel point to the horizontal and vertical lines through the seed point in the super pixel block; obtaining the similarity weight of each pixel point with the corresponding seed point from the included angle and the initial similarity weight; obtaining the similarity from the similarity weight and the gray difference between each pixel point and the corresponding seed point; and re-dividing the pipe gray scale map into super pixel blocks according to the similarity to obtain corrected super pixel blocks, after which appearance defects of the PTFE finished product are detected. By correcting the super pixel blocks with these image processing steps, the accuracy of PTFE defect detection is improved.

Description

PTFE finished product appearance defect detection method based on machine vision
Technical Field
The invention relates to the technical field of image processing, in particular to a PTFE finished product appearance defect detection method based on machine vision.
Background
With the development of technology, materials are no longer limited to steel; new materials such as plastic pipes are used in many fields because their raw materials are readily available, PTFE (often called the "king of plastics") being a typical example. Improper machine adjustment during the production of plastic pipes can introduce defects, and machine-vision-based defect detection can reduce the cost of human resources. PTFE pipe is an important pipe widely applied in the chemical, medical, food, electronics and other fields. Owing to its excellent heat resistance, corrosion resistance and low coefficient of friction, PTFE tubing is widely used in processes that transport liquids, gases and corrosive media. During production, ensuring the quality and integrity of PTFE tubing is critical to product performance and safety. PTFE tubing may develop surface or internal defects during production, such as blisters, cracks and shrinkage, and these defects may affect the strength, pressure resistance and service life of the pipe.
However, conventional methods often use super-pixel segmentation to obtain the defect area. When inspecting PTFE pipes, multiple pipes are arranged side by side, and the gray values of pixel points in a defect area may be similar to those in non-defect areas. Conventional super-pixel segmentation judges similarity from gray values alone and cannot take the vertical and horizontal characteristics of the PTFE pipes into account, so when gray values are similar the defect area of the PTFE pipe is detected inaccurately and misjudgments occur. Therefore, when judging the similarity of pixel points in the image, the variation characteristics of each pixel point in the horizontal and vertical directions are considered, pixel similarity is judged according to these horizontal and vertical variation characteristics, and accurate super pixel blocks are finally determined.
Disclosure of Invention
The invention provides a PTFE finished product appearance defect detection method based on machine vision, which aims to solve the existing problems.
The PTFE finished product appearance defect detection method based on machine vision adopts the following technical scheme:
one embodiment of the invention provides a PTFE finished product appearance defect detection method based on machine vision, which comprises the following steps:
acquiring an image of the PTFE pipe, and graying to obtain a pipe gray image;
presetting the number K of super pixel blocks, and dividing the pipe gray scale map according to the preset number K of the super pixel blocks to obtain super pixel blocks in the pipe gray scale map, wherein each super pixel block comprises a seed point;
obtaining initial similarity weight of each pixel point in the super pixel block and the corresponding seed point according to the gray value of the seed point in each super pixel block and the gray values of the pixel points on the horizontal and vertical lines corresponding to the seed point;
obtaining an included angle between each pixel point and the corresponding seed point according to the horizontal and vertical line distance from each pixel point to the seed point in the super pixel block; obtaining similarity weight of each pixel point and the corresponding seed point according to the included angle between each pixel point and the corresponding seed point in the super pixel block and the initial similarity weight;
obtaining the similarity of each pixel point and the corresponding seed point according to the similarity weight of each pixel point and the corresponding seed point in the super pixel block and the gray level difference between each pixel point and the corresponding seed point; repartitioning the super-pixel blocks in the pipe gray map according to the similarity between each pixel point and the corresponding seed point to obtain corrected super-pixel blocks in the pipe gray map;
and detecting appearance defects of the PTFE finished product according to the corrected super pixel blocks in the pipe gray level diagram.
Further, the dividing the pipe gray scale map according to the number K of preset super pixel blocks to obtain the super pixel blocks in the pipe gray scale map comprises the following specific steps:
uniformly dividing the pipe gray scale map into K super pixel blocks, wherein the number of pixel points in each super pixel block is N/K and the side length of each super pixel block is √(N/K), where N represents the number of pixel points in the pipe gray scale map.
Further, the specific obtaining steps of each super pixel block including a seed point are as follows:
and selecting a pixel point at the central position in each super pixel block as a seed point of each super pixel block.
Further, the step of obtaining the initial similarity weight between each pixel point in the super pixel block and the corresponding seed point according to the gray value of the seed point in each super pixel block and the gray values of the pixel points on the horizontal and vertical lines corresponding to the seed point comprises the following specific steps:
the initial similarity weight of each pixel point in the super pixel block with the corresponding seed point is computed from the following quantities: the gray value of the seed point in the z-th super pixel block; the gray value of the j-th pixel point on the horizontal line corresponding to the seed point and the gray value of the j-th pixel point on the vertical line corresponding to the seed point; the side length S of the super pixel block; the gray value of the i-th pixel point in the z-th super pixel block; and the gray values of the pixel points on the horizontal and vertical lines corresponding to the i-th pixel point. The result is the initial similarity weight of the i-th pixel point in the z-th super pixel block with the corresponding seed point.
Further, the horizontal and vertical lines corresponding to the seed points comprise the following specific steps:
the horizontal and vertical lines corresponding to the seed point are the lines formed by the pixel points of the horizontal row and the vertical column in which the seed point of each super pixel block is located.
Further, the step of obtaining the included angle between each pixel point and the corresponding seed point according to the distance from each pixel point to the corresponding horizontal and vertical lines of the seed point in the super pixel block comprises the following specific steps:
the included angle between each pixel point and the corresponding seed point is obtained by applying the arctangent function to the distance from the i-th pixel point in the z-th super pixel block to the corresponding vertical line and its distance to the corresponding horizontal line; the result is the included angle between the i-th pixel point and the seed point in the z-th super pixel block.
Further, the specific obtaining steps of the similarity weight between each pixel point and the corresponding seed point are as follows:
the similarity weight of each pixel point with the corresponding seed point is obtained by correcting the initial similarity weight of the i-th pixel point in the z-th super pixel block with the included angle between the i-th pixel point and the seed point in the z-th super pixel block; the result is the similarity weight of the i-th pixel point in the z-th super pixel block with the corresponding seed point.
Further, the specific obtaining steps of the similarity between each pixel point and the corresponding seed point are as follows:
the similarity of each pixel point with the corresponding seed point is obtained from the similarity weight of the i-th pixel point in the z-th super pixel block, the gray value of the seed point in the z-th super pixel block and the gray value of the i-th pixel point, using an exponential function with the natural constant as its base; the result is the similarity of the i-th pixel point in the z-th super pixel block with the corresponding seed point.
Further, the super pixel blocks are re-divided into the pipe gray scale map according to the similarity between each pixel point and the corresponding seed point, so as to obtain corrected super pixel blocks in the pipe gray scale map, which comprises the following specific steps:
when the similarity between the gray value of any pixel point in the super pixel block and that of the corresponding seed point is greater than or equal to a preset threshold value A, the pixel point is judged to be of the same type as the corresponding seed point; the pixel points of the super pixel block that are of the same type as the seed point are marked as similar pixel points, and all the similar pixel points in the super pixel block are taken as a new super pixel block; the new super pixel block is used as the corrected super pixel block.
Further, the detecting of the appearance defect of the PTFE finished product according to the corrected super pixel block in the pipe gray scale image comprises the following specific steps:
inputting the corrected gray average value of the super pixel blocks into an LOF algorithm for anomaly detection to obtain the LOF value of each corrected super pixel block; and when the LOF value of each corrected super pixel block is larger than a preset threshold value B, judging that the appearance of the PTFE pipe is defective.
The technical scheme of the invention has the following beneficial effects: by analysing the variation characteristics in the horizontal and vertical directions and then correcting the similarity between each pixel point and the seed point in the super pixel block, the method keeps the error inside each super pixel block of the super-pixel segmentation very small; outlier anomaly detection is then carried out on all the super pixel blocks to quickly screen out the super pixels corresponding to defect areas, thereby improving the accuracy of PTFE finished product appearance defect detection.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of the steps of the machine vision-based PTFE end product appearance defect detection method of the present invention;
FIG. 2 is a schematic diagram of points in the horizontal and vertical directions corresponding to pixel points of the PTFE product appearance defect detection method based on machine vision;
fig. 3 is a schematic diagram of the angles of the method for detecting appearance defects of the PTFE product based on machine vision according to the present invention.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve the preset aim, the following detailed description is given below of the machine vision-based method for detecting appearance defects of a PTFE finished product according to the invention, which is specific to the implementation, structure, characteristics and effects thereof, with reference to the accompanying drawings and preferred embodiments. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of the machine vision-based PTFE finished product appearance defect detection method provided by the invention with reference to the accompanying drawings.
Referring to fig. 1, a flowchart illustrating a method for detecting appearance defects of a PTFE product based on machine vision according to an embodiment of the present invention includes the following steps:
step S001: and collecting a gray level diagram of the pipe.
In order to analyze whether or not there is a defect in the PTFE tubing when detecting the PTFE tubing, in this embodiment, an image of the PTFE tubing needs to be acquired and defect detection of the appearance needs to be performed based on the image.
Specifically, a conveyor belt conveys PTFE pipes placed side by side, a camera mounted directly above the conveyor belt collects images of the PTFE pipes, and the collected PTFE pipe images are preprocessed by graying to obtain the pipe gray scale map. The conveying direction of the conveyor belt is defined as the vertical direction, and the direction perpendicular to the conveyor belt is defined as the horizontal direction.
And thus, obtaining a gray scale image of the pipe.
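As a minimal sketch of this acquisition step (assuming OpenCV is used; the function name and file path are illustrative and not part of the patent), the image collected above the conveyor belt can be converted into the pipe gray scale map as follows:

```python
import cv2

def load_tube_gray(image_path: str):
    """Read a captured PTFE pipe image and convert it to a pipe gray scale map."""
    img = cv2.imread(image_path, cv2.IMREAD_COLOR)
    if img is None:
        raise FileNotFoundError(f"could not read image: {image_path}")
    # Graying preprocessing of the collected pipe image
    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
```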
Step S002: presetting an initial seed point of a super pixel block in an image, and obtaining a similar weight of each pixel point in the image according to the initial seed point and gray scale characteristics in the vertical and horizontal directions.
In the process of conveying PTFE pipes placed side by side on the conveyor belt, because the pipes lie side by side, the gray level of the acquired image changes periodically in the direction perpendicular to the pipes, while there is little gray level change along the direction of the pipes. Therefore, weight similarity is calculated for each pixel point and the corresponding seed point according to the gray value characteristics in the directions perpendicular and parallel to the pipes, and the gray difference used in super-pixel segmentation is then corrected according to the pixel weight similarity in the image.
(1) Presetting the number of super pixel blocks, and uniformly distributing and arranging seed points of the super pixel blocks according to the number of the super pixel blocks.
It should be further noted that, in order to determine the specific position of the defect area, the seed points are uniformly distributed and arranged first, and then the difference analysis is performed according to the super-pixel blocks corresponding to the uniformly distributed seed points, so as to obtain the super-pixel block with smaller gray scale difference inside the final super-pixel block.
Specifically, the number of preset super pixel blocks is K. This embodiment is described with K=100 as an example; K is not specifically limited here and may be determined according to the implementation. The number of pixel points in the pipe gray scale map is recorded as N. The K preset super pixel blocks are distributed uniformly over the pipe gray scale map, and the number of pixel points in each super pixel block is obtained from K and N as N/K. The pixel points in each super pixel block do not repeat, and all the pixel points of all the super pixel blocks together cover the whole pipe gray scale map. Square super pixel blocks are obtained as far as possible from the preset number of super pixel blocks, and the side length of each super pixel block is √(N/K), where K represents the number of preset super pixel blocks and N represents the number of pixel points in the pipe gray scale map. The seed point is located at the centre point of the super pixel block.
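A minimal sketch of this uniform initialisation, assuming a NumPy gray image (the function and variable names are illustrative): the image is tiled into roughly square blocks of side √(N/K), and the centre pixel of each block is taken as its seed point.

```python
import numpy as np

def init_superpixel_grid(gray: np.ndarray, K: int = 100):
    """Tile the pipe gray map into ~K square super pixel blocks with a seed at each centre."""
    H, W = gray.shape
    N = H * W
    S = max(1, int(round(np.sqrt(N / K))))   # side length of each super pixel block
    labels = np.zeros((H, W), dtype=int)     # block index of every pixel point
    seeds = []                               # (row, col) of each seed point
    block_id = 0
    for r0 in range(0, H, S):
        for c0 in range(0, W, S):
            r1, c1 = min(r0 + S, H), min(c0 + S, W)
            labels[r0:r1, c0:c1] = block_id
            seeds.append(((r0 + r1) // 2, (c0 + c1) // 2))  # centre pixel as seed point
            block_id += 1
    return labels, np.array(seeds), S
```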
(2) And obtaining initial similarity weight of each pixel point in the super pixel block according to the gray level difference of each pixel point and the seed point in the super pixel block in the horizontal and vertical directions.
In each super pixel block, the pixel points behave differently in the horizontal and vertical directions. The difference in the horizontal direction comes from non-uniform illumination on the pipe surface: when several pipes are placed in parallel, the light in the gaps between the pipes is weak while the light on the upper surface of each pipe is relatively strong, so the acquired image shows a periodic gradient change in the horizontal direction. The difference in the vertical direction comes from defects: if there is no defect, the gray level of the pipe changes very little in the vertical direction, almost not at all, but if there is a defect the gray level variation in the vertical direction becomes large. Therefore, the closer a pixel point is to the vertical line corresponding to the seed point, the greater the weight for classifying it into the same category; conversely, the closer a pixel point is to the horizontal line corresponding to the seed point, the smaller that weight.
The seed point is taken as the origin of coordinates: the direction perpendicular to the conveyor belt, i.e. the horizontal direction, is the horizontal axis, and the conveying direction of the conveyor belt, i.e. the vertical direction, is the vertical axis. The horizontal line corresponding to the seed point lies along the horizontal axis, and the vertical line corresponding to the seed point lies along the vertical axis. A schematic diagram of the points in the horizontal and vertical directions corresponding to a pixel point is shown in fig. 2.
Specifically, the gray value of the seed point in each super pixel block is recorded, and the initial similarity weight of each pixel point in the super pixel block with the corresponding seed point is obtained from the gray values of the pixel points on the horizontal line and the vertical line corresponding to the seed point in the super pixel block. The quantities entering this calculation are: the gray value of the seed point in the z-th super pixel block; the gray value of the j-th pixel point on the horizontal line corresponding to the seed point and the gray value of the j-th pixel point on the vertical line corresponding to the seed point; the side length S of the super pixel block; the gray value of the i-th pixel point in the z-th super pixel block; and the gray values of the pixel points on the horizontal and vertical lines corresponding to the i-th pixel point. The result is the initial similarity weight of the i-th pixel point in the z-th super pixel block with the corresponding seed point. The horizontal and vertical lines corresponding to the seed point are the lines formed by the pixel points of the row and the column in which the seed point of each super pixel block lies.
The more similar the gray values of a pixel point and of the pixel points on its corresponding horizontal and vertical lines are to those of the seed point and its corresponding lines, the larger the initial similarity weight of that pixel point with the seed point, i.e. the more similar the gray value of the pixel point is to the gray value of the corresponding seed point.
So far, the initial similarity weight of each pixel point and the corresponding seed point in the pipe gray scale map is obtained.
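The published text renders the weight expression only as an image, so the sketch below is one plausible reading under stated assumptions: the initial similarity weight grows as the gray profile along the horizontal and vertical lines through a pixel point resembles the profile through the seed point. The 1/(1 + difference) mapping and the helper name are assumptions, not the patented expression.

```python
import numpy as np

def initial_similarity_weight(gray: np.ndarray, px, seed, S: int) -> float:
    """Assumed form: compare gray profiles along the horizontal/vertical lines
    through the pixel point with those through the seed point."""
    H, W = gray.shape
    half = S // 2

    def profiles(row, col):
        cols = np.clip(np.arange(col - half, col + half + 1), 0, W - 1)
        rows = np.clip(np.arange(row - half, row + half + 1), 0, H - 1)
        return gray[row, cols].astype(float), gray[rows, col].astype(float)

    ph, pv = profiles(*px)      # horizontal / vertical profile through the pixel point
    sh, sv = profiles(*seed)    # horizontal / vertical profile through the seed point
    diff = 0.5 * (np.mean(np.abs(ph - sh)) + np.mean(np.abs(pv - sv)))
    return 1.0 / (1.0 + diff)   # smaller profile difference -> larger initial weight
```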
(3) And obtaining the similarity weight according to the initial similarity weight of each pixel point and the corresponding seed point in the pipe gray scale map.
It should be noted that, from the above analysis, when a pixel point in the super pixel block is closer to the vertical line corresponding to the seed point, the pixel point is more likely to be of the same type as the seed point; when the pixel point is closer to the horizontal line corresponding to the seed point, the probability that it is of the same type as the seed point is smaller. Therefore, an included angle that expresses the position of the pixel point between the vertical and horizontal directions is obtained, and the initial similarity weight of each pixel point with the corresponding seed point in the pipe gray scale map is corrected according to this angle. A schematic diagram of the included angle is shown in fig. 3.
Specifically, the included angle between each pixel point and the corresponding seed point is obtained from the distances of the pixel point to the corresponding horizontal and vertical lines: the arctangent function is applied to the distance of the i-th pixel point in the z-th super pixel block to the corresponding vertical line and its distance to the corresponding horizontal line, giving the included angle between the i-th pixel point and the seed point in the z-th super pixel block.
The initial similarity weight of each pixel point with the corresponding seed point in the pipe gray scale map is then corrected according to the included angle between the pixel point and the seed point: the initial similarity weight of the i-th pixel point in the z-th super pixel block is corrected by the included angle between the i-th pixel point and the seed point in the z-th super pixel block, giving the similarity weight of the i-th pixel point in the z-th super pixel block with the corresponding seed point.
The larger the included angle, the less likely the pixel point and the seed point are of the same type, so the weight is corrected by a smaller coefficient; conversely, the weight is corrected by a larger coefficient.
So far, the similarity weight of each pixel point and the corresponding seed point is obtained.
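Since the correction formula itself appears only as an image in the published text, the sketch below captures just the stated monotonic behaviour: the angle is taken as the arctangent of the distance to the vertical line over the distance to the horizontal line, and a larger angle lowers the weight. Using cos(θ) as the correction coefficient is an assumption, not the exact patented coefficient.

```python
import numpy as np

def corrected_similarity_weight(w0: float, px, seed) -> float:
    """Correct an initial similarity weight by the pixel's angle relative to the seed point."""
    r, c = px
    sr, sc = seed
    d_vert = abs(c - sc)    # distance to the vertical line through the seed point
    d_horiz = abs(r - sr)   # distance to the horizontal line through the seed point
    theta = np.arctan2(d_vert, d_horiz)   # large when the pixel is far from the vertical line
    return w0 * np.cos(theta)             # assumed decreasing correction coefficient
```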
Step S003: and correcting the gray scale difference of the pixel points according to the similar weight of each pixel point and the corresponding seed point in the image to obtain an image corrected super-pixel block.
It should be noted that conventional super-pixel segmentation uses only the difference of gray values between the seed point and the pixel points in the super pixel block. For an image containing anomalies this is clearly not reasonable, so the gray-value difference between pixel points is corrected with the similarity weight of each pixel point and the corresponding seed point.
Specifically, the gray difference of each pixel point in the image is corrected according to the similarity weight of each pixel point with the corresponding seed point, giving the degree of similarity between each pixel point and the corresponding seed point after correction. The calculation uses the similarity weight of the i-th pixel point in the z-th super pixel block with the corresponding seed point, the gray value of the seed point in the z-th super pixel block and the gray value of the i-th pixel point, combined through an exponential function with the natural constant as its base; the result is the similarity of the i-th pixel point in the z-th super pixel block with the corresponding seed point.
Here the gray scale difference between the pixel point and the seed point enters the exponent; the larger this difference, the smaller the similarity between the pixel point and the seed point.
A threshold value A is preset; this embodiment is described with A = 0.7 as an example, and A is not specifically limited here and may be determined according to the implementation. When the similarity between the gray value of any pixel point in the super pixel block and the gray value of the corresponding seed point is greater than or equal to the preset threshold value A, the pixel point is judged to be of the same type as the seed point; such pixel points are marked as similar pixel points, and all similar pixel points in the super pixel block are taken as a new super pixel block. When the similarity is smaller than the preset threshold value A, the pixel point is judged not to be of the same type as the seed point; such pixel points are marked as non-similar pixel points, and all non-similar pixel points in the super pixel block are likewise recorded as a new super pixel block. The new super pixel blocks are recorded as the corrected super pixel blocks.
And thus, obtaining the corrected super pixel block in the pipe gray level diagram.
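A hedged sketch of this correction step, assuming the similarity takes the form exp(−w·|g_seed − g_i|), consistent with the description (an exponential of the weighted gray difference); the helper weight_of and the splitting of each original block into exactly two new blocks are illustrative assumptions.

```python
import numpy as np

def refine_block(gray: np.ndarray, labels: np.ndarray, block_id: int,
                 seed, weight_of, A: float = 0.7) -> np.ndarray:
    """Split one super pixel block into similar / non-similar pixels w.r.t. its seed point."""
    new_labels = labels.copy()
    new_id = labels.max() + 1                      # label for the non-similar pixels
    g_seed = float(gray[seed])
    rows, cols = np.where(labels == block_id)
    for r, c in zip(rows, cols):
        w = weight_of(r, c)                        # similarity weight of this pixel point
        sim = np.exp(-w * abs(float(gray[r, c]) - g_seed))   # assumed similarity form
        if sim < A:                                # not the same type as the seed point
            new_labels[r, c] = new_id
    return new_labels
```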
Step S004: and detecting defects according to the corrected super pixel blocks.
And calculating the gray average value of all pixel points in each corrected super pixel block, and taking the gray average value of the corrected super pixel block as the representative value of each corrected super pixel block.
A threshold value B is preset; this embodiment is described with B = 1.1 as an example, and B is not specifically limited here and may be determined according to the implementation. The gray mean values of the corrected super pixel blocks are input into the LOF algorithm for anomaly detection to obtain the LOF value of each corrected super pixel block; when the LOF value of a corrected super pixel block is larger than the preset threshold value B, the appearance of the PTFE pipe is judged to be defective. The LOF algorithm is a well-known technique and is not described in detail here.
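A minimal sketch of this screening step using scikit-learn's LocalOutlierFactor (an assumption about the concrete LOF implementation; the n_neighbors setting and function name are illustrative). One gray mean per corrected super pixel block goes in, and blocks whose LOF value exceeds B are flagged as defect candidates.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def defective_blocks(block_gray_means, B: float = 1.1, n_neighbors: int = 20) -> np.ndarray:
    """Return a boolean mask of corrected super pixel blocks whose LOF value exceeds B."""
    X = np.asarray(block_gray_means, dtype=float).reshape(-1, 1)
    lof = LocalOutlierFactor(n_neighbors=min(n_neighbors, len(X) - 1))
    lof.fit(X)
    lof_values = -lof.negative_outlier_factor_   # sklearn stores the negated LOF score
    return lof_values > B                        # True -> possible appearance defect
```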
This embodiment is completed.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.

Claims (10)

1. The PTFE finished product appearance defect detection method based on machine vision is characterized by comprising the following steps of:
acquiring an image of the PTFE pipe, and graying to obtain a pipe gray image;
presetting the number K of super pixel blocks, and dividing the pipe gray scale map according to the preset number K of the super pixel blocks to obtain super pixel blocks in the pipe gray scale map, wherein each super pixel block comprises a seed point;
obtaining initial similarity weight of each pixel point in the super pixel block and the corresponding seed point according to the gray value of the seed point in each super pixel block and the gray values of the pixel points on the horizontal and vertical lines corresponding to the seed point;
obtaining an included angle between each pixel point and the corresponding seed point according to the horizontal and vertical line distance from each pixel point to the seed point in the super pixel block; obtaining similarity weight of each pixel point and the corresponding seed point according to the included angle between each pixel point and the corresponding seed point in the super pixel block and the initial similarity weight;
obtaining the similarity of each pixel point and the corresponding seed point according to the similarity weight of each pixel point and the corresponding seed point in the super pixel block and the gray level difference between each pixel point and the corresponding seed point; repartitioning the super-pixel blocks in the pipe gray map according to the similarity between each pixel point and the corresponding seed point to obtain corrected super-pixel blocks in the pipe gray map;
and detecting appearance defects of the PTFE finished product according to the corrected super pixel blocks in the pipe gray level diagram.
2. The machine vision-based PTFE finished product appearance defect detection method of claim 1, wherein dividing the tube gray scale map according to the number K of preset super pixel blocks to obtain super pixel blocks in the tube gray scale map comprises the following specific steps:
uniformly dividing the pipe gray scale map into K super pixel blocks, wherein the number of pixel points in each super pixel block is N/K and the side length of each super pixel block is √(N/K), where N represents the number of pixel points in the pipe gray scale map.
3. The method for detecting appearance defects of a PTFE product based on machine vision according to claim 1, wherein the specific step of obtaining a seed point contained in each super pixel block comprises the following steps:
and selecting a pixel point at the central position in each super pixel block as a seed point of each super pixel block.
4. The machine vision-based PTFE finished product appearance defect detection method of claim 1, wherein the obtaining the initial similarity weight between each pixel point in the super pixel block and the corresponding seed point according to the gray value of the seed point in each super pixel block and the gray values of the pixel points on the horizontal and vertical lines corresponding to the seed point comprises the following specific steps:
the initial similarity weight of each pixel point in the super pixel block with the corresponding seed point is computed from the following quantities: the gray value of the seed point in the z-th super pixel block; the gray value of the j-th pixel point on the horizontal line corresponding to the seed point and the gray value of the j-th pixel point on the vertical line corresponding to the seed point; the side length S of the super pixel block; the gray value of the i-th pixel point in the z-th super pixel block; and the gray values of the pixel points on the horizontal and vertical lines corresponding to the i-th pixel point. The result is the initial similarity weight of the i-th pixel point in the z-th super pixel block with the corresponding seed point.
5. The method for detecting appearance defects of a PTFE product based on machine vision according to claim 3, wherein the horizontal and vertical lines corresponding to the seed points comprise the following specific steps:
the horizontal and vertical lines corresponding to the seed point are the lines formed by the pixel points of the horizontal row and the vertical column in which the seed point of each super pixel block is located.
6. The method for detecting the appearance defect of the PTFE finished product based on machine vision according to claim 1, wherein the obtaining the included angle between each pixel point and the corresponding seed point according to the horizontal and vertical distances from each pixel point to the seed point in the super pixel block comprises the following specific steps:
the included angle between each pixel point and the corresponding seed point is obtained by applying the arctangent function to the distance from the i-th pixel point in the z-th super pixel block to the corresponding vertical line and its distance to the corresponding horizontal line; the result is the included angle between the i-th pixel point and the seed point in the z-th super pixel block.
7. The machine vision-based PTFE finished product appearance defect detection method of claim 1, wherein the specific step of obtaining the similarity weight between each pixel point and the corresponding seed point is as follows:
the similarity weight of each pixel point with the corresponding seed point is obtained by correcting the initial similarity weight of the i-th pixel point in the z-th super pixel block with the included angle between the i-th pixel point and the seed point in the z-th super pixel block; the result is the similarity weight of the i-th pixel point in the z-th super pixel block with the corresponding seed point.
8. The machine vision-based PTFE finished product appearance defect detection method according to claim 1, wherein the specific step of obtaining the similarity between each pixel point and the corresponding seed point is as follows:
the similarity of each pixel point with the corresponding seed point is obtained from the similarity weight of the i-th pixel point in the z-th super pixel block, the gray value of the seed point in the z-th super pixel block and the gray value of the i-th pixel point, using an exponential function with the natural constant as its base; the result is the similarity of the i-th pixel point in the z-th super pixel block with the corresponding seed point.
9. The machine vision-based PTFE finished product appearance defect detection method of claim 1, wherein the repartitioning the super-pixel blocks according to the similarity between each pixel point and the corresponding seed point to the tube gray map to obtain the corrected super-pixel blocks in the tube gray map comprises the following specific steps:
when the similarity between the gray value of any pixel point in the super pixel block and that of the corresponding seed point is greater than or equal to a preset threshold value A, the pixel point is judged to be of the same type as the corresponding seed point; the pixel points of the super pixel block that are of the same type as the seed point are marked as similar pixel points, and all the similar pixel points in the super pixel block are taken as a new super pixel block; the new super pixel block is used as the corrected super pixel block.
10. The machine vision-based PTFE finished product appearance defect detection method of claim 1, wherein the detecting the appearance defect of the PTFE finished product according to the corrected superpixel block in the tube gray scale map comprises the following specific steps:
inputting the corrected gray average value of the super pixel blocks into an LOF algorithm for anomaly detection to obtain the LOF value of each corrected super pixel block; and when the LOF value of each corrected super pixel block is larger than a preset threshold value B, judging that the appearance of the PTFE pipe is defective.
CN202311394638.3A 2023-10-26 2023-10-26 PTFE finished product appearance defect detection method based on machine vision Active CN117152127B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311394638.3A CN117152127B (en) 2023-10-26 2023-10-26 PTFE finished product appearance defect detection method based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311394638.3A CN117152127B (en) 2023-10-26 2023-10-26 PTFE finished product appearance defect detection method based on machine vision

Publications (2)

Publication Number Publication Date
CN117152127A true CN117152127A (en) 2023-12-01
CN117152127B CN117152127B (en) 2024-01-16

Family

ID=88884493

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311394638.3A Active CN117152127B (en) 2023-10-26 2023-10-26 PTFE finished product appearance defect detection method based on machine vision

Country Status (1)

Country Link
CN (1) CN117152127B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105513066A (en) * 2015-12-02 2016-04-20 中山大学 General object detection method based on seed point selection and super pixel fusion
CN109509191A (en) * 2018-11-15 2019-03-22 中国地质大学(武汉) A kind of saliency object detection method and system
CN112991302A (en) * 2021-03-22 2021-06-18 华南理工大学 Flexible IC substrate color-changing defect detection method and device based on super-pixels
CN114913138A (en) * 2022-04-24 2022-08-16 南通飞旋智能科技有限公司 Method and system for detecting defects of pad printing machine product based on artificial intelligence
CN116309600A (en) * 2023-05-24 2023-06-23 山东金佳成工程材料有限公司 Environment-friendly textile quality detection method based on image processing

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105513066A (en) * 2015-12-02 2016-04-20 中山大学 General object detection method based on seed point selection and super pixel fusion
CN109509191A (en) * 2018-11-15 2019-03-22 中国地质大学(武汉) A kind of saliency object detection method and system
CN112991302A (en) * 2021-03-22 2021-06-18 华南理工大学 Flexible IC substrate color-changing defect detection method and device based on super-pixels
CN114913138A (en) * 2022-04-24 2022-08-16 南通飞旋智能科技有限公司 Method and system for detecting defects of pad printing machine product based on artificial intelligence
CN116309600A (en) * 2023-05-24 2023-06-23 山东金佳成工程材料有限公司 Environment-friendly textile quality detection method based on image processing

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LIU Anqi et al.: "Improved SLIC superpixel image segmentation and merging algorithm", Journal of Anhui Jianzhu University, vol. 28, no. 04, pages 39-46 *
WANG Fuzhi et al.: "Random walk image segmentation based on visual attention", Chinese Journal of Scientific Instrument, vol. 38, no. 07, pages 1772-1781 *
ZHENG Shufu et al.: "Contour feature extraction of 3D point clouds based on supervoxel clustering", Journal of Lanzhou University of Arts and Science (Natural Sciences), vol. 32, no. 03, pages 50-54 *

Also Published As

Publication number Publication date
CN117152127B (en) 2024-01-16

Similar Documents

Publication Publication Date Title
CN110751604B (en) Online detection method for weld defects of steel pipe based on machine vision
CN107798326B (en) Contour vision detection method
CN115294120B (en) Valve surface quality detection method based on image recognition
CN115423813B (en) Method for detecting welding defects on surface of welded pipe
CN116703907B (en) Machine vision-based method for detecting surface defects of automobile castings
CN114998355B (en) Production defect identification method and device for sealing rubber ring
CN116433668B (en) Intelligent hydraulic oil pipe oil leakage detection method
CN116309565B (en) High-strength conveyor belt deviation detection method based on computer vision
CN116563279B (en) Measuring switch detection method based on computer vision
CN117556714B (en) Preheating pipeline temperature data anomaly analysis method for aluminum metal smelting
CN111724358A (en) Concrete quality detection method and system based on image and convolutional neural network
CN116091504A (en) Connecting pipe connector quality detection method based on image processing
CN116523913B (en) Intelligent detection method for quality of screw rod
CN115049670A (en) Tooth profile defect detection method based on gear
CN115147418A (en) Compression training method and device for defect detection model
CN117152127B (en) PTFE finished product appearance defect detection method based on machine vision
CN116416246B (en) Machine vision-based fully-degradable plastic product film coating effect evaluation method
CN115841491B (en) Quality detection method for porous metal material
CN111192261A (en) Method for identifying lens defect types
CN108830281B (en) Repeated image matching method based on local change detection and spatial weighting
CN117109909B (en) Detection and early warning method and system for mechanical sealing performance of large-shaft-diameter semi-split type machine
CN111931651B (en) Visual inspection image processing system and processing method thereof
CN117808796B (en) Gear surface damage detection method based on computer vision
CN117491422B (en) Method for detecting high heat-conducting property of aluminum alloy material
CN117455801A (en) Visual detection method for production quality of anti-slip gasket

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant