CN114758125A - Gear surface defect detection method and system based on deep learning


Info

Publication number
CN114758125A
Authority
CN
China
Prior art keywords
defect
image
pixel
segmentation
defect segmentation
Prior art date
Legal status
Granted
Application number
CN202210346765.5A
Other languages
Chinese (zh)
Other versions
CN114758125B (en)
Inventor
陈小兰
颜小英
Current Assignee
Shanghai Kezhi Electrical Automation Co ltd
Original Assignee
Jiangsu Qingci Machinery Manufacturing Co ltd
Priority date
Filing date
Publication date
Application filed by Jiangsu Qingci Machinery Manufacturing Co ltd
Priority to CN202210346765.5A
Publication of CN114758125A
Application granted
Publication of CN114758125B
Legal status: Active
Anticipated expiration

Classifications

    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N3/045 Combinations of networks
    • G06N3/088 Non-supervised learning, e.g. competitive learning
    • G06T7/0004 Industrial image inspection
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30164 Workpiece; Machine component
    • Y02P90/30 Computing systems specially adapted for manufacturing


Abstract

The invention relates to the technical field of mechanical part inspection, and in particular to a gear surface defect detection method and system based on deep learning. The method comprises: analyzing a gear tooth region-of-interest image with a first defect segmentation network to obtain gear defect information. The label images of the first defect segmentation network's training set are generated as follows: change the original defect region for a gear tooth region-of-interest image in the training set and mark the changed pixels; obtain a first defect image from the gear tooth region-of-interest image according to the changed defect region; filter the positions corresponding to the changed pixels in the first defect image; segment the defect region of the filtered first defect image, compare it with the first defect segmentation image, and determine a confidence change value for each pixel; and generate the label image of the first defect segmentation network from the pixel confidence change values. The invention improves the detection accuracy of gear surface defects.

Description

Gear surface defect detection method and system based on deep learning
Technical Field
The invention relates to the technical field of artificial intelligence and mechanical part detection, in particular to a gear surface defect detection method and system based on deep learning.
Background
Gears are critical mechanical parts: their quality directly affects the service life of a machine and bears on production safety. Existing gear surface defect inspection generally relies on machine vision. Although machine vision can generalize reasonably well in real production scenes, it often lacks sufficient accuracy and is therefore difficult to apply to high-precision defect detection tasks.
Disclosure of Invention
In order to solve the technical problem, the invention provides a gear surface defect detection method based on deep learning, which comprises the following steps:
analyzing the gear tooth region-of-interest image by using a first defect segmentation network to obtain gear defect information; the generation process of the label image of the first defect segmentation network training set comprises the following steps:
performing defect segmentation on the gear tooth region-of-interest image in the training set to obtain a first defect segmentation image;
changing the original defect region in the first defect segmentation image to obtain a first defect region, and marking the changed pixels;
acquiring a first defect image from the gear tooth region-of-interest image according to the first defect region;
filtering the positions corresponding to the changed pixels in the first defect image to obtain a second defect image;
performing defect segmentation on the second defect image to obtain a second defect segmentation image;
comparing the classes of pixels at the same positions in the first defect segmentation image and the second defect segmentation image to determine a confidence change value for each pixel; and correcting the initial confidence of each pixel according to its confidence change value to generate the label image of the first defect segmentation network.
Further, performing defect segmentation on the gear tooth region-of-interest image in the training set includes: performing defect segmentation on the gear tooth region-of-interest image in the training set with a second defect segmentation network.
Further, the filtering is specifically a mean filtering operation.
Further, correcting the initial confidence of each pixel according to its confidence change value and generating the label image of the first defect segmentation network includes:
correcting the initial confidence of each pixel according to its confidence change value to obtain a confidence map;
and thresholding the confidence map to obtain the label image of the first defect segmentation network.
Further, changing the original defect region in the first defect segmentation image includes: performing a dilation operation on the original defect region in the first defect segmentation image.
Further, changing the original defect region in the first defect segmentation image includes:
performing an erosion operation on the original defect region in the first defect segmentation image.
Further, comparing the classes of pixels at the same positions in the first and second defect segmentation images and determining the confidence change value of each pixel comprises:
if the pixel classes of the first and second defect segmentation images at the position of a changed pixel are the same and are both the non-defect class, the confidence change value is zero;
if the pixel classes of the first and second defect segmentation images at the position of a changed pixel are the same and are both the defect class, the confidence change value is positive;
if the pixel class of the first defect segmentation image at the position of a changed pixel is the non-defect class and the pixel class of the second defect segmentation image at that position is the defect class, the confidence change value is positive;
if the pixel class of the first defect segmentation image at the position of a changed pixel is the defect class and the pixel class of the second defect segmentation image at that position is the non-defect class, the confidence change value is zero;
if the pixel classes of the first and second defect segmentation images at the position of an unchanged pixel differ, the confidence change value is negative; otherwise it is positive.
The invention also provides a gear surface defect detection system based on deep learning, which comprises:
the first defect segmentation network is used for analyzing the gear tooth region-of-interest image to obtain gear defect information;
the generation process of the first defect segmentation network training set label image comprises the following steps: performing defect segmentation on the gear tooth region-of-interest image in the training set to obtain a first defect segmentation image; changing an original defect area in the first defect segmentation image to obtain a first defect area, and marking a change pixel point; acquiring a first defect image from the image of the region of interest of the gear teeth according to the first defect region; filtering the corresponding positions of the changed pixel points in the first defect image to obtain a second defect image; performing defect segmentation on the second defect image to obtain a second defect segmentation image; comparing the categories of the pixels at the same positions of the first defect segmentation image and the second defect segmentation image, and determining the confidence coefficient change value of the pixels; and correcting the initial confidence coefficient of the pixel according to the confidence coefficient change value of the pixel to generate a label image of the first defect segmentation network.
Further, the performing defect segmentation on the gear tooth region-of-interest image in the training set includes: and carrying out defect segmentation on the gear tooth region-of-interest image in the training set by adopting a second defect segmentation network.
Further, the modifying the initial confidence of the pixel according to the confidence change value of the pixel, and generating the label image of the first defect segmentation network includes:
correcting the initial confidence coefficient of the pixel according to the confidence coefficient change value of the pixel to obtain a confidence coefficient graph;
and carrying out threshold processing on the reliability map to obtain a label image of the first defect segmentation network.
The invention has the following beneficial effects:
according to the method, the original defect segmentation image of the gear is compared with the second defect segmentation image obtained by changing the defect area, the confidence coefficient change value is determined, the label image is further obtained, the precision of the label image of the gear tooth region-of-interest image is improved, manual labeling is not needed, a labeled image with higher precision is generated in a self-adaptive manner, and the segmentation precision of the surface defect of the gear is further effectively improved. According to the method, based on the category change of the changed pixel points and the unchanged pixel points, the influence of the neighborhood pixels of the changed pixel points is considered, the confidence coefficient change value is determined, the accuracy of the confidence coefficient graph is improved, and the accuracy of the label image is further improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention and the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is apparent that the drawings in the following description show only some embodiments of the present invention; other drawings can be derived from them by those skilled in the art without creative effort.
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
To further illustrate the technical means adopted by the present invention to achieve the intended objects and their effects, a gear surface defect detection method and system based on deep learning according to the present invention are described in detail below with reference to the accompanying drawings and preferred embodiments. In the following description, different references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The specific scenario addressed by the invention is as follows: a gear surface defect detection task in a part manufacturing setting. The invention detects only wear and impact defects and can be implemented as a module of a defect detection system. Defects caused by wear and by impact look similar and are treated as one class in this application. The gear is placed on an inspection platform; a camera acquires an image of a single tooth of the inspected gear from an oblique top-down viewing angle, with a field of view that covers the single tooth. The camera pose is fixed, and defect detection for every tooth of the gear is achieved by rotating the gear.
The main purpose of the invention is to perform defect semantic segmentation on gear surface images with a high-precision semantic segmentation network. To this end, the invention provides a gear surface defect detection method and system based on deep learning.
The specific scheme of the gear surface defect detection method and system based on deep learning provided by the invention is specifically described below with reference to the accompanying drawings.
Example 1:
referring to fig. 1, a flowchart of the steps of a gear surface defect detection method based on deep learning according to a first embodiment of the present invention is shown. The method comprises the following step: analyzing the gear tooth region-of-interest image with the first defect segmentation network to obtain the gear defect information.
The generation process of the label image of the first defect segmentation network training set comprises the following steps:
(1) Perform defect segmentation on the gear tooth region-of-interest images in the training set to obtain first defect segmentation images.
In this embodiment, defect segmentation is performed with a semantic segmentation network, specifically the second defect segmentation network. The second defect segmentation network uses the same training set as the first defect segmentation network. Single-tooth gear images of different sizes, gear types, and working conditions are collected, and region-of-interest extraction is applied to obtain the corresponding gear tooth region-of-interest images, which form the training set.
Extracting the region of interest from a single-tooth image isolates the subsequent semantic segmentation network from the influence of irrelevant working conditions. The region of interest is extracted as follows: edges are extracted with an edge detection algorithm such as the Sobel or Canny operator; for each column of the image, the edge points with the largest and the smallest vertical pixel coordinate are taken as the outer edge points of the gear tooth; the outer edge points are connected, the enclosed interior is the region of interest, the pixels outside the region of interest in the single-tooth image are set to 0, and a region-of-interest image containing only gear tooth information is obtained.
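For illustration, a minimal sketch of this region-of-interest extraction using OpenCV is given below; the function name, the Canny thresholds, and the column-wise scan are assumptions made for the sketch, not details fixed by this application.

```python
import cv2
import numpy as np

def extract_tooth_roi(tooth_image: np.ndarray) -> np.ndarray:
    """Zero out everything outside the gear-tooth region of a single-tooth image."""
    gray = cv2.cvtColor(tooth_image, cv2.COLOR_BGR2GRAY) if tooth_image.ndim == 3 else tooth_image
    edges = cv2.Canny(gray, 50, 150)              # edge extraction (Sobel would also work)

    mask = np.zeros_like(gray, dtype=np.uint8)
    for col in range(gray.shape[1]):              # scan column by column
        rows = np.flatnonzero(edges[:, col])
        if rows.size == 0:
            continue
        top, bottom = rows.min(), rows.max()      # smallest / largest vertical coordinate
        mask[top:bottom + 1, col] = 1             # interior between the outer edge points

    roi = tooth_image.copy()
    roi[mask == 0] = 0                            # pixels outside the region of interest set to 0
    return roi
```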
After the training set is obtained, the defect pixels of the gear tooth region-of-interest images in the training set are annotated. In the label images of the second defect segmentation network, background pixels have the value 0, non-defect pixels the value 1, and defect pixels the value 2. Several groups of training samples and labels are thus obtained, and the second defect segmentation network is trained. The second defect segmentation network has an encoder-decoder architecture: the encoder extracts features from the input image, and the decoder upsamples the features and outputs a semantic segmentation map; any common semantic segmentation network can be used. The samples serve as inputs and the labels supervise the outputs; the loss of the network output is computed with a cross-entropy loss function to guide the update of the network parameters. After training, the trained second defect segmentation network is obtained.
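A compact PyTorch sketch of this kind of encoder-decoder training with cross-entropy supervision follows; the toy network, the loader, and the hyperparameters are illustrative assumptions and not the actual architecture of this application.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Toy encoder-decoder; any common semantic segmentation network could be substituted."""
    def __init__(self, num_classes: int = 3):      # 0 background, 1 non-defect, 2 defect
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, num_classes, 2, stride=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))       # per-pixel class logits

def train_second_net(model, loader, epochs=20, lr=1e-3):
    """loader yields (image, label) pairs; labels hold the per-pixel values 0/1/2 described above."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()               # cross-entropy supervision
    for _ in range(epochs):
        for image, label in loader:
            optimizer.zero_grad()
            loss = criterion(model(image), label.long())
            loss.backward()
            optimizer.step()
    return model
```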
The gear tooth region-of-interest images in the training set are fed into the trained second defect segmentation network, which outputs the corresponding first defect segmentation images.
(2) Change the original defect region in the first defect segmentation image to obtain a first defect region, and mark the changed pixels. Changing the original defect region in the first defect segmentation image includes performing a dilation operation on the original defect region in the first defect segmentation image.
Specifically, the region of the first defect segmentation image whose pixel class is defect, i.e. the set of pixels with the value 2, is extracted to obtain a first defect binary image representing the original defect region.
A dilation operation, a standard image morphology operation, is applied to the first defect binary image to obtain a second defect binary image, which represents the dilated defect region.
Comparing the second defect binary image with the first defect binary image yields the dilation-added pixel set, i.e. the newly added defect pixels (also called the changed pixel set), and the dilation-retained pixel set, i.e. the original defect pixels (also called the unchanged pixel set).
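A sketch of this step with OpenCV morphology is shown below; the 3×3 structuring element and the function name are assumptions of the sketch, since the application does not fix the kernel.

```python
import cv2
import numpy as np

def dilate_defect_region(first_seg: np.ndarray, kernel_size: int = 3):
    """first_seg holds the per-pixel classes 0 (background), 1 (non-defect), 2 (defect)."""
    first_binary = (first_seg == 2).astype(np.uint8)             # original defect region
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    second_binary = cv2.dilate(first_binary, kernel)             # dilated defect region

    added_mask = (second_binary == 1) & (first_binary == 0)      # dilation-added (changed) pixels
    retained_mask = (second_binary == 1) & (first_binary == 1)   # dilation-retained (unchanged) pixels
    return second_binary, added_mask, retained_mask
```

The dilated image of step (3) below can then be formed by multiplying `second_binary` element-wise with the gear tooth region-of-interest image.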
(3) Obtain a first defect image from the gear tooth region-of-interest image according to the first defect region.
The second defect binary image is multiplied element-wise with the gear tooth region-of-interest image to obtain the first defect image, which may also be called the dilated image.
(4) Filter the positions corresponding to the changed pixels in the first defect image to obtain a second defect image.
Mean filtering with a 3×3 template is applied at the positions of all pixels of the dilation-added pixel set in the dilated image. Unlike ordinary mean filtering, each dilation-added pixel is assigned the mean of the n largest pixel values within its template; preferably, n is 5.
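A sketch of this modified mean filtering at the dilation-added positions follows, assuming a single-channel (grayscale) dilated image and edge padding at the borders; both are assumptions of the sketch.

```python
import numpy as np

def filter_added_pixels(dilated_img: np.ndarray, added_mask: np.ndarray, n: int = 5) -> np.ndarray:
    """Assign each dilation-added pixel the mean of the n largest values in its 3x3 neighborhood."""
    out = dilated_img.astype(np.float32).copy()
    padded = np.pad(dilated_img.astype(np.float32), 1, mode="edge")
    for r, c in zip(*np.nonzero(added_mask)):
        window = padded[r:r + 3, c:c + 3].ravel()   # 3x3 template centered on (r, c)
        out[r, c] = np.sort(window)[-n:].mean()     # mean of the n largest pixel values
    return out
```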
(5) Perform defect segmentation on the second defect image to obtain a second defect segmentation image.
The filtered dilated image is fed into the trained second defect segmentation network, which outputs the corresponding semantic segmentation image, called the second defect segmentation image.
(6) Compare the classes of pixels at the same positions in the first and second defect segmentation images, determine the confidence change value of each pixel, and correct the initial confidence of each pixel according to its confidence change value to generate the label image of the first defect segmentation network. If the pixel classes of the first and second defect segmentation images at the position of a changed pixel are the same and are both the non-defect class, the confidence change value is zero; if they are the same and are both the defect class, the confidence change value is positive; if the first defect segmentation image is non-defect and the second is defect at that position, the confidence change value is positive; if the first is defect and the second is non-defect at that position, the confidence change value is zero. If the pixel classes of the first and second defect segmentation images at the position of an unchanged pixel differ, the confidence change value is negative; otherwise it is positive.
Specifically, the changed pixels were marked above, i.e. the dilation-added pixels, along with the dilation-retained pixels (the unchanged pixels). The second defect segmentation image is compared with the first defect segmentation image at the marked positions; each pixel falls into one of the following cases:
For a dilation-added pixel position, if the first defect segmentation image is non-defect and the second defect segmentation image is defect, the defect-class confidence of that point is increased by a first value, preferably 0.2; if the two segmentations agree and are both non-defect, the defect-class confidence of that point is unchanged.
For a dilation-retained pixel position, if the first defect segmentation image is defect and the second defect segmentation image is non-defect, the defect-class confidence of that point is decreased by a second value, preferably 0.4; if the two segmentations agree and are both defect, the defect-class confidence of that point is increased by a third value, preferably 0.2.
Because the pixel value at a dilation-added position is influenced by several dilation-retained pixels, a confidence propagation parameter β is defined, preferably β = 0.2. For a dilation-added pixel position v whose confidence was increased, the propagation parameter is multiplied by the confidence increment, and the product is added to the confidence of the dilation-retained pixels that participated in the assignment at v, i.e. the neighborhood dilation-retained pixels of v. In this embodiment, it is added at the 5 neighborhood positions of v with the largest pixel values.
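The confidence update for the dilation branch could be sketched as below, using the preferred values given above (0.2, 0.4, 0.2, β = 0.2); the array and function names are assumptions of the sketch, not part of the application.

```python
import numpy as np

def dilation_confidence_delta(first_seg, second_seg, added_mask, retained_mask, dilated_img,
                              d1=0.2, d2=0.4, d3=0.2, beta=0.2):
    """Per-pixel confidence change for the dilation branch, with propagation to retained neighbors."""
    delta = np.zeros(first_seg.shape, np.float32)
    first_def, second_def = (first_seg == 2), (second_seg == 2)

    # dilation-added (changed) pixels
    delta[added_mask & ~first_def & second_def] += d1      # non-defect -> defect: increase
    # both non-defect: unchanged

    # dilation-retained (unchanged) pixels
    delta[retained_mask & first_def & ~second_def] -= d2   # defect -> non-defect: decrease
    delta[retained_mask & first_def & second_def] += d3    # both defect: increase

    # propagate a share of each increased added pixel to its 5 brightest retained neighbors
    h, w = delta.shape
    for r, c in zip(*np.nonzero(added_mask & (delta > 0))):
        values, coords = [], []
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < h and 0 <= cc < w and retained_mask[rr, cc]:
                    values.append(dilated_img[rr, cc])
                    coords.append((rr, cc))
        for i in np.argsort(values)[-5:]:                   # 5 largest neighborhood pixel values
            delta[coords[i]] += beta * delta[r, c]
    return delta
```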
Note that the confidence range is set to [0, 1]: when the confidence of a pixel drops below 0 it is set to 0, and when it rises above 1 it is set to 1.
Initial confidences are assigned to the pixels according to the first defect segmentation image; preferably, defect-class pixels receive an initial confidence of 0.5 and non-defect pixels 0. The final confidence of each pixel is obtained by combining its initial confidence with its confidence change value. Once the final confidences of all pixels are determined, the final confidence map is threshold-segmented, and the segmented result is used as the label image of the first defect segmentation network, which is likewise supervised with a cross-entropy loss function. In addition, to improve efficiency, the first defect segmentation network may be obtained by retraining starting from the trained second defect segmentation network.
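A sketch of how the confidence changes can be turned into a label image follows; the threshold of 0.5 and the way the background class is carried over from the first defect segmentation image are assumptions of the sketch, since the application does not fix them.

```python
import numpy as np

def build_label_image(first_seg, confidence_delta, threshold: float = 0.5):
    """Combine initial confidences with the confidence changes, then threshold into a label image."""
    init_conf = np.where(first_seg == 2, 0.5, 0.0).astype(np.float32)   # defect 0.5, others 0
    final_conf = np.clip(init_conf + confidence_delta, 0.0, 1.0)        # keep confidences in [0, 1]

    label = np.where(first_seg == 0, 0, 1).astype(np.uint8)             # carry over background / non-defect
    label[final_conf >= threshold] = 2                                  # defect class after thresholding
    return label
```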
After training, the trained first defect semantic segmentation network is obtained. In use, a gear tooth region-of-interest image is input into the trained first defect segmentation network and a defect segmentation image is output; no extra computing resources are consumed, yet the semantic segmentation result is more accurate than the existing semantic segmentation result, i.e. the first defect segmentation image.
Example 2:
this embodiment provides a gear surface defect detection method based on deep learning. In this embodiment, changing the original defect region in the first defect segmentation image includes: performing an erosion operation on the original defect region in the first defect segmentation image to obtain an eroded defect region.
Specifically, the gear surface defect detection method based on deep learning comprises the following step: analyzing the gear tooth region-of-interest image with the first defect segmentation network to obtain the gear defect information.
The generation process of the label image of the first defect segmentation network training set comprises the following steps:
(1) Perform defect segmentation on the gear tooth region-of-interest images in the training set to obtain first defect segmentation images.
This embodiment performs defect segmentation with a semantic segmentation network, specifically the second defect segmentation network. The second defect segmentation network uses the same training set as the first defect segmentation network. Single-tooth gear images of different sizes, gear types, and working conditions are collected, and region-of-interest extraction is applied to obtain the corresponding gear tooth region-of-interest images, which form the training set.
Extracting the region of interest from a single-tooth image isolates the subsequent semantic segmentation network from the influence of irrelevant working conditions. The region of interest is extracted as follows: edges are extracted with an edge detection algorithm such as the Sobel or Canny operator; for each column of the image, the edge points with the largest and the smallest vertical pixel coordinate are taken as the outer edge points of the gear tooth; the outer edge points are connected, the enclosed interior is the region of interest, the pixels outside the region of interest in the single-tooth image are set to 0, and a region-of-interest image containing only gear tooth information is obtained.
After the training set is obtained, the defect pixels of the gear tooth region-of-interest images in the training set are annotated. In the label images of the second defect segmentation network, background pixels have the value 0, non-defect pixels the value 1, and defect pixels the value 2. Several groups of training samples and labels are thus obtained, and the second defect segmentation network is trained.
The second defect segmentation network has an encoder-decoder architecture: the encoder extracts features from the input image, and the decoder upsamples the features and outputs a semantic segmentation map; any common semantic segmentation network can be used. The samples serve as inputs and the labels supervise the outputs; the loss of the network output is computed with a cross-entropy loss function to guide the update of the network parameters. After training, the trained second defect segmentation network is obtained.
The gear tooth region-of-interest images in the training set are fed into the trained second defect segmentation network, which outputs the corresponding first defect segmentation images.
(2) Change the original defect region in the first defect segmentation image to obtain a first defect region, and mark the changed pixels. Changing the original defect region in the first defect segmentation image includes performing an erosion operation on the original defect region in the first defect segmentation image.
Specifically, the region of the first defect segmentation image whose pixel class is defect, i.e. the set of pixels with the value 2, is extracted to obtain a first defect binary image representing the original defect region.
An erosion operation, a standard image morphology operation, is applied to the first defect binary image to obtain a second defect binary image, which represents the eroded defect region.
Comparing the second defect binary image with the first defect binary image yields the erosion-removed pixel set, i.e. the defect pixels eroded away (also called the changed pixel set), and the erosion-retained pixel set, i.e. the remaining original defect pixels (the unchanged pixel set).
(3) Obtain a first defect image from the gear tooth region-of-interest image according to the first defect region.
The second defect binary image is multiplied element-wise with the gear tooth region-of-interest image to obtain the first defect image, which may also be called the eroded image.
(4) Filter the positions corresponding to the changed pixels in the first defect image to obtain a second defect image.
Mean filtering with a 3×3 template is applied at the positions of all pixels of the erosion-removed pixel set in the eroded image. Unlike ordinary mean filtering, each erosion-removed pixel is assigned the mean of the n smallest pixel values within its template; preferably, n is 5.
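The filtering of the erosion branch mirrors the dilation branch; a sketch assuming a grayscale eroded image and edge padding is given below (the helper name is an assumption).

```python
import numpy as np

def filter_removed_pixels(eroded_img: np.ndarray, removed_mask: np.ndarray, n: int = 5) -> np.ndarray:
    """Assign each erosion-removed pixel the mean of the n smallest values in its 3x3 neighborhood."""
    out = eroded_img.astype(np.float32).copy()
    padded = np.pad(eroded_img.astype(np.float32), 1, mode="edge")
    for r, c in zip(*np.nonzero(removed_mask)):
        window = padded[r:r + 3, c:c + 3].ravel()   # 3x3 template centered on (r, c)
        out[r, c] = np.sort(window)[:n].mean()      # mean of the n smallest pixel values
    return out
```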
(5) Perform defect segmentation on the second defect image to obtain a second defect segmentation image.
The filtered eroded image is fed into the trained second defect segmentation network, which outputs the corresponding semantic segmentation image, called the second defect segmentation image.
(6) Compare the classes of pixels at the same positions in the first and second defect segmentation images, determine the confidence change value of each pixel, and correct the initial confidence of each pixel according to its confidence change value to generate the label image of the first defect segmentation network. If the pixel classes of the first and second defect segmentation images at the position of a changed pixel are the same and are both the non-defect class, the confidence change value is zero; if they are the same and are both the defect class, the confidence change value is positive; if the first defect segmentation image is non-defect and the second is defect at that position, the confidence change value is positive; if the first is defect and the second is non-defect at that position, the confidence change value is zero. If the pixel classes of the first and second defect segmentation images at the position of an unchanged pixel differ, the confidence change value is negative; otherwise it is positive.
The changed pixels were marked above, i.e. the erosion-removed pixels, along with the erosion-retained pixels (the unchanged pixels). The second defect segmentation image is compared with the first defect segmentation image at the marked positions; each pixel falls into one of the following cases:
For an erosion-removed pixel position, if the first defect segmentation image is defect and the second defect segmentation image is non-defect, the defect-class confidence of that point is unchanged; if the two segmentations agree and are both defect, the defect-class confidence of that point is increased by a fourth value, preferably 0.4.
For an erosion-retained pixel position, if the first defect segmentation image is defect and the second defect segmentation image is non-defect, the defect-class confidence of that point is decreased by a fifth value, preferably 0.4; if the two segmentations agree and are both defect, the defect-class confidence of that point is increased by a sixth value, preferably 0.2.
Because the pixel value at an erosion-removed position is influenced by several erosion-retained pixels, the confidence propagation parameter β is used here as well, preferably β = 0.2. For an erosion-removed pixel position q whose confidence was increased, the propagation parameter is multiplied by the confidence increment, and the product is added to the confidence of the erosion-retained pixels that participated in the assignment at q, i.e. the neighborhood erosion-retained pixels of q. In this embodiment, it is added at the 5 neighborhood positions of q with the smallest pixel values.
Note that the confidence range is set to [0, 1]: when the confidence of a pixel drops below 0 it is set to 0, and when it rises above 1 it is set to 1.
Initial confidences are assigned to the pixels according to the first defect segmentation image; preferably, defect-class pixels receive an initial confidence of 0.5 and non-defect pixels 0. The final confidence of each pixel is obtained by combining its initial confidence with its confidence change value. Once the final confidences of all pixels are determined, the final confidence map is threshold-segmented, and the segmented result is used as the label image of the first defect segmentation network, which is likewise supervised with a cross-entropy loss function.
After training, the trained first defect semantic segmentation network is obtained. In use, an image to be processed is input into the trained first defect segmentation network and a defect segmentation image is output; no extra computing resources are consumed, yet the semantic segmentation result is more accurate than the existing semantic segmentation result, i.e. the first defect segmentation image.
Example 3:
this embodiment provides a gear surface defect detection method based on deep learning. In this embodiment, changing the original defect region in the first defect segmentation image includes: performing a dilation operation on the original defect region in the first defect segmentation image to obtain a dilated defect region; and performing an erosion operation on the original defect region in the first defect segmentation image to obtain an eroded defect region.
Specifically, the gear surface defect detection method based on deep learning comprises the following step: analyzing the gear tooth region-of-interest image with the first defect segmentation network to obtain the gear defect information.
The generation process of the label image of the first defect segmentation network training set comprises the following steps:
(1) Perform defect segmentation on the gear tooth region-of-interest images in the training set to obtain first defect segmentation images.
This embodiment performs defect segmentation with a semantic segmentation network, specifically the second defect segmentation network. The second defect segmentation network uses the same training set as the first defect segmentation network. Single-tooth gear images of different sizes, gear types, and working conditions are collected, and region-of-interest extraction is applied to obtain the corresponding gear tooth region-of-interest images, which form the training set.
Extracting the region of interest from a single-tooth image isolates the subsequent semantic segmentation network from the influence of irrelevant working conditions. The region of interest is extracted as follows: edges are extracted with an edge detection algorithm such as the Sobel or Canny operator; for each column of the image, the edge points with the largest and the smallest vertical pixel coordinate are taken as the outer edge points of the gear tooth; the outer edge points are connected, the enclosed interior is the region of interest, the pixels outside the region of interest in the single-tooth image are set to 0, and a region-of-interest image containing only gear tooth information is obtained.
After the training set is obtained, the defect pixels of the gear tooth region-of-interest images in the training set are annotated. In the label images of the second defect segmentation network, background pixels have the value 0, non-defect pixels the value 1, and defect pixels the value 2. Several groups of training samples and labels are thus obtained, and the second defect segmentation network is trained.
The second defect segmentation network has an encoder-decoder architecture: the encoder extracts features from the input image, and the decoder upsamples the features and outputs a semantic segmentation map; any common semantic segmentation network can be used. The samples serve as inputs and the labels supervise the outputs; the loss of the network output is computed with a cross-entropy loss function to guide the update of the network parameters. After training, the trained second defect segmentation network is obtained.
The gear tooth region-of-interest images in the training set are fed into the trained second defect segmentation network, which outputs the corresponding first defect segmentation images.
(2) Change the original defect region in the first defect segmentation image to obtain a first defect region, and mark the changed pixels. Changing the original defect region in the first defect segmentation image includes: performing a dilation operation on the original defect region in the first defect segmentation image to obtain a dilated defect region; and performing an erosion operation on the original defect region in the first defect segmentation image to obtain an eroded defect region. The first defect region includes the dilated defect region and the eroded defect region.
Specifically, the region of the first defect segmentation image whose pixel class is defect, i.e. the set of pixels with the value 2, is extracted to obtain a first defect binary image representing the original defect region.
A dilation operation and an erosion operation, both standard image morphology operations, are applied separately to the first defect binary image to obtain a second defect binary image and a third defect binary image; the second defect binary image represents the dilated defect region and the third defect binary image represents the eroded defect region.
Comparing the second defect binary image with the first defect binary image yields the dilation-added pixel set (changed pixels), i.e. the newly added defect pixels, and the dilation-retained pixel set (unchanged pixels), i.e. the original defect pixels. Comparing the third defect binary image with the first defect binary image yields the erosion-removed pixel set (changed pixels), i.e. the defect pixels eroded away, and the erosion-retained pixel set (unchanged pixels), i.e. the remaining original defect pixels.
(3) Obtain a first defect image from the gear tooth region-of-interest image according to the first defect region. In this embodiment the first defect image includes a dilated image and an eroded image.
The second defect binary image is multiplied element-wise with the gear tooth region-of-interest image to obtain the dilated image.
The third defect binary image is multiplied element-wise with the gear tooth region-of-interest image to obtain the eroded image.
(4) Filter the positions corresponding to the changed pixels in the first defect image to obtain a second defect image.
In the dilated image, mean filtering with a 3×3 template is applied at the positions of all pixels of the dilation-added pixel set; unlike ordinary mean filtering, each dilation-added pixel is assigned the mean of the m largest pixel values within its template, preferably with m equal to 5.
In the eroded image, mean filtering with a 3×3 template is applied at the positions of all pixels of the erosion-removed pixel set; unlike ordinary mean filtering, each erosion-removed pixel is assigned the mean of the n smallest pixel values within its template, preferably with n equal to 5.
The second defect image includes the filtered dilated image and the filtered eroded image.
(5) Perform defect segmentation on the second defect image to obtain a second defect segmentation image.
The second defect image is fed into the trained second defect segmentation network, which outputs the corresponding semantic segmentation image, i.e. the second defect segmentation image. Specifically, the filtered dilated image and the filtered eroded image are input into the second defect segmentation network separately, yielding a second defect segmentation image corresponding to dilation and a second defect segmentation image corresponding to erosion.
(6) Compare the classes of pixels at the same positions in the first and second defect segmentation images, determine the confidence change value of each pixel, and correct the initial confidence of each pixel according to its confidence change value to generate the label image of the first defect segmentation network. If the pixel classes of the first and second defect segmentation images at the position of a changed pixel are the same and are both the non-defect class, the confidence change value is zero; if they are the same and are both the defect class, the confidence change value is positive; if the first defect segmentation image is non-defect and the second is defect at that position, the confidence change value is positive; if the first is defect and the second is non-defect at that position, the confidence change value is zero. If the pixel classes of the first and second defect segmentation images at the position of an unchanged pixel differ, the confidence change value is negative; otherwise it is positive.
The dilation-added, dilation-retained, erosion-removed, and erosion-retained pixels were marked above. Each second defect segmentation image is compared with the first defect segmentation image at the marked positions; each pixel falls into one of the following cases:
(a) The second defect segmentation image corresponding to dilation is compared with the first defect segmentation image.
For a dilation-added pixel position, if the first defect segmentation image is non-defect and the second defect segmentation image is defect, the defect-class confidence of that point is increased by a first value, preferably 0.2; if the two segmentations agree and are both non-defect, the defect-class confidence of that point is unchanged.
For a dilation-retained pixel position, if the first defect segmentation image is defect and the second defect segmentation image is non-defect, the defect-class confidence of that point is decreased by a second value, preferably 0.4; if the two segmentations agree and are both defect, the defect-class confidence of that point is increased by a third value, preferably 0.2.
Because the pixel value at a dilation-added position is influenced by several dilation-retained pixels, a confidence propagation parameter β is defined, preferably β = 0.2. For a dilation-added pixel position v whose confidence was increased, the propagation parameter is multiplied by the confidence increment, and the product is added to the confidence of the dilation-retained pixels that participated in the assignment at v, i.e. the neighborhood dilation-retained pixels of v. In this embodiment, it is added at the 5 neighborhood positions of v with the largest pixel values.
(b) The second defect segmentation image corresponding to erosion is compared with the first defect segmentation image.
For an erosion-removed pixel position, if the first defect segmentation image is defect and the second defect segmentation image is non-defect, the defect-class confidence of that point is unchanged; if the two segmentations agree and are both defect, the defect-class confidence of that point is increased by a fourth value, preferably 0.4.
For an erosion-retained pixel position, if the first defect segmentation image is defect and the second defect segmentation image is non-defect, the defect-class confidence of that point is decreased by a fifth value, preferably 0.4; if the two segmentations agree and are both defect, the defect-class confidence of that point is increased by a sixth value, preferably 0.2.
Because the pixel value at an erosion-removed position is influenced by several erosion-retained pixels, the confidence propagation parameter β is used here as well, preferably β = 0.2. For an erosion-removed pixel position q whose confidence was increased, the propagation parameter is multiplied by the confidence increment, and the product is added to the confidence of the erosion-retained pixels that participated in the assignment at q, i.e. the neighborhood erosion-retained pixels of q. In this embodiment, it is added at the 5 neighborhood positions of q with the smallest pixel values.
Note that the confidence range is set to [0, 1]: when the confidence of a pixel drops below 0 it is set to 0, and when it rises above 1 it is set to 1.
Initial confidences are assigned to the pixels according to the first defect segmentation image; preferably, defect-class pixels receive an initial confidence of 0.5 and non-defect pixels 0. The final confidence of each pixel is obtained by combining its initial confidence with its confidence change value. Once the final confidences of all pixels are determined, the final confidence map is threshold-segmented, and the segmented result is used as the label image of the first defect segmentation network, which is likewise supervised with a cross-entropy loss function. In addition, to improve efficiency, the first defect segmentation network may be obtained by retraining starting from the trained second defect segmentation network.
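One way to read this embodiment is that the confidence changes from the dilation comparison and from the erosion comparison are accumulated on the same confidence map before thresholding; the sketch below assumes this additive combination and reuses the illustrative conventions of the earlier sketches, so the function and argument names are assumptions.

```python
import numpy as np

def build_combined_label(first_seg, delta_dilation, delta_erosion, threshold: float = 0.5):
    """Sum the confidence changes of both branches, then threshold as in the single-branch case."""
    init_conf = np.where(first_seg == 2, 0.5, 0.0).astype(np.float32)
    final_conf = np.clip(init_conf + delta_dilation + delta_erosion, 0.0, 1.0)

    label = np.where(first_seg == 0, 0, 1).astype(np.uint8)
    label[final_conf >= threshold] = 2
    return label
```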
After training, the trained first defect semantic segmentation network is obtained. In use, an image to be processed is input into the trained first defect segmentation network and a defect segmentation image is output; no extra computing resources are consumed, yet the semantic segmentation result is more accurate than the existing semantic segmentation result, i.e. the first defect segmentation image.
Example 4:
this embodiment provides a gear surface defect detection system based on deep learning. The system comprises: a first defect segmentation network, used to analyze the gear tooth region-of-interest image and obtain gear defect information. The generation process of the label images of the first defect segmentation network's training set comprises: performing defect segmentation on the gear tooth region-of-interest image in the training set to obtain a first defect segmentation image; changing the original defect region in the first defect segmentation image to obtain a first defect region, and marking the changed pixels; obtaining a first defect image from the gear tooth region-of-interest image according to the first defect region; filtering the positions corresponding to the changed pixels in the first defect image to obtain a second defect image; inputting the second defect image into the second defect segmentation network to obtain a second defect segmentation image; comparing the classes of pixels at the same positions in the first and second defect segmentation images to determine a confidence change value for each pixel; and correcting the initial confidence of each pixel according to its confidence change value to generate the label image of the first defect segmentation network.
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in this specification are described in a progressive manner; for identical or similar parts, the embodiments may be referred to one another, and each embodiment focuses on its differences from the others.
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention shall fall within its scope of protection.

Claims (10)

1. A gear surface defect detection method based on deep learning is characterized by comprising the following steps:
analyzing the gear tooth region-of-interest image by using a first defect segmentation network to obtain gear defect information; the generation process of the label image of the first defect segmentation network training set comprises the following steps:
performing defect segmentation on the gear tooth region-of-interest image in the training set to obtain a first defect segmentation image;
changing an original defect region in the first defect segmentation image to obtain a first defect region, and marking the changed pixel points;
acquiring a first defect image from the gear tooth region-of-interest image according to the first defect region;
filtering the corresponding positions of the changed pixel points in the first defect image to obtain a second defect image;
performing defect segmentation on the second defect image to obtain a second defect segmentation image;
comparing the categories of the pixels at the same positions of the first defect segmentation image and the second defect segmentation image, and determining the confidence coefficient change value of the pixels; and correcting the initial confidence coefficient of the pixel according to the confidence coefficient change value of the pixel to generate a label image of the first defect segmentation network.
2. The method of claim 1, wherein the defect segmentation of the gear tooth region-of-interest image in the training set comprises: performing defect segmentation on the gear tooth region-of-interest image in the training set by using a second defect segmentation network.
3. The method according to claim 1, wherein the filtering is in particular a mean filtering operation.
4. The method of claim 1, wherein the correcting the initial confidence coefficient of the pixel according to the confidence coefficient change value of the pixel to generate the label image of the first defect segmentation network comprises:
correcting the initial confidence coefficient of the pixel according to the confidence coefficient change value of the pixel to obtain a confidence coefficient map;
and performing threshold processing on the confidence coefficient map to obtain the label image of the first defect segmentation network.
5. The method of claim 1, wherein said altering the original defect region in the first defect segmentation image comprises:
performing a dilation operation on the original defect region in the first defect segmentation image.
6. The method of claim 1 or 5, wherein said altering the original defect region in the first defect segmentation image comprises:
performing an erosion operation on the original defect region in the first defect segmentation image.
7. The method of claim 1, wherein comparing the class of co-located pixels of the first defect segmentation image and the second defect segmentation image, and determining the confidence change value for the pixel comprises:
if the pixel categories of the first defect segmentation image and the second defect segmentation image at the corresponding positions of the changed pixel points are the same and are non-defect categories, the confidence coefficient change value is zero;
if the pixel categories of the first defect segmentation image and the second defect segmentation image at the corresponding positions of the changed pixel points are the same and are defect categories, the confidence coefficient change value is a positive value;
if the pixel category of the first defect segmentation image at the corresponding position of the change pixel point is a non-defect category, and the pixel category of the second defect segmentation image at the corresponding position of the change pixel point is a defect category, the confidence coefficient change value is a positive value;
if the pixel category of the first defect segmentation image at the corresponding position of the change pixel point is a defect category, and the pixel category of the second defect segmentation image at the corresponding position of the change pixel point is a non-defect category, the confidence coefficient change value is zero;
if the pixel categories of the first defect segmentation image and the second defect segmentation image at the corresponding positions of the unchanged pixel points are different, the confidence coefficient change value is a negative value; otherwise, the confidence coefficient change value is a positive value.
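For illustration only (the following sketch is not part of the claims), the comparison rules above can be written as a single NumPy routine; the increment magnitudes POS and NEG and the helper name compare_and_score are assumed placeholders, not values recited in the claim.

```python
import numpy as np

POS, NEG = 0.1, 0.1  # assumed magnitudes of the positive / negative change values

def compare_and_score(first_seg, second_seg, changed_mask):
    """first_seg, second_seg: binary masks (1 = defect); changed_mask: True at changed pixel points."""
    delta = np.zeros(first_seg.shape, dtype=np.float32)
    a, b = first_seg == 1, second_seg == 1
    # changed pixel points
    delta[changed_mask & a & b] += POS        # both defect                      -> positive value
    delta[changed_mask & ~a & b] += POS       # first non-defect, second defect  -> positive value
    # both non-defect, or first defect / second non-defect -> change value of zero
    # unchanged pixel points
    unchanged = ~changed_mask
    delta[unchanged & (a ^ b)] -= NEG         # categories differ -> negative value
    delta[unchanged & ~(a ^ b)] += POS        # categories agree  -> positive value
    return delta
```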
8. A gear surface defect detection system based on deep learning, the system comprising:
the first defect segmentation network is used for analyzing the gear tooth region-of-interest image to obtain gear defect information;
the generation process of the label images of the training set of the first defect segmentation network comprises the following steps: performing defect segmentation on the gear tooth region-of-interest image in the training set to obtain a first defect segmentation image; changing an original defect region in the first defect segmentation image to obtain a first defect region, and marking the changed pixel points; acquiring a first defect image from the gear tooth region-of-interest image according to the first defect region; filtering the corresponding positions of the changed pixel points in the first defect image to obtain a second defect image; performing defect segmentation on the second defect image to obtain a second defect segmentation image; comparing the categories of the pixels at the same positions of the first defect segmentation image and the second defect segmentation image, and determining the confidence coefficient change value of the pixels; and correcting the initial confidence coefficient of the pixels according to the confidence coefficient change values to generate the label image of the first defect segmentation network.
9. The system of claim 8, wherein the defect segmentation of the gear tooth region-of-interest image in the training set comprises: and performing defect segmentation on the gear tooth region-of-interest image in the training set by adopting a second defect segmentation network.
10. The system of claim 8, wherein the correcting the initial confidence coefficient of the pixel according to the confidence coefficient change value of the pixel to generate the label image of the first defect segmentation network comprises:
correcting the initial confidence coefficient of the pixel according to the confidence coefficient change value of the pixel to obtain a confidence coefficient map;
and performing threshold processing on the confidence coefficient map to obtain the label image of the first defect segmentation network.
CN202210346765.5A 2022-03-31 2022-03-31 Gear surface defect detection method and system based on deep learning Active CN114758125B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210346765.5A CN114758125B (en) 2022-03-31 2022-03-31 Gear surface defect detection method and system based on deep learning

Publications (2)

Publication Number Publication Date
CN114758125A true CN114758125A (en) 2022-07-15
CN114758125B CN114758125B (en) 2023-04-14

Family

ID=82328829

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210346765.5A Active CN114758125B (en) 2022-03-31 2022-03-31 Gear surface defect detection method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN114758125B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210166374A1 (en) * 2018-07-20 2021-06-03 Kabushiki Kaisha N-Tech Construction method, inspection method, and program for image data with label, and construction device and inspection device for image data with label
CN110880176A (en) * 2019-11-19 2020-03-13 浙江大学 Semi-supervised industrial image defect segmentation method based on countermeasure generation network
US20220020155A1 (en) * 2020-07-16 2022-01-20 Korea Advanced Institute Of Science And Technology Image segmentation method using neural network based on mumford-shah function and apparatus therefor
CN112381759A (en) * 2020-10-10 2021-02-19 华南理工大学 Monocrystalline silicon solar wafer defect detection method based on optical flow method and confidence coefficient method
CN112819840A (en) * 2021-02-24 2021-05-18 北京航空航天大学 High-precision image instance segmentation method integrating deep learning and traditional processing
CN114022406A (en) * 2021-09-15 2022-02-08 济南国科医工科技发展有限公司 Image segmentation method, system and terminal for semi-supervised learning
CN113902890A (en) * 2021-10-14 2022-01-07 宁夏大学 Self-supervision data enhancement method, system and equipment for visual concept detection
CN114049309A (en) * 2021-10-26 2022-02-15 广州大学 Image defect detection method and device based on semi-supervised network and storage medium
CN114066867A (en) * 2021-11-23 2022-02-18 沈阳建筑大学 Deep learning-based crack propagation trace missing region segmentation method
CN114240882A (en) * 2021-12-16 2022-03-25 深圳市商汤科技有限公司 Defect detection method and device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
RABIA ALI, MUHAMMAD UMAR KARIM KHAN, CHONG MIN KYUNG: "Self-Supervised Representation Learning for Visual Anomaly Detection", arXiv *
LI Zhenxing et al.: "Workpiece surface micro-defect detection algorithm based on image pixel-level segmentation and label mapping", Modular Machine Tool & Automatic Manufacturing Technique *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115035107A (en) * 2022-08-10 2022-09-09 山东正阳机械股份有限公司 Axle gear working error detection method based on image processing
CN115035107B (en) * 2022-08-10 2022-11-08 山东正阳机械股份有限公司 Axle gear working error detection method based on image processing
CN117522784A (en) * 2023-10-23 2024-02-06 北京新光凯乐汽车冷成型件股份有限公司 Gear part image detection method and system based on tooth distance segmentation compensation

Also Published As

Publication number Publication date
CN114758125B (en) 2023-04-14

Similar Documents

Publication Publication Date Title
CN115082683B (en) Injection molding defect detection method based on image processing
CN114758125B (en) Gear surface defect detection method and system based on deep learning
CN110648310B (en) Weak supervision casting defect identification method based on attention mechanism
CN114240939B (en) Method, system, equipment and medium for detecting appearance defects of mainboard components
CN109781737B (en) Detection method and detection system for surface defects of hose
CN110969620A (en) Method and device for detecting magnetic shoe ripple defects
CN113177924A (en) Industrial production line product flaw detection method
CN113658131A (en) Tour type ring spinning broken yarn detection method based on machine vision
CN113888536B (en) Printed matter double image detection method and system based on computer vision
CN113642576B (en) Method and device for generating training image set in target detection and semantic segmentation tasks
CN109740553B (en) Image semantic segmentation data screening method and system based on recognition
CN113256624A (en) Continuous casting round billet defect detection method and device, electronic equipment and readable storage medium
CN109166092A (en) A kind of image defect detection method and system
CN114187289A (en) Plastic product shrinkage pit detection method and system based on computer vision
CN112365478A (en) Motor commutator surface defect detection model based on semantic segmentation
CN112435235A (en) Seed cotton impurity content detection method based on image analysis
CN113487563B (en) EL image-based self-adaptive detection method for hidden cracks of photovoltaic module
CN111612803B (en) Vehicle image semantic segmentation method based on image definition
CN112669269A (en) Pipeline defect classification and classification method and system based on image recognition
CN115375952B (en) Chip glue layer defect classification method
CN116433978A (en) Automatic generation and automatic labeling method and device for high-quality flaw image
CN115082449A (en) Electronic component defect detection method
CN114862786A (en) Retinex image enhancement and Ostu threshold segmentation based isolated zone detection method and system
CN112651936A (en) Steel plate surface defect image segmentation method and system based on image local entropy
CN116597441B (en) Algae cell statistics method and system based on deep learning and image pattern recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
Effective date of registration: 20230327
Address after: Room 301, Building 3, No. 138, Xinjun Ring Road, Minhang District, Shanghai, 200000
Applicant after: SHANGHAI KEZHI ELECTRICAL AUTOMATION CO.,LTD.
Address before: No. 99, Development Avenue, Haimen Port New District, Haimen City, Nantong City, Jiangsu Province
Applicant before: Jiangsu Qingci Machinery Manufacturing Co.,Ltd.
GR01 Patent grant