CN116612106A - Method for detecting surface defects of an optical element based on the YOLOX algorithm

Info

Publication number: CN116612106A
Application number: CN202310660358.6A
Authority: CN (China)
Prior art keywords: image, YOLOX, optical element, network, module
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Inventors: He Sijin (何思瑾), Wang Yang (王洋), Meng Fanyue (孟凡越)
Current assignee: Harbin University of Science and Technology
Original assignee: Harbin University of Science and Technology
Application filed by Harbin University of Science and Technology
Priority to CN202310660358.6A
Publication of CN116612106A

Classifications

    • G06T 7/0004 Industrial image inspection
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G06T 5/30 Erosion or dilatation, e.g. thinning
    • G06T 5/40 Image enhancement or restoration using histogram techniques
    • G06T 5/70 Denoising; Smoothing
    • G06T 7/90 Determination of colour characteristics
    • G06V 10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V 10/40 Extraction of image or video features
    • G06V 10/764 Recognition or understanding using classification, e.g. of video objects
    • G06V 10/766 Recognition or understanding using regression, e.g. by projecting features on hyperplanes
    • G06V 10/7715 Feature extraction, e.g. by transforming the feature space; Mappings, e.g. subspace methods
    • G06V 10/82 Recognition or understanding using neural networks
    • G06T 2207/30108 Industrial image inspection (indexing scheme)
    • G06V 2201/07 Target detection (indexing scheme)
    • Y02E 30/30 Nuclear fission reactors (general tagging)

Abstract

A method for detecting surface defects of an optical element based on the YOLOX algorithm belongs to the field of image detection algorithms. Existing machine-vision-based optical element defect detection methods do not separately consider the interference of dust on detection results or the influence of the halation produced on large-curvature lenses. The method comprises: constructing an imaging system and an image acquisition device based on the polarized light imaging principle and acquiring an image signal of the optical element to be detected with them; performing image preprocessing on the acquired optical element image signal; constructing a target recognition method based on the YOLOX algorithm for the pit, scratch, bubble and edge-collapse defects of optical elements; and performing defect recognition on the preprocessed image signal with the constructed target recognition method. The invention improves detection precision and reduces the probability of false detections and missed detections for defects that closely resemble the surface features of the optical element. The defect types of pits, bubbles, broken edges and scratches can be identified at the same time.

Description

Method for detecting surface defects of optical element based on YOLOX algorithm
Technical Field
The present invention relates to a method for detecting surface defects of optical elements, and more particularly, to a method for detecting surface defects of optical elements based on YOLOX algorithm.
Background
With the continuous progress of science and technology, optical processing and measuring technologies have developed rapidly. Optical elements are now widely applied in scientific research, national defense, daily life and many other fields, and precision optical elements have become key components of high-precision optical instruments and equipment. The surface quality of an optical element has a very significant effect on the whole optical system. When light is incident on a surface that carries defects, beam scattering, energy loss and similar phenomena inevitably occur, which in turn cause degraded beam quality, diffraction fringes, film-layer damage and the like; the performance of the whole optical system is thereby greatly impaired and its service life may even be sharply reduced.
In the inspection of optical elements, surface defect detection is indispensable, and its results are one of the important criteria for judging element quality; research on optical element surface defect detection is therefore significant. Optical element surface defects mainly comprise surface machining defects and surface contaminants. Surface defects refer to the various machining flaws, such as pits, scratches, open bubbles, broken edges and broken points, that remain on the surface of an optical element after optical processing; they are caused mainly by the machining process itself or by subsequent improper handling. Such defects are small local flaws introduced during processing; they affect the surface performance of the optical element and may lead to serious consequences such as operating errors of the optical instrument. For nondestructive detection and identification of surface defects, the main methods are visual inspection, weighing, and machine-vision-based detection. Machine-vision detection is an emerging method that has developed vigorously in recent years: it effectively combines image processing with computer vision so that machines largely replace human inspectors, and surface information is detected continuously through image processing. Because it allows a computer to acquire and analyze information automatically, machine-vision technology has been applied in many fields, especially those related to inspection. A typical machine-vision system consists of a light source, an optical imaging section, a camera, and computer processing and communication units. In recent years, machine-vision-based surface defect detection has rapidly grown into an active research field.
Compared with traditional machine-vision methods for defect detection, deep learning, as a newer branch of machine learning, started noticeably later, yet its achievements across many fields are already substantial.
With the development of computer vision, pattern recognition, multimedia and related computer science, research on image processing technology is flourishing. To better study defect information, the surface defects of optical elements can therefore be examined from the image processing point of view in order to extract information useful for defect detection. Deep learning, an important branch of machine learning, is currently under intensive study; its greatest characteristic is that it can learn features autonomously, replacing conventional hand-crafted feature design and effectively avoiding the difficulty or incompleteness of feature extraction caused by the variety and complexity of the measured images.
In summary, machine-vision-based surface defect detection systems are highly adaptable and have widened the application range of machine vision. With improved mechanical design and accumulated debugging experience, image acquisition speed has increased greatly, but the recognition accuracy achievable with image processing and traditional machine learning still needs to be improved. Deep-learning-based defect detection has proven effective in other areas, such as road surfaces and rail tracks. However, existing machine-vision-based optical element defect detection methods generally do not consider the interference of dust on detection results, nor the influence of the halation produced on large-curvature lenses. The invention therefore introduces deep learning into the optical field, studies it in depth, and applies it to the detection of optical element surface defects. A deep-learning-based target detection method can learn the effective features of the target from data, has stronger detection adaptability and a better effect, and thus enables efficient detection of optical element surface defects.
Disclosure of Invention
The invention aims to solve the problem that existing machine-vision-based optical element defect detection methods do not consider the interference of dust on detection results or the influence of the halation produced on large-curvature lenses, and provides a method for detecting optical element surface defects based on the YOLOX algorithm.
The above object is achieved by the following technical scheme:
a YOLOX algorithm-based optical element surface defect detection method, the method being implemented by:
firstly, constructing an imaging system and an image acquisition device based on a polarized light imaging principle, and acquiring an image signal of an optical element to be detected by using the imaging system and the image acquisition device;
step two, performing image preprocessing on the acquired optical element image signals;
step three, constructing a target identification method based on a YOLOX algorithm aiming at defects of pits, scratches, bubbles and edge collapse in the optical element;
and step four, performing defect recognition on the image signal subjected to the image preprocessing in the step two by using the target recognition method constructed in the step three.
Further, the step of constructing the imaging system and the image acquisition device based on the polarized light imaging principle specifically comprises the following steps:
the imaging system and the image acquisition device are designed to comprise a bracket, a photographing table, a CCD camera, a first attenuation sheet, an objective table, a second attenuation sheet, a parallel light source and a computer;
the device is arranged on one side of the bracket; the top is the photographing table and the bottom is a platform;
a CCD camera is arranged on the bottom surface of the photographing table, a lens of the CCD camera is aligned with a first attenuation sheet below the CCD camera, an objective table fixed on a bracket is arranged below the first attenuation sheet, a second attenuation sheet is arranged below the objective table, and a parallel light source is arranged below the second attenuation sheet; the CCD camera, the first attenuation sheet, the object stage, the second attenuation sheet and the parallel light source are vertically arranged, the CCD camera is connected with the computer, and the parallel light source is arranged on the bottom of the bracket;
the digital image is transmitted into the computer through a computer port or standard equipment by the CCD camera.
Further, the image preprocessing applied to the acquired optical element image signal in step two comprises graying, image denoising, morphological dust removal and image enhancement, specifically:
the polarized image is converted to grayscale to facilitate subsequent processing and observation;
morphological erosion and dilation are used to remove the interference of dust, thereby denoising the image;
image enhancement is performed using histogram equalization and Gamma correction.
Further, the step of constructing the YOLOX algorithm-based target recognition method in the step three specifically includes:
based on YOLOX, the Backbone adopts the CSPDarknet backbone feature extraction network of the YOLO series and its structure is optimized through the Focus structure and the SPP structure; an attention mechanism is adopted by replacing the CSP module of the Neck network with a Transformer encoder module, and the Mish activation function is then used in place of the Swish activation function.
Further, the step in which, based on YOLOX, the Backbone adopts the CSPDarknet backbone feature extraction network of the YOLO series and optimizes its structure through the Focus structure and the SPP structure specifically comprises the following steps:
the YOLOX structure comprises an input end, a backbone network, a Neck and a detection head; the input end uses Mosaic and Mixup data enhancement; the backbone network extracts picture features with Darknet53; the Neck module integrates the extracted feature maps using a top-down FPN structure; and three decoupled heads are used at the prediction end; specifically:
YOLOX adopts the two data enhancement methods Mosaic and Mixup at the input end to enhance the training data; YOLOX uses the Darknet53 network for feature extraction; Darknet53 comprises 53 convolutional layers and 5 residual structural units, each convolutional layer comprising a 1×1 and a 3×3 convolution kernel; each residual unit of Darknet53 is composed of two CBL units and a residual block, and each CBL unit comprises a convolutional layer, a normalization and an activation function;
the Neck module of YOLOX has the following structure: the FPN fuses high-level information with bottom-level information in a top-down up-sampling manner and transmits semantic features; the PAN structure works bottom-up, transmitting bottom-level information to the higher layers for fusion and conveying localization information;
YOLOX introduces three decoupled heads in the Head part for classification and regression, uses a 1×1 convolution for dimension reduction, and uses two 3×3 convolutions in each of the two subsequent branches; each decoupled head outputs three branches that respectively predict the target-frame category, judge whether the target frame is foreground or background, and predict the target-frame position information.
Further, the step of replacing the CSP module of the Neck network with the Transformer encoder module specifically comprises:
a Transformer encoder module is introduced into the Neck network and the CSP modules in the Neck are replaced with Transformer encoder modules; the number of Transformer encoders is set to 1, and the number of heads in the Multi-Head Attention is set to 4; the embedding and encoder parts of Vision Transformer are adopted: the feature map is first transformed in dimension to generate an embedding sequence of reduced dimension, the obtained sequence is input into the encoder to obtain a three-dimensional tensor, the tensor is expanded to four dimensions, the first and fourth dimensions are transposed, and finally the dimensions of the transposed tensor are adjusted according to the number of output channels; the dimension of the input feature map is [b, c, w, h], where b is the number of samples, c the number of channels, and w and h the width and height of the feature map; the Encoder comprises three parts, LayerNorm, Multi-Head Attention and MLP: the embedding sequence is first passed through the LayerNorm layer to obtain Q, K and V, these are fed into the Multi-Head Attention, and the result is residually connected with the embedding sequence; this result then serves as the input to the next stage, and the result of passing it through a LayerNorm layer and the MLP module, residually added to that input, is the final output; the MLP module comprises two layers of Gaussian error linear units.
Further, the step of replacing the Swish activation function with the Mish activation function specifically comprises:
the Swish activation function in the CBS module is replaced by the Mish activation function, namely:
Mish(x) = x · tanh(ln(1 + e^x)).
the beneficial effects of the invention are as follows:
1. The YOLO series is one of the most popular families of single-stage detection networks; its main advantage is speed, which makes it suitable for real-time detection. The YOLO family includes YOLOv1, YOLOv2 (YOLO9000), YOLOv3, YOLOv4, YOLOv5, YOLOX and other versions. YOLOv1 is fast but not very accurate, and the authors quickly released the improved YOLOv2; however, neither of these two versions detects small objects well. YOLOv3 improves both accuracy and speed to some extent and recognizes small objects better. YOLOv4 has a more complex network structure, with large gains in detection accuracy and computation rate. YOLOv5 improves detection performance only slightly over YOLOv4; moreover, in anchor-based detectors the fixed anchor boxes harm the generality of the detection algorithm, and the large number of generated candidate anchors that must be matched with ground-truth boxes causes sample imbalance and consumes substantial memory and computation, so a gap remains in the accuracy achievable for optical element surface defect detection. The YOLOX algorithm chosen by the invention takes YOLOv3 as its starting point and Darknet53 as its backbone network, adding design elements such as an anchor-free detector, an advanced label assignment strategy and data enhancement; these improve its performance beyond existing YOLO-series detectors in both speed and accuracy. The invention proposes an improved-YOLOX optical element surface defect detection algorithm based on the YOLOX model: the Backbone adopts the CSPDarknet backbone feature extraction network of the YOLO series, and its structure is optimized through the Focus structure and the SPP structure, which greatly improves precision and speed. The CSP structure is convolutional, so its receptive field depends on the convolution kernel size: a larger kernel perceives a larger area but greatly increases computational complexity, while a receptive field that is not large enough loses global feature information. A convolutional structure is also translation-invariant and insensitive to the global position of a feature, extracting only a small amount of local information. The Transformer, by contrast, uses an attention mechanism that takes global information into account when computing attention; replacing the CSP module of the Neck network with a Transformer encoder module therefore extends the receptive field to the whole feature map, so the Prediction network obtains more comprehensive information. The Mish activation function is then used in place of the Swish activation function, which effectively prevents overfitting and improves the robustness of the network. The improved model raises detection precision and reduces the probability of false detections and missed detections for defects that closely resemble the surface features of the optical element.
Four common defect types, pits, bubbles, broken edges and scratches, can be identified at the same time, and the detection performance on optical element images is further improved.
2. The method can denoise the image to be processed and remove the interference of dust with pit defects. Specifically, dust particles are extremely small and negligible compared with the size of most lens defects, but because dust adheres to the surface of the optical element it can cause pit defects to be misidentified. The method can distinguish dust from pit defects.
3. The method can eliminate the effect of halation, solve the halation problem caused by the varying thickness and curvature of optical elements, highlight defect features and obtain image samples that are convenient for subsequent processing. Specifically, during image acquisition the differing curvatures of resin lenses produce irregular halation on images of large-curvature lenses, making information difficult to read. The information in the image must be selectively enhanced and suppressed to improve its visual effect and highlight defect information. The method avoids the approach of existing image enhancement processing, in which a dedicated recognition algorithm is designed and the influence of halation on image recognition is then eliminated by designing a special light-source system.
4. The method can identify and classify the defects of the optical element. Online defect detection is characterized by huge data volumes, redundant information and high real-time requirements, and the area, shape and dimensions of defect regions are not fixed. Defects on resin lenses vary in area, are often small, and are easily treated as noise in the preprocessing stage; they are also varied in type and shape. The method of the invention meets the target recognition requirements of the invention.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a diagram of the YOLOX network architecture in accordance with the present invention;
FIG. 3 is a diagram of a Darknet53 network architecture in accordance with the present invention;
FIG. 4 is a Head portion network architecture in accordance with the present invention;
fig. 5 is a schematic diagram of a Transformer encoder structure according to the present invention;
FIG. 6 is a feature map dimension change illustration in accordance with the present invention;
FIG. 7 is a graph of the Mish activation function in accordance with the present invention;
FIG. 8 is a Sobel image sharpening in accordance with the simulation experiment section of the present invention;
FIG. 9 is a diagram of the pixel inversion conversion process according to the experimental section of the present invention;
FIG. 10 is a graph of the median filter process involved in the simulation experiment section of the present invention;
FIG. 11 is a graph of the random rotation process involved in the simulation experiment section of the present invention;
FIG. 12 is a graph after an isometric scaling process involved in the simulation experiment section of the present invention;
FIG. 13 is a diagram of the mirror image operation process according to the experimental part of the simulation of the present invention.
Detailed Description
The invention provides a method for detecting surface defects of an optical element based on a YOLOX algorithm, which is realized by the following steps as shown in fig. 1:
firstly, constructing an imaging system and an image acquisition device based on a polarized light imaging principle, and acquiring an image signal of an optical element to be detected by using the imaging system and the image acquisition device;
step two, performing image preprocessing on the acquired optical element image signals;
step three, constructing a target identification method based on a YOLOX algorithm aiming at four common defects such as pits, scratches, bubbles and edge collapse in the optical element;
and step four, performing defect recognition on the image signal subjected to the image preprocessing in the step two by using the target recognition method constructed in the step three.
The step one, the step of constructing an imaging system and an image acquisition device based on the polarized light imaging principle, specifically comprises the following steps:
the imaging system and the image acquisition device are designed to comprise a bracket, a photographing table, a CCD camera, a first attenuation sheet, an objective table, a second attenuation sheet, a parallel light source and a computer;
the device is arranged on one side of the bracket; the top is the photographing table and the bottom is a platform;
a CCD camera is arranged on the bottom surface of the photographing table, a lens of the CCD camera is aligned with a first attenuation sheet below the CCD camera, an objective table fixed on a bracket is arranged below the first attenuation sheet, a second attenuation sheet is arranged below the objective table, and a parallel light source is arranged below the second attenuation sheet; the CCD camera, the first attenuation sheet, the object stage, the second attenuation sheet and the parallel light source are vertically arranged, the CCD camera is connected with the computer, and the parallel light source is arranged on the bottom of the bracket;
the digital image is transmitted into the computer through a computer port or standard equipment by a CCD camera.
The operation means for performing image preprocessing on the acquired optical element image signals is used for eliminating irrelevant information in the images, recovering useful real information, enhancing the detectability of relevant information and simplifying data to the greatest extent, thereby improving the reliability of feature extraction, image segmentation, matching and identification. The preprocessing operation adopted for the optical element image in the invention comprises the following steps: graying, image denoising, morphological dust removal, image enhancement, specifically:
the polarized image acquired by the camera under dark-field conditions is a low-illumination color image. When processing color images, the three channels usually have to be processed in turn, which is time-consuming; to speed up the whole application system, the amount of data to be processed must be reduced. The polarized image is therefore first converted to grayscale, which also facilitates subsequent processing and observation;
the noise source in the invention is mainly dust. Dust particles are extremely small and negligible compared with the size of most lens defects, but they can affect the identification of pit defects. Comparing the imaging morphology of pit defects and dust, dust is smaller and has lower pixel values. In view of these characteristics, morphological erosion and dilation are used to remove the dust interference and thereby denoise the image;
during image acquisition, the differing curvatures of the lenses produce highlight regions on images of large-curvature lenses, making information difficult to read. The information in the image must be selectively enhanced and suppressed to improve its visual effect and highlight the defect information, i.e. image enhancement is required. The acquired images mainly suffer from low overall gray values and local halation, so histogram equalization and Gamma correction are adopted for image enhancement.
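As an illustrative sketch of the preprocessing chain just described (grayscale conversion, morphological dust removal, histogram equalization, Gamma correction), the following Python/OpenCV function shows one way it could be wired together; the function name preprocess, the 3×3 elliptical structuring element and the gamma value of 0.6 are assumptions for illustration, not values fixed by the invention:

    import cv2
    import numpy as np

    def preprocess(bgr_image, gamma=0.6):
        """Grayscale -> morphological dust removal -> histogram equalization -> Gamma correction."""
        # 1. Graying: collapse the three colour channels to one to cut the processing cost.
        gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)

        # 2. Morphological opening (erosion followed by dilation) removes small, dim
        #    dust-like specks while leaving the larger pit/scratch regions intact.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
        denoised = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)

        # 3. Histogram equalization stretches the low overall gray values.
        equalized = cv2.equalizeHist(denoised)

        # 4. Gamma correction (gamma < 1 brightens dark regions, suppressing local halation).
        table = np.array([(i / 255.0) ** gamma * 255 for i in range(256)], dtype=np.uint8)
        return cv2.LUT(equalized, table)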
The step of constructing the target recognition method based on the YOLOX algorithm specifically comprises the following steps:
the method is characterized in that the method is improved based on the YOLOX, the Backbone adopts a CSPDarknet trunk extraction network in the YOLO series as an extraction network, and the structure of the extraction network is optimized through a Focus structure and an SPP structure, so that the precision and the speed are improved greatly; the CSP structure adopts a convolution structure, the receptive field depends on the size of a convolution kernel, the larger the convolution kernel is, the larger the perceived area is, and the increase of the convolution kernel can greatly increase the complexity of operation; when the area of the receptive field is not large enough, global characteristic information is lost; the convolution structure has translational invariance, is insensitive to the global position of the feature, and only extracts a small part of local information; the transducer adopts an attention mechanism, global information is considered in the process of calculating attention, a CSP module of a Neck network is replaced by a transducer module, the range of a receptive field is improved to be the whole characteristic diagram, the Prediction network can obtain more comprehensive information, and then a Mish activation function is used for replacing a Swish activation function, so that overfitting can be effectively prevented, and the robustness of the network is improved. The improved model improves the detection precision, and can reduce the probability of false detection and omission of defects with higher similarity to the surface features of the optical element.
The improvement based on YOLOX, in which the Backbone adopts the CSPDarknet backbone feature extraction network of the YOLO series and its structure is optimized through the Focus structure and the SPP structure, specifically comprises the following steps:
the YOLOX structure comprises an input end, a backbone network, a Neck and a detection head; the network structure is shown in FIG. 2. Mosaic and Mixup data enhancement is applied at the input end, improving the robustness and generalization ability of the model; the backbone network extracts picture features with Darknet53; the Neck module integrates the extracted feature maps using a top-down FPN structure, improving its feature fusion ability; and three decoupled heads are used at the prediction end to resolve the conflict between classification and regression in target detection and improve prediction accuracy; specifically:
YOLOX adopts the two data enhancement methods Mosaic and Mixup at the input end to enhance the training data. Mosaic data enhancement simply splices four pictures together by random scaling, random cropping and random arrangement. Its advantages are that it enriches the background and the small targets of the detected objects, and that the data of four pictures is processed at once when Batch Normalization is computed, so the mini-batch size does not need to be large and good results can be obtained on a single GPU. Mixup data enhancement adds two pictures pixel by pixel: the larger of the two picture sizes is taken as the fused picture size and the pixel values of the newly added region are set to 0, so both original pictures are preserved and the absolute positions of the targets are unchanged. The aim is to enrich the data set and increase its diversity, thereby improving the performance and robustness of the trained model. YOLOX uses the Darknet53 network for feature extraction; as shown in FIG. 3, Darknet53 comprises 53 convolutional layers and 5 residual building blocks, each comprising 1×1 and 3×3 convolution kernels. Each residual unit of Darknet53 is composed of two CBL units and a residual block, and each CBL unit comprises a convolutional layer, a normalization (batch normalization) and an activation function (Leaky ReLU). The residual block prevents the loss of effective information and the vanishing of gradients when training deep networks. In addition, there is no pooling layer in the network: downsampling is performed by convolutions with stride 2 instead of pooling, which further prevents loss of effective information and is very beneficial for detecting a variety of targets.
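A minimal PyTorch sketch of the CBL unit and residual unit described above; the class names and channel arguments are illustrative assumptions, and the full Darknet53 layer configuration is not reproduced here:

    import torch
    import torch.nn as nn

    class CBL(nn.Module):
        """Convolution + Batch Normalization + Leaky ReLU, the basic unit of Darknet53."""
        def __init__(self, in_ch, out_ch, kernel_size, stride=1):
            super().__init__()
            self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride,
                                  padding=kernel_size // 2, bias=False)
            self.bn = nn.BatchNorm2d(out_ch)
            self.act = nn.LeakyReLU(0.1, inplace=True)

        def forward(self, x):
            return self.act(self.bn(self.conv(x)))

    class ResidualUnit(nn.Module):
        """Two CBL units (1x1 then 3x3) plus a shortcut, as in each Darknet53 residual block."""
        def __init__(self, channels):
            super().__init__()
            self.block = nn.Sequential(
                CBL(channels, channels // 2, kernel_size=1),
                CBL(channels // 2, channels, kernel_size=3),
            )

        def forward(self, x):
            # The identity shortcut prevents loss of effective information and vanishing gradients.
            return x + self.block(x)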
The Neck module of YOLOX adopts an FPN+PAN structure; FPN stands for Feature Pyramid Network and PAN for Path Aggregation Network. The FPN fuses high-level information with bottom-level information in a top-down up-sampling manner and transmits semantic features; the PAN structure works bottom-up, transmitting bottom-level information to the higher layers for fusion and conveying localization information. The top-down feature pyramid (FPN) realizes the fusion of shallow and deep features and addresses multi-scale object detection. The feature pyramid first sends the input picture into the feature extraction network for a bottom-up pass, downsampling continually to obtain feature maps at different scales; then, so that the shallow layers can obtain the rich semantic information of the deep layers, that information is passed back down in a top-down pass, which greatly improves the detection of small targets; meanwhile, the bottom-up and top-down feature maps are continually fused through lateral connections, strengthening the information in the feature maps and allowing detected objects to be located and identified accurately. YOLOX introduces three decoupled heads in the Head section for classification and regression; their network structure is shown in FIG. 4. A 1×1 convolution first reduces the channel dimension, so that decoupling does not add excessive complexity, and each of the two subsequent branches then uses two 3×3 convolutions. Each decoupled head has three output branches, which respectively score the category of the target frame, judge whether the target frame is foreground or background, and predict the position information of the target frame. The decoupled head greatly improves convergence speed and facilitates an end-to-end implementation of YOLOX, which in turn eases downstream task integration.
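The decoupled head can be sketched roughly as follows; the hidden channel count of 256 and the use of SiLU activations are illustrative assumptions based on the description above, not the exact YOLOX implementation. In YOLOX a head of this kind is attached to each of the three Neck output scales; the sketch shows a single scale only.

    import torch
    import torch.nn as nn

    class DecoupledHead(nn.Module):
        """1x1 conv for channel reduction, then separate classification and regression branches."""
        def __init__(self, in_ch, num_classes, hidden_ch=256):
            super().__init__()
            self.stem = nn.Conv2d(in_ch, hidden_ch, 1)          # dimension reduction
            self.cls_branch = nn.Sequential(                     # two 3x3 convs for classification
                nn.Conv2d(hidden_ch, hidden_ch, 3, padding=1), nn.SiLU(),
                nn.Conv2d(hidden_ch, hidden_ch, 3, padding=1), nn.SiLU(),
            )
            self.reg_branch = nn.Sequential(                     # two 3x3 convs for regression
                nn.Conv2d(hidden_ch, hidden_ch, 3, padding=1), nn.SiLU(),
                nn.Conv2d(hidden_ch, hidden_ch, 3, padding=1), nn.SiLU(),
            )
            self.cls_pred = nn.Conv2d(hidden_ch, num_classes, 1)  # target-frame category
            self.obj_pred = nn.Conv2d(hidden_ch, 1, 1)            # foreground / background
            self.reg_pred = nn.Conv2d(hidden_ch, 4, 1)            # target-frame position

        def forward(self, feature_map):
            x = self.stem(feature_map)
            cls_feat, reg_feat = self.cls_branch(x), self.reg_branch(x)
            return self.cls_pred(cls_feat), self.obj_pred(reg_feat), self.reg_pred(reg_feat)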
YOLOX is a typical one-stage target detection network with high speed and high accuracy, but optical element images suffer from low resolution, heavy noise and poor contrast, so its accuracy is lower for defects that are highly similar to the background of the optical element surface and cannot meet strict industrial precision requirements. In order to extract the feature information of optical element images more effectively, the invention proposes an improved YOLOX model based on the YOLOX algorithm: a Transformer encoder module is introduced into the Neck network, and the self-attention mechanism in this module raises the receptive field to the whole feature map and learns its features over a larger range. The Transformer encoder module gives the Prediction network a feature map with a global field of view and improves the detection of defects that closely resemble the background. The Mish activation function is also introduced to replace the Swish activation function used in the YOLOX network, which can improve model detection accuracy.
The step of replacing the CSP module of the Neck network with a Transformer encoder module specifically comprises the following steps:
the Transformer has achieved a great deal in the computer vision field, with state-of-the-art (SOTA) results in downstream tasks such as target classification, target detection and target segmentation. A Transformer encoder module is therefore introduced into the Neck network. The CSP module strengthens the learning ability of the CNN, but its receptive field is still limited to the size of the convolution kernel and cannot take pixel information beyond the kernel into account. The self-attention mechanism in the Transformer encoder raises the receptive field to the whole feature map and analyzes its global features over a larger range, so the CSP modules in the Neck are replaced with Transformer encoder modules; the structure is shown in FIG. 5. To keep the computation efficient, the number of Transformer encoders is set to 1 and the number of heads in the Multi-Head Attention is set to 4. The invention adopts the embedding and encoder parts of Vision Transformer: the feature map is first transformed in dimension to generate an embedding sequence of reduced dimension, the obtained sequence is input into the encoder to obtain a three-dimensional tensor, the tensor is expanded to four dimensions, the first and fourth dimensions are transposed, and finally the dimensions of the transposed tensor are adjusted according to the number of output channels. The dimension of the input feature map is [b, c, w, h], where b is the number of samples, c the number of channels, and w and h the width and height of the feature map; the dimensional changes of the feature map are shown in FIG. 6. The Encoder comprises three parts, LayerNorm, Multi-Head Attention and MLP: the embedding sequence is first passed through the LayerNorm layer to obtain Q, K and V; these are fed into the Multi-Head Attention, whose output is residually connected with the embedding sequence; this result then serves as the input to the next stage, and the result of passing it through a LayerNorm layer and the MLP module, residually added to that input, is the final output. The MLP module comprises two layers of Gaussian error linear units (GELU).
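A minimal sketch of the Transformer encoder block described above, assuming one encoder layer, 4 attention heads, an input of shape [b, c, w, h] and an MLP expansion ratio of 2; the flattening and reshaping shown is one plausible reading of the dimension transformations in the text, not the exact implementation. The pre-norm arrangement (LayerNorm, then Multi-Head Attention, then MLP, each with a residual connection) follows the order described above.

    import torch
    import torch.nn as nn

    class TransformerEncoderBlock(nn.Module):
        """Replaces a CSP module in the Neck: flatten the feature map into an embedding
        sequence, apply LayerNorm -> Multi-Head Attention -> MLP with residual connections,
        then restore the [b, c, w, h] layout."""
        def __init__(self, channels, num_heads=4, mlp_ratio=2):
            super().__init__()
            self.norm1 = nn.LayerNorm(channels)
            self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
            self.norm2 = nn.LayerNorm(channels)
            self.mlp = nn.Sequential(                # two linear layers with GELU units
                nn.Linear(channels, channels * mlp_ratio), nn.GELU(),
                nn.Linear(channels * mlp_ratio, channels), nn.GELU(),
            )

        def forward(self, x):                        # x: [b, c, w, h]
            b, c, w, h = x.shape
            seq = x.flatten(2).transpose(1, 2)       # embedding sequence: [b, w*h, c]
            q = k = v = self.norm1(seq)              # LayerNorm before attention gives Q, K, V
            seq = seq + self.attn(q, k, v, need_weights=False)[0]   # residual connection
            seq = seq + self.mlp(self.norm2(seq))                    # residual connection
            return seq.transpose(1, 2).reshape(b, c, w, h)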
The step of replacing the Swish activation function by the Mish activation function specifically comprises the following steps:
the Swish activation function in the CBS module is replaced by the Mish activation function. Mish outperforms Swish in classification accuracy and maintains its accuracy as the network deepens, whereas accuracy drops noticeably with Swish. The invention therefore replaces Swish with the Mish activation function, namely:
Mish(x) = x · tanh(ln(1 + e^x))
The Mish activation function is plotted in FIG. 7. Like Swish, Mish has a lower bound and no upper bound; this avoids the slow convergence caused by zero gradients during network training and helps regularize the network parameters.
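For reference, Mish can be implemented directly from the formula above; a short PyTorch sketch (recent PyTorch versions also provide a built-in nn.Mish):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Mish(nn.Module):
        """Mish(x) = x * tanh(ln(1 + exp(x))) = x * tanh(softplus(x))."""
        def forward(self, x):
            return x * torch.tanh(F.softplus(x))

    # Swapping the activation in a CBS-style block then only requires replacing the
    # Swish/SiLU activation with Mish() when the block is constructed.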
The detection mode of the invention is still machine-vision detection, a non-contact measurement with the advantages of high accuracy, good real-time performance, high precision and high efficiency; the improvements in the image preprocessing stage eliminate the interference of dust and halation with defect identification, improving detection accuracy and reducing the false detection rate. First, the defect classes of the optical element are consolidated and defects are judged on the basis of differences in their gray values; images are acquired using the optical rotation of the optical element and polarized light imaging under back illumination from a parallel light source, which is the direct source of the visual information and provides the raw data for subsequent image processing. The acquired images are then processed further, including image denoising, to remove the influence of irrelevant information such as dust and halation on defect information extraction. Normal sample data are trained and recognized with a support vector description classifier for defect classification, and the morphological characteristics of optical element defect images are studied. The improved-YOLOX-based identification of optical element surface defects is accurate and fast and is suitable for real-time detection of product defects; it can help enterprises adjust their processes, greatly improve the production yield and efficiency of optical elements, and reduce production costs, which is of very real practical significance.
Simulation experiment
1. Image preprocessing
The image preprocessing can purposefully emphasize the whole or partial characteristics of the image, improve the image quality and facilitate the improvement of the accuracy of image recognition and target detection.
(1) Sobel sharpening
According to the mathematical principle, the maximum value of the gradient function will appear at the place where the gray value changes most obviously in the original image, whereas the minimum value of the gradient value will appear at the area where the gray value changes slowly or not obviously. The gradient described herein is understood to be the rate of change of the gray value, the magnitude of which represents the intensity of the gray value change in a certain pixel region. Therefore, the edge of the defect of the resin lens can be extracted, and the defect and the background are obviously distinguished.
Let f(x, y) represent the gray information of an image; the partial derivatives of the image in the horizontal and vertical directions can be expressed as:
G_x = ∂f(x, y)/∂x,  G_y = ∂f(x, y)/∂y
The gradient of the image is then defined as:
∇f(x, y) = [G_x, G_y]^T
where G_x and G_y represent the rate of change of the gray values in the horizontal and vertical directions, respectively, and T is the transpose operator. The 2-norm of the gradient is taken as its modulus, and the gradient direction follows from the two components:
|∇f(x, y)| = (G_x^2 + G_y^2)^(1/2),  θ = arctan(G_y / G_x)
In general, the gradient value is obtained by convolving each pixel of the image in turn with a matrix of a particular size and particular values, called a gradient operator. The invention uses the Sobel operator to compute the image gradient. The operator is 3×3 in size and consists of a horizontal sub-operator and a vertical sub-operator; edge detection is realized by computing the brightness differences in the horizontal and vertical directions around the target pixel. Its matrix form is:
S_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]],  S_y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
The Sobel operator's larger size gives it better noise resistance and a stronger ability to examine the gray-level relationship between the target pixel and its neighbourhood pixels; images before and after Sobel sharpening are shown in FIG. 8, where the outline of the defect edges is clearly enhanced.
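A minimal OpenCV sketch of the Sobel sharpening step; the weight used to merge the gradient magnitude back into the image is an illustrative assumption:

    import cv2
    import numpy as np

    def sobel_sharpen(gray, alpha=0.7):
        """Compute the Sobel gradient magnitude and add it back to emphasise defect edges."""
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)   # horizontal rate of change Gx
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)   # vertical rate of change Gy
        magnitude = cv2.magnitude(gx, gy)                 # |grad f| = sqrt(Gx^2 + Gy^2)
        magnitude = cv2.normalize(magnitude, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        return cv2.addWeighted(gray, 1.0, magnitude, alpha, 0)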
(2) Pixel inversion
The image pixel inversion transformation refers to replacing the original gray scale with the complementary gray scale. The image obtained by the invention is an 8-bit gray level image, the gray level is from 0 to 255, and the formula of pixel inversion is as follows:
s = 255 - r
after the original image obtained under the dark field condition is sharpened, most pixels are black, the edges and defects of the resin lens are white outlines, the small number of white pixel points are the parts needing to be focused, and the pixel inversion transformation can effectively strengthen white or gray detail information in a black area, so that the inversion transformation is adopted, as shown in fig. 9.
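The inversion itself is a single array operation, for example:

    import numpy as np

    def invert(gray):
        """s = 255 - r for an 8-bit grayscale image."""
        return (255 - gray).astype(np.uint8)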
(3) Median filtering
After the image enhancement operations, noise is amplified along with the highlighted defect information, so a denoising step is required. The invention uses a median filter to remove the noise; its mathematical expression is:
f'(x, y) = med{ f(s, t) | (s, t) ∈ S_xy }
where f(x, y) is the input image, f'(x, y) is the filtered image, S_xy is the sliding window template and med denotes taking the median, as shown in FIG. 10. Specifically, the sliding window template S_xy traverses all pixels of the image, the pixels inside the window are sorted in ascending order, and the median value replaces the original pixel value. The sliding window template S_xy used in the invention is 3×3 in size.
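A sketch of the 3×3 median filtering step with OpenCV:

    import cv2

    def median_denoise(gray, ksize=3):
        """Slide a ksize x ksize window over the image and replace each pixel by the
        median of the window, suppressing the noise amplified by sharpening and inversion."""
        return cv2.medianBlur(gray, ksize)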
2. Making resin lens defect image dataset
(1) Resin lens defect image data enhancement. One reason convolutional neural networks work well is the availability of sufficient data. In practice, large-scale data acquisition is difficult and the time and labour costs are huge, so data enhancement is generally used to expand the data volume and improve the learning ability of the convolutional neural network. The data enhancement methods used in the invention are equal-scale scaling, random rotation and mirroring. Random rotation is a basic geometric transformation in image processing: the image is rotated by a random angle around its centre, the background exposed by the rotation grows, and the added background is filled with black pixels. The distribution of defect positions in the rotated resin lens image changes, which enhances the robustness of the network, as shown in FIG. 11.
Scaling enlarges or reduces the image while preserving its original aspect ratio. Enlargement increases the number of pixels by bilinear interpolation, so the enlarged image must be cropped at the edges to keep the input size to the convolutional neural network consistent; reduction decreases the number of pixels by downsampling, so the reduced image must be padded at the edges for the same reason. To avoid image distortion, the invention limits the scaling to within 20%, as shown in FIG. 12.
Mirroring is divided into horizontal and vertical mirroring; pixel positions are changed through a mapping relation, making mirroring a very useful means of data enhancement. As shown in FIG. 13, the invention adds image samples with defects of different shapes through mirroring and thereby effectively expands the data set.
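A sketch of the three augmentation operations under the constraints stated above; the rotation range and the exact handling of cropping and padding are simplified illustrations, not values fixed by the invention:

    import cv2
    import numpy as np
    import random

    def augment(image):
        """Random rotation (black-filled), equal-scale scaling within 20%, and mirroring."""
        h, w = image.shape[:2]

        # Random rotation about the image centre; exposed background is filled with black.
        angle = random.uniform(-180, 180)
        rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        rotated = cv2.warpAffine(image, rot, (w, h), borderValue=0)

        # Equal-scale scaling within +/-20% of the original size.
        scale = random.uniform(0.8, 1.2)
        new_w, new_h = int(w * scale), int(h * scale)
        scaled = cv2.resize(rotated, (new_w, new_h), interpolation=cv2.INTER_LINEAR)
        if scale >= 1.0:
            # Enlarged image: centre-crop back to the original size.
            x0, y0 = (new_w - w) // 2, (new_h - h) // 2
            scaled = scaled[y0:y0 + h, x0:x0 + w]
        else:
            # Reduced image: pad the edges with black pixels to keep the input size fixed.
            pad_x, pad_y = w - new_w, h - new_h
            scaled = cv2.copyMakeBorder(scaled, pad_y // 2, pad_y - pad_y // 2,
                                        pad_x // 2, pad_x - pad_x // 2,
                                        cv2.BORDER_CONSTANT, value=0)

        # Horizontal or vertical mirroring.
        flip_code = random.choice([0, 1])   # 0: vertical flip, 1: horizontal flip
        return cv2.flip(scaled, flip_code)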
(2) Labeling resin lens defects in the samples to be detected. The defect detection algorithm of the invention is a supervised training method: the network weight parameters are learned and adjusted from the known positions and categories of the defects so that the network can detect them. Obtaining the position and category of each defect in an image is the data annotation step. The invention uses the LabelImg tool to produce the data set.
The defects are selected by drawing frames manually and entering the defect category name, which completes one defect annotation. The software automatically computes the coordinates, width and height of the labeling frame and, together with information such as the defect category, the image path and the image size, generates an XML file known as the label file. The network model obtains the image information it needs by reading the label file.
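A sketch of reading one LabelImg label file; the XML tag names follow the standard LabelImg (Pascal VOC style) layout, which the text implies but does not spell out:

    import xml.etree.ElementTree as ET

    def read_label_file(xml_path):
        """Return the image filename, image size and a list of (defect class, box) annotations."""
        root = ET.parse(xml_path).getroot()
        size = root.find("size")
        width, height = int(size.find("width").text), int(size.find("height").text)
        boxes = []
        for obj in root.iter("object"):
            name = obj.find("name").text              # defect category entered during labeling
            bb = obj.find("bndbox")                   # labeling-frame coordinates
            boxes.append((name, [int(float(bb.find(k).text))
                                 for k in ("xmin", "ymin", "xmax", "ymax")]))
        return root.find("filename").text, (width, height), boxes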
The embodiments of the present invention are disclosed as preferred embodiments, but not limited thereto, and those skilled in the art will readily appreciate from the foregoing description that various extensions and modifications can be made without departing from the spirit of the present invention.

Claims (7)

1. A method for detecting surface defects of an optical element based on a YOLOX algorithm is characterized by comprising the following steps of: the method is realized by the following steps:
firstly, constructing an imaging system and an image acquisition device based on a polarized light imaging principle, and acquiring an image signal of an optical element to be detected by using the imaging system and the image acquisition device;
step two, performing image preprocessing on the acquired optical element image signals;
step three, constructing a target identification method based on a YOLOX algorithm aiming at defects of pits, scratches, bubbles and edge collapse in the optical element;
and step four, performing defect recognition on the image signal subjected to the image preprocessing in the step two by using the target recognition method constructed in the step three.
2. The method for detecting surface defects of an optical element based on the YOLOX algorithm according to claim 1, wherein the method comprises the following steps: the step one, the step of constructing an imaging system and an image acquisition device based on the polarized light imaging principle, specifically comprises the following steps:
the imaging system and the image acquisition device are designed to comprise a bracket, a photographing table, a CCD camera, a first attenuation sheet, an objective table, a second attenuation sheet, a parallel light source and a computer;
the device is arranged on one side of the bracket; the top is the photographing table and the bottom is a platform;
a CCD camera is arranged on the bottom surface of the photographing table, a lens of the CCD camera is aligned with a first attenuation sheet below the CCD camera, an objective table fixed on a bracket is arranged below the first attenuation sheet, a second attenuation sheet is arranged below the objective table, and a parallel light source is arranged below the second attenuation sheet; the CCD camera, the first attenuation sheet, the object stage, the second attenuation sheet and the parallel light source are vertically arranged, the CCD camera is connected with the computer, and the parallel light source is arranged on the bottom of the bracket;
the digital image is transmitted into the computer through a computer port or standard equipment by a CCD camera.
3. The method for detecting surface defects of an optical element based on the YOLOX algorithm according to claim 2, characterized in that the operations for performing image preprocessing on the acquired optical element image signal include graying, image denoising, morphological dust removal and image enhancement, specifically:
graying is carried out on the polarized image to facilitate subsequent processing and observation;
morphological erosion and dilation are adopted to remove the interference of dust, thereby realizing image denoising;
and image enhancement processing is performed by adopting a histogram equalization method and a Gamma correction method.
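A minimal OpenCV sketch of these preprocessing operations is given below; the kernel size and gamma value are illustrative assumptions, not parameters fixed by the invention.

    import cv2
    import numpy as np

    def preprocess(image_bgr, gamma=0.8, kernel_size=3):
        """Graying, morphological dust removal, histogram equalization and Gamma correction."""
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)            # graying
        kernel = np.ones((kernel_size, kernel_size), np.uint8)
        opened = cv2.dilate(cv2.erode(gray, kernel), kernel)          # erosion then dilation suppresses dust specks
        equalized = cv2.equalizeHist(opened)                          # histogram equalization
        table = np.array([((i / 255.0) ** gamma) * 255 for i in range(256)]).astype(np.uint8)
        return cv2.LUT(equalized, table)                              # Gamma correction via lookup table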
4. The YOLOX algorithm-based optical element surface defect detection method according to claim 3, characterized in that the step of constructing the target recognition method based on the YOLOX algorithm specifically comprises:
based on YOLOX, the Backbone adopts the CSPDarknet backbone feature-extraction network of the YOLO series as the extraction network and optimizes its structure through a Focus structure and an SPP structure; an attention mechanism is adopted to replace the CSP module of the Neck network with a Transformer Encoder module, and the Mish activation function is then used to replace the Swish activation function.
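For illustration of the Focus structure mentioned above, a common PyTorch-style formulation is sketched below: the input feature map is sliced into four pixel-interleaved sub-maps that are concatenated along the channel dimension before a convolution. The class name and layer parameters are assumptions, not code from the patent.

    import torch
    import torch.nn as nn

    class Focus(nn.Module):
        """Slice the input into four pixel-interleaved sub-maps, concatenate, then convolve."""
        def __init__(self, in_channels, out_channels, kernel_size=3):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(in_channels * 4, out_channels, kernel_size,
                          stride=1, padding=kernel_size // 2, bias=False),
                nn.BatchNorm2d(out_channels),
                nn.SiLU(),   # Swish-type activation; claim 7 replaces it with Mish
            )

        def forward(self, x):
            # every second pixel in both directions -> four sub-maps at half resolution
            patches = [x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]]
            return self.conv(torch.cat(patches, dim=1))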
5. The method for detecting surface defects of an optical element based on the YOLOX algorithm according to claim 4, characterized in that the step in which the Backbone adopts the CSPDarknet backbone feature-extraction network of the YOLO series as the basis of YOLOX and optimizes the structure of the extraction network through a Focus structure and an SPP structure specifically comprises:
the YOLOX structure comprises an input end, a backbone network, a Neck and a detection head, wherein the input end uses Mosaic and Mixup data enhancement; the backbone network extracts image features using Darknet53; the Neck module integrates the extracted feature maps using a top-down FPN structure; and three decoupled heads are used at the prediction end; specifically:
YOLOX adopts the two data enhancement methods Mosaic and Mixup at the input end to enhance the training data; YOLOX uses the Darknet53 network for feature extraction, Darknet53 comprising 53 convolutional layers and 5 residual structural units, with the convolutional layers using 1×1 and 3×3 convolution kernels; each residual unit of Darknet53 is composed of two CBL units and a residual block, wherein each CBL unit includes a convolutional layer, a normalization layer and an activation function;
in the structure of the Neck module of YOLOX, the FPN fuses high-level information with low-level information through top-down up-sampling and transmits semantic features; the PAN structure works from bottom to top, transmitting low-level information to higher layers for fusion and conveying localization information;
YOLOX introduces three decoupled heads in the Head part for classification and regression; a 1×1 convolution is used to reduce the dimension, two 3×3 convolutions are used in each of the two subsequent branches, and each decoupled head outputs three branches that respectively predict the target box category, judge whether the target box is foreground or background, and predict the target box position information.
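A simplified PyTorch sketch of such a decoupled head is given below; the hidden channel width and the sharing of the regression branch by the objectness and box outputs follow the usual YOLOX design and are assumptions rather than values fixed by the claim.

    import torch
    import torch.nn as nn

    class DecoupledHead(nn.Module):
        """Simplified YOLOX-style decoupled head: 1x1 reduction, then separate cls / obj / reg outputs."""
        def __init__(self, in_channels, num_classes, hidden_channels=256):
            super().__init__()
            self.stem = nn.Conv2d(in_channels, hidden_channels, kernel_size=1)   # 1x1 dimension reduction

            def branch():
                return nn.Sequential(
                    nn.Conv2d(hidden_channels, hidden_channels, 3, padding=1), nn.SiLU(),
                    nn.Conv2d(hidden_channels, hidden_channels, 3, padding=1), nn.SiLU(),
                )

            self.cls_branch = branch()                                  # two 3x3 convolutions
            self.reg_branch = branch()                                  # two 3x3 convolutions
            self.cls_pred = nn.Conv2d(hidden_channels, num_classes, 1)  # target box category
            self.obj_pred = nn.Conv2d(hidden_channels, 1, 1)            # foreground / background score
            self.reg_pred = nn.Conv2d(hidden_channels, 4, 1)            # box position information

        def forward(self, x):
            x = self.stem(x)
            cls_feat = self.cls_branch(x)
            reg_feat = self.reg_branch(x)
            return self.cls_pred(cls_feat), self.obj_pred(reg_feat), self.reg_pred(reg_feat)

One such head would be attached to each of the three feature levels produced by the Neck.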
6. The method for detecting surface defects of an optical element based on the YOLOX algorithm according to claim 5, characterized in that the step of replacing the CSP module of the Neck network with a Transformer Encoder module specifically comprises:
introducing a Transformer Encoder module into the Neck network and replacing the CSP module in the Neck with the Transformer Encoder module, setting the number of Transformer Encoders to 1 and the number of heads in the Multi-Head Attention to 4; the embedding part and the encoding part of the Vision Transformer are adopted: the feature map is first reshaped to generate an embedding sequence of reduced dimension, the obtained sequence is input into the encoding part to obtain a three-dimensional tensor, the tensor is expanded to four dimensions, the first and fourth dimensions are transposed, and finally the dimensions of the transposed tensor are adjusted according to the number of output channels; the dimension of the input feature map is [b, c, w, h], where b is the number of samples, c is the number of channels, and w and h are the width and height of the feature map respectively; the Encoder comprises a LayerNorm part, a Multi-Head Attention part and an MLP part, wherein the obtained embedding sequence is first passed through the LayerNorm layer to obtain Q, K and V, these values are fed into the Multi-Head Attention and its result is residually connected with the embedding sequence; this result is then taken as input, passed through a LayerNorm layer and the MLP module, and residually connected with the input to give the final output; wherein the MLP module comprises two layers of Gaussian Error Linear Units.
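A compact PyTorch sketch of such a pre-norm encoder block, and of wrapping it around a [b, c, w, h] feature map, is shown below; the MLP expansion ratio and helper names are assumptions made for illustration.

    import torch
    import torch.nn as nn

    class TransformerEncoderLayer(nn.Module):
        """LayerNorm -> Multi-Head Attention -> residual, then LayerNorm -> two-layer GELU MLP -> residual."""
        def __init__(self, dim, num_heads=4, mlp_ratio=4):
            super().__init__()
            self.norm1 = nn.LayerNorm(dim)
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.norm2 = nn.LayerNorm(dim)
            self.mlp = nn.Sequential(
                nn.Linear(dim, dim * mlp_ratio), nn.GELU(),
                nn.Linear(dim * mlp_ratio, dim), nn.GELU(),
            )

        def forward(self, x):                              # x: [b, w*h, c] embedding sequence
            y = self.norm1(x)
            x = x + self.attn(y, y, y)[0]                  # Q, K, V come from the normalized sequence
            x = x + self.mlp(self.norm2(x))
            return x

    def neck_transformer(feature_map, layer):
        """Flatten a [b, c, w, h] feature map into a sequence, encode it, and restore the map shape."""
        b, c, w, h = feature_map.shape
        seq = feature_map.flatten(2).transpose(1, 2)       # [b, w*h, c]
        seq = layer(seq)
        return seq.transpose(1, 2).reshape(b, c, w, h)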
7. The method for detecting surface defects of an optical element based on the YOLOX algorithm according to claim 6, wherein: the step of replacing the Swish activation function by the Mish activation function specifically comprises the following steps: the Swish activation function in the CBS module is replaced by a Mish activation function, namely:
Mish(x) = x·tanh(ln(1 + e^x)).
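For reference, a one-line Mish implementation (a sketch, not code from the patent) that can replace the Swish/SiLU activation in the convolution blocks sketched above is:

    import torch
    import torch.nn.functional as F

    def mish(x):
        """Mish activation: x * tanh(ln(1 + exp(x))), i.e. x * tanh(softplus(x))."""
        return x * torch.tanh(F.softplus(x))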
CN202310660358.6A 2023-06-06 2023-06-06 Method for detecting surface defects of optical element based on YOLOX algorithm Pending CN116612106A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310660358.6A CN116612106A (en) 2023-06-06 2023-06-06 Method for detecting surface defects of optical element based on YOLOX algorithm

Publications (1)

Publication Number Publication Date
CN116612106A true CN116612106A (en) 2023-08-18

Family

ID=87679874

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310660358.6A Pending CN116612106A (en) 2023-06-06 2023-06-06 Method for detecting surface defects of optical element based on YOLOX algorithm

Country Status (1)

Country Link
CN (1) CN116612106A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116778263A (en) * 2023-08-22 2023-09-19 四川坤鸿电子科技有限公司 Sorting apparatus control method, electronic apparatus, and computer-readable medium
CN116778263B (en) * 2023-08-22 2023-11-14 四川坤鸿电子科技有限公司 Sorting apparatus control method, electronic apparatus, and computer-readable medium
CN116787022A (en) * 2023-08-29 2023-09-22 深圳市鑫典金光电科技有限公司 Heat dissipation copper bottom plate welding quality detection method and system based on multi-source data
CN116787022B (en) * 2023-08-29 2023-10-24 深圳市鑫典金光电科技有限公司 Heat dissipation copper bottom plate welding quality detection method and system based on multi-source data
CN117173200A (en) * 2023-11-03 2023-12-05 成都数之联科技股份有限公司 Image segmentation method, device, equipment and medium
CN117173200B (en) * 2023-11-03 2024-02-02 成都数之联科技股份有限公司 Image segmentation method, device, equipment and medium
CN117232790A (en) * 2023-11-07 2023-12-15 中国科学院长春光学精密机械与物理研究所 Method and system for evaluating surface defects of optical element based on two-dimensional scattering
CN117232790B (en) * 2023-11-07 2024-02-02 中国科学院长春光学精密机械与物理研究所 Method and system for evaluating surface defects of optical element based on two-dimensional scattering

Similar Documents

Publication Publication Date Title
Ren et al. State of the art in defect detection based on machine vision
CN108520274B (en) High-reflectivity surface defect detection method based on image processing and neural network classification
CN111223088B (en) Casting surface defect identification method based on deep convolutional neural network
CN113450307B (en) Product edge defect detection method
CN116612106A (en) Method for detecting surface defects of optical element based on YOLOX algorithm
CN111768388B (en) Product surface defect detection method and system based on positive sample reference
CN111223093A (en) AOI defect detection method
CN111383209A (en) Unsupervised flaw detection method based on full convolution self-encoder network
CN109840483B (en) Landslide crack detection and identification method and device
CN113643268B (en) Industrial product defect quality inspection method and device based on deep learning and storage medium
CN112991271B (en) Aluminum profile surface defect visual detection method based on improved yolov3
CN111932511A (en) Electronic component quality detection method and system based on deep learning
CN115830004A (en) Surface defect detection method, device, computer equipment and storage medium
CN114596500A (en) Remote sensing image semantic segmentation method based on channel-space attention and DeeplabV3plus
Zhu et al. Object detection in complex road scenarios: improved YOLOv4-tiny algorithm
CN110991374B (en) Fingerprint singular point detection method based on RCNN
Fan et al. Application of YOLOv5 neural network based on improved attention mechanism in recognition of Thangka image defects
Zhao et al. Research on detection method for the leakage of underwater pipeline by YOLOv3
CN115775236A (en) Surface tiny defect visual detection method and system based on multi-scale feature fusion
CN117252815A (en) Industrial part defect detection method, system, equipment and storage medium based on 2D-3D multi-mode image
CN116434230A (en) Ship water gauge reading method under complex environment
CN114331961A (en) Method for defect detection of an object
CN114549414A (en) Abnormal change detection method and system for track data
CN113705564A (en) Pointer type instrument identification reading method
CN115761606A (en) Box electric energy meter identification method and device based on image processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination