CN116797602A - Surface defect identification method and device for industrial product detection - Google Patents

Surface defect identification method and device for industrial product detection

Info

Publication number
CN116797602A
CN116797602A (application number CN202311074812.6A)
Authority
CN
China
Prior art keywords
defect
identification
image
defect identification
industrial product
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311074812.6A
Other languages
Chinese (zh)
Inventor
戴齐飞
杨文帮
钱刃
李建华
赵勇
李福池
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dongguan Aipeike Technology Co ltd
Original Assignee
Dongguan Aipeike Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dongguan Aipeike Technology Co ltd filed Critical Dongguan Aipeike Technology Co ltd
Priority to CN202311074812.6A priority Critical patent/CN116797602A/en
Publication of CN116797602A publication Critical patent/CN116797602A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/28Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Medical Informatics (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a surface defect identification method and device for industrial product detection. First, image data containing the surface of the industrial product to be detected is acquired; the image data is preprocessed to obtain gray-scale image surface data; defect identification parameter data are then obtained through a preset defect identification mathematical model, where the defect identification parameter data include the defect type, defect position, and defect area of the surface of the industrial product to be detected; finally, the identification result of the industrial product detection is acquired according to the defect identification parameter data. Because the algorithm is based on a deep learning neural network, it avoids the shortcomings of traditional detection, offers high adaptability and more accurate detection performance, allows the judgment standard to be adjusted according to actual requirements, and improves the accuracy and robustness of defect identification and defect marking during defect detection.

Description

Surface defect identification method and device for industrial product detection
Technical Field
The application relates to the technical field of industrial product process detection, in particular to a surface defect identification method and device for industrial product detection.
Background
With the deep integration of new-generation information technology and manufacturing, the manufacturing industry is undergoing a profound transformation, gradually shifting from quantity expansion to quality improvement. Approaches to improving product quality include strengthening quality inspection, raising the technical level of the process, and standardizing production operations, among which strengthening quality inspection is the most common in manufacturing. Defect detection is a very important industrial application; because defects vary widely, traditional machine-vision algorithms struggle to model and transfer defect characteristics, have low reusability, and must be tuned for each working condition, which wastes a great deal of labor cost. Deep learning achieves very good results in feature extraction and localization, and more and more researchers and engineers have begun to introduce deep-learning algorithms into the field of defect detection. Current industrial production lines have largely completed automatic transformation, but the following problems in the prior art restrict improvements in productivity and in the utilization of existing equipment:
1. The traditional production line relies on manual visual inspection, and its detection speed, accuracy, and consistency of coverage cannot meet the quality-improvement requirements of factories, leading to frequent complaints from end customers;
2. Traditional methods can only detect defects under specific detection conditions, for example obvious defect contours with strong contrast and low noise at a certain scale;
3. Traditional manual visual inspection cannot save logs, marks, records, and other traces, which makes potential economic losses and legal disputes difficult to resolve;
4. It is difficult to discover recurring (near-identical) defect problems, or the cost of discovering them is too high, making targeted maintenance and transformation of the production line difficult to sustain.
In addition, existing algorithms for detecting surface defects of objects are of uneven quality, different algorithms give different results, and it is difficult for any single system to be applied directly in industrial practice. The defects arising on the varied background textures of object surfaces are also diverse. Industrial inspection is constrained by large volumes, requires classified detection and timely feedback, and requires predictive equipment maintenance and adjustment of the production plan, yet the defects to be detected are not large in size. It is therefore difficult to achieve stable defect detection with sufficient precision, low compute cost, and stability; although many algorithms exist in this direction, these problems have not yet been effectively solved. The existing industrial inspection field is thus insufficient, and deep-learning-based surface defect detection of industrial objects needs to be improved.
Disclosure of Invention
The application mainly solves the technical problem of improving the surface defect detection technology for industrial products in the prior art.
According to a first aspect, there is provided in one embodiment a surface defect identification method for industrial product inspection, comprising:
acquiring image data containing the surface of an industrial product to be detected;
preprocessing the image data to obtain image surface data; the image surface data comprise gray image data of the surface of the industrial product to be detected;
inputting the image surface data into a preset defect identification mathematical model, and acquiring defect identification parameter data output by the defect identification mathematical model; the defect identification parameter data comprise defect types, defect positions and/or defect areas of the surfaces of the industrial products to be detected; the defect identification mathematical model is acquired based on a deep learning neural network;
acquiring an identification result according to the defect identification parameter data; the identification result comprises whether the industrial product to be detected is qualified or unqualified.
In one embodiment, preprocessing the image data includes:
and carrying out illumination normalization processing on the image data containing the surface of the industrial product to be detected based on Weber's law so as to obtain a gray level image.
In one embodiment, the method for obtaining the defect identification mathematical model includes:
obtaining a defect training set; the defect training set comprises a preset number of defect samples, and each defect sample corresponds to at least one marked defect type;
training the defect identification mathematical model through the defect training set;
the defect identification mathematical model comprises a pre-mapping processing module, a threshold module, a segmentation module and a defect confirmation module;
the pre-mapping processing module comprises an automatic coding network and a cavity convolution network; the pre-mapping processing module is used for firstly applying the automatic coding network to form a corresponding pixel prediction mask for the defect sample, and then applying the cavity convolution network to generate a probability map;
the threshold module is used for carrying out pixel level threshold on the probability map output by the pre-mapping processing module so as to allocate a given threshold map to the result prediction mask to carry out defect binarization judgment, and acquiring a binarized image Isp according to the result of the defect binarization judgment;
the segmentation module is used for segmenting the defect area according to the binarized image Isp;
the defect confirmation module is used for classifying by applying a classifier, and training, learning and defect parameter marking are performed according to preset codes; the defect parameters include defect location, defect area, and/or defect type.
In one embodiment, the pre-mapping processing module includes two concatenated automatic coding networks; two concatenated automatic coding networks are used to perform two-pass pixel prediction mask acquisition on the defective samples to form a higher precision prediction mask.
In an embodiment, when the cavity convolution network of the pre-mapping processing module performs unitization, a pixel cross entropy loss algorithm is used for learning, based on unitization of the two cascade automatic coding networks, unbalanced classes are re-weighted, different parameters of the cross entropy loss algorithm are designed, and pixel-by-pixel analysis is performed on the normalized preprocessed graph.
In one embodiment, the defect verification module includes a plurality of cascaded classifiers.
In one embodiment, the defect confirmation module uses a simple convolutional neural network as a classifier for classifier training.
In one embodiment, the training method of the classifier of the defect confirmation module includes:
scanning the binarized image Isp through a preset window size in a sliding window mode by using a convolutional neural network; in the sample images used for training the plurality of cascaded classifiers, positive samples are set to be windows in which the defect area occupies more than 20% of the window, with the type set to the defect type occupying the largest area, and negative samples are set to be windows in which the defect area occupies less than 1% of the window.
According to a second aspect, an embodiment provides a computer readable storage medium having stored thereon a program executable by a processor to implement the method according to the first aspect.
According to a third aspect, there is provided in one embodiment a surface defect identification device for industrial product detection, for applying the surface defect identification method as described above, the surface defect identification device comprising:
an image acquisition unit for acquiring image data containing the surface of the industrial product to be detected;
the image preprocessing unit is used for preprocessing the image data to obtain image surface data; the image surface data comprise gray image data of the surface of the industrial product to be detected;
the defect identification unit is used for inputting the image surface data into a preset defect identification mathematical model and acquiring defect identification parameter data output by the defect identification mathematical model; the defect identification parameter data comprise defect types, defect positions and/or defect areas of the surfaces of the industrial products to be detected; the defect identification mathematical model is acquired based on a deep learning neural network;
the identification result output unit is used for acquiring an identification result according to the defect identification parameter data; the identification result comprises whether the industrial product to be detected is qualified or unqualified.
According to the surface defect identification method of the above embodiments, because the algorithm is based on a deep learning neural network, the shortcomings of traditional detection are avoided, the adaptability is high, more accurate detection performance is achieved, the judgment standard can be adjusted according to actual requirements, and the accuracy and robustness of defect identification and defect marking during defect detection are improved.
Drawings
FIG. 1 is a flow chart of a surface defect identification method in one embodiment;
FIG. 2 is a flow chart of a method for obtaining a mathematical model for defect identification in one embodiment;
FIG. 3 is a schematic diagram of a framework of the defect identification mathematical model in one embodiment;
FIG. 4 is a schematic diagram showing the structure of a surface defect identification device according to an embodiment;
FIG. 5 is a schematic diagram of metal magnetic head defect detection in an embodiment.
Detailed Description
The application will be described in further detail below with reference to the drawings by means of specific embodiments, in which like elements in different embodiments are given like associated numbers. In the following embodiments, numerous specific details are set forth in order to provide a better understanding of the present application. However, one skilled in the art will readily recognize that some of these features may be omitted in different situations, or replaced by other elements, materials, or methods. In some instances, operations related to the present application are not shown or described in the specification in order to avoid obscuring the core of the present application; a detailed description of such operations is not necessary for persons skilled in the art, who can fully understand them from the description herein and from their general knowledge.
Furthermore, the described features, operations, or characteristics of the description may be combined in any suitable manner in various embodiments. Also, various steps or acts in the method descriptions may be interchanged or modified in a manner apparent to those of ordinary skill in the art. Thus, the various orders in the description and drawings are for clarity of description of only certain embodiments, and are not meant to be required orders unless otherwise indicated.
The numbering of the components itself, e.g. "first", "second", etc., is used herein merely to distinguish between the described objects and does not have any sequential or technical meaning. The term "coupled" as used herein includes both direct and indirect coupling (coupling), unless otherwise indicated.
The surface defect identification method disclosed by the embodiments of the application overcomes the shortcomings of traditional detection: the deep-learning detection algorithm is combined with other detection algorithms, the requirement on the size of the detection sample data set is low, the adaptability is high, more accurate detection performance can be achieved, the judgment standard can be adjusted as configured, and with lower machine computation cost the accuracy and robustness of defect identification and defect marking during defect detection are improved.
Embodiment one:
referring to fig. 1, a flow chart of a surface defect identification method in an embodiment is shown, and the surface defect identification method is used for detecting surface defects on different industrial products, classifying and identifying the surface defects of different types, and rapidly outputting defect identification results, wherein the identification results include whether defects exist, classification of the defects, positioning of the defects, and the like. In one embodiment, the types of defects to be detected include surface scratches, offset printing, and the like. The surface defect identification method comprises the following steps:
step 101, image data is acquired.
Image data is acquired containing the surface of the industrial product to be inspected. In one embodiment, the surface of the industrial product is captured by an industrial camera. In one embodiment, the image data is acquired at a fixed predetermined angle and a fixed predetermined intensity.
Step 102, preprocessing the image data.
Preprocessing the image data to obtain image surface data, wherein the image surface data comprise gray image data of the surface of the industrial product to be detected. In one embodiment, the image data containing the surface of the industrial product to be detected is subjected to illumination normalization based on Weber's law to obtain a gray-scale map. In one embodiment, preprocessing the image data includes normalized-cut segmentation and classification, where the idea of normalized-cut segmentation is to treat the image data as a graph, compute a weighted graph, and then partition it into regions sharing the same characteristics (e.g., texture, color, brightness). In one embodiment, the weight between every pair of pixel points in the image data is calculated, and the formula applied is:

W_ij = exp(−‖F(i) − F(j)‖² / σ_I²) · exp(−‖X(i) − X(j)‖² / σ_X²), if ‖X(i) − X(j)‖ < r; otherwise W_ij = 0,

wherein X(i) and X(j) represent the coordinates of vertices i and j in the picture, and X is a matrix of shape [N, 1, 2] storing the vertical and horizontal coordinate values of the pixel points. Because the image data has been gray-scale normalized, F(i) and F(j) are the brightness values of pixels i and j, and F is a matrix of shape [N, 1]. In one embodiment, when W_ij is not equal to "0", the two pixels are assigned to the same region.
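A minimal sketch of how this pairwise weight matrix might be computed is shown below. The sigma values, the radius r, and the dense N x N representation (suitable only for small image crops) are illustrative assumptions, not values given by the application.

```python
import numpy as np

def ncut_affinity(gray, sigma_i=0.1, sigma_x=10.0, radius=15.0):
    """Sketch: pairwise affinity W_ij between pixels of a normalized gray image.

    gray: 2-D array with brightness values in [0, 1].
    Returns an (N, N) weight matrix, N = number of pixels; intended for small
    crops, since the dense matrix grows quadratically with the pixel count.
    """
    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    X = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(np.float64)  # [N, 2] coordinates
    F = gray.ravel().astype(np.float64)                                # [N] brightness values

    # Squared brightness difference and squared spatial distance for every pixel pair
    dF2 = (F[:, None] - F[None, :]) ** 2
    dX2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)

    W = np.exp(-dF2 / sigma_i ** 2) * np.exp(-dX2 / sigma_x ** 2)
    W[np.sqrt(dX2) >= radius] = 0.0   # only spatially close pixels are connected
    return W
```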
In one embodiment, if the image data of the industrial product surface is in color, the image data is normalized to a 512x512 gray-scale image, and the light source of the industrial camera is likewise subjected to a corresponding illumination normalization based on Weber's law. In one embodiment, the generalization rate can be improved by expanding the data set through rotation, translation, elastic transformation, and scaling of the object.
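The following sketch illustrates one plausible reading of this preprocessing step (resizing to 512x512, gray-scale conversion, and a Weber-law-style illumination normalization). The logarithmic mapping and the coefficient k are assumptions for illustration; the application only states that illumination is normalized based on Weber's law.

```python
import cv2
import numpy as np

def preprocess(image_bgr, size=512, k=1.0):
    """Sketch: resize, convert to gray scale, and normalize illumination.

    The log-based Weber-style normalization and the coefficient k are
    assumptions; the application does not give the exact formula.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (size, size), interpolation=cv2.INTER_AREA)

    # Weber's law relates perceived change to relative (not absolute) intensity,
    # so a logarithmic mapping is a common way to equalize illumination.
    g = gray.astype(np.float32) / 255.0
    norm = np.log1p(k * g) / np.log1p(k) if k > 0 else g
    return (norm * 255.0).astype(np.uint8)
```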
And step 103, acquiring defect identification parameter data.
Inputting the image surface data into a preset defect identification mathematical model, and obtaining the defect identification parameter data output by the defect identification mathematical model. In one embodiment, the defect identification parameter data include the defect type, defect position, and/or defect area of the surface of the industrial product to be detected. In one embodiment, the defect identification mathematical model is obtained based on a deep learning neural network.
And 104, acquiring an identification result.
And obtaining an identification result according to the defect identification parameter data. In one embodiment, the identification result includes whether the industrial product to be detected is acceptable or unacceptable.
Referring to fig. 2, a flow chart of a method for obtaining a defect identification mathematical model in an embodiment is shown, and in an embodiment, the method for obtaining a defect identification mathematical model includes:
step 201, a defect training set is obtained.
The defect training set includes a preset number of defect samples, each defect sample corresponding to at least one marked defect type.
Step 202, training the model.
And training the defect identification mathematical model through the defect training set.
Referring to fig. 3, a schematic diagram of the defect identification mathematical model in an embodiment is shown. In one embodiment, the defect identification mathematical model includes a pre-mapping processing module 10, a threshold module 20, a segmentation module 30, and a defect confirmation module 40. The pre-mapping processing module 10 includes an automatic coding network (AE) and a hole (atrous) convolution network; it first applies the automatic coding network to form a corresponding pixel prediction mask for the defect sample, and then applies the hole convolution network to generate a probability map. The threshold module 20 is configured to perform pixel-level thresholding on the probability map output by the pre-mapping processing module, assigning a given threshold map to the resulting prediction mask to make a defect binarization decision, and obtaining a binarized image Isp from the result of the defect binarization decision. The segmentation module 30 is used to segment the defect area according to the binarized image Isp. The defect confirmation module 40 is configured to apply a classifier for classification, to train and learn according to preset coding, and to mark the defect parameters, where the defect parameters include defect position, defect area, and/or defect type.
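A compact PyTorch-style sketch of how these stages could be wired together is given below. It is an illustration only: the channel counts, layer sizes, and the names TinyAutoEncoder, DefectModelSketch, and threshold_module are assumptions and do not reproduce the exact CASAE configuration described in the embodiments.

```python
import torch
import torch.nn as nn

class TinyAutoEncoder(nn.Module):
    """Sketch of one auto-coding (AE) stage: encode, then decode back to a 1-channel mask."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),
        )

    def forward(self, x):
        return self.decode(self.encode(x))

class DefectModelSketch(nn.Module):
    """Sketch: cascaded AEs -> atrous (dilated) convolution -> per-pixel probability map."""
    def __init__(self):
        super().__init__()
        self.ae1 = TinyAutoEncoder(in_ch=1)   # first pixel prediction mask
        self.ae2 = TinyAutoEncoder(in_ch=1)   # refined prediction mask
        self.atrous = nn.Conv2d(1, 2, kernel_size=3, padding=2, dilation=2)  # 2 classes

    def forward(self, gray):
        mask1 = self.ae1(gray)
        mask2 = self.ae2(mask1)
        logits = self.atrous(mask2)
        return torch.softmax(logits, dim=1)   # per-pixel class probability map

def threshold_module(prob_map, ka=0.5):
    """Pixel-level thresholding of the defect-class probability into a binary image Isp."""
    return (prob_map[:, 1:2] >= ka).float()

# usage sketch
model = DefectModelSketch()
x = torch.rand(1, 1, 512, 512)          # preprocessed gray image
isp = threshold_module(model(x))        # binarized defect image
```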
In one embodiment, pre-map processing module 10 includes two concatenated auto-coding networks for performing two pixel prediction mask acquisitions on defective samples to form a higher precision prediction mask. In one embodiment, when the cavity convolution network of the pre-mapping processing module 10 performs unitization, the pixel cross entropy loss algorithm is used for learning, based on unitization of two cascaded automatic coding networks, unbalanced classes are re-weighted, different parameters of the cross entropy loss algorithm are designed, and the normalized preprocessed graph is analyzed pixel by pixel. In one embodiment, the defect verification module 40 includes a plurality of cascaded classifiers. In one embodiment, defect verification module 40 employs a simple convolutional neural network as a classifier for classifier training. In one embodiment, the training method of the defect confirmation module 40 classifier includes:
and scanning the binarized image Isp through a preset window size in a sliding window mode by using a convolutional neural network. In sample images for training a plurality of cascaded classifiers, positive samples are set to be of the type with the largest area, wherein the defect area occupied window is larger than 20%, the types of the positive samples are set to be of the type with the largest area, and the negative samples are set to be of the type with the defect area occupied window smaller than 1%.
In one embodiment, the defect identification mathematical model is based on the CASAE model for accurate defect localization and segmentation, wherein the processing by the two cascaded automatic coding networks comprises:
firstly, the acquired image surface data is preprocessed and passed through the first automatic coding network to form a corresponding pixel prediction mask, and that mask is then processed by the second automatic coding network to form a prediction mask of higher precision. The CASAE module processes the feature map of the preprocessed image and forms a probability map via an atrous convolution operation. When the atrous convolution is trained (unitized), a pixel-wise cross-entropy loss is used for learning on the basis of the CASAE module, the unbalanced classes are re-weighted, different parameters of the cross-entropy loss are designed, and the normalized preprocessed image is analyzed pixel by pixel; the formula applied is:

Loss = −(1 / (m·n)) Σ_{i=1..m} Σ_{j=1..n} Σ_{kc=1..k} w_kc · Q(y_ij = kc) · log p_kc(x_ij),

wherein w_kc is the class weight, k = 2 represents the number of classes (sample background and defect), m represents the number of training samples in the project, n is the number of pixels per sample map, Q(y = kc) is an indicator function that takes 1 when y = kc and 0 otherwise (indicating defect or sample background), and it multiplies the logarithm of the probability p_kc(x_ij) of pixel x_ij under class kc.
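A minimal PyTorch sketch of this class-weighted, pixel-wise cross-entropy is shown below; the particular weight values (2 and 8, echoing the 8-to-2 loss parameters mentioned later in the embodiment) and the class ordering are assumptions.

```python
import torch
import torch.nn as nn

# k = 2 classes: index 0 = sample background, index 1 = defect.
# The per-class weights w_kc are assumptions; the 8:2 ratio mentioned in the
# embodiment is taken here to weight the rarer defect class more heavily.
class_weights = torch.tensor([2.0, 8.0])
criterion = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(4, 2, 512, 512)             # [m, k, H, W] per-pixel class scores
labels = torch.randint(0, 2, (4, 512, 512))      # [m, H, W] ground-truth class per pixel
loss = criterion(logits, labels)                 # weighted -log p_kc averaged over the m*n pixels
```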
Next, the CASAE results are refined by the threshold module, which performs a mapping assignment based on the CASAE output values. The threshold module applies a pixel-level threshold to the probability map produced by the pre-mapping stage, assigns a given threshold (Ka) to the resulting prediction mask for mapping analysis, and performs the defect binarization decision; the binarization analysis yields a binarized image Isp. The prediction mask, combined with the given threshold, is mapped onto the probability map, which makes the defect binarization decision convenient and identifies whether a defect exists; a threshold binarization map (Kb) is obtained from the thresholded probability map.
In one embodiment, the defect binarization analysis applies the formula:

Isp(i, j) = 1 if P(i, j) ≥ Ka, and Isp(i, j) = 0 otherwise,

where P is the probability map output by the pre-mapping processing module and Ka is the given threshold. Then, depending on whether Isp is equal to "1", a pixel-level matrix is obtained, which in one embodiment may also be marked with a particular color.
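A short sketch of this binarization is given below; scaling the probability map to the 0-255 range (so that the threshold of 200 used later in the embodiment applies directly) and the red marking color are assumptions.

```python
import numpy as np

def binarize(prob_map_u8, ka=200):
    """Sketch: pixel-level binarization of a probability map scaled to 0-255.

    Returns Isp with 1 where the defect probability reaches the given
    threshold Ka and 0 elsewhere. Ka=200 follows the value used in the
    embodiment; the 0-255 scaling of the probability map is an assumption.
    """
    return (prob_map_u8 >= ka).astype(np.uint8)

def colorize(isp):
    """Optionally mark defect pixels with a specific color (here red) for inspection."""
    vis = np.zeros((*isp.shape, 3), dtype=np.uint8)
    vis[isp == 1] = (0, 0, 255)  # BGR red
    return vis
```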
In one embodiment, defect classification is performed mainly with a classifier, which is trained and learned according to preset coding, after which the defects are marked accurately. For classifier training, the preprocessed picture is scanned in a sliding-window manner with a set window size (for example 15x15), and the scanned matrices are used as classifier training data; several classifiers built from simple convolutional networks are combined into a cascade classifier to obtain the sample images. Positive samples are defined as windows in which the defect area occupies more than 20% of the window, with the category defined as the defect type of largest area (taking intersections into account); negative samples are defined as windows in which the defect area occupies less than 1% of the window; other cases are discarded (a middle zone is reserved for boundary discrimination). In one embodiment, the defect is located according to the defect boundary defined by the pixels marked "1" that belong to the same category after classification, implemented with a defect-area detector, also called defect frame growth. In one embodiment, a defect-area detector is selected, and a contour-extraction and defect-detection algorithm based on OpenCV for complex backgrounds is applied: an opening operation removes interference; a closing operation connects defects; defect frames are grown and the marks are recorded.
The surface defect identification method disclosed by the embodiments of the application can accurately judge whether the surface of an industrial object (in particular metal) is intact, judge the defect type of a defective surface, and accurately locate the positions of the defects, ensuring that the identification frame fits closely to the edge of the defect; it provides judgment conditions with controllable variables and supports various self-defined standards so that conforming industrial products can be produced. Compared with existing surface defect detection technology, the identification accuracy is higher, and the defect types can be accurately identified so that the design can be fed back and the production process maintained. In addition, the shortcomings of traditional detection are overcome: the deep-learning detection algorithm is compared with and fused into the existing algorithms in this direction, the adaptability is high, the method is applicable both when the detection sample data set is large and when it is small, the detection data are more accurate, the judgment standard is adjustable, the machine computation cost is lower, and the accuracy and robustness of defect identification and defect marking during defect detection are improved.
Referring to fig. 4, which is a schematic structural diagram of a surface defect identification device according to an embodiment, the application further discloses a surface defect identification device configured to apply the surface defect identification method described above, comprising an image acquisition unit 100, an image preprocessing unit 200, a defect identification unit 300, and an identification result output unit 400. The image acquisition unit 100 is used for acquiring image data containing the surface of the industrial product to be detected. The image preprocessing unit 200 is used for preprocessing the image data to obtain image surface data; the image surface data are gray image data containing the surface of the industrial product to be detected. The defect identification unit 300 is configured to input the image surface data into a preset defect identification mathematical model and obtain the defect identification parameter data output by the defect identification mathematical model; the defect identification parameter data include the defect type, defect position, and/or defect area of the surface of the industrial product to be detected, and the defect identification mathematical model is obtained based on a deep learning neural network. The identification result output unit 400 is configured to obtain an identification result according to the defect identification parameter data, where the identification result includes whether the industrial product to be detected is qualified or unqualified.
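A minimal sketch of how these four units might be chained in code is shown below; the class and method names (capture, identify, and the qualified_rule callback) are illustrative assumptions standing in for units 100 to 400, and the preprocessor is assumed to be something like the preprocessing sketch given earlier.

```python
class SurfaceDefectIdentificationDevice:
    """Sketch of the device of FIG. 4: four units chained into one pipeline.
    The camera, preprocessor, model and qualified_rule objects are assumed
    placeholders standing in for units 100, 200, 300 and 400."""

    def __init__(self, camera, preprocessor, model, qualified_rule):
        self.camera = camera                  # image acquisition unit 100
        self.preprocessor = preprocessor      # image preprocessing unit 200
        self.model = model                    # defect identification unit 300
        self.qualified_rule = qualified_rule  # identification result output unit 400

    def run(self):
        image = self.camera.capture()             # acquire image data of the product surface
        gray = self.preprocessor(image)           # gray-scale image surface data
        params = self.model.identify(gray)        # defect type, position, area
        qualified = self.qualified_rule(params)   # qualified / unqualified decision
        return {"parameters": params, "qualified": qualified}
```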
The surface defect identification method disclosed in the present application will now be described with reference to fig. 5, a schematic diagram of metal magnetic head defect detection in an embodiment, taking a metal magnetic head as the example.
First, image data acquired by an industrial microscope camera is preprocessed to form a data set. That is, the parameters of the industrial camera (the color, brightness, etc. of the light source) and the shooting angle are set; each detected object is fixed at one angle (for example 70°), the only requirement being that the captured pictures are consistently sharp and not blurred, so that the surface of the industrial object to be detected can be photographed. The captured original images are then preprocessed for detection: picture-size normalization (a 512x512 size can be set); picture gray-scale normalization, applied to RGB and other non-gray pictures; and illumination normalization with a self-set Weber coefficient k. 100 images were collected as the defect data set, of which 70 were randomly selected as the training set and the remaining images used as the test set. For the segmentation task, every sample has its own label image: a binary image of the same size as the sample, in which, according to the label image, white pixels with gray value 255 represent the background and black pixels with gray value 0 represent the defect region. The data set is expanded by rotation, translation, elastic transformation, and scaling of the objects in order to improve generalization; these operations greatly increase the size of the data set, with the training set reaching 8000 images. For the classification task, all defective images are cropped from the original images. The classification data set contains 1436 images, covering damage, offset printing, dust, and fiber defects; in the classification task, 70% of these images are used for training and 30% for testing. Note: each object is photographed from multiple angles, so this can be regarded either as a fairly large data set, or learning can proceed with a relatively small data set stored as a data-set file.
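A sketch of the rotation, translation, and scaling augmentation used to expand the data set might look like the following; the parameter ranges are illustrative assumptions, and the elastic transformation is omitted for brevity.

```python
import cv2
import numpy as np

def augment(gray, angle=None, tx=None, ty=None, scale=None, rng=np.random):
    """Sketch: random rotation, translation and scaling of a 512x512 gray sample.

    Parameter ranges are assumptions; the application only lists rotation,
    translation, elastic transformation and scaling as augmentations.
    """
    h, w = gray.shape
    angle = rng.uniform(-15, 15) if angle is None else angle
    scale = rng.uniform(0.9, 1.1) if scale is None else scale
    tx = rng.uniform(-10, 10) if tx is None else tx
    ty = rng.uniform(-10, 10) if ty is None else ty

    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)  # rotation + scaling
    m[0, 2] += tx                                              # horizontal shift
    m[1, 2] += ty                                              # vertical shift
    return cv2.warpAffine(gray, m, (w, h), borderMode=cv2.BORDER_REFLECT)
```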
Next, the pre-mapping processing module of the defect identification mathematical model is constructed.
A pre-mapping processing module based on the CASAE architecture is formed. The processed data set enters the pre-mapping processing module, where the double (cascaded) AE processing forms a prediction-mask matrix through the defect-detection algorithm. Probability maps of all samples are then computed and output through atrous convolution (a convolution proposed for the image semantic-segmentation problem to avoid the information loss caused by reducing image resolution, introducing a new "dilation rate" parameter) and packaged into a data set. Based on the CASAE defect identification mathematical model, the first AE network was trained for 604 epochs with a learning rate of 0.001; the second network was trained for 402 epochs at the same learning rate. The batch size of both AE networks was set to 2. The prediction-mask values trained by the above modules are input to the threshold module, where the resulting prediction mask is assigned a given threshold (Ka) for mapping analysis and defect binarization decisions; in the experiment, 200 was set as the threshold for delineating the defects. By comparing each pixel of each sample in the data set to the binarization map Kb and the given Ka, a rescanning decision can be made according to whether Isp is equal to "1", or the pixels can be marked with a specific color to obtain a pixel-level defect matrix.
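The training schedule described above (604 and 402 epochs, learning rate 0.001, batch size 2) could be expressed as the following sketch. The Adam optimizer is an assumption; only the learning rate, epoch counts, batch size, and the pixel-wise class-weighted loss come from the text, and the model is assumed to output per-pixel class scores.

```python
import torch

def train_stage(model, loader, criterion, epochs, lr=0.001, device="cpu"):
    """Sketch of one AE training stage; the Adam optimizer is an assumption."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.to(device).train()
    for _ in range(epochs):
        for gray, label in loader:                  # DataLoader built with batch_size=2
            opt.zero_grad()
            logits = model(gray.to(device))         # per-pixel class scores [N, 2, H, W]
            loss = criterion(logits, label.to(device))
            loss.backward()
            opt.step()
    return model

# first AE stage: epochs=604; second AE stage: epochs=402, same learning rate
```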
Next, the atrous convolutional neural network and the threshold analysis that form the threshold module are constructed.
The convolutional neural network has a four-part structure comprising an input layer, a plurality of hidden layers, and an output layer. The atrous convolutional neural network has 20 layers; the initialization and bias parameters of every neuron in every layer lie between -1 and +1, and the convolution kernels of the layers are designed to be 3, 5, 7, and 9 respectively. For the parameter design of each layer: the convolution kernel size of the input layer is 3x3, the kernel sizes of the pooling layers are 7x7 and 15x15, the convolution strides are 1 and 2, the input layer takes a 512x512x3 matrix, the convolution kernel size of the output layer is 1x1, and the output layer produces a 1x1xM matrix, where M represents the defect type; as a result of the experiment in this embodiment, M may be set equal to the number of real defect types. Initial loss-function parameters of 8 to 2 are set. The 3x3 convolution is replaced with a 2-fold dilated atrous convolution; relative to a conventional convolution, the atrous convolution spaces out the pixels being summed. The weight of the atrous convolution at the blank positions in the image is set to zero, and the effective receptive field is set to 7x7. In the AE network, an atrous convolution with a dilation rate of 1 and a stride of 1 is used. As a result, the boundary of the defect is defined by the pixels classified and marked as "1" that belong to the same category, based on the defect-area detector. A defect-area detector is selected, and a contour-extraction and defect-detection algorithm based on the OpenCV library is applied: the findContours() function searches for the contours of objects in the image, the drawContours() function draws the found contours, interference is removed by an opening operation, a closing operation connects the defects, the defect frames are grown, and the marks are recorded.
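A sketch of this OpenCV post-processing (opening, closing, contour extraction, and defect-frame growth) is shown below; the structuring-element sizes are illustrative assumptions, while the opening/closing/findContours/drawContours steps follow the text.

```python
import cv2
import numpy as np

def extract_defect_frames(isp, open_ksize=3, close_ksize=7):
    """Sketch: clean the binarized image Isp and grow bounding frames around defects.

    Kernel sizes are assumptions; the application only names the opening
    (remove interference), closing (connect defects) and contour steps.
    """
    mask = isp.astype(np.uint8) * 255
    open_k = cv2.getStructuringElement(cv2.MORPH_RECT, (open_ksize, open_ksize))
    close_k = cv2.getStructuringElement(cv2.MORPH_RECT, (close_ksize, close_ksize))

    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, open_k)    # remove small interference
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, close_k)  # connect nearby defect fragments

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    vis = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
    cv2.drawContours(vis, contours, -1, (0, 255, 0), 1)      # draw the found contours

    frames = [cv2.boundingRect(c) for c in contours]         # (x, y, w, h) grown defect frames
    return frames, vis
```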
Finally, a Convolutional Neural Network (CNN) architecture is constructed.
The patches of the regions marked as defects in the image are converted into gray-scale images. In this example, the gray-scale image of the defect region is converted to a 227x227 format, and the formatted picture is input to the CNN convolutional neural network. The network contains 5 convolutional layers and 3 max-pooling layers. In this example the kernel sizes are set to 3, 5, and 11, the pooling layers use 3x3 windows, the strides are set to 1, 2, and 4 with padding of 0 and 1, the number of units in the FC1 layer is set to 2000, and the FC2 layer has 256. Each convolutional layer is followed by a rectified linear unit (ReLU). A conventional CNN with a relatively deep coding section typically includes five convolutional layers, each containing convolutional filters; in this example, 3x3 pixels/feature channels, which corresponds to about 0.01% of the area of a conventional 256x256 input image (in the conventional method, the standard input image is 256x256). In a conventional CNN, pixel-level features such as lines and edges are extracted at the relatively shallow convolutional layers (here, the first convolutional layer), with the receptive field size of the 3x3 convolution filter set by the existing architecture; composite features, i.e. combinations of the features identified/extracted by the earlier layers, are extracted at the deeper convolutional layers. For example, a first convolutional layer identifies corner points and lines in an image, and a second convolutional layer identifies squares and triangles in the image based on the combinations/patterns of the previously identified corner points and lines. A convolutional filter whose receptive field covers the ROI to be segmented is learned together with the CNN, combining the relatively small spatial features extracted by the first convolutional layer into the larger features to be marked/segmented, or classifying the image based on the content of the whole image. The small spatial features extracted by the front convolutional layers of the network are combined by the back convolutional layers, which is what makes the use of larger and deeper networks possible.
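A PyTorch sketch of a classifier along these lines (5 convolutional layers, 3 max-pooling layers, a 227x227 gray input, FC1 = 2000, FC2 = 256, ReLU after each convolution) is given below; the channel counts and the exact stride/padding assignment are assumptions where the text leaves them open.

```python
import torch
import torch.nn as nn

class DefectPatchClassifier(nn.Module):
    """Sketch: 5 conv layers + 3 max-pooling layers for 227x227 gray defect patches.
    Channel counts and exact strides/paddings are assumptions; the kernel sizes
    (11/5/3), the 3x3 pooling window, FC1=2000 and FC2=256 follow the text."""

    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=11, stride=4), nn.ReLU(),
            nn.MaxPool2d(3, stride=2),                      # 55 -> 27
            nn.Conv2d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(3, stride=2),                      # 27 -> 13
            nn.Conv2d(64, 96, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(96, 96, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(96, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(3, stride=2),                      # 13 -> 6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 6 * 6, 2000), nn.ReLU(),         # FC1
            nn.Linear(2000, 256), nn.ReLU(),                # FC2
            nn.Linear(256, num_classes),                    # e.g. damage, offset, dust, fiber
        )

    def forward(self, x):                                   # x: [N, 1, 227, 227]
        return self.classifier(self.features(x))
```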
In addition, the previously marked defect matrix is identified and classified again. The defect matrix is scanned in a sliding-window manner with a set window size (for example 15x15), the configured CNN convolutional network is used as the classifier for training, and several classifiers form a cascade classifier to obtain the sample image matrices. Positive samples are specified as windows in which the defect area occupies more than 20% of the window, with the category set to the defect type of largest area (taking intersections into account); negative samples are specified as windows in which the defect area occupies less than 1% of the window; other cases are discarded (a middle zone is reserved for boundary discrimination). This makes it convenient to form the most suitable boundary and to carry out matrix analysis and boundary division.
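The sliding-window sample generation described here might be sketched as follows; the 20% and 1% thresholds and the largest-area labeling follow the text, while the stride and the per-pixel defect-type map are assumptions.

```python
import numpy as np

def make_patch_samples(gray, defect_mask, defect_types, win=15, stride=15):
    """Sketch: scan the image with a win x win window and label each patch.

    defect_mask: binary matrix (1 = defect pixel); defect_types: same shape,
    integer defect-type id per pixel (0 = background). Positive samples are
    windows with > 20% defect pixels, labeled with the largest-area type;
    negative samples have < 1% defect pixels; the middle zone is dropped.
    The stride value and the defect_types encoding are assumptions.
    """
    samples = []
    h, w = gray.shape
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            patch_mask = defect_mask[y:y + win, x:x + win]
            ratio = patch_mask.mean()                        # fraction of defect pixels
            if ratio > 0.20:
                types = defect_types[y:y + win, x:x + win][patch_mask == 1]
                label = int(np.bincount(types).argmax())     # type with the largest area
                samples.append((gray[y:y + win, x:x + win], label))
            elif ratio < 0.01:
                samples.append((gray[y:y + win, x:x + win], 0))  # negative / background
            # windows between 1% and 20% are discarded for boundary discrimination
    return samples
```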
The surface defect identification method disclosed by the embodiments of the application comprises: first, acquiring image data containing the surface of the industrial product to be detected; preprocessing the image data to obtain gray-scale image surface data; obtaining defect identification parameter data through a preset defect identification mathematical model, where the defect identification parameter data include the defect type, defect position, and defect area of the surface of the industrial product to be detected; and finally, acquiring the identification result of the industrial product detection according to the defect identification parameter data. Because the algorithm is based on a deep learning neural network, it avoids the shortcomings of traditional detection, offers high adaptability and more accurate detection performance, allows the judgment standard to be adjusted according to actual requirements, and improves the accuracy and robustness of defect identification and defect marking during defect detection.
Those skilled in the art will appreciate that all or part of the functions of the various methods in the above embodiments may be implemented by hardware or by a computer program. When all or part of the functions are implemented by means of a computer program, the program may be stored in a computer-readable storage medium, which may include read-only memory, random-access memory, magnetic disks, optical disks, hard disks, and the like; the above functions are realized when the program is executed by a computer. For example, the program may be stored in the memory of a device, and all or part of the above functions are realized when the program in the memory is executed by a processor. In addition, the program may also be stored in a storage medium such as a server, another computer, a magnetic disk, an optical disk, a flash disk, or a removable hard disk, and downloaded or copied into the memory of a local device, or used to update the version of the local device's system; the above embodiments are then implemented when the program in that memory is executed by a processor.
The foregoing description of the application has been presented for purposes of illustration and description, and is not intended to be limiting. Several simple deductions, modifications or substitutions may also be made by a person skilled in the art to which the application pertains, based on the idea of the application.

Claims (10)

1. A surface defect identification method for industrial product inspection, comprising:
acquiring image data containing the surface of an industrial product to be detected;
preprocessing the image data to obtain image surface data; the image surface data comprise gray image data of the surface of the industrial product to be detected;
inputting the image surface data into a preset defect identification mathematical model, and acquiring defect identification parameter data output by the defect identification mathematical model; the defect identification parameter data comprise defect types, defect positions and/or defect areas of the surfaces of the industrial products to be detected; the defect identification mathematical model is acquired based on a deep learning neural network;
acquiring an identification result according to the defect identification parameter data; the identification result comprises whether the industrial product to be detected is qualified or unqualified.
2. The surface defect inspection method of claim 1, wherein preprocessing the image data comprises:
and carrying out illumination normalization processing on the image data containing the surface of the industrial product to be detected based on Weber's law so as to obtain a gray level image.
3. The surface defect inspection method of claim 2, wherein the method of obtaining a mathematical model of defect inspection comprises:
obtaining a defect training set; the defect training set comprises a preset number of defect samples, and each defect sample corresponds to at least one marked defect type;
training the defect identification mathematical model through the defect training set;
the defect identification mathematical model comprises a pre-mapping processing module, a threshold module, a segmentation module and a defect confirmation module;
the pre-mapping processing module comprises an automatic coding network and a cavity convolution network; the pre-mapping processing module is used for firstly applying the automatic coding network to form a corresponding pixel prediction mask for the defect sample, and then applying the cavity convolution network to generate a probability map;
the threshold module is used for carrying out pixel level threshold on the probability map output by the pre-mapping processing module so as to allocate a given threshold map to the result prediction mask to carry out defect binarization judgment, and acquiring a binarized image Isp according to the result of the defect binarization judgment;
the segmentation module is used for segmenting the defect area according to the binarized image Isp;
the defect confirmation module is used for classifying by applying a classifier, and training, learning and defect parameter marking are performed according to preset codes; the defect parameters include defect location, defect area, and/or defect type.
4. The surface defect identification method of claim 3, wherein the pre-map processing module comprises two cascaded automatic encoding networks; two concatenated automatic coding networks are used to perform two-pass pixel prediction mask acquisition on the defective samples to form a higher precision prediction mask.
5. The surface defect identification method of claim 4, wherein when the cavity convolution network of the pre-mapping processing module performs unitization, a pixel cross entropy loss algorithm is applied to learn, unbalanced classes are re-weighted based on unitization of two cascaded automatic coding networks, different parameters of the cross entropy loss algorithm are designed, and pixel-by-pixel analysis is performed on the normalized preprocessed graph.
6. The surface defect identification method of claim 3, wherein the defect verification module comprises a plurality of cascaded classifiers.
7. The surface defect identification method of claim 6, wherein the defect verification module employs a simple convolutional neural network as a classifier for classifier training.
8. The surface defect identification method of claim 7, wherein the training method of the classifier of the defect verification module comprises:
scanning the binarized image Isp through a preset window size in a sliding window mode by using a convolutional neural network; in the sample images for training the plurality of cascaded classifiers, positive samples are set to be windows in which the defect area occupies more than 20% of the window, the type is set to be the defect type occupying the largest area, and negative samples are set to be windows in which the defect area occupies less than 1% of the window.
9. A computer readable storage medium, characterized in that the medium has stored thereon a program executable by a processor to implement the method of any of claims 1-8.
10. A surface defect identification device for industrial product inspection, characterized by applying the surface defect identification method according to any one of claims 1 to 8, the surface defect identification device comprising:
an image acquisition unit for acquiring image data containing the surface of the industrial product to be detected;
the image preprocessing unit is used for preprocessing the image data to obtain image surface data; the image surface data comprise gray image data of the surface of the industrial product to be detected;
the defect identification unit is used for inputting the image surface data into a preset defect identification mathematical model and acquiring defect identification parameter data output by the defect identification mathematical model; the defect identification parameter data comprise defect types, defect positions and/or defect areas of the surfaces of the industrial products to be detected; the defect identification mathematical model is acquired based on a deep learning neural network;
the identification result output unit is used for acquiring an identification result according to the defect identification parameter data; the identification result comprises whether the industrial product to be detected is qualified or unqualified.
CN202311074812.6A 2023-08-25 2023-08-25 Surface defect identification method and device for industrial product detection Pending CN116797602A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311074812.6A CN116797602A (en) 2023-08-25 2023-08-25 Surface defect identification method and device for industrial product detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311074812.6A CN116797602A (en) 2023-08-25 2023-08-25 Surface defect identification method and device for industrial product detection

Publications (1)

Publication Number Publication Date
CN116797602A true CN116797602A (en) 2023-09-22

Family

ID=88046309

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311074812.6A Pending CN116797602A (en) 2023-08-25 2023-08-25 Surface defect identification method and device for industrial product detection

Country Status (1)

Country Link
CN (1) CN116797602A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114581388A (en) * 2022-02-24 2022-06-03 国能包神铁路集团有限责任公司 Contact net part defect detection method and device
CN115170548A (en) * 2022-07-29 2022-10-11 衢州学院 Leather defect automatic detection method and device based on unsupervised learning

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114581388A (en) * 2022-02-24 2022-06-03 国能包神铁路集团有限责任公司 Contact net part defect detection method and device
CN115170548A (en) * 2022-07-29 2022-10-11 衢州学院 Leather defect automatic detection method and device based on unsupervised learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIAN TAO et al.: "Automatic Metallic Surface Defect Detection and Recognition with Convolutional Neural Networks", Applied Sciences, no. 8, pages 1-15 *

Similar Documents

Publication Publication Date Title
CN108961217B (en) Surface defect detection method based on regular training
CN113160192B (en) Visual sense-based snow pressing vehicle appearance defect detection method and device under complex background
CN106875381B (en) Mobile phone shell defect detection method based on deep learning
CN108562589B (en) Method for detecting surface defects of magnetic circuit material
CN107545239B (en) Fake plate detection method based on license plate recognition and vehicle characteristic matching
WO2019104767A1 (en) Fabric defect detection method based on deep convolutional neural network and visual saliency
CN113592845A (en) Defect detection method and device for battery coating and storage medium
CN111915704A (en) Apple hierarchical identification method based on deep learning
CN109840483B (en) Landslide crack detection and identification method and device
CN111667455A (en) AI detection method for various defects of brush
CN109241867B (en) Method and device for recognizing digital rock core image by adopting artificial intelligence algorithm
CN112085024A (en) Tank surface character recognition method
CN111127417B (en) Printing defect detection method based on SIFT feature matching and SSD algorithm improvement
CN113034474A (en) Test method for wafer map of OLED display
US20190272627A1 (en) Automatically generating image datasets for use in image recognition and detection
CN114926407A (en) Steel surface defect detection system based on deep learning
CN113221881A (en) Multi-level smart phone screen defect detection method
CN115937518A (en) Pavement disease identification method and system based on multi-source image fusion
CN112686872B (en) Wood counting method based on deep learning
CN116934761B (en) Self-adaptive detection method for defects of latex gloves
CN116051541B (en) Bearing end face gentle abrasion detection method and device based on stroboscopic light source
CN115797314B (en) Method, system, equipment and storage medium for detecting surface defects of parts
CN112750113B (en) Glass bottle defect detection method and device based on deep learning and linear detection
CN116797602A (en) Surface defect identification method and device for industrial product detection
CN114548250A (en) Mobile phone appearance detection method and device based on data analysis

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination