CN111951238A - Product defect detection method - Google Patents
- Publication number
- CN111951238A (application CN202010773007.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- detection
- defect
- model
- product
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/8851—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/8851—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
- G01N2021/8883—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges involving the calculation of gauges, generating models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30164—Workpiece; Machine component
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Biology (AREA)
- Mathematical Physics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Quality & Reliability (AREA)
- Software Systems (AREA)
- Signal Processing (AREA)
- Chemical & Material Sciences (AREA)
- Analytical Chemistry (AREA)
- Biochemistry (AREA)
- Immunology (AREA)
- Pathology (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention provides a product defect detection method comprising the following steps. A product image acquisition step: a camera is configured for different products and for the different optical surfaces of the same product, an original camera image consistent with the reference optical surface is acquired, and the original image is transmitted to an image processing module for processing, finally yielding an image suitable for model detection. A model detection step: the resulting image is sent to a depth model and a gray-scale detection model for detection, finally yielding the detection result returned for each optical surface of the product. By innovatively combining deep-learning-based machine vision with traditional image processing, and by enforcing a similarity comparison at the image acquisition stage so that consistent images are acquired, the method ensures both the accuracy of the data subsequently fed to the depth model and the accuracy of the image detection itself.
Description
Technical Field
The invention relates to the field of image-based detection, and in particular to a product defect detection method.
Background
Traditional optical guidance mainly depends on the subjective experience of an engineer, who controls optical imaging quality by adjusting the camera's focal length, aperture, working distance and so on; the results are only mediocre, and the imaging consistency across different machines is poor.
The conventional visual guidance process is generally as follows: a feeding mechanism (robot arm, suction cup, etc.) fitted with an industrial camera takes a picture before each grab, and vision software calculates the position and angle deviation of the material to be grabbed, so that the feeding mechanism meets the required feeding precision. The existing defect is that this precision cannot be guaranteed.
Traditional target detection algorithms usually traverse the whole image with a sliding-window strategy, extract the target object with feature extractors such as Haar, SIFT or HOG, and then classify the extracted target with classifiers such as SVM or Adaboost. Although this exhaustive strategy covers every possible position of the target, its drawbacks are obvious: the time complexity is too high, and too many redundant windows are generated, which seriously affects the speed and performance of subsequent feature extraction and classification. Moreover, because of the diversity of target morphology, illumination variation and background, it is difficult to design a robust feature.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a product defect detection method.
The invention provides a product defect detection method, which comprises the following steps:
a product image acquisition step: a camera is configured for different products and for the different optical surfaces of the same product to acquire an original camera image consistent with the reference optical surface, and the original image is transmitted to an image processing module for processing, finally yielding an image suitable for model detection;
a model detection step: the resulting image is sent to a depth model and a gray-scale detection model for detection, finally yielding the detection result returned for each optical surface of the product;
a physical quantity filtering step: the returned detection result is filtered by threshold or by the physical quantities of defect length, defect width and defect brightness.
Preferably, the product image acquiring step comprises:
step S101: moving the camera and the workpiece to the specified optical point position;
Step S102: triggering a camera to take a picture after setting camera parameters and a light source according to the optical surface information;
step S103: receiving an original image returned by a camera, and adding workpiece information corresponding to the original image to the head of the original image;
step S104: storing the camera original image with the head information for tracing the reason when the problem occurs;
step S105: distributing the camera original image with the head information to different optical surface picture preprocessing modules for parallel processing;
step S106: the image preprocessing module performs cutting, compression, rotation, horizontal mirroring and vertical mirroring operations on the original camera image according to the number of workpiece carriers and machine channels, and outputs an image meeting the model detection requirement; the following cases are covered:
the original camera image contains several workpieces, and each workpiece in the image needs to be cut out separately;
the original camera image is too large and needs to be compressed to improve the model's operation speed;
one workpiece is composed of several original camera images, which need to be rotated and mirrored before being merged.
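The preprocessing cases above (cutting workpieces out of one frame, compressing an oversized image, rotating and mirroring before merging) can be sketched with plain NumPy array operations. This is a minimal illustration; the function names and the box format are assumptions, not taken from the patent:

```python
import numpy as np

def crop_workpieces(raw, boxes):
    """Cut each workpiece out of a multi-workpiece camera image.
    boxes: list of (row0, row1, col0, col1) rectangles."""
    return [raw[r0:r1, c0:c1] for (r0, r1, c0, c1) in boxes]

def compress(raw, factor=2):
    """Shrink an oversized image by simple subsampling to speed up inference."""
    return raw[::factor, ::factor]

def rotate_and_mirror(img, k=1, horizontal=True):
    """Rotate by k*90 degrees, then mirror horizontally or vertically."""
    img = np.rot90(img, k)
    return img[:, ::-1] if horizontal else img[::-1, :]

def merge(tiles, axis=1):
    """Stitch several camera images of one workpiece side by side."""
    return np.concatenate(tiles, axis=axis)

# Illustrative usage on a synthetic 8x8 "camera image"
raw = np.arange(64).reshape(8, 8)
parts = crop_workpieces(raw, [(0, 4, 0, 4), (4, 8, 4, 8)])
small = compress(raw, 2)
stitched = merge([rotate_and_mirror(p) for p in parts])
```

A real pipeline would use an optimized imaging library for these steps; the sketch only shows the data flow.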
Preferably, the model detecting step includes:
the construction step comprises: designing and building a deep learning model for detecting the defects of the workpiece;
a classification step: classifying each pixel point in the learning image by category and determining the confidence of each pixel point's class;
a deep learning model training step: training on the learning images whose pixel points have been classified and assigned confidences, to obtain a trained deep learning model;
and a defect detection step: and detecting the defects of the workpiece by using the trained deep learning model.
Preferably, in the classifying step: each pixel point in the learning image is classified, with 0 representing the background class and 1 representing the defect class, and the learning image is divided into a background area and a defect area according to these classes;
the step of classifying includes:
and (3) convolution step: performing feature extraction on an input layer, filtering partial useless information and reserving feature effective information;
a step of pooling: reducing the dimension of the input layer and reducing the calculated amount;
and (3) feature fusion step: performing cross-layer connection on different layers with the same dimension;
a category judgment step: quantizing the feature information obtained in the feature fusion step into confidence of a certain category;
an output step: outputting a multi-dimensional array vector as the result, representing the class and confidence of each pixel value in the learning image;
the multi-dimensional array vector is of the form [m, n, c, s], wherein m denotes the image width, n denotes the image height, c denotes the class, and s denotes the confidence.
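The [m, n, c, s] output described above amounts to a per-pixel class label and a per-pixel confidence. A minimal sketch, assuming the network emits a softmax probability map with channel 0 = background and channel 1 = defect (the array shapes and variable names are illustrative assumptions):

```python
import numpy as np

def classify_pixels(prob):
    """prob: (H, W, 2) softmax output, channel 0 = background, 1 = defect.
    Returns the per-pixel class map c and confidence map s."""
    c = prob.argmax(axis=-1)   # 0 = background, 1 = defect
    s = prob.max(axis=-1)      # confidence of the chosen class
    return c, s

# Hypothetical 2x2 probability map
prob = np.array([[[0.9, 0.1], [0.2, 0.8]],
                 [[0.6, 0.4], [0.3, 0.7]]])
classes, conf = classify_pixels(prob)
```

Together with the image width m and height n, `classes` and `conf` carry the same information as the [m, n, c, s] vector the patent describes.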
Preferably, in the deep learning model training step: a set number of training steps is configured; during training, the defect-free images and defect images in the training set are fed in strict alternation; when the loss no longer decreases noticeably, training stops, and the model corresponding to the step count at that moment is output as the trained model.
Preferably, the method further comprises the gray model detection step of: detecting the defects of the workpiece through gray level transformation and spatial filtering;
the gray model detecting step includes a top depression detecting step:
obtaining the rotation-translation matrix of the image through shape-based template matching;
obtaining the affine-transformed top region by applying the rotation-translation matrix;
performing sub-pixel threshold segmentation on the affine-transformed top region and adding the segmented edge line segment to the metrology model;
calculating the maximum and minimum distances from the edge points to the base line; the difference between the maximum and the minimum distance is the depression value.
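The depression measurement above reduces to the spread of perpendicular distances from the segmented edge points to the base line. A minimal NumPy sketch, with hypothetical edge points and baseline endpoints standing in for the template-matching output:

```python
import numpy as np

def depression_value(edge_pts, base_p, base_q):
    """Perpendicular distance of each edge point to the base line through
    base_p and base_q; the depression is max distance minus min distance."""
    p = np.asarray(base_p, dtype=float)
    q = np.asarray(base_q, dtype=float)
    d = q - p
    n = np.array([-d[1], d[0]]) / np.hypot(d[0], d[1])  # unit normal
    dist = np.abs((np.asarray(edge_pts, dtype=float) - p) @ n)
    return dist.max() - dist.min()

# Hypothetical edge points over a horizontal baseline from (0,0) to (10,0):
# distances are 2, 5 and 1, so the depression value is 5 - 1 = 4.
edge = [(1.0, 2.0), (5.0, 5.0), (9.0, 1.0)]
value = depression_value(edge, (0, 0), (10, 0))
```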
Preferably, the gray model detecting step includes: and a burr detection step:
obtaining a rotation and translation matrix of the image through shape template matching positioning;
finding an inner hole area through threshold segmentation, erasing an inner angle area of the inner hole area after affine transformation, and detecting burrs through closed operation;
the gray model detecting step includes: a water gap height detection step:
obtaining a rotation and translation matrix of the image through shape template matching positioning;
judging from the rotation angle of the matrix whether the gate (water gap) height reaches the standard;
the gray scale model detection step comprises a top crack detection step:
carrying out threshold segmentation on the picture to find the detected region, and applying a Fourier transform to convert the picture from the spatial domain to the frequency domain;
filtering out the intermediate-frequency components with a Gaussian filter, converting back to the spatial domain, and obtaining the shape of the line by means of the second derivative.
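The frequency-domain step of the crack detection can be sketched as follows. The exact filter is not given in the patent, so the Gaussian-shaped band mask and its parameters here are illustrative assumptions; the second derivative is approximated with `np.gradient` applied twice:

```python
import numpy as np

def frequency_filter(img, sigma=5.0):
    """Fourier-transform the region, attenuate the mid-frequency band with a
    Gaussian-shaped notch, and return to the spatial domain."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    y, x = np.ogrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    r = np.sqrt(x * x + y * y)
    # Notch centered on a mid-frequency radius (assumed, for illustration)
    mask = 1.0 - np.exp(-(r - min(h, w) / 4) ** 2 / (2 * sigma ** 2))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

def second_derivative(profile):
    """Approximate second derivative used to trace the crack line shape."""
    return np.gradient(np.gradient(profile))

filtered = frequency_filter(np.zeros((16, 16)))
curv = second_derivative(np.array([0.0, 1.0, 4.0, 9.0, 16.0]))
```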
Preferably, the physical quantity filtering step:
setting filtering rules: different products, different defects and different parts have different parameter conditions for judging a defect, so different rules are set as defect judgment conditions; the filtering rules may be combined and given distinct priorities, and are compared in priority order; if the first rule matches the defect rule, the product's detection record is directly judged as the corresponding defect record and no further rules are compared; if it does not match, the second rule is compared, and so on until all defect rules have been compared; if none matches, the detection record is judged as a good product record;
setting rule conditions: a rule condition is a linearly quantized value that can serve as a condition, including the following physical quantities: defect threshold, defect length, defect width, defect area, defect average brightness, defect contrast, defect gradient and defect aspect ratio; one or more of these can be combined into a judgment rule;
a non-detection area filtering step: each optical surface of a product has corresponding non-detection areas in which defects need not be detected; by setting area detection rule conditions, detection in these areas is conditionally masked, achieving the goal of accurate detection.
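Masking defect records that fall inside a configured non-detection area might look like the sketch below; the rectangle format and the record structure are assumptions made for illustration:

```python
def in_no_detect_zone(defect_box, zones):
    """True if a defect bounding box (x, y, w, h) lies wholly inside any
    configured non-detection rectangle (x, y, w, h) for this optical surface."""
    dx, dy, dw, dh = defect_box
    for zx, zy, zw, zh in zones:
        if dx >= zx and dy >= zy and dx + dw <= zx + zw and dy + dh <= zy + zh:
            return True
    return False

def filter_detections(detections, zones):
    """Drop defect records that lie wholly inside a non-detection area."""
    return [d for d in detections if not in_no_detect_zone(d["box"], zones)]

zones = [(0, 0, 50, 50)]  # hypothetical masked region
detections = [{"box": (10, 10, 5, 5)},   # inside the zone: filtered out
              {"box": (60, 60, 5, 5)}]   # outside the zone: kept
kept = filter_detections(detections, zones)
```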
According to the present invention, there is provided a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the product defect detection method of any one of the above.
Compared with the prior art, the invention has the following beneficial effects:
1. through deep learning detection, the invention greatly reduces time complexity and the generation of redundant windows, and greatly improves the speed and performance of subsequent feature extraction and classification;
2. through deep learning detection, the invention improves the robustness of image features;
3. the invention combines several detection modes, making detection more comprehensive and accurate;
4. the invention innovatively adopts a machine vision detection method based on deep learning and traditional image processing; the resulting similarity comparison at the image acquisition stage ensures that consistent images are acquired, thereby guaranteeing both the accuracy of the data subsequently used for depth model detection and the accuracy of the image detection.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a schematic flow chart of steps of a model detection method.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but do not limit it in any way. It should be noted that various changes and modifications, obvious to those skilled in the art, can be made without departing from the spirit of the invention; all of these fall within the scope of the present invention.
The present invention will be described more specifically below with reference to preferred examples.
Preferred example 1:
firstly, acquiring a product image
An original camera image consistent with the reference optical surface is obtained by setting camera parameters (exposure, region-of-interest position, line frequency, photographing delay, etc.) for different products and for the different optical surfaces of the same product; the acquired original image is transmitted to an image processing module for processing (including cutting, compression, rotation, horizontal mirroring, vertical mirroring, merging and other algorithms), finally yielding an image suitable for model detection.
The specific steps and algorithm for obtaining the product image are as follows:
1) moving the position of the camera and the workpiece to a specified optical point location
2) After camera parameters (including exposure value, gamma value, line frequency, region of interest and the like) and a light source are set according to the optical surface information, triggering the camera to take a picture
3) Receiving the original image returned by the camera, adding the workpiece information (including the workpiece number and the channel number) corresponding to the original image to the head of the original image
4) Storing camera original pictures with header information for tracing reasons when problems occur
5) Distributing camera original image with head information to different optical surface picture preprocessing modules for parallel processing
6) The image preprocessing module performs cutting, compression, rotation, horizontal mirroring, vertical mirroring and other operations on the original camera image according to information such as the number of workpiece carriers and machine channels, and outputs an image meeting the model detection requirement. The following cases arise:
a) a camera original image comprises a plurality of workpieces, and the workpieces in the image need to be cut out respectively
b) The original image of the camera needs to be compressed to be smaller due to the overlarge size of the original image so as to improve the model operation speed
c) One workpiece is formed by combining a plurality of original images of cameras, and the original images of the cameras need to be combined after being rotated and mirrored
Second, model detection
As shown in fig. 1, the obtained model detection image is sent to the model pipeline service for detection: depending on the product and optical surface, some images are sent to the depth model and some to the gray-scale detection model. Finally, the detection information returned for each optical surface of the product is obtained (including the defect class, defect threshold, position information of x coordinate, y coordinate, width and height, defect length, defect height, defect area, defect average brightness, defect gradient, defect contrast, the average brightness of the brightest 20% of the defect, and the average brightness of the darkest 20% of the defect).
The model detection is divided into depth model detection and gray model detection.
1. The depth model detection comprises the following specific implementation steps:
1) A deep learning model is designed and built for detecting workpiece defects. It is a two-stage model composed of a segmentation network and a classification network, and mainly comprises a convolution module, a pooling module, a feature fusion module, a category judgment module and an output module.
The segmentation network learns the classification category of each pixel point in the image, with 0 representing the background and 1 representing a defect, and divides the image into a background area and a defect area according to these categories; on the basis of the segmentation network, the classification network judges each pixel point in the extracted background and defect areas, giving the probability, i.e. the confidence, that each pixel point belongs to a certain category.
The convolution layer extracts features from the input layer, filtering out part of the useless information while retaining most of the effective feature information; the pooling layer reduces the dimensionality of the input layer to reduce computation; the feature fusion layer connects different layers of the same dimensionality across layers to obtain richer feature information; the category judgment layer quantizes the feature information obtained by the feature fusion layer into the probability of a certain category; after convolution, pooling, feature fusion and so on, the output layer outputs a vector [m, n, c, s] as the result, representing the category and confidence of each pixel value in the image.
2) Circularly training a deep learning model by using the divided data sets;
specifically, the embodiment (5) includes:
the deep learning training method is characterized in that all images in a well-divided training set folder are trained, the number of training steps is set to be more than 1000, good images and defect images in a training set are trained in a single-double alternative mode during training, and until loss is not reduced obviously, training is stopped to output a model corresponding to the number of steps at the moment, and the model is used as an output model of the training.
3) Carrying out appearance defect detection on a real scene workpiece by using a deep learning model, and judging and quantifying a detection result;
when the appearance defects of the real scene workpiece are detected, the output model is used for detecting the workpiece formed by injecting the metal powder, and the result is output according to the vector [ m, n, c, s ].
2. The detection of the gray-scale model is carried out,
the gray model detection is to use a gray conversion and spatial filtering mode for detection, and different detection modes are used for different defects. Specific defect types are: top depression, flash burr, nozzle height, impact defect, crack defect, deformation defect, etc. Image inversion, piecewise linear transformation, histogram equalization and matching, spatial filtering and the like exist in the algorithm, some special detections such as bruises and cracks are processed from a time domain to a frequency domain, and the detection is performed by using the characteristics of the frequency domain, and the specific algorithm is as follows:
1) detection of top recession
The rotation-translation matrix of the image is obtained by shape-based template matching. The affine-transformed top region is obtained by applying the rotation-translation matrix; sub-pixel threshold segmentation is performed on this region, the segmented edge line segment is added to the metrology model, and the maximum and minimum distances from the edge points to the base line are calculated; the difference between the maximum and the minimum distance is the depression value.
2) Detection of burrs on burrs
The rotation-translation matrix of the image is obtained by shape-based template matching. The inner-hole area is found by threshold segmentation; after affine transformation, the inner-corner area of the inner hole is erased, and finally burrs are detected by a morphological closing operation.
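The closing-based burr detection can be illustrated with a small pure-NumPy morphology sketch. A 3x3 structuring element is assumed for simplicity; a production system would use an optimized library routine:

```python
import numpy as np

def _dilate(b):
    """3x3 binary dilation via shifted copies (zero padding at the border)."""
    p = np.pad(b, 1)
    out = np.zeros_like(b)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + b.shape[0], 1 + dx:1 + dx + b.shape[1]]
    return out

def _erode(b):
    """3x3 binary erosion (padding with ones keeps the border neutral)."""
    p = np.pad(b, 1, constant_values=1)
    out = np.ones_like(b)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + b.shape[0], 1 + dx:1 + dx + b.shape[1]]
    return out

def burr_mask(hole_mask):
    """Closing (dilate then erode) fills thin notches in the hole contour;
    the difference between the closed and the original mask marks burrs."""
    closed = _erode(_dilate(hole_mask))
    return closed & ~hole_mask

hole = np.ones((5, 5), dtype=bool)
hole[2, 2] = False          # a one-pixel notch in the hole region (a burr)
burrs = burr_mask(hole)
```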
3) Height of water gap
The rotation-translation matrix of the image is obtained by shape-based template matching. Since the length of the bottom is fixed, whether the gate (water gap) height reaches the standard is judged from the rotation angle of the matrix.
4) Detection of roof cracks
The detected area is found by threshold segmentation of the image; the image is converted from the spatial domain to the frequency domain by Fourier transform, the intermediate-frequency components are filtered out by a Gaussian filter, the result is converted back to the spatial domain, and the shape of the line is obtained by means of the second derivative.
Thirdly, physical quantity filtration
To meet the customer's need to adjust the on-site shipment yield dynamically, a linear physical-quantity filtering mode is adopted: the detection result returned by the model is filtered by threshold, or by physical quantities such as defect length, defect width and defect brightness.
The physical quantity filtration step was as follows:
1) setting filtering rules
Different products, defects and parts have different parameter conditions for judging a defect, so different rules must be set as defect judgment conditions. The filtering rules may be combined and given distinct priorities, and are compared in priority order: if the first rule matches the defect rule, the product's detection record is directly judged as the corresponding defect record and no further rules are compared; if it does not match, the second rule is compared, and so on until all defect rules have been compared; if none matches, the detection record is judged as a good product record.
2) Setting rule conditions
A rule condition is a linearly quantized value that can serve as a condition, including physical quantities such as defect threshold, defect length, defect width, defect area, defect average brightness, defect contrast, defect gradient and defect aspect ratio; one or more of these can be combined into a judgment rule, for example a large-area scratch rule: area > 3mm && threshold > 0.4 && defect length > 0.5mm is judged as a defect record.
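The priority-ordered rule comparison with &&-combined conditions can be sketched as follows; the rule encoding (a dict of lower bounds per physical quantity) and the field names are hypothetical:

```python
def matches(rule, record):
    """A rule is a dict of physical-quantity lower bounds; all of them must
    hold for the rule to match (the && combination in the example above)."""
    return all(record.get(k, 0) > v for k, v in rule.items())

def judge(record, rules):
    """Compare rules in priority order; the first match labels the record as
    that defect, otherwise the record is judged a good product."""
    for name, rule in rules:
        if matches(rule, record):
            return name
    return "good"

# Hypothetical rule table mirroring the large-area scratch example
rules = [("scratch-large-area",
          {"area": 3.0, "threshold": 0.4, "length": 0.5})]
big = {"area": 4.0, "threshold": 0.6, "length": 1.2}
tiny = {"area": 1.0, "threshold": 0.6, "length": 1.2}
```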
3) Non-detection region filtering
If over-killed defect records occur in such an area, they are filtered and masked: by setting area detection rule conditions, detection is conditionally masked to achieve accurate detection. In short, defects occurring in the non-detection area are filtered and masked.
In the description of the present application, it is to be understood that the terms "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience in describing the present application and simplifying the description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and thus, should not be construed as limiting the present application.
Those skilled in the art will appreciate that, in addition to implementing the systems, apparatus, and their modules purely as computer-readable program code, the same functions can be achieved entirely by logically programming the method steps, so that the systems, apparatus, and their modules take the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. The system, apparatus, and modules provided by the present invention can therefore be regarded as a hardware component, and the modules they include for implementing various programs can likewise be regarded as structures within that hardware component; modules for performing various functions may also be regarded both as software programs implementing the method and as structures within the hardware component.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.
Claims (9)
1. A product defect detection method, comprising:
a product image acquisition step: arranging cameras according to different products and the different optical surfaces of the same product, obtaining a camera original image consistent with a reference optical surface, and transmitting the original image to an image processing module for processing, to finally obtain an image suitable for model detection;
a model detection step: sending the obtained image suitable for model detection to a deep learning model and a grey-level detection model for detection, to finally obtain the detection result returned for each optical surface of the product;
a physical quantity filtering step: filtering the returned detection results by physical quantities such as a threshold, defect length, defect width, and defect brightness.
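The three-step pipeline of claim 1 can be sketched as below. This is a hedged illustration only: the detector is a stand-in stub returning made-up candidate records, and the threshold values are hypothetical, not the patent's actual models or parameters.

```python
# Minimal sketch of the claim-1 pipeline:
# acquire image -> model detection -> physical-quantity filtering.

def acquire_image(camera_frame):
    """Stage 1: turn the raw frame into a model-ready image (stub)."""
    return camera_frame  # cropping/compression would happen here

def detect(image):
    """Stage 2 stub: a real system would run the deep and grey-level
    models here; we return fixed candidate defect records instead."""
    return [{"length": 0.6, "width": 0.1, "brightness": 120},
            {"length": 0.05, "width": 0.01, "brightness": 40}]

def filter_physical(candidates, min_length=0.1):
    """Stage 3: keep only candidates exceeding the physical thresholds."""
    return [c for c in candidates if c["length"] >= min_length]

results = filter_physical(detect(acquire_image(None)))
print(len(results))  # the sub-threshold candidate is filtered out
```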
2. The product defect detection method of claim 1, wherein the product image acquisition step comprises:
step S101: moving the camera and the workpiece to a specified optical point position;
step S102: setting the camera parameters and light source according to the optical-surface information, then triggering the camera to take a picture;
step S103: receiving the original image returned by the camera, and adding the workpiece information corresponding to the original image to the header of the original image;
step S104: storing the camera original image with the header information, for tracing the cause when a problem occurs;
step S105: distributing the camera original image with the header information to different optical-surface image preprocessing modules for parallel processing;
step S106: the image preprocessing module performs cropping, compression, rotation, horizontal-mirroring and vertical-mirroring operations on the camera original image according to the number of workpiece carriers and machine channels, and outputs an image meeting the model detection requirement, covering the following cases:
the camera original image contains a plurality of workpieces, and each workpiece in the image must be cut out separately;
the camera original image is too large and must be compressed to improve the model inference speed;
one workpiece is composed of a plurality of camera original images, which must be rotated and mirrored before being stitched together.
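The preprocessing operations of step S106 (crop, compress, rotate, mirror) can be sketched with NumPy alone. The image size, crop window, and compression factor are illustrative; a production system would typically use an image library, and the naive stride-based compression here is only a stand-in for real downscaling.

```python
import numpy as np

# Sketch of the claim-2 preprocessing operations using NumPy slicing.

def crop(img, y0, y1, x0, x1):
    """Cut one workpiece's window out of the camera frame."""
    return img[y0:y1, x0:x1]

def compress(img, factor):
    """Naive integer-stride down-sampling to speed up model inference."""
    return img[::factor, ::factor]

def rotate90(img):
    """Rotate 90 degrees counter-clockwise."""
    return np.rot90(img)

def mirror(img, horizontal=True):
    """Horizontal or vertical mirroring."""
    return img[:, ::-1] if horizontal else img[::-1, :]

raw = np.arange(64).reshape(8, 8)     # stand-in for a camera original image
piece = crop(raw, 0, 4, 0, 4)         # cut one workpiece out of the frame
small = compress(piece, 2)            # shrink before model detection
print(small.shape)
```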
3. The product defect detection method of claim 1, wherein the model detection step comprises:
a model construction step: designing and building a deep learning model for detecting workpiece defects;
a classification step: classifying each pixel in the learning image by category, and judging the confidence of each pixel's category;
a deep learning model training step: training on the learning images whose pixels have been classified and assigned confidences, to obtain a trained deep learning model;
a defect detection step: detecting workpiece defects with the trained deep learning model.
4. The product defect detection method of claim 3, wherein the classification step classifies each pixel in the learning image, with category 0 representing background and category 1 representing a defect, and divides the learning image into a background region and a defect region according to category;
the classification step comprises:
a convolution step: extracting features from the input layer, filtering out part of the useless information and retaining the effective feature information;
a pooling step: reducing the dimension of the input layer and reducing the amount of computation;
a feature fusion step: making cross-layer connections between different layers of the same dimension;
a category judgment step: quantizing the feature information obtained in the feature fusion step into the confidence of a certain category;
an output step: outputting a multi-dimensional array vector as the result, representing the category and confidence of each pixel in the learning image;
the multi-dimensional array vector comprises an [m, n, c, s] vector, wherein m denotes the image width, n denotes the image height, c denotes the category, and s denotes the confidence.
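The per-pixel category/confidence output described above can be sketched as follows, assuming (as an illustration, not the patent's architecture) that the network's last layer produces a per-pixel logit map which is converted to a class map and a confidence map via a softmax.

```python
import numpy as np

# Sketch of the [m, n, c, s] output: per-pixel argmax class
# (0 = background, 1 = defect) plus its softmax confidence.

def pixel_class_and_confidence(logits):
    """logits: (H, W, C) array -> (class map (H, W), confidence map (H, W))."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))  # stable softmax
    probs = e / e.sum(axis=-1, keepdims=True)
    classes = probs.argmax(axis=-1)      # category of each pixel
    confidence = probs.max(axis=-1)      # confidence of the chosen category
    return classes, confidence

logits = np.zeros((2, 2, 2))
logits[0, 0, 1] = 4.0                    # one strongly "defect" pixel
cls, conf = pixel_class_and_confidence(logits)
print(cls)
```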
5. The product defect detection method of claim 4, wherein the deep learning model training step: sets a fixed number of training steps; during training, the defect-free images and defect images in the training set are fed in single-double (odd-even) alternation; training stops when the loss no longer decreases significantly, and the model corresponding to the step count at that point is output as the trained model.
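The odd-even alternation of defect-free and defective training images can be sketched as a sampler. The image labels and the step budget are illustrative; the loss-plateau early-stopping of claim 5 is omitted here.

```python
from itertools import cycle

# Sketch of claim-5 alternation: good and defective samples are fed
# on alternating training steps up to a fixed step budget.

def alternating_samples(good_images, bad_images, steps):
    """Yield good/bad samples on alternating (even/odd) training steps."""
    good, bad = cycle(good_images), cycle(bad_images)
    for step in range(steps):
        yield next(good) if step % 2 == 0 else next(bad)

order = list(alternating_samples(["g1", "g2"], ["b1"], 5))
print(order)  # good and bad samples strictly alternate
```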
6. The product defect detection method of claim 4, further comprising a grey-level model detection step: detecting workpiece defects through grey-level transformation and spatial filtering;
the grey-level model detection step includes a top-depression detection step:
obtaining the rotation-translation matrix of the image through shape-template matching and positioning;
obtaining the affine-transformed top region from the rotation-translation matrix;
performing sub-pixel threshold segmentation on the affine-transformed top region, and adding the segmented edge line segments to the metrology model;
calculating the maximum and minimum distances from the edge points to the baseline, the difference between the maximum and minimum distances being the depression value.
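The depression measure of claim 6 reduces to a simple computation once the edge points are available. In this sketch the baseline (y = 0) and the edge samples are illustrative; a real implementation would take them from the segmented line segments.

```python
import numpy as np

# Sketch of the claim-6 depression value:
# depression = max - min of the edge-point-to-baseline distances.

def depression_value(edge_points, baseline_y=0.0):
    """edge_points: iterable of (x, y); distance is measured to y = baseline_y."""
    dists = np.abs(np.asarray(edge_points, dtype=float)[:, 1] - baseline_y)
    return dists.max() - dists.min()

edge = [(0, 1.0), (1, 1.2), (2, 0.7), (3, 1.1)]  # (x, y) edge samples
print(depression_value(edge))
```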
7. The product defect detection method according to claim 6, wherein the grey-level model detection step comprises a burr detection step:
obtaining the rotation-translation matrix of the image through shape-template matching and positioning;
finding the inner-hole region through threshold segmentation, erasing the inner-corner region of the inner-hole region after affine transformation, and detecting burrs through a closing operation;
the grey-level model detection step comprises a gate (water-gap) height detection step:
obtaining the rotation-translation matrix of the image through shape-template matching and positioning;
judging from the rotation angle of the matrix whether the gate height meets the standard;
the grey-level model detection step comprises a top-crack detection step:
performing threshold segmentation on the picture to find the region to be inspected, and performing a Fourier transform to convert the picture from the spatial domain to the frequency domain;
filtering the intermediate-frequency components with a Gaussian filter, converting back to the spatial domain, and obtaining the shape of the line by means of the second derivative.
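The frequency-domain part of the top-crack step can be sketched with NumPy's FFT. This is a hedged illustration under assumptions: the patent does not give the filter parameters, the Gaussian mask here acts as a simple low-pass rather than the exact mid-frequency filter described, and the impulse test image is made up.

```python
import numpy as np

# Sketch of the claim-7 frequency-domain filtering: FFT to the frequency
# domain, attenuate with a Gaussian mask, inverse-FFT back to the spatial
# domain. Sigma and the test image are illustrative.

def gaussian_filter_freq(img, sigma):
    """Filter an image in the frequency domain with a Gaussian mask."""
    F = np.fft.fftshift(np.fft.fft2(img))            # centred spectrum
    h, w = img.shape
    y, x = np.ogrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    mask = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))  # Gaussian in frequency
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

img = np.zeros((8, 8))
img[4, 4] = 1.0                       # an impulse "crack" pixel
smooth = gaussian_filter_freq(img, sigma=2.0)
print(smooth.shape)                   # the impulse is spread and attenuated
```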
8. The product defect detection method according to claim 1, wherein the physical quantity filtering step comprises:
a filter-rule setting step: different products, defect types and inspection positions call for different parameter conditions when judging a defect, so different rules are set as the defect-judgment criteria; filtering rules may be combined and carry distinct priorities, and are compared in priority order; if the first rule matches the defect rule, the product's detection record is directly judged to be the corresponding defect record and no further rules are compared; if it does not match, the second rule is compared, and so on until all defect rules have been compared; if no rule matches, the detection record is judged to be a good-product record;
a rule-condition setting step: a rule condition is a linearly quantized value that can serve as a criterion, and includes the following physical-quantity conditions: the defect threshold, defect length, defect width, defect area, average defect brightness, defect contrast, defect gradient and defect aspect ratio; one or more conditions are combined into a judgment rule;
a non-detection-area filtering step: for each optical surface of the product there are corresponding non-detection areas in which defects need not be detected; by setting area detection-rule conditions, defects are conditionally masked, thereby achieving the goal of accurate detection.
9. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the product defect detection method of any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010773007.2A CN111951238A (en) | 2020-08-04 | 2020-08-04 | Product defect detection method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111951238A true CN111951238A (en) | 2020-11-17 |
Family
ID=73339364
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010773007.2A Pending CN111951238A (en) | 2020-08-04 | 2020-08-04 | Product defect detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111951238A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103473581A (en) * | 2013-07-29 | 2013-12-25 | 郑国义 | Authenticity identification and source tracing system and method applying three-dimensional miniature engraving and secret mark |
CN104331793A (en) * | 2014-11-04 | 2015-02-04 | 哈尔滨红谷园科技开发有限公司 | Object trace management system |
CN104778420A (en) * | 2015-04-24 | 2015-07-15 | 广东电网有限责任公司信息中心 | Method for establishing safety management view of full life cycle of unstructured data |
CN106204614A (en) * | 2016-07-21 | 2016-12-07 | 湘潭大学 | A kind of workpiece appearance defects detection method based on machine vision |
CN109165958A (en) * | 2018-08-17 | 2019-01-08 | 珠海丹德图像技术有限公司 | A kind of commodity traceability system and method based on image information safe practice |
CN109829736A (en) * | 2019-02-11 | 2019-05-31 | 上海元唯壹网络科技有限责任公司 | A kind of application system based on the image recognition of AI in tea cake |
CN110068579A (en) * | 2019-05-30 | 2019-07-30 | 常州微亿智造科技有限公司 | Intelligent AI appearance detection system |
CN111159184A (en) * | 2019-12-25 | 2020-05-15 | 上海中信信息发展股份有限公司 | Metadata tracing method and device and server |
CN111242185A (en) * | 2020-01-03 | 2020-06-05 | 凌云光技术集团有限责任公司 | Defect rapid preliminary screening method and system based on deep learning |
Non-Patent Citations (3)
Title |
---|
Liang Zhicong (梁智聪): "Workpiece Surface Defect Detection System Based on Convolutional Neural Network", China Master's Theses Full-text Database, Information Science and Technology Series * |
Chen Xiaoguang (陈晓光) et al.: "Pharmacology of New Drugs" (《新药药理学》), 31 December 2010 * |
Bao Qingshan (鲍青山): "Geometric Inverse Computation in Mechanical Engineering and Laser Garment-Cutting Robots" (《机械工程中的几何反算与激光服装裁剪机器人》), 31 December 2000 * |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112233119A (en) * | 2020-12-16 | 2021-01-15 | 常州微亿智造科技有限公司 | Workpiece defect quality inspection method, device and system |
CN112669276A (en) * | 2020-12-24 | 2021-04-16 | 苏州华兴源创科技股份有限公司 | Screen detection positioning method and device, electronic equipment and storage medium |
CN112712504B (en) * | 2020-12-30 | 2023-08-15 | 广东粤云工业互联网创新科技有限公司 | Cloud-based workpiece detection method and system and computer-readable storage medium |
CN112712504A (en) * | 2020-12-30 | 2021-04-27 | 广东粤云工业互联网创新科技有限公司 | Workpiece detection method and system based on cloud and computer-readable storage medium |
CN112505056A (en) * | 2021-02-08 | 2021-03-16 | 常州微亿智造科技有限公司 | Defect detection method and device |
CN112798608A (en) * | 2021-04-14 | 2021-05-14 | 常州微亿智造科技有限公司 | Optical detection device and optical detection method for side wall of inner cavity of mobile phone camera support |
CN114266719A (en) * | 2021-10-22 | 2022-04-01 | 广州辰创科技发展有限公司 | Hough transform-based product detection method |
CN114266719B (en) * | 2021-10-22 | 2022-11-25 | 广州辰创科技发展有限公司 | Hough transform-based product detection method |
CN114577816A (en) * | 2022-01-18 | 2022-06-03 | 广州超音速自动化科技股份有限公司 | Hydrogen fuel bipolar plate detection method |
CN114998192A (en) * | 2022-04-19 | 2022-09-02 | 深圳格芯集成电路装备有限公司 | Defect detection method, device and equipment based on deep learning and storage medium |
CN115541602A (en) * | 2022-12-01 | 2022-12-30 | 常州微亿智造科技有限公司 | Product defect detection method |
CN115541602B (en) * | 2022-12-01 | 2023-03-07 | 常州微亿智造科技有限公司 | Product defect detection method |
CN115861315A (en) * | 2023-02-27 | 2023-03-28 | 常州微亿智造科技有限公司 | Defect detection method and device |
CN116642893A (en) * | 2023-07-24 | 2023-08-25 | 吉林省艾优数字科技有限公司 | Visual intelligent detection method, device, equipment and medium for antigen detection reagent |
CN116642893B (en) * | 2023-07-24 | 2023-10-03 | 吉林省艾优数字科技有限公司 | Visual intelligent detection method, device, equipment and medium for antigen detection reagent |
CN116721098A (en) * | 2023-08-09 | 2023-09-08 | 常州微亿智造科技有限公司 | Defect detection method and defect detection device in industrial detection |
CN116721098B (en) * | 2023-08-09 | 2023-11-14 | 常州微亿智造科技有限公司 | Defect detection method and defect detection device in industrial detection |
CN116934746A (en) * | 2023-09-14 | 2023-10-24 | 常州微亿智造科技有限公司 | Scratch defect detection method, system, equipment and medium thereof |
CN116934746B (en) * | 2023-09-14 | 2023-12-01 | 常州微亿智造科技有限公司 | Scratch defect detection method, system, equipment and medium thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111951237B (en) | Visual appearance detection method | |
CN111951238A (en) | Product defect detection method | |
CN106709436B (en) | Track traffic panoramic monitoring-oriented cross-camera suspicious pedestrian target tracking system | |
CN108985186B (en) | Improved YOLOv 2-based method for detecting pedestrians in unmanned driving | |
CN110866903B (en) | Ping-pong ball identification method based on Hough circle transformation technology | |
CN105678213B (en) | Dual-mode mask person event automatic detection method based on video feature statistics | |
TW202013252A (en) | License plate recognition system and license plate recognition method | |
CN105930822A (en) | Human face snapshot method and system | |
CN113947731B (en) | Foreign matter identification method and system based on contact net safety inspection | |
CN112907519A (en) | Metal curved surface defect analysis system and method based on deep learning | |
CN111179233B (en) | Self-adaptive deviation rectifying method based on laser cutting of two-dimensional parts | |
CN113393426B (en) | Steel rolling plate surface defect detection method | |
CN111967288A (en) | Intelligent three-dimensional object identification and positioning system and method | |
CN107862713B (en) | Camera deflection real-time detection early warning method and module for polling meeting place | |
CN110555867B (en) | Multi-target object tracking method integrating object capturing and identifying technology | |
WO2022121021A1 (en) | Identity card number detection method and apparatus, and readable storage medium and terminal | |
CN112784712B (en) | Missing child early warning implementation method and device based on real-time monitoring | |
CN112288726B (en) | Method for detecting foreign matters on belt surface of underground belt conveyor | |
CN110969164A (en) | Low-illumination imaging license plate recognition method and device based on deep learning end-to-end | |
CN107301421A (en) | The recognition methods of vehicle color and device | |
CN111951234B (en) | Model detection method | |
CN113689365B (en) | Target tracking and positioning method based on Azure Kinect | |
CN112686872A (en) | Wood counting method based on deep learning | |
CN109299743B (en) | Gesture recognition method and device and terminal | |
CN116596838A (en) | Component surface defect detection method based on feature perception |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20201117 |