CN110111331B - Honeycomb paper core defect detection method based on machine vision - Google Patents
- Publication number
- CN110111331B (application number CN201910418796.5A)
- Authority
- CN
- China
- Prior art keywords
- defect
- picture
- coordinate
- sample
- defects
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G01N 21/8851 — scan or image signal processing adapted for detecting different kinds of defects
- G01N 21/9515 — investigating flaws in objects of complex shape
- G06F 16/51 — still-image retrieval: indexing, data structures, storage structures
- G06F 16/587 — still-image retrieval using geographical or spatial metadata
- G06T 7/0004 — industrial image inspection
- G06T 7/11 — region-based segmentation
- G06T 7/136 — segmentation involving thresholding
- G06T 7/62 — analysis of geometric attributes of area, perimeter, diameter or volume
- G01N 2021/8854 — grading and classifying of flaws
- G01N 2021/8861 — determining coordinates of flaws
- G01N 2021/8887 — flaw detection based on image processing techniques
- G06T 2207/10004 — still image; photographic image
- G06T 2207/20081 — training; learning
- G06T 2207/20112 — image segmentation details
- G06T 2207/20164 — salient point detection; corner detection
- G06T 2207/30108 — industrial image inspection
- Y02P 90/30 — computing systems specially adapted for manufacturing
Abstract
The invention discloses a machine-vision-based honeycomb paper core defect detection method. Aiming at the various defects generated during honeycomb paper core production, pictures of the honeycomb paper core are collected at the production site, and an SSD deep neural network detects the defects in them, judges each defect's type, and outputs its specific position. A machine vision algorithm then performs a quick recheck to prevent false detections. Finally, the result is transmitted to a honeycomb paper core defect repair system as a correct feedback signal, realizing automatic repair of honeycomb paper core defects. The invention uses a deep learning model together with a machine vision algorithm to detect honeycomb paper core defects in real time and can provide feedback information to an automatic defect repair system in the honeycomb paper core production process; it has the advantages of accurate recognition, accurate positioning, and high recognition speed, and meets the requirements of honeycomb paperboard production automation.
Description
Technical Field
The invention relates to the field of image detection, in particular to a honeycomb paper core defect detection method based on machine vision.
Background
Honeycomb paper is made on the principle of the natural honeycomb structure. It is an environment-friendly, energy-saving material with a novel sandwich structure: base paper cut into strips and stacked together is joined by an adhesive process into many hollow, three-dimensional regular hexagons, forming an integral load-bearing piece, and liner paper is glued to both of its faces. The material has high mechanical strength, withstands the various impacts and drops of handling, and is widely used for packaging and transporting precision, fragile, and even military devices, so it has strong industrial practicability. During honeycomb paperboard production, improper stretching and various complicating industrial factors very easily produce defects such as large holes, irregular cell structures, and continuous fracture; such defects are an unavoidable problem of the honeycomb paperboard production process. On existing production lines, defects generated in real time are usually found and filled by manual inspection and repair, which consumes a large amount of manpower. Moreover, under long hours of high-load work, manual defect inspection is prone to error, leading to quality degradation, a rising defective rate, and various other problems.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a machine-vision-based honeycomb paper core defect detection method that detects, in real time, the defects generated on a honeycomb paper core production line, enhances the robustness of the system, and prevents possible false detections.
In order to solve the technical problems, the invention adopts the following technical scheme: a honeycomb paper core defect detection method based on machine vision comprises the following steps:
1) Production equipment at the honeycomb paper core industrial site is run to generate a large number of samples, and a vision platform is built to collect them. The image acquisition position is the outlet of the production line. To capture only real-time information at the outlet with as little historical information as possible, the acquired image is set as a strip whose width matches the production line width and whose length is kept as short as practical. Each strip image of the honeycomb paper core is divided into equal parts, and every resulting image of size 300 × 300 is examined manually: it is judged whether the image contains defects (one image may contain several), the defective images are screened out, the defect positions are calibrated, the center coordinates of each calibration frame and the frame's height and width are stored, and the defect type at each position is recorded. This information is integrated as the ground truth of the image, and a database 1 is constructed as the training database to store it. At the same time, a vision-operation sample library is established for later use, comprising a defect-free sample table and a typical-defect sample table; the defect samples there are likewise labeled manually.
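As an illustrative sketch (not the patent's code; NumPy and the function name are assumptions), the equal division of a strip acquisition into 300 × 300 tiles can look like this:

```python
import numpy as np

TILE = 300  # tile edge in pixels, per the 300 x 300 sub-images described above

def split_strip(strip):
    """Split a strip-shaped acquisition (TILE high, k*TILE wide) into k square
    tiles; the returned index i doubles as the tile's x-coordinate unit in the
    production-line coordinate system."""
    h, w = strip.shape[:2]
    if h != TILE or w % TILE != 0:
        raise ValueError("strip must be TILE high and a whole number of tiles wide")
    return [(i, strip[:, i * TILE:(i + 1) * TILE]) for i in range(w // TILE)]
```

With the 2400 × 300 strip of the detailed embodiment, this yields the 8 tiles described later in the text.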
2) The data in database 1 are expanded; specific operations include cropping, rotation, and adjustment of brightness and saturation. The newly generated samples are stored back into database 1.
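A minimal sketch of this expansion step, assuming NumPy arrays for the tiles; the specific transforms and parameter ranges below are illustrative, not the patent's exact operations:

```python
import numpy as np

def expand_sample(img, seed=0):
    """Produce simple augmented variants of one tile: three 90-degree rotations,
    a horizontal flip, and a random brightness scaling -- a minimal stand-in
    for the crop/rotate/brightness-saturation expansion described above."""
    rng = np.random.default_rng(seed)
    variants = [np.rot90(img, k) for k in (1, 2, 3)]
    variants.append(img[:, ::-1])                     # horizontal flip
    gain = rng.uniform(0.8, 1.2)                      # brightness adjustment
    variants.append(np.clip(img.astype(np.float32) * gain, 0, 255).astype(np.uint8))
    return variants
```

Note that geometric transforms must also be applied to the calibration frames in the ground truth, which this sketch omits.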
3) An SSD (Single Shot MultiBox Detector) network is constructed, and a model is trained using the pictures in database 1 and their corresponding ground truth (label and bounding-box position information). A suitable number of prior frames is selected: appropriate convolution layers are added after the SSD base network to generate feature layers of different scales, and on each scale prior frames with several aspect ratios are chosen according to the typical shapes of the defects, forming the prior-frame set. During training, prior-frame matching is performed first. An intersection-over-union (IoU) threshold is set (here 0.5), and each prior frame is tested against the ground truth calibrated for the image in the database: if the IoU exceeds the threshold, the prior frame is judged a positive example; otherwise it is treated directly as background (a negative example). Each image thus yields many positive and negative examples, but the negatives usually far outnumber the successfully matched positives (which unbalances the data set), so the negative samples are subsampled with a hard negative mining algorithm to lower their proportion. The loss function of the network is defined as follows:
L(x, c, l, g) = (1/N) · [ L_conf(x, c) + α · L_loc(x, l, g) ]

where x is the input matching indicator, c is the confidence prediction, l is the position prediction for the boundary corresponding to the prior frame, g is the position parameter of the ground truth, and N is the number of matched prior frames. L_conf is the classification confidence error and L_loc the position error; the two errors are calculated as follows:
Confidence error:

L_conf(x, c) = − Σ_{i∈Pos} x_ij^p · log(ĉ_i^p) − Σ_{i∈Neg} log(ĉ_i^0),  ĉ_i^p = exp(c_i^p) / Σ_p exp(c_i^p)

where the first term is the cross entropy of the positive samples and the second term scores each negative sample directly against class 0 (background).
Position error:

L_loc(x, l, g) = Σ_{i∈Pos} Σ_{m∈{cx,cy,w,h}} x_ij^k · smooth_L1(l_i^m − ĝ_j^m)

where the position gap is calculated with the smooth L1 loss so that the loss changes more smoothly near the origin of coordinates. The bounding-box encoding function is:

ĝ_j^cx = (g_j^cx − d_i^cx) / d_i^w,  ĝ_j^cy = (g_j^cy − d_i^cy) / d_i^h,  ĝ_j^w = log(g_j^w / d_i^w),  ĝ_j^h = log(g_j^h / d_i^h)

where the superscripts cx, cy, w, h denote the center coordinates and the width and height of a box, l is the bounding-box prediction, g is the corresponding true bounding-box position, and d is the prior-frame position.
The gradient-descent process of the loss function is accelerated with an optimization method, and the overall loss is reduced continuously until it meets the requirement. The model weights are then saved.
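The prior-frame matching and hard-negative-mining procedure described for training can be sketched as follows. This is a simplified stand-in (real SSD implementations do the matching with tensors, per ground truth and per prior); the function names and the 3:1 negative ratio default are assumptions, while the 0.5 IoU threshold comes from the text:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def match_and_mine(priors, gts, bg_losses, iou_thr=0.5, neg_pos_ratio=3):
    """Mark priors whose IoU with any ground-truth box reaches iou_thr as
    positives; from the rest, keep only the hardest negatives (largest
    background-confidence loss) at neg_pos_ratio : 1."""
    pos = [i for i, p in enumerate(priors)
           if any(iou(p, g) >= iou_thr for g in gts)]
    neg = sorted((i for i in range(len(priors)) if i not in pos),
                 key=lambda i: bg_losses[i], reverse=True)
    return pos, neg[:neg_pos_ratio * max(len(pos), 1)]
```

Sorting negatives by their background-confidence loss is what makes the mining "hard": the kept negatives are exactly those the network currently misclassifies most.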
4) The same vision platform as in step 1 is set up, and the advance step of the production line is adjusted so that each step equals the acquired image size along the line direction (i.e., successive groups of pictures neither overlap nor leave gaps). At each step an image is acquired in real time for analysis; each group of images is divided exactly as in step 1, and the divided pictures are stored in database 2 as the temporary database of the production site, together with the position of each picture on the production line; the database is updated in real time according to the actual production situation. A coordinate system is established according to the actual size of the production line: the origin is at the inlet end of the line, the x-axis runs across the line width and the y-axis along its length, and one scale unit exactly contains one divided picture, i.e. each unit on both x and y spans 300 pixels. The final defect-detection result and the picture-coordinate result are combined to compute the specific position of a defect. The position-calibration formula for the x-axis is

X_i^j = x · α_x + x_i^j,  i = 1, 2, …, n;  j ∈ {upper left, lower left, upper right, lower right}

where i indexes the sample, j is the vertex of the defect detection frame, x_i^j is the x coordinate of vertex j of the defect detection frame of sample i, x is the coordinate of the sample picture along the x direction of the production line, α_x is the absolute x-axis size of each divided image (here 300), and X_i^j is the absolute x coordinate of the defect frame in the production-line coordinate system.
The y-axis formula is

Y_i^j = y · α_y + y_i^j,  i = 1, 2, …, n;  j ∈ {upper left, lower left, upper right, lower right}

where i indexes the sample, j is the vertex of the defect detection frame, y_i^j is the y coordinate of vertex j of the defect detection frame of sample i, y is the coordinate of the sample picture along the y direction of the production line, α_y is the absolute y-axis size of each divided image (here 300), and Y_i^j is the absolute y coordinate of the defect frame in the production-line coordinate system.
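The two calibration formulas above amount to offsetting a tile-relative vertex by the tile's line coordinate times the tile size; a minimal sketch (function name assumed):

```python
ALPHA_X = ALPHA_Y = 300  # absolute size of one divided image, per the text

def to_line_coords(tile_x, tile_y, vertex):
    """Convert a vertex coordinate relative to a tile into the production-line
    coordinate system: X = x * alpha_x + x_rel, Y = y * alpha_y + y_rel."""
    xr, yr = vertex
    return (tile_x * ALPHA_X + xr, tile_y * ALPHA_Y + yr)
```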
5) The stored SSD detection network model is rebuilt. According to the defect-picture position information provided in step 7, pictures are taken out of database 3 and input into the SSD detection network for detection and analysis. In the forward pass, the category confidences are checked and the background prior frames are filtered out, a IoU threshold is set to filter out sub-standard positive prior frames, and finally overlapping frames are removed with the soft non-maximum suppression algorithm (Soft-NMS); the final defect positions and the defect class at each position are output. The position information of each defect detection frame output by the network is converted into the 4 vertex coordinates of the frame and stored in database 2 in real time, then further integrated according to the defect position information: when a defect lies at the edge of a picture, the surrounding pictures are checked for defects, and truncated defects are fused and completed. The defect fusion completion algorithm is as follows:
repairing up and down: setting a defect of a picture i below the picture, setting a defect of a picture i+1 above the picture, connecting the two pictures adjacently up and down, and judging the picture iAnd->Whether or not to be equal to alpha y If the condition is satisfied, searching the picture i+1, checking whether a defect frame exists, if not, carrying out fusion of the defect frame, and if so, judging the +.>And->Whether or not is equal to 0, and judge +.>If the above conditions are satisfied, fusing the defect frames to obtain a new defect detection frame of +.> Wherein->For the y-coordinate or x-coordinate of the picture i defect frame at the corresponding vertex position, alpha y For each segmented image the absolute size in the y-axis direction, here 300,/i>For the picture i+1 defect frame, the y coordinate or the x coordinate of the corresponding vertex position is +.>Is the absolute xy coordinate set of the vertex of the defect frame in picture i or picture i+1.
Left-right repair: suppose the defect of picture j lies at the right side of that picture, the defect of picture j+1 at the left side of that picture, and the two pictures are horizontally adjacent. First judge whether x_j^{upper right} and x_j^{lower right} of picture j equal α_x. If so, look up picture j+1 and check whether it holds a defect frame; if not, no fusion of defect frames is performed; if so, judge whether x_{j+1}^{upper left} and x_{j+1}^{lower left} equal 0, and judge whether y_j^{upper right} = y_{j+1}^{upper left} and y_j^{lower right} = y_{j+1}^{lower left}. If all the above conditions hold, the defect frames are fused, giving a new defect detection frame with vertices {(X_j^{upper left}, Y_j^{upper left}), (X_{j+1}^{upper right}, Y_{j+1}^{upper right}), (X_j^{lower left}, Y_j^{lower left}), (X_{j+1}^{lower right}, Y_{j+1}^{lower right})}. Here y_j^i (x_j^i) is the y (x) coordinate of the defect frame of picture j at the corresponding vertex, α_x is the absolute x-axis size of each divided image (here 300), and (X^j, Y^j) are the absolute coordinates of the defect-frame vertices of picture j or picture j+1.
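Under the conditions above, the up-down fusion can be sketched in absolute line coordinates as follows. The exact-equality checks mirror the text's conditions; a practical system would likely allow a small pixel tolerance:

```python
ALPHA = 300  # tile size: a box touching a tile edge has a coordinate on a multiple of ALPHA

def fuse_boxes_vertical(box_top, box_bottom):
    """Fuse two defect boxes (x1, y1, x2, y2) in absolute line coordinates when
    the first ends exactly on the tile boundary where the second begins and
    their x-extents coincide, as the up-down repair conditions require.
    Returns the merged box, or None when the conditions fail."""
    if box_top[3] != box_bottom[1] or box_top[3] % ALPHA != 0:
        return None                      # not meeting across a tile boundary
    if (box_top[0], box_top[2]) != (box_bottom[0], box_bottom[2]):
        return None                      # x-extents must line up
    return (box_top[0], box_top[1], box_top[2], box_bottom[3])
```

The left-right case is symmetric, with the roles of x and y exchanged.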
6) Rechecking the SSD network calculation result by using a machine vision algorithm:
Offline operation on the vision-operation sample library. For the typical-defect sample table, recheck reference features are computed: each image is converted to grayscale, its histogram is computed, and the histograms obtained are transcoded into arrays and stored in the database for later use. For the defect-free sample table: each image is grayed, the Otsu algorithm is run on it to find a binarization threshold, the image is binarized with the threshold obtained, and a further closing operation is applied to the binarized image. The following processing then runs in parallel. On the one hand, the closed image is thinned and corner detection is performed (a corner here is an intersection of two or more edges) to obtain the number of interior corners of the image; dividing the corner count by the number of paper-core polygon contours of the detected image gives the number of corners per unit contour. Applying the same processing to all pictures in the table finally gives the average number of corners per unit contour:

B = (1/N) Σ_{i=1}^{N} b_i / l_i

and, at the same time, the average number of contours per unit area is computed for later use:

L = (1/N) Σ_{i=1}^{N} l_i / s_i

where b_i is the total number of corners of sample i, l_i the total number of contours of sample i, s_i the total area of sample i, and N the sample size. On the other hand, the area and perimeter of each paper-core polygon contour are computed directly and averaged over the picture to give the mean single-contour area and perimeter; applying this to all pictures in the sample library finally gives the common averages:

S = (1/N) Σ_{i=1}^{N} (1/M) Σ_{j=1}^{M} s_ij,  C = (1/N) Σ_{i=1}^{N} (1/M) Σ_{j=1}^{M} c_ij

where N is the sample size, M the corresponding number of contours in each sample, s_ij the area of the j-th contour of sample i, and c_ij the perimeter of the j-th contour of sample i. All results obtained by the offline operation are stored in memory for later use.
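The Otsu thresholding used on the defect-free samples can be sketched in pure NumPy (a stand-in for, e.g., OpenCV's `THRESH_OTSU`); the implementation below maximizes the between-class variance over all candidate thresholds:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu binarization threshold of an 8-bit grayscale image by
    maximizing between-class variance over the 256-bin histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        m0 = sum0 / w0                       # mean of class below threshold
        m1 = (sum_all - sum0) / (total - w0)  # mean of class above threshold
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```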
Online recheck operation. The image inside the obtained defect-frame region of the honeycomb paper core is processed as follows: it is grayed and its histogram computed, the histogram is compared one by one with the histograms of the corresponding defects in the typical-defect sample table, and the maximum Bhattacharyya distance is calculated:

d_max = max_i sqrt( 1 − (1 / sqrt(H̄_i · H̄ · N²)) · Σ_I sqrt(H_i(I) · H(I)) )

where H_i(I) is the number of pixels of gray level I in the histogram of the i-th typical defect sample, H(I) is the number of pixels of gray level I in the recheck histogram, H̄_i and H̄ are the means of the respective histograms, and N is the total number of bins counted in the histogram. The defect picture is given the same online preprocessing as in the offline operation, its corner count is computed and recorded as B_test, and the size of the defect frame is recorded as S_test. Taking the stored average corner count per unit contour B and average contour count per unit area L, the relative corner-count difference is calculated: ΔB = |B_test − B| / B. Likewise the means of the single-contour area and perimeter S_test, C_test are calculated; taking the offline values S and C, the relative differences of area and perimeter are calculated: ΔS = |S_test − S| / S, ΔC = |C_test − C| / C. Because only the framed defect position is rechecked, the frame is small and the online operation is extremely fast.
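Since H̄ · N in the formula above equals the histogram's total pixel count, the Bhattacharyya distance reduces to the following form (also the one OpenCV's `HISTCMP_BHATTACHARYYA` uses); it is 0 for identically shaped histograms and approaches 1 for non-overlapping ones:

```python
import math

def bhattacharyya_distance(h1, h2):
    """Bhattacharyya distance between two histograms given as sequences of
    per-bin counts. Scale-invariant: comparing shapes, not absolute counts."""
    n1, n2 = float(sum(h1)), float(sum(h2))
    bc = sum(math.sqrt(a * b) for a, b in zip(h1, h2))  # Bhattacharyya coefficient
    return math.sqrt(max(0.0, 1.0 - bc / math.sqrt(n1 * n2)))
```

d_max is then the maximum of this distance over all typical-defect histograms.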
The variables obtained from the online calculation are weighted:

δ = α · d_max + β · ΔB + γ · ΔS + ε · ΔC

giving the comprehensive evaluation index δ, where α, β, γ, ε are calculation parameters and d_max, ΔB, ΔS, ΔC are the results of the online recheck operation. The larger δ is, the greater the difference between the detected picture and a defect-free sample, and the higher the confidence that it is a defect picture. The gap between the values in the defect-free and defective cases is measured through repeated experiments, and a comprehensive evaluation threshold is selected according to the specific situation; when the threshold is not reached, the detection is judged false, and the defect-picture information stored in the temporary production database is deleted.
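A sketch of the weighted evaluation; the weights and threshold below are placeholder assumptions, since the patent leaves them to be calibrated by experiment on defective versus defect-free samples:

```python
def composite_score(d_max, d_corner, d_area, d_perim,
                    weights=(1.0, 1.0, 1.0, 1.0), threshold=1.0):
    """Comprehensive evaluation index delta = a*d_max + b*dB + g*dS + e*dC.
    Returns (delta, confirmed): confirmed is True when delta reaches the
    threshold, i.e. the SSD detection is upheld; otherwise it is judged a
    false detection and the stored defect record would be deleted."""
    a, b, g, e = weights
    delta = a * d_max + b * d_corner + g * d_area + e * d_perim
    return delta, delta >= threshold
```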
7) Cooperation with the repair system: the vision platform is at the inlet end of the production line and the repair system at its tail end, with the temporary production-line database storing the data. The part of the board corresponding to a detected defect-containing picture advances along the line into the working field of the repair system for the repair operation; that is, while one end detects, the other end performs the historical repair operation for the parts corresponding to defect-containing pictures. Picture data that have been repaired are moved to a spare table and deleted in real time, and the main table is updated continuously through the stepwise self-increment of the coordinate system (the real-time coordinate information stored with a picture increases automatically with each step of the line; a picture whose coordinate exceeds the upper limit of the coordinate system is moved to the spare table and deleted from the main table). The self-increment rule for a defect frame in the database is

Y_i^j ← Y_i^j + n · α_y

where Y_i^j is the absolute y coordinate of the defect frame in the production-line coordinate system, α_y is the absolute y-axis size of each divided image (here 300), and n is the number of steps experienced since entering the line; stepping does not change the absolute x coordinate of the defect frame in the line coordinate system.
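The self-increment rule can be sketched as a per-step update of the stored frames; the coordinate upper limit `Y_LIMIT` is an assumed parameter standing in for the line length:

```python
ALPHA_Y = 300   # y-advance of one line step, per the text
Y_LIMIT = 2400  # assumed upper limit of the line coordinate system

def step_defect_frames(frames, n=1):
    """Apply Y <- Y + n * alpha_y to stored defect frames (x1, y1, x2, y2);
    frames pushed past the coordinate upper limit go to the spare table
    (returned separately) and leave the main table. x is unchanged."""
    main, spare = [], []
    for (x1, y1, x2, y2) in frames:
        moved = (x1, y1 + n * ALPHA_Y, x2, y2 + n * ALPHA_Y)
        (spare if moved[1] >= Y_LIMIT else main).append(moved)
    return main, spare
```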
Compared with the prior art, the invention has the following beneficial effects:
1. Automatic detection of honeycomb paper core defects is realized with machine vision technology; the type and accurate position of each defect are provided to the repair system, supplying the algorithmic basis for the automatic defect repair operation of an industrial honeycomb paper core production system.
2. Among detection algorithms, the SSD network offers convenient training, accurate detection and high speed. The invention uses this network, after adjusting the stretching condition, to detect honeycomb paper cores containing defects, to locate defects of different categories, and to provide defect information to the next-stage repair system; the network's localization and classification are accurate and extremely fast, meeting the requirement of real-time detection. In particular, the method cuts the pipeline paperboard into small blocks for processing rather than feeding in the whole board directly, establishes a coordinate system, and determines the absolute coordinates of defects by combining each picture's relative production-line coordinates with the position information output by the SSD network. This effectively reduces the impact of the SSD network's known weakness on small-object detection and prevents missed defects.
3. A machine vision algorithm rechecks the deep-learning detection results, exploiting the greater stability of traditional algorithms in the industrial field and achieving high-speed rechecking by using the specific characteristics of the honeycomb paper core. The strengths of both approaches are thus combined while their weaknesses are avoided; the problem of false defect detection is effectively resolved, further improving the stability of industrial production and the quality of products.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a diagram of a system architecture of the present invention;
FIG. 3 is a diagram of an SSD defect detection network;
FIG. 4 is a diagram of the visual-operation recheck preprocessing effect.
Detailed Description
1. Production equipment at a honeycomb paper core industrial production site is run to generate a large number of honeycomb paper core samples. A visual platform is built on site with a fixed camera and constant lighting, and the acquisition size is set to a strip-shaped picture of width 2400 and height 300 pixels; the picture height of 300 serves as the measurement unit of the production line's advancement steps in later analysis. Each acquired strip picture is split equidistantly into 8 blocks, producing a group of [300, 300] images. Each picture is checked manually for defects; if defects exist, labelImg is used to mark their positions, and the calibrated pictures are stored in an SQL Server database together with their defect types: small hole, large hole, irregular gap and continuous fracture, with the category "background" reserved for counterexample marking. This database is named the training database; other pictures are discarded. Samples of all conditions must be collected, and collection stops only once each defect class reaches a certain sample count. Besides defects in the middle of a picture, defects cut apart by the splitting operation are also marked, so that such truncated defects can still be detected, strengthening the robustness of the model across different conditions.
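The equidistant splitting of a 2400×300 strip into eight adjacent 300×300 blocks can be sketched as follows (pure-Python row lists; names are illustrative):

```python
def split_strip(strip, block_w=300, block_h=300):
    """Split a strip picture (a list of pixel rows) into adjacent
    block_w x block_h blocks along the width."""
    height, width = len(strip), len(strip[0])
    assert height == block_h and width % block_w == 0
    blocks = []
    for bx in range(width // block_w):
        # take the same column slice from every row of the strip
        block = [row[bx * block_w:(bx + 1) * block_w] for row in strip]
        blocks.append(block)
    return blocks
```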
2. Data expansion is applied to the pictures in the training database, each of the following transforms being applied with 50% probability: random cropping with aspect ratio in [3/4, 4/3] and sampled area in [20%, 100%]; flipping the dataset with 50% probability; scaling of hue, saturation and brightness, with scale factors sampled uniformly from [0.6, 1.4]. The expanded pictures are stored in the corresponding database.
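A sketch of the sampling side of this augmentation under the stated ranges (function names and the retry limit are assumptions; the actual pixel transforms are omitted):

```python
import random

def sample_crop(img_w, img_h, rng, ratio=(3/4, 4/3), area=(0.20, 1.00)):
    """Sample a random crop box with aspect ratio in [3/4, 4/3] and area
    in [20%, 100%] of the image; retry until the box fits."""
    for _ in range(10):
        target_area = rng.uniform(*area) * img_w * img_h
        aspect = rng.uniform(*ratio)
        w = int(round((target_area * aspect) ** 0.5))
        h = int(round((target_area / aspect) ** 0.5))
        if 0 < w <= img_w and 0 < h <= img_h:
            x = rng.randint(0, img_w - w)
            y = rng.randint(0, img_h - h)
            return x, y, w, h
    return 0, 0, img_w, img_h  # fall back to the full image

def sample_color_scales(rng, lo=0.6, hi=1.4):
    """Uniform scale factors for hue, saturation and brightness."""
    return tuple(rng.uniform(lo, hi) for _ in range(3))
```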
3. An SSD network is built under the TensorFlow framework, with a VGG network as the base network for feature extraction, loading a model pre-trained on the ILSVRC CLS-LOC dataset. Fully connected layers fc6 and fc7 of VGG16 are converted into a 3×3 convolution conv6 and a 1×1 convolution conv7 respectively, and four further convolution stages conv8-11 (each with 1×1 and 3×3 convolution layers) are added after the base network. Six layers, conv4_3 in the base network together with conv7, conv8_2, conv9_2, conv10_2 and conv11_2, are extracted as feature maps; prior frames are generated according to the number of pixels on each feature map, and frames with aspect ratios [2, 1/2, 3, 1/3] are added to the original prior frames generated on each feature map to form the prior frame set. During training, prior frame matching is performed first: every prior frame is tested against the ground truth with an IoU (intersection over union) threshold of 0.5. Prior frames exceeding the threshold are positive examples; those not exceeding it are negative examples, marked with the background label reserved earlier; and the resulting positive and negative examples are stored in memory for later use.
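The IoU computation and prior-frame matching with the 0.5 threshold can be sketched as follows (box format (x1, y1, x2, y2) is an assumption):

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def match_priors(priors, gt_boxes, threshold=0.5):
    """Label each prior positive if it overlaps any ground truth above the
    threshold, otherwise negative (the reserved background class)."""
    return [any(iou(p, gt) >= threshold for gt in gt_boxes) for p in priors]
```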
Since prior frames below the threshold usually far outnumber those above it, which would unbalance the dataset, the Hard Negative Mining technique is adopted to sample the negatives: they are sorted in descending order of confidence error (the smaller the predicted background confidence, the larger the error), the top-k largest errors are selected as training negatives, and the positive-to-negative ratio is controlled at 1:3. Training is iterative, with momentum chosen as the gradient optimization method. The gradient update formula is v_t = γ·v_{t-1} + α·b, where v_t is the current gradient descent step, v_{t-1} the previous gradient descent step, γ the decay rate, b the current gradient direction and α the learning rate, set to decay exponentially (initial learning rate 0.005, decreased by 5% each epoch). Training continues until the IoU between the final test-set feedforward results and the actual ground truths reaches 70% and classification accuracy reaches 98%, and the best weight information is saved.
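The momentum update, decaying learning rate and hard-negative selection described above can be sketched as follows (names are illustrative):

```python
def momentum_step(v_prev, grad, gamma=0.9, lr=0.005):
    """v_t = gamma * v_{t-1} + lr * grad; the weight then moves by -v_t.
    gamma=0.9 is an assumed decay rate (not stated in the source)."""
    return gamma * v_prev + lr * grad

def decayed_lr(epoch, lr0=0.005, decay=0.05):
    """Exponentially decaying learning rate: 5% drop per epoch from 0.005."""
    return lr0 * (1.0 - decay) ** epoch

def hard_negative_mining(neg_conf_errors, num_pos, neg_pos_ratio=3):
    """Keep indices of the top-k hardest negatives so that
    negatives : positives = 3 : 1."""
    k = min(len(neg_conf_errors), neg_pos_ratio * num_pos)
    order = sorted(range(len(neg_conf_errors)),
                   key=lambda i: neg_conf_errors[i], reverse=True)
    return order[:k]
```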
4. The same visual platform as in step 1 is set up, with the line's forward steps controlled to match step 1, i.e. 300 pixels per step; image acquisition and analysis are performed between steps. Each acquired picture group receives the same splitting as in step 1 and is stored in an SQL Server database serving as the production-site temporary database, together with each picture's production-line position information. A coordinate system is established with the inlet end of the line as origin, the x-axis along the line's width and the y-axis along its advance direction; each picture's relative position on the axes is expressed in units of 300 pixels. The database updates its data according to whether the last repair has finished, i.e. two extra positions beyond the repair position in the y direction are retained (as margin, to prevent information loss), and data exceeding the y-axis upper limit is deleted from the database.
5. The SSD defect detection network built in step 3 is rebuilt and the weight information obtained by training in step 3 is loaded; the pictures acquired in step 4 are input in real time to monitor honeycomb paper core defects. Defective pictures and the corresponding network outputs are stored in the production-site temporary database; defect-free pictures are detected by the network as whole background images with no corresponding defect-type output, so no further operation is performed on them. Defects cut apart by the splitting are completed using a defect fusion algorithm, the line's temporary database is updated, the whole detection process steps along with the production line, and the real-time detection results are saved to the line's temporary database. The method specifically further comprises the following steps:
1) Defect detection network feedforward process: the class is determined from the category softmax layer output and prediction frames belonging to the background are filtered out. Unqualified positive-example frames are filtered by the rule that IoU must exceed a threshold of 0.65, and the overlapping prior frames are then pruned with the smooth non-maximum suppression algorithm (Soft-NMS), iteratively refining the remaining prior frames. The score reset function of Soft-NMS is s_i = s_i when IoU(M, b_i) < N_t, and s_i = s_i·(1 - IoU(M, b_i)) when IoU(M, b_i) ≥ N_t, where M is the high-scoring prior frame and b_i an overlapping candidate frame: when their IoU exceeds the threshold N_t, the candidate's score s_i is reduced. The calculation is repeated and the prior frame with the best score is selected as the final result.
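A sketch of linear Soft-NMS with the 0.65 threshold (the linear decay form is an assumption, since the score reset formula did not survive extraction; function names are illustrative):

```python
def soft_nms_linear(boxes, scores, nt=0.65):
    """Instead of deleting overlapping boxes outright, decay the score of
    any box whose IoU with an already-selected box exceeds nt."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / float(union)

    boxes, scores = list(boxes), list(scores)
    keep = []
    while boxes:
        m = max(range(len(scores)), key=scores.__getitem__)
        keep.append((boxes[m], scores[m]))
        best = boxes.pop(m)
        scores.pop(m)
        for i, b in enumerate(boxes):
            ov = iou(best, b)
            if ov > nt:
                scores[i] *= (1.0 - ov)  # s_i <- s_i * (1 - IoU(M, b_i))
    return keep
```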
2) The position information of the network's output defect detection frame is converted into the 4 vertex coordinates of the frame and stored in the production-site temporary database for later use. Taking the upper-left x coordinate as an example, the conversion formula is x_ul = d_cx - d_cw/2, where x_ul is the x coordinate of the defect frame's upper-left vertex, d_cx is the defect-frame center x coordinate output by the network, and d_cw is the defect-frame width output by the network.
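The same conversion generalizes to all four vertices; a minimal sketch (the vertex key names are assumptions):

```python
def center_to_corners(d_cx, d_cy, d_cw, d_ch):
    """Convert the network's (center x, center y, width, height) output
    into the four vertex coordinates stored in the temporary database."""
    x1, y1 = d_cx - d_cw / 2.0, d_cy - d_ch / 2.0  # upper-left
    x2, y2 = d_cx + d_cw / 2.0, d_cy + d_ch / 2.0  # lower-right
    return {"ul": (x1, y1), "ur": (x2, y1), "ll": (x1, y2), "lr": (x2, y2)}
```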
6. Rechecking: an SQL Server visual-operation sample library is established. Thirty defect-free sample images are collected in the same way as in step 1, split into [300, 300] blocks and stored in a defect-free sample table; 5 typical defect samples of each type are selected from the training sample library, the bounding-box region pictures are extracted and stored in a typical defect sample table, and the offline operation described in the summary of the invention is performed. Each typical defect picture is grayed, its gray-level histogram is computed, converted into an array and stored at the corresponding position of the typical defect sample table for later use. Each defect-free picture, after graying, is binarized with a threshold computed by the Otsu adaptive threshold algorithm and then processed with a closing operation, eliminating unconnected crossing points so that the honeycomb holes are continuous and complete; the final processing effect is shown in FIG. 4.
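Otsu's adaptive threshold, shown here as a dependency-free histogram version for illustration (in practice OpenCV's THRESH_OTSU flag would normally be used):

```python
def otsu_threshold(hist):
    """Otsu's method on a 256-bin gray-level histogram: pick the threshold
    that maximises the between-class variance."""
    total = sum(hist)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_b = 0       # background pixel count so far
    sum_b = 0.0   # background intensity sum so far
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                    # background mean
        m_f = (total_sum - sum_b) / w_f      # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```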
Two threads are started for multithreaded computation. Thread 1 thins the image, extracts its skeleton, counts the corner points (intersections of two or more edges) of the thinned skeleton image and simultaneously counts the number of contours in the image; dividing the corner count by the contour count gives the sample's corner count per unit contour. This operation is performed on all defect-free samples in the sample library, and the average corner count per single contour of the defect-free images is stored in memory for later use; the contour count per unit area is likewise computed over all samples, averaged and stored in memory. Thread 2 directly computes the area and perimeter of each paper-core polygonal contour in the image after the closing operation, takes the per-image average single-contour area and perimeter, performs this operation on all data in the table, and stores the averaged result in memory. In the online calculation, the same computations are performed on the image within the detection-frame region output by the SSD network: first its gray-level histogram is computed, the histogram group corresponding to the output defect type is selected (the histograms of the typical defect table having been loaded into memory), it is compared against each histogram in the group by computing the Bhattacharyya distance, and the maximum of the 5 computed values is kept.
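The Bhattacharyya comparison against the stored typical-defect histograms can be sketched as follows; the OpenCV-style distance formula is used here as an assumption (cv2.compareHist with HISTCMP_BHATTACHARYYA is the usual implementation):

```python
import math

def bhattacharyya(h1, h2):
    """OpenCV-style Bhattacharyya distance between two histograms:
    0 for identically shaped histograms, 1 for disjoint ones."""
    n = len(h1)
    m1 = sum(h1) / n
    m2 = sum(h2) / n
    s = sum(math.sqrt(a * b) for a, b in zip(h1, h2))
    val = 1.0 - s / math.sqrt(m1 * m2 * n * n)
    return math.sqrt(max(val, 0.0))

def max_typical_distance(hist, typical_hists):
    """Compare the recheck histogram with each stored typical-defect
    histogram and keep the maximum distance (d_max)."""
    return max(bhattacharyya(hist, h) for h in typical_hists)
```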
The average contour area, perimeter and corner count expected under the corresponding defect-free condition are then computed from the area of the defect frame; the absolute difference between each online result, obtained with the same algorithm, and the corresponding offline value gives the relative difference. After integration, 4 data items are obtained: d_max, Δ_b, Δ_s and Δ_c, i.e. the maximum histogram Bhattacharyya distance, the average corner-count difference, the average area difference and the average perimeter difference. These 4 values lie essentially in [0, 1], with a small number slightly above 1, so no normalization is needed; the comprehensive evaluation index is obtained by weighted addition: δ = α·d_max + β·Δ_b + γ·Δ_s + ε·Δ_c,
where the parameters α, β, γ and ε are set to 2, 0.5 and 1 respectively. Multiple experiments are carried out on site with a large number of example frames as samples (covering the various defect types, half of the frames containing defects); the difference in δ between the two kinds of sample is computed, giving a comprehensive evaluation threshold of 2.5. When the δ computed for an input defect-frame picture exceeds the threshold of 2.5, the detection is considered accurate and no false detection has occurred.
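The contour area and perimeter features feeding this recheck can be computed with the shoelace formula; a dependency-free sketch (in practice OpenCV's contourArea and arcLength would normally be used):

```python
import math

def polygon_area(pts):
    """Shoelace formula for a simple polygon given as (x, y) vertices."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

def polygon_perimeter(pts):
    """Sum of edge lengths around the closed polygon."""
    n = len(pts)
    return sum(math.dist(pts[i], pts[(i + 1) % n]) for i in range(n))
```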
7. The repair system traverses the line's temporary database in real time, checks the picture-group data stored at the tail-end position (within the repair field) and its detection results, and performs the repair work. Picture groups whose repair is complete are kept in a standby table built in the database and deleted from the main table; when the standby table exceeds 5 groups of historical data, it is pruned and updated in real time.
According to the embodiment of the invention, defects of the honeycomb paper core are detected in real time by establishing a deep-learning SSD network combined with a machine vision algorithm, and the honeycomb paper core is automatically repaired in cooperation with a defect repair device; localization is accurate and fast, and the production quality of the honeycomb paper core is improved.
Claims (4)
1. The honeycomb paper core defect detection method based on machine vision is characterized by comprising the following steps of:
1) Obtaining a honeycomb paper core picture sample containing defects, carrying out equidistant segmentation according to the aspect ratio, screening a segmented picture set, storing pictures containing the defects, classifying the defects in the pictures, carrying out position calibration on the defects, establishing a honeycomb paper core defect detection training database, and carrying out data expansion processing on the training database to different degrees;
2) Establishing an SSD target detection model, performing model training optimization by using samples in a training database, and storing optimized weight information to obtain a honeycomb paper core defect detection network model;
3) The synchronization of the image acquisition process and the stepping of the production line is ensured, the image acquisition is carried out once per step, the analysis of the acquired image is carried out at the step interval, and the acquired honeycomb paper core image is a long bar image; equidistant dividing of the long bar graph, establishing a production site temporary database, temporarily storing the obtained pictures, establishing a production line coordinate system, recording the relative positions of the pictures, and updating the picture coordinates in real time according to the production flow;
4) Sequentially inputting the pictures obtained in the step 3) into the honeycomb paper core defect detection network model established in the step 2), positioning and classifying defects in the pictures, storing the obtained defect information into a production temporary database for temporary storage, checking whether a defect detection frame cut by a cutting operation exists in real time, and fusing and complementing the cut defects;
5) Rechecking the defects, establishing a visual operation sample library, and offline calculating the area, perimeter, histogram and corner features of the sample library pictures by using a machine vision algorithm to obtain a comprehensive judgment threshold; four characteristics of a defect picture in a detection frame output by an SSD target detection model are calculated on line, compared with a threshold value obtained by offline calculation, whether false detection conditions exist or not is judged, and position information of false detection defects in a temporary database of a production site is deleted;
6) Transmitting the database information to a tail end defect repair unit to perform real-time defect repair operation; the specific implementation process of defect repair comprises the following steps: setting a visual platform at the inlet end of a production line, storing data by using a temporary database of the production line, enabling a part corresponding to a picture to be detected and obtained and containing defects to advance into a repairing field of a repairing system along the production line, then performing repairing operation, namely performing historical repairing work on the corresponding part of the picture containing the defects at the other end while detecting, placing the picture data after repairing treatment into a standby table for real-time deletion, updating a main table at any time according to the stepping self-increment of a coordinate system, namely performing real-time self-increment of real-time coordinate information stored by the picture according to the stepping of the production line, storing the real-time coordinate information exceeding the upper limit of the coordinate system into the standby table, and deleting the picture data from the main table;
the real-time coordinate information stored with the picture self-increments in real time with the stepping of the production line: y_abs^j = y^j + α_y·n, where y_abs^j is the absolute y-coordinate of defect-frame vertex j relative to the production-line coordinate system, α_y is the absolute y-direction size of each segmented image, and n is the number of steps experienced since entering the line; stepping does not change the absolute x-coordinate of the defect frame relative to the line coordinate system.
2. The machine vision-based honeycomb paper core defect detection method according to claim 1, wherein a production-line coordinate system is established with the x-axis along the width direction of the production line and the y-axis along its length direction, the scale is set to exactly contain one divided picture, and the formula for position calibration of a defect is as follows. X-axis conversion: x_abs^{ij} = x^{ij} + α_x·x, i = 1, 2, ..., N; j = {upper left, lower left, upper right, lower right}; where i indexes the sample, j is the vertex position of the defect detection frame, x^{ij} is the x coordinate of vertex j of the i-th sample's defect detection frame, x is the relative coordinate of the sample picture in the x direction of the production line, and α_x is the absolute x-direction size of each segmented image; the obtained x_abs^{ij} is the absolute x-coordinate of the defect frame relative to the production-line coordinate system. Y-axis conversion: y_abs^{ij} = y^{ij} + α_y·y, i = 1, 2, ..., N; j = {upper left, lower left, upper right, lower right}; where i indexes the sample, j is the vertex position of the defect detection frame, y^{ij} is the y coordinate of vertex j of the i-th sample's defect detection frame, y is the relative coordinate of the sample picture in the y direction of the production line, and α_y is the absolute y-direction size of each segmented image; the obtained y_abs^{ij} is the absolute y-coordinate of the defect frame relative to the production-line coordinate system.
3. The machine vision-based honeycomb paper core defect detection method according to claim 1, wherein the specific implementation process of rechecking the defect comprises the following steps:
1) Establishing a visual operation sample library, whose storage tables comprise: a defect-free sample table and a typical defect sample table;
2) Converting the images in the typical defect sample table to grayscale, performing the histogram operation, and converting the obtained histograms into arrays stored in the database for later use;
3) Graying the images in the defect-free sample table, running the Otsu algorithm to find a binarization threshold, binarizing with the threshold obtained by the algorithm, and applying a further closing operation to the binarized image; the following processing is then carried out in parallel. On one hand, the obtained closed image is thinned and corner detection is performed to obtain the image's corner count; dividing the corner count by the number of paper-core polygonal contours in the measured image gives the corner count per unit contour. All pictures in the table are processed in the same way, and the average corner count per unit contour is finally B = (1/N)·Σ_{i=1}^{N} b_i/l_i, while the average contour count per unit area is simultaneously computed for later use as L = (1/N)·Σ_{i=1}^{N} l_i/s_i, where b_i is the total corner count of sample i, l_i is the total contour count of sample i, s_i is the total area of sample i, and N is the sample count. On the other hand, the area and perimeter of each paper-core polygonal contour are computed directly and the per-picture averages of the single-contour area and perimeter are taken; performing the above operation on all pictures in the sample library finally gives the common averages S = (1/N)·Σ_{i=1}^{N} (1/M)·Σ_{j=1}^{M} s_ij and C = (1/N)·Σ_{i=1}^{N} (1/M)·Σ_{j=1}^{M} c_ij, where N is the sample count, M is the corresponding number of contours in each sample, s_ij is the area of the j-th contour of sample i, and c_ij is the perimeter of the j-th contour of sample i;
4) The image in the honeycomb paper core defect-frame region is processed as follows: graying and histogram computation, then one-by-one comparison with the histograms of the corresponding defect type in the typical defect sample table, computing the maximum Bhattacharyya distance d_max = max_i d(H_i, H), with d(H_i, H) = sqrt(1 - (1/sqrt(H̄_i·H̄·N²))·Σ_I sqrt(H_i(I)·H(I))), where H_i(I) represents the number of pixels with gray level I in the i-th typical defect sample histogram, H(I) is the number of pixels with gray level I in the recheck histogram, H̄ is the mean histogram value, and N is the total number of pixels counted in the histogram. The same preprocessing as in 3) is applied online to the defect picture and its corner count is computed, the result being denoted B_test; with the size of the defect frame denoted S_test, the stored average corner count per unit contour B and average contour count per unit area L are taken, and the relative corner-count difference is Δ_b = |B_test - B·L·S_test|/(B·L·S_test). The averages S_test and C_test of the average single-contour area and perimeter are computed, the average single-contour area and perimeter S and C obtained by offline calculation are taken, and the relative area and perimeter differences are Δ_s = |S_test - S|/S and Δ_c = |C_test - C|/C;
5) Using the formula δ = α·d_max + β·Δ_b + γ·Δ_s + ε·Δ_c to obtain the comprehensive evaluation index δ, where α, β, γ and ε are calculation parameters; a larger δ indicates higher confidence that the picture contains defects. The δ gap between the defective and defect-free cases is determined by multiple experiments and a comprehensive evaluation threshold is selected; when δ does not reach the threshold, the detection is judged false and the defect picture information stored in the temporary production database is deleted.
4. The machine vision-based honeycomb paper core defect detection method according to claim 1, wherein the specific implementation process of fusion completion of the truncated defect comprises:
Up-down repair: the defect of picture i lies at the bottom of its picture and the defect of picture i+1 at the top of its picture, the two pictures being vertically adjacent. It is judged whether the lower-left and lower-right y-coordinates y^{i,ll} and y^{i,lr} of picture i's defect frame are equal to α_y; if so, picture i+1 is searched and checked for a defect frame whose upper-left and upper-right y-coordinates equal 0 and whose x-coordinates coincide with those of picture i's frame; if no such frame exists, no defect-frame fusion is performed. If the above conditions are satisfied, the defect frames are fused into a new defect detection frame whose vertices are the absolute coordinates of picture i's upper vertices and picture i+1's lower vertices, where y^{i,j} and x^{i,j} are the y- or x-coordinate of picture i's defect frame at vertex position j, α_y is the absolute y-direction size of each segmented image, y^{i+1,j} and x^{i+1,j} are the y- or x-coordinate of picture i+1's defect frame at vertex position j, and x_abs, y_abs denote the absolute xy coordinates of the defect-frame vertices of picture i or picture i+1 in the production-line coordinate system;
Left-right repair: the defect of picture j lies at the right side of its picture and the defect of picture j+1 at the left side of its picture, the two pictures being horizontally adjacent. It is judged whether the upper-right and lower-right x-coordinates x^{j,ur} and x^{j,lr} of picture j's defect frame are equal to α_x; if so, picture j+1 is searched and checked for a defect frame; if none exists, no defect-frame fusion is performed; if one exists, it is judged whether its upper-left and lower-left x-coordinates equal 0 and whether its y-coordinates coincide with those of picture j's frame. If the above conditions are satisfied, the defect frames are fused into a new defect detection frame whose vertices are the absolute coordinates of picture j's left vertices and picture j+1's right vertices, where x^{j,·} and y^{j,·} are the x- or y-coordinate of picture j's defect frame at the corresponding vertex position, α_x is the absolute x-direction size of each segmented image, x^{j+1,·} and y^{j+1,·} are the x- or y-coordinate of picture j+1's defect frame at the corresponding vertex position, and x_abs, y_abs denote the absolute xy coordinates of the defect-frame vertices of picture j or picture j+1 in the production-line coordinate system.
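The up-down test-and-merge described in this claim can be sketched as follows (the (x1, y1, x2, y2) frame layout and helper names are assumptions):

```python
ALPHA_Y = 300  # height of each segmented block, per the description

def touches_bottom(frame, alpha_y=ALPHA_Y):
    """frame = (x1, y1, x2, y2) in block-local pixels; y2 is the bottom edge."""
    return frame[3] == alpha_y

def touches_top(frame):
    return frame[1] == 0

def fuse_vertical(frame_i, frame_j, y_offset=ALPHA_Y):
    """Merge a frame cut at the bottom of block i with its continuation at
    the top of block i+1, shifting frame_j into block i's coordinates."""
    x1 = min(frame_i[0], frame_j[0])
    y1 = frame_i[1]                 # top edge comes from block i
    x2 = max(frame_i[2], frame_j[2])
    y2 = y_offset + frame_j[3]      # bottom edge comes from block i+1
    return (x1, y1, x2, y2)
```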
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910418796.5A CN110111331B (en) | 2019-05-20 | 2019-05-20 | Honeycomb paper core defect detection method based on machine vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110111331A CN110111331A (en) | 2019-08-09 |
CN110111331B true CN110111331B (en) | 2023-06-06 |
Similar Documents
Publication | Title |
---|---|
CN110111331B (en) | Honeycomb paper core defect detection method based on machine vision |
CN111062915B (en) | Real-time steel pipe defect detection method based on improved YOLOv3 model |
CN105388162B (en) | Raw material silicon chip surface scratch detection method based on machine vision |
CN110927171A (en) | Bearing roller chamfer surface defect detection method based on machine vision |
CN108711148B (en) | Intelligent tire defect detection method based on deep learning |
CN110992349A (en) | Automatic localization and identification method for underground pipeline abnormalities based on deep learning |
KR20040111529A (en) | Surface defect judging method |
CN111815573B (en) | Coupling outer wall detection method and system based on deep learning |
CN111914902B (en) | Traditional Chinese medicine identification and surface defect detection method based on deep neural network |
CN113674216A (en) | Subway tunnel defect detection method based on deep learning |
CN112634237A (en) | Long bamboo strip surface defect detection method and system based on an improved YOLOv3 network |
KR20210122429A (en) | Method and system for artificial-intelligence-based quality inspection in manufacturing processes using machine vision deep learning |
CN109685793A (en) | Pipe shaft defect detection method and system based on three-dimensional point cloud data |
CN115147363A (en) | Image defect detection and classification method and system based on deep learning algorithms |
CN116109633B (en) | Window detection method and device for bearing retainers |
CN115597494B (en) | Precision detection method and system for preformed holes in prefabricated parts based on point clouds |
CN113627435A (en) | Method and system for detecting and identifying flaws in ceramic tiles |
CN118097310B (en) | Method for digitally detecting concrete surface defects |
CN113313107A (en) | Intelligent detection and identification method for multiple types of defects on cable-stayed bridge cable surfaces |
CN114359235A (en) | Wood surface defect detection method based on an improved YOLOv5l network |
Guerra et al. | Standard quantification and measurement of damages through features characterization of surface imperfections on 3D models: an application on Architectural Heritages |
CN109615610B (en) | Medical band-aid flaw detection method based on YOLO v2-tiny |
CN111178405A (en) | Similar object identification method fusing multiple neural networks |
Samdangdech et al. | Log-end cut-area detection in images taken from rear end of eucalyptus timber trucks |
CN112633286B (en) | Intelligent security inspection system based on similarity rate and recognition probability of dangerous goods |
Legal Events
Code | Title |
---|---|
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |