CN114972285A - Fine detection method for sawtooth welding defects - Google Patents

Fine detection method for sawtooth welding defects

Info

Publication number
CN114972285A
CN114972285A (application CN202210645597.XA)
Authority
CN
China
Prior art keywords
saw blade
target detection
area
saw
rectangular frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210645597.XA
Other languages
Chinese (zh)
Inventor
王国栋
郭晓杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Promadi Computing Technology Co ltd
Original Assignee
Nanjing Promadi Computing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Promadi Computing Technology Co ltd
Priority to CN202210645597.XA
Publication of CN114972285A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0004 - Industrial image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 - Target detection
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 - Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a fine detection method for sawtooth welding defects, comprising the following steps: identifying the saw blade to obtain a target detection rectangular frame, and revising the frame to ensure the saw blade is detected in its entirety; marking the region of each saw tooth within the target detection rectangular frame; classifying the marked saw teeth and correcting the orientation of the saw tooth cutter heads according to the classification result; and framing the welding area within the corrected saw teeth, completing identification of the welding area with a target detection algorithm, and judging the defect type. The method runs a target detection algorithm and the Canny edge detection algorithm in parallel to locate the saw blade and fuses their results so that the blade is cropped correctly. Following a divide-and-conquer strategy, different problems in the detection pipeline are solved with different techniques, and the resulting scheme can detect welding defects on saw blades with 24 to 120 teeth.

Description

Fine detection method for sawtooth welding defects
Technical Field
The invention relates to the fields of computer vision and intelligent manufacturing, and in particular to a fine detection method for sawtooth welding defects.
Background
In the saw blade production industry, quality inspection after the cutter heads are welded is an important link in the production process. Detecting cutter head welding defects in time is of great value for tracing production problems, improving production efficiency, and grading and pricing saw blades.
The traditional approach to cutter head welding quality inspection relies on workers examining every blade by eye and recording the defect types found in the welds. With rising living standards, however, enterprises increasingly struggle to recruit for such simple, repetitive work, and labor costs are high. Manual inspection of saw blade welding defects is also prone to false detections and missed detections when inspectors are fatigued. In recent years, computer vision has been applied ever more widely to industrial defect detection: image algorithms such as target detection, semantic segmentation, and image classification can automatically identify the position and type of defects in a product, eliminating the influence of human factors, greatly reducing the dependence on manual labor, and improving production efficiency.
No effective solution has yet been proposed for these problems in the related art.
Disclosure of Invention
To address the problems in the related art, the invention provides a fine detection method for sawtooth welding defects that overcomes the technical problems of the existing related art.
Therefore, the invention adopts the following specific technical scheme:
a method for finely detecting sawtooth welding defects comprises the following steps:
S1, identifying the saw blade to obtain a target detection rectangular frame, and revising the target detection rectangular frame to ensure the detection integrity of the saw blade;
S2, marking the region of each saw tooth within the target detection rectangular frame;
S3, classifying the marked saw teeth, and correcting the orientation of the saw tooth cutter heads according to the classification result;
and S4, framing the welding area within the corrected saw teeth, completing identification of the sawtooth welding area with a target detection algorithm, and judging the defect type.
Further, identifying the saw blade in S1 to obtain the target detection rectangular frame and revising the target detection rectangular frame to ensure the detection integrity of the saw blade further comprises the following steps:
completing identification of the saw blade with the single-stage target detection algorithm RetinaNet, outputting the region of the target detection rectangular frame, and using Res2Net as the backbone network for feature extraction;
detecting the saw blade region with the Canny edge detection algorithm, and computing the maximum circumscribed rectangle of the detected edges;
and computing the intersection-over-union of the two regions obtained by the single-stage target detection algorithm and the edge detection algorithm; if the intersection-over-union exceeds a set threshold, revising the target detection rectangular frame with the Canny result to complete cropping of the saw blade picture.
Further, when the intersection-over-union exceeds the set threshold and the target detection rectangular frame is revised using the Canny edge detection result, let A denote the region of the target detection rectangular frame output by RetinaNet and B denote the maximum circumscribed rectangle of the edges computed by Canny; the intersection-over-union is then expressed as:
\mathrm{IOU} = \frac{\mathrm{area}(A \cap B)}{\mathrm{area}(A \cup B)}
If the intersection-over-union IOU is larger than the set threshold, the target detection rectangular frame is revised as:
x_1 = \min(x_{o1}, x_{c1}),\quad y_1 = \min(y_{o1}, y_{c1}),\quad x_2 = \max(x_{o2}, x_{c2}),\quad y_2 = \max(y_{o2}, y_{c2})
where (x_{o1}, y_{o1}) and (x_{o2}, y_{o2}) are the coordinates of the upper-left and lower-right points of region A; (x_{c1}, y_{c1}) and (x_{c2}, y_{c2}) are the coordinates of the upper-left and lower-right points of region B; and (x_1, y_1) and (x_2, y_2) are the coordinates of the upper-left and lower-right points of the corrected region.
Further, completing identification of the saw blade with the single-stage target detection algorithm RetinaNet, outputting the region of the target detection rectangular frame, and using Res2Net as the backbone network for feature extraction further comprises the following steps:
preparing data: labeling the collected saw blade pictures with labelimg, marking the maximum circumscribed rectangle of the saw blade, and exporting labels in COCO data set format from the labeled content;
and training a saw blade detection model: inputting the collected saw blade pictures and the generated labels into the target detection model RetinaNet, training the saw blade detection model, and outputting region A, the target detection rectangular frame of the saw blade, from RetinaNet.
Furthermore, the backbone network of RetinaNet is Res2Net50; the features of branches 0, 1, 2, and 3 of Res2Net are fed into an FPN network to fuse features from different levels, and the FPN output is in turn fed into the Retina head.
Further, detecting the saw blade region with the Canny edge detection algorithm and computing the maximum circumscribed rectangle of the edges further comprises the following steps:
detecting the edges of the saw blade with the Canny edge detection algorithm;
and from the Canny output, finding the contour of the saw blade with the findContours method in OpenCV and computing the maximum circumscribed rectangle of the edges from the contour information found.
Further, when the saw tooth regions are marked within the target detection rectangular frame in S2, the instance segmentation algorithm Mask R-CNN is used to train an instance segmentation model that identifies the region of each saw tooth on the blade.
Further, training the instance segmentation model with Mask R-CNN and identifying the region of each saw tooth on the blade further comprises the following steps:
preparing data: drawing polygons on the cropped saw blade pictures with labelme to mark the position of each saw tooth, and exporting labels in COCO data set format from the labeled content;
data preprocessing: scaling and padding the cropped saw blade pictures to a width of 2592 and a height of 1944, and normalizing the pictures with the normalize method of OpenCV;
training a cutter head segmentation model: inputting the exported COCO-format labels and the scaled pictures into the instance segmentation network Mask R-CNN and training a saw tooth cutter head segmentation model;
and model output post-processing: scaling the mask matrix output by the cutter head segmentation model back to the size of the original image, finding the contour information of the saw teeth in the mask matrix with findContours, and converting the contour information into coordinate points for output.
Further, classifying the marked saw teeth in S3 and correcting the orientation of the saw tooth cutter heads according to the classification result further comprises the following steps:
preparing data: dividing the segmented cutter heads into eight categories and storing them in separate folders following the ImageNet directory format;
training a classification model: inputting the categorized samples into ResNet34 and training a cutter head orientation classification model;
and rotating the cutter heads to the same orientation according to the classification result.
Further, framing the welding area within the corrected saw teeth in S4, completing identification of the sawtooth welding area with a target detection algorithm, and judging the defect type further comprises the following steps:
preparing data: labeling the welding positions on the corrected saw teeth with labelme and exporting labels in COCO data set format from the labeled content;
data preprocessing: statistically analyzing the widths and heights of the sample pictures, taking the median width and height over the sample set as the input size, and scaling and padding the sample pictures accordingly;
training a target detection model: inputting the preprocessed sample pictures into RetinaNet and training a defect detection model;
and outputting the welding position of the cutter head on each saw tooth and its defect type according to the detection result.
The beneficial effects of the invention are as follows:
The invention runs target detection and the Canny edge detection algorithm in parallel to locate the saw blade and fuses the results to ensure the blade is cropped correctly. Because the saw teeth are densely spaced, a target detection algorithm alone tends to crop teeth incorrectly, so precise cropping of the saw teeth is achieved with an instance segmentation algorithm; the cutter head orientation is then corrected by classification, and welding defects are located and identified within each saw tooth region by target detection. Following a divide-and-conquer strategy, the invention uses several techniques to solve the different problems in the detection pipeline, and the resulting scheme can detect welding defects on saw blades with 24 to 120 teeth.
Drawings
To more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed for the embodiments are briefly described below. The drawings in the following description show only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of a fine detection method for sawtooth welding defects according to an embodiment of the present invention;
FIG. 2 is a schematic view of a saw tooth cutter head before and after orientation correction according to the fine detection method for sawtooth welding defects of an embodiment of the present invention.
Detailed Description
For further explanation of the various embodiments, drawings are provided as part of the disclosure. They form part of this specification, illustrate the embodiments, and together with the description explain their principles of operation, enabling a person of ordinary skill in the art to understand the embodiments and the advantages of the invention. The drawings are not to scale, and like reference numerals generally refer to like elements.
According to an embodiment of the invention, a fine detection method for sawtooth welding defects is provided. By combining target detection, instance segmentation, and image classification techniques from computer vision, saw blade detection, saw tooth segmentation, cutter head orientation correction, and saw tooth defect detection are performed in sequence, finally achieving fine detection of sawtooth welding defects. First, to address the uneven sizes of saw blades and interference from clutter outside the blade area, the saw blade is extracted from the picture with both a target detection technique and an edge detection technique, and the blade positions obtained by the two methods are fused to ensure no saw teeth are cropped away by mistake. For the problem of dense or very small saw teeth, an instance segmentation algorithm separates the teeth from the blade. For the problem of inconsistent cutter head orientation across teeth, an orientation classification model is trained to correct all teeth to a uniform orientation. Finally, a target detection algorithm identifies and classifies the defects on the corrected segmented pictures. The proposed fine detection method can detect welding defects on saw blades with 24 to 120 teeth.
The invention is further described with reference to the drawings and the detailed description. As shown in FIG. 1, which is a flowchart of the fine detection of sawtooth welding defects, a method according to an embodiment of the invention takes a photo to be inspected as input, performs the four steps of saw blade detection, saw tooth segmentation, cutter head orientation correction, and saw tooth defect detection from left to right, and finally outputs the position of each defective saw tooth on the blade and its defect type. The method comprises the following steps:
S1, identifying the saw blade to obtain a target detection rectangular frame, and revising the target detection rectangular frame to ensure the detection integrity of the saw blade; the saw blade detection is abstracted as a target detection problem, and a region correction method combining edge detection is proposed.
Saw blade detection: identifying the position of the saw blade in the collected saw blade picture and cropping it out.
Identifying the saw blade in S1, obtaining the target detection rectangular frame, and revising the target detection rectangular frame to ensure the detection integrity of the saw blade further comprises the following steps:
completing identification of the saw blade with the single-stage target detection algorithm RetinaNet, outputting the region of the target detection rectangular frame, and using Res2Net as the backbone network for feature extraction;
to avoid an incomplete detected region affecting subsequent defect detection, also detecting the saw blade region with the Canny edge detection algorithm and computing the maximum circumscribed rectangle of the detected edges;
and computing the intersection-over-union of the two regions obtained by the single-stage target detection algorithm and the edge detection algorithm; if the intersection-over-union exceeds a set threshold, revising the target detection rectangular frame with the Canny result to complete cropping of the saw blade picture.
When the intersection-over-union exceeds the set threshold and the target detection rectangular frame is revised using the Canny edge detection result, let A denote the region of the target detection rectangular frame output by RetinaNet and B denote the maximum circumscribed rectangle of the edges computed by Canny; the intersection-over-union is then expressed as:
\mathrm{IOU} = \frac{\mathrm{area}(A \cap B)}{\mathrm{area}(A \cup B)}
If the intersection-over-union IOU is larger than the set threshold, the target detection rectangular frame is revised as:
x_1 = \min(x_{o1}, x_{c1}),\quad y_1 = \min(y_{o1}, y_{c1}),\quad x_2 = \max(x_{o2}, x_{c2}),\quad y_2 = \max(y_{o2}, y_{c2})
where (x_{o1}, y_{o1}) and (x_{o2}, y_{o2}) are the coordinates of the upper-left and lower-right points of region A; (x_{c1}, y_{c1}) and (x_{c2}, y_{c2}) are the coordinates of the upper-left and lower-right points of region B; and (x_1, y_1) and (x_2, y_2) are the coordinates of the upper-left and lower-right points of the corrected region.
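As an illustration only, the following Python sketch shows this fusion step; the 0.8 threshold and the fallback to the RetinaNet box when the two regions disagree are assumptions rather than values specified in the description.

    def iou(box_a, box_b):
        """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
        ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / float(area_a + area_b - inter)

    def revise_blade_box(box_retina, box_canny, threshold=0.8):
        """Fuse the RetinaNet box (region A) with the Canny bounding box (region B).

        When the two regions agree (IoU above the threshold), take their enclosing
        rectangle so that no saw tooth is cropped away; otherwise keep the detector box.
        """
        if iou(box_retina, box_canny) > threshold:
            return (min(box_retina[0], box_canny[0]), min(box_retina[1], box_canny[1]),
                    max(box_retina[2], box_canny[2]), max(box_retina[3], box_canny[3]))
        return box_retina  # assumption: fall back to the RetinaNet output when they disagree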
Completing identification of the saw blade with the single-stage target detection algorithm RetinaNet, outputting the region of the target detection rectangular frame, and using Res2Net as the backbone network for feature extraction comprises the following steps:
preparing data: labeling the collected saw blade pictures with labelimg (an image annotation tool), marking the maximum circumscribed rectangle of the saw blade, and exporting labels in COCO data set format from the labeled content;
training a saw blade detection model: inputting the collected saw blade pictures and the generated labels into the target detection model RetinaNet, training the saw blade detection model, and outputting region A, the target detection rectangular frame of the saw blade, from RetinaNet.
The backbone network of RetinaNet is Res2Net50 (a residual network variant); the features of branches 0, 1, 2, and 3 of Res2Net are fed into an FPN (feature pyramid network) to fuse features from different levels, and the FPN output is in turn fed into the Retina head.
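A minimal model-configuration sketch of this detector follows, assuming the MMDetection framework is used (the description does not name a training framework); the numeric hyperparameters shown are common defaults rather than values taken from the patent.

    # MMDetection-style config: RetinaNet with a Res2Net-50 backbone whose four stage
    # outputs (branches 0-3) feed an FPN, whose multi-level outputs feed the Retina head.
    model = dict(
        type='RetinaNet',
        backbone=dict(
            type='Res2Net', depth=50, scales=4, base_width=26,
            num_stages=4, out_indices=(0, 1, 2, 3),          # export branches 0, 1, 2, 3
            frozen_stages=1, style='pytorch'),
        neck=dict(
            type='FPN', in_channels=[256, 512, 1024, 2048],  # channels of the four Res2Net stages
            out_channels=256, start_level=1,
            add_extra_convs='on_input', num_outs=5),         # fuses features from different levels
        bbox_head=dict(
            type='RetinaHead', num_classes=1,                # a single "saw blade" class
            in_channels=256, stacked_convs=4, feat_channels=256))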
Detecting the saw blade region with the Canny edge detection algorithm and computing the maximum circumscribed rectangle of the edges further comprises the following steps:
detecting the edges of the saw blade with the Canny edge detection algorithm;
from the Canny output, finding the contour of the saw blade with the findContours function of OpenCV (a contour detection function) and computing the maximum circumscribed rectangle of the edges from the contour information found.
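A minimal OpenCV sketch of this edge-based branch follows; the Gaussian blur, the Canny thresholds, and the choice of the largest external contour are assumptions not fixed by the description.

    import cv2

    def blade_box_from_edges(image_bgr, low_thresh=50, high_thresh=150):
        """Canny edges -> external contours -> bounding rectangle of the largest contour.

        Returns the blade region B as (x1, y1, x2, y2), or None if no contour is found.
        """
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (5, 5), 0)             # suppress noise before edge detection
        edges = cv2.Canny(gray, low_thresh, high_thresh)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        largest = max(contours, key=cv2.contourArea)         # assume the blade is the largest contour
        x, y, w, h = cv2.boundingRect(largest)
        return (x, y, x + w, y + h)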
Saw tooth segmentation: cropping the individual saw teeth out of the cropped saw blade.
The saw tooth extraction problem is cast as an instance segmentation problem on the image. The output of a target detector is a rectangular frame; when such a frame is used to crop a saw tooth, parts of the neighboring teeth are cropped in as well, and the problem becomes more severe when the teeth are dense, which directly degrades the precision of the subsequent defect detection.
S2, marking the sawtooth area in the target detection rectangular frame;
when the saw tooth regions are marked in the target detection rectangular frame in the step S2, an example segmentation algorithm MaskRCNN (a two-stage frame) is adopted to train an example separation model, and the region where each saw tooth in the saw blade is located is identified.
Training the instance segmentation model with Mask R-CNN and identifying the region of each saw tooth on the blade further comprises the following steps:
preparing data: drawing polygons on the cropped saw blade pictures with labelme to mark the position of each saw tooth, and exporting labels in COCO data set format from the labeled content;
data preprocessing: scaling and padding the cropped saw blade pictures to a width of 2592 and a height of 1944, and normalizing the pictures with the normalize method of OpenCV;
training a cutter head segmentation model: inputting the exported COCO-format labels and the scaled pictures into the instance segmentation network Mask R-CNN and training a saw tooth cutter head segmentation model, where the backbone network of Mask R-CNN is Res2Net50;
model output post-processing: scaling the mask matrix output by the cutter head segmentation model back to the size of the original image, finding the contour information of the saw teeth in the mask matrix with findContours, and converting the contour information into coordinate points for output (a code sketch of this preprocessing and post-processing follows).
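The following is a minimal sketch of the preprocessing and mask post-processing around the segmentation model, assuming OpenCV; padding toward the lower-right corner, min-max normalization, and the 0.5 mask threshold are assumptions made for illustration.

    import cv2
    import numpy as np

    TARGET_W, TARGET_H = 2592, 1944   # input size used for the cutter head segmentation model

    def scale_and_pad(image_bgr):
        """Scale the cropped blade picture to fit 2592x1944, pad the rest, and normalize."""
        h, w = image_bgr.shape[:2]
        scale = min(TARGET_W / w, TARGET_H / h)
        resized = cv2.resize(image_bgr, (int(w * scale), int(h * scale)))
        canvas = np.zeros((TARGET_H, TARGET_W, 3), dtype=resized.dtype)
        canvas[:resized.shape[0], :resized.shape[1]] = resized       # pad toward the lower right
        normalized = cv2.normalize(canvas, None, 0, 255, cv2.NORM_MINMAX)
        return normalized, scale

    def mask_to_contour_points(mask, original_w, original_h, threshold=0.5):
        """Rescale a predicted tooth mask to the original image and return contours as (N, 2) points."""
        mask = cv2.resize(mask.astype(np.float32), (original_w, original_h))
        binary = (mask > threshold).astype(np.uint8)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return [c.reshape(-1, 2) for c in contours]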
Cutter head orientation correction: because the saw blade is circular, the cropped cutter heads point in different directions, which strongly affects the subsequent defect detection, so an image classification method is used to correct the cutter heads to a uniform orientation.
Whether the blade teeth point clockwise or counterclockwise is not fixed, and the appearance of sawtooth welding defects differs with the cutter head orientation; on this basis, correcting the saw tooth cutter head orientation is abstracted as an image classification problem.
S3, classifying the marked sawteeth, and correcting the direction of a sawtooth tool bit according to the classification result;
wherein, classifying the marked sawteeth in the step S3, and correcting the direction of the saw bit according to the classification result further comprises the following steps:
preparing data: dividing the divided cutter heads into eight categories, namely upward, downward, leftward, rightward, leftward, rightward downward, leftward, rightward upward and rightward downward, and sequentially storing the categories into different folders according to the format of imagenet;
training a classification model: inputting the classified samples into ResNet34 and training the cutter head orientation classification model
Based on the classification result, the tool bit is rotated to the same direction, e.g., downward, and fig. 2 is a schematic view of the tool bit before and after the alignment.
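The following is a minimal sketch of this stage, assuming a torchvision ResNet34 classifier and PIL for rotation; the class order, the rotation-angle table, and the weights file name are illustrative assumptions rather than values from the patent.

    import torch
    import torchvision
    from PIL import Image
    from torchvision import transforms

    # Eight orientation classes (assumed order) and the counterclockwise rotation, in PIL's
    # convention, assumed to bring a cutter head of that class to point "down" in the image.
    CLASSES = ["up", "down", "left", "right", "up-left", "up-right", "down-left", "down-right"]
    ROTATE_TO_DOWN = {"up": 180, "down": 0, "left": 90, "right": -90,
                      "up-left": 135, "up-right": -135, "down-left": 45, "down-right": -45}

    model = torchvision.models.resnet34(num_classes=len(CLASSES))
    model.load_state_dict(torch.load("tooth_orientation_resnet34.pth", map_location="cpu"))  # hypothetical weights file
    model.eval()

    preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

    def correct_orientation(tooth_image: Image.Image) -> Image.Image:
        """Classify the cutter head orientation and rotate the crop so all teeth point the same way."""
        with torch.no_grad():
            logits = model(preprocess(tooth_image).unsqueeze(0))
        label = CLASSES[int(logits.argmax(dim=1))]
        return tooth_image.rotate(ROTATE_TO_DOWN[label], expand=True)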
Cutter head welding defect detection: considering that directly classifying the cropped saw teeth would still suffer interference from the non-welding region, a target detection algorithm is used once more to identify the welding position and the defect type of the cutter head within each saw tooth.
In the corrected saw teeth, the welding area occupies less than 10% of the cropped tooth; if an image classification algorithm were used to judge the defect, it would still be disturbed by the non-welding region.
S4, selecting a welding area in the straightened sawtooth middle frame, finishing the identification of the sawtooth welding area by using a target detection algorithm, and judging the defect type;
in S4, selecting a welding area in the rectified sawtooth frame, and using a target detection algorithm to identify the sawtooth welding area, and determining the defect type further includes the following steps:
preparing data: labeling the welding position in the corrected sawteeth by using labelme, and deriving a label in a COCO data set format according to the labeling content;
data preprocessing: performing statistical analysis on the width and the height of the sample picture, taking the median of the width and the height of the pictures in the sample set as an input size, and performing scaling and filling processing on the sample picture;
training a target detection model: inputting the sample picture after data preprocessing to a Retianet and training a defect detection model, wherein a backbone of the Retianet is Resnet 18;
and outputting the welding position of the tool bit in the sawtooth and the defect type according to the detection result, such as: reverse bonding, skip bonding, cold bonding and normal bonding.
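The following is a minimal sketch of the input-size statistics for this stage; the defect class names are taken from the example above, and the actual scale-and-pad routine would be the same one sketched for the segmentation stage.

    import cv2
    import numpy as np

    # Defect categories predicted for each cutter head weld, as listed in the example above.
    DEFECT_CLASSES = ["reverse welding", "skip welding", "cold welding", "normal welding"]

    def median_input_size(sample_image_paths):
        """Median width and height over the corrected tooth crops, used as the fixed detector input size."""
        heights, widths = [], []
        for path in sample_image_paths:
            h, w = cv2.imread(path).shape[:2]
            heights.append(h)
            widths.append(w)
        return int(np.median(widths)), int(np.median(heights))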
In conclusion, the invention runs target detection and the Canny edge detection algorithm in parallel to locate the saw blade and fuses the results to ensure the blade is cropped correctly. Because the saw teeth are densely spaced, a target detection algorithm alone tends to crop teeth incorrectly, so precise cropping of the saw teeth is achieved with an instance segmentation algorithm; the cutter head orientation is then corrected by classification, and welding defects are located and identified within each saw tooth region by target detection. Following a divide-and-conquer strategy, the invention uses several techniques to solve the different problems in the detection pipeline, and the resulting scheme can detect welding defects on saw blades with 24 to 120 teeth.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. A fine detection method for sawtooth welding defects, characterized by comprising the following steps:
S1, identifying the saw blade to obtain a target detection rectangular frame, and revising the target detection rectangular frame to ensure the detection integrity of the saw blade;
S2, marking the region of each saw tooth within the target detection rectangular frame;
S3, classifying the marked saw teeth, and correcting the orientation of the saw tooth cutter heads according to the classification result;
and S4, framing the welding area within the corrected saw teeth, completing identification of the sawtooth welding area with a target detection algorithm, and judging the defect type.
2. The fine detection method for sawtooth welding defects according to claim 1, characterized in that identifying the saw blade in S1 to obtain the target detection rectangular frame and revising the target detection rectangular frame to ensure the detection integrity of the saw blade further comprises the following steps:
completing identification of the saw blade with the single-stage target detection algorithm RetinaNet, outputting the region of the target detection rectangular frame, and using Res2Net as the backbone network for feature extraction;
detecting the saw blade region with the Canny edge detection algorithm, and computing the maximum circumscribed rectangle of the detected edges;
and computing the intersection-over-union of the two regions obtained by the single-stage target detection algorithm and the edge detection algorithm; if the intersection-over-union exceeds a set threshold, revising the target detection rectangular frame with the Canny result to complete cropping of the saw blade picture.
3. The fine detection method for sawtooth welding defects according to claim 2, characterized in that, when the intersection-over-union exceeds the set threshold and the target detection rectangular frame is revised using the Canny edge detection result, if A denotes the region of the target detection rectangular frame output by RetinaNet and B denotes the maximum circumscribed rectangle of the edges computed by Canny, the intersection-over-union is expressed as:
\mathrm{IOU} = \frac{\mathrm{area}(A \cap B)}{\mathrm{area}(A \cup B)}
If the intersection-over-union IOU is larger than the set threshold, the target detection rectangular frame is revised as:
x_1 = \min(x_{o1}, x_{c1}),\quad y_1 = \min(y_{o1}, y_{c1}),\quad x_2 = \max(x_{o2}, x_{c2}),\quad y_2 = \max(y_{o2}, y_{c2})
where (x_{o1}, y_{o1}) and (x_{o2}, y_{o2}) are the coordinates of the upper-left and lower-right points of region A; (x_{c1}, y_{c1}) and (x_{c2}, y_{c2}) are the coordinates of the upper-left and lower-right points of region B; and (x_1, y_1) and (x_2, y_2) are the coordinates of the upper-left and lower-right points of the corrected region.
4. The fine detection method for sawtooth welding defects according to claim 2, characterized in that completing identification of the saw blade with the single-stage target detection algorithm RetinaNet, outputting the region of the target detection rectangular frame, and using Res2Net as the backbone network for feature extraction further comprises the following steps:
preparing data: labeling the collected saw blade pictures with labelimg, marking the maximum circumscribed rectangle of the saw blade, and exporting labels in COCO data set format from the labeled content;
and training a saw blade detection model: inputting the collected saw blade pictures and the generated labels into the target detection model RetinaNet, training the saw blade detection model, and outputting region A, the target detection rectangular frame of the saw blade, from RetinaNet.
5. The fine detection method for sawtooth welding defects according to claim 4, characterized in that the backbone network of RetinaNet is Res2Net50, the features of branches 0, 1, 2, and 3 of Res2Net are fed into an FPN network to fuse features from different levels, and the FPN output is in turn fed into the Retina head.
6. The fine detection method for sawtooth welding defects according to claim 2, characterized in that detecting the saw blade region with the Canny edge detection algorithm and computing the maximum circumscribed rectangle of the edges further comprises the following steps:
detecting the edges of the saw blade with the Canny edge detection algorithm;
and from the Canny output, finding the contour of the saw blade with the findContours method in OpenCV and computing the maximum circumscribed rectangle of the edges from the contour information found.
7. The fine detection method for sawtooth welding defects according to claim 1, characterized in that, when the saw tooth regions are marked within the target detection rectangular frame in S2, the instance segmentation algorithm Mask R-CNN is used to train an instance segmentation model that identifies the region of each saw tooth on the blade.
8. The fine detection method for sawtooth welding defects according to claim 7, characterized in that training the instance segmentation model with Mask R-CNN and identifying the region of each saw tooth on the blade further comprises the following steps:
preparing data: drawing polygons on the cropped saw blade pictures with labelme to mark the position of each saw tooth, and exporting labels in COCO data set format from the labeled content;
data preprocessing: scaling and padding the cropped saw blade pictures to a width of 2592 and a height of 1944, and normalizing the pictures with the normalize method of OpenCV;
training a cutter head segmentation model: inputting the exported COCO-format labels and the scaled pictures into the instance segmentation network Mask R-CNN and training a saw tooth cutter head segmentation model;
and model output post-processing: scaling the mask matrix output by the cutter head segmentation model back to the size of the original image, finding the contour information of the saw teeth in the mask matrix with findContours, and converting the contour information into coordinate points for output.
9. The fine detection method for sawtooth welding defects according to claim 8, characterized in that classifying the marked saw teeth in S3 and correcting the cutter head orientation according to the classification result further comprises the following steps:
preparing data: dividing the segmented cutter heads into eight categories and storing them in separate folders following the ImageNet directory format;
training a classification model: inputting the categorized samples into ResNet34 and training a cutter head orientation classification model;
and rotating the cutter heads to the same orientation according to the classification result.
10. The fine detection method for sawtooth welding defects according to claim 9, characterized in that framing the welding area within the corrected saw teeth in S4, completing identification of the sawtooth welding area with a target detection algorithm, and judging the defect type further comprises the following steps:
preparing data: labeling the welding positions on the corrected saw teeth with labelme and exporting labels in COCO data set format from the labeled content;
data preprocessing: statistically analyzing the widths and heights of the sample pictures, taking the median width and height over the sample set as the input size, and scaling and padding the sample pictures accordingly;
training a target detection model: inputting the preprocessed sample pictures into RetinaNet and training a defect detection model;
and outputting the welding position of the cutter head on each saw tooth and its defect type according to the detection result.
CN202210645597.XA 2022-06-08 2022-06-08 Fine detection method for sawtooth welding defects Pending CN114972285A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210645597.XA CN114972285A (en) 2022-06-08 2022-06-08 Fine detection method for sawtooth welding defects

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210645597.XA CN114972285A (en) 2022-06-08 2022-06-08 Fine detection method for sawtooth welding defects

Publications (1)

Publication Number Publication Date
CN114972285A true CN114972285A (en) 2022-08-30

Family

ID=82960834

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210645597.XA Pending CN114972285A (en) 2022-06-08 2022-06-08 Fine detection method for sawtooth welding defects

Country Status (1)

Country Link
CN (1) CN114972285A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116429782A (en) * 2023-03-29 2023-07-14 南通大学 Saw chain defect detection method based on residual error network and knowledge coding
CN116429782B (en) * 2023-03-29 2024-01-09 南通大学 Saw chain defect detection method based on residual error network and knowledge coding


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination