CN111929314A - Wheel hub weld visual detection method and detection system - Google Patents

Wheel hub weld visual detection method and detection system

Info

Publication number
CN111929314A
CN111929314A (application CN202010870708.8A)
Authority
CN
China
Prior art keywords
hub
detection
weld
yolov3
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010870708.8A
Other languages
Chinese (zh)
Inventor
王宸
张秀峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hubei University of Automotive Technology
Original Assignee
Hubei University of Automotive Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hubei University of Automotive Technology filed Critical Hubei University of Automotive Technology
Priority to CN202010870708.8A priority Critical patent/CN111929314A/en
Publication of CN111929314A publication Critical patent/CN111929314A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 Systems specially adapted for particular applications
    • G01N 21/88 Investigating the presence of flaws or contamination
    • G01N 21/89 Investigating the presence of flaws or contamination in moving material, e.g. running paper or textiles
    • G01N 21/8914 Investigating the presence of flaws or contamination in moving material, e.g. running paper or textiles characterised by the material examined
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/232 Non-hierarchical techniques
    • G06F 18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 Systems specially adapted for particular applications
    • G01N 21/88 Investigating the presence of flaws or contamination
    • G01N 21/89 Investigating the presence of flaws or contamination in moving material, e.g. running paper or textiles
    • G01N 21/8914 Investigating the presence of flaws or contamination in moving material, e.g. running paper or textiles characterised by the material examined
    • G01N 2021/8918 Metal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30164 Workpiece; Machine component
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30204 Marker

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Textile Engineering (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Quality & Reliability (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The invention belongs to the technical field of industrial detection and discloses a visual detection method and system for hub weld defects. The method comprises: building a YOLOv3-based visual intelligent detection system for hub welds; verifying the built intelligent detection system; and using the verified intelligent detection system to visually inspect the hub weld. The invention uses a network model based on an improved YOLOv3 algorithm to identify, detect and classify hub weld defects, realizing intelligent detection of such defects. To address deployment in actual production, a detection flow is designed from image acquisition through automatic detection to ejection of defective workpieces, allowing the intelligent detection system to be retrofitted onto a production line; the generalized intersection over union (GIoU) is introduced as an improvement, parameters of the YOLOv3 model are tuned, and good results are obtained in test detection.

Description

Wheel hub weld visual detection method and detection system
Technical Field
The invention belongs to the technical field of industrial detection, and particularly relates to a wheel hub weld visual detection method and a wheel hub weld visual detection system.
Background
At present, most hub weld defects are surface defects, and in actual production surface-defect inspection is performed by manual visual inspection. This approach, however, suffers from low detection efficiency, dependence on the inspectors' skill level, and inspector fatigue, leading to missed or false detections.
Object detection algorithms based on deep learning are fast and accurate and have produced many successful applications. In particular, the YOLOv3 algorithm, proposed in 2018, offers extremely high detection speed and high accuracy when detecting a limited set of target classes, so many researchers have applied it in the industrial field.
From the above analysis, the problems and shortcomings of the prior art are as follows: most existing inspection methods rely on manual visual inspection, whose efficiency is low and which, depending on the inspectors' skill, produces missed or false detections; traditional machine-vision methods depend on hand-designed feature-extraction algorithms, generalize poorly, lack robustness, and struggle to meet hub manufacturers' requirements for real-time online detection and high detection accuracy.
The difficulty in solving the above problems lies in designing the overall detection flow and building the hub conveying mechanism, the automatic hub-weld image acquisition device, and the mechanism for screening out unqualified hubs, as well as selecting and improving a suitable open-source algorithm to satisfy the manufacturers' real-time and high-precision requirements.
The significance of solving the above problems is as follows: intelligent detection of hub weld defects replaces manual visual inspection, improves production efficiency, and meets hub manufacturers' need for real-time online weld-defect detection. The method identifies hub weld defects quickly and efficiently, is highly robust and broadly applicable, can adapt to different production environments, and can be extended with additional weld-defect classes as required.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a visual detection method for a wheel hub welding seam.
The invention is realized as follows: a wheel hub weld visual inspection method, comprising:
step one, building a YOLOv3-based wheel hub weld visual detection system;
step two, verifying the built hub weld visual detection system;
step three, visually detecting the hub weld using the verified hub weld visual detection system.
Further, in the second step, the hub weld visual inspection system verification method includes:
(1) utilizing camera equipment to acquire a hub weld image aiming at a hub weld defect sample in a factory, preprocessing the acquired image, adjusting the size of the image and classifying the image;
(2) classifying and labeling various images by using an image labeling tool, and manufacturing a data set of the hub welding seam defects;
(3) carrying out hub weld defect detection training with the YOLOv3-GIoU improved algorithm based on a deep convolutional neural network, performing identification, detection and verification, analyzing the verification results, and optimizing and adjusting parameters accordingly to obtain the optimized hub weld detection system based on the improved YOLOv3.
Further, in step (2), the image classification and labeling includes: dividing the weld defects into broken arcs, flash welds, partial (incomplete) welds, poor arc starting and air holes (porosity), and labeling each defect type as a separate class.
Further, in step (2), the hub weld defect data set is divided into a training set, a validation set and a test set in a ratio of 6:2:1;
the training set is used for model training;
the validation set is used for evaluating model performance;
the test set is used to simulate a real detection test.
Further, in the step (3), the YOLOv3-GIoU improvement algorithm based on the deep convolutional neural network algorithm comprises:
1) improving the YOLOv3 algorithm through its loss function;
the original YOLOv3 algorithm loss function is shown as follows:
$$\begin{aligned}
L ={}& \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B} I_{ij}^{obj}\left[(t_x-\hat{t}_x)^2+(t_y-\hat{t}_y)^2\right] + \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B} I_{ij}^{obj}\left[(t_w-\hat{t}_w)^2+(t_h-\hat{t}_h)^2\right] \\
&- \sum_{i=0}^{S^2}\sum_{j=0}^{B} I_{ij}^{obj}\left[\hat{C}_i\log C_i+(1-\hat{C}_i)\log(1-C_i)\right] - \lambda_{noobj}\sum_{i=0}^{S^2}\sum_{j=0}^{B} I_{ij}^{noobj}\left[\hat{C}_i\log C_i+(1-\hat{C}_i)\log(1-C_i)\right] \\
&- \sum_{i=0}^{S^2} I_{i}^{obj}\sum_{c\in classes}\left[\hat{p}_i(c)\log p_i(c)+(1-\hat{p}_i(c))\log(1-p_i(c))\right]
\end{aligned}$$
where obj indicates that the cell contains a target and noobj that it does not; i denotes the i-th cell and j the j-th box predicted by that cell; tx, ty, tw and th in the first two terms denote, respectively, the offsets of the prediction box's centre-point coordinates and the ratios of its width and height relative to the image, with the hatted values marking the position coordinates of the ground-truth box; I is the indicator function for whether the j-th box of the i-th cell contains a target; C is the confidence score of the prediction box; and p is the conditional class probability. The first two terms form the localization loss, the third and fourth terms the confidence loss, and the last term the classification loss;
the category confidence score is calculated by multiplying the conditional category probability by the prediction box confidence score as follows:
$$\Pr(Class_i \mid Object)\cdot \Pr(Object)\cdot IoU^{truth}_{pred} = \Pr(Class_i)\cdot IoU^{truth}_{pred}$$
the GIoU is introduced as a positioning loss function, and the calculation formula of the GIoU is as follows:
$$IoU = \frac{|A\cap B|}{|A\cup B|} \tag{5}$$
$$GIoU = IoU - \frac{|C\setminus (A\cup B)|}{|C|} \tag{6}$$
$$L_{GIoU} = 1 - GIoU \tag{7}$$
where A and B denote the prediction box and the ground-truth box, and C denotes the minimum enclosing rectangle containing both A and B. The first two terms of the YOLOv3 loss function measure the agreement of boxes A and B through a squared-error (two-norm) formula over their coordinates; replacing this with the GIoU (generalized intersection over union) formulation optimizes the loss function and improves algorithm performance.
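Equations (5) to (7) can be checked with a small numeric sketch for axis-aligned boxes given as (x1, y1, x2, y2). The function name is illustrative, not from the patent.

```python
def giou_loss(box_a, box_b):
    """Compute (IoU, GIoU, GIoU loss) for two axis-aligned boxes (x1, y1, x2, y2)."""
    # Intersection rectangle A ∩ B (width/height clamped at zero when disjoint)
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    iou = inter / union if union > 0 else 0.0            # Eq. (5)
    # Minimum enclosing rectangle C of A and B
    cx1, cy1 = min(box_a[0], box_b[0]), min(box_a[1], box_b[1])
    cx2, cy2 = max(box_a[2], box_b[2]), max(box_a[3], box_b[3])
    area_c = (cx2 - cx1) * (cy2 - cy1)
    giou = iou - (area_c - union) / area_c               # Eq. (6)
    return iou, giou, 1.0 - giou                         # Eq. (7)
```

Unlike plain IoU, GIoU remains informative for non-overlapping boxes: it goes negative as the boxes move apart, so the loss still provides a gradient toward overlap.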
2) Optimizing the prior boxes: K-means clustering is used to compute prior-box (anchor) sizes for the training set, and the anchor-box values in the parameter file are modified according to the clustering result;
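The anchor optimization above can be sketched with a plain K-means over labelled box widths and heights. Using IoU rather than Euclidean distance is the common choice for YOLO anchor clustering; the function names, the distance choice, and the iteration count here are assumptions, not details from the patent.

```python
import random

def iou_wh(wh, centroid):
    """IoU of two boxes described only by (width, height), anchored at a common corner."""
    inter = min(wh[0], centroid[0]) * min(wh[1], centroid[1])
    return inter / (wh[0] * wh[1] + centroid[0] * centroid[1] - inter)

def kmeans_anchors(boxes, k, iters=50, seed=0):
    """Cluster (w, h) pairs into k anchor sizes, assigning each box to the
    centroid with the highest IoU. Returns centroids sorted by width."""
    rng = random.Random(seed)
    centroids = rng.sample(boxes, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for b in boxes:
            best = max(range(k), key=lambda i: iou_wh(b, centroids[i]))
            clusters[best].append(b)
        for j, cl in enumerate(clusters):
            if cl:  # keep the old centroid if a cluster empties out
                centroids[j] = (sum(w for w, _ in cl) / len(cl),
                                sum(h for _, h in cl) / len(cl))
    return sorted(centroids)
```

The resulting sizes would then replace the anchor-box values in the model's parameter file.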
3) parameter fusion is performed by the following formulas:
$$W = W_{BN}\cdot W_{conv} \tag{8}$$
$$b = W_{BN}\cdot b_{conv} + b_{BN} \tag{9}$$
where W_conv and b_conv are respectively the weight matrix and bias of the convolutional layer, and W_BN and b_BN are the weight matrix and bias of the BN layer.
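Formulas (8) and (9) match the usual folding of a batch-normalization layer into the preceding convolution at inference time, reading W_BN as the BN scale γ/√(σ² + ε) and b_BN as the shift β − W_BN·μ. That reading, and the per-channel scalar form below, are assumptions for illustration.

```python
import math

def fuse_conv_bn(w_conv, b_conv, gamma, beta, mean, var, eps=1e-5):
    """Fold per-channel BN into the conv parameters:
    W = W_BN * W_conv (Eq. 8) and b = W_BN * b_conv + b_BN (Eq. 9)."""
    w_bn = gamma / math.sqrt(var + eps)  # assumed meaning of W_BN (BN scale)
    b_bn = beta - w_bn * mean            # assumed meaning of b_BN (BN shift)
    return w_bn * w_conv, w_bn * b_conv + b_bn
```

The fused layer computes the same output as conv followed by BN but in a single multiply-add, which cuts per-image inference time.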
Further, in the third step, the visual inspection of the wheel hub welding seam by using the verified intelligent detection system comprises:
firstly, a conveying device conveys a hub to a preset position; when the hub triggers the photoelectric sensor at a preset position, an area array CCD camera is used for collecting image information of a welding seam;
secondly, processing the acquired image information and judging, based on the processed information, whether the hub weld is qualified; if qualified, the conveying device transfers the detected hub to the next production step; if unqualified, the ejection device is started when the hub passes the second photoelectric sensor, ejecting the unqualified hub onto the defective-workpiece recovery raceway for recovery.
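The two steps above amount to a simple capture-then-route control cycle. The sketch below is purely illustrative: the camera capture and the YOLOv3-based detector are injected stand-ins, not the patent's PLC or camera interfaces.

```python
def inspect_hub(capture_image, detect_defects):
    """One inspection cycle: capture at the first sensor, then route the hub.

    `capture_image` and `detect_defects` are hypothetical stand-ins for the
    area-array CCD capture and the trained weld-defect detector.
    """
    image = capture_image()          # triggered by the first photoelectric sensor
    defects = detect_defects(image)  # list of detected weld-defect labels
    if defects:
        return "eject"               # push onto the defective-workpiece raceway
    return "pass"                    # convey the hub to the next production step
```

In the real system the "eject" branch would be carried out by the PLC-controlled ejection device when the hub reaches the second photoelectric sensor.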
Another object of the present invention is to provide a YOLOv3-based wheel hub weld visual inspection system implementing the above YOLOv3-based wheel hub weld visual inspection method, the system comprising:
the device comprises a conveying module, an image acquisition module, a detection module, a defective workpiece ejection module and a recovery module;
the conveying module is used for conveying the hub to a corresponding position by using the conveying device;
the image acquisition module is used for acquiring image information of the welding seam by using the area array CCD camera when the hub reaches a preset position to trigger the photoelectric sensor;
the detection module is used for processing the acquired image information and judging whether the hub welding line is qualified or not based on the processed image information;
the defective workpiece ejection module is used for ejecting the unqualified wheel hub by using an ejection device when the welding seam of the wheel hub has defects;
and the recovery module is used for recovering unqualified hubs.
Further, the visual inspection system for the welding seam of the hub further comprises a conveying device;
the hub is transmitted to the detection platform from the previous process through the conveying device, when the hub reaches a preset position and triggers the first photoelectric sensor, the image acquisition device starts to capture images of the hub welding line, the acquired image information is transmitted to the image workstation to be processed and judge whether the hub welding line is qualified or not, if the result is that the welding line is not defective, the detection is finished, and the hub is transmitted to the next process; if the detection result is that the welding seam has defects, starting a defective workpiece ejection device when the hub passes through the second photoelectric sensor, ejecting the unqualified hub onto a defective workpiece recovery raceway, and recovering the defective workpiece;
and the actions of the conveying device, the image acquisition device and the defective workpiece ejection device are controlled by a PLC.
The conveying device comprises a driving roller and a driven roller; the driving roller and the driven roller are provided with colloid conveyor belts;
the bearing and the bearing cup are combined and then fixed on a bearing sleeve by using a screw, and the bearing sleeve is sleeved on the driving roller and the driven roller; the number of the bearing sleeves is multiple;
the aluminum profile device is respectively connected with the bearing sleeves at the same ends of the driving roller and the driven roller, and an elastic block is additionally arranged during connection and used for adjusting the distance between the driving roller and the driven roller;
chain wheels are arranged on the driving roller shaft and the output shaft of the stepping motor and connected by chains, and the chain wheels are used for power transmission from the stepping motor to the driving roller.
The image acquisition device comprises an area array CCD camera with multiple photosensitive lenses and a rotary detection frame; the area array CCD camera is mounted on the rotary detection frame. The rotary detection frame comprises a carousel, a disc, a disc bracket and a rotating motor;
the carousel is mounted on the disc; the disc is arranged at the upper end of the disc bracket; the rotating motor is connected below the disc through a shaft sleeve;
the defective workpiece ejection device is used for pushing the determined defective workpiece from the original conveying device to another conveying device vertical to the original conveying device for recycling or reprocessing; a double-acting single-rod piston type hydraulic cylinder is used as a power element, an electromagnetic reversing valve is connected with a PLC for control, and a second photoelectric sensor is matched to realize screening of unqualified hubs; the double-acting single-rod piston type hydraulic cylinder is fixed on the sliding guide rod fixing plate; sliding guide rods which can slide freely penetrate through the sliding guide rod holes on the two sides of the sliding guide rod fixing plate; comprises a left sliding guide rod and a right sliding guide rod; the top end of the double-acting single-rod piston type hydraulic cylinder, the left sliding guide rod and the right sliding guide rod are all connected with the push plate.
It is a further object of the invention to provide a computer arrangement comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to carry out the method.
It is a further object of the invention to provide a computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, causes the processor to carry out the method.
Combining all the above technical schemes, the advantages and positive effects of the invention are as follows: the invention provides a novel method for intelligent detection of hub weld defects, detecting them with the YOLOv3-GIoU improved algorithm based on a deep convolutional neural network. GIoU is introduced as a new loss-function formulation, and K-means clustering is used to optimize the prior-box parameters, which improves the precision of the detection model and reduces training time. Detection tests verify that the trained YOLOv3-GIoU model can serve in an intelligent detection system for hub weld defects. The test results show an F1 value of 0.92 on the validation set, an mAP of 89.16%, and a total accuracy of 98.54% on the test set; in practical application the YOLOv3-GIoU model accurately identifies, classifies and locates defects in tractor hub welds, realizing intelligent detection and replacing manual visual inspection. The per-image detection time does not exceed 22 milliseconds, so detection efficiency is high, the production-line takt requirements of hub manufacturers are met, and real-time online detection of hub weld defects is realized.
The detection method of the invention has higher detection accuracy on the verification data set and the test data set, and the detection effect of each defect type is shown in the attached figure 12.
The method adopts a network model based on a YOLOv3 algorithm to identify, detect and classify the defects of the hub welding line, and realizes the intelligent detection of the defects of the hub welding line. Aiming at the problem of the application of the detection system in the actual production, the detection process from image acquisition to automatic detection and then to defective workpiece ejection is designed, and the scheme of additionally installing the intelligent detection system on the production line is realized. Aiming at the problem of the detection effect of the algorithm model, the generalized intersection-parallel ratio (GIoU) is used for improvement, parameters in the YOLOv3 model algorithm are adjusted, and a better result is obtained in test detection.
The technical effect or experimental effect of comparison comprises the following steps:
In the invention, the YOLOv3-2 algorithm with optimized prior boxes and the improved YOLOv3-GIoU algorithm were each trained on the training set; the validation set was then used to evaluate model performance, and the training parameters were adjusted repeatedly to obtain the best-performing model, considering the effects of different training parameters and algorithm models together. Because the positive-sample criterion strongly influences the mean average precision (mAP) measured on the validation set, mAP values were computed under different criteria (different IoU thresholds); the detection statistics are shown in Table 1. YOLOv3-GIoU performs better: its mAP values exceed those of YOLOv3-2 at the thresholds 0.3, 0.35, 0.6 and 0.7 as well as on average, and at 0.7 and 0.75 in particular the relative increases are 3.26% and 3.7%. These data show that the improved YOLOv3-GIoU model has higher localization precision and higher accuracy and is the better algorithm model.
Table 1 verification set maps of different models at different IoU thresholds
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the embodiments of the present application will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained from the drawings without creative efforts.
FIG. 1 is a flowchart of a wheel hub weld visual inspection method based on YOLOV3 according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a wheel hub weld visual inspection method based on YOLOV3 according to an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a wheel hub weld visual inspection system based on YOLOV3 according to an embodiment of the present invention.
In fig. 3: 1. a conveying device; 2. an image acquisition device; 3. a defective workpiece ejection device; 4. a hub; 5. a detection platform; 6. a first photosensor; 7. a second photosensor.
Fig. 4 is a schematic structural diagram of a rotary detection frame according to an embodiment of the present invention.
In fig. 4: 8. a carousel; 9. a disc; 10. a disc holder; 11. a rotating electric machine.
Fig. 5 is a schematic view of a defective workpiece ejecting apparatus according to an embodiment of the present invention.
In fig. 5: 12. a double-acting single-rod piston hydraulic cylinder; 13. a left sliding guide bar; 14. a right slide guide; 15. a push plate.
FIG. 6 is an exemplary diagram of a data set provided by an embodiment of the invention.
Fig. 7 is a schematic diagram of two overlapping frames according to an embodiment of the present invention.
FIG. 8 is a diagram illustrating the prediction box and the real box under different conditions according to an embodiment of the present invention.
Fig. 9 is a graph of the Loss value with the number of iterations provided by an embodiment of the present invention.
Fig. 10 is a schematic diagram of an average cross-over ratio iteration curve provided by the embodiment of the invention.
Fig. 11 is a schematic diagram of the variation of the mAP value with the number of iterations according to an embodiment of the present invention.
FIG. 12 is a diagram illustrating the detection effect on the test set according to the embodiment of the present invention.
Fig. 13 is a diagram of typical false detection results of good images in a test set according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In view of the problems in the prior art, the present invention provides a method and a system for visually inspecting a wheel hub weld, which are described in detail below with reference to the accompanying drawings.
As shown in fig. 1, the wheel hub weld visual inspection method based on YOLOV3 provided by the embodiment of the invention includes the following steps:
and S101, building a wheel hub welding seam visual detection system based on the YOLOV 3.
And S102, verifying the constructed intelligent detection system.
And S103, carrying out visual detection on the welding line of the hub by using the verified intelligent detection system.
In step S102, the verification method for the intelligent detection system provided in the embodiment of the present invention includes:
(1) utilizing camera equipment to acquire a hub weld image aiming at a hub weld defect sample in a factory, preprocessing the acquired image, adjusting the size of the image and classifying the image;
(2) classifying and labeling various images by using an image labeling tool, and manufacturing a data set of the hub welding seam defects;
(3) carrying out hub weld defect detection training with the YOLOv3-GIoU improved algorithm based on a deep convolutional neural network, performing identification, detection and verification, analyzing the verification results, and optimizing and adjusting parameters accordingly to obtain the optimized YOLOv3-based hub weld detection system.
In step (2), the image classification and labeling provided by the embodiment of the present invention includes: and dividing the weld defects into broken arcs, flash welds, partial welds, poor arcing and air holes, and classifying and marking the defects as different categories.
In step (2), the hub weld defect data set provided by the embodiment of the invention is divided into a training set, a validation set and a test set in a ratio of 6:2:1;
the training set is used for model training;
the validation set is used for evaluating model performance;
the test set is used to simulate a real detection test.
In step (3), the YOLOv3-GIoU improvement algorithm based on the deep convolutional neural network algorithm provided by the embodiment of the present invention includes:
1) improving the YOLOv3 algorithm through its loss function;
the YOLOv3 modified algorithm loss function is shown below:
$$\begin{aligned}
L ={}& \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B} I_{ij}^{obj}\left[(t_x-\hat{t}_x)^2+(t_y-\hat{t}_y)^2\right] + \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B} I_{ij}^{obj}\left[(t_w-\hat{t}_w)^2+(t_h-\hat{t}_h)^2\right] \\
&- \sum_{i=0}^{S^2}\sum_{j=0}^{B} I_{ij}^{obj}\left[\hat{C}_i\log C_i+(1-\hat{C}_i)\log(1-C_i)\right] - \lambda_{noobj}\sum_{i=0}^{S^2}\sum_{j=0}^{B} I_{ij}^{noobj}\left[\hat{C}_i\log C_i+(1-\hat{C}_i)\log(1-C_i)\right] \\
&- \sum_{i=0}^{S^2} I_{i}^{obj}\sum_{c\in classes}\left[\hat{p}_i(c)\log p_i(c)+(1-\hat{p}_i(c))\log(1-p_i(c))\right]
\end{aligned}$$
where obj indicates that the cell contains a target and noobj that it does not; i denotes the i-th cell and j the j-th box predicted by that cell; tx, ty, tw and th in the first two terms denote, respectively, the offsets of the prediction box's centre-point coordinates and the ratios of its width and height relative to the image, with the hatted values marking the position coordinates of the ground-truth box; I is the indicator function for whether the j-th box of the i-th cell contains a target; C is the confidence score of the prediction box; and p is the conditional class probability. The first two terms form the localization loss, the third and fourth terms the confidence loss, and the last term the classification loss;
the category confidence score is calculated by multiplying the conditional category probability by the prediction box confidence score as follows:
$$\Pr(Class_i \mid Object)\cdot \Pr(Object)\cdot IoU^{truth}_{pred} = \Pr(Class_i)\cdot IoU^{truth}_{pred}$$
the GIoU is introduced as a positioning loss function, and the calculation formula of the GIoU is as follows:
$$IoU = \frac{|A\cap B|}{|A\cup B|} \tag{5}$$
$$GIoU = IoU - \frac{|C\setminus (A\cup B)|}{|C|} \tag{6}$$
$$L_{GIoU} = 1 - GIoU \tag{7}$$
in the formulas, A and B represent two rectangular boxes and C represents the smallest bounding rectangle containing A and B. The first two terms in the YOLOv3 loss function are the localization loss, which uses the two-norm of the coordinates to represent the degree of coincidence of box A and box B; replacing this two-norm with the calculation formula of GIoU (Generalized Intersection over Union) optimizes the loss function and improves algorithm performance.
2) Optimizing a prior frame: calculating by utilizing K-means clustering to obtain the prior frame size of the training set, and modifying the value of an anchor box in a parameter file based on the calculation result to perform prior frame optimization;
3) the parameter calculation was performed by the following formula:
W = W_BN · W_conv   #(8)
b = W_BN · b_conv + b_BN   #(9)
wherein W_conv and b_conv are respectively the weight matrix and bias of the convolutional layer, and W_BN and b_BN are the weight matrix and offset of the BN layer.
As shown in fig. 2, in step S103, the visual inspection of the hub weld by using the verified intelligent inspection system according to the embodiment of the present invention includes:
firstly, a conveying device conveys a hub to a preset position; when the hub triggers the photoelectric sensor at a preset position, an area array CCD camera is used for collecting image information of a welding seam;
secondly, processing the acquired image information and judging whether the hub welding seam is qualified or not based on the processed image information; if the wheel hub is qualified, the detected wheel hub is transmitted to the next production procedure by using the conveying device; and if the hub is unqualified in detection, starting the ejection device when the hub passes through the second photoelectric sensor, ejecting the unqualified hub onto the defective workpiece recovery raceway, and recovering the defective workpiece.
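The two-sensor screening flow described above can be sketched as a control loop. The device objects here (conveyor, sensors, camera capture, ejector) are hypothetical stand-ins for the PLC-controlled hardware, not a real PLC API:

```python
class StubDevice:
    """Hypothetical stand-in for a PLC-controlled device."""
    def __init__(self):
        self.pushed = False
    def run(self): pass
    def stop(self): pass
    def wait_for_trigger(self): pass
    def push(self):
        self.pushed = True  # divert the hub to the reject raceway

def inspection_cycle(sensor1, sensor2, capture_welds, has_defect, conveyor, ejector):
    """One pass of the screening flow: image at sensor 1, eject at sensor 2."""
    conveyor.run()
    sensor1.wait_for_trigger()          # hub reached the imaging position
    conveyor.stop()
    images = capture_welds()            # e.g. 4 weld images per hub
    defective = any(has_defect(img) for img in images)
    conveyor.run()
    if defective:
        sensor2.wait_for_trigger()      # hub reached the ejection position
        ejector.push()
    return defective                    # False: hub continues to the next process

ejector = StubDevice()
result = inspection_cycle(StubDevice(), StubDevice(),
                          lambda: ["w1", "w2", "w3", "w4"],
                          lambda img: img == "w3",   # pretend weld 3 is defective
                          StubDevice(), ejector)
print(result, ejector.pushed)  # True True
```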
The wheel hub welding seam visual detection system based on the YOLOV3 provided by the embodiment of the invention comprises:
the conveying module is used for conveying the hub to a corresponding position by using the conveying device;
the image acquisition module is used for acquiring image information of the welding seam by using the area array CCD camera when the hub reaches a preset position to trigger the photoelectric sensor;
the detection module is used for processing the acquired image information and judging whether the hub welding line is qualified or not based on the processed image information;
the defective workpiece ejection module is used for ejecting the unqualified wheel hub by using an ejection device when the welding seam of the wheel hub has defects;
and the recovery module is used for recovering unqualified hubs.
In the invention, as shown in fig. 3, the wheel hub weld visual inspection system based on YOLOv3 provided by the embodiment of the invention realizes the workpiece transfer and screening functions required by the improved YOLOv3 hub weld intelligent detection method, while also completing hub weld image acquisition, thereby avoiding blurred and inaccurate image capture.
The mechanical structure of the wheel hub weld visual inspection system based on YOLOv3 provided by the embodiment of the invention mainly comprises a conveying device 1, an image acquisition device (CCD camera) 2 and a defective workpiece ejection device 3. The system further comprises a PLC, which controls the operation of the conveying device 1, the image acquisition device (CCD camera) 2 and the defective workpiece ejection device 3.
In order to ensure high-quality and high-efficiency production of the automatic hub production line, the intelligent hub weld detection system is matched with the automatic hub production line, meeting both the production takt requirement and the weld defect detection accuracy requirement. The relevant parameters of the hub production line are: raceway height 1.35 m; raceway speed 8.64 m/min; single-station takt of the hub welding robot 12.5 s/piece; hub weight 10 kg; minimum hub circumferential radius 150 mm; maximum hub circumferential radius 400 mm.
The overall detection flow (as shown in fig. 2) of the invention is as follows: in the actual production process, the hub 4 is conveyed to the detection platform 5 through the conveying device 1 from the previous process, when the hub 4 reaches a preset position and the first photoelectric sensor 6 is triggered, the image acquisition device 2 starts to capture an image of the welding seam of the hub 4, the acquired image information is transmitted to an image workstation to be processed and judge whether the welding seam of the hub is qualified or not, if the result is that the welding seam is not defective, the detection is finished, and the hub 4 is transmitted to the next process; if the detection result shows that the welding seam has defects, the defective workpiece ejecting device 3 is started when the hub passes through the second photoelectric sensor 7, the unqualified hub is ejected to the defective workpiece recycling raceway, and the defective workpiece is recycled. The structure diagram is shown in fig. 3.
The conveying device 1 of the invention uses a conveyor belt, and the main structure of a single conveying device is as follows. First, drums of 76 × 150 × 20 (diameter × length × shaft-head diameter) are selected; a single conveying device has 2 driving drums and 2 driven drums. Then, the bearing (6204) and the bearing cup are combined and fixed on the bearing sleeve (4080) with screws, 4 groups in total. A rubber-coated conveyor belt of dimensions 2118 × 150 × 3 (circumference × width × thickness) is mounted on the driving and driven rollers. The 4 assembled bearing sleeves are fitted on the driving and driven rollers; the bearing sleeves at the same ends of the driving and driven rollers are then connected using national-standard 4080 aluminum profiles, with elastic L blocks added during connection so that the distance between the driving and driven rollers, and hence the belt tension, can be adjusted. The whole conveyor belt is supported by national-standard 4040 aluminum profile (150 mm); the 4040 and 4080 profiles are joined with 4040 corner connectors, fixed with 13FM8 nuts and M8 × 16 bolts. Finally, sprockets are mounted on the driving roller shaft and on the output shaft of the stepping motor and connected by a chain, realizing power transmission from the motor to the driving roller.
The invention determines the circumference of the conveyor belt according to the size of the detection platform 5, and further determines the power for driving the conveyor belt motor according to the actual weight of the hub and the requirement that the conveying speed is not less than 8 m/min.
Since the image area of the hub weld is large, to ensure imaging accuracy the image acquisition device of the system uses an area-array CCD camera with more photosensitive elements; the specific model is a BFS-U3-89S6C-C area-array CCD industrial camera developed and produced by Cloud Light Technology Group Co., Ltd. USB 3.0 is used as the output interface; this interface directly outputs digital image signals and is low in cost, widely applied and high in transmission speed. After the device starts, hubs to be inspected are continuously conveyed to the detection platform, where images are captured; a typical hub has 4 weld seams, and the image acquisition device must be able to acquire a clear and complete image of every weld. An industrial camera is arranged directly above the center of the hub at a 30° angle to the weld. To reduce cost and save space, the electromechanical part of the image acquisition device adopts a rotary detection frame scheme, whose structure is shown in fig. 4. It comprises: a carousel 8, a disc 9, a disc support 10 and a rotating motor 11.
The carousel 8 is mounted on the disc 9; the disc 9 is arranged at the upper end of the disc support 10; the rotating motor 11 is connected below the disc 9 through a bushing. The carousel 8 is essentially the same as the conveyor belt of the conveying device 1 in structure and material; the difference is that the bottom of the carousel 8 is connected to the disc 9 and, driven by the rotating motor 11, rotates the hub workpiece so that the industrial camera can capture images more completely and clearly. The industrial camera is connected to the disc support 10 through a fixing rod; by adjusting the installation angle of the industrial camera, the weld image of the hub to be inspected can be acquired more completely.
The defective workpiece ejection device 3 is mainly used for pushing the determined defective workpiece from an original conveying device to another conveying device perpendicular to the original conveying device for recycling or reprocessing, and considering that the hub belongs to a heavy metal workpiece, a double-acting single-rod piston type hydraulic cylinder 12 is adopted as a power element, is connected with a PLC through an electromagnetic reversing valve for control, and is matched with a second photoelectric sensor 7 to realize the screening function of unqualified hubs. The structure is shown in fig. 5.
The double-acting single-rod piston type hydraulic cylinder 12 is fixed on the sliding guide rod fixing plate; sliding guide rods which can slide freely penetrate through the sliding guide rod holes on the two sides of the sliding guide rod fixing plate; comprises a left sliding guide rod 13 and a right sliding guide rod 14; the top end of the double-acting single-rod piston type hydraulic cylinder 12, the left sliding guide rod 13 and the right sliding guide rod 14 are all connected with a push plate 15.
According to the invention, the electromechanical transmission controller is a Mitsubishi Q35B PLC. The PLC controls the stepping motor so as to start and stop the conveyor belt, accurately position the hub to be inspected, and ensure that the camera is directly above the hub, guaranteeing the quality of the image information. After the image acquisition device of the detection system starts working, the rotating motor is controlled to complete rotary image acquisition, and the acquired images are transmitted through the interface to a workstation for processing. After the output result of the detection system is obtained, the defective workpiece ejection device 3 is controlled at the appropriate position to eject an unqualified hub onto the defective workpiece recovery conveyor belt.
The technical effects of the present invention will be further described with reference to specific embodiments.
Example (b):
1.1 construction of wheel hub weld joint detection experiment platform
In the production process of the hub, unqualified products are screened out by manually detecting the welding quality of a hub welding seam, the detection platform mainly comprises a conveying device, an image acquisition device and a defective workpiece ejection device, and the detection flow is as follows: in the actual production process, the hub is transmitted to the detection platform from the previous process by the conveying device, when the hub reaches a preset position and triggers the photoelectric sensor, the image acquisition device acquires an image of the welding line, the acquired image information is transmitted to the computer through the interface to be processed and judge whether the welding line of the hub is qualified or not, if the result is that the welding line is not defective, the detection is finished, and the next production process is transmitted; and if the detection result shows that the welding seam has defects, starting the ejection device when the hub passes through the second photoelectric sensor, ejecting the unqualified hub onto a defective workpiece recovery raceway, and recovering the defective workpiece.
The conveying device is driven by a stepping motor and controlled by a PLC; a rubber belt is used as the conveyor belt to convey the hub to each station, and the image acquisition device adopts an area-array CCD camera. After the device starts, hubs to be inspected are continuously conveyed from the previous station to the detection platform, where images are captured; a typical hub has 4 weld seams, and the system must be able to inspect every weld through the camera. The start and stop of the motor are strictly controlled by the PLC to accurately position the hub to be inspected and ensure that the camera is directly above the hub, so that the quality of the collected image information is guaranteed. If the overall speed is too low, production efficiency suffers; if it is too high, image sharpness and hence detection accuracy suffer. The system must therefore meet the production takt while remaining as accurate as possible, and reliable stability of the detection platform is also an important evaluation index.
1.2 design of the testing procedure
After a detection system simulation platform is built, the detection effect of the intelligent detection system needs to be tested and verified to prove that the algorithm performance of the intelligent detection system reaches the standard, and the purpose of replacing manual detection is achieved. The YOLOv3 algorithm used in the invention is used for converting the defect detection problem into the regression classification problem. The test flow can be roughly divided into image data acquisition, data set production, model training, model testing and comparative analysis.
The method comprises the steps of firstly acquiring a wheel hub welding seam image, then preprocessing the acquired image, adjusting the size of the image, classifying the image, then labeling various images by using an image labeling tool, and manufacturing a data set of the wheel hub welding seam defect, wherein the data set comprises a training set used for model training, a verification set used for evaluating model performance and a test set used for simulating a real detection test. The test is trained by respectively adopting a YOLOv3 algorithm and an improved YOLOv3 algorithm, then a recognition detection test is carried out, detection result data are analyzed, parameters are further adjusted through detection result analysis, the wheel hub welding seam detection algorithm based on YOLOv3 is optimized to obtain higher detection accuracy, and the whole test flow chart is shown in figure 2.
1.3 image data acquisition and annotation
The image data is acquired by photographing with a camera, the hub welding line in the image is clear and complete, the acquired image is derived from a hub welding line defect sample in a factory, and the images of the training set and the verification set are artificially marked. The training of the target detection model based on deep learning needs a large number of labeled samples, and theoretically, the detection effect of the model trained with more samples is better, so that the data samples with large number and multiple condition types need to be labeled when a model with high identification accuracy is trained. The experiment used the open source image calibration tool LabelImg.
After analyzing and researching the types of common hub welding seam defects, the invention divides the welding seam defects into broken arcs, welding beading, partial welding, poor arc striking and air holes, and classifies the welding seam defects as different categories for training and detection, and 5 common welding seam defects are shown in figure 6. FIG. 6a is an arc-breaking defect diagram; FIG. 6b is a weld flash defect map; FIG. 6c is a diagram of off-set weld defects; FIG. 6d is a defect diagram of arcing defects; figure 6e is a defect map of air holes.
In total, 1356 pictures were labeled; a part was randomly selected as the verification set and the rest used as the training set. 276 arc-break defects were labeled, of which 219 were used in the training set and 57 in the verification set; 276 weld-flash defects were labeled, 213 for training and 63 for verification; 273 offset-weld defects were labeled, 203 for training and 70 for verification; 254 arc-starting defects were labeled, 188 for training and 66 for verification; 277 air-hole defects were labeled, 209 for training and 68 for verification. The ratio of the data sets is about 7: 2: 1. Because weld defects are low-probability events in actual production, 603 perfect, defect-free weld images were added to the test set as interference items, making the test closer to the actual application; the statistics are shown in table 2.
TABLE 2 number of pictures in each category in data set
Defect type        | Labeled | Training set | Verification set
Arc break          |   276   |     219      |       57
Weld flash         |   276   |     213      |       63
Offset weld        |   273   |     203      |       70
Poor arc starting  |   254   |     188      |       66
Air hole           |   277   |     209      |       68
Defect-free (test-set interference items): 603
2 Algorithm improvement
Loss function improvement
The YOLOv3 algorithm can detect targets of different size scales. The amount of information contained in each cell of the feature map is represented by a depth dimension D, calculated as shown in formula (1), where B is the number of prediction frames generated by each cell and C is the number of categories.
D = B × (5 + C)   #(1)
Before training the model, the YOLOv3 algorithm converts the input pictures to the same size and normalizes the labeled real-frame coordinate data, as shown in formula (2). In formula (2), width and height are the width and height of the image; x_max, y_max and x_min, y_min are the coordinates of the lower right corner and the upper left corner of the real frame (the origin being the upper left corner of the picture); x and y are the center coordinates of the real frame; and w and h are the width and height of the real frame.
x = (x_min + x_max) / (2 · width)
y = (y_min + y_max) / (2 · height)
w = (x_max − x_min) / width
h = (y_max − y_min) / height   #(2)
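The normalization of formula (2) translates directly into code; the function name and the sample coordinates are illustrative:

```python
def normalize_box(xmin, ymin, xmax, ymax, width, height):
    """Convert absolute corner coordinates (origin at the top-left of the
    picture) into the normalized (x, y, w, h) form of formula (2)."""
    x = (xmin + xmax) / (2 * width)   # box centre, as a fraction of image width
    y = (ymin + ymax) / (2 * height)  # box centre, as a fraction of image height
    w = (xmax - xmin) / width         # box width relative to the image
    h = (ymax - ymin) / height        # box height relative to the image
    return x, y, w, h

print(normalize_box(100, 50, 300, 250, 400, 400))  # (0.5, 0.375, 0.5, 0.5)
```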
The YOLOv3 algorithm uses a Loss function to represent the difference degree between the predicted value and the real value when iterative computation is performed, the Loss function is optimized by continuously updating the weight value, so that the Loss function value (Loss value) is continuously reduced, and the YOLOv3 Loss function is shown in formula 3:
Loss = λ_coord Σ_i Σ_j I_ij^obj [(t_x − t̂_x)² + (t_y − t̂_y)²]
     + λ_coord Σ_i Σ_j I_ij^obj [(t_w − t̂_w)² + (t_h − t̂_h)²]
     + Σ_i Σ_j I_ij^obj (C_i − Ĉ_i)²
     + λ_noobj Σ_i Σ_j I_ij^noobj (C_i − Ĉ_i)²
     + Σ_i I_i^obj Σ_c (p_i(c) − p̂_i(c))²   #(3)
wherein obj indicates that the cell contains a target and noobj that it does not; i denotes the ith cell and j the jth box predicted by that cell; t_x, t_y, t_w and t_h in the first two terms are respectively the center-point coordinate offsets of the prediction frame and the ratios of its width and height relative to the image, computed using formula (2), the hatted coordinate values being the position coordinate information of the real frame; I is the indicator function of whether the jth frame of the ith cell contains a target; C is the confidence score of the prediction frame; p is the conditional category probability value. The first two terms are the localization loss, the third and fourth terms the confidence loss, and the last term the classification loss. The category confidence score is obtained by multiplying the conditional category probability by the prediction-frame confidence score, as shown in formula (4).
Pr(Class_i | Object) · Pr(Object) · IoU_pred^truth = Pr(Class_i) · IoU_pred^truth   #(4)
When the YOLOv3 algorithm calculates the loss function, the localization loss of the first two terms uses the two-norm of the coordinates, i.e. the sum of squared differences of t_x, t_y, t_w and t_h. However, this sometimes fails to reflect the degree of overlap between the prediction frame and the real frame: cases with different intersection over union (IoU) can yield the same two-norm value. As shown in fig. 7, the IoU of the left case is clearly larger than that of the right, so the left should be the better prediction frame, yet the two-norms of the two cases are equal and the better prediction frame cannot be identified. This can be avoided if the IoU value is used directly as the metric in the localization loss.
However, using IoU as the evaluation index in the loss calculation has problems. First, when the prediction frame does not overlap the real frame, IoU is zero; as a loss its gradient is then zero, so optimization cannot proceed. Second, when two frames overlap in different ways, as shown in fig. 8, the IoU values may be the same although the left case is clearly better than the right, so substituting IoU into the loss calculation can harm detection accuracy. The generalized intersection over union GIoU is therefore introduced to make up for these deficiencies of IoU. The calculation formulas of GIoU are as follows:
IoU = |A ∩ B| / |A ∪ B|   #(5)
GIoU = IoU − |C \ (A ∪ B)| / |C|   #(6)
L_GIoU = 1 − GIoU   #(7)
wherein A and B represent two rectangular frames and C represents the smallest bounding rectangle containing A and B. It follows that GIoU ranges from −1 to 1 while IoU ranges from 0 to 1; GIoU is always less than or equal to IoU and approaches it when the frames overlap, and GIoU = IoU = 1 if and only if A and B coincide. GIoU is therefore the better metric: it measures the degree of coincidence while also accounting for the non-overlapping region, and better reflects the quality of the prediction frame. L_GIoU is the value used when GIoU participates in the loss calculation; it maps the range to 0 to 2 so that the algorithm can converge iteratively. The algorithm improved with GIoU is denoted YOLOv3-GIoU.
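Formulas (5) to (7) can be sketched in code for axis-aligned boxes given as (x1, y1, x2, y2); this is a minimal illustration, not the authors' implementation:

```python
def giou(box_a, box_b):
    """Generalized IoU of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection A ∩ B (zero if the boxes are disjoint)
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union                       # formula (5)
    # C: smallest enclosing rectangle of A and B
    area_c = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return iou - (area_c - union) / area_c    # formula (6)

def giou_loss(box_a, box_b):
    return 1.0 - giou(box_a, box_b)           # formula (7), in [0, 2]

# Disjoint boxes still yield a gradient-bearing value (IoU alone would be 0):
print(round(giou((0, 0, 2, 2), (3, 0, 5, 2)), 3))  # -0.2
```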
2.3 optimization prior frame
Through the definition of the B parameter in formula (1), each cell of the feature map extracted by YOLOv3 generates 3 prediction frames. To improve the convergence speed and prediction accuracy of the prediction frames, YOLOv3 introduces prior frames (anchor boxes). The prior frame limits the range of the cell's prediction frame, which is equivalent to adding prior experience of size, accelerating model convergence and aiding the algorithm's multi-scale learning. The sizes of all labeled real frames (ground truth boxes) in the training set are counted to obtain the most common shapes and sizes as prior frames. The invention optimizes the prior frame sizes using the K-means dimension clustering method.
The K-means dimension clustering method is used to find the central point in the data, usually using euclidean distance as the calculation metric, while the YOLOv3 algorithm expects to obtain a priori box value with greater intersection ratio (IoU) with all data, so that the value of 1-IoU is used as the measurement for clustering calculation. The prior frame size of the training set is calculated by using K-means clustering before training, the value of an anchor box is modified in a parameter file after the result is obtained, and the values of 9 prior frames used by a final training model are respectively as follows: (41 × 37), (52 × 63), (54 × 44), (69 × 97), (75 × 61), (83 × 132), (110 × 76), (117 × 143), (160 × 100).
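A minimal sketch of the 1 − IoU K-means clustering described above (boxes compared as if centre-aligned; deterministic area-spread initialization is used here for reproducibility, so the exact procedure and results of the authors' tooling may differ):

```python
def kmeans_anchors(wh, k=9, iters=100):
    """Cluster labeled box (width, height) pairs into k prior-box sizes,
    using 1 - IoU of centre-aligned boxes as the distance metric."""
    def iou(a, b):
        inter = min(a[0], b[0]) * min(a[1], b[1])
        return inter / (a[0] * a[1] + b[0] * b[1] - inter)

    by_area = sorted(wh, key=lambda b: b[0] * b[1])
    centers = [by_area[i * len(wh) // k] for i in range(k)]  # spread by area
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for box in wh:
            # assign each box to the centre with the smallest 1 - IoU distance
            best = min(range(k), key=lambda i: 1 - iou(box, centers[i]))
            clusters[best].append(box)
        centers = [
            (sum(b[0] for b in c) / len(c), sum(b[1] for b in c) / len(c))
            if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return sorted((round(w), round(h)) for w, h in centers)

# Two synthetic size groups separate into two anchors:
boxes = [(40 + i % 3, 40 + i % 2) for i in range(30)] \
      + [(150 + i % 3, 100 + i % 2) for i in range(30)]
print(kmeans_anchors(boxes, k=2))  # [(41, 40), (151, 100)]
```

The resulting sizes would then be written into the anchor-box entry of the parameter file, as the text describes.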
2.4 improving the forward inference speed of YOLOv3
In the training process of YOLOv3, the data distribution of YOLOv3 has an Internal Covariate Shift (Internal Covariate Shift) phenomenon, that is, the input distribution of each hidden layer changes when each layer performs parameter update, so that the input layer is usually subjected to sample randomization. However, this will make the training speed slow and the sensitivity to parameters such as learning rate become high, so a Batch Normalization layer (Batch Normalization) is usually added after convolution and full join operation before nonlinear activation, and the sample data of the same Batch is subjected to standard gaussian distribution processing and then participates in calculation as the estimation value of the whole training set, as shown in equation (8).
f̂_{i,j} = W_BN · (W_conv · f_{i,j} + b_conv) + b_BN   #(8)
where f̂_{i,j} is the parameter value of the current layer, f_{i,j} is the parameter value of the previous layer, W_conv and b_conv are respectively the weight matrix and bias of the convolutional layer, and W_BN and b_BN are the weight matrix and offset of the BN layer. The two operations are therefore combined as shown in formulas (9) and (10), which reduces the amount of parameter calculation and the memory occupied at runtime, accelerating forward inference without affecting the algorithm's results, since the computed function is unchanged.
W = W_BN · W_conv   #(9)
b = W_BN · b_conv + b_BN   #(10)
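The parameter merging of formulas (9) and (10) can be checked numerically. In this sketch a plain per-channel linear layer stands in for the convolution, with W_BN = gamma / sqrt(var + eps) and b_BN = beta − W_BN · mean folding in the BN layer's statistics; all values are illustrative:

```python
import math

def fuse_conv_bn(w_conv, b_conv, gamma, beta, mean, var, eps=1e-5):
    """Fold a BN layer into the preceding linear (convolution) layer."""
    w_fused, b_fused = [], []
    for o in range(len(w_conv)):                    # one output channel at a time
        w_bn = gamma[o] / math.sqrt(var[o] + eps)
        b_bn = beta[o] - w_bn * mean[o]
        w_fused.append([w_bn * w for w in w_conv[o]])   # W = W_BN · W_conv
        b_fused.append(w_bn * b_conv[o] + b_bn)         # b = W_BN · b_conv + b_BN
    return w_fused, b_fused

# Illustrative parameters: 2 input and 2 output channels.
w_conv = [[1.0, 2.0], [0.5, -1.0]]; b_conv = [0.1, -0.2]
gamma = [1.5, 0.8]; beta = [0.0, 0.3]; mean = [0.2, -0.1]; var = [1.0, 4.0]
x = [0.3, -0.7]

conv = [sum(w * xi for w, xi in zip(row, x)) + b for row, b in zip(w_conv, b_conv)]
bn = [g * (c - m) / math.sqrt(v + 1e-5) + be
      for g, c, m, v, be in zip(gamma, conv, mean, var, beta)]

wf, bf = fuse_conv_bn(w_conv, b_conv, gamma, beta, mean, var)
fused = [sum(w * xi for w, xi in zip(row, x)) + b for row, b in zip(wf, bf)]
print(all(abs(a - b) < 1e-9 for a, b in zip(fused, bn)))  # True
```

The fused single layer produces the same output as convolution followed by batch normalization, which is why inference speed improves with no change in results.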
3 test results and analysis
3.1 test operation platform
The YOLOv3 models trained and tested by the intelligent hub weld detection system run on a high-performance computer with the Windows 10 operating system, configured with an Intel 2.10 GHz four-core CPU, 32 GB of memory and an Nvidia GeForce RTX 2080 Ti graphics card. Training and testing call the GPU for calculation, using CUDA, Cudnn, OpenCV and other software, with Python version 3.0.
3.2 model training
Before training, parameter values are adjusted in the parameter file; different parameters affect the training convergence speed and the model quality, and the best parameter values are selected through feedback from the model verification results. Each training run iterates 15000 times and takes about 20 h, with a total of about 1.92 million sample pictures participating in the calculation. After training starts, a weight file is generated every 1000 iterations, so 15 weight files in total are generated when model training completes; these are the resulting models. The models are evaluated on the hub weld verification set, the training parameters are adjusted according to the detection results, and the better model finally obtained is used to detect the test set.
3.3 evaluation index of model
In order to objectively evaluate the model trained in the present invention, several common evaluation indexes are used. Precision represents the proportion of truly positive samples among the samples predicted as positive; recall represents the proportion of samples predicted as positive among the samples that are truly positive; the P-R curve (Precision-Recall curve) shows how precision varies with recall; the average precision AP (average precision) is the mean precision over different recall rates; the mean average precision mAP (mean average precision) is the average of the AP values of the different classes; the intersection over union IoU (intersection over union) represents the degree of coincidence of the real frame and the predicted frame, with a threshold generally of 0.5. The formulas are as follows.
P = TP / (TP + FP)   #(11)
R = TP / (TP + FN)   #(12)
AP = ∫₀¹ P(R) dR   #(13)
mAP = (1/k) Σ_{i=1}^{k} AP(i)   #(14)
IoU = |A ∩ B| / |A ∪ B|   #(15)
Where TP is the number of positive samples correctly predicted as positive, FP is the number of negative samples incorrectly predicted as positive, FN is the number of positive samples incorrectly predicted as negative, k is the number of classes, and AP(i) is the AP value of the ith class. Since the detection of the present invention must consider both precision and recall, the auxiliary evaluation value F1 is added, calculated as follows.
F1 = 2PR / (P + R)   #(16)
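Formulas (11), (12), (13) and (16) in code form; the detection counts and P-R points below are illustrative, and the AP integral is approximated numerically with the trapezoidal rule:

```python
def precision_recall_f1(tp, fp, fn):
    """Precision (11), recall (12) and F1 (16) from detection counts."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    return p, r, 2 * p * r / (p + r)

def average_precision(pr_points):
    """AP (13) as the area under a P-R curve given as (recall, precision)
    points sorted by increasing recall."""
    ap, prev_r, prev_p = 0.0, 0.0, pr_points[0][1]
    for r, p in pr_points:
        ap += (r - prev_r) * (p + prev_p) / 2   # trapezoid between points
        prev_r, prev_p = r, p
    return ap

p, r, f1 = precision_recall_f1(tp=80, fp=20, fn=20)
print(p, r, round(f1, 6))  # 0.8 0.8 0.8
print(average_precision([(0.5, 1.0), (1.0, 0.5)]))  # 0.875
```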
3.4 comparative analysis of test results
In the experiment, the YOLOv3 algorithm and the improved YOLOv3-GIoU algorithm were each trained on the training set; the verification set was then used to verify model performance, and the training parameters were continually adjusted to obtain the best-performing model. Considering the effects of different training parameters and different algorithm models, the following three algorithm models were selected for comparison: first, the YOLOv3 algorithm using better training parameters but without prior-frame optimization, denoted YOLOv3-1; second, the YOLOv3 algorithm using the same training parameters with prior-frame optimization, denoted YOLOv3-2; and third, the YOLOv3-GIoU algorithm using the same training parameters with prior-frame optimization, denoted YOLOv3-GIoU. As shown in fig. 9 and fig. 10, the training processes of the three cases are visualized to analyze the changes of the loss function value (Loss value) and the average intersection over union (Average IoU) during training.
The Loss values of the three algorithm models all drop rapidly in the first 2000 training iterations and then gradually stabilize, still decreasing with small oscillations; the average Loss curves of the three models share the same downward trend. The Loss value of YOLOv3-GIoU is clearly lower, finally converging to around 0.04, while those of YOLOv3-1 and YOLOv3-2 are slightly higher, converging to around 0.066 and 0.07 respectively. The average IoU during training is higher overall and fluctuates less for YOLOv3-GIoU, and YOLOv3-2 performs better than YOLOv3-1; as shown in fig. 10, the average IoU gradually stabilizes at around 0.8 to 0.9 after 8000 iterations. Using the optimized prior frames thus yields a higher and more stable average IoU during training and higher localization precision. From these data it can be seen that the YOLOv3-GIoU algorithm has a lower Loss value and more accurate localization.
It is obvious from the above result analysis that the variation of the Loss value after 2000 times of iterative training is very small, and the Loss value at this time cannot accurately reflect the comprehensive performance of the model, so that a model with better performance after the several times of iterative training needs to be found out. The test uses the mAP value of formula 14 as an evaluation index, uses a validation set to verify the models with different iterative training times under three different conditions, measures the mAP values of the models, and determines that the parameter condition of the positive sample is IoU value greater than 0.6 during calculation, and IoU value is the ratio of the model detection result to the labeled real frame.
According to the statistical results of the test, YOLOv3-1 obtained its best model at 12000 training iterations, with an mAP value of 87.66% and an average IoU of 71.39%. YOLOv3-2 obtained its best model at 9000 training iterations, with an mAP value of 88.64% and an average IoU of 72.53%. YOLOv3-GIoU obtained its best model at 12000 training iterations, with an mAP value of 89.16% and an average IoU of 72.69%. The best models of YOLOv3-2 and YOLOv3-GIoU are therefore preferred. Fig. 11 is a schematic diagram of the variation of the mAP value with the number of iterations according to an embodiment of the present invention.
When measuring mAP on the verification set, the positive-sample criterion has a large influence on the resulting value, so the verification set was evaluated with both the best YOLOv3-2 model (12000 iterations) and the best YOLOv3-GIoU model (12000 iterations), and the mAP value was computed under different positive-sample criteria (different IoU thresholds); the statistics of the detection results are shown in Table 3. It is clearly seen that YOLOv3-GIoU performs better: its mAP values are higher than those of YOLOv3-2 at the IoU thresholds of 0.3, 0.35, 0.6 and 0.7 as well as in the mean value, and at 0.7 and 0.75 in particular they are relatively higher by 3.26% and 3.7%. These data show that the improved YOLOv3-GIoU model has higher localization detection precision and higher accuracy, making it the better algorithm model.
TABLE 3 verification set mAP of different models at different IoU thresholds
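The mAP evaluation index referred to above (formula 14) averages the per-class average precision (AP). A minimal sketch of AP from a precision-recall curve; the all-point interpolation method is an assumption here, since formula 14 itself is not reproduced in the text:

```python
def average_precision(recalls, precisions):
    # All-point interpolated AP: area under the precision-recall curve,
    # with precision made monotonically non-increasing from right to left.
    r = [0.0] + list(recalls) + [1.0]
    p = [0.0] + list(precisions) + [0.0]
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    ap = 0.0
    for i in range(1, len(r)):
        ap += (r[i] - r[i - 1]) * p[i]
    return ap

def mean_ap(ap_per_class):
    # mAP is the mean of the per-class AP values.
    return sum(ap_per_class) / len(ap_per_class)
```

With five defect classes, `mean_ap` would be applied to the five per-class AP values computed at a chosen IoU threshold.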
3.5 test set test results and analysis
After the model at 12000 iterations of YOLOv3-GIoU is determined as the final detection model of the intelligent system, its detection effect on the test set has to be evaluated to verify the practicality of the detection model in the actual production process. When detecting hub weld defects, the position of a weld defect does not yet need to be detected precisely; only the corresponding defect type needs to be detected, and a detection is judged correct when its confidence exceeds a given threshold. The confidence threshold is set to 0.25 and the accuracy on the test set is counted. Fig. 12 shows correct detection results for the 5 weld defect types: fig. 12a, broken arc; fig. 12b, flash weld; fig. 12c, offset weld; fig. 12d, poor arc starting; fig. 12e, gas hole.
The statistics of the test results on the test set are shown in table 4.
TABLE 4 results of the detection of YOLOv3-GIoU on the test set
Because the confidence threshold is set low, all defect types are detected without misses, but 11 defect-free weld images are falsely detected as defective. Analyzing these 11 pictures shows that they are mainly misdetected as the two defect types of poor arc starting and gas holes, as shown in fig. 13 (fig. 13a, type 1; fig. 13b, type 2). These pictures are indeed similar to gas-hole defects: a real gas-hole defect appears as a large dark spot on the weld, whereas the spots in the falsely detected pictures are golden or small in shape. The false detections of poor arc starting may be caused by reflections during image capture, where a gap is mistaken for a groove. To address these problems, increasing the number and variety of images in the data set and improving the angle and intensity of the light source during image acquisition can be considered. In summary, the best YOLOv3-GIoU model reached a detection accuracy of 98.54% on the test set, with an mAP value of 89.16%, an F1 score of 0.92 and an average intersection-over-union (Average IoU) of 72.69% on the verification set at an IoU threshold of 0.6, a detection time of 19 to 21 milliseconds per picture, and a model size of 234 MB. The algorithm model can therefore be used in an intelligent detection system to realize intelligent detection of hub weld defects and replace manual detection.
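The accuracy statistic above counts a detection as correct when its class confidence clears the 0.25 threshold. A minimal sketch of that filtering step; the detection tuple layout and the sample values are illustrative assumptions:

```python
def filter_detections(detections, conf_threshold=0.25):
    # Keep only detections whose class-confidence score clears the threshold.
    # Each detection: (class_name, confidence, box).
    return [d for d in detections if d[1] > conf_threshold]

detections = [
    ("broken_arc", 0.91, (12, 40, 88, 96)),
    ("gas_hole",   0.18, (30, 55, 42, 67)),   # discarded: below threshold
    ("flash",      0.47, (10, 10, 60, 30)),
]
kept = filter_detections(detections)
```

A lower threshold trades more false positives (such as the 11 misdetected defect-free welds) for fewer missed defects, which matches the behavior reported above.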
The invention provides a novel method for intelligent detection of hub weld defects, which detects the defects using the improved YOLOv3-GIoU algorithm based on a deep convolutional neural network. GIoU is introduced as a new localization loss calculation method, and K-means clustering is used to optimize the prior-box parameters, which improves the precision of the detection model and reduces training time. Detection tests verify that the best YOLOv3-GIoU model can be used in an intelligent detection system for hub weld defects. The detection test results show that the improved model achieves an F1 score of 0.92 and an mAP value of 89.16% on the verification set, an overall detection accuracy of 98.54% on the test set, and a per-image detection time of no more than 22 milliseconds. The model therefore combines high precision, high efficiency and high robustness, and in practical applications the YOLOv3-based algorithm model can automatically, quickly and accurately identify, classify and locate hub weld defects, realizing intelligent detection of weld defects in place of manual visual inspection.
The above description is only for the purpose of illustrating the present invention and is not intended to limit its scope; all modifications, equivalents and improvements made within the spirit and principles of the invention are intended to be covered by the appended claims.

Claims (10)

1. A hub weld visual detection method, characterized by comprising the following steps:
building a hub weld visual detection system based on YOLOv3;
verifying the built hub weld visual detection system; and
performing visual detection on the hub weld by using the verified hub weld visual detection system.
2. The hub weld visual detection method according to claim 1, wherein the verification method of the hub weld visual detection system comprises:
(1) acquiring hub weld images of hub weld defect samples in a factory by using camera equipment, preprocessing the acquired images, adjusting the image size and classifying the images;
(2) classifying and labeling the various images by using an image labeling tool to produce a data set of hub weld defects;
(3) performing hub weld defect detection training using the improved YOLOv3-GIoU algorithm based on a deep convolutional neural network; performing identification, detection and verification; analyzing the verification result data; and optimizing and adjusting parameters based on the analysis to obtain the optimized YOLOv3-based hub weld detection system.
3. The hub weld visual detection method according to claim 2, wherein in the step (2), the image classification and labeling comprises: dividing the weld defects into broken arc, flash weld, offset weld, poor arc starting and gas hole, and labeling the defects as different categories;
in the step (2), the hub weld defect data set is divided into a training set, a verification set and a test set in a 2:2:1 ratio;
the training set is used for model training;
the validation set is used for evaluating model performance;
the test set is used to simulate a real detection test.
4. The visual inspection method for the weld seam of the hub as claimed in claim 2, wherein in the step (3), the YOLOv3-GIoU improvement algorithm based on the deep convolutional neural network algorithm comprises:
1) improving the YOLOv3 algorithm through the loss function;
the loss function of the improved YOLOv3 algorithm is as follows:

Loss = λ_coord Σ_{i=0}^{S²} Σ_{j=0}^{B} I_{ij}^{obj} [(t_x − t̂_x)² + (t_y − t̂_y)²]
+ λ_coord Σ_{i=0}^{S²} Σ_{j=0}^{B} I_{ij}^{obj} [(t_w − t̂_w)² + (t_h − t̂_h)²]
− Σ_{i=0}^{S²} Σ_{j=0}^{B} I_{ij}^{obj} [Ĉ_{ij} log C_{ij} + (1 − Ĉ_{ij}) log(1 − C_{ij})]
− λ_noobj Σ_{i=0}^{S²} Σ_{j=0}^{B} I_{ij}^{noobj} [Ĉ_{ij} log C_{ij} + (1 − Ĉ_{ij}) log(1 − C_{ij})]
− Σ_{i=0}^{S²} I_i^{obj} Σ_{c∈classes} [p̂_i(c) log p_i(c) + (1 − p̂_i(c)) log(1 − p_i(c))]

wherein obj indicates that the cell contains a target and noobj indicates that it does not; i denotes the i-th cell and j the j-th box predicted by that cell; t_x, t_y, t_w and t_h in the first two terms are respectively the center-point coordinate offsets of the prediction box and the ratios of the prediction-box width and height relative to the image, the hatted symbols denoting the position information of the labeled ground-truth box; I is the indicator function of whether the j-th box of the i-th cell contains a target; C is the confidence score of the prediction box; p is the conditional class probability. The first two terms are the localization loss, the third and fourth terms are the confidence loss, and the last term is the classification loss;
the class confidence score is calculated by multiplying the conditional class probability by the prediction-box confidence:

Score_c = p(Class_c | Object) × C
the GIoU is introduced as a positioning loss function, and the calculation formula of the GIoU is as follows:
Figure FDA0002650987920000023
Figure FDA0002650987920000024
LGIoU=1-GIoU
wherein A and B represent two rectangular boxes, and C represents the minimum bounding rectangle containing A and B;
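The three GIoU formulas above can be sketched directly in a few lines; the corner-coordinate box format [x1, y1, x2, y2] is an assumption:

```python
def giou(box_a, box_b):
    # Boxes as [x1, y1, x2, y2]. GIoU = IoU - |C \ (A u B)| / |C|,
    # where C is the smallest enclosing rectangle of A and B.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    iou = inter / union if union > 0 else 0.0
    cx1, cy1 = min(box_a[0], box_b[0]), min(box_a[1], box_b[1])
    cx2, cy2 = max(box_a[2], box_b[2]), max(box_a[3], box_b[3])
    c_area = (cx2 - cx1) * (cy2 - cy1)
    return iou - (c_area - union) / c_area

def giou_loss(box_a, box_b):
    # Localization loss term: L_GIoU = 1 - GIoU.
    return 1.0 - giou(box_a, box_b)
```

Unlike plain IoU, GIoU is negative for disjoint boxes and so still provides a gradient signal when prediction and ground truth do not overlap, which is why it is useful as a localization loss.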
2) optimizing the prior boxes: calculating the prior-box sizes for the training set by K-means clustering, and modifying the anchor-box values in the parameter file based on the calculation result to optimize the prior boxes;
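The prior-box optimization above clusters the labeled box sizes with K-means. A minimal sketch using the 1 − IoU distance on (width, height) pairs, which is the distance commonly used for YOLO anchor clustering and is assumed here:

```python
import random

def kmeans_anchors(box_sizes, k=9, iters=100, seed=0):
    # Cluster (width, height) pairs of labeled boxes, using 1 - IoU as the
    # distance (equivalently, assigning each box to the center of maximum
    # IoU); returns k anchor sizes sorted by width.
    def wh_iou(a, b):
        inter = min(a[0], b[0]) * min(a[1], b[1])
        return inter / (a[0] * a[1] + b[0] * b[1] - inter)

    rng = random.Random(seed)
    centers = rng.sample(box_sizes, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for wh in box_sizes:
            best = max(range(k), key=lambda i: wh_iou(wh, centers[i]))
            clusters[best].append(wh)
        centers = [
            (sum(w for w, _ in c) / len(c), sum(h for _, h in c) / len(c))
            if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return sorted(centers)
```

The resulting k sizes would replace the anchor-box values in the YOLOv3 parameter file, as the claim describes.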
3) merging the batch normalization (BN) layer into the convolutional layer, the parameters being calculated by the following formulas:

W = W_BN · W_conv

b = W_BN · b_conv + b_BN

wherein W_conv and b_conv are respectively the weight matrix and bias of the convolutional layer, and W_BN and b_BN are the weight matrix and bias of the BN layer.
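The two formulas above fold the BN layer into the preceding convolution at inference time. A minimal per-channel sketch, assuming the usual BN inference parameters W_BN = γ/√(σ² + ε) and b_BN = β − γμ/√(σ² + ε):

```python
import math

def fold_bn(w_conv, b_conv, gamma, beta, mean, var, eps=1e-5):
    # Fold a BN layer (gamma, beta, running mean/var) into the preceding
    # convolution for one output channel:
    #   W = W_BN * W_conv,  b = W_BN * b_conv + b_BN,
    # with W_BN = gamma / sqrt(var + eps)
    # and  b_BN = beta - gamma * mean / sqrt(var + eps).
    w_bn = gamma / math.sqrt(var + eps)
    b_bn = beta - gamma * mean / math.sqrt(var + eps)
    w = [w_bn * x for x in w_conv]
    b = w_bn * b_conv + b_bn
    return w, b
```

After folding, the network performs one fused convolution per layer instead of a convolution followed by a normalization, which reduces inference time without changing the output.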
5. The hub weld visual detection method according to claim 1, wherein performing visual detection on the hub weld by using the verified hub weld visual detection system comprises:
conveying the hub to a preset position by the conveying device; when the hub reaches the preset position and triggers the photoelectric sensor, collecting image information of the weld by an area-array CCD camera;
processing the acquired image information and judging whether the hub weld is qualified based on the processed image information; if the hub passes the detection, conveying the detected hub to the next production procedure by the conveying device; and if the hub fails the detection, starting the ejection device when the hub passes the second photoelectric sensor, ejecting the unqualified hub onto the defective-workpiece recovery raceway, and recovering the defective workpiece.
6. A hub weld visual detection system for implementing the hub weld visual detection method according to any one of claims 1 to 5, wherein the hub weld visual detection system comprises:
the device comprises a conveying module, an image acquisition module, a detection module, a defective workpiece ejection module and a recovery module;
the conveying module is used for conveying the hub to a corresponding position by using the conveying device;
the image acquisition module is used for acquiring image information of the welding seam by using the area array CCD camera when the hub reaches a preset position to trigger the photoelectric sensor;
the detection module is used for processing the acquired image information and judging whether the hub welding line is qualified or not based on the processed image information;
the defective workpiece ejection module is used for ejecting the unqualified wheel hub by using an ejection device when the welding seam of the wheel hub has defects;
and the recovery module is used for recovering unqualified hubs.
7. The visual inspection system of a hubcap weld of claim 6 wherein said visual inspection system further comprises a conveyor;
the hub is conveyed from the previous process to the detection platform by the conveying device; when the hub reaches a preset position and triggers the first photoelectric sensor, the image acquisition device starts to capture images of the hub weld, and the acquired image information is transmitted to the image workstation, which processes it and judges whether the hub weld is qualified; if the result is that the weld has no defect, the detection is finished and the hub is conveyed to the next process; if the result is that the weld has a defect, the defective-workpiece ejection device is started when the hub passes the second photoelectric sensor, ejecting the unqualified hub onto the defective-workpiece recovery raceway, and the defective workpiece is recovered;
and the actions of the conveying device, the image acquisition device and the defective workpiece ejection device are controlled by a PLC.
8. The visual inspection system for hub welds of claim 7 wherein the conveyor includes a driving roller, a driven roller; the driving roller and the driven roller are provided with colloid conveyor belts;
the bearing and the bearing cup are combined and then fixed on a bearing sleeve by using a screw, and the bearing sleeve is sleeved on the driving roller and the driven roller; the number of the bearing sleeves is multiple;
the aluminum profile device is respectively connected with the bearing sleeves at the same ends of the driving roller and the driven roller, and an elastic block is additionally arranged during connection and used for adjusting the distance between the driving roller and the driven roller;
chain wheels are arranged on the driving roller shaft and the output shaft of the stepping motor and connected by chains, and the chain wheels are used for power transmission from the stepping motor to the driving roller.
the image acquisition device comprises a multi-lens area-array CCD camera and a rotary detection frame; the area-array CCD camera is mounted on the rotary detection frame; the rotary detection frame comprises a carousel, a carousel support and a rotating motor;
the camera is mounted on the carousel; the carousel is arranged at the upper end of the carousel support; the rotating motor is connected below the carousel through a shaft sleeve;
the defective-workpiece ejection device is used for pushing a workpiece determined to be defective from the original conveying device onto another conveying device perpendicular to it for recovery or reprocessing; a double-acting single-rod piston hydraulic cylinder is used as the power element, an electromagnetic reversing valve is connected to the PLC for control, and the second photoelectric sensor cooperates with them to realize screening of unqualified hubs; the double-acting single-rod piston hydraulic cylinder is fixed on the sliding-guide-rod fixing plate; freely sliding guide rods, comprising a left sliding guide rod and a right sliding guide rod, pass through the guide-rod holes on both sides of the fixing plate; and the top end of the double-acting single-rod piston hydraulic cylinder, the left sliding guide rod and the right sliding guide rod are all connected with the push plate.
9. A computer device, characterized in that the computer device comprises a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to carry out the method of any one of claims 1 to 6.
10. A computer-readable storage medium, storing a computer program which, when executed by a processor, causes the processor to perform the method of any one of claims 1 to 6.
CN202010870708.8A 2020-08-26 2020-08-26 Wheel hub weld visual detection method and detection system Pending CN111929314A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010870708.8A CN111929314A (en) 2020-08-26 2020-08-26 Wheel hub weld visual detection method and detection system


Publications (1)

Publication Number Publication Date
CN111929314A true CN111929314A (en) 2020-11-13

Family

ID=73305528

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010870708.8A Pending CN111929314A (en) 2020-08-26 2020-08-26 Wheel hub weld visual detection method and detection system

Country Status (1)

Country Link
CN (1) CN111929314A (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112529893A (en) * 2020-12-22 2021-03-19 郑州金惠计算机系统工程有限公司 Hub surface flaw online detection method and system based on deep neural network
CN112588607A (en) * 2020-12-04 2021-04-02 广东工业大学 Multi-view soldering tin defect detection device based on deep learning
CN113305037A (en) * 2021-06-01 2021-08-27 济南大学 Method for improving radial positioning precision of rim weld seam through turntable deceleration
CN113327240A (en) * 2021-06-11 2021-08-31 国网上海市电力公司 Visual guidance-based wire lapping method and system and storage medium
CN113470018A (en) * 2021-09-01 2021-10-01 深圳市信润富联数字科技有限公司 Hub defect identification method, electronic device, device and readable storage medium
CN113469302A (en) * 2021-09-06 2021-10-01 南昌工学院 Multi-circular target identification method and system for video image
CN113628211A (en) * 2021-10-08 2021-11-09 深圳市信润富联数字科技有限公司 Parameter prediction recommendation method, device and computer readable storage medium
CN113808116A (en) * 2021-09-24 2021-12-17 无锡精质视觉科技有限公司 Intelligent detection method and system based on image recognition and product detection system
CN114428164A (en) * 2022-01-14 2022-05-03 无锡来诺斯科技有限公司 Marking device and marking method for tracing surface defects of metal strip
CN114727007A (en) * 2021-05-24 2022-07-08 云南傲远智能环保科技有限公司 Wisdom plastic network management on-line measuring camera
CN114841937A (en) * 2022-04-21 2022-08-02 燕山大学 Detection method for detecting surface defects of automobile hub
CN114998259A (en) * 2022-06-02 2022-09-02 苏州香农科技有限公司 Detection method and device for gravity casting hub
CN116030030A (en) * 2023-02-13 2023-04-28 中建科技集团有限公司 Integrated assessment method for internal and external defects of weld joint of prefabricated part

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110570410A (en) * 2019-09-05 2019-12-13 河北工业大学 Detection method for automatically identifying and detecting weld defects
CN110636715A (en) * 2019-08-27 2019-12-31 杭州电子科技大学 Self-learning-based automatic welding and defect detection method
CN110927171A (en) * 2019-12-09 2020-03-27 中国科学院沈阳自动化研究所 Bearing roller chamfer surface defect detection method based on machine vision
CN111060601A (en) * 2019-12-27 2020-04-24 武汉武船计量试验有限公司 Weld ultrasonic phased array detection data intelligent analysis method based on deep learning


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XU YILIU: "YOLOv3 vehicle type detection algorithm with an improved loss function", Information & Communications, no. 12, 31 December 2019 (2019-12-31), pages 1-4 *
LI CHAO; SUN JUN: "Weld defect detection and classification algorithm based on machine vision", Computer Engineering and Applications, no. 06, 28 February 2017 (2017-02-28) *
GU JING; XIE ZEQUN; ZHANG XINYU: "Weld defect detection algorithm based on an improved deep learning model", Journal of Astronautic Metrology and Measurement, no. 03, 15 June 2020 (2020-06-15) *


Similar Documents

Publication Publication Date Title
CN111929314A (en) Wheel hub weld visual detection method and detection system
CN109934814B (en) Surface defect detection system and method thereof
CN108355981B (en) Battery connector quality detection method based on machine vision
CN111929309B (en) Cast part appearance defect detection method and system based on machine vision
US7495758B2 (en) Apparatus and methods for two-dimensional and three-dimensional inspection of a workpiece
CN106934800B (en) Metal plate strip surface defect detection method and device based on YOLO9000 network
CN212301356U (en) Wheel hub welding seam visual detection device
CN104063873B (en) A kind of Model For The Bush-axle Type Parts surface defect online test method based on compressed sensing
CN109840900B (en) Fault online detection system and detection method applied to intelligent manufacturing workshop
CN102529019B (en) Method for mould detection and protection as well as part detection and picking
CN110135521A (en) Pole-piece pole-ear defects detection model, detection method and system based on convolutional neural networks
CN110264457A (en) Weld seam autonomous classification method based on rotary area candidate network
JP2021515885A (en) Methods, devices, systems and programs for setting lighting conditions and storage media
Yang et al. An automatic aperture detection system for LED cup based on machine vision
CN117649404A (en) Medicine packaging box quality detection method and system based on image data analysis
CN117817111A (en) Method and system for intelligently identifying and matching process parameters in laser welding
CN116678826A (en) Appearance defect detection system and method based on rapid three-dimensional reconstruction
CN116309313A (en) Battery surface welding defect detection method
CN116465335A (en) Automatic thickness measurement method and system based on point cloud matching
CN114581368A (en) Bar welding method and device based on binocular vision
CN105224941A (en) Process identification and localization method
CN116843615B (en) Lead frame intelligent total inspection method based on flexible light path
CN117269168A (en) New energy automobile precision part surface defect detection device and detection method
CN116342502A (en) Industrial vision detection method based on deep learning
CN112802018B (en) Integrity detection method, device and equipment for segmented circular workpiece and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination