CN111476756B - Method for identifying casting DR image loosening defect based on improved YOLOv3 network model - Google Patents

Method for identifying casting DR image loosening defect based on improved YOLOv3 network model

Info

Publication number
CN111476756B
CN111476756B (application CN202010158887.2A)
Authority
CN
China
Prior art keywords
network model
defect
yolov
image
loose
Prior art date
Legal status
Active
Application number
CN202010158887.2A
Other languages
Chinese (zh)
Other versions
CN111476756A (en)
Inventor
段黎明
阮浪
杨珂
朱世涛
Current Assignee
Chongqing University
Original Assignee
Chongqing University
Priority date
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN202010158887.2A priority Critical patent/CN111476756B/en
Publication of CN111476756A publication Critical patent/CN111476756A/en
Application granted granted Critical
Publication of CN111476756B publication Critical patent/CN111476756B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T7/0008Industrial image inspection checking presence/absence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10116X-ray image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30116Casting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30164Workpiece; Machine component

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for identifying casting DR image loosening defects based on an improved YOLOv3 network model, which comprises the following steps: 1) performing defect labeling on the loose defect data set using the rectangular boxes of an image labeling tool; 2) building an improved YOLOv3 network model; 3) training the improved YOLOv3 network model using the loose defect data training set; 4) testing the trained improved YOLOv3 network model using the loose defect data testing set; 5) improving the YOLOv3 network model; 6) acquiring a DR image of the casting to be detected, inputting it into the improved YOLOv3 network model, and judging the defect grade and position coordinates of the casting. The invention improves the detection performance of the target detection network on small target objects.

Description

Method for identifying casting DR image loosening defect based on improved YOLOv3 network model
Technical Field
The invention relates to the field of workpiece casting, in particular to a method for identifying casting DR image loosening defects based on an improved YOLOv3 network model.
Background
The object of defect identification in casting DR images is to find the position of the defect in the casting radiographic image, then extract the various kinds of defect information, and finally complete the defect identification. At present, there are three main technical approaches to casting defect identification: 1) direct detection based on image processing; 2) traditional machine learning model detection based on defect localization and tracking; 3) detection of casting DR image defects based on deep learning frameworks such as Faster R-CNN.
The problems with the above methods are as follows. In method 1), the image is processed globally, which weakens the response of local defect regions; the categories of the objects to be detected are difficult to distinguish; and the large number of features in the image makes defect identification susceptible to noise interference. In method 2), a traditional machine learning model (Bayesian classifier, support vector machine) or a shallow neural network model is used for DR image defect detection, and its recognition accuracy on casting DR images with complex features is lower than that of deep learning frameworks. The network model of method 3) cannot achieve real-time detection while ensuring recognition accuracy, which is important for actual production requirements.
Disclosure of Invention
The invention aims to provide a method for identifying casting DR image loosening defects based on an improved YOLOv3 network model, which mainly comprises the following steps:
1) Obtaining DR loose defect images of a plurality of castings.
The casting is a cast steel swing bolster or a side frame of a railway train bogie.
2) Preprocessing the DR loose defect image and constructing a loose defect data set.
The main steps of preprocessing the DR original defect image are as follows:
2.1) Uniformly dividing the DR original defect image into defect images of N×N size.
2.2) Performing data enhancement on the defect images. The data enhancement methods include image flipping, image rotation and mirroring.
3) Preprocessing the loose defect data set to enhance its gray values.
The loose defect data set is preprocessed with a guided filter enhancement algorithm.
4) Marking the defects in the loose defect data set using the rectangular boxes of an image labeling tool, and obtaining the defect grade corresponding to each rectangular box, the coordinates (X, Y) of the center point of the rectangular box, the width W of the rectangular box and the height H of the rectangular box. Randomly dividing the labeled loose defect data set into a loose defect data training set and a loose defect data testing set.
The defect grade number is 5.
5) Establishing an improved YOLOv3 network model, obtaining the original weight file of the YOLOv3 network model, and setting the number of filters, the detection grade labels of the COCO and VOC data sets, the number of iterations, the learning rate and whether a multi-scale training strategy is adopted.
6) Training the improved YOLOv3 network model using the loose defect data training set.
The main steps for training the improved YOLOv3 network model are as follows:
6.1) Dividing each image of the loose defect data training set into s×s cells.
6.2) Performing feature extraction on each cell using the improved YOLOv3 network model and generating feature maps at 3 different scales.
6.3) Predicting a plurality of candidate target bounding boxes using the regressor, with the following main steps:
6.3.1) Setting a preset bounding box (cx, cy, pw, ph); (cx, cy) are the center coordinates of the preset bounding box on the feature map; pw, ph are the width and height of the preset bounding box on the feature map.
6.3.2) Calculating the prediction bounding box center offsets (tx, ty) and the width and height scaling factors (tw, th).
6.3.3) Updating the prediction target bounding box (bx, by, bw, bh), i.e.:
bx=σ(tx)+cx (1)
by=σ(ty)+cy (2)
bw=pw·e^tw (3)
bh=ph·e^th (4)
In the formula, the σ(x) function is a Sigmoid function used to scale the predicted offsets to between 0 and 1; (bx, by) are the center coordinates of the prediction target bounding box; bw, bh are the width and height of the prediction target bounding box.
6.4) Calculating the confidence of each candidate target bounding box, Pr(object) × IOU(truth, pred), using a logistic regression method; Pr(object) is the predicted probability that the defect in the candidate target bounding box belongs to each defect grade; IOU(truth, pred) is the accuracy.
The accuracy IOU(truth, pred) is as follows:
IOU(truth, pred) = area(box(truth) ∩ box(pred)) / area(box(truth) ∪ box(pred)) (5)
In the formula, area(box(truth) ∩ box(pred)) represents the area of the intersection region of the real target bounding box and the predicted target bounding box; area(box(truth) ∪ box(pred)) represents the area of the union region of the real target bounding box and the predicted target bounding box.
6.5) Taking the candidate target bounding boxes whose confidence is higher than the threshold ε as target bounding boxes. Calculating, using the Logistic function, the prediction probability P(y=1|x) that the defect in the target bounding box belongs to each defect grade, namely:
P(y=1|x) = 1/(1+e^(−g(x))) (6)
In the formula, the parameter g(x) = ω0 + ω1x1 + ω2x2 + … + ωnxn; ω represents the weights; x represents the input of the improved YOLOv3 network model; the subscript n represents the number of input samples.
The loss function of the improved YOLOv3 network model is L = Lloc + Lconf + Lcla.
The target positioning offset loss Lloc is as follows:
Lloc = λcoord·Σ(i=0..S²)Σ(j=0..B) 1ij^obj·[(xij^pred−xij^obj)² + (yij^pred−yij^obj)² + (√(wij^pred)−√(wij^obj))² + (√(hij^pred)−√(hij^obj))²] (7)
In the formula, the superscript pred represents a predicted value; the superscript obj represents the true value; the superscript anchor_center represents the prediction target bounding box; y represents the output of the improved YOLOv3 network model; λcoord represents the weight given by the network to the predicted box coordinates; S² represents the number of grid cells into which the input image is divided; B represents the number of bounding boxes generated by each grid cell; 1ij^obj is a sign function equal to 1 when there is a casting defect in the bounding box and 0 when there is no casting defect; 1ij^noobj is a sign function equal to 1 when there is no casting defect in the bounding box and 0 when there is; h represents the height of the bounding box; w represents the width of the bounding box; λnoobj represents the weight corresponding to prediction boxes that do not contain the target.
The target confidence error Lconf is as follows:
Lconf = −Σ(i=0..S²)Σ(j=0..B) 1ij^obj·[Cij^obj·log(Cij^pred) + (1−Cij^obj)·log(1−Cij^pred)] − λnoobj·Σ(i=0..S²)Σ(j=0..B) 1ij^noobj·[Cij^obj·log(Cij^pred) + (1−Cij^obj)·log(1−Cij^pred)] (8)
In the formula, Cij^pred represents the confidence of the prediction bounding box and Cij^obj represents the confidence of the real bounding box; Lconf represents the target confidence loss.
The target classification error Lcla is as follows:
Lcla = −Σ(i=0..S²) 1i^obj·Σ(c∈classes) [P̂i(c)·log(Pi(c)) + (1−P̂i(c))·log(1−Pi(c))] (9)
In the formula, Pi(c) is the predicted probability P that the defect belongs to each defect grade and P̂i(c) is the true probability that the defect belongs to each defect grade.
7) Testing the trained improved YOLOv3 network model using the loose defect data testing set and evaluating the output results of the improved YOLOv3 network model; if the evaluation results do not meet the preset requirements, entering step 8), otherwise entering step 9).
The evaluation parameters of the output results of the improved YOLOv3 network model include accuracy, recall, F1 value, detection speed and mean average precision (mAP).
8) Modifying the parameters of the improved YOLOv3 network model and returning to step 6).
The method for modifying the parameters of the improved YOLOv3 network model is as follows:
8.1) Without changing the number of preset bounding boxes, re-clustering the preset bounding boxes with the K-means++ clustering algorithm to update their sizes. The overlap between a preset bounding box and a cluster centroid satisfies the following formula:
d(box,centroid)=1-IOU(box,centroid) (10)
In the formula, d(box, centroid) is the distance between each preset bounding box and the cluster centroid; IOU(box, centroid) is the intersection-over-union between the preset bounding box and the cluster centroid.
8.2) Expanding the 3 feature maps of different scales in the improved YOLOv3 network model into 4 feature maps of different scales.
The scale size of the added feature map is 104×104.
9) Modifying the original weight file of the YOLOv3 network model based on the training process of the improved YOLOv3 network model, so as to obtain the weight file of the improved YOLOv3 network model;
10) Acquiring the DR image of the casting to be detected, inputting it into the improved YOLOv3 network model loaded with the weight file, and judging the defect grade and position coordinates of the casting.
The technical effect of the invention is that it provides a method for detecting casting DR image loosening defects based on the YOLOv3 network which meets the requirement of real-time detection while ensuring accuracy, thereby satisfying actual production requirements. The invention avoids the labor of manually constructing features and has strong transferability. It has higher accuracy than traditional machine learning methods and better real-time performance than detection methods based on the Faster R-CNN network model. The invention improves the YOLOv3 network by adding a 104×104 prediction scale, which improves the detection performance of the target detection network on small target objects.
Drawings
FIG. 1 is a flow chart;
FIG. 2 is an original image and gray scale profile;
FIG. 3 is an enhanced image and gray scale profile;
FIG. 4 is the YOLOv3 network training process;
FIG. 5 is a prediction process of a target bounding box;
FIG. 6 is a diagram of the improved YOLOv3 network architecture;
FIG. 7 is a bolster DR detection image;
FIG. 8 is a side frame DR detection image;
FIG. 9 is a casting defect dataset;
FIG. 10 is an original image of a cast DR image loosening defect;
FIG. 11 shows the result of identifying the loosening defect of the DR image of the casting.
Detailed Description
The present invention is further described below with reference to examples, but this should not be construed as limiting the scope of the above subject matter of the invention to the following examples. Various substitutions and alterations made according to ordinary knowledge and customary means in the art without departing from the technical spirit of the invention are all intended to be included in the scope of the invention.
Example 1:
Referring to fig. 1 to 6, a method for identifying casting DR image loosening defects based on an improved YOLOv3 network model mainly comprises the following steps:
1) Obtaining DR loose defect images of a plurality of castings.
The casting is a cast steel swing bolster or a side frame of a railway train bogie.
2) Preprocessing the DR loose defect image and constructing a loose defect data set.
The main steps of preprocessing the DR original defect image are as follows:
2.1) Uniformly dividing the DR original defect image into defect images of N×N size.
2.2) Performing data enhancement on the defect images. The data enhancement methods include image flipping, image rotation and mirroring.
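As an illustration of steps 2.1) and 2.2), the following is a minimal Python/OpenCV sketch of cutting a DR image into N×N-pixel patches and augmenting each patch by flipping, rotation and mirroring; the file name and the choice of a single 90° rotation are assumptions for illustration, not details taken from the patent.

```python
import cv2
import numpy as np

def tile_image(image: np.ndarray, n: int) -> list:
    """Split a DR image into non-overlapping patches of n x n pixels."""
    h, w = image.shape[:2]
    return [image[r:r + n, c:c + n]
            for r in range(0, h - n + 1, n)
            for c in range(0, w - n + 1, n)]

def augment(patch: np.ndarray) -> list:
    """Simple augmentations: vertical flip, 90-degree rotation and horizontal mirror."""
    return [
        patch,
        cv2.flip(patch, 0),                          # vertical flip
        cv2.rotate(patch, cv2.ROTATE_90_CLOCKWISE),  # rotation
        cv2.flip(patch, 1),                          # horizontal mirror
    ]

if __name__ == "__main__":
    dr = cv2.imread("casting_dr.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
    patches = tile_image(dr, n=2048)
    dataset = [aug for p in patches for aug in augment(p)]
    print(f"{len(patches)} tiles -> {len(dataset)} augmented samples")
```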
3) Preprocessing the loose defect data set and enhancing its gray values so that the loose defects are easier to distinguish from the gray values of the background, which improves the accuracy of subsequent model training and testing on the data set. The original and enhanced images and their gray-value distributions are shown in fig. 2 and fig. 3.
The loose defect data set is preprocessed with a guided filter enhancement algorithm.
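The patent does not reproduce the enhancement formula, so the following is only a sketch of one common guided-filter detail-enhancement scheme (filter the image with itself as the guide, then amplify the residual); the radius, eps and gain values are assumptions, and cv2.ximgproc.guidedFilter requires the opencv-contrib-python package.

```python
import cv2
import numpy as np

def guided_filter_enhance(gray: np.ndarray, radius: int = 8,
                          eps: float = 0.01, gain: float = 2.0) -> np.ndarray:
    """Enhance gray values by adding back the detail layer removed by a guided filter."""
    img = gray.astype(np.float32) / 255.0
    base = cv2.ximgproc.guidedFilter(img, img, radius, eps)  # smoothed base layer
    detail = img - base                                      # high-frequency defect texture
    enhanced = np.clip(base + gain * detail, 0.0, 1.0)
    return (enhanced * 255).astype(np.uint8)

if __name__ == "__main__":
    patch = cv2.imread("defect_patch.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    cv2.imwrite("defect_patch_enhanced.png", guided_filter_enhance(patch))
```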
4) Performing defect labeling on the loose defect data set using the rectangular boxes of the LabelImg image labeling tool, saving the labeled files as XML files in PASCAL VOC format, and converting the XML files into TXT files in the format <label, X, Y, W, H> with a format conversion script, so as to obtain the defect grade corresponding to each rectangular box, the coordinates (X, Y) of the center point of the rectangular box, the width W of the rectangular box and the height H of the rectangular box. Randomly dividing the labeled loose defect data set into a loose defect data training set and a loose defect data testing set.
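The format conversion script itself is not reproduced in the patent; the sketch below shows one way to turn a PASCAL VOC XML file produced by LabelImg into <label, X, Y, W, H> text lines. Normalizing the coordinates to the image size follows the usual YOLO convention and is an assumption rather than something the patent specifies.

```python
import xml.etree.ElementTree as ET

CLASSES = ["1", "2", "3", "4", "5"]  # the five loose-defect grades

def voc_xml_to_txt(xml_path: str, txt_path: str) -> None:
    """Convert a LabelImg PASCAL VOC annotation to '<label> <X> <Y> <W> <H>' lines."""
    root = ET.parse(xml_path).getroot()
    img_w = float(root.find("size/width").text)
    img_h = float(root.find("size/height").text)
    lines = []
    for obj in root.iter("object"):
        label = CLASSES.index(obj.find("name").text)
        box = obj.find("bndbox")
        xmin, ymin = float(box.find("xmin").text), float(box.find("ymin").text)
        xmax, ymax = float(box.find("xmax").text), float(box.find("ymax").text)
        x = (xmin + xmax) / 2 / img_w   # normalized center X
        y = (ymin + ymax) / 2 / img_h   # normalized center Y
        w = (xmax - xmin) / img_w       # normalized width W
        h = (ymax - ymin) / img_h       # normalized height H
        lines.append(f"{label} {x:.6f} {y:.6f} {w:.6f} {h:.6f}")
    with open(txt_path, "w") as f:
        f.write("\n".join(lines))
```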
The defect grade number is 5, and the grade value is proportional to the defect severity.
5) Establishing an improved YOLOv3 network model, obtaining the original weight file of the YOLOv3 network model, and setting the number of filters, the detection grade labels of the COCO and VOC data sets, the number of iterations, the learning rate and whether a multi-scale training strategy is adopted.
6) Training the improved YOLOv3 network model using the loose defect data training set.
The main steps for training the improved YOLOv3 network model are as follows:
6.1) Dividing each image of the loose defect data training set into s×s cells.
6.2) Performing feature extraction on each cell using the improved YOLOv3 network model and generating feature maps at 3 different scales.
6.3) Predicting a plurality of candidate target bounding boxes using the regressor, with the following main steps:
6.3.1) Setting a preset bounding box (cx, cy, pw, ph); (cx, cy) are the center coordinates of the preset bounding box on the feature map; pw, ph are the width and height of the preset bounding box on the feature map.
6.3.2) Calculating the prediction bounding box center offsets (tx, ty) and the width and height scaling factors (tw, th).
6.3.3) Updating the prediction target bounding box (bx, by, bw, bh), i.e.:
bx=σ(tx)+cx (1)
by=σ(ty)+cy (2)
bw=pw·e^tw (3)
bh=ph·e^th (4)
In the formula, the σ(x) function is a Sigmoid function used to scale the predicted offsets to between 0 and 1; (bx, by) are the center coordinates of the prediction target bounding box; bw, bh are the width and height of the prediction target bounding box.
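A small numeric sketch of the box decoding step above; equations (3) and (4) follow the standard YOLOv3 form assumed in the reconstruction, and the sample values are made up for illustration.

```python
import math

def sigmoid(v: float) -> float:
    return 1.0 / (1.0 + math.exp(-v))

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    """Apply bx=σ(tx)+cx, by=σ(ty)+cy, bw=pw·e^tw, bh=ph·e^th for one feature-map cell."""
    bx = sigmoid(tx) + cx
    by = sigmoid(ty) + cy
    bw = pw * math.exp(tw)
    bh = ph * math.exp(th)
    return bx, by, bw, bh

# Example: cell (6, 4) of a 13x13 feature map with a preset box of 3.6 x 2.2 cells.
print(decode_box(0.2, -0.1, 0.4, 0.1, cx=6, cy=4, pw=3.6, ph=2.2))
```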
6.4) Calculating the confidence of each candidate target bounding box, Pr(object) × IOU(truth, pred), using a logistic regression method; Pr(object) is the predicted probability that the defect in the candidate target bounding box belongs to each defect grade; IOU(truth, pred) is the accuracy.
The accuracy IOU(truth, pred) is as follows:
IOU(truth, pred) = area(box(truth) ∩ box(pred)) / area(box(truth) ∪ box(pred)) (5)
In the formula, area(box(truth) ∩ box(pred)) represents the area of the intersection region of the real target bounding box and the predicted target bounding box; area(box(truth) ∪ box(pred)) represents the area of the union region of the real target bounding box and the predicted target bounding box.
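A short sketch of the accuracy term IOU(truth, pred) for axis-aligned boxes given in (X, Y, W, H) center format; purely illustrative.

```python
def iou(box_a, box_b) -> float:
    """Intersection over union of two (x_center, y_center, w, h) boxes."""
    ax1, ay1 = box_a[0] - box_a[2] / 2, box_a[1] - box_a[3] / 2
    ax2, ay2 = box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2
    bx1, by1 = box_b[0] - box_b[2] / 2, box_b[1] - box_b[3] / 2
    bx2, by2 = box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

print(iou((50, 50, 40, 40), (60, 55, 40, 40)))  # partially overlapping boxes
```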
6.5) Setting a threshold ε, filtering out the bounding boxes with low confidence scores, and performing non-maximum suppression (NMS) on the remaining bounding boxes. The prediction probability that the defect in the target bounding box belongs to each category is then calculated with the Logistic function. When the Softmax function is used as the classifier, each bounding box is assumed by default to contain only one target class, so overlapping targets cannot be identified; in some complex scenes one target may belong to multiple categories, so a Logistic function is used as the classifier for prediction, one Logistic function being assigned to each possible target. Using multiple independent Logistic functions instead of Softmax does not reduce the accuracy of the classifier and realizes multi-label classification.
The probability P(y=1|x) is as follows:
P(y=1|x) = 1/(1+e^(−g(x))) (6)
In the formula, the parameter g(x) = ω0 + ω1x1 + ω2x2 + … + ωnxn; ω represents the weights; x represents the input of the improved YOLOv3 network model.
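The confidence filtering and non-maximum suppression of step 6.5) can be sketched as follows; the score and IoU thresholds are assumed values, and the boxes here use corner coordinates for brevity.

```python
def box_iou(a, b) -> float:
    """IoU of two boxes given as (x1, y1, x2, y2) corners."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def non_max_suppression(boxes, scores, score_thresh=0.5, iou_thresh=0.45):
    """Keep the highest-scoring boxes and drop overlapping lower-scoring ones."""
    order = sorted((i for i, s in enumerate(scores) if s >= score_thresh),
                   key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if box_iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

boxes = [(30, 30, 70, 70), (32, 31, 72, 72), (105, 65, 135, 95)]
scores = [0.9, 0.8, 0.7]
print(non_max_suppression(boxes, scores))  # -> [0, 2]; the second box is suppressed
```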
The loss function of the improved YOLOv3 network model is L = Lloc + Lconf + Lcla.
The target positioning offset loss Lloc is as follows:
Lloc = λcoord·Σ(i=0..S²)Σ(j=0..B) 1ij^obj·[(xij^pred−xij^obj)² + (yij^pred−yij^obj)² + (√(wij^pred)−√(wij^obj))² + (√(hij^pred)−√(hij^obj))²] (7)
In the formula, the superscript pred represents a predicted value; the superscript obj represents the true value; the superscript anchor_center represents the prediction target bounding box; y represents the output of the improved YOLOv3 network model; λcoord represents the weight given by the network to the predicted box coordinates; S² represents the number of grid cells into which the input image is divided; B represents the number of bounding boxes generated by each grid cell; 1ij^obj is a sign function equal to 1 when there is a casting defect in the bounding box and 0 when there is no casting defect; 1ij^noobj is a sign function equal to 1 when there is no casting defect in the bounding box and 0 when there is; h represents the height of the bounding box; w represents the width of the bounding box; λnoobj represents the weight corresponding to prediction boxes that do not contain the target; the index i indicates the grid cell number and the index j indicates the bounding box number.
The target confidence error Lconf is as follows:
Lconf = −Σ(i=0..S²)Σ(j=0..B) 1ij^obj·[Cij^obj·log(Cij^pred) + (1−Cij^obj)·log(1−Cij^pred)] − λnoobj·Σ(i=0..S²)Σ(j=0..B) 1ij^noobj·[Cij^obj·log(Cij^pred) + (1−Cij^obj)·log(1−Cij^pred)] (8)
In the formula, Cij^pred represents the confidence of the prediction bounding box and Cij^obj represents the confidence of the real bounding box; Lconf represents the target confidence loss.
The target classification error Lcla is as follows:
Lcla = −Σ(i=0..S²) 1i^obj·Σ(c∈classes) [P̂i(c)·log(Pi(c)) + (1−P̂i(c))·log(1−Pi(c))] (9)
In the formula, Pi(c) is the predicted probability P that the defect belongs to each defect grade and P̂i(c) is the true probability that the defect belongs to each defect grade.
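The loss terms above were reconstructed from the symbol definitions given in the text; the toy NumPy sketch below mirrors that reconstructed form for already-matched boxes (squared coordinate error, binary cross-entropy for confidence and class) and is not the patent's actual implementation.

```python
import numpy as np

def yolo_like_loss(pred, truth, obj_mask, lambda_coord=5.0, lambda_noobj=0.5):
    """Toy version of L = Lloc + Lconf + Lcla for already-matched boxes.

    pred / truth: dicts with arrays 'xywh' (N,4), 'conf' (N,), 'cls' (N,C);
    obj_mask: boolean (N,), True where the cell/box actually contains a defect.
    """
    eps = 1e-7
    noobj = ~obj_mask
    # Lloc: squared errors on x, y and on the square roots of w, h (object boxes only).
    dxy = pred["xywh"][:, :2] - truth["xywh"][:, :2]
    dwh = np.sqrt(pred["xywh"][:, 2:]) - np.sqrt(truth["xywh"][:, 2:])
    l_loc = lambda_coord * (np.sum(dxy[obj_mask] ** 2) + np.sum(dwh[obj_mask] ** 2))
    # Lconf: binary cross-entropy, down-weighted for boxes without a defect.
    bce = -(truth["conf"] * np.log(pred["conf"] + eps)
            + (1 - truth["conf"]) * np.log(1 - pred["conf"] + eps))
    l_conf = np.sum(bce[obj_mask]) + lambda_noobj * np.sum(bce[noobj])
    # Lcla: binary cross-entropy over the defect-grade probabilities (object boxes only).
    p, q = pred["cls"][obj_mask], truth["cls"][obj_mask]
    l_cla = -np.sum(q * np.log(p + eps) + (1 - q) * np.log(1 - p + eps))
    return l_loc + l_conf + l_cla

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pred = {"xywh": rng.uniform(0.2, 0.8, (4, 4)), "conf": rng.uniform(0.1, 0.9, 4),
            "cls": rng.uniform(0.1, 0.9, (4, 5))}
    truth = {"xywh": rng.uniform(0.2, 0.8, (4, 4)),
             "conf": np.array([1.0, 1.0, 0.0, 0.0]),
             "cls": np.eye(5)[[0, 2, 0, 0]].astype(float)}
    print(yolo_like_loss(pred, truth, obj_mask=np.array([True, True, False, False])))
```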
7) Testing the trained improved YOLOv3 network model using the loose defect data testing set and evaluating the output results of the improved YOLOv3 network model; if the evaluation results do not meet the preset requirements, entering step 8), otherwise entering step 9).
The evaluation parameters of the output results of the improved YOLOv3 network model include accuracy, recall, F1 value, detection speed and mean average precision (mAP).
The precision (Precision) is as follows:
Precision=TP/(TP+FP) (10)
In the formula, TP is the number of samples predicted as 1 and predicted correctly; TP+FP is the total number of samples predicted as 1; a prediction of 1 indicates that the improved YOLOv3 network model assigns the output to the corresponding class.
The recall (Recall) is as follows:
Recall=TP/(TP+FN) (11)
In the formula, FN is the number of samples whose true label is 1 but that were predicted incorrectly, so TP+FN is the total number of samples whose true label is 1.
The Accuracy is as follows:
Accuracy=(TP+TN)/(TP+TN+FP+FN) (12)
In the formula, TP+TN+FP+FN is the total number of samples, and TP+TN is the number of samples predicted correctly.
The F1 values are as follows:
F1=2*(Precision*Recall)/(Precision+Recall) (13)
In the formula, Precision and Recall are the values calculated above.
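Formulas (10)-(13) reduce to simple counting over the test set; a short sketch, assuming binary labels where 1 marks a detection of the grade under evaluation:

```python
def detection_metrics(y_true, y_pred):
    """Precision, Recall, Accuracy and F1 from paired binary labels (1 = defect)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = (tp + tn) / len(y_true)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, accuracy, f1

print(detection_metrics([1, 1, 0, 1, 0, 0], [1, 0, 0, 1, 1, 0]))
```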
8) Modifying the parameters of the improved YOLOv3 network model and returning to step 6).
The method for modifying the parameters of the improved YOLOv3 network model is as follows:
8.1) Without changing the number of preset bounding boxes, re-clustering the preset bounding boxes with the K-means++ clustering algorithm to update their sizes. The overlap between a preset bounding box and a cluster centroid satisfies the following formula:
d(box,centroid)=1-IOU(box,centroid) (14)
In the formula, d(box, centroid) is the distance between each preset bounding box and the cluster centroid; IOU(box, centroid) is the intersection-over-union (accuracy) between the preset bounding box and the cluster centroid.
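A compact sketch of the re-clustering in step 8.1), using the d(box, centroid) = 1 − IOU(box, centroid) distance of formula (14) on the labeled (W, H) pairs; plain random seeding stands in for the K-means++ seeding mentioned in the text, and k = 9 anchors is an assumption.

```python
import random
import numpy as np

def iou_wh(wh, centroids):
    """IoU between one (w, h) box and each centroid, assuming co-centered boxes."""
    inter = np.minimum(wh[0], centroids[:, 0]) * np.minimum(wh[1], centroids[:, 1])
    union = wh[0] * wh[1] + centroids[:, 0] * centroids[:, 1] - inter
    return inter / union

def kmeans_anchors(boxes_wh, k=9, iters=100, seed=0):
    """Cluster (w, h) pairs with d(box, centroid) = 1 - IOU(box, centroid)."""
    rng = random.Random(seed)
    boxes_wh = np.asarray(boxes_wh, dtype=float)
    centroids = boxes_wh[rng.sample(range(len(boxes_wh)), k)]
    for _ in range(iters):
        dists = np.array([1.0 - iou_wh(wh, centroids) for wh in boxes_wh])
        assign = dists.argmin(axis=1)
        new_centroids = np.array([boxes_wh[assign == j].mean(axis=0)
                                  if np.any(assign == j) else centroids[j]
                                  for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return sorted(centroids.tolist(), key=lambda c: c[0] * c[1])

if __name__ == "__main__":
    demo = [(12, 15), (14, 18), (60, 80), (55, 75), (110, 150), (100, 140)]
    print(kmeans_anchors(demo, k=3))
```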
The results obtained by the cluster analysis are shown in the following table:
8.2) Because loose defects on casting DR images are small, the scales of the multi-scale detection in the YOLOv3 network model are somewhat insufficient. To better identify defects on castings, the YOLOv3 model structure is improved: the original 3 feature maps of different scales are expanded into 4 feature maps of different scales for detecting objects, and the scale of the added feature map is 104×104. After the network structure is improved, the training model increases from predicting 3549 bounding boxes to predicting 14365 bounding boxes, roughly four times as many as the original 3-scale model, which improves the recognition rate of small targets. The network structure of the improved YOLOv3 is shown in fig. 6.
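The grid-cell counts quoted above follow directly from the prediction strides; a two-line check assuming the standard 416×416 YOLOv3 input size:

```python
# Grid cells per prediction scale for a 416x416 input (cell counts, before anchors per cell).
three_scale = sum((416 // s) ** 2 for s in (32, 16, 8))   # 13*13 + 26*26 + 52*52 = 3549
four_scale = three_scale + (416 // 4) ** 2                # + 104*104            = 14365
print(three_scale, four_scale, round(four_scale / three_scale, 2))  # 3549 14365 4.05
```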
9) Modifying the original weight file of the YOLOv3 network model based on the training process of the improved YOLOv3 network model, so as to obtain the weight file of the improved YOLOv3 network model. The original weight file is downloaded from the official website:
https://pjreddie.com/media/files/yolov3.weights。
10) Acquiring the DR image of the casting to be detected, inputting it into the improved YOLOv3 network model loaded with the weight file, and judging the defect grade and position coordinates of the casting.
Example 2:
A method for identifying casting DR image loosening defects based on an improved YOLOv3 network model mainly comprises the following steps:
1) Obtaining DR loose defect images of a plurality of castings.
2) Preprocessing the DR loose defect image and constructing a loose defect data set.
3) Preprocessing the loose defect data set to enhance its gray values.
4) Performing defect labeling on the loose defect data set using the rectangular boxes of an image labeling tool, and obtaining the defect type, defect grade, rectangular box center point coordinates (X, Y), rectangular box width W and rectangular box height H corresponding to each rectangular box. Randomly dividing the labeled loose defect data set into a loose defect data training set and a loose defect data testing set.
5) Establishing an improved YOLOv3 network model, and setting the number of filters, the detection category labels of the COCO and VOC data sets, the number of iterations, the learning rate and whether a multi-scale training strategy is adopted.
6) Training the improved YOLOv3 network model using the loose defect data training set.
7) Testing the trained improved YOLOv3 network model using the loose defect data testing set and evaluating the output results of the improved YOLOv3 network model; if the evaluation results do not meet the preset requirements, entering step 8), otherwise entering step 9).
8) Improving the YOLOv3 network model and returning to step 6).
9) Generating the weight file of the improved YOLOv3 network model.
10) Acquiring the DR image of the casting to be detected, inputting it into the improved YOLOv3 network model loaded with the weight file, and judging the defect grade and position coordinates of the casting.
Example 3:
A method for identifying casting DR image loosening defects based on an improved YOLOv3 network model, comprising the main steps of example 2, wherein the main steps for training the improved YOLOv3 network model are as follows:
1) Dividing each image of the loose defect data training set into s×s cells;
2) Extracting the features of each cell using the improved YOLOv3 network model and generating feature maps at 3 different scales;
3) Predicting a plurality of candidate target bounding boxes using the regressor, with the following main steps:
3.1) Setting a preset bounding box (cx, cy, pw, ph); (cx, cy) are the center coordinates of the preset bounding box on the feature map; pw, ph are the width and height of the preset bounding box on the feature map;
3.2) Calculating the prediction bounding box center offsets (tx, ty) and the width and height scaling factors (tw, th);
3.3) Updating the prediction target bounding box (bx, by, bw, bh), i.e.:
bx=σ(tx)+cx (1)
by=σ(ty)+cy (2)
bw=pw·e^tw (3)
bh=ph·e^th (4)
Wherein, the σ(x) function is a Sigmoid function used to scale the predicted offsets to between 0 and 1; (bx, by) are the center coordinates of the prediction target bounding box; bw, bh are the width and height of the prediction target bounding box;
4) Calculating the confidence of each candidate target bounding box, Pr(object) × IOU(truth, pred), using a logistic regression method; Pr(object) is the prediction probability that the defect in the candidate target bounding box belongs to each category; IOU(truth, pred) is the accuracy;
The accuracy IOU(truth, pred) is as follows:
IOU(truth, pred) = area(box(truth) ∩ box(pred)) / area(box(truth) ∪ box(pred)) (5)
5) Taking the candidate target bounding boxes whose confidence is higher than the threshold ε as target bounding boxes, and calculating, using the Logistic function, the prediction probability that the defect in the target bounding box belongs to each category.
Example 4:
Referring to fig. 7 and fig. 9 to 11, an experiment verifying the method for identifying casting DR image loosening defects based on the improved YOLOv3 network model is mainly as follows:
1) Referring to fig. 7, a cast steel swing bolster DR detection image is acquired. The swing bolster generally has a box structure that presents a fish-belly shape along its length; the center plate seat on the upper surface has a pin hole for inserting the center pin; the lower surface has a round boss for positioning the spring; symmetrical process holes are used on the upper surface and the bottom; the mass of a swing bolster is generally 350-1000 kg.
2) The acquired casting DR images are uniformly divided into images of 2048×2048 pixels; the original data comprise 1100 images, each about 300-400 KB in size. Among the defect types produced in cast steel swing bolsters and side frames, most are loose defects, so the detection and identification of loose defects is the main subject of study; meanwhile, the defects are detected by grade, so that defect levels under different degrees of damage can be identified. This embodiment divides the same defect type into 5 grades, with grades 1 to 5 indicating progressively increasing damage.
Fig. 9 is a casting DR image defect dataset.
3) Referring to fig. 10, an original image to be detected is acquired.
4) The weight file trained_weights.h5 obtained in the training process is loaded, the weight-loading path in the yolo.py file is modified, python yolo_video.py --image is entered under the project file path in cmd, and the name of the image to be identified (i.e. fig. 10) and the image type are entered, so that the target to be detected can be identified, as shown in fig. 11. In the figure, c denotes a loose defect.
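For reference, a hypothetical Python-side equivalent of the command-line call above, assuming the keras-yolo3-style project layout implied by yolo.py and yolo_video.py (a YOLO class exposing detect_image); all paths are placeholders.

```python
from PIL import Image
from yolo import YOLO  # the project's yolo.py, assumed to follow the keras-yolo3 layout

detector = YOLO(
    model_path="model_data/trained_weights.h5",    # weight file produced by training
    classes_path="model_data/defect_classes.txt",  # the five loose-defect grades
    anchors_path="model_data/yolo_anchors.txt",    # anchors from the K-means++ step
)
image = Image.open("dr_patch_to_detect.png")       # the image of fig. 10
result = detector.detect_image(image)              # draws grade labels and boxes
result.show()
```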
Example 5:
An experiment verifying the method for identifying casting DR image loosening defects based on the improved YOLOv3 network model is mainly as follows:
1) Referring to fig. 8, a side frame DR detection image is acquired. The side frame generally has a square frame structure with a hollow middle; columns and stops are arranged on both sides of the frame to limit the lateral displacement of the swing bolster; triangular inspection holes on both sides of the frame allow the brake shoes to be observed; the blank mass of a side frame is generally 300-600 kg.
2) The acquired casting DR images are uniformly divided into images of 2048×2048 pixels; the original data comprise 1100 images, each about 300-400 KB in size. Among the defect types produced in cast steel swing bolsters and side frames, most are loose defects, so the detection and identification of loose defects is the main subject of study; meanwhile, the defects are detected by grade, so that defect levels under different degrees of damage can be identified. This embodiment divides the same defect type into 5 grades, with grades 1 to 5 indicating progressively increasing damage.
3) And acquiring an original image to be detected.
4) The weight file trained_weights.h5 obtained in the training process is loaded, the weight-loading path in the yolo.py file is modified, python yolo_video.py --image is entered under the project file path in cmd, and the original image to be identified and the image type are entered, so that the target to be detected can be identified.

Claims (7)

1. The method for identifying the casting DR image loosening defect based on the improved YOLOv3 network model is characterized by comprising the following steps:
1) Obtaining DR loose defect images of a plurality of castings;
2) Preprocessing the DR loose defect image and constructing a loose defect data set;
3) Preprocessing the loose defect data set, and enhancing the gray value of the loose defect data set;
4) Marking the loose defect data set by utilizing rectangular frames of an image marking tool, and obtaining a defect grade corresponding to each rectangular frame, coordinates (X, Y) of a center point of the rectangular frame, a width W of the rectangular frame and a height H of the rectangular frame; randomly dividing the marked loose defect data set into a loose defect data training set and a loose defect data testing set;
5) Establishing an improved YOLOv3 network model, obtaining an original weight file of the YOLOv3 network model, and setting the number of filters, the detection grade labels of the COCO and VOC data sets, the number of iterations, the learning rate and whether a multi-scale training strategy is adopted;
The loss function of the improved YOLOv3 network model is L = Lloc + Lconf + Lcla;
The target positioning offset loss Lloc is as follows:
Lloc = λcoord·Σ(i=0..S²)Σ(j=0..B) 1ij^obj·[(xij^pred−xij^obj)² + (yij^pred−yij^obj)² + (√(wij^pred)−√(wij^obj))² + (√(hij^pred)−√(hij^obj))²] (1)
Wherein, the superscript pred represents a predicted value; the superscript obj represents the true value; the superscript anchor_center represents the prediction target bounding box; y represents the output of the improved YOLOv3 network model; λcoord represents the weight given by the network to the predicted box coordinates; S² represents the number of grid cells into which the input image is divided; B represents the number of bounding boxes generated by each grid cell; 1ij^obj is a sign function equal to 1 when there is a casting defect in the bounding box and 0 when there is no casting defect in the bounding box; 1ij^noobj is a sign function equal to 1 when there is no casting defect in the bounding box and 0 when there is a casting defect in the bounding box; h represents the height of the bounding box; w represents the width of the bounding box; λnoobj represents the weight corresponding to prediction boxes which do not contain the target;
the target confidence error Lconf is as follows:
Lconf = −Σ(i=0..S²)Σ(j=0..B) 1ij^obj·[Cij^obj·log(Cij^pred) + (1−Cij^obj)·log(1−Cij^pred)] − λnoobj·Σ(i=0..S²)Σ(j=0..B) 1ij^noobj·[Cij^obj·log(Cij^pred) + (1−Cij^obj)·log(1−Cij^pred)] (2)
Wherein, Cij^pred represents the confidence of the prediction bounding box; Cij^obj represents the confidence of the real bounding box; Lconf represents the target confidence loss; IOU(truth, pred) is the accuracy;
the target classification error Lcla is as follows:
Lcla = −Σ(i=0..S²) 1i^obj·Σ(c∈classes) [P̂i(c)·log(Pi(c)) + (1−P̂i(c))·log(1−Pi(c))] (3)
Wherein, Pi(c) is the prediction probability P that the defect belongs to each defect grade; P̂i(c) is the true probability that the defect belongs to each defect grade;
6) Training the improved YOLOv3 network model using the loose defect data training set;
7) Testing the trained improved YOLOv3 network model using the loose defect data testing set and evaluating the output results of the improved YOLOv3 network model; if the evaluation results do not meet the preset requirements, entering step 8), otherwise entering step 9);
8) Modifying parameters of the improved YOLOv3 network model, and returning to step 6);
The method for modifying the parameters of the improved YOLOv3 network model is as follows:
8.1 Under the condition of not changing the number of the preset boundary frames, the K-means++ clustering algorithm is utilized to recluster the preset boundary frames so as to update the size of the preset boundary frames; the coincidence ratio of the preset boundary frame and the clustering center meets the following formula:
d(box,centroid)=1-IOU(box,centroid) (4)
Wherein d(box, centroid) is the distance between each preset bounding box and the cluster centroid; IOU(box, centroid) is the intersection-over-union between the preset bounding box and the cluster centroid;
8.2) Expanding the 3 feature maps of different scales in the improved YOLOv3 network model into 4 feature maps of different scales;
the scale size of the added feature map is 104×104;
9) Modifying the original weight file of the YOLOv3 network model based on the training process of the improved YOLOv3 network model, so as to obtain a weight file of the improved YOLOv3 network model;
10) Acquiring the DR image of the casting to be detected, inputting it into the improved YOLOv3 network model loaded with the weight file, and judging the defect grade and position coordinates of the casting.
2. The method for identifying the casting DR image loosening defect based on the improved YOLOv3 network model as claimed in claim 1, wherein the casting is a cast steel swing bolster or a side frame of a railway train bogie.
3. The method for identifying the casting DR image loosening defect based on the improved YOLOv3 network model as claimed in claim 1, wherein the steps of preprocessing the DR original defect image are as follows:
1) Uniformly dividing the DR original defect image into defect images with N multiplied by N sizes;
2) Carrying out data enhancement on the defect image; the data enhancement method comprises image flipping, image rotation and mirroring.
4. The method for identifying the casting DR image loosening defect based on the improved YOLOv3 network model as claimed in claim 1, wherein the loose defect data set is preprocessed with a guided filter enhancement algorithm.
5. The method for identifying the casting DR image loosening defect based on the improved YOLOv3 network model as claimed in claim 1, wherein the number of defect grades is 5.
6. The method for identifying the casting DR image loosening defect based on the improved YOLOv3 network model as claimed in claim 1, wherein the steps of training the improved YOLOv3 network model are as follows:
1) Dividing each image of the loose defect data training set into s×s cells;
2) Extracting the features of each cell using the improved YOLOv3 network model and generating 3 feature maps of different scales;
3) Predicting a plurality of candidate target bounding boxes using the regressor, which comprises the following steps:
3.1 Setting a preset bounding box (cx, cy, pw, ph); (cx, cy) is the center coordinate of the preset bounding box on the feature image; pw and ph are the width and height of the preset boundary box on the feature map;
3.2 Calculating a prediction bounding box center offset (tx, ty) and a wide-to-high scaling ratio (tw, th);
3.3 Updating the prediction target bounding box (bx, by, bw, bh), i.e.:
bx=σ(tx)+cx (5)
by=σ(ty)+cy (6)
bw=pw·e^tw (7)
bh=ph·e^th (8)
Wherein, the σ(x) function is a Sigmoid function used for scaling the preset offset to between 0 and 1; (bx, by) are the prediction target bounding box center coordinates; bw, bh are the width and height of the prediction target bounding box;
4) Calculating the confidence of each candidate target bounding box, Pr(object) × IOU(truth, pred), using a logistic regression method; Pr(object) is the prediction probability that the defect in the candidate target bounding box belongs to each category; IOU(truth, pred) is the accuracy;
the accuracy IOU(truth, pred) is as follows:
IOU(truth, pred) = area(box(truth) ∩ box(pred)) / area(box(truth) ∪ box(pred)) (9)
wherein area(box(truth) ∩ box(pred)) represents the area of the intersection region of the real target bounding box and the predicted target bounding box, and area(box(truth) ∪ box(pred)) represents the area of the union region of the real target bounding box and the predicted target bounding box;
5) Taking the candidate target bounding boxes with a confidence higher than the threshold ε as target bounding boxes; calculating, using a Logistic function, the prediction probability P(y=1|x) that the defect in the target bounding box belongs to each category, namely:
P(y=1|x) = 1/(1+e^(−g(x))) (10)
Wherein, the calculation parameter g(x) = ω0 + ω1x1 + ω2x2 + … + ωnxn; ω represents the weights; x represents the input of the improved YOLOv3 network model; the subscript n represents the number of input samples.
7. The method for identifying the casting DR image loosening defect based on the improved YOLOv3 network model as claimed in claim 1, wherein the evaluation parameters of the output results of the improved YOLOv3 network model include accuracy, recall, F1 value, detection speed, and mean average precision (mAP).
CN202010158887.2A 2020-03-09 2020-03-09 Method for identifying casting DR image loosening defect based on improved YOLOv3 network model Active CN111476756B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010158887.2A CN111476756B (en) 2020-03-09 2020-03-09 Method for identifying casting DR image loosening defect based on improved YOLOv3 network model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010158887.2A CN111476756B (en) 2020-03-09 2020-03-09 Method for identifying casting DR image loosening defect based on improved YOLOv3 network model

Publications (2)

Publication Number Publication Date
CN111476756A CN111476756A (en) 2020-07-31
CN111476756B true CN111476756B (en) 2024-05-14

Family

ID=71747288

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010158887.2A Active CN111476756B (en) 2020-03-09 2020-03-09 Method for identifying casting DR image loosening defect based on improved YOLOv3 network model

Country Status (1)

Country Link
CN (1) CN111476756B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112215071A (en) * 2020-09-10 2021-01-12 华蓝设计(集团)有限公司 Vehicle-mounted multi-target coupling identification and tracking method for automatic driving under heterogeneous traffic flow
CN112164070A (en) * 2020-09-16 2021-01-01 电子科技大学 Double-layer box opening positioning algorithm based on deep learning
CN112229845A (en) * 2020-10-12 2021-01-15 国网河南省电力公司濮阳供电公司 Unmanned aerial vehicle high-precision winding tower intelligent inspection method based on visual navigation technology
CN112270252A (en) * 2020-10-26 2021-01-26 西安工程大学 Multi-vehicle target identification method for improving YOLOv2 model
CN112288008B (en) * 2020-10-29 2022-03-01 四川九洲电器集团有限责任公司 Mosaic multispectral image disguised target detection method based on deep learning
CN112465759A (en) * 2020-11-19 2021-03-09 西北工业大学 Convolutional neural network-based aeroengine blade defect detection method
CN112465794A (en) * 2020-12-10 2021-03-09 无锡卡尔曼导航技术有限公司 Golf ball detection method based on YOLOv4 and embedded platform
CN112508016B (en) * 2020-12-15 2024-04-16 深圳万兴软件有限公司 Image processing method, device, computer equipment and storage medium
CN112488119A (en) * 2020-12-18 2021-03-12 山西省信息产业技术研究院有限公司 Tunnel block falling or water seepage detection and measurement method based on double-depth learning model
CN112508030A (en) * 2020-12-18 2021-03-16 山西省信息产业技术研究院有限公司 Tunnel crack detection and measurement method based on double-depth learning model
CN112614125B (en) * 2020-12-30 2023-12-01 湖南科技大学 Method and device for detecting glass defects of mobile phone, computer equipment and storage medium
CN112581472B (en) * 2021-01-26 2022-09-02 中国人民解放军国防科技大学 Target surface defect detection method facing human-computer interaction
CN113034478B (en) * 2021-03-31 2023-06-06 太原科技大学 Weld defect identification positioning method and system based on deep learning network
CN113222982A (en) * 2021-06-02 2021-08-06 上海应用技术大学 Wafer surface defect detection method and system based on improved YOLO network
CN113487570B (en) * 2021-07-06 2024-01-30 东北大学 High-temperature continuous casting billet surface defect detection method based on improved yolov5x network model
CN113409314B (en) * 2021-08-18 2021-11-12 南京市特种设备安全监督检验研究院 Unmanned aerial vehicle visual detection and evaluation method and system for corrosion of high-altitude steel structure
CN113887567B (en) * 2021-09-08 2024-06-18 华南理工大学 Vegetable quality detection method, system, medium and equipment
CN113850799B (en) * 2021-10-14 2024-06-07 长春工业大学 YOLOv 5-based trace DNA extraction workstation workpiece detection method


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019104767A1 (en) * 2017-11-28 2019-06-06 河海大学常州校区 Fabric defect detection method based on deep convolutional neural network and visual saliency
CN109064461A (en) * 2018-08-06 2018-12-21 长沙理工大学 A kind of detection method of surface flaw of steel rail based on deep learning network
CN109325454A (en) * 2018-09-28 2019-02-12 合肥工业大学 A kind of static gesture real-time identification method based on YOLOv3
CN110060248A (en) * 2019-04-22 2019-07-26 哈尔滨工程大学 Sonar image submarine pipeline detection method based on deep learning
CN110189304A (en) * 2019-05-07 2019-08-30 南京理工大学 Remote sensing image target on-line quick detection method based on artificial intelligence
CN110660052A (en) * 2019-09-23 2020-01-07 武汉科技大学 Hot-rolled strip steel surface defect detection method based on deep learning
CN110779937A (en) * 2019-10-11 2020-02-11 上海航天精密机械研究所 Casting product internal defect intelligent detection device
CN110838112A (en) * 2019-11-08 2020-02-25 上海电机学院 Insulator defect detection method based on Hough transform and YOLOv3 network

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Research on Automatic Recognition of Casting Defects Based on Deep Learning; Liming Duan et al.; IEEE Access; 2020-12-31; full text *
Target recognition method based on improved YOLOv3; Chen Zhengbin; Ye Dongyi; Zhu Caixia; Liao Jiankun; Computer Systems & Applications; 2020-01-15 (01); full text *
Gear defect detection based on improved YOLOv3 network; Zhang Guangshi; Ge Guangying; Zhu Ronghua; Sun Qun; Laser & Optoelectronics Progress; 2019-11-07 (12); full text *
Research on automatic recognition of casting DR image defects based on deep learning; Ruan Lang; Wanfang Data; 2021-07-01; full text *

Also Published As

Publication number Publication date
CN111476756A (en) 2020-07-31

Similar Documents

Publication Publication Date Title
CN111476756B (en) Method for identifying casting DR image loosening defect based on improved YOLOv network model
CN111680542B (en) Steel coil point cloud identification and classification method based on multi-scale feature extraction and Pointnet neural network
CN109118479B (en) Capsule network-based insulator defect identification and positioning device and method
CN109300111B (en) Chromosome recognition method based on deep learning
CN110349260B (en) Automatic pavement marking extraction method and device
CN107833213A (en) A kind of Weakly supervised object detecting method based on pseudo- true value adaptive method
CN109840483B (en) Landslide crack detection and identification method and device
CN111797829A (en) License plate detection method and device, electronic equipment and storage medium
CN109919145B (en) Mine card detection method and system based on 3D point cloud deep learning
CN103093240A (en) Calligraphy character identifying method
CN109636846B (en) Target positioning method based on cyclic attention convolution neural network
CN106845458B (en) Rapid traffic sign detection method based on nuclear overrun learning machine
CN112200225A (en) Steel rail damage B display image identification method based on deep convolutional neural network
CN107886539B (en) High-precision gear visual detection method in industrial scene
CN110852358A (en) Vehicle type distinguishing method based on deep learning
CN115797354A (en) Method for detecting appearance defects of laser welding seam
CN114332473A (en) Object detection method, object detection device, computer equipment, storage medium and program product
CN112733747A (en) Identification method, system and device for relieving falling fault of valve pull rod
CN112381806A (en) Double centromere aberration chromosome analysis and prediction method based on multi-scale fusion method
CN113609895A (en) Road traffic information acquisition method based on improved Yolov3
CN105404858A (en) Vehicle type recognition method based on deep Fisher network
CN111597939B (en) High-speed rail line nest defect detection method based on deep learning
CN116681653A (en) Three-dimensional point cloud extraction method and extraction system
CN112433228B (en) Multi-laser radar decision-level fusion method and device for pedestrian detection
CN115830371A (en) Deep learning-based rail transit subway steering frame rod member classification detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant