CN116883313A - Method for rapidly detecting vehicle body paint surface defects, image processing equipment and readable medium - Google Patents

Method for rapidly detecting vehicle body paint surface defects, image processing equipment and readable medium

Info

Publication number
CN116883313A
CN116883313A
Authority
CN
China
Prior art keywords
defect
vehicle body
detection
paint
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310631915.1A
Other languages
Chinese (zh)
Inventor
王云鹏
王宇哲
俞宏秋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Aiqisheng Video Technology Co ltd
Original Assignee
Hangzhou Aiqisheng Video Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Aiqisheng Video Technology Co ltd filed Critical Hangzhou Aiqisheng Video Technology Co ltd
Priority to CN202310631915.1A priority Critical patent/CN116883313A/en
Publication of CN116883313A publication Critical patent/CN116883313A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/08 Learning methods
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20112 Image segmentation details
    • G06T2207/20132 Image cropping
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30156 Vehicle coating

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application relates to a method for rapidly detecting vehicle body paint surface defects, an image processing device and a readable medium. The method analyzes the surface image information of the vehicle body using a YOLOv5 model as the defect detection model, so that tiny defects on the vehicle body paint surface can be accurately identified; the precision of defect detection and identification reaches 98% and the recall reaches 98%, realizing rapid detection of vehicle body paint surface defects. Compared with similar two-stage target detection methods such as Fast RCNN and Faster RCNN, the detection speed is greatly improved while detection accuracy is maintained: the image detection speed reaches 20 images/s, which meets the takt-time requirement of an automobile production line.

Description

Method for rapidly detecting vehicle body paint surface defects, image processing equipment and readable medium
Technical Field
The application relates to the technical field of paint surface defect detection, in particular to a method for rapidly detecting a paint surface defect of a vehicle body, image processing equipment and a readable medium.
Background
The coating process is an important link in automobile production: spraying paint on the car surface improves both the corrosion resistance of the body and the appearance of the automobile. Owing to limitations of the production environment and the industrial level, defects inevitably appear on the body paint surface during coating, such as tiny particle points caused by impurities during spraying and paint scratches caused by improper handling during transport. These paint surface defects seriously affect vehicle quality and must be detected and repaired. At present, detection of body paint surface defects mainly relies on traditional manual visual inspection; 6 to 8 inspectors are often required per vehicle, so the labor cost is high, the detection efficiency is low, the subjectivity is strong and the miss rate is high, making it difficult to meet the rapid-detection requirement of a production line.
With the rapid advance of Industry 4.0 and the rapid development of technologies such as artificial intelligence and computer vision in recent years, deep-learning target detection has been widely applied in industry, but it remains largely unexplored in the field of vehicle body paint defect detection. The target detection methods currently common in industry, such as Fast RCNN and Faster RCNN, are two-stage detection algorithms: candidate regions must first be extracted before target detection is performed, so the detection speed is low and it is difficult to complete the detection of vehicle body paint surface defects within the takt time of a production line.
Disclosure of Invention
Based on this, it is necessary to provide a method for rapidly detecting vehicle body paint surface defects, an image processing device and a readable medium, aimed at the problem that the target detection algorithms commonly used in industry, such as Fast RCNN and Faster RCNN, are two-stage detection algorithms whose detection speed is low, making it difficult to complete the detection of vehicle body paint surface defects within the takt time of a production line.
The application provides a method for rapidly detecting a vehicle body paint surface defect, which comprises the following steps:
acquiring surface image information of a vehicle body in a detection station;
inputting the surface image information into a defect detection model, and operating the defect detection model to obtain a detection result of the defect detection model;
wherein the defect detection model adopts a YOLOv5 model.
The present application also provides an image processing device, comprising a memory and a processor, the memory coupled to the processor; the memory is used for storing program data, and the processor is used for executing the program data to implement the aforementioned method for rapidly detecting vehicle body paint surface defects.
The application also provides a computer readable medium storing a computer program which, when executed by a processor, implements the aforementioned method for rapidly detecting vehicle body paint surface defects.
The application relates to a method for rapidly detecting vehicle body paint surface defects, an image processing device and a readable medium. The method analyzes the surface image information of the vehicle body using a YOLOv5 model as the defect detection model, so that tiny defects on the vehicle body paint surface can be accurately identified; the precision of defect detection and identification reaches 98% and the recall reaches 98%, realizing rapid detection of vehicle body paint surface defects. Compared with similar two-stage target detection methods such as Fast RCNN and Faster RCNN, the detection speed is greatly improved while detection accuracy is maintained: the image detection speed reaches 20 images/s, which meets the takt-time requirement of an automobile production line.
Drawings
Fig. 1 is a schematic flow chart of a method for rapidly detecting a paint defect of a vehicle body according to an embodiment of the application.
Fig. 2 is a network structure diagram of a defect detection model in a method for rapidly detecting a paint defect of a vehicle body according to an embodiment of the present application.
Fig. 3 is a schematic diagram of a prediction detection frame and a defect real frame of a defect detection model in a method for rapidly detecting a vehicle body paint defect according to an embodiment of the present application.
Fig. 4 is a schematic diagram of image segmentation in a method for rapidly detecting a paint defect of a vehicle body according to an embodiment of the present application.
Fig. 5 is a network structure diagram of an improved defect detection model in a method for rapidly detecting defects of a paint surface of a vehicle body according to an embodiment of the present application.
Fig. 6 is a schematic diagram of a light source and an area array camera in a method for rapidly detecting a paint defect of a vehicle body according to an embodiment of the present application.
Fig. 7 (a) to 7 (d) are schematic diagrams illustrating a process of image preprocessing in the method for rapidly detecting a paint surface defect of a vehicle body according to an embodiment of the present application, wherein fig. 7 (a) is a first defect image at the same position, fig. 7 (b) is a second defect image at the same position, fig. 7 (c) is a difference map obtained by performing a difference process on two defect images, and fig. 7 (d) is an image obtained by enhancing data.
Fig. 8 is a diagram showing the variation of each index in the model training process in the method for rapidly detecting the paint defects of the vehicle body according to an embodiment of the present application.
Fig. 9 is a diagram of a detection result of a method for rapidly detecting defects of a paint surface of a vehicle body according to an embodiment of the present application, where the detected defects are particles.
Fig. 10 is a diagram of another detection result of the method for rapidly detecting defects of a paint surface of a vehicle body according to an embodiment of the present application, where the detected defects are fibers.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The application provides a method for rapidly detecting a vehicle body paint surface defect. It should be noted that the rapid detection method for the paint surface defects of the vehicle body provided by the application can be applied to the detection of one or more types of paint surface defects of the vehicle body, such as defects of particles, fibers, shrinkage cavities, electrophoresis shrinkage cavities, paint points, metal plates, flow marks, trauma and the like.
In addition, the method for rapidly detecting the paint defects of the vehicle body is not limited to an execution main body. Optionally, the execution main body of the method for rapidly detecting the paint surface defect of the vehicle body provided by the application can be an industrial personal computer.
As shown in fig. 1, in an embodiment of the present application, the method for rapidly detecting a paint defect of a vehicle body includes the following steps S100 to S200:
s100, acquiring surface image information of the vehicle body in the detection station.
S200, inputting the surface image information into a defect detection model, and operating the defect detection model to obtain a detection result of the defect detection model. Specifically, the defect detection model adopts a YOLOv5 model.
For example, the detection results are shown in fig. 9 and fig. 10, in which each defect is marked with a box showing its defect type and confidence. The defect type in fig. 9 is particle (keli), with a confidence of 0.82; the defect type in fig. 10 is fiber (xianwei), with a confidence of 0.85.
In this embodiment, the YOLOv5 model is adopted as the defect detection model to analyze the surface image information of the vehicle body, so that tiny defects on the vehicle body paint surface can be accurately identified; the precision of defect detection and identification reaches 98% and the recall reaches 98%, realizing rapid detection of vehicle body paint surface defects. Compared with similar two-stage target detection methods such as Fast RCNN and Faster RCNN, the detection speed is greatly improved while detection accuracy is maintained: the image detection speed reaches 20 images/s, which meets the takt-time requirement of an automobile production line.
As shown in fig. 2, in an embodiment of the present application, the network structure of the YOLOv5 model includes: an Input end, a Backbone network, a Neck network and a Prediction output layer.
Specifically, the Input end comprises a Mosaic data enhancement structure, an adaptive anchor frame calculation structure and an adaptive picture scaling structure. The Backbone network includes a Focus unit, an SPP unit and C3 units. The Neck network adopts an FPN+PAN structure, in which the FPN structure conveys strong semantic feature information from top to bottom, and the PAN structure adds an upward feature pyramid after the FPN structure to convey strong localization information from bottom to top. The Prediction output layer includes a Bounding Box loss function and NMS non-maximum suppression.
As shown in fig. 3, in an embodiment of the present application, the Bounding Box Loss function adopts the CIOU_Loss function, whose expression is as follows:

$$\mathrm{CIOU\_Loss} = 1 - \mathrm{IOU} + \frac{\rho^2(b, b^{gt})}{c^2} + \alpha v \quad (1)$$

$$v = \frac{4}{\pi^2}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w^{p}}{h^{p}}\right)^2 \quad (2)$$

$$\alpha = \frac{v}{(1 - \mathrm{IOU}) + v} \quad (3)$$

In equations 1 to 3, IOU represents the intersection-over-union of the defect prediction frame and the defect real frame; $\rho^2(b, b^{gt})$ represents the squared Euclidean distance between the center point $b$ of the defect prediction frame and the center point $b^{gt}$ of the defect real frame; $c$ represents the diagonal length of the smallest rectangle circumscribing the defect prediction frame and the defect real frame; $v$ represents the aspect-ratio similarity of the defect prediction frame and the defect real frame, and $\alpha$ represents the influence factor of $v$; $w^{gt}$ and $h^{gt}$ represent the width and height of the real rectangular frame, and $w^{p}$ and $h^{p}$ represent the width and height of the predicted rectangular frame.
In the paint surface defect target detection task, the detection frame is rectangular, while defects tend to be irregular: they may take various shapes such as circles, strips and streamlines, and may lie very close to each other. Overlapping and repeated detection frames are therefore unavoidable when detecting multiple targets, so the multiple target frames must be screened in the post-processing stage of target detection.
In an embodiment of the present application, the NMS non-maximum suppression uses a weighted NMS to screen the multiple target frames and obtain the final prediction frames, with the following screening condition:

$$s_i = \begin{cases} s_i, & iou < N_i \\ 0, & iou \ge N_i \end{cases} \quad (4)$$

In equation 4, $i$ represents the sequence number of the target frame, $iou$ represents the intersection-over-union between target frames, and $N_i$ represents the set threshold value.

When $iou < N_i$, $s_i = s_i$, meaning the target frame is retained.

When $iou \ge N_i$, $s_i = 0$, meaning the target frame is discarded.
In this embodiment, multiple target frames are screened by using a weighted NMS to avoid overlapping and repeating of detection frames during multiple target detection.
Preferably, since the defects to be detected are dense, $N_i$ is set to 0.1 to avoid overlapping prediction frames.
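Formula 4 is a hard-suppression rule: a lower-scoring box that overlaps a kept box at or above the threshold is discarded. A minimal greedy sketch in plain Python (illustrative only, not the patent's implementation; boxes are assumed to be (x1, y1, x2, y2) tuples, and `n_i` defaults to the 0.1 chosen above):

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, n_i=0.1):
    """Greedy NMS: keep the highest-scoring box, drop any remaining box
    whose IoU with it is >= n_i (the s_i = 0 branch of formula 4)."""
    order = sorted(range(len(boxes)), key=lambda k: scores[k], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [k for k in order if iou(boxes[best], boxes[k]) < n_i]
    return keep
```

With a low threshold such as 0.1, even lightly overlapping duplicate frames around one dense defect collapse to a single prediction.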
According to defect statistics, particle defects account for more than 90% of vehicle body paint surface defects. They usually appear round in the image, with diameters between 0.1 mm and 1 mm, while a single captured image covers 160 × 160 mm, so detecting fine particle defects is a small-target detection task. Testing showed that the original YOLOv5 model performs poorly on defects with diameters below 0.5 mm, so the original YOLOv5 model needs to be improved for small-particle detection.
In an embodiment of the application, before the inputting the surface image information into the defect detection model, the method for rapidly detecting the paint defects of the vehicle body further includes:
for the object detection detect module, an image cutting section is added. Cutting an original image into a plurality of cut images, inputting the cut images as surface image information into a defect detection model, outputting defect information such as defect coordinates, defect types and the like in the cut images by the model, and integrating the defect information of each cut image.
For example, as shown in fig. 4, an original image of 5120×5120px is cropped into several 640×640px images.
Wherein the coordinates (X, Y) of the defect in the original image are:

$$X = x_{ij} + 640 \times (j - 1) \quad (5)$$

$$Y = y_{ij} + 640 \times (i - 1) \quad (6)$$

In equations 5 and 6, $x_{ij}$ is the abscissa and $y_{ij}$ the ordinate of the defect in the cut image in the $i$th row and $j$th column after cutting.
In the embodiment, through image cutting operation, the proportion of defect points in an image is increased, and the detection effect of the defect detection model on small particle defects is improved.
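The cutting and the coordinate mapping of equations 5 and 6 can be sketched as follows (a minimal illustration; the function names are assumptions, and the 1-based row/column indices follow the notation above):

```python
TILE = 640  # tile side in pixels, matching the 640 x 640 cut images above

def crop_to_tiles(width, height, tile=TILE):
    """List (i, j, x0, y0) for each tile: 1-based row i and column j,
    plus the tile's top-left corner in the original image."""
    tiles = []
    for i in range(1, height // tile + 1):
        for j in range(1, width // tile + 1):
            tiles.append((i, j, (j - 1) * tile, (i - 1) * tile))
    return tiles

def to_original(x_ij, y_ij, i, j, tile=TILE):
    """Map a defect coordinate found in tile (i, j) back to the
    original image, per equations 5 and 6."""
    return x_ij + tile * (j - 1), y_ij + tile * (i - 1)
```

For the 5120 × 5120 px example above, this yields an 8 × 8 grid of 64 tiles, and a defect at (100, 200) inside tile (i=2, j=3) maps back to (1380, 840).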
As shown in fig. 5, in an embodiment of the present application, in order to enhance the ability of the defect detection model to detect small targets, the network structure of the YOLOv5 model is modified as follows:
1) After the 17th-layer C3_3 unit of the YOLOv5 network structure, a 19th-layer upsampling unit (Upsample) is added.
2) In the 20th-layer Concat unit, the obtained feature map is fused with the feature map of the 2nd-layer C3_3 unit in the Backbone network, thereby obtaining a larger feature map for small-target detection.
3) In the target detection layer, a small-target detect layer is added after the 21st-layer C3_3 unit; the improved defect detection model detects using the feature maps of four C3_3 units, namely the 21st-, 24th-, 27th- and 30th-layer C3_3 units.
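In YOLOv5's model-definition convention, the three modifications above correspond to edits in the head section of the model YAML. The fragment below is a hedged sketch only: the layer indices and argument values are assumptions patterned on the community "P2" small-object variants, not the patent's verbatim configuration.

```yaml
# Sketch (assumed, not verbatim): extra upsample + Concat to a shallow
# Backbone C3 layer, plus a fourth detection scale for small targets.
head:
  # ... existing PANet layers up to the 17th-layer C3 ...
  - [-1, 1, Conv, [128, 1, 1]]
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]   # added upsampling unit
  - [[-1, 2], 1, Concat, [1]]                    # fuse with layer-2 C3 feature map
  - [-1, 1, C3, [128, False]]                    # new small-target C3 unit (layer 21)
  # ... downsampling path rebuilding layers 24, 27 and 30 ...
  - [[21, 24, 27, 30], 1, Detect, [nc, anchors]] # detect on four feature maps
```

A matching extra group of small anchors would also be added so the new detection scale has priors of appropriate size.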
The defect detection performance of the defect detection model before and after improvement for different diameter sizes is shown in the following tables 1 and 2:
table 1 shows the detection performance of the defect detection model before improvement for defects of different sizes
Table 2 shows the detection performance of the improved defect detection model for defects of different sizes
Based on the comparison of tables 1 and 2, the accuracy (Precision) and Recall (Recall) are comprehensively considered, and the detection performance of the improved defect detection model for defects with diameters smaller than or equal to 3mm is obviously improved.
In this embodiment, a group of smaller anchors is added on the basis of the existing target detection frame, so that the capturing capability of the model for small particle targets is enhanced.
In an embodiment of the present application, the method for creating the defect detection model includes St10 to St50 as follows.
St10, collecting a vehicle body paint surface defect image as a defect data set.
St20, performing image preprocessing on the defect data set.
St30, performing defect labeling on the defect data set after image preprocessing, and dividing the defect data set into a training set, a verification set and a test set after labeling.
St40, training and verifying the defect detection model by using the constructed training set and verification set.
St50, detecting the test set by using the trained defect detection model, and analyzing the detection result.
As shown in fig. 1, in an embodiment of the present application, the St10 includes the following St11 to St13.
St11, manually presetting the paint surface defect of the vehicle body.
Specifically, the vehicle body paint surface defect types cover 8 main defects in total: particles, fibers, shrinkage cavities, electrophoresis shrinkage cavities, paint points, metal plate defects, flow marks and trauma.
St12, collecting a vehicle body paint surface defect image, and shooting two images according to different light source forms at each point position.
Specifically, as shown in fig. 6, the vehicle body paint surface defect images are collected using an LED black-and-white stripe light source 101 together with three 25-megapixel industrial area array cameras 102; two defect photos are obtained at each position by shooting twice, once under black-white and once under white-black stripes. A laser sensor 103 is used to detect the distance between the camera and the vehicle body.
St13, screening clear vehicle body paint defect images to prepare a vehicle body paint defect data set.
In an embodiment of the present application, the St20 includes the following St21 to St23.
St21, cutting the vehicle body paint defect image.
In one embodiment, the vehicle body paint defect image may be cropped to a plurality of 640 x 640 pixel square images.
St22, performing differential processing on two defect images at the same position to obtain a differential graph, and further obtaining a clear defect image.
St23, performing data enhancement processing on the differential graph, and improving the visibility of defects in the image.
For example, in one embodiment, as shown in fig. 7 (a) to 7 (d), fig. 7 (a) and fig. 7 (b) are two sinusoidal phase-shifted fringe images of the same position; fig. 7 (c) is the image obtained by combining the two through differential processing; after data enhancement, the differential graph shown in fig. 7 (d) is obtained, in which the visibility of defects is significantly improved.
In the embodiment, the defect image of the paint surface of the vehicle body is cut, and the difference and data enhancement operation is carried out, so that the defect is more obvious in the image and is easy to detect by a model.
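The St22-St23 preprocessing can be sketched roughly as follows (illustrative only; a real pipeline would presumably use an image library such as OpenCV, whereas here images are plain lists of pixel rows and the enhancement is a simple min-max contrast stretch standing in for the patent's unspecified data-enhancement step):

```python
def difference_image(img_a, img_b):
    """St22: pixel-wise absolute difference of two grayscale images
    of the same position, exposing defects that differ between shots."""
    return [[abs(a - b) for a, b in zip(ra, rb)]
            for ra, rb in zip(img_a, img_b)]

def stretch_contrast(img, out_max=255):
    """St23 stand-in: min-max contrast stretch of the difference map,
    making faint defect pixels more visible."""
    lo = min(min(row) for row in img)
    hi = max(max(row) for row in img)
    if hi == lo:  # flat image: nothing to stretch
        return [[0 for _ in row] for row in img]
    return [[(p - lo) * out_max // (hi - lo) for p in row] for row in img]
```

On a pair of 2 × 2 toy images differing only in one pixel, the difference map is zero everywhere except that pixel, and the stretch maps it to full intensity.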
In an embodiment of the present application, the St30 includes the following St31 to St33.
St31, generating a YOLO-format annotation file containing defect position and size information using the annotation software LabelImg.
In the defect labeling process, defects in the image are marked as rectangular frames using the software LabelImg; after labeling, a YOLO-format annotation file containing defect position and size information is generated and stored in the labels folder.
St32, marking the defect data set for secondary inspection, and unifying marking standards.
In this embodiment, the subjectivity of defect labeling for different staff is reduced by adopting secondary inspection.
St33, dividing the marked images into a training set, a verification set and a test set in the ratio 8:1:1.
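The 8:1:1 split of St33 can be sketched as follows (a minimal illustration; the function name, signature and fixed seed are assumptions, not from the patent):

```python
import random

def split_dataset(items, ratios=(8, 1, 1), seed=0):
    """Shuffle items reproducibly and split them into
    train/validation/test subsets by the given ratios."""
    items = list(items)
    random.Random(seed).shuffle(items)
    total = sum(ratios)
    n_train = len(items) * ratios[0] // total
    n_val = len(items) * ratios[1] // total
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]  # remainder goes to the test set
    return train, val, test
```

For 100 labeled images this yields 80/10/10 images in the three sets, with every image appearing in exactly one set.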
In an embodiment of the present application, the St40 includes the following St41 to St43.
St41, configuring a model training environment.
St42, training and verifying the training set and the image input model in the verification set, and setting the training round number as 300.
St43, training the model with three network structures, YOLOv5s, YOLOv5m and YOLOv5l, respectively, and obtaining the corresponding .pt weight files.
For example, in one embodiment, YOLOv5 v6.2 is used to train the model, with the environment configured as python = 3.9, torch = 1.13.0, CUDA = 11.7, GPU = NVIDIA GeForce RTX 3090 Ti and CPU = Intel Core i9-12900K. Before training, the training parameters of the model need to be set; in this embodiment, the number of network iterations is set to 200, the learning rate to 0.01 and the training batch size to 16, and the model is trained with the three officially provided network structures YOLOv5s, YOLOv5m and YOLOv5l respectively.
In an embodiment of the application, the St50 includes the following St51 to St53.
St51, inputting the images in the test set into the trained model, and verifying the model detection effect.
St52, comparing the detection effects of the three models YOLOv5s, YOLOv5m and YOLOv5l, and selecting the optimal model.
St53, deploying the obtained optimal model.
For example, in one embodiment, the model evaluation indexes include the precision P, the recall R, the mean average precision mAP, and the time required to test 2000 pictures; the related formulas are shown in equations 7 to 10:

$$P = \frac{TP}{TP + FP} \quad (7)$$

$$R = \frac{TP}{TP + FN} \quad (8)$$

$$AP = \frac{1}{M}\sum_{k=1}^{M} P(R_k) \quad (9)$$

$$mAP = \frac{1}{m}\sum_{k=1}^{m} AP_k \quad (10)$$

In this task, a prediction is considered correct when the intersection-over-union of the prediction frame and the real frame is greater than 0.5 (i.e., iou > 0.5). In equations 7 to 10, the precision P represents the proportion of correct predictions among all predictions given by the model, and the recall R represents the proportion of true defects that are correctly predicted. TP denotes the number of defects correctly predicted as defects, FP the number of non-defects predicted as defects, and FN the number of defects that exist but were not predicted. The average precision AP represents the area under the precision-recall curve for a given defect class, computed by interpolation, where M is the number of interpolation points and $P(R_k)$ is the precision at interpolation point $k$. The mean average precision mAP is the average of the APs of all defect classes, where m is the number of predicted defect classes.
During model training, as shown in fig. 8, as the number of iteration rounds increases, the loss functions train/box_loss, train/obj_loss, train/cls_loss, val/box_loss, val/obj_loss and val/cls_loss of the training and verification sets decrease continuously, while the precision, recall and mean average precision mAP_0.5 increase continuously. The indexes of each prediction model are shown in table 3 below.
Table 3 is an index data table of each prediction model
Model     Precision (P)   Recall (R)   mAP (0.5)   Time to detect 2000 pictures
YOLOv5s   0.938           0.945        0.969       103 s
YOLOv5m   0.956           0.949        0.974       120 s
YOLOv5l   0.961           0.971        0.981       157 s
In this embodiment, considering both detection accuracy and detection efficiency, the YOLOv5m model is selected: its precision is 95.6% and its time to detect 2000 pictures is 120 s. In other usage scenarios, the model can be deployed according to the specific requirements.
The application also provides an image processing device.
In one embodiment of the application, the image processing device includes a memory and a processor, the memory being coupled to the processor. The memory is used for storing program data, and the processor is used for executing the program data to realize the method for rapidly detecting the vehicle body paint defects.
The present application also provides a computer readable medium having stored thereon a computer program which, when executed by a processor, implements a method for rapid detection of vehicle body paint defects as described in the foregoing.
The technical features of the above embodiments may be combined arbitrarily, and the method steps are not limited to the described execution order. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features is not contradictory, it should be considered within the scope of this specification.
The foregoing examples illustrate only a few embodiments of the application and are described in detail without thereby limiting the scope of the application. It should be noted that those skilled in the art can make several variations and modifications without departing from the spirit of the application, all of which fall within the scope of protection of the application. Accordingly, the scope of protection of the application shall be subject to the appended claims.

Claims (10)

1. The rapid detection method for the vehicle body paint surface defects is characterized by comprising the following steps of:
acquiring surface image information of a vehicle body in a detection station;
inputting the surface image information into a defect detection model, and operating the defect detection model to obtain a detection result of the defect detection model;
wherein the defect detection model adopts a YOLOv5 model.
2. The method for rapidly detecting a paint defect of a vehicle body according to claim 1, wherein the network structure of the YOLOv5 model comprises:
the Input end comprises a Mosaic data enhancement structure, a self-adaptive anchor frame calculation structure and a self-adaptive picture scaling structure;
the Backbone base network comprises a Focus unit, an SPP unit and a C3 unit;
the Neck network adopts an FPN+PAN structure, wherein the FPN structure transmits strong semantic feature information from top to bottom, and the PAN structure adds a bottom-up feature pyramid after the FPN structure to transmit strong localization information from bottom to top;
the Prediction output layer comprises a Bounding Box loss function and NMS non-maximum suppression.
3. The method for rapidly detecting vehicle body paint surface defects according to claim 2, wherein the Bounding Box loss function adopts the CIOU_Loss loss function, whose expression is:

CIOU_Loss = 1 − IOU + ρ²(b, b^gt)/c² + αv

v = (4/π²) · (arctan(w^gt/h^gt) − arctan(w^p/h^p))²,  α = v / ((1 − IOU) + v)

wherein IOU represents the intersection-over-union of the defect prediction frame and the defect real frame; ρ(b, b^gt) represents the Euclidean distance between the center point b of the defect prediction frame and the center point b^gt of the defect real frame; c represents the diagonal length of the smallest circumscribed rectangle enclosing both frames; v represents the aspect-ratio similarity of the defect prediction frame and the defect real frame, and α represents the influence factor of v; w^gt and h^gt represent the width and height of the real rectangular frame, and w^p and h^p represent the width and height of the predicted rectangular frame.
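The CIOU_Loss expression of claim 3 can be computed directly from two boxes. The following is a minimal pure-Python sketch for axis-aligned boxes in (x1, y1, x2, y2) form; the function name, box format, and the small epsilon guard are illustrative choices, not from the patent:

```python
import math

def ciou_loss(box_p, box_gt):
    """CIoU loss between a predicted box and a ground-truth box,
    both given as (x1, y1, x2, y2)."""
    x1p, y1p, x2p, y2p = box_p
    x1g, y1g, x2g, y2g = box_gt

    # Intersection-over-union (IOU)
    iw = max(0.0, min(x2p, x2g) - max(x1p, x1g))
    ih = max(0.0, min(y2p, y2g) - max(y1p, y1g))
    inter = iw * ih
    area_p = (x2p - x1p) * (y2p - y1p)
    area_g = (x2g - x1g) * (y2g - y1g)
    iou = inter / (area_p + area_g - inter)

    # Squared distance rho^2 between the two box centres
    rho2 = ((x1p + x2p) / 2 - (x1g + x2g) / 2) ** 2 \
         + ((y1p + y2p) / 2 - (y1g + y2g) / 2) ** 2

    # Squared diagonal c^2 of the smallest rectangle enclosing both boxes
    cw = max(x2p, x2g) - min(x1p, x1g)
    ch = max(y2p, y2g) - min(y1p, y1g)
    c2 = cw ** 2 + ch ** 2

    # Aspect-ratio term v and its influence factor alpha
    wp, hp = x2p - x1p, y2p - y1p
    wg, hg = x2g - x1g, y2g - y1g
    v = (4 / math.pi ** 2) * (math.atan(wg / hg) - math.atan(wp / hp)) ** 2
    alpha = v / ((1 - iou) + v + 1e-9)  # epsilon avoids 0/0 for perfect overlap

    return 1 - iou + rho2 / c2 + alpha * v
```

For identical boxes every penalty term vanishes and the loss is 0; the distance and aspect-ratio terms let the loss keep guiding training even when the boxes do not overlap.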
4. The method for rapidly detecting vehicle body paint surface defects according to claim 2, wherein the NMS non-maximum suppression adopts weighted NMS, and multiple target frames are screened to obtain the final prediction frames, the screening condition being:

S_i = S_i, if iou < N_i;  S_i = 0, if iou ≥ N_i

wherein i represents the sequence number of a target frame, iou represents the intersection-over-union between target frames, and N_i represents the set threshold value;
when iou is less than N_i, S_i = S_i, meaning the target frame is retained;
when iou is greater than or equal to N_i, S_i = 0, meaning the target frame is discarded.
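The screening rule of claim 4 can be sketched as a greedy suppression loop. This is a minimal pure-Python illustration of the threshold condition only (the score-averaging step that distinguishes weighted NMS from hard NMS is omitted); the helper names and (x1, y1, x2, y2) box format are assumptions:

```python
def box_iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def nms_filter(boxes, scores, n_i=0.5):
    """Return indices of boxes kept under the claim-4 screening rule:
    a box whose IoU with a higher-scoring kept box reaches N_i has its
    score set to 0 (discarded); otherwise its score is retained."""
    order = sorted(range(len(boxes)), key=lambda k: scores[k], reverse=True)
    suppressed = set()
    keep = []
    for idx in order:
        if idx in suppressed:
            continue
        keep.append(idx)  # S_i = S_i: retained
        for j in order:
            if j != idx and j not in suppressed \
                    and box_iou(boxes[idx], boxes[j]) >= n_i:
                suppressed.add(j)  # S_j = 0: discarded
    return keep
```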
5. The method for rapidly detecting vehicle body paint surface defects according to claim 2, wherein before the inputting of the surface image information into the defect detection model, the method further comprises:
cutting an original image into a plurality of cut images, and inputting the cut images as the surface image information to the defect detection model; wherein the coordinates (X, Y) of a defect in the original image are:

X = x_ij + 640 × (j − 1)
Y = y_ij + 640 × (i − 1)

wherein x_ij is the abscissa of the defect in the cut image of the i-th row and j-th column after cutting, and y_ij is the ordinate of the defect in the cut image of the i-th row and j-th column after cutting.
6. The method for rapidly detecting vehicle body paint surface defects according to claim 5, wherein in the network structure of the YOLOv5 model, a 19th-layer up-sampling unit is further added after the 17th-layer C3_3 unit;
in a 20th-layer Concat unit, the resulting feature map is fused by Concat with the feature map of the 2nd-layer C3_3 unit in the Backbone base network, so as to obtain a larger feature map for small-target detection;
and a small-target detection layer Detect is further added after the 21st-layer C3_3 unit in the target detection layer section.
7. The method for rapidly detecting vehicle body paint surface defects according to claim 1, wherein the defect detection model is established by the following method:
collecting a vehicle body paint surface defect image as a defect data set;
performing image preprocessing on the defect data set;
performing defect labeling on the defect data set subjected to image preprocessing, and dividing the defect data set into a training set, a verification set and a test set after labeling;
training and verifying the defect detection model by using the constructed training set and verification set;
and detecting the test set by using the trained defect detection model, and analyzing the detection result.
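The dataset construction in claim 7 ends with a train/validation/test split of the labelled defect images. A minimal sketch of that split; the 8:1:1 ratio, seed, and function name are assumptions (the patent does not fix a ratio):

```python
import random

def split_dataset(samples, ratios=(0.8, 0.1, 0.1), seed=0):
    """Shuffle labelled samples and split them into
    training, validation and test subsets."""
    rng = random.Random(seed)       # fixed seed for a reproducible split
    items = list(samples)
    rng.shuffle(items)
    n_train = int(len(items) * ratios[0])
    n_val = int(len(items) * ratios[1])
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])
```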
8. The method for rapidly detecting vehicle body paint surface defects according to claim 1, wherein the collecting of vehicle body paint surface defect images as a defect data set comprises:
obtaining paint surface defect images with a stripe light source matched with an area-array camera, wherein two sinusoidal phase-shifted stripe images are acquired at the same position;
the image preprocessing of the defect data set comprises:
cutting the paint defect image of the vehicle body;
carrying out differential processing on two vehicle body paint surface defect images at the same position to obtain a differential graph;
and carrying out data enhancement processing on the differential graph.
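The differential step in claim 8 is a pixel-wise absolute difference of the two phase-shifted stripe images of the same spot: flat paint largely cancels out while defects that distort the stripes remain. A minimal sketch on grayscale images stored as lists of rows; in a real pipeline this would typically be cv2.absdiff on NumPy arrays (the function name here is illustrative):

```python
def difference_image(img_a, img_b):
    """Pixel-wise absolute difference of two equally sized
    grayscale images given as nested lists of integer rows."""
    return [[abs(a - b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]
```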
9. An image processing apparatus comprising a memory and a processor, the memory coupled to the processor; wherein the memory is configured to store program data and the processor is configured to execute the program data to implement the vehicle body paint defect rapid detection method according to any one of claims 1 to 8.
10. A computer readable medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the method for rapidly detecting vehicle body paint surface defects according to any one of claims 1 to 8.
CN202310631915.1A 2023-05-31 2023-05-31 Method for rapidly detecting vehicle body paint surface defects, image processing equipment and readable medium Pending CN116883313A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310631915.1A CN116883313A (en) 2023-05-31 2023-05-31 Method for rapidly detecting vehicle body paint surface defects, image processing equipment and readable medium


Publications (1)

Publication Number Publication Date
CN116883313A true CN116883313A (en) 2023-10-13

Family

ID=88259283

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310631915.1A Pending CN116883313A (en) 2023-05-31 2023-05-31 Method for rapidly detecting vehicle body paint surface defects, image processing equipment and readable medium

Country Status (1)

Country Link
CN (1) CN116883313A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117409010A (en) * 2023-12-15 2024-01-16 菲特(天津)检测技术有限公司 Paint surface defect detection model training, detecting and encoding method and detecting system
CN117409010B (en) * 2023-12-15 2024-04-26 菲特(天津)检测技术有限公司 Paint surface defect detection model training, detecting and encoding method and detecting system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination