CN113421262A - Hub defect detection method and device, electronic equipment and storage medium - Google Patents

Hub defect detection method and device, electronic equipment and storage medium

Info

Publication number
CN113421262A
CN113421262A (application CN202110964839.7A; granted as CN113421262B)
Authority
CN
China
Prior art keywords
images
hub
image
defect detection
inputting
Prior art date
Legal status
Granted
Application number
CN202110964839.7A
Other languages
Chinese (zh)
Other versions
CN113421262B (en)
Inventor
陈彪 (Chen Biao)
黄雪峰 (Huang Xuefeng)
熊海飞 (Xiong Haifei)
Current Assignee
Shenzhen Xinrun Fulian Digital Technology Co Ltd
Original Assignee
Shenzhen Xinrun Fulian Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Xinrun Fulian Digital Technology Co Ltd filed Critical Shenzhen Xinrun Fulian Digital Technology Co Ltd
Priority to CN202110964839.7A priority Critical patent/CN113421262B/en
Publication of CN113421262A publication Critical patent/CN113421262A/en
Application granted granted Critical
Publication of CN113421262B publication Critical patent/CN113421262B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a method and a device for detecting defects of a hub, electronic equipment and a storage medium, and relates to the technical field of vehicle hub production. The method comprises acquiring a first hub image; slicing the first hub image to obtain a plurality of first slice images; inputting the plurality of first slice images into a pre-trained hub defect detection model for detection, and outputting a hub defect detection result; the hub defect detection model is used for extracting the features of the first slice images to obtain a plurality of first feature images, performing super-resolution reconstruction and prediction on the first feature images, and outputting hub defect detection results corresponding to the first feature images. Therefore, the pre-trained hub defect detection model can be adopted to replace human eyes to detect the first hub image, and the accuracy and efficiency of hub defect detection can be improved.

Description

Hub defect detection method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of vehicle hub production, in particular to a hub defect detection method and device, electronic equipment and a storage medium.
Background
The hub is an important part of a vehicle, and hub quality is one of the key indicators of safe vehicle operation. To ensure vehicle hub quality, high-quality production of vehicle hubs needs to be strengthened. In the hub production process, defective hubs must be prevented from flowing into the market and affecting traffic safety, so vehicle hub defects need to be detected. At present, a hub is detected by irradiating it with X-rays, generating an image on a flat panel detector and transmitting the image to an upper computer; the upper computer receives the image and displays it on a screen, and an inspector examines the image with the naked eye to judge whether the hub has defects. However, vehicle hub defects such as slag inclusion, cracks, pores, shrinkage cavities and shrinkage porosity are small-target defects that are not easy to find; in particular, when detection relies on human eyes, small-target defects are prone to missed detection and false detection, so the existing hub detection approach suffers from low detection accuracy and low detection efficiency.
Disclosure of Invention
The application provides a method and a device for detecting wheel hub defects, electronic equipment and a storage medium, which are used for solving the problems of low accuracy and low detection efficiency of detection results in the existing wheel hub detection mode.
In a first aspect, the present application provides a method of detecting a hub defect, the method comprising:
acquiring a first hub image;
slicing the first hub image to obtain a plurality of first slice images;
inputting the plurality of first slice images into a pre-trained hub defect detection model for detection, and outputting a hub defect detection result;
the hub defect detection model is used for extracting the features of the first slice images to obtain a plurality of first feature images, performing super-resolution reconstruction and prediction on the first feature images, and outputting hub defect detection results corresponding to the first feature images.
Optionally, the hub defect detection model comprises a convolutional neural network, a generative countermeasure network and a deconvolution single-stage detector network;
the inputting the plurality of first slice images into a pre-trained hub defect detection model for detection and outputting a hub defect detection result comprises:
inputting the plurality of first slice images into the convolutional neural network for feature extraction, and outputting the plurality of first feature images;
inputting the plurality of first characteristic images into the generative countermeasure network for super-resolution reconstruction, and outputting a plurality of first intermediate images, wherein the resolution of the plurality of first intermediate images is higher than that of the plurality of first characteristic images;
and inputting the plurality of first intermediate images into the deconvolution single-stage detector network for prediction, and outputting the wheel hub defect detection result.
Optionally, the generative countermeasure network includes a generator and a discriminator, wherein the generator includes a plurality of first convolutional layers connected in series in sequence, and the discriminator includes a plurality of second convolutional layers connected in series in sequence;
the inputting the plurality of first characteristic images into the generative countermeasure network for super-resolution reconstruction to obtain a plurality of first intermediate images comprises:
inputting the plurality of first characteristic images into the generator to carry out convolution calculation, and generating a plurality of sample images;
inputting the plurality of sample images and the plurality of first characteristic images into the discriminator to carry out convolution calculation, and discriminating the probability that the first sample image belongs to the characteristic images according to the convolution calculation result, wherein the first sample image is any one of the plurality of sample images;
taking the first sample image as a first intermediate image under the condition that a preset condition is met;
the preset condition is that the probability that the first sample image belongs to the plurality of first feature images is equal to a preset threshold value, or the current judging times reach preset times.
Optionally, the convolutional neural network comprises a plurality of deconvolution layers and a plurality of third convolution layers;
the inputting the plurality of first slice images into the convolutional neural network for feature extraction, and outputting the plurality of first feature images includes:
sequentially inputting the plurality of first slice images into each deconvolution layer in the plurality of deconvolution layers for deconvolution calculation, and outputting a plurality of second intermediate images;
and sequentially inputting the plurality of second intermediate images into each third convolution layer in the plurality of third convolution layers for convolution calculation to obtain the plurality of first characteristic images.
Optionally, after the inputting the plurality of first slice images into a pre-trained hub defect detection model for detection and outputting a hub defect detection result, the method further includes:
adjusting the acquisition angle of the first hub image; and/or,
and storing the hub defect detection result into a preset database.
Optionally, before the inputting the plurality of first slice images into a pre-trained hub defect detection model for detection and outputting a hub defect detection result, the method further includes:
acquiring a training image;
inputting the training image into a model to be trained for training to obtain the hub defect detection model;
the model to be trained comprises a convolutional neural network, a generative countermeasure network and a deconvolution single-stage detector network.
Optionally, the acquiring the training image comprises:
acquiring a second hub image;
slicing the second hub image to obtain a plurality of second slice images;
performing data enhancement processing on the plurality of second slice images to obtain a first training image;
generating a second training image based on a preset generative countermeasure network;
wherein the training image comprises: the first training image and the second training image.
In a second aspect, the present application also provides a hub defect detecting apparatus, the apparatus comprising:
the first acquisition module is used for acquiring a first hub image;
the slicing module is used for slicing the first hub image to obtain a plurality of first slice images;
the detection module is used for inputting the first slice images into a pre-trained hub defect detection model for detection and outputting a hub defect detection result;
the hub defect detection model is used for extracting the features of the first slice images to obtain a plurality of first feature images, performing super-resolution reconstruction and prediction on the first feature images, and outputting hub defect detection results corresponding to the first feature images.
In a third aspect, an electronic device is provided, which includes a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
and the processor is used for realizing the steps of the hub defect detection method in any embodiment of the first aspect when executing the program stored in the memory.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the hub defect detection method according to any one of the embodiments of the first aspect.
In the embodiment of the application, a first hub image is acquired; slicing the first hub image to obtain a plurality of first slice images; inputting the plurality of first slice images into a pre-trained hub defect detection model for detection, and outputting a hub defect detection result; the hub defect detection model is used for extracting the features of the first slice images to obtain a plurality of first feature images, performing super-resolution reconstruction and prediction on the first feature images, and outputting hub defect detection results corresponding to the first feature images. Through the method, the acquired first hub image can be sliced into a plurality of first slice images with smaller sizes, so that the hub defect detection model can extract the characteristics of smaller defects on the first slice images conveniently, and the accuracy of hub defect detection is improved. Meanwhile, the pre-trained hub defect detection model is adopted to replace human eyes to detect the first hub image, so that the accuracy and efficiency of hub defect detection can be improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
Fig. 1 is a schematic flow chart of a hub defect detection method according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a hub defect detecting system according to an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of a hub defect detection model according to an embodiment of the present application;
fig. 4 is a schematic diagram illustrating a computing process and a structure of a generative countermeasure network according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a hub defect detecting device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a schematic flow chart of a hub defect detecting method provided in the embodiment of the present application. As shown in fig. 1, the hub defect detecting method includes:
step 101, acquiring a first hub image.
Specifically, the hub defect detection system shown in fig. 2 may be used to acquire the first hub image. As shown in fig. 2, the hub defect detection system may include an X-ray machine, a hub placing device, a flat panel detector, an upper computer, a lower computer, a database, and the like. During acquisition of the first hub image, the X-ray machine emits X-rays toward the hub on the hub placing device to perform radiographic inspection of the hub, the first hub image is acquired through the flat panel detector, and the first hub image is output to the upper computer. After the upper computer completes detection on the first hub image, it transmits the hub defect detection result to the lower computer and to the database for storage. The lower computer can also display the hub defect detection result to the staff, and the staff can send a motion control instruction to the hub placing device through the lower computer to adjust the hub position and obtain first hub images of the hub at different angles.
And 102, slicing the first hub image to obtain a plurality of first slice images.
In this step, the first hub image is sliced into a plurality of first slice images according to a preset ratio, which facilitates subsequent feature processing of the plurality of first slice images. The preset ratio can be adjusted according to actual needs and is not specifically limited in this application. In this way, even small defect samples in the first hub image can be located more quickly, as sketched below.
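As an illustration of this step, the sketch below cuts an image into equally sized tiles; the tile size and stride are assumed parameters chosen for illustration, not values specified by this application.

```python
import numpy as np

def slice_image(image: np.ndarray, tile: int = 512, stride: int = 512):
    """Cut a hub image (H, W) into square tiles of size `tile`.

    `tile` and `stride` are illustrative; the application only requires
    that the slicing ratio be preset and adjustable.
    """
    h, w = image.shape[:2]
    slices = []
    for top in range(0, max(h - tile, 0) + 1, stride):
        for left in range(0, max(w - tile, 0) + 1, stride):
            slices.append(image[top:top + tile, left:left + tile])
    return slices
```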
103, inputting the plurality of first slice images into a pre-trained hub defect detection model for detection, and outputting a hub defect detection result; the hub defect detection model is used for extracting the features of the first slice images to obtain a plurality of first feature images, performing super-resolution reconstruction and prediction on the first feature images, and outputting hub defect detection results corresponding to the first feature images.
Specifically, the hub defect detection model may be a deep learning model obtained by learning from training images that contain hub defects. In the hub defect detection model, the features of each first slice image can be extracted to obtain a corresponding first feature image, super-resolution reconstruction is performed on the plurality of first feature images through convolution calculation, and finally the super-resolution-reconstructed first feature images are predicted to determine the type and position of the hub defect.
In this embodiment, the acquired first hub image can be sliced into a plurality of first slice images with smaller sizes, so that the hub defect detection model can extract the features of smaller defects on the first slice images, and the accuracy of hub defect detection is improved. Meanwhile, the pre-trained hub defect detection model is adopted to replace human eyes to detect the first hub image, so that the accuracy and efficiency of hub defect detection can be improved.
Further, referring to fig. 3, fig. 3 is a schematic structural diagram of a hub defect detection model provided in the embodiment of the present application. As shown in FIG. 3, the hub defect detection model includes a convolutional neural network, a generative countermeasure network, and a deconvolution single-stage detector network;
inputting a plurality of first slice images into a pre-trained hub defect detection model for detection, and outputting a hub defect detection result, wherein the method comprises the following steps:
inputting the plurality of first slice images into a convolutional neural network for feature extraction, and outputting a plurality of first feature images;
inputting a plurality of first characteristic images into a generating countermeasure network for super-resolution reconstruction, and outputting a plurality of first intermediate images, wherein the resolution of the plurality of first intermediate images is higher than that of the plurality of first characteristic images;
and inputting the plurality of first intermediate images into a deconvolution single-stage detector network for prediction, and outputting a hub defect detection result.
Specifically, when detecting the plurality of first slice images, the plurality of first slice images need to be input into a Convolutional Neural Network (CNN) to perform feature extraction of hub defects, so as to obtain a plurality of first feature images. It should be noted that, during feature extraction, since the first slice images are small in size and their features are also small, features may be lost in the extraction process; to address this, the size of the first slice images may first be enlarged by deconvolution calculation to avoid feature loss, after which the convolution-based extraction operation is performed. Subsequently, the plurality of first feature images output by the convolutional neural network are input into a Generative Adversarial Network (GAN); the generator in the GAN extracts the hub defect edge features of the first feature images and generates high-resolution sample images, the discriminator discriminates the sample images, and when the probability that a sample image belongs to the first feature images is equal to 0.5, the high-resolution sample image is output. Finally, the high-resolution sample images output by the generative countermeasure network are input into a deconvolution single-stage detector (DSSD, Deconvolutional Single Shot Detector) network for prediction, and the hub defect detection result is obtained.
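A minimal sketch of how these three stages could be wired together is shown below; the module names (FeatureCNN, SRGenerator, DSSDHead) and their internals are placeholders standing in for the networks described above, not the application's actual architecture.

```python
import torch
import torch.nn as nn

class HubDefectDetector(nn.Module):
    """Sketch of the CNN -> GAN generator -> DSSD pipeline described above."""

    def __init__(self, feature_cnn: nn.Module, sr_generator: nn.Module, dssd_head: nn.Module):
        super().__init__()
        self.feature_cnn = feature_cnn      # deconvolution + convolution feature extractor
        self.sr_generator = sr_generator    # GAN generator used for super-resolution at inference
        self.dssd_head = dssd_head          # deconvolution single-stage detector

    def forward(self, slices: torch.Tensor):
        features = self.feature_cnn(slices)          # first feature images
        intermediates = self.sr_generator(features)  # super-resolved first intermediate images
        return self.dssd_head(intermediates)         # defect classes and boxes
```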
In the embodiment, the super-resolution reconstruction can be performed on the first characteristic image through the generated countermeasure network, so that the resolution of the first characteristic image is improved, the problems that the hub defect size is small and the defect features are not easy to extract are solved, and the accuracy of hub defect detection is improved.
Further, the generative countermeasure network comprises a generator and a discriminator, wherein the generator comprises a plurality of first convolution layers which are sequentially connected in series, and the discriminator comprises a plurality of second convolution layers which are sequentially connected in series;
inputting a plurality of first characteristic images into a generative countermeasure network for super-resolution reconstruction to obtain a plurality of first intermediate images, and the method comprises the following steps:
inputting a plurality of first characteristic images into a generator to perform convolution calculation, and generating a plurality of sample images;
inputting the plurality of sample images and the plurality of first characteristic images into a discriminator to carry out convolution calculation, and discriminating the probability that the first sample image belongs to the characteristic images according to the convolution calculation result, wherein the first sample image is any one of the plurality of sample images;
taking the first sample image as a first intermediate image under the condition that a preset condition is met;
the preset condition is that the probability that the first sample image belongs to the plurality of first characteristic images is equal to a preset threshold value, or the current judging times reach preset times.
It should be noted that the generative countermeasure network is an unsupervised algorithm composed of a generator and a discriminator. The Generator is used to generate sample images in order to "fool" the discriminator; the Discriminator is used to determine whether a sample image is real or machine-generated, with the aim of identifying the "fake data" produced by the generator. The principle of the generative countermeasure network is as follows: the generator continuously generates "fake data", which is then sent to the discriminator for judgment. At the beginning, the images generated by the generator are poor and the discriminator can easily identify them. With continuous training, however, the generator becomes better and better and eventually fools the discriminator; at that point the discrimination accuracy of the discriminator starts to decrease and it is essentially guessing at the data it receives, that is, the probability of identifying the fake data is 50%. The generator is then fixed and the discriminator is trained continuously to improve its discrimination capability, after which the generator is trained again; this alternation continues until Nash equilibrium is reached. The training process can be summarized as: step 1, fix the discriminator and train the generator; step 2, fix the generator and train the discriminator; step 3, Nash equilibrium. The training process of the generative countermeasure network is a game problem whose goal is to reach Nash equilibrium, so that the generator estimates the data sample distribution and the discriminator cannot distinguish real images from sample images. The calculation flow and structure are shown in fig. 4. In the end, the discriminator cannot distinguish a real image from a sample image and its output probability is 0.5; in addition, extended images with forms different from the real sample features can be generated. The optimization objective function is:
\min_G \max_D V(D,G) = \mathbb{E}_{x \sim p_{data}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right]

where x represents a real image, p_{data}(x) represents the probability distribution of the real images, z represents the random noise input to the generator, p_z(z) represents the distribution from which the sample images are generated, D(x) represents the probability that the discriminator judges the input image to be a real image, and G(z) represents the sample image generated after the generator receives the random noise.
The first term in the above equation corresponds to real images passing through the discriminator D, which tries to push D(x) toward 1; the second term corresponds to random noise passing through the generator G, which produces fake samples that the discriminator tries to identify as fake by pushing D(G(z)) toward 0, while the generator tries to minimize the difference between the real images and the sample images. E denotes the expectation over the corresponding distribution. In general, the goal of the discriminator is to maximize the function V(D, G), and the goal of the generator is to minimize it.
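The alternating optimization of V(D, G) described above can be sketched as follows; this is a generic GAN training step, not the application's exact super-resolution training procedure, and it assumes the discriminator ends in a sigmoid and outputs one probability per image. The optimizers, noise dimension and binary cross-entropy form are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def gan_training_step(generator, discriminator, real_images, g_opt, d_opt, noise_dim=100):
    """One round of 'fix G, train D' followed by 'fix D, train G'."""
    batch = real_images.size(0)
    noise = torch.randn(batch, noise_dim)

    # Step 1: train the discriminator to push D(x) -> 1 and D(G(z)) -> 0.
    d_opt.zero_grad()
    fake_images = generator(noise).detach()
    d_loss = F.binary_cross_entropy(discriminator(real_images), torch.ones(batch, 1)) \
           + F.binary_cross_entropy(discriminator(fake_images), torch.zeros(batch, 1))
    d_loss.backward()
    d_opt.step()

    # Step 2: train the generator so that D(G(z)) -> 1, i.e. it "fools" the discriminator.
    g_opt.zero_grad()
    g_loss = F.binary_cross_entropy(discriminator(generator(noise)), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```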
In one embodiment, the generator and the discriminator may each include multiple convolutional layers. As shown in Table 1 below, the generator includes 6 first convolutional layers connected in series in sequence, and the discriminator includes 4 second convolutional layers connected in series in sequence.
Table 1
(The original table image is not reproduced here; it lists the layer parameters of the generator's 6 first convolutional layers and the discriminator's 4 second convolutional layers.)
After the plurality of first feature images are input into the generative countermeasure network, the 6 first convolutional layers in the generator sequentially perform convolution calculation on each first feature image, where the convolution calculation formula of each layer is as follows:
o = \left\lfloor \frac{i + 2p - k}{s} \right\rfloor + 1

where i denotes the size of the image input to the current convolutional layer, o denotes the size of the image output by the current convolutional layer, p denotes the padding (p = 1 means one ring of zero-valued pixels is filled at the image edge, while p = 0 means no padding, so unnecessary edge pixels are cut off), k denotes the size of the convolution kernel, and s denotes the convolution kernel stride. For example, assume that the size of the image input to the current convolutional layer is 7 × 7, the convolution kernel size is 3 × 3, the stride s is 2, and p is 0; then the output size o equals [(7 + 2 × 0 - 3)/2] + 1 = 3, that is, the image output by the current convolutional layer is 3 × 3.
In this way, the generator continuously extracts the edge features of the hub defects through multiple convolution calculations, converts them into a high-resolution image, and finally obtains a high-resolution sample image. The plurality of sample images and the plurality of first feature images are then input together into the discriminator for multiple convolution calculations, and the probability that the first sample image belongs to the feature images is discriminated based on a Softmax function and the convolution calculation results. When the probability that the first sample image belongs to the plurality of first feature images is equal to the preset threshold (namely 0.5), or the current number of discriminations reaches the preset number, the first sample image is taken as a first intermediate image; that is, the first intermediate image finally output by the generative countermeasure network is an image super-resolution reconstructed by the generator.
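The stopping rule for accepting a generated sample image as a first intermediate image might look like the sketch below; the 0.5 threshold comes from the text, while the maximum number of discriminations and the tolerance used to test "equals the threshold" are assumptions.

```python
def select_intermediate(generator, discriminator, feature_image,
                        threshold: float = 0.5, max_rounds: int = 100):
    """Return the first sample image accepted as a first intermediate image."""
    sample = None
    for _ in range(max_rounds):
        sample = generator(feature_image)         # super-resolved candidate sample image
        prob_real = discriminator(sample).item()  # probability it belongs to the feature images
        if abs(prob_real - threshold) < 1e-3:     # preset condition: probability equals the threshold
            break
    return sample                                 # also returned once max_rounds is reached
```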
In this embodiment, super-resolution reconstruction of the first feature images can be performed through the multiple first convolutional layers in the generative countermeasure network, so that the resolution of the first feature images is improved, the problems that hub defects are small in size and their features are not easy to extract are alleviated, and the accuracy of hub defect detection is improved.
Further, the convolutional neural network comprises a plurality of deconvolution layers and a plurality of third convolution layers;
inputting the plurality of first slice images into a convolutional neural network for feature extraction, and outputting a plurality of first feature images, wherein the method comprises the following steps:
sequentially inputting the first slice images into each deconvolution layer in the deconvolution layers for deconvolution calculation, and outputting a plurality of second intermediate images;
and sequentially inputting the plurality of second intermediate images into each third convolution layer in the plurality of third convolution layers for convolution calculation to obtain a plurality of first characteristic images.
In one embodiment, the convolutional neural network may include 3 deconvolution layers and 3 third convolution layers, as shown in Table 2 below.
Table 2
(The original table image is not reproduced here; it lists the layer parameters of the convolutional neural network's 3 deconvolution layers and 3 third convolution layers.)
Therefore, after the plurality of first slice images are input into the convolutional neural network, deconvolution calculation can be performed through the 3 deconvolution layers to enlarge the size of each first slice image and avoid loss of feature data in each first slice image, and a plurality of second intermediate images are output. Convolution calculation is then performed on the plurality of second intermediate images through the 3 third convolution layers to extract their feature data, finally obtaining the plurality of first feature images. In this way, the probability of missing hub defect features in the first slice images can be greatly reduced.
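A sketch of a feature extractor with 3 deconvolution layers followed by 3 convolution layers, as described above; the channel counts, kernel sizes and strides are assumptions for illustration, since the exact values of Table 2 are not reproduced here.

```python
import torch.nn as nn

class FeatureCNN(nn.Module):
    """3 deconvolution layers to enlarge the slices, then 3 convolution layers to extract features."""

    def __init__(self, in_channels: int = 1):
        super().__init__()
        self.deconv = nn.Sequential(   # enlarges each first slice image (second intermediate images)
            nn.ConvTranspose2d(in_channels, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
        )
        self.conv = nn.Sequential(     # extracts the first feature images
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.conv(self.deconv(x))   # second intermediate images -> first feature images
```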
Further, after inputting the plurality of first slice images into a pre-trained hub defect detection model for detection and outputting a hub defect detection result, the method further comprises:
adjusting the acquisition angle of the first hub image; and/or,
and storing the wheel hub defect detection result into a preset database.
In an embodiment, after the hub defect detection on the first hub image is completed, the acquisition angle of the first hub image may be adjusted to obtain a new first hub image at a different angle, and the new first hub image is then sliced and detected, so that different positions of the hub are detected. Specifically, based on the hub defect detection system shown in fig. 2, after the upper computer completes detection of the first hub image, it can send the hub defect detection result to the lower computer and/or a preset database; the lower computer displays the hub defect detection result to the staff, and the staff can send a motion control instruction to the hub placing device through the lower computer to adjust the position of the hub and obtain first hub images of the hub at different angles. Meanwhile, the hub defect detection results are stored uniformly in the preset database, which facilitates checking.
Further, before inputting the plurality of first slice images into a pre-trained hub defect detection model for detection and outputting a hub defect detection result, the method further comprises:
acquiring a training image;
inputting the training image into a model to be trained for training to obtain a hub defect detection model;
the model to be trained comprises a convolutional neural network, a generative countermeasure network and a deconvolution single-stage detector network.
Specifically, before the hub defect detection model is used to detect hub defects, it needs to be trained. Training images may be acquired and divided into a training data set, a validation data set and a test data set. The parameters of the model to be trained are trained on the training data set, the performance of the trained models is compared on the validation data to determine the best-performing model, and the performance of the final model is then verified on the test data set. During model training, the performance of the model in predicting hub defects can be measured through a preset loss function, whose calculation formula is as follows:
Loss(x, c, l, g) = \frac{1}{N}\left(L_{conf}(x, c) + \alpha L_{loc}(x, l, g)\right)

L_{loc}(x, l, g) = \sum_{i \in Pos}^{N} \sum_{m \in \{cx, cy, w, h\}} x_{ij}^{k}\,\mathrm{smooth}_{L1}\!\left(l_i^{m} - \hat{g}_j^{m}\right)

L_{conf}(x, c) = -\sum_{i \in Pos}^{N} x_{ij}^{p} \log\left(\hat{c}_i^{p}\right) - \sum_{i \in Neg} \log\left(\hat{c}_i^{0}\right), \qquad \hat{c}_i^{p} = \frac{\exp\left(c_i^{p}\right)}{\sum_{p} \exp\left(c_i^{p}\right)}

wherein Loss(x, c, l, g) represents the overall objective loss function, Lconf(x, c) represents the classification error, Lloc(x, l, g) represents the positioning error, and N represents the number of anchors paired with a real bounding box; x_{ij}^{p} takes the value 1 if the coincidence degree (IoU) between the bounding box of the i-th anchor and that of the j-th real object is higher than the value we set, and 0 otherwise; c represents the predicted confidence of the real object class, l represents the center position, length and width of the predicted bounding box, g represents the center position, length and width of the real bounding box, and α represents a weight value. d_i = (d_i^{cx}, d_i^{cy}, d_i^{w}, d_i^{h}) represents the center position, width and height of the i-th anchor, and g_j = (g_j^{cx}, g_j^{cy}, g_j^{w}, g_j^{h}) represents the center position, width and height of the j-th real object.
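A sketch of the weighted detection loss above; it assumes the anchors have already been matched to the real bounding boxes (the pairing indicated by x), and uses smooth L1 for the positioning error and cross-entropy for the classification error, as is standard for single-stage detectors.

```python
import torch
import torch.nn.functional as F

def detection_loss(cls_logits, box_preds, cls_targets, box_targets, num_matched, alpha=1.0):
    """Loss = (Lconf + alpha * Lloc) / N over anchors already paired with real boxes."""
    l_conf = F.cross_entropy(cls_logits, cls_targets, reduction="sum")  # classification error
    l_loc = F.smooth_l1_loss(box_preds, box_targets, reduction="sum")   # positioning error
    n = max(num_matched, 1)                                             # N: anchors paired with real boxes
    return (l_conf + alpha * l_loc) / n
```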
In this embodiment, the hub defect detection model can be obtained by training the model to be trained, and therefore the accuracy and efficiency of the hub defect detection are improved through the hub defect detection model.
Further, acquiring the training image includes:
acquiring a second hub image;
slicing the second hub image to obtain a plurality of second slice images;
performing data enhancement processing on the plurality of second slice images to obtain a first training image;
generating a second training image based on a preset generative countermeasure network;
wherein the training image comprises: a first training image and a second training image.
In practical application, hub defect images are relatively scarce, so hub defect images can be acquired through multiple channels and used as training images. Specifically, a second hub image may be acquired by X-ray, the second hub image may be sliced to obtain a plurality of second slice images, and data enhancement such as image transformation, rotation, mirroring and scaling may then be applied to obtain the first training images. In addition, new sample images can be generated as second training images through a trained generative countermeasure network, so that the first training images and the second training images can be jointly input into the model to be trained for training. Since rich hub defect images are used as training images, the prediction accuracy of the hub defect detection model is improved.
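The data enhancement step might be sketched as follows; the specific transforms and their parameters are assumptions illustrating the rotation, mirroring and scaling mentioned above, not settings fixed by this application.

```python
import random
import numpy as np

def augment_slice(img: np.ndarray):
    """Produce a randomly transformed copy of a second slice image for training."""
    out = img.copy()
    if random.random() < 0.5:
        out = np.fliplr(out)                          # mirroring
    out = np.rot90(out, k=random.randint(0, 3))       # rotation by a multiple of 90 degrees
    if random.random() < 0.5:
        scale = random.uniform(0.8, 1.2)              # scaling
        h, w = out.shape[:2]
        ys = (np.arange(int(h * scale)) / scale).astype(int).clip(0, h - 1)
        xs = (np.arange(int(w * scale)) / scale).astype(int).clip(0, w - 1)
        out = out[np.ix_(ys, xs)]                     # nearest-neighbour rescale
    return out
```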
The embodiment also provides a hub defect detecting device. Referring to fig. 5, fig. 5 is a schematic structural diagram of a hub defect detecting device provided in the embodiment of the present application. As shown in fig. 5, the hub defect detecting apparatus 500 includes:
a first obtaining module 501, configured to obtain a first hub image;
a slicing module 502, configured to slice the first hub image to obtain a plurality of first slice images;
the detection module 503 is configured to input the plurality of first slice images into a pre-trained hub defect detection model for detection, and output a hub defect detection result;
the hub defect detection model is used for extracting the features of the first slice images to obtain a plurality of first feature images, performing super-resolution reconstruction and prediction on the first feature images, and outputting hub defect detection results corresponding to the first feature images.
Optionally, the hub defect detection model comprises a convolutional neural network, a generative countermeasure network and a deconvolution single-stage detector network;
a detection module 503, comprising:
the feature extraction submodule is used for inputting the first slice images into a convolutional neural network for feature extraction and outputting a plurality of first feature images;
the super-resolution reconstruction sub-module is used for inputting the first characteristic images into the generation type countermeasure network for super-resolution reconstruction and outputting a plurality of first intermediate images, and the resolution of the first intermediate images is higher than that of the first characteristic images;
and the prediction submodule is used for inputting the plurality of first intermediate images into the deconvolution single-stage detector network for prediction and outputting a wheel hub defect detection result.
Optionally, the generative countermeasure network includes a generator and a discriminator, where the generator includes a plurality of first convolutional layers connected in series in sequence, and the discriminator includes a plurality of second convolutional layers connected in series in sequence;
a super-resolution reconstruction sub-module comprising:
the first convolution calculating unit is used for inputting the first characteristic images into the generator to carry out convolution calculation so as to generate a plurality of sample images;
the second convolution calculation unit is used for inputting the plurality of sample images and the plurality of first characteristic images into the discriminator to carry out convolution calculation and discriminating the probability that the first sample image belongs to the characteristic images according to the convolution calculation result, wherein the first sample image is any one of the plurality of sample images;
a processing unit, configured to take the first sample image as a first intermediate image if a preset condition is satisfied;
the preset condition is that the probability that the first sample image belongs to the plurality of first characteristic images is equal to a preset threshold value, or the current judging times reach preset times.
Optionally, the convolutional neural network comprises a plurality of deconvolution layers and a plurality of third convolution layers;
a feature extraction submodule comprising:
a deconvolution calculation unit, which is used for sequentially inputting the first slice images into each deconvolution layer in the deconvolution layers for deconvolution calculation and outputting a plurality of second intermediate images;
and the second convolution calculation unit is used for sequentially inputting the plurality of second intermediate images into each third convolution layer in the plurality of third convolution layers to carry out convolution calculation so as to obtain a plurality of first characteristic images.
Optionally, the hub defect detecting apparatus 500 further includes:
the adjusting module is used for adjusting the acquisition angle of the first hub image; and/or,
and the storage module is used for storing the hub defect detection result into a preset database.
Optionally, the hub defect detecting apparatus 500 further includes:
the second acquisition module is used for acquiring a training image;
the training module is used for inputting the training images into a model to be trained for training to obtain a hub defect detection model;
the model to be trained comprises a convolutional neural network, a generative countermeasure network and a deconvolution single-stage detector network.
Optionally, the second acquisition module is configured to:
Acquiring a second hub image;
slicing the second hub image to obtain a plurality of second slice images;
performing data enhancement processing on the plurality of second slice images to obtain a first training image;
generating a second training image based on a preset generative countermeasure network;
wherein the training image comprises: a first training image and a second training image.
It should be noted that the hub defect detecting apparatus 500 can implement the embodiments of the hub defect detecting method described above, and achieve the same technical effects, which are not described herein again.
As shown in fig. 6, the embodiment of the present application provides an electronic device, which includes a processor 611, a communication interface 612, a memory 613, and a communication bus 614, wherein the processor 611, the communication interface 612, and the memory 613 communicate with each other through the communication bus 614,
a memory 613 for storing computer programs;
in an embodiment of the present application, the processor 611 is configured to execute the program stored in the memory 613 so as to implement the hub defect detection method provided in any one of the foregoing method embodiments, which includes:
acquiring a first hub image;
slicing the first hub image to obtain a plurality of first slice images;
inputting the plurality of first slice images into a pre-trained hub defect detection model for detection, and outputting a hub defect detection result;
the hub defect detection model is used for extracting the features of the first slice images to obtain a plurality of first feature images, performing super-resolution reconstruction and prediction on the first feature images, and outputting hub defect detection results corresponding to the first feature images.
It should be noted that the electronic device can implement the embodiments of the hub defect detecting method described above, and achieve the same technical effects, which are not described herein again.
The present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the hub defect detecting method provided in any one of the foregoing method embodiments.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method of detecting a hub defect, the method comprising:
acquiring a first hub image;
slicing the first hub image to obtain a plurality of first slice images;
inputting the plurality of first slice images into a pre-trained hub defect detection model for detection, and outputting a hub defect detection result;
the hub defect detection model is used for extracting the features of the first slice images to obtain a plurality of first feature images, performing super-resolution reconstruction and prediction on the first feature images, and outputting hub defect detection results corresponding to the first feature images.
2. The method of claim 1, wherein the hub defect detection model comprises a convolutional neural network, a generative countermeasure network, and a deconvolution single stage detector network;
the inputting the plurality of first slice images into a pre-trained hub defect detection model for detection and outputting a hub defect detection result comprises:
inputting the plurality of first slice images into the convolutional neural network for feature extraction, and outputting the plurality of first feature images;
inputting the plurality of first characteristic images into the generative countermeasure network for super-resolution reconstruction, and outputting a plurality of first intermediate images, wherein the resolution of the plurality of first intermediate images is higher than that of the plurality of first characteristic images;
and inputting the plurality of first intermediate images into the deconvolution single-stage detector network for prediction, and outputting the wheel hub defect detection result.
3. The method of claim 2, wherein the generative countermeasure network comprises a generator and an arbiter, wherein the generator comprises a plurality of first convolutional layers sequentially connected in series, and the arbiter comprises a plurality of second convolutional layers sequentially connected in series;
the inputting the plurality of first characteristic images into the generative countermeasure network for super-resolution reconstruction to obtain a plurality of first intermediate images comprises:
inputting the plurality of first characteristic images into the generator to carry out convolution calculation, and generating a plurality of sample images;
inputting the plurality of sample images and the plurality of first characteristic images into the discriminator to carry out convolution calculation, and discriminating the probability that the first sample image belongs to the characteristic images according to the convolution calculation result, wherein the first sample image is any one of the plurality of sample images;
taking the first sample image as a first intermediate image under the condition that a preset condition is met;
the preset condition is that the probability that the first sample image belongs to the plurality of first feature images is equal to a preset threshold value, or the current judging times reach preset times.
4. The method of claim 2, wherein the convolutional neural network comprises a plurality of deconvolution layers and a plurality of third convolution layers;
the inputting the plurality of first slice images into the convolutional neural network for feature extraction, and outputting the plurality of first feature images includes:
sequentially inputting the plurality of first slice images into each deconvolution layer in the plurality of deconvolution layers for deconvolution calculation, and outputting a plurality of second intermediate images;
and sequentially inputting the plurality of second intermediate images into each third convolution layer in the plurality of third convolution layers for convolution calculation to obtain the plurality of first characteristic images.
5. The method of claim 1, wherein after inputting the plurality of first slice images into a pre-trained hub defect detection model for detection and outputting a hub defect detection result, the method further comprises:
adjusting the acquisition angle of the first hub image; and/or,
and storing the hub defect detection result into a preset database.
6. The method of claim 1, wherein before inputting the plurality of first slice images into a pre-trained hub defect detection model for detection and outputting a hub defect detection result, the method further comprises:
acquiring a training image;
inputting the training image into a model to be trained for training to obtain the hub defect detection model;
the model to be trained comprises a convolutional neural network, a generative countermeasure network and a deconvolution single-stage detector network.
7. The method of claim 6, wherein the acquiring training images comprises:
acquiring a second hub image;
slicing the second hub image to obtain a plurality of second slice images;
performing data enhancement processing on the plurality of second slice images to obtain a first training image;
generating a second training image based on a preset generative countermeasure network;
wherein the training image comprises: the first training image and the second training image.
8. A hub defect detection apparatus, the apparatus comprising:
the first acquisition module is used for acquiring a first hub image;
the slicing module is used for slicing the first hub image to obtain a plurality of first slice images;
the detection module is used for inputting the first slice images into a pre-trained hub defect detection model for detection and outputting a hub defect detection result;
the hub defect detection model is used for extracting the features of the first slice images to obtain a plurality of first feature images, performing super-resolution reconstruction and prediction on the first feature images, and outputting hub defect detection results corresponding to the first feature images.
9. An electronic device, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the steps of the hub defect detecting method according to any one of claims 1 to 7 when executing the program stored in the memory.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the hub defect detection method according to any one of claims 1 to 7.
CN202110964839.7A 2021-08-23 2021-08-23 Hub defect detection method and device, electronic equipment and storage medium Active CN113421262B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110964839.7A CN113421262B (en) 2021-08-23 2021-08-23 Hub defect detection method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110964839.7A CN113421262B (en) 2021-08-23 2021-08-23 Hub defect detection method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113421262A true CN113421262A (en) 2021-09-21
CN113421262B CN113421262B (en) 2021-12-21

Family

ID=77719068

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110964839.7A Active CN113421262B (en) 2021-08-23 2021-08-23 Hub defect detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113421262B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101021491A (en) * 2006-12-28 2007-08-22 华南理工大学 Automatic detecting method and device for wheel hub casting fault based on image understanding
CN106709924A (en) * 2016-11-18 2017-05-24 中国人民解放军信息工程大学 Deep convolutional neural network and superpixel-based image semantic segmentation method
US20180293734A1 (en) * 2017-04-06 2018-10-11 General Electric Company Visual anomaly detection system
US10346969B1 (en) * 2018-01-02 2019-07-09 Amazon Technologies, Inc. Detecting surface flaws using computer vision
CN111006865A (en) * 2019-11-15 2020-04-14 上海电机学院 Motor bearing fault diagnosis method
CN111429347A (en) * 2020-03-20 2020-07-17 长沙理工大学 Image super-resolution reconstruction method and device and computer-readable storage medium
CN112132042A (en) * 2020-09-24 2020-12-25 西安电子科技大学 SAR image target detection method based on anti-domain adaptation

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101021491A (en) * 2006-12-28 2007-08-22 华南理工大学 Automatic detecting method and device for wheel hub casting fault based on image understanding
CN106709924A (en) * 2016-11-18 2017-05-24 中国人民解放军信息工程大学 Deep convolutional neural network and superpixel-based image semantic segmentation method
US20180293734A1 (en) * 2017-04-06 2018-10-11 General Electric Company Visual anomaly detection system
US10346969B1 (en) * 2018-01-02 2019-07-09 Amazon Technologies, Inc. Detecting surface flaws using computer vision
CN111006865A (en) * 2019-11-15 2020-04-14 上海电机学院 Motor bearing fault diagnosis method
CN111429347A (en) * 2020-03-20 2020-07-17 长沙理工大学 Image super-resolution reconstruction method and device and computer-readable storage medium
CN112132042A (en) * 2020-09-24 2020-12-25 西安电子科技大学 SAR image target detection method based on anti-domain adaptation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
梁翊 (Liang Yi) et al.: "Image defect diagnosis algorithm based on deep learning and super-resolution reconstruction technology", Journal of Shandong Agricultural University (Natural Science Edition) *
韩凯 (Han Kai): "Online detection algorithm for automobile hub surface defects based on deep learning", China Master's Theses Full-text Database, Engineering Science and Technology II *

Also Published As

Publication number Publication date
CN113421262B (en) 2021-12-21

Similar Documents

Publication Publication Date Title
CN106874894B (en) A Human Object Detection Method Based on Regional Fully Convolutional Neural Networks
CN110321873A (en) Sensitization picture recognition methods and system based on deep learning convolutional neural networks
CN112132002B (en) Method and device for detecting foreign matter in three-dimensional image data
CN113920107A (en) A method of insulator damage detection based on improved yolov5 algorithm
CN109117876A (en) A kind of dense small target deteection model building method, model and detection method
CN105654067A (en) Vehicle detection method and device
CN110147745B (en) Video key frame detection method and device
CN108460649A (en) A kind of image-recognizing method and device
CN113344475A (en) Transformer bushing defect identification method and system based on sequence modal decomposition
CN111401374A (en) Model training method based on multiple tasks, character recognition method and device
CN110826485B (en) Target detection method and system for remote sensing image
CN109916912A (en) A kind of railway rail clip Defect inspection method and system
CN112347818B (en) Method and device for screening difficult sample images of video target detection model
CN111915595B (en) Image quality assessment method, image quality assessment model training method and device
CN112818774A (en) Living body detection method and device
CN113421262B (en) Hub defect detection method and device, electronic equipment and storage medium
CN115346051A (en) Optical remote sensing image detection method and device
CN111539456A (en) Target identification method and device
CN117474915B (en) Abnormality detection method, electronic equipment and storage medium
CN111127327B (en) Picture inclination detection method and device
Zou Flame image recognition detection based on improved YOLOv7
CN113239075A (en) Construction data self-checking method and system
CN113222843A (en) Image restoration method and related equipment thereof
EP4401034A1 (en) Battery cell electrode sheet inspection method and apparatus, and electronic device
CN119027967B (en) Ancient book image detection method and system based on density map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant