CN112233096A - Vehicle apron board fault detection method - Google Patents

Vehicle apron board fault detection method

Info

Publication number
CN112233096A
Authority
CN
China
Prior art keywords
image
key points
module
fault
low
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011118748.3A
Other languages
Chinese (zh)
Other versions
CN112233096B (en)
Inventor
李怡蕾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Kejia General Mechanical and Electrical Co Ltd
Original Assignee
Harbin Kejia General Mechanical and Electrical Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Kejia General Mechanical and Electrical Co Ltd filed Critical Harbin Kejia General Mechanical and Electrical Co Ltd
Priority to CN202011118748.3A priority Critical patent/CN112233096B/en
Publication of CN112233096A publication Critical patent/CN112233096A/en
Application granted granted Critical
Publication of CN112233096B publication Critical patent/CN112233096B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20112 Image segmentation details
    • G06T 2207/20132 Image cropping

Abstract

A method for detecting faults of a vehicle apron board belongs to the technical field of image processing. The invention solves the problem of low accuracy when apron board deformation and damage faults are detected with existing methods. Automatic image recognition based on a deep learning algorithm replaces manual inspection, which improves the efficiency and stability of fault detection and greatly improves detection accuracy. A CenterNet target detection model is introduced for fault detection; it raises the detection speed while satisfying the detection precision requirement and enables real-time detection. By optimizing the network of the CenterNet target detection model, the receptive field of the model is enlarged while the network remains lightweight, and the robustness of the model is improved. The method can be applied to detecting impact deformation and damage faults of high-speed rail apron boards.

Description

Vehicle apron board fault detection method
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a method for detecting faults of a vehicle apron board.
Background
The high-speed rail apron board serves functions such as airflow guidance, protection and maintenance access, and once it deforms or is damaged, driving safety is endangered. In existing practice, deformation and damage faults of the apron board are usually detected by manually inspecting images. Because the inspection result is affected by the subjective judgment of the inspection personnel, faults are easily missed or falsely reported, which compromises driving safety.
Moreover, directly applying existing image detection techniques to apron board deformation and damage detection is hampered by interference factors such as dirt on the apron board, so the accuracy of fault detection is low.
Disclosure of Invention
The invention aims to solve the problem that the accuracy of detecting the deformation damage fault of an apron board is low by adopting the conventional method, and provides a method for detecting the fault of the apron board of a vehicle.
The technical scheme adopted by the invention for solving the technical problems is as follows: a vehicle skirt fault detection method, the method comprising the steps of:
step one, obtaining a vehicle linear array image;
step two, cutting out an image of the area where the apron board part is located from the image obtained in the step one;
marking the position of the apron board part, the apron board fault position and the apron board fault category in the area image of the apron board part to obtain a marking file corresponding to the area image of the apron board part; forming a data set by the area image of the apron board part and the corresponding mark file;
step four, respectively carrying out feature extraction processing on each image in the data set to obtain a corresponding processed feature map;
inputting the processed feature map and the marking file into a CenterNet target detection model, and training the CenterNet target detection model to obtain a trained CenterNet target detection model, wherein the CenterNet target detection model takes a Hourglass module as a main network;
collecting an original image to be detected, cutting a subgraph of the area where the apron board part is located from the original image to be detected, and then performing down-sampling processing on the cut subgraph to obtain a down-sampled low-resolution image;
inputting the obtained low-resolution image into a trained CenterNet target detection model, and obtaining key points with faults in the low-resolution image; compensating the obtained key points to obtain compensated key points;
and identifying the failure of the skirt board by using the compensated key points.
The invention has the beneficial effects that: the invention provides a vehicle apron board fault detection method in which automatic image recognition based on a deep learning algorithm replaces manual inspection, improving the efficiency and stability of fault detection and greatly improving detection accuracy. A CenterNet target detection model is introduced for fault detection, which raises detection speed while satisfying the detection precision requirement and enables real-time detection. By optimizing the network of the CenterNet target detection model, the receptive field of the model is enlarged while the network remains lightweight, and the robustness of the model is improved.
Drawings
FIG. 1 is a flow chart of a method for detecting a vehicle skirt failure in accordance with the present invention;
FIG. 2 is a flow chart of the training of the CenterNet target detection model;
FIG. 3 is a diagram of a Hourglass network architecture;
in the figure, low represents downsampling and up represents upsampling.
Detailed Description
First embodiment: this embodiment is described with reference to fig. 1. The method for detecting faults of a vehicle apron board of this embodiment specifically comprises the following steps:
step one, obtaining a vehicle linear array image;
step two, cutting out an image of the area where the apron board part is located from the image obtained in the step one;
marking the position of the apron board part, the apron board fault position and the apron board fault category in the area image of the apron board part to obtain a marking file corresponding to the area image of the apron board part; forming a data set by the area image of the apron board part and the corresponding mark file;
step four, respectively carrying out feature extraction processing on each image in the data set to obtain a corresponding processed feature map;
inputting the processed feature map and the marking file into a CenterNet target detection model and training the model until its loss function value no longer decreases, thereby obtaining the trained CenterNet target detection model, wherein the CenterNet target detection model takes a Hourglass module as the backbone network;
collecting an original image to be detected, cutting a subgraph of the area where the apron board part is located from the original image to be detected, and then performing down-sampling processing on the cut subgraph to obtain a down-sampled low-resolution image;
inputting the obtained low-resolution image into a trained CenterNet target detection model, and obtaining key points with faults in the low-resolution image; compensating the obtained key points to obtain compensated key points;
and identifying the failure of the skirt board by using the compensated key points.
After a camera or a video camera is mounted on a fixed device, the mounted camera or video camera is used for shooting a vehicle running at a high speed, and a gray image of the vehicle, namely a linear array image of the vehicle, is obtained.
According to the invention, an automatic image identification mode based on a deep learning algorithm is used for replacing manual detection, so that the fault detection efficiency and stability can be improved, and the detection accuracy can be greatly improved. The method can effectively shorten the time of secondary fault detection, namely, the method can directly utilize the trained model to detect the fault, thereby improving the detection efficiency.
The second embodiment, which is different from the first embodiment, is: the specific process of the second step is as follows:
and cutting out an image of the area where the skirt part is located from the image obtained in the step one according to the wheel base information of the hardware for shooting the vehicle and the position prior knowledge of the skirt.
The camera devices laid at the respective stations have a uniform reference coordinate axis, and necessary portions are cut out according to the distances of the camera devices from the reference coordinate axis.
The third embodiment is different from the first embodiment in that: before marking the position and the fault category of the skirt board part in the image of the area where the skirt board part is located, the image of the area needs to be augmented by flipping, zooming and translating.
The fourth embodiment is different from the first embodiment in that: the concrete process of the step four is as follows:
step S1: taking any image in the data set as an input image;
step S2: for any fault category marked in the input image, denoted category C, calculate the key point coordinate p of category C as:

p = ( (x1 + x2)/2 , (y1 + y2)/2 )

wherein (x1, y1) and (x2, y2) respectively represent the vertex coordinates of the upper left corner and the lower right corner of the rectangular marking frame of category C;
the category C can be deformation faults of the apron board caused by the fact that the edge of the apron board is hit or collided, deformation of a grid on the apron board caused by collision, crack damage faults of the apron board caused by the fact that foreign matters hit the apron board, crater damage faults of the apron board, and partial missing damage faults of the grid on the apron board; a plurality of fault categories may exist on one image;
step S3: processing the input image to obtain a low-resolution image;
step S4: from the key point coordinate p of category C calculated in step S2, calculate the corresponding key point coordinate p̃ of category C in the low-resolution image:

p̃ = ⌊ p / R ⌋

wherein R is the downsampling factor; mapping the key point coordinate p onto the low-resolution image in this way yields the key point coordinate p̃ corresponding to category C in the low-resolution image;
Step S5: coordinate key points using Gaussian kernels
Figure BDA0002731237890000034
The feature map is distributed on the low-resolution image to obtain a processed feature map corresponding to the input image;
the calculation formula of the gaussian kernel is:
Figure BDA0002731237890000041
wherein sigmapIs a standard deviation associated with the target size (i.e., W and H);
step S6: and repeating the steps S1 to S5 for each image in the data set to obtain the processed feature map corresponding to each image in the data set.
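Steps S2 to S5 above can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions: a single marking box, downsampling factor R = 4, and a fixed σ_p (the patent ties σ_p to the target size, which is not modelled here); the function name is ours.

```python
import numpy as np

def render_keypoint_heatmap(box, img_w, img_h, num_classes, cls, R=4, sigma=2.0):
    """Render one ground-truth keypoint as a Gaussian splat on a
    low-resolution heatmap, following steps S2-S5.  `box` is the
    annotation rectangle (x1, y1, x2, y2); R is the downsampling factor."""
    x1, y1, x2, y2 = box
    # S2: the keypoint is the centre of the annotation rectangle
    px, py = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    # S4: map the keypoint onto the low-resolution grid (floor of p / R)
    qx, qy = int(px // R), int(py // R)
    # S5: splat a Gaussian centred on (qx, qy) into the class channel
    hh, hw = img_h // R, img_w // R
    heatmap = np.zeros((num_classes, hh, hw), dtype=np.float32)
    ys, xs = np.mgrid[0:hh, 0:hw]
    heatmap[cls] = np.exp(-((xs - qx) ** 2 + (ys - qy) ** 2) / (2 * sigma ** 2))
    return heatmap, (qx, qy)
```

The heatmap peaks at exactly 1.0 on the mapped keypoint, which is what the loss function later relies on to identify actual keypoint positions.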
The fifth embodiment is different from the fourth embodiment in that: in step S3, performing low-resolution processing on the input image to obtain a low-resolution image; the method specifically comprises the following steps:
the input image is passed through 2 and 128 channels of 5 x 5 convolution modules, and 2 and 256 channels of 5 x 5 residual blocks, to obtain a low resolution image.
The sixth embodiment is different from the fifth embodiment in that: the loss function L_k of the CenterNet target detection model is:

L_k = −(1/N) · Σ_xyc of:
    (1 − Ŷ_xyc)^α · log(Ŷ_xyc),                  if Y_xyc = 1
    (1 − Y_xyc)^β · (Ŷ_xyc)^α · log(1 − Ŷ_xyc),  otherwise

wherein Y_xyc is the key point generated by the Gaussian kernel, Y_xyc = 1 indicates that the key point generated by the Gaussian kernel is at the actual key point position, Ŷ_xyc represents the predicted key point value output by the CenterNet target detection model, α and β are hyper-parameters of the loss function, x, y and c respectively represent the x coordinate of the key point, the y coordinate of the key point and the category of the key point, and N is the number of key points in the image.
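A minimal NumPy sketch of this loss. The defaults α = 2 and β = 4 are an assumption (values commonly used with CenterNet-style heatmap losses; the patent does not fix them), and the function name is ours.

```python
import numpy as np

def centernet_focal_loss(pred, gt, alpha=2.0, beta=4.0, eps=1e-12):
    """Pixel-wise focal loss over keypoint heatmaps.  `pred` and `gt` have
    shape (C, H, W); `gt` is the Gaussian-splatted ground truth, equal to 1
    exactly at annotated keypoint positions."""
    pos = gt == 1.0
    n = max(pos.sum(), 1)  # number of keypoints N (guard against 0)
    # positive term: down-weight already-confident predictions at keypoints
    pos_loss = ((1 - pred) ** alpha * np.log(pred + eps))[pos].sum()
    # negative term: (1 - Y)^beta softens the penalty near Gaussian peaks
    neg_loss = ((1 - gt) ** beta * pred ** alpha
                * np.log(1 - pred + eps))[~pos].sum()
    return -(pos_loss + neg_loss) / n
```

A perfect prediction drives the loss to zero, while a uniform, uninformative prediction is penalised heavily.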
The seventh embodiment and the sixth embodiment are different from the seventh embodiment in that: the CenterNet target detection model comprises a first Hourglass module, a second Hourglass module, a first Head module and a second Head module, wherein the output of the first Hourglass module is input into the first Head module, the output of the second Hourglass module is input into the second Head module, the output of the first Hourglass module needs to be processed by a convolution module with 128 channels of 5 multiplied by 5 before being input into the first Head module, and the output of the second Hourglass module needs to be processed by a convolution module with 128 channels of 5 multiplied by 5 before being input into the second Head module;
the Hourglass module comprises a 5 multiplied by 5 convolution module, the 5 multiplied by 5 convolution module is used for replacing a 3 multiplied by 3 convolution module, and three continuous linear operations of Conv-BN-Scale are fused into a fusion layer; the Head module comprises a corner pool module which is used for replacing a 3 multiplied by 3 convolution module.
Compared with a 3 × 3 depth-wise convolution, a 5 × 5 depth-wise convolution obtains twice the receptive field while adding only a very small amount of computation, so all depth-wise convolutions in the network are replaced by 5 × 5 convolutions to enhance model performance.
Three consecutive linear operations Conv-BN-Scale in the network are fused into one layer, which reduces the model size and speeds up inference.
The eighth embodiment and the seventh embodiment are different from the seventh embodiment in that: the clipped sub-picture is down-sampled to obtain a down-sampled low-resolution picture, and the down-sampling process is performed in step S3.
The ninth embodiment is different from the seventh embodiment in that: in step six, after the obtained low-resolution image is input into the trained CenterNet target detection model, the key points with faults in the low-resolution image are obtained, and the obtained key points are compensated to obtain the compensated key points; the specific process comprises the following steps:
for a pixel point in the low-resolution image, if its predicted key point value Ŷ_xyc is greater than or equal to the predicted key point values of the other 8 pixel points in the 3 × 3 neighbourhood centred on it, the pixel point is taken as a key point; after traversing all pixel points of the low-resolution image, all key points in the image are obtained. The corner pooling module then selects 100 key points of the low-resolution image from all key points, the hot spot of the fault category corresponding to each key point is extracted separately, the probability that each key point represents a fault is calculated from the extracted hot spots, and the key points whose fault probability exceeds the threshold Q are screened out from the 100 key points;
and compensating the screened key points to obtain compensated key points.
Preferably, the value of the threshold Q in the present invention is 0.85.
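The selection in this step amounts to a 3 × 3 max-filter comparison followed by top-k and thresholding. A NumPy illustration under stated assumptions (the function name is ours, and the per-class "hot spot" probability is taken directly from the heatmap score):

```python
import numpy as np

def extract_keypoints(heatmap, k=100, threshold=0.85):
    """Keep a pixel as a keypoint only if its score is >= the other 8 pixels
    in its 3x3 neighbourhood (max-pool NMS), then take the top-k scores and
    drop those at or below the fault-probability threshold Q."""
    c, h, w = heatmap.shape
    padded = np.pad(heatmap, ((0, 0), (1, 1), (1, 1)),
                    constant_values=-np.inf)
    # 3x3 neighbourhood maximum at every pixel (includes the pixel itself)
    neigh = np.stack([padded[:, dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)]).max(axis=0)
    scores = np.where(heatmap >= neigh, heatmap, 0.0).ravel()
    order = np.argsort(scores)[::-1][:k]          # top-k candidate points
    keep = order[scores[order] > threshold]       # screen by threshold Q
    cls, ys, xs = np.unravel_index(keep, (c, h, w))
    return list(zip(cls, ys, xs, scores[keep]))
```

A point adjacent to a stronger peak is suppressed by the neighbourhood comparison, so each fault yields a single detection.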
The tenth embodiment, which is different from the ninth embodiment, is that: the method comprises the following steps of utilizing compensated key points to identify the faults of the skirt board, and specifically comprising the following steps:
and taking the fault type corresponding to the compensated key point as a fault detection result, and taking the position of the compensated key point as a detected fault position.
And mapping the detected fault position to the acquired original image to be detected to obtain the position of the fault in the original image, uploading the position information of the fault in the original image to an alarm platform, and displaying the fault on a display interface.
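Mapping a detected fault position back to the acquired original image, as described above, amounts to inverting the offset compensation, the downsampling and the sub-image crop. A small sketch; `crop_origin` and the offset argument are illustrative names, not terms from the patent:

```python
def map_to_original(kp_xy, offset, R=4, crop_origin=(0, 0)):
    """Map a keypoint detected on the low-resolution map back to the
    original full image: apply the predicted sub-pixel offset, undo the
    downsampling factor R, then undo the sub-image crop.  `crop_origin`
    is the top-left corner of the cropped skirt-board region."""
    x, y = kp_xy
    ox, oy = offset
    cx, cy = crop_origin
    return ((x + ox) * R + cx, (y + oy) * R + cy)
```

The result is the pixel position to upload to the alarm platform.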
Examples
1. Establishing a sample data set
The method comprises the following steps: linear array image acquisition
A camera or video camera mounted on fixed equipment photographs the high-speed train travelling at high speed, acquiring a high-definition grayscale image of the whole vehicle.
Step two: coarse positioning
The skirt board part is cut from the whole-vehicle image according to prior knowledge such as the hardware wheel-base information and the position of the skirt board, and the image size is adjusted to reduce the subsequent computation and improve detection speed.
Step three: data amplification
Since the skirt boards carry grilles, cover plates, nuts and other parts of varying sizes and styles, skirt boards of different vehicle types have their own characteristics, and the positive samples (skirt board fault images) and negative samples (normal skirt board images) are unbalanced. To further improve the robustness of the algorithm, the original data set therefore needs to be augmented by flipping, zooming, translating and other operations on the images.
Step four: data marking
And marking the data set subjected to the image amplification processing to obtain marking files corresponding to the images of the data set one by one, and taking the marking files as the data set of the training deep learning network model.
2. Skirt deformation and damage fault identification
In the whole training process, a Hourglass network structure is selected as the backbone of the CenterNet target detection model; the training process, shown in fig. 2, is as follows:
step 1: the skirt image is taken as input.
Step 2: for each category C of each labeled image, calculate the real key points used for training according to formula (1), wherein (x1, y1) and (x2, y2) are the coordinates of the upper left corner and the lower right corner of the marking frame, respectively:

p = ( (x1 + x2)/2 , (y1 + y2)/2 )        (1)
Step 3: image pre-processing is performed to reduce the image resolution by a factor of 4, using two 128-channel 5 × 5 convolution modules and two 256-channel 5 × 5 residual blocks.
Step 4: calculate the key point coordinates at low resolution:

p̃ = ⌊ p / R ⌋

Step 5: mark the image, wherein W and H are the width and height of the image, respectively, and distribute the key points on a feature map using the Gaussian kernel

Y_xyc = exp( −((x − p̃_x)² + (y − p̃_y)²) / (2σ_p²) )

where σ_p is a standard deviation associated with the target size (i.e., W and H).
Step 6: the model is trained using two Hourglass modules in series.
Step 7: the pixel-wise cross entropy is combined with the Focal Loss for training, as shown in formula (2); the loss is normalized by dividing by the number N of key points contained in the image, wherein α and β are hyper-parameters of the Focal Loss, Y_xyc is the key point generated by the Gaussian kernel, Y_xyc = 1 indicates an actual key point position, and Ŷ_xyc represents the predicted key point value:

L_k = −(1/N) · Σ_xyc of:
    (1 − Ŷ_xyc)^α · log(Ŷ_xyc),                  if Y_xyc = 1
    (1 − Y_xyc)^β · (Ŷ_xyc)^α · log(1 − Ŷ_xyc),  otherwise        (2)
Since downsampling the image introduces a quantization error when the feature map is remapped to the original image, an additional local offset is predicted for each center point to compensate for it. The center points of all classes share the same offset prediction, which is trained with an L1 loss, where Ô_p̃ is the predicted offset and R is the downsampling factor 4:

L_off = (1/N) · Σ_p | Ô_p̃ − ( p/R − p̃ ) |
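The offset compensation target and its L1 loss can be written out directly: the target is the fractional part lost when a full-resolution keypoint p is mapped onto the R-times smaller grid. A NumPy sketch under the same R = 4 (function names are ours):

```python
import numpy as np

def offset_target(p, R=4):
    """Ground-truth local offset for a keypoint at full-resolution
    position p: p/R - floor(p/R), the quantization error of downsampling."""
    p = np.asarray(p, dtype=np.float64)
    return p / R - np.floor(p / R)

def offset_l1_loss(pred_offsets, gt_offsets):
    """L1 loss over the predicted offsets at the N keypoint locations."""
    pred = np.asarray(pred_offsets, dtype=np.float64)
    gt = np.asarray(gt_offsets, dtype=np.float64)
    return np.abs(pred - gt).sum() / max(len(pred), 1)
```

At detection time, adding the predicted offset back before multiplying by R recovers the sub-pixel keypoint position.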
Step 8: output.
The network structure of the Hourglass module is shown in FIG. 3:
the Hourglass module is used for applying more convolutions on an original network, branching the network during each maximum pooling operation, separating out the upper half path before each down-sampling and reserving the original scale information, and adding the data of the previous scale after each up-sampling. The method comprises the steps that two Hourglass modules are used in common, 1 multiplied by 1 convolution is added before and after the first Hourglass module, elements are added, the added elements pass through a ReLU activation function and a residual error module, the output of the residual error module is sent to the second Hourglass module, the output results of the first Hourglass module and the second Hourglass module are input to the corresponding Head modules respectively, and the output of the two Head modules is used as the output result.
The Head module replaces the 3 × 3 convolution module in the residual block with a corner pooling module. The backbone features (i.e., the output of the Hourglass module) are first processed by two 128-channel 5 × 5 convolution modules before entering the Head module, and the corner pooling module is then applied. Its output is fed into a 256-channel layer fused from the 5 × 5 Conv layer, BN layer and Scale layer, following the residual-block design of the Head module. The modified residual block, a 256-channel 5 × 5 convolution module plus 3 fused layers, generates the three output branches: the thermodynamic diagram (heatmap), the embedding, and the offset. A Head module is attached to the output of each of the two Hourglass modules; the losses of the Hourglass outputs are computed and added to the final output loss, the network back-propagates according to this loss function, and training stops when the loss function reaches the expected value and stabilizes.
In CenterNet target detection, which regresses on thermodynamic diagrams (heatmaps), the receptive field of the model is exceptionally important. It can be calculated that a 5 × 5 depth-wise convolution obtains twice the receptive field of a 3 × 3 depth-wise convolution while adding only a very small amount of computation, so all depth-wise convolutions in the network are replaced by 5 × 5 convolutions to enhance model performance. Because no ImageNet-pretrained 5 × 5 model is available, the convolution kernels of a 3 × 3 pretrained model are zero-padded to produce the larger 5 × 5 kernels.
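Zero-padding a pretrained 3 × 3 kernel into a 5 × 5 kernel is a one-liner; with the layer's padding raised from 1 to 2, the expanded layer initially reproduces the 3 × 3 layer's output exactly (the function name is ours):

```python
import numpy as np

def expand_kernel_3x3_to_5x5(w3):
    """Turn a pretrained 3x3 kernel of shape (C_out, C_in, 3, 3) into a
    5x5 kernel by zero-padding one ring of zeros around each filter."""
    return np.pad(w3, ((0, 0), (0, 0), (1, 1), (1, 1)))
```

The added zero ring contributes nothing at initialization and is free to learn during fine-tuning.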
The BN layer adds several extra operations during forward inference, which affects model performance and occupies additional memory or video memory. Therefore, to reduce model size and speed up inference, the three consecutive linear operations Conv-BN-Scale in the network are fused into one layer: the convolution kernel is scaled by a certain factor and the bias is shifted accordingly. This does not increase the computation of the convolution layer, while the entire computation of the BN layer is saved; the parameter count drops by about 5% and speed increases by about 5 to 10% without affecting accuracy.
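The Conv-BN-Scale fusion described above folds the normalization's per-channel scale and shift into the convolution's own weights and bias. A NumPy sketch, verified below on a 1 × 1 convolution, which acts as a plain per-pixel linear map (the function name is ours; γ and β stand for the Scale layer's parameters):

```python
import numpy as np

def fuse_conv_bn(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold a BatchNorm/Scale that follows a convolution into the
    convolution itself, so Conv-BN-Scale collapses to one layer at
    inference.  W has shape (C_out, C_in, kH, kW); b, gamma, beta,
    mean, var all have shape (C_out,)."""
    scale = gamma / np.sqrt(var + eps)      # per-output-channel multiplier
    W_f = W * scale[:, None, None, None]    # rescale each output filter
    b_f = beta + (b - mean) * scale         # shift the bias accordingly
    return W_f, b_f
```

Because BN at inference time is a fixed affine transform per channel, the fused layer is mathematically identical to the original pair while removing the BN computation entirely.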
3. Skirt board deformation and damage fault determination
In the detection process, the skirt board image is first down-sampled, and the down-sampled image is then fed to the model for prediction. For each class, center points are predicted on the down-sampled feature map, and a point is kept only if its value is greater than or equal to its eight surrounding neighbours (eight directions). A pooling layer then takes 100 points, and the hot spot of each class in the output map is extracted separately. Finally, the candidates are screened by the probability that an object exists at the current center point, and the center points among the 100 selected results whose probability exceeds the threshold are taken as the final result.
4. Fault handling
After the fault is identified, the position of the fault in the original image is calculated through the mapping relation from the sub-image to the large image and from the large image to the original image, the fault component information is uploaded to an alarm platform, and the fault display is carried out on a display interface.
The above-described calculation examples of the present invention are merely to explain the calculation model and the calculation flow of the present invention in detail, and are not intended to limit the embodiments of the present invention. It will be apparent to those skilled in the art that other variations and modifications of the present invention can be made based on the above description, and it is not intended to be exhaustive or to limit the invention to the precise form disclosed, and all such modifications and variations are possible and contemplated as falling within the scope of the invention.

Claims (10)

1. A method of vehicle skirt fault detection, the method comprising the steps of:
step one, obtaining a vehicle linear array image;
step two, cutting out an image of the area where the apron board part is located from the image obtained in the step one;
marking the position of the apron board part, the apron board fault position and the apron board fault category in the area image of the apron board part to obtain a marking file corresponding to the area image of the apron board part; forming a data set by the area image of the apron board part and the corresponding mark file;
step four, respectively carrying out feature extraction processing on each image in the data set to obtain a corresponding processed feature map;
inputting the processed feature map and the marking file into a CenterNet target detection model, and training the CenterNet target detection model to obtain a trained CenterNet target detection model, wherein the CenterNet target detection model takes a Hourglass module as a main network;
collecting an original image to be detected, cutting a subgraph of the area where the apron board part is located from the original image to be detected, and then performing down-sampling processing on the cut subgraph to obtain a down-sampled low-resolution image;
inputting the obtained low-resolution image into a trained CenterNet target detection model, and obtaining key points with faults in the low-resolution image; compensating the obtained key points to obtain compensated key points;
and identifying the failure of the skirt board by using the compensated key points.
2. The method for detecting the faults of the skirt panels of the vehicle as claimed in claim 1, wherein the specific process of the second step is as follows:
and cutting out an image of the area where the skirt part is located from the image obtained in the step one according to the wheel base information of the hardware for shooting the vehicle and the position prior knowledge of the skirt.
3. The method as claimed in claim 1, wherein the image of the area of the skirt panel part needs to be augmented before marking the position and the failure category of the skirt panel part in the image of the area of the skirt panel part, and the augmentation includes flipping, zooming and translating.
4. The method for detecting the faults of the skirt panels of the vehicle as claimed in claim 1, wherein the specific process of the fourth step is as follows:
step S1: taking any image in the data set as an input image;
step S2: for any fault category marked in the input image, the fault category is expressed as a category C, and the key point coordinate p of the category C is calculated as:
p = ( (x1 + x2)/2 , (y1 + y2)/2 )

wherein (x1, y1) and (x2, y2) respectively represent the vertex coordinates of the upper left corner and the lower right corner of the rectangular marking frame of category C;
step S3: processing the input image to obtain a low-resolution image;
step S4: calculating the corresponding key point coordinate p̃ of category C in the low-resolution image according to the key point coordinate p of category C calculated in step S2;
step S5: distributing the key point coordinate p̃ on the low-resolution image by using a Gaussian kernel to obtain a processed feature map corresponding to the input image;
step S6: and repeating the steps S1 to S5 for each image in the data set to obtain the processed feature map corresponding to each image in the data set.
5. The method according to claim 4, wherein in step S3, the input image is processed to obtain a low resolution image; the method specifically comprises the following steps:
the input image is passed through 2 and 128 channels of 5 x 5 convolution modules, and 2 and 256 channels of 5 x 5 residual blocks, to obtain a low resolution image.
6. The method for detecting faults of vehicle skirt panels as claimed in claim 5, wherein the loss function Lk of the CenterNet target detection model is:
Lk = (−1/N) · Σxyc { (1 − Ŷxyc)^α · log(Ŷxyc),                 if Yxyc = 1
                     (1 − Yxyc)^β · (Ŷxyc)^α · log(1 − Ŷxyc),  otherwise }
wherein Yxyc is the key point generated with the Gaussian kernel, Ŷxyc denotes the key-point predicted value output by the CenterNet target detection model, α and β are hyper-parameters of the loss function, x, y and c respectively denote the x coordinate of the key point, the y coordinate of the key point and the category of the key point, and N is the number of key points in the image.
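As an illustration, the loss above (the standard CenterNet penalty-reduced pixel-wise focal loss) might be computed as follows. Representing the heat maps as nested lists indexed [c][y][x] is an assumption of this sketch, not part of the claim.

```python
import math

def centernet_loss(Y, Y_hat, alpha=2.0, beta=4.0):
    # Y     : Gaussian-splatted ground-truth heat maps, indexed [c][y][x]
    # Y_hat : predicted heat maps of the same shape
    # N is the number of key points, i.e. pixels where Y == 1.
    n, total = 0, 0.0
    for Yc, Yc_hat in zip(Y, Y_hat):
        for Yr, Yr_hat in zip(Yc, Yc_hat):
            for y, y_hat in zip(Yr, Yr_hat):
                if y == 1.0:
                    n += 1
                    total += (1 - y_hat) ** alpha * math.log(y_hat)
                else:
                    # (1 - Y)^beta down-weights pixels near a key point
                    total += (1 - y) ** beta * y_hat ** alpha * math.log(1 - y_hat)
    return -total / max(n, 1)
```

A confident, correct prediction at the key point yields a loss close to zero, while an uncertain prediction is penalised more heavily.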
7. The method as claimed in claim 6, wherein the CenterNet target detection model comprises a first Hourglass module, a second Hourglass module, a first Head module and a second Head module; the output of the first Hourglass module is input into the first Head module, and the output of the second Hourglass module is input into the second Head module; the output of the first Hourglass module is processed by a 5 × 5 convolution module with 128 channels before being input into the first Head module, and the output of the second Hourglass module is likewise processed by a 5 × 5 convolution module with 128 channels before being input into the second Head module;
the Hourglass modules replace the 3 × 3 convolution modules with 5 × 5 convolution modules, and fuse the three consecutive linear operations Conv-BN-Scale into a single fusion layer; the Head modules replace a 3 × 3 convolution module with a corner pooling module.
8. The method of claim 7, wherein in step six the cropped sub-image is down-sampled to obtain the low-resolution image, the down-sampling being the same process as in step S3.
9. The method for detecting faults of vehicle skirt panels as claimed in claim 7, wherein in step six, after the obtained low-resolution image is input into the trained CenterNet target detection model, the key points of the faults existing in the low-resolution image are obtained, and the obtained key points are compensated to obtain compensated key points; the specific process is as follows:
for a pixel point in the low-resolution image, if its key-point predicted value Ŷxyc is greater than or equal to the key-point predicted values of the other 8 pixel points in the 3 × 3 neighbourhood centred on that pixel point, the pixel point is taken as a key point; after all pixel points in the low-resolution image have been traversed, all key points in the image are obtained; then 100 key points are selected from all the key points by the corner pooling module, the heat map of the fault category corresponding to each key point is extracted separately, the probability value that a fault exists at each key point is calculated from the extracted heat maps, and the key points whose fault probability value is greater than a threshold Q are screened out from the 100 key points;
the screened key points are compensated to obtain the compensated key points.
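The 3 × 3 neighbourhood peak test described above can be sketched as follows. The threshold value and the single-category heat-map layout are illustrative assumptions; the corner-pooling selection of the 100 key points and the compensation step are omitted.

```python
def extract_keypoints(heatmap, threshold=0.5):
    # A pixel is kept as a key point if its predicted value is greater
    # than or equal to the values of the other pixels in its 3x3
    # neighbourhood (clipped at the image border) and its fault
    # probability exceeds the threshold Q.
    h, w = len(heatmap), len(heatmap[0])
    peaks = []
    for y in range(h):
        for x in range(w):
            v = heatmap[y][x]
            neighbours = [heatmap[yy][xx]
                          for yy in range(max(0, y - 1), min(h, y + 2))
                          for xx in range(max(0, x - 1), min(w, x + 2))
                          if (yy, xx) != (y, x)]
            if all(v >= nv for nv in neighbours) and v > threshold:
                peaks.append((x, y, v))
    return peaks
```

On a heat map with one clear local maximum above the threshold, only that pixel survives; in practice this scan is usually implemented as a 3 × 3 max-pooling followed by an equality comparison.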
10. The method as claimed in claim 9, wherein the skirt panel fault is identified by using the compensated key points, specifically:
the fault category corresponding to a compensated key point is taken as the fault detection result, and the position of the compensated key point is taken as the detected fault position.
CN202011118748.3A 2020-10-19 2020-10-19 Vehicle apron board fault detection method Active CN112233096B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011118748.3A CN112233096B (en) 2020-10-19 2020-10-19 Vehicle apron board fault detection method

Publications (2)

Publication Number Publication Date
CN112233096A true CN112233096A (en) 2021-01-15
CN112233096B CN112233096B (en) 2021-11-12

Family

ID=74118877

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011118748.3A Active CN112233096B (en) 2020-10-19 2020-10-19 Vehicle apron board fault detection method

Country Status (1)

Country Link
CN (1) CN112233096B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112906534A (en) * 2021-02-07 2021-06-04 哈尔滨市科佳通用机电股份有限公司 Lock catch loss fault detection method based on improved Faster R-CNN network
CN113643258A (en) * 2021-08-12 2021-11-12 哈尔滨市科佳通用机电股份有限公司 Method for detecting loss fault of skirt board at side part of train based on deep learning
CN113822277A (en) * 2021-11-19 2021-12-21 万商云集(成都)科技股份有限公司 Illegal advertisement picture detection method and system based on deep learning target detection

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110543838A (en) * 2019-08-19 2019-12-06 上海光是信息科技有限公司 Vehicle information detection method and device
CN110765906A (en) * 2019-10-12 2020-02-07 上海雪湖科技有限公司 Pedestrian detection algorithm based on key points
CN111080613A (en) * 2019-12-12 2020-04-28 哈尔滨市科佳通用机电股份有限公司 Image recognition method for damage fault of wagon bathtub
CN111079627A (en) * 2019-12-12 2020-04-28 哈尔滨市科佳通用机电股份有限公司 Railway wagon brake beam body breaking fault image identification method
CN111091553A (en) * 2019-12-12 2020-05-01 哈尔滨市科佳通用机电股份有限公司 Method for detecting loss of blocking key
CN111091555A (en) * 2019-12-12 2020-05-01 哈尔滨市科佳通用机电股份有限公司 Brake shoe breaking target detection method
CN111222562A (en) * 2020-01-02 2020-06-02 南京邮电大学 Space self-attention mechanism and target detection method
CN111401282A (en) * 2020-03-23 2020-07-10 上海眼控科技股份有限公司 Target detection method, target detection device, computer equipment and storage medium
CN111523486A (en) * 2020-04-24 2020-08-11 重庆理工大学 Mechanical arm grabbing detection method based on improved CenterNet
CN111598843A (en) * 2020-04-24 2020-08-28 国电南瑞科技股份有限公司 Power transformer respirator target defect detection method based on deep learning
CN111640089A (en) * 2020-05-09 2020-09-08 武汉精立电子技术有限公司 Defect detection method and device based on feature map center point
CN111639530A (en) * 2020-04-24 2020-09-08 国网浙江宁海县供电有限公司 Detection and identification method and system for power transmission tower and insulator of power transmission line
CN111652296A (en) * 2020-05-21 2020-09-11 哈尔滨市科佳通用机电股份有限公司 Deep learning-based rail wagon lower pull rod fracture fault detection method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Hei Law et al.: "CornerNet: Detecting Objects as Paired Keypoints", arXiv *
Push_: "Detailed interpretation of the CornerNet network structure", https://blog.csdn.net/weixin_42150026/article/details/103380443 *
Xingyi Zhou et al.: "Objects as Points", arXiv *

Also Published As

Publication number Publication date
CN112233096B (en) 2021-11-12

Similar Documents

Publication Publication Date Title
CN112233096B (en) Vehicle apron board fault detection method
CN113674247B (en) X-ray weld defect detection method based on convolutional neural network
CN114445706A (en) Power transmission line target detection and identification method based on feature fusion
CN110633661A (en) Semantic segmentation fused remote sensing image target detection method
CN111680746B (en) Vehicle damage detection model training, vehicle damage detection method, device, equipment and medium
CN109543665B (en) Image positioning method and device
US11410409B2 (en) Image classification system and method, classification data creation system and method, and storage medium
CN110992349A (en) Underground pipeline abnormity automatic positioning and identification method based on deep learning
CN112906534A (en) Lock catch loss fault detection method based on improved Faster R-CNN network
CN111597941B (en) Target detection method for dam defect image
CN111079822A (en) Method for identifying dislocation fault image of middle rubber and upper and lower plates of axle box rubber pad
EP3605453A1 (en) Convolutional neural network based inspection of blade-defects of a wind turbine
CN111079518A (en) Fall-down abnormal behavior identification method based on scene of law enforcement and case handling area
CN115222697A (en) Container damage detection method based on machine vision and deep learning
JP2010071826A (en) Teacher data preparation method, and image sorting method and image sorter
CN112669301B (en) High-speed rail bottom plate paint removal fault detection method
CN114708532A (en) Monitoring video quality evaluation method, system and storage medium
CN113221839A (en) Automatic truck image identification method and system
CN113592839A (en) Distribution network line typical defect diagnosis method and system based on improved fast RCNN
CN111784667A (en) Crack identification method and device
CN110363198B (en) Neural network weight matrix splitting and combining method
CN115546223A (en) Method and system for detecting loss of fastening bolt of equipment under train
CN115861922A (en) Sparse smoke and fire detection method and device, computer equipment and storage medium
KR20230036650A (en) Defect detection method and system based on image patch
Fujita et al. Fine-tuned Surface Object Detection Applying Pre-trained Mask R-CNN Models

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant