CN111091553A - Method for detecting loss of blocking key - Google Patents


Info

Publication number
CN111091553A
CN111091553A (application CN201911278037.XA)
Authority
CN
China
Prior art keywords
dense
output
convolution
unit
feature map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911278037.XA
Other languages
Chinese (zh)
Inventor
孙晶 (Sun Jing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Kejia General Mechanical and Electrical Co Ltd
Original Assignee
Harbin Kejia General Mechanical and Electrical Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Kejia General Mechanical and Electrical Co Ltd filed Critical Harbin Kejia General Mechanical and Electrical Co Ltd
Priority to CN201911278037.XA priority Critical patent/CN111091553A/en
Publication of CN111091553A publication Critical patent/CN111091553A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T7/0008Industrial image inspection checking presence/absence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/94Hardware or software architectures specially adapted for image or video understanding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping

Abstract

A method for detecting loss of a stop key belongs to the technical field of freight train detection. The invention aims to solve the low efficiency and low accuracy of manual image inspection for stop key loss, and the loss of accuracy that environmental factors cause in image-based detection. The method trains a neural network model on region-of-interest images containing the stop key to obtain the trained model weights and converts the model precision from 32-bit to 16-bit; it then acquires a passing-train image, crops a region of interest containing the stop key, loads the converted 16-bit model weights, performs neural network prediction, and judges the fault from the obtained stop key coordinate information using hand-crafted prior rules. The method is mainly used for detecting stop key loss faults.

Description

Method for detecting loss of blocking key
Technical Field
The invention relates to a method for detecting loss of a stop key, and belongs to the technical field of freight train detection.
Background
To ensure the safe operation of a railway freight train, the railway freight department must frequently inspect and repair its vital parts. The bogie stop key of a railway wagon prevents the wheels from separating from the bogie; it is an important part that affects the normal running of the wagon, and its inspection is critical for freight trains. To improve the efficiency of detecting stop key loss faults, images of the freight train can be collected and the wagon parts inspected from those images. At present, stop key loss is checked by manual image inspection, which is costly and inefficient; because the stop key is a small part, missed detections and false alarms occur frequently, and accuracy cannot be guaranteed.
More importantly, since some of the inspected parts are located on the outside of the wagon, they are heavily affected by the environment, which greatly degrades image-based detection accuracy. For example, weather and wind-blown sand can cover train components with dust, smear them, or coat them with ice and snow, and dim lighting can darken the images, making components hard to distinguish and lowering detection accuracy. At the same time, the stop key is small, which already makes detection difficult; environmental influences further degrade the detection of stop key loss and reduce accuracy.
Disclosure of Invention
The invention aims to solve the low efficiency and low accuracy of manual image inspection for stop key loss, and the loss of accuracy that environmental factors cause in image-based detection.
A method for detecting loss of a blocking key comprises the following steps:
training a neural network model on region-of-interest images containing the stop key to obtain the trained model weights, and converting the model precision from 32-bit to 16-bit;
acquiring a passing-train image, cropping a region of interest containing the stop key, loading the converted 16-bit model weights, performing neural network prediction, and judging the fault from the obtained stop key coordinate information using hand-crafted prior rules.
Further, the process of training the model of the neural network using the image of the region of interest containing the bar key includes the steps of:
s1, acquiring a vehicle passing image;
s2, cutting out the region of interest from the passing vehicle image according to the wheel base information of the hardware and the position of the stop key;
s3, dataset image preprocessing, comprising the steps of:
according to the image of the region of interest of the stop key, establishing a stop key image set containing ice, snow and normal conditions; marking the data set to obtain coordinate information of the stop key in the training sample;
s4, normalizing the training samples in the training set to 256 × 256 and inputting them into the built neural network model; training yields the model weights of the trained neural network and thus the trained model.
Further, before the data set is labeled in step s3, a data augmentation operation is performed on the data set; the augmented images are combined into a new training data set, which is then labeled.
Further, the neural network model comprises a reference feature extraction network and a multi-scale feature extraction network;
the reference feature extraction network comprises a convolution-pooling unit and 4 Dense Block modules; the layers inside each Dense Block are connected in a dense cascade, and a Concat unit between every two Dense Blocks performs channel dimension reduction;
4 Dense Block modules are marked as Dense Block1, Dense Block2, Dense Block3 and Dense Block 4;
Dense Block1 contains six groups of convolution units, each containing a 1 × 1 and a 3 × 3 convolution. The output of each group is fed as input to every later group: the first group feeds the second through sixth groups, the second feeds the third through sixth, the third feeds the fourth through sixth, the fourth feeds the fifth and sixth, and the fifth feeds the sixth. The outputs of all six groups are passed into a Concat unit;
the Concat unit contains a convolution of 1 x 1 and an average pooling of 2 x 2;
dense Block2 is identical to Dense Block 1;
dense Block3 is identical to Dense Block 1;
dense Block4 is identical to Dense Block 1;
extracting multi-scale features to obtain 4 sub-scale feature units;
the fourth sub-scale feature unit is obtained by applying a 3 × 3 convolution to the output of Dense Block4;
the third sub-scale feature unit is obtained by adding two feature maps element-wise and then applying a 3 × 3 convolution; the first is the output of the Dense Block3 unit after a 1 × 1 convolution, and the second is the output of the Dense Block4 unit upsampled by a factor of 2;
the second sub-scale feature unit is obtained by adding two feature maps element-wise and then applying a 3 × 3 convolution; the first is the output of the Dense Block2 unit after a 1 × 1 convolution, and the second is the element-wise sum of two sub-parts, upsampled by a factor of 2: the first sub-part is the output of the Dense Block3 unit after a 1 × 1 convolution, and the second sub-part is the output of the Dense Block4 unit upsampled by a factor of 2;
the first sub-scale feature unit is obtained by adding two feature maps element-wise and then applying a 3 × 3 convolution; the first is the output of the Dense Block1 unit after a 1 × 1 convolution, and the second is the element-wise sum of two sub-parts, upsampled by a factor of 2: the first sub-part is the output of the Dense Block2 unit after a 1 × 1 convolution, and the second sub-part is the output of the Dense Block4 unit upsampled by a factor of 2 and added element-wise to the output feature map of Dense Block3.
Further, the loss function of the model is as follows:
$$L(x, c, l, g) = \frac{1}{N}\left(L_{conf}(x, c) + \alpha\, L_{loc}(x, l, g)\right)$$
The total loss function is the weighted sum of the classification error $L_{conf}(x, c)$ and the regression error $L_{loc}(x, l, g)$, where c is the confidence, l is the predicted box, g is the ground-truth box, α is the weight, N is the number of matched default boxes, and x indicates whether a designed feature capture box is matched to a corresponding target.
Further, the convolution kernel of the convolution layer in the convolution-pooling unit is 7 × 7, and the pooling layer is the maximum pooling of 3 × 3.
Advantageous effects:
1. Compared with manual image inspection for stop key loss, the detection method of the invention not only automates detection and greatly improves efficiency, but also achieves very high detection accuracy.
2. The invention can detect stop key loss under severe natural weather, retaining very high accuracy even under the influence of harsh weather and other environmental factors.
3. The method uses a dense neural network that fuses more low-level features, giving very high accuracy, flexibility, and robustness. Its detection accuracy for stop key loss can exceed 99%, with a missed-detection rate of almost 0.
4. With acceleration by an inference engine, stop key loss faults can be identified in real time.
Drawings
FIG. 1 is a schematic overall flow diagram;
FIG. 2 is an image of a portion of a data set;
FIG. 3 is a schematic diagram of a baseline feature extraction network;
fig. 4 is a schematic diagram of multi-scale feature extraction.
Detailed Description
The first embodiment is as follows: this embodiment is described in detail with reference to fig. 1.
The method for detecting loss of a stop key according to this embodiment includes the following steps:
1. Line-scan image acquisition
A camera is mounted on fixed equipment and photographs the truck moving at high speed, capturing images of both sides of the train. For a line-scan camera, the shooting frequency is set according to the moving speed of the object and shooting is continuous; the strip images so captured are seamlessly stitched into a complete two-dimensional image with a large field of view and high precision, i.e., an image of the whole train.
2. Coarse positioning
An area of interest is cropped from the whole-train image using prior knowledge such as the hardware wheel-base information and the position of the stop key, which reduces computation and speeds up recognition; the image of the cropped region contains the stop key;
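The coarse positioning above is essentially a fixed crop of the stitched whole-train image. A minimal NumPy sketch, where the offsets `x0`, `y0` and the window size stand in for values that would really be derived from the wheel-base priors (the numbers here are illustrative, not from the patent):

```python
import numpy as np

def crop_roi(train_image: np.ndarray, x0: int, y0: int, w: int, h: int) -> np.ndarray:
    """Crop a region of interest expected to contain the stop key.

    x0, y0, w, h would come from prior knowledge (wheel-base data and
    the known mounting position of the stop key); here they are plain
    arguments.
    """
    # Clamp to the image bounds so a slightly-off prior never raises.
    y1 = min(y0 + h, train_image.shape[0])
    x1 = min(x0 + w, train_image.shape[1])
    return train_image[y0:y1, x0:x1]

# A fake 2000 x 8000 grayscale "whole train" image.
full = np.zeros((2000, 8000), dtype=np.uint8)
roi = crop_roi(full, x0=3100, y0=1200, w=400, h=300)
```

Cropping first means the detector only ever sees a small window, which is what makes the later recognition step fast.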
3. dataset image pre-processing
The method comprises the following steps: building a raw data set
According to the coarse positioning, region-of-interest images of the stop key are acquired and a data set is built containing stop keys under ice, snow, and normal conditions, as shown in fig. 2. Ice and snow can generally hang on this location only when the stop key is present, so collecting ice and snow images improves detection accuracy and reduces the missed-alarm and false-alarm rates;
step two: and performing data amplification operation on the data set by adopting image processing modes such as contrast enhancement, random scaling and the like.
Because of factors such as train speed and outdoor illumination, the line-scan images of a moving truck can vary in contrast and be stretched. Contrast enhancement and random scaling address these specific problems of automatic component-image recognition, while also producing more training samples and improving the robustness of the model.
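The two augmentations named above can be sketched in plain NumPy; the linear contrast stretch and nearest-neighbour rescale below are simple illustrative choices (the patent does not specify the exact transforms or their parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def enhance_contrast(img: np.ndarray, gain: float) -> np.ndarray:
    """Linear contrast stretch about the image mean (one of many options)."""
    mean = img.mean()
    out = (img.astype(np.float32) - mean) * gain + mean
    return np.clip(out, 0, 255).astype(np.uint8)

def random_scale(img: np.ndarray, lo: float = 0.8, hi: float = 1.2) -> np.ndarray:
    """Nearest-neighbour rescale by a random factor, mimicking image stretch."""
    s = rng.uniform(lo, hi)
    h = max(1, int(round(img.shape[0] * s)))
    w = max(1, int(round(img.shape[1] * s)))
    rows = (np.arange(h) / s).astype(int).clip(0, img.shape[0] - 1)
    cols = (np.arange(w) / s).astype(int).clip(0, img.shape[1] - 1)
    return img[rows][:, cols]

img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
aug = [enhance_contrast(img, 1.5), random_scale(img)]
```

Each augmented copy is then labeled and folded into the new training set described in step three.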
Step three: data marking
The augmented data form a new training data set, which is labeled (manually) to obtain the coordinate information of the stop key in each training sample.
For the images of ice and snow, the coordinate information of the stop keys can be labeled manually;
4. fault target detection
Because stop key loss must be detected under severe conditions such as rain, snow, and ice, a regression-based object detection and recognition method is adopted: automatic truck inspection places high demands on real-time performance and precision, and detection accuracy must be maintained for stop key loss under severe conditions.
The neural network model comprises a reference feature extraction network and a multi-scale feature extraction network;
reference feature extraction network:
the benchmark feature extraction network, as shown in fig. 3, ensures the relation between network layers on the basis of deepening the network layer number, accelerates the training speed of the network, greatly improves the accuracy of the model, solves the problem of feature loss of the common serial deep feedforward network on the premise of not increasing the computational complexity, and fully utilizes the context information. The structure of whole network is shown as table 1, wherein contains 4 sense Block modules, and sense Block is inside to be connected through the mode of establishing ties between layer and layer, and every two sense blocks carry out the passageway through the Concat layer and reduce the dimension, and then make gradient and characteristic information transfer more effective, can prevent to train the fit again.
TABLE 1
(Table 1, showing the structure of the whole network, is provided as an image in the original publication.)
The feature extraction network adopts 4 down-sampled Dense blocks (Dense blocks), namely Dense Block1, Dense Block2, Dense Block3 and Dense Block 4;
Dense Block1 contains six groups of convolution units, each containing a 1 × 1 and a 3 × 3 convolution. The output of each group is fed as input to every later group: the first group feeds the second through sixth groups, the second feeds the third through sixth, the third feeds the fourth through sixth, the fourth feeds the fifth and sixth, and the fifth feeds the sixth. The outputs of all six groups are passed into the Concat unit, which contains a 1 × 1 convolution and 2 × 2 average pooling. The layers inside the block are densely cascaded, and the Concat layer performs channel dimension reduction, which makes gradient and feature propagation more effective and helps prevent overfitting.
Dense Block2 is identical to Dense Block 1;
dense Block3 is identical to Dense Block 1;
dense Block4 is identical to Dense Block 1;
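The dense connectivity just described, where each convolution group consumes the concatenation of the block input and all earlier outputs, can be sketched with NumPy stand-ins for the convolutions (the growth rate of 12 channels per unit is an illustrative choice, not taken from the patent):

```python
import numpy as np

def conv_unit(x: np.ndarray, out_ch: int) -> np.ndarray:
    """Stand-in for the 1x1 + 3x3 convolution pair: a fixed channel
    projection, so that the connectivity pattern is the focus."""
    c = x.shape[0]
    w = np.ones((out_ch, c)) / c                  # toy 1x1 "convolution" weights
    return np.tensordot(w, x, axes=([1], [0]))    # -> (out_ch, H, W)

def dense_block(x: np.ndarray, n_units: int = 6, growth: int = 12) -> np.ndarray:
    """Each unit sees the channel-concatenation of the block input and
    every earlier unit's output, as in the six-group Dense Block above."""
    feats = [x]
    for _ in range(n_units):
        inp = np.concatenate(feats, axis=0)       # concat along channels
        feats.append(conv_unit(inp, growth))
    return np.concatenate(feats, axis=0)          # fed to the Concat/transition unit

x = np.random.rand(16, 32, 32)                    # (channels, H, W)
y = dense_block(x)                                # 16 + 6*12 = 88 channels
```

The transition (Concat) unit's 1 × 1 convolution would then shrink those 88 channels back down before the next block, which is the channel dimension reduction the text describes.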
multi-scale feature extraction network:
the multi-scale feature extraction process is shown in fig. 4, and corresponds to 4 sub-scale feature units;
the fourth sub-scale feature unit is obtained by applying a 3 × 3 convolution to the output of Dense Block4;
the third sub-scale feature unit is obtained by adding two feature maps element-wise and then applying a 3 × 3 convolution; the first is the output of the Dense Block3 unit after a 1 × 1 convolution, and the second is the output of the Dense Block4 unit upsampled by a factor of 2;
the second sub-scale feature unit is obtained by adding two feature maps element-wise and then applying a 3 × 3 convolution; the first is the output of the Dense Block2 unit after a 1 × 1 convolution, and the second is the element-wise sum of two sub-parts, upsampled by a factor of 2: the first sub-part is the output of the Dense Block3 unit after a 1 × 1 convolution, and the second sub-part is the output of the Dense Block4 unit upsampled by a factor of 2;
the first sub-scale feature unit is obtained by adding two feature maps element-wise and then applying a 3 × 3 convolution; the first is the output of the Dense Block1 unit after a 1 × 1 convolution, and the second is the element-wise sum of two sub-parts, upsampled by a factor of 2: the first sub-part is the output of the Dense Block2 unit after a 1 × 1 convolution, and the second sub-part is the output of the Dense Block4 unit upsampled by a factor of 2 and added element-wise to the output feature map of Dense Block3.
The multi-scale feature extraction network performs multi-scale fusion and prediction on the first, second, third, and fourth sub-scale feature units; during multi-scale feature fusion, shallow feature information is connected with its context.
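The top-down fusion of the four sub-scale units resembles a feature-pyramid scheme: each level adds a 1 × 1-projected backbone map to a 2× upsampled deeper map. A NumPy sketch of that generic pattern follows; channel counts and sizes are illustrative, the trailing 3 × 3 convolutions are omitted, and the patent's extra reuse of Dense Block4's map at the first sub-scale is not reproduced here:

```python
import numpy as np

def upsample2(x: np.ndarray) -> np.ndarray:
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def lateral(x: np.ndarray, out_ch: int) -> np.ndarray:
    """Stand-in for the 1x1 convolution that aligns channel counts."""
    if x.shape[0] >= out_ch:
        return x[:out_ch]
    return np.pad(x, ((0, out_ch - x.shape[0]), (0, 0), (0, 0)))

# Toy backbone outputs: Dense Block1..4, each half the resolution of the last.
d1, d2, d3, d4 = (np.random.rand(c, s, s) for c, s in
                  [(64, 32), (128, 16), (256, 8), (512, 4)])

C = 64                                 # shared channel width after the 1x1 step
p4 = lateral(d4, C)                    # fourth sub-scale unit
p3 = lateral(d3, C) + upsample2(p4)    # third: lateral + 2x upsampled deeper map
p2 = lateral(d2, C) + upsample2(p3)    # second
p1 = lateral(d1, C) + upsample2(p2)    # first (finest scale)
```

The element-wise additions are what let the shallow, high-resolution maps pick up context from the deep, coarse maps.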
The training samples in the training set are normalized to 256 × 256 and input into the built neural network model.
Loss function:
and designing a feature capture box according to each feature graph coordinate point of the multi-scale features to extract the features, and predicting the types and the boundary frames of the targets by using the feature boxes. The method uses convolution kernels with the size of 3 multiplied by 3 to extract the characteristics corresponding to characteristic grabbing boxes, the convolution corresponding to each characteristic diagram is 3 multiplied by 6 multiplied by (class +4), wherein 6 is the number of the grabbing boxes on each characteristic diagram coordinate point, 4 is the deviation between a predicted target boundary box and a real target boundary box marked before training, class is the number of classes of the class of the loss classification of the blocking key, and if the size of one characteristic diagram is m multiplied by n and each coordinate has 6 boxes, the output result of m multiplied by n multiplied by 6 multiplied by (class +4) is finally generated.
The loss function of the entire model is the following equation:
$$L(x, c, l, g) = \frac{1}{N}\left(L_{conf}(x, c) + \alpha\, L_{loc}(x, l, g)\right)$$
The total loss function is the weighted sum of the classification error $L_{conf}(x, c)$ and the regression error $L_{loc}(x, l, g)$, where c is the confidence, l is the predicted box, g is the ground-truth box, α is the weight, N is the number of matched default boxes, and x indicates whether a designed feature capture box is matched to a corresponding target.
Error back-propagation is performed according to the loss function, and the parameters of the deep convolutional network are updated. Stop key sample pictures of various forms are trained on repeatedly until the loss gradually converges and the confidence rises to a stable value; the model parameters learned at that point are taken as the trained parameters, yielding the trained neural network model.
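The total loss as described — classification error plus α-weighted regression error, averaged over the N matched default boxes — can be sketched as follows; the zero-loss convention for N = 0 follows common SSD-style practice and is an assumption here, not stated in the text:

```python
def total_loss(l_conf: float, l_loc: float, n_matched: int,
               alpha: float = 1.0) -> float:
    """L(x, c, l, g) = (1/N) * (L_conf + alpha * L_loc),
    with N the number of matched default boxes."""
    if n_matched == 0:
        return 0.0          # no matched boxes: avoid dividing by zero
    return (l_conf + alpha * l_loc) / n_matched

loss = total_loss(l_conf=6.0, l_loc=2.0, n_matched=4)   # (6 + 2) / 4
```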
5. Result processing
Accelerating the inference engine:
through the model weight of the trained neural network, a high-performance acceleration inference engine TensorRT opened by the Invland company is used for model acceleration, convolution, bias and activation function layers are fused to form a single layer, the fused model has a high set, and then the model precision is converted from 32-bit to 16-bit precision, so that the calculated amount of the neural network is reduced, and the purpose of model acceleration is achieved.
Existing neural network techniques are largely based on 32-bit precision. The invention predicts with the fused 16-bit model, which greatly shortens prediction time and improves efficiency; the reduced precision does not lower accuracy or raise the false-alarm and missed-alarm rates, so the prediction quality is unaffected;
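The precision conversion itself halves weight storage while introducing only a small rounding error, which is why the text can claim accuracy is unaffected. A NumPy illustration of the 32-bit to 16-bit step (the real deployment path uses TensorRT's fused engine rather than a plain cast):

```python
import numpy as np

# Toy "model weights": a single 32-bit convolution kernel.
rng = np.random.default_rng(0)
w32 = rng.standard_normal((64, 64, 3, 3)).astype(np.float32)
w16 = w32.astype(np.float16)            # 32-bit -> 16-bit precision

bytes32 = w32.nbytes
bytes16 = w16.nbytes                    # exactly half the storage
# Rounding error introduced by the cast: tiny relative to the weights.
max_err = float(np.max(np.abs(w32 - w16.astype(np.float32))))
```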
predicting by using a neural network model:
and obtaining a vehicle passing image, cutting out an interested area containing a blocking key, loading and converting the model weight of the neural network with 16-bit precision, and predicting the neural network.
Prior rule
Stop key loss faults are judged from the stop key coordinate information predicted by the neural network using hand-crafted prior rules. During judgment, if the acquired image contains the stop key, it can assist the decision and reduce the influence of other interference factors: image processing can further measure the perimeter, aspect ratio, and area of the part under inspection, and applying the prior rules to these measurements further improves the judgment accuracy and reduces the false-alarm and missed-alarm rates.
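A prior-rule check of the kind described can be as simple as thresholding the predicted box's geometry; the thresholds below are illustrative placeholders (in practice they would be tuned from the known geometry of the stop key):

```python
def passes_prior_rules(x1: float, y1: float, x2: float, y2: float,
                       min_area: float = 500.0,
                       ar_range: tuple = (0.3, 3.0)) -> bool:
    """Apply hand-crafted prior rules to a predicted stop key box.

    A detection that is implausibly small or oddly proportioned is
    rejected, filtering out interference before a fault is reported.
    """
    w, h = x2 - x1, y2 - y1
    if w <= 0 or h <= 0:
        return False
    area = w * h
    aspect = w / h
    return area >= min_area and ar_range[0] <= aspect <= ar_range[1]

ok = passes_prior_rules(100, 100, 160, 140)   # a 60 x 40 candidate box
```

Only detections passing such rules (or, conversely, the absence of any passing detection where a stop key is expected) would feed the fault report below.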
Upload alarm platform
Fault information is obtained according to the prior rules, and the information on the faulty component is uploaded to an alarm platform.

Claims (6)

1. A method for detecting loss of a stop key, characterized by comprising the following steps:
training a neural network model on region-of-interest images containing the stop key to obtain the trained model weights, and converting the model precision from 32-bit to 16-bit;
acquiring a passing-train image, cropping a region of interest containing the stop key, loading the converted 16-bit model weights, performing neural network prediction, and judging the fault from the obtained stop key coordinate information using hand-crafted prior rules.
2. The method for detecting loss of a stop key according to claim 1, wherein the process of training the neural network model on the region-of-interest images containing the stop key comprises the following steps:
s1, acquiring a vehicle passing image;
s2, cutting out the region of interest from the passing vehicle image according to the wheel base information of the hardware and the position of the stop key;
s3, dataset image preprocessing, comprising the steps of:
according to the image of the region of interest of the stop key, establishing a stop key image set containing ice, snow and normal conditions; marking the data set to obtain coordinate information of the stop key in the training sample;
s4, normalizing the training samples in the training set to 256 × 256 and inputting them into the built neural network model; training yields the model weights of the trained neural network and thus the trained model.
3. The method of claim 2, wherein a data augmentation operation is performed on the data set before it is labeled in step s3; the augmented images are combined to form a new training data set, which is then labeled.
4. The method according to claim 1, 2 or 3, wherein the neural network model comprises a reference feature extraction network, a multi-scale feature extraction network;
the reference feature extraction network comprises a convolution-pooling unit and 4 Dense Block modules; the layers inside each Dense Block are connected in a dense cascade, and a Concat unit between every two Dense Blocks performs channel dimension reduction;
4 Dense Block modules are marked as Dense Block1, Dense Block2, Dense Block3 and Dense Block 4;
Dense Block1 contains six groups of convolution units, each containing a 1 × 1 and a 3 × 3 convolution. The output of each group is fed as input to every later group: the first group feeds the second through sixth groups, the second feeds the third through sixth, the third feeds the fourth through sixth, the fourth feeds the fifth and sixth, and the fifth feeds the sixth. The outputs of all six groups are passed into a Concat unit;
the Concat unit contains a convolution of 1 x 1 and an average pooling of 2 x 2;
dense Block2 is identical to Dense Block 1;
dense Block3 is identical to Dense Block 1;
dense Block4 is identical to Dense Block 1;
extracting multi-scale features to obtain 4 sub-scale feature units;
the fourth sub-scale feature unit is obtained by applying a 3 × 3 convolution to the output of Dense Block4;
the third sub-scale feature unit is obtained by adding two feature maps element-wise and then applying a 3 × 3 convolution; the first is the output of the Dense Block3 unit after a 1 × 1 convolution, and the second is the output of the Dense Block4 unit upsampled by a factor of 2;
the second sub-scale feature unit is obtained by adding two feature maps element-wise and then applying a 3 × 3 convolution; the first is the output of the Dense Block2 unit after a 1 × 1 convolution, and the second is the element-wise sum of two sub-parts, upsampled by a factor of 2: the first sub-part is the output of the Dense Block3 unit after a 1 × 1 convolution, and the second sub-part is the output of the Dense Block4 unit upsampled by a factor of 2;
the first sub-scale feature unit is obtained by adding two feature maps element-wise and then applying a 3 × 3 convolution; the first is the output of the Dense Block1 unit after a 1 × 1 convolution, and the second is the element-wise sum of two sub-parts, upsampled by a factor of 2: the first sub-part is the output of the Dense Block2 unit after a 1 × 1 convolution, and the second sub-part is the output of the Dense Block4 unit upsampled by a factor of 2 and added element-wise to the output feature map of Dense Block3.
5. The method according to claim 4, wherein in the process of training the neural network model, the loss function of the model is as follows:
$$L(x, c, l, g) = \frac{1}{N}\left(L_{conf}(x, c) + \alpha\, L_{loc}(x, l, g)\right)$$
The total loss function is the weighted sum of the classification error $L_{conf}(x, c)$ and the regression error $L_{loc}(x, l, g)$, where c is the confidence, l is the predicted box, g is the ground-truth box, α is the weight, N is the number of matched default boxes, and x indicates whether a designed feature capture box is matched to a corresponding target.
6. The method of claim 4, wherein the convolution kernel of the convolutional layer in the convolution-pooling unit is 7 x 7, and the pooling layer is a 3 x 3 max-pooling layer.
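The claim fixes only the kernel sizes of the stem; assuming the strides and padding of the standard DenseNet stem (7 x 7 conv with stride 2, 3 x 3 max pool with stride 2), the spatial reduction can be checked with the usual output-size formula. The input size of 224 is illustrative.

```python
def out_size(n, k, s, p):
    """Spatial size after a conv/pool layer: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

h = 224                    # assumed input height
h = out_size(h, 7, 2, 3)   # 7x7 convolution, stride 2, pad 3
h = out_size(h, 3, 2, 1)   # 3x3 max pooling, stride 2, pad 1
print(h)  # 56
```

The stem therefore reduces the input by a factor of 4 before the first dense block, under these assumed strides.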
CN201911278037.XA 2019-12-12 2019-12-12 Method for detecting loss of blocking key Pending CN111091553A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911278037.XA CN111091553A (en) 2019-12-12 2019-12-12 Method for detecting loss of blocking key


Publications (1)

Publication Number Publication Date
CN111091553A true CN111091553A (en) 2020-05-01

Family

ID=70395493

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911278037.XA Pending CN111091553A (en) 2019-12-12 2019-12-12 Method for detecting loss of blocking key

Country Status (1)

Country Link
CN (1) CN111091553A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103034850A (en) * 2012-12-21 2013-04-10 湖北工业大学 Trouble of moving freight car detection system (TFDS) block key loss fault automatic identification method
CN103295027A (en) * 2013-05-17 2013-09-11 北京康拓红外技术股份有限公司 Freight wagon blocking key missing fault identification method based on support vector machine
CN109614985A (en) * 2018-11-06 2019-04-12 华南理工大学 A kind of object detection method based on intensive connection features pyramid network
CN110232653A (en) * 2018-12-12 2019-09-13 天津大学青岛海洋技术研究院 The quick light-duty intensive residual error network of super-resolution rebuilding


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIONG WEI: "Deep learning deployment and computation optimization techniques for mobile devices", 《电子制作》 (Electronic Production) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111461093A (en) * 2020-06-22 2020-07-28 北京慧智数据科技有限公司 Modified truck identification method based on deep learning technology
CN111461093B (en) * 2020-06-22 2020-09-29 北京慧智数据科技有限公司 Modified truck identification method based on deep learning technology
CN112233096A (en) * 2020-10-19 2021-01-15 哈尔滨市科佳通用机电股份有限公司 Vehicle apron board fault detection method

Similar Documents

Publication Publication Date Title
CN103279765B (en) Steel wire rope surface damage detection method based on images match
CN111652227B (en) Method for detecting damage fault of bottom floor of railway wagon
CN109766746B (en) Track foreign matter detection method for aerial video of unmanned aerial vehicle
CN111784633B (en) Insulator defect automatic detection algorithm for electric power inspection video
CN106290388A (en) A kind of insulator breakdown automatic testing method
Yang et al. Deep learning‐based bolt loosening detection for wind turbine towers
Liang et al. Defect detection of rail surface with deep convolutional neural networks
CN111080609B (en) Brake shoe bolt loss detection method based on deep learning
CN111080612B (en) Truck bearing damage detection method
CN111489339A (en) Method for detecting defects of bolt spare nuts of high-speed railway positioner
CN111223087B (en) Automatic bridge crack detection method based on generation countermeasure network
CN102346844B (en) Device and method for identifying fault of losing screw bolts for truck center plates
CN116485717B (en) Concrete dam surface crack detection method based on pixel-level deep learning
CN113111727A (en) Method for detecting rotating target in remote sensing scene based on feature alignment
CN109934135B (en) Rail foreign matter detection method based on low-rank matrix decomposition
CN111091553A (en) Method for detecting loss of blocking key
CN111079748A (en) Method for detecting oil throwing fault of rolling bearing of railway wagon
Zhao et al. Image-based comprehensive maintenance and inspection method for bridges using deep learning
CN114298948A (en) Ball machine monitoring abnormity detection method based on PSPNet-RCNN
CN106709903A (en) PM2.5 concentration prediction method based on image quality
CN114941807A (en) Unmanned aerial vehicle-based rapid monitoring and positioning method for leakage of thermal pipeline
CN116823800A (en) Bridge concrete crack detection method based on deep learning under complex background
CN110765900B (en) Automatic detection illegal building method and system based on DSSD
CN117541534A (en) Power transmission line inspection method based on unmanned plane and CNN-BiLSTM model
CN117372677A (en) Method for detecting health state of cotter pin of fastener of high-speed railway overhead contact system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200501