CN112329859A - Method for identifying lost fault image of sand spraying pipe nozzle of railway motor car - Google Patents


Info

Publication number
CN112329859A
CN112329859A
Authority
CN
China
Prior art keywords
image, nozzle, corner, network, loss
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202011233315.2A
Other languages
Chinese (zh)
Inventor
付德敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Kejia General Mechanical and Electrical Co Ltd
Original Assignee
Harbin Kejia General Mechanical and Electrical Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Kejia General Mechanical and Electrical Co Ltd filed Critical Harbin Kejia General Mechanical and Electrical Co Ltd
Priority to CN202011233315.2A priority Critical patent/CN112329859A/en
Publication of CN112329859A publication Critical patent/CN112329859A/en
Withdrawn legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing

Abstract

A method for identifying a lost fault image of a railway motor car sanding pipe nozzle relates to the technical field of image recognition. It addresses the positive/negative sample imbalance of Anchor-based detection networks in the prior art and replaces manual inspection with automatic image recognition, improving detection efficiency and accuracy. A deep learning algorithm is applied to automatic identification of sanding pipe nozzle loss faults, improving the robustness and precision of the overall algorithm. An Hourglass network is introduced into the CornerNet network so that multi-scale information is fused, improving the network's adaptability to scale. CornerNet combines the top-left and bottom-right corner points of a target into a detection box, which greatly reduces the parameter count and speeds up training and prediction; since no anchor boxes are involved, the training process has no positive/negative sample imbalance. The accuracy of the target box, and hence of detection, is improved. For small targets, the loss function is redefined, further improving detection accuracy.

Description

Method for identifying lost fault image of sand spraying pipe nozzle of railway motor car
Technical Field
The invention relates to the technical field of image recognition, in particular to a method for recognizing a lost fault image of a sand spraying pipe nozzle of a railway motor car.
Background
The loss of a motor car sanding pipe nozzle is a fault that endangers driving safety. At present, such faults are detected by manually inspecting images. Inspectors are prone to fatigue, omissions, and other human factors in the course of their work, which may cause missed or false detections and affect driving safety. Automatically identifying faults from image information can improve detection accuracy and stability. In recent years, deep learning and artificial intelligence have developed continuously and the technology has matured. Anchor-based detection networks such as Faster R-CNN are widely used for object detection, but Anchor-based detection algorithms depend on excessive manual design, are inefficient during training and prediction, and suffer from positive/negative sample imbalance, which can lead to inaccurate detection boxes and cross-category false detections.
Disclosure of Invention
The purpose of the invention is to address the positive/negative sample imbalance of Anchor-based detection networks in the prior art by providing a method for identifying a lost fault image of a railway motor car sanding pipe nozzle.
The technical scheme adopted by the invention to solve the technical problems is as follows:
A method for identifying a lost fault image of a railway motor car sanding pipe nozzle comprises the following steps:
Step one: acquire an original grayscale image of the motor car and determine the sanding pipe component region in the grayscale image;
Step two: preprocess the image of the sanding pipe component region and take the preprocessed image as a sample image;
Step three: form a sample image set from all the sample images obtained in step two;
Step four: mark the sanding pipe nozzles in the sample image set to obtain a marked image set;
Step five: train a CornerNet network with the sample image set and the marked image set;
Step six: perform sanding pipe nozzle loss fault identification with the trained CornerNet network, specifically: judge whether the image output by CornerNet contains a sanding pipe nozzle; if a nozzle is detected, the image is judged to contain one; otherwise the image is judged not to contain one.
Further, the preprocessing includes data amplification and image contrast improvement.
Further, the image contrast is improved by adaptive histogram equalization.
Further, the data amplification includes one or more of rotation, translation, scaling, and mirroring of the image.
Further, marking the sample image set includes marking the image name, the detection category, and the top-left and bottom-right corner coordinates of the sanding pipe nozzle area.
Further, step five specifically comprises:
Step five-one: initialize the CornerNet network parameters with COCO model parameters;
Step five-two: using the initialized CornerNet network, input the sample image into the Hourglass network to obtain two heatmaps, predict the position of each corner point in each heatmap, and retain the corner points whose predicted score exceeds 0.7;
Step five-three: encode each retained corner point on the heatmaps to obtain its embedding vector, then compute the distance between the embedding vectors of every pair of corner points; the two corner points with the smallest embedding distance are the most similar pair and are taken as the top-left and bottom-right corners of a target; generate a target box from these two corners and train the CornerNet network with the generated target boxes.
Further, the loss function for predicting the position of each corner point in each heatmap is:

$$L_{det} = \frac{-1}{N}\sum_{c=1}^{C}\sum_{i=1}^{H}\sum_{j=1}^{W}\begin{cases}(1-p_{cij})^{\alpha}\log(p_{cij}) & \text{if } y_{cij}=1\\(1-y_{cij})^{\beta}(p_{cij})^{\alpha}\log(1-p_{cij}) & \text{otherwise}\end{cases}$$

where $p_{cij}$ is the value of the predicted heatmap at position $(i, j)$ of channel $c$, $y_{cij}$ is the ground truth at the corresponding position ($y_{cij} = 1$ means the position lies on a marked corner), $N$ is the number of targets, $\alpha$ is the loss weight for sample difficulty, $\beta$ is a weight parameter, $C$ is the number of channels, $H$ is the heatmap height, and $W$ is the heatmap width.
Further, the embedding loss functions are:

$$L_{pull} = \frac{1}{N}\sum_{k=1}^{N}\left[(e_{tk}-e_k)^2 + (e_{bk}-e_k)^2\right]$$

$$L_{push} = \frac{1}{N(N-1)}\sum_{k=1}^{N}\sum_{\substack{j=1\\j\neq k}}^{N}\max\left(0,\ \Delta - |e_k - e_j|\right)$$

where $e_{tk}$ is the embedding vector of the top-left corner of an object of class $k$, $e_{bk}$ is the embedding vector of its bottom-right corner, $e_k$ is the mean of $e_{tk}$ and $e_{bk}$, $\Delta = 1$, and $e_j$ is the mean of the top-left and bottom-right embedding vectors of an object of class $j$.
Further, the corner position coordinates are corrected during the step five-three training with the following formulas:

$$o_k = \left(\frac{x_k}{n} - \left\lfloor\frac{x_k}{n}\right\rfloor,\ \frac{y_k}{n} - \left\lfloor\frac{y_k}{n}\right\rfloor\right)$$

$$L_{off} = \frac{1}{N}\sum_{k=1}^{N}\mathrm{SmoothL1Loss}(\hat{o}_k, o_k)$$

where $x_k$ and $y_k$ are the coordinates of the $k$-th marked corner on the image, $n$ is the downsampling factor, $o_k$ represents the precision lost when the feature map is scaled back to the original image and marked box, $L_{off}$ is the target-box offset loss, and $\hat{o}_k$ is the predicted coordinate deviation.
Further, the loss function of the CornerNet network is:

$$L = L_{det} + aL_{pull} + bL_{push} + \gamma L_{off}$$

where $L_{det}$ is the corner loss, $L_{pull}$ and $L_{push}$ are the embedding losses, $a = 0.1$, $b = 0.1$, and $\gamma = 1$.
The invention has the following beneficial effects:
1. Automatic image recognition replaces manual inspection, improving detection efficiency and accuracy.
2. A deep learning algorithm is applied to automatic identification of sanding pipe nozzle loss faults, improving the robustness and precision of the overall algorithm.
3. An Hourglass network is introduced into the CornerNet network so that multi-scale information is fused, improving the network's adaptability to scale. CornerNet combines the top-left and bottom-right corner points of a target into a detection box, which greatly reduces the parameter count and speeds up training and prediction; since no anchor boxes are involved, the training process has no positive/negative sample imbalance. The accuracy of the target box, and hence of detection, is improved.
4. For small targets, the loss function is redefined, overcoming the prior-art difficulty of detecting small targets and improving detection accuracy.
Drawings
FIG. 1 is a flow chart of fault identification of the present application;
FIG. 2 is a flow chart of the present application for calculating weighting coefficients;
fig. 3 is a Cornernet training flow diagram.
Detailed Description
It should be noted that, in the case of conflict, the various embodiments disclosed in the present application may be combined with each other.
Embodiment 1: This embodiment is described with reference to FIG. 1. The method for identifying a lost fault image of a railway motor car sanding pipe nozzle comprises the following steps:
Step one: acquire an original grayscale image of the motor car and determine the sanding pipe component region in the grayscale image;
Step two: preprocess the image of the sanding pipe component region and take the preprocessed image as a sample image;
Step three: form a sample image set from all the sample images obtained in step two;
Step four: mark the sanding pipe nozzles in the sample image set to obtain a marked image set;
Step five: train a CornerNet network with the sample image set and the marked image set;
Step six: perform sanding pipe nozzle loss fault identification with the trained CornerNet network, specifically: judge whether the image output by CornerNet contains a sanding pipe nozzle; if a nozzle is detected, the image is judged to contain one; otherwise the image is judged not to contain one.
Establishing a sample data set
Imaging equipment is installed on both sides of the railway track, and high-definition images are captured as the motor car passes. The images are sharp grayscale images. Motor car components may be affected by natural or man-made conditions such as rain, mud, oil stains, or black paint, and images taken at different sites may also differ. The images of sanding pipe components therefore vary widely, so during collection diversity must be ensured and sanding pipe images under as many conditions as possible should be gathered.
The shape of the sanding pipe components may vary among motor car types, and because some types occur far less frequently than others, their sanding pipe components are harder to collect. Therefore all types of sanding pipe components are treated as a single class, and the sample data set is built with that one class.
The sample data set includes a grayscale image set and a set of marking files. The grayscale image set consists of the high-definition grayscale images captured by the equipment. Each marking file stores the grayscale image name, the detection category, and the coordinates of the top-left and bottom-right corners of the target region; since a sanding pipe nozzle loss has three fault forms, the images are marked into three classes, by manual annotation. The grayscale image set and the marking file set correspond one to one, i.e., each grayscale image has one marking file.
Embodiment 2: This embodiment further describes Embodiment 1; the difference is that the preprocessing includes data amplification and image contrast improvement.
Although the sample data set includes images under various conditions, data amplification is still required to improve the stability of the algorithm. Amplification includes rotation, translation, scaling, and mirroring of the image; each operation is performed under random conditions, which maximizes the diversity and applicability of the samples.
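The random amplification described above can be sketched as follows. This numpy-only version is illustrative only: the operation ranges and the function name `augment` are assumptions, and a production pipeline would typically use a library such as torchvision or albumentations.

```python
import numpy as np

def augment(img, rng):
    # Randomly apply one amplification operation described in the text:
    # rotation, translation, zoom (crop), or mirroring. Ranges are illustrative.
    op = int(rng.integers(0, 4))
    if op == 0:                          # rotation in 90-degree steps
        return np.rot90(img, k=int(rng.integers(1, 4)))
    if op == 1:                          # translation (circular shift, for simplicity)
        return np.roll(img, shift=int(rng.integers(-8, 9)), axis=1)
    if op == 2:                          # zoom: random crop covering ~90% of each side
        h, w = img.shape[:2]
        ch, cw = int(0.9 * h), int(0.9 * w)
        y0 = int(rng.integers(0, h - ch + 1))
        x0 = int(rng.integers(0, w - cw + 1))
        return img[y0:y0 + ch, x0:x0 + cw]
    return img[:, ::-1]                  # horizontal mirror
```

Calling `augment` repeatedly with a seeded generator yields a randomized mix of the four operations, as the text prescribes.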
Initial positioning
Using prior knowledge of the hardware equipment, wheelbase information, and related positions, the sanding pipe component region can be preliminarily cropped from the side-camera image.
Improving image contrast
Because the imaging angles and distances differ between stations, the brightness of the collected images varies; some images are too dark for the sanding pipe region to be observed clearly. The contrast of the image is therefore adaptively improved before it enters the deep learning network.
Embodiment 3: This embodiment further describes Embodiment 2; the difference is that the image contrast is improved by adaptive histogram equalization.
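As a rough illustration of tile-wise adaptive equalization, the following numpy sketch equalizes each tile of a grid independently. It is a simplification: real adaptive histogram equalization (e.g. CLAHE via `cv2.createCLAHE`) additionally clips the histogram and interpolates between tiles to avoid block artifacts, which is omitted here. The function names and grid size are assumptions.

```python
import numpy as np

def equalize(tile):
    # Classic histogram equalization of one 8-bit grayscale tile.
    hist = np.bincount(tile.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.round((cdf - cdf_min) / max(cdf[-1] - cdf_min, 1) * 255).astype(np.uint8)
    return lut[tile]

def adaptive_equalize(img, grid=4):
    # Equalize each tile of a grid x grid partition separately, so dark
    # regions are stretched locally rather than globally.
    h, w = img.shape
    out = np.empty_like(img)
    ys = np.linspace(0, h, grid + 1, dtype=int)
    xs = np.linspace(0, w, grid + 1, dtype=int)
    for i in range(grid):
        for j in range(grid):
            block = img[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            out[ys[i]:ys[i + 1], xs[j]:xs[j + 1]] = equalize(block)
    return out
```

On a low-contrast grayscale patch this stretches each tile's intensity range, which is the effect the embodiment relies on before the image enters the network.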
Embodiment 4: This embodiment further describes Embodiment 2; the difference is that the data amplification includes one or more of rotation, translation, scaling, and mirroring of the image.
Embodiment 5: This embodiment further describes Embodiment 1; the difference is that marking the sample image set includes marking the image name, the detection category, and the top-left and bottom-right corner coordinates of the sanding pipe nozzle area.
Embodiment 6: This embodiment further describes Embodiment 1; the difference is that step five specifically comprises:
Step five-one: initialize the CornerNet network parameters with COCO model parameters;
Step five-two: using the initialized CornerNet network, input the sample image into the Hourglass network to obtain two heatmaps, predict the position of each corner point in each heatmap, and retain the corner points whose predicted score exceeds 0.7;
Step five-three: encode each retained corner point on the heatmaps to obtain its embedding vector, then compute the distance between the embedding vectors of every pair of corner points; the two corner points with the smallest embedding distance are the most similar pair and are taken as the top-left and bottom-right corners of a target; generate a target box from these two corners and train the CornerNet network with the generated target boxes.
Multi-target detection
1) The Hourglass network and CornerNet network parameters are initialized with COCO model parameters; the backbone is composed of residual modules.
2) The image is input into the Hourglass network to obtain two heatmaps, one for predicting top-left corners and one for predicting bottom-right corners. Each heatmap has C channels, where C is the number of detection categories, and the predicted value of each point lies between 0 and 1, representing the score that the point is a corner.
3) Each point on the feature map is encoded to obtain an embedding vector that represents its position information. The network predicts an embedding for each detected corner; for the top-left and bottom-right corners of the same object the distance between the embeddings is small, i.e. the embeddings of corners of the same object have high similarity, while corners of different objects have distant, dissimilar embeddings.
4) According to the computed embedding distances, the corner points of the same object are combined into a candidate box for that object.
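Steps 3)-4) can be sketched as the following grouping routine. It is an illustrative simplification, not the patent's implementation: the data layout (lists of `(x, y, embedding, score)` tuples), the one-dimensional embeddings, and the distance threshold are all assumptions.

```python
def group_corners(tl, br, max_dist=0.5):
    """Pair each top-left corner with the geometrically valid bottom-right
    corner whose embedding is closest, forming candidate boxes.
    tl / br: lists of (x, y, embedding, score) tuples (illustrative layout)."""
    boxes = []
    for x1, y1, e1, s1 in tl:
        best = None
        for x2, y2, e2, s2 in br:
            d = abs(e1 - e2)  # 1-D embedding distance, as in CornerNet
            # The bottom-right corner must lie below and to the right of
            # the top-left corner, and the embeddings must be similar.
            if x2 > x1 and y2 > y1 and d < max_dist:
                if best is None or d < best[0]:
                    best = (d, (x1, y1, x2, y2, (s1 + s2) / 2))
        if best is not None:
            boxes.append(best[1])
    return boxes
```

A corner pair whose embeddings are far apart (different objects) is rejected, so only same-object corners form a box.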
Embodiment 7: This embodiment further describes Embodiment 6; the difference is that the loss function for predicting the position of each corner point in each heatmap is:

$$L_{det} = \frac{-1}{N}\sum_{c=1}^{C}\sum_{i=1}^{H}\sum_{j=1}^{W}\begin{cases}(1-p_{cij})^{\alpha}\log(p_{cij}) & \text{if } y_{cij}=1\\(1-y_{cij})^{\beta}(p_{cij})^{\alpha}\log(1-p_{cij}) & \text{otherwise}\end{cases}$$

where $p_{cij}$ is the value of the predicted heatmap at position $(i, j)$ of channel $c$, $y_{cij}$ is the ground truth at the corresponding position ($y_{cij} = 1$ means the position lies on a marked corner), $N$ is the number of targets, $\alpha$ is the loss weight for sample difficulty, and $\beta$ is a weight parameter; $C$ is the number of channels, $H$ is the heatmap height, and $W$ is the heatmap width. The ground-truth weight is computed from a Gaussian distribution centered on the marked corner, so that $y_{cij}$ is close to 1 for points $(i, j)$ near the marked corner; this part of the weight is controlled by the $\beta$ parameter, because a prediction box formed from a falsely detected corner close to a marked corner still has a large overlap with the ground truth. The corner prediction loss thus improves on the standard focal loss.
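A minimal numpy sketch of this corner focal-loss variant follows. It assumes `y` already contains the Gaussian-smoothed ground truth described above; the default values α = 2 and β = 4 follow common CornerNet practice and are assumptions here, not values stated in the patent.

```python
import numpy as np

def corner_focal_loss(p, y, alpha=2.0, beta=4.0):
    # p, y: C x H x W arrays of predicted heatmap scores and (Gaussian-
    # smoothed) ground truth, matching the equation above.
    p = np.clip(p, 1e-6, 1 - 1e-6)       # numerical safety for log()
    pos = y == 1
    n = max(pos.sum(), 1)                # N: number of marked corners
    # Positive locations: down-weight easy (already-confident) corners.
    pos_loss = ((1 - p[pos]) ** alpha * np.log(p[pos])).sum()
    # Negative locations: (1 - y)^beta softens the penalty near true corners.
    neg = ~pos
    neg_loss = ((1 - y[neg]) ** beta * p[neg] ** alpha *
                np.log(1 - p[neg])).sum()
    return -(pos_loss + neg_loss) / n
```

A confident, correct heatmap yields a near-zero loss, while a heatmap that scores the wrong locations highly is penalized heavily.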
Embodiment 8: This embodiment further describes Embodiment 6; the difference is that the embedding loss functions are:

$$L_{pull} = \frac{1}{N}\sum_{k=1}^{N}\left[(e_{tk}-e_k)^2 + (e_{bk}-e_k)^2\right]$$

$$L_{push} = \frac{1}{N(N-1)}\sum_{k=1}^{N}\sum_{\substack{j=1\\j\neq k}}^{N}\max\left(0,\ \Delta - |e_k - e_j|\right)$$

where $e_{tk}$ is the embedding vector of the top-left corner of an object of class $k$, $e_{bk}$ is the embedding vector of its bottom-right corner, $e_k$ is the mean of $e_{tk}$ and $e_{bk}$, $\Delta = 1$, and $e_j$ is the mean of the top-left and bottom-right embedding vectors of an object of class $j$.
The $L_{pull}$ term reduces the distance between the embedding vectors ($e_{tk}$ and $e_{bk}$) of the two corner points belonging to the same object (a class-$k$ object).
The $L_{push}$ term enlarges the embedding distance between corner points that do not belong to the same target.
Embodiment 9: This embodiment further describes Embodiment 6; the difference is that the corner position coordinates are corrected during the step five-three training with the following formulas:

$$o_k = \left(\frac{x_k}{n} - \left\lfloor\frac{x_k}{n}\right\rfloor,\ \frac{y_k}{n} - \left\lfloor\frac{y_k}{n}\right\rfloor\right)$$

$$L_{off} = \frac{1}{N}\sum_{k=1}^{N}\mathrm{SmoothL1Loss}(\hat{o}_k, o_k)$$

where $x_k$ and $y_k$ are the coordinates of the $k$-th marked corner on the image, $n$ is the downsampling factor, $o_k$ represents the precision lost when the feature map is scaled back to the original image and marked box, $L_{off}$ is the target-box offset loss, and $\hat{o}_k$ is the predicted coordinate deviation.
During training, the precision information lost by downsampling is restored by supervising the predicted offsets $\hat{o}_k$ against $o_k$ with the smooth L1 loss $L_{off}$.
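The offset target $o_k$ and its smooth L1 supervision can be sketched as follows; this is a simplified numpy illustration, and the function names are assumptions.

```python
import numpy as np

def corner_offsets(corners, n):
    # o_k = (x_k/n - floor(x_k/n), y_k/n - floor(y_k/n)): the sub-pixel
    # precision lost when a marked corner is mapped to the downsampled map.
    c = np.asarray(corners, float) / n
    return c - np.floor(c)

def smooth_l1(pred, target):
    # Smooth L1 loss used to supervise the predicted offsets: quadratic
    # near zero, linear for larger errors.
    d = np.abs(np.asarray(pred, float) - np.asarray(target, float))
    return np.where(d < 1, 0.5 * d ** 2, d - 0.5).mean()
```

For a corner at (10, 7) with downsampling factor n = 4, the lost precision is (0.5, 0.75); the network learns to predict exactly this residual.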
Embodiment 10: This embodiment further describes Embodiment 6; the difference is that the loss function of the CornerNet network is:

$$L = L_{det} + aL_{pull} + bL_{push} + \gamma L_{off}$$

where $L_{det}$ is the corner loss, $L_{pull}$ and $L_{push}$ are the embedding losses, $a = 0.1$, $b = 0.1$, and $\gamma = 1$.
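The overall weighted combination is simple to express; the following one-line sketch uses the weights stated above (the function name is an assumption):

```python
def total_loss(l_det, l_pull, l_push, l_off, a=0.1, b=0.1, gamma=1.0):
    # L = L_det + a*L_pull + b*L_push + gamma*L_off with a=0.1, b=0.1, gamma=1.
    return l_det + a * l_pull + b * l_push + gamma * l_off
```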
It should be noted that the detailed description serves only to explain the technical solution of the invention and does not limit the protection scope of the claims. All modifications and variations within the spirit of the invention are intended to fall within the scope of the following claims and the description.

Claims (10)

1. A method for identifying a lost fault image of a railway motor car sanding pipe nozzle, characterized by comprising the following steps:
Step one: acquiring an original grayscale image of the motor car and determining the sanding pipe component region in the grayscale image;
Step two: preprocessing the image of the sanding pipe component region and taking the preprocessed image as a sample image;
Step three: forming a sample image set from all the sample images obtained in step two;
Step four: marking the sanding pipe nozzles in the sample image set to obtain a marked image set;
Step five: training a CornerNet network with the sample image set and the marked image set;
Step six: performing sanding pipe nozzle loss fault identification with the trained CornerNet network, specifically: judging whether the image output by CornerNet contains a sanding pipe nozzle; if a nozzle is detected, the image is judged to contain one; otherwise the image is judged not to contain one.
2. The method for identifying the missing nozzle fault image of the railway motor car sanding pipe according to claim 1, wherein the preprocessing comprises data amplification and image contrast improvement.
3. The method for identifying the nozzle loss fault image of the sand pipe of the railway motor car as claimed in claim 2, wherein the improvement of the image contrast is performed by adaptive histogram equalization.
4. The method for identifying the missing nozzle fault image of the railway motor car sanding pipe according to claim 2, wherein the data augmentation comprises one or more of rotation, translation, scaling and mirroring of the image.
5. The method according to claim 1, wherein the marking of the sample image set comprises marking the image name, the detection category and the upper left and lower right corner coordinates of the sanding pipe nozzle area.
6. The method for identifying a lost fault image of a railway motor car sanding pipe nozzle according to claim 1, characterized in that step five specifically comprises:
Step five-one: initializing the CornerNet network parameters with COCO model parameters;
Step five-two: using the initialized CornerNet network, inputting the sample image into the Hourglass network to obtain two heatmaps, predicting the position of each corner point in each heatmap, and retaining the corner points whose predicted score exceeds 0.7;
Step five-three: encoding each retained corner point on the heatmaps to obtain its embedding vector, then computing the distance between the embedding vectors of every pair of corner points; the two corner points with the smallest embedding distance are the most similar pair and are taken as the top-left and bottom-right corners of a target; generating a target box from these two corners and training the CornerNet network with the generated target boxes.
7. The method for identifying a lost fault image of a railway motor car sanding pipe nozzle according to claim 6, characterized in that the loss function for predicting the position of each corner point in each heatmap is:

$$L_{det} = \frac{-1}{N}\sum_{c=1}^{C}\sum_{i=1}^{H}\sum_{j=1}^{W}\begin{cases}(1-p_{cij})^{\alpha}\log(p_{cij}) & \text{if } y_{cij}=1\\(1-y_{cij})^{\beta}(p_{cij})^{\alpha}\log(1-p_{cij}) & \text{otherwise}\end{cases}$$

where $p_{cij}$ is the value of the predicted heatmap at position $(i, j)$ of channel $c$, $y_{cij}$ is the ground truth at the corresponding position ($y_{cij} = 1$ means the position lies on a marked corner), $N$ is the number of targets, $\alpha$ is the loss weight for sample difficulty, $\beta$ is a weight parameter, $C$ is the number of channels, $H$ is the heatmap height, and $W$ is the heatmap width.
8. The method for identifying a lost fault image of a railway motor car sanding pipe nozzle according to claim 7, characterized in that the embedding loss functions are:

$$L_{pull} = \frac{1}{N}\sum_{k=1}^{N}\left[(e_{tk}-e_k)^2 + (e_{bk}-e_k)^2\right]$$

$$L_{push} = \frac{1}{N(N-1)}\sum_{k=1}^{N}\sum_{\substack{j=1\\j\neq k}}^{N}\max\left(0,\ \Delta - |e_k - e_j|\right)$$

where $e_{tk}$ is the embedding vector of the top-left corner of an object of class $k$, $e_{bk}$ is the embedding vector of its bottom-right corner, $e_k$ is the mean of $e_{tk}$ and $e_{bk}$, $\Delta = 1$, and $e_j$ is the mean of the top-left and bottom-right embedding vectors of an object of class $j$.
9. The method for identifying a lost fault image of a railway motor car sanding pipe nozzle according to claim 8, characterized in that the corner position coordinates are corrected during the step five-three training with the following formulas:

$$o_k = \left(\frac{x_k}{n} - \left\lfloor\frac{x_k}{n}\right\rfloor,\ \frac{y_k}{n} - \left\lfloor\frac{y_k}{n}\right\rfloor\right)$$

$$L_{off} = \frac{1}{N}\sum_{k=1}^{N}\mathrm{SmoothL1Loss}(\hat{o}_k, o_k)$$

where $x_k$ and $y_k$ are the coordinates of the $k$-th marked corner on the image, $n$ is the downsampling factor, $o_k$ represents the precision lost when the feature map is scaled back to the original image and marked box, $L_{off}$ is the target-box offset loss, and $\hat{o}_k$ is the predicted coordinate deviation.
10. The method for identifying a lost fault image of a railway motor car sanding pipe nozzle according to claim 9, characterized in that the loss function of the CornerNet network is:

$$L = L_{det} + aL_{pull} + bL_{push} + \gamma L_{off}$$

where $L_{det}$ is the corner loss, $L_{pull}$ and $L_{push}$ are the embedding losses, $a = 0.1$, $b = 0.1$, and $\gamma = 1$.
CN202011233315.2A 2020-11-06 2020-11-06 Method for identifying lost fault image of sand spraying pipe nozzle of railway motor car Withdrawn CN112329859A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011233315.2A CN112329859A (en) 2020-11-06 2020-11-06 Method for identifying lost fault image of sand spraying pipe nozzle of railway motor car


Publications (1)

Publication Number Publication Date
CN112329859A true CN112329859A (en) 2021-02-05

Family

ID=74316356

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011233315.2A Withdrawn CN112329859A (en) 2020-11-06 2020-11-06 Method for identifying lost fault image of sand spraying pipe nozzle of railway motor car

Country Status (1)

Country Link
CN (1) CN112329859A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113392747A (en) * 2021-06-07 2021-09-14 北京优创新港科技股份有限公司 Goods packing box identification method and system for stereoscopic warehouse

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111079747A (en) * 2019-12-12 2020-04-28 哈尔滨市科佳通用机电股份有限公司 Railway wagon bogie side frame fracture fault image identification method
CN111079627A (en) * 2019-12-12 2020-04-28 哈尔滨市科佳通用机电股份有限公司 Railway wagon brake beam body breaking fault image identification method
CN111091544A (en) * 2019-12-12 2020-05-01 哈尔滨市科佳通用机电股份有限公司 Method for detecting breakage fault of side integrated framework of railway wagon bogie
CN111091558A (en) * 2019-12-12 2020-05-01 哈尔滨市科佳通用机电股份有限公司 Railway wagon swing bolster spring jumping fault image identification method
CN111091547A (en) * 2019-12-12 2020-05-01 哈尔滨市科佳通用机电股份有限公司 Railway wagon brake beam strut fracture fault image identification method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HEI LAW et al.: "CornerNet: Detecting Objects as Paired Keypoints", SpringerLink *

Similar Documents

Publication Publication Date Title
CN111899288B (en) Tunnel leakage water area detection and identification method based on infrared and visible light image fusion
KR102008973B1 (en) Apparatus and Method for Detection defect of sewer pipe based on Deep Learning
CN113744270B (en) Unmanned aerial vehicle visual detection and identification method for crane complex steel structure surface defects
CN110254468B (en) Intelligent online detection device and detection method for track surface defects
CN111091558B (en) Railway wagon swing bolster spring jumping fault image identification method
CN110334750B (en) Power transmission line iron tower bolt corrosion degree image classification and identification method
CN106056619A (en) Unmanned aerial vehicle vision wire patrol method based on gradient constraint Radon transform
CN111091544B (en) Method for detecting breakage fault of side integrated framework of railway wagon bogie
CN110992349A (en) Underground pipeline abnormity automatic positioning and identification method based on deep learning
CN111080611A (en) Railway wagon bolster spring fracture fault image identification method
CN108305239B (en) Bridge crack image repairing method based on generation type countermeasure network
CN111091547B (en) Railway wagon brake beam strut fracture fault image identification method
CN111080621B (en) Method for identifying railway wagon floor damage fault image
CN111079822A (en) Method for identifying dislocation fault image of middle rubber and upper and lower plates of axle box rubber pad
CN111220619B (en) Insulator self-explosion detection method
Wang et al. Unstructured road detection using hybrid features
CN112907626A (en) Moving object extraction method based on satellite time-exceeding phase data multi-source information
CN112329859A (en) Method for identifying lost fault image of sand spraying pipe nozzle of railway motor car
CN113962951B (en) Training method and device for detecting segmentation model, and target detection method and device
CN115995056A (en) Automatic bridge disease identification method based on deep learning
Zheng et al. A novel deep learning-based automatic damage detection and localization method for remanufacturing/repair
CN112329858B (en) Image recognition method for breakage fault of anti-loosening iron wire of railway motor car
Shajahan et al. Automated inspection of monopole tower using drones and computer vision
CN112308135A (en) Railway motor car sand spreading pipe loosening fault detection method based on deep learning
CN114332006A Automatic quantitative assessment method for urban battle damage

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210205