CN112907734B - TEDS fault detection method based on virtual CRH380A model and deep learning - Google Patents


Info

Publication number
CN112907734B
CN112907734B CN202110253132.5A CN112907734B
Authority
CN
China
Prior art keywords
fault
model
picture
teds
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110253132.5A
Other languages
Chinese (zh)
Other versions
CN112907734A (en)
Inventor
刘斯斯
倪海
罗意平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University filed Critical Central South University
Priority to CN202110253132.5A priority Critical patent/CN112907734B/en
Publication of CN112907734A publication Critical patent/CN112907734A/en
Application granted granted Critical
Publication of CN112907734B publication Critical patent/CN112907734B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/20 Administration of product repair or maintenance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Human Resources & Organizations (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Geometry (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The TEDS fault detection method based on a virtual CRH380A model and deep learning comprises the following steps: step one, photograph TEDS fault pictures and generate rendered fault pictures; step two, convert the pictures with a generative adversarial network; step three, construct a training sample set: mix the photorealistic fault pictures with a small number of real fault pictures and annotate the pictures with labeling software; step four, train a target detection model and construct a deep fault detection model; step five, detect and judge TEDS faults. The method uses a neural network and a virtual model to build a large-scale, multidimensional feature recognition model library of EMU dynamic faults, achieving highly accurate automatic detection of EMU equipment faults. The project supports the TEDS system's automatic detection of EMU equipment faults and is estimated to save China's railway EMU depots substantial manual inspection costs.

Description

TEDS fault detection method based on virtual CRH380A model and deep learning
Technical Field
The invention relates to the technical field of nondestructive visual fault detection, in particular to a TEDS fault detection method based on a virtual CRH380A model and deep learning.
Background
With the rapid development of high-speed railway construction in China, any slight fault on the body of a motor train unit (EMU) running at high speed poses an enormous safety hazard. It is therefore vital to monitor component states during high-speed operation, issue fault warnings, and improve the quality and supervision of EMU maintenance and operation. The EMU dynamic fault detection system (TEDS) uses high-speed area-array and line-array cameras installed at the trackside to capture visual images of the EMU underbody, the skirt boards on both sides of the body, the coupling devices, the bogies, and so on. Automatic recognition technology identifies body faults and issues graded alarms; at the same time, the images are transmitted in real time over a network to an indoor monitoring terminal, where anomaly alarms are confirmed and submitted manually, improving the operating quality and efficiency of the EMU depot.
From the perspective of computer vision, the automatic recognition technology the TEDS system uses to identify body faults remains at a low level of image processing: taking time as the axis, it compares, pixel by pixel, two temporally adjacent photographs of the same part of the EMU. In the design and implementation of an EMU operation-fault dynamic image detection system, researchers used the Scale-Invariant Feature Transform (SIFT) to extract keypoints from fault pictures, used Euclidean distance as the similarity criterion for matching keypoint feature vectors between two images, and then compared the Histogram of Oriented Gradients (HOG) descriptors of the two matched images. When the comparison result exceeds a predetermined threshold, the image is judged to contain an abnormal fault. Both the Euclidean distance threshold and the subsequent descriptor matching threshold are set manually. Such single-line pixel comparison identifies static workpieces well, but it is unsuitable for fault detection on large, variable EMU structural parts. According to statistics, the fault alarm accuracy of the existing TEDS system is only about one in ten thousand to three in a thousand.
Therefore, in the maintenance dispatching room (emergency center) of an EMU depot, the pictures captured by the TEDS system are still inspected manually. Manual inspection misses many faults and is inefficient, and TEDS analysts can cope with the excessive workload only by working overtime. In the long run, the number of EMUs grows daily, and in the face of such a huge workload the misdiagnosis rate will rise as well.
At present, mainstream research proposes deep learning to improve TEDS fault detection performance, but the ability to detect rare faults remains insufficient. The reason is not that deep learning algorithms are weak, but that large quantities of complete fault image data are lacking: fault images collected in practical TEDS operation are extremely scarce and fault features are highly sparse, and the rarer the fault, the scarcer the corresponding images.
Disclosure of Invention
The technical problem to be solved by the invention is to overcome the above defects in the prior art and provide a TEDS fault detection method based on a virtual CRH380A model and deep learning that can generate a large number of near-real fault images even when TEDS fault samples are scarce, enlarge the training data set, accurately detect rare faults, and automate fault detection.
The technical scheme adopted by the invention to solve the technical problem is as follows. The TEDS fault detection method based on the virtual CRH380A model and deep learning comprises the following steps. Step one, photograph TEDS fault pictures and generate rendered fault pictures: for faults that are hard to collect and occur infrequently, photograph TEDS pictures containing the fault, build a rendered fault picture containing the specified fault model with 3ds Max, and apply preliminary material rendering to the faulty area. Step two, convert the pictures with a generative adversarial network: use a cycle-consistency generative adversarial deep neural network to learn from real TEDS fault pictures, then perform style transfer on the fault pictures rendered in 3ds Max to generate photorealistic fault pictures whose style approaches that of real fault pictures. Step three, construct a training sample set: mix the photorealistic fault pictures with a small number of real fault pictures and annotate the pictures with labeling software. Step four, train a target detection model: build a deep fault detection model from a target detection network and the training sample set constructed in step three. Step five, detect and judge TEDS faults: use the deep fault detection model constructed in step four to detect and judge faults in real pictures collected by the TEDS system.
Further, the rendering in step one specifically comprises: 1) edit the model with the V-Ray Next renderer; 2) use the edited model to inject a fault into the virtual CRH380 model; 3) set the relevant parameters and render the model's materials.
Further, the rendering setting parameters include: input image size, pixel filter, color map type, lighting, and camera position.
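For illustration, the rendering parameters listed above can be collected into a single settings dictionary. Note this is a hypothetical sketch, not V-Ray Next's actual configuration interface: only the 256 × 256 output size, Catmull-Rom pixel filter, and Reinhard color mapping appear in the embodiments; the key names and the camera position are assumed.

```python
# Hypothetical collection of the render parameters named above; key names and
# the camera coordinates are illustrative placeholders, not V-Ray's real API.
render_settings = {
    "output_size": (256, 256),            # rendered image size in pixels
    "pixel_filter": "Catmull-Rom",        # anti-aliasing filter from the embodiments
    "color_mapping": "Reinhard",          # tone-mapping type from the embodiments
    "lighting": "irradiance_map",         # illumination used for first diffuse bounce
    "camera_position": (0.0, -3.5, 1.2),  # placeholder trackside viewpoint (x, y, z)
}

# A renderer wrapper could then consume these values in one place.
width, height = render_settings["output_size"]
```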
Further, step two specifically comprises: a) data preprocessing: cropping, normalization, and random flipping; b) use the cycle-consistency generative adversarial deep neural network to transfer the domain of rendered fault pictures captured from the virtual model to the style of real fault pictures.
Further, in step a), cropping means resizing a 256 × 256 picture to 286 × 286 and then cropping a 256 × 256 pixel picture from it; normalization means scaling the value of each channel of the input image from [0,255] to [-1,1]; random flipping means horizontally flipping the image with probability 0.5 to reduce overfitting.
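The preprocessing above (resize, random crop, normalize, random horizontal flip) can be sketched in NumPy. This is a minimal stand-in, assuming a uint8 image input; nearest-neighbour indexing substitutes for whatever interpolation the authors actually used when enlarging to 286 × 286.

```python
import numpy as np

def preprocess(img, resize=286, crop=256, rng=None):
    """Resize to 286x286, random-crop 256x256, scale [0,255] to [-1,1], random flip."""
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = img.shape[:2]
    # nearest-neighbour upscaling as a stand-in for proper interpolation
    ys = np.arange(resize) * h // resize
    xs = np.arange(resize) * w // resize
    up = img[ys][:, xs]
    # random 256x256 crop from the enlarged picture
    y0 = int(rng.integers(0, resize - crop + 1))
    x0 = int(rng.integers(0, resize - crop + 1))
    patch = up[y0:y0 + crop, x0:x0 + crop]
    # normalize each channel from [0, 255] to [-1, 1]
    out = patch.astype(np.float32) / 127.5 - 1.0
    # horizontal flip with probability 0.5 to reduce overfitting
    if rng.random() < 0.5:
        out = out[:, ::-1]
    return out
```

In a training pipeline this function would be applied independently to every sample each epoch, so the crop window and flip vary between passes over the data.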
Further, the specific steps of step b) are: the generator of the adversarial deep neural network generates images, the discriminator judges and scores them, and through the cyclic adversarial generation process the domain of TEDS fault pictures produced from the virtual model is converted into the domain of real fault pictures, so that the photorealistic fault pictures generated from the virtual model closely approximate real fault pictures.
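The cycle-consistency constraint behind this step can be illustrated numerically: two generators map between the rendered and real domains, and an L1 loss penalizes any failure to reconstruct the input after a round trip. The affine maps below are toy stand-ins for trained convolutional generators, chosen so the round trip is exact; they only demonstrate the loss, not the networks.

```python
import numpy as np

# Toy stand-ins for the two generators of a cycle-consistency GAN:
# G maps the rendered-picture domain toward the real-picture style, F maps back.
# Real generators are deep networks; these affine maps are hypothetical
# placeholders chosen so that F is the exact inverse of G.
def G(x):
    return 0.9 * x + 0.1

def F(y):
    return (y - 0.1) / 0.9

def cycle_consistency_loss(x_rendered, y_real):
    """L1 cycle loss: F(G(x)) should reconstruct x and G(F(y)) should reconstruct y."""
    forward = np.abs(F(G(x_rendered)) - x_rendered).mean()
    backward = np.abs(G(F(y_real)) - y_real).mean()
    return forward + backward
```

During training this term is added to the adversarial losses from the two discriminators, so the generators learn a style transfer that preserves the content (the injected fault) of each rendered picture.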
Further, in step three, the labeling software is LabelImg; in LabelImg, a bounding box of the fault type is drag-selected around the fault location in the picture, the fault region is named, and the annotation is saved as an XML file.
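LabelImg saves its annotations in Pascal VOC style XML by default, so the marked fault boxes can be read back with the standard library. A small parser sketch (the element names follow the VOC convention; the sample annotation and the `broken_wire` class name are illustrative):

```python
import xml.etree.ElementTree as ET

def parse_voc_annotation(xml_text):
    """Extract (fault name, bounding box) pairs from a LabelImg-style annotation."""
    root = ET.fromstring(xml_text)
    objects = []
    for obj in root.iter("object"):
        name = obj.findtext("name")
        bb = obj.find("bndbox")
        # VOC boxes are stored as xmin/ymin/xmax/ymax pixel coordinates
        box = tuple(int(bb.findtext(tag)) for tag in ("xmin", "ymin", "xmax", "ymax"))
        objects.append((name, box))
    return objects

# Hypothetical annotation of a broken-wire fault region
sample = """<annotation>
  <filename>fault_001.jpg</filename>
  <object>
    <name>broken_wire</name>
    <bndbox><xmin>48</xmin><ymin>60</ymin><xmax>112</xmax><ymax>140</ymax></bndbox>
  </object>
</annotation>"""
```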
Further, in step four, the specific steps for constructing the deep fault detection model are: extract the data features of the image with convolution, linear, activation, and pooling layers, output candidate proposal regions with a region proposal network (RPN), pool the regions, and stop training and save the model once the detection rate reaches the target value.
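Candidate regions from the RPN are conventionally matched against ground-truth boxes by intersection over union (IoU) during training and evaluation. The patent does not name the matching criterion, so the helper below is an assumption reflecting standard Faster R-CNN practice:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (xmin, ymin, xmax, ymax)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A proposal whose IoU with some ground-truth fault box exceeds a threshold (commonly 0.5 or 0.7) is treated as a positive sample; the "detection rate" used as the stopping criterion would be computed from such matches.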
Further, in step four, the target detection network is Faster R-CNN or CenterNet.
Further, in step five, fault detection specifically comprises: feed a real fault picture to be detected into the deep fault detection model, let the model compare the learned features against the detection region, and mark the fault location once it is found, completing fault detection.
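When the model marks fault locations, overlapping detections of the same fault are usually merged by non-maximum suppression before the final marking. The patent does not describe this post-processing, so the greedy sketch below is an assumption reflecting the conventional pipeline for Faster R-CNN and CenterNet style detectors:

```python
def nms(detections, iou_threshold=0.5):
    """Greedy non-maximum suppression over (score, (xmin, ymin, xmax, ymax)) pairs."""
    def iou(a, b):
        ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    keep = []
    # visit detections from highest to lowest confidence score
    for score, box in sorted(detections, reverse=True):
        # keep a box only if it does not heavily overlap an already-kept box
        if all(iou(box, kept_box) < iou_threshold for _, kept_box in keep):
            keep.append((score, box))
    return keep
```

The surviving boxes are then drawn onto the TEDS picture as the marked fault positions.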
The beneficial effects of the invention are: a neural network and a virtual model are used to build a large-scale, multidimensional feature recognition model library of EMU dynamic faults, achieving highly accurate automatic detection of EMU equipment faults; the project supports the TEDS system's automatic detection of EMU equipment faults and is estimated to save China's railway EMU depots 52.36 million yuan per year in manual inspection costs.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a flow chart of the rendering in step one.
Detailed Description
The following examples further illustrate embodiments of the present invention:
example 1: as shown in fig. 1, the TEDS fault detection method based on the virtual CRH380A model and deep learning includes the following steps:
Step one, photograph TEDS fault pictures and generate rendered fault pictures: for the broken-iron-wire fault, select a TEDS fault picture containing an iron wire component, then build a broken-iron-wire fault model with 3ds Max and apply preliminary material rendering to the faulty area, as shown in FIG. 2. The specific steps are as follows:
1) Edit the model with the V-Ray Next renderer and set the model materials: the traction motor and gearbox use aluminum-alloy materials, and the remaining parts are mainly steel. 2) Modify the CRH380 model in 3ds Max and inject broken iron wires onto the model. 3) Set the rendered output image to 256 × 256 pixels, set the pixel filter to Catmull-Rom, set the color mapping type to Reinhard, and use illumination (irradiance) mapping in the first diffuse-reflection calculation to make the model more vivid and lifelike.
Step two, convert the pictures with a generative adversarial network: use a cycle-consistency generative adversarial deep neural network to learn from real TEDS fault pictures, then perform style transfer on the fault pictures rendered in 3ds Max to generate photorealistic fault pictures whose style approaches that of real fault pictures. The specific steps are as follows:
a) Data preprocessing: cutting a 256 × 256 picture into 286 × 286 size, cutting a 256 × 256 pixel picture from the 256 × 256 picture, normalizing the value of each channel of the cut image from [0,255] to [ -1,1], and turning the image with a probability level of 0.5;
b) Use the cycle-consistency generative adversarial network to transfer the picture domain generated by the virtual model to the style of real fault pictures. Specifically: the generator of the adversarial deep neural network generates images, the discriminator judges and scores them, and through the cyclic adversarial generation process the domain of TEDS fault pictures produced from the virtual model is converted into the domain of real fault pictures, so that the photorealistic fault pictures generated from the virtual model closely approximate real fault pictures.
Step three, construct a training sample set: mix the photorealistic fault pictures with a small number of real fault pictures and annotate the pictures with labeling software to build the training sample set. In LabelImg, a bounding box of the fault type is drag-selected around the fault location in the picture, the fault region is named, and the annotation is saved as an XML file.
Step four, train a target detection model: build the deep fault detection model from the CenterNet target detection network and the training sample set constructed in step three. Specifically: extract the data features of the image with convolution, linear, activation, and pooling layers, output candidate proposal regions, pool the regions, and stop training and save the model once the detection rate reaches the target value.
Step five, detect and judge TEDS faults: use the deep fault detection model constructed in step four to detect and judge faults in real pictures collected by the TEDS system. Specifically: feed the real fault picture to be detected into the deep fault detection model, let the model compare the learned features against the detection region, and mark the fault location once it is found, completing fault detection.
Example 2: as shown in fig. 1, the TEDS fault detection method based on the virtual CRH380A model and deep learning includes the following steps:
Step one, photograph TEDS fault pictures and generate rendered fault pictures: select a TEDS fault picture containing the relevant component, then build the fault model with 3ds Max and apply preliminary material rendering to the faulty area, as shown in FIG. 2. The specific steps are as follows:
1) Edit the model with the V-Ray Next renderer and set the model materials: the traction motor and gearbox use aluminum-alloy materials, and the remaining parts are mainly steel. 2) Modify the CRH380 model in 3ds Max and inject a fallen-off nut fault onto the model. 3) Set the rendered output image to 256 × 256 pixels, set the pixel filter to Catmull-Rom, set the color mapping type to Reinhard, and use illumination (irradiance) mapping in the first diffuse-reflection calculation to make the model more vivid and lifelike.
Step two, convert the pictures with a generative adversarial network: use a cycle-consistency generative adversarial deep neural network to learn from real TEDS fault pictures, then perform style transfer on the fault pictures rendered in 3ds Max to generate photorealistic fault pictures whose style approaches that of real fault pictures. The specific steps are as follows:
a) Data preprocessing: cutting a 256 × 256 picture into 286 × 286 size, cutting a 256 × 256 pixel picture from the 256 × 256 picture, normalizing the value of each channel of the cut image from [0,255] to [ -1,1], and turning the image with a probability level of 0.5;
b) Use the cycle-consistency generative adversarial network to transfer the picture domain generated by the virtual model to the style of real fault pictures. Specifically: the generator of the adversarial deep neural network generates images, the discriminator judges and scores them, and through the cyclic adversarial generation process the domain of TEDS fault pictures produced from the virtual model is converted into the domain of real fault pictures, so that the photorealistic fault pictures generated from the virtual model closely approximate real fault pictures.
Step three, construct a training sample set: mix the photorealistic fault pictures with a small number of real fault pictures and annotate the pictures with labeling software to build the training sample set. In LabelImg, a bounding box of the fault type is drag-selected around the fault location in the picture, the fault region is named, and the annotation is saved as an XML file.
Step four, train a target detection model: build the deep fault detection model from the Faster R-CNN target detection network and the training sample set constructed in step three. Specifically: extract the data features of the image with convolution, linear, activation, and pooling layers, output candidate proposal regions with a region proposal network (RPN), pool the regions, and stop training and save the model once the detection rate reaches the target value.
Step five, detect and judge TEDS faults: use the deep fault detection model constructed in step four to detect and judge faults in real pictures collected by the TEDS system. Specifically: feed the real fault picture to be detected into the deep fault detection model, let the model compare the learned features against the detection region, and mark the fault location once it is found, completing fault detection.
Matters not described in detail in the specification are well within the skill of those in the art.
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, it is possible to make various improvements and modifications without departing from the technical principle of the present invention, and these improvements and modifications should also be considered to be within the protection scope of the present invention.

Claims (7)

1. The TEDS fault detection method based on the virtual CRH380A model and deep learning is characterized by comprising the following steps. Step one, photograph TEDS fault pictures and generate rendered fault pictures: for faults that are hard to collect and occur infrequently, photograph TEDS pictures containing the fault, build a rendered fault picture containing the specified fault model with 3ds Max, and apply preliminary material rendering to the faulty area. Step two, convert the pictures with a generative adversarial network: use a cycle-consistency generative adversarial deep neural network to learn from real TEDS fault pictures, then perform style transfer on the fault pictures rendered in 3ds Max to generate photorealistic fault pictures whose style approaches that of real fault pictures. Step three, construct a training sample set: mix the photorealistic fault pictures with a small number of real fault pictures and annotate the pictures with labeling software to build a training sample set. Step four, train a target detection model: build a deep fault detection model from a target detection network and the training sample set constructed in step three. Step five, detect and judge TEDS faults: use the deep fault detection model constructed in step one through four to detect and judge faults in real pictures collected by the TEDS system, wherein the rendering in step one comprises the following specific steps: 1) edit the model with the V-Ray Next renderer; 2) edit the CRH380 model with the 3ds Max renderer and inject the fault into the virtual CRH380 model; 3) set the relevant parameters and render the model's materials;
step two comprises the following specific steps: a) data preprocessing: cropping, normalization, and random flipping; b) use the cycle-consistency generative adversarial deep neural network to transfer the domain of rendered fault pictures captured from the virtual model to the style of real fault pictures;
the concrete steps of the step b) are as follows: the method comprises the steps of randomly generating an image by using a generator for generating a deep neural network through countermeasure, judging and grading the image by using a discriminator, converting a domain of a TEDS fault picture generated by a virtual model into a domain of a real fault picture in a cyclic countermeasure generation process, enabling the height of a super-realistic fault picture generated by the virtual model to be close to the real fault picture, and expanding a training data set.
2. The TEDS fault detection method based on the virtual CRH380A model and deep learning of claim 1, wherein in step 3) the rendering setting parameters include: input image size, pixel filter, color map type, lighting, and camera position.
3. The TEDS fault detection method based on the virtual CRH380A model and deep learning of claim 1, wherein the cropping in step a) means resizing a 256 × 256 picture to 286 × 286 and then cropping a 256 × 256 pixel picture from it; normalization means scaling the value of each channel of the input image from [0,255] to [-1,1]; random flipping means horizontally flipping the image with probability 0.5 to reduce overfitting.
4. The TEDS fault detection method based on the virtual CRH380A model and deep learning of claim 1, wherein in step three the labeling software is LabelImg; in LabelImg, a bounding box of the fault type is drag-selected around the fault location in the picture, the fault region is named, and the annotation is saved as an XML file.
5. The TEDS fault detection method based on the virtual CRH380A model and deep learning of claim 3, wherein in step four the specific steps for constructing the deep fault detection model are: extract the data features of the image with convolution, linear, activation, and pooling layers, output candidate proposal regions with a region proposal network (RPN), pool the regions, and stop training and save the model once the detection rate reaches the target value.
6. The method of claim 1, wherein in step four the target detection network is Faster R-CNN or CenterNet.
7. The TEDS fault detection method based on the virtual CRH380A model and deep learning of claim 4, wherein in step five the specific steps of fault detection are: feed a real fault picture to be detected into the deep fault detection model, let the model compare the learned features against the detection region, and mark the fault location once it is found, completing fault detection.
CN202110253132.5A 2021-03-09 2021-03-09 TEDS fault detection method based on virtual CRH380A model and deep learning Active CN112907734B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110253132.5A CN112907734B (en) 2021-03-09 2021-03-09 TEDS fault detection method based on virtual CRH380A model and deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110253132.5A CN112907734B (en) 2021-03-09 2021-03-09 TEDS fault detection method based on virtual CRH380A model and deep learning

Publications (2)

Publication Number Publication Date
CN112907734A (en) 2021-06-04
CN112907734B (en) 2023-04-11

Family

ID=76107325

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110253132.5A Active CN112907734B (en) 2021-03-09 2021-03-09 TEDS fault detection method based on virtual CRH380A model and deep learning

Country Status (1)

Country Link
CN (1) CN112907734B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113256778B (en) * 2021-07-05 2021-10-12 爱保科技有限公司 Method, device, medium and server for generating vehicle appearance part identification sample

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109934172B (en) * 2019-03-14 2021-10-15 中南大学 GPS-free full-operation line fault visual detection and positioning method for high-speed train pantograph
CN111783525B (en) * 2020-05-20 2022-10-18 中国人民解放军93114部队 Aerial photographic image target sample generation method based on style migration
CN111862029A (en) * 2020-07-15 2020-10-30 哈尔滨市科佳通用机电股份有限公司 Fault detection method for bolt part of vertical shock absorber of railway motor train unit

Also Published As

Publication number Publication date
CN112907734A (en) 2021-06-04

Similar Documents

Publication Publication Date Title
CN110059694B (en) Intelligent identification method for character data in complex scene of power industry
CN108898085B (en) Intelligent road disease detection method based on mobile phone video
CN109118479B (en) Capsule network-based insulator defect identification and positioning device and method
CN108090423B (en) Depth license plate detection method based on thermodynamic diagram and key point regression
CN111650204A (en) Transmission line hardware defect detection method and system based on cascade target detection
CN101576956B (en) On-line character detection method based on machine vision and system thereof
CN112541389B (en) Transmission line fault detection method based on EfficientDet network
CN106610969A (en) Multimodal information-based video content auditing system and method
CN107833213A (en) A kind of Weakly supervised object detecting method based on pseudo- true value adaptive method
CN110232379A (en) A kind of vehicle attitude detection method and system
CN110598693A (en) Ship plate identification method based on fast-RCNN
CN110133443B (en) Power transmission line component detection method, system and device based on parallel vision
CN109829458B (en) Method for automatically generating log file for recording system operation behavior in real time
CN112070135A (en) Power equipment image detection method and device, power equipment and storage medium
CN111402224A (en) Target identification method for power equipment
CN111145222A (en) Fire detection method combining smoke movement trend and textural features
CN112102296A (en) Power equipment target identification method based on human concept
CN111507249A (en) Transformer substation nest identification method based on target detection
CN112907734B (en) TEDS fault detection method based on virtual CRH380A model and deep learning
Wang et al. Railway insulator detection based on adaptive cascaded convolutional neural network
CN116486240A (en) Application of image recognition algorithm in intelligent inspection method of unmanned aerial vehicle of power transmission line
CN111507398A (en) Transformer substation metal instrument corrosion identification method based on target detection
Zheng et al. Rail detection based on LSD and the least square curve fitting
CN111597939B (en) High-speed rail line nest defect detection method based on deep learning
CN114694130A (en) Method and device for detecting telegraph poles and pole numbers along railway based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant