CN113361354B - Track component inspection method and device, computer equipment and storage medium - Google Patents

Track component inspection method and device, computer equipment and storage medium

Info

Publication number
CN113361354B
Authority
CN
China
Prior art keywords
rail
target
image
picture
target area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110592734.3A
Other languages
Chinese (zh)
Other versions
CN113361354A (en)
Inventor
张斌
卓卉
黄鑫
孟宪洪
王宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Guoneng Shuohuang Railway Development Co Ltd
Original Assignee
Beihang University
Guoneng Shuohuang Railway Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University, Guoneng Shuohuang Railway Development Co Ltd filed Critical Beihang University
Priority to CN202110592734.3A priority Critical patent/CN113361354B/en
Publication of CN113361354A publication Critical patent/CN113361354A/en
Application granted granted Critical
Publication of CN113361354B publication Critical patent/CN113361354B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a track component inspection method and device, computer equipment and a storage medium. The track component inspection method comprises the following steps: acquiring a rail picture, the rail picture comprising multi-view rail images obtained by shooting along a rail with an unmanned aerial vehicle; extracting a target area from the rail picture by adopting an area generation network containing an attention mechanism, and determining a target detection result, the target area including a rail clip and the target detection result comprising a feature matrix of the target area; performing multi-target screening on the target detection result based on a preset screening rule to obtain a screening picture; performing feature fusion on the feature matrixes of all views of the same target area in the screening picture according to the multi-view rail images to obtain a fused image; and performing anomaly detection on the fused image by adopting a meta-learning anomaly detection model and outputting an inspection result, the inspection result including an anomaly probability value for the rail clip. The application effectively improves the flexibility, accuracy and real-time performance of the inspection process.

Description

Track component inspection method and device, computer equipment and storage medium
Technical Field
The application relates to the technical field of inspection, in particular to a rail component inspection method, a rail component inspection device, computer equipment and a storage medium.
Background
The most important element in ensuring the safety of railway traffic is keeping the track in good condition at all times. The rail clip is the track component that connects the concrete sleeper to the steel rail, and is also called the intermediate connecting part. Its function is to fix the rail firmly to the sleeper, maintain a reliable long-term connection between the two, and prevent the rail from moving transversely or longitudinally relative to the sleeper and becoming detached, which would endanger traffic. Whether the rail clips are intact therefore largely determines the probability of an accident on the current road section.
Traditional railway track inspection is limited by the available technology: detecting faults such as rail wear and fastener damage requires railway workers to walk the line and inspect it manually. Manual inspection is affected by the differing standards of different inspectors, which influences the final detection result. In addition, manual inspection consumes large amounts of manpower, material and financial resources, and its efficiency is extremely low. Current fault detection approaches include the eddy current method, the ultrasonic method, and methods based on object detection, among others.
In the course of implementation, the inventors found that the traditional technology has at least the following problems: existing detection methods suffer from low detection efficiency and poor accuracy.
Disclosure of Invention
In view of the foregoing, it is necessary to provide a method, an apparatus, a computer device and a storage medium for inspecting a track component, which can improve detection efficiency and accuracy.
In order to achieve the above object, in one aspect, an embodiment of the present application provides a rail component inspection method, including:
acquiring a rail picture; the rail picture comprises a multi-view rail image obtained by shooting along a rail by an unmanned aerial vehicle;
extracting a target area from the rail picture by adopting an area generation network containing an attention mechanism, and determining a target detection result; the target area includes a rail clip; the target detection result comprises a feature matrix of the target area;
performing multi-target screening on the target detection result based on a preset screening rule to obtain a screening picture;
according to the multi-view rail image, performing feature fusion on each view angle feature matrix of the same target area in the screening picture to obtain a fusion image;
anomaly detection is carried out on the fused image by adopting a meta-learning anomaly detection model, and a routing inspection result is output; the inspection results include an anomaly probability value for the rail clip.
In one embodiment, the meta-learning anomaly detection model comprises a small sample learning model;
the method comprises the steps of detecting the abnormality of the fused image by adopting a meta-learning abnormality detection model and outputting a polling result, and comprises the following steps:
training the positive and negative samples and the training samples by adopting a prototype network to obtain a small sample learning model; the positive and negative samples include pictures of intact rail fasteners and pictures of failed rail fasteners; the training samples include pictures taken by the drone of rail fasteners that have been determined to be intact or faulty;
and carrying out bidirectional comparison on the fused image and the positive and negative samples based on the small sample learning model to obtain a routing inspection result.
In one embodiment, the prototype network is adopted to train the positive and negative samples and the training samples, and the step of obtaining the small sample learning model comprises the following steps:
and projecting the positive and negative samples into the same projection space, and training the positive and negative samples and the training samples together based on the triple loss in the projection space to obtain a small sample learning model.
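As a minimal NumPy sketch of the triplet-loss objective and the prototype-network class representatives described above (the patent does not specify the embedding network, so plain vectors stand in for projected samples; all names here are illustrative):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet loss over vectors in a shared projection space: the anchor
    must be closer to the positive (same class) than to the negative
    (other class) by at least `margin`, otherwise a penalty is incurred."""
    d_pos = float(np.sum((anchor - positive) ** 2))
    d_neg = float(np.sum((anchor - negative) ** 2))
    return max(d_pos - d_neg + margin, 0.0)

def class_prototype(embeddings):
    """Prototype-network class representative: the mean embedding."""
    return np.mean(np.asarray(embeddings, dtype=float), axis=0)

# An intact-clip anchor near another intact sample and far from a faulty
# sample already satisfies the margin, so the loss is zero.
intact_a = np.array([0.0, 0.1])
intact_b = np.array([0.1, 0.0])
faulty = np.array([3.0, 3.0])
print(triplet_loss(intact_a, intact_b, faulty))  # 0.0
```

In training, minimizing this loss pulls embeddings of intact and faulty fasteners into separable clusters, after which the bidirectional comparison against the positive and negative samples reduces to distance comparisons against the two class prototypes.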
In one embodiment, after the step of obtaining the inspection result, the method further comprises the steps of:
processing the inspection result through a classifier to obtain the abnormal probability value of the single rail fastener; the classifier is used for characterizing depth cross correlation among pixels and a nonlinear metric among feature matrixes.
In one embodiment, the feature matrix is a feature map;
the method comprises the following steps of extracting a target area from a rail picture by using an area generation network containing an attention mechanism, and determining a target detection result, wherein the steps comprise:
filtering interference frames from a single-frame rail picture based on the area generation network containing the attention mechanism to obtain a target detection frame; the interference frames comprise background frames and frames that do not match the target area category;
and identifying the target detection frame by adopting a convolutional neural network to obtain a characteristic diagram of the target area.
In one embodiment, the multi-view rail images comprise dual-view images taken by two unmanned aerial vehicles at the same speed and along two sides of the same rail;
according to the multi-view rail image, performing feature fusion on feature matrixes of all views in the same target area in the screening picture to obtain a fused image, wherein the step of performing feature fusion on the feature matrixes of all views in the same target area in the screening picture comprises the following steps:
fusing by adopting a middle-layer feature matrix of the feature map to integrate shallow features of the double-view-angle image; shallow features include color and/or gradient.
In one embodiment, the preset screening rule comprises the confidence level of the target area and the shooting distance between the target area and the unmanned aerial vehicle; the inspection result also comprises the position and fault type of the fault rail fastener;
further comprising the steps of:
weighting according to the abnormal probability values of the rail fasteners of the adjacent frames to obtain the final abnormal probability value of the single rail fastener;
outputting an alarm instruction under the condition that the final abnormal probability value of the rail fastener is greater than a threshold value; the alarm instruction is used for indicating the system to alarm and acquiring the inspection result.
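The adjacent-frame weighting and alarm threshold above can be sketched as follows (a hedged illustration; the weight values, the threshold and the function names are assumptions, not fixed by the patent):

```python
def fused_anomaly_probability(frame_probs, weights=None):
    """Weighted average of the anomaly probabilities obtained for the
    same rail fastener in adjacent frames."""
    if weights is None:
        weights = [1.0] * len(frame_probs)
    total = float(sum(weights))
    return sum(p * w for p, w in zip(frame_probs, weights)) / total

def should_alarm(final_prob, threshold=0.5):
    """Alarm when the final anomaly probability exceeds the threshold."""
    return final_prob > threshold

# The same clip scored in three adjacent frames; the middle (clearest)
# frame is given double weight.
final = fused_anomaly_probability([0.9, 0.8, 0.95], weights=[1, 2, 1])
print(should_alarm(final, threshold=0.85))  # True
```

Weighting across frames smooths out a single noisy prediction, so one misjudged frame does not by itself trigger or suppress the alarm.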
A track component inspection device, comprising:
the image acquisition module is used for acquiring a rail image; the rail picture comprises a multi-view rail image obtained by shooting along a rail by an unmanned aerial vehicle;
the target extraction module is used for extracting a target area from the rail picture by adopting an area generation network containing an attention mechanism and determining a target detection result; the target area includes a rail clip; the target detection result comprises a feature matrix of the target area;
the screening module is used for carrying out multi-target screening on the target detection result based on a preset screening rule to obtain a screening picture;
the fusion module is used for performing feature fusion on each view angle feature matrix of the same target area in the screening picture according to the multi-view angle rail image to obtain a fusion image;
the detection module is used for detecting the abnormality of the fused image by adopting a meta-learning abnormality detection model and outputting a routing inspection result; the inspection results include probability values of anomalies of the rail clip.
A computer device comprising a memory storing a computer program and a processor which, when executing the computer program, implements the steps of the above method.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method.
One of the above technical solutions has the following advantages and beneficial effects:
according to the method, the multi-view rail image is processed by adopting the area generation network containing the attention mechanism, so that the generation of detection frames of other types can be inhibited under the condition of less training data, and the position of the rail fastener in the picture and the characteristic information contained in the rail fastener can be accurately extracted; the network input redundancy is further reduced through screening, and the target object can be easily transformed based on the meta-learning anomaly detection model without retraining; the regional generation network that this application utilized to contain the attention mechanism combines the abnormal detection model of meta-learning, has realized that small sample lightweight track part is automatic to be patrolled and examined, and whether the judgement that can be comparatively accurate is unusual and the abnormal degree of rail fastener. The problem of neural network training overfitting and unmanned aerial vehicle when patrolling and examining to judge undefined to small-size rail fastener damage condition is solved to this application to simplify the network, effectively improve network flexibility, degree of accuracy and real-time.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application or in the conventional technologies more clearly, the drawings used in the description of the embodiments or the conventional technologies are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a diagram of an application environment of a method for inspecting a track component according to an embodiment;
FIG. 2 is a schematic flow chart of a method for inspecting rail components in one embodiment;
FIG. 3 is a flowchart illustrating the target region extraction step in one embodiment;
FIG. 4 is a schematic flowchart of the abnormality detecting step in one embodiment;
FIG. 5 is a block diagram of the track component inspection device according to one embodiment;
FIG. 6 is a diagram of the internal structure of a computer device in one embodiment;
fig. 7 is an internal structural diagram of a computer device in another embodiment.
Detailed Description
To facilitate an understanding of the present application, the present application will now be described more fully with reference to the accompanying drawings. Embodiments of the present application are given in the accompanying drawings. This application may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
Where used, the singular forms "a", "an" and "the" may include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises/comprising," "includes" or "including," or "having," and the like, specify the presence of stated features, integers, steps, operations, components, parts, or combinations thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, components, parts, or combinations thereof. Also, as used in this specification, the term "and/or" includes any and all combinations of the associated listed items.
Among current detection approaches, the eddy current method detects rail damage mainly through the induced current generated by a rail flaw detector; however, the high-frequency excitation signal of such a detection system makes the detection signal difficult to process and the detection speed low. The ultrasonic method is widely used: its flaw detection probe detects internal defects of the rail by emitting continuous ultrasonic pulses into it, but it suffers from low accuracy and covers only a single category of rail defect.
In recent years, with the development of convolutional neural networks, methods based on target detection have also gradually been applied to railway inspection. Most target detection methods train a classifier so that feature extraction and classification can be performed on input images, the position of the target object extracted, and its category and abnormality judged. However, target detection methods usually require a large amount of training data, and the data acquisition process is very complex; the large number of railways in China, their wide distribution area and the diversity of their scenes inevitably increase the difficulty of track component inspection. Moreover, for track component inspection the target categories are few and the rail fastener form is uniform, while a large number of training pictures is required, so convolutional neural networks often overfit.
The application provides an area generation network containing an attention mechanism, and uses this network in combination with meta-learning to realize automatic small-sample, lightweight inspection of track components. This effectively addresses the problems that manual inspection is time- and labor-consuming, that railway fault judgments are inconsistent, and that traditional target detection methods require hard-to-acquire data and overfit easily, thereby improving the efficiency and safety of railway inspection. In order to make the objects, technical solutions and advantages of the present application more apparent, the application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit it.
The rail component inspection method can be applied in the application environment shown in fig. 1. The rail clip 102 is the track component that connects a concrete sleeper to the steel rail, also called the intermediate connecting part. The application proposes flying an unmanned aerial vehicle 104 along the rail track and photographing the rail and the surrounding road surface to obtain rail pictures from different viewing angles. The drone 104 may be a drone equipped with a high-definition camera.
Further, the drone 104 may communicate with a positioning system. The application uses the positioning system as an auxiliary measure to keep the flight of the individual drones synchronized: it corrects in good time the adverse effects of viewing-angle errors during image fusion, reacts promptly, and corrects the drone flight path, thereby preventing shooting-angle offsets between images from affecting the final fusion result. The positioning system may be, but is not limited to, various personal computers, laptops, smartphones, tablets and portable wearable devices; in some embodiments, the positioning system may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
In one embodiment, as shown in fig. 2, a rail component inspection method is provided, which is described by taking the method as an example applied to fig. 1, and comprises the following steps:
step 202, acquiring a rail picture;
The rail picture comprises multi-view rail images obtained by shooting along the rail with an unmanned aerial vehicle.
Specifically, a drone carrying a high-definition camera can photograph the ground from different angles. During image acquisition, a single drone or several drones may be used to collect image information. For example, two drones can photograph the rail and the surrounding road surface along the track to obtain rail pictures from different viewing angles.
Further, the shooting angle of the drone may include a depression (top-down) angle. In some embodiments, the multi-view rail images comprise dual-view images captured simultaneously by two drones flying at the same speed along the two sides of the same rail; that is, the multi-view rail images in this application can be dual-view images shot by two drones flying side by side, one on the left and one on the right, at the same speed.
Based on the multi-view rail images, feature maps of the same target area (such as a rail fastener) from different views can be fused in subsequent steps, improving the reliability of the fused image and providing more accurate and objective feature information for subsequent image processing.
In addition, the drone in this application can cooperate with the positioning system, which promptly corrects the adverse effects of viewing-angle errors during image fusion, reacts in time and corrects the drone flight path, thereby preventing shooting-angle offsets between images from affecting the final fusion result.
Step 204, extracting a target area of the rail picture by adopting an area generation network containing an attention mechanism, and determining a target detection result; the target area includes a rail clip; the target detection result includes a feature matrix of the target area.
Specifically, the application proposes to realize automatic small-sample, lightweight inspection of track components using an area generation network containing an attention mechanism. The images obtained by the drone are input into this network for analysis, and the target areas of the images are extracted. The target area can be a rail clip present in the picture: by extracting and locating the rail clips in the picture, the feature region containing the fastener is obtained. The target detection result may include a feature matrix of the target region (e.g., a feature map).
Further, the area generation network in the present application may be used to extract candidate frames in object detection architectures such as RCNN and Fast RCNN. Note that the area generation network is independent of the basic network framework, and any common object detection framework can be used. The attention mechanism in this application may be an algorithm that uses prior information to compute a weighted average of the input information and filter out a large amount of irrelevant information.
In one embodiment, the feature matrix may be a feature map; as shown in fig. 3, the step of extracting the target area from the rail image by using the area generation network including the attention mechanism and determining the target detection result may include:
step 302, filtering interference frames from the single-frame rail picture based on the area generation network containing the attention mechanism to obtain a target detection frame; the interference frames comprise background frames and frames that do not match the target area category;
and 304, identifying the target detection frame by using a convolutional neural network to obtain a characteristic diagram of the target area.
Specifically, the area generation network in the present application can filter most background frames and frames that do not match the category (i.e., frames that do not match the category of the target area) with as little image information as possible, generate a detection frame that contains the target (i.e., a target detection frame), and then perform target location and feature extraction on the rail clip in the picture.
In other words, the method and the device can inhibit the generation of detection frames of other types under the condition of less training data, and accurately extract the position of the rail clip in the picture and the characteristic information contained in the rail clip.
In some embodiments, for an input picture (i.e., a rail picture), the present application proposes that the input pictures may be unified to a size of 640 × 640 × 3 (pixels). Because a single rail clip occupies only a small part of the whole picture, and a single frame contains several rail clips, the integrity of a rail clip cannot be analyzed directly from the full picture.
Given that the resolution of the high-definition camera is very large, unifying the input pictures to 640 × 640 × 3 reduces the resolution and thus the network memory footprint. Secondly, the area generation network in the application places no special requirements on the basic network framework, which can be selected according to the actual situation; in some embodiments, the detection framework may also fix the input image resolution.
Furthermore, the application proposes to unify all extracted fastener positions to 80 × 80 × 3 (pixels), so that the extracted features have a uniform size, which facilitates subsequent operations. This size is large relative to the fastener itself, and if the located fastener's length or width is less than 80 pixels, the detection frame is expanded outwards.
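The outward expansion of undersized detection frames might look like the following sketch (the (x1, y1, x2, y2) coordinate layout and the clamping behaviour at image borders are assumptions for illustration):

```python
def expand_box(x1, y1, x2, y2, min_size=80, img_w=640, img_h=640):
    """Symmetrically expand a detection frame so that each side is at
    least `min_size` pixels, clamping to the image boundary."""
    if x2 - x1 < min_size:
        pad = (min_size - (x2 - x1)) / 2.0
        x1, x2 = max(0, x1 - pad), min(img_w, x2 + pad)
    if y2 - y1 < min_size:
        pad = (min_size - (y2 - y1)) / 2.0
        y1, y2 = max(0, y1 - pad), min(img_h, y2 + pad)
    return x1, y1, x2, y2

# A 50x60 fastener frame grows to 80x80 around its original centre.
print(expand_box(300, 300, 350, 360))  # (285.0, 290.0, 365.0, 370.0)
```

The expanded frame is then resampled to the unified 80 × 80 × 3 crop, so all fastener features fed to the later stages share one size.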
In addition, the method and the device adopt the convolutional neural network to identify the target detection frame to obtain the characteristic diagram of the target area. After the convolutional neural network is trained, the characteristics of the fasteners on the rail can be accurately captured and identified, the position information of the rail fasteners on the rail is returned, and the corresponding characteristic matrix is stored.
It should be noted that the operating principle of the area generation network containing the attention mechanism may include the following: the network weights the features of the input pictures, emphasizing the parts containing rail fasteners and attenuating the background, which improves the target detection effect more selectively. A model that detects rail clips directly is deliberately not trained; this also improves the model's transferability.
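A toy illustration of such attention weighting, using a softmax over per-location scores (a simplification: in the actual network these weights are learned, and the source of the scores is not specified here):

```python
import numpy as np

def attention_weight(features, scores):
    """Softmax-normalise per-location attention scores, then combine the
    per-location feature vectors with those weights, so locations likely
    to contain a rail fastener dominate and background is attenuated."""
    s = np.asarray(scores, dtype=float)
    w = np.exp(s - s.max())          # numerically stable softmax
    w /= w.sum()
    feats = np.asarray(features, dtype=float)
    return w, np.tensordot(w, feats, axes=1)

# Two spatial locations: the second score (a fastener-like region) is
# higher, so its feature vector dominates the weighted combination.
weights, combined = attention_weight([[1.0, 0.0], [0.0, 1.0]], [0.0, 2.0])
print(weights[1] > weights[0])  # True
```

This is the sense in which the network "computes a weighted average of the input information and filters out irrelevant information" mentioned earlier.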
Firstly, a target detection frame is obtained and then input into the meta-learning anomaly detection model for processing, so that the influence of different backgrounds on the detection effect of the anomaly detection model is avoided. For example, if the whole picture is directly input, a model is trained in the scene a, the test is performed in the scene B, and the picture in the scene B is input, so that misjudgment is easily caused due to the difference of the backgrounds of the two scenes.
And step 206, performing multi-target screening on the target detection result based on a preset screening rule to obtain a screening picture.
Specifically, the method provides multi-target screening of a target detection result, and in one embodiment, the preset screening rule includes the confidence level of a target area and the shooting distance between the target area and the unmanned aerial vehicle;
Further, the present application may screen target detection results according to two criteria: (1) the confidence of the target detection, and (2) the distance from the target position to the drone (i.e., the shooting distance). Because the drone shoots images continuously, the same rail clip may appear many times in different pictures, which seriously increases the computational load of the network. Screening the extracted rail fasteners reduces the redundancy of the network input and accelerates computation, so as to meet the real-time requirement of the inspection network.
In some embodiments, regarding the confidence of target detection, the target recognition network outputs a probability value that an identified object is a rail clip: the larger the value, the more likely the object is a rail clip, and the smaller the value, the less likely. By setting a threshold β, low-probability target objects can be filtered out, reducing the burden on the subsequent network and ensuring real-time performance.
In addition, regarding the distance from the target position to the drone, take the case of the drone shooting top-down pictures as an example. Because the drone shoots from a high position, each shot is likely to contain several rail fasteners. The rail clips closest to the drone are certainly the clearest and most complete in the picture, whereas clips far from the drone are captured poorly, with their apparent size distorted. Since the drone flies along the rail, a clip identified at a distance will still appear in the next shot, about 1 s (or more) later, closer to the drone and more clearly captured. The application therefore proposes that, among the captured images, priority be given to detection frames located in the middle of the image and closest to the drone.
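A possible reading of this two-criterion screening, sketched in Python (the threshold β, the use of the image centre as a proxy for proximity to the drone, and the detection tuple layout are all assumptions for illustration):

```python
def screen_detections(detections, beta=0.7, keep=5, img_size=(640, 640)):
    """Multi-target screening: (1) drop detections whose confidence is
    below the threshold beta; (2) prefer detection frames whose centre
    is nearest the image centre, as a proxy for the rail clip closest
    to the drone in a top-down shot. Each detection is (conf, cx, cy)."""
    cx0, cy0 = img_size[0] / 2.0, img_size[1] / 2.0
    kept = [d for d in detections if d[0] >= beta]
    kept.sort(key=lambda d: (d[1] - cx0) ** 2 + (d[2] - cy0) ** 2)
    return kept[:keep]

dets = [(0.95, 320, 320),   # confident, dead centre
        (0.90, 100, 100),   # confident, but far from the drone
        (0.50, 320, 330),   # too uncertain: removed by beta
        (0.80, 300, 310)]   # confident, near centre
print(screen_detections(dets, beta=0.7, keep=2))
# [(0.95, 320, 320), (0.8, 300, 310)]
```

The distant clip is safely dropped here because, as the text notes, it will reappear closer and clearer in a later frame.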
Step 208, performing feature fusion on the feature matrixes of all view angles of the same target area in the screened pictures according to the multi-view rail image, to obtain a fused image.
Specifically, the present application proposes that multi-target fusion can be performed across the different screened pictures. Taking the case where the multi-view rail image is a dual-view picture collected by unmanned aerial vehicles as an example, the feature maps of the same fastener from different view angles can be fused, improving the reliability of the fused image and providing more accurate, more objective feature information for subsequent image processing.
Furthermore, the screened features can be matched with the aid of a positioning system, and feature maps of the same rail fastener from different perspectives can be fused. The positioning system is an auxiliary measure to prevent the flights of the two unmanned aerial vehicles from becoming unsynchronized: when images are fused, an offset in the shooting angle has a large influence on the final fusion result, and the positioning system can promptly correct the deviation caused by view-angle error during image fusion and correct the flight path of the unmanned aerial vehicles in time.
In one embodiment, the multi-view rail images may include dual-view images captured by two drones at the same speed and along two sides of the same rail, respectively;
the step of performing feature fusion on the feature matrixes of all view angles of the same target area in the screened pictures according to the multi-view rail image to obtain a fused image may include:
fusing the middle-layer feature matrixes of the feature maps to integrate the shallow features of the dual-view image; the shallow features include color and/or gradient.
Specifically, the present application can perform feature fusion on the screened fastener features. In step 204, the present application proposes that the extracted fastener positions in the image can be unified to 80 × 3 and their feature matrixes retained; further, using the dual-view pictures collected by the unmanned aerial vehicles, the feature maps of the same fastener from different view angles can be fused. To better judge the fastener abnormality probability, the shallow image features that the present application proposes to integrate include color, gradient, and the like. Further, in the process of obtaining the fused image, the 160 × 256 mid-layer feature matrix stored in the convolutional neural network in step 304 may be selected for fusion, so as to provide more accurate and objective feature information for subsequent image processing.
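The mid-layer fusion step above can be sketched as follows. The 160 × 256 shape follows the text; the fusion operator itself (element-wise averaging of the two views' matrices) is an assumption, since the patent does not fix a particular operator.

```python
# Illustrative sketch of mid-layer feature fusion for the dual-view case.
import numpy as np

def fuse_views(feat_left, feat_right):
    """Fuse the mid-layer feature matrices of the same clip from two views.

    The two matrices are assumed to be aligned to the same fastener
    (the text attributes this matching to the positioning system).
    """
    assert feat_left.shape == feat_right.shape
    return (feat_left + feat_right) / 2.0

rng = np.random.default_rng(0)
left = rng.random((160, 256))    # mid-layer features, left-side drone
right = rng.random((160, 256))   # mid-layer features, right-side drone
fused = fuse_views(left, right)
```

Averaging keeps the fused matrix in the same shape and value range as each input, so downstream layers need no change; concatenation along the channel axis would be an equally plausible choice.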
Step 210, performing anomaly detection on the fused image by using a meta-learning anomaly detection model, and outputting a routing inspection result; the inspection results include an anomaly probability value for the rail clip.
Specifically, in the process of target abnormality detection, the present application proposes that the fused feature map can be input into a trained network model; the training of the network model incorporates the concept of meta-learning, so that the target object can be quickly replaced without retraining the model.
In one embodiment, the meta-learning anomaly detection model may include a small sample learning model; as shown in fig. 4, the step of performing anomaly detection on the fused image by using the meta-learning anomaly detection model and outputting the inspection result may include:
step 402, training positive and negative samples and training samples by adopting a prototype network to obtain a small sample learning model; the positive and negative samples include pictures of sound rail fasteners and pictures of failed rail fasteners; the training samples include pictures taken by the drone of rail fasteners that have been determined to be intact or faulty;
and step 404, performing bidirectional comparison on the fused image and the positive and negative samples based on the small sample learning model to obtain a routing inspection result.
In particular, the meta-learning in the present application can be implemented with corresponding machine learning techniques, for example few-shot learning. In some embodiments, the small sample learning model may be implemented using a prototype network (i.e., a Prototypical Network).
Further, a prototype network is introduced: the features of the positive and negative samples are projected into the same projection space, and a margin is set between the positive and negative samples. The fused feature map is then compared with the positive and negative samples in the projection space using a bidirectional comparison strategy. Unlike a siamese network, the prototype network in the present application does not directly compare the similarity of a target with given positive and negative samples to analyze the class of the target. Instead, vectors of the same category lie relatively close together and vectors of different categories lie relatively far apart; clustering is performed at the feature level, which can be used for few-shot learning.
Because the rail fasteners in the image are small, damaged areas are hard to find, and judging the type and degree of rail damage only by comparison with an intact rail fastener carries a large error. In contrast, the present application proposes a novel bidirectional comparison strategy: a model (i.e., the small sample learning model) obtained by jointly training positive and negative samples (intact and damaged rail fasteners) with the training samples is used to test the fused image; the fused feature map is input into the trained network model; the concept of meta-learning is added to the training of the network model; and the target object can be rapidly replaced without retraining the model. The present application introduces a prototype network, projects the features of positive and negative samples into the same projection space, and sets a margin between them.
Here, a training sample in the present application can refer to a picture shot by the unmanned aerial vehicle in which the rail clips shown are clearly normal or abnormal. Regarding the projection space and the margin setting: the prototype network projects an input picture, through an encoder, into a new (higher-dimensional) feature space; both normal and abnormal samples can be projected into this space to generate vectors (using the encoder); and the distance between the vectors of normal samples and those of abnormal samples can then be controlled through the parameters of the trained model, so that normal vectors lie very close together while normal and abnormal vectors lie far apart. Therefore, when an abnormal picture is encountered during inspection, its encoded vector is far from the normal vectors, which facilitates network detection of the abnormality.
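A minimal sketch of the bidirectional comparison in the projection space: the fused feature is encoded, then its distance to both a normal and an abnormal prototype vector is measured. The encoder is stubbed out, and the Euclidean distance and the particular score formula are assumptions for illustration.

```python
# Hypothetical sketch: bidirectional comparison against both prototypes.
import numpy as np

def bidirectional_compare(embedding, proto_normal, proto_abnormal):
    """Return an anomaly score in [0, 1] from both prototype distances.

    Scores near 0 mean the embedding sits close to the normal prototype;
    scores near 1 mean it sits close to the abnormal prototype.
    """
    d_normal = np.linalg.norm(embedding - proto_normal)
    d_abnormal = np.linalg.norm(embedding - proto_abnormal)
    return d_normal / (d_normal + d_abnormal + 1e-12)

proto_n = np.zeros(8)            # prototype of intact clips (illustrative)
proto_a = np.ones(8)             # prototype of damaged clips (illustrative)
embedding = np.full(8, 0.1)      # encoder output for one fused feature map
score = bidirectional_compare(embedding, proto_n, proto_a)
```

Comparing against both prototypes, rather than the normal one alone, is what makes the comparison "bidirectional": an embedding that is merely far from normal but also far from abnormal is not forced toward either class.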
In one embodiment, the prototype network is adopted to train the positive and negative samples and the training samples, and the step of obtaining the small sample learning model comprises the following steps:
and projecting the positive and negative samples into the same projection space, and training the positive and negative samples and the training samples together based on triple loss in the projection space to obtain a small sample learning model.
Specifically, the present application proposes that abnormality determination can be performed on the fused feature map of the rail fasteners using a triplet network model. The triplet network model is related to the prototype network (Prototypical Network): the prototype network projects all images into a high-dimensional space through an encoder and, through certain arrangements, makes target objects of the same category cluster together while objects of different categories stay far apart. In the present application, only the abnormal component needs to be detected; for this reason, the present application proposes using a triplet loss to shorten the distance between normal vectors and enlarge the distance between normal and abnormal vectors.
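The triplet loss named above can be written down directly. This is a plain NumPy stand-in for the training-framework implementation; the margin value and the toy embeddings are illustrative.

```python
# Sketch of the triplet loss: pull a normal anchor toward another normal
# sample (positive) and push it away from an abnormal sample (negative).
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """max(0, d(a, p) - d(a, n) + margin) for one (a, p, n) triple."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.zeros(4)        # embedding of a normal clip (anchor)
p = np.full(4, 0.1)    # another normal clip (positive)
n = np.full(4, 2.0)    # a faulty clip (negative)

loss_easy = triplet_loss(a, p, n, margin=1.0)          # well-separated triple
loss_hard = triplet_loss(a, p, np.full(4, 0.05))       # negative too close
```

When the negative already lies more than `margin` farther from the anchor than the positive does, the loss is zero and the triple contributes no gradient; only "hard" triples, where normal and abnormal vectors sit too close, drive the encoder's training.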
In the anomaly detection process, the features can be mapped to the projection space using a simple convolutional layer rather than a full prototype network. That is, in the abnormality detection process, a convolutional neural network (CNN) may be used to perform the projection operation on the detected target image, rather than reusing the features already extracted for the target image.
In the method, using the region generation network that includes the attention mechanism can selectively improve the target detection effect and improve model transferability. The method then continues to operate on the features inside the detection frames of the detected rail fasteners and performs the normal/abnormal separation through the prototype network. By first extracting the target detection frames and then feeding them into the prototype network, the influence of differing backgrounds on the detection effect of the prototype network is avoided.
In addition, in one embodiment, after the step of obtaining the inspection result, the method may further include the steps of:
processing the inspection result through a classifier to obtain the abnormal probability value of the single rail fastener; the classifier is used for characterizing the depth cross correlation between pixels and the nonlinear measurement between characteristic matrixes.
Specifically, the present application proposes comparing, through a classifier, the depth cross-correlation between pixels and the nonlinear metric between feature matrixes, so as to judge the abnormality type and degree of the rail fastener. Using the triplet network model together with the classifier to perform abnormality determination on the fused feature map of the rail fastener, whether the rail fastener is abnormal and the degree of abnormality can be accurately judged. The output abnormality value (i.e., abnormality probability value) of a single fastener is obtained by passing the test result through the classifier.
It should be noted that, in abnormality detection, instead of a classifier, the test result may be compared with its distance to the normal rail clips in the projection space to determine whether an abnormality occurs; this approach is more effective when the rail clips have many types of abnormality.
In one embodiment, the preset screening rule comprises the confidence level of the target area and the shooting distance between the target area and the unmanned aerial vehicle; the inspection result also comprises the position and the fault type of the fault rail fastener;
further comprising the steps of:
weighting according to the abnormal probability values of the rail fasteners of the adjacent frames to obtain the final abnormal probability value of the single rail fastener;
outputting an alarm instruction under the condition that the final abnormal probability value of the rail fastener is greater than a threshold value; the alarm instruction is used for indicating the system to alarm and acquiring the inspection result.
Specifically, in the multi-target screening process (i.e., step 206), the present application proposes that a certain redundancy be retained, and the final abnormality probability value of a single rail clip is then obtained by weighting in combination with the test results of adjacent frames. In some embodiments, a threshold value α can be set; if the abnormality probability of a certain rail clip is greater than α, a system alarm is triggered, and the system returns the damaged rail clip's location and probability of damage.
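The adjacent-frame weighting and the α-threshold alarm described above can be sketched as follows. The uniform weighting scheme and the particular α value are assumptions; the patent only specifies that the per-frame scores are weighted and the result compared against a threshold.

```python
# Hypothetical sketch of the final weighting and alarm step.
def final_anomaly_probability(frame_scores, weights=None):
    """Weighted combination of one clip's anomaly scores across adjacent frames."""
    if weights is None:
        weights = [1.0] * len(frame_scores)   # assumed: uniform weights
    total = sum(weights)
    return sum(w * s for w, s in zip(weights, frame_scores)) / total

def check_alarm(frame_scores, alpha=0.8):
    """Return (final probability, alarm flag) for one rail clip."""
    p = final_anomaly_probability(frame_scores)
    # alarm True -> system returns the clip's location and damage probability
    return p, p > alpha

# The same clip seen in three consecutive frames, scored each time:
p, alarm = check_alarm([0.9, 0.85, 0.95], alpha=0.8)
```

Because the screening step deliberately keeps some redundancy, a clip usually has several per-frame scores; averaging them suppresses a single noisy detection, so an alarm requires the clip to look abnormal consistently rather than in one frame.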
In the rail component inspection method, the multi-view rail image is processed with a region generation network that includes an attention mechanism, so that the generation of other types of detection frames can be suppressed even with little training data, and the position of the rail fastener in the picture and the feature information it contains can be accurately extracted. Screening further reduces the redundancy of the network input, and the meta-learning anomaly detection model allows the target object to be changed easily without retraining. Combining the attention-based region generation network with the meta-learning anomaly detection model realizes small-sample, lightweight automatic inspection of track components and can judge fairly accurately whether a rail fastener is abnormal and the degree of abnormality. The present application thus addresses neural network training overfitting and the ambiguous judgment of damage to small rail fasteners during unmanned aerial vehicle inspection, while simplifying the network and effectively improving its flexibility, accuracy, and real-time performance.
It should be understood that although the various steps in the flow diagrams of fig. 2-4 are shown in order as indicated by the arrows, the steps are not necessarily performed in order as indicated by the arrows. The steps are not limited to being performed in the exact order illustrated and, unless explicitly stated herein, may be performed in other orders. Moreover, at least some of the steps in fig. 2-4 may include multiple sub-steps or multiple stages that are not necessarily performed at the same time, but may be performed at different times, and the order of performing the sub-steps or stages is not necessarily sequential, but may be performed alternately or alternatingly with other steps or at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 5, there is provided a rail part inspection device including:
a picture acquiring module 510, configured to acquire a rail picture; the rail picture comprises a multi-view rail image obtained by shooting along a rail by an unmanned aerial vehicle;
a target extraction module 520, configured to extract a target area from the rail image by using an area generation network including an attention mechanism, and determine a target detection result; the target area includes a rail clip; the target detection result comprises a characteristic matrix of the target area;
the screening module 530 is used for performing multi-target screening on the target detection result based on a preset screening rule to obtain a screening picture;
the fusion module 540 is configured to perform feature fusion on each view characteristic matrix of the same target area in the screened picture according to the multi-view rail image to obtain a fusion image;
the detection module 550 is used for performing anomaly detection on the fused image by using the meta-learning anomaly detection model and outputting a routing inspection result; the inspection results include an anomaly probability value for the rail clip.
In one embodiment, the meta-learning anomaly detection model comprises a small sample learning model; the detection module 550 may include:
the training unit is used for training the positive and negative samples and the training samples by adopting a prototype network to obtain a small sample learning model; the positive and negative samples include pictures of intact rail fasteners and pictures of failed rail fasteners; the training samples include pictures taken by the drone of rail fasteners that have been determined to be intact or faulty;
and the comparison unit is used for performing bidirectional comparison on the fused image and the positive and negative samples based on the small sample learning model to obtain a routing inspection result.
In one embodiment, the training unit is configured to project the positive and negative samples into the same projection space, and jointly train the positive and negative samples and the training samples based on triplet losses in the projection space to obtain the small sample learning model.
In one embodiment, the method further comprises the following steps:
the classification processing module is used for processing the inspection result through the classifier to obtain the abnormal probability value of the single rail fastener; the classifier is used for characterizing the depth cross correlation between pixels and the nonlinear measurement between characteristic matrixes.
In one embodiment, the feature matrix is a feature map; the target extraction module 520 may include:
the filtering unit is used for generating a network based on the area containing the attention mechanism to filter the interference frame in the single-frame rail picture to obtain a target detection frame; the interference frame comprises a background frame and a frame which is not matched with the target area type;
and the identification unit is used for identifying the target detection frame by adopting a convolutional neural network to obtain a characteristic diagram of the target area.
In one embodiment, the multi-view rail images comprise dual-view images taken by two unmanned aerial vehicles at the same speed and along two sides of the same rail;
a fusion module 540, configured to perform fusion by using the middle-layer feature matrix of the feature map to integrate the shallow features of the dual-view image; shallow features include color and/or gradient.
In one embodiment, the preset screening rule comprises the confidence level of the target area and the shooting distance between the target area and the unmanned aerial vehicle; the inspection result also comprises the position and the fault type of the fault rail fastener;
further comprising:
the weighting module is used for weighting according to the abnormal probability values of the rail fasteners of the adjacent frames to obtain the final abnormal probability value of the single rail fastener;
the alarm module is used for outputting an alarm instruction under the condition that the final abnormal probability value of the rail fastener is greater than a threshold value; the alarm instruction is used for indicating the system to alarm and acquiring the inspection result.
For the specific definition of the rail component inspection device, reference may be made to the definition of the rail component inspection method above, and details are not repeated here. All or part of each module in the rail component inspection device can be realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules. It should be noted that, in the embodiment of the present application, the division of the module is schematic, and is only one logic function division, and there may be another division manner in actual implementation.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 6. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer equipment is used for storing meta-learning anomaly detection models, routing inspection results and the like. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of rail component inspection.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 7. The computer device comprises a processor, a memory, a network interface, a display screen and an input device which are connected through a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a rail component inspection method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the configurations shown in fig. 6 and 7 are merely block diagrams of some configurations relevant to the present disclosure, and do not constitute a limitation on the computing devices to which the present disclosure may be applied, and that a particular computing device may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory having a computer program stored therein and a processor that, when executing the computer program, implements the rail component inspection method described above.
In one embodiment, a computer-readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, implements the above-described rail component inspection method.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by hardware instructions of a computer program, which may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical storage, or the like. Volatile Memory can include Random Access Memory (RAM) or external cache Memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), for example.
In the description herein, references to the description of "some embodiments," "other embodiments," "desired embodiments," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, a schematic description of the above terminology may not necessarily refer to the same embodiment or example.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, and these are all within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (9)

1. A rail component inspection method is characterized by comprising the following steps:
acquiring a rail picture; the rail picture comprises a multi-view rail image obtained by shooting along a rail by an unmanned aerial vehicle;
extracting a target area from the rail picture by adopting an area generation network containing an attention mechanism, and determining a target detection result; the target area comprises a rail clip; the target detection result comprises a feature matrix of the target area;
performing multi-target screening on the target detection result based on a preset screening rule to obtain a screening picture;
according to the multi-view rail image, performing feature fusion on feature matrixes of all views of the same target area in the screening image to obtain a fusion image;
performing anomaly detection on the fused image by adopting a meta-learning anomaly detection model, and outputting a routing inspection result; the inspection result comprises an abnormal probability value of the rail fastener; the meta-learning anomaly detection model comprises a small sample learning model; the step of adopting the meta-learning anomaly detection model to detect the anomaly of the fused image and outputting a routing inspection result comprises the following steps:
training the positive and negative samples and the training samples by adopting a prototype network to obtain the small sample learning model; the positive and negative samples include pictures of intact rail fasteners and pictures of failed rail fasteners; the training samples include pictures taken by the drone of rail fasteners that have been determined to be sound or faulty;
and performing bidirectional comparison on the fusion image and the positive and negative samples based on the small sample learning model to obtain the inspection result.
2. The track component inspection method according to claim 1, wherein a prototype network is adopted to train the positive and negative samples and the training samples, and in the step of obtaining the small sample learning model:
and projecting the positive and negative samples into the same projection space, and training the positive and negative samples and the training samples together based on triple loss in the projection space to obtain the small sample learning model.
3. The rail component inspection method according to claim 1, further comprising, after the step of obtaining the inspection result, the steps of:
processing the inspection result through a classifier to obtain the abnormal probability value of the single rail fastener; the classifier is used for characterizing depth cross correlation among pixels and nonlinear measurement among feature matrixes.
4. The rail component inspection method according to claim 1, wherein the feature matrix is a feature map;
the step of extracting the target area from the rail picture by adopting the area generation network containing the attention mechanism and determining the target detection result comprises the following steps:
generating a network filtering interference frame in the single frame of the rail picture based on the area containing the attention mechanism to obtain a target detection frame; the interference frame comprises a background frame and a frame which is not matched with the target area category;
and identifying the target detection frame by adopting a convolutional neural network to obtain the characteristic diagram of the target area.
5. The track component inspection method according to claim 4, wherein the multi-perspective rail images include dual-perspective images captured by two drones simultaneously along two sides of the same rail, respectively, at the same speed;
the step of performing feature fusion on the feature matrixes of all the view angles of the same target area in the screening picture according to the multi-view rail image to obtain a fusion image comprises the following steps:
fusing by adopting the middle-layer feature matrix of the feature map to integrate the shallow features of the double-view-angle image; the shallow features comprise a color and/or gradient.
6. The track component inspection method according to any one of claims 1 to 5, wherein the preset screening rules comprise confidence levels of the target areas and shooting distances between the target areas and the unmanned aerial vehicle; the inspection result also comprises the position and the fault type of the fault rail fastener;
further comprising the steps of:
weighting according to the abnormal probability values of the rail fasteners of the adjacent frames to obtain the final abnormal probability value of a single rail fastener;
outputting an alarm instruction under the condition that the final abnormal probability value of the rail fastener is greater than a threshold value; the alarm instruction is used for indicating a system to alarm and acquiring the inspection result.
7. A rail component inspection device, comprising:
the image acquisition module is used for acquiring a rail image; the rail picture comprises a multi-view rail image obtained by shooting along a rail by an unmanned aerial vehicle;
the target extraction module is used for extracting a target area from the rail picture by adopting an area generation network containing an attention mechanism and determining a target detection result; the target area comprises a rail clip; the target detection result comprises a feature matrix of the target area;
the screening module is used for carrying out multi-target screening on the target detection result based on a preset screening rule to obtain a screening picture;
the fusion module is used for performing feature fusion on each view angle feature matrix of the same target area in the screening picture according to the multi-view rail image to obtain a fusion image;
the detection module is used for detecting the abnormality of the fused image by adopting a meta-learning abnormality detection model and outputting a routing inspection result; the inspection result comprises an abnormal probability value of the rail fastener; the meta-learning anomaly detection model comprises a small sample learning model; the detection module comprises:
the training unit is used for training the positive and negative samples and the training samples by adopting a prototype network to obtain the small sample learning model; the positive and negative samples include pictures of intact rail fasteners and pictures of failed rail fasteners; the training samples include pictures taken by the drone of rail fasteners that have been determined to be sound or faulty;
and the comparison unit is used for performing bidirectional comparison on the fused image and the positive and negative samples based on the small sample learning model to obtain the inspection result.
8. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 6.
9. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
CN202110592734.3A 2021-05-28 2021-05-28 Track component inspection method and device, computer equipment and storage medium Active CN113361354B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110592734.3A CN113361354B (en) 2021-05-28 2021-05-28 Track component inspection method and device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN113361354A CN113361354A (en) 2021-09-07
CN113361354B true CN113361354B (en) 2022-11-15

Family

ID=77528076

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110592734.3A Active CN113361354B (en) 2021-05-28 2021-05-28 Track component inspection method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113361354B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113642565B (en) * 2021-10-15 2022-02-11 腾讯科技(深圳)有限公司 Object detection method, device, equipment and computer readable storage medium
CN114022826B (en) * 2022-01-05 2022-03-25 石家庄学院 Block chain-based rail detection method and system
CN114937040B (en) * 2022-07-22 2022-11-18 珠海优特电力科技股份有限公司 Train inspection method, device and system for rail transit vehicle section and storage medium
CN115861927A (en) * 2022-12-01 2023-03-28 中国南方电网有限责任公司超高压输电公司大理局 Image identification method and device for power equipment inspection image and computer equipment
CN116469017B (en) * 2023-03-31 2024-01-02 北京交通大学 Real-time track identification method for unmanned aerial vehicle automated railway inspection
CN116935245B (en) * 2023-06-16 2024-06-21 国网山东省电力公司金乡县供电公司 Long-distance communication power grid unmanned aerial vehicle inspection system and method
CN117994753B (en) * 2024-04-03 2024-06-07 浙江浙能数字科技有限公司 Vision-based device and method for detecting abnormality of entrance track of car dumper

Citations (2)

Publication number Priority date Publication date Assignee Title
CN109677444A (en) * 2019-01-10 2019-04-26 北京交通大学 Acquisition device for comprehensive perception of the apparent condition of a track
CN112115993A (en) * 2020-09-11 2020-12-22 昆明理工大学 Zero sample and small sample evidence photo anomaly detection method based on meta-learning

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US20190039633A1 (en) * 2017-08-02 2019-02-07 Panton, Inc. Railroad track anomaly detection
CN109389056B (en) * 2018-09-21 2020-05-26 北京航空航天大学 Space-based multi-view-angle collaborative track surrounding environment detection method
CN112288770A (en) * 2020-09-25 2021-01-29 航天科工深圳(集团)有限公司 Video real-time multi-target detection and tracking method and device based on deep learning



Similar Documents

Publication Publication Date Title
CN113361354B (en) Track component inspection method and device, computer equipment and storage medium
Li et al. Simultaneously detecting and counting dense vehicles from drone images
Yang et al. Deep concrete inspection using unmanned aerial vehicle towards cssc database
CN111784633B (en) Insulator defect automatic detection algorithm for electric power inspection video
CN110084165B (en) Intelligent identification and early warning method for abnormal events in open scene of power field based on edge calculation
JP7272533B2 (en) Systems and methods for evaluating perceptual systems
CN110796643A (en) Rail fastener defect detection method and system
CN108197604A (en) Fast face positioning and tracing method based on embedded device
CN109033950A (en) Illegal parking detection method for vehicles based on a multi-feature fusion cascaded deep model
Yang et al. A robotic system towards concrete structure spalling and crack database
CN113111727B (en) Feature alignment-based method for detecting rotating target in remote sensing scene
CN110263712A (en) Coarse-to-fine pedestrian detection method based on region candidates
CN113298045A (en) Method, system and device for identifying violation vehicle
CN108764264A (en) Smoke detection method, smoke detection system and computer device
CN108681691A (en) Rapid detection method for marine ships and boats based on an unmanned surface vehicle
Manninen et al. Multi-stage deep learning networks for automated assessment of electricity transmission infrastructure using fly-by images
CN116843691B (en) Photovoltaic panel hot spot detection method, storage medium and electronic equipment
Tong et al. Anchor‐adaptive railway track detection from unmanned aerial vehicle images
Martin et al. Object of fixation estimation by joint analysis of gaze and object dynamics
Zhou et al. BV-Net: Bin-based Vector-predicted Network for tubular solder joint detection
Tang et al. Insulator defect detection based on improved faster R-CNN
CN116206222A (en) Power transmission line fault detection method and system based on lightweight target detection model
Zan et al. Defect Identification of Power Line Insulators Based on a MobileViT‐Yolo Deep Learning Algorithm
CN115272340A (en) Industrial product defect detection method and device
CN115880590A (en) Method and device for detecting rail foreign matter intrusion based on unmanned aerial vehicle machine vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant