CN115731179A - Track component detection method, terminal and storage medium - Google Patents

Track component detection method, terminal and storage medium Download PDF

Info

Publication number
CN115731179A
Authority
CN
China
Prior art keywords
track
layer
rail
detection
component
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211455777.8A
Other languages
Chinese (zh)
Inventor
吴云鹏
刘振亮
马龙双
马迷娜
徐飞
杨勇
赵维刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shijiazhuang Tiedao University
Original Assignee
Shijiazhuang Tiedao University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shijiazhuang Tiedao University filed Critical Shijiazhuang Tiedao University
Priority to CN202211455777.8A priority Critical patent/CN115731179A/en
Publication of CN115731179A publication Critical patent/CN115731179A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)

Abstract

The invention provides a track component detection method, a track component detection device, a terminal and a storage medium. The method comprises the following steps: acquiring a track image; inputting the track image into a track component detection model to obtain a component defect detection result and a track surface area of the track image, so as to obtain a track surface detection result based on the track surface area. The track component detection model comprises a feature extraction trunk, a component defect detection branch connected with the feature extraction trunk, and a track segmentation branch connected with the feature extraction trunk. Because the feature extraction trunk only needs to perform feature extraction once, while the component defect detection branch and the track segmentation branch perform component defect detection and track surface defect detection respectively, the model has a more streamlined structure and calculation process and is more practical than the prior art.

Description

Track component detection method, terminal and storage medium
Technical Field
The present invention relates to the field of track detection technologies, and in particular, to a track component detection method, an apparatus, a terminal, and a storage medium.
Background
Periodic inspection of the rail components, such as defect detection of the fasteners and rail surfaces, is critical to maintaining rail quality and ensuring safety in rail operations.
However, in the field of neural-network image recognition, fastener defect detection is generally achieved through target detection of specific track components, while rail surface defect detection is generally achieved through saliency segmentation. Existing approaches therefore usually require two detection models, one for fastener detection and one for rail surface detection, which makes both the development process and the use process inconvenient and impractical.
Disclosure of Invention
The embodiment of the invention provides a track component detection method, a terminal and a storage medium, which aim to solve the problem of poor practicability of the current track component detection model.
In a first aspect, an embodiment of the present invention provides a rail component detection method, including:
acquiring a track image;
inputting the track image into a track component detection model to obtain a component defect detection result of the track image and a track surface area so as to obtain a track surface detection result based on the track surface area; the track component detection model comprises a feature extraction trunk, a component defect detection branch connected with the feature extraction trunk and a track segmentation branch connected with the feature extraction trunk.
In one possible implementation manner, the feature extraction trunk includes a CBS layer, a first downsampling layer, a first ConvNeXt layer, a second downsampling layer, a second ConvNeXt layer, a third downsampling layer, a third ConvNeXt layer, a fourth downsampling layer, a fourth ConvNeXt layer, and an SPP layer, which are connected in sequence; the convolution kernel of the CBS layer is 6 × 6, the convolution kernel of each downsampling layer is 2 × 2, the convolution kernel of each ConvNeXt layer is 7 × 7, and the convolution kernel of the SPP layer is 5 × 5.
In one possible implementation, the track division branch includes a first C3B layer and a second C3B layer connected in sequence; the first downsampling layer is connected with the first C3B layer.
In one possible implementation, the component detection branch includes a first upsampling layer, a second upsampling layer, and a third upsampling layer, which are connected in sequence.
In one possible implementation, the first downsampling layer is connected with the first upsampling layer, the second downsampling layer is connected with the second upsampling layer, and the third downsampling layer is connected with the third upsampling layer to form a U-shaped significance detection network.
In one possible implementation, after inputting the rail image into the rail component inspection model, and obtaining the component defect inspection result and the rail surface area of the rail image, the method further includes:
and performing defect detection on the surface area of the track based on an LWLC-ME algorithm to obtain a track surface defect detection result.
In one possible implementation, before inputting the rail image into the rail component inspection model to obtain the component defect inspection result and the rail surface area of the rail image, the method further includes:
performing data enhancement on the track image;
inputting the rail image into the rail member detection model includes:
and inputting the track image subjected to data enhancement into a track component detection model.
In a second aspect, an embodiment of the present invention provides a rail component detecting apparatus, including:
the acquisition module is used for acquiring a track image;
the detection module is used for inputting the track image into the track component detection model to obtain a component defect detection result and a track surface area of the track image so as to obtain a track surface detection result based on the track surface area; the rail part detection model comprises a feature extraction trunk, a part defect detection branch connected with the feature extraction trunk and a rail segmentation branch connected with the feature extraction trunk.
In a third aspect, an embodiment of the present invention provides a terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method according to the first aspect or any one of the possible implementation manners of the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps of the method according to the first aspect or any one of the possible implementation manners of the first aspect.
The rail part detection method provided by the embodiment of the invention has the beneficial effects that:
the rail part detection model used by the invention comprises a feature extraction trunk, a part defect detection branch and a rail segmentation branch, wherein the feature extraction trunk only needs to perform feature extraction once, and the part defect detection branch and the rail segmentation branch can perform part defect detection and rail surface defect detection respectively. Compared with the prior art that two independent models are used for respectively carrying out rail surface detection and component detection, the rail component detection method has a simplified structure and a simplified calculation process, and is higher in practicability compared with the prior art.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings required to be used in the embodiments or the prior art description will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings may be obtained according to these drawings without inventive labor.
FIG. 1 is a flow chart of an implementation of a rail component detection method according to an embodiment of the present invention;
FIG. 2 is an overall flow chart of rail component detection according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a ConvNeXt structure according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an AOYOLO model according to an embodiment of the present invention;
FIG. 5 is a comparison graph of image enhancement provided by an embodiment of the present invention;
FIG. 6 is a block diagram of a component defect inspection result and a rail surface area provided by an embodiment of the present invention;
FIG. 7 shows the result of detecting defects on the surface of a track according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a rail member detecting apparatus according to an embodiment of the present invention;
fig. 9 is a schematic diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following description is made by way of specific embodiments with reference to the accompanying drawings.
Periodic inspection of rail components, such as fasteners and rail surfaces, is critical to maintaining track quality and ensuring safe rail operations. However, conventional image-processing-based systems have very limited accuracy. Meanwhile, conventional Convolutional Neural Network (CNN) based methods are tailored for either target detection or saliency segmentation of specific track components, and cannot evaluate all track components simultaneously to assess the condition of the whole track.
To achieve complete and accurate inspection of rail components (including fasteners and rail surfaces), it is therefore often necessary to develop two custom networks: an object detection model for fastener detection and a semantic segmentation model for rail surface defect detection. This is neither economical nor practical.
The present invention improves upon the above-described problems.
Referring to fig. 1, it shows a flowchart of an implementation of the track component detection method provided in the embodiment of the present invention, which is detailed as follows:
step 101, acquiring a track image.
In this embodiment, an unmanned aerial vehicle may be used to acquire image data of multiple tracks. Specifically, the unmanned aerial vehicle may fly 30-50 m above the track and capture images/videos, with a lateral offset of 8-15 m and a flight speed of 2-15 m/s. Batches of track images are then input for track component detection. Track images may also be acquired in other ways; when the system is used, the resolution of each track image is kept consistent, and the track images may be in RGB format.
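Before being fed to the model, the images are typically batch-loaded and brought to a uniform resolution. A minimal sketch is given below, assuming OpenCV; the directory layout, the .jpg extension and the 320x320 target size are illustrative choices, not requirements of this embodiment.

```python
# Minimal sketch: batch-load drone-captured track images as RGB at a uniform resolution.
import glob
import cv2
import numpy as np

def load_track_images(image_dir, target_size=(320, 320)):
    """Read track images, convert BGR to RGB and resize to a consistent resolution."""
    images = []
    for path in sorted(glob.glob(f"{image_dir}/*.jpg")):
        bgr = cv2.imread(path)                       # OpenCV loads images as BGR
        if bgr is None:                              # skip unreadable files
            continue
        rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)   # keep the RGB format mentioned above
        images.append(cv2.resize(rgb, target_size))  # keep image resolution consistent
    return np.stack(images) if images else np.empty((0, *target_size, 3), dtype=np.uint8)
```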
Step 102, inputting a track image into a track component detection model to obtain a component defect detection result and a track surface area of the track image so as to obtain a track surface detection result based on the track surface area; the track component detection model comprises a feature extraction trunk, a component defect detection branch connected with the feature extraction trunk and a track segmentation branch connected with the feature extraction trunk.
In this embodiment, in order to solve the above problems, an all-in-one YOLO (AOYOLO) framework is proposed for multi-task rail component detection. In the AOYOLO framework, the detection branch and the segmentation branch are integrated into one network architecture, so that they cooperate to achieve efficient detection and segmentation performance. First, a feature extraction backbone is constructed in AOYOLO to generate hyper-features suitable for both the detection and segmentation tasks. Second, a rail segmentation branch is added to AOYOLO to provide rail surface segmentation, so that the rail component detection model can both segment rail surface defects and detect the other rail components. The overall flow of rail component detection in this embodiment is shown in fig. 2.
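The following PyTorch sketch illustrates this single-trunk, two-branch idea: the backbone features are computed once and reused by both a detection head and a segmentation head. The layer choices, channel widths and the number of defect classes are assumptions for illustration and do not reproduce the exact AOYOLO definition.

```python
import torch
import torch.nn as nn

class MultiTaskRailNet(nn.Module):
    def __init__(self, num_defect_classes=4):  # class count is an illustrative assumption
        super().__init__()
        # Shared feature extraction trunk: features are computed only once per image.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.SiLU(),
        )
        # Component defect detection branch (placeholder 1x1 prediction head).
        self.detect_head = nn.Conv2d(64, num_defect_classes + 5, 1)
        # Track segmentation branch: 2 output channels = rail surface / background.
        self.segment_head = nn.Sequential(
            nn.Conv2d(64, 2, 1),
            nn.Upsample(scale_factor=4, mode="nearest"),
        )

    def forward(self, x):
        feats = self.backbone(x)              # shared features, extracted once
        detections = self.detect_head(feats)  # component defect predictions
        rail_mask = self.segment_head(feats)  # per-pixel rail-surface logits
        return detections, rail_mask

# det, seg = MultiTaskRailNet()(torch.randn(1, 3, 320, 320))  # seg: (1, 2, 320, 320)
```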
In one possible implementation manner, the feature extraction trunk comprises a CBS layer, a first downsampling layer, a first ConvNeXt layer, a second downsampling layer, a second ConvNeXt layer, a third downsampling layer, a third ConvNeXt layer, a fourth downsampling layer, a fourth ConvNeXt layer and an SPP layer which are sequentially connected; the convolution kernel of the CBS layer is 6 × 6, the convolution kernel of each downsampling layer is 2 × 2, the convolution kernel of each ConvNeXt layer is 7 × 7, and the convolution kernel of the SPP layer is 5 × 5.
In this embodiment, the YOLO family uses Darknet as the backbone, and Faster R-CNN and RetinaNet use ResNet as the backbone, whereas AOYOLO introduces the more advanced ConvNeXt as its backbone. Compared with ResNet, ConvNeXt is equipped with a large convolution kernel, an inverted bottleneck structure and the like, as shown in FIG. 3, which improves model accuracy. The detailed parameters of ConvNeXt are shown in Table 1.
TABLE 1
(Table 1: detailed ConvNeXt parameters; provided as an image in the original publication and not reproduced here.)
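A hedged PyTorch sketch of these building blocks follows: a 6x6 CBS (Conv-BN-SiLU) layer, a 2x2 downsampling layer, a ConvNeXt block with a 7x7 depthwise convolution and an inverted bottleneck, and a 5x5 SPP layer. Channel widths, block counts and the normalization choice are illustrative assumptions; Table 1's exact parameters are not reproduced.

```python
import torch
import torch.nn as nn

class CBS(nn.Module):
    """Conv + BatchNorm + SiLU, used here for both the stem and the downsampling layers."""
    def __init__(self, c_in, c_out, k, s):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, stride=s, padding=(k - 1) // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class ConvNeXtBlock(nn.Module):
    """7x7 depthwise convolution followed by an inverted (expand-then-project) bottleneck."""
    def __init__(self, c):
        super().__init__()
        self.dwconv = nn.Conv2d(c, c, 7, padding=3, groups=c)  # large depthwise kernel
        self.norm = nn.BatchNorm2d(c)                          # the original ConvNeXt uses LayerNorm
        self.expand = nn.Conv2d(c, 4 * c, 1)                   # inverted bottleneck: expand
        self.act = nn.GELU()
        self.project = nn.Conv2d(4 * c, c, 1)                  # project back to c channels

    def forward(self, x):
        return x + self.project(self.act(self.expand(self.norm(self.dwconv(x)))))

class SPP(nn.Module):
    """Spatial pyramid pooling built from repeated 5x5 max pooling."""
    def __init__(self, c):
        super().__init__()
        self.pool = nn.MaxPool2d(kernel_size=5, stride=1, padding=2)
        self.fuse = nn.Conv2d(4 * c, c, 1)

    def forward(self, x):
        p1 = self.pool(x)
        p2 = self.pool(p1)
        p3 = self.pool(p2)
        return self.fuse(torch.cat([x, p1, p2, p3], dim=1))

# Usage sketch (channel widths are placeholders):
# stem  = CBS(3, 32, k=6, s=2)    # 6x6 CBS layer
# down1 = CBS(32, 64, k=2, s=2)   # 2x2 downsampling layer
# stage = ConvNeXtBlock(64)       # 7x7 ConvNeXt layer
# spp   = SPP(64)                 # 5x5 SPP layer
```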
In one possible implementation, the track division branch includes a first C3B layer and a second C3B layer connected in sequence; the first downsampling layer is connected with the first C3B layer.
In this embodiment, in rail component inspection, the detection branch can detect relatively large component defects, such as broken clips and missing nails, thanks to its structural features. However, based on previous studies on rail surface defect detection, the object detection branch has difficulty detecting rail surface defects because of their characteristics: very small objects (even tens of pixels), few deep features, few samples, and noise contamination. Previous work detected rail surface defects after rail surface segmentation using semantic segmentation algorithms; such methods can only detect rail surface defects and cannot detect other defects. Therefore, this embodiment adds a segmentation branch based on semantic segmentation to AOYOLO, as shown in fig. 4, to implement integrated track component inspection. As shown in FIG. 4, the segmentation branch of AOYOLO is connected to the output of the second downsampling layer, whose feature map size is (W/8, H/8, 256). C3B is used as the basic unit of the branch. The feature map is then restored to (W, H, 2) by three upsampling layers, which gives, for each pixel in the track image, the probability of belonging to the rail surface or the background. In addition, nearest-neighbor interpolation is used in the upsampling layers instead of deconvolution to reduce the amount of computation. The resulting track segmentation branch has a simple structure, fast inference and high pixel-level segmentation accuracy.
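The paragraph above can be made concrete with the following sketch of the segmentation branch: two C3B-style blocks on the (W/8, H/8, 256) feature map, a 1x1 head producing two channels, and three nearest-neighbor upsampling steps back to (W, H, 2). The internal structure of C3B is not specified here, so a plain Conv-BN-SiLU block stands in for it, and the channel widths are assumptions.

```python
import torch
import torch.nn as nn

class C3BLike(nn.Module):
    """Placeholder for the patent's C3B unit: a plain Conv + BN + SiLU block."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1, bias=False),
            nn.BatchNorm2d(c_out),
            nn.SiLU(),
        )

    def forward(self, x):
        return self.block(x)

class RailSegBranch(nn.Module):
    def __init__(self):
        super().__init__()
        self.c3b1 = C3BLike(256, 128)
        self.c3b2 = C3BLike(128, 64)
        self.head = nn.Conv2d(64, 2, 1)                         # 2 = rail surface / background
        self.up = nn.Upsample(scale_factor=2, mode="nearest")   # cheaper than deconvolution

    def forward(self, feat):               # feat: (N, 256, H/8, W/8)
        x = self.c3b2(self.c3b1(feat))
        x = self.head(x)
        for _ in range(3):                 # three nearest-neighbor upsampling layers: x8 total
            x = self.up(x)
        return x                           # (N, 2, H, W) per-pixel logits

# mask_logits = RailSegBranch()(torch.randn(1, 256, 40, 40))  # -> (1, 2, 320, 320)
```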
In one possible implementation, the component detection branch includes a first upsampling layer, a second upsampling layer, and a third upsampling layer, which are connected in sequence.
In this embodiment, the AOYOLO algorithm uses a deep neural network to locate and classify objects, and its main characteristics are high speed and high accuracy. It directly predicts the bounding boxes of target objects, merging the candidate-region stage and the object-recognition stage into one. The AOYOLO algorithm improves on the conventional structure by adding multi-scale detection and extracting image features with a deeper network structure. The input image size of AOYOLO is uniformly 320x320, and the three predicted feature layers have sizes of 40x40, 20x20 and 10x10, corresponding to the detection scales for small, medium and large targets, respectively.
In one possible implementation, the first downsampling layer is connected with the first upsampling layer, the second downsampling layer is connected with the second upsampling layer, and the third downsampling layer is connected with the third upsampling layer to form a U-shaped significance detection network.
In this embodiment, the component detection branch is connected to the feature extraction trunk in a cascade manner: the different downsampling layers of the feature extraction trunk output features at different scales, and each upsampling layer of the component detection branch then detects targets of a different size from these features, so that defect detection can be performed on components of large, medium and small sizes.
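A minimal sketch of this cascaded, U-shaped wiring is given below: each downsampling stage passes its feature map laterally to the upsampling stage of matching resolution, so targets of different sizes are handled at different scales. The channel counts and the simple additive fusion are illustrative assumptions rather than the exact AOYOLO graph.

```python
import torch
import torch.nn as nn

class USkipDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.down1 = nn.Conv2d(3, 32, 3, stride=2, padding=1)    # first downsampling layer
        self.down2 = nn.Conv2d(32, 64, 3, stride=2, padding=1)   # second downsampling layer
        self.down3 = nn.Conv2d(64, 128, 3, stride=2, padding=1)  # third downsampling layer
        self.up3 = nn.Sequential(nn.Upsample(scale_factor=2, mode="nearest"),
                                 nn.Conv2d(128, 64, 3, padding=1))
        self.up2 = nn.Sequential(nn.Upsample(scale_factor=2, mode="nearest"),
                                 nn.Conv2d(64, 32, 3, padding=1))
        self.up1 = nn.Sequential(nn.Upsample(scale_factor=2, mode="nearest"),
                                 nn.Conv2d(32, 16, 3, padding=1))

    def forward(self, x):
        d1 = self.down1(x)        # 1/2 resolution feature map
        d2 = self.down2(d1)       # 1/4 resolution
        d3 = self.down3(d2)       # 1/8 resolution (deepest)
        u2 = self.up3(d3) + d2    # lateral (skip) connection at 1/4 resolution
        u1 = self.up2(u2) + d1    # lateral (skip) connection at 1/2 resolution
        return self.up1(u1)       # full-resolution features for saliency-style detection

# feats = USkipDetector()(torch.randn(1, 3, 320, 320))  # -> (1, 16, 320, 320)
```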
In one possible implementation, after inputting the rail image into the rail component inspection model, and obtaining the component defect inspection result and the rail surface area of the rail image, the method further includes:
and carrying out defect detection on the surface area of the track based on the LWLC-ME algorithm to obtain a track surface defect detection result.
In this embodiment, after extracting the track surface area, the image is first enhanced with LWLC, as follows:
$$\mathrm{LWLC}(i,j)=\frac{I(i,j)-\mathrm{Mean}(W)}{\mathrm{Mean}(W)}$$

wherein $I(i,j)$ represents a pixel, $W$ represents a local window around it, and $\mathrm{Mean}(\cdot)$ represents the mean.
Then, the maximum-entropy thresholding algorithm is used to complete the rail surface segmentation and obtain the final segmentation threshold $t^{*}$. The formulas are as follows:
$$f_n=\frac{P_n}{M},\qquad \Omega_O=\sum_{n=0}^{t} f_n,\qquad \Omega_B=\sum_{n=t+1}^{255} f_n$$

$$H_O=-\sum_{n=0}^{t}\frac{f_n}{\Omega_O}\ln\frac{f_n}{\Omega_O}$$

$$H_B=-\sum_{n=t+1}^{255}\frac{f_n}{\Omega_B}\ln\frac{f_n}{\Omega_B}$$

$$t^{*}=\arg\max_{t}\bigl(H_O(t)+H_B(t)\bigr)$$
wherein $\Omega_O$ and $\Omega_B$ are the object (foreground) and background probability distributions, respectively; $M$, $P_n$ and $f_n$ are the total number of pixels, the frequency of grey level $n$ and the probability of grey level $n$, respectively; and $H_O$ and $H_B$ denote the entropies of the foreground and the background, respectively.
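A NumPy sketch of this post-processing step follows, under two stated assumptions: LWLC is taken as the local Weber-like contrast (I - Mean(W)) / Mean(W) over a sliding window, and ME is Kapur-style maximum-entropy thresholding as in the formulas above. The window size, bin count and helper names are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lwlc_enhance(gray, window=15):
    """Local Weber-like contrast enhancement over a sliding window W (assumed form)."""
    local_mean = uniform_filter(gray.astype(np.float64), size=window)
    contrast = (gray - local_mean) / (local_mean + 1e-6)
    contrast -= contrast.min()                         # rescale to 0..255 for histogramming
    return np.clip(255 * contrast / (contrast.max() + 1e-6), 0, 255).astype(np.uint8)

def max_entropy_threshold(img, bins=256):
    """Return the grey level t* that maximises foreground + background entropy."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    f = hist / hist.sum()                              # probability of each grey level
    best_t, best_h = 0, -np.inf
    for t in range(1, bins - 1):
        w_o, w_b = f[:t].sum(), f[t:].sum()            # object / background distributions
        if w_o <= 0 or w_b <= 0:
            continue
        p_o, p_b = f[:t] / w_o, f[t:] / w_b
        h_o = -np.sum(p_o[p_o > 0] * np.log(p_o[p_o > 0]))   # foreground entropy H_O
        h_b = -np.sum(p_b[p_b > 0] * np.log(p_b[p_b > 0]))   # background entropy H_B
        if h_o + h_b > best_h:
            best_t, best_h = t, h_o + h_b
    return best_t

# enhanced = lwlc_enhance(rail_surface_gray)                 # cropped grey-level rail surface
# segmented = enhanced > max_entropy_threshold(enhanced)     # binary rail-surface segmentation
```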
In one possible implementation, before inputting the rail image into the rail component inspection model to obtain the component defect inspection result and the rail surface area of the rail image, the method further includes:
performing data enhancement on the track image;
inputting the rail image into the rail part detection model includes:
and inputting the track image subjected to data enhancement into a track component detection model.
In this embodiment, data enhancement may be performed based on Mixup and Mosaic to enhance samples, aiming at solving the overfitting problem caused by limited samples, limited available semantic information, and similarity between samples, as shown in fig. 5, so as to further improve the accuracy and scalability of the network.
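For reference, a simplified sketch of the two augmentations is given below. The Beta parameter, the quadrant cropping and the omission of bounding-box label handling are simplifying assumptions, not the exact scheme used in this embodiment.

```python
import numpy as np

def mixup(img_a, img_b, label_a, label_b, alpha=0.2):
    """Mixup: convex combination of two samples, lam*a + (1-lam)*b, with a soft label pair."""
    lam = np.random.beta(alpha, alpha)
    img = lam * img_a.astype(np.float32) + (1.0 - lam) * img_b.astype(np.float32)
    return img.astype(np.uint8), (lam, label_a, label_b)

def mosaic(imgs, out_size=320):
    """Mosaic: tile four images into the four quadrants of one canvas.
    Each image is assumed to be at least (out_size/2) x (out_size/2) pixels."""
    assert len(imgs) == 4
    half = out_size // 2
    canvas = np.zeros((out_size, out_size, 3), dtype=np.uint8)
    for k, im in enumerate(imgs):
        r, c = divmod(k, 2)
        canvas[r * half:(r + 1) * half, c * half:(c + 1) * half] = im[:half, :half]
    return canvas
```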
In a specific embodiment, the specific structure of the AOYOLO model is shown in fig. 4, and rail clip and defect target recognition is realized based on the AOYOLO model to verify the beneficial effects of the present invention. The overlap ratio of a detection result is recorded as IoU_result, and the threshold is set to 0.5: when IoU_result > 0.5, the target is considered detected. The prediction boxes are classified as true positives (TP), false positives (FP), true negatives (TN) and false negatives (FN), and the performance evaluation indexes of the improved YOLOv5 model, namely P (precision), R (recall), AP (average precision) and mAP (mean average precision), are then calculated. Rail surface segmentation is also evaluated using IoU.
The precision, recall and AP values are calculated as follows:

$$P=\frac{TP}{TP+FP}$$

$$R=\frac{TP}{TP+FN}$$

$$AP=\int_{0}^{1}P(R)\,\mathrm{d}R$$
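The sketch below shows how these quantities can be computed in practice; the (x1, y1, x2, y2) box format and the trapezoidal integration of the precision-recall curve are assumptions for illustration.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def precision_recall(tp, fp, fn):
    p = tp / (tp + fp + 1e-9)   # precision: correct detections / all detections
    r = tp / (tp + fn + 1e-9)   # recall: correct detections / all ground truths
    return p, r

def average_precision(precisions, recalls):
    """AP as the area under the precision-recall curve (trapezoidal integration)."""
    order = np.argsort(recalls)
    return float(np.trapz(np.array(precisions)[order], np.array(recalls)[order]))

# A prediction counts as TP when iou(pred, gt) > 0.5; mAP is the mean of AP over all classes.
```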
the present embodiment adopts the track image data of two regions for verification, wherein the number of the track images is 800, and the image resolution is 6576x4384 pixels. The method is divided into a training set, a verification set and a test set according to the proportion of 8. The original image is adjusted to 320x320, so that the occupation of the model on the memory caused by an overlarge data image is reduced while the subsequent detection precision is improved. Fig. 6 shows the detection results of the defects of the parts and the surface area of the rail according to the present invention, and fig. 7 shows the detection results of the defects of the surface of the rail. The results show that: the improved AOYOLO algorithm can be used for effectively detecting rail fasteners and segmenting the surface of a steel rail. The target detection model has stronger robustness and generalization capability on the identification of the connection region and the cotter pin component.
The above experiments show that the present invention can: (1) achieve 95.6% mAP for rail component detection at a real-time speed of 147 FPS, and (2) achieve 93.6% accuracy in rail surface defect detection, exceeding the current state-of-the-art models. This exceptional inference speed and superior detection accuracy indicate great potential for field application.
The rail component detection model used in the embodiment of the invention comprises a feature extraction trunk, a component defect detection branch and a rail segmentation branch. The feature extraction trunk only needs to perform feature extraction once, and the component defect detection branch and the rail segmentation branch perform component defect detection and rail surface defect detection respectively. Compared with the prior art, in which two independent models perform rail surface detection and component detection separately, the rail component detection method provided by the application has a more streamlined structure and calculation process and is therefore more practical.
The invention has the beneficial effects that:
(1) A novel integrated multi-class rail component detection system based on AOYOLO is provided. The new AOYOLO-based system can achieve a processing speed of 147 fps and 95.6% mAP over all detected classes. AOYOLO has only 17M parameters and can be deployed on almost all mobile inspection platforms, such as handheld devices, rail vehicles and especially unmanned aerial vehicles. Conveniently, an image processing model based on local Weber-like contrast (LWLC) and maximum entropy (ME) is also integrated into the system for automatic detection of rail surface defects. The present invention is the first to address integrated rail component inspection on images taken from various inspection platforms, and it can supplement or even replace walking inspection with greater accuracy and efficiency.
(2) AOYOLO integrates the backbone, the object detection branch and the object segmentation branch into a single network. In particular, by assembling a new U-shaped branch for saliency segmentation in AOYOLO, rail surface segmentation is realized and the accuracy of rail component detection is improved.
(3) A novel backbone is constructed by introducing ConvNeXt, a convolutional design modernized with ideas from vision Transformers. The ConvNeXt-based backbone improves track component detection accuracy through this advanced Transformer-style structure. In addition, complex data enhancements such as Mixup and Mosaic are assembled in AOYOLO to augment the samples, aiming to alleviate overfitting caused by limited samples, limited available semantic information and similarity between samples.
(4) Ablation and comparison experiments were performed on datasets created from drone images. The experimental results show that the system achieves high accuracy and real-time performance in complex railway environments and outperforms existing SOTA models.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
The following are embodiments of the apparatus of the invention, reference being made to the corresponding method embodiments described above for details which are not described in detail therein.
Fig. 8 is a schematic structural diagram of a track component detection apparatus provided in an embodiment of the present invention, and for convenience of description, only the portions related to the embodiment of the present invention are shown, and the details are as follows:
as shown in fig. 8, the rail member detecting device 8 includes:
an obtaining module 81, configured to obtain a track image;
the detection module 82 is used for inputting the track image into the track component detection model to obtain a component defect detection result of the track image and a track surface area so as to obtain a track surface detection result based on the track surface area; the rail part detection model comprises a feature extraction trunk, a part defect detection branch connected with the feature extraction trunk and a rail segmentation branch connected with the feature extraction trunk.
In one possible implementation manner, the feature extraction trunk includes a CBS layer, a first downsampling layer, a first ConvNeXt layer, a second downsampling layer, a second ConvNeXt layer, a third downsampling layer, a third ConvNeXt layer, a fourth downsampling layer, a fourth ConvNeXt layer, and an SPP layer, which are connected in sequence; the convolution kernel of the CBS layer is 6 × 6, the convolution kernel of each downsampling layer is 2 × 2, the convolution kernel of each ConvNeXt layer is 7 × 7, and the convolution kernel of the SPP layer is 5 × 5.
In one possible implementation, the track division branch includes a first C3B layer and a second C3B layer connected in sequence; the first downsampling layer is connected with the first C3B layer.
In one possible implementation, the component detection branch includes a first upsampling layer, a second upsampling layer, and a third upsampling layer, which are connected in sequence.
In one possible implementation, the first downsampling layer is connected with the first upsampling layer, the second downsampling layer is connected with the second upsampling layer, and the third downsampling layer is connected with the third upsampling layer to form a U-shaped significance detection network.
In one possible implementation, the detection module 82 is further configured to:
inputting the rail image into a rail component detection model to obtain a component defect detection result of the rail image and a rail surface area, and then performing defect detection on the rail surface area based on an LWLC-ME algorithm to obtain a rail surface defect detection result.
In one possible implementation, the rail member detection device 8 further includes:
the data enhancement module is used for enhancing data of the track image before the track image is input into the track component detection model to obtain a component defect detection result and a track surface area of the track image;
the detection module 82 is specifically configured to:
and inputting the track image subjected to data enhancement into a track component detection model.
The rail component detection model used in the embodiment of the invention comprises a feature extraction trunk, a component defect detection branch and a rail segmentation branch. The feature extraction trunk only needs to perform feature extraction once, and the component defect detection branch and the rail segmentation branch perform component defect detection and rail surface defect detection respectively. Compared with the prior art, in which two independent models perform rail surface detection and component detection separately, the rail component detection method provided by the application has a more streamlined structure and calculation process and is therefore more practical.
Fig. 9 is a schematic diagram of a terminal according to an embodiment of the present invention. As shown in fig. 9, the terminal 9 of this embodiment includes: a processor 90, a memory 91 and a computer program 92 stored in said memory 91 and executable on said processor 90. The processor 90, when executing the computer program 92, implements the steps in the various rail component detection method embodiments described above, such as steps 101-102 shown in fig. 1. Alternatively, the processor 90, when executing the computer program 92, implements the functions of the modules/units in the above-described device embodiments, such as the modules/units 81 to 82 shown in fig. 8.
Illustratively, the computer program 92 may be partitioned into one or more modules/units that are stored in the memory 91 and executed by the processor 90 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 92 in the terminal 9. For example, the computer program 92 may be divided into modules/units 81 to 82 shown in fig. 8.
The terminal 9 may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The terminal 9 may include, but is not limited to, a processor 90, a memory 91. It will be appreciated by those skilled in the art that fig. 9 is merely an example of a terminal 9 and does not constitute a limitation of the terminal 9, and may include more or fewer components than shown, or some components may be combined, or different components, e.g., the terminal may also include input-output devices, network access devices, buses, etc.
The Processor 90 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 91 may be an internal storage unit of the terminal 9, such as a hard disk or a memory of the terminal 9. The memory 91 may also be an external storage device of the terminal 9, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) and the like provided on the terminal 9. Further, the memory 91 may also include both an internal storage unit and an external storage device of the terminal 9. The memory 91 is used for storing the computer program and other programs and data required by the terminal. The memory 91 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. For the specific working processes of the units and modules in the system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal and method may be implemented in other ways. For example, the above-described apparatus/terminal embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated module/unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method according to the above embodiments may be implemented by a computer program, which may be stored in a computer readable storage medium and used by a processor to implement the steps of the embodiments of the track component detection method. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, read-Only Memory (ROM), random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain suitable additions or subtractions depending on the requirements of legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media may not include electrical carrier signals or telecommunication signals in accordance with legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein.

Claims (10)

1. A rail component inspection method, comprising:
acquiring a track image;
inputting the track image into a track component detection model to obtain a component defect detection result and a track surface area of the track image so as to obtain a track surface detection result based on the track surface area; the track part detection model comprises a feature extraction trunk, a part defect detection branch connected with the feature extraction trunk, and a track segmentation branch connected with the feature extraction trunk.
2. The method of claim 1, wherein the feature extraction backbone comprises a CBS layer, a first downsampling layer, a first ConvNeXt layer, a second downsampling layer, a second ConvNeXt layer, a third downsampling layer, a third ConvNeXt layer, a fourth downsampling layer, a fourth ConvNeXt layer, and an SPP layer connected in sequence; wherein the convolution kernel of the CBS layer is 6 × 6, the convolution kernel of each downsampling layer is 2 × 2, the convolution kernel of each ConvNeXt layer is 7 × 7, and the convolution kernel of the SPP layer is 5 × 5.
3. The rail member detection method according to claim 2, wherein the rail division branch includes a first C3B layer and a second C3B layer which are connected in sequence; the first downsampling layer is connected with the first C3B layer.
4. The rail component detecting method according to claim 2, wherein the component detecting branch includes a first up-sampling layer, a second up-sampling layer, and a third up-sampling layer which are connected in this order.
5. The rail component detection method of claim 4, wherein the first downsampling layer is connected to the first upsampling layer, the second downsampling layer is connected to the second upsampling layer, and the third downsampling layer is connected to the third upsampling layer to form a U-shaped significance detection network.
6. The rail component detecting method according to any one of claims 1 to 5, wherein after the inputting the rail image into a rail component detecting model to obtain a component defect detecting result and a rail surface area of the rail image, the method further comprises:
and carrying out defect detection on the surface area of the track based on an LWLC-ME algorithm to obtain a track surface defect detection result.
7. The rail component inspection method according to any one of claims 1 to 5, wherein before the inputting the rail image into a rail component inspection model to obtain a component defect inspection result and a rail surface area of the rail image, the method further comprises:
performing data enhancement on the track image;
the inputting the rail image into a rail part detection model comprises:
and inputting the track image subjected to data enhancement into a track component detection model.
8. A rail member detection apparatus, comprising:
the acquisition module is used for acquiring a track image;
the detection module is used for inputting the track image into a track component detection model to obtain a component defect detection result and a track surface area of the track image so as to obtain a track surface detection result based on the track surface area; the track part detection model comprises a feature extraction trunk, a part defect detection branch connected with the feature extraction trunk, and a track segmentation branch connected with the feature extraction trunk.
9. A terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of the preceding claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202211455777.8A 2022-11-21 2022-11-21 Track component detection method, terminal and storage medium Pending CN115731179A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211455777.8A CN115731179A (en) 2022-11-21 2022-11-21 Track component detection method, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211455777.8A CN115731179A (en) 2022-11-21 2022-11-21 Track component detection method, terminal and storage medium

Publications (1)

Publication Number Publication Date
CN115731179A true CN115731179A (en) 2023-03-03

Family

ID=85297431

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211455777.8A Pending CN115731179A (en) 2022-11-21 2022-11-21 Track component detection method, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN115731179A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116469017A (en) * 2023-03-31 2023-07-21 北京交通大学 Real-time track identification method for unmanned aerial vehicle automated railway inspection
CN116469017B (en) * 2023-03-31 2024-01-02 北京交通大学 Real-time track identification method for unmanned aerial vehicle automated railway inspection


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination