CN113421246A - Method for forming rail detection model and method for detecting rail abrasion - Google Patents
- Publication number: CN113421246A (application CN202110716186.0A)
- Authority: CN (China)
- Prior art keywords: track, picture, detection model, pictures, image
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/0004 — Industrial image inspection (G06T7/00 Image analysis; G06T7/0002 Inspection of images, e.g. flaw detection)
- G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06N3/045 — Combinations of networks (neural network architectures)
- G06N3/08 — Neural network learning methods
- G06T2207/10016 — Video; image sequence (image acquisition modality)
- G06T2207/20081 — Training; learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30108 — Industrial image inspection (subject of image)
Abstract
The present application provides a method for forming a track detection model and a method for detecting track wear. The method for forming the track detection model comprises the following steps: receiving an original picture acquired by a camera traveling along a track, wherein the original picture contains a track image; performing data augmentation on the original picture to obtain a plurality of augmented pictures; labeling the normal areas and damaged areas of the track image in the augmented pictures to obtain a plurality of labeled pictures; and training a deep neural network with the labeled pictures to form the track detection model.
Description
Technical Field
The present application relates to the field of image recognition, and in particular to a method and a medium for forming a track detection model, and to a method, an apparatus, a medium, and a system for detecting track wear.
Background
With the continuous development of advanced manufacturing technology, factory automation levels keep improving. In semiconductor manufacturing plants, for example, automated material handling systems (AMHS) are increasingly used, and when a production line is newly built, a high level of automation is usually sought from the outset.
When a semiconductor manufacturing plant is constructed, overhead traveling cranes that run along rails and draw power through them are commonly installed. Under the control of the AMHS, an overhead crane transports material to a designated machine. A rail may become damaged, and the damaged portion may have high electrical resistance, which can interrupt the crane's power supply and cause it to operate abnormally.
Disclosure of Invention
Embodiments of the present application provide a method of forming a track detection model, comprising: receiving an original picture acquired by a camera traveling along a track, wherein the original picture contains a track image; performing data augmentation on the original picture to obtain a plurality of augmented pictures; labeling the normal areas and damaged areas of the track image in the augmented pictures to obtain a plurality of labeled pictures; and training a deep neural network with the labeled pictures to form the track detection model.
In one embodiment, the method further comprises: acquiring, from the augmented pictures, tiles that each contain a partial track image; the labeling step then comprises labeling the normal areas and damaged areas of the track image in the tiles.
In one embodiment, the step of acquiring the tiles comprises: taking as a tile an area of the augmented picture that satisfies a preset illumination condition and a preset focusing condition.
In one embodiment, the step of receiving the original picture comprises: receiving a video comprising a track image; and extracting an original picture including the track image from the video.
In one embodiment, the labeling step comprises: labeling each pixel of the normal area and of the damaged area.
In one embodiment, performing data augmentation on the original picture to obtain a plurality of augmented pictures comprises: applying at least one of rotation, flipping, scaling, and color jittering to each original picture.
In one embodiment, the deep neural network is a self-encoding (autoencoder) deep neural network used to predict the normal regions in the original picture.
In a second aspect, embodiments of the present application provide a method of detecting rail wear, the method comprising: receiving a picture to be detected containing a track image, which is acquired by a shooting device travelling along a track; and detecting the picture to be detected through the track detection model so as to distinguish a normal area and a damaged area of the track image, wherein the track detection model is formed according to the method for forming the track detection model.
In one embodiment, the method further comprises: acquiring position information of a picture to be detected; and outputting the position information of the picture to be detected with the damaged area.
In one embodiment, the method further comprises: setting up an identification library comprising the labeled pictures; labeling the normal area and the damaged area of the track image in the picture to be detected to obtain accumulated labeled pictures; and feeding the accumulated labeled pictures into the identification library for retraining the track detection model.
The present application provides an apparatus for forming a track detection model, the apparatus comprising: a receiving unit configured to receive an original picture acquired by a camera traveling along a track, wherein the original picture contains a track image; an augmentation unit configured to perform data augmentation on the original picture to obtain a plurality of augmented pictures; a labeling unit configured to label the normal areas and damaged areas of the track image in the augmented pictures to obtain a plurality of labeled pictures; and a training unit configured to train a deep neural network with the labeled pictures to form the track detection model.
In one embodiment, the receiving unit is further configured to: acquiring a picture block comprising a partial orbit image from the augmented picture; and the annotation unit is further configured to: and distinguishing and labeling the normal area and the damaged area of the track image in the plurality of image blocks.
In one embodiment, the receiving unit is further configured to: and acquiring an area meeting the preset illumination condition and the preset focusing condition in the augmented picture as a picture block.
In one embodiment, the receiving unit is further configured to: receiving a video comprising a track image; and extracting an original picture including the track image from the video.
In one embodiment, the annotation unit is further configured to: and marking each pixel point of the normal area and the damaged area.
In one embodiment, the augmentation unit is configured to apply at least one of rotation, flipping, scaling, and color jittering to each original picture to obtain a plurality of augmented pictures.
Another aspect of the present application provides an apparatus for detecting rail wear, comprising: a receiving unit configured to receive a picture to be detected, containing a track image, acquired by a camera traveling along the track; and a wear detection unit configured to detect the picture to be detected through a track detection model, wherein the track detection model is formed according to the method for forming a track detection model.
In one embodiment, the apparatus further comprises: a position detection unit configured to: acquiring position information of a picture to be detected; and outputting the position information of the picture to be detected with the damaged area.
In one embodiment, the apparatus further comprises: a registering unit configured to store an identification library comprising the labeled pictures; a labeling unit configured to label the normal area and the damaged area of the track image in the picture to be detected to obtain accumulated labeled pictures; and a training unit configured to feed the accumulated labeled pictures into the identification library for retraining the track detection model.
The present application provides a system for forming a track detection model, the system comprising: a memory for storing executable instructions and original pictures, wherein the original pictures comprise track images captured by a camera moving along the track; and one or more processors in communication with the memory to execute the executable instructions so as to implement the method of forming a track detection model.
Another aspect of the present application provides a system for detecting rail wear, comprising: a memory for storing executable instructions and pictures to be detected, wherein the pictures to be detected comprise track images captured by a camera moving along the track; and one or more processors in communication with the memory to execute the executable instructions so as to detect the pictures to be detected through a track detection model, wherein the track detection model is formed according to the forming method above.
In one embodiment, the system further comprises a camera adapted to travel along the track and capture an image of the track.
In one embodiment, the system further comprises a communication device in communication with the camera and the memory, for transmitting the picture to be detected to the memory.
In one embodiment, the system further comprises: and the positioner is used for acquiring the position of the shooting device relative to the track when the shooting device acquires the picture.
The present application also provides a computer readable medium having computer readable instructions stored thereon, wherein the computer readable instructions, when executed by a processor, implement the aforementioned method of forming a rail detection model or implement the aforementioned method of detecting rail wear.
The method for forming a track detection model provided by the embodiments of the present application can produce a model for inspecting the overhead-crane track in a factory. A newly built factory lacks operational data of all kinds; the method provided herein can form the track detection model in advance for subsequent use, and it reduces manual operations, improves personnel safety, and raises working efficiency.
The use of the track detection model can ensure the safe operation of the crown block and prevent possible follow-up faults in advance.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
fig. 1 is an architecture diagram of a crown block system according to an embodiment of the present application;
FIG. 2 is a flow diagram of a method of forming a track detection model according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a tile according to an embodiment of the present application;
fig. 4 is a schematic view of a normal region obtained by the method for detecting rail wear according to an embodiment of the present application;
FIG. 5 is an accuracy curve of the formed track detection model according to an embodiment of the present application;
FIG. 6 is a loss curve of the formed track detection model according to an embodiment of the present application;
FIG. 7 is a block flow diagram of a method for detecting rail wear according to an embodiment of the present application;
FIG. 8 is a block diagram of an apparatus for forming a track detection model according to an embodiment of the present application;
FIG. 9 is a block diagram of an apparatus for detecting rail wear according to an embodiment of the present application; and
fig. 10 is a schematic block diagram of a system for detecting rail wear according to an embodiment of the present application.
Detailed Description
For a better understanding of the present application, various aspects of the present application will be described in more detail with reference to the accompanying drawings. It should be understood that the detailed description is merely illustrative of exemplary embodiments of the present application and does not limit the scope of the present application in any way. Like reference numerals refer to like elements throughout the specification. The expression "and/or" includes any and all combinations of one or more of the associated listed items.
It should be noted that in this specification, the expressions first, second, third, etc. are used only to distinguish one feature from another, and do not represent any limitation on the features. Thus, a first block discussed below may also be referred to as a second block without departing from the teachings of the present application. And vice versa.
It will be further understood that the terms "comprises," "comprising," "has," "having," "includes" and/or "including," when used in this specification, specify the presence of stated features, elements, and/or components, but do not preclude the presence or addition of one or more other features, elements, components, and/or groups thereof. Moreover, when a statement such as "at least one of" appears after a list of listed features, it modifies the entire list rather than individual elements in the list. Furthermore, when describing embodiments of the present application, the use of "may" means "one or more embodiments of the present application." Also, the term "exemplary" is intended to refer to an example or illustration.
Unless otherwise defined, all terms (including engineering and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. In addition, unless explicitly defined or contradicted by context, the specific steps included in the methods described herein are not necessarily limited to the order described, but can be performed in any order or in parallel. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Referring to FIG. 1, a schematic block diagram of an automated factory according to an embodiment of the present application is shown. The automated factory 100 may include: a processing system 101, a manufacturing execution system (MES) 102, a material control system (MCS) 103, and an automated material handling system (AMHS) 104.
The processing system 101 may be communicatively coupled to the MES 102 and the AMHS 104; the coupling may be a wired or wireless communication link or a fiber-optic cable. For example, a user may monitor process information of the AMHS 104 via the processing system 101.
The MES 102 may transmit work content to the MCS 103, which converts the work content into a command (CMD) and sends it to the AMHS 104.
The AMHS 104 may include rails 105, stockers 106, overhead cranes 107, and processing equipment 108. An overhead crane 107 may run along the track 105 to carry material from a stocker 106 to the processing equipment 108.
Illustratively, the overhead crane 107 may be provided with a camera (not shown). The camera moves with the overhead crane 107 along the track 105 and can photograph the track 105, in particular at least the portion of the track 105 near the camera's location. Illustratively, the processing system 101 may include a communication device to which the camera is communicatively coupled. Illustratively, the medium storing the pictures captured by the camera is adapted to be read by the processing system 101.
The processing system 101 may be hardware or software. When the processing system 101 is hardware, it may be a variety of electronic devices with display screens, including but not limited to tablet computers, laptop portable computers, desktop computers, and the like. When the processing system 101 is software, it may be installed in the electronic devices listed above, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services), or it may be implemented as a single piece of software or software module. And is not particularly limited herein.
The method for forming the rail detection model and the method for detecting the rail abrasion provided by the embodiment of the application can be executed by the processing system 101. Accordingly, apparatus for implementing the methods is also generally disposed in the processing system 101.
Referring to FIG. 2, a method 1000 of forming a track detection model according to an embodiment of the present application is shown. The method 1000 includes the following steps:
step S101: and receiving an original picture acquired by a shooting device travelling along the track, wherein the original picture comprises a track image.
Step S102: and carrying out data augmentation on the original picture to obtain a plurality of augmented pictures.
Step S103: and distinguishing and labeling the normal area and the damaged area of the track image in the multiple augmented pictures to obtain multiple labeled pictures.
Step S104: and training the deep neural network by using a plurality of labeled pictures to form a track detection model.
The above steps S101 to S104 will be further exemplarily described below.
Step S101
In this embodiment, the camera acquires the original pictures. The camera may face forward or rearward along the direction of motion, and two cameras may be provided, each photographing the rail on its side. The camera's viewing angle may be horizontal or inclined toward the track. In addition, the camera may be mounted on an anti-shake device such as a gimbal.
In some exemplary embodiments, the step of receiving the original picture comprises: receiving a video containing the track image; and extracting original pictures containing the track image from the video. The camera may be arranged to record a video, and each frame of the video may then be taken as an original picture. Because the overhead crane may stop at a station and run between stations, the camera can be set to record while the crane is running and to stop recording when the crane stops. In other embodiments, the camera may be arranged to take still pictures repeatedly, for example in response to the crane traveling a certain distance or changing its position on the track, ensuring that two consecutive original pictures share a common portion of the track image.
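As a sketch of the frame-extraction policy above, the snippet below estimates how many video frames to skip between kept original pictures so that two consecutive pictures still share a common portion of the track. All numeric parameters (crane speed, frame rate, visible track length, overlap fraction) are illustrative assumptions, not values given in this application.

```python
def frame_stride(speed_mps, fps, view_length_m, overlap_frac=0.5):
    """How many frames to skip between kept frames so that consecutive
    original pictures overlap by at least `overlap_frac` of the visible
    track length.  All parameters are hypothetical example values."""
    max_travel_m = view_length_m * (1.0 - overlap_frac)  # allowed camera travel
    travel_per_frame_m = speed_mps / fps                 # distance per frame
    return max(1, round(max_travel_m / travel_per_frame_m))

# Crane at 2 m/s, 30 fps video, 1.2 m of track visible, 50% overlap:
stride = frame_stride(2.0, 30.0, 1.2, 0.5)  # keep every 9th frame
```

A larger required overlap yields a smaller stride, i.e. denser sampling of the track.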
In an exemplary embodiment, because the track recedes in depth within the original picture, track image quality differs greatly between areas, so only the well-imaged tiles of the original picture may be analyzed subsequently; the state of the track can then be judged more accurately. Reference is made to fig. 3, which shows a better-quality tile containing a track image; background content such as walls and track supports is excluded. As an example, the area at A in fig. 3 contains a damaged region of the track image.
Generally, tiles with good illumination are selected from the original picture, so that the image in the tile is not too dark. The camera's focal distance should also be considered, since the image is sharper near the plane of focus. The brightest location is usually avoided as well, because an overexposed location may not be well focused. Among the many original pictures taken by the camera, imaging conditions may differ, and tiles of similar size are usually selected from different pictures, for example with pixel dimensions differing by less than 300 px.
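The illumination and focusing criteria above can be sketched as a simple tile-scoring function: mean brightness stands in for the illumination condition and gradient variance for sharpness. The thresholds and the gradient-variance focus measure are illustrative assumptions, not specified by this application.

```python
import numpy as np

def tile_score(tile, min_mean=60, max_mean=200):
    """Score a candidate tile: reject too-dark or too-bright regions and
    prefer sharp (well-focused) ones.  Thresholds are illustrative."""
    mean = tile.mean()
    if not (min_mean <= mean <= max_mean):   # preset illumination condition
        return 0.0
    # Variance of row/column gradients as a simple focus measure:
    gy, gx = np.gradient(tile.astype(float))
    return float(gx.var() + gy.var())        # higher = sharper

# Pick the best tile among candidates cropped from one augmented picture.
rng = np.random.default_rng(0)
sharp = rng.integers(0, 255, (64, 64)).astype(float)  # high-contrast patch
dark = np.full((64, 64), 10.0)                        # under-lit patch
best = max([sharp, dark], key=tile_score)
```

In practice the candidates would be crops of the original picture; here random arrays stand in for them.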
Step S102
In this step, data augmentation is performed on the original picture to obtain a plurality of augmented pictures. Illustratively, an area of the augmented picture satisfying the preset illumination condition and the preset focusing condition is acquired as the tile to be used subsequently.
In an exemplary embodiment, the step of performing data augmentation on the original picture to obtain a plurality of augmented pictures comprises: applying at least one of rotation, flipping, scaling, and color jittering to each original picture.
Illustratively, the data augmentation operations include position changes and brightness changes.
Specifically, the position changes include rotation, flipping, translation, zooming, squeezing, and the like. For example, the original picture may be rotated by up to 20 degrees, or flipped about a horizontal or vertical axis. It may be translated horizontally and/or vertically by up to 10% of its size in that direction; illustratively, the translation distance may be within 10% of the size of the tile to be selected. The whole picture may be scaled to between 0.8 and 1.2 times its size, or squeezed in one direction (unidirectional scaling) to between 0.9 and 1.1 times.
Specifically, the brightness changes include brightness scaling, brightness transformation, noise, and the like. For example, the brightness values of the picture may be scaled to 0.8 to 1.2 times the original brightness, or the original brightness may be normalized, standardized, gamma-transformed, or logarithmically transformed; Gaussian noise may also be added to the original picture. During augmentation the original picture may, for example, be rotated by 0 degrees, so in a broad sense the original picture itself can be one of the many pictures obtained after augmentation. Augmenting the original pictures increases the diversity of the data subsequently fed into the track detection model and improves the model's robustness. This matters especially when preparing to inspect the track of a newly built factory, where little data is available; data augmentation overcomes the shortage of initial data.
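A minimal augmentation sketch combining the position and brightness changes above is shown below, using NumPy only. Arbitrary-angle rotation, translation, and squeezing would need an imaging library, so rotation is restricted to 90-degree steps here; the brightness range and noise level follow the text's 0.8-1.2x scaling and Gaussian-noise examples.

```python
import numpy as np

def augment(img, rng):
    """Yield augmented variants of one original picture (values in 0..255)."""
    yield np.rot90(img, k=int(rng.integers(0, 4)))                # rotation
    yield img[::-1, :]                                            # vertical flip
    yield img[:, ::-1]                                            # horizontal flip
    yield np.clip(img * rng.uniform(0.8, 1.2), 0, 255)            # brightness scaling
    yield np.clip(img + rng.normal(0.0, 5.0, img.shape), 0, 255)  # Gaussian noise

rng = np.random.default_rng(0)
original = rng.integers(0, 255, (32, 32)).astype(float)
augmented = list(augment(original, rng))  # five variants per original picture
```

Each original picture thus yields several training pictures, multiplying the small initial data set.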
Step S103
In this step, the normal areas and damaged areas of the track image in the augmented pictures are labeled to obtain a plurality of labeled pictures. Illustratively, a tile containing a partial track image is acquired from each augmented picture, and the normal and damaged areas of the track image in the tiles are then labeled. The tiles acquired from the augmented pictures may be of similar size, for example differing by less than 300 px. Further, tiles at the same position relative to the center of the original picture may be selected.
In an exemplary embodiment, the labeling step comprises labeling each pixel of the normal area and of the damaged area. Referring to Table 1:
table 1: pixel labeling
0 | 0 | 0 | 0 | 0 | 0 | 0 | … |
0 | 0 | 0 | 0 | 0 | 0 | 0 | … |
0 | 0 | 0 | 0 | 0 | 0 | 0 | … |
1 | 1 | 1 | 1 | 1 | 1 | 1 | … |
1 | 1 | 1 | 0 | 1 | 1 | 1 | … |
1 | 1 | 0 | 0 | 0 | 1 | 1 | … |
1 | 1 | 1 | 1 | 1 | 1 | 1 | … |
… | … | … | … | … | … | … | … |
Each pixel of the normal region of the track image at A in fig. 3 may be labeled 1, and each pixel of the damaged region may be labeled 0. The four pixels representing the damaged area in Table 1 lie among the pixels representing the normal area. Other image areas in the picture, for example, may also be labeled 0.
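The per-pixel labeling of Table 1 amounts to a binary mask over the tile. The small mask below mirrors the table fragment (1 = normal rail surface, 0 = damaged area or background); the 8x8 size is only for illustration.

```python
import numpy as np

mask = np.ones((8, 8), dtype=np.uint8)  # start from all-normal rail pixels
mask[:3, :] = 0                         # background rows above the rail
mask[4, 3] = 0                          # one damaged pixel inside the rail band
mask[5, 2:5] = 0                        # three more damaged pixels

normal_px = int(mask.sum())             # pixels labeled 1
damaged_or_bg_px = mask.size - normal_px
```

Such a mask, paired with its tile, is one labeled picture for training.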
Step S104
In this step, a deep neural network is trained with the labeled pictures to form the track detection model, which can be used at least to detect the normal regions of the track image. In particular, the track detection model may include a tile acquirer, a detector, and a classifier. The tile acquirer obtains the tiles to be detected. The detector may be a deep neural network, illustratively a self-encoding (autoencoder) deep neural network, used to predict the normal regions in the original picture. Specifically, the self-encoding deep neural network may include a convolutional network containing max-pooling layers and a deconvolutional network containing unpooling layers.
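The max-pooling/unpooling pair used by the encoder and decoder can be sketched in NumPy as follows. This shows only the pooling mechanics (with remembered argmax positions), not the convolutional layers or training, which a real implementation would add.

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling with remembered argmax positions (encoder side)."""
    h, w = x.shape
    blocks = x.reshape(h // 2, 2, w // 2, 2).swapaxes(1, 2).reshape(-1, 4)
    idx = blocks.argmax(axis=1)                      # where each max came from
    pooled = blocks.max(axis=1).reshape(h // 2, w // 2)
    return pooled, idx

def unpool_2x2(pooled, idx):
    """Place each pooled value back at its remembered position (decoder side)."""
    n = pooled.size
    blocks = np.zeros((n, 4))
    blocks[np.arange(n), idx] = pooled.ravel()
    h, w = pooled.shape
    return blocks.reshape(h, w, 2, 2).swapaxes(1, 2).reshape(h * 2, w * 2)

x = np.arange(16, dtype=float).reshape(4, 4)
pooled, idx = max_pool_2x2(x)       # 2x2 map of block maxima
restored = unpool_2x2(pooled, idx)  # maxima back in place, zeros elsewhere
```

In a framework implementation the same behavior comes from pooling layers that return indices and matching unpooling layers.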
Illustratively, the tile shown in FIG. 3 is input at the input side of the detector, passes through the layers of the convolutional network and then the layers of the deconvolutional network, and is output at the output side of the detector. The output is the normal area of the track image in the tile. As shown in fig. 4, the white area corresponds to the normal area of the track, while the black area of the lower rail near the centerline represents a worn area of the rail.
To form the track detection model, the deep neural network can be trained on a training set formed from the augmented pictures obtained in the previous steps. Referring to figs. 5 and 6, as training proceeds, the accuracy (val-acc) of the track detection model on the training set rises quickly and approaches 0.98 or above, while its loss (val-loss) on the training set falls rapidly. Meanwhile, the test accuracy (acc) on the tiles to be tested also rises quickly and stays essentially above 0.96, and the loss (Loss) on the tiles to be tested falls below 0.1.
The method provided by the embodiments of the present application can form a track detection model for inspecting the overhead-crane track in a factory. It addresses the scarcity of data in a new factory, and as the crane runs and data accumulate, the track detection model can be trained continuously to improve its performance further. In addition, the original pictures may change as the camera moves along the track, and track image quality may fluctuate; the track detection model formed by this method is robust and adapts well to the actual working environment.
FIG. 7 illustrates a method 2000 of detecting rail wear according to one embodiment of the present application.
The method 2000 includes the steps of:
Step S201: receiving a picture to be detected, containing a track image, acquired by a camera traveling along the track.
Step S202: detecting the picture to be detected through the track detection model. After detection, the normal area and the damaged area of the track image can be distinguished. The track detection model used in this step may be formed according to the method 1000 described above.
The track is typically located high up in the factory and may be several kilometres long. When the rail is damaged or abnormal, the overhead crane may not operate normally. With the detection method provided here, a camera takes pictures and the track detection model inspects the track image, so the track can be checked simply and quickly without disturbing normal crane operation. The method can discover track damage as early as possible and prevent the material handling system from being put out of action. It can also replace manual inspection, eliminating the safety hazard of prolonged manual work at height and improving detection efficiency.
In some embodiments, the method 2000 further comprises: step S200, setting up an identification library comprising the labeled pictures; distinctively labeling the normal areas and damaged areas of the track image in the picture to be detected to obtain accumulated labeled pictures; and inputting the accumulated labeled pictures into the identification library for repeated training of the track detection model. As the track continues to be used, new wear areas typically appear on it. Training the track detection model with the accumulated labeled pictures further improves its detection accuracy and keeps its parameters continuously updated, so the model stays adapted to the actual state of the track.
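Step S200 describes the identification library only functionally. A minimal sketch of the accumulate-and-retrain loop, under the assumption that training is simply re-invoked after every fixed number of newly labeled pictures (the `retrain_every` policy is a hypothetical choice, not stated in the patent):

```python
class IdentificationLibrary:
    """Accumulates labeled pictures and periodically retriggers training."""

    def __init__(self, retrain_every=3):
        self.pictures = []            # accumulated (image, mask) pairs
        self.retrain_every = retrain_every
        self.retrain_count = 0

    def add(self, image, mask):
        self.pictures.append((image, mask))
        if len(self.pictures) % self.retrain_every == 0:
            self._retrain()

    def _retrain(self):
        # stand-in: here the training step of method 1000 would be re-run
        # over every picture accumulated so far
        self.retrain_count += 1

lib = IdentificationLibrary(retrain_every=2)
for i in range(5):
    lib.add(f"img{i}", f"mask{i}")
```

After five additions the library holds all five labeled pictures and has triggered retraining twice, keeping the model's parameters tracking the rail's current condition.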
In some embodiments, the method 2000 further comprises: acquiring position information of the picture to be detected, and outputting the position information of any picture to be detected in which a damaged area is found. This helps a factory manager quickly locate the damaged area on the track and respond to the damage in a targeted manner.
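The patent does not detail how detection results are paired with locator readings. One trivial way to sketch it, with hypothetical picture identifiers and positions in metres:

```python
def damaged_positions(detections, positions):
    """Pair each picture flagged as damaged with its recorded track position.

    detections: list of (picture_id, has_damage) tuples
    positions:  dict mapping picture_id -> position along the track (metres)
    """
    return [(pid, positions[pid]) for pid, damaged in detections if damaged]

report = damaged_positions(
    [("p1", False), ("p2", True), ("p3", True)],
    {"p1": 10.0, "p2": 25.5, "p3": 40.0},
)
# report lists only the damaged pictures with their positions
```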
Referring to fig. 8, as an implementation of the above method for forming a track detection model, the present application provides an apparatus for forming a track detection model, which may correspond to the method embodiment shown in fig. 2, and which may be particularly applied in various electronic devices.
As shown in fig. 8, the apparatus 1 for forming a track detection model of the present embodiment includes: a receiving unit 11, an augmenting unit 12, a labeling unit 13 and a training unit 14. The receiving unit 11 is configured to receive an original picture containing a track image acquired by a shooting device traveling along a track. Illustratively, the receiving unit 11 is configured to receive a video including the track image and extract original pictures including the track image from the video. The augmenting unit 12 is configured to perform data augmentation on the original pictures to obtain a plurality of augmented pictures. The labeling unit 13 is configured to distinctively label the normal areas and damaged areas of the track image in the augmented pictures to obtain a plurality of labeled pictures. The training unit 14 is configured to train a deep neural network with the labeled pictures to form the track detection model. Specifically, the training unit 14 may obtain an initial track detection model, which may include a self-encoding deep neural network; the trained self-encoding deep neural network can then be used to construct the track detection model.
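The disclosure leaves the self-encoding network's internals open. The sketch below illustrates only the underlying idea — fit the encoder to normal track appearance so that damage surfaces as reconstruction error — substituting a linear self-encoder (PCA via SVD) for the deep network; the toy patch generator, the 4-dimensional code and the wear spot are all illustrative assumptions, not details from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def track_patch(worn=False):
    """Toy 8x8 'rail surface' patch: a smooth gradient plus mild noise.
    Wear is simulated as a bright spot the normal model cannot explain."""
    patch = np.tile(np.linspace(0.4, 0.6, 8), (8, 1))
    patch = patch + rng.normal(0.0, 0.01, (8, 8))
    if worn:
        patch[3, 4] += 0.3
    return patch.ravel()

# fit a linear self-encoder (PCA) on normal patches only
X = np.stack([track_patch() for _ in range(200)])
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
components = Vt[:4]                              # 4-dimensional latent code

def reconstruction_error(patch):
    code = (patch - mean) @ components.T         # encode
    recon = code @ components + mean             # decode
    return float(np.mean((patch - recon) ** 2))

normal_err = reconstruction_error(track_patch())
worn_err = reconstruction_error(track_patch(worn=True))
```

Because the encoder is fitted only to normal patches, the worn patch's reconstruction error is markedly higher — the property the wear detection unit relies on.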
In some embodiments, the receiving unit 11 is further configured to acquire picture blocks, each containing part of the track image, from the augmented pictures. Correspondingly, the labeling unit 13 is further configured to distinctively label the normal areas and damaged areas of the track image in the picture blocks. Illustratively, the receiving unit is configured to acquire, as a picture block, an area of the augmented picture that meets a preset illumination condition and a preset focusing condition.
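The preset illumination and focusing conditions are not quantified in the patent. A common proxy, sketched below as an assumption, checks mean brightness against a luminance window and uses the variance of the discrete Laplacian as a focus measure; the window (0.2–0.8) and the focus floor are illustrative values.

```python
import numpy as np

LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

def laplacian_variance(gray):
    """Focus measure: variance of the 3x3 Laplacian response (valid region)."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(gray[i:i + 3, j:j + 3] * LAPLACIAN)
    return out.var()

def is_usable_patch(gray, lum_range=(0.2, 0.8), min_focus=0.001):
    """Accept a patch only if it is neither too dark/bright nor blurred."""
    mean = gray.mean()
    return lum_range[0] <= mean <= lum_range[1] and laplacian_variance(gray) >= min_focus

rng = np.random.default_rng(0)
sharp = rng.random((8, 8)) * 0.4 + 0.3   # mid-brightness, textured -> in focus
dark = np.full((8, 8), 0.05)             # too dark and featureless
```

A blurred or over/under-exposed region is skipped rather than labeled, which keeps low-quality areas out of the training set.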
In particular, in some embodiments, the labeling unit is further configured to label each pixel of the normal area and the damaged area.
In some embodiments, the augmenting unit is configured to perform at least one of rotation, flipping, scaling and color jittering on each original picture to obtain the plurality of augmented pictures.
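The four augmentation operations named above can be sketched with plain array operations; the application probabilities and the 2x nearest-neighbour upscale standing in for "scaling" are illustrative choices, not specified by the patent.

```python
import numpy as np

def augment(picture, rng):
    """Produce one augmented copy by randomly combining the four operations."""
    out = picture.astype(float)
    if rng.random() < 0.5:
        out = np.rot90(out, k=rng.integers(1, 4))      # rotate 90/180/270 degrees
    if rng.random() < 0.5:
        out = np.flip(out, axis=rng.integers(0, 2))    # horizontal or vertical flip
    if rng.random() < 0.5:
        out = np.kron(out, np.ones((2, 2)))            # crude 2x nearest-neighbour upscale
    if rng.random() < 0.5:
        out = np.clip(out * rng.uniform(0.8, 1.2), 0.0, 1.0)  # brightness jitter
    return out

rng = np.random.default_rng(42)
original = np.linspace(0.0, 1.0, 16).reshape(4, 4)
augmented = [augment(original, rng) for _ in range(8)]
```

Each call yields a differently transformed copy, so a single original picture expands into many training samples.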
Referring to fig. 9, as an implementation of the above method for detecting rail wear, the present application provides an apparatus for detecting rail wear, which may correspond to the method embodiment shown in fig. 7, and which may be particularly applied in various electronic devices.
As shown in fig. 9, the apparatus 2 for detecting rail wear of the present embodiment includes: a receiving unit 21 and a wear detection unit 25. The receiving unit 21 is configured to receive a picture to be detected containing a track image acquired by a shooting device traveling along a track. The wear detection unit 25 is configured to detect the picture to be detected with the track detection model. The track detection model may be formed according to the method 1000 or the apparatus 1 described above.
The apparatus 2 for detecting rail wear of the present embodiment further includes: a registering unit 22, a labeling unit 23 and a training unit 24. The registering unit 22 is configured to store an identification library comprising labeled pictures. The labeling unit 23 is configured to distinctively label the normal areas and damaged areas of the track image in the picture to be detected to obtain accumulated labeled pictures. The training unit 24 is configured to input the accumulated labeled pictures into the identification library for repeated training of the track detection model.
In some embodiments, the apparatus 2 further comprises a position detection unit configured to acquire position information of the picture to be detected and to output the position information of any picture to be detected in which a damaged area is found.
The present application further provides a system for forming a rail inspection model, a system for inspecting rail wear, and a readable storage medium according to embodiments of the present application.
Fig. 10 shows a block diagram of a system for detecting rail wear. The system may be implemented as any of various forms of computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers and mainframe computers. The components shown here, their connections and relationships, and their functions are meant to be examples only and are not meant to limit implementations of the present application described and/or claimed herein.
The system includes one or more processors 301, a memory 302, and interfaces connecting the various components. These interfaces include high-speed interfaces and low-speed interfaces. The components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the system, including instructions stored in the memory, to obtain a picture to be detected from an input device such as a camera that travels along the track. In other embodiments, multiple processors and/or multiple buses may be used together with multiple memories, as desired. Likewise, multiple systems may be connected, with each system providing part of the necessary operations (e.g., as a server array, a group of blade servers or a multi-processor system). Fig. 10 illustrates an example with one processor 301.
Illustratively, the memory 302 may include a program storage area and a data storage area. The program storage area may store an operating system and application programs required for at least one function; the data storage area may store data created through use of the system for detecting track wear, and the like. Further, the memory 302 may include high-speed random access memory and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device or other non-transitory solid-state storage device. In some embodiments, the memory 302 optionally includes memory located remotely from the processor 301, which may be connected to the system for detecting rail wear over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Illustratively, the system further comprises a communication device and the shooting device. The communication device is communicatively connected with the shooting device and the memory 302, acquires the picture to be detected from the shooting device, and transmits it to the memory 302. In other embodiments, the picture or video to be detected may be received from the shooting device and stored on a storage medium, which may be an external storage medium or the memory 302; the system can then read the external storage medium, or the memory 302 can be connected with the one or more processors 301 to form the system.
Illustratively, the input device 303 of the system may also include a locator, which acquires the position of the shooting device relative to the track at the moment a picture is captured.
Further, the input device 303 may receive input numeric or character information and generate key signal inputs related to user settings and function control for detecting rail wear; examples include a touch screen, keypad, mouse, trackpad, touchpad, pointing stick, one or more mouse buttons, trackball and joystick. The output device 304 may include a display device, auxiliary lighting devices (e.g., LEDs) and haptic feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, and which receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
In another embodiment, the present application provides a system for forming a track detection model that includes one or more processors, a memory, and interfaces connecting the components. The memory stores computer instructions for causing a computer to perform the method of forming a track detection model provided herein, as well as original pictures obtained, for example, by a shooting device traveling along a track. The method executed by the system of this embodiment may be determined by the computer instructions stored in the memory.
The above description covers only preferred embodiments of the present application and illustrates the principles of the technology employed. A person skilled in the art will appreciate that the scope of protection of the present application is not limited to embodiments with the specific combinations of features described above, but also covers other embodiments formed by any combination of the above features or their equivalents without departing from the technical idea described, for example, technical solutions formed by interchanging the above features with (but not limited to) features of similar function disclosed in the present application.
Claims (18)
1. A method of forming a trajectory detection model, comprising:
receiving an original picture acquired by a shooting device travelling along a track, wherein the original picture comprises a track image;
performing data augmentation on the original picture to obtain a plurality of augmented pictures;
distinctively labeling the normal area and the damaged area of the track image in the plurality of augmented pictures to obtain a plurality of labeled pictures; and
training a deep neural network with the plurality of labeled pictures to form the track detection model.
2. The method of claim 1, wherein the method further comprises:
acquiring a picture block comprising part of the track image from the augmented picture;
wherein distinctively labeling the plurality of augmented pictures comprises:
distinctively labeling the normal area and the damaged area of the track image in the plurality of picture blocks.
3. The method of claim 2, wherein acquiring the picture block comprises:
acquiring, as the picture block, an area of the augmented picture that meets a preset illumination condition and a preset focusing condition.
4. The method of claim 1, wherein receiving an original picture comprises:
receiving a video comprising a track image; and
extracting an original picture including the track image from the video.
5. The method of claim 1, wherein the distinctive labeling comprises:
labeling each pixel of the normal area and the damaged area.
6. The method of claim 1, wherein performing data augmentation on the original picture to obtain a plurality of augmented pictures comprises: performing at least one of rotation, flipping, scaling and color jittering on each original picture to obtain the plurality of augmented pictures.
7. The method of claim 1, wherein the deep neural network is a self-encoding deep neural network for predicting normal regions in the original picture.
8. A method of detecting rail wear, comprising:
receiving a picture to be detected containing a track image, which is acquired by a shooting device travelling along a track; and
detecting the picture to be detected by a track detection model to distinguish a normal area and a damaged area of the track image, wherein the track detection model is formed according to the method of any one of claims 1 to 7.
9. The method of claim 8, further comprising:
acquiring the position information of the picture to be detected; and
outputting the position information of the picture to be detected in which the damaged area is found.
10. The method of claim 8, further comprising:
setting an identification library comprising the labeled pictures;
distinctively labeling the normal area and the damaged area of the track image in the picture to be detected to obtain an accumulated labeled picture; and
inputting the accumulated labeled pictures into the identification library for repeated training of the track detection model.
11. An apparatus for detecting rail wear, comprising:
a receiving unit configured to receive a picture to be detected containing a track image acquired by a shooting device travelling along a track; and
a wear detection unit configured to detect the picture to be detected by a track detection model, wherein the track detection model is formed according to the method of any one of claims 1 to 7.
12. The apparatus of claim 11, further comprising:
a position detection unit configured to:
acquiring the position information of the picture to be detected; and
outputting the position information of the picture to be detected in which the damaged area is found.
13. The apparatus of claim 11, further comprising:
a registering unit configured to store an identification library comprising the labeled pictures;
a labeling unit configured to distinctively label the normal area and the damaged area of the track image in the picture to be detected to obtain an accumulated labeled picture; and
a training unit configured to input the accumulated labeled pictures into the identification library for repeated training of the track detection model.
14. A system for detecting rail wear, comprising:
a memory storing executable instructions and a picture to be detected, wherein the picture to be detected comprises a track image captured by a shooting device while the shooting device moves along the track; and
one or more processors in communication with the memory to execute the executable instructions so as to detect the picture to be detected with a track detection model, wherein the track detection model is formed according to the method of any one of claims 1 to 7.
15. The system of claim 14, further comprising the shooting device, which is adapted to travel along the track and capture the track image.
16. The system of claim 15, further comprising a communication device in communication with the shooting device and the memory for transmitting the picture to be detected to the memory.
17. The system of claim 14, further comprising:
a positioner configured to acquire the position of the shooting device relative to the track when the shooting device captures the picture.
18. A computer readable medium having computer readable instructions stored thereon, wherein the computer readable instructions, when executed by a processor, implement the method of forming a rail detection model according to any one of claims 1-7 or implement the method of detecting rail wear according to any one of claims 8-10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110716186.0A CN113421246A (en) | 2021-06-24 | 2021-06-24 | Method for forming rail detection model and method for detecting rail abrasion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113421246A true CN113421246A (en) | 2021-09-21 |
Family
ID=77717868
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110716186.0A Pending CN113421246A (en) | 2021-06-24 | 2021-06-24 | Method for forming rail detection model and method for detecting rail abrasion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113421246A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114972117A (en) * | 2022-06-30 | 2022-08-30 | 成都理工大学 | Track surface wear identification and classification method and system |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110619057A (en) * | 2019-08-01 | 2019-12-27 | 北京百度网讯科技有限公司 | Information pushing method, device and equipment based on vehicle |
CN111080608A (en) * | 2019-12-12 | 2020-04-28 | 哈尔滨市科佳通用机电股份有限公司 | Method for recognizing closing fault image of automatic brake valve plug handle of railway wagon in derailment |
CN111239152A (en) * | 2020-01-02 | 2020-06-05 | 长江存储科技有限责任公司 | Wafer detection method, device and equipment |
CN111401182A (en) * | 2020-03-10 | 2020-07-10 | 北京海益同展信息科技有限公司 | Image detection method and device for feeding fence |
CN111611956A (en) * | 2020-05-28 | 2020-09-01 | 中国科学院自动化研究所 | Subway visual image-oriented track detection method and system |
CN112434695A (en) * | 2020-11-20 | 2021-03-02 | 哈尔滨市科佳通用机电股份有限公司 | Upper pull rod fault detection method based on deep learning |
CN112686888A (en) * | 2021-01-27 | 2021-04-20 | 上海电气集团股份有限公司 | Method, system, equipment and medium for detecting cracks of concrete sleeper |
CN112950566A (en) * | 2021-02-25 | 2021-06-11 | 哈尔滨市科佳通用机电股份有限公司 | Windshield damage fault detection method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10991088B2 (en) | Defect inspection system and method using artificial intelligence | |
US11341626B2 (en) | Method and apparatus for outputting information | |
Yang et al. | Vision-based tower crane tracking for understanding construction activity | |
CN106839976B (en) | Method and device for detecting lens center | |
CN111784663A (en) | Method and device for detecting parts, electronic equipment and storage medium | |
CN110148106A (en) | A kind of system and method using deep learning model inspection body surface defect | |
CN111488821A (en) | Method and device for identifying traffic signal lamp countdown information | |
US20240070834A1 (en) | Machine-learning framework for detecting defects or conditions of railcar systems | |
CN113421246A (en) | Method for forming rail detection model and method for detecting rail abrasion | |
CN113822882A (en) | Circuit board surface defect detection method and device based on deep learning | |
AVENDAÑO | Identification and quantification of concrete cracks using image analysis and machine learning | |
Ojha et al. | Affordable multiagent robotic system for same-level fall hazard detection in indoor construction environments | |
Hsu et al. | Defect inspection of indoor components in buildings using deep learning object detection and augmented reality | |
Dang et al. | Lightweight pixel-level semantic segmentation and analysis for sewer defects using deep learning | |
Liu et al. | Two-stream boundary-aware neural network for concrete crack segmentation and quantification | |
Singh et al. | Performance analysis of object detection algorithms for robotic welding applications in planar environment | |
Wen et al. | 3D Excavator Pose Estimation Using Projection-Based Pose Optimization for Contact-Driven Hazard Monitoring | |
Attard et al. | A comprehensive virtual reality system for tunnel surface documentation and structural health monitoring | |
US20230152781A1 (en) | Manufacturing intelligence service system connected to mes in smart factory | |
Kim et al. | Real-time assessment of surface cracks in concrete structures using integrated deep neural networks with autonomous unmanned aerial vehicle | |
CN111951328A (en) | Object position detection method, device, equipment and storage medium | |
CN116935174A (en) | Multi-mode fusion method and system for detecting surface defects of metal workpiece | |
CN115007474A (en) | Coal dressing robot and coal dressing method based on image recognition | |
Loktev et al. | Automated system for monitoring the upper structure of the railway track for extreme arctic conditions | |
Ghofrani et al. | Catiloc: Camera image transformer for indoor localization |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20210921 |