CN116543163A - Brake connecting pipe break fault detection method - Google Patents


Info

Publication number
CN116543163A
CN116543163A (application CN202310542234.8A)
Authority
CN
China
Prior art keywords
network
fusion
feature map
mask
frame
Prior art date
Legal status
Granted
Application number
CN202310542234.8A
Other languages
Chinese (zh)
Other versions
CN116543163B (en)
Inventor
韩旭 (Han Xu)
Current Assignee
Harbin Kejia General Mechanical and Electrical Co Ltd
Original Assignee
Harbin Kejia General Mechanical and Electrical Co Ltd
Priority date
Filing date
Publication date
Application filed by Harbin Kejia General Mechanical and Electrical Co Ltd
Priority to CN202310542234.8A
Publication of CN116543163A
Application granted
Publication of CN116543163B
Legal status: Active
Anticipated expiration


Classifications

    • G06V10/26 Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/045 Combinations of networks
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/08 Learning methods
    • G06V10/764 Image or video recognition using classification, e.g. of video objects
    • G06V10/806 Fusion of extracted features
    • G06V10/82 Image or video recognition using neural networks
    • Y02T10/40 Engine management systems

Abstract

A brake connecting pipe break fault detection method that reduces false alarms for brake connecting pipe break faults, belonging to the field of railway wagon fault detection. The method comprises: S1, acquiring an image of a railway wagon and obtaining a current-car image that contains the brake connecting hose; S2, inputting the current-car image and the corresponding historical-car image into a Mask-RCNN fault segmentation network, which outputs the fault type and a segmentation result for the brake connecting pipe, the fault position being determined from the segmentation result. The Mask-RCNN fault segmentation network introduces prior information from historical-car images and adds a contrast feature segmentation network and a contrast segmentation head, which lowers the recognition difficulty for the network and improves recognition accuracy.

Description

Brake connecting pipe break fault detection method
Technical Field
The invention relates to a brake connecting pipe break fault detection method, and belongs to the field of railway wagon fault detection.
Background
The traditional fault detection method of manually checking images is time-consuming and labor-intensive, has high labor cost, and is prone to missed and false detections caused by inspector fatigue and carelessness. Deep-learning-based fault detection for railway wagons can effectively reduce detection cost and improve detection efficiency. However, because muddy water, raw rubber tape and the like on the brake connecting hose look similar to a break, a plain instance segmentation algorithm produces a large number of false alarms.
Disclosure of Invention
Aiming at the problem of reducing false alarms for brake connecting pipe break faults, the invention provides a brake connecting pipe break fault detection method.
The invention discloses a brake connecting pipe break fault detection method, which comprises the following steps:
S1, acquiring an image of a railway wagon and obtaining a current-car image, wherein the current-car image contains the brake connecting hose;
S2, inputting the current-car image and the corresponding historical-car image into a Mask-RCNN fault segmentation network, which outputs the fault type and a segmentation result for the brake connecting pipe, the fault position being determined from the segmentation result;
the Mask-RCNN fault segmentation network comprises a No. 1 Mask-RCNN segmentation network, a No. 2 Mask-RCNN segmentation network and a contrast feature segmentation network; the contrast feature segmentation network comprises a contrast feature extraction branch, an RPN network, a RoIAlign network and a No. 2 segmentation head;
the current-car image is input into the No. 1 Mask-RCNN segmentation network, and the historical-car image into the No. 2 Mask-RCNN segmentation network;
the current-car features extracted by the No. 1 Mask-RCNN segmentation network and the historical-car features extracted by the No. 2 Mask-RCNN segmentation network are input simultaneously into the contrast feature extraction branch; the branch finds the parts that exist in both the current car and the historical car but whose features have changed, the parts that exist only in the historical car, and the parts that exist only in the current car, and fuses these features; the fused features are sent into the RPN network to extract proposal boxes, the extracted proposal boxes are pooled by the RoIAlign network, and the pooled proposal boxes are sent into the No. 2 segmentation head;
the segmentation head in the No. 2 Mask-RCNN segmentation network is a contrast segmentation head; the No. 1 and No. 2 Mask-RCNN segmentation networks simultaneously input the pooled current-car proposal boxes and pooled historical-car proposal boxes into the contrast segmentation head, which fuses them to obtain a segmentation result;
the segmentation results obtained by the segmentation head of the No. 1 Mask-RCNN segmentation network, the No. 2 segmentation head and the contrast segmentation head are combined to obtain the segmentation result of the brake connecting pipe; and the fault type of the brake connecting pipe is determined from the outputs of the classification heads of the No. 1 and No. 2 Mask-RCNN segmentation networks.
Preferably, the fusion process of the contrast feature extraction branch comprises:
the Resnet50 feature extraction networks in the No. 1 and No. 2 Mask-RCNN segmentation networks each comprise 5 convolution blocks Conv1-Conv5, and each convolution block downsamples the feature map by a factor of 2 through convolution and pooling operations;
the current-car features and historical-car features output by convolution block Conv3 of the two Resnet50 feature extraction networks are fused by the 1st fusion network C; the fused feature map is downsampled by a factor of 2 through the convolution and pooling operations of convolution block Conv6 and input into the 2nd fusion network C as the previous-stage fused feature map; the current-car features and historical-car features output by convolution block Conv4 of the two Resnet50 networks are simultaneously input into the 2nd fusion network C; the feature map fused by the 2nd fusion network C is downsampled by a factor of 2 through convolution block Conv7 and input into the 3rd fusion network C as the previous-stage fused feature map; the current-car features and historical-car features output by convolution block Conv5 of the two Resnet50 networks are simultaneously input into the 3rd fusion network C; and the feature map fused by the 3rd fusion network C is sent into the RPN network;
the convolution block Conv6 is identical to the convolution block Conv4, and the convolution block Conv7 is identical to the convolution block Conv 5.
Preferably, the fusion method of network C comprises:
let the input current-car features and historical-car features be E1 and E2 respectively; compute pixel-wise max(E1,E2)-min(E1,E2), the difference E1-E2 and the reversed difference E2-E1; input the results into the No. 1 Concat fusion network and concatenate (Concat) them with E1 and E2 along the channel direction of the feature map; if a previous-stage fused feature map E3 exists, it is input into the No. 1 Concat fusion network at the same time; the fused feature map obtained by the No. 1 Concat fusion network is output after a convolution by the No. 1 convolution block of size 1*1.
Preferably, the method by which the contrast segmentation head fuses the current-car proposal boxes and the historical-car proposal boxes comprises:
performing 4 fusion operations in sequence on the current-car and historical-car proposal boxes;
fusion operation 1: the current-car proposal box and the historical-car proposal box each pass through a convolution of size 3*3, batch normalization (BatchNorm) and a ReLU activation in sequence to obtain proposal-box feature maps F1 and F2 of size 14*14*256;
F1 and F2 are input simultaneously into the 1st fusion network D and the 1st Hopfield network H; the 1st Hopfield network H fuses and transforms F1 and F2, the transformed weight feature map F3 is also input into the 1st fusion network D, and the 1st fusion network D fuses its inputs to obtain the 1st fused feature map F4; fusion operation 2: F4, F1 and F2 each pass through a convolution of size 3*3, BatchNorm and a ReLU activation in sequence to obtain fused feature map F5 and proposal-box feature maps F6 and F7 of size 14*14*256;
F5, F6 and F7 are input simultaneously into the 2nd fusion network D, and F6 and F7 are input simultaneously into the 2nd Hopfield network H; the 2nd Hopfield network H fuses and transforms F6 and F7, the transformed weight feature map F8 is also input into the 2nd fusion network D, and the 2nd fusion network D fuses its inputs to obtain the 2nd fused feature map F9;
the 3rd and 4th fusion operations are then carried out in sequence and are the same as the 2nd fusion operation;
a discrete cosine transform (DCT) is applied to the 4th fused feature map to obtain a DCT vector; the DCT vector passes through 3 fully connected operations in sequence, and an inverse discrete cosine transform (IDCT) is applied to the result to obtain a 2-dimensional mask image, i.e. the segmentation result of the contrast segmentation head.
Preferably, the fusion method of the 2nd fusion network D comprises:
using the input current-car proposal-box feature map F6 and historical-car proposal-box feature map F7, compute pixel-wise max(F6,F7)-min(F6,F7), the difference F6-F7 and the reversed difference F7-F6; the results are input into the No. 2 Concat fusion network and concatenated (Concat) with F6 and F7 along the channel direction of the feature map; at the same time, the fused feature map obtained from the previous fusion operation is input into the No. 2 Concat fusion network; the No. 2 Concat fusion network applies a convolution by the No. 2 convolution block of size 1*1 to the concatenated feature map, and the result is weighted with the weight feature map F8 output by the corresponding Hopfield network H to complete the fusion;
the 1st fusion network D differs from the 2nd fusion network D only in that it does not take the fused feature map from a previous fusion operation as input.
Preferably, the fusion-transformation method of the Hopfield network H comprises:
using the input current-car proposal-box feature map F6 and historical-car proposal-box feature map F7, compute pixel-wise max(F6,F7)-min(F6,F7), the difference F6-F7 and the reversed difference F7-F6; the results are input into the No. 3 Concat fusion network and concatenated (Concat) with F6 and F7 along the channel direction of the feature map; the concatenated feature map is convolved by the No. 3 convolution block of size 1*1 to obtain a fused feature map Fc; Fc is weighted with the current-car proposal-box feature map F6 and the historical-car proposal-box feature map F7 respectively to obtain a current-car weight feature map and a historical-car weight feature map; the two weight feature maps are added pixel by pixel and passed through a sigmoid activation to obtain the final weight feature map.
Preferably, the method further comprises the steps of establishing a training data set, and training the Mask-RCNN fault segmentation network by using the training data set;
the process of building the training data set includes: and collecting images of normal brake connection hoses and images of broken brake connection hoses, and simultaneously collecting historical vehicle images of the same vehicle number without faults by each image to obtain a current vehicle image without faults and a historical vehicle image without faults with the same vehicle number as the current vehicle.
Preferably, building the training data set further comprises performing data augmentation operations on the collected images, including rotation, cropping, contrast transformation, affine transformation and rain/snow simulation.
The method has the beneficial effects that prior information from historical-car images is introduced into the instance segmentation network and a feature contrast branch is added, which reduces false alarms and improves recognition accuracy. Meanwhile, the modern Hopfield network idea and DCT contour enhancement are introduced into the network, further improving fault detection accuracy.
Drawings
FIG. 1 is a schematic diagram of the Mask-RCNN fault segmentation network;
FIG. 2 is a schematic diagram of the contrast feature fusion branch;
FIG. 3 is a schematic diagram of fusion network C in the contrast feature fusion branch;
FIG. 4 is a schematic diagram of the contrast segmentation head;
FIG. 5 is a schematic diagram of the contrast segmentation head fusion network;
FIG. 6 is a schematic diagram of the Hopfield network;
FIG. 7 is a flowchart of the fault detection of this embodiment.
Detailed Description
The following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings. The described embodiments are only some, not all, embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
It should be noted that, without conflict, the embodiments of the present invention and features of the embodiments may be combined with each other.
The invention is further described below with reference to the drawings and specific examples, which are not intended to be limiting.
As shown in FIG. 7, the brake connecting pipe break fault detection method of the present embodiment includes:
step 1, establishing a Mask-RCNN fault segmentation network, establishing a training data set, and training the Mask-RCNN fault segmentation network by using the training data set;
the brake connecting pipe is broken at the joint of the pipe and the cylinder body, and because the joint is generally interfered by raw rubber belt and the like, a large number of false alarms can be generated, meanwhile, rainwater, mud marks and the like are similar to the break, and network distinguishing is difficult, so that the embodiment introduces historical vehicle image priori information in a fault segmentation network, introduces comparison branches in the network, reduces network recognition difficulty and improves recognition accuracy.
As shown in FIG. 1, the Mask-RCNN fault segmentation network of this embodiment includes a No. 1 Mask-RCNN segmentation network, a No. 2 Mask-RCNN segmentation network and a contrast feature segmentation network; the contrast feature segmentation network comprises a contrast feature extraction branch, an RPN network, a RoIAlign network and a No. 2 segmentation head;
the current-car image is input into the No. 1 Mask-RCNN segmentation network, and the historical-car image into the No. 2 Mask-RCNN segmentation network;
the current-car features extracted by the No. 1 Mask-RCNN segmentation network and the historical-car features extracted by the No. 2 Mask-RCNN segmentation network are input simultaneously into the contrast feature extraction branch; the branch finds the parts that exist in both the current car and the historical car but whose features have changed, the parts that exist only in the historical car, and the parts that exist only in the current car, and fuses these features; the fused features are sent into the RPN network to extract proposal boxes, the extracted proposal boxes are pooled by the RoIAlign network, and the pooled proposal boxes are sent into the No. 2 segmentation head;
the segmentation head in the No. 2 Mask-RCNN segmentation network is a contrast segmentation head; the No. 1 and No. 2 Mask-RCNN segmentation networks simultaneously input the pooled current-car proposal boxes and pooled historical-car proposal boxes into the contrast segmentation head, which fuses them to obtain a segmentation result;
the segmentation results obtained by the segmentation head of the No. 1 Mask-RCNN segmentation network, the No. 2 segmentation head and the contrast segmentation head are combined to obtain the segmentation result of the brake connecting pipe; and the fault type of the brake connecting pipe is determined from the outputs of the classification heads of the No. 1 and No. 2 Mask-RCNN segmentation networks.
This embodiment adds the contrast feature extraction branch and the contrast segmentation head on top of the Mask-RCNN network, reducing the learning difficulty and improving fault recognition accuracy; the rest of the network is consistent with Mask-RCNN. Finally, the network combines the segmentation results of the No. 1 segmentation head, the No. 2 segmentation head and the contrast segmentation head to obtain the segmentation mask; classification head 1 outputs the target type and detection box for the current car, classification head 2 outputs the target type and detection box for the contrast target (i.e. the fault) between the current car and the historical car, and the final classification result is obtained by integrating the outputs of the two classification heads.
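The final merging step above can be sketched numerically. A minimal numpy illustration, assuming the three heads each emit a binary mask of the same size and that "combined" means a pixel-wise union (the patent does not specify the merge rule, so the function name and rule are assumptions):

```python
import numpy as np

def combine_masks(mask_head1, mask_head2, mask_contrast):
    """Merge binary masks from the three segmentation heads by pixel-wise union.

    The text only states that the three results are 'combined'; a pixel-wise
    OR is one plausible reading of that step.
    """
    return np.logical_or.reduce([mask_head1, mask_head2, mask_contrast]).astype(np.uint8)

# toy 4x4 masks: each head fires on a different (or the same) pixel
m1 = np.zeros((4, 4), dtype=np.uint8); m1[0, 0] = 1
m2 = np.zeros((4, 4), dtype=np.uint8); m2[1, 1] = 1
m3 = np.zeros((4, 4), dtype=np.uint8); m3[1, 1] = 1
combined = combine_masks(m1, m2, m3)  # union: pixels (0,0) and (1,1) are set
```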
In this embodiment, the process of creating the training data set includes:
(1) High-definition imaging equipment is erected along the railway to capture images as railway wagons pass. Images of normal brake connecting hoses and of broken brake connecting hoses are collected. Because images of broken hoses are scarce, break faults are also composited (e.g. in Photoshop) onto normal images to supplement the fault images in the data set. Because the network requires historical-image prior information for comparison, a fault-free historical-car image of the same car number is collected for each image. The final data set contains current-car images both with and without faults, and fault-free historical-car images of the same car numbers, in one-to-one correspondence with the current-car images.
(2) Annotating the images and creating the data set
The current-car images collected in the previous step are annotated: the outer contour of a fault in the current car is labeled as the break class, and the outer contour of the connecting pipe as the connecting-pipe class. The historical-car images are annotated likewise, with the outer contour of the connecting pipe labeled as the connecting-pipe class. Annotation is done with the labelme annotation software, and the resulting annotation files are finally converted into mask images.
Data augmentation operations are then performed on the data set images, including rotation, cropping, contrast transformation, affine transformation, rain/snow simulation and the like. Data augmentation effectively reduces the probability of overfitting and improves the generalization performance of the fault detection network.
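As an illustration of a few of the augmentation operations named above, a minimal numpy sketch (the function, parameter ranges and crop ratio are illustrative, not from the patent; a production pipeline would typically use a library such as albumentations or torchvision):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """Apply rotation, cropping and a contrast transformation to an H x W
    grayscale image. Parameter choices here are illustrative assumptions."""
    out = np.rot90(img, k=rng.integers(0, 4))                 # random 90-degree rotation
    h, w = out.shape
    top = rng.integers(0, h // 4 + 1)                         # random crop to 3/4 size
    left = rng.integers(0, w // 4 + 1)
    out = out[top:top + 3 * h // 4, left:left + 3 * w // 4]
    alpha = rng.uniform(0.8, 1.2)                             # contrast transformation
    out = np.clip(alpha * (out - out.mean()) + out.mean(), 0, 255)
    return out.astype(np.uint8)

img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
aug = augment(img)  # a 48x48 crop of a rotated, contrast-shifted copy
```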
Step 2: acquiring an image of the railway wagon and obtaining a current-car image containing the brake connecting hose;
In step 2, high-definition imaging equipment erected along the railway captures a passing image after the railway wagon passes. The passing image is sent into a localization network, which locates and crops the brake connecting hose region of the image.
Step 3: the current-car image and the corresponding historical-car image are input into the trained Mask-RCNN fault segmentation network, which outputs the fault type and segmentation result for the brake connecting pipe; the fault position is determined from the segmentation result, and the minimum circumscribed rectangle of the break-class region is computed as the alarm position in the fault message. If a break fault of the brake connecting hose is detected, a fault message is generated; otherwise no fault is considered to have occurred. The fault image message is uploaded, and railway wagon inspection staff handle the faulty part according to the message.
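The minimum circumscribed rectangle step can be sketched as follows, assuming an axis-aligned rectangle over the break-class mask (the patent does not state whether the rectangle is axis-aligned or rotated, so this is one plausible reading):

```python
import numpy as np

def min_bounding_rect(mask):
    """Axis-aligned minimum bounding rectangle (x, y, w, h) of the nonzero
    pixels of a binary break-class mask, used as the alarm position.
    Returns None when the mask contains no break region."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None  # no break detected, so no fault message
    x0, x1 = xs.min(), xs.max()
    y0, y1 = ys.min(), ys.max()
    return (int(x0), int(y0), int(x1 - x0 + 1), int(y1 - y0 + 1))

mask = np.zeros((10, 10), dtype=np.uint8)
mask[2:5, 3:8] = 1                     # a 3-row by 5-column break region
rect = min_bounding_rect(mask)         # (3, 2, 5, 3)
```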
In a preferred embodiment, the fusion process of the contrast feature extraction branch of this embodiment, as shown in FIG. 2, includes:
the Resnet50 feature extraction networks in the No. 1 and No. 2 Mask-RCNN segmentation networks each comprise 5 convolution blocks Conv1-Conv5, and each convolution block downsamples the feature map by a factor of 2 through convolution and pooling operations;
the current-car features and historical-car features output by convolution block Conv3 of the two Resnet50 feature extraction networks are fused by the 1st fusion network C; the fused feature map is downsampled by a factor of 2 through the convolution and pooling operations of convolution block Conv6 and input into the 2nd fusion network C as the previous-stage fused feature map; the current-car features and historical-car features output by convolution block Conv4 of the two Resnet50 networks are simultaneously input into the 2nd fusion network C; the feature map fused by the 2nd fusion network C is downsampled by a factor of 2 through convolution block Conv7 and input into the 3rd fusion network C as the previous-stage fused feature map; the current-car features and historical-car features output by convolution block Conv5 of the two Resnet50 networks are simultaneously input into the 3rd fusion network C; and the feature map fused by the 3rd fusion network C is sent into the RPN network. A segmentation-result mask regression operation is performed in the No. 2 segmentation head; the label of this branch is the difference between the current-car label and the historical-car label, i.e. the changed (fault) mask.
The convolution block Conv6 is identical in structure to Conv4, and Conv7 is identical to Conv5.
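The multi-scale fusion chain described above (Conv3/Conv4/Conv5 outputs fused stage by stage, with Conv6/Conv7 downsampling between stages) can be sketched in terms of tensor shapes. In this numpy sketch the fusion and downsampling operations are simple stand-ins for the learned networks, and the equal channel counts and toy spatial sizes are assumptions, not from the patent:

```python
import numpy as np

def downsample2(x):
    """Stand-in for Conv6/Conv7: halve the spatial resolution of (C, H, W)."""
    return x[:, ::2, ::2]

def fuse_c(cur, hist, prev=None):
    """Stand-in for fusion network C: here simply the element-wise mean of
    its inputs, preserving the shape of `cur`."""
    parts = [cur, hist] + ([prev] if prev is not None else [])
    return np.mean(parts, axis=0)

C = 256  # toy channel count (real ResNet50 stages differ)
c3 = (np.ones((C, 64, 64)), np.ones((C, 64, 64)))   # Conv3 current/historical outputs
c4 = (np.ones((C, 32, 32)), np.ones((C, 32, 32)))   # Conv4 outputs, half resolution
c5 = (np.ones((C, 16, 16)), np.ones((C, 16, 16)))   # Conv5 outputs, quarter resolution

f1 = fuse_c(*c3)                          # 1st fusion network C
f2 = fuse_c(*c4, prev=downsample2(f1))    # 2nd fusion network C, with previous stage
f3 = fuse_c(*c5, prev=downsample2(f2))    # 3rd fusion network C -> sent to the RPN
```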
The specific contrast feature fusion operation at the Conv3 layer is as follows: the current-car and historical-car branch Conv3 feature maps F1 and F2 undergo pixel-wise max(F1,F2)-min(F1,F2), difference and reversed difference; the results are concatenated (Concat) with the original feature maps F1 and F2 along the channel direction, and a 1*1 convolution then reduces the channel count back to that of F1 and F2 and eliminates the aliasing effect.
In a preferred embodiment, the fusion method of network C includes:
let the input current-car features and historical-car features be E1 and E2 respectively; compute pixel-wise max(E1,E2)-min(E1,E2), the difference E1-E2 and the reversed difference E2-E1; input the results into the No. 1 Concat fusion network and concatenate (Concat) them with E1 and E2 along the channel direction of the feature map; if a previous-stage fused feature map E3 exists, it is input into the No. 1 Concat fusion network at the same time; the fused feature map obtained by the No. 1 Concat fusion network is output after a convolution by the No. 1 convolution block of size 1*1.
The contrast feature fusion operations at Conv4 and Conv5 are similar to that at Conv3, except that the fused feature map E3 from the previous fusion operation is additionally included in the Concat fusion, as shown in FIG. 3. By computing max(E1,E2)-min(E1,E2), the difference and the reversed difference of the current-car branch feature map E1 and the historical-car branch feature map E2, the network can more easily find the parts that exist in both the current car and the historical car but whose features have changed, the parts that exist only in the historical car, and the parts that are absent from the current car; ultimately the network learns the change between the current car and the historical car, i.e. the fault segmentation result.
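A shape-level numpy sketch of fusion network C as described above. The 1*1 convolution weights are random stand-ins for learned parameters, and the toy channel count is an assumption; the point is the channel bookkeeping of the Concat step:

```python
import numpy as np

rng = np.random.default_rng(0)
C = 8  # toy channel count

def fusion_network_c(e1, e2, e3=None):
    """Fusion network C: concatenate max-min, E1-E2, E2-E1, E1, E2 (and the
    previous-stage map E3 if present) along channels, then apply a 1x1
    convolution back down to C channels. Inputs have shape (C, H, W)."""
    feats = [np.maximum(e1, e2) - np.minimum(e1, e2),  # pixel-wise max-min (= |E1-E2|)
             e1 - e2,                                   # difference
             e2 - e1,                                   # reversed difference
             e1, e2]
    if e3 is not None:
        feats.append(e3)                                # previous-stage fused map
    x = np.concatenate(feats, axis=0)                   # Concat along channel axis
    W = rng.standard_normal((C, x.shape[0]))            # 1x1 conv weights (random stand-in)
    return np.einsum('ok,khw->ohw', W, x)               # 1x1 conv == per-pixel matmul

e1 = rng.standard_normal((C, 16, 16))
e2 = rng.standard_normal((C, 16, 16))
e3 = rng.standard_normal((C, 16, 16))
fused = fusion_network_c(e1, e2, e3)  # shape (C, 16, 16) again
```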
In a preferred embodiment, the method for fusing the current car suggestion frame and the historical car suggestion frame by the contrast segmentation head, as shown in fig. 4, comprises:
carrying out 4 fusion operations in sequence on the current car suggestion frame and the historical car suggestion frame;
fusion operation 1: the current car suggestion frame and the historical car suggestion frame are each passed, in sequence, through a convolution of size 3×3, batch normalization BatchNorm, and ReLU activation to obtain a suggestion frame feature map F1 and a suggestion frame feature map F2 of size 14 × 14 × 256;
the suggestion frame feature map F1 and the suggestion frame feature map F2 are simultaneously input to the 1st fusion network D and the 1st Hopfield network H; the 1st Hopfield network H performs fusion conversion on F1 and F2, the converted weight feature map F3 is also input to the 1st fusion network D, and the 1st fusion network D fuses its inputs to obtain the 1st fusion feature map F4;

fusion operation 2: the 1st fusion feature map F4, the suggestion frame feature map F1, and the suggestion frame feature map F2 are each passed, in sequence, through a convolution of size 3×3, batch normalization BatchNorm, and ReLU activation to obtain a fusion feature map F5, a suggestion frame feature map F6, and a suggestion frame feature map F7 of size 14 × 14 × 256;
the fusion feature map F5, the suggestion frame feature map F6, and the suggestion frame feature map F7 are simultaneously input to the 2nd fusion network D, while F6 and F7 are simultaneously input to the 2nd Hopfield network H; the 2nd Hopfield network H performs fusion conversion on F6 and F7, the converted weight feature map F8 is also input to the 2nd fusion network D, and the 2nd fusion network D fuses its inputs to obtain the 2nd fusion feature map F9;
the 3rd fusion operation and the 4th fusion operation are then carried out in sequence, each being the same as the 2nd fusion operation;
DCT (discrete cosine transform) is performed on the 4th fusion feature map to obtain a DCT vector; the DCT vector is passed through 3 fully connected operations in sequence, and IDCT (inverse discrete cosine transform) is performed on the result to obtain a 2-dimensional mask image, i.e., the segmentation result obtained by the contrast segmentation head.
In the brake connecting pipe break fault recognition network, the RoIAlign-pooled suggestion frame feature maps of the current car and the historical car undergo contrast feature extraction in the contrast segmentation head to obtain the final contrast difference (fault) segmentation result; the label of this branch is the difference between the current car label and the historical car label, i.e., the change (fault) mask. In the original Mask-RCNN segmentation head, the RoIAlign-pooled suggestion frame feature map of size 14 × 14 × 256 is subjected to 4 convolutions of size 3×3, batch normalization BatchNorm, and ReLU activation operations. The same operations are performed in the contrast segmentation head on the pooled 14 × 14 × 256 suggestion frame feature maps of the current car and the historical car, except that 4 fusion operations are carried out in sequence to extract contrast change features, and modern Hopfield feature weighting is applied to the fused features. Finally, following the idea of DCT-Mask-RCNN, DCT is performed on the fused and weighted feature map; fc denotes a fully connected operation, and there are 3 fully connected operations in fig. 4. The DCT operation is not shown in the figure: DCT transformation is applied to the ground truth label mask to generate DCT vectors, the 3 fully connected layers regress the ground truth label mask DCT vector, and finally IDCT is performed to obtain a 2-dimensional mask image, which is the final contrast difference (fault) segmentation result.
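The DCT-vector mask encoding described above can be illustrated with a small NumPy sketch. The fully connected regression layers are omitted; the round trip below only demonstrates the DCT/IDCT step on a toy mask, and all function names, sizes, and the coefficient-truncation scheme are assumptions for illustration.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] /= np.sqrt(2.0)
    return m

def mask_to_dct_vector(mask, keep):
    """2-D DCT of the mask, keeping the top-left keep x keep low-frequency
    coefficients flattened as the regression target (the DCT vector)."""
    m = dct_matrix(mask.shape[0])
    coeff = m @ mask @ m.T
    return coeff[:keep, :keep].ravel()

def dct_vector_to_mask(vec, size, keep):
    """IDCT: place the regressed coefficients back and invert the transform."""
    coeff = np.zeros((size, size))
    coeff[:keep, :keep] = vec.reshape(keep, keep)
    m = dct_matrix(size)
    return m.T @ coeff @ m

mask = np.zeros((28, 28))
mask[8:20, 8:20] = 1.0                       # a toy ground-truth mask
vec = mask_to_dct_vector(mask, keep=28)      # full-spectrum DCT vector
recon = dct_vector_to_mask(vec, 28, keep=28)
print(np.allclose(recon, mask))              # True: the orthonormal DCT round-trips
```

In DCT-Mask-style training, `keep` would be much smaller than the mask size, so the fully connected layers regress only the dominant low-frequency coefficients of the ground truth mask.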
In a preferred embodiment, the fusion method of the 2nd fusion network D comprises:
pixel-wise max(F6, F7) - min(F6, F7), subtraction F6 - F7, and swapped subtraction F7 - F6 are computed from the input current car suggestion frame feature map F6 and historical car suggestion frame feature map F7 and input to the No. 2 Concat fusion network, where they are Concat-fused with F6 and F7 along the channel direction of the feature map; the fusion feature map F4 obtained from the previous fusion operation is input to the No. 2 Concat fusion network at the same time; the feature map fused by the No. 2 Concat fusion network undergoes the convolution operation of the No. 2 convolution block of size 1×1, and the feature map after the convolution operation is weighted with the weight feature map F8 output by the corresponding Hopfield network H to complete the fusion;
the 1st fusion network D differs from the 2nd fusion network D only in that the 1st fusion network D does not receive the fusion feature map obtained from a previous fusion operation.
In this embodiment, as shown in fig. 5, the fusion operation of the fusion network D is similar to the contrast feature fusion: the current car branch feature map F6 and the historical car branch feature map F7 undergo max(F6, F7) - min(F6, F7), subtraction, swapped subtraction, Concat fusion, and a 1×1 convolution that reduces the channel dimension and suppresses the aliasing effect of the feature map; the difference is that the feature map after the 1×1 convolution additionally undergoes the weighting operation of the Hopfield feature network H, the weighting being an element-wise multiplication.
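Under the same caveats as before, the fusion network D can be sketched as the contrast fusion followed by element-wise weighting with the Hopfield weight map F8. The random weights standing in for the trained 1×1 convolution, the shapes, and the function name are illustrative assumptions.

```python
import numpy as np

def fusion_network_d(f6, f7, f8, f_prev=None, seed=0):
    """Fusion network D: contrast fusion (max - min, subtraction, swapped
    subtraction, Concat, 1x1 convolution), then element-wise multiplication
    by the weight feature map F8 from the corresponding Hopfield network H."""
    parts = [np.maximum(f6, f7) - np.minimum(f6, f7), f6 - f7, f7 - f6, f6, f7]
    if f_prev is not None:                 # F4 from the previous fusion operation
        parts.append(f_prev)
    cat = np.concatenate(parts, axis=0)    # Concat along the channel axis
    w = np.random.default_rng(seed).standard_normal((f6.shape[0], cat.shape[0])) * 0.1
    fused = np.einsum('oc,chw->ohw', w, cat)
    return fused * f8                      # weighting = element-wise multiplication

c, h = 8, 14
rng = np.random.default_rng(3)
f6, f7 = rng.standard_normal((2, c, h, h))
f8 = 1 / (1 + np.exp(-rng.standard_normal((c, h, h))))  # a weight map in (0, 1)
print(fusion_network_d(f6, f7, f8).shape)  # (8, 14, 14)
```

Passing `f_prev` reproduces the difference between the 1st fusion network D (no previous map) and the later ones (previous fusion feature map included in the Concat).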
In a preferred embodiment, the fusion conversion method of the Hopfield network H comprises the following steps:
pixel-wise max(F6, F7) - min(F6, F7), subtraction F6 - F7, and swapped subtraction F7 - F6 are computed from the input current car suggestion frame feature map F6 and historical car suggestion frame feature map F7 and input to the No. 3 Concat fusion network, where they are Concat-fused with F6 and F7 along the channel direction of the feature map; the feature map fused by the No. 3 Concat fusion network undergoes the convolution operation of the No. 3 convolution block of size 1×1, and the feature map obtained after the convolution operation is weighted with the feature map F8 output by the corresponding Hopfield network H to obtain a fused feature map Fc; the fused feature map Fc is weighted with the current car suggestion frame feature map F6 and the historical car suggestion frame feature map F7 to obtain the weight feature map of the current car and the weight feature map of the historical car, which are added pixel by pixel and passed through a sigmoid activation to obtain the final weight feature map.
The Hopfield feature network H may be embedded in a deep learning network to improve the feature retention capability of the deep learning feature map; in this embodiment, the Hopfield feature network H is added to the contrast segmentation head so that the contrast change features are fully retained. The Hopfield feature network H is shown in fig. 6, and the process may be expressed as formula (1), where Fi (i = 1, 2) are the feature maps of the current car and the historical car, Fc is the fused feature map, Wc and Ws are the corresponding weights, and the weight feature maps are those obtained after the current car and historical car features pass through the Hopfield feature network H conversion; finally, the 2 weight feature maps are added pixel by pixel and activated by sigmoid to obtain the final weight feature map.
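The weighting path described above can be sketched as follows. This is a loose illustration of the pixel-wise combination and sigmoid activation only, not of formula (1) or of a modern Hopfield layer's attention update; every name, shape, and the random stand-in for the learned weights is an assumption.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hopfield_weight_map(f6, f7, seed=0):
    """Sketch of the Hopfield feature network H weighting path: contrast-fuse
    F6 and F7, map to a fused feature map Fc with a 1x1 convolution, weight Fc
    with F6 and with F7, add the two weight maps pixel by pixel, and apply a
    sigmoid to obtain the final weight feature map."""
    cat = np.concatenate(
        [np.maximum(f6, f7) - np.minimum(f6, f7), f6 - f7, f7 - f6, f6, f7], axis=0)
    w = np.random.default_rng(seed).standard_normal((f6.shape[0], cat.shape[0])) * 0.1
    fc = np.einsum('oc,chw->ohw', w, cat)  # fused feature map Fc
    w1 = fc * f6                           # weight map of the current car
    w2 = fc * f7                           # weight map of the historical car
    return sigmoid(w1 + w2)                # pixel-wise addition + sigmoid

rng = np.random.default_rng(4)
f6, f7 = rng.standard_normal((2, 8, 14, 14))
out = hopfield_weight_map(f6, f7)
print(out.shape, bool(((out > 0) & (out < 1)).all()))  # (8, 14, 14) True
```

The sigmoid keeps the final weight feature map in (0, 1), which is what makes it usable as an element-wise gate in the fusion network D.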
Although the invention herein has been described with reference to particular embodiments, it is to be understood that these embodiments are merely illustrative of the principles and applications of the present invention. It is therefore to be understood that numerous modifications may be made to the illustrative embodiments and that other arrangements may be devised without departing from the spirit and scope of the present invention as defined by the appended claims. It should be understood that the different dependent claims and the features herein may be combined in ways other than as described in the original claims. It is also to be understood that features described in connection with separate embodiments may be used in other embodiments.

Claims (10)

1. A method for detecting a break failure of a brake connection pipe, the method comprising:
s1, acquiring an image of a railway wagon, and acquiring a current car image, wherein the current car image comprises a brake connecting hose part;
s2, inputting the current car image and the corresponding historical car image into a Mask-RCNN fault segmentation network, outputting a fault type and a segmentation result of the brake connecting pipe by the Mask-RCNN fault segmentation network, and determining a fault position according to the segmentation result;
the Mask-RCNN fault segmentation network comprises a Mask-RCNN segmentation network No. 1, a Mask-RCNN segmentation network No. 2, and a contrast feature segmentation network; the contrast feature segmentation network comprises a contrast feature extraction branch, an RPN network, a RoIAlign network, and a No. 2 segmentation head;
inputting a current vehicle image into a Mask-RCNN segmentation network No. 1, and inputting a historical vehicle image into a Mask-RCNN segmentation network No. 2;
the current car features extracted by the Mask-RCNN segmentation network No. 1 and the historical car features extracted by the Mask-RCNN segmentation network No. 2 are simultaneously input to the contrast feature extraction branch; the contrast feature extraction branch finds the parts that exist in both the current car and the historical car but whose features have changed, the parts that exist in the historical car but not in the current car, and the parts that exist in the current car but not in the historical car, and fuses the features; the fused features are sent to the RPN network to extract suggestion frames, the extracted suggestion frames are sent to the RoIAlign network for pooling, and the pooled suggestion frames are sent to the No. 2 segmentation head;
the segmentation head in the Mask-RCNN segmentation network No. 2 is a contrast segmentation head; the Mask-RCNN segmentation network No. 1 and the Mask-RCNN segmentation network No. 2 simultaneously input the pooled current car suggestion frame and the pooled historical car suggestion frame to the contrast segmentation head, and the contrast segmentation head fuses the current car suggestion frame and the historical car suggestion frame to obtain a segmentation result;
the segmentation results obtained by the segmentation head of the Mask-RCNN segmentation network No. 1, the No. 2 segmentation head, and the contrast segmentation head are combined to obtain the segmentation result of the brake connecting pipe; and the fault type of the brake connecting pipe is determined according to the classification head outputs of the Mask-RCNN segmentation network No. 1 and the Mask-RCNN segmentation network No. 2.
2. The method for detecting break failure of brake pipe according to claim 1, wherein the fusion process of the contrast feature extraction branch comprises:
the Resnet50 feature extraction networks in the Mask-RCNN segmentation network No. 1 and the Mask-RCNN segmentation network No. 2 each comprise 5 convolution blocks Conv1-Conv5, and each convolution block downsamples the feature map by a factor of 2 through convolution and pooling operations;
the current car features and historical car features output by the convolution block Conv3 of the two Resnet50 feature extraction networks are fused by the 1st fusion network C; the fused feature map is downsampled by a factor of 2 through the convolution and pooling operations of the convolution block Conv6 and input, as the previous-step fusion feature map, to the 2nd fusion network C, to which the current car features and historical car features output by the convolution block Conv4 of the two Resnet50 feature extraction networks are simultaneously input; the feature map fused by the 2nd fusion network C is downsampled by a factor of 2 through the convolution and pooling operations of the convolution block Conv7 and input, as the previous-step fusion feature map, to the 3rd fusion network C, to which the current car features and historical car features output by the convolution block Conv5 of the two Resnet50 feature extraction networks are simultaneously input; and the feature map fused by the 3rd fusion network C is sent to the RPN network;
the convolution block Conv6 is identical to the convolution block Conv4, and the convolution block Conv7 is identical to the convolution block Conv 5.
3. The brake pipe break failure detection method according to claim 2, wherein the method of fusing network C comprises:
the input current car features and historical car features are E1 and E2 respectively; pixel-wise max(E1, E2) - min(E1, E2), subtraction E1 - E2, and swapped subtraction E2 - E1 are computed from E1 and E2 and input to the No. 1 Concat fusion network, where they are Concat-fused with E1 and E2 along the channel direction of the feature map; if a feature map E3 from the previous fusion step exists, it is input to the No. 1 Concat fusion network at the same time; the fused feature map obtained by the No. 1 Concat fusion network is output after the convolution operation of the No. 1 convolution block of size 1×1.
4. The method for detecting a break failure of a brake pipe according to claim 1, wherein the method for fusing the current car suggestion frame and the historical car suggestion frame by the contrast segmentation head comprises:
carrying out 4 fusion operations in sequence on the current car suggestion frame and the historical car suggestion frame;
fusion operation 1: the current car suggestion frame and the historical car suggestion frame are each passed, in sequence, through a convolution of size 3×3, batch normalization BatchNorm, and ReLU activation to obtain a suggestion frame feature map F1 and a suggestion frame feature map F2 of size 14 × 14 × 256;
the suggestion frame feature map F1 and the suggestion frame feature map F2 are simultaneously input to the 1st fusion network D and the 1st Hopfield network H; the 1st Hopfield network H performs fusion conversion on F1 and F2, the converted weight feature map F3 is also input to the 1st fusion network D, and the 1st fusion network D fuses its inputs to obtain the 1st fusion feature map F4;
fusion operation 2: the 1st fusion feature map F4, the suggestion frame feature map F1, and the suggestion frame feature map F2 are each passed, in sequence, through a convolution of size 3×3, batch normalization BatchNorm, and ReLU activation to obtain a fusion feature map F5, a suggestion frame feature map F6, and a suggestion frame feature map F7 of size 14 × 14 × 256;
the fusion feature map F5, the suggestion frame feature map F6, and the suggestion frame feature map F7 are simultaneously input to the 2nd fusion network D, while F6 and F7 are simultaneously input to the 2nd Hopfield network H; the 2nd Hopfield network H performs fusion conversion on F6 and F7, the converted weight feature map F8 is also input to the 2nd fusion network D, and the 2nd fusion network D fuses its inputs to obtain the 2nd fusion feature map F9;
the 3rd fusion operation and the 4th fusion operation are then carried out in sequence, each being the same as the 2nd fusion operation;
DCT (discrete cosine transform) is performed on the 4th fusion feature map to obtain a DCT vector; the DCT vector is passed through 3 fully connected operations in sequence, and IDCT (inverse discrete cosine transform) is performed on the result to obtain a 2-dimensional mask image, i.e., the segmentation result obtained by the contrast segmentation head.
5. The method for detecting break failure of brake pipe according to claim 4, wherein the fusion method of the 2nd fusion network D comprises:
pixel-wise max(F6, F7) - min(F6, F7), subtraction F6 - F7, and swapped subtraction F7 - F6 are computed from the input current car suggestion frame feature map F6 and historical car suggestion frame feature map F7 and input to the No. 2 Concat fusion network, where they are Concat-fused with F6 and F7 along the channel direction of the feature map; the fusion feature map F4 obtained from the previous fusion operation is input to the No. 2 Concat fusion network at the same time; the feature map fused by the No. 2 Concat fusion network undergoes the convolution operation of the No. 2 convolution block of size 1×1, and the feature map after the convolution operation is weighted with the weight feature map F8 output by the corresponding Hopfield network H to complete the fusion;
the 1st fusion network D differs from the 2nd fusion network D only in that the 1st fusion network D does not receive the fusion feature map obtained from a previous fusion operation.
6. The method for detecting break failure of brake connection pipe according to claim 4, wherein the fusion conversion method of the Hopfield network H comprises:
pixel-wise max(F6, F7) - min(F6, F7), subtraction F6 - F7, and swapped subtraction F7 - F6 are computed from the input current car suggestion frame feature map F6 and historical car suggestion frame feature map F7 and input to the No. 3 Concat fusion network, where they are Concat-fused with F6 and F7 along the channel direction of the feature map; the feature map fused by the No. 3 Concat fusion network undergoes the convolution operation of the No. 3 convolution block of size 1×1, and the feature map obtained after the convolution operation is weighted with the feature map F8 output by the corresponding Hopfield network H to obtain a fused feature map Fc; the fused feature map Fc is weighted with the current car suggestion frame feature map F6 and the historical car suggestion frame feature map F7 to obtain the weight feature map of the current car and the weight feature map of the historical car, which are added pixel by pixel and passed through a sigmoid activation to obtain the final weight feature map.
7. The brake pipe break failure detection method of claim 1, further comprising creating a training data set, the training data set being utilized to train a Mask-RCNN fault segmentation network;
the process of building the training data set comprises: collecting images of normal brake connecting hoses and images of broken brake connecting hoses, and for each image also collecting a fault-free historical car image of the same car number, thereby obtaining each current car image together with a fault-free historical car image bearing the same car number as the current car.
8. The brake pipe break failure detection method according to claim 7, characterized by further comprising, when establishing the training data set: and performing data amplification operation on the collected image, wherein the data amplification operation comprises rotation, clipping, contrast transformation, affine transformation and rain and snow simulation operation.
9. A computer-readable storage device storing a computer program, characterized in that the computer program, when executed, implements the brake connection pipe break failure detection method according to any one of claims 1 to 8.
10. A brake connection pipe break fault detection apparatus comprising a storage device, a processor and a computer program stored in the storage device and executable on the processor, wherein execution of the computer program by the processor implements the brake connection pipe break fault detection method as claimed in any one of claims 1 to 8.
CN202310542234.8A 2023-05-15 2023-05-15 Brake connecting pipe break fault detection method Active CN116543163B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310542234.8A CN116543163B (en) 2023-05-15 2023-05-15 Brake connecting pipe break fault detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310542234.8A CN116543163B (en) 2023-05-15 2023-05-15 Brake connecting pipe break fault detection method

Publications (2)

Publication Number Publication Date
CN116543163A true CN116543163A (en) 2023-08-04
CN116543163B CN116543163B (en) 2024-01-26

Family

ID=87452070

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310542234.8A Active CN116543163B (en) 2023-05-15 2023-05-15 Brake connecting pipe break fault detection method

Country Status (1)

Country Link
CN (1) CN116543163B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110298262A (en) * 2019-06-06 2019-10-01 华为技术有限公司 Object identification method and device
CN111160527A (en) * 2019-12-27 2020-05-15 歌尔股份有限公司 Target identification method and device based on MASK RCNN network model
CN112116195A (en) * 2020-07-21 2020-12-22 浙江蓝卓工业互联网信息技术有限公司 Railway beam production process identification method based on example segmentation
US20210174149A1 (en) * 2018-11-20 2021-06-10 Xidian University Feature fusion and dense connection-based method for infrared plane object detection
CN115601544A (en) * 2022-10-14 2023-01-13 长安大学(Cn) High-resolution image landslide detection and segmentation method
CN115984786A (en) * 2023-01-04 2023-04-18 邦邦汽车销售服务(北京)有限公司 Vehicle damage detection method and device, terminal and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210174149A1 (en) * 2018-11-20 2021-06-10 Xidian University Feature fusion and dense connection-based method for infrared plane object detection
CN110298262A (en) * 2019-06-06 2019-10-01 华为技术有限公司 Object identification method and device
CN111160527A (en) * 2019-12-27 2020-05-15 歌尔股份有限公司 Target identification method and device based on MASK RCNN network model
CN112116195A (en) * 2020-07-21 2020-12-22 浙江蓝卓工业互联网信息技术有限公司 Railway beam production process identification method based on example segmentation
CN115601544A (en) * 2022-10-14 2023-01-13 长安大学(Cn) High-resolution image landslide detection and segmentation method
CN115984786A (en) * 2023-01-04 2023-04-18 邦邦汽车销售服务(北京)有限公司 Vehicle damage detection method and device, terminal and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIE Fei; MU Yu; GUAN Ziyu; SHEN Xuemin; XU Pengfei; WANG Hexu: "Oral leukoplakia segmentation based on Mask R-CNN with a spatial attention mechanism", Journal of Northwest University (Natural Science Edition), no. 01 *

Also Published As

Publication number Publication date
CN116543163B (en) 2024-01-26

Similar Documents

Publication Publication Date Title
CN110232380B (en) Fire night scene restoration method based on Mask R-CNN neural network
CN112241728B (en) Real-time lane line detection method and system for learning context information by adopting attention mechanism
US20220174089A1 (en) Automatic identification and classification of adversarial attacks
CN112785480B (en) Image splicing tampering detection method based on frequency domain transformation and residual error feedback module
Liu et al. Deep network for road damage detection
CN111652295A (en) Railway wagon coupler yoke key joist falling fault identification method
JP2021174529A (en) Method and device for biometric detection
CN115147418B (en) Compression training method and device for defect detection model
CN116246059A (en) Vehicle target recognition method based on improved YOLO multi-scale detection
CN114926796A (en) Bend detection method based on novel mixed attention module
CN111723852A (en) Robust training method for target detection network
CN111445388A (en) Image super-resolution reconstruction model training method, ship tracking method and ship tracking device
CN115035172A (en) Depth estimation method and system based on confidence degree grading and inter-stage fusion enhancement
CN116543163B (en) Brake connecting pipe break fault detection method
CN112258483B (en) Coupler yoke pin inserting and supporting dislocation and nut loss fault detection method
CN105205829A (en) Transformer substation infrared image segmentation method based on improved two-dimensional Otsu algorithm
US10210621B2 (en) Normalized probability of change algorithm for image processing
CN116523881A (en) Abnormal temperature detection method and device for power equipment
KR102489884B1 (en) Image processing apparatus for improving license plate recognition rate and image processing method using the same
CN115546617A (en) Method and device for detecting loss of accessories of vehicle door locking device based on improved FCT network
CN113128563B (en) Method, device, equipment and storage medium for detecting high-speed engineering vehicle
CN111626102B (en) Bimodal iterative denoising anomaly detection method and terminal based on video weak marker
CN112686835B (en) Road obstacle detection device, method and computer readable storage medium
CN117237683B (en) Chip defect intelligent detection system based on improved neural network
CN113887499B (en) Sand dune image recognition model, creation method thereof and sand dune image recognition method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant