CN110163302B - Indicator diagram identification method based on regularization attention convolution neural network - Google Patents

Indicator diagram identification method based on regularization attention convolution neural network

Info

Publication number
CN110163302B
CN110163302B (application CN201910474060.XA)
Authority
CN
China
Prior art keywords
attention
channel
convolution
indicator diagram
neural network
Prior art date
Legal status
Active
Application number
CN201910474060.XA
Other languages
Chinese (zh)
Other versions
CN110163302A (en
Inventor
刘志刚
宋考平
杨二龙
刘显德
刘贤梅
杜娟
Current Assignee
Northeast Petroleum University
Original Assignee
Northeast Petroleum University
Priority date
Filing date
Publication date
Application filed by Northeast Petroleum University filed Critical Northeast Petroleum University
Priority to CN201910474060.XA
Publication of CN110163302A
Application granted
Publication of CN110163302B
Legal status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]


Abstract

The invention relates to an indicator diagram identification method based on a regularized attention convolutional neural network, which comprises the following steps: step one, establish a data preprocessing module that normalizes the pumping-unit working-condition sample set and converts it to gray-scale images; step two, establish a regularized attention convolution module that strengthens, suppresses and deactivates autonomously learned convolution features; step three, embed the regularized attention convolution module into a convolutional neural network to form a regularized attention convolutional neural network; step four, establish an indicator diagram identification module that inputs the gray-scale image of the indicator diagram into the regularized attention convolutional neural network for identification; step five, establish an attention loss function and train the regularized attention convolutional neural network model; step six, input the working-condition data collected in real time from the pumping unit into the indicator diagram identification model, repeating steps two to four; step seven, construct an intelligent pumping-unit working-condition diagnosis system with the RA-CNN-based indicator diagram identification method as its core. The invention can effectively improve indicator diagram identification accuracy.

Description

Indicator diagram identification method based on regularization attention convolution neural network
The technical field is as follows:
the invention relates to an intelligent diagnosis system for the working condition of an oil pumping unit, in particular to a method for identifying an indicator diagram based on a regularized attention convolution neural network.
Secondly, background art:
in oilfield production, the indicator diagram is a closed continuous curve formed by the load and displacement of the up-and-down reciprocating motion of the pumping-unit horsehead. Rich equipment-state information is hidden inside it, and the influence of down-hole factors such as gas, oil, water, sand and wax on the working condition of the pumping unit is reflected in real time, so analysis and identification of the indicator diagram are an important means of diagnosing pumping-unit working conditions. Because pumping units are numerous and displacement and load data are acquired frequently, manual analysis can hardly monitor working conditions in time; it is also limited by personnel experience and region, making it difficult to popularize. Since artificial neural networks (ANN) have good nonlinear approximation capability, they are widely applied to nonlinear system modeling. It should be noted, however, that when an artificial neural network is used for indicator diagram recognition in the conventional way, it is limited by the internal mechanism of the model and has the following disadvantages: (1) working-condition data are huge in scale, and an ordinary artificial neural network can hardly mine the hidden internal rules in the working-condition samples accurately, so the generalization capability of the model in application is insufficient; (2) an indicator diagram consists of hundreds of displacement-load data pairs, and conventional neural-network identification feeds displacement and load directly into the network as features; an ordinary neural network has a simple internal structure and weak feature-extraction capability, so correct diagnosis and identification are difficult; (3) in essence, indicator diagram identification means recognizing different working conditions from the contour characteristics formed by displacement and load, and the flattened displacement-load input mode cannot effectively embody the contour characteristics of the indicator diagram; (4) pumping-unit working conditions are influenced by formation factors and their distribution, the contour difference between some working conditions is not obvious and must be judged from certain local contour characteristics; this problem leaves the indicator diagram identification accuracy of ordinary neural networks and existing convolutional neural networks low.
Thirdly, the invention content:
the invention aims to provide a method for identifying an indicator diagram based on a regularization attention convolution neural network, which is used for solving the problem of low accuracy of indicator diagram identification of a common neural network and the conventional convolution neural network.
The technical scheme adopted by the invention for solving the technical problems is as follows: the indicator diagram identification method based on the regularization attention convolution neural network comprises the following steps:
establishing a data preprocessing module, and processing dimension and gray level diagrams of a working condition sample set of the pumping unit;
(1) in order to ensure that the dimension of each indicator diagram in the sample set is consistent, the pumping-unit working-condition sample set is normalized during data screening and processing. Let x and y be the original displacement and load of the pumping-unit horsehead motion; the normalized displacement \(\hat{x}\) and load \(\hat{y}\) are:

\[\hat{x} = \frac{x - x_{min}}{x_{max} - x_{min}}, \qquad \hat{y} = \frac{y - y_{min}}{y_{max} - y_{min}}\]

where \(x_{max}\) and \(x_{min}\) are respectively the maximum and minimum displacement of the pumping unit, and \(y_{max}\) and \(y_{min}\) the maximum and minimum load;
(2) drawing a contour curve of the indicator diagram by using a computer according to the time sequence data of the displacement and the load, and converting the contour curve into a gray scale diagram of 224 multiplied by 224 pixels;
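The normalization of step one can be sketched in a few lines of Python. This is a minimal illustration; the function name and the sample displacement/load values are assumptions, not from the patent.

```python
# Minimal sketch of the step-one min-max normalisation; names and the
# sample card values below are illustrative assumptions.

def normalize_card(xs, ys):
    """Scale displacement xs and load ys of one indicator card into [0, 1]."""
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    xn = [(x - x_min) / (x_max - x_min) for x in xs]
    yn = [(y - y_min) / (y_max - y_min) for y in ys]
    return xn, yn

xs = [0.0, 1.2, 2.4, 1.2]      # horsehead displacement samples (m, assumed)
ys = [30.0, 55.0, 30.0, 18.0]  # polished-rod load samples (kN, assumed)
xn, yn = normalize_card(xs, ys)
```

The normalized pairs can then be plotted as a closed curve and rasterized to the 224 × 224 gray-scale image described in (2).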
step two, establishing a regularization attention convolution module, and strengthening, inhibiting and inactivating the convolution characteristics of the autonomous learning;
the regularization attention convolution module comprises two mechanisms, wherein one mechanism is an attention mechanism, the feature extraction capability of the model is enhanced, and the model focuses on features with obvious correlation to indicator diagram categories in the learning and training process through a feature weighting mode of a channel; the other is a regularization mechanism, and the generalization capability of the model in later use is enhanced by establishing a channel discarding mode;
the process of constructing the regularized attention convolution module:
the output feature map of a given convolution layer is denoted \(U = [u_1, u_2, \ldots, u_C]\), \(u_i \in \mathbb{R}^{H \times W}\), where C is the number of channels and H and W are the height and width of each channel; after the regularized attention convolution module, the new feature map is denoted \(X = [x_1, x_2, \ldots, x_C]\), \(x_i \in \mathbb{R}^{H \times W}\).
The specific calculation process is as follows:
(1) channel polymerization
Firstly, the spatial dimensions of each channel in the feature map are compressed along the channel direction, aggregating each channel \(u_i\) with three kinds of global pooling (maximum, average and stochastic) into the channel attention components \(s_{max}\), \(s_{mean}\) and \(s_{sto}\), where the c-th element of each component is:

\[s_{max}^c = \max_{1 \le i \le H,\ 1 \le j \le W} u_c(i,j)\]

\[s_{mean}^c = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} u_c(i,j)\]

\[s_{sto}^c = u_c(i,j) \ \text{drawn with probability } p_c(i,j)\]

where \(p_c(i,j)\) represents the selection probability of an element in a channel, calculated as:

\[p_c(i,j) = \frac{u_c(i,j)}{\sum_{i=1}^{H} \sum_{j=1}^{W} u_c(i,j)}\]
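The three channel aggregations can be sketched as follows over a single H × W channel. All names are illustrative; the stochastic pooling samples one activation with probability proportional to its value, which assumes non-negative (post-ReLU) activations.

```python
import random

def channel_aggregate(u, rng=random.Random(0)):
    """Aggregate one H x W channel into its max / mean / stochastic scalars."""
    flat = [v for row in u for v in row]
    s_max = max(flat)
    s_mean = sum(flat) / len(flat)
    # stochastic pooling: draw one activation with probability u(i,j) / sum(u)
    total = sum(flat)
    r, acc, s_sto = rng.random() * total, 0.0, flat[-1]
    for v in flat:
        acc += v
        if acc >= r:
            s_sto = v
            break
    return s_max, s_mean, s_sto

s_max, s_mean, s_sto = channel_aggregate([[1.0, 2.0], [3.0, 4.0]])
```

Applying this to every channel yields the three C-dimensional attention components used in the next sub-step.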
(2) Channel attention
The channel attention components \(s_{max}\), \(s_{mean}\) and \(s_{sto}\) are each fed into a single-hidden-layer neural network, and aggregation is completed through point-wise multiplication by the weights, accumulation, and activation functions, so that the channel attention \(s = [s_1, s_2, \ldots, s_C]\) of the feature map U is defined as:

\[s = \sigma(w_1\delta(w_0 s_{max}) + w_1\delta(w_0 s_{mean}) + w_1\delta(w_0 s_{sto}))\]

where σ is the sigmoid function, δ is the ReLU function, and \(w_0\) and \(w_1\) are the weights of the single-hidden-layer network, shared by all three channel attention components;
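A minimal list-based sketch of this shared single-hidden-layer network follows. The weight shapes (w0: hidden × C, w1: C × hidden, in the squeeze-and-excitation style) and the toy values are assumptions for illustration.

```python
import math

def channel_attention(s_max, s_mean, s_sto, w0, w1):
    """Shared single-hidden-layer net: s = sigmoid(sum_k w1*relu(w0*s_k))."""
    relu = lambda v: [max(0.0, x) for x in v]
    def dense(w, v):  # w is an out x in weight matrix, v an input vector
        return [sum(wi * vi for wi, vi in zip(row, v)) for row in w]
    z = [a + b + c for a, b, c in zip(
        dense(w1, relu(dense(w0, s_max))),
        dense(w1, relu(dense(w0, s_mean))),
        dense(w1, relu(dense(w0, s_sto))))]
    return [1.0 / (1.0 + math.exp(-x)) for x in z]

# two channels, one hidden node; toy weights shared by all three components
s = channel_attention([1.0, 2.0], [0.5, 1.0], [1.0, 1.0],
                      w0=[[1.0, 1.0]], w1=[[0.5], [0.5]])
```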
(3) channel regularization
According to the channel attention \(s = [s_1, s_2, \ldots, s_C]\), a channel attention mask \(m = [m_1, m_2, \ldots, m_C]\) is constructed, where each mask element \(m_i\) identifies whether a channel is activated. Letting \(\bar{s}_i\) denote the normalized attention value of the i-th channel, the probability of each channel being selected is defined as:

\[p_i = \frac{e^{\bar{s}_i}}{\sum_{j=1}^{C} e^{\bar{s}_j}}\]

Using \(e^{rand(0,1)}\) to represent the random attention of a channel, the random selection probability of the channel can be expressed as:

\[\tilde{p}_i = \frac{e^{rand(0,1)}}{\sum_{j=1}^{C} e^{\bar{s}_j}}\]

If the selection probability \(p_i\) of a channel is greater than this random selection probability, the channel is activated, otherwise it is deactivated; that is, the greater a channel's selection probability, the more easily the channel is activated, so the mask \(m_i\) can be defined as:

\[m_i = \begin{cases} 1, & p_i > \tilde{p}_i \\ 0, & p_i \le \tilde{p}_i \end{cases}\]
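The channel-mask construction can be sketched as follows. It assumes, as one reading of the text, a per-channel random attention \(e^{rand(0,1)}\) normalized the same way as the softmax selection probabilities; this detail is an assumption.

```python
import math, random

def channel_mask(s, rng=random.Random(0)):
    """Build the activation mask m from the channel attention vector s."""
    exp_s = [math.exp(v) for v in s]
    z = sum(exp_s)
    p = [e / z for e in exp_s]          # softmax selection probability p_i
    # per-channel random attention e^rand(0,1), normalized the same way
    exp_r = [math.exp(rng.random()) for _ in s]
    zr = sum(exp_r)
    p_rand = [e / zr for e in exp_r]
    # activate a channel when its selection probability beats the random one
    return [1.0 if pi > ri else 0.0 for pi, ri in zip(p, p_rand)]

m = channel_mask([2.0, -1.0, 0.5, 0.0])
```

Channels with strong attention survive almost always; weakly attended channels are dropped with some probability, which is what gives the mechanism its regularizing effect.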
(4) regularization channel attention feature map
According to the channel attention s and the channel attention mask \(m = [m_1, m_2, \ldots, m_C]\), a regularized channel attention feature map \(X = [x_1, x_2, \ldots, x_C]\) is constructed on the basis of the original feature map U, specifically defined as:

\[x_i = m_i \cdot s_i \otimes u_i, \quad i = 1, 2, \ldots, C\]

where \(\otimes\) represents point-wise multiplication;
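The final reweighting \(x_i = m_i \cdot s_i \cdot u_i\) is then a per-channel scaling, sketched below (names are illustrative):

```python
def regularized_attention(u, s, m):
    """x_i = m_i * s_i * u_i: scale kept channels, zero deactivated ones."""
    return [[[mi * si * v for v in row] for row in ch]
            for ch, si, mi in zip(u, s, m)]

x = regularized_attention([[[2.0, 4.0]], [[3.0, 1.0]]],  # two 1 x 2 channels
                          s=[0.5, 0.9], m=[1.0, 0.0])
```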
embedding the regularized attention convolution module into a convolution neural network to form a regularized attention convolution neural network RA-CNN;
(1) a VGG-16 convolutional neural network is used as a basic model, a regularization attention convolution module is embedded into each two continuous convolutional layers, the regularization attention convolution module does not change the size of an original feature map, and channel attention and a regularization mechanism are added to the original feature map;
(2) the regularization attention convolution neural network RA-CNN comprises 5 convolution layers, 5 pooling layers and 4 regularization attention convolution modules;
(3) the convolution kernel size, step size and fill size used by each convolution layer are 3 × 3, 1 and 1, respectively;
establishing an indicator diagram identification module, and inputting the gray level image of the indicator diagram into a regularized attention convolution neural network RA-CNN for identification;
(1) the 224 × 224 gray-scale image of the indicator diagram passes in turn through each convolution layer, regularized attention convolution module and pooling layer to complete automatic feature extraction; recording a convolution layer, a regularized attention convolution module and a pooling layer together as one convolution unit, the output dimension of the 1st convolution unit is 112 × 112 × 128, of the 2nd 56 × 56 × 256, of the 3rd 28 × 28 × 512, of the 4th 14 × 14 × 512, and of the 5th 7 × 7 × 512;
(2) flattening the output vector of the last convolution unit, inputting the output vector to a fully-connected neural network, and identifying the type of the indicator diagram;
(3) the input to the fully-connected neural network in RA-CNN is 7 × 7 × 512, the hidden layer has 4096 nodes, and the output has 10 nodes, where each output node represents one working condition; the true indicator diagram sample labels are one-hot encoded;
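The convolution-unit output dimensions listed above can be checked with the standard output-size formula: a 3 × 3 convolution with stride 1 and padding 1 preserves spatial size, and each unit's pooling halves it. The per-unit channel counts below follow the text; the loop itself is an illustrative check, not the patent's code.

```python
def conv_out(n, k=3, s=1, p=1):
    """Output size of a convolution: (n + 2p - k) // s + 1."""
    return (n + 2 * p - k) // s + 1

size, channels = 224, 1
for c in [128, 256, 512, 512, 512]:   # per-unit output channels from the text
    size = conv_out(size)             # 3 x 3, stride 1, pad 1: size unchanged
    size //= 2                        # the unit's pooling halves the size
    channels = c
# sizes run 224 -> 112 -> 56 -> 28 -> 14 -> 7, matching the listed dimensions
```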
establishing an attention loss function, and training a regularized attention convolution neural network model RA-CNN;
The sample weights are adjusted according to each sample's contribution to the training loss, so that during training the RA-CNN model attends to samples that are easily misclassified and neglects samples that contribute little to the loss and are easily identified correctly. The attention loss function is:

\[L = -\sum_{j=1}^{T} (1 - \hat{y}_j)^{\gamma}\, y_j \log \hat{y}_j\]

where \(y_j\) and \(\hat{y}_j\) respectively represent the true category and the model's identification probability for the j-th class of indicator diagram sample, T represents the number of indicator diagram categories, and \((1 - \hat{y}_j)^{\gamma}\) is the loss adjustment factor of the indicator diagram sample;
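A sketch of such an attention loss follows. The adjustment factor \((1 - \hat{y}_j)^{\gamma}\) with γ = 2 is an assumption consistent with the description (it suppresses easily identified samples, in the style of a focal loss); the exact factor in the patent's formula image is not reproduced here.

```python
import math

def attention_loss(y, y_hat, gamma=2.0):
    """L = -sum_j (1 - y_hat_j)^gamma * y_j * log(y_hat_j) over T classes."""
    eps = 1e-12  # guard against log(0)
    return -sum(((1.0 - p) ** gamma) * t * math.log(p + eps)
                for t, p in zip(y, y_hat))

easy = attention_loss([0, 1, 0], [0.05, 0.90, 0.05])  # confidently correct
hard = attention_loss([0, 1, 0], [0.45, 0.10, 0.45])  # easily misidentified
```

The misidentified sample dominates the total loss, so gradient updates concentrate on exactly the samples the text says should drive training.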
step six, inputting the working-condition data of the pumping unit collected in real time into the trained indicator diagram recognition model for recognition, repeating steps two to four;
constructing an intelligent diagnosis system of the working condition of the pumping unit by taking the indicator diagram identification method based on RA-CNN as a core;
(1) constructing an intelligent diagnosis system for the working condition of the oil pumping unit by taking a indicator diagram identification method based on a regularized attention convolution neural network as a core, wherein the system is provided with a short message and mail sending module;
(2) the intelligent diagnosis system for the working condition of the pumping unit is used for analyzing and identifying the indicator diagram of the pumping unit in real time, if the working condition with the fault occurs, the diagnosis result is pushed to a manager in the form of mails and short messages, and measures are taken in time to process the oil well with the fault.
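The diagnose-and-push loop of steps six and seven might be sketched as follows. The well identifier, the probability vector and the notify callback are illustrative assumptions; the ten condition names follow the FIG. 7 description.

```python
# Condition names follow the FIG. 7 description; the well id, probabilities
# and notify callback are illustrative assumptions.
CONDITIONS = ["normal", "fixed valve loss", "floating valve loss",
              "rod breakage", "sand production", "wax deposition",
              "piston pump collision", "piston separation",
              "gas influence", "insufficient liquid supply"]

def diagnose(probs):
    """Pick the working condition with the highest RA-CNN output probability."""
    return CONDITIONS[max(range(len(probs)), key=probs.__getitem__)]

def monitor(cards, notify):
    """Classify each incoming indicator card; push any fault to managers."""
    for well, probs in cards:
        condition = diagnose(probs)
        if condition != "normal":
            notify(well, condition)  # e.g. the SMS / e-mail module of step 7

alerts = []
probs = [0.01] * 9 + [0.91]          # model output peaking at class 9
monitor([("well-07", probs)], lambda w, c: alerts.append((w, c)))
```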
Has the advantages that:
1. The invention establishes a channel attention mechanism; this content belongs to the regularized attention convolution module. The reasons for and advantages of the invention are analyzed below.
Reason analysis:
The working state of the pumping unit is complex, and the difference degree of the indicator diagram of some working conditions is small, so that the indicator diagram identification belongs to refined identification. The contour features of the indicator diagram are extracted by directly using the convolutional neural network for identification, so that the accuracy is low, and the indicator diagram is difficult to be directly used in the actual production process of the oil field.
Advantage analysis:
(1) When a human being observes an object, he does not examine every area in the scene one by one, but directly shifts attention to the object or area of interest for detailed observation. This physiological mechanism is not only fast but also achieves high identification accuracy through judgment of local characteristics. The channel attention mechanism provided by the invention mimics this human way of observing things: after features of the indicator diagram are extracted by the convolutional neural network, the features of channels relevant to the indicator diagram categories are enhanced, while other channels are suppressed or deactivated;
(2) during the forward transmission of the indicator diagram in the convolutional neural network, each channel of the characteristic diagram is a component of the indicator diagram on a different convolution kernel, so that each channel can be regarded as a characteristic detector. The channel attention provided by the invention is to utilize the importance degrees of different channels in a feature diagram to complete the enhancement or inhibition of the channel features, and the mechanism is to adjust the channel weight through model learning and determine which channel is meaningful for the identification of the channel, so that the important channels are focused by the model when the model identifies the indicator diagram, and the interference of other irrelevant features on the model identification is inhibited.
2. The invention establishes a regularization mechanism of convolutional layers suitable for a convolutional neural network, and the contents are attached to a regularization attention convolution module. The reason and the advantage of the invention are analyzed.
Reason analysis:
the convolutional neural network is a typical model for deep learning, the number of convolutional kernels is continuously increased along with the deepening of the number of model layers, and the feature extraction capability is continuously strong. However, at the same time, the model also has a large number of training parameters, and an overfitting phenomenon is easily generated, that is, the training precision of the model is very high, but the precision is low in the actual test.
However, the conventional method of "inactivating" some neurons or setting the connection weight to zero only works for the fully connected layer, and the effect on the convolutional layer is not obvious.
Advantage analysis:
the invention discloses a regularization mechanism suitable for a convolutional layer in order to prevent the overfitting problem of a model and improve the identification precision of an indicator diagram. The regularization mechanism is greatly different from the existing regularization mechanism, in the aspect of channels, according to a softmax function and channel attention, a channel attention mask for regularization of the channels is established, and inactivation of the channels is determined by using the mask, so that regularization of the convolutional layers is completed. Through experimental verification, the mechanism can effectively improve the identification precision of the indicator diagram during actual test.
3. The invention establishes an attention loss function suitable for convolutional neural network training, and the content is attached to an indicator diagram identification module. The reason and the advantage of the invention are analyzed.
Reason analysis:
in the learning and training process of the convolutional neural network, the features of some indicator diagrams differ obviously and the model identifies them easily, while the feature differences of other samples are small and the model easily misidentifies them. During continuous iterative training, the easily recognized samples contribute little to parameter adjustment, which not only wastes computing resources and training time without improving model performance, but may even interfere with the model's learning. Conversely, the indicator diagram samples that are easily misidentified play an important role in improving model performance; yet because all samples are treated equally, their effect cannot be fully exerted, which affects the training efficiency of the model.
Advantage analysis:
(1) the attention loss function provided by the invention automatically adjusts the contribution of the attention loss function to the training loss of the RA-CNN model according to the recognition condition of each indicator diagram sample. If the correctly identified indicator diagram is the indicator diagram, the contribution of the indicator diagram to the model training total loss is inhibited, so that the proportion of the incorrectly identified indicator diagram sample in the model training total loss is improved;
(2) because the model training loss is mainly embodied by the sample with the wrong recognition, the gradient adjustment is carried out on the parameters of the RA-CNN model by utilizing the loss, so that the model learning tends to pay more attention to the training of the indicator diagram with the wrong recognition, the rules hidden in the sample are fully excavated, and the recognition precision of the model is improved;
(3) the attention loss function enables the model to pay more attention to the samples which are easy to identify errors in the training process, and the generalization capability of the model is further improved.
Fourthly, explanation of the attached drawings:
FIG. 1 shows the working conditions of a pumping unit in actual production;
FIG. 2 is a flow diagram of a data pre-processing module of the present invention;
FIG. 3 is a block diagram of the regularized attention convolution module of the present invention;
FIG. 4 is a diagram of an internal model structure of the indicator diagram identification module of the present invention;
FIG. 5 is a flow diagram of an indicator diagram identification module of the present invention;
FIG. 6 is a technical flow diagram of the present invention;
fig. 7 is a confusion matrix for identifying indicator diagrams on a test set according to the present invention, wherein coordinate axes 0-9 sequentially represent normal, fixed valve loss, floating valve loss, rod breakage, sand production, wax deposition, piston pump collision, piston separation, gas influence, and insufficient liquid supply, a vertical axis represents a real working condition label of a sample, a horizontal axis represents a prediction label of the sample, and a diagonal line represents a correct number of samples in various working conditions in the test set. It is clear from this that the invention has a very high accuracy.
Fig. 8 is a gray scale graph (an example of a certain time in a certain well) obtained by converting the profile curves of displacement and load in step one.
The fifth embodiment is as follows:
the invention is further described below with reference to the accompanying drawings:
the indicator diagram identification method based on the regularization attention convolution neural network mainly comprises a data preprocessing module, a regularization attention convolution module and an indicator diagram identification module. The data preprocessing module is mainly responsible for the normalization of displacement load data and converts displacement-load time sequence data representing the indicator diagram into a gray image; the regularization attention convolution module strengthens, inhibits and deactivates convolution characteristics on the convolution layer channel, and effectively improves the identification precision of the indicator diagram; the indicator diagram identification module is used for inputting the gray scale diagram of the indicator diagram into the regularized attention convolution neural network for identification. Finally, the method is used as a core, the intelligent diagnosis system for the working condition of the oil pumping unit is constructed, and the indicator diagram identification result is pushed to a manager, so that the production measures can be taken in time.
The indicator diagram identification method based on the regularization attention convolution neural network specifically comprises the following steps:
step 1, establishing a data preprocessing module, and processing dimension and gray level diagrams of a working condition sample set of the pumping unit;
(1) in order to ensure that the dimension of each indicator diagram in the sample set is consistent, the pumping-unit working-condition sample set is normalized during data screening and processing. Recording x and y as the original displacement and load of the pumping-unit horsehead motion, the normalized displacement \(\hat{x}\) and load \(\hat{y}\) are:

\[\hat{x} = \frac{x - x_{min}}{x_{max} - x_{min}}, \qquad \hat{y} = \frac{y - y_{min}}{y_{max} - y_{min}}\]

where \(x_{max}\) and \(x_{min}\) are respectively the maximum and minimum displacement of the pumping unit, and \(y_{max}\) and \(y_{min}\) the maximum and minimum load.
(2) From the time-series data of the displacement and the load, the contour curve of the indicator diagram is drawn by computer and converted into a gray-scale map of 224 × 224 pixels (see fig. 8), drawn for example from the displacement and load data collected at a certain time in a certain well.
step 2, establishing a regularization attention convolution module, and strengthening, inhibiting and inactivating the convolution characteristics of the autonomous learning;
the main purpose of the step is to design a convolution module with strong feature extraction capability, on the channel of the convolution layer, the convolution features related to the indicator diagram category are enhanced, and the features which are not related or have weak correlation are inhibited and inactivated. In addition, the enhancement, the inhibition and the inactivation are all autonomously learned through a model and are autonomously completed under the mechanism designed by the invention. The module can improve the feature extraction capability and generalization capability of the model, thereby effectively improving the indicator diagram identification precision of the model.
The module comprises two mechanisms, wherein one mechanism is an attention mechanism, the feature extraction capability of the model can be effectively enhanced, and the model focuses on features which have obvious relevance to the indicator diagram types in the learning and training process through a feature weighting mode of a channel; the other is a regularization mechanism, and the generalization capability of the model in later use is enhanced by a channel-establishing discarding mode.
The process by which the present invention builds the regularized attention convolution module can be described in detail as follows. For convenience of description, the output feature map of a convolution layer is recorded as \(U = [u_1, u_2, \ldots, u_C]\), \(u_i \in \mathbb{R}^{H \times W}\), where C is the number of channels, and H and W are the height and width of each channel. After passing through the regularized attention convolution module, the new feature map is recorded as \(X = [x_1, x_2, \ldots, x_C]\), \(x_i \in \mathbb{R}^{H \times W}\).
The specific calculation process of this process is as follows.
(1) Channel polymerization
Firstly, the spatial dimensions of each channel in the feature map are compressed along the channel direction, aggregating each channel \(u_i\) with three kinds of global pooling (maximum, average and stochastic) into the channel attention components \(s_{max}\), \(s_{mean}\) and \(s_{sto}\), where the c-th element of each component is:

\[s_{max}^c = \max_{1 \le i \le H,\ 1 \le j \le W} u_c(i,j)\]

\[s_{mean}^c = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} u_c(i,j)\]

\[s_{sto}^c = u_c(i,j) \ \text{drawn with probability } p_c(i,j)\]

where \(p_c(i,j)\) represents the selection probability of an element in a channel, calculated as follows.

\[p_c(i,j) = \frac{u_c(i,j)}{\sum_{i=1}^{H} \sum_{j=1}^{W} u_c(i,j)}\]
(2) Channel attention
The channel attention components \(s_{max}\), \(s_{mean}\) and \(s_{sto}\) are each fed into a single-hidden-layer neural network, and aggregation is completed through point-wise multiplication by the weights, accumulation, and activation functions. Thus, the channel attention \(s = [s_1, s_2, \ldots, s_C]\) of the feature map U is defined as follows:

\[s = \sigma(w_1\delta(w_0 s_{max}) + w_1\delta(w_0 s_{mean}) + w_1\delta(w_0 s_{sto}))\]

where σ is the sigmoid function, δ is the ReLU function, and \(w_0\) and \(w_1\) are the weights of the single-hidden-layer network; these weights are shared by all three channel attention components.
(3) Channel regularization
According to the channel attention \(s = [s_1, s_2, \ldots, s_C]\), a channel attention mask \(m = [m_1, m_2, \ldots, m_C]\) is constructed, where each mask element \(m_i\) identifies whether a channel is activated. Letting \(\bar{s}_i\) denote the normalized attention value of the i-th channel, the probability that each channel is selected is defined as:

\[p_i = \frac{e^{\bar{s}_i}}{\sum_{j=1}^{C} e^{\bar{s}_j}}\]

Using \(e^{rand(0,1)}\) to represent the random attention of a channel, the random selection probability of the channel can be expressed as:

\[\tilde{p}_i = \frac{e^{rand(0,1)}}{\sum_{j=1}^{C} e^{\bar{s}_j}}\]

If the selection probability \(p_i\) of a channel is greater than this random selection probability, the channel is activated, otherwise it is deactivated; that is, the greater a channel's selection probability, the more easily the channel is activated, so the mask \(m_i\) can be defined as:

\[m_i = \begin{cases} 1, & p_i > \tilde{p}_i \\ 0, & p_i \le \tilde{p}_i \end{cases}\]
(4) Regularized channel attention feature map
According to the channel attention s and the channel attention mask m = [m_1, m_2, …, m_C], a regularized channel attention feature map X = [x_1, x_2, …, x_C] is constructed on the basis of the original feature map U, specifically defined as:

x_c = m_c · (s_c ⊗ u_c), c = 1, 2, …, C

where ⊗ represents point-by-point (element-wise) multiplication.
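Steps (3) and (4) can be sketched together in NumPy. The exact normalization of the attention values appears only as an image in the original, so normalizing by the maximum is an assumption here; with that choice, comparing p_i against the random selection probability reduces to comparing the normalized attention with a uniform random number:

```python
import numpy as np

def regularized_attention_map(U, s, rng=None):
    """Apply channel regularization (mask m) and channel attention s to the
    feature map U of shape (C, H, W): x_c = m_c * (s_c * u_c).
    Assumption: attention is normalized by its maximum before the
    softmax-style selection probabilities are formed."""
    rng = rng if rng is not None else np.random.default_rng(0)
    s_hat = s / s.max()                          # normalized channel attention
    Z = np.exp(s_hat).sum()
    p = np.exp(s_hat) / Z                        # selection probability p_i
    p_rand = np.exp(rng.uniform(0.0, 1.0)) / Z   # random selection probability
    m = (p > p_rand).astype(float)               # mask: 1 = activated channel
    X = m[:, None, None] * s[:, None, None] * U  # regularized attention map
    return X, m
```

Under this reading, the channel with the largest attention has ŝ_i = 1 > rand(0,1) and is never dropped, while weakly attended channels are dropped with higher probability, giving the channel-discard regularization described above.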
Step 3, embedding the regularization attention convolution module into a convolution neural network to form a regularization attention convolution neural network RA-CNN;
(1) the method adopts the VGG-16 convolutional neural network as the basic model, and a regularization attention convolution module is embedded between every two consecutive convolutional layers; the module does not change the size of the original feature map, and adds channel attention and a regularization mechanism to the feature map;
(2) the model comprises 5 convolutional layers, 5 pooling layers and 4 regularized attention convolution modules;
(3) the convolution kernel size, stride and padding used by each convolutional layer are 3 × 3, 1 and 1, respectively.
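Because a 3 × 3 convolution with stride 1 and padding 1 preserves the spatial size ((H + 2·1 − 3)/1 + 1 = H) and the regularization attention convolution module keeps the feature-map size, only pooling changes H and W. A quick arithmetic sketch (2 × 2 pooling with stride 2 is assumed, as the patent does not state the pooling window explicitly):

```python
def conv_unit_dims(h, w, c_out):
    """One convolution unit: 3x3/stride-1/padding-1 convolutions preserve
    (h, w); the RACM keeps the size; 2x2 stride-2 pooling halves h and w."""
    h = (h + 2 * 1 - 3) // 1 + 1   # convolution: spatial size preserved
    w = (w + 2 * 1 - 3) // 1 + 1
    return h // 2, w // 2, c_out   # pooling halves the spatial dims

dims, h, w = [], 224, 224
for c_out in (128, 256, 512, 512, 512):   # channel widths per unit, as stated
    h, w, c = conv_unit_dims(h, w, c_out)
    dims.append((h, w, c))
# dims == [(112, 112, 128), (56, 56, 256), (28, 28, 512), (14, 14, 512), (7, 7, 512)]
```

This reproduces the five convolution-unit output dimensions listed in step 4.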
Step 4, establishing an indicator diagram identification module, and inputting the gray level image of the indicator diagram into a regularized attention convolution neural network RA-CNN for identification;
(1) the 224 × 224 gray-scale image of the indicator diagram passes in turn through each convolutional layer, regularization attention convolution module and pooling layer in the model to complete automatic feature extraction. A convolution layer, a regularization attention convolution module and a pooling layer are recorded as one convolution unit, wherein the output dimension of the 1st convolution unit is 112 × 112 × 128, the output dimension of the 2nd convolution unit is 56 × 56 × 256, the output dimension of the 3rd convolution unit is 28 × 28 × 512, the output dimension of the 4th convolution unit is 14 × 14 × 512, and the output dimension of the 5th convolution unit is 7 × 7 × 512;
(2) flattening the output vector of the last convolution unit, inputting the output vector to a fully-connected neural network, and identifying the type of the indicator diagram;
(3) the input of the fully-connected neural network in RA-CNN is 7 × 7 × 512, the hidden layer has 4096 nodes, and the output has 10 nodes, each output node representing one working condition. The real indicator diagram sample labels use one-hot coding; for example, the fixed valve loss and floating valve loss conditions each correspond to a 10-dimensional vector with a 1 in the position of the corresponding condition and 0 elsewhere. Therefore, in actual use, the output node with the largest value among the 10 gives the indicator diagram type of the sample.
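The one-hot labelling and the argmax decision can be illustrated as follows (the ordering of the ten conditions is an assumption for illustration, not taken from the patent):

```python
import numpy as np

# Ten working conditions; this ordering is illustrative, not from the patent.
CONDITIONS = ["normal", "fixed valve loss", "floating valve loss", "rod breakage",
              "sand production", "wax deposition", "piston pump collision",
              "piston separation", "gas influence", "insufficient liquid supply"]

def one_hot(condition):
    """10-dimensional one-hot label vector for a working condition."""
    v = np.zeros(len(CONDITIONS))
    v[CONDITIONS.index(condition)] = 1.0
    return v

def diagnose(output_nodes):
    """The output node with the largest value gives the diagnosed condition."""
    return CONDITIONS[int(np.argmax(output_nodes))]
```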
Step 5, designing an attention loss function, and training a regularized attention convolution neural network model RA-CNN
In order to ensure that the model is sufficiently trained and to improve its generalization capability, the invention proposes an attention loss function that adjusts sample weights according to the contribution of each sample to the training loss, so that during training the model attends to samples that are easily misclassified and ignores samples that contribute little to the loss and are easily identified correctly. The loss function is specifically defined as:
L = − Σ_{j=1}^{T} (1 − ŷ_j)^γ · y_j · log(ŷ_j)

wherein y_j and ŷ_j respectively represent the real category and the model identification probability of the j-th type of indicator diagram sample, T represents the number of indicator diagram categories, and (1 − ŷ_j)^γ (γ ≥ 0) is the loss adjustment coefficient of the indicator diagram sample.
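The loss formula itself appears only as an image in the original; the behaviour described in the two properties that follow, however, matches a focal-loss-style weighting, so this NumPy sketch assumes the form −Σ_j (1 − ŷ_j)^γ · y_j · log ŷ_j with focusing parameter γ:

```python
import numpy as np

def attention_loss(y_true, y_pred, gamma=2.0):
    """Assumed focal-style attention loss: the adjustment coefficient
    (1 - y_pred)^gamma shrinks the loss of confidently correct samples
    toward 0, so misclassified samples dominate the training signal."""
    y_pred = np.clip(y_pred, 1e-12, 1.0)  # guard against log(0)
    return float(-np.sum((1.0 - y_pred) ** gamma * y_true * np.log(y_pred)))
```

For example, with the true class predicted at probability 0.95 the cross-entropy term is suppressed by the factor (0.05)² = 0.0025, while at probability 0.3 the factor is (0.7)² = 0.49, so the hard sample contributes far more to the total loss.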
The function has the following characteristics:
(1) for correctly identified indicator diagram samples, the identification probability ŷ_j → 1, so the adjustment coefficient (1 − ŷ_j)^γ → 0 and the training loss → 0; the contribution of such samples to the training loss is therefore effectively suppressed, eliminating their dominance over the training process;
(2) for misclassified indicator diagram samples, the prediction probability ŷ_j is small, the adjustment coefficient (1 − ŷ_j)^γ is close to 1, and their training loss is affected little. It should be noted that, compared with the suppressed loss of correctly identified samples, the contribution of misclassified samples to the training loss is amplified roughly exponentially; the loss trend of the model therefore mainly reflects the identification of the misclassified samples, while the partial loss of correctly identified samples is still taken into account.
Step 6, inputting the real-time collected working condition data of the pumping unit into a trained indicator diagram recognition model for recognition, and repeating the steps 2-4;
after the indicator diagram recognition model has been trained with the sample set, it can be directly used for actual indicator diagram recognition. The recognition process repeats steps 2-4: first, the convolutional layers and regularization attention modules perform feature extraction, enhancement, suppression and deactivation on the gray-scale image of the indicator diagram; then the output of the last convolution unit is fed into the fully-connected neural network, and the indicator diagram type is identified according to the probabilities computed by the softmax function.
Step 7, constructing an intelligent diagnosis system for pumping unit working conditions with the RA-CNN-based indicator diagram identification method as the core.
(1) An intelligent diagnosis system for pumping unit working conditions is constructed with the indicator diagram identification method based on the regularized attention convolutional neural network as its core; the system provides a short message and mail sending module;
(2) the system analyzes and identifies the indicator diagram of the pumping unit in real time; if a fault working condition occurs, the diagnosis result is pushed to managers in the form of mails and short messages, so that measures can be taken in time to handle the faulty oil well.
The data preprocessing module of the invention: in order to facilitate model identification and avoid the drawback of the instantaneous input mechanism of conventional neural networks, the displacement-load time series of the pumping unit is converted into a gray-scale image, and the displacement and load are normalized at the same time;
the Regularized Attention Convolution Module (RACM) of the invention comprises a regularization mechanism and an attention mechanism: important features related to the indicator diagram type are enhanced on the convolutional layer channels, while other features are suppressed or deactivated, effectively improving the accuracy of indicator diagram identification;
the indicator diagram identification module of the invention: the RACM is embedded between every two consecutive convolutional layers of the convolutional neural network, forming the Regularized Attention Convolutional Neural Network (RA-CNN), which is used to identify the indicator diagram. For training the RA-CNN model, the invention proposes an attention loss function so that the model pays more attention to misclassified indicator diagram samples during training.
Example:
the oil pumping unit is characterized by comprising 10 common working conditions which comprise normal, fixed valve loss, floating valve loss, rod breakage, sand production, wax deposition, piston pump collision, piston separation, gas influence and insufficient liquid supply.
For the sample set, 25963 indicator diagram samples from 40 pumping units in a certain oil production block of the Daqing oilfield were selected in this embodiment; the working condition corresponding to each indicator diagram sample was labeled manually during production. To keep the classes balanced, the fault-condition samples were screened and augmented; the augmentation methods include displacement-load shifting, rotation and translation, and the final sample set contains 18500 fault-condition samples. During model training, 5-fold cross-validation is adopted: the working condition sample set is evenly divided into 5 parts, 4 of which are taken as the training set each time and the remaining 1 as the test set.
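The 5-fold split described above can be sketched as follows (index bookkeeping only; the fold count of 5 and the 25963-sample total are from the embodiment):

```python
import numpy as np

def five_fold_splits(n_samples, seed=0):
    """Shuffle the sample indices, divide them into 5 nearly equal folds,
    and yield (train_idx, test_idx) pairs; each fold is the test set once."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    folds = np.array_split(idx, 5)
    for k in range(5):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        yield train, test
```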
Description of the field test conditions (tests were carried out to verify the feasibility of the method from step 1 to step 7):
Table 1 shows the diagnosis results of the RA-CNN model of the present invention for each working condition on different test data sets. It can be clearly seen that RA-CNN achieves high diagnostic accuracy for most pumping unit working conditions, with an indicator diagram identification accuracy exceeding 92%, which meets the practical requirement for oilfield production (over 90%).
TABLE 1 Working condition diagnostic accuracy (%)
[Table 1 appears as an image in the original document.]
Table 2 shows an ablation experiment analysis of the indicator diagram recognition model of the invention, i.e., a comparison of the indicator diagram recognition accuracy of the RA-CNN model with and without the regularized attention mechanism and the attention loss function proposed in the invention. As is apparent from the comparative data in Table 2, as the channel attention, channel regularization and attention loss function are removed from the RA-CNN model, the indicator diagram identification accuracy degrades continually, with the degradation after removing the channel attention mechanism being most significant. When both the regularization attention convolution module and the attention loss function are removed, the RA-CNN model degenerates into a standard convolutional neural network, and the identification accuracy drops markedly to only 85.3%, below 90%, which cannot meet the requirements of oilfield production. The ablation results further prove the effectiveness of the invention: attention and regularization can effectively and automatically extract the contour features of the indicator diagram, thereby improving the model's identification precision.
TABLE 2 Ablation experiment performance analysis of the RA-CNN model
[Table 2 appears as an image in the original document.]
In addition, the RA-CNN model is compared with other common deep learning models, such as VGG-16, VGG-19, ResNet-50 and ResNet-101, as well as with a common BP neural network (BP-ANN). Table 3 gives the accuracy comparison. Although the depths of VGG-16, VGG-19, ResNet-50 and ResNet-101 increase successively, the indicator diagram identification accuracy does not improve obviously, showing that directly applying existing deep learning models to indicator diagram identification gives unsatisfactory accuracy. The identification accuracy of the common BP neural network is only 72.4%: the BP network directly uses the flattened displacement and load data of the indicator diagram as input and so cannot effectively extract the contour features of the indicator diagram; moreover, when identifying a large-scale sample set, the simple structure and limited memory capacity of the BP network lead to weak generalization. These two reasons make the BP neural network's indicator diagram identification accuracy the lowest.
TABLE 3 Comparison of indicator diagram identification accuracy of different models
[Table 3 appears as an image in the original document.]

Claims (1)

1. An indicator diagram identification method based on a regularized attention convolutional neural network, characterized by comprising the following steps:
establishing a data preprocessing module, and processing dimension and gray level diagrams of a working condition sample set of the pumping unit;
(1) in order to ensure that the dimension of each indicator diagram in the sample set is consistent, the pumping unit working condition sample set is subjected to data normalization in the data screening and processing process; let x and y respectively be the original displacement and load of the pumping unit horsehead movement, then the normalized displacement x̃ and load ỹ are:

x̃ = (x − x_min) / (x_max − x_min), ỹ = (y − y_min) / (y_max − y_min)

wherein x_max and x_min are respectively the maximum and minimum displacement of the pumping unit, and y_max and y_min the maximum and minimum loads;
(2) drawing a contour curve of the indicator diagram by using a computer according to the time sequence data of the displacement and the load, and converting the contour curve into a gray scale diagram of 224 multiplied by 224 pixels;
step two, establishing a regularization attention convolution module, and strengthening, suppressing and deactivating the autonomously learned convolution features;
the regularization attention convolution module comprises two mechanisms, wherein one mechanism is an attention mechanism, the feature extraction capability of the model is enhanced, and the model focuses on features with obvious correlation to indicator diagram categories in the learning and training process through a feature weighting mode of a channel; the other is a regularization mechanism, and the generalization capability of the model in later use is enhanced by establishing a channel discarding mode;
the process of constructing the regularized attention convolution module:
the output characteristic diagram of a certain convolution layer is U ═ U [ < U > ]1,u2,…,uC],
Figure FDA0003500047640000014
C is the number of channels, H and W are the height and width of the channels respectively, and after the normalized attention convolution module, a new feature map is marked as X ═ X1,x2,…,xC],
Figure FDA0003500047640000015
The specific calculation process is as follows:
(1) channel aggregation
firstly, the spatial dimension of each channel in the feature map is compressed along the channel direction, and each channel u_i is aggregated by three kinds of global pooling (maximum, average and stochastic) into the channel attention components s_max, s_mean and s_sto, wherein the c-th element of each attention component is:

s_max^c = max_{1≤i≤H, 1≤j≤W} u_c(i, j)

s_mean^c = (1 / (H·W)) · Σ_{i=1}^{H} Σ_{j=1}^{W} u_c(i, j)

s_sto^c = Σ_{i=1}^{H} Σ_{j=1}^{W} p_c(i, j) · u_c(i, j)

wherein p_c(i, j) represents the selection probability of a certain element in the channel, and is calculated according to the following formula:

p_c(i, j) = u_c(i, j) / Σ_{i=1}^{H} Σ_{j=1}^{W} u_c(i, j);
(2) channel attention
the channel attention components s_max, s_mean and s_sto are respectively input into a single-hidden-layer neural network, and aggregation is completed through point-by-point multiplication of the weights with the channel attention components, accumulation and activation functions; thus, the channel attention s = [s_1, s_2, …, s_C] of the feature map U is defined as follows:

s = σ(w_1·δ(w_0·s_max) + w_1·δ(w_0·s_mean) + w_1·δ(w_0·s_sto))

where σ is the sigmoid function, δ is the ReLU function, and w_0 and w_1 are the weights of the single-hidden-layer neural network, shared among the three channel attention components;
(3) channel regularization
according to the channel attention s = [s_1, s_2, …, s_C], a channel attention mask m = [m_1, m_2, …, m_C] is constructed, wherein the mask m_i is used to identify whether the i-th channel is activated; let

ŝ_i = s_i / max_{1≤k≤C}(s_k)

denote the normalized value of the attention of the i-th channel; the probability that each channel is selected is defined as:

p_i = e^{ŝ_i} / Σ_{k=1}^{C} e^{ŝ_k}

using e^{rand(0,1)} to represent the random attention of the channel, the random selection probability of the channel is expressed as

p_rand = e^{rand(0,1)} / Σ_{k=1}^{C} e^{ŝ_k}

if the selection probability p_i of a certain channel is greater than this random selection probability, the channel is activated, otherwise it is deactivated, i.e. the greater the selection probability of the channel, the more easily the channel is activated, so the mask m_i can be defined as:

m_i = 1, if p_i > p_rand; m_i = 0, otherwise;
(4) regularized channel attention feature map
according to the channel attention s and the channel attention mask m = [m_1, m_2, …, m_C], a regularized channel attention feature map X = [x_1, x_2, …, x_C] is constructed on the basis of the original feature map U, specifically defined as:

x_c = m_c · (s_c ⊗ u_c), c = 1, 2, …, C

wherein ⊗ represents point-by-point multiplication;
embedding the regularized attention convolution module into a convolution neural network to form a regularized attention convolution neural network RA-CNN;
(1) a VGG-16 convolutional neural network is used as the basic model, and a regularization attention convolution module is embedded between every two consecutive convolutional layers; the regularization attention convolution module does not change the size of the original feature map, and adds channel attention and a regularization mechanism to it;
(2) the regularization attention convolution neural network RA-CNN comprises 5 convolution layers, 5 pooling layers and 4 regularization attention convolution modules;
(3) the convolution kernel size, stride and padding used by each convolutional layer are 3 × 3, 1 and 1, respectively;
establishing an indicator diagram identification module, and inputting the gray level image of the indicator diagram into a regularized attention convolution neural network RA-CNN for identification;
(1) the 224 × 224 gray-scale image of the indicator diagram passes in turn through each convolutional layer, regularization attention convolution module and pooling layer to complete automatic feature extraction; a convolution layer, a regularization attention convolution module and a pooling layer are recorded as one convolution unit, wherein the output dimension of the 1st convolution unit is 112 × 112 × 128, the output dimension of the 2nd convolution unit is 56 × 56 × 256, the output dimension of the 3rd convolution unit is 28 × 28 × 512, the output dimension of the 4th convolution unit is 14 × 14 × 512, and the output dimension of the 5th convolution unit is 7 × 7 × 512;
(2) flattening the output vector of the last convolution unit, inputting the output vector to a fully-connected neural network, and identifying the type of the indicator diagram;
(3) the input of the fully-connected neural network in RA-CNN is 7 × 7 × 512, the hidden layer has 4096 nodes, and the output has 10 nodes, wherein each output node represents one working condition; the real indicator diagram sample labels are one-hot coded;
establishing an attention loss function, and training a regularized attention convolution neural network model RA-CNN;
adjusting sample weights according to the contribution of each sample to the training loss, so that the regularized attention convolutional neural network model RA-CNN attends during training to samples that are easily misclassified and ignores samples that contribute little to the loss and are easily identified correctly, the attention loss function being:
L = − Σ_{j=1}^{T} (1 − ŷ_j)^γ · y_j · log(ŷ_j)

wherein y_j and ŷ_j respectively represent the real category and the model identification probability of the j-th type of indicator diagram sample, T represents the number of indicator diagram categories, and (1 − ŷ_j)^γ is the loss adjustment coefficient of the indicator diagram sample;
inputting the real-time collected working condition data of the pumping unit into a trained indicator diagram recognition model for recognition;
constructing an intelligent diagnosis system of the working condition of the pumping unit by taking the indicator diagram identification method based on RA-CNN as a core;
(1) constructing an intelligent diagnosis system for pumping unit working conditions with the indicator diagram identification method based on the regularized attention convolutional neural network as the core, the system being provided with a short message and mail sending module;
(2) the intelligent diagnosis system for pumping unit working conditions analyzes and identifies the indicator diagram of the pumping unit in real time; if a fault working condition occurs, the diagnosis result is pushed to managers in the form of mails and short messages, and measures are taken in time to handle the faulty oil well.
CN201910474060.XA 2019-06-02 2019-06-02 Indicator diagram identification method based on regularization attention convolution neural network Active CN110163302B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910474060.XA CN110163302B (en) 2019-06-02 2019-06-02 Indicator diagram identification method based on regularization attention convolution neural network


Publications (2)

Publication Number Publication Date
CN110163302A CN110163302A (en) 2019-08-23
CN110163302B true CN110163302B (en) 2022-03-22

Family

ID=67630734

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910474060.XA Active CN110163302B (en) 2019-06-02 2019-06-02 Indicator diagram identification method based on regularization attention convolution neural network

Country Status (1)

Country Link
CN (1) CN110163302B (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102184414A (en) * 2011-05-16 2011-09-14 中国石油天然气股份有限公司 Method and system for identifying and judging pump indicator diagram
CN104361365A (en) * 2014-12-04 2015-02-18 杭州和利时自动化有限公司 Oil-well pump running condition recognition method and device
WO2018028255A1 (en) * 2016-08-11 2018-02-15 深圳市未来媒体技术研究院 Image saliency detection method based on adversarial network
CN108427771A (en) * 2018-04-09 2018-08-21 腾讯科技(深圳)有限公司 Summary texts generation method, device and computer equipment
CN108710920A (en) * 2018-06-05 2018-10-26 北京中油瑞飞信息技术有限责任公司 Indicator card recognition methods and device
CN108830157A (en) * 2018-05-15 2018-11-16 华北电力大学(保定) Human bodys' response method based on attention mechanism and 3D convolutional neural networks
CN109710919A (en) * 2018-11-27 2019-05-03 杭州电子科技大学 A kind of neural network event extraction method merging attention mechanism


Non-Patent Citations (5)

Title
Attention-based Temporal Weighted Convolutional Neural Network for Action Recognition; Jinliang Zang et al.; https://arxiv.org/pdf/1803.07179.pdf; 2018-03-19; 1-13 *
CBAM: Convolutional Block Attention Module; Sanghyun Woo et al.; ECCV; 2018; 1-17 *
Using artificial neural networks for pattern recognition of downhole dynamometer card in oil rod pump system; A. M. Felippe de Souza et al.; Proceedings of the 8th WSEAS Int. Conf. on Artificial Intelligence, Knowledge Engineering & Data Bases (AIKED '09); 2009; 230-235 *
Pumping unit working condition diagnosis based on ELM and continuous process neural networks; Liu Zhigang et al.; Computer Engineering & Science; Oct. 2017; Vol. 39, No. 10; 1934-1940 *
Improved AlexNet model and its application in oil well indicator diagram classification; Duan Youxiang et al.; Computer Applications and Software; Jul. 2018; Vol. 35, No. 7; 226-272 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant