CN110288257A - A deep extreme-learning method for dynamometer (indicator) cards - Google Patents
A deep extreme-learning method for dynamometer cards
- Publication number
- CN110288257A (Application No. CN201910588402.0A)
- Authority
- CN
- China
- Prior art keywords
- layer
- depth
- training
- indicator card
- convolution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0635—Risk analysis of enterprise or organisation activities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
- G06Q50/02—Agriculture; Fishing; Mining
Abstract
The invention discloses a deep extreme-learning method for dynamometer (indicator) cards. Traditional dynamometer-card recognition methods suffer from hand-crafted feature selection, low accuracy, and weak generalization. To address these problems, a deep extreme-learning method is proposed: a deep convolutional network extracts a deep feature vector from the dynamometer card, which is fed into an extreme learning machine that outputs the recognized type. The method avoids both the complicated feature-extraction process and the inadequacy of the extracted features in conventional dynamometer-card diagnosis, improves the recognition rate of dynamometer-card fault diagnosis, and has stronger generalization.
Description
Technical field
The invention belongs to the technical field of oil and gas production, and in particular relates to a deep extreme-learning method for diagnosing dynamometer-card faults of sucker-rod pumps in pumping wells.
Background art
In oil production, the beam pumping unit with a sucker-rod pump is the most widely used and most common lifting equipment. Because the downhole part works several kilometers underground, a fault is difficult to detect immediately once it occurs. If downhole working conditions could be predicted in time and the continuous operating state of the well monitored, oil-well output would improve greatly. Since the dynamometer card of a rod-pumped well directly reflects how production is running, it is commonly used to analyze downhole working conditions. In actual production, the recognition and classification of pumping-unit dynamometer cards is still carried out mainly by hand; recognition efficiency is low and a great deal of expert experience is required. Now, with the rapid development of computer and information technology and the continuously rising demands of advanced petroleum-production technology, computer-based intelligent analysis methods are receiving more and more attention.
Scholars at home and abroad have done much research on fault diagnosis based on dynamometer cards, but the emphasis has generally been placed on feature extraction, which largely diverts attention from recognizing the card itself. Such feature-extraction algorithms achieve very high recognition rates on the data sets they were developed for, but once they are tested on data collected from different pumping units, the recognition results are often unsatisfactory.
In view of these problems with existing pumping-unit dynamometer-card recognition methods, a deep extreme-learning method for dynamometer cards is proposed to improve recognition accuracy and generalization.
Summary of the invention
The object of the invention is to overcome the deficiencies of the prior art and provide a deep extreme-learning method for dynamometer cards.
The technical scheme of the invention is a deep extreme-learning method for dynamometer cards, characterized by comprising the following steps:
Step 1: Preprocess the displacement and load data acquired in the oil field to obtain dynamometer-card training-set and test-set samples.
Step 2: Build a deep convolutional autoencoder model and train it without supervision on the dynamometer-card training-set samples to obtain pre-training parameters for the deep convolutional network. The first to seventh convolutional layers of the autoencoder contain 2n, 4n, 8n, 8n, 8n, 4n and 2n convolution kernels of size 5*5 respectively, where n is a positive integer greater than 0.
Step 3: Build the deep convolutional network model and initialize its convolutional-layer parameters with the encoder-layer parameters obtained by training the deep convolutional autoencoder. The network structure contains an input layer, p convolutional layers, q pooling layers, k fully connected layers and an output layer, where p, q and k are non-negative integers; the output layer produces one-hot output, and a LeakyReLU activation is applied after each convolution operation.
Wherein the convolution applied to the dynamometer card by a convolution filter is computed as:

$x_j^l = f\left(\sum_i \mathrm{conv2D}\left(x_i^{l-1}, w_{ij}^l\right) + b_j^l\right)$

where $f(\cdot)$ is the activation (excitation) function, $b_j^l$ is the bias of the $j$-th neuron in layer $l$, $w_{ij}^l$ is the weight from the $i$-th neuron to the $j$-th neuron, $x_j^l$ is the input of the $j$-th neuron in layer $l$, $\mathrm{conv2D}(\cdot)$ denotes two-dimensional convolution, and $i$, $j$, $l$ are positive integers;
Wherein the LeakyReLU activation function is:

$f(x) = \begin{cases} x, & x > 0 \\ \gamma x, & x \le 0 \end{cases}$

where $\gamma$ is a constant smaller than 0.01;
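The convolution and activation described above can be sketched in plain NumPy. This is an illustrative toy (a real model would use a deep-learning framework), and the value gamma = 0.005 is an arbitrary choice satisfying the patent's requirement of a constant below 0.01:

```python
import numpy as np

def leaky_relu(x, gamma=0.005):
    """LeakyReLU: x for x > 0, gamma * x otherwise (gamma < 0.01 per the text)."""
    return np.where(x > 0, x, gamma * x)

def conv2d_valid(x, k):
    """'Valid' two-dimensional convolution of a single-channel map x with kernel k."""
    kh, kw = k.shape
    kf = np.flip(k)  # true convolution flips the kernel
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for r in range(oh):
        for c in range(ow):
            out[r, c] = np.sum(x[r:r + kh, c:c + kw] * kf)
    return out

def conv_layer_channel(inputs, kernels, bias):
    """One output channel j: x_j = f(sum_i conv2D(x_i, w_ij) + b_j)."""
    s = sum(conv2d_valid(x_i, w_i) for x_i, w_i in zip(inputs, kernels))
    return leaky_relu(s + bias)
```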
Step 4: Train the deep convolutional network on the training-set samples. Compute the error between the network output and the sample labels, and iteratively update the network parameters with back-propagation and gradient descent to obtain the trained parameters.
Wherein the cross-entropy loss function used to compute the error is:

$E = -\sum_k t_k \ln y_k$

where $y_k$ is the predicted output label of the $k$-th sample, $t_k$ is the true training-set label of the $k$-th sample, and $k$ is a positive integer;
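The cross-entropy loss above amounts to the following few lines of NumPy (a hedged sketch; the batch averaging and the numerical clipping are conventional additions, not stated in the patent):

```python
import numpy as np

def cross_entropy(y_pred, t_onehot, eps=1e-12):
    """E = -sum_k t_k * ln(y_k), averaged over a batch of one-hot labels.
    Predictions are clipped to avoid log(0)."""
    y = np.clip(y_pred, eps, 1.0)
    return -np.mean(np.sum(t_onehot * np.log(y), axis=-1))
```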
Step 5: Remove the fully connected layers of the trained deep convolutional network and add an extreme-learning-machine layer; train this layer to obtain its parameters. The extreme-learning-machine layer comprises n hidden layers and one classification layer; the feature input layer holds a 64n-dimensional feature vector, the hidden layer contains 125n neurons, the output layer again produces one-hot output, and n is a positive integer greater than 0.
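A minimal sketch of the extreme-learning-machine layer of Step 5, using the standard ELM recipe (random, fixed hidden weights; output weights solved in closed form with the Moore-Penrose pseudoinverse). This is an assumption about the training procedure, not code from the patent, and it models a single hidden layer:

```python
import numpy as np

class ELMLayer:
    """Single-hidden-layer extreme learning machine over extracted feature vectors."""
    def __init__(self, n_features, n_hidden, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        # Hidden weights and biases are drawn once and never trained.
        self.W = rng.standard_normal((n_features, n_hidden))
        self.b = rng.standard_normal(n_hidden)
        self.n_classes = n_classes
        self.beta = None

    def _hidden(self, X):
        h = X @ self.W + self.b
        return np.where(h > 0, h, 0.005 * h)  # LeakyReLU, as elsewhere in the method

    def fit(self, X, labels):
        H = self._hidden(X)
        T = np.eye(self.n_classes)[labels]   # one-hot targets
        self.beta = np.linalg.pinv(H) @ T    # closed-form output weights
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)
```

With the embodiment's n = 2, this would be `ELMLayer(n_features=128, n_hidden=250, n_classes=7)` for the seven fault types.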
Step 6: Test the recognition accuracy of the network on the test set from Step 1. If it is higher than the previous recognition accuracy, save the model parameters and return to Step 5 to adjust the parameters; once the fluctuation of the model's recognition accuracy falls below 0.001, stop iterating.
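The control flow of Step 6 might be made concrete as follows. The callbacks `train_once` and `evaluate` and the iteration cap are illustrative assumptions; the patent specifies only "keep the best model, stop when the accuracy fluctuation drops below 0.001":

```python
def tune_until_stable(train_once, evaluate, tol=1e-3, max_iter=50):
    """Retrain repeatedly, keep the best-scoring model, and stop once the
    test accuracy fluctuates by less than tol between iterations."""
    best_acc, best_model, prev_acc = -1.0, None, None
    for i in range(max_iter):
        model = train_once(i)        # e.g. refit the ELM layer with adjusted parameters
        acc = evaluate(model)        # recognition accuracy on the test set
        if acc > best_acc:
            best_acc, best_model = acc, model
        if prev_acc is not None and abs(acc - prev_acc) < tol:
            break
        prev_acc = acc
    return best_model, best_acc
```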
Compared with the prior art, the invention has the following beneficial effects:
(1) Unlike traditional dynamometer-card recognition models, which require geometric feature vectors to be extracted by hand beforehand and used as input, the deep extreme-learning method extracts image features of the dynamometer card automatically. This simplifies the feature-extraction step of traditional dynamometer-card recognition and improves recognition efficiency.
(2) The invention performs automatic feature extraction on dynamometer cards with a deep convolutional network, reducing the large amount of useful information lost by conventional feature extraction. To strengthen generalization, the original fully connected layers of the deep convolutional network are removed and an extreme-learning-machine layer is added to classify the extracted feature vectors. Compared with conventional methods, the invention obtains better recognition results and stronger generalization.
Brief description of the drawings
Fig. 1 is the flow chart of the deep extreme-learning method for dynamometer cards of the invention. The raw data are first preprocessed (e.g. normalized), dynamometer cards of size 64*64 are drawn, and the samples are split 7:3 into a training set and a test set. The autoencoder network is first trained on the training set to obtain parameters that initialize the deep convolutional network; the training set then trains the deep convolutional network. The fully connected layers of the trained network are removed, an extreme-learning-machine layer is added and trained, the trained network model is saved, and it is evaluated on the test set.
Fig. 2 shows dynamometer-card samples of the patent, covering seven types: normal, insufficient liquid supply, standing-valve leakage, traveling-valve leakage, severe leakage, gas interference, and pump bumping.
Fig. 3 is the confusion matrix of the patent's model on the training set, where the abscissa is the actual card type and the ordinate the predicted type. Fault recognition on the seven training-set types reaches 100% accuracy.
Fig. 4 is the confusion matrix of the patent's model on the test set, where the abscissa is the actual card type and the ordinate the predicted type. Among the seven types, the normal type has the highest recognition rate at 100% and gas interference the lowest at 96.8%; the overall accuracy reaches 98.54%.
Specific embodiment
The invention is further described with reference to the drawings. Fig. 1 is the overall flow chart of the invention. The specific implementation steps are as follows:
Step 1: The displacement and load data in the raw Excel tables from the oilfield's production activity are preprocessed; dynamometer cards of size 64*64 are drawn and split 7:3 into training-set and test-set samples. The pumping-unit sucker-rod-pump dynamometer data comprise seven classes (normal, insufficient liquid supply, standing-valve leakage, traveling-valve leakage, severe leakage, gas interference, and pump bumping); sample cards are shown in Fig. 2.
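The seven-class labels in Step 1 must be one-hot encoded to match the one-hot output format the network's output layer is required to produce. A minimal sketch (class names translated from the patent; the helper name is illustrative):

```python
import numpy as np

FAULT_TYPES = ["normal", "insufficient liquid supply", "standing-valve leakage",
               "traveling-valve leakage", "severe leakage", "gas interference",
               "pump bumping"]

def one_hot(label_indices, n_classes=len(FAULT_TYPES)):
    """One-hot encode integer class labels for the seven fault types."""
    return np.eye(n_classes, dtype=np.float32)[label_indices]
```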
Step 2: Build the deep convolutional autoencoder model:
Using the dynamometer-card training-set samples, the deep convolutional autoencoder is trained without supervision to obtain the training parameters of the autoencoder network. Its first to seventh convolutional layers contain 2n, 4n, 8n, 16n, 8n, 4n and 2n convolution kernels of size 5*5 respectively, where n is a positive integer greater than 0; n is taken as 8 in this example. The encoder-layer parameters of the trained autoencoder serve as the initialization parameters of the deep convolutional network.
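As a quick arithmetic check of Step 2's channel plan for this embodiment (n = 8), the per-layer kernel counts work out as follows (illustrative only, not code from the patent):

```python
# Channel plan of the 7-layer convolutional autoencoder in the embodiment
# (2n, 4n, 8n, 16n, 8n, 4n, 2n kernels of size 5*5), evaluated for n = 8.
n = 8
channels = [2 * n, 4 * n, 8 * n, 16 * n, 8 * n, 4 * n, 2 * n]
kernel_size = (5, 5)  # every convolutional layer uses 5*5 kernels
```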
Step 3: Build the deep convolutional network model:
The deep convolutional network structure contains an input layer, p convolutional layers, q pooling layers, k fully connected layers and an output layer, where p, q and k are non-negative integers; the output layer produces one-hot output, and a LeakyReLU activation is applied after each convolution operation. In this example, the values of p, q and k are 4, 4 and 2 respectively. The specific network structure settings are given in Table 1.
Table 1: Structure settings of the deep convolutional network
Wherein the convolution applied to the dynamometer card by a convolution kernel is computed as:

$x_j^l = f\left(\sum_i \mathrm{conv2D}\left(x_i^{l-1}, w_{ij}^l\right) + b_j^l\right)$

where $f(\cdot)$ is the activation (excitation) function, $b_j^l$ is the bias of the $j$-th neuron in layer $l$, $w_{ij}^l$ is the weight from the $i$-th neuron to the $j$-th neuron, $x_j^l$ is the input of the $j$-th neuron in layer $l$, $\mathrm{conv2D}(\cdot)$ denotes two-dimensional convolution, and $i$, $j$, $l$ are positive integers;
Wherein the LeakyReLU activation function is:

$f(x) = \begin{cases} x, & x > 0 \\ \gamma x, & x \le 0 \end{cases}$

where $\gamma$ is a constant smaller than 0.01;
Step 4: Train the network built in Step 3:
The deep convolutional network is trained on the training-set samples; the error between the output and the sample labels is computed, the network parameters are iteratively updated with back-propagation and gradient descent, the trained parameters are obtained, and the best network model parameters are saved.
Wherein the cross-entropy loss function used to compute the error is:

$E = -\sum_k t_k \ln y_k$

where $y_k$ is the predicted output label of the $k$-th sample, $t_k$ is the true training-set label of the $k$-th sample, and $k$ is a positive integer;
Step 5: The fully connected layers of the trained deep convolutional network are removed and an extreme-learning-machine layer is added; this layer is trained to obtain its parameters. The extreme-learning-machine layer comprises n hidden layers and one classification layer; the feature input layer holds a 64n-dimensional feature vector, the hidden layer contains 125n neurons, the output layer again produces one-hot output, and n is a positive integer greater than 0; the value of n is 2 in this example.
Step 6: The recognition accuracy of the network is tested on the test set from Step 1. If it is higher than the previous recognition accuracy, the model parameters are saved and the method returns to Step 5 to adjust parameters; once the fluctuation of the model's recognition accuracy falls below 0.001, iteration stops. The test-set samples from Step 1 are tested with the network model trained in Step 5. Fig. 3 shows the confusion matrix of the model on the training-set samples and Fig. 4 on the test-set samples; the model reaches 100% recognition accuracy on the training set and 98.54% on the test set.
Claims (1)
1. A deep extreme-learning method for dynamometer cards, characterized by comprising the following steps:
Step 1: preprocessing the displacement and load data acquired in the oil field to obtain dynamometer-card training-set and test-set samples;
Step 2: building a deep convolutional autoencoder model and training it without supervision on the training-set samples to obtain pre-training parameters for the deep convolutional network, the first to seventh convolutional layers of the autoencoder containing 2n, 4n, 8n, 8n, 8n, 4n and 2n convolution kernels of size 5*5 respectively, n being a positive integer greater than 0;
Step 3: building the deep convolutional network model and initializing its convolutional-layer parameters with the encoder-layer parameters obtained by training the autoencoder, the network structure containing an input layer, p convolutional layers, q pooling layers, k fully connected layers and an output layer, p, q and k being non-negative integers, the output layer producing one-hot output, and a LeakyReLU activation being applied after each convolution operation;
wherein the convolution applied to the dynamometer card by a convolution filter is computed as $x_j^l = f\left(\sum_i \mathrm{conv2D}\left(x_i^{l-1}, w_{ij}^l\right) + b_j^l\right)$, where $f(\cdot)$ is the activation function, $b_j^l$ the bias of the $j$-th neuron in layer $l$, $w_{ij}^l$ the weight from the $i$-th to the $j$-th neuron, $x_j^l$ the input of the $j$-th neuron in layer $l$, $\mathrm{conv2D}(\cdot)$ two-dimensional convolution, and $i$, $j$, $l$ positive integers;
and wherein the LeakyReLU activation function is $f(x) = x$ for $x > 0$ and $f(x) = \gamma x$ otherwise, $\gamma$ being a constant smaller than 0.01;
Step 4: training the deep convolutional network on the training-set samples, computing the error between the output and the sample labels, and iteratively updating the network parameters with back-propagation and gradient descent to obtain the trained parameters;
wherein the cross-entropy loss function used to compute the error is $E = -\sum_k t_k \ln y_k$, where $y_k$ is the predicted output label of the $k$-th sample, $t_k$ the true training-set label of the $k$-th sample, and $k$ a positive integer;
Step 5: removing the fully connected layers of the trained deep convolutional network and adding an extreme-learning-machine layer, which is trained to obtain its parameters, the extreme-learning-machine layer comprising n hidden layers and one classification layer, the feature input layer holding a 64n-dimensional feature vector, the hidden layer containing 125n neurons, the output layer producing one-hot output, and n being a positive integer greater than 0;
Step 6: testing the recognition accuracy of the network on the test set from Step 1; if it is higher than the previous recognition accuracy, saving the model parameters and returning to Step 5 to adjust parameters, and stopping iteration once the fluctuation of the recognition accuracy falls below 0.001.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910588402.0A CN110288257A (en) | 2019-07-01 | 2019-07-01 | A deep extreme-learning method for dynamometer cards |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110288257A true CN110288257A (en) | 2019-09-27 |
Family
ID=68021694
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910588402.0A Pending CN110288257A (en) | 2019-07-01 | 2019-07-01 | A deep extreme-learning method for dynamometer cards |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110288257A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106446895A (en) * | 2016-10-28 | 2017-02-22 | 安徽四创电子股份有限公司 | License plate recognition method based on deep convolutional neural network |
EP3346423A1 (en) * | 2017-01-04 | 2018-07-11 | STMicroelectronics Srl | Deep convolutional network heterogeneous architecture system and device |
CN109086886A (en) * | 2018-08-02 | 2018-12-25 | 工极(北京)智能科技有限公司 | A kind of convolutional neural networks learning algorithm based on extreme learning machine |
US20190042952A1 (en) * | 2017-08-03 | 2019-02-07 | Beijing University Of Technology | Multi-task Semi-Supervised Online Sequential Extreme Learning Method for Emotion Judgment of User |
Non-Patent Citations (2)
Title |
---|
WAN Xiaoqi et al.: "Application of Convolutional Neural Networks to Pattern Recognition of Partial-Discharge Images", Power System Technology (《电网技术》) *
YIN Wusong: "Traffic Sign Recognition Based on Deep Learning", China Masters' Theses Full-text Database, Information Science and Technology series *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110838155A (en) * | 2019-10-29 | 2020-02-25 | 中国石油大学(北京) | Method and system for fully reproducing ground indicator diagram of oil pumping unit |
CN111144548A (en) * | 2019-12-23 | 2020-05-12 | 北京寄云鼎城科技有限公司 | Method and device for identifying working condition of pumping well |
CN111144548B (en) * | 2019-12-23 | 2023-09-01 | 北京寄云鼎城科技有限公司 | Method and device for identifying working condition of oil pumping well |
CN113137211A (en) * | 2021-04-02 | 2021-07-20 | 常州大学 | Oil well production parameter self-adaptive control method based on fuzzy comprehensive decision |
CN113137211B (en) * | 2021-04-02 | 2023-01-17 | 常州大学 | Oil well production parameter self-adaptive control method based on fuzzy comprehensive decision |
CN117532885A (en) * | 2024-01-10 | 2024-02-09 | 成都航空职业技术学院 | Intelligent auxiliary system, method and storage medium for 3D printing |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110288257A (en) | A deep extreme-learning method for dynamometer cards | |
CN109272123B (en) | Sucker-rod pump working condition early warning method based on convolution-circulation neural network | |
CN109765333A (en) | A kind of Diagnosis Method of Transformer Faults based on GoogleNet model | |
CN111335887B (en) | Gas well effusion prediction method based on convolutional neural network | |
CN104481496B (en) | Fault diagnosis method of sucker-rod pump well | |
CN110132626A (en) | A kind of Fault Diagnoses of Oil Pump method based on multiple dimensioned convolutional neural networks | |
CN106930751A (en) | A kind of Dlagnosis of Sucker Rod Pumping Well fault separating method | |
CN108921285A (en) | Single-element classification method in sequence based on bidirectional valve controlled Recognition with Recurrent Neural Network | |
CN112861912A (en) | Deep learning-based method and system for identifying indicator diagram of complex working condition of pumping well | |
Zheng et al. | Sucker rod pump working state diagnosis using motor data and hidden conditional random fields | |
CN108647643A (en) | A kind of packed tower liquid flooding state on-line identification method based on deep learning | |
CN112305388B (en) | On-line monitoring and diagnosing method for insulation partial discharge faults of generator stator winding | |
Wang et al. | A working condition diagnosis model of sucker rod pumping wells based on deep learning | |
Wang et al. | A working condition diagnosis model of sucker rod pumping wells based on big data deep learning | |
CN106022352A (en) | Submersible piston pump fault diagnosis method based on support vector machine | |
CN110490188A (en) | A kind of target object rapid detection method based on SSD network improvement type | |
CN111144433B (en) | Oil well working condition intelligent diagnosis and analysis method and device based on SVM model | |
CN113095414A (en) | Indicator diagram identification method based on convolutional neural network and support vector machine | |
CN109389170A (en) | A kind of gradation type operating condition method for early warning based on 3D convolutional neural networks | |
CN112664185A (en) | Indicator diagram-based rod-pumped well working condition prediction method | |
Sharaf | Beam pump dynamometer card prediction using artificial neural networks | |
Yin et al. | Imbalanced Working States Recognition of Sucker Rod Well Dynamometer Cards Based on Data Generation and Diversity Augmentation | |
CN114021620B (en) | BP neural network feature extraction-based electric submersible pump fault diagnosis method | |
CN114120043A (en) | Method for detecting abnormal pumping well based on production dynamic data and indicator diagram | |
Bai et al. | Research on electrical parameter fault diagnosis method of oil well based on tsc-dcgan deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20190927 |