CN104166805B - Data processing method for obtaining petroleum casing thickness - Google Patents

Data processing method for obtaining petroleum casing thickness

Info

Publication number
CN104166805B
CN104166805B · CN201410413005.7A · CN104166805A
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410413005.7A
Other languages
Chinese (zh)
Other versions
CN104166805A (en)
Inventor
钱慧芳
罗卉
Current Assignee
Shaanxi Huachen Petroleum Technology Co ltd
Original Assignee
Xian Polytechnic University
Priority date
Filing date
Publication date
Application filed by Xian Polytechnic University filed Critical Xian Polytechnic University
Priority to CN201410413005.7A priority Critical patent/CN104166805B/en
Publication of CN104166805A publication Critical patent/CN104166805A/en
Application granted granted Critical
Publication of CN104166805B publication Critical patent/CN104166805B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Abstract

The invention discloses a data processing method for obtaining petroleum casing thickness, implemented according to the following steps. First, logging data are collected with a logging tool, every 5 consecutive measured values are taken as one group, and the maximum a_j of each group is found; j = 1, 2, 3, 4, 5, i.e. a_1~a_5, form the 1st packet, whose average Mean_1 is obtained and replaces a_1~a_5, after which the remaining data with j >= 6 receive the same average-value processing. Auxiliary data b_j are then generated from the processed data a_j; by comparing a_j with b_j, the flex points c_j of a_j are found; from the flex points c_j, the casing thickness relative values d_j are obtained. Finally, a neural network is built and trained, and after generalization the data are processed by the network to obtain the actual casing thickness values e_j. The invention solves the problems of low efficiency and low precision in existing methods.

Description

Data processing method for obtaining petroleum casing thickness
Technical field
The invention belongs to the field of information data processing technology and relates to a data information processing method, in particular to a data processing method for obtaining petroleum casing thickness.
Background technology
Well logging is an essential link in petroleum production and runs through the entire production process. Logging data are the data on the logging parameter curves collected by the data acquisition module of the logging tool. The extreme points of a logging curve characterize the magnitudes of the downhole parameters being measured. To extract features of the downhole parameters, the waveform amplitudes of the logging data must be extracted.
Current methods for extracting the waveform amplitude of discrete data mainly fit a curve and then solve for its maxima, the most common being polynomial curve fitting. However, when there are many data points, a polynomial of too low an order yields unsatisfactory fitting precision; raising the order improves the fit but increases computational complexity and brings other drawbacks. A single polynomial can therefore hardly fit many data points with good precision. To address this, piecewise curve fitting is generally used, but because logging data volumes are large and the precision requirements high, the data must be split into many segments to meet those requirements, so the workload is heavy and the efficiency low; solving for the polynomial maxima then adds further computation and reduces efficiency again.
The content of the invention
It is an object of the invention to provide a data processing method for obtaining petroleum casing thickness that applies average-value processing to the data maxima, solving the problems of low efficiency and low precision in existing methods of extracting data waveform amplitudes.
The technical solution adopted in the invention is a data processing method for obtaining petroleum casing thickness, specifically implemented according to the following steps:
Step 1: Collect logging data with a logging tool;
Step 2: Take every 5 consecutive measured values from step 1 as one group and find the maximum a_j (j = 1, 2, 3, …, J) of each group;
Step 3: Take j = 1, 2, 3, 4, 5, i.e. a_1~a_5, as the 1st packet, obtain its average Mean_1 and replace a_1~a_5 with it;
Step 4: Apply average-value processing to the remaining data with j >= 6;
Step 5: Generate auxiliary data b_j (j = 1, 2, 3, …, J) from the processed data a_j (j = 1, 2, 3, …, J);
Step 6: Find the flex points c_j of a_j by comparing a_j with b_j: if a_j = b_j, then c_j = a_j; otherwise c_j = 0;
Step 7: Obtain the casing thickness relative values d_j from the flex points c_j;
Step 8: Build and train a neural network;
Step 9: Generalize the neural network: input samples different from the training samples into the trained network, compute the error ε between the network output and the actual casing thickness of each sample, and judge whether ε meets the practical error requirement; if so, go to step 10, otherwise return to step 8 and rebuild and retrain the network;
Step 10: Process the data with the network built and trained in step 8 to obtain the actual casing thickness values e_j.
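The grouping of steps 2-3 can be sketched in a few lines of Python (a minimal sketch; the function name and the handling of a trailing incomplete group are assumptions, not stated by the patent):

```python
def group_maxima(samples, group=5):
    """Steps 2-3 input prep: every `group` consecutive logging samples
    form one group and only the group maximum a_j is kept.  A trailing
    incomplete group is dropped here (an assumption)."""
    return [max(samples[j:j + group])
            for j in range(0, len(samples) - group + 1, group)]
```

For example, `group_maxima(list(range(10)))` keeps one maximum per five samples.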
The invention is further characterized as follows.
Wherein, step 4 is specifically implemented according to the following steps:
Step 4.1: Take a_j~a_{j+n-1} as the i-th packet and obtain its average Mean_i; where i = 2, 3, …, I; n = 5, 3, 1, with initial value n = 5;
Step 4.2: Obtain the rate of change rate_i (0 < rate_i < 1) of the two adjacent packet averages Mean_i and Mean_{i-1}. The rate of change of two packet averages is:
rate_i = abs((Mean_i − Mean_{i−1}) / Mean_{i−1})
where abs is the absolute-value function, Mean_i is the average of the i-th packet and Mean_{i−1} the average of the (i−1)-th packet;
Step 4.3: Judge whether rate_i is less than or equal to the set value rate:
if rate_i is less than or equal to rate, replace the n maxima of the i-th packet with their average, then go to step 4.4;
if rate_i is greater than the set value rate, set n = n − 2 and judge whether n equals 1: if n = 1, go to step 4.4; if n > 1, return to step 4.1 and recompute Mean_i;
Step 4.4: Set i = i + 1 and judge whether i is less than I; if so, return to step 4.1 and continue; if i = I, go to step 5.
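Steps 3-4 together can be sketched as one routine (one reading of the loop: when n shrinks to 1 the single point is kept as its own "average"; positive amplitudes are assumed so the division in rate_i is safe):

```python
def packet_average(a, rate=0.1):
    """Steps 3-4: the first five maxima form packet 1 and are replaced by
    their mean; each later packet of n values (n = 5, 3, 1) is accepted
    only once its mean changes by at most `rate` relative to the previous
    packet mean.  Amplitudes are assumed positive (nonzero means)."""
    mean_prev = sum(a[:5]) / 5.0
    out = [mean_prev] * 5
    j = 5
    while j < len(a):
        n = 5
        while True:
            packet = a[j:j + n]
            mean_i = sum(packet) / len(packet)
            if abs((mean_i - mean_prev) / mean_prev) <= rate or n == 1:
                break
            n -= 2          # shrink the packet: 5 -> 3 -> 1
        out.extend([mean_i] * len(packet))
        mean_prev = mean_i
        j += len(packet)
    return out
```

On smooth data the output equals the input; at an abrupt change the packet shrinks so the step is not smeared across five points.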
Step 5 is specifically implemented according to the following steps:
Step 5.1: Find the maximum M_max and minimum M_min of a_{r−8}~a_{r+8} (r = 9, 10, 11, …, J);
Step 5.2: Compare each datum of a_{r−8}~a_{r+8} with M_max and M_min: if the datum equals M_max or M_min, then b_j = a_j; if it equals neither M_max nor M_min, then b_j = 0;
Step 5.3: Set r = r + 1 and judge whether r is less than or equal to J − 8; if so, return to step 5.1 and continue; if r is greater than J − 8, go to step 6.
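A simplified reading of step 5 — keeping a_r only where it is the extremum of its centred 17-point window — can be sketched as follows (0-based indices; the per-centre simplification is an interpretation, not the patent's exact loop):

```python
def auxiliary_data(a, half=8):
    """Step 5 (simplified reading): b_r = a_r when a_r is the maximum or
    minimum of the 17-point window a_{r-8}..a_{r+8}, else 0.  Indices are
    0-based here; the source's 1-based r = 9..J-8 maps to r = 8..J-9."""
    b = [0] * len(a)
    for r in range(half, len(a) - half):
        window = a[r - half:r + half + 1]
        if a[r] == max(window) or a[r] == min(window):
            b[r] = a[r]
    return b
```

Only local peaks and troughs of the smoothed amplitude sequence survive into b_j; everything else is zeroed.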
Step 7 is specifically implemented according to the following steps:
Step 7.1: Set t = 2 and d_1 = a_1;
Step 7.2: Judge whether c_t equals a_t: if c_t = a_t, then d_t = a_t; otherwise d_t = d_{t−1};
Step 7.3: Set t = t + 1 and judge whether t is less than or equal to J; if so, return to step 7.2 and continue; otherwise go to step 7.4;
Step 7.4: Take the absolute value of d_t to obtain the casing thickness relative values d_j.
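Step 7 is a hold-last-flex-point filter and can be sketched directly (a minimal sketch; the name `relative_thickness` is an assumption):

```python
def relative_thickness(a, c):
    """Step 7: d_1 = a_1; for t >= 2, d_t = a_t at a flex point
    (c_t == a_t), otherwise the previous value is held; finally the
    absolute value |d_t| is taken as the relative thickness."""
    d = [a[0]]
    for t in range(1, len(a)):
        d.append(a[t] if c[t] == a[t] else d[-1])
    return [abs(v) for v in d]
```

Between flex points the curve stays flat, which is what produces the staircase-like relative thickness curve of Fig. 5.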
Wherein, step 8 is specifically implemented according to the following steps:
Step 8.1: Build the neural network and define its variables and functions.
The input and output vectors of each layer are:
Input-layer input vector: X = (x_1, x_2, …, x_l)
Hidden-layer input vector: Hi = (hi_1, hi_2, …, hi_p)
Hidden-layer output vector: Ho = (ho_1, ho_2, …, ho_p)
Output-layer input vector: Yi = (yi_1, yi_2, …, yi_q)
Output-layer output vector: Yo = (yo_1, yo_2, …, yo_q)
Desired output vector: Do = (do_1, do_2, …, do_q)
where
the connection weights between the input layer and the intermediate layer are W_sh, s = 1, 2, …, l, h = 1, 2, …, p;
the connection weights between the hidden layer and the output layer are W_ho, h = 1, 2, …, p, o = 1, 2, …, q;
the thresholds of the hidden-layer neurons are θ_h, h = 1, 2, …, p;
the thresholds of the output-layer neurons are θ_o, o = 1, 2, …, q;
the sample index is k = 1, 2, …, K, where K is the number of samples;
l is the number of input-layer neurons, p the number of hidden-layer neurons, and q the number of output-layer neurons.
The hidden layer uses the activation function: f1(net) = net (1)
The output layer uses the activation function: f2(net) = 1/(1 + e^(−net)) (2)
The error function is: e = (1/2) Σ_{o=1}^{q} (do_o(k) − yo_o(k))^2 (3)
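Since the images carrying formulas (2) and (3) did not survive extraction, the functions above can be sketched under the standard BP assumptions (the identity hidden activation is formula (1); the unipolar sigmoid and the half-squared error are assumptions consistent with the surrounding derivation):

```python
import math

def f1(net):
    """Hidden-layer activation, formula (1): the identity."""
    return net

def f2(net):
    """Output-layer activation, formula (2): a unipolar sigmoid is
    assumed, as the original formula image is absent."""
    return 1.0 / (1.0 + math.exp(-net))

def error(do, yo):
    """Error function, formula (3): half the summed squared error
    between desired and actual outputs (assumed form)."""
    return 0.5 * sum((d - y) ** 2 for d, y in zip(do, yo))
```

With these choices, f2′(yi) = yo(1 − yo) and f1′ = 1, which is what the weight-correction formulas of step 8.5 use.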
Step 8.2: Initialize the neural network.
Assign each connection weight W_sh between the input layer and the intermediate layer, each connection weight W_ho between the hidden layer and the output layer, each hidden-layer threshold θ_h and each output-layer threshold θ_o a random number in the interval (−1, 1); set the computation precision to 0.0001, the maximum number of training iterations to Z = 1000, and the learning rate to η = 0.8;
Step 8.3: Input the training samples and compute the output of the output layer; the training samples are casing thickness relative values whose actual thickness values are known.
Specifically implemented according to the following steps:
Step 8.3.1: Randomly select the k-th input sample
X(k) = (x_1(k), x_2(k), …, x_l(k)),
whose desired output is
Do(k) = (do_1(k), do_2(k), …, do_q(k));
the desired output here is the actual casing thickness corresponding to the input sample;
Step 8.3.2: Compute the input and output of each hidden-layer neuron.
The input of each hidden-layer neuron is:
hi_h(k) = Σ_{s=1}^{l} w_sh x_s(k) − θ_h, h = 1, 2, …, p; (4)
The output of each hidden-layer neuron is:
ho_h(k) = f1(hi_h(k)), h = 1, 2, …, p; (5)
Step 8.3.3: Compute the input and output of each output-layer neuron.
The input of each output-layer neuron is:
yi_o(k) = Σ_{h=1}^{p} w_ho ho_h(k) − θ_o, o = 1, 2, …, q; (6)
The output of each output-layer neuron is:
yo_o(k) = f2(yi_o(k)), o = 1, 2, …, q; (7)
Step 8.4: Compute the global error and judge whether its precision meets the requirement:
Step 8.4.1: From the difference between the desired output do_o(k) of step 8.3 and the actual output yo_o(k), obtain the global error
e = (1/2) Σ_{k=1}^{K} Σ_{o=1}^{q} (do_o(k) − yo_o(k))^2; (8)
Step 8.4.2: Judge whether the global error computed in step 8.4.1 meets the requirement: if the global error has not reached the preset precision 0.0001 and the number of training iterations is less than the set maximum of 1000, go to step 8.5 for correction; if the global error reaches the preset precision 0.0001 or the number of training iterations exceeds the set maximum of 1000, go to step 9.
Step 8.5: From the network desired output do(k) and actual output yo(k), correct the connection weights W_sh between the input layer and the intermediate layer, the connection weights W_ho between the hidden layer and the output layer, the hidden-layer thresholds θ_h and the output-layer thresholds θ_o.
Specifically implemented according to the following steps:
Step 8.5.1: Using the error function e, compute the connection weight adjustment Δw_ho from the output layer to the hidden layer.
Because
Δw_ho = −η ∂e/∂w_ho (9)
and
∂e/∂w_ho = (∂e/∂yi_o(k)) (∂yi_o(k)/∂w_ho) = (∂e/∂yi_o(k)) ho_h(k), (10)
if ∂e/∂yi_o(k) is defined as −δ_o(k), i.e.
δ_o(k) = (do_o(k) − yo_o(k)) f2′(yi_o(k)), (11)
then formula (9) simplifies to
Δw_ho = η δ_o(k) ho_h(k). (12)
Step 8.5.2: Using the error function e, compute the connection weight adjustment Δw_sh from the hidden layer to the input layer.
Because
Δw_sh = −η ∂e/∂w_sh (13)
and
∂e/∂w_sh = (∂e/∂hi_h(k)) (∂hi_h(k)/∂w_sh) = (∂e/∂hi_h(k)) x_s(k), (14)
if ∂e/∂hi_h(k) is defined as −δ_h(k), i.e.
δ_h(k) = (Σ_{o=1}^{q} δ_o(k) w_ho) f1′(hi_h(k)), (15)
then formula (13) simplifies to
Δw_sh = η δ_h(k) x_s(k). (16)
Step 8.5.3: Using the error function e, compute the threshold adjustment of each output-layer neuron:
Δθ_o = η δ_o(k). (17)
Step 8.5.4: Using the error function e, compute the threshold adjustment of each hidden-layer neuron:
Δθ_h = η δ_h(k). (18)
Step 8.5.5: Using δ_o(k) of each output-layer neuron and the output ho_h(k) of each hidden-layer neuron, correct the connection weights w_ho(k); after correction,
w_ho^{z+1} = w_ho^{z} + η δ_o(k) ho_h(k), (19)
where z (z = 1, 2, 3, …, Z) is the training iteration;
Step 8.5.6: Using δ_h(k) of each hidden-layer neuron and the input x_s(k) of each input-layer neuron, correct the connection weights w_sh(k); after correction,
w_sh^{z+1} = w_sh^{z} + η δ_h(k) x_s(k); (20)
Step 8.5.7: Correct the hidden-layer thresholds θ_h and the output-layer thresholds θ_o; after correction,
θ_h^{z+1} = θ_h + Δθ_h (21)
θ_o^{z+1} = θ_o + Δθ_o (22)
With the corrected w_sh^{z+1}, w_ho^{z+1}, θ_h^{z+1} and θ_o^{z+1}, return to step 8.3 and train again; the corrected w_ho^{z+1} and θ_o^{z+1} correspond to w_ho and θ_o in formula (6) of step 8.3, and w_sh^{z+1} and θ_h^{z+1} to w_sh and θ_h in formula (4).
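The whole of step 8 can be gathered into one training routine. This is a hedged sketch, not the patent's exact program: the sigmoid output activation, the scaling of targets into (0, 1) and the per-sample update order are assumptions; parameter names follow the patent's notation.

```python
import math
import random

def train_bp(samples, targets, p=6, eta=0.8, eps=1e-4, max_epochs=1000, seed=0):
    """BP training loop following steps 8.2-8.5.  f1 is the identity,
    per formula (1); f2 is assumed to be a unipolar sigmoid and targets
    are assumed pre-scaled into (0, 1)."""
    rng = random.Random(seed)
    l, q = len(samples[0]), len(targets[0])
    # step 8.2: random initialisation in (-1, 1)
    w_sh = [[rng.uniform(-1, 1) for _ in range(p)] for _ in range(l)]
    w_ho = [[rng.uniform(-1, 1) for _ in range(q)] for _ in range(p)]
    th_h = [rng.uniform(-1, 1) for _ in range(p)]
    th_o = [rng.uniform(-1, 1) for _ in range(q)]
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))

    def forward(x):
        # formulas (4)-(7): hi -> ho -> yi -> yo
        hi = [sum(w_sh[s][h] * x[s] for s in range(l)) - th_h[h] for h in range(p)]
        ho = hi[:]                                    # f1(net) = net
        yi = [sum(w_ho[h][o] * ho[h] for h in range(p)) - th_o[o] for o in range(q)]
        return ho, [sig(v) for v in yi]

    for _ in range(max_epochs):
        e = 0.0
        for x, do in zip(samples, targets):
            ho, yo = forward(x)
            # delta_o = (do - yo) f2'(yi); f2' = yo(1 - yo) for the sigmoid
            d_o = [(do[o] - yo[o]) * yo[o] * (1.0 - yo[o]) for o in range(q)]
            # delta_h = (sum_o delta_o w_ho) f1'(hi); f1' = 1
            d_h = [sum(d_o[o] * w_ho[h][o] for o in range(q)) for h in range(p)]
            for h in range(p):                        # formula (19)
                for o in range(q):
                    w_ho[h][o] += eta * d_o[o] * ho[h]
            for s in range(l):                        # formula (20)
                for h in range(p):
                    w_sh[s][h] += eta * d_h[h] * x[s]
            for o in range(q):                        # thresholds enter as -theta
                th_o[o] -= eta * d_o[o]
            for h in range(p):
                th_h[h] -= eta * d_h[h]
            e += 0.5 * sum((do[o] - yo[o]) ** 2 for o in range(q))
        if e < eps:                                   # step 8.4 precision test
            break
    return forward
```

The returned `forward` plays the role of step 10: feeding a relative-thickness value through formulas (4)-(7) yields the network's thickness estimate.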
Step 10 is specifically implemented according to the following steps:
Step 10.1: Input the casing thickness relative values d_j obtained in step 7 into the neural network generalized in step 9 and obtain the hidden-layer input vector hi_h(j) from formula (4);
Step 10.2: Substitute the hidden-layer input vector hi_h(j) obtained in step 10.1 into the hidden-layer output formula (5), ho_h(j) = f1(hi_h(j)), h = 1, 2, …, p, to obtain the hidden-layer output vector ho_h(j);
Step 10.3: Substitute ho_h(j) into the output-layer input formula (6) to obtain the output-layer input vector yi_o(j);
Step 10.4: Substitute yi_o(j) into the output-layer output formula, yo_o(j) = f2(yi_o(j)), o = 1, 2, …, q, to obtain the output-layer output vector yo_o(j), i.e. the actual casing thickness values e_j.
The invention has the following advantages. The method does not need to fit the data points to a polynomial and then solve for its extreme points; it extracts the data waveform amplitude directly from the raw data, so the execution efficiency is high and unaffected by the number of data points. Replacing the maxima with averages according to the data change rate removes burr interference and improves precision. The method therefore processes data more efficiently, with higher precision, and is simpler to implement in engineering.
Brief description of the drawings
Fig. 1 is the flow chart of the data processing method of the invention for obtaining petroleum casing thickness;
Fig. 2 is the detail view of the waveform formed by connecting the logging data of step 1;
Fig. 3 is the discrete envelope of the extreme values of the discrete data waveform amplitudes obtained in steps 3 and 4;
Fig. 4 is the thickness logging curve at a petroleum casing collar measured in practice;
Fig. 5 is the relative thickness variation curve at the petroleum casing collar obtained by step 7;
Fig. 6 is the flow chart of the data processing performed with the neural network;
Fig. 7 is the line chart of the generalized output of the BP neural network and the desired output;
Fig. 8 is the line chart of the error between the generalized output of the BP neural network and the desired output;
Fig. 9 is the schematic diagram of the casing wall thickness at the collar.
Embodiment
The invention is described in detail below with reference to the accompanying drawings and a specific embodiment.
The data processing method of the invention for obtaining petroleum casing thickness is, as shown in Fig. 1, specifically implemented according to the following steps:
Step 1: Collect logging data with a logging tool;
Step 2: Take every 5 consecutive measured values from step 1 as one group and find the maximum a_j (j = 1, 2, 3, …, J) of each group;
Step 3: Take j = 1, 2, 3, 4, 5, i.e. a_1~a_5, as the 1st packet, obtain its average Mean_1 and replace a_1~a_5 with it;
Step 4: Apply average-value processing to the remaining data with j >= 6:
Step 4.1: Take a_j~a_{j+n-1} (n = 5, 3, 1) as the i-th (i = 2, 3, …, I) packet and obtain its average Mean_i, with initial value n = 5;
Step 4.2: Obtain the rate of change rate_i (0 < rate_i < 1) of the two adjacent packet averages Mean_i and Mean_{i-1}. The rate of change of two packet averages is:
rate_i = abs((Mean_i − Mean_{i−1}) / Mean_{i−1})
where abs is the absolute-value function, Mean_i is the average of the i-th packet and Mean_{i−1} the average of the (i−1)-th packet;
Step 4.3: Judge whether rate_i is less than or equal to the set value rate:
if rate_i is less than or equal to rate, replace the n maxima of the i-th packet with their average, then go to step 4.4;
if rate_i is greater than the set value rate, set n = n − 2 and judge whether n equals 1: if n = 1, go to step 4.4; if n > 1, return to step 4.1 and recompute Mean_i;
Step 4.4: Set i = i + 1 and judge whether i is less than I; if so, return to step 4.1 and continue; if i = I, go to step 5.
Step 5: Generate auxiliary data b_j (j = 1, 2, 3, …, J) from the processed data a_j (j = 1, 2, 3, …, J),
specifically implemented according to the following steps:
Step 5.1: Find the maximum M_max and minimum M_min of a_{r−8}~a_{r+8} (r = 9, 10, 11, …, J);
Step 5.2: Compare each datum of a_{r−8}~a_{r+8} with M_max and M_min: if the datum equals M_max or M_min, then b_j = a_j; if it equals neither M_max nor M_min, then b_j = 0.
Step 5.3: Set r = r + 1 and judge whether r is less than or equal to J − 8; if so, return to step 5.1 and continue; if r is greater than J − 8, go to step 6.
Step 6: Find the flex points c_j of a_j: if a_j = b_j, then c_j = a_j; otherwise c_j = 0.
Step 7: Obtain the casing thickness relative values d_j from the flex points c_j,
specifically implemented according to the following steps:
Step 7.1: Set t = 2 and d_1 = a_1;
Step 7.2: Judge whether c_t equals a_t: if c_t = a_t, then d_t = a_t; otherwise d_t = d_{t−1};
Step 7.3: Set t = t + 1 and judge whether t is less than or equal to J; if so, return to step 7.2 and continue; otherwise go to step 7.4.
Step 7.4: Take the absolute value of d_t to obtain the casing thickness relative values d_j.
Step 8: Build and train the neural network, as shown in Fig. 6, specifically implemented according to the following steps:
Step 8.1: Build the neural network and define its variables and functions.
The input and output vectors of each layer are:
Input-layer input vector: X = (x_1, x_2, …, x_l)
Hidden-layer input vector: Hi = (hi_1, hi_2, …, hi_p)
Hidden-layer output vector: Ho = (ho_1, ho_2, …, ho_p)
Output-layer input vector: Yi = (yi_1, yi_2, …, yi_q)
Output-layer output vector: Yo = (yo_1, yo_2, …, yo_q)
Desired output vector: Do = (do_1, do_2, …, do_q)
where
the connection weights between the input layer and the intermediate layer are W_sh, s = 1, 2, …, l, h = 1, 2, …, p;
the connection weights between the hidden layer and the output layer are W_ho, h = 1, 2, …, p, o = 1, 2, …, q;
the thresholds of the hidden-layer neurons are θ_h, h = 1, 2, …, p;
the thresholds of the output-layer neurons are θ_o, o = 1, 2, …, q;
the sample index is k = 1, 2, …, K, where K is the number of samples;
l is the number of input-layer neurons, p the number of hidden-layer neurons, and q the number of output-layer neurons.
The hidden layer uses the activation function: f1(net) = net (1)
The output layer uses the activation function: f2(net) = 1/(1 + e^(−net)) (2)
The error function is: e = (1/2) Σ_{o=1}^{q} (do_o(k) − yo_o(k))^2 (3)
Step 8.2: Initialize the neural network.
Assign each connection weight W_sh between the input layer and the intermediate layer, each connection weight W_ho between the hidden layer and the output layer, each hidden-layer threshold θ_h and each output-layer threshold θ_o a random number in the interval (−1, 1); set the computation precision to 0.0001, the maximum number of training iterations to Z = 1000, and the learning rate to η = 0.8;
Step 8.3: Input the training samples and compute the output of the output layer; the training samples are casing thickness relative values whose actual thickness values are known.
Specifically implemented according to the following steps:
Step 8.3.1: Randomly select the k-th input sample
X(k) = (x_1(k), x_2(k), …, x_l(k)),
whose desired output is
Do(k) = (do_1(k), do_2(k), …, do_q(k));
the desired output here is the actual casing thickness corresponding to the input sample;
Step 8.3.2: Compute the input and output of each hidden-layer neuron.
The input of each hidden-layer neuron is:
hi_h(k) = Σ_{s=1}^{l} w_sh x_s(k) − θ_h, h = 1, 2, …, p; (4)
The output of each hidden-layer neuron is:
ho_h(k) = f1(hi_h(k)), h = 1, 2, …, p; (5)
Step 8.3.3: Compute the input and output of each output-layer neuron.
The input of each output-layer neuron is:
yi_o(k) = Σ_{h=1}^{p} w_ho ho_h(k) − θ_o, o = 1, 2, …, q; (6)
The output of each output-layer neuron is:
yo_o(k) = f2(yi_o(k)), o = 1, 2, …, q; (7)
Step 8.4: Compute the global error and judge whether its precision meets the requirement:
Step 8.4.1: From the difference between the desired output do_o(k) of step 8.3 and the actual output yo_o(k), obtain the global error
e = (1/2) Σ_{k=1}^{K} Σ_{o=1}^{q} (do_o(k) − yo_o(k))^2; (8)
Step 8.4.2: Judge whether the global error computed in step 8.4.1 meets the requirement: if the global error has not reached the preset precision 0.0001 and the number of training iterations is less than the set maximum of 1000, go to step 8.5 for correction; if the global error reaches the preset precision 0.0001 or the number of training iterations exceeds the set maximum of 1000, go to step 9.
Step 8.5: From the network desired output do(k) and actual output yo(k), correct the connection weights W_sh between the input layer and the intermediate layer, the connection weights W_ho between the hidden layer and the output layer, the hidden-layer thresholds θ_h and the output-layer thresholds θ_o.
Specifically implemented according to the following steps:
Step 8.5.1: Using the error function e, compute the connection weight adjustment Δw_ho from the output layer to the hidden layer.
Because
Δw_ho = −η ∂e/∂w_ho (9)
and
∂e/∂w_ho = (∂e/∂yi_o(k)) (∂yi_o(k)/∂w_ho) = (∂e/∂yi_o(k)) ho_h(k), (10)
if ∂e/∂yi_o(k) is defined as −δ_o(k), i.e.
δ_o(k) = (do_o(k) − yo_o(k)) f2′(yi_o(k)), (11)
then formula (9) simplifies to
Δw_ho = η δ_o(k) ho_h(k). (12)
Step 8.5.2: Using the error function e, compute the connection weight adjustment Δw_sh from the hidden layer to the input layer.
Because
Δw_sh = −η ∂e/∂w_sh (13)
and
∂e/∂w_sh = (∂e/∂hi_h(k)) (∂hi_h(k)/∂w_sh) = (∂e/∂hi_h(k)) x_s(k), (14)
if ∂e/∂hi_h(k) is defined as −δ_h(k), i.e.
δ_h(k) = (Σ_{o=1}^{q} δ_o(k) w_ho) f1′(hi_h(k)), (15)
then formula (13) simplifies to
Δw_sh = η δ_h(k) x_s(k). (16)
Step 8.5.3: Using the error function e, compute the threshold adjustment of each output-layer neuron:
Δθ_o = η δ_o(k). (17)
Step 8.5.4: Using the error function e, compute the threshold adjustment of each hidden-layer neuron:
Δθ_h = η δ_h(k). (18)
Step 8.5.5: Using δ_o(k) of each output-layer neuron and the output ho_h(k) of each hidden-layer neuron, correct the connection weights w_ho(k); after correction,
w_ho^{z+1} = w_ho^{z} + η δ_o(k) ho_h(k), (19)
where z (z = 1, 2, 3, …, Z) is the training iteration;
Step 8.5.6: Using δ_h(k) of each hidden-layer neuron and the input x_s(k) of each input-layer neuron, correct the connection weights w_sh(k); after correction,
w_sh^{z+1} = w_sh^{z} + η δ_h(k) x_s(k); (20)
Step 8.5.7: Correct the hidden-layer thresholds θ_h and the output-layer thresholds θ_o; after correction,
θ_h^{z+1} = θ_h + Δθ_h (21)
θ_o^{z+1} = θ_o + Δθ_o (22)
With the corrected w_sh^{z+1}, w_ho^{z+1}, θ_h^{z+1} and θ_o^{z+1}, return to step 8.3 and train again; the corrected w_ho^{z+1} and θ_o^{z+1} correspond to w_ho and θ_o in formula (6) of step 8.3, and w_sh^{z+1} and θ_h^{z+1} to w_sh and θ_h in formula (4).
Step 9: Generalize the neural network: input samples different from the training samples into the trained network, compute the error ε between the network output and the actual casing thickness of each sample, and judge whether ε meets the practical error requirement (ε less than 0.75 mm, i.e. less than 10% of the casing thickness); if so, go to step 10; otherwise return to step 8 and rebuild and retrain the network.
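The acceptance test of step 9 amounts to a simple tolerance check, sketched here under the stated 0.75 mm bound (the element-wise reading and the function name are assumptions):

```python
def generalization_ok(predicted, actual, tol=0.75):
    """Step 9 acceptance check: every network output must lie within
    0.75 mm of the true casing thickness, i.e. within 10% of the
    7.5 mm nominal wall."""
    return all(abs(p - t) <= tol for p, t in zip(predicted, actual))
```

For instance, the 0.3368 mm worst error reported for Fig. 8 would pass this check.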
Step 10: Process the data with the neural network built and trained in step 8 to obtain the actual casing thickness values e_j, specifically implemented according to the following steps:
Step 10.1: Input the casing thickness relative values d_j obtained in step 7 into the neural network generalized in step 9 and obtain the hidden-layer input vector hi_h(j) from formula (4);
Step 10.2: Substitute the hidden-layer input vector hi_h(j) obtained in step 10.1 into the hidden-layer output formula (5), ho_h(j) = f1(hi_h(j)), h = 1, 2, …, p, to obtain the hidden-layer output vector ho_h(j);
Step 10.3: Substitute ho_h(j) into the output-layer input formula (6) to obtain the output-layer input vector yi_o(j);
Step 10.4: Substitute yi_o(j) into the output-layer output formula, yo_o(j) = f2(yi_o(j)), o = 1, 2, …, q, to obtain the output-layer output vector yo_o(j), i.e. the actual casing thickness values e_j.
When data are collected in step 1, a new dual far-field electromagnetic focusing thickness gauge is used; the measured data form the waveform detail view of Fig. 2, whose data points are exactly the data used in step 1 of the method. Logging data are the data on the logging parameter curves collected by the data acquisition module of the logging tool. Because the data are transmitted over long distances, the information describing each downhole physical parameter should be transmitted as quickly, compactly and accurately as possible. The experimental data are logging data composed of the 5 data points sampled within each measurement period on the thickness curve at a petroleum casing collar; the amplitude of each point represents the casing thickness at that moment.
Fig. 3 is the discrete envelope of the extreme values of the discrete data waveform amplitudes obtained by steps 3 and 4, where the markers * denote data points. Fig. 4 is the thickness logging curve at a petroleum casing collar; a logging curve is the curve of each downhole physical parameter measured by the logging tool as it travels kilometres up and down the well. Comparing Fig. 3 with Fig. 4, the two figures have the same amplitude profile, showing that the method can accurately extract the amplitudes of the thickness logging curve.
Fig. 5 is the relative thickness variation curve at the petroleum casing collar obtained by step 7.
Fig. 7 shows the generalized output of the BP neural network and the desired output; the error between them is shown in Fig. 8. The maximum error is 0.3368 mm, which fully meets the engineering demand.
Fig. 9 is the schematic diagram of the casing wall thickness at the collar. The actual wall thicknesses at the collar are, from left to right, 7.5 mm, 17.75 mm, 10.25 mm, 17.75 mm and 7.5 mm. The curve segments labelled A, B, C, D and E in Fig. 5 characterize, in turn, these actual wall thicknesses of 7.5 mm, 17.75 mm, 10.25 mm, 17.75 mm and 7.5 mm. Comparing the result figure with the schematic shows that the result of the method matches the actual variation of the casing wall thickness.
The method does not need to fit the data points to a polynomial and then solve for its extreme points; it extracts the waveform amplitude directly from the raw data, so both the execution efficiency and the precision are high and unaffected by the number of data points. The petroleum casing thickness can therefore be obtained quickly and accurately from large volumes of logging data.

Claims (3)

1. obtain the data processing method of petroleum casing pipe thickness, it is characterised in that specifically implement according to following steps:
Step 1:Log data is gathered with logger;
Step 2:Using continuous every 5 log datas measured in step 1 as one group, the maximum in each group of data is found out aj, wherein, j=1,2,3 ... J;
Step 3:Take j=1,2,3,4,5, i.e., a1~a5As the 1st bag data, the average value Mean of the 1st bag data is obtained1And Instead of a1~a5
Step 4:Remainder data during to j >=6 does average value processing, specifically implements according to following steps:
Step 4.1:By aj~aj+n-1Data obtain the average value Mean of the i-th bag data as the i-th bag datai;Wherein, i= 2,3…I;N=5,3,1, and n initial value is set as 5;
Step 4.2:Obtain this adjacent two bag datas average value MeaniAnd Meani-1Rate of change ratei, wherein, 0<ratei<1; The rate of change formula for seeking two bag data average values is:
<mrow> <msub> <mi>rate</mi> <mi>i</mi> </msub> <mo>=</mo> <mi>a</mi> <mi>b</mi> <mi>s</mi> <mfrac> <mrow> <msub> <mi>Mean</mi> <mi>i</mi> </msub> <mo>-</mo> <msub> <mi>Mean</mi> <mrow> <mi>i</mi> <mo>-</mo> <mn>1</mn> </mrow> </msub> </mrow> <mrow> <msub> <mi>Mean</mi> <mrow> <mi>i</mi> <mo>-</mo> <mn>1</mn> </mrow> </msub> </mrow> </mfrac> </mrow>
Wherein, abs is takes absolute value function, MeaniFor the average value of the i-th bag data, Meani-1For being averaged for the i-th -1 bag data Value;
Step 4.3:Judge rate of change rateiWhether setting value rate is less than or equal to,
If rateiLess than or equal to rate, then n in the i-th bag data are replaced with the average value of n maximum in the i-th bag data Maximum, then go to step 4.4;
If rateiMore than setting value rate, n=n-2 is made, judges whether n is equal to 1, step 4.4 is transferred to if n is equal to 1, if n is big Step 4.1, which is transferred to, in 1 recalculates Meani,
Step 4.4:I=i+1 is made, judges whether i is less than I, meets that then return to step 4.1 continues to calculate this condition;If i=I turns Enter step 5;
Step 5:Data a after being handled according to step 4jGenerate assistance data bj;Specifically implement according to following steps:
Step 5.1:For r=9,10,11 ... J, a is sought respectivelyr-8~ar+8In maximum MmaxAnd minimum Mmin
Step 5.2:Use ar-8~ar+8In each data respectively with maximum MmaxAnd minimum MminCompare, if the data with Maximum MmaxOr minimum MminIt is equal, then bj=aj;If the data neither with maximum MmaxIt is equal also not with minimum Mmin It is equal, then bj=0,
Step 5.3:R=r+1 is made, judges whether r is less than or equal to J-8, meets that then return to step 5.1 continues to calculate this condition;If r Step 6 is transferred to more than J-8;
Step 6:By contrasting ajWith bj, find out ajFlex point cjIf, i.e. aj=bj, then cj=aj;Otherwise, cj=0;
Step 7: Obtain the casing thickness relative values d_j from the inflection points c_j, specifically according to the following steps:
Step 7.1: Let t = 2 and d_1 = a_1;
Step 7.2: Judge whether c_t equals a_t; if c_t = a_t, then d_t = a_t; otherwise d_t = d_(t−1);
Step 7.3: Let t = t + 1 and judge whether t is less than or equal to J; if so, return to step 7.2 and continue the calculation; otherwise go to step 7.4;
Step 7.4: Take the absolute value of d_t to obtain the casing thickness relative values d_j;
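Step 7 is a hold-last-value filter over the inflection points. A minimal sketch (the function name `relative_thickness` is an assumption):

```python
def relative_thickness(a, c):
    """Sketch of steps 7.1-7.4: d_1 = a_1; afterwards d_t takes a_t at an
    inflection point (c_t == a_t) and otherwise holds the previous value
    d_{t-1}; finally absolute values are taken."""
    d = [a[0]]
    for t in range(1, len(a)):
        d.append(a[t] if c[t] == a[t] else d[-1])
    return [abs(x) for x in d]
```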
Step 8: Build and train the neural network;
Step 9: Generalize the neural network: input samples different from those used in training into the trained network, calculate the error ε between the network's output values and the actual casing thickness values corresponding to the samples, and judge whether ε meets the actual error requirement; if it does, go to step 10; otherwise return to step 8 to rebuild and retrain the neural network;
Step 10: Process the data with the neural network built and trained in step 8 to obtain the actual casing thickness values e_j.
2. The data processing method for obtaining petroleum casing thickness according to claim 1, characterized in that said step 8 is specifically implemented according to the following steps:
Step 8.1: Build the neural network and define the variables and functions of each of its parts.
The input and output vectors of each layer are:
Input-layer input vector: X = (x_1, x_2, …, x_l)
Hidden-layer input vector: Hi = (hi_1, hi_2, …, hi_p)
Hidden-layer output vector: Ho = (ho_1, ho_2, …, ho_p)
Output-layer input vector: Yi = (yi_1, yi_2, …, yi_q)
Output-layer output vector: Yo = (yo_1, yo_2, …, yo_q)
Desired output vector: Do = (do_1, do_2, …, do_q)
Wherein,
the connection weights between the input layer and the hidden layer are W_sh, where s = 1, 2, …, l and h = 1, 2, …, p;
the connection weights between the hidden layer and the output layer are W_ho, where o = 1, 2, …, q;
the threshold of each hidden-layer neuron is θ_h;
the threshold of each output-layer neuron is θ_o;
the sample index is k = 1, 2, …, K, where K is the number of samples;
l is the number of input-layer neurons,
p is the number of hidden-layer neurons,
q is the number of output-layer neurons;
The hidden layer uses the activation function: f_1(net) = net (1)
The output layer uses the activation function f_2 (formula (2));
The error function is e (formula (3));
Step 8.2: Initialize the neural network.
Assign the connection weights W_sh between the input layer and the hidden layer, the connection weights W_ho between the hidden layer and the output layer, the threshold θ_h of each hidden-layer neuron and the threshold θ_o of each output-layer neuron a random number in the interval (−1, 1); set the computational accuracy value to 0.0001, the maximum number of training iterations to Z = 1000 and the learning rate to η = 0.8;
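The initialization of step 8.2 can be sketched as follows. The function name `init_network` and the list-of-lists weight layout (`W_sh[s][h]`, `W_ho[h][o]`) are illustrative assumptions; the stated constants 0.0001, 1000 and 0.8 come from the text.

```python
import random

# Training constants stated in step 8.2.
PRECISION, MAX_EPOCHS, LEARNING_RATE = 0.0001, 1000, 0.8

def init_network(l, p, q, seed=0):
    """Sketch of step 8.2: every weight and threshold is drawn uniformly
    from the open interval (-1, 1)."""
    rng = random.Random(seed)
    u = lambda: rng.uniform(-1, 1)
    W_sh = [[u() for _ in range(p)] for _ in range(l)]  # input -> hidden
    W_ho = [[u() for _ in range(q)] for _ in range(p)]  # hidden -> output
    theta_h = [u() for _ in range(p)]                   # hidden thresholds
    theta_o = [u() for _ in range(q)]                   # output thresholds
    return W_sh, W_ho, theta_h, theta_o
```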
Step 8.3: Input the training samples and calculate the output values of the output layer; the training samples are thickness relative values of casings whose actual thickness values are known.
Specifically according to the following steps:
Step 8.3.1: Randomly select the k-th input sample
X(k) = (x_1(k), x_2(k), …, x_l(k)),
whose desired output is
Do(k) = (do_1(k), do_2(k), …, do_q(k)); the desired output here is the actual casing thickness value corresponding to the input sample;
Step 8.3.2: Calculate the input and output of each hidden-layer neuron.
The input of each hidden-layer neuron is:
hi_h(k) = Σ_{s=1}^{l} w_sh · x_s(k) − θ_h, h = 1, 2, …, p (4)
The output of each hidden-layer neuron is:
ho_h(k) = f_1(hi_h(k)), h = 1, 2, …, p (5)
Step 8.3.3: Calculate the input and output of each output-layer neuron.
The input of each output-layer neuron is:
yi_o(k) = Σ_{h=1}^{p} w_ho · ho_h(k) − θ_o, o = 1, 2, …, q (6)
The output of each output-layer neuron is:
yo_o(k) = f_2(yi_o(k)), o = 1, 2, …, q (7)
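Formulas (4)–(7) form one forward pass, sketched below. f_1 is the identity per formula (1); formula (2) for f_2 is not reproduced in this extraction, so a sigmoid is assumed here purely for illustration, as is the function name `forward`.

```python
import math

def forward(x, W_sh, W_ho, theta_h, theta_o,
            f2=lambda net: 1.0 / (1.0 + math.exp(-net))):
    """Sketch of the forward pass, formulas (4)-(7)."""
    l, p, q = len(x), len(theta_h), len(theta_o)
    hi = [sum(W_sh[s][h] * x[s] for s in range(l)) - theta_h[h]
          for h in range(p)]                               # (4)
    ho = hi[:]                                             # (5): f1(net) = net
    yi = [sum(W_ho[h][o] * ho[h] for h in range(p)) - theta_o[o]
          for o in range(q)]                               # (6)
    yo = [f2(v) for v in yi]                               # (7)
    return hi, ho, yi, yo
```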
Step 8.4: Calculate the global error and judge whether its accuracy meets the requirement:
Step 8.4.1: Obtain the global error from the difference between the desired output do(k) of step 8.3 and the actual output yo_o(k);
Step 8.4.2: Judge whether the global error of the neural network calculated in step 8.4.1 meets the requirement: if the global error has not reached the preset accuracy 0.0001 and the number of training iterations is less than the set maximum 1000, go to step 8.5 for correction; if the global error reaches the preset accuracy 0.0001 or the number of training iterations exceeds the set maximum 1000, go to step 9;
Step 8.5: Based on the desired output do(k) and the actual output yo_o(k) of the network, correct the connection weights W_sh between the input layer and the hidden layer, the connection weights W_ho between the hidden layer and the output layer, the thresholds θ_h of the hidden-layer neurons and the thresholds θ_o of the output-layer neurons.
Specifically according to the following steps:
Step 8.5.1: Using the error function e, calculate the adjustment Δw_ho of the connection weights from the output layer to the hidden layer:
Δw_ho = −η · ∂e/∂w_ho = −η · (∂e/∂yi_o(k)) · (∂yi_o(k)/∂w_ho) (9)
Because
∂yi_o(k)/∂w_ho = ∂(Σ_{h=1}^{p} w_ho · ho_h(k) − θ_o)/∂w_ho = ho_h(k) (10)
If ∂e/∂yi_o(k) is defined as −δ_o(k), i.e.
δ_o(k) = −∂e/∂yi_o(k), (11)
then formula (9) simplifies to
Δw_ho = η δ_o(k) ho_h(k) (12)
Step 8.5.2: Using the error function e, calculate the adjustment Δw_sh of the connection weights from the hidden layer to the input layer:
Δw_sh = −η · ∂e/∂w_sh = −η · (∂e/∂hi_h(k)) · (∂hi_h(k)/∂w_sh) (13)
Because
∂hi_h(k)/∂w_sh = ∂(Σ_{s=1}^{l} w_sh · x_s(k) − θ_h)/∂w_sh = x_s(k) (14)
If ∂e/∂hi_h(k) is defined as −δ_h(k), i.e.
δ_h(k) = −∂e/∂hi_h(k), (15)
then formula (13) simplifies to
Δw_sh = η δ_h(k) x_s(k) (16)
Step 8.5.3: Using the error function e, calculate the threshold adjustment Δθ_o of each output-layer neuron:
Δθ_o = −η · ∂e/∂θ_o = −η · (∂e/∂yi_o(k)) · (∂yi_o(k)/∂θ_o) (17)
Step 8.5.4: Using the error function e, calculate the threshold adjustment Δθ_h of each hidden-layer neuron:
Δθ_h = −η · ∂e/∂θ_h = −η · (∂e/∂hi_h(k)) · (∂hi_h(k)/∂θ_h) (18)
Step 8.5.4: Using the δ_o(k) of each output-layer neuron and the output ho_h(k) of each hidden-layer neuron, correct the connection weights w_ho(k);
after correction,
w_ho^(z+1) = w_ho + Δw_ho (19)
where z is the number of training iterations, 1 ≤ z ≤ Z;
Step 8.5.5: Using the δ_h(k) of each hidden-layer neuron and the input x_s(k) of each input-layer neuron, correct the connection weights w_sh(k);
after correction,
w_sh^(z+1) = w_sh + Δw_sh (20)
Step 8.5.6: Correct the threshold θ_h of each hidden-layer neuron and the threshold θ_o of each output-layer neuron;
after correction,
θ_h^(z+1) = θ_h + Δθ_h (21)
θ_o^(z+1) = θ_o + Δθ_o (22)
Using the revised w_ho^(z+1), w_sh^(z+1), θ_h^(z+1) and θ_o^(z+1), return to step 8.3 and train again, where the revised w_ho^(z+1) and θ_o^(z+1) replace w_ho and θ_o in formula (6) of step 8.3, and the revised w_sh^(z+1) and θ_h^(z+1) replace w_sh and θ_h in formula (4).
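The corrections of step 8.5 can be sketched as an in-place update for one sample. This is a sketch under stated assumptions, not the patented code: with δ as defined in (11)/(15), the weight steps follow (12) and (16); the source does not expand (17)/(18), but because the thresholds enter formulas (4)/(6) with a minus sign, the threshold adjustments work out to −η·δ, which is assumed here.

```python
def bp_update(x, ho, delta_h, delta_o, W_sh, W_ho, theta_h, theta_o, eta=0.8):
    """Sketch of corrections (12), (16) and (19)-(22) for one sample k."""
    for s in range(len(x)):
        for h in range(len(theta_h)):
            W_sh[s][h] += eta * delta_h[h] * x[s]   # (16) applied via (20)
    for h in range(len(theta_h)):
        for o in range(len(theta_o)):
            W_ho[h][o] += eta * delta_o[o] * ho[h]  # (12) applied via (19)
    for h in range(len(theta_h)):
        theta_h[h] -= eta * delta_h[h]              # (18) via (21), assumed sign
    for o in range(len(theta_o)):
        theta_o[o] -= eta * delta_o[o]              # (17) via (22), assumed sign
    return W_sh, W_ho, theta_h, theta_o
```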
3. The data processing method for obtaining petroleum casing thickness according to claim 2, characterized in that said step 10 is specifically implemented according to the following steps:
Step 10.1: Input the casing thickness relative values d_j obtained in step 7 into the neural network generalized in step 9, and obtain the hidden-layer input vector hi_h(j) using formula (4);
Step 10.2: Substitute the hidden-layer input vector hi_h(j) obtained in step 10.1 into the hidden-layer output formula (5):
ho_h(j) = f_1(hi_h(j)), h = 1, 2, …, p
to obtain the hidden-layer output vector ho_h(j);
Step 10.3: Substitute ho_h(j) into the output-layer input formula (6):
yi_o(j) = Σ_{h=1}^{p} w_ho · ho_h(j) − θ_o, o = 1, 2, …, q
to obtain the output-layer input vector yi_o(j);
Step 10.4: Substitute yi_o(j) into the output-layer output formula (7):
yo_o(j) = f_2(yi_o(j)), o = 1, 2, …, q
to obtain the output-layer output vector yo_o(j), i.e. the actual casing thickness value e_j.
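Steps 10.1–10.4 can be sketched as a batch inference pass. The function name `predict_thickness`, the single-input (l = 1), single-output (q = 1) shape, and the sigmoid f_2 are illustrative assumptions, since formula (2) is not reproduced in this extraction.

```python
import math

def predict_thickness(d, W_sh, W_ho, theta_h, theta_o,
                      f2=lambda net: 1.0 / (1.0 + math.exp(-net))):
    """Sketch of steps 10.1-10.4: each relative thickness d_j is fed
    through the trained network via formulas (4)-(7) to yield e_j."""
    e = []
    for dj in d:
        x = [dj]                              # l = 1 assumed
        hi = [sum(W_sh[s][h] * x[s] for s in range(len(x))) - theta_h[h]
              for h in range(len(theta_h))]   # (4)
        ho = hi[:]                            # (5): f1 is the identity
        yi = [sum(W_ho[h][o] * ho[h] for h in range(len(ho))) - theta_o[o]
              for o in range(len(theta_o))]   # (6)
        yo = [f2(v) for v in yi]              # (7)
        e.append(yo[0])                       # q = 1 assumed: one thickness out
    return e
```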
CN201410413005.7A 2014-08-20 2014-08-20 Obtain the data processing method of petroleum casing pipe thickness Active CN104166805B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410413005.7A CN104166805B (en) 2014-08-20 2014-08-20 Obtain the data processing method of petroleum casing pipe thickness

Publications (2)

Publication Number Publication Date
CN104166805A CN104166805A (en) 2014-11-26
CN104166805B true CN104166805B (en) 2017-11-10

Family

ID=51910614

Country Status (1)

Country Link
CN (1) CN104166805B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103886371A (en) * 2014-03-28 2014-06-25 郑州大学 Method for controlling component and thermal treatment technological process of pre-hardening plastic die steel


Non-Patent Citations (1)

Title
Petroleum Casing Damage Detection Algorithm Based on BP Network; Qian Huifang et al.; Journal of Xi'an Polytechnic University; 28 Feb 2014; Vol. 28, No. 1; pp. 84-88 *

Also Published As

Publication number Publication date
CN104166805A (en) 2014-11-26


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20190708

Address after: 710123 7th Floor, Southwest Gate Research Building, Xijing University, Shenhe No.2 Road, Chang'an District, Xi'an City, Shaanxi Province

Patentee after: SHAANXI HUACHEN PETROLEUM TECHNOLOGY CO.,LTD.

Address before: 710048 Jinhua South Road, Xi'an City, Shaanxi Province, No. 19

Patentee before: XI'AN POLYTECHNIC University

PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Data processing method for obtaining oil casing thickness

Effective date of registration: 20210104

Granted publication date: 20171110

Pledgee: Xi'an Yanliang Financing Guarantee Co.,Ltd.

Pledgor: SHAANXI HUACHEN PETROLEUM TECHNOLOGY Co.,Ltd.

Registration number: Y2021610000001

PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20220209

Granted publication date: 20171110

Pledgee: Xi'an Yanliang Financing Guarantee Co.,Ltd.

Pledgor: SHAANXI HUACHEN PETROLEUM TECHNOLOGY CO.,LTD.

Registration number: Y2021610000001

PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Data processing method for obtaining oil casing thickness

Effective date of registration: 20220216

Granted publication date: 20171110

Pledgee: Xi'an investment and financing Company limited by guarantee

Pledgor: SHAANXI HUACHEN PETROLEUM TECHNOLOGY CO.,LTD.

Registration number: Y2022610000033

PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20230306

Granted publication date: 20171110

Pledgee: Xi'an investment and financing Company limited by guarantee

Pledgor: SHAANXI HUACHEN PETROLEUM TECHNOLOGY CO.,LTD.

Registration number: Y2022610000033