CN115983105A - Occam inversion Lagrange multiplier optimization method based on deep learning weighted decision
- Publication number
- CN115983105A (application CN202211597028.9A)
- Authority
- CN
- China
- Prior art keywords
- inversion
- neural network
- data set
- representing
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention discloses an Occam inversion Lagrange multiplier optimization method based on a deep learning weighted decision, comprising the following steps: constructing a fully-connected neural network and a convolutional neural network; acquiring forward parameters; solving the forward data sets in one-to-one correspondence with the k model vectors; performing Occam inversion on the forward data sets and recording the data of every iteration to obtain a data set T and a Lagrange multiplier vector L; training the convolutional neural network and the fully-connected neural network; establishing mapping functions from the data sets trained by the convolutional neural network and the fully-connected neural network, and updating the inversion objective function; adjusting the dual-network weight factor and updating the data and model parameters; and repeating until the fitting difference is less than or equal to the preset value or the maximum number of iterations is reached, completing the inversion. Through this scheme, the method has the advantages of simple logic, efficient inversion iteration, stability, and reliability.
Description
Technical Field
The invention relates to the technical field of geophysics, and in particular to an Occam inversion Lagrange multiplier optimization method based on a deep learning weighted decision.
Background
Geophysical inversion refers to the mathematical-physical computation that infers the distribution of subsurface physical-property parameters from observed data, and it is one of the most central and difficult research directions in geophysics; the existence, uniqueness, and stability of the solution during inversion are its most important problems. Traditional geophysical inversion fits the data with linear or nonlinear methods to infer the subsurface physical structure. Regularized inversion has been widely applied over recent decades: a model constraint term built from prior information is added to the objective function to increase the stability of the fit.
Occam inversion, proposed by Constable et al. in the 1980s, is similar to regularized inversion but is based on a model-roughness constraint. Compared with traditional regularized inversion, Occam inversion introduces a Lagrange multiplier to constrain the smoothness of the model, so that the inversion iterates step by step under the premise that the model remains smooth, and in each iteration the multiplier yielding the model of minimum roughness is taken. The inversion result is therefore guaranteed to be as smooth as possible, and the method has the advantages that the inversion process is very stable and accords with the physical reality that strata change continuously.
The selection of the Lagrange multiplier (LM) in Occam inversion is a critical core problem: in each iteration, the multiplier must both minimize the objective function value and minimize the model roughness. At present, the most widely applied approaches are linear search algorithms, such as the golden-section method and the bisection method, and nonlinear search algorithms, such as the Monte Carlo algorithm and the enumeration method. Linear search algorithms are simple to compute and iterate quickly, but easily fall into local minima; nonlinear search algorithms can find the global minimum, but are time-consuming and computationally expensive. The traditional Lagrange multiplier searches of Occam inversion therefore have inherent shortcomings and cannot combine the advantages of global minimization and fast iteration.
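The golden-section line search mentioned above can be sketched as follows. This is a generic illustration: the one-dimensional `objective` passed in is a stand-in placeholder, not the patent's actual Occam objective.

```python
# Golden-section line search for a Lagrange multiplier, as used by
# traditional Occam inversion (a generic sketch; `objective` is a
# placeholder stand-in, not the patent's actual inversion objective).

def golden_section_search(objective, lo, hi, tol=1e-6):
    """Minimize a unimodal 1-D objective over the interval [lo, hi]."""
    invphi = (5 ** 0.5 - 1) / 2  # 1/phi, about 0.618
    a, b = lo, hi
    while abs(b - a) > tol:
        c = b - invphi * (b - a)  # interior probe points
        d = a + invphi * (b - a)
        if objective(c) < objective(d):
            b = d  # minimum lies in [a, d]
        else:
            a = c  # minimum lies in [c, b]
    return (a + b) / 2

# Example: a smooth unimodal stand-in objective with its minimum at lm = 2.0.
lm_best = golden_section_search(lambda lm: (lm - 2.0) ** 2 + 1.0, 0.0, 10.0)
```

As the background section notes, such a search converges quickly but, on a multimodal objective, only to the local minimum inside the bracketing interval.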
Therefore, there is an urgent need for an Occam inversion Lagrange multiplier optimization method based on a deep learning weighted decision that is simple in logic, efficient in inversion iteration, stable, and reliable.
Disclosure of Invention
In view of these problems, the invention aims to provide an Occam inversion Lagrange multiplier optimization method based on a deep learning weighted decision. The technical scheme adopted by the invention is as follows:
An Occam inversion Lagrange multiplier optimization method based on a deep learning weighted decision comprises the following steps:
S01, constructing a fully-connected neural network and a convolutional neural network;
S02, acquiring forward parameters; the forward parameters include the k model vectors M_k, the frequency vector F, and the transmitting and receiving station data;
S03, solving the forward data sets D_ko in one-to-one correspondence with the k model vectors M_k;
S04, performing Occam inversion on the forward data sets D_ko, and recording the data of every iteration to obtain a data set T and a Lagrange multiplier vector L;
S05, training the convolutional neural network with the data set T as input and its corresponding Lagrange multiplier vector L as label; meanwhile, training the fully-connected neural network with the Lagrange multiplier vector L as input and the optimal Lagrange multiplier L_N searched in each iteration as label;
S06, establishing a mapping function from the data set after convolutional-neural-network training to a first predicted value, and a mapping function from the data set after fully-connected-neural-network training to a second predicted value, and updating the inversion objective function U, whose expression is as follows:
wherein ∂ represents the roughness matrix; M_n represents the nth model vector; g(T_n) represents the mapping function of the convolutional neural network; λ represents the dual-network weight factor; h(L_n) represents the mapping function of the fully-connected neural network; W represents the data-item weight coefficient matrix; D_n represents the nth simulated data in the data set T; f(M_n) represents the forward function corresponding to the nth model;
S07, adjusting the dual-network weight factor, and updating the data and model parameters; the expression of the dual-network weight factor is:
λ_{j+1} = λ_j - 9.9/N
wherein λ_{j+1} represents the dual-network weight factor of the (j+1)th iteration; λ_j represents the dual-network weight factor of the jth iteration; N represents the total number of iterations;
and repeating steps S06 to S07 until the fitting difference is less than or equal to the preset value or the maximum number of iterations is reached, completing the inversion.
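The linear decay of the weight factor in step S07 can be sketched as below. The initial value 10 is taken from the embodiment described later; with N total iterations and a decrement of 9.9/N per iteration, λ decays from 10 to 0.1.

```python
# Linear decay of the dual-network weight factor: lambda_{j+1} = lambda_j - 9.9/N.
# The initial value 10 follows the embodiment; after N iterations the factor
# has decreased by exactly 9.9, ending at 0.1.

def lambda_schedule(lam0=10.0, n_iters=50):
    lams = [lam0]
    for _ in range(n_iters):
        lams.append(lams[-1] - 9.9 / n_iters)
    return lams

lams = lambda_schedule()  # lams[0] == 10.0, lams[-1] == 0.1
```

This matches the embodiment's description that λ is large early (the FCNN self-training dominates) and falls below 1 in the later iterations.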
Further, in step S04, Gaussian white noise is superimposed on the forward data set D_ko to simulate measured data, giving the noisy data set D_k; the Occam inversion is performed on the noisy data set D_k.
Preferably, the expression of the k model vectors M_k among the forward parameters is:
M_k = [m_1, m_2, m_3, ..., m_n]
wherein m_n represents the resistivity value of the nth formation layer;
the expression of the frequency vector F is:
F = [f_1, f_2, f_3, ..., f_n]
wherein f_n represents the nth frequency value.
Further, in step S05, the training loss function of the convolutional neural network and the fully-connected neural network adopts the L2 norm, with the expression:
l(x, y) = L = {l_1, l_2, ..., l_n}^T
l_n = (x_n - y_n)^2
wherein l(x, y) represents the loss vector L, reduced by its mean or sum; l_n represents the square of the difference between the nth training result and its label; x_n represents the nth training result; y_n represents the nth label value.
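The elementwise L2 loss above, with an optional mean or sum reduction, can be sketched as:

```python
# Elementwise L2 (squared-error) loss l_n = (x_n - y_n)^2 with an optional
# mean/sum reduction, matching the loss described above.

def l2_loss(x, y, reduction="mean"):
    losses = [(xn - yn) ** 2 for xn, yn in zip(x, y)]
    if reduction == "mean":
        return sum(losses) / len(losses)
    if reduction == "sum":
        return sum(losses)
    return losses  # unreduced vector {l_1, ..., l_n}

preds = [1.0, 2.0, 4.0]   # illustrative network outputs x_n
labels = [1.0, 3.0, 2.0]  # illustrative label values y_n
loss = l2_loss(preds, labels)  # mean of [0.0, 1.0, 4.0]
```

This is the same computation deep learning frameworks expose as a mean-squared-error loss with a selectable reduction.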
Further, a target error function is established, expressed as:
wherein oup represents the network output and targ represents the label value.
Further, in step S04, the expression of the data set T is:
T = {T_1, T_2, ..., T_i}
i = N × k
wherein T_i represents the data of one Occam inversion; N represents the total number of iterations.
Further, the expression of the Lagrange multiplier vector L is:
L = {L_1, L_2, ..., L_N}
wherein L_N represents the optimal Lagrange multiplier found by each iterative search.
Preferably, in step S04, the expression of the current inversion objective function of the Occam inversion is:
where LM represents the multiplier values obtained by the conventional linear search method.
Further, in step S06, the mapping function from the data set after convolutional-neural-network training to the first predicted value is LM1 = g(T); the mapping function from the data set after fully-connected-neural-network training to the second predicted value is LM2 = h(L).
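The two predictions LM1 and LM2 are combined through the weight factor λ into a single multiplier. The patent gives the exact combination only as a formula image, so the linear blend below is an assumed illustration, not the patent's formula: early iterations (large λ) lean on the FCNN prediction LM2 = h(L), later iterations (small λ) on the CNN prediction LM1 = g(T), consistent with the embodiment's description of the λ schedule.

```python
# Hypothetical weighted combination of the two network predictions into one
# Lagrange multiplier. The patent's exact formula is shown only as an image
# in the source, so this linear blend is an assumption for illustration.

def combine_multiplier(lm1, lm2, lam, lam0=10.0):
    """Blend CNN prediction lm1 = g(T) and FCNN prediction lm2 = h(L)."""
    w = min(max(lam / lam0, 0.0), 1.0)  # normalize lambda into [0, 1]
    return w * lm2 + (1.0 - w) * lm1

# At lam = lam0 the FCNN prediction dominates; at lam = 0 the CNN one does.
lm_new = combine_multiplier(lm1=2.0, lm2=8.0, lam=5.0)
```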
Compared with the prior art, the invention has the following beneficial effects:
(1) The method trains a CNN on the field-value data set and an FCNN on the Lagrange multipliers. Unlike the traditional search algorithms for the Lagrange multiplier in Occam inversion, the predicted values trained by the dual-network structure are combined through a weight factor into the new Lagrange multiplier prediction. Compared with traditional linear search algorithms, the training inputs combine the synthetic data set with the real sequence of inversion multipliers, so the prediction better approaches the global optimum and the accuracy and stability of the inversion are improved.
(2) The invention superimposes Gaussian white noise to simulate measured data, which reproduces the characteristics of measured data during training and improves the generalization ability of the deep learning network.
In conclusion, the method has the advantages of simple logic, efficient inversion iteration, stability, and reliability, and has high practical and popularization value in the technical field of geophysics.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention, and therefore should not be considered as limiting the scope of protection, and it is obvious for those skilled in the art that other related drawings can be obtained according to these drawings without inventive efforts.
FIG. 1 is a logic flow diagram of the present invention.
FIG. 2 is a graph of an objective function and a Lagrangian multiplier for an example data inversion in accordance with the present invention.
FIG. 3 is a schematic diagram of a deep learning training network structure according to the present invention.
Fig. 4 is a diagram showing parameters and a feature map of each layer of the CNN network according to the present invention.
FIG. 5 is a graph of the loss function and the error function variation of the training process of the present invention.
FIG. 6 is a comparison graph of the effect of the optimized inversion of the present invention and the conventional inversion.
Detailed Description
To further clarify the objects, technical solutions and advantages of the present application, the present invention will be further described with reference to the accompanying drawings and examples, and embodiments of the present invention include, but are not limited to, the following examples. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In this embodiment, the term "and/or" merely describes an association between objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone.
The terms "first" and "second" and the like in the description and claims of this embodiment are used to distinguish different objects, not to describe a specific order of objects. For example, a first target object and a second target object are different target objects, not a particular sequence of target objects.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
In the description of the embodiments of the present application, the meaning of "a plurality" means two or more unless otherwise specified. For example, a plurality of processing units refers to two or more processing units; a plurality of systems refers to two or more systems.
As shown in figs. 1 to 6, this embodiment provides an Occam inversion Lagrangian multiplier optimization method based on a deep learning weighted decision, which decides jointly through two neural networks: a first predicted value LM1 from a convolutional neural network deep-learned on the iteration field-value data, and a second predicted value LM2 from a fully-connected neural network based on the LM iteration sequence. The specific steps are as follows:
First, 50 model vectors M_k (i.e., 50 different resistivity models) are established: high-resistance layers of 10 Ω·m, 50 Ω·m, and 100 Ω·m are randomly combined over a background resistivity of 1 Ω·m; the dominant frequencies are 0.5 Hz, 1.5 Hz, 8 Hz, etc.; the layer thickness and depth are fixed at 100 m and 1000 m; and the transmitter-receiver offsets are 500 m, 700 m, ..., 4500 m, for 21 stations in total.
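The 50 random models of this step can be generated as in the sketch below. Only the resistivity values (1 Ω·m background; high-resistance values drawn from {10, 50, 100} Ω·m) come from the text; the number of layers per model and the placement of a single high-resistance layer are assumptions for illustration.

```python
# Generate 50 random layered-resistivity models: a 1 ohm-m background with a
# high-resistance layer drawn from {10, 50, 100} ohm-m. The layer count and
# the single-anomalous-layer choice are assumed for illustration.
import random

def build_models(k=50, n_layers=30, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducibility
    models = []
    for _ in range(k):
        m = [1.0] * n_layers
        m[rng.randrange(n_layers)] = rng.choice([10.0, 50.0, 100.0])
        models.append(m)
    return models

models = build_models()  # 50 model vectors M_k
```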
Secondly, each model is forward modelled to obtain the forward data sets D_ko, to which 5% Gaussian white noise is added before inversion. The distribution of the objective function versus the multiplier when a Lagrange multiplier is searched in one iteration is shown in FIG. 2. With the number of iterations set to 50, 2500 groups of training data T = {T_1, T_2, ..., T_i} are obtained, where i = 50 × 50 = 2500, together with 2500 corresponding Lagrange multiplier values L = {L_1, L_2, ..., L_2500}. The current inversion objective function is:
where LM represents multiplier values obtained by a conventional linear search method.
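Superimposing the 5% Gaussian white noise on a forward-modelled data vector can be sketched as below. The patent states only that 5% Gaussian white noise is superimposed; the multiplicative (relative) noise form is an assumption.

```python
# Add 5% Gaussian white noise to a forward-modelled data vector to simulate
# measured data. Multiplicative relative noise is assumed here; the source
# states only that 5% Gaussian white noise is superimposed.
import random

def add_noise(d_forward, level=0.05, seed=1):
    rng = random.Random(seed)  # fixed seed for reproducibility
    return [d * (1.0 + level * rng.gauss(0.0, 1.0)) for d in d_forward]

d_clean = [1.0, 2.0, 3.0]  # illustrative forward data D_ko
d_noisy = add_noise(d_clean)  # noisy data set D_k
```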
Thirdly, the network models are constructed. The convolutional neural network CNN takes the data set T as input and the corresponding Lagrange multiplier vector L as label, and comprises 3 convolutional layers, 2 pooling layers, and 2 fully-connected layers. Meanwhile, the fully-connected neural network FCNN takes the Lagrange multiplier vector L as input and L_N as label, with 2 fully-connected layers in total. A schematic diagram of the dual-network model is shown in FIG. 3, and the parameters and partial-data feature maps of each CNN layer are shown in FIG. 4.
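The feature-map sizes of the described stack (3 convolutional layers, 2 pooling layers, 2 fully-connected layers) follow from the usual convolution/pooling output arithmetic. The 50 × 50 input, kernel sizes, and strides below are assumptions for illustration; the patent gives the actual layer parameters only in FIG. 4.

```python
# Output-size arithmetic for a stack of 3 conv and 2 pool layers, as in the
# described CNN. The 50x50 input and the 3x3 kernels with padding 1 are
# assumed for illustration; the patent's actual parameters are in FIG. 4.

def conv_out(n, kernel, stride=1, pad=0):
    return (n + 2 * pad - kernel) // stride + 1

def pool_out(n, kernel=2, stride=2):
    return (n - kernel) // stride + 1

n = 50                      # assumed input side length
n = conv_out(n, 3, pad=1)   # conv1: 50 -> 50
n = pool_out(n)             # pool1: 50 -> 25
n = conv_out(n, 3, pad=1)   # conv2: 25 -> 25
n = pool_out(n)             # pool2: 25 -> 12
n = conv_out(n, 3, pad=1)   # conv3: 12 -> 12
flat = n * n                # flattened size fed to the two FC layers
```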
Fourthly, the training loss function adopts the L2 norm, with the expression:
l(x, y) = L = {l_1, l_2, ..., l_n}^T
l_n = (x_n - y_n)^2
wherein l(x, y) represents the loss vector L, reduced by its mean or sum; l_n represents the square of the difference between the nth training result and its label; x_n represents the nth training result; y_n represents the nth label value.
Here, a target error function is established, expressed as:
wherein oup represents the network output and targ represents the label value.
In this embodiment, the evolution of the loss function and the error function over the training iterations is shown in FIG. 5; both functions show a clear downward trend during training.
Fifthly, after training is finished, inversion is carried out on a test model, a 100 Ω·m high-resistance layer of 100 m thickness with its top buried at a depth of 1700 m, giving the new inversion objective function U with the expression:
wherein ∂ represents the roughness matrix; M_n represents the nth model vector; g(T_n) represents the mapping function of the convolutional neural network; λ represents the dual-network weight factor; h(L_n) represents the mapping function of the fully-connected neural network; W represents the data-item weight coefficient matrix; D_n represents the nth simulated data in the data set T; f(M_n) represents the forward function corresponding to the nth model;
in this embodiment, the initial value of the dual-network weighting factor λ is 10, the self-training of LM is mainly used in the early stage of the iteration, λ is 10 and gradually starts to decrease, the fitting data training model is mainly used in the later stage, and the value of λ is less than 1.
Sixthly, the data and model parameters are updated in the iterative inversion: the dual-network weight factor is adjusted and the data and model parameters are updated; the expression of the dual-network weight factor is:
λ_{j+1} = λ_j - 9.9/N
wherein λ_{j+1} represents the dual-network weight factor of the (j+1)th iteration; λ_j represents the dual-network weight factor of the jth iteration; N represents the total number of iterations;
in this embodiment, the situation that the fitting difference increases may be encountered in each iteration, and when the fitting difference increases, the network is automatically re-entered according to 1/2 of the original step length according to the current data to obtain a new predicted value until the fitting difference decreases and the next iteration calculation is performed.
Steps S06 to S07 are repeated until the fitting difference is less than or equal to the preset value or the maximum number of iterations is reached, completing the inversion. As shown in FIG. 6, the result after 28 inversion iterations of the above steps is compared with the result of 28 conventional inversion iterations under the same parameters. Both methods exhibit the smooth character of the Occam inversion model; the two results are close to each other and both fit the theoretical model well, appearing slightly shallower than the real model, but the difference is negligible, while the algorithm of the invention runs faster than the conventional inversion.
The above case shows that the Occam inversion Lagrange multiplier optimization method based on the deep learning weighted decision adopts a simultaneous dual-neural-network training mode, establishes random models to learn the characteristics of the data set and the Lagrange multipliers separately, and is verified on test data: the loss function and the error function decrease markedly, and the test model is finally verified.
The above-mentioned embodiments are only preferred embodiments of the present invention, and do not limit the scope of the present invention, but all the modifications made by the principles of the present invention and the non-inventive efforts based on the above-mentioned embodiments shall fall within the scope of the present invention.
Claims (9)
1. An Occam inversion Lagrange multiplier optimization method based on a deep learning weighted decision, characterized by comprising the following steps:
S01, constructing a fully-connected neural network and a convolutional neural network;
S02, acquiring forward parameters; the forward parameters include the k model vectors M_k, the frequency vector F, and the transmitting and receiving station data;
S03, solving the forward data sets D_ko in one-to-one correspondence with the k model vectors M_k;
S04, performing Occam inversion on the forward data sets D_ko, and recording the data of every iteration to obtain a data set T and a Lagrange multiplier vector L;
S05, training the convolutional neural network with the data set T as input and its corresponding Lagrange multiplier vector L as label; meanwhile, training the fully-connected neural network with the Lagrange multiplier vector L as input and the optimal Lagrange multiplier L_N searched in each iteration as label;
S06, establishing a mapping function from the data set after convolutional-neural-network training to a first predicted value, and a mapping function from the data set after fully-connected-neural-network training to a second predicted value, and updating the inversion objective function U, whose expression is as follows:
wherein ∂ represents the roughness matrix; M_n represents the nth model vector; g(T_n) represents the mapping function of the convolutional neural network; λ represents the dual-network weight factor; h(L_n) represents the mapping function of the fully-connected neural network; W represents the data-item weight coefficient matrix; D_n represents the nth simulated data in the data set T; f(M_n) represents the forward function corresponding to the nth model;
S07, adjusting the dual-network weight factor, and updating the data and model parameters; the expression of the dual-network weight factor is:
λ_{j+1} = λ_j - 9.9/N
wherein λ_{j+1} represents the dual-network weight factor of the (j+1)th iteration; λ_j represents the dual-network weight factor of the jth iteration; N represents the total number of iterations;
and repeating steps S06 to S07 until the fitting difference is less than or equal to the preset value or the maximum number of iterations is reached, completing the inversion.
2. The Occam inversion Lagrange multiplier optimization method based on deep learning weighted decision according to claim 1, wherein in step S04, Gaussian white noise is superimposed on the forward data set D_ko to simulate measured data, giving the noisy data set D_k; the Occam inversion is performed on the noisy data set D_k.
3. The method according to claim 1, wherein the expression of the k model vectors M_k among the forward parameters is:
M_k = [m_1, m_2, m_3, ..., m_n]
wherein m_n represents the resistivity value of the nth formation layer;
the expression of the frequency vector F is:
F = [f_1, f_2, f_3, ..., f_n]
wherein f_n represents the nth frequency value.
4. The Occam inversion Lagrange multiplier optimization method based on deep learning weighted decision according to claim 1, 2 or 3, wherein in step S05, the training loss functions of the convolutional neural network and the fully-connected neural network adopt the L2 norm, with the expression:
l(x, y) = L = {l_1, l_2, ..., l_n}^T
l_n = (x_n - y_n)^2
wherein l(x, y) represents the loss vector L, reduced by its mean or sum; l_n represents the square of the difference between the nth training result and its label; x_n represents the nth training result; y_n represents the nth label value.
6. The Occam inversion Lagrange multiplier optimization method based on deep learning weighted decision according to claim 1 or 2, wherein in step S04, the expression of the data set T is:
T = {T_1, T_2, ..., T_i}
i = N × k
wherein T_i represents the data of one Occam inversion; N represents the total number of iterations.
7. The Occam inversion Lagrange multiplier optimization method based on deep learning weighted decision according to claim 6, wherein the expression of the Lagrange multiplier vector L is:
L = {L_1, L_2, ..., L_N}
wherein L_N represents the optimal Lagrange multiplier found by each iterative search.
8. The method for optimizing Occam inversion Lagrangian multipliers based on deep learning weighted decision as claimed in claim 7, wherein in the step S04, the expression of the current inversion objective function of the Occam inversion is as follows:
where LM represents multiplier values obtained by a conventional linear search method.
9. The Occam inversion Lagrange multiplier optimization method based on deep learning weighted decision according to claim 1, wherein in step S06, the mapping function from the data set after convolutional-neural-network training to the first predicted value is LM1 = g(T), and the mapping function from the data set after fully-connected-neural-network training to the second predicted value is LM2 = h(L).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211597028.9A CN115983105B (en) | 2022-12-12 | 2022-12-12 | Occam inversion Lagrange multiplier optimization method based on deep learning weighted decision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115983105A true CN115983105A (en) | 2023-04-18 |
CN115983105B CN115983105B (en) | 2023-10-03 |
Family
ID=85958716
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211597028.9A Active CN115983105B (en) | 2022-12-12 | 2022-12-12 | Occam inversion Lagrange multiplier optimization method based on deep learning weighted decision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115983105B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116976202A (en) * | 2023-07-12 | 2023-10-31 | 清华大学 | Fixed complex source item distribution inversion method and device based on deep neural network |
CN117828420A (en) * | 2023-12-07 | 2024-04-05 | 湖南光华防务科技集团有限公司 | Testability multi-target distribution method based on generated data |
CN117828420B (en) * | 2023-12-07 | 2024-05-31 | 湖南光华防务科技集团有限公司 | Testability multi-target distribution method based on generated data |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105573963A (en) * | 2016-01-05 | 2016-05-11 | 中国电子科技集团公司第二十二研究所 | Reconstruction method for horizontal nonuniform structure of ionized layer |
US20170075030A1 (en) * | 2015-09-15 | 2017-03-16 | Brent D. Wheelock | Accelerated Occam Inversion Using Model Remapping and Jacobian Matrix Decomposition |
CN113486591A (en) * | 2021-07-13 | 2021-10-08 | 吉林大学 | Gravity multi-parameter data density weighted inversion method for convolutional neural network result |
CN113933905A (en) * | 2021-09-30 | 2022-01-14 | 中国矿业大学 | Cone-shaped field source transient electromagnetic inversion method |
- 2022-12-12: application CN202211597028.9A filed; patent CN115983105B granted (active)
Non-Patent Citations (2)
Title |
---|
- C. de Groot-Hedlin et al., "Inversion of magnetotelluric data for 2D structure with sharp resistivity contrasts", Geophysics, vol. 69, no. 1, pp. 78-86
- Chen Runzi et al., "Fast 2D magnetotelluric Occam inversion based on MATLAB", Progress in Geophysics, vol. 33, no. 4, pp. 1461-1468
Legal Events

Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant