CN115983105A - Occam inversion Lagrange multiplier optimization method based on deep learning weighting decision - Google Patents

Occam inversion Lagrange multiplier optimization method based on deep learning weighting decision

Info

Publication number
CN115983105A
CN115983105A
Authority
CN
China
Prior art keywords
inversion
neural network
data set
representing
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211597028.9A
Other languages
Chinese (zh)
Other versions
CN115983105B
Inventor
王绪本
杨锐
杨钰菡
王向鹏
袁崇鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu University of Technology
Original Assignee
Chengdu University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu University of Technology
Priority to CN202211597028.9A
Publication of CN115983105A
Application granted
Publication of CN115983105B
Active legal status
Anticipated expiration legal status


Abstract

The invention discloses an Occam inversion Lagrange multiplier optimization method based on a deep learning weighted decision, which comprises the following steps: constructing a fully-connected neural network and a convolutional neural network; acquiring forward parameters; solving the forward data sets corresponding one-to-one to the k model vectors; performing Occam inversion on the forward data sets and recording the data of each iteration to obtain a data set T and a Lagrange multiplier vector L; training the convolutional neural network and the fully-connected neural network; establishing mapping functions from the trained convolutional neural network and the trained fully-connected neural network to predicted multiplier values, and updating the inversion objective function; adjusting the dual-network weight factor and updating the data and model parameters; and completing the inversion when the maximum number of iterations is reached or the fitting difference falls to the preset value. Through this scheme, the method has the advantages of simple logic, efficient inversion iteration, stability, and reliability.

Description

Occam inversion Lagrange multiplier optimization method based on deep learning weighting decision
Technical Field
The invention relates to the technical field of geophysics, in particular to an Occam inversion Lagrange multiplier optimization method based on a deep learning weighted decision.
Background
Geophysical inversion refers to the mathematical-physical calculation from observed data to the distribution of subsurface physical-property parameters, and is one of the most central and difficult research directions in geophysics; the existence, uniqueness and stability of the solution during inversion are the most important problems. Traditional geophysical inversion fits the data with linear or nonlinear methods to infer the subsurface structure. Regularized inversion has been widely applied over recent decades: a model constraint term built from prior information is added to the objective function to increase the stability of the fit.
Similar to regularized inversion, Occam inversion was proposed by Constable et al. in the 1980s on the basis of a model-roughness constraint. Compared with traditional regularized inversion, Occam inversion introduces a Lagrange multiplier to constrain the smoothness of the model, so that the iteration proceeds step by step under the premise that the model remains smooth, and in each iteration the multiplier giving the minimum model roughness is taken. The inversion result is therefore guaranteed to be as smooth as possible; the inversion process is very stable and accords with the physical expectation that the strata vary continuously.
The selection of the Lagrange multiplier (LM) is a critical core problem of Occam inversion: in each iteration the multiplier must both minimize the objective function value and keep the model roughness minimal. At present, linear search algorithms such as the golden-section method and the bisection method, and nonlinear search algorithms such as the Monte Carlo algorithm and the enumeration method, are the most widely applied. Linear search is computationally simple and iterates quickly but easily falls into a local minimum; nonlinear search can find the global minimum but is time-consuming and computationally expensive. The traditional Lagrange multiplier search of Occam inversion therefore has inherent defects and cannot combine global optimality with fast iteration.
Therefore, there is an urgent need for an Occam inversion Lagrange multiplier optimization method based on a deep learning weighted decision that is simple in logic, efficient in inversion iteration, and stable and reliable.
Disclosure of Invention
Aiming at the above problems, the invention provides an Occam inversion Lagrange multiplier optimization method based on a deep learning weighted decision. The technical scheme adopted by the invention is as follows:
an Occam inversion Lagrange multiplier optimization method based on deep learning weighting decision comprises the following steps:
s01, constructing a full-connection neural network and a convolution neural network;
s02, acquiring forward parameters; the forward parameters include k model vectors M k Frequency vector F and data of transmitting and receiving station;
step S03, solving k model vectors M k One-to-one correspondence forward data set D ko
Step S04, performing a course data set D ko Performing Occam inversion, and recording data in any iteration to obtain a data set T and a Lagrange multiplier vector L;
s05, training the convolutional neural network by taking the data set T as input and the Lagrange multiplier vector L corresponding to the data set T as a label; meanwhile, taking the Lagrange multiplier vector L as input and searching the optimal Lagrange multiplier L in each iteration N Training the fully-connected neural network for the label;
s06, establishing a mapping function from the data set after the convolutional neural network training to a first predicted value; establishing a mapping function from the data set after the training of the fully-connected neural network to a second predicted value, and updating an inversion target function U, wherein the expression is as follows:
Figure BDA0003993500330000021
wherein ,
Figure BDA0003993500330000022
representing a roughness matrix; m n Representing the nth model vector; g (T) n ) A mapping function representing a convolutional neural network; λ represents a dual network weight factor; h (L) n ) A mapping function representing a fully-connected neural network; w represents a data item weight coefficient matrix; d n Representing the nth analog data in the data set T; f (M) n ) The table corresponds to the forward function of the nth model;
s07, adjusting a double-network weight factor, and updating data and model parameters; the expression of the dual network weighting factor is:
λ j+1 =λ j -9.9/N
wherein ,λj+1 A dual network weight factor representing the j +1 th iteration; lambda [ alpha ] j A dual network weight factor representing a jth iteration; n represents the total number of iterations;
and (4) repeating the steps S06 to S07 until the maximum iteration number is less than or equal to the preset fitting difference, and finishing inversion.
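For illustration only, a minimal Python sketch of the S06–S07 loop is given below. The helpers cnn, fcnn, occam_step and the misfit threshold are hypothetical placeholders, and the combination g(T_n) + λ·h(L_n) follows the objective function above; this is a sketch under those assumptions, not the patented implementation.

```python
def weighted_decision_inversion(cnn, fcnn, occam_step, T, L,
                                n_iter=50, target_misfit=1.0):
    """Sketch of steps S06-S07 with hypothetical helpers.

    cnn(T)  -> first predicted multiplier LM1 = g(T)  (CNN trained on data set T)
    fcnn(L) -> second predicted multiplier LM2 = h(L) (FCNN trained on LM sequence)
    occam_step(lm) -> (T, L, misfit): one Occam iteration using multiplier lm
    """
    lam = 10.0                          # initial dual-network weight factor
    for j in range(n_iter):
        lm = cnn(T) + lam * fcnn(L)     # weighted decision of the two predictions
        T, L, misfit = occam_step(lm)   # update data and model parameters
        if misfit <= target_misfit:     # preset fitting difference reached
            break
        lam -= 9.9 / n_iter             # lambda decays from 10 toward 0.1
    return T, L
```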
Further, in step S04, Gaussian white noise is superposed on the forward data set D_k0 to simulate measured data, yielding a noisy data set D_k; the Occam inversion is then performed on the noisy data set D_k.
Preferably, the expression of the k model vectors M_k in the forward parameters is:

M_k = [m_1, m_2, m_3, ..., m_n]

wherein m_n represents the resistivity value of the nth formation layer;

the expression of the frequency vector F is:

F = [f_1, f_2, f_3, ..., f_n]

wherein f_n represents the nth frequency value.
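For concreteness, a small NumPy illustration of these two vectors follows; the specific values are hypothetical and only mirror the definitions above.

```python
import numpy as np

# Hypothetical model vector M_k: resistivity of each formation layer (ohm-m),
# e.g. a 100 ohm-m high-resistance layer inside a 1 ohm-m background.
M_k = np.array([1.0, 1.0, 100.0, 1.0, 1.0])

# Hypothetical frequency vector F (Hz).
F = np.array([0.5, 1.5, 8.0])
```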
Further, in step S05, the loss function for training the convolutional neural network and the fully-connected neural network adopts the L2 norm, with the expression:

l(x, y) = L = \{l_1, l_2, \ldots, l_n\}^T, \quad l_n = (x_n - y_n)^2

wherein l(x, y) represents the average or sum of the element-wise losses l_n; l_n represents the square of the difference between the nth training result and the label; x_n represents the nth training result; and y_n represents the nth label value.
Further, a target error function is established, expressed as:

\mathrm{Err} = \|\mathrm{oup} - \mathrm{targ}\|_2 \,/\, \|\mathrm{targ}\|_2

wherein oup represents the network output and targ represents the label value.
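As an illustration, assuming PyTorch, the L2-norm loss corresponds to torch.nn.MSELoss, and the relative-error form written above for the target error function (itself an assumption) can be sketched as:

```python
import torch

mse = torch.nn.MSELoss()            # l_n = (x_n - y_n)^2, averaged over elements

def target_error(oup: torch.Tensor, targ: torch.Tensor) -> torch.Tensor:
    # Assumed relative-error form: ||oup - targ|| / ||targ||
    return torch.norm(oup - targ) / torch.norm(targ)

x = torch.tensor([1.0, 2.0, 3.0])   # training results x_n
y = torch.tensor([1.1, 1.9, 3.2])   # label values y_n
print(mse(x, y).item(), target_error(x, y).item())
```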
Further, in step S04, the expression of the data set T is:

T = {T_1, T_2, ..., T_i}, i = N × k

wherein T_i represents the data of one Occam inversion iteration, and N represents the total number of iterations.
Further, the expression of the Lagrange multiplier vector L is:

L = {L_1, L_2, ..., L_N}

wherein L_N represents the optimal Lagrange multiplier found by each iteration's search.
Preferably, in step S04, the expression of the current inversion objective function of the Occam inversion is:

U = \|\partial M_n\|^2 + \mathrm{LM}^{-1} \|W(D_n - f(M_n))\|^2

where LM represents the multiplier value obtained by the conventional linear search method.
Further, in step S06, the mapping function from the data set after convolutional neural network training to the first predicted value is LM1 = g(T); the mapping function from the Lagrange multiplier vector after fully-connected neural network training to the second predicted value is LM2 = h(L).
Compared with the prior art, the invention has the following beneficial effects:
(1) The method trains a CNN on the field-value data set and an FCNN on the Lagrange multiplier sequence. Unlike the traditional search algorithms for the Lagrange multiplier in Occam inversion, the predictions of the dual-network structure are combined through a weight factor into the new Lagrange multiplier prediction. Compared with the traditional linear search algorithm, using the comprehensive data set and the real inversion multiplier sequences as training input makes the prediction accord better with the global optimal solution, improving inversion accuracy and stability.
(2) The invention adds Gaussian white noise to simulate measured data, which reproduces the characteristics of measured data during training and improves the generalization capability of the deep learning network.
In conclusion, the method has the advantages of simple logic, efficient inversion iteration, stability and reliability, and has high practical and popularization value in the technical field of geophysics.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention, and therefore should not be considered as limiting the scope of protection, and it is obvious for those skilled in the art that other related drawings can be obtained according to these drawings without inventive efforts.
FIG. 1 is a logic flow diagram of the present invention.
FIG. 2 is a graph of an objective function and a Lagrangian multiplier for an example data inversion in accordance with the present invention.
FIG. 3 is a schematic diagram of a deep learning training network structure according to the present invention.
Fig. 4 is a diagram showing parameters and a feature map of each layer of the CNN network according to the present invention.
FIG. 5 is a graph of the loss function and the error function variation of the training process of the present invention.
FIG. 6 is a comparison graph of the effect of the optimized inversion of the present invention and the conventional inversion.
Detailed Description
To further clarify the objects, technical solutions and advantages of the present application, the present invention will be further described with reference to the accompanying drawings and examples, and embodiments of the present invention include, but are not limited to, the following examples. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In this embodiment, the term "and/or" is only one kind of association relationship describing an associated object, and means that three relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone.
The terms "first" and "second," and the like, in the description and claims of the present embodiment are used for distinguishing different objects, and are not used for describing a specific order of the objects. For example, the first target object and the second target object, etc. are specific sequences for distinguishing different target objects, rather than describing target objects.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
In the description of the embodiments of the present application, the meaning of "a plurality" means two or more unless otherwise specified. For example, a plurality of processing units refers to two or more processing units; a plurality of systems refers to two or more systems.
As shown in figs. 1 to 6, the present embodiment provides an Occam inversion Lagrange multiplier optimization method based on a deep learning weighted decision, in which two neural networks decide jointly: a first predicted value LM1 obtained by a convolutional neural network deeply learning the iteration field-value data, and a second predicted value LM2 obtained by a fully-connected neural network from the LM iteration sequence. The specific steps are as follows:
First, 50 model vectors M_k (i.e., 50 different resistivity models) were established: high-resistance layers of 10 Ω·m, 50 Ω·m and 100 Ω·m were randomly combined over a background resistivity of 1 Ω·m; the dominant frequencies were 0.5 Hz, 1.5 Hz, 8 Hz, etc.; the layer thickness and depth were fixed at 100 m and 1000 m; and the transmitter–receiver offsets were 500 m, 700 m, ..., 4500 m, giving 21 sites.
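A minimal NumPy sketch of assembling such a random model set is given below; the layer discretization and the random placement of the high-resistance layer are assumptions, not the generator used in the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_models(n_models=50, n_layers=20):
    """Assumed generator: one 100 m high-resistance layer in a 1 ohm-m background."""
    models = np.full((n_models, n_layers), 1.0)           # 1 ohm-m background
    for i in range(n_models):
        top = rng.integers(5, 15)                         # random burial depth index
        models[i, top] = rng.choice([10.0, 50.0, 100.0])  # high-resistance value
    return models

M = make_models()                        # 50 model vectors M_k
offsets = np.arange(500, 4501, 200)      # 500, 700, ..., 4500 m -> 21 sites
```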
Secondly, forward modeling was performed on each model to obtain the forward data sets D_k0; 5% Gaussian white noise was added and inversion performed. The distribution of the objective function and the multiplier when searching the Lagrange multiplier within one iteration is shown in FIG. 2. With the number of iterations set to 50, 2500 groups of training data T = {T_1, T_2, ..., T_i}, i = 50 × 50, were obtained, together with the 2500 corresponding Lagrange multiplier values L = {L_1, L_2, ..., L_2500}. The current inversion objective function is:

U = \|\partial M_n\|^2 + \mathrm{LM}^{-1} \|W(D_n - f(M_n))\|^2

where LM represents the multiplier value obtained by the conventional linear search method.
Thirdly, the network models were constructed. A convolutional neural network (CNN) takes the data set T as input with the corresponding Lagrange multiplier vector L as the label, and comprises 3 convolutional layers, 2 pooling layers and 2 fully-connected layers; meanwhile, a fully-connected neural network (FCNN) takes the Lagrange multiplier vector L as input with L_N as the label, and comprises 2 fully-connected layers. A schematic diagram of the dual-network model is shown in FIG. 3, and the parameters and partial-data feature maps of each CNN layer are shown in FIG. 4.
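For illustration, a minimal PyTorch sketch of this dual-network structure follows; the channel counts, kernel sizes, and input length are assumptions, since the text fixes only the layer counts (CNN: 3 convolutional, 2 pooling, 2 fully-connected; FCNN: 2 fully-connected).

```python
import torch.nn as nn

class CNN(nn.Module):
    """3 conv + 2 pooling + 2 fully-connected layers (sizes assumed)."""
    def __init__(self, n_in=50):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * (n_in // 4), 32), nn.ReLU(),
            nn.Linear(32, 1),               # first predicted value LM1 = g(T)
        )

    def forward(self, x):                   # x: (batch, 1, n_in)
        return self.head(self.features(x))

class FCNN(nn.Module):
    """2 fully-connected layers mapping the multiplier sequence to LM2 = h(L)."""
    def __init__(self, n_in=50):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_in, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):                   # x: (batch, n_in)
        return self.net(x)
```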
Fourthly, the training loss function adopts the L2 norm, with the expression:

l(x, y) = L = \{l_1, l_2, \ldots, l_n\}^T, \quad l_n = (x_n - y_n)^2

wherein l(x, y) represents the average or sum of the element-wise losses l_n; l_n represents the square of the difference between the nth training result and the label; x_n represents the nth training result; and y_n represents the nth label value.
Here, a target error function is established, expressed as:

\mathrm{Err} = \|\mathrm{oup} - \mathrm{targ}\|_2 \,/\, \|\mathrm{targ}\|_2

wherein oup represents the network output and targ represents the label value.
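A minimal training-loop sketch for either network, under the PyTorch assumption above (optimizer, learning rate, and epoch count are illustrative choices, not values from the patent):

```python
import torch

def train(net, inputs, labels, epochs=200, lr=1e-3):
    """Train one of the two networks with the L2-norm loss of step four."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    mse = torch.nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        out = net(inputs)
        loss = mse(out, labels)             # l_n = (x_n - y_n)^2
        loss.backward()
        opt.step()
    # Assumed relative-error form of the target error function
    err = torch.norm(out - labels) / torch.norm(labels)
    return loss.item(), err.item()
```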
In this embodiment, the evolution of the loss function and the error function over the training iterations is shown in FIG. 5; both show a clear descending trend during training.
Fifthly, after training, inversion was performed on the test model, a 100 Ω·m high-resistance layer with a top burial depth of 1700 m and a thickness of 100 m, and the new inversion objective function U was obtained, with the expression:

U = \|\partial M_n\|^2 + \left[g(T_n) + \lambda\, h(L_n)\right]^{-1} \|W(D_n - f(M_n))\|^2

wherein \partial represents the roughness matrix; M_n represents the nth model vector; g(T_n) represents the mapping function of the convolutional neural network; λ represents the dual-network weight factor; h(L_n) represents the mapping function of the fully-connected neural network; W represents the data-item weight coefficient matrix; D_n represents the nth simulated data in the data set T; and f(M_n) represents the forward function corresponding to the nth model;
in this embodiment, the initial value of the dual-network weighting factor λ is 10, the self-training of LM is mainly used in the early stage of the iteration, λ is 10 and gradually starts to decrease, the fitting data training model is mainly used in the later stage, and the value of λ is less than 1.
Sixthly, in the iterative inversion the dual-network weight factor is adjusted and the data and model parameters are updated; the expression for the dual-network weight factor is:

λ_{j+1} = λ_j − 9.9/N

wherein λ_{j+1} represents the dual-network weight factor of the (j+1)th iteration, λ_j represents the dual-network weight factor of the jth iteration, and N represents the total number of iterations;
in this embodiment, the situation that the fitting difference increases may be encountered in each iteration, and when the fitting difference increases, the network is automatically re-entered according to 1/2 of the original step length according to the current data to obtain a new predicted value until the fitting difference decreases and the next iteration calculation is performed.
Steps S06 to S07 were repeated until the maximum number of iterations was reached or the fitting difference fell to the preset value, completing the inversion. As shown in FIG. 6, the result after 28 inversion iterations of the above steps is compared with the result after 28 conventional inversion iterations with the same parameters. Both methods embody the smooth character of the Occam inversion model and are close to each other; both fit the theoretical model well, resolving the layer slightly shallower than the real model, but the difference is negligible, while the proposed algorithm is faster than the conventional inversion.
From the above case it can be seen that the Occam inversion Lagrange multiplier optimization method based on the deep learning weighted decision adopts simultaneous training of two neural networks, establishes random models to learn the characteristics of the data set and of the Lagrange multipliers respectively, and verifies them on test data: the loss function and the error function decrease markedly, and the test model is finally validated.
The above embodiments are only preferred embodiments of the present invention and do not limit its scope of protection; any modification made according to the principles of the present invention on the basis of the above embodiments without inventive effort shall fall within the protection scope of the present invention.

Claims (9)

1. An Occam inversion Lagrange multiplier optimization method based on a deep learning weighted decision, characterized by comprising the following steps:
S01, constructing a fully-connected neural network and a convolutional neural network;
S02, acquiring forward parameters; the forward parameters include k model vectors M_k, the frequency vector F, and the transmitter/receiver station data;
step S03, solving the forward data sets D_k0 corresponding one-to-one to the k model vectors M_k;
step S04, performing Occam inversion on the forward data sets D_k0, and recording the data of each iteration to obtain a data set T and a Lagrange multiplier vector L;
S05, training the convolutional neural network with the data set T as input and the corresponding Lagrange multiplier vector L as the label; meanwhile, training the fully-connected neural network with the Lagrange multiplier vector L as input and the optimal Lagrange multiplier L_N found in each iteration as the label;
s06, establishing a mapping function from the data set after the convolutional neural network training to a first predicted value; establishing a mapping function from the data set after the full-connection neural network training to a second predicted value, and updating an inversion target function U, wherein the expression is as follows:
Figure FDA0003993500320000011
wherein ,
Figure FDA0003993500320000012
representing a roughness matrix; m is a group of n Representing the nth model vector; g (T) n ) A mapping function representing a convolutional neural network; λ represents a dual network weighting factorA seed; h (L) n ) A mapping function representing a fully-connected neural network; w represents a data item weight coefficient matrix; d n Representing the nth analog data in the data set T; f (M) n ) Representing a forward function corresponding to the nth model;
s07, adjusting a double-network weight factor, and updating data and model parameters; the expression of the dual network weighting factor is:
λ j+1 =λ j -9.9N
wherein ,λj+1 A dual network weight factor representing the j +1 th iteration; lambda [ alpha ] j A dual network weight factor representing a jth iteration; n represents the total number of iterations;
and (4) repeating the step (S06) to the step (S07) until the maximum iteration number is less than or equal to the preset fitting difference, and finishing inversion.
2. The Occam inversion Lagrange multiplier optimization method based on the deep learning weighted decision according to claim 1, wherein in step S04, Gaussian white noise is superposed on the forward data set D_k0 to simulate measured data, yielding a noisy data set D_k, and the Occam inversion is performed on the noisy data set D_k.
3. The method according to claim 1, wherein the expression of the k model vectors M_k in the forward parameters is:

M_k = [m_1, m_2, m_3, ..., m_n]

wherein m_n represents the resistivity value of the nth formation layer;

and the expression of the frequency vector F is:

F = [f_1, f_2, f_3, ..., f_n]

wherein f_n represents the nth frequency value.
4. The Occam inversion Lagrange multiplier optimization method based on the deep learning weighted decision according to claim 1, 2 or 3, wherein in step S05, the loss function for training the convolutional neural network and the fully-connected neural network adopts the L2 norm, with the expression:

l(x, y) = L = \{l_1, l_2, \ldots, l_n\}^T, \quad l_n = (x_n - y_n)^2

wherein l(x, y) represents the average or sum of the element-wise losses l_n; l_n represents the square of the difference between the nth training result and the label; x_n represents the nth training result; and y_n represents the nth label value.
5. The Occam inversion Lagrange multiplier optimization method based on the deep learning weighted decision according to claim 4, wherein a target error function is established, expressed as:

\mathrm{Err} = \|\mathrm{oup} - \mathrm{targ}\|_2 \,/\, \|\mathrm{targ}\|_2

wherein oup represents the network output and targ represents the label value.
6. The Occam inversion Lagrange multiplier optimization method based on the deep learning weighted decision according to claim 1 or 2, wherein in step S04, the expression of the data set T is:

T = {T_1, T_2, ..., T_i}, i = N × k

wherein T_i represents the data of one Occam inversion iteration, and N represents the total number of iterations.
7. The Occam inversion Lagrange multiplier optimization method based on the deep learning weighted decision according to claim 6, wherein the expression of the Lagrange multiplier vector L is:

L = {L_1, L_2, ..., L_N}

wherein L_N represents the optimal Lagrange multiplier found by each iteration's search.
8. The Occam inversion Lagrange multiplier optimization method based on the deep learning weighted decision according to claim 7, wherein in step S04, the expression of the current inversion objective function of the Occam inversion is:

U = \|\partial M_n\|^2 + \mathrm{LM}^{-1} \|W(D_n - f(M_n))\|^2

where LM represents the multiplier value obtained by the conventional linear search method.
9. The Occam inversion Lagrange multiplier optimization method based on the deep learning weighted decision according to claim 1, wherein in step S06, the mapping function from the data set after convolutional neural network training to the first predicted value is LM1 = g(T), and the mapping function from the Lagrange multiplier vector after fully-connected neural network training to the second predicted value is LM2 = h(L).
CN202211597028.9A 2022-12-12 2022-12-12 Occam inversion Lagrange multiplier optimization method based on deep learning weighted decision Active CN115983105B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211597028.9A CN115983105B (en) 2022-12-12 2022-12-12 Occam inversion Lagrange multiplier optimization method based on deep learning weighted decision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211597028.9A CN115983105B (en) 2022-12-12 2022-12-12 Occam inversion Lagrange multiplier optimization method based on deep learning weighted decision

Publications (2)

Publication Number Publication Date
CN115983105A true CN115983105A (en) 2023-04-18
CN115983105B CN115983105B (en) 2023-10-03

Family

ID=85958716

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211597028.9A Active CN115983105B (en) 2022-12-12 2022-12-12 Occam inversion Lagrange multiplier optimization method based on deep learning weighted decision

Country Status (1)

Country Link
CN (1) CN115983105B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170075030A1 (en) * 2015-09-15 2017-03-16 Brent D. Wheelock Accelerated Occam Inversion Using Model Remapping and Jacobian Matrix Decomposition
CN105573963A (en) * 2016-01-05 2016-05-11 中国电子科技集团公司第二十二研究所 Reconstruction method for horizontal nonuniform structure of ionized layer
CN113486591A (en) * 2021-07-13 2021-10-08 吉林大学 Gravity multi-parameter data density weighted inversion method for convolutional neural network result
CN113933905A (en) * 2021-09-30 2022-01-14 中国矿业大学 Cone-shaped field source transient electromagnetic inversion method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
C. de Groot-Hedlin et al., "Inversion of magnetotelluric data for 2D structure with sharp resistivity contrasts", Geophysics, vol. 69, no. 1, pp. 78-86.
Chen Runzi et al., "Fast two-dimensional magnetotelluric Occam inversion based on the MATLAB language", Progress in Geophysics, vol. 33, no. 4, pp. 1461-1468.

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116976202A (en) * 2023-07-12 2023-10-31 清华大学 Fixed complex source item distribution inversion method and device based on deep neural network
CN116976202B (en) * 2023-07-12 2024-03-26 清华大学 Fixed complex source item distribution inversion method and device based on deep neural network
CN117828420A (en) * 2023-12-07 2024-04-05 湖南光华防务科技集团有限公司 Testability multi-target distribution method based on generated data
CN117828420B (en) * 2023-12-07 2024-05-31 湖南光华防务科技集团有限公司 Testability multi-target distribution method based on generated data

Also Published As

Publication number Publication date
CN115983105B (en) 2023-10-03

Similar Documents

Publication Publication Date Title
CN106815782A (en) A kind of real estate estimation method and system based on neutral net statistical models
CN110046710A (en) A kind of the nonlinear function Extremal optimization method and system of neural network
CN104636801A (en) Transmission line audible noise prediction method based on BP neural network optimization
CN104636985A (en) Method for predicting radio disturbance of electric transmission line by using improved BP (back propagation) neural network
CN115983105A (en) Occam inversion Lagrange multiplier optimization method based on deep learning weighting decision
CN112784140B (en) Search method of high-energy-efficiency neural network architecture
CN114723095A (en) Missing well logging curve prediction method and device
CN114004336A (en) Three-dimensional ray reconstruction method based on enhanced variational self-encoder
CN116151102A (en) Intelligent determination method for space target ultra-short arc initial orbit
CN114637881B (en) Image retrieval method based on multi-agent metric learning
CN113486591B (en) Gravity multi-parameter data density weighted inversion method for convolutional neural network result
CN113722980A (en) Ocean wave height prediction method, system, computer equipment, storage medium and terminal
CN111192158A (en) Transformer substation daily load curve similarity matching method based on deep learning
CN110441815B (en) Simulated annealing Rayleigh wave inversion method based on differential evolution and block coordinate descent
CN116822742A (en) Power load prediction method based on dynamic decomposition-reconstruction integrated processing
CN107994570A (en) A kind of method for estimating state and system based on neutral net
CN112380306A (en) Adaptive correction method for Kergin spatial interpolation considering distribution balance
CN116956744A (en) Multi-loop groove cable steady-state temperature rise prediction method based on improved particle swarm optimization
CN114282440B (en) Robust identification method for adjusting system of pumped storage unit
CN115567131A (en) 6G wireless channel characteristic extraction method based on dimensionality reduction complex convolution network
CN115512077A (en) Implicit three-dimensional scene characterization method based on multilayer dynamic characteristic point clouds
CN109754058A (en) A kind of depth datum approximating method based on CGBP algorithm
CN110501903B (en) Self-adjusting and optimizing method for parameters of robot inverse solution-free control system
CN113255887A (en) Radar error compensation method and system based on genetic algorithm optimization BP neural network
KR102110316B1 (en) Method and device for variational interference using neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant