CN115983105B - Occam inversion Lagrange multiplier optimization method based on deep learning weighted decision


Info

Publication number: CN115983105B (application CN202211597028.9A)
Authority: CN (China)
Prior art keywords: inversion, neural network, data set, Lagrangian multiplier
Prior art date: 2022-12-12
Legal status: Active (granted)
Application number: CN202211597028.9A
Other languages: Chinese (zh)
Other versions: CN115983105A (en)
Inventor
王绪本
杨锐
杨钰菡
王向鹏
袁崇鑫
Current Assignee: Chengdu University of Technology
Original Assignee: Chengdu University of Technology
Priority date: 2022-12-12
Filing date: 2022-12-12
Publication date: 2023-10-03
Application filed by Chengdu University of Technology on 2022-12-12, priority to CN202211597028.9A
Publication of CN115983105A: 2023-04-18
Application granted; publication of CN115983105B: 2023-10-03


Abstract

The application discloses an Occam inversion Lagrange multiplier optimization method based on deep learning weighted decision, comprising the following steps: constructing a fully connected neural network and a convolutional neural network; acquiring forward parameters; obtaining forward data sets in one-to-one correspondence with the k model vectors; performing Occam inversion on the forward data sets and recording the data of each iteration to obtain a data set T and a Lagrangian multiplier vector L; training the convolutional neural network and the fully connected neural network; establishing mapping functions from the data sets trained by the two networks and updating the inversion objective function; adjusting the dual-network weight factor and updating the data and model parameters; and finishing the inversion when the maximum number of iterations is reached or the fitting difference falls to or below a preset value. Through this scheme, the method has the advantages of simple logic, efficient inversion iteration, stability and reliability.

Description

Occam inversion Lagrange multiplier optimization method based on deep learning weighted decision
Technical Field
The application relates to the technical field of geophysics, in particular to an Occam inversion Lagrange multiplier optimization method based on deep learning weighted decision.
Background
The geophysical inversion method refers to mathematical-physical calculation from detection data to the distribution of underground physical parameters, and is one of the most central and difficult research directions in geophysics; the existence, uniqueness and stability of solutions in the inversion process are its most important problems. Traditional geophysical inversion fits data with linear or nonlinear methods to infer the underground physical structure. Regularized inversion has been widely applied over the past few decades; it improves the stability of the fit by adding model constraint terms to the objective function and using prior information.
Similar to regularized inversion, Occam inversion was proposed by Constable et al. in the 1980s on the basis of a model-roughness constraint. Compared with traditional regularized inversion, Occam inversion introduces a Lagrangian multiplier to constrain the smoothness of the model, so that the inversion iterates gradually on the premise that the model is always smooth, and each iteration takes the multiplier of the model with minimum roughness. The inversion result can therefore be guaranteed to be as smooth as possible, and the inversion process is quite stable and accords with the physical meaning of continuous stratum variation.
The choice of the Lagrangian multiplier (Lagrange Multiplier, LM) in Occam inversion is a critical core problem: in each iteration, the multiplier must both minimize the objective function value and keep the model roughness minimal. The prior art employs linear search algorithms such as the golden-section method and the bisection method, and nonlinear search algorithms such as the Monte Carlo algorithm and the enumeration method. Linear search is simple to compute and iterates quickly, but easily falls into a local minimum; nonlinear search can find the global minimum, but takes a long time and requires excessive computation. The traditional Occam algorithms for searching the Lagrangian multiplier thus all have certain defects, and cannot combine the advantages of reaching the global minimum and iterating quickly.
Therefore, it is highly desirable to provide a logically simple, efficient, stable and reliable Occam inversion Lagrangian multiplier optimization method based on deep learning weighted decision.
Disclosure of Invention
Aiming at the problems, the application aims to provide an Occam inversion Lagrange multiplier optimization method based on deep learning weighted decision, which adopts the following technical scheme:
the Occam inversion Lagrangian multiplier optimization method based on the deep learning weighted decision comprises the following steps:
step S01, constructing a fully-connected neural network and a convolutional neural network;
step S02, obtaining forward parameters; the forward parameters include k model vectors M_k, a frequency vector F and transmitter-receiver station data;
step S03, obtaining forward data sets D_ko in one-to-one correspondence with the k model vectors M_k;
step S04, performing Occam inversion on the forward data sets D_ko, and recording the data of each iteration to obtain a data set T and a Lagrangian multiplier vector L;
step S05, training the convolutional neural network with the data set T as input and the Lagrangian multiplier vector L corresponding to the data set T as labels; at the same time, training the fully connected neural network with the Lagrangian multiplier vector L as input and the optimal Lagrangian multiplier L_N found in each search as labels;
step S06, establishing a mapping function from the data set trained by the convolutional neural network to the first predicted value; establishing a mapping function from the data set trained by the fully connected neural network to the second predicted value; and updating the inversion objective function U, whose expression is as follows:
where ∂ represents the roughness matrix; M_n represents the nth model vector; g(T_n) represents the mapping function of the convolutional neural network; λ represents the dual-network weight factor; h(L_n) represents the mapping function of the fully connected neural network; W represents the data-item weight coefficient matrix; d_n represents the nth simulated data in the data set T; f(M_n) represents the forward function corresponding to the nth model;
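The expression itself is rendered only as an image in the source. Based on the symbol definitions just given and the standard Occam functional, a plausible reconstruction — the particular weighted combination of h(L_n) and g(T_n) is an assumption, not confirmed by the text — is:

$$U = \lVert \partial M_n \rVert^{2} + \mathrm{LM}_{\text{new}}^{-1}\,\bigl\lVert W\bigl(d_n - f(M_n)\bigr)\bigr\rVert^{2}, \qquad \mathrm{LM}_{\text{new}} = \frac{\lambda\, h(L_n) + g(T_n)}{1+\lambda}$$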
step S07, adjusting the dual-network weight factor, and updating the data and model parameters; the expression of the dual-network weight factor is as follows:
λ_(j+1) = λ_j − 9.9/N
where λ_(j+1) represents the dual-network weight factor of the (j+1)th iteration; λ_j represents the dual-network weight factor of the jth iteration; N represents the total number of iterations;
and repeating steps S06 to S07 until the maximum number of iterations is reached or the fitting difference is less than or equal to the preset fitting difference, finishing the inversion. A sketch of this iteration loop is given below.
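As an illustration of how steps S06–S07 fit together, here is a minimal Python sketch of the iteration loop. Only the update rule λ_(j+1) = λ_j − 9.9/N is taken from the patent; the helper callables (predict_cnn, predict_fcnn, occam_step) and the (λ·LM2 + LM1)/(1 + λ) combination are illustrative assumptions.

```python
# Hypothetical sketch of the dual-network weighted multiplier decision (steps S06-S07).
# Only the lambda update rule comes from the patent text; everything else is assumed.

def run_weighted_occam(predict_cnn, predict_fcnn, occam_step, data, model,
                       n_iters=50, lam=10.0, target_misfit=1.0):
    lm_history = []
    for j in range(n_iters):
        lm1 = predict_cnn(data)            # first predicted value LM1 = g(T)
        lm2 = predict_fcnn(lm_history)     # second predicted value LM2 = h(L)
        # Weighted decision: large lambda (early iterations) leans on the FCNN
        # multiplier sequence, lambda < 1 (late iterations) on the CNN data term.
        lm = (lam * lm2 + lm1) / (1.0 + lam)
        model, data, misfit = occam_step(model, data, lm)   # one Occam update
        lm_history.append(lm)
        lam -= 9.9 / n_iters               # lambda_{j+1} = lambda_j - 9.9/N
        if misfit <= target_misfit:        # preset fitting difference reached
            break
    return model
```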
Further, in the step S04, Gaussian white noise is superimposed on the forward data set D_ko to simulate measured data, obtaining a noisy data set D_k; Occam inversion is then performed on the noisy data set D_k.
Preferably, the expression of the k model vectors M_k in the forward parameters is:
M_k = [m_1, m_2, m_3, ..., m_n]
where m_n represents the resistivity value of the nth formation layer;
the expression of the frequency vector F is:
F = [f_1, f_2, f_3, ..., f_n]
where f_n represents the nth frequency value.
Further, in the step S05, the loss function for training the convolutional neural network and the fully connected neural network adopts the L2 norm, with the expression:
l(x, y) = L = {l_1, l_2, ..., l_n}^T
l_n = (x_n − y_n)²
where l(x, y) denotes taking the average or the sum of the vector L; l_n represents the square of the difference between the nth training result and its label; x_n represents the nth training result; y_n represents the nth label value.
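In PyTorch terms (framework choice assumed; the patent does not name one), this L2-norm loss with a mean or sum reduction is the standard MSE loss. A minimal sketch:

```python
import torch
import torch.nn as nn

# L2-norm training loss: l_n = (x_n - y_n)^2, reduced by mean or sum over the vector.
loss_mean = nn.MSELoss(reduction="mean")  # average of the element-wise squares
loss_sum = nn.MSELoss(reduction="sum")    # sum of the element-wise squares

x = torch.tensor([1.2, 0.8, 2.0])  # training results x_n
y = torch.tensor([1.0, 1.0, 2.5])  # label values y_n
print(loss_mean(x, y).item(), loss_sum(x, y).item())
```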
Further, an objective error function is established, which is expressed as:
where oup represents the network output and targ represents the label value.
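The formula is likewise missing from this text-only rendering; a common relative-error form consistent with these variable names — offered purely as an assumption — is:

$$\mathrm{Err} = \frac{\lvert \mathrm{oup} - \mathrm{targ} \rvert}{\mathrm{targ}}$$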
Further, in the step S04, the expression of the data set T is:
T = {T_1, T_2, ..., T_i}
i = N × k
where T_i represents the data of the Occam inversion; N represents the total number of iterations.
Further, the expression of the Lagrangian multiplier vector L is:
L = {L_1, L_2, ..., L_N}
where L_N represents the optimal Lagrangian multiplier found in each iterative search.
Preferably, in the step S04, the expression of the current inversion objective function of the Occam inversion is:
wherein LM represents the multiplier value obtained by the conventional linear search method.
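The expression is rendered as an image in the source; as a hedged reconstruction, the standard Occam functional (Constable et al.), which the surrounding symbol definitions match, reads:

$$U = \lVert \partial M_n \rVert^{2} + \mathrm{LM}^{-1}\,\bigl\lVert W\bigl(d_n - f(M_n)\bigr)\bigr\rVert^{2}$$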
Further, in the step S06, the mapping function from the data set trained by the convolutional neural network to the first predicted value is LM1 = g(T); the mapping function from the data set trained by the fully connected neural network to the second predicted value is LM2 = h(L).
Compared with the prior art, the application has the following beneficial effects:
(1) The application skillfully performs CNN training on the field-value data set and FCNN training on the Lagrangian multipliers. Unlike the traditional Occam search algorithms for the Lagrangian multiplier, the predicted values from the dual-network training are combined through a weight factor into a new Lagrangian multiplier prediction; since the network training is completed in advance, the multiplier search time is greatly reduced and the inversion efficiency is improved. Compared with the traditional linear search algorithm, the comprehensive data set and the true inversion multiplier sequence are input during training, so the prediction result conforms better to the global optimal solution, improving inversion precision and stability.
(2) Gaussian white noise is skillfully added to simulate measured data, so that the characteristics of measured data are reproduced in training and the generalization capability of the deep learning network is improved.
In conclusion, the method has the advantages of simple logic, high efficiency, stability, reliability and the like in inversion iteration, and has high practical value and popularization value in the technical field of geophysics.
Drawings
For a clearer description of the technical solutions of the embodiments of the present application, the drawings to be used in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and should not be considered as limiting the scope of protection, and other related drawings may be obtained according to these drawings without the need of inventive effort for a person skilled in the art.
FIG. 1 is a logic flow diagram of the present application.
FIG. 2 is a graph of objective function versus Lagrangian multiplier for one example data inversion in the present application.
Fig. 3 is a schematic diagram of a deep learning training network structure according to the present application.
Fig. 4 is a diagram showing parameters and characteristics of each layer of CNN network according to the present application.
Fig. 5 is a graph showing the change of the loss function and the error function in the training process according to the present application.
FIG. 6 is a graph comparing the effects of the optimized inversion of the present application with conventional inversion.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the present application will be further described with reference to the accompanying drawings and examples, which include, but are not limited to, the following examples. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In this embodiment, the term "and/or" is merely an association relationship describing the association object, and indicates that three relationships may exist, for example, a and/or B may indicate: a exists alone, A and B exist together, and B exists alone.
The terms first and second and the like in the description and in the claims of the present embodiment are used for distinguishing between different objects and not for describing a particular sequential order of objects. For example, the first target object and the second target object, etc., are used to distinguish between different target objects, and are not used to describe a particular order of target objects.
In embodiments of the application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g." in an embodiment should not be taken as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, unless otherwise indicated, the meaning of "a plurality" means two or more. For example, the plurality of processing units refers to two or more processing units; the plurality of systems means two or more systems.
As shown in figs. 1 to 6, the present embodiment provides an Occam inversion Lagrangian multiplier optimization method based on deep learning weighted decision, in which the decision is made jointly by two neural networks: a first predicted value LM1 from convolutional neural network deep learning of the iteration field-value data, and a second predicted value LM2 from the fully connected neural network based on the LM iteration sequence. The specific steps are as follows:
First, 50 model vectors M_k are established (namely, 50 groups of different resistivity models): high-resistance layers of 10 Ω·m, 50 Ω·m and 100 Ω·m are randomly combined; the background resistivity is 1 Ω·m; the main frequencies are 0.5 Hz, 1.5 Hz, 8 Hz, etc.; the layer thickness and depth are fixed at 100 m and 1000 m; and the transmitter-receiver offsets are 500 m, 700 m, ..., 4500 m for 21 stations.
Step two: forward modeling is performed on the models to obtain the forward data set D_ko, and inversion is performed after adding 5% Gaussian white noise. The objective function and multiplier distribution during one iteration's search for the Lagrangian multiplier are shown in FIG. 2. The number of iterations is set to 50, yielding 2500 sets of training data T = {T_1, T_2, ..., T_i}, where each T_i is 50×50, together with the 2500 corresponding Lagrangian multiplier values L = {L_1, L_2, ..., L_2500}. The current inversion objective function is:
wherein LM represents the multiplier value obtained by the conventional linear search method.
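A minimal NumPy sketch of this data-preparation step, assuming multiplicative 5% Gaussian white noise and a 50×50 per-sample layout (both assumptions; the patent does not fix them):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(d_forward, level=0.05):
    """Superimpose 5% Gaussian white noise on forward data to mimic measured data."""
    return d_forward * (1.0 + level * rng.standard_normal(d_forward.shape))

# 50 models x 50 Occam iterations -> 2500 training samples T_i with one
# Lagrangian-multiplier label each (shapes assumed for illustration).
k, n_iters = 50, 50
T = np.zeros((k * n_iters, 50, 50))   # iteration field-value data
L = np.zeros(k * n_iters)             # optimal multiplier per iteration
```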
Third, the network models are constructed. The convolutional neural network (CNN) takes the data set T as input and the corresponding Lagrangian multiplier vector L as labels, and comprises 3 convolutional layers, 2 pooling layers and 2 fully connected layers. Meanwhile, the fully connected neural network (FCNN) takes the Lagrangian multiplier vector L as input and L_N as labels, with 2 fully connected layers in total. A schematic diagram of the dual-network model is shown in fig. 3, and the parameters and partial-data feature maps of each CNN layer are shown in fig. 4. A sketch of such a structure is given below.
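The following PyTorch sketch matches the stated layer counts; the channel widths, kernel sizes and the 50×50 input layout are assumptions, since FIGS. 3–4 carry those details:

```python
import torch
import torch.nn as nn

class MultiplierCNN(nn.Module):
    """CNN: data set T (assumed 1x50x50 per sample) -> multiplier prediction g(T)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),   # conv layer 1
            nn.MaxPool2d(2),                             # pooling layer 1
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),  # conv layer 2
            nn.MaxPool2d(2),                             # pooling layer 2
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),  # conv layer 3
        )
        self.head = nn.Sequential(                       # 2 fully connected layers
            nn.Flatten(),
            nn.Linear(64 * 12 * 12, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, x):
        return self.head(self.features(x))

class MultiplierFCNN(nn.Module):
    """FCNN: multiplier sequence L (assumed length 50) -> next optimal multiplier h(L)."""
    def __init__(self, seq_len=50):
        super().__init__()
        self.net = nn.Sequential(                        # 2 fully connected layers
            nn.Linear(seq_len, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x)
```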
Fourth, the training loss function adopts L2 norm, and the expression is:
l(x, y) = L = {l_1, l_2, ..., l_n}^T
l_n = (x_n − y_n)²
where l(x, y) denotes taking the average or the sum of the vector L; l_n represents the square of the difference between the nth training result and its label; x_n represents the nth training result; y_n represents the nth label value.
Here, an objective error function is established, which is expressed as:
where oup represents the network output and targ represents the label value.
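A compact training-loop sketch for either network under the above loss (the optimizer choice and hyper-parameters are assumptions; the patent only fixes the L2 loss and the error monitor):

```python
import torch

def train(net, loader, epochs=100, lr=1e-3):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    mse = torch.nn.MSELoss()
    for epoch in range(epochs):
        for inputs, labels in loader:      # (T_i, L_i) or (L, L_N) pairs
            opt.zero_grad()
            oup = net(inputs).squeeze(-1)
            loss = mse(oup, labels)        # L2-norm training loss
            loss.backward()
            opt.step()
        # per-epoch monitoring of the loss and the error function (cf. FIG. 5)
    return net
```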
In this embodiment, the change process of the loss function and the error function in the training iteration process is shown in fig. 5, and it can be seen from the graph that the decreasing trend of both functions in the training process is obvious.
Fifth, after training, a test model is inverted. The test model is a 100 Ω·m high-resistance layer with a top burial depth of 1700 m and a thickness of 100 m. The new inversion objective function U is expressed as follows:
where ∂ represents the roughness matrix; M_n represents the nth model vector; g(T_n) represents the mapping function of the convolutional neural network; λ represents the dual-network weight factor; h(L_n) represents the mapping function of the fully connected neural network; W represents the data-item weight coefficient matrix; d_n represents the nth simulated data in the data set T; f(M_n) represents the forward function corresponding to the nth model;
in this embodiment, the initial value of the dual-network weighting factor λ is 10, the early stage of iteration is mainly self-training of LM, λ takes 10 and gradually starts decreasing, the later stage is mainly fitting a data training model, and the value of λ is smaller than 1.
Sixth, in the iterative inversion the dual-network weight factor is adjusted, and the data and model parameters are updated; the expression of the dual-network weight factor is as follows:
λ_(j+1) = λ_j − 9.9/N
where λ_(j+1) represents the dual-network weight factor of the (j+1)th iteration; λ_j represents the dual-network weight factor of the jth iteration; N represents the total number of iterations.
in this embodiment, in each iteration, the situation that the fitting difference increases is sometimes encountered, and when the fitting difference increases, the new predicted value is obtained by automatically re-entering the network according to the current data by 1/2 of the original step length until the fitting difference decreases and then enters the next iteration calculation.
Steps S06 to S07 are repeated until the maximum number of iterations is reached or the fitting difference is less than or equal to the preset fitting difference, finishing the inversion. As shown in fig. 6, the result obtained after 28 inversion iterations of this method is compared with that of a traditional inversion with the same parameters after 28 iterations. Both methods show the smooth model characteristic of Occam inversion and are quite close to each other; both fit the theoretical model well, appearing slightly shallower than the real model, with a small and negligible difference; and the algorithm of the application is faster than the traditional inversion.
In summary, the method adopts simultaneous training of dual neural networks, establishing random models to learn the characteristics of the data set and of the Lagrangian multipliers respectively. Verification on test data shows that the loss function and the error function decrease markedly, and the inversion of the test model is finally verified.
The above embodiments are only preferred embodiments of the present application and are not intended to limit the scope of the present application, but all changes made by adopting the design principle of the present application and performing non-creative work on the basis thereof shall fall within the scope of the present application.

Claims (8)

1. The Occam inversion Lagrange multiplier optimization method based on the deep learning weighted decision is characterized by comprising the following steps of:
step S01, constructing a fully-connected neural network and a convolutional neural network;
step S02, obtaining forward parameters; the forward parameters include k model vectors M_k, a frequency vector F and transmitter-receiver station data; the expression of the k model vectors M_k is:
M_k = [m_1, m_2, m_3, ..., m_n]
where m_n represents the resistivity value of the nth formation layer;
the expression of the frequency vector F is:
F = [f_1, f_2, f_3, ..., f_n]
where f_n represents the nth frequency value;
step S03, obtaining k model vectors M k One-to-one forward data set D ko
Step S04, for forward data set D ko Performing Occam inversion, and recording data in any iteration to obtain a data set T and a Lagrangian multiplier vector L;
step S05, training the convolutional neural network by taking the data set T as input and Lagrangian multiplier vector L corresponding to the data set T as a label; at the same time, taking Lagrangian multiplier vector L as input and searching for optimal Lagrangian multiplier L each time N Training the fully connected neural network for the label;
step S06, establishing a mapping function from the data set trained by the convolutional neural network to the first predicted value; establishing a mapping function from the data set trained by the fully-connected neural network to the second predicted value, and updating an inversion objective function U, wherein the expression is as follows:
where ∂ represents the roughness matrix; M_n represents the nth model vector; g(T_n) represents the mapping function of the convolutional neural network; λ represents the dual-network weight factor; h(L_n) represents the mapping function of the fully connected neural network; W represents the data-item weight coefficient matrix; d_n represents the nth simulated data in the data set T; f(M_n) represents the forward function corresponding to the nth model;
step S07, adjusting the dual-network weight factor, and updating the data and model parameters; the expression of the dual-network weight factor is as follows:
λ_(j+1) = λ_j − 9.9/N
where λ_(j+1) represents the dual-network weight factor of the (j+1)th iteration; λ_j represents the dual-network weight factor of the jth iteration; N represents the total number of iterations;
and repeating steps S06 to S07 until the maximum number of iterations is reached or the fitting difference is less than or equal to the preset fitting difference, finishing the inversion.
2. The method for optimizing the Occam inversion Lagrangian multiplier based on the deep learning weighted decision as claimed in claim 1, wherein in the step S04, Gaussian white noise is superimposed on the forward data set D_ko to simulate measured data, obtaining a noisy data set D_k; Occam inversion is performed on the noisy data set D_k.
3. The method for optimizing the Occam inversion Lagrangian multiplier based on the deep learning weighted decision according to claim 1 or 2, wherein in the step S05, the loss function for training the convolutional neural network and the fully connected neural network adopts the L2 norm, with the expression:
l(x, y) = L = {l_1, l_2, ..., l_n}^T
l_n = (x_n − y_n)²
where l(x, y) denotes taking the average or the sum of the vector L; l_n represents the square of the difference between the nth training result and its label; x_n represents the nth training result; y_n represents the nth label value.
4. The method of optimizing the Occam inversion lagrangian multiplier based on deep learning weighted decisions of claim 3, wherein a target error function is established, expressed as:
where oup represents the network output and targ represents the label value.
5. The method of optimizing the Occam inversion Lagrangian multiplier based on deep learning weighted decisions according to claim 1 or 2, wherein in step S04, the expression of the data set T is:
T = {T_1, T_2, ..., T_i}
i = N × k
where T_i represents the data of the Occam inversion; N represents the total number of iterations.
6. The method for optimizing the Occam inversion Lagrangian multiplier based on deep learning weighted decisions as claimed in claim 5, wherein the expression of the Lagrangian multiplier vector L is:
L = {L_1, L_2, ..., L_N}
where L_N represents the optimal Lagrangian multiplier found in each iterative search.
7. The method for optimizing the Occam inversion Lagrangian multiplier based on the deep learning weighted decision according to claim 6, wherein in the step S04, the expression of the current inversion objective function of the Occam inversion is:
wherein LM represents the multiplier value obtained by the conventional linear search method.
8. The method for optimizing the Occam inversion Lagrangian multiplier based on the deep learning weighted decision according to claim 1, wherein in the step S06, the mapping function from the data set trained by the convolutional neural network to the first predicted value is LM1 = g(T); and the mapping function from the data set trained by the fully connected neural network to the second predicted value is LM2 = h(L).
CN202211597028.9A — Priority date: 2022-12-12 — Filing date: 2022-12-12 — Title: Occam inversion Lagrange multiplier optimization method based on deep learning weighted decision — Status: Active — Granted publication: CN115983105B (en)

Priority Applications (1)

Application Number: CN202211597028.9A — Priority date: 2022-12-12 — Filing date: 2022-12-12 — Title: Occam inversion Lagrange multiplier optimization method based on deep learning weighted decision — Granted as: CN115983105B (en)


Publications (2)

Publication Number — Publication Date
CN115983105A (en) — 2023-04-18
CN115983105B (en) — 2023-10-03

Family

ID=85958716

Family Applications (1)

Application Number: CN202211597028.9A — Title: Occam inversion Lagrange multiplier optimization method based on deep learning weighted decision — Priority date: 2022-12-12 — Filing date: 2022-12-12 — Status: Active, granted as CN115983105B (en)

Country Status (1)

Country: CN — Publication: CN115983105B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
CN116976202B * — Priority date: 2023-07-12 — Publication date: 2024-03-26 — Assignee: Tsinghua University (清华大学) — Title: Fixed complex source item distribution inversion method and device based on deep neural network

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105573963A (en) * 2016-01-05 2016-05-11 中国电子科技集团公司第二十二研究所 Reconstruction method for horizontal nonuniform structure of ionized layer
CN113486591A (en) * 2021-07-13 2021-10-08 吉林大学 Gravity multi-parameter data density weighted inversion method for convolutional neural network result
CN113933905A (en) * 2021-09-30 2022-01-14 中国矿业大学 Cone-shaped field source transient electromagnetic inversion method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017048445A1 (en) * 2015-09-15 2017-03-23 Exxonmobil Upstream Research Company Accelerated occam inversion using model remapping and jacobian matrix decomposition


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
C. de Groot-Hedlin et al., "Inversion of magnetotelluric data for 2D structure with sharp resistivity contrasts", Geophysics, 2004, 69(1): 78-86. *
陈润滋 et al., "基于MATLAB语言的二维大地电磁OCCAM快速反演" [Fast 2D magnetotelluric OCCAM inversion based on the MATLAB language], 地球物理学进展 [Progress in Geophysics], 2018, 33(4): 1461-1468. *

Also Published As

Publication number Publication date
CN115983105A (en) 2023-04-18

Similar Documents

Publication Publication Date Title
Tong et al. Polynomial fitting algorithm based on neural network
TWI794157B (en) Automatic multi-threshold feature filtering method and device
CN106815782A (en) A kind of real estate estimation method and system based on neutral net statistical models
CN110941734B (en) Depth unsupervised image retrieval method based on sparse graph structure
CN115983105B (en) Occam inversion Lagrange multiplier optimization method based on deep learning weighted decision
CN110059616A (en) Pedestrian's weight identification model optimization method based on fusion loss function
CN112784140B (en) Search method of high-energy-efficiency neural network architecture
CN110083125A (en) A kind of machine tool thermal error modeling method based on deep learning
CN111488498A (en) Node-graph cross-layer graph matching method and system based on graph neural network
CN114004336A (en) Three-dimensional ray reconstruction method based on enhanced variational self-encoder
CN113947133A (en) Task importance perception element learning method for small sample image recognition
CN115841076A (en) Shallow sea layered seabed ground sound parameter inversion method based on BP neural network model
CN113486591B (en) Gravity multi-parameter data density weighted inversion method for convolutional neural network result
CN114637881A (en) Image retrieval method based on multi-agent metric learning
CN116151102A (en) Intelligent determination method for space target ultra-short arc initial orbit
CN109492816B (en) Coal and gas outburst dynamic prediction method based on hybrid intelligence
CN111192158A (en) Transformer substation daily load curve similarity matching method based on deep learning
CN116956744A (en) Multi-loop groove cable steady-state temperature rise prediction method based on improved particle swarm optimization
CN115797309A (en) Surface defect segmentation method based on two-stage incremental learning
CN109323677A (en) Improve the Circularity error evaluation algorithm of cuckoo searching algorithm
CN115567131A (en) 6G wireless channel characteristic extraction method based on dimensionality reduction complex convolution network
CN111211559B (en) Power grid impedance estimation method based on dynamic step length firefly algorithm
CN112016956B (en) Ore grade estimation method and device based on BP neural network
CN114185039A (en) Radar target one-dimensional range profile intelligent identification method
Yichang The application of immune genetic algorithm in BP neural network

Legal Events

PB01 — Publication
SE01 — Entry into force of request for substantive examination
GR01 — Patent grant