CN114334041A - Transformer-based electromagnetic metamaterial complex spectrum high-precision prediction method - Google Patents

Transformer-based electromagnetic metamaterial complex spectrum high-precision prediction method

Info

Publication number
CN114334041A
Authority
CN
China
Prior art keywords
layer
neural network
transformer
network model
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111662213.7A
Other languages
Chinese (zh)
Inventor
朱锦锋
熊健凯
高源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen University
Original Assignee
Xiamen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen University filed Critical Xiamen University
Priority to CN202111662213.7A
Publication of CN114334041A


Abstract

The invention discloses a Transformer-based high-precision prediction method for the complex spectra of electromagnetic metamaterials. A Transformer neural network model is constructed, sample data are obtained by randomly combining structural parameters within a given range of the electromagnetic metamaterial physical model and computing their responses with the rigorous coupled-wave analysis (RCWA) method, and the sample data are divided into a training set and a verification set at a ratio of 4:1 for training and verifying the neural network model: the training set is input into the Transformer neural network model for training, and the verification set is used to verify the performance of the neural network model. The method can accurately and quickly predict the optical response from the input structural parameters of the electromagnetic metamaterial, overcomes the drawback that traditional numerical simulation methods are complex and time-consuming in solving Maxwell's equations, enables accurate real-time spectrum prediction, and reduces time and hardware costs. The design cycle of electromagnetic metamaterials can be greatly shortened, and the method is easy to extend to other electromagnetic metamaterial models.

Description

Transformer-based electromagnetic metamaterial complex spectrum high-precision prediction method
Technical Field
The invention relates to the technical field of electromagnetic metamaterials and artificial intelligence, in particular to a Transformer-based electromagnetic metamaterial complex spectrum high-precision prediction method.
Background
An electromagnetic metamaterial is an artificial composite material with a microstructure; it possesses extraordinary physical properties not found in natural materials and can flexibly and effectively regulate the phase, amplitude and polarization of electromagnetic waves at the nanoscale. Devices based on electromagnetic metamaterials have the advantages of small size, high sensitivity and high flexibility, and are widely applied in sensing, detection, energy storage, thermal radiation and other fields.
The size of an electromagnetic metamaterial device is at the nanometer scale, its fabrication depends on precise processing techniques, and the optical response must be accurately simulated before processing in order to optimize the device structure. The finite-difference time-domain (FDTD) method and the finite element method (FEM) are two general numerical methods for simulating the optical response of electromagnetic metamaterials. These methods tend to require large computation time and hardware costs, and as the structural complexity of the device increases, the computation becomes increasingly difficult.
With the development of machine learning, deep learning has shown its powerful capability in fields such as speech recognition, image recognition and natural language processing. Deep learning is a data-driven algorithm: through training on a large amount of data, it can represent complex nonlinear relations between data. Compared with numerical calculation methods, it can give a result within milliseconds, and its computation time and hardware costs do not grow with the structural complexity of the device.
Deep-learning-assisted electromagnetic metamaterial design has attracted considerable attention from researchers and has been successfully applied to many fields of electromagnetism. At present, electromagnetic metamaterial design schemes based on deep learning mainly comprise two types of networks: 1. forward networks, which predict the optical response from the structural parameters of the electromagnetic metamaterial; 2. inverse networks, which predict the structural parameters of the electromagnetic metamaterial from a target optical response. The forward network can replace numerical calculation methods to predict the optical response efficiently and quickly, and it is important for training the inverse network; for example, a trained forward network assists the training of the inverse network to resolve the non-unique mapping between spectra and structural parameters. Deep learning models commonly used in forward networks currently include multilayer perceptrons (MLPs), convolutional neural networks (CNNs), autoencoders (AEs) and the like. These models typically achieve good prediction accuracy when the training sample size is sufficient or the spectral complexity is low.
In particular, collecting electromagnetic metamaterial training data inevitably requires numerical calculation methods, so the available data volume is limited; the neural network model must therefore learn as much as possible from a small sample size and make full use of the existing training set. For complex spectral curves, traditional neural network models often have large prediction errors at positions where the spectrum changes sharply, and these positions often contain important physical information, so improving the forward network is crucial to the prediction accuracy of complex spectra.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a Transformer-based high-precision prediction method for the complex spectra of electromagnetic metamaterials, with the aim of solving the problem that the prior art does not achieve high prediction consistency for complex spectra.
The invention provides the following technical scheme: a Transformer-based electromagnetic metamaterial complex spectrum high-precision prediction method, in which a Transformer neural network model is constructed, sample data are obtained by randomly combining structural parameters within a given range of the electromagnetic metamaterial physical model and computing them with the rigorous coupled-wave analysis (RCWA) method, and the sample data are divided into a training set and a verification set at a ratio of 4:1 for training and verifying the neural network model: the training set is input into the Transformer neural network model for training, the verification set verifies the performance of the neural network model, and the trained Transformer neural network model predicts the optical response corresponding to the structural parameters of the electromagnetic metamaterial.
Preferably, the rigorous coupled-wave analysis (RCWA) method is used to randomly combine and compute parameters within a given range of the layered metal/dielectric structure to obtain the corresponding reflectivity, which is used to train and verify the Transformer neural network model so that it learns the complex nonlinear mapping between the layered metal/dielectric structural parameters and the reflectivity.
Preferably, the layered metal/dielectric structure is composed of a plurality of vertically and periodically alternating metal/dielectric layered nanostructure units; the geometric shape is any one of a rectangle, an ellipse or a triangle; the widths of the left and right pillars in each unit are w1 and w2, respectively; the thickness of the substrate metal layer is h; the metal is one or more of aluminum oxide, silicon dioxide, magnesium fluoride, germanium, gold, silver, aluminum or titanium; the dielectric layer material is any one of silicon dioxide, silicon monoxide, magnesium fluoride or aluminum oxide; and the structure can realize rich filter-circuit functions in the near-infrared band.
Preferably, the Transformer neural network model comprises, in sequence, an input layer, a position encoding layer, 8 serially connected Transformer encoders, a nonlinear layer and an output layer;
the input layer performs a matrix dimension change: the 10 input structural parameters pass through 10 fully connected layers of length 512, respectively, and the matrix dimension of the input data is changed from 64×10 to 64×10×512;
the position encoding layer is composed of a learnable matrix of dimension 1×10×512, which is added to each input matrix to attach position information to the input data;
the Transformer encoder comprises a multi-head attention module, a residual connection layer, a normalization layer, a Dropout layer and a feed-forward layer;
the output expression of the multi-head attention mechanism module is as follows:
Figure BDA0003450316240000041
wherein Q, K, V are three learnable matrices Wi q、Wi k、Wi vMultiplied by the outputs of the position-coding layers, respectively, to obtain Wq、Wk、WvThe identification symbols are all integrated, i represents the number one in the multi-head attention mechanism; dkAn identifier representing an ensemble, representing a scaling factor; q, k, v are not a single parameter, respectively with Wq、Wk、WvForming symbols;
the nonlinear layer is constructed from a fully connected layer of length 1024 and a ReLU activation function;
the output layer is constructed from a fully connected layer of length 100, which changes the output data into a matrix whose size matches the number of spectral sampling points. The attention computation above is illustrated by the hedged sketch below.
The specific training method comprises the following steps:
initialize the parameters: the batch size is set to 64, the total number of training epochs is set to 600, the learning rate is set to 0.0001 and is decreased by 80% every 30 epochs, Adam is used as the gradient-descent optimizer, the mean square error is used as the loss function, and ReLU is used as the activation function of the whole neural network model;
in the input layer, the input 64×10 two-dimensional matrix is expanded into a 64×10×1 three-dimensional matrix, passed through 10 fully connected layers of size 1×512 to obtain a 64×10×512 three-dimensional matrix, and then sent to the position encoding layer;
the position encoding layer is constructed from a three-dimensional learnable matrix of size 1×10×512, which is added to the input data to obtain the position encoding layer output;
the position encoding layer output is sent to the Transformer encoders; after passing through the N serially connected Transformer encoders, it passes through a nonlinear unit consisting of a fully connected layer of size 1024 and a ReLU activation function, and the final result is sent to the output layer;
a fully connected layer with 100 output units is used as the output layer; the mean square error is used as the loss function, and the gradient is back-propagated to each neuron via the chain rule to optimize its parameters.
During testing, the verification set, which does not appear during training, is fed into the deep learning model T, the error between the predicted spectrum and the actual spectrum is calculated with the mean square error, and the prediction capability of the model is verified;
the above steps are repeated until the number of epochs reaches about 600 and the verification-set loss no longer decreases, at which point training is finished. A sketch of this training configuration follows.
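The snippet below is a minimal sketch, in PyTorch, of the initialization described above (batch size 64, 600 epochs, Adam with an initial learning rate of 0.0001 decreased by 80% every 30 epochs, mean-square-error loss). The variable `model` stands for the Transformer network described in this document; none of this is code published with the patent.

```python
import torch

batch_size = 64      # samples per training batch
num_epochs = 600     # total number of training epochs

# Adam optimizer with an initial learning rate of 1e-4; an 80% decrease every
# 30 epochs corresponds to multiplying the learning rate by 0.2 at each step.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.2)
criterion = torch.nn.MSELoss()   # mean square error as the loss function
```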
the invention has the beneficial effects that: the multi-head self-attention mechanism in the Transformer can fully mine the characteristic relation existing between the input structure parameters, and improves the prediction precision of the multi-head self-attention mechanism on the complex spectrum based on the extracted rich characteristic information. For a well-trained deep learning model, the time taken to predict a spectrum is only in milliseconds. The defect that the traditional numerical simulation method is complex and time-consuming in solving Maxwell equations is overcome, and time cost and hardware cost are greatly reduced. Compared with a common deep learning model, the method has the advantages that the accuracy of spectrum prediction is improved by one order of magnitude, and the method can still accurately predict the part with stronger spectrum fluctuation change which is usually difficult to predict. The design requirement of the electromagnetic metamaterial for accuracy and rapidness is met.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic structural diagram of the Transformer-based neural network model according to an embodiment of the present invention, where N = 8;
FIG. 2 is a schematic cross-sectional view of a hyperbolic plasmon metamaterial structure provided by an embodiment of the invention;
FIG. 3 is a schematic diagram of a training sample provided in an embodiment of the present invention, where the corresponding structural parameters are shown above the spectral curve; 10 structural parameters are used for training and the spectral curve has 100 sampling points;
FIG. 4 is a comparison of the spectral prediction results for the hyperbolic plasmonic metamaterial used in embodiments of the present invention, obtained with the Transformer neural network model and with a multilayer perceptron neural network model.
Detailed Description
In order to make the technical means, creative features, objectives and effects of the invention easy to understand, the invention is further described below with reference to specific embodiments.
As shown in FIGS. 1-4, the Transformer-based high-precision prediction method for electromagnetic metamaterial complex spectra, on the one hand, overcomes the complexity and time cost of traditional numerical simulation of Maxwell's equations and greatly reduces hardware and time costs; on the other hand, compared with a traditional multilayer perceptron, the deep learning model used by the method further improves the prediction accuracy for complex spectra under comparable training data and network model size, reduces the prediction error by an order of magnitude, and can be used to accurately and rapidly simulate the optical response of electromagnetic metamaterials.
The method comprises the following specific steps:
step one, data collection and pretreatment
FIG. 2 shows a schematic diagram of the hyperbolic electromagnetic metamaterial unit structure used in the embodiment of the present invention. Hyperbolic metamaterials are a very important member of the metamaterial family: they are plasmonic metamaterials with layered metal/dielectric hybrid nanostructures, widely applied in sensing, detection, energy storage and thermal radiation, and, based on equivalent circuit theory, they can produce rich circuit-analogue functions in the near-infrared band. The electromagnetic metamaterial unit structure used in the embodiment of the present invention is composed of 10 vertically and periodically alternating metal/dielectric layered nanostructures. During data collection, structural parameters are randomly generated within limited ranges, and the rigorous coupled-wave analysis (RCWA) method is adopted to model the physical model and calculate the reflectivity for wavelengths from 1000 nm to 2500 nm. Unlike the use of commercial simulation software, in order to improve collection efficiency and maximize computer utilization, the parallel computing functionality of MATLAB is used to run the RCWA calculation codes of the generated samples in parallel, and thirty thousand groups of data are obtained for training and verifying the neural network. Each group of data includes 10 structural parameters (t1 to t10), as shown in FIG. 3, and a reflectivity spectrum of 100 discrete points. Unlike a single absorption peak, the electromagnetic metamaterial used in the present invention may have several absorption peaks, so the reflectivity spectrum has higher complexity. The thirty thousand collected samples are randomly divided at a ratio of 4:1 into 24,000 samples used as the training set and 6,000 used as the verification set; the training set is used to train the neural network model, and the verification set, which does not appear in the training set, is used to verify the prediction capability of the model and prevent overfitting. Each of t1 to t10 is normalized according to the previously defined structural parameter ranges, and each sample is converted into the matrix format required by the neural network input.
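The following is a minimal sketch, in Python/NumPy, of the splitting and normalization described in step one. The parameter ranges and the randomly generated placeholder arrays are illustrative assumptions standing in for the RCWA-computed dataset, which is not reproduced here.

```python
import numpy as np

n_samples, n_params, n_points = 30_000, 10, 100
rng = np.random.default_rng(0)

# Placeholders for the RCWA-generated dataset: structural parameters t1..t10
# and the corresponding 100-point reflectivity spectra (values in [0, 1]).
params = rng.uniform(50.0, 500.0, size=(n_samples, n_params))  # hypothetical ranges
spectra = rng.uniform(0.0, 1.0, size=(n_samples, n_points))

# Normalize each structural parameter to [0, 1] using its predefined range.
p_min, p_max = params.min(axis=0), params.max(axis=0)
params_norm = (params - p_min) / (p_max - p_min)

# Random 4:1 split: 24,000 training samples and 6,000 verification samples.
idx = rng.permutation(n_samples)
train_idx, val_idx = idx[:24_000], idx[24_000:]
x_train, y_train = params_norm[train_idx], spectra[train_idx]
x_val, y_val = params_norm[val_idx], spectra[val_idx]
```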
Step two, constructing a neural network model based on a Transformer
FIG. 1 shows the neural network model T used in the embodiment of the present invention; the input layer of the model is a matrix change layer. In the matrix change layer, the 10 input structural parameters pass respectively through 10 fully connected layers with 512 neurons each, giving a matrix of dimension 10×512. The position encoding layer is composed of trainable parameters of dimension 1×10×512 and adds position information to the output of the matrix change layer. Then N = 8 Transformer encoders are connected in series, with the number of heads in the multi-head attention module set to 8; the output of the attention module is expressed as:
$\mathrm{head}_i = \mathrm{Attention}(Q_i, K_i, V_i) = \mathrm{softmax}\left(\frac{Q_i K_i^{\top}}{\sqrt{d_k}}\right)V_i$

where Q_i, K_i and V_i are obtained by multiplying the output of the position encoding layer by three learnable matrices W_i^q, W_i^k and W_i^v, respectively; i denotes the i-th head of the multi-head attention mechanism; and d_k is the scaling factor, equal to the dimension of each attention head. To prevent the vanishing-gradient problem that may arise when a deep neural network is trained, the output of the multi-head attention module passes through a processing module formed by a residual connection and normalization. Nonlinear processing is then carried out by a feed-forward layer composed of two fully connected layers with 1024 neurons and a ReLU activation function; a Dropout layer with a drop rate of 0.1 is added between the two fully connected layers to prevent overfitting. The output of the feed-forward layer is also passed into a processing module composed of a residual connection and normalization, finally yielding the output of the Transformer encoder. After passing through the 8 Transformer encoders in succession, the output passes through a nonlinear module consisting of a fully connected layer of length 1024 and a ReLU activation function, and the output data are then transformed into a matrix with the same size as the number of spectral sampling points by a fully connected layer of length 100.
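The architecture of step two can be sketched in PyTorch as follows. This is a hedged reconstruction from the description above, not code published with the patent; in particular, how the encoder output is reduced before the final fully connected layers is an assumption.

```python
import torch
import torch.nn as nn

class SpectrumTransformer(nn.Module):
    """Sketch of the described model: 10 parameters in, 100-point spectrum out."""

    def __init__(self, n_params=10, d_model=512, n_heads=8,
                 n_layers=8, ff_dim=1024, n_points=100, dropout=0.1):
        super().__init__()
        # Input layer: each of the 10 structural parameters is expanded by its
        # own fully connected layer of length 512 (10 x 1 -> 10 x 512).
        self.embed = nn.ModuleList([nn.Linear(1, d_model) for _ in range(n_params)])
        # Learnable position encoding of size 1 x 10 x 512.
        self.pos = nn.Parameter(torch.zeros(1, n_params, d_model))
        # 8 stacked Transformer encoders: 8-head attention, feed-forward width
        # 1024, dropout 0.1, residual connections and normalization built in.
        layer = nn.TransformerEncoderLayer(d_model, n_heads, ff_dim,
                                           dropout=dropout, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Nonlinear module (fully connected layer of 1024 + ReLU) followed by
        # the output layer of 100 units; flattening the 10 x 512 encoder
        # output before these layers is an assumption made for illustration.
        self.head = nn.Sequential(
            nn.Linear(n_params * d_model, ff_dim),
            nn.ReLU(),
            nn.Linear(ff_dim, n_points),
        )

    def forward(self, x):                      # x: (batch, 10)
        tokens = torch.stack(
            [fc(x[:, i:i + 1]) for i, fc in enumerate(self.embed)], dim=1)
        z = self.encoder(tokens + self.pos)    # (batch, 10, 512)
        return self.head(z.flatten(1))         # (batch, 100)
```

As a usage example, `SpectrumTransformer()(torch.rand(64, 10))` returns a 64×100 tensor of predicted reflectivity points.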
Step three, training and verifying the neural network model
The Transformer neural network model can be constructed as described above. During neural network training, training-set samples are fed into the network in batches of 64, with Adam as the gradient-descent optimizer and the mean square error as the loss function. After each training batch, the gradient is back-propagated according to the loss function and the trainable parameters are updated; after about 600 epochs, the verification-set loss no longer decreases, and training of the neural network model is finished (a sketch of this loop is given below). The verification set, which did not appear in the training samples, is used to verify the prediction capability of the Transformer-based neural network model, again using the mean square error as the error metric; the final verification-set error is 0.000142. As shown in FIG. 4(a), compared with the common multilayer perceptron deep learning model, the prediction accuracy of the method of the present invention is higher for complex spectra. As shown by the spectrum between 1200 nm and 2200 nm in FIG. 4(b), the traditional deep learning model has limited prediction capability in regions of strong spectral fluctuation and cannot accurately express the variation of this part of the spectrum, which often contains abundant physical information; compared with the multilayer perceptron model, the method of the present invention has high prediction consistency in these strongly fluctuating regions. The method can accurately and quickly predict the optical response from the input structural parameters of the electromagnetic metamaterial, overcomes the drawback that traditional numerical simulation methods are complex and time-consuming in solving Maxwell's equations, enables accurate real-time spectrum prediction, and greatly reduces time and hardware costs. Compared with traditional deep learning models, the prediction accuracy is higher and the method is better suited to the prediction of complex spectra. The method is simple to implement, can greatly shorten the design cycle of electromagnetic metamaterials, and is easy to extend to other electromagnetic metamaterial models.
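Below is a minimal sketch of the training and verification loop of step three, assuming the `SpectrumTransformer` model and the `(x_train, y_train, x_val, y_val)` arrays from the earlier sketches; the optimizer, scheduler and loss mirror the initialization shown earlier. Everything here is illustrative; the patent does not disclose its training code.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

device = "cuda" if torch.cuda.is_available() else "cpu"
model = SpectrumTransformer().to(device)

train_loader = DataLoader(
    TensorDataset(torch.as_tensor(x_train, dtype=torch.float32),
                  torch.as_tensor(y_train, dtype=torch.float32)),
    batch_size=64, shuffle=True)
x_val_t = torch.as_tensor(x_val, dtype=torch.float32).to(device)
y_val_t = torch.as_tensor(y_val, dtype=torch.float32).to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.2)
criterion = torch.nn.MSELoss()

for epoch in range(600):
    model.train()
    for xb, yb in train_loader:
        xb, yb = xb.to(device), yb.to(device)
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)   # mean square error on the batch
        loss.backward()                   # back-propagate the gradient
        optimizer.step()
    scheduler.step()                      # StepLR decays the rate every 30 epochs

    # Verify prediction capability on the held-out verification set.
    model.eval()
    with torch.no_grad():
        val_mse = criterion(model(x_val_t), y_val_t).item()
    print(f"epoch {epoch + 1}: verification MSE = {val_mse:.6f}")
```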
Finally, it should be noted that: in the description of the present invention, it should be noted that the terms "vertical", "upper", "lower", "horizontal", and the like indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience of describing the present invention and simplifying the description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and thus, should not be construed as limiting the present invention.
In the description of the present invention, it should also be noted that, unless otherwise explicitly specified or limited, the terms "disposed," "mounted," "connected," and "connected" are to be construed broadly and may, for example, be fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
Although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (4)

1. A Transformer-based electromagnetic metamaterial complex spectrum high-precision prediction method, characterized in that a Transformer neural network model is constructed; sample data are obtained by randomly combining structural parameters within a given range of the electromagnetic metamaterial physical model and computing them with the rigorous coupled-wave analysis (RCWA) method; the sample data are divided into a training set and a verification set at a ratio of 4:1 for training and verifying the neural network model; the training set is input into the Transformer neural network model for training; the verification set verifies the performance of the neural network model; and the trained Transformer neural network model predicts the optical response corresponding to the structural parameters of the electromagnetic metamaterial.
2. The Transformer-based electromagnetic metamaterial complex spectrum high-precision prediction method as claimed in claim 1, characterized in that rigorous coupled-wave analysis (RCWA) is used to randomly combine and compute parameters within a given range of the layered metal/dielectric structure to obtain the corresponding reflectivity, which is used to train and verify the Transformer neural network model so that it obtains the complex nonlinear mapping between the layered metal/dielectric structural parameters and the reflectivity.
3. The method as claimed in claim 2, characterized in that the layered metal/dielectric structure is composed of a plurality of vertically and periodically alternating metal/dielectric layered nanostructure units; the geometric shape is any one of a rectangle, an ellipse or a triangle; the widths of the left and right pillars in each unit are w1 and w2, respectively; the thickness of the substrate metal layer is h; the metal is one or more of aluminum oxide, silicon dioxide, magnesium fluoride, germanium, gold, silver, aluminum or titanium; and the dielectric layer material is any one of silicon dioxide, silicon monoxide, magnesium fluoride or aluminum oxide.
4. The Transformer-based electromagnetic metamaterial complex spectrum high-precision prediction method as claimed in claim 2, characterized in that the Transformer neural network model comprises, in sequence, an input layer, a position encoding layer, 8 serially connected Transformer encoders, a nonlinear layer and an output layer;
the input layer performs a matrix dimension change: the 10 input structural parameters pass through 10 fully connected layers of length 512, respectively, and the matrix dimension of the input data is changed from 64×10 to 64×10×512;
the position encoding layer is composed of a learnable matrix of dimension 1×10×512, which is added to each input matrix to attach position information to the input data;
the Transformer encoder comprises a multi-head attention module, a residual connection layer, a normalization layer, a Dropout layer and a feed-forward layer;
the output expression of the multi-head attention mechanism module is as follows:
$\mathrm{head}_i = \mathrm{Attention}(Q_i, K_i, V_i) = \mathrm{softmax}\left(\frac{Q_i K_i^{\top}}{\sqrt{d_k}}\right)V_i$

where Q_i, K_i and V_i are obtained by multiplying the output of the position encoding layer by three learnable matrices W_i^q, W_i^k and W_i^v, respectively; i denotes the i-th head of the multi-head attention mechanism; and d_k is the scaling factor, equal to the dimension of each attention head; the nonlinear layer is constructed from a fully connected layer of length 1024 and a ReLU activation function; the output layer is constructed from a fully connected layer of length 100, which changes the output data into a matrix whose size matches the number of spectral sampling points.
CN202111662213.7A 2021-12-31 2021-12-31 Transformer-based electromagnetic metamaterial complex spectrum high-precision prediction method Pending CN114334041A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111662213.7A CN114334041A (en) 2021-12-31 2021-12-31 Transformer-based electromagnetic metamaterial complex spectrum high-precision prediction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111662213.7A CN114334041A (en) 2021-12-31 2021-12-31 Transformer-based electromagnetic metamaterial complex spectrum high-precision prediction method

Publications (1)

Publication Number Publication Date
CN114334041A true CN114334041A (en) 2022-04-12

Family

ID=81020347

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111662213.7A Pending CN114334041A (en) 2021-12-31 2021-12-31 Transformer-based electromagnetic metamaterial complex spectrum high-precision prediction method

Country Status (1)

Country Link
CN (1) CN114334041A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115392138A (en) * 2022-10-27 2022-11-25 中国航天三江集团有限公司 Optical-mechanical-thermal coupling analysis model based on machine learning
CN115983140A (en) * 2023-03-16 2023-04-18 河北工业大学 Electromagnetic field numerical value prediction method based on big data deep learning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109962688A (en) * 2019-04-04 2019-07-02 北京邮电大学 The quick predict and reverse geometry design method of all dielectric Meta Materials filter transfer characteristic based on deep learning neural network
WO2020020088A1 (en) * 2018-07-23 2020-01-30 第四范式(北京)技术有限公司 Neural network model training method and system, and prediction method and system
CN111126282A (en) * 2019-12-25 2020-05-08 中国矿业大学 Remote sensing image content description method based on variation self-attention reinforcement learning
CN112653142A (en) * 2020-12-18 2021-04-13 武汉大学 Wind power prediction method and system for optimizing depth transform network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020020088A1 (en) * 2018-07-23 2020-01-30 第四范式(北京)技术有限公司 Neural network model training method and system, and prediction method and system
CN109962688A (en) * 2019-04-04 2019-07-02 北京邮电大学 The quick predict and reverse geometry design method of all dielectric Meta Materials filter transfer characteristic based on deep learning neural network
CN111126282A (en) * 2019-12-25 2020-05-08 中国矿业大学 Remote sensing image content description method based on variation self-attention reinforcement learning
CN112653142A (en) * 2020-12-18 2021-04-13 武汉大学 Wind power prediction method and system for optimizing depth transform network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张强; 魏儒义; 严强强; 赵玉迪; 张学敏; 于涛: "Application of deep neural networks in quantitative analysis of VOCs by infrared spectroscopy", Spectroscopy and Spectral Analysis, no. 04, 15 April 2020 (2020-04-15), pages 109-116 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115392138A (en) * 2022-10-27 2022-11-25 中国航天三江集团有限公司 Optical-mechanical-thermal coupling analysis model based on machine learning
CN115983140A (en) * 2023-03-16 2023-04-18 河北工业大学 Electromagnetic field numerical value prediction method based on big data deep learning
CN115983140B (en) * 2023-03-16 2023-06-09 河北工业大学 Electromagnetic field numerical prediction method based on big data deep learning

Similar Documents

Publication Publication Date Title
Finol et al. Deep convolutional neural networks for eigenvalue problems in mechanics
Xu et al. Multi-level convolutional autoencoder networks for parametric prediction of spatio-temporal dynamics
CN114334041A (en) Transformer-based electromagnetic metamaterial complex spectrum high-precision prediction method
Xu et al. Interfacing photonics with artificial intelligence: an innovative design strategy for photonic structures and devices based on artificial neural networks
Rai et al. Lamb wave based damage detection in metallic plates using multi-headed 1-dimensional convolutional neural network
CN105631554B (en) A kind of oil well oil liquid moisture content multi-model prediction technique based on time series
CN104573621A (en) Dynamic gesture learning and identifying method based on Chebyshev neural network
CN106529570B (en) Image classification method based on depth ridge ripple neural network
CN111649779B (en) Oil well oil content and total flow rate measuring method based on dense neural network and application
CN107085733A (en) Offshore infrared ship recognition methods based on CNN deep learnings
Sawant et al. Temperature variation compensated damage classification and localisation in ultrasonic guided wave SHM using self-learnt features and Gaussian mixture models
CN109284541A (en) A kind of more Method of Physical Modeling of neural network for microwave passive component
Hajmohammad et al. Optimization of stacking sequence of composite laminates for optimizing buckling load by neural network and genetic algorithm
An et al. A freeform dielectric metasurface modeling approach based on deep neural networks
CN115169235A (en) Super surface unit structure inverse design method based on improved generation of countermeasure network
Noh et al. Inverse design meets nanophotonics: From computational optimization to artificial neural network
CN113705031B (en) Nano antenna array electromagnetic performance prediction method based on deep learning
Mu et al. Catalyst optimization design based on artificial neural network
Liu et al. Machine learning-based optimization design of bistable curved shell structures with variable thickness
Yan et al. Multi-physics parametric modeling of microwave passive components using artificial neural networks
Liang et al. Research on chemical process optimization based on artificial neural network algorithm
CN116821452A (en) Graph node classification model training method and graph node classification method
Yu et al. Machine learning-based design and optimization of double curved beams for multi-stable honeycomb structures
Cerniauskas et al. Machine intelligence in metamaterials design: a review
Bhandari et al. Continuous Wavelet Transform and Deep Learning for Accurate Ae Zone Detection in Laminated Composite Structures

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination