CN116111984A - Filter design optimization method and device, filter, equipment and medium - Google Patents
- Publication number: CN116111984A
- Application number: CN202211555506.XA
- Authority: CN (China)
- Prior art keywords: filter, design, training, extreme learning machine
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H03H17/0248: Filters characterised by a particular frequency response or filtering method (H: Electricity; H03H: Impedance networks, e.g. resonant circuits, resonators; H03H17/00: Networks using digital techniques; H03H17/02: Frequency selective networks)
- G06N3/08: Learning methods (G: Physics; G06N: Computing arrangements based on specific computational models; G06N3/00: Computing arrangements based on biological models; G06N3/02: Neural networks)
Abstract
Embodiments of the present disclosure relate to the technical field of automated antenna and circuit design, and provide a filter design optimization method and apparatus, a filter, a device, and a medium. The method comprises: determining a topology diagram or circuit diagram of the filter; determining the design parameters of the filter and their corresponding initial value ranges based on the topology diagram or circuit diagram; and inputting the design parameters and their initial value ranges into a trained filter design optimization model to obtain the optimized predicted value of each design parameter, thereby completing the design optimization of the filter. The trained filter design optimization model is obtained by training a preset extreme learning machine. Embodiments of the present disclosure spare the designer the tedious task, required by traditional methods, of manually tuning each design parameter; they achieve filter design optimization faster and better, greatly save the designer's time and effort, lower the threshold and cost of filter design optimization, and improve its efficiency.
Description
Technical Field
Embodiments of the present disclosure relate to the technical field of automated antenna and circuit design, and in particular to a filter design optimization method and apparatus, a filter, a device, and a medium.
Background
In recent years, with the large-scale deployment of fifth-generation mobile communication technology (5G) and Sub-6 GHz communication (the 5G bands operating below 6 GHz, roughly 450 MHz to 6000 MHz), new fields such as the Internet of Things, high-speed information transfer, unmanned operation, and artificial intelligence have developed rapidly. 5G is one of the key core technologies of the new round of technological and industrial revolution, and it integrates with technologies such as the Internet of Things, big data, cloud computing, and artificial intelligence. At the same time, 5G places new demands on communication systems, and the design requirements of radio-frequency systems have increased greatly in the 5G era.
In the prior art, when optimizing the design of microwave elements and components such as filters, engineers often need to invoke full-wave electromagnetic simulation repeatedly to find the optimal values of the design variables. Because time-consuming electromagnetic simulations must be repeated for different geometric parameters, the process is computationally expensive. Moreover, in the traditional filter design method, the designer selects a circuit topology according to the requirements and specifications of the target circuit and obtains the corresponding parameter values by theoretical calculation. Once the topology is built but the design parameters of the components are still undetermined, the designer often must spend a great deal of time and effort tuning the design parameters one by one until all of them meet the design specifications. For circuits with more components, the design difficulty increases further and places even higher demands on the designer's experience.
Disclosure of Invention
The present disclosure aims to solve at least one of the problems in the prior art, and provides a filter design optimization method and apparatus, a filter, a device, and a medium.
In one aspect of the present disclosure, a method for optimizing a design of a filter is provided, the method including:
determining a topology structure diagram or a circuit structure diagram of the filter;
determining design parameters of the filter and corresponding initial value ranges based on the topological structure diagram or the circuit structure diagram;
inputting the design parameters and their corresponding initial value ranges into a trained filter design optimization model to obtain the optimized predicted values corresponding to the design parameters, thereby completing the design optimization of the filter; the trained filter design optimization model is obtained by training a preset extreme learning machine.
Optionally, the trained filter design optimization model is obtained through training according to the following steps:
determining training data, wherein the training data comprises training design parameters corresponding to the design parameters and training value ranges corresponding to the training design parameters;
normalizing the training data by subtracting, from each training design parameter, the minimum value of the training value range corresponding to that parameter, and dividing the difference by the length of that value range, to obtain normalized training data;
and training the preset extreme learning machine by using the normalized training data to obtain a trained filter design optimization model.
Optionally, training the preset extreme learning machine by using the normalized training data to obtain a trained filter design optimization model, including:
dividing the normalized training data into a training sample and a test sample;
training the preset extreme learning machine by using the training sample to obtain the updated extreme learning machine;
a testing step: testing the updated extreme learning machine by using the test samples to obtain the S-parameter curves corresponding to the test samples;
judging whether the S-parameter curves fall within a preset parameter range: if so, taking the updated extreme learning machine as the trained filter design optimization model; if not, training the updated extreme learning machine again with the training samples to obtain a further-updated extreme learning machine, and returning to the testing step.
Optionally, the trained filter design optimization model is expressed as the following formula (1):

$$\sum_{i=1}^{L} \beta_i \, g(w_i \cdot x_j + b_i) = o_j, \qquad j = 1, \dots, N \tag{1}$$

where i = 1, …, L indexes the hidden-layer nodes of the preset extreme learning machine, L is the number of hidden-layer nodes, β = (β_1, …, β_L) is the output weight matrix, g is the activation function, w_i is the input weight of hidden-layer node i, b_i is the bias of hidden-layer node i, x_j is the j-th input sample of the hidden layer, o_j is the output corresponding to x_j, j = 1, …, N indexes the input samples of the hidden layer, and N is the total number of input samples of the hidden layer.
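As a sanity check on formula (1), the following sketch (with purely illustrative dimensions and random values, none taken from the patent) evaluates the summation over hidden nodes directly and confirms that it matches the equivalent matrix form H β, where H[j, i] = g(w_i · x_j + b_i):

```python
import numpy as np

# Illustrative dimensions (not from the patent): L hidden nodes,
# N input samples, n_in input features, one output assumed.
L, N, n_in = 5, 3, 2
rng = np.random.default_rng(1)
w = rng.standard_normal((L, n_in))  # w_i: input weights of hidden node i
b = rng.standard_normal(L)          # b_i: bias of hidden node i
beta = rng.standard_normal(L)       # beta_i: output weight of hidden node i
g = np.tanh                         # activation function g (assumed)
X = rng.standard_normal((N, n_in))  # x_j: j-th input sample

# o_j = sum_{i=1}^{L} beta_i * g(w_i . x_j + b_i), j = 1..N  (formula (1))
o = np.array([sum(beta[i] * g(w[i] @ X[j] + b[i]) for i in range(L))
              for j in range(N)])

# Equivalent matrix form: o = H @ beta, with H[j, i] = g(w_i . x_j + b_i)
H = g(X @ w.T + b)
```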
Optionally, the preset extreme learning machine includes a forward network and a reverse network, each of which includes an input layer, a hidden layer, and an output layer, and each layer includes a plurality of nodes, wherein:
in the forward network, the number of the nodes of the hidden layer is greater than the number of the nodes of the output layer, and the number of the nodes of the output layer is greater than the number of the nodes of the input layer;
in the reverse network, the number of the nodes of the hidden layer is greater than the number of the nodes of the input layer, and the number of the nodes of the input layer is greater than the number of the nodes of the output layer.
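The two node-count orderings above can be written as simple predicates; the concrete counts used below are illustrative (the detailed description later uses 6, 1000, and 804 nodes):

```python
def valid_forward(n_in, n_hidden, n_out):
    # Forward network: hidden nodes > output nodes > input nodes
    return n_hidden > n_out > n_in

def valid_reverse(n_in, n_hidden, n_out):
    # Reverse network: hidden nodes > input nodes > output nodes
    return n_hidden > n_in > n_out
```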
Optionally, the filter includes a parallel microstrip line filter or a coplanar waveguide filter.
In another aspect of the present disclosure, there is provided a design optimizing apparatus of a filter, the apparatus including:
the first determining module is used for determining a topological structure diagram or a circuit structure diagram of the filter;
the second determining module is used for determining design parameters of the filter and a corresponding initial value range based on the topological structure diagram or the circuit structure diagram;
the optimization module is used for inputting the design parameters and the corresponding initial value ranges thereof into a trained filter design optimization model to obtain optimized predicted values corresponding to the design parameters, and completing the design optimization of the filter; the trained filter design optimization model is obtained based on training of a preset extreme learning machine.
In another aspect of the disclosure, a filter is provided, where the filter is designed by using the design optimization method of the filter described above.
In another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor, wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of optimizing the design of the filter described above.
In another aspect of the present disclosure, a computer-readable storage medium is provided, in which a computer program is stored, which when executed by a processor, implements the method of optimizing the design of a filter described above.
Compared with the prior art, embodiments of the present disclosure spare the designer the tedious task, required by traditional methods, of manually tuning each design parameter; they achieve filter design optimization faster and better, greatly save the designer's time and effort, lower the threshold and cost of filter design optimization, and improve its efficiency.
Drawings
One or more embodiments are illustrated by way of example, and not limitation, in the figures of the accompanying drawings, in which like reference numbers indicate similar elements; the figures are not drawn to scale unless expressly stated otherwise.
FIG. 1 is a flow chart of a method for optimizing the design of a filter according to an embodiment of the present disclosure;
FIG. 2 is a topology diagram of a filter provided in another embodiment of the present disclosure;
FIG. 3 is a schematic diagram of training steps of a filter design optimization model according to another embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a forward network and a reverse network included in an extreme learning machine according to another embodiment of the present disclosure;
FIG. 5 is a graph of S-parameter curves generated by the forward network, provided by another embodiment of the present disclosure;
FIG. 6 is a graph of S-parameter curves generated by the reverse network, provided by another embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a design optimizing device of a filter according to another embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of an electronic device according to another embodiment of the present disclosure.
Detailed Description
With the development of machine learning, artificial neural networks (ANNs) have come to be recognized as a powerful tool. Through learning and training on sample templates, an artificial neural network can memorize the complex spatial and temporal relations of objective things; it is well suited to problems of prediction, classification, evaluation, matching, and recognition, and it can markedly accelerate optimization, that is, the high-speed search for an optimal solution. Finding the optimal solution of a complex problem normally requires a large amount of computation, but a feedback-type artificial neural network designed for the problem can exploit the high-speed computing power of a computer to find the optimal solution quickly. The internal parameters of an artificial neural network can therefore be adjusted through training so that the network learns the relation between the geometric variables of a filter and its electromagnetic behavior, and the trained network can then be used to design and optimize the filter. Since an artificial neural network is a forward model that can be used for modeling, the geometric variables of the filter can be set as the network's input and the electromagnetic response as its output.
For the purpose of making the objects, technical solutions, and advantages of the embodiments of the present disclosure more apparent, the embodiments are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will understand that numerous technical details are set forth in the various embodiments in order to provide a better understanding of the present disclosure; however, the technical solutions claimed herein can be implemented without these details, with various changes and modifications based on the following embodiments. The division into embodiments is for convenience of description only, should not be construed as limiting the specific implementations of the disclosure, and the embodiments may be combined with and refer to one another where no contradiction arises.
One embodiment of the present disclosure relates to a method for optimizing the design of a filter, the flow of which is shown in fig. 1, including:
step S101, determining a topology structure diagram or a circuit structure diagram of the filter.
Specifically, since the design optimization of a filter is generally performed on the basis of its topology or circuit structure, this embodiment first determines the topology diagram or circuit diagram of the filter so that the design parameters of the filter can be optimized on that basis.
The present embodiment is not limited to the specific type of filter, and those skilled in the art may select the filter according to actual needs. For example, the type of filter may be a low-pass filter, a high-pass filter, a band-stop filter, an all-pass filter, or the like.
For example, when the filter is a parallel microstrip line filter, its topology diagram may include a first microstrip line 201, a second microstrip line 202, a third microstrip line 203, a fourth microstrip line 204, and a fifth microstrip line 205, as shown in FIG. 2. The microstrip lines are coupled to one another and act as capacitances and inductances, thereby realizing the function of the filter.
Step S102, determining design parameters of the filter and corresponding initial value ranges based on a topological structure diagram or a circuit structure diagram.
Specifically, the concrete design parameters of the filter and their corresponding initial value ranges can be determined on the basis of the topology diagram or circuit diagram. The design parameters may include the main filter specifications, such as center frequency, cut-off frequency, passband bandwidth, insertion loss, return loss, and stopband suppression. Of course, different types of filters may have different design parameters: for a parallel microstrip line filter, for example, the design parameters may include the lengths, widths, and spacings of microstrip lines 1 and 2, and the frequency points. The initial value range of a design parameter is the basis of filter design optimization and is also the optimization range of that parameter; based on the initial value range of each design parameter, the trained filter design optimization model selects a combination of design parameter values that makes the filter's performance meet the design requirements, thereby completing the optimization. For example, for a parallel microstrip line filter, the initial value range of the lengths of microstrip lines 1 and 2 may be 24 mm to 27 mm, that of the width of microstrip line 1 may be 1.9 mm to 2.2 mm, that of the width of microstrip line 2 may be 2.4 mm to 2.7 mm, that of the spacing of microstrip line 1 may be 0.24 mm to 0.3 mm, that of the spacing of microstrip line 2 may be 0.9 mm to 1.3 mm, and that of the frequency points may be 1.5 GHz to 2.5 GHz (at intervals of 0.005 GHz).
It should be noted that, the specific type of the design parameter and the specific numerical range of the corresponding initial numerical range are not limited in this embodiment, and those skilled in the art can select and set according to the needs in practical applications.
Step S103, inputting the design parameters and the corresponding initial value ranges into a trained filter design optimization model to obtain optimized predicted values corresponding to the design parameters, and completing the design optimization of the filter; the trained filter design optimization model is obtained based on training of a preset extreme learning machine.
Specifically, based on the initial value ranges, the trained filter design optimization model selects the combination of design parameter values that optimizes the filter's performance and outputs the corresponding actual value of each design parameter as its optimized predicted value, thereby completing the design optimization of the filter.
An extreme learning machine (ELM) can be regarded as a special fully connected feedforward neural network, or as an improvement on such a network and its backpropagation algorithm: the weights of the hidden-layer nodes are random or manually assigned and are never updated, and the learning process computes only the output weights. Because the learning process of an extreme learning machine readily converges to the global minimum, this embodiment selects the extreme learning machine as the base model of the filter design optimization model and trains it with the training data to obtain the trained filter design optimization model.
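A minimal sketch of this training scheme follows, assuming a sigmoid activation and a single closed-form least-squares step via the Moore-Penrose pseudoinverse; the class and method names are illustrative, not the patent's implementation:

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine sketch (illustrative).

    The hidden-layer input weights and biases are random and never
    updated; training computes only the output weights, in closed form,
    via the Moore-Penrose pseudoinverse."""

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_in, n_hidden))  # fixed input weights
        self.b = rng.standard_normal(n_hidden)          # fixed biases
        self.beta = np.zeros((n_hidden, n_out))         # learned output weights

    def _hidden(self, X):
        # Sigmoid assumed for the activation function g
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, T):
        H = self._hidden(X)                # N x L hidden-layer output matrix
        self.beta = np.linalg.pinv(H) @ T  # least-squares output weights
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Smoke test: with more hidden nodes than samples, training error is small.
X = np.linspace(0.0, 1.0, 20).reshape(-1, 1)
T = np.sin(2.0 * np.pi * X)
model = ELM(n_in=1, n_hidden=50, n_out=1).fit(X, T)
train_err = float(np.max(np.abs(model.predict(X) - T)))
```

Because no iterative gradient descent is involved, the whole "learning process" is one matrix pseudoinverse, which is what makes ELM training fast and free of local minima.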
Compared with the prior art, this embodiment determines the topology diagram or circuit diagram of the filter, determines the design parameters and their corresponding initial value ranges on that basis, and inputs them into a filter design optimization model trained from a preset extreme learning machine to obtain the optimized predicted value of each design parameter, thereby completing the design optimization of the filter. This spares the designer the tedious task, required by traditional methods, of manually tuning each design parameter; it achieves filter design optimization faster and better, greatly saves the designer's time and effort, lowers the threshold and cost of filter design optimization, and improves its efficiency.
Illustratively, as shown in FIG. 3, the trained filter design optimization model is trained according to the following steps:
in step S301, training data is determined, where the training data includes training design parameters corresponding to the design parameters and training value ranges corresponding to the training design parameters.
Specifically, this step may use the data set of an existing filter design optimization scheme as the training data set: the parameters corresponding to the design parameters and their optimization ranges are selected from the existing scheme and used as the training design parameters and their training value ranges, and the final value of each parameter in the existing scheme is used as the training optimized predicted value of the corresponding training design parameter, so that the preset extreme learning machine can be trained on the training design parameters and their training optimized predicted values.
For example, for a parallel microstrip line filter, the training design parameter corresponding to the length of microstrip line 1 is likewise the length of microstrip line 1; its training value range may be the initial value range of that length, namely 24 mm to 27 mm, and its training optimized predicted value may be 25.9809 mm.
It should be noted that, the specific type of the training design parameter and the specific numerical range of the training value range corresponding thereto are not limited in this embodiment, as long as the training design parameter and the training value range corresponding thereto correspond to the design parameter.
Step S302, normalizing the training data: subtract, from each training design parameter, the minimum value of the training value range corresponding to that parameter, and divide the difference by the length of that value range, to obtain the normalized training data.
For example, for a parallel microstrip line filter, when the training design parameter is the length L1 of microstrip line 1 and its training value range is 24 mm to 27 mm, the normalization is performed as follows: subtract the minimum of the range, 24 mm, from L1 to obtain L1 - 24 mm, then divide by the length of the range, 3 mm, to obtain (L1 - 24 mm) / 3 mm, which is the normalized length of microstrip line 1.
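The worked example above can be checked in a few lines; the function name is illustrative:

```python
def normalize(value, lo, hi):
    """Min-max normalization as described here: (value - lo) / (hi - lo)."""
    return (value - lo) / (hi - lo)

# Worked example from the text: microstrip line 1 length L1 = 25.9809 mm,
# training value range 24 mm to 27 mm, so (25.9809 - 24) / 3 = 0.6603.
x = normalize(25.9809, 24.0, 27.0)
```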
Step S303, training a preset extreme learning machine by using the normalized training data to obtain a trained filter design optimization model.
Specifically, the step can train the preset extreme learning machine by using the normalized training design parameters, the corresponding training value ranges and the training optimization predicted values, so as to obtain a trained filter design optimization model.
In this embodiment, min-max (linear) normalization maps the original training data, that is, the inputs of the preset extreme learning machine, into the range [0, 1]. This scales the original training data proportionally so that each feature contributes equally to the result, and it improves the convergence speed and accuracy of the model.
The preset extreme learning machine comprises a forward network and a reverse network, wherein the forward network and the reverse network comprise an input layer, a hidden layer and an output layer, and the input layer, the hidden layer and the output layer comprise a plurality of nodes. In the forward network, the number of nodes of the hidden layer is larger than that of nodes of the output layer, and the number of nodes of the output layer is larger than that of nodes of the input layer. In the reverse network, the number of nodes of the hidden layer is larger than that of the nodes of the input layer, and the number of nodes of the input layer is larger than that of the nodes of the output layer.
For example, the preset extreme learning machine may include a forward network as shown by a in fig. 4 and a reverse network as shown by B in fig. 4. Wherein, as shown in a in fig. 4, the input layer of the forward network may include 6 nodes, the hidden layer may include 1000 nodes, and the output layer may include 804 nodes. As shown in B in fig. 4, the input layer of the reverse network may include 804 nodes, the hidden layer may include 1000 nodes, and the output layer may include 6 nodes.
It should be noted that, in this embodiment, the specific number of nodes in each layer of the forward network and the reverse network is not limited, and those skilled in the art may set the number according to actual needs.
By providing both a forward network and a reverse network in the preset extreme learning machine, the filter design optimization model obtained by training can perform both forward prediction and inverse design: the forward network predicts the filter's response under different size parameters, and the reverse network predicts the filter size parameters corresponding to an ideal design waveform.
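The two directions can be sketched as a pair of untrained networks with the node counts of the FIG. 4 example; tanh is an assumed activation and all weights are random placeholders, so this only illustrates the shapes involved:

```python
import numpy as np

def random_elm(n_in, n_hidden, n_out, seed=0):
    """Shape-only sketch of one (untrained) ELM network."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_in, n_hidden))
    b = rng.standard_normal(n_hidden)
    beta = rng.standard_normal((n_hidden, n_out))
    return lambda X: np.tanh(X @ W + b) @ beta

# Node counts from the FIG. 4 example: forward 6 -> 1000 -> 804,
# reverse 804 -> 1000 -> 6.
forward = random_elm(6, 1000, 804)   # size parameters -> S-parameter curve
reverse = random_elm(804, 1000, 6)   # S-parameter curve -> size parameters

params = np.zeros((1, 6))
curve = forward(params)
back = reverse(curve)
```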
Illustratively, step S303 may include:
Dividing the normalized training data into training samples and test samples: the training samples are used to train the preset extreme learning machine, and the test samples are used to test the extreme learning machine after it has been trained and updated.
Training the preset extreme learning machine with the training samples yields the updated extreme learning machine. For example, the preset extreme learning machine selected in this step may include a forward network and a reverse network, each trained and updated with the training samples. When training the forward network, the training design parameters and their training value ranges are used as input and the corresponding training optimized predicted values as output; when training the reverse network, the training optimized predicted values are used as input and the training design parameters and their training value ranges as output.
The testing step: test the updated extreme learning machine with the test samples to obtain the S-parameter curves corresponding to the test samples. Specifically, after the preset extreme learning machine has been trained with one or more training samples, the updated machine may be tested with the test samples. For the forward network, if the training design parameters and their training value ranges were used as input and the training optimized predicted values as output during training, then when testing the updated forward network the test design parameters and their test value ranges are used as input and the test optimized predicted values as output, yielding the S-parameter curves corresponding to the test samples. For the reverse network, if the training optimized predicted values were used as input and the training design parameters and their training value ranges as output during training, then when testing the updated reverse network the test optimized predicted values are used as input and the test design parameters and their test value ranges as output, on which basis the S-parameter curves corresponding to the test samples are obtained.
Judging whether the S curve parameter reaches a preset parameter range or not: if yes, taking the updated extreme learning machine as a trained filter design optimization model; if not, training the updated extreme learning machine by using the training sample to obtain the extreme learning machine updated again, and returning to the testing step.
Specifically, the preset parameter range can be set according to actual needs. If the S curve parameter corresponding to the test sample reaches the preset parameter range, the fact that the extreme learning machine trained at the moment can meet the design optimization requirement is indicated, and therefore the extreme learning machine can be used as a trained filter design optimization model. If the S-curve parameters corresponding to the test sample do not reach the preset parameter range, the fact that the extreme learning machine trained at the moment cannot meet the design optimization requirement is indicated, and training is needed to be continued until the S-curve parameters corresponding to the test sample reach the preset parameter range.
In this embodiment, whether the preset extreme learning machine has finished training is determined by judging whether the S-curve parameters corresponding to the test samples reach the preset parameter range, which accurately identifies whether the model is effective. If the extreme learning machine is poorly trained, the predictions of both its forward network and its reverse network will deviate far from the actual results. Verifying the training effect by testing the extreme learning machine on held-out test data is therefore more practical than directly checking the training state through the loss function.
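The train/test/judge loop described above can be sketched as follows. This is a minimal illustration rather than the patented implementation: the helper functions, the tanh activation, and the scalar error threshold standing in for the preset parameter range are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def _fit(X, T, L=100):
    # minimal ELM fit: random hidden layer, least-squares output weights
    W = rng.uniform(-1.0, 1.0, (L, X.shape[1]))
    B = rng.uniform(-1.0, 1.0, (1, L))
    H = np.tanh(X @ W.T + B)
    return W, B, np.linalg.pinv(H) @ T

def _predict(X, W, B, beta):
    return np.tanh(X @ W.T + B) @ beta

def fit_until_valid(train_X, train_T, test_X, test_T,
                    max_err=0.05, max_rounds=10):
    """Retrain the ELM until the test-set error reaches the preset range."""
    for _ in range(max_rounds):
        model = _fit(train_X, train_T)                           # training step
        err = np.max(np.abs(_predict(test_X, *model) - test_T))  # testing step
        if err <= max_err:                                       # preset range reached
            return model, err
    raise RuntimeError("preset parameter range not reached after retraining")
```

Because the hidden layer is re-randomized on each round, retraining genuinely produces a differently updated machine each time, matching the "train again and return to the testing step" loop.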
Illustratively, for an extreme learning machine (ELM) comprising L hidden-layer nodes and M output-layer nodes, the learning process comprises the following steps 1 to 4:
step 1, randomly initializing the input weight matrix W and the hidden layer bias vector B:

$$W = \begin{bmatrix} w_{11} & \cdots & w_{1N} \\ \vdots & \ddots & \vdots \\ w_{L1} & \cdots & w_{LN} \end{bmatrix}, \qquad B = \begin{bmatrix} b_1 \\ \vdots \\ b_L \end{bmatrix}$$

wherein $w_{11}$ represents the weight of the 1st parameter of the input sample corresponding to the 1st hidden-layer node, $w_{1N}$ the weight of the N-th parameter of the input sample corresponding to the 1st hidden-layer node, $w_{L1}$ the weight of the 1st parameter of the input sample corresponding to the L-th hidden-layer node, and $w_{LN}$ the weight of the N-th parameter of the input sample corresponding to the L-th hidden-layer node; N represents the total number of input samples of the hidden layer, and $b_1, \ldots, b_L$ represent the biases of hidden-layer nodes $1, \ldots, L$ respectively.
step 2, calculating the hidden layer output matrix H:

$$H = \begin{bmatrix} g(w_1 \cdot x_1 + b_1) & \cdots & g(w_L \cdot x_1 + b_L) \\ \vdots & \ddots & \vdots \\ g(w_1 \cdot x_N + b_1) & \cdots & g(w_L \cdot x_N + b_L) \end{bmatrix}$$

where g represents the activation function, $w_1, \ldots, w_L$ represent the input weights of hidden-layer nodes $1, \ldots, L$ respectively, and $x_1, \ldots, x_N$ represent the 1st, …, N-th input samples of the hidden layer respectively.
step 3, calculating the output weight matrix β:

$$\beta = H^{\dagger} T, \qquad \beta = \begin{bmatrix} \beta_1^{T} \\ \vdots \\ \beta_L^{T} \end{bmatrix}, \qquad T = \begin{bmatrix} t_1^{T} \\ \vdots \\ t_N^{T} \end{bmatrix}$$

wherein $H^{\dagger}$ denotes the Moore–Penrose generalized inverse of H, $\beta_1, \ldots, \beta_L$ represent the output weights of hidden-layer nodes $1, \ldots, L$ respectively, $t_1, \ldots, t_N$ represent the expected output results corresponding to the 1st, …, N-th input samples of the hidden layer respectively, T represents the expected output matrix, and m represents the number of expected output results, i.e. each $t_j$ is an m-dimensional vector.
step 4, computing the network output and its error against the ideal values:

$$o_j = \sum_{i=1}^{L} \beta_i\, g(w_i \cdot x_j + b_i), \qquad \sum_{j=1}^{N} \lVert o_j - t_j \rVert \to \min$$

wherein $j = 1, \ldots, N$ denotes the input sample number of the hidden layer, $o_j$ represents the predicted value corresponding to the j-th input sample $x_j$ of the hidden layer, and $t_j$ represents the ideal value corresponding to $x_j$; $i = 1, \ldots, L$ denotes the node number of the hidden layer, $w_i$ represents the input weights of hidden-layer node i, $b_i$ the bias of hidden-layer node i, and $\beta_i$ the output weight of hidden-layer node i.
Thus, the trained filter design optimization model is expressed as the following formula (1):

$$o_j = \sum_{i=1}^{L} \beta_i\, g(w_i \cdot x_j + b_i), \quad j = 1, \ldots, N \tag{1}$$

wherein $i = 1, \ldots, L$ denotes the node number of the hidden layer in the preset extreme learning machine, L represents the total number of hidden-layer nodes, $\beta_i$ represents the output weight of hidden-layer node i, g represents the activation function, $w_i$ represents the input weights of hidden-layer node i, $b_i$ represents the bias of hidden-layer node i, $x_j$ represents the j-th input sample of the hidden layer, $o_j$ represents the predicted value corresponding to $x_j$, $j = 1, \ldots, N$ denotes the input sample number, and N represents the total number of input samples of the hidden layer.
Preferably, the filter comprises a parallel microstrip filter or a coplanar waveguide filter.
To better embody the beneficial effects of the trained filter design optimization model in the above embodiment, a specific test example is described below.
In this test, the filter is specifically a parallel microstrip line filter, whose topological structure diagram is shown in fig. 2; its design parameters and corresponding initial value ranges are as follows: microstrip line 1 length L1: 24 mm–27 mm; microstrip line 2 length L2: 24 mm–27 mm; microstrip line 1 width W1: 1.9 mm–2.2 mm; microstrip line 2 width W2: 2.4 mm–2.7 mm; microstrip line 1 spacing S1: 0.24 mm–0.3 mm; microstrip line 2 spacing S2: 0.9 mm–1.3 mm; frequency points: 1.5 GHz–2.5 GHz (in steps of 0.005 GHz). The trained filter design optimization model is obtained by training an extreme learning machine comprising a forward network and a reverse network. The network structure of the forward network is shown as A in fig. 4, with 1000 hidden-layer nodes; the network structure of the reverse network is shown as B in fig. 4, also with 1000 hidden-layer nodes. When the forward and reverse networks are trained separately, 400 training samples and 100 test samples are used.
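Before training, each design parameter would be min-max normalized over its initial value range (subtract the range minimum, then divide by the range length, as the method prescribes). A sketch using the ranges just listed; the dictionary layout and helper name are illustrative:

```python
# Min-max normalization of the design parameters listed above:
# (x - range_min) / (range_max - range_min); ranges copied from the test setup, in mm.
ranges = {
    "L1": (24.0, 27.0), "L2": (24.0, 27.0),  # microstrip line lengths
    "W1": (1.9, 2.2),   "W2": (2.4, 2.7),    # microstrip line widths
    "S1": (0.24, 0.3),  "S2": (0.9, 1.3),    # microstrip line spacings
}

def normalize(params):
    """Map each design parameter into [0, 1] over its initial value range."""
    return {k: (v - ranges[k][0]) / (ranges[k][1] - ranges[k][0])
            for k, v in params.items()}

sample = {"L1": 25.5, "W1": 2.05, "S1": 0.27}
print(normalize(sample))  # each sample value is the midpoint of its range, so each maps to ~0.5
```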
For the trained forward network, the test selects W1, W2, L1, L2, S1 and S2 as input parameters and outputs the real and imaginary parts of the return loss characteristic S(1,1) and the forward transmission coefficient S(2,1) over 1.5 GHz–2.5 GHz. Two different sets of values within the ranges of the input parameters were selected for testing, yielding the S-curve parameter diagrams shown in fig. 5. Fig. 5-1 shows the S-parameter curves for S11 (the return loss characteristic S(1,1)) and S21 (the forward transmission coefficient S(2,1)) obtained by testing the trained forward network with the first set of input values; fig. 5-2 shows the corresponding curves for the second set, where the dotted lines are the forward network's predictions and the solid lines the actual results. As can be seen from fig. 5, the predictions for both sets of values essentially coincide with the corresponding actual results, showing that the model is accurate: for any value within the ranges of the input parameters, the model outputs a prediction essentially consistent with the actual result.
For the trained reverse network, the test selects as input the real and imaginary parts of S(1,1) and S(2,1) over 1.5 GHz–2.5 GHz, and outputs W1, W2, L1, L2, S1 and S2, thereby obtaining the optimized predicted values corresponding to W1, W2, L1, L2, S1 and S2. Together with the corresponding actual values, they are listed in the following table 1:
Table 1: optimized predicted values and actual values corresponding to the design parameters
Design parameter | Optimized predicted value (mm) | Actual value (mm) |
---|---|---|
W1 | 2.188934 | 2.199 |
W2 | 2.697863 | 2.628 |
S1 | 0.264229 | 0.259 |
S2 | 1.204551 | 1.224 |
L1 | 26.354 | 26.391 |
L2 | 24.03737 | 24.03 |
Because the numerical comparison in table 1 is not very intuitive, the test also simulated the optimized predicted values of W1, W2, L1, L2, S1 and S2 in Advanced Design System (ADS) to obtain a corresponding set of S-parameter curves, which were compared with the ideal S-parameter curves; the comparison is shown in fig. 6. Fig. 6-1 shows the S-parameter curves obtained by ADS simulation for S11 (the return loss characteristic S(1,1)) and S21 (the forward transmission coefficient S(2,1)); fig. 6-2 shows the corresponding ideal S-parameter curves. As can be seen from fig. 6, the ADS-simulated curves for S11 and S21 barely differ from the corresponding ideal curves, so the optimized predicted values of W1, W2, L1, L2, S1 and S2 meet the preset design requirements.
Another embodiment of the present disclosure relates to a design optimizing apparatus of a filter, as shown in fig. 7, including:
a first determining module 701, configured to determine a topology structure diagram or a circuit structure diagram of the filter;
a second determining module 702, configured to determine a design parameter of the filter and a corresponding initial value range thereof based on a topology structure diagram or a circuit structure diagram;
the optimizing module 703 is configured to input the design parameter and the initial value range thereof into a trained filter design optimizing model to obtain an optimized predicted value corresponding to the design parameter, thereby completing the design optimization of the filter; the trained filter design optimization model is obtained based on training of a preset extreme learning machine.
For the specific implementation of the filter design optimization apparatus provided in this embodiment of the present disclosure, reference may be made to the filter design optimization method provided in the embodiments of the present disclosure, which will not be described again here.
Compared with the prior art, in this embodiment the first determining module determines the topological structure diagram or circuit structure diagram of the filter; the second determining module determines the design parameters of the filter and their corresponding initial value ranges based on that diagram; and the optimizing module inputs the design parameters and their initial value ranges into the filter design optimization model trained from the preset extreme learning machine to obtain the optimized predicted values corresponding to the design parameters, completing the design optimization of the filter. This spares designers the complex task of manually adjusting each design parameter, achieves the design optimization of the filter faster and better, greatly saves designers' time and effort, lowers the threshold and cost of filter design optimization, and improves design optimization efficiency.
Another embodiment of the present disclosure relates to a filter, which is designed by using the method for optimizing the design of the filter described in the foregoing embodiment.
Another embodiment of the present disclosure relates to an electronic device, as shown in fig. 8, comprising:
at least one processor 801; the method comprises the steps of,
a memory 802 communicatively coupled to the at least one processor 801; wherein,
the memory 802 stores instructions executable by the at least one processor 801, the instructions being executable by the at least one processor 801 to enable the at least one processor 801 to perform the filter design optimization method described in the above embodiments.
Where the memory and the processor are connected by a bus, the bus may comprise any number of interconnected buses and bridges, the buses connecting the various circuits of the one or more processors and the memory together. The bus may also connect various other circuits such as peripherals, voltage regulators, and power management circuits, which are well known in the art, and therefore, will not be described any further herein. The bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or may be a plurality of elements, such as a plurality of receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. The data processed by the processor is transmitted over the wireless medium via the antenna, which further receives the data and transmits the data to the processor.
The processor is responsible for managing the bus and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And memory may be used to store data used by the processor in performing operations.
Another embodiment of the present disclosure relates to a computer-readable storage medium storing a computer program that, when executed by a processor, implements the method for optimizing the design of a filter described in the above embodiment.
It will be understood by those skilled in the art that all or part of the steps of the methods described in the above embodiments may be implemented by a program stored in a storage medium, the program including several instructions for causing a device (which may be a single-chip microcomputer, a chip or the like) or a processor to perform all or part of the steps of the methods described in the various embodiments of the disclosure. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific embodiments for carrying out the present disclosure, and that various changes in form and details may be made therein without departing from the spirit and scope of the present disclosure.
Claims (10)
1. A method for optimizing the design of a filter, the method comprising:
determining a topology structure diagram or a circuit structure diagram of the filter;
determining design parameters of the filter and corresponding initial value ranges based on the topological structure diagram or the circuit structure diagram;
inputting the design parameters and the corresponding initial value ranges thereof into a trained filter design optimization model to obtain an optimization predicted value corresponding to the design parameters, and completing the design optimization of the filter; the trained filter design optimization model is obtained based on training of a preset extreme learning machine.
2. The method of claim 1, wherein the trained filter design optimization model is trained according to the steps of:
determining training data, wherein the training data comprises training design parameters corresponding to the design parameters and training value ranges corresponding to the training design parameters;
normalizing the training data, subtracting the minimum value in the training value range corresponding to the training design parameter from the training design parameter, and dividing the result of subtracting the minimum value from the training design parameter by the length of the value range corresponding to the training design parameter to obtain normalized training data;
and training the preset extreme learning machine by using the normalized training data to obtain a trained filter design optimization model.
3. The method according to claim 2, wherein training the preset extreme learning machine using the normalized training data to obtain the trained filter design optimization model comprises:
dividing the normalized training data into a training sample and a test sample;
training the preset extreme learning machine by using the training sample to obtain the updated extreme learning machine;
the testing steps are as follows: testing the updated extreme learning machine by using the test sample to obtain an S-curve parameter corresponding to the test sample;
judging whether the S curve parameter reaches a preset parameter range or not: if yes, taking the updated extreme learning machine as a trained filter design optimization model; if not, training the updated extreme learning machine by using the training sample to obtain the extreme learning machine updated again, and returning to the testing step.
4. A method according to claim 3, wherein the trained filter design optimization model is expressed as the following formula (1):

$$o_j = \sum_{i=1}^{L} \beta_i\, g(w_i \cdot x_j + b_i), \quad j = 1, \ldots, N \tag{1}$$

wherein $i = 1, \ldots, L$ denotes the node number of the hidden layer in the preset extreme learning machine, L represents the total number of hidden-layer nodes, $\beta_i$ represents the output weight of hidden-layer node i, g represents the activation function, $w_i$ represents the input weights of hidden-layer node i, $b_i$ represents the bias of hidden-layer node i, $x_j$ represents the j-th input sample of the hidden layer, $o_j$ represents the predicted value corresponding to $x_j$, $j = 1, \ldots, N$ denotes the input sample number, and N represents the total number of input samples of the hidden layer.
5. The method of any one of claims 1 to 4, wherein the preset extreme learning machine comprises a forward network and a reverse network, each comprising an input layer, a hidden layer, and an output layer, each layer comprising a plurality of nodes; wherein,
in the forward network, the number of the nodes of the hidden layer is greater than the number of the nodes of the output layer, and the number of the nodes of the output layer is greater than the number of the nodes of the input layer;
in the reverse network, the number of the nodes of the hidden layer is greater than the number of the nodes of the input layer, and the number of the nodes of the input layer is greater than the number of the nodes of the output layer.
6. The method of claim 5, wherein the filter comprises a parallel microstrip filter or a coplanar waveguide filter.
7. A filter design optimization apparatus, the apparatus comprising:
the first determining module is used for determining a topological structure diagram or a circuit structure diagram of the filter;
the second determining module is used for determining design parameters of the filter and a corresponding initial value range based on the topological structure diagram or the circuit structure diagram;
the optimization module is used for inputting the design parameters and the corresponding initial value ranges thereof into a trained filter design optimization model to obtain optimized predicted values corresponding to the design parameters, and completing the design optimization of the filter; the trained filter design optimization model is obtained based on training of a preset extreme learning machine.
8. A filter, characterized in that the filter is designed by the design optimization method of the filter according to any one of claims 1 to 6.
9. An electronic device, comprising:
at least one processor; the method comprises the steps of,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of design optimization of a filter according to any one of claims 1 to 6.
10. A computer readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the method of optimizing the design of a filter according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211555506.XA CN116111984A (en) | 2022-12-06 | 2022-12-06 | Filter design optimization method and device, filter, equipment and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116111984A true CN116111984A (en) | 2023-05-12 |
Family
ID=86262888
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211555506.XA Pending CN116111984A (en) | 2022-12-06 | 2022-12-06 | Filter design optimization method and device, filter, equipment and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116111984A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101869266B1 (en) * | 2017-05-08 | 2018-06-21 | 경북대학교 산학협력단 | Lane detection system based on extream learning convolutional neural network and method thereof |
US20190340497A1 (en) * | 2016-12-09 | 2019-11-07 | William Marsh Rice University | Signal Recovery Via Deep Convolutional Networks |
CN113128119A (en) * | 2021-04-21 | 2021-07-16 | 复旦大学 | Filter reverse design and optimization method based on deep learning |
CN113657026A (en) * | 2021-07-30 | 2021-11-16 | 万魔声学股份有限公司 | Simulation design method, device, equipment and storage medium of filter |
CN113807040A (en) * | 2021-09-23 | 2021-12-17 | 北京邮电大学 | Optimal design method for microwave circuit |
CN114036839A (en) * | 2021-11-09 | 2022-02-11 | 江苏科技大学 | Microwave antenna physical parameter design method and system based on multilayer extreme learning machine |
CN114626573A (en) * | 2022-01-27 | 2022-06-14 | 华南理工大学 | Load prediction method for optimizing extreme learning machine based on improved multivariate universe algorithm |
Non-Patent Citations (2)
Title |
---|
WANG Yibin; TIAN Wenquan; CHENG Yusheng; PEI Gensheng: "Label Distribution Learning Based on Kernel Extreme Learning Machine", Computer Engineering and Applications, no. 24 *
ZHENG Jian; CAO Wei: "Day-ahead Electricity Price Forecasting Based on GA-ELM Neural Network", Journal of Shanghai University of Electric Power, no. 01 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021089013A1 (en) | Spatial graph convolutional network training method, electronic device and storage medium | |
CN109960834A (en) | A kind of analog circuit multi-objective optimization design of power method based on multi-objective Bayesian optimization | |
CN115238599B (en) | Energy-saving method and model reinforcement learning training method and device for refrigerating system | |
CN112257848B (en) | Method for determining logic core layout, model training method, electronic device and medium | |
CN115906303A (en) | Planar microwave filter design method and device based on machine learning | |
CN116402002A (en) | Multi-target layered reinforcement learning method for chip layout problem | |
Guo et al. | A novel design methodology for a multioctave GaN-HEMT power amplifier using clustering guided Bayesian optimization | |
Amrit et al. | Design strategies for multi-objective optimization of aerodynamic surfaces | |
Koziel et al. | On decision-making strategies for improved-reliability size reduction of microwave passives: Intermittent correction of equality constraints and adaptive handling of inequality constraints | |
CN113962163A (en) | Optimization method, device and equipment for realizing efficient design of passive microwave device | |
CN114564787A (en) | Bayesian optimization method, device and storage medium for target-related airfoil design | |
Somayaji et al. | Prioritized reinforcement learning for analog circuit optimization with design knowledge | |
CN111460734A (en) | Microwave device automatic modeling method combining advanced adaptive sampling and artificial neural network | |
CN116111984A (en) | Filter design optimization method and device, filter, equipment and medium | |
CN115906741A (en) | Radio frequency circuit optimization design method based on high-performance calculation | |
CN109117545B (en) | Neural network-based antenna rapid design method | |
CN105302968A (en) | Optimization design method for distributed power amplifier | |
De Tommasi et al. | Surrogate modeling of low noise amplifiers based on transistor level simulations | |
Wei et al. | Observer-based mixed H∞/passive adaptive sliding mode control for Semi-Markovian jump system with time-varying delay | |
CN117077541B (en) | Efficient fine adjustment method and system for parameters of medical model | |
CN113076699B (en) | Antenna optimization method based on Bayesian optimization of multi-output Gaussian process | |
Yun et al. | State of charge evaluation of battery in electric vehicles based on data-driven and model fusion approach | |
CN114896852B (en) | Converter transformer scaling model vibration parameter prediction method based on PSO-BP neural network | |
Kundu et al. | An efficient method of Pareto-optimal front generation for analog circuits | |
CN117195999A (en) | Differential topk-based differential model scaling method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |