CN116911169A - EDFA test parameter prediction method based on BP neural network

EDFA test parameter prediction method based on BP neural network

Info

Publication number
CN116911169A
Authority
CN
China
Prior art keywords
edfa
neural network
value
test
solution
Prior art date
Legal status
Pending
Application number
CN202310757358.8A
Other languages
Chinese (zh)
Inventor
臧益鹏
李现勤
吴松桂
顾文华
Current Assignee
Wuxi Dekeli Optoelectronic Technology Co ltd
Original Assignee
Wuxi Dekeli Optoelectronic Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Wuxi Dekeli Optoelectronic Technology Co ltd filed Critical Wuxi Dekeli Optoelectronic Technology Co ltd
Priority to CN202310757358.8A priority Critical patent/CN116911169A/en
Publication of CN116911169A publication Critical patent/CN116911169A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/20 Design optimisation, verification or simulation
    • G06F30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2119/00 Details relating to the type or aim of the analysis or the optimisation
    • G06F2119/02 Reliability analysis or reliability optimisation; Failure analysis, e.g. worst case scenario performance, failure mode and effects analysis [FMEA]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04 INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00 Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50 Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application discloses an EDFA test parameter prediction method based on a BP neural network, relating to the field of erbium-doped fiber amplifier testing, and comprising the following steps: acquiring an EDFA data set and dividing it into training samples and test samples, wherein the data set comprises a plurality of groups of gain, input optical power and pumping current; setting basic parameters of the BP neural network, and optimizing the weights and thresholds of the network with a GSO algorithm; learning the training samples with the optimized network to establish a parameter learning model; testing the test samples with the learning model, and comparing the difference between the theoretical values output by the model and the actual values; and predicting EDFA test parameters with the parameter learning model, automatically generating and outputting a gain-input optical power-pumping current table document. The method uses the neural network to find the implicit relation among gain, input optical power and pumping current, and optimizes the weights and thresholds of the network so that the established parameter learning model is more accurate.

Description

EDFA test parameter prediction method based on BP neural network
Technical Field
The application relates to the field of erbium-doped fiber amplifier test, in particular to an EDFA test parameter prediction method based on a BP neural network.
Background
An erbium-doped fiber amplifier (Erbium-Doped Fiber Amplifier, EDFA) is an active device for amplifying signal light. In testing an EDFA module, an accurate current value needs to be set within a very short time, and whether the current value is accurate directly influences the test efficiency and cost of the EDFA module.
At present, in order to solve the problem of setting an accurate current value, an internal difference method is generally adopted: a group of pumping current values is first given, the output light is connected to a spectrometer, and the current values are then dynamically adjusted according to the gain, noise and other indexes fed back by the spectrometer until all optical index conditions are met. In practical testing, the test is not limited to a few gain values and input optical powers; when full-gain testing is required, the test time of a single module increases greatly, so improving test efficiency has become a difficult problem to be solved in EDFA module testing.
The existing implementation scheme is generally as follows: according to the functional requirements of the optical communication system, the gains and input optical powers at which the EDFA module needs to be tested are determined, a gain-input optical power-pumping current table is established, an accurate pumping current value is dynamically adjusted for each gain value and input optical power, and the gain values, input optical powers and corresponding pumping current values are then recorded manually. In this process, a tester needs to spend a great deal of time and effort on the dynamic adjustment; when additional gain values and input optical powers to be tested are added, the test steps must be repeated and the table cannot be updated in time, which makes it very difficult to improve the efficiency of the product line.
Disclosure of Invention
Aiming at the problems and the technical requirements, the inventor provides an EDFA test parameter prediction method based on a BP neural network, and the technical scheme of the application is as follows:
an EDFA test parameter prediction method based on BP neural network comprises the following steps:
obtaining an EDFA data set and dividing the EDFA data set into a training sample and a test sample, wherein the EDFA data set comprises a plurality of groups of EDFA test parameters, and each group of parameters comprises gain, input optical power and pumping current;
setting basic parameters of the BP neural network, and optimizing the weight and the threshold of the BP neural network by using a GSO algorithm;
learning the training sample by using the optimized BP neural network, and establishing a parameter learning model;
testing the test sample by using a parameter learning model, and comparing the difference between the theoretical value and the actual value output by the model;
and predicting EDFA test parameters by using the parameter learning model, and automatically generating and outputting a gain-input optical power-pumping current table document.
The further technical scheme is that the optimizing process for the weight and the threshold value of the BP neural network by using the GSO algorithm comprises the following steps:
setting basic parameters of a GSO algorithm, wherein the basic parameters comprise maximum iteration times, initial step length and random initial solutions within a set limit value range, and the solutions of the GSO algorithm are vectors formed by weights and threshold values of a BP neural network;
taking the initial objective function value obtained by calculation of the initial solution as an initial optimal fitness value, and taking the initial solution as an initial optimal solution;
calculating the value of the current objective function from the value of the current algorithm solution; if it is smaller than the optimal fitness value, updating it as the optimal fitness value and updating the value corresponding to the current algorithm solution as the optimal solution; otherwise, directly proceeding to the step of updating the algorithm solution;
updating the value of the next algorithm solution by updating the step length, repeatedly executing the process of calculating the value of the current objective function according to the value of the current algorithm solution until the maximum iteration number is reached, and outputting the final optimal solution as the found optimal weight and threshold of the BP neural network.
The further technical scheme is that the numerical value of the next algorithm solution is updated by updating the step length, and the method comprises the following steps:
the expression of the update step is:
step′ = G·step + c1·cos(r1)·(p_best − p_t) + c2·cos(r2)·(p_best − p_t);
wherein step′ is the updated step size, step is the previous step size, G is a calculated factor, t is the current iteration number, T is the maximum iteration number, c1 and c2 are random numbers in (0, 2), r1 and r2 are random numbers in (0, 1), p_best is the optimal solution, and p_t is the current algorithm solution;
the next algorithm solution is updated as the sum of the current algorithm solution and the updated step size.
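As a minimal illustration of this update rule (a sketch, not the patent's own code), the following Python function implements step′ for NumPy vectors; the attenuation factor G is assumed here to decay linearly with the iteration count, since the text only states that it is computed from t and T, and the name gso_step_update is hypothetical.

```python
import numpy as np

def gso_step_update(step, p_best, p_t, t, T, rng):
    """Compute the updated step size step' of the GSO algorithm."""
    G = 1.0 - t / T                          # assumed attenuation factor derived from t and T
    c1, c2 = rng.uniform(0.0, 2.0, size=2)   # random numbers in (0, 2)
    r1, r2 = rng.uniform(0.0, 1.0, size=2)   # random numbers in (0, 1)
    return (G * step
            + c1 * np.cos(r1) * (p_best - p_t)
            + c2 * np.cos(r2) * (p_best - p_t))

# next algorithm solution: p_next = p_t + gso_step_update(step, p_best, p_t, t, T, rng)
```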
The further technical proposal is that setting the basic parameters of the GSO algorithm further comprises confirming the dimension D = m×h + h + h×n + n of the algorithm solution; wherein m, n and h are the numbers of neurons of the input layer, the output layer and the hidden layer of the BP neural network respectively, and the dimension is the length of the vector consisting of the weights and thresholds.
The further technical scheme is that the objective function is the sum of squared errors between the theoretical values obtained after the BP neural network learns the training samples and the actual values.
The further technical scheme is that the EDFA data set is obtained and divided into a training sample and a test sample, and the method comprises the following steps:
and carrying out normalization processing on a plurality of groups of EDFA test parameters in the obtained EDFA data set, and then, scrambling and sorting the EDFA test parameters and dividing the EDFA test parameters into training samples and test samples according to a preset proportion.
The further technical scheme is that setting the basic parameters of the BP neural network includes setting the numbers of neurons of the input layer, the output layer and the hidden layer, the maximum number of training iterations, the learning rate and the error target.
The further technical proposal is that, in comparing the difference between the theoretical value and the actual value output by the model, the correlation coefficient R² is used as the evaluation index.
The beneficial technical effects of the application are as follows:
according to the application, the BP neural network is adopted to learn the EDFA data set, so that the implicit relation among gain, input optical power and pumping current in the EDFA test is found, the weight and the threshold value in the BP neural network are optimized by utilizing a golden search optimization (Golden Search Optimization, GSO) algorithm, so that the established parameter learning model is more accurate, other gain values of the EDFA and pumping current values under the input optical power are predicted according to the parameter learning model established after the EDFA data set is learned, a gain-input optical power-pumping current table is established, document output is automatically generated, a powerful technical support is provided for the EDFA production, and meanwhile, the prediction level of the EDFA test parameter is improved.
Drawings
Fig. 1 is a flowchart of an EDFA test parameter prediction method based on a BP neural network.
Fig. 2 is a flowchart of optimizing the network weights and the threshold values by using the GSO algorithm provided by the present application.
Fig. 3 is a graph showing the theoretical and actual results of pump current at different input optical powers for a gain of 15 according to the present application.
Fig. 4 is a graph showing the theoretical and actual results of pumping current at different input optical powers for a gain of 20 according to the present application.
Fig. 5 is a partial screenshot of an automatically generated gain-input optical power-pumping current table provided by the present application.
Detailed Description
The following describes the embodiments of the present application further with reference to the drawings.
As shown in fig. 1, the application provides an EDFA test parameter prediction method based on a BP neural network. The method is integrated into data processing software on a notebook computer; the software provides functions such as EDFA data set preprocessing, BP neural network parameter adjustment, and automatic output of the gain-input optical power-pumping current table. The method specifically comprises the following steps:
s1: acquiring and dividing the EDFA data set into a training sample and a test sample, wherein the EDFA data set comprises:
the acquired EDFA data set is imported into data processing software, wherein the EDFA data set comprises a plurality of sets of EDFA test parameters, and each set of parameters comprises gain, input optical power and pumping current; and carrying out normalization treatment on a plurality of groups of EDFA test parameters, and then, sorting according to group disorder and dividing into training samples and test samples according to the proportion of 90% -10%.
The normalization can be expressed by the following formula:
y = (x − x_min) / (x_max − x_min);
wherein x is the original data, x_min and x_max represent the minimum and maximum values of x in the data set respectively, and y represents the normalized data. Normalization maps the data into the (0, 1) range, effectively eliminating magnitude differences between the data.
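As a concrete illustration of this preprocessing step, a minimal NumPy sketch is given below; it is not the patent's own code, and the names normalize and split_dataset are hypothetical.

```python
import numpy as np

def normalize(data):
    """Min-max normalization of each column into the (0, 1) range."""
    d_min, d_max = data.min(axis=0), data.max(axis=0)
    return (data - d_min) / (d_max - d_min), d_min, d_max

def split_dataset(data, train_ratio=0.9, seed=0):
    """Shuffle whole parameter groups and split into training and test samples."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    n_train = int(train_ratio * len(data))
    return data[idx[:n_train]], data[idx[n_train:]]

# data: one row per group of EDFA test parameters (gain, input optical power, pumping current)
# norm_data, d_min, d_max = normalize(data)
# train, test = split_dataset(norm_data)
```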
S2: setting basic parameters of the BP neural network, and optimizing the weight and the threshold of the BP neural network by using a GSO algorithm.
The BP neural network is used for learning the EDFA data set so as to find out the implicit relation among the gain, the input optical power and the pumping current in the EDFA test. In order to improve the accuracy of network learning, the application provides a method for combining a golden search optimization algorithm with a neural network for the first time so as to adjust the weight and the threshold of the network. As shown in fig. 2, the method comprises the following sub-steps:
s2-1: setting the neuron numbers of an input layer, an output layer and an hidden layer of the BP neural network, training the maximum iteration times, learning rate and error targets. In this embodiment, the gain and the input optical power in the EDFA dataset are input to the BP neural network, the number of neurons in the input layer is 2, the pumping current is output from the BP neural network, the number of neurons in the output layer is 1, and the number of neurons in the hidden layer is calculated by the following formula:
where h is the number of neurons in the hidden layer, m and n represent the number of neurons in the input layer and the output layer, respectively, and k is a positive integer within the range of (0,12), and in this example, k is set to be 6. The maximum number of training iterations is set to 1000 in this example, and the learning rate and the error order are set to 0.001 and 10-8, respectively, in this example.
At this time, the initial weight and the threshold value in the BP neural network are random values, which can cause insufficient robustness of the BP neural network.
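To make the subsequent optimization concrete, the following Python sketch (an illustrative reconstruction, not the patent's code) decodes a weight-and-threshold vector of dimension D = m×h + h + h×n + n into a 2-h-1 BP network and evaluates it by the sum of squared errors on the training samples; the sigmoid hidden layer, the decoding layout and the value h = 8 are assumptions.

```python
import numpy as np

m, n = 2, 1                       # inputs: gain, input optical power; output: pumping current
h = 8                             # hidden-layer neurons (assumed value for illustration)
D = m * h + h + h * n + n         # length of the weight/threshold vector optimized by GSO

def decode(p):
    """Split a solution vector into weights and thresholds (layout assumed)."""
    i = 0
    W1 = p[i:i + m * h].reshape(m, h); i += m * h
    b1 = p[i:i + h];                   i += h
    W2 = p[i:i + h * n].reshape(h, n); i += h * n
    b2 = p[i:i + n]
    return W1, b1, W2, b2

def forward(p, X):
    """Forward pass of the 2-h-1 BP network (sigmoid hidden layer assumed)."""
    W1, b1, W2, b2 = decode(p)
    hidden = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
    return hidden @ W2 + b2

def objective(p, X_train, y_train):
    """f(p): sum of squared errors between network outputs and actual values.
    y_train holds the actual pumping currents with shape (N, 1)."""
    return float(np.sum((forward(p, X_train) - y_train) ** 2))
```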
S2-2: setting the basic parameters of the GSO algorithm, including the dimension D of the algorithm solution, the maximum number of iterations T, the initial step size step_0, and a random initial solution within the set limit range, wherein the solution of the GSO algorithm is a vector consisting of the weights and thresholds of the network.
The dimension represents the length of this vector and, in this example, is determined according to D = m×h + h + h×n + n. The remaining parameters are set as follows: T = 200, step_0 is a random number in (0, 1), the upper and lower limits of the vector are set to 3 and −3, and a random initial solution p_0 within this range is given as the initial weights and thresholds of the network.
S2-3: taking the initial objective function value f(p_0) calculated from the initial solution p_0 as the initial optimal fitness value g_best, and taking the initial solution p_0 as the initial optimal solution p_best. Here f(p_0) denotes the sum of squared errors between the theoretical values and the actual values obtained when the BP neural network with weights and thresholds p_0 learns the training samples.
S2-4: taking p_t as the current algorithm solution, calculating the value f(p_t) of the current objective function from the value of the current algorithm solution and comparing it with f(p_best); if f(p_t) < f(p_best), updating it as the optimal fitness value and updating the value corresponding to the current algorithm solution as the optimal solution, i.e. p_best = p_t and g_best = f(p_t); otherwise, directly entering step S2-5 of updating the algorithm solution.
S2-5: updating the value of the next algorithm solution by updating the step size, including:
the expression of the update step is:
step′ = G·step + c1·cos(r1)·(p_best − p_t) + c2·cos(r2)·(p_best − p_t)    (3)
wherein step′ is the updated step size, step is the previous step size, G is a calculated factor, t is the current iteration number, c1 and c2 are random numbers in (0, 2), and r1 and r2 are random numbers in (0, 1) which may take the same value. The next algorithm solution is updated as the sum of the current algorithm solution and the updated step size, i.e. p_t′ = p_t + step′.
S2-6: comparing t with T; if t = T, ending the loop and outputting the final optimal solution as the found optimal weights and thresholds of the BP neural network; otherwise, re-executing the process of calculating the value of the current objective function from the value of the current algorithm solution, i.e. cyclically executing S2-4 to S2-6.
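Putting S2-2 to S2-6 together, a minimal sketch of the whole GSO loop might look as follows; it reuses the objective and gso_step_update functions from the earlier sketches, the limit values ±3, T = 200 and the step_0 initialization follow this embodiment, and gso_optimize with its argument names is hypothetical.

```python
def gso_optimize(objective_fn, D, T=200, lim=3.0, seed=0):
    """GSO iteration S2-2 to S2-6: returns the best weight/threshold vector found."""
    rng = np.random.default_rng(seed)
    step = rng.uniform(0.0, 1.0, size=D)        # initial step size step_0 in (0, 1)
    p_t = rng.uniform(-lim, lim, size=D)        # random initial solution p_0 within [-3, 3]
    p_best, g_best = p_t.copy(), objective_fn(p_t)

    for t in range(1, T + 1):
        f_t = objective_fn(p_t)                 # S2-4: evaluate the current solution
        if f_t < g_best:
            g_best, p_best = f_t, p_t.copy()
        # S2-5: update the step size (gso_step_update from the earlier sketch) and the solution
        step = gso_step_update(step, p_best, p_t, t, T, rng)
        p_t = p_t + step
    return p_best                               # S2-6: optimal weights and thresholds

# p_best = gso_optimize(lambda p: objective(p, X_train, y_train), D)
```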
S3: applying the p_best obtained in the previous step to the BP neural network, and learning the training samples in the EDFA data set with the optimized BP neural network, thereby establishing the parameter learning model.
S4: and testing the test sample in the EDFA data set by using the parameter learning model, and comparing the difference (namely, error) between the theoretical value and the actual value output by the model.
In this embodiment, the correlation coefficient R² is used as the evaluation index for the error, where N is the number of data in the test samples, y_k is the theoretical value corresponding to the k-th group of test samples, and x_k is the actual value of the k-th group of test samples. Fig. 3 and fig. 4 show comparison graphs of the theoretical and actual pumping current results at different input optical powers for a gain of 15 and a gain of 20 respectively; the two curves substantially coincide, and the calculated R² values are 0.986 and 0.975 respectively, indicating that the optimized BP neural network predicts the EDFA test parameters with high accuracy.
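Because the exact R² expression is not reproduced in the text above, the following sketch simply uses the conventional coefficient of determination as an assumed stand-in; r_squared and the variable names are hypothetical.

```python
def r_squared(y_pred, y_true):
    """Coefficient of determination, used here as a stand-in for the R^2 index in the text."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

# r2 = r_squared(forward(p_best, X_test).ravel(), y_test.ravel())
```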
S5: predicting the EDFA test parameters (namely pumping current values) at all gains and input optical powers by using the parameter learning model, and automatically generating and outputting a gain-input optical power-pumping current table document, as shown in fig. 5.
After the document is generated, researchers can download it directly and set the pumping current against the table, which alleviates the insufficient-precision problem of the prior art, accelerates the test progress, and makes the test more efficient and intelligent.
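As an illustration of how such a table document could be produced, the sketch below predicts the pumping current for every (gain, input optical power) pair and writes a CSV file; the CSV format, column names and the helper names are assumptions, and forward, p_best, d_min and d_max come from the earlier sketches.

```python
import csv
import numpy as np

def generate_table(p_best, gains, powers, d_min, d_max, path="gain_power_current.csv"):
    """Predict the pumping current for every (gain, input power) pair and write a table file.
    d_min/d_max are the normalization bounds computed from the original EDFA data set."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["gain", "input optical power", "pumping current"])
        for g in gains:
            for pin in powers:
                x = (np.array([[g, pin]]) - d_min[:2]) / (d_max[:2] - d_min[:2])
                i_norm = forward(p_best, x)[0, 0]                    # model output (normalized)
                i_pump = i_norm * (d_max[2] - d_min[2]) + d_min[2]   # de-normalize to a current value
                writer.writerow([g, pin, round(float(i_pump), 3)])
```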
The above is only a preferred embodiment of the present application, and the present application is not limited to the above examples. It is to be understood that other modifications and variations which may be directly derived or contemplated by those skilled in the art without departing from the spirit and concepts of the present application are deemed to be included within the scope of the present application.

Claims (8)

1. An EDFA test parameter prediction method based on BP neural network is characterized by comprising the following steps:
obtaining an EDFA data set and dividing the EDFA data set into a training sample and a test sample, wherein the EDFA data set comprises a plurality of groups of EDFA test parameters, and each group of parameters comprises gain, input optical power and pumping current;
setting basic parameters of a BP neural network, and optimizing weights and thresholds of the BP neural network by using a GSO algorithm;
learning the training sample by using the optimized BP neural network, and establishing a parameter learning model;
testing the test sample by using the parameter learning model, and comparing the difference between the theoretical value and the actual value output by the model;
and predicting EDFA test parameters by using the parameter learning model, automatically generating a gain-input optical power-pumping current table document, and outputting the gain-input optical power-pumping current table document.
2. The EDFA test parameter prediction method based on a BP neural network according to claim 1, wherein the optimizing the weight and the threshold of the BP neural network by using a GSO algorithm comprises:
setting basic parameters of a GSO algorithm, wherein the basic parameters comprise maximum iteration times, initial step length and random initial solutions within a set limit value range, and the solutions of the GSO algorithm are vectors formed by weights and threshold values of the BP neural network;
the initial objective function value obtained through calculation according to the initial solution is used as an initial optimal fitness value, and the initial solution is used as an initial optimal solution;
calculating according to the value of the current algorithm solution to obtain the value of the current objective function, if the value is smaller than the optimal fitness value, updating the value into the optimal fitness value, updating the value corresponding to the current algorithm solution into the optimal solution, otherwise, directly entering an updating algorithm solution;
updating the value of the next algorithm solution by updating the step length, repeating the process of calculating the value of the current objective function according to the value of the current algorithm solution until the maximum iteration number is reached, and outputting the final optimal solution as the found optimal weight and threshold of the BP neural network.
3. The EDFA test parameter prediction method based on a BP neural network according to claim 2, wherein updating the value of the next algorithm solution by updating the step size comprises:
the expression of the update step is:
step′ = G·step + c1·cos(r1)·(p_best − p_t) + c2·cos(r2)·(p_best − p_t);
wherein step′ is the updated step size, step is the previous step size, G is a calculated factor, t is the current iteration number, T is the maximum iteration number, c1 and c2 are random numbers in (0, 2), r1 and r2 are random numbers in (0, 1), p_best is the optimal solution, and p_t is the current algorithm solution;
updating the next algorithm solution to be the sum of the current algorithm solution and the updated step size.
4. The EDFA test parameter prediction method based on a BP neural network according to claim 2, wherein the setting of basic parameters of the GSO algorithm further comprises confirming the dimension D = m×h + h + h×n + n of the algorithm solution; and m, n and h are the numbers of neurons of the input layer, the output layer and the hidden layer of the BP neural network respectively, and the dimension is the length of the vector formed by the weights and thresholds.
5. The EDFA test parameter prediction method based on a BP neural network according to claim 2, wherein the objective function is the sum of squared errors between the theoretical values and the actual values obtained after the BP neural network learns the training samples.
6. The method of predicting EDFA test parameters based on a BP neural network of claim 1, wherein the acquiring the EDFA dataset and dividing into training samples and test samples comprises:
and carrying out normalization processing on a plurality of groups of EDFA test parameters in the obtained EDFA data set, and then, scrambling and sorting the EDFA test parameters and dividing the EDFA test parameters into training samples and test samples according to a preset proportion.
7. The EDFA test parameter prediction method based on a BP neural network according to claim 1, wherein the setting of basic parameters of the BP neural network includes the numbers of neurons of the input layer, the output layer and the hidden layer, the maximum number of training iterations, the learning rate and the error target.
8. The method of predicting EDFA test parameters based on BP neural network of claim 1, wherein in comparing the difference between the theoretical value and the actual value output by the model, the correlation coefficient R² is used as the evaluation index.
CN202310757358.8A 2023-06-26 2023-06-26 EDFA test parameter prediction method based on BP neural network Pending CN116911169A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310757358.8A CN116911169A (en) 2023-06-26 2023-06-26 EDFA test parameter prediction method based on BP neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310757358.8A CN116911169A (en) 2023-06-26 2023-06-26 EDFA test parameter prediction method based on BP neural network

Publications (1)

Publication Number Publication Date
CN116911169A true CN116911169A (en) 2023-10-20

Family

ID=88365923

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310757358.8A Pending CN116911169A (en) 2023-06-26 2023-06-26 EDFA test parameter prediction method based on BP neural network

Country Status (1)

Country Link
CN (1) CN116911169A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118074806A (en) * 2024-04-22 2024-05-24 中国电建集团江西省电力设计院有限公司 Optical amplifier gain adjusting method and equipment based on machine learning
CN118153460A (en) * 2024-05-10 2024-06-07 无锡芯光互连技术研究院有限公司 Method and device for designing directional coupler and storage medium


Similar Documents

Publication Publication Date Title
CN116911169A (en) EDFA test parameter prediction method based on BP neural network
Bottou et al. Large scale online learning
CN110175386B (en) Method for predicting temperature of electrical equipment of transformer substation
CN109472817B (en) Multi-sequence magnetic resonance image registration method based on loop generation countermeasure network
CN112884059B (en) Small sample radar working mode classification method fusing priori knowledge
CN107832789B (en) Feature weighting K nearest neighbor fault diagnosis method based on average influence value data transformation
CN112861982A (en) Long-tail target detection method based on gradient average
CN108229659A (en) Piano singly-bound voice recognition method based on deep learning
CN111200141B (en) Proton exchange membrane fuel cell performance prediction and optimization method based on deep belief network
CN114283320B (en) Branch-free structure target detection method based on full convolution
Lin et al. Self-attentive similarity measurement strategies in speaker diarization.
CN110766190A (en) Power distribution network load prediction method
CN112347910A (en) Signal fingerprint identification method based on multi-mode deep learning
CN115222983A (en) Cable damage detection method and system
CN113472415B (en) Signal arrival angle estimation method and device, electronic equipment and storage medium
Ao et al. Entropy estimation via normalizing flow
CN112651500A (en) Method for generating quantization model and terminal
CN116523131A (en) Integrated circuit process parameter optimization method and system based on DBN
CN112766537B (en) Short-term electric load prediction method
CN115374863A (en) Sample generation method, sample generation device, storage medium and equipment
CN112738724B (en) Method, device, equipment and medium for accurately identifying regional target crowd
CN113255927A (en) Logistic regression model training method and device, computer equipment and storage medium
CN113011597A (en) Deep learning method and device for regression task
CN114745231B (en) AI communication signal identification method and device based on block chain
CN112215272A (en) Bezier curve-based image classification neural network attack method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination