CN118233002B - Optical fiber state evaluation method - Google Patents


Info

Publication number
CN118233002B
CN118233002B (application CN202410649781.0A)
Authority
CN
China
Prior art keywords: feature extraction, extraction unit, unit, output, layer
Prior art date
Legal status
Active
Application number
CN202410649781.0A
Other languages
Chinese (zh)
Other versions
CN118233002A (en)
Inventor
吴春春
罗建
Current Assignee
Chengdu Siwei Technology Co ltd
Original Assignee
Chengdu Siwei Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Siwei Technology Co ltd
Priority to CN202410649781.0A
Publication of CN118233002A
Application granted
Publication of CN118233002B
Legal status: Active
Anticipated expiration


Landscapes

  • Testing Of Optical Devices Or Fibers (AREA)
  • Light Guides In General And Applications Therefor (AREA)

Abstract

The invention discloses an optical fiber state evaluation method, belonging to the technical field of optical fiber state evaluation. A test signal is sent at the modulation end of an optical fiber and a demodulated signal is received at the demodulation end. The differences between the two signals in the time domain and in the frequency domain, together with the frequency loss and transmission conditions, form a training set, and the training set is used to train an optical fiber state evaluation model in segments, so that the trained model can evaluate the state of the optical fiber from the differences between the signals before and after transmission.

Description

Optical fiber state evaluation method
Technical Field
The invention relates to the technical field of optical fiber state evaluation, in particular to an optical fiber state evaluation method.
Background
In an optical fiber communication system, light is first modulated, then transmitted in an optical fiber, and finally demodulated at the receiving end. At the start of transmission, information (e.g., sound or data) is converted, or "modulated", onto light waves. Once the information is modulated onto the light waves, the light enters the fiber and propagates through it. When the light wave reaches the far end, the receiving device "demodulates" it, i.e., converts the information carried by the light wave back into its original signal form. The optical fiber is therefore an important carrier for transmitting signals. However, as a line, the optical fiber is susceptible to damage such as folding and bending, which changes how signals are transmitted. The prior art cannot accurately evaluate the state of an optical fiber line, so potential faults in the line are difficult to discover.
Disclosure of Invention
Aiming at the defects in the prior art, the optical fiber state evaluation method provided by the invention solves the problem that the prior art cannot accurately evaluate the state of an optical fiber line.
In order to achieve the aim of the invention, the invention adopts the following technical scheme: a method of evaluating the condition of an optical fiber, comprising the steps of:
S1, a section of test signal is sent at the modulation end of an optical fiber, and a demodulated signal is obtained at the demodulation end of the optical fiber;
S2, a time-domain difference sequence, a frequency amplitude difference sequence and a frequency set are obtained according to the demodulated signal and the test signal;
S3, the time-domain difference sequence, the frequency amplitude difference sequence and the frequency set are constructed into a training set;
S4, the optical fiber state evaluation model is trained in segments using the training set to obtain a trained optical fiber state evaluation model;
S5, the state of the optical fiber is evaluated using the trained optical fiber state evaluation model.
The beneficial effects of the invention are as follows: a test signal is sent at the modulation end of the optical fiber and a demodulated signal is received at the demodulation end. The differences between the signals in the time domain and the frequency domain, together with the frequency loss and transmission conditions, form a training set, which is used to train an optical fiber state evaluation model in segments, so that the trained model can evaluate the state of the optical fiber from the differences between the signals before and after transmission.
Further, the method for obtaining the time-domain difference sequence in S2 includes: in the time domain, the demodulated signal and the test signal are subtracted at each acquisition time point to obtain the time-domain difference sequence.
The method for obtaining the frequency amplitude difference sequence in S2 includes: in the frequency domain, the amplitudes of the demodulated signal and the test signal are subtracted at each frequency value to obtain the frequency amplitude difference sequence.
Further, the method for acquiring the frequency set in S2 includes the following steps:
A1, constructing the frequency values of the demodulated signal into a demodulation frequency sequence, and constructing the frequency values of the test signal into a test frequency sequence;
A2, taking the intersection of the demodulation frequency sequence and the test frequency sequence to obtain a transmission frequency sequence, and counting the number of transmission frequencies in the transmission frequency sequence;
A3, removing the elements contained in the transmission frequency sequence from the test frequency sequence to obtain the number of untransmitted frequencies;
A4, removing the elements contained in the transmission frequency sequence from the demodulation frequency sequence to obtain the number of noise frequencies;
A5, constructing the number of transmission frequencies, the number of untransmitted frequencies and the number of noise frequencies into a frequency set.
The beneficial effects of the above further scheme are: in the time domain, the time-domain difference sequence obtained at each acquisition time point expresses how the shape of the signal has changed; in the frequency domain, subtracting the amplitudes at each frequency value yields the frequency amplitude difference sequence, which expresses the amplitude loss at each frequency. By counting how the set of frequency values changes before and after transmission, the loss, transmission and noise that the optical fiber produces at each frequency point can be obtained.
Further, the optical fiber state evaluation model in S4 includes: a first shallow feature extraction unit, a second shallow feature extraction unit, a first depth feature extraction unit, a second depth feature extraction unit, a feature fusion unit, a first sub-evaluation unit, a second sub-evaluation unit and an evaluation output unit;
The input end of the first shallow feature extraction unit is used for inputting the time-domain difference sequence, and its output end is connected with the input end of the first depth feature extraction unit; the input end of the second shallow feature extraction unit is used for inputting the frequency amplitude difference sequence, and its output end is connected with the input end of the second depth feature extraction unit; the input end of the second sub-evaluation unit is used for inputting the frequency set; the input ends of the feature fusion unit are connected with the output end of the first depth feature extraction unit and the output end of the second depth feature extraction unit respectively, and the output end of the feature fusion unit is connected with the input end of the first sub-evaluation unit; the input ends of the evaluation output unit are connected with the output end of the first sub-evaluation unit and the output end of the second sub-evaluation unit respectively, and the output end of the evaluation output unit serves as the output end of the optical fiber state evaluation model.
The beneficial effects of the above further scheme are: the optical fiber state evaluation model contains three processing channels, and each type of data is processed in its own channel. Because the time-domain difference sequence and the frequency amplitude difference sequence carry large amounts of data, a shallow feature extraction unit and a depth feature extraction unit are used to process them; the extracted features are then fused to further highlight the data characteristics while reducing the data volume. The first sub-evaluation unit evaluates the first part of the features, the second sub-evaluation unit evaluates the second part, and the evaluation output unit combines the two evaluations to obtain the state of the optical fiber.
Further, the first shallow feature extraction unit and the second shallow feature extraction unit have the same structure, and each include: a first convolution layer, a second convolution layer, a first averaging pooling layer, and a first scaling layer;
The input end of the first convolution layer is used as the input end of the first shallow layer feature extraction unit or the second shallow layer feature extraction unit, and the output end of the first convolution layer is connected with the input end of the second convolution layer; the input end of the first averaging pooling layer is connected with the output end of the second convolution layer, and the output end of the first averaging pooling layer is connected with the input end of the first scaling layer; the output end of the first scaling layer is used as the output end of the first shallow layer feature extraction unit or the second shallow layer feature extraction unit.
Further, the first depth feature extraction unit and the second depth feature extraction unit have the same structure, and each include: a third convolution layer, a fourth convolution layer, a fifth convolution layer, a sixth convolution layer, a multiplier, a second scaling layer, and an adder;
The input end of the third convolution layer is used as the input ends of the first depth feature extraction unit and the second depth feature extraction unit, and the output ends of the third convolution layer are respectively connected with the input end of the fourth convolution layer and the input end of the fifth convolution layer; the output end of the fifth convolution layer is connected with the input end of the sixth convolution layer; the output end of the fourth convolution layer is connected with the first input end of the multiplier and the input end of the second scaling layer respectively; the second input end of the multiplier is connected with the output end of the sixth convolution layer, and the output end of the multiplier is connected with the first input end of the adder; the second input end of the adder is connected with the output end of the second scaling layer, and the output end of the adder is used as the output end of the first depth feature extraction unit or the second depth feature extraction unit.
Further, the expressions of the first scaling layer and the second scaling layer are:

$$\hat{x}_i = \mathrm{Sigmoid}\left(w_{s,i}\, x_i + b_{s,i}\right)$$

where $\hat{x}_i$ is the $i$-th feature output by the scaling layer, $\mathrm{Sigmoid}$ is the S-type activation function, $x_i$ is the $i$-th feature input to the scaling layer, $w_{s,i}$ is the weight of the $i$-th input feature, and $b_{s,i}$ is the bias of the $i$-th input feature;
The expression of the feature fusion unit is:

$$V = V_1 \circ V_2$$

where $V$ is the output of the feature fusion unit, $V_1$ is the output of the first depth feature extraction unit, $V_2$ is the output of the second depth feature extraction unit, and $\circ$ denotes the Hadamard product.
The beneficial effects of the above further scheme are: scaling layers are placed in both the shallow feature extraction unit and the depth feature extraction unit so that the model can converge. In the depth feature extraction unit, two channels (the fourth convolution layer on one side, and the fifth and sixth convolution layers on the other) process the features output by the third convolution layer; the multiplier fuses and enhances the features, and the second scaling layer establishes a residual connection, again helping the model converge.
Further, the expression of the first sub-evaluation unit is:

$$h_1 = \tanh\left(\sum_{j=1}^{N}\left(w_{p1,j}\, v_j + b_{p1,j}\right)\right)$$

where $h_1$ is the output of the first sub-evaluation unit, $\tanh$ is the hyperbolic tangent function, $v_j$ is the $j$-th feature output by the feature fusion unit, $w_{p1,j}$ is the weight of $v_j$, $b_{p1,j}$ is the bias of $v_j$, $N$ is the number of features $v_j$, and $j$ is a positive integer;
The expression of the second sub-evaluation unit is:

$$F_m = \frac{e^{\,w_{p2,m} X_m + b_{p2,m}}}{\sum_{n=1}^{3} e^{\,w_{p2,n} X_n + b_{p2,n}}}, \quad m = 1, 2, 3$$

$$h_2 = w_{F1} F_1 + w_{F2} F_2 + w_{F3} F_3$$

where $h_2$ is the output of the second sub-evaluation unit, $F_1$ is the first influence proportion, $F_2$ is the second influence proportion, $F_3$ is the third influence proportion, $X_1$ is the number of transmission frequencies, $X_2$ is the number of untransmitted frequencies, $X_3$ is the number of noise frequencies, $w_{p2,1}$, $w_{p2,2}$ and $w_{p2,3}$ are the weights of $X_1$, $X_2$ and $X_3$, $b_{p2,1}$, $b_{p2,2}$ and $b_{p2,3}$ are the biases of $X_1$, $X_2$ and $X_3$, $e$ is the natural constant, and $w_{F1}$, $w_{F2}$ and $w_{F3}$ are the weights of $F_1$, $F_2$ and $F_3$.
The beneficial effects of the above further scheme are: the first sub-evaluation unit integrates every feature output by the feature fusion unit to obtain its output; the second sub-evaluation unit assigns weights and biases to the number of transmission frequencies, the number of untransmitted frequencies and the number of noise frequencies, obtains their respective influence proportions, and integrates these proportions to obtain its output.
Further, the expression of the evaluation output unit is:

$$y = w_{h1}\, h_1 + w_{h2}\, h_2$$

where $y$ is the output of the evaluation output unit, $w_{h1}$ is the weight of $h_1$, and $w_{h2}$ is the weight of $h_2$.
Further, the step of segment training in S4 includes: according to the loss value, updating weights in the first sub-evaluation unit, the second sub-evaluation unit and the evaluation output unit by adopting a first weight updating formula, and updating biases in the first sub-evaluation unit and the second sub-evaluation unit by adopting a first bias updating formula;
Updating weights in the first depth feature extraction unit and the second depth feature extraction unit by adopting a second weight updating formula, and updating biases in the first depth feature extraction unit and the second depth feature extraction unit by adopting a second bias updating formula;
Updating weights in the first shallow feature extraction unit and the second shallow feature extraction unit by adopting a third weight updating formula, and updating biases in the first shallow feature extraction unit and the second shallow feature extraction unit by adopting a third bias updating formula;
the first weight updating formula and the first bias updating formula are:

$$r_{1,k+1} = r_{1,k} - \frac{\partial L_k}{\partial r_{1,k}}$$

where $r_{1,k+1}$ is the first parameter after the update in the $(k+1)$-th training, $r_{1,k}$ is the first parameter after the update in the $k$-th training, $L_k$ is the loss value of the $k$-th training, $k$ is the number of training iterations, and the parameter types include: weight and bias;
the second weight updating formula and the second bias updating formula are:

$$r_{2,k+1} = r_{2,k} - \gamma\, \frac{\partial L_k}{\partial r_{2,k}}$$

where $r_{2,k+1}$ is the second parameter after the update in the $(k+1)$-th training, $r_{2,k}$ is the second parameter after the update in the $k$-th training, and $\gamma$ is the enhancement coefficient, $1 < \gamma < 2$;
The third weight updating formula and the third bias updating formula are:

$$r_{3,k+1} = r_{3,k} - \gamma^{2}\, \frac{\partial L_k}{\partial r_{3,k}}$$

where $r_{3,k+1}$ is the third parameter after the update in the $(k+1)$-th training, and $r_{3,k}$ is the third parameter after the update in the $k$-th training.
The beneficial effects of the above further scheme are: to address the vanishing-gradient problem, the model's weights and biases are updated in segments. The biases and weights at the tail end of the model are easier to train, while those at the input end are harder to train, so the invention adopts three weight and bias updating formulas and, in sequence, updates and enhances the weights and biases of each part so that all of them are trained effectively.
Drawings
FIG. 1 is a flow chart of a method for evaluating the status of an optical fiber;
FIG. 2 is a schematic diagram of a fiber state evaluation model;
FIG. 3 is a schematic structural diagram of a first shallow feature extraction unit and a second shallow feature extraction unit;
Fig. 4 is a schematic structural diagram of the first depth feature extraction unit and the second depth feature extraction unit.
Detailed Description
The following description of the embodiments of the present invention is provided to help those skilled in the art understand the invention, but it should be understood that the invention is not limited to the scope of these embodiments. To those of ordinary skill in the art, all inventions that make use of the inventive concept are within the protection scope of the invention as defined by the appended claims.
As shown in fig. 1, a method for evaluating the state of an optical fiber includes the following steps:
S1, a section of test signal is sent at the modulation end of an optical fiber, and a demodulated signal is obtained at the demodulation end of the optical fiber;
S2, a time-domain difference sequence, a frequency amplitude difference sequence and a frequency set are obtained according to the demodulated signal and the test signal;
S3, the time-domain difference sequence, the frequency amplitude difference sequence and the frequency set are constructed into a training set;
S4, the optical fiber state evaluation model is trained in segments using the training set to obtain a trained optical fiber state evaluation model;
S5, the state of the optical fiber is evaluated using the trained optical fiber state evaluation model.
The method for acquiring the time-domain difference sequence in S2 includes: in the time domain, the demodulated signal and the test signal are subtracted at each acquisition time point to obtain the time-domain difference sequence.
The method for acquiring the frequency amplitude difference sequence in S2 includes: in the frequency domain, the amplitudes of the demodulated signal and the test signal are subtracted at each frequency value to obtain the frequency amplitude difference sequence.
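As an illustration of how these two difference sequences can be computed, the following minimal numpy sketch (function names and toy signals are illustrative, not taken from the patent) subtracts the signals sample-by-sample in the time domain and bin-by-bin in the magnitude spectrum:

```python
import numpy as np

def time_domain_difference(demod, test):
    # Subtract sample-by-sample at matching acquisition time points.
    return demod - test

def frequency_amplitude_difference(demod, test):
    # Subtract magnitude spectra at matching frequency bins.
    return np.abs(np.fft.rfft(demod)) - np.abs(np.fft.rfft(test))

# Toy signals: a clean test tone and an attenuated "demodulated" copy.
t = np.linspace(0.0, 1.0, 256, endpoint=False)
test_sig = np.sin(2 * np.pi * 10 * t)
demod_sig = 0.8 * test_sig          # fiber attenuates the amplitude by 20%
td = time_domain_difference(demod_sig, test_sig)
fd = frequency_amplitude_difference(demod_sig, test_sig)
print(td.shape, fd.shape)
```

With a purely attenuated copy, the time-domain difference is a scaled mirror of the test tone and every frequency bin loses (never gains) amplitude.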
The method for acquiring the frequency set in S2 includes the following steps:
A1, constructing the frequency values of the demodulated signal into a demodulation frequency sequence, and constructing the frequency values of the test signal into a test frequency sequence;
A2, taking the intersection of the demodulation frequency sequence and the test frequency sequence to obtain a transmission frequency sequence, and counting the number of transmission frequencies in the transmission frequency sequence;
A3, removing the elements contained in the transmission frequency sequence from the test frequency sequence to obtain the number of untransmitted frequencies;
A4, removing the elements contained in the transmission frequency sequence from the demodulation frequency sequence to obtain the number of noise frequencies;
A5, constructing the number of transmission frequencies, the number of untransmitted frequencies and the number of noise frequencies into a frequency set.
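The counting in A1 to A5 reduces to simple set operations; the sketch below (illustrative names and frequency values, not from the patent) derives the three counts that make up the frequency set:

```python
def frequency_set(demod_freqs, test_freqs):
    # A2: the intersection gives the transmitted frequencies.
    transmitted = set(demod_freqs) & set(test_freqs)
    # A3: test-only frequencies were lost in transit (untransmitted).
    untransmitted = set(test_freqs) - transmitted
    # A4: demodulated-only frequencies are noise introduced by the fiber.
    noise = set(demod_freqs) - transmitted
    # A5: the frequency set holds the three counts (X1, X2, X3).
    return len(transmitted), len(untransmitted), len(noise)

# 40 Hz was lost, 55 Hz appeared as noise; 10/20/30 Hz got through.
x1, x2, x3 = frequency_set([10, 20, 30, 55], [10, 20, 30, 40])
print(x1, x2, x3)
```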
In the time domain, the time-domain difference sequence obtained at each acquisition time point expresses how the shape of the signal has changed; in the frequency domain, subtracting the amplitudes at each frequency value yields the frequency amplitude difference sequence, which expresses the amplitude loss at each frequency. By counting how the set of frequency values changes before and after transmission, the loss, transmission and noise that the optical fiber produces at each frequency point can be obtained.
As shown in fig. 2, the optical fiber state evaluation model in S4 includes: a first shallow feature extraction unit, a second shallow feature extraction unit, a first depth feature extraction unit, a second depth feature extraction unit, a feature fusion unit, a first sub-evaluation unit, a second sub-evaluation unit and an evaluation output unit;
The input end of the first shallow feature extraction unit is used for inputting the time-domain difference sequence, and its output end is connected with the input end of the first depth feature extraction unit; the input end of the second shallow feature extraction unit is used for inputting the frequency amplitude difference sequence, and its output end is connected with the input end of the second depth feature extraction unit; the input end of the second sub-evaluation unit is used for inputting the frequency set; the input ends of the feature fusion unit are connected with the output end of the first depth feature extraction unit and the output end of the second depth feature extraction unit respectively, and the output end of the feature fusion unit is connected with the input end of the first sub-evaluation unit; the input ends of the evaluation output unit are connected with the output end of the first sub-evaluation unit and the output end of the second sub-evaluation unit respectively, and the output end of the evaluation output unit serves as the output end of the optical fiber state evaluation model.
The optical fiber state evaluation model contains three processing channels, and each type of data is processed in its own channel. Because the time-domain difference sequence and the frequency amplitude difference sequence carry large amounts of data, a shallow feature extraction unit and a depth feature extraction unit are used to process them; the extracted features are then fused to further highlight the data characteristics while reducing the data volume. The first sub-evaluation unit evaluates the first part of the features, the second sub-evaluation unit evaluates the second part, and the evaluation output unit combines the two evaluations to obtain the state of the optical fiber.
As shown in fig. 3, the first shallow feature extraction unit and the second shallow feature extraction unit have the same structure, and each includes: a first convolution layer, a second convolution layer, a first averaging pooling layer, and a first scaling layer;
The input end of the first convolution layer is used as the input end of the first shallow layer feature extraction unit or the second shallow layer feature extraction unit, and the output end of the first convolution layer is connected with the input end of the second convolution layer; the input end of the first averaging pooling layer is connected with the output end of the second convolution layer, and the output end of the first averaging pooling layer is connected with the input end of the first scaling layer; the output end of the first scaling layer is used as the output end of the first shallow layer feature extraction unit or the second shallow layer feature extraction unit.
As shown in fig. 4, the first depth feature extraction unit and the second depth feature extraction unit have the same structure, and each includes: a third convolution layer, a fourth convolution layer, a fifth convolution layer, a sixth convolution layer, a multiplier, a second scaling layer, and an adder;
The input end of the third convolution layer is used as the input ends of the first depth feature extraction unit and the second depth feature extraction unit, and the output ends of the third convolution layer are respectively connected with the input end of the fourth convolution layer and the input end of the fifth convolution layer; the output end of the fifth convolution layer is connected with the input end of the sixth convolution layer; the output end of the fourth convolution layer is connected with the first input end of the multiplier and the input end of the second scaling layer respectively; the second input end of the multiplier is connected with the output end of the sixth convolution layer, and the output end of the multiplier is connected with the first input end of the adder; the second input end of the adder is connected with the output end of the second scaling layer, and the output end of the adder is used as the output end of the first depth feature extraction unit or the second depth feature extraction unit.
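The wiring of the two extraction units can be sketched in plain numpy. The convolution kernels, pooling window and scaling parameters below are illustrative stand-ins for trained layers, not values from the patent:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv1d(x, kernel):
    # 'same'-length 1-D convolution standing in for a trained conv layer.
    return np.convolve(x, kernel, mode="same")

def scaling_layer(x, w, b):
    # Elementwise Sigmoid(w * x + b), matching the scaling-layer formula.
    return sigmoid(w * x + b)

def shallow_feature_unit(x, k1, k2):
    # first conv -> second conv -> first averaging pooling -> first scaling
    h = conv1d(conv1d(x, k1), k2)
    pooled = h.reshape(-1, 2).mean(axis=1)     # window-2 average pooling
    return scaling_layer(pooled, 1.0, 0.0)

def depth_feature_unit(x, k3, k4, k5, k6):
    trunk = conv1d(x, k3)                      # third conv layer, two branches
    a = conv1d(trunk, k4)                      # fourth conv layer
    b = conv1d(conv1d(trunk, k5), k6)          # fifth -> sixth conv layers
    fused = a * b                              # multiplier
    residual = scaling_layer(a, 1.0, 0.0)      # second scaling layer
    return fused + residual                    # adder (residual connection)

rng = np.random.default_rng(0)
ks = [rng.standard_normal(3) for _ in range(6)]
x = rng.standard_normal(64)                    # e.g. a time-domain difference sequence
shallow = shallow_feature_unit(x, ks[0], ks[1])
deep = depth_feature_unit(shallow, ks[2], ks[3], ks[4], ks[5])
print(shallow.shape, deep.shape)
```

The shallow unit halves the sequence length through pooling, while the depth unit preserves it, mirroring how the model first compresses the raw difference sequences before deeper feature extraction.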
The expressions of the first scaling layer and the second scaling layer are:

$$\hat{x}_i = \mathrm{Sigmoid}\left(w_{s,i}\, x_i + b_{s,i}\right)$$

where $\hat{x}_i$ is the $i$-th feature output by the scaling layer, $\mathrm{Sigmoid}$ is the S-type activation function, $x_i$ is the $i$-th feature input to the scaling layer, $w_{s,i}$ is the weight of the $i$-th input feature, and $b_{s,i}$ is the bias of the $i$-th input feature;
The expression of the feature fusion unit is:

$$V = V_1 \circ V_2$$

where $V$ is the output of the feature fusion unit, $V_1$ is the output of the first depth feature extraction unit, $V_2$ is the output of the second depth feature extraction unit, and $\circ$ denotes the Hadamard product.
According to the invention, scaling layers are placed in both the shallow feature extraction unit and the depth feature extraction unit so that the model can converge. In the depth feature extraction unit, two channels (the fourth convolution layer on one side, and the fifth and sixth convolution layers on the other) process the features output by the third convolution layer; the multiplier fuses and enhances the features, and the second scaling layer establishes a residual connection, again helping the model converge.
The expression of the first sub-evaluation unit is:

$$h_1 = \tanh\left(\sum_{j=1}^{N}\left(w_{p1,j}\, v_j + b_{p1,j}\right)\right)$$

where $h_1$ is the output of the first sub-evaluation unit, $\tanh$ is the hyperbolic tangent function, $v_j$ is the $j$-th feature output by the feature fusion unit, $w_{p1,j}$ is the weight of $v_j$, $b_{p1,j}$ is the bias of $v_j$, $N$ is the number of features $v_j$, and $j$ is a positive integer;
The expression of the second sub-evaluation unit is:

$$F_m = \frac{e^{\,w_{p2,m} X_m + b_{p2,m}}}{\sum_{n=1}^{3} e^{\,w_{p2,n} X_n + b_{p2,n}}}, \quad m = 1, 2, 3$$

$$h_2 = w_{F1} F_1 + w_{F2} F_2 + w_{F3} F_3$$

where $h_2$ is the output of the second sub-evaluation unit, $F_1$ is the first influence proportion, $F_2$ is the second influence proportion, $F_3$ is the third influence proportion, $X_1$ is the number of transmission frequencies, $X_2$ is the number of untransmitted frequencies, $X_3$ is the number of noise frequencies, $w_{p2,1}$, $w_{p2,2}$ and $w_{p2,3}$ are the weights of $X_1$, $X_2$ and $X_3$, $b_{p2,1}$, $b_{p2,2}$ and $b_{p2,3}$ are the biases of $X_1$, $X_2$ and $X_3$, $e$ is the natural constant, and $w_{F1}$, $w_{F2}$ and $w_{F3}$ are the weights of $F_1$, $F_2$ and $F_3$.
In the first sub-evaluation unit, every feature output by the feature fusion unit is integrated to obtain the output of the first sub-evaluation unit; in the second sub-evaluation unit, weights and biases are assigned to the number of transmission frequencies, the number of untransmitted frequencies and the number of noise frequencies to obtain their respective influence proportions, which are then integrated to obtain the output of the second sub-evaluation unit.
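A numerical sketch of the two sub-evaluation units follows. The normalised-exponential form used for the influence proportions is an assumption (suggested by the text's mention of the natural constant e), and all parameter values are illustrative:

```python
import numpy as np

def first_sub_evaluation(v, w, b):
    # h1 = tanh(sum_j (w_j * v_j + b_j)): integrate every fused feature.
    return float(np.tanh(np.sum(w * v + b)))

def second_sub_evaluation(counts, w_p, b_p, w_f):
    # Influence proportions F_1..F_3 from the three frequency counts.
    # The normalised-exponential (softmax-like) form is an assumption.
    z = np.exp(w_p * counts + b_p)
    proportions = z / z.sum()
    # h2 integrates the proportions with their weights w_F1..w_F3.
    return float(np.dot(w_f, proportions)), proportions

rng = np.random.default_rng(1)
v = rng.standard_normal(8)
h1 = first_sub_evaluation(v, rng.standard_normal(8), rng.standard_normal(8))
h2, f = second_sub_evaluation(np.array([3.0, 1.0, 1.0]),   # X1, X2, X3
                              np.array([0.2, 0.5, 0.9]),   # weights of X_m
                              np.zeros(3),                  # biases of X_m
                              np.array([0.5, 0.3, 0.2]))    # weights of F_m
print(h1, h2)
```

Whatever the counts, the three proportions sum to one, so h2 stays bounded by the smallest and largest of the F-weights.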
The expression of the evaluation output unit is:

$$y = w_{h1}\, h_1 + w_{h2}\, h_2$$

where $y$ is the output of the evaluation output unit, $w_{h1}$ is the weight of $h_1$, and $w_{h2}$ is the weight of $h_2$.
The step of segment training in S4 comprises the following steps: according to the loss value, updating weights in the first sub-evaluation unit, the second sub-evaluation unit and the evaluation output unit by adopting a first weight updating formula, and updating biases in the first sub-evaluation unit and the second sub-evaluation unit by adopting a first bias updating formula;
Updating weights in the first depth feature extraction unit and the second depth feature extraction unit by adopting a second weight updating formula, and updating biases in the first depth feature extraction unit and the second depth feature extraction unit by adopting a second bias updating formula;
Updating weights in the first shallow feature extraction unit and the second shallow feature extraction unit by adopting a third weight updating formula, and updating biases in the first shallow feature extraction unit and the second shallow feature extraction unit by adopting a third bias updating formula;
the first weight updating formula and the first bias updating formula are:

$$r_{1,k+1} = r_{1,k} - \frac{\partial L_k}{\partial r_{1,k}}$$

where $r_{1,k+1}$ is the first parameter after the update in the $(k+1)$-th training, $r_{1,k}$ is the first parameter after the update in the $k$-th training, $L_k$ is the loss value of the $k$-th training, $k$ is the number of training iterations, and the parameter types include: weight and bias;
the second weight updating formula and the second bias updating formula are:

$$r_{2,k+1} = r_{2,k} - \gamma\, \frac{\partial L_k}{\partial r_{2,k}}$$

where $r_{2,k+1}$ is the second parameter after the update in the $(k+1)$-th training, $r_{2,k}$ is the second parameter after the update in the $k$-th training, and $\gamma$ is the enhancement coefficient, $1 < \gamma < 2$;
The third weight updating formula and the third bias updating formula are:

$$r_{3,k+1} = r_{3,k} - \gamma^{2}\, \frac{\partial L_k}{\partial r_{3,k}}$$

where $r_{3,k+1}$ is the third parameter after the update in the $(k+1)$-th training, and $r_{3,k}$ is the third parameter after the update in the $k$-th training.
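The three-segment update can be sketched as follows. Since the exact update formulas are not reproduced in the text, the gradient-step form with enhancement factors gamma and gamma squared is a reconstruction, and the numbers are illustrative:

```python
def segmented_update(r1, r2, r3, g1, g2, g3, gamma=1.5):
    # r1: parameters of the sub-evaluation / evaluation output units (tail),
    # r2: parameters of the depth feature extraction units,
    # r3: parameters of the shallow feature extraction units (input end).
    # g1..g3 are the corresponding gradients of the loss L_k.  Groups
    # farther from the output get progressively enhanced steps
    # (gamma, then gamma**2, with 1 < gamma < 2) so they still train
    # effectively despite weaker gradients.
    return (r1 - g1,
            r2 - gamma * g2,
            r3 - gamma ** 2 * g3)

print(segmented_update(1.0, 1.0, 1.0, 0.1, 0.1, 0.1))
```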
In this embodiment, the loss function used to compute the loss value may be a mean-square-error loss function or a mean-absolute-error loss function.
To address the vanishing-gradient problem, the model's weights and biases are updated in segments. The biases and weights at the tail end of the model are easier to train, while those at the input end are harder to train, so the invention adopts three weight and bias updating formulas and, in sequence, updates and enhances the weights and biases of each part so that all of them are trained effectively.
According to the invention, a test signal is sent at the modulation end of the optical fiber and a demodulated signal is received at the demodulation end. The differences between the signals in the time domain and the frequency domain, together with the frequency loss and transmission conditions, form a training set, which is used to train the optical fiber state evaluation model in segments, so that the trained model can evaluate the state of the optical fiber from the differences between the signals before and after transmission.
The above is only a preferred embodiment of the present invention and is not intended to limit it; those skilled in the art can make various modifications and variations. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its protection scope.

Claims (6)

1. A method for evaluating the state of an optical fiber, comprising the steps of:
S1, a test signal is sent at the modulation end of the optical fiber, and a demodulated signal is obtained at the demodulation end of the optical fiber;
S2, a time-domain difference sequence, a frequency-amplitude difference sequence, and a frequency set are obtained from the demodulated signal and the test signal;
S3, the time-domain difference sequence, the frequency-amplitude difference sequence, and the frequency set are constructed into a training set;
S4, the optical fiber state evaluation model is trained in segments with the training set to obtain a trained optical fiber state evaluation model;
S5, the state of the optical fiber is evaluated with the trained optical fiber state evaluation model;
The method for acquiring the time-domain difference sequence in S2 comprises: in the time domain, subtracting the test signal from the demodulated signal at each acquisition time point to obtain the time-domain difference sequence;
The method for acquiring the frequency-amplitude difference sequence in S2 comprises: in the frequency domain, subtracting the amplitude of the test signal from the amplitude of the demodulated signal at each frequency value to obtain the frequency-amplitude difference sequence;
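A minimal sketch of the two difference sequences, assuming the time-domain difference is the demodulated signal minus the test signal and using FFT magnitudes for the frequency-amplitude difference (the patent does not prescribe a particular transform; names are illustrative):

```python
import numpy as np

def difference_sequences(test_sig, demod_sig):
    """Return (time-domain difference, frequency-amplitude difference).

    Time domain: sample-wise subtraction at each acquisition time point.
    Frequency domain: subtraction of FFT magnitudes at each frequency bin.
    """
    test_sig = np.asarray(test_sig, dtype=float)
    demod_sig = np.asarray(demod_sig, dtype=float)
    time_diff = demod_sig - test_sig
    freq_diff = np.abs(np.fft.rfft(demod_sig)) - np.abs(np.fft.rfft(test_sig))
    return time_diff, freq_diff
```

For identical signals both sequences are all zeros, which is the expected signature of a lossless, noise-free fiber.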
The method for acquiring the frequency set in S2 comprises the following steps:
A1, constructing the frequency values of the demodulated signal into a demodulation frequency sequence, and constructing the frequency values of the test signal into a test frequency sequence;
A2, taking the intersection of the demodulation frequency sequence and the test frequency sequence to obtain a transmission frequency sequence, and counting the number of transmission frequencies in the transmission frequency sequence;
A3, removing the elements contained in the transmission frequency sequence from the test frequency sequence to obtain the number of untransmitted frequencies;
A4, removing the elements contained in the transmission frequency sequence from the demodulation frequency sequence to obtain the number of noise frequencies;
A5, constructing the number of transmission frequencies, the number of untransmitted frequencies, and the number of noise frequencies into a frequency set;
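Steps A1-A5 above reduce to set operations; a sketch, assuming the frequency values present in each signal have already been extracted (the extraction criterion is not specified in the claim):

```python
def frequency_set(test_freqs, demod_freqs):
    """A1-A5 as set operations over the frequency values present in the
    test and demodulated signals."""
    test_freqs, demod_freqs = set(test_freqs), set(demod_freqs)
    transmitted = test_freqs & demod_freqs     # A2: intersection
    untransmitted = test_freqs - transmitted   # A3: lost in the fiber
    noise = demod_freqs - transmitted          # A4: added by the channel
    return len(transmitted), len(untransmitted), len(noise)  # A5

# e.g. frequency_set({10, 20, 30}, {10, 30, 55}) -> (2, 1, 1)
```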
the optical fiber state evaluation model in S4 includes: the device comprises a first shallow feature extraction unit, a second shallow feature extraction unit, a first depth feature extraction unit, a second depth feature extraction unit, a feature fusion unit, a first sub-evaluation unit, a second sub-evaluation unit and an evaluation output unit;
The input end of the first shallow feature extraction unit is used for inputting the time-domain difference sequence, and its output end is connected with the input end of the first depth feature extraction unit; the input end of the second shallow feature extraction unit is used for inputting the frequency-amplitude difference sequence, and its output end is connected with the input end of the second depth feature extraction unit; the input end of the second sub-evaluation unit is used for inputting the frequency set; the input end of the feature fusion unit is connected with the output ends of the first depth feature extraction unit and the second depth feature extraction unit respectively, and its output end is connected with the input end of the first sub-evaluation unit; the input end of the evaluation output unit is connected with the output ends of the first sub-evaluation unit and the second sub-evaluation unit respectively, and its output end serves as the output end of the optical fiber state evaluation model;
The segmented training in S4 comprises the following steps: according to the loss value, updating the weights in the first sub-evaluation unit, the second sub-evaluation unit, and the evaluation output unit with the first weight updating formula, and updating the biases in the first sub-evaluation unit and the second sub-evaluation unit with the first bias updating formula;
Updating weights in the first depth feature extraction unit and the second depth feature extraction unit by adopting a second weight updating formula, and updating biases in the first depth feature extraction unit and the second depth feature extraction unit by adopting a second bias updating formula;
Updating weights in the first shallow feature extraction unit and the second shallow feature extraction unit by adopting a third weight updating formula, and updating biases in the first shallow feature extraction unit and the second shallow feature extraction unit by adopting a third bias updating formula;
The first weight updating formula and the first bias updating formula are:
wherein r_{1,k+1} is the first parameter updated in the (k+1)th training, r_{1,k} is the first parameter updated in the kth training, L_k is the loss value in the kth training, and k is the number of training iterations; the parameter types include weight and bias;
The second weight updating formula and the second bias updating formula are:
wherein r_{2,k+1} is the second parameter updated in the (k+1)th training, r_{2,k} is the second parameter updated in the kth training, and γ is the enhancement coefficient, satisfying 1 < γ < 2;
The third weight updating formula and the third bias updating formula are:
where r_{3,k+1} is the third parameter updated in the (k+1)th training, and r_{3,k} is the third parameter updated in the kth training.
2. The optical fiber state evaluation method according to claim 1, wherein the first shallow feature extraction unit and the second shallow feature extraction unit are identical in structure, each comprising: a first convolution layer, a second convolution layer, a first averaging pooling layer, and a first scaling layer;
The input end of the first convolution layer is used as the input end of the first shallow layer feature extraction unit or the second shallow layer feature extraction unit, and the output end of the first convolution layer is connected with the input end of the second convolution layer; the input end of the first averaging pooling layer is connected with the output end of the second convolution layer, and the output end of the first averaging pooling layer is connected with the input end of the first scaling layer; the output end of the first scaling layer is used as the output end of the first shallow layer feature extraction unit or the second shallow layer feature extraction unit.
3. The optical fiber state evaluation method according to claim 2, wherein the first depth feature extraction unit and the second depth feature extraction unit are identical in structure, each comprising: a third convolution layer, a fourth convolution layer, a fifth convolution layer, a sixth convolution layer, a multiplier, a second scaling layer, and an adder;
The input end of the third convolution layer is used as the input ends of the first depth feature extraction unit and the second depth feature extraction unit, and the output ends of the third convolution layer are respectively connected with the input end of the fourth convolution layer and the input end of the fifth convolution layer; the output end of the fifth convolution layer is connected with the input end of the sixth convolution layer; the output end of the fourth convolution layer is connected with the first input end of the multiplier and the input end of the second scaling layer respectively; the second input end of the multiplier is connected with the output end of the sixth convolution layer, and the output end of the multiplier is connected with the first input end of the adder; the second input end of the adder is connected with the output end of the second scaling layer, and the output end of the adder is used as the output end of the first depth feature extraction unit or the second depth feature extraction unit.
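The dataflow of this unit can be sketched as follows. The convolution kernels, the 'same'-length 1-D convolution stand-in, and the sigmoid-affine form of the scaling layer are all assumptions for illustration (the scaling-layer formula itself is given only as a figure in the original):

```python
import numpy as np

def conv1d(x, kernel):
    """'same'-length 1-D convolution, an illustrative stand-in for a conv layer."""
    return np.convolve(x, kernel, mode="same")

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def depth_feature_unit(x, k3, k4, k5, k6, w_s, b_s):
    """Dataflow of claim 3: conv3 feeds two branches; the conv4 branch is
    gated (multiplier) by the conv5 -> conv6 branch and then added to a
    scaled copy of itself (second scaling layer -> adder)."""
    t = conv1d(x, k3)                  # third convolution layer
    a = conv1d(t, k4)                  # fourth convolution layer
    b = conv1d(conv1d(t, k5), k6)      # fifth then sixth convolution layer
    gated = a * b                      # multiplier (element-wise)
    scaled = sigmoid(w_s * a + b_s)    # second scaling layer (assumed form)
    return gated + scaled              # adder -> unit output
```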
4. The method of claim 3, wherein the expressions of the first scaling layer and the second scaling layer are:
wherein the output term is the ith feature of the scaling layer output, sigmoid is the S-type (logistic) activation function, x_i is the ith feature of the scaling layer input, w_{s,i} is the weight of the ith feature of the scaling layer input, and b_{s,i} is the bias of the ith feature of the scaling layer input;
The expression of the feature fusion unit is as follows:
wherein V is the output of the feature fusion unit, V_1 is the output of the first depth feature extraction unit, V_2 is the output of the second depth feature extraction unit, and ⊙ denotes the Hadamard product.
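Assuming the fusion is the plain element-wise (Hadamard) product of the two depth feature outputs, the fusion unit is a one-liner:

```python
import numpy as np

def feature_fusion(v1, v2):
    """Claim 4 fusion: V = V1 ⊙ V2, the element-wise (Hadamard) product of
    the two depth feature extraction unit outputs (assumed interpretation)."""
    return np.asarray(v1) * np.asarray(v2)
```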
5. The optical fiber state evaluation method according to claim 1, wherein the expression of the first sub-evaluation unit is:
wherein h_1 is the output of the first sub-evaluation unit, tanh is the hyperbolic tangent function, v_j is the jth feature output by the feature fusion unit, w_{p1,j} is the weight of v_j, b_{p1,j} is the bias of v_j, N is the number of features v_j, and j is a positive integer;
The expression of the second sub-evaluation unit is:
where h_2 is the output of the second sub-evaluation unit, F_1 is the first influence proportion, F_2 is the second influence proportion, F_3 is the third influence proportion, X_1 is the number of transmission frequencies, X_2 is the number of untransmitted frequencies, X_3 is the number of noise frequencies, w_{p2,1} is the weight of X_1, w_{p2,2} is the weight of X_2, w_{p2,3} is the weight of X_3, b_{p2,1} is the bias of X_1, b_{p2,2} is the bias of X_2, b_{p2,3} is the bias of X_3, e is the natural constant, w_{F1} is the weight of F_1, w_{F2} is the weight of F_2, and w_{F3} is the weight of F_3.
6. The optical fiber state evaluation method according to claim 5, wherein the expression of the evaluation output unit is:
where y is the output of the evaluation output unit, w_{h1} is the weight of h_1, and w_{h2} is the weight of h_2.
CN202410649781.0A 2024-05-24 2024-05-24 Optical fiber state evaluation method Active CN118233002B (en)

Publications (2)

Publication Number Publication Date
CN118233002A (en) 2024-06-21
CN118233002B (en) 2024-07-16




