CN112152953A - Random sequence construction method for neural network equalizer training - Google Patents


Info

Publication number
CN112152953A
CN112152953A
Authority
CN
China
Prior art keywords
sequence
random
neural network
sequences
random sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011007542.3A
Other languages
Chinese (zh)
Inventor
义理林 (Lilin Yi)
廖韬 (Tao Liao)
黄璐瑶 (Luyao Huang)
胡卫生 (Weisheng Hu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN202011007542.3A priority Critical patent/CN112152953A/en
Publication of CN112152953A publication Critical patent/CN112152953A/en
Pending legal-status Critical Current


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L25/00 Baseband systems
    • H04L25/02 Details; arrangements for supplying electrical power along data transmission lines
    • H04L25/03 Shaping networks in transmitter or receiver, e.g. adaptive shaping networks
    • H04L25/03006 Arrangements for removing intersymbol interference
    • H04L25/03165 Arrangements for removing intersymbol interference using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L25/00 Baseband systems
    • H04L25/02 Details; arrangements for supplying electrical power along data transmission lines
    • H04L25/03 Shaping networks in transmitter or receiver, e.g. adaptive shaping networks
    • H04L25/03828 Arrangements for spectral shaping; Arrangements for providing signals with specified spectral properties
    • H04L25/03866 Arrangements for spectral shaping; Arrangements for providing signals with specified spectral properties using scrambling


Abstract

The invention discloses a random sequence construction method for neural network equalizer training, relating to the field of channel equalization in optical fiber communication. Three mutually independent random sequences are selected: one serves as an index sequence and the other two as data sequences. During construction, the value of each index-sequence bit selects which data sequence contributes its next element to the constructed sequence, until a new random sequence of the required length is obtained. The invention effectively increases the complexity of the random-sequence generation rule without changing its random statistics, can effectively mask the data generation rule from a given neural network equalizer or other advanced channel-equalization algorithm, ensures reliable training of the equalization model, and can be used to modulate various transmission signals as training data for a channel equalizer.

Description

Random sequence construction method for neural network equalizer training
Technical Field
The invention relates to the field of channel equalization in optical fiber communication, and in particular to a random sequence construction method for neural network equalizer training.
Background
Neural networks offer powerful capabilities in modeling, classification and prediction, have attracted wide attention in the optical fiber communication field, and have been applied to research on optical network performance monitoring, optical network management, modulation format identification and channel equalization. In signal equalization, the neural network is regarded as a powerful tool for compensating signal distortion in communication systems, and can effectively compensate both linear and nonlinear distortion. Related studies report noticeable performance improvements of neural networks over traditional equalization algorithms.
However, as research into neural network equalization algorithms has deepened, related studies have found that a neural network can learn a Pseudo-Random Binary Sequence (PRBS) and thereby produce abnormal performance evaluations. The PRBS is a pseudo-random sequence widely used in communications: a binary sequence with good random statistical characteristics, generated by exclusive-or operations based on a specific primitive polynomial. For this reason, the PRBS was also initially applied in equalization performance tests of neural-network-based models.
However, using a transmission signal modulated by a PRBS as training data allows the neural network to learn the generation rule of the PRBS, which in turn makes the model show deceptively excellent performance on test data following the same rule (T. A. Eriksson, H. Bülow and A. Leven, "Applying neural networks in optical communication systems: possible pitfalls," IEEE Photonics Technology Letters, vol. 29, no. 23, pp. 2091-2094, 2017). Related research shows that when the input size of the neural network is large enough to cover the recurrence rule of the PRBS, the network learns the PRBS generation rule during training; it then achieves unrealistically high performance when equalizing signals generated from sequences with the same rule, while obtaining poor equalization performance on data with different rules. Some studies on neural network equalizers have neither recognized nor circumvented this problem, leading to overestimation of neural network equalizer performance.
On the other hand, L. Shu et al. further verified the phenomenon of neural networks learning the PRBS. By extracting the trained network parameters, that work demonstrated that the neural network effectively exploits the PRBS generation rule to help decide the equalization target, thereby causing abnormal equalization performance (L. Shu, J. Li, Z. Wan, W. Zhang, S. Fu and K. Xu, "Overestimation trap of artificial neural network: learning the rule of PRBS," in European Conference on Optical Communication, paper Tu4F, 2018). The same work also showed that a single-hidden-layer network can learn the PRBS rule whenever it has at least 2 hidden nodes, which is enough to cause the anomaly. C.-Y. Chuang et al. verified that random sequences generated by the Mersenne Twister algorithm cannot be learned by common neural networks and can therefore be used to generate signal data for transmission systems; that work shows that the data-rule learning problem can be avoided with long-period random sequences whose recurrence scale exceeds the coverage of typical neural network inputs (C.-Y. Chuang, L.-W. Liu, C.-C. Wei, J.-J. Liu, L. Henrickson, C.-L. Wang, Y.-K. Chen and J. Chen, "Study of training patterns for employing deep neural networks in optical communication systems," in European Conference on Optical Communication, paper Tu4F.2, 2018).
Therefore, those skilled in the art are working to develop reliable random sequences that ensure effective training of advanced equalization models such as neural network equalizers.
Disclosure of Invention
In view of the above drawbacks of the prior art, the technical problem to be solved by the present invention is to provide a simple method for obtaining a high-complexity random sequence that masks data features and can serve as an effective training sequence for a neural network equalizer.
In order to achieve the above object, the present invention provides a random sequence constructing method for neural network equalizer training, which comprises the following steps:
step1, selecting three random sequences;
step2, selecting one of the random sequences as an index sequence Si, and using the other two random sequences as data sequences, denoted S0 and S1 respectively;
step3, mapping Si into a binary sequence Sb;
step4, constructing a new sequence Sc based on Sb: if the current bit of Sb is 0, removing the current head element of S0 and appending it to Sc; if the current bit of Sb is 1, removing the current head element of S1 and appending it to Sc; and traversing Sb until Sc reaches the specified length.
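The four steps above can be sketched in a few lines of Python. This is an illustrative sketch only, not part of the patent; the function name and interface are chosen here for clarity:

```python
from itertools import cycle

def construct_sequence(sb, s0, s1, length):
    """Steps 1-4: walk the binary index sequence Sb and move the head
    element of S0 (bit 0) or S1 (bit 1) into the constructed sequence Sc.
    All three sequences are recycled when exhausted."""
    it_b, it0, it1 = cycle(sb), cycle(s0), cycle(s1)
    sc = []
    while len(sc) < length:
        sc.append(next(it1) if next(it_b) else next(it0))
    return sc
```

For example, with `sb = [0, 1]`, `s0 = [7, 8]`, `s1 = [3]` and `length = 5`, the construction yields `[7, 3, 8, 3, 7]`.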
Further, the method comprises:
step5, substituting Sc for any one of the sequences in step2 and executing step3 and step4 again, which further increases the complexity of the final constructed sequence.
Further, step5 may be executed repeatedly to keep increasing the complexity of the generated sequence so as to effectively mask its generation rule.
Further, the random sequence is a Pseudo-Random Binary Sequence (PRBS).
Further, the generation rules of the random sequences are mutually independent.
Further, the random sequences are constructed by the same random algorithm based on mutually independent parameters.
Further, S0 and S1 follow the same distribution, thereby ensuring that the target sequence Sc follows that distribution as well.
Further, when mapping Si to Sb in step3, the 0 and 1 bits in Sb should be uniformly distributed. Specifically, if the index sequence Si is itself a uniformly distributed random binary sequence, this step may be skipped; if Si is a random sequence under some other uniform distribution, the distribution mean can be used as the threshold to map the sequence evenly into 0s and 1s; if Si obeys any other distribution, the mapped binary sequence must still be guaranteed to be uniformly distributed.
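As an illustrative sketch of the threshold-at-mean rule described above (the helper name is an assumption, not from the patent), the mapping for a numeric index sequence might look like:

```python
def to_binary_index(si):
    """Step 3: map an index sequence to bits by thresholding at its mean.
    A uniformly distributed numeric sequence yields roughly balanced
    0/1 bits; an already-binary uniform sequence can skip this step."""
    mean = sum(si) / len(si)
    return [1 if x >= mean else 0 for x in si]
```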
Further, in step4, Sb, S0 and S1 can all be recycled; that is, once the last element of a sequence has been used, the next access starts again from its original first element, ensuring that step4 can always produce a new sequence of the required length.
Further, before step4, the lengths of the Sb, S0 and S1 sequences are trimmed by deleting end elements so that the lengths are pairwise relatively prime, which greatly increases the period of the final constructed sequence Sc.
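One possible realization of this trimming, sketched under the assumption that only a few tail elements ever need to be dropped (`max_trim` is an illustrative parameter, not from the patent):

```python
from itertools import product
from math import gcd

def trim_coprime(sb, s0, s1, max_trim=10):
    """Search small tail trims so that the three lengths become pairwise
    relatively prime; the period of Sc then grows toward the product of
    the three lengths."""
    a, b, c = len(sb), len(s0), len(s1)
    best = None
    for da, db, dc in product(range(max_trim), repeat=3):
        x, y, z = a - da, b - db, c - dc
        if min(x, y, z) > 0 and gcd(x, y) == gcd(y, z) == gcd(x, z) == 1:
            if best is None or da + db + dc < best[0]:
                best = (da + db + dc, x, y, z)
    if best is None:
        raise ValueError("no coprime triple found; increase max_trim")
    _, x, y, z = best
    return sb[:x], s0[:y], s1[:z]
```

For instance, sequences of lengths 4, 6 and 9 are trimmed to lengths 4, 5 and 9, which are pairwise coprime with a single deleted element.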
The invention has the following beneficial effects. By reconstructing random sequences in a combined manner, the complexity of the generation rule is markedly higher than that of the original random sequences, the data rule is effectively masked, and the neural network equalizer can be trained effectively on transmission signals modulated by the generated random sequence, avoiding the problem of the neural network model learning the data generation rule. The scheme can be iterated repeatedly, each iteration raising the complexity by one dimension, so that the iteratively constructed random sequence can effectively mask its generation rule from any equalization model with fixed-length sequence input. The invention is of great significance for research on and application of neural network equalizers.
The conception, the specific structure and the technical effects of the present invention will be further described with reference to the accompanying drawings to fully understand the objects, the features and the effects of the present invention.
Drawings
FIG. 1 is a schematic diagram of the combined random sequence construction method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the data set sequences used for cross-testing in the validation work of an embodiment of the present invention;
FIG. 3 shows bit-error-rate curves of neural networks trained and tested on the original PRBS and Mersenne Twister random sequences in the validation work of an embodiment of the present invention;
FIG. 4 shows bit-error-rate curves of neural networks trained and tested on the combined PRBS and Mersenne Twister random sequences in the validation work of an embodiment of the present invention;
FIG. 5 shows bit-error-rate curves of neural networks of different scales trained and tested on the combined PRBS and Mersenne Twister random sequences in the validation work of an embodiment of the present invention;
FIG. 6 shows the transmission experiment system architecture used in the validation work of an embodiment of the present invention;
FIG. 7 shows bit-error-rate curves of neural networks trained and tested on the combined PRBS and Mersenne Twister random sequences over the experimental system in the validation work of an embodiment of the present invention.
Detailed Description
The technical content of the preferred embodiments of the present invention will be more clearly and easily understood with reference to the accompanying drawings. The present invention may be embodied in many different forms, and its scope is not limited to the embodiments set forth herein.
The invention constructs a new random sequence by iteratively combining three independent ordinary random sequences, which effectively increases the complexity of the random sequence so as to mask its generation rule and thereby ensures effective training of the neural network equalizer. The scheme is shown in FIG. 1 and comprises the following steps:
STEP1: select three random sequences with different generation rules, for example three different Pseudo-Random Binary Sequences (PRBS); the generation rules of the three selected sequences are mutually independent, and they may be different sequences constructed by the same random algorithm from mutually independent parameters;
STEP2: select one sequence as the index sequence Si; the other two are data sequences, denoted S0 and S1 respectively. The two data sequences S0 and S1 should follow the same distribution so that the target sequence Sc follows that distribution as well;
STEP3: map the index sequence Si into a binary sequence Sb, ensuring that the 0 and 1 bits in the new sequence are uniformly distributed. Specifically, if Si is itself a uniformly distributed random binary sequence, this step may be skipped; if Si is a random sequence under some other uniform distribution, the distribution mean can be used as the threshold to map the sequence evenly into 0s and 1s; if Si obeys any other distribution, the mapped binary sequence must still be guaranteed to be uniformly distributed;
STEP4: construct the combined sequence Sc from the data sequences S0 and S1 under the binary index sequence Sb: traverse Sb, and if the current bit is 0, remove the head element of S0 and append it to the new sequence Sc; if the current bit is 1, remove the head element of S1 and append it to Sc. All three sequences can be recycled, i.e., once the last element of a sequence has been used, the next access starts again from its original first element, so that a combined sequence of any required length can be obtained;
STEP5: when Sc reaches the specified length, the construction is finished;
STEP6 (optional): substitute Sc for the index sequence or one of the data sequences in STEP2 and perform STEPs 3-5 again to further increase the complexity of the final constructed sequence. This step can be repeated to keep increasing the complexity of the generated sequence so as to effectively mask its generation rule.
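The full construction, including the optional feedback of Sc into the inputs, can be sketched as follows. This is illustrative code only; replacing S0 in each round is just one of the substitutions STEP6 allows:

```python
from itertools import cycle, islice

def combine(sb, s0, s1, length):
    """One round of STEPs 3-5, assuming Sb is already binary."""
    it0, it1 = cycle(s0), cycle(s1)
    return [next(it1) if b else next(it0) for b in islice(cycle(sb), length)]

def iterated_combine(sb, s0, s1, length, rounds=2):
    """STEP 6: substitute the constructed sequence for one data sequence
    and combine again; each extra round raises the rule complexity."""
    sc = combine(sb, s0, s1, length)
    for _ in range(rounds - 1):
        s0 = sc                    # replace one data sequence with Sc
        sc = combine(sb, s0, s1, length)
    return sc
```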
The validation of the present invention is implemented with PRBS15, PRBS23, PRBS31 and a Mersenne Twister random sequence (denoted Random). Following the combined random sequence construction method, PRBS31 is used as the index sequence and PRBS15 and PRBS23 as the data sequences, and a combined PRBS sequence is constructed through STEPs 1-5. Both data sequences are uniformly distributed binary sequences, so the final constructed sequence is also a uniformly distributed binary sequence, satisfying the requirement of STEP2. Since PRBS31 is a uniformly distributed binary sequence, STEP3 can be skipped. Before STEP4, end elements are removed from the three sequences so as to increase the period of the final constructed sequence. STEP4 is then performed, yielding a binary random sequence of 200,000 bits.
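For reference, a PRBS of order n can be generated with a Fibonacci linear-feedback shift register. The generator below and its tap conventions are an illustrative assumption rather than the patent's own definition; the tap pairs shown are the ones commonly quoted for PRBS15/23/31:

```python
def prbs(taps, length, seed=1):
    """Generate a pseudo-random binary sequence from an LFSR whose
    feedback is the XOR of the given tap positions (1-indexed).
    Commonly used pairs: PRBS15 (15, 14), PRBS23 (23, 18), PRBS31 (31, 28)."""
    n = max(taps)
    mask = (1 << n) - 1
    state, out = seed, []
    for _ in range(length):
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        out.append(fb)
        state = ((state << 1) | fb) & mask
    return out
```

A degree-3 register with taps (3, 2) repeats with the maximal period 2^3 - 1 = 7, which is easy to check on the output.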
The validation work based on simulation and experiment is described below. The practical training performance of the neural network is verified by cross-testing on different sequences; the data set structure is shown in FIG. 2. The training sequence is split into a training set and a test set: the training set is used for training, and the test set evaluates equalization performance under the same sequence rule as training. The separate test sequences are used only for comparison, to verify whether the neural network shows the same equalization performance on different sequences. If the neural network has learned a sequence generation rule, it will exhibit different equalization performance on different sequences.
The simulation uses Non-Return-to-Zero (NRZ) signals modulated by the different random sequences and transmitted over an Additive White Gaussian Noise (AWGN) channel. Neural network equalization performance is tested for different training/testing data combinations, giving the simulation results of FIGS. 3-5, where the legend "A-B" denotes the bit-error-rate curve for training on the signal based on sequence A and testing on the signal based on sequence B. Random denotes the Mersenne Twister random sequence, which related studies indicate simple neural networks cannot learn; it is used here for comparative validation.
FIG. 3 shows the tests with the original PRBS. When the training data is the Mersenne Twister random sequence, testing on both PRBS15 and the Mersenne Twister sequence yields performance consistent with hard decision, which fully demonstrates that the neural network model does not learn the generation rule of the Mersenne Twister sequence. When the training data is the PRBS23 sequence, the test performance clearly diverges: testing on PRBS23 gives performance far better than the hard-decision result, while testing on PRBS15 gives performance far worse. This indicates that the neural network learns the generation rule of PRBS23, which benefits testing on PRBS23 but harms performance on sequences with different rules, such as the tested PRBS15. This simulation serves as the comparative baseline and shows that training a neural network on an ordinary PRBS sequence leads to abnormal performance.
FIG. 4 shows that a neural network trained on the PRBS sequence combined by the scheme of the present invention performs consistently on the different test data, indicating that the network does not learn the data generation rule. This result proves that the proposed combined construction method effectively masks the generation rules of the simple random sequences.
Since whether a neural network can learn a sequence rule is related to its scale, the validation work of FIG. 5 tests the effectiveness of the constructed random sequence for training neural networks of different scales. The network used for evaluation is still a fully connected network with 2 hidden layers. A scale factor sz measures the size of the network: each hidden layer contains sz × 4 nodes, the input length is sz × 2 - 1, and the equalization target is the center symbol of the input sequence (in FIG. 5, k denotes log(sz)). The bit-error-rate performance of all tests remains approximately equal, indicating that the neural network does not learn the data rule at any of these scales.
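The framing used for this evaluation can be sketched as follows. This is a hypothetical helper; `make_windows` and its interface are not from the patent:

```python
def make_windows(seq, sz):
    """Slide a window of length 2*sz - 1 over the received sequence;
    the equalization target is the center symbol of each window."""
    w = 2 * sz - 1
    xs = [seq[i:i + w] for i in range(len(seq) - w + 1)]
    ys = [x[w // 2] for x in xs]
    return xs, ys
```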
The proposed scheme is further tested on an experimental transmission system. As shown in FIG. 6, the random sequence is converted into a PAM4 sequence through bit mapping; an arbitrary waveform generator produces a 25 GBaud electrical signal, which is modulated onto an optical carrier by an O-band directly modulated laser with 10 GHz bandwidth and transmitted over 20 km of single-mode fiber, with the received power controlled by a variable optical attenuator. At the receiver the signal is detected by an avalanche photodiode with 20 GHz bandwidth and finally sampled by an oscilloscope with 45 GHz bandwidth at a sampling rate of 120 GSa/s.
The combined PRBS sequence and the Mersenne Twister random sequence are modulated and transmitted to obtain the received sequences, and neural network equalization performance is tested experimentally for the different training/testing data combinations. Since the experiment uses PAM4 signals, the output size of the neural network is 4, and it remains a classification model. The network still contains 2 hidden layers, with an input sequence length of 201 and 128 nodes per hidden layer. The training and test data each have a length of 100,000. FIG. 7 shows the test results: the performance in each group of tests is approximately equal, which proves that the combined data indeed hides the generation rule of the random sequence from the neural network and confirms that the invention can combine random sequences into a stronger random sequence that guarantees effective training. Beyond neural networks, the scheme should also generalize to other advanced algorithms that may appear in the future.
In conclusion, this scheme provides a random sequence construction method for neural network equalizer training. By combining ordinary random sequences into a random sequence of higher complexity, the data generation rule can be effectively masked, ensuring effective training of the neural network equalizer.
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.

Claims (10)

1. A random sequence construction method aiming at neural network equalizer training is characterized by comprising the following steps:
step1, selecting three random sequences;
step2, selecting one of the random sequences as an index sequence Si, and taking the other two random sequences as data sequences which are respectively marked as S0 and S1;
step3, mapping the Si into a binary sequence Sb;
and 4, constructing a new sequence Sc based on the Sb: if the current bit of the Sb is 0, removing the current head element of the S0 and appending it to the Sc; if the current bit of the Sb is 1, removing the current head element of the S1 and appending it to the Sc; and traversing the Sb until the Sc reaches a specified length.
2. The method of random sequence construction for neural network equalizer training of claim 1, comprising:
and 5, selecting the Sc to replace any sequence in the step2, and executing the step3 and the step 4.
3. The method of constructing random sequences for neural network equalizer training of claim 2, wherein said step5 is performed repeatedly.
4. The method of random sequence construction for neural network equalizer training of any of claims 1-3, wherein the random sequence is a pseudo-random binary sequence.
5. The method of claim 4, wherein the random sequence generation rules are independent of each other.
6. The method of claim 4, wherein the random sequences are constructed by the same random algorithm based on mutually independent parameters.
7. The method of random sequence construction for neural network equalizer training of claim 4, wherein the S0 and the S1 follow the same distribution characteristics.
8. The method of claim 4, wherein mapping Si to Sb in step3 ensures that 0 and 1 bits in Sb are uniformly distributed.
9. The method of claim 4, wherein in step4, the Sb, the S0 and the S1 can all be recycled, that is, once the last element of a sequence has been used, the next access starts again from its original first element.
10. The method of claim 4, wherein before the step4, the lengths of the Sb, the S0 and the S1 sequences are trimmed by deleting end elements so that their lengths are pairwise relatively prime.
CN202011007542.3A 2020-09-23 2020-09-23 Random sequence construction method for neural network equalizer training Pending CN112152953A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011007542.3A CN112152953A (en) 2020-09-23 2020-09-23 Random sequence construction method for neural network equalizer training

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011007542.3A CN112152953A (en) 2020-09-23 2020-09-23 Random sequence construction method for neural network equalizer training

Publications (1)

Publication Number Publication Date
CN112152953A 2020-12-29

Family

ID=73897802

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011007542.3A Pending CN112152953A (en) 2020-09-23 2020-09-23 Random sequence construction method for neural network equalizer training

Country Status (1)

Country Link
CN (1) CN112152953A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114337849A (en) * 2021-12-21 2022-04-12 上海交通大学 Physical layer confidentiality method and system based on mutual information quantity estimation neural network

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150180577A1 (en) * 2012-08-31 2015-06-25 Huawei Technologies Co., Ltd. Training Sequence Generation Method, Training Sequence Generation Apparatus, and Optical Communications System
CN109347555A (en) * 2018-09-19 2019-02-15 北京邮电大学 A kind of visible light communication equalization methods based on radial basis function neural network
CN111313971A (en) * 2020-02-28 2020-06-19 杭州电子科技大学 Lightgbm equalization system and method for improving IMDD short-distance optical communication system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150180577A1 (en) * 2012-08-31 2015-06-25 Huawei Technologies Co., Ltd. Training Sequence Generation Method, Training Sequence Generation Apparatus, and Optical Communications System
CN109347555A (en) * 2018-09-19 2019-02-15 北京邮电大学 A kind of visible light communication equalization methods based on radial basis function neural network
CN111313971A (en) * 2020-02-28 2020-06-19 杭州电子科技大学 Lightgbm equalization system and method for improving IMDD short-distance optical communication system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
TAO LIAO, LEI XUE et al.: "Training data generation and validation for a neural network-based equalizer", Optics Letters *
TAO LIAO, LEI XUE et al.: "Unsupervised Learning for Neural Network-Based Blind Equalization", IEEE Photonics Technology Letters *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114337849A (en) * 2021-12-21 2022-04-12 上海交通大学 Physical layer confidentiality method and system based on mutual information quantity estimation neural network
CN114337849B (en) * 2021-12-21 2023-03-14 上海交通大学 Physical layer confidentiality method and system based on mutual information quantity estimation neural network

Similar Documents

Publication Publication Date Title
Wang et al. Data-driven optical fiber channel modeling: A deep learning approach
Argyris et al. Photonic machine learning implementation for signal recovery in optical communications
Houtsma et al. 92 and 50 Gbps TDM-PON using neural network enabled receiver equalization specialized for PON
US9768914B2 (en) Blind channel estimation method for an MLSE receiver in high speed optical communication channels
Ming et al. Ultralow complexity long short-term memory network for fiber nonlinearity mitigation in coherent optical communication systems
CN110932809B (en) Fiber channel model simulation method, device, electronic equipment and storage medium
CN112115821B (en) Multi-signal intelligent modulation mode identification method based on wavelet approximate coefficient entropy
Zaman et al. Polarization mode dispersion-based physical layer key generation for optical fiber link security
Yi et al. Neural network-based equalization in high-speed PONs
Wang et al. Comprehensive eye diagram analysis: a transfer learning approach
Yang et al. Overfitting effect of artificial neural network based nonlinear equalizer: from mathematical origin to transmission evolution
Liao et al. Unsupervised learning for neural network-based blind equalization
CN112152953A (en) Random sequence construction method for neural network equalizer training
Karinou et al. Experimental performance evaluation of equalization techniques for 56 Gb/s PAM-4 VCSEL-based optical interconnects
CN114285715B (en) Nonlinear equalization method based on bidirectional GRU-conditional random field
Bosco et al. Long-distance effectiveness of MLSE IMDD receivers
Li et al. End-to-end learning for optical fiber communication with data-driven channel model
CN113542172B (en) Elastic optical network modulation format identification method and system based on improved PSO clustering
Borujeny et al. Why constant-composition codes reduce nonlinear interference noise
CN114124223B (en) Convolutional neural network optical fiber equalizer generation method and system
Yang et al. A novel nonlinear noise power estimation method based on error vector correlation function using artificial neural networks for coherent optical fiber transmission systems
CN112543070B (en) On-line extraction of channel characteristics
Róka et al. Impact of environmental influences on multilevel modulation formats at the signal transmission in the optical transmission medium
Cui et al. Optical Fiber Channel Modeling Method Using Multi-BiLSTM for PM-QPSK Systems
CN113141216B (en) Signal processing method, apparatus, device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20201229)