CN107342810A - Deep learning intelligent eye diagram analysis method based on convolutional neural networks - Google Patents

Deep learning intelligent eye diagram analysis method based on convolutional neural networks

Info

Publication number
CN107342810A
Authority
CN
China
Prior art keywords
eye pattern
analysis
layer
neural networks
convolutional neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710534126.0A
Other languages
Chinese (zh)
Other versions
CN107342810B (en)
Inventor
王丹石
张民
李建强
李进
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications filed Critical Beijing University of Posts and Telecommunications
Priority to CN201710534126.0A priority Critical patent/CN107342810B/en
Publication of CN107342810A publication Critical patent/CN107342810A/en
Application granted granted Critical
Publication of CN107342810B publication Critical patent/CN107342810B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B10/00Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
    • H04B10/07Arrangements for monitoring or testing transmission systems; Arrangements for fault measurement of transmission systems
    • H04B10/075Arrangements for monitoring or testing transmission systems; Arrangements for fault measurement of transmission systems using an in-service signal
    • H04B10/079Arrangements for monitoring or testing transmission systems; Arrangements for fault measurement of transmission systems using an in-service signal using measurements of the data signal
    • H04B10/0795Performance monitoring; Measurement of transmission parameters
    • H04B10/07953Monitoring or measuring OSNR, BER or Q
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B10/00Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
    • H04B10/07Arrangements for monitoring or testing transmission systems; Arrangements for fault measurement of transmission systems
    • H04B10/075Arrangements for monitoring or testing transmission systems; Arrangements for fault measurement of transmission systems using an in-service signal
    • H04B10/079Arrangements for monitoring or testing transmission systems; Arrangements for fault measurement of transmission systems using an in-service signal using measurements of the data signal
    • H04B10/0795Performance monitoring; Measurement of transmission parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B10/00Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
    • H04B10/07Arrangements for monitoring or testing transmission systems; Arrangements for fault measurement of transmission systems
    • H04B10/075Arrangements for monitoring or testing transmission systems; Arrangements for fault measurement of transmission systems using an in-service signal
    • H04B10/079Arrangements for monitoring or testing transmission systems; Arrangements for fault measurement of transmission systems using an in-service signal using measurements of the data signal
    • H04B10/0795Performance monitoring; Measurement of transmission parameters
    • H04B10/07951Monitoring or measuring chromatic dispersion or PMD

Abstract

The invention discloses a deep learning intelligent eye diagram analysis method based on convolutional neural networks, relating to the technical field of optical communication, in which performance analysis of eye diagrams is carried out by building and training a convolutional neural network (CNN) module. The method comprises the following steps: obtaining an eye diagram training data set; preprocessing the eye diagram images; training the CNN module to perform feature extraction; inputting the eye diagrams to be analysed, after preprocessing, into the trained CNN module for pattern recognition and performance analysis; and outputting the analysis results. The invention applies deep learning technology based on convolutional neural networks to eye diagram analysis, solving the problem that traditional eye diagram performance analysis cannot process raw data directly and requires manual intervention. Using a CNN, the analysis of raw eye diagram image information is made intelligent and automatic, so the method can serve as the eye diagram software processing module of an oscilloscope or the eye diagram analysis module of simulation software, and can further be embedded in test instruments for intelligent signal analysis and performance monitoring.

Description

Deep learning intelligent eye diagram analysis method based on convolutional neural networks
Technical field
The present invention relates to the technical field of optical communication, and more particularly to a deep learning intelligent eye diagram analysis method based on convolutional neural networks.
Background technology
Machine learning (ML) techniques provide powerful tools for solving problems in many fields, such as natural language processing, data mining, speech recognition and image recognition. Meanwhile, machine learning techniques have also been widely applied in the field of optical communication, greatly promoting the development of intelligent systems. Current research is mainly concentrated on optical performance monitoring (OPM) and nonlinear impairment compensation using different machine learning algorithms, including expectation maximization (EM), random forests, back-propagation artificial neural networks (BP-ANN), k-nearest neighbours (KNN) and support vector machines (SVM). However, all of the above machine learning algorithms are limited by their own capability for feature extraction. More specifically, such machine learning models cannot directly process the raw form of natural data; before the algorithm can be applied, considerable domain expertise and engineering skill are needed to design a feature extractor that converts the raw data into a suitable internal representation or feature vector from which the learning subsystem can detect the patterns in the input data. It is therefore desirable to develop more advanced machine learning algorithms that can not only process raw data directly, but also automatically detect the required features.
Recently, deep learning has become a hot research topic, its purpose being to bring machine learning closer to the goal of artificial intelligence (AI). Deep learning can be understood as a deep neural network with multiple nonlinear layers that learns features from data through a self-learning process, rather than relying on features designed by human engineers. One of the most famous breakthroughs in deep learning is Google DeepMind's computer program "AlphaGo", which, through its ability to learn by itself, was the first to defeat professional players at the game of Go. In addition, as a current research hotspot, deep learning has made major progress in various application fields such as autonomous vehicles, medical diagnosis and sentiment analysis. To the best of our knowledge, however, there has been almost no research work based on deep learning in the field of optical communication systems.
Meanwhile, in the field of optical communication, current techniques for modulation format identification and for estimating performance indicators such as OSNR, CD, linear impairments and nonlinear impairments cannot process raw data directly; the corresponding features must be extracted manually, which requires a large amount of manual intervention. It is desirable to use more advanced techniques to carry out intelligent analysis of various performance indicators from eye diagrams, without manual intervention, achieving accurate measurement and immediate processing without statistical post-processing of the data, thereby making performance analysis based on eye diagrams intelligent and automatic.
Summary of the invention
It is an object of the invention to apply deep learning technology to the field of optical communication and to provide an intelligent, reliable deep learning eye diagram analysis method based on convolutional neural networks, which overcomes the drawback that traditional eye diagram performance analysis cannot process raw image data directly and requires manual intervention, and which makes the performance analysis of raw eye diagram images intelligent and automatic.
To achieve the above objects, the invention discloses a deep learning intelligent eye diagram analysis method based on convolutional neural networks, in which deep learning technology based on convolutional neural networks is applied to eye diagram analysis and a convolutional neural network is used to carry out multi-performance analysis of eye diagrams. The method comprises the following steps: Step 1: obtain the eye diagram training data set for the required analysis; Step 2: preprocess the eye diagram images; Step 3: train a convolutional neural network (CNN) module to perform feature extraction on the eye diagrams; Step 4: input the eye diagrams to be analysed into the trained CNN module for pattern recognition and performance analysis; Step 5: output the analysis results.
Preferably, the multiple performance indicators analysed from the eye diagram are modulation format, optical signal-to-noise ratio (OSNR), chromatic dispersion (CD), linear impairments and nonlinear impairments.
Preferably, in the eye diagram training set acquisition of Step 1, training data sets are collected for different index values of the various performance indicators of the eye diagram, wherein each group of data in the training data set consists of a pair formed by an eye diagram image as input and the specific index information of a particular performance indicator as output.
Preferably, in the eye diagram preprocessing of Step 2, the colour eye diagram images in the training data set obtained in Step 1 are converted to grayscale images, and the resulting grayscale eye diagram images are down-sampled.
Preferably, in the CNN module training and feature extraction of Step 3, the preprocessed eye diagrams from Step 2 are input into the constructed CNN module; after a training process based on the training data, the CNN module automatically extracts features from the eye diagram images and constructs the relationship between the features and the different performance indicators.
Preferably, in the pattern recognition and performance analysis of Step 4, the preprocessed eye diagrams to be analysed are input into the trained CNN module; the CNN module performs pattern recognition on the input eye diagrams and, drawing on its previous learning experience, performs performance analysis on the currently input eye diagrams.
Preferably, in the analysis result output of Step 5, the data output by the CNN module contains the various performance indicators to be analysed, and the analysis results for the different performance indicators can be obtained from the output information.
Preferably, the structure of the CNN module mainly comprises: one input layer, n convolutional layers (C1, C2, ..., Cn), n pooling layers (P1, P2, ..., Pn), m fully connected layers (F1, F2, ..., Fm) and one output layer. The input of the input layer is the preprocessed eye diagram image, and the input layer is connected to convolutional layer C1. Convolutional layer C1 contains k1 convolution kernels of size a1 × a1; the input-layer image passes through C1 to produce k1 feature maps, which are then passed to pooling layer P1. Pooling layer P1 pools the feature maps generated by C1 with a sampling size of b1 × b1 to obtain the corresponding k1 sub-sampled feature maps, which are then passed to the next convolutional layer C2. The n convolutional and pooling layers are connected in sequence and progressively extract deeper sampled features of the image, and the last pooling layer Pn is connected to fully connected layer F1; here, convolutional layer Ci contains ki convolution kernels of size ai × ai, the sampling size of pooling layer Pj is bj × bj, Ci denotes the i-th convolutional layer and Pj denotes the j-th pooling layer. Fully connected layer F1 is a one-dimensional layer formed by mapping the pixels of all kn feature maps obtained from the last pooling layer Pn; each pixel represents one neuron node of F1, and all neuron nodes of F1 are fully connected to the neuron nodes of the next fully connected layer F2. The m fully connected layers are connected in sequence, and the last fully connected layer Fm is fully connected to the output layer. The output layer outputs the node information of the different performance indicators of the eye diagram to be analysed.
Preferably, the node information output by the output layer is an L-bit binary sequence, wherein the N different performance indicators are represented by L1, L2, ..., LN binary bits respectively, the Li bits being used to represent the Li different index values of the i-th performance indicator, and L = L1 + L2 + ... + LN.
Preferably, the CNN-based eye diagram processing algorithm serves as the eye diagram software processing module of an oscilloscope or the eye diagram analysis module of simulation software, and is further embedded in test instruments to carry out intelligent signal analysis and performance monitoring.
The beneficial effects of the present invention are as follows: the present invention overcomes the drawbacks of traditional eye diagram analysis by applying deep learning technology based on convolutional neural networks to eye diagram analysis and using a convolutional neural network to carry out multi-performance analysis of eye diagrams. With the present invention, raw eye diagram image data can be processed directly, without manual intervention for feature extraction, making eye diagram performance analysis intelligent and automatic; the method can thus serve as the eye diagram software processing module of an oscilloscope or the eye diagram analysis module of simulation software, and be embedded in test instruments for intelligent signal analysis and performance monitoring.
Brief description of the drawings
Fig. 1 shows the flow chart of the deep learning intelligent eye diagram analysis method based on convolutional neural networks of the present invention;
Fig. 2 shows a schematic diagram of the deep learning intelligent eye diagram analysis structure based on convolutional neural networks according to one embodiment of the present invention;
Fig. 3 shows part of the eye diagram images of different modulation formats and different OSNR values collected in one embodiment of the present invention;
Fig. 4 shows a schematic diagram of the OSNR estimation accuracy under different modulation formats in one embodiment of the present invention;
Fig. 5 shows a schematic comparison of the eye diagram performance analysis accuracy of the CNN and other machine learning algorithms under different modulation formats in one embodiment of the present invention.
Detailed description of the embodiments
The embodiments of the present invention are described in further detail below with reference to the accompanying drawings and examples. The following examples are used to illustrate the present invention but do not limit the scope of protection of the present invention.
As shown in Fig. 1, the deep learning intelligent eye diagram analysis method based on convolutional neural networks proposed by the present invention applies deep learning technology based on convolutional neural networks to eye diagram analysis and uses a convolutional neural network to carry out multi-performance analysis of eye diagrams, comprising the following steps: Step 1: obtain the eye diagram training data set for the required analysis; Step 2: preprocess the eye diagram images; Step 3: train a convolutional neural network (CNN) module to perform feature extraction on the eye diagrams; Step 4: input the eye diagrams to be analysed into the trained CNN module for pattern recognition and performance analysis; Step 5: output the analysis results.
In this embodiment, the eye diagram performance indicators to be analysed are the modulation format and the OSNR.
In Step 1 of obtaining the eye diagram training data set, a basic simulation system is established based on VPI Transmission Maker 9.0, and optical signals of four different modulation formats are generated from pseudo-random binary sequences, namely 4PAM, RZ-DPSK, NRZ-OOK and RZ-OOK. All four modulation formats are based on direct detection, with the transmitted information reflected in the signal amplitude, which makes them suitable for the subsequent eye diagram analysis. In the simulation system, an erbium-doped fibre amplifier (EDFA) is used to add amplified spontaneous emission (ASE) noise to the optical signal, and a variable optical attenuator (VOA) is used to adjust the OSNR from 10 to 25 dB in steps of 1 dB. In order to simulate real optical signals as far as possible, a chromatic dispersion (CD) emulator is added to the system so that the simulated eye diagrams better reflect realistic conditions. Among the optical signals of the four modulation formats in this embodiment, the 4PAM, NRZ-OOK and RZ-OOK signals are detected directly by a photodetector (PD), while the RZ-DPSK signal is detected by a delay interferometer (DI) combined with a balanced photodetector (BPD). After synchronized sampling, digital signals containing the intensity information of the four kinds of signals are obtained. In order to obtain a more realistic visual effect, this embodiment uses the dedicated eye diagram generation module of an oscilloscope to convert the received digital signals into the corresponding eye diagram images.
Based on this simulation system, this embodiment generates eye diagram images at 16 different OSNR values (from 10 dB to 25 dB) for each modulation format, and collects 100 eye diagram images in "jpg" format with a pixel size of 900 × 1200 for each OSNR value of each modulation format. Here, each OSNR value of each modulation format and its corresponding eye diagram image form one group of training data, so the whole training data set contains 6400 (1600 × 4) groups of training data in total.
In Step 2 of eye diagram image preprocessing, in order to reduce the amount of computation and enhance the generalization ability, the eye diagram images collected in Step 1 are converted from the original colour images to grayscale images by a grayscale transformation, and the pixel size of the original eye diagrams is reduced to 28 × 28 by down-sampling; the processed training data set is then input into the established CNN module. As shown in Fig. 3, different eye diagrams exhibit different modulation formats, and careful visual inspection of the observed eye diagrams also reveals a first-order approximate relationship between the eye diagram and the OSNR value.
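The grayscale conversion and down-sampling described above can be expressed compactly in code. The following Python sketch is illustrative only and is not part of the patent; the use of the Pillow and NumPy libraries, the bilinear resampling filter and the normalisation to [0, 1] are assumptions, since the embodiment only specifies conversion to grayscale and down-sampling to 28 × 28 pixels.

```python
import numpy as np
from PIL import Image

def preprocess_eye_diagram(path, size=(28, 28)):
    """Convert a colour eye diagram image to grayscale and down-sample it to 28x28."""
    img = Image.open(path).convert("L")           # grayscale transformation of the colour eye diagram
    img = img.resize(size, Image.BILINEAR)        # down-sampling from 900x1200 to 28x28 (filter choice assumed)
    return np.asarray(img, dtype=np.float32) / 255.0  # scale pixel values to [0, 1] (assumed normalisation)
```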
In Step 3 of training the CNN module for feature extraction, each eye diagram image in the training data set input to the CNN module corresponds one-to-one with a label vector composed of 20 bits. The first 4 bits of the label vector represent the different modulation formats (4PAM: 0001, RZ-DPSK: 0010, NRZ-OOK: 0100, RZ-OOK: 1000), and the last 16 bits represent the different OSNR values (10 dB: 0000000000000001, 11 dB: 0000000000000010, ..., 25 dB: 1000000000000000). During the training process, the CNN module progressively extracts the effective features of the input eye diagram images. Meanwhile, in order to minimize the error between the desired label vector and the actual output label vector, the CNN module progressively adjusts the parameters of its kernels by back-propagation using the gradient descent method.
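For illustration, the 20-bit label vector described above can be built as follows. This Python sketch is not from the patent; the function and variable names are hypothetical, and only the bit layout (first 4 bits for modulation format, last 16 bits for OSNR from 10 dB to 25 dB) follows the embodiment.

```python
import numpy as np

# Bit positions of the first 4 bits, per the embodiment: 4PAM=0001, RZ-DPSK=0010, NRZ-OOK=0100, RZ-OOK=1000
MOD_BIT = {"RZ-OOK": 0, "NRZ-OOK": 1, "RZ-DPSK": 2, "4PAM": 3}

def encode_label(mod_format, osnr_db):
    """Build the 20-bit label vector: 4 bits for modulation format, 16 bits for OSNR (10-25 dB)."""
    label = np.zeros(20, dtype=np.float32)
    label[MOD_BIT[mod_format]] = 1.0        # one-hot modulation format bits
    label[19 - (osnr_db - 10)] = 1.0        # one-hot OSNR bits: 10 dB -> last bit, 25 dB -> first bit of the field
    return label

# Example: encode_label("4PAM", 10) sets the bits "0001" + "0000000000000001"
```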
Fig. 2 shows a schematic diagram of the intelligent eye diagram analysis structure based on convolutional neural networks of a specific embodiment of the present invention. The structure of the CNN module mainly comprises the following components: one input layer, two convolutional layers (C1, C2), two pooling layers (P1, P2), one fully connected layer (F1) and one output layer. The preprocessed 28 × 28 eye diagram images are fed into the CNN module as the input layer, which is connected to convolutional layer C1. The input eye diagram image passes through convolutional layer C1, which contains 6 convolution kernels of size 5 × 5, yielding 6 feature maps of size 24 × 24; the resulting feature maps are passed to pooling layer P1. Pooling layer P1 performs max pooling on the 6 feature maps with a sampling size of 2 × 2, obtaining the corresponding 6 sub-sampled feature maps of size 12 × 12, which are then passed to convolutional layer C2. Convolutional layer C2 contains 12 convolution kernels of size 5 × 5; the 6 feature maps from P1 pass through C2 to give 12 feature maps of size 8 × 8, which are passed to pooling layer P2. Pooling layer P2 likewise performs max pooling with a sampling size of 2 × 2 on the 12 feature maps generated by C2, obtaining the corresponding 12 sub-sampled feature maps of size 4 × 4, which are then passed to fully connected layer F1. The pixels of all the feature maps obtained from pooling layer P2 are mapped into the one-dimensional fully connected layer F1, each pixel representing one neuron node of F1, and each neuron node of F1 is fully connected to the output layer. Finally, the output layer outputs the node information of the eye diagram performance indicators to be analysed.
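A minimal sketch of this two-convolutional-layer structure is given below using Keras. The framework choice, the sigmoid activations, the mean-squared-error loss and the plain stochastic gradient descent optimiser are assumptions not stated in the patent, which only specifies the layer sizes and that the kernel parameters are adjusted by back-propagation with gradient descent.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Layer sizes follow the embodiment: 28x28 input, C1 (6 kernels 5x5) -> 24x24x6, P1 (2x2 max pool) -> 12x12x6,
# C2 (12 kernels 5x5) -> 8x8x12, P2 (2x2 max pool) -> 4x4x12, flatten to F1, 20-node output layer.
model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(6, kernel_size=5, activation="sigmoid"),    # convolutional layer C1
    layers.MaxPooling2D(pool_size=2),                         # pooling layer P1
    layers.Conv2D(12, kernel_size=5, activation="sigmoid"),   # convolutional layer C2
    layers.MaxPooling2D(pool_size=2),                         # pooling layer P2
    layers.Flatten(),                                         # fully connected layer F1 (192 nodes)
    layers.Dense(20, activation="sigmoid"),                   # output layer: 20-bit label vector
])
model.compile(optimizer="sgd", loss="mse")                    # gradient descent on the label-vector error (assumed loss)
```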
Among these components, the convolutional layer is the core of the CNN module. The parameters of this layer consist of a set of convolution kernels, which have small local receptive fields but extend through the entire depth of the eye diagram image. During forward propagation, each convolution kernel is convolved with the pixels across the width and height of the eye diagram image and outputs a two-dimensional plane, called the feature map generated by that kernel. Unlike the classical convolution in mathematics, the operation in a CNN is a discrete convolution, which can be regarded as a matrix multiplication. A convolution kernel can be viewed as a feature detector: through its kernels, the CNN module can learn from the input images the features specific to them, and in order to build a more effective model, multiple convolution kernels are usually needed to detect multiple features, producing multiple feature maps in the convolutional layer. After feature extraction by the convolutional layer, the pooling layer merges semantically similar features into a single one; a typical pooling method is to take the maximum of a local block of units in a feature map, thereby sub-sampling the feature map. In this embodiment, each sub-sampling unit obtains its input from a 2 × 2 unit area of the convolution feature map and takes the maximum of these inputs as the pooled value, thus forming the pooled feature map.
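To make the two operations described above concrete, the following NumPy sketch (illustrative, not part of the patent) computes a "valid" discrete convolution of a single feature map with a single kernel and a 2 × 2 max pooling; kernel flipping is omitted, as is common in CNN implementations.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Discrete convolution of one feature map with one kernel (valid mode, no kernel flip)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # each output pixel is the sum of an element-wise product over the local receptive field
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool_2x2(feature_map):
    """2x2 max pooling: keep the maximum of each 2x2 block of the feature map."""
    h, w = feature_map.shape[0] // 2 * 2, feature_map.shape[1] // 2 * 2
    blocks = feature_map[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return blocks.max(axis=(1, 3))

# Example: a 28x28 image and a 5x5 kernel give a 24x24 feature map; pooling then gives 12x12.
fmap = conv2d_valid(np.random.rand(28, 28), np.random.rand(5, 5))
pooled = max_pool_2x2(fmap)   # shapes: fmap (24, 24), pooled (12, 12)
```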
In Step 4 of pattern recognition and performance analysis by the CNN module, the preprocessed eye diagram images to be analysed, covering the 4 different modulation formats with OSNR values ranging from 10 dB to 25 dB (in steps of 1 dB) for each format, are input into the trained CNN module described above. The CNN module performs pattern recognition on the input eye diagrams under the different conditions and, using the experience learned during the training stage, performs modulation format and OSNR performance analysis on the input eye diagrams, outputting the analysis results in the form of 20-bit vectors.
In Step 5 of outputting the analysis results, from the 20-bit vector output by the CNN module, the first 4 bits give the modulation format information of the analysed eye diagram and the last 16 bits give the corresponding OSNR value.
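Decoding the 20-bit output vector into a modulation format and an OSNR value can be done, for example, by taking the position of the largest bit in each field. This sketch is illustrative; the argmax-based decoding is an assumption, since the patent only states which bits carry which information.

```python
import numpy as np

MOD_FORMATS = ["RZ-OOK", "NRZ-OOK", "RZ-DPSK", "4PAM"]   # order of the first 4 bits: 1000, 0100, 0010, 0001

def decode_output(output_vector):
    """Split the 20-bit CNN output into a modulation format and an OSNR estimate in dB."""
    fmt_bits, osnr_bits = np.asarray(output_vector[:4]), np.asarray(output_vector[4:])
    mod_format = MOD_FORMATS[int(np.argmax(fmt_bits))]    # first 4 bits: modulation format
    osnr_db = 25 - int(np.argmax(osnr_bits))              # last 16 bits: 1000...0 -> 25 dB, 0...0001 -> 10 dB
    return mod_format, osnr_db
```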
To demonstrate the accuracy of the method proposed by the present invention, Fig. 4 shows the OSNR estimation accuracy of the CNN module under the different modulation formats for different numbers of training iterations. Clearly, the accuracy for all four modulation formats increases as the number of CNN training iterations increases, and CNN modules trained with different numbers of iterations have different performance recognition capabilities. In this embodiment, when the number of iterations exceeds 31, the CNN module reaches 100% OSNR estimation accuracy for all four modulation formats, i.e. the analysed performance results are error-free.
Meanwhile be the advantage for proving the present invention, by CNN and other four kinds of famous machine learning algorithms, i.e. decision tree, KNN, BP-ANN and SVM are compared.Each algorithm for OSNR under different modulating form estimated accuracy in histogram Form is shown in Fig. 5, and CNN has obvious advantage for other four kinds of algorithms as seen from the figure.Wherein, decision Tree algorithms processing speed Fast and the requirement very little to internal memory, these advantages also cause its estimated accuracy relatively low simultaneously;KNN algorithms generally have in low dimensional There is good estimated accuracy, but very big deviation may be produced on high-dimensional;SVM algorithm uses in estimated accuracy and internal memory On be respectively provided with very big advantage, it only needs seldom supporting vector, but it is substantially a binary classifier, so Value in face of multiple OSNR just needs multiple SVM classifiers to be handled;Although BP neural network is also to be sent out from neutral net Exhibition, but it lacks the ability of feature extraction, it is necessary to which a large amount of training datas can be only achieved preferable effect, and be easily trapped into Local minimum and over-fitting.Compared with algorithm above, CNN is constructed to the sensitive relatively low of input data variance Network is more powerful, can largely avoid over-fitting, and can automatically extract the feature of input data, especially It is that have extraordinary effect on image procossing, simultaneously as the advantage such as local receptor field, weight distribution, sub-sampling, CNN can realize optimal accuracy with appropriate calculating cost.
In summary, the method proposed by the present invention applies deep learning technology based on convolutional neural networks to eye diagram analysis, can effectively serve as the eye diagram software processing module of an oscilloscope or the eye diagram analysis module of simulation software, and can further be embedded in test instruments for intelligent signal analysis and performance monitoring, making eye diagram analysis automatic and intelligent.
The above embodiments are merely intended to illustrate the present invention and do not limit the scope of protection of the present invention; those skilled in the relevant art may make various modifications and variations to the present invention. Any modification, equivalent substitution, improvement and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (10)

1. A deep learning intelligent eye diagram analysis method based on convolutional neural networks, characterized in that deep learning technology based on convolutional neural networks is applied to eye diagram analysis and a convolutional neural network is used to carry out multi-performance analysis of eye diagrams, the method comprising the following steps:
Step 1: obtaining the eye diagram training data set for the required analysis;
Step 2: preprocessing the eye diagram images;
Step 3: training a convolutional neural network (CNN) module to perform feature extraction on the eye diagrams;
Step 4: inputting the eye diagrams to be analysed into the trained CNN module for pattern recognition and performance analysis;
Step 5: outputting the analysis results.
2. The deep learning intelligent eye diagram analysis method based on convolutional neural networks according to claim 1, characterized in that the multiple performance indicators analysed from the eye diagram are modulation format, optical signal-to-noise ratio (OSNR), chromatic dispersion (CD), linear impairments and nonlinear impairments.
3. The deep learning intelligent eye diagram analysis method based on convolutional neural networks according to claim 1, characterized in that, in the eye diagram training set acquisition of Step 1, training data sets are collected for different index values of the various performance indicators of the eye diagram, wherein each group of data in the training data set consists of a pair formed by an eye diagram image as input and the specific index information of a particular performance indicator as output.
4. The deep learning intelligent eye diagram analysis method based on convolutional neural networks according to claim 1, characterized in that, in the eye diagram preprocessing of Step 2, the colour eye diagram images in the training data set obtained in Step 1 are converted to grayscale images, and the resulting grayscale eye diagram images are down-sampled.
5. The deep learning intelligent eye diagram analysis method based on convolutional neural networks according to claim 1, characterized in that, in the CNN module training and feature extraction of Step 3, the preprocessed eye diagrams from Step 2 are input into the constructed CNN module, and after a training process based on the training data the CNN module automatically extracts features from the eye diagram images and constructs the relationship between the features and the different performance indicators.
6. The deep learning intelligent eye diagram analysis method based on convolutional neural networks according to claim 1, characterized in that, in the pattern recognition and performance analysis of Step 4, the preprocessed eye diagrams to be analysed are input into the trained CNN module, and the CNN module performs pattern recognition on the input eye diagrams and, drawing on its previous learning experience, performs performance analysis on the currently input eye diagrams.
7. The deep learning intelligent eye diagram analysis method based on convolutional neural networks according to claim 1, characterized in that, in the analysis result output of Step 5, the data output by the CNN module contains the various performance indicators to be analysed, and the analysis results for the different performance indicators can be obtained from the output information.
8. The deep learning intelligent eye diagram analysis method based on convolutional neural networks according to claim 1, characterized in that the structure of the CNN module mainly comprises: one input layer, n convolutional layers (C1, C2, ..., Cn), n pooling layers (P1, P2, ..., Pn), m fully connected layers (F1, F2, ..., Fm) and one output layer;
wherein the input of the input layer is the preprocessed eye diagram image, and the input layer is connected to convolutional layer C1;
convolutional layer C1 contains k1 convolution kernels of size a1 × a1; the input-layer image passes through C1 to produce k1 feature maps, which are then passed to pooling layer P1;
pooling layer P1 pools the feature maps generated by convolutional layer C1 with a sampling size of b1 × b1 to obtain the corresponding k1 sub-sampled feature maps, which are then passed to the next convolutional layer C2;
the n convolutional layers and pooling layers are connected in sequence and progressively extract deeper sampled features of the image, and the last pooling layer Pn is connected to fully connected layer F1, wherein convolutional layer Ci contains ki convolution kernels of size ai × ai, the sampling size of pooling layer Pj is bj × bj, Ci denotes the i-th convolutional layer and Pj denotes the j-th pooling layer;
fully connected layer F1 is a one-dimensional layer formed by mapping the pixels of all kn feature maps obtained from the last pooling layer Pn, each pixel representing one neuron node of F1, and all neuron nodes of F1 are fully connected to the neuron nodes of the next fully connected layer F2;
the m fully connected layers are connected in sequence, and the last fully connected layer Fm is fully connected to the output layer;
the output layer outputs the node information of the different performance indicators of the eye diagram to be analysed.
9. The deep learning intelligent eye diagram analysis method based on convolutional neural networks according to claim 8, characterized in that the node information output by the output layer is an L-bit binary sequence, wherein the N different performance indicators are represented by L1, L2, ..., LN binary bits respectively, the Li bits being used to represent the Li different index values of the i-th performance indicator, and L = L1 + L2 + ... + LN.
10. The deep learning intelligent eye diagram analysis method based on convolutional neural networks according to claim 1, characterized in that the CNN-based eye diagram processing algorithm serves as the eye diagram software processing module of an oscilloscope or the eye diagram analysis module of simulation software, and is further embedded in test instruments to carry out intelligent signal analysis and performance monitoring.
CN201710534126.0A 2017-07-03 2017-07-03 Deep learning intelligent eye diagram analysis method based on convolutional neural networks Active CN107342810B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710534126.0A CN107342810B (en) 2017-07-03 2017-07-03 Deep learning intelligent eye diagram analysis method based on convolutional neural networks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710534126.0A CN107342810B (en) 2017-07-03 2017-07-03 Deep learning intelligent eye diagram analysis method based on convolutional neural networks

Publications (2)

Publication Number Publication Date
CN107342810A true CN107342810A (en) 2017-11-10
CN107342810B CN107342810B (en) 2019-11-19

Family

ID=60218952

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710534126.0A Active CN107342810B (en) 2017-07-03 2017-07-03 Deep learning intelligent eye diagram analysis method based on convolutional neural networks

Country Status (1)

Country Link
CN (1) CN107342810B (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108446631A (en) * 2018-03-20 2018-08-24 北京邮电大学 Deep learning intelligent spectrum diagram analysis method based on convolutional neural networks
CN108957125A (en) * 2018-03-20 2018-12-07 北京邮电大学 Intelligent spectrum diagram analysis method based on machine learning
CN109120563A (en) * 2018-08-06 2019-01-01 电子科技大学 A kind of Modulation Identification method based on Artificial neural network ensemble
CN109217923A (en) * 2018-09-28 2019-01-15 北京科技大学 A kind of joint optical information networks and rate, modulation format recognition methods and system
CN109547102A (en) * 2018-12-17 2019-03-29 北京邮电大学 A kind of optical information networks method, apparatus, electronic equipment and readable storage medium storing program for executing
CN109768944A (en) * 2018-12-29 2019-05-17 苏州联讯仪器有限公司 A kind of signal modulation identification of code type method based on convolutional neural networks
CN109905167A (en) * 2019-02-25 2019-06-18 苏州工业园区新国大研究院 A kind of optical communication system method for analyzing performance based on convolutional neural networks
CN111157551A (en) * 2018-11-07 2020-05-15 浦项工科大学校产学协力团 Method for analyzing perovskite structure using machine learning
CN111863104A (en) * 2020-07-29 2020-10-30 展讯通信(上海)有限公司 Eye pattern determination model training method, eye pattern determination device, eye pattern determination apparatus, and medium
CN111860852A (en) * 2019-04-30 2020-10-30 百度时代网络技术(北京)有限公司 Method, device and system for processing data
CN111934755A (en) * 2020-07-08 2020-11-13 国网宁夏电力有限公司电力科学研究院 SDN controller and optical signal-to-noise ratio prediction method of optical communication equipment
CN112115760A (en) * 2019-06-20 2020-12-22 和硕联合科技股份有限公司 Object detection system and object detection method
CN112836422A (en) * 2020-12-31 2021-05-25 电子科技大学 Interference and convolution neural network mixed scheme measuring method
CN113141214A (en) * 2021-04-06 2021-07-20 中山大学 Deep learning-based underwater optical communication misalignment robust blind receiver design method
US11907090B2 (en) 2021-08-12 2024-02-20 Tektronix, Inc. Machine learning for taps to accelerate TDECQ and other measurements
US11923895B2 (en) 2021-03-24 2024-03-05 Tektronix, Inc. Optical transmitter tuning using machine learning and reference parameters
US11923896B2 (en) 2021-03-24 2024-03-05 Tektronix, Inc. Optical transceiver tuning using machine learning
US11940889B2 (en) 2021-08-12 2024-03-26 Tektronix, Inc. Combined TDECQ measurement and transmitter tuning using machine learning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105205453A (en) * 2015-08-28 2015-12-30 中国科学院自动化研究所 Depth-auto-encoder-based human eye detection and positioning method
CN106326874A (en) * 2016-08-30 2017-01-11 天津中科智能识别产业技术研究院有限公司 Method and device for recognizing iris in human eye images
CN106650688A (en) * 2016-12-30 2017-05-10 公安海警学院 Eye feature detection method, device and recognition system based on convolutional neural network
GB2545661A (en) * 2015-12-21 2017-06-28 Nokia Technologies Oy A method for analysing media content

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105205453A (en) * 2015-08-28 2015-12-30 中国科学院自动化研究所 Depth-auto-encoder-based human eye detection and positioning method
GB2545661A (en) * 2015-12-21 2017-06-28 Nokia Technologies Oy A method for analysing media content
CN106326874A (en) * 2016-08-30 2017-01-11 天津中科智能识别产业技术研究院有限公司 Method and device for recognizing iris in human eye images
CN106650688A (en) * 2016-12-30 2017-05-10 公安海警学院 Eye feature detection method, device and recognition system based on convolutional neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
赖俊森: "Optical performance monitoring based on eye diagram reconstruction and artificial neural network", 《光电子·激光》 (Journal of Optoelectronics·Laser) *

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108957125A (en) * 2018-03-20 2018-12-07 北京邮电大学 Intelligent spectrum diagram analysis method based on machine learning
CN108446631A (en) * 2018-03-20 2018-08-24 北京邮电大学 Deep learning intelligent spectrum diagram analysis method based on convolutional neural networks
CN109120563B (en) * 2018-08-06 2020-12-29 电子科技大学 Modulation recognition method based on neural network integration
CN109120563A (en) * 2018-08-06 2019-01-01 电子科技大学 A kind of Modulation Identification method based on Artificial neural network ensemble
CN109217923A (en) * 2018-09-28 2019-01-15 北京科技大学 A kind of joint optical information networks and rate, modulation format recognition methods and system
CN111157551A (en) * 2018-11-07 2020-05-15 浦项工科大学校产学协力团 Method for analyzing perovskite structure using machine learning
CN109547102A (en) * 2018-12-17 2019-03-29 北京邮电大学 A kind of optical information networks method, apparatus, electronic equipment and readable storage medium storing program for executing
CN109768944A (en) * 2018-12-29 2019-05-17 苏州联讯仪器有限公司 A kind of signal modulation identification of code type method based on convolutional neural networks
CN109905167A (en) * 2019-02-25 2019-06-18 苏州工业园区新国大研究院 A kind of optical communication system method for analyzing performance based on convolutional neural networks
CN111860852A (en) * 2019-04-30 2020-10-30 百度时代网络技术(北京)有限公司 Method, device and system for processing data
CN112115760A (en) * 2019-06-20 2020-12-22 和硕联合科技股份有限公司 Object detection system and object detection method
CN112115760B (en) * 2019-06-20 2024-02-13 和硕联合科技股份有限公司 Object detection system and object detection method
TWI738009B (en) * 2019-06-20 2021-09-01 和碩聯合科技股份有限公司 Object detection system and object detection method
US11195083B2 (en) 2019-06-20 2021-12-07 Pegatron Corporation Object detection system and object detection method
CN111934755B (en) * 2020-07-08 2022-03-25 国网宁夏电力有限公司电力科学研究院 SDN controller and optical signal-to-noise ratio prediction method of optical communication equipment
CN111934755A (en) * 2020-07-08 2020-11-13 国网宁夏电力有限公司电力科学研究院 SDN controller and optical signal-to-noise ratio prediction method of optical communication equipment
CN111863104A (en) * 2020-07-29 2020-10-30 展讯通信(上海)有限公司 Eye pattern determination model training method, eye pattern determination device, eye pattern determination apparatus, and medium
CN111863104B (en) * 2020-07-29 2023-05-09 展讯通信(上海)有限公司 Eye diagram judgment model training method, eye diagram judgment device, eye diagram judgment equipment and medium
CN112836422B (en) * 2020-12-31 2022-03-18 电子科技大学 Interference and convolution neural network mixed scheme measuring method
CN112836422A (en) * 2020-12-31 2021-05-25 电子科技大学 Interference and convolution neural network mixed scheme measuring method
US11923895B2 (en) 2021-03-24 2024-03-05 Tektronix, Inc. Optical transmitter tuning using machine learning and reference parameters
US11923896B2 (en) 2021-03-24 2024-03-05 Tektronix, Inc. Optical transceiver tuning using machine learning
CN113141214A (en) * 2021-04-06 2021-07-20 中山大学 Deep learning-based underwater optical communication misalignment robust blind receiver design method
US11907090B2 (en) 2021-08-12 2024-02-20 Tektronix, Inc. Machine learning for taps to accelerate TDECQ and other measurements
US11940889B2 (en) 2021-08-12 2024-03-26 Tektronix, Inc. Combined TDECQ measurement and transmitter tuning using machine learning

Also Published As

Publication number Publication date
CN107342810B (en) 2019-11-19

Similar Documents

Publication Publication Date Title
CN107342810B (en) Deep learning intelligent eye diagram analysis method based on convolutional neural networks
CN107342962A (en) Deep learning intelligent constellation diagram analysis method based on convolutional neural networks
Wang et al. Modulation format recognition and OSNR estimation using CNN-based deep learning
CN105046277B (en) Robust mechanism study method of the feature significance in image quality evaluation
CN108427921A (en) A kind of face identification method based on convolutional neural networks
CN107944386B (en) Visual scene recognition methods based on convolutional neural networks
CN109063569A (en) A kind of semantic class change detecting method based on remote sensing image
CN109344759A (en) A kind of relatives' recognition methods based on angle loss neural network
CN104834905A (en) Facial image identification simulation system and method
CN106778910A (en) Deep learning system and method based on local training
CN110490242A (en) Training method, eye fundus image classification method and the relevant device of image classification network
CN106682702A (en) Deep learning method and system
CN107423727A (en) Facial complex expression recognition method based on neural networks
CN106997373A (en) A kind of link prediction method based on depth confidence network
CN109190521A (en) A kind of construction method of the human face recognition model of knowledge based purification and application
Lv et al. Joint OSNR monitoring and modulation format identification on signal amplitude histograms using convolutional neural network
Capece et al. Implementation of a coin recognition system for mobile devices with deep learning
CN113435254A (en) Sentinel second image-based farmland deep learning extraction method
CN113989217A (en) Human eye diopter detection method based on deep learning
CN110289987A (en) Multi-agent system network resilience appraisal procedure based on representative learning
CN115965819A (en) Lightweight pest identification method based on Transformer structure
Wang et al. Identification of growing points of cotton main stem based on convolutional neural network
Huang et al. Application of Data Augmentation and Migration Learning in Identification of Diseases and Pests in Tea Trees
CN110189361A (en) A kind of multi-channel feature and the target following preferentially updated parallel
Gong et al. Research on high resolution remote sensing image classification based on convolution neural network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant