CN108872993A - Radar echo extrapolation method based on a recurrent dynamic convolutional neural network

Info

Publication number: CN108872993A
Application number: CN201810402330.1A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 李骞, 施恩, 马强, 马烁
Assignee: National University of Defense Technology
Application filed by National University of Defense Technology; priority to CN201810402330.1A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01S — RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 — Systems using the reflection or reradiation of radio waves, e.g. radar systems; analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88 — Radar or analogous systems specially adapted for specific applications
    • G01S13/95 — Radar or analogous systems specially adapted for specific applications for meteorological use
    • G01S7/00 — Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 — Details of systems according to group G01S13/00
    • G01S7/41 — Details using analysis of echo signal for target characterisation; target signature; target cross-section
    • G01S7/417 — Details involving the use of neural networks
    • Y02A90/10 — Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The invention discloses a radar echo extrapolation method based on a recurrent dynamic convolutional neural network (RDCNN), comprising: RDCNN offline training, in which a training sample set is constructed by data preprocessing and used to train the network until convergence; and RDCNN online prediction, in which a test sample set is constructed by data preprocessing, the trained RDCNN is tested with the test sample set, and the last radar echo image of the input image sequence is convolved with the probability vectors obtained in the network's forward propagation to obtain the predicted radar echo image.

Description

A radar echo extrapolation method based on a recurrent dynamic convolutional neural network
Technical field
The invention belongs to the technical field of surface weather observation in atmospheric sounding, and in particular relates to a radar echo extrapolation method based on a recurrent dynamic convolutional neural network.
Background art
Nowcasting mainly refers to high spatial and temporal resolution weather forecasting for the next 0~3 hours; its main prediction targets are disastrous weather such as heavy precipitation, strong winds, and hail. At present, many forecast systems use numerical prediction models, but numerical forecasting suffers from a slow forecast spin-up, which limits its nowcasting ability. New-generation Doppler weather radar has very high sensitivity and resolution: the spatial resolution of its data reaches 200~1000 m, and the temporal resolution reaches 2~15 min. In addition, Doppler radar offers a reasonable operating mode, comprehensive condition monitoring and fault warning, an advanced real-time calibration system, and abundant radar meteorology products, all of which can greatly improve the reliability of nowcasting. Today, new-generation Doppler weather radar has become one of the most effective tools for nowcasting. Nowcasting with Doppler radar is based primarily on radar echo extrapolation: from the current radar observations, the future position and intensity of the radar echo are inferred, so as to predict the track of strong convective systems.
Traditional radar echo extrapolation methods are the centroid tracking method and the cross-correlation method based on the maximum correlation coefficient (Tracking Radar Echoes by Correlation, TREC), but both have shortcomings. Centroid tracking is only applicable to strong storm cells of small extent and is unreliable for forecasting large-scale precipitation. TREC generally treats the echo as changing linearly, whereas real echo evolution is far more complex, and such methods are also vulnerable to disordered vector disturbances in the motion vector field. In addition, existing methods make poor use of historical radar data, which record important features of local weather system evolution and therefore have high research value.
To improve the lead time of radar echo extrapolation and to learn the laws of radar echo evolution from large amounts of historical radar data, machine learning methods are introduced into radar echo extrapolation. Convolutional neural networks (Convolutional Neural Networks, CNNs), an important branch of deep learning, are widely used in fields such as image processing and pattern recognition. Their defining features are local connections, weight sharing, and down-sampling, which give them strong robustness to deformation, translation, and flipping of the input image. To exploit the strong temporal correlation between consecutive radar echo images, an input-dependent recurrent dynamic convolutional neural network is designed: the network can dynamically change its weight parameters according to the input radar echo maps and then predict the extrapolated image. Training the recurrent dynamic convolutional neural network on historical radar data lets the network extract echo features more fully and learn the laws of echo evolution, which is of great significance for improving radar echo extrapolation accuracy and optimizing nowcasting.
Summary of the invention
Object of the invention: the technical problem to be solved by the present invention is the short extrapolation lead time of existing radar echo extrapolation methods and their insufficient use of radar data. A radar echo extrapolation method based on a recurrent dynamic convolutional neural network (Recurrent Dynamic Convolutional Neural Networks, RDCNN) is proposed, which realizes extrapolated prediction of radar echo intensity constant altitude plan position indicator (Constant Altitude Plan Position Indicator, CAPPI) images. It comprises the following steps:
Step 1, data preprocessing: input the test image set, standardize every image in it, convert each image into a 280 × 280 grayscale image, then partition the grayscale image set to construct a test sample set containing TestsetSize groups of samples;
Step 2, read test samples: input the TestsetSize groups of test samples obtained in step 1 into the trained recurrent dynamic convolutional neural network;
Step 3, forward propagation: extract the features of the input image sequence in the sub-network to obtain the horizontal probability vector HPV_test and the vertical probability vector VPV_test; in the probability prediction layer, convolve the last image of the input image sequence successively with VPV_test and HPV_test to obtain the final extrapolated image of the recurrent dynamic convolutional neural network.
Step 1 includes the following steps:
Step 1-1, sampling: the images in the test image set are arranged in time order at equal intervals of 6 minutes and comprise N_Test images in total; TestsetSize is determined by the following formula:

$$\mathrm{TestsetSize}=\begin{cases}N_{Test}/4-1, & \mathrm{Mod}(N_{Test},4)=0\\ \lfloor N_{Test}/4\rfloor, & \mathrm{Mod}(N_{Test},4)\neq 0\end{cases}$$

After TestsetSize is obtained, only the first 4 × TestsetSize + 1 images of the test image set are retained; the trailing images of the test image set are deleted during sampling so that the number of images meets the requirement;
Step 1-2, image standardization: apply image transformation and normalization to the sampled images, converting each color image of original resolution 2000 × 2000 into a grayscale image of resolution 280 × 280;
Step 1-3, construct the test sample set: the test sample set is constructed from the grayscale image set obtained in step 1-2. Every four adjacent images in the grayscale image set, i.e. the {4M+1, 4M+2, 4M+3, 4M+4}-th images, form one input sequence, and the [4 × (M+1) + 1]-th image, cropped to its central 240 × 240 part, serves as the reference label of the corresponding sample, where M is an integer with M ∈ [0, TestsetSize − 1]; this yields a test sample set containing TestsetSize groups of test samples, as in the sketch below.
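The grouping rule above can be made concrete with a short sketch. This is a minimal illustration in Python/NumPy, assuming the images have already been standardized to 280 × 280 as in step 1-2; the function and variable names are illustrative, not part of the patent.

```python
import numpy as np

def build_sample_set(images):
    """Group preprocessed 280x280 grayscale images (6-minute intervals,
    time-ordered) into (input sequence, label) pairs per step 1-3."""
    setsize = (len(images) - 1) // 4           # TestsetSize
    samples = []
    for m in range(setsize):                   # M in [0, TestsetSize - 1]
        inputs = images[4 * m: 4 * m + 4]      # the {4M+1,...,4M+4}-th images
        label = images[4 * (m + 1)]            # the [4(M+1)+1]-th image
        samples.append((np.stack(inputs), label[20:260, 20:260]))  # 240x240 crop
    return samples

# nine images yield TestsetSize = 2 groups; the ninth is the label of group 2
demo = build_sample_set([np.zeros((280, 280))] * 9)
assert len(demo) == 2
```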
Step 1-2 includes the following steps:
Step 1-2-1, image conversion: convert the color echo intensity CAPPI image into a grayscale image, crop it to retain the central 560 × 560 part of the original image, and then compress the cropped image to 280 × 280, obtaining a grayscale image of resolution 280 × 280;
Step 1-2-2, data normalization: map the value of each pixel of the grayscale image obtained in step 1-2-1 from [0, 255] to [0, 1].
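A minimal sketch of this standardization chain, assuming a 2000 × 2000 RGB input; the channel-mean grayscale conversion and the 2 × 2 block-mean used for the 560 → 280 compression are assumptions, since the patent fixes only the crop and the target sizes.

```python
import numpy as np

def preprocess(color_img):
    """2000x2000 color echo image -> 280x280 grayscale image in [0, 1]."""
    gray = color_img.mean(axis=2)                       # assumed RGB -> gray
    c = gray.shape[0] // 2
    crop = gray[c - 280:c + 280, c - 280:c + 280]       # central 560x560 part
    small = crop.reshape(280, 2, 280, 2).mean(axis=(1, 3))  # compress to 280x280
    return small / 255.0                                # map [0,255] -> [0,1]

out = preprocess(np.random.randint(0, 256, (2000, 2000, 3)).astype(float))
assert out.shape == (280, 280)
```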
Step 3 includes the following steps:
Step 3-1, compute the sub-network probability vectors: the sub-network consists of the following network layers, from front to back: convolutional layer C1, down-sampling layer S1, hidden layer H1, convolutional layer C2, down-sampling layer S2, hidden layer H2, convolutional layer C3, down-sampling layer S3, hidden layer H3, convolutional layer C4, down-sampling layer S4, hidden layer H4, convolutional layer C5, hidden layer H5, and classifier layer F1. In the sub-network, the features of the input image sequence are extracted by the alternating processing of convolutional and down-sampling layers and then processed by the Softmax function in the classifier layer, yielding the horizontal probability vector HPV_test and the vertical probability vector VPV_test;
Step 3-2, compute the probability prediction layer output image: with VPV_test and HPV_test obtained in step 3-1 as the convolution kernels of the probability prediction layer, convolve the last image of the input image sequence successively with VPV_test and HPV_test to obtain the final extrapolated image of the recurrent dynamic convolutional neural network.
Step 3-1 includes the following steps:
Step 3-1-1, judge the network layer type: let p denote the current network layer in the RDSN; p takes values in turn from {H1, C1, S1, H2, C2, S2, H3, C3, S3, H4, C4, S4, H5, C5, F1}, with initial value H1. Judge the type of network layer p: if p ∈ {H1, H2, H3, H4, H5}, p is a hidden layer, execute step 3-1-2; if p ∈ {C1, C2, C3, C4, C5}, p is a convolutional layer, execute step 3-1-3; if p ∈ {S1, S2, S3, S4}, p is a down-sampling layer, execute step 3-1-4; if p = F1, p is the classifier layer, execute step 3-1-5. During the test, the output feature maps of convolutional layer C in the current test are denoted a_C'', where C ∈ {C1, C2, C3, C4, C5}; the initial value of each a_C'' is the zero matrix;
Step 3-1-2, process a hidden layer: here p = p_H, p_H ∈ {H1, H2, H3, H4, H5}, and two cases arise:
When p_H ∈ {H1, H2, H3, H4}, first compute the v-th output feature map a_v^{p_H} of layer p_H (if p_H = H1, then C = C1): expand the width of the corresponding feature maps in a_C'' to ExpandSize^{p_H} by zero-pixel filling, convolve them with the corresponding kernels of this layer, sum the convolution results, add the v-th bias parameter b_v^{p_H} of layer p_H, and process with the ReLU activation function to obtain the v-th output feature map of layer p_H:

$$a_v^{p_H}=\mathrm{ReLU}\left(\sum_{u=1}^{mh}\mathrm{Expand\_Zero}(a_u'')*k_{uv}^{p_H}+b_v^{p_H}\right)$$

In the formula, Expand_Zero(·) denotes the zero-expansion function, k_{uv}^{p_H} is the convolution kernel linking the u-th input feature map and the v-th output feature map of layer p_H, mh is the number of input feature maps of the current hidden layer, and a_u'' denotes the u-th input feature map of layer p_H. The value of ExpandSize^{p_H} is determined by the width of the input feature maps and the size of the convolution kernel:

$$\mathrm{ExpandSize}^{p_H}=\mathrm{InputSize}^{p_H}+2\cdot(\mathrm{KernelSize}^{p_H}-1)$$
When p_H = H5, first compute the v-th output feature map a_v^{H5} of layer H5: expand the resolution of the feature maps in a_{C5}'' to 10 × 10 by zero-pixel filling, multiply them element-wise by the corresponding weight parameters of this layer, sum the results, add the v-th bias parameter b_v^{H5} of layer H5, and process with the ReLU activation function to obtain the v-th output feature map of layer H5:

$$a_v^{H5}=\mathrm{ReLU}\left(\sum_{u=1}^{mh}\mathrm{Expand\_Zero}(a_u^{C5''})\circ w_{uv}^{H5}+b_v^{H5}\right)$$

In the formula, w_{uv}^{H5} is the weight parameter linking the u-th input feature map and the v-th output feature map of layer H5, and ∘ denotes the element-wise product;
Compute all output feature maps of layer p_H in turn to obtain a^{p_H}, update p to the next network layer, return to step 3-1-1 to judge the network type, and process the next network layer; a sketch of this hidden-layer computation follows.
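A minimal sketch of the hidden-layer computation for the H1–H4 case, assuming the patent's * behaves like scipy.signal.convolve2d in 'valid' mode; `hidden_forward` and `expand_zero` are illustrative names. The maps in a_C'' come from the previous forward pass, which is what makes the layer recurrent.

```python
import numpy as np
from scipy.signal import convolve2d

def expand_zero(a, size):
    """Zero-pixel filling: embed map `a` in the center of a size x size zero matrix."""
    out = np.zeros((size, size))
    off = (size - a.shape[0]) // 2
    out[off:off + a.shape[0], off:off + a.shape[1]] = a
    return out

def hidden_forward(prev_maps, kernels, bias, ksize):
    """One output map of H1..H4: ReLU(sum_u Expand_Zero(a_u'') * k_uv + b_v)."""
    size = prev_maps[0].shape[0] + 2 * (ksize - 1)      # ExpandSize
    z = sum(convolve2d(expand_zero(a, size), k, mode='valid')
            for a, k in zip(prev_maps, kernels))
    return np.maximum(z + bias, 0.0)                    # ReLU activation

# H1 example: a 272-wide C1 map and a 9-wide kernel give a 280-wide output
m = hidden_forward([np.zeros((272, 272))], [np.ones((9, 9))], 0.1, 9)
assert m.shape == (280, 280)
```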
Step 3-1-3, process a convolutional layer: here p = p_C, p_C ∈ {C1, C2, C3, C4, C5}. First compute the v-th output feature map a_v^{p_C} of layer p_C: convolve each input feature map of layer p_C with the corresponding kernel of this layer, sum the convolution results, add the v-th bias parameter b_v^{p_C} of layer p_C, and process with the ReLU activation function to obtain the v-th output feature map of layer p_C:

$$a_v^{p_C}=\mathrm{ReLU}\left(\sum_{u=1}^{mc}a_u^{p_C-1}*k_{uv}^{p_C}+b_v^{p_C}\right)$$

In the formula, k_{uv}^{p_C} is the convolution kernel linking the u-th input feature map and the v-th output feature map of layer p_C, mc is the number of input feature maps of the convolutional layer, a_u^{p_C−1} denotes the u-th input feature map of layer p_C, which is also the u-th output feature map of layer p_C − 1, and * denotes matrix convolution; if p_C = C1, layer p_C − 1 is the input layer;
Compute all output feature maps of layer p_C in turn to obtain the output feature maps a^{p_C} of layer p_C, update a_C'' with the value of a^{p_C}, update p to the next network layer, return to step 3-1-1 to judge the network type, and process the next network layer;
Step 3-1-4, process a down-sampling layer: here p = p_S, p_S ∈ {S1, S2, S3, S4}. Convolve each output feature map of the convolutional layer obtained in step 3-1-3 with the sampling kernel k^{p_S}, then sample with stride 2 to obtain the output feature maps a^{p_S} of layer p_S:

$$a_v^{p_S}=\mathrm{Sample}\left(a_v^{p_S-1}*k^{p_S}\right)$$

In the formula, Sample(·) denotes sampling with stride 2, p_S − 1 denotes the convolutional layer preceding the current down-sampling layer, and a_v^{p_S} denotes the v-th output feature map in the output feature maps a^{p_S} of layer p_S. After obtaining a^{p_S}, update p to the next network layer, return to step 3-1-1 to judge the network type, and process the next network layer; a sketch follows.
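A minimal sketch of one down-sampling map, assuming the fixed 2 × 2 mean kernel reconstructed in step 1-2-1-3 below; stride-2 sampling then halves the width (e.g. S1: 272 → 136).

```python
import numpy as np
from scipy.signal import convolve2d

def downsample(conv_map):
    """a^{p_S} = Sample(a^{p_S - 1} * k^{p_S}) with stride-2 sampling."""
    k = np.full((2, 2), 0.25)                 # assumed fixed sampling kernel
    return convolve2d(conv_map, k, mode='valid')[::2, ::2]

assert downsample(np.ones((272, 272))).shape == (136, 136)
```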
Step 3-1-5, compute the F1-layer probability vectors: if network layer p is the classifier layer, i.e. p = F1, unfold the 32 output feature maps of C5 (each of resolution 4 × 4) in column order by matrix transformation, obtaining the output feature vector a^{F1} of layer F1 with resolution 512 × 1. Then compute the products of the horizontal parameter matrix WH and the vertical parameter matrix WV with a^{F1}, add the horizontal bias parameter BH and the vertical bias parameter BV respectively, and process the sums with the Softmax function to obtain the horizontal probability vector HPV_test and the vertical probability vector VPV_test:

$$\mathrm{HPV}_{test}=\mathrm{Softmax}(WH\cdot a^{F1}+BH),\qquad \mathrm{VPV}_{test}=\mathrm{Softmax}(WV\cdot a^{F1}+BV)$$

The vertical probability vector VPV_test is transposed to obtain the final vertical probability vector; a sketch of this classifier step follows.
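A minimal sketch of the classifier layer under these definitions; WH and WV are 41 × 512 matrices and BH and BV are 41 × 1 vectors, as initialized in step 1-2-1-4 below.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classifier_forward(c5_maps, WH, WV, BH, BV):
    """Unfold the 32 4x4 C5 maps column-wise into a 512x1 vector, then map
    it to the 41-entry horizontal and vertical probability vectors."""
    aF1 = np.concatenate([m.flatten(order='F') for m in c5_maps])[:, None]
    return softmax(WH @ aF1 + BH), softmax(WV @ aF1 + BV)

HPV, VPV = classifier_forward([np.random.rand(4, 4)] * 32,
                              np.random.rand(41, 512), np.random.rand(41, 512),
                              np.zeros((41, 1)), np.zeros((41, 1)))
assert HPV.shape == VPV.shape == (41, 1) and abs(HPV.sum() - 1) < 1e-9
```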
Step 3-2 includes the following steps:
Step 3-2-1, DC1-layer prediction in the vertical direction: convolve the last input image of the input layer with the vertical probability vector VPV_test to obtain the DC1-layer output feature map a^{DC1} of resolution 240 × 280;
Step 3-2-2, DC2-layer prediction in the horizontal direction: convolve a^{DC1} obtained in step 3-2-1 with the horizontal probability vector HPV_test to obtain the final extrapolated image of the RDCNN, of resolution 240 × 240; a sketch of the two dynamic convolutions follows.
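The two dynamic convolutions are the core of the method, so a minimal sketch follows, again treating the patent's * as scipy's valid-mode convolution. Because each probability vector sums to 1 after Softmax, every output pixel is a probability-weighted average over a 41 × 41 neighborhood of the last input image, i.e. a learned displacement of the echo.

```python
import numpy as np
from scipy.signal import convolve2d

def extrapolate(last_image, VPV, HPV):
    """DC1: 280x280 image * 41x1 vertical vector -> 240x280 map;
    DC2: that map * 1x41 horizontal vector -> 240x240 extrapolated image."""
    dc1 = convolve2d(last_image, VPV.reshape(41, 1), mode='valid')
    return convolve2d(dc1, HPV.reshape(1, 41), mode='valid')

pred = extrapolate(np.random.rand(280, 280),
                   np.full(41, 1 / 41.0), np.full(41, 1 / 41.0))
assert pred.shape == (240, 240)
```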
Beneficial effects: the present invention realizes radar echo extrapolation with convolutional neural network (CNN) image processing techniques and proposes a recurrent dynamic convolutional neural network (RDCNN) structure composed of a recurrent dynamic sub-network (RDSN) and a probability prediction layer (PPL), which has both dynamic and recurrent characteristics. The convolution kernels of the PPL are computed by the RDSN and thus have a mapping relationship with the input radar echo images, so in the RDCNN online testing stage these kernels still change with the input, giving the network its dynamic characteristic. The RDSN adds hidden layers to the traditional CNN model; the hidden layers and convolutional layers form loop structures that recursively retain historical training information, giving the network its recurrent characteristic. Training the RDCNN with a large amount of radar echo image data makes the network converge, and the trained network realizes radar echo extrapolation well.
Detailed description of the invention
The present invention is further illustrated below with reference to the accompanying drawings and specific embodiments; the above and other advantages of the invention will become clearer thereby.
Fig. 1 is the flow chart of the present invention.
Fig. 2 is the structure diagram of the recurrent dynamic convolutional neural network initialization model.
Fig. 3 is the structure diagram of the recurrent dynamic sub-network.
Fig. 4 is the structure diagram of the probability prediction layer.
Fig. 5 is the schematic diagram of matrix zero expansion.
Fig. 6 is the schematic diagram of up-sampling a 2 × 2 matrix.
Specific embodiment
The present invention will be further described with reference to the accompanying drawings and embodiments.
As shown in Fig. 1, the invention discloses a radar echo extrapolation method based on a recurrent dynamic convolutional neural network, comprising the following steps:
Step 1, RDCNN offline training: input the training image set and apply data preprocessing to it to obtain a training sample set; design the RDCNN structure and initialize the network training parameters; train the RDCNN with the training sample set: input an ordered image sequence, obtain one predicted image by forward propagation, compute the error between the predicted image and the reference label, and update the weight and bias parameters of the network by backpropagation; repeat this process until the prediction results reach the training termination condition, obtaining a converged RDCNN model;
Step 2, RDCNN online prediction: input the test image set and apply data preprocessing to it to obtain a test sample set; then input the test sample set into the RDCNN model obtained in step 1, compute the probability vectors by network forward propagation, and convolve the last radar echo image of the input image sequence with the obtained probability vectors to obtain the predicted radar echo extrapolation image.
Step 1 includes the following steps:
Step 1-1, data preprocessing: input the training image set, standardize every image in it, and convert each image into a 280 × 280 grayscale image to obtain a grayscale image set; partition the grayscale image set and construct a training sample set containing TrainsetSize groups of samples;
Step 1-2, initialize the RDCNN: design the RDCNN structure, constructed as a recurrent dynamic sub-network (Recurrent Dynamic Sub-network, RDSN) that generates the probability vectors and a probability prediction layer (Probability Prediction Layer, PPL) that predicts the radar echo at the next moment, providing the initialization model of the RDCNN for the offline training stage, as shown in Fig. 2, the structure diagram of the recurrent dynamic convolutional neural network initialization model;
Step 1-3, initialize the training parameters of the RDCNN: let the network learning rate λ = 0.0001, the number of samples input per training pass BatchSize = 10, the maximum number of batch training passes of the training sample set

$$\mathrm{BatchMax}=\left\lfloor\frac{\mathrm{TrainsetSize}}{\mathrm{BatchSize}}\right\rfloor,$$

the current batch training pass BatchNum = 1, the maximum number of iterations of network training IterationMax = 40, and the current iteration number IterationNum = 1;
Step 1-4, read training samples: batch training is adopted; each training pass reads BatchSize groups of training samples from the training sample set obtained in step 1-1. Each group of training samples is {x_1, x_2, x_3, x_4, y}, containing 5 images in total, where {x_1, x_2, x_3, x_4} serves as the input image sequence and y is the corresponding reference label;
Step 1-5, forward propagation: extract the features of the input image sequence in the RDSN to obtain the horizontal probability vector HPV and the vertical probability vector VPV; in the probability prediction layer, convolve the last image of the input image sequence successively with VPV and HPV to obtain the output predicted image of forward propagation;
Step 1-6, backpropagation: compute the error terms of the probability vectors in the PPL, then compute the error term of every network layer in the RDSN layer by layer from back to front according to the probability vector error terms, then compute the gradients of each layer's error term with respect to the weight and bias parameters, and update the network parameters with the obtained gradients;
Step 1-7, offline training stage control: overall control of the offline network training stage falls into the following three cases, implemented by the loop skeleton shown after this list:
If unread training samples remain in the training sample set, i.e. BatchNum < BatchMax, return to step 1-4, continue to read BatchSize groups of training samples, and carry out network training;
If no unread training samples remain in the training sample set, i.e. BatchNum = BatchMax, and the current number of network iterations is less than the maximum, i.e. IterationNum < IterationMax, let BatchNum = 1, return to step 1-4, continue to read BatchSize groups of training samples, and carry out network training;
If no unread training samples remain in the training sample set, i.e. BatchNum = BatchMax, and the number of network iterations reaches the maximum, i.e. IterationNum = IterationMax, terminate the RDCNN offline training stage and obtain the converged RDCNN model.
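A skeleton of this three-case control flow, with `read_batch` and `train_batch` as assumed stand-ins for steps 1-4 and 1-5/1-6; the equivalent nested-loop form runs BatchMax batches per iteration for IterationMax iterations.

```python
def read_batch(samples, batch_num, batch_size):
    """Assumed helper for step 1-4: the batch_num-th slice of the sample set."""
    start = (batch_num - 1) * batch_size
    return samples[start:start + batch_size]

def train_batch(batch):
    """Placeholder for one forward pass (step 1-5) and backprop (step 1-6)."""
    pass

samples = list(range(100))                 # stand-in for the training sample set
BatchSize, IterationMax = 10, 40           # values from step 1-3
BatchMax = len(samples) // BatchSize

for IterationNum in range(1, IterationMax + 1):
    for BatchNum in range(1, BatchMax + 1):
        train_batch(read_batch(samples, BatchNum, BatchSize))
# after the loops, the converged RDCNN model is retained
```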
Step 1-1, data preprocessing, includes the following steps:
Step 1-1-1, sampling: the images in the training image set are arranged in time order at equal intervals of 6 minutes and comprise N_Train images in total; TrainsetSize is determined by the following formula:

$$\mathrm{TrainsetSize}=\begin{cases}N_{Train}/4-1, & \mathrm{Mod}(N_{Train},4)=0\\ \lfloor N_{Train}/4\rfloor, & \mathrm{Mod}(N_{Train},4)\neq 0\end{cases}$$

In the formula, Mod(N_Train, 4) denotes N_Train modulo 4 and ⌊·⌋ denotes the largest integer not exceeding its argument. After TrainsetSize is obtained, only the first 4 × TrainsetSize + 1 images of the training image set are retained; the trailing images of the training image set are deleted during sampling so that the number of images meets the requirement;
Step 1-1-2, image standardization: apply image transformation and normalization to the sampled images, converting each color image of original resolution 2000 × 2000 into a grayscale image of resolution 280 × 280;
Step 1-1-3, construct the training sample set: the training sample set is constructed from the grayscale images obtained in step 1-1-2. Every four adjacent images in the grayscale image set, i.e. the {4N+1, 4N+2, 4N+3, 4N+4}-th images, form one input sequence, and the [4 × (N+1) + 1]-th image, cropped to its central 240 × 240 part, serves as the reference label of the corresponding sample. The N-th group of samples Sample_N is constructed as follows:

$$\mathrm{Sample}_N=\{G_{4N+1},\,G_{4N+2},\,G_{4N+3},\,G_{4N+4},\,\mathrm{Crop}(G_{4(N+1)+1})\}$$

In the formula, G_{4N+1} denotes the (4N+1)-th image of the grayscale image set, N is an integer with N ∈ [0, TrainsetSize − 1], and Crop(·) denotes the cropping operation that retains the central 240 × 240 part of the original image. This finally yields a training sample set containing TrainsetSize groups of training samples;
Wherein, step 1-1-2 includes the following steps:
Step 1-1-2-1, image conversion: convert the images sampled in step 1-1-1 into grayscale images, crop each to retain the central 560 × 560 part of the original image, and then compress the cropped image to 280 × 280, obtaining a grayscale image of resolution 280 × 280;
Step 1-1-2-2, data normalization: map the value of each pixel of the grayscale image obtained in step 1-1-2-1 from [0, 255] to [0, 1].
Step 1-2 includes the following steps:
Step 1-2-1, construct the recurrent dynamic sub-network RDSN, as shown in Fig. 3, the structure diagram of the recurrent dynamic sub-network:
The sub-network consists of the following network layers, from front to back: convolutional layer C1, down-sampling layer S1, hidden layer H1, convolutional layer C2, down-sampling layer S2, hidden layer H2, convolutional layer C3, down-sampling layer S3, hidden layer H3, convolutional layer C4, down-sampling layer S4, hidden layer H4, convolutional layer C5, hidden layer H5, and classifier layer F1;
Step 1-2-2, construct the probability prediction layer PPL, as shown in Fig. 4, the structure diagram of the probability prediction layer:
Dynamic convolutional layers DC1 and DC2 are constructed in the probability prediction layer; the vertical probability vector VPV output by the RDSN serves as the convolution kernel of dynamic convolutional layer DC1, and the horizontal probability vector HPV as the convolution kernel of dynamic convolutional layer DC2;
Wherein, step 1-2-1 includes the following steps:
Step 1-2-1-1, construct the convolutional layers: for convolutional layer l_C, l_C ∈ {C1, C2, C3, C4, C5}, determine the following: the number of output feature maps OutputMaps^{l_C}, the convolution kernels k^{l_C}, and the bias parameters bias^{l_C}. For the convolution kernels, determine the kernel width KernelSize^{l_C} and the number of kernels KernelNumber^{l_C}, which equals the product of the numbers of input and output feature maps of the convolutional layer; the kernels are constructed according to the Xavier initialization method. The number of bias parameters equals the number of output feature maps of this layer. The output feature map width of layer l_C is jointly determined by the input feature map resolution of layer l_C and the kernel width:

$$\mathrm{OutputSize}^{l_C}=\mathrm{InputSize}^{l_C}-\mathrm{KernelSize}^{l_C}+1$$

where InputSize^{l_C} denotes the output feature map width of the layer preceding convolutional layer l_C. A sketch of these initialization rules follows.
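A minimal sketch of these initialization rules. The exact Xavier sampling bound is an assumption (the patent only names the method); the output-size rule follows from valid convolution and matches the layer sizes listed below.

```python
import numpy as np

def xavier_kernel(k, maps_in, maps_out, rng=None):
    """Assumed Xavier rule: uniform in +/- sqrt(6 / (fan_in + fan_out)),
    with the fans counting the k x k kernel area."""
    rng = rng or np.random.default_rng()
    bound = np.sqrt(6.0 / ((maps_in + maps_out) * k * k))
    return rng.uniform(-bound, bound, size=(k, k))

def conv_output_size(input_size, kernel_size):
    """OutputSize = InputSize - KernelSize + 1 (valid convolution)."""
    return input_size - kernel_size + 1

# sanity checks against the C1 numbers below: 280 -> 272 with a 9-wide kernel
assert conv_output_size(280, 9) == 272
assert xavier_kernel(9, 4, 12).shape == (9, 9)
```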
For convolutional layer C1, let the number of output feature maps OutputMaps_C1 = 12, the output feature map width OutputSize_C1 = 272, the kernel width KernelSize_C1 = 9, the bias parameters bias_C1 initialized to zero, and the number of kernels KernelNumber_C1 = 48; each kernel parameter is initialized by the Xavier rule above.
For convolutional layer C2, let OutputMaps_C2 = 32, OutputSize_C2 = 128, KernelSize_C2 = 9, the bias parameters initialized to zero, and KernelNumber_C2 = 384; each kernel parameter is initialized by the Xavier rule above.
For convolutional layer C3, let OutputMaps_C3 = 32, OutputSize_C3 = 56, KernelSize_C3 = 9, the bias parameters initialized to zero, and KernelNumber_C3 = 1024; each kernel parameter is initialized by the Xavier rule above.
For convolutional layer C4, let OutputMaps_C4 = 32, OutputSize_C4 = 20, KernelSize_C4 = 9, the bias parameters initialized to zero, and KernelNumber_C4 = 1024; each kernel parameter is initialized by the Xavier rule above.
For convolutional layer C5, let OutputMaps_C5 = 32, OutputSize_C5 = 4, KernelSize_C5 = 7, the bias parameters initialized to zero, and KernelNumber_C5 = 1024; each kernel parameter is initialized by the Xavier rule above.
Step 1-2-1-2, construct the hidden layers: for hidden layer l_H, l_H ∈ {H1, H2, H3, H4, H5}, determine the following: the number of output feature maps OutputMaps^{l_H}, the convolution kernels k^{l_H}, and the bias parameters bias^{l_H}. For the convolution kernels, determine the kernel width KernelSize^{l_H} and the number of kernels KernelNumber^{l_H}, which equals the product of the numbers of input and output feature maps of the hidden layer; the kernels are constructed according to the Xavier initialization method. The number of bias parameters equals the number of output feature maps of the hidden layer. The output feature map width of layer l_H is consistent with the input feature map width of the corresponding convolutional layer:

$$\mathrm{OutputSize}^{l_H}=\mathrm{InputSize}^{l_C}$$
For hidden layer H1, let OutputMaps_H1 = 4, OutputSize_H1 = 280, KernelSize_H1 = 9, the bias parameters bias_H1 initialized to zero, and KernelNumber_H1 = 48; each kernel parameter is initialized by the Xavier rule above.
For hidden layer H2, let OutputMaps_H2 = 8, OutputSize_H2 = 136, KernelSize_H2 = 9, the bias parameters initialized to zero, and KernelNumber_H2 = 256; each kernel parameter is initialized by the Xavier rule above.
For hidden layer H3, let OutputMaps_H3 = 8, OutputSize_H3 = 64, KernelSize_H3 = 9, the bias parameters initialized to zero, and KernelNumber_H3 = 256; each kernel parameter is initialized by the Xavier rule above.
For hidden layer H4, let OutputMaps_H4 = 8, OutputSize_H4 = 28, KernelSize_H4 = 9, the bias parameters initialized to zero, and KernelNumber_H4 = 256; each kernel parameter is initialized by the Xavier rule above.
For hidden layer H5, let OutputMaps_H5 = 8, OutputSize_H5 = 10, and the bias parameters initialized to zero. Layer H5 contains 256 weight parameters k_H5; each weight parameter is initialized by the Xavier rule above.
Step 1-2-1-3, construct the down-sampling layers: the down-sampling layers contain no parameters that need training; the sampling kernels of down-sampling layers S1, S2, S3, and S4 are initialized as the fixed 2 × 2 mean kernel

$$k^{l_S}=\begin{pmatrix}1/4&1/4\\ 1/4&1/4\end{pmatrix}$$

For down-sampling layer l_S, l_S ∈ {S1, S2, S3, S4}, the number of output feature maps OutputMaps^{l_S} is consistent with that of the convolutional layer immediately above it, and the output feature map width OutputSize^{l_S} is 1/2 of the output feature map width of the convolutional layer immediately above it:

$$\mathrm{OutputMaps}^{l_S}=\mathrm{OutputMaps}^{l_S-1},\qquad \mathrm{OutputSize}^{l_S}=\mathrm{OutputSize}^{l_S-1}/2$$
Step 1-2-1-4, construct the classifier layer: the classifier layer consists of one fully connected layer F1. The weight parameters of layer F1 are the horizontal weight parameter matrix WH and the vertical weight parameter matrix WV, both of size 41 × 512; each parameter in the weight parameter matrices is initialized by the Xavier rule above. The bias parameters are the horizontal bias parameter BH and the vertical bias parameter BV, both initialized as 41 × 1 one-dimensional zero vectors.
Step 1-5 includes the following steps:
Step 1-5-1, the RDSN computes the probability vectors: in the sub-network, the features of the input image sequence are extracted by the alternating processing of convolutional and down-sampling layers and then processed by the Softmax function in the classifier layer, yielding the horizontal probability vector HPV and the vertical probability vector VPV;
Step 1-5-2, the PPL outputs the predicted image: with HPV and VPV obtained in step 1-5-1 as the convolution kernels of the probability prediction layer, convolve the last image of the input image sequence successively with VPV and HPV to obtain the output predicted image of forward propagation.
Step 1-5-1 includes the following steps:
Step 1-5-1-1, judge the network layer type: let l denote the current network layer in the RDSN; l takes values in turn from {H1, C1, S1, H2, C2, S2, H3, C3, S3, H4, C4, S4, H5, C5, F1}, with initial value H1. Judge the type of network layer l: if l ∈ {H1, H2, H3, H4, H5}, l is a hidden layer, execute step 1-5-1-2; if l ∈ {C1, C2, C3, C4, C5}, l is a convolutional layer, execute step 1-5-1-3; if l ∈ {S1, S2, S3, S4}, l is a down-sampling layer, execute step 1-5-1-4; if l = F1, l is the classifier layer, execute step 1-5-1-5. During training, the output feature maps of convolutional layer C in the current training pass are denoted a_C', where C ∈ {C1, C2, C3, C4, C5}; the initial value of each a_C' is the zero matrix;
Step 1-5-1-2, process a hidden layer: here l = l_H, l_H ∈ {H1, H2, H3, H4, H5}, and two cases arise:
When l_H ∈ {H1, H2, H3, H4}, first compute the j-th output feature map a_j^{l_H} of layer l_H: expand the width of the corresponding feature maps in a_C' (if l_H = H1, then C = C1) to ExpandSize^{l_H} by zero-pixel filling, convolve them with the corresponding kernels of this layer, sum the convolution results, add the j-th bias parameter b_j^{l_H} of layer l_H, and process with the ReLU activation function to obtain a_j^{l_H}:

$$a_j^{l_H}=\mathrm{ReLU}\left(\sum_{i=1}^{nh}\mathrm{Expand\_Zero}(a_i')*k_{ij}^{l_H}+b_j^{l_H}\right)$$

In the formula, Expand_Zero(·) denotes the zero-expansion function, as shown in Fig. 5, the matrix zero-expansion schematic; k_{ij}^{l_H} is the convolution kernel linking the i-th input feature map and the j-th output feature map of layer l_H, nh is the number of input feature maps of the current hidden layer, and a_i' denotes the i-th input feature map of layer l_H. The value of ExpandSize^{l_H} is determined by the width of the input feature maps and the size of the convolution kernel:

$$\mathrm{ExpandSize}^{l_H}=\mathrm{InputSize}^{l_H}+2\cdot(\mathrm{KernelSize}^{l_H}-1)$$
When l_H = H5, first compute the j-th output feature map a_j^{H5} of layer H5: expand the resolution of the feature maps in a_{C5}' to 10 × 10 by zero-pixel filling, multiply them element-wise by the corresponding weight parameters of this layer, sum the results, add the j-th bias parameter b_j^{H5} of layer H5, and process with the ReLU activation function to obtain a_j^{H5}:

$$a_j^{H5}=\mathrm{ReLU}\left(\sum_{i=1}^{nh}\mathrm{Expand\_Zero}(a_i^{C5'})\circ w_{ij}^{H5}+b_j^{H5}\right)$$

In the formula, w_{ij}^{H5} is the weight parameter linking the i-th input feature map and the j-th output feature map of layer H5, and ∘ denotes the element-wise product;
Compute all output feature maps of layer l_H in turn to obtain a^{l_H}, update l to the next network layer, return to step 1-5-1-1 to judge the network type, and process the next network layer;
Step 1-5-1-3, process a convolutional layer: here l = l_C, l_C ∈ {C1, C2, C3, C4, C5}. First compute the j-th output feature map a_j^{l_C} of layer l_C: convolve each input feature map of layer l_C with the corresponding kernel of this layer, sum the convolution results, add the j-th bias parameter b_j^{l_C} of layer l_C, and process with the ReLU activation function to obtain a_j^{l_C}:

$$a_j^{l_C}=\mathrm{ReLU}\left(\sum_{i=1}^{nc}a_i^{l_C-1}*k_{ij}^{l_C}+b_j^{l_C}\right)$$

In the formula, k_{ij}^{l_C} is the convolution kernel linking the i-th input feature map and the j-th output feature map of layer l_C, nc is the number of input feature maps of the convolutional layer, a_i^{l_C−1} denotes the i-th input feature map of layer l_C, which is also the i-th output feature map of layer l_C − 1, and * denotes matrix convolution; if l_C = C1, layer l_C − 1 is the input layer.
Compute all output feature maps of layer l_C in turn to obtain a^{l_C}, update a_C' with the value of a^{l_C} (for example, when l_C = C1, update a_{C1}' with a^{C1}), update l to the next network layer, return to step 1-5-1-1 to judge the network type, and process the next network layer;
Step 1-5-1-4, process a down-sampling layer: here l = l_S, l_S ∈ {S1, S2, S3, S4}. Convolve each output feature map of the convolutional layer obtained in step 1-5-1-3 with the sampling kernel k^{l_S}, then sample with stride 2 to obtain the output feature maps a^{l_S} of layer l_S:

$$a_j^{l_S}=\mathrm{Sample}\left(a_j^{l_S-1}*k^{l_S}\right)$$

In the formula, Sample(·) denotes sampling with stride 2, l_S − 1 denotes the convolutional layer preceding the current down-sampling layer, and a_j^{l_S} denotes the j-th output feature map in the output feature maps a^{l_S} of layer l_S. After obtaining a^{l_S}, update l to the next network layer, return to step 1-5-1-1 to judge the network type, and process the next network layer;
Step 1-5-1-5, compute the F1-layer probability vectors: here l = F1. Unfold the 32 output feature maps of C5 (each of resolution 4 × 4) in column order by matrix transformation, obtaining the output feature vector a^{F1} of layer F1 with resolution 512 × 1. Compute the products of the horizontal weight parameter matrix WH and the vertical weight parameter matrix WV with a^{F1}, add the horizontal bias parameter BH and the vertical bias parameter BV respectively, and process the sums with the Softmax function to obtain the horizontal probability vector HPV and the vertical probability vector VPV:

$$\mathrm{HPV}=\mathrm{Softmax}(WH\cdot a^{F1}+BH),\qquad \mathrm{VPV}=\mathrm{Softmax}(WV\cdot a^{F1}+BV)$$

The vertical probability vector VPV is transposed to obtain the final vertical probability vector;
Step 1-5-2 includes the following steps:
Step 1-5-2-1, DC1-layer prediction in the vertical direction: convolve the last input image of the input layer with the vertical probability vector VPV to obtain the DC1-layer output feature map a^{DC1} of resolution 240 × 280;
Step 1-5-2-2, DC2-layer prediction in the horizontal direction: convolve the DC1-layer output feature map a^{DC1} with the horizontal probability vector HPV to obtain the output predicted image of forward propagation, of resolution 240 × 240.
Step 1-6 includes the following steps:
Step 1-6-1, compute the PPL error terms: take the difference between the predicted image obtained in step 1-5-2-2 and the reference label of the input training sample, compute the error terms of layers DC2 and DC1, and finally obtain the error term δ_HPV of the horizontal probability vector and the error term δ_VPV of the vertical probability vector;
Step 1-6-2, compute the RDSN error terms: according to the error term δ_HPV of the horizontal probability vector and the error term δ_VPV of the vertical probability vector, compute from back to front the error terms of classifier layer F1, the convolutional layers (C5, C4, C3, C2, C1), the hidden layers (H5, H4, H3, H2, H1), and the down-sampling layers (S4, S3, S2, S1); the resolution of any layer's error term matrix is consistent with the resolution of that layer's output feature maps;
Step 1-6-3, compute the gradients: from the error terms obtained in step 1-6-2, compute the gradient of each network layer's error term with respect to that layer's weight and bias parameters;
Step 1-6-4, update the parameters: multiply the gradients of each network layer's weight and bias parameters obtained in step 1-6-3 by the learning rate of the RDCNN to obtain the update terms of each layer's weight and bias parameters; subtract the update terms from the original weight and bias parameters to obtain the updated weight and bias parameters.
Step 1-6-1 includes the following steps:
Step 1-6-1-1, compute the dynamic convolutional layer DC2 error term: take the difference between the predicted image obtained in step 1-5-2-2 and the reference label of this group of samples, obtaining the error term matrix δ_DC2 of size 240 × 240;
Step 1-6-1-2, compute the dynamic convolutional layer DC1 error term: expand the DC2-layer error term matrix δ_DC2 to 240 × 320 by zero filling, rotate the horizontal probability vector by 180 degrees, and convolve the expanded error term matrix with the rotated horizontal probability vector to obtain the DC1-layer error term matrix δ_DC1 of size 240 × 280:

$$\delta_{DC1}=\mathrm{Expand\_Zero}(\delta_{DC2})*\mathrm{rot180}(\mathrm{HPV})$$

In the formula, rot180(·) denotes rotation by 180°. As in the Fig. 5 example, where a 2 × 2 matrix is zero-extended to a 4 × 4 matrix, the central region of the expanded matrix is consistent with the original matrix and the remaining positions are filled with zero pixels;
Step 1-6-1-3, compute the probability vector error terms: to compute the error term of the horizontal probability vector HPV, convolve the DC1-layer output feature map with the error term matrix δ_DC2; the convolution yields a 1 × 41 row vector, which is the error term δ_HPV of HPV:

$$\delta_{HPV}=a^{DC1}*\delta_{DC2}$$

To compute the error term of the vertical probability vector VPV, convolve the input feature map of the input layer with the error term matrix δ_DC1; the convolution yields a 41 × 1 column vector, which is the error term δ_VPV of VPV:

$$\delta_{VPV}=x_4*\delta_{DC1}$$

In the formula, x_4 is the last image of the input image sequence of the training sample; a sketch of these PPL error terms follows.
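A minimal sketch of steps 1-6-1-1 to 1-6-1-3 with the shapes fixed by the text (δ_DC2: 240 × 240, δ_DC1: 240 × 280, δ_HPV: 1 × 41, δ_VPV: 41 × 1); treating the patent's * as scipy's convolve2d is an assumption about the flip convention.

```python
import numpy as np
from scipy.signal import convolve2d

def ppl_error_terms(pred, label, a_dc1, HPV, x4):
    """pred/label: 240x240; a_dc1: 240x280 DC1 output; x4: 280x280 last input."""
    d_dc2 = pred - label                                   # step 1-6-1-1
    padded = np.zeros((240, 320))                          # zero-expand columns
    padded[:, 40:280] = d_dc2
    d_dc1 = convolve2d(padded, np.rot90(HPV.reshape(1, 41), 2),
                       mode='valid')                       # step 1-6-1-2, 240x280
    d_hpv = convolve2d(a_dc1, d_dc2, mode='valid')         # 1x41 row vector
    d_vpv = convolve2d(x4, d_dc1, mode='valid')            # 41x1 column vector
    return d_dc2, d_dc1, d_hpv, d_vpv

r = ppl_error_terms(np.zeros((240, 240)), np.ones((240, 240)),
                    np.zeros((240, 280)), np.full(41, 1 / 41.0),
                    np.zeros((280, 280)))
assert r[1].shape == (240, 280) and r[2].shape == (1, 41) and r[3].shape == (41, 1)
```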
Step 1-6-2 includes the following steps:
Step 1-6-2-1, compute the classifier layer F1 error term: multiply the probability vector error terms δ_VPV and δ_HPV obtained in step 1-6-1-3 by the F1-layer vertical weight parameter matrix WV and horizontal weight parameter matrix WH respectively, then sum and average the two matrix products to obtain the F1-layer error term δ_F1:

$$\delta_{F1}=\frac{1}{2}\left[(WH)^T\times(\delta_{HPV})^T+(WV)^T\times\delta_{VPV}\right]$$

In the formula, × denotes the matrix product and (·)^T denotes the matrix transpose; the resulting δ_F1 has size 512 × 1;
Step 1-6-2-2, compute the convolutional layer C5 error term: by matrix transformation, reshape the F1-layer error term δ_F1 obtained in step 1-6-2-1 into 32 matrices of resolution 4 × 4, obtaining the C5-layer error term δ_C5 = {δ_1^{C5}, …, δ_32^{C5}}, where δ_i^{C5} denotes the i-th transformed 4 × 4 matrix;
Step 1-6-2-3, judge the network layer type: let l denote the current network layer in the RDSN; l takes values in turn from {H5, S4, C4, H4, S3, C3, H3, S2, C2, H2, S1, C1, H1}, with initial value H5. Judge the type of network layer l: if l ∈ {H5, H4, H3, H2, H1}, l is a hidden layer, execute step 1-6-2-4; if l ∈ {S4, S3, S2, S1}, l is a down-sampling layer, execute step 1-6-2-5; if l ∈ {C4, C3, C2, C1}, l is a convolutional layer, execute step 1-6-2-6;
Step 1-6-2-4, compute a hidden layer error term: here l = l_H, l_H ∈ {H5, H4, H3, H2, H1}. To compute the i-th error term matrix δ_i^{l_H} of layer l_H, expand each error term matrix δ_j^{l+1} of layer l + 1 (a convolutional layer) by zero filling to width ExpandSize^{l+1} = OutputSize^{l+1} + 2 · (KernelSize^{l+1} − 1), rotate the corresponding convolution kernels k_{ij}^{l+1} by 180 degrees, convolve the expanded matrices with the rotated kernels, and sum the convolution results:

$$\delta_i^{l_H}=\sum_{j=1}^{nc}\mathrm{Expand\_Zero}(\delta_j^{l+1})*\mathrm{rot180}(k_{ij}^{l+1})$$

In the formula, nc denotes the number of error term matrices of layer l + 1 (the convolutional layer), equal to the number of output feature maps of layer l + 1, i.e. nc = OutputMaps^{l+1}.
Compute all error term matrices in turn to obtain the error term δ^{l_H} of layer l_H, update l to l − 1, return to step 1-6-2-3 to judge the network type, and compute the error term of the previous network layer;
Step 1-6-2-5, compute a down-sampling layer error term: here l = l_S, l_S ∈ {S4, S3, S2, S1}. To compute the i-th error term matrix δ_i^{l_S} of layer l_S, expand each error term matrix δ_j^{l+2} of layer l + 2 (the corresponding convolutional layer) by zero filling to width ExpandSize^{l+2} = OutputSize^{l+2} + 2 · (KernelSize^{l+2} − 1), rotate the corresponding convolution kernels k_{ij}^{l+2} by 180 degrees, convolve the expanded matrices with the rotated kernels, and sum the convolution results:

$$\delta_i^{l_S}=\sum_{j=1}^{nc}\mathrm{Expand\_Zero}(\delta_j^{l+2})*\mathrm{rot180}(k_{ij}^{l+2})$$

In the formula, nc denotes the number of error term matrices of layer l + 2 (the convolutional layer), equal to the number of output feature maps of layer l + 2, i.e. nc = OutputMaps^{l+2}.
Compute all error term matrices in turn to obtain the error term δ^{l_S} of layer l_S, update l to l − 1, return to step 1-6-2-3 to judge the network type, and compute the error term of the previous network layer;
Step 1-6-2-6, compute a convolutional layer error term: here l = l_C, l_C ∈ {C4, C3, C2, C1} (since the initial value of l in step 1-6-2-3 is H5, the case l_C = C5 does not arise). For the i-th error term matrix δ_i^{l_C} of layer l_C, first up-sample the corresponding i-th error term matrix δ_i^{l+1} of layer l + 1 (the down-sampling layer), as shown in Fig. 6, the schematic of up-sampling a 2 × 2 matrix: during up-sampling, the error value of each element of δ_i^{l+1} is divided evenly over its sampling region, yielding an up-sampled matrix of doubled resolution. Then take the element-wise product of the derivative of the activation function at the corresponding feature maps of layer l_C with the up-sampled matrix:

$$\delta_i^{l_C}=\mathrm{ReLU}'(a_i^{l_C})\circ\mathrm{UpSample}(\delta_i^{l+1})$$

In the formula, ∘ denotes the matrix element-wise product and ReLU′(·) denotes the derivative of the ReLU activation function:

$$\mathrm{ReLU}'(x)=\begin{cases}1,&x>0\\ 0,&x\le 0\end{cases}$$

UpSample(·) denotes the up-sampling function: each pixel of the original matrix corresponds to one up-sampling region, and the value of each original pixel is divided evenly among the pixels of its sampling region. Compute all error term matrices in turn to obtain the error term δ^{l_C} of layer l_C; a sketch follows.
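A minimal sketch of this error-term step; np.kron spreads each error value evenly (divided by 4) over its 2 × 2 sampling region, matching the Fig. 6 description.

```python
import numpy as np

def relu_prime(a):
    """ReLU'(x): 1 where x > 0, else 0."""
    return (a > 0).astype(float)

def upsample(delta):
    """UpSample(): divide each error value evenly over its 2x2 region."""
    return np.kron(delta, np.full((2, 2), 0.25))

def conv_error_term(a_conv, delta_next):
    """delta^{l_C} = ReLU'(a^{l_C}) o UpSample(delta^{l+1}), step 1-6-2-6."""
    return relu_prime(a_conv) * upsample(delta_next)

d = conv_error_term(np.random.rand(272, 272), np.random.rand(136, 136))
assert d.shape == (272, 272)
```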
Step 1-6-2-7: layer l is now a convolutional layer, i.e. l = l_C; two cases arise:
If l ≠ C1, update l to l − 1, return to step 1-6-2-3 to judge the network type, and compute the error term of the previous network layer;
If l = C1, the sub-network error term computation of step 1-6-2 ends.
Step 1-6-3 includes the following steps:
Step 1-6-3-1, compute the gradients of the convolutional layer error terms with respect to the kernels: let l_C denote the convolutional layer currently processed, l_C ∈ {C1, C2, C3, C4, C5}. Starting from layer C1, compute each convolutional layer error term's gradient with respect to the kernels in turn: convolve the i-th input feature map a_i^{l_C−1} of the convolutional layer with the j-th error term matrix δ_j^{l_C} of layer l_C; the convolution result is the gradient of the corresponding kernel:

$$\nabla k_{ij}^{l_C}=a_i^{l_C-1}*\delta_j^{l_C}$$

In the formula, i and j range over the output feature maps of layers l_C − 1 and l_C respectively, whose numbers are OutputMaps^{l_C−1} and OutputMaps^{l_C};
Step 1-6-3-2, compute the gradients of each convolutional layer error term with respect to the biases: let l_C denote the convolutional layer currently processed, l_C ∈ {C1, C2, C3, C4, C5}. Starting from layer C1, sum all elements of the j-th error term matrix δ_j^{l_C} of layer l_C to obtain the gradient of this layer's j-th bias:

$$\nabla b_j^{l_C}=\mathrm{Sum}(\delta_j^{l_C})$$

In the formula, Sum(·) denotes summation over all elements of a matrix;
Step 1-6-3-3, compute the gradients of the hidden layer error terms with respect to the kernels: let l_H denote the hidden layer currently processed, l_H ∈ {H1, H2, H3, H4, H5}. Starting from layer H1, compute each hidden layer error term's gradient with respect to the kernels in turn: first crop the hidden layer error term, retaining the central part of width OutputSize^{l_H} − 2 · (KernelSize^{l_H} − 1) (when l_H = H5, retain the central 4 × 4 part of the H5-layer error term), denoted δ̂^{l_H}; then convolve the i-th input feature map a_i^{l_H} of the hidden layer with the j-th component δ̂_j^{l_H}; the convolution result is the gradient of the corresponding kernel:

$$\nabla k_{ij}^{l_H}=a_i^{l_H}*\hat{\delta}_j^{l_H}$$

In the formula, i and j range over the input and output feature maps of layer l_H respectively;
Step 1-6-3-4, compute the gradients of each hidden layer error term with respect to the biases: let l_H denote the hidden layer currently processed, l_H ∈ {H1, H2, H3, H4, H5}. Starting from layer H1, sum all elements of the j-th component of the cropped error term δ̂^{l_H} obtained in step 1-6-3-3 to obtain the gradient of this layer's j-th bias:

$$\nabla b_j^{l_H}=\mathrm{Sum}(\hat{\delta}_j^{l_H})$$

In the formula, Sum(·) denotes summation over all elements of a matrix;
Step 1-6-3-5, compute the gradients of the F1-layer error term with respect to the weight parameters: compute the products of the error terms δ_HPV and δ_VPV of the horizontal and vertical probability vectors with the F1-layer error term δ_F1; the results are the gradients of the F1-layer error term with respect to the weight parameters WH and WV:

∇WH = (δ_HPV)^T × (δ_F1)^T,
∇WV = δ_VPV × (δ_F1)^T,

In the formulas, ∇WH is the gradient of the error term with respect to the horizontal weight parameters and ∇WV is the gradient of the error term with respect to the vertical weight parameters;
Step 1-6-3-6, compute the gradients of the F1-layer error term with respect to the bias parameters: the error terms δ_HPV and δ_VPV of the horizontal and vertical probability vectors serve directly as the gradients of the F1-layer error term with respect to the horizontal bias parameter BH and the vertical bias parameter BV:

∇BH = (δ_HPV)^T,
∇BV = δ_VPV,

In the formulas, ∇BH is the gradient of the error term with respect to the horizontal bias parameter and ∇BV is the gradient of the error term with respect to the vertical bias parameter;
Step 1-6-4 includes the following steps:
Step 1-6-4-1, update each convolutional layer's weight parameters: multiply the gradient of each convolutional layer error term with respect to the kernels obtained in step 1-6-3-1 by the learning rate of the RDCNN to obtain the kernel correction term, then subtract the correction term from the original kernel to obtain the updated kernel:

$$k_{ij}^{l_C}=k_{ij}^{l_C}-\lambda\cdot\nabla k_{ij}^{l_C}$$

In the formula, λ is the network learning rate determined in step 1-3, λ = 0.0001;
Step 1-6-4-2, update each convolutional layer's bias parameters: multiply the gradient of each convolutional layer error term with respect to the biases obtained in step 1-6-3-2 by the learning rate of the RDCNN to obtain the bias correction term, then subtract the correction term from the original bias term to obtain the updated bias term:

$$b_j^{l_C}=b_j^{l_C}-\lambda\cdot\nabla b_j^{l_C}$$
Step 1-6-4-3, update each hidden layer's weight parameters: multiply the gradient of each hidden layer error term with respect to the kernels obtained in step 1-6-3-3 by the learning rate of the RDCNN to obtain the kernel correction term, then subtract the correction term from the original kernel to obtain the updated kernel:

$$k_{ij}^{l_H}=k_{ij}^{l_H}-\lambda\cdot\nabla k_{ij}^{l_H}$$

In the formula, λ is the network learning rate determined in step 1-3, λ = 0.0001;
Step 1-6-4-4, update each hidden layer's bias parameters: multiply the gradient of each hidden layer error term with respect to the biases obtained in step 1-6-3-4 by the learning rate of the RDCNN to obtain the bias correction term, then subtract the correction term from the original bias term to obtain the updated bias term:

$$b_j^{l_H}=b_j^{l_H}-\lambda\cdot\nabla b_j^{l_H}$$
Step 1-6-4-5, update the F1-layer weight parameters: multiply the gradients of the F1-layer error term with respect to the weight parameters WH and WV obtained in step 1-6-3-5 by the learning rate of the RDCNN to obtain the weight correction terms, then subtract the corresponding correction term from the original weight parameters WH and WV to obtain the updated WH and WV:

WH = WH − λ · ∇WH,
WV = WV − λ · ∇WV;
Step 1-6-4-6, update the F1-layer bias parameters: multiply the gradients of the F1-layer error term with respect to the bias parameters BH and BV obtained in step 1-6-3-6 by the learning rate of the RDCNN to obtain the bias correction terms, then subtract the corresponding correction term from the original bias parameters BH and BV to obtain the updated BH and BV; a sketch of the whole update step follows:

BH = BH − λ · ∇BH,
BV = BV − λ · ∇BV.
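All six update rules are the same gradient-descent step, so one sketch covers step 1-6-4; `params` and `grads` are assumed dictionaries of matching NumPy arrays (kernels, biases, WH, WV, BH, BV).

```python
import numpy as np

LAMBDA = 1e-4   # the learning rate lambda = 0.0001 fixed in step 1-3

def sgd_update(params, grads):
    """p <- p - lambda * grad for every weight and bias parameter."""
    for name, grad in grads.items():
        params[name] -= LAMBDA * grad
    return params

p = {"WH": np.ones((41, 512))}
sgd_update(p, {"WH": np.ones((41, 512))})
assert np.allclose(p["WH"], 1 - LAMBDA)
```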
Step 2 includes the following steps:
Step 2-1, data preprocessing: input the test image set and standardize every image in it, converting each image into a 280 × 280 grayscale image; then partition the grayscale image set to construct a test sample set containing TestsetSize groups of samples;
Step 2-2, reading the test samples: input the TestsetSize groups of test samples obtained in step 2-1 into the trained cyclic dynamic convolutional neural network;
Step 2-3, forward propagation: extract the features of the input image sequence in the sub-network to obtain the horizontal probability vector HPVtest and the vertical probability vector VPVtest; in the probabilistic prediction layer, convolve the last image of the input sequence successively with VPVtest and HPVtest to obtain the final extrapolated image of the cyclic dynamic convolutional neural network.
Step 2-1 includes the following steps:
Step 2-1-1, sampling: the images in the test image set are arranged in temporal order at a constant interval of 6 minutes and comprise NTest images in total; TestsetSize, the largest integer such that 4 × TestsetSize + 1 ≤ NTest, is determined by the following formula:
TestsetSize = NTest/4 − 1, if Mod(NTest, 4) = 0,
TestsetSize = ⌊NTest/4⌋, if Mod(NTest, 4) ≠ 0;
After TestsetSize is obtained, sampling retains the first 4 × TestsetSize + 1 images of the test image set, deleting the last images of the set so that the image count meets the requirement;
Step 2-1-2, image normalization: apply image transformation and normalization to the sampled images, converting each color image of original resolution 2000 × 2000 into a grayscale image of resolution 280 × 280;
Step 2-1-3, constructing the test sample set: construct the test sample set from the grayscale image set obtained in step 2-1-2. Every four adjacent grayscale images, i.e. images {4M+1, 4M+2, 4M+3, 4M+4}, form one group of input sequence, and image [4 × (M+1) + 1] is cropped, its central 240 × 240 part serving as the reference label of the corresponding sample, where M is an integer and M ∈ [0, TestsetSize − 1]; this yields a test sample set containing TestsetSize groups of test samples;
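Steps 2-1-1 and 2-1-3 together amount to the following minimal sketch, assuming frames is a chronologically ordered list of preprocessed 280 × 280 grayscale images; the function name is illustrative:

    import numpy as np

    def build_test_set(frames):
        n_test = len(frames)
        testset_size = (n_test - 1) // 4        # largest T with 4T + 1 <= NTest
        frames = frames[:4 * testset_size + 1]  # delete trailing images

        samples = []
        for m in range(testset_size):
            inputs = frames[4 * m : 4 * m + 4]            # images {4M+1 .. 4M+4}
            label = frames[4 * (m + 1)][20:260, 20:260]   # central 240x240 crop
            samples.append((np.stack(inputs), label))
        return samples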
Step 2-1-2 includes the following steps:
Step 2-1-2-1, image conversion: convert the color echo-intensity CAPPI image into a grayscale image, then crop it to retain the central 560 × 560 part of the original image, and compress the cropped image to a resolution of 280 × 280, obtaining a grayscale image of resolution 280 × 280;
Step 2-1-2-2, data normalization: map the value of each pixel of the grayscale image obtained in step 2-1-2-1 from [0, 255] to [0, 1];
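A minimal sketch of this two-step preprocessing with OpenCV, assuming a 2000 × 2000 BGR input and simplifying the echo-intensity color-to-gray mapping to a plain grayscale conversion:

    import cv2
    import numpy as np

    def normalize_image(cappi_bgr):
        gray = cv2.cvtColor(cappi_bgr, cv2.COLOR_BGR2GRAY)  # color -> grayscale
        center = gray[720:1280, 720:1280]                   # central 560x560 crop
        small = cv2.resize(center, (280, 280))              # compress to 280x280
        return small.astype(np.float32) / 255.0             # map [0,255] to [0,1]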
Step 2-3 includes the following steps:
Step 2-3-1, computing the sub-network probability vectors: extract the features of the input image sequence through the alternating processing of the convolutional and down-sampling layers of the sub-network, then process the result with the Softmax function in the classifier layer to obtain the horizontal probability vector HPVtest and the vertical probability vector VPVtest;
Step 2-3-2, computing the output image of the probabilistic prediction layer: the VPVtest and HPVtest obtained in step 2-3-1 serve as the convolution kernels of the probabilistic prediction layer; convolve the last image of the input sequence successively with VPVtest and HPVtest to obtain the final extrapolated image of the cyclic dynamic convolutional neural network;
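A minimal sketch of this prediction layer, assuming the probability vectors have length 41 so that valid convolution of a 280 × 280 image yields the 240 × 280 and 240 × 240 resolutions stated in steps 2-3-2-1 and 2-3-2-2 below:

    import numpy as np
    from scipy.signal import convolve2d

    def extrapolate(last_img, vpv_test, hpv_test):
        # DC1: vertical prediction, 280x280 -> 240x280
        dc1 = convolve2d(last_img, vpv_test.reshape(-1, 1), mode="valid")
        # DC2: horizontal prediction, 240x280 -> 240x240
        return convolve2d(dc1, hpv_test.reshape(1, -1), mode="valid")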
Step 2-3-1 includes the following steps:
Step 2-3-1-1, determining the network layer type: let p denote the current network layer of the RDSN; the values of p are, in turn, {H1, C1, S1, H2, C2, S2, H3, C3, S3, H4, C4, S4, H5, C5, F1}, with initial value H1. Determine the type of layer p: if p ∈ {H1, H2, H3, H4, H5}, p is a hidden layer, execute step 2-3-1-2; if p ∈ {C1, C2, C3, C4, C5}, p is a convolutional layer, execute step 2-3-1-3; if p ∈ {S1, S2, S3, S4}, p is a down-sampling layer, execute step 2-3-1-4; if p = F1, p is the classifier layer, execute step 2-3-1-5. The output feature maps of the current test are denoted aC″, where C ∈ {C1, C2, C3, C4, C5}; the initial value of aC″ is a zero matrix;
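For orientation, this traversal can be sketched as a simple dispatch loop; the layer list follows the order above, while the handler mapping is illustrative:

    # Dispatch over the RDSN layer sequence of step 2-3-1-1.
    LAYERS = ["H1", "C1", "S1", "H2", "C2", "S2", "H3", "C3", "S3",
              "H4", "C4", "S4", "H5", "C5", "F1"]

    def forward_subnetwork(state, handlers):
        # handlers maps a layer-type prefix ("H", "C", "S", "F") to a function
        # implementing steps 2-3-1-2 through 2-3-1-5 for that layer type.
        for p in LAYERS:
            state = handlers[p[0]](p, state)
        return state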
Step 2-3-1-2, processing a hidden layer: here p = pH, pH ∈ {H1, H2, H3, H4, H5}, and two cases arise:
When pH ∈ {H1, H2, H3, H4}, first compute the v-th output feature map of layer pH: expand the width of the corresponding feature map in aC″ (if pH = H1, then C = C1) by zero-pixel padding, then convolve it with the corresponding convolution kernels of this layer, sum the convolution results, add the v-th bias parameter of layer pH, and process the sum with the ReLU activation function. The calculation formula is as follows:
av^pH = ReLU( Σ_{u=1..mh} Expand_Zero(au^pH) * kuv^pH + bv^pH ),
In the above formula, Expand_Zero() denotes the zero-padding function, kuv^pH is the convolution kernel connecting the u-th input feature map and the v-th output feature map of layer pH, mh is the number of input feature maps of the current hidden layer, and au^pH denotes the u-th input feature map of layer pH; the padded width is determined by the width of the input feature map and the size of the convolution kernel;
When pH = H5, first compute the v-th output feature map of layer H5: expand the resolution of the feature maps of aC5″ to 10 × 10 by zero-pixel padding, then multiply them by the corresponding weight parameters of this layer, sum the results, add the v-th bias parameter of layer H5, and process the sum with the ReLU activation function to obtain av^H5. The calculation formula is as follows:
av^H5 = ReLU( Σu wuv^H5 · Expand_Zero(au^H5) + bv^H5 ),
In the above formula, wuv^H5 is the weight parameter connecting the u-th input feature map and the v-th output feature map of layer H5;
Compute all output feature maps of layer pH in turn; then update p to p + 1 and return to step 2-3-1-1 to determine the network type and process the next network layer;
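A minimal sketch of the first hidden-layer case, assuming prev_maps is aC″ stored as an (mh, H, W) array, kernels as an (mh, mv, kh, kw) array, biases as (mv,), and pad as the per-layer zero-padding width fixed by the patent:

    import numpy as np
    from scipy.signal import convolve2d

    def hidden_layer_forward(prev_maps, kernels, biases, pad):
        mh, mv = kernels.shape[0], kernels.shape[1]
        outputs = []
        for v in range(mv):
            acc = biases[v]
            for u in range(mh):
                padded = np.pad(prev_maps[u], pad)  # Expand_Zero()
                acc = acc + convolve2d(padded, kernels[u, v], mode="valid")
            outputs.append(np.maximum(acc, 0.0))    # ReLU
        return np.stack(outputs)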
Step 2-3-1-3, processing a convolutional layer: here p = pC, pC ∈ {C1, C2, C3, C4, C5}. First compute the v-th output feature map of layer pC: convolve each input feature map of layer pC with the corresponding convolution kernel of this layer, sum the convolution results, add the v-th bias parameter of layer pC, and process the sum with the ReLU activation function. The calculation formula is as follows:
av^pC = ReLU( Σ_{u=1..mc} au^pC * kuv^pC + bv^pC ),
In the above formula, kuv^pC is the convolution kernel connecting the u-th input feature map and the v-th output feature map of layer pC, mc is the number of input feature maps of the convolutional layer, au^pC denotes the u-th input feature map of layer pC, which is also the u-th output feature map of layer pC − 1, and * denotes matrix convolution; if pC = C1, layer pC − 1 is the input layer.
Compute all output feature maps of layer pC in turn and update aC″ with their values (C = pC; for example, when pC = C1, aC1″ is updated with aC1); then update p to p + 1 and return to step 2-3-1-1 to determine the network type and process the next network layer;
Step 2-3-1-4, processing a down-sampling layer: here p = pS, pS ∈ {S1, S2, S3, S4}. Each output feature map of the convolutional layer obtained in step 2-3-1-3 is convolved with the corresponding sampling kernel and then sampled with stride 2, yielding the output feature maps of layer pS. The calculation formula is as follows:
aj^pS = Sample( aj^{pS−1} * k^pS ),
Here Sample() denotes sampling with stride 2, pS − 1 denotes the convolutional layer preceding the current down-sampling layer, and aj^pS denotes the j-th output feature map of layer pS; after the output feature maps of layer pS are obtained, update p to p + 1 and return to step 2-3-1-1 to determine the network type and process the next network layer;
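A minimal sketch of the down-sampling step; the actual sampling kernel is given by a formula not reproduced here, so a 2 × 2 averaging kernel is used as a stand-in assumption:

    import numpy as np
    from scipy.signal import convolve2d

    def downsample(conv_maps):
        kernel = np.full((2, 2), 0.25)  # assumed stand-in for the sampling kernel
        outputs = []
        for fmap in conv_maps:          # one feature map per channel
            smoothed = convolve2d(fmap, kernel, mode="valid")
            outputs.append(smoothed[::2, ::2])  # Sample(): stride-2 sampling
        return np.stack(outputs)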
Step 2-3-1-5, computing the F1-layer probability vectors: if network layer p is the classifier layer, i.e. p = F1, unfold the 32 output feature maps of C5, each of resolution 4 × 4, column by column through a matrix transformation into the F1-layer output feature vector of resolution 512 × 1; then compute the products of the horizontal parameter matrix WH and the vertical parameter matrix WV with this vector, sum the results with the horizontal bias parameter BH and the vertical bias parameter BV respectively, and process the sums with the Softmax function to obtain the horizontal probability vector HPVtest and the vertical probability vector VPVtest. The calculation formulas are as follows:
HPVtest = Softmax( WH × aF1 + BH ),
VPVtest = Softmax( WV × aF1 + BV );
Transposing the vertical probability vector VPVtest yields the final vertical probability vector;
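A minimal sketch of the classifier layer, assuming c5_maps is a (32, 4, 4) array, WH and WV are (n, 512) matrices, and BH and BV are (n, 1) vectors, with n the length of the probability vectors:

    import numpy as np

    def softmax(x):
        e = np.exp(x - np.max(x))
        return e / e.sum()

    def classifier_layer(c5_maps, WH, WV, BH, BV):
        # unfold each 4x4 map column-wise, then concatenate into a 512x1 vector
        a_f1 = np.concatenate([m.flatten(order="F") for m in c5_maps]).reshape(-1, 1)
        hpv = softmax(WH @ a_f1 + BH)  # horizontal probability vector
        vpv = softmax(WV @ a_f1 + BV)  # vertical probability vector
        return hpv, vpv.T              # VPV is transposed to give the final vector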
Step 2-3-2 includes the following steps:
Step 2-3-2-1, vertical-direction prediction at layer DC1: convolve the last input image of the input layer with the vertical probability vector VPVtest to obtain the DC1-layer output feature map of resolution 240 × 280;
Step 2-3-2-2, horizontal-direction prediction at layer DC2: convolve the DC1-layer output feature map obtained in step 2-3-2-1 with the horizontal probability vector HPVtest to obtain the final extrapolated image of the RDCNN, of resolution 240 × 240.
The present invention provides a radar echo extrapolation method based on a cyclic dynamic convolutional neural network. There are many specific methods and approaches for implementing this technical solution, and the above is only a preferred embodiment of the present invention. It should be noted that those of ordinary skill in the art may make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications shall also be regarded as within the protection scope of the present invention. Any component not specified in this embodiment can be implemented with existing technology.

Claims (6)

1. A radar echo extrapolation method based on a cyclic dynamic convolutional neural network, characterized by comprising the following steps:
Step 1, data preprocessing: input the test image set and standardize every image in it, converting each image into a 280 × 280 grayscale image; then partition the grayscale image set to construct a test sample set containing TestsetSize groups of samples;
Step 2, reading the test samples: input the TestsetSize groups of test samples obtained in step 1 into the trained cyclic dynamic convolutional neural network;
Step 3, forward propagation: extract the features of the input image sequence in a sub-network to obtain the horizontal probability vector HPVtest and the vertical probability vector VPVtest; in the probabilistic prediction layer, convolve the last image of the input sequence successively with VPVtest and HPVtest to obtain the final extrapolated image of the cyclic dynamic convolutional neural network.
2. The method according to claim 1, characterized in that step 1 comprises the following steps:
Step 1-1, sampling: the images in the test image set are arranged in temporal order at a constant interval of 6 minutes and comprise NTest images in total; TestsetSize is determined by the following formula:
TestsetSize = NTest/4 − 1, if Mod(NTest, 4) = 0,
TestsetSize = ⌊NTest/4⌋, if Mod(NTest, 4) ≠ 0;
After TestsetSize is obtained, sampling retains the first 4 × TestsetSize + 1 images of the test image set, deleting the last images of the set so that the image count meets the requirement;
Step 1-2, image normalization: apply image transformation and normalization to the sampled images, converting each color image of original resolution 2000 × 2000 into a grayscale image of resolution 280 × 280;
Step 1-3, constructing the test sample set: construct the test sample set from the grayscale image set obtained in step 1-2. Every four adjacent grayscale images, i.e. images {4M+1, 4M+2, 4M+3, 4M+4}, form one group of input sequence, and image [4 × (M+1) + 1] is cropped, its central 240 × 240 part serving as the reference label of the corresponding sample, where M is an integer and M ∈ [0, TestsetSize − 1]; this yields a test sample set containing TestsetSize groups of test samples.
3. The method according to claim 2, characterized in that step 1-2 comprises the following steps:
Step 1-2-1, image conversion: convert the color echo-intensity CAPPI image into a grayscale image, then crop it to retain the central 560 × 560 part of the original image, and compress the cropped image to a resolution of 280 × 280, obtaining a grayscale image of resolution 280 × 280;
Step 1-2-2, data normalization: map the value of each pixel of the grayscale image obtained in step 1-2-1 from [0, 255] to [0, 1].
4. The method according to claim 3, characterized in that step 3 comprises the following steps:
Step 3-1, computing the sub-network probability vectors: the sub-network consists of 15 network layers, which are, from front to back, convolutional layer C1, down-sampling layer S1, hidden layer H1, convolutional layer C2, down-sampling layer S2, hidden layer H2, convolutional layer C3, down-sampling layer S3, hidden layer H3, convolutional layer C4, down-sampling layer S4, hidden layer H4, convolutional layer C5, hidden layer H5 and classifier layer F1; the features of the input image sequence are extracted through the alternating processing of the convolutional and down-sampling layers of the sub-network, and the result is then processed by the Softmax function in the classifier layer to obtain the horizontal probability vector HPVtest and the vertical probability vector VPVtest;
Step 3-2, computing the output image of the probabilistic prediction layer: the VPVtest and HPVtest obtained in step 3-1 serve as the convolution kernels of the probabilistic prediction layer; convolve the last image of the input sequence successively with VPVtest and HPVtest to obtain the final extrapolated image of the cyclic dynamic convolutional neural network.
5. The method according to claim 4, characterized in that step 3-1 comprises the following steps:
Step 3-1-1, determining the network layer type: let p denote the current network layer of the RDSN; the values of p are, in turn, {H1, C1, S1, H2, C2, S2, H3, C3, S3, H4, C4, S4, H5, C5, F1}, with initial value H1; determine the type of layer p: if p ∈ {H1, H2, H3, H4, H5}, p is a hidden layer, execute step 3-1-2; if p ∈ {C1, C2, C3, C4, C5}, p is a convolutional layer, execute step 3-1-3; if p ∈ {S1, S2, S3, S4}, p is a down-sampling layer, execute step 3-1-4; if p = F1, p is the classifier layer, execute step 3-1-5; the output feature maps of the current test are denoted aC″, where C ∈ {C1, C2, C3, C4, C5}, and the initial value of aC″ is a zero matrix;
Step 3-1-2, processing a hidden layer: here p = pH, pH ∈ {H1, H2, H3, H4, H5}, and two cases arise:
When pH ∈ {H1, H2, H3, H4}, first compute the v-th output feature map of layer pH (if pH = H1, then C = C1): expand the width of the corresponding feature map in aC″ by zero-pixel padding, then convolve it with the corresponding convolution kernels of this layer, sum the convolution results, add the v-th bias parameter of layer pH, and process the sum with the ReLU activation function to obtain the v-th output feature map of layer pH. The calculation formula is as follows:
av^pH = ReLU( Σ_{u=1..mh} Expand_Zero(au^pH) * kuv^pH + bv^pH ),
In the above formula, Expand_Zero() denotes the zero-padding function, kuv^pH is the convolution kernel connecting the u-th input feature map and the v-th output feature map of layer pH, mh is the number of input feature maps of the current hidden layer, and au^pH denotes the u-th input feature map of layer pH; the padded width is determined by the width of the input feature map and the size of the convolution kernel;
When pH = H5, first compute the v-th output feature map of layer H5: expand the resolution of the feature maps of aC5″ to 10 × 10 by zero-pixel padding, then multiply them by the corresponding weight parameters of this layer, sum the results, add the v-th bias parameter of layer H5, and process the sum with the ReLU activation function to obtain the v-th output feature map of layer H5. The calculation formula is as follows:
av^H5 = ReLU( Σu wuv^H5 · Expand_Zero(au^H5) + bv^H5 ),
In the above formula, wuv^H5 is the weight parameter connecting the u-th input feature map and the v-th output feature map of layer H5;
Compute all output feature maps of layer pH in turn; then update p to p + 1 and return to step 3-1-1 to determine the network type and process the next network layer;
Step 3-1-3, processing a convolutional layer: here p = pC, pC ∈ {C1, C2, C3, C4, C5}. First compute the v-th output feature map of layer pC: convolve each input feature map of layer pC with the corresponding convolution kernel of this layer, sum the convolution results, add the v-th bias parameter of layer pC, and process the sum with the ReLU activation function to obtain the v-th output feature map of layer pC. The calculation formula is as follows:
av^pC = ReLU( Σ_{u=1..mc} au^pC * kuv^pC + bv^pC ),
In the above formula, kuv^pC is the convolution kernel connecting the u-th input feature map and the v-th output feature map of layer pC, mc is the number of input feature maps of the convolutional layer, au^pC denotes the u-th input feature map of layer pC, which is also the u-th output feature map of layer pC − 1, and * denotes matrix convolution; if pC = C1, layer pC − 1 is the input layer;
Compute all output feature maps of layer pC in turn to obtain the output feature maps of layer pC, and update aC″ with their values; then update p to p + 1 and return to step 3-1-1 to determine the network type and process the next network layer;
Step 3-1-4, processing a down-sampling layer: here p = pS, pS ∈ {S1, S2, S3, S4}. Each output feature map of the convolutional layer obtained in step 3-1-3 is convolved with the corresponding sampling kernel and then sampled with stride 2, yielding the output feature maps of layer pS. The calculation formula is as follows:
av^pS = Sample( av^{pS−1} * k^pS ),
Here Sample() denotes sampling with stride 2, pS − 1 denotes the convolutional layer preceding the current down-sampling layer, and av^pS denotes the v-th output feature map of layer pS; after the output feature maps of layer pS are obtained, update p to p + 1 and return to step 3-1-1 to determine the network type and process the next network layer;
Step 3-1-5, computing the F1-layer probability vectors: if network layer p is the classifier layer, i.e. p = F1, unfold the 32 output feature maps of C5, each of resolution 4 × 4, column by column through a matrix transformation into the F1-layer output feature vector of resolution 512 × 1; then compute the products of the horizontal parameter matrix WH and the vertical parameter matrix WV with this vector, sum the results with the horizontal bias parameter BH and the vertical bias parameter BV respectively, and process the sums with the Softmax function to obtain the horizontal probability vector HPVtest and the vertical probability vector VPVtest. The calculation formulas are as follows:
HPVtest = Softmax( WH × aF1 + BH ),
VPVtest = Softmax( WV × aF1 + BV );
Transposing the vertical probability vector VPVtest yields the final vertical probability vector.
6. The method according to claim 5, characterized in that step 3-2 comprises the following steps:
Step 3-2-1, vertical-direction prediction at layer DC1: convolve the last input image of the input layer with the vertical probability vector VPVtest to obtain the DC1-layer output feature map of resolution 240 × 280;
Step 3-2-2, horizontal-direction prediction at layer DC2: convolve the DC1-layer output feature map obtained in step 3-2-1 with the horizontal probability vector HPVtest to obtain the final extrapolated image of the RDCNN, of resolution 240 × 240.
Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 20181123)