CN107833183A - Method for simultaneous super-resolution and colorization of satellite images based on a multi-task deep neural network - Google Patents

Method for simultaneous super-resolution and colorization of satellite images based on a multi-task deep neural network

Info

Publication number
CN107833183A
CN107833183A (application CN201711224807.3A; also published as CN107833183B)
Authority
CN
China
Prior art keywords
resolution
network
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711224807.3A
Other languages
Chinese (zh)
Other versions
CN107833183B (en)
Inventor
刘恒 (Liu Heng)
伏自霖 (Fu Zilin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui University of Technology AHUT
Original Assignee
Anhui University of Technology AHUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui University of Technology (AHUT)
Priority to CN201711224807.3A
Publication of CN107833183A
Application granted
Publication of CN107833183B
Legal status: Active
Anticipated expiration: (not listed)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4053 Super resolution, i.e. output image resolution higher than sensor resolution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/001 Texturing; Colouring; Generation of texture or colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4046 Scaling the whole image or part thereof using neural networks

Abstract

The invention discloses a method for simultaneous super-resolution and colorization of satellite images based on a multi-task deep neural network, belonging to the technical field of image processing. The method mainly comprises the steps of: 1) producing a training set of high-resolution and low-resolution grayscale image blocks; 2) building a multi-task deep neural network for model training; 3) training the network model using the constructed deep network and the prepared training set; 4) according to the learned model parameters, inputting a low-resolution grayscale image, the output obtained being the reconstructed high-resolution color image. By combining a high-performance deep super-resolution network and colorization network, the invention not only enhances the detail of satellite images but can also simultaneously colorize grayscale images so that they automatically acquire realistic color, while reducing the number of processing steps and the execution time. It has wide application in fields such as grayscale image colorization and satellite remote sensing.

Description

Method for simultaneous super-resolution and colorization of satellite images based on a multi-task deep neural network
Technical field
The invention belongs to the technical field of image processing, and more specifically relates to a method for simultaneous super-resolution and colorization of satellite images based on a multi-task deep neural network.
Background art
As single-function networks have grown ever more powerful, the demand for networks that can handle complex multi-task problems has also grown. The traditional approach feeds the output of one network in as the input of another, and only then obtains the final result. Because this approach not only requires human interaction but also wastes a great deal of time executing the networks one by one, and must further consider whether compatibility problems exist between the two networks, other methods must be sought.
The desired network needs two very important functions, namely super-resolution and colorization. On the super-resolution side, reconstruction techniques can mainly be divided into three classes: interpolation-based methods, reconstruction-based methods, and learning-based methods. Learning-based methods generally learn the mapping between high-resolution and low-resolution images from an external data set and then use the learned mapping to reconstruct a high-resolution image; they are currently the most popular. For example, Dong et al. first applied convolutional neural networks to the image super-resolution task, building a three-layer convolutional neural network to generate super-resolution images; He et al. reconstructed high-resolution images using residual units. On the colorization side, methods range from the earliest interaction-based ones, such as Luan et al., who proposed colorization using a similarity measure between neighboring pixels, to a later semi-automatic method that transfers the color statistics of one or more reference images to the gray input. Later, fully automatic methods were proposed; for example, Zhang et al. trained a network to combine low-level and high-level cues for colorization.
In recent years, thanks to the powerful learning ability of convolutional neural networks and end-to-end training, computer vision has progressed in many areas, such as image classification and face recognition, and many networks perform well. Some researchers have therefore begun to consider how to make a single network accomplish multiple tasks; for example, Iizuka et al. proposed a global network that learns the semantic context of an image so that the colorization result is more accurate. However, because that network is still data-driven, if the type of image being tested is not contained in the training classes, the result can be poor; moreover, the classification network is difficult to train and needs a long time to converge, so letting it assist another network in colorizing grayscale images may even be harmful at the start.
A search finds Chinese patent application No. 201610856231.1, filed September 27, 2016, entitled "Face attribute analysis method based on multi-task learning convolutional neural networks". That application comprises the following steps: 1. Single-task model analysis: 1) perform facial keypoint detection on the original samples of face images of each age, perform face alignment, and then crop according to a preset size to generate new samples containing face images; 2) using the new samples generated in step 1), separately train three single-task convolutional neural networks, namely an age-estimation network, a gender-recognition network and an ethnicity network, compare the convergence speed of the networks, and take the weights of the single-task convolutional neural network that converges most slowly. 2. Multi-task model training: 1) build a multi-task convolutional neural network with three task outputs, corresponding respectively to age estimation, gender recognition and ethnicity, each task using a softmax loss function as its objective; the multi-task convolutional neural network comprises a shared part for data sharing and information exchange in multi-task learning, and independent parts for computing the three task outputs; initialize the shared part of the multi-task network with the weights of the single-task network obtained above, forming the initialized multi-task convolutional neural network; 2) train the multi-task convolutional neural network on the generated new samples to obtain the trained multi-task model. 3. Face attribute judgment: 1) perform face detection on the input picture and judge whether it contains a face image; if it does, perform keypoint detection and face alignment on the input picture, then crop according to the preset size to generate a new picture containing the face image; 2) input the obtained new picture into the trained multi-task convolutional neural network model to perform age estimation, gender recognition and ethnicity classification. Although that method realizes a multi-task network, obtaining the attributes of a face, such as age estimation, gender recognition and ethnicity, from a face input, the application has the following shortcomings: 1) although it realizes multi-task functionality, it only analyzes the attributes of faces, so the three networks are similar, whereas real-world problems are complicated and variable and such closely related situations rarely occur; 2) because the problems that network solves are similar, its network structures are also similar; to handle two completely different tasks simultaneously, the combination of networks and the sharing of features must be considered further on that basis.
Based on the above analysis, the prior art needs a method for obtaining a better multi-task deep neural network.
Summary of the invention
1. Technical problem to be solved
In order to overcome the above problems of the prior art, namely the inability to cope with complicated and variable real-world situations and the possible incompatibility between tasks, the present invention proposes a method for simultaneous super-resolution and colorization of satellite images based on a multi-task deep neural network. The invention not only performs multi-task learning on two completely different tasks, but also solves the problem of incompatibility between networks, meeting the complicated requirements of reality.
2. technical scheme
To reach above-mentioned purpose, technical scheme provided by the invention is:
The method of the present invention for simultaneous super-resolution and colorization of satellite images based on a multi-task deep neural network comprises the steps of:
Step 1: using a satellite color-image data set, produce training sets of high-resolution and low-resolution image blocks;
Step 2: build a multi-task deep neural network for model training;
Step 3: using the training set produced in step 1 and the network built in step 2, adjust the network parameters and train the network;
Step 4: take a low-resolution grayscale image as the network input and, using the parameters learned in step 3, reconstruct a high-resolution color image as the output.
Further, the process by which step 1 produces the high-resolution and low-resolution color image block training sets is:
For every color image in a common satellite image-processing data set, first apply bicubic interpolation twice to the high-resolution image to obtain a low-resolution image of the same size as the corresponding high-resolution image; then cut each high-resolution image and low-resolution image into multiple image blocks with overlap between adjacent blocks, thereby obtaining the sets of high-resolution and low-resolution image blocks for deep-network training.
Further, step 2 builds a 43-layer deep network model divided into three parts: first the image is preprocessed, the following 20 layers form the super-resolution network, and the last 23 layers form the colorization network. In the preprocessing part, the color image is converted from RGB to the Lab color space and then split into two parts: the L channel, used as the input of the whole network, and the ab channels, used as the labels of the final colorization network. The super-resolution network contains 9 residual layers and two convolutional layers, where each residual unit has two convolutional layers; each convolutional layer is followed by a PReLU activation layer. The residual unit is shown in formula (1):
y_i = 0.9*h(x_i) + 0.1*F(x_i, w_i)
x_{i+1} = f(y_i)   (1)
where x_i denotes the feature input of the i-th residual unit, w_i denotes the weight and bias settings of the i-th layer, F denotes the residual function, f denotes the activation function PReLU, and h is the identity mapping h(x_i) = x_i.
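As an illustration, the weighted residual unit of formula (1) can be sketched in a few lines of NumPy. Here the residual branch F is an arbitrary stand-in function rather than the two 3*3 convolutions of the actual network, and the PReLU slope a = 0.25 is an assumed value (the patent leaves a learnable); this is a minimal sketch, not the patent's Caffe implementation.

```python
import numpy as np

def prelu(x, a=0.25):
    """PReLU activation, formula (7): f(x) = max(0, x) + a*min(0, x)."""
    return np.maximum(0.0, x) + a * np.minimum(0.0, x)

def residual_unit(x, F, w=None, a=0.25):
    """Weighted residual unit, formula (1):
    y_i = 0.9*h(x_i) + 0.1*F(x_i, w_i) with h the identity,
    followed by the activation x_{i+1} = f(y_i)."""
    y = 0.9 * x + 0.1 * F(x, w)
    return prelu(y, a)

# Toy residual branch standing in for the two conv+PReLU layers.
F = lambda x, w: 2.0 * x
x = np.array([-1.0, 0.5])
out = residual_unit(x, F)   # [-0.275, 0.55]
```

Note the 0.9/0.1 weighting makes the identity path dominate, so the unit stays close to a pure skip connection while the learned branch contributes a small correction.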
The deep network learns the mapping between low-resolution grayscale image blocks and high-resolution grayscale image blocks, as shown in formula (2):
x = F(y, Φ)   (2)
where x and y are the high-resolution and low-resolution grayscale image blocks respectively, and Φ is the model parameter learned by the super-resolution network, used for subsequent high-resolution image reconstruction;
In the colorization network, the fourth layer from the end is a deconvolution layer and the remainder are convolutional layers; each convolutional layer and the deconvolution layer is followed by a ReLU activation layer. The network input is the high-resolution grayscale image block output by the super-resolution network, and the network learns the mapping between high-resolution grayscale image blocks and ab color-component image blocks, as shown in formula (3):
x = f(y, θ)   (3)
where x and y are the ab color-component image block and the high-resolution grayscale image block respectively, and θ is the model parameter learned by the colorization network, used afterwards to predict the ab chrominance components corresponding to the luminance L of each pixel of the high-resolution grayscale image; combining the prediction with the high-resolution grayscale image L yields the high-resolution image in the Lab color space, which is then converted to the RGB color space to obtain the desired color image.
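The final recombination step, attaching the predicted ab components to the high-resolution L channel and converting Lab back to RGB, can be sketched as follows. The D65 white point and sRGB matrix are standard colorimetry constants, not values from the patent, and in practice a library routine such as skimage.color.lab2rgb would replace this hand-rolled conversion.

```python
import numpy as np

def lab_to_rgb(L, a, b):
    """Convert CIE Lab values to sRGB in [0, 1] (D65 white point)."""
    # Lab -> XYZ
    fy = (L + 16.0) / 116.0
    fx = fy + a / 500.0
    fz = fy - b / 200.0
    def finv(t):
        d = 6.0 / 29.0
        return np.where(t > d, t ** 3, 3.0 * d * d * (t - 4.0 / 29.0))
    X = 0.95047 * finv(fx)
    Y = 1.00000 * finv(fy)
    Z = 1.08883 * finv(fz)
    # XYZ -> linear sRGB
    r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    bl = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
    # gamma encoding
    def gamma(c):
        c = np.clip(c, 0.0, 1.0)
        return np.where(c <= 0.0031308, 12.92 * c, 1.055 * c ** (1 / 2.4) - 0.055)
    return gamma(r), gamma(g), gamma(bl)

# A pixel with L = 100 and zero chrominance should come out white.
rgb = lab_to_rgb(np.float64(100.0), np.float64(0.0), np.float64(0.0))
```

In the method described here, L would be the super-resolved luminance block and (a, b) the chrominance predicted by the colorization network.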
Further, in step 2 the training loss functions of the super-resolution network and the colorization network differ. In the super-resolution network, the loss function is the mean squared error, as shown in formula (4):
L(Φ) = (1/N) * Σ_{i=1..N} ||F(y_i, Φ) - x_i||^2   (4)
where N is the number of samples in the training set obtained in step 1, and x_i, y_i are the i-th high-resolution image block and the corresponding low-resolution image block;
In the colorization network, the loss function is a multinomial cross-entropy loss, as shown in formula (5):
L(θ) = -Σ_{h,w} v(Z_{h,w}) Σ_{q} Z_{h,w,q} * log(Ẑ_{h,w,q})   (5)
where Ẑ denotes the predicted probability distribution, Z denotes the true probability distribution, v is a rebalancing factor obtained from statistics of the ab color components of the training set, h and w denote the height and width of the image respectively, and q is the total number of ab color-component classes in the training set.
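The class-rebalanced multinomial cross-entropy of formula (5) can be sketched in NumPy as follows; the image size, class count, and uniform rebalancing weights in the toy example are illustrative assumptions, not the patent's actual statistics over the ab components.

```python
import numpy as np

def rebalanced_cross_entropy(Z_hat, Z, v):
    """Formula (5): loss = -sum_{h,w} v(Z_{h,w}) * sum_q Z_{h,w,q} * log(Z_hat_{h,w,q}).
    Z_hat : predicted distribution over q ab-classes, shape (h, w, q)
    Z     : true (e.g. one-hot) distribution, shape (h, w, q)
    v     : per-pixel rebalancing weights, shape (h, w)
    """
    per_pixel = -(Z * np.log(Z_hat + 1e-12)).sum(axis=-1)
    return (v * per_pixel).sum()

# Toy example: 2x2 image, 4 ab-classes, uniform prediction, one-hot targets.
h, w, q = 2, 2, 4
Z_hat = np.full((h, w, q), 1.0 / q)
Z = np.zeros((h, w, q)); Z[..., 0] = 1.0
v = np.ones((h, w))
loss = rebalanced_cross_entropy(Z_hat, Z, v)   # h*w*log(q)
```

With v estimated from the inverse frequency of each ab class in the training set, rare saturated colors contribute more to the loss than the dominant desaturated ones, which is the purpose of the rebalancing factor.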
Further, the activation function of the ReLU activation layers in step 2 is given by formula (6):
f(x) = max(0, x)   (6)
where x is the input of the ReLU activation function and f(x) is its output;
The activation function of the PReLU activation layers is given by formula (7):
f(x) = max(0, x) + a*min(0, x)   (7)
where x is the input of the PReLU activation function, f(x) is its output, and a is a learnable parameter.
Further, in step 2, the convolution kernel size of all convolutional layers of the constructed deep network is set to 3*3, except the last convolutional layer, whose kernel size is set to 1*1; the kernel size of the deconvolution layer is set to 4*4.
Further, in the super-resolution network, the number of feature maps of the first 19 convolutional layers is set to 64 and that of the last layer to 1; in the colorization network, the numbers of feature maps of the first 7 convolutional layers are set to 64, 64, 128, 128, 256, 256 and 256, the following 12 convolutional layers are set to 512, the next 3 convolutional layers are set to 256, and the number of feature maps of the last convolutional layer is set to 244. The output obtained by each convolutional and deconvolution layer is given by formula (8):
y_i = f(W_i*x_i + b_i), i = 1, 2, ..., 43   (8)
where W_i denotes the weight of the i-th layer, b_i its bias, x_i its input, and y_i its output;
Passing through the activation functions ReLU and PReLU respectively gives the results of formulas (9) and (10):
z_i = max(0, y_i)   (9)
z_j = max(0, y_j) + a*min(0, y_j)   (10)
where y_i and y_j are the outputs of the layers preceding the ReLU and PReLU activations respectively, and z_i and z_j are the outputs of ReLU and PReLU respectively.
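The per-layer computation of formula (8) followed by the activation of formula (10) can be sketched for a single 3*3 convolution in NumPy; the kernel values are arbitrary, and padding, multiple feature maps, and the learnable PReLU slope are omitted or fixed for brevity.

```python
import numpy as np

def conv3x3_prelu(x, W, b, a=0.25):
    """y = W*x + b over a sliding 3x3 window (formula (8), valid padding),
    then z = max(0, y) + a*min(0, y) (PReLU, formula (10))."""
    H, Wd = x.shape
    y = np.empty((H - 2, Wd - 2))
    for i in range(H - 2):
        for j in range(Wd - 2):
            y[i, j] = (W * x[i:i + 3, j:j + 3]).sum() + b
    return np.maximum(0.0, y) + a * np.minimum(0.0, y)

x = np.ones((5, 5))
W = np.full((3, 3), 1.0 / 9.0)   # arbitrary averaging kernel
z = conv3x3_prelu(x, W, b=-2.0)  # 3x3 output, all entries -0.25
```

A real framework layer would vectorize this and carry 64 or more such kernels per layer, as in the feature-map configuration described above.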
Further, step 3 trains the network on the Caffe deep-learning platform, first initializing the weights and biases of the multi-task deep neural network built in step 2. The detailed process is:
1) In the super-resolution network, the weights W are initialized with the MSRA method, after which W follows the Gaussian distribution:
W ~ N(0, 2/n)   (11)
where n denotes the number of input units of the layer, i.e., the number of input feature maps of the convolutional layer;
In the colorization network, the weights are all initialized to 0, i.e., W_i = 0;
2) In the entire network, the biases are all initialized to 0, i.e., b_i = 0.
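Under the standard convention that MSRA (He) initialization draws zero-mean Gaussian weights with variance 2/n, the initialization of step 1) can be sketched as follows; the layer shape and the fan-in definition (kernel height * kernel width * input channels) are illustrative assumptions.

```python
import numpy as np

def msra_init(n_in, shape, rng=np.random.default_rng(0)):
    """Draw weights W ~ N(0, 2/n_in), where n_in is the number of input
    connections of the layer."""
    std = np.sqrt(2.0 / n_in)
    return rng.normal(0.0, std, size=shape)

# Example: a 3x3 conv layer with 64 input and 64 output feature maps.
n_in = 3 * 3 * 64
W = msra_init(n_in, (64, 64, 3, 3))
b = np.zeros(64)   # biases initialized to 0, as in step 2)
```

The 2/n variance compensates for the halving of activation variance caused by ReLU-family nonlinearities, keeping signal magnitude roughly constant through deep stacks.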
Further, step 3 updates the network parameters using gradient descent with momentum, as in formula (12):
V_{i+1} = μ*V_i - α*∇L(W_i),  W_{i+1} = W_i + V_{i+1}   (12)
where V_{i+1} is the current weight update, V_i is the previous weight update, μ is the momentum weight applied to the previous update, α is the learning rate, and ∇L(W_i) is the gradient;
During training, the network parameters are updated for a given number of iterations.
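The momentum update of formula (12) can be illustrated on a one-dimensional quadratic loss L(W) = W^2/2, so that ∇L(W) = W; the momentum μ = 0.9 and learning rate α = 0.1 are typical values assumed for illustration, not taken from the patent.

```python
# Gradient descent with momentum, formula (12):
#   V_{i+1} = mu * V_i - alpha * grad L(W_i)
#   W_{i+1} = W_i + V_{i+1}
def momentum_step(W, V, grad, mu=0.9, alpha=0.1):
    V = mu * V - alpha * grad(W)
    return W + V, V

grad = lambda W: W          # gradient of L(W) = W^2 / 2
W, V = 1.0, 0.0
for _ in range(200):        # fixed iteration count, as described above
    W, V = momentum_step(W, V, grad)
```

After 200 iterations W has decayed to nearly zero; the velocity term V accumulates past gradients, which damps oscillation and speeds convergence compared with plain gradient descent.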
3. beneficial effect
Using technical scheme provided by the invention, compared with existing known technology, there is following remarkable result:
(1) The method of the invention for simultaneous super-resolution and colorization of satellite images based on a multi-task deep neural network considers the complicated and variable problems of reality by choosing two completely different directions of image processing, i.e., performing super-resolution and colorization on the image simultaneously. It not only meets the demand of multi-tasking but can also perform simultaneous super-resolution and colorization on images of any other kind, meeting the complicated requirements of reality.
(2) In the method of the invention, the two parts of the network not only optimize individually but also optimize cooperatively through feature sharing and feature interaction, achieving better results.
(3) By combining a high-performance super-resolution network and colorization network, the method not only enhances the detail of satellite images but can also simultaneously colorize grayscale images so that they automatically acquire realistic color. Compared with the original single-network approach, it needs no human interaction and its execution time is greatly shortened; it has wide application in fields such as historical photographs and remote-sensing images.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the invention for simultaneous super-resolution and colorization of satellite images based on a multi-task deep neural network;
Fig. 2 is the flow chart of data-set production in the invention;
Fig. 3 is the schematic diagram of the network model built by the invention; the ReLU and PReLU activation layers following the convolutions are not drawn in Fig. 3;
Fig. 4 is the detailed diagram of the residual unit in the invention.
Detailed description of the embodiments
To further explain the content of the invention, the invention is described in detail below in conjunction with the accompanying drawings and embodiments.
Embodiment 1
With reference to Fig. 1, the method of this embodiment for simultaneous super-resolution and colorization of satellite images based on a multi-task deep neural network specifically comprises the following steps:
Step 1: using common data sets such as ImageNet and the AID satellite data set, produce the high-resolution and low-resolution image block training sets; the specific steps, shown in Fig. 2, are:
For every color image in the data set (e.g., the AID satellite image data set), first apply bicubic interpolation twice to the high-resolution image (the first time bicubic down-sampling, the second time bicubic up-sampling) to obtain a low-resolution image of the same size as the corresponding high-resolution image;
Cut each high-resolution image and low-resolution image into multiple 93*93 image blocks (93*93 blocks contain features more conducive to super-resolution learning), cropping at intervals of 27 pixels so that adjacent blocks partly overlap, thereby obtaining the sets of high-resolution and low-resolution image blocks for deep-network training.
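The block-cropping scheme of step 1, 93*93 blocks taken every 27 pixels, can be sketched in NumPy; the bicubic down/up-sampling itself would normally come from a library routine (e.g. PIL or OpenCV) and is replaced here by a placeholder identity copy so the sketch stays self-contained.

```python
import numpy as np

def extract_blocks(img, block=93, stride=27):
    """Crop overlapping block x block patches every `stride` pixels,
    as in the data-set production step (93x93 blocks, 27-pixel step)."""
    H, W = img.shape[:2]
    patches = []
    for i in range(0, H - block + 1, stride):
        for j in range(0, W - block + 1, stride):
            patches.append(img[i:i + block, j:j + block])
    return np.stack(patches)

# A 201x201 grayscale image yields a 5x5 grid of overlapping patches.
hr = np.arange(201 * 201, dtype=np.float32).reshape(201, 201)
lr = hr.copy()   # placeholder for the bicubic down/up-sampled copy
hr_patches = extract_blocks(hr)
lr_patches = extract_blocks(lr)
```

Because both images are cropped with the same grid, each low-resolution patch stays aligned with its high-resolution counterpart, which is what the training pairs require.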
Step 2: build a multi-task deep neural network for model training;
2-1. Build a 43-layer deep network model, whose concrete structure is shown in Fig. 3 and which is divided into three parts: first the image is preprocessed, the following 20 layers form the super-resolution network, and the last 23 layers form the colorization network. During preprocessing, the color image is converted from RGB to the Lab color space and then split into two parts: the L channel, used as the input of the whole network, and the ab channels, used as the labels of the final colorization network. The super-resolution network contains 9 residual layers and two convolutional layers; taking the first residual unit as an example (the remaining residual units are identical to the first), the concrete structure of a residual unit, shown in Fig. 4, has two convolutional layers, each followed by a PReLU activation layer. The residual unit is shown in formula (1):
y_i = 0.9*h(x_i) + 0.1*F(x_i, w_i)
x_{i+1} = f(y_i)   (1)
where x_i denotes the feature input of the i-th residual unit, w_i denotes the weight and bias settings of the i-th layer, F denotes the residual function, f denotes the activation function PReLU, and h is the identity mapping h(x_i) = x_i.
In this network, the network learns the mapping between low-resolution grayscale image blocks and high-resolution grayscale image blocks, as shown in formula (2):
x = F(y, Φ)   (2)
where x and y denote the high-resolution and low-resolution image blocks respectively, and Φ is the model parameter learned by the super-resolution network, used for subsequent high-resolution image reconstruction.
Finally, in the colorization network, the fourth layer from the end is a deconvolution layer and all the rest are convolutional layers; a ReLU activation layer follows each convolutional layer and the deconvolution layer. The network input is the high-resolution grayscale image block output by the preceding part of the network, and the network learns the mapping between high-resolution grayscale image blocks and ab color-component image blocks, as shown in formula (3):
x = f(y, θ)   (3)
where x and y are the ab color-component image block and the high-resolution grayscale image block respectively, and θ is the model parameter learned by the colorization network, used afterwards to predict the ab chrominance components corresponding to the luminance L of each pixel of the high-resolution grayscale image; combining the prediction with the high-resolution grayscale image L yields the high-resolution image in the Lab color space, which is then converted to the RGB color space to obtain the desired color image.
The loss functions of network training differ between the two parts of the network. In the super-resolution network, the loss function is the mean squared error, as shown in formula (4):
L(Φ) = (1/N) * Σ_{i=1..N} ||F(y_i, Φ) - x_i||^2   (4)
where N is the number of samples in the training set obtained in step 1, x_i, y_i are the i-th high-resolution image block and the corresponding low-resolution image block, and Φ is the model parameter learned by the super-resolution network.
In the colorization network, the training loss is a multinomial cross-entropy loss, as shown in formula (5):
L(θ) = -Σ_{h,w} v(Z_{h,w}) Σ_{q} Z_{h,w,q} * log(Ẑ_{h,w,q})   (5)
where Ẑ denotes the predicted probability distribution, Z denotes the true probability distribution, v is a rebalancing factor obtained from statistics of the ab color components of the training set, h and w denote the height and width of the image respectively, and q denotes the ab color-component classes in the training set.
The activation function of the ReLU activation layers is given by formula (6):
f(x) = max(0, x)   (6)
where x is the input of the ReLU activation function and f(x) is its output.
The activation function of the PReLU activation layers is given by formula (7):
f(x) = max(0, x) + a*min(0, x)   (7)
where x is the input of the PReLU activation function, f(x) is its output, and a is a learnable parameter.
2-2. The convolution kernel size of all convolutional layers of the constructed deep network is set to 3*3, except the last layer, whose kernel size is set to 1*1; the kernel size of the deconvolution layer is set to 4*4. In the super-resolution network, the number of feature maps of the first 19 convolutional layers is set to 64 and that of the last layer is 1; in the colorization network, the numbers of feature maps of the first 7 convolutional layers are set to 64, 64, 128, 128, 256, 256 and 256, the following 12 convolutional layers are set to 512, the next 3 convolutional layers are set to 256, and the number of feature maps of the last convolutional layer is set to 244. The per-layer configuration of the network is shown in Table 1.
Table 1. Network model configuration of the invention
The output obtained by each convolutional and deconvolution layer is given by formula (8):
y_i = f(W_i*x_i + b_i), i = 1, 2, ..., 43   (8)
where W_i denotes the weight of the i-th layer, b_i its bias, x_i its input, and y_i its output;
Passing through the activation functions ReLU and PReLU respectively gives the results of formulas (9) and (10):
z_i = max(0, y_i)   (9)
z_j = max(0, y_j) + a*min(0, y_j)   (10)
where y_i and y_j are the outputs of the layers preceding the ReLU and PReLU activations respectively, and z_i and z_j are the outputs of ReLU and PReLU respectively.
Step 3: using the training set produced in step 1 and the network built in step 2, adjust the network parameters and train the network, specifically as follows:
3-1. Train the network on the Caffe deep-learning platform. For the multi-task deep neural network built in step 2, first initialize the super-resolution network with the MSRA method, initialize the colorization network to 0, and initialize all biases to 0. The detailed process is:
1) After the weights W of the super-resolution network are initialized with the MSRA method, W follows the Gaussian distribution:
W ~ N(0, 2/n)   (11)
where n denotes the number of input units of the layer, i.e., the number of input feature maps of the convolutional layer.
In the colorization network, the weights are all initialized to 0, i.e., W_i = 0.
2) In the entire network, the biases are all initialized to 0, i.e., b_i = 0.
3-2. Update the network parameters using gradient descent with momentum, as in formula (12):
V_{i+1} = μ*V_i - α*∇L(W_i),  W_{i+1} = W_i + V_{i+1}   (12)
where V_{i+1} is the current weight update, V_i is the previous weight update, μ is the momentum weight applied to the previous update, α is the learning rate, and ∇L(W_i) is the gradient;
3-3. During training, the network parameters are updated for a given number of iterations.
Step 4: after training finishes, take a low-resolution grayscale image as the input of the network and, using the parameters learned in step 3, reconstruct a high-resolution color image as the output.
The method of this embodiment for simultaneous super-resolution and colorization of satellite images based on a multi-task deep neural network considers the complicated and variable problems of reality by choosing two completely different directions of image processing. It not only meets the demand of multi-tasking but can also perform simultaneous super-resolution and colorization on any other kind of image, meeting the complicated requirements of reality. In addition, the two parts of the network not only optimize individually but also optimize cooperatively through feature sharing and feature interaction, achieving better results. Furthermore, no human interference is needed and the execution time is greatly shortened; the method has wide application in fields such as historical photographs and remote-sensing images.
The invention and its embodiments have been described above schematically, and the description is not restrictive; what is shown in the accompanying drawings is only one of the embodiments of the invention, and the actual structure is not limited thereto. Therefore, if a person of ordinary skill in the art, enlightened by the invention and without departing from its spirit, designs a similar structure or embodiment without creative effort, it shall fall within the protection scope of the invention.

Claims (9)

1. A method for simultaneous super-resolution and coloring of a satellite image based on a multitask deep neural network, comprising the steps of:
Step 1, using a satellite color-image dataset, make training sets of high-resolution and low-resolution image patches;
Step 2, build a multitask deep neural network for model training;
Step 3, with the training set made in step 1 and the network built in step 2, adjust the network parameters and train the network;
Step 4, take a low-resolution grayscale image as the input of the network and, using the parameters learned in step 3, reconstruct a high-resolution color image as the output.
2. a kind of satellite image based on multitask deep neural network according to claim 1 carry out simultaneously super-resolution and The method of coloring, it is characterised in that:The process that step 1 makes the coloured image block training set of high-resolution and low resolution is:
Every coloured image is concentrated for a conventional satellite image processing data, high-definition picture is carried out first double twice Cubic interpolation, obtain with high-definition picture corresponding to identical size low-resolution image;Then by every high-definition picture Multiple images block is cut into low-resolution image, and intersection be present between adjacent image block, is thus used for The high-definition picture block of depth network training and the set of low-resolution image block.
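The patch-making step can be sketched as follows; a crude subsample-and-repeat resize stands in for true bicubic interpolation (in practice one would use e.g. `cv2.resize` with `INTER_CUBIC`), and the patch size, stride, and scale are illustrative values:

```python
import numpy as np

def make_training_pairs(hr_img, scale=2, patch=32, stride=16):
    """Build LR/HR patch pairs: degrade the HR image to a same-size LR image,
    then cut overlapping patches from both (stride < patch gives the overlap)."""
    h, w = hr_img.shape[:2]
    # Crude stand-in for bicubic down/up-sampling: subsample then repeat pixels.
    lr_small = hr_img[::scale, ::scale]
    lr_img = np.repeat(np.repeat(lr_small, scale, axis=0), scale, axis=1)[:h, :w]
    hr_patches, lr_patches = [], []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            hr_patches.append(hr_img[y:y+patch, x:x+patch])
            lr_patches.append(lr_img[y:y+patch, x:x+patch])
    return np.stack(hr_patches), np.stack(lr_patches)

hr = np.arange(64 * 64, dtype=np.float32).reshape(64, 64)
hr_p, lr_p = make_training_pairs(hr)
```

A 64×64 image with 32-pixel patches at stride 16 yields a 3×3 grid of overlapping patches per image.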
3. a kind of satellite image based on multitask deep neural network according to claim 1 or 2 carries out oversubscription simultaneously The method distinguished and coloured, it is characterised in that:One 43 layers of depth network model is built in step 2, one is divided into three parts, Image is pre-processed first, 20 layers of composition super-resolution network afterwards, last 23 layers of composition coloured networks;It is pre- in image Process part, coloured image is transformed into Lab color spaces from RGB, coloured image is then divided into two parts, a part is L Vector, the input as whole network;Another part be ab vector, the label as last coloured networks;In super-resolution network In, 9 residual error layers and two convolutional layers are contained altogether, wherein each residual unit there are two convolutional layers;Each convolutional layer it It is followed by a PReLU active coating;Shown in residual unit such as formula (1):
y_i = 0.9·h(x_i) + 0.1·F(x_i, w_i)
x_{i+1} = f(y_i)   (1)
Wherein, x_i denotes the feature input of the i-th residual unit, w_i denotes the weights and bias terms of the i-th layer, F denotes the residual function, f denotes the activation function PReLU, and h is the identity mapping h(x_i) = x_i;
The deep network learns the mapping between low-resolution grayscale image patches and high-resolution grayscale image patches, as shown in formula (2):
x = F(y, Φ)   (2)
Wherein, x and y are the high-resolution and low-resolution grayscale image patches respectively, and Φ is the model parameter learned by the super-resolution network, used for subsequent high-resolution image reconstruction;
In the coloring network, the fourth layer from the end is a deconvolution layer and the remaining layers are convolutional layers; each convolutional and deconvolution layer is followed by a ReLU activation layer; the network input is the high-resolution grayscale image patch output by the super-resolution network, and the network learns the mapping between high-resolution grayscale image patches and ab color-component image patches, as shown in formula (3):
x = f(y, θ)   (3)
Wherein, x and y are the ab color-component image patch and the high-resolution grayscale image patch respectively, and θ is the model parameter learned by the coloring network.
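The weighted residual unit of formula (1) can be sketched as follows; the residual branch F is left as an arbitrary callable standing in for the two convolutional layers, and the PReLU slope a = 0.25 is an illustrative value, not from the claim:

```python
import numpy as np

def prelu(x, a=0.25):
    """PReLU activation: max(0, x) + a*min(0, x)."""
    return np.maximum(0.0, x) + a * np.minimum(0.0, x)

def residual_unit(x, F, a=0.25):
    """Formula (1): y = 0.9*h(x) + 0.1*F(x) with h the identity map,
    then x_next = PReLU(y). F is the two-convolution residual branch."""
    y = 0.9 * x + 0.1 * F(x)
    return prelu(y, a)

x = np.array([1.0, -2.0])
out = residual_unit(x, lambda t: t * 0.0)  # zero residual branch for the demo
```

Note the 0.9/0.1 weighting: unlike a standard residual unit (identity plus full residual), the identity path here dominates, so the residual branch only perturbs the skip connection.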
4. The method for simultaneous super-resolution and coloring of a satellite image based on a multitask deep neural network according to claim 3, characterized in that: the loss function of network training in step 2 differs between the super-resolution network and the coloring network; in the super-resolution network, the loss function is the mean squared error, as shown in formula (4):
L_1 = min_Φ (1/N) Σ_{i=1}^{N} ||F(y^i, Φ_i) − x^i||²   (4)
Wherein, N is the number of samples in the training set obtained in step 1, and x^i, y^i are the i-th high-resolution image patch and its corresponding low-resolution image patch;
In the coloring network, the loss function is the multinomial cross-entropy loss, as shown in formula (5):
L_2(Ẑ, Z) = −Σ_{h,w} v(Z_{h,w}) Σ_q Z_{h,w,q} log(Ẑ_{h,w,q})   (5)
Wherein, Ẑ denotes the predicted probability distribution and Z the true probability distribution; the function v is a rebalancing factor obtained from statistics of the ab color components of the training set; h and w denote the height and width of the image, and q ranges over the ab color-component classes of the training set.
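A minimal sketch of the two losses, assuming numpy arrays: `mse_loss` follows formula (4) and `rebalanced_cross_entropy` follows formula (5); the array shapes and names are illustrative:

```python
import numpy as np

def mse_loss(pred, target):
    """Formula (4): mean over the N samples of the squared l2 reconstruction error.
    pred, target: (N, ...) stacks of image patches."""
    n = pred.shape[0]
    return np.sum((pred - target) ** 2) / n

def rebalanced_cross_entropy(pred, target, v):
    """Formula (5): per-pixel multinomial cross-entropy over the q ab-color bins,
    weighted by a rebalancing factor v per spatial location.
    pred, target: (H, W, Q); pred is a probability distribution over bins; v: (H, W)."""
    per_pixel = -np.sum(target * np.log(pred), axis=-1)  # inner sum over q
    return np.sum(v * per_pixel)                         # weighted sum over h, w
```

The rebalancing factor v upweights rare ab-color bins so the network is not dominated by the desaturated colors that are most frequent in the training set.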
5. The method for simultaneous super-resolution and coloring of a satellite image based on a multitask deep neural network according to claim 4, characterized in that: the activation function of the ReLU activation layers in step 2 is expressed by formula (6):
f(x) = max(0, x)   (6)
Wherein, x is the input of the ReLU activation function and f(x) is its output;
The activation function of the PReLU activation layers is expressed by formula (7):
f(x) = max(0, x) + a·min(0, x)   (7)
Wherein, x is the input of the PReLU activation function, f(x) is its output, and a is a learnable parameter.
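Formulas (6) and (7) translate directly into code; the slope value used in the example call is illustrative:

```python
import numpy as np

def relu(x):
    """Formula (6): f(x) = max(0, x)."""
    return np.maximum(0.0, x)

def prelu(x, a):
    """Formula (7): f(x) = max(0, x) + a*min(0, x); a is the learnable slope."""
    return np.maximum(0.0, x) + a * np.minimum(0.0, x)

x = np.array([-2.0, 0.0, 3.0])
```

PReLU differs from ReLU only for negative inputs, where it passes a learned fraction a of the signal instead of zeroing it.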
6. The method for simultaneous super-resolution and coloring of a satellite image based on a multitask deep neural network according to claim 5, characterized in that: in step 2, except for the last convolutional layer, the convolution kernel size of all convolutional layers of the constructed deep network is set to 3×3; the kernel size of the last layer is set to 1×1, and the kernel size of the deconvolution layer is set to 4×4.
7. The method for simultaneous super-resolution and coloring of a satellite image based on a multitask deep neural network according to claim 6, characterized in that: in the super-resolution network, the number of feature maps of the first 19 convolutional layers is set to 64 and that of the last layer to 1; in the coloring network, the numbers of feature maps of the first 7 convolutional layers are set to 64, 64, 128, 128, 256, 256 and 256, those of the following 12 convolutional layers to 512, those of the next 3 convolutional layers to 256, and that of the last convolutional layer to 244; the output of each convolutional and deconvolution layer is expressed by formula (8):
y_i = f(W_i·x_i + b_i),  i = 1, 2, …, 43   (8)
Wherein, W_i denotes the weights of the i-th layer, b_i its bias, x_i its input, and y_i its output;
Passed through the activation functions ReLU and PReLU respectively, the results are as shown in formulas (9) and (10):
z_i = max(0, y_i)   (9)
z_j = max(0, y_j) + a·min(0, y_j)   (10)
Wherein, y_i and y_j are the outputs of the layers preceding the ReLU and PReLU activations respectively, and z_i and z_j are the outputs of the ReLU and PReLU activation functions.
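A sketch of the per-layer computation of formulas (8)–(10); a fully-connected layer stands in for the convolutional and deconvolution layers, and the PReLU slope is an assumed value:

```python
import numpy as np

def layer_forward(x, W, b, activation):
    """Formula (8): y = W·x + b, followed by ReLU (formula (9)) or
    PReLU (formula (10)). W, b: the layer's weights and bias."""
    y = W @ x + b
    if activation == "relu":
        return np.maximum(0.0, y)                           # formula (9)
    a = 0.25  # assumed PReLU slope
    return np.maximum(0.0, y) + a * np.minimum(0.0, y)      # formula (10)

W = np.array([[1.0, 0.0], [0.0, -1.0]])
b = np.zeros(2)
z = layer_forward(np.array([2.0, 3.0]), W, b, "relu")
```

In the claimed network the ReLU branch corresponds to the coloring layers and the PReLU branch to the super-resolution layers.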
8. The method for simultaneous super-resolution and coloring of a satellite image based on a multitask deep neural network according to claim 7, characterized in that: step 3 trains the network on the Caffe deep learning platform, first initializing the weights and biases of the multitask deep neural network built in step 2, the detailed process being:
1) In the super-resolution network, the weights W are initialized in the MSRA manner, after which W follows the Gaussian distribution:
W ~ G[0, sqrt(2/n)]   (11)
Wherein, n denotes the number of input units of the layer, i.e. the number of input feature maps of the convolutional layer;
In the coloring network, the weights are all initialized to 0, i.e. W_i = 0;
2) Throughout the network, the biases are all initialized to 0, i.e. b_i = 0.
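The MSRA initialization of formula (11) can be sketched as follows; whether n (`fan_in`) includes the kernel area follows the original MSRA (He) convention and is an assumption beyond the claim text:

```python
import numpy as np

def msra_init(fan_in, shape, rng=np.random.default_rng(0)):
    """Formula (11): draw W ~ Gaussian(0, sqrt(2/n)), where n is the number of
    input units of the layer (input feature maps x kernel area for a conv layer)."""
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(0.0, std, size=shape)

# Example: a layer with 64 input feature maps and 3x3 kernels -> n = 3*3*64.
W = msra_init(fan_in=3 * 3 * 64, shape=(10000,))
```

The sqrt(2/n) scale keeps the variance of activations roughly constant across ReLU/PReLU layers, which is why the super-resolution branch uses it while the coloring branch starts from zero weights.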
9. The method for simultaneous super-resolution and coloring of a satellite image based on a multitask deep neural network according to claim 8, characterized in that: step 3 updates the network parameters with the gradient descent method, expressed by formula (12):
V_{i+1} = μ·V_i − α·∇L(W_i),  W_{i+1} = W_i + V_{i+1}   (12)
Wherein, V_{i+1} denotes the current weight update value, V_i denotes the previous weight update value, μ is the weight given to the previous gradient (the momentum), α is the learning rate, and ∇L(W_i) is the gradient;
During training, the parameter update is repeated for the given number of iterations.
CN201711224807.3A 2017-11-29 2017-11-29 Method for simultaneously super-resolving and coloring satellite image based on multitask deep neural network Active CN107833183B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711224807.3A CN107833183B (en) 2017-11-29 2017-11-29 Method for simultaneously super-resolving and coloring satellite image based on multitask deep neural network


Publications (2)

Publication Number Publication Date
CN107833183A true CN107833183A (en) 2018-03-23
CN107833183B CN107833183B (en) 2021-05-25

Family

ID=61646641

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711224807.3A Active CN107833183B (en) 2017-11-29 2017-11-29 Method for simultaneously super-resolving and coloring satellite image based on multitask deep neural network

Country Status (1)

Country Link
CN (1) CN107833183B (en)

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108648144A (en) * 2018-04-20 2018-10-12 南开大学 A kind of FPM high-resolution colour picture method for reconstructing based on deep learning algorithm
CN108830912A (en) * 2018-05-04 2018-11-16 北京航空航天大学 A kind of interactive grayscale image color method of depth characteristic confrontation type study
CN108876870A (en) * 2018-05-30 2018-11-23 福州大学 A kind of domain mapping GANs image rendering methods considering texture complexity
CN108921783A (en) * 2018-06-01 2018-11-30 武汉大学 A kind of satellite image super resolution ratio reconstruction method based on losses by mixture function constraint
CN108921932A (en) * 2018-06-28 2018-11-30 福州大学 A method of the black and white personage picture based on convolutional neural networks generates various reasonable coloring in real time
CN109063565A (en) * 2018-06-29 2018-12-21 中国科学院信息工程研究所 A kind of low resolution face identification method and device
CN109146788A (en) * 2018-08-16 2019-01-04 广州视源电子科技股份有限公司 Super-resolution image reconstruction method and device based on deep learning
CN109191411A (en) * 2018-08-16 2019-01-11 广州视源电子科技股份有限公司 A kind of multitask image rebuilding method, device, equipment and medium
CN109360148A (en) * 2018-09-05 2019-02-19 北京悦图遥感科技发展有限公司 Based on mixing random down-sampled remote sensing image ultra-resolution ratio reconstructing method and device
CN109410114A (en) * 2018-09-19 2019-03-01 湖北工业大学 Compressed sensing image reconstruction algorithm based on deep learning
CN109961105A (en) * 2019-04-08 2019-07-02 上海市测绘院 A kind of Classification of High Resolution Satellite Images method based on multitask deep learning
CN110163801A (en) * 2019-05-17 2019-08-23 深圳先进技术研究院 A kind of Image Super-resolution and color method, system and electronic equipment
CN110288515A (en) * 2019-05-27 2019-09-27 宁波大学 The method and CNN coloring learner intelligently coloured to the microsctructural photograph of electron microscope shooting
CN110675462A (en) * 2019-09-17 2020-01-10 天津大学 Gray level image colorizing method based on convolutional neural network
CN110880163A (en) * 2018-09-05 2020-03-13 南京大学 Low-light color imaging method based on deep learning
CN111429350A (en) * 2020-03-24 2020-07-17 安徽工业大学 Rapid super-resolution processing method for mobile phone photographing
CN111627080A (en) * 2020-05-20 2020-09-04 广西师范大学 Gray level image coloring method based on convolution nerve and condition generation antagonistic network
CN111986084A (en) * 2020-08-03 2020-11-24 南京大学 Multi-camera low-illumination image quality enhancement method based on multi-task fusion
CN112489164A (en) * 2020-12-07 2021-03-12 南京理工大学 Image coloring method based on improved depth separable convolutional neural network
CN112508786A (en) * 2020-12-03 2021-03-16 武汉大学 Satellite image-oriented arbitrary-scale super-resolution reconstruction method and system
CN113537246A (en) * 2021-08-12 2021-10-22 浙江大学 Gray level image simultaneous coloring and hyper-parting method based on counterstudy
CN114463175A (en) * 2022-01-18 2022-05-10 哈尔滨工业大学 Mars image super-resolution method based on deep convolution neural network

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105354273A (en) * 2015-10-29 2016-02-24 浙江高速信息工程技术有限公司 Method for fast retrieving high-similarity image of highway fee evasion vehicle
CN106204449A (en) * 2016-07-06 2016-12-07 安徽工业大学 A kind of single image super resolution ratio reconstruction method based on symmetrical degree of depth network
US20170148139A1 (en) * 2015-11-25 2017-05-25 Heptagon Micro Optics Pte. Ltd. Super-resolution image reconstruction using high-frequency band extraction





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant