CN110533580A - Image style transfer method based on a generative adversarial network - Google Patents

Image style transfer method based on a generative adversarial network

Info

Publication number
CN110533580A
Authority
CN
China
Prior art keywords
filter1
filter2
data set
image
cyclegan
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910730983.7A
Other languages
Chinese (zh)
Inventor
李垚辰
王瑞豪
吴霄
陆丹卉
刘跃虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Jiaotong University
Original Assignee
Xi'an Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Jiaotong University
Priority to CN201910730983.7A priority Critical patent/CN110533580A/en
Publication of CN110533580A publication Critical patent/CN110533580A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06T3/04

Abstract

The invention discloses an image style transfer method based on a generative adversarial network. On the basis of CycleGAN, the invention adds two filter data sets, Filter1 and Filter2. The filter data set Filter1 contains images whose style does not need to be converted; the generator is made to leave them unchanged. The filter data set Filter2 contains incorrectly transferred images; by having the discriminator judge them to be wrong pictures, the transfer errors of CycleGAN are suppressed. After the overall loss function is adjusted, the network is trained as a generative adversarial network, and the resulting model is used for image style transfer. While keeping the learning unsupervised, the invention effectively solves the over-transfer problems that occur in CycleGAN.

Description

Image style transfer method based on a generative adversarial network
Technical field
The invention belongs to the fields of image processing, computer vision, and pattern recognition, and in particular relates to an image style transfer method based on a generative adversarial network.
Background technique
Image style transfer occupies an important position in computer vision: given a desired target style, an existing image is converted so that it carries that style. Style transfer has many applications in animation production, camera photo effects, and simulation systems. The typical representative of style transfer models based on unsupervised learning is CycleGAN (see Zhu J-Y, Park T, Isola P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]. Proceedings of the IEEE International Conference on Computer Vision, 2017: 2223-2232). The basic idea of CycleGAN is the generative adversarial network (GAN). To guarantee the structural similarity between the original image and the style-converted image, CycleGAN adds a cycle design. Because CycleGAN adopts the unsupervised, unpaired learning of GANs, its data sets are relatively easy to collect. The images generated by the CycleGAN style transfer model are of relatively good quality, and its test speed is faster than that of traditional style transfer algorithms. However, CycleGAN is prone to converting the wrong targets. For example, when horses are converted to zebras, CycleGAN easily produces zebra-striped people; when day scenes are converted to night scenes, vehicles easily turn black by mistake. This is shown in Fig. 1.
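The cycle design mentioned above can be illustrated with a minimal sketch in plain Python. The scalar "images" and the stub generator pair are illustrative assumptions, not part of CycleGAN itself:

```python
def cycle_consistency_loss(x, G_y, G_x):
    """L1 cycle loss: translating x to the y style and back should recover x.

    G_y maps x-style images to y-style images; G_x maps them back. Here the
    'images' are plain floats and the generators are toy stubs.
    """
    reconstructed = G_x(G_y(x))
    return abs(reconstructed - x)

# Toy stub generators: an exactly invertible pair, so the cycle loss is zero.
G_y = lambda x: x + 1.0   # pretend x-style -> y-style
G_x = lambda y: y - 1.0   # pretend y-style -> x-style

print(cycle_consistency_loss(3.0, G_y, G_x))  # 0.0
```

A real implementation would sum this pixel-wise over image tensors; the point is only that a perfect generator pair drives the cycle term to zero.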
Summary of the invention
The purpose of the present invention is to overcome the above shortcomings and to provide an image style transfer method based on a generative adversarial network that can resolve the wrong-conversion cases in CycleGAN transfer.
To achieve the above purpose, the method comprises the following steps:
Step 1: build a filter data set Filter1 containing images that must remain unchanged;
Step 2: feed the filter data set Filter1 into the generator of CycleGAN, so that Filter1 images remain unchanged after passing through the generator;
Step 3: build a filter data set Filter2 containing incorrectly converted images;
Step 4: feed the filter data set Filter2 into the discriminator of CycleGAN, so that Filter2 images are identified as wrong after passing through the discriminator;
Step 5: adjust the overall loss function of CycleGAN after adding the filter data sets Filter1 and Filter2, train the CycleGAN with the added filter data sets as a generative adversarial network, and complete the image style transfer method once training finishes.
The filter data set Filter1 holds images filter1 of styles other than the x style, the x style being the style of the images to be converted. A filter1 image undergoes no change after passing through the generator G_y, i.e. G_y(filter1) = filter1.
The filter data set Filter2 holds incorrectly converted images filter2 of the x style, the x style being the style of the images to be converted. The discriminator D_y judges that the images filter2 in this data set do not belong to the y style, i.e. D_y(filter2) = 0.
The overall loss function is adjusted as follows:
The filter data set Filter1 has a loss function L_filter1, and the filter data set Filter2 has a loss function L_filter2. The overall loss function is then

L_generator_loss = L_fake_true_loss + cycle_rate · L_cycle_loss + filter1_rate · L_filter1 + filter2_rate · L_filter2

where L_fake_true_loss is the loss for the discriminator judging a generated image to be a real image, cycle_rate is the learning rate of L_cycle_loss, L_cycle_loss is the difference loss between the original image and the image obtained after the cycle, filter1_rate is the learning rate of L_filter1, and filter2_rate is the learning rate of L_filter2.
The filter data set Filter1 has the loss function L_filter1; the images filter1 in the filter data set remain unchanged after passing through G_y, so

L_filter1 = (G_y(filter1) - filter1)^2

where filter1 is an image added by the filter data set Filter1.
The filter data set Filter2 has the loss function L_filter2; an incorrectly generated image filter2 is judged to be a negative example, so

L_filter2 = label_y · (1 - D_y(filter2))

where label_y is the label of a y-style image (1 for a positive example, 0 for a negative example) and filter2 is an image added by the filter data set Filter2.
Both the generator and the discriminator adopt convolutional neural network models.
Compared with the prior art, the present invention adds two filter data sets on the basis of CycleGAN: Filter1, whose images remain unchanged after passing through the CycleGAN generator, and Filter2, whose images are identified as wrong by the CycleGAN discriminator. After the overall loss function is adjusted, the network is trained as a generative adversarial network, and the resulting model is used for image style transfer. The invention's requirement on data sets is very low: no one-to-one paired data set is needed. The method is unsupervised style transfer with high learning efficiency, and it solves the wrong-conversion problem in CycleGAN transfer.
Detailed description of the invention
Fig. 1 illustrates cases of incorrect transfer by existing CycleGAN; (a) and (c) are original images, (b) and (d) are CycleGAN conversions;
Fig. 2 compares the results of the present invention and CycleGAN; (a) and (d) are original images, (b) and (e) are CycleGAN conversions, (c) and (f) are conversions by the present invention;
Fig. 3 is the internal structure diagram of the generator neural network model of the invention;
Fig. 4 is the internal structure diagram of the discriminator neural network model of the invention;
Fig. 5 is the overall model structure diagram of the invention.
Specific embodiment
The present invention is further described below with reference to the accompanying drawings.
The present invention is named FilterGAN. Because a CycleGAN data set contains only two style types, the CycleGAN style transfer model easily converts the wrong targets when it encounters images of similar styles. FilterGAN solves this problem through two filtering steps while keeping the learning unsupervised. Referring to Fig. 5, the specific method of the invention is as follows:
Step 1: add the filter data set Filter1, so that the generator G_y performs no style conversion on images filter1 that do not need to change.
The generator G_y of the CycleGAN model tries to change any input image, yet the two style data sets contain only two classes of images. In day-to-night conversion, the training data set of G_y contains no images other than day images; in horse-to-zebra conversion, it contains no images other than horses. That is, in day-road-to-night-road style transfer, the x-style data set contains nothing but day images, and in horse-to-zebra transfer the x-style data set contains no images except horses. The invention therefore adds an external filter data set Filter1 containing images filter1 of styles other than x and y. Taking day-to-night as an example, Filter1 holds filter1 images other than night scenes, such as red vehicles, white vehicles, and green trees. A filter1 image undergoes no change after passing through G_y, i.e. G_y(filter1) = filter1. The purpose is to teach the generator G_y to refuse any change to images outside the day style, for example, refusing to turn a white vehicle into a black one.
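A minimal sketch of the Filter1 identity constraint, with images simplified to flat lists of pixel values and a stub identity generator; both simplifications are assumptions for illustration, not the patent's networks:

```python
def filter1_loss(G_y, filter1_batch):
    """L_filter1 = mean of (G_y(filter1) - filter1)^2 over a batch.

    A real implementation would compute this over image tensors; here
    each 'image' is a flat list of pixel values.
    """
    total, count = 0.0, 0
    for img in filter1_batch:
        out = G_y(img)
        total += sum((o - p) ** 2 for o, p in zip(out, img))
        count += len(img)
    return total / count

identity = lambda img: list(img)          # a G_y that leaves images unchanged
batch = [[0.1, 0.5, 0.9], [0.2, 0.4, 0.6]]
print(filter1_loss(identity, batch))      # 0.0 when G_y(filter1) == filter1
```

Any generator that alters a Filter1 image pays a positive penalty, which is exactly the "refuse to change" behavior described above.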
Step 2: add the filter data set Filter2, so that the discriminator D_y judges incorrectly converted images filter2 to be negative examples, suppressing the generation of such error images.
The discriminator D_y misjudges during discrimination: it also accepts incorrectly style-transferred images filter2 as images of the y style. For example, in day-road-to-night-road conversion, D_y wrongly accepts "black vehicles" and "transparent vehicles" as y-style images; in horse-to-zebra conversion, D_y wrongly accepts incorrectly converted images such as "zebra-striped people" and "zebra-striped vehicles" as y-style images. This is the main reason the generator G_y keeps producing error images. Accordingly, the idea of FilterGAN is to add a filter data set Filter2 holding incorrectly converted images filter2 and to let D_y judge that the images filter2 in this data set do not belong to the y style, i.e. D_y(filter2) = 0. These two data sets are added on the basis of CycleGAN.
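The Filter2 term can be sketched literally from the patent's formula L_filter2 = label_y · (1 − D_y(filter2)). The toy discriminators below are stubs assumed for illustration, not the patent's network:

```python
def filter2_loss(D_y, filter2_img, label_y=1.0):
    """L_filter2 = label_y * (1 - D_y(filter2)), coded literally from the
    patent's formula.

    D_y returns a score in [0, 1]; label_y is 1 for a positive y-style
    example and 0 for a negative one.
    """
    return label_y * (1.0 - D_y(filter2_img))

# A stub D_y that rejects everything (score 0): the filter2 image is judged
# a negative example, so the term evaluates to label_y * 1.
reject_all = lambda img: 0.0
print(filter2_loss(reject_all, [0.3, 0.7]))  # 1.0
```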
Embodiment:
Step 1: prepare data sets of two different styles, A and B; they do not need to correspond one to one.
Step 2: according to the errors that occur in CycleGAN, prepare the filter data sets Filter1 and Filter2. Filter1 is loaded with images that must remain unchanged during style transfer; Filter2 is loaded with images that CycleGAN converts incorrectly.
Step 3: design the basic neural network structure. The basic structure of the network is similar to CycleGAN, comprising two discriminators and two generators.
The generator is a convolutional neural network model whose purpose is to convert an image from one style to another. The specific process: an image is compressed by a three-layer convolution, passed through nine residual blocks, and expanded by three deconvolutions into an image of the other style. The convolution process inside the generator is similar to DCGAN: there are no pooling layers and no fully connected layers, and the last layer is processed with the tanh() function. The internal structure is shown in Fig. 3.
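A shape-only sketch of this generator layout. The exact stride pattern is an assumption in the spirit of the CycleGAN generator; the patent text only fixes the layer counts (three down-convolutions, nine residual blocks, three up-convolutions):

```python
def generator_shapes(h=256, w=256):
    """Trace spatial sizes through the generator described above.

    Assumed strides: (1, 2, 2) down and (2, 2, 1) up, so the output
    matches the input resolution.
    """
    shapes = [(h, w)]
    for stride in (1, 2, 2):            # three-layer convolutional compression
        h, w = h // stride, w // stride
        shapes.append((h, w))
    for _ in range(9):                  # nine residual blocks keep the size
        shapes.append((h, w))
    for stride in (2, 2, 1):            # three-layer deconvolutional expansion
        h, w = h * stride, w * stride
        shapes.append((h, w))
    return shapes

shapes = generator_shapes()
print(shapes[0], shapes[-1])  # (256, 256) (256, 256)
```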
The discriminator is a convolutional neural network model whose purpose is to judge whether an image belongs to the x style. The specific process: the image is compressed by five convolutions down to one eighth of the original size and finally compressed into a single-channel style-feature tensor, which is output after processing with the tanh() function. The internal structure is shown in Fig. 4.
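A shape-only sketch of the discriminator layout. The particular stride pattern is an assumption; the text only states five convolutions ending at one eighth of the original side length:

```python
def discriminator_shapes(h=256, w=256):
    """Trace spatial sizes through the discriminator described above.

    Assumed strides (2, 2, 2, 1, 1): any five-layer pattern whose strides
    multiply to 8 matches the text.
    """
    shapes = [(h, w)]
    for stride in (2, 2, 2, 1, 1):      # five convolutions
        h, w = h // stride, w // stride
        shapes.append((h, w))
    return shapes

s = discriminator_shapes()
print(s[-1])  # (32, 32): one eighth of 256 on each side
```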
Step 4: combine the neural networks. FilterGAN adds two filter data sets on the basis of CycleGAN: the filter data set Filter1 is loaded with images that must not be changed, which the generator keeps unchanged; the filter data set Filter2 is loaded with incorrectly converted images, which the discriminator judges to be negative samples. The combination is shown in Fig. 5.
Step 5: adjust the overall loss function. The loss structure of FilterGAN is similar to that of CycleGAN, with two loss functions: one for the fooling (generator) side and one for discriminating real from fake. The real/fake discrimination loss of FilterGAN is the same as in CycleGAN. The fooling-side loss of FilterGAN adds two new loss terms, L_filter1 brought by the filter data set Filter1 and L_filter2 brought by the filter data set Filter2:

L_generator_loss = L_fake_true_loss + cycle_rate · L_cycle_loss + filter1_rate · L_filter1 + filter2_rate · L_filter2 (1)

where cycle_rate, filter1_rate, and filter2_rate are the respective learning rates.
(1) L_filter1 is the loss added with the filter data set Filter1, which guarantees that the images in Filter1 remain unchanged after passing through G_y:

L_filter1 = (G_y(filter1) - filter1)^2 (2)

where filter1 is an image added by the filter data set Filter1.
(2) L_filter2 is the loss added with the filter data set Filter2, which guarantees that an incorrectly generated image filter2 is judged to be a negative example:

L_filter2 = label_y · (1 - D_y(filter2)) (3)

where filter2 is an image added by the filter data set Filter2.
Step 6: train the neural network to obtain the style transfer model. The training method of FilterGAN is similar to that of CycleGAN: both use cross-training.
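The cross-training schedule can be sketched as an alternating loop. The stub step functions stand in for real optimizer updates on the generators and discriminators and are assumptions for illustration:

```python
def cross_train(G_step, D_step, n_iters=4):
    """Alternate discriminator and generator updates ('cross-training'),
    as CycleGAN and FilterGAN both do.

    G_step / D_step are callables taking the iteration index; a real
    implementation would run optimizer steps on the networks here.
    """
    log = []
    for i in range(n_iters):
        D_step(i)          # update discriminators D_x, D_y
        log.append("D")
        G_step(i)          # update generators G_x, G_y
        log.append("G")
    return log

log = cross_train(lambda i: None, lambda i: None, n_iters=2)
print(log)  # ['D', 'G', 'D', 'G']
```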
The FilterGAN of the present invention adds two filter data sets on the basis of CycleGAN and successfully solves the wrong-transfer problem in CycleGAN. Fig. 2 compares the experimental results of the invention with those of CycleGAN.
The FilterGAN of the present invention remains unsupervised. Relative to CycleGAN, FilterGAN adds no parameters to the target generator. For example, compared with methods such as AugGAN, FilterGAN does not need to add generator semantic segmentation parameters. This is an obvious advantage.
The FilterGAN of the present invention places very low requirements on data sets: it needs no one-to-one paired data set and is a kind of unsupervised style transfer. Models such as AugGAN and AttentionGAN add semantic segmentation maps and heat maps, so their requirements on data sets are very high. The requirement of the FilterGAN style transfer model is comparatively low: of the two added data sets, Filter1 is loaded with images that must remain unchanged and Filter2 with images that show style transfer errors, and both are unsupervised data sets. FilterGAN therefore adds only two easily obtained data sets.

Claims (6)

1. An image style transfer method based on a generative adversarial network, characterized by comprising the following steps:
Step 1: build a filter data set Filter1 containing images that must remain unchanged;
Step 2: feed the filter data set Filter1 into the generator of CycleGAN, so that Filter1 images remain unchanged after passing through the generator;
Step 3: build a filter data set Filter2 containing incorrectly converted images;
Step 4: feed the filter data set Filter2 into the discriminator of CycleGAN, so that Filter2 images are identified as wrong after passing through the discriminator;
Step 5: adjust the overall loss function of CycleGAN after adding the filter data sets Filter1 and Filter2, train the CycleGAN with the added filter data sets as a generative adversarial network, and complete the image style transfer method once training finishes.
2. The image style transfer method based on a generative adversarial network according to claim 1, characterized in that the filter data set Filter1 holds images filter1 of styles other than the x style, the x style being the style of the images to be converted, and a filter1 image undergoes no change after passing through the generator G_y, i.e. G_y(filter1) = filter1.
3. The image style transfer method based on a generative adversarial network according to claim 1, characterized in that the filter data set Filter2 holds incorrectly converted images filter2 of the x style, the x style being the style of the images to be converted, and the discriminator D_y judges that the images filter2 in this data set do not belong to the y style, i.e. D_y(filter2) = 0.
4. The image style transfer method based on a generative adversarial network according to claim 1, characterized in that the overall loss function is adjusted as follows: the filter data set Filter1 has a loss function L_filter1 and the filter data set Filter2 has a loss function L_filter2, and the overall loss function is

L_generator_loss = L_fake_true_loss + cycle_rate · L_cycle_loss + filter1_rate · L_filter1 + filter2_rate · L_filter2

where L_fake_true_loss is the loss for the discriminator judging a generated image to be a real image, cycle_rate is the learning rate of L_cycle_loss, L_cycle_loss is the difference loss between the original image and the image obtained after the cycle, filter1_rate is the learning rate of L_filter1, and filter2_rate is the learning rate of L_filter2.
5. The image style transfer method based on a generative adversarial network according to claim 1, characterized in that the filter data set Filter1 has the loss function L_filter1, and the images filter1 in the filter data set remain unchanged after passing through G_y, so

L_filter1 = (G_y(filter1) - filter1)^2

where filter1 is an image added by the filter data set Filter1; and the filter data set Filter2 has the loss function L_filter2, and an incorrectly generated image filter2 is judged to be a negative example, so

L_filter2 = label_y · (1 - D_y(filter2))

where label_y is the label of a y-style image (1 for a positive example, 0 for a negative example) and filter2 is an image added by the filter data set Filter2.
6. The image style transfer method based on a generative adversarial network according to claim 1, characterized in that both the generator and the discriminator adopt convolutional neural network models.
CN201910730983.7A 2019-08-08 2019-08-08 Image style transfer method based on a generative adversarial network Pending CN110533580A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910730983.7A CN110533580A (en) 2019-08-08 2019-08-08 Image style transfer method based on a generative adversarial network


Publications (1)

Publication Number Publication Date
CN110533580A true CN110533580A (en) 2019-12-03

Family

ID=68662229

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910730983.7A Pending CN110533580A (en) Image style transfer method based on a generative adversarial network

Country Status (1)

Country Link
CN (1) CN110533580A (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220929A (en) * 2017-06-23 2017-09-29 深圳市唯特视科技有限公司 Unpaired image transformation method using a cycle-consistent adversarial network
CN107464210A (en) * 2017-07-06 2017-12-12 浙江工业大学 Image style transfer method based on a generative adversarial network
AU2017101166A4 (en) * 2017-08-25 2017-11-02 Lai, Haodong MR A Method For Real-Time Image Style Transfer Based On Conditional Generative Adversarial Networks
CN108038818A (en) * 2017-12-06 2018-05-15 电子科技大学 Generative adversarial network image style transfer method based on multi-cycle consistency
CN109766895A (en) * 2019-01-03 2019-05-17 京东方科技集团股份有限公司 Training method of a convolutional neural network for image style transfer and image style transfer method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
PHILLIP ISOLA et al.: "Image-to-Image Translation with Conditional Adversarial Networks", arXiv:1611.07004 *
TAEKSOO KIM et al.: "Learning to Discover Cross-Domain Relations with Generative Adversarial Networks", arXiv:1703.05192 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111179172A (en) * 2019-12-24 2020-05-19 浙江大学 Remote sensing satellite super-resolution implementation method and device based on unmanned aerial vehicle aerial data, electronic equipment and storage medium
CN111179172B (en) * 2019-12-24 2021-11-02 浙江大学 Remote sensing satellite super-resolution implementation method and device based on unmanned aerial vehicle aerial data, electronic equipment and storage medium
CN111539897A (en) * 2020-05-09 2020-08-14 北京百度网讯科技有限公司 Method and apparatus for generating image conversion model
CN112288622A (en) * 2020-10-29 2021-01-29 中山大学 Multi-scale generation countermeasure network-based camouflaged image generation method
CN112288622B (en) * 2020-10-29 2022-11-08 中山大学 Multi-scale generation countermeasure network-based camouflaged image generation method
CN112418310A (en) * 2020-11-20 2021-02-26 第四范式(北京)技术有限公司 Text style migration model training method and system and image generation method and system
CN113538216A (en) * 2021-06-16 2021-10-22 电子科技大学 Image style migration method based on attribute decomposition

Similar Documents

Publication Publication Date Title
CN110533580A Image style transfer method based on a generative adversarial network
CN112308158B (en) Multi-source field self-adaptive model and method based on partial feature alignment
CN103605972B (en) Non-restricted environment face verification method based on block depth neural network
CN103839084B (en) Multi-kernel support vector machine multi-instance learning algorithm applied to pedestrian re-identification
CN104951554B Method for matching landscape photographs with verses that fit their artistic conception
CN109886161B (en) Road traffic identification recognition method based on likelihood clustering and convolutional neural network
Sharif et al. A hybrid deep model with HOG features for Bangla handwritten numeral classification
CN103116749A (en) Near-infrared face identification method based on self-built image library
CN112967178B (en) Image conversion method, device, equipment and storage medium
CN110309707A Coffee berry maturity recognition method based on deep learning
CN113780132A (en) Lane line detection method based on convolutional neural network
CN109344856B (en) Offline signature identification method based on multilayer discriminant feature learning
CN110991349A (en) Lightweight vehicle attribute identification method based on metric learning
CN107992807A (en) A kind of face identification method and device based on CNN models
CN112613480A (en) Face recognition method, face recognition system, electronic equipment and storage medium
CN110310238A Single-image rain removal method based on a squeeze-and-excitation neural network that reuses raw information
CN114330529A (en) Real-time pedestrian shielding detection method based on improved YOLOv4
CN110674884A (en) Image identification method based on feature fusion
CN111461000A (en) Intelligent office garbage classification method based on CNN and wavelet analysis
CN111738939B (en) Complex scene image defogging method based on semi-training generator
Li et al. Transfering low-frequency features for domain adaptation
Shammi et al. FishNet: fish classification using convolutional neural network
Zhang et al. CNN-based anomaly detection for face presentation attack detection with multi-channel images
CN116543269A (en) Cross-domain small sample fine granularity image recognition method based on self-supervision and model thereof
Özyurt et al. A new method for classification of images using convolutional neural network based on Dwt-Svd perceptual hash function

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20191203)