CN109816107A - BFGS quasi-Newton neural network training algorithm based on a heterogeneous computing platform - Google Patents

BFGS quasi-Newton neural network training algorithm based on a heterogeneous computing platform

Info

Publication number
CN109816107A
Authority
CN
China
Prior art keywords
calculating
neural network
work
task
newton
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711158651.3A
Other languages
Chinese (zh)
Inventor
李佳峻
刘强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University
Priority to CN201711158651.3A
Publication of CN109816107A
Pending legal-status Critical Current


Landscapes

  • Feedback Control In General (AREA)

Abstract

The invention discloses a BFGS quasi-Newton neural network training algorithm based on a heterogeneous computing platform, comprising the following steps: (1) divide the tasks; (2) divide the degree of parallelism, i.e. the total number of work-items needed to complete the computing tasks and the number of work-groups that the work-items are organized into; (3) compute the neural network training error; (4) compute the search direction dir; (5) compute the step size λ by the golden-section method and update the weights w; (6) compute the gradient g of the neural network training error evaluation function; (7) compute the Hessian matrix H; (8) perform parallel reduction. The invention uses a heterogeneous CPU-GPU computing platform as the computing device for neural network training and accelerates the BFGS quasi-Newton algorithm on the GPU, so the running speed is greatly improved compared with a traditional CPU-based implementation; compared with other optimization algorithms, it has higher convergence efficiency and global search ability.

Description

BFGS quasi-Newton neural network training algorithm based on a heterogeneous computing platform
Technical field
The present invention relates to the fields of high-performance computing and machine learning, and in particular to a BFGS quasi-Newton neural network training algorithm based on a heterogeneous computing platform.
Background technique
An artificial neural network is an information processing system that can learn arbitrary input-output relationships from large amounts of data and build an accurate model. At present, one of the major challenges facing artificial neural networks is training. Before training, the network carries no information; through training, the weight values of the neural network are determined, so that an accurate model is established from the training data. Determining the neural network weights is in fact an optimization process: the neural network computes increasingly accurate weight values by iterating with various optimization algorithms, such as gradient descent, quasi-Newton algorithms (QN), particle swarm optimization (PSO) and conjugate gradient algorithms (CG). Therefore, neural network training involves many iterations over large amounts of training data and is a rather time-consuming process.
With the rapid development of GPU technology, current GPUs have very strong parallel computing and data processing capabilities, but their logic processing capability still lags far behind that of CPUs. An algorithm that comprehensively considers speed improvement, method scalability and design cost is therefore urgently needed.
Summary of the invention
The purpose of the invention is to overcome the shortcomings of the prior art and to solve the problem of low efficiency in the training of traditional artificial neural networks. It provides a BFGS quasi-Newton neural network training algorithm based on a heterogeneous computing platform, which uses a heterogeneous CPU-GPU computing platform as the computing device for neural network training and accelerates the BFGS quasi-Newton algorithm on the GPU, yielding a method that completes neural network training relatively quickly and establishes a high-precision model.
The purpose of the invention is achieved through the following technical solution:
A BFGS quasi-Newton neural network training algorithm based on a heterogeneous computing platform, comprising the following steps:
(1) Divide the tasks: the BFGS quasi-Newton neural network training algorithm comprises computing tasks and control tasks; the control tasks are completed by the CPU, the computing tasks are completed by the GPU, and the computing tasks are divided into five kernel functions (kernels);
(2) Divide the degree of parallelism: determine the total number of work-items needed to complete the computing tasks and the number of work-groups that the work-items are organized into;
(3) Compute the neural network training error;
(4) Compute the search direction dir;
(5) Compute the step size λ by the golden-section method and update the weights w;
(6) Compute the gradient g of the neural network training error evaluation function;
(7) Compute the Hessian matrix H;
(8) Perform parallel reduction.
Further, the five kernel functions in step (1) are as follows: kernel1 computes the neural network training error; kernel2 computes the search direction; kernel3 computes the search step size and updates the neural network connection weights; kernel4 computes the gradient of the neural network training error evaluation function; kernel5 computes the Hessian matrix.
Further, the control tasks in step (1) include controlling the transfer of initial data from the host side to the compute-device side, transferring computed results from the compute-device side back to the host side, judging whether the upper limit on the number of iterations has been reached, judging whether the training error meets the requirement, and controlling whether the computing tasks terminate.
Compared with the prior art, the beneficial effects brought by the technical solution of the invention are as follows:
1. The invention uses a heterogeneous CPU-GPU computing platform as the computing device, and the running speed is greatly improved compared with a traditional CPU-based implementation;
2. The invention is implemented in OpenCL, which gives higher portability than an implementation in CUDA and allows it to run on GPUs and FPGAs from different manufacturers;
3. The invention uses the BFGS quasi-Newton algorithm as the optimization algorithm for neural network training, which has higher convergence efficiency and global search ability than other optimization algorithms.
Description of the drawings
Fig. 1 is a schematic diagram of the neural network structure.
Fig. 2 is a schematic diagram of the heterogeneous CPU-GPU computing platform.
Fig. 3 is a schematic diagram of parallel reduction.
Fig. 4 is the design flow chart of the parallel quasi-Newton neural network training algorithm.
Fig. 5 is a schematic diagram of the data transfer between the modules.
Fig. 6 is the concrete operation flow chart.
Fig. 7 is a schematic diagram of the results.
Specific embodiment
The invention will be further described with reference to the accompanying drawing.
Fig. 1 is a schematic diagram of the neural network structure, comprising an input layer, a hidden layer and an output layer. Neurons in adjacent layers are fully connected, and each connection corresponds to a weight. The number of input-layer neurons corresponds to the number of inputs of one group of training data, the number of output-layer neurons corresponds to the number of outputs of one group of training data, and the number of hidden-layer neurons is set as needed and is generally larger than the number of input-layer neurons.
Fig. 2 is a schematic diagram of the heterogeneous CPU-GPU computing platform. The CPU communicates with the GPU through the PCIe bus. The CPU side is the host and is in charge of control; the GPU side is the device and is in charge of computation. The storage space on the host side is the host memory, which can read and write data with the global memory on the GPU side. During operation, the data only needs to be placed in the corresponding directory and the software reads it automatically.
Fig. 4 is the design flow chart of the algorithm of the invention; the overall tasks are divided according to the characteristics of the CPU and the GPU. The judgment part and the data initialization part are completed by the CPU, while the neural network training error function and the BFGS quasi-Newton algorithm are implemented on the GPU. The division of the degree of parallelism of each module considers both the characteristics of the algorithm itself and the architectural features of the GPU.
The details are as follows:
1) Task division: the algorithm contains two types of tasks, namely computing tasks and control tasks. The control tasks are completed by the CPU and include controlling the transfer of initial data from the host side to the compute-device side, transferring computed results from the compute-device side back to the host side, judging whether the upper limit on the number of iterations has been reached, judging whether the training error meets the requirement, and controlling whether the computing tasks terminate. The computing tasks are completed by the GPU; to simplify task scheduling and optimization, they are divided into five kernel functions, as shown in Fig. 5: kernel1 computes the neural network training error, kernel2 computes the search direction, kernel3 computes the search step size and updates the neural network connection weights, kernel4 computes the gradient of the neural network training error evaluation function, and kernel5 computes the Hessian matrix. A host-side control loop corresponding to this division is sketched below.
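As an illustration of how the CPU-side control tasks could drive the five GPU kernels, the following C sketch shows a minimal OpenCL host loop. It is only a sketch under stated assumptions: the context, command queue, kernels and device buffers are assumed to exist already, the names (`kernels`, `buf_error`, `max_iter`, `err_bound`) are hypothetical, and error checking and kernel-argument setup are omitted.

```c
/* Minimal host-side control loop (sketch, not the patented implementation). */
#include <CL/cl.h>
#include <stdio.h>

void train_loop(cl_command_queue queue, cl_kernel kernels[5],
                cl_mem buf_error, size_t n_samples, size_t n_weights,
                int max_iter, float err_bound)
{
    /* Global work sizes: kernel1/kernel4 scale with the training data,
       kernel2/kernel3/kernel5 scale with the number of weights. */
    size_t gsz[5] = { n_samples, n_weights, n_weights, n_samples, n_weights };
    float error = 1e30f;

    for (int it = 0; it < max_iter && error > err_bound; ++it) {
        /* Enqueue kernel2 (direction), kernel3 (line search + weight update),
           kernel1 (training error), kernel4 (gradient), kernel5 (Hessian).
           In the patented design kernel3 also invokes kernel1 internally
           during the line search; this ordering is a simplification. */
        int order[5] = { 1, 2, 0, 3, 4 };
        for (int k = 0; k < 5; ++k)
            clEnqueueNDRangeKernel(queue, kernels[order[k]], 1, NULL,
                                   &gsz[order[k]], NULL, 0, NULL, NULL);

        /* Control task: read the reduced training error back to the host
           and decide whether to stop. */
        clEnqueueReadBuffer(queue, buf_error, CL_TRUE, 0, sizeof(float),
                            &error, 0, NULL, NULL);
        printf("iteration %d, training error %f\n", it, error);
    }
}
```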
2) Degree-of-parallelism division: for the five kernel functions in 1), parallel implementation on the GPU first requires dividing the degree of parallelism, i.e. determining the total number of work-items required to complete these computing tasks and the number of work-groups these work-items are organized into. Among the five kernels, kernel1 and kernel4 divide the degree of parallelism according to the scale of the training data; for example, if there are 1024 groups of training data, these two kernels need 1024 work-items to participate in the computation, and each work-item performs the computation related to one group of training data. kernel2, kernel3 and kernel5 divide the degree of parallelism according to the number of neural network weights; the neural network shown in Fig. 1 has 32 connection weights, so under this neural network structure the number of work-items needed by these three kernels is 32. The arithmetic of this division is illustrated in the short example after this paragraph.
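The following small C program only illustrates the work-item / work-group arithmetic for the example sizes mentioned above; the work-group size of 32 is an assumption taken from the embodiment later in the text, not a requirement.

```c
#include <stdio.h>

/* Sketch: work-item / work-group counts for the example sizes in the text. */
int main(void)
{
    const int n_samples = 1024;  /* training-data scale -> kernel1, kernel4          */
    const int n_weights = 32;    /* weight count        -> kernel2, kernel3, kernel5 */
    const int wg_size   = 32;    /* assumed work-group size                          */

    printf("kernel1/kernel4: %d work-items in %d work-groups\n",
           n_samples, n_samples / wg_size);
    printf("kernel2/kernel3/kernel5: %d work-items in %d work-group(s)\n",
           n_weights, n_weights / wg_size);
    return 0;
}
```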
3) Neural network training error evaluation function: this corresponds to kernel1 in Fig. 5. The kernel reads the training data and the connection weights w from memory and finally obtains the training error E_T(w) corresponding to w. In this function, the computation performed by the s-th work-item is given by formulas (1), (2) and (3). The values of the input neurons are the training data, denoted x_i^s, i.e. the value of the s-th group of training data corresponding to the i-th input neuron. The values of the hidden neurons are computed by formulas (1) and (2), where h_j denotes the weighted sum of the input neurons connected to the j-th hidden neuron, f(h_j) denotes the value of the j-th hidden neuron, w_ij denotes the weight from the i-th input neuron to the j-th hidden neuron, and n denotes the number of input neurons. The value of the output neuron y_m is computed by formula (3), where m denotes the m-th output neuron and N denotes the number of hidden neurons. Finally, the neural network training error E_T is obtained by the reduction technique according to formula (4), where S_T denotes the scale of the training data set, N_y denotes the number of output neurons, y_m denotes the computed result of the m-th output neuron, d_km denotes the ideal output of the k-th group of training data corresponding to the m-th output neuron, d_max,m denotes the largest ideal output in the given training data, and d_min,m denotes the smallest ideal output in the given training data. A kernel sketch following this per-sample scheme is given below.
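Since formulas (1) to (4) are not reproduced in the text, the following OpenCL C sketch only illustrates the general per-work-item structure described above: each work-item runs the forward pass for one training sample and writes its error term to a buffer that the parallel reduction of step 8 later sums into E_T. The sigmoid activation, the absence of bias terms, the linear output layer and the normalization by (d_max,m - d_min,m) are assumptions made for illustration, not the patented formulas.

```c
/* kernel1 sketch: per-sample forward pass and error term (assumed formulas). */
#define MAX_HIDDEN 64   /* assumed compile-time bound on the hidden-layer size */

__kernel void training_error(__global const float *x,       /* [S_T][n] inputs          */
                             __global const float *d,       /* [S_T][N_y] ideal outputs */
                             __global const float *w_ih,    /* [n][N_h] weights         */
                             __global const float *w_ho,    /* [N_h][N_y] weights       */
                             __global const float *d_range, /* [N_y] d_max - d_min      */
                             __global float *err,           /* [S_T] per-sample error   */
                             const int n, const int N_h, const int N_y)
{
    int s = get_global_id(0);               /* one work-item per training sample */

    float hidden[MAX_HIDDEN];
    for (int j = 0; j < N_h; ++j) {
        float hj = 0.0f;                    /* formula (1): weighted sum of inputs */
        for (int i = 0; i < n; ++i)
            hj += x[s * n + i] * w_ih[i * N_h + j];
        hidden[j] = 1.0f / (1.0f + exp(-hj));   /* formula (2): assumed sigmoid */
    }

    float e = 0.0f;
    for (int m = 0; m < N_y; ++m) {
        float ym = 0.0f;                    /* formula (3): output neuron value */
        for (int j = 0; j < N_h; ++j)
            ym += hidden[j] * w_ho[j * N_y + m];
        float diff = (ym - d[s * N_y + m]) / d_range[m];  /* assumed normalization */
        e += diff * diff;
    }
    err[s] = e;   /* summed over all samples by the parallel reduction, formula (4) */
}
```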
4) Compute the search direction dir: this corresponds to kernel2 in Fig. 5. The computation needs the Hessian matrix H and the gradient g; each work-item reads one row of the matrix H and the gradient g and performs a multiply-accumulate operation to produce one element of the search direction dir. A kernel sketch is given below.
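A minimal OpenCL C sketch of this row-times-vector pattern follows. The negation of the product (dir = -H·g, the usual quasi-Newton descent direction when H approximates the inverse Hessian) is an assumption; the text only states that a row of H is multiplied and accumulated with g.

```c
/* kernel2 sketch: one work-item per element of the search direction. */
__kernel void search_direction(__global const float *H,   /* [n_w][n_w] matrix      */
                               __global const float *g,   /* [n_w] gradient         */
                               __global float *dir,       /* [n_w] search direction */
                               const int n_w)              /* number of weights      */
{
    int row = get_global_id(0);
    float acc = 0.0f;
    for (int col = 0; col < n_w; ++col)
        acc += H[row * n_w + col] * g[col];   /* multiply-accumulate over one row */
    dir[row] = -acc;                          /* sign convention assumed */
}
```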
5) Compute the step size λ by the golden-section method and update the weights w: this corresponds to kernel3 in Fig. 5. The process first initializes a step-size interval and computes the two golden-section points inside this interval, then updates w using these two points as step sizes, then calls the E_T function to evaluate the two resulting sets of weights and compares their values, keeping the smaller step size and discarding the sub-interval beyond the larger one. This is repeated until the remaining step-size interval is smaller than a fixed value, at which point the iteration stops and the step size is determined. Since the main purpose of this kernel is to update the weights w, the degree of parallelism inside the function is divided according to the dimension of w, i.e. each work-item computes one element of the weight vector w. The function calls the neural network training error evaluation function repeatedly, i.e. one kernel is called inside another kernel; this can be realized with OpenCL versions higher than 1.1. However, because the degrees of parallelism of the two kernels are not the same, the total number of work-items is identical but the numbers of active work-items differ. To avoid work-item conflicts caused by this, all work-items are forced to synchronize before each call to the training error function. A host-side sketch of the golden-section search itself is given below.
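To make the line-search logic concrete, here is a plain C sketch of a golden-section search over the step size λ. The interval bounds, the tolerance and the `eval_error` callback are hypothetical; in the described design this logic runs inside kernel3 and the error evaluation is done by kernel1 on the GPU.

```c
/* Golden-section line search sketch: finds a step size lambda that
 * approximately minimizes err(w + lambda * dir). eval_error stands in
 * for the GPU training-error evaluation (kernel1). */
typedef float (*error_fn)(float lambda);

float golden_section_search(error_fn eval_error, float lo, float hi, float tol)
{
    const float ratio = 0.6180339887f;            /* golden ratio minus 1 */
    float a = hi - ratio * (hi - lo);             /* lower golden point   */
    float b = lo + ratio * (hi - lo);             /* upper golden point   */
    float ea = eval_error(a), eb = eval_error(b);

    while (hi - lo > tol) {
        if (ea < eb) {                            /* keep the point with the smaller */
            hi = b; b = a; eb = ea;               /* error, discard the outer part   */
            a = hi - ratio * (hi - lo);
            ea = eval_error(a);
        } else {
            lo = a; a = b; ea = eb;
            b = lo + ratio * (hi - lo);
            eb = eval_error(b);
        }
    }
    return 0.5f * (lo + hi);                      /* final step size lambda */
}
```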
6) Compute the gradient g of the neural network training error evaluation function: this corresponds to kernel4 in Fig. 5. The process mainly computes, for each group of training data, the partial derivative with respect to each element of the weight vector w, and then uses the parallel-reduction method to sum the partial derivatives of each element, giving the gradient of the neural network training error function. In this function, each work-item computes the partial derivatives of all elements of the weight vector w based on one group of training data, and finally a reduction sum is performed over all work-items. A per-sample gradient kernel is sketched below.
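Continuing the assumptions of the kernel1 sketch (sigmoid hidden layer, linear outputs, squared error normalized by the target range, no bias terms), a per-sample gradient kernel could look as follows. The per-sample contributions written to `grad_part` would then be summed over the sample dimension by the parallel reduction of step 8; all names and formulas here are illustrative assumptions.

```c
/* kernel4 sketch: each work-item computes the gradient contribution of one
 * training sample; contributions are later reduced over samples. */
#define MAX_HIDDEN 64   /* assumed bound on layer sizes used for private arrays */

__kernel void training_gradient(__global const float *x,       /* [S_T][n]        */
                                __global const float *d,       /* [S_T][N_y]      */
                                __global const float *w_ih,    /* [n][N_h]        */
                                __global const float *w_ho,    /* [N_h][N_y]      */
                                __global const float *d_range, /* [N_y]           */
                                __global float *grad_part,     /* [S_T][n_w]      */
                                const int n, const int N_h, const int N_y)
{
    int s = get_global_id(0);
    int n_w = n * N_h + N_h * N_y;            /* total number of weights */
    __global float *gp = grad_part + s * n_w;

    /* Forward pass (same assumed formulas as the kernel1 sketch). */
    float hidden[MAX_HIDDEN], delta_out[MAX_HIDDEN]; /* also bounds N_y here */
    for (int j = 0; j < N_h; ++j) {
        float hj = 0.0f;
        for (int i = 0; i < n; ++i)
            hj += x[s * n + i] * w_ih[i * N_h + j];
        hidden[j] = 1.0f / (1.0f + exp(-hj));
    }
    for (int m = 0; m < N_y; ++m) {
        float ym = 0.0f;
        for (int j = 0; j < N_h; ++j)
            ym += hidden[j] * w_ho[j * N_y + m];
        /* d(err)/d(y_m) for the assumed normalized squared error */
        delta_out[m] = 2.0f * (ym - d[s * N_y + m]) / (d_range[m] * d_range[m]);
    }

    /* Partial derivatives with respect to hidden-to-output weights. */
    for (int j = 0; j < N_h; ++j)
        for (int m = 0; m < N_y; ++m)
            gp[n * N_h + j * N_y + m] = delta_out[m] * hidden[j];

    /* Partial derivatives with respect to input-to-hidden weights. */
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < N_h; ++j) {
            float back = 0.0f;
            for (int m = 0; m < N_y; ++m)
                back += delta_out[m] * w_ho[j * N_y + m];
            gp[i * N_h + j] = back * hidden[j] * (1.0f - hidden[j]) * x[s * n + i];
        }
}
```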
7) Compute the Hessian matrix H: this corresponds to kernel5 in Fig. 5. In all formulas the subscript k denotes the k-th iteration. From the weights w and the gradient g computed in steps 5) and 6), s and z are computed by formulas (5) and (6); from these two vectors, x is computed according to formula (7), where s is the change of the weights, z is the change of the gradient, and x is a correction term. Finally the Hessian matrix is computed according to formula (8). In this part, each work-item computes the elements of one column of the Hessian matrix H. H_k is the Hessian matrix of the k-th iteration.
s_k = w_{k+1} - w_k   (5)
z_k = g_{k+1} - g_k   (6)
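Formulas (7) and (8) are not reproduced in the text. For reference, the textbook BFGS update of the inverse-Hessian approximation, into which the quantities s, z and a correction term fit naturally, has the following form; it is shown here only as an assumed reconstruction, not as the patent's exact formulas.

```latex
% Standard BFGS inverse-Hessian update (assumed to correspond to (7)-(8)).
H_{k+1} \;=\; \left(I - \frac{s_k z_k^{\top}}{z_k^{\top} s_k}\right) H_k
              \left(I - \frac{z_k s_k^{\top}}{z_k^{\top} s_k}\right)
              \;+\; \frac{s_k s_k^{\top}}{z_k^{\top} s_k}
```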
8) Parallel reduction: kernel1 and kernel4 of the parallel BFGS quasi-Newton neural network training algorithm need the parallel-reduction technique to process some vectors and obtain the cumulative sum of all elements of a vector. The parallel-reduction technique is shown in Fig. 3: the first work-item computes the sum of the first and the last variable, the second work-item computes the sum of the second and the second-to-last variable, and so on; the final result is computed by the first work-item. An OpenCL sketch of this folding pattern is given below.
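A minimal OpenCL C sketch of this fold-in-half reduction within one work-group follows. It assumes the kernel is launched with a single work-group of n work-items and that a local buffer of n floats is supplied; these launch details are simplifications for illustration.

```c
/* Parallel reduction sketch: repeatedly fold the array so that work-item i
 * adds element i and element (len - 1 - i), until work-item 0 holds the sum. */
__kernel void reduce_sum(__global const float *in,   /* [n] values to sum  */
                         __global float *out,        /* [1] resulting sum  */
                         __local float *scratch,     /* [n] local buffer   */
                         const int n)
{
    int lid = get_local_id(0);
    scratch[lid] = in[lid];
    barrier(CLK_LOCAL_MEM_FENCE);

    for (int len = n; len > 1; len = (len + 1) / 2) {
        if (lid < len / 2)
            scratch[lid] += scratch[len - 1 - lid];  /* first + last, second + ... */
        barrier(CLK_LOCAL_MEM_FENCE);
    }
    if (lid == 0)
        out[0] = scratch[0];                         /* first work-item writes result */
}
```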
Specifically, Fig. 5 is a schematic diagram of the data transfer between the modules; each module corresponds to one kernel. First, kernel2 computes the search direction d using the initial Hessian matrix H and the gradient g. Then kernel3 determines the step size according to the golden-section algorithm, updates the weights w with the search direction d and the step size λ, and passes them to kernel1 to compute and assess the training error, until the condition is met and the final step size and w are determined. kernel4 computes the partial derivatives of E_T(w) using intermediate variables from kernel1 and determines the gradient of the function. Finally, in kernel5, the Hessian matrix H is computed from the weights w and the gradient g produced by the previous kernels. The judgment is made on the external CPU: if the requirement is reached the result is output, otherwise the five kernels continue to run in a loop.
The concrete operation is shown in Fig. 6:
(1) Set the neural network parameters
Fig. 1 is the structure chart of the neural network; this embodiment selects a single-hidden-layer neural network. The number of input-layer neurons is set according to the number of input variables of the model to be fitted, the number of output-layer neurons is set according to the number of output variables of the model to be fitted, and the number of hidden-layer neurons is set as needed and is generally larger than the number of input-layer neurons. The number of neural network weights is calculated automatically from the neuron numbers set above according to the formula w = (input + output) * hidden.
(2) GPU-side parameter setting
The GPU-side parameter setting mainly consists of setting the number of work-items and the size of the work-groups. A work-item is the smallest unit running on the GPU, and its number can be set according to the scale of the training data; for example, if the training data scale is 4096 groups, the number of work-items can be set to 4096. A certain number of work-items can be organized into a work-group, which facilitates data transfer and management between the work-items in the group. The number of work-items in a work-group can be set according to the number of neural network weights; for the neural network shown in Fig. 1, the number of weights is (3+1)*8 = 32, so the number of work-items in a work-group is set to 32.
(3) Import the training data
The software can only read files in CSV format, so the training data first needs to be stored in a CSV file, which is then placed in the software directory with its filename changed to training-data.
(4) Set the termination conditions
After the above steps are completed, the termination conditions of the software also need to be set, generally comprising the maximum number of iterations and the training error boundary condition. Once set, the software terminates and outputs the result when the maximum number of iterations is reached or the training error falls below the training error boundary.
(5) Record the results
The output results are shown in Fig. 7. The results contain four pieces of information: the first row is the training error, which indicates the accuracy of the model fitted by the neural network, 37.5297 in the figure; the second row is the number of iterations, i.e. the number of iterations the program ran before terminating, 20 in the figure; the rows starting from the third row are the obtained neural network weights; the last row is the running time in seconds.
The invention is not limited to the embodiments described above. The above description of specific embodiments is intended to describe and illustrate the technical solution of the invention; the embodiments are merely illustrative and not restrictive. Without departing from the purpose of the invention and the scope of the claimed protection, those skilled in the art may, inspired by the invention, also make many specific transformations of various forms, all of which fall within the scope of protection of the invention.

Claims (3)

1. A BFGS quasi-Newton neural network training algorithm based on a heterogeneous computing platform, characterized by comprising the following steps:
(1) Divide the tasks: the BFGS quasi-Newton neural network training algorithm comprises computing tasks and control tasks; the control tasks are completed by the CPU, the computing tasks are completed by the GPU, and the computing tasks are divided into five kernel functions (kernels);
(2) Divide the degree of parallelism: determine the total number of work-items needed to complete the computing tasks and the number of work-groups that the work-items are organized into;
(3) Compute the neural network training error;
(4) Compute the search direction dir;
(5) Compute the step size λ by the golden-section method and update the weights w;
(6) Compute the gradient g of the neural network training error evaluation function;
(7) Compute the Hessian matrix H;
(8) Perform parallel reduction.
2. The BFGS quasi-Newton neural network training algorithm based on a heterogeneous computing platform according to claim 1, characterized in that the five kernel functions in step (1) are as follows: kernel1 computes the neural network training error; kernel2 computes the search direction; kernel3 computes the search step size and updates the neural network connection weights; kernel4 computes the gradient of the neural network training error evaluation function; kernel5 computes the Hessian matrix.
3. The BFGS quasi-Newton neural network training algorithm based on a heterogeneous computing platform according to claim 1, characterized in that the control tasks in step (1) include controlling the transfer of initial data from the host side to the compute-device side, transferring computed results from the compute-device side back to the host side, judging whether the upper limit on the number of iterations has been reached, judging whether the training error meets the requirement, and controlling whether the computing tasks terminate.
CN201711158651.3A 2017-11-20 2017-11-20 BFGS quasi-Newton neural network training algorithm based on a heterogeneous computing platform Pending CN109816107A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711158651.3A CN109816107A (en) 2017-11-20 2017-11-20 BFGS quasi-Newton neural network training algorithm based on a heterogeneous computing platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711158651.3A CN109816107A (en) 2017-11-20 2017-11-20 BFGS quasi-Newton neural network training algorithm based on a heterogeneous computing platform

Publications (1)

Publication Number Publication Date
CN109816107A true CN109816107A (en) 2019-05-28

Family

ID=66598678

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711158651.3A Pending CN109816107A (en) 2017-11-20 2017-11-20 BFGS quasi-Newton neural network training algorithm based on a heterogeneous computing platform

Country Status (1)

Country Link
CN (1) CN109816107A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111476346A (en) * 2020-02-28 2020-07-31 之江实验室 Deep learning network architecture based on Newton conjugate gradient method
CN113515822A (en) * 2021-01-28 2021-10-19 长春工业大学 Return-to-zero neural network-based stretching integral structure form finding method
WO2021208808A1 (en) * 2020-04-14 2021-10-21 International Business Machines Corporation Cooperative neural networks with spatial containment constraints
US11222201B2 (en) 2020-04-14 2022-01-11 International Business Machines Corporation Vision-based cell structure recognition using hierarchical neural networks

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102999756A (en) * 2012-11-09 2013-03-27 重庆邮电大学 Method for recognizing road signs by PSO-SVM (particle swarm optimization-support vector machine) based on GPU (graphics processing unit)
CN105303252A (en) * 2015-10-12 2016-02-03 国家计算机网络与信息安全管理中心 Multi-stage nerve network model training method based on genetic algorithm
CN106503803A (en) * 2016-10-31 2017-03-15 天津大学 A kind of limited Boltzmann machine iteration map training method based on pseudo-Newtonian algorithm
CN106528357A (en) * 2016-11-24 2017-03-22 天津大学 FPGA system and implementation method based on on-line training neural network of quasi-newton method
CN106775905A (en) * 2016-11-19 2017-05-31 天津大学 Higher synthesis based on FPGA realizes the method that Quasi-Newton algorithm accelerates

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102999756A (en) * 2012-11-09 2013-03-27 重庆邮电大学 Method for recognizing road signs by PSO-SVM (particle swarm optimization-support vector machine) based on GPU (graphics processing unit)
CN105303252A (en) * 2015-10-12 2016-02-03 国家计算机网络与信息安全管理中心 Multi-stage nerve network model training method based on genetic algorithm
CN106503803A (en) * 2016-10-31 2017-03-15 天津大学 A kind of limited Boltzmann machine iteration map training method based on pseudo-Newtonian algorithm
CN106775905A (en) * 2016-11-19 2017-05-31 天津大学 Higher synthesis based on FPGA realizes the method that Quasi-Newton algorithm accelerates
CN106528357A (en) * 2016-11-24 2017-03-22 天津大学 FPGA system and implementation method based on on-line training neural network of quasi-newton method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JIAJUN LI.ET AL: ""Neural Network Training Acceleration with PSO Algorithm on a GPU Using OpenCL"", 《PROCEEDING OF THE 8TH INTERNATIONAL SYMPOSIUM ON HIGHLY EFFICIENT ACCELERATORS AND RECONFIGURABLE TECHNOLOGIES》 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111476346A (en) * 2020-02-28 2020-07-31 之江实验室 Deep learning network architecture based on Newton conjugate gradient method
CN111476346B (en) * 2020-02-28 2022-11-29 之江实验室 Deep learning network architecture based on Newton conjugate gradient method
WO2021208808A1 (en) * 2020-04-14 2021-10-21 International Business Machines Corporation Cooperative neural networks with spatial containment constraints
US11222201B2 (en) 2020-04-14 2022-01-11 International Business Machines Corporation Vision-based cell structure recognition using hierarchical neural networks
GB2610098A (en) * 2020-04-14 2023-02-22 Ibm Cooperative neural networks with spatial containment constraints
US11734939B2 (en) 2020-04-14 2023-08-22 International Business Machines Corporation Vision-based cell structure recognition using hierarchical neural networks and cell boundaries to structure clustering
US11734576B2 (en) 2020-04-14 2023-08-22 International Business Machines Corporation Cooperative neural networks with spatial containment constraints
CN113515822A (en) * 2021-01-28 2021-10-19 长春工业大学 Return-to-zero neural network-based stretching integral structure form finding method

Similar Documents

Publication Publication Date Title
CN109816107A (en) BFGS quasi-Newton neural network training algorithm based on a heterogeneous computing platform
Sola et al. Importance of input data normalization for the application of neural networks to complex industrial problems
CN105550268B (en) Big data process modeling analysis engine
Siljak Decentralized control of complex systems
US20170330078A1 (en) Method and system for automated model building
CN109791626A (en) The coding method of neural network weight, computing device and hardware system
CN108593260A (en) Lightguide cable link fault location and detection method and terminal device
CN109816103A (en) A kind of PSO-BFGS neural network BP training algorithm
Zhou et al. Deep learning enabled cutting tool selection for special-shaped machining features of complex products
JP2019537079A (en) How to build stochastic models for large-scale renewable energy data
CN108108347B (en) Dialogue mode analysis system and method
CN102662322B (en) FPGA (field programmable gate array) processor and PID (proportion integration differentiation) membrane optimization neural network controller
WO2020167316A1 (en) Machine learning-based generative design for process planning
CN106295670A (en) Data processing method and data processing equipment
CN106295074B (en) A kind of carrier rocket bay section vibratory response characteristic is quickly analyzed and optimization method
CN110222824B (en) Intelligent algorithm model autonomous generation and evolution method, system and device
CN110378358A (en) A kind of power distribution network isomeric data integration method and system
Sattianadan et al. Optimal placement of capacitor in radial distribution system using PSO
CN107941210B (en) Star map identification method combining neural network technology and triangle algorithm
CN110096773A (en) Threedimensional model batch processing method and system for the exploitation of nuclear power station virtual emulation
CN104573331A (en) K neighbor data prediction method based on MapReduce
CN110956010B (en) Large-scale new energy access power grid stability identification method based on gradient lifting tree
Lee et al. Nnsim: A fast and accurate systemc/tlm simulator for deep convolutional neural network accelerators
CN110083674B (en) Intellectual property information processing method and device
CN116628136A (en) Collaborative query processing method, system and electronic equipment based on declarative reasoning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20190528