CN108038538A - Multi-objective Evolutionary Algorithm based on reinforcement learning - Google Patents

Multi-objective Evolutionary Algorithm based on reinforcement learning

Info

Publication number
CN108038538A
CN108038538A (application CN201711279238.2A / CN201711279238A)
Authority
CN
China
Prior art keywords
population
value
solution
new
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711279238.2A
Other languages
Chinese (zh)
Inventor
郭宝龙
郭新兴
宁伟康
李�诚
安陆
闫允一
陈祖铭
李星星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University
Priority to CN201711279238.2A
Publication of CN108038538A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/086 Learning methods using evolutionary algorithms, e.g. genetic algorithms or genetic programming

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Physiology (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a multi-objective evolutionary algorithm based on reinforcement learning. An initial population is randomly generated from the search space and evaluated. For a population that does not satisfy the termination condition, new values are produced by the DE variant operator and the T operator selected through reinforcement learning, and these values are crossed with values from the neighborhood and mutated to produce new solutions. Each new solution is compared with the solutions of the original population, and the solution that gives the subproblem function its optimal value is selected to update the population. From the new population, a new 5-dimensional observation vector and a reward value R are computed, and the state of the RL controller is updated accordingly. If the termination condition is not satisfied, the iteration continues until it is met. The invention solves the problem that MOEA/D is insensitive to the adjustment of the parameter T.

Description

Multi-objective evolutionary algorithm based on reinforcement learning
Technical Field
The invention relates to the technical field of science and engineering, in particular to a multi-target evolutionary algorithm based on reinforcement learning.
Background
In the scientific and engineering fields there are a large number of multi-objective optimization problems (MOPs). Unlike a single-objective optimization problem (SOP), the solution of a MOP is a set of so-called Pareto-optimal solutions. Traditional multi-objective optimization algorithms include the weighting method, the constraint method, the goal programming method, the min-max method, and the like. These methods all convert the MOP into a SOP; their disadvantages are that they require sufficient prior knowledge, have difficulty handling noise in the objectives, and have poor robustness. Since the objective functions and constraint functions of a multi-objective optimization problem may be nonlinear, non-differentiable, or discontinuous, traditional mathematical programming methods tend to be inefficient, and they are sensitive to the order in which the weights or objectives are given.
An evolutionary algorithm (EA) is a stochastic global optimization method that simulates the process of natural evolution; an EA searches for solutions to a problem through population-based search and exchange of information among the individuals in the population. Owing to the inherent parallelism of the EA, it is possible to find multiple Pareto-optimal solutions in a single run. Compared with traditional algorithms, it has the following advantages: first, the evolutionary search process is stochastic and does not easily fall into local optima; second, the EA is inherently parallel and can evolve and find several solutions simultaneously, which makes it suitable for multi-objective optimization problems; third, it can handle discontinuity, non-differentiability, and non-convexity of the Pareto front, and it does not require excessive prior knowledge.
Multi-objective evolutionary algorithms (MOEAs) of this kind are based on the Pareto dominance mechanism and adopt different fitness assignment strategies and selection mechanisms; different schemes are used to maintain population diversity and to avoid premature convergence of the algorithm, so that the obtained Pareto solutions are distributed uniformly.
Thanks to these advantages, the MOEA, as an efficient and robust multi-objective optimizer, has been widely used in many fields of science and engineering, including control engineering, system planning, production scheduling, and data mining.
MOEA/D decomposes a MOP into N scalar subproblems and solves all of them simultaneously by evolving a single population of solutions. In each generation, the population is the set of the best solutions found so far for the individual subproblems. The degree of association between two subproblems is determined by the distance between their aggregation coefficient (weight) vectors: the optimal solutions of two adjacent subproblems should be very similar. Each subproblem is therefore optimized using only information from the subproblems adjacent to it.
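To make this neighborhood structure concrete, the following minimal Python sketch (an illustration added here, not part of the patent text; the function and variable names are ours) computes, for each weight vector, the indices of its T nearest weight vectors by Euclidean distance:

```python
import numpy as np

def weight_neighborhoods(weights: np.ndarray, T: int) -> np.ndarray:
    """For each weight vector lambda^i, return the indices B_i of the
    T weight vectors closest to it in Euclidean distance (itself included)."""
    # Pairwise Euclidean distances between all weight vectors
    dists = np.linalg.norm(weights[:, None, :] - weights[None, :, :], axis=2)
    # Sort each row by distance and keep the first T indices
    return np.argsort(dists, axis=1)[:, :T]

# Example: N = 5 evenly spread weight vectors for a 2-objective problem, T = 3
lam = np.array([[w, 1.0 - w] for w in np.linspace(0.0, 1.0, 5)])
B = weight_neighborhoods(lam, T=3)
print(B)  # B[i] lists the subproblems adjacent to subproblem i
```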
MOEA/D has the following characteristics:
MOEA/D provides a simple but efficient way of introducing decomposition methods into multi-objective evolutionary computation. Decomposition methods, which were developed mainly in the field of mathematical programming, can thus be incorporated into the EA, and the MOP is solved within the MOEA/D framework. Because the MOEA/D algorithm optimizes the N scalar subproblems simultaneously rather than solving the MOP directly as a whole, the difficulty of fitness assignment and diversity control is lower in the MOEA/D framework than in traditional MOEAs that are not based on decomposition.
However, MOEA/D has the defect that it is insensitive to the adjustment of the parameter T: when T is small the search lacks breadth, when T is large it lacks depth, and the capability of adaptive regulation is poor.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a multi-objective evolutionary algorithm based on reinforcement learning so as to solve the technical problems.
In order to achieve the purpose, the invention adopts the following technical scheme:
The multi-objective evolutionary algorithm based on reinforcement learning comprises the following steps: step 1, randomly generating an initial population from a search space;
step 2, evaluating the obtained population according to an evaluation criterion;
step 3, updating the searched optimal value of the objective function;
step 4, comparing the generated approximate solution Z* against the termination condition and finishing when the condition is satisfied; for a population that does not satisfy the termination condition, generating a new value with the DE variant operator and the T operator selected by the reinforcement-learning (RL) controller, crossing the new value with values from the neighborhood, and mutating it to generate a new solution;
step 5, comparing the generated new solution with the solutions of the original population, and selecting the solution that gives the subproblem function its optimal value so as to update the population;
step 6, calculating a new 5-dimensional observation vector and a reward value R from the generated new population, updating the state of the RL controller accordingly, and judging whether the termination condition is met; if not, the iterative calculation continues until the termination condition is met, and the procedure ends.
Preferably, the specific steps of step 1 are:
step 1.1, calculating the Euclidean distance between any two weight vectors and finding, for each weight vector, the T weight vectors nearest to it, where T is the number of weight vectors in each neighborhood; for each i = 1, …, N, let B_i = {i_1, …, i_T}, where λ^(i_1), …, λ^(i_T) are the T weight vectors nearest to λ^i;
step 1.2, establishing an external population EP for storing the non-dominated solutions found during the search for the optimal solution, and initializing EP to be empty;
step 1.3, uniformly and randomly sampling solutions from the search space to form the initial population, the objective function being F(x) = (f_1(x), f_2(x), …, f_m(x)), i = 1, 2, …, m, where X is the set of decision vectors and x ∈ X is the decision variable (argument);
step 1.4, decomposing the objective function F(x) into N scalar subproblems by means of the Tchebycheff (Chebyshev) method, the i-th subproblem being g^te(x | λ^i, Z*) = max_{1 ≤ j ≤ m} λ^i_j · |f_j(x) − Z*_j|, where λ^i = (λ^i_1, …, λ^i_m) is the weight vector of the i-th subproblem and its neighborhood consists of the subproblems whose weight vectors are closest to λ^i; Z* is the best objective vector found so far, also called the approximate solution, Z* = min{(f_1(x), f_2(x), …, f_m(x))}.
Preferably, the value generated in step 4 and the values of its neighborhood are subjected to the following operations to generate a new solution: step 4.1, selection operation: randomly selecting two indices h and k from B(i), and using a genetic operator on x_h and x_k to generate a new value, where x_h is the current best solution of the h-th subproblem and x_k is the current best solution of the k-th subproblem; the generated value is compared with the values in the neighborhood in a survival-of-the-fittest manner, and the fitter values are retained and inherited by the next generation;
step 4.2, cross operation: pairing individuals in the population, and performing gene cross operation to generate new individuals;
step 4.3 mutation operation: and performing low-probability variation operation on the gene values.
Preferably, the reward value R in step 6 is given by the following formula: R = Σ_{i=1}^{N} [g^te(X^i_{t−1} | λ^i, Z*) − g^te(X^i_t | λ^i, Z*)] / g^te(X^i_{t−1} | λ^i, Z*), where X^i_t denotes the solution of the i-th subproblem at generation t.
the invention has the beneficial effects that: the invention introduces a reinforcement learning mechanism, utilizes RL controller to optimize continuously, and can realize the self-adaptation of parameters; specifically, an optimal value is generated according to the maximum reward R and the five-dimensional observation vector by using an operator selected by an RL controller for reinforcement learning, so that the population is continuously optimized until a termination condition is met, and the problem that MOEA/D is insensitive to T parameter adjustment is effectively solved.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
FIG. 2 is a graph showing the effect of the invention on the test problem UF3.
FIG. 3 is a graph showing the effect of the invention on the test problem UF7.
Detailed Description
The invention is explained in further detail below with reference to the figures and the specific embodiments.
As shown in fig. 1, the multi-objective evolutionary algorithm based on reinforcement learning includes the following steps:
step 1, randomly generating an initial population from a search space;
step 1.1, calculating the Euclidean distance between any two weight vectors and finding, for each weight vector, the T weight vectors nearest to it, where T is the number of weight vectors in each neighborhood; for each i = 1, …, N, let B_i = {i_1, …, i_T}, where λ^(i_1), …, λ^(i_T) are the T weight vectors nearest to λ^i;
step 1.2, establishing an external population EP for storing the non-dominated solutions found during the search for the optimal solution, and initializing EP to be empty;
step 1.3, uniformly and randomly sampling solutions from the search space to form the initial population, the objective function being F(x) = (f_1(x), f_2(x), …, f_m(x)), i = 1, 2, …, m, where X is the set of decision vectors and x ∈ X is the decision variable (argument);
step 1.4, decomposing the objective function F(x) into N scalar subproblems by means of the Tchebycheff (Chebyshev) method, the i-th subproblem being g^te(x | λ^i, Z*) = max_{1 ≤ j ≤ m} λ^i_j · |f_j(x) − Z*_j|, where λ^i = (λ^i_1, …, λ^i_m) is the weight vector of the i-th subproblem and its neighborhood consists of the subproblems whose weight vectors are closest to λ^i; Z* is the best objective vector found so far, also called the approximate solution, Z* = min{(f_1(x), f_2(x), …, f_m(x))}.
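As an illustration of step 1.4 (not part of the original patent text; the explicit form used below is the standard MOEA/D Tchebycheff scalarization, and the names are ours), the scalar subproblem g^te can be evaluated as follows:

```python
import numpy as np

def g_te(f_x: np.ndarray, lam_i: np.ndarray, z_star: np.ndarray) -> float:
    """Tchebycheff scalarization of the i-th subproblem:
    g^te(x | lambda^i, Z*) = max_j lambda^i_j * |f_j(x) - Z*_j|."""
    return float(np.max(lam_i * np.abs(f_x - z_star)))

# Example: one candidate solution of a 2-objective problem
f_x = np.array([0.6, 0.3])      # objective values F(x) of the candidate
lam_i = np.array([0.5, 0.5])    # weight vector lambda^i of the i-th subproblem
z_star = np.array([0.0, 0.0])   # reference point: component-wise minimum found so far
print(g_te(f_x, lam_i, z_star)) # -> 0.3
```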
Step 2, evaluating the obtained population according to an evaluation criterion;
step 3, updating the searched optimal value of the objective function;
step 4, comparing the generated approximate solution Z* against the termination condition and finishing when the condition is satisfied; for a population that does not satisfy the termination condition, generating a new value with the DE variant operator and the T operator selected by the reinforcement-learning (RL) controller, crossing the new value with values from the neighborhood, and mutating it to generate a new solution (a sketch of these reproduction operations is given after step 4.3 below);
the resulting value is operated on with the values of its neighbours to produce a new solution as follows:
step 4.1, selection operation: randomly selecting two indices h and k from B(i), and using a genetic operator on x_h and x_k to generate a new value, where x_h is the current best solution of the h-th subproblem and x_k is the current best solution of the k-th subproblem; the generated value is compared with the values in the neighborhood in a survival-of-the-fittest manner, and the fitter values are retained and inherited by the next generation;
step 4.2, cross operation: pairing individuals in the population, and performing gene cross operation to generate new individuals;
step 4.3 mutation operation: and performing low-probability variation operation on the gene values.
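A minimal sketch of how steps 4.1-4.3 might produce one offspring from two neighborhood members follows. The patent leaves the concrete operators to the RL controller's choice, so the DE/rand/1-style variant step, the scale factor F, the binomial crossover rate CR, and the Gaussian mutation used here are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def de_variant_offspring(x_i, x_h, x_k, F=0.5, CR=0.9, pm=0.1,
                         lower=0.0, upper=1.0):
    """Produce one offspring for subproblem i from neighbors x_h and x_k:
    a DE-style variant step, binomial crossover, then low-probability mutation.
    The actual operator in the patent is whichever the RL controller selects."""
    n = len(x_i)
    v = x_i + F * (x_h - x_k)                     # DE-style mutant vector (step 4.1)
    mask = rng.random(n) < CR                     # binomial crossover (step 4.2)
    mask[rng.integers(n)] = True                  # keep at least one gene from v
    y = np.where(mask, v, x_i)
    mut = rng.random(n) < pm                      # low-probability mutation (step 4.3)
    y = np.where(mut, y + rng.normal(0.0, 0.1, size=n), y)
    return np.clip(y, lower, upper)               # keep the offspring inside the box bounds

# Example: three 4-dimensional solutions drawn from the same neighborhood B(i)
x_i, x_h, x_k = rng.random((3, 4))
print(de_variant_offspring(x_i, x_h, x_k))
```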
Step 5, comparing the generated new solution with the solutions of the original population, and selecting the solution that gives the subproblem function its optimal value so as to update the population;
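A sketch of this step-5 update is given below (our illustration; the comparison rule assumed here is the usual MOEA/D one, in which each neighboring subproblem keeps whichever solution has the smaller g^te value, with g_te as defined in the earlier sketch):

```python
import numpy as np

def update_neighborhood(pop, F_vals, y, F_y, B_i, weights, z_star, g_te):
    """Step 5 sketch: the new solution y (with objective values F_y) replaces
    the current solution of every subproblem j in the neighborhood B_i for
    which y yields a better (smaller) scalar value g^te."""
    z_star = np.minimum(z_star, F_y)              # keep the reference point up to date (step 3)
    for j in B_i:
        if g_te(F_y, weights[j], z_star) < g_te(F_vals[j], weights[j], z_star):
            pop[j] = y                            # y becomes the best known solution of subproblem j
            F_vals[j] = F_y
    return pop, F_vals, z_star
```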
step 6, calculating a new 5-dimensional observation vector and a reward value R from the generated new population, updating the state of the RL controller accordingly, and judging whether the termination condition is met; if not, the iterative calculation continues until the termination condition is met, and the procedure ends.
The reward value R is given by the following formula: R = Σ_{i=1}^{N} [g^te(X^i_{t−1} | λ^i, Z*) − g^te(X^i_t | λ^i, Z*)] / g^te(X^i_{t−1} | λ^i, Z*), where X^i_t denotes the solution of the i-th subproblem at generation t.
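For illustration, the reward can be computed directly from the aggregated subproblem values of two consecutive generations, and a very small controller can use it to pick the neighborhood size T. The patent does not disclose the exact RL update rule or the contents of the 5-dimensional observation vector, so the epsilon-greedy, bandit-style controller below (which ignores the observation vector) is only an assumed stand-in:

```python
import numpy as np

def reward(g_prev: np.ndarray, g_curr: np.ndarray) -> float:
    """R = sum_i (g^te_{t-1}(i) - g^te_t(i)) / g^te_{t-1}(i): the summed relative
    improvement of the N scalar subproblems between generations t-1 and t."""
    return float(np.sum((g_prev - g_curr) / g_prev))

class EpsilonGreedyTController:
    """Toy controller keeping one action value per candidate neighborhood size T;
    it prefers the T that has produced the largest reward so far."""
    def __init__(self, T_choices=(5, 10, 20, 40), eps=0.1, lr=0.1, seed=0):
        self.T_choices, self.eps, self.lr = T_choices, eps, lr
        self.q = np.zeros(len(T_choices))
        self.rng = np.random.default_rng(seed)
        self.last = 0

    def select_T(self) -> int:
        if self.rng.random() < self.eps:                  # explore a random T
            self.last = int(self.rng.integers(len(self.T_choices)))
        else:                                             # exploit the best known T
            self.last = int(np.argmax(self.q))
        return self.T_choices[self.last]

    def update(self, R: float) -> None:
        # Incremental action-value update driven by the reward R of the last generation
        self.q[self.last] += self.lr * (R - self.q[self.last])
```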
as shown in fig. 2-3, to demonstrate the effectiveness of the algorithm, two standard test sets UF3, UF7 were chosen for validation. Among them, UF3 and UF7 are optimization problems of 2 targets. The population size was set at 300. Experimental results show that the multi-objective optimization algorithm based on reinforcement learning is superior to the MOEA/D algorithm in adjusting T parameters.
The invention introduces a reinforcement learning mechanism and uses the RL controller to optimize continuously, so that self-adaptation of the parameters can be achieved; the operator selected by the reinforcement-learning RL controller generates an optimal value according to the maximum reward R and the five-dimensional observation vector so that the population is continuously optimized until the termination condition is met, thereby effectively solving the problem that MOEA/D is insensitive to the adjustment of the parameter T.
The foregoing is a preferred embodiment of the present invention, and it will be apparent to those skilled in the art that variations, modifications, substitutions and alterations can be made in the embodiment without departing from the principles and spirit of the invention.

Claims (4)

1. A multi-objective evolutionary algorithm based on reinforcement learning, characterized by comprising the following steps:
step 1, randomly generating an initial population from a search space;
step 2, evaluating the obtained population according to an evaluation criterion;
step 3, updating the searched optimal value of the objective function;
step 4, comparing the generated approximate solution Z* against the termination condition and finishing when the condition is satisfied; for a population that does not satisfy the termination condition, generating a new value with the DE variant operator and the T operator selected by the reinforcement-learning RL controller, crossing the new value with values from the neighborhood, and mutating it to generate a new solution;
step 5, comparing the generated new solution with the solutions of the original population, and selecting the solution that gives the subproblem function its optimal value so as to update the population;
step 6, calculating a new 5-dimensional observation vector and a reward value R from the generated new population, updating the state of the RL controller accordingly, and judging whether the termination condition is met; if not, the iterative calculation continues until the termination condition is met, and the procedure ends.
2. The multi-objective evolutionary algorithm based on reinforcement learning of claim 1, characterized in that the specific steps of the step 1 are as follows:
step 1.1, calculating the Euclidean distance between any two weight vectors and finding, for each weight vector, the T weight vectors nearest to it, where T is the number of weight vectors in each neighborhood; for each i = 1, …, N, let B_i = {i_1, …, i_T}, where λ^(i_1), …, λ^(i_T) are the T weight vectors nearest to λ^i;
step 1.2, establishing an external population EP for storing the non-dominated solutions found during the search for the optimal solution, and initializing EP to be empty;
step 1.3, uniformly and randomly sampling solutions from the search space to form the initial population, the objective function being F(x) = (f_1(x), f_2(x), …, f_m(x)), i = 1, 2, …, m, where X is the set of decision vectors and x ∈ X is the decision variable (argument);
step 1.4, decomposing the objective function F(x) into N scalar subproblems by means of the Tchebycheff (Chebyshev) method, the i-th subproblem being g^te(x | λ^i, Z*) = max_{1 ≤ j ≤ m} λ^i_j · |f_j(x) − Z*_j|, where λ^i = (λ^i_1, …, λ^i_m) is the weight vector of the i-th subproblem and its neighborhood consists of the subproblems whose weight vectors are closest to λ^i; Z* is the best objective vector found so far, also called the approximate solution, Z* = min{(f_1(x), f_2(x), …, f_m(x))}.
3. The reinforcement-learning-based multi-objective evolutionary algorithm of claim 2, wherein the value generated in step 4 and the values of its neighborhood are subjected to the following operations to generate a new solution: step 4.1, selection operation: randomly selecting two indices h and k from B(i), and using a genetic operator on x_h and x_k to generate a new value, where x_h is the current best solution of the h-th subproblem and x_k is the current best solution of the k-th subproblem; the generated value is compared with the values in the neighborhood in a survival-of-the-fittest manner, and the fitter values are retained and inherited by the next generation;
step 4.2, cross operation: pairing individuals in the population, and performing gene cross operation to generate new individuals;
step 4.3 mutation operation: and performing low-probability variation operation on the gene values.
4. The reinforcement learning-based multi-objective evolutionary algorithm of claim 3, wherein the reward value R in the step 6 is obtained by the following formula:
R = Σ_{i=1}^{N} [g^te(X^i_{t−1} | λ^i, Z*) − g^te(X^i_t | λ^i, Z*)] / g^te(X^i_{t−1} | λ^i, Z*).
CN201711279238.2A 2017-12-06 2017-12-06 Multi-objective Evolutionary Algorithm based on reinforcement learning Pending CN108038538A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711279238.2A CN108038538A (en) 2017-12-06 2017-12-06 Multi-objective Evolutionary Algorithm based on reinforcement learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711279238.2A CN108038538A (en) 2017-12-06 2017-12-06 Multi-objective Evolutionary Algorithm based on reinforcement learning

Publications (1)

Publication Number Publication Date
CN108038538A true CN108038538A (en) 2018-05-15

Family

ID=62095661

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711279238.2A Pending CN108038538A (en) 2017-12-06 2017-12-06 Multi-objective Evolutionary Algorithm based on reinforcement learning

Country Status (1)

Country Link
CN (1) CN108038538A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108830370A (en) * 2018-05-24 2018-11-16 东北大学 Based on the feature selection approach for enhancing learning-oriented flora foraging algorithm
CN108830370B (en) * 2018-05-24 2020-11-10 东北大学 Feature selection method based on reinforced learning type flora foraging algorithm
CN108805268A (en) * 2018-06-08 2018-11-13 中国科学技术大学 Deeply learning strategy network training method based on evolution algorithm
CN111045325A (en) * 2018-10-11 2020-04-21 富士通株式会社 Optimization device and control method of optimization device
CN110174118A (en) * 2019-05-29 2019-08-27 北京洛必德科技有限公司 Robot multiple-objective search-path layout method and apparatus based on intensified learning
CN110704959A (en) * 2019-08-19 2020-01-17 南昌航空大学 MOEAD (Metal oxide optical insulator deposition) optimization fixture layout method and device based on migration behavior
CN110704959B (en) * 2019-08-19 2022-04-08 南昌航空大学 MOEAD (Metal oxide optical insulator deposition) optimization fixture layout method and device based on migration behavior
CN110782016A (en) * 2019-10-25 2020-02-11 北京百度网讯科技有限公司 Method and apparatus for optimizing neural network architecture search
TWI741760B (en) * 2020-08-27 2021-10-01 財團法人工業技術研究院 Learning based resource allocation method, learning based resource allocation system and user interface

Similar Documents

Publication Publication Date Title
CN108038538A (en) Multi-objective Evolutionary Algorithm based on reinforcement learning
CN109271320B (en) Higher-level multi-target test case priority ordering method
Liu et al. S-metric based multi-objective fireworks algorithm
CN110147590B (en) Spiral antenna design method based on adaptive evolution optimization algorithm
CN104616062A (en) Nonlinear system recognizing method based on multi-target genetic programming
CN111369000A (en) High-dimensional multi-target evolution method based on decomposition
CN109034479B (en) Multi-target scheduling method and device based on differential evolution algorithm
Moyano et al. An evolutionary algorithm for optimizing the target ordering in ensemble of regressor chains
CN116560313A (en) Genetic algorithm optimization scheduling method for multi-objective flexible job shop problem
CN114065896A (en) Multi-target decomposition evolution algorithm based on neighborhood adjustment and angle selection strategy
Wang et al. Bi-objective scenario-guided swarm intelligent algorithms based on reinforcement learning for robust unrelated parallel machines scheduling with setup times
Zhang et al. Embedding multi-attribute decision making into evolutionary optimization to solve the many-objective combinatorial optimization problems
Büche Multi-objective evolutionary optimization of gas turbine components
CN114021934A (en) Method for solving workshop energy-saving scheduling problem based on improved SPEA2
Zheng et al. Data-driven optimization based on random forest surrogate
JP2021131835A (en) Image processing system and image processing program
CN115310654A (en) Job shop scheduling method based on reinforcement learning-non-dominated sorting genetic algorithm
CN113141272B (en) Network security situation analysis method based on iteration optimization RBF neural network
Carvalho et al. Multi-objective Flexible Job-Shop scheduling problem with DIPSO: More diversity, greater efficiency
CN114036069A (en) Multi-target test case sequencing method based on decomposition weight vector self-adaption
CN115438877A (en) Multi-objective distributed flexible workshop scheduling optimization method based on gray wolf algorithm
CN114648247A (en) Remanufacturing decision-making method integrating process planning and scheduling
CN110221931B (en) System-level testability design multi-objective optimization method based on Chebyshev
CN110399968B (en) Multi-objective optimization method for system-level testability design based on utility function
Masood et al. Genetic programming hyper-heuristic with gaussian process-based reference point adaption for many-objective job shop scheduling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20180515)