CN107577534A - Resource scheduling method and device - Google Patents

Resource scheduling method and device

Info

Publication number
CN107577534A
Authority
CN
China
Prior art keywords
gpu
task
resource
lists
clustered node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710776588.3A
Other languages
Chinese (zh)
Inventor
胡叶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou Yunhai Information Technology Co Ltd
Original Assignee
Zhengzhou Yunhai Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou Yunhai Information Technology Co Ltd filed Critical Zhengzhou Yunhai Information Technology Co Ltd
Priority to CN201710776588.3A priority Critical patent/CN107577534A/en
Publication of CN107577534A publication Critical patent/CN107577534A/en
Pending legal-status Critical Current

Links

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present invention provides a resource scheduling method and device. The method comprises the following steps: after a task processing request is received, a GPU list is allocated for the task, wherein the GPUs in the GPU list are located in corresponding cluster nodes; according to the GPUs in the GPU list, a corresponding resource group is created in the cluster node and bound to the task; during task execution, the GPUs in the resource group are scheduled to process the task. In the above technical solution, allocating a GPU list for the task makes it possible to track the GPU usage of the cluster nodes; creating a corresponding resource group and binding it to the task ensures that the task cannot exceed its allocation and use unrequested GPU resources while it actually runs.

Description

Resource scheduling method and device
Technical field
The invention belongs to the field of cloud computing technology, and more particularly to a resource scheduling method and device.
Background art
Maui is an open-source job scheduling application that is widely used for job scheduling and management in high-performance server clusters. The number of GPUs on a node can be configured and scheduled through Maui's GRES attribute. For example, adding a NODECFG entry such as "NODECFG[node1] GRES=gpu:4" to the maui.cfg file indicates that node node1 has 4 GPUs; when a job is submitted, the qsub option "-W x=gpu@2" indicates that the job should be dispatched to a node whose GPU attribute is configured and whose current attribute value is greater than or equal to 2.
Although the above scheduling approach allows the scheduling system to dispatch jobs to nodes according to their configured GPU counts, the GPUs actually used cannot be known while a job is running. On the one hand this may lead to resource overruns (for example, the number of GPUs actually used by a user exceeds the number requested); on the other hand it is also impossible to keep accounting statistics on GPU usage.
Therefore, there is an urgent need for a resource scheduling scheme that solves the above technical problem.
Summary of the invention
The present invention provides a resource scheduling method and device to solve the above problems.
An embodiment of the present invention provides a resource scheduling method comprising the following steps: after a task processing request is received, allocating a GPU list for the task, wherein the GPUs in the GPU list are located in corresponding cluster nodes;
creating, according to the GPUs in the GPU list, a corresponding resource group in the cluster node and binding it to the task;
during task execution, scheduling the GPUs in the resource group to process the task.
An embodiment of the present invention also provides a resource scheduling device, comprising a processor adapted to implement instructions and a storage device adapted to store a plurality of instructions, the instructions being adapted to be loaded and executed by the processor to:
after a task processing request is received, allocate a GPU list for the task, wherein the GPUs in the GPU list are located in corresponding cluster nodes;
create, according to the GPUs in the GPU list, a corresponding resource group in the cluster node and bind it to the task;
during task execution, schedule the GPUs in the resource group to process the task.
The technical solution provided by the embodiments of the present invention is as follows: after a task processing request is received, a GPU list is allocated for the task, wherein the GPUs in the GPU list are located in corresponding cluster nodes; according to the GPUs in the GPU list, a corresponding resource group is created in the cluster node and bound to the task; during task execution, the GPUs in the resource group are scheduled to process the task.
In the above technical solution, allocating a GPU list for the task makes it possible to track the GPU usage of the cluster nodes; creating a corresponding resource group and binding it to the task ensures that the task cannot exceed its allocation and use unrequested GPU resources while it actually runs.
Brief description of the drawings
The accompanying drawings described herein are provided for a further understanding of the present invention and form a part of this application; the illustrative embodiments of the present invention and their description are used to explain the present invention and do not constitute an undue limitation of the present invention. In the drawings:
Fig. 1 is a flow chart of the resource scheduling method of Embodiment 1 of the present invention;
Fig. 2 is a structural diagram of the resource scheduling device of Embodiment 2 of the present invention;
Fig. 3 is a structural diagram of the resource scheduling device of Embodiment 3 of the present invention.
Detailed description of the embodiments
The present invention is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments. It should be noted that the embodiments in this application and the features in the embodiments may be combined with each other provided they do not conflict.
Fig. 1 is a flow chart of the resource scheduling method of Embodiment 1 of the present invention, which comprises the following steps:
Step 101: after a task processing request is received, a GPU list is allocated for the task, wherein the GPUs in the GPU list are located in corresponding cluster nodes;
Further, the GPUs in the GPU list are located in the same cluster node; or,
the GPUs in the GPU list are located in multiple cluster nodes.
Further, the task processing request carries GPU quantity information.
Further, after the task processing request is received and before the GPU list is allocated for the task, the method further comprises:
obtaining the working state of the GPUs in each cluster node respectively.
Preferably, the working state includes: in use, idle.
Step 102: according to the GPUs in the GPU list, a corresponding resource group is created in the cluster node and bound to the task;
Step 103: during task execution, the GPUs in the resource group are scheduled to process the task.
Further, after the task is executed, the resource group is released.
Fig. 2 is a structural diagram of the resource scheduling device of Embodiment 2 of the present invention, which includes a scheduling module, a resource allocation module, a resource binding module, a task execution module and a resource release module; wherein the scheduling module is connected to the resource binding module through the resource allocation module; the resource allocation module is connected to the task execution module through the resource binding module; and the resource binding module is connected to the resource release module through the task execution module.
The scheduling module is configured to receive a task processing request; it is further configured, after the task processing request is received, to schedule the resource allocation module to allocate a GPU list for the task; it is further configured to add the resource allocation module and the resource binding module to the task's prologue processing flow, and to add the resource release module to the task's epilogue processing flow.
The resource allocation module is configured, after receiving the scheduling message from the scheduling module, to allocate a GPU list for the task; it is further configured to send an allocation-success message to the resource binding module.
The resource binding module is configured, after receiving the allocation-success message, to create, according to the GPUs in the GPU list, a corresponding resource group in the cluster node and to bind it to the task; it is further configured to send a binding-success message to the task execution module.
The task execution module schedules the GPUs in the resource group to process the task.
The resource release module is configured to release the resource group after the task is executed.
Further, the resource allocation module maintains the GPU resource usage of the cluster and computes the GPU resource list that should be allocated to the current computing task; the resource binding module binds the task to the allocated GPU devices through cgroup; and the resource release module releases the GPU devices bound to the task when the task ends.
Specifically:
Resource allocation module: this module maintains a file named "gpuNodes", in which each record represents the GPU resource usage of one cluster node, as follows:
node01:0 1 0 1
node02:0 0 0 0
node03:0 0 0 0
node04:0 0 0 0
Wherein, "node01: 0 1 0 1" indicates that node node01 contains 4 GPUs in total, and that the 0th and 2nd GPUs are currently idle.
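As a rough illustration only (this sketch and its function name are not part of the patent; it merely assumes the whitespace-separated record format shown above, with 0 marking an idle GPU and 1 a GPU in use), a gpuNodes record could be parsed as follows:

```python
def parse_gpunodes_record(line):
    """Parse one gpuNodes record, e.g. "node01: 0 1 0 1".

    Returns the node name and the list of GPU usage flags (0 = idle, 1 = in use).
    """
    node, flags = line.split(":", 1)
    return node.strip(), [int(bit) for bit in flags.split()]

# Example: for "node01: 0 1 0 1" the idle GPUs are those with flag 0, i.e. cards 0 and 2.
node, usage = parse_gpunodes_record("node01: 0 1 0 1")
idle_gpus = [i for i, used in enumerate(usage) if used == 0]   # -> [0, 2]
```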
The resource allocation module first initializes the gpuNodes file according to the configuration of the current scheduling module; then, each time a task is executed and the module is called, it computes, according to the number of GPUs requested by the task and the current resource usage of the cluster nodes, the specific GPU list that should be allocated to the task.
The resource allocation module records the GPU list allocated to each task in a jobGpus file, in which each record represents the GPU resource list allocated to one task, as follows:
601.node01;;node01#0,1;node02#2,3
602.node01;;node03#0;node02#1
Wherein, task "601.node01" has used the 0th and 1st GPU cards of cluster node node01 and the 2nd and 3rd GPU cards of cluster node node02.
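Similarly, a jobGpus record can be illustrated with the following sketch (assuming the ";;" separator and the "node#index,index" notation shown above; the helper name is hypothetical):

```python
def parse_jobgpus_record(line):
    """Parse one jobGpus record, e.g. "601.node01;;node01#0,1;node02#2,3".

    Returns the task ID and a dict mapping each cluster node to the GPU indices
    allocated to the task on that node.
    """
    task_id, _, gpu_part = line.partition(";;")
    allocation = {}
    for entry in filter(None, gpu_part.split(";")):
        node, _, indices = entry.partition("#")
        allocation[node] = [int(i) for i in indices.split(",") if i]
    return task_id, allocation

# Example: task "601.node01" uses GPUs 0 and 1 of node01 and GPUs 2 and 3 of node02.
task_id, alloc = parse_jobgpus_record("601.node01;;node01#0,1;node02#2,3")
# alloc == {"node01": [0, 1], "node02": [2, 3]}
```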
Resource binding module: according to the information in the GPU list, a corresponding resource group is created on the corresponding cluster node through cgroup, and the resource group is bound to the task process running on the cluster node.
Resource release module: when the task finishes, the GPU resource list of the task is obtained from the task's record in jobGpus, the bound resource group is then released on the corresponding cluster node, and the record is moved from jobGpus to the history file, i.e. the jobGpusHis file (the jobGpusHis file has the same format as the jobGpus file).
A detailed description follows:
1) gpuNodes file initialization process
Input parameter: task identifier $JOBID
Output: none;
The script first uses the task ID from the input parameter to query the scheduling system for whether the task has a GPU resource request; if it does not, GPU resource allocation is stopped. If it does, the script then checks whether the gpuNodes file exists and creates it if it does not. The node record file "nodes" of the scheduling system is then traversed; for every node record in that file that has a gpu configuration but is not yet present in the gpuNodes file, a record is added to the gpuNodes file.
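A minimal sketch of this initialization step is given below. It is an illustration under assumptions: the scheduler query for the GPU request and the parsing of the "nodes" file are left to the caller, which passes in a node-to-GPU-count mapping, an interface the patent does not prescribe.

```python
import os

def init_gpunodes(node_gpu_counts, gpunodes_path="gpuNodes"):
    """Ensure gpuNodes exists and has one record per GPU node.

    node_gpu_counts maps a node name to its GPU count, as extracted from the
    scheduler's "nodes" file, e.g. {"node01": 4, "node02": 4}. Nodes that are
    not yet recorded are appended with all GPUs marked idle ("0").
    """
    if not os.path.exists(gpunodes_path):                  # create gpuNodes if it is missing
        open(gpunodes_path, "w").close()

    with open(gpunodes_path) as f:                         # nodes already recorded
        known = {line.split(":", 1)[0].strip() for line in f if ":" in line}

    with open(gpunodes_path, "a") as out:
        for node, count in node_gpu_counts.items():
            if node not in known:
                out.write("%s: %s\n" % (node, " ".join(["0"] * count)))

# Example: initialize records for two 4-GPU cluster nodes.
init_gpunodes({"node01": 4, "node02": 4})
```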
2) Resource allocation module
Input parameter: task identifier $JOBID;
Output: the task's GPU resource record, e.g. "601.node01;;node01#0,1;node02#2,3";
The detailed process is as follows:
Obtain the task information from the scheduling system according to the task identifier;
Parse the task and the cluster node list allocated to it;
Initialize the task GPU list;
Traverse the cluster node list;
Obtain the GPU resources in use on each cluster node;
Obtain the idle GPU resources on each cluster node;
Maintain the gpuNodes file;
Maintain the jobGpus file.
The module first obtains the task information according to the task identifier and extracts the list of cluster nodes allocated to the task and the number of GPUs to be allocated on each cluster node. It then traverses the cluster node list and looks up each cluster node's GPU usage in the gpuNodes file. For example, "node01: 0 1 0 1" indicates that the 1st and 3rd GPU cards of the node are in use; if a task (e.g. id: 605) requests one GPU card on this cluster node, the 0th card can be allocated, so the module updates the record to "node01: 1 1 0 1" and returns "605;;node01#0," as the task's GPU allocation record for node node01.
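The per-node allocation decision described above can be sketched roughly as follows (the in-memory flag list and the function name are assumptions for illustration; the patent only specifies the file formats):

```python
def allocate_on_node(node, usage, requested):
    """Allocate `requested` idle GPUs on one cluster node.

    `usage` is the node's flag list from gpuNodes (0 = idle, 1 = in use).
    Returns the updated flag list and the jobGpus fragment for this node,
    or (usage, None) if the node does not have enough idle GPUs.
    """
    idle = [i for i, used in enumerate(usage) if used == 0]
    if len(idle) < requested:
        return usage, None                     # not enough idle GPUs on this node
    chosen = idle[:requested]
    new_usage = list(usage)
    for i in chosen:
        new_usage[i] = 1                       # mark the allocated GPUs as in use
    fragment = "%s#%s" % (node, ",".join(str(i) for i in chosen))
    return new_usage, fragment

# Example from the description: "node01: 0 1 0 1", task 605 requests 1 GPU.
usage, fragment = allocate_on_node("node01", [0, 1, 0, 1], 1)
# usage == [1, 1, 0, 1]; fragment == "node01#0", giving a record like "605;;node01#0"
```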
3) Resource binding module
The resource binding module looks up the GPU resource list corresponding to the input task ID in the jobGpus file, parses the list, creates a resource group on the corresponding node according to the allocated GPU serial numbers and binds it, while restricting the resource group's permission to access the other GPU resources.
The detailed process is as follows (a rough sketch is given after this list):
Obtain the GPU resource list used by the task and obtain the GPU card information;
Create the resource group using cgroup;
Configure the resource group to deny access to all GPU resources;
Configure the resource group to allow use of the allocated GPU resources;
Bind the resource group to the task process.
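A minimal sketch of this binding step is shown below. It assumes the cgroup v1 devices controller mounted at /sys/fs/cgroup/devices and NVIDIA GPU character devices with major number 195 whose minor number equals the GPU index; the patent only states that cgroup is used, so the paths, device numbers and function name here are assumptions.

```python
import os

CGROUP_ROOT = "/sys/fs/cgroup/devices"   # assumed cgroup v1 devices controller mount point
GPU_MAJOR = 195                          # assumed major number of the /dev/nvidiaN devices

def bind_task_to_gpus(job_id, gpu_indices, task_pid):
    """Create a devices cgroup for the job, deny all GPUs, re-allow only the
    allocated ones, and attach the task process to the group."""
    group = os.path.join(CGROUP_ROOT, "gpu_%s" % job_id)
    os.makedirs(group, exist_ok=True)                       # create the resource group

    with open(os.path.join(group, "devices.deny"), "w") as f:
        f.write("c %d:* rwm" % GPU_MAJOR)                   # disable all GPU devices

    for idx in gpu_indices:                                 # allow only the allocated GPUs
        with open(os.path.join(group, "devices.allow"), "w") as f:
            f.write("c %d:%d rwm" % (GPU_MAJOR, idx))

    with open(os.path.join(group, "tasks"), "w") as f:
        f.write(str(task_pid))                              # bind the task process to the group
```

A process attached to such a group is refused access to GPU device nodes that are not on its allow list, which is what prevents the task from using unrequested GPUs.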
4) Resource release module
After the task has run, the resource release module is called. According to the task ID, the module obtains the GPU resource list allocated to the task, releases the resource group created on the corresponding cluster node, restores the corresponding GPU bits in the gpuNodes file to idle ("0"), and finally moves the task's record to the history file jobGpusHis.
The detailed process is as follows (a rough sketch follows below):
Obtain the GPU resource list corresponding to the task; if it is empty, do not release anything; otherwise, release the resource group;
Restore the GPU states in the gpuNodes file.
In addition, when tasks run in parallel, file lock control is added to the above file operations.
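Putting the release step and the file-lock control together, a rough sketch could look like the following; it assumes POSIX advisory locks via fcntl and the gpuNodes/jobGpus/jobGpusHis formats described above, none of which is mandated in this exact form by the patent, and it omits removal of the cgroup resource group itself.

```python
import fcntl

def release_job(job_id, gpunodes_path="gpuNodes",
                jobgpus_path="jobGpus", history_path="jobGpusHis"):
    """Reset the finished task's GPU bits in gpuNodes to idle ("0") and move
    its record from jobGpus to the jobGpusHis history file."""
    with open(jobgpus_path, "r+") as jg, open(gpunodes_path, "r+") as gn, \
         open(history_path, "a") as hist:
        fcntl.flock(jg, fcntl.LOCK_EX)            # file locks for parallel tasks
        fcntl.flock(gn, fcntl.LOCK_EX)

        keep, released = [], None
        for rec in jg.read().splitlines():
            if rec.startswith(job_id + ";;"):
                released = rec                     # this task's GPU resource list
            elif rec:
                keep.append(rec)
        if released is None:
            return                                 # empty list: nothing to release

        # Restore the allocated GPU bits to idle in gpuNodes.
        alloc = {}
        for entry in filter(None, released.split(";;", 1)[1].split(";")):
            node, _, idx = entry.partition("#")
            alloc[node] = [int(i) for i in idx.split(",") if i]
        lines = gn.read().splitlines()
        for n, line in enumerate(lines):
            node, _, flags = line.partition(":")
            node = node.strip()
            if node in alloc:
                bits = flags.split()
                for i in alloc[node]:
                    bits[i] = "0"                  # back to idle
                lines[n] = "%s: %s" % (node, " ".join(bits))

        gn.seek(0); gn.truncate(); gn.write("\n".join(lines) + "\n")
        jg.seek(0); jg.truncate(); jg.write("\n".join(keep) + ("\n" if keep else ""))
        hist.write(released + "\n")                # move the record to the history file
```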
The embodiments of the present invention can support the GPU scheduling feature of Torque+Maui scheduling clusters. By maintaining the gpuNodes file and the jobGpus file, the GPU resources currently used by nodes and tasks can be tracked; by maintaining jobGpusHis, the GPU usage of historical jobs can be viewed.
The embodiments of the present invention provide a way to restrict the GPUs of a task: creating a resource group through cgroup and binding it to the task process ensures that the task cannot exceed its allocation and use unrequested GPU resources while it actually runs.
Fig. 3 is a structural diagram of the resource scheduling device of Embodiment 3 of the present invention, which includes a processor adapted to implement instructions and a storage device adapted to store a plurality of instructions, the instructions being adapted to be loaded and executed by the processor to:
after a task processing request is received, allocate a GPU list for the task, wherein the GPUs in the GPU list are located in corresponding cluster nodes;
create, according to the GPUs in the GPU list, a corresponding resource group in the cluster node and bind it to the task;
during task execution, schedule the GPUs in the resource group to process the task.
Further, after the task is executed, the resource group is released.
Further, the GPUs in the GPU list are located in the same cluster node; or,
the GPUs in the GPU list are located in multiple cluster nodes.
Further, the task processing request carries GPU quantity information.
Further, after the task processing request is received and before the GPU list is allocated for the task, the device is further configured to:
obtain the working state of the GPUs in each cluster node respectively.
Preferably, the working state includes: in use, idle.
The technical solution provided by the embodiments of the present invention is as follows: after a task processing request is received, a GPU list is allocated for the task, wherein the GPUs in the GPU list are located in corresponding cluster nodes; according to the GPUs in the GPU list, a corresponding resource group is created in the cluster node and bound to the task; during task execution, the GPUs in the resource group are scheduled to process the task.
In the above technical solution, allocating a GPU list for the task makes it possible to track the GPU usage of the cluster nodes; creating a corresponding resource group and binding it to the task ensures that the task cannot exceed its allocation and use unrequested GPU resources while it actually runs.
The GPU scheduling and allocation approach of the present invention is equally applicable to other device types, such as MIC and IB cards; this resource allocation and tracking approach can also provide data support for later cluster accounting statistics.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent substitution, improvement, etc. made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (12)

1. A resource scheduling method, characterized by comprising the following steps:
after a task processing request is received, allocating a GPU list for the task, wherein the GPUs in the GPU list are located in corresponding cluster nodes;
creating, according to the GPUs in the GPU list, a corresponding resource group in the cluster node and binding it to the task;
during task execution, scheduling the GPUs in the resource group to process the task.
2. The resource scheduling method according to claim 1, characterized in that after the task is executed, the resource group is released.
3. The resource scheduling method according to claim 1, characterized in that the GPUs in the GPU list are located in the same cluster node; or,
the GPUs in the GPU list are located in multiple cluster nodes.
4. The resource scheduling method according to claim 1, characterized in that the task processing request carries GPU quantity information.
5. The resource scheduling method according to any one of claims 1 to 4, characterized in that after the task processing request is received and before the GPU list is allocated for the task, the method further comprises:
obtaining the working state of the GPUs in each cluster node respectively.
6. The resource scheduling method according to claim 5, characterized in that the working state includes: in use, idle.
7. A resource scheduling device, characterized by comprising a processor adapted to implement instructions and a storage device adapted to store a plurality of instructions, the instructions being adapted to be loaded and executed by the processor to:
after a task processing request is received, allocate a GPU list for the task, wherein the GPUs in the GPU list are located in corresponding cluster nodes;
create, according to the GPUs in the GPU list, a corresponding resource group in the cluster node and bind it to the task;
during task execution, schedule the GPUs in the resource group to process the task.
8. The resource scheduling device according to claim 7, characterized in that after the task is executed, the resource group is released.
9. The resource scheduling device according to claim 7, characterized in that the GPUs in the GPU list are located in the same cluster node; or,
the GPUs in the GPU list are located in multiple cluster nodes.
10. The resource scheduling device according to claim 7, characterized in that the task processing request carries GPU quantity information.
11. The resource scheduling device according to any one of claims 7 to 10, characterized in that after the task processing request is received and before the GPU list is allocated for the task, the device is further configured to:
obtain the working state of the GPUs in each cluster node respectively.
12. The resource scheduling device according to claim 11, characterized in that the working state includes: in use, idle.
CN201710776588.3A 2017-08-31 2017-08-31 Resource scheduling method and device Pending CN107577534A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710776588.3A CN107577534A (en) 2017-08-31 2017-08-31 Resource scheduling method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710776588.3A CN107577534A (en) 2017-08-31 2017-08-31 Resource scheduling method and device

Publications (1)

Publication Number Publication Date
CN107577534A true CN107577534A (en) 2018-01-12

Family

ID=61031216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710776588.3A Pending CN107577534A (en) 2017-08-31 2017-08-31 Resource scheduling method and device

Country Status (1)

Country Link
CN (1) CN107577534A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109933433A (en) * 2019-03-19 2019-06-25 合肥中科类脑智能技术有限公司 A kind of GPU resource scheduling system and its dispatching method
CN110503593A (en) * 2018-05-18 2019-11-26 微软技术许可有限责任公司 The scheduling of multiple graphics processing units
CN110795241A (en) * 2019-10-18 2020-02-14 北京并行科技股份有限公司 Job scheduling management method, scheduling center and system
CN111290855A (en) * 2020-02-06 2020-06-16 四川大学 GPU card management method, system and storage medium for multiple GPU servers in distributed environment
CN111400051A (en) * 2020-03-31 2020-07-10 京东方科技集团股份有限公司 Resource scheduling method, device and system
CN112910796A (en) * 2021-01-27 2021-06-04 北京百度网讯科技有限公司 Traffic management method, apparatus, device, storage medium, and program product
CN113010309A (en) * 2021-03-02 2021-06-22 北京达佳互联信息技术有限公司 Cluster resource scheduling method, device, storage medium, equipment and program product
CN113742064A (en) * 2021-08-06 2021-12-03 苏州浪潮智能科技有限公司 Resource arrangement method, system, equipment and medium for server cluster
CN115145695A (en) * 2022-08-30 2022-10-04 浙江大华技术股份有限公司 Resource scheduling method and device, computer equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120124591A1 (en) * 2010-11-17 2012-05-17 Nec Laboratories America, Inc. scheduler and resource manager for coprocessor-based heterogeneous clusters
CN102541640A (en) * 2011-12-28 2012-07-04 厦门市美亚柏科信息股份有限公司 Cluster GPU (graphic processing unit) resource scheduling system and method
CN104023062A (en) * 2014-06-10 2014-09-03 上海大学 Heterogeneous computing-oriented hardware architecture of distributed big data system
CN106506594A (en) * 2016-09-30 2017-03-15 科大讯飞股份有限公司 A kind of concurrent computation resource distribution method and device
US20170235601A1 (en) * 2015-07-13 2017-08-17 Palo Alto Research Center Incorporated Dynamically adaptive, resource aware system and method for scheduling

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120124591A1 (en) * 2010-11-17 2012-05-17 Nec Laboratories America, Inc. scheduler and resource manager for coprocessor-based heterogeneous clusters
CN102541640A (en) * 2011-12-28 2012-07-04 厦门市美亚柏科信息股份有限公司 Cluster GPU (graphic processing unit) resource scheduling system and method
CN104023062A (en) * 2014-06-10 2014-09-03 上海大学 Heterogeneous computing-oriented hardware architecture of distributed big data system
US20170235601A1 (en) * 2015-07-13 2017-08-17 Palo Alto Research Center Incorporated Dynamically adaptive, resource aware system and method for scheduling
CN106506594A (en) * 2016-09-30 2017-03-15 科大讯飞股份有限公司 A kind of concurrent computation resource distribution method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
吕相文 et al., "Research on multi-GPU resource scheduling mechanisms in a cloud computing environment", Journal of Chinese Computer Systems (《小型微型计算机系统》) *
康雷 et al., "Design of a GPU cluster management system based on the B/S model", Computer Engineering (《计算机工程》) *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110503593A (en) * 2018-05-18 2019-11-26 微软技术许可有限责任公司 The scheduling of multiple graphics processing units
US11983564B2 (en) 2018-05-18 2024-05-14 Microsoft Technology Licensing, Llc Scheduling of a plurality of graphic processing units
CN109933433A (en) * 2019-03-19 2019-06-25 合肥中科类脑智能技术有限公司 A kind of GPU resource scheduling system and its dispatching method
CN109933433B (en) * 2019-03-19 2021-06-25 合肥中科类脑智能技术有限公司 GPU resource scheduling system and scheduling method thereof
CN110795241A (en) * 2019-10-18 2020-02-14 北京并行科技股份有限公司 Job scheduling management method, scheduling center and system
CN111290855A (en) * 2020-02-06 2020-06-16 四川大学 GPU card management method, system and storage medium for multiple GPU servers in distributed environment
CN111400051A (en) * 2020-03-31 2020-07-10 京东方科技集团股份有限公司 Resource scheduling method, device and system
CN111400051B (en) * 2020-03-31 2023-10-27 京东方科技集团股份有限公司 Resource scheduling method, device and system
CN112910796B (en) * 2021-01-27 2022-12-16 北京百度网讯科技有限公司 Traffic management method, apparatus, device, storage medium, and program product
CN112910796A (en) * 2021-01-27 2021-06-04 北京百度网讯科技有限公司 Traffic management method, apparatus, device, storage medium, and program product
CN113010309A (en) * 2021-03-02 2021-06-22 北京达佳互联信息技术有限公司 Cluster resource scheduling method, device, storage medium, equipment and program product
CN113742064B (en) * 2021-08-06 2023-08-04 苏州浪潮智能科技有限公司 Resource arrangement method, system, equipment and medium of server cluster
CN113742064A (en) * 2021-08-06 2021-12-03 苏州浪潮智能科技有限公司 Resource arrangement method, system, equipment and medium for server cluster
CN115145695A (en) * 2022-08-30 2022-10-04 浙江大华技术股份有限公司 Resource scheduling method and device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN107577534A (en) Resource scheduling method and device
CN107038069B (en) Dynamic label matching DLMS scheduling method under Hadoop platform
CN111258737B (en) Resource scheduling method and device and filter scheduler
CN102567086B (en) Task scheduling method, equipment and system
CN110413412B (en) GPU (graphics processing Unit) cluster resource allocation method and device
CN107864211B (en) Cluster resource dispatching method and system
CN102096599A (en) Multi-queue task scheduling method and related system and equipment
CN107515781B (en) Deterministic task scheduling and load balancing system based on multiple processors
CN103491024A (en) Job scheduling method and device for streaming data
CN103503412A (en) Method and device for scheduling resources
CN107239342A (en) A kind of storage cluster task management method and device
CN109976907A (en) Method for allocating tasks and system, electronic equipment, computer-readable medium
CN102981973A (en) Method of executing requests in memory system
CN105608138B (en) A kind of system of optimization array data base concurrency data loading performance
KR101765725B1 (en) System and Method for connecting dynamic device on mass broadcasting Big Data Parallel Distributed Processing
CN113626173B (en) Scheduling method, scheduling device and storage medium
US6782535B1 (en) Dynamic queue width system and method
CN114721818A (en) Kubernetes cluster-based GPU time-sharing method and system
US20150212859A1 (en) Graphics processing unit controller, host system, and methods
CN104156505A (en) Hadoop cluster job scheduling method and device on basis of user behavior analysis
CN105955816A (en) Event scheduling method and device
CN113301087B (en) Resource scheduling method, device, computing equipment and medium
CN111144760B (en) Work order auditing platform, auditing dispatching method and device and dispatching server
CN111796932A (en) GPU resource scheduling method
CN111506407A (en) Resource management and job scheduling method, system and medium combining Pull mode and Push mode

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180112