CN105868773A - Hierarchical random forest based multi-tag classification method - Google Patents

Hierarchical random forest based multi-tag classification method

Info

Publication number
CN105868773A
Authority
CN
China
Prior art keywords
label
data
random forest
tag
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610171082.5A
Other languages
Chinese (zh)
Inventor
吴庆耀
谭明奎
陈健
林世杭
黄翰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201610171082.5A priority Critical patent/CN105868773A/en
Publication of CN105868773A publication Critical patent/CN105868773A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a multi-label (multi-tag) classification method based on a hierarchical random forest. The method comprises the steps of randomly extracting part of the data from a training data set; training a hierarchical tree with the extracted data, where the arrangement of the nodes in the tree is determined by clustering the labels of the data contained in each node; repeating this process to build several hierarchical trees, which together form a hierarchical random forest that serves as the multi-label classifier; and classifying unlabelled objects with the hierarchical random forest, i.e. the multi-label classifier. The method rests on the basic observation that the labels of a data object are usually correlated: the hierarchical tree is built from the clustering result of the labels, and a classifier is trained for every node of the tree. Following the random forest idea, several such trees are combined into a hierarchical random forest, which accounts for the many possible label associations and averages out the classification errors of the individual trees, thereby increasing the speed and improving the accuracy of multi-label classification.

Description

A multi-label classification method based on a hierarchical random forest
Technical field
The present invention relates to the field of multi-label classification, and in particular to a multi-label classification method based on a hierarchical random forest.
Background technology
Multi-label classification is one of the more complex classification problems. Unlike binary classification, it allows more than two classes; unlike multi-class classification, it allows an object to belong to several classes at the same time. Many real-world problems are multi-label. A common example is film categorisation: a film may be science fiction, comedy, action, drama, and so on, and a single film can be both a comedy and a drama, i.e. it belongs to more than one class; in fact most films belong to several classes. In text classification, an article can be assigned to several topics, such as society, science, sports, entertainment, or education; in scenery image classification, a single image can carry several themes, such as forest, beach, mountain, or grassland. Because multi-label problems are so widespread in practice, research on multi-label classification is of direct practical significance. Existing multi-label classification algorithms fall into two broad classes: methods based on decomposing the data set, and methods based on a single optimisation problem. Although research on multi-label classification has produced useful results, there is still considerable room for improvement in both classification speed and accuracy.
Summary of the invention
It is an object of the present invention to provide a multi-label classification method based on a hierarchical random forest that overcomes the above deficiencies of the prior art.
The object of the present invention is achieved by the following technical scheme.
A multi-label classification method based on a hierarchical random forest comprises the following steps:
S1, randomly extracting part of the data from a training data set;
S2, building a hierarchical tree from the data extracted in step S1;
S3, repeating steps S1-S2 to build a hierarchical random forest that serves as the multi-label classifier;
S4, classifying unlabelled objects with the multi-label classifier built in step S3.
As a specific embodiment, in step S1 the part of the data is randomly extracted from the training data set as follows: the bagging method is used to sample the training data set with replacement, N examples are drawn at random, and repeated examples among the drawn data are deleted.
As a specific embodiment, the hierarchical tree in step S2 is built by the following steps:
S31, creating a root node that contains all of the labels of the data and all of the training data extracted in step S1;
S32, clustering the labels in the parent node of the hierarchical tree with a balanced k-means algorithm;
S33, creating as many child nodes as there are label clusters obtained in step S32 and assigning each cluster to a different child node; let L_c denote the label set of the c-th child node and μ_e the label set of data object e; if L_c ∩ μ_e ≠ ∅, data object e is assigned to child node c;
S34, converting the data in each child node from (x_e, Y_e) to (x_e, Z_e), where Y_e and Z_e are the label sets of data object e in the parent node and in the current node, respectively;
S35, training one classifier for each child node with a classification algorithm, the training data being the converted data of step S34;
S36, repeating steps S32-S35 until all data objects in a child node carry identical labels, or the data in the child node can no longer be divided by a classifier.
As a specific embodiment, the balanced k-means algorithm of step S32 comprises the following steps:
S41, randomly selecting k labels as initial cluster centres;
S42, for each remaining label, computing its distance to every cluster centre; if the cluster at the nearest centre contains fewer than ⌈L/k⌉ labels, the label is assigned to that cluster; otherwise the second nearest cluster is tried, and so on, until the label is assigned to a cluster, where L is the number of labels and k is the number of cluster centres (clusters); the distance between two labels is derived from their similarity S_ij, which is computed from the probability P(y_i, y_j) that labels y_i and y_j occur simultaneously and the probability P(y_i) that label y_i occurs; the larger S_ij, the more likely the two labels are to occur together and the more similar they are;
S43, after all labels have been assigned to a cluster, recomputing the centre of each cluster;
S44, repeating steps S42-S43 until the labels contained in every cluster no longer change, at which point the algorithm terminates.
As a specific embodiment, in step S4 the hierarchical random forest multi-label classifier classifies an unlabelled object as follows:
P(λ_1, λ_2, …, λ_L) = (HT_1(u) + HT_2(u) + … + HT_M(u)) / M
where HT_i(u) (i = 1, 2, …, M) is the prediction of the i-th hierarchical tree in the hierarchical random forest for the labels of the unlabelled data object u and is a 0-1 vector of length L; M is the number of hierarchical trees in the hierarchical random forest; P(λ_1, λ_2, …, λ_L) is the prediction of the hierarchical random forest classifier for the labels of the unlabelled data object u; L is the number of labels; and λ_i is computed as
λ_i = 1 if p_i ≥ λ, and λ_i = 0 if p_i < λ,
where λ is a preset threshold and p_i is the fraction of hierarchical trees in the forest that predict label y_i for the unlabelled data object u.
Compared with the prior art, the present invention has the following advantages and technical effects:
The invention exploits the basic observation that the labels of a data object are usually correlated: a hierarchical tree is built from the clustering result of the labels, and a classifier is trained for every node of the tree. Following the random forest idea, several such trees are combined into a hierarchical random forest, which accounts for the many possible label associations and averages out the classification errors of the individual hierarchical trees. The method improves both the speed and the accuracy of multi-label classification.
Brief description of the drawings
Fig. 1 is a flow chart of the multi-label classification method based on a hierarchical random forest of Embodiment 1 of the present invention.
Fig. 2 is a flow chart of building a hierarchical tree in Embodiment 1 of the present invention.
Detailed description of the invention
The present invention is described in further detail below with reference to the embodiments and the accompanying drawings, but the embodiments of the present invention are not limited thereto.
Embodiment 1:
As shown in Fig. 1, the multi-label classification method based on a hierarchical random forest of Embodiment 1 comprises the following steps:
S1, randomly extracting part of the data from a training data set;
S2, building a hierarchical tree from the data extracted in step S1;
S3, repeating steps S1-S2 to build a hierarchical random forest that serves as the multi-label classifier;
S4, classifying unlabelled objects with the multi-label classifier built in step S3.
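For illustration only, the following minimal Python sketch shows how steps S1 to S3 fit together; the helper functions draw_subset and build_hierarchical_tree are hypothetical placeholders standing in for the sampling and tree-building procedures described below, and their names are not taken from the patent.

```python
def train_hierarchical_random_forest(X, Y, num_trees, draw_subset, build_hierarchical_tree):
    """Steps S1-S3: for each tree, draw a random subset of the training data
    and build one hierarchical tree on it; the collection of trees is the
    hierarchical random forest used as the multi-label classifier."""
    forest = []
    for _ in range(num_trees):
        idx = draw_subset(len(X))                        # S1: random subset of example indices
        forest.append(build_hierarchical_tree([X[i] for i in idx],
                                              [Y[i] for i in idx]))  # S2: one hierarchical tree
    return forest                                        # S3: the hierarchical random forest
```

A sketch of the prediction step S4 appears after the description of step S4 below.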
The training data set is, in the classification research field, the set of data used to learn the classification model; the data may be, for example, medical data, Internet data, or bank data, represented as text, pictures, video, and so on.
The part of the data is randomly extracted from the training data set as follows: the bagging method is used to sample the training data set with replacement, and N examples are drawn at random (as an example, N is usually 2/3 of the size of the training data set); repeated examples among the drawn data are deleted, and the remaining data are used as the training data for building one hierarchical tree.
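A minimal sketch of this sampling step, assuming the 2/3 fraction mentioned above as a default (the function name and defaults are illustrative, not prescribed by the patent):

```python
import random

def draw_bootstrap_subset(num_examples, fraction=2/3, rng=random):
    """Bagging as described above: sample example indices with replacement,
    then delete repeated draws so each example appears at most once."""
    n_draws = int(round(fraction * num_examples))
    drawn = rng.choices(range(num_examples), k=n_draws)  # sample with replacement
    return sorted(set(drawn))                            # repeated data deleted
```

Passed as draw_subset to the driver sketched above, this reproduces the bagging behaviour of step S1.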
As shown in Fig. 2, the hierarchical tree of the multi-label classification method based on a hierarchical random forest of Embodiment 1 is built by the following steps:
S31, creating a root node that contains all of the labels of the data and all of the training data extracted in step S1;
S32, clustering the labels in the parent node of the hierarchical tree with a balanced k-means algorithm;
S33, creating as many child nodes as there are label clusters obtained in step S32 and assigning each cluster to a different child node; let L_c denote the label set of the c-th child node and μ_e the label set of data object e; if L_c ∩ μ_e ≠ ∅, data object e is assigned to child node c;
S34, converting the data in each child node from (x_e, Y_e) to (x_e, Z_e), where Y_e and Z_e are the label sets of data object e in the parent node and in the current node, respectively;
S35, training one classifier for each child node with a classification algorithm, the training data being the converted data of step S34; the classification algorithm here may be, for example, C4.5 or SVM;
S36, repeating steps S32-S35 until all data objects in a child node carry identical labels, or the data in the child node can no longer be divided by a classifier.
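For illustration only, the per-node splitting of steps S33 and S34 might be sketched as follows; cluster_labels stands in for the balanced k-means step described below, the per-child classifiers of step S35 are omitted, and all names are illustrative rather than taken from the patent.

```python
def split_node(X, Y, cluster_labels, k):
    """Steps S33-S34: cluster the parent node's labels into k clusters, create
    one child node per cluster, route each object to every child whose label
    set intersects the object's labels, and restrict each object's label set
    to the labels of that child."""
    parent_labels = sorted({lab for labs in Y for lab in labs})
    label_clusters = cluster_labels(parent_labels, Y, k)   # one label set L_c per child
    children = []
    for L_c in label_clusters:
        child_X, child_Z = [], []
        for x_e, Y_e in zip(X, Y):
            Z_e = set(Y_e) & set(L_c)        # labels of e that belong to this child
            if Z_e:                          # L_c ∩ μ_e ≠ ∅: e goes into child c
                child_X.append(x_e)
                child_Z.append(Z_e)
        children.append((set(L_c), child_X, child_Z))
    return children
```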
The balanced k-means algorithm comprises the following steps:
S41, randomly selecting k labels as initial cluster centres;
S42, for each remaining label, computing its distance to every cluster centre; if the cluster at the nearest centre contains fewer than ⌈L/k⌉ labels, the label is assigned to that cluster; otherwise the second nearest cluster is tried, and so on, until the label is assigned to a cluster, where L is the number of labels and k is the number of cluster centres (clusters); the distance between two labels is derived from their similarity S_ij, which is computed from the probability P(y_i, y_j) that labels y_i and y_j occur simultaneously and the probability P(y_i) that label y_i occurs; the larger S_ij, the more likely the two labels are to occur together and the more similar they are;
S43, after all labels have been assigned to a cluster, recomputing the centre of each cluster;
S44, repeating steps S42-S43 until the labels contained in every cluster no longer change, at which point the algorithm terminates.
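A minimal sketch of the balanced k-means of steps S41 to S44; since the exact similarity formula and the cluster-centre update are only described qualitatively above, the co-occurrence-based similarity and the medoid-style centre update used here are illustrative assumptions.

```python
import math
import random
from itertools import combinations

def label_similarity(Y, labels):
    """One plausible similarity S_ij: the co-occurrence probability of a label
    pair normalised by the product of the single-label probabilities
    (illustrative; the text above only states that S_ij grows with the
    probability of the two labels appearing together)."""
    n = len(Y)
    p = {l: sum(l in y for y in Y) / n for l in labels}
    S = {}
    for i, j in combinations(labels, 2):
        p_ij = sum(i in y and j in y for y in Y) / n
        S[(i, j)] = S[(j, i)] = p_ij / (p[i] * p[j] + 1e-12)
    return S

def balanced_kmeans(labels, Y, k, n_iter=20, rng=random):
    """Steps S41-S44: pick k labels as initial centres, assign every other
    label to the most similar centre whose cluster is not yet full
    (capacity ceil(L / k)), then recompute each centre as the cluster medoid."""
    cap = math.ceil(len(labels) / k)
    S = label_similarity(Y, labels)
    sim = lambda a, b: 1.0 if a == b else S.get((a, b), 0.0)
    centres = rng.sample(labels, k)                       # S41
    for _ in range(n_iter):
        clusters = [[c] for c in centres]
        for lab in labels:                                # S42
            if lab in centres:
                continue
            order = sorted(range(k), key=lambda c: -sim(lab, centres[c]))
            for c in order:                               # nearest centre first, then next nearest
                if len(clusters[c]) < cap:
                    clusters[c].append(lab)
                    break
        new_centres = [max(cl, key=lambda a: sum(sim(a, b) for b in cl))   # S43: medoid update
                       for cl in clusters]
        if new_centres == centres:                        # S44: clusters no longer change
            break
        centres = new_centres
    return clusters
```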
In step S4, the hierarchical random forest multi-label classifier classifies an unlabelled object as follows:
P(λ_1, λ_2, …, λ_L) = (HT_1(u) + HT_2(u) + … + HT_M(u)) / M
where HT_i(u) (i = 1, 2, …, M) is the prediction of the i-th hierarchical tree in the hierarchical random forest for the labels of the unlabelled data object u and is a 0-1 vector of length L; M is the number of hierarchical trees in the hierarchical random forest; P(λ_1, λ_2, …, λ_L) is the prediction of the hierarchical random forest classifier for the labels of the unlabelled data object u; L is the number of labels; and λ_i is computed as
λ_i = 1 if p_i ≥ λ, and λ_i = 0 if p_i < λ,
where λ is a preset threshold and p_i is the fraction of hierarchical trees in the forest that predict label y_i for the unlabelled data object u.
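For illustration only, the aggregation of step S4 can be written as the following sketch, assuming each tree returns its prediction HT_i(u) as a 0-1 list over the L labels; predict_with_tree is a hypothetical helper, not part of the patent.

```python
def forest_predict(forest, u, predict_with_tree, threshold=0.5):
    """P(λ_1, ..., λ_L) = (HT_1(u) + ... + HT_M(u)) / M, followed by the
    per-label thresholding λ_i = 1 if p_i >= λ else 0 described above."""
    votes = [predict_with_tree(tree, u) for tree in forest]   # each vote is a 0-1 vector of length L
    M, L = len(votes), len(votes[0])
    p = [sum(v[i] for v in votes) / M for i in range(L)]      # fraction of trees predicting label y_i
    return [1 if p_i >= threshold else 0 for p_i in p]        # λ_i
```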
The above embodiment follows the random forest idea to build a hierarchical random forest, fully considers the many possible label associations, and averages out the classification errors of the individual hierarchical trees, thereby improving both the speed and the accuracy of multi-label classification.
The above is only a preferred embodiment of the present patent, but the scope of protection of the present patent is not limited thereto. Any equivalent replacement or modification made, within the scope disclosed by the present patent, by a person familiar with the art according to the technical scheme and the inventive concept of the present patent falls within the scope of protection of the present patent.

Claims (5)

1. A multi-label classification method based on a hierarchical random forest, characterised in that it comprises the following steps:
S1, randomly extracting part of the data from a training data set;
S2, building a hierarchical tree from the data extracted in step S1;
S3, repeating steps S1-S2 to build a hierarchical random forest as the multi-label classifier, i.e. the hierarchical random forest classifier;
S4, classifying unlabelled objects with the multi-label classifier built in step S3.
2. The multi-label classification method based on a hierarchical random forest according to claim 1, characterised in that in step S1 the part of the data is randomly extracted from the training data set as follows: the bagging method is used to sample the training data set with replacement, N examples are drawn at random, and repeated examples among the drawn data are deleted.
3. The multi-label classification method based on a hierarchical random forest according to claim 1, characterised in that the hierarchical tree in step S2 is built by the following steps:
S31, creating a root node that contains the data sampled from the training data set by the method of step S1, together with all labels contained in the training data set;
S32, clustering the labels in the parent node of the hierarchical tree with a balanced k-means algorithm;
S33, creating as many child nodes as there are label clusters obtained in step S32 and assigning each cluster to a different child node; let L_c denote the label set of the c-th child node and μ_e the set of all labels of data object e; if L_c ∩ μ_e ≠ ∅, data object e is assigned to the c-th child node of the current node;
S34, converting the data in each child node from (x_e, Y_e) to (x_e, Z_e), where x_e represents object e, and Y_e and Z_e are the label sets of data object e in the parent node and in the current node, respectively;
S35, training one classifier for each child node with a classification algorithm, the training data being the converted data of step S34;
S36, repeating steps S32-S35 until all objects in a child node carry identical labels, or the data in the child node can no longer be divided by a classifier.
4. The multi-label classification method based on a hierarchical random forest according to claim 3, characterised in that the balanced k-means algorithm of step S32 comprises the following steps:
S41, randomly selecting k labels as initial cluster centres;
S42, for each remaining label, computing its distance to every cluster centre; if the cluster at the nearest centre contains fewer than ⌈L/k⌉ labels, the label is assigned to that cluster; otherwise the second nearest cluster is tried, and so on, until the label is assigned to a cluster, where L is the number of labels and k is the number of cluster centres;
S43, after all labels have been assigned to a cluster, recomputing the centre of each cluster;
S44, repeating steps S42-S43 until the cluster centres of all clusters no longer change.
5. The multi-label classification method based on a hierarchical random forest according to claim 1, characterised in that in step S4 the multi-label classifier classifies an unlabelled object according to
P(λ_1, λ_2, …, λ_L) = (HT_1(u) + HT_2(u) + … + HT_M(u)) / M
where HT_i(u) is the prediction of the i-th hierarchical tree in the hierarchical random forest for the labels of the unlabelled data object u and is a 0-1 vector of length L, i = 1, 2, …, M, and M is the number of hierarchical trees in the hierarchical random forest; P(λ_1, λ_2, …, λ_L) is the prediction of the hierarchical random forest classifier for the labels of the unlabelled data object u, L is the number of labels, and λ_i is computed as follows:
λ_i = 1 if p_i ≥ λ; λ_i = 0 if p_i < λ
where λ is a preset threshold and p_i is the fraction of hierarchical trees that predict label y_i for the unlabelled data object u.
CN201610171082.5A 2016-03-23 2016-03-23 Hierarchical random forest based multi-tag classification method Pending CN105868773A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610171082.5A CN105868773A (en) 2016-03-23 2016-03-23 Hierarchical random forest based multi-tag classification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610171082.5A CN105868773A (en) 2016-03-23 2016-03-23 Hierarchical random forest based multi-tag classification method

Publications (1)

Publication Number Publication Date
CN105868773A true CN105868773A (en) 2016-08-17

Family

ID=56625425

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610171082.5A Pending CN105868773A (en) 2016-03-23 2016-03-23 Hierarchical random forest based multi-tag classification method

Country Status (1)

Country Link
CN (1) CN105868773A (en)

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106875278B (en) * 2017-01-19 2020-11-03 浙江工商大学 Social network user image drawing method based on random forest
CN106875278A (en) * 2017-01-19 2017-06-20 浙江工商大学 Social network user portrait method based on random forest
CN107392176A (en) * 2017-08-10 2017-11-24 华南理工大学 A kind of high efficiency vehicle detection method based on kmeans
CN107392176B (en) * 2017-08-10 2020-05-22 华南理工大学 High-efficiency vehicle detection method based on kmeans
CN107577785A (en) * 2017-09-15 2018-01-12 南京大学 A kind of level multi-tag sorting technique suitable for law identification
CN107577785B (en) * 2017-09-15 2020-02-07 南京大学 Hierarchical multi-label classification method suitable for legal identification
CN109993391B (en) * 2017-12-31 2021-03-26 中国移动通信集团山西有限公司 Method, device, equipment and medium for dispatching network operation and maintenance task work order
CN109993391A (en) * 2017-12-31 2019-07-09 中国移动通信集团山西有限公司 Distributing method, device, equipment and the medium of network O&M task work order
CN110135185B (en) * 2018-02-08 2023-12-22 苹果公司 Machine learning privatization using generative antagonism network
CN110135185A (en) * 2018-02-08 2019-08-16 苹果公司 The machine learning of privatization is carried out using production confrontation network
CN109211814A (en) * 2018-10-29 2019-01-15 中国科学院南京土壤研究所 It is a kind of to be set a song to music the soil profile kind identification methods of face partition characteristics based on three-dimensional light
CN109492682A (en) * 2018-10-30 2019-03-19 桂林电子科技大学 A kind of multi-branched random forest data classification method
CN109886335A (en) * 2019-02-21 2019-06-14 厦门美图之家科技有限公司 Disaggregated model training method and device
CN109886335B (en) * 2019-02-21 2021-11-26 厦门美图之家科技有限公司 Classification model training method and device
CN109934489A (en) * 2019-03-12 2019-06-25 广东电网有限责任公司 A kind of status of electric power evaluation method
CN110347839B (en) * 2019-07-18 2021-07-16 湖南数定智能科技有限公司 Text classification method based on generative multi-task learning model
CN110347839A (en) * 2019-07-18 2019-10-18 湖南数定智能科技有限公司 A kind of file classification method based on production multi-task learning model
WO2021024080A1 (en) * 2019-08-05 2021-02-11 International Business Machines Corporation Active learning for data matching
GB2600369A (en) * 2019-08-05 2022-04-27 Ibm Active learning for data matching
US11409772B2 (en) 2019-08-05 2022-08-09 International Business Machines Corporation Active learning for data matching
US11663275B2 (en) 2019-08-05 2023-05-30 International Business Machines Corporation Method for dynamic data blocking in a database system
US20230195773A1 (en) * 2019-10-11 2023-06-22 Ping An Technology (Shenzhen) Co., Ltd. Text classification method, apparatus and computer-readable storage medium
CN112883189A (en) * 2021-01-26 2021-06-01 浙江香侬慧语科技有限责任公司 Text classification method and device based on label description, storage medium and equipment
CN117891411A (en) * 2024-03-14 2024-04-16 济宁蜗牛软件科技有限公司 Optimized storage method for massive archive data
CN117891411B (en) * 2024-03-14 2024-06-14 济宁蜗牛软件科技有限公司 Optimized storage method for massive archive data

Similar Documents

Publication Publication Date Title
CN105868773A (en) Hierarchical random forest based multi-tag classification method
CN106250412B (en) Knowledge mapping construction method based on the fusion of multi-source entity
CN106294593B (en) In conjunction with the Relation extraction method of subordinate clause grade remote supervisory and semi-supervised integrated study
WO2021068339A1 (en) Text classification method and device, and computer readable storage medium
CN107944559B (en) Method and system for automatically identifying entity relationship
CN104102626B (en) A kind of method for short text Semantic Similarity Measurement
CN109902159A A kind of intelligent O&M statement similarity matching process based on natural language processing
CN102289522B (en) Method of intelligently classifying texts
CN104574192B (en) Method and device for identifying same user in multiple social networks
CN109918532A (en) Image search method, device, equipment and computer readable storage medium
CN109543183A (en) Multi-tag entity-relation combined extraction method based on deep neural network and mark strategy
CN105469096A (en) Feature bag image retrieval method based on Hash binary code
CN102411611B (en) Instant interactive text oriented event identifying and tracking method
Lee Unsupervised and supervised learning to evaluate event relatedness based on content mining from social-media streams
CN113407660B (en) Unstructured text event extraction method
CN109840322A (en) It is a kind of based on intensified learning cloze test type reading understand analysis model and method
CN115393692A (en) Generation formula pre-training language model-based association text-to-image generation method
CN105718532A (en) Cross-media sequencing method based on multi-depth network structure
CN107992890B (en) A kind of multi-angle of view classifier and design method based on local feature
CN111027595A (en) Double-stage semantic word vector generation method
CN104331523B (en) A kind of question sentence search method based on conceptual object model
CN103412878B (en) Document theme partitioning method based on domain knowledge map community structure
CN109522416A (en) A kind of construction method of Financial Risk Control knowledge mapping
CN109740151A (en) Public security notes name entity recognition method based on iteration expansion convolutional neural networks
Wang et al. A deep clustering via automatic feature embedded learning for human activity recognition

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20160817