CN106611187A - Multi-dimensional scaling heterogeneous cost sensitive decision-making tree constructing method

Multi-dimensional scaling heterogeneous cost sensitive decision-making tree constructing method

Info

Publication number
CN106611187A
Authority
CN
China
Prior art keywords
cost
attribute
function
misclassification
node
Prior art date
Legal status
Pending
Application number
CN201610445671.8A
Other languages
Chinese (zh)
Inventor
金平艳
胡成华
Current Assignee
Sichuan Yonglian Information Technology Co Ltd
Original Assignee
Sichuan Yonglian Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Sichuan Yonglian Information Technology Co Ltd
Priority to CN201610445671.8A
Publication of CN106611187A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a multi-dimensional scale heterogeneous cost-sensitive decision tree construction method. According to the objective function f(Si) of each attribute Si, the splitting attribute splitSi is selected from the candidate attributes; a branch satisfying the condition splitS = splitSi is extended from the node; at the same time, leaf nodes are cut by a pre-pruning technique, so that pruning is carried out while the tree is built. Tree building stops when either of the following two conditions is satisfied: (1) let Yi be the sample set in the training data set with splitS = splitSi; if Yi is empty, a leaf node is added and labeled with the most common class in the training data set; (2) all examples in the node belong to the same class. The method improves classification accuracy, enlarges the range of application, avoids the risk that the information of a splitting attribute is ignored because its value is too small, reduces the risk rate, and increases the speed of classification learning.

Description

A multi-dimensional scale heterogeneous cost-sensitive decision tree construction method
Technical field
The present invention relates to the fields of machine learning, artificial intelligence, and data mining.
Background technology
Decision tree learning is an important and active research topic in data mining and machine learning. Classic algorithms such as ID3, CART, and C4.5 have been widely and successfully applied to practical problems; they mainly study classification accuracy, and the decision trees they generate are highly accurate. Among existing algorithms, some consider only test cost and others consider only misclassification cost; these are called one-dimensional-scale cost-sensitive methods, and the decision trees they build cannot handle multiple cost types in real cases. For example, in cost-sensitive learning one must consider not only the influence of test cost and misclassification cost on classification, but also the influence of waiting-time cost on classification prediction: a patient may face a test-cost constraint as well as a waiting-time constraint, and because different classes of demanders possess different resources, the acceptable waiting time also differs. The problem that the various costs have different measurement units must therefore be considered. In addition, during decision tree construction a pre-pruning technique is used to solve the over-fitting problem of the decision tree. To meet these demands, the present invention proposes, on the basis of the earlier one- and two-dimensional-scale cost methods, a multi-dimensional scale heterogeneous cost-sensitive decision tree construction method.
Content of the invention
To solve the problem of building a multi-dimensional-scale decision tree that simultaneously considers test cost, misclassification cost, and waiting-time cost as influencing factors, and to account for the problem that the various costs have different measurement units, a multi-dimensional scale heterogeneous cost-sensitive decision tree construction method is proposed.
To solve the above problems, the present invention proposes the following technical scheme:
A multi-dimensional scale heterogeneous cost-sensitive decision tree construction method comprises the following steps:
Step 1: Suppose the training set contains X samples and n attributes, i.e. S = (S1, S2, ..., Sn); each splitting attribute Si corresponds to m classes L, where Lr ∈ (L1, L2, ..., Lm), i ∈ (1, 2, ..., n), r ∈ (1, 2, ..., m). Users in the related field set the misclassification cost matrix C, the test cost costi of attribute Si, the resource regulatory factor α, and the relative waiting-time cost wc(Si).
Step 2: Create the root node G.
Step 3: If the training data set is empty, return node G and mark it as failure.
Step 4: If all records in the training data set belong to the same class, mark node G with that class.
Step 5: If the candidate attribute set is empty, return G as a leaf node labeled with the most common class in the training data set.
Step 6: According to the objective function f(Si) of attribute Si, select splitSi from the candidate attributes.
Objective function f(Si): it combines averagegini(Si), the information-purity function, with D(Si), the multiple-cost validity function. The candidate attribute splitSi that maximizes f(Si) is selected, and node G is marked accordingly.
When several attributes have equal objective function values f(Si), ties are broken by selecting again according to the following priority:
(1) larger Dmc(Si);
(2) smaller ZTC(Si).
Step 7: Mark node G with the attribute splitSi.
Step 8: Extend from the node the branch satisfying the condition splitS = splitSi, while applying the pre-pruning technique to cut leaf nodes, so that pruning proceeds while the tree is built. If either of the following two conditions is met, tree building stops:
8.1 Let Yi be the sample set in the training data set with splitS = splitSi; if Yi is empty, add a leaf node labeled with the most common class in the training data set.
8.2 All examples in the node belong to the same class.
Step 9: If neither condition 8.1 nor 8.2 holds, recursively call Steps 6 to 8.
Step 10: Update the training data set and save the new sample data.
The present invention has the following advantages:
1. The constructed decision tree has better classification accuracy and stronger classification ability, and avoids treating a rare class as an ordinary class when one is present.
2. Multiple cost influencing factors are considered, so the generated decision tree model has a wider range of application and better matches practical demands.
3. In the decision tree building process, the risk that the information of a splitting attribute is ignored because its value is too small is avoided.
4. During tree building, using validity to measure the total of test cost and waiting-time cost reduces the misclassification cost to the greatest extent, and resolves the irrationality of treating misclassification cost, test cost, and waiting-time cost as if they shared the same measurement unit; the resulting decision tree has high classification precision while reducing misclassification cost, test cost, and waiting-time cost.
5. Applying the pre-pruning technique to the decision tree improves the speed of classification learning.
Description of the drawings
Fig. 1 is the flow chart of the multi-dimensional scale heterogeneous cost-sensitive decision tree construction method
Specific embodiment
To solve the problem of building a multi-dimensional-scale decision tree that simultaneously considers test cost, misclassification cost, and waiting-time cost as influencing factors, to account for the problem that the various costs have different measurement units, and to make the resulting decision tree better avoid the over-fitting problem, the present invention is described in detail below with reference to Fig. 1. The specific implementation steps are as follows:
Step 1: Suppose the training set contains X samples and n attributes, i.e. S = (S1, S2, ..., Sn); each splitting attribute Si corresponds to m classes L, where Lr ∈ (L1, L2, ..., Lm), i ∈ (1, 2, ..., n), r ∈ (1, 2, ..., m). Users in the related field set the misclassification cost matrix C, the test cost costi of attribute Si, the resource regulatory factor α, and the relative waiting-time cost wc(Si).
1) The misclassification cost matrix C described in step 1 is set as follows:
Setting of the misclassification cost matrix C by users in the related field:
If the number of class labels is m, the cost matrix of the data is the m × m square matrix C = (cij), where cij represents the cost of classifying data of the j-th class into the i-th class; if i = j the classification is correct and cij = 0, otherwise it is a misclassification and cij ≠ 0, its value given by users in the related field; here i, j ∈ (1, 2, ..., m).
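As a minimal illustration of such a matrix (the Python sketch below and its values are illustrative assumptions for a hypothetical 3-class problem, not part of the specification):

```python
import numpy as np

# Hypothetical 3-class misclassification cost matrix C (values chosen for
# illustration; in the method they are supplied by users in the related
# field). C[i][j] is the cost c_ij of classifying a sample of the j-th
# class into the i-th class; the diagonal is 0 because a correct
# classification incurs no cost.
C = np.array([
    [0.0, 5.0, 2.0],
    [1.0, 0.0, 3.0],
    [4.0, 6.0, 0.0],
])

assert (np.diag(C) == 0).all(), "c_ij = 0 whenever i = j"
```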
Step 2: Create the root node G.
Step 3: If the training data set is empty, return node G and mark it as failure.
Step 4: If all records in the training data set belong to the same class, mark node G with that class.
Step 5: If the candidate attribute set is empty, return G as a leaf node labeled with the most common class in the training data set.
Step 6: According to the objective function f(Si) of attribute Si, select splitSi from the candidate attributes.
Objective function f(Si): averagegini(Si) is the information-purity function and D(Si) is the cost validity function.
The cost validity function D(Si) is built from Dmc(Si), the misclassification-cost reduction function, and ZTC(Si), the total cost function of test cost and relative waiting-time cost.
The candidate attribute splitSi that maximizes the objective function f(Si) is selected, and node G is marked accordingly.
2) To solve the objective function f(Si) in step 6, one must first solve averagegini(Si), the information-purity function, and D(Si), the cost validity function. The concrete solution procedure is as follows:
2.1) The detailed procedure for computing averagegini(Si), the information-purity function, is as follows:
The Gini index is an impurity-based splitting measure. The Gini index of a partition Yi, written gini(Yi), is defined as:
gini(Yi) = 1 − Σr p(Lr/Yi)²
where p(Lr/Yi) is the relative probability of class Lr within the partition Yi. When gini(Yi) = 0, all records at this node belong to the same class; a leaf node is added, i.e. the information purity is highest. Conversely, when gini(Yi) is largest, the useful information obtained is smallest, and the next attribute is taken as candidate according to the objective function f(Si).
From gini(Yi), averagegini(Si) is obtained as the sample-weighted average over the partitions:
averagegini(Si) = Σk (|Yk| / X) · gini(Yk)
Here attribute S has j attribute values, i.e. the partitions are (Y1, Y2, ..., Yj), k ∈ (1, 2, ..., j).
Effect of the information-purity function averagegini(Si): it improves the classification precision of the decision tree.
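These two functions can be sketched as follows (a minimal sketch; the sample-weighted form of averagegini(Si) is the reading given above, and all function names are illustrative):

```python
from collections import Counter

def gini(partition_labels):
    """gini(Y_i) = 1 - sum_r p(L_r|Y_i)^2 for one partition Y_i."""
    total = len(partition_labels)
    if total == 0:
        return 0.0
    return 1.0 - sum((n / total) ** 2 for n in Counter(partition_labels).values())

def average_gini(partitions):
    """averagegini(S_i): Gini values of the partitions (Y_1, ..., Y_j)
    induced by the j values of attribute S_i, averaged with sample-count
    weights; 0 means every partition is pure."""
    X = sum(len(p) for p in partitions)
    return sum(len(p) / X * gini(p) for p in partitions)

# Example: an attribute with two values splits the labels as below.
print(average_gini([["a", "a", "b"], ["b", "b"]]))  # ≈ 0.2667
```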
2.2) Solving D(Si), the cost validity function:
To solve D(Si), one must first solve Dmc(Si), the misclassification-cost reduction function, and ZTC(Si), the total cost function of test cost and relative waiting-time cost.
2.2.1) The detailed procedure for solving Dmc(Si), the misclassification-cost reduction function, is as follows:
If the class label La predicted for an example is identical to its true class label Lb, the classification is correct and the misclassification cost is C(La, Lb) = 0; if La ≠ Lb, then C(La, Lb) ≠ 0. During classification the real label of an example is generally unknown, so the value of the misclassification cost is replaced here by its expectation Emc; that is, the expected misclassification cost of predicting the class label of an example as La is:
Emc(Si, La) = Σ Lb∈L p(Lb/Si) · C(La, Lb)
where L is the set of all class labels in the data set, p(Lb/Si) is the probability that the currently selected attribute Si contains class Lb, and C(La, Lb) is the cost of wrongly classifying class Lb as class La.
The selection of the splitting attribute should take the greatest reduction of misclassification cost as its basic principle. Before any attribute Si is selected there is a total misclassification cost mc; selecting any attribute for testing may reduce some misclassification cost. The misclassification-cost reduction function is therefore constructed as:
Dmc(Si) = mc − Emc(Si, La)
mc is the sum of all misclassification costs before the splitting attribute Si is selected, which is easy to obtain from the misclassification cost matrix set by the user.
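A sketch of Emc and Dmc under the definitions above (which label La is predicted is not fixed by the text; this sketch assumes the cost-minimising label, as noted in the comments):

```python
def emc(p, C, a):
    """Emc(S_i, L_a) = sum_b p(L_b|S_i) * C(L_a, L_b): expected cost of
    predicting label index a, given class probabilities p[b] on the
    current partition and misclassification cost matrix C."""
    return sum(p[b] * C[a][b] for b in range(len(p)))

def dmc(mc, p, C):
    """Dmc(S_i) = mc - Emc(S_i, L_a). The sketch takes the label L_a
    that minimises the expected cost (an assumption)."""
    return mc - min(emc(p, C, a) for a in range(len(C)))

p = [0.6, 0.3, 0.1]                    # p(L_b|S_i) on the partition
C = [[0, 5, 2], [1, 0, 3], [4, 6, 0]]  # same convention as above
print(dmc(mc=10.0, p=p, C=C))          # 9.1
```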
2.2.2) The detailed procedure for solving ZTC(Si), the total cost function of test cost and relative waiting-time cost, is as follows:
ZTC(Si) = TC(Si) + α · wc(Si)
where α is a regulatory factor that differs with the available resources: the more resources, the larger α, and vice versa. TC(Si) is the attribute test cost function, and wc(Si) is the relative waiting-time cost function, which is determined by an expert.
The attribute test cost TC(Si) is set as follows:
TC(Si) = 1 + costi
where costi is the test cost of attribute Si, specified by the user.
The relative waiting-time cost wc(Si) is introduced in detail below:
Waiting-time cost is related to time; that is, these time-sensitive costs can be described numerically. If a result can be obtained at once, the waiting-time cost is 0; if a result takes several days, a numerical value is determined by the corresponding expert. It is further stipulated that if the next test can only be carried out after the current test result is available, then even if the waiting time is short, such as half a day or a day, the waiting-time cost is set to a very large constant, i.e. m → ∞.
The waiting time is also related to local resources, so time cost and resource-constraint cost are considered together.
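A minimal sketch of ZTC(Si) under these definitions (function and parameter names are illustrative):

```python
import math

BLOCKING_WAIT = math.inf  # stands in for the "very large constant, m -> infinity"

def ztc(test_cost, wait_cost, alpha):
    """ZTC(S_i) = TC(S_i) + alpha * wc(S_i), with TC(S_i) = 1 + cost_i.
    wait_cost is 0 when the result is immediate, an expert-assigned value
    when it takes days, and BLOCKING_WAIT when the next test must wait
    for this result."""
    return (1.0 + test_cost) + alpha * wait_cost

print(ztc(test_cost=2.0, wait_cost=0.5, alpha=1.5))  # 3.75
```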
When several attributes have equal objective function values f(Si), ties are broken by selecting again according to the following priority:
(1) larger Dmc(Si);
(2) smaller ZTC(Si).
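A sketch of the selection rule with this tie-breaking priority (the tuple representation of the candidates is illustrative):

```python
def choose_split(candidates):
    """Pick the attribute maximising f(S_i); equal f values are broken by
    larger Dmc(S_i), then by smaller ZTC(S_i). Each candidate is a tuple
    (name, f, dmc, ztc); the key tuple below encodes exactly that
    priority, since max() compares tuples lexicographically."""
    return max(candidates, key=lambda c: (c[1], c[2], -c[3]))[0]

# Both attributes tie on f and Dmc; S2 wins on the smaller ZTC.
print(choose_split([("S1", 0.8, 3.0, 2.5), ("S2", 0.8, 3.0, 1.5)]))  # S2
```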
Step 7: Mark node G with the attribute splitSi.
Step 8: Extend from the node the branch satisfying the condition splitS = splitSi, while applying the pre-pruning technique to cut leaf nodes, so that pruning proceeds while the tree is built. If either of the following two conditions is met, tree building stops:
8.1 Let Yi be the sample set in the training data set with splitS = splitSi; if Yi is empty, add a leaf node labeled with the most common class in the training data set.
8.2 All examples in the node belong to the same class.
3) The decision condition of the pre-pruning technique described in step 8 is as follows:
Let XLr be the number of samples of leaf-node class Lr and X the total number of samples in the training set; p is an appropriate threshold set by the user on the minimum percentage of the number of samples in the training set. The leaf is cut when XLr / X < p; the pruning condition must first meet the user's specified requirement.
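A sketch of this pre-pruning test (the direction of the comparison, cutting when the share falls below p, is the reading given above; the threshold value is hypothetical):

```python
def should_prune(leaf_class_count, X, p):
    """Pre-pruning test: cut the leaf when its class's share of the X
    training samples falls below the user threshold p."""
    return leaf_class_count / X < p

print(should_prune(leaf_class_count=3, X=1000, p=0.01))  # True: 0.3% < 1%
```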
Step 9: If neither condition 8.1 nor 8.2 holds, recursively call Steps 6 to 8.
Step 10: Update the training data set and save the new sample data.
The pseudo-code calculation process of the multi-dimensional scale heterogeneous cost-sensitive decision tree construction method is as follows:
Input: a training set of X samples, the misclassification cost matrix C, the test cost costi of attribute Si, the resource regulatory factor α, and the relative waiting-time cost wc(Si).
Output: a multi-dimensional scale heterogeneous cost-sensitive decision tree.
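A minimal runnable sketch of the calculation process of Steps 2 to 10 (the Leaf/Node classes, the helper names, and the stand-in score function are all illustrative assumptions, not the specification's pseudo-code):

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Leaf:
    label: object

@dataclass
class Node:
    attribute: str
    children: dict = field(default_factory=dict)

def majority(labels):
    return Counter(labels).most_common(1)[0][0]

def partition(data, attr):
    """Group (features, label) rows by their value of `attr` (Step 8)."""
    groups = {}
    for row, label in data:
        groups.setdefault(row[attr], []).append((row, label))
    return groups

def build_tree(data, attributes, score):
    """Steps 2-10 as a recursive sketch. `score(data, attr)` stands in for
    the full objective f(S_i) with its Dmc/ZTC tie-breaking; only the
    control flow here follows the method's steps."""
    if not data:                               # Step 3: empty training set
        return Leaf(label=None)
    labels = [lbl for _, lbl in data]
    if len(set(labels)) == 1:                  # Step 4 / stop condition 8.2
        return Leaf(label=labels[0])
    if not attributes:                         # Step 5: no candidates left
        return Leaf(label=majority(labels))
    best = max(attributes, key=lambda a: score(data, a))  # Step 6
    node = Node(attribute=best)                # Step 7: mark node with splitS_i
    # Step 8: one branch per value. An empty Y_i (condition 8.1) cannot
    # arise here, because partition() only yields values seen in the data.
    for value, subset in partition(data, best).items():
        node.children[value] = build_tree(subset, attributes - {best}, score)
    return node

# Toy run with a purity-only stand-in for f(S_i): score = number of groups.
data = [({"color": "r", "size": "s"}, "yes"),
        ({"color": "r", "size": "s"}, "yes"),
        ({"color": "b", "size": "s"}, "no")]
tree = build_tree(data, {"color", "size"}, lambda d, a: len(partition(d, a)))
print(tree.attribute)  # "color"
```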

Claims (3)

1. A multi-dimensional scale heterogeneous cost-sensitive decision tree construction method, relating to the fields of machine learning, artificial intelligence, and data mining, characterized by comprising the following steps:
Step 1: Suppose the training set contains X samples and n attributes, i.e. S = (S1, S2, ..., Sn); each splitting attribute Si corresponds to m classes L, where Lr ∈ (L1, L2, ..., Lm), i ∈ (1, 2, ..., n), r ∈ (1, 2, ..., m). Users in the related field set the misclassification cost matrix C, the test cost costi of attribute Si, the resource regulatory factor α, and the relative waiting-time cost wc(Si).
1) The misclassification cost matrix C described in step 1 is set as follows:
Setting of the misclassification cost matrix C by users in the related field:
If the number of class labels is m, the cost matrix of the data is the m × m square matrix C = (cij);
where cij represents the cost of classifying data of the j-th class into the i-th class; if i = j the classification is correct and cij = 0, otherwise it is a misclassification and cij ≠ 0, its value given by users in the related field; here i, j ∈ (1, 2, ..., m).
Step 2: Create the root node G.
Step 3: If the training data set is empty, return node G and mark it as failure.
Step 4: If all records in the training data set belong to the same class, mark node G with that class.
Step 5: If the candidate attribute set is empty, return G as a leaf node labeled with the most common class in the training data set.
Step 6: According to the objective function f(Si) of attribute Si, select splitSi from the candidate attributes.
Step 7: Mark node G with the attribute splitSi.
Step 8: Extend from the node the branch satisfying the condition splitS = splitSi, while applying the pre-pruning technique to cut leaf nodes, so that pruning proceeds while the tree is built; if either of the following two conditions is met, tree building stops:
8.1 Let Yi be the sample set in the training data set with splitS = splitSi; if Yi is empty, add a leaf node labeled with the most common class in the training data set.
8.2 All examples in the node belong to the same class.
Step 9: If neither condition 8.1 nor 8.2 holds, recursively call Steps 6 to 8.
Step 10: Update the training data set and save the new sample data.
2. The multi-dimensional scale heterogeneous cost-sensitive decision tree construction method according to claim 1, characterized in that the calculation process involved in step 6 is as follows:
Step 6: According to the objective function f(Si) of attribute Si, select splitSi from the candidate attributes.
Objective function f(Si): averagegini(Si) is the information-purity function and D(Si) is the cost validity function;
the cost validity function D(Si) is built from Dmc(Si), the misclassification-cost reduction function, and ZTC(Si), the total cost function of test cost and relative waiting-time cost;
the candidate attribute splitSi that maximizes the objective function f(Si) is selected, and node G is marked accordingly.
2) To solve the objective function f(Si), one must first solve averagegini(Si), the information-purity function, and D(Si), the cost validity function; the concrete solution procedure is as follows:
2.1) The detailed procedure for computing averagegini(Si), the information-purity function, is as follows:
The Gini index is an impurity-based splitting measure; the Gini index of a partition Yi, written gini(Yi), is defined as:
gini(Yi) = 1 − Σr p(Lr/Yi)²
where p(Lr/Yi) is the relative probability of class Lr within the partition Yi. When gini(Yi) = 0, all records at this node belong to the same class; a leaf node is added, i.e. the information purity is highest.
Conversely, when gini(Yi) is largest, the useful information obtained is smallest, and the next attribute is taken as candidate according to the objective function f(Si).
From gini(Yi), averagegini(Si) is obtained as the sample-weighted average averagegini(Si) = Σk (|Yk| / X) · gini(Yk);
here attribute S has j attribute values, i.e. the partitions are (Y1, Y2, ..., Yj).
Effect of the information-purity function averagegini(Si): it improves the classification precision of the decision tree.
2.2) Solving D(Si), the cost validity function:
To solve D(Si), one must first solve Dmc(Si), the misclassification-cost reduction function, and ZTC(Si), the total cost function of test cost and relative waiting-time cost.
2.2.1) The detailed procedure for solving Dmc(Si), the misclassification-cost reduction function, is as follows:
If the class label La predicted for an example is identical to its true class label Lb, the classification is correct and the misclassification cost is C(La, Lb) = 0; if La ≠ Lb, then C(La, Lb) ≠ 0. During classification the real label of an example is generally unknown, so the value of the misclassification cost is replaced here by its expectation Emc; that is, the expected misclassification cost of predicting the class label of an example as La is:
Emc(Si, La) = Σ Lb∈L p(Lb/Si) · C(La, Lb)
where L is the set of all class labels in the data set, p(Lb/Si) is the probability that the currently selected attribute Si contains class Lb, and C(La, Lb) is the cost of wrongly classifying class Lb as class La.
The selection of the splitting attribute should take the greatest reduction of misclassification cost as its basic principle: before any attribute Si is selected there is a total misclassification cost mc, and selecting any attribute for testing may reduce some misclassification cost; the misclassification-cost reduction function is therefore constructed as Dmc(Si) = mc − Emc(Si, La).
mc is the sum of all misclassification costs before the splitting attribute Si is selected, which is easy to obtain from the misclassification cost matrix set by the user.
2.2.2) The detailed procedure for solving ZTC(Si), the total cost function of test cost and relative waiting-time cost, is as follows:
ZTC(Si) = TC(Si) + α · wc(Si)
where α is a regulatory factor that differs with the available resources: the more resources, the larger α, and vice versa; TC(Si) is the attribute test cost function and wc(Si) is the relative waiting-time cost function, determined by an expert.
The attribute test cost is set as TC(Si) = 1 + costi,
where costi is the test cost of attribute Si, specified by the user.
The relative waiting-time cost wc(Si) is introduced in detail below:
Waiting-time cost is related to time; that is, these time-sensitive costs can be described numerically. If a result can be obtained at once, the waiting-time cost is 0; if a result takes several days, a numerical value is determined by the corresponding expert. It is further stipulated that if the next test can only be carried out after the current test result is available, then even if the waiting time is short, such as half a day or a day, the waiting-time cost is set to a very large constant, i.e. m → ∞.
The waiting time is also related to local resources, so time cost and resource-constraint cost are considered together.
When several attributes have equal objective function values f(Si), ties are broken by selecting again according to the following priority:
(1) larger Dmc(Si);
(2) smaller ZTC(Si).
3. The multi-dimensional scale heterogeneous cost-sensitive decision tree construction method according to claim 1, characterized in that the calculation process involved in step 8 is as follows:
Step 8: Extend from the node the branch satisfying the condition splitS = splitSi, while applying the pre-pruning technique to cut leaf nodes, so that pruning proceeds while the tree is built; if either of the following two conditions is met, tree building stops:
8.1 Let Yi be the sample set in the training data set with splitS = splitSi; if Yi is empty, add a leaf node labeled with the most common class in the training data set.
8.2 All examples in the node belong to the same class.
3) The decision condition of the pre-pruning technique described in step 8 is as follows:
Let XLr be the number of samples of leaf-node class Lr and X the total number of samples in the training set; p is an appropriate threshold set by the user on the minimum percentage of the number of samples; the leaf is cut when XLr / X < p, so the pruning condition must first meet the user's specified requirement.
CN201610445671.8A 2016-06-17 2016-06-17 Multi-dimensional scaling heterogeneous cost sensitive decision-making tree constructing method Pending CN106611187A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610445671.8A CN106611187A (en) 2016-06-17 2016-06-17 Multi-dimensional scaling heterogeneous cost sensitive decision-making tree constructing method


Publications (1)

Publication Number Publication Date
CN106611187A (en) 2017-05-03

Family

ID=58614819

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610445671.8A Pending CN106611187A (en) 2016-06-17 2016-06-17 Multi-dimensional scaling heterogeneous cost sensitive decision-making tree constructing method

Country Status (1)

Country Link
CN (1) CN106611187A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110085324A (en) * 2019-04-25 2019-08-02 深圳市华嘉生物智能科技有限公司 A kind of method of multiple existence end results Conjoint Analysis
CN110085324B (en) * 2019-04-25 2023-09-08 深圳市华嘉生物智能科技有限公司 Multiple survival terminal result joint analysis method
CN115146725A (en) * 2022-06-30 2022-10-04 北京百度网讯科技有限公司 Determination method of object classification mode, object classification method, device and equipment

Similar Documents

Publication Publication Date Title
Katarya et al. Impact of machine learning techniques in precision agriculture
CN110135231A (en) Animal face recognition methods, device, computer equipment and storage medium
CN110362723A (en) A kind of topic character representation method, apparatus and storage medium
CN104966105A (en) Robust machine error retrieving method and system
CN103324954A (en) Image classification method based on tree structure and system using same
CA3086699C (en) Metagenomics for microbiomes
CN110263979A (en) Method and device based on intensified learning model prediction sample label
CN105095494A (en) Method for testing categorical data set
CN106611188A (en) Standardized multi-dimensional scaling cost sensitive decision-making tree constructing method
CN106611036A (en) Improved multidimensional scaling heterogeneous cost-sensitive decision tree building method
CN111914159A (en) Information recommendation method and terminal
CN106611189A (en) Method for constructing integrated classifier of standardized multi-dimensional cost sensitive decision-making tree
CN106611187A (en) Multi-dimensional scaling heterogeneous cost sensitive decision-making tree constructing method
Jie RETRACTED ARTICLE: Precision and intelligent agricultural decision support system based on big data analysis
CN106611180A (en) Decision tree classifier construction method based on test cost
CN111340637B (en) Medical insurance intelligent auditing system based on machine learning feedback rule enhancement
CN106611183A (en) Method for constructing Gini coefficient and misclassification cost-sensitive decision tree
CN106611181A (en) Method for constructing cost-sensitive two-dimensional decision tree
Swaminathan et al. Meta learning-based dynamic ensemble model for crop selection
CN112149623B (en) Self-adaptive multi-sensor information fusion system, method and storage medium
Parmar et al. Crop Yield Prediction based on Feature Selection and Machine Learners: A Review
CN110852094B (en) Method, apparatus and computer readable storage medium for searching target
CN106611185A (en) Multi-standard misclassification cost sensitive decision tree construction method
CN113706285A (en) Credit card fraud detection method
Shoaib et al. Revolutionizing global food security: empowering resilience through integrated AI foundation models and data-driven solutions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20170503)