CN107871139A - Data dimension reduction method based on an improved neighborhood preserving embedding algorithm - Google Patents


Publication number
CN107871139A
CN107871139A
Authority
CN
China
Prior art keywords
matrix
data
point
points
calculating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711058157.XA
Other languages
Chinese (zh)
Inventor
董渭清
李玥
郭桑
董文鑫
陈建友
仓剑
袁泉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN201711058157.XA priority Critical patent/CN107871139A/en
Publication of CN107871139A publication Critical patent/CN107871139A/en
Pending legal-status Critical Current

Links

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/21 — Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 — Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2132 — Feature extraction based on discrimination criteria, e.g. discriminant analysis
    • G06F18/21322 — Rendering the within-class scatter matrix non-singular
    • G06F18/21324 — Rendering the within-class scatter matrix non-singular involving projections, e.g. Fisherface techniques

Abstract

The invention discloses a data dimension reduction method based on an improved neighborhood preserving embedding (NPE) algorithm. An adjacency graph is first constructed: the neighbors of each sample point are computed using geodesic distances, forming an adjacency matrix. Reconstruction weights are then computed so that each sampling point is represented by its neighbors. Finally, a projection matrix is computed, with the transformation obtained from the reconstruction weight matrix. By replacing the Euclidean distance with the geodesic distance, the method better preserves the local structure information exploited by the NPE algorithm and improves its ability to handle manifold-structured data.

Description

Data dimension reduction method of neighborhood preserving embedding improved algorithm
Technical Field
The invention belongs to the field of big data processing, relates to a data dimension reduction method, and particularly relates to a data dimension reduction method of a neighborhood preserving embedding improved algorithm.
Background
In the big-data era, the continuous growth of data volume has led to an information explosion, and data increasingly exhibit high dimensionality. Because of the structural complexity of high-dimensional data, techniques used in the real world generally cannot process them directly. In data mining, for example, the main goal is to use efficient algorithms to uncover the information hidden behind data and convert it into knowledge that guides people toward sound decisions. Data dimension reduction techniques were developed to process such high-dimensional data properly. Dimension reduction projects data from a high-dimensional feature space to a low-dimensional feature space while largely preserving the essential structure of the data; reducing the dimensionality also facilitates data mining. Dimension reduction methods fall into two classes: linear and nonlinear.
To explore the nonlinear structure contained in a data set effectively, many nonlinear dimension reduction techniques have been developed, such as artificial neural networks, genetic algorithms, and manifold learning. Nonlinear manifold algorithms generally perform well on training samples, but because they lack an explicit projection matrix they cannot extract features from newly added samples, so no dimension reduction is achieved on test samples. To solve this problem, linearized versions of classical manifold learning algorithms were proposed. Neighborhood preserving embedding (NPE), for example, obtains a projection matrix from local representations and projects high-dimensional manifold data into a low-dimensional manifold space. However, such local representations usually assume that the local manifold space is linear, which can cause large fluctuations in the dimension reduction results.
Disclosure of Invention
To address this limitation of the neighborhood preserving embedding (NPE) algorithm, the invention provides a geodesic-based neighborhood preserving embedding algorithm that describes local information more accurately. The selection of neighboring points is thereby optimized, the reconstruction error is reduced while local information is better preserved, and data dimension reduction is finally achieved.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
a data dimension reduction method of a neighborhood preserving embedding improved algorithm comprises the following steps:
1) Construct an adjacency graph: compute the distance between each sampling point and every other point using the geodesic distance to form a distance matrix, then select the points with the shortest distances to form the adjacency matrix;
2) Compute the reconstruction weights of the data from the geodesic distances in the adjacency matrix: to minimize the loss after projection, the reconstruction weight of each sample point is computed from its contribution rate in the adjacency graph, and each sampling point is represented by its neighbors in the adjacency matrix, yielding the reconstruction weight matrix;
3) Compute the projection matrix of the data: substitute the reconstruction weight matrix into the eigenvector equation to obtain the transformation matrix of the projection, completing the dimension reduction.
Further, for sampling points i and j in the adjacency graph of step 1): if the two sampling points belong to the same category, a connecting edge exists between them, and their distance is the geodesic distance d_G(i, j) = d_X(i, j);

If the two sampling points do not belong to the same category, no connecting edge exists between them; first set d_G(i, j) = ∞, then for all sampling points l = 1, 2, 3, …, N update d_G(i, j) by the shortest-path relaxation:

d_G(i, j) = min{ d_G(i, j), d_G(i, l) + d_G(l, j) }.
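The update rule above is the Floyd–Warshall shortest-path relaxation. A minimal sketch in Python (the function name and the choice of building the graph from k nearest Euclidean neighbours are illustrative assumptions, not part of the patent):

```python
import numpy as np

def geodesic_distances(X, k=12):
    """Approximate geodesic distances on a k-nearest-neighbour graph."""
    n = X.shape[0]
    diff = X[:, None, :] - X[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))      # pairwise Euclidean distances
    dG = np.full((n, n), np.inf)               # d_G(i, j) = inf when no edge
    np.fill_diagonal(dG, 0.0)
    for i in range(n):
        nbrs = np.argsort(d[i])[1:k + 1]       # k nearest neighbours of point i
        dG[i, nbrs] = d[i, nbrs]
        dG[nbrs, i] = d[i, nbrs]               # keep the graph symmetric
    # d_G(i,j) = min{ d_G(i,j), d_G(i,l) + d_G(l,j) } over all intermediate l
    for l in range(n):
        dG = np.minimum(dG, dG[:, l:l + 1] + dG[l:l + 1, :])
    return dG
```

On four collinear points with k = 1, the chain 0–1–2–3 gives d_G(0, 3) = 3, i.e. the length of the path along the graph rather than a direct Euclidean jump.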
Further, the objective function in step 2), by which the reconstruction weights are computed and each sampling point is represented by its neighbors, is:

ε(W) = Σ_i ‖ x_i − Σ_j w_ij x_j ‖²,  subject to Σ_j w_ij = 1,

where w_ij is the reconstruction weight of each sample point obtained from the geodesic distances, and w_i1, …, w_ik are the weights given to the corresponding k neighboring points.
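The constrained least-squares problem for the reconstruction weights is solved independently for each sample point, as in locally linear embedding: centre the neighbours on the sample, form the local Gram matrix, and normalise the solution so the weights sum to one. A hedged sketch (the function name, neighbour-list format, and regularisation constant are assumptions for illustration):

```python
import numpy as np

def reconstruction_weights(X, neighbors):
    """min ||x_i - sum_j w_ij x_j||^2  s.t.  sum_j w_ij = 1, for each i."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i, nbrs in neighbors.items():
        Z = X[nbrs] - X[i]                       # neighbours centred on x_i
        C = Z @ Z.T                              # local Gram matrix
        C = C + 1e-3 * (np.trace(C) + 1e-12) * np.eye(len(nbrs))  # regularise
        w = np.linalg.solve(C, np.ones(len(nbrs)))
        W[i, nbrs] = w / w.sum()                 # enforce sum-to-one constraint
    return W
```

For a point lying midway between its two neighbours, the solver returns weights of 0.5 each, so the point is reconstructed exactly from its neighbours.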
Furthermore, after dimension reduction the feature space is transformed, i.e. x_i → y_i, and with the weight vector matrix the objective function simplifies to:

Φ(Y) = Σ_i ‖ y_i − Σ_j w_ij y_j ‖².
Further, let the projected coordinate in step 3) be y_i. For the objective Φ(Y) = Σ_i ‖ y_i − Σ_j w_ij y_j ‖², define:

y_i = A^T x_i

Then:

Φ(y) = z^T X M X^T z

where the matrix formed by the vectors a is the projection matrix, Φ(y) denotes the transformed objective, z denotes the vector form of the transformation, I denotes the identity matrix, W the reconstruction weight matrix, X the coordinate matrix before projection, M = (I − W)^T (I − W), and T denotes the matrix transpose.
Furthermore, after introducing Lagrange multipliers, the transformation-matrix formula reduces to solving for the eigenvectors of XMX^T by SVD. The process is as follows: the N high-dimensional coordinate points are mapped to N subspace points (N > n). Assuming the rank of X is l, X can be projected by SVD into an l-dimensional matrix B: X = USV^T, B = U^T X = SV^T, where U holds the eigenvectors of XX^T, V holds the eigenvectors of X^T X, and S is an l × l diagonal matrix. The eigenvectors that solve the following equation then become the eigenvectors of the matrix (BB^T)^{-1}(BMB^T):

XMX^T A = λ XX^T A

where A denotes an eigenvector and λ the eigenvalue corresponding to the matrix.
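The SVD reduction can be sketched as follows. The function name, the use of NumPy, and the tolerance used to estimate the rank l are assumptions; following NPE, the eigenvectors with the smallest eigenvalues are retained:

```python
import numpy as np

def npe_projection(X, W, d):
    """Solve XMX^T a = lambda XX^T a via the SVD reduction; X is D x N."""
    n = X.shape[1]
    M = (np.eye(n) - W).T @ (np.eye(n) - W)     # M = (I - W)^T (I - W)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    keep = S > 1e-10 * S[0]                     # estimate the rank l of X
    U, S, Vt = U[:, keep], S[keep], Vt[keep]
    B = S[:, None] * Vt                         # B = U^T X = S V^T  (l x N)
    G = np.linalg.solve(B @ B.T, B @ M @ B.T)   # (BB^T)^{-1} (BMB^T)
    evals, evecs = np.linalg.eig(G)
    order = np.argsort(evals.real)              # smallest eigenvalues first
    return U @ evecs.real[:, order[:d]]         # map back to X-space: a = U b
```

The returned D × d matrix stacks the projection vectors a as columns; each column satisfies XMX^T a = λ XX^T a up to numerical error.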
Compared with the prior art, the invention provides a geodesic-based neighborhood preserving embedding algorithm that addresses the limitation of NPE. An adjacency graph is first constructed, with the neighbors of each sample point computed from geodesic distances to form the adjacency matrix; reconstruction weights are then computed so that each sampling point is represented by its neighbors; finally, the projection matrix is computed, with the transformation obtained from the reconstruction weight matrix. Replacing the Euclidean distance with the geodesic distance better preserves the local structure information used by the NPE algorithm, improves the algorithm's ability to handle manifold structure, and describes local information more accurately; the selection of neighboring points is thereby optimized, the reconstruction error is reduced while local information is better preserved, and data dimension reduction is finally achieved.
Drawings
FIG. 1 is a three-dimensional effect diagram of Helix with two types of features;
FIG. 2 is a diagram illustrating the NPE method dimension reduction effect of the data in FIG. 1;
FIG. 3 is a graph illustrating the dimensionality reduction effect of the data of FIG. 1 by the GNPE method of the present invention;
FIG. 4 is a flow chart of the method of the present invention.
In the figures, the horizontal and vertical coordinates represent distances between sample points; the scales are varied so that the dispersion of the sample points can be distinguished by eye.
Detailed Description
The invention is further explained below with reference to specific examples and the accompanying drawings.
Because the NPE algorithm assumes that the manifold is locally linear, it performs poorly on manifolds with large curvature. The invention therefore replaces the Euclidean distance with the geodesic distance and uncovers the intrinsic space by selecting the true neighbors on the manifold, thereby preserving local structure information well and improving the method's ability to process high-dimensional data.
Referring to fig. 4, the present invention includes the steps of:
Step 01: construct an adjacency graph, computing the neighbors of each sample point with geodesic distances to form an adjacency matrix;
Step 02: compute the reconstruction weights, representing each sampling point by its neighbors;
Step 03: compute the projection matrix, obtaining the transformation matrix from the reconstruction weight matrix.
in step 01, an adjacency graph is constructed, and a point near each sample point is calculated by using a geodesic line, so that an adjacency matrix is formed, and the method specifically comprises the following steps:
when the adjacent points are selected from any sampling points of GNPE, the geodesic distance is used for replacing the Euclidean distance; for the sampling points i and j, if the sampling points i and j belong to the same category, a connecting line exists, otherwise, the connecting line does not exist; if there is a connection between them, d G (i,j)=d x (i, j), otherwise, assume d first G (i, j) = ∞, then d is updated for all l =1,2,3, …, N G (i, j), the following equation is obtained:
d G (i,j)=min{d G (i,j),d G (i,l)+d G (l,j)}
In step 02, the reconstruction weights are computed and each sampling point is represented by its neighbors. The objective function is:

ε(W) = Σ_i ‖ x_i − Σ_j w_ij x_j ‖²,  subject to Σ_j w_ij = 1

In the above formula, w_ij is the reconstruction weight of each sample point obtained from the geodesic distances; under this condition the reconstruction weights describe the low-dimensional structure more faithfully. With this method, the neighbors closest to a given sample point x_i receive large weights, while more distant points receive small weights that decay exponentially with distance from the sample point; w_i1, …, w_ik are the weight vectors given to the corresponding neighbors. Since the feature space is transformed after dimension reduction, i.e. x_i → y_i, substituting the weight vector matrix into the spatial transformation further simplifies the above equation to:

Φ(Y) = Σ_i ‖ y_i − Σ_j w_ij y_j ‖²
In step 03, the projection matrix is computed and the transformation matrix is obtained from the reconstruction weight matrix:

Let the projected coordinate be y_i. For the objective Φ(Y) = Σ_i ‖ y_i − Σ_j w_ij y_j ‖², define:

y_i = A^T x_i

Then:

Φ(y) = a^T X M X^T a,  with M = (I − W)^T (I − W)

where the matrix formed by the vectors a is the projection matrix. Introducing a Lagrange multiplier converts the formula into the problem of solving for the eigenvectors of XMX^T A = λ XX^T A.
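A practical consequence of this linearisation, noted in the background section, is that the learned projection matrix embeds unseen samples directly, with no re-training. A toy illustration (the matrix A below is made up for the example, not a learned projection):

```python
import numpy as np

# Hypothetical projection matrix A (D x d) of the kind GNPE would learn:
# D = 3 original dimensions, d = 2 reduced dimensions.
A = np.array([[0.8,  0.1],
              [0.5, -0.3],
              [0.2,  0.9]])

x_new = np.array([1.0, 2.0, 3.0])   # an unseen test sample
y_new = A.T @ x_new                 # out-of-sample embedding: y = A^T x
```

Nonlinear manifold methods without a projection matrix must re-run on the enlarged data set to embed x_new; here it is a single matrix-vector product.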
To verify the effectiveness of the method, two sets of experiments were performed. The recognition rate was measured with a KNN classifier, and the NPE dimension reduction algorithm served as the comparative example against the GNPE algorithm of the invention. The reduced dimensionalities d = 10 and d = 80 were used, with parameter k = 12. For each subject, 5 samples were selected as the training set and the rest as the test set, and each experiment was repeated 5 times. The results were averaged; Table 1 compares the average accuracy of the two dimension reduction methods at d = 10:
table 2 is a comparison table of the average accuracy of two dimension reduction methods with d = 80:
From the average accuracies in Tables 1 and 2, it can be seen on the ORL face database that the recognition rate of GNPE is generally better than that of the NPE algorithm. With a fixed number of training samples, the lower the dimensionality of the sample data, the lower the final recognition rate, because fewer intrinsic features of the low-dimensional data structure are retained. On the face manifold, the NPE algorithm's use of the Euclidean distance can turn sample points that are far apart on the manifold, but close in Euclidean distance, into neighbors; after recomputing with geodesic distances, GNPE restores the true neighbors and reduces the contribution of such false neighbors in the reconstruction matrix, which improves the recognition rate. In other words, in the original neighborhood preserving embedding algorithm the weight matrix is computed from the Euclidean distances between the neighbors of each sample point, which is inaccurate on a manifold; changing the Euclidean distance to the geodesic distance, recomputing the distances among the k neighbors of each sample point, reallocating the contribution values, and recomputing the weight matrix yields a higher recognition rate, because the manifold structure has large curvature and is not locally close to linear.
As the number of training samples increases at a fixed dimensionality, the recognition rates of both algorithms improve, and analysis of the table data shows that the recognition rate of GNPE remains clearly higher than that of NPE.
Similarly, on the PIE face database the face recognition rate of the GNPE algorithm is still higher than that of the NPE algorithm. Experiments further show that increasing the dimensionality beyond 80 leaves the recognition rate essentially unchanged and stable, on both the ORL and PIE face data.
The embodiment uses the artificial Helix data set with two classes of features: hollow points and solid points form interleaved spirals in three-dimensional space. The curve formed by the hollow points is the spiral of the first class, the curve formed by the solid points is the spiral of the second class, and the two classes are distributed alternately. A helix has a tangent at every point that makes a constant angle with a fixed line (its axis). The three-dimensional rendering of Helix is shown in FIG. 1.
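A two-class helix of the kind shown in FIG. 1 can be generated as follows (the function name and the exact parametrisation are illustrative assumptions; the patent does not specify the generator):

```python
import numpy as np

def make_helix(n=200):
    """Two interleaved helices in 3-D, labelled 0 ('hollow') and 1 ('solid')."""
    t = np.linspace(0, 4 * np.pi, n)
    h1 = np.c_[np.cos(t), np.sin(t), t]                   # first-class spiral
    h2 = np.c_[np.cos(t + np.pi), np.sin(t + np.pi), t]   # phase-shifted spiral
    X = np.vstack([h1, h2])
    y = np.r_[np.zeros(n, dtype=int), np.ones(n, dtype=int)]
    return X, y
```

Each curve's tangent direction (−sin t, cos t, 1) makes a constant angle with the z-axis, the defining property of a helix.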
First, the original NPE algorithm is used to reduce the dimensionality of Helix, with parameters k = 12 and d = 2. The result is shown in FIG. 2: the NPE algorithm achieves a certain dimension reduction effect on Helix, but some of the data overlap.
As can be seen from FIG. 3, GNPE achieves good dimension reduction and classification effects: the sample points of each class are separated, with little overlap between classes.
Because the NPE algorithm assumes that the manifold is locally linear, it performs poorly on manifolds with large curvature; replacing the Euclidean distance with the geodesic distance and uncovering the intrinsic space by selecting the true neighbors on the manifold preserves local structure information well and improves the method's ability to process high-dimensional data.
Finally, it should be noted that the above models and experimental examples further verify the purpose, technical solution and advantages of the present invention. They are specific embodiments only and do not limit the scope of the invention; any modification, improvement or equivalent replacement made within the spirit and principles of the invention falls within its scope.

Claims (6)

1. A data dimension reduction method of a neighborhood preserving embedding improved algorithm is characterized by comprising the following steps:
1) Constructing an adjacency graph, calculating the distance between each sampling point and other points by using the geodesic distance to form a matrix, and then selecting a part of points with shorter distances from the points to finally form an adjacency matrix;
2) Calculating a reconstruction weight of the data according to the geodesic distance of the adjacency matrix, wherein in order to minimize loss after projection, the reconstruction weight is calculated according to the contribution rate of each sample point in the adjacency graph, and each sampling point of the data is represented by the adjacent point of the adjacency matrix to obtain a reconstruction weight matrix;
3) Calculating a projection matrix of the data: the reconstruction weight matrix is substituted into the eigenvector equation to obtain the transformation matrix of the projection, completing the data dimension reduction.
2. The data dimension reduction method of the neighborhood preserving embedding improved algorithm according to claim 1, characterized in that, for sampling points i and j in the adjacency graph of step 1): if the two sampling points belong to the same category, a connecting edge exists between them and the geodesic distance is d_G(i, j) = d_X(i, j);

If the two sampling points do not belong to the same category, no connecting edge exists between them; first set d_G(i, j) = ∞, then for all sampling points l = 1, 2, 3, …, N update d_G(i, j):

d_G(i, j) = min{ d_G(i, j), d_G(i, l) + d_G(l, j) }.
3. The data dimension reduction method of the neighborhood preserving embedding improved algorithm according to claim 1, wherein the objective function of computing the reconstruction weights in step 2) and representing each sampling point by its neighbors is:

ε(W) = Σ_i ‖ x_i − Σ_j w_ij x_j ‖²,  subject to Σ_j w_ij = 1

where w_ij is the reconstruction weight of each sample point obtained from the geodesic distances, and w_i1, …, w_ik are the weights given to the corresponding neighboring points.
4. The method of claim 3, wherein after dimension reduction the feature space is transformed, i.e. x_i → y_i, and with the weight vector matrix the objective function simplifies to:

Φ(Y) = Σ_i ‖ y_i − Σ_j w_ij y_j ‖²
5. The method for reducing the data dimension of the neighborhood preserving embedding improved algorithm according to claim 4, wherein the projected coordinate in step 3) is y_i. For the objective Φ(Y) = Σ_i ‖ y_i − Σ_j w_ij y_j ‖², define:

y_i = A^T x_i

Then:

Φ(y) = z^T X M X^T z

where the matrix formed by the vectors a is the projection matrix, Φ(y) denotes the transformed objective, z denotes the vector form of the transformation, I denotes the identity matrix, W the reconstruction weight matrix, X the coordinate matrix before projection, M = (I − W)^T (I − W), and T denotes the matrix transpose.
6. The method as claimed in claim 5, wherein after introducing a Lagrange multiplier the transformation-matrix formula is converted into solving for the eigenvectors of XMX^T by SVD, as follows: the N high-dimensional coordinate points are mapped to N subspace points (N > n); assuming the rank of X is l, X can be projected by SVD into an l-dimensional matrix B: X = USV^T, B = U^T X = SV^T, where U holds the eigenvectors of XX^T, V holds the eigenvectors of X^T X, and S is an l × l diagonal matrix; the eigenvectors that solve the following equation then become the eigenvectors of the matrix (BB^T)^{-1}(BMB^T):

XMX^T A = λ XX^T A

where A denotes an eigenvector and λ the eigenvalue corresponding to the matrix.
CN201711058157.XA 2017-11-01 2017-11-01 Data dimension reduction method based on an improved neighborhood preserving embedding algorithm Pending CN107871139A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711058157.XA CN107871139A (en) 2017-11-01 2017-11-01 A kind of neighborhood keeps the Method of Data with Adding Windows of embedded innovatory algorithm


Publications (1)

Publication Number Publication Date
CN107871139A true CN107871139A (en) 2018-04-03

Family

ID=61753602

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711058157.XA Pending CN107871139A (en) 2017-11-01 2017-11-01 A kind of neighborhood keeps the Method of Data with Adding Windows of embedded innovatory algorithm

Country Status (1)

Country Link
CN (1) CN107871139A (en)


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109657558A (en) * 2018-11-23 2019-04-19 中国人民解放军海军航空大学 A kind of aero-engine mechanical failure diagnostic method to be extended based on maximum difference
CN109885578A (en) * 2019-03-12 2019-06-14 西北工业大学 Data processing method, device, equipment and storage medium
CN109885578B (en) * 2019-03-12 2021-08-13 西北工业大学 Data processing method, device, equipment and storage medium
WO2022063216A1 (en) * 2020-09-28 2022-03-31 International Business Machines Corporation Determination and use of spectral embeddings of large-scale systems by substructuring
GB2613994A (en) * 2020-09-28 2023-06-21 Ibm Determination and use of spectral embeddings of large-scale systems by substructuring
US11734384B2 (en) 2020-09-28 2023-08-22 International Business Machines Corporation Determination and use of spectral embeddings of large-scale systems by substructuring
CN113507278A (en) * 2021-06-17 2021-10-15 重庆大学 Wireless signal processing method, device and computer readable storage medium
CN113507278B (en) * 2021-06-17 2023-10-24 重庆大学 Wireless signal processing method, device and computer readable storage medium
CN116028822A (en) * 2023-03-30 2023-04-28 国网福建省电力有限公司 Electric energy meter error state evaluation method, system, equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20180403)