CN108319983A - A nonlinear data dimensionality reduction method based on local nonlinear alignment - Google Patents

A nonlinear data dimensionality reduction method based on local nonlinear alignment

Info

Publication number
CN108319983A
CN108319983A (application CN201810120043.1A)
Authority
CN
China
Prior art keywords: point, dimensional, block, low, matrix
Prior art date: 2018-01-31
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number
CN201810120043.1A
Other languages
Chinese (zh)
Inventor
马争鸣
戴利孟
刘洁
车航健
张扬
Current Assignee (as listed; accuracy not verified by Google): Sun Yat Sen University / National Sun Yat Sen University
Original Assignee: National Sun Yat Sen University
Priority date (an assumption, not a legal conclusion)
Application filed by National Sun Yat Sen University
Priority to CN201810120043.1A
Publication of CN108319983A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213: Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135: Feature extraction based on approximation criteria, e.g. principal component analysis
    • G06F18/21355: Feature extraction based on nonlinear criteria, e.g. embedding a manifold in a Euclidean space
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413: Classification techniques based on distances to training or reference patterns
    • G06F18/24147: Distances to closest patterns, e.g. nearest neighbour classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a nonlinear data dimensionality reduction method based on local nonlinear alignment. First, for each sample point in the high-dimensional space, the KNN algorithm selects its nearest neighbors, which together with the point form a corresponding block. Each of these high-dimensional blocks is then projected separately onto a low-dimensional plane with the PCA algorithm. In the low-dimensional space, the blocks produced by PCA are stitched together seamlessly by translating and rotating each block. The stitching criterion is that a point may be contained in several different blocks, and the copies of that same point in all blocks containing it should coincide in the low-dimensional representation. Accordingly, an average point is computed from these copies, and the sum of the squared 2-norm distances from each copy to this average point should be minimal, which guarantees that the copies of the same point in different blocks overlap as far as possible. After each block is translated and rotated in the low-dimensional plane so that the total deviation over all points is minimal, the different blocks are stitched together precisely. This solves for the global low-dimensional embedding coordinates of the sample points, achieving nonlinear data dimensionality reduction with local nonlinear alignment.

Description

A nonlinear data dimensionality reduction method based on local nonlinear alignment
Technical field
The invention belongs to the field of machine learning, and in particular relates to a nonlinear data dimensionality reduction method based on local nonlinear alignment in manifold learning.
Background technology
Data dimensionality reduction refers to the process of mapping samples from a high-dimensional space to a low-dimensional space by a linear or nonlinear method, thereby obtaining a representation of the high-dimensional data in the lower-dimensional space. This operation reduces the redundancy of the original data and improves the efficiency and focus of subsequent data processing. Dimensionality reduction methods fall into two broad classes: linear mappings and nonlinear mappings. Representative linear methods are principal component analysis (Principal Component Analysis, PCA) and linear discriminant analysis (Linear Discriminant Analysis, LDA). Both are theoretically mature, simple to compute, and fast, but for high-dimensional data with nonlinear structure they cannot give effective answers.
Nonlinear methods based on manifold learning offer one solution route for data dimensionality reduction. Manifold learning is a large class of nonlinear dimensionality reduction methods operating in an unsupervised manner; each method attempts to preserve a specific geometric quantity, such as distance, angle, proximity, or local patches. Since the two seminal papers published in Science in 2000, Isomap and LLE, manifold learning has been an important topic in data visualization and pattern classification. Today the mass of data coming from imaging devices, bioinformatics, and financial applications is typically high-dimensional, so there is an urgent need to overcome the "curse of dimensionality". One direct solution is dimensionality reduction, which transforms high-dimensional data into a low-dimensional embedding space. However, conventional methods such as PCA and MDS cannot discover the nonlinear or curved structures of the input data. Manifold learning methods, by contrast, are suited to unfolding nonlinear structures into a flat low-dimensional embedding space. These methods have therefore found wide application, for example in microarray gene expression, 3D human pose recovery, face recognition, and facial expression transfer. Most previous manifold learning methods focus on one specific view in order to preserve a single geometric quantity. How to reduce dimensionality as much as possible without losing key properties has therefore become a recent research hotspot.
The main manifold learning dimensionality reduction methods are: locally linear embedding (Locally Linear Embedding, LLE), isometric mapping (Isometric Mapping, ISOMAP), and local tangent space alignment (Local Tangent Space Alignment, LTSA).
The LLE algorithm uses the idea of locally linear conformal mapping and emphasizes preserving local neighborhood structure. In many information-processing applications, local information is sometimes more effective than global information, and LLE has an advantage in computational cost, requiring only sparse matrix operations of polynomial order. It also has good representational power: even when the global structure is non-Euclidean, the local geometry remains close to Euclidean. However, the method has some application limitations, such as excessive sensitivity to parameters and external noise, and failure of the dimensionality reduction when processing sparsely distributed data sets.
The ISOMAP algorithm uses the global idea of preserving geodesic distances before and after dimensionality reduction. By analysing the given high-dimensional manifold, it obtains the corresponding low-dimensional embedding, so that the neighborhood structure between data points on the high-dimensional manifold is reproduced as completely as possible in the low-dimensional embedding. It uses the multidimensional scaling (Multidimensional Scaling, MDS) algorithm as its analysis tool; the difference is that when computing distances between data points on the high-dimensional manifold, it abandons the traditional Euclidean distance and strives to preserve the intrinsic geometric properties of the data, replacing Euclidean distance with the geodesic distance of differential geometry. It is an algorithm that estimates geodesic distances from the actual input data.
The LTSA algorithm is a manifold learning algorithm based on local tangent spaces. It constructs the local geometry of the low-dimensional manifold by approximating the tangent space at each sample point, and then obtains the global low-dimensional embedding coordinates by tangent space alignment. As a manifold learning algorithm, LTSA can effectively learn global embedding coordinates that reflect the low-dimensional manifold structure of the data set, but it also has a shortcoming: the order of the matrix in its eigendecomposition equals the number of samples, so when the sample set is large the problem becomes intractable. A minimal comparison of these baselines is sketched below.
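For concreteness, the three baselines above can be run on a synthetic manifold. A minimal sketch, assuming scikit-learn is available (its `Isomap` and `LocallyLinearEmbedding` estimators are standard library components, not part of the present invention):

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap, LocallyLinearEmbedding

X, _ = make_swiss_roll(n_samples=1000, noise=0.05)  # X: (1000, 3) samples

# LLE: preserves local linear reconstruction weights.
Y_lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2).fit_transform(X)

# ISOMAP: preserves geodesic distances estimated over a k-NN graph.
Y_iso = Isomap(n_neighbors=12, n_components=2).fit_transform(X)

# LTSA: aligns local tangent spaces; its eigenproblem grows with the sample
# count, which is the limitation noted above.
Y_ltsa = LocallyLinearEmbedding(n_neighbors=12, n_components=2,
                                method='ltsa').fit_transform(X)
```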
The present invention proposes another method based on the idea of local preservation, namely a nonlinear data dimensionality reduction method with local nonlinear alignment. The method constructs local neighborhood blocks around the input sample points and projects each block to low dimension with PCA; then, in the low-dimensional space, it constrains the translation and rotation of each block by minimizing the sum, over all points, of the squared 2-norm distances between each copy of a point and the average point of its copies across all blocks containing it. This solves for the global low-dimensional embedding coordinates of the sample points and achieves nonlinear data dimensionality reduction with local nonlinear alignment. The dimensionality reduction results of this method are very good, and at the same time its time complexity is very low, making it well suited to large-scale high-dimensional data.
Summary of the invention
The object of the present invention is to propose a nonlinear data dimensionality reduction method based on local nonlinear alignment. First, for each sample point of the high-dimensional space, the invention computes its neighborhood with the KNN algorithm and forms a corresponding block; each of these blocks is then reduced to the low-dimensional space with the PCA algorithm. In the low-dimensional space, the translation and rotation of each block are constrained by minimizing the sum of the squared 2-norm distances between each point and the average point of its copies across all blocks containing it. This realizes the mapping of the original data from high to low dimension and yields the global low-dimensional embedding coordinates, achieving nonlinear data dimensionality reduction with local nonlinear alignment. The specific content of the invention is as follows:
1. For each sample point in the high-dimensional space, use the KNN algorithm to select its nearest neighbors and form the corresponding block. There are many ways to form blocks; the present invention mainly applies the KNN algorithm to each data point in the high-dimensional space, taking its $k$ nearest neighbors as the neighborhood block of that point, so that the $N$ points form $N$ blocks. For example, the neighborhood block of data point $x_p$ is $X_p = [x_1 \; \cdots \; x_k \; x_{k+1}] \in \mathbb{R}^{D \times (k+1)}$, where $x_1, \dots, x_k$ are the $k$ neighbors of $x_p$ and $x_{k+1}$ is the point $x_p$ itself, as sketched below.
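A minimal sketch of this block construction, assuming the $D \times N$ data layout defined in the specific embodiments below; `build_blocks` is an illustrative name, and scikit-learn's `NearestNeighbors` is one possible KNN implementation, not mandated by the invention:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def build_blocks(X, k):
    """For each column x_p of X (D x N), return the index set of its block:
    its k nearest neighbors followed by p itself, giving k+1 columns."""
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X.T)  # rows = samples
    _, idx = nbrs.kneighbors(X.T)                        # idx[p, 0] == p
    # Reorder so the point itself comes last, matching X_p = [x_1 ... x_k x_p].
    return [np.concatenate([row[1:], row[:1]]) for row in idx]

# blocks[p] indexes the k+1 columns of X that form X_p in R^{D x (k+1)}.
```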
2. Each of these high-dimensional data blocks is projected separately onto a low-dimensional plane, a locally homeomorphic process; the resulting coordinates are denoted $\Theta_p \in \mathbb{R}^{d \times (k+1)}$, $p = 1, \dots, N$. Manifold learning offers many local homeomorphism algorithms; the present invention mainly uses the PCA algorithm for the local projection. A point $x_i$, $i = 1, \dots, N$, of the high-dimensional space has, after PCA dimensionality reduction, the coordinate representation $\theta_{i,p} = (m_{i,p}, n_{i,p}) \in \mathbb{R}^{1 \times d}$ in $\Theta_p$, as in the sketch below.
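The per-block PCA projection might look as follows; a sketch via the thin SVD of the centered block, with `pca_block` an illustrative name:

```python
import numpy as np

def pca_block(X_p, d):
    """X_p: D x (k+1) block. Returns Theta_p: d x (k+1), the local PCA
    coordinates of the block after centering."""
    mean = X_p.mean(axis=1, keepdims=True)
    Xc = X_p - mean                        # center the block
    U, _, _ = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :d].T @ Xc                 # d x (k+1) local coordinates
```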
3. For each block $\Theta_p$, construct an unknown rotation matrix and translation matrix, so that $\Theta_p$ can be moved to any position in the low-dimensional plane and rotated arbitrarily about its center. The local low-dimensional embedding coordinates of $\Theta_p$ after translation and rotation are denoted $Y_p$, and a point $\theta_{i,p}$ of $\Theta_p$ has, after translation and rotation, the low-dimensional coordinate $y_{i,p}$. The present invention provides the following algorithm to solve this problem:
3.1. Stack the translation vectors $\xi_p$ and rotation matrices $v_p$ of all blocks to form the overall translation matrix $\xi \in \mathbb{R}^{N \times d}$ and the overall rotation matrix $V \in \mathbb{R}^{dN \times d}$.
3.2. For each point $\theta_{i,p}$ in each block, construct a translation selection vector $s_{i,p} \in \mathbb{R}^{1 \times P}$ and a coordinate selection vector $l_{i,p} \in \mathbb{R}^{1 \times dP}$.
3.3. A point $\theta_{i,p}$ of some block $\Theta_p$ then has, after translation and rotation, the low-dimensional coordinate $y_{i,p} = s_{i,p}\xi + l_{i,p}V$. A sketch of these selection vectors is given below.
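The two selection vectors can be materialized directly. A sketch with 0-based block indices and illustrative names:

```python
import numpy as np

def translation_selector(p, P):
    s = np.zeros(P)
    s[p] = 1.0                        # picks row xi_p out of the stacked xi
    return s

def coordinate_selector(theta_ip, p, P, d):
    l = np.zeros(d * P)
    l[p * d:(p + 1) * d] = theta_ip   # places theta_{i,p} against block v_p
    return l

# With xi stacked as a P x d array and V as a dP x d array,
# s @ xi + l @ V reproduces y_{i,p} = xi_p + theta_{i,p} v_p.
```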
4. For all local coordinate blocks $\Theta_p$ containing data point $x_i$, compute the average $\bar{y}_i$ of the $y_{i,p}$ obtained after the blocks' respective translations and rotations; find the squared 2-norm distance between each $y_{i,p}$ and its average point $\bar{y}_i$, and sum over all blocks containing the same point to obtain the global error corresponding to the point $\bar{y}_i$, as in the sketch below.
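For fixed translations and rotations, this per-point error is just the scatter of the copies of a point around their mean. A small sketch, assuming `y_i` collects the copies $y_{i,p}$ of one point (an illustrative layout):

```python
import numpy as np

def point_error(y_i):
    """Sum of squared 2-norm distances from each copy y_{i,p} to their mean."""
    Y = np.asarray(y_i)               # f_i x d array of copies
    y_bar = Y.mean(axis=0)            # the average point \bar{y}_i
    return np.sum(np.linalg.norm(Y - y_bar, axis=1) ** 2)
```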
5. Stack the average translation selection vectors $s_i$ and average rotation selection vectors $l_i$ of all points to form the overall average translation selection matrix $S_{mean}$ and the overall average rotation selection matrix $L_{mean}$.
6. Accumulating the global errors of all low-dimensional embedded points with respect to their average points forms the objective function $\min_{\xi,V}\|S\xi + LV\|_F^2$. Requiring the global error, that is, the objective function, to be minimal then amounts to solving $\min_{V}\|HV\|_F^2$, where $H = (I - S(S^TS)^+S^T)L$.
7. The above problem is usually solved as a Rayleigh quotient: let $V^TV = I_d$ and perform an eigendecomposition of $H^TH$; the eigenvectors corresponding to the smallest eigenvalues yield the rotation matrix $V$, and $\xi = -(S^TS)^+S^TLV$.
8. Having found $\xi$ and $V$, the coordinates of all low-dimensional embedded points are $Y = S_{mean}\xi + L_{mean}V$. A condensed sketch of steps 4 to 8 follows.
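Steps 4 to 8 can be condensed into one routine that assembles the deviation matrices $S$ and $L$, forms $H = (I - S(S^TS)^+S^T)L$, and solves the eigenproblem. A sketch under the layout assumptions above; `lna_embed` and all variable names are illustrative, and dense matrices are used for brevity where a practical implementation would use sparse ones:

```python
import numpy as np

def lna_embed(Thetas, blocks, N, d):
    """Thetas[p]: d x (k+1) local PCA coordinates; blocks[p]: the k+1 sample
    indices of block p (the point itself last). Returns Y: N x d embedding."""
    P = len(blocks)
    occ = [[] for _ in range(N)]          # (p, theta_{i,p}) per block holding i
    for p, idx in enumerate(blocks):
        for j, i in enumerate(idx):
            occ[i].append((p, Thetas[p][:, j]))
    rows_S, rows_L = [], []               # rows (s_{i,p}-s_i) and (l_{i,p}-l_i)
    S_mean = np.zeros((N, P))             # row i holds the averaged s_i
    L_mean = np.zeros((N, d * P))         # row i holds the averaged l_i
    for i in range(N):
        f_i = len(occ[i])                 # number of blocks containing point i
        s_i, l_i = np.zeros(P), np.zeros(d * P)
        for p, th in occ[i]:
            s_i[p] += 1.0 / f_i
            l_i[p * d:(p + 1) * d] += th / f_i
        S_mean[i], L_mean[i] = s_i, l_i
        for p, th in occ[i]:
            s_ip, l_ip = np.zeros(P), np.zeros(d * P)
            s_ip[p] = 1.0
            l_ip[p * d:(p + 1) * d] = th
            rows_S.append(s_ip - s_i)
            rows_L.append(l_ip - l_i)
    S, L = np.array(rows_S), np.array(rows_L)
    G = np.linalg.pinv(S.T @ S)           # (S^T S)^+
    H = L - S @ G @ (S.T @ L)             # H = (I - S (S^T S)^+ S^T) L
    w, U = np.linalg.eigh(H.T @ H)        # eigenvalues in ascending order
    V = U[:, :d]                          # d smallest eigenvectors, V^T V = I_d
    xi = -G @ S.T @ L @ V                 # xi = -(S^T S)^+ S^T L V
    return S_mean @ xi + L_mean @ V       # Y = S_mean xi + L_mean V
```

Because the smallest eigenvalues of $H^TH$ can include near-degenerate directions, it is the orthogonality constraint $V^TV = I_d$ of step 7 that rules out the trivial solution.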
The invention has the following advantages: the method constructs local neighborhood blocks around the input sample points, projects each block to low dimension with PCA, and then stitches the blocks together in the low-dimensional space to obtain the global low-dimensional embedding coordinates, which effectively reduces the dimensionality of the input sample set. The dimensionality reduction results are very good, and at the same time the time complexity is very low, making the method well suited to large-scale high-dimensional data reduction.
Description of the drawings
Fig. 1 is the flowchart of the nonlinear data dimensionality reduction method based on local nonlinear alignment of the present invention.
Specific embodiments
As shown in the figure, the nonlinear data dimensionality reduction method based on local nonlinear alignment includes the following content:
1. Assume the set of high-dimensional data sample points is $X = [x_1 \; \cdots \; x_N] \in \mathbb{R}^{D \times N}$ and the set of sample points mapped into the low-dimensional space is $Y = [y_1 \; \cdots \; y_N] \in \mathbb{R}^{d \times N}$, where $D$ is the dimension of the high-dimensional space and $d$ ($d \ll D$) is the dimension of the low-dimensional space. $X$ is the input of the high-dimensional data model, consisting of $N$ $D$-dimensional real column vectors in $\mathbb{R}^{D \times N}$; $Y$ is the output sample set of the high-dimensional data mapped into the low-dimensional space, consisting of $N$ $d$-dimensional real column vectors in $\mathbb{R}^{d \times N}$.
2. For each sample point in the high-dimensional space, use the KNN algorithm to select its nearest neighbors and form the corresponding block. There are many ways to form blocks; the present invention mainly applies the KNN algorithm to each data point in the high-dimensional space, taking its $k$ nearest neighbors as the neighborhood block of that point, so that the $N$ points form $N$ blocks. For example, the neighborhood block of data point $x_p$ is $X_p = [x_1 \; \cdots \; x_k \; x_{k+1}] \in \mathbb{R}^{D \times (k+1)}$, where $x_1, \dots, x_k$ are the $k$ neighbors of $x_p$ and $x_{k+1}$ is the point $x_p$ itself.
3. Each of these high-dimensional data blocks is projected separately onto a low-dimensional plane, a locally homeomorphic process; the coordinates are denoted $\Theta_p \in \mathbb{R}^{d \times (k+1)}$, $p = 1, \dots, N$. Manifold learning offers many local homeomorphism algorithms; the present invention mainly uses the PCA algorithm for the local projection. A point $x_i$, $i = 1, \dots, N$, of the high-dimensional space has, after PCA dimensionality reduction, the coordinate representation $\theta_{i,p} = (m_{i,p}, n_{i,p}) \in \mathbb{R}^{1 \times d}$ in $\Theta_p$.
4. Assume the unknown translation vector of the $p$-th block is $\xi_p \in \mathbb{R}^{1 \times d}$ and the unknown rotation matrix is $v_p \in \mathbb{R}^{d \times d}$, $1 \le p \le P$. The coordinates of $\Theta_p$ after translation and rotation are denoted $Y_p$; point-wise, $\theta_{i,p}$ is expressed after translation and rotation as the low-dimensional coordinate $y_{i,p} = \xi_p + \theta_{i,p}v_p$.
5. Let $l_{i,p} \in \mathbb{R}^{1 \times dP}$ be the coordinate selection vector formed from $\theta_{i,p}$. For example, the coordinate selection vector of $\theta_{i,1}$ in the first block is $l_{i,1} = (m_{i,1}, n_{i,1}, 0, 0, \dots, 0) \in \mathbb{R}^{1 \times dP}$. Let $s_{i,p} \in \mathbb{R}^{1 \times P}$ be the translation selection vector of the $p$-th block $\Theta_p$; for example, the translation selection vector of the first block is $s_{i,1} = (1, 0, 0, \dots, 0) \in \mathbb{R}^{1 \times P}$. Then $y_{i,p}$ can be expressed as $y_{i,p} = s_{i,p}\xi + l_{i,p}V$, where $\xi \in \mathbb{R}^{N \times d}$ and $V \in \mathbb{R}^{dN \times d}$ are the matrices formed by stacking the translation vectors $\xi_p$ and rotation matrices $v_p$ of all blocks, i.e. $\xi = [\xi_1^T \; \cdots \; \xi_P^T]^T$ and $V = [v_1^T \; \cdots \; v_P^T]^T$.
6. All the local coordinate blocks $\Theta_p$ containing data point $x_i$ then have, after their respective translations and rotations, the average point $\bar{y}_i = \frac{1}{f_i}\sum_{p:\,x_i \in X_p} y_{i,p}$, where $f_i$ denotes the number of blocks containing this point.
7. Since the proper projection of $x_i$ is the average $\bar{y}_i$ of all the $y_{i,p}$, the error between $y_{i,p}$ and its average point $\bar{y}_i$ is $\|y_{i,p} - \bar{y}_i\|_2^2$, and the global error corresponding to the point $\bar{y}_i$ is $e_i = \sum_{p:\,x_i \in X_p} \|y_{i,p} - \bar{y}_i\|_2^2$, where $\bar{y}_i = s_i\xi + l_iV$ with $s_i = \frac{1}{f_i}\sum_p s_{i,p}$ and $l_i = \frac{1}{f_i}\sum_p l_{i,p}$.
8. Stack the $s_i$ and $l_i$ of each point to form the overall average matrices $S_{mean} \in \mathbb{R}^{N \times P}$ and $L_{mean} \in \mathbb{R}^{N \times dP}$.
9. From this, the objective function accumulating the global errors of all low-dimensional embedded points with respect to their average points is $E(\xi, V) = \sum_{i=1}^{N} \sum_{p:\,x_i \in X_p} \|y_{i,p} - \bar{y}_i\|_2^2 = \|S\xi + LV\|_F^2$, where $S$ and $L$ are formed by stacking the rows $s_{i,p} - s_i$ and $l_{i,p} - l_i$ over all pairs $(i, p)$.
10. Requiring the global error, i.e. the objective function, to be minimal means solving $\min_{\xi,V}\|S\xi + LV\|_F^2$. Taking the minimum of the objective function to be $0$ gives $S\xi + LV = 0$, that is, $S\xi = -LV$; multiplying both sides by $S^T$ gives $S^TS\xi = -S^TLV$, and taking the pseudoinverse of $S^TS$ yields $\xi = -(S^TS)^+S^TLV$.
11. At this point $E = \|S\xi + LV\|_F^2 = \|(I - S(S^TS)^+S^T)LV\|_F^2 = \|HV\|_F^2$, where $H = (I - S(S^TS)^+S^T)L$.
12. The above equation may admit multiple solutions for $V$; to avoid the trivial solution, it is usually solved as a Rayleigh quotient. Let $V^TV = I_d$ and perform an eigendecomposition of $H^TH$; the eigenvectors corresponding to the smallest eigenvalues yield the rotation matrix $V$, and $\xi = -(S^TS)^+S^TLV$. Having found $\xi$ and $V$, the coordinates of all low-dimensional embedded points are $Y = S_{mean}\xi + L_{mean}V$. An end-to-end sketch of the whole procedure is given below.
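Putting the pieces together, a hypothetical end-to-end run on a synthetic swiss roll could look as follows; `build_blocks`, `pca_block`, and `lna_embed` are the illustrative helpers sketched earlier, not named components of the invention:

```python
import numpy as np
from sklearn.datasets import make_swiss_roll

X, _ = make_swiss_roll(n_samples=800, noise=0.02)
X = X.T                                    # D x N, matching the text's layout
N, d, k = X.shape[1], 2, 10

blocks = build_blocks(X, k)                # step 2: KNN neighborhood blocks
Thetas = [pca_block(X[:, idx], d) for idx in blocks]   # step 3: per-block PCA
Y = lna_embed(Thetas, blocks, N, d)        # steps 4-12: align blocks, solve
print(Y.shape)                             # (800, 2) global embedding coords
```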

Claims (3)

1. A nonlinear data dimensionality reduction method based on local nonlinear alignment, characterized in that the steps of the method are as follows:
A. For each sample point in the high-dimensional space, use the KNN algorithm to take its $k$ nearest neighbors as the neighborhood block of that point, so that the $N$ points form $N$ blocks;
B. Project each of these high-dimensional data blocks onto a low-dimensional plane with the PCA algorithm; the coordinates are denoted $\Theta_p$, $p = 1, \dots, N$, and a point $x_i$, $i = 1, \dots, N$, of the high-dimensional space has, after PCA dimensionality reduction, the coordinate representation $\theta_{i,p}$ in $\Theta_p$;
C. For each block $\Theta_p$, construct an unknown rotation matrix and translation matrix; the local low-dimensional embedding coordinates of $\Theta_p$ after translation and rotation are denoted $Y_p$, and a point $\theta_{i,p}$ of $\Theta_p$ has, after translation and rotation, the low-dimensional coordinate $y_{i,p}$;
D. Compute the objective function of the global error between all low-dimensional embedded points and their average points, $E(\xi, V) = \sum_{i}\sum_{p} \|y_{i,p} - \bar{y}_i\|_2^2$; by solving the objective function, take the eigenvectors corresponding to the smallest eigenvalues to find the rotation matrix $V$ of all blocks, and then find the translation matrix $\xi$ of all blocks;
E. Having found $\xi$ and $V$, further find the final low-dimensional embedding coordinates of all sample points: $Y = S_{mean}\xi + L_{mean}V$.
2. The method according to claim 1, characterized in that step C specifically includes:
C1. Stack the translation vectors $\xi_p$ and rotation matrices $v_p$ of all blocks to form the overall translation matrix $\xi \in \mathbb{R}^{N \times d}$ and the overall rotation matrix $V \in \mathbb{R}^{dN \times d}$;
C2. For each point $\theta_{i,p}$ in each block, construct a translation selection vector $s_{i,p} \in \mathbb{R}^{1 \times P}$ and a coordinate selection vector $l_{i,p} \in \mathbb{R}^{1 \times dP}$;
C3. A point $\theta_{i,p}$ of some block $\Theta_p$ then has, after translation and rotation, the low-dimensional coordinate $y_{i,p} = s_{i,p}\xi + l_{i,p}V$.
3. The method according to claim 1, characterized in that step D specifically includes:
D1. For each data point $x_i$, compute the average $\bar{y}_i$ of all the $y_{i,p}$ obtained from the local coordinate blocks $\Theta_p$ containing $x_i$ after their respective translations and rotations; find the squared 2-norm distance between each $y_{i,p}$ and its average point $\bar{y}_i$, and find the global error of all the $y_{i,p}$ in blocks containing the same point with respect to $\bar{y}_i$;
D2. Accumulate the global errors formed in D1 for every sample point, i.e. the errors of the $y_{i,p}$ in all blocks containing the same point with respect to the average point $\bar{y}_i$, to obtain the global error between all low-dimensional embedded points and their average points, thereby constructing the objective function $E(\xi, V) = \sum_{i=1}^{N}\sum_{p} \|y_{i,p} - \bar{y}_i\|_2^2$;
D3. By solving the objective function, take the eigenvectors corresponding to the smallest eigenvalues to find the rotation matrix $V$ of all blocks, and then find the translation matrix $\xi$ of all blocks;
D4. Stack the average translation selection vectors $s_i$ and average rotation selection vectors $l_i$ of each point to form the overall average translation selection matrix $S_{mean}$ and the overall average rotation selection matrix $L_{mean}$.
CN201810120043.1A 2018-01-31 2018-01-31 A nonlinear data dimensionality reduction method based on local nonlinear alignment Pending CN108319983A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810120043.1A CN108319983A (en) 2018-01-31 2018-01-31 A nonlinear data dimensionality reduction method based on local nonlinear alignment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810120043.1A CN108319983A (en) 2018-01-31 2018-01-31 A nonlinear data dimensionality reduction method based on local nonlinear alignment

Publications (1)

Publication Number Publication Date
CN108319983A true CN108319983A (en) 2018-07-24

Family

ID=62902049

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810120043.1A Pending CN108319983A (en) A nonlinear data dimensionality reduction method based on local nonlinear alignment

Country Status (1)

Country Link
CN (1) CN108319983A (en)


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110120082A (en) * 2019-04-04 2019-08-13 平安科技(深圳)有限公司 Image processing method, apparatus and device for financial data, and readable storage medium
CN110120082B (en) * 2019-04-04 2023-08-18 平安科技(深圳)有限公司 Image processing method, device and equipment for financial data and readable storage medium
CN114127712A (en) * 2019-05-15 2022-03-01 雷克斯股份有限公司 System and method for generating a low dimensional space representing similarity between patents
CN114127712B (en) * 2019-05-15 2024-01-05 雷克斯股份有限公司 System and method for generating a low-dimensional space representing similarity between patents
CN110222631A (en) * 2019-06-04 2019-09-10 电子科技大学 SAR image target recognition method based on data-block-parallel local tangent space alignment
CN110222631B (en) * 2019-06-04 2022-03-15 电子科技大学 SAR image target recognition method based on data-block-parallel local tangent space alignment
CN113570016A (en) * 2021-08-09 2021-10-29 广东电网有限责任公司 Optical cable engineering management method based on RFID and GPLLE

Similar Documents

Publication Publication Date Title
Masi et al. Learning pose-aware models for pose-invariant face recognition in the wild
Cong et al. Global-and-local collaborative learning for co-salient object detection
Cao et al. A vector-based representation to enhance head pose estimation
CN108319983A (en) A nonlinear data dimensionality reduction method based on local nonlinear alignment
Barroso-Laguna et al. Key.Net: Keypoint detection by handcrafted and learned CNN filters revisited
Zeng et al. Ricci flow for shape analysis and surface registration: theories, algorithms and applications
Shi et al. Hessian semi-supervised sparse feature selection based on $L_{2,1/2}$-matrix norm
Chen et al. Attention-aware cross-modal cross-level fusion network for RGB-D salient object detection
Michel et al. Scale invariant and deformation tolerant partial shape matching
Yi et al. Label propagation based semi-supervised non-negative matrix factorization for feature extraction
CN105184772A (en) Adaptive color image segmentation method based on super pixels
Salehian et al. Recursive estimation of the stein center of SPD matrices and its applications
Zhao et al. Aliked: A lighter keypoint and descriptor extraction network via deformable transformation
Wang et al. Semi-supervised Gaussian process latent variable model with pairwise constraints
Hu et al. Shape matching and object recognition using common base triangle area
Gao et al. Multi-level view associative convolution network for view-based 3D model retrieval
Sharma et al. Voxel-based 3D occlusion-invariant face recognition using game theory and simulated annealing
Liang et al. Feature-preserved convolutional neural network for 3D mesh recognition
Fan et al. A deep learning framework for face verification without alignment
CN104573727B (en) A kind of handwriting digital image dimension reduction method
He et al. Chinese sign language recognition based on trajectory and hand shape features
CN114882260A (en) Graph matching method and system
CN115544306A (en) Multi-mode retrieval method based on feature fusion Hash algorithm
Hao et al. Cascaded geometric feature modulation network for point cloud processing
Xu et al. Sparse subspace clustering with low-rank transformation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20180724