CN107730435A - A kind of compression sensing method based on graphics processing unit accelerating algorithm - Google Patents

A kind of compression sensing method based on graphics processing unit accelerating algorithm Download PDF

Info

Publication number
CN107730435A
Authority
CN
China
Prior art keywords
matrix
lasso
algorithm
admm
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201710954822.7A
Other languages
Chinese (zh)
Inventor
夏春秋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Vision Technology Co Ltd
Original Assignee
Shenzhen Vision Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Vision Technology Co Ltd filed Critical Shenzhen Vision Technology Co Ltd
Priority to CN201710954822.7A
Publication of CN107730435A
Legal status: Withdrawn

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing
    • G06T1/20: Processor architectures; Processor configuration, e.g. pipelining
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10: Complex mathematical operations
    • G06F17/16: Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization

Abstract

A compressed sensing method based on a graphics processing unit acceleration algorithm is proposed in the present invention. Its main contents include: sparse signal recovery through the least absolute shrinkage and selection operator (LASSO), the iterative soft-thresholding algorithm (ISTA) for LASSO, the alternating direction method of multipliers (ADMM) for LASSO, and algorithm design with a circulant sensing matrix. The process is as follows: sparse signal recovery is first carried out through LASSO; ISTA then solves LASSO by moving towards the direction of steepest descent at each iteration; the optimization problem is then redefined, and ADMM iteratively minimizes the augmented Lagrangian, combining dual decomposition with the augmented Lagrangian method; finally, an algorithm with a circulant sensing matrix is designed. The present invention uses circulant matrices to speed up the signal recovery process, reduces the number of sequential accesses to global memory, recovers and even amplifies signals under limited hardware requirements, and reduces the storage space occupied by the image.

Description

Compressed sensing method based on a graphics processing unit acceleration algorithm
Technical Field
The invention relates to the field of compressed sensing, and in particular to a compressed sensing method based on an acceleration algorithm of a graphics processing unit.
Background
Compressed sensing has been an intensively studied research frontier in recent years and has attracted attention in many application fields. Compressed sensing means that the compression of a signal is accomplished at the moment the signal is acquired (from analog to digital). The technology has broad application prospects: in a digital camera, for example, the large amount of data collected by the lens can be compressed at acquisition time, discarding about 90% of the data and directly removing redundant information, which saves battery power and storage space. When compressed sensing is applied to computed tomography (CT) and magnetic resonance imaging (MRI) of medical images, the quality of the reconstructed image is only slightly lower than that of the original image, while the space it occupies is much smaller. The technology can also be applied to wireless communication, array signal processing, imaging, analog-to-information conversion, physiological signal acquisition, biological sensing and other fields. However, with conventional image processing methods, for example in the reconstruction of night-sky images whose sparsity is much smaller than the number of pixels, the acquired image may be affected by a small amount of blur, and the memory occupied by the image is large, which is unfavorable for storage.
The invention provides a compressed sensing method based on a graphics processing unit acceleration algorithm. Sparse signals are first recovered through the least absolute shrinkage and selection operator (LASSO), with a sparsity constraint and additional assumptions applied to the sensing matrix; the iterative soft-thresholding algorithm (ISTA) then solves LASSO by moving towards the direction of steepest descent at each iteration; the optimization problem is then redefined, and the alternating direction method of multipliers (ADMM) iteratively minimizes the augmented Lagrangian, combining dual decomposition with the augmented Lagrangian method; finally, an algorithm with a circulant sensing matrix is designed. The invention uses the properties of circulant matrices to accelerate the signal recovery process, reduces the number of sequential accesses to global memory, achieves recovery and even amplification of signals under limited hardware requirements, and reduces the storage space occupied by the image.
Disclosure of Invention
To address the problems of image blurring and large memory occupation, the invention aims to provide a compressed sensing method based on a graphics processing unit acceleration algorithm.
In order to solve the above problems, the present invention provides a compressed sensing method based on an accelerated algorithm of a graphics processing unit, which mainly comprises:
(I) sparse signal recovery through the least absolute shrinkage and selection operator (LASSO);
(II) the iterative soft-thresholding algorithm (ISTA) for LASSO;
(III) the alternating direction method of multipliers (ADMM) for LASSO;
and (IV) algorithm design with a circulant sensing matrix.
Wherein, in the compressed sensing, x* ∈ ℝ^n is the sparse signal to be recovered, i.e. at most k < n elements of x* are non-zero; A is the sensing matrix of size m × n (m < n), and the sensing process is represented by the linear combination:
y = Ax*    (1)
where y = (y₁, …, y_m) is the measurement vector.
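For illustration, the following minimal NumPy sketch instantiates the measurement model of equation (1): a k-sparse signal x* is generated and measured through a generic dense sensing matrix A with m < n. The dimensions, the Gaussian random matrix, and the variable names are assumptions made only for this example; they are not specified by the patent.

```python
import numpy as np

# Illustrative dimensions (assumed, not from the patent): n-dimensional signal,
# k non-zero entries, m < n measurements.
rng = np.random.default_rng(0)
n, m, k = 1024, 256, 20

# k-sparse signal x*: at most k < n elements are different from zero.
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)

# Generic dense sensing matrix A (m x n) and measurement vector y = A x*, equation (1).
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true
```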
Wherein, for the sparse signal recovery through LASSO, a sparsity constraint and additional assumptions are applied to the matrix A:
min_x ‖x‖₀  s.t.  Ax = y    (2)
If x* is k-sparse and, for every index set Γ ⊆ {1, …, n} with |Γ| = 2k, the columns of A indexed by Γ are linearly independent, then x* is the unique solution of equation (2); however, this non-convex problem has combinatorial complexity in the size of the problem; problem (2) is therefore generally solved through convex relaxation, which amounts to minimizing the following cost function, also known as LASSO:
x̂ = argmin_x { ½‖y - Ax‖₂² + α‖x‖₁ }    (3)
where ½‖y - Ax‖₂² is the loss function measuring how well a vector x ∈ ℝ^n fits the data y, α > 0 is a regularization parameter, and ‖x‖₁ is the term that promotes sparsity of the estimate.
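The cost function of equation (3) translates directly into code; the short sketch below merely evaluates it for a candidate x, with the function name chosen for illustration only.

```python
import numpy as np

def lasso_cost(A, y, x, alpha):
    """LASSO cost of equation (3): 0.5*||y - Ax||_2^2 + alpha*||x||_1."""
    return 0.5 * np.sum((y - A @ x) ** 2) + alpha * np.sum(np.abs(x))
```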
The iterative soft-thresholding algorithm (ISTA) for LASSO solves the LASSO problem (3) by moving towards the direction of steepest descent at each iteration and applying a threshold to promote sparsity; x(t) denotes the estimated recovered signal at the end of the t-th iteration (t ≥ 1);
starting from the initial guess x(0) = 0, a residual vector r(t) = y - Ax(t-1) is computed, which evaluates how well the current estimate is consistent with the data; the residual vector is used to compute the gradient vector Δ(t) = Aᵀr(t), which represents the direction of minimization of LASSO, and τ is the step size of the update; finally,
x(t) = η_{ατ}(x(t-1) + τΔ(t))
where η_γ is the soft-thresholding operator applied element-wise, η_γ(u) = sign(u) · max(|u| - γ, 0).
Further, the parameters α and τ of the threshold function are selected optimally through computational experiments; optimality is defined in terms of phase transitions, where the probability of success of the algorithm is maximized; if τ ≤ 1/‖AᵀA‖₂, i.e. τ does not exceed the reciprocal of the largest eigenvalue of AᵀA, then for any initial choice x(0) the sequence generated by ISTA converges to a minimizer of equation (3).
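As a sketch of the ISTA iteration described above, the code below performs the gradient step on the data-fidelity term followed by element-wise soft-thresholding; the step size is set to the reciprocal of the largest eigenvalue of AᵀA, matching the stated convergence condition. The fixed iteration count and the function names are illustrative assumptions.

```python
import numpy as np

def soft_threshold(u, gamma):
    """Element-wise soft-thresholding: eta_gamma(u) = sign(u) * max(|u| - gamma, 0)."""
    return np.sign(u) * np.maximum(np.abs(u) - gamma, 0.0)

def ista(A, y, alpha, n_iter=500):
    """Minimal ISTA sketch for the LASSO cost (1/2)*||y - Ax||_2^2 + alpha*||x||_1."""
    x = np.zeros(A.shape[1])                  # initial guess x(0) = 0
    tau = 1.0 / np.linalg.norm(A, 2) ** 2     # step size <= 1 / lambda_max(A^T A)
    for _ in range(n_iter):
        r = y - A @ x                         # residual r(t)
        delta = A.T @ r                       # gradient direction Delta(t)
        x = soft_threshold(x + tau * delta, alpha * tau)
    return x
```

With the sensing model sketched earlier, a call such as ista(A, y, alpha=0.05) would, for instance, return an estimate of the sparse signal.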
Wherein, the alternating direction method of multipliers (ADMM) for LASSO redefines the optimization problem as follows:
min_{x,z} ½‖Ax - y‖₂² + α‖z‖₁  s.t.  x - z = 0    (5)
ADMM attempts to combine the advantages of dual decomposition and of the augmented Lagrangian method by iteratively minimizing the augmented Lagrangian with respect to the primal variable x and then updating the dual variable; more formally, ADMM solves problem (5) through the iterations
x(t+1) = (AᵀA + ρI)⁻¹(Aᵀy + ρ(z(t) - u(t)))
z(t+1) = η_{α/ρ}(x(t+1) + u(t))
u(t+1) = u(t) + x(t+1) - z(t+1)
where u is the dual variable (scaled Lagrange multiplier) associated with the constraint in equation (5), z is the auxiliary vector and ρ > 0 is the penalty parameter; ADMM consists of an initial phase, which includes inverting the matrix (AᵀA + ρI), and an iteration phase.
Furthermore, for the alternating direction method of multipliers, in terms of complexity ADMM requires fewer iterations than ISTA, while the complexity of each iteration is comparable; in practice, each ADMM iteration requires two matrix-vector multiplications, each of which can be parallelized; however, ADMM initially requires the n × n matrix (AᵀA + ρI) to be inverted and stored in memory; the inversion complexity is O(n³), so it becomes a major source of complexity as the dimension n of the sampled signal increases.
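The following sketch illustrates the ADMM iteration in the scaled form written above, making explicit that the n × n inversion happens once in the initial phase while each iteration only performs matrix-vector products and a soft-threshold; the penalty ρ, the iteration count and the function name are illustrative assumptions.

```python
import numpy as np

def admm_lasso(A, y, alpha, rho=1.0, n_iter=100):
    """Minimal ADMM sketch for the reformulated LASSO of equation (5)."""
    n = A.shape[1]
    Aty = A.T @ y
    M_inv = np.linalg.inv(A.T @ A + rho * np.eye(n))   # initial phase: O(n^3) inversion
    x = np.zeros(n)
    z = np.zeros(n)                                     # auxiliary vector
    u = np.zeros(n)                                     # scaled dual variable
    for _ in range(n_iter):
        x = M_inv @ (Aty + rho * (z - u))               # x-update: a matrix-vector product
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - alpha / rho, 0.0)   # soft-threshold
        u = u + x - z                                   # dual update
    return x
```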
The algorithm design with a circulant sensing matrix comprises the circulant sensing matrix itself and the ADMM for the circulant sensing matrix.
Further, regarding the circulant sensing matrix: given a generic dense matrix A of size m × n and a vector x of size n, the product Ax requires O(mn) operations; moreover, the multiplication must be repeated many times with different input vectors; to reduce storage and computational complexity, one can consider structured matrices that are dense but depend on only O(n) parameters;
let v be the 1 × n vector corresponding to the first row of A; the vector v is referred to as the sensing vector: A is circulant, so each row of A can be represented as a shift of the sensing vector, i.e. A_{i,j} = v_{(j-i) mod n}, where mod denotes the remainder operator; similarly, the transpose Aᵀ can be represented through shifts of the sensing vector, that is, (Aᵀ)_{i,j} = v_{(i-j) mod n}; therefore Aᵀ is itself circulant, generated by the index-reversed sensing vector.
From a computational point of view, a circulant matrix can be diagonalized by the discrete Fourier transform; thus the matrix-vector multiplication y = Ax can be performed efficiently with the fast Fourier transform, with complexity of order O(n log n); although the use of circulant matrices naturally reduces the storage required by ISTA, ADMM still requires inverting and storing an n × n matrix.
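The sketch below shows the FFT-based multiplication by a circulant matrix defined, as in the text, by its first-row sensing vector v with A_{i,j} = v_{(j-i) mod n}; the first column is recovered from v, and the product reduces to an element-wise multiplication in the Fourier domain with O(n log n) cost. The helper name and the small dense-matrix self-check are illustrative additions.

```python
import numpy as np

def circulant_matvec(v, x):
    """Compute A @ x in O(n log n) for the circulant A with A[i, j] = v[(j - i) mod n].

    The first column of A is c[i] = v[(-i) mod n]; A @ x is then the circular
    convolution of c with x, which the discrete Fourier transform diagonalizes.
    """
    c = np.roll(v[::-1], 1)                       # first column of A
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

# Illustrative sanity check against an explicit O(n^2) construction of A.
rng = np.random.default_rng(0)
n = 8
v = rng.standard_normal(n)
x = rng.standard_normal(n)
A = np.array([[v[(j - i) % n] for j in range(n)] for i in range(n)])
assert np.allclose(A @ x, circulant_matvec(v, x))
```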
Further, for the ADMM with a circulant sensing matrix, let Ω ⊂ {1, …, n} be chosen at random with |Ω| = m, and consider A in the form A = PC, where C ∈ ℝ^{n×n} is a square circulant matrix and P is a binary diagonal matrix with P_{i,i} = 1 if i ∈ Ω;
the LASSO problem can then be written as follows:
min_{x,v,z} ½‖Pv - y‖₂² + α‖z‖₁  s.t.  v = Cx, z = x
Consider the augmented Lagrangian function:
L(x, v, z, μ, ν) = ½‖Pv - y‖₂² + α‖z‖₁ + μᵀ(v - Cx) + (ρ/2)‖v - Cx‖₂² + νᵀ(z - x) + (ρ/2)‖z - x‖₂²
where x, v, z are the primal variables, μ, ν are the Lagrange multipliers and ρ > 0 is the penalty parameter; the following updates are obtained by iteratively minimizing L(x, v, z, μ, ν), starting from the initial conditions μ(0) = ν(0) = z(0) = v(0) = 0;
the resulting updates form the pseudo-code of the algorithm (see FIG. 2); the two matrix inversions required by the updates can be performed offline.
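The two matrices whose inversion the text says can be performed offline are not reproduced here; as a hedged illustration of why the circulant structure makes such offline work cheap, the sketch below applies the inverse of an assumed circulant normal matrix of the form CᵀC + ρI entirely in the Fourier domain, so the only precomputed quantity is a vector of n eigenvalues rather than a stored n × n inverse. The matrix form, the penalty ρ and the function names are assumptions for illustration only.

```python
import numpy as np

def make_circulant_solver(c, rho):
    """Return a function applying (C^T C + rho*I)^{-1} for a real circulant C
    with first column c, using the DFT diagonalization of circulant matrices."""
    eig = np.abs(np.fft.fft(c)) ** 2 + rho        # eigenvalues of C^T C + rho*I, computed offline
    def solve(b):
        return np.real(np.fft.ifft(np.fft.fft(b) / eig))   # O(n log n) per application
    return solve

# Illustrative check against a dense solve (small n only).
rng = np.random.default_rng(1)
n, rho = 8, 0.5
c = rng.standard_normal(n)
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
solve = make_circulant_solver(c, rho)
b = rng.standard_normal(n)
assert np.allclose(np.linalg.solve(C.T @ C + rho * np.eye(n), b), solve(b))
```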
Drawings
FIG. 1 is a system diagram of a compressed sensing method based on a GPU acceleration algorithm according to the present invention.
FIG. 2 shows the alternating direction method of multipliers of the compressed sensing method based on a GPU acceleration algorithm according to the present invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments in the present application can be combined with each other without conflict, and the present invention is further described in detail with reference to the drawings and specific embodiments.
FIG. 1 is a system diagram of the compressed sensing method based on a GPU acceleration algorithm according to the present invention. The method mainly comprises sparse signal recovery through the least absolute shrinkage and selection operator (LASSO), the iterative soft-thresholding algorithm (ISTA) for LASSO, the alternating direction method of multipliers (ADMM) for LASSO, and algorithm design with a circulant sensing matrix.
For the compressed sensing, x* ∈ ℝ^n is the sparse signal to be recovered, i.e. at most k < n elements of x* are non-zero; A is the sensing matrix of size m × n (m < n), and the sensing process is represented by the linear combination:
y = Ax*    (1)
where y = (y₁, …, y_m) is the measurement vector.
For the sparse signal recovery through LASSO, a sparsity constraint and additional assumptions are applied to the matrix A:
min_x ‖x‖₀  s.t.  Ax = y    (2)
If x* is k-sparse and, for every index set Γ ⊆ {1, …, n} with |Γ| = 2k, the columns of A indexed by Γ are linearly independent, then x* is the unique solution of equation (2); however, this non-convex problem has combinatorial complexity in the size of the problem; problem (2) is therefore generally solved through convex relaxation, which amounts to minimizing the following cost function, also known as LASSO:
x̂ = argmin_x { ½‖y - Ax‖₂² + α‖x‖₁ }    (3)
where ½‖y - Ax‖₂² is the loss function measuring how well a vector x ∈ ℝ^n fits the data y, α > 0 is a regularization parameter, and ‖x‖₁ is the term that promotes sparsity of the estimate.
The iterative soft-thresholding algorithm (ISTA) for LASSO solves the LASSO problem (3) by moving towards the direction of steepest descent at each iteration and applying a threshold to promote sparsity; x(t) denotes the estimated recovered signal at the end of the t-th iteration (t ≥ 1);
starting from the initial guess x(0) = 0, a residual vector r(t) = y - Ax(t-1) is computed, which evaluates how well the current estimate is consistent with the data; the residual vector is used to compute the gradient vector Δ(t) = Aᵀr(t), which represents the direction of minimization of LASSO, and τ is the step size of the update; finally,
x(t) = η_{ατ}(x(t-1) + τΔ(t))
where η_γ is the soft-thresholding operator applied element-wise.
The parameters α and τ of the threshold function are selected optimally through computational experiments; optimality is defined in terms of phase transitions, where the probability of success of the algorithm is maximized; if τ ≤ 1/‖AᵀA‖₂, then for any initial choice x(0) the sequence generated by ISTA converges to a minimizer of equation (3).
The algorithm design with a circulant sensing matrix comprises the circulant sensing matrix and the ADMM for the circulant sensing matrix.
For the circulant sensing matrix, given a generic dense matrix A of size m × n and a vector x of size n, the product Ax requires O(mn) operations; moreover, the multiplication must be repeated many times with different input vectors; to reduce storage and computational complexity, one can consider structured matrices that are dense but depend on only O(n) parameters;
let v be the 1 × n vector corresponding to the first row of A; the vector v is referred to as the sensing vector: A is circulant, so each row of A can be represented as a shift of the sensing vector, i.e. A_{i,j} = v_{(j-i) mod n}, where mod denotes the remainder operator; similarly, the transpose Aᵀ can be represented through shifts of the sensing vector, that is, (Aᵀ)_{i,j} = v_{(i-j) mod n}; therefore Aᵀ is itself circulant, generated by the index-reversed sensing vector.
From a computational point of view, a circulant matrix can be diagonalized by the discrete Fourier transform; thus the matrix-vector multiplication y = Ax can be performed efficiently with the fast Fourier transform, with complexity of order O(n log n); although the use of circulant matrices naturally reduces the storage required by ISTA, ADMM still requires inverting and storing an n × n matrix.
For the ADMM with a circulant sensing matrix, let Ω ⊂ {1, …, n} be chosen at random with |Ω| = m, and consider A in the form A = PC, where C ∈ ℝ^{n×n} is a square circulant matrix and P is a binary diagonal matrix with P_{i,i} = 1 if i ∈ Ω;
the LASSO problem can then be written as follows:
min_{x,v,z} ½‖Pv - y‖₂² + α‖z‖₁  s.t.  v = Cx, z = x
Consider the augmented Lagrangian function:
L(x, v, z, μ, ν) = ½‖Pv - y‖₂² + α‖z‖₁ + μᵀ(v - Cx) + (ρ/2)‖v - Cx‖₂² + νᵀ(z - x) + (ρ/2)‖z - x‖₂²
where x, v, z are the primal variables, μ, ν are the Lagrange multipliers and ρ > 0 is the penalty parameter; the following updates are obtained by iteratively minimizing L(x, v, z, μ, ν), starting from the initial conditions μ(0) = ν(0) = z(0) = v(0) = 0;
the resulting updates form the pseudo-code of the algorithm (see FIG. 2); the two matrix inversions required by the updates can be performed offline.
FIG. 2 shows the alternating direction method of multipliers of the compressed sensing method based on a GPU acceleration algorithm according to the present invention. In the ADMM for LASSO, the optimization problem is redefined as follows:
min_{x,z} ½‖Ax - y‖₂² + α‖z‖₁  s.t.  x - z = 0    (8)
ADMM attempts to combine the advantages of dual decomposition and of the augmented Lagrangian method by iteratively minimizing the augmented Lagrangian with respect to the primal variable x and then updating the dual variable; more formally, ADMM solves problem (8) through the iterations
x(t+1) = (AᵀA + ρI)⁻¹(Aᵀy + ρ(z(t) - u(t)))
z(t+1) = η_{α/ρ}(x(t+1) + u(t))
u(t+1) = u(t) + x(t+1) - z(t+1)
where u is the dual variable (scaled Lagrange multiplier) associated with the constraint in equation (8), z is the auxiliary vector and ρ > 0 is the penalty parameter; ADMM consists of an initial phase, which includes inverting the matrix (AᵀA + ρI), and an iteration phase.
In terms of complexity, ADMM requires fewer iterations than ISTA, while the complexity of each iteration is comparable; in practice, each ADMM iteration requires two matrix-vector multiplications, each of which can be parallelized; however, ADMM initially requires the n × n matrix (AᵀA + ρI) to be inverted and stored in memory; the inversion complexity is O(n³), so it becomes a major source of complexity as the dimension n of the sampled signal increases.
It will be appreciated by persons skilled in the art that the invention is not limited to details of the foregoing embodiments, and that the invention can be embodied in other specific forms without departing from the spirit or scope of the invention. In addition, various modifications and alterations of this invention may be made by those skilled in the art without departing from the spirit and scope of this invention, and such modifications and alterations should also be viewed as being within the scope of this invention. It is therefore intended that the appended claims be interpreted as including the preferred embodiment and all alterations and modifications as fall within the scope of the invention.

Claims (10)

1. A compressed sensing method based on a graphics processing unit acceleration algorithm, characterized by mainly comprising: (I) sparse signal recovery through the least absolute shrinkage and selection operator (LASSO); (II) the iterative soft-thresholding algorithm (ISTA) for LASSO; (III) the alternating direction method of multipliers (ADMM) for LASSO; and (IV) algorithm design with a circulant sensing matrix.
2. The compressed sensing according to claim 1, characterized in that x* ∈ ℝ^n is the sparse signal to be recovered, i.e. at most k < n elements of x* are non-zero; A is the sensing matrix of size m × n (m < n), and the sensing process is represented by the linear combination:
y = Ax*    (1)
where y = (y₁, …, y_m) is the measurement vector.
3. The sparse signal recovery through LASSO (I) according to claim 1, characterized in that a sparsity constraint and additional assumptions are applied to the matrix A:
min_x ‖x‖₀  s.t.  Ax = y    (2)
if x* is k-sparse and, for every index set Γ ⊆ {1, …, n} with |Γ| = 2k, the columns of A indexed by Γ are linearly independent, then x* is the unique solution of equation (2); however, this non-convex problem has combinatorial complexity in the size of the problem; problem (2) is therefore generally solved through convex relaxation, which amounts to minimizing the following cost function, also known as LASSO:
x̂ = argmin_x { ½‖y - Ax‖₂² + α‖x‖₁ }    (3)
where ½‖y - Ax‖₂² is the loss function measuring how well a vector x ∈ ℝ^n fits the data y, α > 0 is a regularization parameter, and ‖x‖₁ is the term that promotes sparsity of the estimate.
4. The iterative soft-thresholding algorithm (ISTA) for LASSO according to claim 3, characterized in that ISTA solves the LASSO problem (3) by moving towards the direction of steepest descent at each iteration and applying a threshold to promote sparsity; x(t) denotes the estimated recovered signal at the end of the t-th iteration (t ≥ 1);
starting from the initial guess x(0) = 0, a residual vector r(t) = y - Ax(t-1) is computed, which evaluates how well the current estimate is consistent with the data; the residual vector is used to compute the gradient vector Δ(t) = Aᵀr(t), which represents the direction of minimization of LASSO, and τ is the step size of the update; finally,
x(t) = η_{ατ}(x(t-1) + τΔ(t))
where η_γ is the soft-thresholding operator applied element-wise, η_γ(u) = sign(u) · max(|u| - γ, 0).
5. The threshold function according to claim 4, characterized in that the parameters α and τ are selected optimally through computational experiments; optimality is defined in terms of phase transitions, where the probability of success of the algorithm is maximized; if τ ≤ 1/‖AᵀA‖₂, i.e. τ does not exceed the reciprocal of the largest eigenvalue of AᵀA, then for any initial choice x(0) the sequence generated by ISTA converges to a minimizer of equation (3).
6. The alternating direction method of multipliers (ADMM) for LASSO (III) according to claim 1, characterized in that the optimization problem is redefined as follows:
min_{x,z} ½‖Ax - y‖₂² + α‖z‖₁  s.t.  x - z = 0    (5)
ADMM attempts to combine the advantages of dual decomposition and of the augmented Lagrangian method by iteratively minimizing the augmented Lagrangian with respect to the primal variable x and then updating the dual variable; more formally, ADMM solves problem (5) through the iterations
x(t+1) = (AᵀA + ρI)⁻¹(Aᵀy + ρ(z(t) - u(t)))
z(t+1) = η_{α/ρ}(x(t+1) + u(t))
u(t+1) = u(t) + x(t+1) - z(t+1)
where u is the dual variable (scaled Lagrange multiplier) associated with the constraint in equation (5), z is the auxiliary vector and ρ > 0 is the penalty parameter; ADMM consists of an initial phase, which includes inverting the matrix (AᵀA + ρI), and an iteration phase.
7. The alternating direction method of multipliers according to claim 6, characterized in that, in terms of complexity, ADMM requires fewer iterations than ISTA while the complexity of each iteration is comparable; in practice, each ADMM iteration requires two matrix-vector multiplications, each of which can be parallelized; however, ADMM initially requires the n × n matrix (AᵀA + ρI) to be inverted and stored in memory; the inversion complexity is O(n³), so it becomes a major source of complexity as the dimension n of the sampled signal increases.
8. The algorithm design with a circulant sensing matrix (IV) according to claim 1, characterized by comprising the circulant sensing matrix and the ADMM for the circulant sensing matrix.
9. The circulant sensing matrix according to claim 8, characterized in that, given a generic dense matrix A of size m × n and a vector x of size n, the product Ax requires O(mn) operations; moreover, the multiplication must be repeated many times with different input vectors; to reduce storage and computational complexity, structured matrices that are dense but depend on only O(n) parameters may be considered;
let v be the 1 × n vector corresponding to the first row of A; the vector v is referred to as the sensing vector: A is circulant, so each row of A can be represented as a shift of the sensing vector, i.e. A_{i,j} = v_{(j-i) mod n}, where mod denotes the remainder operator; similarly, the transpose Aᵀ can be represented through shifts of the sensing vector, that is, (Aᵀ)_{i,j} = v_{(i-j) mod n}, so Aᵀ is itself circulant, generated by the index-reversed sensing vector;
from a computational point of view, a circulant matrix can be diagonalized by the discrete Fourier transform; thus the matrix-vector multiplication y = Ax can be performed efficiently with the fast Fourier transform, with complexity of order O(n log n); although the use of circulant matrices naturally reduces the storage required by ISTA, ADMM still requires inverting and storing an n × n matrix.
10. The ADMM based on a circulant matrix according to claim 8, characterized in that Ω ⊂ {1, …, n} is chosen at random with |Ω| = m, and A is considered in the form A = PC, where C ∈ ℝ^{n×n} is a square circulant matrix and P is a binary diagonal matrix with P_{i,i} = 1 if i ∈ Ω;
the LASSO problem can then be written as follows:
min_{x,v,z} ½‖Pv - y‖₂² + α‖z‖₁  s.t.  v = Cx, z = x
consider the augmented Lagrangian function:
L(x, v, z, μ, ν) = ½‖Pv - y‖₂² + α‖z‖₁ + μᵀ(v - Cx) + (ρ/2)‖v - Cx‖₂² + νᵀ(z - x) + (ρ/2)‖z - x‖₂²
where x, v, z are the primal variables, μ, ν are the Lagrange multipliers and ρ > 0 is the penalty parameter; the updates are obtained by iteratively minimizing L(x, v, z, μ, ν), starting from the initial conditions μ(0) = ν(0) = z(0) = v(0) = 0;
the resulting updates form the pseudo-code of the algorithm; the two matrix inversions required by the updates can be performed offline.
CN201710954822.7A 2017-10-13 2017-10-13 A kind of compression sensing method based on graphics processing unit accelerating algorithm Withdrawn CN107730435A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710954822.7A CN107730435A (en) 2017-10-13 2017-10-13 A kind of compression sensing method based on graphics processing unit accelerating algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710954822.7A CN107730435A (en) 2017-10-13 2017-10-13 A kind of compression sensing method based on graphics processing unit accelerating algorithm

Publications (1)

Publication Number Publication Date
CN107730435A true CN107730435A (en) 2018-02-23

Family

ID=61211344

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710954822.7A Withdrawn CN107730435A (en) 2017-10-13 2017-10-13 A kind of compression sensing method based on graphics processing unit accelerating algorithm

Country Status (1)

Country Link
CN (1) CN107730435A (en)


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ATTILIO FIANDROTTI et al.: "GPU-Accelerated Algorithms for Compressed Signals Recovery with Application to Astronomical Imagery Deblurring", arXiv:1707.02244v1 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109709547A (en) * 2019-01-21 2019-05-03 电子科技大学 A kind of reality beam scanning radar acceleration super-resolution imaging method

Similar Documents

Publication Publication Date Title
Ongie et al. Deep learning techniques for inverse problems in imaging
Yue et al. Image super-resolution: The techniques, applications, and future
Trinh et al. Novel example-based method for super-resolution and denoising of medical images
Mehta et al. Rodeo: robust de-aliasing autoencoder for real-time medical image reconstruction
Huang et al. Composite splitting algorithms for convex optimization
US11810301B2 (en) System and method for image segmentation using a joint deep learning model
Chen et al. Learning memory augmented cascading network for compressed sensing of images
Kutyniok et al. Shearlets: theory and applications
Machidon et al. Deep learning for compressive sensing: a ubiquitous systems perspective
Liu et al. Group sparsity with orthogonal dictionary and nonconvex regularization for exact MRI reconstruction
Fiandrotti et al. GPU-accelerated algorithms for compressed signals recovery with application to astronomical imagery deblurring
Durasov et al. Double refinement network for efficient monocular depth estimation
Gilton et al. Learned patch-based regularization for inverse problems in imaging
CN107730435A (en) A kind of compression sensing method based on graphics processing unit accelerating algorithm
JP4563982B2 (en) Motion estimation method, apparatus, program thereof, and recording medium thereof
Fan et al. Inversenet: Solving inverse problems with splitting networks
Madireddy et al. In situ compression artifact removal in scientific data using deep transfer learning and experience replay
Vemulapalli et al. Deep networks and mutual information maximization for cross-modal medical image synthesis
Kumar et al. Fractional Sailfish Optimizer with Deep Convolution Neural Network for Compressive Sensing Based Magnetic Resonance Image Reconstruction
Ke et al. Deep low-rank prior in dynamic MR imaging
Lee et al. Online update techniques for projection based robust principal component analysis
Song et al. SODAS-Net: Side-Information-Aided Deep Adaptive Shrinkage Network for Compressive Sensing
Zhao et al. Temporal Super-Resolution for Fast T1 Mapping
Pan et al. DiffSCI: Zero-Shot Snapshot Compressive Imaging via Iterative Spectral Diffusion Model
Lioutas Mapping Low-Resolution Images To Multiple High-Resolution Images Using Non-Adversarial Mapping

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication
WW01 Invention patent application withdrawn after publication

Application publication date: 20180223