CN108846349A - Face recognition method based on a dynamic Spiking neural network - Google Patents
Face recognition method based on a dynamic Spiking neural network
- Publication number
- CN108846349A CN108846349A CN201810585279.2A CN201810585279A CN108846349A CN 108846349 A CN108846349 A CN 108846349A CN 201810585279 A CN201810585279 A CN 201810585279A CN 108846349 A CN108846349 A CN 108846349A
- Authority
- CN
- China
- Prior art keywords
- neural network
- spiking neural
- weight
- dynamic
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2135—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/049—Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The invention discloses a face recognition method based on a dynamic Spiking neural network, in the technical field of image processing. The method comprises the following steps: S1, converting a facial image to grayscale pixels to obtain a grayscale image; S2, performing feature extraction on the grayscale image to obtain low-dimensional features associated with its regional features; S3, converting the strengths of the low-dimensional features into spike time sequences; S4, training a Spiking neural network on multiple facial images to obtain a dynamic Spiking neural network; S5, inputting the spike time sequence obtained by processing the facial image to be recognized through S1-S3 into the dynamic Spiking neural network obtained in S4, and comparing the resulting adjusted weights with the weights of the existing neurons in the dynamic Spiking neural network; the class of the neuron whose weights are closest is the class of the facial image to be recognized. By creatively employing a dynamic Spiking neural network, the invention significantly improves recognition efficiency compared with traditional Spiking image recognition methods.
Description
Technical field
The present invention relates to the technical field of image processing, and more particularly to a face recognition method based on a dynamic Spiking neural network.
Background technique
Face recognition is a research direction that has emerged from related disciplines such as graphics, computer science and technology, and pattern recognition. Among the ways of identifying people by extracting human characteristics, using the face as the criterion is the most straightforward. Although face recognition technology has existed for only a few decades, it has become one of the more popular research topics today. Especially in the era of artificial intelligence, with the rapid development of science and technology and people's pursuit of safety and intelligent living, the requirements on the validity, convenience, and agility of face recognition technology are ever higher.
As the third generation of neural networks, Spiking neural networks focus on the timing of spike emission and are well suited to on-chip implementation. However, the vast majority of supervised and unsupervised learning algorithms for spiking neural networks have a fixed structure, in which the sizes of the hidden and output layers must be specified in advance and training proceeds in offline batch mode. These methods can therefore only be applied when the number of classes or clusters is known; moreover, they cannot handle continuously changing data, because they would need to retrain on both old and new data samples. Biological neural networks, by contrast, are well known for their capacity for continual and incremental learning, which lets them keep adapting to constantly changing, unstable environments. Therefore, for an SNN (Spiking Neural Network) to interact with a continuously changing environment, its structure and weights must adapt dynamically to new data, and catastrophic interference or forgetting must be avoided when learning new information.
Summary of the invention
The object of the present invention is to solve the prior art's strong dependence on the classes of the samples in face recognition; to this end, the present invention provides a face recognition method based on a dynamic Spiking neural network.
To achieve the above object, the present invention adopts the following technical scheme:
A face recognition method based on a dynamic Spiking neural network comprises the following steps:
S1, converting a facial image to grayscale pixels to obtain a grayscale image;
S2, performing feature extraction on the grayscale image to obtain low-dimensional features associated with its regional features;
S3, converting the strengths of the low-dimensional features into spike time sequences;
S4, training a Spiking neural network using multiple facial images:
processing multiple facial images through S1-S3 in turn to obtain the corresponding spike time sequences; adjusting the corresponding initial weights according to the precise time of each spike in each spike time sequence to obtain adjusted weights; then performing weight learning, and deciding whether to add a new neuron according to the label of each facial image, the adjusted weights, and the existing neurons in the Spiking neural network; after all facial images have been input to the Spiking neural network, a stable dynamic Spiking neural network is obtained;
S5, recognizing a facial image to be recognized:
inputting the spike time sequence obtained by processing the facial image to be recognized through S1-S3 into the dynamic Spiking neural network obtained in S4; adjusting the initial weights according to the precise times of the spikes in the input spike time sequence to obtain the corresponding adjusted weights; comparing the adjusted weights with the weights of the existing neurons in the dynamic Spiking neural network and finding the neuron whose weights are closest to the adjusted weights; the class of that neuron is the class of the facial image to be recognized.
Further, in S2, feature extraction is performed on the grayscale image using PCA dimensionality reduction.
Further, in S3, the strengths of the low-dimensional features are converted into spike time sequences using Gaussian encoding.
Further, the precise-time-based weight learning in S4 is realized with a decaying function, i.e., the spike that arrives first carries the most information.
The beneficial effects of the present invention are as follows:
1. The present invention extracts features of the facial image to be recognized based on the idea of PCA dimensionality reduction, then encodes the extracted features using Gaussian encoding, and learns the encoded feature sequences with a dynamic Spiking neural network learning algorithm. The initial synaptic weights are adjusted according to the precise spike times of the input spike time sequence, and neurons are dynamically added or updated according to the similarity between weights to obtain the recognition output. By creatively using a dynamic Spiking neural network, recognition efficiency is significantly improved compared with traditional Spiking image recognition methods.
2. By adjusting the initial synaptic weights according to the precise spike times of the input spike time sequence, the present invention improves the accuracy of facial image recognition.
3. By comparing weights through similarity, the present invention further improves the efficiency of facial image recognition.
4. By using a dynamic Spiking neural network structure that adds neurons dynamically according to the input samples, the labels of the samples do not need to be counted in advance, which reduces the method's dependence on the samples.
Detailed description of the invention
Fig. 1 is a schematic diagram of the recognition process of the present invention.
Fig. 2 is a schematic diagram of the overall network structure of the present invention.
Fig. 3 is the training flowchart of the dynamic Spiking neural network of the present invention.
Fig. 4 shows sample images from the ORL data set.
Specific embodiment
To enable those skilled in the art to better understand the present invention, the present invention is described in further detail below with reference to the accompanying drawings and the following embodiment.
Embodiment 1
As shown in Figs. 1 to 4, the present embodiment provides a face recognition method based on a dynamic Spiking neural network, comprising the following steps:
S1, converting a facial image to grayscale pixels to obtain a grayscale image;
S2, performing feature extraction on the grayscale image using PCA dimensionality reduction to obtain low-dimensional features associated with its regional features, comprising the following steps:
S2.1, expressing each image as an N-dimensional column vector x_i, where i = 1, 2, ..., L;
S2.2, calculating the mean x̄ of the L sample vectors;
S2.3, calculating the covariance matrix;
S2.4, the off-diagonal elements are the correlations between the column-vector elements; the calculation formula is:
C = (1/L) · Σ_{i=1}^{L} (x_i − x̄)(x_i − x̄)^T
S2.5, performing eigendecomposition of the covariance matrix to obtain the eigenvalues;
S2.6, sorting the eigenvalues in decreasing order and taking the eigenvectors corresponding to the first r eigenvalues as the projection matrix;
S2.7, computing the new low-dimensional vectors, i.e. the low-dimensional features, with the projection matrix.
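Steps S2.1-S2.7 can be sketched in Python with NumPy; the toy data and the choice of r here are illustrative, not from the patent:

```python
import numpy as np

def pca_project(X, r):
    """PCA per S2.1-S2.7: X is an (N, L) matrix whose L columns are
    N-dimensional sample vectors; returns the (r, L) low-dimensional features."""
    x_bar = X.mean(axis=1, keepdims=True)      # S2.2: mean of the L samples
    centered = X - x_bar
    C = centered @ centered.T / X.shape[1]     # S2.3/S2.4: covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)       # S2.5: eigendecomposition
    order = np.argsort(eigvals)[::-1][:r]      # S2.6: top-r eigenvalues
    P = eigvecs[:, order]                      # projection matrix
    return P.T @ centered                      # S2.7: low-dimensional features

# toy example: L = 5 samples of dimension N = 4, reduced to r = 2
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 5))
Y = pca_project(X, 2)
print(Y.shape)  # (2, 5)
```

In practice the columns of X would be flattened grayscale face images from S1.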
S3, converting the feature strengths of the first-layer neurons into spike time sequences using Gaussian encoding, specifically:
Assume the low-dimensional vector obtained in S2.7 has m features (x1, x2, ..., xm); after Gaussian encoding, m*p spike times are obtained. First, the mean μ_j^i and standard deviation σ^i of the i-th feature in the j-th receptive field are calculated, where I_min^i and I_max^i are respectively the minimum and maximum values of the i-th feature, and β is a parameter that controls the coverage of the Gaussian receptive fields by influencing the standard deviation. From the mean μ_j^i and standard deviation σ^i, the Gaussian function f_j^i is obtained, and from the value of the Gaussian function the spike time of each input neuron is computed, yielding the spike time sequence.
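The encoding formulas appear as images in the original and are not reproduced above. A common Bohte-style Gaussian population coding that matches the description (per-feature minima and maxima, a β parameter controlling field width, and earlier spikes for stronger activations) can be sketched as follows; the exact center and width formulas, the value p = 6, and the use of the batch min/max as I_min, I_max are assumptions:

```python
import numpy as np

def gaussian_encode(features, p=6, beta=1.5, t_max=10.0):
    """Encode m feature values into m*p spike times using Gaussian
    receptive fields: centers spread over the feature range, beta
    shrinks or widens the fields via sigma, and a stronger Gaussian
    activation yields an earlier spike time."""
    features = np.asarray(features, dtype=float)
    i_min, i_max = features.min(), features.max()  # batch proxy for I_min, I_max
    j = np.arange(1, p + 1)
    mu = i_min + (2.0 * j - 3.0) / 2.0 * (i_max - i_min) / (p - 2)
    sigma = (i_max - i_min) / (beta * (p - 2))
    act = np.exp(-((features[:, None] - mu[None, :]) ** 2) / (2.0 * sigma ** 2))
    return t_max * (1.0 - act)  # strong activation -> early spike

times = gaussian_encode([0.1, 0.5, 0.9], p=6)
print(times.shape)  # (3, 6): m*p spike times
```

Each row holds the p spike times produced for one low-dimensional feature.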
S4, training a Spiking neural network using multiple facial images:
Multiple facial images selected from the ORL face data set are processed through S1-S3 in turn to obtain the corresponding spike time sequences; the corresponding initial weights are adjusted according to the precise spike times in each spike time sequence, and classification proceeds using several strategies. After all facial images have been input to the Spiking neural network, a stable dynamic Spiking neural network is obtained.
S4.1, initial weight adjustment strategy:
Each independent output-layer neuron represents one input pattern. For each input sample, the output layer creates a new neuron, and the weights between this neuron and the encoding layer are learned with a decaying function; the calculation formula is:
w_ij = w_0 + γ·exp(−t_i/τ)
where w_ij is the synaptic weight between input-layer neuron i and output-layer neuron j, w_0 is the initial weight, t_i is the precise spike time of input-layer neuron i (used in place of the spike order), and τ is a time constant.
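The decaying weight rule of S4.1 can be sketched directly; the parameter values w_0, γ, and τ below are illustrative, not taken from the patent:

```python
import math

def initial_weights(spike_times, w0=0.5, gamma=1.0, tau=5.0):
    """S4.1 rule: w_ij = w0 + gamma * exp(-t_i / tau). Earlier spikes
    (smaller t_i) yield larger weights, so the spike that arrives first
    carries the most information."""
    return [w0 + gamma * math.exp(-t / tau) for t in spike_times]

w = initial_weights([0.0, 2.0, 8.0])
print(w[0] > w[1] > w[2])  # True: the weight decays with spike time
```

The earliest spike (t = 0) receives the maximum weight w_0 + γ.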
S4.2, neuron adjustment strategies:
In the training stage, the training input samples are presented to the Spiking neural network one by one, and the information stored in the Spiking neural network is compared with the information carried by each input sample; this information represents the functional relationship between the input features and the sample class labels. The algorithm selects one learning strategy for each sample:
S4.2.1, neuron addition strategy: when the difference between the information in the network and the information carried by the input sample is relatively large, a new neuron is added to the output layer to record the new information;
S4.2.2, neuron merging strategy: when the information of the input sample is sufficiently similar to the information stored in an existing neuron in the network, the new neuron is merged with the most similar neuron. A trained output neuron represents a cluster of spatio-temporal spike patterns; merging neurons according to their similarity and predicting the class label makes sufficiently fast learning possible, while the precise times provided for the weight vectors make the algorithm effective. A merged neuron needs its weights updated: the incoming weights are added onto the original weights to obtain the new weights.
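The two strategies of S4.2 can be sketched as a small dynamic output layer. The Euclidean similarity measure and the threshold value are illustrative assumptions; only the add/merge behavior and the "add incoming weights onto the original weights" update come from the text:

```python
import numpy as np

class DynamicOutputLayer:
    """Sketch of S4.2: add an output neuron when the input is novel
    (S4.2.1), merge with the most similar neuron otherwise (S4.2.2)."""

    def __init__(self, threshold=0.2):
        self.threshold = threshold  # assumed similarity threshold
        self.weights = []           # one weight vector per output neuron
        self.labels = []

    def learn(self, w, label):
        w = np.asarray(w, dtype=float)
        if self.weights:
            dists = [np.linalg.norm(w - v) for v in self.weights]
            k = int(np.argmin(dists))
            if dists[k] < self.threshold and self.labels[k] == label:
                # S4.2.2: add the incoming weights onto the original weights
                self.weights[k] = self.weights[k] + w
                return "merged"
        # S4.2.1: record the new information with a new output neuron
        self.weights.append(w)
        self.labels.append(label)
        return "added"

layer = DynamicOutputLayer(threshold=0.2)
print(layer.learn([1.0, 0.0], "A"))   # added
print(layer.learn([1.05, 0.0], "A"))  # merged
print(layer.learn([0.0, 1.0], "B"))   # added
```

After the three samples, the layer holds two neurons, one per class, without the number of classes having been specified in advance.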
S5, recognizing a facial image to be recognized:
A facial image is selected from the ORL face data set as the image to be recognized. The spike time sequence obtained by processing this image through S1-S3 is input into the dynamic Spiking neural network obtained in S4; the initial weights are adjusted according to the precise spike times in the input spike time sequence to obtain the corresponding adjusted weights; the adjusted weights are compared with the weights of the existing neurons in the dynamic Spiking neural network, and the neuron whose weights are closest to the adjusted weights is found; the class of that neuron is the class of the facial image to be recognized.
The above is only a preferred embodiment of the present invention and is not intended to limit the invention; the scope of patent protection of the present invention is defined by the claims. All equivalent structural changes made using the contents of the specification and the accompanying drawings of the present invention shall likewise fall within the scope of protection of the present invention.
Claims (4)
1. A face recognition method based on a dynamic Spiking neural network, characterized by comprising the following steps:
S1, converting a facial image to grayscale pixels to obtain a grayscale image;
S2, performing feature extraction on the grayscale image to obtain low-dimensional features associated with its regional features;
S3, converting the strengths of the low-dimensional features into spike time sequences;
S4, training a Spiking neural network using multiple facial images:
processing multiple facial images through S1-S3 in turn to obtain the corresponding spike time sequences; adjusting the corresponding initial weights according to the precise time of each spike in each spike time sequence to obtain adjusted weights; then performing weight learning, and deciding whether to add a new neuron according to the label of each facial image, the adjusted weights, and the existing neurons in the Spiking neural network; after all facial images have been input to the Spiking neural network, obtaining a stable dynamic Spiking neural network;
S5, recognizing a facial image to be recognized:
inputting the spike time sequence obtained by processing the facial image to be recognized through S1-S3 into the dynamic Spiking neural network obtained in S4; adjusting the initial weights according to the precise times of the spikes in the input spike time sequence to obtain the corresponding adjusted weights; comparing the adjusted weights with the weights of the existing neurons in the dynamic Spiking neural network and finding the neuron whose weights are closest to the adjusted weights; the class of that neuron being the class of the facial image to be recognized.
2. The face recognition method based on a dynamic Spiking neural network according to claim 1, characterized in that: in S2, feature extraction is performed on the grayscale image using PCA dimensionality reduction.
3. The face recognition method based on a dynamic Spiking neural network according to claim 1, characterized in that: in S3, the strengths of the low-dimensional features are converted into spike time sequences using Gaussian encoding.
4. The face recognition method based on a dynamic Spiking neural network according to claim 1, characterized in that: the precise-time-based weight learning in S4 is realized with a decaying function, i.e., the spike that arrives first carries the most information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810585279.2A CN108846349A (en) | 2018-06-08 | 2018-06-08 | A kind of face identification method based on dynamic Spiking neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810585279.2A CN108846349A (en) | 2018-06-08 | 2018-06-08 | A kind of face identification method based on dynamic Spiking neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108846349A true CN108846349A (en) | 2018-11-20 |
Family
ID=64210289
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810585279.2A Pending CN108846349A (en) | 2018-06-08 | 2018-06-08 | A kind of face identification method based on dynamic Spiking neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108846349A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109871940A (en) * | 2019-01-31 | 2019-06-11 | 清华大学 | A kind of multilayer training algorithm of impulsive neural networks |
CN110135498A (en) * | 2019-05-17 | 2019-08-16 | 电子科技大学 | A kind of image-recognizing method based on depth Evolutionary Neural Network |
CN110674928A (en) * | 2019-09-18 | 2020-01-10 | 电子科技大学 | Online learning method integrating artificial neural network and neural morphological calculation |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103279958A (en) * | 2013-05-31 | 2013-09-04 | 电子科技大学 | Image segmentation method based on Spiking neural network |
CN104662526A (en) * | 2012-07-27 | 2015-05-27 | 高通技术公司 | Apparatus and methods for efficient updates in spiking neuron networks |
CN104685516A (en) * | 2012-08-17 | 2015-06-03 | 高通技术公司 | Apparatus and methods for spiking neuron network learning |
CN105404902A (en) * | 2015-10-27 | 2016-03-16 | 清华大学 | Impulsive neural network-based image feature describing and memorizing method |
CN107194426A (en) * | 2017-05-23 | 2017-09-22 | 电子科技大学 | A kind of image-recognizing method based on Spiking neutral nets |
-
2018
- 2018-06-08 CN CN201810585279.2A patent/CN108846349A/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104662526A (en) * | 2012-07-27 | 2015-05-27 | 高通技术公司 | Apparatus and methods for efficient updates in spiking neuron networks |
CN104685516A (en) * | 2012-08-17 | 2015-06-03 | 高通技术公司 | Apparatus and methods for spiking neuron network learning |
CN103279958A (en) * | 2013-05-31 | 2013-09-04 | 电子科技大学 | Image segmentation method based on Spiking neural network |
CN105404902A (en) * | 2015-10-27 | 2016-03-16 | 清华大学 | Impulsive neural network-based image feature describing and memorizing method |
CN107194426A (en) * | 2017-05-23 | 2017-09-22 | 电子科技大学 | A kind of image-recognizing method based on Spiking neutral nets |
Non-Patent Citations (1)
Title |
---|
LIN ZUO ET AL.: "A Fast Precise-Spike and Weight-Comparison Based Learning Approach for Evolving Spiking Neural Networks", 《ICONIP 2017: NEURAL INFORMATION PROCESSING》 *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109871940A (en) * | 2019-01-31 | 2019-06-11 | 清华大学 | A kind of multilayer training algorithm of impulsive neural networks |
CN109871940B (en) * | 2019-01-31 | 2021-07-27 | 清华大学 | Multi-layer training algorithm of impulse neural network |
CN110135498A (en) * | 2019-05-17 | 2019-08-16 | 电子科技大学 | A kind of image-recognizing method based on depth Evolutionary Neural Network |
CN110674928A (en) * | 2019-09-18 | 2020-01-10 | 电子科技大学 | Online learning method integrating artificial neural network and neural morphological calculation |
CN110674928B (en) * | 2019-09-18 | 2023-10-27 | 电子科技大学 | Online learning method integrating artificial neural network and nerve morphology calculation |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Zeng et al. | Traffic sign recognition using kernel extreme learning machines with deep perceptual features | |
Zahisham et al. | Food recognition with resnet-50 | |
CN104866810B (en) | A kind of face identification method of depth convolutional neural networks | |
Falez et al. | Multi-layered spiking neural network with target timestamp threshold adaptation and stdp | |
CN108427921A (en) | A kind of face identification method based on convolutional neural networks | |
CN110399821B (en) | Customer satisfaction acquisition method based on facial expression recognition | |
CN108830334A (en) | A kind of fine granularity target-recognition method based on confrontation type transfer learning | |
CN109241995B (en) | Image identification method based on improved ArcFace loss function | |
CN109961093B (en) | Image classification method based on crowd-sourcing integrated learning | |
CN109102000A (en) | A kind of image-recognizing method extracted based on layered characteristic with multilayer impulsive neural networks | |
CN108846349A (en) | A kind of face identification method based on dynamic Spiking neural network | |
Fu et al. | An ensemble unsupervised spiking neural network for objective recognition | |
Lagani et al. | Comparing the performance of Hebbian against backpropagation learning using convolutional neural networks | |
CN113177612B (en) | Agricultural pest image identification method based on CNN few samples | |
KR100729273B1 (en) | A method of face recognition using pca and back-propagation algorithms | |
Bouchain | Character recognition using convolutional neural networks | |
CN115238731A (en) | Emotion identification method based on convolution recurrent neural network and multi-head self-attention | |
KR101676101B1 (en) | A Hybrid Method based on Dynamic Compensatory Fuzzy Neural Network Algorithm for Face Recognition | |
CN114818963B (en) | Small sample detection method based on cross-image feature fusion | |
Matiz et al. | Conformal prediction based active learning by linear regression optimization | |
Li et al. | Adaptive dropout method based on biological principles | |
Lee et al. | Face and facial expressions recognition system for blind people using ResNet50 architecture and CNN | |
CN108805177A (en) | Vehicle type identifier method under complex environment background based on deep learning | |
Li et al. | Pattern recognition of spiking neural networks based on visual mechanism and supervised synaptic learning | |
CN110110673A (en) | A kind of face identification method based on two-way 2DPCA and cascade feedforward neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20181120 |