CN108564048A - A deep convolutional neural network method applied to traffic sign recognition - Google Patents

A deep convolutional neural network method applied to traffic sign recognition

Info

Publication number
CN108564048A
Authority
CN
China
Prior art keywords
convolutional neural
neural networks
depth convolutional
networks model
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810358621.5A
Other languages
Chinese (zh)
Inventor
宋丽梅
林文伟
郭庆华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Polytechnic University
Original Assignee
Tianjin Polytechnic University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Polytechnic University
Priority to CN201810358621.5A
Publication of CN108564048A
Current legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/582 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads, of traffic signs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the field of machine vision and relates to a deep convolutional neural network method applied to traffic sign recognition. The method trains on traffic sign images with a deep convolutional neural network to establish a deep convolutional neural network model for traffic sign recognition. The optimal deep convolutional neural network model is selected through testing, traffic signs are acquired with a color camera, and the traffic signs are recognized by the optimal model. The deep convolutional neural network designed by the invention can effectively solve the problem of traffic sign recognition under variable lighting conditions.

Description

A deep convolutional neural network method applied to traffic sign recognition
Technical field
The present invention relates to a deep convolutional neural network method applied to traffic sign recognition and, more specifically, to a deep convolutional neural network method that can be used for traffic sign recognition in complex environments.
Background technology
Deep learning is currently an extremely active research direction in artificial intelligence and machine learning. It has achieved breakthroughs in numerous areas such as speech recognition, image recognition, and natural language processing, and has had a far-reaching influence on both academia and industry. Deep learning is also beginning to be applied in traffic sign recognition systems to address problems such as traffic congestion and frequent accidents. Machine vision methods have already achieved good results in traffic sign recognition systems. Existing studies of traffic sign recognition mainly use the following machine vision methods: template matching, nearest neighbor, artificial neural networks, SVM, and so on. Template matching directly compares the image to be classified with a set of template images and cannot achieve good classification results on traffic sign pictures taken in changeable environments. Escalera et al. proposed a segmentation algorithm based directly on color thresholds, which sets thresholds in the RGB color space to segment regions of interest and then uses information such as the shape features of traffic signs for secondary detection; however, this method is strongly affected by illumination, sample variation, and similar factors. Gu Mingqin et al. used Euclidean distance and a support vector machine classifier to complete traffic sign classification and recognition and obtained a high recognition rate. However, because there are many types of traffic signs and some of them are similar, the environments in which traffic signs are located are complex, illumination varies widely, and jolting during driving causes image distortion and blur, these methods cannot yet be applied maturely in real life. The deep convolutional neural network, as one of the methods of deep learning, has strong learning ability, can extract hidden features from large amounts of sample data, and holds an outstanding position in image classification. At present, the teams that obtain the best results in international image recognition contests all use deep convolutional neural networks. In order to solve the problem that traffic signs cannot be recognized under complex, changing environments, the present invention designs a new traffic sign recognition method based on the deep convolutional neural network method.
Invention content
The present invention designs a deep convolutional neural network method applied to traffic sign recognition. The method can be applied to traffic sign recognition in changeable environments, completing recognition when the environment around the sign is complex, illumination varies widely, and jolting during driving distorts or blurs the image.
The hardware system of the deep convolutional neural network method includes:
a computer for precision control, image acquisition, and data processing;
a color camera for acquiring images;
an operating platform on which the color camera is placed.
The deep convolutional neural network method applied to traffic sign recognition designed by the present invention is characterized in that traffic sign images are recognized and classified by the following steps:
Step 1: Choose a data set containing N classes of traffic signs; the traffic sign data set consists of training images X and test images Y. Uniformly resize the images of the data set to 3-channel images of 32 × 32 pixels.
Step 2: Set the first-layer (input layer) parameters of the deep convolutional neural network model to I1 = m1 × m1 × n1, with m1 = 32 and n1 = 3.
Step 3: Set the second-layer (convolution layer) parameters of the model described in step 2 to C1 = m2 × m2 × n2, with m2 = 7 and n2 = 6.
Step 4: Set the third-layer (convolution layer) parameters of the model described in step 2 to C2 = m3 × m3 × n3, with m3 = 5 and n3 = 12.
Step 5: Set the fourth-layer (pooling layer) parameters of the model described in step 2 to P1 = m4 × m4 × n4, with m4 = 2 and n4 = 1.
Step 6: Set the fifth-layer (convolution layer) parameters of the model described in step 2 to C3 = m5 × m5 × n5, with m5 = 3 and n5 = 18.
Step 7: Set the sixth-layer (fully connected layer) input parameter of the model described in step 2 to FI1 = [(m1 - m2 - m3 + 2)/m4 - m5 + 1]² × n5, and set the sixth-layer fully connected output parameter to FO1 = 500.
Step 8: Set the seventh-layer (fully connected layer) input parameter of the model described in step 2 to FI2 = FO1, and set the seventh-layer fully connected output parameter to FO2 = 160.
Step 9: Set the eighth-layer (fully connected layer) input parameter of the model described in step 2 to FI3 = FO2; the eighth-layer fully connected output parameter equals the N described in step 1.
Step 10: Set the activation function of the deep convolutional neural network model as shown in formula (1):
R(x) = max(0, x)   formula (1)
where R(x) equals x itself when x > 0 and equals 0 when x ≤ 0.
Step 11: Set the regularization function of the deep convolutional neural network model to the L2 regularization shown in formula (2):
C = C0 + (λ/2n) Σ_ω ω²   formula (2)
where C0 denotes an arbitrary loss function, ω denotes the weights of the model, n denotes the number of training samples, and λ denotes the regularization coefficient.
Step 12: Set the gradient descent algorithm of the deep convolutional neural network model as shown in formulas (3) to (7):
m_t = μ·m_(t-1) + (1 - μ)·g_t   formula (3)
n_t = ν·n_(t-1) + (1 - ν)·g_t²   formula (4)
m̂_t = m_t / (1 - μ^t)   formula (5)
n̂_t = n_t / (1 - ν^t)   formula (6)
Δθ_t = -η·m̂_t / (√n̂_t + ε)   formula (7)
where m_t and n_t denote the first-order and second-order moment estimates of the gradient, g_t denotes the gradient, μ and ν denote the momentum factors, m̂_t and n̂_t denote the bias-corrected m_t and n_t, ε is a constant that keeps the denominator from being 0, η denotes the learning rate, and m̂_t/(√n̂_t + ε) forms a dynamic constraint on the learning rate (a code sketch of this update rule follows the step list below).
Step 13: Import the training images X described in step 1 into the deep convolutional neural network model set up in steps 2 to 12 and train the model.
Step 14: Import the test images Y described in step 1 into the deep convolutional neural network model trained in step 13 and test it.
Step 15: Start the color camera, acquire images of traffic signs, and use the deep convolutional neural network model tested in step 14 to recognize and classify the traffic signs captured by the color camera, obtaining the classification results; recognition ends.
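For illustration of steps 11 and 12, the following is a minimal NumPy sketch of one such update: an Adam-style moment-estimation step combined with an L2 weight penalty. It is a sketch under stated assumptions, not the reference implementation of the invention; the function and variable names (adam_l2_step, lr, mu, nu, lam, eps) and the hyperparameter values are illustrative defaults rather than values specified by the patent.

```python
import numpy as np

def adam_l2_step(w, grad, m, n, t, lr=0.001, mu=0.9, nu=0.999, lam=1e-4, eps=1e-8):
    """One Adam-style update with an L2 weight penalty (illustrative sketch).

    w    : current weights
    grad : gradient of the loss C0 with respect to w
    m, n : running first- and second-order moment estimates of the gradient
    t    : 1-based step index, used for bias correction
    """
    g = grad + lam * w                            # gradient of C0 plus the L2 penalty term
    m = mu * m + (1 - mu) * g                     # first-order moment estimate
    n = nu * n + (1 - nu) * g ** 2                # second-order moment estimate
    m_hat = m / (1 - mu ** t)                     # bias-corrected first moment
    n_hat = n / (1 - nu ** t)                     # bias-corrected second moment
    w = w - lr * m_hat / (np.sqrt(n_hat) + eps)   # dynamically constrained step
    return w, m, n

# usage on a toy quadratic loss C0(w) = ||w||^2 / 2, whose gradient is w itself
w = np.array([1.0, -2.0, 3.0])
m = np.zeros_like(w)
n = np.zeros_like(w)
for t in range(1, 101):
    grad = w                                      # dC0/dw for the toy loss
    w, m, n = adam_l2_step(w, grad, m, n, t)
print(w)  # the weights shrink toward zero
```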
The structural flow chart of the deep convolutional neural network model designed by the present invention is shown in Figure 1. The prepared standard data set images are imported directly into the deep convolutional neural network model for training and testing.
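As a reading aid alongside Figure 1, the eight-layer structure of steps 2 to 9 can be sketched in PyTorch as follows. This is only one plausible interpretation under stated assumptions (unpadded convolutions, 2 × 2 max pooling, and ReLU activations, chosen so that the feature-map sizes reproduce FI1 = [(32 - 7 - 5 + 2)/2 - 3 + 1]² × 18 = 9 × 9 × 18 = 1458); the class name TrafficSignNet and the choice of PyTorch are assumptions of this sketch, not part of the patent.

```python
import torch
import torch.nn as nn

class TrafficSignNet(nn.Module):
    """Sketch of the eight-layer model from steps 2-9 (assumed: unpadded
    convolutions, 2x2 max pooling, ReLU activations)."""

    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 6, kernel_size=7),    # layer 2: 32x32x3 -> 26x26x6
            nn.ReLU(),
            nn.Conv2d(6, 12, kernel_size=5),   # layer 3: 26x26x6 -> 22x22x12
            nn.ReLU(),
            nn.MaxPool2d(2),                   # layer 4: 22x22x12 -> 11x11x12
            nn.Conv2d(12, 18, kernel_size=3),  # layer 5: 11x11x12 -> 9x9x18
            nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(9 * 9 * 18, 500),        # layer 6: FI1 = 1458 -> FO1 = 500
            nn.ReLU(),
            nn.Linear(500, 160),               # layer 7: FO2 = 160
            nn.ReLU(),
            nn.Linear(160, num_classes),       # layer 8: N traffic sign classes
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# quick shape check with a batch of four 32x32 RGB images and N = 43 classes
model = TrafficSignNet(num_classes=43)
out = model(torch.randn(4, 3, 32, 32))
print(out.shape)  # torch.Size([4, 43])
```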
The beneficial effects of the invention are as follows: the deep convolutional neural network method introduced by the invention can solve the problem of traffic sign recognition in changeable environments, and still achieves recognition of traffic signs when the traffic sign images are distorted or blurred under different lighting conditions.
Description of the drawings
Fig. 1: structural flow chart of the deep convolutional neural network model designed by the present invention;
Fig. 2: operating principle of image convolution;
Fig. 3: operating principle of image pooling;
Fig. 4: schematic diagram of the fully connected layer.
Specific implementation mode
The convolution method performs a convolution operation between the image matrix and a convolution kernel matrix to obtain a feature matrix of the image. Each convolution of the image matrix with a different convolution kernel yields a different image feature matrix. Performing multiple convolution operations on the image matrix yields multiple image feature matrices, and stacking multiple image feature matrices captures increasingly complex image features.
Taking a single convolution operation as an example, let y[m, n] be the image feature matrix obtained after convolving an image. The convolution operation is given by the following formula:
y[m, n] = Σ_i Σ_j x[i, j]·h[m - i, n - j]
where x[m, n] denotes the matrix of the image and h[m, n] denotes the convolution kernel matrix.
The operating principle of image convolution is shown in Figure 2.
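As a concrete illustration of this operation, the following is a minimal NumPy sketch of a single unpadded ("valid") pass of a kernel over an image matrix, the sliding product-and-sum that convolution layers use to produce a feature matrix. The function name conv2d_valid and the toy matrices are assumptions made for this example only.

```python
import numpy as np

def conv2d_valid(x, h):
    """Slide the kernel h over the image x and take a dot product at each
    position ('valid' mode, no padding), producing a feature matrix.
    Note: like most CNN libraries, this slides the kernel without flipping it."""
    H, W = x.shape
    kH, kW = h.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kH, j:j + kW] * h)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 image matrix
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])       # toy 2x2 convolution kernel
print(conv2d_valid(image, kernel))                  # 3x3 feature matrix
```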
The pooling method replaces the output of the network at a given position with a summary statistic of the neighboring outputs at that position. When the input undergoes a small translation, most of the outputs after pooling do not change. Pooling compresses the input feature matrix, simplifying the computational complexity of the network while extracting the main features. The operating principle of image pooling is shown in Figure 3.
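A minimal NumPy sketch of 2 × 2 max pooling follows, showing how each neighborhood of outputs is replaced by a single summary statistic, which compresses the feature matrix while keeping its main features. The function name max_pool2x2 and the example matrix are illustrative assumptions.

```python
import numpy as np

def max_pool2x2(x):
    """Replace each non-overlapping 2x2 block of the feature matrix with its
    maximum, halving each spatial dimension while keeping the main features."""
    H, W = x.shape
    x = x[:H - H % 2, :W - W % 2]                       # drop odd rows/columns
    return x.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

features = np.array([[1., 2., 5., 6.],
                     [3., 4., 7., 8.],
                     [9., 1., 2., 0.],
                     [5., 6., 3., 4.]])
print(max_pool2x2(features))  # [[4. 8.] [9. 4.]]
```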
Each node of the fully connected layer is connected to all nodes of the previous layer, integrating the features extracted by the preceding layers. Because of this full connectivity, the fully connected layer generally has the most parameters. A schematic diagram of the fully connected layer is shown in Figure 4.
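The following short NumPy sketch shows a fully connected layer as a weight matrix applied to the flattened input plus a bias, and why its parameter count (here 1458 × 500 weights plus 500 biases) is usually the largest in the network. The names fc_layer, W, and b are assumptions of this example.

```python
import numpy as np

def fc_layer(x, W, b):
    """Fully connected layer: every output node is a weighted sum of all
    input nodes plus a bias."""
    return W @ x + b

rng = np.random.default_rng(0)
x = rng.standard_normal(1458)          # flattened 9x9x18 feature map (FI1)
W = rng.standard_normal((500, 1458))   # 500 output nodes (FO1), fully connected
b = np.zeros(500)
y = fc_layer(x, W, b)
print(y.shape, W.size + b.size)        # (500,) and 729500 parameters
```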
The deep convolutional neural network method applied to traffic sign recognition designed by the present invention is characterized in that traffic sign images are recognized and classified by the following steps:
Step 1: Choose a data set containing N classes of traffic signs; the traffic sign data set consists of training images X and test images Y. Uniformly resize the images of the data set to 3-channel images of 32 × 32 pixels.
Step 2: Set the first-layer (input layer) parameters of the deep convolutional neural network model to I1 = m1 × m1 × n1, with m1 = 32 and n1 = 3.
Step 3: Set the second-layer (convolution layer) parameters of the model described in step 2 to C1 = m2 × m2 × n2, with m2 = 7 and n2 = 6.
Step 4: Set the third-layer (convolution layer) parameters of the model described in step 2 to C2 = m3 × m3 × n3, with m3 = 5 and n3 = 12.
Step 5: Set the fourth-layer (pooling layer) parameters of the model described in step 2 to P1 = m4 × m4 × n4, with m4 = 2 and n4 = 1.
Step 6: Set the fifth-layer (convolution layer) parameters of the model described in step 2 to C3 = m5 × m5 × n5, with m5 = 3 and n5 = 18.
Step 7: Set the sixth-layer (fully connected layer) input parameter of the model described in step 2 to FI1 = [(m1 - m2 - m3 + 2)/m4 - m5 + 1]² × n5, and set the sixth-layer fully connected output parameter to FO1 = 500.
Step 8: Set the seventh-layer (fully connected layer) input parameter of the model described in step 2 to FI2 = FO1, and set the seventh-layer fully connected output parameter to FO2 = 160.
Step 9: Set the eighth-layer (fully connected layer) input parameter of the model described in step 2 to FI3 = FO2; the eighth-layer fully connected output parameter equals the N described in step 1.
Step 10: Set the activation function of the deep convolutional neural network model as shown below:
R(x) = max(0, x)
where R(x) equals x itself when x > 0 and equals 0 when x ≤ 0.
Step 11: Set the regularization function of the deep convolutional neural network model to the L2 regularization function shown below:
C = C0 + (λ/2n) Σ_ω ω²
where C0 denotes an arbitrary loss function, ω denotes the weights of the model, n denotes the number of training samples, and λ denotes the regularization coefficient.
Step 12: Set the gradient descent algorithm of the deep convolutional neural network model as follows:
m_t = μ·m_(t-1) + (1 - μ)·g_t
n_t = ν·n_(t-1) + (1 - ν)·g_t²
m̂_t = m_t / (1 - μ^t)
n̂_t = n_t / (1 - ν^t)
Δθ_t = -η·m̂_t / (√n̂_t + ε)
where m_t and n_t denote the first-order and second-order moment estimates of the gradient, g_t denotes the gradient, μ and ν denote the momentum factors, m̂_t and n̂_t denote the bias-corrected m_t and n_t, ε is a constant that keeps the denominator from being 0, η denotes the learning rate, and m̂_t/(√n̂_t + ε) forms a dynamic constraint on the learning rate.
Step 13: Import the training images X described in step 1 into the deep convolutional neural network model set up in steps 2 to 12 and train the model.
Step 14: Import the test images Y described in step 1 into the deep convolutional neural network model trained in step 13 and test it.
Step 15: Start the color camera, acquire images of traffic signs, and use the deep convolutional neural network model tested in step 14 to recognize and classify the traffic signs captured by the color camera, obtaining the classification results; recognition ends.
The present invention differs from existing traffic sign recognition methods in the following ways: no additional image preprocessing is needed to obtain image features, which are instead extracted directly by the convolution operations of the deep convolutional neural network model; and the invention is not affected by factors such as the lighting environment and shooting angle, since deeper image features are obtained through the multi-layer convolution calculation. Therefore, the traffic sign recognition method based on the deep convolutional neural network designed by the present invention can improve the robustness and accuracy of recognition.
In conclusion, the advantages of the deep convolutional neural network of the present invention are:
(1) Since image features can be obtained directly by convolution operations without image preprocessing, the extracted features are more targeted, so the recognition method of the invention has higher accuracy;
(2) Since feature extraction is performed on the image by multiple convolution layers, the obtained image feature information is richer, which avoids interference from external factors, so the recognition method of the invention has better robustness.
The present invention and its embodiments have been described above schematically, and this description is not limiting; what is shown in the accompanying drawings is also only one of the embodiments of the present invention. Therefore, if those skilled in the art, inspired by it, adopt other arrangements of the components or other forms of the same item without departing from the purpose of the invention, and design, without creativity, technical solutions similar to this technical solution and its embodiments, such solutions fall within the protection scope of the present invention.

Claims (1)

1. A deep convolutional neural network method applied to traffic sign recognition designed by the present invention, characterized in that traffic sign images are recognized and classified by the following steps:
Step 1: Choose a data set containing N classes of traffic signs; the traffic sign data set consists of training images X and test images Y. Uniformly resize the images of the data set to 3-channel images of 32 × 32 pixels.
Step 2: Set the first-layer (input layer) parameters of the deep convolutional neural network model to I1 = m1 × m1 × n1, with m1 = 32 and n1 = 3.
Step 3: Set the second-layer (convolution layer) parameters of the model described in step 2 to C1 = m2 × m2 × n2, with m2 = 7 and n2 = 6.
Step 4: Set the third-layer (convolution layer) parameters of the model described in step 2 to C2 = m3 × m3 × n3, with m3 = 5 and n3 = 12.
Step 5: Set the fourth-layer (pooling layer) parameters of the model described in step 2 to P1 = m4 × m4 × n4, with m4 = 2 and n4 = 1.
Step 6: Set the fifth-layer (convolution layer) parameters of the model described in step 2 to C3 = m5 × m5 × n5, with m5 = 3 and n5 = 18.
Step 7: Set the sixth-layer (fully connected layer) input parameter of the model described in step 2 to FI1 = [(m1 - m2 - m3 + 2)/m4 - m5 + 1]² × n5, and set the sixth-layer fully connected output parameter to FO1 = 500.
Step 8: Set the seventh-layer (fully connected layer) input parameter of the model described in step 2 to FI2 = FO1, and set the seventh-layer fully connected output parameter to FO2 = 160.
Step 9: Set the eighth-layer (fully connected layer) input parameter of the model described in step 2 to FI3 = FO2; the eighth-layer fully connected output parameter equals the N described in step 1.
Step 10: Set the activation function of the deep convolutional neural network model as shown in formula (1):
R(x) = max(0, x)   formula (1)
where R(x) equals x itself when x > 0 and equals 0 when x ≤ 0.
Step 11: Set the regularization function of the deep convolutional neural network model as shown in formula (2):
C = C0 + (λ/2n) Σ_ω ω²   formula (2)
where C0 denotes an arbitrary loss function, ω denotes the weights of the model, n denotes the number of training samples, and λ denotes the regularization coefficient.
Step 12: Set the gradient descent algorithm of the deep convolutional neural network model as shown in formulas (3) to (7):
m_t = μ·m_(t-1) + (1 - μ)·g_t   formula (3)
n_t = ν·n_(t-1) + (1 - ν)·g_t²   formula (4)
m̂_t = m_t / (1 - μ^t)   formula (5)
n̂_t = n_t / (1 - ν^t)   formula (6)
Δθ_t = -η·m̂_t / (√n̂_t + ε)   formula (7)
where m_t and n_t denote the first-order and second-order moment estimates of the gradient, g_t denotes the gradient, μ and ν denote the momentum factors, m̂_t and n̂_t denote the bias-corrected m_t and n_t, ε is a constant that keeps the denominator from being 0, η denotes the learning rate, and m̂_t/(√n̂_t + ε) forms a dynamic constraint on the learning rate.
Step 13: Import the training images X described in step 1 into the deep convolutional neural network model set up in steps 2 to 12 and train the model.
Step 14: Import the test images Y described in step 1 into the deep convolutional neural network model trained in step 13 and test it.
Step 15: Start the color camera for acquiring images, acquire images of traffic signs, and use the deep convolutional neural network model tested in step 14 to recognize and classify the traffic signs captured by the color camera, obtaining the classification results; recognition ends.
CN201810358621.5A 2018-04-20 2018-04-20 A kind of depth convolutional neural networks method applied to Traffic Sign Recognition Pending CN108564048A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810358621.5A CN108564048A (en) 2018-04-20 2018-04-20 A kind of depth convolutional neural networks method applied to Traffic Sign Recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810358621.5A CN108564048A (en) 2018-04-20 2018-04-20 A kind of depth convolutional neural networks method applied to Traffic Sign Recognition

Publications (1)

Publication Number Publication Date
CN108564048A true CN108564048A (en) 2018-09-21

Family

ID=63535788

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810358621.5A Pending CN108564048A (en) 2018-04-20 2018-04-20 A kind of depth convolutional neural networks method applied to Traffic Sign Recognition

Country Status (1)

Country Link
CN (1) CN108564048A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220643A (en) * 2017-04-12 2017-09-29 广东工业大学 The Traffic Sign Recognition System of deep learning model based on neurological network
CN107122776A (en) * 2017-04-14 2017-09-01 重庆邮电大学 A kind of road traffic sign detection and recognition methods based on convolutional neural networks
CN107609485A (en) * 2017-08-16 2018-01-19 中国科学院自动化研究所 The recognition methods of traffic sign, storage medium, processing equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
乔堃 (QIAO KUN) et al.: "Traffic Sign Recognition Based on Deep Learning" (基于深度学习的交通标志识别), 《信息技术与信息化》 (Information Technology and Informatization) *
田正鑫 (TIAN ZHENGXIN): "Traffic Sign Recognition Method Based on Multi-Scale Convolutional Neural Networks" (基于多尺度卷积神经网络的交通标志识别方法), 《中国优秀硕士学位论文全文数据库 信息科技辑》 (China Masters' Theses Full-text Database, Information Science and Technology) *


Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
Application publication date: 20180921