CN101777120A - Face recognition image processing method based on sequence characteristics - Google Patents

Face recognition image processing method based on sequence characteristics Download PDF

Info

Publication number
CN101777120A
CN101777120A CN201010102106A
Authority
CN
China
Prior art keywords
image
sequence characteristics
face
processing method
face recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201010102106A
Other languages
Chinese (zh)
Inventor
孙涛
刘毅
杨环
杨永密
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN201010102106A priority Critical patent/CN101777120A/en
Publication of CN101777120A publication Critical patent/CN101777120A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a face recognition image processing method based on sequence features. The method comprises the following steps: first, a face image is captured with a camera under ordinary illumination; the image is then normalized so that all images have a uniform size and cover the same facial region; a two-dimensional discrete wavelet transform is applied to the image and its low-frequency component is selected; finally, sequence features are extracted, and after processing an image is formed whose components are the sequence features. The method extracts illumination-invariant sequence features, reduces the influence of illumination changes on the face image, and improves face recognition performance under complex illumination conditions.

Description

A face recognition image processing method based on sequence features
Technical field
This patent relates to face recognition and image processing, and in particular to the extraction of illumination invariants of face images based on sequence features using a multi-resolution, multi-scale wavelet transform.
Background art
With the development of digitization and information technology, identity recognition has become a problem people encounter frequently in daily life, and the demand for high-accuracy identification keeps growing, for example in security checks at large-scale events, identity verification at border customs, commercial access control and real-time surveillance systems, and police criminal investigation.
Traditional authentication methods such as passwords, PINs, and identity documents have many shortcomings: they are easy to duplicate, easy to lose, and inconvenient to carry. For this reason, identity recognition based on biometric features has attracted wide attention. Among the various biometric recognition methods, face recognition has its own special advantages and therefore holds an important position in biometrics. It is non-intrusive and simple to acquire, so it has gradually gained attention and entered practical use in daily life.
However, the application of face recognition still faces several problems, of which illumination, pose, and expression are the three main ones. Changes in these three factors often strongly affect the imaging of a face. In a face recognition system, pose and expression can be constrained to some extent for the person to be identified, but the influence of illumination has a certain randomness, so handling the effect of illumination variation on face imaging remains a major challenge for face recognition researchers. In previous research, the solutions proposed for illumination variation fall into three classes: illumination model methods, illumination compensation algorithms, and illumination invariant algorithms. Illumination model methods simulate illumination by constructing an illumination space; they achieve the best recognition results, but their computation cost is high, many training samples are needed, and the requirements on the training set and training environment are demanding. Illumination compensation algorithms remove or compensate for the influence of illumination variation; they are strongly affected by parameter selection, which is very complicated. Illumination invariant methods extract image components that are unaffected by illumination or that remain distinctive under a given illumination; they are clearly effective for face recognition under some complex illumination conditions, and their speed and efficiency are generally better than those of the first two classes of methods.
Summary of the invention
The purpose of the present invention is to address the above problems by adopting the third class of methods and providing a face recognition image processing method based on sequence features. The idea of the invention is as follows: first, a two-dimensional wavelet transform decomposes the image into four parts, of which the low-frequency component is retained to reduce the influence of noise and compress the data; then, for each pixel in the image, the sequence feature of the region centered on that pixel is extracted, and a face image whose content is the sequence features is formed.
To achieve the above object, the present invention adopts the following technical scheme:
A face recognition image processing method based on sequence features, comprising the following steps:
Step 1: capture an image containing a face under ordinary illumination with a common camera;
Step 2: perform scale normalization on the image to obtain an adjusted image;
Step 3: apply a single-level two-dimensional wavelet transform to the adjusted image to further compress it, and extract the low-frequency component;
Step 4: extract the sequence features of the low-frequency image, and form a face image whose components are the sequence features.
The scale normalization in Step 2 comprises the following steps:
A. Locate the eyes of the face. Connect the two eye centers R and L and rotate the image so that the line between them is horizontal, adjusting the face image to a level state without any tilt;
B. Given that the distance between the eye centers is D, compute the distance from the midpoint of the line to each side of the image and translate the image so that the two distances are equal;
C. Crop a square facial region;
D. Scale the image with Hermite interpolation, a form of polynomial interpolation, to the prescribed size of 168 × 192.
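The geometry of steps A and B can be sketched in a short numpy-only illustration (function and variable names here are our own, not from the patent): the tilt angle is computed from the two eye centers, and rotating by its negative levels the inter-eye line while preserving the inter-eye distance D.

```python
import numpy as np

def alignment_rotation(left_eye, right_eye):
    """Return the 2x2 rotation matrix that makes the eye line horizontal.

    left_eye, right_eye: (x, y) coordinates of the eye centers L and R.
    """
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = np.arctan2(dy, dx)          # tilt of the line from L to R
    c, s = np.cos(-angle), np.sin(-angle)
    return np.array([[c, -s], [s, c]])  # rotate by -angle to level the line

# Example: two tilted eye centers
L = np.array([100.0, 120.0])
R = np.array([160.0, 135.0])
Rm = alignment_rotation(L, R)
L2, R2 = Rm @ L, Rm @ R
# after rotation the two eye centers share the same y coordinate,
# so the inter-eye line is horizontal
assert abs(L2[1] - R2[1]) < 1e-9
# the inter-eye distance D is preserved by the rotation
D = np.linalg.norm(R - L)
assert abs(np.linalg.norm(R2 - L2) - D) < 1e-9
```

In a full implementation this rotation would be applied to the image pixels themselves; here only the coordinate geometry is demonstrated.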
In Step 3, a single-level two-dimensional wavelet transform is applied to the image, with the db4 wavelet chosen as the wavelet basis. The dbN wavelets are the Daubechies wavelets.
The Daubechies wavelets are a family of wavelet functions constructed by the renowned wavelet analyst Ingrid Daubechies, usually abbreviated as dbN, where N is the order of the wavelet. The support length of the wavelet function and the scaling function is 2N − 1, and the wavelet function has N vanishing moments. Except for N = 1, the dbN wavelets are not symmetric (i.e., their phase is nonlinear), and they have no explicit closed-form expression (except for N = 1, the Haar wavelet).
The Daubechies wavelets have the following properties:
(1) They have finite support in the time domain, i.e., the wavelet function has finite length; the larger N is, the longer the wavelet function.
(2) In the frequency domain, the wavelet function has a zero of order N at the zero-frequency point.
(3) The wavelet function is orthonormal to its integer translates.
(4) The wavelet function can be derived from the scaling function.
The wavelet transform is used here to compress the image to a certain extent; in addition, the extracted low-frequency component is less affected by noise. The wavelet decomposition of the image is given by the following formula:
f(x, y) = \sum_{k,m} c_{k,m}\,\varphi_{k,m} + \sum_{k,m} d^{1}_{k,m}\,\psi^{1}_{k,m} + \sum_{k,m} d^{2}_{k,m}\,\psi^{2}_{k,m} + \sum_{k,m} d^{3}_{k,m}\,\psi^{3}_{k,m}
where f(x, y) is the image after the above transformation and processing; \varphi_{k,m} denotes \varphi_{k,m}(x, y), the scaling function of the wavelet transform, with \varphi_{k,m}(x, y) = \varphi_k(x)\,\varphi_m(y), where k and m are the horizontal and vertical translation indices of the scaling function;
\psi^{1}_{k,m}, \psi^{2}_{k,m}, \psi^{3}_{k,m} are the wavelet functions, with k and m again the horizontal and vertical translation indices; the concrete two-dimensional wavelet functions are:
\psi^{1}_{k,m} = \varphi_k(x)\,\psi_m(y), \quad \psi^{2}_{k,m} = \psi_k(x)\,\varphi_m(y), \quad \psi^{3}_{k,m} = \psi_k(x)\,\psi_m(y)
where \varphi(x) and \psi(y) are respectively the one-dimensional scaling function and wavelet function;
c_{k,m} are the low-frequency coefficients and d^{1}_{k,m}, d^{2}_{k,m}, d^{3}_{k,m} the high-frequency coefficients, given by:
c_{k,m} = \langle f, \varphi_{k,m} \rangle, \quad d^{i}_{k,m} = \langle f, \psi^{i}_{k,m} \rangle, \ i = 1, 2, 3
Fig. 2 provides a diagram of the two-dimensional wavelet transform, where h denotes low-pass filtering, g denotes high-pass filtering, and ↓2 denotes downsampling the image data by a factor of 2. c^{j}_{k,m} and c^{j+1}_{k,m} are low-frequency coefficients, and d^{j,1}_{k,m}, d^{j,2}_{k,m}, d^{j,3}_{k,m} are high-frequency coefficients. Since only a single-level wavelet transform is performed, the scale index does not appear in the formula above; in the diagram, j denotes the scale.
From the transformed and processed image f(x, y), the low-frequency component image formed by the c_{k,m} coefficients is retained.
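As an illustrative sketch (not the patent's implementation), the low-pass branch of a single-level 2-D db4 transform can be written with numpy alone. The coefficients below are the standard published db4 scaling (low-pass) filter values; the periodic boundary handling is a simplification of what a wavelet library would typically provide.

```python
import numpy as np

# db4 (Daubechies order-4) scaling filter coefficients; standard published
# values, normalized so that sum = sqrt(2) and sum of squares = 1
DB4_LO = np.array([
     0.23037781330885523,  0.7148465705525415,   0.6308807679295904,
    -0.027983769416983849, -0.18703481171888114,  0.030841381835986965,
     0.032883011666982945, -0.010597401784997278,
])

def analysis_lowpass(x, h):
    """Inner product of filter h with the rows of x at even shifts
    (periodic extension), i.e. low-pass filter and downsample by 2."""
    n = x.shape[1]
    out = np.zeros((x.shape[0], n // 2))
    for i in range(n // 2):
        idx = (2 * i + np.arange(len(h))) % n   # periodic boundary handling
        out[:, i] = x[:, idx] @ h
    return out

def dwt2_ll(img):
    """Single-level 2-D DWT, keeping only the low-frequency (LL) sub-band."""
    rows = analysis_lowpass(img, DB4_LO)        # filter along rows
    ll = analysis_lowpass(rows.T, DB4_LO).T     # then along columns
    return ll

rng = np.random.default_rng(0)
img = rng.random((168, 192))                    # a 168 x 192 normalized image
ll = dwt2_ll(img)
assert ll.shape == (84, 96)                     # each dimension halved
# sanity checks on the scaling filter normalization
assert abs(DB4_LO.sum() - np.sqrt(2)) < 1e-10
assert abs((DB4_LO ** 2).sum() - 1.0) < 1e-10
```

A production implementation would normally rely on a wavelet library and its boundary-extension modes rather than the hand-rolled loop above.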
The sequence feature extraction in Step 4 is carried out in the form of a mask (sliding-window) operation, with the following steps:
A. First zero-pad the image: pad zeros around the original rows and columns of the image. If the window size is set to l × n, where l and n are the numbers of rows and columns of the window and must both be odd (here l = 15, n = 15), then pad (l − 1)/2 rows of zeros above and below the image, and (n − 1)/2 columns of zeros on its left and right sides;
B. After zero padding, starting from row 1, column 1, slide the window pixel by pixel along each row and extract the sequence feature of each pixel within its window area;
C. Let p_0 be the center of the window, which contains 225 pixels. Let N(p_0) denote the set of pixels in this neighborhood, including the center pixel, and let I(p) denote the value of pixel p. Then O(p_0) is the sequence feature of pixel p_0 formed within the window area:
O(p_0) = \|\{\, p \in N(p_0) \mid I(p) \le I(p_0) \,\}\|
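Steps A–C amount to the following numpy sketch (names are ours; the patent's window is 15 × 15, but the window size is left as a parameter so a small example can be checked by hand):

```python
import numpy as np

def sequence_features(img, l=15, n=15):
    """Sequence (ordinal) feature map: for each pixel p0, count the pixels p
    in the l x n window centered on p0 (zero-padded at the borders) with
    I(p) <= I(p0)."""
    assert l % 2 == 1 and n % 2 == 1, "window dimensions must be odd"
    pr, pc = (l - 1) // 2, (n - 1) // 2
    # step A: zero padding around the original rows and columns
    padded = np.pad(img, ((pr, pr), (pc, pc)), mode="constant")
    out = np.zeros(img.shape, dtype=int)
    # step B: slide the window pixel by pixel, row by row
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + l, j:j + n]
            # step C: O(p0) = number of p in N(p0) with I(p) <= I(p0)
            out[i, j] = np.count_nonzero(window <= img[i, j])
    return out

# tiny example with a 3x3 window so the counts are easy to verify by hand
small = np.array([[1, 2],
                  [3, 4]])
O = sequence_features(small, l=3, n=3)
# e.g. for the pixel with value 4, all 4 image pixels and the 5 padded
# zeros in its window satisfy I(p) <= 4, so O = 9
assert (O == np.array([[6, 7], [8, 9]])).all()
```

Note that because the zero padding always satisfies I(p) = 0 ≤ I(p0) for non-negative images, border pixels receive a contribution from the padded region.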
In step D of Step 2, Hermite interpolation is a generalized form of Lagrange interpolation: it interpolates not only the data points but also their derivative vectors. The common Hermite interpolation, at the points t_0 and t_1, interpolates the two data points P_0, P_1 together with their derivatives up to order k, P_0^{(r)}, P_1^{(r)}, r = 1, 2, \ldots, k:
P(t) = \sum_{r=0}^{k} P_0^{(r)} H_{r,0}(t) + \sum_{r=0}^{k} P_1^{(r)} H_{r,1}(t)
This gives a polynomial interpolation curve of degree 2k + 1, whose basis functions H_{r,i}(t) satisfy
H_{r,i}^{(s)}(t_j) = \delta_{rs}\,\delta_{ij}, \quad r, s = 0, 1, \ldots, k; \ i, j = 0, 1
In practice, cubic Hermite interpolation is used most often, in which case
P(t) = P_0^{(0)} H_{0,0}(t) + P_0^{(1)} H_{1,0}(t) + P_1^{(0)} H_{0,1}(t) + P_1^{(1)} H_{1,1}(t)
where
H_{0,0}(t) = \frac{2(t - t_0)^3}{(t_1 - t_0)^3} - \frac{3(t - t_0)^2}{(t_1 - t_0)^2} + 1
H_{1,0}(t) = \frac{(t - t_0)^3}{(t_1 - t_0)^2} - \frac{2(t - t_0)^2}{t_1 - t_0} + (t - t_0)
H_{0,1}(t) = -\frac{2(t - t_0)^3}{(t_1 - t_0)^3} + \frac{3(t - t_0)^2}{(t_1 - t_0)^2}
H_{1,1}(t) = \frac{(t - t_1)^3}{(t_1 - t_0)^2} + \frac{2(t - t_1)^2}{t_1 - t_0} + (t - t_1)
These are called the cubic Hermite basis functions. H_{0,0}(t), H_{1,0}(t), H_{0,1}(t), H_{1,1}(t) have the following properties:
H_{0,0}(t_0) = 1, \quad H_{0,0}(t_1) = H_{0,0}^{(1)}(t_0) = H_{0,0}^{(1)}(t_1) = 0
H_{1,0}^{(1)}(t_0) = 1, \quad H_{1,0}(t_0) = H_{1,0}(t_1) = H_{1,0}^{(1)}(t_1) = 0
H_{0,1}(t_1) = 1, \quad H_{0,1}(t_0) = H_{0,1}^{(1)}(t_0) = H_{0,1}^{(1)}(t_1) = 0
H_{1,1}^{(1)}(t_1) = 1, \quad H_{1,1}(t_0) = H_{1,1}^{(1)}(t_0) = H_{1,1}(t_1) = 0
In practice, cubic Hermite curves are generally defined on the interval [0, 1], in which case the cubic Hermite basis functions are
H_{0,0}(t) = 2t^3 - 3t^2 + 1, \quad H_{1,0}(t) = t(1 - t)^2
H_{0,1}(t) = 3t^2 - 2t^3, \quad H_{1,1}(t) = -t^2(1 - t)
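The general-interval basis formulas above can be turned into a direct sketch of a cubic Hermite interpolant (function names are ours). Since cubic Hermite interpolation reproduces any polynomial of degree at most 3 exactly, f(t) = t² on [0, 2] makes a convenient check:

```python
def hermite_basis(t, t0, t1):
    """Cubic Hermite basis functions H00, H10, H01, H11 on [t0, t1]."""
    d = t1 - t0
    u, v = t - t0, t - t1
    H00 = 2 * u**3 / d**3 - 3 * u**2 / d**2 + 1
    H10 = u**3 / d**2 - 2 * u**2 / d + u
    H01 = -2 * u**3 / d**3 + 3 * u**2 / d**2
    H11 = v**3 / d**2 + 2 * v**2 / d + v
    return H00, H10, H01, H11

def hermite(t, t0, t1, p0, m0, p1, m1):
    """Cubic Hermite interpolant through (t0, p0) and (t1, p1) with
    endpoint derivatives m0 and m1."""
    H00, H10, H01, H11 = hermite_basis(t, t0, t1)
    return p0 * H00 + m0 * H10 + p1 * H01 + m1 * H11

# f(t) = t**2 on [0, 2]: f(0) = 0, f'(0) = 0, f(2) = 4, f'(2) = 4,
# so the interpolant must reproduce f(1) = 1 exactly
assert abs(hermite(1.0, 0.0, 2.0, 0.0, 0.0, 4.0, 4.0) - 1.0) < 1e-12
# endpoint interpolation conditions
assert abs(hermite(0.0, 0.0, 2.0, 0.0, 0.0, 4.0, 4.0)) < 1e-12
assert abs(hermite(2.0, 0.0, 2.0, 0.0, 0.0, 4.0, 4.0) - 4.0) < 1e-12
```

For image scaling as in step D, such an interpolant would be evaluated separably along rows and columns at the resampled coordinates.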
The beneficial effects of the invention are as follows: the method effectively reduces the influence of illumination variation on the gray-level distribution of face images, is clearly effective for image processing under complex illumination conditions, and improves the robustness of face recognition systems to illumination.
Brief description of the drawings
Fig. 1 is the processing flowchart of the present invention;
Fig. 2 is a diagram of the two-dimensional wavelet transform;
Fig. 3 is the original face image as captured;
Fig. 4 is the face image after scale normalization;
Fig. 5 shows the face image before and after wavelet decomposition;
Fig. 6 shows the extracted low-frequency component image and the face image obtained from it by sequence feature extraction;
Fig. 7 shows images from the Yale B face database and the corresponding sequence-feature faces.
Embodiment
1. Acquire the original face image
The Yale B face database was adopted; its images containing faces were captured with common cameras under ordinary illumination. Fig. 3 shows a captured face image.
2. Perform scale normalization on the image
Scale normalization comprises face localization, cropping, rotation, and size adjustment. Here the eyes of the face are located. The two eye centers R and L are connected, and the image is rotated so that the line between them is horizontal, adjusting the face image to a level state without any tilt. Given that the distance between the eye centers is D, the distance from the midpoint of the line to each side of the image is computed, and the image is translated so that the two distances are equal. A square facial region is then cropped.
The image is scaled with Hermite interpolation, a form of polynomial interpolation, to the prescribed size of 168 × 192. Fig. 4 shows the result after this adjustment.
3. Apply a single-level wavelet transform to the image and keep the low-frequency component
Fig. 5 shows the image before and after wavelet decomposition. The left side is the unprocessed face image, and the right side shows the four sub-images obtained after wavelet decomposition: the upper-left is the low-frequency component, the upper-right the horizontal high-frequency component, the lower-left the vertical high-frequency component, and the lower-right the diagonal (two-directional) high-frequency component.
4. Extract the sequence features of the low-frequency image and form a face image whose components are the sequence features
Fig. 6 shows the extracted low-frequency component image and the image obtained from it by sequence feature extraction. It can clearly be seen from the figure that the face composed of sequence features is more prominent in contrast and contour sharpness and provides more discriminative information; the influence of illumination is also noticeably removed.
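Steps 2–4 above can be condensed into one end-to-end sketch. To keep the code short, this version substitutes the 2-tap Haar scaling filter for db4 in Step 3 (an assumption made only for brevity, not the patent's choice); the final assertion illustrates the illumination-invariance claim, since the sequence features are unchanged under a global affine brightness change.

```python
import numpy as np

def haar_ll(img):
    """Single-level low-frequency (LL) sub-band using the Haar scaling
    filter, used here as a 2-tap stand-in for the patent's db4 filter."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    x = img[:h, :w]
    # sum each 2x2 block and divide by 2 so the transform is orthonormal
    return (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2]) / 2.0

def sequence_features(img, l=15, n=15):
    """O(p0) = number of pixels p in the zero-padded l x n window centered
    on p0 with I(p) <= I(p0)."""
    pr, pc = (l - 1) // 2, (n - 1) // 2
    padded = np.pad(img, ((pr, pr), (pc, pc)), mode="constant")
    out = np.zeros(img.shape, dtype=int)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.count_nonzero(padded[i:i + l, j:j + n] <= img[i, j])
    return out

def process(face):
    """Steps 3-4 on an already scale-normalized 168 x 192 face image."""
    ll = haar_ll(face)               # Step 3: keep the low-frequency sub-band
    return sequence_features(ll)     # Step 4: sequence-feature image

rng = np.random.default_rng(1)
face = rng.random((168, 192))        # stand-in for a normalized face image
feat = process(face)
assert feat.shape == (84, 96)
assert feat.max() <= 15 * 15         # at most 225 pixels in each window
# sequence features depend only on the ordering of pixel values, so a
# monotone (affine) global brightness change leaves them unchanged
assert (process(face) == process(face * 2.0 + 0.5)).all()
```

The invariance assertion holds because both the averaging filter and the window counts commute with strictly increasing affine maps of the pixel values, which is the intuition behind the patent's robustness-to-illumination claim.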
Verification experiments
The experiments adopted the Yale B face database, which contains 10 subjects, each with 64 frontal face images under different illumination conditions. The original images were cropped to a size of 168 × 192. According to the angle between the light source and the camera axis, the images are divided into five subsets: subset 1 (0 to 12 degrees), subset 2 (12 to 25 degrees), subset 3 (25 to 50 degrees), subset 4 (50 to 77 degrees), and subset 5 (77 to 90 degrees). The subsets contain 70, 120, 120, 140, and 190 images respectively.
In the first group of experiments, one of the five subsets was selected as the training set and the other four as the test set. Fig. 7 shows face images after sequence feature extraction. The experimental results are given in Table 1: with subset 1 or subset 3 as the training set, the recognition rate reaches 100%; with subset 2 or subset 4, it is very close to 100%; and with subset 5, although the recognition rate is only 97.56%, it is still much better than that of the other two classes of methods.
Training set       Subset 1   Subset 2   Subset 3   Subset 4   Subset 5
Recognition rate   100%       99.40%     100%       99.6%      97.56%
Table 1
In the second group of experiments, 10 images were randomly selected from each person's 64 images to form the training set, and the other 54 images formed the test set. To ensure randomness, the experiment was repeated 50 times. Table 2 gives the data for this group: the recognition rate of the sequence-feature faces reaches 99.62%, showing that the stability of the method is outstanding.
Sequence-feature face   SQI      DCT
99.62%                  96.87%   95.23%
Table 2
In the third group of experiments, each person's image under ideal illumination was used as the training sample to form the training set, and the other 63 images served as the test set. The results are given in Table 3. The stability of the method remains good: with a single face image per person used for training, it reaches 98.41%.
Sequence-feature face   SQI      DCT
98.41%                  96.34%   86.19%
Table 3

Claims (4)

1. A face recognition image processing method based on sequence features, characterized in that the method comprises the following steps:
Step 1: capture an image containing a face under ordinary illumination with a common camera;
Step 2: perform scale normalization on the image to obtain an adjusted image;
Step 3: apply a single-level two-dimensional wavelet transform to the adjusted image to further compress it, and extract the low-frequency component;
Step 4: extract the sequence features of the low-frequency image, and form a face image whose components are the sequence features.
2. The face recognition image processing method based on sequence features according to claim 1, characterized in that the scale normalization in Step 2 comprises the following steps:
A. Locate the eyes of the face; connect the two eye centers R and L and rotate the image so that the line between them is horizontal, adjusting the face image to a level state without any tilt;
B. Given that the distance between the eye centers is D, compute the distance from the midpoint of the line to each side of the image and translate the image so that the two distances are equal;
C. Crop a square facial region;
D. Scale the image with Hermite interpolation, a form of polynomial interpolation, to the prescribed size of 168 × 192.
3. The face recognition image processing method based on sequence features according to claim 1, characterized in that in Step 3 a single-level two-dimensional wavelet transform is applied to the image according to the following formula:
f(x, y) = \sum_{k,m} c_{k,m}\,\varphi_{k,m} + \sum_{k,m} d^{1}_{k,m}\,\psi^{1}_{k,m} + \sum_{k,m} d^{2}_{k,m}\,\psi^{2}_{k,m} + \sum_{k,m} d^{3}_{k,m}\,\psi^{3}_{k,m}
where f(x, y) is the image after the above transformation and processing; \varphi_{k,m} denotes \varphi_{k,m}(x, y), the scaling function of the wavelet transform, with \varphi_{k,m}(x, y) = \varphi_k(x)\,\varphi_m(y), where k and m are the horizontal and vertical translation indices of the scaling function;
\psi^{1}_{k,m}, \psi^{2}_{k,m}, \psi^{3}_{k,m} are the wavelet functions, with k and m again the horizontal and vertical translation indices; the concrete two-dimensional wavelet functions are:
\psi^{1}_{k,m} = \varphi_k(x)\,\psi_m(y), \quad \psi^{2}_{k,m} = \psi_k(x)\,\varphi_m(y), \quad \psi^{3}_{k,m} = \psi_k(x)\,\psi_m(y)
where \varphi(x) and \psi(y) are respectively the one-dimensional scaling function and wavelet function;
c_{k,m} are the low-frequency coefficients and d^{1}_{k,m}, d^{2}_{k,m}, d^{3}_{k,m} the high-frequency coefficients, given by:
c_{k,m} = \langle f, \varphi_{k,m} \rangle, \quad d^{i}_{k,m} = \langle f, \psi^{i}_{k,m} \rangle, \ i = 1, 2, 3
From the transformed and processed image f(x, y), the low-frequency component image formed by the c_{k,m} coefficients is retained.
4. The face recognition image processing method based on sequence features according to claim 1, characterized in that the sequence feature extraction in Step 4 is carried out in the form of a mask operation, with the following steps:
A. First zero-pad the image: pad zeros around the original rows and columns of the image. If the window size is set to l × n, where l and n are the numbers of rows and columns of the window and must both be odd (here l = 15, n = 15), then pad (l − 1)/2 rows of zeros above and below the image, and (n − 1)/2 columns of zeros on its left and right sides;
B. After zero padding, starting from row 1, column 1, slide the window pixel by pixel along each row and extract the sequence feature of each pixel within its window area;
C. Let p_0 be the center of the window, which contains 225 pixels. Let N(p_0) denote the set of pixels in this neighborhood, including the center pixel, and let I(p) denote the value of pixel p. Then O(p_0) is the sequence feature of pixel p_0 formed within the window area:
O(p_0) = \|\{\, p \in N(p_0) \mid I(p) \le I(p_0) \,\}\|
CN201010102106A 2010-01-28 2010-01-28 Face recognition image processing method based on sequence characteristics Pending CN101777120A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201010102106A CN101777120A (en) 2010-01-28 2010-01-28 Face recognition image processing method based on sequence characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201010102106A CN101777120A (en) 2010-01-28 2010-01-28 Face recognition image processing method based on sequence characteristics

Publications (1)

Publication Number Publication Date
CN101777120A true CN101777120A (en) 2010-07-14

Family

ID=42513578

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201010102106A Pending CN101777120A (en) 2010-01-28 2010-01-28 Face recognition image processing method based on sequence characteristics

Country Status (1)

Country Link
CN (1) CN101777120A (en)


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102902967A (en) * 2012-10-16 2013-01-30 第三眼(天津)生物识别科技有限公司 Method for positioning iris and pupil based on eye structure classification
CN102902967B (en) * 2012-10-16 2015-03-11 第三眼(天津)生物识别科技有限公司 Method for positioning iris and pupil based on eye structure classification
CN103605993A (en) * 2013-12-04 2014-02-26 康江科技(北京)有限责任公司 Image-to-video face identification method based on distinguish analysis oriented to scenes
CN103605993B (en) * 2013-12-04 2017-01-25 康江科技(北京)有限责任公司 Image-to-video face identification method based on distinguish analysis oriented to scenes
WO2015090126A1 (en) * 2013-12-16 2015-06-25 北京天诚盛业科技有限公司 Facial characteristic extraction and authentication method and device
CN103996023A (en) * 2014-05-09 2014-08-20 清华大学深圳研究生院 Light field face recognition method based on depth belief network
CN103996023B (en) * 2014-05-09 2017-02-15 清华大学深圳研究生院 Light field face recognition method based on depth belief network
CN104700018A (en) * 2015-03-31 2015-06-10 江苏祥和电子科技有限公司 Identification method for intelligent robots
CN107633163A (en) * 2016-07-19 2018-01-26 百度在线网络技术(北京)有限公司 Login method and device based on recognition of face
CN110032927A (en) * 2019-02-27 2019-07-19 视缘(上海)智能科技有限公司 A kind of face identification method

Similar Documents

Publication Publication Date Title
Tang Wavelet theory approach to pattern recognition
CN101777120A (en) Face recognition image processing method based on sequence characteristics
Ding et al. An approach for visual attention based on biquaternion and its application for ship detection in multispectral imagery
Zhang et al. Saliency detection based on self-adaptive multiple feature fusion for remote sensing images
CN102637251B (en) Face recognition method based on reference features
Li et al. Extracting the nonlinear features of motor imagery EEG using parametric t-SNE
CN110097617B (en) Image fusion method based on convolutional neural network and significance weight
CN103646244A (en) Methods and devices for face characteristic extraction and authentication
CN103605993B (en) Image-to-video face identification method based on distinguish analysis oriented to scenes
Chen et al. TEMDNet: A novel deep denoising network for transient electromagnetic signal with signal-to-image transformation
CN108664911A (en) A kind of robust human face recognition methods indicated based on image sparse
Mukhedkar et al. Fast face recognition based on Wavelet Transform on PCA
Gao et al. A novel face feature descriptor using adaptively weighted extended LBP pyramid
Wang et al. A new Gabor based approach for wood recognition
CN107590785A (en) A kind of Brillouin spectrum image-recognizing method based on sobel operators
CN104834909A (en) Image characteristic description method based on Gabor synthetic characteristic
CN105046189A (en) Human face recognition method based on bi-directionally and two-dimensionally iterative and non-relevant discriminant analysis
CN104240187A (en) Image denoising device and image denoising method
Amrouni et al. Contactless palmprint recognition using binarized statistical image features-based multiresolution analysis
CN102426704B (en) Quick detection method for salient object
CN116310452B (en) Multi-view clustering method and system
Bi et al. A robust color edge detection algorithm based on the quaternion Hardy filter
CN110490210A (en) A kind of color texture classification method based on compact interchannel t sample differential
CN116309030A (en) GAN-based small data set craniofacial translation method
Zhou et al. Face Recognition Based on Multi-Wavelet and Sparse Representation.

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20100714