CN104850838A - Three-dimensional face recognition method based on expression invariant regions - Google Patents


Info

Publication number
CN104850838A
CN104850838A (application CN201510254758.2A); granted as CN104850838B
Authority
CN
China
Prior art keywords
three-dimensional face
point
region
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510254758.2A
Other languages
Chinese (zh)
Other versions
CN104850838B (en)
Inventor
纪禄平
尹力
郝德水
王强
卢鑫
黄青君
杨洁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN201510254758.2A
Publication of CN104850838A
Application granted
Publication of CN104850838B
Legal status: Expired - Fee Related
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/60 - Type of objects
    • G06V20/64 - Three-dimensional objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/165 - Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a three-dimensional face recognition method based on expression invariant regions. First, a two-dimensional face region is detected in the two-dimensional face image corresponding to the three-dimensional face data; an initial three-dimensional face region is extracted from the three-dimensional data according to that two-dimensional region, transverse slices are taken through it, and the nose tip is detected. A more accurate three-dimensional face region is then extracted around the nose tip, from which a statistical feature vector and the expression invariant region are computed. The statistical feature vectors of the gallery (check) samples serve as the reference set of a rejection classifier: candidate gallery samples are selected according to the statistical feature vector of the sample to be recognized, the point set of the expression invariant region of the sample to be recognized is matched against the point sets of the candidates' expression invariant regions, and the recognition result is obtained from the matching error. The method improves the accuracy of the extracted three-dimensional face region, and the combination of statistical feature vectors with expression invariant regions improves the accuracy of three-dimensional face recognition.

Description

Three-dimensional face recognition method based on expression invariant regions
Technical field
The invention belongs to the field of three-dimensional face recognition technology and, more specifically, relates to a three-dimensional face recognition method based on expression invariant regions.
Background technology
Face recognition has been developing since the 1990s, for more than twenty years. Early research focused on recognition from two-dimensional images; as the work progressed, two-dimensional face recognition achieved good recognition rates under constrained conditions (controlled illumination, pose and expression) and could satisfy simple application scenarios. Recognition under non-ideal conditions then became the research focus, but recognition rates under complex conditions have not improved substantially. With the development of 3-D scanning technology, three-dimensional data has become easier and easier to acquire, and with the growth of computing power, three-dimensional face recognition has become a research hotspot. A two-dimensional image is in essence a projection of three-dimensional data onto a plane, so three-dimensional data carries richer shape information, and its acquisition is essentially unaffected by illumination. The disadvantages of three-dimensional face recognition are equally obvious: matching high-resolution three-dimensional data consumes a large amount of computation time, and three-dimensional faces are more susceptible to expression changes, whose non-rigid deformation regions directly reduce the recognition rate. Better methods are therefore needed to eliminate the influence of expression changes while reducing the matching time required for recognition as far as possible.
Three-dimensional face data are generally represented as a point cloud: the coordinates of each point are given on virtual X, Y, Z axes in space, with no other association between points. The basic pipeline of three-dimensional face recognition can be divided into three parts: pre-processing, feature extraction and feature matching. The main problems pre-processing must solve are extracting a complete face region and noise-reduction smoothing: a 3-D scanner also captures regions such as the hair and shoulders, which interfere with extracting the complete face, while noise caused by the scanning device, such as holes and spikes, must also be eliminated. Feature extraction uses a dedicated algorithm to extract values forming a vector that characterizes the original face, and feature matching computes the similarity between pairs of faces from these vectors.
Three-dimensional face recognition methods can be divided into two classes according to the data they require. The first is single-modal recognition, which relies only on the three-dimensional face data; steps such as key-point localization and feature extraction depend on the raw three-dimensional data alone. Single-modal recognition can further be divided into feature-based methods and holistic methods. The second is multi-modal recognition: because three-dimensional data lacks the texture information of a two-dimensional image, fusing a two-dimensional image with the three-dimensional data can, in theory, provide more information, so the extracted features are more discriminative and the recognition rate improves.
Multi-modal recognition takes several forms: different sensors can provide different representations, multiple samples acquired under different conditions can be combined, or the results of different algorithms can be fused into a final recognition result. For multi-modal recognition of three-dimensional faces, Tsalakanidou applied PCA separately to the color image and the depth image and fused the results into the final recognition result; Mian designed a rejection classifier based on a spherical representation of the three-dimensional face, which screens out part of the faces to be identified and reduces computation, and represented the original face by regions less affected by expression, performing recognition on those regions. Current multi-modal methods, however, extract the expression invariant region with low accuracy, so they remain deficient in eliminating the influence of expression changes.
Summary of the invention
The object of the invention is to overcome the deficiencies of the prior art and provide a three-dimensional face recognition method based on expression invariant regions, which combines the two-dimensional image with the three-dimensional data to accurately locate the nose tip of the face, and from it obtains a more accurate face region and expression invariant region, thereby improving the accuracy of three-dimensional face recognition.
To achieve the above object, the three-dimensional face recognition method based on expression invariant regions of the present invention comprises the following steps:
S1: extract the statistical feature and the expression invariant region of the sample to be recognized and of each gallery (check) sample. For each three-dimensional face data set, feature extraction comprises the following steps:
S1.1: detect the face region in the two-dimensional face image corresponding to the three-dimensional face data;
S1.2: according to the x coordinate range and y coordinate range of the face region detected in the two-dimensional image, extract the corresponding region from the three-dimensional face data as the initial three-dimensional face region;
S1.3: detect the nose tip in the initial three-dimensional face region obtained in step S1.2;
S1.4: with the nose tip as the center of a sphere, compute the distance from each point of the three-dimensional face data to the nose tip; a point belongs to the three-dimensional face region if its distance is less than a preset radius R, and otherwise does not, thus yielding the three-dimensional face region;
S1.5: correct the face pose of the three-dimensional face region obtained in step S1.4, obtaining the corrected three-dimensional face region;
S1.6: for the three-dimensional face region obtained in step S1.5, set K radii λ_k, k = 1, 2, …, K, with λ_k < λ_{k+1} and λ_K ≤ R; with the nose tip as the center of the sphere, count the number f_k of three-dimensional face data points inside the sphere of radius λ_k, and build the statistical feature vector F = (f_1, f_2, …, f_K);
S1.7: extract the expression invariant region. Concretely: first take the transverse slice through the nose tip (x_a, y_a, z_a); then, with the nose tip as center, draw a circle of preset radius v in the slice plane and find its two intersections (x_b, y_b, z_b) and (x_c, y_c, z_c) with the slice. Traverse each point (x, y, z) of the three-dimensional face region obtained in step S1.5: if x ∈ [x_b, x_c] and y ∈ [y_a - δ_1, y_a + δ_2], or y > y_a + δ_2, where δ_1 is the downward offset and δ_2 the upward offset, the point belongs to the expression invariant region; otherwise it does not;
S2: use the statistical feature vectors of the gallery samples as the reference set of a rejection classifier: compute the distance between the statistical feature vector of the three-dimensional face region of the sample to be recognized and that of each gallery sample, sort the gallery samples by increasing distance, and keep the top fraction, according to a preset ratio, as candidate gallery samples;
S3: match the coordinate point set of the expression invariant region of the sample to be recognized against that of each candidate gallery sample obtained in step S2, and output the Q candidates with the smallest matching error as the recognition result.
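Step S3 leaves the point-set matching method open; the experiments (Fig. 12) use the iterative closest point (ICP) algorithm. The following is a minimal one-way ICP sketch under assumed numpy point sets (brute-force nearest neighbours, Kabsch/SVD rigid alignment); it is illustrative, not the patent's own code.

```python
# Minimal one-way ICP sketch: align point set A to B, return mean NN distance.
# All names (icp_error, iters) are illustrative assumptions.
import numpy as np

def icp_error(A, B, iters=10):
    """Rigidly align A (N x 3) to B (M x 3); return the mean nearest-neighbour distance."""
    A = A.copy()
    for _ in range(iters):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        Bm = B[d2.argmin(1)]                    # closest point of B for each point of A
        ca, cb = A.mean(0), Bm.mean(0)
        H = (A - ca).T @ (Bm - cb)              # cross-covariance of the pairing
        U, _, Vt = np.linalg.svd(H)
        Rm = Vt.T @ U.T                         # optimal rotation (Kabsch)
        if np.linalg.det(Rm) < 0:
            Vt[-1] *= -1                        # avoid a reflection
            Rm = Vt.T @ U.T
        A = (A - ca) @ Rm.T + cb                # apply the rigid transform
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2.min(1)).mean()
```

The Q candidates with the smallest matching error against the sample to be recognized would then form the recognition result.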
In the three-dimensional face recognition method of the present invention, a two-dimensional face region is first detected in the two-dimensional face image corresponding to the three-dimensional face data; an initial three-dimensional face region is extracted from the three-dimensional data according to the two-dimensional region, transverse slices are taken through it, and the nose tip is detected; a more accurate three-dimensional face region is then extracted around the nose tip, from which the statistical feature vector and the expression invariant region are computed; the statistical feature vectors of the gallery samples serve as the reference set of a rejection classifier, candidate gallery samples are selected according to the statistical feature vector of the sample to be recognized, the point sets of the expression invariant regions are matched, and the recognition result is obtained from the matching error.
The present invention has the following beneficial effects:
(1) combining the two-dimensional face image with the three-dimensional face data makes the extracted three-dimensional face region more accurate;
(2) correcting the face pose of the three-dimensional face region roughly aligns the sample to be recognized with the gallery samples, avoiding non-convergence of the matching algorithm;
(3) deriving the expression invariant region from the nose tip adapts to facial variation and makes the expression invariant region more accurate.
Brief description of the drawings
Fig. 1 is the flow chart of the three-dimensional face recognition method based on expression invariant regions of the present invention;
Fig. 2 is the flow chart of facial feature extraction;
Fig. 3 shows examples of three-dimensional face data;
Fig. 4 shows examples of noise elimination on three-dimensional face data;
Fig. 5 shows the face detection result on a two-dimensional face image;
Fig. 6 is a schematic diagram of the slice-based nose tip detection principle;
Fig. 7 shows an example of an extracted three-dimensional face region;
Fig. 8 is a schematic diagram of statistical feature extraction;
Fig. 9 is a schematic diagram of expression invariant region extraction;
Fig. 10 shows examples of expression invariant regions;
Fig. 11 compares the CMC curves of experiment one and experiment two;
Fig. 12 compares the CMC curves of the iterative closest point algorithm with those of the LDA and PCA algorithms.
Embodiment
The specific embodiments of the present invention are described below in conjunction with the accompanying drawings, so that those skilled in the art can better understand the invention. Note in particular that, where a detailed description of a known function or design might dilute the main content of the invention, that description is omitted here.
Embodiment
Fig. 1 is the flow chart of the three-dimensional face recognition method based on expression invariant regions of the present invention. As shown in Fig. 1, the concrete steps of the method comprise:
S101: extract the features of the sample to be recognized and of the gallery samples:
The features of the sample to be recognized and of the gallery samples are extracted first. Two kinds of features of the three-dimensional face region are used in the present invention: the statistical feature vector and the expression invariant region point set. For accurate feature extraction, the sample images must be pre-processed before face region detection, and the extracted face region must be corrected. Fig. 2 is the flow chart of facial feature extraction. As shown in Fig. 2, the concrete steps comprise:
S201: image pre-processing:
Before feature extraction, the three-dimensional face data generally needs pre-processing to remove noise. As in the prior art, the x-axis of the three-dimensional face denotes the horizontal direction of the face, the y-axis the vertical direction, and the z-axis, perpendicular to the x-y plane, can be understood as depth.
Fig. 3 shows examples of three-dimensional face data: three data sets in all, each column showing two views of one face. The noise in three-dimensional face data mainly manifests as spikes, holes and salt-and-pepper noise caused by deficiencies of the scanning device.
Spikes are eliminated as follows: traverse each point O(x_O, y_O, z_O) of the three-dimensional face data, taking the data points with x ∈ [x_O - l, x_O + l] and y ∈ [y_O - l, y_O + l] as the neighborhood of O, where l is a positive integer, so the side length of the neighborhood is 2l + 1. Compute the mean μ_O and standard deviation σ_O of the z coordinates of the neighborhood points, excluding O itself and any hole points, and set a threshold t_O for each point of the three-dimensional face data:
t_O = μ_O + 0.6 σ_O    (1)
If z_O ≤ t_O, the point is considered normal and left untouched; if z_O > t_O, the point is considered abnormal and z_O is set to t_O.
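The spike-elimination rule of Eq. (1) can be sketched as follows, assuming the scan is stored as a depth grid Z with NaN marking hole points; the grid layout, numpy, and the name `remove_spikes` are illustrative assumptions, since the patent itself works on an unordered point cloud.

```python
# Spike removal sketch: clamp z-values that exceed t_O = mu_O + 0.6*sigma_O
# computed over a (2l+1)-sided neighborhood, per Eq. (1).
import numpy as np

def remove_spikes(Z, l=2):
    H, W = Z.shape
    out = Z.copy()
    for i in range(H):
        for j in range(W):
            z = Z[i, j]
            if np.isnan(z):
                continue                      # hole points are filled later by interpolation
            nb = Z[max(0, i - l):i + l + 1, max(0, j - l):j + l + 1].ravel()
            nb = nb[~np.isnan(nb)]            # exclude hole points from the statistics
            hit = np.flatnonzero(nb == z)
            if hit.size:
                nb = np.delete(nb, hit[0])    # exclude the point O itself
            if nb.size == 0:
                continue
            t = nb.mean() + 0.6 * nb.std()    # threshold of Eq. (1)
            if z > t:
                out[i, j] = t                 # abnormal point: clamp to the threshold
    return out
```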
Holes in the three-dimensional face data are handled by interpolation and resampling: the spike-free three-dimensional face data is first converted into a Delaunay triangular mesh, the hole points are filled by linear interpolation, and the interpolated data is then resampled at the resolution of the original three-dimensional face data.
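The Delaunay-plus-linear-interpolation step might be sketched with SciPy's `griddata`, which triangulates the scattered valid samples internally; the depth-grid layout and the name `fill_holes` are assumptions, not the patent's implementation.

```python
# Hole filling by Delaunay triangulation + linear interpolation (sketch).
import numpy as np
from scipy.interpolate import griddata

def fill_holes(Z):
    """Fill np.nan holes of a depth grid by linear interpolation over a Delaunay mesh."""
    H, W = Z.shape
    yy, xx = np.mgrid[0:H, 0:W]
    valid = ~np.isnan(Z)
    return griddata(
        np.column_stack([xx[valid], yy[valid]]),  # known (x, y) sample sites
        Z[valid],                                 # known depths
        (xx, yy),                                 # resample on the original grid
        method="linear",
    )
```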
Finally, mean curvature flow smoothing is applied to the hole-filled three-dimensional face data to eliminate salt-and-pepper noise.
Fig. 4 shows examples of noise elimination: the first row shows three initial three-dimensional face data sets and the second row the same data after noise elimination; the de-noised data is visibly more complete and smooth.
To align the two-dimensional face image with the three-dimensional face data, this embodiment applies the same interpolation and resampling to the gray-scale two-dimensional image: the gray data is converted into a Delaunay triangular mesh, linearly interpolated, and resampled at the resolution of the original two-dimensional face image.
S202: face detection on the two-dimensional face image:
To improve the robustness of the subsequent nose tip detection, the face region is first detected in the two-dimensional face image corresponding to the three-dimensional face data, and nose tip detection is then performed on the basis of this two-dimensional face region. Face detection in two-dimensional images is mature and many algorithms are available; this embodiment uses the detector based on Haar features, a classical face feature. See Viola P, Jones M. Rapid Object Detection Using a Boosted Cascade of Simple Features. IEEE Conference on Computer Vision and Pattern Recognition (CVPR'01), 2001, 1: I-511–I-518. Fig. 5 shows the face detection result: after a face is detected, a rectangular box enclosing the face region is obtained on the original two-dimensional image. If no face region is detected in the two-dimensional image, the image is shrunk inward from its edges by a proportion set according to the actual conditions, and the resulting region is taken as the two-dimensional face region.
S203: extract the initial three-dimensional face region:
According to the x coordinate range [x_1, x_2] and y coordinate range [y_1, y_2] of the face region detected in the two-dimensional image, the corresponding region is extracted from the three-dimensional face data as the initial three-dimensional face region. That is, a point (x_O, y_O, z_O) of the three-dimensional data belongs to the initial three-dimensional face region if x_O ∈ [x_1, x_2] and y_O ∈ [y_1, y_2].
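Step S203 reduces to a box filter on the x and y coordinates; a minimal numpy sketch (function and argument names are illustrative assumptions):

```python
# Keep the points whose x and y fall inside the 2-D detector's ranges.
import numpy as np

def crop_initial_region(points, x1, x2, y1, y2):
    """points is an N x 3 array; returns the points inside [x1, x2] x [y1, y2]."""
    x, y = points[:, 0], points[:, 1]
    mask = (x >= x1) & (x <= x2) & (y >= y1) & (y <= y2)
    return points[mask]
```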
S204: nose tip detection:
Nose tip detection is performed on the initial three-dimensional face region obtained in step S203. The concrete detection method may be chosen as required; this embodiment uses slice-based nose tip detection.
Fig. 6 illustrates the principle of slice-based nose tip detection: take a transverse slice through the nose and draw a circle centered on the tip of the nose; the circle intersects the slice at two points, and these three points form a triangle whose height from the nose tip is drawn as the thick line in the figure. When the circle's center moves along the slice, the height of the triangle (the thick straight line of Fig. 6) attains its maximum at the nose tip; and if slices are taken along the whole longitudinal extent of the face region, the maximum triangle height is still attained at the nose.
The concrete detection method is therefore: within the three-dimensional face region, take transverse slices at spacing α_1, i.e., slices y = y_1 + k × α_1 with y ∈ [y_1, y_2], k = 0, 1, 2, …, where α_1 is the first slice spacing. For each slice, traverse each point of the slice, draw a circle of preset radius r centered on it, obtain its two intersections with the slice, and compute the height from the center point to the line through the two intersections; among the points of all slices, select the point with the maximum height as the initial nose tip (if several points attain the maximum, pick one arbitrarily). Centered on the y coordinate y* of the slice containing the initial nose tip, expand downward and upward to obtain the secondary slice range [y* - β, y* + β], where β is an offset, and take transverse slices of the face region within this range at spacing α_2, i.e., slices y = y* - β + k × α_2 with y ∈ [y* - β, y* + β], where α_2 < α_1 is the secondary slice spacing. As before, for each slice traverse each point, draw a circle of preset radius r centered on it, find its two intersections with the slice, compute the height from the center to the line through the intersections, and select the point with maximum height among all slices as the nose tip.
To make the detected nose tip accurate, the first and secondary slice spacings should not be too small. According to face size, the usual range is 5 mm ≤ α_1 ≤ 10 mm; α_2 equals the y spacing of the three-dimensional face data points, i.e., the resolution of the data; and the offset β ranges over 15 mm ≤ β ≤ 25 mm, with concrete values chosen according to the resolution of the three-dimensional face data. In this embodiment the resolution of the three-dimensional face data is 1 mm, and α_1 = 5 mm, α_2 = 1 mm, β = 20 mm.
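The triangle-height criterion within one transverse slice might be sketched as follows, approximating the two circle/profile intersections by the slice point on each side of the candidate whose distance to it is closest to the preset radius r; this discretization, numpy, and all names are assumptions.

```python
# Slice-based nose tip search within one transverse (x-z) slice.
import numpy as np

def point_line_height(p, a, b):
    """Distance from p to the line through a and b (2-D points in the slice plane)."""
    d = b - a
    n = np.hypot(d[0], d[1])
    if n == 0.0:
        return 0.0
    return abs(d[0] * (p[1] - a[1]) - d[1] * (p[0] - a[0])) / n

def nose_tip_in_slice(slice_xz, r):
    """Return (point, height) maximizing the triangle height over one slice."""
    best_h, best_p = -1.0, None
    for p in slice_xz:
        left = [q for q in slice_xz if q[0] < p[0]]
        right = [q for q in slice_xz if q[0] > p[0]]
        if not left or not right:
            continue  # a circle centred here cannot cut the profile on both sides
        # approximate the two circle/profile intersections
        a = min(left, key=lambda q: abs(np.linalg.norm(q - p) - r))
        b = min(right, key=lambda q: abs(np.linalg.norm(q - p) - r))
        h = point_line_height(p, a, b)
        if h > best_h:
            best_h, best_p = h, p
    return best_p, best_h
```

Running this over all coarse slices, then again at the fine spacing α_2 around the best slice, gives the two-pass search described above.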
S205: extract the three-dimensional face region from the nose tip:
The three-dimensional face region is extracted from the nose tip detected in step S204, as follows: with the nose tip as the center of a sphere, compute the distance from each point of the three-dimensional face data to the nose tip; a point belongs to the three-dimensional face region if its distance is less than a preset radius R, and otherwise does not. That is, the points inside the sphere of radius R centered on the nose tip form the three-dimensional face region, which completes the extraction. The radius R is also set according to face size, usually 60 mm ≤ R ≤ 100 mm; in this embodiment R = 80 mm. Fig. 7 shows an example of an extracted three-dimensional face region.
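Step S205 as a numpy sketch: keep the points inside the sphere of radius R centered on the nose tip (names are illustrative assumptions).

```python
# Spherical crop of the face region around the nose tip.
import numpy as np

def crop_face_sphere(points, nose_tip, R=80.0):
    """points: N x 3 array; nose_tip: 3-vector; R in mm."""
    d = np.linalg.norm(points - nose_tip, axis=1)
    return points[d < R]
```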
S206: face pose correction:
Since a face has 6 degrees of freedom during acquisition, the acquired faces may differ in pose, so the face pose of the three-dimensional face region extracted in step S205 must be corrected. The concrete correction method may be chosen as required; this embodiment uses pose correction based on the Hotelling transform, as follows.
For a three-dimensional face data set, the x, y, z coordinates of the points can be written as column vectors, all points forming the matrix
P = [x_1 … x_N; y_1 … y_N; z_1 … z_N]    (2)
where N is the number of points of the three-dimensional face region extracted in step S205.
Compute the mean vector m and covariance matrix C of the matrix P, as in (3) and (4):
m = (1/N) Σ_{n=1}^{N} p_n    (3)
C = (1/N) Σ_{n=1}^{N} p_n p_n^T - m m^T    (4)
where p_n = (x_n, y_n, z_n)^T is the coordinate column vector of the n-th point and the superscript T denotes transposition. The eigenvectors of C satisfy
C V = V D    (5)
where D is the eigenvalue matrix of the covariance matrix C, a diagonal matrix, and V = [v_1, v_2, v_3] is the matrix formed by the eigenvectors. There are many methods for computing eigenvectors; they are not detailed here. Once the eigenvectors are obtained, the Hotelling transform can be carried out, as in (6):
p_n′ = U (p_n - m)    (6)
where p_n′ is the n-th coordinate column vector of the transformed matrix P′, and U is obtained from V as in (7):
V = [v_1, v_2, v_3],  U = [v_3, v_2, v_1]^T    (7)
The Hotelling transform is an iterative process; a single linear transform does not reach the ideal pose. At each iteration the difference between the data before and after the transform (P′ and P) is evaluated: iteration terminates when the difference falls below a threshold or the maximum number of iterations is reached; otherwise set P = P′ and recompute the mean vector and covariance matrix for the next transform. A face processed by the Hotelling transform may expose hole regions that the scanner did not previously reach. If no hole appears, nothing is done; if holes appear, they must be filled, for example directly by interpolation, though other methods may be used. After hole filling, the number of points of the three-dimensional face region is still N.
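The iteration of Eqs. (3)-(7) can be sketched with numpy's symmetric eigendecomposition. `eigh` returns eigenvalues in ascending order, so `V.T` already realizes the row order [v_3, v_2, v_1] of Eq. (7) when v_1 denotes the largest-eigenvalue vector. The sign normalization is an added assumption: eigenvector signs are arbitrary, and without fixing them the iteration may oscillate.

```python
# Iterated Hotelling transform sketch for pose correction.
import numpy as np

def hotelling_align(P, max_iter=20, tol=1e-6):
    """P is a 3 x N coordinate matrix as in Eq. (2); returns the aligned matrix."""
    for _ in range(max_iter):
        m = P.mean(axis=1, keepdims=True)         # mean vector, Eq. (3)
        C = (P @ P.T) / P.shape[1] - m @ m.T      # covariance matrix, Eq. (4)
        _, V = np.linalg.eigh(C)                  # C V = V D, Eq. (5)
        U = V.T                                   # rows v3, v2, v1, Eq. (7)
        for i in range(3):
            j = int(np.argmax(np.abs(U[i])))
            if U[i, j] < 0:
                U[i] = -U[i]                      # fix the sign ambiguity
        Pn = U @ (P - m)                          # Eq. (6)
        if np.linalg.norm(Pn - P) < tol:
            return Pn                             # converged: P' ~ P
        P = Pn
    return P
```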
S207: extract the statistical feature vector of the three-dimensional face region:
The statistical feature is extracted first from the pose-corrected three-dimensional face region. Fig. 8 is a schematic diagram of statistical feature extraction. The method is: set K radii λ_k, k = 1, 2, …, K, with λ_k < λ_{k+1} and λ_K ≤ R; with the nose tip as the center of the sphere, count the number f_k of face points inside the sphere of radius λ_k, and build the statistical feature vector F = (f_1, f_2, …, f_K).
In general, to make full use of the data of the three-dimensional face region, λ_K = R. Since the region radius in step S205 of this embodiment is R = 80 mm, 15 concentric spheres centered on the nose tip are built with a radius step of 6 mm; each point is assigned to the spheres according to its distance to the nose tip, and counting yields a 15-dimensional feature vector.
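The counting of step S207 might be sketched as follows; numpy, the uniform radius step R/K, and the helper name are illustrative assumptions.

```python
# Statistical feature vector: point counts inside K concentric spheres.
import numpy as np

def statistical_feature(points, nose_tip, R=80.0, K=15):
    """Count the face points inside spheres of radii lambda_1 < ... < lambda_K = R."""
    d = np.linalg.norm(points - nose_tip, axis=1)
    radii = np.linspace(R / K, R, K)   # uniform step, an assumed choice
    return np.array([(d <= lam).sum() for lam in radii])
```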
S208: extract the expression invariant region:
A crucial point of three-dimensional face recognition is avoiding the influence of facial expression. Analysis shows that the regions least affected by expression concentrate on the nose, forehead and part of the cheeks; extracting these regions as the regions to be matched reduces the influence of expression changes.
Fig. 9 is a schematic diagram of expression invariant region extraction. The method is: first take the transverse slice through the nose tip (x_a, y_a, z_a), i.e., y = y_a; then, with the nose tip as center, draw a circle of preset radius v in the slice plane and find its two intersections (x_b, y_b, z_b) and (x_c, y_c, z_c) with the slice. The x coordinates of these two intersections delimit the x range of the nose area of the expression invariant region; moving the nose tip's y coordinate y_a down by a certain distance and up by another gives the y range of the nose area; the eyes and forehead region are then added, giving the expression invariant region. The concrete decision rule is: traverse each point (x, y, z) of the three-dimensional face region; if x ∈ [x_b, x_c] and y ∈ [y_a - δ_1, y_a + δ_2], where δ_1 is the downward offset and δ_2 the upward offset, the point belongs to the expression invariant region; otherwise, if y > y_a + δ_2, the point also belongs to the expression invariant region; otherwise it does not. Experiments show that with 35 mm ≤ v ≤ 45 mm, 5 mm ≤ δ_1 ≤ 15 mm and 15 mm ≤ δ_2 ≤ 25 mm, the obtained expression invariant region is fairly accurate; concrete values are determined by the actual samples. In this embodiment v = 40 mm, δ_1 = 10 mm, δ_2 = 20 mm.
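The decision rule of step S208 can be sketched as follows. The x range [x_b, x_c] comes from intersecting the nose-tip circle of radius v with the transverse slice; here it is simply passed in, and numpy plus all names are illustrative assumptions.

```python
# Expression invariant region: nose band plus everything above it.
import numpy as np

def expression_invariant_region(points, nose_tip, xb, xc, d1=10.0, d2=20.0):
    """Keep the nose band plus the eyes/forehead area above it."""
    x, y = points[:, 0], points[:, 1]
    ya = nose_tip[1]
    nose_band = (x >= xb) & (x <= xc) & (y >= ya - d1) & (y <= ya + d2)
    above = y > ya + d2                     # eyes / forehead area
    return points[nose_band | above]
```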
Figure 10 shows examples of expression invariant regions. As shown in Figure 10, the first row is the three-dimensional face regions, and the second row is the expression invariant regions extracted from the face regions in the first row. As can be seen from Figure 10, the expression invariant region extraction method of the present invention adapts well to different faces.
S102: carry out coarse classification according to the statistical feature vector:
The present invention first uses a rejection classifier on the statistical features to coarsely classify the sample to be identified. A rejection classifier is a classifier that can quickly filter most of the candidates out of a set. The rejection classifier filters out part of the candidates and outputs the remaining ones, and the sample to be identified is then matched against these remaining candidates. Evidently, the fewer the candidates retained by the rejection classifier, the smaller the computation required for the subsequent judgment; by controlling the rejection rate of the rejection classifier, the computation of the subsequent judgment can be controlled.
The classification process of the rejection classifier is: calculate the distance between the statistical feature vector of the three-dimensional face region of the sample to be identified and the statistical feature vector of each check sample, sort the check samples by distance in ascending order, and select the first several check samples according to a predetermined ratio as the candidate check samples. The smaller the predetermined ratio, the fewer the candidate check samples and the smaller the computation of the subsequent judgment.
In the present embodiment, the distance between statistical feature vectors is the standardized Euclidean distance. The standardized Euclidean distance is an improvement on the Euclidean distance: for data whose dimensions are inconsistently distributed, it first normalizes each dimension so that the mean and variance of every dimension are consistent. For two K-dimensional statistical feature vectors (f_11, f_12, ..., f_1K) and (f_21, f_22, ..., f_2K), the standardized Euclidean distance d is computed as:
d = \sqrt{\sum_{k=1}^{K} \left( \frac{f_{1k} - f_{2k}}{s_k} \right)^2} \qquad (8)
where s_k represents the standard deviation of the k-th dimension, i.e. of f_1k and f_2k.
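Combining Eq. (8) with the sorting step of S102 gives a compact sketch of the rejection classifier. Estimating s_k from the gallery itself is an assumption here, since the text does not state over which sample set the standard deviations are taken:

```python
import numpy as np

def rejection_filter(query, gallery, keep_ratio=0.5):
    # query: (K,) statistical feature vector of the sample to be identified.
    # gallery: (M, K) statistical feature vectors of the check samples.
    # Distances follow Eq. (8): each dimension is divided by its standard
    # deviation before the Euclidean norm is taken. Returns the indices of
    # the nearest keep_ratio fraction of check samples.
    s = gallery.std(axis=0)
    s[s == 0] = 1.0                      # guard against constant dimensions
    d = np.sqrt((((gallery - query) / s) ** 2).sum(axis=1))
    keep = max(1, int(len(gallery) * keep_ratio))
    return np.argsort(d)[:keep]
```

With keep_ratio = 0.5 (the embodiment's 50% rejection ratio), the subsequent point-set matching only has to consider half of the gallery.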
S103: carry out face matching and recognition according to the expression invariant region:
Next, according to the sample to be identified and the candidate check samples obtained in step S102, the coordinate point set of the expression invariant region of the sample to be identified is matched against the coordinate point set of the expression invariant region of each candidate check sample, and the Q candidate check samples with the smallest matching error are selected as the recognition result. The size of Q is set according to actual needs.
In the present embodiment, the iterative closest point (ICP) algorithm is adopted to match the expression invariant region point sets. The iterative closest point algorithm is a mainstream algorithm for three-dimensional point-set registration. Its main idea is as follows. Denote the coordinate point set of the expression invariant region of the sample to be identified by P = {p_i | i = 1, 2, ..., M_1}, where M_1 is the number of points in the expression invariant region and p_i is the coordinate of the i-th point in P. Denote the coordinate point set of the expression invariant region of a candidate check sample by G = {g_j | j = 1, 2, ..., M_2}, where M_2 is the number of points and g_j is the coordinate of the j-th point in G. First take a subset P′ from P, the number of points in P′ being M; search G for the subset G′ consisting of the closest point to each point of P′; then compute the translation matrix τ and the rotation matrix γ between the two subsets P′ and G′, and transform the point set of the sample to be identified: P′(t+1) = γP′(t) + τ, where t denotes the t-th iteration. After each iteration, the closest-point subset G′(t+1) is searched again and the translation and rotation matrices are recomputed. Obviously P′(0) = P′ and G′(0) = G′. After each iteration, the distance error E_t under the current iteration number can be computed as:
E_t = \frac{1}{M} \sum_{i=1}^{M} \left\| p_i^{t} - g_i^{t-1} \right\| \qquad (9)
When the number of iterations reaches a preset count or the distance error falls below a predetermined threshold, iteration terminates. For details of the iterative closest point algorithm see: Neugebauer P J. Reconstruction of real-world objects via simultaneous registration and robust combination of multiple range images [J]. International Journal of Shape Modeling, 2011.
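A minimal ICP sketch in the spirit of the description is given below. It uses brute-force nearest neighbours and an SVD-based rigid fit (the Kabsch method) in place of the cited registration machinery, takes P′ = P without subsampling, and returns the final distance error of Eq. (9) as the matching error; these are simplifying assumptions, not the patent's exact implementation:

```python
import numpy as np

def icp_error(P, G, iters=20):
    # P: (M, 3) points of the sample's expression invariant region.
    # G: (N, 3) points of a candidate check sample's region.
    P = np.asarray(P, dtype=float).copy()
    G = np.asarray(G, dtype=float)
    for _ in range(iters):
        # closest point in G for every point of P (brute force)
        nn = ((P[:, None, :] - G[None, :, :]) ** 2).sum(-1).argmin(1)
        Q = G[nn]
        # best rigid transform P -> Q via SVD (Kabsch)
        cp, cq = P.mean(0), Q.mean(0)
        U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        P = (R @ (P - cp).T).T + cq
    nn = ((P[:, None, :] - G[None, :, :]) ** 2).sum(-1).argmin(1)
    return float(np.linalg.norm(P - G[nn], axis=1).mean())
```

For realistic point counts a k-d tree (e.g. scipy.spatial.cKDTree) would replace the O(MN) nearest-neighbour search, but the iteration structure is the same.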
In the present invention, since the distance error is used as the criterion for face recognition, the number of iterations t_o is fixed in advance, and the distance error is calculated after the iterations complete.
When the coordinate point set of the expression invariant region of the sample to be identified is matched against that of each candidate check sample, if the two point sets are not well aligned at the start, the matching algorithm may fail to converge. The present invention therefore performs posture correction on the three-dimensional face region at the very beginning, which effectively avoids non-convergence of the matching algorithm.
In practical applications, the statistical features and expression invariant regions of the check samples can first be extracted uniformly and stored; when a sample needs to be recognized, only the statistical feature and expression invariant region of the sample to be identified need to be extracted, after which recognition proceeds by the identification steps of the present invention.
In order to verify the technical effect of the present invention, test experiments were carried out on the UMB-DB face database. According to facial expression category (neutral, smiling, angry and bored, four categories in total), part of the faces in the database were randomly selected as the test set. Table 1 lists the test-set data.
Table 1
To test the faces in each test set, 100 neutral-expression faces were randomly chosen from the 143 subjects as check samples, with the samples of 50 people serving as the control.
Three groups of experiments were designed. Experiment 1 is the present invention: a rejection classifier obtains the candidate check samples, with the rejection ratio of the rejection classifier set to 50%, and the expression invariant region of each sample in the test set is then matched against the expression invariant regions of the candidate check samples by iterative closest point to obtain the face recognition result. Experiments 2 and 3 are control experiments. In Experiment 2, face detection was performed on test sets Probe1, Probe2 and Probe3 and on the check samples, and iterative closest point matching was carried out directly between the face regions of the test set and the face regions of all check samples to obtain the face recognition result, without the rejection-classification step. Experiment 3 is a control using typical face recognition methods based on PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis).
Figure 11 compares the CMC curves of Experiment 1 and Experiment 2. The CMC (Cumulative Match Characteristic) curve is commonly used to evaluate test results in three-dimensional face recognition. As shown in Figure 11, Probe4, Probe5 and Probe6 denote recognition carried out directly on the face regions detected from test sets Probe1, Probe2 and Probe3 respectively. As can be seen from Figure 11, within Rank 1 to 5 the verification rate of each probe in Experiment 1 is clearly higher than in Experiment 2, but within Rank 6 to 15 its performance gain is not as large as Experiment 2's. This is because the classification accuracy of the rejection classifier at a rejection ratio of 50% is only 97%, which directly limits the performance improvement of Experiment 1. On the whole, however, Experiment 1 outperforms Experiment 2, because the present invention matches only the expression invariant region, improving matching accuracy.
Table 2 lists the times of each test set for Experiment 1 and Experiment 2.
Table 2
As can be seen from Table 2, Experiment 1 includes the rejection-classification step with the rejection ratio set to 50%, and the experimental results show that the time required by Experiment 1 is essentially half of that needed by Experiment 2, which matches the expected effect of the rejection classifier. Evidently, because the present invention adopts a rejection classifier, recognition time is significantly reduced and the real-time performance of the method is improved.
Figure 12 compares the CMC curves of the iterative closest point algorithm with those of the LDA and PCA algorithms; only the CMC curves on test set Probe1 are compared here. As seen from Figure 12, the iterative closest point algorithm dominates: from Rank 1 to Rank 5 its score rises smoothly and finally stabilizes at about 0.93, while the other two methods finally stabilize at about 0.85. On the whole, the gap between the three methods is small, because Probe1 has already passed through expression-invariant-region extraction, so the expression-change factor that most affects PCA and LDA was weakened during preprocessing.
Although illustrative embodiments of the present invention have been described above to help those skilled in the art understand the present invention, it should be clear that the invention is not restricted to the scope of these embodiments. To those skilled in the art, as long as various changes remain within the spirit and scope of the present invention as defined and determined by the appended claims, these changes are apparent, and all innovations and creations using the concept of the present invention fall under its protection.

Claims (9)

1. A three-dimensional face recognition method based on expression invariant regions, characterized in that it comprises the following steps:
S1: extract the statistical feature and the expression invariant region of the sample to be identified and of each check sample respectively, the concrete steps of the feature extraction for each piece of three-dimensional face data comprising:
S1.1: perform face region detection on the two-dimensional face image corresponding to the three-dimensional face data;
S1.2: according to the x coordinate range and the y coordinate range of the face region detected in the two-dimensional face image, extract the corresponding three-dimensional face region from the three-dimensional face data as the initial three-dimensional face region;
S1.3: perform nose tip detection on the initial three-dimensional face region obtained in step S1.2 to obtain the nose tip;
S1.4: taking the nose tip as the center of a sphere, calculate the distance from each point of the three-dimensional face data to the nose tip; if the distance is less than a preset radius R, the point belongs to the three-dimensional face region, otherwise it does not, thereby obtaining the three-dimensional face region;
S1.5: perform face posture correction on the three-dimensional face region obtained in step S1.4 to obtain the corrected three-dimensional face region;
S1.6: for the three-dimensional face region obtained in step S1.5, set K radii λ_k, k = 1, 2, ..., K, with λ_k < λ_{k+1} and λ_K ≤ R; taking the nose tip as the center of the sphere, count for each k the number f_k of three-dimensional face data points within the sphere of radius λ_k, and build the statistical feature vector F = (f_1, f_2, ..., f_K);
S1.7: extract the expression invariant region, the concrete method being: first obtain the transverse slice through the nose tip (x_a, y_a, z_a); then, taking the nose tip as the center, draw a circle of preset radius v in the slice plane and find its two intersection points (x_b, y_b, z_b) and (x_c, y_c, z_c) with the transverse slice of the nose tip; in the three-dimensional face region obtained in step S1.5, traverse each point (x, y, z): if x ∈ [x_b, x_c] and y ∈ [y_a − δ_1, y_a + δ_2], or y > y_a + δ_2, where δ_1 denotes the downward offset and δ_2 the upward offset, then the point belongs to the expression invariant region, otherwise it does not;
S2: taking the statistical feature vectors of the check samples as the candidates of a rejection classifier, calculate the distance between the statistical feature vector of the three-dimensional face region of the sample to be identified and the statistical feature vector of each check sample, sort the check samples by distance in ascending order, and select the first several check samples according to a predetermined ratio as the candidate check samples;
S3: according to the sample to be identified and the candidate check samples obtained in step S2, match the coordinate point set of the expression invariant region of the sample to be identified against the coordinate point set of the expression invariant region of each candidate check sample, and select the Q candidate check samples with the smallest matching error as the recognition result.
2. The three-dimensional face recognition method according to claim 1, characterized in that, before step S1.1, the three-dimensional face data and the corresponding two-dimensional face image also need to be preprocessed, the preprocessing method being:
The preprocessing of the three-dimensional face data is: traverse each point O(x_o, y_o, z_o) in the three-dimensional face data, taking the data points with x ∈ [x_o − l, x_o + l] and y ∈ [y_o − l, y_o + l] as the points in the neighborhood of point O; calculate the mean μ_o and the standard deviation σ_o of the z coordinates of these points, excluding point O itself and hole points; calculate the threshold t_o = μ_o + 0.6σ_o; if z_o ≤ t_o, do nothing; if z_o > t_o, set z_o = t_o. Then convert the processed three-dimensional face data into Delaunay triangular mesh data, interpolate the hole points by linear interpolation, and resample the interpolated data according to the resolution of the original three-dimensional face data; finally, smooth the resampled three-dimensional face data by mean curvature flow.
The preprocessing of the two-dimensional face image is: convert the two-dimensional face image into a grayscale image, convert the grayscale data into Delaunay triangular mesh data, interpolate by linear interpolation, and then resample the interpolated data according to the resolution of the original two-dimensional face image.
3. The three-dimensional face recognition method according to claim 1, characterized in that the features adopted by the face region detection method in step S1.1 are Haar features.
4. The three-dimensional face recognition method according to claim 1, characterized in that the nose tip detection method in step S1.3 is:
Within the three-dimensional face region, make transverse slices of the three-dimensional face region at slice spacing α_1; for each slice obtained, traverse each point of the slice: taking the point as the center, draw a circle of preset radius r, obtain its two intersection points with the slice, and compute the height from the center point to the line constructed by the two intersection points; among the points of all slices, select the point with the maximum height as the initial nose tip;
Centered on the y coordinate y* of the slice containing the initial nose tip, obtain the secondary slice range [y* − β, y* + β], where β denotes the offset and is a positive integer; make transverse slices of the face region within the secondary slice range at slice spacing α_2, α_2 < α_1; for each slice obtained, traverse each point of the slice: taking the point as the center, draw a circle of preset radius r, find its two intersection points with the slice, and compute the height from the center point to the line constructed by the two intersection points; among the points of all slices, select the point with the maximum height as the nose tip.
5. The three-dimensional face recognition method according to claim 4, characterized in that the value range of the slice spacing α_1 is 5 mm ≤ α_1 ≤ 10 mm, α_2 equals the resolution of the three-dimensional face data, and the value range of the offset β is 15 mm ≤ β ≤ 25 mm.
6. The three-dimensional face recognition method according to claim 1, characterized in that the face posture correction method in step S1.5 is based on the Hotelling transform.
7. The three-dimensional face recognition method according to claim 1, characterized in that, in step S1.7, the value range of the radius v is 35 mm ≤ v ≤ 45 mm, the value range of the downward offset δ_1 is 5 mm ≤ δ_1 ≤ 15 mm, and the value range of the upward offset δ_2 is 15 mm ≤ δ_2 ≤ 25 mm.
8. The three-dimensional face recognition method according to claim 1, characterized in that the distance between statistical feature vectors in step S2 is the standardized Euclidean distance.
9. The three-dimensional face recognition method according to claim 1, characterized in that the coordinate point sets in step S3 are matched by the iterative closest point algorithm.
CN201510254758.2A 2015-05-19 2015-05-19 Three-dimensional face identification method based on expression invariant region Expired - Fee Related CN104850838B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510254758.2A CN104850838B (en) 2015-05-19 2015-05-19 Three-dimensional face identification method based on expression invariant region


Publications (2)

Publication Number Publication Date
CN104850838A true CN104850838A (en) 2015-08-19
CN104850838B CN104850838B (en) 2017-12-08

Family

ID=53850473

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510254758.2A Expired - Fee Related CN104850838B (en) 2015-05-19 2015-05-19 Three-dimensional face identification method based on expression invariant region

Country Status (1)

Country Link
CN (1) CN104850838B (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105654035A (en) * 2015-12-21 2016-06-08 湖南拓视觉信息技术有限公司 Three-dimensional face recognition method and data processing device applying three-dimensional face recognition method
CN105678235A (en) * 2015-12-30 2016-06-15 北京工业大学 Three dimensional facial expression recognition method based on multiple dimensional characteristics of representative regions
CN106022228A (en) * 2016-05-11 2016-10-12 东南大学 Three-dimensional face recognition method based on vertical and horizontal local binary pattern on the mesh
CN106446773A (en) * 2016-08-22 2017-02-22 南通大学 Automatic robust three-dimensional face detection method
CN106909874A (en) * 2016-07-07 2017-06-30 湖南拓视觉信息技术有限公司 A kind of nose localization method and device
CN107203961A (en) * 2016-03-17 2017-09-26 掌赢信息科技(上海)有限公司 A kind of method and electronic equipment of migration of expressing one's feelings
CN107423685A (en) * 2017-06-13 2017-12-01 重庆大学 Expression Emotion identification method
CN107483423A (en) * 2017-08-04 2017-12-15 北京联合大学 A kind of user login validation method
CN107590829A (en) * 2017-09-18 2018-01-16 西安电子科技大学 A kind of seed point pick-up method for being applied to the intensive cloud data registration of various visual angles
CN107944435A (en) * 2017-12-27 2018-04-20 广州图语信息科技有限公司 A kind of three-dimensional face identification method, device and processing terminal
WO2018209569A1 (en) * 2017-05-16 2018-11-22 深圳市三维人工智能科技有限公司 3d scanning model cutting device and method
CN110046543A (en) * 2019-02-27 2019-07-23 视缘(上海)智能科技有限公司 A kind of three-dimensional face identification method based on plane parameter
CN110210318A (en) * 2019-05-06 2019-09-06 深圳市华芯技研科技有限公司 A kind of three-dimensional face identification method based on characteristic point
CN110782247A (en) * 2019-10-23 2020-02-11 广东乐芯智能科技有限公司 Smart watch payment method based on face recognition
CN111768476A (en) * 2020-07-07 2020-10-13 北京中科深智科技有限公司 Expression animation redirection method and system based on grid deformation
CN113111780A (en) * 2021-04-13 2021-07-13 谢爱菊 Regional alarm monitoring system and method based on block chain
CN113158892A (en) * 2021-04-20 2021-07-23 南京大学 Face recognition method irrelevant to textures and expressions

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1776712A (en) * 2005-12-15 2006-05-24 复旦大学 Human face recognition method based on human face statistics
WO2007050630A2 (en) * 2005-10-24 2007-05-03 Iris International, Inc. Face recognition system and method
CN103116902A (en) * 2011-11-16 2013-05-22 华为软件技术有限公司 Three-dimensional virtual human head image generation method, and method and device of human head image motion tracking
CN104598879A (en) * 2015-01-07 2015-05-06 东南大学 Three-dimensional face recognition method based on face contour lines of semi-rigid areas


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Yueming et al.: "A Survey of Three-Dimensional Face Recognition Research", Journal of Computer-Aided Design & Computer Graphics *

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105654035A (en) * 2015-12-21 2016-06-08 湖南拓视觉信息技术有限公司 Three-dimensional face recognition method and data processing device applying three-dimensional face recognition method
CN105654035B (en) * 2015-12-21 2019-08-09 湖南拓视觉信息技术有限公司 Three-dimensional face identification method and the data processing equipment for applying it
CN105678235A (en) * 2015-12-30 2016-06-15 北京工业大学 Three dimensional facial expression recognition method based on multiple dimensional characteristics of representative regions
CN105678235B (en) * 2015-12-30 2018-08-14 北京工业大学 Three-dimensional face expression recognition methods based on representative region various dimensions feature
CN107203961A (en) * 2016-03-17 2017-09-26 掌赢信息科技(上海)有限公司 A kind of method and electronic equipment of migration of expressing one's feelings
CN106022228B (en) * 2016-05-11 2019-04-09 东南大学 A kind of three-dimensional face identification method based on grid local binary patterns in length and breadth
CN106022228A (en) * 2016-05-11 2016-10-12 东南大学 Three-dimensional face recognition method based on vertical and horizontal local binary pattern on the mesh
CN106909874A (en) * 2016-07-07 2017-06-30 湖南拓视觉信息技术有限公司 A kind of nose localization method and device
CN106909874B (en) * 2016-07-07 2019-08-30 湖南拓视觉信息技术有限公司 A kind of nose localization method and device
CN106446773A (en) * 2016-08-22 2017-02-22 南通大学 Automatic robust three-dimensional face detection method
CN106446773B (en) * 2016-08-22 2019-12-20 南通大学 Full-automatic robust three-dimensional face detection method
WO2018209569A1 (en) * 2017-05-16 2018-11-22 深圳市三维人工智能科技有限公司 3d scanning model cutting device and method
CN107423685A (en) * 2017-06-13 2017-12-01 重庆大学 Expression Emotion identification method
CN107483423A (en) * 2017-08-04 2017-12-15 北京联合大学 A kind of user login validation method
CN107483423B (en) * 2017-08-04 2020-10-27 北京联合大学 User login verification method
CN107590829A (en) * 2017-09-18 2018-01-16 西安电子科技大学 A kind of seed point pick-up method for being applied to the intensive cloud data registration of various visual angles
CN107944435A (en) * 2017-12-27 2018-04-20 广州图语信息科技有限公司 A kind of three-dimensional face identification method, device and processing terminal
CN110046543A (en) * 2019-02-27 2019-07-23 视缘(上海)智能科技有限公司 A kind of three-dimensional face identification method based on plane parameter
CN110210318A (en) * 2019-05-06 2019-09-06 深圳市华芯技研科技有限公司 A kind of three-dimensional face identification method based on characteristic point
CN110782247A (en) * 2019-10-23 2020-02-11 广东乐芯智能科技有限公司 Smart watch payment method based on face recognition
CN110782247B (en) * 2019-10-23 2023-07-21 广东盛迪嘉电子商务股份有限公司 Intelligent watch payment method based on face recognition
CN111768476A (en) * 2020-07-07 2020-10-13 北京中科深智科技有限公司 Expression animation redirection method and system based on grid deformation
CN113111780A (en) * 2021-04-13 2021-07-13 谢爱菊 Regional alarm monitoring system and method based on block chain
CN113158892A (en) * 2021-04-20 2021-07-23 南京大学 Face recognition method irrelevant to textures and expressions
CN113158892B (en) * 2021-04-20 2024-01-26 南京大学 Face recognition method irrelevant to textures and expressions

Also Published As

Publication number Publication date
CN104850838B (en) 2017-12-08

Similar Documents

Publication Publication Date Title
CN104850838A (en) Three-dimensional face recognition method based on expression invariant regions
CN105956582B (en) A kind of face identification system based on three-dimensional data
CN110363182B (en) Deep learning-based lane line detection method
CN106682598B (en) Multi-pose face feature point detection method based on cascade regression
US9824258B2 (en) Method and apparatus for fingerprint identification
CN104008370B (en) A kind of video face identification method
CN107657241B (en) Signature pen-oriented signature authenticity identification system
JP3279913B2 (en) Person authentication device, feature point extraction device, and feature point extraction method
CN104143080B (en) Three-dimensional face identifying device and method based on three-dimensional point cloud
CN106228528B (en) A kind of multi-focus image fusing method based on decision diagram and rarefaction representation
Li et al. Efficient 3D face recognition handling facial expression and hair occlusion
CN105894047A (en) Human face classification system based on three-dimensional data
CN106355138A (en) Face recognition method based on deep learning and key features extraction
CN103605972A (en) Non-restricted environment face verification method based on block depth neural network
CN103473545B (en) A kind of text image method for measuring similarity based on multiple features
CN105809113B (en) Three-dimensional face identification method and the data processing equipment for applying it
CN102938065A (en) Facial feature extraction method and face recognition method based on large-scale image data
US20150347804A1 (en) Method and system for estimating fingerprint pose
CN106096560A (en) A kind of face alignment method
CN112149758B (en) Hyperspectral open set classification method based on Euclidean distance and deep learning
Pan et al. 3D face recognition from range data
CN103218609A (en) Multi-pose face recognition method based on hidden least square regression and device thereof
CN106778489A (en) The method for building up and equipment of face 3D characteristic identity information banks
CN107247916A (en) A kind of three-dimensional face identification method based on Kinect
CN103927554A (en) Image sparse representation facial expression feature extraction system and method based on topological structure

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20171208

Termination date: 20200519