CN105894047A - Human face classification system based on three-dimensional data - Google Patents


Info

Publication number
CN105894047A
CN105894047A (application CN201610490306.9A)
Authority
CN
China
Prior art keywords
depth
face
data
image
Prior art date
Legal status
Granted
Application number
CN201610490306.9A
Other languages
Chinese (zh)
Other versions
CN105894047B (en)
Inventor
夏春秋
Current Assignee
Shenzhen Vision Technology Co Ltd
Original Assignee
Shenzhen Vision Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Vision Technology Co Ltd filed Critical Shenzhen Vision Technology Co Ltd
Priority to CN201610490306.9A priority Critical patent/CN105894047B/en
Publication of CN105894047A publication Critical patent/CN105894047A/en
Application granted granted Critical
Publication of CN105894047B publication Critical patent/CN105894047B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches, based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches, based on distances to training or reference patterns
    • G06F 18/24147 Distances to closest patterns, e.g. nearest neighbour classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/14 Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G06V 40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Abstract

The invention relates to a face classification system comprising modules for feature-point localisation, three-dimensional data registration, data denoising, data quality assessment, feature extraction, and classifier design. Visual-dictionary texture features are used to describe three-dimensional depth face images accurately, and an SVM classifier achieves accurate classification in the visual-dictionary histogram feature space of the depth face image. The difficulty of ethnicity classification is that human ethnicities are complex and diverse, and ethnic categories are hard to define precisely; the concept of fuzzy classification is therefore introduced, an eastern visual dictionary and a western visual dictionary are built, and the ethnicity of a face image is judged by a membership function. Accurate classification results allow the facial features in face data to be obtained effectively and richer face semantic understanding information to be acquired; classification can also serve as a coarse classification step in three-dimensional face recognition, improving the precision of the recognition system.

Description

Face classification system based on three-dimensional data
Technical field
The present invention relates to a three-dimensional face classification system comprising multiple modules, including feature-point localisation, three-dimensional data registration, data denoising, data quality assessment, feature extraction, and classifier design.
Background technology
Compared with two-dimensional face recognition, three-dimensional face recognition has the advantage of being less affected by factors such as illumination, pose, and expression. Since three-dimensional data acquisition technology has developed rapidly and the quality and precision of three-dimensional data have greatly improved, many researchers have devoted their work to this field.
It has been proposed to describe face characteristics with the correlated features of three-dimensional bending invariants (patent no. 201010256907). That method encodes the local features of the bending invariants of adjacent nodes on the three-dimensional face surface to extract bending-invariant correlated features, applies spectral regression to the signed correlated features for dimensionality reduction to obtain principal components, and then recognises the three-dimensional face with K-nearest-neighbour classification. However, extracting the invariant correlated features requires complex computation, which limits the efficiency of the method and hence its further application.
Three-dimensional face classification is a fundamental task in the three-dimensional face field, and accurate classification is particularly important. The classification result not only allows the facial features in the face data to be obtained effectively and richer face semantic understanding information to be acquired, but can also serve as a coarse classification step in three-dimensional face recognition, improving the precision of the recognition system. The present invention designs two classification modes, gender classification and ethnicity classification. The difficulties of gender classification are how to describe the gender characteristics of face data accurately and how to classify accurately on the basis of that feature space; this patent uses visual-dictionary texture features to describe the characteristics of the three-dimensional depth face image accurately, then uses an SVM classifier to achieve accurate classification in the visual-dictionary histogram feature space of the depth face image. The difficulty of ethnicity classification is that human ethnicities are complex and diverse and hard to define precisely; this patent introduces the concept of fuzzy classification, builds an eastern visual dictionary and a western visual dictionary, and uses a membership function to judge the ethnicity of the depth face image.
Summary of the invention
To address the heavy computation and low efficiency of existing three-dimensional face recognition, the present invention aims to provide a face classification system based on three-dimensional data that classifies three-dimensional face data.
To solve the above problems, the present invention provides a face classification system based on three-dimensional data whose main contents include:
(1) an input part for three-dimensional face point-cloud data;
(2) a detection part for a specific face region in the three-dimensional face point-cloud data;
(3) a data registration part for the detected specific face region;
(4) a depth face data mapping part for the registered three-dimensional face point-cloud data;
(5) a face depth data quality assessment part for the depth face data;
(6) a depth face texture repair part for the depth face data;
(7) a three-dimensional face recognition part for the depth face data.
The point-cloud data input part accepts data input from all kinds of three-dimensional point-cloud acquisition devices.
The face specific-region detection part comprises a module that extracts characteristic information from the three-dimensional point-cloud data and uses a trained classifier to detect the specific face region.
Further, the detection module uses the nose region as the characteristic face region. Its main steps are as follows:
1) determine the threshold thr for the regional average negative effective-energy density;
2) using the depth information of the data, extract the face data within a certain depth range as the data to be processed;
3) compute the normal information of the face data selected by depth;
4) following the definition of the regional average negative effective-energy density, compute this density for each connected component in the data to be processed and select the component with the largest density value;
5) if the density of this component exceeds the predefined thr, the component is the nose region; otherwise return to step 1) and continue.
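The detection loop above can be sketched as follows. The text does not give an explicit formula for the average negative effective-energy density, so the mean of the negated inner products between the component normals and an assumed viewing direction stands in for it; all names (`detect_nose_region`, `labels`, the viewing direction) are illustrative, not the patent's.

```python
import numpy as np

def detect_nose_region(normals, labels, thr):
    """Steps 1-5 above: score each connected component by an average
    negative effective-energy density and accept the densest component
    only if it exceeds the predefined threshold thr.
    normals : (N, 3) unit normals of the depth-selected face points
    labels  : (N,) connected-component id of each point"""
    view = np.array([0.0, 0.0, 1.0])        # assumed viewing direction
    best_id, best_density = None, -np.inf
    for cid in np.unique(labels):
        mask = labels == cid
        energy = -(normals[mask] @ view)    # stand-in energy measure
        density = energy.mean()
        if density > best_density:
            best_id, best_density = int(cid), density
    # step 5: the densest component is the nose region only above thr
    return best_id if best_density > thr else None
```

If no component clears the threshold, `None` is returned, matching the "return to step 1) and continue" branch, where a new threshold or depth band would be chosen.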
The data registration part comprises a module that registers the specific face region obtained by detection against the standard data of that region in a template library.
Further, in the registration module based on the detected face nose region, a nose-region data set corresponding to the standard pose is first prepared in the template library; then, for three-dimensional data in different poses, once the reference region for registration has been obtained, the data are registered with the ICP algorithm. Assume the matched data sets P and Q have been obtained; the main steps are:
1) compute the 3*3 matrix
H = Σ_{i=1}^{N} Q_i P_i^T
where N is the capacity of the data sets;
2) compute the SVD of H:
H = U Λ V^T, and set X = V U^T;
3) compute the rotation matrix R and translation vector t: when det(X) = 1, R = X, and
t = mean(P) − R · mean(Q)
taking the centroids of the two sets.
Through the above steps, the three-dimensional transformation between the two point sets is obtained, realising the registration of the two point sets.
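These three steps can be made runnable as a Kabsch-style alignment sketch. The centroid subtraction, implicit in t = mean(P) − R·mean(Q) above, is written out explicitly so the code actually recovers the transform; the cross-covariance is formed from the centred matched pairs.

```python
import numpy as np

def rigid_align(P, Q):
    """SVD step of the point-set registration above: find R, t with
    P ≈ Q @ R.T + t for matched (N, 3) sets P and Q."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    Pc, Qc = P - p_mean, Q - q_mean
    H = Qc.T @ Pc                      # 3x3 matrix H = sum_i Q_i P_i^T
    U, S, Vt = np.linalg.svd(H)        # H = U Λ V^T
    R = Vt.T @ U.T                     # X = V U^T
    if np.linalg.det(R) < 0:           # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = p_mean - R @ q_mean            # t = mean(P) - R * mean(Q)
    return R, t
```

In a full ICP loop this closed-form solve would alternate with re-matching of closest points until the mean square error converges.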
The depth face image mapping part comprises a module that maps the data according to a preset depth resolution and the detected position of the specific face region.
Further, the mapping module that converts the three-dimensional point-cloud data into a depth face image takes the detected face nose region as the centre reference of the depth image data and maps the x-axis and y-axis information of the space coordinate system to the image coordinate system of the face depth image. The calculation is as follows.
Let the nose tip be N(x, y, z); the image coordinates of a spatial point P(x1, y1, z1) are:
Ix = (x1 − x) + width/2
Iy = (y1 − y) + height/2
where width and height are the width and height of the depth image.
Meanwhile, a depth resolution Z_ref is preset according to the depth accuracy of the point-cloud data, and the z-axis information of the space coordinate system serves as the reference for the depth value of the face depth image:
I_depth = (z1 − z) / Z_ref + 255, if z1 <= z
I_depth = 255, if z1 > z
This completes the mapping of the three-dimensional point-cloud data to a depth face image.
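The mapping can be sketched as below, under assumed values for the image resolution and the depth resolution Z_ref, both of which the description leaves to be preset.

```python
import numpy as np

def map_to_depth_image(points, nose, width=128, height=128, z_ref=1.0):
    """Point-cloud -> depth-image mapping above.  `nose` is the nose
    tip (x, y, z); pixels with no point, or points behind the nose
    plane (z1 > z), keep the background value 255."""
    img = np.full((height, width), 255.0)
    x, y, z = nose
    for x1, y1, z1 in points:
        ix = int(round(x1 - x + width / 2))    # Ix = (x1 - x) + width/2
        iy = int(round(y1 - y + height / 2))   # Iy = (y1 - y) + height/2
        if 0 <= ix < width and 0 <= iy < height and z1 <= z:
            img[iy, ix] = (z1 - z) / z_ref + 255
    return img
```

With this convention the nose tip itself maps to the image centre with value 255, and points deeper than the nose get proportionally smaller depth values.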
The quality assessment part comprises a module that generates depth eigenfaces from typical depth face data and a module that uses the depth eigenfaces to assess the quality of the face depth data.
Further, the depth eigenfaces are produced from typical depth face data; their calculation can be summarised as:
1) stretch each depth face image in the training set from a two-dimensional matrix into a column vector and assemble these column vectors into a matrix A. If the resolution of each depth face image is M*M, the dimension of a stretched face column vector is D = M*M; if the training set contains N depth face images, the dimension of the sample matrix A is D*N;
2) add the N depth face images dimension by dimension and average them to obtain the mean depth face; subtract the mean face from each of the N images to obtain the difference-image data matrix Φ;
3) perform an eigenvalue decomposition of the covariance matrix C = Φ Φ^T; according to the proportion of the total eigenvalue energy they account for, select the several largest eigenvalues, whose corresponding eigenvectors are the depth eigenfaces.
A depth face image can then be projected into the space spanned by these eigenfaces for approximate reconstruction.
Further, the quality assessment module is divided into a training stage and an assessment stage. In the training stage, the eigenfaces of depth face images are trained, spanning the depth face image space. In the assessment stage, the input depth face image is mapped to a point in the depth eigenface space, which gives the approximate depth face image characterised by the depth eigenfaces:
I_apr = Σ_i w_i · I_eigen,i
The approximate image is then compared with the original image. If the difference exceeds a threshold, the depth image does not conform to the type represented by the eigenfaces and the assessment fails; otherwise the image is considered to conform and the assessment passes:
E = 0, if abs(I_apr − I_ori) > Thr
E = 1, if abs(I_apr − I_ori) <= Thr
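Both stages can be sketched as below. SVD of the centred sample matrix replaces the explicit decomposition of C = Φ Φ^T (the two are equivalent, and SVD is cheaper when D >> N), and the mean absolute reconstruction residual is one plausible reading of abs(I_apr − I_ori); the threshold value is illustrative.

```python
import numpy as np

def train_eigenfaces(images, k):
    """Steps 1-3 above: stack D-dim column vectors, subtract the mean
    face, keep the k leading left singular vectors as depth eigenfaces."""
    A = np.stack([im.ravel() for im in images], axis=1)   # D x N matrix
    mean = A.mean(axis=1, keepdims=True)                  # mean face
    Phi = A - mean                                        # difference matrix
    U, S, Vt = np.linalg.svd(Phi, full_matrices=False)
    return mean, U[:, :k]

def assess(image, mean, eigenfaces, thr):
    """Project, reconstruct I_apr = sum_i w_i * I_eigen_i, and pass
    iff the mean absolute residual is within thr."""
    v = image.ravel()[:, None] - mean
    w = eigenfaces.T @ v
    recon = eigenfaces @ w
    return np.abs(recon - v).mean() <= thr
```

An image that lies in the subspace spanned by the eigenfaces reconstructs almost exactly and passes; an image of a different type leaves a large residual and is rejected.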
The texture repair part comprises a module that filters noisy data according to image neighbourhood points and a module that uses an edge-preserving filter for depth image texture repair.
Further, its main steps are as follows. First, the noise in the depth image is detected; the noise types are mainly data holes and data spikes, which appear in the depth image as zero values in the face depth data and protruding depth values in the local texture.
Depth data denoising is then performed. In the present invention, the above noise in the depth face image is filtered with a neighbourhood valid-depth-value filter, which can be expressed as:
I(x, y) = Σ_{m=−win}^{win} Σ_{n=−win}^{win} I(x−m, y−n) · w(x−m, y−n)
where the weight w(x−m, y−n) takes a normalised positive value over the valid depth points in the window when I(x−m, y−n) is a valid depth image point, and 0 when it is not.
Further, after this preliminary low-pass filtering of singular points, an edge-preserving filter is applied to the depth image for further texture repair. In the present invention, the edge-preserving filter uses a bilateral filter (but is not limited to it). A bilateral filter is composed of two functions: one determines the filter coefficient from geometric spatial distance, the other from the pixel value difference. In a bilateral filter, the value of an output pixel depends on a weighted combination of the values of neighbouring pixels:
g(i, j) = Σ_{k,l} f(k, l) · w(i, j, k, l) / Σ_{k,l} w(i, j, k, l)
The filter coefficient determined by geometric spatial distance is:
d(i, j, k, l) = exp(−((i−k)^2 + (j−l)^2) / (2σ_d^2))
The filter coefficient determined by the pixel value difference is:
r(i, j, k, l) = exp(−||f(i, j) − f(k, l)||^2 / (2σ_r^2))
The weight coefficient is then the product of the spatial-domain coefficient and the range-domain coefficient:
w(i, j, k, l) = exp(−((i−k)^2 + (j−l)^2) / (2σ_d^2) − ||f(i, j) − f(k, l)||^2 / (2σ_r^2))
Through this combination, the filter considers differences in both the spatial domain and the range domain, preserving salient edge information in the data while filtering noise, thereby effectively repairing noise in the depth face image data and enhancing the face depth feature information.
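A direct, unoptimised implementation of the bilateral filter defined by the three formulas above; the window size and the two sigmas are illustrative values, not ones fixed by the text.

```python
import numpy as np

def bilateral_filter(img, win=2, sigma_d=2.0, sigma_r=10.0):
    """g(i,j) = sum f(k,l) w(i,j,k,l) / sum w(i,j,k,l) over a
    (2*win+1)^2 neighbourhood, with w the product of the spatial
    kernel exp(-d^2 / 2 sigma_d^2) and range kernel exp(-r^2 / 2 sigma_r^2)."""
    H, W = img.shape
    out = np.zeros_like(img, dtype=float)
    for i in range(H):
        for j in range(W):
            num = den = 0.0
            for k in range(max(0, i - win), min(H, i + win + 1)):
                for l in range(max(0, j - win), min(W, j + win + 1)):
                    d2 = (i - k) ** 2 + (j - l) ** 2
                    r2 = (img[i, j] - img[k, l]) ** 2
                    w = np.exp(-d2 / (2 * sigma_d ** 2)
                               - r2 / (2 * sigma_r ** 2))
                    num += img[k, l] * w
                    den += w
            out[i, j] = num / den
    return out
```

Because cross-edge weights vanish when the depth difference is large relative to σ_r, a sharp depth edge passes through almost unchanged while small-amplitude noise on either side is smoothed, which is exactly the edge-preserving behaviour the repair step relies on.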
The three-dimensional face recognition part comprises a module that extracts depth face features from the depth data and performs face recognition with a classifier.
Further, the extraction of visual-dictionary histogram features from face depth image data can be divided into a visual-vocabulary training stage and a visual-dictionary histogram feature extraction stage. The main steps are as follows.
In the visual-vocabulary training stage, the P depth images of resolution M*N in the training set are first filtered with Gabor filters, converting the original depth images into P*M*N multi-dimensional Gabor filter response vectors. These vectors are grouped by their spatial position in the image, and K-means clustering is performed on each group of vectors; the cluster centres form the visual vocabulary (visual sub-dictionary) of the set of Gabor filter response vectors at that image position. Concatenating the vocabularies of all groups constitutes the visual dictionary of the depth face image.
In the visual-dictionary histogram feature extraction stage, after a test face image is input and Gabor-filtered, each filter response vector is compared with all the primitive words in the visual sub-dictionary corresponding to its position and is mapped, by distance matching, to the nearest primitive word. In this way, the visual-dictionary histogram feature of the original depth image is extracted.
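The two stages can be sketched under simplifying assumptions: the Gabor filtering step is omitted (plain response vectors stand in for filter responses), a single shared vocabulary replaces the per-position sub-dictionaries, and the plain k-means below is illustrative rather than the patent's clustering.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means over the (float) response vectors; the cluster
    centres become the visual vocabulary."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centres[None], axis=2)
        assign = d.argmin(axis=1)
        for c in range(k):
            if np.any(assign == c):
                centres[c] = X[assign == c].mean(axis=0)
    return centres

def dictionary_histogram(vectors, vocab):
    """Map each response vector to its nearest visual word and
    accumulate the normalised visual-dictionary histogram feature."""
    hist = np.zeros(len(vocab))
    for v in vectors:
        hist[np.linalg.norm(vocab - v, axis=1).argmin()] += 1
    return hist / hist.sum()
```

In the per-position scheme of the text there would be one such vocabulary per image position, and the final feature is the concatenation of the per-position histograms.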
Further, the principle of the SVM classifier is as follows. Let the linearly separable sample set be (x_i, y_i), i = 1, ..., n, with x ∈ R^d and class labels y ∈ {+1, −1}. Then
w · x + b = 0
is the decision-surface equation of the SVM classifier.
To make the decision surface classify all samples correctly while maximising the class margin, the following constrained problem must be solved:
minimise Φ(w) = w^T w
subject to y_i (w · x_i + b) − 1 ≥ 0
Solving this constrained optimisation problem yields the optimal decision surface. The training samples lying on the two hyperplanes that are parallel to the optimal decision surface and pass through the points of the two classes nearest to it, that is, the special samples for which the equality holds, support the optimal decision surface and are therefore called support vectors. The visual histogram feature is first extracted from the texture-optimised three-dimensional face image and then input to the SVM gender classifier, which produces the final gender classification result.
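A toy stand-in for this step: the text fixes only the SVM objective, not a training procedure, so the sketch below minimises the soft-margin hinge loss by sub-gradient descent; the learning rate, regulariser, and epoch count are hypothetical.

```python
import numpy as np

def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=200):
    """Sub-gradient trainer for min w^T w subject (softly) to
    y_i (w . x_i + b) >= 1; labels y in {+1, -1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (w @ xi + b) < 1:      # margin violated: push out
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                          # margin ok: only shrink w
                w -= lr * lam * w
    return w, b

def classify(x, w, b):
    """Side of the decision surface w . x + b = 0."""
    return 1 if w @ x + b >= 0 else -1
```

For the gender classifier described above, `X` would hold the visual-dictionary histogram features of the training faces and `y` their gender labels.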
Further, the construction of the eastern and western face visual dictionaries proceeds as follows: visual vocabularies are computed separately from typical eastern depth face images and from typical western depth face images. Among all the computed visual words, regions of words lying close to each other are regarded as the common region of the ethnic depth visual vocabulary, representing attributes shared by all people; regions lying far apart are regarded as representing ethnicity-specific characteristic information (eastern or western depth face images) and are used to build the eastern visual dictionary library and the western visual dictionary library respectively.
Further, face ethnicity classification differs from the two-class design of gender classification: ethnicity is treated as a fuzzy classification problem. The texture-optimised three-dimensional face image is first Gabor-filtered to obtain the set of Gabor filter response vectors of the depth image. Each response vector of this set is mapped into the eastern (western) visual dictionary library; if its distance to some word in the eastern (western) dictionary is below a threshold, the response vector belongs to an eastern (western) face, and the corresponding vector count eastnum (westnum) is incremented by 1. The final fuzzy membership function is:
Membership(I) = eastnum / westnum
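The counting rule reduces to a few lines. The Euclidean distance and the guard against a zero westnum are assumptions not fixed by the text; the vocabulary contents here are placeholders.

```python
import numpy as np

def ethnic_membership(responses, east_vocab, west_vocab, thr):
    """Fuzzy ethnicity score from the rule above: a response vector
    within thr of an eastern (western) word increments eastnum
    (westnum); Membership(I) = eastnum / westnum."""
    east_vocab = np.asarray(east_vocab, dtype=float)
    west_vocab = np.asarray(west_vocab, dtype=float)
    eastnum = westnum = 0
    for v in responses:
        if np.linalg.norm(east_vocab - v, axis=1).min() < thr:
            eastnum += 1
        if np.linalg.norm(west_vocab - v, axis=1).min() < thr:
            westnum += 1
    return eastnum / max(westnum, 1)   # guard: avoid division by zero
```

A score well above 1 indicates an eastern face, well below 1 a western face, and values near 1 express the fuzziness the design intends.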
Brief description of the drawings
Fig. 1 is the system flow chart of the face classification system based on three-dimensional data of the present invention.
Fig. 2 is a schematic diagram of its nose detection module.
Fig. 3 is a schematic diagram of its data registration module.
Fig. 4 is a schematic diagram of its data space mapping.
Fig. 5 is a schematic flow chart of its face depth image quality evaluation.
Fig. 6 is a schematic diagram of its depth texture repair.
Fig. 7 is a schematic diagram of feature extraction in the gender classification subsystem.
Fig. 8 is the classification flow chart of the gender classification subsystem.
Fig. 9 is a schematic diagram of the construction of the ethnic visual dictionaries in the ethnicity classification subsystem.
Fig. 10 is the classification flow chart of the ethnicity classification subsystem.
Fig. 11 is the system block diagram of the face classification system based on three-dimensional data of the present invention.
Detailed description of the invention
It should be noted that, where no conflict arises, the embodiments of this application and the features in the embodiments may be combined with each other. The present invention is described in further detail below with reference to the drawings and specific embodiments.
Fig. 1 is the system flow chart of the face classification system based on three-dimensional data of the present invention. As shown in Fig. 1, its main contents include:
(1) an input part for video face data, which accepts data from all kinds of video capture devices;
(2) a face detection part for the video data, including a module that detects faces in the current video frame with a face detection algorithm and a module that evaluates the quality of the detected face image;
(3) a feature-point localisation part for the detected face data, including a module that locates predetermined feature points according to the face image and a module that evaluates the quality of the located feature points;
(4) a specific-face-region extraction part for the face feature points, which extracts the texture of specific face parts from the face image and the located feature points according to specific-region rules;
(5) a face depth data quality assessment part for the depth face data, including a module that generates depth eigenfaces from typical depth face data and a module that uses the depth eigenfaces to assess the quality of the face depth data;
(6) a depth face texture repair part for the depth face data, including a module that filters noisy data according to image neighbourhood points and a module that uses an edge-preserving filter for depth image texture repair;
(7) a three-dimensional face recognition part for the depth face data, including a module that extracts depth face features from the depth data and performs face recognition with a classifier.
Fig. 2 is a schematic diagram of the nose detection module, the characteristic face-region detection module of the present invention. In Fig. 2(a), because the data of the nose region in the three-dimensional point-cloud face data are clearly distinct from other parts of the face, the present invention uses the nose region as the characteristic face region. Fig. 2(b) is the flow chart of nose-region localisation, whose main steps are as follows:
1. Determine the threshold
This step determines the threshold thr for the regional average negative effective-energy density.
2. Select the data to be processed by depth
This step uses the depth information of the data to extract the face data within a certain depth range as the data to be processed.
3. Compute the normal vectors
This step computes the normal information of the face data selected by depth.
4. Compute the regional average negative effective-energy density
Following the definition of the regional average negative effective-energy density, this step computes the density for each connected component in the data to be processed and selects the component with the largest density value.
5. Decide whether the nose region has been found
If the density of this component exceeds the predefined thr, the component is the nose region; otherwise return to step 1 and continue.
Fig. 3 is a schematic diagram of the data registration module, which is based on the detected face nose region. A nose-region data set corresponding to the standard pose is first prepared in the template library; then, for three-dimensional data in different poses, once the reference region for registration has been obtained, the data are registered with the ICP algorithm; the contrast before and after registration is shown in Fig. 3.
ICP is essentially an optimisation problem minimising the mean square error. Assume the matched data sets P and Q have been obtained; the main steps are:
1. Compute the 3*3 matrix
H = Σ_{i=1}^{N} Q_i P_i^T
where N is the capacity of the data sets.
2. Compute the SVD of H:
H = U Λ V^T, and set X = V U^T.
3. Compute the rotation matrix R and translation vector t: when det(X) = 1, R = X, and
t = mean(P) − R · mean(Q)
Through the above steps, the three-dimensional transformation between the two point sets is obtained, realising the registration of the two point sets.
Fig. 4 is a schematic diagram of the data space mapping, i.e. the module that maps three-dimensional point-cloud data to a depth face image. The detected face nose region serves as the centre reference of the depth image data, and the x-axis and y-axis information of the space coordinate system is mapped to the image coordinate system of the face depth image.
Given the known nose tip N(x, y, z), the image coordinates of a spatial point P(x1, y1, z1) are:
Ix = (x1 − x) + width/2
Iy = (y1 − y) + height/2
where width and height are the width and height of the depth image.
Meanwhile, a depth resolution Z_ref is preset according to the depth accuracy of the point-cloud data, and the z-axis information of the space coordinate system serves as the reference for the depth value of the face depth image:
I_depth = (z1 − z) / Z_ref + 255, if z1 <= z
I_depth = 255, if z1 > z
Fig. 5 is the face depth image quality evaluation schematic flow sheet of a kind of face classification system based on three-dimensional data of the present invention, As Fig. 5 (a) show the eigenface schematic diagram of degree of depth facial image.The calculating process of depth characteristic face can be summarized as:
1) Each depth face image in the training set is stretched from a two-dimensional matrix into a one-dimensional column vector, and these column vectors are assembled into a matrix A. Assuming the resolution of each depth face image is M*M, the dimension of a stretched face column vector is D = M*M. If the training set contains N depth face images, the dimension of the sample matrix A is D*N;
2) The N depth face images in the training set are summed over corresponding dimensions and then averaged, yielding the average face of the depth images; subtracting the depth average face from each of the N depth images gives the difference image data matrix Φ;
3) Eigenvalue decomposition is performed on the covariance matrix C = Φ*Φ^T; according to the proportion of the total eigenvalue energy they account for, several of the largest eigenvalues are selected, and their corresponding eigenvectors are the depth eigenfaces;
4) A depth face image can be projected into the space spanned by these eigenfaces for approximate calculation.
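The three training steps above can be sketched as follows (a hedged illustration: computing the eigenvectors via the SVD of Φ rather than explicitly forming C = Φ*Φ^T is an equivalent shortcut we chose, and the `energy` cut-off parameter is our assumption):

```python
import numpy as np

def depth_eigenfaces(images, energy=0.95):
    """Compute depth eigenfaces from a list of M x M depth images,
    following steps 1)-3) above."""
    # 1) stretch each image into a column vector; stack into A (D x N)
    A = np.stack([img.ravel() for img in images], axis=1).astype(float)
    # 2) average face, and difference matrix Phi
    mean_face = A.mean(axis=1, keepdims=True)
    Phi = A - mean_face
    # 3) eigen-decompose C = Phi Phi^T via the SVD of Phi
    #    (left singular vectors of Phi are the eigenvectors of C,
    #     with eigenvalues equal to the squared singular values)
    U, S, _ = np.linalg.svd(Phi, full_matrices=False)
    eigvals = S ** 2
    # keep the largest eigenvalues covering `energy` of the total
    keep = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), energy)) + 1
    return mean_face.ravel(), U[:, :keep]    # eigenfaces as columns
```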
Fig. 5 (b) shows the algorithm flow of the depth image data quality evaluation module of the present invention. This module is divided into two stages, training and evaluation: in the training stage, as shown in Fig. 5 (a), the eigenfaces of depth face images are trained, and these span the depth face image space; in the evaluation stage, an input depth face image is mapped to a point in the depth eigenface space, from which the approximate depth face image characterized by the depth eigenfaces is obtained:
I_apr = Σ_i w_i * I_eigen,i
The approximate image is then compared with the original image. If the difference exceeds a certain threshold, the depth image does not conform to the type represented by these depth eigenfaces and the evaluation fails; otherwise, the image is considered to conform to the type represented by these depth eigenfaces and the evaluation passes:
E = 0, when abs(I_apr - I_ori) > Thr

E = 1, when abs(I_apr - I_ori) <= Thr
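The evaluation stage can be sketched as follows (an illustration under our assumptions; in particular, aggregating abs(I_apr - I_ori) into a per-pixel mean is our choice, since the text does not fix the aggregation):

```python
import numpy as np

def assess_depth_image(img, mean_face, eigenfaces, thr):
    """Project the input depth image onto the eigenface space, rebuild
    the approximation I_apr = sum_i w_i * I_eigen_i, and return E:
    1 (pass) when the approximation stays within thr of the original,
    0 (fail) otherwise."""
    v = img.ravel().astype(float) - mean_face
    weights = eigenfaces.T @ v                  # w_i = projection coefficients
    i_apr = mean_face + eigenfaces @ weights    # reconstruction I_apr
    diff = np.abs(i_apr - img.ravel()).mean()   # aggregate abs() into one score
    return 1 if diff <= thr else 0
```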
Fig. 6 is a schematic diagram of depth texture repair in the face classification system based on three-dimensional data of the present invention. As shown in Fig. 6, noise in the depth image is detected first. The noise types mainly include data holes and data spikes, which appear in the depth image as zero values in the face depth data and as protruding depth values in the local texture.
Depth data denoising is then performed. The present invention uses neighborhood valid-depth-value filtering to filter the noise in the above depth face image. This filter can be expressed as:
I(x, y) = Σ_{m=-win}^{win} Σ_{n=-win}^{win} I(x-m, y-n) * w(x-m, y-n)
where, when I(x-m, y-n) is a valid depth image point, w(x-m, y-n) takes the corresponding filter weight, and when I(x-m, y-n) is an invalid depth image point, its weight is 0.
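A sketch of this valid-value filtering (the text leaves the nonzero weights unspecified here, so a uniform average over the valid neighbors is our assumption):

```python
import numpy as np

def valid_neighborhood_filter(depth, win=1):
    """Neighborhood valid-value filtering: each pixel is replaced by a
    weighted sum of its (2*win+1)^2 neighbors in which invalid points
    (value 0, i.e. data holes) get weight 0; valid neighbors here get
    equal weight (our assumption)."""
    h, w = depth.shape
    out = depth.astype(float)
    for y in range(h):
        for x in range(w):
            ys = slice(max(0, y - win), min(h, y + win + 1))
            xs = slice(max(0, x - win), min(w, x + win + 1))
            patch = depth[ys, xs].astype(float)
            valid = patch > 0              # weight 0 for invalid points
            if valid.any():
                out[y, x] = patch[valid].mean()
    return out
```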
After this preliminary low-pass filtering of singular points, an edge-preserving filter is applied to further repair the texture of the depth image. In the present invention, the edge-preserving filter uses bilateral filtering (though it is not limited to this). A bilateral filter is composed of two functions: one determines the filter coefficients from the geometric spatial distance, and the other determines the filter coefficients from the pixel value difference. In a bilateral filter, the value of an output pixel depends on a weighted combination of the values of its neighborhood pixels:
g(i, j) = Σ_{k,l} f(k, l) w(i, j, k, l) / Σ_{k,l} w(i, j, k, l)
where the filter coefficient determined by the geometric spatial distance is:

d(i, j, k, l) = exp(-((i-k)^2 + (j-l)^2) / (2σ_d^2)),
and the filter coefficient determined by the pixel value difference is:

r(i, j, k, l) = exp(-||f(i, j) - f(k, l)||^2 / (2σ_r^2))
The weight coefficient is then the product of the spatial-domain coefficient and the value-domain coefficient:

w(i, j, k, l) = exp(-((i-k)^2 + (j-l)^2) / (2σ_d^2) - ||f(i, j) - f(k, l)||^2 / (2σ_r^2)).
Through this combination, the image filtering considers differences in both the spatial domain and the value domain simultaneously, so that particular edge information in the data is retained while noise data is filtered out, effectively repairing the noise in the depth face image data and enhancing the face depth feature information.
Fig. 7 is a schematic diagram of feature extraction in the face gender classification system based on three-dimensional data of the present invention. As shown in Fig. 7, this process can be divided into a visual vocabulary training stage and a visual dictionary histogram feature extraction stage.
In the visual vocabulary training stage, the P depth images of resolution M*N in the training set are first filtered with Gabor filters; in this way the original depth images are converted into P*M*N multidimensional Gabor filter response vectors. These vectors are grouped according to the spatial position within their source images, and K-means clustering is applied to each group of vectors. The resulting cluster centers form the visual vocabulary (visual word dictionary) of the Gabor filter response vector set corresponding to that image spatial position. Connecting the visual vocabularies of all groups constitutes the visual dictionary of depth face images.
In the visual dictionary histogram feature extraction stage, after a test face image is input and Gabor-filtered, each filter response vector is compared with all primitive words in the visual word dictionary corresponding to its position and, by means of distance matching, is mapped to the closest primitive. In this way, the visual dictionary histogram feature of the original depth image can be extracted.
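The histogram-extraction stage can be sketched as follows (a hedged illustration; the dict-based data layout, with one vocabulary per spatial position, is our assumption about how the per-position dictionaries are stored):

```python
import numpy as np

def visual_dictionary_histogram(responses, vocabularies):
    """Map each filter response vector to the nearest primitive word of
    the vocabulary for its spatial position (Euclidean distance matching)
    and count hits per word over a global word index.
    responses: dict position -> response vector
    vocabularies: dict position -> (K, dim) array of K words"""
    # Flatten the per-position vocabularies into one global word index
    offsets, total = {}, 0
    for pos, vocab in vocabularies.items():
        offsets[pos] = total
        total += len(vocab)
    counts = np.zeros(total, dtype=int)
    for pos, vec in responses.items():
        vocab = vocabularies[pos]
        dists = np.linalg.norm(vocab - vec, axis=1)   # distance matching
        counts[offsets[pos] + int(np.argmin(dists))] += 1
    return counts
```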
Fig. 8 (a) shows a schematic diagram of the SVM classifier.
Let the linearly separable sample set be (xi, yi), i = 1, ..., n, x ∈ R^d, where y ∈ {+1, -1} is the category label. Then
w·x + b = 0
is the classifying hyperplane equation of the SVM classifier.
During classification, in order for the classifying hyperplane to classify all samples correctly while maximizing the class margin, the following two conditions must be met:
Φ(w) = min(w^T·w)
yi(w·xi+b)-1≥0
The optimal classifying hyperplane is obtained by solving this constrained optimization problem, as shown in Fig. 8 (a). The training samples that make the equality in the constraint hold, i.e., the points in the two sample classes nearest to the classifying hyperplane that lie on hyperplanes parallel to the optimal one, are special samples: because they support the optimal classifying hyperplane, they are called support vectors. Fig. 8 (b) shows the gender classification flow of the present invention. The texture-optimized three-dimensional face image first undergoes visual histogram feature extraction, and the extracted features are input into the SVM gender classifier to obtain the final gender classification result.
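A minimal stand-in for the SVM step (not the patent's implementation: we train a linear SVM by hinge-loss subgradient descent on a toy set, which seeks w, b satisfying the two conditions above):

```python
import numpy as np

def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=200):
    """Hinge-loss subgradient descent: pushes toward y_i(w.x_i + b) >= 1
    for all samples while keeping w^T w small (the margin objective)."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in range(n):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:                    # constraint violated: push
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:                             # only regularize w^T w
                w -= lr * lam * w
    return w, b

def svm_predict(X, w, b):
    # sign of the classifying-hyperplane equation w.x + b = 0
    return np.where(X @ w + b >= 0, 1, -1)
```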
Fig. 9 is a schematic diagram of ethnicity visual dictionary construction in the face ethnicity classification system based on three-dimensional data of the present invention. As shown in Fig. 9, visual vocabulary calculation is performed on typical Eastern depth face images and Western depth face images separately. Among all the calculated visual vocabularies, regions where the vocabularies lie close together are regarded as the shared region of the ethnic depth visual vocabulary, representing attributes common to people; regions that lie far apart are regarded as representing the characteristic information of an ethnicity (Eastern or Western depth face images), and from these the Eastern visual dictionary library and the Western visual dictionary library are built respectively.
Figure 10 is the classification flowchart of the face ethnicity classification system based on three-dimensional data of the present invention. As shown in Figure 10, unlike the two-class design of gender classification, we treat ethnicity classification as a fuzzy classification problem. The texture-optimized three-dimensional face image is first Gabor-filtered to obtain the Gabor filter response vector set of the depth image. Each response vector of this set is mapped into the Eastern (Western) visual dictionary library; if its distance to some word in the Eastern (Western) dictionary is less than a threshold, the response vector belongs to an Eastern (Western) face, and the corresponding vector count eastnum (westnum) is incremented by 1. The final fuzzy membership function is:
Membership (I)=eastnum/westnum
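The counting scheme above can be sketched as follows (a hedged illustration; the guard against a zero westnum is our addition, since the text does not specify that case):

```python
import numpy as np

def ethnicity_membership(responses, east_dict, west_dict, thr):
    """Fuzzy ethnicity scoring: each Gabor response vector increments
    eastnum (westnum) when it lies within thr of some word in the
    Eastern (Western) dictionary; Membership(I) = eastnum / westnum."""
    eastnum = westnum = 0
    for vec in responses:
        if np.linalg.norm(east_dict - vec, axis=1).min() < thr:
            eastnum += 1
        if np.linalg.norm(west_dict - vec, axis=1).min() < thr:
            westnum += 1
    # zero-denominator guard (our addition)
    return eastnum / westnum if westnum else float('inf')
```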
Figure 11 is the system block diagram of the face classification system based on three-dimensional data of the present invention. As shown in Figure 11, it includes the position of each module in the system and its main functions.
For those skilled in the art, the present invention is not restricted to the details of the above-described embodiments, and the present invention can be realized in other concrete forms without departing from its spirit and scope. Furthermore, those skilled in the art may make various changes and modifications to the present invention without departing from its spirit and scope, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications falling within the scope of the invention.

Claims (18)

1. A face classification system based on three-dimensional data, characterized in that its main contents include:
(1) an input part for three-dimensional face point cloud data;
(2) a part for detecting specific face regions in the three-dimensional face point cloud data;
(3) a part for performing data registration on the detected specific face regions;
(4) a part for performing depth face data mapping on the registered three-dimensional face point cloud data;
(5) a part for performing face depth data quality evaluation on the depth face data;
(6) a part for performing depth face texture repair on the depth face data;
(7) a part for performing three-dimensional face recognition on the depth face data.
2. The input part (1) for three-dimensional point cloud data according to claim 1, characterized in that it includes data input from various types of three-dimensional point cloud acquisition devices.
3. The part (2) for detecting specific face regions in the three-dimensional face point cloud data according to claim 1, characterized in that it includes a module that extracts characteristic information from the three-dimensional point cloud data and uses a trained classifier to detect specific face regions.
4. The part (3) for performing data registration on the detected specific face regions according to claim 1, characterized in that it includes a module that performs data registration between the specific face region obtained by detection and the standard data of that region in a template library.
5. The part (4) for mapping the three-dimensional face point cloud data to a depth face image according to claim 1, characterized in that it includes a module that performs data mapping according to a preset depth resolution and the position of the detected specific face region.
6. The part (5) for performing data quality evaluation on the acquired depth face data according to claim 1, characterized in that it includes a module that produces depth eigenfaces from typical depth face data and uses the depth eigenfaces to evaluate face depth data quality.
7. The part (6) for performing depth face texture repair on the depth face data according to claim 1, characterized in that it includes a module that filters noise data according to image neighborhood data points and uses an edge-preserving filter to repair the depth image texture.
8. The part (7) for performing three-dimensional face recognition on the depth face data according to claim 1, characterized in that it includes a module that extracts depth face features from the depth data and performs face recognition with a classifier.
9. The face characteristic region detection module according to claim 3, characterized in that the nose region is used as the face characteristic region, with the following key steps:
1) determine the threshold of the regional average negative effective energy density, defined as thr;
2) use depth information to extract the face data within a certain depth range as the data to be processed;
3) calculate the normal information of the face data selected by depth;
4) according to the definition of regional average negative effective energy density, obtain the average negative effective energy density of each connected domain in the data to be processed, and select the connected domain with the largest density value;
5) when the value for this region is greater than the predefined thr, the region is the nose region; otherwise, return to step 1) and continue.
10. The data registration module based on the detected face nose region according to claim 4, characterized in that the key steps are as follows:
First, we prepare the data of a nose region corresponding to the standard pose in the template library. Then, for three-dimensional data of different poses, after the registration reference region is obtained, the data are registered according to the ICP algorithm, whose key steps are as follows:
Assume that we have obtained the matched data sets P and Q,
1) calculate the 3*3 matrix H,
where N is the size of the matched data set;
2) compute the SVD decomposition of the matrix H:
H = UΛV^T
X = VU^T
3) calculate the rotation matrix R and the translation vector t:
when the determinant of X is 1, R = X;
t = P - R*Q
Through the above steps, the three-dimensional spatial transformation between the two three-dimensional point sets is obtained, thus realizing the registration of the two point sets.
11. The data mapping module that maps three-dimensional point cloud data to a depth face image according to claim 5, characterized in that the detected face nose region serves as the reference for the center of the depth image data, and the x-axis and y-axis information of its spatial coordinate system is mapped to the image coordinate system of the face depth image, with the following calculation process:
Let the nose-tip point be N(x, y, z); then the image coordinates of a spatial point P(x1, y1, z1) are:
Ix = (x1 - x) + width/2
Iy = (y1 - y) + height/2
where width is the width of the depth image and height is its height;
meanwhile, a depth resolution Z_ref is preset according to the depth accuracy of the three-dimensional point cloud data, and the z-axis information of the spatial coordinate system serves as the reference for the depth value mapped into the face depth image,
completing the data mapping of the three-dimensional point cloud data into a depth face image.
12. The production of depth eigenfaces from typical depth face data according to claim 6, characterized in that the calculation process of the depth eigenfaces can be summarized as:
1) each depth face image in the training set is stretched from a two-dimensional matrix into a one-dimensional column vector, and these column vectors are assembled into a matrix A; assuming the resolution of each depth face image is M*M, the dimension of a stretched face column vector is D = M*M, and if the training set contains N depth face images, the dimension of the sample matrix A is D*N;
2) the N depth face images in the training set are summed over corresponding dimensions and then averaged, yielding the average face of the depth images; subtracting the depth average face from each of the N depth images gives the difference image data matrix Φ;
3) eigenvalue decomposition is performed on the covariance matrix C = Φ*Φ^T; according to the proportion of the total eigenvalue energy they account for, several of the largest eigenvalues are selected, and their corresponding eigenvectors are the depth eigenfaces;
a depth face image can be projected into the space spanned by these eigenfaces for approximate calculation.
13. The algorithm flow of the depth image data quality evaluation module according to claim 6, characterized in that the module is divided into two stages, training and evaluation: in the training stage, the eigenfaces of depth face images are trained, and these span the depth face image space; in the evaluation stage, an input depth face image is mapped to a point in the depth eigenface space, from which the approximate depth face image characterized by the depth eigenfaces is obtained;
the approximate image is then compared with the original image: if the difference exceeds a certain threshold, the depth image does not conform to the type represented by these depth eigenfaces and the evaluation fails; otherwise, the image is considered to conform to the type represented by these depth eigenfaces and the evaluation passes.
14. The image processing module that performs texture repair on face depth image data according to claim 7, mainly comprising the following steps: noise in the depth image is detected first, the noise types mainly including data holes and data spikes, which appear in the depth image as zero values in the face depth data and as protruding depth values in the local texture; depth data denoising is then performed, using neighborhood valid-depth-value filtering in the present invention to filter the noise in the above depth face image, this filter being expressible as:
where, when I(x-m, y-n) is a valid depth image point, w(x-m, y-n) takes the corresponding filter weight, and when I(x-m, y-n) is an invalid depth image point, its weight is 0; after this preliminary low-pass filtering of singular points, an edge-preserving filter is applied to further repair the texture of the depth image; in the present invention the edge-preserving filter uses bilateral filtering (though not limited to this); a bilateral filter is composed of two functions, one determining the filter coefficients from the geometric spatial distance and the other from the pixel value difference, and in a bilateral filter the value of an output pixel depends on a weighted combination of the values of its neighborhood pixels:
where the filter coefficient determined by the geometric spatial distance is:
and the filter coefficient determined by the pixel value difference is:
the weight coefficient is then the product of the spatial-domain coefficient and the value-domain coefficient:
through this combination, the image filtering considers differences in both the spatial domain and the value domain simultaneously, so that particular edge information in the data is retained while noise data is filtered out, effectively repairing the noise in the depth face image data and enhancing the face depth feature information.
15. The extraction of depth face features from depth data according to claim 8, characterized in that the process can be divided into a visual vocabulary training stage and a visual dictionary histogram feature extraction stage:
in the visual vocabulary training stage, the P depth images of resolution M*N in the training set are first filtered with Gabor filters, converting the original depth images into P*M*N multidimensional Gabor filter response vectors; these vectors are grouped according to the spatial position within their source images, K-means clustering is applied to each group of vectors, and the resulting cluster centers form the visual vocabulary (visual word dictionary) of the Gabor filter response vector set corresponding to that image spatial position; connecting the visual vocabularies of all groups constitutes the visual dictionary of depth face images;
in the visual dictionary histogram feature extraction stage, after a test face image is input and Gabor-filtered, each filter response vector is compared with all primitive words in the visual word dictionary corresponding to its position and, by means of distance matching, is mapped to the closest primitive; in this way, the visual dictionary histogram feature of the original depth image can be extracted.
16. The module that performs face recognition with a classifier according to claim 8, characterized in that the principle of the classifier is: let the linearly separable sample set be (xi, yi), i = 1, ..., n, x ∈ R^d, where y ∈ {+1, -1} is the category label; then
w·x + b = 0
is the classifying hyperplane equation of the SVM classifier; during classification, in order for the classifying hyperplane to classify all samples correctly while maximizing the class margin, the following two conditions must be met:
Φ(w) = min(w^T·w)
yi(w·xi+b)-1≥0
the optimal classifying hyperplane is obtained by solving this constrained optimization problem, and the training samples that make the equality hold, i.e., the points in the two sample classes nearest to the classifying hyperplane that lie on hyperplanes parallel to the optimal one, are special samples: because they support the optimal classifying hyperplane, they are called support vectors; the texture-optimized three-dimensional face image first undergoes visual histogram feature extraction, and the extracted features are input into the SVM gender classifier to obtain the final gender classification result.
17. The classification module according to claim 16, characterized in that, unlike the two-class design of gender classification, we treat ethnicity classification as a fuzzy classification problem: the texture-optimized three-dimensional face image is first Gabor-filtered to obtain the Gabor filter response vector set of the depth image; each response vector of this set is mapped into the Eastern (Western) visual dictionary library, and if its distance to some word in the Eastern (Western) dictionary is less than a threshold, the response vector belongs to an Eastern (Western) face and the corresponding vector count eastnum (westnum) is incremented by 1; the following equation:
Membership (I) = eastnum/westnum
is the final fuzzy membership function.
18. The visual dictionary according to claim 16, characterized in that visual vocabulary calculation is performed on typical Eastern depth face images and Western depth face images separately; among all the calculated visual vocabularies, regions where the vocabularies lie close together are regarded as the shared region of the ethnic depth visual vocabulary, representing attributes common to people; regions that lie far apart are regarded as representing the characteristic information of an ethnicity (Eastern or Western depth face images), and from these the Eastern visual dictionary library and the Western visual dictionary library are built respectively.
CN201610490306.9A 2016-06-28 2016-06-28 A kind of face classification system based on three-dimensional data Active CN105894047B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610490306.9A CN105894047B (en) 2016-06-28 2016-06-28 A kind of face classification system based on three-dimensional data


Publications (2)

Publication Number Publication Date
CN105894047A true CN105894047A (en) 2016-08-24
CN105894047B CN105894047B (en) 2019-08-27

Family

ID=56718462

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610490306.9A Active CN105894047B (en) 2016-06-28 2016-06-28 A kind of face classification system based on three-dimensional data

Country Status (1)

Country Link
CN (1) CN105894047B (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106529486A (en) * 2016-11-18 2017-03-22 深圳市唯特视科技有限公司 Racial recognition method based on three-dimensional deformed face model
CN106557749A (en) * 2016-11-18 2017-04-05 深圳市唯特视科技有限公司 A kind of face identification method for being used for security protection based on three-dimensional deformation faceform
CN106682690A (en) * 2016-12-20 2017-05-17 电子科技大学 Visual sense mapping method based on support vector regression
CN108154466A (en) * 2017-12-19 2018-06-12 北京小米移动软件有限公司 Image processing method and device
CN108665431A (en) * 2018-05-16 2018-10-16 南京信息工程大学 Fractional order image texture Enhancement Method based on K- mean clusters
CN108664839A (en) * 2017-03-27 2018-10-16 北京三星通信技术研究有限公司 A kind of image processing method and equipment
CN109064429A (en) * 2018-08-02 2018-12-21 河北工业大学 A kind of fusion GPU accelerates the pseudo- laser data generation method of depth map reparation
CN109166177A (en) * 2018-08-27 2019-01-08 清华大学 Air navigation aid in a kind of art of craniomaxillofacial surgery
WO2019080580A1 (en) * 2017-10-26 2019-05-02 深圳奥比中光科技有限公司 3d face identity authentication method and apparatus
CN109977803A (en) * 2019-03-07 2019-07-05 北京超维度计算科技有限公司 A kind of face identification method based on Kmeans supervised learning
CN110647856A (en) * 2019-09-29 2020-01-03 大连民族大学 Method for recognizing facial expressions based on theory of axiomatic fuzzy set
CN110781828A (en) * 2019-10-28 2020-02-11 北方工业大学 Fatigue state detection method based on micro-expression
CN111210510A (en) * 2020-01-16 2020-05-29 腾讯科技(深圳)有限公司 Three-dimensional face model generation method and device, computer equipment and storage medium
WO2020134411A1 (en) * 2018-12-29 2020-07-02 杭州海康威视数字技术股份有限公司 Merchandise category recognition method, apparatus, and electronic device
CN111401331A (en) * 2020-04-27 2020-07-10 支付宝(杭州)信息技术有限公司 Face recognition method and device
CN111507178A (en) * 2020-03-03 2020-08-07 平安科技(深圳)有限公司 Data processing optimization method and device, storage medium and computer equipment
CN112070700A (en) * 2020-09-07 2020-12-11 深圳市凌云视迅科技有限责任公司 Method and device for removing salient interference noise in depth image
US11238270B2 (en) 2017-10-26 2022-02-01 Orbbec Inc. 3D face identity authentication method and apparatus
CN114252071A (en) * 2020-09-25 2022-03-29 财团法人工业技术研究院 Self-propelled vehicle navigation device and method thereof

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020102010A1 (en) * 2000-12-06 2002-08-01 Zicheng Liu System and method providing improved head motion estimations for animation
US20040208344A1 (en) * 2000-03-09 2004-10-21 Microsoft Corporation Rapid computer modeling of faces for animation
US20120257800A1 (en) * 2011-04-05 2012-10-11 Yufeng Zheng Face recognition system and method using face pattern words and face pattern bytes
CN103996052A (en) * 2014-05-12 2014-08-20 深圳市唯特视科技有限公司 Three-dimensional face gender classification device and method based on three-dimensional point cloud
CN104036247A (en) * 2014-06-11 2014-09-10 杭州巨峰科技有限公司 Facial feature based face racial classification method
CN104504410A (en) * 2015-01-07 2015-04-08 深圳市唯特视科技有限公司 Three-dimensional face recognition device and method based on three-dimensional point cloud
CN104573722A (en) * 2015-01-07 2015-04-29 深圳市唯特视科技有限公司 Three-dimensional face race classifying device and method based on three-dimensional point cloud


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHONG C ET AL: "《Fuzzy 3D Face Ethnicity Categorization》", 《ICB’09 PROCEEDINGS OF THE THIRD INTERNATIONAL CONFERENCE ON ADVANCES IN BIOMETRICS》 *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106557749A (en) * 2016-11-18 2017-04-05 深圳市唯特视科技有限公司 A kind of face identification method for being used for security protection based on three-dimensional deformation faceform
CN106529486A (en) * 2016-11-18 2017-03-22 深圳市唯特视科技有限公司 Racial recognition method based on three-dimensional deformed face model
CN106682690B (en) * 2016-12-20 2019-11-05 电子科技大学 A kind of vision mapping method based on support vector regression
CN106682690A (en) * 2016-12-20 2017-05-17 电子科技大学 Visual sense mapping method based on support vector regression
CN108664839B (en) * 2017-03-27 2024-01-12 北京三星通信技术研究有限公司 Image processing method and device
CN108664839A (en) * 2017-03-27 2018-10-16 北京三星通信技术研究有限公司 A kind of image processing method and equipment
WO2019080580A1 (en) * 2017-10-26 2019-05-02 深圳奥比中光科技有限公司 3d face identity authentication method and apparatus
US11238270B2 (en) 2017-10-26 2022-02-01 Orbbec Inc. 3D face identity authentication method and apparatus
CN108154466A (en) * 2017-12-19 2018-06-12 北京小米移动软件有限公司 Image processing method and device
CN108665431A (en) * 2018-05-16 2018-10-16 南京信息工程大学 Fractional order image texture Enhancement Method based on K- mean clusters
CN109064429A (en) * 2018-08-02 2018-12-21 河北工业大学 A kind of fusion GPU accelerates the pseudo- laser data generation method of depth map reparation
CN109064429B (en) * 2018-08-02 2022-02-08 河北工业大学 Pseudo laser data generation method for accelerating depth image restoration by fusing GPU
CN109166177A (en) * 2018-08-27 2019-01-08 清华大学 Air navigation aid in a kind of art of craniomaxillofacial surgery
WO2020134411A1 (en) * 2018-12-29 2020-07-02 杭州海康威视数字技术股份有限公司 Merchandise category recognition method, apparatus, and electronic device
CN109977803A (en) * 2019-03-07 2019-07-05 北京超维度计算科技有限公司 A kind of face identification method based on Kmeans supervised learning
CN110647856A (en) * 2019-09-29 2020-01-03 大连民族大学 Facial expression recognition method based on axiomatic fuzzy set theory
CN110647856B (en) * 2019-09-29 2023-04-18 大连民族大学 Facial expression recognition method based on axiomatic fuzzy set theory
CN110781828A (en) * 2019-10-28 2020-02-11 北方工业大学 Fatigue state detection method based on micro-expression
CN111210510B (en) * 2020-01-16 2021-08-06 腾讯科技(深圳)有限公司 Three-dimensional face model generation method and device, computer equipment and storage medium
CN111210510A (en) * 2020-01-16 2020-05-29 腾讯科技(深圳)有限公司 Three-dimensional face model generation method and device, computer equipment and storage medium
CN111507178A (en) * 2020-03-03 2020-08-07 平安科技(深圳)有限公司 Data processing optimization method and device, storage medium and computer equipment
CN111507178B (en) * 2020-03-03 2024-05-14 平安科技(深圳)有限公司 Data processing optimization method and device, storage medium and computer equipment
CN111401331A (en) * 2020-04-27 2020-07-10 支付宝(杭州)信息技术有限公司 Face recognition method and device
CN112070700A (en) * 2020-09-07 2020-12-11 深圳市凌云视迅科技有限责任公司 Method and device for removing protruding interference noise in depth images
CN112070700B (en) * 2020-09-07 2024-03-29 深圳市凌云视迅科技有限责任公司 Method and device for removing protruding interference noise in depth images
CN114252071A (en) * 2020-09-25 2022-03-29 财团法人工业技术研究院 Self-propelled vehicle navigation device and method thereof

Also Published As

Publication number Publication date
CN105894047B (en) 2019-08-27

Similar Documents

Publication Publication Date Title
CN105894047A (en) Human face classification system based on three-dimensional data
CN105956582B (en) Face recognition system based on three-dimensional data
CN110263774B (en) Face detection method
CN106682598B (en) Multi-pose face feature point detection method based on cascade regression
CN112418074B (en) Coupled posture face recognition method based on self-attention
CN107657279B (en) Remote sensing target detection method based on small amount of samples
CN101763503B (en) Pose-robust face recognition method
JP5545361B2 (en) Image classification method, apparatus, program product, and storage medium
CN103279768B (en) Video face recognition method based on incremental learning of face block visual features
CN102930300B (en) Method and system for identifying airplane target
CN102819733B (en) Rapid face detection and blurring method for street view images
CN107590458B (en) Gender and age identification method for vertical-view pedestrian flow counting
CN105243374A (en) Three-dimensional human face recognition method and system, and data processing device applying same
CN106529504B (en) Bimodal video emotion recognition method based on composite spatio-temporal features
Rouhi et al. A review on feature extraction techniques in face recognition
CN103295025A (en) Automatic selection method for the optimal view of a three-dimensional model
EP2648159A1 (en) Object detecting method and object detecting device using same
CN103136516A (en) Face recognition method and system fusing visible light and near-infrared information
CN110084211B (en) Action recognition method
CN103218825A (en) Fast detection method for scale-invariant spatio-temporal interest points
CN105868711B (en) Sparse low-rank-based human behavior identification method
CN105956570A (en) Smiling face recognition method based on lip features and deep learning
CN104573722A (en) Three-dimensional face ethnicity classification device and method based on three-dimensional point clouds
CN103218606A (en) Multi-pose face recognition method based on face mean and variance energy images
CN116469020A (en) Unmanned aerial vehicle image target detection method based on multiscale and Gaussian Wasserstein distance

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant