CN104598878A - Multi-modal face recognition device and method based on multi-layer fusion of gray level and depth information - Google Patents
Multi-modal face recognition device and method based on multi-layer fusion of gray level and depth information
- Publication number: CN104598878A
- Application number: CN201510006214.4A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V40/172 — Human faces, e.g. facial parts, sketches or expressions: classification, e.g. identification
- G06V40/168 — Human faces, e.g. facial parts, sketches or expressions: feature extraction; face representation
Abstract
The invention discloses a multi-modal face recognition device and method based on multi-layer fusion of gray level and depth information. The method mainly comprises the steps of: recognizing the gray level information of a human face; recognizing the depth information of the human face; and normalizing the matching scores obtained from the gray level and depth information, then combining the normalized scores through a fusion strategy to obtain a multi-modal fused matching score, thereby achieving multi-modal face recognition. The multi-modal system collects both two-dimensional gray level information and three-dimensional depth information, exploits the complementary advantages of the two, and through the fusion strategy overcomes certain inherent shortcomings of single-modal systems, such as the sensitivity of gray level images to illumination and of depth images to expression, thereby greatly enhancing performance and achieving accurate and rapid face recognition.
Description
Technical field
The present invention relates to the technical field of face recognition, and in particular to a multi-modal face recognition device and method based on multi-layer fusion of gray level and depth information.
Background art
Compared with two-dimensional face recognition, three-dimensional face recognition is more robust to illumination and less affected by factors such as pose and expression. With the rapid development of 3D data acquisition technology and the great improvement in the quality and precision of three-dimensional data, many scholars have therefore turned their research to this field.
CN20101025690 proposes using correlated features of three-dimensional bending invariants to describe facial characteristics. The method extracts bending-invariant correlated features by encoding the local features of the bending invariants at adjacent nodes of the three-dimensional face surface; the correlated features are signed and reduced in dimensionality by spectral regression to obtain principal components, and a K-nearest-neighbour classifier is used to recognise the three-dimensional face. However, extracting the invariant correlated features requires a large amount of computation, which limits the efficiency and hence the further application of the method.
CN200910197378 proposes a fully automatic method for three-dimensional face detection and pose correction. By performing multi-scale moment analysis on the three-dimensional facial surface, the method proposes a face region feature for coarse detection of the facial surface and a nose region feature for locating the nose, and then accurately segments the complete facial surface. After detecting the nasion position from the range information of the facial surface using a nasion region feature, it establishes a face coordinate system and automatically corrects the face pose accordingly. The object of that patent is to estimate the pose of three-dimensional face data; it belongs to the data preprocessing stage of a three-dimensional face recognition system.
Gray level face images are easily affected by illumination variation, while depth face images are easily affected by acquisition accuracy and expression changes; to some extent these factors impair the stability and accuracy of face recognition systems.
Multi-modal fusion systems have therefore attracted growing attention. By collecting multi-modal data, a multi-modal system can exploit the advantages of each modality and, through a fusion strategy, overcome the inherent weaknesses of single-modal systems (such as the sensitivity of gray level images to illumination and of depth images to expression), effectively improving the performance of face recognition systems.
Summary of the invention
In order to solve the technical problems described above — exploiting the advantages of each modality through multi-modal data collection and overcoming the inherent weaknesses of single-modal systems (such as the sensitivity of gray level images to illumination and of depth images to expression) through a fusion strategy — the present invention adopts the following technical solution:
A multi-modal face recognition device based on multi-layer fusion of gray level and depth information comprises: a computing unit for performing face recognition on gray level information; a computing unit for performing face recognition on depth information; a computing unit for fusing multi-modal face recognition scores; and a classifier computing unit for classifying the data.
Preferably, in the above multi-modal face recognition device based on multi-layer fusion of gray level and depth information, the computing unit for performing face recognition on gray level information comprises: a human eye detection unit, a two-dimensional data registration computing unit, a gray level face feature extraction unit and a gray level face recognition score computing unit.
Preferably, in the above multi-modal face recognition device based on multi-layer fusion of gray level and depth information, the computing unit for performing face recognition on depth information comprises: a nose detector unit, a three-dimensional data registration computing unit, a depth face feature extraction unit and a depth face recognition score computing unit.
The invention also discloses a multi-modal face recognition method based on multi-layer fusion of gray level and depth information, comprising the following steps:
A. recognizing the gray level information of the face;
B. recognizing the depth information of the face;
C. normalizing the matching scores of the gray level and depth information and, based on the normalized matching scores, obtaining a multi-modal fused matching score through a fusion strategy, thereby realizing multi-modal face recognition.
Preferably, in the above multi-modal face recognition method based on multi-layer fusion of gray level and depth information, step A comprises the following steps:
A1. Feature region location: a human eye detector is used to obtain the eye region. The eye detector is a cascaded classifier H obtained by the following algorithm:
Given a training sample set S = {(x_1, y_1), …, (x_m, y_m)} and a weak classifier space, where x_i ∈ χ is a sample vector, y_i = ±1 is the class label and m is the total number of samples, initialize the sample probability distribution D_1(i) = 1/m.
For t = 1, …, T, perform the following operations on the weak classifiers h:
partition the sample space χ to obtain X_1, X_2, …, X_n;
compute the normalization factor Z;
select an h_t from the weak classifier space that minimizes Z;
update the training sample probability distribution D_{t+1}(i) = D_t(i)·exp(−y_i·h_t(x_i)) / Z_t, where Z_t is the normalization factor that makes D_{t+1} a probability distribution.
The final strong classifier is H(x) = sign(Σ_{t=1}^{T} h_t(x)).
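Under the definitions above, the training loop of step A1 can be sketched as follows. This is a hedged illustration of a discrete AdaBoost variant rather than the patent's exact domain-partitioning detector; the explicit stage weights `alpha` and all function names are assumptions made for the sketch:

```python
import numpy as np

def adaboost_train(X, y, weak_learners, T):
    """Sketch of the AdaBoost training loop described in step A1.

    X: (m, d) sample vectors, y: (m,) labels in {-1, +1},
    weak_learners: candidate classifiers, each mapping X -> {-1, +1} predictions.
    """
    m = len(y)
    D = np.full(m, 1.0 / m)           # initial sample distribution D_1(i) = 1/m
    chosen, alphas = [], []
    for t in range(T):
        # pick the weak classifier with the lowest weighted error
        errs = [np.sum(D[h(X) != y]) for h in weak_learners]
        h = weak_learners[int(np.argmin(errs))]
        eps = max(min(errs), 1e-10)
        alpha = 0.5 * np.log((1.0 - eps) / eps)
        # re-weight samples: misclassified points gain weight
        D = D * np.exp(-alpha * y * h(X))
        D /= D.sum()                   # Z_t normalises D_{t+1} to a distribution
        chosen.append(h)
        alphas.append(alpha)
    # strong classifier: sign of the weighted vote over the chosen weak learners
    return lambda Xq: np.sign(sum(a * h(Xq) for a, h in zip(alphas, chosen)))
```

A usage example would pass a list of threshold stumps as `weak_learners`; the returned closure plays the role of the strong classifier H.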
A2. The obtained eye positions are used for registration, and the LBP algorithm is applied to the registered eye-position data to obtain an LBP histogram feature, whose value is given by LBP_{P,R} = Σ_{p=0}^{P−1} s(g_p − g_c)·2^p, where s(x) = 1 if x ≥ 0 and 0 otherwise, g_c is the gray value of the centre pixel and g_p those of its P neighbours on a circle of radius R.
This feature is input to the gray level image classifier to obtain the gray level matching score.
Preferably, in the above multi-modal face recognition method based on multi-layer fusion of gray level and depth information, step B comprises the following steps:
B1. Feature region location: determine the position of the nose region of the face;
B2. For three-dimensional data of different poses, after obtaining the registration reference region, register the data using the ICP algorithm; after registration is complete, compute the Euclidean distance between the input data and the three-dimensional face model data in the registry;
B3. Obtain the depth image from the depth information, use a filter to compensate for and denoise the noise points in the mapped depth image, and finally select the expression-robust region to obtain the final three-dimensional face depth image;
B4. Extract the visual dictionary histogram feature vector of the three-dimensional depth image: after a test face image is input and Gabor filtered, each filter-response vector is compared with all the primitive vocabulary in the visual sub-dictionary corresponding to its position and, by distance matching, mapped to the closest primitive; the visual dictionary histogram feature of the original depth image is thereby extracted and input to the depth image classifier to obtain a matching score.
Preferably, in the above multi-modal face recognition method based on multi-layer fusion of gray level and depth information, step C specifically comprises:
normalizing the scores of the two-dimensional gray level information and the three-dimensional depth information according to the min-max linear normalization principle, S' = (S − min) / (max − min);
after score normalization, fusing the matching scores of the different modalities by a robust weighted-sum rule, S_fused = Σ_k w_k·S'_k;
after obtaining the fused multi-modal matching score, applying the linear discriminant analysis algorithm, which builds the within-class scatter matrix S_W and the between-class scatter matrix S_B and maximizes the objective function J(W) = |W^T·S_B·W| / |W^T·S_W·W|,
to obtain the LDA mapping matrix W, which gives the weights.
Preferably, in the above multi-modal face recognition method based on multi-layer fusion of gray level and depth information, step B1 specifically comprises:
Step 1: determine the threshold, i.e. the threshold thr of the regional average negative effective energy density;
Step 2: use the depth information to select the data to be processed, extracting the face data within a certain depth range;
Step 3: compute the normal vectors of the face data selected by depth;
Step 4: compute the regional average negative effective energy density: according to its definition, obtain the average negative effective energy density of each connected region in the data to be processed, and select the connected region with the largest density value;
Step 5: decide whether the nose region has been found: if the current region's value is greater than the predefined thr, this region is the nose region; otherwise return to Step 1 and restart the loop.
Preferably, in the above multi-modal face recognition method based on multi-layer fusion of gray level and depth information, the main steps of the ICP algorithm comprise:
determining the matched data set pair: select a reference data point set P from the three-dimensional nose data of the reference template, then use the nearest point-to-point distance to select the data point set Q in the input three-dimensional face that matches the reference data;
computing the rigid motion parameters, i.e. the rotation matrix R and translation vector t:
when the determinant of X is 1, R = X;
t = P̄ − R·Q̄
judging from the error between the rigidly transformed data set R·Q + t and the reference data set P whether the three-dimensional data sets are registered; after registration, computing the Euclidean distance between the input data and the three-dimensional face model data in the registry by d(P, Q) = (1/N)·Σ_{i=1}^{N} ‖p_i − (R·q_i + t)‖,
where P and Q are the feature point sets to be matched, each containing N feature points.
Preferably, in the above multi-modal face recognition method based on multi-layer fusion of gray level and depth information, step B4 specifically comprises:
segmenting the three-dimensional face depth image into several local texture regions;
mapping each Gabor filter-response vector, according to its position, to the vocabulary of its corresponding visual sub-dictionary, and building the visual dictionary histogram vector on this basis as the feature expression of the three-dimensional face;
using a nearest neighbour classifier for the final face recognition, with the L1 distance selected as the distance metric.
Compared with the prior art, the present invention has the following technical effects:
With the solution of the present invention, the multi-modal system collects both two-dimensional gray level information and three-dimensional depth information, exploits the advantages of both, and through the fusion strategy overcomes certain inherent weaknesses of single-modal systems (such as the sensitivity of gray level images to illumination and of depth images to expression), effectively improving the performance of the face recognition system and making face recognition more accurate and rapid.
Brief description of the drawings
Fig. 1 is a flow block diagram of the present invention;
Fig. 2 is a system block diagram of the present invention;
Fig. 3 is a schematic diagram of three-dimensional face nose location according to the present invention;
Fig. 4 is a schematic diagram of three-dimensional face spatial mapping according to the present invention;
Fig. 5 is a schematic diagram of three-dimensional face depth representation feature extraction according to the present invention;
Fig. 6 is a schematic diagram of two-dimensional face eye detection according to the present invention;
Fig. 7 is a schematic diagram of two-dimensional face LBP features according to the present invention;
Fig. 8 is a schematic diagram of two-dimensional face gray level representation feature extraction according to the present invention;
Fig. 9 is a schematic diagram of the different-modality score fusion algorithm of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the scope of protection of the present invention.
The present invention discloses a multi-modal face recognition device based on multi-layer fusion of gray level and depth information, comprising: a computing unit for performing face recognition on gray level information; a computing unit for performing face recognition on depth information; a computing unit for fusing multi-modal face recognition scores; and a classifier computing unit for classifying the data.
The gray level face recognition computing unit specifically comprises a human eye detection unit, a two-dimensional data registration computing unit, a gray level face feature extraction unit and a gray level face recognition score computing unit.
The depth face recognition computing unit specifically comprises a nose detector unit, a three-dimensional data registration computing unit, a depth face feature extraction unit and a depth face recognition score computing unit.
The present invention also discloses a multi-modal face recognition method based on multi-layer fusion of gray level and depth information. As shown in Fig. 9, the disclosed multi-modal fusion system takes multiple data sources, such as two-dimensional gray level images and three-dimensional depth images. For a two-dimensional gray level image, feature point detection (the eyes) is performed first, then the obtained feature point positions are used for registration; after the gray level image is registered, the LBP algorithm is applied to obtain an LBP histogram feature, which is input to the gray level image classifier to obtain a matching score. For the depth data, feature point detection (the nose) is performed first and the obtained feature points are used for registration; the registered three-dimensional spatial data are then mapped to a face depth image, the visual dictionary algorithm is applied to obtain a visual dictionary histogram feature, and this feature is input to the depth image classifier to obtain a matching score. The system uses a decision-level fusion strategy: after obtaining the matching score of each data source, the scores are normalized, and a fusion strategy is applied to the normalized matching scores to obtain the multi-modal fused matching score, thereby realizing multi-modal face recognition.
As shown in Fig. 6, the eye region is obtained by a human eye detector. This detector is a cascaded classifier: each layer is a strong classifier (e.g. Adaboost), each layer filters out a portion of non-eye regions, and the image region that survives all layers is the eye region. The benefit of the cascade is that the first few layers use few features and therefore run fast; by the time the later, more complex layers are reached, few image regions remain. Through this mechanism the cascaded classifier achieves real-time detection performance. The Adaboost algorithm can be summarized as follows:
Given a training sample set S = {(x_1, y_1), …, (x_m, y_m)} and a weak classifier space, where x_i ∈ χ is a sample vector, y_i = ±1 is the class label and m is the total number of samples, initialize the sample probability distribution D_1(i) = 1/m.
For t = 1, …, T, perform the following operations on the weak classifiers h:
partition the sample space χ to obtain X_1, X_2, …, X_n;
compute the normalization factor Z;
select an h_t from the weak classifier space that minimizes Z;
update the training sample probability distribution D_{t+1}(i) = D_t(i)·exp(−y_i·h_t(x_i)) / Z_t, where Z_t is the normalization factor that makes D_{t+1} a probability distribution.
The final strong classifier is H(x) = sign(Σ_{t=1}^{T} h_t(x)).
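The layered rejection scheme described above — cheap strong classifiers first, each discarding most non-eye windows, with only surviving windows reaching the later layers — can be sketched as follows. The stage predicates and their ordering are illustrative assumptions, not the patent's trained stages:

```python
def cascade_detect(window, stages):
    """Sketch of the cascaded (layered) eye detector: each stage is a
    strong classifier that rejects most non-eye windows early; only windows
    accepted by every stage are reported as candidate eye regions."""
    for stage in stages:          # cheap stages first, complex ones later
        if not stage(window):
            return False          # rejected early: no further stages run
    return True                   # survived every layer: candidate eye region
```

In practice each `stage` would be an Adaboost strong classifier of the kind summarized above, trained so that almost all true eye windows pass while a large fraction of background windows are rejected.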
As shown in Figs. 7 and 8, the obtained eye positions are used for registration, and the LBP algorithm is applied to the registered data to obtain an LBP histogram feature. The LBP algorithm compares each pixel with its neighbourhood pixels; its value is given by LBP_{P,R} = Σ_{p=0}^{P−1} s(g_p − g_c)·2^p, where s(x) = 1 if x ≥ 0 and 0 otherwise, g_c is the gray value of the centre pixel and g_p those of its P neighbours on a circle of radius R.
With P = 8 and R = 1, the LBP values that carry texture meaning are as shown in figure (c): the first pattern represents a texture bright spot, the second a texture boundary, and the third a texture dark spot or a flat texture region. According to the statistical distribution of textures, the resulting LBP values are grouped into 59 classes, and these 59 classes serve as the histogram bins of the statistical feature vector (the LBP histogram feature). In this form the descriptive power of local texture information is effectively combined with the robustness of histograms, which has achieved good recognition performance in the field of face recognition.
For the input two-dimensional face data, the key points are first extracted by eye detection; then, according to the eye positions, the face image is adjusted to a frontal upright pose by a rigid transformation, and the LBP histogram feature is extracted from the registered gray level image.
This feature is input to the gray level image classifier to obtain the gray level matching score.
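A hedged sketch of the P = 8, R = 1 LBP feature just described, using the 8 surrounding pixels of the square neighbourhood as an approximation of the circular sampling and a full 256-bin histogram (the grouping into 59 uniform-pattern classes is omitted):

```python
import numpy as np

def lbp_histogram(img):
    """Sketch of the 8-neighbour LBP (P=8, R=1) histogram feature.

    For each interior pixel, a neighbour >= centre contributes one bit of
    the 0..255 code; codes are pooled into a normalised 256-bin histogram.
    """
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    # 8 neighbour offsets, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((h - 2, w - 2), dtype=np.int32)
    centre = img[1:h-1, 1:w-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1+dy:h-1+dy, 1+dx:w-1+dx]
        codes |= ((neigh >= centre).astype(np.int32) << bit)
    hist = np.bincount(codes.ravel(), minlength=256)
    return hist / hist.sum()          # normalised LBP histogram feature
```

The resulting vector is what would be fed to the gray level image classifier; a production implementation would additionally remap the 256 codes onto the 59 uniform-pattern bins.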
As shown in Fig. 3, for the depth data the nose region of the face is detected first, specifically through the following steps:
determine the threshold thr of the regional average negative effective energy density;
use the depth information to select the data to be processed, extracting the face data within a certain depth range;
compute the normal vectors of the face data selected by depth;
compute the regional average negative effective energy density: according to its definition, obtain the average negative effective energy density of each connected region in the data to be processed, and select the connected region with the largest density value;
decide whether the nose region has been found: if the current region's value is greater than the predefined thr, this region is the nose region; otherwise restart the selection.
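A heavily simplified sketch of the detection loop above. The patent's "average negative effective energy density" is not defined in this text, so the score below — how sharply the region's normals diverge from the viewing direction — is only a crude stand-in, and all names and parameters are assumptions:

```python
import numpy as np

def find_nose_region(depth, normals, thr, depth_ranges):
    """Simplified sketch of the nose-detection loop (names hypothetical).

    depth: (n,) point depths; normals: (n, 3) unit normal vectors.
    Each candidate depth band is scored and accepted if the score
    exceeds the predefined threshold thr.
    """
    z = np.array([0.0, 0.0, 1.0])
    for near, far in depth_ranges:             # step 2: candidate depth band
        mask = (depth >= near) & (depth < far)
        if not mask.any():
            continue
        # steps 3-4: average normal-based score over the selected region
        score = float(np.mean(1.0 - normals[mask] @ z))
        if score > thr:                        # step 5: accept if above thr
            return mask
    return None                                # no region passed the threshold
```

A faithful implementation would additionally split each depth band into connected regions and keep only the region with the largest density value, as steps 4-5 require.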
As shown in Fig. 4, the obtained nose region is used for registration. The present invention uses the ICP algorithm to register the data: first select a reference data point set P from the three-dimensional nose data of the reference template, then use the nearest point-to-point distance to select the data point set Q in the input three-dimensional face that matches the reference data, and compute the 3×3 matrix H = Σ_{i=1}^{N} q_i·p_i^T, where N is the capacity of the data sets. Then perform the SVD decomposition of the H matrix:
H = U·Λ·V^T
X = V·U^T
Compute the rotation matrix R and the translation vector t:
when the determinant of X is 1, R = X;
t = P̄ − R·Q̄
Judge whether the error between the rigidly transformed data set R·Q + t and the reference data set P is small enough: if the error is less than a certain threshold, the two three-dimensional data sets are registered; otherwise restart from the first step until the data sets are registered.
According to the above adaptive feature point sampling and ICP registration algorithm, the distance function is d(P, Q) = (1/N)·Σ_{i=1}^{N} ‖p_i − (R·q_i + t)‖,
where P and Q are the feature point sets to be matched, each containing N feature points.
Because the feature point sampling density varies, when computing the Euclidean distance between the input data and the three-dimensional face model data in the registry after registration, the distance must be normalized by the number of valid feature points.
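One rigid-alignment step of the ICP registration above, using the SVD construction H = U·Λ·V^T, X = V·U^T, can be sketched as follows. The reflection guard for det(X) = −1 is a standard addition assumed here rather than spelled out in the patent:

```python
import numpy as np

def icp_rigid_step(P, Q):
    """Sketch of one rigid-alignment step of the ICP registration.

    P, Q: (N, 3) matched point sets (reference and input). Returns the
    rotation R and translation t minimising ||P - (R Q + t)|| in the
    least-squares sense, via H = U S V^T and X = V U^T.
    """
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    H = (Q - q_bar).T @ (P - p_bar)           # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    X = Vt.T @ U.T
    if np.linalg.det(X) < 0:                  # reflection guard: det must be +1
        Vt[-1] *= -1
        X = Vt.T @ U.T
    R = X
    t = p_bar - R @ q_bar
    return R, t

def mean_registration_error(P, Q, R, t):
    # distance function d(P, Q), normalised by the number of feature points N
    return float(np.mean(np.linalg.norm(P - (Q @ R.T + t), axis=1)))
```

A full ICP loop would alternate this step with re-selecting the nearest-point correspondences until the error falls below the registration threshold.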
As shown in Fig. 4, after registration the depth image is first obtained from the depth information; a filter is then used to compensate for and denoise the noise points (protruding points or hole points) in the mapped depth image; finally the expression-robust region is selected to obtain the final three-dimensional face depth image.
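The hole/spike compensation just described might be sketched with a simple median-based filter. The 3×3 window and the `spike_thr` parameter are assumptions for illustration, not values from the patent:

```python
import numpy as np

def clean_depth_image(depth, spike_thr=30.0):
    """Sketch of the depth-image noise compensation: zero-valued pixels
    (holes) and pixels far from their local median (protruding spikes)
    are replaced by the 3x3 neighbourhood median."""
    d = np.asarray(depth, dtype=float)
    h, w = d.shape
    padded = np.pad(d, 1, mode='edge')
    # stack the 3x3 neighbourhood of every pixel and take the median
    neigh = np.stack([padded[i:i+h, j:j+w] for i in range(3) for j in range(3)])
    med = np.median(neigh, axis=0)
    bad = (d == 0) | (np.abs(d - med) > spike_thr)   # holes or spikes
    out = d.copy()
    out[bad] = med[bad]
    return out
```

After this compensation step, the expression-robust region would be cropped from the cleaned depth map to form the final three-dimensional face depth image.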
As shown in Fig. 5, after a test face image is input and Gabor filtered, each filter-response vector is compared with all the primitive vocabulary in the visual sub-dictionary corresponding to its position and, by distance matching, mapped to the closest primitive. In this way the visual dictionary histogram feature of the original depth image is extracted. The overall flow can be summarized as follows:
segment the three-dimensional face depth image into several local texture regions;
map each Gabor filter-response vector, according to its position, to the vocabulary of its corresponding visual sub-dictionary, and build the visual dictionary histogram vector on this basis as the feature expression of the three-dimensional face;
use a nearest neighbour classifier for the final face recognition, with the L1 distance selected as the distance metric.
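A hedged sketch of the visual-dictionary mapping and the final L1 nearest-neighbour step. For brevity the per-position sub-dictionaries are collapsed into a single codebook, which is a simplification of the patent's position-dependent scheme:

```python
import numpy as np

def visual_dictionary_histogram(responses, dictionary):
    """Sketch of the visual dictionary feature (vector quantisation).

    responses: (n, d) Gabor filter-response vectors from the depth image.
    dictionary: (k, d) primitive vocabulary. Each response is mapped to its
    nearest primitive; assignments are pooled into a normalised k-bin histogram.
    """
    # pairwise Euclidean distances between responses and codewords
    dists = np.linalg.norm(responses[:, None, :] - dictionary[None, :, :], axis=2)
    nearest = np.argmin(dists, axis=1)        # distance matching
    hist = np.bincount(nearest, minlength=len(dictionary))
    return hist / hist.sum()

def l1_nearest_neighbour(query_hist, gallery_hists):
    """Final recognition step: nearest neighbour under the L1 distance."""
    d = np.abs(gallery_hists - query_hist).sum(axis=1)
    return int(np.argmin(d))
```

In the patent's full scheme each spatial position has its own sub-dictionary, so the histogram would concatenate per-position assignments rather than pool them globally.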
As shown in Fig. 9, the present invention normalizes the scores of the two-dimensional gray level information and the three-dimensional depth information according to the min-max linear normalization principle, S'_k = (S_k − min) / (max − min).
Unlike the traditional min-max linear normalization principle: because max represents a distant position in the metric space, its value is easily affected by noise (such as hair occlusion in the three-dimensional face), so max is taken as the value at the 95% position of the single-modality score set {S_k} after ascending sort; and because min represents a nearer position in the metric space, its value is not affected by noise data (noise could only make it larger), so min is taken as the minimum of the single-modality score set {S_k} after ascending sort. Here S_k is the matching score in modality k, and S'_k is the normalized matching score in that modality.
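The robust min-max rule just described — min taken as the true minimum, max at the 95% position of the ascending-sorted scores — can be sketched as follows (the clipping of scores beyond the 95% cap is an assumed detail):

```python
import numpy as np

def robust_min_max_normalise(scores, quantile=0.95):
    """Sketch of the robust min-max score normalisation: 'max' is the
    95th-percentile score (to resist outliers such as hair occlusion),
    'min' is the true minimum; S' = (S - min) / (max - min)."""
    s = np.asarray(scores, dtype=float)
    s_min = s.min()
    s_max = np.quantile(s, quantile)
    out = (s - s_min) / (s_max - s_min)
    return np.clip(out, 0.0, 1.0)             # scores past the 95% cap saturate
```

Each modality's score set would be normalised independently with this function before fusion.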
After score normalization, the matching scores of the different modalities are fused by a robust weighted-sum rule, S_fused = Σ_k w_k·S'_k,
giving the matching score after multi-modal data fusion. The weights are obtained by the linear discriminant analysis (LDA) algorithm. This algorithm makes use of the class information of the data: by building the within-class scatter matrix S_W and the between-class scatter matrix S_B, it maximizes the objective function J(W) = |W^T·S_B·W| / |W^T·S_W·W|
to obtain the LDA mapping matrix W, which gives the weights.
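A sketch of the weighted-sum fusion and of obtaining the weights from two-class (genuine/impostor) score data via the Fisher direction S_W⁻¹(m_1 − m_0), which maximises the LDA objective in the two-class case. The regularisation term and the L1 weight normalisation are assumptions added for numerical stability:

```python
import numpy as np

def fuse_scores(score_vectors, weights):
    """Weighted-sum fusion of per-modality normalised scores:
    S_fused = sum_k w_k * S'_k."""
    return np.asarray(score_vectors) @ np.asarray(weights)

def lda_weights(X, y):
    """Sketch of obtaining the fusion weights by linear discriminant
    analysis: w maximises (w^T S_B w)/(w^T S_W w) for two-class score
    data (X: (n, k) per-modality score vectors, y: 0/1 class labels)."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)  # within-class scatter
    # Fisher direction w = Sw^{-1} (m1 - m0), lightly regularised
    w = np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), m1 - m0)
    return w / np.abs(w).sum()                # normalise weights to unit L1 mass
```

In this sketch the modality whose scores separate genuine from impostor comparisons more cleanly receives a proportionally larger fusion weight.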
With the solution of the present invention, the multi-modal system collects both two-dimensional gray level information and three-dimensional depth information, exploits the advantages of both, and through the fusion strategy overcomes certain inherent weaknesses of single-modal systems (such as the sensitivity of gray level images to illumination and of depth images to expression), effectively improving the performance of the face recognition system and making face recognition more accurate and rapid.
It will be evident to those skilled in the art that the invention is not limited to the details of the above exemplary embodiments, and that the present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The embodiments should therefore be regarded in all respects as exemplary and not restrictive; the scope of the invention is defined by the appended claims rather than by the foregoing description, and all changes that fall within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. No reference sign in the claims shall be construed as limiting the claim concerned.
Furthermore, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only one independent technical solution; this manner of narration is adopted only for the sake of clarity. Those skilled in the art should take the specification as a whole; the technical solutions in the embodiments may also be appropriately combined to form other embodiments understandable to those skilled in the art.
Claims (10)
1. A multi-modal face recognition device based on multi-layer fusion of gray-level and depth information, characterized by comprising: a computing unit for performing face recognition on gray-level information; a computing unit for performing face recognition on depth information; a computing unit for fusing multi-modal face recognition scores; and a classifier computing unit for classifying the data.
2. The multi-modal face recognition device based on multi-layer fusion of gray-level and depth information according to claim 1, characterized in that the computing unit for performing face recognition on gray-level information comprises: a human eye detection unit, a two-dimensional data registration computing unit, a gray-level face feature extraction unit and a gray-level face recognition score computing unit.
3. The multi-modal face recognition device based on multi-layer fusion of gray-level and depth information according to claim 1, characterized in that the computing unit for performing face recognition on depth information comprises: a nose detector unit, a three-dimensional data registration computing unit, a depth face feature extraction unit and a depth face recognition score computing unit.
4. A multi-modal face recognition method based on multi-layer fusion of gray-level and depth information, characterized by comprising the steps of:
A. recognizing the gray-level information of the face;
B. recognizing the depth information of the face;
C. normalizing the gray-level and depth matching scores of the face and, based on the normalized matching scores, obtaining the fused multi-modal matching score with a fusion strategy, thereby realizing multi-modal face recognition.
5. The multi-modal face recognition method based on multi-layer fusion of gray-level and depth information according to claim 4, characterized in that step A comprises the steps of:
A1. Characteristic region location: the human eye region is obtained with a human eye detector; the human eye detector is a hierarchical classifier H obtained by the following algorithm:
Given a training sample set S = {(x_1, y_1), ..., (x_m, y_m)} and a weak classifier space, where x_i ∈ χ is a sample vector, y_i = ±1 is the class label and m is the total number of samples, initialize the sample probability distribution D_1(i) = 1/m.
For t = 1, ..., T, the following operations are performed over the weak classifiers h:
the sample space χ is partitioned, obtaining X_1, X_2, ..., X_n;
the normalization factor Z is calculated;
the h_t that minimizes Z is selected in the weak classifier space;
the training sample probability distribution is updated to D_{t+1}(i) = D_t(i) · exp(−y_i h_t(x_i)) / Z_t, where Z_t is the normalization factor that makes D_{t+1} a probability distribution.
The final strong classifier is H(x) = sign(Σ_{t=1}^{T} h_t(x)).
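The boosting loop above resembles confidence-rated (Schapire-Singer style) real AdaBoost with domain partitioning; a minimal sketch under that assumption follows. The interfaces (`partitions` as block-index functions, per-block outputs `c`) are illustrative, not the patent's implementation:

```python
import numpy as np

def train_strong_classifier(X, y, partitions, T):
    """Confidence-rated boosting with domain partitioning (sketch).

    X : (m, d) sample array; y : (m,) labels in {-1, +1}.
    partitions : candidate weak classifiers, each a function mapping a
                 sample to a block index in 0..n-1.
    """
    m = len(y)
    D = np.full(m, 1.0 / m)                     # uniform initial distribution D_1
    eps = 1e-9
    chosen = []
    for _ in range(T):
        best = None
        for part in partitions:
            blocks = np.array([part(x) for x in X])
            n = int(blocks.max()) + 1
            # total weight of positive / negative samples in each block
            Wp = np.array([D[(blocks == j) & (y == 1)].sum() for j in range(n)])
            Wm = np.array([D[(blocks == j) & (y == -1)].sum() for j in range(n)])
            Z = 2.0 * np.sqrt(Wp * Wm).sum()    # normalization factor to minimize
            if best is None or Z < best[0]:
                c = 0.5 * np.log((Wp + eps) / (Wm + eps))   # per-block output
                best = (Z, part, c)
        _, part, c = best
        h = lambda x, part=part, c=c: c[part(x)]
        chosen.append(h)
        D = D * np.exp(-y * np.array([h(x) for x in X]))
        D = D / D.sum()                         # keep D_{t+1} a distribution
    return lambda x: np.sign(sum(h(x) for h in chosen))
```

Each round picks the partition with the smallest normalization factor Z, which is exactly the selection rule stated in the claim.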
A2. Registration is performed using the obtained human eye positions, and the LBP algorithm is applied to the data at the eye positions to obtain the LBP histogram feature; the LBP value is LBP = Σ_{p=0}^{P−1} s(g_p − g_c) · 2^p, where g_c is the gray value of the center pixel, g_p are the gray values of its P neighbors, and s(x) = 1 for x ≥ 0 and 0 otherwise.
This feature is fed into the gray-level image classifier to obtain the gray-level matching score.
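The basic 3×3 LBP operator and its histogram feature can be sketched as follows (a plain-NumPy illustration; the 8-neighbor window and 256-bin histogram are conventional choices, not specified by the claim):

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 LBP: each pixel's 8 neighbours are thresholded against the
    centre, s(g_p - g_c) in {0, 1}, and weighted by powers of two."""
    g = np.asarray(gray, dtype=np.int32)
    h, w = g.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # clockwise neighbour offsets starting at the top-left pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = g[1:-1, 1:-1]
    for p, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out += ((neighbour >= centre).astype(np.uint8) << p)
    return out

def lbp_histogram(gray, bins=256):
    """Normalized histogram of LBP codes, used as the texture feature."""
    codes = lbp_image(gray)
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()
```

On a uniform image every neighbour passes the threshold, so every code is 255 and the histogram mass sits in the last bin.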
6. The multi-modal face recognition method based on multi-layer fusion of gray-level and depth information according to claim 4, characterized in that step B comprises the steps of:
B1. characteristic region location: the position of the nose region of the face is determined;
B2. for three-dimensional data of different poses, after the reference region for registration is obtained, the data are registered with the ICP algorithm; after registration is completed, the Euclidean distance between the input data and the three-dimensional face model data in the registry is calculated;
B3. a depth image is obtained from the depth information; a filter is used to compensate and denoise the noise points in the mapped depth image, and finally an expression-robust region is selected to obtain the final three-dimensional face depth image;
B4. the visual dictionary histogram feature vector of the three-dimensional depth image is extracted: after a test face image is input and Gabor-filtered, every filter response vector is compared with all the primitive words of the visual sub-dictionary corresponding to its position and, by distance matching, mapped to the closest primitive; the visual dictionary histogram feature of the original depth image is thus extracted, and this feature is fed into the depth image classifier to obtain the matching score.
7. The multi-modal face recognition method based on multi-layer fusion of gray-level and depth information according to claim 4, characterized in that step C specifically comprises:
performing score normalization on the two-dimensional gray-level information and the three-dimensional depth information according to the min-max linear normalization principle, with the formula s′ = (s − min)/(max − min);
after score normalization, fusing the matching scores of the different modalities by the robust weighted-sum principle, with the formula f = Σ_k w_k · s′_k;
obtaining the matching score after multi-modal data fusion, the weights being acquired with the linear discriminant analysis algorithm: the within-class scatter matrix SW and the between-class scatter matrix SB are constructed and the objective function is maximized to obtain the LDA mapping matrix W, which gives the weights.
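A minimal sketch of the min-max normalization and weighted-sum fusion of step C; the default equal weights are an illustrative placeholder (in the patent the weights come from LDA):

```python
import numpy as np

def minmax_normalize(s):
    """Map raw matching scores linearly onto [0, 1]:
       s' = (s - min) / (max - min)."""
    s = np.asarray(s, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def fuse_scores(gray_scores, depth_scores, w_gray=0.5, w_depth=0.5):
    """Weighted-sum fusion of normalized 2-D gray and 3-D depth scores."""
    g = minmax_normalize(gray_scores)
    d = minmax_normalize(depth_scores)
    return w_gray * g + w_depth * d
```

Normalizing first puts the two modalities' scores on a common [0, 1] scale, so the weights express only the relative trust in each modality.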
8. The multi-modal face recognition method based on multi-layer fusion of gray-level and depth information according to claim 6, characterized in that step B1 specifically comprises:
Step 1: threshold determination: the threshold of the regional average negative effective energy density is determined, denoted thr;
Step 2: the data to be processed are selected using the depth information: using the depth information of the data, the face data within a certain depth range are extracted as the data to be processed;
Step 3: normal vector calculation: the normal vector information of the face data selected by depth is calculated;
Step 4: calculation of the regional average negative effective energy density: according to the definition of the average negative effective energy density, the average negative effective energy density of each connected region in the data to be processed is obtained, and the connected region with the largest density value is selected;
Step 5: determining whether the nose region has been found: when the density value of the current region is greater than the predefined threshold thr, this region is the nose region; otherwise return to step 1 and restart the loop.
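Step 3 (normal vector calculation) can be sketched with a PCA plane fit over local point neighborhoods; the average negative effective energy density metric itself is defined elsewhere in the patent family and is not reproduced here, and the neighborhood size `k` is an assumption:

```python
import numpy as np

def estimate_normals(points, k=10):
    """Unit normal per 3-D point via a PCA plane fit over its k nearest
    neighbours (brute-force neighbour search; adequate for a sketch)."""
    points = np.asarray(points, dtype=float)
    normals = np.zeros_like(points)
    for i, p in enumerate(points):
        d2 = ((points - p) ** 2).sum(axis=1)
        nbrs = points[np.argsort(d2)[:k]]       # k nearest points (incl. p)
        centred = nbrs - nbrs.mean(axis=0)
        cov = centred.T @ centred               # 3x3 scatter of the patch
        w, v = np.linalg.eigh(cov)              # eigenvalues in ascending order
        normals[i] = v[:, 0]                    # smallest-variance direction
    return normals
```

The eigenvector of the smallest eigenvalue is the direction of least spread, i.e. the surface normal of the fitted plane (up to sign).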
9. The multi-modal face recognition method based on multi-layer fusion of gray-level and depth information according to claim 6, characterized in that the main steps of the ICP algorithm comprise:
determining the matched data set pair: a reference data point set P is selected from the three-dimensional nose data of the reference template, and the nearest point-to-point distance is then used to select, in the input three-dimensional face, the data point set Q that matches the reference data;
calculating the rigid motion parameters, namely the rotation matrix R and the translation vector t: when the determinant of X is 1, R = X, and t = P̄ − R·Q̄, where P̄ and Q̄ denote the centroids of the two point sets;
judging from the error between the rigidly transformed data set RQ + t and the reference data set P whether the three-dimensional data sets are registered; after registration, the Euclidean distance between the input data and the three-dimensional face model data in the registry is calculated as d = (1/N) Σ_{i=1}^{N} ‖p_i − (R·q_i + t)‖, where P and Q are the feature point sets to be matched, each set containing N feature points.
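The rigid motion parameters of the ICP step can be sketched with the standard SVD (Kabsch) solution, which matches the determinant check above; the function names and the synthetic usage are illustrative:

```python
import numpy as np

def rigid_align(P, Q):
    """Rigid alignment with known correspondences: find R, t minimizing
    sum ||R q_i + t - p_i||^2 (SVD / Kabsch method).
    P, Q : (N, 3) matched point sets."""
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)     # centroids
    H = (Q - Qc).T @ (P - Pc)                   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    X = Vt.T @ U.T
    if np.linalg.det(X) < 0:                    # reflection guard: det must be +1
        Vt[-1] *= -1
        X = Vt.T @ U.T
    R = X
    t = Pc - R @ Qc                             # t = P_bar - R * Q_bar
    return R, t

def mean_error(P, Q, R, t):
    """Registration error: mean Euclidean distance over the N matches."""
    return np.linalg.norm((Q @ R.T + t) - P, axis=1).mean()
```

Applying `rigid_align` to a point set and a rotated-plus-translated copy of it recovers the transform, driving the mean error to numerical zero.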
10. The multi-modal face recognition method based on multi-layer fusion of gray-level and depth information according to claim 6, characterized in that step B4 specifically comprises:
segmenting the three-dimensional face depth image into several local texture regions;
mapping each Gabor filter response vector, according to its position, to the vocabulary of the corresponding visual sub-dictionary, and using the visual dictionary histogram vector built on this basis as the feature expression of the three-dimensional face;
using a nearest neighbor classifier to obtain the recognition score for three-dimensional face recognition, with the L1 distance selected as the distance metric.
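The visual dictionary quantization and L1 nearest-neighbor scoring of claim 10 can be sketched as follows (a simplified single-dictionary illustration; the per-position sub-dictionaries are omitted):

```python
import numpy as np

def visual_dict_histogram(responses, dictionary):
    """Quantize filter response vectors to their nearest dictionary word
    (Euclidean distance) and count occurrences; the normalized counts are
    the visual-dictionary histogram feature."""
    hist = np.zeros(len(dictionary))
    for r in responses:
        word = np.argmin(np.linalg.norm(dictionary - r, axis=1))
        hist[word] += 1
    return hist / hist.sum()

def nn_classify_l1(feature, gallery_feats, gallery_labels):
    """Nearest-neighbour classification with the L1 (city-block) distance."""
    d = np.abs(gallery_feats - feature).sum(axis=1)
    i = int(np.argmin(d))
    return gallery_labels[i], d[i]
```

Each probe image thus becomes a histogram over dictionary words, and the gallery identity with the smallest L1 distance to that histogram yields the recognition score.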
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510006214.4A CN104598878A (en) | 2015-01-07 | 2015-01-07 | Multi-modal face recognition device and method based on multi-layer fusion of gray level and depth information |
PCT/CN2015/074868 WO2016110005A1 (en) | 2015-01-07 | 2015-03-23 | Gray level and depth information based multi-layer fusion multi-modal face recognition device and method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510006214.4A CN104598878A (en) | 2015-01-07 | 2015-01-07 | Multi-modal face recognition device and method based on multi-layer fusion of gray level and depth information |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104598878A true CN104598878A (en) | 2015-05-06 |
Family
ID=53124652
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510006214.4A Pending CN104598878A (en) | 2015-01-07 | 2015-01-07 | Multi-modal face recognition device and method based on multi-layer fusion of gray level and depth information |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN104598878A (en) |
WO (1) | WO2016110005A1 (en) |
Cited By (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106156740A (en) * | 2016-07-05 | 2016-11-23 | 张宁 | Civil Aviation Airport terminal face system for rapidly identifying |
CN106326867A (en) * | 2016-08-26 | 2017-01-11 | 维沃移动通信有限公司 | Face recognition method and mobile terminal |
CN106407914A (en) * | 2016-08-31 | 2017-02-15 | 北京旷视科技有限公司 | Method for detecting human faces, device and remote teller machine system |
CN106469465A (en) * | 2016-08-31 | 2017-03-01 | 深圳市唯特视科技有限公司 | A kind of three-dimensional facial reconstruction method based on gray scale and depth information |
CN106650693A (en) * | 2016-12-30 | 2017-05-10 | 河北三川科技有限公司 | Multi-feature fusion identification algorithm used for human face comparison |
CN106909873A (en) * | 2016-06-21 | 2017-06-30 | 湖南拓视觉信息技术有限公司 | The method and apparatus of recognition of face |
CN107301377A (en) * | 2017-05-26 | 2017-10-27 | 浙江大学 | A kind of face based on depth camera and pedestrian's sensory perceptual system |
CN108197587A (en) * | 2018-01-18 | 2018-06-22 | 中科视拓(北京)科技有限公司 | A kind of method that multi-modal recognition of face is carried out by face depth prediction |
CN108960173A (en) * | 2018-07-12 | 2018-12-07 | 芜湖博高光电科技股份有限公司 | A kind of millimeter wave and camera merge face identification method |
CN109143260A (en) * | 2018-09-29 | 2019-01-04 | 北京理工大学 | A kind of three-dimensional solid-state face battle array laser radar face identification device and method |
CN109299639A (en) * | 2017-07-25 | 2019-02-01 | 虹软(杭州)多媒体信息技术有限公司 | A kind of method and apparatus for Expression Recognition |
CN110033291A (en) * | 2018-01-12 | 2019-07-19 | 北京京东金融科技控股有限公司 | Information object method for pushing, device and system |
CN110287776A (en) * | 2019-05-15 | 2019-09-27 | 北京邮电大学 | A kind of method, apparatus and computer readable storage medium of recognition of face |
CN110532907A (en) * | 2019-08-14 | 2019-12-03 | 中国科学院自动化研究所 | Based on face as the Chinese medicine human body constitution classification method with tongue picture bimodal feature extraction |
CN110781856A (en) * | 2019-11-04 | 2020-02-11 | 浙江大华技术股份有限公司 | Heterogeneous face recognition model training method, face recognition method and related device |
CN110967678A (en) * | 2019-12-20 | 2020-04-07 | 安徽博微长安电子有限公司 | Data fusion algorithm and system for multiband radar target identification |
CN112102409A (en) * | 2020-09-21 | 2020-12-18 | 杭州海康威视数字技术股份有限公司 | Target detection method, device, equipment and storage medium |
CN112215136A (en) * | 2020-10-10 | 2021-01-12 | 北京奇艺世纪科技有限公司 | Target person identification method and device, electronic equipment and storage medium |
CN112364825A (en) * | 2020-11-30 | 2021-02-12 | 支付宝(杭州)信息技术有限公司 | Method, apparatus and computer-readable storage medium for face recognition |
CN112767303A (en) * | 2020-08-12 | 2021-05-07 | 腾讯科技(深圳)有限公司 | Image detection method, device, equipment and computer readable storage medium |
CN113743321A (en) * | 2015-06-24 | 2021-12-03 | 三星电子株式会社 | Face recognition method and device |
CN113780222A (en) * | 2021-09-17 | 2021-12-10 | 深圳市繁维科技有限公司 | Face living body detection method and device, electronic equipment and readable storage medium |
CN113837105A (en) * | 2021-09-26 | 2021-12-24 | 北京的卢深视科技有限公司 | Face recognition method, face recognition system, electronic equipment and storage medium |
CN114093012A (en) * | 2022-01-18 | 2022-02-25 | 荣耀终端有限公司 | Face shielding detection method and detection device |
WO2022213623A1 (en) * | 2021-04-09 | 2022-10-13 | 上海商汤智能科技有限公司 | Image generation method and apparatus, three-dimensional facial model generation method and apparatus, electronic device and storage medium |
CN116386121A (en) * | 2023-05-30 | 2023-07-04 | 湖北华中电力科技开发有限责任公司 | Personnel identification method and device based on power grid safety production |
Families Citing this family (52)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107392137B (en) * | 2017-07-18 | 2020-09-08 | 艾普柯微电子(上海)有限公司 | Face recognition method and device |
CN109035388B (en) * | 2018-06-28 | 2023-12-05 | 合肥的卢深视科技有限公司 | Three-dimensional face model reconstruction method and device |
CN110163049B (en) * | 2018-07-18 | 2023-08-29 | 腾讯科技(深圳)有限公司 | Face attribute prediction method, device and storage medium |
TWI716938B (en) * | 2018-08-10 | 2021-01-21 | 宏達國際電子股份有限公司 | Facial expression modeling method, apparatus and non-transitory computer readable medium of the same |
CN111046708A (en) * | 2018-10-15 | 2020-04-21 | 天津大学青岛海洋技术研究院 | Human face gender discrimination algorithm based on Wasserstein distance |
CN109191611A (en) * | 2018-10-30 | 2019-01-11 | 惠州学院 | A kind of Time Attendance Device and method based on recognition of face |
CN111460864B (en) * | 2019-01-22 | 2023-10-17 | 天津大学青岛海洋技术研究院 | Animal disease detection method based on image recognition |
CN109902590B (en) * | 2019-01-30 | 2022-09-16 | 西安理工大学 | Pedestrian re-identification method for deep multi-view characteristic distance learning |
CN110084266B (en) * | 2019-03-11 | 2023-01-03 | 中国地质大学(武汉) | Dynamic emotion recognition method based on audio-visual feature deep fusion |
CN110046559A (en) * | 2019-03-28 | 2019-07-23 | 广东工业大学 | A kind of face identification method |
CN110033007B (en) * | 2019-04-19 | 2022-08-09 | 福州大学 | Pedestrian clothing attribute identification method based on depth attitude estimation and multi-feature fusion |
CN110046587B (en) * | 2019-04-22 | 2022-11-25 | 安徽理工大学 | Facial expression feature extraction method based on Gabor differential weight |
CN110232378B (en) * | 2019-05-30 | 2023-01-20 | 苏宁易购集团股份有限公司 | Image interest point detection method and system and readable storage medium |
CN110472479B (en) * | 2019-06-28 | 2022-11-22 | 广州中国科学院先进技术研究所 | Finger vein identification method based on SURF feature point extraction and local LBP coding |
CN110517185B (en) * | 2019-07-23 | 2024-02-09 | 北京达佳互联信息技术有限公司 | Image processing method, device, electronic equipment and storage medium |
CN110472582B (en) * | 2019-08-16 | 2023-07-21 | 腾讯科技(深圳)有限公司 | 3D face recognition method and device based on eye recognition and terminal |
CN110969089B (en) * | 2019-11-01 | 2023-08-18 | 北京交通大学 | Lightweight face recognition system and recognition method in noise environment |
CN111178129B (en) * | 2019-11-25 | 2023-07-14 | 浙江工商大学 | Multi-mode personnel identification method based on human face and gesture |
CN111046770B (en) * | 2019-12-05 | 2023-08-01 | 上海信联信息发展股份有限公司 | Automatic labeling method for photo archive characters |
CN111062324A (en) * | 2019-12-17 | 2020-04-24 | 上海眼控科技股份有限公司 | Face detection method and device, computer equipment and storage medium |
CN111160208B (en) * | 2019-12-24 | 2023-04-07 | 陕西西图数联科技有限公司 | Three-dimensional face super-resolution method based on multi-frame point cloud fusion and variable model |
CN111079684B (en) * | 2019-12-24 | 2023-04-07 | 陕西西图数联科技有限公司 | Three-dimensional face detection method based on rough-fine fitting |
CN111079700B (en) * | 2019-12-30 | 2023-04-07 | 陕西西图数联科技有限公司 | Three-dimensional face recognition method based on fusion of multiple data types |
CN111160292B (en) * | 2019-12-31 | 2023-09-22 | 上海易维视科技有限公司 | Human eye detection method |
CN111242992B (en) * | 2020-01-13 | 2023-05-23 | 洛阳理工学院 | Image registration method |
CN111476175B (en) * | 2020-04-09 | 2023-03-31 | 上海看看智能科技有限公司 | Adaptive topological graph matching method and system suitable for old people face comparison |
CN113643348B (en) * | 2020-04-23 | 2024-02-06 | 杭州海康威视数字技术股份有限公司 | Face attribute analysis method and device |
CN111488856B (en) * | 2020-04-28 | 2023-04-18 | 江西吉为科技有限公司 | Multimodal 2D and 3D facial expression recognition method based on orthogonal guide learning |
CN111753877B (en) * | 2020-05-19 | 2024-03-05 | 海克斯康制造智能技术(青岛)有限公司 | Product quality detection method based on deep neural network migration learning |
CN111582223A (en) * | 2020-05-19 | 2020-08-25 | 华普通用技术研究(广州)有限公司 | Three-dimensional face recognition method |
CN111695498B (en) * | 2020-06-10 | 2023-04-07 | 西南林业大学 | Wood identity detection method |
CN112001219B (en) * | 2020-06-19 | 2024-02-09 | 国家电网有限公司技术学院分公司 | Multi-angle multi-face recognition attendance checking method and system |
CN111950389B (en) * | 2020-07-22 | 2022-07-01 | 重庆邮电大学 | Depth binary feature facial expression recognition method based on lightweight network |
CN112069981A (en) * | 2020-09-03 | 2020-12-11 | Oppo广东移动通信有限公司 | Image classification method and device, electronic equipment and storage medium |
CN112215064A (en) * | 2020-09-03 | 2021-01-12 | 广州市标准化研究院 | Face recognition method and system for public safety precaution |
CN112016524B (en) * | 2020-09-25 | 2023-08-08 | 北京百度网讯科技有限公司 | Model training method, face recognition device, equipment and medium |
CN112364711B (en) * | 2020-10-20 | 2023-04-07 | 盛视科技股份有限公司 | 3D face recognition method, device and system |
CN112329683B (en) * | 2020-11-16 | 2024-01-26 | 常州大学 | Multi-channel convolutional neural network facial expression recognition method |
CN112528902B (en) * | 2020-12-17 | 2022-05-24 | 四川大学 | Video monitoring dynamic face recognition method and device based on 3D face model |
CN112652002B (en) * | 2020-12-25 | 2024-05-03 | 江苏集萃复合材料装备研究所有限公司 | Medical image registration method based on IDC algorithm |
CN112634269B (en) * | 2021-01-14 | 2023-12-26 | 华东交通大学 | Railway vehicle body detection method |
CN112766173B (en) * | 2021-01-21 | 2023-08-04 | 福建天泉教育科技有限公司 | Multi-mode emotion analysis method and system based on AI deep learning |
CN112883914B (en) * | 2021-03-19 | 2024-03-19 | 西安科技大学 | Multi-classifier combined mining robot idea sensing and decision making method |
CN113616184B (en) * | 2021-06-30 | 2023-10-24 | 北京师范大学 | Brain network modeling and individual prediction method based on multi-mode magnetic resonance image |
CN113449674B (en) * | 2021-07-12 | 2022-09-30 | 江苏商贸职业学院 | Pig face identification method and system |
CN113705457A (en) * | 2021-08-30 | 2021-11-26 | 支付宝(杭州)信息技术有限公司 | Service processing method and device based on human face |
CN114283471B (en) * | 2021-12-16 | 2024-04-02 | 武汉大学 | Multi-mode ordering optimization method for heterogeneous face image re-recognition |
CN114511910A (en) * | 2022-02-25 | 2022-05-17 | 支付宝(杭州)信息技术有限公司 | Face brushing payment intention identification method, device and equipment |
CN115147578B (en) * | 2022-06-30 | 2023-10-27 | 北京百度网讯科技有限公司 | Stylized three-dimensional face generation method and device, electronic equipment and storage medium |
CN115496975B (en) * | 2022-08-29 | 2023-08-18 | 锋睿领创(珠海)科技有限公司 | Auxiliary weighted data fusion method, device, equipment and storage medium |
CN116362933B (en) * | 2023-05-30 | 2023-09-26 | 南京农业大学 | Intelligent campus management method and system based on big data |
CN116403270B (en) * | 2023-06-07 | 2023-09-05 | 南昌航空大学 | Facial expression recognition method and system based on multi-feature fusion |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101339607A (en) * | 2008-08-15 | 2009-01-07 | 北京中星微电子有限公司 | Human face recognition method and system, human face recognition model training method and system |
US8285006B2 (en) * | 2007-04-13 | 2012-10-09 | Mira Electronics Co., Ltd. | Human face recognition and user interface system for digital camera and video camera |
CN103164695A (en) * | 2013-02-26 | 2013-06-19 | 中国农业大学 | Fruit identification method based on multi-source image information fusion |
CN103198605A (en) * | 2013-03-11 | 2013-07-10 | 成都百威讯科技有限责任公司 | Indoor emergent abnormal event alarm system |
CN103390164A (en) * | 2012-05-10 | 2013-11-13 | 南京理工大学 | Object detection method based on depth image and implementing device thereof |
CN103971122A (en) * | 2014-04-30 | 2014-08-06 | 深圳市唯特视科技有限公司 | Three-dimensional human face description method and device based on depth image |
CN104143080A (en) * | 2014-05-21 | 2014-11-12 | 深圳市唯特视科技有限公司 | Three-dimensional face recognition device and method based on three-dimensional point cloud |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101640077B1 (en) * | 2009-06-05 | 2016-07-15 | 삼성전자주식회사 | Apparatus and method for video sensor-based human activity and facial expression modeling and recognition |
EP2672426A3 (en) * | 2012-06-04 | 2014-06-04 | Sony Mobile Communications AB | Security by z-face detection |
CN103679151B (en) * | 2013-12-19 | 2016-08-17 | 成都品果科技有限公司 | A kind of face cluster method merging LBP, Gabor characteristic |
CN103810491B (en) * | 2014-02-19 | 2017-02-22 | 北京工业大学 | Head posture estimation interest point detection method fusing depth and gray scale image characteristic points |
2015
- 2015-01-07 CN CN201510006214.4A patent/CN104598878A/en active Pending
- 2015-03-23 WO PCT/CN2015/074868 patent/WO2016110005A1/en active Application Filing
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8285006B2 (en) * | 2007-04-13 | 2012-10-09 | Mira Electronics Co., Ltd. | Human face recognition and user interface system for digital camera and video camera |
CN101339607A (en) * | 2008-08-15 | 2009-01-07 | 北京中星微电子有限公司 | Human face recognition method and system, human face recognition model training method and system |
CN103390164A (en) * | 2012-05-10 | 2013-11-13 | 南京理工大学 | Object detection method based on depth image and implementing device thereof |
CN103164695A (en) * | 2013-02-26 | 2013-06-19 | 中国农业大学 | Fruit identification method based on multi-source image information fusion |
CN103198605A (en) * | 2013-03-11 | 2013-07-10 | 成都百威讯科技有限责任公司 | Indoor emergent abnormal event alarm system |
CN103971122A (en) * | 2014-04-30 | 2014-08-06 | 深圳市唯特视科技有限公司 | Three-dimensional human face description method and device based on depth image |
CN104143080A (en) * | 2014-05-21 | 2014-11-12 | 深圳市唯特视科技有限公司 | Three-dimensional face recognition device and method based on three-dimensional point cloud |
Cited By (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113743321A (en) * | 2015-06-24 | 2021-12-03 | 三星电子株式会社 | Face recognition method and device |
CN106909873A (en) * | 2016-06-21 | 2017-06-30 | 湖南拓视觉信息技术有限公司 | The method and apparatus of recognition of face |
CN106156740A (en) * | 2016-07-05 | 2016-11-23 | 张宁 | Civil Aviation Airport terminal face system for rapidly identifying |
CN106156740B (en) * | 2016-07-05 | 2019-06-28 | 张宁 | Civil Aviation Airport terminal face system for rapidly identifying |
CN106326867B (en) * | 2016-08-26 | 2019-06-07 | 维沃移动通信有限公司 | A kind of method and mobile terminal of recognition of face |
CN106326867A (en) * | 2016-08-26 | 2017-01-11 | 维沃移动通信有限公司 | Face recognition method and mobile terminal |
CN106407914A (en) * | 2016-08-31 | 2017-02-15 | 北京旷视科技有限公司 | Method for detecting human faces, device and remote teller machine system |
CN106469465A (en) * | 2016-08-31 | 2017-03-01 | 深圳市唯特视科技有限公司 | A kind of three-dimensional facial reconstruction method based on gray scale and depth information |
CN106407914B (en) * | 2016-08-31 | 2019-12-10 | 北京旷视科技有限公司 | Method and device for detecting human face and remote teller machine system |
WO2018040099A1 (en) * | 2016-08-31 | 2018-03-08 | 深圳市唯特视科技有限公司 | Three-dimensional face reconstruction method based on grayscale and depth information |
CN106650693A (en) * | 2016-12-30 | 2017-05-10 | 河北三川科技有限公司 | Multi-feature fusion identification algorithm used for human face comparison |
CN107301377A (en) * | 2017-05-26 | 2017-10-27 | 浙江大学 | A kind of face based on depth camera and pedestrian's sensory perceptual system |
CN109299639A (en) * | 2017-07-25 | 2019-02-01 | 虹软(杭州)多媒体信息技术有限公司 | A kind of method and apparatus for Expression Recognition |
CN110033291A (en) * | 2018-01-12 | 2019-07-19 | 北京京东金融科技控股有限公司 | Information object method for pushing, device and system |
CN108197587B (en) * | 2018-01-18 | 2021-08-03 | 中科视拓(北京)科技有限公司 | Method for performing multi-mode face recognition through face depth prediction |
CN108197587A (en) * | 2018-01-18 | 2018-06-22 | 中科视拓(北京)科技有限公司 | A kind of method that multi-modal recognition of face is carried out by face depth prediction |
CN108960173A (en) * | 2018-07-12 | 2018-12-07 | 芜湖博高光电科技股份有限公司 | A kind of millimeter wave and camera merge face identification method |
CN109143260A (en) * | 2018-09-29 | 2019-01-04 | 北京理工大学 | A kind of three-dimensional solid-state face battle array laser radar face identification device and method |
CN110287776A (en) * | 2019-05-15 | 2019-09-27 | 北京邮电大学 | A kind of method, apparatus and computer readable storage medium of recognition of face |
CN110532907A (en) * | 2019-08-14 | 2019-12-03 | 中国科学院自动化研究所 | Based on face as the Chinese medicine human body constitution classification method with tongue picture bimodal feature extraction |
CN110532907B (en) * | 2019-08-14 | 2022-01-21 | 中国科学院自动化研究所 | Traditional Chinese medicine human body constitution classification method based on face image and tongue image bimodal feature extraction |
CN110781856A (en) * | 2019-11-04 | 2020-02-11 | 浙江大华技术股份有限公司 | Heterogeneous face recognition model training method, face recognition method and related device |
CN110781856B (en) * | 2019-11-04 | 2023-12-19 | 浙江大华技术股份有限公司 | Heterogeneous face recognition model training method, face recognition method and related device |
CN110967678A (en) * | 2019-12-20 | 2020-04-07 | 安徽博微长安电子有限公司 | Data fusion algorithm and system for multiband radar target identification |
CN112767303A (en) * | 2020-08-12 | 2021-05-07 | 腾讯科技(深圳)有限公司 | Image detection method, device, equipment and computer readable storage medium |
CN112767303B (en) * | 2020-08-12 | 2023-11-28 | 腾讯科技(深圳)有限公司 | Image detection method, device, equipment and computer readable storage medium |
CN112102409B (en) * | 2020-09-21 | 2023-09-01 | 杭州海康威视数字技术股份有限公司 | Target detection method, device, equipment and storage medium |
CN112102409A (en) * | 2020-09-21 | 2020-12-18 | 杭州海康威视数字技术股份有限公司 | Target detection method, device, equipment and storage medium |
CN112215136A (en) * | 2020-10-10 | 2021-01-12 | 北京奇艺世纪科技有限公司 | Target person identification method and device, electronic equipment and storage medium |
CN112215136B (en) * | 2020-10-10 | 2023-09-05 | 北京奇艺世纪科技有限公司 | Target person identification method and device, electronic equipment and storage medium |
CN112364825A (en) * | 2020-11-30 | 2021-02-12 | 支付宝(杭州)信息技术有限公司 | Method, apparatus and computer-readable storage medium for face recognition |
WO2022213623A1 (en) * | 2021-04-09 | 2022-10-13 | 上海商汤智能科技有限公司 | Image generation method and apparatus, three-dimensional facial model generation method and apparatus, electronic device and storage medium |
CN113780222A (en) * | 2021-09-17 | 2021-12-10 | 深圳市繁维科技有限公司 | Face living body detection method and device, electronic equipment and readable storage medium |
CN113780222B (en) * | 2021-09-17 | 2024-02-27 | 深圳市繁维科技有限公司 | Face living body detection method and device, electronic equipment and readable storage medium |
CN113837105A (en) * | 2021-09-26 | 2021-12-24 | 北京的卢深视科技有限公司 | Face recognition method, face recognition system, electronic equipment and storage medium |
CN114093012B (en) * | 2022-01-18 | 2022-06-10 | 荣耀终端有限公司 | Face shielding detection method and detection device |
CN114093012A (en) * | 2022-01-18 | 2022-02-25 | 荣耀终端有限公司 | Face shielding detection method and detection device |
CN116386121B (en) * | 2023-05-30 | 2023-08-11 | 湖北华中电力科技开发有限责任公司 | Personnel identification method and device based on power grid safety production |
CN116386121A (en) * | 2023-05-30 | 2023-07-04 | 湖北华中电力科技开发有限责任公司 | Personnel identification method and device based on power grid safety production |
Also Published As
Publication number | Publication date |
---|---|
WO2016110005A1 (en) | 2016-07-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104598878A (en) | Multi-modal face recognition device and method based on multi-layer fusion of gray level and depth information | |
CN104778441A (en) | Multi-mode face identification device and method fusing grey information and depth information | |
CN105956582B (en) | A kind of face identification system based on three-dimensional data | |
CN104008370B (en) | A kind of video face identification method | |
WO2018072233A1 (en) | Method and system for vehicle tag detection and recognition based on selective search algorithm | |
CN104504410A (en) | Three-dimensional face recognition device and method based on three-dimensional point cloud | |
CN103942577B (en) | Based on the personal identification method for establishing sample database and composite character certainly in video monitoring | |
CN105869178B (en) | A kind of complex target dynamic scene non-formaldehyde finishing method based on the convex optimization of Multiscale combination feature | |
CN106469465A (en) | A kind of three-dimensional facial reconstruction method based on gray scale and depth information | |
CN104978550A (en) | Face recognition method and system based on large-scale face database | |
CN104850825A (en) | Facial image face score calculating method based on convolutional neural network | |
CN103632132A (en) | Face detection and recognition method based on skin color segmentation and template matching | |
CN104298995B (en) | Three-dimensional face identifying device and method based on three-dimensional point cloud | |
CN105894047A (en) | Human face classification system based on three-dimensional data | |
CN106529504B (en) | A kind of bimodal video feeling recognition methods of compound space-time characteristic | |
CN101526997A (en) | Embedded infrared face image identifying method and identifying device | |
Pandey et al. | Hand gesture recognition for sign language recognition: A review | |
CN103155003A (en) | Posture estimation device and posture estimation method | |
CN104091155A (en) | Rapid iris positioning method with illumination robustness | |
CN104102904B (en) | A kind of static gesture identification method | |
CN103996052A (en) | Three-dimensional face gender classification device and method based on three-dimensional point cloud | |
CN104143080A (en) | Three-dimensional face recognition device and method based on three-dimensional point cloud | |
CN105138968A (en) | Face authentication method and device | |
CN104537353A (en) | Three-dimensional face age classifying device and method based on three-dimensional point cloud | |
CN104573722A (en) | Three-dimensional face race classifying device and method based on three-dimensional point cloud |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20150506 |