CN104573681A - Face recognition method - Google Patents

Face recognition method

Info

Publication number
CN104573681A
CN104573681A
Authority
CN
China
Prior art keywords
feature
image
facial image
queried
eigenvector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510072798.5A
Other languages
Chinese (zh)
Inventor
姚远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHENGDU GUODOU DIGITAL ENTERTAINMENT Co Ltd
Original Assignee
CHENGDU GUODOU DIGITAL ENTERTAINMENT Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHENGDU GUODOU DIGITAL ENTERTAINMENT Co Ltd filed Critical CHENGDU GUODOU DIGITAL ENTERTAINMENT Co Ltd
Priority to CN201510072798.5A priority Critical patent/CN104573681A/en
Publication of CN104573681A publication Critical patent/CN104573681A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a face recognition method. The method comprises the following steps: the feature vector of each image in a face image database is obtained by a preset feature extraction method to form a corresponding face image feature library, and an index is established; when a queried face image is submitted by a user, the feature vector of that image is acquired in real time by the same preset feature extraction method, the neighboring features of the queried face image are found in the face image feature library by a nearest-neighbor search method, and the similarity between the images is calculated; according to the similarity, the group of most similar images in the face image database is returned in descending order as the image recognition result. The method effectively reduces the computational complexity of feature extraction and feature matching in image recognition and increases recognition efficiency.

Description

Face recognition method
Technical field
The present invention relates to image processing, and in particular to a face image recognition method.
Background art
Content-based image recognition extracts valuable information from massive amounts of digital image data: starting from the visual features of the query image, it finds the images in a face image database that are most similar to the queried person. Its two most critical technologies are face image feature extraction and face feature similarity matching. Traditional image feature extraction methods detect image feature points and then describe them; the number of features extracted from a single image is usually limited and varies considerably with changes in image gray level, and in images with very low contrast no features may be detected at all. A further drawback is that the computational complexity of such feature point detection methods is generally high. Methods that cover the image content more completely, on the other hand, produce an excessive amount of feature data. General feature similarity matching methods based on feature-space search indexes can only handle feature data of low dimensionality: their complexity rises exponentially as the dimensionality increases and their performance drops sharply, while linear scanning, although not limited by feature dimensionality, is too time-consuming.
No effective solution to the above problems in the related art has yet been proposed.
Summary of the invention
To solve the above problems of the prior art, the present invention proposes a face recognition method, comprising:
for each image in a face image database, obtaining an image feature vector by a preset feature extraction method, forming a corresponding face image feature library, and building an index on that basis;
when a user submits a queried face image, applying the preset feature extraction method to acquire the image feature vector in real time, then using nearest-neighbor search to find, in the face image feature library, all features neighboring the features of the queried face image, and computing the similarity between the queried face image and the images in the face image database;
returning the most similar group of images in the face image database, in descending order of similarity, as the image recognition result.
Preferably, the image feature extraction method comprises feature detection and feature description, wherein feature detection comprises: detecting scale-space extrema and obtaining feature points by random sampling; and screening and locating the position and scale of the feature points;
during the sampling, pixels in edge regions are not sampled, and only points within the 1/10 to 9/10 range of the image rows and columns are sampled;
the feature description comprises assigning a dominant orientation to each feature point, and directly describing the randomly sampled feature points to generate feature description vectors that contain only position and orientation information.
Preferably, the step of building the index further comprises:
(1) mapping the high-dimensional image feature vector p = (x_1, x_2, ..., x_d) into Hamming space, converting it into the binary string p' = U_c(x_1)U_c(x_2)...U_c(x_d), where U_c(x_i) (i ∈ [1, d]) denotes the binary string consisting of x_i ones followed by c − x_i zeros, c is the maximum value of any element x_i of the feature vector p, and d is the feature vector dimension;
(2) randomly selecting k bits (k ∈ (0, c × d)) from the binary string p' to form l hash functions g_1(p), g_2(p), ..., g_l(p), each function corresponding to one hash table;
(3) using the functions of step (2), mapping the feature vectors into the corresponding hash tables.
Preferably, using nearest-neighbor search to find, in the face image feature library, all features neighboring the features of the queried face image further comprises:
(1) mapping the features q_1, q_2, ..., q_k of the queried face image into Hamming space, converting each into a binary string p' = U_c(x_1)U_c(x_2)...U_c(x_d), where U_c(x_i) (i ∈ [1, k]) denotes the binary string consisting of x_i ones followed by c − x_i zeros, c is the maximum value of any element x_i of the queried face image feature vector, and k is the feature vector dimension, and mapping them into the corresponding hash tables with the l hash functions g_1(p), g_2(p), ..., g_l(p);
(2) extracting all hash table entries in the buckets of g_i(q_j) (i ∈ (0, l], j ∈ (0, k]), retaining the entries whose distance to the queried face image feature vector is within a threshold, and looking up the corresponding features via the feature library index as candidate neighboring features;
(3) sorting the obtained candidate neighboring features in ascending order of Hamming distance to the query feature and returning the first K features as the neighbors of the queried face image features, where K is a preset constant.
Compared with the prior art, the present invention has the following advantages:
The present invention proposes a face recognition method that effectively reduces the computational complexity of feature extraction and feature matching in image recognition and improves recognition efficiency.
Brief description of the drawings
Fig. 1 is a flow chart of the face recognition method according to an embodiment of the present invention.
Detailed description of the embodiments
A detailed description of one or more embodiments of the invention is provided below, together with the accompanying drawing illustrating the principles of the invention. The invention is described in connection with such embodiments, but is not limited to any particular embodiment. The scope of the invention is defined only by the claims, and the invention covers many alternatives, modifications and equivalents. Numerous specific details are set forth in the following description to provide a thorough understanding of the invention; these details are provided for exemplary purposes, and the invention may also be practiced according to the claims without some or all of them.
Fig. 1 is the flow chart of the face recognition method according to an embodiment of the present invention. The face image recognition method proposed by the invention can be divided into two parts: an offline process and an online process.
1. The offline process: for each image in the face image database, a feature extraction method is applied in advance to obtain the image feature vector through feature detection and feature description, forming a corresponding face image feature library; an index is then built on this basis to enable fast search for query features. 2. The online process: for the queried face image submitted by the user, the same feature extraction method is applied to acquire the image feature vector in real time; then, in the feature matching stage, all features neighboring the features of the queried face image are found in the feature library by a suitable nearest-neighbor search method, the similarity between the queried face image and the images in the face image database is computed, and the most similar group of images in the database is returned in descending order of similarity as the image recognition result.
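For concreteness, here is a minimal end-to-end Python sketch of this offline/online split, assuming NumPy. The function name recognize, the dictionary-based feature library, and the brute-force neighbor search (standing in for the hashing index described later) are illustrative assumptions rather than the patent's implementation; the vote-based ranking mirrors the role the neighboring features play in the retrieval result described below.

    import numpy as np
    from collections import Counter

    def recognize(query_features, library_features, top_n=5, k_neighbors=10):
        """Offline part: library_features maps image_id -> array of feature vectors
        extracted in advance.  Online part: each query feature votes for the images
        that own its nearest library features; images are ranked by vote count."""
        feats, owners = [], []
        for image_id, vectors in library_features.items():
            for v in vectors:
                feats.append(v)
                owners.append(image_id)
        feats = np.asarray(feats)
        votes = Counter()
        for q in query_features:
            dists = np.linalg.norm(feats - q, axis=1)      # brute-force stand-in
            for idx in np.argsort(dists)[:k_neighbors]:
                votes[owners[idx]] += 1
        return votes.most_common(top_n)                    # most similar images first

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        library = {f"img_{i}": rng.normal(size=(30, 128)) for i in range(20)}
        query = library["img_7"] + rng.normal(scale=0.05, size=(30, 128))
        print(recognize(query, library))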
Image feature extraction comprises two parts: feature detection and feature description. Feature detection comprises detecting scale-space extrema as candidate feature points, then screening them and precisely locating the position and scale of the feature points; feature description comprises assigning a dominant orientation to each feature point and generating the feature description vectors. The present invention removes a large amount of computation from the feature detection stage by randomly sampling image pixels as feature points and describing them directly to obtain the image feature vectors, so the method effectively reduces computational complexity and improves computational efficiency.
First, the image feature points are obtained by random sampling. Because the positions of the randomly sampled feature points are random, the number of sample points should be set so that the sample positions are distributed relatively uniformly, i.e., feature information from as many different regions of the image as possible is sampled, and the samples do not cluster so tightly that the resulting feature points fail to reflect the visual characteristics of the whole image. Furthermore, since points in the edge regions of an image generally do not reflect its content well, the invention avoids sampling pixels in edge regions and preferably samples only points within the 1/10 to 9/10 range of the image rows and columns. Suppose rows and cols are the numbers of image rows and columns respectively, and the number of samples is m × n (for example, for a 640 × 480 image, take m = rows/10 and n = cols/10). First, m numbers are randomly sampled in the row range 1/10 rows to 9/10 rows to obtain the row indices r_1, r_2, r_3, ..., r_m; then, for each row index r_i, n numbers are randomly sampled in the column range 1/10 cols to 9/10 cols to obtain the column indices c_j. The sample point coordinates are (r_i, c_j), where i = 1, 2, ..., m and j = 1, 2, ..., n.
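As an illustration, a short Python sketch of this sampling scheme, assuming NumPy; the function name sample_feature_points and the defaults m = rows/10, n = cols/10 follow the example above, while everything else is an assumption rather than the patent's exact implementation.

    import numpy as np

    def sample_feature_points(rows, cols, m=None, n=None, seed=None):
        """Randomly sample m*n feature point coordinates while skipping the outer
        1/10 border of the image, as described above."""
        rng = np.random.default_rng(seed)
        m = m if m is not None else rows // 10     # e.g. m = 48 for a 480-row image
        n = n if n is not None else cols // 10     # e.g. n = 64 for a 640-column image
        points = []
        # m row indices drawn from [rows/10, 9*rows/10)
        for ri in rng.integers(rows // 10, 9 * rows // 10, size=m):
            # for each sampled row, n column indices drawn from [cols/10, 9*cols/10)
            for cj in rng.integers(cols // 10, 9 * cols // 10, size=n):
                points.append((int(ri), int(cj)))
        return points                              # list of (row, column) coordinates

    if __name__ == "__main__":
        pts = sample_feature_points(480, 640, seed=0)
        print(len(pts), pts[:3])                   # 3072 sampled positions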
After the random feature point sampling is completed, the sampled points are described. The purpose of feature description is to express local image information accurately and to give this information reliable, characteristic properties. The present invention describes the randomly sampled feature points directly, so that, unlike classical methods, the resulting feature vectors contain only position and orientation information. The feature vector is computed as follows:
(1) Take the 4 × 4 neighborhood around a feature point and accumulate an orientation histogram from the gradients of the pixels in this neighborhood: each 10° of orientation forms one bin, the direction represented by a bin is the pixel gradient orientation, and the bin height represents the gradient magnitude. After smoothing the orientation histogram twice, the main peak is taken as the dominant orientation of the feature point. Using the gradient orientation distribution of the pixels in the neighborhood, an orientation can be assigned to each feature point, which makes the descriptor invariant to image rotation. Let g(x, y) be the gray value at the feature point (x, y); the image gradient orientation θ(x, y) and magnitude M(x, y) are computed as:
θ(x, y) = arctan[(g(x, y+1) − g(x, y−1)) / (g(x+1, y) − g(x−1, y))]
M(x, y) = [(g(x+1, y) − g(x−1, y))² + (g(x, y+1) − g(x, y−1))²]^(1/2)
(2) To ensure the rotational invariance of the feature vector, the 16 × 16 neighborhood around the feature point is rotated to the dominant orientation of the feature point. If the coordinates of a sample point before rotation are (x, y), the rotated coordinates (x', y') are computed as:
x' = x cos θ − y sin θ
y' = x sin θ + y cos θ
where θ is the feature point gradient orientation θ(x, y) computed above.
(3) Within the 16 × 16 neighborhood window around the feature point, each pixel is assigned by weighted averaging, according to its coordinates, to a 4 × 4 grid of cells, each cell forming a seed point.
(4) The orientation histogram of each cell is then computed as in the procedure above, except that each cell histogram now divides 0°–360° into 8 orientation bins of 45° each, so that every seed point carries the gradient strength of 8 orientation bins, finally yielding a 128-dimensional feature description vector. This vector is further normalized to remove the influence of illumination changes.
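A compact Python sketch of steps (1) to (4), assuming NumPy. The helper names (gradient, dominant_orientation, describe) are invented for illustration, and nearest-pixel rotation plus hard assignment of pixels to cells replace the weighted averaging of step (3), so this is a simplified approximation of the descriptor rather than the patent's exact computation.

    import numpy as np

    def gradient(g, x, y):
        """Gradient orientation (radians) and magnitude at pixel (x, y) of gray image g,
        using the finite differences of the formulas above."""
        dx = float(g[x + 1, y]) - float(g[x - 1, y])
        dy = float(g[x, y + 1]) - float(g[x, y - 1])
        return np.arctan2(dy, dx), np.hypot(dx, dy)

    def dominant_orientation(g, x, y):
        """Main peak of a 36-bin (10 degree) orientation histogram over the 4 x 4
        neighborhood, after two smoothing passes, as in step (1)."""
        hist = np.zeros(36)
        for i in range(-2, 2):
            for j in range(-2, 2):
                theta, mag = gradient(g, x + i, y + j)
                hist[int(np.degrees(theta) % 360) // 10 % 36] += mag
        for _ in range(2):                                   # smooth the histogram twice
            hist = np.convolve(np.pad(hist, 1, mode="wrap"), [1/3, 1/3, 1/3], "valid")
        return np.deg2rad(np.argmax(hist) * 10)

    def describe(g, x, y):
        """128-D descriptor: rotate the 16 x 16 neighborhood to the dominant orientation,
        accumulate 8 orientation bins in each cell of a 4 x 4 grid, then normalize."""
        theta0 = dominant_orientation(g, x, y)
        cos_t, sin_t = np.cos(theta0), np.sin(theta0)
        desc = np.zeros((4, 4, 8))
        for i in range(-8, 8):
            for j in range(-8, 8):
                xr = i * cos_t - j * sin_t                   # rotate the sample offset
                yr = i * sin_t + j * cos_t
                xi, yi = x + int(round(xr)), y + int(round(yr))
                if not (1 <= xi < g.shape[0] - 1 and 1 <= yi < g.shape[1] - 1):
                    continue
                theta, mag = gradient(g, xi, yi)
                cell = ((i + 8) // 4, (j + 8) // 4)          # which of the 4 x 4 cells
                bin_ = int(np.degrees(theta - theta0) % 360) // 45 % 8
                desc[cell[0], cell[1], bin_] += mag
        v = desc.ravel()
        return v / (np.linalg.norm(v) + 1e-12)               # normalize against illumination

    if __name__ == "__main__":
        img = np.random.default_rng(0).integers(0, 256, (480, 640)).astype(float)
        print(describe(img, 100, 200).shape)                 # (128,)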
Because the positions of the sample points in the present invention are random, the 128-dimensional vectors obtained for some sample points after feature description characterize the image features only weakly and may even reduce retrieval accuracy. The vectors obtained after feature description therefore need to be screened, as follows:
Suppose that n (n ∈ [0, 128]) of the elements of the vector (a_0, a_1, ..., a_127) obtained for a feature point D after description are equal to 0. If n > K (K a positive integer), D is discarded. The value of K can be determined by comparative experiments on the images to be queried.
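A minimal sketch of this screening rule in Python, assuming NumPy; the cutoff K = 40 is purely illustrative, since the patent leaves K to be determined experimentally.

    import numpy as np

    def screen_descriptors(descriptors, K=40):
        """Keep only the 128-dimensional descriptors that contain at most K zero
        elements; K = 40 is an illustrative cutoff, to be tuned experimentally."""
        descriptors = np.asarray(descriptors)
        zero_counts = (descriptors == 0).sum(axis=1)
        return descriptors[zero_counts <= K]

    if __name__ == "__main__":
        d = np.zeros((3, 128)); d[0, :100] = 1.0; d[1, :20] = 1.0
        print(screen_descriptors(d).shape)   # (1, 128): only the first row survives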
For a set of data points, the nearest-neighbor search method of the present invention uses a group of hash functions subject to certain constraints to build multiple hash tables, so that under a given similarity measure, similar points collide with relatively high probability while dissimilar points collide with relatively low probability.
The present invention applies this nearest-neighbor search method to image recognition: local image features are mapped to points in Hamming space and hashed so that points with smaller Hamming distance collide with higher probability. In face image recognition, the nearest-neighbor search method is used for image feature similarity matching, and specifically comprises building the index of the images in the face image database and searching for the neighbors of the queried face image features.
The steps by which the present invention applies the nearest-neighbor search method to build the index of the image feature library are:
(1) Map the high-dimensional image feature vector p = (x_1, x_2, ..., x_d) into Hamming space, converting it into the binary string p' = U_c(x_1)U_c(x_2)...U_c(x_d), where U_c(x_i) (i ∈ [1, d]) denotes the binary string consisting of x_i ones followed by c − x_i zeros, c is the maximum value of any element x_i of the feature vector p, and d is the vector dimension.
(2) Randomly select k bits (k ∈ (0, c × d)) from the binary string p' to form l functions g_1(p), g_2(p), ..., g_l(p), each function corresponding to one hash table.
(3) Using the hash functions of step (2), map the feature vectors into the corresponding hash tables.
The steps of applying the nearest-neighbor search method to search for the neighbors of the queried face image features are:
(1) For the features q_1, q_2, ..., q_k of the queried face image, map each feature vector into Hamming space as in step (1) of the index-building process, converting it into a binary string, and map it into the corresponding hash tables with the same l hash functions g_1(p), g_2(p), ..., g_l(p) used when building the index.
(2) Extract all hash table entries in the buckets of g_i(q_j) (i ∈ (0, l], j ∈ (0, k]), retain the entries whose distance to the queried face image feature vector is within a threshold, and look up the corresponding features via the feature library index as candidate neighboring features.
(3) Sort the obtained candidate neighboring features in ascending order of Hamming distance to the query feature and return the first K features as the neighbors of the queried face image features; these neighbors subsequently vote for and rank the images in the face image database, which yields the image retrieval result.
Here, the nearest-neighbor search method converts the distance problem in the original feature space into a distance metric problem in Hamming space, which improves space utilization, and converts the indexing and query operations on large-scale data into the evaluation of a group of hash functions, which greatly shortens the similarity computation time. The nearest-neighbor search method therefore has good time efficiency and maintains good performance in high-dimensional data spaces.
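A self-contained Python sketch of this hashing scheme, covering both the index-building steps (1) to (3) and the query steps (1) to (3) above. It assumes the feature vectors have been quantized to small non-negative integers bounded by c (so that the unary embedding is well defined); the names HammingLSHIndex and unary_embed and all parameter values in the usage example are illustrative choices, not the patent's own.

    import numpy as np
    from collections import defaultdict

    def unary_embed(p, c):
        """U_c embedding: element x_i -> x_i ones followed by c - x_i zeros, so that
        the Hamming distance between embeddings equals the L1 distance between vectors."""
        bits = np.zeros(c * len(p), dtype=np.uint8)
        for i, x in enumerate(p):
            bits[i * c: i * c + int(x)] = 1
        return bits

    class HammingLSHIndex:
        """l hash tables, each keyed on k randomly chosen bits of the unary embedding."""
        def __init__(self, d, c, k, l, seed=0):
            rng = np.random.default_rng(seed)
            self.c = c
            self.bit_sets = [rng.choice(c * d, size=k, replace=False) for _ in range(l)]
            self.tables = [defaultdict(list) for _ in range(l)]
            self.embedded = []                       # feature id -> embedded vector

        def add(self, p):
            fid = len(self.embedded)
            e = unary_embed(p, self.c)
            self.embedded.append(e)
            for bits, table in zip(self.bit_sets, self.tables):
                table[e[bits].tobytes()].append(fid)   # bucket key = selected bits
            return fid

        def query(self, q, K=10, radius=None):
            """Return the K stored features closest in Hamming distance to query q,
            gathered from the buckets q falls into; radius optionally prunes candidates."""
            e = unary_embed(q, self.c)
            candidates = set()
            for bits, table in zip(self.bit_sets, self.tables):
                candidates.update(table.get(e[bits].tobytes(), []))
            scored = []
            for fid in candidates:
                dist = int(np.count_nonzero(self.embedded[fid] != e))
                if radius is None or dist <= radius:
                    scored.append((dist, fid))
            scored.sort()
            return scored[:K]                        # list of (Hamming distance, feature id)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        index = HammingLSHIndex(d=16, c=15, k=24, l=6)
        library = rng.integers(0, 16, size=(200, 16))
        for p in library:
            index.add(p)
        q = np.clip(library[42] + rng.integers(-1, 2, size=16), 0, 15)
        print(index.query(q, K=5))

Because the Hamming distance between two unary embeddings equals the L1 distance between the original vectors, collisions in these tables favour vectors that are close in the original feature space, which is exactly the property the index relies on.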
In the feature matching stage, the present invention adopts the following matching rule: for a feature point of the queried face image, find the two feature points of a database image that are nearest to it in Euclidean distance; if the ratio of the second-nearest distance to the nearest distance is greater than a predetermined threshold, the queried feature point and its nearest neighbor are taken as a pair of matched points.
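A brief sketch of this matching rule in Python, assuming NumPy; the function name ratio_test_matches and the default threshold of 1.5 (matching the initial η1 below) are illustrative assumptions.

    import numpy as np

    def ratio_test_matches(query_descriptors, db_descriptors, ratio_threshold=1.5):
        """For each query descriptor, find its two nearest database descriptors by
        Euclidean distance and accept (query index, nearest db index) as a match
        when second_nearest_distance / nearest_distance exceeds the threshold."""
        db = np.asarray(db_descriptors)
        matches = []
        for qi, q in enumerate(np.asarray(query_descriptors)):
            d = np.linalg.norm(db - q, axis=1)
            nearest, second = np.argsort(d)[:2]
            if d[nearest] > 0 and d[second] / d[nearest] > ratio_threshold:
                matches.append((qi, int(nearest)))
        return matches

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        db = rng.normal(size=(50, 128))
        queries = db[:5] + rng.normal(scale=0.01, size=(5, 128))
        print(ratio_test_matches(queries, db))     # expect [(0, 0), (1, 1), ...]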
The present invention determines the strict threshold and the loose threshold for the matched images adaptively, in an iterative, variable-step manner; in subsequent processing, a constraint model is built from the exact matching result under the strict threshold and used to delete the false matches from the coarse matching result under the loose threshold. During the iteration, the strict-threshold match count q_B is required to satisfy 2 ≤ q_B ≤ 5, which guarantees the accuracy of the matches and of the constraint model subsequently built on them. The adaptive double-threshold determination procedure is as follows:
(1) Threshold initialization. Set the initial thresholds η1 = 1.5 and η2 = 8; the loop counters i1 = 0 and i2 = 0; the loop limits i1max = 10 and i2max = 20; and the required minimum number of matches Q = 5.
(2) Extract the feature vectors from the images to be matched.
(3) Perform feature matching with threshold η1 to obtain the current matching point set A = {(a_i, a'_i), i = 1, 2, ..., q_A}, where q_A is the match count for the current η1, and set i1 ← i1 + 1.
(4) If the current match count q_A < Q and i1 < i1max, set η1 ← η1 + 0.15 and return to step (3); if the match count is greater than Q, or i1 > i1max, go to step (5).
(5) Perform feature matching with threshold η2 to obtain the current matching point set B = {(b_j, b'_j), j = 1, 2, ..., q_B}, where q_B is the match count for the current η2, and set i2 ← i2 + 1.
(6) If the current match count q_B < 2 and i2 < i2max, set η2 ← η2 + 0.02 and return to step (5); if the match count is greater than 5 and i2 < i2max, set η2 ← η2 − 0.01 and return to step (5); otherwise, go to step (7).
(7) Threshold selection is complete, yielding the loose threshold η1 with its corresponding coarse matching point set A, and the strict threshold η2 with its corresponding exact matching point set B.
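A Python sketch of this procedure. It assumes a caller-supplied match_fn(threshold) that returns the matched pairs for a given threshold (for instance, a closure over the ratio_test_matches sketch above), and it reads the second adjustment in step (6) as applying when the strict-threshold match count exceeds 5, consistent with the target range 2 ≤ q_B ≤ 5.

    def adaptive_double_threshold(match_fn, eta1=1.5, eta2=8.0,
                                  i1_max=10, i2_max=20, Q=5):
        """Iteratively adjust the loose threshold eta1 and the strict threshold eta2.
        match_fn(threshold) must return the list of matched point pairs for that threshold."""
        # loose threshold: raise eta1 until at least Q matches are found (or i1_max is hit)
        i1 = 0
        A = match_fn(eta1); i1 += 1
        while len(A) < Q and i1 < i1_max:
            eta1 += 0.15
            A = match_fn(eta1); i1 += 1
        # strict threshold: steer the match count q_B into [2, 5] (or stop at i2_max)
        i2 = 0
        B = match_fn(eta2); i2 += 1
        while i2 < i2_max:
            if len(B) < 2:
                eta2 += 0.02
            elif len(B) > 5:
                eta2 -= 0.01
            else:
                break
            B = match_fn(eta2); i2 += 1
        return eta1, A, eta2, B

In practice match_fn could be, for example, lambda t: ratio_test_matches(query_descriptors, db_descriptors, t), so that A and B are the coarse and exact match sets used by the geometric filtering below.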
According to the above threshold selection rule, the matches obtained under the strict threshold are sparse but accurate. On this basis, a constraint model of the geometric transformation between the matched images (e.g., segment length and angle) can be established and used to filter out the false matches in the loose-threshold matching result.
Let (b_1, b'_1) and (b_qB, b'_qB) be two pairs of exact matches and (a_i, a'_i) be any pair in the coarse matching set. From (b_1, b'_1) and (b_qB, b'_qB), constraints between the matched images can be established, namely the length-transformation constraint
L_C = ‖b_qB − b_1‖ / ‖b'_qB − b'_1‖
and the angle-change constraint
V_C = ∠b_1 b_qB x − ∠b'_1 b'_qB x
For any pair (a_i, a'_i) in the coarse matching set, if its corresponding length and angle satisfy the above constraints, i.e., when
r_L = min(L_C, L) / max(L_C, L) > 0.85
r_V = min(V_C, V) / max(V_C, V) > 0.85
hold, the current match (a_i, a'_i) is retained, where L = ‖a_i − b_1‖ / ‖a'_i − b'_1‖ and V = ∠b_1 a_i x − ∠b'_1 a'_i x; otherwise the match is deleted from the coarse matching point set A.
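A Python sketch of this filtering step, assuming NumPy. The choice of the first and last pairs of the exact set as (b_1, b'_1) and (b_qB, b'_qB), and the computation of the angles relative to the x-axis with arctan2, are interpretive assumptions layered on the formulas above.

    import numpy as np

    def filter_by_geometry(coarse_matches, exact_matches, r_min=0.85):
        """Delete false matches from the coarse match set using the length and angle
        constraints derived from two exact matches (taken here as the first and last
        pairs of the exact set).  Each match is a ((x, y), (x', y')) pair of points."""
        def angle(v):                                   # angle of vector v with the x-axis
            return np.arctan2(v[1], v[0])

        b1, b1p = (np.asarray(p, float) for p in exact_matches[0])
        bq, bqp = (np.asarray(p, float) for p in exact_matches[-1])
        Lc = np.linalg.norm(bq - b1) / np.linalg.norm(bqp - b1p)    # length constraint
        Vc = angle(bq - b1) - angle(bqp - b1p)                      # angle constraint
        kept = []
        for a_pt, ap_pt in coarse_matches:
            a, ap = np.asarray(a_pt, float), np.asarray(ap_pt, float)
            L = np.linalg.norm(a - b1) / np.linalg.norm(ap - b1p)
            V = angle(a - b1) - angle(ap - b1p)
            r_L = min(Lc, L) / max(Lc, L)
            r_V = min(Vc, V) / max(Vc, V)
            if r_L > r_min and r_V > r_min:
                kept.append((a_pt, ap_pt))
        return kept

    if __name__ == "__main__":
        # second image is the first rotated by 90 degrees about the origin
        exact = [((10, 10), (-10, 10)), ((50, 40), (-40, 50))]
        coarse = [((30, 25), (-25, 30)),   # consistent with the rotation: kept
                  ((30, 25), (5, 80))]     # inconsistent: deleted
        print(filter_by_geometry(coarse, exact))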
In summary, the present invention proposes a face recognition method that effectively reduces the computational complexity of feature extraction and feature matching in image recognition and improves recognition efficiency.
Obviously, those skilled in the art should understand that the modules or steps of the present invention described above can be implemented by a general-purpose computing system; they may be concentrated in a single computing system or distributed over a network formed by multiple computing systems, and they may optionally be implemented with program code executable by the computing system, so that they can be stored in a storage system and executed by the computing system. The present invention is therefore not restricted to any specific combination of hardware and software.
It should be understood that the above embodiments of the present invention are intended only to exemplify or explain the principles of the invention and do not limit it. Any modification, equivalent replacement, improvement and the like made without departing from the spirit and scope of the invention shall therefore fall within the protection scope of the invention. Furthermore, the appended claims are intended to cover all changes and modifications falling within the scope and boundaries of the claims or the equivalents of such scope and boundaries.

Claims (4)

1. A face recognition method, characterized by comprising:
for each image in a face image database, obtaining an image feature vector by a preset feature extraction method, forming a corresponding face image feature library, and building an index on that basis;
when a user submits a queried face image, applying the preset feature extraction method to acquire the image feature vector in real time, then using nearest-neighbor search to find, in the face image feature library, all features neighboring the features of the queried face image, and computing the similarity between the queried face image and the images in the face image database;
returning the most similar group of images in the face image database, in descending order of similarity, as the image recognition result.
2. The method according to claim 1, characterized in that the image feature extraction method comprises feature detection and feature description, wherein feature detection comprises: detecting scale-space extrema and obtaining feature points by random sampling; and screening and locating the position and scale of the feature points;
during the sampling, pixels in edge regions are not sampled, and only points within the 1/10 to 9/10 range of the image rows and columns are sampled;
the feature description comprises assigning a dominant orientation to each feature point, and directly describing the randomly sampled feature points to generate feature description vectors that contain only position and orientation information.
3. The method according to claim 2, characterized in that the step of building the index further comprises:
(1) mapping the high-dimensional image feature vector p = (x_1, x_2, ..., x_d) into Hamming space, converting it into the binary string p' = U_c(x_1)U_c(x_2)...U_c(x_d), where U_c(x_i) (i ∈ [1, d]) denotes the binary string consisting of x_i ones followed by c − x_i zeros, c is the maximum value of any element x_i of the feature vector p, and d is the feature vector dimension;
(2) randomly selecting k bits (k ∈ (0, c × d)) from the binary string p' to form l hash functions g_1(p), g_2(p), ..., g_l(p), each function corresponding to one hash table;
(3) using the functions of step (2), mapping the feature vectors into the corresponding hash tables.
4. The method according to claim 3, characterized in that using nearest-neighbor search to find, in the face image feature library, all features neighboring the features of the queried face image further comprises:
(1) mapping the features q_1, q_2, ..., q_k of the queried face image into Hamming space, converting each into a binary string p' = U_c(x_1)U_c(x_2)...U_c(x_d), where U_c(x_i) (i ∈ [1, k]) denotes the binary string consisting of x_i ones followed by c − x_i zeros, c is the maximum value of any element x_i of the queried face image feature vector, and k is the feature vector dimension, and mapping them into the corresponding hash tables with the l hash functions g_1(p), g_2(p), ..., g_l(p);
(2) extracting all hash table entries in the buckets of g_i(q_j) (i ∈ (0, l], j ∈ (0, k]), retaining the entries whose distance to the queried face image feature vector is within a threshold, and looking up the corresponding features via the feature library index as candidate neighboring features;
(3) sorting the obtained candidate neighboring features in ascending order of Hamming distance to the query feature and returning the first K features as the neighbors of the queried face image features, where K is a preset constant.
CN201510072798.5A 2015-02-11 2015-02-11 Face recognition method Pending CN104573681A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510072798.5A CN104573681A (en) 2015-02-11 2015-02-11 Face recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510072798.5A CN104573681A (en) 2015-02-11 2015-02-11 Face recognition method

Publications (1)

Publication Number Publication Date
CN104573681A (en) 2015-04-29

Family

ID=53089703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510072798.5A Pending CN104573681A (en) 2015-02-11 2015-02-11 Face recognition method

Country Status (1)

Country Link
CN (1) CN104573681A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101510257A (en) * 2009-03-31 2009-08-19 华为技术有限公司 Human face similarity degree matching method and device
CN101839722A (en) * 2010-05-06 2010-09-22 南京航空航天大学 Method for automatically recognizing target at medium and low altitudes and positioning carrier with high accuracy
CN103778414A (en) * 2014-01-17 2014-05-07 杭州电子科技大学 Real-time face recognition method based on deep neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HUA Chunjian et al., "High-precision scale-invariant feature point matching method and its application", China Mechanical Engineering (中国机械工程) *
CAO Jian, "Research on image object recognition technology based on local features", China Doctoral Dissertations Full-text Database, Information Science and Technology (中国博士学位论文全文数据库 信息科技辑) *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104899579A (en) * 2015-06-29 2015-09-09 小米科技有限责任公司 Face recognition method and face recognition device
CN105260739B (en) * 2015-09-21 2018-08-31 中国科学院计算技术研究所 Image matching method towards binary features and its system
WO2017101380A1 (en) * 2015-12-15 2017-06-22 乐视控股(北京)有限公司 Method, system, and device for hand recognition
CN105718531A (en) * 2016-01-14 2016-06-29 广州市万联信息科技有限公司 Image database building method and image recognition method
CN105718531B (en) * 2016-01-14 2019-12-17 广州市万联信息科技有限公司 Image database establishing method and image identification method
CN106446816A (en) * 2016-09-14 2017-02-22 北京旷视科技有限公司 Face recognition method and device
CN106649624B (en) * 2016-12-06 2020-03-03 杭州电子科技大学 Local feature point verification method based on global relationship consistency constraint
CN106649624A (en) * 2016-12-06 2017-05-10 杭州电子科技大学 Local feature point verification method based on global relation consistency constraint
CN108876386A (en) * 2017-12-08 2018-11-23 北京旷视科技有限公司 Object authentication method and apparatus, method of commerce and device based on object authentication
CN108875514A (en) * 2017-12-08 2018-11-23 北京旷视科技有限公司 Face authentication method and system and authenticating device and non-volatile memory medium
CN108876386B (en) * 2017-12-08 2022-03-22 北京旷视科技有限公司 Object authentication method and device, and transaction method and device based on object authentication
CN109062942A (en) * 2018-06-21 2018-12-21 北京陌上花科技有限公司 Data query method and apparatus
CN110929546A (en) * 2018-09-19 2020-03-27 传线网络科技(上海)有限公司 Face comparison method and device
CN110377774A (en) * 2019-07-15 2019-10-25 腾讯科技(深圳)有限公司 Carry out method, apparatus, server and the storage medium of personage's cluster
CN110377774B (en) * 2019-07-15 2023-08-01 腾讯科技(深圳)有限公司 Method, device, server and storage medium for person clustering
CN113065530A (en) * 2021-05-12 2021-07-02 曼德电子电器有限公司 Face recognition method and device, medium and equipment
CN113065530B (en) * 2021-05-12 2023-05-30 曼德电子电器有限公司 Face recognition method and device, medium and equipment

Similar Documents

Publication Publication Date Title
CN104573681A (en) Face recognition method
CN111126360B (en) Cross-domain pedestrian re-identification method based on unsupervised combined multi-loss model
Gálvez-López et al. Bags of binary words for fast place recognition in image sequences
CN103207898B (en) A kind of similar face method for quickly retrieving based on local sensitivity Hash
JP7430243B2 (en) Visual positioning method and related equipment
Li et al. Mining key skeleton poses with latent svm for action recognition
CN107392215A (en) A kind of multigraph detection method based on SIFT algorithms
CN103336801A (en) Multi-feature locality sensitive hashing (LSH) indexing combination-based remote sensing image retrieval method
CN104966081A (en) Spine image recognition method
Wang et al. Detecting human action as the spatio-temporal tube of maximum mutual information
Wu et al. An efficient visual loop closure detection method in a map of 20 million key locations
Lei et al. Bi-temporal texton forest for land cover transition detection on remotely sensed imagery
Zhao et al. Landsat time series clustering under modified Dynamic Time Warping
Bai et al. An efficient indexing scheme based on k-plet representation for fingerprint database
Varytimidis et al. W α SH: weighted α-shapes for local feature detection
Yu et al. Multiscale crossing representation using combined feature of contour and venation for leaf image identification
CN104615994A (en) Monitoring image real-time processing method
CN112102475A (en) Space target three-dimensional sparse reconstruction method based on image sequence trajectory tracking
Wu et al. A vision-based indoor positioning method with high accuracy and efficiency based on self-optimized-ordered visual vocabulary
Nie et al. Effective 3D object detection based on detector and tracker
Huo et al. Person re-identification based on multi-directional saliency metric learning
Ahmad et al. A fusion of labeled-grid shape descriptors with weighted ranking algorithm for shapes recognition
CN104615995A (en) Face recognition method
Wu et al. Visual loop closure detection by matching binary visual features using locality sensitive hashing
Zhao et al. Person re-identification with effectively designed parts

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20150429)