CN107615298A - Face identification method and system - Google Patents

Face identification method and system

Info

Publication number
CN107615298A
CN107615298A (application CN201680030571.7A)
Authority
CN
China
Prior art keywords
face
attribute
image
template
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201680030571.7A
Other languages
Chinese (zh)
Inventor
林晓明
普拉尚斯·维奇安德兰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mdeick Private Investment Co Ltd
Original Assignee
Mdeick Private Investment Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mdeick Private Investment Co Ltd
Publication of CN107615298A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G06V40/173 Classification, e.g. identification; face re-identification, e.g. recognising unknown faces across different face tracks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a face identification method and system. The method comprises the steps of: a) reading an image showing one or more persons, b) determining whether the image shows the face of at least one person, wherein the method only continues if at least one face is shown, c) analysing the image for non-facial attribute features of the person, d) extracting facial attributes of the face from the image, e) sorting and/or filtering face templates stored in a database by the non-facial attribute features, f) searching the sorted and/or filtered database for a face template matching the face in the image.

Description

Face identification method and system
Technical field
The present invention relates to a kind of face identification method and system.
Background art
US 2014/0241574 A1 discloses a face tracking and recognition method and device. A person is identified by recognising facial attributes in a selected region of the face and comparing these facial attributes with face data stored in a database of known faces.
US 8,380,711 B2 discloses a method and system for determining a hierarchical ordering of facial attributes. Face regions are estimated from facial image data, and attributes and/or features are determined for these face regions. By vector-quantising these attributes and features, a hierarchy graph for face recognition is built. The graph represents the hierarchical ordering of the facial attributes, so that a person can be identified efficiently by means of facial attributes.
US 2013/0129210 A1 discloses a recommendation system and method based on face and style recognition. Through face recognition, gender and age are determined. Style recognition comprises recognition of the design and colours of clothing, combined with information on season, weather and time. The information obtained from face and style recognition is used to generate recommendations on hair, make-up, clothing and general style.
US 7,236,615 B2 discloses a face detection and expression estimation method based on an energy model. The method enables a multi-view detector to detect faces with various expressions, so that variations in skin colour, glasses, beard, lighting, scale, facial expression and other facial attributes or features are handled efficiently.
US 2009/0087100 A1 discloses a device for calculating the position of the top of a person's head in an image. This is achieved by a wave analysis of the image to find the region of the person's hair. With this method, faces in an image are found and used as reference points in order to solve the problem of compositional balance in picture editing.
CN 103679151 A discloses a method for clustering faces in one or more images. The method improves efficiency by converting RGB images into grey-level images, and extracts Gabor and/or local binary pattern (LBP) features from the grey-level images. Images belonging to one person are clustered. Other attributes, such as background, brightness, different facial expressions, body posture, hair and hairstyle, headwear etc., are handled effectively.
C. Papageorgiou, M. Oren and T. Poggio; A general framework for object detection; Sixth International Conference on Computer Vision, pp. 555-562, 1998, is one of the first publications describing the use of Haar wavelets for real-time object detection.
Paul Viola and Michael Jones; Rapid object detection using a boosted cascade of simple features; Mitsubishi Electric Research Laboratories, 2004 (TR-2004-043), Cambridge, Massachusetts, USA (Conference on Computer Vision and Pattern Recognition, 2001), and Paul Viola and Michael Jones; Robust real-time object detection; International Journal of Computer Vision, 57(2):137-154, 2002, describe a method for automatically recognising faces in images, in which Haar wavelets are used to detect Haar features. The method uses a so-called "integral image", an intermediate representation of the image in which each pixel of the integral image is assigned the sum of all pixels above and to the left of it, plus its own value. With this distribution of grid points, the sum of the pixels within any rectangle of the image spanned by four such integral-image pixels can be computed very quickly. Haar wavelets can therefore be applied to the image very quickly. An AdaBoost-based learning algorithm is described which selects a small number of Haar-like features from a larger set in order to obtain a very efficient classifier. These classifiers can be combined into a classifier cascade, which allows background regions of the image to be discarded quickly, so that more computation is spent on promising object regions.
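The integral-image trick described above can be sketched in a few lines of Python. This is an illustrative reimplementation of the published idea, not code from the patent; the two-rectangle feature at the end mirrors the simplest Haar-like feature (white half minus shaded half).

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum over an inclusive rectangle using at most four table lookups."""
    total = ii[bottom][right]
    if top > 0:
        total -= ii[top - 1][right]
    if left > 0:
        total -= ii[bottom][left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1][left - 1]
    return total

def haar_two_rect_feature(ii, top, left, height, width):
    """Simplest two-rectangle Haar-like feature: left (white) half
    minus right (shaded) half of a rectangle."""
    mid = left + width // 2
    white = rect_sum(ii, top, left, top + height - 1, mid - 1)
    shaded = rect_sum(ii, top, mid, top + height - 1, left + width - 1)
    return white - shaded
```

Once the table is built, every feature evaluation costs a constant number of additions, which is what makes dense scanning of sub-windows feasible.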
P.I. Wilson, J. Fernandez; Facial feature detection using Haar classifiers, JCSC 21, 4 (April 2006), CCSC: South Central Conference, describes another method for recognising faces in images by means of Haar-like features. The image region used for facial feature analysis is subdivided into the regions with the highest probability of containing the feature. By regionalising the detection area, false positives are eliminated, the detection area is reduced and the detection speed is improved.
Sebastian Schmitt, Real-time object detection with Haar-like features, 22 June 2010, s-schmitt.de/ressourcen/haar_like_features.pdf, describes several projects in which Haar-like features are used to detect real-world objects. In these projects, rotated contour features are used. In order to calculate rotated features as well as axis-aligned features, a rotated summed area table (RSAT), i.e. a rotated integral image, is used.
F. Abdat, C. Maaoui and A. Pruski (2010); Real-time facial feature points tracking with pyramidal Lucas-Kanade algorithm, Human-Robot Interaction, Daisuke Chugo (Ed.), ISBN: 978-953-307-051-3, InTech, available from: http://www.intechopen.com/books/human-robot-interaction/real-time-facial-feature-pointstracking-with-pyramidal-lucas-kanade-algorithm, describes a method for tracking facial expressions based on Haar-like features. With this method, selected facial feature points can be tracked in a video sequence.
N. Dalal, B. Triggs, Histograms of oriented gradients for human detection, lear.inrialpes.fr/people/triggs/pubs/Dalal-cvpr05.pdf, published in Computer Vision and Pattern Recognition, 2005, CVPR 2005, IEEE Computer Society Conference of 25 June 2005, Volume 1, pp. 886-893, ISSN 1063-6919, ISBN 0-7695-2372-2, describes a method for detecting humans in images by means of histograms of oriented gradients (HOG). The method is based on evaluating normalised local histograms of image gradient orientations in a dense grid. The basic idea is that the appearance and shape of a local object can be characterised by the distribution of local intensity gradients or edge directions, without precise knowledge of the corresponding gradient or edge positions. This is implemented by dividing the image window into small spatial regions ("cells"); for each cell, a one-dimensional histogram of local gradient directions or edge orientations is accumulated over the pixels of the cell. The combined histogram entries form the representation. The human detection chain uses a dense (in fact overlapping) grid of HOG descriptors as a combined feature vector for a conventional window classifier based on a support vector machine (SVM).
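The per-cell histogram at the heart of HOG can be sketched as follows: a minimal, unoptimised Python illustration of the Dalal and Triggs idea (central-difference gradients, unsigned 0-180 degree orientation bins, L2 normalisation), not the authors' implementation.

```python
import math

def cell_orientation_histogram(cell, bins=9):
    """Accumulate a 1-D histogram of gradient orientations (0-180 deg)
    over one cell, weighted by gradient magnitude (central differences,
    interior pixels only)."""
    h, w = len(cell), len(cell[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]
            gy = cell[y + 1][x] - cell[y - 1][x]
            mag = math.hypot(gx, gy)
            angle = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned orientation
            hist[int(angle / (180.0 / bins)) % bins] += mag
    return hist

def l2_normalise(hist, eps=1e-6):
    """Block normalisation: divide by the L2 norm, as in Dalal & Triggs."""
    norm = math.sqrt(sum(v * v for v in hist)) + eps
    return [v / norm for v in hist]
```

A full HOG descriptor concatenates many such normalised cell histograms over overlapping blocks; this sketch shows only the single-cell building block.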
Summary of the invention
It is an object of the invention to provide a face identification method and system with which people can be identified quickly and with high reliability.
This object is solved by a method and a system according to the independent claims. Advantageous embodiments of the invention are disclosed in the dependent claims.
A face identification method, the method comprising the steps of:
a) reading an image showing one or more persons,
b) determining whether the image shows the face of at least one person, wherein the method only continues if at least one face is shown,
c) analysing the image for non-facial attribute features of the person,
d) extracting facial attributes of the face from the image,
e) sorting and/or filtering face templates stored in a database by the non-facial attribute features,
f) searching the sorted and/or filtered database for a face template matching the face in the image.
By sorting and/or filtering the face templates stored in the database by non-facial attributes before searching for a face matching the image, the number of face templates to be matched can be reduced drastically, so that the database of face templates can be searched quickly while the face template is still matched accurately. The inventors have realised that the non-facial attributes of a person are very specific. Using only a few non-facial attributes, the face templates stored in the database can be sorted and/or filtered very effectively.
The majority of the facial attributes of different faces are very similar. All faces comprise two eyes, a nose and a mouth, and these elements are arranged quite similarly, so that the corresponding attributes are essentially very similar. Only a combination of several such facial attributes allows different faces to be distinguished. Non-facial attributes, in contrast, are often very specific to a person. For example, clothing can show a very specific pattern and/or colour, and hair texture can be very specific. A few non-facial attributes can therefore be used to discard the majority of the face templates stored in the database, namely those templates which do not have corresponding non-facial attributes.
In other words, non-facial attributes can be used to pre-select the face templates in the database efficiently. Using a few non-facial attributes, such as skin colour, clothing, hairstyle and glasses, the relevant number of face templates to be matched is reduced to 0.5%-5% of all face templates stored in the database. The search for a face template matching the extracted face thumbnail can therefore be accelerated significantly and still be carried out very accurately.
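As an illustration of this pre-selection idea, the following sketch filters a template database on a handful of discrete non-facial attributes. The attribute names and the dictionary layout are hypothetical, chosen only to show how a few attributes discard most templates.

```python
def prefilter_templates(templates, probe_attrs):
    """Keep only templates whose discrete non-facial attributes all match
    the probe image's attributes. Attribute keys are illustrative."""
    keys = ("skin_tone", "hair_style", "wears_glasses")
    return [t for t in templates
            if all(t["attrs"][k] == probe_attrs[k] for k in keys)]
```

In practice the matching would be tolerance-based rather than exact equality, but even this crude filter shows how a database shrinks to the small fraction of templates sharing the probe's non-facial profile.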
The invention makes large-scale real-time face identification with multiple cameras feasible, particularly within one and the same day.
The order of steps c) and d) can be changed, so that the facial attributes are determined first and the non-facial attributes afterwards, or both steps can be combined into a single step extracting the facial and the non-facial attributes together.
Non-facial attributes can include:
the colour of the skin, in particular the colour of the neck,
the hairstyle, including the shape of the hair, the length of the hair, the colour of the hair, the texture of the hair, etc.,
the style of clothing, including the colour of the clothing, the texture of the clothing, the cut of the clothing, the collar,
the build, including the shape of the neck and the shape of the shoulders,
whether glasses are worn,
the colour of the glasses.
For the purpose of face recognition, some non-facial attributes, such as the style of clothing and the hairstyle, are only valid for a short time, e.g. one day. Other non-facial features, such as the shape of the neck, the colour of the neck and the shape of the shoulders, usually remain stable over a long time. It is therefore helpful to assign a timestamp to the non-facial attributes, marking the time at which the image was captured or at which the non-facial attributes were extracted from the image. When the face templates stored in the database are sorted and/or filtered by non-facial attributes, the timestamps of the non-facial attributes can be combined with weights according to the average validity period of each non-facial attribute.
Before step d) is performed, a face thumbnail can be selected from the image. Preferably, the face thumbnail is determined in step b). The face thumbnail has the size of the face. The facial attributes are extracted from this face thumbnail, and the search for a matching face template according to step f) is performed on the basis of the face thumbnail.
An attribute thumbnail larger than the face thumbnail can be extracted from the image containing the face thumbnail. The attribute thumbnail thus shows parts of the person other than his/her face, in particular the hair, chest, neck and/or shoulders of the person. The attribute thumbnail is preferably 2 to 4 times larger than the face thumbnail, so that it contains the non-facial attribute features. Such a region is large enough to capture the surrounding attributes, yet has little chance of capturing interfering nearby persons and background; the size of the attribute thumbnail is therefore preferably no more than 2, 3 or 4 times that of the face thumbnail.
Step b), the detection of faces, is realised by a method which classifies the image by means of wavelet transforms. The wavelet transform preferably uses two-dimensional Haar-like wavelets to detect Haar-like features. The classification method can be based on the above-cited methods for detecting objects in images by means of Haar-like features (Paul Viola and Michael Jones; Rapid object detection using a boosted cascade of simple features; P.I. Wilson, J. Fernandez; Facial feature detection using Haar classifiers; Sebastian Schmitt; Real-time object detection with Haar-like features). These documents are therefore incorporated herein in full.
Non-facial attributes related to a shape are determined by an object detection method or an edge detection method. The preferred object detection method is the histogram of gradients. Other suitable edge detection methods are the Canny edge detector, the Canny-Deriche edge detector, differential edge detection, the Sobel operator, the Prewitt operator and the Roberts cross operator.
Non-facial attributes related to a colour are determined by a colour detection method. The preferred colour detection method is the colour histogram.
Non-facial attributes related to a texture or pattern are determined by a texture classification method, such as local binary patterns (LBP) or Gabor filters.
Facial attributes can likewise be extracted from the image or the face thumbnail by texture classification using local binary patterns or Gabor filters, respectively.
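A minimal sketch of the basic 8-neighbour local binary pattern operator mentioned above (illustrative only; production LBP implementations typically add uniform patterns and circular neighbourhoods):

```python
def lbp_code(img, y, x):
    """8-neighbour local binary pattern code for pixel (y, x): each
    neighbour >= centre contributes one bit, clockwise from top-left."""
    c = img[y][x]
    neighbours = [img[y - 1][x - 1], img[y - 1][x], img[y - 1][x + 1],
                  img[y][x + 1], img[y + 1][x + 1], img[y + 1][x],
                  img[y + 1][x - 1], img[y][x - 1]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= c:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels; this
    histogram is the texture descriptor compared between images."""
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist
```

The resulting histogram is what would be stored as a texture-related attribute, for clothing or hair texture as well as for facial texture.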
The non-facial attributes of the captured image can form a non-facial vector. Each face template in the database comprises a corresponding non-facial vector of non-facial attributes. The filtering of step e) is performed by selecting in the database all face templates whose non-facial vector lies closer to the non-facial vector of the image than a predetermined threshold distance.
Such a non-facial vector can also be used to sort the face templates stored in the database according to step e), the face templates of the database being sorted by the distance of their non-facial vectors from the non-facial vector of the captured image.
When determining the distance between the non-facial vector of a face template and the non-facial vector of the captured image, the individual non-facial attributes can be weighted. The weight of an individual non-facial attribute can correspond to the tolerance with which the value of the corresponding non-facial attribute is determined. For example, clothing determined perfectly clearly to comprise only one colour can have a higher weight for "clothing colour" than a patchwork pattern with many different colours. The weights can also be used in combination with the timestamps mentioned above; the weight of a non-facial attribute then corresponds to the stability of the attribute. Non-facial attributes relating to clothing are usually not stable for more than one day, so their weight is reduced substantially over time. Attributes relating to hair colour, hair texture or hair shape are usually more stable, so that these non-facial attributes have a weighting function which does not decline over time as quickly as that of clothing-related attributes. Non-facial attributes related to the shape of the neck or the shape of the shoulders are usually highly stable; these non-facial attributes therefore have a constant weighting over time.
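The combination of per-attribute weights and timestamp-dependent stability described above could be modelled as follows. The exponential half-life decay is an assumption made for illustration; the text only states that clothing-related weights decline over time while neck- and shoulder-shape weights stay constant.

```python
import math

def attribute_weight(base_weight, age_days, half_life_days):
    """Decay an attribute's weight with the age of its timestamp.
    half_life_days=None models a constant weight (e.g. neck or
    shoulder shape); a short half-life models clothing attributes."""
    if half_life_days is None:
        return base_weight
    return base_weight * 0.5 ** (age_days / half_life_days)

def weighted_distance(probe, template, weights):
    """Weighted Euclidean distance between two non-facial attribute
    vectors; low-weight (stale or unreliable) attributes contribute less."""
    return math.sqrt(sum(w * (p - t) ** 2
                         for p, t, w in zip(probe, template, weights)))
```

A template whose clothing attributes were extracted days ago would thus be compared mostly on its stable attributes, matching the stability argument in the text.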
According to step f), the search can be performed by sorting the selected face templates, or by further sorting, on the basis of the facial attributes, a limited number of pre-sorted face templates whose non-facial vector distance from the non-facial vector of the image lies below a certain threshold. The facial attributes preferably form a facial vector, so that the sorting can be based on the distance between the facial vector of the captured image and the facial vectors of the stored face templates. The sorting can be performed by multi-dimensional indexing.
Multiple cameras can be used to capture multiple images, face recognition being carried out on each image. The method can be used to track a certain person within a certain time frame, the non-facial attributes being selected according to the time frame. For a time frame of one day, all the non-facial attributes listed above are suitable. If the time frame is longer than one day, non-facial attributes with a higher temporal stability are selected. The method is also suitable for monitoring or tracking individual persons in a large crowd. This is very advantageous for monitoring outdoor events and for finding criminals, such as hooligans, among the general public.
The face recognition method can also be used to determine customer behaviour, e.g. to assess advertising measures or product presentations. The method can further be used to identify the degree of acceptance of a service and support centre by its customers.
The method is particularly suitable for tracking and counting people in sales areas and public places, particularly in combination with a multi-camera system.
The images containing faces that are processed by the method according to the invention can be captured by one or more cameras. These images can also be retrieved from a database comprising a plurality of images showing faces.
The invention further relates to a system for face recognition, the system comprising at least one camera for capturing images and a control unit connected to the at least one camera. The control unit is adapted to carry out face recognition according to the method described above.
The system preferably comprises multiple cameras, e.g. at least five cameras, preferably at least ten cameras, and more preferably at least 100 cameras. The cameras can be placed in a certain closed area, or they can be distributed over unconnected areas, such as railway stations and airports, in order to track the movements of individual persons.
Brief description of the drawings
The invention is explained in more detail by means of the drawings, in which:
Fig. 1 is a block diagram of a face identification system,
Fig. 2 is a flow chart of a face identification method,
Fig. 3 is a module diagram of a statistical data collection program,
Fig. 4a is a simple set of Haar-like features,
Fig. 4b is an extended set of Haar-like features,
Fig. 5 shows a first and a second Haar-like feature selected by the AdaBoost algorithm.
Embodiment
List of reference numerals
1 system
2 shopping path
3 shopping centre
4 entrance
5 exit
6 branch sections
7 central control unit
8 processor unit
9 storage medium
10 cameras
11 data lines
12 statistical data collection software
13 change detection module
14 human detection module
15 face detection module
16 Haar feature
17 sub-image
18 image
19 sub-window
20 non-facial attribute extraction module
21 facial attribute extraction module
22 template pre-selection module
23 matching module
24 statistical analysis module
25 internet
Fig. 1 shows an embodiment of a face identification system 1 according to the invention. The system is designed to monitor the usage of a shopping path 2 in a shopping centre 3.
The shopping path 2 extends between the entrance 4 and the exit 5 of the shopping centre 3. The shopping path 2 comprises a branching with multiple branch sections 6. On the way from the entrance 4 to the exit 5, customers pass through one or more branch sections 6. The customers select one or more of the branch sections 6 according to their needs; products and advertising campaigns are displayed in the branch sections 6. The behaviour of the customers is mainly influenced by the distribution of the products and by the advertising campaigns. Statistics therefore show whether certain products or advertising campaigns displayed at a position of the shopping path 2 are attractive to the customers, which is very helpful for the management of the shopping centre.
The system 1 for face recognition allows such statistics to be collected.
The system 1 comprises a central control unit 7 with a processor unit 8 and a storage medium 9 for storing a database. The processor unit 8 comprises a CPU, RAM (random access memory) and ROM (read-only memory).
Multiple cameras 10 are connected to the central control unit 7 by data lines 11. In the present embodiment, the cameras 10 are still-image cameras. In principle, video cameras or a combination of still-image cameras and video cameras can also be used.
The cameras 10 can also be arranged at remote sites, such as the parking lot of the shopping centre, and connected to the central control unit 7 via the internet 25.
The cameras 10 are digital cameras producing electronically readable image files. These image files are transmitted to the central control unit 7. Software 12 for collecting statistical data is stored on the central control unit 7 and automatically carries out face recognition on the images delivered by the cameras 10.
The statistical data collection software 12 comprises several software modules (Fig. 3). A change detection module 13 detects whether an input image comprises changes with respect to a previous image from the same camera. If an image is identical to the previous image, it does not need to be analysed and can be discarded.
A human detection module 14 detects whether an image shows at least one person.
A face detection module 15 detects one or more faces in an image. If the face detection module 15 detects a face, it extracts a face thumbnail and an attribute thumbnail. The face thumbnail is a rectangular part of the image showing the face from the forehead to the chin. The attribute thumbnail is a part of the image which surrounds the corresponding face thumbnail with a margin around the face thumbnail, the margin showing at least the hair, neck and shoulders of the person belonging to the face.
The face detection module 15 makes use of an object detection technique based on so-called Haar-like features. Haar-like features represent characteristics of the image which are not directly apparent from the pixel intensities. In general, a Haar-like feature encodes the difference of the mean intensities of sub-regions in the image. The simplest feature set consists of rectangular regions comprising two or four rectangular sub-regions of the same size (Fig. 4a). These Haar-like features are applied to the image by calculating the sum of the pixel values in the sub-regions and determining the intensity difference between the white sub-regions on one side and the shaded sub-regions on the other side according to Fig. 4a. This difference represents the feature value.
The features can be scaled in size in order to obtain feature information at different scales.
The extended feature set shown in Fig. 4b includes edge features, line features and centre-surround features. Some of the Haar-like features are rotated by 45°.
In order to calculate the feature values in real time, the image is converted into a so-called integral image or summed area table (SAT). This summed area table has the same size as the original image; each pixel is assigned the sum of all pixels above and to the left of it in the original image. Once the summed area table has been calculated, the pixel intensities within any axis-aligned rectangular sub-region of the original image can be summed efficiently by combining just four values.
In order to calculate rotated features as well as axis-aligned features, a rotated summed area table (RSAT) is used. In the rotated summed area table, each pixel is assigned the sum of those pixels of the original image which lie in a rectangular area tilted by 45°, the pixel forming the rightmost corner of the rectangular area.
In order to further increase the calculation speed, the Haar-like features are preferably applied in a cascade for classifying the sub-windows 19 of an image 18 which are analysed for the presence of a face. A Haar-like feature used in this way for classifying sub-windows 19 is called a Haar classifier when applied to the image.
The feature value of each Haar classifier is compared with a feature threshold, the Haar classifier being true or false if the feature value is above or below the feature threshold, or vice versa. In a cascade of Haar classifiers, if one Haar classifier is false, the sub-window 19 is rejected and the cascade calculation is terminated, and another sub-window 19 is analysed further with the cascade of Haar classifiers.
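The early-reject control flow of the cascade can be sketched as follows. This is a simplification: in the actual Viola-Jones cascade each stage sums weighted weak classifiers and compares the sum against a stage threshold, whereas this sketch requires every weak classifier of a stage to pass; what it illustrates is the structure of rejecting a sub-window at the first failing stage.

```python
def evaluate_stage(feature_values, thresholds, polarities):
    """One simplified cascade stage: each weak Haar classifier compares
    its feature value against its threshold, with the polarity deciding
    which side counts as 'true'; here all must pass."""
    return all((v > t) == (pol > 0)
               for v, t, pol in zip(feature_values, thresholds, polarities))

def cascade_accepts(stages):
    """Run a sub-window's feature values through the stages; reject at
    the first failure so background windows are discarded after very
    few feature evaluations."""
    for feature_values, thresholds, polarities in stages:
        if not evaluate_stage(feature_values, thresholds, polarities):
            return False  # terminate the cascade for this sub-window
    return True
```

The efficiency of the cascade comes from ordering the stages so that the cheapest, most discriminative features sit first: most sub-windows show background and are rejected after one or two stages.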
In order to detect the facial features of a person, such as the face, eyes and nose, the Haar classifier cascade must be trained. Many machine learning methods can be used to learn the Haar classifiers. The preferred algorithm is the AdaBoost learning process. Alternative learning processes are feature selection based on the variance of the features, the Winnow exponentiated perceptron learning rule, or feature selection processes with learning based on neural networks or support vector machines.
Fig. 5 shows the first and second Haar-like features selected by the AdaBoost method. The two features are shown in the top row and are then overlaid on a typical training face in the bottom row. The first feature measures the difference in intensity between the region of the eyes and a region across the upper cheeks, exploiting the fact that the eye region is often darker than the cheeks. The second feature compares the intensities in the eye regions with the intensity across the bridge of the nose. This example is taken from Paul Viola et al., discussed above.
With the face detection module 15, a plurality of sub-windows 19 can be analysed quickly, analysing sub-windows of different sizes and at different positions in the image. Sub-windows showing only background are discarded by the first or at least the second Haar classifier.
If a face is detected, the corresponding sub-window forms the face thumbnail. Based on the face thumbnail, the attribute thumbnail is generated. The attribute thumbnail comprises the face thumbnail and a certain margin around the face thumbnail. Preferably, the attribute thumbnail is two to four times as large as the face thumbnail.
A non-facial attribute extraction module 20 extracts the non-facial attributes of the person shown in the image, these non-facial attributes not comprising features of the face itself. The non-facial attributes comprise one or more of the following properties: skin colour, hairstyle, hair colour, hair texture, clothing colour, clothing texture, clothing pattern, neck shape, neck colour, shoulder shape, wearing of glasses, colour of glasses, hairstyle at the collar and/or presence of a collar.
Non-facial attributes related to a shape are determined by an object detection method or an edge detection method. In the preferred embodiment, the histogram of gradients is used as the object detection method for extracting shape-related attributes. N. Dalal et al.; Histograms of oriented gradients for human detection, cited above, discloses a histogram of gradients suitable for extracting shape-related attributes. This document is therefore incorporated herein in full.
Non-facial attributes concerning a particular colour in a particular segment of the image are determined by a colour detection method. In the present embodiment, a colour histogram is used as the colour detection method, determining the frequency of pixels of a particular colour within the respective segment.
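As an illustration only (not from the patent), a coarse colour histogram over an image segment might be computed as follows; the quantisation into bins per channel and all names are assumptions:

```python
def colour_histogram(segment, bins_per_channel=4):
    """Quantise each RGB pixel of an image segment into a coarse colour
    bin and count the pixel frequency per bin, i.e. how often each
    (coarse) colour occurs in the segment."""
    step = 256 // bins_per_channel
    hist = {}
    for row in segment:
        for (r, g, b) in row:
            key = (r // step, g // step, b // step)
            hist[key] = hist.get(key, 0) + 1
    return hist

# Toy segment of 2x2 pixels: three reddish pixels and one blue pixel,
# e.g. a small patch of a red jacket next to blue trousers.
seg = [[(250, 0, 0), (245, 5, 0)],
       [(255, 0, 0), (0, 0, 255)]]
print(colour_histogram(seg))  # {(3, 0, 0): 3, (0, 0, 3): 1}
```

The dominant bins of such a histogram could then serve as the "clothing colour" entries of the non-facial vector.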
Non-facial attributes related to texture or pattern are determined by a texture classification method. The texture classification method of the preferred embodiment is local binary patterns (LBP).
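A minimal sketch of the basic 3x3 local binary pattern operator, purely for illustration (the patent does not fix a particular LBP variant, and the names are assumptions):

```python
def lbp_code(img, y, x):
    """Basic 3x3 local binary pattern for pixel (y, x): threshold the 8
    neighbours against the centre pixel and read them as an 8-bit code."""
    centre = img[y][x]
    neighbours = [img[y - 1][x - 1], img[y - 1][x], img[y - 1][x + 1],
                  img[y][x + 1], img[y + 1][x + 1], img[y + 1][x],
                  img[y + 1][x - 1], img[y][x - 1]]
    code = 0
    for i, n in enumerate(neighbours):
        if n >= centre:
            code |= 1 << i
    return code

def lbp_histogram(img):
    """Histogram of LBP codes over all interior pixels; such a histogram
    can serve as a texture descriptor for a clothing or hair region."""
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist

flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
print(lbp_code(flat, 1, 1))  # 255 - uniform region: all neighbours >= centre
```

Comparing LBP histograms of two clothing regions then gives a distance in the "clothing texture" dimension of the non-facial vector.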
A facial attribute extraction module 21 extracts features related to the detected face. The facial attribute extraction module can copy the Haar-like features determined by the face detection module 15 and store them as facial attributes. Additionally or alternatively, further facial attributes can be extracted, for example by a texture classification method such as local binary patterns.

A template preselection module 22 selects face templates stored in the database on the storage medium 9 according to the non-facial attributes. The database on the storage medium 9 comprises a plurality of data sets of face templates. Each data set comprises at least one non-facial vector containing the non-facial attributes and at least one facial vector containing the facial attributes of the corresponding face. Preferably, the data set also comprises a date stamp or timestamp of the corresponding face and/or the face thumbnail and/or the attribute thumbnail.

The template preselection module 22 comprises a filtering and/or sorting algorithm for filtering and/or sorting the face templates of the database on the basis of the non-facial attributes. This is done by calculating the distances between the non-facial vector of the face detected in the actual image by the face detection module 15 and the non-facial vectors of the face templates in the database.
The face templates are sorted by the calculated distance, or filtered according to this distance. If the face templates are sorted, a certain number of face templates with the smallest distances is selected. This number can range from 10 to 10,000, preferably not less than 100, in particular not less than 200, and preferably not more than 2,000, in particular not more than 1,000 or 500. The number of selected face templates is typically in the range of 0.5% to 5% of the non-selected face templates.
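The sorting variant of this preselection can be sketched as follows (illustrative only; the field names, the Euclidean distance metric and the data layout are assumptions, as the patent does not fix them):

```python
import math

def preselect_templates(query_vec, templates, keep=3):
    """Rank the stored face templates by the distance between their
    non-facial vectors and the non-facial vector of the detected person,
    and keep only the 'keep' nearest templates for the matching step."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    ranked = sorted(templates, key=lambda t: dist(query_vec, t["non_facial"]))
    return ranked[:keep]

templates = [
    {"id": 1, "non_facial": [0.9, 0.1]},
    {"id": 2, "non_facial": [0.1, 0.9]},
    {"id": 3, "non_facial": [0.85, 0.2]},
    {"id": 4, "non_facial": [0.5, 0.5]},
]
nearest = preselect_templates([0.9, 0.15], templates, keep=2)
print([t["id"] for t in nearest])  # [1, 3]
```

The filter variant would instead keep every template whose distance is below a fixed threshold; both variants reduce the number of templates passed to the facial matching step.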
If the face templates are selected by means of a filter, only those face templates whose distance is below a certain threshold are selected. With both kinds of selection, the number of face templates that need to be considered further is significantly reduced. Preferably, the template preselection module 22 is adjusted such that no more than 10%, in particular no more than 5%, and preferably no more than 2% of the face templates of the database are passed on for further processing.

The template preselection module can also be used to discard face templates showing certain non-facial attributes. In a shopping mall, the employees often wear specific clothing. Since only the customers, and not the staff, are to be observed, face templates of mall employees showing these clothing attributes can be discarded.

A matching module 23 searches the face templates in the database for the best match with the face detected in the actual image.
The best match is searched on the basis of the facial attributes, in particular by comparing the facial vector of the face detected in the actual image with the facial vectors of the face templates. The best match is the face template whose facial vector has the smallest distance to the facial vector of the face thumbnail. Preferably, the search is performed by means of a multi-dimensional index. If there is no match below a predetermined threshold distance, the result is "no match".
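A minimal sketch of this matching step (not from the patent; names, metric and threshold are assumptions, and a linear scan stands in for the multi-dimensional index mentioned above):

```python
import math

def best_match(face_vec, candidates, threshold=0.5):
    """Among the preselected templates, return the one whose facial
    vector is closest to the detected face's facial vector, or None
    ('no match') if even the best distance is not below the threshold."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best, best_d = None, float("inf")
    for cand in candidates:
        d = dist(face_vec, cand["facial"])
        if d < best_d:
            best, best_d = cand, d
    return best if best_d < threshold else None

cands = [{"id": "a", "facial": [0.0, 0.0]},
         {"id": "b", "facial": [1.0, 1.0]}]
print(best_match([0.1, 0.0], cands)["id"])  # a
```

In a real system, a spatial index (e.g. a k-d tree) would replace the linear scan once the candidate set grows beyond a few hundred templates.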
A statistical analysis module 24 performs a statistical analysis of the detected faces and combines this information with additional information, such as the time when the corresponding picture was taken, the position of the person in the picture, or the position of the camera.

Based on the face recognition system described above, the invention discloses a method for collecting statistics in the shopping mall 3 (flow chart shown in Fig. 2).

The method starts with step S1.

In step S2, an image is taken with one of the cameras 10. The cameras 10 can take images at certain time intervals, for example between 0.1 s and 10 s. The cameras 10 can also be connected to a proximity sensor which detects a person in front of the camera; the proximity sensor then triggers the capture of an image.

Preferably, when an image is taken, a date stamp is generated and attached to the image. The date stamp comprises the time at which the image was taken and/or a description of the position shown in the image. The description of the position can be coordinates or a term such as "shopping mall entrance".

The camera 10 sends the image via the data line 11 to the central control unit 7.
The input image is checked by the change detection module 13 for any change with respect to the last image taken by the same camera 10 (step S3). If the image has not changed, it is discarded, since an identical image has already been analysed before. If nobody is in front of a specific camera 10, the camera continuously takes several identical images of the shopping mall 3; analysing the same image again would be pointless.

If it is determined in step S3 that the image has not changed, the flow returns to step S2. If a change of the image is detected in step S3, it is checked whether a person is shown in the image (step S4). The characteristic contours of a human body can easily be detected by histograms of oriented gradients. If no person is shown in the image, the flow returns to step S2. If a person is detected in step S4, then preferably the number of persons in the stored image is also determined.

The face detection module 15 detects faces in the image by analysing the Haar-like features described above (step S5). The face thumbnails and attribute thumbnails are also generated in this step.
The non-facial attribute extraction module 20 extracts the non-facial attributes (step S6). In the present embodiment, only persons who stay in the shopping mall for at most a few hours need to be detected. It is therefore appropriate to use non-facial attributes that are significant but need not remain valid over a long period of time: all attributes related to clothing and/or hair style. During a stay in the shopping mall, hardly anyone changes his or her clothing or hair style. In other applications, different non-facial attributes can be selected as appropriate. The non-facial attributes are extracted from the attribute thumbnail.
The facial feature extraction module 21 extracts facial features from the face thumbnail (step S7). The facial attributes can be extracted by copying the facial features already determined in step S5, such as the Haar-like features, or by applying a specific extraction procedure to the face thumbnail.

Using the extracted non-facial attributes (step S6), the template preselection module 22 preselects face templates from the database (step S8). By this preselection, only a small number of the face templates stored in the database is selected.

These selected face templates are used to search for a match between the face thumbnail generated in step S5 and the face templates of the database (step S9).

If no match is found in step S9, the flow proceeds to step S10. In step S10, a new data set relating to the face detected in the actually taken image is added to the database. The data set comprises at least the facial vector and the corresponding attribute vector. Preferably, the data set also comprises the face thumbnail and/or the attribute thumbnail. The data set can also comprise the date stamp generated in step S2, including the time when the image was taken and/or the position.
In step S11, a statistical analysis is carried out on the face template matched in step S9 or on the new face template stored in step S10. In the present invention, it is analysed which branch sections 6 of the shopping path 2 are used by which person. Furthermore, the residence time of a person in a specific branch section 6 of the shopping path 2 can be analysed. This information can also be associated with the products actually purchased by that person. The products bought by a person are determined by detecting the corresponding person at the point of sale (POS) and associating this information with the data registered at the cash register.

In step S12, it is further checked whether another person is detected in the actual image. If this is the case, the flow returns to step S5 to detect the next face. Otherwise, the flow proceeds to step S13, in which it is checked whether the central control unit 7 has received a further image. If so, the flow returns to step S3. Otherwise, the method ends in step S14.

The above method is an example of collecting data in a shopping mall. In this example, the facial information obtained in the face recognition process is used for statistical analysis. The face recognition process can also be used for other applications. With it, for example, a crowd can be monitored, and each person in the crowd can easily be tracked by means of the non-facial attributes. This can be used for monitoring crowds at outdoor events, which may be disturbed by criminals such as hooligans. The process can analyse the images of several cameras, or images showing several faces, in parallel. Once a person has been recorded in the database, the same person can be found in real time, even if he changes his position and appears in images taken by different cameras. If a criminal is identified and spotted outdoors, it is very difficult to single him out of the crowd; but as long as the cameras monitoring a railway station or any other public place are connected to the face recognition system, the criminal can easily be caught there.
In the above embodiment, persons are detected in step S4 and faces are detected in step S5. These two steps can also be merged into a single step, in which the face detection is also used to detect the individual faces or persons and to count the number of persons shown in the image.

Furthermore, the order of steps S6 and S7 can be changed. Steps S5 and S7 can also be merged into one step, in which the facial features are extracted while the face is being detected. This is particularly suitable for facial features such as the Haar-like features.

The method and system can also be used for monitoring in security-related fields, such as banks. The method can identify persons who approach a security zone several times a day.

The method and system can also be used to analyse the service processes in a service centre: the time a certain customer has to stay in the service centre, and which places of the service centre the customer has visited, can be reliably detected.
The basic principle of the invention is the preselection of a small number of face templates stored in the database on the basis of the non-facial attributes. Since the information content of the non-facial attributes is large, it is likely that a small number of potentially relevant templates can be selected quickly and with high reliability. The matching face template (the "face") can therefore be found very quickly and efficiently. The system and method are particularly suitable for monitoring persons over a limited period of time, for example over one to five hours, one to five days or one month. The non-facial attributes must be selected according to the period over which the persons are monitored.
For the preselection of the templates in step S8 according to the non-facial attributes, the distances between the corresponding non-facial vectors are calculated. In calculating this distance, the individual attributes can be weighted by their time relevance, since some attributes are likely to change while others remain stable. Furthermore, the attributes can be weighted by a tolerance determined or estimated for the value of the respective attribute: the smaller the tolerance, the larger the respective weight of the attribute.
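One plausible realisation of such a tolerance-weighted distance, for illustration only (the patent only states that a smaller tolerance means a larger weight; using 1/tolerance as the weight, and all names, are assumptions):

```python
import math

def weighted_distance(vec_a, vec_b, tolerances):
    """Weighted Euclidean distance between two non-facial vectors: each
    attribute difference is divided by that attribute's tolerance, so
    reliable attributes (small tolerance) contribute more to the
    distance than uncertain ones (large tolerance)."""
    return math.sqrt(sum(((a - b) / t) ** 2
                         for a, b, t in zip(vec_a, vec_b, tolerances)))

# Attribute 0 (e.g. clothing colour) is trusted (small tolerance);
# attribute 1 (e.g. hair style seen from behind) is uncertain.
d = weighted_distance([0.2, 0.8], [0.4, 0.2], tolerances=[0.1, 1.0])
print(round(d, 3))  # sqrt((0.2/0.1)^2 + (0.6/1.0)^2) ~= 2.088
```

A time-relevance weight could be folded into the same scheme by enlarging the effective tolerance of attributes that are likely to change over the monitoring period.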

Claims (15)

1. A face recognition method, characterised in that the method comprises the steps of:
a) reading an image showing one or more persons,
b) determining whether the image shows the face of at least one person, wherein the method is continued only if at least one face is shown,
c) analysing the image for non-facial attributes characterising the person of the face,
d) extracting facial attributes of the face from the image,
e) sorting and/or filtering face templates stored in a database by means of the non-facial attributes,
f) searching the sorted and/or filtered database for a face template matching the face of the image.
2. The method according to claim 1, characterised in that before step d) is performed, a face thumbnail of the size of the face is cut out from the image, the facial attributes are extracted from the face thumbnail, and the search for the face template in step f) is performed by matching the facial attributes extracted from the thumbnail.
3. The method according to claim 1 or 2, characterised in that the face detection of the person in step b) is performed by an image classification method based on a wavelet transform.
4. The method according to claim 3, characterised in that the wavelet transform uses two-dimensional Haar-like wavelets to detect Haar-like features.
5. The method according to any one of claims 1-4, characterised in that the non-facial attributes include one or more of the following:
hair style, hair colour, hair texture, clothing colour, clothing texture, clothing pattern, neck shape, neck colour, shoulder shape, glasses, collar.
6. The method according to claim 5, characterised in that the non-facial attributes related to shape are determined by an edge detection method (histograms of oriented gradients), and/or
the non-facial attributes related to colour are determined by a colour detection method (colour histogram), or
the non-facial attributes related to texture or pattern are determined by a texture classification method (local binary patterns).
7. The method according to any one of claims 1-6, characterised in that facial features are extracted from the image by a texture classification method (local binary patterns).
8. The method according to any one of claims 1-7, characterised in that the non-facial attributes of the image form a non-facial vector, and each template of the database comprises a non-facial vector of corresponding non-facial attributes, wherein the filtering of step e) is performed by selecting all face templates of the database whose non-facial vector has a distance to the non-facial vector of the image that does not exceed a predetermined threshold distance.
9. The method according to any one of claims 1-8, characterised in that the non-facial attributes of the image form a non-facial vector, and each face template of the database comprises a non-facial vector of corresponding non-facial attributes, wherein the sorting of step e) is performed by sorting the face templates of the database according to the distances of their non-facial vectors to the non-facial vector of the taken image.
10. The method according to claim 8 or 9, characterised in that the individual non-facial attributes are weighted when determining the distance between the non-facial vector of a template and the non-facial vector of the taken image.
11. The method according to any one of claims 1-10, characterised in that a face thumbnail is determined in step b), the face thumbnail being used for extracting the facial features, and an attribute thumbnail comprising the face thumbnail and being larger than the face thumbnail is determined, the attribute thumbnail being used for analysing the non-facial attributes.
12. The method according to any one of claims 1-11, characterised in that in the search according to step f), the selected face templates, or a limited number of the sorted face templates whose non-facial vector distance to the non-facial vector of the image is below a certain threshold, are further sorted on the basis of the facial attributes.
13. The method according to claim 12, characterised in that the sorting is performed by means of a multi-dimensional index.
14. The method according to any one of claims 1-12, characterised in that several cameras can be used for taking several images, and face recognition is performed on each image.
15. A face recognition system, comprising:
at least one camera for taking images, and
a control unit connected to the at least one camera, wherein a server is configured to perform the method according to any one of claims 1 to 14.
CN201680030571.7A 2015-05-25 2016-05-23 Face identification method and system Pending CN107615298A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
SG10201504080W 2015-05-25
SG10201504080WA SG10201504080WA (en) 2015-05-25 2015-05-25 Method and System for Facial Recognition
PCT/SG2016/050244 WO2016190814A1 (en) 2015-05-25 2016-05-23 Method and system for facial recognition

Publications (1)

Publication Number Publication Date
CN107615298A true CN107615298A (en) 2018-01-19

Family

ID=57392166

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201680030571.7A Pending CN107615298A (en) 2015-05-25 2016-05-23 Face identification method and system

Country Status (6)

Country Link
CN (1) CN107615298A (en)
AU (1) AU2016266493A1 (en)
HK (1) HK1248018A1 (en)
PH (1) PH12017502144A1 (en)
SG (1) SG10201504080WA (en)
WO (1) WO2016190814A1 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108724178A (en) * 2018-04-13 2018-11-02 顺丰科技有限公司 The autonomous follower method of particular person and device, robot, equipment and storage medium
CN108805140A (en) * 2018-05-23 2018-11-13 国政通科技股份有限公司 A kind of feature rapid extracting method and face identification system based on LBP
CN109448026A (en) * 2018-11-16 2019-03-08 南京甄视智能科技有限公司 Passenger flow statistical method and system based on head and shoulder detection
CN109670451A (en) * 2018-12-20 2019-04-23 天津天地伟业信息系统集成有限公司 Automatic face recognition tracking
CN110213632A (en) * 2019-04-23 2019-09-06 浙江六客堂文化发展有限公司 A kind of audio/video player system and its application method comprising user data processing
CN111161312A (en) * 2019-12-16 2020-05-15 重庆邮电大学 Object trajectory tracking and identifying device and system based on computer vision
CN111554007A (en) * 2020-04-20 2020-08-18 陈元勇 Intelligent personnel identification control cabinet
CN112749290A (en) * 2019-10-30 2021-05-04 青岛千眼飞凤信息技术有限公司 Photo display processing method and device and video display processing method and device
CN113128356A (en) * 2021-03-29 2021-07-16 成都理工大学工程技术学院 Smart city monitoring system based on image recognition

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11443551B2 (en) 2017-10-24 2022-09-13 Hewlett-Packard Development Company, L.P. Facial recognitions based on contextual information
TR201721657A2 (en) * 2017-12-25 2019-07-22 Arcelik As ONE FACE RECOGNITION SYSTEM AND METHOD
US10936854B2 (en) * 2018-04-27 2021-03-02 Ncr Corporation Individual biometric-based tracking
CN112651268B (en) * 2019-10-11 2024-05-28 北京眼神智能科技有限公司 Method and device for eliminating black-and-white photo in living body detection and electronic equipment
CN111597872A (en) * 2020-03-27 2020-08-28 北京梦天门科技股份有限公司 Health supervision law enforcement illegal medical practice face recognition method based on deep learning
CN113822367B (en) * 2021-09-29 2024-02-09 重庆紫光华山智安科技有限公司 Regional behavior analysis method, system and medium based on human face

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1794264A (en) * 2005-12-31 2006-06-28 北京中星微电子有限公司 Method and system of real time detecting and continuous tracing human face in video frequency sequence
US20060140455A1 (en) * 2004-12-29 2006-06-29 Gabriel Costache Method and component for image recognition
US20130121584A1 (en) * 2009-09-18 2013-05-16 Lubomir D. Bourdev System and Method for Using Contextual Features to Improve Face Recognition in Digital Images

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7739221B2 (en) * 2006-06-28 2010-06-15 Microsoft Corporation Visual and multi-dimensional search
CN100568262C (en) * 2007-12-29 2009-12-09 浙江工业大学 Human face recognition detection device based on the multi-video camera information fusion
US8379917B2 (en) * 2009-10-02 2013-02-19 DigitalOptics Corporation Europe Limited Face recognition performance using additional image features



Also Published As

Publication number Publication date
HK1248018A1 (en) 2018-10-05
AU2016266493A1 (en) 2017-12-14
SG10201504080WA (en) 2016-12-29
WO2016190814A1 (en) 2016-12-01
PH12017502144A1 (en) 2018-05-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1248018

Country of ref document: HK

WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20180119