WO2005096213A1 - Face recognition system and method - Google Patents

Face recognition system and method

Info

Publication number
WO2005096213A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
database
faces
databases
threshold
Prior art date
Application number
PCT/EP2005/001988
Other languages
English (en)
Inventor
Albrecht Rothermel
Dengpan Mou
Rainer Schweer
Original Assignee
Thomson Licensing
Priority date
Filing date
Publication date
Application filed by Thomson Licensing
Publication of WO2005096213A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Definitions

  • the present invention relates to a face recognition system and method being capable of recognizing multiple human faces with random free motion.
  • One important aspect of the face recognition system is the face detection.
  • motion detection as a preprocessing step and then detecting faces by applying image-based face detection techniques is commonly applied in many video-based face processing systems.
  • using motion as a first step for face detection is, however, prone to difficulties in some real-world cases without any human supervision. Background subtraction and frame differencing are the two principal methods to detect motion.
  • the background subtraction method requires a static background. If the background is changing, however, there are unavoidable errors when subtracting the initialised background from the current frame. Complicated adaptive background models are to be introduced as compensation, at the expense of significant computational efforts.
  • the frame differencing method might not work properly, when a face is not apparently moving, or other moving objects exist in the background. Failure of the pre-processing step leads to the failure of face detection.
  • Another important aspect of the automatic face recognition system is the face database, which is vital to the success of face recognition.
  • a database composed of a sufficient number of qualified images per face performs much better than a database with only a few, randomly selected images per face. There are two conventional ways to construct face databases from video sequences.
  • a human supervisor is normally required to store selected faces for each person in a certain format into databases for further recognition. This is also referred to as a pre-training procedure before recognition, in which the face images from a certain person are carefully selected by a human supervisor. Those chosen face shots are then encoded in a certain format and stored into the corresponding database.
  • the selection criteria are dependent on the recognition methods.
  • Most robust recognition systems collect various face shots from a certain person: under varying lighting conditions, multiple views, different head poses and expressions etc. Although those systems achieve good performance, the training procedure itself demands significant efforts for a human supervisor.
  • the face recognition system of the invention is capable of dealing with the above-mentioned problems and is therefore applicable to any commercial activities requiring face recognition.
  • the present invention suggests an automatic, unobtrusive and unsupervised real-time system recognizing multiple faces and creating adaptive databases from video signals, which comprises: a detector continuously obtaining images from video sequence(s) and outputting information of whether or not face(s) and eyes are detected in the acquired images; a face of interest (FOI) tracker being capable of tracking multiple detected faces with random motion, filtering and selecting qualified faces; and a processor for recognizing whether the selected faces are known or unknown faces, creating new databases for recognized unknown faces, adaptively updating existing databases for recognized known faces, merging redundant face databases and removing face databases of non-interest.
  • the inventive system not only can run as it is for any period of time, with no requirement on the user's behaviour, with no interruption to the user, and with no requirement of any human supervision or help, but also can automatically and passively recognize multiple persons showing up at the same time with free and random motion.
  • the face of interest (FOI) tracker of the inventive system comprises: a face extractor, which is used to extract each detected face from a frame image; a face region separator, which is intended to divide a current frame image into multiple face check regions, the number of the face check regions corresponding to the number of extracted face regions; a face boundary detector, which is used to examine whether a current frame contains any faces in boundaries corresponding to the face check region defined in the last preceding frame image; a face verifier, which is used to determine whether a face examined in the current frame image is the same as the corresponding face in the last preceding frame; a frame filter, which is used to buffer the results of the face boundary detector and the face verifier, and filter them to finally decide how many faces there are in the frames and which one is which; and a quality selector, which is used to select a qualified face for further processing in the processor.
  • the inventive system uses an image-based face detection as a main method and a temporal-based motion detection as a post-processing step, which effectively decreases the failure rate of the face detection and reduces the complexity of the detection computation.
  • the processor of the inventive system comprises: a database creator, which is used to create a face database by enrolling face images selected from the face of interest (FOI) tracker, each face database being able to enrol multiple different images of one face; a recogniser, which is used to compare the selected face with all face databases and output an instant similarity value (ISV) between the current selected face and the most similar face in the face databases as a result of the comparison; a first filter defined with an adaptive recognizing threshold (ART), which is used to identify the selected face as a known face, when the ISV of the selected face is no less than the ART, or identify the selected face as an unknown face, when the ISV of the selected face is less than the ART; a second filter defined with an adaptive updating threshold (AUT), which is used to determine whether to update a face database with the current selected face: if the ISV of the selected face is less than the AUT, the corresponding face database is updated with the selected face; a database merger, which is used to merge redundant face databases; and a database remover, which is used to remove face databases of non-interest.
  • the inventive system continuously keeps the variety of enrolled faces, merging redundant faces and removing faces of non-interest; consequently a face database with a sufficient number of qualified images per face is achieved and the recognition rate is enhanced, and therefore the efficiency and reliability of the system are improved.
  • the processor of the inventive system further comprises a third filter defined with a time parameter threshold, which is used to keep the face database updated with a person's latest visage based on the predefined time parameter threshold.
  • the inventive system automatically and continuously updates the face databases with the most recent views.
  • Fig. 1 shows a flow diagram explaining the conception of the intelligent face recognition system according to the present invention
  • Fig. 2 shows a flow diagram illustrating one embodiment of the intelligent face recognition system according to the present invention
  • Fig. 3 depicts estimation of the face check region in the next frame from the face location of the current frame
  • Fig. 4 and table 1 respectively show a part of the samples and the experiment comparison results of the image sets.
  • an intelligent face recognition system which automatically recognizes known/unknown multiple faces from real-time video signals, creating or adaptively updating the corresponding face databases, and particularly being capable of dealing with multiple persons showing up at the same time, without any requirement on the users and without any human supervision or assistance.
  • the inventive system can be based on any reasonable image-based face detection and recognition method.
  • the image-based face detection and recognition technology used in FaceVACS of Cognitec Systems GmbH is applied in the present invention.
  • the face recognition system according to the present invention generally comprises a detector, a face of interest (FOI) tracker and a processor.
  • the detector is intended for continuously obtaining images from video sequence(s) and outputting information of whether or not face(s) and eyes are detected.
  • the face of interest (FOI) tracker consists of a face extractor, a face region separator, a face boundary detector, a face verifier, a frame filter and a quality selector.
  • the processor of the system mainly comprises a database creator, a recognizer, a first filter of adaptive recognizing threshold (ART), a second filter of adaptive updating threshold (AUT), a third filter of time parameter threshold, a database merger and a database remover.
  • the processing of the face recognition system according to the present invention essentially contains three main parts: face detection, face recognition and database construction.
  • the process of face detection combines an image-based face detection with a temporal-based face detection.
  • the detector of the inventive system performs image-based face detection as a main method, and combines with the FOI tracker to perform a supplementary detection as a post-processing procedure by utilizing motion information of the face of interest.
  • image-based face detection algorithms are generally the most robust and can achieve much higher face detection rate than common motion detection.
  • the corresponding motion information can still help to complete the face detection procedure, which further increases the detection rate.
  • the face of interest (FOI) tracker of the inventive system is based on: whole image difference between a current and a preceding image; face region difference between the current and the preceding image; eye region movement between the current and the preceding image.
  • the face extractor of the FOI tracker is used to extract each detected face region from the frame image with the corresponding eye positions. The extraction is based on the proportion between the eye distance and the face width and face height. Then the face region separator divides the current frame image into several face check regions, the number of the face check regions corresponding to the number of extracted face regions. Each face check region is defined based on two parameters. One is the eye distance. The other one is a predefined highest motion speed. Normally, a certain face cannot move beyond the corresponding check region boundary in the next frame, even with the highest possible speed. The separator guarantees that a face shown in a certain defined face check region in a current frame must be in the same check region in the next frame.
  • the face boundary detector is used to examine whether a current frame contains any faces in boundaries corresponding to the check regions defined in the last preceding frame image. If yes, the current face is considered to probably remain the same as in the last frame, and is to be further checked with the downstream face verifier. If not, either the face might be occluded by another object, or the face extractor fails due to too much rotation, a sudden lighting change, etc. In this case, the face check region is reserved for the next frame image and is to be further examined in the downstream frame filter. Afterwards, the face verifier is used to further compare the current examined face with the corresponding face in the last frame. If the similarity between these two faces is no less than a predefined verifying similarity threshold (VST), they are determined to be the same face. If the similarity is below the threshold (VST), the face is further checked in the downstream frame filter.
  • the frame filter buffers the results of the face movement detector, i.e. the face boundary detector and the face verifier, and filters them to finally decide how many faces there are in those frames and which one is which.
  • there are several cases the filter has to deal with. Firstly, within the filter length, if the face verifier always verifies that a certain face check region contains the same face, the downstream quality selector is to select the best quality face among the frames for further processing. Secondly, compared with a certain face in the last frame, if the current face is not verified by the face verifier, the face is to be further checked in the next frame.
  • thirdly, the face boundary detector may fail to detect any face in a certain face check region in one or two frames, while it detects a face in all the other frames. If the faces in those frames are verified by the face verifier, it turns into the first case. If not, it turns into the second case.
  • fourthly, if the face verifier indicates that, in one or more frames, the detected faces in a certain face region belong to the same face, while in other frames they belong to another new face, the old face is removed and the newly detected face is added to the filter buffer. This case may not happen for a video signal from a live camera, but it can exist in films or TV programs with a sudden shot change.
  • the quality selector decides whether to select a certain face in the current frame for further processing in the processor. Selection is based on a predefined selecting similarity threshold (SST).
  • the threshold SST is bigger than the threshold VST and smaller than 100% (which means exactly the same). If a certain face has little change in different frames, i.e. if the inter-frame similarity is above SST, it is not necessary to select the face in each frame, but only in one frame for comparing with the face databases. But whenever a face in a certain frame has an inter-frame similarity below SST, it is selected for further comparison. This selector guarantees that each face selected for further comparison carries much less redundant information than taking every detected face.
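A minimal sketch of this selection rule, assuming illustrative threshold values and a pre-computed inter-frame similarity; neither the numeric values nor the function names come from the patent.

```python
# Illustrative constants; the patent does not give numeric values.
VST = 0.60   # verifying similarity threshold (assumed)
SST = 0.85   # selecting similarity threshold, VST < SST < 1.0 (assumed)

def should_select(inter_frame_similarity: float) -> bool:
    """Quality-selector rule: forward a face shot to the recogniser only when it
    has changed noticeably since the last selected shot (similarity below SST)."""
    return inter_frame_similarity < SST
```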
  • the database creator creates a face database by enrolling the current face selected from the FOI tracker.
  • a certain database can contain many enrolled face images for a certain face. There is a maximum number of face images that can be enrolled into one face database.
  • the recogniser is used to compare the current face selected from the FOI tracker with all face databases and to output the instant similarity value (ISV) between the current face and the most similar face in the face databases.
  • the first filter defined with an adaptive recognizing threshold is used to identify the selected face as a known face, when the ISV of the selected face is no less than the adaptive recognizing threshold (ART), and to identify the selected face as an unknown face, when the ISV of a selected face is less than the adaptive recognizing threshold (ART).
  • the adaptive recognizing threshold (ART) is adaptively changeable according to the size of a certain database. For a database with a small number of enrolled faces (from the same person), ART is set lower. For a database with many faces (from the same person) enrolled, ART is set higher.
  • ART is also robust enough to deal with the problem that a database with more images enrolled has a higher false recognition probability.
  • the filter of ART is introduced so that no face from other persons is wrongly enrolled into a certain database.
  • ART should rely on the value of two important parameters: the false acceptance rate (FAR), which is the probability that face shots from other persons are wrongly enrolled, and the false rejection rate (FRR), which is the probability of wrongly rejecting face shots from the same person.
  • FAR is much more harmful for automatic procedures.
  • ART should be selected high enough (e.g. bigger than 99%) that a sufficiently low FAR is achieved.
  • ART is also set adaptively and differs between databases.
  • a database with more face shots enrolled corresponds to a higher ART.
  • the adaptive recognizing threshold (ART) is introduced to guarantee the purity of a certain database even with multiple faces showing up at the same time. Purity means that the database for a certain person does not contain face images from any other persons.
  • the second filter defined with an adaptive updating threshold (AUT) is introduced to determine whether to update a certain database with the selected current face. If ISV is less than AUT, the current face is used to update the corresponding database.
  • AUT is always bigger than ART, and is inversely proportional to the size of a certain database.
  • the purpose of AUT is to keep the variety of a certain database so that the same person under different conditions can be correctly identified. To save computing power, if one database achieves its maximum number capacity, it does not continue to be updated based on AUT but according to a time threshold.
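The interplay of ART and AUT amounts to a two-threshold decision. The sketch below is an assumption-laden illustration: the text only states that ART grows and AUT shrinks as a database grows and that AUT stays above ART, so the linear schedules and numeric values here are invented for illustration, and the ART2 margin used for enrolment is ignored for simplicity.

```python
def adaptive_thresholds(n_enrolled: int, n_saturation: int = 20):
    """Illustrative ART/AUT schedule: ART rises and AUT falls as a database grows.
    The exact schedule is not given in the text; linear interpolation is assumed."""
    frac = min(n_enrolled / n_saturation, 1.0)
    art = 0.70 + 0.15 * frac   # adaptive recognizing threshold (assumed values)
    aut = 0.98 - 0.08 * frac   # adaptive updating threshold, always above ART here
    return art, aut

def classify_and_update(isv: float, n_enrolled: int):
    """Decide whether the selected face is known and whether its database is updated."""
    art, aut = adaptive_thresholds(n_enrolled)
    known = isv >= art              # ISV >= ART -> known face
    update = known and isv < aut    # ISV < AUT -> different enough to add variety
    return known, update
```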
  • the third filter of time parameter is used to keep updating with the person's latest visage based on a predefined time parameter threshold, e.g. hour or day unit etc.
  • the database merger is used to merge redundant face databases.
  • said FOI tracker and ART filter assure the purity of a certain database, but they might in turn create redundant databases, i.e. more than one database may exist for one person.
  • the merger calculates mutual similarity values (MSV) between each pair of databases and combines those databases with an MSV bigger than a predefined threshold.
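A minimal sketch of this merging step, assuming each database is a plain list of enrolled face representations and that a `similarity` helper returning the MSV of two databases is available; both are assumptions made for illustration, not part of the patent.

```python
def merge_redundant_databases(databases, similarity, msv_threshold=0.9):
    """Merge databases whose mutual similarity value (MSV) exceeds a threshold.
    `databases` is a list of lists of enrolled face representations (assumed)."""
    merged = list(databases)
    i = 0
    while i < len(merged):
        j = i + 1
        while j < len(merged):
            if similarity(merged[i], merged[j]) > msv_threshold:
                merged[i] = merged[i] + merged[j]   # combine enrolled face shots
                del merged[j]                       # drop the redundant database
            else:
                j += 1
        i += 1
    return merged
```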
  • the database remover can be used to delete occasional databases with only a few face images enrolled, as well as databases that have not been updated for a long time.
  • the construction of an automatic and online face database is made passively and should be able to manage people with random motion.
  • the constructed databases are designed to have the following features: Purity — no face shot from any other person is allowed; Variety — a database enrols only sufficiently varied face shots; Rapidity — at the beginning of building a new database, a rapid growth of the database is always required; Updatability — the database should be able to keep up with recent views of persons; Uniqueness — each person should have only one database, since multiple databases for one person might lead to confused identification.
  • variety is apparently required for a database to be complete enough to identify a person in different views, head poses or facial expressions. It is crucial for keeping the false rejection rate (FRR) low.
  • a database has two states: one is the initialised state, the other is a stable state. In the initialised state, when a new database is created, a rapid growth is important. In principle, before saturation, the larger the number of face shots in a certain database, as long as they are not the same, the lower the false rejection rate that is achieved. A less strict rule should hence be used to enrol more face shots in this state. In the stable state, however, the selection of enrolled face shots should concentrate more on their variety. More varied face shots are to replace the previously enrolled, more similar face shots.
  • An adaptive updating threshold (AUT) is used to make the selection in a floating way.
  • One mug shot is enrolled if the following condition is fulfilled: ART2 ≤ S_v < AUT, and n_e < n_th
  • AUT is decreasing for each database growing from an initialised state to a stable state
  • ART2 denotes a threshold which is only slightly smaller than ART
  • n_e is the current number of enrolled face shots in a certain database
  • n_th is the threshold number of enrolled face shots which indicates the saturation of a database.
  • the database in saturation is assumed to have enough face shots enrolled for identifying the same person.
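Putting the enrolment condition together, a small helper could look like the following sketch; the ART2 margin value is an assumption, since the text only says that ART2 is slightly smaller than ART.

```python
def should_enrol(s_v: float, art: float, aut: float,
                 n_e: int, n_th: int, art2_margin: float = 0.02) -> bool:
    """Enrolment rule from the text: ART2 <= S_v < AUT and n_e < n_th,
    where ART2 is only slightly smaller than ART (margin value assumed)."""
    art2 = art - art2_margin
    return art2 <= s_v < aut and n_e < n_th
```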
  • ART2 contributes to the database purity. Purity is the most important feature of the databases. A robust recognition technique is needed. Hence, when the image-based face recogniser fails but a face is still identified, the enrolment should be careful enough. Face shots with S_v much smaller than ART are discarded from enrolment to avoid bad quality face shots that might result in failure. As mentioned earlier in the recognition procedure, the filter with several S_v values buffered for recognition also contributes to the purity. Another important feature for a successful database is to keep up with recent views of faces. Face shots taken on different days statistically differ much more than face shots from the same day. Enrolling a few positively recognized face shots from every day can improve the FRR.
  • a time parameter is introduced to trigger the update of a saturated database.
  • the replaced face shots are from the oldest days and have most similar characterizing values when compared to others.
  • the face of interest (FOI) tracker combined with the face recogniser of the processor are used to determine whether the selected faces are known or unknown faces, as indicated in step 66.
  • two online adaptive thresholds are applied as an update rule for updating the corresponding face databases.
  • the face recogniser outputs instant similarity values (ISV) between the current face and the corresponding face databases. If it is determined in step 68 that the output instant similarity values (ISV) of a face fulfil the update rules, then the known face databases are updated in step 70. Subsequently, a time parameter threshold together with an adaptive updating threshold is also used to update the corresponding databases with the most recent face images.
  • new databases are accordingly created by the database creator in step 72.
  • the creation of a certain database is based on selective face images to reduce redundant information.
  • in step 74, when no face is detected for a predefined period of time, if redundant databases are determined to exist in step 76, then all redundant databases are merged by the database merger by calculating the mutual similarity value (MSV) between every pair of face databases, and occasional databases are removed by the database remover based on the database size and the updating frequency in step 78.
  • in step 12, the detector continuously acquires new images from a live video 10.
  • the detection technique of FaceVACS is applied as the image-based detection step.
  • the detection method is rule-based. Each image is compared with a predefined face template to decide whether it contains any face(s) and eyes. With positive results, searches for the eye positions are performed.
  • an actual face region of a current face can be estimated by the following equation, and is thus extracted in step 14 for further processing.
  • the face extractor of the FOI tracker may perform this extraction based on the proportion between the eye distance and the face width and face height:
  • Face Width = 2.5 × Eye Distance
  • Face Height = 3.5 × Eye Distance
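A possible reading of this extraction step in code, assuming the face box is centred horizontally on the eye midpoint and that the eyes sit in the upper part of the box; the exact anchoring is not specified in the text and is an assumption.

```python
import math

def face_region_from_eyes(left_eye, right_eye):
    """Estimate a face bounding box (left, top, width, height) from the two eye
    positions, using Face Width = 2.5 x Eye Distance and Face Height = 3.5 x Eye
    Distance. Centring the box on the eye midpoint and placing the eyes at 40% of
    the box height are illustrative assumptions."""
    (x1, y1), (x2, y2) = left_eye, right_eye
    d_eye = math.hypot(x2 - x1, y2 - y1)        # eye distance
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0   # eye midpoint
    width, height = 2.5 * d_eye, 3.5 * d_eye
    left = cx - width / 2.0
    top = cy - 0.4 * height
    return left, top, width, height
```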
  • the face region separator divides the current image into several face check regions. Then a face region movement is used to detect a sequence change between two successive frames by means of the face boundary detector in step 18.
  • in everyday life, most people do not move extremely fast.
  • the average walking speed of an adult can be reasonably estimated as 4~5 km/hour, equalling 1.1~1.4 m/s. It is assumed that the face recognition system can process images at 20 frames per second (fps); the walking speed then corresponds to 5.5~7 cm/frame.
  • the face movement boundary of a certain person between two successive images can be defined as M_fb.
  • M_fb is accordingly set to 5.5~7 cm/image.
  • M_fb can be roughly estimated as the eye distance, denoted by the following equation: M_fb ≈ D_eye
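Using M_fb ≈ D_eye, the face check region for the next frame can be obtained by expanding the extracted face region by the movement boundary on every side, clipped to the frame. The sketch below is an illustrative interpretation, not the patent's exact construction.

```python
def face_check_region(face_box, d_eye, frame_w, frame_h):
    """Expand the extracted face region (left, top, width, height) by the movement
    boundary M_fb ~ D_eye on every side, clipped to the frame, to obtain the face
    check region for the next frame."""
    left, top, width, height = face_box
    m_fb = d_eye
    new_left = max(0.0, left - m_fb)
    new_top = max(0.0, top - m_fb)
    new_right = min(float(frame_w), left + width + m_fb)
    new_bottom = min(float(frame_h), top + height + m_fb)
    return new_left, new_top, new_right - new_left, new_bottom - new_top
```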
  • Fig. 3 depicts the estimation of a boundary for a possible face region in the next frame from the face location of the current frame.
  • the possible face region for the next frame is defined as a face check region (marked in light grey) .
  • Each extracted face defines a corresponding face check region.
  • the face boundary detector is introduced to detect whether a current frame contains any faces in the boundary corresponding to the face check regions defined in the last preceding frame image. If yes, the current face is considered to probably remain the same as in the last frame; if not, either the face might be occluded by another object, or the face extractor fails due to too much rotation, a sudden lighting change, etc. In this case, the face region is reserved for the next frame image and is to be further examined in the downstream frame filter.
  • motion detection is based on the simple image difference method, which calculates the motion pixels as a percentage of the total number of pixels.
  • a pixel is defined as a motion pixel when the intensity change of the pixel is bigger than a predefined threshold I_t.
  • the motion parameter m_f can be calculated by: m_f = (n_d / n_total) × 100%, where n_d represents the number of motion pixels and n_total represents the total number of pixels in the face region. If m_f is below a certain predefined threshold, the face region is supposed to contain the same face, independently of the image-based face detector results. For example, when a head rotates to a certain degree or shows only a profile, many image-based face detection algorithms fail to detect it while the temporal-based motion detection does work.
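A direct transcription of this motion measure, assuming grayscale face check regions of identical size and an illustrative intensity threshold I_t; the threshold value is an assumption.

```python
import numpy as np

def motion_parameter(prev_region: np.ndarray, curr_region: np.ndarray,
                     intensity_threshold: float = 15.0) -> float:
    """Compute m_f = n_d / n_total * 100% for a face check region, where a pixel
    counts as a motion pixel when its intensity change exceeds the threshold I_t."""
    diff = np.abs(curr_region.astype(np.int16) - prev_region.astype(np.int16))
    n_d = int(np.count_nonzero(diff > intensity_threshold))
    n_total = diff.size
    return 100.0 * n_d / n_total
```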
  • the face verifier is used to further compare the current examined face with the corresponding face in the last frame. If the similarity between both faces is no less than a predefined verifying similarity threshold (VST), they are determined to be the same face. If the similarity is below the threshold, the face is further checked in the downstream frame filter.
  • the frame filter, with a length of a certain number of frames (e.g. 7 frames), is intended to buffer the results of the face movement detector and the face verifier, and to filter them to finally decide how many faces there are in those frames and which is which.
  • in step 24, the quality selector is used to decide whether to select a certain face in the current frame for further processing in the processor. Since each check region is independently calculated from each detected face, the proposed temporal-based method has no trouble handling multiple faces.
  • the face recognition is performed through steps 26 to 28. Any robust and fast image-based face recognition method might be used in the invention.
  • the technique of FaceVACS of Cognitec Systems GmbH is chosen. It is a feature-based method which extracts local features from a face shot and transforms them into one vector.
  • Each database is a vector set which consists of multiple vectors.
  • the recogniser outputs an instant similarity value (ISV) S_v (lying between 0 and 1, with 0 meaning "no similarity at all" and 1 meaning "the same") between a current face shot and the most similar face in the databases.
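A schematic version of this lookup, assuming each database is a collection of feature vectors and that a similarity function returning a value in [0, 1] is available; the actual FaceVACS comparison is proprietary and not reproduced here.

```python
def instant_similarity_value(face_vector, databases, similarity):
    """Return the ISV: the highest similarity between the current face vector and
    any enrolled vector across all face databases, plus the matching database index.
    `similarity` is an assumed helper returning a value in [0, 1]."""
    best_isv, best_db = 0.0, None
    for idx, vectors in enumerate(databases):
        for v in vectors:
            s = similarity(face_vector, v)
            if s > best_isv:
                best_isv, best_db = s, idx
    return best_isv, best_db
```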
  • in step 28, a filter defined with an adaptive recognizing threshold (ART) is provided. Intuitively, if the instant similarity value (ISV) S_v is no less than the ART, the current mug shot is identified.
  • in step 40, a new face database is created through the database creator. If the current selected face is identified as a known face, then the process goes to step 30, which further determines whether or not to update a certain database with the current selected face.
  • a filter defined with an adaptive updating threshold (AUT) is applied to keep enough variety of a certain database and thus maintaining the robustness of face recognition.
  • a filter with a threshold of a time parameter is used to keep updating the databases with the person's latest visage based on a predefined time period parameter. If the time parameter does not fulfil the threshold, no updating will be performed and the current face shot will be discarded, as in step 38. If the time parameter fulfils the threshold, the face database will be updated with the recent views of the face, as performed in step 36. In such a way, the system keeps updating the database according to a time period parameter.
  • in step 42, if the processor determines that the no-face-extracted status has lasted for a predefined period of time, then an idle period is determined in step 44. If it is determined in step 46 that redundant databases exist, the redundant databases will be merged by the database merger and face databases of non-interest will be removed by the database remover.
  • the inventive system runs to recognize one salient face for each frame, without loss of generality, although multiple people may exist.
  • the result shows that the inventive system can successfully run with multiple people in free arbitrary motion, compared to FaceVACS which produced less satisfactory results.
  • the invention explores the general ways to further compensate for face detection and face recognition limitations by using the temporal information from video sequences.
  • the invention further solves existing challenges mentioned above and presents a system which can automatically and passively recognize persons with free and arbitrary motion and runs in a completely unsupervised manner.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention relates to a face recognition system comprising a detector continuously acquiring one or more images from a live video source, a face of interest tracker tracking multiple faces in random motion and filtering and selecting specific faces, and a processor determining whether the selected faces are known or unknown, creating new data sets for unknown faces, adaptively updating the existing data sets in the database for recognized known faces, merging redundant face databases and removing face databases of non-interest. The system is highly efficient and can handle multiple persons appearing at the same time, with no requirement for user interaction and no human supervision or assistance.
PCT/EP2005/001988 2004-03-05 2005-02-25 Face recognition system and method WO2005096213A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP04300122 2004-03-05
EP04300122.1 2004-03-05

Publications (1)

Publication Number Publication Date
WO2005096213A1 true WO2005096213A1 (fr) 2005-10-13

Family

ID=34960846

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2005/001988 WO2005096213A1 (fr) 2004-03-05 2005-02-25 Face recognition system and method

Country Status (1)

Country Link
WO (1) WO2005096213A1 (fr)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2435119A (en) * 2006-02-10 2007-08-15 Samsung Electronics Co Ltd Controlling access to user information in a videophone with a camera
CN100361138C (zh) * 2005-12-31 2008-01-09 北京中星微电子有限公司 视频序列中人脸的实时检测与持续跟踪的方法及系统
EP1901239A2 (fr) 2006-09-06 2008-03-19 Yamaha Corporation Appareil d'authentification biométrique
DE102009048118A1 (de) * 2009-10-02 2011-04-07 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Verfahren und Vorrichtung zum Verwalten von Objektansichtsdaten in einer Objektdatenbank
CN101477616B (zh) * 2008-01-04 2012-01-04 华晶科技股份有限公司 人脸检测与追踪方法
JP2012242891A (ja) * 2011-05-16 2012-12-10 Canon Inc 顔認識装置、顔認識方法、顔認識プログラム、およびそのプログラムを記録した記録媒体
CN102833182A (zh) * 2012-08-09 2012-12-19 上海量明科技发展有限公司 即时通信中进行人脸识别的方法、客户端及系统
US8494231B2 (en) 2010-11-01 2013-07-23 Microsoft Corporation Face recognition in video content
EP2704052A1 (fr) * 2012-08-28 2014-03-05 Solink Corporation Système de vérification de transaction
WO2017004464A1 (fr) * 2015-06-30 2017-01-05 Nec Corporation Of America Système de reconnaissance faciale
US10402627B2 (en) 2015-01-04 2019-09-03 Huawei Technologies Co., Ltd. Method and apparatus for determining identity identifier of face in face image, and terminal
CN110750670A (zh) * 2019-09-05 2020-02-04 北京旷视科技有限公司 陌生人监控方法、装置、系统和存储介质
CN110929644A (zh) * 2019-11-22 2020-03-27 南京甄视智能科技有限公司 基于启发式算法的多模型融合人脸识别方法、装置、计算机系统与可读介质
CN112686175A (zh) * 2020-12-31 2021-04-20 北京澎思科技有限公司 人脸抓拍方法、系统及计算机可读存储介质
EP4113325A3 (fr) * 2007-07-24 2023-01-25 Samsung Electronics Co., Ltd. Système et procédé de sauvegarde d'un contenu numérique classé par regroupement personnalisé
US11687635B2 (en) 2019-09-25 2023-06-27 Google PLLC Automatic exposure and gain control for face authentication

Non-Patent Citations (13)

* Cited by examiner, † Cited by third party
Title
"FACES AS PATTERNS", RECOGNISING FACES, 1988, pages 37 - 58,I, XP001202185 *
"PERCEPTION AND ATTENTION", COGNITIVE PSYCHOLOGY, 2001, pages 73 - 107,I, XP008043191 *
"SEMANTIC CODING OF FACES", RECOGNISING FACES, 1988, pages 59 - 91,I, XP001202186 *
BIEDERMAN I ET AL: "NEUROCOMPUTATIONAL BASES OF OBJECT AND FACE RECOGNITION", PHILOSOPHICAL TRANSACTIONS. ROYAL SOCIETY OF LONDON. BIOLOGICAL SCIENCES, ROYAL SOCIETY, LONDON, GB, vol. 352, no. 1358, 29 August 1997 (1997-08-29), pages 1203 - 1219, XP008045101, ISSN: 0962-8436 *
BLICKLE T W ET AL: "TESTS OF A THEORY OF HUMAN IMAGE UNDERSTANDING: PART II. THE PERCEPTION OF DEGRADED OBJECTS", PROCEEDINGS OF THE HUMAN FACTORS SOCIETY. ANNUAL MEETING, vol. 1, 29 September 1986 (1986-09-29), pages 301 - 305, XP008045108 *
ELLIS H D: "INTRODUCTION: PROCESSES UNDERLYING FACE RECOGNITION", NEUROPSYCHOLOGY OF FACE PERCEPTION AND FACIAL EXPRESSION, ERLBAUM ASSOCIATES, HILLSDALE, NJ, US, 1986, pages 1 - 27, XP008045151 *
FRANKISH K: "A MATTER OF OPINION", PHILOSOPHICAL PSYCHOLOGY, CARFAX PUBLISHING, ABINGDON, GB, vol. 11, no. 4, 1998, pages 423 - 442, XP008045090, ISSN: 0951-5089 *
GONG S ET AL: "dynamic vision, from images to face recognition, PASSAGE", DYNAMIC VISION : FROM IMAGES TO FACE RECOGNITION, LONDON : IMPERIAL COLLEGE PRESS, GB, 2000, pages 134 - 136,145, XP002325051, ISBN: 1-86094-181-8 *
QUANREN XIONG ET AL: "Mugshot database acquisition in video surveillance networks using incremental auto-clustering quality measures", ADVANCED VIDEO AND SIGNAL BASED SURVEILLANCE, 2003. PROCEEDINGS. IEEE CONFERENCE ON 21-22 JULY 2003, PISCATAWAY, NJ, USA,IEEE, 21 July 2003 (2003-07-21), pages 191 - 198, XP010648384, ISBN: 0-7695-1971-7 *
RAYTCHEV B ET AL: "Unsupervised face recognition from image sequences based on clustering with attraction and repulsion", PROCEEDINGS 2001 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION. CVPR 2001. KAUAI, HAWAII, DEC. 8 - 14, 2001, PROCEEDINGS OF THE IEEE COMPUTER CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, LOS ALAMITOS, CA, IEEE COMP. SOC, US, vol. VOL. 1 OF 2, 8 December 2001 (2001-12-08), pages 25 - 30, XP010584096, ISBN: 0-7695-1272-0 *
RUUD BOLLE: "Guide to Biometrics", 2004, SPRINGER, NEW YORK, USA, XP002333807 *
S. GONG: "Dynamic Vision : from Images to Face Recognition", 2000, IMPERIAL COLLEGE PRESS, LONDON, ENGLAND, XP002333805 *
S. Y. KUNG: "Biometric Authentification : a Machine Learning Approach", 2005, PRENTICE HALL, UPPER SADDLE RIVER, NJ, USA, XP002333806 *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100361138C (zh) * 2005-12-31 2008-01-09 北京中星微电子有限公司 视频序列中人脸的实时检测与持续跟踪的方法及系统
GB2435119A (en) * 2006-02-10 2007-08-15 Samsung Electronics Co Ltd Controlling access to user information in a videophone with a camera
EP1901239A2 (fr) 2006-09-06 2008-03-19 Yamaha Corporation Appareil d'authentification biométrique
EP1901239A3 (fr) * 2006-09-06 2012-01-04 Yamaha Corporation Appareil d'authentification biométrique
EP4113325A3 (fr) * 2007-07-24 2023-01-25 Samsung Electronics Co., Ltd. Système et procédé de sauvegarde d'un contenu numérique classé par regroupement personnalisé
CN101477616B (zh) * 2008-01-04 2012-01-04 华晶科技股份有限公司 人脸检测与追踪方法
DE102009048118B4 (de) 2009-10-02 2020-06-18 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Verfahren und Vorrichtung zum Verwalten von Objektansichtsdaten in einer Objektdatenbank
DE102009048118A1 (de) * 2009-10-02 2011-04-07 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Verfahren und Vorrichtung zum Verwalten von Objektansichtsdaten in einer Objektdatenbank
WO2011039184A3 (fr) * 2009-10-02 2011-06-16 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Procédé et dispositif de gestion de données vues objet dans une base de données objet
US8494231B2 (en) 2010-11-01 2013-07-23 Microsoft Corporation Face recognition in video content
JP2012242891A (ja) * 2011-05-16 2012-12-10 Canon Inc 顔認識装置、顔認識方法、顔認識プログラム、およびそのプログラムを記録した記録媒体
CN102833182A (zh) * 2012-08-09 2012-12-19 上海量明科技发展有限公司 即时通信中进行人脸识别的方法、客户端及系统
EP2704052A1 (fr) * 2012-08-28 2014-03-05 Solink Corporation Système de vérification de transaction
US10402627B2 (en) 2015-01-04 2019-09-03 Huawei Technologies Co., Ltd. Method and apparatus for determining identity identifier of face in face image, and terminal
US10657362B2 (en) 2015-06-30 2020-05-19 Nec Corporation Of America Facial recognition system
US11501566B2 (en) 2015-06-30 2022-11-15 Nec Corporation Of America Facial recognition system
WO2017004464A1 (fr) * 2015-06-30 2017-01-05 Nec Corporation Of America Système de reconnaissance faciale
CN110750670A (zh) * 2019-09-05 2020-02-04 北京旷视科技有限公司 陌生人监控方法、装置、系统和存储介质
CN110750670B (zh) * 2019-09-05 2022-04-19 北京旷视科技有限公司 陌生人监控方法、装置、系统和存储介质
US11687635B2 (en) 2019-09-25 2023-06-27 Google PLLC Automatic exposure and gain control for face authentication
CN110929644A (zh) * 2019-11-22 2020-03-27 南京甄视智能科技有限公司 基于启发式算法的多模型融合人脸识别方法、装置、计算机系统与可读介质
CN110929644B (zh) * 2019-11-22 2020-06-26 南京甄视智能科技有限公司 基于启发式算法的多模型融合人脸识别方法、装置、计算机系统与可读介质
CN112686175A (zh) * 2020-12-31 2021-04-20 北京澎思科技有限公司 人脸抓拍方法、系统及计算机可读存储介质

Similar Documents

Publication Publication Date Title
WO2005096213A1 (fr) Face recognition system and method
US11386900B2 (en) Visual speech recognition by phoneme prediction
CN106557726B (zh) 一种带静默式活体检测的人脸身份认证系统及其方法
US8396262B2 (en) Apparatus and method for face recognition and computer program
Krueger et al. Exemplar-based face recognition from video
Alletto et al. Understanding social relationships in egocentric vision
US20050207622A1 (en) Interactive system for recognition analysis of multiple streams of video
WO2004055715A1 (fr) Identification de visage ne variant pas avec l'expression
WO2002047386A1 (fr) Procede et appareil de prevision d'evenements dans une videoconference et autres applications
EP3555799B1 (fr) Procédé de sélection de trames utilisées dans un traitement de visage
CN109299690B (zh) 一种可提高视频实时人脸识别精度的方法
US20190114470A1 (en) Method and System for Face Recognition Based on Online Learning
GB2409030A (en) Face detection
GB2414616A (en) Comparing test image with a set of reference images
CN112016353A (zh) 一种基于视频的人脸图像进行身份识别方法及装置
GB2409029A (en) Face detection
CN113837006B (zh) 一种人脸识别方法、装置、存储介质及电子设备
KR20200119425A (ko) 도메인 적응 기반 객체 인식 장치 및 그 방법
Rathore et al. A survey on face detection and recognition
KR100596498B1 (ko) 다중 프레임 기반 온라인 얼굴 인식 시스템
CN113779290A (zh) 一种摄像头人脸识别聚合优化方法
CN113628248A (zh) 行人驻留时长确定方法、装置以及计算机可读存储介质
CN113297423A (zh) 一种推送方法、推送装置及电子设备
Mou et al. Automatic databases for unsupervised face recognition
Pande et al. The video face book

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

122 Ep: pct application non-entry in european phase