CN108197586B - Face recognition method and device

Info

Publication number: CN108197586B
Application number: CN201810036015.1A
Authority: CN (China)
Prior art keywords: image, face, detected, information, optical flow
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN108197586A (application publication)
Inventors: 袁培江, 史震云, 李建民
Current assignee: Beijing Sensing Tech Co ltd
Original assignee: Beijing Sensing Tech Co ltd
Application filed by Beijing Sensing Tech Co ltd
Publication of CN108197586A (application); publication of CN108197586B (grant)

Classifications

    All entries fall under G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING > G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data:
    • G06V40/161 Human faces: Detection; Localisation; Normalisation
    • G06V40/168 Human faces: Feature extraction; Face representation
    • G06V40/172 Human faces: Classification, e.g. identification
    • G06V40/45 Spoof detection, e.g. liveness detection: Detection of the body part being alive

Abstract

The present disclosure relates to a face recognition method and apparatus. The method comprises: acquiring a first image and a second image of the face of an object to be detected, the two images being formed by different imaging methods; extracting first feature information from the first image; extracting second feature information from the second image; performing data fusion on the first feature information and the second feature information; performing optical flow processing on the fused feature information; and determining, from the optical-flow-processed feature information, whether the object to be detected is a living body. Because the features of images acquired by different imaging methods are fused before optical flow processing, the method can determine whether the object to be detected is a living body, thereby preventing replay spoofing with pictures or videos and improving the attack resistance of face recognition. Moreover, weighting the features by the individual feature parts of the face further improves the recognition speed, attack resistance, and illumination adaptability of infrared face recognition.

Description

Face recognition method and device
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a face recognition method and apparatus.
Background
Face recognition is a very active research area in biometric authentication. It is favored for its natural and inexpensive interaction, non-contact acquisition, and ease of use, and it can be applied to access control and attendance, system login, entry and exit management, and the comparison of suspects' photos. However, there has been little research on detecting face impersonation attacks, which limits the application of face authentication systems. Meanwhile, the diversification of face impersonation techniques poses great challenges to detection research, so face impersonation attack detection has increasingly become a hot spot in the field of biometric authentication research.
For face recognition systems, replaying photos and videos of legitimate users is the most common means of attack. With today's technology it is easy to obtain such photos and videos, for example from the web or with a pinhole camera. Compared with other biometric traits, facial features are the easiest to acquire. Therefore, how to give a face recognition system a strong anti-spoofing capability has become a problem that urgently needs to be solved in face authentication.
There are many methods that distinguish living bodies from replay spoofing attacks by post-processing the biometric information: for example, detecting liveness with image processing techniques such as wavelet analysis, Fourier transform, and local texture analysis, or analyzing live motion, such as fingerprint bending and facial motion, by optical flow. Physiological information can also be acquired from an additional device, by which liveness information is detected. Other methods detect liveness by challenge and response: in face recognition, Frischholz et al. require the user to make the head gesture indicated by a signal light, and the method of Kollreider et al. requires the user to read digits prompted by the system, which checks by lip reading whether the digits read match the prompt.
However, existing liveness detection methods have the following drawbacks:
1. Poor usability. Some systems require simple cooperation and responses from the user, so the anti-spoofing technique must offer good interactivity. Three-dimensional depth analysis, for example, requires some head movement, because a stationary face produces no useful depth coordinates; yet when a photograph is warped, the estimated depth coordinates also vary greatly, just as for a living face.
2. Poor security. Current infrared face recognition systems have certain advantages on static images but are easily attacked by replayed external videos or pictures and by 3D models. Experiments by Kollreider et al. have shown that optical flow methods can effectively detect photo spoofing but not video spoofing. Such methods also require slight head movement from the user, and because they estimate the motion of different head regions through optical flow, changes in illumination severely interfere with those motion estimates.
3. Weak adaptability. Some face liveness recognition methods rely on image quality and reflectance, so they place high demands on the illumination of the application scene, limiting the occasions where face recognition can be applied. The method of Li et al. relies mainly on the fact that the sharpness of a face is generally reduced after photographic re-imaging, and therefore cannot detect spoofing with a high-resolution, large face photograph. Its frequency-dynamics descriptor is also sensitive to light changes: when the illumination on the photo changes, the measured energy changes dynamically as well.
Disclosure of Invention
In view of this, the present disclosure provides a face recognition method and apparatus.
According to an aspect of the present disclosure, there is provided a face recognition method including:
acquiring a first image and a second image of the face of an object to be detected, wherein the imaging methods of the first image and the second image are different;
extracting first feature information from the first image;
extracting second feature information from the second image;
performing data fusion on the first characteristic information and the second characteristic information;
performing optical flow method processing on the feature information after data fusion;
and determining whether the object to be detected is a living body according to the characteristic information processed by the optical flow method.
According to another aspect of the present disclosure, there is provided a face recognition apparatus including:
an acquisition module, configured to acquire a first image and a second image of the face of an object to be detected, wherein the imaging methods of the first image and the second image are different;
the first feature extraction module is used for extracting first feature information from the first image;
the second characteristic extraction module is used for extracting second characteristic information from the second image;
the data fusion module is used for carrying out data fusion on the first characteristic information and the second characteristic information;
the optical flow processing module is used for carrying out optical flow method processing on the feature information after the data fusion;
and the judging module is used for determining whether the object to be detected is a living body according to the characteristic information processed by the optical flow method.
According to another aspect of the present disclosure, there is provided a face recognition apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of any embodiment of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement a method according to any one of the embodiments of the present disclosure.
In this method, the features of images acquired by different imaging methods are fused and then processed by the optical flow method, so that whether the object to be detected is a living body can be determined; replay spoofing with pictures or videos is thereby prevented, and the attack resistance of face recognition is improved. Moreover, because the features are weighted by the individual feature parts of the face, the recognition speed and attack resistance of face recognition can be further improved, and the method is applicable under most illumination environments, giving it strong illumination adaptability.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart illustrating a method of face recognition according to an exemplary embodiment.
FIG. 2 is another flow diagram illustrating a method of face recognition according to an example embodiment.
Fig. 3 is another flow chart illustrating a method of face recognition according to another exemplary embodiment.
Fig. 4 is a block diagram illustrating a face recognition apparatus according to an example embodiment.
Fig. 5 is a block diagram illustrating a face recognition apparatus according to another exemplary embodiment.
Fig. 6 is a block diagram illustrating a face recognition apparatus according to another exemplary embodiment.
Fig. 7 is a block diagram illustrating a face recognition apparatus according to another exemplary embodiment.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
FIG. 1 is a flow chart illustrating a method of face recognition according to an exemplary embodiment. As shown in fig. 1, the face recognition method includes the steps of:
Step 101: acquire a first image and a second image of the face of the object to be detected, the first image and the second image being formed by different imaging methods.
Step 102: extract first feature information from the first image.
Step 103: extract second feature information from the second image.
Step 104: perform data fusion on the first feature information and the second feature information.
Step 105: perform optical flow processing on the fused feature information.
Step 106: determine whether the object to be detected is a living body according to the optical-flow-processed feature information.
For example, an infrared light image and a visible light image containing the face of the object to be detected (for example, a human face) may be captured with a binocular camera. There is no ordering requirement between acquiring the infrared light image and the visible light image. Once both are acquired, feature information is extracted from each image, the two sets of features are fused, and the fused feature information undergoes optical flow processing. Whether the object to be detected is a living body can then be determined from the optical-flow-processed feature information.
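As an illustration, frame acquisition from such a binocular rig might look like the sketch below; the device indices 0 and 1 are assumptions, since they depend on how the visible light and infrared cameras enumerate on the host.

```python
import cv2

# Hedged sketch of binocular acquisition: indices 0 and 1 are assumptions.
cap_visible = cv2.VideoCapture(0)   # visible light camera
cap_infrared = cv2.VideoCapture(1)  # infrared camera

ok_v, visible_frame = cap_visible.read()
ok_i, infrared_frame = cap_infrared.read()
if not (ok_v and ok_i):
    raise RuntimeError("failed to grab frames from the binocular camera")

cap_visible.release()
cap_infrared.release()
```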
In one possible implementation manner, acquiring a first image and a second image including a face of an object to be measured, where the first image and the second image have different imaging methods, includes:
respectively shooting an infrared light image and a visible light image which comprise the face of the object to be detected by adopting a binocular camera;
converting the infrared light image into a gray scale image and carrying out noise reduction pretreatment to obtain a first image;
and converting the visible light image into a gray scale image and carrying out noise reduction preprocessing to obtain a second image.
In the present disclosure, the captured infrared light image and visible light image may each be preprocessed, for example by converting the color image into a grayscale image and then filtering out noise, so as to reduce the influence of noise on the image and the amount of data in subsequent processing. Feature extraction, data fusion, and optical flow processing are then performed on the preprocessed images to judge whether the object to be detected is a living body.
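Concretely, such preprocessing might be implemented as in the following sketch (OpenCV-based; the 5x5 Gaussian kernel is an assumption, since the disclosure only calls for grayscale conversion plus some form of noise reduction):

```python
import cv2

def preprocess(frame_bgr):
    """Convert a captured color frame to grayscale and denoise it.

    A minimal sketch of the preprocessing step; the 5x5 Gaussian kernel is
    an assumption -- any reasonable noise-reduction filter would do.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)  # color -> grayscale
    return cv2.GaussianBlur(gray, (5, 5), 0)            # suppress sensor noise

# first_image = preprocess(infrared_frame)   # from the infrared camera
# second_image = preprocess(visible_frame)   # from the visible light camera
```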
In one possible implementation manner, in step 105, performing optical flow processing on the feature information after data fusion includes:
and carrying out optical flow processing on the feature information after data fusion to obtain the motion information of each pixel of the face of the object to be detected.
In the present disclosure, after optical flow processing, the motion information of each pixel of the face of the object to be detected can be obtained. From the motion information of these pixels, it can be determined whether the object is a living body. If the object to be detected is not a living body, the subsequent matching process can be stopped and a prompt output that the object is not a living body, warning that an attack may be occurring. This prevents replay spoofing and improves the attack resistance of face recognition.
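By way of illustration, per-pixel motion can be obtained with a dense optical flow algorithm; the sketch below uses OpenCV's Farneback method on two consecutive grayscale frames, and the liveness rule at the end (thresholding the mean motion magnitude) is an assumption added for illustration, since the disclosure does not fix a specific decision criterion.

```python
import cv2
import numpy as np

def per_pixel_motion(prev_gray, next_gray):
    """Dense optical flow: one (dx, dy) motion vector per pixel."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    magnitude, _angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return magnitude  # per-pixel motion magnitude, same shape as the frame

def looks_alive(magnitude, low=0.05, high=8.0):
    """Illustrative liveness rule; the thresholds are assumptions.

    A live face shows small, non-rigid motion between frames, whereas a
    replayed photo tends to be static or to move rigidly as a whole.
    """
    mean_motion = float(np.mean(magnitude))
    return low < mean_motion < high
```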
In one possible implementation, as shown in fig. 2, the method further includes:
step 201, if the object to be detected is a living body, comparing the feature information after the optical flow processing with the known image information to determine the identity of the object to be detected.
If the object to be detected is a living body, its identity can be further determined. For example, images of a number of subjects may be acquired in advance, processed (for example, by feature extraction), and stored in a database as known image information, together with identity-related information for each subject: name, gender, identification number, home address, and so on. If the pre-enrolled subjects include the object to be detected, its known image information exists in the database; matching the object against this known image information then yields its identity.
In one possible implementation manner, in step 201, comparing the feature information after the optical flow processing with the known image information to determine the identity of the object to be detected includes:
performing filtering, data optimization, and statistics on the per-pixel motion information after optical flow processing, using a Gaussian difference filter, the local binary pattern (LBP) feature algorithm, and a support vector machine (a sketch of this stage follows the list);
weighting the information of the visible light image of the known object and the statistical data according to the feature parts of the face;
and comparing the weighted result with a set threshold to determine whether the object to be detected is the known object.
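One way to assemble the filtering-and-statistics item above from standard components is sketched below; the Gaussian sigmas and LBP parameters are assumptions, and scikit-image and scikit-learn stand in for whichever filter, LBP, and SVM implementations a deployment would actually use.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def gaussian_difference(image, sigma1=1.0, sigma2=2.0):
    """Gaussian difference (difference-of-Gaussians) band-pass filter.

    The sigma values are assumptions; the filter suppresses both flat
    regions and fine noise in the motion map.
    """
    img = image.astype(np.float32)
    return (cv2.GaussianBlur(img, (0, 0), sigma1)
            - cv2.GaussianBlur(img, (0, 0), sigma2))

def lbp_histogram(image, points=8, radius=1):
    """Uniform LBP texture histogram (the statistics fed to the SVM)."""
    lbp = local_binary_pattern(image, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2),
                           density=True)
    return hist

# With labelled training histograms, a support vector machine performs the
# final statistical classification (training data is assumed to exist):
# clf = SVC(kernel="rbf").fit(train_histograms, train_labels)
# label = clf.predict([lbp_histogram(gaussian_difference(motion_map))])
```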
In this embodiment, information of the visible light images of a plurality of known objects may be collected in advance and saved in a database. During identity matching, the information of a known object's visible light image is extracted from the database and weighted together with the statistics of the object to be detected. The weighting takes the important feature parts of the face into account.
Specifically, the face comprises a plurality of feature parts, such as the eyes, mouth, nose, and forehead. A corresponding weight can be set for each feature part so as to emphasize the important parts and improve matching performance; the information of the visible light image of the known object and the statistics are then weighted part by part, as given by Equation 1 and its description below. The eyes and mouth are the parts where the infrared face and the visible light face correlate most strongly, so weighting the eye and mouth features heavily is preferable.
In this method, the features of images acquired by different imaging methods are fused and then processed by the optical flow method, so whether the object to be detected is a living body can be determined; replay spoofing with pictures or videos is thereby prevented, and the attack resistance of face recognition is improved. In addition, weighting the features by the individual feature parts of the face further improves the recognition speed and attack resistance of infrared face recognition, and the system works under most illumination environments, with strong illumination adaptability.
A specific implementation of the face recognition method proceeds as follows. The optical flow method uses the temporal variation and correlation of pixel intensity data in an image sequence to determine the "motion" at each pixel position, so motion information for every pixel is obtained from the image sequence. Statistical analysis of the data uses a Gaussian difference filter, LBP features, and a support vector machine. Because the optical flow field is sensitive to object motion, eyeball movement and blinking can both be detected with it.
Research on the correlation of the individual face parts shows that the eyes and mouth are the parts where the infrared face and the visible light face correlate most strongly. Computing the correlation of the whole face as a weighted combination of per-part correlations outperforms computing the whole-face correlation directly, especially when no glasses are worn. Because the correlation between thermal infrared and visible light faces differs greatly across face parts, a weight is set for each part when the face correlation is computed, to emphasize the important parts and improve matching performance. The weighting formula is Equation 1:
$$R_{\text{Total}}(x_i, y_j) = \sum_{k=1}^{N} V_k \, R_k(x_i, y_j) \qquad \text{(Equation 1)}$$
where $R_{\text{Total}}(x_i, y_j)$ is the weighted overall correlation; $R_k(x_i, y_j)$ denotes the correlation between $x_i$ and $y_j$ at the $k$-th feature part; $N$ is the number of feature parts of the face and $k$ indexes them; $V_k$ is the weight of the $k$-th feature part after face segmentation; $x_i$ is a feature part of the face in a visible light image from the database; $y_j$ is the corresponding feature part of the face in the binocular-captured image; and $i$ and $j$ are positive integers.
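Equation 1 translates directly into code; a minimal sketch, assuming the per-part correlations R_k have already been computed (for example, as normalized cross-correlations between corresponding face parts), with illustrative weights that are not taken from the disclosure:

```python
import numpy as np

def weighted_correlation(part_correlations, part_weights):
    """Equation 1: R_Total(x_i, y_j) = sum_k V_k * R_k(x_i, y_j).

    part_correlations -- R_k between the k-th feature part of the database
                         visible light image and of the captured image
    part_weights      -- weight V_k of the k-th feature part
    """
    v = np.asarray(part_weights, dtype=float)
    r = np.asarray(part_correlations, dtype=float)
    return float(np.dot(v / v.sum(), r))  # normalizing sum(V_k)=1 is an assumption

# Illustrative values (assumptions): eyes and mouth weighted most heavily.
# parts:   left eye, right eye, mouth, nose, forehead
r_total = weighted_correlation([0.91, 0.88, 0.85, 0.60, 0.55],
                               [0.25, 0.25, 0.25, 0.15, 0.10])
```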
Fig. 3 is a flowchart illustrating a face recognition method according to another exemplary embodiment. As shown in fig. 3, taking the infrared light image to be an infrared light face photo and the visible light image to be a visible light face photo as an example, the method may comprise the following steps:
step 1: the binocular camera picks up the face of the object to be detected. The binocular camera comprises a visible light camera and an infrared light camera. And respectively picking up the face photos under the visible light camera and the infrared light camera to obtain the infrared light face photo and the visible light face photo.
Step 2: and preprocessing two pictures shot under the binocular camera to extract human face features. The preprocessing can reduce the influence of noise on the image and reduce the data volume of subsequent processing. There are various ways of preprocessing, such as converting a color image into a gray-scale image, etc., and then filtering and removing noise, etc.
And step 3: and performing data fusion on the extracted face feature information of the two photos. Data fusion is the union, correlation and combination of data and information from multiple information sources. There are various ways of data fusion, and this disclosure is not limited thereto.
In the steps 1 to 3, the processes of shooting, preprocessing and feature extraction of the infrared light face picture and the visible light face picture have no time sequence requirement, and can be executed simultaneously or sequentially.
Step 4: perform optical flow processing on the fused feature information to obtain the motion information of each pixel of the face of the object to be detected.
Step 5: judge whether the object to be detected is a living body according to the motion information after optical flow processing. If it is a living body, execute step 6; if not, a "not a living body" prompt may be output, and the process ends.
Step 6: filter, optimize, and compile statistics on the data from step 4, using a Gaussian difference filter, the local binary pattern (LBP) feature algorithm, and a support vector machine.
Step 7: acquire the face feature data of a person's visible light photo from the database. Using Equation 1 as the weighting algorithm, weight the face feature data from the database photo together with the face feature data from step 6 by each feature part of the face, such as the mouth and eyes. The face may be divided into feature parts such as the mouth, eyes, nose, forehead, and cheeks, and the weight of each feature part may be set in advance.
Step 8: compare the result of the weighting with the set threshold and output the comparison result, for example, whether the currently photographed object to be detected matches the photo information of a person enrolled in the database. If it matches, it can be determined that the detected object is that enrolled person and, from step 5, a real living body.
For example, suppose a man arrives at the doorway. The binocular camera photographs him: the visible light camera captures picture A and the infrared light camera captures picture B. Feature extraction and data fusion are performed on pictures A and B as described above, followed by optical flow processing. The optical flow method determines the "motion" at each pixel position from the temporal variation and correlation of pixel intensity data in the image sequence, and motion information is obtained for each pixel; from this, it is judged whether the man is a living body.
If he is judged to be a living body, the optical-flow-processed information can be weighted, and statistical analysis performed with a Gaussian difference filter, LBP features, and a support vector machine to form final data W. The face feature information X of the object to be detected is then extracted from the database, and processing W and X through the weighting function yields a value M. Comparing M with the threshold N determines whether the object to be detected is the target living person.
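In code, that last step is a plain threshold test; a minimal sketch, where the default threshold N = 0.8 is an assumption for illustration (the disclosure only requires a configurable set threshold):

```python
def identity_matches(m, threshold_n=0.8):
    """Compare the weighted score M against the set threshold N.

    The default 0.8 is an illustrative assumption; in practice N would be
    tuned on enrollment data to balance false accepts and false rejects.
    """
    return m >= threshold_n

# m = weighted_correlation(...)  # as in the Equation 1 sketch above
# print("target living person" if identity_matches(m) else "no match")
```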
In this method, the fusion of infrared and visible light data, combined with algorithms such as the optical flow method, improves the recognition speed and attack resistance of infrared face recognition, works under most illumination environments, and offers strong illumination adaptability. It effectively addresses the problems of existing infrared face recognition: poor security, low recognition efficiency, weak scene adaptability, and so on. Specifically, optical flow processing of the fused data yields per-pixel motion information from which liveness can be determined, preventing replay spoofing with pictures or videos and improving attack resistance. Weighting the features by the face parts before comparison further improves recognition speed and attack resistance, and suits most illumination environments. Because the object is already known to be a living body before the comparison stage, no head shaking or response is required of the user; there is no interactivity requirement, so the method is simple and convenient, that is, highly usable. Compared with plain feature extraction methods, it detects video or photo replay spoofing more readily, so security is high. It can also effectively recognize somewhat blurred images, so scene adaptability is strong.
Fig. 4 is a block diagram illustrating a face recognition apparatus according to an example embodiment. As shown in fig. 4, the face recognition apparatus may include:
an obtaining module 41, configured to obtain a first image and a second image of a face of an object to be detected, where imaging methods of the first image and the second image are different;
a first feature extraction module 42, configured to extract first feature information from the first image;
a second feature extraction module 43, configured to extract second feature information from the second image;
a data fusion module 44, configured to perform data fusion on the first feature information and the second feature information;
an optical flow processing module 45, configured to perform optical flow processing on the feature information after data fusion;
and the judging module 46 is configured to determine whether the object to be measured is a living body according to the feature information processed by the optical flow method.
In one possible implementation, as shown in fig. 5, the apparatus further includes:
and the comparison module 51 is configured to compare the feature information after the optical flow processing with the known image information to determine the identity of the object to be detected if the object to be detected is a living body.
In a possible implementation manner, the optical flow processing module 45 is further configured to perform optical flow processing on the feature information after data fusion to obtain motion information of each pixel of the face of the object to be detected.
In one possible implementation, the comparing module 51 includes:
the optimization submodule is used for performing filtering, data optimization, and statistics on the per-pixel motion information after optical flow processing, using a Gaussian difference filter, the local binary pattern (LBP) feature algorithm, and a support vector machine;
the weighting submodule is used for weighting the information of the visible light image of the known object and the counted data according to each characteristic part of the face;
and the comparison submodule is used for comparing the weighting result with a set threshold value so as to determine whether the identity of the object to be detected is a known object.
In a possible implementation manner, the obtaining module 41 includes:
the shooting submodule is used for respectively shooting an infrared light image and a visible light image which comprise the face of the object to be detected by adopting a binocular camera;
the first preprocessing submodule is used for converting the infrared light image into a gray scale image and carrying out noise reduction preprocessing on the gray scale image to obtain a first image;
and the second preprocessing submodule is used for converting the visible light image into a gray image and carrying out noise reduction preprocessing on the gray image to obtain a second image.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 6 is a block diagram illustrating a face recognition apparatus according to another exemplary embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 6, the apparatus 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the apparatus 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, images, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power components 806 provide power to the various components of device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the device 800. For example, the sensor assembly 814 may detect the open/closed status of the device 800, the relative positioning of components, such as a display and keypad of the device 800, the sensor assembly 814 may also detect a change in the position of the device 800 or a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in the temperature of the device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communications between the apparatus 800 and other devices in a wired or wireless manner. The device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the device 800 to perform the above-described methods.
Fig. 7 is a block diagram illustrating a face recognition apparatus 1900 according to another exemplary embodiment. For example, the apparatus 1900 may be provided as a server. Referring to fig. 7, the device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by the processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The device 1900 may also include a power component 1926 configured to perform power management of the device 1900, a wired or wireless network interface 1950 configured to connect the device 1900 to a network, and an input/output (I/O) interface 1958. The device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the apparatus 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), may execute the computer-readable program instructions by utilizing state information of the instructions to personalize the circuitry, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the techniques in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (6)

1. A face recognition method, comprising:
acquiring a first image and a second image of the face of an object to be detected, wherein the imaging methods of the first image and the second image are different;
extracting first feature information from the first image;
extracting second feature information from the second image;
performing data fusion on the first characteristic information and the second characteristic information;
performing optical flow method processing on the feature information after data fusion;
determining whether the object to be detected is a living body according to the characteristic information processed by the optical flow method;
if the object to be detected is a living body, filtering, data optimization and statistics are carried out on the motion information of each pixel of the face of the object to be detected after the optical flow processing by adopting a Gaussian difference filter, a Local Binary Pattern (LBP) characteristic algorithm and a support vector machine;
weighting the information of the visible light image of the known object and the counted data according to each characteristic part of the face;
and comparing the weighted result with a set threshold value to determine whether the identity of the object to be detected is a known object.
2. The method of claim 1, wherein acquiring a first image and a second image including a face of a subject to be measured, the first image and the second image being imaged differently, comprises:
respectively shooting an infrared light image and a visible light image which comprise the face of the object to be detected by adopting a binocular camera;
converting the infrared light image into a gray scale image and carrying out noise reduction pretreatment to obtain a first image;
and converting the visible light image into a gray scale image and carrying out noise reduction preprocessing to obtain a second image.
3. A face recognition apparatus, comprising:
an acquisition module, configured to acquire a first image and a second image of the face of an object to be detected, wherein the imaging methods of the first image and the second image are different;
the first feature extraction module is used for extracting first feature information from the first image;
the second characteristic extraction module is used for extracting second characteristic information from the second image;
the data fusion module is used for carrying out data fusion on the first characteristic information and the second characteristic information;
the optical flow processing module is used for carrying out optical flow method processing on the feature information after the data fusion;
the judging module is used for determining whether the object to be detected is a living body according to the characteristic information processed by the optical flow method;
the comparison module is used for filtering, optimizing data and counting the motion information of each pixel of the face of the object to be detected after the optical flow processing by adopting a Gaussian difference filter, a Local Binary Pattern (LBP) characteristic algorithm and a support vector machine if the object to be detected is a living body;
weighting the information of the visible light image of the known object and the counted data according to each characteristic part of the face;
and comparing the weighted result with a set threshold value to determine whether the identity of the object to be detected is a known object.
4. The apparatus of claim 3, wherein the obtaining module comprises:
the shooting submodule is used for respectively shooting an infrared light image and a visible light image which comprise the face of the object to be detected by adopting a binocular camera;
the first preprocessing submodule is used for converting the infrared light image into a gray scale image and carrying out noise reduction preprocessing on the gray scale image to obtain a first image;
and the second preprocessing submodule is used for converting the visible light image into a gray image and carrying out noise reduction preprocessing on the gray image to obtain a second image.
5. A face recognition apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of any one of claims 1 to 2.
6. A non-transitory computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method of any of claims 1 to 2.
CN201810036015.1A | Priority 2017-12-12 | Filed 2018-01-15 | Face recognition method and device | Active | CN108197586B (en)

Applications Claiming Priority (2)

Application Number | Priority Date
CN201711314866X | 2017-12-12
CN201711314866 | 2017-12-12

Publications (2)

Publication Number | Publication Date
CN108197586A (en) | 2018-06-22
CN108197586B (en) | 2020-04-21

Family

ID=62589418

Family Applications (1)

Application Number | Title
CN201810036015.1A (Active) | Face recognition method and device

Country Status (1)

CN | CN108197586B (en)





Legal Events

PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant