WO2019071664A1 - Human face recognition method and apparatus combined with depth information, and storage medium - Google Patents

Human face recognition method and apparatus combined with depth information, and storage medium

Info

Publication number
WO2019071664A1
WO2019071664A1 (PCT/CN2017/108804)
Authority
WO
WIPO (PCT)
Prior art keywords
face
image
target
recognized
sample
Prior art date
Application number
PCT/CN2017/108804
Other languages
French (fr)
Chinese (zh)
Inventor
戴磊
Original Assignee
平安科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2019071664A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Definitions

  • the present application relates to the field of face recognition technologies, and in particular, to a face recognition method, apparatus, and storage medium that combine depth information.
  • Face recognition is a non-contact biometric recognition technology based on human facial feature information for identity authentication.
  • face recognition technology has been widely used in many important industries and fields such as smart access control, public safety, entertainment, and military.
  • CNN Convolutional Neural Networks
  • the present invention provides a face recognition method, device and storage medium combining depth information, and the main purpose thereof is to realize accurate recognition of a human face by combining a face color image and a depth image.
  • the present application provides a face recognition method, including:
  • the sample library establishing step: acquiring 2N face images, including N face color images and corresponding N face depth images, preprocessing the face images, taking the preprocessed face images as face sample images, and establishing a face image sample library in which face IDs, face color images, and face depth images match each other, where N is an integer greater than 2 and each face depth image contains the distance between every point of the person's face and the image acquisition unit;
  • a model training step: training the convolutional neural network with the mutually matching face sample images to obtain a face classification recognition model and the feature vectors of the face sample images;
  • a target face image acquisition step: acquiring a target face image to be recognized, including a face color image and a corresponding face depth image;
  • a target recognition step: inputting the target face image to be recognized into the face classification recognition model, extracting a feature vector of the target face image to be recognized, searching the face image sample library according to the feature vector for a face sample image that matches the target face image to be recognized, and determining the face ID of the target face image to be recognized according to that face sample image.
  • the preprocessing in the sample library establishing step comprises removing image noise from the face images, correcting the face pose, and labeling the face images with face IDs.
  • searching the face image sample library for a face sample image matching the target face image to be recognized includes: calculating the vector distance between the feature vector of the target face image to be recognized and the feature vectors of the face sample images;
  • taking the face sample image whose vector distance is smallest, or smaller than a threshold, as the face sample image that matches the target face image to be recognized.
  • the vector distance is a cosine distance or a Euclidean distance.
  • the 2N face images in the sample library establishing step are obtained by the following method:
  • the first photographing step: using a camera to capture color images of persons appearing in the shooting area within a preset time range and the corresponding depth images;
  • the first face detecting step: extracting face images from the color images and depth images by using a face detection algorithm to obtain N face color images and the corresponding N face depth images.
  • the target face image is obtained by the following method:
  • the second shooting step: capturing, with the camera, a color image and a depth image of the target to be recognized appearing in the current shooting area;
  • the second face detecting step: extracting the face image from the color image and depth image of the target to be recognized by using a face detection algorithm to obtain the face color image and face depth image of the target to be recognized.
  • the face recognition detection algorithm is one or more of a geometric-feature-based algorithm, a local feature analysis algorithm, an eigenface algorithm, an elastic-model-based algorithm, and a neural network algorithm.
  • the present application further provides an electronic device including an image acquisition unit, a memory, and a processor, the image acquisition unit including a camera with a depth imaging function, and the memory containing a face recognition program that implements the following steps when executed by the processor:
  • the sample library establishing step: acquiring 2N face images, including N face color images and corresponding N face depth images, preprocessing the face images, taking the preprocessed face images as face sample images, and establishing a face image sample library in which face IDs, face color images, and face depth images match each other, where N is an integer greater than 2 and each face depth image contains the distance between every point of the person's face and the image acquisition unit;
  • a model training step: training the convolutional neural network with the mutually matching face sample images to obtain a face classification recognition model and the feature vectors of the face sample images;
  • a target face image acquisition step: acquiring a target face image to be recognized, including a face color image and a corresponding face depth image;
  • a target recognition step: inputting the target face image to be recognized into the face classification recognition model, extracting a feature vector of the target face image to be recognized, searching the face image sample library according to the feature vector for a face sample image that matches the target face image to be recognized, and determining the face ID of the target face image to be recognized according to that face sample image.
  • the preprocessing in the sample library establishing step comprises removing image noise from the face images, correcting the face pose, and labeling the face images with face IDs.
  • the searching of the face image sample library, in the target recognition step, according to the feature vector of the target face image to be recognized for a face sample image matching that target face image includes: calculating the vector distance between the feature vector of the target face image to be recognized and the feature vectors of the face sample images;
  • taking the face sample image whose vector distance is smallest, or smaller than a threshold, as the face sample image that matches the target face image to be recognized.
  • the vector distance is a cosine distance or a Euclidean distance.
  • the 2N face images in the sample library establishing step are obtained by the following method:
  • the first photographing step: using a camera to capture color images of persons appearing in the shooting area within a preset time range and the corresponding depth images;
  • the first face detecting step: extracting face images from the color images and depth images by using a face detection algorithm to obtain N face color images and the corresponding N face depth images.
  • the target face image is obtained by the following method:
  • the second shooting step: capturing, with the camera, a color image and a depth image of the target to be recognized appearing in the current shooting area;
  • the second face detecting step: extracting the face image from the color image and depth image of the target to be recognized by using a face detection algorithm to obtain the face color image and face depth image of the target to be recognized.
  • the face recognition detection algorithm is one or more of a geometric-feature-based algorithm, a local feature analysis algorithm, an eigenface algorithm, an elastic-model-based algorithm, and a neural network algorithm.
  • the present application further provides a computer readable storage medium that includes a face image sample library, a face classification recognition model, and a face recognition program, the face recognition program implementing, when executed by the processor, any of the steps of the face recognition method combining depth information described above.
  • the face recognition method, electronic device, and computer readable storage medium combining depth information proposed by the present application input the face color image and face depth image of the target to be recognized, acquired in real time, into a constructed and trained face classification recognition model, extract the feature vector of the target face image to be recognized, calculate the vector distances between that feature vector and the feature vectors of the face sample images, and search the face image sample library according to the vector distances for a face sample image matching the target face image to be recognized.
  • FIG. 1 is a schematic diagram of an operating environment of a preferred embodiment of an electronic device of the present application
  • FIG. 2 is a functional block diagram of the face recognition program of FIG. 1;
  • FIG. 3 is a flowchart of a face recognition method combining depth information according to the present application.
  • FIG. 4 is a flow chart of a preferred embodiment of a face recognition method incorporating depth information in the present application.
  • FIG. 1 is a schematic diagram of an operating environment of a preferred embodiment of an electronic device of the present application.
  • the electronic device 1 may be a terminal device having an imaging and computing function, such as a server, a smart phone, a tablet computer, a portable computer, or a desktop computer.
  • the electronic device 1 includes an image acquisition unit 11, a memory 12, a processor 13, a network interface 14, and a communication bus 15.
  • the image acquisition unit 11 can be installed in a specific place, such as an office or a monitored area, where it captures real-time images of targets entering that place and transmits the captured images to the processor 13 over the network.
  • Network interface 14 may optionally include a standard wired interface, a wireless interface (such as a WI-FI interface).
  • Communication bus 15 is used to implement connection communication between these components.
  • Memory 12 includes at least one type of readable storage medium.
  • the at least one type of readable storage medium may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card, a card type memory, or the like.
  • the readable storage medium may be an internal storage unit of the electronic device 1, such as a hard disk of the electronic device 1.
  • the readable storage medium may also be an external memory 11 of the electronic device 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the electronic device 1.
  • SMC smart memory card
  • SD Secure Digital
  • the readable storage medium of the memory 12 is generally used to store the face recognition program 10 installed on the electronic device 1, the face image sample library, and the constructed and trained face classification recognition model.
  • the memory 12 can also be used to temporarily store data that has been output or is about to be output.
  • the processor 13 may be a Central Processing Unit (CPU), a microprocessor, or another data processing chip for running the program code stored in the memory 12 or processing data, for example executing the face recognition program 10.
  • CPU Central Processing Unit
  • FIG. 1 shows only the electronic device 1 with the components 11-15 and the face recognition program 10, but it should be understood that not all of the illustrated components are required, and more or fewer components may be implemented instead.
  • the electronic device 1 may further include a user interface.
  • the user interface may include an input unit such as a keyboard, a voice input device such as a microphone or another device with a voice recognition function, and a voice output device such as a speaker or headphones.
  • the user interface may also include a standard wired interface and a wireless interface.
  • the electronic device 1 may also include a display, which may also be referred to as a display screen or display unit as appropriate. In some embodiments, it may be an LED display, a liquid crystal display, a touch liquid crystal display, and an Organic Light-Emitting Diode (OLED) display.
  • the display is used to display information processed in the electronic device 1 and a user interface for displaying visualizations.
  • the electronic device 1 also includes a touch sensor.
  • the area provided by the touch sensor for the user to perform a touch operation is referred to as a touch area.
  • the touch sensor described herein may be a resistive touch sensor, a capacitive touch sensor, or the like.
  • the touch sensor includes not only a contact type touch sensor but also a proximity type touch sensor or the like.
  • the touch sensor may be a single sensor or a plurality of sensors arranged, for example, in an array.
  • the area of the display of the electronic device 1 may be the same as or different from the area of the touch sensor.
  • optionally, the display is stacked with the touch sensor to form a touch display screen, and the device detects user-triggered touch operations through the touch display screen.
  • the electronic device 1 may further include a radio frequency (RF) circuit, a sensor, an audio circuit, and the like, and details are not described herein.
  • RF radio frequency
  • the memory 12, as a readable storage medium, may include an operating system, the face recognition program 10, a face image sample library, and a constructed and trained face classification recognition model.
  • when the processor 13 executes the face recognition program 10 stored in the memory 12, the following steps are implemented:
  • the sample library establishing step: acquiring 2N face images, including N face color images and corresponding N face depth images, preprocessing the face images, taking the preprocessed face images as face sample images, and establishing a face image sample library in which face IDs, face color images, and face depth images match each other, where N is an integer greater than 2 and each face depth image contains the distance between every point of the person's face and the image acquisition unit 11;
  • a model training step: training the convolutional neural network with the mutually matching face sample images to obtain a face classification recognition model and the feature vectors of the face sample images;
  • a target face image acquisition step: acquiring a target face image to be recognized, including a face color image and a corresponding face depth image;
  • a target recognition step: inputting the target face image to be recognized into the face classification recognition model, extracting a feature vector of the target face image to be recognized, searching the face image sample library according to the feature vector for a face sample image that matches the target face image to be recognized, and determining the face ID of the target face image to be recognized according to that face sample image.
  • for specific principles, refer to the functional block diagram of the face recognition program 10 in FIG. 2 and the flowchart of the face recognition method combining depth information in FIG. 3, described below.
  • referring to FIG. 2, it is a functional block diagram of the face recognition program 10 of FIG. 1.
  • the face recognition program 10 is divided into a plurality of modules, which are stored in the memory 12 and executed by the processor 13 to complete the present application.
  • a module as referred to in this application refers to a series of computer program instructions that are capable of performing a particular function.
  • the face recognition program 10 can be divided into: an acquisition module 110, a learning module 120, an extraction module 130, and an identification module 140.
  • the obtaining module 110 is configured to acquire the color images and depth images of persons captured by the image acquisition unit 11, extract face images from them by using a face detection algorithm, and preprocess the extracted face images.
  • for example, as shown in FIG. 3, the acquisition module 110 may be used to establish the face image sample library and to acquire the color image and depth image of the target face to be recognized.
  • the image acquisition unit 11 may be a single camera integrating an ordinary imaging function and a depth imaging function, or may include two cameras: an ordinary camera that captures color images and a depth camera that captures depth images (e.g., a Time-of-Flight camera).
  • the color image can be described by a variety of color spaces, including RGB, HSV, HIS, CMY, and the like.
  • Each pixel value in the depth image represents a distance between a certain point in the scene and the camera, and in the present embodiment, represents a distance between a certain point of the face and the image acquisition unit 11.
  • when the image acquisition unit 11 captures the color image and the depth image, the shooting angle and frequency are set to be the same.
  • for example, the image acquisition unit 11 is oriented toward the entrance of the monitored area (to capture frontal images of persons) and is set to capture a color image and a depth image simultaneously at a fixed interval (for example, every 2 seconds); the color image and depth image captured at the same moment can be put into one-to-one correspondence by labeling them with the shooting time.
  • the preprocessing includes removing image noise, correcting the face pose, and annotating each face color image and depth image with a face ID.
  • the image noise may arise for various reasons; for example, owing to limitations of the device itself, the depth information of some points on a person's face may not be acquired by the image acquisition unit 11, and the portions whose depth cannot be determined are output with a value of 0; these zero values can be regarded as noise.
  • in this embodiment, a bilinear interpolation algorithm may first be used to fill in the pixels whose depth information could not be acquired by the image acquisition unit 11, and a Gaussian filter may then be used to remove the noise interference.
  • the learning module 120 is configured to train the convolutional neural network with the face image sample library, in which face IDs, face color images, and face depth images match each other, to obtain the face classification recognition model and the feature vectors of the face sample images.
  • in the process of training the convolutional neural network with the face image sample library, each group of mutually matching face color and depth images is input into the convolutional neural network, and the facial feature vector of the person corresponding to that group is extracted; the more samples used to train the model and the greater the differences between the samples, the more accurate the resulting face classification model.
  • the feature vectors extracted by the convolutional neural network for each group of face color images and depth images are T1, T2, T3, ..., Tn.
  • the extraction module 130 is configured to input the color image and depth image of the target face to be recognized into the face classification recognition model and to use that model to extract the feature vector of the target face image to be recognized, for example a feature vector T.
  • the identification module 140 is configured to search the face image sample library, according to the feature vector, for a face sample image that matches the target face image to be recognized, and to determine the face ID of the target face image to be recognized according to that face sample image.
  • in this embodiment, the identification module 140 calculates the vector distances between the feature vector of the target face image to be recognized and the feature vectors of the face sample images, takes the face sample image whose vector distance is smallest, or smaller than a threshold, as the face sample image that matches the target face image to be recognized, and determines the face ID of the target face image to be recognized according to that face sample image.
  • for example, as shown in FIG. 3, the identification module 140 calculates the distances D1, D2, D3, ..., Dn between the feature vector T of the target face image to be recognized and the feature vectors T1, T2, T3, ..., Tn of the face sample images, and then selects the minimum value among D1, D2, D3, ..., Dn, or selects all distance values smaller than a preset threshold; the face sample image corresponding to the selected distance value is the face sample image that matches the target face image to be recognized, and its face ID is the face ID of the target face image to be recognized.
  • the vector distance may be a cosine distance or a Euclidean distance.
  • referring to FIG. 4, it is a flowchart of a preferred embodiment of the face recognition method combining depth information of the present application; when the processor 13 of the electronic device 1 executes the face recognition program 10 stored in the memory 12, the following steps of the face recognition method are implemented:
  • Step S10 The acquisition module 110 establishes a face image sample library in which the face ID, the face color image, and the face depth image match each other.
  • the acquisition module 110 acquires the color images and depth images of persons appearing in the shooting area within a preset time range, extracts face images from them by using a face detection algorithm to obtain face color images and corresponding face depth images, preprocesses the face images, takes the preprocessed face images as face sample images, and establishes a face image sample library in which face IDs, face color images, and face depth images match each other.
  • the preprocessing includes removing image noise from the 2N face images, correcting the face pose, and labeling the face images with face IDs.
  • in step S20, the learning module 120 constructs and trains the face classification recognition model to obtain the feature vectors of the face sample images.
  • the learning module 120 trains the convolutional neural network with the mutually matching face sample images to obtain the face classification recognition model and the feature vectors of the face sample images.
  • in step S30, the acquisition module 110 acquires the image of the target face to be recognized, including the color image and the depth image of the target face to be recognized.
  • the acquisition module 110 acquires the color image and depth image of the target to be recognized appearing in the current shooting area, and extracts the face image from them by using a face detection algorithm to obtain the face color image and face depth image of the target to be recognized; the face detection algorithm is one or more of a geometric-feature-based algorithm, a local feature analysis algorithm, an eigenface algorithm, an elastic-model-based algorithm, and a neural network algorithm.
  • in step S40, the target face image to be recognized is input into the face classification recognition model, and the extraction module 130 extracts the feature vector of the target face image to be recognized.
  • the identification module 140 searches the face image sample library, according to the feature vector of the target face image to be recognized, for a face sample image matching that target face image, and determines the face ID of the target face image to be recognized according to the face sample image.
  • the identification module 140 calculates the vector distances between the feature vector of the target face image to be recognized and the feature vectors of the face sample images, takes the face sample image whose vector distance is smallest, or smaller than a threshold, as the face sample image matching the target face image to be recognized, and the face ID of that face sample image is the face ID of the target face image to be recognized.
  • the vector distance may be a cosine distance or a Euclidean distance.
  • the face recognition method proposed in this embodiment combines face plane information with depth information, uses the face classification recognition model to extract feature vectors from the face color image and face depth image of the target to be recognized, and searches the face image sample library according to the feature vector for a face sample image matching the target face image to be recognized.
  • because depth information is used, the present application can recognize faces more accurately; in particular, when the planar features of faces are highly similar but the stereoscopic features, such as the height of the bridge of the nose, the depth of the eye sockets, and the height of the cheekbones, differ, the accuracy of face recognition can be significantly improved.
  • the embodiment of the present application further provides a computer readable storage medium, which may be any one of, or any combination of, a hard disk, a multimedia card, an SD card, a flash memory card, an SMC, a read only memory (ROM), an erasable programmable read only memory (EPROM), a portable compact disc read only memory (CD-ROM), a USB memory, and the like.
  • the computer readable storage medium includes a face image sample library, a constructed and trained face classification recognition model, a face recognition program 10, and the like; when executed by the processor 13, the face recognition program 10 implements the following operations:
  • the sample library establishing step: acquiring 2N face images, including N face color images and corresponding N face depth images, preprocessing the face images, taking the preprocessed face images as face sample images, and establishing a face image sample library in which face IDs, face color images, and face depth images match each other, where N is an integer greater than 2 and each face depth image contains the distance between every point of the person's face and the image acquisition unit;
  • a model training step: training the convolutional neural network with the mutually matching face sample images to obtain a face classification recognition model and the feature vectors of the face sample images;
  • a target face image acquisition step: acquiring a target face image to be recognized, including a face color image of the target to be recognized and a corresponding face depth image;
  • a target recognition step: inputting the target face image to be recognized into the face classification recognition model, extracting a feature vector of the target face image to be recognized, searching the face image sample library according to the feature vector for a face sample image that matches the target face image to be recognized, and determining the face ID of the target face image to be recognized according to that face sample image.

Abstract

Disclosed is a human face recognition method combined with depth information. The method comprises: establishing a human face image sample library in which a human face ID, a human face color image and a human face depth image match each other; constructing and training a human face classification and recognition model to obtain feature vectors of the human face sample images; acquiring a human face image of a target to be recognized, the human face image comprising a human face color image of the target to be recognized and a human face depth image of the target to be recognized; inputting the human face image of the target to be recognized into the human face classification and recognition model, and extracting a feature vector of the human face image of the target to be recognized; and determining, according to the feature vector, a human face ID corresponding to the human face image of the target to be recognized. By combining depth information about the human face and calculating the vector distance between the feature vector of the human face color image and depth image of the target to be recognized and the feature vectors of the human face sample images, the present application realizes precise recognition of the human face. Further provided are an electronic apparatus and a computer-readable storage medium.

Description

Face recognition method, apparatus and storage medium combined with depth information
This patent application is based on, and claims priority to, Chinese patent application No. 201710930240.5, filed on October 9, 2017 and entitled "Face recognition method, apparatus and storage medium combining depth information".
Technical Field
The present application relates to the field of face recognition technologies, and in particular to a face recognition method, apparatus, and storage medium that combine depth information.
Background
Face recognition is a non-contact biometric recognition technology that authenticates identity on the basis of facial feature information. With the rapid development of computer and network technologies, face recognition has been widely applied in many important industries and fields such as smart access control, public safety, entertainment, and the military.
Current face recognition commonly performs regression with 2D-image-based convolutional neural networks (CNN). Because 2D images carry no depth information, existing face recognition systems are generally poor at distinguishing similar faces and adapt badly to texture changes of the same face, such as face painting, tattoos, or changes in lighting.
Summary of the Invention
The present application provides a face recognition method, apparatus, and storage medium combining depth information, whose main purpose is to achieve accurate recognition of a human face by combining the face color image with the face depth image.
To achieve the above objective, the present application provides a face recognition method, the method including:
a sample library establishing step: acquiring 2N face images, including N face color images and corresponding N face depth images, preprocessing the face images, taking the preprocessed face images as face sample images, and establishing a face image sample library in which face IDs, face color images, and face depth images match each other, where N is an integer greater than 2 and each face depth image contains the distance between every point of the person's face and the image acquisition unit;
a model training step: training a convolutional neural network with the mutually matching face sample images to obtain a face classification recognition model and the feature vectors of the face sample images;
a target face image acquisition step: acquiring a target face image to be recognized, including a face color image and a corresponding face depth image;
a target recognition step: inputting the target face image to be recognized into the face classification recognition model, extracting a feature vector of the target face image to be recognized, searching the face image sample library according to the feature vector for a face sample image that matches the target face image to be recognized, and determining the face ID of the target face image to be recognized according to that face sample image.
Preferably, the preprocessing in the sample library establishing step includes removing image noise from the face images, correcting the face pose, and labeling the face images with face IDs.
Preferably, in the target recognition step, searching the face image sample library, according to the feature vector of the target face image to be recognized, for a face sample image matching that target face image includes:
calculating the vector distance between the feature vector of the target face image to be recognized and the feature vectors of the face sample images;
taking the face sample image whose vector distance is smallest, or smaller than a threshold, as the face sample image that matches the target face image to be recognized.
Optionally, the vector distance is a cosine distance or a Euclidean distance.
Preferably, the 2N face images in the sample library establishing step are obtained as follows:
a first photographing step: using a camera to capture color images of persons appearing in the shooting area within a preset time range and the corresponding depth images;
a first face detecting step: extracting face images from the color images and depth images by using a face detection algorithm to obtain N face color images and the corresponding N face depth images.
The target face image is obtained as follows:
a second shooting step: capturing, with the camera, a color image and a depth image of the target to be recognized appearing in the current shooting area;
a second face detecting step: extracting the face image from the color image and depth image of the target to be recognized by using a face detection algorithm to obtain the face color image and face depth image of the target to be recognized.
Optionally, the face recognition detection algorithm is one or more of a geometric-feature-based algorithm, a local feature analysis algorithm, an eigenface algorithm, an elastic-model-based algorithm, and a neural network algorithm.
In addition, to achieve the above objective, the present application further provides an electronic device including an image acquisition unit, a memory, and a processor, the image acquisition unit including a camera with a depth imaging function, and the memory containing a face recognition program that implements the following steps when executed by the processor:
a sample library establishing step: acquiring 2N face images, including N face color images and corresponding N face depth images, preprocessing the face images, taking the preprocessed face images as face sample images, and establishing a face image sample library in which face IDs, face color images, and face depth images match each other, where N is an integer greater than 2 and each face depth image contains the distance between every point of the person's face and the image acquisition unit;
a model training step: training a convolutional neural network with the mutually matching face sample images to obtain a face classification recognition model and the feature vectors of the face sample images;
a target face image acquisition step: acquiring a target face image to be recognized, including a face color image and a corresponding face depth image;
a target recognition step: inputting the target face image to be recognized into the face classification recognition model, extracting a feature vector of the target face image to be recognized, searching the face image sample library according to the feature vector for a face sample image that matches the target face image to be recognized, and determining the face ID of the target face image to be recognized according to that face sample image.
Preferably, the preprocessing in the sample library establishing step includes removing image noise from the face images, correcting the face pose, and labeling the face images with face IDs.
Preferably, in the target recognition step, searching the face image sample library, according to the feature vector of the target face image to be recognized, for a face sample image matching that target face image includes:
calculating the vector distance between the feature vector of the target face image to be recognized and the feature vectors of the face sample images;
taking the face sample image whose vector distance is smallest, or smaller than a threshold, as the face sample image that matches the target face image to be recognized.
Optionally, the vector distance is a cosine distance or a Euclidean distance.
Preferably, the 2N face images in the sample library establishing step are obtained as follows:
a first photographing step: using a camera to capture color images of persons appearing in the shooting area within a preset time range and the corresponding depth images;
a first face detecting step: extracting face images from the color images and depth images by using a face detection algorithm to obtain N face color images and the corresponding N face depth images.
The target face image is obtained as follows:
a second shooting step: capturing, with the camera, a color image and a depth image of the target to be recognized appearing in the current shooting area;
a second face detecting step: extracting the face image from the color image and depth image of the target to be recognized by using a face detection algorithm to obtain the face color image and face depth image of the target to be recognized.
Optionally, the face recognition detection algorithm is one or more of a geometric-feature-based algorithm, a local feature analysis algorithm, an eigenface algorithm, an elastic-model-based algorithm, and a neural network algorithm.
In addition, to achieve the above objective, the present application further provides a computer readable storage medium that includes a face image sample library, a face classification recognition model, and a face recognition program, the face recognition program implementing, when executed by the processor, any of the steps of the face recognition method combining depth information described above.
The face recognition method, electronic device, and computer readable storage medium combining depth information proposed by the present application input the face color image and face depth image of the target to be recognized, acquired in real time, into a constructed and trained face classification recognition model, extract the feature vector of the target face image to be recognized, calculate the vector distances between that feature vector and the feature vectors of the face sample images, and search the face image sample library according to the vector distances for a face sample image matching the target face image to be recognized. Because the face depth image, which contains the distance between every point of the person's face and the image acquisition unit, is used to train the face classification recognition model and to extract the feature vectors of face images, the present application achieves more accurate face recognition; in particular, when the planar features of faces are highly similar but the stereoscopic features, such as the height of the bridge of the nose, the depth of the eye sockets, and the height of the cheekbones, differ, the accuracy of face recognition can be significantly improved.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the operating environment of a preferred embodiment of the electronic device of the present application;
FIG. 2 is a functional block diagram of the face recognition program in FIG. 1;
FIG. 3 is a flowchart of the face recognition method combining depth information of the present application;
FIG. 4 is a flowchart of a preferred embodiment of the face recognition method combining depth information of the present application.
The implementation, functional features, and advantages of the objectives of the present application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit it.
Referring to FIG. 1, it is a schematic diagram of the operating environment of a preferred embodiment of the electronic device of the present application.
The electronic device 1 may be a terminal device with imaging and computing capabilities, such as a server, a smartphone, a tablet computer, a portable computer, or a desktop computer.
Referring to FIG. 1, the electronic device 1 includes an image acquisition unit 11, a memory 12, a processor 13, a network interface 14, and a communication bus 15. The image acquisition unit 11 can be installed in a specific place, such as an office or a monitored area, where it captures real-time images of targets entering that place and transmits the captured images to the processor 13 over the network. The network interface 14 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface). The communication bus 15 is used to implement connection and communication between these components.
The memory 12 includes at least one type of readable storage medium. The at least one type of readable storage medium may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card, or a card-type memory. In some embodiments, the readable storage medium may be an internal storage unit of the electronic device 1, for example a hard disk of the electronic device 1. In other embodiments, the readable storage medium may also be an external memory 11 of the electronic device 1, for example a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the electronic device 1.
In this embodiment, the readable storage medium of the memory 12 is generally used to store the face recognition program 10 installed on the electronic device 1, the face image sample library, and the constructed and trained face classification recognition model. The memory 12 can also be used to temporarily store data that has been output or is about to be output.
The processor 13, in some embodiments, may be a Central Processing Unit (CPU), a microprocessor, or another data processing chip for running the program code stored in the memory 12 or processing data, for example executing the face recognition program 10.
FIG. 1 shows only the electronic device 1 with the components 11-15 and the face recognition program 10, but it should be understood that not all of the illustrated components are required, and more or fewer components may be implemented instead.
Optionally, the electronic device 1 may further include a user interface. The user interface may include an input unit such as a keyboard, a voice input device such as a microphone or another device with a voice recognition function, and a voice output device such as a speaker or headphones; optionally, the user interface may also include a standard wired interface and a wireless interface.
The electronic device 1 may also include a display, which may also be referred to as a display screen or a display unit. In some embodiments it may be an LED display, a liquid crystal display, a touch liquid crystal display, an Organic Light-Emitting Diode (OLED) display, or the like. The display is used to show the information processed in the electronic device 1 and to display a visualized user interface.
The electronic device 1 also includes a touch sensor. The area provided by the touch sensor for the user's touch operations is referred to as the touch area. The touch sensor described here may be a resistive touch sensor, a capacitive touch sensor, or the like. Moreover, the touch sensor includes not only contact-type touch sensors but also proximity-type touch sensors and the like. In addition, the touch sensor may be a single sensor or a plurality of sensors arranged, for example, in an array.
Furthermore, the area of the display of the electronic device 1 may be the same as, or different from, the area of the touch sensor. Optionally, the display is stacked with the touch sensor to form a touch display screen, and the device detects user-triggered touch operations through the touch display screen.
The electronic device 1 may further include a radio frequency (RF) circuit, sensors, an audio circuit, and so on, which are not described in detail here.
In the operating environment of the preferred embodiment of the electronic device 1 shown in FIG. 1, the memory 12, as a readable storage medium, may include an operating system, the face recognition program 10, a face image sample library, and a constructed and trained face classification recognition model. When the processor 13 executes the face recognition program 10 stored in the memory 12, the following steps are implemented:
a sample library establishing step: acquiring 2N face images, including N face color images and corresponding N face depth images, preprocessing the face images, taking the preprocessed face images as face sample images, and establishing a face image sample library in which face IDs, face color images, and face depth images match each other, where N is an integer greater than 2 and each face depth image contains the distance between every point of the person's face and the image acquisition unit 11;
a model training step: training a convolutional neural network with the mutually matching face sample images to obtain a face classification recognition model and the feature vectors of the face sample images;
a target face image acquisition step: acquiring a target face image to be recognized, including a face color image and a corresponding face depth image;
a target recognition step: inputting the target face image to be recognized into the face classification recognition model, extracting a feature vector of the target face image to be recognized, searching the face image sample library according to the feature vector for a face sample image that matches the target face image to be recognized, and determining the face ID of the target face image to be recognized according to that face sample image. For specific principles, refer to the functional block diagram of the face recognition program 10 in FIG. 2 and the flowchart of the face recognition method combining depth information in FIG. 3, described below.
Referring to FIG. 2, it is a functional block diagram of the face recognition program 10 in FIG. 1. In this embodiment, the face recognition program 10 is divided into a plurality of modules, which are stored in the memory 12 and executed by the processor 13 to complete the present application. A module referred to in this application is a series of computer program instruction segments capable of performing a particular function.
The face recognition program 10 can be divided into: an acquisition module 110, a learning module 120, an extraction module 130, and an identification module 140.
The acquisition module 110 is configured to acquire the color images and depth images of persons captured by the image acquisition unit 11, extract face images from them by using a face detection algorithm, and preprocess the extracted face images. For example, as shown in FIG. 3, the acquisition module 110 may be used to establish the face image sample library and to acquire the color image and depth image of the target face to be recognized.
The image acquisition unit 11 may be a single camera integrating an ordinary imaging function and a depth imaging function, or may include two cameras: an ordinary camera that captures color images and a depth camera that captures depth images (e.g., a Time-of-Flight camera). The color image can be described in a variety of color spaces, including RGB, HSV, HIS, CMY, and so on. Each pixel value in the depth image represents the distance between a point in the scene and the camera; in this embodiment it represents the distance between a point on the face and the image acquisition unit 11. When the image acquisition unit 11 captures the color image and the depth image, the shooting angle and frequency are set to be the same. For example, the image acquisition unit 11 is oriented toward the entrance of the monitored area (to capture frontal images of persons) and is set to capture a color image and a depth image simultaneously at a fixed interval (for example, every 2 seconds); the color image and depth image captured at the same moment can be put into one-to-one correspondence by labeling them with the shooting time. The preprocessing includes removing image noise, correcting the face pose, and annotating each face color image and depth image with a face ID. The image noise may arise for various reasons; for example, owing to limitations of the device itself, the depth information of some points on a person's face may not be acquired by the image acquisition unit 11, and the portions whose depth cannot be determined are output with a value of 0; these zero values can be regarded as noise. In this embodiment, a bilinear interpolation algorithm may first be used to fill in the pixels whose depth information could not be acquired by the image acquisition unit 11, and a Gaussian filter may then be used to remove the noise interference.
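The sketch below is one minimal, non-authoritative way to realise this preprocessing step in Python, assuming the depth image is a NumPy array in which unmeasured points are stored as 0; the linear interpolation over valid neighbours stands in for the bilinear fill described above, and the function name, kernel size, and library choices are illustrative assumptions rather than details given in the patent.

```python
import numpy as np
import cv2
from scipy.interpolate import griddata

def preprocess_depth(depth: np.ndarray) -> np.ndarray:
    """Fill zero-valued (unmeasured) depth pixels by interpolation, then denoise."""
    depth = depth.astype(np.float32)
    valid = depth > 0                          # pixels the depth camera actually measured
    ys, xs = np.nonzero(valid)
    grid_y, grid_x = np.mgrid[0:depth.shape[0], 0:depth.shape[1]]
    # Linear interpolation from valid neighbours approximates the bilinear fill
    filled = griddata((ys, xs), depth[valid], (grid_y, grid_x), method="linear")
    # Points outside the convex hull of valid pixels stay NaN; fall back to nearest neighbour
    nearest = griddata((ys, xs), depth[valid], (grid_y, grid_x), method="nearest")
    filled = np.where(np.isnan(filled), nearest, filled).astype(np.float32)
    # Gaussian filter to suppress residual noise (kernel size chosen arbitrarily)
    return cv2.GaussianBlur(filled, (5, 5), 0)
```

The same denoised depth map can then be paired with its colour image, labelled with a face ID, and stored in the sample library.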
The learning module 120 is configured to train the convolutional neural network with the face image sample library, in which face IDs, face color images, and face depth images match each other, to obtain the face classification recognition model and the feature vectors of the face sample images.
In the process of training the convolutional neural network with the face image sample library, each group of mutually matching face color and depth images is input into the convolutional neural network, and the facial feature vector of the person corresponding to that group is extracted; the more samples used to train the model and the greater the differences between the samples, the more accurate the resulting face classification model. As shown in FIG. 3, the feature vectors extracted by the convolutional neural network for the groups of face color and depth images are T1, T2, T3, ..., Tn.
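The patent does not specify the network architecture; the PyTorch sketch below only illustrates the general idea of a convolutional network that takes a colour image and its matching depth image as a stacked 4-channel input and produces both a face-ID classification and a feature vector (the class name, layer sizes, and embedding dimension are all assumptions).

```python
import torch
import torch.nn as nn

class RGBDFaceNet(nn.Module):
    """Toy CNN: RGB (3 channels) + depth (1 channel) -> feature vector + face-ID logits."""
    def __init__(self, num_ids: int, feat_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.embed = nn.Linear(128, feat_dim)           # feature vector (T1, T2, ..., Tn)
        self.classifier = nn.Linear(feat_dim, num_ids)  # face-ID classification head

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor):
        x = torch.cat([rgb, depth], dim=1)              # stack colour and depth channels
        feat = self.embed(self.backbone(x))
        return feat, self.classifier(feat)
```

During training, a cross-entropy loss over the face IDs on the classification head would drive the embedding; the patent itself only states that the network is trained on the matched colour/depth sample images and that the trained model yields the feature vectors.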
The extraction module 130 is configured to input the color image and depth image of the target face to be recognized into the face classification and recognition model, and to use that model to extract the feature vector of the target face image, for example the feature vector T.
The recognition module 140 is configured to search the face image sample library, according to that feature vector, for the face sample image that matches the target face image to be recognized, and to determine the face ID of the target face image from the matched face sample image.
In this embodiment, the recognition module 140 computes the vector distance between the feature vector of the target face image to be recognized and the feature vectors of the face sample images, takes the face sample image whose distance is the smallest, or smaller than a threshold, as the face sample image matching the target face image, and determines the face ID of the target face image from that face sample image.
For example, as shown in FIG. 3, the recognition module 140 computes the distances D1, D2, D3, ..., Dn between the feature vector T of the target face image to be recognized and the feature vectors T1, T2, T3, ..., Tn of the face sample images, and then selects the minimum of D1, D2, D3, ..., Dn, or all distance values smaller than a preset threshold. The face sample image corresponding to the selected distance is the face sample image matching the target face image, and its face ID is the face ID of the target face image to be recognized. The vector distance may be a cosine distance or a Euclidean distance.
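A minimal sketch of this matching rule follows: the query vector T is compared against the library vectors T1, ..., Tn by cosine or Euclidean distance, and the face ID of the closest sample is returned, optionally rejected when the best distance exceeds a threshold. The library layout and the threshold handling are illustrative assumptions.

```python
import numpy as np

def match_face(query, library, metric="cosine", threshold=None):
    """library maps face_id -> 1-D feature vector (np.ndarray)."""
    best_id, best_dist = None, float("inf")
    for face_id, vec in library.items():
        if metric == "cosine":
            dist = 1.0 - float(np.dot(query, vec) /
                               (np.linalg.norm(query) * np.linalg.norm(vec)))
        else:  # Euclidean distance
            dist = float(np.linalg.norm(query - vec))
        if dist < best_dist:
            best_id, best_dist = face_id, dist
    if threshold is not None and best_dist > threshold:
        return None, best_dist   # no sample in the library is close enough
    return best_id, best_dist
```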
Referring to FIG. 4, which is a flowchart of a preferred embodiment of the face recognition method combining depth information according to the present application, the processor 13 of the electronic device 1 implements the following steps of the face recognition method when executing the face recognition program 10 stored in the memory 12:
In step S10, the acquisition module 110 builds a face image sample library in which face IDs, face color images, and face depth images are matched with one another. The acquisition module 110 acquires the color images and depth images of people appearing in the shooting area within a preset time range, extracts face images from them with a face detection algorithm to obtain face color images and the corresponding face depth images, preprocesses the face images, and takes the preprocessed face images as face sample images to build the sample library in which face IDs, face color images, and face depth images are matched with one another. The preprocessing includes removing image noise from the 2N face images, correcting the face pose, and labeling the face images with face IDs.
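One possible in-memory layout for such a sample library is sketched below: color and depth frames captured at the same moment are paired and stored under a face ID, with room for the feature vector later produced by the trained model. The field names and dictionary structure are illustrative assumptions, not part of this application.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class FaceSample:
    face_id: str                 # label assigned during preprocessing
    timestamp: float             # shooting time used to pair color and depth frames
    color: np.ndarray            # preprocessed face color image
    depth: np.ndarray            # preprocessed, matched face depth image
    feature: np.ndarray = None   # feature vector Ti, filled in after training

sample_library = {}              # face_id -> list of FaceSample entries

def add_sample(sample):
    """Register a matched color/depth face sample under its face ID."""
    sample_library.setdefault(sample.face_id, []).append(sample)
```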
In step S20, the learning module 120 builds and trains the face classification and recognition model and obtains the feature vectors of the face sample images. The learning module 120 trains the convolutional neural network on the mutually matched face sample images to obtain the face classification and recognition model and the feature vectors of the face sample images. When training the face classification and recognition model, the greater the number of face sample images used and the greater the variation among them, the more accurate the resulting face classification model.
In step S30, the acquisition module 110 acquires the target face image to be recognized, including the face color image and the face depth image of the target. The acquisition module 110 acquires the color image and depth image of the target to be recognized appearing in the current shooting area, and extracts the face image from that color image and depth image with a face detection algorithm to obtain the face color image and face depth image of the target to be recognized. In this embodiment, the face detection algorithm is one or more of a geometric-feature-based algorithm, a local feature analysis algorithm, an eigenface algorithm, an elastic-model-based algorithm, and a neural network algorithm.
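The detection step could, for example, be sketched as follows: a face box is located in the color frame and, because the color and depth images are captured at the same angle and moment, the same box is used to crop the depth frame. The Haar cascade detector shown here (bundled with opencv-python) is only one possible stand-in for the detection algorithms listed above.

```python
import cv2

# Frontal-face Haar cascade shipped with opencv-python; an assumption, not the
# specific detection algorithm of this application.
_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_face_pair(color_bgr, depth):
    """Return the cropped face regions of a matched color/depth pair, or None."""
    gray = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2GRAY)
    boxes = _cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(boxes) == 0:
        return None
    x, y, w, h = boxes[0]                    # take the first detected face
    return color_bgr[y:y+h, x:x+w], depth[y:y+h, x:x+w]
```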
In step S40, the target face image to be recognized is input into the face classification and recognition model, and the extraction module 130 extracts the feature vector of the target face image to be recognized.
In step S50, the recognition module 140 searches the face image sample library, according to the feature vector of the target face image to be recognized, for the face sample image that matches the target face image, and determines the face ID of the target face image from the matched face sample image. The recognition module 140 computes the vector distance between the feature vector of the target face image to be recognized and the feature vectors of the face sample images, and takes the face sample image whose distance is the smallest, or smaller than a threshold, as the face sample image matching the target face image; the face ID of that face sample image is the face ID of the target face image to be recognized. The vector distance may be a cosine distance or a Euclidean distance.
The face recognition method proposed in this embodiment combines planar face information with depth information: the face classification and recognition model extracts the feature vector of the color image and depth image of the target face to be recognized, and that feature vector is used to search the face image sample library for the matching face sample image. Because the face depth image, which contains the distance from each point of the face to the image acquisition unit 11, is used both to train the face classification and recognition model and to extract the feature vectors, the present application can recognize faces more accurately; in particular, when the planar features of faces are highly similar but the three-dimensional features differ, such as the height of the nose bridge, the depth of the eye sockets, or the height of the cheekbones, the recognition accuracy is significantly improved.
In addition, an embodiment of the present application further provides a computer-readable storage medium, which may be any one of, or any combination of, a hard disk, a multimedia card, an SD card, a flash memory card, an SMC, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, and the like. The computer-readable storage medium contains the face image sample library, the constructed and trained face classification and recognition model, the face recognition program 10, and so on; when the face recognition program 10 is executed by the processor 13, the following operations are implemented:
Sample library building step: acquire 2N face images, including N face color images and the corresponding N face depth images, preprocess the face images, take the preprocessed face images as face sample images, and build a face image sample library in which face IDs, face color images, and face depth images are matched with one another, where N is an integer greater than 2 and each face depth image contains the distance from each point of the face to the image acquisition unit 11;
Model training step: train a convolutional neural network on the mutually matched face sample images to obtain the face classification and recognition model and the feature vectors of the face sample images;
Target face image acquisition step: acquire the target face image to be recognized, including the face color image of the target to be recognized and the corresponding face depth image;
Target recognition step: input the target face image to be recognized into the face classification and recognition model, extract the feature vector of the target face image, search the face image sample library according to that feature vector for the face sample image matching the target face image, and determine the face ID of the target face image according to the matched face sample image.
The specific implementation of the computer-readable storage medium of the present application is substantially the same as the specific implementations of the face recognition method combining depth information and the electronic device 1 described above, and is not repeated here.
It should be noted that, as used herein, the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, apparatus, article, or method that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, apparatus, article, or method. In the absence of further limitations, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, apparatus, article, or method that includes that element.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the part of the technical solution of the present application that in essence contributes beyond the prior art can be embodied in the form of a software product; the computer software product is stored in a storage medium as described above and includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the various embodiments of the present application.
The above are only preferred embodiments of the present application and do not thereby limit the scope of the patent; any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present application.

Claims (20)

1. A face recognition method combining depth information, characterized in that the method comprises:
    a sample library building step: acquiring 2N face images, including N face color images and the corresponding N face depth images, preprocessing the face images, taking the preprocessed face images as face sample images, and building a face image sample library in which face IDs, face color images, and face depth images are matched with one another, wherein N is an integer greater than 2;
    a model training step: training a convolutional neural network on the mutually matched face sample images to obtain a face classification and recognition model and feature vectors of the face sample images;
    a target face image acquisition step: acquiring a target face image to be recognized, including a face color image and a corresponding face depth image;
    a target recognition step: inputting the target face image to be recognized into the face classification and recognition model, extracting a feature vector of the target face image, searching the face image sample library according to that feature vector for a face sample image matching the target face image, and determining a face ID of the target face image according to the matched face sample image.
2. The face recognition method according to claim 1, wherein the preprocessing comprises removing image noise from the face images, correcting the face pose, and labeling the face images with face IDs.
3. The face recognition method according to claim 1, wherein the 2N face images in the sample library building step are acquired as follows:
    a first capturing step: using a camera to capture color images and corresponding depth images of people appearing in the shooting area within a preset time range;
    a first face detection step: extracting face images from the color images and depth images with a face detection algorithm to obtain N face color images and the corresponding N face depth images.
4. The face recognition method according to claim 1, wherein the target face image acquisition step comprises:
    a second capturing step: using a camera to capture a color image and a depth image of the target to be recognized appearing in the current shooting area;
    a second face detection step: extracting a face image from the color image and the depth image of the target to be recognized with a face detection algorithm to obtain the face color image and face depth image of the target to be recognized.
5. The face recognition method according to claim 3 or 4, wherein the face detection algorithm is one or more of a geometric-feature-based algorithm, a local feature analysis algorithm, an eigenface algorithm, an elastic-model-based algorithm, and a neural network algorithm.
6. The face recognition method according to claim 1, wherein, in the target recognition step, searching the face image sample library according to the feature vector of the target face image to be recognized for the face sample image matching the target face image comprises:
    computing the vector distance between the feature vector of the target face image to be recognized and the feature vectors of the face sample images;
    taking the face sample image whose vector distance is the smallest, or smaller than a threshold, as the face sample image matching the target face image to be recognized.
7. The face recognition method according to claim 6, wherein the vector distance is a cosine distance or a Euclidean distance.
8. An electronic device comprising an image acquisition unit, a memory, and a processor, characterized in that the image acquisition unit comprises a camera with a depth imaging function, the memory contains a face recognition program, and the face recognition program, when executed by the processor, implements the following steps:
    a sample library building step: acquiring 2N face images, including N face color images and the corresponding N face depth images, preprocessing the face images, taking the preprocessed face images as face sample images, and building a face image sample library in which face IDs, face color images, and face depth images are matched with one another, wherein N is an integer greater than 2;
    a model training step: training a convolutional neural network on the mutually matched face sample images to obtain a face classification and recognition model and feature vectors of the face sample images;
    a target face image acquisition step: acquiring a target face image to be recognized, including a face color image and a corresponding face depth image;
    a target recognition step: inputting the target face image to be recognized into the face classification and recognition model, extracting a feature vector of the target face image, searching the face image sample library according to that feature vector for a face sample image matching the target face image, and determining a face ID of the target face image according to the matched face sample image.
9. The electronic device according to claim 8, wherein the preprocessing comprises removing image noise from the face images, correcting the face pose, and labeling the face images with face IDs.
10. The electronic device according to claim 8, wherein the 2N face images in the sample library building step are acquired as follows:
    a first capturing step: using a camera to capture color images and corresponding depth images of people appearing in the shooting area within a preset time range;
    a first face detection step: extracting face images from the color images and depth images with a face detection algorithm to obtain N face color images and the corresponding N face depth images.
11. The electronic device according to claim 8, wherein the target face image acquisition step comprises:
    a second capturing step: using a camera to capture a color image and a depth image of the target to be recognized appearing in the current shooting area;
    a second face detection step: extracting a face image from the color image and the depth image of the target to be recognized with a face detection algorithm to obtain the face color image and face depth image of the target to be recognized.
12. The electronic device according to claim 10 or 11, wherein the face detection algorithm is one or more of a geometric-feature-based algorithm, a local feature analysis algorithm, an eigenface algorithm, an elastic-model-based algorithm, and a neural network algorithm.
13. The electronic device according to claim 8, wherein, in the target recognition step, searching the face image sample library according to the feature vector of the target face image to be recognized for the face sample image matching the target face image comprises:
    computing the vector distance between the feature vector of the target face image to be recognized and the feature vectors of the face sample images;
    taking the face sample image whose vector distance is the smallest, or smaller than a threshold, as the face sample image matching the target face image to be recognized.
14. The electronic device according to claim 13, wherein the vector distance is a cosine distance or a Euclidean distance.
15. A computer-readable storage medium, characterized in that the computer-readable storage medium contains a face recognition program, a face image sample library, and a face classification and recognition model, and the face recognition program, when executed by a processor, implements the following steps:
    a sample library building step: acquiring 2N face images, including N face color images and the corresponding N face depth images, preprocessing the face images, taking the preprocessed face images as face sample images, and building a face image sample library in which face IDs, face color images, and face depth images are matched with one another, wherein N is an integer greater than 2;
    a model training step: training a convolutional neural network on the mutually matched face sample images to obtain a face classification and recognition model and feature vectors of the face sample images;
    a target face image acquisition step: acquiring a target face image to be recognized, including a face color image and a corresponding face depth image;
    a target recognition step: inputting the target face image to be recognized into the face classification and recognition model, extracting a feature vector of the target face image, searching the face image sample library according to that feature vector for a face sample image matching the target face image, and determining a face ID of the target face image according to the matched face sample image.
16. The computer-readable storage medium according to claim 1, wherein the preprocessing comprises removing image noise from the face images, correcting the face pose, and labeling the face images with face IDs.
17. The computer-readable storage medium according to claim 1, wherein the 2N face images in the sample library building step are acquired as follows:
    a first capturing step: using a camera to capture color images and corresponding depth images of people appearing in the shooting area within a preset time range;
    a first face detection step: extracting face images from the color images and depth images with a face detection algorithm to obtain N face color images and the corresponding N face depth images.
18. The computer-readable storage medium according to claim 1, wherein the target face image acquisition step comprises:
    a second capturing step: using a camera to capture a color image and a depth image of the target to be recognized appearing in the current shooting area;
    a second face detection step: extracting a face image from the color image and the depth image of the target to be recognized with a face detection algorithm to obtain the face color image and face depth image of the target to be recognized.
19. The computer-readable storage medium according to claim 1, wherein the face detection algorithm is one or more of a geometric-feature-based algorithm, a local feature analysis algorithm, an eigenface algorithm, an elastic-model-based algorithm, and a neural network algorithm.
20. The computer-readable storage medium according to claim 1, wherein, in the target recognition step, searching the face image sample library according to the feature vector of the target face image to be recognized for the face sample image matching the target face image comprises:
    computing the vector distance between the feature vector of the target face image to be recognized and the feature vectors of the face sample images;
    taking the face sample image whose vector distance is the smallest, or smaller than a threshold, as the face sample image matching the target face image to be recognized.