WO2019071664A1 - Face recognition method and apparatus combined with depth information, and storage medium


Info

Publication number: WO2019071664A1
Authority: WO (WIPO/PCT)
Prior art keywords: face, image, target, recognized, sample
Application number: PCT/CN2017/108804
Other languages: English (en), Chinese (zh)
Inventor: 戴磊
Original Assignee: 平安科技(深圳)有限公司
Priority date: 2017-10-09 (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Filing date: 2017-10-31
Publication date: 2019-04-18
Application filed by 平安科技(深圳)有限公司
Publication of WO2019071664A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50: Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/30: Noise filtering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation

Definitions

  • The present application relates to the field of face recognition technologies, and in particular to a face recognition method, apparatus, and storage medium that combine depth information.
  • Face recognition is a non-contact biometric technology that performs identity authentication based on human facial feature information.
  • Face recognition technology has been widely applied in many important industries and fields, such as smart access control, public safety, entertainment, and the military.
  • CNN: Convolutional Neural Networks.
  • The present application provides a face recognition method, device, and storage medium that combine depth information, whose main purpose is to achieve accurate recognition of a human face by combining a face color image with a face depth image.
  • To that end, the present application provides a face recognition method, including:
  • a sample library establishing step: acquiring 2N face images, including N face color images and N corresponding face depth images, preprocessing the face images, using the preprocessed face images as face sample images, and establishing a face image sample library in which the face ID, face color image, and face depth image match one another;
  • a model training step: training a convolutional neural network with the mutually matching face sample images to obtain a face classification recognition model and the feature vectors of the face sample images;
  • a target face image acquisition step: acquiring a target face image to be recognized, including a face color image of the target and a corresponding face depth image;
  • a target recognition step: inputting the target face image to be recognized into the face classification recognition model, extracting a feature vector of the target face image to be recognized, searching the face image sample library according to the feature vector for a face sample image that matches the target face image to be recognized, and determining the face ID of the target face image to be recognized according to that face sample image.
  • The preprocessing in the sample library establishing step comprises removing image noise from the face images, correcting the face poses, and labeling each face image with a face ID.
  • Searching the face image sample library for a face sample image matching the target face image to be recognized includes:
  • calculating the vector distance between the feature vector of the target face image to be recognized and the feature vector of each face sample image, and taking the face sample image whose vector distance is the smallest, or smaller than a threshold, as the face sample image matching the target face image to be recognized.
  • The vector distance is a cosine distance or a Euclidean distance; a minimal sketch of both distances is given below.
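  • The following minimal Python sketch illustrates these two distances; the function names are ours, not the patent's, and any threshold applied to them would likewise be an implementation choice.

```python
import numpy as np

def euclidean_distance(a: np.ndarray, b: np.ndarray) -> float:
    """L2 (Euclidean) distance between two feature vectors."""
    return float(np.linalg.norm(a - b))

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """1 - cosine similarity, so smaller values mean more similar vectors."""
    return float(1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```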
  • The 2N face images in the sample library establishing step are obtained by the following method:
  • a first photographing step: using a camera to capture, within a preset time range, a color image and a corresponding depth image of each person appearing in the shooting area;
  • a first face detecting step: extracting face images from the color images and depth images by using a face detection algorithm, to obtain N face color images and N corresponding face depth images.
  • The target face image is obtained by the following method:
  • a second photographing step: capturing, with the camera, a color image and a depth image of the target to be recognized appearing in the current shooting area;
  • a second face detecting step: extracting a face image from the color image and the depth image of the target to be recognized by using a face detection algorithm, to obtain the face color image and face depth image of the target to be recognized.
  • The face detection algorithm is one or more of a geometric-feature-based algorithm, a local feature analysis algorithm, an eigenface algorithm, an elastic-model-based algorithm, and a neural network algorithm; an illustrative detection sketch follows.
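  • Purely as an illustration, the sketch below uses OpenCV's Haar-cascade detector as one concrete stand-in for the detector families listed above; the patent does not prescribe a specific library, and the sketch assumes the color and depth images are already registered pixel-to-pixel.

```python
import cv2

def extract_face_regions(color_img, depth_img):
    """Detect faces in the color image and crop the matching regions from both
    the color image and the (registered) depth image."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(color_img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Return (color crop, depth crop) pairs, one per detected face.
    return [(color_img[y:y + h, x:x + w], depth_img[y:y + h, x:x + w])
            for (x, y, w, h) in faces]
```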
  • To achieve the above object, the present application further provides an electronic device including an image acquisition unit, a memory, and a processor, the image acquisition unit including a camera having a depth imaging function, and the memory storing a face recognition program that implements the following steps when executed by the processor:
  • a sample library establishing step: acquiring 2N face images, including N face color images and N corresponding face depth images, preprocessing the face images, using the preprocessed face images as face sample images, and establishing a face image sample library in which the face ID, face color image, and face depth image match one another;
  • a model training step: training a convolutional neural network with the mutually matching face sample images to obtain a face classification recognition model and the feature vectors of the face sample images;
  • a target face image acquisition step: acquiring a target face image to be recognized, including a face color image of the target and a corresponding face depth image;
  • a target recognition step: inputting the target face image to be recognized into the face classification recognition model, extracting a feature vector of the target face image to be recognized, searching the face image sample library according to the feature vector for a face sample image that matches the target face image to be recognized, and determining the face ID of the target face image to be recognized according to that face sample image.
  • The preprocessing in the sample library establishing step comprises removing image noise from the face images, correcting the face poses, and labeling each face image with a face ID.
  • In the target recognition step, searching the face image sample library according to the feature vector of the target face image to be recognized for the matching face sample image includes:
  • calculating the vector distance between the feature vector of the target face image to be recognized and the feature vector of each face sample image, and taking the face sample image whose vector distance is the smallest, or smaller than a threshold, as the face sample image matching the target face image to be recognized.
  • The vector distance is a cosine distance or a Euclidean distance.
  • The 2N face images in the sample library establishing step are obtained by the following method:
  • a first photographing step: using a camera to capture, within a preset time range, a color image and a corresponding depth image of each person appearing in the shooting area;
  • a first face detecting step: extracting face images from the color images and depth images by using a face detection algorithm, to obtain N face color images and N corresponding face depth images.
  • The target face image is obtained by the following method:
  • a second photographing step: capturing, with the camera, a color image and a depth image of the target to be recognized appearing in the current shooting area;
  • a second face detecting step: extracting a face image from the color image and the depth image of the target to be recognized by using a face detection algorithm, to obtain the face color image and face depth image of the target to be recognized.
  • The face detection algorithm is one or more of a geometric-feature-based algorithm, a local feature analysis algorithm, an eigenface algorithm, an elastic-model-based algorithm, and a neural network algorithm.
  • To achieve the above object, the present application further provides a computer-readable storage medium including a face image sample library, a face classification recognition model, and a face recognition program, the face recognition program implementing, when executed by a processor, any of the steps of the face recognition method combining depth information described above.
  • The face recognition method, electronic device, and computer-readable storage medium combining depth information proposed by the present application construct and train a face classification recognition model, input the face color image and face depth image of the target to be recognized in real time, extract the feature vector of the target face image to be recognized, calculate the vector distance between that feature vector and the feature vectors of the face sample images, and search the face image sample library according to the vector distance for the matching face sample image, thereby achieving accurate recognition of the target face.
  • FIG. 1 is a schematic diagram of an operating environment of a preferred embodiment of an electronic device of the present application
  • FIG. 2 is a functional block diagram of the face recognition program of FIG. 1;
  • FIG. 3 is a flowchart of a face recognition method combining depth information according to the present application.
  • FIG. 4 is a flow chart of a preferred embodiment of a face recognition method incorporating depth information in the present application.
  • FIG. 1 is a schematic diagram of an operating environment of a preferred embodiment of an electronic device of the present application.
  • the electronic device 1 may be a terminal device having an imaging and computing function, such as a server, a smart phone, a tablet computer, a portable computer, or a desktop computer.
  • the electronic device 1 includes an image acquisition unit 11, a memory 12, a processor 13, a network interface 14, and a communication bus 15.
  • The image acquisition unit 11 can be installed in a specific place, such as an office or a monitored area, captures real-time images of targets entering that place, and transmits the captured images to the processor 13 through the network.
  • Network interface 14 may optionally include a standard wired interface, a wireless interface (such as a WI-FI interface).
  • Communication bus 15 is used to implement connection communication between these components.
  • Memory 12 includes at least one type of readable storage medium.
  • the at least one type of readable storage medium may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card, a card type memory, or the like.
  • the readable storage medium may be an internal storage unit of the electronic device 1, such as a hard disk of the electronic device 1.
  • The readable storage medium may also be an external storage device of the electronic device 1, such as a plug-in hard disk, a smart memory card (SMC), a Secure Digital (SD) card, or a flash card equipped on the electronic device 1.
  • The readable storage medium of the memory 12 is generally used to store the face recognition program 10 installed on the electronic device 1, the face image sample library, and the constructed and trained face classification recognition model.
  • the memory 12 can also be used to temporarily store data that has been output or is about to be output.
  • The processor 13 may be a Central Processing Unit (CPU), a microprocessor, or another data processing chip, and is used to run program code or process data stored in the memory 12, for example to execute the face recognition program 10.
  • Figure 1 shows only the electronic device 1 with the components 11-15 and the face recognition program 10, but it should be understood that not all of the illustrated components are required to be implemented; alternative implementations may include more or fewer components.
  • The electronic device 1 may further include a user interface.
  • The user interface may include an input unit such as a keyboard, a voice input device such as a microphone or another device with a voice recognition function, and a voice output device such as a speaker or headphones.
  • The user interface may also include a standard wired interface and a wireless interface.
  • The electronic device 1 may also include a display, which may also be referred to as a display screen or display unit. In some embodiments, it may be an LED display, a liquid crystal display, a touch liquid crystal display, or an Organic Light-Emitting Diode (OLED) display.
  • The display is used to show information processed in the electronic device 1 and to display a visualized user interface.
  • the electronic device 1 also includes a touch sensor.
  • the area provided by the touch sensor for the user to perform a touch operation is referred to as a touch area.
  • the touch sensor described herein may be a resistive touch sensor, a capacitive touch sensor, or the like.
  • the touch sensor includes not only a contact type touch sensor but also a proximity type touch sensor or the like.
  • the touch sensor may be a single sensor or a plurality of sensors arranged, for example, in an array.
  • the area of the display of the electronic device 1 may be the same as or different from the area of the touch sensor.
  • The display may be stacked with the touch sensor to form a touch display; the device detects user-triggered touch operations through this touch display.
  • the electronic device 1 may further include a radio frequency (RF) circuit, a sensor, an audio circuit, and the like, and details are not described herein.
  • The memory 12, as a readable storage medium, may include an operating system, the face recognition program 10, a face image sample library, and a constructed and trained face classification recognition model.
  • When the processor 13 executes the face recognition program 10 stored in the memory 12, the following steps are implemented:
  • a sample library establishing step: acquiring 2N face images, including N face color images and N corresponding face depth images, preprocessing the face images, using the preprocessed face images as face sample images, and establishing a face image sample library in which the face ID, face color image, and face depth image match one another;
  • a model training step: training a convolutional neural network with the mutually matching face sample images to obtain a face classification recognition model and the feature vectors of the face sample images;
  • a target face image acquisition step: acquiring a target face image to be recognized, including a face color image of the target and a corresponding face depth image;
  • a target recognition step: inputting the target face image to be recognized into the face classification recognition model, extracting a feature vector of the target face image to be recognized, searching the face image sample library according to the feature vector for a face sample image that matches the target face image to be recognized, and determining the face ID of the target face image to be recognized according to that face sample image.
  • Refer to FIG. 2 for a functional block diagram of the face recognition program 10 and to FIG. 3 for a flowchart of the face recognition method combining depth information.
  • As shown in FIG. 2, it is a functional block diagram of the face recognition program 10 of FIG. 1.
  • the face recognition program 10 is divided into a plurality of modules, which are stored in the memory 12 and executed by the processor 13 to complete the present application.
  • a module as referred to in this application refers to a series of computer program instructions that are capable of performing a particular function.
  • the face recognition program 10 can be divided into: an acquisition module 110, a learning module 120, an extraction module 130, and an identification module 140.
  • The acquisition module 110 is configured to acquire the color image and depth image of a person captured by the image acquisition unit 11, extract a face image from them by using a face detection algorithm, and preprocess the extracted face image.
  • The acquisition module 110 may be used both to establish the face image sample library and to obtain the face color image and depth image of the target to be recognized.
  • The image acquisition unit 11 may be a camera that integrates an ordinary camera function and a depth camera function, or may include two cameras: an ordinary camera that captures the color image, and a depth camera that captures the depth image (for example, a Time-of-Flight camera).
  • the color image can be described by a variety of color spaces, including RGB, HSV, HIS, CMY, and the like.
  • Each pixel value in the depth image represents a distance between a certain point in the scene and the camera, and in the present embodiment, represents a distance between a certain point of the face and the image acquisition unit 11.
  • The shooting angle and the capture frequency of the two cameras are set to be the same.
  • For example, the image acquisition unit 11 is oriented toward the entrance of the monitored area (so that the front of each person's face is photographed) and is set to capture a color image and a depth image simultaneously at a fixed interval (for example, every 2 seconds); color and depth images taken at the same moment are placed in one-to-one correspondence by labeling them with the shooting time, as in the sketch below.
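  • A minimal sketch of this pairing, assuming each captured frame is keyed by its shooting timestamp (the data structure is ours, not the patent's):

```python
def pair_frames_by_timestamp(color_frames, depth_frames):
    """color_frames / depth_frames: dicts mapping capture timestamp -> image.
    Returns (timestamp, color, depth) tuples for timestamps present in both."""
    common = sorted(set(color_frames) & set(depth_frames))
    return [(t, color_frames[t], depth_frames[t]) for t in common]
```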
  • The preprocessing includes removing image noise, correcting the face pose, and labeling each face color image and depth image with a face ID.
  • Image noise may arise for various reasons; for example, due to deficiencies of the device itself, some points on a person's face may not be captured by the image acquisition unit 11, and the corresponding depth values are output as 0. These zero values can be regarded as noise.
  • A bilinear interpolation algorithm may be used to fill in the pixels that the image acquisition unit 11 could not capture, after which a Gaussian filter is applied to remove residual noise; a minimal sketch of this preprocessing follows.
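  • A minimal sketch of this preprocessing, assuming unmeasured depth pixels are marked with 0; SciPy's linear (bilinear-style) grid interpolation stands in for the interpolation step, and the Gaussian kernel size is illustrative:

```python
import cv2
import numpy as np
from scipy.interpolate import griddata

def denoise_depth(depth: np.ndarray) -> np.ndarray:
    """Fill zero-valued (missing) depth pixels by interpolation, then smooth."""
    depth = depth.astype(np.float32)
    valid = depth > 0
    if valid.any() and not valid.all():
        ys, xs = np.nonzero(valid)
        grid_y, grid_x = np.mgrid[0:depth.shape[0], 0:depth.shape[1]]
        filled = griddata((ys, xs), depth[valid], (grid_y, grid_x), method="linear")
        nearest = griddata((ys, xs), depth[valid], (grid_y, grid_x), method="nearest")
        # Linear interpolation leaves NaNs outside the convex hull of valid
        # pixels (e.g. at the border); fall back to nearest-neighbour there.
        depth = np.where(np.isnan(filled), nearest, filled).astype(np.float32)
    return cv2.GaussianBlur(depth, (5, 5), 0)
```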
  • The learning module 120 is configured to train the convolutional neural network with the face image sample library, in which the face ID, face color image, and face depth image match each other, to obtain the face classification recognition model and the feature vectors of the face sample images.
  • In the model training process, each set of matching face color image and depth image is input into the convolutional neural network, and the face feature vector of the corresponding person is extracted from that set.
  • The more samples used to train the model, and the greater the variation among them, the more accurate the resulting face classification recognition model.
  • For example, the feature vectors extracted by the convolutional neural network for the groups of face color and depth images are denoted T1, T2, T3, ..., Tn; an illustrative network sketch follows.
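  • Purely as an illustration of this idea (the patent does not disclose a network architecture), the following PyTorch sketch takes a 4-channel RGB-D face crop and produces both an embedding (the feature vector T) and per-ID class scores; the layer sizes and the 128-dimensional embedding are assumptions.

```python
import torch
import torch.nn as nn

class RGBDFaceNet(nn.Module):
    """Toy CNN over a 4-channel (RGB + depth) face crop."""
    def __init__(self, num_ids: int, feat_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.embed = nn.Linear(128, feat_dim)         # feature vector T_i
        self.classify = nn.Linear(feat_dim, num_ids)  # face-ID classifier head

    def forward(self, rgbd: torch.Tensor):
        x = self.backbone(rgbd).flatten(1)
        feat = self.embed(x)
        return feat, self.classify(feat)
```

  • In such a setup the network would be trained with a cross-entropy loss on the face IDs, and the embedding would be kept as each sample's feature vector.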
  • The extraction module 130 is configured to input the face color image and depth image of the target to be recognized into the face classification recognition model and to use the model to extract the feature vector of the target face image to be recognized, for example a feature vector T.
  • The identification module 140 is configured to search the face image sample library, according to the feature vector, for the face sample image matching the target face image to be recognized, and to determine the face ID of the target face image to be recognized according to that face sample image.
  • Specifically, the identification module 140 calculates the vector distance between the feature vector of the target face image to be recognized and the feature vector of each face sample image, takes the face sample image whose vector distance is the smallest or smaller than a threshold as the face sample image matching the target face image to be recognized, and determines the face ID of the target face image to be recognized according to that face sample image.
  • For example, the identification module 140 calculates the distances D1, D2, D3, ..., Dn between the feature vector T of the target face image to be recognized and the feature vectors T1, T2, T3, ..., Tn of the face sample images; the minimum distance, or every distance smaller than a preset threshold, is selected, the face sample image corresponding to the selected distance is the face sample image matching the target face image to be recognized, and the face ID of that face sample image is the face ID of the target face image to be recognized.
  • The vector distance may be a cosine distance or a Euclidean distance; a hypothetical matching sketch follows.
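  • A hypothetical sketch of this matching step (the names, and the use of Euclidean rather than cosine distance, are illustrative choices only):

```python
import numpy as np

def identify(target_feat, sample_feats, sample_ids):
    """Compute D1..Dn from the target feature T to every sample feature T_i and
    return the face ID of the closest sample together with that distance.
    A threshold test on the returned distance can be added to reject unknowns."""
    dists = [float(np.linalg.norm(target_feat - f)) for f in sample_feats]
    best = int(np.argmin(dists))
    return sample_ids[best], dists[best]
```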
  • As shown in FIG. 4, it is a flowchart of a preferred embodiment of the face recognition method combining depth information according to the present application. When the processor 13 of the electronic device 1 executes the face recognition program 10 stored in the memory 12, the following steps of the face recognition method are implemented:
  • Step S10: the acquisition module 110 establishes a face image sample library in which the face ID, the face color image, and the face depth image match each other.
  • Specifically, the acquisition module 110 acquires color images and depth images of persons appearing in the shooting area within a preset time range, extracts face images from them by using a face detection algorithm to obtain face color images and corresponding face depth images, preprocesses the face images, uses the preprocessed face images as face sample images, and establishes a face image sample library in which the face ID, face color image, and face depth image match one another.
  • The preprocessing includes removing image noise from the 2N face images, correcting the face poses, and labeling each face image with a face ID.
  • Step S20: the learning module 120 constructs and trains the face classification recognition model and obtains the feature vectors of the face sample images.
  • Specifically, the learning module 120 trains the convolutional neural network with the mutually matching face sample images to obtain the face classification recognition model and the feature vectors of the face sample images.
  • Step S30: the acquisition module 110 acquires the target face image to be recognized, including the face color image and face depth image of the target to be recognized.
  • Specifically, the acquisition module 110 acquires a color image and a depth image of the target to be recognized appearing in the current shooting area, and extracts a face image from them by using a face detection algorithm to obtain the face color image and face depth image of the target to be recognized.
  • The face detection algorithm is one or more of a geometric-feature-based algorithm, a local feature analysis algorithm, an eigenface algorithm, an elastic-model-based algorithm, and a neural network algorithm.
  • Step S40: the target face image to be recognized is input into the face classification recognition model, and the extraction module 130 extracts the feature vector of the target face image to be recognized.
  • The identification module 140 searches the face image sample library, according to the feature vector of the target face image to be recognized, for the face sample image matching the target face image to be recognized, and determines the face ID of the target face image to be recognized according to that face sample image.
  • Specifically, the identification module 140 calculates the vector distance between the feature vector of the target face image to be recognized and the feature vector of each face sample image, and takes the face sample image whose vector distance is the smallest or smaller than a threshold as the face sample image matching the target face image to be recognized; the face ID of that face sample image is the face ID of the target face image to be recognized.
  • The vector distance may be a cosine distance or a Euclidean distance.
  • The face recognition method proposed in this embodiment combines face plane information with depth information, uses the face classification recognition model to extract the feature vector of the face color image and face depth image of the target to be recognized, and searches the face image sample library according to that feature vector for the face sample image matching the target face image to be recognized.
  • Compared with recognition based on planar information alone, the present application can achieve more accurate face recognition; in particular, when the planar features of faces are highly similar, stereoscopic features such as the height of the bridge of the nose, the depth of the eye sockets, and the prominence of the cheekbones can significantly improve the accuracy of face recognition.
  • The embodiment of the present application further provides a computer-readable storage medium, which may be any one of, or any combination of, a hard disk, a multimedia card, an SD card, a flash memory card, an SMC, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, and the like.
  • The computer-readable storage medium includes a face image sample library, a constructed and trained face classification recognition model, a face recognition program 10, and the like; when executed by the processor 13, the face recognition program 10 implements the following operations:
  • a sample library establishing step: acquiring 2N face images, including N face color images and N corresponding face depth images, preprocessing the face images, using the preprocessed face images as face sample images, and establishing a face image sample library in which the face ID, face color image, and face depth image match one another;
  • a model training step: training a convolutional neural network with the mutually matching face sample images to obtain a face classification recognition model and the feature vectors of the face sample images;
  • a target face image acquisition step: acquiring a target face image to be recognized, including a face color image of the target to be recognized and a corresponding face depth image;
  • a target recognition step: inputting the target face image to be recognized into the face classification recognition model, extracting a feature vector of the target face image to be recognized, searching the face image sample library according to the feature vector for a face sample image that matches the target face image to be recognized, and determining the face ID of the target face image to be recognized according to that face sample image.

Abstract

The invention relates to a face recognition method combined with depth information. The method comprises: establishing a face image sample library in which a face ID, a face color image, and a face depth image correspond to one another; constructing and training a face classification and recognition model to obtain feature vectors of the face sample images; acquiring a face image of a target to be recognized, the face image comprising a face color image of the target to be recognized and a face depth image of the target to be recognized; inputting the face image of the target to be recognized into the face classification and recognition model, and extracting a feature vector of the face image of the target to be recognized; and determining, according to the feature vector, a face ID corresponding to the face image of the target to be recognized. By combining depth information about the human face and calculating the vector distance between the feature vector of the face color image and face depth image of the target to be recognized and the feature vectors of the face sample images, the present invention achieves accurate face recognition. The invention also relates to an electronic apparatus and a computer-readable storage medium.
PCT/CN2017/108804 2017-10-09 2017-10-31 Face recognition method and apparatus combined with depth information, and storage medium WO2019071664A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710930240.5A CN107844744A (zh) 2017-10-09 2017-10-09 结合深度信息的人脸识别方法、装置及存储介质
CN201710930240.5 2017-10-09

Publications (1)

Publication Number Publication Date
WO2019071664A1 (fr) 2019-04-18

Family

ID=61662330

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/108804 WO2019071664A1 (fr) 2017-10-09 2017-10-31 Face recognition method and apparatus combined with depth information, and storage medium

Country Status (2)

Country Link
CN (1) CN107844744A (fr)
WO (1) WO2019071664A1 (fr)

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110210307A (zh) * 2019-04-30 2019-09-06 中国银联股份有限公司 人脸样本库部署方法、基于人脸识别业务处理方法及装置
CN110472480A (zh) * 2019-06-28 2019-11-19 深圳数马电子技术有限公司 物品识别方法、装置及计算机可读存储介质
CN110502986A (zh) * 2019-07-12 2019-11-26 平安科技(深圳)有限公司 识别图像中人物位置方法、装置、计算机设备和存储介质
CN110866469A (zh) * 2019-10-30 2020-03-06 腾讯科技(深圳)有限公司 一种人脸五官识别方法、装置、设备及介质
CN111325156A (zh) * 2020-02-24 2020-06-23 北京沃东天骏信息技术有限公司 人脸识别方法、装置、设备和存储介质
CN111368803A (zh) * 2020-03-28 2020-07-03 河南工业职业技术学院 一种人脸识别方法及系统
CN111462227A (zh) * 2020-03-27 2020-07-28 海信集团有限公司 室内人员定位装置及方法
CN111583317A (zh) * 2020-04-29 2020-08-25 深圳市优必选科技股份有限公司 图像对齐方法、装置及终端设备
CN111639545A (zh) * 2020-05-08 2020-09-08 浙江大华技术股份有限公司 一种人脸识别方法、装置、设备及介质
CN111709303A (zh) * 2020-05-21 2020-09-25 北京明略软件系统有限公司 一种人脸图像的识别方法和装置
CN111899281A (zh) * 2020-07-15 2020-11-06 北京思方技术开发有限公司 一种基于行为树的辅助监控系统控制策略实现方法及系统
CN112052709A (zh) * 2019-06-06 2020-12-08 搜狗(杭州)智能科技有限公司 一种人脸属性识别方法及装置
CN112052727A (zh) * 2020-07-30 2020-12-08 广州市标准化研究院 一种基于大数据的人像识别分析系统及方法
CN112115740A (zh) * 2019-06-19 2020-12-22 北京海益同展信息科技有限公司 用于处理图像的方法和装置
CN112329661A (zh) * 2020-11-10 2021-02-05 广东博通科技服务有限公司 一种基于深度卷神经网络的人脸识别装置及使用方法
CN112380965A (zh) * 2020-11-11 2021-02-19 浙江大华技术股份有限公司 进行人脸识别的方法及多目摄像机
CN112651306A (zh) * 2020-12-10 2021-04-13 深兰人工智能(四川)有限公司 工具取用监控方法及装置
CN112766049A (zh) * 2020-12-29 2021-05-07 清华大学 基于难样本挖掘的大规模人脸识别测试集构建方法及装置
CN112801013A (zh) * 2021-02-08 2021-05-14 的卢技术有限公司 一种基于关键点识别校验的人脸识别方法、系统及装置
CN112949362A (zh) * 2019-12-11 2021-06-11 杭州海康威视数字技术股份有限公司 人员信息标注方法、装置及电子设备
CN113378715A (zh) * 2021-06-10 2021-09-10 北京华捷艾米科技有限公司 一种基于彩色人脸图像的活体检测方法及相关设备
CN113536966A (zh) * 2021-06-25 2021-10-22 佛山君兰智能科技有限公司 一种基于物联网的人脸识别系统及方法
CN114530033A (zh) * 2022-02-18 2022-05-24 平安国际智慧城市科技股份有限公司 基于人脸识别的眼屏距警示方法、装置、设备及存储介质
US20220327864A1 (en) * 2020-10-12 2022-10-13 Hong Fu Jin Precision Industry (Wuhan) Co., Ltd. Face recognition method, device employing method, and readable storage medium
CN115471902A (zh) * 2022-11-14 2022-12-13 广州市威士丹利智能科技有限公司 基于智慧校园的人脸识别防护方法及系统

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108764031B (zh) * 2018-04-17 2023-04-18 平安科技(深圳)有限公司 识别人脸的方法、装置、计算机设备和存储介质
CN108596141B (zh) * 2018-05-08 2022-05-17 深圳大学 一种深度网络生成人脸图像的检测方法及系统
CN110619200B (zh) * 2018-06-19 2022-04-08 Oppo广东移动通信有限公司 验证系统和电子装置
WO2019228097A1 (fr) 2018-05-29 2019-12-05 Oppo广东移动通信有限公司 Système de vérification, dispositif électronique, procédé de vérification, support de stockage lisible par ordinateur et appareil informatique
CN108763903B (zh) * 2018-05-29 2020-02-11 Oppo广东移动通信有限公司 验证装置和电子设备
CN108765014A (zh) * 2018-05-30 2018-11-06 中海云智慧(北京)物联网科技有限公司 一种基于门禁系统的智能广告投放方法
CN108763911B (zh) * 2018-06-06 2019-12-06 Oppo广东移动通信有限公司 验证方法、验证装置、电子设备和计算机可读存储介质
CN108932505A (zh) * 2018-07-25 2018-12-04 维沃移动通信有限公司 一种图像处理方法和电子设备
CN111832535A (zh) * 2018-08-24 2020-10-27 创新先进技术有限公司 人脸识别方法及装置
CN109241890B (zh) * 2018-08-24 2020-01-14 北京字节跳动网络技术有限公司 面部图像校正方法、装置及存储介质
CN109472280B (zh) * 2018-09-10 2023-04-07 广东数相智能科技有限公司 一种更新物种识别模型库的方法、存储介质及电子设备
CN109948467A (zh) * 2019-02-28 2019-06-28 中国科学院深圳先进技术研究院 人脸识别的方法、装置、计算机设备和存储介质
CN110188616B (zh) * 2019-05-05 2023-02-28 上海盎维信息技术有限公司 基于2d和3d影像的空间建模方法及装置
CN110378209B (zh) * 2019-06-11 2021-12-17 深圳市锐明技术股份有限公司 一种司机身份验证方法及装置
CN110458025B (zh) * 2019-07-11 2022-10-14 南京邮电大学 一种基于双目摄像头的目标识别与定位方法
CN110533426A (zh) * 2019-08-02 2019-12-03 深圳蚂里奥技术有限公司 一种支付方法及系统
CN110532979A (zh) * 2019-09-03 2019-12-03 深圳市华芯技研科技有限公司 一种三维图像人脸识别方法及系统
CN111144298A (zh) * 2019-12-26 2020-05-12 北京华捷艾米科技有限公司 一种行人识别方法及装置
CN111553191A (zh) * 2020-03-30 2020-08-18 深圳壹账通智能科技有限公司 基于人脸识别的视频分类方法、装置及存储介质
CN111753658A (zh) * 2020-05-20 2020-10-09 高新兴科技集团股份有限公司 一种睡岗告警方法、装置和计算机设备
CN112036284B (zh) * 2020-08-25 2024-04-19 腾讯科技(深圳)有限公司 图像处理方法、装置、设备及存储介质
CN112598836A (zh) * 2020-12-14 2021-04-02 天时地理(深圳)智能科技有限公司 一种基于深度学习的人脸识别比对系统及方法
CN113837105A (zh) * 2021-09-26 2021-12-24 北京的卢深视科技有限公司 人脸识别方法、系统、电子设备及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150278997A1 (en) * 2012-09-26 2015-10-01 Korea Institute Of Science And Technology Method and apparatus for inferring facial composite
CN105354543A (zh) * 2015-10-29 2016-02-24 小米科技有限责任公司 视频处理方法及装置
CN105844635A (zh) * 2016-03-21 2016-08-10 北京工业大学 一种基于结构字典的稀疏表示深度图像重建算法
CN106919251A (zh) * 2017-01-09 2017-07-04 重庆邮电大学 一种基于多模态情感识别的虚拟学习环境自然交互方法
CN106934377A (zh) * 2017-03-14 2017-07-07 深圳大图科创技术开发有限公司 一种改进的人脸检测系统
CN106991377A (zh) * 2017-03-09 2017-07-28 广东欧珀移动通信有限公司 结合深度信息的人脸识别方法、人脸识别装置和电子装置

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130300939A1 (en) * 2012-05-11 2013-11-14 Cisco Technology, Inc. System and method for joint speaker and scene recognition in a video/audio processing environment
CN104778441A (zh) * 2015-01-07 2015-07-15 深圳市唯特视科技有限公司 融合灰度信息和深度信息的多模态人脸识别装置及方法
CN104751140A (zh) * 2015-03-30 2015-07-01 常州大学 一种基于深度学习sdae理论的三维人脸识别算法及其在金融领域应用
CN104899579A (zh) * 2015-06-29 2015-09-09 小米科技有限责任公司 人脸识别方法和装置
CN105513221B (zh) * 2015-12-30 2018-08-14 四川川大智胜软件股份有限公司 一种基于三维人脸识别的atm机防欺诈装置及系统
CN205318544U (zh) * 2015-12-30 2016-06-15 四川川大智胜软件股份有限公司 一种基于三维人脸识别的atm机防欺诈装置及系统
CN105847684A (zh) * 2016-03-31 2016-08-10 深圳奥比中光科技有限公司 无人机
CN106022317A (zh) * 2016-06-27 2016-10-12 北京小米移动软件有限公司 人脸识别方法及装置
CN106650656B (zh) * 2016-12-16 2023-10-27 中新智擎科技有限公司 一种用户身份识别装置和机器人
CN106778615B (zh) * 2016-12-16 2019-10-18 中新智擎科技有限公司 一种识别用户身份的方法、装置和物业服务机器人
CN106909905B (zh) * 2017-03-02 2020-02-14 中科视拓(北京)科技有限公司 一种基于深度学习的多模态人脸识别方法

Cited By (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110210307B (zh) * 2019-04-30 2023-11-28 中国银联股份有限公司 人脸样本库部署方法、基于人脸识别业务处理方法及装置
US11893829B2 (en) 2019-04-30 2024-02-06 China Union Pay Co., Ltd. Method for deploying a face sample library and method and apparatus for business processing based on face recognition
CN110210307A (zh) * 2019-04-30 2019-09-06 中国银联股份有限公司 人脸样本库部署方法、基于人脸识别业务处理方法及装置
CN112052709B (zh) * 2019-06-06 2024-04-19 北京搜狗科技发展有限公司 一种人脸属性识别方法及装置
CN112052709A (zh) * 2019-06-06 2020-12-08 搜狗(杭州)智能科技有限公司 一种人脸属性识别方法及装置
CN112115740B (zh) * 2019-06-19 2024-04-09 京东科技信息技术有限公司 用于处理图像的方法和装置
CN112115740A (zh) * 2019-06-19 2020-12-22 北京海益同展信息科技有限公司 用于处理图像的方法和装置
CN110472480A (zh) * 2019-06-28 2019-11-19 深圳数马电子技术有限公司 物品识别方法、装置及计算机可读存储介质
CN110502986A (zh) * 2019-07-12 2019-11-26 平安科技(深圳)有限公司 识别图像中人物位置方法、装置、计算机设备和存储介质
CN110866469A (zh) * 2019-10-30 2020-03-06 腾讯科技(深圳)有限公司 一种人脸五官识别方法、装置、设备及介质
CN112949362B (zh) * 2019-12-11 2023-10-17 杭州海康威视数字技术股份有限公司 人员信息标注方法、装置及电子设备
CN112949362A (zh) * 2019-12-11 2021-06-11 杭州海康威视数字技术股份有限公司 人员信息标注方法、装置及电子设备
CN111325156B (zh) * 2020-02-24 2023-08-11 北京沃东天骏信息技术有限公司 人脸识别方法、装置、设备和存储介质
CN111325156A (zh) * 2020-02-24 2020-06-23 北京沃东天骏信息技术有限公司 人脸识别方法、装置、设备和存储介质
CN111462227A (zh) * 2020-03-27 2020-07-28 海信集团有限公司 室内人员定位装置及方法
CN111368803A (zh) * 2020-03-28 2020-07-03 河南工业职业技术学院 一种人脸识别方法及系统
CN111583317B (zh) * 2020-04-29 2024-02-09 深圳市优必选科技股份有限公司 图像对齐方法、装置及终端设备
CN111583317A (zh) * 2020-04-29 2020-08-25 深圳市优必选科技股份有限公司 图像对齐方法、装置及终端设备
CN111639545A (zh) * 2020-05-08 2020-09-08 浙江大华技术股份有限公司 一种人脸识别方法、装置、设备及介质
CN111639545B (zh) * 2020-05-08 2023-08-08 浙江大华技术股份有限公司 一种人脸识别方法、装置、设备及介质
CN111709303A (zh) * 2020-05-21 2020-09-25 北京明略软件系统有限公司 一种人脸图像的识别方法和装置
CN111899281A (zh) * 2020-07-15 2020-11-06 北京思方技术开发有限公司 一种基于行为树的辅助监控系统控制策略实现方法及系统
CN111899281B (zh) * 2020-07-15 2023-10-31 北京和利时系统工程有限公司 一种基于行为树的辅助监控系统控制策略实现方法及系统
CN112052727A (zh) * 2020-07-30 2020-12-08 广州市标准化研究院 一种基于大数据的人像识别分析系统及方法
CN112052727B (zh) * 2020-07-30 2024-04-02 广州市标准化研究院 一种基于大数据的人像识别分析系统及方法
US20220327864A1 (en) * 2020-10-12 2022-10-13 Hong Fu Jin Precision Industry (Wuhan) Co., Ltd. Face recognition method, device employing method, and readable storage medium
US11922724B2 (en) * 2020-10-12 2024-03-05 Hong Fu Jin Precision Industry (Wuhan) Co., Ltd. Face recognition method utilizing a face recognition model and a face sample library to detect mask images
CN112329661B (zh) * 2020-11-10 2022-06-03 广东博通科技服务有限公司 一种基于深度卷神经网络的人脸识别装置及使用方法
CN112329661A (zh) * 2020-11-10 2021-02-05 广东博通科技服务有限公司 一种基于深度卷神经网络的人脸识别装置及使用方法
CN112380965B (zh) * 2020-11-11 2024-04-09 浙江大华技术股份有限公司 进行人脸识别的方法及多目摄像机
CN112380965A (zh) * 2020-11-11 2021-02-19 浙江大华技术股份有限公司 进行人脸识别的方法及多目摄像机
CN112651306A (zh) * 2020-12-10 2021-04-13 深兰人工智能(四川)有限公司 工具取用监控方法及装置
CN112766049A (zh) * 2020-12-29 2021-05-07 清华大学 基于难样本挖掘的大规模人脸识别测试集构建方法及装置
CN112801013A (zh) * 2021-02-08 2021-05-14 的卢技术有限公司 一种基于关键点识别校验的人脸识别方法、系统及装置
CN112801013B (zh) * 2021-02-08 2024-04-09 的卢技术有限公司 一种基于关键点识别校验的人脸识别方法、系统及装置
CN113378715B (zh) * 2021-06-10 2024-01-05 北京华捷艾米科技有限公司 一种基于彩色人脸图像的活体检测方法及相关设备
CN113378715A (zh) * 2021-06-10 2021-09-10 北京华捷艾米科技有限公司 一种基于彩色人脸图像的活体检测方法及相关设备
CN113536966A (zh) * 2021-06-25 2021-10-22 佛山君兰智能科技有限公司 一种基于物联网的人脸识别系统及方法
CN114530033A (zh) * 2022-02-18 2022-05-24 平安国际智慧城市科技股份有限公司 基于人脸识别的眼屏距警示方法、装置、设备及存储介质
CN115471902A (zh) * 2022-11-14 2022-12-13 广州市威士丹利智能科技有限公司 基于智慧校园的人脸识别防护方法及系统

Also Published As

Publication number Publication date
CN107844744A (zh) 2018-03-27

Similar Documents

Publication Publication Date Title
WO2019071664A1 (fr) Face recognition method and apparatus combined with depth information, and storage medium
US9818023B2 (en) Enhanced face detection using depth information
CN110147717B (zh) 一种人体动作的识别方法及设备
US10534957B2 (en) Eyeball movement analysis method and device, and storage medium
CN106446873B (zh) 人脸检测方法及装置
CN106203305B (zh) 人脸活体检测方法和装置
CN110232369B (zh) 一种人脸识别方法和电子设备
US10423848B2 (en) Method, system, and computer-readable recording medium for long-distance person identification
WO2019033572A1 (fr) Procédé de détection de situation de visage bloqué, dispositif et support d'informations
WO2019033571A1 (fr) Procédé de détection de point de caractéristique faciale, appareil et support de stockage
US10650234B2 (en) Eyeball movement capturing method and device, and storage medium
CN105184238A (zh) 一种人脸识别方法及系统
CN105740778B (zh) 一种改进的三维人脸活体检测方法及其装置
WO2019061658A1 (fr) Procédé et dispositif de localisation de lunettes, et support d'informations
US10489636B2 (en) Lip movement capturing method and device, and storage medium
CN103679147A (zh) 手机型号的识别方法与装置
KR20190097640A (ko) 영상 정합 방법 및 장치
WO2019033570A1 (fr) Procédé d'analyse de mouvement labial, appareil et support d'informations
US20220147735A1 (en) Face-aware person re-identification system
CN110866466A (zh) 一种人脸识别方法、装置、存储介质和服务器
TW201405445A (zh) 基於雙攝影機之真實人臉識別系統及其方法
KR101724971B1 (ko) 광각 카메라를 이용한 얼굴 인식 시스템 및 그를 이용한 얼굴 인식 방법
US11315360B2 (en) Live facial recognition system and method
CN112364827A (zh) 人脸识别方法、装置、计算机设备和存储介质
CN111582118A (zh) 一种人脸识别方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17928156

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 23.09.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17928156

Country of ref document: EP

Kind code of ref document: A1