WO2020244076A1 - Face recognition method and apparatus, electronic device, and storage medium - Google Patents

Face recognition method and apparatus, electronic device, and storage medium Download PDF

Info

Publication number
WO2020244076A1
WO2020244076A1 (PCT/CN2019/103414; CN2019103414W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
feature point
reticulated
coordinate set
person
Prior art date
Application number
PCT/CN2019/103414
Other languages
English (en)
French (fr)
Inventor
戴磊
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2020244076A1 publication Critical patent/WO2020244076A1/zh

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Definitions

  • This application relates to the field of face recognition, and in particular to a face recognition method and apparatus, an electronic device, and a storage medium.
  • In face recognition, a reticulated face image of the target person is acquired, de-reticulated to obtain a de-reticulated face image, and the de-reticulated face image is aligned and cropped to obtain an aligned face image for subsequent face recognition.
  • Because the aligned face image has been de-reticulated, the inventor realized that de-reticulation causes certain changes in its facial features, such as blurring or slight shifts in the boundaries of the eyes, ears, nose and mouth, which degrades the accuracy of face recognition.
  • A first aspect of the present application provides a face recognition method, the method including:
  • the size of the reticulated person image is w×h, where w is the length of the reticulated person image and h is the width of the reticulated person image;
  • the size of the de-reticulated person image is w'×h', where w' is the length of the de-reticulated person image and h' is the width of the de-reticulated person image;
  • mapping the feature point coordinates in the first feature point coordinate set onto the reticulated person image to obtain the second feature point coordinate set includes:
  • the second feature point coordinate set obtained by the mapping is:
  • xj = x'j / w' * w;
  • yj = y'j / h' * h;
  • where xj is the abscissa of the j-th feature point in the second feature point coordinate set, x'j is the abscissa of the j-th feature point in the first feature point coordinate set, w' is the length of the de-reticulated person image, w is the length of the reticulated person image, yj is the ordinate of the j-th feature point in the second feature point coordinate set, y'j is the ordinate of the j-th feature point in the first feature point coordinate set, h' is the width of the de-reticulated person image, and h is the width of the reticulated person image.
  • the reticulated person image is an everyday photo of the target person;
  • the everyday photo of the target person includes the target person and a background;
  • the aligned reticulated face image includes the face of the target person.
  • the feature point calibration includes:
  • performing feature extraction on the detected face image by inputting it into a feature extractor.
  • before performing face alignment and cropping on the reticulated person image according to the second feature point coordinate set to obtain an aligned reticulated face image, the method further includes:
  • segmenting the target person from the reticulated person image to obtain a target person image.
  • the reticulated person image is an image including depth information;
  • segmenting the target person from the reticulated person image to obtain the target person image includes: acquiring a histogram of the reticulated person image, clustering the histogram with a clustering algorithm to obtain two classes, and segmenting the target person using the boundary between the two classes as a separation threshold;
  • performing face alignment and cropping on the reticulated person image according to the second feature point coordinate set to obtain an aligned reticulated face image includes: performing face alignment and cropping on the target person image according to the second feature point coordinate set;
  • the reticulated person image is a reticulated person image in a database of the Ministry of Public Security, a reticulated person image uploaded during website registration, or a reticulated person image uploaded during registration on a device;
  • acquiring the reticulated person image includes:
  • acquiring, upon receiving a physical card presented by the user, the reticulated person image associated with that card.
  • A second aspect of the present application provides a face recognition apparatus, the apparatus including:
  • an acquisition module, used to acquire a reticulated person image;
  • a de-reticulation module, used to de-reticulate the reticulated person image to obtain a de-reticulated person image;
  • a first feature point obtaining module, configured to perform face detection and feature point calibration on the de-reticulated person image to obtain a first feature point coordinate set;
  • a second feature point obtaining module, configured to map the feature point coordinates in the first feature point coordinate set onto the reticulated person image to obtain a second feature point coordinate set;
  • an alignment and cropping module, configured to perform face alignment and cropping on the reticulated person image according to the second feature point coordinate set to obtain an aligned reticulated face image;
  • a face recognition module, used to perform face recognition according to the aligned reticulated face image.
  • A third aspect of the present application provides an electronic device that includes a processor and a memory, the memory being configured to store at least one computer-readable instruction and the processor being configured to execute the at least one computer-readable instruction to implement the steps of the above method.
  • A fourth aspect of the present application provides a non-volatile computer-readable storage medium having computer-readable instructions stored thereon which, when executed by a processor, implement the steps of the above method.
  • This solution obtains the de-reticulated person image by de-reticulating the reticulated person image, so various existing algorithms can be applied directly to detect faces in the reticulation-free image, without collecting a large number of reticulated image training samples to build a model for face detection on reticulated images.
  • The second feature point coordinate set is obtained by mapping the feature point coordinates in the first feature point coordinate set onto the reticulated person image, so feature extraction on the reticulated image is achieved through the feature points of the reticulation-free image.
  • The aligned reticulated face image is obtained by performing face alignment and cropping on the reticulated person image according to the second feature point coordinate set, so the comparison object used for face recognition has not been de-reticulated, retains more of the original features of the face, and yields a considerably improved recognition accuracy over the de-reticulated face image.
  • FIG. 1 is a flowchart of a face recognition method provided in Embodiment 1 of the present application.
  • FIG. 2 is a functional block diagram of the face recognition device provided in Embodiment 2 of the present application.
  • FIG. 3 is a schematic diagram of an electronic device provided in Embodiment 3 of the present application.
  • FIG. 1 is a schematic flowchart of a face recognition method provided in Embodiment 1 of this application. According to different needs, the order of the steps in the flowchart can be changed, and some steps can be omitted.
  • the method is applied to an electronic device, and the electronic device can be any electronic product, for example, a personal computer, a tablet computer, a smart phone, a personal digital assistant (Personal Digital Assistant, PDA), etc.
  • the face recognition method may include the following steps:
  • The reticulated person image may be a reticulated person image in the database of the Ministry of Public Security, a reticulated person image uploaded during website registration, or a reticulated person image uploaded during registration on a device.
  • The reticulated person image is an everyday photo of the target person; the photo includes the target person and a background.
  • Acquiring the reticulated person image may be: acquiring it according to a face recognition request; acquiring, upon receiving information entered by the user on a website, the reticulated person image associated with the entered information; or acquiring, upon receiving a physical card presented by the user on a device, the reticulated person image associated with that card.
  • The information entered by the user on the website may be a name, an account number, etc.
  • The physical card may be a bank card, an ID card, etc.
  • The size of the reticulated person image is w×h, where w is the length of the reticulated person image and h is the width of the reticulated person image.
  • S12: De-reticulate the reticulated person image to obtain a de-reticulated person image.
  • Owing to the limited availability of reticulated training samples, current face detection algorithms either cannot directly detect faces in reticulated person images or do so inaccurately, whereas for reticulation-free person images they can detect the faces accurately. Therefore, to achieve accurate face detection, the reticulated person image must first be de-reticulated with a de-reticulation algorithm.
  • De-reticulating the reticulated person image to obtain the de-reticulated person image may be performed with a fully convolutional neural network, adaptive filtering, or other existing techniques.
  • The size of the de-reticulated person image is w'×h', where w' is the length of the de-reticulated person image and h' is the width of the de-reticulated person image.
  • Regardless of the size of the input reticulated person image, the size of the output de-reticulated person image is fixed.
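  • As a rough illustration of this step only, the sketch below stands in for the de-reticulation model with a simple median filter plus a fixed-size resize; the patent itself uses a fully convolutional network or adaptive filtering, so the filter choice, function name, and output size here are assumptions, not the patented method.

    import cv2

    def dereticulate(reticulated_bgr, out_size=(256, 256)):
        """Hypothetical stand-in for the de-reticulation step.

        The patent de-reticulates with a fully convolutional network or
        adaptive filtering; a median blur is used here only so the
        pipeline is runnable, since it suppresses thin mesh strokes.
        """
        smoothed = cv2.medianBlur(reticulated_bgr, 5)
        # Whatever the input size, the de-reticulated output has a fixed
        # size w' x h', as the embodiment states.
        return cv2.resize(smoothed, out_size, interpolation=cv2.INTER_LINEAR)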
  • S13: Perform face detection and feature point calibration on the de-reticulated person image to obtain a first feature point coordinate set.
  • Face detection on the de-reticulated person image may be performed in a cascade-based manner, or with HOG/FHOG-based SVM/DPM detectors, among other approaches.
  • Feature point calibration means performing feature extraction on the detected face image by inputting it into a feature extractor.
  • The feature extractor may perform feature extraction on the face image with a method based on subspace analysis, a neural network, a hidden Markov model, a support vector machine, etc.
  • The first feature point coordinate set is L' = {(x'1,y'1), …, (x'j,y'j), …, (x'n,y'n)}, where:
  • n is a positive integer greater than 1;
  • j indexes the j-th feature point in the first feature point coordinate set;
  • x' is the abscissa and y' is the ordinate of a feature point;
  • x'j is the abscissa, and y'j the ordinate, of the j-th feature point in the first feature point coordinate set.
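  • A minimal sketch of face detection plus feature point calibration on the de-reticulated image, using dlib's HOG-based frontal detector (one concrete instance of the HOG/SVM family the embodiment allows) and a 68-point landmark predictor; the predictor file path and the choice of dlib are assumptions of this sketch.

    import dlib

    # HOG+SVM frontal face detector, one of the detectors the embodiment allows
    detector = dlib.get_frontal_face_detector()
    # Hypothetical model path; any 68-point shape predictor file works here
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    def first_feature_point_set(dereticulated_img):
        """Return L' = [(x'_1, y'_1), ..., (x'_n, y'_n)] on the
        de-reticulated person image."""
        faces = detector(dereticulated_img, 1)  # upsample once for small faces
        if not faces:
            return []
        shape = predictor(dereticulated_img, faces[0])  # feature point calibration
        return [(p.x, p.y) for p in shape.parts()]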
  • The second feature point coordinate set is L = {(x1,y1), …, (xj,yj), …, (xn,yn)}, where:
  • n is a positive integer greater than 1;
  • j indexes the j-th feature point in the second feature point coordinate set;
  • x is the abscissa and y is the ordinate of a feature point;
  • xj is the abscissa, and yj the ordinate, of the j-th feature point in the second feature point coordinate set.
  • mapping the feature point coordinates in the first feature point coordinate set onto the reticulated person image to obtain the second feature point coordinate set includes:
  • the second feature point coordinate set obtained by the mapping is:
  • xj = x'j / w' * w;
  • yj = y'j / h' * h;
  • where xj is the abscissa of the j-th feature point in the second feature point coordinate set, x'j is the abscissa of the j-th feature point in the first feature point coordinate set, w' is the length of the de-reticulated person image, w is the length of the reticulated person image, yj is the ordinate of the j-th feature point in the second feature point coordinate set, y'j is the ordinate of the j-th feature point in the first feature point coordinate set, h' is the width of the de-reticulated person image, and h is the width of the reticulated person image.
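  • The mapping itself is a pure rescaling between the two image sizes, so it reduces to a few lines; a sketch follows (the function and variable names are ours, not the patent's):

    def map_to_reticulated(first_set, w_prime, h_prime, w, h):
        """Map L' on the w' x h' de-reticulated image to L on the
        w x h reticulated image: x_j = x'_j / w' * w, y_j = y'_j / h' * h."""
        return [(xp / w_prime * w, yp / h_prime * h) for (xp, yp) in first_set]

  • For example, a point at (64, 32) on a 128×128 de-reticulated image maps to (240, 60) on a 480×240 reticulated image.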
  • S15: Perform face alignment and cropping on the reticulated person image according to the second feature point coordinate set to obtain an aligned reticulated face image.
  • Current face recognizers can capture images or video streams containing faces through a camera, automatically detect and track the faces in the images, and compare a detected face with a reticulated or reticulation-free face image to confirm whether they belong to the same person.
  • Compared with the de-reticulated face image, the reticulated face image retains more of the original features of the face and does not suffer from the blurred facial features, or the slightly shifted boundaries of the eyes, ears, nose and mouth, found in de-reticulated face images. Therefore, to improve the accuracy of face recognition, reticulated face images are used for face recognition.
  • Performing face alignment and cropping on the reticulated person image according to the second feature point coordinate set means cropping the face from the reticulated person image according to the second feature point coordinate set and straightening the angle of the cropped face, to facilitate subsequent face recognition.
  • The reticulated face image includes only the face.
  • The face alignment and cropping may be carried out on the reticulated person image according to the second feature point coordinate set with the ESR algorithm, the SDM algorithm, the GBDT algorithm, etc.
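  • A minimal alignment-and-crop sketch under the common 68-point landmark convention: rotate the reticulated image so the eyes are level, then crop a box around the mapped landmarks. The eye-corner indices and output size are assumptions of this sketch; the patent itself only names ESR/SDM/GBDT-style alignment.

    import math
    import cv2
    import numpy as np

    def align_and_crop(reticulated_bgr, second_set, left_eye=36, right_eye=45,
                       out_size=(112, 112)):
        pts = np.asarray(second_set, dtype=np.float32)
        le, re = pts[left_eye], pts[right_eye]
        # Angle of the eye line; rotating by it straightens the face
        angle = math.degrees(math.atan2(re[1] - le[1], re[0] - le[0]))
        center = (float(pts[:, 0].mean()), float(pts[:, 1].mean()))
        rot = cv2.getRotationMatrix2D(center, angle, 1.0)
        h, w = reticulated_bgr.shape[:2]
        straightened = cv2.warpAffine(reticulated_bgr, rot, (w, h))
        # Transform the landmarks with the same matrix and crop their
        # bounding box so that the result contains only the face
        rpts = np.hstack([pts, np.ones((len(pts), 1), np.float32)]) @ rot.T
        x0, y0 = np.maximum(rpts.min(axis=0).astype(int), 0)
        x1, y1 = rpts.max(axis=0).astype(int)
        return cv2.resize(straightened[y0:y1, x0:x1], out_size)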
  • Before performing face alignment and cropping on the reticulated person image according to the second feature point coordinate set to obtain the aligned reticulated face image, the method further includes:
  • segmenting the target person from the reticulated person image to obtain a target person image.
  • The reticulated person image is an image including depth information. Segmenting the target person from the reticulated person image to obtain the target person image includes: acquiring a histogram of the reticulated person image, clustering the histogram with a clustering algorithm to obtain two classes, and segmenting the target person from the reticulated person image using the boundary between the two classes as a separation threshold.
  • The clustering algorithm may be a K-means algorithm or a kernel density estimation algorithm.
  • Performing face alignment and cropping on the reticulated person image according to the second feature point coordinate set to obtain the aligned reticulated face image then includes: performing face alignment and cropping on the target person image according to the second feature point coordinate set.
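  • A sketch of the depth-based segmentation: cluster the depth values into two classes with K-means (the embodiment also allows kernel density estimation) and threshold at the boundary between the two cluster centers. Clustering the raw depth pixels rather than an explicit histogram, and treating the nearer cluster as the target person, are assumptions of this sketch.

    import numpy as np
    from sklearn.cluster import KMeans

    def segment_target_person(depth_map):
        """Return a boolean mask of the target person on a depth image."""
        depths = depth_map.reshape(-1, 1).astype(np.float32)
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(depths)
        c0, c1 = sorted(km.cluster_centers_.ravel())
        threshold = (c0 + c1) / 2.0  # boundary between the two classes
        return depth_map < threshold  # nearer pixels assumed to be the person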
  • Performing face recognition according to the aligned reticulated face image includes comparing a face image with the reticulated face image to determine whether they belong to the same person.
  • The specific face recognition based on the aligned reticulated face image is prior art and is not repeated here.
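  • Since the recognition step itself is prior art, a generic comparison sketch suffices: embed both faces and compare with cosine similarity. The embedding function `embed` and the decision threshold are assumptions; any face encoder can be substituted.

    import numpy as np

    def same_person(embed, probe_face, aligned_reticulated_face, threshold=0.6):
        """Compare a detected face with the aligned reticulated face image."""
        a, b = embed(probe_face), embed(aligned_reticulated_face)
        cos = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
        return cos >= threshold  # same person if similarity clears the threshold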
  • This solution obtains the de-reticulated person image by de-reticulating the reticulated person image, so various existing algorithms can be applied directly to detect faces in the reticulation-free image, without collecting a large number of reticulated image training samples to build a model for face detection on reticulated images.
  • The second feature point coordinate set is obtained by mapping the feature point coordinates in the first feature point coordinate set onto the reticulated person image, so feature extraction on the reticulated image is achieved through the feature points of the reticulation-free image.
  • The aligned reticulated face image is obtained by performing face alignment and cropping on the reticulated person image according to the second feature point coordinate set, so the comparison object used for face recognition has not been de-reticulated, retains more of the original features of the face, and yields a considerably improved recognition accuracy over the de-reticulated face image.
  • FIG. 2 is a functional block diagram of a face recognition device provided in Embodiment 2 of this application.
  • the face recognition device 20 runs in an electronic device.
  • the electronic device may be any electronic product, for example, a smart phone, a personal digital assistant (Personal Digital Assistant, PDA), etc.
  • the face recognition device 20 may include multiple functional modules composed of program code segments.
  • the program code of each program segment in the face recognition device 20 can be stored in a memory and executed by at least one processor to perform face recognition.
  • the face recognition device 20 can be divided into multiple functional modules according to the functions it performs.
  • the functional modules may include: an acquisition module 21, a de-reticulation module 22, a first feature point obtaining module 23, a second feature point obtaining module 24, an alignment and cropping module 25, and a face recognition module 26.
  • A module referred to in this application is a series of computer-readable instruction segments that can be executed by at least one processor, that can complete a fixed function, and that are stored in a memory.
  • The acquisition module 21 is used to acquire a reticulated person image.
  • The reticulated person image may be a reticulated person image in the database of the Ministry of Public Security, a reticulated person image uploaded during website registration, or a reticulated person image uploaded during registration on a device.
  • The reticulated person image is an everyday photo of the target person; the photo includes the target person and a background.
  • Acquiring the reticulated person image may be: acquiring it according to a face recognition request; acquiring, upon receiving information entered by the user on a website, the reticulated person image associated with the entered information; or acquiring, upon receiving a physical card presented by the user on a device, the reticulated person image associated with that card.
  • The information entered by the user on the website may be a name, an account number, etc.
  • The physical card may be a bank card, an ID card, etc.
  • The size of the reticulated person image is w×h, where w is the length of the reticulated person image and h is the width of the reticulated person image.
  • The de-reticulation module 22 is used to de-reticulate the reticulated person image to obtain a de-reticulated person image.
  • Owing to the limited availability of reticulated training samples, current face detection algorithms either cannot directly detect faces in reticulated person images or do so inaccurately, whereas for reticulation-free person images they can detect the faces accurately. Therefore, to achieve accurate face detection, the reticulated person image must first be de-reticulated with a de-reticulation algorithm.
  • De-reticulating the reticulated person image to obtain the de-reticulated person image may be performed with a fully convolutional neural network, adaptive filtering, or other existing techniques.
  • The size of the de-reticulated person image is w'×h', where w' is the length of the de-reticulated person image and h' is the width of the de-reticulated person image.
  • Regardless of the size of the input reticulated person image, the size of the output de-reticulated person image is fixed.
  • The first feature point obtaining module 23 is configured to perform face detection and feature point calibration on the de-reticulated person image to obtain a first feature point coordinate set.
  • Face detection on the de-reticulated person image may be performed in a cascade-based manner, or with HOG/FHOG-based SVM/DPM detectors, among other approaches.
  • Feature point calibration means performing feature extraction on the detected face image by inputting it into a feature extractor.
  • The feature extractor may perform feature extraction on the face image with a method based on subspace analysis, a neural network, a hidden Markov model, a support vector machine, etc.
  • The first feature point coordinate set is L' = {(x'1,y'1), …, (x'j,y'j), …, (x'n,y'n)}, where:
  • n is a positive integer greater than 1;
  • j indexes the j-th feature point in the first feature point coordinate set;
  • x' is the abscissa and y' is the ordinate of a feature point;
  • x'j is the abscissa, and y'j the ordinate, of the j-th feature point in the first feature point coordinate set.
  • The second feature point obtaining module 24 is configured to map the feature point coordinates in the first feature point coordinate set onto the reticulated person image to obtain a second feature point coordinate set.
  • The second feature point coordinate set is L = {(x1,y1), …, (xj,yj), …, (xn,yn)}, where:
  • n is a positive integer greater than 1;
  • j indexes the j-th feature point in the second feature point coordinate set;
  • x is the abscissa and y is the ordinate of a feature point;
  • xj is the abscissa, and yj the ordinate, of the j-th feature point in the second feature point coordinate set.
  • mapping the feature point coordinates in the first feature point coordinate set onto the reticulated person image to obtain the second feature point coordinate set includes:
  • the second feature point coordinate set obtained by the mapping is:
  • xj = x'j / w' * w;
  • yj = y'j / h' * h;
  • where xj is the abscissa of the j-th feature point in the second feature point coordinate set, x'j is the abscissa of the j-th feature point in the first feature point coordinate set, w' is the length of the de-reticulated person image, w is the length of the reticulated person image, yj is the ordinate of the j-th feature point in the second feature point coordinate set, y'j is the ordinate of the j-th feature point in the first feature point coordinate set, h' is the width of the de-reticulated person image, and h is the width of the reticulated person image.
  • The alignment and cropping module 25 is configured to perform face alignment and cropping on the reticulated person image according to the second feature point coordinate set to obtain an aligned reticulated face image.
  • Current face recognizers can capture images or video streams containing faces through a camera, automatically detect and track the faces in the images, and compare a detected face with a reticulated or reticulation-free face image to confirm whether they belong to the same person.
  • Compared with the de-reticulated face image, the reticulated face image retains more of the original features of the face and does not suffer from the blurred facial features, or the slightly shifted boundaries of the eyes, ears, nose and mouth, found in de-reticulated face images. Therefore, to improve the accuracy of face recognition, reticulated face images are used for face recognition.
  • Performing face alignment and cropping on the reticulated person image according to the second feature point coordinate set means cropping the face from the reticulated person image according to the second feature point coordinate set and straightening the angle of the cropped face, to facilitate subsequent face recognition.
  • The reticulated face image includes only the face.
  • The face alignment and cropping may be carried out on the reticulated person image according to the second feature point coordinate set with the ESR algorithm, the SDM algorithm, the GBDT algorithm, etc.
  • Before face alignment and cropping is performed, the alignment and cropping module 25 is further configured to:
  • segment the target person from the reticulated person image to obtain a target person image.
  • The reticulated person image is an image including depth information. Segmenting the target person from the reticulated person image to obtain the target person image includes: acquiring a histogram of the reticulated person image, clustering the histogram with a clustering algorithm to obtain two classes, and segmenting the target person from the reticulated person image using the boundary between the two classes as a separation threshold.
  • The clustering algorithm may be a K-means algorithm or a kernel density estimation algorithm.
  • Performing face alignment and cropping on the reticulated person image according to the second feature point coordinate set to obtain the aligned reticulated face image then includes: performing face alignment and cropping on the target person image according to the second feature point coordinate set.
  • The face recognition module 26 is configured to perform face recognition according to the aligned reticulated face image.
  • Performing face recognition according to the aligned reticulated face image includes comparing a face image with the reticulated face image to determine whether they belong to the same person.
  • The specific face recognition based on the aligned reticulated face image is prior art and is not repeated here.
  • This solution de-reticulates the reticulated person image, maps the feature point coordinates in the first feature point coordinate set onto the reticulated person image to obtain the second feature point coordinate set, and performs face alignment and cropping on the reticulated person image accordingly, so that the comparison object used for face recognition has not been de-reticulated, retains more of the original features of the face, and yields a considerably improved recognition accuracy over the de-reticulated face image.
  • The integrated unit implemented in the form of a software functional module described above may be stored in a non-volatile computer-readable storage medium.
  • The software functional module is stored in a storage medium and includes a number of instructions to enable an electronic device or processor to execute parts of the methods described in the embodiments of the present application.
  • FIG. 3 is a schematic diagram of an electronic device provided in Embodiment 3 of this application.
  • the electronic device 3 includes a memory 31, at least one processor 32, and computer readable instructions 33 stored in the memory 31 and executable on the at least one processor 32.
  • the at least one processor 32 executes the computer-readable instructions 33, the steps in the above-mentioned face recognition method embodiment are implemented.
  • the at least one processor 32 executes the computer-readable instructions 33, the functions of the modules in the foregoing face recognition device embodiments are implemented.
  • the computer-readable instructions 33 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 31 and executed by the at least one processor 32 to complete this application.
  • the one or more modules/units may be a series of computer-readable instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer-readable instructions 33 in the electronic device 3.
  • the computer-readable instructions 33 may be divided into the modules shown in FIG. 2, and the specific functions of each module can be referred to in the second embodiment.
  • the electronic device 3 may be any electronic product, for example, a smart phone, a personal digital assistant (Personal Digital Assistant, PDA), etc.
  • FIG. 3 is only an example of the electronic device 3 and does not constitute a limitation on the electronic device 3; the device may include more or fewer components than shown, combine certain components, or use different components.
  • the electronic device 3 may also include input and output devices, network access devices, buses, etc.
  • The at least one processor 32 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • The processor 32 may be a microprocessor or any conventional processor; the processor 32 is the control center of the electronic device 3 and connects the parts of the entire electronic device 3 through various interfaces and lines.
  • The memory 31 may be used to store the computer-readable instructions 33 and/or modules/units; the processor 32 implements the various functions of the electronic device 3 by running or executing the computer-readable instructions and/or modules/units stored in the memory 31 and by calling the data stored in the memory 31.
  • The memory 31 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and the application programs required by at least one function (such as a sound playback function, an image playback function, etc.), and the data storage area may store data created according to the use of the electronic device 3 (such as audio data), etc.
  • The memory 31 may include a non-volatile computer-readable memory, such as a hard disk, a memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • the integrated module/unit of the electronic device 3 is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a non-volatile computer-readable storage medium.
  • All or part of the processes in the methods of the above embodiments of this application may also be completed by instructing relevant hardware through a computer program.
  • The computer program may be stored in a non-volatile computer-readable storage medium.
  • The computer program includes computer-readable instruction code, which may be in the form of source code, object code, an executable file, or some intermediate form.
  • The non-volatile computer-readable storage medium may include: any entity or device capable of carrying the computer-readable instruction code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM, Read-Only Memory), etc.
  • the functional units in the various embodiments of the present application may be integrated in the same processing unit, or each unit may exist alone physically, or two or more units may be integrated in the same unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or in the form of hardware plus software functional modules.

Abstract

A face recognition method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a reticulated person image (S11); de-reticulating the reticulated person image to obtain a de-reticulated person image (S12); performing face detection and feature point calibration on the de-reticulated person image to obtain a first feature point coordinate set (S13); mapping the feature point coordinates in the first feature point coordinate set onto the reticulated person image to obtain a second feature point coordinate set (S14); performing face alignment and cropping on the reticulated person image according to the second feature point coordinate set to obtain an aligned reticulated face image (S15); and performing face recognition according to the aligned reticulated face image (S16). The method improves the accuracy of face recognition.

Description

Face recognition method and apparatus, electronic device, and storage medium
This application claims priority to Chinese patent application No. 201910488461.0, filed with the Chinese Patent Office on June 5, 2019 and entitled "Face recognition method and apparatus, electronic device, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of face recognition, and in particular to a face recognition method and apparatus, an electronic device, and a storage medium.
Background
In face recognition, a reticulated face image of a target person is acquired, de-reticulated to obtain a de-reticulated face image, and the de-reticulated face image is aligned and cropped to obtain an aligned face image for subsequent face recognition. However, because the aligned face image has been de-reticulated, the inventor realized that de-reticulation causes certain changes in its facial features, for example blurring, or slight shifts in the boundaries of the eyes, ears, nose and mouth, which degrades the accuracy of face recognition.
Summary
In view of the above, it is necessary to provide a face recognition method and apparatus, an electronic device, and a storage medium that can improve the accuracy of face recognition.
A first aspect of this application provides a face recognition method, the method including:
acquiring a reticulated person image;
de-reticulating the reticulated person image to obtain a de-reticulated person image;
performing face detection and feature point calibration on the de-reticulated person image to obtain a first feature point coordinate set;
mapping the feature point coordinates in the first feature point coordinate set onto the reticulated person image to obtain a second feature point coordinate set;
performing face alignment and cropping on the reticulated person image according to the second feature point coordinate set to obtain an aligned reticulated face image; and
performing face recognition according to the aligned reticulated face image.
Preferably, the size of the reticulated person image is w×h, where w is the length of the reticulated person image and h is the width of the reticulated person image;
the size of the de-reticulated person image is w'×h', where w' is the length of the de-reticulated person image and h' is the width of the de-reticulated person image;
the first feature point coordinate set is L'={(x'1,y'1),…,(x'j,y'j),…,(x'n,y'n)};
mapping the feature point coordinates in the first feature point coordinate set onto the reticulated person image to obtain the second feature point coordinate set includes:
the second feature point coordinate set obtained by the mapping is:
xj = x'j / w' * w;
yj = y'j / h' * h;
where xj is the abscissa of the j-th feature point in the second feature point coordinate set, x'j is the abscissa of the j-th feature point in the first feature point coordinate set, w' is the length of the de-reticulated person image, w is the length of the reticulated person image, yj is the ordinate of the j-th feature point in the second feature point coordinate set, y'j is the ordinate of the j-th feature point in the first feature point coordinate set, h' is the width of the de-reticulated person image, and h is the width of the reticulated person image.
Preferably, the reticulated person image is an everyday photo of the target person, the everyday photo of the target person includes the target person and a background, and the aligned reticulated face image includes the face of the target person.
Preferably, the feature point calibration includes:
performing feature extraction on the detected face image by inputting it into a feature extractor.
Preferably, before performing face alignment and cropping on the reticulated person image according to the second feature point coordinate set to obtain the aligned reticulated face image, the method further includes:
segmenting the target person from the reticulated person image to obtain a target person image.
Preferably, the reticulated person image is an image including depth information;
segmenting the target person from the reticulated person image to obtain the target person image includes:
acquiring a histogram of the reticulated person image;
clustering the histogram with a clustering algorithm to obtain two classes;
segmenting the target person from the reticulated person image using the boundary between the two classes as a separation threshold;
performing face alignment and cropping on the reticulated person image according to the second feature point coordinate set to obtain the aligned reticulated face image includes:
performing face alignment and cropping on the target person image according to the second feature point coordinate set to obtain the aligned reticulated face image.
Preferably, the reticulated person image is a reticulated person image in a database of the Ministry of Public Security, a reticulated person image uploaded during website registration, or a reticulated person image uploaded during registration on a device;
acquiring the reticulated person image includes:
acquiring the reticulated person image according to a face recognition request;
acquiring, upon receiving information entered by a user on a website, the reticulated person image associated with the entered information; or
acquiring, upon receiving a physical card presented by a user on a device, the reticulated person image associated with the presented physical card.
A second aspect of this application provides a face recognition apparatus, the apparatus including:
an acquisition module, used to acquire a reticulated person image;
a de-reticulation module, used to de-reticulate the reticulated person image to obtain a de-reticulated person image;
a first feature point obtaining module, used to perform face detection and feature point calibration on the de-reticulated person image to obtain a first feature point coordinate set;
a second feature point obtaining module, used to map the feature point coordinates in the first feature point coordinate set onto the reticulated person image to obtain a second feature point coordinate set;
an alignment and cropping module, used to perform face alignment and cropping on the reticulated person image according to the second feature point coordinate set to obtain an aligned reticulated face image; and
a face recognition module, used to perform face recognition according to the aligned reticulated face image.
A third aspect of this application provides an electronic device, the electronic device including a processor and a memory, the memory being used to store at least one computer-readable instruction and the processor being used to execute the at least one computer-readable instruction to implement the following steps:
acquiring a reticulated person image;
de-reticulating the reticulated person image to obtain a de-reticulated person image;
performing face detection and feature point calibration on the de-reticulated person image to obtain a first feature point coordinate set;
mapping the feature point coordinates in the first feature point coordinate set onto the reticulated person image to obtain a second feature point coordinate set;
performing face alignment and cropping on the reticulated person image according to the second feature point coordinate set to obtain an aligned reticulated face image; and
performing face recognition according to the aligned reticulated face image.
A fourth aspect of this application provides a non-volatile computer-readable storage medium storing computer-readable instructions which, when executed by a processor, implement the following steps:
acquiring a reticulated person image;
de-reticulating the reticulated person image to obtain a de-reticulated person image;
performing face detection and feature point calibration on the de-reticulated person image to obtain a first feature point coordinate set;
mapping the feature point coordinates in the first feature point coordinate set onto the reticulated person image to obtain a second feature point coordinate set;
performing face alignment and cropping on the reticulated person image according to the second feature point coordinate set to obtain an aligned reticulated face image; and
performing face recognition according to the aligned reticulated face image.
By de-reticulating the reticulated person image to obtain the de-reticulated person image, this solution can directly apply various existing algorithms to detect faces in the reticulation-free image, without collecting a large number of reticulated image training samples to build a model for face detection on reticulated images. By mapping the feature point coordinates in the first feature point coordinate set onto the reticulated person image to obtain the second feature point coordinate set, feature extraction on the reticulated image is achieved through the feature points of the reticulation-free image. By performing face alignment and cropping on the reticulated person image according to the second feature point coordinate set, the aligned reticulated face image is obtained, so that the comparison object used for face recognition has not been de-reticulated, retains more of the original features of the face, and yields a considerably improved recognition accuracy over the de-reticulated face image.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of this application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are merely embodiments of this application, and a person of ordinary skill in the art can derive other drawings from the provided drawings without creative effort.
FIG. 1 is a flowchart of the face recognition method provided in Embodiment 1 of this application.
FIG. 2 is a functional block diagram of the face recognition apparatus provided in Embodiment 2 of this application.
FIG. 3 is a schematic diagram of the electronic device provided in Embodiment 3 of this application.
The following detailed description will further illustrate this application with reference to the above drawings.
Detailed Description
To make the above objects, features and advantages of this application clearer, this application is described in detail below with reference to the drawings and specific embodiments. It should be noted that, where no conflict arises, the embodiments of this application and the features in the embodiments can be combined with one another.
Many specific details are set forth in the following description to facilitate a full understanding of this application; the described embodiments are only some, not all, of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of this application. The terms used in the specification of this application are only for the purpose of describing specific embodiments and are not intended to limit this application.
Embodiment 1
FIG. 1 is a schematic flowchart of the face recognition method provided in Embodiment 1 of this application. The order of the steps in the flowchart can be changed, and some steps can be omitted, according to different needs. The method is applied to an electronic device, which can be any electronic product, for example a personal computer, a tablet computer, a smartphone, a personal digital assistant (PDA), etc. As shown in FIG. 1, the face recognition method can include the following steps:
S11: Acquire a reticulated person image.
The reticulated person image may be a reticulated person image in the database of the Ministry of Public Security, a reticulated person image uploaded during website registration, a reticulated person image uploaded during registration on a device, etc. The reticulated person image is an everyday photo of the target person; the photo includes the target person, a background, etc. Acquiring the reticulated person image may be: acquiring it according to a face recognition request; acquiring, upon receiving information entered by a user on a website, the reticulated person image associated with the entered information; or acquiring, upon receiving a physical card presented by a user on a device, the reticulated person image associated with the presented card. The information entered by the user on the website may be a name, an account number, etc. The physical card may be a bank card, an ID card, etc. The size of the reticulated person image is w×h, where w is the length of the reticulated person image and h is the width of the reticulated person image.
S12: De-reticulate the reticulated person image to obtain a de-reticulated person image.
Owing to the limited availability of reticulated training samples, current face detection algorithms either cannot directly detect faces in reticulated person images or do so inaccurately, whereas for reticulation-free person images they can detect the faces accurately. Therefore, to achieve accurate face detection, the reticulated person image must first be de-reticulated with a de-reticulation algorithm.
De-reticulating the reticulated person image to obtain the de-reticulated person image may be performed with a fully convolutional neural network, adaptive filtering, or other existing techniques. The size of the de-reticulated person image is w'×h', where w' is the length of the de-reticulated person image and h' is the width of the de-reticulated person image. In this embodiment, regardless of the size of the input reticulated person image, the size of the output de-reticulated person image is fixed.
S13: Perform face detection and feature point calibration on the de-reticulated person image to obtain a first feature point coordinate set.
Face detection on the de-reticulated person image may be performed in a cascade-based manner, or with HOG/FHOG-based SVM/DPM detectors, among other approaches. Feature point calibration means performing feature extraction on the detected face image by inputting it into a feature extractor. The feature extractor may perform feature extraction on the face image with a method based on subspace analysis, a neural network, a hidden Markov model, a support vector machine, etc. In this embodiment, the first feature point coordinate set is L'={(x'1,y'1),…,(x'j,y'j),…,(x'n,y'n)}, where n is a positive integer greater than 1, j indexes the j-th feature point in the first feature point coordinate set, x' is the abscissa and y' is the ordinate of a feature point, and x'j and y'j are the abscissa and ordinate of the j-th feature point in the first feature point coordinate set.
S14: Map the feature point coordinates in the first feature point coordinate set onto the reticulated person image to obtain a second feature point coordinate set.
In this embodiment, the second feature point coordinate set is L={(x1,y1),…,(xj,yj),…,(xn,yn)}, where n is a positive integer greater than 1, j indexes the j-th feature point in the second feature point coordinate set, x is the abscissa and y is the ordinate of a feature point, and xj and yj are the abscissa and ordinate of the j-th feature point in the second feature point coordinate set.
Mapping the feature point coordinates in the first feature point coordinate set onto the reticulated person image to obtain the second feature point coordinate set includes:
the second feature point coordinate set obtained by the mapping is:
xj = x'j / w' * w;
yj = y'j / h' * h.
Here xj is the abscissa of the j-th feature point in the second feature point coordinate set, x'j is the abscissa of the j-th feature point in the first feature point coordinate set, w' is the length of the de-reticulated person image, w is the length of the reticulated person image, yj is the ordinate of the j-th feature point in the second feature point coordinate set, y'j is the ordinate of the j-th feature point in the first feature point coordinate set, h' is the width of the de-reticulated person image, and h is the width of the reticulated person image.
S15: Perform face alignment and cropping on the reticulated person image according to the second feature point coordinate set to obtain an aligned reticulated face image.
In this embodiment, current face recognizers can capture images or video streams containing faces through a camera, automatically detect and track the faces in the images, and compare a detected face with a reticulated or reticulation-free face image to confirm whether they belong to the same person. Compared with the de-reticulated face image, the reticulated face image retains more of the original features of the face and does not suffer from the blurred facial features, or the slightly shifted boundaries of the eyes, ears, nose and mouth, found in de-reticulated face images. Therefore, to improve the accuracy of face recognition, reticulated face images are used for face recognition.
Performing face alignment and cropping on the reticulated person image according to the second feature point coordinate set means cropping the face from the reticulated person image according to the second feature point coordinate set and straightening the angle of the cropped face, to facilitate subsequent face recognition. The reticulated face image includes only the face. The face alignment and cropping may be carried out on the reticulated person image according to the second feature point coordinate set with the ESR algorithm, the SDM algorithm, the GBDT algorithm, etc.
Before performing face alignment and cropping on the reticulated person image according to the second feature point coordinate set to obtain the aligned reticulated face image, the method further includes:
segmenting the target person from the reticulated person image to obtain a target person image.
In this embodiment, the reticulated person image is an image including depth information. Segmenting the target person from the reticulated person image to obtain the target person image includes:
acquiring a histogram of the reticulated person image, clustering the histogram with a clustering algorithm to obtain two classes, and segmenting the target person from the reticulated person image using the boundary between the two classes as a separation threshold. The clustering algorithm may be a K-means algorithm, a kernel density estimation algorithm, etc.
Performing face alignment and cropping on the reticulated person image according to the second feature point coordinate set to obtain the aligned reticulated face image then includes:
performing face alignment and cropping on the target person image according to the second feature point coordinate set to obtain the aligned reticulated face image.
S16: Perform face recognition according to the aligned reticulated face image.
Performing face recognition according to the aligned target reticulated face image includes comparing a face image with the reticulated face image to determine whether they belong to the same person. The specific face recognition based on the aligned target reticulated face image is prior art and is not repeated here.
By de-reticulating the reticulated person image to obtain the de-reticulated person image, this solution can directly apply various existing algorithms to detect faces in the reticulation-free image, without collecting a large number of reticulated image training samples to build a model for face detection on reticulated images. By mapping the feature point coordinates in the first feature point coordinate set onto the reticulated person image to obtain the second feature point coordinate set, feature extraction on the reticulated image is achieved through the feature points of the reticulation-free image. By performing face alignment and cropping on the reticulated person image according to the second feature point coordinate set, the aligned reticulated face image is obtained, so that the comparison object used for face recognition has not been de-reticulated, retains more of the original features of the face, and yields a considerably improved recognition accuracy over the de-reticulated face image.
Embodiment 2
FIG. 2 is a functional block diagram of the face recognition apparatus provided in Embodiment 2 of this application. In some embodiments, the face recognition apparatus 20 runs in an electronic device. The electronic device can be any electronic product, for example a smartphone, a personal digital assistant (PDA), etc. The face recognition apparatus 20 can include multiple functional modules composed of program code segments. The program code of each program segment in the face recognition apparatus 20 can be stored in a memory and executed by at least one processor to perform face recognition.
In this embodiment, the face recognition apparatus 20 can be divided into multiple functional modules according to the functions it performs. The functional modules can include: an acquisition module 21, a de-reticulation module 22, a first feature point obtaining module 23, a second feature point obtaining module 24, an alignment and cropping module 25, and a face recognition module 26. A module referred to in this application is a series of computer-readable instruction segments that can be executed by at least one processor, that can complete a fixed function, and that are stored in a memory.
The acquisition module 21 is used to acquire a reticulated person image.
The reticulated person image may be a reticulated person image in the database of the Ministry of Public Security, a reticulated person image uploaded during website registration, a reticulated person image uploaded during registration on a device, etc. The reticulated person image is an everyday photo of the target person; the photo includes the target person, a background, etc. Acquiring the reticulated person image may be: acquiring it according to a face recognition request; acquiring, upon receiving information entered by a user on a website, the reticulated person image associated with the entered information; or acquiring, upon receiving a physical card presented by a user on a device, the reticulated person image associated with the presented card. The information entered by the user on the website may be a name, an account number, etc. The physical card may be a bank card, an ID card, etc. The size of the reticulated person image is w×h, where w is the length of the reticulated person image and h is the width of the reticulated person image.
The de-reticulation module 22 is used to de-reticulate the reticulated person image to obtain a de-reticulated person image.
Owing to the limited availability of reticulated training samples, current face detection algorithms either cannot directly detect faces in reticulated person images or do so inaccurately, whereas for reticulation-free person images they can detect the faces accurately. Therefore, to achieve accurate face detection, the reticulated person image must first be de-reticulated with a de-reticulation algorithm.
De-reticulating the reticulated person image to obtain the de-reticulated person image may be performed with a fully convolutional neural network, adaptive filtering, or other existing techniques. The size of the de-reticulated person image is w'×h', where w' is the length of the de-reticulated person image and h' is the width of the de-reticulated person image. In this embodiment, regardless of the size of the input reticulated person image, the size of the output de-reticulated person image is fixed.
The first feature point obtaining module 23 is used to perform face detection and feature point calibration on the de-reticulated person image to obtain a first feature point coordinate set.
Face detection on the de-reticulated person image may be performed in a cascade-based manner, or with HOG/FHOG-based SVM/DPM detectors, among other approaches. Feature point calibration means performing feature extraction on the detected face image by inputting it into a feature extractor. The feature extractor may perform feature extraction on the face image with a method based on subspace analysis, a neural network, a hidden Markov model, a support vector machine, etc. In this embodiment, the first feature point coordinate set is L'={(x'1,y'1),…,(x'j,y'j),…,(x'n,y'n)}, where n is a positive integer greater than 1, j indexes the j-th feature point in the first feature point coordinate set, x' is the abscissa and y' is the ordinate of a feature point, and x'j and y'j are the abscissa and ordinate of the j-th feature point in the first feature point coordinate set.
The second feature point obtaining module 24 is used to map the feature point coordinates in the first feature point coordinate set onto the reticulated person image to obtain a second feature point coordinate set.
In this embodiment, the second feature point coordinate set is L={(x1,y1),…,(xj,yj),…,(xn,yn)}, where n is a positive integer greater than 1, j indexes the j-th feature point in the second feature point coordinate set, x is the abscissa and y is the ordinate of a feature point, and xj and yj are the abscissa and ordinate of the j-th feature point in the second feature point coordinate set.
Mapping the feature point coordinates in the first feature point coordinate set onto the reticulated person image to obtain the second feature point coordinate set includes:
the second feature point coordinate set obtained by the mapping is:
xj = x'j / w' * w;
yj = y'j / h' * h.
Here xj is the abscissa of the j-th feature point in the second feature point coordinate set, x'j is the abscissa of the j-th feature point in the first feature point coordinate set, w' is the length of the de-reticulated person image, w is the length of the reticulated person image, yj is the ordinate of the j-th feature point in the second feature point coordinate set, y'j is the ordinate of the j-th feature point in the first feature point coordinate set, h' is the width of the de-reticulated person image, and h is the width of the reticulated person image.
The alignment and cropping module 25 is used to perform face alignment and cropping on the reticulated person image according to the second feature point coordinate set to obtain an aligned reticulated face image.
In this embodiment, current face recognizers can capture images or video streams containing faces through a camera, automatically detect and track the faces in the images, and compare a detected face with a reticulated or reticulation-free face image to confirm whether they belong to the same person. Compared with the de-reticulated face image, the reticulated face image retains more of the original features of the face and does not suffer from the blurred facial features, or the slightly shifted boundaries of the eyes, ears, nose and mouth, found in de-reticulated face images. Therefore, to improve the accuracy of face recognition, reticulated face images are used for face recognition.
Performing face alignment and cropping on the reticulated person image according to the second feature point coordinate set means cropping the face from the reticulated person image according to the second feature point coordinate set and straightening the angle of the cropped face, to facilitate subsequent face recognition. The reticulated face image includes only the face. The face alignment and cropping may be carried out on the reticulated person image according to the second feature point coordinate set with the ESR algorithm, the SDM algorithm, the GBDT algorithm, etc.
Before face alignment and cropping is performed on the reticulated person image according to the second feature point coordinate set to obtain the aligned reticulated face image, the alignment and cropping module 25 is further used to:
segment the target person from the reticulated person image to obtain a target person image.
In this embodiment, the reticulated person image is an image including depth information. Segmenting the target person from the reticulated person image to obtain the target person image includes:
acquiring a histogram of the reticulated person image, clustering the histogram with a clustering algorithm to obtain two classes, and segmenting the target person from the reticulated person image using the boundary between the two classes as a separation threshold. The clustering algorithm may be a K-means algorithm, a kernel density estimation algorithm, etc.
Performing face alignment and cropping on the reticulated person image according to the second feature point coordinate set to obtain the aligned reticulated face image then includes:
performing face alignment and cropping on the target person image according to the second feature point coordinate set to obtain the aligned reticulated face image.
The face recognition module 26 is used to perform face recognition according to the aligned reticulated face image.
Performing face recognition according to the aligned target reticulated face image includes comparing a face image with the reticulated face image to determine whether they belong to the same person. The specific face recognition based on the aligned target reticulated face image is prior art and is not repeated here.
By de-reticulating the reticulated person image, mapping the first feature point coordinate set onto the reticulated person image to obtain the second feature point coordinate set, and aligning and cropping the reticulated person image accordingly, this solution ensures that the comparison object used for face recognition has not been de-reticulated, retains more of the original features of the face, and yields a considerably improved recognition accuracy over the de-reticulated face image.
The integrated unit implemented in the form of a software functional module described above can be stored in a non-volatile computer-readable storage medium. The software functional module is stored in a storage medium and includes a number of instructions that enable an electronic device or a processor to execute parts of the methods described in the embodiments of this application.
Embodiment 3
FIG. 3 is a schematic diagram of the electronic device provided in Embodiment 3 of this application.
The electronic device 3 includes a memory 31, at least one processor 32, and computer-readable instructions 33 that are stored in the memory 31 and executable on the at least one processor 32. When the at least one processor 32 executes the computer-readable instructions 33, the steps in the above face recognition method embodiment are implemented. Alternatively, when the at least one processor 32 executes the computer-readable instructions 33, the functions of the modules in the above face recognition apparatus embodiment are implemented.
Exemplarily, the computer-readable instructions 33 can be divided into one or more modules/units, and the one or more modules/units are stored in the memory 31 and executed by the at least one processor 32 to complete this application. The one or more modules/units can be a series of computer-readable instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer-readable instructions 33 in the electronic device 3. For example, the computer-readable instructions 33 can be divided into the modules shown in FIG. 2; see Embodiment 2 for the specific functions of each module.
The electronic device 3 can be any electronic product, for example a smartphone, a personal digital assistant (PDA), etc. Those skilled in the art will understand that FIG. 3 is only an example of the electronic device 3 and does not constitute a limitation on the electronic device 3; the device may include more or fewer components than shown, combine certain components, or use different components. For example, the electronic device 3 may also include input and output devices, network access devices, buses, etc.
The at least one processor 32 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The processor 32 may be a microprocessor or any conventional processor; the processor 32 is the control center of the electronic device 3 and connects the parts of the entire electronic device 3 through various interfaces and lines.
The memory 31 can be used to store the computer-readable instructions 33 and/or modules/units; the processor 32 implements the various functions of the electronic device 3 by running or executing the computer-readable instructions and/or modules/units stored in the memory 31 and by calling the data stored in the memory 31. The memory 31 can mainly include a program storage area and a data storage area, where the program storage area can store an operating system and the application programs required by at least one function (such as a sound playback function, an image playback function, etc.), and the data storage area can store data created according to the use of the electronic device 3 (such as audio data), etc. In addition, the memory 31 can include a non-volatile computer-readable memory, such as a hard disk, a memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
If the modules/units integrated in the electronic device 3 are implemented in the form of software functional units and sold or used as independent products, they can be stored in a non-volatile computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of this application can also be completed by instructing relevant hardware through a computer program; the computer program can be stored in a non-volatile computer-readable storage medium, and when executed by a processor, the computer program can implement the following steps:
acquiring a reticulated person image;
de-reticulating the reticulated person image to obtain a de-reticulated person image;
performing face detection and feature point calibration on the de-reticulated person image to obtain a first feature point coordinate set;
mapping the feature point coordinates in the first feature point coordinate set onto the reticulated person image to obtain a second feature point coordinate set;
performing face alignment and cropping on the reticulated person image according to the second feature point coordinate set to obtain an aligned reticulated face image; and
performing face recognition according to the aligned reticulated face image.
For details, refer to the above embodiment of the face recognition method; they are not repeated here. The computer program includes computer-readable instruction code, which may be in the form of source code, object code, an executable file, or some intermediate form. The non-volatile computer-readable storage medium may include: any entity or device capable of carrying the computer-readable instruction code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM, Read-Only Memory), etc.
In the several embodiments provided in this application, it should be understood that the disclosed electronic device and method can be implemented in other ways. For example, the electronic device embodiment described above is merely illustrative; for instance, the division of the units is only a division by logical function, and there may be other divisions in actual implementation.
In addition, the functional units in the embodiments of this application may be integrated in the same processing unit, or each unit may exist alone physically, or two or more units may be integrated in the same unit. The integrated unit can be implemented in the form of hardware, or in the form of hardware plus software functional modules.
It is obvious to those skilled in the art that this application is not limited to the details of the above exemplary embodiments, and that this application can be implemented in other specific forms without departing from the spirit or essential characteristics of this application. Therefore, from whichever point of view, the embodiments should be regarded as exemplary and non-limiting, and the scope of this application is defined by the appended claims rather than by the above description; all changes falling within the meaning and scope of equivalent elements of the claims are therefore intended to be embraced in this application. No reference sign in the claims shall be construed as limiting the claim concerned. Furthermore, it is clear that the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. Multiple units or apparatuses stated in a system claim can also be implemented by one unit or apparatus through software or hardware.
Finally, it should be noted that the above embodiments are only intended to illustrate, not to limit, the technical solutions of this application. Although this application has been described in detail with reference to the preferred embodiments, those of ordinary skill in the art should understand that the technical solutions of this application can be modified or equivalently replaced without departing from the spirit and scope of the technical solutions of this application.

Claims (20)

  1. A face recognition method, the method comprising:
    acquiring a reticulated person image;
    de-reticulating the reticulated person image to obtain a de-reticulated person image;
    performing face detection and feature point calibration on the de-reticulated person image to obtain a first feature point coordinate set;
    mapping the feature point coordinates in the first feature point coordinate set onto the reticulated person image to obtain a second feature point coordinate set;
    performing face alignment and cropping on the reticulated person image according to the second feature point coordinate set to obtain an aligned reticulated face image; and
    performing face recognition according to the aligned reticulated face image.
  2. The method according to claim 1, wherein the size of the reticulated person image is w×h, where w is the length of the reticulated person image and h is the width of the reticulated person image;
    the size of the de-reticulated person image is w'×h', where w' is the length of the de-reticulated person image and h' is the width of the de-reticulated person image;
    the first feature point coordinate set is L'={(x'1,y'1),…,(x'j,y'j),…,(x'n,y'n)};
    mapping the feature point coordinates in the first feature point coordinate set onto the reticulated person image to obtain the second feature point coordinate set comprises:
    the second feature point coordinate set obtained by the mapping is:
    xj = x'j / w' * w;
    yj = y'j / h' * h;
    where xj is the abscissa of the j-th feature point in the second feature point coordinate set, x'j is the abscissa of the j-th feature point in the first feature point coordinate set, w' is the length of the de-reticulated person image, w is the length of the reticulated person image, yj is the ordinate of the j-th feature point in the second feature point coordinate set, y'j is the ordinate of the j-th feature point in the first feature point coordinate set, h' is the width of the de-reticulated person image, and h is the width of the reticulated person image.
  3. The method according to claim 1, wherein the reticulated person image is an everyday photo of the target person, the everyday photo of the target person includes the target person and a background, and the aligned reticulated face image includes the face of the target person.
  4. The method according to claim 1, wherein the feature point calibration comprises:
    performing feature extraction on the detected face image by inputting it into a feature extractor.
  5. The method according to claim 1, wherein before performing face alignment and cropping on the reticulated person image according to the second feature point coordinate set to obtain the aligned reticulated face image, the method further comprises:
    segmenting the target person from the reticulated person image to obtain a target person image.
  6. The method according to claim 5, wherein the reticulated person image is an image including depth information;
    segmenting the target person from the reticulated person image to obtain the target person image comprises:
    acquiring a histogram of the reticulated person image;
    clustering the histogram with a clustering algorithm to obtain two classes; and
    segmenting the target person from the reticulated person image using the boundary between the two classes as a separation threshold;
    performing face alignment and cropping on the reticulated person image according to the second feature point coordinate set to obtain the aligned reticulated face image comprises:
    performing face alignment and cropping on the target person image according to the second feature point coordinate set to obtain the aligned reticulated face image.
  7. The method according to claim 1, wherein the reticulated person image is a reticulated person image in a database of the Ministry of Public Security, a reticulated person image uploaded during website registration, or a reticulated person image uploaded during registration on a device;
    acquiring the reticulated person image comprises:
    acquiring the reticulated person image according to a face recognition request;
    acquiring, upon receiving information entered by a user on a website, the reticulated person image associated with the entered information; or
    acquiring, upon receiving a physical card presented by a user on a device, the reticulated person image associated with the presented physical card.
  8. A face recognition apparatus, the apparatus comprising:
    an acquisition module, used to acquire a reticulated person image;
    a de-reticulation module, used to de-reticulate the reticulated person image to obtain a de-reticulated person image;
    a first feature point obtaining module, used to perform face detection and feature point calibration on the de-reticulated person image to obtain a first feature point coordinate set;
    a second feature point obtaining module, used to map the feature point coordinates in the first feature point coordinate set onto the reticulated person image to obtain a second feature point coordinate set;
    an alignment and cropping module, used to perform face alignment and cropping on the reticulated person image according to the second feature point coordinate set to obtain an aligned reticulated face image; and
    a face recognition module, used to perform face recognition according to the aligned reticulated face image.
  9. An electronic device, the electronic device comprising a processor and a memory, the memory being used to store at least one computer-readable instruction and the processor being used to execute the at least one computer-readable instruction to implement the following steps:
    acquiring a reticulated person image;
    de-reticulating the reticulated person image to obtain a de-reticulated person image;
    performing face detection and feature point calibration on the de-reticulated person image to obtain a first feature point coordinate set;
    mapping the feature point coordinates in the first feature point coordinate set onto the reticulated person image to obtain a second feature point coordinate set;
    performing face alignment and cropping on the reticulated person image according to the second feature point coordinate set to obtain an aligned reticulated face image; and
    performing face recognition according to the aligned reticulated face image.
  10. The electronic device according to claim 9, wherein the size of the reticulated person image is w×h, where w is the length of the reticulated person image and h is the width of the reticulated person image;
    the size of the de-reticulated person image is w'×h', where w' is the length of the de-reticulated person image and h' is the width of the de-reticulated person image;
    the first feature point coordinate set is L'={(x'1,y'1),…,(x'j,y'j),…,(x'n,y'n)};
    mapping the feature point coordinates in the first feature point coordinate set onto the reticulated person image to obtain the second feature point coordinate set comprises:
    the second feature point coordinate set obtained by the mapping is:
    xj = x'j / w' * w;
    yj = y'j / h' * h;
    where xj is the abscissa of the j-th feature point in the second feature point coordinate set, x'j is the abscissa of the j-th feature point in the first feature point coordinate set, w' is the length of the de-reticulated person image, w is the length of the reticulated person image, yj is the ordinate of the j-th feature point in the second feature point coordinate set, y'j is the ordinate of the j-th feature point in the first feature point coordinate set, h' is the width of the de-reticulated person image, and h is the width of the reticulated person image.
  11. The electronic device according to claim 9, wherein, when the processor executes the at least one computer-readable instruction to implement the feature point calibration, the feature point calibration comprises:
    performing feature extraction on the detected face image by inputting it into a feature extractor.
  12. The electronic device according to claim 9, wherein before face alignment and cropping is performed on the reticulated person image according to the second feature point coordinate set to obtain the aligned reticulated face image, the processor is further used to execute the at least one computer-readable instruction to implement the following step:
    segmenting the target person from the reticulated person image to obtain a target person image.
  13. The electronic device according to claim 12, wherein the reticulated person image is an image including depth information;
    segmenting the target person from the reticulated person image to obtain the target person image comprises:
    acquiring a histogram of the reticulated person image;
    clustering the histogram with a clustering algorithm to obtain two classes; and
    segmenting the target person from the reticulated person image using the boundary between the two classes as a separation threshold;
    performing face alignment and cropping on the reticulated person image according to the second feature point coordinate set to obtain the aligned reticulated face image comprises:
    performing face alignment and cropping on the target person image according to the second feature point coordinate set to obtain the aligned reticulated face image.
  14. The electronic device according to claim 9, wherein the reticulated person image is a reticulated person image in a database of the Ministry of Public Security, a reticulated person image uploaded during website registration, or a reticulated person image uploaded during registration on a device;
    acquiring the reticulated person image comprises:
    acquiring the reticulated person image according to a face recognition request;
    acquiring, upon receiving information entered by a user on a website, the reticulated person image associated with the entered information; or
    acquiring, upon receiving a physical card presented by a user on a device, the reticulated person image associated with the presented physical card.
  15. A non-volatile computer-readable storage medium storing computer-readable instructions which, when executed by a processor, implement the following steps:
    acquiring a reticulated person image;
    de-reticulating the reticulated person image to obtain a de-reticulated person image;
    performing face detection and feature point calibration on the de-reticulated person image to obtain a first feature point coordinate set;
    mapping the feature point coordinates in the first feature point coordinate set onto the reticulated person image to obtain a second feature point coordinate set;
    performing face alignment and cropping on the reticulated person image according to the second feature point coordinate set to obtain an aligned reticulated face image; and
    performing face recognition according to the aligned reticulated face image.
  16. The storage medium according to claim 15, wherein the size of the reticulated person image is w×h, where w is the length of the reticulated person image and h is the width of the reticulated person image;
    the size of the de-reticulated person image is w'×h', where w' is the length of the de-reticulated person image and h' is the width of the de-reticulated person image;
    the first feature point coordinate set is L'={(x'1,y'1),…,(x'j,y'j),…,(x'n,y'n)};
    mapping the feature point coordinates in the first feature point coordinate set onto the reticulated person image to obtain the second feature point coordinate set comprises:
    the second feature point coordinate set obtained by the mapping is:
    xj = x'j / w' * w;
    yj = y'j / h' * h;
    where xj is the abscissa of the j-th feature point in the second feature point coordinate set, x'j is the abscissa of the j-th feature point in the first feature point coordinate set, w' is the length of the de-reticulated person image, w is the length of the reticulated person image, yj is the ordinate of the j-th feature point in the second feature point coordinate set, y'j is the ordinate of the j-th feature point in the first feature point coordinate set, h' is the width of the de-reticulated person image, and h is the width of the reticulated person image.
  17. The storage medium according to claim 15, wherein, when the computer-readable instructions are executed by the processor to implement the feature point calibration, the feature point calibration comprises:
    performing feature extraction on the detected face image by inputting it into a feature extractor.
  18. The storage medium according to claim 15, wherein before face alignment and cropping is performed on the reticulated person image according to the second feature point coordinate set to obtain the aligned reticulated face image, the computer-readable instructions, when executed by the processor, further implement the following step:
    segmenting the target person from the reticulated person image to obtain a target person image.
  19. The storage medium according to claim 18, wherein the reticulated person image is an image including depth information;
    segmenting the target person from the reticulated person image to obtain the target person image comprises:
    acquiring a histogram of the reticulated person image;
    clustering the histogram with a clustering algorithm to obtain two classes; and
    segmenting the target person from the reticulated person image using the boundary between the two classes as a separation threshold;
    performing face alignment and cropping on the reticulated person image according to the second feature point coordinate set to obtain the aligned reticulated face image comprises:
    performing face alignment and cropping on the target person image according to the second feature point coordinate set to obtain the aligned reticulated face image.
  20. The storage medium according to claim 15, wherein the reticulated person image is a reticulated person image in a database of the Ministry of Public Security, a reticulated person image uploaded during website registration, or a reticulated person image uploaded during registration on a device;
    acquiring the reticulated person image comprises:
    acquiring the reticulated person image according to a face recognition request;
    acquiring, upon receiving information entered by a user on a website, the reticulated person image associated with the entered information; or
    acquiring, upon receiving a physical card presented by a user on a device, the reticulated person image associated with the presented physical card.
PCT/CN2019/103414 2019-06-05 2019-08-29 Face recognition method and apparatus, electronic device, and storage medium WO2020244076A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910488461.0A CN110210425B (zh) 2019-06-05 2019-06-05 Face recognition method and apparatus, electronic device, and storage medium
CN201910488461.0 2019-06-05

Publications (1)

Publication Number Publication Date
WO2020244076A1 (zh) 2020-12-10

Family

ID=67791144

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/103414 WO2020244076A1 (zh) 2019-06-05 2019-08-29 Face recognition method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN110210425B (zh)
WO (1) WO2020244076A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113808272A (zh) * 2021-08-25 2021-12-17 西北工业大学 Texture mapping method in three-dimensional virtual human head and face modeling

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110210425B (zh) * 2019-06-05 2023-06-30 平安科技(深圳)有限公司 Face recognition method and apparatus, electronic device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070133025A1 (en) * 1999-05-18 2007-06-14 Karidi Ron J Methods and apparatus for reconstructing digitized images
CN107767335A (zh) * 2017-11-14 2018-03-06 上海易络客网络技术有限公司 Image fusion method and system based on facial-recognition feature point positioning
CN108108685A (zh) * 2017-12-15 2018-06-01 北京小米移动软件有限公司 Method and apparatus for performing facial recognition processing
CN108764041A (zh) * 2018-04-25 2018-11-06 电子科技大学 Face recognition method for face images with lower-part occlusion
CN110210425A (zh) * 2019-06-05 2019-09-06 平安科技(深圳)有限公司 Face recognition method and apparatus, electronic device, and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103514432B (zh) * 2012-06-25 2017-09-01 诺基亚技术有限公司 Face feature extraction method, device, and computer program product
CN108121978A (zh) * 2018-01-10 2018-06-05 马上消费金融股份有限公司 Face image processing method, system, device, and storage medium
CN109801225B (zh) * 2018-12-06 2022-12-27 重庆邮电大学 Face reticulation and stain removal method based on a multi-task fully convolutional neural network


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113808272A (zh) * 2021-08-25 2021-12-17 西北工业大学 Texture mapping method in three-dimensional virtual human head and face modeling
CN113808272B (zh) 2021-08-25 2024-04-12 西北工业大学 Texture mapping method in three-dimensional virtual human head and face modeling

Also Published As

Publication number Publication date
CN110210425A (zh) 2019-09-06
CN110210425B (zh) 2023-06-30

Similar Documents

Publication Publication Date Title
US11527055B2 (en) Feature density object classification, systems and methods
CN110826519B Face occlusion detection method and apparatus, computer device, and storage medium
US11151363B2 (en) Expression recognition method, apparatus, electronic device, and storage medium
CN109359575B Face detection method, service processing method, apparatus, terminal, and medium
WO2019128646A1 Face detection method, method for training convolutional neural network parameters, apparatus, and medium
WO2020252917A1 Blurred face image recognition method and apparatus, terminal device, and medium
WO2017096753A1 Facial key point tracking method, terminal, and non-volatile computer-readable storage medium
WO2018028546A1 Key point positioning method, terminal, and computer storage medium
US8792722B2 (en) Hand gesture detection
US8750573B2 (en) Hand gesture detection
CN110363091B Face recognition method, apparatus, device, and storage medium for profile faces
WO2019223102A1 Identity legitimacy verification method and apparatus, terminal device, and medium
CN109934065B Method and apparatus for gesture recognition
CN109344727B ID card text information detection method and apparatus, readable storage medium, and terminal
WO2022021029A1 Detection model training method and apparatus, detection model using method, and storage medium
CN110781770B Liveness detection method, apparatus, and device based on face recognition
CN111008935B Face image enhancement method, apparatus, system, and storage medium
WO2019119396A1 Facial expression recognition method and apparatus
WO2022227218A1 Drug name recognition method and apparatus, computer device, and storage medium
WO2020244076A1 Face recognition method and apparatus, electronic device, and storage medium
CN111666976A Feature fusion method and apparatus based on attribute information, and storage medium
CN114663726A Training method for a target category detection model, target detection method, and electronic device
US20220122341A1 (en) Target detection method and apparatus, electronic device, and computer storage medium
US10872232B2 (en) Image filtering for subject and crowd identification
CN110008791B Face region determination method, electronic device, and readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19931879

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19931879

Country of ref document: EP

Kind code of ref document: A1