WO2018001092A1 - Face recognition method and apparatus - Google Patents

Face recognition method and apparatus

Info

Publication number
WO2018001092A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature
face
key point
face image
point coordinates
Prior art date
Application number
PCT/CN2017/088219
Other languages
French (fr)
Chinese (zh)
Inventor
朱海涛
Original Assignee
中兴通讯股份有限公司
Priority date
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 (ZTE Corporation)
Publication of WO2018001092A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features


Abstract

A face recognition method and apparatus, the method comprising: acquiring a user face image (S101); performing key point coordinate detection on the face image and correcting the face image using the detected key point coordinates (S102); recalculating the key point coordinates of the corrected face image and extracting user face features according to the recalculated key point coordinates (S103); and performing face verification using the face features and the recalculated key point coordinates (S104). By detecting the key point coordinates of the face image, correcting the face image using the key point coordinates, extracting user face features from the corrected face image, and finally performing face verification, the precision and accuracy of face recognition are improved.

Description

Face recognition method and apparatus

Technical field

The present disclosure relates to the field of communications technologies, and in particular, to a face recognition method and apparatus.

Background

As a biometric feature, the human face has the advantages of being impossible to lose, difficult to copy, convenient to collect, unique, and unobtrusive to capture. It is attracting more and more attention and has entered many fields of social life. Compared with other human biometric recognition systems based on the retina, fingerprint, iris, voice, palm print, and the like, face recognition systems have very broad application prospects because of their convenience and friendliness. Applications such as face recognition access control and attendance systems, face recognition ATM intelligent video alarm systems, identity recognition in intelligent alarm systems for pursuing public security offenders, video conferencing, and medicine have become a research hotspot in the fields of pattern recognition and content-based retrieval.

Feature extraction and selection are the core issues of face recognition and the basis for subsequent correct recognition. The face image acquisition process is often disturbed by factors such as illumination changes and variations in face pose. The traditional face recognition process extracts facial features and then compares the obtained face features with face samples to determine whether they belong to the same person. Simply extracting facial features for comparison in this way results in low precision and poor accuracy of face recognition.
Summary of the invention

An object of the embodiments of the present disclosure is to provide a face recognition method, which solves the problem that face recognition technology in the related art simply extracts facial features for comparison, resulting in low precision and poor accuracy of face recognition.

To achieve the above object, an embodiment of the present disclosure provides a face recognition method, including: acquiring a user face image; performing key point coordinate detection on the face image, and correcting the face image using the detected key point coordinates; recalculating the key point coordinates of the corrected face image, and extracting user face features according to the recalculated key point coordinates; and performing face verification using the face features and the recalculated key point coordinates.

An embodiment of the present disclosure further provides a face recognition apparatus, including: an acquiring unit, configured to acquire a user face image; a correcting unit, configured to perform key point coordinate detection on the face image and correct the face image using the detected key point coordinates; an extracting unit, configured to recalculate the key point coordinates of the corrected face image and extract user face features according to the recalculated key point coordinates; and a verification unit, configured to perform face verification using the face features and the recalculated key point coordinates.

An embodiment of the present disclosure further provides a computer storage medium storing one or more programs executable by a computer, the one or more programs, when executed by the computer, causing the computer to perform the face recognition method provided above.

One of the above technical solutions has the following advantages or beneficial effects: the face recognition method and apparatus of the present disclosure acquire a user face image; perform key point coordinate detection on the face image and correct the face image using the detected key point coordinates; recalculate the key point coordinates of the corrected face image and extract user face features according to the recalculated key point coordinates; and perform face verification using the face features and the recalculated key point coordinates. In this way, by detecting the key point coordinates of the face image, correcting the face image using the key point coordinates, extracting user face features from the corrected face image, and performing face verification, the precision and accuracy of face recognition are improved.
Brief description of the drawings

FIG. 1 shows a face recognition method according to an embodiment of the present disclosure;

FIG. 2 shows another face recognition method according to an embodiment of the present disclosure; and

FIG. 3 shows a face recognition apparatus according to an embodiment of the present disclosure.

Detailed description

To make the technical problems to be solved, the technical solutions, and the advantages of the present disclosure clearer, a detailed description is given below with reference to the accompanying drawings and specific embodiments.
As shown in FIG. 1, an embodiment of the present disclosure provides a face recognition method, including the following steps. In step S101, a user face image is acquired. In this step, the device is started to acquire an image of a face. The user face image may be acquired by photographing the user with a camera, or by selecting a user face image that has already been captured, which is not limited herein.
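The disclosure does not prescribe how the image is obtained. As one illustration only, a minimal sketch using OpenCV (an assumed library choice, not part of the patent; the camera index 0 and the file path are likewise illustrative) that either grabs a single frame from a camera or loads an already captured photo might look like this:

```python
import cv2

def acquire_face_image(path=None):
    """Return a BGR face image, either loaded from `path` or captured live."""
    if path is not None:
        image = cv2.imread(path)          # use an already captured photo
        if image is None:
            raise IOError("could not read image file: " + path)
        return image
    camera = cv2.VideoCapture(0)          # default camera
    try:
        ok, frame = camera.read()         # grab a single frame
        if not ok:
            raise IOError("could not capture a frame from the camera")
        return frame
    finally:
        camera.release()

if __name__ == "__main__":
    img = acquire_face_image()            # or acquire_face_image("user.jpg")
    print("acquired image with shape", img.shape)
```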
In step S102, key point coordinate detection is performed on the face image, and the face image is corrected using the detected key point coordinates. In this step, performing key point coordinate detection on the face image may involve first setting a reference coordinate system, placing the face image in the set reference coordinate system, and then detecting the key point coordinates. The key points may be representative points on the face image, such as facial features like the mouth, nose, and ears, or distinctive points such as dimples and moles.
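The disclosure does not name a particular key point detector. As a sketch of one common choice (an assumption, not part of the patent), dlib's pretrained 68-point facial landmark predictor can supply key point coordinates in the image coordinate system, which here plays the role of the reference coordinate system; the model file name is the one distributed with dlib's examples and must be downloaded separately:

```python
import cv2
import dlib
import numpy as np

# Assumed model file shipped with dlib's examples (downloaded separately).
PREDICTOR_PATH = "shape_predictor_68_face_landmarks.dat"

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(PREDICTOR_PATH)

def detect_keypoints(image_bgr):
    """Return an (N, 2) array of (x, y) key point coordinates, or None."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)             # upsample once to find smaller faces
    if len(faces) == 0:
        return None
    shape = predictor(gray, faces[0])     # landmarks of the first detected face
    return np.array([[p.x, p.y] for p in shape.parts()], dtype=np.float32)
```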
After the key point coordinates are detected, the face image is corrected using the key point coordinates so that subsequent comparisons are more accurate. For example, in the acquired face image the face may be somewhat tilted or slightly turned to one side, so in order to improve recognition accuracy the face needs to be straightened before verification. Correcting the face image may involve translating or rotating the face image in the set reference coordinate system, according to the detected key point coordinates, so that the face image is oriented in the positive direction, and then applying further corrections to the face image, such as removing blurred points.

In step S103, the key point coordinates are recalculated for the corrected face image, and user face features are extracted according to the recalculated key point coordinates. In this step, after the face image has been corrected, the face image has been translated or rotated in the reference coordinate system, so the key point coordinates on the face image have changed; at this point the key point coordinates on the face image are recalculated. The key point coordinates may be recalculated by applying, to the key point coordinates detected in step S102, the same transformation that was applied to the face during the correction in step S102, or by re-detecting the new coordinates of the key points on the face image in the reference coordinate system. User face features are then extracted according to the recalculated key point coordinates. The user face features are not limited to one; multiple user face features can be extracted for subsequent verification in order to improve recognition accuracy. For the accuracy and effectiveness of verification, the present disclosure preferentially extracts user face features of relatively high importance, such as the facial features.
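When the correction is an affine warp, the first option above, re-applying the same transformation to the detected key points, is a one-line matrix operation. A minimal sketch with NumPy, assuming the 2x3 affine matrix used to warp the image (for example the matrix passed to cv2.warpAffine) is available:

```python
import numpy as np

def transform_keypoints(keypoints, affine_2x3):
    """Apply the same 2x3 affine matrix used to correct the image to (N, 2) key points."""
    pts = np.asarray(keypoints, dtype=np.float32)
    ones = np.ones((pts.shape[0], 1), dtype=np.float32)
    homogeneous = np.hstack([pts, ones])                                 # (N, 3)
    return homogeneous @ np.asarray(affine_2x3, dtype=np.float32).T     # (N, 2)
```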
In step S104, face verification is performed using the face features and the recalculated key point coordinates. In this step, after the features of each part of the face, that is, the user face features, are extracted, they are compared with face samples in a face database to perform face verification.

For example, to improve verification accuracy and recognition precision, multiple face features may be verified in this embodiment. However, because the amount of information carried by the features of different parts of the face and the importance of that information differ, the overall similarity cannot be calculated as a simple weighted average of these feature similarities. In order to make good use of the different amounts and importance of information in different parts of the face, the present disclosure trains in advance with a large number of training samples using the Support Vector Machine (SVM) algorithm, finally obtaining an SVM classifier. User face verification is then performed with the SVM classifier. The SVM classifier can effectively use the information carried by different parts of the face and the importance of that information, and comprehensively evaluate the similarity of each part of the face to obtain the final verification result.

The face recognition method of the present disclosure includes: acquiring a user face image; performing key point coordinate detection on the face image and correcting the face image using the detected key point coordinates; recalculating the key point coordinates of the corrected face image and extracting user face features according to the recalculated key point coordinates; and performing face verification using the face features and the recalculated key point coordinates. In this way, by detecting the key point coordinates of the face image, correcting the face image using the key point coordinates, extracting user face features from the corrected face image, and performing face verification, the precision and accuracy of face recognition are improved.
As shown in FIG. 2, an embodiment of the present disclosure provides a face recognition method, including the following steps. In step S201, a user face image is acquired. In step S202, key point coordinate detection is performed on the face image, the face image is aligned in a preset direction using the detected key point coordinates, and the aligned face image is then normalized.

It should be noted that step S202 is replaceable; that is, step S202 can be understood as a refinement of step S102 in the embodiment shown in FIG. 1. However, in the embodiments of the present disclosure, step S102 is not limited to the implementation of step S202. It can also be implemented by setting a coordinate system on the face image and comparing and aligning the coordinate system on the face image with a preset reference coordinate system; that is, step S202 may be replaced by an implementation in which a coordinate system is set on the face image and the coordinate system on the face image is compared and aligned with the preset reference coordinate system, or by the features of step S102. In step S202, performing key point coordinate detection on the face image may involve first setting a reference coordinate system, placing the face image in the set reference coordinate system, and then detecting the key point coordinates. The key points may be representative points on the face image, such as facial features like the mouth, nose, and ears, or distinctive points such as dimples and moles.

After the key point coordinates are detected, face alignment is first performed on the face image. Performing face alignment on the face image means that the face in the acquired face image may be somewhat tilted or slightly turned to one side, and the face image needs to be straightened in the reference coordinate system so that the face image is presented in the preset positive direction.

The alignment may be calculated in the reference coordinate system from the extracted key point coordinates, and after translation or rotation the face image is presented in the positive direction. For example, if the coordinates of the two eyes are obtained and the line between the eyes is found to form an angle with the X axis of the reference coordinate system, that is, the face image is tilted, the face needs to be rotated by a certain angle so that the line between the eyes is parallel to the X axis, and then translated by a certain amount so that the center of the face in the face image coincides with the origin; face alignment is then complete.

After the face alignment is completed, the face image is normalized, for example to remove blurred points. The normalization may include coordinate centering, shear invariance (x-shearing) normalization, scaling normalization, and rotation normalization of the face image. A minimal sketch of the alignment and normalization is given below.
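As a concrete illustration of the alignment and normalization described above, the following sketch (one possible implementation using OpenCV, not the method fixed by the disclosure) rotates the face so that the line between the eye centers is parallel to the X axis, scales it to a fixed inter-ocular distance, and shifts the mid-point of the eyes to a fixed location in a normalized output image; the 128x128 output size and eye positions are illustrative assumptions. The affine matrix is returned so the key points can be re-transformed in step S203, or the landmarks can simply be re-detected on the aligned image.

```python
import cv2
import numpy as np

def align_and_normalize(image, left_eye, right_eye,
                        out_size=(128, 128), eye_dist=48, eye_y=48):
    """Rotate, scale and translate the face so both eyes lie on a horizontal
    line at a fixed position in an out_size image. Returns (aligned, M)."""
    left_eye = np.asarray(left_eye, dtype=np.float32)
    right_eye = np.asarray(right_eye, dtype=np.float32)
    dx, dy = right_eye - left_eye
    angle = np.degrees(np.arctan2(dy, dx))        # tilt of the eye line vs. the X axis
    scale = eye_dist / np.hypot(dx, dy)           # normalize the inter-ocular distance
    eyes_center = (left_eye + right_eye) / 2.0
    center = (float(eyes_center[0]), float(eyes_center[1]))

    # Rotation and scaling about the mid-point between the eyes.
    M = cv2.getRotationMatrix2D(center, float(angle), float(scale))
    # Extra translation so the eye mid-point lands at a fixed target position.
    M[0, 2] += out_size[0] / 2.0 - center[0]
    M[1, 2] += eye_y - center[1]

    aligned = cv2.warpAffine(image, M, out_size, flags=cv2.INTER_LINEAR)
    return aligned, M
```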
In step S203, the key point coordinates are recalculated for the corrected face image, and user face features are extracted according to the recalculated key point coordinates.

In step S204, face verification is performed using the face features and the recalculated key point coordinates.

Step S201, step S203, and step S204 are respectively the same as step S101, step S103, and step S104 in the first embodiment of the present disclosure, and are not described again here.

Optionally, step S203 includes: recalculating the key point coordinates of the corrected face image, and dividing face feature areas using the coordinate positions of the newly obtained key points to extract user face features. In this step, because the face image was corrected earlier, the position of the face in the user face image has changed; the key point coordinates therefore need to be recalculated in the reference coordinate system, the coordinate positions of the newly obtained key points are used to divide face feature areas on the face image, and user face features are extracted in the divided face feature areas.

In this embodiment, the face feature areas mainly include a first feature area and a second feature area. The first feature area is the area composed of the eyebrows, eyes, and nose in the face image together with their surrounding parts, and the second feature area is the area composed of the mouth and its surrounding part in the face image.
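One simple way to divide the two feature areas, sketched below under the assumption that a 68-point landmark layout is used (indices 17-47 covering eyebrows, eyes, and nose, 48-67 the mouth, and 0-16 the outer face contour; these index ranges are an assumption, not stated in the disclosure), is to take a padded bounding box around the corresponding key points:

```python
import numpy as np

def crop_region(image, points, pad=10):
    """Crop a padded bounding box around a set of (x, y) key points."""
    pts = np.asarray(points, dtype=np.int32)
    x0, y0 = np.maximum(pts.min(axis=0) - pad, 0)
    x1, y1 = pts.max(axis=0) + pad
    h, w = image.shape[:2]
    return image[y0:min(y1, h), x0:min(x1, w)]

def split_feature_areas(aligned_image, keypoints):
    """Return (first_area, second_area) using the assumed 68-point index layout."""
    first_area = crop_region(aligned_image, keypoints[17:48])   # eyebrows, eyes, nose
    second_area = crop_region(aligned_image, keypoints[48:68])  # mouth
    return first_area, second_area
```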
Optionally, step S203 further includes: extracting feature one of the first feature area, feature two of the second feature area, the key point position coordinates of the facial features as feature three, and the key point position coordinates of the outer contour of the face as feature four.

In this embodiment, in order to improve the precision and accuracy of face recognition, four features are optionally extracted for face verification, but this is not limiting; more features and feature points may be extracted according to verification needs.

In step S203, the face features include feature one, feature two, feature three, and feature four. Feature one is the Gabor feature of the first feature area composed of the eyebrows, eyes, and nose in the face image and their surrounding parts; feature two is the Gabor feature of the second feature area composed of the mouth and its surrounding part; feature three is the newly obtained coordinate positions of the key points of the facial features; and feature four is the coordinate positions of the key points on the outer contour of the face in the face image, such as the coordinate positions of the cheeks and chin.

In this embodiment, the features of the first feature area and the second feature area are Gabor features, but this is not limiting; in other embodiments, the features of the first feature area and the second feature area may also be LBP features, HOG features, and the like. Although in this embodiment the first feature area and the second feature area use the same kind of feature, namely Gabor features, this is not limiting; in other embodiments, the first feature area and the second feature area may also use different kinds of features, for example a Gabor feature for the first feature area and an LBP feature for the second, or other combinations.
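As an illustration of how feature one and feature two might be computed, the sketch below filters a feature area with a small bank of Gabor kernels at several orientations and concatenates the pooled magnitude responses into one vector; the kernel size, wavelength, orientations, and pooling grid are illustrative assumptions, since the disclosure does not fix Gabor parameters, and LBP or HOG descriptors could be substituted as noted above.

```python
import cv2
import numpy as np

def gabor_feature(region_bgr, ksize=17, sigma=4.0, lambd=10.0, gamma=0.5,
                  orientations=8, grid=(4, 4)):
    """Filter the region with `orientations` Gabor kernels and pool the
    magnitude responses over a coarse grid, returning one concatenated vector."""
    gray = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (64, 64)).astype(np.float32) / 255.0
    features = []
    for i in range(orientations):
        theta = i * np.pi / orientations
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma)
        response = np.abs(cv2.filter2D(gray, cv2.CV_32F, kernel))
        # Average-pool each response over a grid to keep the vector short.
        cells = [c for row in np.array_split(response, grid[0], axis=0)
                 for c in np.array_split(row, grid[1], axis=1)]
        features.extend(cell.mean() for cell in cells)
    return np.array(features, dtype=np.float32)
```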
Optionally, step S204 includes: comparing feature one, feature two, feature three, and feature four respectively with a preset face sample to obtain similarity one of feature one, similarity two of feature two, similarity three of feature three, and similarity four of feature four; and classifying the four-dimensional feature vector composed of similarity one, similarity two, similarity three, and similarity four using a preset algorithm, and calculating and verifying whether the user face and the preset face sample belong to the same person.

In step S204, after the features of each part of the face are extracted, that is, after feature one, feature two, feature three, and feature four are extracted, feature one, feature two, feature three, and feature four are respectively compared with the features of the corresponding parts of the preset face sample to obtain similarity one of feature one, similarity two of feature two, similarity three of feature three, and similarity four of feature four.

It should be noted that similarity one, similarity two, similarity three, and similarity four may be compared and calculated at the same time or separately; no particular order is required.

A preset algorithm is then used to classify the four-dimensional feature vector composed of similarity one, similarity two, similarity three, and similarity four, and it is calculated and verified whether the user face and the preset face sample belong to the same person.

In this embodiment, similarity one, similarity two, similarity three, and similarity four may be cosine similarities, but this is not limiting; in other embodiments, Euclidean distance similarity, Mahalanobis distance similarity, Hamming distance similarity, or the like may be used.
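For reference, a minimal sketch of the cosine similarity used for the feature vectors (coordinate features can be flattened into vectors first); the Euclidean variant is shown only as one of the alternatives mentioned above, and the 1/(1+d) mapping is an assumed convention for turning a distance into a similarity:

```python
import numpy as np

def cosine_similarity(a, b, eps=1e-8):
    """Cosine of the angle between two feature vectors, in [-1, 1]."""
    a = np.ravel(a).astype(np.float64)
    b = np.ravel(b).astype(np.float64)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def euclidean_similarity(a, b):
    """One common alternative: map Euclidean distance into (0, 1]."""
    d = np.linalg.norm(np.ravel(a) - np.ravel(b))
    return float(1.0 / (1.0 + d))
```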
For example, because the amount of information carried by the features of different parts of the face and the importance of that information differ, the overall similarity cannot be calculated as a simple weighted average of these feature similarities. Therefore, in this embodiment, similarity one, similarity two, similarity three, and similarity four are combined into one four-dimensional feature vector, and the four-dimensional feature vector is classified using a preset SVM classifier, where the SVM classifier is trained in advance with a large number of training samples using the SVM algorithm, finally yielding the SVM classifier. The SVM classifier can effectively use the information carried by different parts of the face and the importance of that information, and comprehensively evaluate the similarity of each part of the face to obtain the final verification result.
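A minimal sketch of this verification stage using scikit-learn (an assumed library; the disclosure only requires that an SVM classifier be trained on a large number of samples). Each training example is the four-dimensional similarity vector computed for a pair of images, labelled 1 if the pair shows the same person and 0 otherwise; the vectors in the usage example are made-up illustrative values:

```python
import numpy as np
from sklearn.svm import SVC

def train_verifier(similarity_vectors, labels):
    """similarity_vectors: (M, 4) array of [sim1, sim2, sim3, sim4] per pair.
    labels: (M,) array, 1 = same person, 0 = different persons."""
    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(np.asarray(similarity_vectors), np.asarray(labels))
    return clf

def verify(clf, sim1, sim2, sim3, sim4):
    """Return True if the four-dimensional similarity vector is classified as 'same person'."""
    return bool(clf.predict([[sim1, sim2, sim3, sim4]])[0] == 1)

# Hypothetical usage with a handful of made-up similarity vectors:
if __name__ == "__main__":
    X = np.array([[0.92, 0.88, 0.95, 0.90],   # same-person pairs
                  [0.89, 0.91, 0.93, 0.87],
                  [0.41, 0.35, 0.52, 0.48],   # different-person pairs
                  [0.30, 0.44, 0.39, 0.41]])
    y = np.array([1, 1, 0, 0])
    model = train_verifier(X, y)
    print(verify(model, 0.90, 0.87, 0.94, 0.91))
```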
To improve verification accuracy and recognition precision, in this embodiment the four-dimensional feature vector composed of the four feature similarities is classified, but this is not limiting; in other embodiments, five-dimensional, six-dimensional, or higher-dimensional feature vectors may be extracted as needed for face recognition verification.

The face recognition method of the present disclosure includes: acquiring a user face image; performing key point coordinate detection on the face image, aligning the face image in a preset direction using the detected key point coordinates, and then normalizing the aligned face image; recalculating the key point coordinates of the corrected face image and extracting user face features according to the recalculated key point coordinates; and performing face verification using the face features and the recalculated key point coordinates. In this way, by detecting the key point coordinates of the face image, correcting the face image using the key point coordinates, extracting user face features from the corrected face image, and performing face verification, the precision and accuracy of face recognition are improved.
As shown in FIG. 3, an embodiment of the present disclosure provides a face recognition apparatus, including: an acquiring unit 301, configured to acquire a user face image; a correcting unit 302, configured to perform key point coordinate detection on the face image and correct the face image using the detected key point coordinates; an extracting unit 303, configured to recalculate the key point coordinates of the corrected face image and extract user face features according to the recalculated key point coordinates; and a verification unit 304, configured to perform face verification using the face features and the recalculated key point coordinates.

Optionally, the correcting unit 302 is configured to align the face image in a preset direction using the detected key point coordinates, and then normalize the aligned face image.

Optionally, the extracting unit 303 is configured to recalculate the key point coordinates of the corrected face image and divide face feature areas using the coordinate positions of the newly obtained key points to extract user face features.

Optionally, the face feature areas include a first feature area composed of the eyebrows, eyes, and nose and their surrounding parts, and a second feature area composed of the mouth and its surrounding part. The extracting unit 303 is configured to extract feature one of the first feature area, feature two of the second feature area, the key point position coordinates of the facial features as feature three, and the key point position coordinates of the outer contour of the face as feature four.

Optionally, the verification unit 304 is configured to compare feature one, feature two, feature three, and feature four respectively with a preset face sample to obtain similarity one of feature one, similarity two of feature two, similarity three of feature three, and similarity four of feature four; and to classify the four-dimensional feature vector composed of similarity one, similarity two, similarity three, and similarity four using a preset algorithm, and calculate and verify whether the user face and the preset face sample belong to the same person.

In the face recognition apparatus of the embodiments of the present disclosure, the acquiring unit is configured to acquire a user face image; the correcting unit is configured to perform key point coordinate detection on the face image and correct the face image using the detected key point coordinates; the extracting unit is configured to recalculate the key point coordinates of the corrected face image and extract user face features according to the recalculated key point coordinates; and the verification unit is configured to perform face verification using the face features and the recalculated key point coordinates. In this way, by detecting the key point coordinates of the face image, correcting the face image using the key point coordinates, extracting user face features from the corrected face image, and performing face verification, the precision and accuracy of face recognition are improved.
本领域普通技术人员可以理解实现上述实施例方法的全部或者部分步骤是可以通过程序指令相关的硬件来完成,所述的程序可以存储于一计算机可读取介质中,该程序在执行时,包括以下步骤:获取用户人脸图像;对人脸图像进行关键点坐标检测,并使用检测到的关键点坐标对人脸图像进行校正;对校正过的人脸图像重新计算关键点坐标,并根据重新计算过的关键点坐标提取用户人脸特征;使用人脸特征及重新计算过的关键点坐标进行人脸验证。It will be understood by those skilled in the art that all or part of the steps of implementing the above embodiments may be performed by hardware associated with program instructions, which may be stored in a computer readable medium, including when executed, including The following steps: obtaining a user face image; performing key point coordinate detection on the face image, and correcting the face image using the detected key point coordinates; recalculating the key point coordinates on the corrected face image, and according to the re The calculated key point coordinates are used to extract the user face features; the face features and the recalculated key point coordinates are used for face verification.
可选地,使用检测到的关键点坐标对人脸图像进行校正,包括:使用检测到的关键点坐标,对人脸图像按照预设方向进行人脸对齐,再将对齐后的人脸图像进行归一化处理。Optionally, the face image is corrected by using the detected key point coordinates, including: using the detected key point coordinates, performing face alignment on the face image according to a preset direction, and then performing the aligned face image. Normalized processing.
Optionally, recalculating the key point coordinates of the corrected face image and extracting the user face features according to the recalculated key point coordinates includes: recalculating the key point coordinates of the corrected face image and dividing the face into feature regions according to the coordinate positions of the newly obtained key points, so as to extract the user face features.
Optionally, the face feature regions include a first feature region composed of the eyebrows, eyes, and nose together with their surrounding portions, and a second feature region composed of the mouth and its surrounding portion. The step of recalculating the key point coordinates of the corrected face image and extracting the user face features according to the recalculated key point coordinates includes: extracting a first feature from the first feature region, a second feature from the second feature region, a third feature consisting of the key point position coordinates of the facial features, and a fourth feature consisting of the key point position coordinates of the outer contour of the face.
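One hedged way to realize the four features is sketched below: the recalculated key points delimit the two regions, a local binary pattern histogram (an assumed descriptor, via scikit-image, not specified by the disclosure) describes each region, and the remaining key point coordinates are used directly as the third and fourth features; the 68-point index layout is again an assumption of the sketch.

import numpy as np
from skimage.feature import local_binary_pattern

def _region_descriptor(gray, pts, pad=10):
    # Crop the bounding box of the given key points (with padding) and describe it with an LBP histogram.
    x0, y0 = np.maximum(pts.min(axis=0).astype(int) - pad, 0)
    x1, y1 = pts.max(axis=0).astype(int) + pad
    crop = gray[y0:y1, x0:x1]
    lbp = local_binary_pattern(crop, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return hist

def extract_features(gray, keypoints):
    pts = np.array(keypoints, dtype=np.float32)
    feature1 = _region_descriptor(gray, pts[17:48])   # first region: eyebrows, eyes, nose and surroundings
    feature2 = _region_descriptor(gray, pts[48:68])   # second region: mouth and surroundings
    feature3 = pts[17:68].flatten()                   # key point coordinates of the facial features
    feature4 = pts[0:17].flatten()                    # key point coordinates of the outer face contour
    return feature1, feature2, feature3, feature4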
Optionally, the step of performing face verification using the face features and the recalculated key point coordinates includes: comparing the first feature, the second feature, the third feature, and the fourth feature with a preset face sample respectively, obtaining a first, second, third, and fourth similarity; and classifying, using a preset algorithm, the four-dimensional feature vector composed of the four similarities, thereby computing and verifying whether the user's face and the preset face sample belong to the same person.
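As an illustrative sketch of this verification step, cosine similarity and a linear support vector machine stand in below for the unspecified similarity measure and "preset algorithm"; both choices, and the offline training data mentioned in the closing comment, are assumptions rather than part of the disclosure.

import numpy as np
from sklearn.svm import SVC

def _cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def similarity_vector(features, sample_features):
    # Compare each of the four features with the stored sample to form the 4-D similarity vector.
    return np.array([_cosine(f, s) for f, s in zip(features, sample_features)])

def verify(features, sample_features, classifier):
    # Classify the 4-D similarity vector: True if the face and the sample belong to the same person.
    vec = similarity_vector(features, sample_features).reshape(1, -1)
    return bool(classifier.predict(vec)[0])

# The classifier would be trained offline on similarity vectors computed from
# same-person and different-person image pairs, for example (hypothetical data):
#   classifier = SVC(kernel="linear").fit(training_vectors, training_labels)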
The storage medium may be, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are preferred embodiments of the present disclosure. It should be noted that a person of ordinary skill in the art may make several improvements and refinements without departing from the principles described in the present disclosure, and such improvements and refinements shall also fall within the protection scope of the present disclosure.

Claims (11)

  1. A face recognition method, comprising:
    acquiring a user face image;
    performing key point coordinate detection on the face image, and correcting the face image using the detected key point coordinates;
    recalculating the key point coordinates of the corrected face image, and extracting user face features according to the recalculated key point coordinates; and
    performing face verification using the face features and the recalculated key point coordinates.
  2. The method according to claim 1, wherein correcting the face image using the detected key point coordinates comprises:
    using the detected key point coordinates to align the face in the face image to a preset direction, and then normalizing the aligned face image.
  3. The method according to claim 1, wherein recalculating the key point coordinates of the corrected face image and extracting user face features according to the recalculated key point coordinates comprises:
    recalculating the key point coordinates of the corrected face image, and dividing the face into feature regions according to the coordinate positions of the newly obtained key points, so as to extract the user face features.
  4. The method according to any one of claims 1 to 3, wherein the face feature regions comprise a first feature region composed of the eyebrows, eyes, and nose together with their surrounding portions, and a second feature region composed of the mouth and its surrounding portion, and the step of recalculating the key point coordinates of the corrected face image and extracting user face features according to the recalculated key point coordinates comprises:
    extracting a first feature from the first feature region, a second feature from the second feature region, a third feature consisting of the key point position coordinates of the facial features, and a fourth feature consisting of the key point position coordinates of the outer contour of the face.
  5. The method according to claim 4, wherein the step of performing face verification using the face features and the recalculated key point coordinates comprises:
    comparing the first feature, the second feature, the third feature, and the fourth feature with a preset face sample respectively, to obtain a first similarity for the first feature, a second similarity for the second feature, a third similarity for the third feature, and a fourth similarity for the fourth feature; and
    classifying, using a preset algorithm, the four-dimensional feature vector composed of the first similarity, the second similarity, the third similarity, and the fourth similarity, and computing and verifying whether the user's face and the preset face sample belong to the same person.
  6. A face recognition apparatus, comprising:
    an acquiring unit, configured to acquire a user face image;
    a correcting unit, configured to perform key point coordinate detection on the face image and to correct the face image using the detected key point coordinates;
    an extracting unit, configured to recalculate the key point coordinates of the corrected face image and to extract user face features according to the recalculated key point coordinates; and
    a verification unit, configured to perform face verification using the face features and the recalculated key point coordinates.
  7. The apparatus according to claim 6, wherein the correcting unit is configured to use the detected key point coordinates to align the face in the face image to a preset direction, and then to normalize the aligned face image.
  8. The apparatus according to claim 6, wherein the extracting unit is configured to recalculate the key point coordinates of the corrected face image and to divide the face into feature regions according to the coordinate positions of the newly obtained key points, so as to extract the user face features.
  9. The apparatus according to any one of claims 6 to 8, wherein the face feature regions comprise a first feature region composed of the eyebrows, eyes, and nose together with their surrounding portions, and a second feature region composed of the mouth and its surrounding portion, and the extracting unit is configured to extract a first feature from the first feature region, a second feature from the second feature region, a third feature consisting of the key point position coordinates of the facial features, and a fourth feature consisting of the key point position coordinates of the outer contour of the face.
  10. The apparatus according to claim 9, wherein the verification unit is configured to compare the first feature, the second feature, the third feature, and the fourth feature with a preset face sample respectively, to obtain a first similarity for the first feature, a second similarity for the second feature, a third similarity for the third feature, and a fourth similarity for the fourth feature;
    and to classify, using a preset algorithm, the four-dimensional feature vector composed of the first similarity, the second similarity, the third similarity, and the fourth similarity, and to compute and verify whether the user's face and the preset face sample belong to the same person.
  11. A computer storage medium storing execution instructions, the execution instructions being configured to perform the method according to any one of claims 1 to 5.
PCT/CN2017/088219 2016-06-29 2017-06-14 Face recognition method and apparatus WO2018001092A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610495337.3 2016-06-29
CN201610495337.3A CN107545220A (en) 2016-06-29 2016-06-29 A kind of face identification method and device

Publications (1)

Publication Number Publication Date
WO2018001092A1 true WO2018001092A1 (en) 2018-01-04

Family

ID=60786458

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/088219 WO2018001092A1 (en) 2016-06-29 2017-06-14 Face recognition method and apparatus

Country Status (2)

Country Link
CN (1) CN107545220A (en)
WO (1) WO2018001092A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109145783B (en) * 2018-08-03 2022-03-25 北京字节跳动网络技术有限公司 Method and apparatus for generating information
CN109101919B (en) * 2018-08-03 2022-05-10 北京字节跳动网络技术有限公司 Method and apparatus for generating information
CN109376684B (en) 2018-11-13 2021-04-06 广州市百果园信息技术有限公司 Face key point detection method and device, computer equipment and storage medium
CN109685740B (en) * 2018-12-25 2023-08-11 努比亚技术有限公司 Face correction method and device, mobile terminal and computer readable storage medium
CN109685018A (en) * 2018-12-26 2019-04-26 深圳市捷顺科技实业股份有限公司 A kind of testimony of a witness method of calibration, system and relevant device
CN110728225B (en) * 2019-10-08 2022-04-19 北京联华博创科技有限公司 High-speed face searching method for attendance checking
CN111382408A (en) * 2020-02-17 2020-07-07 深圳壹账通智能科技有限公司 Intelligent user identification method and device and computer readable storage medium
CN113837020B (en) * 2021-08-31 2024-02-02 北京新氧科技有限公司 Cosmetic progress detection method, device, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1831846A (en) * 2006-04-20 2006-09-13 上海交通大学 Face posture identification method based on statistical model
US20090060290A1 (en) * 2007-08-27 2009-03-05 Sony Corporation Face image processing apparatus, face image processing method, and computer program
CN103218609A (en) * 2013-04-25 2013-07-24 中国科学院自动化研究所 Multi-pose face recognition method based on hidden least square regression and device thereof
CN103605965A (en) * 2013-11-25 2014-02-26 苏州大学 Multi-pose face recognition method and device
CN104036276A (en) * 2014-05-29 2014-09-10 无锡天脉聚源传媒科技有限公司 Face recognition method and device

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111091031A (en) * 2018-10-24 2020-05-01 北京旷视科技有限公司 Target object selection method and face unlocking method
CN109635752A (en) * 2018-12-12 2019-04-16 腾讯科技(深圳)有限公司 Localization method, face image processing process and the relevant apparatus of face key point
CN109583426A (en) * 2018-12-23 2019-04-05 广东腾晟信息科技有限公司 A method of according to image identification face
CN109711392A (en) * 2019-01-24 2019-05-03 郑州市现代人才测评与考试研究院 A kind of talent's assessment method based on recognition of face
WO2020215283A1 (en) * 2019-04-25 2020-10-29 深圳市汇顶科技股份有限公司 Facial recognition method, processing chip and electronic device
CN110781712A (en) * 2019-06-12 2020-02-11 上海荟宸信息科技有限公司 Human head space positioning method based on human face detection and recognition
CN110781712B (en) * 2019-06-12 2023-05-02 上海荟宸信息科技有限公司 Human head space positioning method based on human face detection and recognition
CN110263772A (en) * 2019-07-30 2019-09-20 天津艾思科尔科技有限公司 A kind of face characteristic identifying system based on face key point
CN110263772B (en) * 2019-07-30 2024-05-10 天津艾思科尔科技有限公司 Face feature recognition system based on face key points
CN110909766A (en) * 2019-10-29 2020-03-24 北京明略软件系统有限公司 Similarity determination method and device, storage medium and electronic device
CN112800819A (en) * 2019-11-14 2021-05-14 深圳云天励飞技术有限公司 Face recognition method and device and electronic equipment
CN110879983A (en) * 2019-11-18 2020-03-13 讯飞幻境(北京)科技有限公司 Face feature key point extraction method and face image synthesis method
CN110879983B (en) * 2019-11-18 2023-07-25 讯飞幻境(北京)科技有限公司 Face feature key point extraction method and face image synthesis method
CN111209823A (en) * 2019-12-30 2020-05-29 南京华图信息技术有限公司 Infrared human face alignment method
CN111401152B (en) * 2020-02-28 2024-04-12 中国工商银行股份有限公司 Face recognition method and device
CN111401152A (en) * 2020-02-28 2020-07-10 中国工商银行股份有限公司 Face recognition method and device
CN111339990A (en) * 2020-03-13 2020-06-26 乐鑫信息科技(上海)股份有限公司 Face recognition system and method based on dynamic update of face features
CN111339990B (en) * 2020-03-13 2023-03-24 乐鑫信息科技(上海)股份有限公司 Face recognition system and method based on dynamic update of face features
CN113536844A (en) * 2020-04-16 2021-10-22 中移(成都)信息通信科技有限公司 Face comparison method, device, equipment and medium
CN113536844B (en) * 2020-04-16 2023-10-31 中移(成都)信息通信科技有限公司 Face comparison method, device, equipment and medium
CN112101127A (en) * 2020-08-21 2020-12-18 深圳数联天下智能科技有限公司 Face shape recognition method and device, computing equipment and computer storage medium
CN112101127B (en) * 2020-08-21 2024-04-30 深圳数联天下智能科技有限公司 Face shape recognition method and device, computing equipment and computer storage medium
CN112651279A (en) * 2020-09-24 2021-04-13 深圳福鸽科技有限公司 3D face recognition method and system based on short-distance application
CN113723214B (en) * 2021-08-06 2023-10-13 武汉光庭信息技术股份有限公司 Face key point labeling method, system, electronic equipment and storage medium
CN113723214A (en) * 2021-08-06 2021-11-30 武汉光庭信息技术股份有限公司 Face key point marking method, system, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN107545220A (en) 2018-01-05

Similar Documents

Publication Publication Date Title
WO2018001092A1 (en) Face recognition method and apparatus
KR101998112B1 (en) Method for recognizing partial obscured face by specifying partial area based on facial feature point, recording medium and apparatus for performing the method
US8655029B2 (en) Hash-based face recognition system
US9990538B2 (en) Face recognition apparatus and method using physiognomic feature information
US11449590B2 (en) Device and method for user authentication on basis of iris recognition
WO2012142756A1 (en) Human eyes images based multi-feature fusion identification method
CN102332093A (en) Identity authentication method and device adopting palmprint and human face fusion recognition
Kaur et al. A review on iris recognition
WO2009041963A1 (en) Iris recognition using consistency information
Charity et al. A bimodal biometrie student attendance system
Aboshosha et al. Score level fusion for fingerprint, iris and face biometrics
Gawande et al. Improving iris recognition accuracy by score based fusion method
Kandgaonkar et al. Ear biometrics: A survey on ear image databases and techniques for ear detection and recognition
US10621419B2 (en) Method and system for increasing biometric acceptance rates and reducing false accept rates and false rates
Patel et al. Human identification by partial iris segmentation using pupil circle growing based on binary integrated edge intensity curve
Kour et al. Palmprint recognition system
Min et al. Comparison of eyelid and eyelash detection algorithms for performance improvement of iris recognition
KR20160042646A (en) Method of Recognizing Faces
George et al. Performance comparison of face recognition using transform domain techniques
Deshpande et al. Fast and Reliable Biometric Verification System Using Iris
Muthukumaran et al. Face and Iris based Human Authentication using Deep Learning
Meva et al. Study of different trends and techniques in face recognition
Resmi et al. Automatic 2D ear detection: A survey
Lu et al. Zernike moment invariants based iris recognition
Pereira et al. A method for improving the reliability of an iris recognition system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17819089

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17819089

Country of ref document: EP

Kind code of ref document: A1