WO2021174880A1 - Feature extraction model training method, face recognition method, apparatus, device, and medium - Google Patents

Feature extraction model training method, face recognition method, apparatus, device, and medium

Info

Publication number
WO2021174880A1
Authority
WO
WIPO (PCT)
Prior art keywords
face image
feature extraction
image
extraction model
data set
Prior art date
2020-09-01
Application number
PCT/CN2020/125033
Other languages
English (en)
French (fr)
Inventor
孙太武
周超勇
刘玉宇
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2020-09-01
Filing date
2020-10-30
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2021174880A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Definitions

  • This application relates to the field of artificial intelligence technology, and in particular to a feature extraction model training method, a face recognition method, an apparatus, a device, and a medium.
  • The purpose of this application is to provide a feature extraction model training method, a face recognition method, an apparatus, a device, and a medium, so as to improve the accuracy of partially occluded face recognition.
  • To this end, this application provides a feature extraction model training method, including:
  • acquiring a sample data set, the sample data set including a plurality of face images annotated with corresponding identity labels, the plurality of face images being divided into face images partially occluded by an occluder and unoccluded face images;
  • training a pre-established feature extraction model on the sample data set after image cropping to obtain a target feature extraction model.
  • This application also provides a face recognition method, which includes:
  • This application also provides a feature extraction model training apparatus, including:
  • a sample acquisition module, configured to acquire a sample data set, the sample data set including a plurality of face images annotated with corresponding identity labels, the plurality of face images being divided into face images partially occluded by an occluder and unoccluded face images;
  • a sample enhancement module, configured to perform data enhancement processing on the sample data set;
  • an image cropping module, configured to perform image cropping on the sample data set after the data enhancement processing, so as to randomly crop a local region of each face image in the sample data set according to a preset cropping rule;
  • a model training module, configured to train a pre-established feature extraction model on the sample data set after the image cropping to obtain a target feature extraction model.
  • This application also provides a face recognition apparatus, which includes:
  • a target image acquisition module, configured to acquire a target face image;
  • a model processing module, configured to process the target face image by using the target feature extraction model to obtain features corresponding to the target face image;
  • a comparison module, configured to compare the features corresponding to the target face image with features of images stored in a preset image library;
  • a recognition module, configured to obtain an identity recognition result for the target face image according to the comparison result.
  • This application also provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the aforementioned feature extraction model training method or face recognition method;
  • the feature extraction model training method includes:
  • acquiring a sample data set, the sample data set including a plurality of face images annotated with corresponding identity labels, the plurality of face images being divided into face images partially occluded by an occluder and unoccluded face images;
  • the face recognition method includes:
  • This application also provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the aforementioned feature extraction model training method or face recognition method;
  • the feature extraction model training method includes:
  • acquiring a sample data set, the sample data set including a plurality of face images annotated with corresponding identity labels, the plurality of face images being divided into face images partially occluded by an occluder and unoccluded face images;
  • the face recognition method includes:
  • On the one hand, this application enhances the sample data set, so that more samples are available to train the feature extraction model and the trained model is more accurate; on the other hand, this application randomly crops a local region of each face image in the sample data set according to a preset cropping rule and trains the feature extraction model on these local regions, so that the cropping rule can be configured to make the trained model pay more attention to regions not covered by an occluder, and the features the model extracts from a partially occluded face image and from an unoccluded one are as similar as possible. Therefore, when the trained feature extraction model is applied to the recognition of partially occluded face images, the recognition accuracy can be improved.
  • FIG. 1 is a flowchart of the feature extraction model training method according to Embodiment 1 of this application;
  • FIG. 2 is a flowchart of the face recognition method according to Embodiment 2 of this application;
  • FIG. 3 is a schematic diagram of the feature extraction model training apparatus according to Embodiment 3 of this application;
  • FIG. 4 is a schematic diagram of the face recognition apparatus according to Embodiment 4 of this application;
  • FIG. 5 is a hardware architecture diagram of the computer device according to Embodiment 5 of this application.
  • This embodiment provides a feature extraction model training method implemented through machine learning. As shown in FIG. 1, the method includes the following steps:
  • S1: Acquire a sample data set, where the sample data set includes a plurality of face images annotated with corresponding identity labels, and the plurality of face images include face images partially occluded by an occluder and unoccluded face images.
  • In this embodiment, the occluder may be any one of a mask, a microphone, sunglasses, and the like, which is not specifically limited here.
  • S2: Perform data enhancement processing on the sample data set. The data enhancement processing can be implemented in any one or more of the following ways:
  • Adopt a GAN (Generative Adversarial Network) to learn the features of partially occluded face images and unoccluded face images, and replace the features of the corresponding region in an unoccluded face image with the features of the occluded region in a partially occluded face image, so as to obtain a new face image, where the identity label annotated on the new face image should be consistent with that of the unoccluded face image before the replacement. For example, taking a mask as the occluder, the features of the mask region in a mask-wearing face image are used to replace the features at the corresponding position in an unmasked face image to construct a new face image.
  • Taking a mask as the occluder as an example, suppose the sample data set contains both a face image of user A wearing a mask and a face image of user A without a mask. The image of the region corresponding to the mask position is then cropped from the unmasked face image, and the cropped image is overlaid onto the mask in the mask-wearing face image to construct a new face image.
  • S3: Perform image cropping on the sample data set after the data enhancement processing, so as to randomly crop a local region of each face image in the sample data set according to a preset cropping rule.
  • For example, taking a mask as the occluder, the cropping rule can be configured to crop randomly according to preset probabilities, where the probability of cropping the upper half of the face is set to M% and the probability of cropping the lower half of the face is set to N%; to make the trained model pay more attention to the region outside the mask (that is, the upper half of the face), M should be set greater than N.
  • When the occluder is sunglasses, a microphone, or the like, the rule is configured along similar lines: the face image is first divided into easily occluded regions and not-easily occluded regions according to the nature of the occluder, the cropping probability of the easily occluded regions is then set lower than that of the not-easily occluded regions, and finally a local region of each face image in the sample data set is randomly cropped according to the configured probabilities. The crop size can be determined experimentally.
  • S4: Train the pre-established feature extraction model on the sample data set after the image cropping to obtain the target feature extraction model. In this embodiment, the feature extraction model is preferably a CNN (Convolutional Neural Network) model.
  • S41: Input the cropped local region of the face image into the feature extraction model for processing to obtain local features of the face image.
  • S42: Input the local features of the face image into a pre-trained classifier to obtain an identity recognition result for the face image.
  • S43: Obtain a first loss function based on the identity recognition result and the identity label corresponding to the face image.
  • In this embodiment, the first loss function may be a cross-entropy loss function.
  • S44: Iteratively train the feature extraction model according to the first loss function until the first loss function satisfies a predetermined condition, for example, converges to a minimum.
  • Preferably, in this embodiment a binary classification network is further provided at the output of the feature extraction model.
  • After the local features of the face image are obtained, the method of this embodiment may further include: inputting the local features of the face image into the preset binary classification network to obtain an occlusion determination result indicating whether the face image is occluded; and obtaining a second loss function based on the occlusion determination result of the face image and the actual occlusion situation.
  • In this case, step S44 includes: iteratively training the feature extraction model according to the first loss function and the second loss function.
  • Specifically, the first loss function and the second loss function may be combined in a weighted sum (with the weights set as required) to obtain a final loss function, and the feature extraction model is then iteratively trained according to the final loss function until the final loss function satisfies a predetermined condition, for example, converges to a minimum.
  • Preferably, before step S4 is performed, the method of this embodiment may further include pre-training the feature extraction model. For example, a binary classification network is first used to process the sample data set after the image cropping, so as to classify the face images in the sample data set into partially occluded face images and unoccluded face images, and the partially occluded face images or the unoccluded face images are then used to pre-train the feature extraction model. When step S4 is subsequently executed, the initial weights of the feature extraction model can be set to the weights obtained through the pre-training.
  • In addition, to enhance the generalization of the model, the method of this embodiment may further include: during training, randomly deleting some of the features of the face image according to a preset deletion rule.
  • On the one hand, this embodiment enhances the sample data set, so that more training images are available to train the feature extraction model and the trained model is more accurate; on the other hand, a local region of each face image in the sample data set is randomly cropped according to the preset cropping rule and used to train the feature extraction model, so that the cropping rule can be configured to make the trained model pay more attention to the parts not covered by an occluder, and the features the model extracts from a partially occluded face image and from an unoccluded one are as similar as possible. Therefore, when the feature extraction model is applied to partially occluded face recognition, the recognition accuracy can be improved.
  • This application further provides a face recognition method. As shown in FIG. 2, the method includes the following steps:
  • S5: Acquire a target face image to be recognized.
  • S6: Use the target feature extraction model trained in Embodiment 1 to process the target face image to obtain features corresponding to the target face image.
  • S7: Compare the features corresponding to the target face image with the features of images stored in a preset image library.
  • S8: Obtain an identity recognition result for the target face image according to the comparison result. Specifically, the identity label corresponding to the stored image in the preset image library whose features best match those of the target face image is taken as the identity recognition result of the target face image.
  • Since recognition uses the target feature extraction model obtained in Embodiment 1, this embodiment can improve the accuracy of recognizing partially occluded face images.
  • In addition, recognized target face images can be automatically added to the image library, so that recognition accuracy improves through continuous iteration.
  • Preferably, the preset image library may include a first image library for storing face images partially occluded by an occluder, and a second image library for storing unoccluded face images.
  • Before step S7, the method may further include: inputting the features corresponding to the target face image into the preset binary classification network to obtain an occlusion determination result for the target face image. When the target face image is a partially occluded face image, step S7 compares the features corresponding to the target face image with the features of images stored in the first image library; when the target face image is an unoccluded face image, step S7 compares the features corresponding to the target face image with the features of images stored in the second image library. Recognition accuracy can thereby be further improved.
  • As shown in FIG. 3, the apparatus 10 includes a sample acquisition module 11, a sample enhancement module 12, an image cropping module 13, and a model training module 14. Each module is described in detail below:
  • The sample acquisition module 11 is configured to acquire a sample data set, where the sample data set includes a plurality of face images annotated with corresponding identity labels, and the plurality of face images include face images partially occluded by an occluder and unoccluded face images.
  • In this embodiment, the occluder may be any one of a mask, a microphone, sunglasses, and the like, which is not specifically limited here.
  • The sample enhancement module 12 is configured to perform data enhancement processing on the sample data set.
  • In this embodiment, the data enhancement processing can be implemented in any one or more of the following ways:
  • Adopt a GAN (Generative Adversarial Network) to learn the features of partially occluded face images and unoccluded face images, and replace the features of the corresponding region in an unoccluded face image with the features of the occluded region in a partially occluded face image, so as to obtain a new face image, where the identity label annotated on the new face image should be consistent with that of the unoccluded face image before the replacement. For example, taking a mask as the occluder, the features of the mask region in a mask-wearing face image are used to replace the features at the corresponding position in an unmasked face image to construct a new face image.
  • Taking a mask as the occluder as an example, suppose the sample data set contains both a face image of user A wearing a mask and a face image of user A without a mask. The image of the region corresponding to the mask position is then cropped from the unmasked face image, and the cropped image is overlaid onto the mask in the mask-wearing face image to construct a new face image.
  • The image cropping module 13 is configured to perform image cropping on the sample data set after the data enhancement processing, so as to randomly crop a local region of each face image in the sample data set according to a preset cropping rule.
  • For example, taking a mask as the occluder, the cropping rule can be configured to crop randomly according to preset probabilities, where the probability of cropping the upper half of the face is set to M% and the probability of cropping the lower half of the face is set to N%; to make the trained model pay more attention to the region outside the mask (that is, the upper half of the face), M should be set greater than N.
  • When the occluder is sunglasses, a microphone, or the like, the rule is configured along similar lines: the face image is first divided into easily occluded regions and not-easily occluded regions according to the nature of the occluder, the cropping probability of the easily occluded regions is then set lower than that of the not-easily occluded regions, and finally a local region of each face image in the sample data set is randomly cropped according to the configured probabilities. The crop size can be determined experimentally.
  • The model training module 14 is configured to train the pre-established feature extraction model on the sample data set after the image cropping to obtain the target feature extraction model.
  • In this embodiment, the feature extraction model is preferably a CNN (Convolutional Neural Network) model.
  • a processing unit, configured to input the cropped local region of the face image into the feature extraction model for processing to obtain local features of the face image;
  • a classification unit, configured to input the local features of the face image into a pre-trained classifier to obtain an identity recognition result for the face image;
  • a first loss function acquisition unit, configured to obtain a first loss function based on the identity recognition result and the identity label corresponding to the face image;
  • in this embodiment, the first loss function may be a cross-entropy loss function;
  • an iterative training unit, configured to iteratively train the feature extraction model according to the first loss function until the first loss function satisfies a predetermined condition, for example, converges to a minimum.
  • Preferably, in this embodiment a binary classification network is provided at the output of the feature extraction model.
  • The model training module may further include: a binary classification unit, configured to input the local features of the face image, once obtained, into the preset binary classification network to obtain an occlusion determination result indicating whether the face image is occluded; and a second loss function acquisition unit, configured to obtain a second loss function based on the occlusion determination result of the face image and the actual occlusion situation.
  • In this case, the iterative training unit is specifically configured to iteratively train the feature extraction model according to the first loss function and the second loss function.
  • Specifically, the first loss function and the second loss function may be combined in a weighted sum (with the weights set as required) to obtain a final loss function, and the feature extraction model is then iteratively trained according to the final loss function until the final loss function satisfies a predetermined condition, for example, converges to a minimum.
  • Preferably, the apparatus of this embodiment may further include a pre-training module, configured to pre-train the feature extraction model before the model training module trains the pre-established feature extraction model on the sample data set after the image cropping. For example, a binary classification network is first used to process the sample data set after the image cropping, so as to classify the face images in the sample data set into partially occluded face images and unoccluded face images, and the partially occluded face images or the unoccluded face images are then used to pre-train the feature extraction model. When the model training module 14 subsequently trains the feature extraction model, the initial weights can be set to the weights obtained through the pre-training.
  • In addition, to enhance the generalization of the model, the apparatus of this embodiment may further include a feature deletion module, configured to randomly delete some of the features of the face image according to a preset deletion rule during training.
  • On the one hand, this embodiment enhances the sample data set, so that more training images are available to train the feature extraction model and the trained model is more accurate; on the other hand, a local region of each face image in the sample data set is randomly cropped according to the preset cropping rule and used to train the feature extraction model, so that the cropping rule can be configured to make the trained model pay more attention to the parts not covered by an occluder, and the features the model extracts from a partially occluded face image and from an unoccluded one are as similar as possible. Therefore, when the feature extraction model is applied to partially occluded face recognition, the recognition accuracy can be improved.
  • This application further provides a face recognition apparatus. As shown in FIG. 4, the apparatus 20 includes:
  • a target image acquisition module 21, configured to acquire a target face image to be recognized;
  • a model processing module 22, configured to use the target feature extraction model trained in Embodiment 3 to process the target face image to obtain features corresponding to the target face image;
  • a comparison module 23, configured to compare the features corresponding to the target face image with the features of images stored in a preset image library;
  • a recognition module 24, configured to obtain an identity recognition result for the target face image according to the comparison result; specifically, the identity label corresponding to the stored image in the preset image library whose features best match those of the target face image is taken as the identity recognition result of the target face image.
  • Since recognition uses the target feature extraction model obtained in Embodiment 3, this embodiment can improve the accuracy of recognizing partially occluded face images.
  • In addition, the apparatus of this embodiment can automatically add recognized target face images to the image library, so that recognition accuracy improves through continuous iteration.
  • Preferably, the preset image library may include a first image library for storing face images partially occluded by an occluder, and a second image library for storing unoccluded face images.
  • The apparatus of this embodiment may further include an occlusion determination module, configured to input the features corresponding to the target face image into the preset binary classification network before the comparison module performs its operation, so as to obtain an occlusion determination result for the target face image. When the target face image is a partially occluded face image, the comparison module compares the features corresponding to the target face image with the features of images stored in the first image library; when the target face image is an unoccluded face image, the comparison module compares the features corresponding to the target face image with the features of images stored in the second image library. Recognition accuracy can thereby be further improved.
  • This embodiment provides a computer device capable of executing programs, such as a smartphone, a tablet computer, a notebook computer, a desktop computer, a rack server, a blade server, a tower server, or a cabinet server (including an independent server or a server cluster composed of multiple servers).
  • The computer device 20 of this embodiment at least includes, but is not limited to, a memory 21 and a processor 22 that can be communicatively connected to each other through a system bus, as shown in FIG. 5. It should be pointed out that FIG. 5 only shows the computer device 20 with components 21-22, but it should be understood that not all of the illustrated components are required, and more or fewer components may be implemented instead.
  • The memory 21 (i.e., a readable storage medium) includes a flash memory, a hard disk, a multimedia card, a card-type memory (for example, SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like.
  • In some embodiments, the memory 21 may be an internal storage unit of the computer device 20, such as a hard disk or memory of the computer device 20.
  • In other embodiments, the memory 21 may also be an external storage device of the computer device 20, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the computer device 20.
  • The memory 21 may also include both the internal storage unit of the computer device 20 and its external storage device.
  • The memory 21 is generally used to store the operating system and various application software installed on the computer device 20, such as the program code of the feature extraction model training apparatus 10 or the face recognition apparatus 20 of Embodiment 3 or 4.
  • The memory 21 can also be used to temporarily store various types of data that have been output or are to be output.
  • In some embodiments, the processor 22 may be a central processing unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip.
  • The processor 22 is generally used to control the overall operation of the computer device 20.
  • In this embodiment, the processor 22 is used to run the program code stored in the memory 21 or to process data, for example, to run the feature extraction model training apparatus 10 or the face recognition apparatus 20, so as to implement the feature extraction model training method of Embodiment 1 or the face recognition method of Embodiment 2.
  • The feature extraction model training method includes: acquiring a sample data set, the sample data set including a plurality of face images annotated with corresponding identity labels, the plurality of face images being divided into face images partially occluded by an occluder and unoccluded face images; performing data enhancement processing on the sample data set; performing image cropping on the sample data set after the data enhancement processing, so as to randomly crop a local region of each face image in the sample data set according to a preset cropping rule; and training a pre-established feature extraction model on the sample data set after the image cropping to obtain a target feature extraction model.
  • The face recognition method includes: acquiring a target face image; using the target feature extraction model to process the target face image to obtain features corresponding to the target face image; comparing the features corresponding to the target face image with the features of images stored in a preset image library; and obtaining an identity recognition result for the target face image according to the comparison result.
  • This embodiment provides a computer-readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card-type memory (for example, SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, a server, an app store, and the like, on which a computer program is stored that implements the corresponding function when executed by a processor.
  • The computer-readable storage medium of this embodiment is used to store the feature extraction model training apparatus 10 or the face recognition apparatus 20, and, when executed by a processor, implements the feature extraction model training method of Embodiment 1 or the face recognition method of Embodiment 2.
  • The feature extraction model training method includes: acquiring a sample data set, the sample data set including a plurality of face images annotated with corresponding identity labels, the plurality of face images being divided into face images partially occluded by an occluder and unoccluded face images; performing data enhancement processing on the sample data set; performing image cropping on the sample data set after the data enhancement processing, so as to randomly crop a local region of each face image in the sample data set according to a preset cropping rule; and training a pre-established feature extraction model on the sample data set after the image cropping to obtain a target feature extraction model.
  • The face recognition method includes: acquiring a target face image; using the target feature extraction model to process the target face image to obtain features corresponding to the target face image; comparing the features corresponding to the target face image with the features of images stored in a preset image library; and obtaining an identity recognition result for the target face image according to the comparison result.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

A feature extraction model training method, a face recognition method, an apparatus, a device, and a medium. The training method includes: acquiring a sample data set (S1), the sample data set including a plurality of face images annotated with corresponding identity labels, the plurality of face images being divided into face images partially occluded by an occluder and unoccluded face images; performing data enhancement processing on the sample data set (S2); performing image cropping on the sample data set after the data enhancement processing, so as to randomly crop a local region of each face image in the sample data set according to a preset cropping rule (S3); and training a pre-established feature extraction model on the sample data set after the image cropping to obtain a target feature extraction model (S4). The method can improve the accuracy of partially occluded face recognition.

Description

Feature extraction model training method, face recognition method, apparatus, device, and medium
This application claims priority to Chinese patent application No. CN 202010906610.3, filed on September 1, 2020 and entitled "Feature extraction model training method, face recognition method, apparatus, device, and medium", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of artificial intelligence technology, and in particular to a feature extraction model training method, a face recognition method, an apparatus, a device, and a medium.
Background
Affected by COVID-19, people currently wear masks whether entering or leaving a company or a residential community, which poses a great challenge to face-recognition-based access control systems and the like.
Technical Problem
The inventors found that, because a mask occludes part of the face, it is difficult to extract a true and accurate feature description from the occluded facial region, and this inaccuracy of the feature description greatly increases the difficulty of face recognition. In this situation, existing face recognition methods cannot meet the requirement of accurate recognition.
Technical Solution
In view of the above deficiencies of the prior art, the purpose of this application is to provide a feature extraction model training method, a face recognition method, an apparatus, a device, and a medium, so as to improve the accuracy of partially occluded face recognition.
To achieve the above purpose, this application provides a feature extraction model training method, including:
acquiring a sample data set, the sample data set including a plurality of face images annotated with corresponding identity labels, the plurality of face images being divided into face images partially occluded by an occluder and unoccluded face images;
performing data enhancement processing on the sample data set;
performing image cropping on the sample data set after the data enhancement processing, so as to randomly crop a local region of each face image in the sample data set according to a preset cropping rule; and
training a pre-established feature extraction model on the sample data set after the image cropping to obtain a target feature extraction model.
To achieve the above purpose, this application further provides a face recognition method, including:
acquiring a target face image;
processing the target face image by using the target feature extraction model to obtain features corresponding to the target face image;
comparing the features corresponding to the target face image with features of images stored in a preset image library; and
obtaining an identity recognition result for the target face image according to the comparison result.
To achieve the above purpose, this application further provides a feature extraction model training apparatus, including:
a sample acquisition module, configured to acquire a sample data set, the sample data set including a plurality of face images annotated with corresponding identity labels, the plurality of face images being divided into face images partially occluded by an occluder and unoccluded face images;
a sample enhancement module, configured to perform data enhancement processing on the sample data set;
an image cropping module, configured to perform image cropping on the sample data set after the data enhancement processing, so as to randomly crop a local region of each face image in the sample data set according to a preset cropping rule; and
a model training module, configured to train a pre-established feature extraction model on the sample data set after the image cropping to obtain a target feature extraction model.
To achieve the above purpose, this application further provides a face recognition apparatus, including:
a target image acquisition module, configured to acquire a target face image;
a model processing module, configured to process the target face image by using the target feature extraction model to obtain features corresponding to the target face image;
a comparison module, configured to compare the features corresponding to the target face image with features of images stored in a preset image library; and
a recognition module, configured to obtain an identity recognition result for the target face image according to the comparison result.
To achieve the above purpose, this application further provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the aforementioned feature extraction model training method or face recognition method;
the feature extraction model training method includes:
acquiring a sample data set, the sample data set including a plurality of face images annotated with corresponding identity labels, the plurality of face images being divided into face images partially occluded by an occluder and unoccluded face images;
performing data enhancement processing on the sample data set;
performing image cropping on the sample data set after the data enhancement processing, so as to randomly crop a local region of each face image in the sample data set according to a preset cropping rule; and
training a pre-established feature extraction model on the sample data set after the image cropping to obtain a target feature extraction model;
the face recognition method includes:
acquiring a target face image;
processing the target face image by using the target feature extraction model to obtain features corresponding to the target face image;
comparing the features corresponding to the target face image with features of images stored in a preset image library; and
obtaining an identity recognition result for the target face image according to the comparison result.
To achieve the above purpose, this application further provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the aforementioned feature extraction model training method or face recognition method;
the feature extraction model training method includes:
acquiring a sample data set, the sample data set including a plurality of face images annotated with corresponding identity labels, the plurality of face images being divided into face images partially occluded by an occluder and unoccluded face images;
performing data enhancement processing on the sample data set;
performing image cropping on the sample data set after the data enhancement processing, so as to randomly crop a local region of each face image in the sample data set according to a preset cropping rule; and
training a pre-established feature extraction model on the sample data set after the image cropping to obtain a target feature extraction model;
the face recognition method includes:
acquiring a target face image;
processing the target face image by using the target feature extraction model to obtain features corresponding to the target face image;
comparing the features corresponding to the target face image with features of images stored in a preset image library; and
obtaining an identity recognition result for the target face image according to the comparison result.
Beneficial Effects
On the one hand, this application enhances the sample data set, so that more samples are available to train the feature extraction model and the trained model is more accurate; on the other hand, this application randomly crops a local region of each face image in the sample data set according to a preset cropping rule and trains the feature extraction model on these local regions, so that the cropping rule can be configured to make the trained model pay more attention to regions not covered by an occluder, and in turn the features the model extracts from a partially occluded face image and from an unoccluded one are as similar as possible. Therefore, when the trained feature extraction model is applied to the recognition of partially occluded face images, the recognition accuracy can be improved.
Brief Description of the Drawings
FIG. 1 is a flowchart of the feature extraction model training method of Embodiment 1 of this application;
FIG. 2 is a flowchart of the face recognition method of Embodiment 2 of this application;
FIG. 3 is a schematic diagram of the feature extraction model training apparatus of Embodiment 3 of this application;
FIG. 4 is a schematic diagram of the face recognition apparatus of Embodiment 4 of this application;
FIG. 5 is a hardware architecture diagram of the computer device of Embodiment 5 of this application.
Description of the Embodiments
In order to make the purpose, technical solution, and advantages of this application clearer, this application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain this application and are not intended to limit it. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the scope of protection of this application.
The terms used in this application are for the purpose of describing particular embodiments only and are not intended to limit the disclosure. The singular forms "a", "said", and "the" used in this disclosure and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any and all possible combinations of one or more of the associated listed items.
Embodiment 1
This embodiment provides a feature extraction model training method implemented through machine learning. As shown in FIG. 1, the method includes the following steps:
S1: Acquire a sample data set, where the sample data set includes a plurality of face images annotated with corresponding identity labels, and the plurality of face images include face images partially occluded by an occluder and unoccluded face images.
In this embodiment, the occluder may be any one of a mask, a microphone, sunglasses, and the like, which is not specifically limited here.
S2: Perform data enhancement processing on the sample data set.
In this embodiment, the data enhancement processing can be implemented in any one or more of the following ways:
(1) Change attribute parameters of the face image, such as size, pixels, grayscale, saturation, and hue, to obtain a new face image, where the identity label annotated on the new face image should be consistent with that of the original face image.
(2) Flip the face image to obtain a new face image, where the identity label annotated on the new face image should be consistent with that of the original face image.
(3) Extract the occluder image from a partially occluded face image, apply an affine transformation to the extracted occluder image, and overlay it onto the corresponding position of an unoccluded face image to obtain a new face image, where the identity label annotated on the new face image should be consistent with that of the unoccluded face image before the overlay. For example, taking a mask as the occluder, a mask image is extracted from a mask-wearing face image, affine-transformed, and overlaid onto the lower half of the face in an unmasked face image to construct a new face image. The purpose of the affine transformation here is to adapt the extracted occluder image to the unoccluded face image.
(4) Adopt a GAN (Generative Adversarial Network) to learn the features of partially occluded face images and unoccluded face images, and replace the features of the corresponding region in an unoccluded face image with the features of the occluded region in a partially occluded face image, so as to obtain a new face image, where the identity label annotated on the new face image should be consistent with that of the unoccluded face image before the replacement. For example, taking a mask as the occluder, the features of the mask region in a mask-wearing face image are used to replace the features at the corresponding position in an unmasked face image to construct a new face image.
When a partially occluded face image and an unoccluded face image are annotated with the same identity label (that is, both are face images of the same person), the image of the region corresponding to the occluded position in the partially occluded face image is cropped from the unoccluded face image, and the cropped image is overlaid onto the occluder in the partially occluded face image to obtain a new face image, where the identity label annotated on the new face image should be consistent with that of the partially occluded face image before the overlay. For example, taking a mask as the occluder, suppose the sample data set contains both a face image of user A wearing a mask and a face image of user A without a mask; then the image of the region corresponding to the mask position in the masked face image is cropped from the unmasked face image, and the cropped image is overlaid onto the mask in the masked face image to construct a new face image.
The new face images obtained in the above ways expand the number of training samples in the sample data set.
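As a concrete illustration of scheme (2) and the same-identity overlay above, the following Python sketch operates on face images represented as NumPy arrays; the array format, the function names, and the (top, bottom, left, right) occluder box are assumptions made for illustration, not part of the patent.

```python
import numpy as np

def flip_augment(image: np.ndarray) -> np.ndarray:
    """Scheme (2): horizontally flip a face image; the new sample keeps
    the identity label of the original image."""
    return image[:, ::-1].copy()

def same_identity_overlay(occluded_img: np.ndarray,
                          unoccluded_img: np.ndarray,
                          occluder_box: tuple) -> np.ndarray:
    """Same-identity overlay: crop, from the unoccluded image of the same
    person, the region corresponding to the occluder position, and paste it
    over the occluder in the occluded image, yielding a new face image with
    the same identity label. occluder_box = (top, bottom, left, right) is an
    assumed coordinate format."""
    top, bottom, left, right = occluder_box
    new_img = occluded_img.copy()
    new_img[top:bottom, left:right] = unoccluded_img[top:bottom, left:right]
    return new_img
```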
S3: Perform image cropping on the sample data set after the data enhancement processing, so as to randomly crop a local region of each face image in the sample data set according to a preset cropping rule. For example, taking a mask as the occluder, the cropping rule can be configured to crop randomly according to preset probabilities, where the probability of cropping the upper half of the face is set to M% and the probability of cropping the lower half of the face is set to N%; to make the trained model pay more attention to the region outside the mask (that is, the upper half of the face), M should be set greater than N. When the occluder is sunglasses, a microphone, or the like, the rule is configured along similar lines: the face image is first divided into easily occluded regions and not-easily occluded regions according to the nature of the occluder, the cropping probability of the easily occluded regions is then set lower than that of the not-easily occluded regions, and finally a local region of each face image in the sample data set is randomly cropped according to the configured probabilities. The crop size can be determined experimentally.
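A minimal sketch of the probabilistic cropping rule for the mask case follows, assuming the upper and lower halves of the face are the only two candidate regions (so that M% + N% = 100%) and that a simple horizontal split approximates them; the crop geometry would in practice be tuned experimentally, as noted above.

```python
import numpy as np

rng = np.random.default_rng()

def random_local_crop(image: np.ndarray, p_upper: float = 0.7) -> np.ndarray:
    """Randomly crop a local region of a face image: with probability p_upper
    (M%) take the upper half of the face, otherwise (N%) the lower half.
    Choosing p_upper > 0.5 biases training toward the region a mask does not
    cover, matching the requirement above that M > N."""
    h = image.shape[0]
    if rng.random() < p_upper:
        return image[: h // 2].copy()   # upper half: region outside the mask
    return image[h // 2 :].copy()       # lower half: region a mask would cover
```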
S4: Train the pre-established feature extraction model on the sample data set after the image cropping to obtain the target feature extraction model. In this embodiment, the feature extraction model is preferably a CNN (Convolutional Neural Network) model. The specific training process of this step is as follows:
S41: Input the cropped local region of the face image into the feature extraction model for processing to obtain local features of the face image.
S42: Input the local features of the face image into a pre-trained classifier to obtain an identity recognition result for the face image.
S43: Obtain a first loss function based on the identity recognition result and the identity label corresponding to the face image. In this embodiment, the first loss function may be a cross-entropy loss function.
S44: Iteratively train the feature extraction model according to the first loss function until the first loss function satisfies a predetermined condition, for example, converges to a minimum.
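Steps S41-S44 amount to a standard supervised training loop. The sketch below uses PyTorch purely as an illustration; the patent only specifies that the feature extractor is preferably a CNN, that the classifier is pre-trained, and that the first loss is a cross-entropy loss, so the network shapes, the number of identities, and the optimizer settings are all assumptions.

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Minimal CNN stand-in for the pre-established feature extraction model."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.conv(x).flatten(1))

extractor = FeatureExtractor()
classifier = nn.Linear(128, 1000)   # identity classifier; 1000 identities is an assumed size
criterion = nn.CrossEntropyLoss()   # first loss (S43)
# Only the extractor is updated here, since the classifier is described as pre-trained.
optimizer = torch.optim.SGD(extractor.parameters(), lr=0.01)

def train_step(crops: torch.Tensor, id_labels: torch.Tensor) -> float:
    feats = extractor(crops)             # S41: local features of the cropped regions
    logits = classifier(feats)           # S42: identity recognition result
    loss = criterion(logits, id_labels)  # S43: first loss vs. the identity labels
    optimizer.zero_grad()
    loss.backward()                      # S44: iterate until the loss converges
    optimizer.step()
    return loss.item()
```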
Preferably, in this embodiment a binary classification network is further provided at the output of the feature extraction model. After the local features of the face image are obtained in step S41, the method of this embodiment may further include: inputting the local features of the face image into the preset binary classification network to obtain an occlusion determination result indicating whether the face image is occluded; and obtaining a second loss function based on the occlusion determination result of the face image and the actual occlusion situation. In this case, step S44 includes: iteratively training the feature extraction model according to the first loss function and the second loss function. Specifically, the first loss function and the second loss function may be combined in a weighted sum (with the weights set as required) to obtain a final loss function, and the feature extraction model is then iteratively trained according to the final loss function until the final loss function satisfies a predetermined condition, for example, converges to a minimum.
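The combined objective can be sketched as a weighted sum of the identity loss and the occlusion loss from the binary classification network; the weights below are placeholders (the patent leaves them to be set as required), and occlusion_head is an assumed stand-in for that network.

```python
import torch
import torch.nn as nn

occlusion_head = nn.Linear(128, 2)   # binary (occluded / unoccluded) network on the features
ce = nn.CrossEntropyLoss()

def final_loss(feats: torch.Tensor,
               id_logits: torch.Tensor,
               id_labels: torch.Tensor,
               occ_labels: torch.Tensor,
               w1: float = 1.0, w2: float = 0.5) -> torch.Tensor:
    """Final loss = w1 * first (identity) loss + w2 * second (occlusion) loss;
    w1 and w2 are set as required, and 1.0 / 0.5 are only example values."""
    first_loss = ce(id_logits, id_labels)                 # identity result vs. identity label
    second_loss = ce(occlusion_head(feats), occ_labels)   # predicted vs. actual occlusion
    return w1 * first_loss + w2 * second_loss
```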
Preferably, before step S4 is performed, the method of this embodiment may further include pre-training the feature extraction model. For example, a binary classification network is first used to process the sample data set after the image cropping, so as to classify the face images in the sample data set into partially occluded face images and unoccluded face images, and the partially occluded face images or the unoccluded face images are then used to pre-train the feature extraction model. When step S4 is subsequently executed, the initial weights of the feature extraction model can be set to the weights obtained through the pre-training.
In addition, to enhance the generalization of the model, the method of this embodiment may further include: during training, randomly deleting some of the features of the face image according to a preset deletion rule.
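One natural reading of this random feature deletion is dropout-style masking of the extracted features (random erasing of image patches would be another reading); a sketch under the former reading, with the deletion probability as an assumed parameter:

```python
import torch

def randomly_delete_features(feats: torch.Tensor, p_drop: float = 0.1) -> torch.Tensor:
    """During training, delete (zero out) a random subset of feature components
    according to a preset rule; here the rule is independent Bernoulli deletion
    with probability p_drop, which is an illustrative choice."""
    keep_mask = (torch.rand_like(feats) >= p_drop).float()
    return feats * keep_mask
```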
It can be seen that, on the one hand, this embodiment enhances the sample data set, so that more training images are available to train the feature extraction model and the trained model is more accurate; on the other hand, a local region of each face image in the sample data set is randomly cropped according to the preset cropping rule and used to train the feature extraction model, so that the cropping rule can be configured to make the trained model pay more attention to the parts not covered by an occluder, and the features the model extracts from a partially occluded face image and from an unoccluded one are as similar as possible. Therefore, when the feature extraction model is applied to partially occluded face recognition, the recognition accuracy can be improved.
Embodiment 2
To achieve the above purpose, this application provides a face recognition method. As shown in FIG. 2, the method includes the following steps:
S5: Acquire a target face image to be recognized.
S6: Use the target feature extraction model trained in Embodiment 1 to process the target face image to obtain features corresponding to the target face image.
S7: Compare the features corresponding to the target face image with the features of images stored in a preset image library.
S8: Obtain an identity recognition result for the target face image according to the comparison result. Specifically, the identity label corresponding to the stored image in the preset image library whose features best match those of the target face image is taken as the identity recognition result of the target face image.
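Steps S7 and S8 together amount to nearest-neighbor matching in feature space. The patent does not fix the matching metric; the sketch below uses cosine similarity, one common choice, with illustrative names throughout.

```python
import numpy as np

def identify(target_feat: np.ndarray,
             library_feats: np.ndarray,
             library_labels: list) -> str:
    """S7 + S8: compare the target face's features against every stored image's
    features and return the identity label of the best-matching stored image."""
    a = target_feat / np.linalg.norm(target_feat)
    b = library_feats / np.linalg.norm(library_feats, axis=1, keepdims=True)
    scores = b @ a                       # cosine similarity to each stored image
    return library_labels[int(np.argmax(scores))]
```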
Since the target feature extraction model obtained in Embodiment 1 is used for recognition, this embodiment can improve the accuracy of recognizing partially occluded face images. In addition, this embodiment can automatically add recognized target face images to the image library, improving recognition precision through continuous iteration.
Preferably, the preset image library may include a first image library for storing face images partially occluded by an occluder, and a second image library for storing unoccluded face images. Before step S7 is performed, this embodiment may further include: inputting the features corresponding to the target face image into the preset binary classification network to obtain an occlusion determination result for the target face image. When the target face image is a partially occluded face image, step S7 compares the features corresponding to the target face image with the features of images stored in the first image library; when the target face image is an unoccluded face image, step S7 compares the features corresponding to the target face image with the features of images stored in the second image library. Recognition accuracy can thereby be further improved.
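The dual-library variant simply routes the query by the binary occlusion decision before matching; in the sketch below, is_occluded stands in for the preset binary classification network and identify() refers to the matching sketch above (all names are illustrative).

```python
def identify_with_routing(target_feat,
                          is_occluded,          # callable: features -> bool
                          occluded_library,     # (feats, labels) of the first image library
                          unoccluded_library):  # (feats, labels) of the second image library
    """Route the query to the first (occluded) or second (unoccluded) image
    library according to the occlusion decision, then match within it."""
    feats, labels = occluded_library if is_occluded(target_feat) else unoccluded_library
    return identify(target_feat, feats, labels)
```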
Embodiment 3
This embodiment provides a feature extraction model training apparatus. As shown in FIG. 3, the apparatus 10 includes a sample acquisition module 11, a sample enhancement module 12, an image cropping module 13, and a model training module 14. Each module is described in detail below:
The sample acquisition module 11 is configured to acquire a sample data set, where the sample data set includes a plurality of face images annotated with corresponding identity labels, and the plurality of face images include face images partially occluded by an occluder and unoccluded face images.
In this embodiment, the occluder may be any one of a mask, a microphone, sunglasses, and the like, which is not specifically limited here.
The sample enhancement module 12 is configured to perform data enhancement processing on the sample data set.
In this embodiment, the data enhancement processing can be implemented in any one or more of the following ways:
(1) Change attribute parameters of the face image, such as size, pixels, grayscale, saturation, and hue, to obtain a new face image, where the identity label annotated on the new face image should be consistent with that of the original face image.
(2) Flip the face image to obtain a new face image, where the identity label annotated on the new face image should be consistent with that of the original face image.
(3) Extract the occluder image from a partially occluded face image, apply an affine transformation to the extracted occluder image, and overlay it onto the corresponding position of an unoccluded face image to obtain a new face image, where the identity label annotated on the new face image should be consistent with that of the unoccluded face image before the overlay. For example, taking a mask as the occluder, a mask image is extracted from a mask-wearing face image, affine-transformed, and overlaid onto the lower half of the face in an unmasked face image to construct a new face image. The purpose of the affine transformation here is to adapt the extracted occluder image to the unoccluded face image.
(4) Adopt a GAN (Generative Adversarial Network) to learn the features of partially occluded face images and unoccluded face images, and replace the features of the corresponding region in an unoccluded face image with the features of the occluded region in a partially occluded face image, so as to obtain a new face image, where the identity label annotated on the new face image should be consistent with that of the unoccluded face image before the replacement. For example, taking a mask as the occluder, the features of the mask region in a mask-wearing face image are used to replace the features at the corresponding position in an unmasked face image to construct a new face image.
When a partially occluded face image and an unoccluded face image are annotated with the same identity label (that is, both are face images of the same person), the image of the region corresponding to the occluded position in the partially occluded face image is cropped from the unoccluded face image, and the cropped image is overlaid onto the occluder in the partially occluded face image to obtain a new face image, where the identity label annotated on the new face image should be consistent with that of the partially occluded face image before the overlay. For example, taking a mask as the occluder, suppose the sample data set contains both a face image of user A wearing a mask and a face image of user A without a mask; then the image of the region corresponding to the mask position in the masked face image is cropped from the unmasked face image, and the cropped image is overlaid onto the mask in the masked face image to construct a new face image.
The new face images obtained in the above ways expand the number of training samples in the sample data set.
The image cropping module 13 is configured to perform image cropping on the sample data set after the data enhancement processing, so as to randomly crop a local region of each face image in the sample data set according to a preset cropping rule. For example, taking a mask as the occluder, the cropping rule can be configured to crop randomly according to preset probabilities, where the probability of cropping the upper half of the face is set to M% and the probability of cropping the lower half of the face is set to N%; to make the trained model pay more attention to the region outside the mask (that is, the upper half of the face), M should be set greater than N. When the occluder is sunglasses, a microphone, or the like, the rule is configured along similar lines: the face image is first divided into easily occluded regions and not-easily occluded regions according to the nature of the occluder, the cropping probability of the easily occluded regions is then set lower than that of the not-easily occluded regions, and finally a local region of each face image in the sample data set is randomly cropped according to the configured probabilities. The crop size can be determined experimentally.
The model training module 14 is configured to train the pre-established feature extraction model on the sample data set after the image cropping to obtain the target feature extraction model. In this embodiment, the feature extraction model is preferably a CNN (Convolutional Neural Network) model.
The model training module of this embodiment may specifically include:
a processing unit, configured to input the cropped local region of the face image into the feature extraction model for processing to obtain local features of the face image;
a classification unit, configured to input the local features of the face image into a pre-trained classifier to obtain an identity recognition result for the face image;
a first loss function acquisition unit, configured to obtain a first loss function based on the identity recognition result and the identity label corresponding to the face image, where in this embodiment the first loss function may be a cross-entropy loss function; and
an iterative training unit, configured to iteratively train the feature extraction model according to the first loss function until the first loss function satisfies a predetermined condition, for example, converges to a minimum.
Preferably, in this embodiment a binary classification network is provided at the output of the feature extraction model. The model training module may further include: a binary classification unit, configured to input the local features of the face image, once obtained, into the preset binary classification network to obtain an occlusion determination result indicating whether the face image is occluded; and a second loss function acquisition unit, configured to obtain a second loss function based on the occlusion determination result of the face image and the actual occlusion situation. In this case, the iterative training unit is specifically configured to iteratively train the feature extraction model according to the first loss function and the second loss function. Specifically, the first loss function and the second loss function may be combined in a weighted sum (with the weights set as required) to obtain a final loss function, and the feature extraction model is then iteratively trained according to the final loss function until the final loss function satisfies a predetermined condition, for example, converges to a minimum.
Preferably, the apparatus of this embodiment may further include a pre-training module, configured to pre-train the feature extraction model before the model training module trains the pre-established feature extraction model on the sample data set after the image cropping. For example, a binary classification network is first used to process the sample data set after the image cropping, so as to classify the face images in the sample data set into partially occluded face images and unoccluded face images, and the partially occluded face images or the unoccluded face images are then used to pre-train the feature extraction model. When the model training module 14 subsequently trains the feature extraction model, the initial weights can be set to the weights obtained through the pre-training.
In addition, to enhance the generalization of the model, the apparatus of this embodiment may further include a feature deletion module, configured to randomly delete some of the features of the face image according to a preset deletion rule during training.
It can be seen that, on the one hand, this embodiment enhances the sample data set, so that more training images are available to train the feature extraction model and the trained model is more accurate; on the other hand, a local region of each face image in the sample data set is randomly cropped according to the preset cropping rule and used to train the feature extraction model, so that the cropping rule can be configured to make the trained model pay more attention to the parts not covered by an occluder, and the features the model extracts from a partially occluded face image and from an unoccluded one are as similar as possible. Therefore, when the feature extraction model is applied to partially occluded face recognition, the recognition accuracy can be improved.
Embodiment 4
To achieve the above purpose, this application provides a face recognition apparatus. As shown in FIG. 4, the apparatus 20 includes:
a target image acquisition module 21, configured to acquire a target face image to be recognized;
a model processing module 22, configured to use the target feature extraction model trained in Embodiment 3 to process the target face image to obtain features corresponding to the target face image;
a comparison module 23, configured to compare the features corresponding to the target face image with the features of images stored in a preset image library; and
a recognition module 24, configured to obtain an identity recognition result for the target face image according to the comparison result; specifically, the identity label corresponding to the stored image in the preset image library whose features best match those of the target face image is taken as the identity recognition result of the target face image.
Since the target feature extraction model obtained in Embodiment 3 is used for recognition, this embodiment can improve the accuracy of recognizing partially occluded face images. In addition, the apparatus of this embodiment can automatically add recognized target face images to the image library, improving recognition precision through continuous iteration.
Preferably, the preset image library may include a first image library for storing face images partially occluded by an occluder, and a second image library for storing unoccluded face images. The apparatus of this embodiment may further include an occlusion determination module, configured to input the features corresponding to the target face image into the preset binary classification network before the comparison module performs its operation, so as to obtain an occlusion determination result for the target face image. When the target face image is a partially occluded face image, the comparison module compares the features corresponding to the target face image with the features of images stored in the first image library; when the target face image is an unoccluded face image, the comparison module compares the features corresponding to the target face image with the features of images stored in the second image library. Recognition accuracy can thereby be further improved.
Embodiment 5
This embodiment provides a computer device capable of executing programs, such as a smartphone, a tablet computer, a notebook computer, a desktop computer, a rack server, a blade server, a tower server, or a cabinet server (including an independent server or a server cluster composed of multiple servers). The computer device 20 of this embodiment at least includes, but is not limited to, a memory 21 and a processor 22 that can be communicatively connected to each other through a system bus, as shown in FIG. 5. It should be pointed out that FIG. 5 only shows the computer device 20 with components 21-22, but it should be understood that not all of the illustrated components are required, and more or fewer components may be implemented instead.
In this embodiment, the memory 21 (i.e., a readable storage medium) includes a flash memory, a hard disk, a multimedia card, a card-type memory (for example, SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the memory 21 may be an internal storage unit of the computer device 20, such as a hard disk or memory of the computer device 20. In other embodiments, the memory 21 may also be an external storage device of the computer device 20, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the computer device 20. Of course, the memory 21 may also include both the internal storage unit of the computer device 20 and its external storage device. In this embodiment, the memory 21 is generally used to store the operating system and various application software installed on the computer device 20, such as the program code of the feature extraction model training apparatus 10 or the face recognition apparatus 20 of Embodiment 3 or 4. In addition, the memory 21 can also be used to temporarily store various types of data that have been output or are to be output.
In some embodiments, the processor 22 may be a central processing unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip. The processor 22 is generally used to control the overall operation of the computer device 20. In this embodiment, the processor 22 is used to run the program code stored in the memory 21 or to process data, for example, to run the feature extraction model training apparatus 10 or the face recognition apparatus 20, so as to implement the feature extraction model training method of Embodiment 1 or the face recognition method of Embodiment 2.
The feature extraction model training method includes: acquiring a sample data set, the sample data set including a plurality of face images annotated with corresponding identity labels, the plurality of face images being divided into face images partially occluded by an occluder and unoccluded face images; performing data enhancement processing on the sample data set; performing image cropping on the sample data set after the data enhancement processing, so as to randomly crop a local region of each face image in the sample data set according to a preset cropping rule; and training a pre-established feature extraction model on the sample data set after the image cropping to obtain a target feature extraction model.
The face recognition method includes: acquiring a target face image; using the target feature extraction model to process the target face image to obtain features corresponding to the target face image; comparing the features corresponding to the target face image with the features of images stored in a preset image library; and obtaining an identity recognition result for the target face image according to the comparison result.
Embodiment 6
This embodiment provides a computer-readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card-type memory (for example, SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, a server, an app store, and the like, on which a computer program is stored that implements the corresponding function when executed by a processor. The computer-readable storage medium of this embodiment is used to store the feature extraction model training apparatus 10 or the face recognition apparatus 20, and, when executed by a processor, implements the feature extraction model training method of Embodiment 1 or the face recognition method of Embodiment 2.
The feature extraction model training method includes: acquiring a sample data set, the sample data set including a plurality of face images annotated with corresponding identity labels, the plurality of face images being divided into face images partially occluded by an occluder and unoccluded face images; performing data enhancement processing on the sample data set; performing image cropping on the sample data set after the data enhancement processing, so as to randomly crop a local region of each face image in the sample data set according to a preset cropping rule; and training a pre-established feature extraction model on the sample data set after the image cropping to obtain a target feature extraction model.
The face recognition method includes: acquiring a target face image; using the target feature extraction model to process the target face image to obtain features corresponding to the target face image; comparing the features corresponding to the target face image with the features of images stored in a preset image library; and obtaining an identity recognition result for the target face image according to the comparison result.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
The above are only preferred embodiments of this application and do not thereby limit the patent scope of this application. Any equivalent structural or process transformation made using the contents of the specification and drawings of this application, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of this application.

Claims (20)

  1. A feature extraction model training method, comprising:
    acquiring a sample data set, the sample data set including a plurality of face images annotated with corresponding identity labels, the plurality of face images being divided into face images partially occluded by an occluder and unoccluded face images;
    performing data enhancement processing on the sample data set;
    performing image cropping on the sample data set after the data enhancement processing, so as to randomly crop a local region of each face image in the sample data set according to a preset cropping rule; and
    training a pre-established feature extraction model on the sample data set after the image cropping to obtain a target feature extraction model.
  2. The feature extraction model training method according to claim 1, wherein the step of performing data enhancement processing on the sample data set comprises obtaining a new face image in any one or more of the following ways:
    changing attribute parameters of the face image;
    flipping the face image;
    extracting an occluder from a partially occluded face image, applying an affine transformation to the extracted occluder, and overlaying it onto the corresponding position of an unoccluded face image;
    learning features of the face images, and replacing the features of the corresponding region in an unoccluded face image with the features of the occluded region in a partially occluded face image; and
    when a partially occluded face image and an unoccluded face image are annotated with the same identity label, cropping, from the unoccluded face image, the image corresponding to the occluded position in the partially occluded face image, and overlaying the cropped image onto the occluder in the partially occluded face image.
  3. The feature extraction model training method according to claim 1, wherein the step of training the pre-established feature extraction model on the sample data set after the image cropping comprises:
    inputting the local region of the face image into the feature extraction model for processing to obtain local features of the face image;
    inputting the local features of the face image into a pre-trained classifier to obtain an identity recognition result for the face image;
    obtaining a first loss function based on the identity recognition result and the identity label corresponding to the face image; and
    iteratively training the feature extraction model according to the first loss function.
  4. The feature extraction model training method according to claim 3, wherein, after the local features of the face image are obtained, the method further comprises:
    inputting the local features of the face image into a preset binary classification network to obtain an occlusion determination result indicating whether the face image is occluded; and
    obtaining a second loss function based on the occlusion determination result of the face image and the actual occlusion situation;
    wherein the step of iteratively training the feature extraction model according to the first loss function comprises:
    iteratively training the feature extraction model according to the first loss function and the second loss function.
  5. The feature extraction model training method according to claim 1, wherein, before the pre-established feature extraction model is trained on the sample data set after the image cropping, the method further comprises: pre-training the feature extraction model.
  6. A face recognition method, comprising:
    acquiring a target face image;
    processing the target face image by using the target feature extraction model obtained according to any one of claims 1-5 to obtain features corresponding to the target face image;
    comparing the features corresponding to the target face image with features of images stored in a preset image library; and
    obtaining an identity recognition result for the target face image according to the comparison result.
  7. A feature extraction model training apparatus, comprising:
    a sample acquisition module, configured to acquire a sample data set, the sample data set including a plurality of face images annotated with corresponding identity labels, the plurality of face images being divided into face images partially occluded by an occluder and unoccluded face images;
    a sample enhancement module, configured to perform data enhancement processing on the sample data set;
    an image cropping module, configured to perform image cropping on the sample data set after the data enhancement processing, so as to randomly crop a local region of each face image in the sample data set according to a preset cropping rule; and
    a model training module, configured to train a pre-established feature extraction model on the sample data set after the image cropping to obtain a target feature extraction model.
  8. The feature extraction model training apparatus according to claim 7, wherein the sample enhancement module obtains a new face image in any one or more of the following ways:
    changing attribute parameters of the face image;
    flipping the face image;
    extracting an occluder from a partially occluded face image, applying an affine transformation to the extracted occluder, and overlaying it onto the corresponding position of an unoccluded face image;
    learning features of the face images, and replacing the features of the corresponding region in an unoccluded face image with the features of the occluded region in a partially occluded face image; and
    when a partially occluded face image and an unoccluded face image are annotated with the same identity label, cropping, from the unoccluded face image, the image corresponding to the occluded position in the partially occluded face image, and overlaying the cropped image onto the occluder in the partially occluded face image.
  9. The feature extraction model training apparatus according to claim 7, wherein the model training module comprises:
    a processing unit, configured to input the cropped local region of the face image into the feature extraction model for processing to obtain local features of the face image;
    a classification unit, configured to input the local features of the face image into a pre-trained classifier to obtain an identity recognition result for the face image;
    a first loss function acquisition unit, configured to obtain a first loss function based on the identity recognition result and the identity label corresponding to the face image; and
    an iterative training unit, configured to iteratively train the feature extraction model according to the first loss function.
  10. The feature extraction model training apparatus according to claim 9, wherein the model training module further comprises:
    a binary classification unit, configured to, after the local features of the face image are obtained, input the local features of the face image into a preset binary classification network to obtain an occlusion determination result indicating whether the face image is occluded; and
    a second loss function acquisition unit, configured to obtain a second loss function based on the occlusion determination result of the face image and the actual occlusion situation;
    wherein the iterative training unit is specifically configured to iteratively train the feature extraction model according to the first loss function and the second loss function.
  11. The feature extraction model training apparatus according to claim 7, wherein the apparatus further comprises a pre-training module, configured to pre-train the feature extraction model before the model training module performs its operation.
  12. A face recognition apparatus, comprising:
    a target image acquisition module, configured to acquire a target face image;
    a model processing module, configured to process the target face image by using the target feature extraction model obtained according to any one of claims 7-11 to obtain features corresponding to the target face image;
    a comparison module, configured to compare the features corresponding to the target face image with features of images stored in a preset image library; and
    a recognition module, configured to obtain an identity recognition result for the target face image according to the comparison result.
  13. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the following steps of a feature extraction model training method:
    acquiring a sample data set, the sample data set including a plurality of face images annotated with corresponding identity labels, the plurality of face images being divided into face images partially occluded by an occluder and unoccluded face images;
    performing data enhancement processing on the sample data set;
    performing image cropping on the sample data set after the data enhancement processing, so as to randomly crop a local region of each face image in the sample data set according to a preset cropping rule; and
    training a pre-established feature extraction model on the sample data set after the image cropping to obtain a target feature extraction model.
  14. The computer device according to claim 13, wherein the step of performing data enhancement processing on the sample data set comprises obtaining a new face image in any one or more of the following ways:
    changing attribute parameters of the face image;
    flipping the face image;
    extracting an occluder from a partially occluded face image, applying an affine transformation to the extracted occluder, and overlaying it onto the corresponding position of an unoccluded face image;
    learning features of the face images, and replacing the features of the corresponding region in an unoccluded face image with the features of the occluded region in a partially occluded face image; and
    when a partially occluded face image and an unoccluded face image are annotated with the same identity label, cropping, from the unoccluded face image, the image corresponding to the occluded position in the partially occluded face image, and overlaying the cropped image onto the occluder in the partially occluded face image.
  15. The computer device according to claim 13, wherein the step of training the pre-established feature extraction model on the sample data set after the image cropping comprises:
    inputting the local region of the face image into the feature extraction model for processing to obtain local features of the face image;
    inputting the local features of the face image into a pre-trained classifier to obtain an identity recognition result for the face image;
    obtaining a first loss function based on the identity recognition result and the identity label corresponding to the face image; and
    iteratively training the feature extraction model according to the first loss function.
  16. The computer device according to claim 15, wherein, after the local features of the face image are obtained, the method further comprises:
    inputting the local features of the face image into a preset binary classification network to obtain an occlusion determination result indicating whether the face image is occluded; and
    obtaining a second loss function based on the occlusion determination result of the face image and the actual occlusion situation;
    wherein the step of iteratively training the feature extraction model according to the first loss function comprises:
    iteratively training the feature extraction model according to the first loss function and the second loss function.
  17. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the following steps of a feature extraction model training method:
    acquiring a sample data set, the sample data set including a plurality of face images annotated with corresponding identity labels, the plurality of face images being divided into face images partially occluded by an occluder and unoccluded face images;
    performing data enhancement processing on the sample data set;
    performing image cropping on the sample data set after the data enhancement processing, so as to randomly crop a local region of each face image in the sample data set according to a preset cropping rule; and
    training a pre-established feature extraction model on the sample data set after the image cropping to obtain a target feature extraction model.
  18. The computer-readable storage medium according to claim 17, wherein the step of performing data enhancement processing on the sample data set comprises obtaining a new face image in any one or more of the following ways:
    changing attribute parameters of the face image;
    flipping the face image;
    extracting an occluder from a partially occluded face image, applying an affine transformation to the extracted occluder, and overlaying it onto the corresponding position of an unoccluded face image;
    learning features of the face images, and replacing the features of the corresponding region in an unoccluded face image with the features of the occluded region in a partially occluded face image; and
    when a partially occluded face image and an unoccluded face image are annotated with the same identity label, cropping, from the unoccluded face image, the image corresponding to the occluded position in the partially occluded face image, and overlaying the cropped image onto the occluder in the partially occluded face image.
  19. The computer-readable storage medium according to claim 17, wherein the step of training the pre-established feature extraction model on the sample data set after the image cropping comprises:
    inputting the local region of the face image into the feature extraction model for processing to obtain local features of the face image;
    inputting the local features of the face image into a pre-trained classifier to obtain an identity recognition result for the face image;
    obtaining a first loss function based on the identity recognition result and the identity label corresponding to the face image; and
    iteratively training the feature extraction model according to the first loss function.
  20. The computer-readable storage medium according to claim 19, wherein, after the local features of the face image are obtained, the method further comprises:
    inputting the local features of the face image into a preset binary classification network to obtain an occlusion determination result indicating whether the face image is occluded; and
    obtaining a second loss function based on the occlusion determination result of the face image and the actual occlusion situation;
    wherein the step of iteratively training the feature extraction model according to the first loss function comprises:
    iteratively training the feature extraction model according to the first loss function and the second loss function.
PCT/CN2020/125033 2020-09-01 2020-10-30 Feature extraction model training method, face recognition method, apparatus, device, and medium WO2021174880A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010906610.3A CN112052781A (zh) 2020-09-01 2020-09-01 Feature extraction model training method, face recognition method, apparatus, device, and medium
CN202010906610.3 2020-09-01

Publications (1)

Publication Number Publication Date
WO2021174880A1 (zh)

Family

ID=73607938

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/125033 WO2021174880A1 (zh) 2020-09-01 2020-10-30 Feature extraction model training method, face recognition method, apparatus, device, and medium

Country Status (2)

Country Link
CN (1) CN112052781A (zh)
WO (1) WO2021174880A1 (zh)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113963183A (zh) * 2021-12-22 2022-01-21 北京的卢深视科技有限公司 Model training and face recognition method, electronic device, and storage medium
CN113963374A (zh) * 2021-10-19 2022-01-21 中国石油大学(华东) Pedestrian attribute recognition method based on multi-level features assisted by identity information
CN114220143A (zh) * 2021-11-26 2022-03-22 华南理工大学 Face recognition method for faces wearing masks
CN114299595A (zh) * 2022-01-29 2022-04-08 北京百度网讯科技有限公司 Face recognition method, apparatus, device, storage medium, and program product
CN114581984A (zh) * 2022-03-07 2022-06-03 桂林理工大学 Masked face recognition algorithm based on a low-rank attention mechanism
CN114972930A (zh) * 2022-08-02 2022-08-30 四川大学 Skin lesion annotation method and system for facial images, computer device, and storage medium
CN115063863A (zh) * 2022-06-27 2022-09-16 中国平安人寿保险股份有限公司 Face recognition method, apparatus, computer device, and storage medium
CN115457624A (zh) * 2022-08-18 2022-12-09 中科天网(广东)科技有限公司 Masked face recognition method, apparatus, device, and medium with cross-fusion of local and global facial features
CN115527254A (zh) * 2022-09-21 2022-12-27 北京的卢深视科技有限公司 Face recognition and model training method, apparatus, electronic device, and storage medium
CN117576766A (zh) * 2024-01-16 2024-02-20 杭州魔点科技有限公司 Cross-spatiotemporal-compatibility unsupervised self-learning face recognition method and system

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112597913A (zh) * 2020-12-26 2021-04-02 中国农业银行股份有限公司 Face annotation method and apparatus
CN113012176B (zh) * 2021-03-17 2023-12-15 阿波罗智联(北京)科技有限公司 Sample image processing method and apparatus, electronic device, and storage medium
CN113255617B (zh) * 2021-07-07 2021-09-21 腾讯科技(深圳)有限公司 Image recognition method and apparatus, electronic device, and computer-readable storage medium
CN113537151B (zh) * 2021-08-12 2023-10-17 北京达佳互联信息技术有限公司 Training method and apparatus for an image processing model, and image processing method and apparatus
CN113837015A (zh) * 2021-08-31 2021-12-24 艾普工华科技(武汉)有限公司 Face detection method and system based on a feature pyramid
CN113963426B (zh) * 2021-12-22 2022-08-26 合肥的卢深视科技有限公司 Model training and masked face recognition method, electronic device, and storage medium
CN115810214B (zh) * 2023-02-06 2023-05-12 广州市森锐科技股份有限公司 AI face recognition based verification management method, system, device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107609481A (zh) * 2017-08-14 2018-01-19 百度在线网络技术(北京)有限公司 Method, apparatus, and computer storage medium for generating training data for face recognition
US20190130218A1 (en) * 2017-11-01 2019-05-02 Salesforce.Com, Inc. Training a neural network using augmented training datasets
CN109886167A (zh) * 2019-02-01 2019-06-14 中国科学院信息工程研究所 Occluded face recognition method and apparatus
CN110334615A (zh) * 2019-06-20 2019-10-15 湖北亮诚光电科技有限公司 Method for recognizing occluded faces
CN111353411A (zh) * 2020-02-25 2020-06-30 四川翼飞视科技有限公司 Occluded face recognition method based on a joint loss function

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB201711353D0 (en) * 2017-07-14 2017-08-30 Idscan Biometrics Ltd Improvements relating to face recognition


Also Published As

Publication number Publication date
CN112052781A (zh) 2020-12-08


Legal Events

121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 20923467; Country of ref document: EP; Kind code of ref document: A1)

NENP Non-entry into the national phase (Ref country code: DE)

122 Ep: PCT application non-entry in European phase (Ref document number: 20923467; Country of ref document: EP; Kind code of ref document: A1)