CN116229556A - Face recognition method and device, embedded device, and computer-readable storage medium - Google Patents

Face recognition method and device, embedded device, and computer-readable storage medium

Info

Publication number
CN116229556A
Authority
CN
China
Prior art keywords
face
feature vector
preset
image
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310265736.0A
Other languages
Chinese (zh)
Inventor
霍磊
王连忠
郑哲
聂玉虎
崔文朋
龚向锋
刘彬
孙健
孙天奎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Smartchip Microelectronics Technology Co Ltd
Electric Power Research Institute of State Grid Jiangsu Electric Power Co Ltd
Original Assignee
Beijing Smartchip Microelectronics Technology Co Ltd
Electric Power Research Institute of State Grid Jiangsu Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Smartchip Microelectronics Technology Co Ltd and Electric Power Research Institute of State Grid Jiangsu Electric Power Co Ltd
Priority to CN202310265736.0A
Publication of CN116229556A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161: Detection; Localisation; Normalisation
    • G06V 40/168: Feature extraction; Face representation
    • G06V 40/172: Classification, e.g. identification
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention discloses a face recognition method, a face recognition device, an embedded device, and a non-volatile computer-readable storage medium. The face recognition method includes: detecting face information in a captured image to generate a face image; detecting the quality of the face image based on a preset face screening model; and calculating, based on a reconfigurable computing unit, the similarity between the face feature vector of a face image whose quality meets a preset condition and preset face feature vectors, so as to perform face recognition. With this method, device, embedded device, and storage medium, low-quality face images can be discarded, reducing the number of face images that need to be recognized and improving recognition efficiency. In addition, the reconfigurable computing unit reduces the resource occupancy and time consumed by the embedded device during face recognition, so that the embedded device can still meet the running requirements of its other tasks.

Description

Face recognition method and device, embedded device, and computer-readable storage medium

Technical Field

The present application relates to the technical field of face recognition, and more specifically to a face recognition method, a face recognition device, an embedded device, and a non-volatile computer-readable storage medium.

Background Art

In recent years, with the rapid improvement of computer performance and the continuous refinement of deep learning methods, major breakthroughs have been made in the fields of pattern recognition and artificial intelligence. Deep learning has achieved excellent results on many pattern recognition tasks, and face recognition is no exception. However, face recognition requires first extracting face features and then searching a face feature database; when the amount of face feature data is too large, recognition efficiency becomes low.

Summary of the Invention

Embodiments of the present application provide a face recognition method, a face recognition device, an embedded device, and a non-volatile computer-readable storage medium.

The face recognition method of the embodiments of the present application includes: detecting face information in a captured image to generate a face image; detecting the quality of the face image based on a preset face screening model; and calculating, based on a reconfigurable computing unit, the similarity between the face feature vector of a face image whose quality meets a preset condition and preset face feature vectors, so as to perform face recognition.

The face recognition device of the embodiments of the present application includes a generation module, a first detection module, and a recognition module. The generation module is configured to detect face information in a captured image to generate a face image. The first detection module is configured to detect the quality of the face image based on a preset face screening model. The recognition module is configured to calculate, based on a reconfigurable computing unit, the similarity between the face feature vector of a face image whose quality meets a preset condition and preset face feature vectors, so as to perform face recognition.

The embedded device of the embodiments of the present application includes a processor. The processor is configured to detect face information in a captured image to generate a face image; detect the quality of the face image based on a preset face screening model; and calculate, based on a reconfigurable computing unit, the similarity between the face feature vector of a face image whose quality meets a preset condition and preset face feature vectors, so as to perform face recognition.

The non-volatile computer-readable storage medium of the embodiments of the present application contains a computer program which, when executed by one or more processors, causes the processors to perform the following face recognition method: detecting face information in a captured image to generate a face image; detecting the quality of the face image based on a preset face screening model; and calculating, based on a reconfigurable computing unit, the similarity between the face feature vector of a face image whose quality meets a preset condition and preset face feature vectors, so as to perform face recognition.

In the face recognition method, face recognition device, embedded device, and non-volatile computer-readable storage medium of the embodiments of the present application, before face recognition is performed on a face image, the quality of the face image is detected based on the preset face screening model, and face recognition based on the reconfigurable computing unit is performed only when the quality meets the preset condition. In other words, low-quality face images are discarded before recognition, which reduces the number of face images that need to be recognized and improves recognition efficiency. Moreover, the reconfigurable computing unit reduces the workload of the processor, thereby lowering the resource occupancy and time consumed by the embedded device during face recognition, so that the embedded device can still meet the running requirements of its other tasks.

Additional aspects and advantages of the embodiments of the present application will be set forth in part in the following description, and in part will become apparent from the description or may be learned by practice of the embodiments of the present application.

Brief Description of the Drawings

The above and/or additional aspects and advantages of the present application will become apparent and easy to understand from the following description of the embodiments in conjunction with the accompanying drawings, in which:

FIG. 1 is a schematic flowchart of a face recognition method according to some embodiments of the present application;

FIG. 2 is a schematic diagram of a face recognition device according to some embodiments of the present application;

FIG. 3 is a schematic plan view of an embedded device according to some embodiments of the present application;

FIG. 4 is a schematic flowchart of a face recognition method according to some embodiments of the present application;

FIG. 5 is a schematic flowchart of a face recognition method according to some embodiments of the present application;

FIG. 6 is a schematic flowchart of a face recognition method according to some embodiments of the present application;

FIG. 7 is a schematic diagram of a scene of a face recognition method according to some embodiments of the present application;

FIG. 8 is a schematic flowchart of a face recognition method according to some embodiments of the present application;

FIG. 9 is a schematic flowchart of a face recognition method according to some embodiments of the present application;

FIG. 10 is a schematic flowchart of a face recognition method according to some embodiments of the present application;

FIG. 11 is a schematic flowchart of a face recognition method according to some embodiments of the present application;

FIG. 12 is a schematic flowchart of a face recognition method according to some embodiments of the present application;

FIG. 13 is a schematic flowchart of a face recognition method according to some embodiments of the present application;

FIG. 14 is a schematic diagram of the connection between a non-volatile computer-readable storage medium and a processor according to some embodiments of the present application.

Detailed Description of the Embodiments

Embodiments of the present application are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals denote the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary, are only intended to explain the embodiments of the present application, and should not be construed as limiting the embodiments of the present application.

Referring to FIG. 1, an embodiment of the present application provides a face recognition method. The face recognition method includes the following steps:

01: detecting face information in a captured image to generate a face image;

03: detecting the quality of the face image based on a preset face screening model; and

05: calculating, based on a reconfigurable computing unit, the similarity between the face feature vector of a face image whose quality meets a preset condition and preset face feature vectors, so as to perform face recognition.

Referring to FIG. 2, an embodiment of the present application provides a face recognition device 10. The face recognition device 10 includes a generation module 11, a first detection module 12, and a recognition module 13. The face recognition method of the embodiments of the present application can be applied to the face recognition device 10. The generation module 11, the first detection module 12, and the recognition module 13 are configured to execute step 01, step 03, and step 05, respectively. That is, the generation module 11 is configured to detect face information in a captured image to generate a face image; the first detection module 12 is configured to detect the quality of the face image based on a preset face screening model; and the recognition module 13 is configured to calculate, based on a reconfigurable computing unit, the similarity between the face feature vector of a face image whose quality meets a preset condition and preset face feature vectors, so as to perform face recognition.

Referring to FIG. 3, an embodiment of the present application further provides an embedded device 100. The face recognition method of the embodiments of the present application can be applied to the embedded device 100. The embedded device 100 includes a processor 20. The processor 20 is configured to execute step 01, step 03, and step 05, that is, to detect face information in a captured image to generate a face image; detect the quality of the face image based on a preset face screening model; and calculate, based on a reconfigurable computing unit, the similarity between the face feature vector of a face image whose quality meets a preset condition and preset face feature vectors, so as to perform face recognition.

The embedded device 100 further includes a housing 30 and a camera 40. The housing 30 can be used to mount functional modules of the embedded device 100 such as a display device, an imaging device, a power supply device, and a communication device, so that the housing 30 protects the functional modules against dust, drops, and water. The camera 40 can be used to capture images. The embedded device 100 may be a mobile phone, a digital camera, a smart watch, a head-mounted display device, a game console, a robot, or the like. As shown in FIG. 3, the embodiments of the present application are described by taking a mobile phone as an example of the embedded device 100; it can be understood that the specific form of the embedded device 100 is not limited to a mobile phone.

Specifically, after the camera 40 captures an image, the processor 20 can detect face information in the captured image to generate a face image. The face information may include the position of the face in the image and the positions of key points of the face (such as the eyes, mouth, nose, and ears). The processor 20 may use a face detection algorithm, such as a FaceBoxes or RetinaFace face detection network, to obtain the face information in the image, and can then generate the face image from the obtained face information.
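
The following Python sketch illustrates this detect-then-crop step. It is a minimal sketch under stated assumptions: the `detector.detect` interface and its returned `box`/`landmarks` fields are hypothetical stand-ins for a FaceBoxes- or RetinaFace-style network, not something defined by the disclosure.

```python
def generate_face_images(frame, detector):
    """Run a face detector on a captured frame (a NumPy image array) and crop
    one face image per detected face, keeping the key-point positions."""
    faces = detector.detect(frame)  # assumed to return dicts with 'box' and 'landmarks'
    face_images = []
    for face in faces:
        x1, y1, x2, y2 = face["box"]           # face position in the captured image
        face_images.append({
            "image": frame[y1:y2, x1:x2],      # cropped face image
            "landmarks": face["landmarks"],    # key points: eyes, nose, mouth, ears
        })
    return face_images
```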

Next, the processor 20 can detect the quality of the face image based on the preset face screening model. The face screening model may use a 16-layer convolutional neural network, which may include convolutional layers, activation layers, batch normalization (BN) layers, and a loss function layer. The loss function layer uses a robust regression loss function (the Huber regression loss).

The preset face screening model is a screening model trained in advance, and its training samples may include both high-quality face samples and low-quality face samples.

More specifically, a high-quality face sample is one whose facial feature points are stable and easy to recognize, while a low-quality face sample is one whose facial feature points are difficult to recognize. Therefore, when the processor 20 detects the quality of a face image with the preset face screening model, the face image is input into the preset face screening model, and the processor 20 obtains whether the quality of the face image is good or poor.

For example, when the feature points of the face image are hard to recognize, the preset face screening model outputs that the quality of the face image is low. Conversely, when the feature points of the face image are easy to recognize, the preset face screening model outputs that the quality of the face image is high.

In some embodiments, the preset face screening model may also assign different scores to training samples of different quality; that is, the model can score an image according to the number of feature points it recognizes. It can be understood that the more feature points are recognized, the higher the score of the corresponding face image.
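
As a minimal sketch of this scoring idea (not the trained 16-layer screening network itself), the snippet below maps the number of reliably recognised key points to a 0-100 quality score and gates it against a preset score; the 0.5 confidence cut-off and the 80-point default are assumptions for illustration.

```python
def score_face_quality(landmark_confidences, scale=100.0):
    """Toy stand-in for the preset face screening model: the more key points
    that are reliably recognised, the higher the quality score."""
    if not landmark_confidences:
        return 0.0
    recognised = sum(1 for c in landmark_confidences if c >= 0.5)  # assumed cut-off
    return scale * recognised / len(landmark_confidences)


def meets_preset_condition(score, preset_score=80.0):
    """Step 03 gate: only face images whose quality score reaches the preset
    condition (e.g. 80 or 90 on a 100-point scale) proceed to recognition."""
    return score >= preset_score
```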

In this way, when the quality of the face image meets the preset condition, the processor 20 performs face recognition based on the face image.

In one embodiment, the preset condition may simply distinguish high quality from low quality; in that case, the processor 20 performs face recognition based on the face image only when its quality is high.

In another embodiment, the preset condition may be a specific score, such as 80 or 90 on a 100-point scale. When the processor 20 obtains the quality score of the face image through the preset face screening model, it compares the score with the preset condition and performs recognition based on the face image only when the quality score is greater than or equal to the preset score.

It can be understood that the processor 20 performs face recognition based on a face image only when its quality meets the preset condition. Low-quality face images are therefore not recognized; filtering them out reduces the amount of recognition work, and performing face recognition only on high-quality face images also improves the recognition rate.

More specifically, when the processor 20 performs face recognition based on the face image, the processor 20 can extract, based on the reconfigurable computing unit, the face feature vector of the face image whose quality meets the preset condition.

Next, the processor 20 can input the preset face feature vectors in the preset face information database into the reconfigurable computing unit for computation. The reconfigurable computing unit is a Reconfigurable Computing Unit (RCU) device. Having the RCU device compute the similarity between the face feature vector to be tested and the pre-stored feature vectors reduces the workload of the processor 20, thereby lowering the resource occupancy and time consumed by the embedded device 100 during face recognition, so that the embedded device 100 can still meet the running requirements of its other tasks.

In the face recognition method, face recognition device 10, and embedded device 100 of the embodiments of the present application, before face recognition is performed on a face image, its quality is detected based on the preset face screening model, and face recognition based on the reconfigurable computing unit is performed only when the quality meets the preset condition. In other words, low-quality face images are discarded before recognition, which reduces the number of face images that need to be recognized and improves recognition efficiency. In addition, the reconfigurable computing unit reduces the workload of the processor 20, thereby lowering the resource occupancy and time consumed by the embedded device 100 during face recognition, so that the embedded device 100 can still meet the running requirements of its other tasks.

Referring to FIG. 2, FIG. 3, and FIG. 4, in some embodiments, step 01 (detecting face information in a captured image to generate a face image) includes the following steps:

011: detecting the position of the face in the captured image based on a preset face detection model; and

012: generating the face image according to the face position.

In some embodiments, the generation module 11 is configured to execute steps 011 and 012, that is, to detect the position of the face in the captured image based on the preset face detection model, and to generate the face image according to the face position.

In some embodiments, the processor 20 is configured to execute steps 011 and 012, that is, to detect the position of the face in the captured image based on the preset face detection model, and to generate the face image according to the face position.

Specifically, the face information includes the face position of the target face, that is, the position of the face in the image captured by the camera 40. When the processor 20 detects the face information in the captured image to generate the face image, the processor 20 may detect the face position in the captured image based on the preset face detection model. As noted above, the preset face detection model may be a FaceBoxes face detection network, a RetinaFace face detection network, or the like.

After the processor 20 obtains the image captured by the camera 40, it can first obtain the face position in the captured image according to the preset face detection model, and then generate the face image according to the face position.

In this way, the image data that does not belong to the face in the image captured by the camera 40 can be discarded, which reduces the amount of data to be examined when detecting the quality of the face image and improves recognition efficiency.

Referring to FIG. 2, FIG. 3, and FIG. 5, the face recognition method of the embodiments of the present application further includes the following steps:

07: detecting the positions of the face key points in the face image based on the preset face detection model; and

09: aligning the face image according to the face position and the positions of the face key points, so that the pose of the target face in the aligned face image is adjusted to a preset pose.

Accordingly, part of step 03 (detecting the quality of the face image) includes the step:

031: detecting the quality of the aligned face image.

In some embodiments, the face recognition device 10 further includes a second detection module 14 and an alignment module 15. The second detection module 14 is configured to execute step 07, the alignment module 15 is configured to execute step 09, and the first detection module 12 is configured to execute step 031. That is, the second detection module 14 is configured to detect the positions of the face key points in the face image based on the preset face detection model; the alignment module 15 is configured to align the face image according to the face position and the positions of the face key points, so that the pose of the target face in the aligned face image is adjusted to the preset pose; and the first detection module 12 is configured to detect the quality of the aligned face image.

In some embodiments, the processor 20 is configured to execute step 07, step 09, and step 031, that is, to detect the positions of the face key points in the face image based on the preset face detection model; align the face image according to the face position and the positions of the face key points, so that the pose of the target face in the aligned face image is adjusted to the preset pose; and detect the quality of the aligned face image.

Specifically, the face information further includes face key points. After generating the face image, the processor 20 can also detect the positions of the face key points in the face image according to the preset face detection model. The positions of the face key points may be those of the eyes, nose, mouth, ears, and so on.

In this way, after the processor 20 obtains the face position and the positions of the face key points, it can align the face image according to them, so that the pose of the target face in the aligned face image is adjusted to the preset pose. The preset pose may be a frontal view of the face, a side view of the face, or the like. Preferably, to ensure that the feature points in the face image are easy to extract, and considering that the face image is used for face recognition, the preset pose is a frontal view of the face.

Specifically, the processor 20 can apply an affine transformation to the face image according to the face position and the positions of the face key points, so that the face is turned to face forward, that is, the face image becomes a frontal view of the face.
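
A minimal alignment sketch using OpenCV is shown below, assuming five detected key points (two eyes, nose tip, two mouth corners); the frontal template coordinates and the 112×112 output size are illustrative assumptions rather than values fixed by the description.

```python
import cv2
import numpy as np

# Assumed canonical key-point positions for the preset pose (frontal view)
# in a 112x112 crop: left eye, right eye, nose tip, mouth corners.
FRONTAL_TEMPLATE = np.float32([
    [38.3, 51.7], [73.5, 51.5], [56.0, 71.7], [41.5, 92.4], [70.7, 92.2],
])

def align_face(face_image, landmarks, size=(112, 112)):
    """Estimate an affine (similarity) transform from the detected key points
    to the frontal template and warp the face image to the preset pose."""
    src = np.float32(landmarks)
    matrix, _ = cv2.estimateAffinePartial2D(src, FRONTAL_TEMPLATE)
    return cv2.warpAffine(face_image, matrix, size)
```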

Further, when the processor 20 detects the quality of the face image based on the preset face screening model, it detects the quality of the aligned face image.

This ensures that the feature points of the face image obtained by the processor 20 are relatively distinct when the quality is detected based on the preset face screening model, so that the quality detection of the face image is more accurate.

Referring to FIG. 2, FIG. 3, and FIG. 6, in some embodiments, part of step 011 (detecting the position of the face in the captured image) further includes the following steps:

0111: generating a plurality of candidate boxes of preset sizes according to the size of the captured image, and outputting a score for each candidate box;

0113: determining candidate boxes whose scores are greater than a preset score to be face boxes;

0115: obtaining the degree of coincidence between any two face boxes that overlap, and determining, among face boxes whose degree of coincidence is greater than a preset degree of coincidence, the face box with the highest score to be the target face box; and

0117: outputting the position of the target face box as the face position.

In some embodiments, the second detection module 14 is configured to execute steps 0111, 0113, 0115, and 0117, that is, to generate a plurality of candidate boxes of preset sizes according to the size of the captured image and output a score for each candidate box; determine candidate boxes whose scores are greater than the preset score to be face boxes; obtain the degree of coincidence between any two overlapping face boxes and determine, among face boxes whose degree of coincidence is greater than the preset degree of coincidence, the face box with the highest score to be the target face box; and output the position of the target face box as the face position.

In some embodiments, the processor 20 is configured to execute steps 0111, 0113, 0115, and 0117, that is, to generate a plurality of candidate boxes of preset sizes according to the size of the captured image and output a score for each candidate box; determine candidate boxes whose scores are greater than the preset score to be face boxes; obtain the degree of coincidence between any two overlapping face boxes and determine, among face boxes whose degree of coincidence is greater than the preset degree of coincidence, the face box with the highest score to be the target face box; and output the position of the target face box as the face position.

Specifically, when the processor 20 detects the face position in the image based on the preset face detection model, the processor 20 can generate a plurality of candidate boxes of preset sizes according to the size of the captured image and output a score for each candidate box. The preset size is proportional to the size of the captured image; that is, the larger the captured image, the larger the candidate boxes of the preset size.

As shown in FIG. 7(a), if the size of the captured image P1 is 1600×900, the size S1 of the candidate box is 200×150. As shown in FIG. 7(b), if the size of the captured image P2 is 1920×1080, the size of the candidate box S2 is 240×200.

When the processor 20 scores the candidate boxes, it can do so based on a pre-trained face detection model. For example, the pre-trained face detection model contains the scores corresponding to training samples of different quality; when the candidate boxes are input into the pre-trained face detection model, the score of each candidate box is obtained. For example, the more facial features a candidate box contains, the higher its score.

Next, the processor 20 can determine, according to the score of each candidate box, the candidate boxes whose scores are greater than the preset score to be face boxes. The preset score may be a manually set score, such as 80 or 90. It can be understood that for a candidate box whose score is greater than the preset score, the image inside the candidate box contains more facial features.

Further, the processor 20 can obtain the degree of coincidence between any two face boxes that have an overlapping portion, so as to determine the target face box according to the degree of coincidence and the scores of the candidate boxes.

More specifically, the more two face boxes overlap, the greater their degree of coincidence. When the degree of coincidence is greater than the preset degree of coincidence, the processor 20 takes the higher-scoring of the two face boxes as the target face box. It can be understood that a degree of coincidence greater than the preset degree of coincidence means that the two face boxes share a larger portion of their area.

The target face box is therefore the highest-scoring face box among all the face boxes determined by the processor 20 whose degree of coincidence is greater than the preset degree of coincidence.

In this way, the processor 20 can output the position of the target face box as the face position. It can be understood that the position of the target face box is the most accurate face position in the captured image, which ensures that the quality of the face image is detected more accurately.
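
Steps 0111 to 0117 amount to score thresholding followed by an overlap-based selection. The sketch below implements that logic with intersection-over-union as the coincidence degree; the preset score and preset coincidence values are illustrative only.

```python
def select_target_face_boxes(boxes, scores, preset_score=0.8, preset_iou=0.5):
    """Keep candidate boxes scoring above the preset score as face boxes, then,
    among face boxes whose overlap exceeds the preset coincidence degree, keep
    only the highest-scoring one as the target face box."""
    def iou(a, b):
        ax1, ay1, ax2, ay2 = a
        bx1, by1, bx2, by2 = b
        ix1, iy1 = max(ax1, bx1), max(ay1, by1)
        ix2, iy2 = min(ax2, bx2), min(ay2, by2)
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
        return inter / union if union else 0.0

    face_boxes = [(b, s) for b, s in zip(boxes, scores) if s > preset_score]
    face_boxes.sort(key=lambda bs: bs[1], reverse=True)          # highest score first
    targets = []
    for box, score in face_boxes:
        if all(iou(box, kept) <= preset_iou for kept, _ in targets):
            targets.append((box, score))                         # target face box
    return [box for box, _ in targets]                           # output as face positions
```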

Referring to FIG. 2, FIG. 3, and FIG. 8, in some embodiments, step 05 (calculating, based on the reconfigurable computing unit, the similarity between the face feature vector of a face image whose quality meets the preset condition and the preset face feature vectors, so as to perform face recognition) further includes the following steps:

051: extracting features of the face image based on a preset feature extraction model to generate a face feature vector; and

053: building a face information database from the face feature vector.

In some embodiments, the recognition module 13 is configured to execute steps 051 and 053, that is, to extract features of the face image based on the preset feature extraction model to generate a face feature vector, and to build a face information database from the face feature vector.

In some embodiments, the processor 20 is configured to execute steps 051 and 053, that is, to extract features of the face image based on the preset feature extraction model to generate a face feature vector, and to build a face information database from the face feature vector.

Specifically, when performing face recognition based on the face image, the processor 20 can also extract features of the face image based on the preset feature extraction model to generate a face feature vector, and build a face information database from the face feature vector.

The preset feature extraction model is an extraction model trained in advance and may use a 72-layer convolutional neural network including convolutional layers, pooling layers, activation layers, fully connected layers, and a loss layer. The loss function used is a weighted sum of softmax-loss and center-loss, where the softmax-loss is used to improve the intra-class aggregation of samples in the feature space and the center-loss is used to increase the inter-class distance of samples in the feature space.
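
A compact PyTorch-style sketch of such a weighted loss is given below; the class count, feature dimension, and weighting factor `lam` are assumptions, and the 72-layer backbone itself is not reproduced.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftmaxCenterLoss(nn.Module):
    """Illustrative weighted sum of softmax-loss and center-loss for training
    the feature extraction model described above."""
    def __init__(self, num_classes, feat_dim, lam=0.01):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.classifier = nn.Linear(feat_dim, num_classes)
        self.lam = lam  # assumed weighting between the two terms

    def forward(self, features, labels):
        softmax_loss = F.cross_entropy(self.classifier(features), labels)
        center_loss = ((features - self.centers[labels]) ** 2).sum(dim=1).mean()
        return softmax_loss + self.lam * center_loss
```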

In the face recognition method of the embodiments of the present application, face information that can support face recognition must already be stored in the face information database before recognition is performed. Therefore, before face recognition, the user must first enrol their own face information; that is, an offline face image enrolment process is required.

The specific process is shown in FIG. 9. First, the camera 40 of the embedded device 100 captures an image for the processor 20 to detect. The processor 20 detects the face information (the face position and the positions of the face key points) based on the preset face detection model and aligns the face image. Next, the processor 20 detects the quality of the face based on the preset face screening model. When the quality of the face image meets the preset condition, the processor 20 extracts the features of the face image based on the preset feature extraction model to generate a face feature vector and builds the face information database. When the quality of the face image does not meet the preset condition, the camera 40 prompts the user to recapture the image, and the above steps are repeated until the quality of the face image meets the preset condition.

It can be understood that only face images of satisfactory quality are stored in the face information database. In this way, during face recognition, the face images that can pass recognition are all of good quality, which ensures the accuracy of face recognition.
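
Putting the enrolment flow of FIG. 9 together, a hedged sketch might look as follows; the detector, aligner, quality model, and feature extractor are passed in as callables whose concrete implementations are assumed (for example, the earlier sketches), and the 80-point preset score is illustrative.

```python
def enrol_face(frame, detect_and_crop, align, screen_quality, extract_features,
               database, preset_score=80.0):
    """Offline enrolment: detect and align the face, screen its quality, and
    store its feature vector only when the preset condition is met; otherwise
    the user is asked to recapture the image."""
    faces = detect_and_crop(frame)
    if not faces:
        return "no face detected, please recapture"
    face = align(faces[0]["image"], faces[0]["landmarks"])
    if screen_quality(face) < preset_score:
        return "quality below preset condition, please recapture"
    database.append(extract_features(face))  # becomes a pre-stored feature vector
    return "enrolled"
```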

Referring to FIG. 2, FIG. 3, and FIG. 10, in some embodiments, step 05 (calculating, based on the reconfigurable computing unit, the similarity between the face feature vector of a face image whose quality meets the preset condition and the preset face feature vectors, so as to perform face recognition) further includes the following steps:

055: extracting features of the face image based on the preset feature extraction model to generate a face feature vector to be tested;

057: inputting the pre-stored feature vectors in the preset face information database into the reconfigurable computing unit to calculate the similarity between the face feature vector to be tested and the pre-stored feature vectors, and determining a target feature vector according to the similarity; and

059: determining that face authentication is successful when the similarity between the target feature vector and the face feature vector to be tested is greater than a preset threshold.

In some embodiments, the recognition module 13 is configured to execute steps 055, 057, and 059, that is, to extract features of the face image based on the preset feature extraction model to generate a face feature vector to be tested; input the pre-stored feature vectors in the preset face information database into the reconfigurable computing unit to calculate the similarity between the face feature vector to be tested and the pre-stored feature vectors, and determine a target feature vector according to the similarity; and determine that face authentication is successful when the similarity between the target feature vector and the face feature vector to be tested is greater than the preset threshold.

In some embodiments, the processor 20 is configured to execute steps 055, 057, and 059, that is, to extract features of the face image based on the preset feature extraction model to generate a face feature vector to be tested; input the pre-stored feature vectors in the preset face information database into the reconfigurable computing unit to calculate the similarity between the face feature vector to be tested and the pre-stored feature vectors, and determine a target feature vector according to the similarity; and determine that face authentication is successful when the similarity between the target feature vector and the face feature vector to be tested is greater than the preset threshold.

Specifically, when the processor 20 performs face recognition based on the face image, the processor 20 can extract the features of the face image based on the preset feature extraction model to generate the face feature vector to be tested.

Next, the processor 20 can input the pre-stored feature vectors in the preset face information database into the reconfigurable computing unit for computation. The reconfigurable computing unit is a Reconfigurable Computing Unit (RCU) device. Having the RCU device compute the similarity between the face feature vector to be tested and the pre-stored feature vectors reduces the workload of the processor 20, thereby lowering the resource occupancy and time consumed by the embedded device 100 during face recognition, so that the embedded device 100 can still meet the running requirements of its other tasks.

More specifically, after the RCU device has calculated the similarity between the face feature vector to be tested and the pre-stored feature vectors, the processor 20 can determine the target feature vector according to the similarity. When the similarity between the target feature vector and the face feature vector to be tested is greater than the preset threshold, the processor 20 can determine that face authentication is successful. It can be understood that the target feature vector is the feature vector (or vectors) among all pre-stored feature vectors whose similarity with the face feature vector to be tested is greater than the preset threshold.

The preset threshold may be any manually set value, such as 90%, 95%, or 98%. When the similarity between the target feature vector and the face feature vector to be tested is greater than the preset threshold, the face image passes face recognition and unlocking succeeds.
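
The authentication decision of step 059 then reduces to a threshold test over the target feature vectors' similarities, as in the small sketch below (the 0.95 threshold is only an example of a manually set value):

```python
def authentication_succeeds(target_similarities, preset_threshold=0.95):
    """Face authentication succeeds when at least one target feature vector's
    similarity to the face feature vector under test exceeds the preset threshold."""
    return any(sim > preset_threshold for sim in target_similarities)
```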

In this way, the RCU device performs the similarity calculation between the face feature vector to be tested and the pre-stored feature vectors, relieving the processor 20 of this work, which reduces the resources that face recognition occupies on the embedded device 100, lowers the time consumed by face recognition, and allows the embedded device 100 to meet the running requirements of its other tasks.

Referring to FIG. 2, FIG. 3, and FIG. 11, in some embodiments, step 057 (inputting the pre-stored feature vectors in the preset face information database into the reconfigurable computing unit to calculate the similarity between the face feature vector to be tested and the pre-stored feature vectors, and determining the target feature vector according to the similarity) further includes the following steps:

0571: dividing the pre-stored feature vectors in the face information database into a plurality of feature sets according to the memory capacity of the reconfigurable computing unit, so that the memory occupied by each feature set is smaller than the memory capacity;

0573: inputting each feature set into the reconfigurable computing unit in turn to calculate the similarity between the face feature vector to be tested and the pre-stored feature vectors of each feature set; and

0575: sorting the pre-stored feature vectors according to their corresponding similarities, and determining the pre-stored feature vectors within a preset ranking to be the target feature vectors.

In some embodiments, the recognition module 13 is configured to execute steps 0571, 0573, and 0575, that is, to divide the pre-stored feature vectors in the face information database into a plurality of feature sets according to the memory capacity of the reconfigurable computing unit, so that the memory occupied by each feature set is smaller than the memory capacity; input each feature set into the reconfigurable computing unit in turn to calculate the similarity between the face feature vector to be tested and the pre-stored feature vectors of each feature set; and sort the pre-stored feature vectors according to their corresponding similarities to determine the pre-stored feature vectors within the preset ranking to be the target feature vectors.

In some embodiments, the processor 20 is configured to execute steps 0571, 0573, and 0575, that is, to divide the pre-stored feature vectors in the face information database into a plurality of feature sets according to the memory capacity of the reconfigurable computing unit, so that the memory occupied by each feature set is smaller than the memory capacity; input each feature set into the reconfigurable computing unit in turn to calculate the similarity between the face feature vector to be tested and the pre-stored feature vectors of each feature set; and sort the pre-stored feature vectors according to their corresponding similarities to determine the pre-stored feature vectors within the preset ranking to be the target feature vectors.

Specifically, when the reconfigurable computing unit (the RCU device) calculates the similarity between the face feature vector to be tested and the pre-stored feature vectors, the pre-stored feature vectors in the face information database can first be divided into a plurality of feature sets according to the memory capacity of the RCU device.

More specifically, taking N pre-stored feature vectors as an example, the processor 20 can divide them into M parts (that is, M feature sets) according to the memory capacity of the RCU device, with (N+M-1)/M pre-stored feature vectors in each part. This ensures that the pre-stored feature vectors fit into M parts of the RCU device's memory capacity, that the numbers of pre-stored feature vectors in the parts do not differ greatly, and that the memory occupied by each feature set is smaller than the memory capacity.
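
A sketch of this partitioning is given below; the memory figures are illustrative, and the helper simply ensures that each feature set holds roughly (N + M - 1) / M vectors and fits within the assumed RCU memory budget.

```python
def split_into_feature_sets(prestored_vectors, bytes_per_vector, rcu_memory_bytes):
    """Divide the N pre-stored feature vectors into M feature sets so that each
    set occupies less memory than the RCU's capacity."""
    n = len(prestored_vectors)
    per_set = max(1, rcu_memory_bytes // bytes_per_vector)  # vectors that fit at once
    m = (n + per_set - 1) // per_set                        # number of feature sets M
    chunk = (n + m - 1) // m                                # (N + M - 1) / M per set
    return [prestored_vectors[i:i + chunk] for i in range(0, n, chunk)]
```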

接下来,处理器20可依次输入每个特征集至RCU设备中,以计算得到待测人脸特征向量和每个特征集的预存特征向量的相似度。其中,由于RCU设备的内存容量较小,在计算过程中,RCU设备只能计算一个特征集内的预存特征向量与待测人脸特征向量的相似度,因此,依次输入每个特征集至RCU设备中,可保证RCU设备能够计算所有特征集内的预存特征向量和待测人脸特征向量的相似度。Next, the processor 20 may sequentially input each feature set into the RCU device to calculate the similarity between the feature vector of the face to be tested and the pre-stored feature vector of each feature set. Among them, due to the small memory capacity of the RCU device, during the calculation process, the RCU device can only calculate the similarity between the pre-stored feature vector in one feature set and the face feature vector to be tested. Therefore, each feature set is input to the RCU in turn. In the device, it can be ensured that the RCU device can calculate the similarity between the pre-stored feature vectors in all feature sets and the face feature vector to be tested.

最后,处理器20可获取得到每个预存特征向量对应的相似度,从而对预存特征向量的相似度进行排序,并将预设排序的预存特征向量为目标特征向量。其中,处理器20可根据每个预存特征向量对应的相似度,以从大到小的方式进行排序。而预设排序可以是5、10、15等。例如,当预设排序为5时,则预存特性向量中的前5个为目标特征向量,即相似度最大的5个预存特征向量为目标特征向量。Finally, the processor 20 may obtain the similarity corresponding to each pre-stored feature vector, so as to sort the similarity of the pre-stored feature vectors, and use the preset sorted pre-stored feature vector as the target feature vector. Wherein, the processor 20 may perform sorting in a descending manner according to the similarity corresponding to each pre-stored feature vector. And the preset ordering can be 5, 10, 15, etc. For example, when the preset sorting is 5, the first five pre-stored feature vectors are the target feature vectors, that is, the five pre-stored feature vectors with the highest similarity are the target feature vectors.

在某些实施方式中,第一个特征集输入至RCU设备中后,RCU设备则可计算得到第一个特征集中每个预存特征向量和待测人脸特征向量的相似度,以得到K个待测人脸特征向量和每个特征集的预存特征向量的相似度。其中,K为待测人脸特征向量和每个特征集的预存特征向量的相似度从大到小排序后,前K个相似度,即K个相似度为该特征集中相似度较大的K个。In some embodiments, after the first feature set is input into the RCU device, the RCU device can calculate the similarity between each pre-stored feature vector in the first feature set and the face feature vector to be tested, so as to obtain K The similarity between the feature vector of the face to be tested and the pre-stored feature vector of each feature set. Among them, K is the similarity between the feature vector of the face to be tested and the pre-stored feature vector of each feature set. After sorting from large to small, the first K similarities, that is, the K similarities are K indivual.

然后,处理器将第二个特征集输入至RCU设备,RCU可计算得到第二个特征集中每个预存特征向量和待测人脸特征向量的相似度,以再次得到K1个相似度,并与第一个特征集得到的K个相似度进行比较,并更新,以得到更新后的K2个相似度。其中,K1、K2等于K,可以理解,这K2个相似度为第一个特征集和第二个特征集经相似度计算后,相似度较大的K2个相似度。Then, the processor inputs the second feature set to the RCU device, and the RCU can calculate the similarity between each pre-stored feature vector in the second feature set and the face feature vector to be tested, so as to obtain K1 similarities again, and with The K similarities obtained from the first feature set are compared and updated to obtain updated K2 similarities. Wherein, K1 and K2 are equal to K. It can be understood that the K2 similarities are the K2 similarities with larger similarities after the similarity calculation between the first feature set and the second feature set.

By analogy, each time the processor inputs a feature set into the RCU device, the RCU device calculates the similarities with the face feature vector under test, compares them with the K similarities retained from the previous feature sets, and updates them, until all feature sets have been input. This yields the K largest similarities between the face feature vector under test and the pre-stored feature vectors of all feature sets, and the pre-stored feature vectors corresponding to these K similarities are the target feature vectors.
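A minimal sketch of this running top-K merge is shown below, using a min-heap so that only the K largest similarities seen so far are kept after each feature set is scored; the heap-based bookkeeping is an implementation choice assumed here, not something the text prescribes.

```python
import heapq

# Sketch: after each feature set is scored on the RCU, merge its similarities
# into a min-heap that always holds the K largest similarities seen so far.
def update_top_k(top_k, batch_scores, k):
    # top_k: list used as a min-heap of (similarity, vector_id) tuples
    for vector_id, similarity in batch_scores:
        if len(top_k) < k:
            heapq.heappush(top_k, (similarity, vector_id))
        elif similarity > top_k[0][0]:
            heapq.heapreplace(top_k, (similarity, vector_id))
    return top_k
```

Calling this once per feature set leaves, after the last set, a heap whose entries correspond to the target feature vectors.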

Referring to FIG. 2, FIG. 3 and FIG. 12, in some embodiments, step 057, calculating the similarity between the face feature vector under test and the pre-stored feature vector, further includes the steps of:

0577: calculating the Euclidean distance between the face feature vector under test and the pre-stored feature vector; and

0579: determining the similarity between the face feature vector under test and the pre-stored feature vector according to the Euclidean distance.

In some embodiments, the recognition module 13 is configured to perform step 0577 and step 0579. That is, the recognition module 13 is configured to calculate the Euclidean distance between the face feature vector under test and the pre-stored feature vector, and to determine the similarity between the two according to the Euclidean distance.

In some embodiments, the processor 20 is configured to perform step 0577 and step 0579. That is, the processor 20 is configured to calculate the Euclidean distance between the face feature vector under test and the pre-stored feature vector, and to determine the similarity between the two according to the Euclidean distance.

Specifically, when the RCU device calculates the similarity between the face feature vector under test and a pre-stored feature vector, it may first calculate the Euclidean distance between the two vectors and then determine their similarity from that distance.

The Euclidean distance is calculated according to formula (1), formula (2) and formula (3) below:

$$\rho = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2} \tag{1}$$

$$\rho = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2 + (z_2 - z_1)^2} \tag{2}$$

$$d(x, y) = \sqrt{\sum_{i=1}^{N} (x_i - y_i)^2} \tag{3}$$

Formula (1) is the calculation formula for two-dimensional space, formula (2) is the calculation formula for three-dimensional space, and formula (3) is the calculation formula for N-dimensional space; the formula to use is selected according to the dimensionality of the face feature vector under test and the pre-stored feature vector. ρ and d(x, y) are the values of the Euclidean distance, (x1, y1) denotes the coordinates of the face feature vector under test, and (x2, y2) denotes the coordinates of the pre-stored feature vector.

More specifically, the smaller the Euclidean distance, the greater the similarity between the face feature vector under test and the pre-stored feature vector. Therefore, after the Euclidean distances between the face feature vector under test and the pre-stored feature vectors have been calculated, the one or more pre-stored feature vectors that are most similar to the face feature vector under test can be identified from these distances.
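The following sketch implements formula (3) for N-dimensional vectors and, as an assumption not stated in the text, maps the distance to a similarity score with 1/(1+d), which preserves the "smaller distance, higher similarity" relationship.

```python
import math

# Formula (3): Euclidean distance between two N-dimensional feature vectors.
def euclidean_distance(x, y):
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

# Assumed mapping from distance to similarity (monotonically decreasing in d).
def distance_to_similarity(d):
    return 1.0 / (1.0 + d)
```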

In one embodiment, the RCU device may calculate, feature set by feature set, the Euclidean distance between the face feature vector under test and each pre-stored feature vector, keeping only the K pre-stored feature vectors with the smallest Euclidean distances in each set. The vectors retained from each feature set are then compared in turn to obtain the O pre-stored feature vectors with the smallest Euclidean distances over all feature sets. K and O are any positive integers greater than 0, and K and O may be equal.

With the face recognition method of the embodiments of the present application, when a user performs face recognition with the embedded device 100, as shown in FIG. 13, the processor 20 first acquires a real-time image of the user through the camera 40 and performs face detection on the real-time image to obtain a face image. The face image is then aligned according to the face position and the positions of the face key points, and screened according to the preset face screening model, i.e. quality-checked. If the quality meets the preset condition, face feature extraction is performed on the face image; if the quality does not meet the preset condition, the embedded device 100 ends the face recognition and may remind the user that face recognition has failed. After the features of the face image have been extracted according to the preset feature extraction model to generate the face feature vector under test, the processor 20 uses the RCU device to calculate the similarity between the feature vector under test and the pre-stored feature vectors in the face information database, so as to obtain the target feature vectors that are most similar to it, i.e. to return the IDs of similar faces. When the similarity between a target feature vector and the face feature vector under test meets the preset threshold, face authentication is determined to be successful.
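The flow of FIG. 13 can be summarised by the sketch below. Every stage is passed in as a callable and every name is assumed for illustration, since the text does not define these functions or their interfaces.

```python
# High-level sketch of the FIG. 13 flow; all stage names are assumptions.
def recognize_face(frame, stages, database, threshold, k=5):
    face, landmarks = stages["detect"](frame)        # face detection + key points
    if face is None:
        return None                                  # no face in the captured image
    aligned = stages["align"](face, landmarks)       # adjust to the preset pose
    if not stages["quality_ok"](aligned):            # preset face screening model
        return None                                  # recognition ends here
    query = stages["extract"](aligned)               # face feature vector under test
    top_k = stages["compare"](query, database, k)    # RCU similarity search, (id, sim) pairs
    if not top_k:
        return None
    best_id, best_sim = max(top_k, key=lambda pair: pair[1])
    return best_id if best_sim > threshold else None # successful authentication or None
```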

Referring to FIG. 14, an embodiment of the present application further provides a non-volatile computer-readable storage medium 300 containing a computer program 301. When the computer program 301 is executed by one or more processors 20, the one or more processors 20 are caused to execute the face recognition method of any of the above embodiments.

For example, when the computer program 301 is executed by one or more processors 20, the processors 20 are caused to perform the following face recognition method:

01: detecting face information in the acquired image to generate a face image;

03: detecting the quality of the face image based on a preset face screening model; and

05: based on the reconfigurable computing unit, calculating the similarity between the face feature vector of the face image whose quality meets the preset condition and the preset face feature vector, so as to perform face recognition.

For another example, when the computer program 301 is executed by one or more processors 20, the processors 20 are caused to perform the following face recognition method:

07: detecting the positions of the face key points in the face image based on the preset face detection model; and

09: aligning the face image according to the face position and the positions of the face key points, so that the pose of the target face in the aligned face image is adjusted to the preset pose.
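One common way to realise this alignment step, offered here only as a hedged example, is to estimate a similarity transform from the detected key points to a canonical five-point template; the template coordinates, the 112x112 output size and the use of OpenCV are all assumptions, not details given by the text.

```python
import cv2
import numpy as np

# Example: warp a face crop so its key points land on a preset template pose.
# The template values below are illustrative, not taken from the text.
TEMPLATE_5PT = np.float32([
    [38.3, 51.7], [73.5, 51.5],   # left eye, right eye
    [56.0, 71.7],                 # nose tip
    [41.5, 92.4], [70.7, 92.2],   # left and right mouth corners
])

def align_to_preset_pose(image, landmarks_5pt, size=(112, 112)):
    src = np.float32(landmarks_5pt)
    # Partial affine = rotation + uniform scale + translation (a similarity transform).
    matrix, _ = cv2.estimateAffinePartial2D(src, TEMPLATE_5PT, method=cv2.LMEDS)
    return cv2.warpAffine(image, matrix, size)
```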

In the description of this specification, reference to the terms "certain embodiments", "in one example", "exemplarily" and the like means that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the specific features, structures, materials or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples described in this specification, and the features of those embodiments or examples, provided they do not contradict each other.

Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order, depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.

Although the embodiments of the present application have been shown and described above, it should be understood that the above embodiments are exemplary and are not to be construed as limiting the present application; those of ordinary skill in the art may make changes, modifications, substitutions and variations to the above embodiments within the scope of the present application.

Claims (18)

1. A face recognition method, comprising:
detecting face information in the acquired image to generate a face image;
detecting the quality of the face image based on a preset face screening model; and
and based on the reconfigurable computing unit, computing the similarity between the face feature vector of the face image with the quality reaching the preset condition and the preset face feature vector so as to conduct face recognition.
2. The face recognition method according to claim 1, wherein the face information includes a face position of a target face, and the detecting acquires the face information in the image to generate the face image includes:
detecting the face position in the acquired image based on a preset face detection model;
and generating the face image according to the face position.
3. The face recognition method according to claim 2, wherein the face information further includes a face key point, the face recognition method further comprising:
detecting the positions of the face key points in the face image based on a preset face detection model;
aligning the face image according to the face position and the position of the face key point so that the posture of the target face of the aligned face image is adjusted to be a preset posture;
the detecting the quality of the face image includes:
and detecting the quality of the face image after alignment.
4. The face recognition method according to claim 2, wherein the detecting the face position in the captured image includes:
generating a plurality of candidate frames with preset sizes according to the sizes of the acquired images, and outputting the scores of the candidate frames;
determining the candidate frames with scores greater than a preset score as face frames;
acquiring the coincidence degree between any two face frames with coincidence parts, and determining the face frame with the highest score as a target face frame in the face frames with the coincidence degree larger than the preset coincidence degree;
and outputting the position of the target face frame as the face position.
5. The face recognition method according to claim 1, wherein the calculating, based on the reconfigurable calculating unit, the similarity between the face feature vector of the face image having the quality reaching the preset condition and the preset face feature vector to perform face recognition includes:
Extracting the features of the face image based on a preset feature extraction model to generate a face feature vector;
and establishing a face information database according to the face feature vector.
6. The face recognition method according to claim 1, wherein the calculating, based on the reconfigurable calculating unit, the similarity between the face feature vector of the face image having the quality reaching the preset condition and the preset face feature vector to perform face recognition includes:
extracting the features of the face image based on a preset feature extraction model to generate a face feature vector to be detected;
inputting a pre-stored feature vector in a preset face information database to the reconfigurable computing unit so as to calculate the similarity between the face feature vector to be detected and the pre-stored feature vector, and determining a target feature vector according to the similarity;
and under the condition that the similarity between the target feature vector and the face feature vector to be detected is larger than a preset threshold value, determining that the face authentication is successful.
7. The face recognition method according to claim 6, wherein inputting the pre-stored feature vector in the pre-set face information database to the reconfigurable computing unit to calculate the similarity between the face feature vector to be detected and the pre-stored feature vector, and determining the target feature vector according to the similarity, comprises:
dividing the pre-stored feature vectors in the face information database into a plurality of feature sets according to the memory capacity of the reconfigurable computing unit, so that the memory occupied by each feature set is smaller than the memory capacity;
sequentially inputting each feature set into the reconfigurable computing unit to compute the similarity of the face feature vector to be tested and the pre-stored feature vector of each feature set;
and sorting according to the corresponding similarity of each pre-stored feature vector to determine the pre-stored feature vector with preset sorting as the target feature vector.
8. The method of claim 6, wherein said calculating the similarity between the feature vector of the face to be detected and the pre-stored feature vector comprises:
calculating the Euclidean distance between the face feature vector to be detected and the pre-stored feature vector; and
and determining the similarity of the face feature vector to be detected and the pre-stored feature vector according to the Euclidean distance.
9. A face recognition device, comprising:
the generation module is used for detecting face information in the acquired image to generate a face image;
the first detection module is used for detecting the quality of the face image based on a preset face screening model; and
the recognition module is used for calculating the similarity between the face feature vector of the face image with the quality reaching the preset condition and the preset face feature vector based on the reconfigurable calculation unit so as to carry out face recognition.
10. An embedded device, comprising a processor for detecting face information in an acquired image to generate a face image; detecting the quality of the face image based on a preset face screening model; and calculating the similarity between the face feature vector of the face image with the quality reaching the preset condition and the preset face feature vector based on the reconfigurable calculating unit so as to carry out face recognition.
11. The embedded device of claim 10, wherein the face information includes a face position of a target face, and the processor is configured to detect the face position in the face image based on a preset face detection model; and generating the face image according to the face position.
12. The embedded device of claim 11, wherein the face information further comprises a face key point, and the processor is configured to detect a location of the face key point in the face image based on a preset face detection model; aligning the face image according to the face position and the position of the face key point so that the posture of the target face of the aligned face image is adjusted to be a preset posture; and detecting the quality of the face image after alignment.
13. The embedded device of claim 11, wherein the processor is configured to generate a plurality of candidate frames of a preset size according to the size of the captured image, and output a score for each of the candidate frames; determining the candidate frames with scores greater than a preset score as face frames; acquiring the coincidence degree between any two face frames with coincidence parts, and determining the face frame with the highest score as a target face frame in the face frames with the coincidence degree larger than the preset coincidence degree; and outputting the position of the target face frame as the face position.
14. The embedded device of claim 10, wherein the processor is configured to extract features of the face image based on a preset feature extraction model to generate an acquired face feature vector; and establishing a face information database according to the collected face feature vectors.
15. The embedded device of claim 10, wherein the processor is configured to extract features of the face image based on a preset feature extraction model to generate a face feature vector to be detected; inputting a pre-stored feature vector in a preset face information database to the reconfigurable computing unit so as to calculate the similarity between the face feature vector to be detected and the pre-stored feature vector, and determining a target feature vector according to the similarity; and under the condition that the similarity between the target feature vector and the face feature vector to be detected is larger than a preset threshold value, determining that the face authentication is successful.
16. The embedded device of claim 15, wherein the processor is configured to divide the pre-stored feature vectors in the face information database into a plurality of feature sets according to a memory capacity of the reconfigurable computing unit, such that memory occupied by each of the feature sets is smaller than the memory capacity; sequentially inputting each feature set into the reconfigurable computing unit to compute the similarity of the face feature vector to be tested and the pre-stored feature vector of each feature set; and sorting according to the corresponding similarity of each pre-stored feature vector to determine the pre-stored feature vector with preset sorting as the target feature vector.
17. The embedded device of claim 15, wherein the processor is further configured to calculate a euclidean distance between the face feature vector to be measured and the pre-stored feature vector; and determining the similarity of the face feature vector to be detected and the pre-stored feature vector according to the Euclidean distance.
18. A non-transitory computer readable storage medium, storing a computer program which, when executed by one or more processors, performs the face recognition method of any one of claims 1-8.
CN202310265736.0A 2023-03-13 2023-03-13 Face recognition method and device, embedded device, and computer-readable storage medium Pending CN116229556A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310265736.0A CN116229556A (en) 2023-03-13 2023-03-13 Face recognition method and device, embedded device, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310265736.0A CN116229556A (en) 2023-03-13 2023-03-13 Face recognition method and device, embedded device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN116229556A true CN116229556A (en) 2023-06-06

Family

ID=86575084

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310265736.0A Pending CN116229556A (en) 2023-03-13 2023-03-13 Face recognition method and device, embedded device, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN116229556A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117611516A (en) * 2023-09-04 2024-02-27 北京智芯微电子科技有限公司 Image quality evaluation, face recognition, label generation and determination methods and devices
CN117611516B (en) * 2023-09-04 2024-09-13 北京智芯微电子科技有限公司 Image quality evaluation, face recognition, label generation and determination methods and devices

Similar Documents

Publication Publication Date Title
CN109829448B (en) Face recognition method, face recognition device and storage medium
WO2018028546A1 (en) Key point positioning method, terminal, and computer storage medium
EP4099217B1 (en) Image processing model training method and apparatus, device, and storage medium
CN102375970B (en) A kind of identity identifying method based on face and authenticate device
US9691007B2 (en) Identification apparatus and method for controlling identification apparatus
WO2016061780A1 (en) Method and system of facial expression recognition using linear relationships within landmark subsets
US8306282B2 (en) Hierarchical face recognition training method and hierarchical face recognition method thereof
WO2017096753A1 (en) Facial key point tracking method, terminal, and nonvolatile computer readable storage medium
CN111931592B (en) Object recognition method, device and storage medium
US8577099B2 (en) Method, apparatus, and program for detecting facial characteristic points
CN110069989B (en) Face image processing method and device, and computer-readable storage medium
JP6112801B2 (en) Image recognition apparatus and image recognition method
CN107851192B (en) Apparatus and method for detecting face part and face
JP6071002B2 (en) Reliability acquisition device, reliability acquisition method, and reliability acquisition program
CN105335719A (en) Living body detection method and device
EP3591580A1 (en) Method and device for recognizing descriptive attributes of appearance feature
US9147130B2 (en) Information processing method, information processing apparatus, and recording medium for identifying a class of an object by using a plurality of discriminators
CN109993021A (en) The positive face detecting method of face, device and electronic equipment
CN106471440A (en) Eye tracking based on efficient forest sensing
CN103927529B (en) The preparation method and application process, system of a kind of final classification device
CN116229556A (en) Face recognition method and device, embedded device, and computer-readable storage medium
CN109961103B (en) Training method of feature extraction model, and image feature extraction method and device
Selvi et al. FPGA implementation of a face recognition system
CN115311723A (en) Living body detection method, living body detection device and computer-readable storage medium
JP7264163B2 (en) Determination method, determination program and information processing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination