WO2022126464A1 - Face recognition method, robot, and storage medium - Google Patents

Face recognition method, robot, and storage medium

Info

Publication number
WO2022126464A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
registered
image
key point
detection model
Prior art date
Application number
PCT/CN2020/137065
Other languages
French (fr)
Chinese (zh)
Inventor
曾钰胜
程骏
庞建新
Original Assignee
深圳市优必选科技股份有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市优必选科技股份有限公司
Priority to PCT/CN2020/137065
Publication of WO2022126464A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition

Definitions

  • the present application relates to the technical field of face recognition, and in particular, to a face recognition method, a robot and a storage medium.
  • Face recognition is a biometric recognition technology based on human facial feature information.
  • with the rapid development of science and technology, face recognition technology is increasingly applied across industries, for example in mobile phones, security equipment, and robots; accordingly, the demand for technological innovation in face recognition is becoming more and more urgent.
  • Face key point recognition is an important part of face recognition algorithms and applications.
  • the accuracy of key point recognition will directly affect the accuracy of face recognition.
  • Existing approaches to improving the accuracy of face key point recognition, and of the subsequent face recognition, often need to set up a relatively complex network model.
  • an embodiment of the present application provides a face recognition method, the method comprising:
  • the face key point model is a lightweight neural network model
  • the registered facial features are obtained by performing feature extraction on registered frontal face images, and the registered frontal face images are obtained by screening with a pre-trained face pose detection model.
  • an embodiment of the present application provides a computer device, including a memory and a processor, the memory stores a computer program, and when the computer program is executed by the processor, the processor is caused to perform the following steps:
  • the face key point model is a lightweight neural network model
  • the registered facial features are obtained by performing feature extraction on registered frontal face images, and the registered frontal face images are obtained by screening with a pre-trained face pose detection model.
  • an embodiment of the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, causes the processor to perform the following steps:
  • the face key point model is a lightweight neural network model
  • the registered facial features are obtained by performing feature extraction on registered frontal face images, and the registered frontal face images are obtained by screening with a pre-trained face pose detection model.
  • the embodiment of the present application uses a pre-trained face key point model to perform face key point recognition on the face image to be recognized, where the face key point model adopts a lightweight neural network model, which reduces computing power requirements;
  • the registered face features in the preset database are used for comparison to ensure the accuracy of recognition;
  • the registered face features are accurate face features obtained through multiple models, which solves the problem of the face key points being insufficiently accurate due to limited computing power on the device side, and achieves the effect of accurately completing face recognition when device-side computing power is limited.
  • FIG. 1 is an application environment diagram of a face recognition method in an embodiment of the present application
  • FIG. 2 is a flowchart of a face recognition method in an embodiment of the application
  • FIG. 3 is a flowchart of obtaining a face image to be recognized in a face recognition method according to an embodiment of the present application
  • FIG. 4 is a flowchart of a face recognition method in another embodiment of the present application.
  • FIG. 5 is a flowchart of obtaining a registered face image in a face recognition method according to an embodiment of the present application
  • FIG. 6 is a structural block diagram of a face recognition device in an embodiment of the present application.
  • FIG. 7 is a structural block diagram of a robot in an embodiment of the present application.
  • FIG. 1 is an application environment diagram of a face recognition method in one embodiment.
  • the face recognition method is applied to a face recognition device.
  • the face recognition device includes a terminal 110 and a server 120 .
  • the terminal 110 and the server 120 are connected through a network.
  • the terminal 110 may be a desktop terminal or a mobile terminal, and the mobile terminal may be at least one of a mobile phone, a tablet computer, a notebook computer, and a robot.
  • the server 120 can be implemented by an independent server or a server cluster composed of multiple servers.
  • the terminal 110 is used to obtain the face image to be recognized and upload it to the server 120, and the server 120 is used to receive the face image to be recognized, and use a pre-trained face key point model to perform face key point recognition on the to-be-recognized face image.
  • the face key point model is a lightweight neural network model; the face image to be recognized is aligned according to the identified face key points to obtain an aligned face image; the face features in the aligned face image are extracted; and the face features are compared with the registered face features in a preset database to identify the person in the face image to be recognized, the registered face features being obtained by performing feature extraction on registered frontal face images, and the registered frontal face images being obtained by screening with a pre-trained face pose detection model.
  • in another embodiment, the above face recognition method can be applied directly to the terminal 110; the terminal 110 is used to obtain the face image to be recognized and to perform face key point recognition on it with a pre-trained face key point model, the face key point model being a lightweight neural network model; the face image to be recognized is aligned according to the identified face key points to obtain an aligned face image; the face features in the aligned face image are extracted; and the face features are compared with the registered face features in the preset database to identify the person in the face image to be recognized, the registered face features being obtained by performing feature extraction on registered frontal face images, and the registered frontal face images being obtained by screening with a pre-trained face pose detection model.
  • a face recognition method is provided.
  • the method can be applied to both a terminal and a server, and this embodiment is described by taking the application to a terminal as an example.
  • the face recognition method specifically includes the following steps:
  • the terminal that executes the method is a robot, and the robot includes a camera.
  • the user can place the face in front of the camera of the robot, so that the robot can capture the face through the camera, and the captured image is used as the face image to be recognized.
  • step S110 may specifically include the following steps:
  • the image captured by the robot is used as the image to be recognized.
  • S120 uses a pre-trained face key point model to perform face key point recognition on the to-be-recognized face image, and the face key point model is a lightweight neural network model.
  • face key point recognition can be performed on the face image to be recognized using a pre-trained face key point detection model; after the face image to be recognized is input to the face key point detection model,
  • the face key point detection model can output the face key points of the face image to be recognized.
  • the face image to be recognized can be aligned according to the identified face key points, thereby obtaining an aligned face image.
  • the aligned face image is a frontal view obtained by transforming the face image to be recognized based on the face key points.
  • the aligned face image can be used for the feature comparison of face recognition, so as to ensure the accuracy of face recognition.
  • the face key point detection model can specifically be a MobileNetV3_small neural network model;
  • MobileNetV3_small is the lightweight version of the MobileNetV3 model, offering good accuracy with excellent speed while requiring relatively low computing power; MobileNetV3 combines the advantages of MobileNetV1 and MobileNetV2 and is a strong mobile-side feature extraction model;
  • when the computing power on the device is still more limited, for example on a small robot, the MobileNetV3_small model can be channel-pruned to obtain MobileNetV3_small x0.5, which still achieves a good face recognition effect with only a small loss of accuracy;
  • after the aligned face image is obtained, the face features in it can be extracted, and the face features are then compared with the registered face features in the preset database, so as to identify the person in the face image to be recognized.
  • the registered face features are obtained by feature extraction on the registered frontal face images, and the registered frontal face images are obtained by screening through a pre-trained face pose detection model.
  • the same method as the above steps can be used to obtain the key point recognition of the registered user's face, and then extract the registered face features.
  • to further guarantee recognition accuracy, frontal face images also need to be screened out in advance with a pre-trained face pose detection model, and the registered face features are obtained by feature extraction from the registered frontal face images;
  • each registered face feature includes the corresponding user name.
  • the embodiment of the present application uses a pre-trained face key point model to perform face key point recognition on the face image to be recognized, where the face key point model adopts a lightweight neural network model, which reduces computing power requirements;
  • the registered face features in the preset database are used for comparison to ensure the accuracy of recognition;
  • the registered face features are accurate face features obtained through multiple models, which solves the problem of the face key points being insufficiently accurate due to limited computing power on the device side, and achieves the effect of accurately completing face recognition when device-side computing power is limited.
  • the method further includes:
  • the registered face image may be captured by the robot in real time, or uploaded by the user through the server; it is the face image registered by the user into the preset database for comparison in subsequent face recognition.
  • step S210 may specifically include the following steps:
  • since the registered face image of a registered user is being obtained, a plurality of face images to be registered can be acquired, and the best image among them can be selected as the registered face image.
  • the faces in the multiple face images to be registered can be detected to obtain the registered face images, so as to ensure that there are faces in the registered face images during registration.
  • after the face images to be registered in which no face exists are screened out, the face image to be registered with the highest face feature value is selected as the registered face image, that is, the registered face image is the one with the highest face feature value among the multiple face images to be registered;
  • for example, if the image size of the face in a first face image to be registered is 100×100 and the image size of the face in a second face image to be registered is 10×10, then the face feature value of the first face image to be registered is the highest, and the first face image to be registered is used as the registered face image.
  • the face recognition model includes a face key point detection model and a face pose detection model, and the face key point detection model is connected with the face pose detection model.
  • the registered face image can be used as the input of the face recognition model to obtain the face pose and the registered face key points, where the face recognition model is pre-trained;
  • the face recognition model includes a pre-trained face key point detection model and a pre-trained face pose detection model;
  • the face key point detection model is connected with the face pose detection model;
  • the face key point detection model is also used for face key point extraction in face recognition, and the face pose detection model can be constructed simply according to actual needs.
  • the face key point detection model is connected to the face pose detection model through an intermediate feature layer of the face key point detection model; for example, an intermediate feature layer of the face key point detection model is selected and connected to the face pose detection model through two convolutional layers with a stride of 2, so that the face key points assist the prediction of the face pose;
  • the joint model obtained by connecting the intermediate feature layer of the face key point detection model with the face pose detection model, that is, the face recognition model, is well balanced: the face pose prediction places little dependency and little computation on the extraction of the face key points, and the large errors of face pose prediction based on 3D standard modules are avoided.
  • the face recognition model is trained based on a preset loss function
  • the preset loss function used by the face key point detection model can be the WingLoss loss function, specifically:
  • where C is a constant, ω is a positive number used to limit the range of the nonlinear part of the loss function to the interval [-ω, ω], and ε is the curvature that constrains the nonlinear region of the loss function; for example, ω = 10 and ε = 2 can be selected
  • the WingLoss loss function has a higher response to subtle differences, and can achieve more refined positioning of the face key points during the training of the face key point detection model.
  • the preset loss function used by the face pose detection model can be the L1_smooth loss function.
  • the face key points include eye key points and other face key points
  • other face key points may be nose key points, mouth key points, and ear key points
  • the eye key points include left eye key points and right eye key points; however, owing to differences in eye size, the varied degrees of eye opening and closing, and the complication of glasses being worn, the positioning of the eye key points is prone to large errors.
  • in the above WingLoss loss function, the loss coefficient of the eye key points is therefore greater than the loss coefficient of the other face key points.
  • for example, the loss coefficient of the eye key points is 3 and the loss coefficient of the other face key points is 1, which increases the weight of the eye key points and helps them to be positioned more accurately without affecting the other key points.
  • after the registered face image is input into the face recognition model, the face key point detection model in the face recognition model outputs the registered face key points;
  • the face pose detection model, being connected to the face key point detection model, takes as input the registered face image and the face key point features output by the intermediate feature layer of the face key point detection model, and outputs the face pose.
  • the face pose is thus predicted with the assistance of the face key points, without affecting the accuracy of the face key points, which decouples the face key point extraction; after the face pose is obtained, whether the registered face image is a frontal face image can be determined according to the face pose.
  • when the registered face image is a frontal face image, it can be aligned according to the registered face key points to obtain an aligned registered face.
  • the face features in the aligned registered faces are extracted as registered face features and stored in a preset database for use in subsequent face recognition.
  • acquiring multiple face images to be registered includes: acquiring multiple registered face images corresponding to the same registered user whose eyes are in different states.
  • the registered face key points include registered eye key points and registered other face key points
  • the acquiring of the registered face key points output by the face key point detection model further includes: extracting, with the face key point detection model, the eye feature points in the multiple registered face pictures;
  • the eye feature points obtained in different states are used as the registered eye key points of the same registered user.
  • multiple registered face images corresponding to the same registered user whose eyes are in different states are acquired.
  • when the face key point detection model outputs the registered face key points, it can also be used to extract the eye feature points in the multiple registered face pictures, and the eye feature points obtained in different states are used as the registered eye key points of the same registered user, which enriches the registration data for the eye key points and reduces face recognition errors at the data-source level.
  • the registered face image is used as the input of a pre-trained face recognition model
  • the face recognition model includes a face key point detection model and a face pose detection model, the face key point detection model is connected with the face pose detection model, and the loss-function weight and the data sources for the eye key points are increased;
  • this not only solves the problem of insufficient robustness of face pose prediction but also solves the problem that the positioning of the eye key points is prone to large errors; the joint estimation of face key points and face pose is completed without losing the accuracy of the face key points, and the effect of improving the accuracy of eye key point recognition is achieved.
  • a face recognition device is provided.
  • the face recognition device provided in this embodiment can execute the face recognition method provided by any embodiment of the present application, and has the corresponding functional modules and beneficial effects for executing the method.
  • the face recognition device includes an image acquisition module 100 , a key point extraction module 200 , a face alignment module 300 , a feature extraction module 400 and a face recognition module 500 .
  • the image acquisition module 100 is used to acquire a face image to be recognized;
  • the key point extraction module 200 is used to perform face key point recognition on the face image to be recognized with a pre-trained face key point model, the face key point model being a lightweight neural network model;
  • the face alignment module 300 is used to align the face image to be recognized according to the face key points to obtain an aligned face image;
  • the feature extraction module 400 is used to extract the facial features in the aligned face image;
  • the face recognition module 500 is used to compare the facial features with the registered facial features in the preset database to identify the person in the face image to be recognized.
  • the above-mentioned apparatus further includes a face registration module 600; the face registration module 600 is used to obtain a registered face image and to use the registered face image as the input of a pre-trained face recognition model.
  • the face recognition model includes a face key point detection model and a face pose detection model, and the face key point detection model is connected with the face pose detection model; the face pose output by the face pose detection model is obtained, and the registered face key points output by the face key point detection model are obtained; when the face pose indicates a frontal face image, the registered face image is aligned according to the registered face key points to obtain an aligned registered face; and the facial features in the aligned registered face are extracted as the registered face features and stored in a preset database.
  • the face key point detection model is connected to the face pose detection model through an intermediate feature layer of the face key point detection model, and the face pose detection model is used to determine, according to the input registered face image and the face key point features, whether the registered face image is a frontal face image, the face key point features being the output of the intermediate feature layer.
  • the face registration module 600 is further configured to obtain a plurality of face images to be registered and to detect the faces in the plurality of face images to be registered to obtain a registered face image, the registered face image being the face image to be registered with the highest face feature value among the plurality of face images to be registered.
  • the face registration module 600 is further configured to acquire multiple registered face pictures corresponding to the same registered user with the eyes in different states; the registered face key points include registered eye key points and other registered face key points, and the face registration module 600 is also used to extract the eye feature points in the multiple registered face pictures with the face key point detection model, the eye feature points obtained in different states being used as the registered eye key points of the same registered user.
  • the face key points include eye key points and other face key points
  • the face recognition model is obtained by training based on a preset loss function.
  • in the preset loss function, the loss coefficient of the eye key points is larger than the loss coefficient of the other face key points.
  • the image acquisition module 100 is also used to acquire an image to be recognized; the face in the image to be recognized is detected and extracted to obtain an image of the face to be recognized.
  • Figure 7 shows an internal structure diagram of the robot in one embodiment.
  • the robot includes a processor, memory and network interface connected through a system bus.
  • the memory includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium of the robot stores an operating system, and also stores a computer program.
  • the processor can implement the face recognition method.
  • a computer program may also be stored in the internal memory, and when the computer program is executed by the processor, the processor may execute the face recognition method.
  • FIG. 7 is only a block diagram of a partial structure related to the solution of the present application and does not constitute a limitation on the robot to which the solution is applied; a specific robot may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
  • a robot comprising a memory and a processor, wherein the memory stores a computer program, and when the computer program is executed by the processor, the processor performs the following steps:
  • the face key point model is a lightweight neural network model
  • the registered facial features are obtained by performing feature extraction on registered frontal face images, and the registered frontal face images are obtained by screening with a pre-trained face pose detection model.
  • before the acquiring of the face image to be recognized, the steps further include:
  • obtaining a registered face image; taking the registered face image as the input of a pre-trained face recognition model, the face recognition model including a face key point detection model and a face pose detection model, the face key point detection model being connected with the face pose detection model;
  • obtaining the face pose output by the face pose detection model, and obtaining the registered face key points output by the face key point detection model; and, when the face pose indicates a frontal face image,
  • aligning the registered face image according to the registered face key points to obtain an aligned registered face, extracting the facial features in the aligned registered face as the registered face features, and storing them in the preset database.
  • the face key point detection model is connected to the face pose detection model through an intermediate feature layer of the face key point detection model, and the face pose detection model is used to determine, according to the input registered face image and the face key point features, whether the registered face image is a frontal face image, the face key point features being the output of the intermediate feature layer.
  • the obtaining of the registered face image includes: obtaining a plurality of face images to be registered; and detecting the faces in the plurality of face images to be registered to obtain a registered face image, the registered face image being the face image to be registered with the highest face feature value among the plurality of face images to be registered.
  • the acquiring of multiple face images to be registered includes: acquiring multiple registered face pictures corresponding to the same registered user with the eyes in different states; the registered face key points include registered eye key points and other registered face key points, and the obtaining of the registered face key points output by the face key point detection model further includes: extracting, with the face key point detection model, the eye feature points in the multiple registered face pictures, the eye feature points obtained in different states being used as the registered eye key points of the same registered user.
  • the face key points include eye key points and other face key points
  • the face recognition model is obtained by training based on a preset loss function.
  • in the preset loss function, the loss coefficient of the eye key points is larger than the loss coefficient of the other face key points.
  • the acquiring an image of a face to be recognized includes: acquiring an image to be recognized; and detecting and extracting a human face in the image to be recognized to obtain an image of the face to be recognized.
  • a computer-readable storage medium which stores a computer program, and when the computer program is executed by a processor, causes the processor to perform the following steps:
  • the face key point model is a lightweight neural network model
  • the registered facial features are obtained by performing feature extraction on registered frontal face images, and the registered frontal face images are obtained by screening with a pre-trained face pose detection model.
  • before the acquiring of the face image to be recognized, the steps further include:
  • obtaining a registered face image; taking the registered face image as the input of a pre-trained face recognition model, the face recognition model including a face key point detection model and a face pose detection model, the face key point detection model being connected with the face pose detection model;
  • obtaining the face pose output by the face pose detection model, and obtaining the registered face key points output by the face key point detection model; when the face pose indicates a frontal face image, aligning the registered face image according to the registered face key points to obtain an aligned registered face; and extracting the facial features in the aligned registered face as the registered face features and storing them in the preset database.
  • the face key point detection model is connected to the face pose detection model through an intermediate feature layer of the face key point detection model, and the face pose detection model is used to determine, according to the input registered face image and the face key point features, whether the registered face image is a frontal face image, the face key point features being the output of the intermediate feature layer.
  • the obtaining of the registered face image includes: obtaining a plurality of face images to be registered; and detecting the faces in the plurality of face images to be registered to obtain a registered face image, the registered face image being the face image to be registered with the highest face feature value among the plurality of face images to be registered.
  • the acquiring of multiple face images to be registered includes: acquiring multiple registered face pictures corresponding to the same registered user with the eyes in different states; the registered face key points include registered eye key points and other registered face key points, and the obtaining of the registered face key points output by the face key point detection model further includes: extracting, with the face key point detection model, the eye feature points in the multiple registered face pictures, the eye feature points obtained in different states being used as the registered eye key points of the same registered user.
  • the face key points include eye key points and other face key points
  • the face recognition model is obtained by training based on a preset loss function.
  • in the preset loss function, the loss coefficient of the eye key points is larger than the loss coefficient of the other face key points.
  • the acquiring an image of a face to be recognized includes: acquiring an image to be recognized; and detecting and extracting a human face in the image to be recognized to obtain an image of the face to be recognized.
  • Nonvolatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in various forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.
  • SRAM: static RAM
  • DRAM: dynamic RAM
  • SDRAM: synchronous DRAM
  • DDR SDRAM: double data rate SDRAM
  • ESDRAM: enhanced SDRAM
  • SLDRAM: Synchlink DRAM
  • RDRAM: Rambus direct RAM
  • DRDRAM: direct Rambus dynamic RAM

Abstract

Disclosed in the present application are a face recognition method, a robot, and a storage medium. The face recognition method comprises: acquiring a face image to be recognized; performing face key point recognition on said face image with a pre-trained face key point model, the face key point model being a lightweight neural network model; aligning said face image according to the face key points obtained by recognition to obtain an aligned face image; extracting a face feature from the aligned face image; and comparing the face feature with registered face features in a preset database to recognize a person in the face image to be recognized, the registered face features being obtained by performing feature extraction on registered front face images, and the registered front face images being obtained by screening with a pre-trained face pose detection model. Embodiments of the present application achieve the effect of accurately completing face recognition under the condition of limited device-side computing power.

Description

A face recognition method, robot, and storage medium
Technical Field
The present application relates to the technical field of face recognition, and in particular to a face recognition method, a robot, and a storage medium.
Background Art
Face recognition is a biometric identification technology that identifies people based on facial feature information. With the rapid development of science and technology, face recognition technology is increasingly applied across industries, for example in mobile phones, security equipment, and robots; accordingly, the demand for technological innovation in face recognition is becoming more and more urgent.
Face key point recognition is an important part of face recognition algorithms and applications. The accuracy of key point recognition directly affects the accuracy of face recognition. Existing approaches to improving the accuracy of face key point recognition, and of the subsequent face recognition, often need to set up a relatively complex network model.
Because the computing power on the robot side is limited, complex models are difficult to use on the robot side. Therefore, there is an urgent need for a face recognition method that can be used on the robot side.
Summary of the Application
Based on this, in view of the above problems, it is necessary to propose a face recognition method, a device, a computer device, and a storage medium that can accurately complete face recognition even when the computing power on the device side is limited.
In a first aspect, an embodiment of the present application provides a face recognition method, the method comprising:
acquiring a face image to be recognized;
performing face key point recognition on the face image to be recognized with a pre-trained face key point model, the face key point model being a lightweight neural network model;
aligning the face image to be recognized according to the identified face key points to obtain an aligned face image;
extracting the face features in the aligned face image; and
comparing the face features with the registered face features in a preset database to identify the person in the face image to be recognized, the registered face features being obtained by performing feature extraction on registered frontal face images, and the registered frontal face images being obtained by screening with a pre-trained face pose detection model.
In a second aspect, an embodiment of the present application provides a computer device, including a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the following steps:
acquiring a face image to be recognized;
performing face key point recognition on the face image to be recognized with a pre-trained face key point model, the face key point model being a lightweight neural network model;
aligning the face image to be recognized according to the identified face key points to obtain an aligned face image;
extracting the face features in the aligned face image; and
comparing the face features with the registered face features in a preset database to identify the person in the face image to be recognized, the registered face features being obtained by performing feature extraction on registered frontal face images, and the registered frontal face images being obtained by screening with a pre-trained face pose detection model.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the following steps:
acquiring a face image to be recognized;
performing face key point recognition on the face image to be recognized with a pre-trained face key point model, the face key point model being a lightweight neural network model;
aligning the face image to be recognized according to the identified face key points to obtain an aligned face image;
extracting the face features in the aligned face image; and
comparing the face features with the registered face features in a preset database to identify the person in the face image to be recognized, the registered face features being obtained by performing feature extraction on registered frontal face images, and the registered frontal face images being obtained by screening with a pre-trained face pose detection model.
Implementing the embodiments of the present application has the following beneficial effects:
The embodiments of the present application perform face key point recognition on the face image to be recognized with a pre-trained face key point model, where the face key point model adopts a lightweight neural network model, which reduces computing power requirements; the registered face features in the preset database are used for comparison to ensure the accuracy of recognition, and the registered face features are accurate face features obtained through multiple models. This solves the problem that the obtained face key points are not accurate enough because of insufficient computing power on the device side, and achieves the effect of accurately completing face recognition when device-side computing power is limited.
Brief Description of the Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the following briefly introduces the drawings required for describing the embodiments or the prior art. Obviously, the drawings in the following description are only some embodiments of the present application, and for those of ordinary skill in the art, other drawings can be obtained from these drawings without any creative effort.
In the drawings:
FIG. 1 is an application environment diagram of a face recognition method in an embodiment of the present application;
FIG. 2 is a flowchart of a face recognition method in an embodiment of the present application;
FIG. 3 is a flowchart of acquiring a face image to be recognized in a face recognition method according to an embodiment of the present application;
FIG. 4 is a flowchart of a face recognition method in another embodiment of the present application;
FIG. 5 is a flowchart of acquiring a registered face image in a face recognition method according to an embodiment of the present application;
FIG. 6 is a structural block diagram of a face recognition device in an embodiment of the present application;
FIG. 7 is a structural block diagram of a robot in an embodiment of the present application.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the present application.
FIG. 1 is an application environment diagram of a face recognition method in one embodiment. Referring to FIG. 1, the face recognition method is applied to a face recognition device. The face recognition device includes a terminal 110 and a server 120, which are connected through a network. The terminal 110 may specifically be a desktop terminal or a mobile terminal, and the mobile terminal may specifically be at least one of a mobile phone, a tablet computer, a notebook computer, a robot, and the like. The server 120 may be implemented as an independent server or as a server cluster composed of multiple servers. The terminal 110 is used to acquire a face image to be recognized and upload it to the server 120; the server 120 is used to receive the face image to be recognized, perform face key point recognition on it with a pre-trained face key point model, the face key point model being a lightweight neural network model, align the face image to be recognized according to the identified face key points to obtain an aligned face image, extract the face features in the aligned face image, and compare the face features with the registered face features in a preset database to identify the person in the face image to be recognized, the registered face features being obtained by performing feature extraction on registered frontal face images, and the registered frontal face images being obtained by screening with a pre-trained face pose detection model.
In another embodiment, the above face recognition method can be applied directly to the terminal 110. The terminal 110 is used to acquire a face image to be recognized; perform face key point recognition on it with a pre-trained face key point model, the face key point model being a lightweight neural network model; align the face image to be recognized according to the identified face key points to obtain an aligned face image; extract the face features in the aligned face image; and compare the face features with the registered face features in a preset database to identify the person in the face image to be recognized, the registered face features being obtained by performing feature extraction on registered frontal face images, and the registered frontal face images being obtained by screening with a pre-trained face pose detection model.
As shown in FIG. 2, in one embodiment, a face recognition method is provided. The method can be applied to either a terminal or a server; this embodiment is described by taking application to a terminal as an example. The face recognition method specifically includes the following steps:
S110. Acquire a face image to be recognized.
In this embodiment, the terminal that executes the method is a robot, and the robot includes a camera. When a user needs to perform face recognition, the user can place the face in front of the camera of the robot, so that the robot can capture the face through the camera, and the captured image is used as the face image to be recognized.
Preferably, as shown in FIG. 3, step S110 may specifically include the following steps:
S111. Acquire an image to be recognized.
S112. Detect and extract the face in the image to be recognized to obtain a face image to be recognized.
In this embodiment, the image captured by the robot is used as the image to be recognized. In order to ensure that a face exists in the image to be recognized when face recognition is performed, the face in the image to be recognized also needs to be detected, so that the part of the image containing only the face is extracted as the face image to be recognized, which improves both the accuracy and the efficiency of face recognition.
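The description does not name a specific face detector for steps S111 and S112. Purely as an illustration, the sketch below crops the largest detected face with OpenCV's stock Haar cascade; both the detector choice and the largest-face tie-break are assumptions rather than requirements of the application.

```python
import cv2

def crop_face_to_recognize(image_bgr):
    """Detect the face in the captured image and return the cropped face region.

    The Haar-cascade detector is an illustrative stand-in; step S112 only requires
    that some detector confirms a face exists and extracts the face part.
    """
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(boxes) == 0:
        return None  # no face found: this frame is not used for recognition
    x, y, w, h = max(boxes, key=lambda b: b[2] * b[3])  # keep the largest detection
    return image_bgr[y:y + h, x:x + w]
```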
S120. Perform face key point recognition on the face image to be recognized with a pre-trained face key point model, the face key point model being a lightweight neural network model.
S130. Align the face image to be recognized according to the identified face key points to obtain an aligned face image.
In this embodiment, after the face image to be recognized is obtained, face key point recognition can be performed on it. A pre-trained face key point detection model can be used to recognize the face key points: after the face image to be recognized is input to the face key point detection model, the model outputs the face key points of the image. Once the face key points are obtained, the face image to be recognized can be aligned according to them to obtain an aligned face image. The aligned face image is a frontal view obtained by transforming the face image to be recognized based on the face key points, and only after the aligned face image is obtained can it be used for the feature comparison in face recognition, which guarantees the accuracy of face recognition. The face key point detection model can specifically be a MobileNetV3_small neural network model. MobileNetV3_small is the lightweight version of MobileNetV3; it offers good accuracy with excellent speed while requiring relatively low computing power, and MobileNetV3 combines the advantages of MobileNetV1 and MobileNetV2, making it a strong mobile-side feature extraction model. Preferably, when the computing power on the device is still more limited, for example on a small robot, the MobileNetV3_small model can be channel-pruned to obtain MobileNetV3_small x0.5, which still achieves a good face recognition effect with only a small loss of accuracy.
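The description fixes the key point backbone (MobileNetV3_small, optionally channel-pruned to x0.5) but does not spell out the alignment step. A minimal sketch of step S130 is given below, assuming the model predicts five landmarks (left eye, right eye, nose tip, mouth corners) and using the commonly used 112x112 ArcFace reference template; both are assumptions, since the text only says the image is warped to a frontal layout based on the key points.

```python
import cv2
import numpy as np

# Canonical 5-point template for a 112x112 frontal crop (left eye, right eye,
# nose tip, left mouth corner, right mouth corner). These coordinates are the
# widely used ArcFace template -- an assumption, not values from the patent.
REFERENCE_5PTS = np.float32([
    [38.2946, 51.6963], [73.5318, 51.5014], [56.0252, 71.7366],
    [41.5493, 92.3655], [70.7299, 92.2041],
])

def align_face(face_img, keypoints_5x2, out_size=(112, 112)):
    """Warp the face to a frontal layout with a similarity transform estimated
    from the predicted key points (step S130)."""
    src = np.float32(keypoints_5x2)
    matrix, _ = cv2.estimateAffinePartial2D(src, REFERENCE_5PTS, method=cv2.LMEDS)
    return cv2.warpAffine(face_img, matrix, out_size)
```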
S140. Extract the face features in the aligned face image.
S150. Compare the face features with the registered face features in a preset database to identify the person in the face image to be recognized.
In this embodiment, after the aligned face image is obtained, the face features in the aligned face image can be extracted, and the face features are then compared with the registered face features in the preset database, so that the person in the face image to be recognized can be identified. The registered face features are obtained by performing feature extraction on registered frontal face images, and the registered frontal face images are obtained by screening with a pre-trained face pose detection model. Specifically, before face recognition is performed, the users to be recognized must first register their faces: the registered user's face key points can be obtained with the same method as in the above steps, and the registered face features are then extracted. To further guarantee recognition accuracy, frontal face images are screened out in advance with the pre-trained face pose detection model, and the registered face features are obtained by feature extraction from the registered frontal face images; each registered face feature includes the corresponding user name. In this way, the registered face features of the frontal face images of all registered users are stored in the preset database, and during face recognition the person in the face image to be recognized can be identified simply by comparison.
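Step S150 only states that the extracted feature is compared with the registered features; how the comparison is scored is left open. The sketch below assumes the features are fixed-length embeddings and uses cosine similarity with an illustrative threshold, neither of which is specified in the description.

```python
import numpy as np

def identify(face_feature, registry, threshold=0.5):
    """Match an extracted face feature against the preset database.

    `registry` maps user name -> registered face feature (the output of step S250).
    Cosine similarity and the 0.5 threshold are illustrative assumptions.
    """
    query = face_feature / (np.linalg.norm(face_feature) + 1e-12)
    best_name, best_score = None, -1.0
    for name, registered in registry.items():
        ref = registered / (np.linalg.norm(registered) + 1e-12)
        score = float(np.dot(query, ref))
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None
```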
The embodiment of the present application performs face key point recognition on the face image to be recognized with a pre-trained face key point model, where the face key point model adopts a lightweight neural network model, which reduces computing power requirements; the registered face features in the preset database are used for comparison to ensure the accuracy of recognition, and the registered face features are accurate face features obtained through multiple models. This solves the problem that the obtained face key points are not accurate enough because of insufficient computing power on the device side, and achieves the effect of accurately completing face recognition when device-side computing power is limited.
As shown in FIG. 4, in another embodiment, before step S110 of the face recognition method of the above embodiment, the method further includes:
S210. Acquire a registered face image.
In this embodiment, before face recognition is performed, the user to be recognized must register a face, and a registered face image therefore needs to be acquired first. The registered face image may be captured by the robot in real time, or uploaded by the user through the server; it is the face image that the user registers into the preset database for comparison in subsequent face recognition.
Preferably, as shown in FIG. 5, step S210 may specifically include the following steps:
S211. Acquire multiple face images to be registered.
S212. Detect the faces in the multiple face images to be registered to obtain a registered face image, the registered face image being the face image to be registered with the highest face feature value among the multiple face images to be registered.
In this embodiment, because the registered face image of a registered user is being obtained, multiple face images to be registered can be acquired, and the best image among them can be selected as the registered face image. After the multiple face images to be registered are obtained, the faces in them are first detected, ensuring that a face exists in the registered face image. In addition, because there are multiple face images to be registered, after the images in which no face exists are screened out, the face image to be registered with the highest face feature value is selected as the registered face image; that is, the registered face image is the one with the highest face feature value among the multiple face images to be registered. The highest face feature value can be reflected as the largest image size of the face among the face images to be registered. For example, if the image size of the face in a first face image to be registered is 100×100 and the image size of the face in a second face image to be registered is 10×10, then the face feature value of the first face image to be registered is the highest, and the first face image to be registered is used as the registered face image.
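Following the 100×100 versus 10×10 example above, a minimal sketch of steps S211 and S212 can take the area of the detected face as the face feature value; reusing the Haar-cascade detection sketch from earlier is an assumption of convenience.

```python
def select_registration_image(candidate_images):
    """Pick the candidate whose detected face is largest (steps S211/S212).

    Candidates in which no face is detected are screened out; the detected face
    area stands in for the 'face feature value' of the description.
    """
    best_img, best_area = None, 0
    for img in candidate_images:
        face = crop_face_to_recognize(img)  # reuses the earlier detection sketch
        if face is None:
            continue
        area = face.shape[0] * face.shape[1]
        if area > best_area:
            best_img, best_area = img, area
    return best_img  # None if no candidate contained a face
```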
S220. Take the registered face image as the input of a pre-trained face recognition model, the face recognition model including a face key point detection model and a face pose detection model, the face key point detection model being connected with the face pose detection model.
S230. Obtain the face pose output by the face pose detection model, and obtain the registered face key points output by the face key point detection model.
S240. When the face pose indicates a frontal face image, align the registered face image according to the registered face key points to obtain an aligned registered face.
S250. Extract the face features in the aligned registered face as the registered face features and store them in a preset database.
In this embodiment, after the registered face image is obtained, it can be used as the input of the face recognition model to obtain the face pose and the registered face key points. The face recognition model is pre-trained and includes a pre-trained face key point detection model and a pre-trained face pose detection model, with the face key point detection model connected to the face pose detection model. The face key point detection model is also used for face key point extraction in face recognition, while the face pose detection model can be constructed simply according to actual needs. Specifically, the face key point detection model is connected to the face pose detection model through an intermediate feature layer of the face key point detection model. For example, an intermediate feature layer of the face key point detection model is selected and connected to the face pose detection model through two convolutional layers with a stride of 2, which decouples the face key point extraction and serves as the input of the face pose detection model, so that the face key points assist the prediction of the face pose. The joint model obtained by connecting the intermediate feature layer of the face key point detection model with the face pose detection model, that is, the face recognition model, is well balanced: the face pose prediction places little dependency and little computation on the extraction of the face key points, and the large errors of face pose prediction based on 3D standard modules are avoided.
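A minimal PyTorch sketch of this joint structure is shown below. The backbone, channel counts, number of key points, and the 3-value pose parameterization (yaw, pitch, roll) are placeholders; only the idea of a pose branch made of two stride-2 convolutions hung off an intermediate feature layer of the key point model comes from the description.

```python
import torch
import torch.nn as nn

class JointKeypointPoseModel(nn.Module):
    """Face key point model with a pose branch attached to an intermediate layer.

    The stem stands in for a MobileNetV3_small-style lightweight backbone; the
    pose branch follows the 'two convolutional layers with a stride of 2'
    connection described in the application.
    """

    def __init__(self, num_keypoints=5, mid_channels=48):
        super().__init__()
        self.stem = nn.Sequential(            # early layers -> intermediate features
            nn.Conv2d(3, mid_channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, mid_channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.keypoint_head = nn.Sequential(   # main branch: (x, y) per key point
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(mid_channels, num_keypoints * 2),
        )
        self.pose_branch = nn.Sequential(     # two stride-2 convs, as in the description
            nn.Conv2d(mid_channels, mid_channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, mid_channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(mid_channels, 3),       # yaw, pitch, roll -- assumed parameterization
        )

    def forward(self, x):
        feats = self.stem(x)
        keypoints = self.keypoint_head(feats)
        pose = self.pose_branch(feats)        # the pose branch can be trained on detached
        return keypoints, pose                # features if stricter decoupling is wanted
```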
It should be noted that the face recognition model is trained with preset loss functions. The preset loss function used by the face key point detection model may be the WingLoss loss function, specifically:
wing(x) = ω · ln(1 + |x| / ε),    if |x| < ω
wing(x) = |x| − C,                otherwise, where C = ω − ω · ln(1 + ω / ε)
Here C is a constant, ω is a positive number that limits the range of the nonlinear part of the loss function to the interval [−ω, ω], and ε constrains the curvature of that nonlinear region. These parameters can be chosen according to actual requirements; for example, ω = 10 and ε = 2. Compared with the L1, L2 and L1_smooth loss functions, the WingLoss loss function responds more strongly to small differences, so it allows the face key point detection model to localize the key points more precisely during training. The preset loss function used by the face pose detection model may be the L1_smooth loss function.
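A sketch of the key point loss described above, using the standard Wing loss formulation with the example choices ω = 10 and ε = 2, could look as follows; it returns an unreduced, element-wise loss so that per-key-point coefficients can be applied afterwards.

```python
import math
import torch

def wing_loss(pred, target, omega=10.0, epsilon=2.0):
    """Element-wise Wing loss on key point coordinates (no reduction).

    pred, target: tensors of shape (batch, num_keypoints * 2).
    """
    diff = (pred - target).abs()
    c = omega - omega * math.log(1.0 + omega / epsilon)   # joins the two branches
    nonlinear = omega * torch.log(1.0 + diff / epsilon)   # used where |x| < omega
    linear = diff - c                                      # used elsewhere
    return torch.where(diff < omega, nonlinear, linear)
```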
In this embodiment, the face key points include eye key points and other face key points; the other face key points may be nose, mouth and ear key points, and the eye key points include left-eye and right-eye key points. However, because eye sizes differ, the degree to which eyes are open varies, and glasses complicate the appearance of the eye region, the eye key points are prone to relatively large localization errors. Preferably, in the WingLoss loss function above the loss coefficient of the eye key points is therefore greater than that of the other key points; for example, the loss coefficient of the eye key points is 3 and that of the other face key points is 1. This increases the weight of the eye key points and helps locate them more precisely without affecting the other key points.
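Building on the wing_loss sketch above, the larger loss coefficient for the eye key points can be applied as a per-key-point weight vector (3 for eye points, 1 for all others). Which indices correspond to the eyes depends on the key point layout and is assumed to be known here.

```python
import torch

def weighted_keypoint_loss(per_element_loss, eye_indices, num_keypoints, eye_weight=3.0):
    """Weight eye key points more heavily than the other key points, then reduce.

    per_element_loss: unreduced loss of shape (batch, num_keypoints * 2),
                      e.g. the output of the wing_loss sketch above.
    eye_indices: positions of the eye key points in the layout (assumed known).
    """
    weights = torch.ones(num_keypoints)
    weights[list(eye_indices)] = eye_weight       # eye key points get coefficient 3
    weights = weights.repeat_interleave(2)        # one weight per (x, y) coordinate
    return (per_element_loss * weights).mean()
```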
Further, after the registered face image is input into the face recognition model, the face key point detection model outputs the registered face key points, while the face pose detection model, being connected to the key point model, takes as input the registered face image together with the face key point features output by the intermediate feature layer and outputs the face pose. Pose prediction is thus assisted by the face key points without affecting their accuracy, which decouples it from the key point extraction. Once the face pose is obtained, it can be used to determine whether the registered face image is a frontal image; when it is, the registered face image is aligned according to the registered face key points to obtain an aligned registered face. Finally, the facial features of the aligned registered face are extracted as the registered facial features and stored in a preset database for use in subsequent face recognition.
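The alignment step itself is not spelled out in the text. One common way to realize it, sketched below with OpenCV, is to estimate a similarity transform from five detected key points (eye centres, nose tip, mouth corners) to fixed reference positions and warp the face into a canonical crop; the 112×112 five-point template used here is a widely used convention and an assumption, not something taken from this application.

```python
import cv2
import numpy as np

# Reference positions of left eye, right eye, nose tip and the two mouth corners
# in a 112x112 aligned crop (a commonly used template; assumed here).
REFERENCE_5PTS = np.array([
    [38.2946, 51.6963], [73.5318, 51.5014], [56.0252, 71.7366],
    [41.5493, 92.3655], [70.7299, 92.2041],
], dtype=np.float32)

def align_face(image, keypoints_5, size=(112, 112)):
    """Warp `image` so that its five key points land on the reference template."""
    src = np.asarray(keypoints_5, dtype=np.float32)
    matrix, _ = cv2.estimateAffinePartial2D(src, REFERENCE_5PTS)
    if matrix is None:                 # estimation can fail on degenerate input
        return None
    return cv2.warpAffine(image, matrix, size)
```

The same alignment would be applied both at enrollment (S240) and at recognition time so that the extracted features are directly comparable.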
In an alternative embodiment, acquiring the plurality of face images to be registered includes acquiring multiple registered face pictures of the same registered user with the eyes in different states. The registered face key points include registered eye key points and other registered face key points, and obtaining the registered face key points output by the face key point detection model further includes: using the face key point detection model to extract the eye feature points from the multiple registered face pictures, and using the resulting eye feature points under the different states as the registered eye key points of that same registered user. In this embodiment, to further reduce the error of the eye key points, pictures of the same user with the eyes in different states are captured during registration, and the eye feature points extracted from all of them are registered together. This enriches the enrollment data for the eye key points and reduces recognition error at the data-source level.
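A minimal sketch of this enrollment-time aggregation is shown below; the callable `keypoint_model`, the `eye_indices` layout and the dict-like store are placeholders for whatever the surrounding system provides.

```python
def register_eye_keypoints(user_id, pictures, keypoint_model, eye_indices, database):
    """Store eye key points extracted from pictures taken with the eyes in different states.

    keypoint_model: callable mapping an image to a sequence of (x, y) key points.
    database: dict-like store, user_id -> list of per-state eye key point sets.
    """
    eye_sets = []
    for picture in pictures:                      # e.g. eyes open, half closed, with glasses
        keypoints = keypoint_model(picture)
        eye_sets.append([keypoints[i] for i in eye_indices])
    database.setdefault(user_id, []).extend(eye_sets)
    return eye_sets
```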
By using the registered face image as the input of a pre-trained face recognition model in which the face key point detection model is connected to the face pose detection model, and by increasing both the loss-function weight and the enrollment data of the eye key points, the embodiments of the present application not only address the limited robustness of face pose prediction but also the tendency of eye key points to suffer large localization errors. Joint estimation of face key points and face pose is achieved without losing key point accuracy, and the recognition accuracy of the eye key points is improved.
As shown in FIG. 6, in one embodiment a face recognition apparatus is provided. The apparatus provided in this embodiment can execute the face recognition method provided by any embodiment of the present application and has the corresponding functional modules and beneficial effects. The face recognition apparatus includes an image acquisition module 100, a key point extraction module 200, a face alignment module 300, a feature extraction module 400 and a face recognition module 500.
Specifically, the image acquisition module 100 is configured to acquire a face image to be recognized; the key point extraction module 200 is configured to perform face key point recognition on the face image to be recognized using a pre-trained face key point model, the face key point model being a lightweight neural network model; the face alignment module 300 is configured to align the face image to be recognized according to the face key points to obtain an aligned face image; the feature extraction module 400 is configured to extract the facial features of the aligned face image; and the face recognition module 500 is configured to compare the facial features with the registered facial features in a preset database to identify the person in the face image to be recognized.
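The comparison carried out by the face recognition module 500 is typically a nearest-neighbour search over feature vectors. The cosine-similarity sketch below, including the 0.5 acceptance threshold, is illustrative only; the application does not prescribe a particular metric or threshold.

```python
import numpy as np

def identify(face_feature, registered_features, threshold=0.5):
    """Return the registered identity whose feature is most similar, or None.

    registered_features: dict mapping identity -> 1-D feature vector.
    """
    query = face_feature / (np.linalg.norm(face_feature) + 1e-12)
    best_name, best_score = None, threshold
    for name, feature in registered_features.items():
        ref = feature / (np.linalg.norm(feature) + 1e-12)
        score = float(np.dot(query, ref))         # cosine similarity
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```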
In one embodiment, the apparatus further includes a face registration module 600 configured to: acquire a registered face image; use the registered face image as the input of a pre-trained face recognition model, the face recognition model including a face key point detection model and a face pose detection model, the face key point detection model being connected to the face pose detection model; obtain the face pose output by the face pose detection model and the registered face key points output by the face key point detection model; when the face pose indicates a frontal face image, align the registered face image according to the registered face key points to obtain an aligned registered face; and extract the facial features of the aligned registered face as the registered facial features and store them in a preset database.
In one embodiment, the face key point detection model is connected to the face pose detection model through an intermediate feature layer of the face key point detection model, and the face pose detection model is configured to determine, from the input registered face image and the face key point features, whether the registered face image is a frontal face image, the face key point features being the output of the intermediate feature layer.
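Whether a predicted pose counts as a frontal face is ultimately a thresholding decision; a sketch of such a check, with illustrative angle limits in degrees that are not specified in this application, is:

```python
def is_frontal(yaw, pitch, roll, max_yaw=20.0, max_pitch=20.0, max_roll=25.0):
    """Treat the face as frontal when all three Euler angles are within the limits."""
    return abs(yaw) <= max_yaw and abs(pitch) <= max_pitch and abs(roll) <= max_roll
```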
In one embodiment, the face registration module 600 is further configured to acquire a plurality of face images to be registered, and to detect the faces in the plurality of face images to be registered to obtain the registered face image, the registered face image being the face image to be registered with the highest face feature value among the plurality of face images to be registered.
In one embodiment, the face registration module 600 is further configured to acquire multiple registered face pictures of the same registered user with the eyes in different states; the registered face key points include registered eye key points and other registered face key points, and the face registration module 600 is further configured to extract, using the face key point detection model, the eye feature points from the multiple registered face pictures and to use the resulting eye feature points under the different states as the registered eye key points of that same registered user.
In one embodiment, the face key points include eye key points and other face key points, the face recognition model is trained based on a preset loss function, and in the preset loss function the loss coefficient of the eye key points is greater than that of the other face key points.
In one embodiment, the image acquisition module 100 is further configured to acquire an image to be recognized, and to detect and extract the face in the image to be recognized to obtain the face image to be recognized.
FIG. 7 shows a diagram of the internal structure of the robot in one embodiment. As shown in FIG. 7, the robot includes a processor, a memory and a network interface connected through a system bus, where the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the robot stores an operating system and may also store a computer program which, when executed by the processor, causes the processor to implement the face recognition method. The internal memory may also store a computer program which, when executed by the processor, causes the processor to execute the face recognition method. Those skilled in the art will understand that the structure shown in FIG. 7 is only a block diagram of the part of the structure related to the solution of the present application and does not limit the robot to which the solution is applied; a specific robot may include more or fewer components than shown, combine certain components, or arrange the components differently.
In one embodiment, a robot is provided, including a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the following steps:
acquiring a face image to be recognized;
performing face key point recognition on the face image to be recognized using a pre-trained face key point model, the face key point model being a lightweight neural network model;
aligning the face image to be recognized according to the recognized face key points to obtain an aligned face image;
extracting facial features of the aligned face image;
comparing the facial features with registered facial features in a preset database to identify the person in the face image to be recognized, the registered facial features being obtained by performing feature extraction on a registered frontal face image, and the registered frontal face image being obtained by screening with a pre-trained face pose detection model.
In one embodiment, before acquiring the face image to be recognized, the steps include:
acquiring a registered face image; using the registered face image as the input of a pre-trained face recognition model, the face recognition model including a face key point detection model and a face pose detection model, the face key point detection model being connected to the face pose detection model; obtaining the face pose output by the face pose detection model and the registered face key points output by the face key point detection model; when the face pose indicates a frontal face image, aligning the registered face image according to the registered face key points to obtain an aligned registered face; and extracting the facial features of the aligned registered face as the registered facial features and storing them in a preset database.
In one embodiment, the face key point detection model is connected to the face pose detection model through an intermediate feature layer of the face key point detection model, and the face pose detection model is configured to determine, from the input registered face image and the face key point features, whether the registered face image is a frontal face image, the face key point features being the output of the intermediate feature layer.
In one embodiment, acquiring the registered face image includes: acquiring a plurality of face images to be registered; and detecting the faces in the plurality of face images to be registered to obtain the registered face image, the registered face image being the face image to be registered with the highest face feature value among the plurality of face images to be registered.
In one embodiment, acquiring the plurality of face images to be registered includes: acquiring multiple registered face pictures of the same registered user with the eyes in different states; the registered face key points include registered eye key points and other registered face key points, and obtaining the registered face key points output by the face key point detection model further includes: extracting, with the face key point detection model, the eye feature points from the multiple registered face pictures, and using the resulting eye feature points under the different states as the registered eye key points of that same registered user.
In one embodiment, the face key points include eye key points and other face key points, the face recognition model is trained based on a preset loss function, and in the preset loss function the loss coefficient of the eye key points is greater than that of the other face key points.
In one embodiment, acquiring the face image to be recognized includes: acquiring an image to be recognized; and detecting and extracting the face in the image to be recognized to obtain the face image to be recognized.
In one embodiment, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, causes the processor to perform the following steps:
acquiring a face image to be recognized;
performing face key point recognition on the face image to be recognized using a pre-trained face key point model, the face key point model being a lightweight neural network model;
aligning the face image to be recognized according to the recognized face key points to obtain an aligned face image;
extracting facial features of the aligned face image;
comparing the facial features with registered facial features in a preset database to identify the person in the face image to be recognized, the registered facial features being obtained by performing feature extraction on a registered frontal face image, and the registered frontal face image being obtained by screening with a pre-trained face pose detection model.
In one embodiment, before acquiring the face image to be recognized, the steps include:
acquiring a registered face image; using the registered face image as the input of a pre-trained face recognition model, the face recognition model including a face key point detection model and a face pose detection model, the face key point detection model being connected to the face pose detection model; obtaining the face pose output by the face pose detection model and the registered face key points output by the face key point detection model; when the face pose indicates a frontal face image, aligning the registered face image according to the registered face key points to obtain an aligned registered face; and extracting the facial features of the aligned registered face as the registered facial features and storing them in a preset database.
In one embodiment, the face key point detection model is connected to the face pose detection model through an intermediate feature layer of the face key point detection model, and the face pose detection model is configured to determine, from the input registered face image and the face key point features, whether the registered face image is a frontal face image, the face key point features being the output of the intermediate feature layer.
In one embodiment, acquiring the registered face image includes: acquiring a plurality of face images to be registered; and detecting the faces in the plurality of face images to be registered to obtain the registered face image, the registered face image being the face image to be registered with the highest face feature value among the plurality of face images to be registered.
In one embodiment, acquiring the plurality of face images to be registered includes: acquiring multiple registered face pictures of the same registered user with the eyes in different states; the registered face key points include registered eye key points and other registered face key points, and obtaining the registered face key points output by the face key point detection model further includes: extracting, with the face key point detection model, the eye feature points from the multiple registered face pictures, and using the resulting eye feature points under the different states as the registered eye key points of that same registered user.
In one embodiment, the face key points include eye key points and other face key points, the face recognition model is trained based on a preset loss function, and in the preset loss function the loss coefficient of the eye key points is greater than that of the other face key points.
In one embodiment, acquiring the face image to be recognized includes: acquiring an image to be recognized; and detecting and extracting the face in the image to be recognized to obtain the face image to be recognized.
Those of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the flows of the embodiments of the above methods. Any reference to memory, storage, a database or another medium used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered to fall within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent of the present application. It should be pointed out that those of ordinary skill in the art can make several modifications and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (21)

  1. A face recognition method, characterized in that the method comprises:
    acquiring a face image to be recognized;
    performing face key point recognition on the face image to be recognized using a pre-trained face key point model, the face key point model being a lightweight neural network model;
    aligning the face image to be recognized according to the recognized face key points to obtain an aligned face image;
    extracting facial features of the aligned face image; and
    comparing the facial features with registered facial features in a preset database to identify the person in the face image to be recognized, wherein the registered facial features are obtained by performing feature extraction on a registered frontal face image, and the registered frontal face image is obtained by screening with a pre-trained face pose detection model.
  2. The method according to claim 1, characterized in that, before acquiring the face image to be recognized, the method comprises:
    acquiring a registered face image;
    using the registered face image as the input of a pre-trained face recognition model, the face recognition model comprising a face key point detection model and a face pose detection model, the face key point detection model being connected to the face pose detection model;
    obtaining the face pose output by the face pose detection model, and obtaining the registered face key points output by the face key point detection model;
    when the face pose indicates a frontal face image, aligning the registered face image according to the registered face key points to obtain an aligned registered face; and
    extracting the facial features of the aligned registered face as the registered facial features and storing them in the preset database.
  3. The method according to claim 2, characterized in that the face key point detection model is connected to the face pose detection model through an intermediate feature layer of the face key point detection model, and the face pose detection model is configured to determine, from the input registered face image and face key point features, whether the registered face image is a frontal face image, the face key point features being the output of the intermediate feature layer.
  4. The method according to claim 2, characterized in that acquiring the registered face image comprises:
    acquiring a plurality of face images to be registered; and
    detecting the faces in the plurality of face images to be registered to obtain the registered face image, the registered face image being the face image to be registered with the highest face feature value among the plurality of face images to be registered.
  5. The method according to claim 4, characterized in that acquiring the plurality of face images to be registered comprises:
    acquiring multiple registered face pictures of the same registered user with the eyes in different states;
    wherein the registered face key points comprise registered eye key points and other registered face key points, and obtaining the registered face key points output by the face key point detection model further comprises:
    extracting, with the face key point detection model, the eye feature points from the multiple registered face pictures, and using the resulting eye feature points under the different states as the registered eye key points of the same registered user.
  6. The method according to claim 2, characterized in that the face key points comprise eye key points and other face key points, the face key point detection model is trained based on a preset loss function, and in the preset loss function the loss coefficient of the eye key points is greater than that of the other face key points.
  7. The method according to claim 1, characterized in that acquiring the face image to be recognized comprises:
    acquiring an image to be recognized; and
    detecting and extracting the face in the image to be recognized to obtain the face image to be recognized.
  8. A robot, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the following steps:
    acquiring a face image to be recognized;
    performing face key point recognition on the face image to be recognized using a pre-trained face key point model, the face key point model being a lightweight neural network model;
    aligning the face image to be recognized according to the recognized face key points to obtain an aligned face image;
    extracting facial features of the aligned face image; and
    comparing the facial features with registered facial features in a preset database to identify the person in the face image to be recognized, wherein the registered facial features are obtained by performing feature extraction on a registered frontal face image, and the registered frontal face image is obtained by screening with a pre-trained face pose detection model.
  9. The robot according to claim 8, characterized in that, before acquiring the face image to be recognized, the steps comprise:
    acquiring a registered face image;
    using the registered face image as the input of a pre-trained face recognition model, the face recognition model comprising a face key point detection model and a face pose detection model, the face key point detection model being connected to the face pose detection model;
    obtaining the face pose output by the face pose detection model, and obtaining the registered face key points output by the face key point detection model;
    when the face pose indicates a frontal face image, aligning the registered face image according to the registered face key points to obtain an aligned registered face; and
    extracting the facial features of the aligned registered face as the registered facial features and storing them in the preset database.
  10. The robot according to claim 9, characterized in that the face key point detection model is connected to the face pose detection model through an intermediate feature layer of the face key point detection model, and the face pose detection model is configured to determine, from the input registered face image and face key point features, whether the registered face image is a frontal face image, the face key point features being the output of the intermediate feature layer.
  11. The robot according to claim 9, characterized in that acquiring the registered face image comprises:
    acquiring a plurality of face images to be registered; and
    detecting the faces in the plurality of face images to be registered to obtain the registered face image, the registered face image being the face image to be registered with the highest face feature value among the plurality of face images to be registered.
  12. The robot according to claim 11, characterized in that acquiring the plurality of face images to be registered comprises:
    acquiring multiple registered face pictures of the same registered user with the eyes in different states;
    wherein the registered face key points comprise registered eye key points and other registered face key points, and obtaining the registered face key points output by the face key point detection model further comprises:
    extracting, with the face key point detection model, the eye feature points from the multiple registered face pictures, and using the resulting eye feature points under the different states as the registered eye key points of the same registered user.
  13. The robot according to claim 9, characterized in that the face key points comprise eye key points and other face key points, the face key point detection model is trained based on a preset loss function, and in the preset loss function the loss coefficient of the eye key points is greater than that of the other face key points.
  14. The robot according to claim 8, characterized in that acquiring the face image to be recognized comprises:
    acquiring an image to be recognized; and
    detecting and extracting the face in the image to be recognized to obtain the face image to be recognized.
  15. A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the following steps:
    acquiring a face image to be recognized;
    performing face key point recognition on the face image to be recognized using a pre-trained face key point model, the face key point model being a lightweight neural network model;
    aligning the face image to be recognized according to the recognized face key points to obtain an aligned face image;
    extracting facial features of the aligned face image; and
    comparing the facial features with registered facial features in a preset database to identify the person in the face image to be recognized, wherein the registered facial features are obtained by performing feature extraction on a registered frontal face image, and the registered frontal face image is obtained by screening with a pre-trained face pose detection model.
  16. The storage medium according to claim 15, characterized in that, before acquiring the face image to be recognized, the steps comprise:
    acquiring a registered face image;
    using the registered face image as the input of a pre-trained face recognition model, the face recognition model comprising a face key point detection model and a face pose detection model, the face key point detection model being connected to the face pose detection model;
    obtaining the face pose output by the face pose detection model, and obtaining the registered face key points output by the face key point detection model;
    when the face pose indicates a frontal face image, aligning the registered face image according to the registered face key points to obtain an aligned registered face; and
    extracting the facial features of the aligned registered face as the registered facial features and storing them in the preset database.
  17. The storage medium according to claim 16, characterized in that the face key point detection model is connected to the face pose detection model through an intermediate feature layer of the face key point detection model, and the face pose detection model is configured to determine, from the input registered face image and face key point features, whether the registered face image is a frontal face image, the face key point features being the output of the intermediate feature layer.
  18. The storage medium according to claim 16, characterized in that acquiring the registered face image comprises:
    acquiring a plurality of face images to be registered; and
    detecting the faces in the plurality of face images to be registered to obtain the registered face image, the registered face image being the face image to be registered with the highest face feature value among the plurality of face images to be registered.
  19. The storage medium according to claim 18, characterized in that acquiring the plurality of face images to be registered comprises:
    acquiring multiple registered face pictures of the same registered user with the eyes in different states;
    wherein the registered face key points comprise registered eye key points and other registered face key points, and obtaining the registered face key points output by the face key point detection model further comprises:
    extracting, with the face key point detection model, the eye feature points from the multiple registered face pictures, and using the resulting eye feature points under the different states as the registered eye key points of the same registered user.
  20. The storage medium according to claim 16, characterized in that the face key points comprise eye key points and other face key points, the face key point detection model is trained based on a preset loss function, and in the preset loss function the loss coefficient of the eye key points is greater than that of the other face key points.
  21. The storage medium according to claim 15, characterized in that acquiring the face image to be recognized comprises:
    acquiring an image to be recognized; and
    detecting and extracting the face in the image to be recognized to obtain the face image to be recognized.
PCT/CN2020/137065 2020-12-17 2020-12-17 Face recognition method, robot, and storage medium WO2022126464A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/137065 WO2022126464A1 (en) 2020-12-17 2020-12-17 Face recognition method, robot, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/137065 WO2022126464A1 (en) 2020-12-17 2020-12-17 Face recognition method, robot, and storage medium

Publications (1)

Publication Number Publication Date
WO2022126464A1 true WO2022126464A1 (en) 2022-06-23

Family

ID=82059916

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/137065 WO2022126464A1 (en) 2020-12-17 2020-12-17 Face recognition method, robot, and storage medium

Country Status (1)

Country Link
WO (1) WO2022126464A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160210500A1 (en) * 2015-01-15 2016-07-21 Samsung Electronics Co., Ltd. Method and apparatus for adjusting face pose
CN106096538A (en) * 2016-06-08 2016-11-09 中国科学院自动化研究所 Face identification method based on sequencing neural network model and device
CN111126358A (en) * 2020-02-25 2020-05-08 京东方科技集团股份有限公司 Face detection method, face detection device, storage medium and equipment
CN111160307A (en) * 2019-12-31 2020-05-15 帷幄匠心科技(杭州)有限公司 Face recognition method and face recognition card punching system

Similar Documents

Publication Publication Date Title
WO2019228317A1 (en) Face recognition method and device, and computer readable medium
WO2021008328A1 (en) Image processing method and device, terminal, and storage medium
CN108009528B (en) Triple Loss-based face authentication method and device, computer equipment and storage medium
US20200160040A1 (en) Three-dimensional living-body face detection method, face authentication recognition method, and apparatuses
CN108205655B (en) Key point prediction method and device, electronic equipment and storage medium
WO2021012526A1 (en) Face recognition model training method, face recognition method and apparatus, device, and storage medium
WO2019192121A1 (en) Dual-channel neural network model training and human face comparison method, and terminal and medium
WO2019119505A1 (en) Face recognition method and device, computer device and storage medium
WO2018228218A1 (en) Identification method, computing device, and storage medium
WO2021232985A1 (en) Facial recognition method and apparatus, computer device, and storage medium
WO2020252917A1 (en) Fuzzy face image recognition method and apparatus, terminal device, and medium
TWI712980B (en) Claim information extraction method and device, and electronic equipment
CN111832581B (en) Lung feature recognition method and device, computer equipment and storage medium
US11315363B2 (en) Systems and methods for gait recognition via disentangled representation learning
CN107944395B (en) Method and system for verifying and authenticating integration based on neural network
WO2022247539A1 (en) Living body detection method, estimation network processing method and apparatus, computer device, and computer readable instruction product
TWI725398B (en) Electronic device and method for estimating optical flow
WO2021031704A1 (en) Object tracking method and apparatus, computer device, and storage medium
CN113205002B (en) Low-definition face recognition method, device, equipment and medium for unlimited video monitoring
CN112434556A (en) Pet nose print recognition method and device, computer equipment and storage medium
CN111860055A (en) Face silence living body detection method and device, readable storage medium and equipment
CN113298158A (en) Data detection method, device, equipment and storage medium
US20220327189A1 (en) Personalized biometric anti-spoofing protection using machine learning and enrollment data
CN111382791A (en) Deep learning task processing method, image recognition task processing method and device
CN112488053B (en) Face recognition method, device, robot and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20965481

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20965481

Country of ref document: EP

Kind code of ref document: A1