WO2021051510A1 - Method and apparatus for generating a face image, computer device and storage medium - Google Patents

Method and apparatus for generating a face image, computer device and storage medium

Info

Publication number
WO2021051510A1
Authority
WO
WIPO (PCT)
Prior art keywords
face image
angle
target face
target
image
Prior art date
Application number
PCT/CN2019/116707
Other languages
English (en)
Chinese (zh)
Inventor
徐玲玲
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2021051510A1 publication Critical patent/WO2021051510A1/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Definitions

  • This application relates to the field of computers, and in particular to a method, an apparatus, a computer device, and a storage medium for generating face images.
  • Face recognition has been widely applied in robotics and related disciplines, and is of great significance for the automatic identification and discrimination of humans.
  • However, usually only a front-view image is collected for each user.
  • An existing face recognition neural network can generate only one type of picture and cannot generate pictures of multiple angle types at the same time. As a result, real face recognition systems suffer from recognition defects, the recognition rate drops sharply, and normal use is affected.
  • The main purpose of this application is to provide a method for generating a face image, which aims to solve the technical problem that existing face recognition neural networks cannot generate pictures of multiple angle types at the same time.
  • This application proposes a method for generating a face image, including:
  • the original face image is converted into the first target face image corresponding to the first angle vector according to a preset conversion method, where the first angle vector is included in the angle vectors corresponding to all target face images, and the first target face image is included in all target face images;
  • This application also provides a device for generating a face image, including:
  • the first input module is used to simultaneously input the original face image and the angle vector of the target face image into a pre-trained face image generation network, where there is at least one angle vector of the target face image;
  • the conversion module is used to convert the original face image into the first target face image corresponding to the first angle vector through the pre-trained face image generation network according to a preset conversion method, where the first angle vector is included in the angle vectors corresponding to all target face images, and the first target face image is included in all target face images;
  • the evaluation module is used to evaluate, through a preset discrimination network, whether the first angle vector matches the first target face image;
  • the output module is used to output the first target face image if they match.
  • the present application also provides a computer device, including a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method when the computer program is executed.
  • the present application also provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps of the above-mentioned method are realized.
  • the original face image and the angle vector are input into the face image generation network together, and the angle vector acts on the original face image to synthesize the target face image corresponding to that angle.
  • the face image generation network of the present application is trained on multi-user, multi-angle face images and carries generation templates for multiple angles, so it can accept one original face image together with multiple angle vectors as input.
  • target face images at multiple angles corresponding to the original face image are generated at the same time.
  • if the face image corresponding to an angle is classified into the correct angle domain by the discrimination network, this indicates that the reliability of both the generation network and the discrimination network is high. If the classification is incorrect but the generation network is found reliable by the first correction loss function analysis, the discrimination network needs to be corrected, and the second correction loss function Ld is used for the correction.
  • Fig. 1 is a schematic flowchart of a method for generating a face image according to an embodiment of the present application
  • Fig. 2 is a schematic structural diagram of an apparatus for generating a face image according to an embodiment of the present application
  • Fig. 3 is a schematic diagram of the internal structure of a computer device according to an embodiment of the present application.
  • a method for generating a face image includes:
  • the original face image is converted into the first target face image corresponding to the first angle vector according to a preset conversion method, where the first angle vector is included in the angle vectors corresponding to all target face images, and the first target face image is included in all target face images.
  • the original face image and the angle vector of the target face image are input into the face image generation network together, and the angle vector acts on the original face image to synthesize the target face image corresponding to the angle vector.
  • the face image generation network of this embodiment is trained on multi-user, multi-angle face images and carries generation templates for multiple angles, so it can accept one original face image together with multiple angle vectors as input.
  • the target face images obtained by converting the original face image with each angle vector are generated and output at the same time.
  • step S2 of converting the original face image into the first target face image corresponding to the first angle vector according to the preset conversion method through the pre-trained face image generation network includes:
  • S22 Locate the feature points corresponding to the designated organs of the face according to the feature point positioning model, where the designated organs include at least one type.
  • S25 Form a first target face image according to the face area corresponding to the first angle vector.
  • the head image of the face is cropped from the original image and enlarged to form the face image.
  • the head region image is recognized and located according to the positioning model, and the coordinate data of the feature points are mapped, according to the angle vector, to the feature point template corresponding to that angle vector to obtain the target face image.
  • the above-mentioned feature points of the human face include at least five: the feature points corresponding to the two eyes, the feature points corresponding to the two mouth corners, and the feature point corresponding to the nose.
  • each coordinate is represented as (x, y); then, using an OpenCV affine transformation, the coordinate data of the five obtained feature points are mapped to the feature point template corresponding to the pre-stored angle vector, and the face area corresponding to the angle vector is obtained to form the target face image.
  • through a face detection model such as MTCNN, the five feature points of the face can be obtained, and the coordinate data corresponding to the five feature points can be determined.
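The mapping step above can be sketched as a least-squares affine fit from the five detected feature points to a feature-point template. The coordinate values, and the use of NumPy's `lstsq` in place of the OpenCV affine transform mentioned above, are illustrative assumptions rather than the application's actual implementation.

```python
import numpy as np

# Hypothetical coordinates of the five feature points (two eyes, nose,
# two mouth corners), as a detector such as MTCNN might return them.
detected = np.array([
    [71.0, 112.0],   # left eye
    [155.0, 108.0],  # right eye
    [112.0, 160.0],  # nose
    [80.0, 205.0],   # left mouth corner
    [148.0, 202.0],  # right mouth corner
])

# Hypothetical feature-point template for one angle vector.
template = np.array([
    [58.0, 58.0],
    [158.0, 58.0],
    [108.0, 120.0],
    [70.0, 180.0],
    [146.0, 180.0],
])

def fit_affine(src, dst):
    """Least-squares 2x3 affine transform A with dst ~= [src | 1] @ A.T."""
    src_h = np.hstack([src, np.ones((src.shape[0], 1))])  # homogeneous coords
    A, _, _, _ = np.linalg.lstsq(src_h, dst, rcond=None)
    return A.T                                            # shape (2, 3)

def apply_affine(A, pts):
    pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return pts_h @ A.T

A = fit_affine(detected, template)
mapped = apply_affine(A, detected)   # detected points carried into template space
print(np.round(mapped, 1))
```

In the actual method, the fitted 2x3 matrix would then warp the whole face area (for example with an OpenCV image-warping routine), not just the five points.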
  • step S3 of evaluating whether the first angle vector matches the first target face image through a preset discrimination network includes:
  • S31 Input the inverse angle of the first angle vector and the first target face image into the pre-trained face image generation network.
  • S32 Generate a second target face image according to the inverse angle of the first angle vector and the first target face image.
  • This embodiment verifies the reliability of the pre-trained face image generation network by taking the inverse of the angle corresponding to the synthesized target face image, together with that target face image, as the input of the network and checking whether the original face image is output. If the original face image can be output, the face image generation network is reliable.
  • if the angle corresponding to the above synthesized target face image is c, the inverse of that angle is -c.
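The inverse-angle round trip described above can be illustrated with a toy, exactly invertible stand-in for the generation network; the rotation-based `toy_generator` is purely an assumption for demonstration and is not the network of this application.

```python
import numpy as np

def toy_generator(points, angle_deg):
    """Stand-in for the generation network G: G(x, c) rotates a set of
    2-D points by the angle c, so G(G(x, c), -c) recovers the input."""
    t = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])
    return points @ rot.T

x = np.array([[1.0, 0.0], [0.0, 2.0], [-1.5, 0.5]])  # toy "original image"
c = 30.0                                              # corresponding angle

first_target = toy_generator(x, c)        # synthesized target "image" at angle c
x_back = toy_generator(first_target, -c)  # feed the inverse angle back into G

# Matching criterion from the method: the round trip reproduces x,
# so this toy generator would be judged reliable.
matches = np.allclose(x_back, x)
print(matches)
```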
  • step S33 of determining whether the second target face image is the same as the original face image includes:
  • here G represents the face image generation network, and G(x, c) represents the face image generated when the input image is x and the angle is c; D represents the discrimination network function, D(x) represents the probability distribution of the domain label computed by D on the real image, and c' = -c.
  • the first correction loss function is Lg, and the face image generation network G is corrected by the first correction loss function.
  • G(x,c) means to generate a face image with an angle of c
  • Lfcls is used to optimize the generation network G, that is, the face image generated by the generation network G can be classified into the target angle domain c by the discrimination network D.
  • Lrec ensures that when the original face image x is transformed into the angle domain c and then transformed back to the original angle domain c', the result is consistent with the original face image.
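The roles of the three loss terms can be illustrated numerically with scalar stand-ins for the discrimination network's outputs. The expressions follow the adversarial, angle-classification and reconstruction descriptions above, but the concrete values and the simplification of D to a single probability are assumptions for illustration only.

```python
import numpy as np

def loss_terms(d_real, d_fake, x, x_rec):
    """d_real ~ D(x) and d_fake ~ D(G(x, c)) treated as probabilities;
    x is the original image, x_rec = G(G(x, c), c') the cycled image."""
    l_adv = np.log(d_real) + np.log(1.0 - d_fake)   # adversarial term Ladv
    l_fcls = -np.log(d_fake)                        # angle-classification Lfcls
    l_rec = np.abs(x - x_rec).mean()                # L1 reconstruction Lrec
    return l_adv, l_fcls, l_rec

x = np.array([0.2, 0.5, 0.9])        # toy original "image"
x_rec = np.array([0.25, 0.45, 0.9])  # toy round-trip reconstruction

l_adv, l_fcls, l_rec = loss_terms(d_real=0.9, d_fake=0.4, x=x, x_rec=x_rec)
print(round(float(l_rec), 4))  # mean absolute reconstruction error
```

A small Lrec indicates the round-trip image stays close to the original, which is exactly the matching criterion the discrimination step checks.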
  • after the step S4 of outputting the first target face image, the method includes:
  • the generation network is connected with the discrimination network, and the face images output by the generation network are classified by angle to further verify the reliability of the generation network. If the face image corresponding to an angle is classified into the correct angle domain by the discrimination network, the reliability of both the generation network and the discrimination network is high. If the classification is incorrect but the generation network is found reliable by the first correction loss function analysis, the discrimination network needs to be corrected, and the second correction loss function Ld is used for the correction. In this embodiment, the angle of the first target face image relative to the original face is compared against the corresponding angle classification range; if it falls within the range, the classification is correct.
  • before step S1 of simultaneously inputting the original face image and the angle vector of the target face image into the pre-trained face image generation network, the method includes:
  • S11 Obtain face images of a specified number of users, where the face images of the same user include at least two angle images corresponding to the specified angles.
  • S14 Input the training samples into the neural network for training, and obtain a classification model that can classify the image domain corresponding to the specified angle.
  • the parameters of the angle vector are added at the same time to generate face image generation templates corresponding to different angle domains.
  • the face images are classified, images of the same angle are put into the same class, and each angle is represented by a unique angle vector of specified length.
  • the specified number is greater than or equal to 1000 users, and each user corresponds to face images of multiple angles.
  • the specified angles include, for example, -90, -60, -45, -30, 0, 30, 45, 60, and 90 degrees.
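The mapping of each specified angle to a unique vector of specified length can be sketched as a one-hot encoding; the encoding choice is an assumption for illustration, since the application does not fix one.

```python
# The specified angles listed above.
SPECIFIED_ANGLES = [-90, -60, -45, -30, 0, 30, 45, 60, 90]

def angle_vector(angle):
    """Map a specified angle to a unique one-hot vector of fixed length."""
    vec = [0] * len(SPECIFIED_ANGLES)
    vec[SPECIFIED_ANGLES.index(angle)] = 1
    return vec

print(angle_vector(45))   # the position of 45 degrees is set to 1
print(angle_vector(-90))
```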
  • step S13 of combining the angle image and the vector value of the specified length corresponding to the specified angle into a training sample includes:
  • the training samples are corrected by positioning to improve the accuracy of the training samples.
  • the face image is centered by cropping, and then the pixel positions of the feature points corresponding to the facial organs are corrected. For example, if the standard face image is 256*256, the standard pixel positions of the two eyes are (58, 58) and (158, 58).
  • the correction aligns them to the standard pixel positions of the standard face image corresponding to the first specified angle, and the feature points corresponding to the other facial organs are corrected in the same way. This reduces the differences between samples and their impact on the training accuracy of the face image generation network, and improves the accuracy with which the generation network synchronously outputs face images corresponding to multiple angles.
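The eye correction in the example above can be sketched as a two-point similarity transform that sends detected eye points exactly onto the standard coordinates (58, 58) and (158, 58) of a 256x256 face; the detected positions and the helper `eye_alignment` are illustrative assumptions.

```python
import numpy as np

def eye_alignment(left_eye, right_eye,
                  std_left=(58.0, 58.0), std_right=(158.0, 58.0)):
    """Similarity transform (scale + rotation + translation) that maps
    the two detected eye points exactly onto the standard positions."""
    src = np.array(right_eye) - np.array(left_eye)
    dst = np.array(std_right) - np.array(std_left)
    # A 2-D similarity is multiplication by the complex number dst/src.
    s = complex(dst[0], dst[1]) / complex(src[0], src[1])
    M = np.array([[s.real, -s.imag],
                  [s.imag,  s.real]])                # scale * rotation
    t = np.array(std_left) - M @ np.array(left_eye)  # translation
    return M, t

# Hypothetical detected eye coordinates in a tilted source image.
M, t = eye_alignment((70.0, 80.0), (190.0, 100.0))
left_aligned = M @ np.array([70.0, 80.0]) + t
right_aligned = M @ np.array([190.0, 100.0]) + t
print(np.round(left_aligned, 6), np.round(right_aligned, 6))
```

In practice the resulting 2x3 matrix [M | t] would be handed to an image-warping routine to produce the corrected 256x256 training sample.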
  • the apparatus for generating a face image includes:
  • the first input module 1 is configured to simultaneously input the original face image and the angle vector of the target face image into a pre-trained face image generation network, where there is at least one angle vector of the target face image.
  • the conversion module 2 is used to convert the original face image into a first target face image corresponding to a first angle vector through the pre-trained face image generation network according to a preset conversion method, where the first angle vector is included in the angle vectors corresponding to all target face images, and the first target face image is included in all target face images.
  • the evaluation module 3 is used to evaluate whether the first angle vector matches the first target face image through a preset discrimination network.
  • the output module 4 is configured to output the first target face image if they match.
  • the original face image and the angle vector of the target face image are input into the face image generation network together, and the angle vector acts on the original face image to synthesize the target face image corresponding to the angle vector.
  • the face image generation network of this embodiment is trained on multi-user, multi-angle face images and carries generation templates for multiple angles, so it can accept one original face image together with multiple angle vectors as input.
  • the target face images obtained by converting the original face image with each angle vector are generated and output at the same time.
  • the conversion module 2 includes:
  • the cropping unit is used to crop the image of the head region of the face from the original face image.
  • the positioning unit is used for locating the feature points corresponding to the designated organs of the face according to the feature point positioning model, wherein the designated organs include at least one type.
  • the first acquiring unit is used to acquire the coordinate data of the characteristic points corresponding to each designated organ.
  • the mapping unit is used to map each coordinate data one-to-one to the feature point template corresponding to the first angle vector to obtain the face area corresponding to the first angle vector.
  • the forming unit is used to form the first target face image according to the face area corresponding to the first angle vector.
  • the head image of the face is cropped from the original image and enlarged to form the face image.
  • the head region image is recognized and located according to the positioning model, and the coordinate data of the feature points are mapped, according to the angle vector, to the feature point template corresponding to that angle vector to obtain the target face image.
  • the above-mentioned feature points of the human face include at least five: the feature points corresponding to the two eyes, the feature points corresponding to the two mouth corners, and the feature point corresponding to the nose.
  • each coordinate is represented as (x, y); then, using an OpenCV affine transformation, the coordinate data of the five obtained feature points are mapped to the feature point template corresponding to the pre-stored angle vector, and the face area corresponding to the angle vector is obtained to form the target face image.
  • through a face detection model such as MTCNN, the five feature points of the face can be obtained, and the coordinate data corresponding to the five feature points can be determined.
  • evaluation module 3 includes:
  • the input unit is used to input the inverse angle of the first angle vector and the first target face image as input to the pre-trained face image generation network.
  • the first generating unit is configured to generate a second target face image according to the inverse angle of the first angle vector and the first target face image.
  • the judging unit is used to judge whether the second target face image is the same as the original face image.
  • the determining unit is configured to determine that the first angle vector matches the first target face image if they are the same.
  • This embodiment verifies the reliability of the pre-trained face image generation network by taking the inverse of the angle corresponding to the synthesized target face image, together with that target face image, as the input of the network and checking whether the original face image is output. If the original face image can be output, the face image generation network is reliable.
  • if the angle corresponding to the above synthesized target face image is c, then the inverse of that angle is -c.
  • evaluation module 3 includes:
  • the determining unit is configured to determine the difference information between the second target face image and the original face image if the second target face image is different from the original face image.
  • here G represents the face image generation network, and G(x, c) represents the face image generated when the input image is x and the angle is c; D represents the discrimination network function, D(x) represents the probability distribution of the domain label computed by D on the real image, and c' = -c.
  • Ladv = log D(x) + log(1 - D(G(x, c)))
  • Lfcls = -log D(G(x, c))
  • Lrec = ||x - G(G(x, c), c')||1
  • the correction unit is configured to correct the parameters in the pre-trained face image generation network according to the first correction loss function.
  • the first correction loss function is Lg, and the face image generation network G is corrected by the first correction loss function.
  • G(x,c) means to generate a face image with an angle of c
  • Lfcls is used to optimize the generation network G, that is, the face image generated by the generation network G can be classified into the target angle domain c by the discrimination network D.
  • Lrec ensures that when the original face image x is transformed into the angle domain c and then transformed back to the original angle domain c', the result is consistent with the original face image.
  • the device for generating a face image includes:
  • the judging module is used to judge whether the angle classification corresponding to each first target face image is correct.
  • the generation network is connected with the discrimination network, and the face images output by the generation network are classified by angle to further verify the reliability of the generation network. If the face image corresponding to an angle is classified into the correct angle domain by the discrimination network, the reliability of both the generation network and the discrimination network is high. If the classification is incorrect but the generation network is found reliable by the first correction loss function analysis, the discrimination network needs to be corrected, and the second correction loss function Ld is used for the correction. In this embodiment, the angle of the first target face image relative to the original face is compared against the corresponding angle classification range; if it falls within the range, the classification is correct.
  • the device for generating a face image includes:
  • the obtaining module is used to obtain face images of a specified number of users, where the face images of the same user include at least two angle images corresponding to the specified angles.
  • the mapping module is used to map the specified angle corresponding to the angle image to the vector value of the specified length.
  • the combination module is used to combine the angle image and the vector value of the specified length corresponding to the specified angle into a training sample.
  • the second input module is used to input training samples into the neural network for training to obtain a classification model that can classify the image domain corresponding to the specified angle.
  • a further module is used to take the classification model as the pre-trained face image generation network.
  • the parameters of the angle vector are added at the same time to generate face image generation templates corresponding to different angle domains.
  • the face images are classified, images of the same angle are put into the same class, and each angle is represented by a unique angle vector of specified length.
  • the specified number is greater than or equal to 1000 users, and each user corresponds to face images of multiple angles.
  • the specified angles include, for example, -90, -60, -45, -30, 0, 30, 45, 60, and 90 degrees.
  • the combination module includes:
  • the second acquiring unit is configured to acquire the first pixel value corresponding to the facial organ in the first angle image, where the first angle image is included in all angle images.
  • the third acquiring unit is used to acquire the standard face image corresponding to the first designated angle.
  • the correction unit is used to correct the first pixel value according to the standard pixel value corresponding to each facial organ in the standard face image to obtain the corrected first pixel value.
  • the combination unit is used to combine the second angle image carrying the corrected first pixel value and the first vector value corresponding to the first angle into a corrected training sample.
  • the training samples are corrected by positioning to improve the accuracy of the training samples.
  • the face image is centered by cropping, and then the pixel positions of the feature points corresponding to the facial organs are corrected. For example, if the standard face image is 256*256, the standard pixel positions of the two eyes are (58, 58) and (158, 58).
  • the correction aligns them to the standard pixel positions of the standard face image corresponding to the first specified angle, and the feature points corresponding to the other facial organs are corrected in the same way. This reduces the differences between samples and their impact on the training accuracy of the face image generation network, and improves the accuracy with which the generation network simultaneously outputs face images corresponding to multiple angles.
  • an embodiment of the present application also provides a computer device.
  • the computer device may be a server, and its internal structure may be as shown in FIG. 3.
  • the computer device includes a processor, a memory, a network interface, and a database connected through a system bus, where the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a readable storage medium and an internal memory.
  • the readable storage medium stores an operating system, computer readable instructions, and a database.
  • the above-mentioned readable storage medium includes a non-volatile readable storage medium and a volatile readable storage medium.
  • the internal memory provides an environment for the operation of the operating system and the computer-readable instructions in the non-volatile storage medium.
  • the database of the computer equipment is used to store the generated face images and other data.
  • the network interface of the computer device is used to communicate with an external terminal through a network connection.
  • when the computer-readable instructions are executed, the processes of the above-mentioned method embodiments are performed.
  • An embodiment of the present application also provides a computer-readable storage medium on which computer-readable instructions are stored; when the computer-readable instructions are executed by a processor, the processes of the foregoing method embodiments are performed.
  • the above-mentioned readable storage medium includes non-volatile readable storage medium and volatile readable storage medium.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method and an apparatus for generating a face image, a computer device, and a storage medium. The method for generating a face image comprises: simultaneously inputting an original face image and an angle vector of a target face image into a pre-trained face image generation network; converting, by the pre-trained face image generation network, the original face image into a first target face image corresponding to a first angle vector according to a preset conversion method, the first angle vector being included in the angle vectors corresponding to all target face images, and the first target face image being included in all target face images; evaluating, by a preset discrimination network, whether the first angle vector matches the first target face image; and if the first angle vector matches the first target face image, outputting the first target face image. The synthesis of the target face image corresponding to an angle is achieved by simultaneously inputting the original face image and the angle vector into the face image generation network and applying the angle vector to the original face image.
PCT/CN2019/116707 2019-09-19 2019-11-08 Method and apparatus for generating a face image, computer device and storage medium WO2021051510A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910886618.5A CN110781738B (zh) 2019-09-19 2019-09-19 Method and apparatus for generating a face image, computer device and storage medium
CN201910886618.5 2019-09-19

Publications (1)

Publication Number Publication Date
WO2021051510A1 (fr) 2021-03-25

Family

ID=69383588

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/116707 WO2021051510A1 (fr) 2019-09-19 2019-11-08 Method and apparatus for generating a face image, computer device and storage medium

Country Status (2)

Country Link
CN (1) CN110781738B (fr)
WO (1) WO2021051510A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111401459A (zh) * 2020-03-24 2020-07-10 谷元(上海)文化科技有限责任公司 A visual capture system for animated character shape changes
CN111652064B (zh) * 2020-04-30 2024-06-07 平安科技(深圳)有限公司 Face image generation method, electronic device and readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160125572A1 (en) * 2014-11-05 2016-05-05 Korea Advanced Institute Of Science And Technology Device and method to generate image using image learning model
CN107292813A (zh) * 2017-05-17 2017-10-24 浙江大学 A multi-pose face generation method based on generative adversarial networks
CN107506702A (zh) * 2017-08-08 2017-12-22 江西高创保安服务技术有限公司 Multi-angle-based face recognition model training and testing system and method
CN108776983A (zh) * 2018-05-31 2018-11-09 北京市商汤科技开发有限公司 Face reconstruction method and apparatus based on a reconstruction network, device, medium and product
CN109635745A (zh) * 2018-12-13 2019-04-16 广东工业大学 A method for generating multi-angle face images based on a generative adversarial network model

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6751684B2 (ja) * 2017-03-28 2020-09-09 株式会社Nttドコモ Similar image retrieval device
CN108509915B (zh) * 2018-04-03 2021-10-26 百度在线网络技术(北京)有限公司 Method and apparatus for generating a face recognition model


Also Published As

Publication number Publication date
CN110781738B (zh) 2024-09-20
CN110781738A (zh) 2020-02-11

Similar Documents

Publication Publication Date Title
EP1677250B1 (fr) System and method for image matching
US11403757B2 (en) Sight line detection method and sight line detection device
US10853677B2 (en) Verification method and system
US7706601B2 (en) Object posture estimation/correlation system using weight information
US6430307B1 (en) Feature extraction system and face image recognition system
WO2018028546A1 (fr) Key point positioning method, terminal, and computer storage medium
US7925060B2 (en) Authentication system and registration system related to facial feature information
US20150302240A1 (en) Method and device for locating feature points on human face and storage medium
US7916903B2 (en) Personal identification device, personal identification method, updating method for identification dictionary data, and updating program for identification dictionary data
CN112017212B (zh) Training and tracking method and system for a face key point tracking model
WO2019033574A1 (fr) Electronic device, method and system for dynamic face recognition in video, and storage medium
JPWO2018189795A1 (ja) Recognition device, recognition method, and recognition program
WO2020006964A1 (fr) Image detection method and device
WO2021051510A1 (fr) Method and apparatus for generating a face image, computer device and storage medium
CN110448870A (zh) A human body posture training method
WO2021212608A1 (fr) Sound source user positioning method and apparatus, and computer device
WO2015165227A1 (fr) Human face recognition method
WO2020223940A1 (fr) Posture prediction method, computer device and storage medium
CN109344714A (zh) A gaze estimation method based on key point matching
JP2007052575A (ja) Metadata assignment device and metadata assignment method
WO2020237941A1 (fr) Method and apparatus for detecting personnel state based on eyelid feature information
CN116597246A (zh) Model training method, target detection method, electronic device and storage medium
CN115827812A (zh) Relocation method, apparatus, device and storage medium thereof
CN115665361A (zh) Video fusion method in a virtual environment and online video conference communication method
WO2021084662A1 (fr) Verification assistance device, verification assistance method, and computer-readable recording medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19945539

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19945539

Country of ref document: EP

Kind code of ref document: A1