WO2020199693A1 - Large-pose face recognition method and apparatus, and related device - Google Patents

Large-pose face recognition method and apparatus, and related device

Info

Publication number
WO2020199693A1
WO2020199693A1 · PCT/CN2019/130871 · CN2019130871W
Authority
WO
WIPO (PCT)
Prior art keywords
face
image
dimensional
image feature
face recognition
Prior art date
Application number
PCT/CN2019/130871
Other languages
English (en)
Chinese (zh)
Inventor
乔宇
曾小星
彭小江
Original Assignee
中国科学院深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国科学院深圳先进技术研究院 filed Critical 中国科学院深圳先进技术研究院
Publication of WO2020199693A1 publication Critical patent/WO2020199693A1/fr

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Definitions

  • This application belongs to the field of face recognition, and in particular relates to a face recognition method, apparatus, and device for large poses.
  • In non-cooperative, uncontrolled environments, the captured face images are often disturbed by a variety of pose changes when recognizing a user's face; that is, the captured faces are in large poses. To improve the accuracy of face recognition in such environments, faces in large poses need to be recognized.
  • Current large-pose face recognition methods include the use of pose-aware networks and the use of deep networks to learn pose-robust facial features.
  • In a pose-aware network, each sub-network is responsible for one pose, and the network as a whole covers all face poses.
  • However, the training and testing processes are complicated and require more storage space.
  • The embodiments of the present application provide a face recognition method, apparatus, and device for large poses, to solve the problems in the prior art that the accuracy of face recognition is not high, or that the training and testing processes are complicated and require large storage space.
  • A first aspect of the embodiments of the present application provides a large-pose face recognition method, which includes:
  • Before the step of learning the first image feature of the face training image through the texture learning network, the method further includes:
  • The step of learning the first image feature of the face training image through the texture learning network includes:
  • The predicted label is compared with the ground-truth label of the face training image, and the first image feature of the face training image is learned under the supervision of a cross-entropy loss function.
  • The cross-entropy loss function is: $L_{ce} = -\sum_{i=1}^{C} y_i \log p_i$
  • where $x$ denotes the face training image, $y_i$ indicates whether the image belongs to the $i$-th class, $p_i$ denotes the predicted probability that the image belongs to the $i$-th class, $C$ is the number of classes, and $L_{ce}$ is the computed loss value.
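As an illustration, the cross-entropy supervision described above can be sketched in NumPy as follows; this is a minimal sketch, not the patent's implementation, and the function name and the use of a softmax over raw network outputs are assumptions.

```python
import numpy as np

def cross_entropy_loss(logits, true_class):
    """L_ce = -sum_i y_i * log(p_i), with y one-hot and p the softmax output."""
    logits = np.asarray(logits, dtype=float)
    exp = np.exp(logits - logits.max())   # numerically stable softmax
    p = exp / exp.sum()                   # p_i: probability of class i
    return -np.log(p[true_class])         # only the true class's y_i is 1
```

With two classes and equal logits the loss is log 2, and it shrinks as the probability mass placed on the true class grows.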
  • The step of reconstructing a corresponding three-dimensional face from the face training image, and converting the shape information of the reconstructed three-dimensional face into a two-dimensional texture image, includes:
  • The key point regression loss function and the prior loss function are: $L_{recon} = \frac{1}{N}\sum_{i=1}^{N} \lVert L_i^{gt} - L_i^{pr} \rVert^2 + \lambda \lVert \alpha \rVert^2$
  • where $L_{recon}$ is the computed loss value; the first term on the right represents the key point regression loss; $N$ is the number of key points; $L_i^{gt}$ denotes the label of the $i$-th key point; $L_i^{pr}$ denotes the predicted $i$-th key point; the second term on the right represents the prior loss; $\alpha$ denotes the shape parameters of the three-dimensional morphable model; and $\lambda$ is the configured loss-function weight.
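The combined reconstruction loss can be sketched as below. The patent text only names the two terms, so the mean over key points and the squared-norm prior are assumptions about the exact form:

```python
import numpy as np

def reconstruction_loss(kp_gt, kp_pred, alpha, weight=1.0):
    # First term: key point regression loss over the N annotated key points
    kp_gt, kp_pred = np.asarray(kp_gt), np.asarray(kp_pred)
    kp_term = np.mean(np.sum((kp_gt - kp_pred) ** 2, axis=1))
    # Second term: prior loss penalising large 3DMM shape parameters alpha,
    # scaled by the configured loss-function weight
    prior_term = weight * np.sum(np.asarray(alpha) ** 2)
    return kp_term + prior_term
```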
  • The step of combining the first image feature and the second image feature to recognize the face includes:
  • splicing the first image feature of a first dimension and the second image feature of a second dimension to obtain a fused third image feature of a third dimension, and performing face recognition according to the third image feature,
  • where the third dimension = first dimension + second dimension.
  • A second aspect of the embodiments of the present application provides a large-pose face recognition apparatus, which includes:
  • a first learning unit, configured to learn the first image feature of the face training image through the texture learning network;
  • a reconstruction unit, configured to reconstruct a corresponding three-dimensional face from the face training image, and to convert the shape information of the reconstructed three-dimensional face into a two-dimensional texture image;
  • a second learning unit, configured to learn the second image feature of the two-dimensional texture image through a shape learning network; and
  • a joint recognition unit, configured to combine the first image feature and the second image feature to recognize the face.
  • A third aspect of the embodiments of the present application provides a face recognition device, which includes a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor, when executing the computer program, implements the steps of the large-pose face recognition method according to any one of the first aspect.
  • A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the large-pose face recognition method according to any one of the first aspect.
  • The embodiments of this application have the following beneficial effects: the first image feature of the face training image is learned through the texture learning network; the three-dimensional face is then reconstructed, and the shape information of the reconstructed three-dimensional face is converted into a two-dimensional texture image;
  • the second image feature of the two-dimensional texture image is learned through the shape learning network; and the first image feature and the second image feature are then combined to recognize the face. The two-dimensional planar feature and the three-dimensional feature can thus be expressed jointly, which effectively improves the accuracy of face recognition under large poses, while the training process is relatively simple and the occupied storage space can be reduced.
  • FIG. 1 is a schematic flowchart of a large-pose face recognition method provided by an embodiment of the present application;
  • FIG. 2 is a schematic diagram of a face recognition structure provided by an embodiment of the present application;
  • FIG. 3 is a schematic diagram of a large-pose face recognition apparatus provided by an embodiment of the present application;
  • FIG. 4 is a schematic diagram of a face recognition device provided by an embodiment of the present application.
  • FIG. 1 is a schematic flowchart of a large-pose face recognition method provided by an embodiment of the present application, detailed as follows:
  • In step S101, the first image feature of the face training image is learned through the texture learning network.
  • The large pose mentioned in this application means that the user's pose is in an uncontrolled state and the user may take various poses; to describe these multi-pose scenarios, this application refers to them as large poses.
  • The present application may also include a step of detecting and aligning the face training image, that is, aligning the face image in the face training image and detecting key points in the face image.
  • There may be, for example, 21 facial key points.
  • A residual network with N layers (N may be 18) can be used as the network structure, and a pre-trained model may not be used.
  • The length and width of the input image can be a predetermined number of pixels (for example, 224), and face detection and face alignment are performed on the faces in the image.
  • The batch size used in the training process can be 128, and stochastic gradient descent can be used to optimize the weights layer by layer.
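A toy sketch of the mini-batch gradient-descent update mentioned above, with the stated batch size of 128. The linear model stands in for one network layer and is purely illustrative; the learning rate and data are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
BATCH_SIZE = 128                      # batch size used during training
LR = 0.1                              # learning rate (illustrative)

# Toy linear "layer": learn weights w so that x @ w matches targets y
true_w = np.array([1.0, -2.0, 0.5, 3.0])
x = rng.normal(size=(BATCH_SIZE, 4))  # one mini-batch of 128 samples
y = x @ true_w

w = np.zeros(4)
for _ in range(200):
    grad = 2 * x.T @ (x @ w - y) / BATCH_SIZE  # batch-averaged gradient
    w -= LR * grad                             # SGD weight update
```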
  • The face training image is fed into the texture learning network, the predicted label of the image is obtained through forward propagation of the network, the predicted label is compared with the true label of the image, and the classification loss is calculated through the cross-entropy loss function $L_{ce} = -\sum_{i=1}^{C} y_i \log p_i$,
  • where $x$ denotes the image, $y_i$ indicates whether the image belongs to the $i$-th class, $p_i$ denotes the predicted probability of the $i$-th class, $C$ is the number of classes, and $L_{ce}$ is the computed loss value.
  • In step S102, a corresponding three-dimensional face is reconstructed from the face training image, and the shape information of the reconstructed three-dimensional face is converted into a two-dimensional texture image.
  • The first image feature with semantic expression learned through the texture learning network in step S101 can be used for face recognition in this application, and can also be used for three-dimensional face reconstruction that retains identity information.
  • The three-dimensional face reconstruction network closely follows the texture learning network and takes two-dimensional faces as input. Unlike the texture learning network, the three-dimensional face reconstruction network may not perform the face detection and alignment operations.
  • The key point information of the face in the face training image can be annotated, the shape and expression parameters of a three-dimensional morphable model can be predicted through the three-dimensional face reconstruction network, and the three-dimensional face can then be reconstructed based on the three-dimensional morphable model.
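The morphable-model reconstruction step can be illustrated as below. This is a generic 3DMM sketch under assumed toy dimensions (the basis sizes 199 and 29 echo common 3DMM conventions and are not values from this application):

```python
import numpy as np

rng = np.random.default_rng(1)
V = 100                                      # number of mesh vertices (toy)
mean_shape = rng.normal(size=3 * V)          # mean face, flattened (x, y, z per vertex)
shape_basis = rng.normal(size=(3 * V, 199))  # shape (identity) basis
expr_basis = rng.normal(size=(3 * V, 29))    # expression basis

def reconstruct_face(alpha, beta):
    """3D face = mean shape + shape deformation + expression deformation."""
    s = mean_shape + shape_basis @ alpha + expr_basis @ beta
    return s.reshape(V, 3)
```

With all parameters zero, the reconstruction reduces to the mean face.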
  • The three-dimensional face reconstruction network is supervised through loss functions, as specifically shown in FIG. 2, including:
  • In step S201, the three-dimensional face corresponding to the face training image is reconstructed through the key point regression loss function and the prior loss function.
  • The loss is $L_{recon} = \frac{1}{N}\sum_{i=1}^{N} \lVert L_i^{gt} - L_i^{pr} \rVert^2 + \lambda \lVert \alpha \rVert^2$,
  • where $L_{recon}$ is the computed loss value; the first term on the right represents the key point regression loss; $N$ is the number of key points; $L_i^{gt}$ denotes the label of the $i$-th key point; $L_i^{pr}$ denotes the predicted $i$-th key point; the second term on the right represents the prior loss; $\alpha$ denotes the shape parameters of the three-dimensional morphable model; and $\lambda$ is the configured loss-function weight.
  • The rotation parameter is a three-dimensional output; offset predictions are made simultaneously for the X, Y, and Z coordinate axes, and finally all position coordinates are scaled to obtain the final three-dimensional key point prediction.
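A sketch of applying such a predicted pose (three-dimensional rotation, per-axis offsets, then a global scale) to model key points; the Euler-angle convention below is an assumption, not taken from this application:

```python
import numpy as np

def apply_pose(points, angles, offset, scale):
    """Rotate 3D key points by Euler angles, offset along X/Y/Z, then scale."""
    ax, ay, az = angles                      # 3-dimensional rotation output
    cx, sx = np.cos(ax), np.sin(ax)
    cy, sy = np.cos(ay), np.sin(ay)
    cz, sz = np.cos(az), np.sin(az)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx
    # simultaneous offsets for the X, Y, and Z axes, then a final scaling
    return scale * (points @ R.T + np.asarray(offset))
```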
  • In step S202, the vertex coordinates of the reconstructed three-dimensional face are projected into the texture space to obtain a two-dimensional texture image.
  • The texture space can completely express the shape information of the three-dimensional face with a two-dimensional map.
  • The number of channels in this map is 3, representing the X, Y, and Z coordinate values of the three-dimensional face.
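The projection into texture space can be sketched as a UV "position map": each vertex's (X, Y, Z) coordinates are written into a 3-channel image at that vertex's UV location. The nearest-pixel rasterisation below is a simplification assumed for illustration:

```python
import numpy as np

def uv_position_map(vertices, uv_coords, size=256):
    """Store per-vertex XYZ coordinates in a size x size x 3 texture-space map."""
    pos_map = np.zeros((size, size, 3), dtype=np.float32)
    # map UV coordinates in [0, 1] to pixel indices
    px = np.clip((np.asarray(uv_coords) * (size - 1)).astype(int), 0, size - 1)
    pos_map[px[:, 1], px[:, 0]] = vertices   # 3 channels = X, Y, Z values
    return pos_map
```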
  • In step S103, the second image feature of the two-dimensional texture image is learned through a shape learning network.
  • The two-dimensional texture image expresses the reconstructed three-dimensional coordinates.
  • The pose-robust feature in this two-dimensional texture image can be extracted through a residual network, and the supervision information can be the same as in step S101.
  • That is, features are extracted from the three-dimensionally reconstructed shape information to obtain features that are robust to pose.
  • In step S104, the first image feature and the second image feature are combined to recognize the face.
  • In the testing phase, joint expression can be performed. From steps S101-S103, the framework uses the texture learning network to extract the two-dimensional information of the face, which is the general information used in ordinary face recognition, and at the same time obtains three-dimensional identity information that is robust to pose. In the testing phase, a joint expression is obtained by splicing the fully-connected output features of the corresponding networks, which can mine the identity information of the face to the greatest extent; the joint expression can significantly improve performance in large-pose scenarios.
  • The first image feature of the face training image is learned through the texture learning network; the three-dimensional face is then reconstructed, and the shape information of the reconstructed three-dimensional face is converted into a two-dimensional texture image; the second image feature of the two-dimensional texture image is learned through the shape learning network;
  • the first image feature and the second image feature are then combined to recognize the face, so that the two-dimensional planar feature and the three-dimensional feature can be expressed jointly.
  • The first image feature has a first dimension, the second image feature has a second dimension, and the combined third image feature can have a third dimension, where the third dimension is the sum of the first dimension and the second dimension.
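The joint expression by splicing can be sketched directly; the 512-dimensional feature sizes below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
f_texture = rng.normal(size=512)   # first image feature (texture learning network)
f_shape = rng.normal(size=512)     # second image feature (shape learning network)

# Third image feature: concatenation, so its dimension is 512 + 512
f_joint = np.concatenate([f_texture, f_shape])
```

Recognition can then proceed by comparing joint features of different images, for example with cosine similarity.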
  • FIG. 3 is a schematic structural diagram of a large-pose face recognition apparatus provided by an embodiment of the present application, detailed as follows:
  • The large-pose face recognition apparatus includes:
  • a first learning unit, configured to learn the first image feature of the face training image through the texture learning network;
  • a reconstruction unit, configured to reconstruct a corresponding three-dimensional face from the face training image, and to convert the shape information of the reconstructed three-dimensional face into a two-dimensional texture image;
  • a second learning unit, configured to learn the second image feature of the two-dimensional texture image through a shape learning network; and
  • a joint recognition unit, configured to combine the first image feature and the second image feature to recognize the face.
  • The large-pose face recognition apparatus described in FIG. 3 corresponds to the large-pose face recognition method described in FIG. 1.
  • Fig. 4 is a schematic diagram of a face recognition device provided by an embodiment of the present application.
  • The face recognition device 4 of this embodiment includes a processor 40, a memory 41, and a computer program 42 stored in the memory 41 and runnable on the processor 40, for example, a large-pose face recognition program.
  • When the processor 40 executes the computer program 42, the steps in each of the above embodiments of the large-pose face recognition method are implemented.
  • When the processor 40 executes the computer program 42, the functions of the modules/units in the foregoing apparatus embodiments are implemented.
  • The computer program 42 may be divided into one or more modules/units, which are stored in the memory 41 and executed by the processor 40 to complete this application.
  • the one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program 42 in the face recognition device 4.
  • the computer program 42 can be divided into:
  • a first learning unit, configured to learn the first image feature of the face training image through the texture learning network;
  • a reconstruction unit, configured to reconstruct a corresponding three-dimensional face from the face training image, and to convert the shape information of the reconstructed three-dimensional face into a two-dimensional texture image;
  • a second learning unit, configured to learn the second image feature of the two-dimensional texture image through a shape learning network; and
  • a joint recognition unit, configured to combine the first image feature and the second image feature to recognize the face.
  • The face recognition device 4 may be a computing device such as a desktop computer, a notebook, a palmtop computer, or a cloud server.
  • The face recognition device may include, but is not limited to, a processor 40 and a memory 41.
  • FIG. 4 is only an example of the face recognition device 4 and does not constitute a limitation on it; the device may include more or fewer components than shown in the figure, combine certain components, or use different components. For example, the face recognition device may also include input and output devices, network access devices, buses, and so on.
  • The processor 40 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • The memory 41 may be an internal storage unit of the face recognition device 4, such as a hard disk or memory of the face recognition device 4.
  • The memory 41 may also be an external storage device of the face recognition device 4, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the face recognition device 4.
  • the memory 41 may also include both an internal storage unit of the face recognition device 4 and an external storage device.
  • the memory 41 is used to store the computer program and other programs and data required by the face recognition device.
  • the memory 41 can also be used to temporarily store data that has been output or will be output.
  • the disclosed apparatus/terminal device and method may be implemented in other ways.
  • the device/terminal device embodiments described above are only illustrative.
  • The division into modules or units is only a logical functional division; in actual implementation there may be other divisions, for example, multiple units or components can be combined or integrated into another system, or some features can be omitted or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • This application may implement all or part of the processes in the methods of the above embodiments by instructing the relevant hardware through a computer program.
  • the computer program can be stored in a computer-readable storage medium. When the program is executed by the processor, the steps of the foregoing method embodiments can be implemented.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file, or some intermediate forms.
  • The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so on.
  • The content contained in the computer-readable medium can be appropriately added or deleted in accordance with the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunication signals.

Abstract

Disclosed is a large-pose face recognition method, comprising: learning a first image feature of a face training image by means of a texture learning network (S101); reconstructing a corresponding three-dimensional face according to the face training image, and converting shape information of the reconstructed three-dimensional face into a two-dimensional texture image (S102); learning a second image feature of the two-dimensional texture image by means of a shape learning network (S103); and combining the first image feature and the second image feature to recognize a face (S104). A two-dimensional planar feature and a three-dimensional feature can be expressed jointly, so that the accuracy of large-pose face recognition is effectively improved. In addition, the training process is relatively simple, and the occupation of storage space can be reduced.
PCT/CN2019/130871 2019-03-29 2019-12-31 Large-pose face recognition method and apparatus, and related device WO2020199693A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910251871.3A CN110020620B (zh) 2019-03-29 2019-03-29 Face recognition method, apparatus, and device under large poses
CN201910251871.3 2019-03-29

Publications (1)

Publication Number Publication Date
WO2020199693A1 true WO2020199693A1 (fr) 2020-10-08

Family

ID=67190312

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/130871 WO2020199693A1 (fr) 2019-03-29 2019-12-31 Large-pose face recognition method and apparatus, and related device

Country Status (2)

Country Link
CN (1) CN110020620B (fr)
WO (1) WO2020199693A1 (fr)

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112270711A (zh) * 2020-11-17 2021-01-26 北京百度网讯科技有限公司 Model training and pose prediction method, apparatus, device, and storage medium
CN112464895A (zh) * 2020-12-14 2021-03-09 深圳市优必选科技股份有限公司 Pose recognition model training method and apparatus, pose recognition method, and terminal device
CN112488178A (zh) * 2020-11-26 2021-03-12 推想医疗科技股份有限公司 Network model training method and apparatus, image processing method and apparatus, and device
CN112562027A (zh) * 2020-12-02 2021-03-26 北京百度网讯科技有限公司 Face model generation method and apparatus, electronic device, and storage medium
CN112613376A (zh) * 2020-12-17 2021-04-06 深圳集智数字科技有限公司 Re-identification method and apparatus, and electronic device
CN112686202A (zh) * 2021-01-12 2021-04-20 武汉大学 Human head recognition method and system based on 3D reconstruction
CN112712053A (zh) * 2021-01-14 2021-04-27 深圳数联天下智能科技有限公司 Sitting posture information generation method and apparatus, terminal device, and storage medium
CN112733901A (zh) * 2020-12-30 2021-04-30 杭州趣链科技有限公司 Structured action classification method and apparatus based on federated learning and blockchain
CN112766215A (zh) * 2021-01-29 2021-05-07 北京字跳网络技术有限公司 Face fusion method and apparatus, electronic device, and storage medium
CN112818772A (zh) * 2021-01-19 2021-05-18 网易(杭州)网络有限公司 Facial parameter recognition method and apparatus, electronic device, and storage medium
CN112926543A (zh) * 2021-04-09 2021-06-08 商汤集团有限公司 Image generation and three-dimensional model generation methods and apparatuses, electronic device, and medium
CN112949592A (zh) * 2021-03-31 2021-06-11 云南大学 Hyperspectral image classification method and apparatus, and electronic device
CN112950775A (zh) * 2021-04-27 2021-06-11 南京大学 Three-dimensional face model reconstruction method and system based on self-supervised learning
CN112949761A (zh) * 2021-03-31 2021-06-11 东莞中国科学院云计算产业技术创新与育成中心 Training method and apparatus for a three-dimensional image neural network model, and computer device
CN112966607A (zh) * 2021-03-05 2021-06-15 北京百度网讯科技有限公司 Model training method, face video generation method, apparatus, device, and medium
CN113963183A (zh) * 2021-12-22 2022-01-21 北京的卢深视科技有限公司 Model training and face recognition method, electronic device, and storage medium
CN114266860A (zh) * 2021-12-22 2022-04-01 西交利物浦大学 Three-dimensional face model building method and apparatus, electronic device, and storage medium
CN114821737A (zh) * 2022-05-13 2022-07-29 浙江工商大学 Real-time wig try-on method for mobile terminals based on three-dimensional face alignment
CN114842543A (zh) * 2022-06-01 2022-08-02 华南师范大学 Three-dimensional face recognition method and apparatus, electronic device, and storage medium
CN114944002A (zh) * 2022-06-16 2022-08-26 中国科学技术大学 Pose-aware facial expression recognition method assisted by text descriptions
CN115082640A (zh) * 2022-08-01 2022-09-20 聚好看科技股份有限公司 3D face model texture reconstruction method and device based on a single image
CN115147508A (zh) * 2022-06-30 2022-10-04 北京百度网讯科技有限公司 Method and apparatus for training a clothing generation model and generating clothing images
CN115171149A (zh) * 2022-06-09 2022-10-11 广州紫为云科技有限公司 Real-time human 2D/3D skeletal keypoint recognition method based on monocular RGB image regression
CN116503524A (zh) * 2023-04-11 2023-07-28 广州赛灵力科技有限公司 Virtual avatar generation method, system, apparatus, and storage medium

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110020620B (zh) * 2019-03-29 2021-07-30 中国科学院深圳先进技术研究院 Face recognition method, apparatus, and device under large poses
CN110532907B (zh) * 2019-08-14 2022-01-21 中国科学院自动化研究所 Traditional Chinese medicine body constitution classification method based on bimodal feature extraction from facial and tongue images
CN110263774B (zh) * 2019-08-19 2019-11-22 珠海亿智电子科技有限公司 Face detection method
CN110991281B (zh) * 2019-11-21 2022-11-04 电子科技大学 Dynamic face recognition method
US11695758B2 (en) 2020-02-24 2023-07-04 International Business Machines Corporation Second factor authentication of electronic devices
CN111488810A (zh) * 2020-03-31 2020-08-04 长沙千视通智能科技有限公司 Face recognition method and apparatus, terminal device, and computer-readable medium
CN111783609A (zh) * 2020-06-28 2020-10-16 北京百度网讯科技有限公司 Pedestrian re-identification method, apparatus, device, and computer-readable storage medium
CN114625456B (zh) * 2020-12-11 2023-08-18 腾讯科技(深圳)有限公司 Target image display method, apparatus, and device
CN112528902B (zh) * 2020-12-17 2022-05-24 四川大学 Video surveillance dynamic face recognition method and apparatus based on a 3D face model
CN112819947A (зh) * 2021-02-03 2021-05-18 Oppo广东移动通信有限公司 Three-dimensional face reconstruction method and apparatus, electronic device, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1818977A (zh) * 2006-03-16 2006-08-16 上海交通大学 Method for fast face model reconstruction from a single frontal image
CN101159015A (zh) * 2007-11-08 2008-04-09 清华大学 Recognition method for two-dimensional face images
US20100135541A1 (en) * 2008-12-02 2010-06-03 Shang-Hong Lai Face recognition method
CN102495999A (zh) * 2011-11-14 2012-06-13 深圳市奔凯安全技术有限公司 Face recognition method
CN109522812A (zh) * 2018-10-23 2019-03-26 青岛小鸟看看科技有限公司 Face recognition method and apparatus, and electronic device
CN110020620A (zh) * 2019-03-29 2019-07-16 中国科学院深圳先进技术研究院 Face recognition method, apparatus, and device under large poses

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102254154B (zh) * 2011-07-05 2013-06-12 南京大学 Face identity authentication method based on three-dimensional model reconstruction
CN102999942B (zh) * 2012-12-13 2015-07-15 清华大学 Three-dimensional face reconstruction method
CN103745209B (zh) * 2014-01-27 2018-04-13 中国科学院深圳先进技术研究院 Face recognition method and system
CN104484669A (zh) * 2014-11-24 2015-04-01 苏州福丰科技有限公司 Mobile phone payment method based on three-dimensional face recognition
CN104778441A (zh) * 2015-01-07 2015-07-15 深圳市唯特视科技有限公司 Multimodal face recognition apparatus and method fusing grayscale and depth information
CN106778506A (zh) * 2016-11-24 2017-05-31 重庆邮电大学 Expression recognition method fusing depth images and multi-channel features
CN108960001B (zh) * 2017-05-17 2021-12-24 富士通株式会社 Method and apparatus for training an image processing device for face recognition
CN107423678A (zh) * 2017-05-27 2017-12-01 电子科技大学 Training method for a feature-extracting convolutional neural network, and face recognition method
CN107844760A (zh) * 2017-10-24 2018-03-27 西安交通大学 Three-dimensional face recognition method based on surface-normal-component map neural network representation
CN108520204A (zh) * 2018-03-16 2018-09-11 西北大学 Face recognition method
CN109191507B (zh) * 2018-08-24 2019-11-05 北京字节跳动网络技术有限公司 Three-dimensional face image reconstruction method and apparatus, and computer-readable storage medium


Also Published As

Publication number Publication date
CN110020620B (zh) 2021-07-30
CN110020620A (zh) 2019-07-16

Similar Documents

Publication Publication Date Title
WO2020199693A1 (fr) Large-pose face recognition method and apparatus, and related device
WO2020207190A1 (fr) Three-dimensional information determination method, three-dimensional information determination device, and terminal apparatus
CN111915480B (zh) Method, apparatus, device, and computer-readable medium for generating a feature extraction network
EP3933708A2 (fr) Model training method, identification method, device, storage medium, and program product
WO2021098618A1 (fr) Data classification method and apparatus, terminal device, and readable storage medium
CN109754464B (zh) Method and apparatus for generating information
CN111414879A (zh) Face occlusion degree recognition method and apparatus, electronic device, and readable storage medium
CN111985414B (zh) Joint point position determination method and apparatus
CN112614110B (zh) Method and apparatus for evaluating image quality, and terminal device
KR20230132350A (ko) Federated sensing model training method, federated sensing method, apparatus, device, and medium
CN114817612A (зh) Method for multimodal data matching degree calculation and calculation model training, and related apparatus
EP4095761A1 (fr) Method for generating a backbone network, apparatus for generating a backbone network, device, and storage medium
CN114549728A (зh) Image processing model training method, image processing method, apparatus, and medium
CN115222583A (зh) Model training method and apparatus, image processing method, electronic device, and medium
CN114677350A (зh) Connection point extraction method and apparatus, computer device, and storage medium
CN114792355A (зh) Virtual avatar generation method and apparatus, electronic device, and storage medium
CN110717405A (зh) Facial landmark localization method, apparatus, medium, and electronic device
CN112037305B (зh) Method, device, and storage medium for reconstructing tree-like structures in images
CN117894038A (зh) Method and apparatus for generating object poses in images
CN112991274A (зh) Crowd counting method and apparatus, computer device, and storage medium
CN112434746A (зh) Pre-annotation method based on hierarchical transfer learning, and related device
CN113781653B (зh) Object model generation method and apparatus, electronic device, and storage medium
CN113139490B (зh) Image feature matching method and apparatus, computer device, and storage medium
WO2022236802A1 (fr) Object model reconstruction method and apparatus, terminal device, and storage medium
CN114373078A (зh) Target detection method and apparatus, terminal device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19922633

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19922633

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 21.03.2022)
