CN113688698B - Face correction recognition method and system based on artificial intelligence - Google Patents

Publication number
CN113688698B
Authority
CN
China
Prior art keywords
face
image
obtaining
sequence
database
Prior art date
Legal status
Active
Application number
CN202110908979.2A
Other languages
Chinese (zh)
Other versions
CN113688698A (en)
Inventor
翟慧
赵晶晶
张艳
李云鹤
赵大鹏
窦雪霞
丁玉涛
翟煜锦
宋欢
Current Assignee
Henan Institute Of Administrative Science
Henan Polytechnic Institute
Original Assignee
Henan Institute Of Administrative Science
Henan Polytechnic Institute
Priority date
Filing date
Publication date
Application filed by Henan Institute Of Administrative Science and Henan Polytechnic Institute
Priority: CN202110908979.2A
Publication of CN113688698A
Application granted
Publication of CN113688698B
Status: Active


Abstract

The invention provides a face correction recognition method and system based on artificial intelligence. The method comprises the following steps: obtaining an angle information quantity sequence for each person in a face database; correcting the face images at different horizontal deflection angles in the face video to be recognized; screening out the face images in the face database whose similarity exceeds a threshold to form a first image set; forming a second image set from the face images at the other horizontal deflection angles in the video to be recognized; obtaining an angle information quantity sub-sequence from the similarity between each image in the first image set and each image in the second image set; and determining the face recognition result from the element-wise difference between each angle information quantity sub-sequence and the corresponding angle information quantity sequence. The method determines the face recognition result from the difference between the information quantity sequences of the faces at different horizontal deflection angles in the video and the information quantity sequences in the face database; it fully exploits the side-face image information at different horizontal deflection angles in the video and therefore obtains a more accurate face recognition result.

Description

Face correction recognition method and system based on artificial intelligence
Technical Field
The application relates to the fields of face recognition and artificial intelligence, and in particular to a face correction recognition method and system based on artificial intelligence.
Background
Face recognition technology is widely applied: national security, military security, public security, intelligent video surveillance, public-security prevention, and community management are classic application fields, and in the civil and economic fields it has key applications such as cardholder identity authentication for banks. With the continuous development of face recognition technology, face recognition in real-time video has become very important in content retrieval, digital video processing, visual detection, and video surveillance systems.
In face recognition in real-time video, the single image least affected by pose is usually selected for recognition; even so, recognition based on that image can still be wrong, or the face may not be recognized at all.
In the prior art, a side-face score is calculated from the key-point positions of the face, the degree of side facing is judged from that score, and the image closest to the frontal face is screened out for recognition according to the key-point position differences between the side face and the frontal face. However, the other side-face images still contain much usable information; discarding them directly loses a great deal of important information. Moreover, a single screened frame has a certain specificity and may deviate from the standard image in the face library, leading to an erroneous recognition result.
Disclosure of Invention
In order to solve these problems, the invention provides a face correction recognition method and system based on artificial intelligence; the adopted technical scheme is as follows:
In a first aspect, an embodiment of the present invention provides a face correction recognition method based on artificial intelligence, comprising:
Obtaining side face images of different human faces at each horizontal deflection angle in a human face database; correcting the side face image, and obtaining the information content of the side face image according to the similarity between the corrected image and the front face image of the corresponding person in the face database; the information quantity of the side face images of each horizontal deflection angle forms an angle information quantity sequence of the person;
acquiring face images at different horizontal deflection angles in a face video to be recognized and correcting them; comparing the corrected image corresponding to the face image with the smallest horizontal deflection angle for similarity with the face images in the face database, obtaining the face images in the face database whose similarity exceeds a threshold, and forming a first image set; obtaining the face images at the other horizontal deflection angles in the video to be recognized to form a second image set; for each image in the first image set, obtaining an angle information quantity sub-sequence from the similarity between that image and each image in the second image set; and determining a face recognition result from the element-wise difference between each angle information quantity sub-sequence and the corresponding angle information quantity sequence.
Preferably, the face image of the sample person is used as a sample set, the angle information amount sequence of the sample person is used as label data, and the information amount analysis network is trained to obtain the angle information amount sequence of each person in the face database.
Preferably, the information amount analysis network structure comprises: inputting a face image of the face database into a first network module to obtain a first generated feature map; inputting the face image into a second network module to obtain face characteristic parameters; obtaining a three-dimensional face model from the face characteristic parameters and rendering it to generate a re-rendered image; performing feature extraction on the three-dimensional face model with a third network module to obtain a second generated feature map; generating a first embedded feature map from the convergence value of the second network module and a second embedded feature map from the difference between the face image in the face database and the re-rendered image; and combining the first and second generated feature maps with the first and second embedded feature maps and inputting the result into a fourth network module to obtain the angle information quantity sequence.
Preferably, the obtaining of the similarity between the corrected image and the front face image of the corresponding person in the face database includes: respectively obtaining feature description vectors of the corrected image and each region of the front face of the corresponding person; and obtaining the similarity of the two images according to the distance between the two image feature description vectors.
Preferably, the acquisition of the information amount of the side face image specifically comprises: multiplying the similarity between the corrected image and the front face image of the corresponding person in the face database by the expansion coefficient to obtain the information amount of the side face image.
In a second aspect, another embodiment of the present invention provides a face correction recognition system based on artificial intelligence.
A face correction recognition system based on artificial intelligence comprises: the angle information quantity sequence acquisition module, used for acquiring side face images of different people's faces at different horizontal deflection angles in the face database, correcting the side face images, obtaining the information amount of each side face image from the similarity between the corrected image and the front face image of the corresponding person in the face database, and forming the person's angle information quantity sequence from the information amounts of the side face images at each horizontal deflection angle;
the face recognition module, used for acquiring face images at different horizontal deflection angles in a face video to be recognized and correcting them; comparing the corrected image corresponding to the face image with the smallest horizontal deflection angle for similarity with the face images in the face database, obtaining the face images in the face database whose similarity exceeds a threshold, and forming a first image set; obtaining the face images at the other horizontal deflection angles in the video to be recognized to form a second image set; for each image in the first image set, obtaining an angle information quantity sub-sequence from the similarity between that image and each image in the second image set; and determining a face recognition result from the element-wise difference between each angle information quantity sub-sequence and the corresponding angle information quantity sequence.
The technical scheme of the invention has the following beneficial effects:
The information amount of the side-face image at each horizontal deflection angle is extracted from the similarity between the corrected side-face image at that angle and the corresponding person in the face database, forming an angle information quantity sequence. Face recognition is then performed using the difference between the information quantity sequence of the face images at all horizontal deflection angles other than the one closest to frontal in the video and the information quantity sequences in the face database. This improves face recognition efficiency, ensures the completeness of the information used during recognition, and improves face recognition accuracy.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
In order to further explain the technical means and effects adopted by the present invention to achieve its intended purpose, the following detailed description, with reference to the accompanying drawings and preferred embodiments, describes specific embodiments, structures, features, and effects of the face correction recognition method and system based on artificial intelligence proposed by the present invention. In the following description, references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
Example 1:
The main application scenario of the invention is face recognition in video. When a face in a video is recognized, the face image in a frame may be a side-face image; the information amount of each such image is obtained, and face recognition is performed using the differences between information amounts.
The invention is described in further detail below with reference to the figures and specific examples. A method flowchart is shown in fig. 1.
Firstly, obtaining side face images of different human faces at each horizontal deflection angle in a face database; correcting the side face image, and obtaining the information content of the side face image according to the similarity between the corrected image and the front face image of the corresponding person in the face database; the information amount of the side face image of each horizontal deflection angle constitutes a sequence of angle information amounts of the person.
Face database acquisition: a camera with a fixed viewing angle is set up to capture a video stream containing faces, with the center of the camera imaging plane at the same height as the center of the face. A frontal face is defined as a face parallel to the camera imaging plane, with a horizontal deflection angle of 0; a side face is defined as the face rotated left or right from the frontal pose. Frontal RGB face images of a large number of different people are collected from the video stream to form the face database.
A side-face image at every horizontal deflection angle is obtained in the simulator for each frontal image in the face database, with the deflection angle ranging over (−90°, 90°) at a step of 1°. The side-face images at different horizontal deflection angles are then converted into frontal images. Many networks exist for side-face frontalization; preferably, the invention uses a generative adversarial network (GAN) to perform the conversion. The technique is well known and is not described again here.
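As a minimal sketch of the sampling described above (the side-face rendering and GAN frontalization themselves are external steps and are not shown), the 1-degree angle grid that indexes each person's simulated side-face images can be built as follows:

```python
# Horizontal deflection angles in (-90 deg, 90 deg) sampled at a
# 1-degree step; each angle indexes one simulated side-face image
# per person in the face database.
angles = list(range(-90, 91, 1))

# Roughly 180 side-face images per person, consistent with the count
# cited later in this description.
num_side_faces = len(angles)
```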
The human face is divided into different regions, such as the left and right eyebrow regions (including eyes and eyebrows), the left and right nose regions, and the left and right mouth regions, and a feature description vector α is obtained for each region. Many methods exist for describing face-region features; the present invention adopts the LBP algorithm to obtain the feature description vectors of the face regions. This is a known technique and is not described again here. Feature description vectors are obtained for each region of the corrected image and of the corresponding person's frontal face in the face database.
The feature description vectors of each region of the corrected face image are matched one-to-one with those of the corresponding frontal face in the face database; the distance between each pair of corresponding region descriptors is calculated, and from these distances the similarity γ is obtained. For the same person, at every horizontal deflection angle θ, the similarity γ to that person's frontal image in the face database can be calculated. The higher the similarity, the better the corrected image restores the frontal image in the face database, and the larger the information amount of the side-face image at that horizontal deflection angle. The information amount is calculated as follows:
INF=k·γ+b
where INF is the information amount of the side-face image, k is an expansion coefficient that amplifies the effect of γ (set to k = 5), and b is a base value (set to b = 0). The information amounts of the side-face images at each horizontal deflection angle of a person constitute that person's angle information quantity sequence. The angle information quantity sequences of all people in the face database are obtained in this way.
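A small sketch of the similarity and information amount computation, assuming toy descriptor vectors and a distance-to-similarity mapping (the text derives γ from descriptor distances without fixing the mapping, so `region_similarity` is an assumption; k = 5 and b = 0 are the stated values):

```python
import numpy as np

def region_similarity(desc_a, desc_b):
    # Map the Euclidean distance between two region descriptors into
    # (0, 1]; the exact mapping is an assumption, the text only says
    # similarity is obtained from descriptor distance.
    d = np.linalg.norm(np.asarray(desc_a, float) - np.asarray(desc_b, float))
    return 1.0 / (1.0 + d)

def information_amount(gamma, k=5.0, b=0.0):
    # INF = k * gamma + b, with k = 5 and b = 0 as in the text.
    return k * gamma + b

# Toy descriptors for one region of a corrected side-face image vs. the
# database frontal face (hypothetical values).
gamma = region_similarity([0.2, 0.5, 0.3], [0.25, 0.45, 0.30])
inf = information_amount(gamma)
```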
Then, acquiring face images with different horizontal deflection angles in a face video to be recognized, and correcting; carrying out similarity comparison on a positive conversion image corresponding to the face image with the minimum horizontal deflection angle and the face images in the face database to obtain the face images in the face database with the similarity larger than a threshold value, and forming a first image set; acquiring face images of other horizontal deflection angles in a video to be identified to form a second image set; aiming at each image in the first image set, obtaining an angle information quantum sequence according to the similarity of the image and each image in the second image set; and determining a face recognition result according to the difference of each angle information quantum sequence and the corresponding element of the corresponding angle information quantum sequence.
The face video to be recognized is captured by the camera. The method needs to analyze the deflection angles in different frames of this video. First, the face information in a video frame is obtained: the frame is fed into a trained semantic segmentation network to obtain a mask of the face region, and the mask image is multiplied pixel by pixel with the original RGB image to obtain the image to be recognized I1, which contains only the face region. The semantic segmentation network has an end-to-end Encoder-Decoder structure: the Encoder extracts features through convolution operations and outputs a feature map, which the Decoder processes to obtain the semantic segmentation map. The training process of the semantic segmentation network is as follows:
a) Images of the face region are used as the dataset; 80% of the dataset is randomly selected as the training set and the remaining 20% as the validation set.
b) The training set is manually annotated: pixels in the face region are labeled 1 and pixels in other regions are labeled 0, giving the annotation data. The network is trained with a cross-entropy loss function.
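The mask-and-multiply step above can be sketched with toy arrays (the segmentation network itself is omitted; the mask here is hand-made to stand in for its output):

```python
import numpy as np

# Toy RGB frame and binary face mask standing in for the semantic
# segmentation output (face pixels labeled 1, background 0).
frame = np.arange(4 * 4 * 3, dtype=np.uint8).reshape(4, 4, 3)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1

# Pixel-wise multiplication keeps only the face region, giving the
# image I1 referred to in the text.
i1 = frame * mask[..., None]  # broadcast the mask over the 3 channels
```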
The deflection angle is then analyzed from the face information in the video frame. Specifically, a face angle calculation neural network is trained to obtain the side-face angles of faces at different poses.
The obtained face region image I1 is fed into the face angle calculation neural network to obtain its corresponding horizontal deflection angle θ. The training process of this network is as follows:
a) Side-face images at each angle are generated in the simulator as the dataset; 80% of the dataset is randomly selected as the training set and the remaining 20% as the validation set.
b) The labels of the training set are produced by the simulator. A 3D face model is built in the simulator and a camera is added, with the camera position varying as follows: the camera rotates about the head center on the plane that passes through the head center and is perpendicular to the face, and the angle between the optical axis and the left-right line of the face through the head center is defined as the horizontal deflection angle θ. The side-face angle range is (−90°, 90°): the horizontal deflection angle is negative when the right side of the face is toward the camera and positive when the left side is toward the camera, with the frontal face at 0°. The angle θ is used as the label data. The network is trained with a mean-squared-error loss function.
The face images in the video to be recognized are input into the face angle calculation network to obtain their side-face angles [θ1, θ2, …, θN], where N is the number of face images contained in the video frames. From these angles, the face image with the smallest horizontal deflection angle, i.e. the one closest to 0°, is obtained.
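Selecting the frame closest to frontal reduces to an argmin over absolute deflection angles; a sketch with hypothetical predicted angles:

```python
import numpy as np

# Hypothetical horizontal deflection angles (degrees) predicted for N
# face images; negative = right profile, positive = left profile.
thetas = np.array([-37.0, 12.5, -3.0, 55.0, 8.0])

# Index of the face image closest to the frontal pose (0 degrees).
best_idx = int(np.argmin(np.abs(thetas)))
```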
From the video of the face to be recognized, face images at different horizontal deflection angles are obtained and corrected in turn to obtain the corrected images. The corrected image corresponding to the face image with the smallest horizontal deflection angle is compared for similarity with the face images in the face database, and the face images in the face database whose similarity exceeds the threshold are obtained to form the first image set.
The face images at the other horizontal deflection angles in the video to be recognized form the second image set. For each image in the first image set, the similarity between that image and each image in the second image set is obtained, and from the information amount calculation formula an angle information quantity sub-sequence of that image over the horizontal deflection angles of the second image set is obtained. After the sub-sequence corresponding to each image in the first image set has been obtained, the face recognition result is determined from the element-wise difference between each angle information quantity sub-sequence and the corresponding angle information quantity sequence. In this implementation, the identity with the smallest element-wise difference is selected as the face recognition result.
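The final matching step can be sketched as follows; the sum-of-absolute-differences metric and the toy sequences are assumptions, since the text only states that the identity minimizing the per-element difference is chosen:

```python
import numpy as np

def best_match(query_seq, db_seqs):
    # Compare the angle information quantity sub-sequence measured from
    # the video against each candidate's stored sequence and return the
    # identity with the smallest element-wise difference.
    keys = list(db_seqs)
    diffs = [np.abs(np.asarray(query_seq, float) -
                    np.asarray(db_seqs[k], float)).sum() for k in keys]
    return keys[int(np.argmin(diffs))]

# Hypothetical sequences over three deflection angles.
db = {"person_a": [4.1, 3.2, 2.0], "person_b": [1.0, 0.8, 0.5]}
result = best_match([4.0, 3.0, 2.1], db)
```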
To obtain an angle information quantity sequence, roughly 180 side-face images of each person in the face database must be generated, corrected, and compared, i.e. one 3D modeling pass, 180 GAN-based face corrections, and 180 similarity calculations per person. To improve the acquisition efficiency of the angle information quantity sequences, this embodiment designs an information amount analysis network, which is trained to output the angle information quantity sequence of each person in the face database.
The faces in the face database are analyzed with the information amount analysis network to obtain the angle information quantity sequence of each face. Specifically, the faces in the training sample set are analyzed to obtain the corresponding face shape parameters, which are used as input to obtain the angle information quantity sequence of each face.
Information amount analysis network structure: the face image is input into the first network module to obtain the first generated feature map; the face image is input into the second network module to obtain the face characteristic parameters; a three-dimensional face model is obtained from the face characteristic parameters and rendered to generate a re-rendered image, and the third network module performs feature extraction on the three-dimensional face model to obtain the second generated feature map. The first embedded feature map is generated from the convergence value of the second network module, and the second embedded feature map from the difference between the face image and the re-rendered image; the first and second generated feature maps are combined with the first and second embedded feature maps and input into the fourth network module to obtain the angle information quantity sequence. The convergence value of the second network module is the convergence value of its loss function. By exploiting the features of the face model, the features of the face image, and the convergence value of the network module, the information amount analysis network improves its convergence speed and output accuracy.
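The "combining" of generated and embedded feature maps before the fourth network module can be illustrated with channel concatenation; this operation and the shapes below are assumptions, since the text does not specify how the maps are combined:

```python
import numpy as np

# Toy feature maps: two generated maps plus two single-channel embedded
# maps (one built from the convergence value, one from the image
# difference), all sharing the same 8 x 8 spatial size.
f1 = np.zeros((8, 8, 16))        # first generated feature map
f2 = np.zeros((8, 8, 16))        # second generated feature map
e1 = np.full((8, 8, 1), 0.05)    # embedded map from the convergence value
e2 = np.zeros((8, 8, 1))         # embedded map from the image difference

# Channel-wise concatenation produces the input to the fourth module.
combined = np.concatenate([f1, f2, e1, e2], axis=-1)
```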
The information amount analysis network is trained as follows. First, the second network module is trained: training samples are input into the second network module to obtain face characteristic parameters, and a three-dimensional face model is reconstructed from the face characteristic parameters and the feature basis vectors. The weight parameters of the second network module are adjusted according to the difference between the reconstructed three-dimensional face model and the ground-truth three-dimensional face model and the difference between the re-rendered image and the training sample image, until the module converges. On convergence, the convergence value of the second network module is obtained and the first embedded feature map is generated from it, with the same size as at least one dimension of the first generated feature map. Then the first, third, and fourth network modules are trained: training samples are input into the first network module, and supervised training is performed according to the data-flow relations among the three modules to obtain the angle information quantity sequence.
Loss function of the information amount analysis network. To improve the accuracy of the angle information curve, the loss function is designed as follows:
L = Σ_i w_i · (INF_i − INF'_i)², with w_i = w_0 + ε · |INF_i − INF'_i| (the formula appears only as an image in the source; this weighted squared-error form is reconstructed from the variable definitions that follow and should be read as an approximation).
Preferably, w_0 is set to 1 and ε to 1, where INF_i is the information amount output by the network for the person's side-face image at the i-th horizontal deflection angle in the face database, INF'_i is the information amount of that image obtained from the information amount calculation formula, w_0 is the initial weight, and ε is the weight adjustment coefficient.
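The loss, a weighted squared error between the network output INF_i and the formula-computed INF'_i, can be sketched as follows; the exact weight-adjustment rule is an assumption read from the definitions of w_0 and ε, not a confirmed formula:

```python
import numpy as np

def info_loss(inf_pred, inf_true, w0=1.0, eps=1.0):
    # Weighted squared error between the network's information amounts
    # INF_i and the formula-computed INF'_i; the weight grows from w0
    # with the error via the adjustment coefficient eps (assumed rule).
    err = np.asarray(inf_pred, float) - np.asarray(inf_true, float)
    w = w0 + eps * np.abs(err)
    return float(np.sum(w * err ** 2))
```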
Example 2:
the present embodiments provide a system embodiment. A face-to-face recognition system based on artificial intelligence comprises:
the angle information quantity sequence acquisition module is used for acquiring side face images of different horizontal deflection angles of the faces of different people in the face database; correcting the side face image, and obtaining the information content of the side face image according to the similarity between the corrected image and the front face image of the corresponding person in the face database; the information quantity of the side face images of each horizontal deflection angle forms an angle information quantity sequence of the person;
the face recognition module, used for acquiring face images at different horizontal deflection angles in a face video to be recognized and correcting them; comparing the corrected image corresponding to the face image with the smallest horizontal deflection angle for similarity with the face images in the face database, obtaining the face images in the face database whose similarity exceeds a threshold, and forming a first image set; obtaining the face images at the other horizontal deflection angles in the video to be recognized to form a second image set; for each image in the first image set, obtaining an angle information quantity sub-sequence from the similarity between that image and each image in the second image set; and determining a face recognition result from the element-wise difference between each angle information quantity sub-sequence and the corresponding angle information quantity sequence.

Claims (6)

1. A face correction recognition method based on artificial intelligence, characterized by comprising the following steps:
obtaining side face images of different human faces at each horizontal deflection angle in a human face database; correcting the side face image, and obtaining the information content of the side face image according to the similarity between the corrected image and the front face image of the corresponding person in the face database; the information quantity of the side face images of each horizontal deflection angle forms an angle information quantity sequence of the person;
acquiring face images at different horizontal deflection angles in a face video to be recognized and correcting them; comparing the corrected image corresponding to the face image with the smallest horizontal deflection angle for similarity with the face images in the face database, obtaining the face images in the face database whose similarity exceeds a threshold, and forming a first image set; obtaining the face images at the other horizontal deflection angles in the video to be recognized to form a second image set; for each image in the first image set, obtaining an angle information quantity sub-sequence from the similarity between that image and each image in the second image set; determining a face recognition result from the element-wise difference between each angle information quantity sub-sequence and the corresponding angle information quantity sequence;
the sequence of the amount of angle information of the person further includes: the method comprises the steps that a face image of a sample person is used as a sample set, an angle information quantity sequence of the sample person is used as label data, an information quantity analysis network is trained, and an angle information quantity sequence of each person in a face database is obtained;
the traffic analysis network structure comprises: inputting a face image of a face database into a first network module to obtain a first generated feature map; inputting the face image into a second network module to obtain face characteristic parameters; obtaining a three-dimensional face model by using the face characteristic parameters, and rendering the three-dimensional face model to generate a re-rendered image; utilizing a third network module to perform feature extraction on the three-dimensional face model to obtain a second generated feature map; and generating a first embedded characteristic diagram by using the convergence value of the second network module, generating a second embedded characteristic diagram by using the difference between the face image and the re-rendered image in the face database, combining the first generated characteristic diagram and the second generated characteristic diagram with the first embedded characteristic diagram and the second embedded characteristic diagram, and inputting the combined characteristic diagram and the second embedded characteristic diagram into a fourth network module to obtain an angle information quantity sequence.
2. The method of claim 1, wherein obtaining the similarity between the corrected image and the frontal face image of the corresponding person in the face database comprises: obtaining feature description vectors for each region of the corrected image and of the corresponding person's frontal face; and obtaining the similarity of the two images according to the distance between the two feature description vectors.
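Claim 2 derives similarity from the distance between feature description vectors. A minimal sketch, assuming Euclidean distance and a reciprocal distance-to-similarity mapping (the claim fixes neither the distance metric nor the mapping):

```python
import numpy as np

def image_similarity(desc_a, desc_b):
    """Map the Euclidean distance between two feature description
    vectors to a similarity in (0, 1]: identical vectors give 1.0,
    and similarity falls monotonically as the distance grows."""
    dist = np.linalg.norm(np.asarray(desc_a, dtype=float)
                          - np.asarray(desc_b, dtype=float))
    return 1.0 / (1.0 + dist)
```

Per-region similarities computed this way could then be aggregated (e.g. averaged) into the whole-image similarity compared against the threshold.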
3. The method according to claim 1, wherein the information quantity of a side face image is obtained specifically by: multiplying the similarity between the corrected image and the frontal face image of the corresponding person in the face database by an expansion coefficient to obtain the information quantity of the side face image.
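Claim 3's information quantity (corrected-vs-frontal similarity scaled by an expansion coefficient) and the per-person sequence it feeds can be sketched together. How the expansion coefficient varies with the deflection angle is left open by the claim, so the values in the usage below are placeholders:

```python
def side_face_information(similarity, expansion_coefficient):
    """Information quantity of one side-face image: the similarity
    between its corrected image and the stored frontal image,
    scaled by an expansion coefficient."""
    return similarity * expansion_coefficient

def angle_information_sequence(side_faces):
    """Assemble a person's angle information quantity sequence from
    (deflection angle, similarity, expansion coefficient) triples,
    ordered by increasing horizontal deflection angle."""
    return [(angle, side_face_information(sim, coeff))
            for angle, sim, coeff in sorted(side_faces)]
```

For example, `angle_information_sequence([(30, 0.8, 1.25), (15, 0.9, 1.1)])` yields one `(angle, information_quantity)` pair per deflection angle, sorted by angle.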
4. An artificial-intelligence-based face alignment recognition system, characterized in that the system comprises: an angle information quantity sequence acquisition module, configured to acquire side face images of the faces of different persons in the face database at different horizontal deflection angles; correct each side face image, and obtain the information quantity of the side face image according to the similarity between the corrected image and the frontal face image of the corresponding person in the face database; the information quantities of the side face images at the respective horizontal deflection angles form the angle information quantity sequence of that person;
a face recognition module, configured to acquire face images at different horizontal deflection angles from a face video to be recognized and correct them; compare, for similarity, the corrected image corresponding to the face image with the smallest horizontal deflection angle against the face images in the face database, acquire the face images in the face database whose similarity is greater than a threshold value, and form a first image set; acquire the face images at the other horizontal deflection angles in the video to be recognized to form a second image set; for each image in the first image set, obtain an angle information quantity sub-sequence according to the similarity between that image and each image in the second image set; and determine a face recognition result according to the difference between corresponding elements of each angle information quantity sub-sequence and the corresponding angle information quantity sequence;
the angle information quantity sequence acquisition module is further configured to train an information quantity analysis network, taking the face images of sample persons as a sample set and the angle information quantity sequences of the sample persons as label data, to obtain the angle information quantity sequence of each person in the face database;
the information quantity analysis network is structured as follows: a face image from the face database is input into a first network module to obtain a first generated feature map; the face image is input into a second network module to obtain face characteristic parameters; a three-dimensional face model is obtained from the face characteristic parameters, and the three-dimensional face model is rendered to generate a re-rendered image; a third network module performs feature extraction on the three-dimensional face model to obtain a second generated feature map; a first embedded feature map is generated from the convergence value of the second network module, and a second embedded feature map is generated from the difference between the face image in the face database and the re-rendered image; and the first and second generated feature maps are combined with the first and second embedded feature maps and input into a fourth network module to obtain the angle information quantity sequence.
5. The system according to claim 4, wherein the angle information quantity sequence acquisition module is further configured to obtain feature description vectors for each region of the corrected image and of the corresponding person's frontal face, and to obtain the similarity of the two images according to the distance between the two feature description vectors.
6. The system according to claim 4, wherein the angle information quantity sequence acquisition module is further configured to multiply the similarity between the corrected image and the frontal face image of the corresponding person in the face database by an expansion coefficient to obtain the information quantity of the side face image.
CN202110908979.2A 2021-08-09 2021-08-09 Face correction recognition method and system based on artificial intelligence Active CN113688698B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110908979.2A CN113688698B (en) 2021-08-09 2021-08-09 Face correction recognition method and system based on artificial intelligence


Publications (2)

Publication Number Publication Date
CN113688698A CN113688698A (en) 2021-11-23
CN113688698B true CN113688698B (en) 2022-09-16

Family

ID=78579245

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110908979.2A Active CN113688698B (en) 2021-08-09 2021-08-09 Face correction recognition method and system based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN113688698B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114842544B (en) * 2022-07-04 2022-09-06 江苏布罗信息技术有限公司 Intelligent face recognition method and system suitable for facial paralysis patient


Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US10691925B2 (en) * 2017-10-28 2020-06-23 Altumview Systems Inc. Enhanced face-detection and face-tracking for resource-limited embedded vision systems

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN105956518A (en) * 2016-04-21 2016-09-21 腾讯科技(深圳)有限公司 Face identification method, device and system
WO2017181769A1 (en) * 2016-04-21 2017-10-26 腾讯科技(深圳)有限公司 Facial recognition method, apparatus and system, device, and storage medium
WO2019096008A1 (en) * 2017-11-20 2019-05-23 腾讯科技(深圳)有限公司 Identification method, computer device, and storage medium
CN112507889A (en) * 2019-04-29 2021-03-16 众安信息技术服务有限公司 Method and system for verifying certificate and certificate holder

Non-Patent Citations (2)

Title
Face Frontalization for Alignment and Recognition; Christos Sagonas et al.; http://arxiv.org/abs/1502.00852; 2015-02-03; full text *
Xie Pengcheng. Research and Implementation of Multi-pose Face Recognition. CNKI Outstanding Master's Theses Full-text Database, 2018. *


Similar Documents

Publication Publication Date Title
CN108537743B (en) Face image enhancement method based on generation countermeasure network
CN108596024B (en) Portrait generation method based on face structure information
CN112766160A (en) Face replacement method based on multi-stage attribute encoder and attention mechanism
CN110827312B (en) Learning method based on cooperative visual attention neural network
CN112418041B (en) Multi-pose face recognition method based on face orthogonalization
CN108108760A (en) A kind of fast human face recognition
CN111639580B (en) Gait recognition method combining feature separation model and visual angle conversion model
CN107944437B (en) A kind of Face detection method based on neural network and integral image
CN112818850B (en) Cross-posture face recognition method and system based on progressive neural network and attention mechanism
CN114783024A (en) Face recognition system of gauze mask is worn in public place based on YOLOv5
CN111126307A (en) Small sample face recognition method of joint sparse representation neural network
CN112084895B (en) Pedestrian re-identification method based on deep learning
CN113947794A (en) Fake face changing enhancement detection method based on head posture deviation correction
CN111476727B (en) Video motion enhancement method for face-changing video detection
CN113688698B (en) Face correction recognition method and system based on artificial intelligence
CN112633234A (en) Method, device, equipment and medium for training and applying face glasses-removing model
CN110826534B (en) Face key point detection method and system based on local principal component analysis
CN115188066A (en) Moving target detection system and method based on cooperative attention and multi-scale fusion
CN114550268A (en) Depth-forged video detection method utilizing space-time characteristics
CN116704585A (en) Face recognition method based on quality perception
CN115035562A (en) Facemask shielded face recognition method based on FaceNet improvement
Lenc et al. Confidence Measure for Automatic Face Recognition.
CN112560705A (en) Face detection method and device and electronic equipment
CN112633229A (en) Pedestrian re-identification system based on SPD manifold
CN113139915A (en) Portrait restoration model training method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant