CN113610058A - Facial pose enhancement interaction method for facial feature migration - Google Patents

Facial pose enhancement interaction method for facial feature migration

Info

Publication number
CN113610058A
CN113610058A
Authority
CN
China
Prior art keywords
image
discriminator
feature
training
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111042103.0A
Other languages
Chinese (zh)
Inventor
杨帆
曹杰
范文聪
陈志杰
毛波
申冬琴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunjing Business Intelligence Research Institute Nanjing Co ltd
Nanjing Huiyi Information Technology Co ltd
Original Assignee
Yunjing Business Intelligence Research Institute Nanjing Co ltd
Nanjing Huiyi Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yunjing Business Intelligence Research Institute Nanjing Co ltd and Nanjing Huiyi Information Technology Co ltd
Priority to CN202111042103.0A
Publication of CN113610058A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality

Abstract

The invention discloses a facial pose enhancement interaction method for face feature migration. The original data are processed to obtain video frames and face landmark sketches, both are resized, and the two kinds of images are combined to obtain preprocessed data. The preprocessed data are then fed to a conditional generative adversarial network model for image prediction, finally generating the migrated result image. Pix2Pix applies a conditional generative adversarial network to model the mapping function that translates images from one domain to another, thereby achieving feature migration. More precisely, Pix2Pix learns from the face landmarks and converts them into facial features, so that a face detected by a webcam or video can be placed on the trained face model in real time.

Description

Facial pose enhancement interaction method for facial feature migration
Technical Field
The invention provides a facial pose enhancement interaction method for face feature migration and belongs to the technical field of virtual reality and augmented reality.
Background
Computer graphics (CG) animation increasingly relies on face capture and tracking algorithms in video games and movies; these algorithms form the basis of augmented reality (AR) and virtual reality (VR) applications. In the field of image processing, face feature migration is essentially the process of converting an input face image into a corresponding output face image. Unlike in the past, the professional knowledge and equipment once required for this kind of image processing matter less and less, and it is becoming ever easier to automatically synthesize images and face videos or to manipulate a person's face image. By analogy with automatic language translation, automatic image-to-image translation can be defined as the task of translating one possible representation of a scene into another, given sufficient training data. In this direction, convolutional neural networks are the leading approach for image prediction and image feature extraction and can achieve high-precision data classification.
Although a convolutional neural network learns automatically, considerable manual effort still goes into its design, such as tuning the loss function and other parameters, so that the output image is clear and vivid. It would be very meaningful to specify only a high-level goal, namely that the output image should be as close to reality as possible and hard to distinguish from a real one, and then automatically learn a loss function that satisfies this goal; this is exactly what the generative adversarial network (GAN) was created for. A GAN learns a loss that classifies whether the output image is real or fake while minimizing that loss, so it is easy to see that forged, blurry images are not tolerated. Because a GAN learns a loss that adapts to the data, it is well suited to tasks that would traditionally require many different kinds of loss functions.
Disclosure of Invention
The purpose of the invention is as follows: in order to overcome the defects of the prior art, the invention provides a facial pose enhancement interaction method for face feature migration, which uses an image-to-image translation algorithm based on a generative adversarial network to realize a face capture system based on feature migration. Pix2Pix applies a conditional generative adversarial network to model the mapping function that translates images from one domain to another, thereby achieving feature migration. More precisely, Pix2Pix learns from the face landmarks and converts them into facial features, placing a face detected by a webcam or video onto the trained face model in real time.
The technical scheme is as follows: to achieve the above purpose, the invention adopts the following technical scheme:
A facial pose enhancement interaction method for face feature migration comprises the following steps:
Step 1: create a data set. The 68 facial landmarks of the faces in the data set are detected as training data using the facial landmark detector in Dlib, an open-source, BSD-licensed machine learning software library. Each frame of the loaded training video is extracted, resized and converted to a gray-scale image; a face detector, a landmark predictor and a shape predictor are created to identify the key points of the detected face, and a sequence of detected elements is returned. The polylines function of the OpenCV library is used to draw an image from the returned position information, which is saved as the frame's face feature, realizing the feature extraction function of the system and finally yielding a data set consisting of landmark images and original images.
Step 2: train the enhancement model. During training, the resized source images and landmark images are combined into a new data set, which is divided into a training set and a validation set. The enhancement model is trained on the obtained training set: a generator graph and a discriminator graph are first built, then the training loop is started, which computes, per epoch and step, the predict-real and predict-fake outputs, the losses, gradients and variables, namely the discriminator loss, the discriminator gradients and variables, the generator GAN loss, the generator L1 distance loss and the generator gradients and variables. Each source image and its landmark image are combined into an image pair as input, a mapping function between input image and output image is learned, and the trained enhancement model is returned.
Step 3: simplify the trained enhancement model to obtain a reduced model whose size is smaller than the original, and freeze the reduced model into a single file.
Step 4: import the reduced model and load it into memory. The face captured by the camera is taken as the input image and undergoes the same preliminary processing as during training; feature extraction is performed on the captured face image to obtain a feature landmark image, and the image corresponding to that landmark image in the model is output, thereby realizing the function of migrating the input facial features onto the face model.
Preferably: the generator of the enhancement model adopts a U-Net network structure consisting of a left part, a right part and a middle part. The left part of the U-Net performs the compression operation, reducing the image size through convolution and downsampling while increasing the number of channels to extract shallow features. The right part of the U-Net performs the decoding operation, extracting deep features through convolution and upsampling. The middle part of the U-Net combines the feature maps obtained from the left and right parts, merging deep and shallow features to refine, predict and segment the image. The U-Net structure is symmetric, so skip connections can be added, with channels connecting layers whose feature maps have the same spatial size.
Preferably: the discriminator of the enhancement model adopts a Markovian discriminator (PatchGAN): the discriminator input is 8N × 8N with the patch size set to 8, the discriminator output is N × N, and each output value represents the probability that the corresponding 8 × 8 region of the input image is real.
Preferably: the enhancement model is trained using the Pix2Pix method, which is as follows:
i) Let x be the edge (landmark) image of the input and y be the corresponding source image. The Pix2Pix method is supervised during training and therefore requires paired images (x, y). The generator G converts the input x into a generated image G(x); x and G(x) are concatenated along the channel dimension as the input of the discriminator D, which computes a predicted probability value, and at the same time x and the real image y are concatenated along the channel dimension and fed to the discriminator D to obtain another predicted probability value. The predicted probability indicates how real the input image pair is: the closer the probability is to 1, the more certain the discriminator D is that the input image pair is real.
ii) Let z be a noise vector drawn from a random uniform distribution, x the edge image and y the real output image. A generative adversarial network learns the mapping from z to y, G: z → y, whereas a conditional generative adversarial network learns the mapping from x and z to y, G: {x, z} → y. The trained discriminator D is used for discrimination; the generator G is trained to produce outputs as similar to the real images as possible so that they ultimately cannot be distinguished by the trained discriminator D, while the discriminator D is trained to detect the fakes produced by the generator G. The training goal of the discriminator D is therefore to output a small probability value when the input is not a real image pair, i.e. (x, G(x)), and a large probability value when the input is a real image pair (x, y). The training goal of the generator G is to make the discriminator D output as large a probability value as possible when the generated G(x) together with x is fed to the discriminator D, which amounts to successfully deceiving the discriminator D.
L_c(G, D) = E_{x,y}[log D(x, y)] + E_{x,z}[log(1 - D(x, G(x, z)))]   (1)
In the formula, the generator G tries to obtain the minimum of this loss function L_c(G, D); E_{x,y}[log D(x, y)] denotes the expectation over real image pairs, D(x, y) being the discriminator's output for a real pair; E_{x,z}[log(1 - D(x, G(x, z)))] denotes the expectation over generated pairs, D(x, G(x, z)) meaning that the generator's result is fed into the discrimination process; and the discriminator D tries to maximize this loss function. The optimal generator G* is therefore
G* = arg min_G max_D L_c(G, D)   (2)
where G denotes the generator's result for an image and D denotes the discriminator's judgement of an image.
Preferably: in step 1, the sequence of detected elements comprises the mouth feature, left eyebrow feature, right eyebrow feature, nose bridge feature, lower nose feature, left eye feature and right eye feature, together with their corresponding indices among the facial features.
Compared with the prior art, the invention has the following beneficial effects:
1. The design and implementation of a face capture system based on feature migration is completed with the Pix2Pix method, and the functions required by the system are achieved to a high degree. The behaviour of the neural network model and its optimization target are defined through the loss function, ensuring the accuracy of the trained model as far as possible, and U-Net with skip connections is used in place of a plain encoder-decoder to improve image-to-image translation performance.
2. Pix2Pix skillfully exploits the GAN framework to provide a general framework for the "image-to-image translation" problem, using U-Net to improve detail and PatchGAN to handle the high-frequency parts of the image.
Drawings
FIG. 1 is a system framework diagram of the present invention.
Detailed Description
The invention is further illustrated below with reference to specific embodiments. It should be understood that these examples are intended only to illustrate the invention and not to limit its scope, and that various equivalent modifications of the invention which occur to those skilled in the art after reading the present disclosure fall within the scope defined by the appended claims.
A facial pose enhancement interaction method for face feature migration: the original data are processed to obtain video frames and face landmark sketches, both are resized, and the two kinds of images are combined to obtain preprocessed data. The preprocessed data are fed to a conditional generative adversarial network model for image prediction, finally generating the migrated result image. This feature migration process is in fact the conversion of the face image in the original data into one of the face images in the model, based on its features. A face capture system based on feature migration is realized with an image-to-image translation algorithm based on a generative adversarial network. Pix2Pix applies a conditional generative adversarial network to model the mapping function that translates images from one domain to another, thereby achieving feature migration. More precisely, Pix2Pix learns from the face landmarks and converts them into facial features, placing a face detected by a webcam or video onto the trained face model in real time. As shown in fig. 1, the method comprises the following steps:
Step 1: create a data set. The 68 facial landmarks of the faces in the data set, such as the mouth, eyebrows and eyes, are detected as training data using the facial landmark detector in Dlib, an open-source, BSD-licensed machine learning software library. Each frame of the loaded training video is extracted, resized and converted to a gray-scale image; a face detector, a landmark predictor and a shape predictor are created to identify the key points of the detected face, and a sequence of detected elements is returned. The sequence of detected elements comprises the mouth feature, left eyebrow feature, right eyebrow feature, nose bridge feature, lower nose feature, left eye feature and right eye feature, together with their corresponding indices among the facial features. The polylines function of the OpenCV library is used to draw an image from the returned position information, which is saved as the frame's face feature, realizing the feature extraction function of the system and finally yielding a data set consisting of landmark images and original images.
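By way of illustration, the following minimal Python sketch shows how step 1 can be realized with the Dlib and OpenCV libraries; the grouping of the 68 landmark indices follows the standard Dlib point map, while the model file name, image size and drawing colour are assumptions chosen for illustration rather than values fixed by this disclosure.

```python
import cv2
import dlib
import numpy as np

# The 68-point model file is distributed with Dlib's examples; the path is illustrative.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# Standard Dlib index ranges for the detected element sequence.
GROUPS = {
    "jaw": range(0, 17), "right_eyebrow": range(17, 22), "left_eyebrow": range(22, 27),
    "nose_bridge": range(27, 31), "lower_nose": range(31, 36),
    "right_eye": range(36, 42), "left_eye": range(42, 48), "mouth": range(48, 68),
}
CLOSED = {"right_eye", "left_eye", "mouth"}  # contours drawn as closed polylines

def landmark_sketch(frame, size=(256, 256)):
    """Return the resized frame and a black image with the facial landmarks drawn on it."""
    frame = cv2.resize(frame, size)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    sketch = np.zeros_like(frame)
    for rect in detector(gray, 1):
        shape = predictor(gray, rect)
        pts = np.array([[shape.part(i).x, shape.part(i).y] for i in range(shape.num_parts)],
                       dtype=np.int32)
        for name, idx in GROUPS.items():
            contour = pts[list(idx)].reshape(-1, 1, 2)
            cv2.polylines(sketch, [contour], name in CLOSED, (255, 255, 255), 1)
    return frame, sketch
```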
Step 2: train the enhancement model. During training, the resized source images and landmark images are combined into a new data set, which is divided into a training set and a validation set. The enhancement model is trained on the obtained training set: a generator graph and a discriminator graph are first built, then the training loop is started, which computes, per epoch and step, the predict-real and predict-fake outputs, the losses, gradients and variables, namely the discriminator loss, the discriminator gradients and variables, the generator GAN loss, the generator L1 distance loss and the generator gradients and variables. Each source image and its landmark image are combined into an image pair as input, a mapping function between input image and output image is learned, and the trained enhancement model is returned.
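As an illustrative sketch of assembling the resized source and landmark images into paired training samples and splitting them into training and validation sets; the side-by-side pair layout, directory names and validation ratio are assumptions in the style of common Pix2Pix training scripts, not details fixed by this disclosure.

```python
import os
import random
import cv2
import numpy as np

def build_pairs(src_dir, sketch_dir, out_dir, size=(256, 256), val_ratio=0.1):
    """Resize each source/landmark pair, join them side by side and split into train/val."""
    names = sorted(os.listdir(src_dir))
    random.shuffle(names)
    n_val = int(len(names) * val_ratio)
    for i, name in enumerate(names):
        src = cv2.resize(cv2.imread(os.path.join(src_dir, name)), size)
        sketch = cv2.resize(cv2.imread(os.path.join(sketch_dir, name)), size)
        pair = np.concatenate([sketch, src], axis=1)  # [landmark | target] image pair
        split = "val" if i < n_val else "train"
        os.makedirs(os.path.join(out_dir, split), exist_ok=True)
        cv2.imwrite(os.path.join(out_dir, split, name), pair)
```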
The generator of the enhancement model adopts a U-Net network structure consisting of a left part, a right part and a middle part. The left part of the U-Net performs the compression operation (encoder), reducing the image size through convolution and downsampling while increasing the number of channels to extract shallow features. The right part of the U-Net performs the decoding operation (decoder), extracting deep features through convolution and upsampling. The middle part of the U-Net combines the feature maps obtained from the left and right parts, merging deep and shallow features to refine, predict and segment the image. The U-Net structure is symmetric, so skip connections can be added, with channels connecting layers whose feature maps have the same spatial size.
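By way of illustration, a minimal sketch of such a U-Net generator written with the TensorFlow Keras API; the 256 × 256 input size, filter counts and layer depth follow the commonly used Pix2Pix generator and are assumptions rather than figures taken from this disclosure.

```python
import tensorflow as tf

def downsample(filters, size, apply_batchnorm=True):
    # Encoder block (left part): a stride-2 convolution halves the spatial size.
    block = tf.keras.Sequential()
    block.add(tf.keras.layers.Conv2D(filters, size, strides=2, padding="same", use_bias=False))
    if apply_batchnorm:
        block.add(tf.keras.layers.BatchNormalization())
    block.add(tf.keras.layers.LeakyReLU())
    return block

def upsample(filters, size):
    # Decoder block (right part): a stride-2 transposed convolution doubles the spatial size.
    block = tf.keras.Sequential()
    block.add(tf.keras.layers.Conv2DTranspose(filters, size, strides=2, padding="same", use_bias=False))
    block.add(tf.keras.layers.BatchNormalization())
    block.add(tf.keras.layers.ReLU())
    return block

def build_unet_generator(channels=3):
    inputs = tf.keras.layers.Input(shape=[256, 256, channels])
    down_stack = [downsample(64, 4, apply_batchnorm=False), downsample(128, 4),
                  downsample(256, 4), downsample(512, 4), downsample(512, 4),
                  downsample(512, 4), downsample(512, 4), downsample(512, 4)]
    up_stack = [upsample(512, 4), upsample(512, 4), upsample(512, 4), upsample(512, 4),
                upsample(256, 4), upsample(128, 4), upsample(64, 4)]
    x, skips = inputs, []
    for down in down_stack:                                  # left part: shallow features
        x = down(x)
        skips.append(x)
    for up, skip in zip(up_stack, reversed(skips[:-1])):     # right part: deep features
        x = up(x)
        # Skip connection: concatenate feature maps of the same spatial size.
        x = tf.keras.layers.Concatenate()([x, skip])
    last = tf.keras.layers.Conv2DTranspose(channels, 4, strides=2, padding="same", activation="tanh")
    return tf.keras.Model(inputs=inputs, outputs=last(x))
```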
The discriminator of the enhancement model adopts a Markovian discriminator (PatchGAN): the discriminator input is 8N × 8N with the patch size set to 8, the discriminator output is N × N, and each output value represents the probability that the corresponding 8 × 8 region of the input image is real. This reduces the input dimensionality, uses fewer parameters, lowers the computational cost and speeds up training, while removing the restriction on image size. In addition, since the generator G itself places no restriction on image size either, the whole Pix2Pix framework is highly extensible.
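By way of illustration, a minimal Keras sketch of such a Markovian (PatchGAN) discriminator; with a 256 × 256 input (8N × 8N with N = 32) the three stride-2 blocks below give a 32 × 32 (N × N) patch map, matching the 8:1 ratio described above, although the exact filter counts are assumptions.

```python
import tensorflow as tf

def build_patchgan_discriminator(channels=3):
    def conv_block(x, filters, strides=2, norm=True):
        x = tf.keras.layers.Conv2D(filters, 4, strides=strides, padding="same", use_bias=False)(x)
        if norm:
            x = tf.keras.layers.BatchNormalization()(x)
        return tf.keras.layers.LeakyReLU(0.2)(x)

    # The landmark image and a candidate output are concatenated along the channel dimension.
    inp = tf.keras.layers.Input(shape=[256, 256, channels])
    tar = tf.keras.layers.Input(shape=[256, 256, channels])
    x = tf.keras.layers.Concatenate()([inp, tar])
    x = conv_block(x, 64, norm=False)   # 128 x 128
    x = conv_block(x, 128)              # 64 x 64
    x = conv_block(x, 256)              # 32 x 32
    x = conv_block(x, 512, strides=1)   # keep 32 x 32, widen the receptive field
    # One logit per patch: each value scores whether the corresponding region looks real.
    patch_logits = tf.keras.layers.Conv2D(1, 4, strides=1, padding="same")(x)
    return tf.keras.Model(inputs=[inp, tar], outputs=patch_logits)
```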
The enhancement model is trained using the Pix2Pix method, which is as follows:
i) Let x be the edge (landmark) image of the input and y be the corresponding source image. The Pix2Pix method is supervised during training and therefore requires paired images (x, y). The generator G converts the input x into a generated image G(x); x and G(x) are concatenated along the channel dimension as the input of the discriminator D, which computes a predicted probability value, and at the same time x and the real image y are concatenated along the channel dimension and fed to the discriminator D to obtain another predicted probability value. The predicted probability indicates how real the input image pair is: the closer the probability is to 1, the more certain the discriminator D is that the input image pair is real.
ii) Let z be a noise vector drawn from a random uniform distribution, x the edge image and y the real output image. A generative adversarial network learns the mapping from z to y, G: z → y, whereas a conditional generative adversarial network learns the mapping from x and z to y, G: {x, z} → y. The trained discriminator D is used for discrimination; the generator G is trained to produce outputs as similar to the real images as possible so that they ultimately cannot be distinguished by the trained discriminator D, while the discriminator D is trained to detect the fakes produced by the generator G as well as possible. The training goal of the discriminator D is therefore to output a small probability value (for example, 0 at the minimum) when the input is not a real image pair, i.e. (x, G(x)), and a large probability value (for example, 1 at the maximum) when the input is a real image pair (x, y). The training goal of the generator G is to make the discriminator D output as large a probability value as possible when the generated G(x) together with x is fed to the discriminator D, which amounts to successfully deceiving the discriminator D.
L_c(G, D) = E_{x,y}[log D(x, y)] + E_{x,z}[log(1 - D(x, G(x, z)))]   (1)
In the formula, the generator G tries to obtain the minimum of this loss function L_c(G, D); E_{x,y}[log D(x, y)] denotes the expectation over real image pairs, D(x, y) being the discriminator's output for a real pair; E_{x,z}[log(1 - D(x, G(x, z)))] denotes the expectation over generated pairs, D(x, G(x, z)) meaning that the generator's result is fed into the discrimination process; and the discriminator D tries to maximize this loss function. The optimal generator G* is therefore
G* = arg min_G max_D L_c(G, D)   (2)
where G denotes the generator's result for an image and D denotes the discriminator's judgement of an image.
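By way of illustration, a minimal sketch of the loss terms of formulas (1) and (2) together with the generator L1 distance loss and one training step, assuming the Keras generator and discriminator sketched above; the L1 weight of 100 and the Adam settings in the usage note follow the original Pix2Pix paper and are assumptions here.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
LAMBDA = 100  # weight of the generator L1 distance loss (assumption from the Pix2Pix paper)

def discriminator_loss(real_logits, fake_logits):
    # Push D(x, y) toward 1 and D(x, G(x)) toward 0, i.e. maximize Lc(G, D).
    return bce(tf.ones_like(real_logits), real_logits) + \
           bce(tf.zeros_like(fake_logits), fake_logits)

def generator_loss(fake_logits, generated, target):
    # Fool the discriminator (minimize Lc) while staying close to the target image (L1).
    gan_loss = bce(tf.ones_like(fake_logits), fake_logits)
    l1_loss = tf.reduce_mean(tf.abs(target - generated))
    return gan_loss + LAMBDA * l1_loss

def train_step(generator, discriminator, gen_opt, disc_opt, landmark, target):
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        generated = generator(landmark, training=True)
        real_logits = discriminator([landmark, target], training=True)     # pair (x, y)
        fake_logits = discriminator([landmark, generated], training=True)  # pair (x, G(x))
        g_loss = generator_loss(fake_logits, generated, target)
        d_loss = discriminator_loss(real_logits, fake_logits)
    gen_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                                generator.trainable_variables))
    disc_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                                 discriminator.trainable_variables))
    return g_loss, d_loss
```

In practice gen_opt and disc_opt would typically be tf.keras.optimizers.Adam(2e-4, beta_1=0.5), and calling train_step once per image pair, per epoch and step, yields the discriminator loss, generator GAN loss, generator L1 distance loss and the corresponding gradients and variables listed in step 2.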
Step 3: simplify the trained enhancement model to obtain a reduced model whose size is smaller than the original, and freeze the reduced model into a single file. Simplifying the enhancement model reduces the amount of model data and improves the real-time running speed of the system, but it also changes the image size and lowers the quality of the generated images.
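By way of illustration, a minimal sketch of freezing the reduced generator into a single file, assuming a TensorFlow 2 Keras model; an implementation built on TensorFlow 1 would instead use the freeze_graph workflow, and the output directory and file name below are placeholders.

```python
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

def freeze_generator(model, out_dir="frozen", out_name="frozen_generator.pb"):
    """Fold the generator's variables into constants and write a single .pb file."""
    # Wrap the Keras model as a concrete function with a fixed input signature.
    concrete = tf.function(lambda x: model(x)).get_concrete_function(
        tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))
    frozen = convert_variables_to_constants_v2(concrete)
    # The frozen graph contains both structure and weights in one file.
    tf.io.write_graph(frozen.graph.as_graph_def(), out_dir, out_name, as_text=False)
```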
Step 4: import the reduced model and load it into memory. The face captured by the camera is taken as the input image and undergoes the same preliminary processing as during training; feature extraction is performed on the captured face image to obtain a feature landmark image, and the image corresponding to that landmark image in the model is output, thereby realizing the function of migrating the input facial features onto the face model.
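By way of illustration, a minimal sketch of the real-time migration loop of step 4, reusing the landmark_sketch helper from the step 1 sketch; for clarity the generator is loaded here in Keras SavedModel format rather than from the frozen .pb file, and the scaling to [-1, 1] mirrors the tanh output assumed in the generator sketch.

```python
import cv2
import numpy as np
import tensorflow as tf

generator = tf.keras.models.load_model("generator_simplified")  # placeholder path
cap = cv2.VideoCapture(0)  # webcam provides the live input face

while True:
    ok, frame = cap.read()
    if not ok:
        break
    _, sketch = landmark_sketch(frame)               # same preprocessing as during training
    x = sketch.astype(np.float32) / 127.5 - 1.0      # scale to [-1, 1]
    y = generator(x[np.newaxis, ...], training=False)[0].numpy()
    output = ((y + 1.0) * 127.5).astype(np.uint8)    # back to a displayable 8-bit image
    cv2.imshow("migrated face", output)
    if cv2.waitKey(1) & 0xFF == ord("q"):            # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```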
The above description covers only the preferred embodiments of the present invention. It should be noted that various modifications and adaptations can be made by those skilled in the art without departing from the principles of the invention, and these are also intended to fall within the scope of the invention.

Claims (5)

1. A facial pose enhancement interaction method for facial feature migration is characterized by comprising the following steps:
step 1, creating a data set: detecting the 68 facial landmarks of the faces in the data set as training data, using the facial landmark detector in the open-source, BSD-licensed Dlib machine learning software library; extracting each frame of the loaded training video, resizing it and converting it to a gray-scale image; creating a face detector, a landmark predictor and a shape predictor to identify the key points of the detected face, and returning a sequence of detected elements; using the polylines function of the OpenCV library to draw an image from the returned position information and saving it as the frame's face feature, thereby realizing the feature extraction function of the system and finally obtaining a data set consisting of landmark images and original images;
step 2, training an enhancement model: during training, combining the resized source images and landmark images into a new data set and dividing it into a training set and a validation set; training the enhancement model on the obtained training set by first building a generator graph and a discriminator graph, then starting the training loop, which computes, per epoch and step, the predict-real and predict-fake outputs, the losses, gradients and variables, namely the discriminator loss, the discriminator gradients and variables, the generator GAN loss, the generator L1 distance loss and the generator gradients and variables; combining each source image and its landmark image into an image pair as input, learning a mapping function between input image and output image, and returning the trained enhancement model;
step 3, simplifying the trained enhancement model to obtain a reduced model whose size is smaller than the original, and freezing the reduced model into a single file;
and step 4, importing the reduced model and loading it into memory, taking the face captured by the camera as the input image, performing on it the same preliminary processing as during training of the model, performing feature extraction on the captured face image to obtain a feature landmark image, and outputting the image corresponding to that landmark image in the model, thereby realizing the function of migrating the input facial features onto the face model.
2. The facial pose enhancement interaction method for face feature migration according to claim 1, characterized in that: the generator of the enhancement model adopts a U-Net network structure consisting of a left part, a right part and a middle part; the left part of the U-Net performs the compression operation, reducing the image size through convolution and downsampling while increasing the number of channels to extract shallow features; the right part of the U-Net performs the decoding operation, extracting deep features through convolution and upsampling; the middle part of the U-Net combines the feature maps obtained from the left and right parts, merging deep and shallow features to refine, predict and segment the image; the U-Net structure is symmetric, so skip connections can be added, with channels connecting layers whose feature maps have the same spatial size.
3. The facial pose enhancement interaction method for face feature migration according to claim 2, characterized in that: the discriminator of the enhancement model adopts a Markovian discriminator, the discriminator input is 8N × 8N with the patch size set to 8, the discriminator output is N × N, and each output value represents the probability that the corresponding 8 × 8 region of the input image is real.
4. The facial pose enhancement interaction method for face feature migration according to claim 3, characterized in that the enhancement model is trained using the Pix2Pix method, which is as follows:
i) let x be the edge (landmark) image of the input and y be the corresponding source image; the Pix2Pix method is supervised during training and therefore requires paired images (x, y); the generator G converts the input x into a generated image G(x); x and G(x) are concatenated along the channel dimension as the input of the discriminator D, which computes a predicted probability value, while x and the real image y are likewise concatenated along the channel dimension and fed to the discriminator D to obtain another predicted probability value; the predicted probability indicates how real the input image pair is, and the closer the probability is to 1, the more certain the discriminator D is that the input image pair is real;
ii) let z be a noise vector drawn from a random uniform distribution, x the edge image and y the real output image; a generative adversarial network learns the mapping from z to y, G: z → y, whereas a conditional generative adversarial network learns the mapping from x and z to y, G: {x, z} → y; the trained discriminator D is used for discrimination, the generator G is trained to produce outputs as similar to the real images as possible so that they ultimately cannot be distinguished by the trained discriminator D, and the discriminator D is trained to detect the fakes produced by the generator G; the training goal of the discriminator D is therefore to output a small probability value when the input is not a real image pair, i.e. (x, G(x)), and a large probability value when the input is a real image pair (x, y); the training goal of the generator G is to make the discriminator D output as large a probability value as possible when the generated G(x) together with x is fed to the discriminator D, which amounts to successfully deceiving the discriminator D;
L_c(G, D) = E_{x,y}[log D(x, y)] + E_{x,z}[log(1 - D(x, G(x, z)))]   (1)
in the formula, the generator G tries to obtain the minimum of this loss function L_c(G, D); E_{x,y}[log D(x, y)] denotes the expectation over real image pairs, D(x, y) being the discriminator's output for a real pair; E_{x,z}[log(1 - D(x, G(x, z)))] denotes the expectation over generated pairs, D(x, G(x, z)) meaning that the generator's result is fed into the discrimination process; and the discriminator D tries to maximize this loss function, the optimal generator G* therefore being
G* = arg min_G max_D L_c(G, D)   (2)
where G denotes the generator's result for an image and D denotes the discriminator's judgement of an image.
5. The facial pose enhancement interaction method for face feature migration according to claim 4, characterized in that: in step 1, the sequence of detected elements comprises the mouth feature, left eyebrow feature, right eyebrow feature, nose bridge feature, lower nose feature, left eye feature and right eye feature, together with their corresponding indices among the facial features.
CN202111042103.0A 2021-09-07 2021-09-07 Facial pose enhancement interaction method for facial feature migration Pending CN113610058A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111042103.0A CN113610058A (en) 2021-09-07 2021-09-07 Facial pose enhancement interaction method for facial feature migration


Publications (1)

Publication Number Publication Date
CN113610058A true CN113610058A (en) 2021-11-05

Family

ID=78342684

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111042103.0A Pending CN113610058A (en) 2021-09-07 2021-09-07 Facial pose enhancement interaction method for facial feature migration

Country Status (1)

Country Link
CN (1) CN113610058A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination