CN111368764A - False video detection method based on computer vision and deep learning algorithm - Google Patents


Info

Publication number
CN111368764A
CN111368764A (application number CN202010158340.2A; granted publication CN111368764B)
Authority
CN
China
Prior art keywords
model
video
false
face
feature extraction
Prior art date
Legal status
Granted
Application number
CN202010158340.2A
Other languages
Chinese (zh)
Other versions
CN111368764B (en)
Inventor
姚一鸣 (Yao Yiming)
Current Assignee
Zero Rank Technology Shenzhen Co ltd
Original Assignee
Zero Rank Technology Shenzhen Co ltd
Priority date
Filing date
Publication date
Application filed by Zero Rank Technology Shenzhen Co ltd filed Critical Zero Rank Technology Shenzhen Co ltd
Priority to CN202010158340.2A priority Critical patent/CN111368764B/en
Publication of CN111368764A publication Critical patent/CN111368764A/en
Application granted granted Critical
Publication of CN111368764B publication Critical patent/CN111368764B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the field of computer vision and deep learning, and discloses a false video detection method based on computer vision and deep learning algorithms. Three feature extraction models, namely a generative adversarial network (GAN) discriminator model, a front/side face comparison model and an expression/action classification model, are trained in advance, and the three trained models are stored separately. The training set is then fed into each of the three models to extract features, and a linear fit is performed on the extracted features, with the ground-truth labels of the training set as the fitting target. Optimization is performed with an Adam optimizer, using the binary cross-entropy loss function as the criterion. The parameters whose final loss falls below 1e-6 are saved as the final model. The training set is then traversed again, and the optimal point of the ROC curve, selected by the elbow rule, is taken as the classification threshold; in the algorithm application stage, videos are classified using the pre-computed model and this threshold. The invention facilitates false video detection.

Description

False video detection method based on computer vision and deep learning algorithm
Technical Field
The invention belongs to the field of computer vision and deep learning, and particularly relates to a false video detection method based on computer vision and deep learning algorithms, used for quickly judging falsely generated video files.
Background
Deep learning techniques can replace the face region in a video or picture and automatically generate large numbers of face-swapped false videos. Because open-source deep learning models are readily available, generating such false videos has become simple and can easily be done on a home desktop computer. The technology was originally created for the convenience of film and animation production, to save the labor of the workers involved. However, some people have begun using it to generate false videos that harm the interests of others, so many large companies (Google, Baidu) have started investing resources in research on fast and accurate false-video screening methods.
For example, Chinese patent publication No. CN110188706A discloses a neural network training method and detection method, based on generative adversarial networks, for character expressions in video. The method first reads in a video v_real in which one expression of a character is the main content; a feature function f is then computed by a convolutional neural network; next, a candidate expression y_i together with the feature function f is passed through a deconvolutional neural network to produce a computer-generated character video, and a 3D convolutional neural network compares it with v_real to obtain a matching degree s_i. By varying y_i to obtain different values of s_i, the y_i corresponding to the largest s_i is taken as the decision output. That feature extraction model is mainly used for neural network training on character expressions and cannot solve the problem of screening false videos.
Disclosure of Invention
The invention aims to provide a detection method that effectively screens false videos and that improves the efficiency and accuracy of false video detection by training effective feature extraction models.
The false video detection method based on computer vision and deep learning algorithms adopted to solve the above problem specifically comprises the following steps:
S1: downloading an algorithm for false video generation in advance, and generating false videos and unmodified videos from one's own data, one part of which serves as a test set and the other part as a training set;
S2: pre-training at least one feature extraction model, and storing the trained feature extraction model;
S3: inputting the training set into each feature extraction model for feature extraction, and performing a linear fit on the extracted features, the fitting target being the ground-truth labels of the training set; the test set is used to quantify training quality during model training;
S4: defining a loss function for the feature extraction model, and optimizing with an optimizer using the defined loss function as the criterion;
S5: saving the feature extraction model whose loss function has been optimized as the final model; traversing the training set again, selecting a classification threshold, and entering the algorithm application stage;
S6: in the algorithm application stage, inputting a video generated from new data into the final model for feature extraction;
S7: feeding the extracted features into a linear regression model and taking the output of the final activation layer; if the average of the results is greater than the classification threshold, the video is judged to be a false video.
Further, as an improvement concerning the algorithm for false video generation, in S1 the algorithm for false video generation is a DeepFake-related open-source algorithm.
Further, as an improvement concerning the test set and training set, in S1 the duration of the false videos and unmodified videos is 1-3 minutes; the number of false videos is 3000-3500 and the number of unmodified videos is 2000-2500; 15%-20% of all videos are extracted as the test set, and the remaining 80%-85% serve as the training set.
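A minimal sketch of this data split, assuming videos are referenced by name (the names below are invented placeholders) and labelled fake = 1, real = 0:

```python
import random

def split_dataset(fake_videos, real_videos, test_fraction=0.15, seed=0):
    # Pool fake and unmodified videos, shuffle, and hold out 15%-20% as
    # the test set; the remainder becomes the training set.
    pool = [(v, 1) for v in fake_videos] + [(v, 0) for v in real_videos]
    random.Random(seed).shuffle(pool)
    n_test = round(len(pool) * test_fraction)
    return pool[n_test:], pool[:n_test]   # (training set, test set)

train, test = split_dataset([f"fake_{i}" for i in range(3500)],
                            [f"real_{i}" for i in range(2000)])
print(len(train), len(test))  # 4675 825
```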
Further, as an improvement concerning the feature extraction models, in S2 three feature extraction models, namely a "generative adversarial network discriminator model", a "front/side face comparison model" and an "expression/action classification model", are trained in advance and stored separately.
A generative adversarial network (GAN) is used to blend the target face picture with the video to be modified, generating new video frame images that are then stitched into a complete video. The problem with this is that, under a limited number of iterations, the trained network model cannot fit the target face picture to the face in the original video one hundred percent. Therefore, the first entry point of the system is to judge whether a video frame has been modified by examining the colour-change pattern around the face region.
Further, as an improvement concerning the training of the generative adversarial network discriminator model, the model is trained by contrastive learning on samples extracted from real videos and samples extracted from videos generated by feeding random parameters into a DeepFake model. The samples are face regions; the adversarial model in the GAN learns to recognize the unnatural colour changes and splicing seams around forged face regions and to classify real and false videos, from which the loss function is defined. The adversarial model in the GAN serves as the discriminator model.
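The discriminator idea can be illustrated in a deliberately simplified form: a logistic-regression "discriminator" trained by gradient descent on flattened face crops, where the fake crops carry a synthetic colour shift along one border. A real GAN discriminator would be a convolutional network; this is only a sketch of the real-versus-forged classification:

```python
import numpy as np

def bce(p, y, eps=1e-12):
    # Binary cross-entropy over predicted probabilities p and labels y.
    return float(-np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)))

def train_discriminator(real_faces, fake_faces, lr=0.1, epochs=200, seed=0):
    # Flatten the face crops; label real = 0, fake = 1, then fit a
    # logistic-regression "discriminator" by full-batch gradient descent.
    X = np.vstack([f.reshape(-1) for f in real_faces + fake_faces])
    y = np.array([0.0] * len(real_faces) + [1.0] * len(fake_faces))
    rng = np.random.default_rng(seed)
    w, b = rng.normal(scale=0.01, size=X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        grad = p - y                  # gradient of BCE w.r.t. the logit
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return w, b, bce(p, y)

rng = np.random.default_rng(1)
real = [rng.random((8, 8, 3)) for _ in range(20)]
fake = []
for _ in range(20):
    crop = rng.random((8, 8, 3))
    crop[0, :, :] += 0.5   # synthetic "unnatural colour change" at the splice border
    fake.append(crop)
w, b, loss = train_discriminator(real, fake)
print(round(loss, 3))
```

The colour shift makes the two classes separable, so the loss drops well below the chance level of about 0.693.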
When standard models of algorithms such as DeepFake are pre-trained, they are mostly trained on frontal face images, so side-face images in a video cannot be generated well. Video frames in which the horizontal rotation angle of the face exceeds a certain value are therefore extracted from the video under test for secondary screening: the captured side-face images are compared with several frontal face images from the same video (horizontal rotation angle of the face close to 0 degrees). If the similarity of the two face images falls below a certain threshold, that is, the side-face image was not successfully generated from the target portrait and the frontal and side-face images of the same person in the video appear to belong to different people, then the second entry point of the system judges, by face-feature comparison, that the video has been modified.
Further, as an improvement concerning the training of the front/side face comparison model, the model is trained as follows: the false video data are fed into an algorithm for face detection and angle judgment, and frontal and side-face samples are extracted; an average face sample is computed from the frontal samples and fed into a first face recognition model for face feature extraction, while the side-face samples are fed into a second face recognition model for face feature extraction. The "front/side face comparison model" is a face feature comparison model: the features extracted by the first and second face recognition models are fed into it, and it is trained to classify real and false videos.
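A sketch of the comparison step, assuming each face recognition model outputs an embedding vector; `cosine_similarity` and the 0.6 threshold are illustrative choices, not values from the patent:

```python
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_identity(front_embeddings, side_embedding, threshold=0.6):
    # Average the frontal embeddings (the "average face"), then compare
    # against the side-face embedding; low similarity suggests the side
    # face was not generated from the target portrait (likely forged).
    mean_front = np.mean(front_embeddings, axis=0)
    return cosine_similarity(mean_front, side_embedding) >= threshold

fronts = [np.array([1.0, 0.0, 0.2]), np.array([0.9, 0.1, 0.3])]
print(same_identity(fronts, np.array([1.0, 0.05, 0.25])))  # True  (same person)
print(same_identity(fronts, np.array([-0.2, 1.0, 0.0])))   # False (mismatch)
```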
Further, as an improvement concerning the comparison of the face features, the comparison relies on a face-angle judgment: the face angle is determined with the aid of the 68 facial landmark positions extracted from the face region, and the angle is calculated from the affine transformation matrix between a standard face and the detected face.
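The patent does not spell out the angle formula. One plausible reading, sketched below with 5 points standing in for the 68 landmarks, is to fit a least-squares affine map from the standard face to the detected landmarks and take the horizontal foreshortening (x-scale relative to y-scale) as a yaw proxy:

```python
import numpy as np

def estimate_affine(src, dst):
    # Least-squares 2x3 affine mapping standard landmarks to detected ones.
    # src, dst: (N, 2) arrays of landmark coordinates (e.g. the 68 points).
    A = np.hstack([src, np.ones((len(src), 1))])   # (N, 3) design matrix
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # (3, 2) solution
    return M.T                                     # (2, 3) affine matrix

def yaw_proxy_degrees(affine):
    # Horizontal compression relative to vertical scale as a crude yaw
    # estimate: a frontal face gives ~0 deg, a turned face gives more.
    sx = np.linalg.norm(affine[:, 0])
    sy = np.linalg.norm(affine[:, 1])
    ratio = np.clip(sx / sy, 0.0, 1.0)
    return float(np.degrees(np.arccos(ratio)))

standard = np.array([[0, 0], [2, 0], [0, 2], [2, 2], [1, 1]], float)
frontal = standard.copy()                        # unrotated detection
turned = standard * np.array([0.5, 1.0]) + 3.0   # x compressed by cos(60 deg)
print(round(yaw_proxy_degrees(estimate_affine(standard, frontal))))  # 0
print(round(yaw_proxy_degrees(estimate_affine(standard, turned))))   # 60
```

In practice the 68 landmarks would come from a detector such as dlib's, and a proper 3D head-pose solve would replace this 2D proxy.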
When DeepFake produces a false video, many pictures of the target person must be supplied, covering as many expressions and angles as possible. In practice, however, enough suitable pictures of the target person are often unavailable, so the trained DeepFake model tends to overfit; that is, unnatural states such as "stiff expressions" appear in the generated video, whereas the people in the source video show no such artifacts. Exploiting this, an expression recognition algorithm can be combined with an LSTM (long short-term memory) network, which is sensitive to changes over the time domain, and a binary cross-entropy loss function is used for two-class training of the output. The dataset consists of false videos generated with DeepFake and unmodified, non-repeating videos. The model judges whether the character's expression in the input video is "stiff", and its result is combined by weighted averaging with the other cues to judge whether the video was modified by a DeepFake-style algorithm.
Further, as an improvement concerning the training of the expression/action classification model, the model is trained by feeding the false video data into a long short-term memory network for expression feature extraction and classifying real and false videos with the "expression/action classification model", which is an expression-falseness classification model.
Further, as an improvement concerning expression feature extraction, the long short-term memory network extracts expression features by capturing how they change over the time domain, so that the expression-falseness classification model can flag video segments with unnatural expression changes, thereby achieving false video detection.
Further, as an improvement concerning the loss function of the expression/action classification model, the loss function is the binary cross-entropy loss function, which is used for two-class training of the expression/action classification model.
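A minimal LSTM cell with a sigmoid output and the binary cross-entropy loss might look as follows; `TinyLSTM` is an untrained toy processing one expression-feature vector per frame, not the patent's network:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(p, y, eps=1e-12):
    # Binary cross-entropy for the two-class (real/fake) training target.
    return float(-(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)))

class TinyLSTM:
    # Minimal LSTM cell: keeps a cell state across frames, which is what
    # makes it sensitive to expression changes over the time domain.
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        k = n_in + n_hidden
        self.W = rng.normal(scale=0.1, size=(4, n_hidden, k))  # i, f, o, g gates
        self.b = np.zeros((4, n_hidden))
        self.w_out = rng.normal(scale=0.1, size=n_hidden)

    def score(self, frames):
        h = c = np.zeros(self.W.shape[1])
        for x in frames:                        # one feature vector per frame
            z = np.concatenate([x, h])
            i = sigmoid(self.W[0] @ z + self.b[0])   # input gate
            f = sigmoid(self.W[1] @ z + self.b[1])   # forget gate
            o = sigmoid(self.W[2] @ z + self.b[2])   # output gate
            g = np.tanh(self.W[3] @ z + self.b[3])   # candidate state
            c = f * c + i * g
            h = o * np.tanh(c)
        return sigmoid(h @ self.w_out)          # P(video is fake)

lstm = TinyLSTM(n_in=4, n_hidden=8)
rng = np.random.default_rng(1)
frames = [rng.random(4) for _ in range(10)]
p = lstm.score(frames)
print(float(p), bce(float(p), 1.0))
```

Training would backpropagate the BCE loss through time; in practice a framework LSTM (e.g. a recurrent layer plus a sigmoid head) would be used instead of this hand-rolled cell.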
Further, as an improvement concerning the loss function, in S4 the loss function is the binary cross-entropy loss function.
Further, as an improvement concerning the optimizer, in S4 the optimizer is the Adam optimizer.
Further, as an improvement concerning loss optimization, the final loss value of the optimized loss function is below 1e-6.
Further, as an improvement concerning the classification threshold, the classification threshold is the optimal point of the ROC curve selected by the "elbow rule".
Further, as an improvement concerning the activation layer function, the activation layer function is the Sigmoid activation function.
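The elbow rule on the ROC curve can be read as picking the operating point closest to the ideal corner (false positive rate 0, true positive rate 1); under that assumption, a sketch:

```python
import numpy as np

def roc_elbow_threshold(scores, labels):
    # Sweep candidate thresholds; take the one whose ROC point is closest
    # to the ideal corner (FPR = 0, TPR = 1), one common reading of the
    # "elbow rule" for threshold selection.
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    best_t, best_d = None, np.inf
    for t in np.unique(scores):
        pred = scores >= t
        tpr = (pred & (labels == 1)).sum() / max((labels == 1).sum(), 1)
        fpr = (pred & (labels == 0)).sum() / max((labels == 0).sum(), 1)
        d = np.hypot(fpr, 1.0 - tpr)           # distance to the ideal corner
        if d < best_d:
            best_d, best_t = d, float(t)
    return best_t

scores = [0.1, 0.2, 0.35, 0.4, 0.7, 0.8, 0.9]
labels = [0,   0,   0,    1,   1,   1,   1]
print(roc_elbow_threshold(scores, labels))  # 0.4 (perfectly separates the toy data)
```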
The invention provides a method that uses computer vision processing techniques similar to DeepFake's to quickly judge whether an input video file has been transformed by a DeepFake-style algorithm; it helps screen out false videos and improves the accuracy of false video detection by employing three feature extraction models.
Drawings
Fig. 1 is a general flow diagram of the present invention.
FIG. 2 is a schematic diagram of training the generative adversarial network feature extraction model.
FIG. 3 is a schematic diagram of training a front/side face contrast feature extraction model.
Fig. 4 is a schematic diagram of the expression and motion classification feature extraction model training.
Detailed Description
Embodiments of the present invention are described with reference to fig. 1-4: the method is divided into two stages, namely a model training stage and an algorithm application stage.
In the training stage, a DeepFake-related open-source algorithm is downloaded in advance; 3500 false videos about 3 minutes long and 2000 unmodified videos are generated from one's own data; 15% of the 5500 videos are extracted as the test set and the remaining 85% serve as the training set.
As shown in fig. 1, three feature extraction models, namely a "generative adversarial network discriminator model", a "front/side face comparison model" and an "expression/action classification model", are trained in advance, and the three trained models are stored. The training set is then fed into each of the three models for feature extraction, and a linear fit is performed on the extracted features, the fitting target being the ground-truth labels of the training set; the test set is used to quantify training quality during model training.
As shown in fig. 2, the "generative adversarial network discriminator model" is trained by contrastive learning on samples extracted from real videos and samples extracted from videos generated by feeding random parameters into a DeepFake model. The samples are face regions; the adversarial model in the GAN recognizes the unnatural colour changes and splicing seams around forged face regions and classifies real and false videos, from which the loss function is defined. The adversarial model in the GAN serves as the discriminator model.
As shown in fig. 3, the "front/side face comparison model" is trained by feeding the false video data into an algorithm for face detection and angle judgment and extracting frontal and side-face samples; an average face sample is computed from the frontal samples and fed into a first face recognition model for face feature extraction, while the side-face samples are fed into a second face recognition model for face feature extraction. The "front/side face comparison model" is a face feature comparison model: the features extracted by the two recognition models are fed into it, and it is trained to classify real and false videos.
As shown in fig. 4, the "expression/action classification model" is trained by feeding the false video data into a long short-term memory network for expression feature extraction and classifying real and false videos with the "expression/action classification model", which is an expression-falseness classification model.
An expression recognition algorithm is combined with an LSTM long short-term memory network sensitive to changes over the time domain. The dataset for training the "expression/action classification model" consists of 3500 false videos about 1 minute long generated with DeepFake and 2000 unmodified, non-repeating videos; a binary cross-entropy loss function is used for the classification training. The model judges whether the character's expression in the input video is "stiff", and the results are combined by weighted averaging to judge whether the video was modified by a DeepFake algorithm.
After the three feature extraction models have been trained, optimization is performed with an Adam optimizer, using the binary cross-entropy loss function as the criterion. The feature extraction model whose final loss falls below 1e-6 is saved as the final model; the training set is then traversed again, the optimal point of the ROC curve is selected by the elbow rule as the classification threshold, and the algorithm application stage begins.
In the application stage, a video to be classified is cut into several segments no longer than 3 minutes, each classified separately. The segments are first input into the three pre-trained models for feature extraction; the extracted features are then fed into the linear regression model and the output of the final activation layer (a Sigmoid activation function) is taken. If the average of the results is greater than the classification threshold, the video is judged to be a false video.
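The application-stage segmentation and averaging can be sketched as follows; `score_clip` stands in for the whole three-model feature-extraction and regression pipeline, and the 30 fps assumption is illustrative:

```python
def segment_frames(frames, fps=30, max_minutes=3):
    # Cut a long frame sequence into clips no longer than max_minutes each.
    step = fps * 60 * max_minutes
    return [frames[i:i + step] for i in range(0, len(frames), step)]

def is_fake(frames, score_clip, threshold):
    # score_clip: any callable mapping a clip to a sigmoid score in [0, 1]
    # (here a stand-in for the three-model plus linear-regression pipeline).
    clips = segment_frames(frames)
    mean = sum(score_clip(c) for c in clips) / len(clips)
    return mean > threshold

frames = list(range(30 * 60 * 7))            # a 7-minute video at 30 fps
clips = segment_frames(frames)
print([len(c) // (30 * 60) for c in clips])  # [3, 3, 1] minutes per clip
print(is_fake(frames, lambda c: 0.9, 0.62))  # True: mean score 0.9 > threshold
```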
The invention provides a method that uses computer vision processing techniques similar to DeepFake's to quickly judge whether an input video file has been transformed by a DeepFake-style algorithm; it helps screen out false videos and improves the accuracy of false video detection by employing three feature extraction models.

Claims (10)

1. A false video detection method based on computer vision and deep learning algorithms, characterized by comprising the following steps:
S1: downloading an algorithm for false video generation in advance, and generating false videos and unmodified videos from one's own data, one part of which serves as a test set and the other part as a training set;
S2: pre-training at least one feature extraction model, and storing the trained feature extraction model;
S3: inputting the training set into each feature extraction model for feature extraction, and performing a linear fit on the extracted features, the fitting target being the ground-truth labels of the training set; the test set is used to quantify training quality during model training;
S4: defining a loss function for the feature extraction model, and optimizing with an optimizer using the defined loss function as the criterion;
S5: saving the feature extraction model whose loss function has been optimized as the final model; traversing the training set again, selecting a classification threshold, and entering the algorithm application stage;
S6: in the algorithm application stage, inputting a video generated from new data into the final model for feature extraction;
S7: feeding the extracted features into a linear regression model and taking the output of the final activation layer; if the average of the results is greater than the classification threshold, the video is judged to be a false video.
2. The false video detection method based on computer vision and deep learning algorithms according to claim 1, characterized in that in S1 the algorithm for false video generation is a DeepFake-related open-source algorithm.
3. The false video detection method based on computer vision and deep learning algorithms according to claim 1, characterized in that in S1 the duration of the false videos and unmodified videos is 1-3 minutes; the number of false videos is 3000-3500, the number of unmodified videos is 2000-2500, 15%-20% of all videos are extracted as the test set, and the remaining 80%-85% serve as the training set.
4. The false video detection method according to claim 1, characterized in that in S2 three feature extraction models, namely a "generative adversarial network discriminator model", a "front/side face comparison model" and an "expression/action classification model", are trained in advance and then stored separately.
5. The method according to claim 4, characterized in that the generative adversarial network discriminator model is trained by contrastive learning on samples extracted from real videos and samples extracted from videos generated by feeding random parameters into a DeepFake model; the samples are face regions, the adversarial model in the GAN recognizes the unnatural colour changes and splicing seams around forged face regions and classifies real and false videos, thereby defining the loss function, and the adversarial model in the GAN is the discriminator model.
6. The method according to claim 4, characterized in that the "front/side face comparison model" is trained by feeding false video data into an algorithm for face detection and angle judgment and extracting frontal and side-face samples; an average face sample is computed from the frontal samples and fed into a first face recognition model for face feature extraction, and the side-face samples are fed into a second face recognition model for face feature extraction; the "front/side face comparison model" is a face feature comparison model into which the features extracted by the first and second face recognition models are fed, and which is trained to classify real and false videos.
7. The false video detection method based on computer vision and deep learning algorithms according to claim 6, characterized in that the face features are compared by means of a face-angle judgment; the face angle is determined with the aid of the 68 facial landmark positions extracted from the face region, and the angle is calculated from the affine transformation matrix between a standard face and the detected face.
8. The false video detection method based on computer vision and deep learning algorithms according to claim 4, characterized in that the "expression/action classification model" is trained by feeding false video data into a long short-term memory network for expression feature extraction and classifying real and false videos with the "expression/action classification model", which is an expression-falseness classification model.
9. The false video detection method based on computer vision and deep learning algorithms according to claim 8, characterized in that the long short-term memory network extracts expression features by capturing how they change over the time domain, so that the expression-falseness classification model can flag video segments with unnatural expression changes, thereby achieving the purpose of detecting false videos.
10. The false video detection method based on computer vision and deep learning algorithms according to claim 9, characterized in that the loss function defined for the expression/action classification model is the binary cross-entropy loss function, which is used for two-class training of the expression/action classification model.
CN202010158340.2A 2020-03-09 2020-03-09 False video detection method based on computer vision and deep learning algorithm Active CN111368764B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010158340.2A CN111368764B (en) 2020-03-09 2020-03-09 False video detection method based on computer vision and deep learning algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010158340.2A CN111368764B (en) 2020-03-09 2020-03-09 False video detection method based on computer vision and deep learning algorithm

Publications (2)

Publication Number Publication Date
CN111368764A true CN111368764A (en) 2020-07-03
CN111368764B CN111368764B (en) 2023-02-21

Family

ID=71208643

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010158340.2A Active CN111368764B (en) 2020-03-09 2020-03-09 False video detection method based on computer vision and deep learning algorithm

Country Status (1)

Country Link
CN (1) CN111368764B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017024963A1 (en) * 2015-08-11 2017-02-16 阿里巴巴集团控股有限公司 Image recognition method, measure learning method and image source recognition method and device
CN109583342A (en) * 2018-11-21 2019-04-05 重庆邮电大学 Human face in-vivo detection method based on transfer learning
WO2019114580A1 (en) * 2017-12-13 2019-06-20 深圳励飞科技有限公司 Living body detection method, computer apparatus and computer-readable storage medium
CN110647659A (en) * 2019-09-27 2020-01-03 上海依图网络科技有限公司 Imaging system and video processing method


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111950497A (en) * 2020-08-20 2020-11-17 重庆邮电大学 AI face-changing video detection method based on multitask learning model
CN111950497B (en) * 2020-08-20 2022-07-01 重庆邮电大学 AI face-changing video detection method based on multitask learning model
CN112200001A (en) * 2020-09-11 2021-01-08 南京星耀智能科技有限公司 Deepfake video identification method for specified scenes
CN112580521A (en) * 2020-12-22 2021-03-30 浙江工业大学 Multi-feature real/fake video detection method based on the MAML (Model-Agnostic Meta-Learning) algorithm
CN112580521B (en) * 2020-12-22 2024-02-20 浙江工业大学 Multi-feature real/fake video detection method based on the MAML (Model-Agnostic Meta-Learning) algorithm
CN112613480A (en) * 2021-01-04 2021-04-06 上海明略人工智能(集团)有限公司 Face recognition method, face recognition system, electronic equipment and storage medium
CN112733733A (en) * 2021-01-11 2021-04-30 中国科学技术大学 Counterfeit video detection method, electronic device and storage medium
CN112861671A (en) * 2021-01-27 2021-05-28 电子科技大学 Method for identifying deepfake face images and videos
CN112861671B (en) * 2021-01-27 2022-10-21 电子科技大学 Method for identifying deepfake face images and videos
CN113628754A (en) * 2021-08-12 2021-11-09 武剑 Cerebrovascular disease dynamic prediction model construction method and system based on artificial intelligence
CN113628754B (en) * 2021-08-12 2022-04-08 武剑 Cerebrovascular disease dynamic prediction model construction method and system based on artificial intelligence
CN115937994A (en) * 2023-01-06 2023-04-07 南昌大学 Data detection method based on deep learning detection model

Also Published As

Publication number Publication date
CN111368764B (en) 2023-02-21

Similar Documents

Publication Publication Date Title
CN111368764B (en) False video detection method based on computer vision and deep learning algorithm
Matern et al. Exploiting visual artifacts to expose deepfakes and face manipulations
CN108171158B (en) Living body detection method, living body detection device, electronic apparatus, and storage medium
CN111950497B (en) AI face-changing video detection method based on multitask learning model
KR102132407B1 (en) Method and apparatus for estimating human emotion based on adaptive image recognition using incremental deep learning
JP6111297B2 (en) Method, apparatus, and program
CN110414367B (en) Time sequence behavior detection method based on GAN and SSN
CN109086657B (en) Ear detection method, system and model based on machine learning
CN111191584A (en) Face recognition method and device
Diyasa et al. Multi-face Recognition for the Detection of Prisoners in Jail using a Modified Cascade Classifier and CNN
CN113658108A (en) Glass defect detection method based on deep learning
Zhang et al. Face spoofing video detection using spatio-temporal statistical binary pattern
Maiano et al. Depthfake: a depth-based strategy for detecting deepfake videos
Arafah et al. Face recognition system using Viola Jones, histograms of oriented gradients and multi-class support vector machine
JP2011170890A (en) Face detecting method, face detection device, and program
KR20160080483A (en) Method for recognizing gender using random forest
JP4795737B2 (en) Face detection method, apparatus, and program
CN115457620A (en) User expression recognition method and device, computer equipment and storage medium
Zou et al. Rapid face detection in static video using background subtraction
Nallapati et al. Identification of Deepfakes using Strategic Models and Architectures
Ijaz et al. A survey on currency identification system for blind and visually impaired
Jang et al. Skin region segmentation using an image-adapted colour model
Pasha et al. An Efficient Novel Approach for Iris Recognition and Segmentation Based on the Utilization of Deep Learning
CN111353353A (en) Cross-pose face recognition method and device
Vijayalakshmi et al. Image classifier based digital image forensic detection-a review and simulations

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant