CN111160286A - Video authenticity identification method - Google Patents

Video authenticity identification method

Info

Publication number
CN111160286A
Authority
CN
China
Prior art keywords
image
video
image data
pooling
authenticity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911412373.9A
Other languages
Chinese (zh)
Other versions
CN111160286B (en)
Inventor
白立飞
王惠峰
张昆
王子玮
张峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC Information Science Research Institute
Original Assignee
CETC Information Science Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC Information Science Research Institute filed Critical CETC Information Science Research Institute
Priority to CN201911412373.9A priority Critical patent/CN111160286B/en
Publication of CN111160286A publication Critical patent/CN111160286A/en
Application granted granted Critical
Publication of CN111160286B publication Critical patent/CN111160286B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Human Computer Interaction (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

A method for authenticating video, the method comprising: extracting image frames from a video to obtain first image data needing to be identified; processing the first image data in blocks to obtain N_p image blocks, i.e. N_p second image data, where N_p = 2, 3, …; respectively extracting the local micro-pattern features in the N_p second image data to obtain N_p third image data; aggregating the N_p third image data by a pooling method to obtain a single descriptor; classifying the single descriptor by a binary classification method to obtain the authenticity result of the image frame; and comprehensively judging the authenticity results of the image frames to obtain the authenticity result of the video. The video authenticity identification method provided by the invention breaks through the current situation that authenticity identification methods based on deep neural networks are limited to face authenticity identification, and discloses a universal authenticity identification method that can handle various tampering modes.

Description

Video authenticity identification method
Technical Field
The invention relates to the field of image processing, in particular to a video authenticity identification method.
Background
With the development of deep learning, and in particular of generative adversarial networks, the quality of computer-generated pictures and videos has reached a level where they can be mistaken for real footage. Methods that rely on video as proof of authenticity are therefore greatly challenged; in particular, following hot news such as the falsified speech video of Facebook founder Mark Zuckerberg and scams in which criminals impersonate people's children, video authenticity identification has gradually become a new research hotspot in the multimedia security field. Existing non-cooperative video detection methods are mainly of two types: 1. traditional video forensics methods; 2. deep learning methods. However, traditional video forensics methods cannot cope with novel generation methods such as DeepFake and Face2Face. Current authenticity identification methods based on deep learning mainly focus on face authenticity identification and cannot handle other forms of tampering well, such as modification of certain objects in images.
Disclosure of Invention
In order to solve the above problems, the present invention provides a method for accurately identifying video authenticity, in particular a video authenticity identification method, the method comprising:
S1: extracting image frames from a video to obtain first image data needing to be identified;
S2: processing the first image data in blocks to obtain N_p image blocks, i.e. N_p second image data, where N_p = 2, 3, …;
S3: respectively extracting the local micro-pattern features in the N_p second image data to obtain N_p third image data;
S4: aggregating the N_p third image data by a pooling method to obtain a single descriptor;
S5: classifying the single descriptor by a binary classification method to obtain the authenticity result of the image frame;
S6: comprehensively judging the authenticity results of the image frames to obtain the authenticity result of the video.
Further, after step S3, the method further includes:
performing principal component analysis on the N_p third image data to obtain an authenticity area discrimination matrix of the image.
Further, the blocking process in step S2 is implemented by a user-defined blocking method, that is, with a user-defined number of blocks and a user-defined blocking strategy.
Further, the blocking process in step S2 employs an image segmentation algorithm.
Further, the local micro-pattern feature extraction method in step S3 is a convolutional neural network feature extraction method, and the extracted image features are camera fingerprints and/or coding fingerprints.
Further, the pooling method in step S4 is one or a combination of maximum pooling, minimum pooling, average pooling and square-mean pooling.
Further, when the pooling method is a combination of several of maximum pooling, minimum pooling, average pooling and square-mean pooling, at least two of the four pooling methods are combined in an adaptive weighting manner.
Further, the binary classification method in step S5 is specifically a neural network binary classification method.
Further, the neural network binary classification method is a generative adversarial deep neural network binary classification method.
A video authenticity identification apparatus, comprising:
an image frame extraction module, configured to extract image frames from a video and acquire first image data needing to be identified;
an image blocking module, configured to block the first image data to obtain N_p image blocks, i.e. N_p second image data, where N_p = 2, 3, …;
an image local micro-pattern feature extraction module, configured to respectively extract the local micro-pattern features in the N_p second image data to obtain N_p third image data;
an aggregation module, configured to aggregate the N_p third image data to obtain a single descriptor;
an image authenticity judging module, configured to classify the single descriptor by a binary classification method to obtain the authenticity result of the image frame;
and a video authenticity judging module, configured to comprehensively judge the authenticity results of the image frames to obtain the authenticity result of the video.
The video authenticity identification method provided by the invention breaks through the current situation that authenticity identification methods based on deep neural networks are limited to face authenticity identification, and provides a universal authenticity identification method that can handle various tampering modes. By combining video authenticity identification with local micro-patterns, it overcomes the problem that existing deep-learning authenticity identification mainly depends on high-level macroscopic semantic features. In addition, the method can not only identify authenticity but also display the tampered area.
Drawings
Fig. 1 is a schematic flow chart of the video authenticity identification method in Embodiment 1;
Fig. 2 is a schematic diagram of user-defined blocking in Embodiment 1;
Fig. 3 is a schematic diagram of segmentation-based blocking in Embodiment 1;
Fig. 4 is a schematic diagram of a local micro-pattern feature extraction result in Embodiment 1;
Fig. 5 is a schematic diagram of the video authenticity identification apparatus in Embodiment 3.
Detailed Description
The invention will be described in further detail below with reference to Figs. 1 to 5, so that the contents and advantages of the invention may be better understood. The following examples are provided to give a clear and thorough understanding of the present invention and are not intended to limit it.
Example 1
The invention provides a video authenticity identification method, which realizes authenticity identification of a video by identifying the authenticity of each frame image in the video, as shown in Fig. 1.
In the first step, image frames are extracted from the video to obtain the first image data to be identified, as shown at 1 in Fig. 1.
Frame extraction is realized by image processing tools such as OpenCV and MATLAB. The type of the first image data differs according to the video source: if the video source is a grayscale image, the first image data is a grayscale data matrix formed by the grayscale value of each pixel; if the video source is a color image, the first image data includes not only a grayscale data matrix representing the grayscale value of each pixel, but also matrices of different color channels, such as data matrices of the R, G, B channels or of the H, S, V channels. The specific type of the first image data varies with the actual image source; the above is only an example.
The video image frames can be extracted as key frames or frame by frame. If the video has N frames, the ith image frame extracted from the video is I_i ∈ R^(w×h×c), where w, h and c are respectively the width, height and channel number of the image, and i ≤ N.
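As a concrete illustration, the following is a minimal sketch of this frame extraction step using OpenCV (one of the tools named above); the helper name extract_frames and the strided-sampling stand-in for key-frame selection are assumptions for illustration, not part of the patent.

    import cv2

    def extract_frames(video_path, step=1):
        # step=1 reads frame by frame; a larger step is a crude
        # stand-in for key-frame selection (an assumption here).
        frames = []
        cap = cv2.VideoCapture(video_path)
        idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % step == 0:
                frames.append(frame)  # h x w x c array: the "first image data"
            idx += 1
        cap.release()
        return frames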
In the second step, the first image data is processed in blocks to obtain N_p image blocks, i.e. N_p second image data, where N_p = 2, 3, …, as shown at 2 in Fig. 1.
The image blocking can adopt a user-defined blocking method or an image segmentation algorithm. If a user-defined blocking method is adopted, the number of blocks and the blocking strategy are both user-defined: the image can be divided into 3×3, 4×4, 5×5, 3×5, 5×9 or other numbers of blocks, and the blocking strategy can use equal or unequal division with rectangular, triangular or other polygonal blocks. If an image segmentation algorithm is adopted, the specific size and number of the segments are not fixed; instead, different segmentation modes are selected according to the specific use requirements, such as segmenting the foreground and background of a person image, segmenting an object in a background image, or segmenting a face region. Furthermore, different segmentation algorithms may be used, such as threshold-based segmentation, the watershed algorithm, edge-detection-based segmentation, region-based segmentation, genetic-algorithm-based segmentation, and so on. In this embodiment, the segmentation method is selected as follows (a code sketch of this selection rule is given below): identify whether a person exists in the image; if there is no person, judge the pixel proportion of the object in the background relative to the whole image; if this proportion exceeds a first threshold, block the image by segmenting the object in the background, otherwise adopt a user-defined blocking method with a user-defined number of blocks and blocking strategy. If the image contains a person, further judge the pixel proportion of the person image relative to the whole image: if it exceeds a second threshold, block the image by face region segmentation; otherwise block the image by segmenting the foreground and background of the person image.
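The selection rule above can be summarized as a small decision procedure. In the sketch below, the threshold values and the upstream signals has_person, object_ratio and person_ratio (e.g. produced by a person detector) are illustrative assumptions; the patent leaves their concrete values open.

    def choose_blocking(has_person, object_ratio, person_ratio,
                        first_threshold=0.3, second_threshold=0.5):
        # Threshold values are illustrative, not specified by the patent.
        if not has_person:
            if object_ratio > first_threshold:
                return "segment object in background"
            return "user-defined grid"  # user-defined block count and strategy
        if person_ratio > second_threshold:
            return "segment face regions"
        return "segment person foreground/background"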
In this embodiment, the image is divided into rectangular image blocks. The ith image frame I_i ∈ R^(w×h×c) is partitioned, and the jth image block is I_ij ∈ R^(w_j×h_j×c), where w_j and h_j are respectively the width and height of the jth image block.
Fig. 2 shows an example of blocking by the user-defined blocking method, in which the image is divided into 4×4 rectangular blocks. The ith image frame I_i is divided into 4×4 parts, giving image blocks I_ij with j ≤ 16, where w_j = w/4 and h_j = h/4.
Fig. 3 shows an example of blocking by the face region segmentation method, in which the face regions in the image are segmented from the background region to form image blocks. The ith image frame I_i is segmented by face region, giving image blocks I_ij with j ≤ L, where L is the number of faces in the image.
It should be noted that the image blocking method is described in this embodiment by way of example (a sketch of the 4×4 grid case follows); it does not mean that only rectangular blocks can be used for image blocking.
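As a minimal sketch of the user-defined rectangular blocking above, assuming the image is a NumPy array as returned by OpenCV (the helper name grid_blocks is hypothetical):

    import numpy as np

    def grid_blocks(image, rows=4, cols=4):
        # Split an image into a rows x cols grid of rectangular blocks.
        h, w = image.shape[:2]
        blocks = []
        for r in range(rows):
            for c in range(cols):
                blocks.append(image[r * h // rows:(r + 1) * h // rows,
                                    c * w // cols:(c + 1) * w // cols])
        return blocks  # N_p = rows * cols pieces of "second image data"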
In the third step, the local micro-pattern features in the N_p second image data are respectively extracted to obtain N_p third image data, as shown at 3 in Fig. 1.
The local micro-pattern feature extraction from the second image data may use grayscale feature extraction, texture feature extraction, shape feature extraction and the like. In this embodiment, a convolutional neural network-based feature extraction method is used to extract the high-frequency features in the N_p second image data, the high-frequency features including camera fingerprint and/or coding fingerprint features. As shown in Fig. 4, area A is a forged area of the image and area B is a real area; it can be seen that the local micro-pattern features of the forged and real regions are not the same, and the authenticity is therefore discriminated by detecting this difference in local micro-pattern features, as described below.
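The patent does not disclose a concrete network architecture, so the following is only a hypothetical sketch of a CNN feature extractor biased toward high-frequency residuals: a fixed Laplacian-style high-pass filter suppresses image content before a few learned convolutions produce an M-dimensional feature vector per block.

    import torch
    import torch.nn as nn

    class MicroPatternNet(nn.Module):
        # Hypothetical extractor of local micro-pattern (high-frequency) features.
        def __init__(self, m_features=64):
            super().__init__()
            hp = torch.tensor([[0., -1., 0.], [-1., 4., -1.], [0., -1., 0.]])
            self.highpass = nn.Conv2d(3, 3, 3, padding=1, groups=3, bias=False)
            self.highpass.weight.data = hp.repeat(3, 1, 1, 1)  # per-channel filter
            self.highpass.weight.requires_grad = False
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, m_features, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),  # one M-dimensional vector per block
            )

        def forward(self, block):  # block: (B, 3, h_j, w_j) tensor
            return self.features(self.highpass(block)).flatten(1)  # (B, M)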
In the fourth step, the N_p third image data are aggregated by a pooling method to obtain a single descriptor, as shown at 4 in Fig. 1.
The pooling method for aggregating the N_p third image data is one or a combination of maximum pooling, minimum pooling, average pooling and square-mean pooling. In this example, the third image data are represented by F_ij = [F_ij,1, …, F_ij,M], the features of the jth image block of the ith video image frame, where M is the number of local micro-pattern features and N_p is the number of image blocks. The N_p third image data, i.e. the N_p image blocks, are pooled; the aggregation methods are specifically shown below.
The maximum pooling method is specifically:
F_i^max = [max_j F_ij,1, …, max_j F_ij,M]
The minimum pooling method is specifically:
F_i^min = [min_j F_ij,1, …, min_j F_ij,M]
The average pooling method is specifically:
F_i^mean = [(1/N_p) Σ_j F_ij,1, …, (1/N_p) Σ_j F_ij,M]
The square-mean pooling method is specifically:
F_i^msq = [sqrt((1/N_p) Σ_j F_ij,1^2), …, sqrt((1/N_p) Σ_j F_ij,M^2)]
The pooling method may be selected according to the specific use requirements, either individually or in combination. If an individual pooling method is selected, one of the several different pooling methods is used for aggregation. If a combined pooling method is selected, at least two different pooling methods are chosen, a weight is set for each, and the weighted pooling result is A·F_i^max + B·F_i^min + C·F_i^mean + D·F_i^msq. For example, selecting maximum pooling and average pooling with weight A = 0.9 for maximum pooling and C = 0.1 for average pooling (B and D taking the value 0) gives the combined pooling method 0.9·F_i^max + 0.1·F_i^mean; or selecting average pooling and square-mean pooling with C = 0.4 and D = 0.6 (the weights of maximum and minimum pooling set to 0) gives 0.6·F_i^msq + 0.4·F_i^mean. It should be noted that the combined pooling methods are given in this embodiment by way of example; it does not mean that only the above combinations or weight settings can be used for aggregation.
In this embodiment, the aggregation method is selected adaptively according to the information distribution of the image: judge the information distribution of the image; if the information distribution is greater than a dispersion threshold, use the average or square-mean pooling method; if it is less than or equal to the dispersion threshold, use the maximum or minimum pooling method. That is, when the discriminative information is spread over the whole image, average pooling works well, whereas when it is concentrated in local areas, maximum or minimum pooling works well. The resulting pooling result is a single descriptor (a sketch of these pooling operations follows).
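A minimal NumPy sketch of the four pooling operations and their weighted combination; reading square-mean pooling as the root mean square is an assumption here, as is the helper name pool_descriptor.

    import numpy as np

    def pool_descriptor(F, weights=(0.9, 0.0, 0.1, 0.0)):
        # F: (N_p, M) array of per-block features F_ij.
        # weights = (A, B, C, D) for max, min, mean, square-mean pooling.
        a, b, c, d = weights
        f_max = F.max(axis=0)
        f_min = F.min(axis=0)
        f_mean = F.mean(axis=0)
        f_msq = np.sqrt((F ** 2).mean(axis=0))  # square-mean read as RMS
        return a * f_max + b * f_min + c * f_mean + d * f_msq  # single descriptor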
In the fifth step, the single descriptor is classified by a binary classification method to obtain the authenticity result of the image frame, as shown at 5 in Fig. 1. The binary classification method can be any algorithm capable of binary classification; the generative adversarial deep neural network binary classification method used in this embodiment specifically comprises:
S51: acquiring a plurality of groups of training samples, wherein each group of training samples comprises an input image and a target image; the groups of training samples can be taken from image frames of known tampered videos, i.e. images extracted from the tampered videos by the frame extraction method;
S52: inputting the input image to the generator network of the generative adversarial deep neural network, and performing authenticity identification on the input image based on the generator network to obtain a generated image;
S53: inputting the generated image, the input image and the target image into the discriminator network of the generative adversarial neural network to obtain a first discrimination result for the pixel values of the generated image and a second discrimination result for the pixel values of the target image;
S54: optimizing the parameters of the generative adversarial neural network according to the first and second discrimination results to obtain the image authenticity identification model. Optimizing the parameters of the generative adversarial deep neural network according to the first and second discrimination results includes optimizing the parameters of the generator network and of the discriminator network according to one or more of: the pixel differences between the generated image and the target image, the differences between adjacent pixels in the generated image, and the pixel differences between multiple generated images output by the generator network.
Through this binary classification method, the authenticity result obtained for an image is 0 or 1: if the result is 0, the image is considered a real image; if the result is 1, the image is considered forged. This yields the authenticity result of the image frame.
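At inference time the frame-level decision reduces to mapping the single descriptor to a 0/1 label. The following stand-in is purely illustrative: a small MLP with an assumed descriptor size M = 64, not the patent's trained adversarial network.

    import torch
    import torch.nn as nn

    classifier = nn.Sequential(          # hypothetical stand-in for the
        nn.Linear(64, 32), nn.ReLU(),    # trained binary classifier
        nn.Linear(32, 1), nn.Sigmoid(),
    )

    def classify_frame(descriptor):
        # Returns 0 for a real frame, 1 for a forged frame.
        with torch.no_grad():
            score = classifier(torch.as_tensor(descriptor, dtype=torch.float32))
        return int(score.item() > 0.5)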
In the sixth step, the authenticity results of the image frames are comprehensively judged to obtain the authenticity result of the video. The method of comprehensive judgment can be selected according to the specific use requirements: for example, if any image frame in the video is a forged image, the video is considered forged; or, if the ratio of the number of forged image frames to the total number of frames exceeds a forgery threshold, the video is judged to be a forged video. The forgery threshold can be set according to the actual situation, such as 30%, 50% or 60%. Further, an authenticity matrix of the video can be generated to represent the authenticity distribution within the video.
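A minimal sketch of the ratio-based comprehensive judgment; the default threshold is one of the example values given above, and the helper name video_verdict is hypothetical.

    def video_verdict(frame_results, forgery_threshold=0.3):
        # frame_results: list of per-frame 0/1 authenticity results.
        ratio = sum(frame_results) / len(frame_results)
        return "forged" if ratio > forgery_threshold else "real"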
Example 2
On the basis of Embodiment 1, this embodiment further includes a method for determining the authenticity area of a frame image in the video, as shown at 6 in Fig. 1. Specifically, after the third step, the method further includes: performing principal component analysis (PCA) on the N_p third image data to obtain an authenticity area discrimination matrix of the image.
By using principal component analysis to aggregate the cross-channel feature statistics, the direction of maximum variation in the high-dimensional space can be obtained, yielding a more appropriate attention image.
First, the features F_ij of the j image blocks of the ith video image frame are transformed into a matrix X ∈ R^(HW×M), where H and W are the height and width of the feature F_ij. The maximum eigenvector v of the covariance matrix (X - μ)^T (X - μ) is obtained by singular value decomposition (SVD), where μ is the mean of the rows of the matrix X. The attention image, i.e. the possible tampered area of the image, is obtained by applying the following formula:
M_att = Sigmoid((X - μ)v)
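A NumPy sketch of this attention computation under the shape convention above (per-location features flattened into an HW × M matrix); the helper name attention_map is hypothetical.

    import numpy as np

    def attention_map(F, H, W):
        # F: (H*W, M) array of per-location micro-pattern features.
        X = F.reshape(H * W, -1)
        mu = X.mean(axis=0)                          # mean over rows
        # Right singular vectors of (X - mu) are the eigenvectors of
        # the covariance matrix (X - mu)^T (X - mu).
        _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
        v = vt[0]                                    # maximum eigenvector
        m_att = 1.0 / (1.0 + np.exp(-(X - mu) @ v))  # Sigmoid((X - mu) v)
        return m_att.reshape(H, W)                   # possible tampered area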
Example 3
In this embodiment, a video authenticity identification apparatus 100 is disclosed. As shown in Fig. 5, the apparatus includes:
an image frame extraction module 101, configured to extract image frames from a video and acquire first image data needing to be identified;
an image blocking module 102, configured to block the first image data to obtain N_p image blocks, i.e. N_p second image data, where N_p = 2, 3, …;
an image local micro-pattern feature extraction module 103, configured to respectively extract the local micro-pattern features in the N_p second image data to obtain N_p third image data;
an aggregation module 104, configured to aggregate the N_p third image data by a pooling method to obtain a single descriptor;
an image authenticity judging module 105, configured to classify the single descriptor by a binary classification method to obtain the authenticity result of the image frame;
and a video authenticity judging module 106, configured to comprehensively judge the authenticity results of the image frames to obtain the authenticity result of the video.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (10)

1. A method for authenticating video, the method comprising:
S1: extracting image frames from a video to obtain first image data needing to be identified;
S2: processing the first image data in blocks to obtain N_p image blocks, i.e. N_p second image data, where N_p = 2, 3, …;
S3: respectively extracting the local micro-pattern features in the N_p second image data to obtain N_p third image data;
S4: aggregating the N_p third image data by a pooling method to obtain a single descriptor;
S5: classifying the single descriptor by a binary classification method to obtain the authenticity result of the image frame;
S6: comprehensively judging the authenticity results of the image frames to obtain the authenticity result of the video.
2. The method for authenticating video according to claim 1, further comprising, after the step S3:
performing principal component analysis on the N_p third image data to obtain an authenticity area discrimination matrix of the image.
3. The method for authenticating video according to claim 1, wherein the blocking process in step S2 employs a user-defined blocking method, i.e. a user-defined number of blocks and a blocking policy.
4. The method for authenticating video according to claim 1, wherein the blocking process in step S2 employs an image segmentation algorithm.
5. The method for authenticating video according to claim 1, wherein the local micro-pattern feature extraction method in step S3 is a convolutional neural network feature extraction method, and the extracted image features are camera fingerprints and/or coding fingerprints.
6. The method for authenticating video according to claim 1, wherein the pooling method in step S4 is one or a combination of maximum pooling, minimum pooling, average pooling and square-mean pooling.
7. The method as claimed in claim 6, wherein when the pooling method is a combination of several of maximum pooling, minimum pooling, average pooling and square-mean pooling, at least two of the four pooling methods are combined in an adaptive weighting manner.
8. The method for authenticating video according to claim 1, wherein the binary classification method in step S5 is a neural network binary classification method.
9. The method according to claim 8, wherein the neural network binary classification method is a generative adversarial deep neural network binary classification method.
10. A video authenticity identification apparatus, comprising:
an image frame extraction module, configured to extract image frames from a video and acquire first image data needing to be identified;
an image blocking module, configured to block the first image data to obtain N_p image blocks, i.e. N_p second image data, where N_p = 2, 3, …;
an image local micro-pattern feature extraction module, configured to respectively extract the local micro-pattern features in the N_p second image data to obtain N_p third image data;
an aggregation module, configured to aggregate the N_p third image data by a pooling method to obtain a single descriptor;
an image authenticity judging module, configured to classify the single descriptor by a binary classification method to obtain the authenticity result of the image frame;
and a video authenticity judging module, configured to comprehensively judge the authenticity results of the image frames to obtain the authenticity result of the video.
CN201911412373.9A 2019-12-31 2019-12-31 Video authenticity identification method Active CN111160286B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911412373.9A CN111160286B (en) 2019-12-31 2019-12-31 Video authenticity identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911412373.9A CN111160286B (en) 2019-12-31 2019-12-31 Video authenticity identification method

Publications (2)

Publication Number Publication Date
CN111160286A true CN111160286A (en) 2020-05-15
CN111160286B CN111160286B (en) 2023-02-28

Family

ID=70560014

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911412373.9A Active CN111160286B (en) 2019-12-31 2019-12-31 Video authenticity identification method

Country Status (1)

Country Link
CN (1) CN111160286B (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150154456A1 (en) * 2012-07-11 2015-06-04 Rai Radiotelevisione Italiana S.P.A. Method and an apparatus for the extraction of descriptors from video content, preferably for search and retrieval purpose
CN110580482A (en) * 2017-11-30 2019-12-17 腾讯科技(深圳)有限公司 Image classification model training, image classification and personalized recommendation method and device
CN108288073A (en) * 2018-01-30 2018-07-17 北京小米移动软件有限公司 Picture authenticity identification method and device, computer readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈佳 (Chen Jia) et al.: "Authenticity identification algorithm for archive images based on cascaded deep convolutional neural networks", 《兰台世界》 (Lantai World) *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111738735A (en) * 2020-07-23 2020-10-02 腾讯科技(深圳)有限公司 Image data processing method and device and related equipment
CN111738735B (en) * 2020-07-23 2021-07-13 腾讯科技(深圳)有限公司 Image data processing method and device and related equipment
CN112200001A (en) * 2020-09-11 2021-01-08 南京星耀智能科技有限公司 Depth-forged video identification method in specified scene
CN112699236A (en) * 2020-12-22 2021-04-23 浙江工业大学 Deepfake detection method based on emotion recognition and pupil size calculation
CN112699236B (en) * 2020-12-22 2022-07-01 浙江工业大学 Deepfake detection method based on emotion recognition and pupil size calculation
CN112749686A (en) * 2021-01-29 2021-05-04 腾讯科技(深圳)有限公司 Image detection method, image detection device, computer equipment and storage medium
CN113344092A (en) * 2021-06-18 2021-09-03 中科迈航信息技术有限公司 AI image recognition method and device
CN115412726A (en) * 2022-09-02 2022-11-29 北京瑞莱智慧科技有限公司 Video authenticity detection method and device and storage medium
CN115412726B (en) * 2022-09-02 2024-03-01 北京瑞莱智慧科技有限公司 Video authenticity detection method, device and storage medium
CN115936737A (en) * 2023-03-10 2023-04-07 云筑信息科技(成都)有限公司 Method and system for determining authenticity of building material
CN115936737B (en) * 2023-03-10 2023-06-23 云筑信息科技(成都)有限公司 Method and system for determining authenticity of building material

Also Published As

Publication number Publication date
CN111160286B (en) 2023-02-28

Similar Documents

Publication Publication Date Title
CN111160286B (en) Video authenticity identification method
CN109460814B (en) Deep learning classification method with function of defending against sample attack
Zhang et al. Learning-based license plate detection using global and local features
US7346211B2 (en) Image type classification using color discreteness features
Manfredi et al. Detection of static groups and crowds gathered in open spaces by texture classification
Vigneshwar et al. Detection and counting of pothole using image processing techniques
CN111967344A (en) Refined feature fusion method for face forgery video detection
US20050256820A1 (en) Cognitive arbitration system
Buza et al. Skin detection based on image color segmentation with histogram and k-means clustering
CN113312965B (en) Face unknown spoofing attack living body detection method and system
Nguyen et al. Face presentation attack detection based on a statistical model of image noise
Al Farsi et al. A Review on models of human face verification techniques
Kitayama et al. HOG feature extraction from encrypted images for privacy-preserving machine learning
Raghavendra et al. A novel feature descriptor for face anti-spoofing using texture based method
CN111275137B (en) Tea verification method based on exclusive twin network model
Cho Content-based structural recognition for flower image classification
KR101419837B1 (en) Method and apparatus for adaboost-based object detection using partitioned image cells
CN111191519B (en) Living body detection method for user access of mobile power supply device
Zhang et al. Real-time license plate detection under various conditions
CN115690918A (en) Method, device, equipment and medium for constructing living body identification model and living body identification
Almukhtar Facial emotions recognition using local monotonic pattern and gray level co-occurrence matrices plant leaf images aided agriculture development
Reidy et al. Investigating the Effectiveness of Deep Learning and CFA Interpolation Based Classifiers on Identifying AIGC
Wani et al. Meta-Brisque: Cost efficient image spoofing detection for realtime applications
Hmood et al. Statistical edge-based feature selection for counterfeit coin detection
Hertina et al. Verifying the authenticity of digital certificate and transcript using background subtraction method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant