CN116844245A - Method for identifying authenticity of video content

Method for identifying authenticity of video content

Info

Publication number
CN116844245A
CN116844245A (application CN202310827279.XA)
Authority
CN
China
Prior art keywords
skin color
data
mark
image processing
identification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310827279.XA
Other languages
Chinese (zh)
Inventor
郑威
云剑
凌霞
郑晓玲
周凡棣
海涵
辛鑫
刘澎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Academy of Information and Communications Technology CAICT
Original Assignee
China Academy of Information and Communications Technology CAICT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Academy of Information and Communications Technology CAICT filed Critical China Academy of Information and Communications Technology CAICT
Priority to CN202310827279.XA priority Critical patent/CN116844245A/en
Publication of CN116844245A publication Critical patent/CN116844245A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/56 Extraction of image or video features relating to colour
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a video content authenticity identification method comprising the following steps: performing video preprocessing on a video to be identified and obtaining video content identification data frame by frame, the data comprising human skin color data and environmental impact data; establishing a human skin color comparison model and substituting the acquired human skin color data into it for human skin color analysis, so as to obtain a human skin color change mark; establishing an environment data comparison model and substituting the acquired human skin color data and environmental impact data into it for environmental impact analysis, generating an environmental impact coefficient and a human skin color coefficient; establishing a deep learning model and analyzing and comparing the environmental impact coefficient with the human skin color coefficient to obtain an image processing mark; and analyzing the human skin color change mark and the image processing mark to generate false content identifications, suspicious content identifications and trusted content identifications.

Description

Method for identifying authenticity of video content
Technical Field
The application relates to the technical field of video authenticity identification, in particular to a video content authenticity identification method.
Background
With the rapid development of self-media, video software of every kind has spread across the network, and the population using short videos keeps growing.
However, short videos today vary greatly in quality, so ordinary users cannot effectively assess a video and are easily misled by the many false videos in circulation; moreover, some lawbreakers fabricate evidence by synthesizing false videos, which increases the difficulty of case handling for public officials. In-depth research on a method for identifying the authenticity of video content is therefore necessary.
A search found publication CN113034430A, which provides a video authenticity verification and identification method and system based on time-watermark change analysis. Its first identification mode compares and analyzes the pictures corresponding to different frames of the video and then analyzes the digital content of those pictures; its second mode, building on the first, performs highlight change analysis on each frame of the video, thereby verifying the authenticity of the video through analysis.
In combination with the above, the following problems remain:
1. Most existing video authentication approaches apply multi-frame analysis to the video and then use computer vision and image processing techniques to compare the brightness, texture, hue and metadata (shooting date, time and position) of the picture corresponding to each frame, so as to obtain the video authenticity result; the authentication approach is therefore essentially uniform;
2. Moreover, during video authentication, and especially when authenticating video content containing a portrait, the creator of the video may retouch the portrait frame by frame (a "P picture", i.e. Photoshop-style editing) while leaving the background data unmodified; in that case an authentication approach based on brightness, texture and hue has a blind spot, so occasional misjudgments occur.
To solve the above problems, a method for identifying the authenticity of video content is proposed.
Disclosure of Invention
The application aims to provide a video content authenticity identification method to address the defects noted in the background art.
In order to achieve the above object, the present application provides the following technical solutions:
the method for identifying the authenticity of the video content comprises the following steps:
step S100: video preprocessing is carried out on the video to be identified, video content identification data is obtained frame by frame, and the video content identification data comprises human skin color data and environmental impact data;
step S200: establishing a human skin color comparison model, substituting the acquired human skin color data into the human skin color comparison model to perform human skin color analysis so as to acquire a human skin color change mark;
step S300: establishing an environment data comparison model, substituting the acquired human skin color data and environmental impact data into the environment data comparison model to perform environmental impact analysis, and generating an environmental impact coefficient and a human skin color coefficient;
step S400: establishing a deep learning model, and analyzing and comparing the environmental impact coefficient and the human skin color coefficient to obtain an image processing mark;
step S500: analyzing the human skin color change mark and the image processing mark to generate false content identifications, suspicious content identifications and trusted content identifications, and marking the matched identification intervals in video editing software.
In a preferred embodiment, the human skin color data includes face skin color data X and body skin color data Y, and the environmental impact data includes an environmental illumination intensity factor L, an environmental reflection factor R and a light source angle factor F.
In a preferred embodiment, the process of analytically generating a human skin color change mark is as follows:
dividing a video to be identified into n identification intervals, wherein n is an integer greater than 1, a single identification interval consists of images of two frames, images of k frames and k+1 frames in the n identification intervals are respectively obtained, and facial skin color data comparison threshold reference values X1 and X2 are set, wherein X1 is less than X2, and X1 and X2 are both greater than 0;
acquiring the face skin color data Xk at frame k and the face skin color data Xk+1 at frame k+1, and generating the face skin color data difference Xc of frames k and k+1 by data processing, where Xc = Xk+1 - Xk (Xk, Xk+1 > 0, Xc ≥ 0); substituting the face skin color data difference Xc into the face skin color comparison thresholds for comparative analysis to generate face skin color grade information, which comprises primary face skin color information, secondary face skin color information and tertiary face skin color information;
setting a threshold reference value Y1 and a threshold reference value Y2 for comparing body skin color data, wherein Y1 is less than Y2, and both Y1 and Y2 are greater than 0;
acquiring the body skin color data Yk at frame k and the body skin color data Yk+1 at frame k+1, and generating the body skin color data difference Yc of frames k and k+1 by data processing, where Yc = Yk+1 - Yk (Yk, Yk+1 > 0, Yc ≥ 0); substituting the body skin color data difference Yc into the body skin color comparison thresholds for comparative analysis to generate body skin color grade information, which comprises primary body skin color information, secondary body skin color information and tertiary body skin color information.
In a preferred embodiment, the face skin color grade information generation logic is:
when the face skin color data difference Xc is smaller than the threshold reference value X1, the identification interval where Xc is located is marked as primary face skin color information; when Xc is greater than X1 and smaller than X2, that interval is marked as secondary face skin color information; and when Xc is greater than X2, that interval is marked as tertiary face skin color information;
the body skin color grade information generation logic is:
when the body skin color data difference Yc is smaller than the threshold reference value Y1, the identification interval where Yc is located is marked as primary body skin color information; when Yc is greater than Y1 and smaller than Y2, that interval is marked as secondary body skin color information; and when Yc is greater than Y2, that interval is marked as tertiary body skin color information.
In a preferred embodiment, the generation logic of the environmental impact coefficients is:
An illumination error coefficient combination model is designed in the environment data comparison model, weights are distributed over each factor, and the environmental impact coefficient is obtained by formulated analysis.
In a preferred embodiment, the human skin tone factor generation logic is:
the human skin color change marks comprise a primary skin color change mark, a medium skin color change mark and a high-level skin color change mark; a skin color distribution model is set in the environment data comparison model, and the face skin color data X and the body skin color data Y are subjected to formulated analysis to generate a human skin color coefficient βn, obtained by taking the square root of the sum of the skin color error comparison revision constant K and the ratio of the absolute value of the difference between the face skin color data X and the body skin color data Y to the body skin color data Y.
In a preferred embodiment, the image processing mark generation logic is:
the image processing marks comprise a first-level image processing mark, a second-level image processing mark and a third-level image processing mark; the environmental impact coefficient and the human skin color coefficient are analyzed and compared through formulation, the influence of the environment in the image is removed, and an original image coefficient γn is generated; original image coefficient comparison thresholds W1 and W2 are set, where W1 > W2 > 0, and the original image coefficient γn is substituted into them for analysis:
when the original image coefficient γn of a single identification interval is smaller than the comparison threshold W2, a first-level image processing mark is generated for that interval; when γn is greater than W2 and smaller than W1, a second-level image processing mark is generated for that interval; and when γn is greater than W1, a third-level image processing mark is generated for that interval.
In a preferred embodiment, false content identifications, suspicious content identifications and trusted content identifications are generated by analyzing the human skin color change mark and the image processing mark, with the following generation logic:
when a first-level image processing mark and a medium skin color change mark, a first-level image processing mark and a high-level skin color change mark, or a third-level image processing mark and a primary skin color change mark are present in a single identification interval at the same time, a false content identification is generated for that interval;
when a second-level image processing mark and a primary skin color change mark, a second-level image processing mark and a high-level skin color change mark, a third-level image processing mark and a medium skin color change mark, or a tertiary environment influence mark and a high-level skin color change mark are present in a single identification interval at the same time, a suspicious content identification is generated for that interval;
when a first-level image processing mark and a primary skin color change mark, a second-level image processing mark and a medium skin color change mark, or a third-level image processing mark and a high-level skin color change mark are present in a single identification interval at the same time, a trusted content identification is generated for that interval.
The technical solution of the application has the following technical effects and advantages:
according to the embodiment, the identification video is analyzed frame by frame, the more accurate video authenticity identification under the condition of the portrait repair P picture is realized by utilizing the skin color difference value ratio of the human face skin color and the body skin color, and the environment influence factors and the obtained first identification result are analyzed, processed and then are judged through secondary processing of the first identification result, so that the accuracy of the identification result and the diversity of the identification mode are improved in the process of authenticating the video content with the portrait. .
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required for the embodiments are briefly described below. The drawings in the following description are only some embodiments of the present application; those skilled in the art may derive other drawings from them without inventive effort.
Fig. 1 is a flowchart of a method for authenticating video contents according to the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Referring to fig. 1, the present embodiment provides a method for identifying authenticity of video content, more specifically, a method for identifying authenticity of video content with a portrait, which includes the following steps:
step S100: video preprocessing is carried out on the video to be identified, video content identification data is obtained frame by frame, and the video content identification data comprises human skin color data and environmental impact data;
step S200: establishing a human skin color comparison model, substituting the acquired human skin color data into the human skin color comparison model to perform human skin color analysis so as to acquire a human skin color change mark;
step S300: establishing an environment data comparison model, substituting the acquired human skin color data and environmental impact data into the environment data comparison model to perform environmental impact analysis, and generating an environmental impact coefficient and a human skin color coefficient;
step S400: establishing a deep learning model, and analyzing and comparing the environmental impact coefficient and the human skin color coefficient to obtain an image processing mark;
step S500: analyzing the human skin color change mark and the image processing mark to generate false content identifications, suspicious content identifications and trusted content identifications, and marking the matched identification intervals in video editing software.
It should be noted that preprocessing performs operations such as denoising, image enhancement, and motion compensation through video editing software.
The human skin color data comprise face skin color data X and body skin color data Y. The human skin color data can be obtained through a skin color model, which can itself be obtained with a skin color segmentation algorithm; the basic algorithms involved include threshold segmentation, color space conversion and Gaussian-mixture-model-based segmentation.
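As a concrete illustration of the threshold-segmentation approach just mentioned, the following Python sketch extracts a scalar skin color value from a region of a frame. The YCrCb bounds, the ROI convention and the use of mean luma as the skin color value are illustrative assumptions, not parameters taken from this application:

```python
import cv2


def mean_skin_tone(frame_bgr, roi):
    """Return a scalar skin color value for a face or body region.

    Minimal sketch of threshold segmentation in YCrCb space; a real
    system could instead fit a Gaussian mixture model as the text
    suggests. The Cr/Cb bounds below are commonly cited defaults,
    not values from the patent.
    """
    x, y, w, h = roi
    patch = frame_bgr[y:y + h, x:x + w]
    ycrcb = cv2.cvtColor(patch, cv2.COLOR_BGR2YCrCb)
    # Pixels inside this CrCb range are treated as skin.
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    skin_luma = ycrcb[..., 0][mask > 0]
    # Mean luma of the skin pixels serves as the skin color value.
    return float(skin_luma.mean()) if skin_luma.size else 0.0
```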
The environmental impact data comprises an environmental illumination intensity factor L, an environmental reflection factor R and a light source angle factor F;
It should be noted that the environmental impact data are acquired through OpenCV, specifically with the following algorithms (a rough sketch follows this list):
the environmental illumination intensity factor L: a histogram equalization image processing algorithm indirectly acquires illumination intensity data by adjusting the image of a single identification interval and enhancing its contrast and brightness;
the light source angle factor F: a normal estimation algorithm estimates the normal direction of object surfaces by analyzing their texture and shadows in the image, from which the angle of the light source can be inferred;
the environmental reflection factor R: bilateral filtering, an image smoothing algorithm, preserves edge information while smoothing color; applying bilateral filtering to the image suppresses the influence of illumination change, allowing the environmental reflection factor to be estimated.
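A hedged sketch of how such per-frame factors might be proxied with the OpenCV primitives named above. The patent gives no concrete formulas, so the specific mappings (mean brightness for L, bilateral-filter residual for R, gradient-direction dispersion for F) are illustrative assumptions:

```python
import cv2
import numpy as np


def environment_factors(frame_bgr):
    """Rough per-frame proxies for L, R and F (all assumptions)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Illumination intensity L: mean brightness of the equalized image.
    L = float(cv2.equalizeHist(gray).mean()) / 255.0
    # Reflection R: energy removed by edge-preserving smoothing.
    smooth = cv2.bilateralFilter(gray, 9, 75, 75)
    R = float(np.abs(gray.astype(np.float32) - smooth).mean()) / 255.0
    # Light source angle F: dispersion of gradient directions, standing
    # in for the normal-estimation step described in the text.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    F = float(np.std(np.arctan2(gy, gx)))
    return L, R, F
```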
The process of analyzing and generating the human skin color change mark is as follows:
the human skin color comparison model is established, a deep learning model, such as a convolutional neural network, can be adopted to train the model to identify and mark the human skin color in the image, a Fitzpatrick skin color classification method is adopted to carry out skin color grade assessment, different skin colors are defined in a numerical mode through assignment, and the whiter the skin is, the larger the grade value corresponding to the skin is.
Dividing the video to be identified into n identification intervals, where n is an integer greater than 1 and a single identification interval consists of two frames of images; respectively acquiring the images of frame k and frame k+1 in the n identification intervals; setting face skin color data comparison threshold reference values X1 and X2, where X1 < X2 and both are greater than 0; acquiring the face skin color data Xk at frame k and the face skin color data Xk+1 at frame k+1, and generating the face skin color data difference Xc of frames k and k+1 by data processing, where Xc = Xk+1 - Xk (Xk, Xk+1 > 0, Xc ≥ 0); substituting the face skin color data difference Xc into the face skin color comparison thresholds for comparative analysis to generate face skin color grade information, which comprises primary face skin color information, secondary face skin color information and tertiary face skin color information;
The face skin color grade information generation logic is: when the face skin color data difference Xc is smaller than the threshold reference value X1, the identification interval where Xc is located is marked as primary face skin color information; when Xc is greater than X1 and smaller than X2, that interval is marked as secondary face skin color information; and when Xc is greater than X2, that interval is marked as tertiary face skin color information;
It should be noted that the identification interval corresponding to the face skin color data difference Xc is the pair of images at frames k and k+1; the face skin color of primary face skin color information is darker than that of secondary face skin color information, which is in turn darker than that of tertiary face skin color information, so the whitening degree runs from low to high from primary to tertiary face skin color information;
Setting body skin color data comparison threshold reference values Y1 and Y2, where Y1 < Y2 and both are greater than 0; acquiring the body skin color data Yk at frame k and the body skin color data Yk+1 at frame k+1, and generating the body skin color data difference Yc of frames k and k+1 by data processing, where Yc = Yk+1 - Yk (Yk, Yk+1 > 0, Yc ≥ 0); substituting the body skin color data difference Yc into the body skin color comparison thresholds for comparative analysis to generate body skin color grade information, which comprises primary body skin color information, secondary body skin color information and tertiary body skin color information;
The body skin color grade information generation logic is: when the body skin color data difference Yc is smaller than the threshold reference value Y1, the identification interval where Yc is located is marked as primary body skin color information; when Yc is greater than Y1 and smaller than Y2, that interval is marked as secondary body skin color information; and when Yc is greater than Y2, that interval is marked as tertiary body skin color information;
It should be noted that the identification interval corresponding to the body skin color data difference Yc is the pair of images at frames k and k+1; the body skin color of primary body skin color information is darker than that of secondary body skin color information, which is in turn darker than that of tertiary body skin color information, so the whitening degree runs from low to high from primary to tertiary body skin color information;
The face skin color grade information and the body skin color grade information in a single identification interval are integrated and analyzed to generate a human skin color change mark:
when primary face skin color information and primary body skin color information are present in a single identification interval at the same time, a primary skin color change mark is generated for that interval; when secondary face and primary body skin color information, primary face and secondary body skin color information, or secondary face and secondary body skin color information are present at the same time, a medium skin color change mark is generated for that interval; and when tertiary face and primary body skin color information, tertiary face and secondary body skin color information, primary face and tertiary body skin color information, secondary face and tertiary body skin color information, or tertiary face and tertiary body skin color information are present at the same time, a high-level skin color change mark is generated for that interval; this decision table is sketched below;
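The decision table above can be written down directly. The following Python sketch encodes it, with grades 1 to 3 standing for primary, secondary and tertiary skin color information (the numeric encoding itself is the only assumption):

```python
def skin_change_mark(face_grade, body_grade):
    """Combine face and body skin color grades (1, 2 or 3) into a mark."""
    if face_grade == 1 and body_grade == 1:
        return "primary"            # (1,1)
    if max(face_grade, body_grade) <= 2:
        return "medium"             # (1,2), (2,1), (2,2)
    return "high"                   # any pairing that involves grade 3
```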
It should be noted that a high-level skin color change mark corresponds to a large skin color difference between the two frames of the identification interval and hence a high probability that the video has been retouched (P picture); from the high-level mark down to the primary mark, the probability of suspected retouching runs from high to low.
The steps of generating the environmental impact coefficient and the human skin color coefficient are as follows:
establishing a deep learning model, analyzing and processing environmental impact data, wherein the analysis and processing logic is as follows:
the generation logic of the environmental impact coefficient is as follows:
An illumination error coefficient combination model is designed in the environment data comparison model, weights are distributed over each factor, and the environmental impact coefficient δn is obtained by formulated analysis, where n is the nth identification interval (n > 0) and a, b and c are the preset ratio coefficient values of the illumination intensity factor L, the environmental reflection factor R and the light source angle factor F respectively, with a > b > c > 0 and a + b + c = 1.425;
It should be noted that the algorithm involved in the illumination error coefficient combination model may be an illumination estimation algorithm; global illumination estimation and spherical harmonic expansion within such an algorithm may utilize physical models of illumination, such as a light propagation model and a reflection model, and estimate environment parameters by optimization or by solving an equation system;
In addition, the greater the illumination intensity factor L, the brighter the environment and the more pronounced the illumination on the person's face; the greater the environmental reflection factor R, the higher the skin brightness of the person in the video; and the greater the light source angle factor F, the more dispersed the illumination on the person. When the light source angle is small, the light is relatively concentrated and produces clearer shadows and highlights, in which case facial details and features may be more prominent and the illumination distribution is more distinct; when the light source angle is large, the light strikes the face from a wider angle, producing a more uniform illumination distribution, in which case the difference between shadow and highlight is smaller and facial details may be smoothed. Analysis shows that the stronger these illumination effects are, the larger the environmental impact coefficient δn, and the weaker they are, the smaller δn.
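The formula image for δn is not reproduced in the text, so the sketch below assumes the simplest reading consistent with the description: a weighted linear combination of L, R and F with preset coefficients a > b > c > 0 summing to 1.425. Both the linear form and the default weights are assumptions:

```python
def environment_coefficient(L, R, F, a=0.6, b=0.5, c=0.325):
    """Environmental impact coefficient delta_n for one identification
    interval, assuming delta = a*L + b*R + c*F (form not confirmed by
    the text; weights are placeholders satisfying the stated constraints).
    """
    assert a > b > c > 0 and abs(a + b + c - 1.425) < 1e-9
    return a * L + b * R + c * F
```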
The generation logic of human skin color coefficients is as follows:
the human skin color change marks comprise a primary skin color change mark, a medium skin color change mark and a high-level skin color change mark; a skin color distribution model is set in the environment data comparison model, and the face skin color data X and the body skin color data Y are subjected to formulated analysis to generate a human skin color coefficient βn: the ratio of the absolute value of the difference between the face skin color data X and the body skin color data Y to the body skin color data Y is added to the skin color error comparison revision constant K and the square root is taken, i.e. βn = √(|X - Y| / Y + K), where X, Y and K are all greater than 0 and n is the nth identification interval (n > 0);
It should be noted that the skin color distribution model models skin color with a Gaussian mixture: the skin color distribution is assumed to consist of several Gaussian components, each representing a different skin tone component, and clustering the image pixels and estimating parameters yields the Gaussian mixture model of skin color; a conditional random field probability model then defines the feature functions and latent variables of skin color in the Gaussian mixture model and establishes conditional probability distributions between skin color data, enabling analysis and inference over the skin color data;
In addition, the human skin color coefficient βn reflects the degree to which the human skin color is affected: under the same ambient illumination, whiter skin shows brightness changes more readily, while darker skin remains relatively stable in brightness; the larger the difference between the face skin color data X and the body skin color data Y, the larger βn, and the smaller that difference, the smaller βn.
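Because the βn formula is fully described in words above, it can be transcribed directly; only the placeholder default for K is an assumption:

```python
import math


def skin_coefficient(X, Y, K=0.05):
    """Human skin color coefficient beta_n = sqrt(|X - Y| / Y + K),
    with face skin color X, body skin color Y and revision constant K
    (all required to be positive; K's default here is illustrative).
    """
    assert X > 0 and Y > 0 and K > 0
    return math.sqrt(abs(X - Y) / Y + K)
```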
The generation logic of the image processing mark is as follows:
the image processing marks comprise a first-level image processing mark, a second-level image processing mark and a third-level image processing mark; the environmental impact coefficient and the human skin color coefficient are analyzed and compared through formulation and the influence of the environment in the image is removed, generating an original image coefficient γn, where βn > δn > 0, q is an influence-elimination error constant (q > 0), and n is the nth identification interval (n > 0);
Setting original image coefficient comparison thresholds W1 and W2, where W1 > W2 > 0, and substituting the original image coefficient γn into them for analysis (sketched below):
when the original image coefficient γn of a single identification interval is smaller than the comparison threshold W2, a first-level image processing mark is generated for that interval; when γn is greater than W2 and smaller than W1, a second-level image processing mark is generated for that interval; and when γn is greater than W1, a third-level image processing mark is generated for that interval;
It should be noted that the human skin color influence degree corresponding to the second-level image processing mark is higher than that corresponding to the first-level mark, and the degree corresponding to the third-level mark is higher than that corresponding to the second-level mark.
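The threshold comparison is fully specified by the text, so it can be sketched directly; only the handling of exact equality with W1 or W2, which the text leaves open, is an assumption:

```python
def image_processing_mark(gamma, W1, W2):
    """Map an original image coefficient to a mark level 1, 2 or 3."""
    assert W1 > W2 > 0
    if gamma < W2:
        return 1    # first-level image processing mark
    if gamma < W1:
        return 2    # second-level image processing mark
    return 3        # third-level image processing mark
```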
False content identifications, suspicious content identifications and trusted content identifications are generated from the human skin color change mark and the image processing mark with the following logic:
when a first-level image processing mark and a medium skin color change mark, a first-level image processing mark and a high-level skin color change mark, or a third-level image processing mark and a primary skin color change mark are present in a single identification interval at the same time, a false content identification is generated for that interval;
when a second-level image processing mark and a primary skin color change mark, a second-level image processing mark and a high-level skin color change mark, a third-level image processing mark and a medium skin color change mark, or a tertiary environment influence mark and a high-level skin color change mark are present in a single identification interval at the same time, a suspicious content identification is generated for that interval;
when a first-level image processing mark and a primary skin color change mark, a second-level image processing mark and a medium skin color change mark, or a third-level image processing mark and a high-level skin color change mark are present in a single identification interval at the same time, a trusted content identification is generated for that interval.
The images of the matched identification intervals are then marked in video editing software as false content, suspicious content or trusted content; the full decision table is sketched below.
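The final decision table can be encoded as follows. Note that the pairing of a third-level mark with a high-level skin color change mark appears both in the suspicious list (via the "tertiary environment influence mark" clause) and in the trusted list; the sketch takes the trusted reading, which is an interpretation:

```python
def content_identification(img_mark, skin_mark):
    """Combine image processing mark (1..3) and skin change mark into
    the final content identification per the decision table above."""
    table = {
        (1, "medium"): "false",       (1, "high"): "false",
        (3, "primary"): "false",
        (2, "primary"): "suspicious", (2, "high"): "suspicious",
        (3, "medium"): "suspicious",
        (1, "primary"): "trusted",    (2, "medium"): "trusted",
        (3, "high"): "trusted",
    }
    label = table.get((img_mark, skin_mark))
    return f"{label} content" if label else "unclassified"
```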
By analyzing the identified video frame by frame, this embodiment achieves more accurate video authenticity identification when a portrait has been retouched (P picture), using the skin color difference ratio between face skin color and body skin color; the environmental impact factors and the first identification result are analyzed and processed, and the first identification result is then judged again through secondary processing, which improves the accuracy of the identification result and the diversity of identification modes when authenticating video content containing a portrait.
All of the above formulas are dimensionless and computed on numerical values; they were obtained by software simulation over a large amount of collected data so as to reflect the latest real situation, and the preset parameters in the formulas are set by those skilled in the art according to the actual situation.
It should be understood that the term "and/or" merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, A and B together, or B alone, where A and B may be singular or plural. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects, but may also indicate an "and/or" relationship, as understood from the context.
In the present application, "at least one" means one or more, and "a plurality" means two or more. "At least one of" and similar expressions mean any combination of the listed items, including any combination of single or plural items. For example, at least one of a, b or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b and c may be single or plural.
It should be understood that, in various embodiments of the present application, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (8)

1. A method for authenticating video content, the method comprising the steps of:
step S100: video preprocessing is carried out on the video to be identified, video content identification data is obtained frame by frame, and the video content identification data comprises human skin color data and environmental impact data;
step S200: establishing a human skin color comparison model, substituting the acquired human skin color data into the human skin color comparison model to perform human skin color analysis so as to acquire a human skin color change mark;
step S300: establishing an environment data comparison model, substituting the acquired human skin color data and environmental impact data into the environment data comparison model to perform environmental impact analysis, and generating an environmental impact coefficient and a human skin color coefficient;
step S400: establishing a deep learning model, and analyzing and comparing the environmental impact coefficient and the human skin color coefficient to obtain an image processing mark;
step S500: analyzing the human skin color change mark and the image processing mark to generate false content identifications, suspicious content identifications and trusted content identifications, and marking the matched identification intervals in video editing software.
2. The method according to claim 1, wherein the human skin tone data includes face skin tone data X and body skin tone data Y, and the environmental influence data includes an environmental illumination intensity factor L, an environmental reflection factor R, and a light source angle factor F.
3. The method for authenticating video content according to claim 2, wherein the process of generating the human skin color change mark by analysis is as follows:
dividing a video to be identified into n identification intervals, wherein n is an integer greater than 1, a single identification interval consists of images of two frames, images of k frames and k+1 frames in the n identification intervals are respectively obtained, and facial skin color data comparison threshold reference values X1 and X2 are set, wherein X1 is less than X2, and X1 and X2 are both greater than 0;
acquiring the face skin color data Xk at frame k and the face skin color data Xk+1 at frame k+1, and generating the face skin color data difference Xc of frames k and k+1 by data processing, where Xc = Xk+1 - Xk (Xk, Xk+1 > 0, Xc ≥ 0); substituting the face skin color data difference Xc into the face skin color comparison thresholds for comparative analysis to generate face skin color grade information, which comprises primary face skin color information, secondary face skin color information and tertiary face skin color information;
setting a threshold reference value Y1 and a threshold reference value Y2 for comparing body skin color data, wherein Y1 is less than Y2, and both Y1 and Y2 are greater than 0;
acquiring the body skin color data Yk at frame k and the body skin color data Yk+1 at frame k+1, and generating the body skin color data difference Yc of frames k and k+1 by data processing, where Yc = Yk+1 - Yk (Yk, Yk+1 > 0, Yc ≥ 0); substituting the body skin color data difference Yc into the body skin color comparison thresholds for comparative analysis to generate body skin color grade information, which comprises primary body skin color information, secondary body skin color information and tertiary body skin color information.
4. The method of claim 3, wherein the facial skin tone level information generation logic is configured to:
when the face skin color data difference Xc is smaller than the threshold reference value X1, the identification interval where Xc is located is marked as primary face skin color information; when Xc is greater than X1 and smaller than X2, that interval is marked as secondary face skin color information; and when Xc is greater than X2, that interval is marked as tertiary face skin color information;
the body skin tone grade information generation logic is:
when the body skin color data difference Yc is smaller than the threshold reference value Y1, the identification interval where Yc is located is marked as primary body skin color information; when Yc is greater than Y1 and smaller than Y2, that interval is marked as secondary body skin color information; and when Yc is greater than Y2, that interval is marked as tertiary body skin color information.
5. The method for authenticating video content as recited in claim 4, wherein the generating logic of the environmental impact coefficient is:
An illumination error coefficient combination model is designed in the environment data comparison model, weights are distributed over each factor, and the environmental impact coefficient is obtained by formulated analysis.
6. The method for authenticating video content according to claim 5, wherein the generating logic of the human skin color coefficient is:
the human skin color change marks comprise a primary skin color change mark, a medium skin color change mark and a high-level skin color change mark; a skin color distribution model is set in the environment data comparison model, and the face skin color data X and the body skin color data Y are subjected to formulated analysis to generate a human skin color coefficient βn, obtained by taking the square root of the sum of the skin color error comparison revision constant K and the ratio of the absolute value of the difference between the face skin color data X and the body skin color data Y to the body skin color data Y.
7. The method of claim 6, wherein the image processing mark generation logic is configured to:
the image processing marks comprise a first-level image processing mark, a second-level image processing mark and a third-level image processing mark; the environmental impact coefficient and the human skin color coefficient are analyzed and compared through formulation, the influence of the environment in the image is removed, and an original image coefficient γn is generated; original image coefficient comparison thresholds W1 and W2 are set, where W1 > W2 > 0, and the original image coefficient γn is substituted into them for analysis:
when the original image coefficient γn of a single identification interval is smaller than the comparison threshold W2, a first-level image processing mark is generated for that interval; when γn is greater than W2 and smaller than W1, a second-level image processing mark is generated for that interval; and when γn is greater than W1, a third-level image processing mark is generated for that interval.
8. The method of claim 7, wherein the false content identifications, suspicious content identifications and trusted content identifications are generated by analyzing the human skin color change mark and the image processing mark, with the following generation logic:
when a first-level image processing mark and a medium skin color change mark, a first-level image processing mark and a high-level skin color change mark, or a third-level image processing mark and a primary skin color change mark are present in a single identification interval at the same time, a false content identification is generated for that interval;
when a second-level image processing mark and a primary skin color change mark, a second-level image processing mark and a high-level skin color change mark, a third-level image processing mark and a medium skin color change mark, or a tertiary environment influence mark and a high-level skin color change mark are present in a single identification interval at the same time, a suspicious content identification is generated for that interval;
when a first-level image processing mark and a primary skin color change mark, a second-level image processing mark and a medium skin color change mark, or a third-level image processing mark and a high-level skin color change mark are present in a single identification interval at the same time, a trusted content identification is generated for that interval.
CN202310827279.XA 2023-07-06 2023-07-06 Method for identifying authenticity of video content Pending CN116844245A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310827279.XA CN116844245A (en) 2023-07-06 2023-07-06 Method for identifying authenticity of video content

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310827279.XA CN116844245A (en) 2023-07-06 2023-07-06 Method for identifying authenticity of video content

Publications (1)

Publication Number Publication Date
CN116844245A true CN116844245A (en) 2023-10-03

Family

ID=88170357

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310827279.XA Pending CN116844245A (en) 2023-07-06 2023-07-06 Method for identifying authenticity of video content

Country Status (1)

Country Link
CN (1) CN116844245A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination