CN108596141A - Detection method and system for face images generated by a deep network - Google Patents

Detection method and system for face images generated by a deep network

Info

Publication number
CN108596141A
Authority
CN
China
Prior art keywords
face image
image
deep network
generates
training sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810434620.4A
Other languages
Chinese (zh)
Other versions
CN108596141B (en)
Inventor
李昊东 (Li Haodong)
黄继武 (Huang Jiwu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN201810434620.4A priority Critical patent/CN108596141B/en
Publication of CN108596141A publication Critical patent/CN108596141A/en
Priority to PCT/CN2019/085592 priority patent/WO2019214557A1/en
Application granted granted Critical
Publication of CN108596141B publication Critical patent/CN108596141B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a detection method and system for face images generated by a deep network. A training sample set composed of real face images and generated face images is constructed; the training sample set is modeled on the basis of color relationships, and statistical features are extracted; a classification model is trained on the statistical features; and a test image is predicted with the classification model. Because face images generated by deep networks are statistically inconsistent with real images, a set of co-occurrence-matrix features based on the color relationships of adjacent pixels is designed. The method achieves very high detection accuracy on face images of various sizes generated by different types of deep networks, and can effectively judge whether a given face image is a fake image generated by a deep network, improving security.

Description

Detection method and system for face images generated by a deep network
Technical field
The present invention relates to the field of multimedia information security and forensics, and in particular to a detection method and system for face images generated by a deep network.
Background art
With the rapid development of science and technology, digital images have found extremely wide application in all aspects of social production and daily life and have become an important carrier for recording objective facts. At the same time, powerful multimedia processing software has become widespread. With professional image editing software such as Adobe Photoshop, GIMP, and ACDSee, ordinary users can easily edit and modify image data without leaving obvious visual traces, thereby covering up or even distorting the facts. What is more, image generation models can now be obtained by training deep networks on large numbers of real images. Such models can generate massive quantities of realistic fake scene images, such as faces. Once these fake images are used in sensitive domains such as news reporting, identity authentication, or judicial forensics, they will seriously harm the normal order of society. Therefore, authenticating images has become a practical problem in urgent need of a solution.
In general, digital image authentication techniques can be divided into two major classes: active authentication and passive authentication. Active authentication includes methods such as digital signatures and digital watermarking. These techniques must add extra authentication information, such as an embedded signature or watermark, when the digital image is generated or before it is distributed, and then judge whether the image is authentic or complete by verifying that the embedded information is unchanged. However, digital images in practice come from diverse sources, and it is often difficult to embed information in them in advance, which greatly limits the application of active authentication techniques.
Compared with active authentication, passive authentication does not require information to be embedded in the image beforehand; it relies only on the image data itself and is therefore more practical. The basic rationale of passive authentication is that the hardware characteristics of digital cameras and the various signal processing steps in the image capture process all leave intrinsic attributes in the image data, and modifying the image destroys these intrinsic attributes or introduces new traces. By extracting relevant features, the source of an image can be identified and it can be judged whether the image has been modified.
Traditional image tampering techniques include splicing, copy-move, image enhancement, and the like. What these techniques have in common is that they edit and modify an existing real image. In contrast, image generation models built on deep networks can achieve tampering effects that are created out of nothing. By choosing suitable parameters, a forger can use a trained deep network to generate a specific scene, for example a face image with particular shape, pose, and age characteristics. Existing work shows that such generated images can sometimes be lifelike enough to deceive the human eye.
Criminals may seek profit with fake photos generated by deep networks, which poses many hidden security risks.
Therefore, existing detection techniques for generated images still await improvement and development.
Summary of the invention
For face images generated by deep networks, the present invention provides an effective detection method that can accurately judge whether a given face image is a real image or a fake image generated by a deep network, improving security.
The technical solution adopted by the present invention to solve the above technical problem is as follows:
A detection method for face images generated by a deep network, comprising the following steps:
A. constructing a training sample set composed of real face images and generated face images;
B. modeling the training sample set on the basis of color relationships, and extracting statistical features;
C. training on the statistical features to obtain a classification model;
D. detecting a test image with the classification model, and outputting a detection and recognition result.
In the detection method, step A specifically comprises:
A1. obtaining real face images with an imaging device;
A2. obtaining generated face images from random noise vectors through a trained deep network;
A3. treating real face images as negative samples and generated face images as positive samples to compose the training sample set.
In the detection method, step B specifically comprises:
B1. extracting the magnitude relations of adjacent pixel values in the color channels of each sample in the training sample set;
B2. describing the color and texture information of each sample in the training sample set with co-occurrence matrices;
B3. obtaining the feature of each image.
In the detection method, B1 is specifically:
Denote the input image as I, with R, G, B color channels I_r, I_g, and I_b. The magnitude relation of adjacent pixel values in each color channel is computed as:
R_{c,i,j}(x, y) = Φ{I_c(x, y) > I_c(x+i, y+j)}
where c ∈ {r, g, b}, (i, j) ∈ {(0,1), (0,-1), (1,0), (-1,0)}, and Φ{·} = 1 if and only if the logical expression in the braces is true, otherwise Φ{·} = 0. The magnitude relations of the three channels R, G, B are regarded as a triple:
R_{i,j}(x, y) = (R_{r,i,j}(x, y), R_{g,i,j}(x, y), R_{b,i,j}(x, y))
Each component of R_{i,j}(x, y) takes the value 0 or 1, so the triple is equivalently transformed into an integer in [0, 7]:
R'_{i,j}(x, y) = 4·R_{r,i,j}(x, y) + 2·R_{g,i,j}(x, y) + R_{b,i,j}(x, y)
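As an illustrative sketch of B1 and the equivalence transform (not the patent's reference implementation; the binary weighting 4·R_r + 2·R_g + R_b is one assumed bijection onto [0, 7]), the per-channel comparisons and the packed map R' can be computed with NumPy:

```python
import numpy as np

def encode_color_relations(img, offset=(0, 1)):
    """Compute R_{c,i,j}(x, y) = Phi{I_c(x, y) > I_c(x+i, y+j)} per channel
    and pack the (R, G, B) bits into one integer in [0, 7].

    img: H x W x 3 array (R, G, B channels); offset: one of the four
    displacements (0,1), (0,-1), (1,0), (-1,0) from the description."""
    i, j = offset
    H, W, _ = img.shape
    a = np.asarray(img, dtype=np.int16)
    # region where both (x, y) and (x+i, y+j) lie inside the image
    x0, x1 = max(0, -i), min(H, H - i)
    y0, y1 = max(0, -j), min(W, W - j)
    bits = (a[x0:x1, y0:y1] > a[x0 + i:x1 + i, y0 + j:y1 + j]).astype(np.uint8)
    # assumed equivalence transform: R' = 4*R_r + 2*R_g + R_b
    return bits[..., 0] * 4 + bits[..., 1] * 2 + bits[..., 2]
```

The same function handles the vertical offsets (1, 0) and (-1, 0) by comparing vertically adjacent pixels.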
In the detection method, B2 specifically comprises:
modeling R'_{i,j} with co-occurrence matrices, computed as follows (taking the k-th order co-occurrence matrix in the horizontal direction as an example):
C_{i,j}(v_1, v_2, ..., v_k) = (1/N) · Σ_{x,y} Φ{R'_{i,j}(x, y) = v_1, R'_{i,j}(x, y+1) = v_2, ..., R'_{i,j}(x, y+k-1) = v_k}
where (v_1, v_2, ..., v_k) is the index into the co-occurrence matrix, N is a normalization factor, and Φ{·} = 1 if and only if the logical expression in the braces is true, otherwise Φ{·} = 0.
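The k-th order co-occurrence statistic amounts to a normalized histogram over consecutive k-tuples of the encoded map. A minimal NumPy sketch, assuming R' takes values in 0..7 (illustrative, not the patent's own code):

```python
import numpy as np

def cooccurrence(r_prime, k=3, axis=1):
    """Normalized k-th order co-occurrence of an encoded map with values 0..7.

    axis=1 scans rows left-to-right (horizontal direction); axis=0 scans
    columns (vertical direction). Returns a flat vector of 8**k frequencies
    summing to 1; N is the number of k-tuples counted."""
    r = np.asarray(r_prime) if axis == 1 else np.asarray(r_prime).T
    w = r.shape[1] - k + 1              # number of k-tuples per row
    idx = np.zeros((r.shape[0], w), dtype=np.int64)
    for t in range(k):                  # pack each k-tuple as a base-8 index
        idx = idx * 8 + r[:, t:t + w]
    hist = np.bincount(idx.ravel(), minlength=8 ** k).astype(float)
    return hist / hist.sum()
```

For k = 3 this yields the 8^3 = 512-bin histogram used as the feature dimension later in the description.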
In the detection method, step C specifically comprises:
training, with a supervised learning method, an ensemble classifier that uses linear discriminant analysis as its base classifier, as the two-class model.
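A minimal scikit-learn sketch of such a classifier. The patent does not fix the exact ensemble construction, so bagging LDA base learners over random feature subspaces is one assumed scheme, and the 512-dimensional features below are synthetic stand-ins:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import BaggingClassifier

# Ensemble of LDA base classifiers, each fit on a random feature subspace.
clf = BaggingClassifier(
    LinearDiscriminantAnalysis(),  # base classifier
    n_estimators=20,
    max_features=0.3,              # fraction of the 512 dims per learner
    random_state=0,
)

# Synthetic stand-in features: label 0 = real face, label 1 = generated face.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (200, 512)),
               rng.normal(0.5, 1.0, (200, 512))])
y = np.array([0] * 200 + [1] * 200)
clf.fit(X, y)
```

Restricting each base learner to a feature subspace keeps individual LDA fits cheap in high dimensions, which matches the efficiency rationale given for the ensemble in the detailed description.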
In the detection method, step D specifically comprises:
predicting the test image with the classification model; if the classification model predicts that the test image is a generated face image, the image is judged to be a generated face image; otherwise, it is a real face image.
A detection system for face images generated by a deep network, comprising:
a sample construction module, for constructing a training sample set composed of real face images and generated face images;
a feature extraction module, for modeling the training sample set on the basis of color relationships and extracting statistical features;
a feature training module, for training on the statistical features to obtain a classification model;
an image detection module, for detecting a test image with the classification model and outputting a detection and recognition result.
The feature extraction module comprises a pixel relation module and a statistical description module.
The pixel relation module extracts the magnitude relations of adjacent pixel values in the color channels of each sample in the training sample set, computed as follows:
Denote the input image as I, with R, G, B color channels I_r, I_g, and I_b. The magnitude relation of adjacent pixel values in each color channel is:
R_{c,i,j}(x, y) = Φ{I_c(x, y) > I_c(x+i, y+j)}
where Φ{·} = 1 when the logical expression in the braces is true. The magnitude relations of the three channels R, G, B are regarded as a triple:
R_{i,j}(x, y) = (R_{r,i,j}(x, y), R_{g,i,j}(x, y), R_{b,i,j}(x, y))
Each component of R_{i,j}(x, y) takes the value 0 or 1, so the triple is equivalently transformed into an integer in [0, 7]:
R'_{i,j}(x, y) = 4·R_{r,i,j}(x, y) + 2·R_{g,i,j}(x, y) + R_{b,i,j}(x, y)
The statistical description module describes the color and texture information of each sample in the training sample set. Specifically, R'_{i,j} is modeled with co-occurrence matrices, computed as:
C_{i,j}(v_1, v_2, ..., v_k) = (1/N) · Σ_{x,y} Φ{R'_{i,j}(x, y) = v_1, ..., R'_{i,j}(x, y+k-1) = v_k}
where (v_1, v_2, ..., v_k) is the index into the co-occurrence matrix, N is a normalization factor, and Φ{·} = 1 if and only if the logical expression in the braces is true, otherwise Φ{·} = 0.
The invention discloses a detection method and system for face images generated by a deep network: a training sample set composed of real face images and generated face images is constructed; the training sample set is modeled on the basis of color relationships, and statistical features are extracted; a classification model is trained on the statistical features; and the test image is predicted with the classification model. Because face images generated by deep networks are statistically inconsistent with real images, a set of co-occurrence-matrix features based on the color relationships of adjacent pixels is designed. The method achieves very high detection accuracy on face images of various sizes generated by different types of deep networks, can effectively judge whether a given face image is a fake image generated by a deep network, and improves security.
Description of the drawings
Fig. 1 is a flowchart of an embodiment of the face-image detection method of the present invention.
Fig. 2 is a schematic diagram of generating a face image in the present invention.
Fig. 3 (a) and Fig. 3 (b) show the process of computing the color relations of adjacent pixels of an image in the present invention.
Fig. 4 is a schematic diagram of counting the co-occurrence matrix of an image from pixel color relations in the present invention.
Fig. 5 (a) and Fig. 5 (b) compare the features of real face images and generated face images.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
Referring to Fig. 1, Fig. 1 is a flowchart of a preferred embodiment of the detection method for face images generated by a deep network according to the present invention. The detection method comprises:
Step S10: construct a training sample set composed of real face images and generated face images.
Specifically, a deep network is first trained with real face images as a face image generator; random noise vectors are then input into the trained deep network to obtain generated face images, as shown in Fig. 2. The type of the trained deep network includes but is not limited to variational auto-encoders and generative adversarial networks. Real face images are treated as negative samples and generated face images as positive samples to compose the training set; the real face images are captured by an imaging device, and the generated face images are produced from random noise vectors by the trained deep network.
Step S20: model the training sample set on the basis of color relationships, and extract statistical features.
The specific method is to extract features from the face images generated by the deep network and from the real face images, respectively.
For every image I, the magnitude relation of adjacent pixel values in its three color channels R, G, B is computed as:
R_{c,i,j}(x, y) = Φ{I_c(x, y) > I_c(x+i, y+j)}
In the above formula, c ∈ {r, g, b}, (i, j) ∈ {(0,1), (0,-1), (1,0), (-1,0)}, and Φ{·} = 1 if and only if the logical expression in the braces is true, otherwise Φ{·} = 0. To describe the color relationship between pixels, the magnitude relations of the three channels R, G, B are regarded as a triple:
R_{i,j}(x, y) = (R_{r,i,j}(x, y), R_{g,i,j}(x, y), R_{b,i,j}(x, y))
For the convenience of subsequent statistics, the triple R_{i,j}(x, y) is equivalently converted to an integer in the interval [0, 7]:
R'_{i,j}(x, y) = 4·R_{r,i,j}(x, y) + 2·R_{g,i,j}(x, y) + R_{b,i,j}(x, y)
For R'_{i,j}, the frequencies of runs of 3 consecutive elements are counted in the horizontal and vertical directions, respectively, giving third-order co-occurrence matrices C_h and C_v. Each co-occurrence matrix has dimension d = 8^3 = 512. Taking the horizontal direction as an example, the co-occurrence matrix C_h is computed as:
C_h(v_1, v_2, v_3) = (1/N) · Σ_{x,y} Φ{R'_{i,j}(x, y) = v_1, R'_{i,j}(x, y+1) = v_2, R'_{i,j}(x, y+2) = v_3}
where (v_1, v_2, v_3) is the index into the co-occurrence matrix, N is a normalization factor, and Φ{·} = 1 if and only if the logical expression in the braces is true, otherwise Φ{·} = 0. Finally, the matrices are summed and their mean is taken, yielding a 512-dimensional statistical feature.
For a given image, the pixel values of its three color channels R, G, B are extracted, as shown in Fig. 3 (a), and substituted into the formulas for R_{c,i,j}(x, y) and R_{i,j}(x, y) above, giving the result shown in Fig. 3 (b).
The result is then converted by the equivalence transformation, giving the result shown in Fig. 4.
Finally, the horizontal co-occurrence matrix C_h is counted according to the formula above, yielding a 512-dimensional feature. Similarly, the co-occurrence matrices for the remaining offsets and directions are computed, and the mean of these features is taken to obtain the feature of the image.
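Condensing the steps of this embodiment, a self-contained sketch of the 512-dimensional feature is shown below. For brevity it handles only the offset (0, 1) and averages the horizontal and vertical third-order co-occurrence histograms; the full method additionally averages over the remaining offsets, and the bit weighting is an assumed encoding:

```python
import numpy as np

def face_feature(img):
    """512-dim feature sketch: encode adjacent-pixel color relations for
    offset (0, 1), then average horizontal and vertical 3rd-order
    co-occurrence histograms of the encoded map."""
    a = np.asarray(img, dtype=np.int16)
    bits = (a[:, :-1] > a[:, 1:]).astype(np.int64)   # Phi{I_c(x,y) > I_c(x,y+1)}
    rp = bits[..., 0] * 4 + bits[..., 1] * 2 + bits[..., 2]  # values in 0..7

    def cooc(r):  # 3rd-order co-occurrence along rows, 8**3 = 512 bins
        idx = r[:, :-2] * 64 + r[:, 1:-1] * 8 + r[:, 2:]
        h = np.bincount(idx.ravel(), minlength=512).astype(float)
        return h / h.sum()

    return (cooc(rp) + cooc(rp.T)) / 2.0             # average both directions

feat = face_feature(np.random.default_rng(1).integers(0, 256, (64, 64, 3)))
```

The result is a normalized 512-bin frequency vector, matching the feature dimension used for classifier training below.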
Fig. 5 plots the mean curves of the 512-dimensional features computed from 1000 real face images and 1000 generated face images, respectively. It can be seen that in many dimensions the features of real face images and generated face images differ significantly.
Step S30: train on the statistical features to obtain a classification model.
Specifically, for each sample in the training image set, the co-occurrence matrix is extracted as the feature by the method above, and a supervised learning method is used to train an ensemble classifier with linear discriminant analysis (LDA) as its base classifier, as the two-class model. The classification model is a two-class classifier obtained by supervised learning on the training sample set. Modeling the training sample set on the basis of color relationships means modeling the color relations of the training samples with co-occurrence matrices. An ensemble classifier is used because it remains highly efficient when the training samples are numerous and the feature dimension is high, while still achieving good classification performance. In practical applications, other types of classifiers, such as support vector machines (SVM), can also be chosen as needed.
Step S40: detect the test image with the classification model.
Specifically, for a given face image to be tested, the co-occurrence matrix is likewise extracted as the feature by the same method. The feature is input into the trained classification model to obtain a prediction. If the prediction indicates that the test image is a generated face image, the image is judged to be a face image generated by a deep network; otherwise, the image is a real face image.
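The decision in step S40 reduces to mapping a two-class prediction to a verdict. A toy sketch using a plain LDA on synthetic 4-dimensional stand-in features (both the data and the `detect` helper are illustrative, not part of the patent):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Toy stand-ins: class 0 = real, class 1 = generated, well separated on purpose.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 4)), rng.normal(3.0, 1.0, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
model = LinearDiscriminantAnalysis().fit(X, y)

def detect(feature, clf):
    """Map the two-class output to the patent's decision rule."""
    return "generated" if clf.predict(feature.reshape(1, -1))[0] == 1 else "real"

print(detect(np.zeros(4), model))   # a real-like feature -> "real"
```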
In a further preferred embodiment, the CelebA face image dataset (containing 202,599 real face images) is used to train a deep network composed of a variational auto-encoder (VAE) as the face image generator. The trained deep network generates as many face images as the CelebA dataset contains. In this example, the image size is 64 × 64. The generated face images and the real face images in CelebA are each randomly divided, in a 50% ratio, into a training set and a test set.
In the training set, the aforementioned 512-dimensional feature is extracted from every image and a two-class model is trained. After the classification model is trained, the images in the test set are detected, with the following results:
Actual class \ Predicted class    Real face image    Generated face image
Real face image                   99.90%             0.10%
Generated face image              0.04%              99.96%
In a further preferred embodiment, the CelebA face image dataset is used to train a generative adversarial network (GAN) as the face image generator. The trained deep network generates as many face images as the CelebA dataset contains, with image size 64 × 64. The generated face images and the real face images in CelebA are then each randomly divided, in a 50% ratio, into a training set and a test set; a classification model is trained and tested, giving the following experimental results:
Actual class \ Predicted class    Real face image    Generated face image
Real face image                   99.33%             0.67%
Generated face image              0.05%              99.95%
In a further preferred embodiment, the CelebA face image dataset is used to train a deep network composed of a variational auto-encoder as the face image generator, generating face images of size 128 × 128. The generated face images and the real face images in CelebA are each randomly divided, in a 50% ratio, into a training set and a test set; after a series of experiments, the results are as follows:
Actual class \ Predicted class    Real face image    Generated face image
Real face image                   99.99%             0.01%
Generated face image              0.00%              100%
In a further preferred embodiment, the CelebA face image dataset is used to train a generative adversarial network as the face image generator, generating face images of size 128 × 128. The generated face images and the real face images in CelebA are each randomly divided, in a 50% ratio, into a training set and a test set; after a series of experiments, the test results are as follows:
Actual class \ Predicted class    Real face image    Generated face image
Real face image                   100%               0.00%
Generated face image              0.00%              100%
In a further preferred embodiment, the CelebA-HQ face image dataset (containing 30,000 real face images) is used to train a progressive growing generative adversarial network (Progressive Growing of GANs) as the face image generator, generating high-definition face images of size 1024 × 1024. The generated face images and the real face images are each randomly divided, in a 50% ratio, into a training set and a test set; after a series of experiments, the test results are as follows:
Actual class \ Predicted class    Real face image    Generated face image
Real face image                   99.07%             0.93%
Generated face image              0.38%              99.62%
The above experimental results show that the method of the present invention achieves very high detection accuracy on face images of various sizes generated by different types of deep networks, and can effectively judge whether a given face image is a generated fake image. This is of great significance for practical occasions involving face image security.
Based on the above method embodiments, the present invention also provides a detection system for face images generated by a deep network, comprising a sample construction module, a feature extraction module, a feature training module, and an image detection module.
The sample construction module constructs a training sample set composed of real face images and generated face images.
The feature extraction module models the training sample set on the basis of color relationships and extracts statistical features.
The feature training module trains on the statistical features to obtain a classification model.
The image detection module detects the test image with the classification model.
The feature extraction module comprises a pixel relation module and a statistical description module.
The pixel relation module extracts the magnitude relations of adjacent pixel values in the color channels of each sample in the training sample set, computed as follows:
Denote the input image as I, with R, G, B color channels I_r, I_g, and I_b. The magnitude relation of adjacent pixel values in each color channel is:
R_{c,i,j}(x, y) = Φ{I_c(x, y) > I_c(x+i, y+j)}
where Φ{·} = 1 when the logical expression in the braces is true. The magnitude relations of the three channels R, G, B are regarded as a triple:
R_{i,j}(x, y) = (R_{r,i,j}(x, y), R_{g,i,j}(x, y), R_{b,i,j}(x, y))
Each component of R_{i,j}(x, y) takes the value 0 or 1, so the triple is equivalently transformed into an integer in [0, 7]:
R'_{i,j}(x, y) = 4·R_{r,i,j}(x, y) + 2·R_{g,i,j}(x, y) + R_{b,i,j}(x, y)
The statistical description module describes the color and texture information of each sample in the training sample set. Specifically, R'_{i,j} is modeled with co-occurrence matrices, computed as:
C_{i,j}(v_1, v_2, ..., v_k) = (1/N) · Σ_{x,y} Φ{R'_{i,j}(x, y) = v_1, ..., R'_{i,j}(x, y+k-1) = v_k}
where (v_1, v_2, ..., v_k) is the index into the co-occurrence matrix, N is a normalization factor, and Φ{·} = 1 if and only if the logical expression in the braces is true, otherwise Φ{·} = 0.
The principle of the present invention is as follows: although a face image generated by a deep network can to a certain extent simulate global information such as the shape, pose, and expression of a face, it cannot reproduce well the texture details present in real images. As a result, the internal relations between the pixels of a generated image are inconsistent with those of a real image, and the intrinsic statistical properties of real images are not preserved. Therefore, by extracting statistical features from the color relations between adjacent pixels of an image, real face images can be effectively distinguished from generated face images.
In summary, the invention discloses a detection method and system for face images generated by a deep network: a training sample set composed of real face images and generated face images is constructed; the training sample set is modeled on the basis of color relationships, and statistical features are extracted; a classification model is trained on the statistical features; and the test image is predicted with the classification model. Because face images generated by deep networks are statistically inconsistent with real images, a set of co-occurrence-matrix features based on the color relations of adjacent pixels is designed; the method achieves very high detection accuracy on face images of various sizes generated by different types of deep networks, and can effectively judge whether a given face image is a fake image generated by a deep network.
It should be understood that the application of the present invention is not limited to the above examples. Those of ordinary skill in the art can make improvements or changes in light of the above description, and all such improvements and changes shall fall within the protection scope of the appended claims of the present invention.

Claims (10)

1. A detection method for face images generated by a deep network, characterized in that the detection steps comprise:
A. constructing a training sample set composed of real face images and generated face images;
B. modeling the training sample set on the basis of color relationships, and extracting statistical features;
C. training on the statistical features to obtain a classification model;
D. detecting a test image with the classification model, and outputting a detection and recognition result.
2. The detection method for face images generated by a deep network according to claim 1, characterized in that step A specifically comprises:
A1. obtaining real face images with an imaging device;
A2. obtaining generated face images from random noise vectors through a trained deep network;
A3. treating real face images as negative samples and generated face images as positive samples to compose the training sample set.
3. The detection method for face images generated by a deep network according to claim 1, characterized in that step B specifically comprises:
B1. extracting the magnitude relations of adjacent pixel values in the color channels of each sample in the training sample set;
B2. describing the color and texture information of each sample in the training sample set with co-occurrence matrices;
B3. obtaining the feature of each image.
4. The detection method for face images generated by a deep network according to claim 3, characterized in that B1 is specifically:
denoting the input image as I, with R, G, B color channels I_r, I_g, and I_b, and computing the magnitude relation of adjacent pixel values in each color channel as:
R_{c,i,j}(x, y) = Φ{I_c(x, y) > I_c(x+i, y+j)}
where c ∈ {r, g, b}, (i, j) ∈ {(0,1), (0,-1), (1,0), (-1,0)}, and Φ{·} = 1 if and only if the logical expression in the braces is true; regarding the magnitude relations of the three channels R, G, B as a triple:
R_{i,j}(x, y) = (R_{r,i,j}(x, y), R_{g,i,j}(x, y), R_{b,i,j}(x, y))
and, since each component of R_{i,j}(x, y) takes the value 0 or 1, equivalently transforming the triple into an integer in [0, 7]:
R'_{i,j}(x, y) = 4·R_{r,i,j}(x, y) + 2·R_{g,i,j}(x, y) + R_{b,i,j}(x, y)
5. The detection method for deep-network-generated face images according to claim 3, characterized in that B2 specifically comprises:
modeling R'_{i,j} with co-occurrence matrices; taking the k-th order co-occurrence matrix in the horizontal direction as an example, it is computed as
C(v_1, v_2, ..., v_k) = (1/N) · Σ_{x,y} Φ{R'_{i,j}(x, y) = v_1, R'_{i,j}(x, y+1) = v_2, ..., R'_{i,j}(x, y+k-1) = v_k};
where (v_1, v_2, ..., v_k) is the target index of the co-occurrence matrix, N is a normalization factor, and Φ{·} = 1 if and only if the logical expression in the braces is true, and Φ{·} = 0 otherwise.
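A minimal sketch of the k-th order horizontal co-occurrence statistic, assuming N is the total number of length-k horizontal runs; the function name `cooccurrence` and the dense-count implementation are illustrative, not the patented design.

```python
import numpy as np

def cooccurrence(rmap, k=2, n_values=8):
    """Normalized k-th order horizontal co-occurrence tensor of a
    relation map whose entries lie in 0..n_values-1."""
    h, w = rmap.shape
    counts = np.zeros((n_values,) * k, dtype=np.float64)
    for x in range(h):
        for y in range(w - k + 1):
            idx = tuple(rmap[x, y:y + k])        # the target index (v1, ..., vk)
            counts[idx] += 1.0
    n = h * (w - k + 1)                           # normalization factor N
    return counts / n

rmap = np.array([[0, 1, 2, 1],
                 [7, 7, 0, 0]])
C = cooccurrence(rmap, k=2)
print(C.sum())   # the normalized counts sum to 1
```

Because every horizontal pair is counted exactly once and divided by the number of pairs, the result is a joint-frequency table usable directly as a feature vector after flattening.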
6. The detection method for deep-network-generated face images according to claim 1, characterized in that step C specifically comprises:
using a supervised learning method to train, as the binary classification model, an ensemble classifier whose base classifiers are linear discriminant analyzers.
7. The detection method for deep-network-generated face images according to claim 1, characterized in that step D specifically comprises:
predicting the test image with the classification model; if the classification model predicts that the test image is a generated face image, the image is judged to be a generated face image; otherwise, it is judged to be a real face image.
8. A detection system for deep-network-generated face images, characterized in that the detection system comprises:
a sample construction module for constructing a training sample set composed of real face images and generated face images;
a feature extraction module for modeling the training sample set based on color relationships and extracting statistical features;
a feature training module for training on the statistical features to obtain a classification model;
an image detection module for detecting a test image based on the classification model and outputting a detection and recognition result.
9. The detection system for deep-network-generated face images according to claim 8, characterized in that the feature extraction module comprises:
a pixel relationship module for extracting the magnitude relationships between adjacent pixel values in each color channel of every sample in the training sample set;
the relationships are computed specifically by the following procedure:
denoting the input image by I and its R, G, B color channels by I_r, I_g and I_b, the magnitude relationship between adjacent pixel values in each color channel is computed as
R_{c,i,j}(x, y) = Φ{I_c(x, y) > I_c(x+i, y+j)};
where c ∈ {r, g, b}, (i, j) ∈ {(0, 1), (0, -1), (1, 0), (-1, 0)}, and Φ{·} = 1 if and only if the logical expression in the braces is true, and Φ{·} = 0 otherwise. The magnitude relationships of the three channels R, G, B are treated as a triple:
R_{i,j}(x, y) = (R_{r,i,j}(x, y), R_{g,i,j}(x, y), R_{b,i,j}(x, y)).
Each component of R_{i,j}(x, y) takes the value 0 or 1, and the triple is converted to a single value by the following equivalence transformation (reading the three bits as one decimal number):
R'_{i,j}(x, y) = 4·R_{r,i,j}(x, y) + 2·R_{g,i,j}(x, y) + R_{b,i,j}(x, y).
10. The detection system for deep-network-generated face images according to claim 8, characterized in that the feature extraction module further comprises:
a statistical description module for describing the color and texture information of each sample in the training sample set;
R'_{i,j} is modeled with co-occurrence matrices, computed as
C(v_1, v_2, ..., v_k) = (1/N) · Σ_{x,y} Φ{R'_{i,j}(x, y) = v_1, R'_{i,j}(x, y+1) = v_2, ..., R'_{i,j}(x, y+k-1) = v_k};
where (v_1, v_2, ..., v_k) is the target index of the co-occurrence matrix, N is a normalization factor, and Φ{·} = 1 if and only if the logical expression in the braces is true, and Φ{·} = 0 otherwise.
CN201810434620.4A 2018-05-08 2018-05-08 Detection method and system for generating face image by deep network Active CN108596141B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810434620.4A CN108596141B (en) 2018-05-08 2018-05-08 Detection method and system for generating face image by deep network
PCT/CN2019/085592 WO2019214557A1 (en) 2018-05-08 2019-05-06 Method and system for detecting face image generated by deep network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810434620.4A CN108596141B (en) 2018-05-08 2018-05-08 Detection method and system for generating face image by deep network

Publications (2)

Publication Number Publication Date
CN108596141A true CN108596141A (en) 2018-09-28
CN108596141B CN108596141B (en) 2022-05-17

Family

ID=63635858

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810434620.4A Active CN108596141B (en) 2018-05-08 2018-05-08 Detection method and system for generating face image by deep network

Country Status (2)

Country Link
CN (1) CN108596141B (en)
WO (1) WO2019214557A1 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109635748A (en) * 2018-12-14 2019-04-16 中国公路工程咨询集团有限公司 The extracting method of roadway characteristic in high resolution image
CN109948692A (en) * 2019-03-16 2019-06-28 四川大学 Picture detection method is generated based on the computer of multiple color spaces convolutional neural networks and random forest
CN110163815A (en) * 2019-04-22 2019-08-23 桂林电子科技大学 Low-light (level) restoring method based on multistage variation self-encoding encoder
WO2019214557A1 (en) * 2018-05-08 2019-11-14 深圳大学 Method and system for detecting face image generated by deep network
CN111046975A (en) * 2019-12-27 2020-04-21 深圳云天励飞技术有限公司 Portrait generation method, device, system, electronic equipment and storage medium
CN111444881A (en) * 2020-04-13 2020-07-24 中国人民解放军国防科技大学 Fake face video detection method and device
CN111639589A (en) * 2020-05-28 2020-09-08 西北工业大学 Video false face detection method based on counterstudy and similar color space
CN111709408A (en) * 2020-08-18 2020-09-25 腾讯科技(深圳)有限公司 Image authenticity detection method and device

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111259831B (en) * 2020-01-20 2023-03-24 西北工业大学 False face discrimination method based on recombined color space
CN111597983B (en) * 2020-05-14 2023-06-06 公安部第三研究所 Method for realizing identification of generated false face image based on deep convolutional neural network
CN112200075B (en) * 2020-10-09 2024-06-04 西安西图之光智能科技有限公司 Human face anti-counterfeiting method based on anomaly detection
CN112396005A (en) * 2020-11-23 2021-02-23 平安科技(深圳)有限公司 Biological characteristic image recognition method and device, electronic equipment and readable storage medium
CN112561813B (en) * 2020-12-10 2024-03-26 深圳云天励飞技术股份有限公司 Face image enhancement method and device, electronic equipment and storage medium
CN113095149A (en) * 2021-03-18 2021-07-09 西北工业大学 Full-head texture network structure based on single face image and generation method
CN114519897B (en) * 2021-12-31 2024-09-24 重庆邮电大学 Human face living body detection method based on color space fusion and cyclic neural network
CN118552794B (en) * 2024-07-25 2024-10-18 湖南军芃科技股份有限公司 Ore sorting identification method based on multichannel training and ore sorting machine

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101739555A (en) * 2009-12-01 2010-06-16 北京中星微电子有限公司 Method and system for detecting false face, and method and system for training false face model
US20100158319A1 (en) * 2008-12-22 2010-06-24 Electronics And Telecommunications Research Institute Method and apparatus for fake-face detection using range information
US20150227781A1 (en) * 2014-02-12 2015-08-13 Nec Corporation Information processing apparatus, information processing method, and program
CN107563155A (en) * 2017-08-08 2018-01-09 中国科学院信息工程研究所 A kind of safe steganography method and device based on generation confrontation network
US20180025217A1 (en) * 2016-07-22 2018-01-25 Nec Laboratories America, Inc. Liveness detection for antispoof face recognition
CN107808161A (en) * 2017-10-26 2018-03-16 江苏科技大学 A kind of Underwater targets recognition based on light vision
CN107944358A (en) * 2017-11-14 2018-04-20 华南理工大学 A kind of human face generating method based on depth convolution confrontation network model
US20180285668A1 (en) * 2015-10-30 2018-10-04 Microsoft Technology Licensing, Llc Spoofed face detection

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8705866B2 (en) * 2010-12-07 2014-04-22 Sony Corporation Region description and modeling for image subscene recognition
CN104573743B (en) * 2015-01-14 2018-12-18 南京烽火星空通信发展有限公司 A kind of facial image detection filter method
CN105740787B (en) * 2016-01-25 2019-08-23 南京信息工程大学 Identify the face identification method of color space based on multicore
CN106971161A (en) * 2017-03-27 2017-07-21 深圳大图科创技术开发有限公司 Face In vivo detection system based on color and singular value features
CN107844744A (en) * 2017-10-09 2018-03-27 平安科技(深圳)有限公司 With reference to the face identification method, device and storage medium of depth information
CN108596141B (en) * 2018-05-08 2022-05-17 深圳大学 Detection method and system for generating face image by deep network

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100158319A1 (en) * 2008-12-22 2010-06-24 Electronics And Telecommunications Research Institute Method and apparatus for fake-face detection using range information
CN101739555A (en) * 2009-12-01 2010-06-16 北京中星微电子有限公司 Method and system for detecting false face, and method and system for training false face model
US20150227781A1 (en) * 2014-02-12 2015-08-13 Nec Corporation Information processing apparatus, information processing method, and program
US20180285668A1 (en) * 2015-10-30 2018-10-04 Microsoft Technology Licensing, Llc Spoofed face detection
US20180025217A1 (en) * 2016-07-22 2018-01-25 Nec Laboratories America, Inc. Liveness detection for antispoof face recognition
CN107563155A (en) * 2017-08-08 2018-01-09 中国科学院信息工程研究所 A kind of safe steganography method and device based on generation confrontation network
CN107808161A (en) * 2017-10-26 2018-03-16 江苏科技大学 A kind of Underwater targets recognition based on light vision
CN107944358A (en) * 2017-11-14 2018-04-20 华南理工大学 A kind of human face generating method based on depth convolution confrontation network model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHAHROZ TARIQ et al.: "Detecting Both Machine and Human Created Fake Face Images In the Wild", Proceedings of the 2nd International Workshop on Multimedia Privacy and Security *
ZHU LIN et al.: "Research on detection methods for synthesized face images", Computer Engineering and Applications *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019214557A1 (en) * 2018-05-08 2019-11-14 深圳大学 Method and system for detecting face image generated by deep network
CN109635748A (en) * 2018-12-14 2019-04-16 中国公路工程咨询集团有限公司 The extracting method of roadway characteristic in high resolution image
CN109948692A (en) * 2019-03-16 2019-06-28 四川大学 Picture detection method is generated based on the computer of multiple color spaces convolutional neural networks and random forest
CN109948692B (en) * 2019-03-16 2020-12-15 四川大学 Computer-generated picture detection method based on multi-color space convolutional neural network and random forest
CN110163815A (en) * 2019-04-22 2019-08-23 桂林电子科技大学 Low-light (level) restoring method based on multistage variation self-encoding encoder
CN110163815B (en) * 2019-04-22 2022-06-24 桂林电子科技大学 Low-illumination reduction method based on multi-stage variational self-encoder
CN111046975A (en) * 2019-12-27 2020-04-21 深圳云天励飞技术有限公司 Portrait generation method, device, system, electronic equipment and storage medium
CN111444881A (en) * 2020-04-13 2020-07-24 中国人民解放军国防科技大学 Fake face video detection method and device
CN111639589A (en) * 2020-05-28 2020-09-08 西北工业大学 Video false face detection method based on counterstudy and similar color space
CN111709408A (en) * 2020-08-18 2020-09-25 腾讯科技(深圳)有限公司 Image authenticity detection method and device
CN111709408B (en) * 2020-08-18 2020-11-20 腾讯科技(深圳)有限公司 Image authenticity detection method and device

Also Published As

Publication number Publication date
WO2019214557A1 (en) 2019-11-14
CN108596141B (en) 2022-05-17

Similar Documents

Publication Publication Date Title
CN108596141A (en) A kind of depth network generates the detection method and system of facial image
Yang et al. MTD-Net: Learning to detect deepfakes images by multi-scale texture difference
Zheng et al. A survey on image tampering and its detection in real-world photos
Albahar et al. Deepfakes: Threats and countermeasures systematic review
Guo et al. Fake colorized image detection
Kong et al. Detect and locate: Exposing face manipulation by semantic-and noise-level telltales
CN106530200A (en) Deep-learning-model-based steganography image detection method and system
Zhang et al. No one can escape: A general approach to detect tampered and generated image
CN111353399A (en) Tamper video detection method
CN110457996A (en) Moving Objects in Video Sequences based on VGG-11 convolutional neural networks distorts evidence collecting method
Agarwal et al. MD-CSDNetwork: Multi-domain cross stitched network for deepfake detection
CN117474741B (en) Active defense detection method based on face key point watermark
Luo et al. Dual attention network approaches to face forgery video detection
Kaushal et al. The societal impact of Deepfakes: Advances in Detection and Mitigation
Suresh et al. Deep learning-based image forgery detection system
Poibrenski et al. Towards a methodology for training with synthetic data on the example of pedestrian detection in a frame-by-frame semantic segmentation task
Boutadjine et al. A comprehensive study on multimedia DeepFakes
Guefrachi et al. Deep learning based DeepFake video detection
CN107103327A (en) Image detecting method is forged in a kind of dyeing based on Color Statistical difference
CN111104892A (en) Human face tampering identification method based on target detection, model and identification method thereof
Zhou et al. Detecting deepfake videos via frame serialization learning
CN115457622A (en) Method, system and equipment for detecting deeply forged faces based on identity invariant features
Xiu-Jian et al. Deep Learning Based Image Forgery Detection Methods
Megahed et al. Exposing deepfake using fusion of deep-learned and hand-crafted features
Ashok et al. Deepfake Detection Using XceptionNet

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant