CN112507903A - False face detection method and device, electronic equipment and computer readable storage medium - Google Patents
False face detection method and device, electronic equipment and computer readable storage medium Download PDFInfo
- Publication number
- CN112507903A (application number CN202011473875.5A)
- Authority
- CN
- China
- Prior art keywords
- face
- image
- probability value
- false
- fine
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000001514 detection method Methods 0.000 title claims abstract description 137
- 238000001914 filtration Methods 0.000 claims abstract description 48
- 238000012545 processing Methods 0.000 claims abstract description 45
- 238000006243 chemical reaction Methods 0.000 claims abstract description 32
- 230000009467 reduction Effects 0.000 claims abstract description 27
- 238000004422 calculation algorithm Methods 0.000 claims abstract description 22
- 230000004927 fusion Effects 0.000 claims abstract description 6
- 230000006870 function Effects 0.000 claims description 39
- 238000004364 calculation method Methods 0.000 claims description 25
- 238000013136 deep learning model Methods 0.000 claims description 22
- 238000013145 classification model Methods 0.000 claims description 16
- 238000007499 fusion processing Methods 0.000 claims description 10
- 238000004590 computer program Methods 0.000 claims description 9
- 230000009466 transformation Effects 0.000 claims description 9
- 238000000034 method Methods 0.000 claims description 8
- 238000000605 extraction Methods 0.000 claims description 6
- 238000005516 engineering process Methods 0.000 abstract description 7
- 230000015572 biosynthetic process Effects 0.000 description 4
- 238000010586 diagram Methods 0.000 description 4
- 238000004891 communication Methods 0.000 description 2
- 230000003287 optical effect Effects 0.000 description 2
- 238000012512 characterization method Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000002093 peripheral effect Effects 0.000 description 1
- 230000008569 process Effects 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 230000002194 synthesizing effect Effects 0.000 description 1
- 238000012549 training Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/254—Fusion techniques of classification results, e.g. of results related to same input data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention relates to image processing technology and discloses a false face detection method, which comprises the following steps: performing frequency domain conversion and high-pass filtering on an original face image to obtain an initial face image, and calculating a first false face probability value from it; performing face detection on the original face image to obtain a face frame, performing expansion or reduction operations on the face frame by a preset proportion to obtain a face captured image set, performing fine-grained classification processing on the face captured image set to obtain a fine-grained probability value set, and fusing the fine-grained probability value set to obtain a second false face probability value; performing weighted fusion on the two probability values to obtain a face detection probability value, comparing the face detection probability value with a preset detection threshold, and obtaining a judgment result of whether the original face image is a false face. The invention also relates to blockchain technology: the judgment result and related data can be stored in a blockchain node. The invention further discloses a false face detection device, an electronic device and a storage medium. The invention can solve the problems that existing detection methods have insufficient detection precision and that their detection algorithms are time-consuming.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a false face detection method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the wide application of face recognition and face unlocking technologies in various fields, face anti-counterfeiting and face liveness detection have drawn attention from both academia and industry. At present, a large number of fraud operations attack major face recognition systems with faces synthesized by advanced techniques; the fidelity of face synthesis grows ever higher, and the attack success rate has risen greatly. In particular, even combined liveness detection offers little defense against synthesized false faces.
Current false face synthesis detection methods calculate the gradient amplitude and texture contrast of the face edge, and judge face synthesis by analyzing the chroma and saturation of the skin color. When such methods extract face features, the extracted features cannot express the characteristics of a synthesized face well, so the feature characterization capability is insufficient, the detection precision is insufficient, and the detection algorithm is time-consuming.
Disclosure of Invention
The invention provides a false face detection method, a false face detection device, electronic equipment and a computer-readable storage medium, and mainly aims to solve the problem that the characteristics of a synthesized face cannot be well expressed by the features extracted by the conventional detection method.
In order to achieve the above object, the present invention provides a false face detection method, including:
acquiring an original face image, and performing frequency domain conversion and high-pass filtering processing on the original face image to obtain an initial face image;
calculating the initial face image by using a pre-constructed lightweight deep learning model to obtain a first false face probability value;
carrying out face contour detection on the original face image by using a preset face detection algorithm to obtain a face frame, and carrying out expansion or reduction operation on the face frame to obtain face intercepted image sets with different sizes;
carrying out fine-grained classification processing on the face screenshot image set by using a pre-constructed fine-grained classification model to obtain a fine-grained probability value set corresponding to the face screenshot image set, and fusing the fine-grained probability value set to obtain a second false face probability value;
carrying out weighting fusion processing on the first false face probability value and the second false face probability value to obtain a face detection probability value;
detecting whether the original face is a false face according to the face detection probability value, and obtaining a detection result;
and when the detection result is a false face, sending the judgment result to a preset terminal.
Optionally, the calculating the initial face image by using a pre-constructed lightweight deep learning model to obtain a first false face probability value includes:
performing feature extraction on the initial face image by using the lightweight deep learning model to obtain initial image features;
and performing probability calculation on the initial image features according to a classification function in the lightweight deep learning model to obtain a first false face probability value.
Optionally, the performing face contour detection on the original face image by using a preset face detection algorithm to obtain a face frame, and performing an expansion or reduction operation on the face frame to obtain face captured image sets of different sizes includes:
carrying out face detection processing on the original face image by using a preset face detection algorithm to obtain one or more face frames;
carrying out expansion or reduction operation on the face frame by using a preset proportion to obtain a face expansion frame and a face reduction frame;
respectively intercepting the original face image by using the face expansion frame and the face reduction frame to obtain a face region image set;
and carrying out scaling processing on the face region image set according to a preset size to obtain face intercepted image sets with different sizes.
Optionally, the performing fine-grained classification processing on the face capture image set by using a pre-constructed fine-grained classification model to obtain a fine-grained probability value set corresponding to the face capture image set, and fusing the fine-grained probability value set to obtain a second false face probability value includes:
extracting features of the face intercepted images in the face intercepted image set to obtain intercepted image features;
performing probability calculation on the intercepted image features by using a preset classification function to obtain a first fine-grained probability value;
extracting the regional information of the intercepted image features through a preset first regional target network, and cutting the intercepted image according to the extracted regional information to obtain a first regional image;
zooming the first area map according to a preset zooming size, inputting the zoomed first area map into a preset second area target network to obtain a second area map, and performing probability calculation on the second area map by utilizing a classification layer in the second area target network to obtain a second fine-grained probability value;
the second area map is used as the input of a preset third area target network to obtain a third area map and a third fine-grained probability value, and the third area map is used as the input of a preset fourth area target network to obtain a fourth area map and a fourth fine-grained probability value;
and summarizing the first fine-grained probability value, the second fine-grained probability value, the third fine-grained probability value and the fourth fine-grained probability value to obtain a second false face probability value.
Optionally, the performing frequency domain conversion and high-pass filtering processing on the original face image to obtain an initial face image includes:
carrying out space fast conversion on the original face image to obtain a fast frequency domain image;
filtering the rapid frequency domain image by using a preset filtering function to obtain a filtered image;
and carrying out frequency domain inverse transformation on the filtering image to obtain an initial face image.
Optionally, the performing spatial fast conversion on the original face image to obtain a fast frequency domain image includes:
and carrying out space fast conversion on the original face image by using a preset first conversion formula to obtain a pixel value F (u, v) of a fast frequency domain image:
where F (x, y) represents the pixel value of the original face image, F (u, v) represents the pixel value of the fast frequency domain image, M, N represents the width and height of the original face image, and j is a fixed parameter in the fast fourier transform function.
Optionally, the filtering the fast frequency domain image by using a preset filtering function to obtain a filtered image includes:
and carrying out filtering processing on the rapid frequency domain image by using the following filtering function to obtain a filtering image:
where H (u, v) is the pixel value of the filtered image, F (u, v) is the pixel value of the fast frequency domain image, D0And n is a fixed parameter.
In order to solve the above problem, the present invention also provides a false face detection device, including:
the system comprises an initial face image acquisition module, a frequency domain conversion module and a high-pass filtering module, wherein the initial face image acquisition module is used for acquiring an original face image and carrying out frequency domain conversion and high-pass filtering processing on the original face image to obtain an initial face image;
the first false face probability value calculation module is used for calculating the initial face image by using a pre-constructed lightweight deep learning model to obtain a first false face probability value;
the face capturing image set acquisition module is used for carrying out face contour detection on the original face image by using a preset face detection algorithm to obtain a face frame, and carrying out expansion or reduction operation on the face frame to obtain face capturing image sets with different sizes;
the second false face probability value calculation module is used for performing fine-grained classification processing on the face screenshot image set by using a pre-constructed fine-grained classification model to obtain a fine-grained probability value set corresponding to the face screenshot image set, and fusing the fine-grained probability value set to obtain a second false face probability value;
the weighted fusion module is used for carrying out weighted fusion processing on the first false face probability value and the second false face probability value to obtain a face detection probability value;
and the false face judging module is used for comparing the face detection probability value with a preset detection threshold value to obtain a judging result of whether the original face image is a false face or not, and transmitting the judging result to a preset terminal.
In order to solve the above problem, the present invention also provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores computer program instructions executable by the at least one processor to cause the at least one processor to perform the false face detection method described above.
In order to solve the above problem, the present invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above false face detection method.
The method comprises the steps of first obtaining an original face image and performing frequency domain conversion and high-pass filtering processing on the original face image to obtain an initial face image: the frequency domain conversion transforms the face image into the frequency domain, and the high-pass filtering processing filters out low-frequency components in the image. The initial face image is calculated by using a pre-constructed lightweight deep learning model to obtain a first false face probability value. Face contour detection is performed on the original face image by using a preset face detection algorithm to obtain a face frame, and expansion or reduction operations are performed on the face frame to obtain face captured image sets of different sizes; because the face captured image sets contain pictures of different sizes, the robustness of subsequent model training is improved. Fine-grained classification processing is performed on the face captured image set by using a pre-constructed fine-grained classification model to obtain a corresponding fine-grained probability value set, and the fine-grained probability value set is fused to obtain a second false face probability value. The first false face probability value and the second false face probability value undergo weighted fusion processing to obtain a face detection probability value. The face detection probability value is compared with a preset detection threshold to obtain a judgment result of whether the original face image is a false face, and the judgment result is transmitted to a preset terminal.
Therefore, the false face detection method, the false face detection device and the computer readable storage medium provided by the invention can improve the efficiency of the false face detection method and solve the problem that the characteristics of the synthesized face cannot be well expressed by the features extracted by the existing detection method.
Drawings
Fig. 1 is a schematic flow chart of a false face detection method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating a step of the false face detection method shown in FIG. 1;
fig. 3 is a schematic block diagram of a false face detection apparatus according to an embodiment of the present invention;
fig. 4 is a schematic diagram of an internal structure of an electronic device implementing a false face detection method according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
An embodiment of the present invention provides a false face detection method. The execution subject of the false face detection method includes, but is not limited to, at least one of the electronic devices, such as a server or a terminal, that can be configured to execute the method provided in the embodiment of the present application. In other words, the false face detection method may be performed by software or hardware installed in the terminal device or the server device, and the software may be a blockchain platform. The server includes, but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like.
Fig. 1 is a schematic flow chart of a false face detection method according to an embodiment of the present invention. In this embodiment, the false face detection method includes:
and S1, acquiring an original face image, and performing frequency domain conversion and high-pass filtering processing on the original face image to obtain the original face image.
In the embodiment of the invention, the original face image may be an image obtained by synthesizing a face through advanced techniques, i.e. the image to be examined.
Further, referring to fig. 2, in the embodiment of the present invention, the performing frequency domain conversion and high-pass filtering on the original face image to obtain an initial face image includes:
s11, carrying out space fast conversion on the original face image to obtain a fast frequency domain image;
s12, filtering the rapid frequency domain image by using a preset filtering function to obtain a filtered image;
and S13, performing frequency domain inverse transformation on the filtered image to obtain an initial face image.
Specifically, the performing spatial fast conversion on the original face image to obtain a fast frequency domain image includes:
and carrying out space fast conversion on the original face image by using a preset first conversion formula to obtain a pixel value F (u, v) of a fast frequency domain image:
where F (x, y) represents the pixel value of the original face image, F (u, v) represents the pixel value of the fast frequency domain image, M, N represents the width and height of the original face image, and j is a fixed parameter in the fast fourier transform function.
In detail, light interference such as ambient light may exist in the fast frequency domain image, and therefore a filter function needs to be designed to perform filtering processing on the fast frequency domain image to filter out low-frequency components in the fast frequency domain image.
Further, in the embodiment of the present invention, a preset filtering function is used to perform filtering processing on the fast frequency domain image, so as to obtain a filtered image:

H(u, v) = F(u, v) / (1 + (D0 / D(u, v))^{2n})

where H(u, v) is the pixel value of the filtered image, F(u, v) is the pixel value of the fast frequency domain image, D(u, v) is the distance from the point (u, v) to the center of the frequency plane, and D0 and n are fixed parameters.
Preferably, n is taken as 3 and D0 is taken as 130.
Specifically, the performing frequency domain inverse transform on the filtered image to obtain an initial face image includes:
and performing frequency domain inverse transformation on the filtered image by using a preset second transformation formula to obtain a pixel value L (a, b) of the initial face image:
where L (a, b) is the pixel value of the original face image, X, Y represents the width and height of the filtered image, and j is a fixed parameter in the fast fourier transform function.
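The three sub-steps S11–S13 can be sketched in Python with NumPy as follows. The Butterworth-style high-pass form of the filter is an assumption consistent with the stated parameters n = 3 and D0 = 130; the patent's exact filter formula is not reproduced in this text.

```python
import numpy as np

def highpass_filter_face(image, d0=130.0, n=3):
    """Frequency-domain high-pass preprocessing (sketch of S11-S13).

    Converts a grayscale face image to the frequency domain, suppresses
    low-frequency components (e.g. ambient light) with an assumed
    Butterworth-style high-pass filter, then transforms back.
    """
    f = np.fft.fft2(image.astype(np.float64))       # S11: spatial -> frequency
    f = np.fft.fftshift(f)                          # centre the zero frequency

    rows, cols = image.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    d = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)  # distance from centre
    d = np.maximum(d, 1e-8)                         # avoid division by zero
    h = 1.0 / (1.0 + (d0 / d) ** (2 * n))           # Butterworth high-pass

    filtered = f * h                                # S12: filtering
    filtered = np.fft.ifftshift(filtered)
    return np.real(np.fft.ifft2(filtered))          # S13: frequency -> spatial

# A flat (purely low-frequency) image should be almost entirely suppressed.
flat = np.full((64, 64), 128.0)
out = highpass_filter_face(flat)
```

Applied to a real face crop, the residual high-frequency image is what would be fed to the lightweight model in S2.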
And S2, calculating the initial face image by using a pre-constructed lightweight deep learning model to obtain a first false face probability value.
In the embodiment of the present invention, the calculating the initial face image by using the pre-constructed lightweight deep learning model to obtain a first false face probability value includes:
performing feature extraction on the initial face image by using the lightweight deep learning model to obtain initial image features;
and performing probability calculation on the initial image features according to a classification function, such as a softmax function, in the lightweight deep learning model to obtain a first false face probability value.
In the embodiment of the invention, the image is converted from the spatial domain into the frequency domain, interference such as ambient light is filtered out by high-pass filtering, the result is restored to an RGB image, and the RGB image is input into the lightweight model fast_se_resnet, which outputs the classification probability of the first model.
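As a minimal illustration of the probability calculation in S2, a softmax over a hypothetical two-class model head (real vs. false) might look like this; the logit values and the class ordering are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the class dimension."""
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

# Hypothetical two-class head: index 0 = real face, index 1 = false face.
logits = np.array([0.3, 2.1])        # assumed output of the model's final layer
probs = softmax(logits)
first_false_face_prob = probs[1]     # the "first false face probability value"
```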
S3, carrying out face contour detection on the original face image by using a preset face detection algorithm to obtain a face frame, and carrying out expansion or reduction operation on the face frame to obtain face intercepted image sets with different sizes.
In the embodiment of the present invention, the preset face detection algorithm may be CenterFace, an existing lightweight face detector.
In detail, in the embodiment of the present invention, the performing face contour detection on the original face image by using a preset face detection algorithm to obtain a face frame, and performing an expansion or reduction operation on the face frame to obtain face capture image sets of different sizes includes:
carrying out face detection processing on the original face image by using a preset face detection algorithm to obtain one or more face frames;
carrying out expansion or reduction operation on the face frame by using a preset proportion to obtain a face expansion frame and a face reduction frame;
respectively intercepting the original face image by using the face expansion frame and the face reduction frame to obtain a face region image set;
and carrying out scaling processing on the face region image set according to a preset size to obtain face intercepted image sets with different sizes.
Further, in the embodiment of the present invention, a scaling (Resize) module in TensorFlow is used to scale the screenshot image set, so as to obtain the face screenshot image sets of different sizes.
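A rough sketch of the box expansion/reduction, cropping, and scaling described above, using plain NumPy with nearest-neighbour resizing in place of TensorFlow's Resize module; the (x1, y1, x2, y2) box format and the ratios 0.8/1.0/1.2 are illustrative assumptions, since the patent only states that a preset proportion is used:

```python
import numpy as np

def scale_box(box, ratio):
    """Expand (ratio > 1) or shrink (ratio < 1) a face box about its centre."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    w, h = (x2 - x1) * ratio, (y2 - y1) * ratio
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

def crop_and_resize(image, box, size):
    """Clip the box to the image, crop it, and resize by nearest neighbour."""
    h_img, w_img = image.shape[:2]
    x1 = max(int(round(box[0])), 0)
    y1 = max(int(round(box[1])), 0)
    x2 = min(int(round(box[2])), w_img)
    y2 = min(int(round(box[3])), h_img)
    crop = image[y1:y2, x1:x2]
    ys = (np.arange(size) * crop.shape[0] / size).astype(int)
    xs = (np.arange(size) * crop.shape[1] / size).astype(int)
    return crop[ys][:, xs]

image = np.arange(100 * 100).reshape(100, 100)
face_box = (30, 30, 70, 70)            # a detected face frame
crops = [crop_and_resize(image, scale_box(face_box, r), 64)
         for r in (0.8, 1.0, 1.2)]     # reduced, original, and expanded boxes
```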
S4, performing fine-grained classification processing on the face screenshot image set by using a pre-constructed fine-grained classification model to obtain a fine-grained probability value set corresponding to the face screenshot image set, and fusing the fine-grained probability value set to obtain a second false face probability value.
In the embodiment of the invention, the fine-grained classification model may be a false_RA-CNN model.
Specifically, the performing fine-grained classification processing on the face capture image set by using a pre-constructed fine-grained classification model to obtain a fine-grained probability value set corresponding to the face capture image set, and fusing the fine-grained probability value set to obtain a second false face probability value includes:
extracting features of the face intercepted images in the face intercepted image set to obtain intercepted image features;
performing probability calculation on the intercepted image features by using a preset classification function to obtain a first fine-grained probability value;
extracting the regional information of the intercepted image features through a preset first regional target network, and cutting the intercepted image according to the extracted regional information to obtain a first regional image;
zooming the first area map according to a preset zooming size, inputting the zoomed first area map into a preset second area target network to obtain a second area map, and performing probability calculation on the second area map by utilizing a classification layer in the second area target network to obtain a second fine-grained probability value;
the second area map is used as the input of a preset third area target network to obtain a third area map and a third fine-grained probability value, and the third area map is used as the input of a preset fourth area target network to obtain a fourth area map and a fourth fine-grained probability value;
and summarizing the first fine-grained probability value, the second fine-grained probability value, the third fine-grained probability value and the fourth fine-grained probability value to obtain a second false face probability value.
In detail, feature extraction is performed on the face captured image by using a preset base network of fast_se_resnet to obtain captured image features.
Preferably, the classification function may be a softmax function.
Specifically, the extracting the area information of the feature of the captured image through a preset first area target network, and clipping the captured image according to the extracted area information to obtain a first area map includes:
performing connection calculation on the intercepted image features by using a preset full connection layer to obtain an output value set;
normalizing the output value set to obtain a coordinate set;
and cutting the intercepted image according to the coordinate set to obtain a first area image.
In detail, the set of output values comprises 3 values: tx, ty and tl. A square region can be determined by these three values: tx and ty represent the center point of the region, and tl represents its side length.
Specifically, the 3 values in the output value set are multiplied by the size of the input map (244) to restore tx, ty and tl to the original image, and the finally obtained coordinates are att_x = tx × 244, att_y = ty × 244 and att_l = tl × 244, from which the first region map may be cropped out of the intercepted image.
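The coordinate restoration above can be sketched as follows. Reading tl as the region's full side length and clipping the square to the 244-pixel input are assumptions consistent with the description; the function name and sample values are illustrative:

```python
def attention_box(tx, ty, tl, input_size=244):
    """Restore the normalised region outputs (tx, ty, tl) to image coordinates.

    tx, ty give the square's centre and tl its side length, all in [0, 1];
    multiplying by the input size (244 in the patent's example) maps them
    back onto the cropped image.
    """
    att_x = tx * input_size
    att_y = ty * input_size
    att_l = tl * input_size          # side length of the square region
    # Corners of the square attention region, clipped to the image.
    x1 = max(att_x - att_l / 2, 0)
    y1 = max(att_y - att_l / 2, 0)
    x2 = min(att_x + att_l / 2, input_size)
    y2 = min(att_y + att_l / 2, input_size)
    return x1, y1, x2, y2

box = attention_box(0.5, 0.5, 0.25)  # centred square, side 61 px
```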
Further, the summarizing the first fine-grained probability value, the second fine-grained probability value, the third fine-grained probability value and the fourth fine-grained probability value to obtain a second false face probability value includes:
where RA_cls is the second false face probability value, and cls1, cls2, cls3 and cls4 are the first fine-grained probability value, the second fine-grained probability value, the third fine-grained probability value and the fourth fine-grained probability value, respectively.
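A sketch of the summarizing step, assuming a simple average of the four fine-grained probabilities as the fusion; the patent's exact fusion formula is not reproduced in this text, so the averaging and the sample values are assumptions:

```python
def second_false_face_prob(cls1, cls2, cls3, cls4):
    """Fuse the four fine-grained probabilities into RA_cls (assumed average)."""
    return (cls1 + cls2 + cls3 + cls4) / 4.0

# Illustrative fine-grained probabilities from the four region target networks.
ra_cls = second_false_face_prob(0.9, 0.8, 0.85, 0.75)
```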
And S5, carrying out weighting fusion processing on the first false face probability value and the second false face probability value to obtain a face detection probability value.
In the embodiment of the present invention, the weighting and fusing the first false face probability value and the second false face probability value to obtain the face detection probability value is performed by using a preset weighting formula, and the method includes:
P(cls) = 0.625 × RA_cls + 0.375 × Re_cls

where P(cls) is the face detection probability value, Re_cls is the first false face probability value, and RA_cls is the second false face probability value.
S6, detecting whether the original face is a false face according to the face detection probability value, and obtaining a detection result; and when the detection result is a false face, sending the judgment result to a preset terminal.
In the embodiment of the present invention, comparing the face detection probability value with a preset detection threshold value by combining a preset determination formula to obtain a determination result of whether the original face image is a false face, including:
the decision formula is:
wherein y is the judgment result, and N is the preset detection threshold.
Preferably, in the embodiment of the present invention, N is 0.65.
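The determination formula is not reproduced in this text; a plausible reading, assuming y = 1 denotes a false face when P(cls) exceeds the threshold N, is:

```python
def is_false_face(p_cls, threshold=0.65):
    # The decision formula is not reproduced in the text; it is assumed
    # here that y = 1 (false face) when P(cls) exceeds the threshold N,
    # with N = 0.65 as preferred by the embodiment.
    return 1 if p_cls > threshold else 0
```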
Fig. 3 is a schematic block diagram of a false face detection apparatus according to an embodiment of the present invention.
The false face detection apparatus 100 according to the present invention may be installed in an electronic device. According to the realized functions, the false face detection apparatus 100 may include an initial face image acquisition module 101, a first false face probability value calculation module 102, a face clipped image set acquisition module 103, a second false face probability value calculation module 104, a weighted fusion module 105, and a false face determination module 106. The module of the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device and that can perform a fixed function, and that are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the initial face image acquisition module 101 is configured to acquire an original face image, perform frequency domain conversion and high-pass filtering on the original face image, and obtain an initial face image;
the first false face probability value calculation module 102 is configured to calculate the initial face image by using a pre-constructed lightweight deep learning model to obtain a first false face probability value;
the face capture image set acquisition module 103 is configured to perform face contour detection on the original face image by using a preset face detection algorithm to obtain a face frame, and perform an expansion or reduction operation on the face frame to obtain face capture image sets of different sizes;
the second false face probability value calculation module 104 is configured to perform fine-grained classification processing on the face capture image set by using a pre-constructed fine-grained classification model to obtain a fine-grained probability value set corresponding to the face capture image set, and fuse the fine-grained probability value set to obtain a second false face probability value;
the weighted fusion module 105 is configured to perform weighted fusion processing on the first false face probability value and the second false face probability value to obtain a face detection probability value;
the false face determination module 106 is configured to obtain a determination result of whether the original face image is a false face by comparing the face detection probability value with a preset detection threshold, and transmit the determination result to a preset terminal.
In detail, the embodiments of the modules of the false face detection apparatus 100 are as follows:
step one, the initial face image obtaining module 101 obtains an original face image, and performs frequency domain conversion and high-pass filtering processing on the original face image to obtain an initial face image.
In the embodiment of the present invention, the original face image may be a face image synthesized by advanced face-synthesis technology.
Further, in the embodiment of the present invention, the obtaining module 101 of the initial face image performs frequency domain conversion and high-pass filtering on the original face image to obtain the initial face image, and includes:
carrying out space fast conversion on the original face image to obtain a fast frequency domain image;
filtering the rapid frequency domain image by using a preset filtering function to obtain a filtered image;
and carrying out frequency domain inverse transformation on the filtering image to obtain an initial face image.
Specifically, the performing spatial fast conversion on the original face image to obtain a fast frequency domain image includes:
and carrying out space fast conversion on the original face image by using a preset first conversion formula to obtain a pixel value F (u, v) of a fast frequency domain image:
where f(x, y) represents the pixel value of the original face image, F(u, v) represents the pixel value of the fast frequency domain image, M and N represent the width and height of the original face image, and j is the imaginary unit in the fast Fourier transform.
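The first conversion formula is not reproduced in this text; assuming it is the standard two-dimensional discrete Fourier transform, F(u,v) = Σ_x Σ_y f(x,y)·e^(−j2π(ux/M + vy/N)), its definition can be checked against a library FFT on a tiny image:

```python
import numpy as np

# Manual evaluation of the assumed 2-D DFT definition on a small image,
# compared against NumPy's fft2 (the formula itself is an assumption,
# since the patent's image of the equation is not reproduced here).
f = np.arange(12, dtype=float).reshape(3, 4)
M, N = f.shape
F_manual = np.zeros((M, N), dtype=complex)
for u in range(M):
    for v in range(N):
        for x in range(M):
            for y in range(N):
                F_manual[u, v] += f[x, y] * np.exp(
                    -2j * np.pi * (u * x / M + v * y / N))
```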
In detail, light interference such as ambient light may exist in the fast frequency domain image, and therefore a filter function needs to be designed to perform filtering processing on the fast frequency domain image to filter out low-frequency components in the fast frequency domain image.
Further, in the embodiment of the present invention, a preset filtering function is used to perform filtering processing on the fast frequency domain image, so as to obtain a filtered image:
where H(u, v) is the pixel value of the filtered image, F(u, v) is the pixel value of the fast frequency domain image, and D0 and n are fixed parameters.
Preferably, n is taken as 3 and D0 is taken as 130.
Specifically, the performing frequency domain inverse transform on the filtered image to obtain an initial face image includes:
and performing frequency domain inverse transformation on the filtered image by using a preset second transformation formula to obtain a pixel value L (a, b) of the initial face image:
where L(a, b) is the pixel value of the initial face image, X and Y represent the width and height of the filtered image, and j is the imaginary unit in the inverse fast Fourier transform.
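The three steps above (fast conversion, filtering, inverse transformation) can be sketched together. This is a minimal illustration assuming the preset filtering function is a standard Butterworth high-pass with cutoff D0 = 130 and order n = 3, which matches the named parameters but is not confirmed by the text:

```python
import numpy as np

def highpass_initial_face(image, d0=130.0, n=3):
    """FFT -> assumed Butterworth high-pass -> inverse FFT."""
    f_shift = np.fft.fftshift(np.fft.fft2(image))    # spatial fast conversion
    rows, cols = image.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    d = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)   # distance from center
    # Assumed filter: H = 1 / (1 + (D0 / D)^(2n)); suppresses low frequencies.
    h = 1.0 / (1.0 + (d0 / np.maximum(d, 1e-8)) ** (2 * n))
    # inverse frequency-domain transform back to the spatial domain
    return np.real(np.fft.ifft2(np.fft.ifftshift(f_shift * h)))

img = np.random.default_rng(0).random((64, 64))
out = highpass_initial_face(img)
```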
And secondly, the first false face probability value calculation module 102 calculates the initial face image by using a pre-constructed lightweight deep learning model to obtain a first false face probability value.
In this embodiment of the present invention, the calculating module 102 of the first false face probability value calculates the initial face image by using a pre-constructed lightweight deep learning model to obtain the first false face probability value, including:
performing feature extraction on the initial face image by using the lightweight deep learning model to obtain initial image features;
and performing probability calculation on the initial image features according to a classification function, such as a softmax function, in the lightweight deep learning model to obtain a first false face probability value.
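A minimal sketch of the softmax probability calculation mentioned above (the two-class [real, false] layout and the sample logits are assumptions, not the patent's model):

```python
import numpy as np

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating.
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

# Hypothetical two-class head: logits for [real, false] from the backbone.
logits = np.array([1.2, 2.3])
probs = softmax(logits)
first_false_prob = float(probs[1])
```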
In the embodiment of the invention, the spatial-domain image is converted into the frequency domain, interference such as light is filtered out by high-pass filtering, the frequency-domain image is restored to an RGB image, the RGB image is input into the lightweight model fast_se_resnet, and the classification probability of the first model is output.
And step three, the face capture image set acquisition module 103 performs face contour detection on the original face image by using a preset face detection algorithm to obtain a face frame, and performs expansion or reduction operation on the face frame to obtain face capture image sets with different sizes.
In the embodiment of the present invention, the preset face detection algorithm may be CenterFace, an existing lightweight face detector.
In detail, in the embodiment of the present invention, the human face captured image set obtaining module 103 performs human face contour detection on the original human face image by using a preset human face detection algorithm to obtain a human face frame, and performs an expansion or reduction operation on the human face frame to obtain human face captured image sets with different sizes, including:
carrying out face detection processing on the original face image by using a preset face detection algorithm to obtain one or more face frames;
carrying out expansion or reduction operation on the face frame by using a preset proportion to obtain a face expansion frame and a face reduction frame;
respectively intercepting the original face image by using the face expansion frame and the face reduction frame to obtain a face region image set;
and carrying out scaling processing on the face region image set according to a preset size to obtain face intercepted image sets with different sizes.
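The four steps above might be sketched as follows (the expansion/reduction ratios, the preset size of 244, and the nearest-neighbour resize standing in for a library resize function are all assumptions):

```python
import numpy as np

def scale_box(box, ratio, img_w, img_h):
    """Expand (ratio > 1) or reduce (ratio < 1) a face box about its center."""
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    w, h = (x1 - x0) * ratio, (y1 - y0) * ratio
    return (max(int(cx - w / 2), 0), max(int(cy - h / 2), 0),
            min(int(cx + w / 2), img_w), min(int(cy + h / 2), img_h))

def build_crop_set(image, box, ratios=(0.9, 1.0, 1.1), size=244):
    """Crop the face at several scales and resize each crop to size x size."""
    crops = []
    for r in ratios:
        x0, y0, x1, y1 = scale_box(box, r, image.shape[1], image.shape[0])
        crop = image[y0:y1, x0:x1]
        # Nearest-neighbour resize via index sampling (illustration only).
        ys = np.linspace(0, crop.shape[0] - 1, size).astype(int)
        xs = np.linspace(0, crop.shape[1] - 1, size).astype(int)
        crops.append(crop[ys][:, xs])
    return crops

img = np.zeros((480, 640, 3), dtype=np.uint8)
crop_set = build_crop_set(img, box=(200, 100, 400, 300))
```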
Further, in the embodiment of the present invention, a scaling (Resize) module in TensorFlow is used to scale the capture image set, so as to obtain the face capture image sets of different sizes.
Fourthly, the second false face probability value calculation module 104 performs fine-grained classification processing on the face capture image set by using a pre-constructed fine-grained classification model to obtain a fine-grained probability value set corresponding to the face capture image set, and fuses the fine-grained probability value set to obtain a second false face probability value.
In the embodiment of the invention, the fine-grained classification model may be a false_RA-CNN model.
Specifically, the second false face probability value calculation module 104 performs fine-grained classification processing on the face capture image set by using a pre-constructed fine-grained classification model to obtain a fine-grained probability value set corresponding to the face capture image set, and fuses the fine-grained probability value set to obtain a second false face probability value, including:
extracting features of the face intercepted images in the face intercepted image set to obtain intercepted image features;
performing probability calculation on the intercepted image features by using a preset classification function to obtain a first fine-grained probability value;
extracting the regional information of the intercepted image features through a preset first regional target network, and cutting the intercepted image according to the extracted regional information to obtain a first regional image;
zooming the first area map according to a preset zooming size, inputting the zoomed first area map into a preset second area target network to obtain a second area map, and performing probability calculation on the second area map by utilizing a classification layer in the second area target network to obtain a second fine-grained probability value;
the second area map is used as the input of a preset third area target network to obtain a third area map and a third fine-grained probability value, and the third area map is used as the input of a preset fourth area target network to obtain a fourth area map and a fourth fine-grained probability value;
and summarizing the first fine-grained probability value, the second fine-grained probability value, the third fine-grained probability value and the fourth fine-grained probability value to obtain a second false face probability value.
In detail, feature extraction is carried out on the face captured image by using the preset base network of fast_se_resnet to obtain the captured image features.
Preferably, the classification function may be a softmax function.
Specifically, the extracting the area information of the feature of the captured image through a preset first area target network, and clipping the captured image according to the extracted area information to obtain a first area map includes:
performing connection calculation on the intercepted image features by using a preset full connection layer to obtain an output value set;
normalizing the output value set to obtain a coordinate set;
and cutting the intercepted image according to the coordinate set to obtain a first area image.
In detail, the output value set comprises three values, tx, ty and tl, which together determine a square region: tx and ty represent the center point of the region, and tl represents the side length of the region.
Specifically, the three values in the output value set are multiplied by the size of the input map to restore tx, ty and tl to the original image, and the finally obtained coordinates are att_x = tx × 244, att_y = ty × 244 and att_l = tl × 244, from which the first region map may be cropped out of the intercepted image.
Further, the summarizing the first fine-grained probability value, the second fine-grained probability value, the third fine-grained probability value and the fourth fine-grained probability value to obtain a second false face probability value includes:
wherein RAcls is the second false face probability value, and cls1, cls2, cls3 and cls4 are the first fine-grained probability value, the second fine-grained probability value, the third fine-grained probability value and the fourth fine-grained probability value, respectively.
And fifthly, the weighting fusion module 105 performs weighting fusion processing on the first false face probability value and the second false face probability value to obtain a face detection probability value.
In the embodiment of the present invention, the weighted fusion module 105 performs weighted fusion processing on the first false face probability value and the second false face probability value by using a preset weighting formula to obtain the face detection probability value:
P(cls)=0.625*RAcls+0.375*Recls
wherein P(cls) is the face detection probability value, Recls is the first false face probability value, and RAcls is the second false face probability value.
Step six, the false face determination module 106 detects whether the original face image is a false face according to the face detection probability value to obtain a detection result; and when the detection result is a false face, sends the detection result to a preset terminal.
In this embodiment of the present invention, the false face determining module 106, in combination with a preset determining formula, compares the face detection probability value with a preset detection threshold to obtain a determination result of whether the original face image is a false face, including:
the decision formula is:
wherein y is the judgment result, and N is the preset detection threshold.
Preferably, in the embodiment of the present invention, N is 0.65.
Fig. 4 is a schematic structural diagram of an electronic device implementing the false face detection method according to the present invention.
The electronic device 1 may comprise a processor 10, a memory 11 and a bus, and may further comprise a computer program, such as a false face detection program 12, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, which includes flash memory, removable hard disk, multimedia card, card-type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device 1, such as a removable hard disk of the electronic device 1. The memory 11 may also be an external storage device of the electronic device 1 in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device 1. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 may be used not only to store application software installed in the electronic device 1 and various types of data, such as a code of the false face detection program 12, but also to temporarily store data that has been output or is to be output.
The processor 10 may be composed of an integrated circuit in some embodiments, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital Processing chips, graphics processors, and combinations of various control chips. The processor 10 is a Control Unit (Control Unit) of the electronic device, connects various components of the electronic device by using various interfaces and lines, and executes various functions and processes data of the electronic device 1 by running or executing programs or modules (for example, executing a false face detection program and the like) stored in the memory 11 and calling data stored in the memory 11.
The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The bus is arranged to enable connection communication between the memory 11 and at least one processor 10 or the like.
Fig. 4 only shows an electronic device with components, and it will be understood by those skilled in the art that the structure shown in fig. 4 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than those shown, or some components may be combined, or a different arrangement of components.
For example, although not shown, the electronic device 1 may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so as to implement functions of charge management, discharge management, power consumption management, and the like through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device 1 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Further, the electronic device 1 may further include a network interface, and optionally, the network interface may include a wired interface and/or a wireless interface (such as a WI-FI interface, a bluetooth interface, etc.), which are generally used for establishing a communication connection between the electronic device 1 and other electronic devices.
Optionally, the electronic device 1 may further comprise a user interface, which may be a Display (Display), an input unit (such as a Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable for displaying information processed in the electronic device 1 and for displaying a visualized user interface, among other things.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The false face detection program 12 stored in the memory 11 of the electronic device 1 is a combination of instructions that, when executed in the processor 10, may implement:
acquiring an original face image, and performing frequency domain conversion and high-pass filtering processing on the original face image to obtain an initial face image;
calculating the initial face image by using a pre-constructed lightweight deep learning model to obtain a first false face probability value;
carrying out face contour detection on the original face image by using a preset face detection algorithm to obtain a face frame, and carrying out expansion or reduction operation on the face frame to obtain face intercepted image sets with different sizes;
carrying out fine-grained classification processing on the face screenshot image set by using a pre-constructed fine-grained classification model to obtain a fine-grained probability value set corresponding to the face screenshot image set, and fusing the fine-grained probability value set to obtain a second false face probability value;
carrying out weighting fusion processing on the first false face probability value and the second false face probability value to obtain a face detection probability value;
detecting whether the original face image is a false face according to the face detection probability value to obtain a detection result;
and when the detection result is a false face, sending the detection result to a preset terminal.
Further, the integrated modules/units of the electronic device 1, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. The computer-readable storage medium may be volatile or non-volatile, and may include, for example: any entity or device capable of carrying said computer program code, recording medium, U-disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM).
The present invention also provides a computer-readable storage medium, which stores a computer program that, when executed by a processor of an electronic device, can implement:
acquiring an original face image, and performing frequency domain conversion and high-pass filtering processing on the original face image to obtain an initial face image;
calculating the initial face image by using a pre-constructed lightweight deep learning model to obtain a first false face probability value;
carrying out face contour detection on the original face image by using a preset face detection algorithm to obtain a face frame, and carrying out expansion or reduction operation on the face frame to obtain face intercepted image sets with different sizes;
carrying out fine-grained classification processing on the face screenshot image set by using a pre-constructed fine-grained classification model to obtain a fine-grained probability value set corresponding to the face screenshot image set, and fusing the fine-grained probability value set to obtain a second false face probability value;
carrying out weighting fusion processing on the first false face probability value and the second false face probability value to obtain a face detection probability value;
detecting whether the original face image is a false face according to the face detection probability value to obtain a detection result;
and when the detection result is a false face, sending the detection result to a preset terminal.
Further, the computer usable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the blockchain node, and the like.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any accompanying claims should not be construed as limiting the claim concerned.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. Terms such as first and second are used to denote names, but do not denote any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.
Claims (10)
1. A false face detection method, the method comprising:
acquiring an original face image, and performing frequency domain conversion and high-pass filtering processing on the original face image to obtain an initial face image;
calculating the initial face image by using a pre-constructed lightweight deep learning model to obtain a first false face probability value;
carrying out face contour detection on the original face image by using a preset face detection algorithm to obtain a face frame, and carrying out expansion or reduction operation on the face frame to obtain face intercepted image sets with different sizes;
carrying out fine-grained classification processing on the face screenshot image set by using a pre-constructed fine-grained classification model to obtain a fine-grained probability value set corresponding to the face screenshot image set, and fusing the fine-grained probability value set to obtain a second false face probability value;
carrying out weighting fusion processing on the first false face probability value and the second false face probability value to obtain a face detection probability value;
detecting whether the original face image is a false face according to the face detection probability value to obtain a detection result; and when the detection result is a false face, sending the detection result to a preset terminal.
2. The false face detection method of claim 1, wherein the calculating the initial face image by using the pre-constructed lightweight deep learning model to obtain a first false face probability value comprises:
performing feature extraction on the initial face image by using the lightweight deep learning model to obtain initial image features;
and performing probability calculation on the initial image features according to a classification function in the lightweight deep learning model to obtain a first false face probability value.
3. The false face detection method according to claim 1, wherein the performing face contour detection on the original face image by using a preset face detection algorithm to obtain a face frame, and performing an expansion or reduction operation on the face frame to obtain face-captured image sets of different sizes includes:
carrying out face detection processing on the original face image by using a preset face detection algorithm to obtain one or more face frames;
carrying out expansion or reduction operation on the face frame by using a preset proportion to obtain a face expansion frame and a face reduction frame;
respectively intercepting the original face image by using the face expansion frame and the face reduction frame to obtain a face region image set;
and carrying out scaling processing on the face region image set according to a preset size to obtain face intercepted image sets with different sizes.
4. The false face detection method according to claim 1, wherein the performing fine-grained classification processing on the face capture image set by using a pre-constructed fine-grained classification model to obtain a fine-grained probability value set corresponding to the face capture image set, and fusing the fine-grained probability value set to obtain a second false face probability value comprises:
extracting features of the face intercepted images in the face intercepted image set to obtain intercepted image features;
performing probability calculation on the intercepted image features by using a preset classification function to obtain a first fine-grained probability value;
extracting the regional information of the intercepted image features through a preset first regional target network, and cutting the intercepted image according to the extracted regional information to obtain a first regional image;
zooming the first area map according to a preset zooming size, inputting the zoomed first area map into a preset second area target network to obtain a second area map, and performing probability calculation on the second area map by utilizing a classification layer in the second area target network to obtain a second fine-grained probability value;
the second area map is used as the input of a preset third area target network to obtain a third area map and a third fine-grained probability value, and the third area map is used as the input of a preset fourth area target network to obtain a fourth area map and a fourth fine-grained probability value;
and summarizing the first fine-grained probability value, the second fine-grained probability value, the third fine-grained probability value and the fourth fine-grained probability value to obtain a second false face probability value.
5. The false face detection method of claim 1, wherein the frequency domain converting and high-pass filtering the original face image to obtain an initial face image comprises:
carrying out space fast conversion on the original face image to obtain a fast frequency domain image;
filtering the rapid frequency domain image by using a preset filtering function to obtain a filtered image;
and carrying out frequency domain inverse transformation on the filtering image to obtain an initial face image.
6. The false face detection method of claim 5, wherein the performing the spatial fast transformation on the original face image to obtain a fast frequency domain image comprises:
and carrying out space fast conversion on the original face image by using a preset first conversion formula to obtain a pixel value F (u, v) of a fast frequency domain image:
where f(x, y) represents the pixel value of the original face image, F(u, v) represents the pixel value of the fast frequency domain image, M and N represent the width and height of the original face image, and j is the imaginary unit in the fast Fourier transform.
7. The false face detection method of claim 5, wherein the filtering the fast frequency domain image by using a preset filtering function to obtain a filtered image comprises:
and carrying out filtering processing on the rapid frequency domain image by using the following filtering function to obtain a filtering image:
where H(u, v) is the pixel value of the filtered image, F(u, v) is the pixel value of the fast frequency domain image, and D0 and n are fixed parameters.
8. A false face detection device, characterized in that the device comprises:
an initial face image acquisition module, configured to acquire an original face image, and perform frequency domain conversion and high-pass filtering on the original face image to obtain an initial face image;
a first false face probability value calculation module, configured to calculate on the initial face image by using a pre-constructed lightweight deep learning model to obtain a first false face probability value;
a face screenshot image set acquisition module, configured to perform face contour detection on the original face image by using a preset face detection algorithm to obtain a face frame, and perform expansion or reduction operations on the face frame to obtain face screenshot image sets of different sizes;
a second false face probability value calculation module, configured to perform fine-grained classification on the face screenshot image set by using a pre-constructed fine-grained classification model to obtain a fine-grained probability value set corresponding to the face screenshot image set, and fuse the fine-grained probability value set to obtain a second false face probability value;
a weighted fusion module, configured to perform weighted fusion on the first false face probability value and the second false face probability value to obtain a face detection probability value; and
a false face judging module, configured to compare the face detection probability value with a preset detection threshold value to obtain a judgment result of whether the original face image is a false face, and transmit the judgment result to a preset terminal.
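The non-model parts of the device pipeline (scaling the face frame to get multi-size crops, fusing the two branch probabilities, thresholding) can be sketched as below. The scale factors, equal branch weights, mean fusion of the fine-grained set, and the 0.5 threshold are all hypothetical choices for illustration; the claims fix none of these values.

```python
import numpy as np

SCALES = (0.8, 1.0, 1.2)       # hypothetical expansion/reduction factors
W_FREQ, W_FINE = 0.5, 0.5      # hypothetical fusion weights
THRESHOLD = 0.5                # hypothetical detection threshold

def scale_box(box, factor, img_w, img_h):
    """Expand (factor > 1) or shrink (factor < 1) a face frame about its centre,
    clipped to the image bounds. box = (x, y, w, h)."""
    x, y, w, h = box
    cx, cy = x + w / 2, y + h / 2
    nw, nh = w * factor, h * factor
    nx = max(0.0, cx - nw / 2)
    ny = max(0.0, cy - nh / 2)
    return (nx, ny, min(nw, img_w - nx), min(nh, img_h - ny))

def decide(p_freq, fine_probs):
    """Fuse the frequency-branch probability with the fine-grained probability
    set (mean fusion assumed), then apply the detection threshold."""
    p_fine = float(np.mean(fine_probs))      # fused second false face probability
    p = W_FREQ * p_freq + W_FINE * p_fine    # weighted face detection probability
    return p, p > THRESHOLD                  # True -> judged a false face
```

Using the frame scaling: `scale_box((10, 10, 20, 20), 1.2, 100, 100)` grows a 20x20 frame to 24x24 about the same centre, which is how the device obtains crops of different sizes for fine-grained classification.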
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores computer program instructions executable by the at least one processor to enable the at least one processor to perform the false face detection method of any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the false face detection method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011473875.5A CN112507903B (en) | 2020-12-15 | 2020-12-15 | False face detection method, false face detection device, electronic equipment and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112507903A true CN112507903A (en) | 2021-03-16 |
CN112507903B CN112507903B (en) | 2024-05-10 |
Family
ID=74973354
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011473875.5A Active CN112507903B (en) | 2020-12-15 | 2020-12-15 | False face detection method, false face detection device, electronic equipment and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112507903B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113792671A (en) * | 2021-09-16 | 2021-12-14 | 平安银行股份有限公司 | Method and device for detecting face synthetic image, electronic equipment and medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190171904A1 (en) * | 2017-12-01 | 2019-06-06 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and apparatus for training fine-grained image recognition model, fine-grained image recognition method and apparatus, and storage mediums |
CN110705392A (en) * | 2019-09-17 | 2020-01-17 | Oppo广东移动通信有限公司 | Face image detection method and device and storage medium |
CN110781770A (en) * | 2019-10-08 | 2020-02-11 | 高新兴科技集团股份有限公司 | Living body detection method, device and equipment based on face recognition |
US20200302248A1 (en) * | 2018-01-18 | 2020-09-24 | Polixir Technology Co., Ltd. | Recognition system for security check and control method thereof |
CN112052876A (en) * | 2020-08-04 | 2020-12-08 | 烽火通信科技股份有限公司 | Improved RA-CNN-based fine-grained image detection method and system |
WO2020244151A1 (en) * | 2019-06-05 | 2020-12-10 | 平安科技(深圳)有限公司 | Image processing method and apparatus, terminal, and storage medium |
2020
- 2020-12-15 CN CN202011473875.5A patent/CN112507903B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN112507903B (en) | 2024-05-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109816694B (en) | Target tracking method and device and electronic equipment | |
CN112651953B (en) | Picture similarity calculation method and device, computer equipment and storage medium | |
CN111639704A (en) | Target identification method, device and computer readable storage medium | |
CN112132216B (en) | Vehicle type recognition method and device, electronic equipment and storage medium | |
CN112347526A (en) | Information security protection method and device based on anti-shooting screen, electronic equipment and medium | |
CN112200189A (en) | Vehicle type identification method and device based on SPP-YOLOv3 and computer readable storage medium | |
CN111695615A (en) | Vehicle damage assessment method and device based on artificial intelligence, electronic equipment and medium | |
CN112132812A (en) | Certificate checking method and device, electronic equipment and medium | |
CN113887439A (en) | Automatic early warning method, device, equipment and storage medium based on image recognition | |
CN112528903B (en) | Face image acquisition method and device, electronic equipment and medium | |
CN114049568A (en) | Object shape change detection method, device, equipment and medium based on image comparison | |
CN112862703B (en) | Image correction method and device based on mobile photographing, electronic equipment and medium | |
CN112507903A (en) | False face detection method and device, electronic equipment and computer readable storage medium | |
CN113792671A (en) | Method and device for detecting face synthetic image, electronic equipment and medium | |
CN113255456B (en) | Inactive living body detection method, inactive living body detection device, electronic equipment and storage medium | |
CN115601684A (en) | Emergency early warning method and device, electronic equipment and storage medium | |
CN112633183B (en) | Automatic detection method and device for image shielding area and storage medium | |
CN115690615A (en) | Deep learning target identification method and system for video stream | |
CN115147405A (en) | Rapid nondestructive testing method for new energy battery | |
CN113869385A (en) | Poster comparison method, device and equipment based on target detection and storage medium | |
CN114463685A (en) | Behavior recognition method and device, electronic equipment and storage medium | |
CN112561893A (en) | Picture matching method and device, electronic equipment and storage medium | |
CN112541899A (en) | Incomplete certificate detection method and device, electronic equipment and computer storage medium | |
CN112633134A (en) | In-vehicle face recognition method, device and medium based on image recognition | |
CN112507934B (en) | Living body detection method, living body detection device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||