CN110991356A - Mobile phone playback living attack identification method based on screen edge - Google Patents

Mobile phone playback living attack identification method based on screen edge

Info

Publication number
CN110991356A
CN110991356A (application CN201911242721.2A)
Authority
CN
China
Prior art keywords
image
face image
module
face
screen edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911242721.2A
Other languages
Chinese (zh)
Inventor
Zeng Qiang (曾强)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongke Zhiyun Technology Co Ltd
Original Assignee
Zhongke Zhiyun Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongke Zhiyun Technology Co Ltd filed Critical Zhongke Zhiyun Technology Co Ltd
Priority to CN201911242721.2A priority Critical patent/CN110991356A/en
Publication of CN110991356A publication Critical patent/CN110991356A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 User authentication
    • G06F21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a screen-edge-based method for identifying mobile phone replay (playback) liveness attacks. The recognition system comprises an input module, an image defocus detection module, a screen edge detection module and a judgment output module. The input module extracts the face image from the picture captured by the camera; the image defocus detection module performs defocus detection on the face image; face images judged to be sharp after defocus detection are passed to the screen edge detection module for screen edge detection; the detection result is finally passed to the judgment output module, which judges whether the input is a replay liveness attack and outputs the result.

Description

Mobile phone playback living attack identification method based on screen edge
Technical Field
The invention belongs to the field of image processing, and particularly relates to a mobile phone playback living attack identification method based on a screen edge.
Background
Face recognition technology is now widely applied in daily life and production activities. Compared with biometric features such as fingerprints and irises, facial features are the easiest to acquire. Face recognition systems are gradually entering commercial use and are trending toward automated, unsupervised operation, so the demand for liveness attack recognition technology is also increasing.
In general, liveness detection determines whether the biometric information presented to the system was captured from a live, legitimate user at the time of acquisition. It mainly works by identifying physiological information on a living body, treating that information as a vital sign to distinguish a real person from biometric features forged with non-living materials such as photos, silica gel, or plastics.
Besides recognizing a person, a properly working face recognition system needs other supporting technologies; one important technology in face-based identity authentication is face liveness detection.
In other words, in addition to "identifying" who the person is, the system must "verify" that the presented face is a live face rather than a picture, a video, or a person wearing a mask.
Traditional procedures for living body identification: firstly, face detection, then living body recognition and finally face recognition.
Liveness attacks mainly fall into three types: screen replay attacks, printed paper attacks, and mask attacks.
Among these, screen replay attacks are the easiest to carry out, and replaying on a mobile phone screen is the attack type most readily chosen. Replay attacks are further divided into replayed pictures and replayed videos; because a replayed picture shows little distortion, existing detection algorithms are complex, inefficient, and generalize poorly.
The traditional replay attack identification methods are as follows:
1) Based on image texture quality: as mobile phone screen resolution keeps increasing and distortion keeps decreasing, such algorithms generalize poorly.
2) Based on inter-frame timing: video replay attacks are difficult to handle, and the algorithm complexity is high.
In the prior art, picture replay is countered by requiring actions such as blinking: when the screen switches between pictures of the face performing different actions, the capturing camera momentarily loses the face; when such a moment occurs, the system automatically restarts recognition, which effectively defeats picture replay attacks. However, this method cannot effectively defend against video replay attacks.
Disclosure of Invention
Addressing the problem of mobile phone replay liveness attacks, the invention provides a mobile phone playback living attack identification method based on the screen edge, which identifies the screen's edge frame through image processing and thereby recognizes replay liveness attacks.
The invention specifically comprises the following contents:
a mobile phone playback living body attack identification method based on screen edges is based on an identification system, wherein the identification system comprises an input module, an image defocus detection module, a screen edge detection module and a judgment output module;
The input module performs face detection on the picture captured by the camera (face detection uses the face detection function provided by the OpenCV library: under a Linux system the OpenCV face detection function is called, an image is input, face detection is run, and the image coordinates of the rectangle circumscribing the face region on the image, referred to as the face frame, are obtained). After a face is detected, the face image is framed, the face image within the selection frame is extracted, and the face image is transmitted to the image defocus detection module;
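For illustration only, the following is a minimal sketch of this input stage in Python with OpenCV, assuming the bundled Haar cascade frontal-face detector; the patent states only that "the OpenCV face detection function" is called under Linux, so the particular detector, its parameters, the helper name detect_and_crop_face, and the 2.5-times frame expansion (taken from the screen edge detection description further below) are illustrative assumptions rather than the patented implementation.

```python
# Sketch of the input module: detect a face, take the circumscribing
# rectangle as the "face frame", expand it, and crop a 256x256 region.
# All parameters are illustrative; cv2 is OpenCV's Python binding.
import cv2

def detect_and_crop_face(frame, expand=2.5):
    """Return the face frame (x, y, w, h) and an expanded 256x256 crop."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None, None                      # no face: restart capture
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # largest face
    # enlarge the face frame proportionally so that, under a replay attack,
    # the surrounding phone-screen edge falls inside the crop
    cx, cy = x + w / 2.0, y + h / 2.0
    x0 = max(int(cx - w * expand / 2), 0)
    y0 = max(int(cy - h * expand / 2), 0)
    x1 = min(int(cx + w * expand / 2), frame.shape[1])
    y1 = min(int(cy + h * expand / 2), frame.shape[0])
    crop = cv2.resize(frame[y0:y1, x0:x1], (256, 256))
    return (x, y, w, h), crop
```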
the image out-of-focus detection module is used for carrying out image out-of-focus detection on the face image input by the image transmission module, and transmitting the face image which is judged not to be out of focus after the image out-of-focus detection to the screen edge detection module;
the screen edge detection module is used for carrying out screen edge detection on the face image transmitted by the image defocus detection module and transmitting a detection result to the judgment output module;
and the judgment output module receives the detection result transmitted by the screen edge detection module and judges, according to the detection result, whether a mobile phone replay liveness attack is present.
In order to better implement the present invention, further, the image out-of-focus detection specifically includes the following steps:
step S1: carrying out Gaussian blur denoising on the face image;
step S2: graying the face image subjected to Gaussian blur denoising;
step S3: filtering the grayed face image with the Laplacian operator;
step S4: calculating the mean value and the variance of the filtered face image;
step S5: presetting a variance threshold M for judging defocus, comparing the variance calculated in step S4 against the threshold M, and judging whether the face image input by the image transmission module is free of defocus;
step S6: and transmitting the face image judged not to be out of focus to a screen edge detection module.
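By way of example, a minimal sketch of steps S1 to S5 above is shown below; the Gaussian kernel size and the variance threshold M are placeholder values chosen for the sketch, since the patent does not disclose them.

```python
# Sketch of image defocus detection: Gaussian blur, grayscale conversion,
# Laplacian filtering, then mean/variance; a low variance indicates blur.
import cv2

def is_in_focus(face_img, variance_threshold_M=100.0):
    """Return True if the face image is judged sharp (not out of focus)."""
    denoised = cv2.GaussianBlur(face_img, (3, 3), 0)        # step S1
    gray = cv2.cvtColor(denoised, cv2.COLOR_BGR2GRAY)       # step S2
    lap = cv2.Laplacian(gray, cv2.CV_64F)                   # step S3
    _mean, stddev = cv2.meanStdDev(lap)                     # step S4
    variance = float(stddev[0][0]) ** 2
    return variance > variance_threshold_M                  # step S5
```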
In order to better implement the present invention, further, the screen edge detection specifically includes the following steps:
The face frame is enlarged proportionally to 2.5 times its original size, the enlarged face-frame region is cropped out of the image, the crop is scaled to a resolution of 256 × 256, and screen edge detection is then performed.
Step S7: extracting canny edges of the face image without defocusing;
step S8: performing Hough line extraction on the face image after Canny edge extraction to obtain a plurality of line segments;
step S9: performing line segment filtering on the face image after Hough line extraction; a line segment is retained only if:
1. the segment does not intersect the original face frame region;
2. the extension of the segment does not intersect the original face frame region;
3. the segment length is at least 0.2 times the length of the face-frame diagonal (shorter segments are discarded).
Step S10: counting the number of line segments of the face image subjected to line segment filtering; the statistical method comprises the following steps:
1. compute the slope of every remaining segment and cluster the slopes with the k-means algorithm;
2. within each cluster, compute the perpendicular distance from the origin to the line containing each segment and cluster again with k-means;
3. compute the sum of the segment lengths in each cluster, find the clusters whose total length exceeds 0.8 times the original face-frame diagonal length, and compute the combined segment length of those clusters.
Step S11: and transmitting the statistical result of the line segment number statistics of the step S10 to a judgment output module.
If this combined segment length exceeds 3.0 times the length of the face-frame diagonal, the edge of a mobile phone screen is judged to have been detected, i.e. an attack; otherwise the input is judged to be a live face.
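A rough sketch of steps S7 to S11 and the decision rule above follows; the Canny thresholds, Hough parameters, number of k-means clusters, and the simplified endpoint-based segment filter are assumptions made for the example, and the second clustering pass over perpendicular distances from the origin is omitted for brevity.

```python
# Sketch of screen edge detection on the 256x256 expanded face crop.
# face_box is assumed to be the original face frame expressed in the
# crop's coordinate system; scikit-learn provides the k-means step.
import numpy as np
import cv2
from sklearn.cluster import KMeans

def detect_screen_edge(crop_256, face_box, k_slope=4):
    """Return True if screen-edge-like segments indicate a replay attack."""
    fx, fy, fw, fh = face_box
    diag = float(np.hypot(fw, fh))

    gray = cv2.cvtColor(crop_256, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                                  # step S7
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=0.2 * diag, maxLineGap=5)   # step S8
    if lines is None:
        return False

    def inside_face(x, y):
        return fx <= x <= fx + fw and fy <= y <= fy + fh

    # step S9 (simplified): drop segments with an endpoint inside the face frame
    segs = [s for s in lines[:, 0]
            if not (inside_face(s[0], s[1]) or inside_face(s[2], s[3]))]
    if len(segs) < k_slope:
        return False

    segs = np.asarray(segs, dtype=float)
    angles = np.arctan2(segs[:, 3] - segs[:, 1], segs[:, 2] - segs[:, 0])
    lengths = np.hypot(segs[:, 2] - segs[:, 0], segs[:, 3] - segs[:, 1])

    # step S10: cluster by slope angle, then keep clusters whose total
    # segment length exceeds 0.8 times the face-frame diagonal
    labels = KMeans(n_clusters=k_slope, n_init=10).fit_predict(
        angles.reshape(-1, 1))
    total = sum(lengths[labels == c].sum()
                for c in range(k_slope)
                if lengths[labels == c].sum() > 0.8 * diag)

    # decision rule: edge-like length above 3.0x the diagonal means attack
    return total > 3.0 * diag
```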
To better implement the present invention, further, the line segment filtering is: filtering out segments that do not lie on the same straight line, and filtering out segments that fall inside the face region.
In order to better implement the present invention, the judgment output module further judges whether a screen edge exists according to a result of the line segment number statistics, and outputs a judgment result.
The invention also provides a mobile phone playback living attack recognition device based on the screen edge, which is characterized by comprising:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the above-mentioned mobile phone replay liveness attack identification method of the present invention.
The invention also provides a computer-readable storage medium, wherein computer-executable instructions are stored in the computer-readable storage medium, and when the computer-executable instructions are executed by a processor, the computer-executable instructions are used for realizing the mobile phone replay living attack identification method.
Compared with the prior art, the invention has the following advantages and beneficial effects:
the video and the photo can be played back simultaneously, the generalization capability is strong, and the operation efficiency is high.
Drawings
FIG. 1 is a general flow chart of an implementation of the present invention;
FIG. 2 is a flow chart of image defocus detection;
fig. 3 is a flow chart of screen edge detection.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it should be understood that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments, and therefore should not be considered as a limitation to the scope of protection. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
Example 1:
a mobile phone playback living attack identification method based on screen edges is based on an identification system, as shown in figure 1, wherein the identification system comprises an input module, an image defocus detection module, a screen edge detection module and a judgment output module;
the input module carries out face recognition on the pictures shot by the camera, frames the face images after the faces are recognized, extracts the face images within the frame selection range, and transmits the face images to the image defocus detection module;
the image out-of-focus detection module is used for carrying out image out-of-focus detection on the face image input by the image transmission module, and transmitting the face image which is judged not to be out of focus after the image out-of-focus detection to the screen edge detection module;
the screen edge detection module is used for carrying out screen edge detection on the face image transmitted by the image defocus detection module and transmitting a detection result to the judgment output module;
and the judgment output module receives the detection result transmitted by the screen edge detection module and judges, according to the detection result, whether a mobile phone replay liveness attack is present.
The working principle is as follows: because a mobile phone screen is small, during a replay liveness attack the screen edge remains visible under a wide-angle lens even when the phone is held close to the camera, while under a telephoto lens bringing the phone close causes defocus and image blur; and since the edge of the phone screen is conspicuous during replay, screen edge detection identifies the attack well. The image captured by the camera is passed to the system and a portrait selection frame is set. The system first locates the portrait region; the selection frame must have a certain margin, being enlarged somewhat beyond the box that fully contains the portrait, so that under a replay attack the frame also encloses the screen's edge. Because the screen is not very large, and because replayed photos and videos are limited in sharpness and similar factors, the camera is certain to capture the edge of the phone screen whenever the captured image is sharp. The width of the portrait selection frame needs to be larger than 0.7 times the width of the display screen. The face image inside the selection frame is extracted and image defocus detection is performed: if the face image is detected to be an unclear, out-of-focus image, it may have been captured from a real person or from a replay attack, and in either case the image must be recaptured and recognition restarted. Once the face image is judged to be clear, a replay attack still cannot be excluded, so screen edge detection is performed next: the line segments present in the image are detected, counted, and the statistics are sent to the judgment output module. The judgment output module decides from the segment statistics whether a screen edge frame exists; if segments resembling an edge frame are present, a replay liveness attack is judged to exist.
The method can simultaneously process the picture and the video for playback, and has strong generalization capability and high operation efficiency.
Example 2:
on the basis of the above embodiment 1, in order to better implement the present invention, as shown in fig. 2, the image out-of-focus detection specifically includes the following steps:
step S1: carrying out Gaussian blur denoising on the face image;
step S2: graying the human face image subjected to Gaussian denoising;
step S3: filtering the face image subjected to graying by the Laplace algorithm;
step S4: calculating the mean value and the variance of the filtered face image;
step S5: presetting a variance threshold value M for judging defocusing, comparing the variance calculated in the step S4 by taking the variance threshold value M as a standard, and judging whether the face image input by the image transmission module is a face image which is not defocused;
step S6: and transmitting the face image judged not to be out of focus to a screen edge detection module.
The working principle is as follows: image defocus detection is performed on the acquired face image. Specifically, Gaussian denoising is applied, the image is converted to grayscale, Laplacian filtering is applied to the grayscale image, histogram normalization mapping is performed, and then the mean and variance are computed. A variance threshold is preset; if the variance exceeds the threshold the picture is judged to be clear, otherwise it is judged to be an out-of-focus, blurred picture, and for pictures judged out of focus the system restarts recognition. For a clear picture, the image defocus detection module sends the face image to the screen edge detection module for further processing.
Other parts of this embodiment are the same as embodiment 1, and thus are not described again.
Example 3:
on the basis of the above embodiments 1-2, in order to better implement the present invention, as shown in fig. 3, the screen edge detection specifically includes the following steps:
step S7: extracting canny edges of the face image without defocusing;
step S8: carrying out Hough straight line extraction on the face image subjected to canny edge extraction;
step S9: performing line segment filtering on the face image subjected to the Hough straight line extraction;
step S10: counting the number of line segments of the face image subjected to line segment filtering;
step S11: and transmitting the statistical result of the line segment number statistics of the step S10 to a judgment output module.
The working principle is as follows: the face image sent by the image defocus detection module undergoes Canny edge extraction, Hough line extraction, line segment filtering and similar operations. Line segment filtering removes segments that cannot be screen edges, so they do not interfere with the screen edge detection; finally the segments are counted and the statistics are sent to the judgment output module.
The other parts of this embodiment are the same as those of the above embodiments 1-2, and thus are not described again.
Example 4:
on the basis of the above embodiments 1 to 3, in order to better implement the present invention, as shown in fig. 3, the line segment filtering is further as follows: and filtering line segments which are not on the same straight line and filtering line segments in the face area.
The working principle is as follows: segments belonging to the screen edge lie along a common straight line, and segments inside the face region cannot be screen edges; filtering out segments that do not lie on a common line and segments inside the face region removes useless segments that would interfere with the subsequent judgment.
The other parts of this embodiment are the same as those of embodiments 1 to 3, and thus are not described again.
Example 5:
on the basis of the above embodiments 1 to 4, in order to better implement the present invention, the determination output module further determines whether a screen edge exists according to a result of the line segment number statistics, and outputs the determination result.
The other parts of this embodiment are the same as those of embodiments 1 to 4, and thus are not described again.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention in any way, and all simple modifications and equivalent variations of the above embodiments according to the technical spirit of the present invention are included in the scope of the present invention.

Claims (7)

1. A mobile phone playback living attack identification method based on screen edges is characterized in that the method is based on an identification system, and the identification system comprises an input module, an image defocus detection module, a screen edge detection module and a judgment output module;
the input module carries out face recognition on the pictures shot by the camera, frames the face images after the faces are recognized, extracts the face images within the frame selection range, and transmits the face images to the image defocus detection module;
the image out-of-focus detection module is used for carrying out image out-of-focus detection on the face image input by the image transmission module, and transmitting the face image which is judged not to be out of focus after the image out-of-focus detection to the screen edge detection module;
the screen edge detection module is used for carrying out screen edge detection on the face image transmitted by the image defocus detection module and transmitting a detection result to the judgment output module;
and the judgment output module receives the detection result transmitted by the screen edge detection module and judges, according to the detection result, whether a replay living attack is present.
2. The method for identifying the mobile phone playback live attack based on the screen edge as claimed in claim 1, wherein the image out-of-focus detection specifically comprises the following steps:
step S1: carrying out Gaussian blur denoising on the face image;
step S2: graying the face image;
step S3: filtering the face image subjected to graying by the Laplace algorithm;
step S4: calculating the mean value and the variance of the filtered face image;
step S5: presetting a variance threshold value M for judging defocusing, comparing the variance calculated in the step S4 by taking the variance threshold value M as a standard, and judging whether the face image input by the image transmission module is a face image which is not defocused;
step S6: and transmitting the face image judged not to be out of focus to a screen edge detection module.
3. The method for identifying the mobile phone playback live attack based on the screen edge as claimed in claim 2, wherein the screen edge detection specifically comprises the following steps:
step S7: extracting canny edges of the face image without defocusing;
step S8: carrying out Hough straight line extraction on the face image subjected to canny edge extraction;
step S9: performing line segment filtering on the face image subjected to the Hough straight line extraction;
step S10: counting the number of line segments of the face image subjected to line segment filtering;
step S11: and transmitting the statistical result of the line segment number statistics of the step S10 to a judgment output module.
4. The method for identifying the mobile phone playback live attack based on the screen edge as claimed in claim 3, wherein the line segment filtering specifically refers to: and filtering line segments which are not on the same straight line and filtering line segments in the face area.
5. The method for identifying the mobile phone playback living attack based on the screen edge as claimed in claim 4, wherein the judgment output module judges whether the screen edge exists according to the result of the line segment number statistics and outputs the judgment result.
6. A mobile phone playback living attack recognition device based on screen edges, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 5.
7. A computer-readable storage medium, wherein the computer-readable storage medium stores computer-executable instructions, which when executed by a processor, are used for implementing the method for identifying a cell phone replay live attack according to any one of claims 1 to 5.
CN201911242721.2A 2019-12-06 2019-12-06 Mobile phone playback living attack identification method based on screen edge Pending CN110991356A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911242721.2A CN110991356A (en) 2019-12-06 2019-12-06 Mobile phone playback living attack identification method based on screen edge


Publications (1)

Publication Number Publication Date
CN110991356A true CN110991356A (en) 2020-04-10

Family

ID=70090788

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911242721.2A Pending CN110991356A (en) 2019-12-06 2019-12-06 Mobile phone playback living attack identification method based on screen edge

Country Status (1)

Country Link
CN (1) CN110991356A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021220444A1 (en) * 2020-04-28 2021-11-04 Sony Group Corporation (ソニーグループ株式会社) Skin evaluation coefficient learning device, skin evaluation index estimation device, skin evaluation coefficient learning method, skin evaluation index estimation method, focus value acquisition method, and skin smoothness acquisition method

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101702198A (en) * 2009-11-19 2010-05-05 浙江大学 Identification method for video and living body faces based on background comparison
CN102236784A (en) * 2010-05-07 2011-11-09 株式会社理光 Screen area detection method and system
CN107491775A (en) * 2017-10-13 2017-12-19 理光图像技术(上海)有限公司 Human face in-vivo detection method, device, storage medium and equipment
CN107609494A (en) * 2017-08-31 2018-01-19 北京飞搜科技有限公司 A kind of human face in-vivo detection method and system based on silent formula
CN107609463A (en) * 2017-07-20 2018-01-19 百度在线网络技术(北京)有限公司 Biopsy method, device, equipment and storage medium
CN108268864A (en) * 2018-02-24 2018-07-10 达闼科技(北京)有限公司 Face identification method, system, electronic equipment and computer program product
CN109543593A (en) * 2018-11-19 2019-03-29 华勤通讯技术有限公司 Detection method, electronic equipment and the computer readable storage medium of replay attack
CN110415191A (en) * 2019-07-31 2019-11-05 西安第六镜网络科技有限公司 A kind of image deblurring algorithm based on successive video frames


Similar Documents

Publication Publication Date Title
CN105893920B (en) Face living body detection method and device
EP2548154B1 (en) Object detection and recognition under out of focus conditions
WO2020094091A1 (en) Image capturing method, monitoring camera, and monitoring system
CN113766125B (en) Focusing method and device, electronic equipment and computer readable storage medium
WO2021135064A1 (en) Facial recognition method and apparatus, and computer device and storage medium
CN107368806B (en) Image rectification method, image rectification device, computer-readable storage medium and computer equipment
CN108280386B (en) Monitoring scene detection method and device
CN109190522B (en) Living body detection method based on infrared camera
US11315360B2 (en) Live facial recognition system and method
CN111967319B (en) Living body detection method, device, equipment and storage medium based on infrared and visible light
CN114782984B (en) Sitting posture identification and shielding judgment method based on TOF camera and intelligent desk lamp
CN112016469A (en) Image processing method and device, terminal and readable storage medium
CN104966266A (en) Method and system to automatically blur body part
CN113302907B (en) Shooting method, shooting device, shooting equipment and computer readable storage medium
CN111079688A (en) Living body detection method based on infrared image in face recognition
WO2023217046A1 (en) Image processing method and apparatus, nonvolatile readable storage medium and electronic device
CN110365897B (en) Image correction method and device, electronic equipment and computer readable storage medium
CN112347849B (en) Video conference processing method, electronic equipment and storage medium
CN110991356A (en) Mobile phone playback living attack identification method based on screen edge
CN108259769B (en) Image processing method, image processing device, storage medium and electronic equipment
CN107578372B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN109598195B (en) Method and device for processing clear face image based on monitoring video
CN111931544B (en) Living body detection method, living body detection device, computing equipment and computer storage medium
CN113822927B (en) Face detection method, device, medium and equipment suitable for weak quality image
CN107770446B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200410