CN111428570A - Detection method and device for non-living human face, computer equipment and storage medium - Google Patents

Detection method and device for non-living human face, computer equipment and storage medium

Info

Publication number
CN111428570A
Authority
CN
China
Prior art keywords
detected
face
category
picture
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010122186.3A
Other languages
Chinese (zh)
Inventor
徐国诚 (Xu Guocheng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
OneConnect Smart Technology Co Ltd
OneConnect Financial Technology Co Ltd Shanghai
Original Assignee
OneConnect Financial Technology Co Ltd Shanghai
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by OneConnect Financial Technology Co Ltd Shanghai filed Critical OneConnect Financial Technology Co Ltd Shanghai
Priority to CN202010122186.3A priority Critical patent/CN111428570A/en
Publication of CN111428570A publication Critical patent/CN111428570A/en
Priority to PCT/CN2021/070470 priority patent/WO2021169616A1/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Abstract

The application provides a method and a device for detecting a non-living human face, a computer device and a storage medium, belonging to the technical field of human face detection and used for improving the accuracy of living body face recognition. The detection method of the non-living human face comprises the following steps: acquiring a video image, and extracting a plurality of pictures to be detected from the video image; respectively inputting the pictures to be detected into a pre-trained target picture detection model to obtain the target images included in each picture to be detected and the category to which each target image belongs; sequentially judging whether the target images of each picture to be detected include a target image whose category is a face area, and if so, further judging whether the target images of the corresponding picture to be detected include an environmental element whose category is a preset abnormal category; and if the relative position relation between at least one face area of the pictures to be detected and the abnormal environmental element is within a preset range, judging that the face in the picture to be detected is a non-living face.

Description

Detection method and device for non-living human face, computer equipment and storage medium
Technical Field
The present application relates to the field of face detection technologies, and in particular, to a method and an apparatus for detecting a non-living face, a computer device, and a storage medium.
Background
To protect the privacy or property of the user, in some scenarios the user needs to be identified through a camera of the terminal device, and only after the user has been identified is access to certain functions of the application program allowed.
The existing living body detection scheme based on dynamic key points of the human face generally requires the user to blink, open the mouth or raise the head during living body identification. However, the existing living body identification technology runs the risk of being defeated by counterfeit animation: if the counterfeit face animation is vivid enough, it can pass the existing living body target detection.
Due to the rapid development of animation technology, the existing living body identification technology has the risk of being broken by counterfeit animation, and needs to be improved urgently.
Disclosure of Invention
The embodiment of the invention provides a method and a device for detecting a non-living human face, computer equipment and a storage medium, which can improve the accuracy of living human face identification.
According to an aspect of the present application, a method for detecting a non-living human face is provided, the method including:
acquiring a video image, and extracting a plurality of pictures to be detected from the video image;
respectively inputting the pictures to be detected into a pre-trained target picture detection model to obtain a target image included in each picture to be detected and a category to which the target image belongs;
sequentially judging whether the target image of each picture to be detected comprises a target image of which the category is a face area, if so, further judging whether the target image of the corresponding picture to be detected comprises an environmental element of which the category is a preset abnormal category;
and if the relative position relation between at least one face area of the image to be detected and the abnormal environment element is within a preset range, judging that the face in the image to be detected is a non-living face.
According to another aspect of the present application, there is provided an apparatus for detecting a non-living human face, the apparatus comprising:
the video acquisition module is used for acquiring video images and extracting a plurality of pictures to be detected from the video images;
the input module is used for respectively inputting the pictures to be detected into a pre-trained target picture detection model to obtain a target image included in each picture to be detected and a category to which the target image belongs;
the first judging module is used for sequentially judging whether the target image of each picture to be detected comprises a target image of which the category is a human face area, and if so, further judging whether the target image of the corresponding picture to be detected comprises an environmental element of which the category is a preset abnormal category;
and the second judging module is used for judging that the face in the picture to be detected is a non-living body face if the relative position relation between at least one face area of the picture to be detected and the abnormal environment element is within a preset range.
According to yet another aspect of the present application, a computer device is provided, which includes a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the above-mentioned method for detecting a non-living human face when executing the program.
According to yet another aspect of the present application, there is provided a computer-readable storage medium having stored thereon a computer program, which when executed by a processor, implements the steps in the above-described method for detecting a non-living human face.
According to the detection method, the device, the computer equipment and the storage medium for the non-living body face, an abnormal environment element detection technology is added, and the target picture detection model learns abnormal environment elements in advance, so that when the face to be detected is the non-living body face, whether the abnormal environment elements exist in the picture or video to be detected can be firstly identified through the target picture detection model, and then whether the face in the picture to be detected is the non-living body face is judged through judging the position relation between the environment elements and the face area, and the accuracy of the living body face identification is improved.
Drawings
FIG. 1 is a schematic diagram of an application environment of a method for detecting a non-living human face according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method of detecting a non-living human face according to one embodiment of the invention;
FIG. 3 is a flowchart illustrating a method for determining whether an environmental element of a preset abnormal category is included in the target image;
FIG. 4 is a flow chart of training the target picture detection model;
FIG. 5 is a flow chart of a method for detecting a non-living human face according to another embodiment of the present invention;
FIG. 6 is a block diagram illustrating an exemplary configuration of an apparatus for detecting a non-living human face according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of an internal structure of a computer device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic diagram of an application environment of the non-living human face detection method according to an embodiment of the present invention. As shown in Fig. 1, the detection method provided in the present application can be applied to this application environment. The detection device of the non-living human face includes, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers and the like, and the computer device is provided with a camera for acquiring video images.
Fig. 2 is a flowchart of a method for detecting a non-living human face according to an embodiment of the present invention, and the method for detecting a non-living human face according to an embodiment of the present invention is described in detail below with reference to fig. 2, and as shown in fig. 2, the method includes the following steps S101 to S104.
S101, obtaining a video image, and extracting a plurality of pictures to be detected from the video image.
In one embodiment, the video image is a video image acquired by a camera of the terminal device.
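For illustration only (this sketch is not part of the patent text), the frame-extraction step of S101 could be implemented along the following lines with OpenCV; the sampling interval, frame count and video path are assumptions.

```python
import cv2

def extract_frames(video_path: str, step: int = 10, max_frames: int = 8):
    """Sample every `step`-th frame of the video as a picture to be detected."""
    pictures = []
    cap = cv2.VideoCapture(video_path)
    index = 0
    while cap.isOpened() and len(pictures) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            pictures.append(frame)  # BGR ndarray, one picture to be detected
        index += 1
    cap.release()
    return pictures

pictures_to_detect = extract_frames("liveness_check.mp4")  # hypothetical file path
```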
In one embodiment, before the step of S101, the method for detecting a non-living human face further includes the following steps:
outputting a prompt message for a user to do a preset facial action towards the camera;
and acquiring a video image comprising the preset facial action.
In this embodiment, the preset facial actions include, but are not limited to, blinking, raising head, opening mouth, and the like.
S102, the pictures to be detected are respectively input into a pre-trained target picture detection model, and a target image included in each picture to be detected and a category to which the target image belongs are obtained.
The target picture detection model may be an SSD (Single Shot MultiBox Detector) model. The picture to be detected may include not only a picture of a living human face but also a picture of a non-living human face, and may also include common environmental elements, including but not limited to a display, a television, a projector screen, a computer, and the like.
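The patent does not prescribe an output format for the target picture detection model; the sketch below merely assumes that each picture to be detected yields a list of labelled bounding boxes. The category names, coordinates and confidence values are illustrative only.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    """One target image returned by the target picture detection model."""
    category: str                    # e.g. "face", "display", "projection_screen"
    box: Tuple[int, int, int, int]   # (x1, y1, x2, y2) in pixel coordinates
    score: float                     # detection confidence

# Illustrative output for a single picture to be detected:
detections: List[Detection] = [
    Detection("face", (420, 180, 640, 460), 0.97),
    Detection("display", (300, 80, 760, 560), 0.88),
]
```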
In one embodiment, the facial feature image corresponding to blinking is an image of the human eye region, the facial feature image corresponding to opening the mouth is an image of the human mouth region, and the facial feature image corresponding to raising the head is an image of the chin region of the human head.
S103, sequentially judging whether the target image of each picture to be detected comprises a target image of which the category is a human face area, if so, further judging whether the target image of the corresponding picture to be detected comprises an environmental element of which the category is a preset abnormal category.
In this embodiment, which environmental elements are treated as abnormal environmental elements may be set manually. The abnormal environmental elements include, but are not limited to, the bezel of a display, the bezel of a tablet/computer, a television, a projection screen, etc.
In one embodiment, the abnormal environmental element is an environmental element that can be detected by the target picture detection model, and may specifically be a frame of a display or a television, a boundary of a projection curtain, a boundary of a projection area projected on a wall, or the like.
Fig. 3 is a flowchart of determining whether the target image includes an environmental element belonging to a preset abnormal category, and as shown in fig. 3, the step of determining whether the target image corresponding to the picture to be detected includes an environmental element belonging to a preset abnormal category includes the following steps S301 and S302.
S301, acquiring the category of each target image obtained from each picture to be detected and the category of a preset abnormal environment element;
S302, judging whether the category of each target image comprises at least one category of the abnormal environmental elements, and if so, judging that the target image comprises the abnormal environmental elements.
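A minimal sketch of steps S103 and S301/S302, reusing the Detection records from the earlier sketch; the set of abnormal category names is an assumption and would in practice be whatever was configured manually.

```python
# Categories treated as preset abnormal environmental elements (set manually).
ABNORMAL_CATEGORIES = {"display", "tablet_computer", "television", "projection_screen"}

def has_face(detections) -> bool:
    """S103: is there at least one target image whose category is a face area?"""
    return any(d.category == "face" for d in detections)

def find_abnormal_elements(detections):
    """S301/S302: return the target images whose category is a preset abnormal category."""
    return [d for d in detections if d.category in ABNORMAL_CATEGORIES]
```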
And S104, if the relative position relation between at least one face area of the image to be detected and the abnormal environment element is within a preset range, judging that the face in the image to be detected is a non-living face.
In one embodiment, the relative position relationship being within a preset range means, for example, that the face region falls within the region of the abnormal environmental element, that is, the region of the abnormal environmental element completely or partially contains the face region.
In one embodiment, in step S104, if the relative position relationship between the face region of at least one of the to-be-detected images and the abnormal environmental element is within a preset range, the step of determining that the face in the to-be-detected image is a non-living face includes:
and judging whether the display area of the abnormal environment element and the face area have a superposed part, if so, judging that the face in the picture to be detected is a non-living face.
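The "superposed part" test of S104 can be sketched as a simple box-intersection check; boxes are assumed to be (x1, y1, x2, y2) pixel rectangles as in the earlier sketches.

```python
def boxes_overlap(face_box, element_box) -> bool:
    """True if the face region and the display area of the abnormal element share any part."""
    fx1, fy1, fx2, fy2 = face_box
    ex1, ey1, ex2, ey2 = element_box
    return fx1 < ex2 and ex1 < fx2 and fy1 < ey2 and ey1 < fy2

def is_non_living_face(face_box, abnormal_element_boxes) -> bool:
    """S104: the face is judged non-living if its region coincides, completely or
    partially, with the region of at least one abnormal environmental element."""
    return any(boxes_overlap(face_box, box) for box in abnormal_element_boxes)
```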
In one embodiment, the target image includes facial feature images such as eyes and mouth; after step S102 and before the step of determining that the face in the picture to be detected is a non-living face, the method further includes:
identifying the same facial feature image corresponding to the facial action in different pictures to be detected;
and judging whether the facial feature image is changed correspondingly in different pictures to be detected, and if not, jumping to the step S103.
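One possible way to check whether the prompted facial action actually occurred is to compare the same facial feature crop (for example the eye region) across the pictures to be detected; the pixel-difference criterion and threshold below are assumptions, not the patent's method.

```python
import numpy as np

def feature_changed(feature_crops, threshold: float = 8.0) -> bool:
    """Return True if the facial feature region changes noticeably between pictures.
    `feature_crops` are assumed to be crops of the same feature, resized to one shape."""
    crops = [np.asarray(c, dtype=np.float32) for c in feature_crops]
    diffs = [float(np.mean(np.abs(a - b))) for a, b in zip(crops, crops[1:])]
    return max(diffs, default=0.0) > threshold

# If feature_changed(...) is False, the prompted action did not happen and the
# flow proceeds to the abnormal-element check of step S103.
```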
Fig. 4 is a flowchart of training the target image detection model, and according to an embodiment of the present application, as shown in fig. 4, the step of training the target image detection model includes the following steps S401 to S404.
S401, receiving a plurality of sample pictures.
In one embodiment, the sample pictures include, but are not limited to, pictures actually taken by a photographer, pictures downloaded from the internet, images extracted from frames of video pictures.
In this embodiment, the sample picture may be a picture including a face region, a picture including the frame of a display, a picture including a projection screen, or any other sample picture containing the target images that the target picture detection model needs to learn.
S402, labeling the image area in the sample picture and the category to which the image area belongs according to the received instruction.
In this embodiment, the image areas labeled in the sample picture include a human face area, areas of normal environmental elements, and areas of abnormal environmental elements. The areas of abnormal environmental elements include, but are not limited to, a display, a tablet computer/computer, a television, a projection curtain, and the like.
Further, the category to which the face region belongs is a face, and the categories of the abnormal environmental elements include those of a display, a tablet computer/computer, a television, and a projection screen.
S403, inputting the marked image area and the category to which the image area belongs into the target picture detection model.
S404, learning the image region and the category to which the image region belongs through the target image detection model to obtain the trained target image detection model.
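As an illustration of steps S401 to S403, a labelled sample picture could be represented as below before being fed to the detector for training; the field names, file name and category strings are assumptions.

```python
# One labelled sample picture (S402): image regions and the categories they belong to.
labelled_sample = {
    "file": "sample_0001.jpg",                                     # hypothetical file
    "regions": [
        {"box": (410, 170, 650, 470), "category": "face"},
        {"box": (290, 70, 780, 580), "category": "display"},       # abnormal element
        {"box": (0, 0, 1024, 120), "category": "ceiling_lamp"},    # normal element
    ],
}
# A collection of such records is what steps S403/S404 would pass to the
# SSD-style target picture detection model for training.
```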
Fig. 5 is a flowchart of a method for detecting a non-living human face according to another embodiment of the present invention, and the method for detecting a non-living human face according to another embodiment of the present invention is described in detail below with reference to fig. 5, and as shown in fig. 5, before the step of acquiring a video image in step S101, the method further includes:
S501, outputting a prompt message for enabling a user to move from far to near or from near to far or to approach a camera, wherein the camera is used for acquiring the video image;
and S502, outputting a prompt message for the user to do a preset facial action towards the camera.
Step S101 is further implemented as step S503:
S503, acquiring a video image which includes a face area of the user that changes from big to small or from small to big and which includes the preset facial action, and extracting a plurality of pictures to be detected from the video image.
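The far-to-near / near-to-far prompt of S501 and S503 implies that the detected face region should grow or shrink across the extracted pictures; a simple check of this is sketched below. The strictly monotonic criterion is an assumption, since the patent only requires the face area to change from big to small or from small to big.

```python
def box_area(box) -> int:
    x1, y1, x2, y2 = box
    return max(0, x2 - x1) * max(0, y2 - y1)

def face_size_trend_ok(face_boxes) -> bool:
    """True if the face region grows or shrinks monotonically across the pictures."""
    areas = [box_area(b) for b in face_boxes]
    growing = all(a < b for a, b in zip(areas, areas[1:]))
    shrinking = all(a > b for a, b in zip(areas, areas[1:]))
    return growing or shrinking
```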
In the detection method for the non-living body face, the target image detection model learns the face and the abnormal environmental element, so that when the face to be detected is identified as the non-living body face, firstly, whether the face is included in the image or video to be detected is identified through the target image detection model, then, whether the abnormal environmental element is included in the image or video to be detected is identified, and then, whether the face in the image to be detected is the non-living body face is judged through judging the position relationship between the environmental element and the face region, so that the accuracy of identifying the living body face is improved.
According to an example of this embodiment, the reference numerals S101 to S503 are not intended to limit the order of the steps in this embodiment; each step is numbered only so that it can be referred to conveniently in the description. For example, step S501 may be performed before or after step S502, as long as the order of execution of the steps does not affect the logical relationship in this embodiment.
Fig. 6 is a block diagram illustrating an exemplary structure of a non-living human face detection apparatus according to an embodiment of the present invention, and the non-living human face detection apparatus according to an embodiment of the present invention is described in detail below with reference to fig. 6, as shown in fig. 6, the non-living human face detection apparatus 100 includes a video acquisition module 11, an input module 12, a first judgment module 13, and a second judgment module 14.
The video acquisition module 11 is configured to acquire a video image and extract a plurality of pictures to be detected from the video image.
In one embodiment, the video image is a video image acquired by a camera of the terminal device.
The input module 12 is configured to input the pictures to be detected into a pre-trained target picture detection model, so as to obtain a target image included in each picture to be detected and a category to which the target image belongs.
The target picture detection model may be an SSD (Single Shot MultiBox Detector) model. The picture to be detected may include not only a picture of a living human face but also a picture of a non-living human face, and may also include common environmental elements, including but not limited to a display, a television, a projector screen, a computer, and the like.
In one embodiment, the facial feature image corresponding to blinking is an image of the human eye region, the facial feature image corresponding to opening the mouth is an image of the human mouth region, and the facial feature image corresponding to raising the head is an image of the chin region of the human head.
The first determining module 13 is configured to sequentially determine whether the target image of each picture to be detected includes a target image whose category is a face region, and if so, further determine whether the target image of the corresponding picture to be detected includes an environmental element whose category is a preset abnormal category.
In this embodiment, which environmental elements are treated as abnormal environmental elements may be set manually. The abnormal environmental elements include, but are not limited to, the bezel of a display, the bezel of a tablet/computer, a television, a projection screen, etc.
In one embodiment, the abnormal environmental element is an environmental element that can be detected by the target picture detection model, and may specifically be a frame of a display or a television, a boundary of a projection curtain, a boundary of a projection area projected on a wall, or the like.
And the second judging module 14 is configured to judge that the face in the picture to be detected is a non-living face if the relative position relation between at least one face region of the picture to be detected and the abnormal environmental element is within a preset range.
In one embodiment, the relative position relationship being within a preset range means, for example, that the face region falls within the region of the abnormal environmental element, that is, the region of the abnormal environmental element completely or partially contains the face region.
In one embodiment, the first determining module 13 further includes:
the category acquisition unit is used for acquiring the category of each target image obtained from each picture to be detected and the category of a preset abnormal environment element;
the first judging unit is used for judging whether the category of each target image comprises at least one category of the abnormal environmental elements, and if so, judging that the target image comprises the abnormal environmental elements.
In one embodiment, the second determining module is specifically configured to:
and judging whether the display area of the abnormal environment element and the face area have a superposed part, if so, judging that the face in the picture to be detected is a non-living face.
In one embodiment, the target image includes an image of a facial feature such as an eye or mouth.
In one embodiment, the apparatus 100 for detecting a non-living human face further includes:
and the picture receiving module is used for receiving a plurality of sample pictures. Optionally, the sample picture includes, but is not limited to, a picture actually taken by a photographer, a picture downloaded from a network, and an image extracted from a video picture frame. In this embodiment, the sample picture may be a picture including a face region, a picture including a display frame, a sample picture including various target picture detection model learning such as a projection curtain, and the like;
and the marking module is used for marking the image area in the sample picture and the category to which the image area belongs according to the received instruction. In this embodiment, the image areas in the labeled sample picture include a human face area, an area of a normal environment original, and an area of an abnormal environment original. The area of the abnormal environment element includes, but is not limited to, a display, a tablet computer/computer, a television, a projection curtain, and the like. Furthermore, the category to which the face region belongs is a face, and the category of the abnormal environment original includes that of a display, a tablet computer/computer, a television and a projection curtain;
the input module is also used for inputting the marked image area and the category of the image area into the target picture detection model;
and the learning module is used for learning the image region and the category to which the image region belongs through the target image detection model to obtain the trained target image detection model.
In one embodiment, the apparatus 100 for detecting a non-living human face further includes:
the first output module is used for outputting prompt messages for enabling a user to move from far to near or from near to far or to approach a camera, and the camera is used for acquiring the video image;
the second output module is used for outputting a prompt message for enabling a user to do a preset facial action towards the camera;
the video acquiring module 11 is specifically configured to acquire a video image that includes a face region of the user and includes the preset facial action, where the face region changes from large to small or from small to large.
In this embodiment, the preset facial actions include, but are not limited to, blinking, raising head, opening mouth, and the like.
The meaning of "first" and "second" in modules such as the first judging module and the second judging module is only to distinguish two different modules, and is not intended to define a priority between them or any other meaning. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or modules is not necessarily limited to those steps or modules explicitly listed, but may include other steps or modules not explicitly listed or inherent to such process, method, article, or apparatus. The division of modules presented in this application is merely a logical division and may be implemented in a different manner in a practical application.
Wherein, all or part of the modules included in the device for detecting the non-living human face can be realized by software, hardware or a combination thereof. Further, each module in the detection apparatus of the non-living human face may be a program segment for implementing a corresponding function.
The detection device for the non-living body face provided by this embodiment learns the face and the abnormal environmental element by using the target image detection model, so that when identifying whether the face to be detected is the non-living body face, firstly, whether the face is included in the image or video to be detected is identified by using the target image detection model, then, whether the abnormal environmental element is included in the image or video to be detected is identified, and then, whether the face in the image to be detected is the non-living body face is determined by determining the position relationship between the environmental element and the face region, thereby improving the accuracy of the living body face identification.
The above-mentioned detection apparatus of a non-living human face may be implemented in the form of a computer program, which may be run on a computer device as shown in fig. 7.
For specific limitations of the detection device for the non-living human face, reference may be made to the above limitations of the detection method for the non-living human face, and details are not repeated here. The modules in the above detection device for the non-living human face can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in Fig. 7. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used for communicating with the controlled device through a network connection. The computer program is executed by a processor to implement the method for detecting a non-living human face.
In one embodiment, a computer device is provided, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and when the processor executes the computer program, the steps of the method for detecting a non-living human face in the above embodiments are implemented, for example, steps 101 to 104 shown in fig. 2. Alternatively, the processor, when executing the computer program, implements the functions of the respective modules/units of the detection apparatus of a non-living human face in the above-described embodiment, for example, the functions of the modules 11 to 14 shown in fig. 6. To avoid repetition, further description is omitted here.
The Processor may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The processor is the control center of the computer device and connects the various parts of the overall computer device using various interfaces and lines.
The memory may be used to store the computer programs and/or modules, and the processor may implement various functions of the computer device by running or executing the computer programs and/or modules stored in the memory and invoking data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data (such as audio data, video data, etc.) created according to the use of the computer device. In addition, the memory may include a high-speed random access memory, and may also include a non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), at least one magnetic disk storage device, a Flash memory device, or other non-volatile solid-state storage device.
The memory may be integrated in the processor or may be provided separately from the processor.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which when executed by a processor implements the steps of the detection method of a non-living human face in the above-described embodiments, such as steps 101 to 104 shown in fig. 2. Alternatively, the computer program, when executed by the processor, implements the functions of the respective modules/units of the detection apparatus of a non-living human face in the above-described embodiment, for example, the functions of the modules 11 to 14 shown in fig. 6. To avoid repetition, further description is omitted here.
The method, the device, the computer equipment and the storage medium for detecting the non-living body face provided by the embodiment add an abnormal environment element detection technology, and enable the target picture detection model to learn the abnormal environment element in advance, so that when the face to be detected is the non-living body face, the target picture detection model can firstly identify whether the abnormal environment element exists in the picture or video to be detected, and then judge whether the face in the picture to be detected is the non-living body face by judging the position relationship between the environment element and the face region, thereby improving the accuracy of the living body face identification.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM), and includes several instructions for enabling a terminal (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A method for detecting a non-living human face, the method comprising:
acquiring a video image, and extracting a plurality of pictures to be detected from the video image;
respectively inputting the pictures to be detected into a pre-trained target picture detection model to obtain a target image included in each picture to be detected and a category to which the target image belongs;
sequentially judging whether the target image of each picture to be detected comprises a target image of which the category is a face area, if so, further judging whether the target image of the corresponding picture to be detected comprises an environmental element of which the category is a preset abnormal category;
and if the relative position relation between at least one face area of the image to be detected and the abnormal environment element is within a preset range, judging that the face in the image to be detected is a non-living face.
2. The method according to claim 1, wherein the step of determining whether the target image corresponding to the picture to be detected includes an environmental element belonging to a preset abnormal category comprises:
acquiring the category of each target image obtained from each picture to be detected and the category of a preset abnormal environment element;
and judging whether the category of each target image comprises at least one category of the abnormal environmental elements, if so, judging that the target image comprises the abnormal environmental elements.
3. The method of claim 1, wherein the step of training the target image detection model comprises:
receiving a plurality of sample pictures;
labeling an image area in the sample picture and a category to which the image area belongs according to the received instruction;
inputting the marked image area and the category to which the image area belongs into the target picture detection model;
and learning the image region and the category to which the image region belongs through the target image detection model to obtain the trained target image detection model.
4. The method according to claim 1, wherein if the relative position relationship between at least one face region of the image to be detected and the abnormal environmental element is within a preset range, the step of determining that the face in the image to be detected is a non-living face comprises:
and judging whether the display area of the abnormal environment element and the face area have a superposed part, if so, judging that the face in the picture to be detected is a non-living face.
5. The method of detecting a non-living human face according to any one of claims 1 to 4, wherein before the step of acquiring a video image, the method further comprises:
outputting a prompt message for enabling a user to move from far to near or from near to far or to approach a camera, wherein the camera is used for acquiring the video image;
outputting a prompt message for a user to do a preset facial action towards the camera;
and acquiring a video image which comprises a human face area of the user that changes from big to small or from small to big and which comprises the preset facial action.
6. An apparatus for detecting a non-living human face, the apparatus comprising:
the video acquisition module is used for acquiring video images and extracting a plurality of pictures to be detected from the video images;
the input module is used for respectively inputting the pictures to be detected into a pre-trained target picture detection model to obtain a target image included in each picture to be detected and a category to which the target image belongs;
the first judging module is used for sequentially judging whether the target image of each picture to be detected comprises a target image of which the category is a human face area, and if so, further judging whether the target image of the corresponding picture to be detected comprises an environmental element of which the category is a preset abnormal category;
and the second judging module is used for judging that the face in the picture to be detected is a non-living body face if the relative position relation between at least one face area of the picture to be detected and the abnormal environment element is within a preset range.
7. The apparatus according to claim 6, wherein the first determining module further comprises:
the category acquisition unit is used for acquiring the category of each target image obtained from each picture to be detected and the category of a preset abnormal environment element;
the first judging unit is used for judging whether the category of each target image comprises at least one category of the abnormal environmental elements, and if so, judging that the target image comprises the abnormal environmental elements.
8. The apparatus according to claim 6 or 7, wherein the second determining module is specifically configured to:
and judging whether the display area of the abnormal environment element and the face area have a superposed part, if so, judging that the face in the picture to be detected is a non-living face.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of detecting a non-living human face according to any one of claims 1 to 5 when executing the program.
10. A computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the steps in the method for detecting a non-living human face according to any one of claims 1 to 5.
CN202010122186.3A 2020-02-27 2020-02-27 Detection method and device for non-living human face, computer equipment and storage medium Pending CN111428570A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010122186.3A CN111428570A (en) 2020-02-27 2020-02-27 Detection method and device for non-living human face, computer equipment and storage medium
PCT/CN2021/070470 WO2021169616A1 (en) 2020-02-27 2021-01-06 Method and apparatus for detecting face of non-living body, and computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010122186.3A CN111428570A (en) 2020-02-27 2020-02-27 Detection method and device for non-living human face, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111428570A true CN111428570A (en) 2020-07-17

Family

ID=71547309

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010122186.3A Pending CN111428570A (en) 2020-02-27 2020-02-27 Detection method and device for non-living human face, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN111428570A (en)
WO (1) WO2021169616A1 (en)


Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200929005A (en) * 2007-12-26 2009-07-01 Altek Corp Human face detection and tracking method
CN105518714A (en) * 2015-06-30 2016-04-20 北京旷视科技有限公司 Vivo detection method and equipment, and computer program product
CN108734057A (en) * 2017-04-18 2018-11-02 北京旷视科技有限公司 The method, apparatus and computer storage media of In vivo detection
CN107688781A (en) * 2017-08-22 2018-02-13 北京小米移动软件有限公司 Face identification method and device
CN110569808A (en) * 2019-09-11 2019-12-13 腾讯科技(深圳)有限公司 Living body detection method and device and computer equipment
CN111428570A (en) * 2020-02-27 2020-07-17 深圳壹账通智能科技有限公司 Detection method and device for non-living human face, computer equipment and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021169616A1 (en) * 2020-02-27 2021-09-02 深圳壹账通智能科技有限公司 Method and apparatus for detecting face of non-living body, and computer device and storage medium
CN112069917A (en) * 2020-08-14 2020-12-11 武汉轻工大学 Face recognition system for fixed scene
CN112069917B (en) * 2020-08-14 2024-02-02 武汉轻工大学 Face recognition system for fixed scene
CN113420615A (en) * 2021-06-03 2021-09-21 深圳海翼智新科技有限公司 Face living body detection method and device

Also Published As

Publication number Publication date
WO2021169616A1 (en) 2021-09-02

Similar Documents

Publication Publication Date Title
JP6878572B2 (en) Authentication based on face recognition
US11210541B2 (en) Liveness detection method, apparatus and computer-readable storage medium
US10832069B2 (en) Living body detection method, electronic device and computer readable medium
WO2020093634A1 (en) Face recognition-based method, device and terminal for adding image, and storage medium
JP6374986B2 (en) Face recognition method, apparatus and terminal
CN109166156B (en) Camera calibration image generation method, mobile terminal and storage medium
EP3125135A1 (en) Picture processing method and device
US20200380279A1 (en) Method and apparatus for liveness detection, electronic device, and storage medium
CN111428570A (en) Detection method and device for non-living human face, computer equipment and storage medium
CN106228168B (en) The reflective detection method of card image and device
CN109345553B (en) Palm and key point detection method and device thereof, and terminal equipment
CN106296665B (en) Card image fuzzy detection method and apparatus
EP3188078A1 (en) Method and device for fingerprint identification
US11961278B2 (en) Method and apparatus for detecting occluded image and medium
EP3822757A1 (en) Method and apparatus for setting background of ui control
CN111198724A (en) Application program starting method and device, storage medium and terminal
CN111368944B (en) Method and device for recognizing copied image and certificate photo and training model and electronic equipment
US20220270352A1 (en) Methods, apparatuses, devices, storage media and program products for determining performance parameters
CN113255516A (en) Living body detection method and device and electronic equipment
CN109871205B (en) Interface code adjustment method, device, computer device and storage medium
CN108010009B (en) Method and device for removing interference image
US20150112997A1 (en) Method for content control and electronic device thereof
CN108270973B (en) Photographing processing method, mobile terminal and computer readable storage medium
WO2020124454A1 (en) Font switching method and related product
CN111428740A (en) Detection method and device for network-shot photo, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code: Ref country code: HK; Ref legal event code: DE; Ref document number: 40033544; Country of ref document: HK
SE01 Entry into force of request for substantive examination