CN112016482A - Method and device for distinguishing false face and computer equipment


Info

Publication number
CN112016482A
Authority
CN
China
Prior art keywords
face
video
video image
image
frames
Legal status
Granted
Application number
CN202010898814.7A
Other languages
Chinese (zh)
Other versions
CN112016482B (en)
Inventor
杨青川
Current Assignee
Chengdu Xinchao Media Group Co Ltd
Original Assignee
Chengdu Xinchao Media Group Co Ltd
Application filed by Chengdu Xinchao Media Group Co Ltd
Priority to CN202010898814.7A
Publication of CN112016482A
Application granted
Publication of CN112016482B
Current legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of face recognition, and discloses a method, a device and computer equipment for judging false faces. The invention provides a novel method for recognizing false faces based on the positional relationship between a face in a video image and a dynamic image display interface: the coordinate position of the face in the video image is first determined, and it is then judged whether that coordinate position lies within a dynamic image display area in the video image; if so, the face is regarded as a false face. False faces in a video can thus be judged directly, without issuing human body action instructions to on-site personnel, which gives the method the advantages of a simplified confirmation process, high confirmation speed, high accuracy and freedom from scene limitations, making it particularly suitable for filtering false faces at elevator sites.

Description

Method and device for distinguishing false face and computer equipment
Technical Field
The invention belongs to the technical field of face recognition, and particularly relates to a method and a device for distinguishing false faces and computer equipment.
Background
At present, a living body detection method is mainly adopted to determine whether a face in a video is the real face of a live person. For example, human body action instructions such as shaking the head, nodding, blinking or opening the mouth are issued to the person on site; after the human body action is recognized, it is checked whether the action corresponds to the instruction, and only when it does is the face in the video determined to be the real face of a live person rather than a false face on a carrier such as a display screen or a poster.
Although the existing living body detection method has the advantage of high confirmation accuracy, it also suffers from a complex process and a low confirmation speed. Moreover, in some special places, human body action instructions such as shaking the head, nodding, blinking or opening the mouth cannot be issued to on-site personnel, so the existing living body detection method cannot be applied to confirm whether a face in the surveillance video (i.e., the video collected at such places) is the real face of on-site personnel, which hinders further applications built on face recognition results. For example, in an elevator scene, because human body action instructions cannot be issued to on-site personnel, false faces appearing on an advertising screen or a poster cannot be distinguished with the existing living body detection method, and statistics of on-site personnel based on face recognition results therefore contain large errors.
Disclosure of Invention
The invention aims to solve the problems of a complex process, a low confirmation speed and limited application scenes in the existing procedure for confirming real or false faces, and provides a method, a device, computer equipment and a computer-readable storage medium for judging false faces.
In a first aspect, the present invention provides a method for discriminating a false face, including:
acquiring at least one frame of video image, wherein the at least one frame of video image comprises a human face;
determining the coordinate position of the human face in the at least one frame of video image;
judging whether the coordinate position is located in a dynamic image display area in the at least one frame of video image, wherein the dynamic image display area refers to an imaging area of a display interface of dynamic image display equipment in the at least one frame of video image;
and if so, judging the face to be a false face.
Based on the above content of the invention, a novel method for recognizing false faces based on the positional relationship between a face in a video image and a dynamic image display interface can be provided: the coordinate position of the face in the video image is first determined, and it is then judged whether that coordinate position lies within a dynamic image display area in the video image; if so, the face is regarded as a false face. False faces in a video can thus be judged directly, without issuing human body action instructions to on-site personnel, which gives the method the advantages of a simplified confirmation process, high confirmation speed, high accuracy and freedom from scene limitations, making it particularly suitable for filtering false faces at elevator sites.
In one possible design, two frames of video images corresponding to different moments are acquired;
and determining the dynamic image display area according to the similarity calculation result of the local video image contents in the two frames of video images, wherein the local video image contents refer to the video image contents respectively defined by the same dynamically changed local area boundary in the two frames of video images, and the local area boundary refers to the boundary used for segmenting the local video images in the video images.
Through the possible design, the occupied position of the display interface in the video image can be identified according to the local image similarity of the two frames of video images, and then the dynamic image display area is automatically obtained, so that manual determination is not needed, and the practical application is facilitated.
In one possible design, determining the dynamic image display area according to the similarity calculation result of the local video image contents in the two frames of video images includes any one of the following modes (a) to (D) or any combination thereof:
(A) horizontally sliding a first image dividing line from the left end to the right in synchronization in the two frames of video images;
in the sliding process, carrying out similarity calculation on video image contents positioned in two right local areas in real time, wherein the two right local areas are respectively local areas positioned on the right side of the first image dividing line in the two frames of video images in a one-to-one correspondence manner;
when the similarity calculation result is at an inflection point from small to large, taking the right area of the first image dividing line as the dynamic image display area;
(B) horizontally sliding a second image dividing line from the right end to the left in synchronization in the two frames of video images;
in the sliding process, carrying out similarity calculation on the video image contents in two left local areas in real time, wherein the two left local areas are respectively local areas on the left side of the second image dividing line in the two frames of video images in a one-to-one correspondence manner;
when the similarity calculation result is at an inflection point which is changed from small to large, taking the left area of the second image dividing line as the dynamic image display area;
(C) vertically sliding a third image dividing line downward from an upper end in synchronization in the two frames of video images;
in the sliding process, carrying out similarity calculation on the video image contents in two lower side local areas in real time, wherein the two lower side local areas are respectively the local areas positioned on the lower sides of the third image dividing lines in the two frames of video images in a one-to-one correspondence manner;
when the similarity calculation result is at an inflection point which is changed from small to large, taking the lower side area of the third image dividing line as the dynamic image display area;
(D) vertically sliding a fourth image dividing line from a lower end upward in synchronization in the two frames of video images;
in the sliding process, carrying out similarity calculation on the video image contents in two upper side local areas in real time, wherein the two upper side local areas are respectively local areas on the upper sides of the fourth image dividing lines in the two frames of video images in a one-to-one correspondence manner;
and when the similarity calculation result is at an inflection point which is changed from small to large, taking the upper region of the fourth image dividing line as the dynamic image display region.
Through this possible design, the occupied position of the dynamic image display area in the video image can be determined from the local image similarity of the two frames of video images in the left, right, upper or lower direction, or any combination thereof, which further guarantees the accuracy of subsequent false face judgment.
In one possible design, if yes, determining that the face is a false face includes:
acquiring a brightness value of the face in the at least one frame of video image;
and when the brightness value of the face is greater than a preset brightness threshold value, judging the face to be a false face.
Through the possible design, whether the face in the dynamic image display area is a false face or not can be further confirmed by utilizing the comparison result of the face brightness value and the preset brightness threshold value based on the characteristics that the screen is bright and the video face brightness value is large, the misjudgment condition is avoided, and the accuracy of false face judgment can be guaranteed.
In one possible design, obtaining the brightness value of the face in the at least one frame of video image includes:
intercepting a face area of the face in the at least one frame of video image;
acquiring brightness component values of all pixel points in the face area under an illumination color model Lab;
averaging the brightness component values of all the pixel points in the face area to obtain a brightness mean value, and taking the brightness mean value as the brightness value.
Through the possible design, the brightness mean value of the face in the video image can be accurately obtained, whether the face in the dynamic image display area is a false face or not can be further accurately determined, the misjudgment condition is avoided, and the accuracy of false face judgment is further guaranteed.
In one possible design, the method further includes:
if not, determining the coordinate position of the face in a plurality of adjacent video images, wherein the plurality of adjacent video images are adjacent to the at least one frame of video image;
and if the coordinate position of the face in the multiple frames of adjacent video images is the same as the coordinate position of the face in the at least one frame of video image, judging that the face is a false face.
By the possible design, the characteristic that the false face provided by the static image display interface is still can be utilized, whether the face outside the dynamic image display area is the false face or not can be further confirmed according to the coordinate position difference condition of the face in different frame video images, the misjudgment condition is avoided, and the accuracy of false face judgment can be guaranteed.
In a second aspect, the invention provides a device for distinguishing false faces, which comprises a video image acquisition unit, a face position determination unit, a position relation judgment unit and a false face judgment unit that are sequentially in communication connection;
the video image acquisition unit is used for acquiring at least one frame of video image, and the at least one frame of video image contains a human face;
the face position determining unit is used for determining the coordinate position of the face in the at least one frame of video image;
the position relation judging unit is used for judging whether the coordinate position is located in a dynamic image display area in the at least one frame of video image, wherein the dynamic image display area refers to an imaging area of dynamic image display equipment in the at least one frame of video image;
and the false face judging unit is used for judging that the face is a false face when the coordinate position is found to be positioned in the dynamic image display area.
In one possible design, the device further comprises a dynamic region determining unit in communication connection with the video image acquisition unit and the position relation judgment unit respectively;
the video image acquisition unit is also used for acquiring two frames of video images corresponding to different moments;
the dynamic region determining unit is configured to determine the dynamic image display region according to a similarity calculation result of local video image contents in the two frames of video images, where the local video image contents are video image contents respectively defined by a same dynamically changing local region boundary in the two frames of video images, and the local region boundary is a boundary used for segmenting a local video image in a video image.
In one possible design, the dynamic region determining unit is configured to determine the dynamic image display region in any one of the following manners (a) to (D), or any combination thereof:
(A) horizontally sliding a first image dividing line from the left end to the right in synchronization in the two frames of video images;
in the sliding process, carrying out similarity calculation on video image contents positioned in two right local areas in real time, wherein the two right local areas are respectively local areas positioned on the right side of the first image dividing line in the two frames of video images in a one-to-one correspondence manner;
when the similarity calculation result is at an inflection point from small to large, taking the right area of the first image dividing line as the dynamic image display area;
(B) horizontally sliding a second image dividing line from the right end to the left in synchronization in the two frames of video images;
in the sliding process, carrying out similarity calculation on the video image contents in two left local areas in real time, wherein the two left local areas are respectively local areas on the left side of the second image dividing line in the two frames of video images in a one-to-one correspondence manner;
when the similarity calculation result is at an inflection point which is changed from small to large, taking the left area of the second image dividing line as the dynamic image display area;
(C) vertically sliding a third image dividing line downward from an upper end in synchronization in the two frames of video images;
in the sliding process, carrying out similarity calculation on the video image contents in two lower side local areas in real time, wherein the two lower side local areas are respectively the local areas positioned on the lower sides of the third image dividing lines in the two frames of video images in a one-to-one correspondence manner;
when the similarity calculation result is at an inflection point which is changed from small to large, taking the lower side area of the third image dividing line as the dynamic image display area;
(D) vertically sliding a fourth image dividing line from a lower end upward in synchronization in the two frames of video images;
in the sliding process, carrying out similarity calculation on the video image contents in two upper side local areas in real time, wherein the two upper side local areas are respectively local areas on the upper sides of the fourth image dividing lines in the two frames of video images in a one-to-one correspondence manner;
and when the similarity calculation result is at an inflection point which is changed from small to large, taking the upper region of the fourth image dividing line as the dynamic image display region.
In one possible design, the false face judgment unit comprises a face brightness acquisition subunit and a false face judgment subunit which are in communication connection;
the face brightness acquiring subunit is configured to acquire a brightness value of the face in the at least one frame of video image;
and the false face judging subunit is used for judging that the face is a false face when the brightness value of the face is greater than a preset brightness threshold value.
In one possible design, the face brightness obtaining sub-unit comprises a face region intercepting grandchild unit, a pixel brightness obtaining grandchild unit and a brightness mean value calculating grandchild unit which are sequentially connected in a communication manner;
the human face region intercepting grandchild unit is used for intercepting a human face region of the human face in the at least one frame of video image;
the pixel brightness acquiring grandchild unit is used for acquiring the brightness component values of all pixel points in the face area under the illumination color model Lab;
and the brightness mean value calculating grandchild unit is used for averaging the brightness component values of all the pixel points in the face area to obtain a brightness mean value, which is used as the brightness value.
In a possible design, the face position determining unit is further configured to determine a coordinate position of the face in multiple frames of adjacent video images when the coordinate position is found to be outside the dynamic image display area, where the multiple frames of adjacent video images are adjacent to the at least one frame of video image;
the false face determination unit is further configured to determine that the face is a false face if the coordinate position of the face in the multiple frames of adjacent video images is the same as the coordinate position of the face in the at least one frame of video image.
In a third aspect, the present invention provides a computer device, comprising a memory and a processor, wherein the memory is used for storing a computer program, and the processor is used for reading the computer program and executing the method for discriminating a false face as described in the first aspect or any possible design of the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium having stored thereon instructions which, when run on a computer, perform the method for discriminating a false face as described in the first aspect or any one of the possible designs of the first aspect.
In a fifth aspect, the present invention provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method for discriminating a false face as described above in the first aspect or any one of the possible designs of the first aspect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flow chart of a method for discriminating a false face according to the present invention.
Fig. 2 is an exemplary diagram of sliding the first image dividing line horizontally to the right in synchronization in two frames of video images according to the present invention.
Fig. 3 is a diagram illustrating a position of a dynamic image display area in a video image according to an embodiment of the present invention.
Fig. 4 is a schematic structural diagram of a device for discriminating a false face according to the present invention.
Fig. 5 is a schematic structural diagram of a computer device provided by the present invention.
Detailed Description
The invention is further described with reference to the following figures and specific embodiments. It should be noted that the description of the embodiments is provided to help understanding of the present invention, but the present invention is not limited thereto. Specific structural and functional details disclosed herein are merely illustrative of example embodiments of the invention. This invention may, however, be embodied in many alternate forms and should not be construed as limited to the embodiments set forth herein.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments of the present invention.
It should be understood that the term "and/or" as used herein merely describes an association between objects, meaning that three relationships may exist; for example, "A and/or B" may mean: A exists alone, B exists alone, or A and B exist at the same time. The term "/and" describes another association, meaning that two relationships may exist; for example, "A/and B" may mean: A exists alone, or A and B exist at the same time. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
It will be understood that when an element is referred to herein as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Conversely, if an element is referred to as being "directly connected" or "directly coupled" to another element, no intervening elements are present. Other words used to describe relationships between elements should be interpreted in a similar manner (e.g., "between" versus "directly between", "adjacent" versus "directly adjacent").
It is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, and/or groups thereof.
It should also be noted that, in some alternative designs, the functions/acts noted may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
It should be understood that specific details are provided in the following description to facilitate a thorough understanding of example embodiments. However, it will be understood by those of ordinary skill in the art that the example embodiments may be practiced without these specific details. For example, systems may be shown in block diagrams in order not to obscure the examples in unnecessary detail. In other instances, well-known processes, structures and techniques may be shown without unnecessary detail in order to avoid obscuring example embodiments.
As shown in fig. 1, the method for discriminating a false face provided by the first aspect of this embodiment is applicable, but not limited, to face detection and tracking in special places such as shops, airports, exhibition halls and elevators, where human body action instructions cannot be issued to on-site personnel. The method may include, but is not limited to, the following steps S101 to S104.
S101, obtaining at least one frame of video image, wherein the at least one frame of video image comprises a human face.
In step S101, the at least one frame of video image may be, but is not limited to being, captured by a camera disposed in a special place where no human body action instruction can be issued to on-site personnel, such as a shop, an airport, an exhibition hall or an elevator, and may be captured with the lens still or rotating. When the face of a live person, or of a false person on a display screen or a poster, appears in the imaging field of the camera, the at least one frame of video image containing a human face can be obtained.
And S102, determining the coordinate position of the face in the at least one frame of video image.
In step S102, the coordinate position is determined in a conventional manner, for example, the abscissa x and the ordinate y of the center of the identification mark frame corresponding to the human face in the video image are taken as the coordinate position.
S103, judging whether the coordinate position is located in a dynamic image display area in the at least one frame of video image, wherein the dynamic image display area refers to an imaging area of a display interface of dynamic image display equipment in the at least one frame of video image;
in step S103, the dynamic image display device may be, but is not limited to, an electronic display device such as a television or an advertisement player disposed on the scene. The occupation position of the dynamic image display area in the at least one frame of video image can be manually determined in advance, and can also be automatically determined in advance based on the result of identifying the display interface of the video image. If the at least one frame of video image is collected by the camera when the display interface is in a static state relative to the lens, the occupied position of the dynamic image display area in each frame of the at least one frame of video image is unchanged, and the occupied positions of the dynamic image display area in the rest frames can be directly obtained as long as the occupied positions of the dynamic image display area in part of the frames are determined. If the at least one frame of video image is captured by the camera in a moving state of the display interface relative to the lens (including but not limited to movement of the display interface and/or rotation of the lens, etc.), the occupied positions of the dynamic image display area in each frame of the at least one frame of video image are changed, and as long as the occupied positions of the dynamic image display area in a partial frame are determined, the occupied positions of the dynamic image display area in the rest frames can also be obtained through conventional geometric transformation based on the occupied positions of the dynamic image display area in the partial frame and a known moving track of the display interface relative to the lens in a camera coordinate system of the camera.
And S104, if yes, judging the face to be a false face.
In step S104, since the display interface, while showing its playing content, supplies false human faces to the at least one frame of video image (for example, the face of an actor in an elevator advertisement), if the coordinate position of the face falls within the dynamic image display area, it may be preliminarily determined that the face is a false face provided by the display interface of the dynamic image display device, that is, the face is judged to be a false face. Conversely, when the coordinate position of the face falls outside the dynamic image display area, the face can be preliminarily determined to be a real face.
Therefore, through the discrimination scheme described in steps S101 to S104, a new method for recognizing false faces based on the positional relationship between a face in a video image and a dynamic image display interface is provided: the coordinate position of the face in the video image is determined, and it is then judged whether that coordinate position lies within a dynamic image display area in the video image; if so, the face is regarded as a false face. False faces in a video can thus be judged directly, without issuing human body action instructions to on-site personnel, giving the advantages of a simplified confirmation process, high confirmation speed, high accuracy and freedom from scene limitations, and the scheme is particularly suitable for filtering false faces at elevator sites.
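By way of illustration only, the following Python sketch shows how steps S101 to S104 might be wired together. The use of OpenCV, the Haar-cascade detector and the DISPLAY_REGION coordinates are assumptions of the sketch, not part of the claimed method.

```python
# Minimal sketch of steps S101-S104, assuming OpenCV and a Haar-cascade
# face detector; DISPLAY_REGION and all helper names are illustrative,
# not defined by the patent.
import cv2

# Occupied position of the dynamic image display area in the frame,
# determined in advance (manually, or automatically as in the possible
# design described next); the coordinates are example values.
DISPLAY_REGION = (400, 50, 880, 640)  # (x1, y1, x2, y2)

CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_faces(frame):
    """S102-S104: judge each detected face by its box centre."""
    x1, y1, x2, y2 = DISPLAY_REGION
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in CASCADE.detectMultiScale(gray, 1.1, 5):
        cx, cy = x + w / 2, y + h / 2               # S102: coordinate position
        inside = x1 <= cx <= x2 and y1 <= cy <= y2  # S103: in display area?
        results.append(((x, y, w, h), inside))      # S104: inside => false face
    return results
```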
On the basis of the technical solution of the first aspect, the present embodiment further specifically proposes a possible design for determining a dynamic image display area based on a result of identifying a display interface from a video image, that is, the method further includes, but is not limited to, the following steps S201 to S202.
S201, two frames of video images corresponding to different moments are obtained.
In the foregoing step S201, the two frames of video images and the at least one frame of video image are acquired by the same camera, and the two frames must both be captured while the display interface and the lens hold the same relative spatial position. Either or both of the two frames may belong to the at least one frame of video image; alternatively, the two frames may be other video images, provided they are captured with the display interface and the lens in the same relative spatial position as for the at least one frame of video image, so that the occupied position of the dynamic image display area in the at least one frame of video image can be obtained from its occupied position in the two frames. The time difference between the two frames of video images may be, for example, 1 second or 1 minute.
S202, determining the dynamic image display area according to the similarity calculation result of the local video image contents in the two frames of video images, wherein the local video image contents refer to the video image contents respectively defined by the same dynamically changed local area boundary in the two frames of video images, and the local area boundary refers to the boundary used for dividing the local video images in the video images.
In step S202, the similarity of the local video image contents may be calculated by, but not limited to, a conventional structural similarity method or a cosine similarity method. Because the content played on the display interface is a dynamic image picture, the video image content in the corresponding dynamic image display area changes by a relatively large amount, so the area with large content change can be identified from the similarity calculation result of the local video image contents in the two frames of video images and taken as the dynamic image display area.
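For instance, the structural similarity of the same local region in two frames could be computed as follows; the choice of scikit-image is an assumption of the sketch, since the text names the metric but not a library.

```python
# Sketch: similarity of the video image contents bounded by the same
# local area boundary in two frames captured at different moments,
# using scikit-image's structural similarity (an assumed choice).
import cv2
from skimage.metrics import structural_similarity

def local_similarity(frame_a, frame_b, box):
    """box = (x1, y1, x2, y2), the local area boundary; returns a score
    in [-1, 1], where 1.0 means identical content."""
    x1, y1, x2, y2 = box
    patch_a = cv2.cvtColor(frame_a[y1:y2, x1:x2], cv2.COLOR_BGR2GRAY)
    patch_b = cv2.cvtColor(frame_b[y1:y2, x1:x2], cv2.COLOR_BGR2GRAY)
    return structural_similarity(patch_a, patch_b)
```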
Therefore, by the possible design one described in the above steps S201 to S202, the occupied position of the display interface in the video image can be identified according to the local image similarity acquisition result of the two frames of video images, and the dynamic image display area can be automatically obtained, so that manual determination is not required, and the practical application is facilitated.
Based on the technical solution of the first possible design, a second possible design for accurately positioning the dynamic image display area is further proposed: the dynamic image display area is determined according to the similarity calculation result of the local video image contents in the two frames of video images in any one of the following modes (A) to (D), or any combination thereof.
The mode (A) includes, but is not limited to, the following steps SA1 to SA3:
SA1, horizontally sliding a first image dividing line from the left end to the right synchronously in the two frames of video images;
SA2, in the sliding process, performing similarity calculation on the video image contents in two right local areas in real time, wherein the two right local areas are local areas on the right side of the first image dividing line in the two frames of video images respectively in one-to-one correspondence;
and SA3, when the similarity calculation result is at the inflection point where it turns from decreasing to increasing, taking the area on the right of the first image dividing line as the dynamic image display area.
In the foregoing steps SA1 to SA3, as shown in fig. 2, the video image content in the dynamic image display area (i.e., the diagonal-line area in fig. 2) changes relatively greatly between the two frames, while the content in the other areas changes little or not at all. Therefore, as the first image dividing line slides horizontally to the right synchronously in the two frames of video images, the proportion occupied by the dynamic image display area in the two right local areas becomes larger and larger, and the similarity between the two right local areas decreases continuously (i.e., while sliding from abscissa x0 through x1 to x2), reaching its minimum when the first image dividing line arrives at the left boundary of the dynamic image display area; as the line continues to slide horizontally to the right, the proportion occupied by the dynamic image display area in the two right local areas becomes smaller and smaller, so the similarity of the two right local areas increases continuously (i.e., while sliding from abscissa x2 to x3). When the similarity of the two right local areas is at the inflection point where it turns from decreasing to increasing, the area on the right of the first image dividing line is taken as the dynamic image display area, giving a preliminary positioning result.
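A minimal sketch of mode (A) under the same assumptions, reusing the local_similarity helper sketched above; for the fall-then-rise similarity profile described here, the position of minimum similarity coincides with the inflection point.

```python
# Sketch of mode (A): slide a vertical dividing line rightwards and take
# the abscissa of minimum two-frame similarity as the inflection point,
# i.e. the left boundary of the dynamic image display area. The stride
# `step` is an illustrative choice.
def find_left_boundary(frame_a, frame_b, step=8):
    h, w = frame_a.shape[:2]
    best_x, best_sim = 0, float("inf")
    for x in range(0, w - step, step):
        # similarity of the two right local areas at this line position
        sim = local_similarity(frame_a, frame_b, (x, 0, w, h))
        if sim < best_sim:
            best_x, best_sim = x, sim
    return best_x  # the region to the right of best_x is the display area
```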
The mode (B) includes, but is not limited to, the following steps SB1 to SB3:
SB1, horizontally sliding a second image dividing line from the right end to the left synchronously in the two frames of video images;
SB2, performing similarity calculation on the video image contents in two left local areas in real time during the sliding process, wherein the two left local areas are the local areas on the left side of the second image dividing line in the two frames of video images in one-to-one correspondence;
and SB3, when the similarity calculation result is at the inflection point where it turns from decreasing to increasing, taking the area on the left of the second image dividing line as the dynamic image display area.
The mode (C) includes, but is not limited to, the following steps SC1 to SC3:
SC1, sliding vertically a third image dividing line from the upper end downwards in the two frames of video images synchronously;
SC2, in the sliding process, carrying out similarity calculation on the video image contents of two lower side local areas in real time, wherein the two lower side local areas are respectively local areas positioned on the lower side of the third image dividing line in the two frames of video images in a one-to-one correspondence manner;
and SC3, when the similarity calculation result is at the inflection point where it turns from decreasing to increasing, taking the lower area of the third image dividing line as the dynamic image display area.
The mode (D) includes, but is not limited to, the following steps SD1 to SD3:
SD1, vertically sliding a fourth image dividing line from the lower end upwards synchronously in the two frames of video images;
SD2, performing similarity calculation on the video image contents in two upper local areas in real time during the sliding process, wherein the two upper local areas are the local areas on the upper side of the fourth image dividing line in the two frames of video images in one-to-one correspondence;
and SD3, when the similarity calculation result is at the inflection point where it turns from decreasing to increasing, taking the upper area of the fourth image dividing line as the dynamic image display area.
The calculation principle for determining the dynamic image display area in the foregoing modes (B) to (D) can refer to the foregoing mode (A) and is not repeated here. In addition, considering that the dynamic image display area may occupy various positions in the video image (for example, the entire right area, left area, upper area, lower area, a corner area or the middle area), the foregoing modes (A) to (D) can be combined arbitrarily to locate the dynamic image display area more precisely. For example, as shown in fig. 3, the overlapping region lying in the right area of the first image dividing line, the left area of the second image dividing line, the lower area of the third image dividing line and the upper area of the fourth image dividing line may be taken as the dynamic image display area, so that its occupied position in the video image is determined accurately, further ensuring the accuracy of subsequent false face judgment.
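The combination shown in fig. 3 might be sketched as follows, again reusing local_similarity; the scan stride and the half-plane layouts are illustrative assumptions.

```python
# Sketch: combining modes (A)-(D). Each direction scans for the
# dividing-line position of minimum two-frame similarity; the four
# resulting half-planes overlap in the display rectangle.
def argmin_line(frame_a, frame_b, boxes, positions):
    """Return the line position whose local area has lowest similarity."""
    sims = [local_similarity(frame_a, frame_b, b) for b in boxes]
    return positions[sims.index(min(sims))]

def locate_display_region(frame_a, frame_b, step=8):
    h, w = frame_a.shape[:2]
    xs, ys = list(range(0, w - step, step)), list(range(0, h - step, step))
    left = argmin_line(frame_a, frame_b, [(x, 0, w, h) for x in xs], xs)          # (A)
    right = argmin_line(frame_a, frame_b, [(0, 0, x + step, h) for x in xs], xs)  # (B)
    top = argmin_line(frame_a, frame_b, [(0, y, w, h) for y in ys], ys)           # (C)
    bottom = argmin_line(frame_a, frame_b, [(0, 0, w, y + step) for y in ys], ys) # (D)
    return (left, top, right + step, bottom + step)  # (x1, y1, x2, y2)
```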
Therefore, with the second possible design, the occupied position of the dynamic image display area in the video image can be determined from the local image similarity of the two frames of video images in the left, right, upper or lower direction, or any combination thereof, further ensuring the accuracy of subsequent false face judgment.
On the basis of the first aspect and any one of the first to second possible designs, the present embodiment further specifically proposes a third possible design for accurately determining a false face, that is, if yes, determining that the face is a false face, including but not limited to the following steps S401 to S402.
S401, acquiring the brightness value of the face in the at least one frame of video image.
S402, when the brightness value of the face is larger than a preset brightness threshold value, the face is judged to be a false face.
In the foregoing step S402, in practice the face of a real person may block the display interface (for example, the display screen of an on-site advertising machine) relative to the camera, so the coordinate position of a real face may also fall within the dynamic image display area. To avoid such misjudgment, the characteristic that a screen is brighter, so that a face shown on it has a larger luminance value (for example, the luminance mean of an on-screen face is generally greater than 130, while that of a real face is generally less than 85), is used: the comparison between the luminance value of the face and the preset luminance threshold further determines whether a face located in the dynamic image display area is a false face. That is, when the luminance value of the face is greater than the preset luminance threshold, the face is judged to be a false face; otherwise, the face is judged to be a real face. The preset luminance threshold may, for example, lie between 85 and 130.
Therefore, through the third possible design described in the above steps S401 to S402, based on the characteristics that the screen is brighter and the brightness value of the video face is larger, the comparison result between the brightness value of the face and the preset brightness threshold is used to further determine whether the face located in the dynamic image display area is a false face, so as to avoid the occurrence of a false judgment condition, and ensure the accuracy of false face judgment.
Based on the aforementioned technical solution of the third possible design, the present embodiment further specifically proposes a fourth possible design for obtaining a face luminance average value, that is, obtaining a luminance value of the face in the at least one frame of video image, including but not limited to the following steps S411 to S413.
S411, intercepting a face area of the face in the at least one frame of video image.
S412, obtaining the brightness component value of each pixel point in the face area under the illumination color model Lab.
And S413, averaging the brightness component values of all the pixel points in the face area to obtain a brightness mean value, and taking the brightness mean value as the brightness value.
In the foregoing step S411, the face region may be intercepted in an existing conventional manner, for example, based on a face contour recognition result. The illumination color model Lab is a device-independent color model based on physiological characteristics. The Lab color model consists of three components: L is luminance, while a and b are two color channels, where a ranges from dark green (low channel values) through gray (medium values) to bright pink-red (high values), and b ranges from bright blue (low values) through gray (medium values) to yellow (high values). The luminance component value L of each pixel point can therefore be obtained directly under the illumination color model Lab.
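A sketch of steps S411 to S413 follows, assuming OpenCV's Lab conversion; for 8-bit images OpenCV scales the L component to 0-255, a scale that appears to match the 85/130 figures quoted for step S402 above (an assumption of this sketch).

```python
# Sketch of steps S411-S413: mean luminance of a face region under Lab.
import cv2
import numpy as np

def face_luminance(frame, box):
    """box = (x, y, w, h): the intercepted face region (step S411)."""
    x, y, w, h = box
    lab = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2LAB)  # S412
    return float(np.mean(lab[:, :, 0]))                            # S413

def exceeds_threshold(frame, box, threshold=110):
    # threshold is illustrative, chosen inside the 85-130 band above
    return face_luminance(frame, box) > threshold
```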
Therefore, through the fourth possible design described in steps S411 to S413, the luminance mean of the face in the video image can be obtained accurately, ensuring that whether a face located in the dynamic image display area is a false face can be further determined accurately, avoiding misjudgment and further guaranteeing the accuracy of false face judgment. In addition, the median of the luminance component values of all the pixel points may also be used as the luminance value.
On the basis of the first aspect and any one of the first to fourth possible designs, the present embodiment further specifically proposes a fifth possible design for further determining a static false face, that is, the method further includes, but is not limited to, the following steps S501 to S502.
S501, when the coordinate position is not located in the dynamic image display area, determining the coordinate position of the face in multiple adjacent video images, wherein the multiple adjacent video images are adjacent to the at least one video image.
In the foregoing step S501, the multiple frames of adjacent video images and the at least one frame of video image must each be captured by the camera while the static image display interface (e.g., an on-site hanging picture or poster) and the lens hold the same relative spatial position. The specific manner of determining the coordinate position in step S501 must be consistent with that in step S102; for example, the center coordinates of the identification mark frame of the face in the multiple frames of adjacent video images may be used as the determined coordinate position. Alternatively, a face region cut from the multiple frames of adjacent video images or the at least one frame of video image may be fed into a face detection model such as the Multi-task Cascaded Convolutional Networks (MTCNN) to obtain face feature points, whose coordinate positions are then used as the coordinate position of the face.
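If the third-party mtcnn package is used as the face detection model (one possible realisation; the text names MTCNN only as an example model), the coordinate position could be obtained as follows.

```python
# Sketch: face coordinate positions via the third-party `mtcnn` package
# (an assumed library choice, not prescribed by the patent).
from mtcnn import MTCNN

detector = MTCNN()

def face_positions(rgb_frame):
    """Return the centre (x, y) of each detected face box; the detector
    expects an RGB image array."""
    centres = []
    for det in detector.detect_faces(rgb_frame):
        x, y, w, h = det["box"]
        centres.append((x + w / 2, y + h / 2))
    return centres
```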
S502, if the coordinate position of the face in the multiple frames of adjacent video images is the same as the coordinate position of the face in the at least one frame of video image, judging that the face is a false face.
In the foregoing step S502, in practice a face whose coordinate position lies outside the dynamic image display area may still be a false face provided by a static image display interface such as an on-site hanging picture or poster. To avoid misjudgment, the characteristic that a false face provided by a static image display interface is motionless is used: whether the face located outside the dynamic image display area is a false face is further determined from the difference between its coordinate positions in the multiple frames of adjacent video images and in the at least one frame of video image. That is, if the coordinate position of the face in the multiple frames of adjacent video images is the same as its coordinate position in the at least one frame of video image, the face is judged to be a false face; otherwise, the face is judged to be a real face. In addition, the adjacent moments of the multiple frames of adjacent video images may differ from that of the at least one frame of video image by, for example, 10 seconds or more, and the number of adjacent frames may be, for example, 3 or more.
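A sketch of steps S501 and S502 follows; the small tolerance parameter is an addition of the sketch, since "the same" coordinate position would otherwise require pixel-exact equality.

```python
# Sketch of steps S501-S502: a face outside the dynamic image display
# area is judged false if its centre stays put across adjacent frames.
def is_static_false_face(positions, tol=2.0):
    """positions: (x, y) centres of the same face in the frame under
    test and in several adjacent frames (e.g. 3 or more)."""
    x0, y0 = positions[0]
    return all(abs(x - x0) <= tol and abs(y - y0) <= tol
               for x, y in positions[1:])
```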
Therefore, by the fifth possible design described in the steps S501 to S502, it can be further determined whether the face located outside the dynamic image display area is a false face according to the difference of the coordinate positions of the face in different frame video images by using the characteristic that the false face provided by the static image display interface is still, so as to avoid the occurrence of erroneous determination, and ensure the accuracy of false face determination. In addition, at least three frames of video images (the number of frames in the at least one frame of video image is much greater than 3, for example, 100 frames) may be extracted from the at least one frame of video image, then the coordinate positions of the face in the at least three frames of video images are determined, and if all the determined coordinate positions are the same, the face may also be determined to be a false face.
As shown in fig. 4, a second aspect of this embodiment provides a virtual device for implementing the method for distinguishing a false face according to the first aspect or any possible design of the first aspect, comprising a video image acquisition unit, a face position determination unit, a position relation judgment unit and a false face judgment unit that are sequentially in communication connection;
the video image acquisition unit is used for acquiring at least one frame of video image, and the at least one frame of video image contains a human face;
the face position determining unit is used for determining the coordinate position of the face in the at least one frame of video image;
the position relation judging unit is used for judging whether the coordinate position is located in a dynamic image display area in the at least one frame of video image, wherein the dynamic image display area refers to an imaging area of dynamic image display equipment in the at least one frame of video image;
and the false face judging unit is used for judging that the face is a false face when the coordinate position is found to be positioned in the dynamic image display area.
In one possible design, the device further comprises a dynamic region determining unit in communication connection with the video image acquisition unit and the position relation judgment unit respectively;
the video image acquisition unit is also used for acquiring two frames of video images corresponding to different moments;
the dynamic region determining unit is configured to determine the dynamic image display region according to a similarity calculation result of local video image contents in the two frames of video images, where the local video image contents are video image contents respectively defined by a same dynamically changing local region boundary in the two frames of video images, and the local region boundary is a boundary used for segmenting a local video image in a video image.
In one possible design, the dynamic region determining unit is configured to determine the dynamic image display region in any one of the following manners (a) to (D), or any combination thereof:
(A) horizontally sliding a first image dividing line from the left end to the right in synchronization in the two frames of video images;
in the sliding process, carrying out similarity calculation on video image contents positioned in two right local areas in real time, wherein the two right local areas are respectively local areas positioned on the right side of the first image dividing line in the two frames of video images in a one-to-one correspondence manner;
when the similarity calculation result is at an inflection point from small to large, taking the right area of the first image dividing line as the dynamic image display area;
(B) horizontally sliding a second image dividing line from the right end to the left in synchronization in the two frames of video images;
in the sliding process, carrying out similarity calculation on the video image contents in two left local areas in real time, wherein the two left local areas are respectively local areas on the left side of the second image dividing line in the two frames of video images in a one-to-one correspondence manner;
when the similarity calculation result is at an inflection point which is changed from small to large, taking the left area of the second image dividing line as the dynamic image display area;
(C) vertically sliding a third image dividing line downward from an upper end in synchronization in the two frames of video images;
in the sliding process, carrying out similarity calculation on the video image contents in two lower side local areas in real time, wherein the two lower side local areas are respectively the local areas positioned on the lower sides of the third image dividing lines in the two frames of video images in a one-to-one correspondence manner;
when the similarity calculation result is at an inflection point which is changed from small to large, taking the lower side area of the third image dividing line as the dynamic image display area;
(D) vertically sliding a fourth image dividing line from a lower end upward in synchronization in the two frames of video images;
in the sliding process, carrying out similarity calculation on the video image contents in two upper side local areas in real time, wherein the two upper side local areas are respectively local areas on the upper sides of the fourth image dividing lines in the two frames of video images in a one-to-one correspondence manner;
and when the similarity calculation result is at an inflection point which is changed from small to large, taking the upper region of the fourth image dividing line as the dynamic image display region.
In one possible design, the false face judgment unit comprises a face brightness acquisition subunit and a false face judgment subunit which are in communication connection;
the face brightness acquiring subunit is configured to acquire a brightness value of the face in the at least one frame of video image;
and the false face judging subunit is used for judging that the face is a false face when the brightness value of the face is greater than a preset brightness threshold value.
In one possible design, the face brightness acquisition subunit comprises a face region cropping sub-subunit, a pixel brightness acquisition sub-subunit, and a brightness mean calculation sub-subunit that are communicatively connected in sequence;
the face region cropping sub-subunit is configured to crop the face region of the face out of the at least one frame of video image;
the pixel brightness acquisition sub-subunit is configured to acquire the brightness component values of all pixel points in the face region under the Lab illumination color model;
and the brightness mean calculation sub-subunit is configured to average the brightness component values of all the pixel points in the face region to obtain a brightness mean, which is used as the brightness value, as illustrated by the sketch below.
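As an illustration of this design, here is a minimal sketch in Python with OpenCV. It assumes the face bounding box (x, y, w, h) comes from an upstream face detector, treats OpenCV's CIE Lab conversion as the Lab illumination color model, and uses a threshold of 200 on OpenCV's 0-255 L scale as an assumed placeholder rather than a value taken from this disclosure.

import cv2
import numpy as np

def face_brightness_value(frame_bgr, face_box):
    # Crop the face region, convert it to the Lab color model, and average
    # the L (lightness) component over all pixel points in the region.
    x, y, w, h = face_box
    face_region = frame_bgr[y:y + h, x:x + w]
    lab = cv2.cvtColor(face_region, cv2.COLOR_BGR2LAB)
    return float(np.mean(lab[:, :, 0]))

def is_false_face_by_brightness(frame_bgr, face_box, threshold=200.0):
    # A face brighter than the preset threshold is judged to be a false
    # face, since a self-luminous display tends to render a face brighter
    # than a real face under ambient lighting.
    return face_brightness_value(frame_bgr, face_box) > threshold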
In one possible design, the face position determination unit is further configured to, when the coordinate position is found to be outside the dynamic image display area, determine the coordinate position of the face in multiple frames of adjacent video images, where the multiple frames of adjacent video images are video images adjacent to the at least one frame of video image;
and the false face judgment unit is further configured to judge the face to be a false face if the coordinate position of the face in the multiple frames of adjacent video images is the same as its coordinate position in the at least one frame of video image (see the sketch below).
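A minimal sketch of this fallback check follows; the coordinate tolerance and the number of adjacent frames compared are illustrative assumptions, since the design only requires the coordinate positions to be the same.

def is_false_face_by_static_position(position, adjacent_positions, tol=0):
    # position: (x, y) of the face in the at least one frame of video image.
    # adjacent_positions: (x, y) of the same face in each adjacent frame.
    # A face frozen at identical coordinates across adjacent frames is
    # judged to be a false face, e.g. a printed photograph or poster.
    return all(
        abs(px - position[0]) <= tol and abs(py - position[1]) <= tol
        for (px, py) in adjacent_positions
    )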
For the working process, working details, and technical effects of the apparatus provided in the second aspect of this embodiment, reference may be made to the method for discriminating a false face in the first aspect or any possible design of the first aspect; details are not repeated here.
As shown in fig. 5, a third aspect of this embodiment provides a computer device for executing the method for discriminating a false face according to the first aspect or any possible design of the first aspect. The computer device comprises a memory and a processor that are communicatively connected, where the memory is configured to store a computer program, and the processor is configured to read the computer program and execute the method for discriminating a false face according to the first aspect or any possible design of the first aspect. For example, the memory may include, but is not limited to, a Random-Access Memory (RAM), a Read-Only Memory (ROM), a Flash Memory, a First-In First-Out memory (FIFO), and/or a First-In Last-Out memory (FILO); the processor may be, but is not limited to, a microprocessor of the STM32F105 series. In addition, the computer device may also include, but is not limited to, a power module, a display screen, and other necessary components.
For the working process, working details, and technical effects of the computer device provided in the third aspect of this embodiment, reference may be made to the method for discriminating a false face in the first aspect or any possible design of the first aspect; details are not repeated here.
A fourth aspect of this embodiment provides a computer-readable storage medium storing instructions for implementing the method for discriminating a false face according to the first aspect or any possible design of the first aspect; that is, the computer-readable storage medium has instructions stored thereon which, when executed on a computer, implement the method for discriminating a false face according to the first aspect or any possible design of the first aspect. The computer-readable storage medium is a carrier for storing data and may include, but is not limited to, a floppy disk, an optical disc, a hard disk, a flash memory, a flash drive, and/or a memory stick; the computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device.
For the working process, working details, and technical effects of the computer-readable storage medium provided in the fourth aspect of this embodiment, reference may be made to the method for discriminating a false face in the first aspect or any possible design of the first aspect; details are not repeated here.
A fifth aspect of this embodiment provides a computer program product comprising instructions which, when run on a computer, cause the computer to execute the method for discriminating a false face according to the first aspect or any possible design of the first aspect. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device.
The embodiments described above are merely illustrative. Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the embodiments without inventive effort.
The above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that modifications may still be made to the embodiments described above, or equivalent substitutions may be made for some of their features, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Finally, it should be noted that the present invention is not limited to the above alternative embodiments, and anyone may derive various other forms of products in light of the present invention. The above detailed description should not be construed as limiting the scope of protection of the present invention, which is defined by the claims, with the description serving to interpret the claims.

Claims (10)

1. A method for discriminating a false face, comprising:
acquiring at least one frame of video image, wherein the at least one frame of video image comprises a human face;
determining the coordinate position of the human face in the at least one frame of video image;
judging whether the coordinate position is located within a dynamic image display area in the at least one frame of video image, wherein the dynamic image display area is the imaging area, in the at least one frame of video image, of the display interface of a dynamic image display device;
and if so, judging the face to be a false face.
2. The method of claim 1, wherein the method further comprises:
acquiring two frames of video images corresponding to different moments;
and determining the dynamic image display area according to a similarity calculation result for local video image contents in the two frames of video images, wherein the local video image contents are the video image contents respectively delimited in the two frames of video images by a same dynamically changing local area boundary, and the local area boundary is a boundary used for segmenting a local video image out of a video image.
3. The method according to claim 2, wherein determining the dynamic image display area according to the similarity calculation result for the local video image contents in the two frames of video images comprises any one or any combination of the following modes (A) to (D):
(A) synchronously sliding a first image dividing line horizontally from the left end to the right in the two frames of video images;
during the sliding, calculating in real time the similarity of the video image contents located in two right-side local areas, wherein the two right-side local areas are, in one-to-one correspondence, the local areas to the right of the first image dividing line in the two frames of video images;
when the similarity calculation result reaches the inflection point at which it turns from decreasing to increasing, taking the area to the right of the first image dividing line as the dynamic image display area;
(B) synchronously sliding a second image dividing line horizontally from the right end to the left in the two frames of video images;
during the sliding, calculating in real time the similarity of the video image contents located in two left-side local areas, wherein the two left-side local areas are, in one-to-one correspondence, the local areas to the left of the second image dividing line in the two frames of video images;
when the similarity calculation result reaches the inflection point at which it turns from decreasing to increasing, taking the area to the left of the second image dividing line as the dynamic image display area;
(C) synchronously sliding a third image dividing line vertically downward from the upper end in the two frames of video images;
during the sliding, calculating in real time the similarity of the video image contents located in two lower local areas, wherein the two lower local areas are, in one-to-one correspondence, the local areas below the third image dividing line in the two frames of video images;
when the similarity calculation result reaches the inflection point at which it turns from decreasing to increasing, taking the area below the third image dividing line as the dynamic image display area;
(D) synchronously sliding a fourth image dividing line vertically upward from the lower end in the two frames of video images;
during the sliding, calculating in real time the similarity of the video image contents located in two upper local areas, wherein the two upper local areas are, in one-to-one correspondence, the local areas above the fourth image dividing line in the two frames of video images;
and when the similarity calculation result reaches the inflection point at which it turns from decreasing to increasing, taking the area above the fourth image dividing line as the dynamic image display area.
4. The method of claim 1, wherein, if so, judging the face to be a false face comprises:
acquiring a brightness value of the face in the at least one frame of video image;
and when the brightness value of the face is greater than a preset brightness threshold, judging the face to be a false face.
5. The method of claim 4, wherein acquiring the brightness value of the face in the at least one frame of video image comprises:
cropping the face region of the face out of the at least one frame of video image;
acquiring the brightness component values of all pixel points in the face region under the Lab illumination color model;
and averaging the brightness component values of all the pixel points in the face region to obtain a brightness mean, and taking the brightness mean as the brightness value.
6. The method of claim 1, wherein the method further comprises:
if not, determining the coordinate position of the face in multiple frames of adjacent video images, wherein the multiple frames of adjacent video images are video images adjacent to the at least one frame of video image;
and if the coordinate position of the face in the multiple frames of adjacent video images is the same as the coordinate position of the face in the at least one frame of video image, judging the face to be a false face.
7. A device for discriminating a false face, characterized by comprising a video image acquisition unit, a face position determination unit, a positional relationship judgment unit, and a false face judgment unit that are communicatively connected in sequence;
the video image acquisition unit is configured to acquire at least one frame of video image, the at least one frame of video image containing a human face;
the face position determination unit is configured to determine the coordinate position of the human face in the at least one frame of video image;
the positional relationship judgment unit is configured to judge whether the coordinate position is located within a dynamic image display area in the at least one frame of video image, wherein the dynamic image display area is the imaging area, in the at least one frame of video image, of the display interface of a dynamic image display device;
and the false face judgment unit is configured to judge the human face to be a false face when the coordinate position is found to be located within the dynamic image display area.
8. The apparatus according to claim 7, characterized by further comprising a dynamic region determination unit communicatively connected to the video image acquisition unit and the positional relationship judgment unit, respectively;
the video image acquisition unit is further configured to acquire two frames of video images corresponding to different moments;
and the dynamic region determination unit is configured to determine the dynamic image display area according to a similarity calculation result for local video image contents in the two frames of video images, wherein the local video image contents are the video image contents respectively delimited in the two frames of video images by a same dynamically changing local area boundary, and the local area boundary is a boundary used for segmenting a local video image out of a video image.
9. A computer device, comprising a memory and a processor that are communicatively connected, wherein the memory is configured to store a computer program, and the processor is configured to read the computer program and perform the method of any one of claims 1 to 6.
10. A computer-readable storage medium having instructions stored thereon which, when executed on a computer, cause the computer to perform the method of any one of claims 1 to 6.
CN202010898814.7A 2020-08-31 2020-08-31 Method and device for distinguishing false face and computer equipment Active CN112016482B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010898814.7A CN112016482B (en) 2020-08-31 2020-08-31 Method and device for distinguishing false face and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010898814.7A CN112016482B (en) 2020-08-31 2020-08-31 Method and device for distinguishing false face and computer equipment

Publications (2)

Publication Number Publication Date
CN112016482A true CN112016482A (en) 2020-12-01
CN112016482B CN112016482B (en) 2022-10-25

Family

ID=73503305

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010898814.7A Active CN112016482B (en) 2020-08-31 2020-08-31 Method and device for distinguishing false face and computer equipment

Country Status (1)

Country Link
CN (1) CN112016482B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112668396A (en) * 2020-12-03 2021-04-16 浙江大华技术股份有限公司 Two-dimensional false target identification method, device, equipment and medium
CN113807234B (en) * 2021-09-14 2023-12-19 深圳市木愚科技有限公司 Method, device, computer equipment and storage medium for checking mouth-shaped synthesized video
CN117727122A (en) * 2024-01-25 2024-03-19 上海舜慕智能科技有限公司 Access control gate based on face recognition and identity recognition method

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105184277A (en) * 2015-09-29 2015-12-23 杨晴虹 Living body human face recognition method and device
CN105893920A (en) * 2015-01-26 2016-08-24 阿里巴巴集团控股有限公司 Human face vivo detection method and device
WO2017000218A1 (en) * 2015-06-30 2017-01-05 北京旷视科技有限公司 Living-body detection method and device and computer program product
WO2018086543A1 (en) * 2016-11-10 2018-05-17 腾讯科技(深圳)有限公司 Living body identification method, identity authentication method, terminal, server and storage medium
CN109086645A (en) * 2017-06-13 2018-12-25 阿里巴巴集团控股有限公司 Face identification method, the recognition methods of device and fictitious users, device
CN110008813A (en) * 2019-01-24 2019-07-12 阿里巴巴集团控股有限公司 Face identification method and system based on In vivo detection technology
CN110532877A (en) * 2019-07-26 2019-12-03 江苏邦融微电子有限公司 A kind of single camera face recognition scheme anti-fraud method, system, equipment and storage device
CN111898552A (en) * 2020-07-31 2020-11-06 成都新潮传媒集团有限公司 Method and device for distinguishing person attention target object and computer equipment
CN111898553A (en) * 2020-07-31 2020-11-06 成都新潮传媒集团有限公司 Method and device for distinguishing virtual image personnel and computer equipment
CN112395943A (en) * 2020-10-19 2021-02-23 天翼电子商务有限公司 Detection method for counterfeiting face video based on deep learning

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105893920A (en) * 2015-01-26 2016-08-24 阿里巴巴集团控股有限公司 Human face vivo detection method and device
WO2017000218A1 (en) * 2015-06-30 2017-01-05 北京旷视科技有限公司 Living-body detection method and device and computer program product
CN105184277A (en) * 2015-09-29 2015-12-23 杨晴虹 Living body human face recognition method and device
WO2018086543A1 (en) * 2016-11-10 2018-05-17 腾讯科技(深圳)有限公司 Living body identification method, identity authentication method, terminal, server and storage medium
CN109086645A (en) * 2017-06-13 2018-12-25 阿里巴巴集团控股有限公司 Face identification method, the recognition methods of device and fictitious users, device
CN110008813A (en) * 2019-01-24 2019-07-12 阿里巴巴集团控股有限公司 Face identification method and system based on In vivo detection technology
CN110532877A (en) * 2019-07-26 2019-12-03 江苏邦融微电子有限公司 A kind of single camera face recognition scheme anti-fraud method, system, equipment and storage device
CN111898552A (en) * 2020-07-31 2020-11-06 成都新潮传媒集团有限公司 Method and device for distinguishing person attention target object and computer equipment
CN111898553A (en) * 2020-07-31 2020-11-06 成都新潮传媒集团有限公司 Method and device for distinguishing virtual image personnel and computer equipment
CN112395943A (en) * 2020-10-19 2021-02-23 天翼电子商务有限公司 Detection method for counterfeiting face video based on deep learning

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
J. Ross Beveridge et al.: "The IJCB 2014 PaSC video face and person recognition competition", IEEE International Joint Conference on Biometrics *
OCR Recognition Expert (OCR识别专家): "Face Liveness Detection Technology Based on Random Action Instructions", https://www.elecfans.com/news/1165419.html *
Wuyue Chuanshuo (五月传说): "Face Liveness Detection", https://www.sohu.com/a/270230645_100267646 *
Wei Juan et al.: "Video Face Recognition Based on Online Learning of Local Features", Computer Applications and Software *
Yang Qingchuan et al.: "Diagnosis of Cardiac Abnormalities Based on Moving-Window FICA and SOM Methods", Journal of Nanjing University of Science and Technology *
Hu Qianhe et al.: "Student Position Detection and Face Image Capture Algorithm Based on Classroom Surveillance Video", Computer and Modernization *
Huang Yejue: "Face Liveness Detection Based on Interactive Random Actions", Software Guide *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112668396A (en) * 2020-12-03 2021-04-16 浙江大华技术股份有限公司 Two-dimensional false target identification method, device, equipment and medium
CN113807234B (en) * 2021-09-14 2023-12-19 深圳市木愚科技有限公司 Method, device, computer equipment and storage medium for checking mouth-shaped synthesized video
CN117727122A (en) * 2024-01-25 2024-03-19 上海舜慕智能科技有限公司 Access control gate based on face recognition and identity recognition method
CN117727122B (en) * 2024-01-25 2024-06-11 上海舜慕智能科技有限公司 Access control gate based on face recognition and identity recognition method

Also Published As

Publication number Publication date
CN112016482B (en) 2022-10-25

Similar Documents

Publication Publication Date Title
CN112016482B (en) Method and device for distinguishing false face and computer equipment
CN107133969B (en) A kind of mobile platform moving target detecting method based on background back projection
Cavallaro et al. Shadow-aware object-based video processing
WO2012144732A1 (en) Apparatus and method for compositing image in a portable terminal
CN106096603A (en) A kind of dynamic flame detection method merging multiple features and device
EP1971967A1 (en) Average calculation in color space, particularly for segmentation of video sequences
WO2007076890A1 (en) Segmentation of video sequences
EP1969559A1 (en) Contour finding in segmentation of video sequences
CN111292228A (en) Lens defect detection method
JP2024523865A (en) Screensaver interaction method, device, electronic device, and storage medium
JP2013218612A (en) Image processing apparatus and image processing method
CN102879404A (en) System for automatically detecting medical capsule defects in industrial structure scene
JP3459950B2 (en) Face detection and face tracking method and apparatus
CN108520260B (en) Method for identifying visible foreign matters in bottled oral liquid
EP0780003B1 (en) Method and apparatus for determining the location of a reflective object within a video field
CN106093052A (en) A kind of broken yarn detection method
CN109583414B (en) Indoor road occupation detection method, device, medium and processor based on video detection
CN104813341B (en) Image processing system and image processing method
CN115035147A (en) Matting method, device and system based on virtual shooting and image fusion method
CN104282013B (en) A kind of image processing method and device for foreground target detection
Wei et al. Simulating shadow interactions for outdoor augmented reality with RGBD data
JP2008020369A (en) Image analysis means, image analysis device, inspection device, image analysis program and computer-readable recording medium
WO2019216688A1 (en) Method for estimating light for augmented reality and electronic device thereof
CN110136104B (en) Image processing method, system and medium based on unmanned aerial vehicle ground station
CN112866507B (en) Intelligent panoramic video synthesis method and system, electronic device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant