CN111144182B - Method and system for detecting face risk in video


Info

Publication number
CN111144182B
Authority
CN
China
Prior art keywords
face
preset
concave
video data
matching
Prior art date
Legal status
Active
Application number
CN201811312260.7A
Other languages
Chinese (zh)
Other versions
CN111144182A (en)
Inventor
李东声
Current Assignee
Tendyron Corp
Original Assignee
Tendyron Corp
Priority date
Filing date
Publication date
Application filed by Tendyron Corp
Priority to CN201811312260.7A
Publication of CN111144182A
Application granted
Publication of CN111144182B

Classifications

    • G06V 40/161: human faces; detection, localisation, normalisation
    • G06F 18/22: pattern recognition; matching criteria, e.g. proximity measures
    • G06V 20/41: scene-specific elements in video content; higher-level, semantic clustering, classification or understanding of video scenes
    • G06V 20/46: scene-specific elements in video content; extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V 40/168: human faces; feature extraction, face representation
    • G07F 19/207: automatic teller machines [ATMs]; surveillance aspects at ATMs


Abstract

The invention discloses a method and a system for detecting face risk in a video. In the method, a monitoring device extracts background features, inputs them into a preset background collaborative model, and calculates the matching degree between the background features and the model. The monitoring device also extracts the face first concave-convex degree and the face second concave-convex degree of each grid region and calculates a face concave-convex degree matching value for each grid region; it judges in turn whether each grid region's matching value falls within a preset threshold range, obtaining N1 (respectively N2) matching results, of which M1 (respectively M2) indicate values that do not fall within the range. If the matching degree is lower than a preset background threshold, or the ratio of M1 to N1 is larger than a preset threshold, or the ratio of M2 to N2 is larger than the preset threshold, a first comparison result is generated and it is determined that a preset risk exists.

Description

Method and system for detecting face risk in video
Technical Field
The invention relates to the field of video monitoring, in particular to a method and a system for detecting face risks in videos.
Background
An existing automatic teller machine (ATM for short) is generally arranged in a self-service bank; after a bank card is inserted into the ATM, bank counter services such as withdrawal, deposit and transfer can be performed on it. Owing to the public accessibility and convenience of self-service banks and ATMs and the particularity of their environment, criminal activity targeting them has increased in recent years.
However, the conventional ATM video monitoring system mainly records video; after an event occurs, the recording is used for post-event evidence collection, which helps settle disputes and solve cases. Such a mechanism only provides after-the-fact forensics and cannot achieve real-time detection or early warning.
Disclosure of Invention
The present invention aims to solve one of the above problems.
The invention mainly aims to provide a method and a system for detecting human face risks in a video.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
the invention provides a method for detecting face risk in a video, which comprises the following steps: a first camera carries out video acquisition on an environment to be detected to obtain first video data, and sends the first video data to a monitoring device; a second camera carries out video acquisition on the environment to be detected to obtain second video data, and sends the second video data to the monitoring device, wherein the first camera and the second camera are arranged at different positions in the environment to be detected; the monitoring device receives the first video data and the second video data, identifies the faces corresponding to a user to be analyzed in the first video data and the second video data, determines the user to be analyzed, acquires the video data in the first video data and the second video data that contains the user to be analyzed at the necessary passing points, extracts background features from that video data, inputs the extracted background features into a preset background collaborative model, and calculates the matching degree between the background features and the preset background collaborative model; the monitoring device divides the face region of the user to be analyzed in the first video data into N1 grid regions, extracts the face first concave-convex degree of each grid region, compares the extracted face first concave-convex degree of each grid region with a preset face concave-convex degree matching model, and calculates the face first concave-convex degree matching value of each grid region; it divides the face region of the user to be analyzed in the second video data into N2 grid regions, extracts the face second concave-convex degree of each grid region, compares the extracted face second concave-convex degree of each grid region with the preset face concave-convex degree matching model, and calculates the face second concave-convex degree matching value of each grid region, wherein N1 is greater than or equal to 1 and is a natural number, and N2 is greater than or equal to 1 and is a natural number; the monitoring device compares the matching degree with a preset background threshold, judges in turn whether the face first concave-convex degree matching value of each grid region falls within a preset threshold range to obtain N1 matching results, obtains from the N1 matching results M1 matching results indicating that a grid region's face first concave-convex degree matching value does not fall within the preset threshold range, judges in turn whether the face second concave-convex degree matching value of each grid region falls within the preset threshold range to obtain N2 matching results, and obtains from the N2 matching results M2 matching results indicating that a grid region's face second concave-convex degree matching value does not fall within the preset threshold range; if the matching degree is lower than the preset background threshold, or the ratio of M1 to N1 is larger than a preset threshold, or the ratio of M2 to N2 is larger than the preset threshold, the monitoring device generates a first comparison result and determines that a preset risk exists, wherein M1 is not larger than N1 and is a natural number, and M2 is not larger than N2 and is a natural number.
After the monitoring device extracts the first concave-convex degree of the face of each grid region, before the extracted first concave-convex degree of the face of each grid region is compared with a preset face concave-convex degree matching model, the method further comprises the following steps: the monitoring device carries out distortion correction on the first concave-convex degree of the face of each grid area; comparing the extracted first concave-convex degree of the face of each grid region with a preset face concave-convex degree matching model comprises the following steps: comparing the first concave-convex degree of the face of each grid region obtained after distortion correction with a preset face concave-convex degree matching model; and after the monitoring device extracts the second concave-convex degree of the face of each grid region, before comparing the extracted second concave-convex degree of the face of each grid region with a preset face concave-convex degree matching model, the method further comprises the following steps: the monitoring device carries out distortion correction on the second concave-convex degree of the face of each grid area; comparing the extracted second concave-convex degree of the face of each grid region with a preset face concave-convex degree matching model comprises the following steps: and comparing the second concave-convex degree of the face of each grid region obtained after the distortion correction with a preset face concave-convex degree matching model.
Wherein, the method further comprises: when the matching degree is not lower than the preset background threshold, the ratio of M1 to N1 is not larger than the preset threshold, and the ratio of M2 to N2 is not larger than the preset threshold, the monitoring device generates a second comparison result and determines that no preset risk exists.
Wherein, the method further comprises: the monitoring device receives in advance training video data acquired by the first camera and the second camera; the monitoring device extracts training elements from the training video data and trains on the training elements to obtain the preset background collaborative model and the preset face concave-convex degree matching model.
Wherein, the method further comprises: the monitoring device executes an alarm operation after determining that the user to be analyzed presents the preset risk.
In another aspect, the present invention provides a system for detecting face risk in a video, including: the first camera, used for carrying out video acquisition on the environment to be detected to obtain first video data and sending the first video data to the monitoring device; the second camera, used for carrying out video acquisition on the environment to be detected to obtain second video data and sending the second video data to the monitoring device, wherein the first camera and the second camera are arranged at different positions in the environment to be detected; and the monitoring device, used for receiving the first video data and the second video data, identifying the faces corresponding to the user to be analyzed in the first video data and the second video data, determining the user to be analyzed, acquiring the video data in the first video data and the second video data that contains the user to be analyzed at the necessary passing points, extracting background features from that video data, inputting the extracted background features into a preset background collaborative model, and calculating the matching degree between the background features and the preset background collaborative model; dividing the face region of the user to be analyzed in the first video data into N1 grid regions, extracting the face first concave-convex degree of each grid region, comparing the extracted face first concave-convex degree of each grid region with a preset face concave-convex degree matching model, and calculating the face first concave-convex degree matching value of each grid region; dividing the face region of the user to be analyzed in the second video data into N2 grid regions, extracting the face second concave-convex degree of each grid region, comparing the extracted face second concave-convex degree of each grid region with the preset face concave-convex degree matching model, and calculating the face second concave-convex degree matching value of each grid region, wherein N1 is greater than or equal to 1 and is a natural number, and N2 is greater than or equal to 1 and is a natural number; comparing the matching degree with a preset background threshold; judging in turn whether the face first concave-convex degree matching value of each grid region falls within a preset threshold range to obtain N1 matching results, and obtaining from them M1 matching results indicating that a grid region's face first concave-convex degree matching value does not fall within the preset threshold range; judging in turn whether the face second concave-convex degree matching value of each grid region falls within the preset threshold range to obtain N2 matching results, and obtaining from them M2 matching results indicating that a grid region's face second concave-convex degree matching value does not fall within the preset threshold range; and, if the matching degree is lower than the preset background threshold, or the ratio of M1 to N1 is larger than a preset threshold, or the ratio of M2 to N2 is larger than the preset threshold, generating a first comparison result and determining that a preset risk exists, wherein M1 is not larger than N1 and is a natural number, and M2 is not larger than N2 and is a natural number.
The monitoring device is further used for carrying out distortion correction on the first concave-convex degree of the face of each grid region after the first concave-convex degree of the face of each grid region is extracted and before the extracted first concave-convex degree of the face of each grid region is compared with a preset face concave-convex degree matching model; the monitoring device is specifically used for comparing the obtained first concave-convex degree of the face of each grid region after distortion correction with a preset face concave-convex degree matching model; the monitoring device is also used for carrying out distortion correction on the second concave-convex degree of the face of each grid region after the second concave-convex degree of the face of each grid region is extracted and before the extracted second concave-convex degree of the face of each grid region is compared with a preset face concave-convex degree matching model; and the monitoring device is specifically used for comparing the second concave-convex degree of the face of each grid region obtained after distortion correction with a preset face concave-convex degree matching model.
The monitoring device is further configured to generate a second comparison result and determine that no preset risk exists when the matching degree is not lower than a preset background threshold, and a ratio of M1 to N1 is not greater than a preset threshold, and a ratio of M2 to N2 is not greater than a preset threshold.
The monitoring device is also used for receiving training video data acquired by the first camera and the second camera in advance; and respectively extracting training elements from the training video data, and training according to the training elements to obtain a preset background collaborative model and a preset human face concave-convex degree matching model.
The monitoring device is also used for executing alarm operation after determining that the user to be analyzed has the preset risk.
Therefore, with the method and the system for detecting face risk in a video provided by the embodiments of the invention, at least two cameras arranged at different positions are used to identify a person, and the background features captured when the user to be analyzed passes a necessary passing point are analyzed. In addition, face recognition is performed on the video data transmitted by the cameras: the face is divided into a plurality of grid regions, the face concave-convex degree of each grid region is matched against the preset face concave-convex degree matching model, and it is judged whether the face concave-convex degree matching value of each grid region falls within the preset threshold range. A grid region whose matching value does not fall within the range is determined not to show a normal face; if the number of such abnormal grid regions is large enough, the face is determined to be a risk face and the existence of a preset risk is determined. In this way the person can be identified and a preset risk (such as an intended crime) can be discovered in real time.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the description below are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of a method for detecting a face risk in a video according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a system for detecting face risk in a video according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "up", "down", "front", "back", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are used only for convenience in describing the present invention and for simplicity in description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be construed as limiting the present invention. Furthermore, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or quantity or location.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
Embodiments of the present invention will be described in further detail below with reference to the accompanying drawings.
Fig. 1 shows a flowchart of a method for detecting a face risk in a video according to an embodiment of the present invention, and referring to fig. 1, the method for detecting a face risk in a video according to an embodiment of the present invention includes:
s101, a first camera carries out video acquisition on an environment to be detected to obtain first video data, and the first video data are sent to a monitoring device; the second camera carries out video acquisition to the environment that awaits measuring, obtains second video data to with second video data transmission to monitoring device, wherein, first camera and second camera setting are waiting to detect the different positions in the environment.
Specifically, the first camera and the second camera are cameras arranged at different positions in the environment to be detected. For example, when the environment to be detected is a self-service bank, the first camera may be a camera mounted on the ATM, and the second camera may be an environment camera arranged in the self-service bank outside the ATM. Of course, in practical applications more than two cameras may be provided, which the invention does not limit.
The first camera and the second camera capture video of the necessary passing points from different positions, so their backgrounds have different characteristics. A necessary passing point is a point that a user must pass when entering the environment to be detected to handle business; in the embodiment of the invention the necessary passing points may be preset, and there may be one or several of them, which the invention does not limit. It is worth noting that, because the first camera and the second camera are arranged at different positions, the same user at the same necessary passing point may be captured by only one of the two cameras.
The first video data acquired by the first camera and the second video data acquired by the second camera are sent to the monitoring device in real time, or are sent to the monitoring device periodically according to a preset period.
S102, the monitoring device receives the first video data and the second video data, recognizes faces corresponding to users to be analyzed in the first video data and the second video data, determines the users to be analyzed, obtains video data, including the points where the users to be analyzed are located, in the first video data and the second video data, extracts background features from the video data, including the points where the users to be analyzed are located, inputs the extracted background features into a preset background collaborative model, and calculates the matching degree between the background features and the preset background collaborative model; the monitoring device divides a face area of a user to be analyzed in the first video data into N1 grid areas, extracts a face first concave-convex degree of each grid area, compares the extracted face first concave-convex degree of each grid area with a preset face concave-convex degree matching model, calculates to obtain a face first concave-convex degree matching value of each grid area, divides the face area of the user to be analyzed in the second video data into N2 grid areas, extracts a face second concave-convex degree of each grid area, compares the extracted face second concave-convex degree of each grid area with the preset face concave-convex degree matching model, and calculates to obtain a face second concave-convex degree matching value of each grid area, wherein N1 is more than or equal to 1 and is a natural number, and N2 is more than or equal to 1 and is a natural number.
Specifically, the monitoring device may be arranged near the cameras or at the back end; for example, in a self-service banking environment it may be arranged in the ATM or in the bank's monitoring back end, which the invention does not limit. After the monitoring device receives the first video data and the second video data, it identifies a user from the first video data by face recognition, identifies a user from the second video data, and, when the two users are determined to be the same user, determines that user as the user to be analyzed.
If the monitoring device identifies only one user in the first video data or in the second video data, this user can be considered directly as the user to be analyzed.
If the user identified by the monitoring device from the first video data and the user identified from the second video data are not the same user, the two are treated as different users to be analyzed and are analyzed separately.
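The patent does not specify how the monitoring device decides that the users seen by the two cameras are the same person. A common technique is to compare face embeddings; the following is a minimal sketch of that idea, assuming a hypothetical embed_face function (any face-recognition backbone could supply it) and an assumed similarity threshold, neither of which comes from the patent.

```python
import numpy as np

SAME_USER_THRESHOLD = 0.6  # assumed value; the patent gives no threshold


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def is_same_user(face_a: np.ndarray, face_b: np.ndarray, embed_face) -> bool:
    """Decide whether two face crops, one from each camera, show the same user.

    embed_face is a hypothetical callable mapping a face crop to a
    fixed-length feature vector; it stands in for whatever face
    recognizer the monitoring device actually uses.
    """
    sim = cosine_similarity(embed_face(face_a), embed_face(face_b))
    return sim >= SAME_USER_THRESHOLD
```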
Video data in which the user to be analyzed appears at a necessary passing point is valid data, whereas video data of a necessary passing point that does not contain the user to be analyzed is invalid data, and analysing invalid data contributes little to risk detection. The monitoring device therefore extracts background features only from the video data that contains the user to be analyzed at a necessary passing point, avoiding background-feature extraction on invalid data and improving the efficiency of risk detection.
The background features may comprise any feature of the background markers in the environment, or any combination of such features, serving as an identifier of the background; for example, position information of static objects, shape information of static objects, quantity information of static objects, and the motion rules of dynamic objects.
Specifically, a background collaborative model is preset in the monitoring device for analysing the background features. As an optional implementation of the embodiment of the present invention, the monitoring device receives in advance training video data acquired by the first camera and the second camera, extracts training elements from the training video data, and trains on the training elements to obtain the preset background collaborative model. Analysing the background markers within each camera's shooting range to generate the background collaborative model, and setting a reasonable background threshold range for judgment according to the necessary passing points on the different movement tracks of normal users, improves the intelligence and accuracy of the judgment.
In a specific application, the ATM camera and the environment camera transmit the video captured within their monitoring ranges to the monitoring device; the monitoring device extracts background markers 1, 2, …, n and obtains the background collaborative model after analysis and calculation. Users reach the ATM along different paths and pass different cameras, so the monitoring device analyses the background markers extracted along each path and sets a reasonable background threshold and background judgment mode, thereby establishing the preset background collaborative model; the background threshold and judgment mode can be set according to the particular application scenario, which is not described in detail here.
The extracted background features are input into the preset background collaborative model, and the matching degree between the background features and the model is calculated; the matching degree is a numerical value, for example a percentage.
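The patent leaves the internal form of the background collaborative model open. As a minimal sketch, assuming the model can be reduced to the mean feature vector of the training backgrounds and the matching degree to a cosine-similarity percentage, it could look as follows; the class name and the mapping to a percentage are illustrative assumptions, not the patent's method.

```python
import numpy as np


class BackgroundCollaborativeModel:
    """Toy stand-in for the preset background collaborative model."""

    def __init__(self, training_features: np.ndarray):
        # training_features: (num_samples, feature_dim) array of background
        # features extracted from the training video data
        self.mean = training_features.mean(axis=0)

    def matching_degree(self, features: np.ndarray) -> float:
        """Matching degree as a percentage in [0, 100], as the text suggests."""
        cos = float(np.dot(features, self.mean) /
                    (np.linalg.norm(features) * np.linalg.norm(self.mean)))
        return 100.0 * max(0.0, cos)
```

A preset risk would then be suspected whenever the returned matching degree falls below the preset background threshold, as described in S103 below.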
Specifically, because a normal face is not flat, when light emitted from a point is projected onto the face, the concave-convex degree at each position of the face can be observed. If the face is divided into N grid regions, the concave-convex degree of the light projected onto each grid region differs from region to region and follows a certain pattern. Accordingly, the face region of the user to be analyzed in the first video data can be divided into N1 grid regions and the face first concave-convex degree of each grid region extracted, and the face region of the user to be analyzed in the second video data divided into N2 grid regions and the face second concave-convex degree of each grid region extracted.
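The patent does not say how the concave-convex degree of a grid region is measured from the projected light. Purely as an assumed proxy, the sketch below splits a face region into a grid and scores each cell by the mean absolute Laplacian of the grayscale image, a crude measure of the local shading variation that facial relief produces under point lighting.

```python
import cv2
import numpy as np


def grid_concavity(face_roi: np.ndarray, rows: int, cols: int) -> np.ndarray:
    """Split a face region into rows*cols grid regions (N1 = rows*cols for the
    first camera, N2 for the second) and return one crude concave-convex
    degree per region. The Laplacian-based score is an illustrative
    assumption, not the patent's measure."""
    gray = cv2.cvtColor(face_roi, cv2.COLOR_BGR2GRAY)
    relief = np.abs(cv2.Laplacian(gray, cv2.CV_64F))
    h, w = gray.shape
    degrees = np.empty(rows * cols)
    for r in range(rows):
        for c in range(cols):
            cell = relief[r * h // rows:(r + 1) * h // rows,
                          c * w // cols:(c + 1) * w // cols]
            degrees[r * cols + c] = cell.mean()
    return degrees
```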
As an optional implementation of the embodiment of the present invention, after a camera captures the face information and the monitoring device extracts the face concave-convex degree of each grid region, and before it compares the extracted face concave-convex degree of each grid region with the preset face concave-convex degree matching model, the method for detecting face risk in a video may further include: the monitoring device performs distortion correction on the face concave-convex degree of each grid region. Analysing the distortion-corrected values improves the accuracy of the monitoring device's analysis. Specifically, after the monitoring device extracts the face first concave-convex degree of each grid region and before it compares the extracted values with the preset face concave-convex degree matching model, the method further includes: the monitoring device performs distortion correction on the face first concave-convex degree of each grid region, and the comparison is then performed between the distortion-corrected face first concave-convex degree of each grid region and the preset face concave-convex degree matching model. Likewise, after the monitoring device extracts the face second concave-convex degree of each grid region and before the comparison, the monitoring device performs distortion correction on the face second concave-convex degree of each grid region, and the comparison is performed between the distortion-corrected face second concave-convex degree of each grid region and the preset face concave-convex degree matching model.
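The distortion-correction step is likewise unspecified. One plausible reading is ordinary lens-distortion correction applied to the frame before the grid degrees are extracted; the sketch below uses OpenCV's undistort with camera intrinsics assumed to come from a prior calibration (the matrix and coefficients shown are placeholders).

```python
import cv2
import numpy as np

# Placeholder calibration results; in practice obtained via cv2.calibrateCamera.
CAMERA_MATRIX = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
DIST_COEFFS = np.array([-0.2, 0.05, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3


def undistort_frame(frame: np.ndarray) -> np.ndarray:
    """Remove lens distortion so the extracted concave-convex degrees are not
    skewed by the optics before they are compared with the matching model."""
    return cv2.undistort(frame, CAMERA_MATRIX, DIST_COEFFS)
```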
Comparing the face concave-convex degree of each grid region with the preset face concave-convex degree matching model determines whether it accords with the concave-convex degree of a normal face, for the subsequent analysis.
As an optional implementation manner of the embodiment of the present invention, the method for detecting face risk in a video further includes: the monitoring device receives in advance training video data acquired by the cameras, extracts training elements from the training video data, and trains on the training elements to obtain the preset face concave-convex degree matching model. Performing face recognition on the faces captured by the cameras and analysing their concave-convex degrees generates the face concave-convex degree matching model, and a reasonable preset threshold range set according to the matching model of normal users is used for judgment, improving the intelligence and accuracy of the judgment.
It is worth noting that, in the method for detecting face risk in a video, the monitoring device receives in advance training video data acquired by the first camera and the second camera and extracts training elements from it; the preset background collaborative model and the preset face concave-convex degree matching model trained on these training elements may be obtained in the same training session or in separate training sessions, which the invention does not limit.
S103, the monitoring device compares the matching degree with the preset background threshold; it judges in turn whether the face first concave-convex degree matching value of each grid region falls within the preset threshold range to obtain N1 matching results, and obtains from them M1 matching results indicating that a grid region's face first concave-convex degree matching value does not fall within the preset threshold range; it likewise judges in turn whether the face second concave-convex degree matching value of each grid region falls within the preset threshold range to obtain N2 matching results, and obtains from them M2 matching results indicating that a grid region's face second concave-convex degree matching value does not fall within the preset threshold range. If the matching degree is lower than the preset background threshold, or the ratio of M1 to N1 is larger than the preset threshold, or the ratio of M2 to N2 is larger than the preset threshold, the monitoring device generates a first comparison result and determines that a preset risk exists, wherein M1 is not larger than N1 and is a natural number, and M2 is not larger than N2 and is a natural number.
Specifically, when the matching degree is lower than the preset background threshold, the background features are considered not to match the background collaborative model, and in that case a preset risk may exist, for example: the video from which the background features were extracted is at risk, or the user to be analyzed is at risk, for instance the video has been tampered with, the camera has been hijacked, or the user has interfered with the camera's normal acquisition.
The monitoring device judges in turn whether the face concave-convex degree matching value of each grid region falls within the preset threshold range, i.e. whether the matching value is greater than or equal to a first preset value and less than or equal to a second preset value, where the second preset value is greater than the first preset value. A matching value smaller than the first preset value or larger than the second preset value does not fall within the preset threshold range, indicating that the face in that grid region may be a risk rather than a normal face, such as a masked face or a face in a photograph. In this manner a preset risk can be identified.
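Putting the pieces together, the decision rule of S103 follows directly from the text: a grid region is abnormal when its matching value leaves the interval between the first and second preset values, and a preset risk is declared when the background matching degree is too low or when the abnormal fraction M1/N1 or M2/N2 exceeds the preset threshold. A sketch, with all threshold values as placeholders since the patent presets but never discloses them:

```python
FIRST_PRESET = 0.4        # lower bound of the preset threshold range (placeholder)
SECOND_PRESET = 0.9       # upper bound, greater than the lower bound (placeholder)
RATIO_THRESHOLD = 0.3     # preset threshold for M1/N1 and M2/N2 (placeholder)
BACKGROUND_THRESHOLD = 60.0  # preset background threshold, percent (placeholder)


def abnormal_ratio(matching_values) -> float:
    """Fraction M/N of grid regions whose matching value falls outside
    [FIRST_PRESET, SECOND_PRESET]."""
    abnormal = sum(1 for v in matching_values
                   if v < FIRST_PRESET or v > SECOND_PRESET)
    return abnormal / len(matching_values)


def preset_risk_exists(matching_degree: float, values1, values2) -> bool:
    """True corresponds to the first comparison result (a preset risk exists);
    False corresponds to the second comparison result (no preset risk)."""
    return (matching_degree < BACKGROUND_THRESHOLD
            or abnormal_ratio(values1) > RATIO_THRESHOLD   # M1 / N1
            or abnormal_ratio(values2) > RATIO_THRESHOLD)  # M2 / N2
```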
As an optional embodiment of the present invention, when the matching degree is not lower than the preset background threshold, and the ratio of M1 to N1 is not greater than the preset threshold, and the ratio of M2 to N2 is not greater than the preset threshold, the monitoring device generates a second comparison result, and determines that there is no preset risk. Because the matching degree between the background features and the background collaborative model is high enough, and the concave-convex degree matching value of the face in the grid region meets the preset threshold range, no risk can be considered to exist, for example: there is no risk to the video or to the user to be analyzed.
In a specific application, for example, in an ATM environment, when a user arrives at the ATM, the monitoring device performs background analysis according to a received video containing user characteristics, inputs the background characteristics of the user appearing in each camera into a background collaborative model, and compares the output matching degree with a background threshold value to obtain a comparison result 1, so as to determine whether a risk exists according to the comparison result 1.
Optionally, as an optional embodiment of the present invention, the monitoring device performs an alarm operation after determining that the user to be analyzed presents the preset risk. The alarm operation may be an alarm raised by an alarm device in the environment to be detected, for example by sound and light; or an alarm raised by an alarm device in the monitoring room of back-end monitoring personnel, for example a warning or sound on a monitoring display screen; or a short message sent to monitoring personnel or the police, and so on. Alarming when a risk arises further improves the efficiency of risk handling for self-service banks and ATMs.
Therefore, with the method for detecting face risk in a video provided by the embodiment of the invention, at least two cameras arranged at different positions are used to identify a person, and the background features captured when the user to be analyzed passes a necessary passing point are analyzed. In addition, face recognition is performed on the video data transmitted by the cameras: the face is divided into a plurality of grid regions, the face concave-convex degree of each grid region is matched against the preset face concave-convex degree matching model, and it is judged whether the face concave-convex degree matching value of each grid region falls within the preset threshold range. A grid region whose matching value does not fall within the range is determined not to show a normal face; if the number of such abnormal grid regions is large enough, the face is determined to be a risk face and the existence of a preset risk is determined. In this way the person can be identified and a preset risk (such as an intended crime) can be discovered in real time.
As an optional embodiment of the present invention, first video data collected by a first camera is encrypted by a security chip disposed in the first camera, second video data collected by a second camera is encrypted by a security chip disposed in the second camera, the first camera sends the encrypted first video data to a monitoring device, and the second camera sends the encrypted second video data to the monitoring device; and after receiving the encrypted first video data and the encrypted second video data, the monitoring device decrypts the encrypted first video data and the encrypted second video data to obtain the first video data and the second video data. By carrying out encryption transmission on the video data, the security of video data transmission is improved, and the video data is prevented from being tampered after being cracked.
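The patent states that a security chip encrypts the video data but names no cipher or key-management scheme. As a minimal sketch, assuming a pre-shared symmetric key and an authenticated cipher, the camera-side and monitoring-device-side operations could look like this (the Python cryptography library stands in for the chip):

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

KEY = AESGCM.generate_key(bit_length=256)  # pre-shared key: an assumed scheme


def encrypt_video(plaintext: bytes) -> bytes:
    """Camera side: encrypt a chunk of video data, nonce prepended."""
    nonce = os.urandom(12)
    return nonce + AESGCM(KEY).encrypt(nonce, plaintext, None)


def decrypt_video(blob: bytes) -> bytes:
    """Monitoring-device side: recover the video data; AES-GCM authentication
    makes tampering raise an exception instead of yielding altered video."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(KEY).decrypt(nonce, ciphertext, None)
```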
As an optional embodiment of the present invention, the first video data collected by the first camera is signed by the security chip arranged in the first camera to obtain first signature data, and the second video data collected by the second camera is signed by the security chip arranged in the second camera to obtain second signature data; the first camera sends the first video data and the first signature data to the monitoring device, and the second camera sends the second video data and the second signature data to the monitoring device. After receiving the first video data and the first signature data, and the second video data and the second signature data, the monitoring device verifies the first signature data and the second signature data, and performs the subsequent analysis with the first video data and the second video data only after verification passes. Signing the video data ensures the authenticity of the video data source and prevents the video data from being tampered with.
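The signature scheme is equally unspecified; the sketch below shows the sign-then-verify flow with ECDSA over SHA-256, again with the cryptography library standing in for the security chip and with the key pair assumed to be provisioned in advance.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())  # held by the camera's chip
public_key = private_key.public_key()                  # known to the monitoring device


def sign_video(video_data: bytes) -> bytes:
    """Camera side: produce the signature data sent along with the video."""
    return private_key.sign(video_data, ec.ECDSA(hashes.SHA256()))


def verify_video(video_data: bytes, signature: bytes) -> bool:
    """Monitoring-device side: analyse the video only if this returns True."""
    try:
        public_key.verify(signature, video_data, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False
```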
Fig. 2 shows a schematic structural diagram of the system for detecting face risk in a video according to an embodiment of the present invention. The system applies the method described above, so only its structure is briefly described below; for everything else, refer to the description of the method for detecting face risk in a video provided by the embodiment of the present invention. Referring to fig. 2, the system for detecting face risk in a video according to an embodiment of the present invention includes:
the first camera 201 is configured to perform video acquisition on an environment to be detected, obtain first video data, and send the first video data to the monitoring device;
the second camera 202 is configured to perform video acquisition on an environment to be detected, obtain second video data, and send the second video data to the monitoring device, where the first camera and the second camera are arranged at different positions in the environment to be detected;
the monitoring device 203 is used for receiving the first video data and the second video data, identifying the faces corresponding to the user to be analyzed in the first video data and the second video data, determining the user to be analyzed, acquiring the video data in the first video data and the second video data that contains the user to be analyzed at the necessary passing points, extracting background features from that video data, inputting the extracted background features into a preset background collaborative model, and calculating the matching degree between the background features and the preset background collaborative model; dividing the face region of the user to be analyzed in the first video data into N1 grid regions, extracting the face first concave-convex degree of each grid region, comparing the extracted face first concave-convex degree of each grid region with a preset face concave-convex degree matching model, and calculating the face first concave-convex degree matching value of each grid region; dividing the face region of the user to be analyzed in the second video data into N2 grid regions, extracting the face second concave-convex degree of each grid region, comparing the extracted face second concave-convex degree of each grid region with the preset face concave-convex degree matching model, and calculating the face second concave-convex degree matching value of each grid region, wherein N1 is greater than or equal to 1 and is a natural number, and N2 is greater than or equal to 1 and is a natural number; comparing the matching degree with a preset background threshold; judging in turn whether the face first concave-convex degree matching value of each grid region falls within a preset threshold range to obtain N1 matching results, and obtaining from them M1 matching results indicating that a grid region's face first concave-convex degree matching value does not fall within the preset threshold range; judging in turn whether the face second concave-convex degree matching value of each grid region falls within the preset threshold range to obtain N2 matching results, and obtaining from them M2 matching results indicating that a grid region's face second concave-convex degree matching value does not fall within the preset threshold range; and, if the matching degree is lower than the preset background threshold, or the ratio of M1 to N1 is larger than the preset threshold, or the ratio of M2 to N2 is larger than the preset threshold, generating a first comparison result and determining that a preset risk exists, wherein M1 is not larger than N1 and is a natural number, and M2 is not larger than N2 and is a natural number.
Therefore, with the system for detecting face risk in a video provided by the embodiment of the invention, at least two cameras arranged at different positions are used to identify a person, and the background features captured when the user to be analyzed passes a necessary passing point are analyzed. In addition, face recognition is performed on the video data transmitted by the cameras: the face is divided into a plurality of grid regions, the face concave-convex degree of each grid region is matched against the preset face concave-convex degree matching model, and it is judged whether the face concave-convex degree matching value of each grid region falls within the preset threshold range. A grid region whose matching value does not fall within the range is determined not to show a normal face; if the number of such abnormal grid regions is large enough, the face is determined to be a risk face and the existence of a preset risk is determined. In this way the person can be identified and a preset risk (such as an intended crime) can be discovered in real time.
As an optional embodiment of the present invention, the monitoring device 203 is further configured to, after extracting the first concave-convex degree of the face in each mesh region, perform distortion correction on the first concave-convex degree of the face in each mesh region before comparing the extracted first concave-convex degree of the face in each mesh region with a preset face concave-convex degree matching model; the monitoring device 203 is specifically configured to compare the first concave-convex degree of the face of each mesh region obtained after the distortion correction with a preset face concave-convex degree matching model; the monitoring device 203 is also used for carrying out distortion correction on the second concavity and convexity of the face of each grid region after the second concavity and convexity of the face of each grid region are extracted and before the extracted second concavity and convexity of the face of each grid region are compared with a preset face concavity and convexity matching model; the monitoring device 203 is specifically configured to compare the second human face concavity and convexity of each mesh region obtained after the distortion correction with a preset human face concavity and convexity matching model. And after distortion correction, the distortion correction is sent to the monitoring device for analysis, so that the analysis accuracy of the monitoring device is improved.
As an optional embodiment of the present invention, the monitoring device 203 is further configured to generate a second comparison result when the matching degree is not lower than the preset background threshold, and the ratio of M1 to N1 is not greater than the preset threshold, and the ratio of M2 to N2 is not greater than the preset threshold, so as to determine that there is no preset risk. Because the matching degree between the background features and the background collaborative model is high enough, and the concave-convex degree matching value of the face in the grid region conforms to the preset threshold range, no risk can be considered to exist, for example: the video is not at risk or the user to be analyzed is not at risk.
As an optional embodiment of the present invention, the monitoring device 203 is further configured to receive in advance training video data acquired by the first camera and the second camera, extract training elements from the training video data, and train on the training elements to obtain the preset background collaborative model and the preset face concave-convex degree matching model. Analysing the background markers within each camera's shooting range generates the background collaborative model, and a reasonable background threshold range is set for judgment according to the necessary passing points on the different movement tracks of normal users; performing face recognition on the faces captured by the cameras and analysing the face concave-convex degrees generates the face concave-convex degree matching model, and a reasonable preset threshold range is set for judgment according to the matching model of normal users. Both improve the intelligence and accuracy of the judgment.
As an optional embodiment of the present invention, the monitoring device 203 is further configured to execute an alarm operation after determining that the user to be analyzed presents the preset risk. Alarming when a risk arises further improves the efficiency of risk handling for self-service banks and ATMs.
As an optional embodiment of the present invention, first video data collected by the first camera 201 is encrypted by a security chip disposed in the first camera, second video data collected by the second camera 202 is encrypted by a security chip disposed in the second camera, the first camera 201 sends the encrypted first video data to the monitoring apparatus, and the second camera 202 sends the encrypted second video data to the monitoring apparatus 203; after receiving the encrypted first video data and the encrypted second video data, the monitoring device 203 decrypts the encrypted first video data and the encrypted second video data to obtain the first video data and the second video data. By carrying out encryption transmission on the video data, the security of video data transmission is improved, and the video data is prevented from being tampered after being cracked.
As an optional embodiment of the present invention, the first video data collected by the first camera 201 is signed by the security chip arranged in the first camera to obtain first signature data, and the second video data collected by the second camera 202 is signed by the security chip arranged in the second camera to obtain second signature data; the first camera 201 sends the first video data and the first signature data to the monitoring device, and the second camera 202 sends the second video data and the second signature data to the monitoring device 203. After receiving the first video data and the first signature data, and the second video data and the second signature data, the monitoring device 203 verifies the first signature data and the second signature data, and performs the subsequent analysis with the first video data and the second video data only after verification passes. Signing the video data ensures the authenticity of the video data source and prevents the video data from being tampered with.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following technologies, which are well known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried out in the method for implementing the above embodiment may be implemented by hardware that is related to instructions of a program, and the program may be stored in a computer readable storage medium, and when executed, the program includes one or a combination of the steps of the method embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made in the above embodiments by those of ordinary skill in the art without departing from the principle and spirit of the present invention. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (10)

1. A method for detecting face risk in a video, characterized by comprising the following steps:
a first camera performs video capture of an environment to be detected to obtain first video data, and sends the first video data to a monitoring device;
a second camera performs video capture of the environment to be detected to obtain second video data, and sends the second video data to the monitoring device, wherein the first camera and the second camera are arranged at different positions in the environment to be detected;
the monitoring device receives the first video data and the second video data, identifies the face corresponding to a user to be analyzed in the first video data and the second video data, determines the user to be analyzed, acquires the video data containing the user to be analyzed at a necessary passing point in the first video data and the second video data, extracts background features from the video data containing the user to be analyzed at the necessary passing point, inputs the extracted background features into a preset background collaborative model, and calculates the matching degree between the background features and the preset background collaborative model;
and
the monitoring device divides the face region of the user to be analyzed in the first video data into N1 grid regions, extracts the first face concave-convex degree of each grid region, compares the extracted first face concave-convex degree of each grid region with a preset face concave-convex degree matching model, and calculates a first face concave-convex degree matching value for each grid region; the monitoring device divides the face region of the user to be analyzed in the second video data into N2 grid regions, extracts the second face concave-convex degree of each grid region, compares the extracted second face concave-convex degree of each grid region with the preset face concave-convex degree matching model, and calculates a second face concave-convex degree matching value for each grid region, wherein N1 is greater than or equal to 1 and is a natural number, and N2 is greater than or equal to 1 and is a natural number;
the monitoring device compares the matching degree with a preset background threshold; if the matching degree is lower than the preset background threshold, or if the monitoring device judges in turn whether the first face concave-convex degree matching value of each grid region falls within a preset threshold range to obtain N1 matching results, of which M1 matching results indicate that the first face concave-convex degree matching value of the grid region does not fall within the preset threshold range, and the ratio of M1 to N1 is greater than a preset threshold, or if the monitoring device judges in turn whether the second face concave-convex degree matching value of each grid region falls within the preset threshold range to obtain N2 matching results, of which M2 matching results indicate that the second face concave-convex degree matching value of the grid region does not fall within the preset threshold range, and the ratio of M2 to N2 is greater than the preset threshold, the monitoring device generates a first comparison result and determines that a preset risk exists, wherein M1 is less than or equal to N1 and is a natural number, and M2 is less than or equal to N2 and is a natural number.
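For readability, the three-way risk decision of claim 1 can be summarized in a short Python sketch. This is a minimal illustration under assumed interfaces; the function and parameter names (risk_decision, match_range, and so on) are hypothetical, not taken from the disclosure.

from typing import Sequence

def risk_decision(
    background_match: float,          # matching degree from the background collaborative model
    grid_matches_1: Sequence[float],  # N1 per-grid matching values from the first video data
    grid_matches_2: Sequence[float],  # N2 per-grid matching values from the second video data
    bg_threshold: float,              # preset background threshold
    match_range: tuple,               # preset threshold range (low, high) for grid matching values
    ratio_threshold: float,           # preset threshold for the M-to-N ratio
) -> bool:
    """Return True when a preset risk is determined to exist."""
    lo, hi = match_range

    def mismatch_ratio(values: Sequence[float]) -> float:
        # M of the N grid matching values fall outside the preset range.
        m = sum(1 for v in values if not (lo <= v <= hi))
        return m / len(values)

    return (
        background_match < bg_threshold
        or mismatch_ratio(grid_matches_1) > ratio_threshold
        or mismatch_ratio(grid_matches_2) > ratio_threshold
    )

Any one of the three conditions suffices to trigger the first comparison result, mirroring the "or" structure of the claim.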
2. The method of claim 1,
after the monitoring device extracts the first face concave-convex degree of each grid region, and before the extracted first face concave-convex degree of each grid region is compared with the preset face concave-convex degree matching model, the method further comprises:
the monitoring device performs distortion correction on the first face concave-convex degree of each grid region;
the step of comparing the extracted first face concave-convex degree of each grid region with the preset face concave-convex degree matching model comprises:
comparing the first face concave-convex degree of each grid region obtained after distortion correction with the preset face concave-convex degree matching model;
and
after the monitoring device extracts the second face concave-convex degree of each grid region, and before the extracted second face concave-convex degree of each grid region is compared with the preset face concave-convex degree matching model, the method further comprises:
the monitoring device performs distortion correction on the second face concave-convex degree of each grid region;
the step of comparing the extracted second face concave-convex degree of each grid region with the preset face concave-convex degree matching model comprises:
comparing the second face concave-convex degree of each grid region obtained after distortion correction with the preset face concave-convex degree matching model.
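The patent does not name a distortion-correction method. One plausible reading, sketched below purely as an assumption, is lens-distortion correction of a per-pixel concave-convex (depth-like) map using calibrated camera intrinsics, here via OpenCV's undistort; the calibration values are invented for illustration.

import numpy as np
import cv2

def correct_distortion(roughness_map: np.ndarray,
                       camera_matrix: np.ndarray,
                       dist_coeffs: np.ndarray) -> np.ndarray:
    """Undistort a 2-D concave-convex map using camera calibration data."""
    return cv2.undistort(roughness_map, camera_matrix, dist_coeffs)

# Assumed calibration for a 640x480 sensor: focal lengths, principal point,
# and radial/tangential distortion coefficients (k1, k2, p1, p2, k3).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.12, 0.05, 0.0, 0.0, 0.0])
raw_map = np.random.rand(480, 640).astype(np.float32)
corrected_map = correct_distortion(raw_map, K, dist)

Correcting before grid matching keeps the per-grid comparison from being biased by lens geometry near the image edges.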
3. The method of claim 1 or 2, further comprising:
when the matching degree is not lower than the preset background threshold, the ratio of M1 to N1 is not greater than the preset threshold, and the ratio of M2 to N2 is not greater than the preset threshold, the monitoring device generates a second comparison result and determines that no preset risk exists.
4. The method of claim 1 or 2, further comprising:
the monitoring device receives training video data acquired in advance by the first camera and the second camera;
and the monitoring device extracts training elements from the training video data respectively, and performs training according to the training elements to obtain the preset background collaborative model and the preset face concave-convex degree matching model.
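Claim 4 leaves the model family open. As a hedged sketch only, the "training" could be as simple as fitting per-dimension mean/variance templates to features extracted from the training video; train_template and matching_degree below are hypothetical names, not the patent's method.

import numpy as np

def train_template(feature_vectors: np.ndarray) -> dict:
    """Fit a per-dimension mean/std template from stacked feature vectors."""
    return {
        "mean": feature_vectors.mean(axis=0),
        "std": feature_vectors.std(axis=0) + 1e-8,  # guard against zero variance
    }

def matching_degree(template: dict, feature_vector: np.ndarray) -> float:
    """Map the normalized distance to the template into a (0, 1] match score."""
    z = (feature_vector - template["mean"]) / template["std"]
    return float(np.exp(-np.linalg.norm(z) / z.size))

# Feature rows would come from the training video captured by the two cameras.
training_features = np.random.rand(200, 64)
background_model = train_template(training_features)
score = matching_degree(background_model, training_features[0])

The same template-fitting idea would apply to the face concave-convex degree matching model, with per-grid features in place of background features.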
5. The method of claim 1 or 2, further comprising:
and the monitoring device performs an alarm operation after determining that the user to be analyzed presents the preset risk.
6. A system for detecting face risk in a video, comprising:
a first camera, a second camera, and a monitoring device, wherein the first camera is configured to perform video capture of an environment to be detected to obtain first video data and send the first video data to the monitoring device;
the second camera is configured to perform video capture of the environment to be detected to obtain second video data and send the second video data to the monitoring device, wherein the first camera and the second camera are arranged at different positions in the environment to be detected;
the monitoring device is configured to receive the first video data and the second video data, identify the face corresponding to a user to be analyzed in the first video data and the second video data, determine the user to be analyzed, acquire the video data containing the user to be analyzed at a necessary passing point in the first video data and the second video data, extract background features from the video data containing the user to be analyzed at the necessary passing point, input the extracted background features into a preset background collaborative model, and calculate the matching degree between the background features and the preset background collaborative model; divide the face region of the user to be analyzed in the first video data into N1 grid regions, extract the first face concave-convex degree of each grid region, compare the extracted first face concave-convex degree of each grid region with a preset face concave-convex degree matching model, and calculate a first face concave-convex degree matching value for each grid region; divide the face region of the user to be analyzed in the second video data into N2 grid regions, extract the second face concave-convex degree of each grid region, compare the extracted second face concave-convex degree of each grid region with the preset face concave-convex degree matching model, and calculate a second face concave-convex degree matching value for each grid region, wherein N1 is greater than or equal to 1 and is a natural number, and N2 is greater than or equal to 1 and is a natural number; and compare the matching degree with a preset background threshold; if the matching degree is lower than the preset background threshold, or if N1 matching results are obtained by judging in turn whether the first face concave-convex degree matching value of each grid region falls within a preset threshold range, of which M1 matching results indicate that the first face concave-convex degree matching value of the grid region does not fall within the preset threshold range, and the ratio of M1 to N1 is greater than a preset threshold, or if N2 matching results are obtained by judging in turn whether the second face concave-convex degree matching value of each grid region falls within the preset threshold range, of which M2 matching results indicate that the second face concave-convex degree matching value of the grid region does not fall within the preset threshold range, and the ratio of M2 to N2 is greater than the preset threshold, generate a first comparison result and determine that a preset risk exists, wherein M1 is less than or equal to N1 and is a natural number, and M2 is less than or equal to N2 and is a natural number.
7. The system according to claim 6, wherein the monitoring device is further configured to, after extracting the first face concave-convex degree of each grid region and before comparing the extracted first face concave-convex degree of each grid region with the preset face concave-convex degree matching model, perform distortion correction on the first face concave-convex degree of each grid region;
the monitoring device is specifically configured to compare the first face concave-convex degree of each grid region obtained after distortion correction with the preset face concave-convex degree matching model;
and
the monitoring device is further configured to, after extracting the second face concave-convex degree of each grid region and before comparing the extracted second face concave-convex degree of each grid region with the preset face concave-convex degree matching model, perform distortion correction on the second face concave-convex degree of each grid region;
the monitoring device is specifically configured to compare the second face concave-convex degree of each grid region obtained after distortion correction with the preset face concave-convex degree matching model.
8. The system according to claim 6 or 7, wherein the monitoring device is further configured to generate a second comparison result and determine that no preset risk exists when the matching degree is not lower than the preset background threshold, the ratio of M1 to N1 is not greater than the preset threshold, and the ratio of M2 to N2 is not greater than the preset threshold.
9. The system according to claim 6 or 7, wherein the monitoring device is further configured to receive training video data acquired in advance by the first camera and the second camera, extract training elements from the training video data respectively, and perform training according to the training elements to obtain the preset background collaborative model and the preset face concave-convex degree matching model.
10. The system according to claim 6 or 7, wherein the monitoring device is further configured to perform an alarm operation after determining that the user to be analyzed has a preset risk.
CN201811312260.7A 2018-11-06 2018-11-06 Method and system for detecting face risk in video Active CN111144182B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811312260.7A CN111144182B (en) 2018-11-06 2018-11-06 Method and system for detecting face risk in video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811312260.7A CN111144182B (en) 2018-11-06 2018-11-06 Method and system for detecting face risk in video

Publications (2)

Publication Number Publication Date
CN111144182A CN111144182A (en) 2020-05-12
CN111144182B (en) 2023-04-07

Family

ID=70516073

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811312260.7A Active CN111144182B (en) 2018-11-06 2018-11-06 Method and system for detecting face risk in video

Country Status (1)

Country Link
CN (1) CN111144182B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1822025A (en) * 2006-03-08 2006-08-23 北京邮电大学 Parallel and distributing type identifying human face based on net
CN104994281A (en) * 2015-06-30 2015-10-21 广东欧珀移动通信有限公司 Method for correcting face distortion and terminal
WO2017016283A1 (en) * 2015-07-30 2017-02-02 中兴通讯股份有限公司 Video monitoring method, device and system
CN106650671A (en) * 2016-12-27 2017-05-10 深圳英飞拓科技股份有限公司 Human face identification method, apparatus and system
CN106897716A (en) * 2017-04-27 2017-06-27 广东工业大学 A kind of dormitory safety monitoring system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on an Intelligent Network Teaching System Based on Facial Expression Recognition; Feng Mantang et al.; Computer Technology and Development; 2011-06-10 (No. 06); full text *

Also Published As

Publication number Publication date
CN111144182A (en) 2020-05-12

Similar Documents

Publication Publication Date Title
CN102201061B (en) Intelligent safety monitoring system and method based on multilevel filtering face recognition
CN102004904B (en) Automatic teller machine-based safe monitoring device and method and automatic teller machine
CN103714631B (en) ATM cash dispenser intelligent monitor system based on recognition of face
CN102833478B (en) Fault-tolerant background model
RU2680747C1 (en) Device for observing the terminal, attachment, decision making and program
CN103873825A (en) ATM (automatic teller machine) intelligent monitoring system and method
CN110390229B (en) Face picture screening method and device, electronic equipment and storage medium
TW201723967A (en) Financial terminal security system and financial terminal security method
CN105426869A (en) Face recognition system and recognition method based on railway security check
CN101556717A (en) ATM intelligent security system and monitoring method
CN113052029A (en) Abnormal behavior supervision method and device based on action recognition and storage medium
CN109961587A (en) A kind of monitoring system of self-service bank
CN112464030B (en) Suspicious person determination method and suspicious person determination device
CN111144181A (en) Risk detection method, device and system based on background collaboration
CN112016509B (en) Personnel station abnormality reminding method and device
TWI671701B (en) System and method for detecting trading behavior
CN111144182B (en) Method and system for detecting face risk in video
CN112992372A (en) Epidemic situation risk monitoring method, device, equipment, storage medium and program product
CN111144180B (en) Risk detection method and system for monitoring video
CN109215150A (en) Face is called the roll and method of counting and its system
CN111144183B (en) Risk detection method, device and system based on face concave-convex degree
CN111145455A (en) Method and system for detecting face risk in surveillance video
CN113537034A (en) Cash receiving loss prevention method and system
CN111147806A (en) Video content risk detection method, device and system
CN111147807A (en) Risk detection method, device and system based on information synchronization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant