WO2020093757A1 - Method, apparatus and system for risk detection based on background collaboration

Method, apparatus and system for risk detection based on background collaboration

Info

Publication number
WO2020093757A1
WO2020093757A1 (PCT/CN2019/102129, CN2019102129W)
Authority
WO
WIPO (PCT)
Prior art keywords
video data
background
camera
preset
monitoring device
Prior art date
Application number
PCT/CN2019/102129
Other languages
English (en)
Chinese (zh)
Inventor
李东声
Original Assignee
天地融科技股份有限公司
Priority date
Filing date
Publication date
Application filed by 天地融科技股份有限公司
Publication of WO2020093757A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/64Protecting data integrity, e.g. using checksums, certificates or signatures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07FCOIN-FREED OR LIKE APPARATUS
    • G07F19/00Complete banking systems; Coded card-freed arrangements adapted for dispensing or receiving monies or the like and posting such transactions to existing accounts, e.g. automatic teller machines
    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07FCOIN-FREED OR LIKE APPARATUS
    • G07F19/00Complete banking systems; Coded card-freed arrangements adapted for dispensing or receiving monies or the like and posting such transactions to existing accounts, e.g. automatic teller machines
    • G07F19/20Automatic teller machines [ATMs]
    • G07F19/207Surveillance aspects at ATMs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/44Event detection

Definitions

  • the invention relates to the field of video surveillance, and in particular to a method, device and system for risk detection based on background collaboration.
  • ATM: Automatic Teller Machine.
  • The traditional ATM video surveillance system mainly records video. After an incident, the recorded video is used as evidence, which can help resolve disputes and solve cases. However, such a mechanism only provides after-the-fact forensics and cannot give real-time detection or early warning.
  • The present invention aims to solve at least one of the above problems.
  • the main purpose of the present invention is to provide a risk detection method, device and system based on background collaboration.
  • One aspect of the present invention provides a risk detection method based on background collaboration, including: a first camera performs video collection on the environment to be detected, obtains first video data, and sends the first video data to a monitoring device; a second camera performs video collection on the environment to be detected, obtains second video data, and sends the second video data to the monitoring device, where the first camera and the second camera are set at different positions in the environment to be detected; the monitoring device receives the first video data and the second video data, recognizes the face corresponding to the user to be analyzed in the first video data and the second video data, and determines the user to be analyzed; the monitoring device obtains the video data in the first video data and the second video data that contains the user to be analyzed at the mandatory point, and extracts background features from the video data containing the user to be analyzed at the mandatory point; the monitoring device inputs the extracted background features into a preset background collaborative model and calculates the matching degree between the background features and the preset background collaborative model; the monitoring device compares the matching degree with a preset background threshold, and if the matching degree is lower than the preset background threshold, generates a first comparison result and determines that a preset risk exists.
  • Another aspect of the present invention provides a risk detection system based on background collaboration, including: a first camera, used to perform video collection on the environment to be detected, obtain first video data, and send the first video data to a monitoring device; a second camera, used to perform video collection on the environment to be detected, obtain second video data, and send the second video data to the monitoring device, where the first camera and the second camera are set at different positions in the environment to be detected; and the monitoring device, used to receive the first video data and the second video data, recognize the face corresponding to the user to be analyzed in the first video data and the second video data, and determine the user to be analyzed; obtain the video data in the first video data and the second video data that contains the user to be analyzed at the mandatory point, and extract background features from that video data; input the extracted background features into a preset background collaborative model and calculate the matching degree between the background features and the preset background collaborative model; and compare the matching degree with a preset background threshold, and if the matching degree is lower than the preset background threshold, generate a first comparison result and determine that a preset risk exists.
  • Yet another aspect of the present invention provides a risk detection device based on background collaboration, including: a receiving module, configured to receive first video data obtained by a first camera performing video collection on the environment to be detected, and to receive second video data obtained by a second camera performing video collection on the environment to be detected, where the first camera and the second camera are set at different positions in the environment to be detected; a determination module, configured to recognize the face corresponding to the user to be analyzed in the first video data and the second video data and determine the user to be analyzed; an extraction module, configured to obtain the video data in the first video data and the second video data that contains the user to be analyzed at the mandatory point, and to extract background features from that video data; a calculation module, configured to input the extracted background features into a preset background collaborative model and calculate the matching degree between the background features and the preset background collaborative model; and a judgment module, configured to compare the matching degree with a preset background threshold, and if the matching degree is lower than the preset background threshold, generate a first comparison result and determine that a preset risk exists.
  • The risk detection method, device and system based on background collaboration provided by the embodiments of the present invention set at least two cameras at different positions to identify persons, and analyze the background characteristics of the user to be analyzed when passing the mandatory points. Through this analysis, preset risks (such as illegal or criminal intent) can be discovered in real time, overcoming the drawback that deliberate forgery and other criminal behavior could not be detected under the supervision of separate, uncoordinated cameras.
  • FIG. 1 is a flowchart of a risk detection method based on background collaboration provided by an embodiment of the present invention.
  • FIG. 2 is a schematic structural diagram of a risk detection system based on background collaboration provided by an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of a risk detection device based on background collaboration provided by an embodiment of the present invention.
  • FIG. 1 shows a flowchart of a risk detection method based on background collaboration provided by an embodiment of the present invention.
  • a risk detection method based on background collaboration provided by an embodiment of the present invention includes:
  • the first camera performs video acquisition on the environment to be detected, obtains first video data, and sends the first video data to the monitoring device;
  • the second camera performs video acquisition on the environment to be detected, obtains second video data, and sends the second video data to the monitoring device, wherein the first camera and the second camera are set at different positions in the environment to be detected.
  • the first camera and the second camera are cameras set at different positions in the environment to be detected.
  • For example, the first camera may be a camera set on the ATM machine, and the second camera may be an environmental camera installed in the self-service bank area outside the ATM.
  • more than two cameras may be provided, which is not limited in the present invention.
  • The first camera and the second camera capture video of the mandatory point from different locations, and therefore capture different background characteristics.
  • A mandatory point is a point that the user must pass through when entering the area to be processed (for example, the ATM area) within the environment to be detected.
  • The mandatory points can be set in advance, and there may be one or more of them; this is not specifically limited in the present invention. It is worth noting that, because the positions of the first camera and the second camera are different, the same user passing the same mandatory point may be captured by only one of the first camera and the second camera.
  • the first video data collected by the first camera and the second video data collected by the second camera are sent to the monitoring device in real time, or the collected video data are regularly sent to the monitoring device according to a preset period.
  • the monitoring device receives the first video data and the second video data, recognizes the face corresponding to the user to be analyzed in the first video data and the second video data, and determines the user to be analyzed.
  • The monitoring device may be installed near the cameras or at the monitoring back end.
  • For example, the monitoring device can be installed in the ATM machine or in the bank's back-end monitoring center; this is not specifically limited in the present invention.
  • The monitoring device uses face recognition technology to identify a user in the first video data and a user in the second video data; if the two are determined to be the same user, that user is determined to be the user to be analyzed.
  • If the monitoring device recognizes a user in only one of the first video data and the second video data, that user may be directly regarded as the user to be analyzed.
  • If the users recognized in the two video streams are determined not to be the same person, they are treated as different users to be analyzed and are analyzed separately.
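  • As a non-limiting sketch of this pairing step (the description does not prescribe a particular face recognition algorithm), the snippet below assumes that a separate face recognizer has already produced an embedding vector for each detected face; the similarity threshold and function names are illustrative only.

```python
import numpy as np

SAME_PERSON_THRESHOLD = 0.6  # illustrative value, not taken from the description

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def determine_users_to_analyze(faces_cam1, faces_cam2):
    """faces_cam1 / faces_cam2: lists of dicts with an 'embedding' vector per detected face.
    A face seen by both cameras becomes one user to be analyzed; a face seen by only
    one camera becomes a separate user to be analyzed."""
    users, unmatched_cam2 = [], list(faces_cam2)
    for face1 in faces_cam1:
        best, best_sim = None, SAME_PERSON_THRESHOLD
        for face2 in unmatched_cam2:
            sim = cosine_similarity(face1["embedding"], face2["embedding"])
            if sim > best_sim:
                best, best_sim = face2, sim
        if best is not None:
            unmatched_cam2.remove(best)
            users.append({"cam1": face1, "cam2": best})   # same person in both streams
        else:
            users.append({"cam1": face1, "cam2": None})   # seen by the first camera only
    users.extend({"cam1": None, "cam2": f} for f in unmatched_cam2)
    return users
```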
  • the monitoring device acquires the video data including the user to be analyzed at the mandatory point in the first video data and the second video data, and extracts background features from the video data including the user at the mandatory point.
  • The monitoring device performs background feature extraction only on the video data in which the user is at the mandatory point, so as to avoid extracting background features from invalid data and to improve the efficiency of risk detection.
  • The background features may include any feature of the background markers in the environment, or any combination of such features, that serves to identify the background.
  • For example, they may include the positions of static objects, the shapes of static objects, the number of static objects, and the movement patterns of dynamic objects.
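  • The sketch below illustrates restricting feature extraction to the frames in which the user is at a mandatory point; the person-masking step and the grayscale-histogram descriptor are simplifying assumptions, not features required by the description.

```python
import numpy as np

def extract_background_features(frames, person_boxes, at_mandatory_point):
    """frames: list of HxWx3 uint8 arrays; person_boxes: per-frame (x, y, w, h) or None;
    at_mandatory_point: per-frame bool from an (assumed) mandatory-point detector."""
    descriptors = []
    for frame, box, selected in zip(frames, person_boxes, at_mandatory_point):
        if not selected:                 # skip invalid frames: user not at the mandatory point
            continue
        gray = frame.mean(axis=2)
        mask = np.ones(gray.shape, dtype=bool)
        if box is not None:              # mask out the user so only the background remains
            x, y, w, h = box
            mask[y:y + h, x:x + w] = False
        hist, _ = np.histogram(gray[mask], bins=32, range=(0, 255), density=True)
        descriptors.append(hist)
    # aggregate per-frame descriptors into a single background feature vector
    return np.mean(descriptors, axis=0) if descriptors else None
```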
  • the monitoring device inputs the extracted background features to a preset background collaborative model, and calculates the matching degree between the background features and the preset background collaborative model.
  • The background collaborative model is preset in the monitoring device and is used to analyze the background features.
  • The monitoring device receives in advance training video data collected by the first camera and the second camera, extracts training elements from the training video data of each camera separately, and obtains the preset background collaborative model based on the training elements. By analyzing the background markers within the shooting range of each camera, a background collaborative model is generated, and a reasonable background threshold range is set according to the mandatory points on the different movement trajectories of normal users, improving the intelligence and accuracy of the judgment.
  • the ATM camera and the environmental camera transmit the video captured in the monitoring range to the monitoring device.
  • The monitoring device extracts the background markers 1, 2, ..., n, and analyzes them to calculate the background collaborative model; the user may reach the ATM along different paths.
  • The monitoring device analyzes the background markers extracted along each path and sets a reasonable background threshold and a reasonable background judgment method, thereby establishing the preset background collaborative model of the present invention. The specific background threshold and judgment method can be set according to different application scenarios and are not specifically limited in the present invention.
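  • A minimal sketch of assembling such a preset background collaborative model from training video follows; representing the model as a reference descriptor plus a background threshold per (camera, mandatory point) is an assumption made for illustration, since the description leaves the concrete form of the model open.

```python
import numpy as np

def similarity(feature, reference):
    # map cosine similarity into [0, 1] so it can be read as a matching degree
    cos = np.dot(feature, reference) / (np.linalg.norm(feature) * np.linalg.norm(reference))
    return float((cos + 1.0) / 2.0)

def train_background_model(training_features, margin=0.05):
    """training_features: {(camera_id, mandatory_point): [background feature vectors
    extracted from training videos of normal users]}.  Returns, per (camera, point),
    a reference descriptor and a background threshold."""
    model = {}
    for key, vectors in training_features.items():
        stacked = np.stack(vectors)
        reference = stacked.mean(axis=0)
        # place the threshold just below the weakest match observed on normal training paths
        sims = [similarity(v, reference) for v in stacked]
        model[key] = {"reference": reference, "threshold": max(0.0, min(sims) - margin)}
    return model
```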
  • the extracted background features are input into a preset background collaborative model, and the matching degree between the extracted background features and the background collaborative model is calculated.
  • the matching degree is a numerical value, for example, it can be a percentage value.
  • the monitoring device compares the matching degree with a preset background threshold. If the matching degree is lower than the preset background threshold, a first comparison result is generated to determine that there is a preset risk.
  • If the matching degree is lower than the preset background threshold, the background features are considered not to match the background collaborative model, and in that case the preset risk is considered to exist; for example, the video from which the background features were extracted is at risk or the user to be analyzed is at risk, such as the video having been tampered with, the camera having been hijacked, or the user interfering with the camera's normal collection.
  • When the matching degree is not lower than the preset background threshold, the monitoring device generates a second comparison result and determines that there is no preset risk. Since the matching degree between the background features and the background collaborative model is high enough, it can be considered that there is no risk, for example, no risk in the video and no risk in the user to be analyzed.
  • For example, when a user arrives at the ATM, the monitoring device performs background analysis on the received video containing that user, inputs the background features of the user captured by each camera into the background collaborative model, and compares the output matching degree with the background threshold to obtain a comparison result, so as to determine whether a risk exists according to that comparison result.
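  • Taken together, the comparison reduces to the small decision sketched below; the cosine-based matching degree and the percentage scale are illustrative assumptions consistent with the example above, not a prescribed formula.

```python
import numpy as np

def matching_degree(feature, reference):
    """Return the matching degree as a percentage (cosine similarity is an assumed metric)."""
    cos = np.dot(feature, reference) / (np.linalg.norm(feature) * np.linalg.norm(reference))
    return float((cos + 1.0) / 2.0) * 100.0

def detect_risk(feature, model_entry):
    """model_entry holds the reference descriptor and preset background threshold
    (on a 0-1 scale) for one (camera, mandatory point), as built during training."""
    degree = matching_degree(feature, model_entry["reference"])
    threshold = model_entry["threshold"] * 100.0   # same percentage scale as the degree
    if degree < threshold:
        return {"comparison_result": "first", "preset_risk": True, "degree": degree}
    return {"comparison_result": "second", "preset_risk": False, "degree": degree}
```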
  • the monitoring device performs an alarm operation after determining that the user to be analyzed has a preset risk.
  • The alarm operation may be performed by an alarm device in the environment to be detected, for example by sounding a sound-and-light alarm; by an alarm device in the monitoring room of the back-end monitoring personnel, for example by displaying an alarm on the monitoring screen or sounding an audible alarm; or by sending a short-message alert to the monitoring personnel or the police.
  • At least two cameras set at different positions identify persons, and by analyzing the background characteristics of the user to be analyzed when passing the mandatory points, preset risks (such as illegal or criminal intent) can be discovered in real time, overcoming the drawback that deliberate forgery and other criminal behavior could not be detected under the supervision of separate, uncoordinated cameras.
  • The first video data collected by the first camera is encrypted by a security chip provided in the first camera, and the second video data collected by the second camera is encrypted by a security chip provided in the second camera. The first camera sends the encrypted first video data to the monitoring device, and the second camera sends the encrypted second video data to the monitoring device; after receiving the encrypted first video data and the encrypted second video data, the monitoring device decrypts them to obtain the first video data and the second video data. Encrypted transmission of the video data improves the security of the transmission and prevents the video data from being cracked and tampered with.
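  • The description does not name a cipher; the sketch below assumes AES-GCM (via the Python cryptography package) purely to illustrate encrypting a chunk of video data on the camera side and decrypting it in the monitoring device.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def camera_encrypt(video_chunk: bytes, key: bytes, camera_id: bytes) -> bytes:
    """Role of the camera's security chip: encrypt a chunk of video data before sending."""
    nonce = os.urandom(12)                             # must be unique per message
    ciphertext = AESGCM(key).encrypt(nonce, video_chunk, camera_id)
    return nonce + ciphertext                          # prepend the nonce for transport

def monitor_decrypt(message: bytes, key: bytes, camera_id: bytes) -> bytes:
    """Role of the monitoring device: recover the video data; raises if it was tampered with."""
    nonce, ciphertext = message[:12], message[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, camera_id)

# usage sketch: in practice the key would be provisioned into the security chip
key = AESGCM.generate_key(bit_length=256)
sent = camera_encrypt(b"...video bytes...", key, b"camera-1")
assert monitor_decrypt(sent, key, b"camera-1") == b"...video bytes..."
```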
  • The first video data collected by the first camera is signed by a security chip provided in the first camera to obtain first signature data, and the second video data collected by the second camera is signed by a security chip provided in the second camera to obtain second signature data. The first camera sends the first video data and the first signature data to the monitoring device, and the second camera sends the second video data and the second signature data to the monitoring device. After receiving the first video data with the first signature data and the second video data with the second signature data, the monitoring device verifies the first signature data and the second signature data, and uses the first video data and the second video data for subsequent analysis only after the signature verification passes.
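  • The signature scheme is likewise not specified; the sketch below uses Ed25519 from the Python cryptography package to illustrate the sign-then-verify flow between a camera's security chip and the monitoring device.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# key pair assumed to live inside the camera's security chip;
# the monitoring device holds only the corresponding public key
camera_private_key = Ed25519PrivateKey.generate()
camera_public_key = camera_private_key.public_key()

def camera_sign(video_chunk: bytes) -> bytes:
    """Security chip: produce signature data for a chunk of video data."""
    return camera_private_key.sign(video_chunk)

def monitor_verify(video_chunk: bytes, signature: bytes) -> bool:
    """Monitoring device: use the video data for analysis only if verification passes."""
    try:
        camera_public_key.verify(signature, video_chunk)
        return True
    except InvalidSignature:
        return False

signature = camera_sign(b"...video bytes...")
assert monitor_verify(b"...video bytes...", signature)
```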
  • FIG. 2 shows a schematic structural diagram of a risk detection system based on background collaboration provided by an embodiment of the present invention.
  • the risk detection system based on background collaboration provided by an embodiment of the present invention applies the above-mentioned method.
  • The structure of the risk detection system is briefly described below; for matters not covered here, refer to the above description of the risk detection method based on background collaboration.
  • the risk detection system based on background collaboration provided by an embodiment of the present invention includes:
  • the first camera 201 is used to collect video for the environment to be detected, obtain first video data, and send the first video data to the monitoring device;
  • the second camera 202 is used to collect video in the environment to be detected, obtain second video data, and send the second video data to the monitoring device, wherein the first camera and the second camera are set at different positions in the environment to be detected;
  • the monitoring device 203 is used to receive the first video data and the second video data, recognize the face corresponding to the user to be analyzed in the first video data and the second video data, and determine the user to be analyzed; obtain the video data in the first video data and the second video data that contains the user to be analyzed at the mandatory point, and extract background features from that video data; input the extracted background features into the preset background collaborative model and calculate the matching degree between the background features and the preset background collaborative model; and compare the matching degree with the preset background threshold, and if the matching degree is lower than the preset background threshold, generate a first comparison result and determine that a preset risk exists.
  • At least two cameras set at different positions identify persons, and by analyzing the background characteristics of the user to be analyzed when passing the mandatory points, preset risks (such as illegal or criminal intent) can be discovered in real time, overcoming the drawback that deliberate forgery and other criminal behavior could not be detected under the supervision of separate, uncoordinated cameras.
  • the monitoring device 203 is further configured to generate a second comparison result when the matching degree is not lower than a preset background threshold to determine that there is no preset risk. Since the matching degree between the background features and the background collaborative model is high enough, it can be considered that there is no risk, for example, there is no risk in the video or there is no risk in the user to be analyzed.
  • The monitoring device 203 is further used to receive in advance training video data collected by the first camera and the second camera, extract training elements from the training video data of each camera separately, and obtain the preset background collaborative model based on the training elements. By analyzing the background markers within the shooting range of each camera, a background collaborative model is generated, and a reasonable background threshold range is set according to the mandatory points on the different movement trajectories of normal users, improving the intelligence and accuracy of the judgment.
  • the monitoring device 203 is also used to perform an alarm operation after determining that the user to be analyzed has a preset risk. By alerting when a risk occurs, the efficiency of the risk management of self-service banks and ATMs is further improved.
  • The first video data collected by the first camera 201 is encrypted by a security chip provided in the first camera, and the second video data collected by the second camera 202 is encrypted by a security chip provided in the second camera. The first camera 201 sends the encrypted first video data to the monitoring device 203, and the second camera 202 sends the encrypted second video data to the monitoring device 203; after receiving the encrypted first video data and the encrypted second video data, the monitoring device 203 decrypts them to obtain the first video data and the second video data. Encrypted transmission of the video data improves the security of the transmission and prevents the video data from being cracked and tampered with.
  • The first video data collected by the first camera 201 is signed by a security chip provided in the first camera to obtain first signature data, and the second video data collected by the second camera 202 is signed by a security chip provided in the second camera to obtain second signature data. The first camera 201 sends the first video data and the first signature data to the monitoring device 203, and the second camera 202 sends the second video data and the second signature data to the monitoring device 203. After receiving the first video data with the first signature data and the second video data with the second signature data, the monitoring device 203 verifies the first signature data and the second signature data, and uses the first video data and the second video data for subsequent analysis only after the verification passes.
  • By signing the video data, the authenticity of its source can be ensured and the video data can be prevented from being tampered with.
  • FIG. 3 shows a schematic structural diagram of a risk detection device based on background collaboration provided by an embodiment of the present invention.
  • The risk detection device based on background collaboration is the monitoring device in the system shown in FIG. 2.
  • the risk detection device based on background collaboration provided by the embodiment of the invention applies the above system and method. The following only briefly describes the structure of the risk detection device based on background collaboration provided by the embodiment of the present invention.
  • a risk detection device based on background collaboration provided by an embodiment of the present invention includes:
  • the receiving module 2031 is configured to receive the first video data obtained by the first camera performing video collection on the environment to be detected, and to receive the second video data obtained by the second camera performing video collection on the environment to be detected, wherein the first camera and the second camera are set at different positions in the environment to be detected;
  • the determining module 2032 is configured to identify the face corresponding to the user to be analyzed in the first video data and the second video data, and determine the user to be analyzed;
  • the extraction module 2033 is used to obtain the video data in the first video data and the second video data that contains the user to be analyzed at the mandatory point, and to extract background features from the video data containing the user to be analyzed at the mandatory point;
  • the calculation module 2034 is configured to input the extracted background features into a preset background collaborative model, and calculate the matching degree between the background features and the preset background collaborative model;
  • the judgment module 2035 is configured to compare the matching degree with a preset background threshold, and if the matching degree is lower than the preset background threshold, generate a first comparison result and determine that there is a preset risk.
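  • A skeleton of how these five modules could be wired together inside the monitoring device is sketched below; the class and method names are illustrative only, and each module is assumed to implement the corresponding step described above.

```python
class RiskDetectionDevice:
    """Illustrative composition of the receiving, determination, extraction,
    calculation and judgment modules of the monitoring device."""

    def __init__(self, receiving, determination, extraction, calculation, judgment):
        self.receiving = receiving          # module 2031: receives first/second video data
        self.determination = determination  # module 2032: face recognition, user to be analyzed
        self.extraction = extraction        # module 2033: background features at mandatory points
        self.calculation = calculation      # module 2034: matching degree vs. preset model
        self.judgment = judgment            # module 2035: comparison with preset background threshold

    def run_once(self):
        video1, video2 = self.receiving.receive()
        for user in self.determination.determine_users(video1, video2):
            features = self.extraction.extract(video1, video2, user)
            degree = self.calculation.matching_degree(features)
            yield self.judgment.compare(degree)   # first or second comparison result
```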
  • At least two cameras set at different positions identify persons, and by analyzing the background characteristics of the user to be analyzed when passing the mandatory points, preset risks (such as illegal or criminal intent) can be discovered in real time, overcoming the drawback that deliberate forgery and other criminal behavior could not be detected under the supervision of separate, uncoordinated cameras.
  • the judgment module 2035 is further configured to generate a second comparison result when the matching degree is not lower than a preset background threshold to determine that there is no preset risk. Since the matching degree between the background features and the background collaborative model is high enough, it can be considered that there is no risk, for example, there is no risk in the video or there is no risk in the user to be analyzed.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Security & Cryptography (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Finance (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Software Systems (AREA)
  • Bioethics (AREA)
  • Psychiatry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Social Psychology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)
  • Alarm Systems (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The present invention relates to a method, apparatus and system for risk detection based on background collaboration. The method comprises the following steps: a first camera sends first video data to a monitoring device; a second camera sends second video data to the monitoring device, the first camera and the second camera being arranged at different positions in the environment to be detected; the monitoring device recognizes the face corresponding to a user to be analyzed in the first video data and in the second video data to determine said user, obtains from the first video data and the second video data the video data containing said user at the mandatory point, extracts background features from the video data containing said user at the mandatory point, inputs the extracted background features into a preset background collaborative model, and calculates the matching degree between the background features and the preset background collaborative model; the monitoring device compares the matching degree with a preset background threshold and, if the matching degree is lower than the preset background threshold, determines that the preset risk exists.
PCT/CN2019/102129 2018-11-06 2019-08-23 Procédé, appareil et système de détection de risques basés sur une collaboration d'arrière-plan WO2020093757A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811311971.2A CN111144181A (zh) 2018-11-06 2018-11-06 一种基于背景协同的风险检测方法、装置及系统
CN201811311971.2 2018-11-06

Publications (1)

Publication Number Publication Date
WO2020093757A1 (fr) 2020-05-14

Family

ID=70515848

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/102129 WO2020093757A1 (fr) 2018-11-06 2019-08-23 Procédé, appareil et système de détection de risques basés sur une collaboration d'arrière-plan

Country Status (2)

Country Link
CN (1) CN111144181A (fr)
WO (1) WO2020093757A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3920526A1 (fr) * 2020-06-02 2021-12-08 Canon Kabushiki Kaisha Appareil de traitement, appareil de capture d'images et procédé de traitement

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112804502B (zh) * 2021-03-10 2022-07-12 重庆第二师范学院 基于人工智能的视频监控系统、方法、存储介质及设备

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103841367A (zh) * 2012-11-21 2014-06-04 深圳市赛格导航科技股份有限公司 一种监控系统
CN105761261B (zh) * 2016-02-17 2018-11-16 南京工程学院 一种检测摄像头遭人为恶意破坏的方法
CN106897716A (zh) * 2017-04-27 2017-06-27 广东工业大学 一种宿舍安全监控系统及方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101227600A (zh) * 2008-02-02 2008-07-23 北京海鑫科金高科技股份有限公司 一种用于自助银行与自动柜员机的智能监控装置和方法
CN101609581A (zh) * 2008-06-16 2009-12-23 云南正卓信息技术有限公司 Atm机的异常视频预警装置
CN101794481A (zh) * 2009-02-04 2010-08-04 深圳市先进智能技术研究所 Atm自助银行监控系统和方法
US20160125247A1 (en) * 2014-11-05 2016-05-05 Vivotek Inc. Surveillance system and surveillance method
CN107368728A (zh) * 2017-07-21 2017-11-21 重庆凯泽科技股份有限公司 可视化报警管理系统及方法

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3920526A1 (fr) * 2020-06-02 2021-12-08 Canon Kabushiki Kaisha Appareil de traitement, appareil de capture d'images et procédé de traitement
US11375096B2 (en) 2020-06-02 2022-06-28 Canon Kabushiki Kaisha Authenticating images from a plurality of imaging units

Also Published As

Publication number Publication date
CN111144181A (zh) 2020-05-12

Similar Documents

Publication Publication Date Title
CN101266704B (zh) 基于人脸识别的atm安全认证与预警方法
CN106204815B (zh) 一种基于人脸检测和识别的门禁系统
TWI776796B (zh) 金融終端安全防護系統以及金融終端安全防護方法
RU2680747C1 (ru) Устройство наблюдения за терминалом, банкомат, способ принятия решения и программа
WO2015135406A1 (fr) Procédé et système pour authentifier des caractéristiques biologiques d'un utilisateur
CN105912908A (zh) 基于红外的真人活体身份验证方法
CN101556717A (zh) 一种atm智能安保系统及监测方法
CN101609581A (zh) Atm机的异常视频预警装置
WO2018129687A1 (fr) Procédé et dispositif anti-contrefaçon d'empreintes digitales
US20210006558A1 (en) Method, apparatus and system for performing authentication using face recognition
KR20180050968A (ko) 온라인 시험 관리 방법
JP2009098814A (ja) 入退場管理方法および顔画像認識セキュリティシステム
CN103714631A (zh) 基于人脸识别的atm取款机智能监控系统
WO2020093757A1 (fr) Procédé, appareil et système de détection de risques basés sur une collaboration d'arrière-plan
CN104978784A (zh) 基于图像关联的联动门门禁控制系统及方法
CN207232409U (zh) 一种身份信息整合式安检系统
KR20180057167A (ko) 무인 금융거래 시스템 및 이를 이용한 무인 금융거래 방법
CN111462417A (zh) 一种无人银行的多信息验证系统和多信息验证方法
JP2011014059A (ja) 行動分析システムおよび行動分析方法
CN116564017A (zh) 货币陷阱检测
TWM337796U (en) Face recognition system and security system comprising same
CN111144183B (zh) 一种基于人脸凹凸度的风险检测方法、装置及系统
CN111144180B (zh) 一种监控视频的风险检测方法及系统
CN111144182B (zh) 一种视频中人脸风险检测方法及系统
Granger et al. Results from evaluation of three commercial off-the-shelf face recognition systems on Chokepoint dataset

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19881911

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19881911

Country of ref document: EP

Kind code of ref document: A1