CN114936363A - Identity characteristic rapid calculation method with privacy security - Google Patents


Info

Publication number
CN114936363A
Authority
CN
China
Prior art keywords
identity
feature
image frame
precision
mapping table
Prior art date
Legal status
Granted
Application number
CN202210497378.1A
Other languages
Chinese (zh)
Other versions
CN114936363B (en)
Inventor
林宇
赵宇迪
施侃
Current Assignee
Shanghai Shuchuan Data Technology Co ltd
Original Assignee
Shanghai Shuchuan Data Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Shuchuan Data Technology Co ltd filed Critical Shanghai Shuchuan Data Technology Co ltd
Priority to CN202210497378.1A priority Critical patent/CN114936363B/en
Priority claimed from CN202210497378.1A external-priority patent/CN114936363B/en
Publication of CN114936363A publication Critical patent/CN114936363A/en
Application granted granted Critical
Publication of CN114936363B publication Critical patent/CN114936363B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245Protecting personal data, e.g. for financial or medical purposes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Bioethics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses a privacy-secure method for rapidly calculating identity features, comprising the following steps. Step one: multi-channel surveillance cameras distributed throughout a physical space collect video from different positions and viewing angles in a spatial area; after decoding, the video is sent to a humanoid detector. Each local front-end/edge AI analysis box deployed in the physical space is responsible for analyzing the video of 10 surveillance cameras, i.e., each AI analysis box processes 40 real-time image frames per second. The invention greatly reduces the calling frequency of the high-precision model, while still accurately capturing cases in which the image features of the same person change significantly, which distinguishes it from methods that simply accelerate by using a low-precision model alone.

Description

Rapid identity characteristic calculation method for privacy security
Technical Field
The invention relates to the technical field of computer science, and in particular to a privacy-secure method for rapidly calculating identity features.
Background
Digital twins have become a new trend in the development of artificial intelligence. In the digital-twin problem, artificial intelligence must analyze video data captured at different positions and viewing angles in a physical space and reconstruct, in virtual reality, the route of each person moving through that space. The prior art for determining pedestrian identity from visual images is mainly based on face/body appearance: when the same person is captured on video at different times and places, an artificial intelligence model matches the image frames of the two observations together to form a short spatio-temporal track. This process is repeated continuously, and through a large number of matching operations the trajectory of each person is finally reconstructed correctly. During matching, early matching errors are continuously amplified in later matching, so to guarantee the precision of the final matched pedestrian trajectories, the accuracy requirement on the artificial intelligence model is high. Constrained by the computing resources of front-end/edge devices, the prior art mainly collects humanoid image frames with front-end cameras in the physical space, uploads the humanoid images across space and time to the cloud, completes the identity-feature calculation of the humanoid image frames with a large-scale artificial intelligence model in the cloud, and then matches and merges pedestrian trajectories according to the calculated identity features.
As people's privacy requirements for artificial intelligence/big data systems continue to rise, the feasibility of transmitting face/humanoid images collected in a physical space to the cloud or a server over a network and then processing them with a large-scale artificial intelligence model is greatly limited. That is, to avoid invasion of personal privacy, large numbers of face/humanoid images are not allowed to be transmitted over the network, and this problem cannot be solved even if the data are encrypted during transmission.
If the identity-feature calculation for humanoid/face images is instead performed directly in the physical space where they are collected, the original images no longer participate in subsequent computation once the identity features have been calculated; only the calculated identity-feature data are transmitted to the cloud for subsequent pedestrian-trajectory merging, which avoids the privacy-security problem. The new difficulty, however, is that, unlike cloud-centralized computation with access to large-scale computing resources, the edge computing power distributed at the front-end collection points in the physical space is often limited. If a high-precision artificial intelligence model is used at the front end to calculate the identity features of every image frame, the computation slows dramatically; for example, the digital-twin computation for 1 hour of video might take 10 hours to complete, which is practically infeasible.
Disclosure of Invention
The invention aims to provide a privacy-secure method for rapidly calculating identity features that offers high processing efficiency. It solves the problem that, if a high-precision identity-feature model is used for every frame, the computation time on front-end AI devices becomes completely unmanageable and feasibility is lost: for example, at 4 frames per second with an average of 8 humanoid boxes per frame, with each AI device analyzing 10 video channels and the high-precision identity-feature model taking 20 ms per humanoid box, each hour of video requires 6.4 hours of computation. If instead the pictures are uploaded to the cloud and computed there with the high-precision model, there is a risk of privacy disclosure; and if the identity features are calculated with a low-precision identity-feature model and then uploaded to the cloud, the low precision of the identity features makes the accuracy of the reconstructed pedestrian trajectories low.
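The throughput figure above can be checked with a short calculation (a sketch only; the 4 fps, 8 boxes per frame, 10 channels, and 20 ms per box values are taken from the passage above):

```python
# Verify the compute-time estimate: how long does the high-precision model
# need per second of incoming video on one AI analysis box?
FPS = 4                # processed frames per second per camera
BOXES_PER_FRAME = 8    # average humanoid boxes per frame
CHANNELS = 10          # video channels per AI analysis box
MODEL_MS = 20          # high-precision model time per humanoid box (ms)

boxes_per_second = FPS * BOXES_PER_FRAME * CHANNELS            # 320 boxes/s
compute_per_video_second = boxes_per_second * MODEL_MS / 1000  # seconds of compute

# The ratio is unitless: 6.4 hours of compute per hour of video.
print(compute_per_video_second)  # 6.4
```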
In order to achieve the above object, the invention provides the following technical solution: a privacy-secure method for rapidly calculating identity features, comprising the following steps:
step one: multi-channel surveillance cameras distributed throughout the physical space collect video from different positions and viewing angles in a spatial area; after decoding, the video is sent to a humanoid detector. Each front-end/edge AI analysis box deployed locally in the physical space is responsible for analyzing the video of 10 surveillance cameras, i.e., each AI analysis box processes 40 real-time image frames per second; more video channels can be processed if the efficiency of front-end/edge AI computing chips improves further in the future;
step two: after receiving the real-time frame images, the humanoid detector on the front-end AI analysis box detects the humanoid image frames in each frame. Each humanoid image frame is represented by a vector (x, t), where x is the coordinates of the humanoid box's rectangle in the image and t is the time at which the humanoid box appears;
step three: an identity-feature mapping table T is initialized for each camera; the table T is initially empty;
step four: for each newly detected humanoid image frame (x, t), if the identity-feature mapping table T of the corresponding camera is empty, the image frame (x, t) is first input into the low-precision identity-feature model A to obtain its low-precision identity feature Ax, and then into the high-precision identity-feature model B to obtain its high-precision identity feature Bx; with (Ax, x, t) as the index Key and Bx as the Value, the feature-mapping pair {(Ax, x, t), Bx} is inserted into the mapping table T, and Bx is sent to the cloud as the final identity-feature description string of the image frame (x, t). If the identity-feature mapping table T of the corresponding camera is not empty, the image frame (x, t) is first input into the low-precision identity-feature model A to obtain its low-precision identity feature Ax; the feature pairs stored in the mapping table T are then traversed to calculate the maximum similarity Cmax between their low-precision identity features and Ax. When the maximum similarity Cmax is close to 1, the content of the current image frame (x, t) is considered close to the content of the historical image frame corresponding to Cmax in the mapping table, and the high-precision identity feature Bmax in the feature pair corresponding to Cmax is sent to the cloud as the final identity-feature description string of the humanoid image frame (x, t);
step five: for each newly detected humanoid image frame (x, t), if the identity-feature mapping table T of the corresponding camera is not empty but the maximum similarity Cmax between the low-precision identity features stored in the mapping table T and Ax is lower than 0.9, the image frame (x, t) is input into the high-precision identity-feature model B to obtain its high-precision identity feature Bx, and with (Ax, x, t) as the index Key and Bx as the Value, the feature-mapping pair {(Ax, x, t), Bx} is inserted into the mapping table T;
step six: when the number of feature-mapping pairs in a camera's feature mapping table T exceeds a certain size, the feature-mapping pair with the earliest t is deleted.
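Steps three through six can be sketched as a small cache routine. This is an illustrative sketch only: the names `model_a`, `model_b`, and `send_to_cloud` are assumed stand-ins for the low-precision model, the high-precision model, and the upload path, and cosine similarity is an assumed choice of feature-similarity measure (the patent does not fix one).

```python
from collections import OrderedDict
import math

CMAX_THRESHOLD = 0.9   # similarity threshold from steps four/five
MAX_TABLE_SIZE = 16    # deletion threshold from step six

def cosine(u, v):
    """Cosine similarity: one plausible feature-similarity measure."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def identity_feature(frame, table, model_a, model_b, send_to_cloud):
    """Process one humanoid image frame (x, t) against one camera's table T.

    `table` is an OrderedDict keyed by (Ax, x, t) with Bx as value; insertion
    order approximates time order, so the first entry has the earliest t.
    """
    x, t = frame
    ax = model_a(x)  # low-precision feature, always computed
    if table:
        # Step four (non-empty case): find the best low-precision match.
        best_key, cmax = max(((k, cosine(k[0], ax)) for k in table),
                             key=lambda kv: kv[1])
        if cmax >= CMAX_THRESHOLD:
            send_to_cloud(table[best_key])  # reuse the cached high-precision Bmax
            return
    # Empty table, or step five (cache miss): call the high-precision model.
    bx = model_b(x)
    table[(tuple(ax), tuple(x), t)] = bx
    send_to_cloud(bx)
    if len(table) > MAX_TABLE_SIZE:
        table.popitem(last=False)  # step six: evict the earliest entry
```

In this sketch the high-precision model runs only when the low-precision lookup misses, which is the source of the speedup claimed in the disclosure.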
Preferably, in step one, because the computing power of the front-end AI device is limited, a processing frame rate of 4 frames per second may be adopted; a processing rate greater than 4 frames per second may be adopted if the efficiency of front-end/edge AI computing chips improves further in the future.
Preferably, in step one, more video channels can be processed if the efficiency of front-end/edge AI computing chips improves further.
Preferably, the algorithm of the humanoid detector in step two includes, but is not limited to, YOLOX.
Preferably, the deletion threshold for the number of feature-mapping pairs in the camera's feature mapping table T in step six is 16.
Preferably, the threshold of the maximum similarity Cmax in step four is set to 0.9; that is, if the maximum similarity Cmax reaches at least 0.9, the content of the current image frame (x, t) is considered close enough to the content of the historical image frame corresponding to Cmax in the mapping table.
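The threshold behavior can be illustrated numerically. This is a sketch under assumptions: cosine similarity is one possible choice of similarity measure, and the feature vectors are made up for illustration.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

cached = [0.6, 0.8]          # low-precision feature stored in table T
same_person = [0.58, 0.81]   # slight appearance change -> reuse cached Bmax
changed = [0.9, -0.1]        # strong change -> recompute with model B

print(cosine(cached, same_person) >= 0.9)  # True: cache hit, step four applies
print(cosine(cached, changed) >= 0.9)      # False: cache miss, step five applies
```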
Compared with the prior art, the invention has the following beneficial effects:
1. The invention exploits the fact that, for the same person, the image features in consecutive video frames change little over a short period (4-5 seconds) most of the time. A low-precision model is used to measure this, and image frames with little change are uniformly represented by the same high-precision feature. On the one hand, this greatly reduces the calling frequency of the high-precision model; on the other hand, cases in which the image features of the same person change significantly can still be captured accurately (in such cases, the lookup in the low-precision mapping table fails and the high-precision model is still used for calculation). This distinguishes the invention from methods that simply accelerate by using a low-precision model alone.
2. By designing a self-updating feature mapping table, the invention ensures that most humanoid boxes are processed only by the low-precision model, while the output features of all humanoid image frames are still generated by the high-precision model. This solves the problem that, if the high-precision identity-feature model were used for every frame, the computation time on front-end AI devices would become completely unmanageable and feasibility would be lost: for example, at 4 frames per second with an average of 8 humanoid boxes per frame, with each AI device analyzing 10 video channels and the high-precision identity-feature model taking 20 ms per humanoid box, each hour of video requires 6.4 hours of computation. If the pictures are uploaded to the cloud and computed there with the high-precision model, there is a risk of privacy disclosure; if the identity features are calculated with a low-precision identity-feature model and then uploaded to the cloud, the low precision of the identity features makes the accuracy of the reconstructed pedestrian trajectories low.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, the present invention provides a technical solution: a privacy-secure method for rapidly calculating identity features, comprising the following steps:
step one: multi-channel surveillance cameras distributed throughout the physical space collect video from different positions and viewing angles in a spatial area; after decoding, the video is sent to a humanoid detector. Each front-end/edge AI analysis box deployed locally in the physical space is responsible for analyzing the video of 10 surveillance cameras. Because the computing power of the front-end AI device is limited, a processing frame rate of 4 frames per second may be adopted, and a rate greater than 4 frames per second may be adopted if the efficiency of front-end/edge AI computing chips improves further in the future. Each AI analysis box therefore processes 40 real-time image frames per second, and more video channels can be processed as chip efficiency improves;
step two: after receiving the real-time frame images, the humanoid detector on the front-end AI analysis box detects the humanoid image frames in each frame. Each humanoid image frame is represented by a vector (x, t), where x is the coordinates of the humanoid box's rectangle in the image and t is the time at which the humanoid box appears. The algorithm of the humanoid detector includes, but is not limited to, YOLOX;
step three: an identity-feature mapping table T is initialized for each camera; the table T is initially empty;
step four: for each newly detected humanoid image frame (x, t), if the identity-feature mapping table T of the corresponding camera is empty, the image frame (x, t) is first input into the low-precision identity-feature model A to obtain its low-precision identity feature Ax, and then into the high-precision identity-feature model B to obtain its high-precision identity feature Bx; with (Ax, x, t) as the index Key and Bx as the Value, the feature-mapping pair {(Ax, x, t), Bx} is inserted into the mapping table T, and Bx is sent to the cloud as the final identity-feature description string of the image frame (x, t). If the identity-feature mapping table T of the corresponding camera is not empty, the image frame (x, t) is first input into the low-precision identity-feature model A to obtain its low-precision identity feature Ax; the feature pairs stored in the mapping table T are then traversed to calculate the maximum similarity Cmax between their low-precision identity features and Ax. When the maximum similarity Cmax is close to 1, the content of the current image frame (x, t) is considered close to the content of the historical image frame corresponding to Cmax in the mapping table, and the high-precision identity feature Bmax in the feature pair corresponding to Cmax is sent to the cloud as the final identity-feature description string of the humanoid image frame (x, t);
step five: for each newly detected humanoid image frame (x, t), if the identity-feature mapping table T of the corresponding camera is not empty but the maximum similarity Cmax between the low-precision identity features stored in the mapping table T and Ax is lower than 0.9, the image frame (x, t) is input into the high-precision identity-feature model B to obtain its high-precision identity feature Bx, and with (Ax, x, t) as the index Key and Bx as the Value, the feature-mapping pair {(Ax, x, t), Bx} is inserted into the mapping table T;
step six: when the number of feature-mapping pairs in a camera's feature mapping table T exceeds a certain size, the feature-mapping pair with the earliest t is deleted; the deletion threshold for the number of feature-mapping pairs is 16.
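The eviction rule in step six amounts to a small FIFO cache. The following is a minimal sketch, assuming entries are inserted in time order so that the first entry in the table is the one with the earliest t; `insert_pair` is an illustrative name, not from the patent.

```python
from collections import OrderedDict

MAX_PAIRS = 16  # deletion threshold for feature-mapping pairs (claim 5)

def insert_pair(table: OrderedDict, key, bx):
    """Insert a {(Ax, x, t): Bx} pair; drop the earliest pair when over the limit.

    Because entries are inserted in time order, the first entry of the
    OrderedDict is the feature-mapping pair with the earliest t.
    """
    table[key] = bx
    if len(table) > MAX_PAIRS:
        table.popitem(last=False)  # remove the oldest feature-mapping pair
```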
In the invention, the threshold of the maximum similarity Cmax in step four is set to 0.9; that is, if the maximum similarity Cmax reaches at least 0.9, the content of the current image frame (x, t) is considered close enough to the content of the historical image frame corresponding to Cmax in the mapping table.
In summary, the invention discloses a privacy-secure method for rapidly calculating identity features. It uses a low-precision model to exploit the fact that, for the same person, the image features in consecutive video frames change little over a short period (4-5 seconds) most of the time, and uniformly represents image frames with little change by the same high-precision feature. On the one hand, this greatly reduces the calling frequency of the high-precision model; on the other hand, cases in which the image features of the same person change significantly can still be captured accurately (in such cases, the lookup in the low-precision mapping table fails and the high-precision model is still used for calculation). This distinguishes the method from methods that simply accelerate by using a low-precision model alone.
It should be noted that, in this document, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (6)

1. A privacy-secure method for rapidly calculating identity features, characterized by comprising the following steps:
step one: multi-channel surveillance cameras distributed throughout the physical space collect video from different positions and viewing angles in a spatial area; after decoding, the video is sent to a humanoid detector; each front-end/edge AI analysis box deployed locally in the physical space is responsible for analyzing the video of 10 surveillance cameras, i.e., each AI analysis box processes 40 real-time image frames per second; more video channels can be processed if the efficiency of front-end/edge AI computing chips improves further in the future;
step two: after receiving the real-time frame images, the humanoid detector on the front-end AI analysis box detects the humanoid image frames in each frame, wherein each humanoid image frame is represented by a vector (x, t), x being the coordinates of the humanoid box's rectangle in the image and t the time at which the humanoid box appears;
step three: an identity-feature mapping table T is initialized for each camera, the table T initially being empty;
step four: for each newly detected humanoid image frame (x, t), if the identity-feature mapping table T of the corresponding camera is empty, the image frame (x, t) is first input into the low-precision identity-feature model A to obtain its low-precision identity feature Ax, and then into the high-precision identity-feature model B to obtain its high-precision identity feature Bx; with (Ax, x, t) as the index Key and Bx as the Value, the feature-mapping pair {(Ax, x, t), Bx} is inserted into the mapping table T, and Bx is sent to the cloud as the final identity-feature description string of the image frame (x, t); if the identity-feature mapping table T of the corresponding camera is not empty, the image frame (x, t) is first input into the low-precision identity-feature model A to obtain its low-precision identity feature Ax, and the feature pairs stored in the mapping table T are traversed to calculate the maximum similarity Cmax between their low-precision identity features and Ax; when the maximum similarity Cmax is close to 1, the content of the current image frame (x, t) is considered close to the content of the historical image frame corresponding to Cmax in the mapping table, and the high-precision identity feature Bmax in the feature pair corresponding to Cmax is sent to the cloud as the final identity-feature description string of the humanoid image frame (x, t);
step five: for each newly detected humanoid image frame (x, t), if the identity-feature mapping table T of the corresponding camera is not empty but the maximum similarity Cmax between the low-precision identity features stored in the mapping table T and Ax is lower than 0.9, the image frame (x, t) is input into the high-precision identity-feature model B to obtain its high-precision identity feature Bx, and with (Ax, x, t) as the index Key and Bx as the Value, the feature-mapping pair {(Ax, x, t), Bx} is inserted into the mapping table T;
step six: when the number of feature-mapping pairs in a camera's feature mapping table T exceeds a certain size, the feature-mapping pair with the earliest t is deleted.
2. The privacy-secure method for rapidly calculating identity features according to claim 1, wherein: in step one, because the computing power of the front-end AI device is limited, a processing frame rate of 4 frames per second can be adopted, and a processing rate greater than 4 frames per second can be adopted if the efficiency of front-end/edge AI computing chips improves further in the future.
3. The privacy-secure method for rapidly calculating identity features according to claim 1, wherein: in step one, more video channels can be processed if the efficiency of future front-end/edge AI computing chips improves further.
4. The privacy-secure method for rapidly calculating identity features according to claim 1, wherein: the algorithm of the humanoid detector in step two includes, but is not limited to, YOLOX.
5. The privacy-secure method for rapidly calculating identity features according to claim 1, wherein: the deletion threshold for the number of feature-mapping pairs in the camera's feature mapping table T in step six is 16.
6. The privacy-secure method for rapidly calculating identity features according to claim 1, wherein: the threshold of the maximum similarity Cmax in step four is set to 0.9; that is, if the maximum similarity Cmax reaches at least 0.9, the content of the current image frame (x, t) is considered close enough to the content of the historical image frame corresponding to Cmax in the mapping table.
CN202210497378.1A 2022-05-09 Quick calculation method for identity characteristics of privacy security Active CN114936363B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210497378.1A CN114936363B (en) 2022-05-09 Quick calculation method for identity characteristics of privacy security

Publications (2)

Publication Number Publication Date
CN114936363A true CN114936363A (en) 2022-08-23
CN114936363B CN114936363B (en) 2024-05-24

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6075571A (en) * 1997-07-29 2000-06-13 Kuthyar; Ashok K. Composite image display device and service for video conferencing
US20120274781A1 (en) * 2011-04-29 2012-11-01 Siemens Corporation Marginal space learning for multi-person tracking over mega pixel imagery
JP2013126167A (en) * 2011-12-15 2013-06-24 Aisin Seiki Co Ltd Vehicle periphery monitoring device
WO2021227349A1 (en) * 2020-05-11 2021-11-18 华南理工大学 Front-end facial image encryption and recognition method for biometric privacy protection


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant