CN109522844B - Social affinity determination method and system - Google Patents

Social affinity determination method and system

Info

Publication number
CN109522844B
CN109522844B (application CN201811375169.XA)
Authority
CN
China
Prior art keywords
agents
neural network
intimacy
determining
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811375169.XA
Other languages
Chinese (zh)
Other versions
CN109522844A (en)
Inventor
胡硕
徐光远
孙妍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yanshan University
Original Assignee
Yanshan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yanshan University
Priority to CN201811375169.XA
Publication of CN109522844A
Application granted
Publication of CN109522844B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/01 Social networking

Abstract

The invention discloses a social affinity determination method and system. The method comprises the following steps: acquiring an image captured by a camera; detecting face information in the image by adopting a cascade neural network; determining a sparse representation of the detected face information by adopting a convolutional neural network; taking the sparse representation of the face information as input, and classifying the faces by adopting a classifier to obtain faces representing different agents; tracking the faces of the different agents as targets, and acquiring behavior information of the agents, wherein the behavior information comprises distance information between the agents and conversation behavior between the agents; and determining the intimacy between the agents according to the behavior information of the agents. The social intimacy determination method and system provided by the invention can automatically acquire the intimacy between agents based on camera surveillance.

Description

Social affinity determination method and system
Technical Field
The invention relates to the fields of face recognition and behavior recognition, and in particular to a social intimacy determination method and system.
Background
Face identification is one of the important applications of computer vision; in particular, it is a technology in which a computer automatically identifies a person by analyzing and comparing visual features of the face. Compared with traditional biometric means such as fingerprints and irises, face identification is contactless, accords with human recognition habits, is highly interactive, and is difficult to steal, so it is in strong demand for public safety, information security, financial security, and the safety of corporate and personal property.
Google published its face recognition algorithm at CVPR (the top conference in the field of computer vision and pattern recognition) 2015. Exploiting the high cohesion among photos of the same face taken at different angles and the low coupling between different faces, that work proposed training the FaceNet network with a CNN and a triplet loss, and its accuracy on face datasets reached a new height. However, the triplet loss uses only the relative distance between samples and does not introduce the concept of absolute distance.
Social relations are an important component of human social life, and it is necessary to construct an affinity network during social contact. Affinity networks constructed at present rely mostly on information obtained from online social contact, and little attention is paid to face-to-face interaction in real life, so how to construct an affinity network from camera surveillance information is of significant research interest.
Disclosure of Invention
The invention aims to provide a social intimacy degree determining method and system, which can automatically acquire intimacy degree between agents based on camera monitoring.
In order to achieve the purpose, the invention provides the following scheme:
a social affinity determination method, comprising:
acquiring an image acquired by a camera;
detecting face information in the image by adopting a cascade neural network;
determining sparse representation of the detected face information by adopting a convolutional neural network;
taking the sparse representation of the face information as input, and classifying the faces by adopting a classifier to obtain faces representing different agents;
tracking by taking the faces of different agents as targets, and acquiring behavior information of the agents, wherein the behavior information comprises distance information between the agents and conversation behaviors between the agents;
and determining the intimacy between the agents according to the behavior information of the agents.
Optionally, the determining of the sparse representation of the detected face information by using the convolutional neural network specifically includes:
mapping the detected face information into a 128-dimensional feature vector by adopting a convolutional neural network, and recording the 128-dimensional feature vector as the sparse representation of the face information.
Optionally, before the determining of the sparse representation of the detected face information by using the convolutional neural network, the method further includes:
training the convolutional neural network, and adjusting the training of the neural network according to the training accuracy calculated with the loss function.
Optionally, the loss function is
[Equation shown as an image in the original: the loss function over the 128-dimensional feature vectors, with threshold α]
where f represents the 128-dimensional feature vector of a sample, A and A' are a positive sample pair with the same label, B and C are a sample pair with different labels or the same label, and α is the set threshold.
Optionally, the determining of the intimacy between the agents according to the behavior information of the agents specifically includes:
when the distance between the agents is smaller than a set threshold and conversation behavior occurs between the agents, determining that intimate behavior has occurred between the agents, and updating the intimacy between the agents according to I = I₀ + ΔI, where I₀ represents the initial value of the intimacy and I represents the updated intimacy,
ΔI = ω₁/d + ω₂e^(Δt)
where d is the distance between the agents, Δt is the duration of the intimate behavior, ω₁ is the weight of the distance, and ω₂ is the weight of the conversation behavior.
The invention also provides a social affinity determination system, comprising:
the image acquisition module is used for acquiring images acquired by the camera;
the face detection module is used for detecting face information in the image by adopting a cascade neural network;
the sparse representation module is used for determining sparse representation of the detected face information by adopting a convolutional neural network;
the face recognition module is used for taking the sparse representation of the face information as input and classifying the faces by adopting a classifier to obtain the faces representing different agents;
the behavior information acquisition module is used for tracking by taking the faces of different agents as targets and acquiring behavior information of the agents, wherein the behavior information comprises distance information between the agents and conversation behaviors between the agents;
and the intimacy degree determining module is used for determining intimacy degree between the agents according to the behavior information of the agents.
Optionally, the sparse representation module specifically includes:
a sparse representation unit, used for mapping the detected face information into a 128-dimensional feature vector by adopting a convolutional neural network and recording the 128-dimensional feature vector as the sparse representation of the face information.
Optionally, the system further includes:
a convolutional neural network training module, used for training the convolutional neural network and adjusting the training of the neural network according to the training accuracy calculated with the loss function.
Optionally, the loss function is
[Equation shown as an image in the original: the loss function over the 128-dimensional feature vectors, with threshold α]
where f represents the 128-dimensional feature vector of a sample, A and A' are a positive sample pair with the same label, B and C are a sample pair with different labels or the same label, and α is the set threshold.
Optionally, the intimacy degree determining module specifically includes:
an intimacy determining unit, used for, when the distance between the agents is less than the set threshold and conversation behavior occurs between the agents, determining that intimate behavior has occurred and updating the intimacy between the agents according to I = I₀ + ΔI, where I₀ represents the initial value of the intimacy and I represents the updated intimacy,
ΔI = ω₁/d + ω₂e^(Δt)
where d is the distance between the agents, Δt is the duration of the intimate behavior, ω₁ is the weight of the distance, and ω₂ is the weight of the conversation behavior.
According to the specific embodiments provided by the invention, the invention discloses the following technical effects: the social intimacy determination method and system identify different agents through face recognition technology, obtain the change in intimacy between agents by detecting specified intimate behaviors in combination with their time span, and then build an intimacy network. In addition, in the face recognition part, a new loss function is adopted to train the deep network and is combined with a classifier, so that more accurate face recognition is obtained and different agents are accurately distinguished.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow chart illustrating a method for determining social affinity according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a social affinity determination system according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The invention aims to provide a social intimacy degree determining method and system, which can automatically acquire intimacy degree between agents based on camera monitoring.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Fig. 1 is a schematic flow chart of a social affinity determination method according to an embodiment of the present invention, and as shown in fig. 1, the social affinity determination method provided by the present invention specifically includes the following steps:
step 101: acquiring an image acquired by a camera; constructing a camera array so as to obtain a real-time scene image containing a target to the maximum extent;
step 102: detecting face information in the image by adopting a cascade neural network;
step 103: determining sparse representation of the detected face information by adopting a convolutional neural network;
step 104: taking the sparse representation of the face information as input, and classifying the faces by adopting a classifier to obtain faces representing different agents;
step 105: tracking by taking the faces of different agents as targets, and acquiring behavior information of the agents, wherein the behavior information comprises distance information between the agents and conversation behaviors between the agents;
step 106: and determining the intimacy between the agents according to the behavior information of the agents, and constructing a relationship network.
Wherein, step 103 specifically comprises:
mapping the detected face information into a 128-dimensional feature vector by adopting a convolutional neural network, and recording the 128-dimensional feature vector as the sparse representation of the face information.
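As an illustration of steps 102 and 103, the sketch below pairs an off-the-shelf cascaded detector (MTCNN from the facenet-pytorch package, standing in for the patent's unspecified cascade neural network) with a small convolutional network that maps each face crop to a 128-dimensional unit vector. The architecture, package choice, and all names are illustrative assumptions rather than the patented networks:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from facenet_pytorch import MTCNN  # off-the-shelf cascaded CNN face detector

detector = MTCNN(image_size=160, keep_all=True)  # return all faces in a frame

class EmbeddingNet(nn.Module):
    """Stand-in CNN mapping a 160x160 face crop to a 128-d unit vector."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, 128)  # the 128-dimensional feature vector

    def forward(self, x):
        x = self.features(x).flatten(1)
        return F.normalize(self.fc(x), dim=1)  # L2-normalize the embedding

embed_net = EmbeddingNet().eval()

def embed_faces(frame):
    """Detect faces in one camera frame (PIL image) and embed each one."""
    crops = detector(frame)            # aligned face crops, or None
    if crops is None:
        return torch.empty(0, 128)
    with torch.no_grad():
        return embed_net(crops)
```

In step 104, these embeddings would then be fed to a classifier, for example an SVM trained on labeled embeddings, to assign each detected face to an agent.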
Before step 103, the method further comprises:
training the convolutional neural network, and adjusting the training of the neural network according to the training accuracy calculated with the loss function.
Wherein the loss function is
[Equation shown as an image in the original: the loss function over the 128-dimensional feature vectors, with threshold α]
where f represents the 128-dimensional feature vector of a sample, A and A' are a positive sample pair with the same label, B and C are a sample pair with different labels or the same label, and α is the set threshold.
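Since the loss function itself appears only as an image in the source, the sketch below encodes one plausible reading of the variable list above: positive pairs (A, A') are pulled inside the absolute threshold α and negative pairs (B, C) are pushed outside it, which would add the absolute-distance notion the Background says the plain triplet loss lacks. The structure of this formula is an assumption, not the published equation:

```python
import torch.nn.functional as F

def absolute_threshold_loss(f_A, f_Ap, f_B, f_C, alpha=0.5):
    """Assumed loss over batches of 128-d embeddings (see lead-in)."""
    d_pos = F.pairwise_distance(f_A, f_Ap)  # distance within same-label pairs
    d_neg = F.pairwise_distance(f_B, f_C)   # distance within negative pairs
    # Penalize positive pairs farther than alpha and negative pairs closer
    # than alpha, so alpha acts as an absolute distance threshold.
    return (F.relu(d_pos - alpha) + F.relu(alpha - d_neg)).mean()
```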
Step 106 specifically includes:
the act of intimacy between agents may include the distance between agents and the act of talking, when the act isWhen the distance between people is less than the set threshold value and conversation behaviors occur between the agents, determining that the intimacy behaviors occur between the agents according to the I-I0+ Δ I updates the intimacy between the agents, where I0An initial value representing the intimacy degree, I represents the updated intimacy degree,
Figure BDA0001870495730000052
d is the distance between the actors, Δ t is the duration of the intimacy, ω1Is the weight of the distance, ω2Is the weight of the behavior of the conversation,
Figure BDA0001870495730000053
represents the degree of contribution of distance to intimacy, ω2eΔtRepresenting the degree of contribution of conversational behaviour to intimacy.
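A minimal sketch of this update rule, assuming the reconstructed form ΔI = ω₁/d + ω₂e^(Δt) (the equation is an image in the source); the weights, distance threshold, and units are placeholders:

```python
import math

def update_intimacy(I0, d, dt, talking, w1=1.0, w2=1.0, d_thresh=1.5):
    """Update intimacy I for one observed interaction.

    I0: current intimacy; d: inter-agent distance; dt: duration of the
    interaction; talking: whether conversation was detected between them.
    w1, w2 and d_thresh are illustrative placeholder values.
    """
    if d >= d_thresh or not talking:
        return I0                       # no intimate behavior this observation
    delta = w1 / d + w2 * math.exp(dt)  # distance term + conversation term
    return I0 + delta

# Example: two agents 0.8 apart, talking for 2 time units
I = update_intimacy(I0=0.0, d=0.8, dt=2.0, talking=True)
```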
According to the social intimacy determination method provided by the invention, different agents are identified through face recognition technology, the change in intimacy between agents is obtained by detecting specified intimate behaviors in combination with their time span, and an intimacy network is then built. In addition, in the face recognition part, a new loss function is adopted to train the deep network and is combined with a classifier, so that more accurate face recognition is obtained and different agents are accurately distinguished.
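Reusing update_intimacy from the sketch above, the pairwise values can be accumulated into a weighted graph to form the intimacy network; the patent does not prescribe a data structure, so the networkx graph below is an assumption:

```python
import networkx as nx

G = nx.Graph()  # agents as nodes, intimacy as edge weight

def record_interaction(a, b, d, dt, talking):
    """Fold one observed interaction into the intimacy network."""
    I0 = G.get_edge_data(a, b, default={"weight": 0.0})["weight"]
    G.add_edge(a, b, weight=update_intimacy(I0, d, dt, talking))

record_interaction("agent_1", "agent_2", d=0.8, dt=2.0, talking=True)
print(G["agent_1"]["agent_2"]["weight"])
```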
The present invention also provides a social affinity determination system, as shown in fig. 2, the system including:
an image acquisition module 201, configured to acquire an image acquired by a camera;
the face detection module 202 is configured to detect face information in an image by using a cascaded neural network;
a sparse representation module 203, configured to determine a sparse representation of the detected face information by using a convolutional neural network;
the face recognition module 204 is configured to use the sparse representation of the face information as input, and classify the faces by using a classifier to obtain faces representing different agents;
the behavior information acquiring module 205 is configured to track faces of different agents as targets and acquire behavior information of the agents, where the behavior information includes distance information between the agents and conversation behavior between the agents;
the intimacy degree determining module 206 is configured to determine the intimacy between the agents according to the behavior information of the agents.
Wherein, the system further includes:
a convolutional neural network training module, used for training the convolutional neural network and adjusting the training of the neural network according to the training accuracy calculated with the loss function. The loss function is
[Equation shown as an image in the original: the loss function over the 128-dimensional feature vectors, with threshold α]
where f represents the 128-dimensional feature vector of a sample, A and A' are a positive sample pair with the same label, B and C are a sample pair with different labels or the same label, and α is the set threshold.
The sparse representation module 203 specifically includes:
a sparse representation unit, used for mapping the detected face information into a 128-dimensional feature vector by adopting a convolutional neural network and recording the 128-dimensional feature vector as the sparse representation of the face information.
The intimacy degree determination module 206 specifically includes:
an intimacy determining unit, used for, when the distance between the agents is less than the set threshold and conversation behavior occurs between the agents, determining that intimate behavior has occurred and updating the intimacy between the agents according to I = I₀ + ΔI, where I₀ represents the initial value of the intimacy and I represents the updated intimacy,
ΔI = ω₁/d + ω₂e^(Δt)
where d is the distance between the agents, Δt is the duration of the intimate behavior, ω₁ is the weight of the distance, and ω₂ is the weight of the conversation behavior.
According to the social intimacy determination system provided by the invention, different agents are identified through face recognition technology, the change in intimacy between agents is obtained by detecting specified intimate behaviors in combination with their time span, and an intimacy network is then built. In addition, in the face recognition part, a new loss function is adopted to train the deep network and is combined with a classifier, so that more accurate face recognition is obtained and different agents are accurately distinguished.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and core concept of the invention; meanwhile, for those skilled in the art, the specific embodiments and the scope of application may be changed according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the invention.

Claims (8)

1. A method for determining social affinity, comprising:
acquiring an image acquired by a camera;
detecting face information in the image by adopting a cascade neural network;
determining sparse representation of the detected face information by adopting a convolutional neural network;
taking the sparse representation of the face information as input, and classifying the faces by adopting a classifier to obtain faces representing different agents;
tracking by taking the faces of different agents as targets, and acquiring behavior information of the agents, wherein the behavior information comprises distance information between the agents and conversation behaviors between the agents;
determining the intimacy between the agents according to the behavior information of the agents;
the determining of the intimacy between the agents according to the behavior information of the agents specifically includes:
when the distance between the agents is smaller than a set threshold and conversation behavior occurs between the agents, determining that intimate behavior has occurred between the agents, and updating the intimacy between the agents according to I = I₀ + ΔI, where I₀ represents the initial value of the intimacy and I represents the updated intimacy,
ΔI = ω₁/d + ω₂e^(Δt)
where d is the distance between the agents, Δt is the duration of the intimate behavior, ω₁ is the weight of the distance, and ω₂ is the weight of the conversation behavior.
2. The method according to claim 1, wherein the determining of a sparse representation of the detected face information using a convolutional neural network specifically comprises:
mapping the detected face information into a 128-dimensional feature vector by adopting a convolutional neural network, and recording the 128-dimensional feature vector as the sparse representation of the face information.
3. The method of claim 2, further comprising, before the determining of the sparse representation of the detected face information using the convolutional neural network:
training the convolutional neural network, and adjusting the training of the neural network according to the training accuracy calculated with the loss function.
4. The method of claim 3, wherein the loss function is
[Equation shown as an image in the original: the loss function over the 128-dimensional feature vectors, with threshold α]
where f represents the 128-dimensional feature vector of a sample, A and A' are a positive sample pair with the same label, B and C are a sample pair with different labels or the same label, and α is the set threshold.
5. A social affinity determination system, comprising:
the image acquisition module is used for acquiring images acquired by the camera;
the face detection module is used for detecting face information in the image by adopting a cascade neural network;
the sparse representation module is used for determining sparse representation of the detected face information by adopting a convolutional neural network;
the face recognition module is used for taking the sparse representation of the face information as input and classifying the faces by adopting a classifier to obtain the faces representing different agents;
the behavior information acquisition module is used for tracking by taking the faces of different agents as targets and acquiring behavior information of the agents, wherein the behavior information comprises distance information between the agents and conversation behaviors between the agents;
the intimacy degree determining module is used for determining intimacy degree between the agents according to the behavior information of the agents;
wherein the intimacy degree determining module specifically comprises:
an intimacy determining unit, used for, when the distance between the agents is less than the set threshold and conversation behavior occurs between the agents, determining that intimate behavior has occurred and updating the intimacy between the agents according to I = I₀ + ΔI, where I₀ represents the initial value of the intimacy and I represents the updated intimacy,
ΔI = ω₁/d + ω₂e^(Δt)
where d is the distance between the agents, Δt is the duration of the intimate behavior, ω₁ is the weight of the distance, and ω₂ is the weight of the conversation behavior.
6. The social affinity determination system of claim 5, wherein the sparse representation module specifically comprises:
a sparse representation unit, used for mapping the detected face information into a 128-dimensional feature vector by adopting a convolutional neural network and recording the 128-dimensional feature vector as the sparse representation of the face information.
7. The social affinity determination system of claim 6, further comprising:
a convolutional neural network training module, used for training the convolutional neural network and adjusting the training of the neural network according to the training accuracy calculated with the loss function.
8. The social affinity determination system of claim 7, wherein the loss function is
[Equation shown as an image in the original: the loss function over the 128-dimensional feature vectors, with threshold α]
where f represents the 128-dimensional feature vector of a sample, A and A' are a positive sample pair with the same label, B and C are a sample pair with different labels or the same label, and α is the set threshold.
CN201811375169.XA 2018-11-19 2018-11-19 Social affinity determination method and system Active CN109522844B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811375169.XA CN109522844B (en) 2018-11-19 2018-11-19 Social affinity determination method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811375169.XA CN109522844B (en) 2018-11-19 2018-11-19 Social affinity determination method and system

Publications (2)

Publication Number Publication Date
CN109522844A CN109522844A (en) 2019-03-26
CN109522844B true CN109522844B (en) 2020-07-24

Family

ID=65776607

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811375169.XA Active CN109522844B (en) 2018-11-19 2018-11-19 Social affinity determination method and system

Country Status (1)

Country Link
CN (1) CN109522844B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110020204B (en) * 2019-03-29 2022-04-19 联想(北京)有限公司 Information processing method and electronic equipment
CN111339330B (en) * 2020-03-12 2023-09-01 Oppo广东移动通信有限公司 Photo processing method and device, storage medium and electronic equipment


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011081763A (en) * 2009-09-09 2011-04-21 Sony Corp Information processing apparatus, information processing method and information processing program

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105183758A (en) * 2015-07-22 2015-12-23 深圳市万姓宗祠网络科技股份有限公司 Content recognition method for continuously recorded video or image
CN105760821A (en) * 2016-01-31 2016-07-13 中国石油大学(华东) Classification and aggregation sparse representation face identification method based on nuclear space
CN106203356A (en) * 2016-07-12 2016-12-07 中国计量大学 A kind of face identification method based on convolutional network feature extraction
CN107292915A (en) * 2017-06-15 2017-10-24 国家新闻出版广电总局广播科学研究院 Method for tracking target based on convolutional neural networks
CN107274437A (en) * 2017-06-23 2017-10-20 燕山大学 A kind of visual tracking method based on convolutional neural networks
CN108182394A (en) * 2017-12-22 2018-06-19 浙江大华技术股份有限公司 Training method, face identification method and the device of convolutional neural networks

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Design and Implementation of a Social Relationship Retrieval System Based on Face Recognition; Gu Zhaoyi; China Master's Theses Full-text Database, Information Science and Technology Series; 2013-11-15 (No. 11); I138-799 *
Face Recognition Algorithm Based on Convolutional Neural Networks; Li Hui et al.; Software Guide (Ruanjian Daokan); 2017-03-15; Vol. 16 (No. 3); pp. 26-29, Sections 0 and 3 *

Also Published As

Publication number Publication date
CN109522844A (en) 2019-03-26

Similar Documents

Publication Publication Date Title
Zhang et al. A new method for violence detection in surveillance scenes
WO2021103721A1 (en) Component segmentation-based identification model training and vehicle re-identification methods and devices
CN106557726B (en) Face identity authentication system with silent type living body detection and method thereof
Yap et al. Facial micro-expressions grand challenge 2018 summary
US10009579B2 (en) Method and system for counting people using depth sensor
CN109740499A (en) Methods of video segmentation, video actions recognition methods, device, equipment and medium
CN105590097A (en) Security system and method for recognizing face in real time with cooperation of double cameras on dark condition
WO2023173646A1 (en) Expression recognition method and apparatus
CN109902681B (en) User group relation determining method, device, equipment and storage medium
CN109522844B (en) Social affinity determination method and system
CN114186069A (en) Deep video understanding knowledge graph construction method based on multi-mode heteromorphic graph attention network
CN111652331A (en) Image recognition method and device and computer readable storage medium
CN111382596A (en) Face recognition method and device and computer storage medium
Monteiro et al. Design and evaluation of classifier for identifying sign language videos in video sharing sites
CN110929583A (en) High-detection-precision face recognition method
CN114461078B (en) Man-machine interaction method based on artificial intelligence
US20030123734A1 (en) Methods and apparatus for object recognition
CN116261009A (en) Video detection method, device, equipment and medium for intelligently converting video audience
CN115171335A (en) Image and voice fused indoor safety protection method and device for elderly people living alone
US11087121B2 (en) High accuracy and volume facial recognition on mobile platforms
Mori et al. Person tracking based on gait features from depth sensors
Kim et al. Uncooperative person recognition based on stochastic information updates and environment estimators
CN113221920B (en) Image recognition method, apparatus, device, storage medium, and computer program product
Liu et al. Integrated multiscale appearance features and motion information prediction network for anomaly detection
CN117251219B (en) Multi-system switching method and device based on scene recognition and PC host

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant