CN109522844A - Social affinity determination method and system - Google Patents
Social affinity determination method and system
- Publication number
- CN109522844A (application CN201811375169.XA)
- Authority
- CN
- China
- Prior art keywords
- affinity
- face
- person
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
- G06Q50/01—Social networking
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Data Mining & Analysis (AREA)
- General Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Business, Economics & Management (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Multimedia (AREA)
- Mathematical Physics (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Molecular Biology (AREA)
- Biomedical Technology (AREA)
- Economics (AREA)
- Human Resources & Organizations (AREA)
- Marketing (AREA)
- Primary Health Care (AREA)
- Strategic Management (AREA)
- Tourism & Hospitality (AREA)
- General Business, Economics & Management (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a social affinity determination method and system. The method comprises: acquiring images captured by a camera; detecting face information in the images using a cascade neural network; determining a sparse representation of the detected face information using a convolutional neural network; taking the sparse representation of the face information as input and classifying faces with a classifier to obtain the faces representing different persons; tracking the faces of different persons as targets and acquiring behavior information for each person, the behavior information comprising the distance between persons and talk behavior between persons; and determining the affinity between persons according to the behavior information of each person. The social affinity determination method and system provided by the invention can automatically obtain the affinity between persons from camera surveillance.
Description
Technical field
The present invention relates to the fields of face recognition and activity recognition, and in particular to a social affinity determination method and system.
Background art
Face recognition is one of the important applications of computer vision. It refers to the technology by which a computer automatically establishes identity by analyzing and comparing visual facial features. Compared with traditional biometric means such as fingerprint and iris recognition, face recognition is contactless, matches the way humans naturally identify one another, is highly interactive, and is hard to steal. It is therefore in strong demand for safeguarding public safety, information security, financial security, and corporate and personal property.
At CVPR 2015 (a top conference in computer vision and pattern recognition), Google disclosed its face recognition algorithm. Exploiting the property that photos of the same face under different poses and angles should be tightly clustered while different faces should be well separated, they proposed training the FaceNet network with a CNN and a triplet loss, reaching a new high in accuracy on face datasets. However, the triplet loss method only exploits the relative distances between samples and does not introduce the notion of absolute distance.
Social networks are an important component of human social life, and building an affinity network is essential when modeling social relations. Affinity networks built so far rely mostly on information obtained from online social networking and pay little attention to face-to-face interaction in real life. How to build an affinity network from camera surveillance information is therefore of significant research interest.
Summary of the invention
The object of the present invention is to provide a social affinity determination method and system that can automatically obtain the affinity between persons from camera surveillance.
To achieve the above object, the present invention provides the following solutions:
A social affinity determination method, comprising:
acquiring images captured by a camera;
detecting face information in the images using a cascade neural network;
determining a sparse representation of the detected face information using a convolutional neural network;
taking the sparse representation of the face information as input, classifying faces with a classifier to obtain the faces representing different persons;
tracking the faces of different persons as targets and acquiring behavior information for each person, the behavior information comprising the distance between persons and talk behavior between persons;
determining the affinity between persons according to the behavior information of each person.
Optionally, determining the sparse representation of the detected face information using a convolutional neural network specifically comprises:
mapping the detected face information to a 128-dimensional feature vector with the convolutional neural network, denoted as the sparse representation of the face information.
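As a rough sketch of this step's input-output contract (the patent does not specify the network architecture, so a fixed random projection stands in for the trained CNN; only the contract of producing a 128-dimensional unit vector per face crop is shown):

```python
import numpy as np

def embed_face(face_crop, dim=128, seed=0):
    # Toy stand-in for the trained convolutional network: flatten the face
    # crop, project it to `dim` dimensions with a fixed random matrix, and
    # L2-normalize.  A real system would run the crop through the trained
    # CNN instead; the function name and projection are illustrative only.
    x = np.asarray(face_crop, dtype=np.float64).ravel()
    w = np.random.default_rng(seed).standard_normal((dim, x.size))
    v = w @ x
    return v / (np.linalg.norm(v) + 1e-12)

crop = np.random.default_rng(1).random((32, 32))  # grayscale face-crop stand-in
vec = embed_face(crop)                            # 128-d representation
```

The resulting vector is what the classifier in the next step consumes, one vector per detected face.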
Optionally, before determining the sparse representation of the detected face information using the convolutional neural network, the method further comprises:
training the convolutional neural network, and adjusting its training according to the training precision of the convolutional neural network computed from a loss function.
Optionally, the loss function is [formula not reproduced], where f denotes the 128-dimensional feature vector of a sample, A and A' are a positive sample pair with the same label, B and C are a negative sample pair with different labels or the same label, and α is a preset threshold.
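Since the formula itself is not reproduced in the text, the following is only a hypothetical reconstruction consistent with the stated variables (embedding f, positive pair A and A', pair B and C, threshold α) and with the background's criticism that plain triplet loss ignores absolute distance; the actual patented loss may differ:

```python
import numpy as np

def reconstructed_loss(f_A, f_A2, f_B, f_C, alpha=0.2):
    # Hypothetical reconstruction: a triplet-style relative term between the
    # positive pair (A, A') and the reference pair (B, C), plus an absolute
    # cap alpha on the positive-pair distance -- so the positive pair must be
    # close both relative to (B, C) and in absolute terms.
    d_pos = float(np.sum((np.asarray(f_A) - np.asarray(f_A2)) ** 2))
    d_ref = float(np.sum((np.asarray(f_B) - np.asarray(f_C)) ** 2))
    relative = max(d_pos - d_ref + alpha, 0.0)  # closer than the reference pair
    absolute = max(d_pos - alpha, 0.0)          # and close in absolute terms
    return relative + absolute
```

Unlike the plain triplet loss, the absolute term here is non-zero whenever the positive pair is farther apart than α, regardless of how the reference pair behaves.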
Optionally, determining the affinity between persons according to the behavior information of each person specifically comprises:
when the distance between two persons is below a preset threshold and talk behavior occurs between them, determining that intimate behavior has occurred between them, and updating their affinity according to I = I0 + ΔI, where I0 denotes the initial affinity, I denotes the updated affinity, ΔI is the affinity increment [formula not reproduced], d is the distance between the persons, Δt is the duration of the intimate behavior, ω1 is the weight of distance, and ω2 is the weight of talk behavior.
The present invention also provides a social affinity determination system, comprising:
an image acquisition module for acquiring images captured by a camera;
a face detection module for detecting face information in the images using a cascade neural network;
a sparse representation module for determining the sparse representation of the detected face information using a convolutional neural network;
a face recognition module for taking the sparse representation of the face information as input and classifying faces with a classifier to obtain the faces representing different persons;
a behavior information acquisition module for tracking the faces of different persons as targets and acquiring behavior information for each person, the behavior information comprising the distance between persons and talk behavior between persons;
an affinity determination module for determining the affinity between persons according to the behavior information of each person.
Optionally, the sparse representation module specifically comprises:
a sparse representation unit for mapping the detected face information to a 128-dimensional feature vector with the convolutional neural network, denoted as the sparse representation of the face information.
Optionally, the system further comprises:
a convolutional neural network training module for training the convolutional neural network and adjusting its training according to the training precision of the convolutional neural network computed from a loss function.
Optionally, the loss function is [formula not reproduced], where f denotes the 128-dimensional feature vector of a sample, A and A' are a positive sample pair with the same label, B and C are a negative sample pair with different labels or the same label, and α is a preset threshold.
Optionally, the affinity determination module specifically comprises:
an affinity determination unit for, when the distance between two persons is below a preset threshold and talk behavior occurs between them, determining that intimate behavior has occurred between them, and updating their affinity according to I = I0 + ΔI, where I0 denotes the initial affinity, I denotes the updated affinity, ΔI is the affinity increment [formula not reproduced], d is the distance between the persons, Δt is the duration of the intimate behavior, ω1 is the weight of distance, and ω2 is the weight of talk behavior.
According to the specific embodiments provided by the present invention, the invention discloses the following technical effects: the social affinity determination method and system provided by the invention identify different persons through face recognition, detect defined intimate behaviors, and combine them with their time spans to obtain the change in affinity between persons, thereby building an affinity network. Moreover, in the face recognition part, a new loss function is used to train the deep network; combined with a classifier this yields more accurate face recognition and achieves accurate discrimination between different persons.
Detailed description of the invention
It in order to more clearly explain the embodiment of the invention or the technical proposal in the existing technology, below will be to institute in embodiment
Attached drawing to be used is needed to be briefly described, it should be apparent that, the accompanying drawings in the following description is only some implementations of the invention
Example, for those of ordinary skill in the art, without creative efforts, can also obtain according to these attached drawings
Obtain other attached drawings.
Fig. 1 is that social activity of embodiment of the present invention cohesion determines method flow schematic diagram;
Fig. 2 is that social activity of embodiment of the present invention cohesion determines system structure diagram.
Specific embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
The object of the present invention is to provide a social affinity determination method and system that can automatically obtain the affinity between persons from camera surveillance.
To make the above objects, features and advantages of the present invention clearer, the invention is described in further detail below with reference to the drawings and specific embodiments.
Fig. 1 is a flow diagram of the social affinity determination method of an embodiment of the present invention. As shown in Fig. 1, the method comprises the following steps:
Step 101: acquire the images captured by the cameras. A camera array is built so as to capture, to the fullest extent, real-time scene images containing the targets.
Step 102: detect face information in the images using a cascade neural network.
Step 103: determine the sparse representation of the detected face information using a convolutional neural network.
Step 104: taking the sparse representation of the face information as input, classify faces with a classifier to obtain the faces representing different persons.
Step 105: track the faces of different persons as targets and acquire each person's behavior information; the behavior information comprises the distance between persons and talk behavior between persons.
Step 106: determine the affinity between persons according to the behavior information of each person, and construct the relation network.
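Steps 101-106 can be wired together as a skeleton, with stand-in callables for the detector, embedder, classifier and tracker that the embodiment leaves unspecified (all names here are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class AffinityNetwork:
    # Step 106: pairwise affinity scores, keyed by an unordered person pair.
    scores: dict = field(default_factory=dict)

    def update(self, person_a, person_b, delta):
        key = tuple(sorted((person_a, person_b)))
        self.scores[key] = self.scores.get(key, 0.0) + delta

def process_frame(frame, detect, embed, classify, track, net):
    faces = detect(frame)                     # step 102: cascade face detection
    reps = [embed(f) for f in faces]          # step 103: 128-d representation
    people = [classify(r) for r in reps]      # step 104: identify each person
    for a, b, delta in track(frame, people):  # step 105: distance / talk events
        net.update(a, b, delta)               # step 106: update affinity network
    return net

# Trivial stand-ins wired together to show the data flow only.
net = process_frame(
    frame="frame-0",
    detect=lambda fr: ["crop-alice", "crop-bob"],
    embed=lambda crop: crop,                        # identity stand-in
    classify=lambda rep: rep.replace("crop-", ""),
    track=lambda fr, ppl: [(ppl[0], ppl[1], 0.5)],
    net=AffinityNetwork(),
)
```

Each real component (cascade detector, trained CNN, classifier, tracker) would slot into the corresponding callable without changing the overall flow.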
Step 103 specifically comprises:
mapping the detected face information to a 128-dimensional feature vector with the convolutional neural network, denoted as the sparse representation of the face information.
Before step 103, the method further comprises:
training the convolutional neural network and adjusting its training according to the training precision of the convolutional neural network computed from a loss function.
The loss function is [formula not reproduced], where f denotes the 128-dimensional feature vector of a sample, A and A' are a positive sample pair with the same label, B and C are a negative sample pair with different labels or the same label, and α is a preset threshold.
Step 106 specifically comprises:
intimate behavior between persons may involve the distance between them and talk behavior. When the distance between two persons is below a preset threshold and talk behavior occurs between them, it is determined that intimate behavior has occurred between them, and their affinity is updated according to I = I0 + ΔI, where I0 denotes the initial affinity, I denotes the updated affinity, ΔI is the affinity increment [formula not reproduced], d is the distance between the persons, Δt is the duration of the intimate behavior, ω1 is the weight of distance, and ω2 is the weight of talk behavior; the ω1-weighted term expresses the contribution of distance to affinity, and ω2·e^Δt expresses the contribution of talk behavior to affinity.
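A minimal sketch of this update rule follows. The talk term ω2·e^Δt is as stated in the text; the exact distance term is not reproduced, so ω1·e^(−d) is used here purely as an illustrative stand-in that rewards proximity:

```python
import math

def update_affinity(I0, d, dt, w1=1.0, w2=1.0, d_threshold=1.5, talking=True):
    # Affinity update I = I0 + dI, applied only when intimate behavior is
    # detected: distance below the threshold AND talk behavior occurring.
    # The talk contribution w2 * exp(dt) follows the text; the distance
    # contribution w1 * exp(-d) is an assumed illustrative form.
    if d >= d_threshold or not talking:
        return I0                 # no intimate behavior: affinity unchanged
    dI = w1 * math.exp(-d) + w2 * math.exp(dt)
    return I0 + dI
```

With the stated conventions, a close conversation (small d, long Δt) produces a large increment, while persons beyond the threshold leave the affinity untouched.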
The social affinity determination method provided by the invention identifies different persons through face recognition, detects defined intimate behaviors, and combines them with their time spans to obtain the change in affinity between persons, thereby building an affinity network. Moreover, in the face recognition part, a new loss function is used to train the deep network; combined with a classifier this yields more accurate face recognition and achieves accurate discrimination between different persons.
The present invention also provides a social affinity determination system. As shown in Fig. 2, the system comprises:
an image acquisition module 201 for acquiring images captured by a camera;
a face detection module 202 for detecting face information in the images using a cascade neural network;
a sparse representation module 203 for determining the sparse representation of the detected face information using a convolutional neural network;
a face recognition module 204 for taking the sparse representation of the face information as input and classifying faces with a classifier to obtain the faces representing different persons;
a behavior information acquisition module 205 for tracking the faces of different persons as targets and acquiring behavior information for each person, the behavior information comprising the distance between persons and talk behavior between persons;
an affinity determination module 206 for determining the affinity between persons according to the behavior information of each person.
The system further comprises:
a convolutional neural network training module for training the convolutional neural network and adjusting its training according to the training precision of the convolutional neural network computed from a loss function. The loss function is [formula not reproduced], where f denotes the 128-dimensional feature vector of a sample, A and A' are a positive sample pair with the same label, B and C are a negative sample pair with different labels or the same label, and α is a preset threshold.
The sparse representation module 203 specifically comprises:
a sparse representation unit for mapping the detected face information to a 128-dimensional feature vector with the convolutional neural network, denoted as the sparse representation of the face information.
The affinity determination module 206 specifically comprises:
an affinity determination unit for, when the distance between two persons is below a preset threshold and talk behavior occurs between them, determining that intimate behavior has occurred between them, and updating their affinity according to I = I0 + ΔI, where I0 denotes the initial affinity, I denotes the updated affinity, ΔI is the affinity increment [formula not reproduced], d is the distance between the persons, Δt is the duration of the intimate behavior, ω1 is the weight of distance, and ω2 is the weight of talk behavior.
The social affinity determination system provided by the invention identifies different persons through face recognition, detects defined intimate behaviors, and combines them with their time spans to obtain the change in affinity between persons, thereby building an affinity network. Moreover, in the face recognition part, a new loss function is used to train the deep network; combined with a classifier this yields more accurate face recognition and achieves accurate discrimination between different persons.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and for the same or similar parts the embodiments may be referred to each other. Since the system disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief; for relevant details, refer to the description of the method.
Specific examples are used herein to illustrate the principles and implementation of the invention; the above embodiments are only intended to help understand the method and core idea of the invention. Meanwhile, for those skilled in the art, there will be changes in the specific implementation and scope of application according to the idea of the invention. In conclusion, the contents of this specification should not be construed as limiting the present invention.
Claims (10)
1. A social affinity determination method, characterized by comprising:
acquiring images captured by a camera;
detecting face information in the images using a cascade neural network;
determining a sparse representation of the detected face information using a convolutional neural network;
taking the sparse representation of the face information as input, classifying faces with a classifier to obtain the faces representing different persons;
tracking the faces of different persons as targets and acquiring behavior information for each person, the behavior information comprising the distance between persons and talk behavior between persons;
determining the affinity between persons according to the behavior information of each person.
2. The social affinity determination method according to claim 1, characterized in that determining the sparse representation of the detected face information using a convolutional neural network specifically comprises:
mapping the detected face information to a 128-dimensional feature vector with the convolutional neural network, denoted as the sparse representation of the face information.
3. The social affinity determination method according to claim 2, characterized in that before determining the sparse representation of the detected face information using the convolutional neural network, the method further comprises:
training the convolutional neural network, and adjusting its training according to the training precision of the convolutional neural network computed from a loss function.
4. The social affinity determination method according to claim 3, characterized in that the loss function is [formula not reproduced], where f denotes the 128-dimensional feature vector of a sample, A and A' are a positive sample pair with the same label, B and C are a negative sample pair with different labels or the same label, and α is a preset threshold.
5. The social affinity determination method according to claim 1, characterized in that determining the affinity between persons according to the behavior information of each person specifically comprises:
when the distance between two persons is below a preset threshold and talk behavior occurs between them, determining that intimate behavior has occurred between them, and updating their affinity according to I = I0 + ΔI, where I0 denotes the initial affinity, I denotes the updated affinity, ΔI is the affinity increment [formula not reproduced], d is the distance between the persons, Δt is the duration of the intimate behavior, ω1 is the weight of distance, and ω2 is the weight of talk behavior.
6. A social affinity determination system, characterized by comprising:
an image acquisition module for acquiring images captured by a camera;
a face detection module for detecting face information in the images using a cascade neural network;
a sparse representation module for determining the sparse representation of the detected face information using a convolutional neural network;
a face recognition module for taking the sparse representation of the face information as input and classifying faces with a classifier to obtain the faces representing different persons;
a behavior information acquisition module for tracking the faces of different persons as targets and acquiring behavior information for each person, the behavior information comprising the distance between persons and talk behavior between persons;
an affinity determination module for determining the affinity between persons according to the behavior information of each person.
7. The social affinity determination system according to claim 6, characterized in that the sparse representation module specifically comprises:
a sparse representation unit for mapping the detected face information to a 128-dimensional feature vector with the convolutional neural network, denoted as the sparse representation of the face information.
8. The social affinity determination system according to claim 7, characterized in that the system further comprises:
a convolutional neural network training module for training the convolutional neural network and adjusting its training according to the training precision of the convolutional neural network computed from a loss function.
9. The social affinity determination system according to claim 8, characterized in that the loss function is [formula not reproduced], where f denotes the 128-dimensional feature vector of a sample, A and A' are a positive sample pair with the same label, B and C are a negative sample pair with different labels or the same label, and α is a preset threshold.
10. The social affinity determination system according to claim 6, characterized in that the affinity determination module specifically comprises:
an affinity determination unit for, when the distance between two persons is below a preset threshold and talk behavior occurs between them, determining that intimate behavior has occurred between them, and updating their affinity according to I = I0 + ΔI, where I0 denotes the initial affinity, I denotes the updated affinity, ΔI is the affinity increment [formula not reproduced], d is the distance between the persons, Δt is the duration of the intimate behavior, ω1 is the weight of distance, and ω2 is the weight of talk behavior.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811375169.XA CN109522844B (en) | 2018-11-19 | 2018-11-19 | Social affinity determination method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811375169.XA CN109522844B (en) | 2018-11-19 | 2018-11-19 | Social affinity determination method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109522844A true CN109522844A (en) | 2019-03-26 |
CN109522844B CN109522844B (en) | 2020-07-24 |
Family
ID=65776607
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811375169.XA Active CN109522844B (en) | 2018-11-19 | 2018-11-19 | Social affinity determination method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109522844B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110020204A (en) * | 2019-03-29 | 2019-07-16 | 联想(北京)有限公司 | Information processing method and electronic device |
WO2021179819A1 (en) * | 2020-03-12 | 2021-09-16 | Oppo广东移动通信有限公司 | Photo processing method and apparatus, and storage medium and electronic device |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110058028A1 (en) * | 2009-09-09 | 2011-03-10 | Sony Corporation | Information processing apparatus, information processing method, and information processing program |
CN105183758A (en) * | 2015-07-22 | 2015-12-23 | 深圳市万姓宗祠网络科技股份有限公司 | Content recognition method for continuously recorded video or image |
CN105760821A (en) * | 2016-01-31 | 2016-07-13 | 中国石油大学(华东) | Classification and aggregation sparse representation face identification method based on nuclear space |
CN106203356A (en) * | 2016-07-12 | 2016-12-07 | 中国计量大学 | A kind of face identification method based on convolutional network feature extraction |
CN107274437A (en) * | 2017-06-23 | 2017-10-20 | 燕山大学 | A kind of visual tracking method based on convolutional neural networks |
CN107292915A (en) * | 2017-06-15 | 2017-10-24 | 国家新闻出版广电总局广播科学研究院 | Method for tracking target based on convolutional neural networks |
CN108182394A (en) * | 2017-12-22 | 2018-06-19 | 浙江大华技术股份有限公司 | Training method, face identification method and the device of convolutional neural networks |
- 2018
- 2018-11-19: CN application CN201811375169.XA granted as patent CN109522844B (en), status: active
Non-Patent Citations (2)
Title |
---|
李辉 et al., "Face recognition algorithm based on convolutional neural networks", 《软件导刊》 (Software Guide) * |
顾昭艺, "Design and implementation of a social relation retrieval system based on face recognition", 《中国优秀硕士学位论文全文数据库 信息科技辑》 (China Master's Theses Full-text Database, Information Science and Technology) * |
Also Published As
Publication number | Publication date |
---|---|
CN109522844B (en) | 2020-07-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109583342B (en) | Face liveness detection method based on transfer learning | |
CN104091176B (en) | Portrait comparison technology applied to video | |
CN107506702B (en) | Multi-angle-based face recognition model training and testing system and method | |
CN108549854B (en) | Face liveness detection method | |
WO2019127273A1 (en) | Multi-person face detection method, apparatus, server, system, and storage medium | |
CN105335726B (en) | Face recognition confidence acquisition method and system | |
CN103761748B (en) | Anomaly detection method and device | |
CN105160318A (en) | Facial-expression-based lie detection method and system | |
CN105740779B (en) | Method and device for detecting a live human face | |
CN107230267B (en) | Kindergarten intelligent check-in method based on face recognition algorithms | |
Yu et al. | Railway obstacle detection algorithm using neural network | |
CN102841354A (en) | Vision protection implementation method for electronic equipment with a display screen | |
US8965068B2 | Apparatus and method for discriminating disguised face | |
CN102262727A (en) | Method for real-time monitoring of face image quality at a client acquisition terminal | |
CN109145742A (en) | Pedestrian recognition method and system | |
TWI712980B (en) | Claim information extraction method and device, and electronic equipment | |
CN104766072A (en) | Live-face recognition device and use method thereof | |
CN109145717A (en) | Online-learning face recognition method | |
CN105022999A (en) | Man code company real-time acquisition system | |
CN102629320A (en) | Feature-level ordinal-measure statistical description face recognition method | |
CN110633643A (en) | Abnormal behavior detection method and system for a smart community | |
US9323989B2 | Tracking device | |
CN107316029A (en) | Liveness verification method and device | |
CN111191535B (en) | Pedestrian detection model construction method based on deep learning, and pedestrian detection method | |
CN108960156A (en) | Face detection and recognition method and device | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||