CN109165685A - Method and system for monitoring potential risks of prison inmates based on expression and action - Google Patents

Method and system for monitoring potential risks of prison inmates based on expression and action

Info

Publication number
CN109165685A
CN109165685A (application number CN201810952067.3A)
Authority
CN
China
Prior art keywords
human
expression
potentiality
generic
human face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810952067.3A
Other languages
Chinese (zh)
Other versions
CN109165685B (en)
Inventor
李晓飞
魏巍
吴聪
柴磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Post and Telecommunication University
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing Post and Telecommunication University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Post and Telecommunication University filed Critical Nanjing Post and Telecommunication University
Priority to CN201810952067.3A priority Critical patent/CN109165685B/en
Publication of CN109165685A publication Critical patent/CN109165685A/en
Application granted granted Critical
Publication of CN109165685B publication Critical patent/CN109165685B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • G06V40/176Dynamic expression
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and system for monitoring potential risks of prison inmates based on expression and action. The current surveillance video of an inmate is obtained, and trained models are used to recognize from the video frames the inmate's facial-expression feature vector and its category and body-action feature vector and its category. An SVM multi-classifier is constructed, and the recognized expression and action categories are input into the multi-classifier for potential-risk classification. When an inmate is judged to present a potential risk, an early warning is issued and the type of potential risk is marked. The invention improves the accuracy with which potential risks of prison inmates are predicted and allows early warning to be issued in advance.

Description

Method and system for monitoring potential risks of prison inmates based on expression and action
Technical field
The present invention relates to the technical field of computer vision, and in particular to a method and system for monitoring potential risks of prison inmates based on expression and action.
Background technique
With the acceleration of informatization, prison management has undergone new changes, and video surveillance has had a considerable positive effect on prisons. When analyzing prison inmates, one very important task is to analyze potential risks (such as vandalism, suicide, violence or escape). At present this work is carried out mainly through the observation of prison officers; the current approach has considerable shortcomings in actual operation, and its prediction accuracy and timely discovery rate are low.
The main deficiencies of current methods are the following. Existing monitoring of the potential risks of prison inmates relies mainly on manpower and cannot achieve all-weather, all-area observation and analysis; misjudging an inmate who has no potential risk wastes manpower and material resources on follow-up handling, while failing to predict accurately an inmate who does have a potential risk can lead to irreversible, serious consequences. Current methods have no standard response plan for potential-risk situations and cannot take countermeasures against a specific potential risk. Extreme incidents among serving inmates are rare, so the corresponding data are difficult to obtain and small in volume, and existing methods cannot build an accurate monitoring system for the potential risks of prison inmates from such limited data.
Summary of the invention
The object of the present invention is to overcome the deficiencies of the prior art by proposing a method and system for monitoring potential risks of prison inmates based on expression and action, which can effectively monitor potential risks of prison inmates, improve the accuracy with which such risks are predicted, and issue early warning in advance.
To solve the above technical problems, the present invention provides a method for monitoring potential risks of prison inmates based on expression and action, characterized by comprising the following steps:
Step S1: obtain historical surveillance video of each prison inmate, the frames of which contain the inmate's facial expressions and body actions; using the video frames as a sample set, train a convolutional neural network (CNN) model for facial expression recognition and an HMM model for body action recognition;
Step S2: for each prison inmate, establish a database of the correspondence between facial-expression feature vectors and their categories and between body-action feature vectors and their categories;
Step S3: obtain the inmate's current surveillance video, the frames of which contain facial expressions and body actions; using the trained models, recognize from the video frames the inmate's facial-expression feature vector and its category and body-action feature vector and its category;
Step S4: compare the recognition results for facial expression and body action with the feature vectors of the same categories recorded in the database; if the distance between them is less than a set threshold, the recognition result is considered correct and the method proceeds to the next step;
Step S5: construct an SVM multi-classifier, and input the recognized facial-expression and body-action categories into the multi-classifier for potential-risk classification;
Step S6: when a prison inmate is judged to present a potential risk, issue an early warning and mark the type of potential risk.
Preferably, the facial expressions comprise 6 basic expressions: happiness, sadness, surprise, disgust, fear and anger; the body actions comprise 12 actions: clapping, nodding, beating the chest, shaking the head, stamping the feet, rubbing the hands, releasing the grip, folding the arms, body shaking, standing upright, pacing back and forth, and shaking a fist.
Preferably, in step S3, the detailed process of recognizing facial expressions and body actions is:
Step S31: obtain the pixel matrix of the current video frame, detect the face region and the body region with a shallow convolutional neural network, and obtain the face and body region coordinates;
Step S32: perform facial expression recognition on the detected face region with the trained CNN model to obtain the facial-expression feature vector and its category;
Step S33: perform body action recognition on the frame sequence formed by the body regions detected in successive frames with the trained HMM model to obtain the body-action feature vector and its category.
Preferably, in step S4, the distance is the Euclidean distance.
Preferably, the classification results of the classifier are divided into five classes: escape, fighting, suicide, vandalism, and normal.
Correspondingly, the present invention also provides a system for monitoring potential risks of prison inmates based on expression and action, characterized by comprising:
an identification-model training module, for obtaining historical surveillance video of each prison inmate, the frames of which contain the inmate's facial expressions and body actions, and for training, with the video frames as a sample set, a convolutional neural network (CNN) model for facial expression recognition and an HMM model for body action recognition;
a historical-database building module, for establishing, for each prison inmate, a database of the correspondence between facial-expression feature vectors and their categories and between body-action feature vectors and their categories;
a recognition module, for obtaining the inmate's current surveillance video, the frames of which contain facial expressions and body actions, and for recognizing from the video frames, with the trained models, the inmate's facial-expression feature vector and its category and body-action feature vector and its category;
a recognition-result judgment module, for comparing the recognition results for facial expression and body action with the feature vectors of the same categories recorded in the database, and considering the recognition result correct if the distance between them is less than a set threshold;
a potential-risk identification module, for inputting the recognized facial-expression and body-action categories into a multi-classifier for potential-risk classification;
a risk-warning module, for issuing an early warning when a prison inmate is judged to present a potential risk and marking the type of potential risk.
Preferably, the facial expressions comprise 6 basic expressions: happiness, sadness, surprise, disgust, fear and anger; the body actions comprise 12 actions: clapping, nodding, beating the chest, shaking the head, stamping the feet, rubbing the hands, releasing the grip, folding the arms, body shaking, standing upright, pacing back and forth, and shaking a fist.
Preferably, the classification results of the classifier are divided into five classes: escape, fighting, suicide, vandalism, and normal.
Compared with the prior art, the beneficial effect of the present invention is that the method can effectively monitor potential risks of prison inmates, improve the accuracy with which such risks are predicted, and issue early warning in advance.
Detailed description of the invention
Fig. 1 is the flow diagram of the method for the present invention.
Specific embodiment
The invention is further described below in conjunction with the accompanying drawings. The following embodiment is only intended to illustrate the technical solution of the present invention clearly and is not intended to limit its protection scope.
In the description of this patent, it should be noted that the terms "include" and "comprise", or any other variant thereof, are intended to cover a non-exclusive inclusion, so that in addition to the listed elements, other elements not expressly listed may also be included.
A method and system of the present invention for monitoring potential risks of prison inmates based on expression and action, as shown in Fig. 1, comprises the following procedure:
Step S1: obtain historical surveillance video of each prison inmate, the frames of which contain the inmate's facial expressions and body actions; using the video frames as a sample set, train a convolutional neural network (CNN) model for facial expression recognition and an HMM model for body action recognition.
The video acquisition standard and the classification of facial expressions and body actions are as follows:
5000 video clips are collected for each facial expression and each body action, each clip lasting 2-4 seconds at 25 frames per second, to construct the basic training datasets for facial expressions and body actions. The facial expressions comprise 6 basic expressions: happiness, sadness, surprise, disgust, fear and anger. The body actions comprise 12 actions: clapping, nodding, beating the chest, shaking the head, stamping the feet, rubbing the hands, releasing the grip, folding the arms, body shaking, standing upright, pacing back and forth, and shaking a fist.
Facial expression recognition uses a convolutional neural network (CNN) and body action recognition uses an HMM model; both models are trained on the collected historical surveillance video.
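The patent gives no code; the following is a minimal Python sketch of the two models named above, assuming 48x48 grayscale face crops for the CNN and per-frame action feature sequences for the HMM. The layer sizes and the choice of Keras and hmmlearn are illustrative assumptions, not part of the disclosure.

```python
import numpy as np
from tensorflow.keras import layers, models
from hmmlearn import hmm

NUM_EXPRESSIONS = 6   # happiness, sadness, surprise, disgust, fear, anger
NUM_ACTIONS = 12      # clapping, nodding, beating the chest, ...

def build_expression_cnn(input_shape=(48, 48, 1)):
    """Small CNN mapping a face crop to one of the 6 basic expressions
    (architecture is an assumption; the patent only names a CNN)."""
    model = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_EXPRESSIONS, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def train_action_hmms(sequences_per_action, n_states=5):
    """Train one Gaussian HMM per action class; at recognition time the class
    whose HMM gives the highest log-likelihood for the observed sequence wins."""
    hmms = {}
    for action, sequences in sequences_per_action.items():
        X = np.concatenate(sequences)            # (total_frames, feature_dim)
        lengths = [len(s) for s in sequences]    # frames per clip
        m = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=100)
        m.fit(X, lengths)
        hmms[action] = m
    return hmms
```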
Step S2: for each prison inmate, establish a database of the correspondence between facial-expression feature vectors and their categories and between body-action feature vectors and their categories.
The detailed process of establishing a historical expression and action potential-risk database for each inmate in the prison is as follows:
Step S21: obtain historical surveillance video of each prison inmate, the frames of which contain the inmate's facial expressions and body actions;
Step S22: extract the expression feature vector from the video frames with the convolutional neural network (CNN) to obtain a one-dimensional facial-expression feature vector; store the facial-expression feature vector and mark the expression category it belongs to;
Step S23: track the human joint points in the video frames with a particle filter algorithm, recombine all the joint-point graphs of one complete action in time order to form a space-time pipeline, perform standard normalization, fit the trajectory of the pipeline with a higher-order function, and take the three-dimensional vectors between each maximum and minimum of the fitted function as the three-dimensional body-action feature; input it into the HMM model for body action recognition to obtain the body-action feature vector; store the body-action feature vector and mark the action category it belongs to.
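As a rough illustration of step S23, the sketch below fits each normalized joint coordinate trajectory with a higher-order polynomial and builds a feature from the vectors between successive extrema of the fitted curve. The polynomial degree, the normalization and the exact composition of the per-extremum vector are assumptions made for the example; the patent does not fix them.

```python
import numpy as np

def action_feature_from_joints(joint_seq, degree=6):
    """joint_seq: array of shape (T, J, 3) with 3D joint positions over T frames."""
    T = joint_seq.shape[0]
    flat = joint_seq.reshape(T, -1)
    # standard normalization of the space-time pipeline of joint points
    flat = (flat - flat.mean(axis=0)) / (flat.std(axis=0) + 1e-8)
    t = np.arange(T)
    features = []
    for d in range(flat.shape[1]):                 # fit each coordinate trajectory
        poly = np.poly1d(np.polyfit(t, flat[:, d], deg=degree))
        dpoly = poly.deriv()
        # real roots of the derivative inside the clip are the maxima/minima
        ext = sorted(r.real for r in dpoly.roots
                     if abs(r.imag) < 1e-6 and 0 <= r.real <= T - 1)
        for a, b in zip(ext[:-1], ext[1:]):
            # a 3D vector per pair of consecutive extrema (assumed encoding)
            features.append([b - a, poly(b) - poly(a), dpoly((a + b) / 2)])
    return np.asarray(features, dtype=float).ravel()
```

The resulting vector sequences are the kind of observations the per-action HMMs above would be trained on.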
Step S3: obtain the inmate's current surveillance video, the frames of which contain facial expressions and body actions; using the trained models, recognize from the video frames the inmate's facial-expression feature vector and its category and body-action feature vector and its category.
The detailed process is as follows:
Step S31: define the current video stream V = {v1, v2, ..., vn}, where V is the set of video frames and vi is the video frame at time i; vi is represented by an l*w matrix, where l is the number of rows of the frame's pixel matrix and w is the number of columns. The video stream here is the real-time video captured by the surveillance cameras in the actual scene and sent back to the storage server; each minute is taken as one segment, output at 25 frames per second;
Step S32: obtain the pixel matrix of the current video frame and detect the face region and the body region with a shallow convolutional neural network (a VGG16-style network) to obtain the face and body region coordinates; then compute optical flow on the three RGB color channels. This optical flow computation is mainly used to track the moving target and ensure that the target is not lost in the next frame (a minimal sketch follows step S34);
Step S33: perform facial expression recognition on the detected face region with the trained CNN model to obtain the facial-expression feature vector and its category;
Step S34: perform body action recognition on the frame sequence formed by the body regions detected in successive frames with the trained HMM model to obtain the body-action feature vector and its category.
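A minimal sketch of the frame handling and per-channel optical flow in steps S31-S32, assuming an OpenCV capture source; the face/body detector itself is not shown, because the patent only names a shallow VGG16-style network without giving its interface.

```python
import cv2
import numpy as np

def one_minute_segments(stream_url, fps=25):
    """Yield one-minute lists of frames (each frame an l x w x 3 pixel matrix)."""
    cap = cv2.VideoCapture(stream_url)
    segment, per_segment = [], fps * 60
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        segment.append(frame)
        if len(segment) == per_segment:
            yield segment                 # one segment V = {v1, ..., vn}
            segment = []
    cap.release()

def rgb_optical_flow(prev_frame, next_frame):
    """Farneback dense optical flow computed separately on the B, G and R
    channels, used to keep track of the detected moving target."""
    flows = []
    for c in range(3):
        flow = cv2.calcOpticalFlowFarneback(
            prev_frame[:, :, c], next_frame[:, :, c], None,
            0.5, 3, 15, 3, 5, 1.2, 0)
        flows.append(flow)                # (h, w, 2) displacement field
    return np.stack(flows)                # (3, h, w, 2)
```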
Step S4: compare the feature vectors in the recognition results for facial expression and body action with the feature vectors and categories recorded in the database; if the distance between them is less than a set threshold, the recognition result is considered correct and the method proceeds to the next step.
The Euclidean distance is computed between the feature vector of the classification result and the corresponding historical feature vector of the inmate. The distance threshold is set to 0.1: if the computed Euclidean distance is less than 0.1, the recognition is considered correct; otherwise it is considered incorrect and this recognition is invalid. Once the recognition is judged correct, the category of the matching feature vector in the database is taken as the category of the recognition result.
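A one-function sketch of this verification step, assuming the feature vectors are plain numeric arrays:

```python
import numpy as np

DIST_THRESHOLD = 0.1   # threshold stated above

def verify_recognition(recognized_vec, stored_vec, threshold=DIST_THRESHOLD):
    """Return True when the Euclidean distance between the recognized feature
    vector and the stored same-category vector is below the threshold."""
    dist = np.linalg.norm(np.asarray(recognized_vec, dtype=float)
                          - np.asarray(stored_vec, dtype=float))
    return dist < threshold
```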
Step S5: construct an SVM multi-classifier with the one-versus-rest method; the classification results are divided into five classes: escape, fighting, suicide, vandalism and normal. The recognized facial-expression and body-action categories are input into the multi-classifier for potential-risk classification.
Five binary classifiers are combined to form a multi-classifier that can separate the four potential-risk classes from the normal class.
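A hedged sketch of such a one-versus-rest SVM is shown below; encoding the (expression category, action category) pair as concatenated one-hot vectors is an assumption, since the patent does not specify the input encoding.

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

RISK_CLASSES = ["escape", "fighting", "suicide", "vandalism", "normal"]

def build_risk_classifier(X_train, y_train):
    """Five binary SVMs, one per class, combined one-versus-rest.
    X_train rows encode (expression, action) pairs; y_train holds class indices."""
    clf = OneVsRestClassifier(SVC(kernel="rbf"))
    clf.fit(X_train, y_train)
    return clf

def classify_risk(clf, expression_onehot, action_onehot):
    """Map a recognized expression/action pair to one of the five risk classes."""
    x = np.concatenate([expression_onehot, action_onehot]).reshape(1, -1)
    return RISK_CLASSES[int(clf.predict(x)[0])]
```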
Step S6: when a prison inmate is judged to present a potential risk, issue an early warning and mark the type of potential risk.
The method of the present invention improves the accuracy with which potential risks of prison inmates are predicted and allows early warning to be issued in advance.
Based on the same inventive concept as above, this embodiment is a system for monitoring potential risks of prison inmates based on expression and action, comprising:
an identification-model training module, for obtaining historical surveillance video of each prison inmate, the frames of which contain the inmate's facial expressions and body actions, and for training, with the video frames as a sample set, a convolutional neural network (CNN) model for facial expression recognition and an HMM model for body action recognition;
a historical-database building module, for establishing, for each prison inmate, a database of the correspondence between facial-expression feature vectors and their categories and between body-action feature vectors and their categories;
a recognition module, for obtaining the inmate's current surveillance video, the frames of which contain facial expressions and body actions, and for recognizing from the video frames, with the trained models, the inmate's facial-expression feature vector and its category and body-action feature vector and its category;
a recognition-result judgment module, for comparing the recognition results for facial expression and body action with the feature vectors of the same categories recorded in the database, and considering the recognition result correct if the distance between them is less than a set threshold;
a potential-risk identification module, for inputting the recognized facial-expression and body-action categories into a multi-classifier for potential-risk classification;
a risk-warning module, for issuing an early warning when a prison inmate is judged to present a potential risk and marking the type of potential risk.
Preferably, the facial expressions comprise 6 basic expressions: happiness, sadness, surprise, disgust, fear and anger; the body actions comprise 12 actions: clapping, nodding, beating the chest, shaking the head, stamping the feet, rubbing the hands, releasing the grip, folding the arms, body shaking, standing upright, pacing back and forth, and shaking a fist.
Preferably, the classification results of the classifier are divided into five classes: escape, fighting, suicide, vandalism, and normal.
Those skilled in the art will appreciate that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical memory and the like) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus which implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
The above is only a preferred embodiment of the present invention. It should be noted that, for those of ordinary skill in the art, several improvements and modifications can be made without departing from the technical principles of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (8)

1. A method for monitoring potential risks of prison inmates based on expression and action, characterized by comprising the following steps:
Step S1: obtain historical surveillance video of each prison inmate, the frames of which contain the inmate's facial expressions and body actions; using the video frames as a sample set, train a convolutional neural network (CNN) model for facial expression recognition and an HMM model for body action recognition;
Step S2: for each prison inmate, establish a database of the correspondence between facial-expression feature vectors and their categories and between body-action feature vectors and their categories;
Step S3: obtain the inmate's current surveillance video, the frames of which contain facial expressions and body actions; using the trained models, recognize from the video frames the inmate's facial-expression feature vector and its category and body-action feature vector and its category;
Step S4: compare the recognition results for facial expression and body action with the feature vectors of the same categories recorded in the database; if the distance between them is less than a set threshold, the recognition result is considered correct and the method proceeds to the next step;
Step S5: construct an SVM multi-classifier, and input the recognized facial-expression and body-action categories into the multi-classifier for potential-risk classification;
Step S6: when a prison inmate is judged to present a potential risk, issue an early warning and mark the type of potential risk.
2. The method for monitoring potential risks of prison inmates based on expression and action according to claim 1, characterized in that the facial expressions comprise 6 basic expressions: happiness, sadness, surprise, disgust, fear and anger, and the body actions comprise 12 actions: clapping, nodding, beating the chest, shaking the head, stamping the feet, rubbing the hands, releasing the grip, folding the arms, body shaking, standing upright, pacing back and forth, and shaking a fist.
3. The method for monitoring potential risks of prison inmates based on expression and action according to claim 1, characterized in that, in step S3, the detailed process of recognizing facial expressions and body actions is:
Step S31: obtain the pixel matrix of the current video frame, detect the face region and the body region with a shallow convolutional neural network, and obtain the face and body region coordinates;
Step S32: perform facial expression recognition on the detected face region with the trained CNN model to obtain the facial-expression feature vector and its category;
Step S33: perform body action recognition on the frame sequence formed by the body regions detected in successive frames with the trained HMM model to obtain the body-action feature vector and its category.
4. The method for monitoring potential risks of prison inmates based on expression and action according to claim 1, characterized in that, in step S4, the distance is the Euclidean distance.
5. The method for monitoring potential risks of prison inmates based on expression and action according to claim 1, characterized in that the classification results of the classifier are divided into five classes: escape, fighting, suicide, vandalism, and normal.
6. A system for monitoring potential risks of prison inmates based on expression and action, characterized by comprising:
an identification-model training module, for obtaining historical surveillance video of each prison inmate, the frames of which contain the inmate's facial expressions and body actions, and for training, with the video frames as a sample set, a convolutional neural network (CNN) model for facial expression recognition and an HMM model for body action recognition;
a historical-database building module, for establishing, for each prison inmate, a database of the correspondence between facial-expression feature vectors and their categories and between body-action feature vectors and their categories;
a recognition module, for obtaining the inmate's current surveillance video, the frames of which contain facial expressions and body actions, and for recognizing from the video frames, with the trained models, the inmate's facial-expression feature vector and its category and body-action feature vector and its category;
a recognition-result judgment module, for comparing the recognition results for facial expression and body action with the feature vectors of the same categories recorded in the database, and considering the recognition result correct if the distance between them is less than a set threshold;
a potential-risk identification module, for inputting the recognized facial-expression and body-action categories into a multi-classifier for potential-risk classification;
a risk-warning module, for issuing an early warning when a prison inmate is judged to present a potential risk and marking the type of potential risk.
7. The system for monitoring potential risks of prison inmates based on expression and action according to claim 6, characterized in that the facial expressions comprise 6 basic expressions: happiness, sadness, surprise, disgust, fear and anger, and the body actions comprise 12 actions: clapping, nodding, beating the chest, shaking the head, stamping the feet, rubbing the hands, releasing the grip, folding the arms, body shaking, standing upright, pacing back and forth, and shaking a fist.
8. The system for monitoring potential risks of prison inmates based on expression and action according to claim 6, characterized in that the classification results of the classifier are divided into five classes: escape, fighting, suicide, vandalism, and normal.
CN201810952067.3A 2018-08-21 2018-08-21 Expression and action-based method and system for monitoring potential risks of prisoners Active CN109165685B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810952067.3A CN109165685B (en) 2018-08-21 2018-08-21 Expression and action-based method and system for monitoring potential risks of prisoners

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810952067.3A CN109165685B (en) 2018-08-21 2018-08-21 Expression and action-based method and system for monitoring potential risks of prisoners

Publications (2)

Publication Number Publication Date
CN109165685A (en) 2019-01-08
CN109165685B CN109165685B (en) 2021-09-10

Family

ID=64896198

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810952067.3A Active CN109165685B (en) 2018-08-21 2018-08-21 Expression and action-based method and system for monitoring potential risks of prisoners

Country Status (1)

Country Link
CN (1) CN109165685B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105740767A (en) * 2016-01-22 2016-07-06 江苏大学 Driver road rage real-time identification and warning method based on facial features
CN106295568A (en) * 2016-08-11 2017-01-04 上海电力学院 The mankind's naturalness emotion identification method combined based on expression and behavior bimodal
CN106530633A (en) * 2016-09-28 2017-03-22 中国人民解放军国防科学技术大学 Intelligent in-event disposal-based security protection method and system
CN106909887A (en) * 2017-01-19 2017-06-30 南京邮电大学盐城大数据研究院有限公司 A kind of action identification method based on CNN and SVM

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106328134A (en) * 2016-08-18 2017-01-11 都伊林 Prison voice data identification and monitoring early warning system
CN109948483A (en) * 2019-03-07 2019-06-28 武汉大学 A kind of personage's interactive relation recognition methods based on movement and facial expression
CN109948483B (en) * 2019-03-07 2022-03-15 武汉大学 Character interaction relation recognition method based on actions and facial expressions
CN110458015A (en) * 2019-07-05 2019-11-15 平安科技(深圳)有限公司 Anti- suicide method for early warning, device, equipment and storage medium based on image recognition
CN110458015B (en) * 2019-07-05 2024-05-03 平安科技(深圳)有限公司 Suicide prevention early warning method, device, equipment and storage medium based on image recognition
CN110533339A (en) * 2019-09-02 2019-12-03 北京旷视科技有限公司 The determination method, apparatus and system of security protection cost
WO2021237907A1 (en) * 2020-05-26 2021-12-02 深圳壹账通智能科技有限公司 Risk identification method and apparatus based on multiple classifiers, computer device, and storage medium
CN111639584A (en) * 2020-05-26 2020-09-08 深圳壹账通智能科技有限公司 Risk identification method and device based on multiple classifiers and computer equipment
CN112101094A (en) * 2020-08-02 2020-12-18 华南理工大学 Suicide risk assessment method based on body language
CN112101094B (en) * 2020-08-02 2023-08-22 华南理工大学 Suicide risk assessment method based on limb language
CN112288312A (en) * 2020-11-12 2021-01-29 广东恒电信息科技股份有限公司 Forewarning system for retrusive of prisoners based on judicial application
CN112287873A (en) * 2020-11-12 2021-01-29 广东恒电信息科技股份有限公司 Judicial service early warning system
CN113723165A (en) * 2021-03-25 2021-11-30 山东大学 Method and system for detecting dangerous expressions of people to be detected based on deep learning
CN113723165B (en) * 2021-03-25 2022-06-07 山东大学 Method and system for detecting dangerous expressions of people to be detected based on deep learning

Also Published As

Publication number Publication date
CN109165685B (en) 2021-09-10

Similar Documents

Publication Publication Date Title
CN109165685A (en) Prison prisoner potentiality risk monitoring method and system based on expression and movement
Zhang et al. Ergonomic posture recognition using 3D view-invariant features from single ordinary camera
Yu et al. Posture-related data collection methods for construction workers: A review
CN104038738B (en) Intelligent monitoring system and intelligent monitoring method for extracting coordinates of human body joint
CN108053427A (en) A kind of modified multi-object tracking method, system and device based on KCF and Kalman
JP2018124972A (en) Crowd analytics via one shot learning
Davis et al. Analysis and recognition of walking movements
Favaretto et al. Detecting crowd features in video sequences
Wei et al. Real-time facial expression recognition for affective computing based on Kinect
Chang et al. Automated facial expression recognition system using neural networks
CN111931869A (en) Method and system for detecting user attention through man-machine natural interaction
EP4265107A1 (en) Information processing device, information processing method, and program
Feng et al. Using eye aspect ratio to enhance fast and objective assessment of facial paralysis
Yan et al. A review of basketball shooting analysis based on artificial intelligence
Zhao et al. Abnormal behavior detection based on dynamic pedestrian centroid model: Case study on u-turn and fall-down
Couceiro et al. A methodology for detection and estimation in the analysis of golf putting
Foytik et al. Tracking and recognizing multiple faces using Kalman filter and ModularPCA
Kishore et al. Spatial Joint features for 3D human skeletal action recognition system using spatial graph kernels
Venture Human characterization and emotion characterization from gait
Negi et al. Real-Time Human Pose Estimation: A MediaPipe and Python Approach for 3D Detection and Classification
Parisi et al. HandSOM-Neural clustering of hand motion for gesture recognition in real time
Renugadevi et al. Deep Learning-Based GYM Monitoring System using YOLOv5 and Pose Estimation Algorithm
Chen et al. A worker posture coding scheme to link automatic and manual coding
Aiman et al. Video based exercise recognition using gcn
Liu Video-based human motion capture and force estimation for comprehensive on-site ergonomic risk assessment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20190108

Assignee: NANJING NANYOU INSTITUTE OF INFORMATION TECHNOVATION Co.,Ltd.

Assignor: NANJING University OF POSTS AND TELECOMMUNICATIONS

Contract record no.: X2021980014141

Denomination of invention: Potential risk monitoring method and system for prison inmates based on expression and action

Granted publication date: 20210910

License type: Common License

Record date: 20211206

EC01 Cancellation of recordation of patent licensing contract
EC01 Cancellation of recordation of patent licensing contract

Assignee: NANJING NANYOU INSTITUTE OF INFORMATION TECHNOVATION Co.,Ltd.

Assignor: NANJING University OF POSTS AND TELECOMMUNICATIONS

Contract record no.: X2021980014141

Date of cancellation: 20231107