CN111967355A - Prison-crossing intention evaluation method for prisoners based on body language - Google Patents

Prison-crossing intention evaluation method for prisoners based on body language Download PDF

Info

Publication number
CN111967355A
CN111967355A CN202010763662.XA CN202010763662A CN111967355A CN 111967355 A CN111967355 A CN 111967355A CN 202010763662 A CN202010763662 A CN 202010763662A CN 111967355 A CN111967355 A CN 111967355A
Authority
CN
China
Prior art keywords
layer
prison
prisoners
skeleton information
skeleton
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010763662.XA
Other languages
Chinese (zh)
Other versions
CN111967355B (en
Inventor
杜广龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202010763662.XA priority Critical patent/CN111967355B/en
Publication of CN111967355A publication Critical patent/CN111967355A/en
Application granted granted Critical
Publication of CN111967355B publication Critical patent/CN111967355B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/04Inference or reasoning models
    • G06N5/048Fuzzy inferencing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Automation & Control Theory (AREA)
  • Fuzzy Systems (AREA)
  • Image Analysis (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a prison-crossing intention evaluation method for prisoners based on body language. The method comprises the following steps: extracting skeleton information from the body image of a prisoner in each frame of image of the monitoring video to obtain a skeleton information sequence; constructing a network fusing RNN and fuzzy inference and training; and inputting the skeleton information sequence into the constructed network integrating the RNN and the fuzzy reasoning, and outputting the prison-crossing intention of prisoners. The method has the advantages of high accuracy, high reliability level and good real-time property. The body language features of prisoners are collected through the camera, the modification requirement on prison equipment is low, and the situation that face images may be blurred is fully considered. The network combines RNN and fuzzy inference, which can extract the temporal information related to the body language and solve the problems of ambiguity and noise. The non-contact collection and evaluation method avoids psychological discomfort of prisoners.

Description

Prison-crossing intention evaluation method for prisoners based on body language
Technical Field
The invention belongs to the technical field of judicial supervision, and particularly relates to a prison crossing intention evaluation method for prisoners based on body language.
Background
The prison escaping and crossing behavior of prisoners is difficult to predict, and the prison escaping and crossing behavior is a big problem of risk management and control in prisons. The method is a fundamental method for predicting and controlling behavior trend of prisoners by mastering psychological fluctuation of the prisoners in real time and accurately. The method for realizing the prison safety management is an important means for realizing the adherence of a safety bottom line, perfecting a safety management system and creating a safe prison. How to utilize the existing video equipment and transmission system of prisons and use AI technology to obtain and evaluate the real psychological conditions and trends of prisoners is a new requirement for risk management and control in prisons.
Generally, at present, the risk management and control in the domestic prison is still in a starting stage, and experience evaluation, interview evaluation, questionnaires and scales are mainly used at present. However, most of the methods rely on subjective judgment, the accuracy is poor, the labor cost is high, and the processing time has certain hysteresis. In a few places, a scale evaluation tool based on a mobile terminal is used, research concerning dynamic factors such as occupation, personality, psychological condition, mental illness and the like is few in evaluation content, and dynamic psychological state collection based on daily life state is not reported yet. However, the related foreign research starts earlier, and forms four generations of empirical clinical evaluation, static scale evaluation, dynamic evaluation and dynamic structural evaluation, and the fifth generation evaluation characterized by artificial intelligence, neural networks and the like has also appeared. At present, evaluation tools such as SFS, RM2000, LSI-R, RNR, HCR-20 and the like widely used in America, English and other countries form a more intelligent evidence-based correction technology and an operation platform, biological factor detection is introduced, and the nervous system reactivity of a monitored person is evaluated by means of technologies such as electroencephalogram, electrodermal and the like to predict the crime risk of the monitored person (Levi, 2004; Nussbaum, 2005).
The AI-based assessment method can dynamically assess, collect data in real time and grasp psychological fluctuation of prisoners in time. The machine learning model is adopted to track psychological changes of prisoners, and the method has the advantages of high accuracy, high reliability level and good real-time performance. Assessment of the intention of prisoners to escape and cross prisons by AI is a new trend of risk management and control in prisons. The adoption of the image prediction intention technology is an important direction because the camera of the prison is complete. Considering that the face image of a person taking a criminal is blurred, the intention assessment technology of the person taking a criminal to escape from prison based on body language is considered.
Disclosure of Invention
The invention aims to evaluate the prison-crossing escape intention of prisoners through body language and provides a prison-crossing intention evaluation method of prisoners based on the body language.
The purpose of the invention is realized by at least one of the following technical solutions.
A prison-crossing intention assessment method for prisoners based on body language comprises the following steps:
s1, extracting skeleton information from the body image of the prisoner in each frame of image of the monitoring video to obtain a skeleton information sequence;
s2, constructing a network integrating RNN and fuzzy inference and training;
and S3, inputting the skeleton information sequence into the constructed network integrating the RNN and the fuzzy reasoning, and outputting the prison-crossing intention of prisoners.
Further, in step S1, the skeleton information extraction method is to perform skeleton extraction on the human body image in the input monitoring video by using the existing human body skeleton extraction network model, and the obtained skeleton information sequence x may be represented as:
x={x1,…,xk,…,xT};
wherein xkIs an s × 2 matrix representing skeleton information of the k-th frame; s represents the number of human skeleton points included in the skeleton information extracted by the network; t is the total number of frames of the video.
Further, in step S2, the RNN and fuzzy inference fused network includes the following 7 layers:
the first layer being an input layer to which a signal u is input(1)The skeleton information sequence x extracted from step S1;
the second layer is a fuzzy layer, interference noise in the skeleton information can be reduced through fuzzification, and the member qualification value of the data from the first layer is calculated through a Gaussian member function; the gaussian member function calculation formula is as follows:
Figure BDA0002613770770000021
wherein ,u(2)Is the output matrix of the second layer and,
Figure BDA0002613770770000022
representation matrix u(2)Row i and column j;
Figure BDA0002613770770000023
i-th value x representing a skeleton information sequence xiThe skeleton coordinates of the jth of (1); v. ofijAnd
Figure BDA0002613770770000024
the mean and variance of the Gaussian member functions of the jth skeleton point corresponding to the ith input; during training, get
Figure BDA0002613770770000025
σij=0.5;
The third layer is a space activation layer, each node of the third layer uses continuous accumulation multiplication as a fuzzy operator, and obtains space activation strength after operating on the membership value output by the second layer, and the calculation is as follows:
Figure BDA0002613770770000026
wherein
Figure BDA0002613770770000027
Is the ith output of the third layer;
the fourth layer is a time sequence activation layer which utilizes RNN and is used for acquiring time characteristics of the skeleton information; each neuron in this layer is calculated as follows:
Figure BDA0002613770770000028
wherein ,
Figure BDA0002613770770000029
is the mth node output value of the layer; t is the time step; n is the total number of nodes of the fourth layer;
Figure BDA0002613770770000031
a time activated strength value representing a current state of the mth node; w is amIs the weight of the current state value of the mth node relative to the last state value; output u of this layer(4)Combining the spatial activation intensity of the previous layer with the temporal activation intensity of the previous state;
the fifth layer is a subsequent layer, and the layer carries out weighted linear summation calculation by using the input of the first layer and the output of the fourth layer, and the specific formula is as follows:
Figure BDA0002613770770000032
wherein
Figure BDA0002613770770000033
Is the mth output node of the layer;
Figure BDA0002613770770000034
a coefficient matrix corresponding to the mth output node; omegamIs corresponding to
Figure BDA0002613770770000035
The weight parameter of (2); bmIs the deviation corresponding to the mth node;
Figure BDA0002613770770000036
is a weighted sum of the input data elements;
the sixth layer is a deblurring layer, and the task of the sixth layer is deblurring; the layer uses a weighted average defuzzification method as follows:
Figure BDA0002613770770000037
wherein
Figure BDA0002613770770000038
Is the mth output of the deblurring layer; the output of the layer extracts the relation between the skeleton information sequences and is used as the basis of final classification;
the seventh layer is a result layer, and the result layer adopts a sigmod function to predict the prison crossing intention of prisoners; the specific formula is as follows:
p=sigmod(Wu(6));
wherein p represents the jail crossing probability, W is a weight coefficient matrix of the layer, and the ADAM algorithm is used for automatic adjustment during training.
Further, in step S2, before training the network, the human skeleton extraction network adopts the existing network, and the parameters are trained; collecting prison crossing videos and normal prison serving videos of previous prisoners, and generating more training data by processing the collected videos such as cutting and shearing; labeling all training data, wherein the jail-crossing mark is 1, and the non-jail-crossing mark is 0; during training, the loss function is defined as:
Figure BDA0002613770770000039
wherein D represents the number of training data;
Figure BDA00026137707700000310
representing an output value obtained after the a-th training data is input into the network; y isaA label value representing the a-th training data; and training the constructed RNN and fuzzy inference fused network by using an ADAM optimization method.
Compared with the prior art, the invention has the following advantages:
(1) the method collects the body language characteristics of prisoners through the camera, has low requirement on the modification of prison equipment, and fully considers the situation that face images are possibly blurred.
(2) The network of the invention combines RNN and fuzzy inference, can extract the temporal information related to the body language, and solves the problems of ambiguity and noise.
(3) The invention is a non-contact acquisition and evaluation method, which avoids the psychological discomfort of prisoners.
Drawings
FIG. 1 is a flow chart of a prison-crossing intention evaluation method for prisoners based on body language according to the invention;
FIG. 2 is a structural diagram of a network fusing RNN and fuzzy inference constructed by the present invention.
Detailed Description
Specific implementations of the present invention will be further described with reference to the following examples and drawings, but the embodiments of the present invention are not limited thereto.
Example (b):
a prison crossing intention assessment method for prisoners based on body language, as shown in figure 1, comprises the following steps:
s1, extracting skeleton information from the body image of the prisoner in each frame of image of the monitoring video to obtain a skeleton information sequence;
the method for extracting the skeleton information is to adopt the existing human skeleton to extract a network model, in this embodiment, open-source openpos is adopted to perform skeleton extraction on a human body image in an input monitoring video, and an obtained skeleton information sequence x can be expressed as:
x={x1,…,xk,…,xT};
wherein xkIs an s × 2 matrix representing skeleton information of the k-th frame; s represents the number of human skeleton points included in the skeleton information extracted by the network; t is the total number of frames of the video.
S2, constructing a network integrating RNN and fuzzy inference and training;
as shown in fig. 2, the RNN and fuzzy inference fused network includes the following 7 layers:
the first layer being an input layer to which a signal u is input(1)The skeleton information sequence x extracted from step S1;
the second layer is a fuzzy layer, interference noise in the skeleton information can be reduced through fuzzification, and the member qualification value of the data from the first layer is calculated through a Gaussian member function; the gaussian member function calculation formula is as follows:
Figure BDA0002613770770000041
wherein ,u(2)Is the output matrix of the second layer and,
Figure BDA0002613770770000042
representation matrix u(2)Row i and column j;
Figure BDA0002613770770000043
i-th value x representing a skeleton information sequence xiThe skeleton coordinates of the jth of (1); v. ofijAnd
Figure BDA0002613770770000044
the mean and variance of the Gaussian member functions of the jth skeleton point corresponding to the ith input; during training, get
Figure BDA0002613770770000045
σij=0.5;
The third layer is a space activation layer, each node of the third layer uses continuous accumulation multiplication as a fuzzy operator, and obtains space activation strength after operating on the membership value output by the second layer, and the calculation is as follows:
Figure BDA0002613770770000051
wherein
Figure BDA0002613770770000052
Is the ith output of the third layer;
The fourth layer is a time sequence activation layer which utilizes RNN and is used for acquiring time characteristics of the skeleton information; each neuron in this layer is calculated as follows:
Figure BDA0002613770770000053
wherein ,
Figure BDA0002613770770000054
is the mth node output value of the layer; t is the time step; n is the total number of nodes of the fourth layer;
Figure BDA0002613770770000055
a time activated strength value representing a current state of the mth node; w is amIs the weight of the current state value of the mth node relative to the last state value; output u of this layer(4)Combining the spatial activation intensity of the previous layer with the temporal activation intensity of the previous state;
the fifth layer is a subsequent layer, and the layer carries out weighted linear summation calculation by using the input of the first layer and the output of the fourth layer, and the specific formula is as follows:
Figure BDA0002613770770000056
wherein
Figure BDA0002613770770000057
Is the mth output node of the layer;
Figure BDA0002613770770000058
a coefficient matrix corresponding to the mth output node; omegamIs corresponding to
Figure BDA0002613770770000059
The weight parameter of (2); bmIs the deviation corresponding to the mth node;
Figure BDA00026137707700000510
is a weighted sum of the input data elements;
the sixth layer is a deblurring layer, and the task of the sixth layer is deblurring; the layer uses a weighted average defuzzification method as follows:
Figure BDA00026137707700000511
wherein
Figure BDA00026137707700000512
Is the mth output of the deblurring layer; the output of the layer extracts the relation between the skeleton information sequences and is used as the basis of final classification;
the seventh layer is a result layer, and the result layer adopts a sigmod function to predict the prison crossing intention of prisoners; the specific formula is as follows:
p=sigmod(Wu(6));
wherein p represents the jail crossing probability, W is a weight coefficient matrix of the layer, and the ADAM algorithm is used for automatic adjustment during training.
Before training the network, the human skeleton extraction network adopts the existing network, such as OpenPose, and the parameters are trained; collecting prison crossing videos and normal prison serving videos of previous prisoners, and generating more training data by processing the collected videos such as cutting and shearing; labeling all training data, wherein the jail-crossing mark is 1, and the non-jail-crossing mark is 0; during training, the loss function is defined as:
Figure BDA0002613770770000061
wherein D represents the number of training data;
Figure BDA0002613770770000062
representing an output value obtained after the a-th training data is input into the network; y isaA label value representing the a-th training data; application of ADAM optimization method to constructed RNNAnd training the network fused with the fuzzy inference.
And S3, inputting the skeleton information sequence into the constructed network integrating the RNN and the fuzzy reasoning, and outputting the prison-crossing intention of prisoners.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can substitute or change the technical solution of the present invention and the inventive concept within the scope of the present invention disclosed by the present invention.

Claims (4)

1. A prison-crossing intention evaluation method for prisoners based on body language is characterized by comprising the following steps:
s1, extracting skeleton information from the body image of the prisoner in each frame of image of the monitoring video to obtain a skeleton information sequence;
s2, constructing a network integrating RNN and fuzzy inference and training;
and S3, inputting the skeleton information sequence into the constructed network integrating the RNN and the fuzzy reasoning, and outputting the prison-crossing intention of prisoners.
2. The method for assessing the prison crossing intention of prisoners based on body language as claimed in claim 1, wherein in step S1, the skeleton information is extracted by using the existing human skeleton extraction network model to perform skeleton extraction on human body images in the input surveillance video, and the obtained skeleton information sequence x is represented as:
x={x1,…,xk,…,xT};
wherein xkIs an s × 2 matrix representing skeleton information of the k-th frame; s represents the number of human skeleton points included in the extracted skeleton information; t is the total number of frames of the video.
3. A prison-crossing intention assessment method for prisoners based on body language according to claim 1, wherein said RNN and fuzzy inference fused network comprises the following 7 layers in step S2:
the first layer being an input layer to which a signal u is input(1)The skeleton information sequence x extracted from step S1;
the second layer is a fuzzy layer, interference noise in the skeleton information can be reduced through fuzzification, and the member qualification value of the data from the first layer is calculated through a Gaussian member function; the gaussian member function calculation formula is as follows:
Figure FDA0002613770760000011
wherein ,u(2)Is the output matrix of the second layer and,
Figure FDA0002613770760000012
representation matrix u(2)Row i and column j;
Figure FDA0002613770760000013
i-th value x representing a skeleton information sequence xiThe skeleton coordinates of the jth of (1); v. ofijAnd
Figure FDA0002613770760000014
the mean and variance of the Gaussian member functions of the jth skeleton point corresponding to the ith input; during training, get
Figure FDA0002613770760000015
σij=0.5;
The third layer is a space activation layer, each node of the third layer uses continuous accumulation multiplication as a fuzzy operator, and obtains space activation strength after operating on the membership value output by the second layer, and the calculation is as follows:
Figure FDA0002613770760000016
wherein
Figure FDA0002613770760000017
Is the ith output of the third layer;
the fourth layer is a time sequence activation layer which utilizes RNN and is used for acquiring time characteristics of the skeleton information;
each neuron in this layer is calculated as follows:
Figure FDA0002613770760000021
wherein ,
Figure FDA0002613770760000022
is the mth node output value of the layer; t is the time step; n is the total number of nodes of the fourth layer;
Figure FDA0002613770760000023
a time activated strength value representing a current state of the mth node; w is amIs the weight of the current state value of the mth node relative to the last state value; output u of this layer(4)Combining the spatial activation intensity of the previous layer with the temporal activation intensity of the previous state;
the fifth layer is a subsequent layer, and the layer carries out weighted linear summation calculation by using the input of the first layer and the output of the fourth layer, and the specific formula is as follows:
Figure FDA0002613770760000024
wherein
Figure FDA0002613770760000025
Is the mth output node of the layer;
Figure FDA0002613770760000026
is that
Figure FDA0002613770760000027
A coefficient matrix corresponding to the mth output node; omegamIs corresponding to
Figure FDA0002613770760000028
The weight parameter of (2); bmIs the deviation corresponding to the mth node;
Figure FDA0002613770760000029
is a weighted sum of the input data elements;
the sixth layer is a deblurring layer, and the task of the sixth layer is deblurring; the layer uses a weighted average defuzzification method as follows:
Figure FDA00026137707600000210
wherein
Figure FDA00026137707600000211
Is the mth output of the deblurring layer; the output of the layer extracts the relation between the skeleton information sequences and is used as the basis of final classification;
the seventh layer is a result layer, and the result layer adopts a sigmod function to predict the prison crossing intention of prisoners; the specific formula is as follows:
p=sigmod(Wu(6));
wherein p represents the jail crossing probability, W is a weight coefficient matrix of the layer, and the ADAM algorithm is used for automatic adjustment during training.
4. The prison-crossing intention assessment method for prisoners based on body language according to claim 1, wherein in step S2, before training the network, the human skeleton extraction network is the existing network, and the parameters are trained; collecting prison crossing videos and normal prison serving videos of previous prisoners, and generating more training data by processing the collected videos such as cutting and shearing; labeling all training data, wherein the jail-crossing mark is 1, and the non-jail-crossing mark is 0; during training, the loss function is defined as:
Figure FDA00026137707600000212
wherein D represents the number of training data;
Figure FDA0002613770760000031
representing an output value obtained after the a-th training data is input into the network; y isaA label value representing the a-th training data; and training the constructed RNN and fuzzy inference fused network by using an ADAM optimization method.
CN202010763662.XA 2020-07-31 2020-07-31 Prisoner jail-breaking intention assessment method based on limb language Active CN111967355B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010763662.XA CN111967355B (en) 2020-07-31 2020-07-31 Prisoner jail-breaking intention assessment method based on limb language

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010763662.XA CN111967355B (en) 2020-07-31 2020-07-31 Prisoner jail-breaking intention assessment method based on limb language

Publications (2)

Publication Number Publication Date
CN111967355A true CN111967355A (en) 2020-11-20
CN111967355B CN111967355B (en) 2023-09-01

Family

ID=73363781

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010763662.XA Active CN111967355B (en) 2020-07-31 2020-07-31 Prisoner jail-breaking intention assessment method based on limb language

Country Status (1)

Country Link
CN (1) CN111967355B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052497A (en) * 2021-02-02 2021-06-29 浙江工业大学 Criminal worker risk prediction method based on dynamic and static feature fusion learning

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140219550A1 (en) * 2011-05-13 2014-08-07 Liberovision Ag Silhouette-based pose estimation
WO2018015080A1 (en) * 2016-07-19 2018-01-25 Siemens Healthcare Gmbh Medical image segmentation with a multi-task neural network system
CN110507335A (en) * 2019-08-23 2019-11-29 山东大学 Inmate's psychological health states appraisal procedure and system based on multi-modal information
CN110837523A (en) * 2019-10-29 2020-02-25 山东大学 High-confidence reconstruction quality and false-transient-reduction quantitative evaluation method based on cascade neural network
CN110942088A (en) * 2019-11-04 2020-03-31 山东大学 Risk level evaluation method based on effective influence factors of prisoners and realization system thereof
WO2020107833A1 (en) * 2018-11-26 2020-06-04 平安科技(深圳)有限公司 Skeleton-based behavior detection method, terminal device, and computer storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140219550A1 (en) * 2011-05-13 2014-08-07 Liberovision Ag Silhouette-based pose estimation
WO2018015080A1 (en) * 2016-07-19 2018-01-25 Siemens Healthcare Gmbh Medical image segmentation with a multi-task neural network system
WO2020107833A1 (en) * 2018-11-26 2020-06-04 平安科技(深圳)有限公司 Skeleton-based behavior detection method, terminal device, and computer storage medium
CN110507335A (en) * 2019-08-23 2019-11-29 山东大学 Inmate's psychological health states appraisal procedure and system based on multi-modal information
CN110837523A (en) * 2019-10-29 2020-02-25 山东大学 High-confidence reconstruction quality and false-transient-reduction quantitative evaluation method based on cascade neural network
CN110942088A (en) * 2019-11-04 2020-03-31 山东大学 Risk level evaluation method based on effective influence factors of prisoners and realization system thereof

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GUANGLONG DU 等: "Non-Contact Emotion Recognition Combining Heart Rate and Facial Expression for Interactive Gaming Environments", 《IEEEACCESS》, pages 11896 - 11906 *
田曼;张艺;: "多模型融合动作识别研究", 电子测量技术, no. 20, pages 118 - 123 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052497A (en) * 2021-02-02 2021-06-29 浙江工业大学 Criminal worker risk prediction method based on dynamic and static feature fusion learning

Also Published As

Publication number Publication date
CN111967355B (en) 2023-09-01

Similar Documents

Publication Publication Date Title
Chackravarthy et al. Intelligent crime anomaly detection in smart cities using deep learning
CN108764059B (en) Human behavior recognition method and system based on neural network
CN111626116B (en) Video semantic analysis method based on fusion of multi-attention mechanism and Graph
CN111209848A (en) Real-time fall detection method based on deep learning
CN111653023A (en) Intelligent factory supervision method
CN112784685A (en) Crowd counting method and system based on multi-scale guiding attention mechanism network
CN116564561A (en) Intelligent voice nursing system and nursing method based on physiological and emotion characteristics
CN111626199A (en) Abnormal behavior analysis method for large-scale multi-person carriage scene
CN113850229A (en) Method and system for early warning abnormal behaviors of people based on video data machine learning and computer equipment
CN113671421A (en) Transformer state evaluation and fault early warning method
Goel et al. An ontology-driven context aware framework for smart traffic monitoring
CN114266201B (en) Self-attention elevator trapping prediction method based on deep learning
CN111967355B (en) Prisoner jail-breaking intention assessment method based on limb language
Varghese et al. Application of cognitive computing for smart crowd management
Wu Design of intelligent nursing system based on artificial intelligence
Esan et al. Surveillance detection of anomalous activities with optimized deep learning technique in crowded scenes
CN113486754A (en) Event evolution prediction method and system based on video
CN113616209A (en) Schizophrenia patient discrimination method based on space-time attention mechanism
CN113158888A (en) Elevator abnormal video identification method
CN110705413A (en) Emotion prediction method and system based on sight direction and LSTM neural network
CN114826949B (en) Communication network condition prediction method
CN117391456B (en) Village management method and service platform system based on artificial intelligence
CN117275156A (en) Unattended chess and card room reservation sharing system
Jadhav et al. Mobilenet and Deep Residual Network for Object Detection and Classification of Objects in IoT Enabled Construction Safety Monitoring
Mazlam et al. Estimation of Fines Amount in Syariah Criminal Offences Using Adaptive Neuro-Fuzzy Inference System (ANFIS)

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant