CN111967355B - Prisoner jail-breaking intention assessment method based on limb language - Google Patents

Prisoner jail-breaking intention assessment method based on limb language

Info

Publication number
CN111967355B
CN111967355B (application CN202010763662.XA; also published as CN 111967355 B)
Authority
CN
China
Prior art keywords
layer, skeleton, network, skeleton information, output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010763662.XA
Other languages
Chinese (zh)
Other versions
CN111967355A (en)
Inventor
Du Guanglong (杜广龙)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology (SCUT)
Priority to CN202010763662.XA
Publication of CN111967355A
Application granted
Publication of CN111967355B
Legal status: Active (current)
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 5/00 Computing arrangements using knowledge-based models
    • G06N 5/04 Inference or reasoning models
    • G06N 5/048 Fuzzy inferencing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Automation & Control Theory (AREA)
  • Fuzzy Systems (AREA)
  • Image Analysis (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a prisoner jail-breaking intention assessment method based on limb language. The method comprises the following steps: extracting skeleton information from the images of the prisoner in each frame of a surveillance video to obtain a skeleton information sequence; constructing and training a network that fuses an RNN with fuzzy reasoning; and inputting the skeleton information sequence into the constructed network to output the prisoner's jail-breaking intention. The invention offers high accuracy, a high confidence level and good real-time performance. Because a camera is used to collect the prisoners' limb-language features, little retrofitting of prison equipment is required, and the possibility that facial images are blurred is fully taken into account. By combining an RNN with fuzzy reasoning, the network can extract the temporal information carried by limb language while also handling ambiguity and noise. As a non-contact collection and assessment method, it avoids causing prisoners psychological discomfort.

Description

Prisoner jail-breaking intention assessment method based on limb language
Technical Field
The invention belongs to the technical field of judicial supervision, and particularly relates to a prisoner jail-breaking intention assessment method based on limb language.
Background
Escape and jail-breaking behavior of prisoners is difficult to predict and is a major challenge for risk management and control in prisons. Grasping prisoners' psychological fluctuations promptly and accurately is fundamental to predicting and controlling their behavioral tendencies. Understanding prisoners' thoughts and actions is an important means of holding the safety bottom line, perfecting the safety control system and building a safe prison. How to use the existing video equipment and transmission systems of prisons, together with AI technology, to obtain and evaluate the real psychological condition and tendencies of prisoners is a new requirement for managing risks in prisons.
Overall, current in-prison risk management is still at an early stage and is mostly based on experience-based assessment, interviews, questionnaires and scales. Most of these methods rely on subjective judgment, suffer from poor accuracy and high labor cost, and lag behind events in processing time. A few institutions have begun to use mobile-based scale assessment tools; in terms of assessment content, little research focuses on dynamic factors such as occupation, personality, psychological condition and mental state, and no report has yet appeared on dynamic psychological-state acquisition based on daily life. Related research abroad started earlier and has produced four generations of tools: empirical clinical evaluation, static scale evaluation, dynamic evaluation and dynamic structural evaluation; a fifth generation featuring artificial intelligence and neural networks has also appeared. Evaluation tools such as SFS, RM2000, LSI-R, RNR and HCR-20, widely used in countries such as the United States and the United Kingdom, have formed relatively intelligent evidence-based correction techniques and operation platforms; biological-factor detection has been introduced, evaluating the nervous-system reactivity of monitored persons by means of technologies such as electroencephalography and skin-conductance measurement to predict their crime risk (Levi, 2004; Nussbaum, 2005).
An AI-based assessment method can assess dynamically, collect data in real time and promptly grasp prisoners' psychological fluctuations. Using machine-learning models to track prisoners' psychological changes offers high accuracy, a high confidence level and good real-time performance. Assessing prisoners' escape intentions with AI is a new trend in in-prison risk management. Since prisons are already fully equipped with cameras, predicting intention from images is an important direction. Considering that the facial images of prisoners captured by cameras may be blurred, this invention focuses on assessing prisoners' jail-breaking intention based on limb language.
Disclosure of Invention
The invention aims to evaluate the jail-breaking and escape intention of a prisoner through limb language, and provides a prisoner jail-breaking intention assessment method based on limb language.
The object of the invention is achieved by at least one of the following technical solutions.
The prisoner jail-breaking intention assessment method based on the limb language comprises the following steps of:
s1, extracting skeleton information from the personal images of the prisoner in each frame of image of the monitoring video to obtain a skeleton information sequence;
s2, constructing a network integrating RNN and fuzzy reasoning and training;
s3, inputting the skeleton information sequence into a constructed RNN and fuzzy reasoning fused network to output prison-breaking intention of prisoners.
Further, in step S1, the skeleton information extraction method is to perform skeleton extraction on the human body image in the input monitoring video by using the existing human skeleton extraction network model, and the obtained skeleton information sequence x may be expressed as:
x = {x_1, …, x_k, …, x_T};
wherein x_k is an s×2 matrix representing the skeleton information of the k-th frame; s is the number of human skeleton points included in the skeleton information extracted by the network; T is the total number of frames of the video.
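As a minimal sketch of this data structure (the sizes s = 18 and T = 4 below are hypothetical, chosen only for illustration; the real values depend on the extractor and the video), the skeleton information sequence can be held as a stack of per-frame s×2 coordinate matrices:

```python
import numpy as np

# Hypothetical sizes: s = 18 skeleton points (an OpenPose-style layout),
# T = 4 frames of surveillance video.
s, T = 18, 4

# Each frame k yields an s x 2 matrix of (x, y) joint coordinates, so the
# sequence x = {x_1, ..., x_T} stacks into a T x s x 2 array.
rng = np.random.default_rng(0)
x = rng.random((T, s, 2))

assert x.shape == (T, s, 2)
assert x[0].shape == (s, 2)  # x_k: skeleton information of the k-th frame
```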
Further, in step S2, the network in which the RNN and the fuzzy inference are fused includes the following 7 layers:
The first layer is an input layer; its input u^(1) is the skeleton information sequence x extracted in step S1.
The second layer is a fuzzy layer; fuzzification reduces interference noise in the skeleton information, and this layer uses a Gaussian membership function to compute membership values for the data from the first layer. The Gaussian membership function is calculated as follows:
u^(2)_ij = exp(−(x_ij − v_ij)² / σ_ij²);
wherein u^(2) is the output matrix of the second layer and u^(2)_ij is the value in row i and column j; x_ij is the j-th skeleton coordinate of the i-th element x_i of the skeleton information sequence x; v_ij and σ_ij² are the mean and variance of the Gaussian membership function for the j-th skeleton coordinate of the i-th input; during training, σ_ij = 0.5 is taken.
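The fuzzy layer above can be sketched as follows. This is an illustrative reconstruction, not the patent's code: the toy sizes, the zero initialisation of the means v_ij, and the random inputs are all assumptions; only σ_ij = 0.5 comes from the text.

```python
import numpy as np

def gaussian_membership(x_flat, v, sigma):
    """u2_ij = exp(-(x_ij - v_ij)^2 / sigma_ij^2): one membership value
    per skeleton coordinate, squashing noisy inputs into (0, 1]."""
    return np.exp(-((x_flat - v) ** 2) / sigma ** 2)

# Toy sizes (assumed): T = 3 frames, 4 flattened coordinates per frame.
T, n_coords = 3, 4
rng = np.random.default_rng(1)
x_flat = rng.random((T, n_coords))

v = np.zeros((T, n_coords))          # means v_ij (initialisation assumed)
sigma = np.full((T, n_coords), 0.5)  # sigma_ij = 0.5 as stated in the text

u2 = gaussian_membership(x_flat, v, sigma)
assert u2.shape == (T, n_coords)
assert np.all((u2 > 0) & (u2 <= 1))  # memberships live in (0, 1]
```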
The third layer is a spatial activation layer; each node uses continued multiplication as the fuzzy operator, and the spatial activation strength is obtained by operating on the membership values output by the second layer, calculated as follows:
u^(3)_i = ∏_j u^(2)_ij;
wherein u^(3)_i is the i-th output of the third layer.
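The product fuzzy operator of the spatial activation layer reduces each row of membership values to a single firing strength; the small membership matrix below is made up for illustration:

```python
import numpy as np

# u3_i = prod_j u2_ij: each row of membership values collapses to one
# spatial activation strength via continued multiplication.
u2 = np.array([[0.9, 0.8, 1.0],
               [0.5, 0.5, 0.5]])
u3 = np.prod(u2, axis=1)
assert np.allclose(u3, [0.72, 0.125])
```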
The fourth layer is a temporal activation layer; it uses an RNN to capture the temporal characteristics of the skeleton information. Each neuron in this layer is computed as follows:
u^(4)_m(t) = u^(3)_m(t) + w_m · u^(4)_m(t−1);
wherein u^(4)_m is the output value of the m-th node of this layer; t is the time step; N is the total number of nodes of the fourth layer; u^(3)_m(t) is the spatial activation strength of the m-th node at the current state; w_m is the weight of the current state value of the m-th node relative to the previous state value. The output u^(4) of this layer combines the spatial activation strength of the previous layer with the temporal activation strength of the previous state.
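The recurrence of the temporal activation layer can be sketched as below. Note the exact recurrence is only partially recoverable from the source, so the additive form u4_m(t) = u3_m(t) + w_m · u4_m(t−1) used here is an assumption consistent with the stated roles of u^(3), u^(4) and w_m:

```python
import numpy as np

def temporal_activation(u3_seq, w):
    """Assumed reading of the layer: u4_m(t) = u3_m(t) + w_m * u4_m(t-1),
    adding the current spatial strength to the weighted previous state."""
    T, N = u3_seq.shape
    u4 = np.zeros(N)           # u4_m(0) = 0: no previous state at t = 0
    for t in range(T):
        u4 = u3_seq[t] + w * u4
    return u4

u3_seq = np.array([[1.0, 0.5],
                   [1.0, 0.5]])   # two time steps, two nodes (toy values)
w = np.array([0.5, 0.0])          # node 1 keeps no memory of its past
u4 = temporal_activation(u3_seq, w)
# node 0: 1.0, then 1.0 + 0.5*1.0 = 1.5 ; node 1: 0.5, then 0.5 + 0 = 0.5
assert np.allclose(u4, [1.5, 0.5])
```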
The fifth layer is a consequent layer, in which a weighted linear summation is computed from the input of the first layer and the output of the fourth layer, with the specific formula:
u^(5)_m = ω_m u^(4)_m + Σ_{i,j} (A_m)_ij x_ij + b_m;
wherein u^(5)_m is the m-th output node of this layer; A_m is the coefficient matrix corresponding to the m-th output node; ω_m is the weight parameter corresponding to u^(4)_m; b_m is the bias corresponding to the m-th node; Σ_{i,j} (A_m)_ij x_ij is the weighted sum of the input data elements.
The sixth layer is a defuzzification layer, whose task is defuzzification; this layer adopts the weighted-average defuzzification method, as follows:
u^(6)_m = u^(5)_m / Σ_{n=1}^{N} u^(4)_n;
wherein u^(6)_m is the m-th output of the defuzzification layer. The output of this layer captures the relations within the skeleton information sequence and serves as the basis for the final classification.
The seventh layer is a result layer, in which the prisoner's jail-breaking intention is predicted with a sigmoid function; the specific formula is:
p = sigmoid(W u^(6));
wherein p is the jail-break probability and W is the weight coefficient matrix of this layer, adjusted automatically by the ADAM algorithm during training.
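Layers five to seven can be sketched together as below. The shapes and the exact consequent and defuzzification forms are assumptions (the patent's formulas are only partially recoverable); the sketch shows a linear consequent per node, weighted-average defuzzification, then a sigmoid read-out of the jail-break probability p:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

N = 3                        # number of nodes (toy value)
rng = np.random.default_rng(2)
u4 = rng.random(N)           # temporal activation strengths from layer 4
x_sum = rng.random(N)        # per-node weighted sums of the input elements
omega, b = rng.random(N), rng.random(N)

u5 = omega * u4 + x_sum + b  # fifth layer: weighted linear summation
u6 = u5 / np.sum(u4)         # sixth layer: weighted-average defuzzification
W = rng.random(N)
p = sigmoid(W @ u6)          # seventh layer: jail-break probability in (0, 1)

assert u5.shape == u6.shape == (N,)
assert 0.0 < p < 1.0
```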
Further, in step S2, before training the network, the human skeleton extraction network adopts an existing, already-trained model. Past jail-break videos of prisoners and normal (non-jail-break) videos are acquired, and more training data are generated by processing the acquired videos, e.g. by segmentation and cropping. All training data are labeled, with jail-break marked 1 and non-jail-break marked 0. During training, the loss function is defined as:
Loss = −(1/D) Σ_{a=1}^{D} [ y_a log ŷ_a + (1 − y_a) log(1 − ŷ_a) ];
wherein D is the number of training data; ŷ_a is the output value obtained after the a-th training datum is input into the network; y_a is the label value of the a-th training datum. The constructed RNN and fuzzy reasoning fused network is trained using the ADAM optimization method.
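The loss over D labeled samples can be sketched as binary cross-entropy, which is the standard choice for a sigmoid output with 0/1 labels; since the patent's exact loss formula is garbled in the source, this form is an assumption, and the sample predictions below are made up:

```python
import numpy as np

def bce_loss(y_hat, y):
    """Binary cross-entropy averaged over D training samples."""
    eps = 1e-12                              # guard against log(0)
    y_hat = np.clip(y_hat, eps, 1 - eps)
    return -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

y = np.array([1.0, 0.0, 1.0, 0.0])           # jail-break = 1, non-jail-break = 0
y_hat = np.array([0.9, 0.1, 0.8, 0.2])       # hypothetical network outputs
loss = bce_loss(y_hat, y)
assert loss > 0
assert bce_loss(y, y) < bce_loss(y_hat, y)   # perfect predictions score lower
```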
Compared with the prior art, the invention has the following advantages:
(1) According to the invention, the camera is used for collecting the language characteristics of limbs of prisoners, the reconstruction requirement on prison equipment is low, and the situation that facial images are possibly blurred is fully considered.
(2) The network combines RNN with fuzzy reasoning, not only can extract temporal information related to the language of limbs, but also solves the problems of ambiguity and noise.
(3) The invention is a non-contact collecting and evaluating method, which avoids psychological discomfort of the prisoner.
Drawings
FIG. 1 is a flow chart of a prisoner jail-break intention assessment method based on limb language of the present invention;
FIG. 2 is a block diagram of a network in which the RNNs constructed in the present invention are fused with fuzzy reasoning.
Detailed Description
Specific embodiments of the present invention will be described further below with reference to examples and drawings, but the embodiments of the present invention are not limited thereto.
Examples:
a prisoner jail-break intention assessment method based on limb language, as shown in fig. 1, comprises the following steps:
s1, extracting skeleton information from the personal images of the prisoner in each frame of image of the monitoring video to obtain a skeleton information sequence;
The skeleton information extraction adopts an existing human skeleton extraction network model; in this embodiment, the open-source OpenPose is used to perform skeleton extraction on the human body images in the input surveillance video, and the obtained skeleton information sequence x may be expressed as:
x = {x_1, …, x_k, …, x_T};
wherein x_k is an s×2 matrix representing the skeleton information of the k-th frame; s is the number of human skeleton points included in the skeleton information extracted by the network; T is the total number of frames of the video.
S2, constructing a network integrating RNN and fuzzy reasoning and training;
as shown in fig. 2, the RNN-fuzzy inference converged network includes the following 7 layers:
The first layer is an input layer; its input u^(1) is the skeleton information sequence x extracted in step S1.
The second layer is a fuzzy layer; fuzzification reduces interference noise in the skeleton information, and this layer uses a Gaussian membership function to compute membership values for the data from the first layer. The Gaussian membership function is calculated as follows:
u^(2)_ij = exp(−(x_ij − v_ij)² / σ_ij²);
wherein u^(2) is the output matrix of the second layer and u^(2)_ij is the value in row i and column j; x_ij is the j-th skeleton coordinate of the i-th element x_i of the skeleton information sequence x; v_ij and σ_ij² are the mean and variance of the Gaussian membership function for the j-th skeleton coordinate of the i-th input; during training, σ_ij = 0.5 is taken.
The third layer is a space activation layer, each node of the third layer uses continuous accumulation multiplication as a fuzzy operator, and the space activation intensity is obtained after the member information value output by the second layer is operated, and the calculation is as follows:
wherein Is the ith output of the third layer;
The fourth layer is a temporal activation layer; it uses an RNN to capture the temporal characteristics of the skeleton information. Each neuron in this layer is computed as follows:
u^(4)_m(t) = u^(3)_m(t) + w_m · u^(4)_m(t−1);
wherein u^(4)_m is the output value of the m-th node of this layer; t is the time step; N is the total number of nodes of the fourth layer; u^(3)_m(t) is the spatial activation strength of the m-th node at the current state; w_m is the weight of the current state value of the m-th node relative to the previous state value. The output u^(4) of this layer combines the spatial activation strength of the previous layer with the temporal activation strength of the previous state.
The fifth layer is a consequent layer, in which a weighted linear summation is computed from the input of the first layer and the output of the fourth layer, with the specific formula:
u^(5)_m = ω_m u^(4)_m + Σ_{i,j} (A_m)_ij x_ij + b_m;
wherein u^(5)_m is the m-th output node of this layer; A_m is the coefficient matrix corresponding to the m-th output node; ω_m is the weight parameter corresponding to u^(4)_m; b_m is the bias corresponding to the m-th node; Σ_{i,j} (A_m)_ij x_ij is the weighted sum of the input data elements.
The sixth layer is a defuzzification layer, whose task is defuzzification; this layer adopts the weighted-average defuzzification method, as follows:
u^(6)_m = u^(5)_m / Σ_{n=1}^{N} u^(4)_n;
wherein u^(6)_m is the m-th output of the defuzzification layer. The output of this layer captures the relations within the skeleton information sequence and serves as the basis for the final classification.
The seventh layer is a result layer, in which the prisoner's jail-breaking intention is predicted with a sigmoid function; the specific formula is:
p = sigmoid(W u^(6));
wherein p is the jail-break probability and W is the weight coefficient matrix of this layer, adjusted automatically by the ADAM algorithm during training.
Before training the network, the human skeleton extraction network adopts an existing network, such as OpenPose, whose parameters are already trained. Past jail-break videos of prisoners and normal (non-jail-break) videos are acquired, and more training data are generated by processing the acquired videos, e.g. by segmentation and cropping. All training data are labeled, with jail-break marked 1 and non-jail-break marked 0. During training, the loss function is defined as:
Loss = −(1/D) Σ_{a=1}^{D} [ y_a log ŷ_a + (1 − y_a) log(1 − ŷ_a) ];
wherein D is the number of training data; ŷ_a is the output value obtained after the a-th training datum is input into the network; y_a is the label value of the a-th training datum. The constructed RNN and fuzzy reasoning fused network is trained using the ADAM optimization method.
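The ADAM optimization mentioned above can be sketched as a single parameter update. This is a generic ADAM step, not the patent's training code; the weights and gradient below are hypothetical:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One ADAM update, as used to tune weights such as the output matrix W."""
    m = b1 * m + (1 - b1) * grad           # biased first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2      # biased second-moment estimate
    m_hat = m / (1 - b1 ** t)              # bias correction
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

w = np.array([0.5, -0.5])                  # hypothetical weights
m = np.zeros_like(w)
v = np.zeros_like(w)
grad = np.array([1.0, -1.0])               # hypothetical gradient of the loss
w1, m, v = adam_step(w, grad, m, v, t=1)
assert w1[0] < 0.5 and w1[1] > -0.5        # each weight moves against its gradient
```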
S3, inputting the skeleton information sequence into a constructed RNN and fuzzy reasoning fused network to output prison-breaking intention of prisoners.
The above description covers only preferred embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art may make equivalent substitutions or modifications according to the technical scheme and inventive concept disclosed herein, and all such substitutions and modifications fall within the protection scope of the present invention.

Claims (3)

1. The prisoner jail-breaking intention assessment method based on the limb language is characterized by comprising the following steps of:
s1, extracting skeleton information from the personal images of the prisoner in each frame of image of the monitoring video to obtain a skeleton information sequence;
s2, constructing a network integrating RNN and fuzzy reasoning and training;
s3, inputting the skeleton information sequence into a constructed RNN and fuzzy reasoning fused network to output prison-breaking intention of prisoners; the network of the RNN and the fuzzy reasoning fusion comprises the following 7 layers:
the first layer is an input layer; its input u^(1) is the skeleton information sequence x extracted in step S1;
the second layer is a fuzzy layer; fuzzification reduces interference noise in the skeleton information, and this layer uses a Gaussian membership function to compute membership values for the data from the first layer, calculated as follows:
u^(2)_ij = exp(−(x_ij − v_ij)² / σ_ij²);
wherein u^(2) is the output matrix of the second layer and u^(2)_ij is the value in row i and column j; x_ij is the j-th skeleton coordinate of the i-th element x_i of the skeleton information sequence x; v_ij and σ_ij² are the mean and variance of the Gaussian membership function for the j-th skeleton coordinate of the i-th input; during training, σ_ij = 0.5 is taken;
the third layer is a spatial activation layer; each node uses continued multiplication as the fuzzy operator, and the spatial activation strength is obtained by operating on the membership values output by the second layer, calculated as follows:
u^(3)_i = ∏_j u^(2)_ij;
wherein u^(3)_i is the i-th output of the third layer;
the fourth layer is a temporal activation layer, which uses an RNN to capture the temporal characteristics of the skeleton information; each neuron in this layer is computed as follows:
u^(4)_m(t) = u^(3)_m(t) + w_m · u^(4)_m(t−1);
wherein u^(4)_m is the output value of the m-th node of this layer; t is the time step; N is the total number of nodes of the fourth layer; u^(3)_m(t) is the spatial activation strength of the m-th node at the current state; w_m is the weight of the current state value of the m-th node relative to the previous state value; the output u^(4) of this layer combines the spatial activation strength of the previous layer with the temporal activation strength of the previous state;
the fifth layer is a consequent layer, in which a weighted linear summation is computed from the input of the first layer and the output of the fourth layer, with the specific formula:
u^(5)_m = ω_m u^(4)_m + Σ_{i,j} (A_m)_ij x_ij + b_m;
wherein u^(5)_m is the m-th output node of this layer; A_m is the coefficient matrix corresponding to the m-th output node; ω_m is the weight parameter corresponding to u^(4)_m; b_m is the bias corresponding to the m-th node; Σ_{i,j} (A_m)_ij x_ij is the weighted sum of the input data elements;
the sixth layer is a defuzzification layer, whose task is defuzzification using the weighted-average method, as follows:
u^(6)_m = u^(5)_m / Σ_{n=1}^{N} u^(4)_n;
wherein u^(6)_m is the m-th output of the defuzzification layer; the output of this layer captures the relations within the skeleton information sequence and serves as the basis for the final classification;
the seventh layer is a result layer, in which the prisoner's jail-breaking intention is predicted with a sigmoid function; the specific formula is:
p = sigmoid(W u^(6));
wherein p is the jail-break probability and W is the weight coefficient matrix of this layer, adjusted automatically by the ADAM algorithm during training.
2. The prisoner jail-breaking intention assessment method based on limb language according to claim 1, wherein in step S1, the skeleton information extraction uses an existing human skeleton extraction network model to perform skeleton extraction on the human body images in the input surveillance video, and the obtained skeleton information sequence x is expressed as:
x = {x_1, …, x_k, …, x_T};
wherein x_k is an s×2 matrix representing the skeleton information of the k-th frame; s is the number of human skeleton points included in the extracted skeleton information; T is the total number of frames of the video.
3. The prisoner jail-breaking intention assessment method based on limb language according to claim 1, wherein in step S2, before training the network, the human skeleton extraction network adopts an existing network whose parameters are already trained; past jail-break videos of prisoners and normal (non-jail-break) videos are acquired, and more training data are generated by segmenting and cropping the acquired videos; all training data are labeled, with jail-break marked 1 and non-jail-break marked 0; during training, the loss function is defined as:
Loss = −(1/D) Σ_{a=1}^{D} [ y_a log ŷ_a + (1 − y_a) log(1 − ŷ_a) ];
wherein D is the number of training data; ŷ_a is the output value obtained after the a-th training datum is input into the network; y_a is the label value of the a-th training datum; and the constructed RNN and fuzzy reasoning fused network is trained using the ADAM optimization method.
CN202010763662.XA 2020-07-31 2020-07-31 Prisoner jail-breaking intention assessment method based on limb language Active CN111967355B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010763662.XA CN111967355B (en) 2020-07-31 2020-07-31 Prisoner jail-breaking intention assessment method based on limb language

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010763662.XA CN111967355B (en) 2020-07-31 2020-07-31 Prisoner jail-breaking intention assessment method based on limb language

Publications (2)

Publication Number Publication Date
CN111967355A CN111967355A (en) 2020-11-20
CN111967355B true CN111967355B (en) 2023-09-01

Family

ID=73363781

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010763662.XA Active CN111967355B (en) 2020-07-31 2020-07-31 Prisoner jail-breaking intention assessment method based on limb language

Country Status (1)

Country Link
CN (1) CN111967355B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052497A (en) * 2021-02-02 2021-06-29 浙江工业大学 Criminal worker risk prediction method based on dynamic and static feature fusion learning

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018015080A1 (en) * 2016-07-19 2018-01-25 Siemens Healthcare Gmbh Medical image segmentation with a multi-task neural network system
CN110507335A (en) * 2019-08-23 2019-11-29 山东大学 Inmate's psychological health states appraisal procedure and system based on multi-modal information
CN110837523A (en) * 2019-10-29 2020-02-25 山东大学 High-confidence reconstruction quality and false-transient-reduction quantitative evaluation method based on cascade neural network
CN110942088A (en) * 2019-11-04 2020-03-31 山东大学 Risk level evaluation method based on effective influence factors of prisoners and realization system thereof
WO2020107833A1 (en) * 2018-11-26 2020-06-04 平安科技(深圳)有限公司 Skeleton-based behavior detection method, terminal device, and computer storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ES2812578T3 (en) * 2011-05-13 2021-03-17 Vizrt Ag Estimating a posture based on silhouette

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018015080A1 (en) * 2016-07-19 2018-01-25 Siemens Healthcare Gmbh Medical image segmentation with a multi-task neural network system
WO2020107833A1 (en) * 2018-11-26 2020-06-04 平安科技(深圳)有限公司 Skeleton-based behavior detection method, terminal device, and computer storage medium
CN110507335A (en) * 2019-08-23 2019-11-29 山东大学 Inmate's psychological health states appraisal procedure and system based on multi-modal information
CN110837523A (en) * 2019-10-29 2020-02-25 山东大学 High-confidence reconstruction quality and false-transient-reduction quantitative evaluation method based on cascade neural network
CN110942088A (en) * 2019-11-04 2020-03-31 山东大学 Risk level evaluation method based on effective influence factors of prisoners and realization system thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Du, Guanglong, et al. "Non-Contact Emotion Recognition Combining Heart Rate and Facial Expression for Interactive Gaming Environments." IEEE Access, pp. 11896-11906.

Also Published As

Publication number Publication date
CN111967355A (en) 2020-11-20

Similar Documents

Publication Publication Date Title
CN111079655B (en) Method for recognizing human body behaviors in video based on fusion neural network
CN108764059B (en) Human behavior recognition method and system based on neural network
CN111626116B (en) Video semantic analysis method based on fusion of multi-attention mechanism and Graph
CN112784685A (en) Crowd counting method and system based on multi-scale guiding attention mechanism network
CN113850229B (en) Personnel abnormal behavior early warning method and system based on video data machine learning and computer equipment
CN113065515B (en) Abnormal behavior intelligent detection method and system based on similarity graph neural network
CN113569766B (en) Pedestrian abnormal behavior detection method for patrol of unmanned aerial vehicle
CN116564561A (en) Intelligent voice nursing system and nursing method based on physiological and emotion characteristics
CN111967355B (en) Prisoner jail-breaking intention assessment method based on limb language
CN114266201B (en) Self-attention elevator trapping prediction method based on deep learning
Fakhrmoosavy et al. A modified brain emotional learning model for earthquake magnitude and fear prediction
CN115273146A (en) Escalator passenger posture abnormity detection method based on improved SSD model
CN117198468A (en) Intervention scheme intelligent management system based on behavior recognition and data analysis
CN112818740A (en) Psychological quality dimension evaluation method and device for intelligent interview
CN110705413B (en) Emotion prediction method and system based on sight direction and LSTM neural network
EP4163830A1 (en) Multi-modal prediction system
Chathuramali et al. Real-time detection of the interaction between an upper-limb power-assist robot user and another person for perception-assist
CN115393927A (en) Multi-modal emotion emergency decision system based on multi-stage long and short term memory network
Esan et al. Surveillance detection of anomalous activities with optimized deep learning technique in crowded scenes
CN111091269A (en) Criminal risk assessment method based on multi-dimensional risk index
CN113780091B (en) Video emotion recognition method based on body posture change representation
Wei et al. Pedestrian anomaly detection method using autoencoder
Petkov et al. Intuitionistic fuzzy evaluation of artificial neural network model
Rana Time series prediction of the COVID-19 outbreak in India using LSTM based deep learning models
CN117158904B (en) Old people cognitive disorder detection system and method based on behavior analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant