CN111814775A - Target object abnormal behavior identification method, device, terminal and storage medium - Google Patents

Target object abnormal behavior identification method, device, terminal and storage medium

Info

Publication number
CN111814775A
Authority
CN
China
Prior art keywords
abnormal
target object
behavior
target
whole
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010944370.6A
Other languages
Chinese (zh)
Other versions
CN111814775B (en)
Inventor
周琅
杜佳辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An International Smart City Technology Co Ltd
Original Assignee
Ping An International Smart City Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An International Smart City Technology Co Ltd filed Critical Ping An International Smart City Technology Co Ltd
Priority to CN202010944370.6A priority Critical patent/CN111814775B/en
Publication of CN111814775A publication Critical patent/CN111814775A/en
Application granted granted Critical
Publication of CN111814775B publication Critical patent/CN111814775B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/254 Fusion techniques of classification results, e.g. of results related to same input data
    • G06F18/256 Fusion techniques of classification results, e.g. of results related to same input data of results relating to different input data, e.g. multimodal recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133 Distances to prototypes
    • G06F18/24137 Distances to cluster centroïds
    • G06F18/2414 Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training

Abstract

The invention relates to artificial intelligence and provides a method, a device, a terminal and a storage medium for identifying abnormal behaviors of a target object. The method comprises the following steps: extracting whole-body images from a video of the target object; recognizing facial expressions in the whole-body images through a facial expression recognition model, recognizing limb actions in the whole-body images through a limb action recognition model, and recognizing violence tendency behaviors in the whole-body images through a violence tendency behavior recognition model; and matching the facial expressions, the limb actions and the violence tendency behaviors against an abnormal behavior early warning database, and, when abnormal behaviors of the target object are identified, identifying the abnormal types of the abnormal behaviors and analyzing the corresponding abnormal levels and possible consequences. The method can be applied to smart government affairs or smart communities, can accurately identify whether the target object has abnormal behaviors, and gives the abnormal types of the abnormal behaviors together with the corresponding abnormal levels and possible consequences. In addition, the invention relates to blockchain technology, and the video of the target object can be acquired from a blockchain.

Description

Target object abnormal behavior identification method, device, terminal and storage medium
Technical Field
The invention relates to the technical field of video monitoring in artificial intelligence, in particular to a method, a device, a terminal and a storage medium for identifying abnormal behaviors of a target object.
Background
In the traditional petition-reception mode, video monitoring is mainly used to record the portrait information of visitors and to keep a complete record of the reception process. Whether a visitor shows abnormal behavior or an abnormal condition mainly depends on manual judgment and on machine security screening.
However, manually spotting a visitor's abnormal behavior is difficult and inefficient, and omissions are likely; machine security screening only inspects the articles a visitor carries, cannot detect whether the visitor's behavior is abnormal, and therefore has a limited detection range.
Although image recognition and related technologies are widely applied in many industries, they are rarely applied in petition reception. How to use technologies such as image recognition to detect intelligently, automatically and efficiently whether a visitor shows abnormal behavior during petition reception has therefore become an urgent technical problem to be solved.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a method, an apparatus, a terminal and a storage medium for identifying abnormal behavior of a target object, which can accurately identify whether the target object has abnormal behavior, and provide the abnormal type of the abnormal behavior and the corresponding abnormal level and possible consequences.
The first aspect of the present invention provides a method for identifying abnormal behavior of a target object, where the method includes:
acquiring a first video of a target object and extracting a plurality of frames of first whole-body images in the first video;
calling a facial expression recognition model to recognize a first facial expression in each frame of first whole-body image, calling a limb action recognition model to recognize a first limb action in each frame of first whole-body image, and calling a violence tendency behavior recognition model to recognize a first violence tendency behavior in each frame of first whole-body image;
matching each first face expression, each first limb action and each first violence tendency behavior with a plurality of data in an abnormal behavior early warning database respectively, and identifying whether the target object has abnormal behaviors or not according to matching results;
when the target object is identified as having abnormal behaviors according to the matching result, identifying the abnormal types of the abnormal behaviors and determining the element identifier corresponding to each abnormal type; analyzing the abnormal level and the possible consequence corresponding to each element identifier according to an abnormal behavior feature analysis rule; sorting the abnormal levels corresponding to each abnormal type from high to low and taking the first abnormal level after sorting as the abnormal level of that abnormal type; and sorting the possible consequences corresponding to each abnormal type according to their abnormal levels and concatenating the sorted possible consequences to obtain the possible consequence of that abnormal type.
According to an optional embodiment of the present invention, the matching each first facial expression, each first limb action, and each first violence tendency behavior with a plurality of data in an abnormal behavior early warning database, and identifying whether the target object has an abnormal behavior according to a matching result includes:
acquiring, from the abnormal behavior early warning database, a plurality of second facial expressions whose abnormal type is expression abnormality, matching the plurality of second facial expressions with the first facial expressions, and determining that the abnormal type of the target object is expression abnormality when a target facial expression corresponding to a second facial expression is matched from the first facial expressions;
acquiring, from the abnormal behavior early warning database, a plurality of second limb actions whose abnormal type is over-excited limb, matching the plurality of second limb actions with the first limb actions, and determining that the abnormal type of the target object is over-excited limb when a target limb action corresponding to a second limb action is matched from the first limb actions;
and acquiring, from the abnormal behavior early warning database, a plurality of second violence tendency behaviors whose abnormal type is violence tendency, matching the plurality of second violence tendency behaviors with the first violence tendency behaviors, and determining that the abnormal type of the target object is violence tendency when a target violence tendency behavior corresponding to a second violence tendency behavior is matched from the first violence tendency behaviors.
According to an alternative embodiment of the invention, the method further comprises:
determining a first frame number set of a whole-body image corresponding to the target facial expression;
determining a second frame number set of the whole-body image corresponding to the target limb action;
determining a third frame number set of the whole-body image corresponding to the target violence tendency behavior;
judging whether any two frame sequence number sets among the first frame sequence number set, the second frame sequence number set and the third frame sequence number set have at least one same frame sequence number;
when any two of the first frame number set, the second frame number set and the third frame number set have at least one frame number in common, keeping the first target whole-body images that have the common frame numbers and deleting the second target whole-body images that do not;
and determining target abnormal types of the target abnormal behaviors corresponding to the first target whole-body image and determining a target element identifier corresponding to each target abnormal type.
According to an optional embodiment of the present invention, the constructing process of the abnormal behavior early warning database includes:
initializing an abnormal behavior early warning database;
acquiring second videos of a plurality of historical target objects and extracting a plurality of frames of second whole-body images in the second videos;
setting at least one element identifier and an abnormal type for each frame of the second whole-body image;
and storing each frame of second whole-body image in the abnormal behavior early warning database in association with the at least one element identifier and the abnormal type corresponding to that second whole-body image, wherein the abnormal behavior early warning database is stored in a node of a blockchain.
According to an optional embodiment of the present invention, after storing the second whole-body image and the at least one element identifier and the abnormality type association corresponding to the second whole-body image in the abnormal behavior early warning database, the method further includes:
reading the data in the abnormal behavior early warning database line by line and combining the read data in each line into a data pair;
training a preset first convolution neural network to obtain a facial expression recognition model based on a plurality of data corresponding to the abnormal expression types;
training a preset second convolutional neural network based on a plurality of data corresponding to the over-excited limb with the abnormal type to obtain a limb action recognition model;
and training a preset third convolutional neural network based on a plurality of data corresponding to the violence tendency of the abnormal type to obtain a violence tendency behavior recognition model.
According to an optional embodiment of the invention, before the obtaining the first video of the target object, the method further comprises:
responding to a received starting instruction, sending the starting instruction to an intelligent monitoring camera, and enabling the intelligent monitoring camera to start and collect a first video of the target object;
and receiving a first video of the target object transmitted by the intelligent monitoring camera.
According to an optional embodiment of the present invention, after the identifying an abnormal type with abnormal behavior and analyzing an abnormal level and possible consequences corresponding to the abnormal type according to an abnormal behavior feature analysis rule, the method further includes:
displaying a result of the existence of the abnormal behavior, the abnormal type, the abnormal level, and the possible consequences.
A second aspect of the present invention provides an apparatus for identifying abnormal behavior of a target object, the apparatus comprising:
the image extraction module is used for acquiring a first video of a target object and extracting a plurality of frames of first whole-body images in the first video;
the model calling module is used for calling a facial expression recognition model to recognize a first facial expression in each frame of first whole-body image, calling a limb action recognition model to recognize a first limb action in each frame of first whole-body image and calling a violence tendency behavior recognition model to recognize a first violence tendency behavior in each frame of first whole-body image;
the result matching module is used for respectively matching each first facial expression, each first limb action and each first violence tendency behavior with a plurality of data in an abnormal behavior early warning database and identifying whether the target object has abnormal behaviors or not according to a matching result;
and the consequence analysis module is used for identifying the abnormal type with the abnormal behavior and analyzing the abnormal grade and the possible consequence corresponding to the abnormal type according to the abnormal behavior characteristic analysis rule when the target object is identified to have the abnormal behavior according to the matching result.
A third aspect of the present invention provides a terminal, which includes a processor, and the processor is configured to implement the target object abnormal behavior identification method when executing a computer program stored in a memory.
A fourth aspect of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the target object abnormal behavior identification method.
In summary, the method, apparatus, terminal and storage medium for identifying abnormal behavior of a target object according to the present invention acquire a video of the target object and extract multiple frames of whole-body images from the video. Facial expressions, limb actions and violence tendency behaviors are recognized by calling a facial expression recognition model, a limb action recognition model and a violence tendency behavior recognition model, respectively. The facial expressions, limb actions and violence tendency behaviors are then matched against the data in an abnormal behavior early warning database to identify whether the target object has abnormal behaviors. When the target object is identified as having abnormal behaviors, the abnormal types of those behaviors are further identified, and the abnormal level and possible consequence corresponding to each abnormal type are analyzed according to an abnormal behavior feature analysis rule, thereby achieving intelligent, efficient and accurate detection of whether a visitor shows abnormal behaviors.
Drawings
Fig. 1 is a flowchart of a target object abnormal behavior identification method according to an embodiment of the present invention.
Fig. 2 is a structural diagram of a target object abnormal behavior recognition apparatus according to an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a detailed description of the present invention will be given below with reference to the accompanying drawings and specific embodiments. It should be noted that the embodiments of the present invention and features of the embodiments may be combined with each other without conflict.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
Example one
Fig. 1 is a flowchart of a target object abnormal behavior identification method according to an embodiment of the present invention. The method comprises the following steps; depending on requirements, the order of the steps in the flowchart may be changed and some steps may be omitted.
S11, acquiring a first video of the target object and extracting a plurality of frames of first whole-body images in the first video.
An intelligent monitoring camera can be installed in the petition reception room of the receiving authority. When a target object comes to visit, the intelligent monitoring camera collects a first video of the target object that captures the face, limbs and body; the camera transmits the video to a terminal, and the terminal receives the first video of the target object. The terminal may be a device used by the petition receptionist, and the target object is the visitor.
An image extraction frame rate is preset in the terminal, and a plurality of frames of first whole-body images in the first video are extracted according to the image extraction frame rate.
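The patent does not prescribe a concrete extraction implementation. The following minimal sketch illustrates the frame-sampling step under the assumption that OpenCV is available and that an extraction rate of one frame per second is preset; the function name and parameters are hypothetical and are not part of the claimed method.

    import cv2  # assumed dependency; the patent does not name a library

    def extract_whole_body_frames(video_path, frames_per_second=1.0):
        """Sample candidate whole-body frames from the first video at a preset extraction rate."""
        capture = cv2.VideoCapture(video_path)
        native_fps = capture.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if metadata is missing
        step = max(int(round(native_fps / frames_per_second)), 1)
        frames = []
        index = 0
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            if index % step == 0:
                frames.append(frame)  # each retained frame is one first whole-body image
            index += 1
        capture.release()
        return frames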
In an optional embodiment, before the step S11, the method for identifying abnormal behavior of a target object further includes:
and responding to the received starting instruction, and sending the starting instruction to an intelligent monitoring camera so that the intelligent monitoring camera starts and collects a first video of the target object.
In this optional embodiment, the intelligent monitoring camera is started only when a start instruction from the visitor is received, which prevents the camera from remaining in a working state at all times. If the camera were always on, it would consume electric energy and would also collect images of the petition receptionist and transmit them to the terminal, occupying the terminal's memory and degrading its performance.
And S12, calling a facial expression recognition model to recognize the first facial expression in each frame of the first whole-body image.
The method comprises the steps that a facial expression recognition model is trained in a terminal in advance, the facial expression of a target object is obtained by calling the facial expression recognition model to recognize first whole-body images, and each first whole-body image correspondingly obtains one facial expression.
And S13, calling the limb action recognition model to recognize the first limb action in each frame of the first whole-body image.
The terminal is pre-trained with a limb action recognition model, the limb action of the target object is obtained by calling the limb action recognition model to recognize the first whole-body image, and each first whole-body image correspondingly obtains one limb action.
And S14, calling the violence tendency behavior recognition model to recognize the first violence tendency behavior in the first whole-body image of each frame.
The method comprises the steps that a violent tendency behavior recognition model is trained in a terminal in advance, violent tendency behaviors of a target object are obtained by calling the violent tendency behavior recognition model to recognize first whole-body images, and each first whole-body image corresponds to one violent tendency behavior.
In an alternative embodiment, the terminal can start three threads to respectively call a facial expression recognition model, a limb action recognition model and a violence tendency behavior recognition model. That is, the terminal simultaneously performs the above-described steps S12-S14.
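Because the three models are independent, the concurrent invocation described above can be sketched with a small thread pool. The model objects and their predict interface below are placeholders for whatever recognizers the terminal actually loads; this is an illustrative assumption, not the patent's implementation.

    from concurrent.futures import ThreadPoolExecutor

    def recognise_all(frames, expression_model, limb_model, violence_model):
        """Run the three recognition models over every extracted frame in parallel threads (steps S12-S14)."""
        with ThreadPoolExecutor(max_workers=3) as pool:
            expr = pool.submit(lambda: [expression_model.predict(f) for f in frames])
            limb = pool.submit(lambda: [limb_model.predict(f) for f in frames])
            viol = pool.submit(lambda: [violence_model.predict(f) for f in frames])
            # one recognition result per frame for each of the three tasks
            return expr.result(), limb.result(), viol.result()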
And S15, matching each first face expression, each first limb action and each first violence tendency behavior with a plurality of data in an abnormal behavior early warning database respectively, and identifying whether the target object has abnormal behaviors according to matching results.
An abnormal behavior early warning database is pre-constructed in the terminal, and a plurality of data are recorded in the abnormal behavior early warning database, and the plurality of data may include, but are not limited to: abnormal type, facial expression, limb movement, violence tendency behavior, etc.
And respectively matching the facial expressions, the body movements and the violence tendency behaviors of the target object with a plurality of data in the abnormal behavior early warning database one by one to obtain a matching result, and identifying whether the target object has the abnormal behaviors or not according to the matching result.
In an optional embodiment, the matching each first facial expression, each first limb action, and each first violence tendency behavior with a plurality of data in an abnormal behavior early warning database, and identifying whether the target object has an abnormal behavior according to a matching result includes:
acquiring, from the abnormal behavior early warning database, a plurality of second facial expressions whose abnormal type is expression abnormality, matching the plurality of second facial expressions with the first facial expressions, and determining that the abnormal type of the target object is expression abnormality when a target facial expression corresponding to a second facial expression is matched from the first facial expressions;
acquiring, from the abnormal behavior early warning database, a plurality of second limb actions whose abnormal type is over-excited limb, matching the plurality of second limb actions with the first limb actions, and determining that the abnormal type of the target object is over-excited limb when a target limb action corresponding to a second limb action is matched from the first limb actions;
and acquiring, from the abnormal behavior early warning database, a plurality of second violence tendency behaviors whose abnormal type is violence tendency, matching the plurality of second violence tendency behaviors with the first violence tendency behaviors, and determining that the abnormal type of the target object is violence tendency when a target violence tendency behavior corresponding to a second violence tendency behavior is matched from the first violence tendency behaviors.
When the target facial expression corresponding to the facial expression is not matched, or when the target limb action corresponding to the limb action is not matched, or when the target violence tendency action corresponding to the violence tendency action is not matched, determining that the target object has no abnormal behavior, namely the target object has normal behavior. When the target object is identified to have no abnormal behavior according to the matching result, the target object abnormal behavior identification method further comprises the following steps: and outputting the result of the absence of the abnormal behavior.
In an optional embodiment, when a target facial expression, a target limb action and a target violence tendency behavior are matched, the method further includes:
determining a first frame number set of a whole-body image corresponding to the target facial expression;
determining a second frame number set of the whole-body image corresponding to the target limb action;
determining a third frame number set of the whole-body image corresponding to the target violence tendency behavior;
judging whether any two frame sequence number sets among the first frame sequence number set, the second frame sequence number set and the third frame sequence number set have at least one same frame sequence number;
when any two of the first frame number set, the second frame number set and the third frame number set have at least one frame number in common, keeping the first target whole-body images that have the common frame numbers and deleting the second target whole-body images that do not;
and determining target abnormal types of the target abnormal behaviors corresponding to the first target whole-body image and determining a target element identifier corresponding to each target abnormal type.
For example, assume that the frame numbers of the whole-body images corresponding to the target facial expressions matched from the first facial expressions are 1, 3, 4, 6, 7, so the first frame number set is {1, 3, 4, 6, 7}; that the frame numbers of the whole-body images corresponding to the target limb actions matched from the first limb actions are 1, 4, 5, 6, 8, so the second frame number set is {1, 4, 5, 6, 8}; and that the frame numbers of the whole-body images corresponding to the target violence tendency behaviors matched from the first violence tendency behaviors are 2, 5, 6, 9, so the third frame number set is {2, 5, 6, 9}. The first and second frame number sets share frame numbers 1, 4 and 6; the first and third frame number sets share frame number 6; the second and third frame number sets share frame numbers 5 and 6. The whole-body images with frame numbers 1, 4, 5 and 6 are therefore determined as the first target whole-body images. Subsequently, the first target whole-body images with frame numbers 1, 4, 5 and 6 are matched against the data in the abnormal behavior early warning database: the first facial expressions in the first target whole-body images with frame numbers 1, 4 and 6 are matched with the facial expression data, the first limb actions in the first target whole-body images with frame numbers 5 and 6 are matched with the limb action data, and the first violence tendency behaviors in the first target whole-body images with frame numbers 5 and 6 are matched with the violence tendency behavior data, so as to determine the target abnormal types of the target abnormal behaviors and the target element identifier corresponding to each target abnormal type.
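The frame-number selection described above amounts to keeping any frame that appears in at least two of the three matched sets, which can be expressed compactly with set operations. The sketch below only illustrates that rule; the function name is hypothetical.

    def select_first_target_frames(expression_frames, limb_frames, violence_frames):
        """Keep frame numbers that occur in at least two of the three matched frame number sets."""
        s1, s2, s3 = set(expression_frames), set(limb_frames), set(violence_frames)
        retained = (s1 & s2) | (s1 & s3) | (s2 & s3)
        return sorted(retained)

    # Worked example from the paragraph above:
    print(select_first_target_frames({1, 3, 4, 6, 7}, {1, 4, 5, 6, 8}, {2, 5, 6, 9}))
    # -> [1, 4, 5, 6]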
In this optional embodiment, the facial expressions, limb actions and violence tendency behaviors of the target object (the visitor) are recognized independently, so that its abnormal behaviors can be captured comprehensively, the abnormal level can later be judged more completely and accurately, and a coping strategy can be provided for the petition receptionist. When the target object (visitor) shows one abnormal behavior during the petition visit, one or more other abnormal behaviors are very likely to be present as well; for example, an angry facial expression is very likely to be accompanied by over-excited limb actions, and violence tendency behavior is essentially always accompanied by an angry expression and over-excited limb actions. In other words, in the petition scenario the target object (visitor) usually shows abnormal behaviors of at least two types at the same time. In this embodiment, the first, second and third frame number sets are determined, only the first target whole-body images whose frame numbers appear in at least two of these sets are retained, and the target abnormal type of the target abnormal behavior and the target element identifier corresponding to each target abnormal type are then determined from the first target whole-body images. This avoids the misrecognition or one-sided recognition that can result from identifying an abnormal behavior of a single abnormal type in isolation and improves the accuracy of abnormal behavior recognition. Moreover, the number of first target whole-body images is much smaller than the number of first whole-body images, which reduces the amount of computation needed when the abnormal level and possible consequence of each abnormal type are subsequently analyzed according to the abnormal behavior feature analysis rule, and thereby improves the efficiency of that analysis.
And S16, when the target object is identified to have abnormal behavior according to the matching result, identifying the abnormal type with the abnormal behavior and analyzing the abnormal grade and possible consequences corresponding to the abnormal type according to the abnormal behavior feature analysis rule.
And the terminal determines that the target object has abnormal behaviors, and further identifies the type of the abnormal behaviors to which the target object specifically belongs.
Each facial expression, each limb action and each violence tendency behavior may lead to adverse consequences of differing severity, so the abnormal level of the target object needs to be analyzed and the possible consequences determined, allowing the petition receptionist to take precautions in advance.
In an optional embodiment, the analyzing the abnormal level and the possible consequences corresponding to the abnormal type according to the abnormal behavior feature analysis rule includes:
determining the element identifiers corresponding to each abnormal type;
analyzing the abnormal level and possible consequence corresponding to each element identifier according to the abnormal behavior feature analysis rule;
sorting the abnormal levels corresponding to each abnormal type from high to low, and taking the first abnormal level after sorting as the abnormal level of that abnormal type;
and sorting the possible consequences corresponding to each abnormal type according to their abnormal levels, and concatenating the sorted possible consequences to obtain the possible consequence of that abnormal type.
The abnormal behavior feature analysis rule table in the terminal includes multiple columns of data, for example, a first column of data is an abnormal level, a second column of data is an abnormal type, and a third column of data is an element identifier, where each abnormal level corresponds to all abnormal types, each abnormal type corresponds to multiple element identifiers, and the element identifiers corresponding to different abnormal levels and different abnormal types are different.
For example, assume that the abnormal type 'expression abnormality' corresponds to the element identifiers angry, nervous and sad, where the abnormal level for angry is severe with a possible consequence of hitting people, the abnormal level for nervous is general with a possible consequence of verbal abuse, and the abnormal level for sad is mild with a possible consequence of crying. The abnormal levels corresponding to the abnormal type 'expression abnormality' are then sorted from high to low as severe, general, mild, and severe is taken as the abnormal level of this abnormal type. The possible consequences sorted by abnormal level are hitting people, verbal abuse, crying, and their concatenation (hitting people, verbal abuse, crying) is taken as the possible consequence of the abnormal type 'expression abnormality'.
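The following sketch illustrates this analysis rule. The rule table, level names and their ordering are taken from the example above and are otherwise assumptions; a real deployment would populate the table from its own abnormal behavior feature analysis rules.

    LEVEL_ORDER = {"severe": 3, "general": 2, "mild": 1}  # assumed ranking, highest first

    # abnormal type -> element identifier -> (abnormal level, possible consequence)
    RULE_TABLE = {
        "expression abnormality": {
            "angry": ("severe", "hitting people"),
            "nervous": ("general", "verbal abuse"),
            "sad": ("mild", "crying"),
        },
    }

    def analyse(abnormal_type, element_ids):
        """Return the abnormal level and concatenated possible consequence for one abnormal type."""
        entries = [RULE_TABLE[abnormal_type][e] for e in element_ids]
        entries.sort(key=lambda item: LEVEL_ORDER[item[0]], reverse=True)  # sort levels high to low
        level = entries[0][0]  # highest level becomes the level of the abnormal type
        consequence = ", ".join(c for _, c in entries)  # consequences concatenated by level
        return level, consequence

    print(analyse("expression abnormality", ["angry", "nervous", "sad"]))
    # -> ('severe', 'hitting people, verbal abuse, crying')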
In an optional embodiment, the element identifier corresponding to each abnormal type is determined by determining the target element identifier corresponding to each target abnormal type. The target abnormal level and target possible consequence corresponding to each target element identifier are analyzed according to the abnormal behavior feature analysis rule; the target abnormal levels corresponding to each target abnormal type are sorted from high to low, and the first target abnormal level after sorting is taken as the target abnormal level of that target abnormal type; the target possible consequences corresponding to each target abnormal type are sorted according to their target abnormal levels and concatenated to obtain the final possible consequence of that target abnormal type.
In an optional embodiment, the target object abnormal behavior identification method includes:
initializing an abnormal behavior early warning database;
acquiring second videos of a plurality of historical target objects and extracting a plurality of frames of second whole-body images in the second videos;
setting at least one element identifier and an abnormal type for each frame of the second whole-body image;
and storing each frame of second whole-body image and at least one element identifier and abnormal type association corresponding to the second whole-body image in the abnormal behavior early warning database.
In this optional embodiment, second videos of a plurality of historical target objects are acquired, and a plurality of second whole-body images are selected from each second video as feature maps based on the working experience of the petition receptionist.
Since one or more elements may exist in each second whole-body image, marking each element to give a plurality of element identifications of each second whole-body image and an abnormality type corresponding to each element identification.
The abnormal types include: expression abnormality, over-excited limb, and violence tendency.
The plurality of elements corresponding to the expression abnormality of the abnormality type may include: anger, tension, disgust, sadness, excitement, etc., to indicate emotional instability of the historical target subject.
The plurality of elements corresponding to the abnormal type of over-excited limb may include: violent actions, verbal abuse and the like, indicating that the historical target object showed over-excited limb behavior.
The plurality of elements corresponding to the abnormal type of violence tendency may include: refusing to answer questions, abnormal speech rate or intonation, loitering, gathering of people, carrying or displaying banners, carrying controlled implements and the like, indicating dangerous behavior of the historical target object other than expression abnormality and over-excited limb.
And adding the relevant information of the target object with the abnormal behavior which is newly identified in the follow-up process into the abnormal behavior early warning database.
The abnormal behavior early warning database established by the terminal can then be used as the data source for comparing the target object's behavior, identifying whether the target object has abnormal behaviors and, if so, which ones.
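As a minimal sketch of how such records could be associated and stored, the single-table SQLite schema below is an assumption for illustration only; the patent does not prescribe a storage layout, and a blockchain node could equally hold the same records.

    import sqlite3

    def init_warning_db(path="abnormal_behavior.db"):
        """Initialize the early-warning database: one record per second whole-body image."""
        conn = sqlite3.connect(path)
        conn.execute("""
            CREATE TABLE IF NOT EXISTS warning_records (
                record_id     INTEGER PRIMARY KEY AUTOINCREMENT,
                image_path    TEXT NOT NULL,   -- second whole-body image
                element_ids   TEXT NOT NULL,   -- e.g. 'angry,nervous'
                abnormal_type TEXT NOT NULL    -- expression abnormality / over-excited limb / violence tendency
            )""")
        conn.commit()
        return conn

    def add_record(conn, image_path, element_ids, abnormal_type):
        """Store a whole-body image in association with its element identifiers and abnormal type."""
        conn.execute(
            "INSERT INTO warning_records (image_path, element_ids, abnormal_type) VALUES (?, ?, ?)",
            (image_path, ",".join(element_ids), abnormal_type),
        )
        conn.commit()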
In an optional embodiment, the target object abnormal behavior identification method further includes:
reading the data in the abnormal behavior early warning database line by line and combining the read data in each line into a data pair;
training a preset first convolution neural network to obtain a facial expression recognition model based on a plurality of data corresponding to the abnormal expression types;
training a preset second convolutional neural network based on a plurality of data corresponding to the over-excited limb with the abnormal type to obtain a limb action recognition model;
and training a preset third convolutional neural network based on a plurality of data corresponding to the violence tendency of the abnormal type to obtain a violence tendency behavior recognition model.
In this optional embodiment, the facial expression recognition model, the limb movement recognition model and the violence tendency behavior recognition model are trained offline based on data in the abnormal behavior early warning database, so that the trained facial expression recognition model can be used for recognizing facial expressions in the target object image, the limb movement recognition model can be used for recognizing limb movements in the target object image, and the violence tendency behavior recognition model can be used for recognizing violence tendency behaviors in the target object image on line in the following process.
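The patent specifies convolutional neural networks but not their architectures or training regimes. The sketch below shows, under those assumptions, how one of the three recognizers could be trained with PyTorch on (image, element-label) pairs read from the early-warning database; the network shape and hyperparameters are illustrative only.

    import torch
    import torch.nn as nn

    class SmallCNN(nn.Module):
        """Illustrative convolutional classifier; the patent does not fix the architecture."""
        def __init__(self, num_classes):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(32, num_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    def train_recognition_model(data_loader, num_classes, epochs=5):
        """Train one recognizer (expression, limb action or violence tendency) on labelled images."""
        model = SmallCNN(num_classes)
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for images, labels in data_loader:
                optimizer.zero_grad()
                loss = loss_fn(model(images), labels)
                loss.backward()
                optimizer.step()
        return model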
In an optional embodiment, the target object abnormal behavior identification method further includes:
displaying a result of the existence of the abnormal behavior, the abnormal type, the abnormal level, and the possible consequences.
When the target object is identified to have abnormal behaviors according to the matching result, the result that the target object has the abnormal behaviors is output, and the abnormal type, the abnormal level and the possible consequence are further output to the petition receptionist, so that the timeliness, pertinence and effectiveness of petition risk control are enhanced.
The target object abnormal behavior identification method provided by the embodiment of the invention can be applied to intelligent government affairs, intelligently identifies the abnormal behavior of visitors, and promotes the construction of smart cities. The method for identifying the abnormal behaviors of the target object can also be applied to an intelligent community, and the intelligent monitoring camera is installed in the community to intelligently identify the abnormal behaviors of outsiders, so that the safety of residents in the community is ensured.
It should be emphasized that, in order to further ensure privacy and security of the first video of the target object, the abnormal behavior early warning database, the analyzed abnormal level of the target object, and the possible consequences, the first video of the target object may be acquired from the node of the blockchain, and the abnormal behavior early warning database, the analyzed abnormal level of the target object, and the possible consequences may be stored in the node of the blockchain.
Example two
Fig. 2 is a structural diagram of a target object abnormal behavior recognition apparatus according to an embodiment of the present invention.
In some embodiments, the target object abnormal behavior recognition apparatus 20 may include a plurality of functional modules composed of computer program segments. The computer programs of the program segments in the target object abnormal behavior recognition apparatus 20 may be stored in a memory of the terminal and executed by at least one processor to perform (see fig. 1 for details) the function of target object abnormal behavior recognition.
In this embodiment, the target object abnormal behavior recognition apparatus 20 may be divided into a plurality of functional modules according to the functions executed by the target object abnormal behavior recognition apparatus. The functional module may include: the system comprises an image extraction module 201, an instruction receiving module 202, a model calling module 203, a result matching module 204, an outcome analysis module 205, an association storage module 206, a model training module 207 and a result display module 208. The module referred to herein is a series of computer program segments capable of being executed by at least one processor and capable of performing a fixed function and is stored in memory. In the present embodiment, the functions of the modules will be described in detail in the following embodiments.
The image extraction module 201 is configured to acquire a first video of a target object and extract multiple frames of first whole-body images in the first video.
An intelligent monitoring camera can be installed in the petition reception room of the receiving authority. When a target object comes to visit, the intelligent monitoring camera collects a first video of the target object that captures the face, limbs and body; the camera transmits the video to a terminal, and the terminal receives the first video of the target object. The terminal may be a device used by the petition receptionist, and the target object is the visitor.
An image extraction frame rate is preset in the terminal, and a plurality of frames of first whole-body images in the first video are extracted according to the image extraction frame rate.
The instruction receiving module 202 is configured to send, in response to a received start instruction, the start instruction to the intelligent monitoring camera, so that the intelligent monitoring camera starts and collects a first video of the target object.
In this optional embodiment, the intelligent monitoring camera is started only when a start instruction from the visitor is received, which prevents the camera from remaining in a working state at all times. If the camera were always on, it would consume electric energy and would also collect images of the petition receptionist and transmit them to the terminal, occupying the terminal's memory and degrading its performance.
The model calling module 203 is configured to call a facial expression recognition model to recognize a first facial expression in each frame of the first whole-body image.
The method comprises the steps that a facial expression recognition model is trained in a terminal in advance, the facial expression of a target object is obtained by calling the facial expression recognition model to recognize first whole-body images, and each first whole-body image correspondingly obtains one facial expression.
The model calling module 203 is further configured to call a limb motion recognition model to recognize a first limb motion in each frame of the first whole-body image.
The terminal is pre-trained with a limb action recognition model, the limb action of the target object is obtained by calling the limb action recognition model to recognize the first whole-body image, and each first whole-body image correspondingly obtains one limb action.
The model calling module 203 is further configured to call a violence tendency behavior recognition model to recognize a first violence tendency behavior in the first whole-body image of each frame.
The method comprises the steps that a violent tendency behavior recognition model is trained in a terminal in advance, violent tendency behaviors of a target object are obtained by calling the violent tendency behavior recognition model to recognize first whole-body images, and each first whole-body image corresponds to one violent tendency behavior.
In an alternative embodiment, the model calling module 203 calls the facial expression recognition model, the limb movement recognition model and the violence tendency behavior recognition model at the same time.
The result matching module 204 is configured to match each first facial expression, each first limb action, and each first violence tendency behavior with a plurality of data in an abnormal behavior early warning database, and identify whether the target object has an abnormal behavior according to a matching result.
An abnormal behavior early warning database is pre-constructed in the terminal, and a plurality of data are recorded in the abnormal behavior early warning database, and the plurality of data may include, but are not limited to: abnormal type, facial expression, limb movement, violence tendency behavior, etc.
And respectively matching the facial expressions, the body movements and the violence tendency behaviors of the target object with a plurality of data in the abnormal behavior early warning database one by one to obtain a matching result, and identifying whether the target object has the abnormal behaviors or not according to the matching result.
In an optional embodiment, the result matching module 204 matching each first facial expression, each first limb action and each first violence tendency behavior with a plurality of data in the abnormal behavior early warning database, and identifying whether the target object has an abnormal behavior according to the matching result, includes:
acquiring, from the abnormal behavior early warning database, a plurality of second facial expressions whose abnormal type is expression abnormality, matching the plurality of second facial expressions with the first facial expressions, and determining that the abnormal type of the target object is expression abnormality when a target facial expression corresponding to a second facial expression is matched from the first facial expressions;
acquiring, from the abnormal behavior early warning database, a plurality of second limb actions whose abnormal type is over-excited limb, matching the plurality of second limb actions with the first limb actions, and determining that the abnormal type of the target object is over-excited limb when a target limb action corresponding to a second limb action is matched from the first limb actions;
and acquiring, from the abnormal behavior early warning database, a plurality of second violence tendency behaviors whose abnormal type is violence tendency, matching the plurality of second violence tendency behaviors with the first violence tendency behaviors, and determining that the abnormal type of the target object is violence tendency when a target violence tendency behavior corresponding to a second violence tendency behavior is matched from the first violence tendency behaviors.
When the target facial expression corresponding to the facial expression is not matched, or when the target limb action corresponding to the limb action is not matched, or when the target violence tendency action corresponding to the violence tendency action is not matched, determining that the target object has no abnormal behavior, namely the target object has normal behavior. When the target object is identified to have no abnormal behavior according to the matching result, the target object abnormal behavior identification method further comprises the following steps: and outputting the result of the absence of the abnormal behavior.
In an alternative embodiment, when a target facial expression, a target limb action and a target violence tendency behavior are matched, the result matching module 204 is further configured to:
determining a first frame number set of a whole-body image corresponding to the target facial expression;
determining a second frame number set of the whole-body image corresponding to the target limb action;
determining a third frame number set of the whole-body image corresponding to the target violence tendency behavior;
judging whether any two frame sequence number sets among the first frame sequence number set, the second frame sequence number set and the third frame sequence number set have at least one same frame sequence number;
when any two of the first frame number set, the second frame number set and the third frame number set have at least one frame number in common, keeping the first target whole-body images that have the common frame numbers and deleting the second target whole-body images that do not;
and determining target abnormal types of the target abnormal behaviors corresponding to the first target whole-body image and determining a target element identifier corresponding to each target abnormal type.
For example, assume that the frame numbers of the whole-body images corresponding to the target facial expressions matched from the first facial expressions are 1, 3, 4, 6, 7, so the first frame number set is {1, 3, 4, 6, 7}; that the frame numbers of the whole-body images corresponding to the target limb actions matched from the first limb actions are 1, 4, 5, 6, 8, so the second frame number set is {1, 4, 5, 6, 8}; and that the frame numbers of the whole-body images corresponding to the target violence tendency behaviors matched from the first violence tendency behaviors are 2, 5, 6, 9, so the third frame number set is {2, 5, 6, 9}. The first and second frame number sets share frame numbers 1, 4 and 6; the first and third frame number sets share frame number 6; the second and third frame number sets share frame numbers 5 and 6. The whole-body images with frame numbers 1, 4, 5 and 6 are therefore determined as the first target whole-body images. Subsequently, the first target whole-body images with frame numbers 1, 4, 5 and 6 are matched against the data in the abnormal behavior early warning database: the first facial expressions in the first target whole-body images with frame numbers 1, 4 and 6 are matched with the facial expression data, the first limb actions in the first target whole-body images with frame numbers 5 and 6 are matched with the limb action data, and the first violence tendency behaviors in the first target whole-body images with frame numbers 5 and 6 are matched with the violence tendency behavior data, so as to determine the target abnormal types of the target abnormal behaviors and the target element identifier corresponding to each target abnormal type.
In this optional embodiment, the facial expressions, limb actions and violence tendency behaviors of the target object (the visitor) are recognized independently, so that its abnormal behaviors can be captured comprehensively, the abnormal level can later be judged more completely and accurately, and a coping strategy can be provided for the petition receptionist. When the target object (visitor) shows one abnormal behavior during the petition visit, one or more other abnormal behaviors are very likely to be present as well; for example, an angry facial expression is very likely to be accompanied by over-excited limb actions, and violence tendency behavior is essentially always accompanied by an angry expression and over-excited limb actions. In other words, in the petition scenario the target object (visitor) usually shows abnormal behaviors of at least two types at the same time. In this embodiment, the first, second and third frame number sets are determined, only the first target whole-body images whose frame numbers appear in at least two of these sets are retained, and the target abnormal type of the target abnormal behavior and the target element identifier corresponding to each target abnormal type are then determined from the first target whole-body images. This avoids the misrecognition or one-sided recognition that can result from identifying an abnormal behavior of a single abnormal type in isolation and improves the accuracy of abnormal behavior recognition. Moreover, the number of first target whole-body images is much smaller than the number of first whole-body images, which reduces the amount of computation needed when the abnormal level and possible consequence of each abnormal type are subsequently analyzed according to the abnormal behavior feature analysis rule, and thereby improves the efficiency of that analysis.
The consequence analysis module 205 is configured to, when it is identified that the target object has an abnormal behavior according to the matching result, identify an abnormal type having the abnormal behavior, and analyze an abnormal level and a possible consequence corresponding to the abnormal type according to an abnormal behavior feature analysis rule.
And the terminal determines that the target object has abnormal behaviors, and further identifies the type of the abnormal behaviors to which the target object specifically belongs.
Each facial expression, each limb action and each violence tendency behavior may lead to adverse consequences of differing severity, so the abnormal level of the target object needs to be analyzed and the possible consequences determined, allowing the petition receptionist to take precautions in advance.
In an optional embodiment, the consequence analysis module 205 analyzing the abnormal level and the possible consequences corresponding to the abnormal type according to the abnormal behavior feature analysis rule includes:
determining the element identifiers corresponding to each abnormal type;
analyzing the abnormal level and possible consequence corresponding to each element identifier according to the abnormal behavior feature analysis rule;
sorting the abnormal levels corresponding to each abnormal type from high to low, and taking the first abnormal level after sorting as the abnormal level of that abnormal type;
and sorting the possible consequences corresponding to each abnormal type according to their abnormal levels, and concatenating the sorted possible consequences to obtain the possible consequence of that abnormal type.
The abnormal behavior feature analysis rule table in the terminal includes multiple columns of data, for example, a first column of data is an abnormal level, a second column of data is an abnormal type, and a third column of data is an element identifier, where each abnormal level corresponds to all abnormal types, each abnormal type corresponds to multiple element identifiers, and the element identifiers corresponding to different abnormal levels and different abnormal types are different.
For example, assume that the plurality of element identifiers corresponding to the abnormal type of expression abnormality are: angry, nervous and sad, where the abnormal level for angry is severe with a possible consequence of hitting someone, the abnormal level for nervous is general with a possible consequence of verbal abuse, and the abnormal level for sad is mild with a possible consequence of crying. The plurality of abnormal levels corresponding to the abnormal type of expression abnormality are then sorted from high to low, and severe is taken as the abnormal level of expression abnormality. The possible consequences sorted by abnormal level are: hitting someone, verbal abuse, crying, and they are concatenated (hitting someone, verbal abuse, crying) as the possible consequences of the abnormal type of expression abnormality.
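A minimal sketch of this rule-based analysis follows, assuming the abnormal behavior feature analysis rule table can be held as a simple dictionary keyed by element identifier; the table contents only mirror the angry/nervous/sad illustration above, and the names are not taken from the patent.

LEVEL_ORDER = {"severe": 3, "general": 2, "mild": 1}  # hypothetical ranking of abnormal levels

RULE_TABLE = {
    "angry":   {"level": "severe",  "consequence": "hitting someone"},
    "nervous": {"level": "general", "consequence": "verbal abuse"},
    "sad":     {"level": "mild",    "consequence": "crying"},
}

def analyze_abnormal_type(element_ids):
    # Look up the abnormal level and possible consequence of each element identifier,
    # sort from high to low, take the highest level, and concatenate the consequences.
    entries = [RULE_TABLE[e] for e in element_ids if e in RULE_TABLE]
    entries.sort(key=lambda x: LEVEL_ORDER[x["level"]], reverse=True)
    level = entries[0]["level"]
    consequences = ", ".join(e["consequence"] for e in entries)
    return level, consequences

print(analyze_abnormal_type(["angry", "nervous", "sad"]))
# -> ('severe', 'hitting someone, verbal abuse, crying')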
In an optional embodiment, determining the element identifier corresponding to each abnormal type means determining the target element identifier corresponding to each target abnormal type; the target abnormal level and the target possible consequence corresponding to each target element identifier are then analyzed according to the abnormal behavior feature analysis rule, the plurality of target abnormal levels corresponding to each target abnormal type are sorted from high to low, the first target abnormal level after sorting is taken as the target abnormal level of the target abnormal type, the plurality of target possible consequences corresponding to each target abnormal type are sorted according to the target abnormal levels, and the sorted target possible consequences are concatenated to obtain the final possible consequence of the target abnormal type.
The association storage module 206 is configured to initialize an abnormal behavior early warning database; acquire second videos of a plurality of historical target objects and extract a plurality of frames of second whole-body images from the second videos; set at least one element identifier and an abnormality type for each frame of second whole-body image; and store each frame of second whole-body image in the abnormal behavior early warning database in association with the at least one element identifier and the abnormality type corresponding to the second whole-body image.
In this optional embodiment, second videos of a plurality of historical target objects are acquired, and a plurality of second whole-body images are captured from each second video as feature maps based on the working experience of the petition receptionist.
Since one or more elements may exist in each second whole-body image, each element is marked so as to obtain the plurality of element identifiers of each second whole-body image and the abnormality type corresponding to each element identifier.
Wherein the exception types include: abnormal expression, overexcitation of the limbs, and tendency to violence.
The plurality of elements corresponding to the expression abnormality of the abnormality type may include: anger, tension, disgust, sadness, excitement, etc., to indicate emotional instability of the historical target subject.
The plurality of elements corresponding to the abnormality type of over-excited limb may include: violence, verbal abuse, etc., to indicate that the historical target object has over-excited limb behavior.
The plurality of elements corresponding to the abnormality type of violence tendency may include: not answering questions, abnormal speech rate and intonation, loitering, gathering of people, carrying or displaying banners, carrying controlled tools, and the like, to indicate dangerous behaviors of the historical target object other than expression abnormality and over-excited limb behavior.
The relevant information of target objects subsequently newly identified as having abnormal behaviors is also added to the abnormal behavior early warning database.
The abnormal behavior early warning database established by the terminal can serve as a data source for subsequently comparing the abnormal behaviors of the target object, so as to identify whether the target object has abnormal behaviors and, if so, which abnormal behaviors specifically exist.
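A minimal sketch of constructing the early warning database follows, assuming OpenCV is available for frame extraction and that the receptionist's labeling experience is wrapped in an annotate callable returning (element identifier, abnormality type) pairs; these names are assumptions for illustration only, not part of the patent.

import cv2  # assumed available for reading the second videos

def build_warning_database(history_video_paths, annotate):
    database = []  # initialized (empty) abnormal behavior early warning database
    for path in history_video_paths:
        cap = cv2.VideoCapture(path)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            labels = annotate(frame)  # e.g. [("angry", "expression abnormality")]
            if labels:  # store the second whole-body image in association with its labels
                database.append({"image": frame, "labels": labels})
        cap.release()
    return database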
The model training module 207 is configured to read the data in the abnormal behavior early warning database line by line and combine the read data in each line into one data pair.
The model training module 207 is further configured to train a preset first convolutional neural network based on a plurality of data pairs of which the abnormality type is expression abnormality, to obtain a facial expression recognition model.
The model training module 207 is further configured to train a preset second convolutional neural network based on a plurality of data pairs of which the abnormality type is over-excited limb, to obtain a limb action recognition model.
The model training module 207 is further configured to train a preset third convolutional neural network based on a plurality of data pairs of which the abnormality type is violence tendency, to obtain a violence tendency behavior recognition model.
In this optional embodiment, the facial expression recognition model, the limb action recognition model and the violence tendency behavior recognition model are trained offline based on the data in the abnormal behavior early warning database, so that the trained models can subsequently be used online to recognize the facial expressions, limb actions and violence tendency behaviors in images of the target object.
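A minimal training sketch, under the assumption that the data pairs have already been read from the database as (image tensor, element label index) pairs and that PyTorch is used; the patent does not specify the network architectures or the framework, so the same hypothetical routine is reused for the three preset convolutional neural networks.

import torch
import torch.nn as nn

def train_recognition_model(model, data_pairs, epochs=10):
    # data_pairs: iterable of (image_tensor, label_index) pairs for one abnormality type
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    model.train()
    for _ in range(epochs):
        for image, label in data_pairs:
            optimizer.zero_grad()
            logits = model(image.unsqueeze(0))  # add a batch dimension
            loss = criterion(logits, torch.tensor([label]))
            loss.backward()
            optimizer.step()
    return model

# One model per abnormality type (the variable names below are placeholders):
# facial_expression_model = train_recognition_model(cnn_1, expression_pairs)
# limb_action_model       = train_recognition_model(cnn_2, limb_pairs)
# violence_model          = train_recognition_model(cnn_3, violence_pairs)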
The result display module 208 is configured to display the result of the abnormal behavior, the abnormal type, the abnormal level, and the possible consequences.
When the target object is identified to have abnormal behaviors according to the matching result, the result that the target object has the abnormal behaviors is output, and the abnormal type, the abnormal level and the possible consequence are further output to the petition receptionist, so that the timeliness, pertinence and effectiveness of petition risk control are enhanced.
The target object abnormal behavior recognition device provided by the embodiment of the invention can be applied to intelligent government affairs to intelligently recognize the abnormal behaviors of visitors, and promotes the construction of a smart city. The target object abnormal behavior recognition method can also be applied to a smart community, where intelligent monitoring cameras installed in the community intelligently identify the abnormal behaviors of outsiders, thereby ensuring the safety of residents in the community.
It should be emphasized that, in order to further ensure privacy and security of the first video of the target object, the abnormal behavior early warning database, the analyzed abnormal level of the target object, and the possible consequences, the first video of the target object may be acquired from the node of the blockchain, and the abnormal behavior early warning database, the analyzed abnormal level of the target object, and the possible consequences may be stored in the node of the blockchain.
Example three
The present embodiment provides a storage medium having a computer program stored thereon, where the computer program, when executed by a processor, implements all or part of the steps in the above target object abnormal behavior recognition method embodiment, for example, steps S11-S16 shown in fig. 1:
S11, acquiring a first video of a target object and extracting a plurality of frames of first whole-body images in the first video;
S12, calling a facial expression recognition model to recognize a first facial expression in each frame of first whole-body image;
S13, calling a limb action recognition model to recognize a first limb action in each frame of first whole-body image;
S14, calling a violence tendency behavior recognition model to recognize a first violence tendency behavior in each frame of first whole-body image;
S15, matching each first facial expression, each first limb action and each first violence tendency behavior with a plurality of data in an abnormal behavior early warning database respectively, and identifying whether the target object has abnormal behaviors according to matching results;
and S16, when the target object is identified to have abnormal behavior according to the matching result, identifying the abnormal type with the abnormal behavior and analyzing the abnormal level and possible consequences corresponding to the abnormal type according to the abnormal behavior feature analysis rule.
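A hedged end-to-end sketch of steps S12-S15, assuming the three recognition models are callables that return one label string per whole-body image and that the early warning database maps each abnormality type to the set of abnormal labels stored for it; all names here are illustrative assumptions only.

def recognize_abnormal_behavior(frames, expression_model, limb_model, violence_model, warning_db):
    detectors = {
        "expression abnormality": expression_model,  # S12
        "over-excited limb": limb_model,             # S13
        "violence tendency": violence_model,         # S14
    }
    matched = {}  # abnormality type -> element identifiers matched in the video
    for frame in frames:
        for abnormal_type, model in detectors.items():
            label = model(frame)
            if label in warning_db.get(abnormal_type, set()):  # S15: match against database
                matched.setdefault(abnormal_type, set()).add(label)
    # A non-empty result means abnormal behavior exists; S16 then feeds the matched
    # element identifiers into the rule-table analysis sketched earlier.
    return matched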
Alternatively, the computer program, when executed by the processor, implements the functions of the modules in the above device embodiments, such as modules 201 to 208 in fig. 2:
the image extraction module 201 is configured to acquire a first video of a target object and extract multiple frames of first whole-body images in the first video;
the instruction receiving module 202 is configured to send, in response to a received start instruction, the start instruction to an intelligent monitoring camera, so that the intelligent monitoring camera starts and collects a first video of the target object;
the model calling module 203 is configured to call a facial expression recognition model to recognize a first facial expression in each frame of the first whole-body image;
the model calling module 203 is further configured to call a limb action recognition model to recognize a first limb action in each frame of the first whole-body image;
the model calling module 203 is further configured to call a violence tendency behavior recognition model to recognize a first violence tendency behavior in each frame of the first whole-body image;
the result matching module 204 is configured to match each first facial expression, each first limb action, and each first violence tendency behavior with a plurality of data in an abnormal behavior early warning database, and identify whether the target object has an abnormal behavior according to a matching result;
the consequence analysis module 205 is configured to, when it is identified that the target object has an abnormal behavior according to the matching result, identify an abnormal type having the abnormal behavior and analyze an abnormal level and a possible consequence corresponding to the abnormal type according to an abnormal behavior feature analysis rule;
the association storage module 206 is configured to initialize an abnormal behavior early warning database; acquire second videos of a plurality of historical target objects and extract a plurality of frames of second whole-body images from the second videos; set at least one element identifier and an abnormality type for each frame of second whole-body image; and store each frame of second whole-body image in the abnormal behavior early warning database in association with the at least one element identifier and the abnormality type corresponding to the second whole-body image;
the model training module 207 is configured to read data in the abnormal behavior early warning database line by line and combine the read data in each line into one data pair;
the model training module 207 is further configured to train a preset first convolutional neural network based on a plurality of data pairs of which the abnormality type is expression abnormality, to obtain a facial expression recognition model;
the model training module 207 is further configured to train a preset second convolutional neural network based on a plurality of data pairs of which the abnormality type is over-excited limb, to obtain a limb action recognition model;
the model training module 207 is further configured to train a preset third convolutional neural network based on a plurality of data pairs of which the abnormality type is violence tendency, to obtain a violence tendency behavior recognition model;
the result display module 208 is configured to display the result of the abnormal behavior, the abnormal type, the abnormal level, and the possible consequences.
Example four
Fig. 3 is a schematic structural diagram of a terminal according to an embodiment of the present invention. In the preferred embodiment of the present invention, the terminal 3 includes a memory 31, at least one processor 32, at least one communication bus 33, and a transceiver 34.
It will be appreciated by those skilled in the art that the structure of the terminal shown in fig. 3 does not constitute a limitation of the embodiments of the present invention; it may be a bus-type structure or a star-type structure, and the terminal 3 may include more or less hardware or software than shown, or a different arrangement of components.
In some embodiments, the terminal 3 is a terminal capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and the hardware includes but is not limited to a microprocessor, an application specific integrated circuit, a programmable gate array, a digital processor, an embedded device, and the like. The terminal 3 may further include a client device, which includes, but is not limited to, any electronic product capable of performing human-computer interaction with a client through a keyboard, a mouse, a remote controller, a touch panel, or a voice control device, for example, a personal computer, a tablet computer, a smart phone, a digital camera, and the like.
It should be noted that the terminal 3 is only an example, and other existing or future electronic products that can be adapted to the present invention should also be included in the scope of protection of the present invention and are incorporated herein by reference.
In some embodiments, a computer program is stored in the memory 31, and the at least one processor 32 may call the computer program stored in the memory 31 to perform the related functions. For example, the respective modules described in the above embodiments are computer programs stored in the memory 31 and executed by the at least one processor 32, thereby implementing the functions of the respective modules. The memory 31 includes a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-Time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disk memory, a magnetic disk memory, a tape memory, or any other computer-readable medium capable of carrying or storing data.
Further, the computer-readable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the blockchain node, and the like.
The block chain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, an encryption algorithm and the like. A block chain (Blockchain), which is essentially a decentralized database, is a series of data blocks associated by using a cryptographic method, and each data block contains information of a batch of network transactions, so as to verify the validity (anti-counterfeiting) of the information and generate a next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
In some embodiments, the at least one processor 32 is a control core (control unit) of the terminal 3, connects various components of the entire terminal 3 by using various interfaces and lines, and executes various functions and processes data of the terminal 3 by running or executing programs or modules stored in the memory 31 and calling data stored in the memory 31. For example, the at least one processor 32, when executing the computer program stored in the memory, implements all or part of the steps of the target object abnormal behavior recognition method described in the embodiment of the present invention. The at least one processor 32 may be composed of an integrated circuit, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital processing chips, graphics processors, and combinations of various control chips.
In some embodiments, the at least one communication bus 33 is arranged to enable connection communication between the memory 31 and the at least one processor 32 or the like.
Although not shown, the terminal 3 may further include a power supply (such as a battery) for supplying power to various components, and preferably, the power supply may be logically connected to the at least one processor 32 through a power management device, so as to implement functions of managing charging, discharging, and power consumption through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The terminal 3 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again. The integrated unit implemented in the form of a software functional module may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes several instructions to enable a terminal (which may be a personal computer, or a network device, etc.) or a processor (processor) to execute a part of the target object abnormal behavior identification method according to the embodiments of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or that the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. A target object abnormal behavior identification method is characterized by comprising the following steps:
acquiring a first video of a target object and extracting a plurality of frames of first whole-body images in the first video;
calling a facial expression recognition model to recognize a first facial expression in each frame of first whole-body image, calling a limb action recognition model to recognize a first limb action in each frame of first whole-body image, and calling a violence tendency behavior recognition model to recognize a first violence tendency behavior in each frame of first whole-body image;
matching each first face expression, each first limb action and each first violence tendency behavior with a plurality of data in an abnormal behavior early warning database respectively, and identifying whether the target object has abnormal behaviors or not according to matching results;
when the target object is identified to have abnormal behaviors according to the matching result, identifying the abnormal types with the abnormal behaviors and determining the element identification corresponding to each abnormal type;
analyzing the abnormal grade and the possible consequence corresponding to each element identification according to the abnormal behavior characteristic analysis rule, sequencing a plurality of abnormal grades corresponding to each abnormal type from high to low, taking the sequenced abnormal grade at the first as the abnormal grade of the abnormal type, sequencing a plurality of possible consequences corresponding to each abnormal type according to the abnormal grade, and connecting the sequenced possible consequences in series to obtain the possible consequence of the abnormal type.
2. The method for identifying abnormal behaviors of a target object according to claim 1, wherein the step of matching each of the first facial expressions, each of the first body movements and each of the first violence tendency behaviors with a plurality of data in an abnormal behavior early warning database, and the step of identifying whether the target object has the abnormal behaviors according to the matching result comprises the steps of:
acquiring a plurality of second facial expressions of which the abnormal types are expression abnormalities and correspond to the expression abnormalities in the abnormal behavior early warning database, matching the plurality of second facial expressions with the first facial expressions, and determining that the abnormal type of the target object is expression abnormality when a target facial expression corresponding to the second facial expression is matched from the first facial expression;
acquiring a plurality of second limb actions with abnormal types corresponding to the overstimulated limbs in the abnormal behavior early warning database, matching the plurality of second limb actions with the first limb actions, and determining that the abnormal type of the target object is the overstimulated limb when the target limb actions corresponding to the second limb actions are matched from the first limb actions;
and acquiring a plurality of second violence tendency behaviors, the abnormal types of which are corresponding to violence tendency, in the abnormal behavior early warning database, matching the plurality of second violence tendency behaviors with the first violence tendency behaviors, and determining that the abnormal type of the target object is violence tendency when a target violence tendency behavior corresponding to the second violence tendency behavior is matched from the first violence tendency behavior.
3. The method for identifying abnormal behavior of a target object according to claim 2, further comprising:
determining a first frame number set of a whole-body image corresponding to the target facial expression;
determining a second frame number set of the whole-body image corresponding to the target limb action;
determining a third frame number set of the whole-body image corresponding to the target violence tendency behavior;
judging whether any two frame sequence number sets among the first frame sequence number set, the second frame sequence number set and the third frame sequence number set have at least one same frame sequence number;
when judging whether any two frame number sets among the first frame number set, the second frame number set and the third frame number set have at least one same frame number, keeping a first target whole-body image with the same frame number and deleting a second target whole-body image without the same frame number;
and determining target abnormal types of the target abnormal behaviors corresponding to the first target whole-body image and determining a target element identifier corresponding to each target abnormal type.
4. The method for identifying the abnormal behavior of the target object according to claim 1, wherein the construction process of the abnormal behavior early warning database comprises the following steps:
initializing an abnormal behavior early warning database;
acquiring second videos of a plurality of historical target objects and extracting a plurality of frames of second whole-body images in the second videos;
setting at least one element identifier and an abnormal type for each frame of the second whole-body image;
and storing each frame of second whole-body image and at least one element identifier and abnormal type association corresponding to the second whole-body image in the abnormal behavior early warning database, wherein the abnormal behavior early warning database is stored in a node of a block chain.
5. The method for identifying abnormal behavior of target object according to claim 4, wherein after storing the second whole-body image and the at least one element identifier and the abnormality type association corresponding to the second whole-body image in each frame in the abnormal behavior early warning database, the method further comprises:
reading the data in the abnormal behavior early warning database line by line and combining the read data in each line into a data pair;
training a preset first convolution neural network to obtain a facial expression recognition model based on a plurality of data corresponding to the abnormal expression types;
training a preset second convolutional neural network based on a plurality of data corresponding to the over-excited limb with the abnormal type to obtain a limb action recognition model;
and training a preset third convolutional neural network based on a plurality of data corresponding to the violence tendency of the abnormal type to obtain a violence tendency behavior recognition model.
6. The method for identifying abnormal behavior of target object according to any one of claims 1 to 5, wherein before the obtaining the first video of the target object, the method further comprises:
responding to a received starting instruction, sending the starting instruction to an intelligent monitoring camera, and enabling the intelligent monitoring camera to start and collect a first video of the target object;
and receiving a first video of the target object transmitted by the intelligent monitoring camera.
7. The method for identifying abnormal behavior of target object according to claim 6, wherein after identifying the abnormal type with abnormal behavior and analyzing the abnormal grade and possible consequences corresponding to the abnormal type according to the abnormal behavior feature analysis rule, the method further comprises:
displaying a result of the existence of the abnormal behavior, the abnormal type, the abnormal level, and the possible consequences.
8. An apparatus for identifying abnormal behavior of a target object, the apparatus comprising:
the image extraction module is used for acquiring a first video of a target object and extracting a plurality of frames of first whole-body images in the first video;
the model calling module is used for calling a facial expression recognition model to recognize a first facial expression in each frame of first whole-body image, calling a limb action recognition model to recognize a first limb action in each frame of first whole-body image and calling a violence tendency behavior recognition model to recognize a first violence tendency behavior in each frame of first whole-body image;
the result matching module is used for respectively matching each first facial expression, each first limb action and each first violence tendency behavior with a plurality of data in an abnormal behavior early warning database and identifying whether the target object has abnormal behaviors or not according to a matching result;
and the consequence analysis module is used for identifying abnormal types with abnormal behaviors and determining element identifications corresponding to each abnormal type when the target object is identified to have the abnormal behaviors according to the matching result, analyzing the abnormal grade and the possible consequence corresponding to each element identification according to the abnormal behavior characteristic analysis rule, sequencing a plurality of abnormal grades corresponding to each abnormal type from high to low, taking the sequenced abnormal grade at the first as the abnormal grade of the abnormal type, sequencing a plurality of possible consequences corresponding to each abnormal type according to the abnormal grade, and concatenating the sequenced plurality of possible consequences to obtain the possible consequence of the abnormal type.
9. A terminal, characterized in that the terminal comprises a processor for implementing the target object abnormal behavior recognition method according to any one of claims 1 to 7 when executing a computer program stored in a memory.
10. A computer-readable storage medium, on which a computer program is stored, the computer program, when being executed by a processor, implementing the target object abnormal behavior recognition method according to any one of claims 1 to 7.
CN202010944370.6A 2020-09-10 2020-09-10 Target object abnormal behavior identification method, device, terminal and storage medium Active CN111814775B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010944370.6A CN111814775B (en) 2020-09-10 2020-09-10 Target object abnormal behavior identification method, device, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010944370.6A CN111814775B (en) 2020-09-10 2020-09-10 Target object abnormal behavior identification method, device, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN111814775A true CN111814775A (en) 2020-10-23
CN111814775B CN111814775B (en) 2020-12-11

Family

ID=72860749

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010944370.6A Active CN111814775B (en) 2020-09-10 2020-09-10 Target object abnormal behavior identification method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN111814775B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112651308A (en) * 2020-12-14 2021-04-13 北京市商汤科技开发有限公司 Object identification tracking method and device, electronic equipment and storage medium
CN113096808A (en) * 2021-04-23 2021-07-09 深圳壹账通智能科技有限公司 Event prompting method and device, computer equipment and storage medium
CN113378733A (en) * 2021-06-17 2021-09-10 杭州海亮优教教育科技有限公司 System and device for constructing emotion diary and daily activity recognition
CN113408435A (en) * 2021-06-22 2021-09-17 华侨大学 Safety monitoring method, device, equipment and storage medium
CN113408495A (en) * 2021-07-30 2021-09-17 广州汇图计算机信息技术有限公司 Safety guard system for security
CN113627330A (en) * 2021-08-10 2021-11-09 北京百度网讯科技有限公司 Method and device for identifying target type dynamic image and electronic equipment
CN113673495A (en) * 2021-10-25 2021-11-19 北京通建泰利特智能系统工程技术有限公司 Intelligent security method and system based on neural network and readable storage medium
CN113627330B (en) * 2021-08-10 2024-05-14 北京百度网讯科技有限公司 Method and device for identifying target type dynamic image and electronic equipment

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102176222A (en) * 2011-03-18 2011-09-07 北京科技大学 Multi-sensor information collection analyzing system and autism children monitoring auxiliary system
CN102201061A (en) * 2011-06-24 2011-09-28 常州锐驰电子科技有限公司 Intelligent safety monitoring system and method based on multilevel filtering face recognition
CN103297751A (en) * 2013-04-23 2013-09-11 四川天翼网络服务有限公司 Wisdom skynet video behavior analyzing system
CN105653690A (en) * 2015-12-30 2016-06-08 武汉大学 Video big data rapid searching method and system constrained by abnormal behavior early-warning information
CN108596028A (en) * 2018-03-19 2018-09-28 昆明理工大学 A kind of unusual checking algorithm based in video record
CN109460702A (en) * 2018-09-14 2019-03-12 华南理工大学 Passenger's abnormal behaviour recognition methods based on human skeleton sequence
CN110135246A (en) * 2019-04-03 2019-08-16 平安科技(深圳)有限公司 A kind of recognition methods and equipment of human action
CN111353426A (en) * 2020-02-28 2020-06-30 山东浪潮通软信息科技有限公司 Abnormal behavior detection method and device
US10706284B2 (en) * 2007-07-11 2020-07-07 Avigilon Patent Holding 1 Corporation Semantic representation module of a machine-learning engine in a video analysis system

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10706284B2 (en) * 2007-07-11 2020-07-07 Avigilon Patent Holding 1 Corporation Semantic representation module of a machine-learning engine in a video analysis system
CN102176222A (en) * 2011-03-18 2011-09-07 北京科技大学 Multi-sensor information collection analyzing system and autism children monitoring auxiliary system
CN102201061A (en) * 2011-06-24 2011-09-28 常州锐驰电子科技有限公司 Intelligent safety monitoring system and method based on multilevel filtering face recognition
CN103297751A (en) * 2013-04-23 2013-09-11 四川天翼网络服务有限公司 Wisdom skynet video behavior analyzing system
CN105653690A (en) * 2015-12-30 2016-06-08 武汉大学 Video big data rapid searching method and system constrained by abnormal behavior early-warning information
CN108596028A (en) * 2018-03-19 2018-09-28 昆明理工大学 A kind of unusual checking algorithm based in video record
CN109460702A (en) * 2018-09-14 2019-03-12 华南理工大学 Passenger's abnormal behaviour recognition methods based on human skeleton sequence
CN110135246A (en) * 2019-04-03 2019-08-16 平安科技(深圳)有限公司 A kind of recognition methods and equipment of human action
CN111353426A (en) * 2020-02-28 2020-06-30 山东浪潮通软信息科技有限公司 Abnormal behavior detection method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIA QING et al.: "Real time violence detection based on deep spatio-temporal features", BIOMETRIC RECOGNITION - 13TH CHINESE CONFERENCE, CCBR 2018 *
LIU XUEQI et al.: "Research on abnormal behavior detection methods based on the YOLO network model", Electronic Design Engineering *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112651308A (en) * 2020-12-14 2021-04-13 北京市商汤科技开发有限公司 Object identification tracking method and device, electronic equipment and storage medium
CN113096808A (en) * 2021-04-23 2021-07-09 深圳壹账通智能科技有限公司 Event prompting method and device, computer equipment and storage medium
CN113378733A (en) * 2021-06-17 2021-09-10 杭州海亮优教教育科技有限公司 System and device for constructing emotion diary and daily activity recognition
CN113408435A (en) * 2021-06-22 2021-09-17 华侨大学 Safety monitoring method, device, equipment and storage medium
CN113408435B (en) * 2021-06-22 2023-12-05 华侨大学 Security monitoring method, device, equipment and storage medium
CN113408495A (en) * 2021-07-30 2021-09-17 广州汇图计算机信息技术有限公司 Safety guard system for security
CN113627330A (en) * 2021-08-10 2021-11-09 北京百度网讯科技有限公司 Method and device for identifying target type dynamic image and electronic equipment
CN113627330B (en) * 2021-08-10 2024-05-14 北京百度网讯科技有限公司 Method and device for identifying target type dynamic image and electronic equipment
CN113673495A (en) * 2021-10-25 2021-11-19 北京通建泰利特智能系统工程技术有限公司 Intelligent security method and system based on neural network and readable storage medium
CN113673495B (en) * 2021-10-25 2022-02-01 北京通建泰利特智能系统工程技术有限公司 Intelligent security method and system based on neural network and readable storage medium

Also Published As

Publication number Publication date
CN111814775B (en) 2020-12-11

Similar Documents

Publication Publication Date Title
CN111814775B (en) Target object abnormal behavior identification method, device, terminal and storage medium
CN112446025A (en) Federal learning defense method and device, electronic equipment and storage medium
CN110414370B (en) Face shape recognition method and device, electronic equipment and storage medium
CN112447189A (en) Voice event detection method and device, electronic equipment and computer storage medium
CN112016905B (en) Information display method and device based on approval process, electronic equipment and medium
CN111695594A (en) Image category identification method and device, computer equipment and medium
CN111696663A (en) Disease risk analysis method and device, electronic equipment and computer storage medium
CN112597238A (en) Method, system, device and medium for establishing knowledge graph based on personnel information
CN112686232B (en) Teaching evaluation method and device based on micro expression recognition, electronic equipment and medium
CN114639152A (en) Multi-modal voice interaction method, device, equipment and medium based on face recognition
CN114022841A (en) Personnel monitoring and identifying method and device, electronic equipment and readable storage medium
CN111738182B (en) Identity verification method, device, terminal and storage medium based on image recognition
CN113435353A (en) Multi-mode-based in-vivo detection method and device, electronic equipment and storage medium
CN112634017A (en) Remote card opening activation method and device, electronic equipment and computer storage medium
CN111651452A (en) Data storage method and device, computer equipment and storage medium
CN111695445A (en) Face recognition method, device, equipment and computer readable storage medium
CN116562894A (en) Vehicle insurance claim fraud risk identification method, device, electronic equipment and storage medium
CN114245204A (en) Video surface signing method and device based on artificial intelligence, electronic equipment and medium
CN111651652B (en) Emotion tendency identification method, device, equipment and medium based on artificial intelligence
CN114996386A (en) Business role identification method, device, equipment and storage medium
CN110717377B (en) Face driving risk prediction model training and prediction method thereof and related equipment
CN113987351A (en) Artificial intelligence based intelligent recommendation method and device, electronic equipment and medium
CN112686156A (en) Emotion monitoring method and device, computer equipment and readable storage medium
CN113221990A (en) Information input method and device and related equipment
CN110598527A (en) Machine learning-based claims insurance policy number identification method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant