CN111222370A - Case studying and judging method, system and device - Google Patents

Case studying and judging method, system and device

Info

Publication number
CN111222370A
CN111222370A (application CN201811418363.1A)
Authority
CN
China
Prior art keywords
video
frame
criminal
information
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811418363.1A
Other languages
Chinese (zh)
Inventor
林通 (Lin Tong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Uniview Technologies Co Ltd
Original Assignee
Zhejiang Uniview Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Uniview Technologies Co Ltd filed Critical Zhejiang Uniview Technologies Co Ltd
Priority to CN201811418363.1A
Publication of CN111222370A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 — Movements or behaviour, e.g. gesture recognition
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 — Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 — Services
    • G06Q50/26 — Government or public services
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 — Television systems
    • H04N7/18 — Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Social Psychology (AREA)
  • Educational Administration (AREA)
  • Psychiatry (AREA)
  • Human Computer Interaction (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses a case studying and judging method, system and device, comprising the following steps: acquiring video information; and calling a preset studying and judging model, wherein the preset studying and judging model is used for analyzing the feature information of each frame of video image to obtain a recognition result for each frame, and for analyzing the recognition results of consecutive multi-frame video images to determine whether a criminal behavior occurs. Automatic study and judgment of cases in video is thus realized through the preset studying and judging model, which increases the study and judgment speed and saves manpower and material resources.

Description

Case studying and judging method, system and device
Technical Field
The present invention relates to the field of video processing, and in particular, to a case studying and judging method, system and device.
Background
With the development of cities, city-wide "sky eye" surveillance networks are increasingly common, and monitoring cameras are installed in public places and in locations where personal safety may be endangered. The public security department or related security departments can therefore assist in detecting a case, or study and judge a case, by retrieving the surveillance video.
Case study and judgment mainly consists of analyzing a surveillance video and judging whether a criminal behavior occurs. In the prior art, however, the surveillance video is checked manually and the video content is analyzed by a person in order to determine whether a criminal behavior occurs.
With the wide-area coverage of monitoring devices in cities, video data grows geometrically; if cases are still studied and judged manually, not only are huge manpower and material resources consumed, but the efficiency of study and judgment is also low.
Disclosure of Invention
In view of this, the embodiment of the invention discloses a case studying and judging method, a case studying and judging system and a case studying and judging device, which realize case studying and judging on videos in an automatic mode, improve studying and judging speed and save manpower and material resources.
A case study and judgment method comprises the following steps:
acquiring video information;
extracting feature information of each frame of video image in the video information, wherein the feature information comprises: information for characterizing a behavioral action;
and calling a preset studying and judging model, wherein the preset studying and judging model is used for analyzing the characteristic information of each frame of video image to obtain the identification result of each frame of video image, analyzing the identification result of continuous multi-frame video images and determining whether the criminal behavior occurs.
Optionally, the extracting the feature information of each frame of image in the video information includes:
extracting the characteristics of each pixel point in each frame of video image;
filtering out pixel points irrelevant to case study and judgment according to the characteristics of each pixel point in each frame of video image;
generating a data packet according to the characteristics of the pixel points remaining after the pixel points are filtered out from each frame of image; wherein each frame of video image corresponds to a data packet.
Optionally, the method further includes:
determining the identifier of a corresponding data packet according to the generation time of each frame of video image;
recording the identifier of the data packet in a preset mapping table; the mapping table represents the relationship between a data packet and the data generation time;
and generating an ordered feature vector according to the mapping table and the data packet, wherein the feature vector is feature information used for analysis in the studying and judging model.
Optionally, the preset studying and judging model is obtained by training a preset machine learning model through a video sample marked with a criminal behavior.
Optionally, the training process of the preset studying and judging model includes:
acquiring a first video sample with a preset time length; the first video sample is marked with a criminal behavior pattern;
extracting feature information of criminal behaviors of each frame of video image in the first video sample;
and training a preset machine learning model according to the characteristic information of the criminal behaviors of each frame of video image in the first video sample and the criminal behavior pattern marked by the video sample to obtain a studying and judging model.
Or the training process of the judging model comprises the following steps:
dividing a video sample with a preset time length into a plurality of time segments;
sequentially acquiring a second video sample in each time period; the second video sample is marked with a criminal behavior pattern;
sequentially extracting characteristic information of the criminal behaviors of each frame of video image in each second video sample;
and training a preset machine learning model according to the characteristic information of each second video sample and the criminal behavior pattern marked by the second video sample to obtain a study and judgment model.
Optionally, the analyzing the feature information of each frame of video image to obtain an identification result of each frame of video image, and analyzing the identification result of each frame of video image to determine whether a criminal action occurs includes:
analyzing the characteristic information of each frame of video image through a study and judgment model to obtain a first probability that each frame of image contains criminal behaviors;
calculating a second probability of the video containing the criminal behavior according to the probability of the criminal behavior of the continuous multi-frame video images;
judging whether the second probability is greater than a preset probability threshold value;
if the second probability is greater than the preset probability threshold, indicating that a criminal behavior occurs in the video.
Optionally, the method further includes:
matching the characteristic information in the continuous multi-frame video images judged to have the criminal behaviors with the target characteristics of each criminal mode in a preset judging model;
and determining a crime mode contained in the continuous multi-frame video images judged to have the crime behavior according to the matching result.
Optionally, the method further includes:
when the video information is determined to have the criminal behavior, extracting the face features of the criminal suspect in the video information determined to have the criminal behavior;
matching the extracted features of the human face with a preset identity recognition library;
and determining the identity of the criminal suspect according to the matching result.
The embodiment of the invention also discloses a case studying and judging device, which comprises:
a first acquisition unit configured to acquire video information;
a first feature extraction unit, configured to extract feature information of each frame of video image in the video information, where the feature information includes: information for characterizing a behavioral action;
and the criminal behavior analysis unit is used for calling a preset study and judgment model, and the preset study and judgment model is used for analyzing the characteristic information of each frame of video image to obtain the identification result of each frame of video image, analyzing the identification result of each frame of video image and determining whether a criminal behavior occurs.
Optionally, the preset studying and judging model is obtained by training a preset machine learning model through a video sample marked with a criminal behavior.
The embodiment of the invention also discloses a case studying and judging system, which comprises:
a video acquisition end and a server end;
the video acquisition end is used for acquiring video information;
extracting feature information of each frame of video image in the video information, wherein the feature information comprises: information for characterizing a behavioral action;
the server side is used for acquiring video information, calling a preset study and judgment model, and the preset study and judgment model is used for analyzing the characteristic information of each frame of video image to obtain the identification result of each frame of video image, analyzing the identification result of continuous multi-frame video images and determining whether crime occurs.
Optionally, the system further includes:
a client;
the client is used for screening the video samples of the training study and judgment model; the judging model is used for executing the following steps:
analyzing the characteristic information of each frame of video image to obtain the identification result of each frame of video image, and analyzing the identification result of continuous multi-frame video images to determine whether a criminal behavior occurs.
The embodiment of the invention discloses a case studying and judging method, system and device, comprising the following steps: acquiring video information; and calling a preset studying and judging model, wherein the preset studying and judging model is used for analyzing the feature information of each frame of video image to obtain a recognition result for each frame, and for analyzing the recognition results of consecutive multi-frame video images to determine whether a criminal behavior occurs. Automatic study and judgment of criminal behaviors in videos is thus realized through the preset studying and judging model, which increases the study and judgment speed and saves manpower and material resources.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
FIG. 1 is a flow chart of a case study method according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating a training method of a judging model according to an embodiment of the present invention;
FIG. 3 is another schematic flow chart of a training method of a studying and judging model according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a case study device according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a case study system according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a schematic flow chart of a case study method according to an embodiment of the present invention is shown, in the embodiment, the method includes:
s101: acquiring video information;
in this embodiment, the sources of the video information include many sources, which are not limited in this embodiment, and for example, the video information may be obtained from a related video capture device, downloaded from a network, or obtained from a related storage device. .
S102: and calling a preset studying and judging model, wherein the preset studying and judging model is used for analyzing the characteristic information of each frame of video image to obtain the identification result of each frame of video image, analyzing the identification result of continuous multi-frame video images and determining whether the criminal behavior occurs.
The study and judgment model is obtained by training a preset machine learning model through video sample information marked with criminal behaviors, and the specific training process is described in detail below and is not repeated here.
In this embodiment, the motion of living bodies (humans or animals) in the video may play an important role in identifying whether a criminal behavior occurs. For example, if a lock-picking action is detected, it is considered that a criminal act may occur; likewise, if a knife-holding action is detected, it is also considered that a criminal act may occur.
However, in addition to the motion information, in order to more accurately identify the behavior information of the person in the video frame image, the extracted features of the image may further include: any one or more of characteristic information characterizing time, characteristic information characterizing gender, and characteristics characterizing a particular item (e.g., a criminal instrument).
For example: if feature information indicating 2 o'clock in the morning, a male subject, and a lock-picking action is detected together, the occurrence of a criminal behavior can be judged.
Specifically, the process of extracting features of each frame of video image includes:
extracting the characteristics of each pixel point in each frame of video image;
filtering out pixel points irrelevant to case study and judgment according to the characteristics of each pixel point in each frame of video image;
generating a data packet according to the characteristics of the pixel points remaining after the pixel points are filtered out from each frame of image; wherein each frame of video image corresponds to one data packet.
In this embodiment, each frame of the video image includes a large number of pixels, and some of the pixels have characteristics useful for identifying a criminal behavior, but some of the pixels have characteristics not useful for identifying the criminal behavior, for example, a background portion of the video image. Therefore, in order to increase the processing speed, the extracted feature information is subjected to dimension reduction, and features which are useless for identifying the criminal behavior in the video image can be filtered, namely pixel points of the features which are useless for identifying the criminal behavior are filtered.
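For illustration only (this is not part of the original disclosure), the per-frame filtering and data-packet generation described above might be sketched as follows, assuming a simple background-difference criterion; the threshold, packet layout and all names are hypothetical:

```python
# Illustrative sketch: filter out pixels that match the background (useless
# for identifying criminal behavior) and keep the rest as a per-frame
# "data packet". The difference threshold is an assumption.
def extract_frame_packet(frame, background, threshold=25):
    """Return a data packet: (x, y, value) for foreground pixels only."""
    packet = []
    for y, (row, bg_row) in enumerate(zip(frame, background)):
        for x, (px, bg) in enumerate(zip(row, bg_row)):
            if abs(px - bg) > threshold:  # pixel differs from background
                packet.append((x, y, px))
    return packet

background = [[10, 10, 10], [10, 10, 10]]
frame      = [[10, 10, 10], [10, 200, 10]]  # one moving-object pixel
packet = extract_frame_packet(frame, background)
```

A real implementation would use a learned foreground or feature model rather than a fixed intensity threshold; the point here is only the dimension reduction: each frame yields one packet containing only case-relevant pixels.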
In this embodiment, after the data packets containing the feature information are generated, note that a video may contain a large number of video frames and each video frame corresponds to one data packet. Since the video frames have a temporal order, the generation times of the data packets also have a temporal order. When analyzing the feature information, in order to reflect the order of the video frames, the data packets may be sorted by generation time, which may be implemented as follows:
determining the identifier of a corresponding data packet according to the generation time of each frame of video image;
recording the identifier of the data packet in a preset mapping table; the mapping table identifies the relationship between the packet and the packet generation time;
and generating an ordered feature vector according to the mapping table and the data packet, wherein the feature vector is feature information used for analysis in the studying and judging model.
In this embodiment, the generated feature vector includes feature information in all data packets, and the feature vectors corresponding to each data packet are arranged in order, for example, the data packets may be converted into the feature vectors, and the feature vectors are sorted, that is, the feature vectors of each data packet are sorted in time order.
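An illustrative sketch (not from the patent; the identifier scheme and all names are hypothetical) of ordering data packets by generation time through a mapping table and concatenating them into an ordered feature vector:

```python
# Illustrative sketch: assign each packet an identifier, record identifier ->
# generation time in a mapping table, then emit one time-ordered feature vector.
def build_feature_vector(packets_by_time):
    """packets_by_time: {generation_time: [feature, ...]}."""
    # mapping table: packet identifier -> generation time
    mapping = {f"pkt-{i}": t for i, t in enumerate(sorted(packets_by_time))}
    ordered = sorted(mapping.items(), key=lambda kv: kv[1])
    vector = []
    for pkt_id, t in ordered:
        vector.extend(packets_by_time[t])  # concatenate in time order
    return mapping, vector

packets = {2.0: [0.5, 0.6], 1.0: [0.1, 0.2], 3.0: [0.9]}
mapping, vec = build_feature_vector(packets)
```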
In this embodiment, determining the occurrence of a criminal act may be implemented in multiple ways, for example the following two:
The first method: each frame of video image is analyzed by the studying and judging model to obtain a recognition result for the video image, which may be the probability that a criminal behavior is present. Because a criminal behavior is a continuous process, a single video frame cannot confirm whether a criminal behavior has occurred; therefore, the probability that a criminal behavior appears in each video frame can be analyzed to determine the probability that a criminal behavior appears in the video as a whole, and thus whether a criminal behavior occurs. Specifically, this includes:
analyzing the characteristic information of each frame of video image through a study and judgment model to obtain a first probability that each frame of image contains criminal behaviors;
calculating a second probability of the video containing the criminal behavior according to the probability of the criminal behavior of the continuous multi-frame video images;
judging whether the second probability is greater than a preset probability threshold value;
if the second probability is greater than the preset probability threshold, it indicates that a criminal behavior occurs in the video;
if the second probability is not greater than the preset probability threshold, it is determined that no criminal behavior occurs in the video.
For the calculation of the first probability, the features contained in a video frame may be matched with the features of criminal behaviors contained in the studying and judging model. For example, the feature information for lock picking in the studying and judging model may include: time: midnight to 3 o'clock in the morning; gender: male; action: stooping; objects: locks, keyless tools, and the like. When a frame of video image is identified, the probability that the video frame contains a criminal behavior is determined according to the degree of matching between the features contained in the video frame (for example, 2 o'clock in the morning) and all of the above features. A plurality of consecutive video frames is then analyzed to calculate the probability that a criminal behavior occurs.
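The aggregation of per-frame first probabilities into a video-level second probability, thresholded against a preset value, might be sketched as follows (illustrative only; the averaging rule is an assumption, since the patent does not specify how the second probability is computed from the per-frame probabilities):

```python
# Illustrative sketch: combine per-frame crime probabilities (first
# probability) into a video-level second probability and threshold it.
def video_contains_crime(frame_probs, prob_threshold=0.5):
    # One simple aggregation (an assumption): mean over consecutive frames.
    second_prob = sum(frame_probs) / len(frame_probs)
    return second_prob, second_prob > prob_threshold

probs = [0.2, 0.7, 0.9, 0.8]  # per-frame first probabilities
second, flagged = video_contains_crime(probs)
```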
The second method: each frame of video image is analyzed by the studying and judging model to obtain a recognition result indicating whether the video image contains a criminal behavior; it is then judged whether the number of video frames identified as containing a criminal behavior is greater than a preset threshold, and if so, the video is determined to contain a criminal behavior.
In this embodiment, when it is determined whether a crime occurs in a video frame, in order to provide a more targeted analysis result for a user, a crime mode of the video frame is also output, which specifically includes:
matching the characteristic information in the continuous multi-frame video images judged to have the criminal behaviors with the target characteristics of each criminal mode in a preset judging model;
and determining a crime mode contained in the continuous multi-frame video images judged to have the crime behavior according to the matching result.
One video may contain one or more crime modes. For example, if the crime modes include a murder mode, a robbery mode and a theft mode, a perpetrator committing robbery may also commit murder, so the video contains both the robbery mode and the murder mode. It can be seen that the result of matching criminal behavior may include one or more crime modes.
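For illustration (not part of the disclosure), matching feature information against each crime mode's target features and keeping the modes whose third probability exceeds a preset threshold might look like this, with a set-overlap ratio standing in for the unspecified matching score:

```python
# Illustrative sketch: score observed features against each crime mode's
# target features (third probability) and report modes over a threshold.
def match_crime_modes(features, mode_targets, threshold=0.5):
    results = {}
    for mode, targets in mode_targets.items():
        # overlap ratio as a stand-in similarity (an assumption)
        hits = len(set(features) & set(targets))
        results[mode] = hits / len(targets)
    return {m: p for m, p in results.items() if p > threshold}

mode_targets = {
    "theft":   ["stoop", "lock", "keyless_tool"],
    "robbery": ["knife", "chase", "grab"],
}
observed = ["stoop", "lock", "keyless_tool", "night"]
matched = match_crime_modes(observed, mode_targets)
```

Taking the single mode with the maximum third probability, as in the alternative described below, would be `max(results, key=results.get)` over the same scores.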
Specifically, matching the feature information in the continuous multi-frame video images judged to have criminal behaviors with the target features of each criminal mode in a preset judging model, includes:
matching the characteristic information in the continuous multi-frame video images with the crime behaviors with the target characteristics of each crime mode in a preset studying and judging model, and calculating a third probability value of the continuous multi-frame video images belonging to each crime mode;
screening out a third probability value larger than a preset threshold value, and taking a crime mode corresponding to the third probability value larger than the preset threshold value as an output result;
or
And screening out the maximum third probability value, and taking the crime mode corresponding to the maximum third probability value as the output result.
In this implementation, after it is determined that a criminal behavior occurs, face recognition may be performed on the video images to determine the identity of the criminal suspect, so as to improve the efficiency of solving the case. Specifically, the method further includes:
when it is determined that a criminal behavior occurs in the video information, extracting the facial features of the criminal suspect from the video information determined to contain the criminal behavior;
matching the extracted features of the human face with a preset identity recognition library;
and determining the identity of the criminal suspect according to the matching result.
When the face is identified, the features of the face can be extracted through one frame of video image or through multiple frames of video images.
The identity recognition library comprises face features and identity information of the criminal suspect, the face features can be matched in the identity recognition library, and the identity of the criminal suspect is determined according to a matching result.
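A minimal sketch (illustrative; the feature representation, names and distance threshold are all assumptions) of matching extracted face features against a preset identity recognition library by nearest Euclidean distance:

```python
# Illustrative sketch: nearest-neighbor match of a face feature vector
# against a preset identity recognition library.
def identify_suspect(face_feature, identity_library, max_distance=1.0):
    best_id, best_dist = None, float("inf")
    for identity, ref in identity_library.items():
        dist = sum((a - b) ** 2 for a, b in zip(face_feature, ref)) ** 0.5
        if dist < best_dist:
            best_id, best_dist = identity, dist
    # reject matches that are too far away (unknown person)
    return best_id if best_dist <= max_distance else None

library = {"suspect-A": [0.1, 0.9], "suspect-B": [0.8, 0.2]}
who = identify_suspect([0.15, 0.85], library)
```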
In this embodiment, feature information is extracted from each frame of video image in the video information, the feature information including information for characterizing a behavioral action; the feature information of each frame is analyzed to obtain a recognition result for each frame, and the recognition results of consecutive multi-frame video images are analyzed to determine whether a criminal behavior occurs. The video is thus analyzed automatically, achieving automatic study and judgment of the case, increasing the study and judgment speed, and saving manpower and material resources.
Referring to fig. 2, a flow chart of a training method for studying and judging a model according to an embodiment of the present invention is shown, in this embodiment, the method includes:
s201: acquiring a first video sample with a preset time length; the first video sample is marked with a criminal behavior pattern;
the criminal behavior patterns are used for characterizing different criminal behaviors, and include, for example: a murder mode, a robbery mode, a theft mode, a lock picking mode and the like.
S202: extracting feature information of criminal behaviors of each frame of video image in the first video sample;
s203: and training a preset machine learning model according to the characteristic information of the criminal behaviors of each frame of video image in the first video sample and the criminal behavior pattern marked by the video sample to obtain a studying and judging model.
In this embodiment, the extraction method for the features extracted in S202 is the same as the feature extraction method in S102.
In addition, in this embodiment, the video samples may be marked by an expert after video analysis; each frame of video image in a video sample may be marked with a result, and the video within the time period in which a criminal act occurs is also marked. In this way, the recognition result of each frame of video image can be trained, and whether a criminal behavior occurs within a period of time can also be trained according to the per-frame recognition results.
In this embodiment, the preset time length may be relatively long; after the video of the preset time length is obtained, the video sample is marked, and after the video sample is marked, the machine learning model is trained, so that more complete statistical characteristics can be obtained.
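The training flow of FIG. 2 (per-frame feature information labeled with a crime pattern, fed to a preset machine learning model) can be sketched with a trivial stand-in model, here a per-pattern feature centroid; the patent leaves the actual learning algorithm open, so everything below is illustrative:

```python
# Illustrative sketch of the FIG. 2 training flow with a stand-in "model":
# the centroid of the feature vectors seen for each crime pattern.
def train_judging_model(samples):
    """samples: list of (frame_features, crime_pattern) pairs."""
    sums, counts = {}, {}
    for features, pattern in samples:
        acc = sums.setdefault(pattern, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[pattern] = counts.get(pattern, 0) + 1
    # per-pattern mean feature vector
    return {p: [v / counts[p] for v in acc] for p, acc in sums.items()}

samples = [([1.0, 0.0], "theft"), ([0.8, 0.2], "theft"), ([0.0, 1.0], "robbery")]
model = train_judging_model(samples)
```

The segmented variant of FIG. 3 would simply call this on the samples of each time period in sequence, accumulating into the same model.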
Referring to fig. 3, another flow chart of a training method for studying and judging a model according to an embodiment of the present invention is shown, in this embodiment, the method includes:
s301: dividing a video sample with a preset time length into a plurality of time segments;
s302: sequentially acquiring a second video sample in each time period; the second video sample is marked with a criminal behavior pattern;
the criminal behavior patterns are used for characterizing different criminal behaviors, and include, for example: theft mode, robbery mode, theft mode, lock picking mode, etc.
S303: sequentially extracting characteristic information of the criminal behaviors of each frame of video image in each second video sample;
s304: and training a preset machine learning model according to the characteristic information of each second video sample and the criminal behaviors marked by the second video samples to obtain a study and judgment model.
In this embodiment, the video of the preset time length is divided into a plurality of corresponding time periods, and the preset machine learning model is trained on the video samples of each time period in sequence, so that the data processing speed is higher.
In addition, in this embodiment, the video samples may be marked by an expert after video analysis; each frame of video image in a video sample may be marked with a result, and the video within the time period in which a criminal act occurs is also marked. In this way, the recognition result of each frame of video image can be trained, and whether a criminal behavior occurs within a period of time can also be trained according to the per-frame recognition results.
In this embodiment, the above-mentioned machine learning model may include multiple machine learning algorithms, or a combination of multiple machine learning algorithms, and is not limited in this embodiment.
To clearly illustrate the training process of the machine learning model, this embodiment is described below using two example algorithms:
1. Kalman filter algorithm
The posterior probability distribution p(x_{k-1} | y_{1:k-1}) is a Gaussian distribution, and the dynamic system is linear, expressed as:
x_k = S x_{k-1} + T u_{k-1} + q_{k-1}
y_k = H x_k + r_k
where x_k is the state at time k and u_k is the control quantity at time k. By analogy, even if two cameras A and B shoot the same picture, there are small differences in color and brightness; such differences act as the system control quantity, reflecting how strongly it influences the actual image. S and T are parameters, y_k is the measured value at time k, H is a system parameter, and q_k and r_k represent the process noise and the measurement noise, respectively.
During training, the system noise and measurement noise are both Gaussian distributed, with covariance matrices Q_{k-1} and R_k.
Five core formulas of the Kalman filter algorithm:
The state estimate at time k-1 is x̂_{k-1}, with error covariance P_{k-1}.
Predicted state value at time k:
x̂_k^- = S x̂_{k-1} + T u_{k-1}
Predicted error covariance, from the last error covariance P_{k-1} and the process noise Q:
P_k^- = S P_{k-1} S^T + Q_{k-1}
Kalman gain:
K_k = P_k^- H^T (H P_k^- H^T + R_k)^{-1}
Correction update:
x̂_k = x̂_k^- + K_k (y_k - H x̂_k^-)
Covariance update, producing the P_k used in the next iteration to estimate the state value at time k+1:
P_k = (I - K_k H) P_k^-
Thus, the algorithm can operate by autoregression.
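The five steps above can be sketched in the scalar case as follows (illustrative only; the values chosen for S, T, H, Q and R are made up):

```python
# Scalar sketch of one Kalman iteration: predict, gain, correct, update.
def kalman_step(x_prev, P_prev, u, y, S=1.0, T=0.0, H=1.0, Q=1e-2, R=1e-1):
    x_pred = S * x_prev + T * u            # predict state at time k
    P_pred = S * P_prev * S + Q            # predict error covariance
    K = P_pred * H / (H * P_pred * H + R)  # Kalman gain
    x_new = x_pred + K * (y - H * x_pred)  # correction update
    P_new = (1 - K * H) * P_pred           # covariance for next iteration
    return x_new, P_new

x, P = 0.0, 1.0
for y in [1.2, 0.9, 1.1, 1.0]:             # noisy measurements of ~1.0
    x, P = kalman_step(x, P, u=0.0, y=y)
```

Feeding each output back in as the next x_{k-1}, P_{k-1} is the autoregressive operation mentioned above: the estimate converges toward the measured level while P shrinks.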
2. Wiener filtering algorithm
The estimated value is calculated as
ŝ(n) = Σ_m h(m)·x(n−m).
The objective function is the minimum mean square error
E[e²(n)] = E[(s(n) − ŝ(n))²].
Taking the partial derivative of the mean square error with respect to the impulse response and setting it to 0 yields the Wiener-Hopf equation
Σ_m h_opt(m)·R_xx(j−m) = R_xs(j),
so that h(m) = h_opt(m), where
R_xs(j) = E[x(n−j)·s(n)],
R_xx(j−m) = E[x(n−m)·x(n−j)].
It is assumed here that the signal is uncorrelated with the noise, so that
R_xs(m) = E[x(n)·s(n+m)] = E[(s(n)+ω(n))·s(n+m)] = E[s(n)·s(n+m)] = R_ss(m),
R_xx(m) = E[x(n)·x(n+m)] = E[(s(n)+ω(n))·(s(n+m)+ω(n+m))] = R_ss(m) + R_ωω(m),
and the Wiener-Hopf equation can be simplified to
Σ_m h_opt(m)·[R_ss(j−m) + R_ωω(j−m)] = R_ss(j).
where n is the time index, s(n) is the actual result value at time n, ŝ(n) (s with a hat) is the predicted result value at time n, x(n) is the input parameter at time n, i.e. the feature value at time n, and h(n) is the impulse response, which here plays the role of the prediction model; h_opt is its optimal value, i.e. the ideal prediction model, and the study and judgment model is obtained by continuously optimizing the current h.
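As a sketch, the Wiener-Hopf equations above can be solved numerically for a finite-length impulse response by estimating the correlations from sample data; the sinusoidal signal, noise level, and filter length below are illustrative assumptions rather than values from this embodiment.

```python
import numpy as np

# Solve the Wiener-Hopf equations  sum_m h_opt(m) * Rxx(j - m) = Rxs(j)
# for an L-tap filter, estimating the correlations from hypothetical data.
rng = np.random.default_rng(0)
n = 5000
s = np.sin(0.05 * np.arange(n))           # clean signal s(n)
x = s + 0.5 * rng.standard_normal(n)      # observed x(n) = s(n) + noise

def corr(a, b, lag):
    # Sample estimate of E[a(n - lag) * b(n)] for lag >= 0.
    return np.mean(a * b) if lag == 0 else np.mean(a[:-lag] * b[lag:])

L = 8
Rxx = np.array([[corr(x, x, abs(j - m)) for m in range(L)] for j in range(L)])
Rxs = np.array([corr(x, s, j) for j in range(L)])
h_opt = np.linalg.solve(Rxx, Rxs)         # optimal impulse response

s_hat = np.convolve(x, h_opt)[:n]         # s_hat(n) = sum_m h(m) * x(n - m)
mse_raw = np.mean((x - s) ** 2)
mse_filt = np.mean((s_hat - s) ** 2)
print(mse_raw, mse_filt)  # the filtered estimate has a lower mean square error
```

The Toeplitz structure of Rxx comes from stationarity (R_xx(−k) = R_xx(k)); continuing to re-estimate the correlations and re-solve as new data arrives is one way to "continuously optimize the current h" as described above.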
Referring to fig. 4, a schematic structural diagram of a case study device according to an embodiment of the present invention is shown,
in this embodiment, the apparatus comprises:
a first acquisition unit 401 for acquiring video information;
a first feature extraction unit 402, configured to extract feature information of each frame of video image in the video information, where the feature information includes: information for characterizing a behavioral action;
the criminal behavior analysis unit 403 is configured to call a preset study and judgment model, where the preset study and judgment model is used to analyze the feature information of each frame of video image to obtain an identification result of each frame of video image, and analyze the identification result of each frame of video image to determine whether a criminal behavior occurs.
Optionally, the criminal behavior analysis unit includes:
the characteristic extraction subunit is used for extracting the characteristic of each pixel point in each frame of video image;
the filtering subunit is used for filtering out pixel points irrelevant to case study and judgment according to the characteristics of each pixel point in each frame of video image;
the data packet generating subunit is used for generating a data packet according to the characteristics of the remaining pixel points after the pixel points are filtered out from each frame of image; wherein each frame of video image corresponds to a data packet.
Optionally, the method further includes:
the identification determining unit is used for determining the identification of the corresponding data packet according to the generation time of each frame of video image;
the recording unit is used for recording the identification of the data packet in a preset mapping table; the mapping table represents the relationship between a data packet and the data generation time;
and the ordered vector generating unit is used for generating ordered characteristic vectors according to the mapping table and the data packet.
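A minimal sketch of the data-packet and mapping-table bookkeeping performed by these units; the packet-id scheme and the feature payloads are hypothetical.

```python
# Each frame yields one data packet of remaining-pixel features; the packet id
# is derived from the frame's generation time and recorded in a mapping table,
# so sorting the table recovers the ordered feature vector. All values are
# illustrative.
frames = [
    {"gen_time": 3, "features": [0.2, 0.1]},
    {"gen_time": 1, "features": [0.9, 0.4]},
    {"gen_time": 2, "features": [0.5, 0.3]},
]

packets = {}        # packet id -> feature payload
mapping_table = {}  # packet id -> generation time
for frame in frames:
    pkt_id = "pkt-%d" % frame["gen_time"]
    packets[pkt_id] = frame["features"]
    mapping_table[pkt_id] = frame["gen_time"]

# Ordered feature vector: packets concatenated in generation-time order.
ordered = [v for pid in sorted(mapping_table, key=mapping_table.get)
           for v in packets[pid]]
print(ordered)  # -> [0.9, 0.4, 0.5, 0.3, 0.2, 0.1]
```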
Optionally, the preset studying and judging model is obtained by training a preset machine learning model through a video sample marked with a criminal behavior.
Optionally, the method further includes:
the second acquisition unit is used for acquiring a first video sample with a preset time length; the first video sample is marked with a criminal behavior pattern;
the second feature extraction unit is used for extracting feature information of criminal behaviors of each frame of video image in the first video sample;
and the first studying and judging model training unit is used for training a preset machine learning model according to the characteristic information of the criminal behavior of each frame of video image in the first video sample and the criminal behavior mode marked by the video sample to obtain the studying and judging model.
Or
The time division unit is used for dividing the video samples with preset time length into a plurality of time segments;
the third acquisition unit is used for sequentially acquiring the second video samples in each time period; the second video sample is marked with a criminal behavior pattern;
the third feature extraction unit is used for sequentially extracting feature information of criminal behaviors of each frame of video image in each second video sample;
and the second studying and judging model training unit is used for training a preset machine learning model according to the characteristic information of each video sample and the criminal behavior pattern marked by the second video sample to obtain the studying and judging model.
Optionally, the criminal behavior analysis unit is specifically configured to:
analyzing the feature information of each frame of video image through the study and judgment model to obtain a first probability that each frame contains criminal behavior;
calculating a second probability that the video contains criminal behavior from the first probabilities of consecutive multi-frame video images;
judging whether the second probability is greater than a preset probability threshold;
if it is greater than the preset probability threshold, the video is determined to contain criminal behavior.
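The embodiment does not fix a formula for computing the second probability from the per-frame first probabilities; as one hypothetical choice, a sliding-window mean over consecutive frames can be compared against the threshold:

```python
# Hypothetical aggregation: the second probability is taken as the best
# sliding-window mean of the per-frame first probabilities. Window size and
# threshold are illustrative, not values fixed by this embodiment.
def video_has_crime(frame_probs, window=3, threshold=0.6):
    if len(frame_probs) < window:
        return False
    second_probs = [sum(frame_probs[i:i + window]) / window
                    for i in range(len(frame_probs) - window + 1)]
    return max(second_probs) > threshold

print(video_has_crime([0.1, 0.7, 0.8, 0.9, 0.2]))  # -> True
print(video_has_crime([0.1, 0.2, 0.3, 0.2, 0.1]))  # -> False
```

Requiring several consecutive high-probability frames, rather than a single spike, is what makes the second probability more robust than any individual first probability.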
Optionally, the method further includes:
the matching unit is used for matching the characteristic information in the continuous multi-frame video images judged to have the criminal behaviors with the target characteristics of each criminal mode in the preset judging model;
and the output unit is used for determining the crime mode contained in the continuous multi-frame video images judged to have the crime behaviors according to the matching result.
Optionally, the method further includes:
identity recognition unit for
When the video information is determined to have the criminal behavior, extracting the face characteristics of the criminal suspect in the video information determined to have the criminal behavior;
matching the extracted features of the human face with a preset identity recognition library;
and determining the identity of the criminal suspect according to the matching result.
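A sketch of matching an extracted face feature against a preset identity library; the cosine-similarity metric, the library entries, and the threshold are illustrative assumptions, not the embodiment's actual matching method.

```python
import math

# Match a face feature vector against a preset identity library by cosine
# similarity; all names, vectors and the threshold are hypothetical.
def cosine(a, b):
    dot = sum(p * q for p, q in zip(a, b))
    na = math.sqrt(sum(p * p for p in a))
    nb = math.sqrt(sum(q * q for q in b))
    return dot / (na * nb)

identity_library = {
    "suspect_A": [0.9, 0.1, 0.3],
    "suspect_B": [0.1, 0.8, 0.5],
}

def identify(face_feature, library, threshold=0.9):
    best_id, best_sim = None, threshold
    for identity, template in library.items():
        sim = cosine(face_feature, template)
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id  # None when no entry exceeds the threshold

print(identify([0.88, 0.12, 0.31], identity_library))  # -> suspect_A
```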
With the device of this embodiment, cases in video are studied and judged automatically, which increases the study and judgment speed and saves manpower and material resources.
Referring to fig. 5, a schematic structural diagram of a case study and judgment system according to an embodiment of the present invention is shown, in this embodiment, the system includes:
a video acquisition end 501 and a server end 502;
the video acquisition end is used for acquiring video information;
the server side is used for acquiring video information, calling a preset study and judgment model, and the preset study and judgment model is used for analyzing the characteristic information of each frame of video image to obtain the identification result of each frame of video image, analyzing the identification result of continuous multi-frame video images and determining whether crime occurs.
Optionally, the method further includes:
a client 503;
the client is used for screening the video samples of the training study and judgment model; the judging model is used for executing the following steps:
analyzing the characteristic information of each frame of video image to obtain the identification result of each frame of video image, and analyzing the identification result of continuous multi-frame video images to determine whether a criminal behavior occurs.
Cases occur under different conditions in different areas and at different times, and the case study and judgment requirements differ accordingly; so, to meet the requirements of different areas, a deployment scheme can be configured at the client, that is, study and judgment models meeting different requirements are trained.
Specifically, the differences between study and judgment models are mainly reflected in the video samples used for training and in the accuracy of the trained models; the client can therefore screen video samples for different purposes, or set the parameters of the machine learning algorithm used to train the study and judgment model.
By means of the system of this embodiment, cases in video are studied and judged automatically, which increases the study and judgment speed and saves manpower and material resources; the system also supports finding cases from a person, finding persons from a case, and finding related cases from a case, thereby realizing case management.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (12)

1. A case study and judgment method is characterized by comprising the following steps:
acquiring video information;
extracting feature information of each frame of video image in the video information, wherein the feature information comprises: information for characterizing a behavioral action;
and calling a preset studying and judging model, wherein the preset studying and judging model is used for analyzing the characteristic information of each frame of video image to obtain the identification result of each frame of video image, analyzing the identification result of continuous multi-frame video images and determining whether the criminal behavior occurs.
2. The method of claim 1, wherein the extracting feature information of each frame of image in the video information comprises:
extracting the characteristics of each pixel point in each frame of video image;
filtering out pixel points irrelevant to case study and judgment according to the characteristics of each pixel point in each frame of video image;
generating a data packet according to the characteristics of the pixel points remaining after the pixel points are filtered out from each frame of image; wherein each frame of video image corresponds to a data packet.
3. The method of claim 2, further comprising:
determining the identifier of a corresponding data packet according to the generation time of each frame of video image;
recording the identifier of the data packet in a preset mapping table; the mapping table represents the relationship between a data packet and the data generation time;
and generating an ordered feature vector according to the mapping table and the data packet, wherein the feature vector is feature information used for analysis in the studying and judging model.
4. The method of claim 1, wherein the predetermined study model is obtained by training a predetermined machine learning model through a video sample labeled with a criminal behavior.
5. The method of claim 4, wherein the training process of the predetermined judging model comprises:
acquiring a first video sample with a preset time length; the first video sample is marked with a criminal behavior pattern;
extracting feature information of criminal behaviors of each frame of video image in the first video sample;
training a preset machine learning model according to the characteristic information of the criminal behaviors of each frame of video image in the first video sample and the criminal behavior pattern marked by the video sample to obtain a studying and judging model;
or the training process of the judging model comprises the following steps:
dividing a video sample with a preset time length into a plurality of time segments;
sequentially acquiring a second video sample in each time period; the second video sample is marked with a criminal behavior pattern;
sequentially extracting characteristic information of the criminal behaviors of each frame of video image in each second video sample;
and training a preset machine learning model according to the characteristic information of each second video sample and the criminal behavior pattern marked by the second video sample to obtain a study and judgment model.
6. The method of claim 1, wherein analyzing the feature information of each frame of video image to obtain an identification result of each frame of video image, and analyzing the identification result of each frame of video image to determine whether a criminal action occurs comprises:
analyzing the characteristic information of each frame of video image through a study and judgment model to obtain a first probability that each frame of image contains criminal behaviors;
calculating a second probability of the video containing the criminal behavior according to the probability of the criminal behavior of the continuous multi-frame video images;
judging whether the second probability is greater than a preset probability threshold value;
if the probability is larger than the preset probability threshold value, the video is indicated to have the criminal behavior.
7. The method of claim 1, further comprising:
matching the characteristic information in the continuous multi-frame video images judged to have the criminal behaviors with the target characteristics of each criminal mode in a preset judging model;
and determining a crime mode contained in the continuous multi-frame video images judged to have the crime behavior according to the matching result.
8. The method of claim 1, further comprising:
when the video information is determined to have the criminal behavior, extracting the face features of the criminal suspect in the video information determined to have the criminal behavior;
matching the extracted features of the human face with a preset identity recognition library;
and determining the identity of the criminal suspect according to the matching result.
9. A case studying and judging device is characterized by comprising:
a first acquisition unit configured to acquire video information;
a first feature extraction unit, configured to extract feature information of each frame of video image in the video information, where the feature information includes: information for characterizing a behavioral action;
and the criminal behavior analysis unit is used for calling a preset study and judgment model, and the preset study and judgment model is used for analyzing the characteristic information of each frame of video image to obtain the identification result of each frame of video image, analyzing the identification result of each frame of video image and determining whether a criminal behavior occurs.
10. The apparatus of claim 8, wherein the predetermined study model is obtained by training a predetermined machine learning model through a video sample labeled with criminal behavior.
11. A case study and judgment system, comprising:
a video acquisition end and a server end;
the video acquisition end is used for acquiring video information;
extracting feature information of each frame of video image in the video information, wherein the feature information comprises: information for characterizing a behavioral action;
the server side is used for acquiring video information, calling a preset study and judgment model, and the preset study and judgment model is used for analyzing the characteristic information of each frame of video image to obtain the identification result of each frame of video image, analyzing the identification result of continuous multi-frame video images and determining whether crime occurs.
12. The system of claim 11, further comprising:
a client;
the client is used for screening the video samples of the training study and judgment model; the judging model is used for executing the following steps:
analyzing the characteristic information of each frame of video image to obtain the identification result of each frame of video image, and analyzing the identification result of continuous multi-frame video images to determine whether a criminal behavior occurs.
CN201811418363.1A 2018-11-26 2018-11-26 Case studying and judging method, system and device Pending CN111222370A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811418363.1A CN111222370A (en) 2018-11-26 2018-11-26 Case studying and judging method, system and device


Publications (1)

Publication Number Publication Date
CN111222370A true CN111222370A (en) 2020-06-02

Family

ID=70828724

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811418363.1A Pending CN111222370A (en) 2018-11-26 2018-11-26 Case studying and judging method, system and device

Country Status (1)

Country Link
CN (1) CN111222370A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112434580A (en) * 2020-11-13 2021-03-02 珠海大横琴科技发展有限公司 Video statistical analysis method and device
CN114821936A (en) * 2022-03-21 2022-07-29 慧之安信息技术股份有限公司 Method and device for detecting illegal criminal behaviors based on edge calculation

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101883258A (en) * 2009-05-08 2010-11-10 上海弘视通信技术有限公司 Violent crime detection system and detection method thereof
CN102938058A (en) * 2012-11-14 2013-02-20 南京航空航天大学 Method and system for video driving intelligent perception and facing safe city
CN104820834A (en) * 2015-05-19 2015-08-05 深圳市保千里电子有限公司 Fighting early warning method and device
CN104821060A (en) * 2015-05-19 2015-08-05 深圳市保千里电子有限公司 Robbery early warning method and device
CN106650655A (en) * 2016-12-16 2017-05-10 北京工业大学 Action detection model based on convolutional neural network
CN107818312A (en) * 2017-11-20 2018-03-20 湖南远钧科技有限公司 A kind of embedded system based on abnormal behaviour identification
CN108288015A (en) * 2017-01-10 2018-07-17 武汉大学 Human motion recognition method and system in video based on THE INVARIANCE OF THE SCALE OF TIME
CN108351968A (en) * 2017-12-28 2018-07-31 深圳市锐明技术股份有限公司 It is a kind of for the alarm method of criminal activity, device, storage medium and server
CN108446649A (en) * 2018-03-27 2018-08-24 百度在线网络技术(北京)有限公司 Method and device for alarm
CN108805142A (en) * 2018-05-31 2018-11-13 中国华戎科技集团有限公司 A kind of crime high-risk personnel analysis method and system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Tian et al.: "Human Abnormal Behavior Recognition Using Pose Estimation", vol. 37, no. 10, pages 2366 - 2372 *


Similar Documents

Publication Publication Date Title
CN110070029B (en) Gait recognition method and device
CN111382623B (en) Live broadcast auditing method, device, server and storage medium
CN109272509B (en) Target detection method, device and equipment for continuous images and storage medium
CN109325429B (en) Method, device, storage medium and terminal for associating feature data
Salimi et al. Visual-based trash detection and classification system for smart trash bin robot
TWI416068B (en) Object tracking method and apparatus for a non-overlapping-sensor network
CN111898581B (en) Animal detection method, apparatus, electronic device, and readable storage medium
CN111353352B (en) Abnormal behavior detection method and device
EP1801757A1 (en) Abnormal action detector and abnormal action detecting method
CN108009466B (en) Pedestrian detection method and device
CN113065474B (en) Behavior recognition method and device and computer equipment
JP2017111660A (en) Video pattern learning device, method and program
JP2015082245A (en) Image processing apparatus, image processing method, and program
CN109195011B (en) Video processing method, device, equipment and storage medium
CN107516102B (en) Method, device and system for classifying image data and establishing classification model
CN110674680B (en) Living body identification method, living body identification device and storage medium
CN106612385B (en) Video detecting method and video detecting device
CN111723656B (en) Smog detection method and device based on YOLO v3 and self-optimization
CN110532746B (en) Face checking method, device, server and readable storage medium
CN115223246A (en) Personnel violation identification method, device, equipment and storage medium
CN111222370A (en) Case studying and judging method, system and device
CN109785386A (en) Object identification localization method and device
CN114463776A (en) Fall identification method, device, equipment and storage medium
CN111695404A (en) Pedestrian falling detection method and device, electronic equipment and storage medium
CN108596068B (en) Method and device for recognizing actions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination