CN111950518B - Video image enhancement method for violent behavior recognition - Google Patents


Info

Publication number: CN111950518B (application CN202010874439.2A)
Authority: CN (China)
Prior art keywords: video, optical flow, image, blobs
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN111950518A
Inventors: 易军, 庞一然, 郑福建, 郭鑫, 宋光磊, 周伟, 雷友峰, 张秀才, 邓建华, 杨利平
Current Assignee: Chongqing University of Science and Technology (the listed assignees may be inaccurate)
Original Assignee: Chongqing University of Science and Technology
Application filed by Chongqing University of Science and Technology
Priority / filing date: 2020-08-27
Priority to CN202010874439.2A
Publication of application CN111950518A, later granted as CN111950518B

Classifications

    • G06V 20/52 — Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06F 18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06V 10/28 — Image preprocessing; quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G06V 10/462 — Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06N 3/04 — Neural networks; architecture, e.g. interconnection topology
    • G06N 3/08 — Neural networks; learning methods

Abstract

The invention provides a data enhancement method for violent behavior detection training data sets, based on optical flow judgment and person distribution probability. First, a number of video frames are acquired, and the high-risk violence regions in them are extracted using dense optical flow. Then a CSR-Net crowd counting network is used to obtain the person distribution in the video frames. Finally, the region of the video frame most likely to contain violent behavior is determined jointly, cropped, and output as a new video. The method optimizes video data sets for violent behavior recognition training by extracting the region of interest (ROI) for the neural network; it enhances training videos in which the key targets requiring attention appear only at the corners of the surveillance picture or at a long distance, and can effectively improve the training effect of the neural network.

Description

Video image enhancement method for violent behavior recognition
Technical Field
The invention relates to a data enhancement technique, in particular to a technique for enhancing neural network training data for violent behavior recognition, and belongs to the field of video image processing.
Background
Under the background of rapid development in the field of artificial intelligence, the importance of data sets is increasingly prominent. High-risk violent behavior detection is an important task of intelligent video surveillance. Because the viewing angle of a surveillance camera is fixed, violent behavior in the captured video may appear only at the corners of the picture or at a long distance, so that the key targets requiring attention are too small. Since the effect a neural network achieves in application depends on the quality of its training data set, these problems seriously degrade the training effect.
Existing data sets are not only built by hand; increasingly they are collected from real data on the Internet using big data techniques. The quality of such collected data is uneven, however, and the data must be cropped and enhanced manually to improve the training effect.
Considering that the number of training samples needed to train a neural network is huge, this patent designs a data set enhancement technique that automatically crops high-risk regions from surveillance video. Research shows that the optical flow intensity of moving objects in an image can be calculated by a dense optical flow method, yielding the structure and motion of the objects and thereby quantifying the possibility that high-risk violent behavior occurs.
Extracting the high-risk regions of a violent behavior data set using optical flow intensity alone can achieve a certain effect, but it ignores the interference on the optical flow from moving objects other than human bodies. For example, in a street surveillance scene, the optical flow intensity of a vehicle is significantly higher than that of a crowd, which can cause false detection of high-risk regions and makes fully automatic, pipelined enhancement of large data sets difficult.
Disclosure of Invention
The invention discloses a training data enhancement method for detecting violent behavior in video. It aims to solve the problem that, in existing violence recognition data sets, a large amount of violent behavior appears only at the corners of the surveillance picture or at a long distance, so that the key targets requiring attention are too small in the data and the neural network trains poorly.
In order to solve the above problems, the present application adopts the following technical scheme:
A training data enhancement method for violent behavior detection in video, the method comprising the steps of:
S1: use an existing public violent behavior data set, or collect videos containing violent behavior from the Internet;
S2: perform optical flow detection on N consecutive frames of the video using an optical flow method;
Assume that an object with intensity I(x, y, t) keeps constant brightness between two consecutive frames; after moving by (dx, dy) within one unit of time dt it has intensity I(x + dx, y + dy, t + dt), i.e. the following formula is satisfied:
I(x, y, t) = I(x + dx, y + dy, t + dt)
where x, y are the coordinates of the point and t is the current time. Performing a first-order Taylor expansion on the right side of the equation gives
I_x · u + I_y · v + I_t = 0
where I_x = ∂I/∂x is the gradient of the image along the horizontal axis, I_y = ∂I/∂y is the gradient along the vertical axis, I_t = ∂I/∂t is the derivative of the image along time, and (u, v) = (dx/dt, dy/dt) is the optical flow. The equation is solved with the dense optical flow algorithm of Gunnar Farnebäck, which detects the pixel intensity change of all points between two frames. In the specific algorithm of this patent, the scale relation coefficient between pyramid levels pyr_scale is 0.5, the number of pyramid levels is 3, the averaging window size is 15, 3 iterations are performed per image, the pixel neighborhood size poly_n is 5, the Gaussian standard deviation poly_sigma is 1.2, and the input flow is used as the approximation of the initial flow.
The specific steps of performing optical flow detection on the video frames with the optical flow method are:
S21: remove from each frame the optical flow vectors whose intensity is smaller than the optical flow amplitude threshold T1 = 2;
S22: superpose the optical flow information of N = 64 consecutive frames to obtain an optical flow intensity information map;
S23: obtain the optical flow intensity connected subgraphs of the map with a graph connectivity algorithm:
S231: binarize the optical flow intensity information map obtained in step S22 with a threshold of 128;
S232: scan the binarized optical flow intensity information map to obtain the optical flow intensity connected subgraphs; denote by P(i, j) the pixel value of the image at point (i, j), and mark connected subgraphs starting from the image origin P(0, 0), as follows:
S233: scan the image row by row; if P(r - 1, c) = 0 and P(r, c) = 1, the point is determined as an optical flow blob starting point, and if P(r - 1, c) = 1 and P(r, c) = 0, it is determined as an optical flow blob ending point; a unique identifier RI is set for the blob;
S234: for optical flow blobs in all rows except the first, if a blob has no r-value overlap with any existing blob in the previous row, i.e. P(r, c - 1) ≠ P(r, c) holds throughout, set a new unique identifier RI for it;
S235: for optical flow blobs in all rows except the first, if a blob has an r-value overlap with one and only one region in the previous row, it receives the same identifier RI as the overlapping blob in the previous row;
S236: for optical flow blobs in all rows except the first, if a blob overlaps in r value with several regions in the previous row, uniformly rewrite the RI values of this row's region and of all overlapped blobs in the previous row to the minimum RI value among these blobs, so that they form one connected domain;
S237: sum the optical flow intensities of each optical flow subgraph; a subgraph with K_RI ≥ 1000 is a high-risk region of violent behavior.
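The row-scanning procedure of S233–S236 is essentially two-pass connected-component labeling with label merging. A minimal pure-Python sketch (function name and representation are illustrative; 4-connectivity, with a union-find forest implementing the minimum-RI merge of S236):

```python
def label_blobs(binary):
    """Two-pass connected-component labeling (4-connectivity), a sketch of
    steps S233-S236: assign identifiers RI row by row and merge the labels
    of blobs that overlap a previous-row blob, keeping the minimum RI.
    `binary` is a list of rows of 0/1 values."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    parent = {}  # union-find forest over identifiers RI

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)  # keep the minimum RI (S236)

    next_ri = 1
    for r in range(h):
        for c in range(w):
            if not binary[r][c]:
                continue
            left = labels[r][c - 1] if c > 0 else 0
            up = labels[r - 1][c] if r > 0 else 0
            if not left and not up:          # new blob (S233/S234)
                parent[next_ri] = next_ri
                labels[r][c] = next_ri
                next_ri += 1
            elif left and up:                # touches two runs: merge (S236)
                labels[r][c] = min(find(left), find(up))
                union(left, up)
            else:                            # extends exactly one run (S235)
                labels[r][c] = left or up

    # second pass: replace every label by its representative RI
    for r in range(h):
        for c in range(w):
            if labels[r][c]:
                labels[r][c] = find(labels[r][c])
    return labels

# U-shape: the two arms receive different RIs in row 0, then merge in row 1.
merged = label_blobs([[1, 0, 1],
                      [1, 1, 1]])
```

After labeling, S237 would sum the (pre-binarization) flow intensities per label to obtain each K_RI.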
S3: extracting personnel distribution probability information of continuous N frames in the video by using a pre-trained CSR-Net crowd counting network;
The specific steps of extracting the person distribution probability information of the N consecutive frames are:
S31: suppress, in each frame, the image areas whose distribution probability integral value is smaller than the preset threshold T2 = 1;
S32: superpose the probability information of the N consecutive frames to obtain the person distribution probability information, where the probability maps are superposed by taking the pixel-wise mean;
S4: combine the optical flow intensity map with the person distribution probability map to obtain the region of the video where violent behavior is most likely to occur, as follows:
S41: select the sub-image region K_RI with the most significant motion intensity;
S42: check whether the person distribution probability integral of the corresponding probability map region is greater than 2;
S43: if the condition is met, the region is the high-risk region most likely to contain violent behavior;
S44: if the condition is not met, discard the region, return to S41, and select the next most dangerous region, until all regions have been checked.
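The joint decision of S41–S44 can be sketched as follows, with a hypothetical list of candidate blobs carrying their summed flow energy and person-probability integral; note how a high-energy vehicle region with almost no crowd density is rejected, as motivated in the background section:

```python
def pick_high_risk_region(blobs):
    """Sketch of S41-S44 with hypothetical data structures: each candidate
    is a tuple (flow_energy, person_probability_integral, region).
    Candidates are tried in order of decreasing optical flow energy; the
    first whose person distribution probability integral exceeds 2 is the
    high-risk region."""
    for energy, prob, region in sorted(blobs, key=lambda b: -b[0]):
        if prob > 2:
            return region  # S43: condition met
    return None  # S44 exhausted every candidate

# The vehicle produces the strongest flow but carries almost no crowd
# density, so the region with the second-highest energy is selected.
chosen = pick_high_risk_region([
    (1500.0, 0.4, "vehicle"),
    (1200.0, 3.1, "crowd_fight"),
    (900.0, 5.0, "plaza"),
])
```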
S5: intercepting and outputting a suspected violent behavior area of the video to form an enhanced training video data set; the specific method is to normalize the coordinates of the high-risk area selected in step S4, take the point (0,0) at the lower left corner of the video frame, and calculate the maximum and minimum values of the coordinates of the x-axis and the y-axis of the high-risk area: w min 、W max 、H min And H max (ii) a According to the input image proportion required by the subsequent neural network training using the data set, and the central point (C) of the high-risk region x ,C y ) Is a centerPerforming length-width equal ratio expansion selection, and expressing the coordinate of a selection frame as (x) min ,y min ,x max ,y max ) I.e. the coordinates of the lower left corner and the upper right corner of the rectangular area; when x is min ≥W min And y is min ≥H min And x max ≥W max And y is max ≥H max Then, the selected area can be intercepted to form a new video clip.
Compared with the prior art, the technical scheme provided by this application has the following effects and advantages: the method is based on real scenes, reduces the labor cost of manually marking high-risk violent regions in video and cropping the video, effectively improves the quality of the data set, and ultimately improves the precision of the neural network model.
Drawings
FIG. 1 is a flow chart of a data enhancement method for violent behavior detection;
FIG. 2 is a schematic flow chart of the processing of the crowd optical flow intensity information graph;
FIG. 3 is a schematic flow chart of a process for a person distribution probability map;
FIG. 4 is a schematic diagram of the effect of joint interception based on a crowd optical flow intensity information graph and a people distribution probability graph.
Detailed Description
The invention discloses a training data enhancement method for detecting violent behavior in video, which solves the problem that, in existing violence recognition data sets, a large amount of violent behavior appears only at the corners of the surveillance picture or at a long distance, so that the key targets requiring attention are too small in the data and the neural network trains poorly.
For a better understanding of the above technical solution, it is described in detail below with reference to the drawings and a specific embodiment.
Examples
As shown in fig. 1, a training data enhancement method for violent behavior detection in video includes the following steps:
s1: the method comprises the following steps of (1) utilizing an existing violent behavior public data set or collecting a video containing violent behaviors from the Internet;
s2: carrying out optical flow detection on the video continuous N-frame image by using an optical flow method;
there are many algorithms for calculating the optical flow between video frames, and in the embodiment, the motion information is calculated by adopting a global dense optical flow algorithm matched with an image pyramid. Assuming that an object with intensity I (x, y, t) is constant in intensity between two consecutive frames, it has a new intensity I (x + dx, y + dy, t + dt) when it moves a distance of dx and dy in one unit of time, i.e. the following formula is satisfied:
I(x,y,t)=I(x+dx,y+dy,t+dt)
x, y are the coordinates of the point and t is the current time. By performing Taylor approximation to the right side of the equation, one obtains
Figure BDA0002652186380000061
Wherein
Figure BDA0002652186380000071
Is the gradient of the image along the horizontal axis,
Figure BDA0002652186380000072
is the gradient of the image along the vertical axis,
Figure BDA0002652186380000073
is an image along time.
In the specific algorithm of the patent, an upper and lower scale relation coefficient pyr _ scale between every two layers of pyramids is 0.5, a pyramid layer level is 3, a mean value window size winsize is 15, each graph is iterated for 3 times, a pixel neighborhood size poly _ n is 5, a gaussian standard deviation poly _ sigma input stream is 1.2, the graph is used as an approximation of an initial stream, the output is a two-dimensional matrix, each element of the output matrix has two parameters, and the value of the optical flow information (u, v) of the object in the video is represented.
S21: removing optical flow vectors with the intensity smaller than an optical flow amplitude threshold value T1 ═ 2 in each frame of image, wherein the amplitude value is the L2 norm of optical flow information;
s22: superposing the optical flow information of the continuous N-64 frame images to obtain an optical flow intensity information graph;
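Steps S21–S22 can be sketched with NumPy on a hypothetical flow array of shape (N, H, W, 2); in practice the per-frame flow would come from the dense optical flow routine with the parameters listed above:

```python
import numpy as np

def flow_intensity_map(flows, t1=2.0):
    """Sketch of S21-S22: `flows` is a hypothetical array of shape
    (N, H, W, 2) holding per-pixel (u, v) vectors for N frames. Vectors
    whose L2 norm is below the amplitude threshold T1 are removed (S21),
    then the magnitudes of the N frames are superposed (S22)."""
    mag = np.linalg.norm(flows, axis=-1)   # per-pixel L2 norm of (u, v)
    mag = np.where(mag < t1, 0.0, mag)     # S21: suppress weak vectors
    return mag.sum(axis=0)                 # S22: superpose the N frames

# Hypothetical flow stack: N = 64 frames of a 4x4 image.
flows = np.zeros((64, 4, 4, 2))
flows[:, 1, 1] = [3.0, 4.0]   # magnitude 5, above T1 = 2
flows[:, 2, 2] = [1.0, 1.0]   # magnitude sqrt(2), below T1
intensity = flow_intensity_map(flows)
```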
s23: acquiring an optical flow intensity connected subgraph of the image by using a graph connectivity algorithm;
there are many methods for calculating graph connectivity, and in this embodiment, the method is calculated as follows:
S231: binarize the optical flow intensity information map obtained in step S22; its threshold is N × T1 = 64 × 2 = 128.
S232: scanning the binarized optical flow intensity information graph to obtain an optical flow intensity connected subgraph; representing the pixel value of the image at the point by P (i, j), and marking a connected subgraph from the starting point P (0,0) of the image, wherein the method comprises the following specific steps:
S233: scan the image row by row; since the image has been binarized, its pixel values are {0, 1}. If P(r - 1, c) = 0 and P(r, c) = 1, the point is determined as an optical flow blob starting point, and if P(r - 1, c) = 1 and P(r, c) = 0, it is determined as an optical flow blob ending point; a unique identifier RI is set for the blob.
S234: for optical flow blobs in all rows except the first, if a blob has no r-value overlap with any existing blob in the previous row, i.e. P(r, c - 1) ≠ P(r, c) holds throughout, a new unique identifier RI is set for it.
S235: for optical flow blobs in all rows except the first, if a blob has an r-value overlap with one and only one region in the previous row, it receives the same identifier RI as the overlapping blob in the previous row.
S236: when an optical flow blob in a row other than the first overlaps in r value with several regions in the previous row, the RI values of this row's region and of all overlapped blobs in the previous row are collectively rewritten to the minimum RI value among these blobs, forming one connected domain.
S237: summing the optical flow intensity of the optical flow subgraph, and in each group of 64 continuous frames, carrying out optical flow energy judgment on the extracted video image frames by multiple times in succession, wherein the values are shown in table 1:
Table 1 — Optical flow energy values in video frames

Frame group    Normal situation    High-risk situation
1              235.968             1101.376
2              223.488             1123.904
3              207.616             915.328
4              207.936             1101.312
5              245.76              963.84
6              258.816             994.368
Taking the data of Table 1 in this example, the average optical flow energy of the videos during high-risk events is about 1033.35; the threshold can therefore be specified so that a subgraph with K_RI ≥ 1000 is a high-risk region of violent behavior.
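The threshold choice can be reproduced directly from the Table 1 data:

```python
# Optical flow energy values from Table 1.
high_risk = [1101.376, 1123.904, 915.328, 1101.312, 963.84, 994.368]
normal = [235.968, 223.488, 207.616, 207.936, 245.76, 258.816]

mean_high = sum(high_risk) / len(high_risk)   # average energy at high-risk events
mean_normal = sum(normal) / len(normal)       # average energy in normal frames
# The cutoff K_RI >= 1000 sits between the two regimes.
```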
S3: extracting the personnel distribution probability information of continuous N frames in the video by using a pre-trained CSR-Net crowd counting network;
The specific steps of extracting the person distribution probability information of the N consecutive frames are:
S31: suppress, in each frame, the image areas whose distribution probability integral value is smaller than the preset threshold T2 = 1;
S32: superpose the probability information of the N consecutive frames by the pixel-wise mean to obtain the person distribution probability information;
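Steps S31–S32 can be sketched with NumPy on hypothetical CSR-Net density maps of shape (N, H, W); as a simplification, the suppression here is applied to whole frames whose probability integral falls below T2, rather than to sub-areas:

```python
import numpy as np

def person_probability_map(prob, t2=1.0):
    """Sketch of S31-S32 on hypothetical CSR-Net density maps of shape
    (N, H, W). Simplification: S31's suppression is applied to whole frames
    whose probability integral is below T2; S32 then superposes the N
    frames by the pixel-wise mean."""
    keep = prob.sum(axis=(1, 2)) >= t2                 # S31 (simplified)
    prob = np.where(keep[:, None, None], prob, 0.0)
    return prob.mean(axis=0)                           # S32: pixel-wise mean

# Hypothetical stack: frame 0 holds a real density map, frame 1 is noise.
prob = np.zeros((4, 2, 2))
prob[0] = 1.0        # integral 4.0 — kept
prob[1, 0, 0] = 0.5  # integral 0.5 < T2 — suppressed
density = person_probability_map(prob)
```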
s4: combining the optical flow intensity graph with the personnel distribution probability graph to obtain an area where violent behaviors are most likely to occur in the video;
the specific way to acquire the region in the video where the violent behavior is most likely to occur in this example is as follows:
S41: select the sub-image region K_RI with the most significant motion intensity;
S42: checking whether the probability integral value of the person distribution corresponding to the probability map area is greater than 2.
S43: if the conditions are met, the area is a high-risk area where violent behaviors are most likely to occur.
S44: if the condition is not met, the area is abandoned, S41 is returned, and the secondary danger area is selected again until all the areas are checked.
S5: normalizing the coordinates of the high-risk area selected in the step S4, taking the point (0,0) at the lower left corner of the video frame, and solving four poles of the high-risk area, namely the maximum and minimum values of the coordinates of the x axis and the y axis: w min 、W max 、H min And H max . According to the input image proportion required by the subsequent neural network training using the data set, and the central point (C) of the high-risk region x ,C y ) Selecting the length-width equal ratio expansion as the center, and expressing the coordinate of the selection box as (x) min ,y min ,x max ,y max ) I.e. the coordinates of the lower left corner and the coordinates of the upper right corner of the rectangular area. When x is min ≥W min And y is min ≥H min And x max ≥W max And y is max ≥H max And then, the video is a suspected violent behavior area, and the area is intercepted and output to form an enhanced training video data set.
In conclusion, the method fuses the optical flow information and the person distribution information of the video as the evaluation features of the area where a high-risk violent event occurs, and can automatically evaluate the position of the high-risk area. In the calculation, weight coefficients are assigned per optical flow blob and combined with the probability distribution values, so that the enhancement of fight data sets for complex scenes has better stability.

Claims (4)

1. A video image enhancement method for violent behavior recognition, the method comprising the steps of:
s1: the method comprises the following steps of (1) utilizing an existing violent behavior public data set or collecting a video containing violent behaviors from the Internet;
s2: carrying out optical flow detection on the continuous N-frame video images by using an optical flow method; the specific steps of using the optical flow method to carry out optical flow detection on the video frame comprise:
s21: removing optical flow vectors with the intensity smaller than an optical flow amplitude threshold value T1 in each frame of image;
s22: superposing the optical flow information of the continuous N frames of images to obtain an optical flow intensity information graph;
s23: acquiring an optical flow intensity connected subgraph of the image by using a graph connectivity algorithm;
s3: extracting the personnel distribution probability information of continuous N frames in the video by using a pre-trained CSR-Net crowd counting network; the specific steps of extracting the personnel distribution probability information of continuous N frames in the video comprise:
s31: suppressing image areas with the distribution probability integral value smaller than a preset threshold value T2 in each frame of image;
s32: overlapping the probability information of the continuous N frames to obtain personnel distribution probability information;
s4: combining the optical flow intensity graph with the personnel distribution probability graph to obtain an area most likely to have violent behaviors in the video;
s5: and intercepting and outputting the suspected violent behavior area of the video to form an enhanced training video data set.
2. The video image enhancement method for violent behavior recognition according to claim 1, wherein step S2 calculates the motion information using a global dense optical flow algorithm with image pyramid matching; assuming that an object with intensity I(x, y, t) keeps constant brightness between two consecutive frames, after moving by (dx, dy) within one unit of time dt it has intensity I(x + dx, y + dy, t + dt), i.e. the following formula is satisfied:
I(x, y, t) = I(x + dx, y + dy, t + dt)
where x, y are the coordinates of the point and t is the current time; performing a first-order Taylor expansion on the right side of the equation gives
I_x · u + I_y · v + I_t = 0
where I_x = ∂I/∂x is the gradient of the image along the horizontal axis, I_y = ∂I/∂y is the gradient along the vertical axis, and I_t = ∂I/∂t is the derivative of the image along time; the equation is solved with the dense optical flow algorithm of Gunnar Farnebäck, which detects the pixel intensity change of all points between two frames and yields the optical flow information (u, v); in the global dense optical flow algorithm, the scale relation coefficient between pyramid levels pyr_scale is 0.5, the number of pyramid levels is 3, the averaging window size winsize is 15, 3 iterations are performed per image, the pixel neighborhood size poly_n is 5, the Gaussian standard deviation poly_sigma is 1.2, the input flow is used as the approximation of the initial flow, and the number of consecutive frames N is 64.
3. The video image enhancement method for violent behavior recognition according to claim 1, wherein obtaining the optical flow connectivity subgraphs in step S23 comprises the following steps:
S231: binarize the optical flow intensity information map obtained in step S22 with a threshold of 128;
S232: scan the binarized optical flow intensity information map to obtain the optical flow intensity connected subgraphs; denote by P(i, j) the pixel value of the image at point (i, j), and mark connected subgraphs starting from the image origin P(0, 0), as follows:
S233: scan the image row by row; if P(r - 1, c) = 0 and P(r, c) = 1, the point is determined as an optical flow blob starting point, and if P(r - 1, c) = 1 and P(r, c) = 0, it is determined as an optical flow blob ending point; a unique identifier RI is set for the blob;
S234: for optical flow blobs in all rows except the first, if a blob has no r-value overlap with any existing blob in the previous row, i.e. P(r, c - 1) ≠ P(r, c) holds throughout, a new unique identifier RI is set for it;
S235: for optical flow blobs in all rows except the first, if a blob has an r-value overlap with one and only one region in the previous row, it receives the same identifier RI as the overlapping blob in the previous row;
S236: for optical flow blobs in all rows except the first, if a blob overlaps in r value with several regions in the previous row, the minimum RI value of all the overlapped blobs is selected as the new RI value of these blobs, forming one connected domain;
S237: sum the optical flow intensities of each optical flow subgraph; a subgraph with K_RI ≥ 1000 is a high-risk region of violent behavior.
4. The video image enhancement method for violent behavior recognition according to claim 1, wherein the method for cropping the suspected violent behavior region of the video in step S5 is: normalize the coordinates of the high-risk region selected in step S4, take the lower-left corner of the video frame as the point (0, 0), and compute the four extremes of the high-risk region, i.e. the maximum and minimum x-axis and y-axis coordinates: W_min, W_max, H_min and H_max; according to the input image aspect ratio required by the subsequent neural network training on this data set, a selection box is expanded with equal length-width ratio around the center point (C_x, C_y) of the high-risk region; the selection box is expressed as (x_min, y_min, x_max, y_max), i.e. the coordinates of the lower-left and upper-right corners of the rectangular area; when x_min ≥ W_min and y_min ≥ H_min and x_max ≥ W_max and y_max ≥ H_max, the selected area can be cropped to form a new video clip.
CN202010874439.2A 2020-08-27 2020-08-27 Video image enhancement method for violent behavior recognition Active CN111950518B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010874439.2A CN111950518B (en) 2020-08-27 2020-08-27 Video image enhancement method for violent behavior recognition


Publications (2)

Publication Number Publication Date
CN111950518A CN111950518A (en) 2020-11-17
CN111950518B true CN111950518B (en) 2022-09-13

Family

ID=73366719


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113642403B (en) * 2021-07-13 2023-07-18 重庆科技学院 Crowd abnormal intelligent safety detection system based on edge calculation
CN114973663B (en) * 2022-05-16 2023-08-29 浙江机电职业技术学院 Intelligent road side unit device based on edge calculation

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103488993B (en) * 2013-09-22 2016-09-07 北京联合大学 A kind of crowd's abnormal behaviour recognition methods based on FAST
CN103500324B (en) * 2013-09-29 2016-07-13 重庆科技学院 Violent behavior recognition methods based on video monitoring
US10565455B2 (en) * 2015-04-30 2020-02-18 Ants Technology (Hk) Limited Methods and systems for audiovisual communication
CN111582031B (en) * 2020-04-03 2023-07-14 深圳市艾伯信息科技有限公司 Multi-model collaborative violence detection method and system based on neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Not All Explorations Are Equal: Harnessing Heterogeneous Profiling Cost for Efficient MLaaS Training;Jun Yi;《2020 IEEE International Parallel and Distributed Processing Symposium》;20200522;第419-428页 *

Also Published As

Publication number Publication date
CN111950518A (en) 2020-11-17

Similar Documents

Publication Publication Date Title
Xu et al. Inter/intra-category discriminative features for aerial image classification: A quality-aware selection model
CN103824070B (en) A kind of rapid pedestrian detection method based on computer vision
CN105574855B (en) Infrared small target detection method under cloud background based on template convolution and false alarm rejection
CN109598794B (en) Construction method of three-dimensional GIS dynamic model
Huang et al. Motion detection with pyramid structure of background model for intelligent surveillance systems
CN110135354B (en) Change detection method based on live-action three-dimensional model
CN109685045B (en) Moving target video tracking method and system
CN108280409B (en) Large-space video smoke detection method based on multi-feature fusion
CN104061907A (en) Viewing-angle greatly-variable gait recognition method based on gait three-dimensional contour matching synthesis
CN111950518B (en) Video image enhancement method for violent behavior recognition
Sengar et al. Motion detection using block based bi-directional optical flow method
CN110751018A (en) Group pedestrian re-identification method based on mixed attention mechanism
CN107833239B (en) Optimization matching target tracking method based on weighting model constraint
TWI441096B (en) Motion detection method for comples scenes
CN110765841A (en) Group pedestrian re-identification system and terminal based on mixed attention mechanism
CN111723773B (en) Method and device for detecting carryover, electronic equipment and readable storage medium
CN111753651A (en) Subway group abnormal behavior detection method based on station two-dimensional crowd density analysis
Karpagavalli et al. Estimating the density of the people and counting the number of people in a crowd environment for human safety
CN105574515A (en) Pedestrian re-identification method in zero-lap vision field
CN114187665A (en) Multi-person gait recognition method based on human body skeleton heat map
CN103049788B (en) Based on space number for the treatment of object detection system and the method for computer vision
CN110866453B (en) Real-time crowd steady state identification method and device based on convolutional neural network
CN110414430B (en) Pedestrian re-identification method and device based on multi-proportion fusion
Xie et al. An enhanced relation-aware global-local attention network for escaping human detection in indoor smoke scenarios
CN104318216A (en) Method for recognizing and matching pedestrian targets across blind area in video surveillance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20201117
Granted publication date: 20220913
Denomination of invention: A Video Image Enhancement Method for Violent Behavior Recognition
Assignor (all records): Chongqing University of Science & Technology
License type (all records): Common License

Assignee: Chongqing Qinlang Technology Co.,Ltd.; Contract record no.: X2023980050332; Record date: 20231206
Assignee: Guangxi Chunmeng Intelligent Technology Co.,Ltd.; Contract record no.: X2023980053984; Record date: 20231227
Assignee: Yuao Holdings Co.,Ltd.; Contract record no.: X2024980000640; Record date: 20240119
Assignee: Youzhengyun (Chongqing) Technology Development Co.,Ltd.; Contract record no.: X2024980000636; Record date: 20240119
Assignee: Chongqing Yiquan Small and Medium Enterprise Service Co.,Ltd.; Contract record no.: X2024980000635; Record date: 20240119
Assignee: Shuwu Shenzhou (Chongqing) Technology Co.,Ltd.; Contract record no.: X2024980000632; Record date: 20240119
Assignee: Shenyang Hongwei Jiacheng Technology Co.,Ltd.; Contract record no.: X2024980000646; Record date: 20240119
Assignee: Chongqing Xinghua Network Technology Co.,Ltd.; Contract record no.: X2024980001290; Record date: 20240126
Assignee: Chongqing Shuangtu Technology Co.,Ltd.; Contract record no.: X2024980001288; Record date: 20240126
Assignee: Chongqing Chaimi Network Technology Service Co.,Ltd.; Contract record no.: X2024980001287; Record date: 20240126
Assignee: Foshan shangxiaoyun Technology Co.,Ltd.; Contract record no.: X2024980003005; Record date: 20240322
Assignee: FOSHAN YAOYE TECHNOLOGY Co.,Ltd.; Contract record no.: X2024980003003; Record date: 20240322
Assignee: Foshan helixing Technology Co.,Ltd.; Contract record no.: X2024980003002; Record date: 20240322