CN109711320A - Method and system for detecting violation behaviors of staff on duty

Method and system for detecting violation behaviors of staff on duty

Info

Publication number
CN109711320A
Authority
CN
China
Prior art keywords
target detection
video
network model
duty
target
Prior art date
Legal status
Granted
Application number
CN201811583373.0A
Other languages
Chinese (zh)
Other versions
CN109711320B (en)
Inventor
余红涛
刘强
杨鹏
杨琪
宋玉浩
陈运锦
Current Assignee
Data Communication Institute Of Science And Technology
Hangzhou Baoxin Technology Co Ltd
XINGTANG COMMUNICATIONS CO Ltd
Original Assignee
Data Communication Institute Of Science And Technology
Hangzhou Baoxin Technology Co Ltd
XINGTANG COMMUNICATIONS CO Ltd
Priority date
Filing date
Publication date
Application filed by Data Communication Institute Of Science And Technology, Hangzhou Baoxin Technology Co Ltd, XINGTANG COMMUNICATIONS CO Ltd filed Critical Data Communication Institute Of Science And Technology
Priority to CN201811583373.0A priority Critical patent/CN109711320B/en
Publication of CN109711320A publication Critical patent/CN109711320A/en
Application granted granted Critical
Publication of CN109711320B publication Critical patent/CN109711320B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention relates to a method and system for detecting violation behaviors of staff on duty, belonging to the technical field of intelligent video analysis, and solves the problems that existing detection methods are inefficient, costly, and have low recognition accuracy. The method comprises the following steps: constructing a target detection network model and training it with a data set; acquiring in real time multiple videos of the same scene from different angles, performing multi-target detection and tracking with the trained target detection network model and a target tracking algorithm, obtaining the personnel information in each video, integrating that information, and judging whether the behavior of the staff on duty is abnormal. The present invention performs intelligent video analysis with environment camera video as the input video source, supports multi-channel video source input and fusion analysis, greatly improves the accuracy of violation behavior identification by means of deep learning and data modeling, and realizes real-time and accurate monitoring of the on-duty behavior of staff in scenes such as monitoring centers, duty rooms and command centers.

Description

Method and system for detecting violation behaviors of staff on duty
Technical Field
The invention relates to the technical field of intelligent video analysis, in particular to a method and a system for detecting violation behaviors of staff on duty.
Background
Video surveillance technology has developed rapidly: key locations in the critical areas of various industries are already covered by cameras, and surveillance video information has been networked. In practice, locations such as monitoring centers, duty rooms and command centers are staffed around the clock, and the management department must periodically perform off-site inspections by retrieving historical video information and then searching for and recording events in which on-duty personnel left the post or slept at the post. Detecting such on-duty violations accurately and in time guarantees the normal operation of on-site safety management work and provides a basis for retrospective tracing when a safety incident occurs. At present this work is performed manually by inspectors, who must traverse (or nearly traverse) all related historical videos, which is time-consuming and labor-intensive; moreover, because different inspectors judge subjectively - and even the same inspector errs under high-intensity fatigue - inspection results contain many missed and false detections.
Replacing manual inspection with video analysis technology can effectively improve working efficiency. However, current video analysis techniques for detecting off-post and sleeping behavior of on-duty operators in scenes such as monitoring centers, duty rooms and command centers generally add a camera in front of each duty station and perform face detection and eye detection on the operator to judge whether the operator has left the post or is asleep. This approach is relatively complex, difficult to maintain, costly, and has low recognition accuracy, and the conspicuous deployment easily provokes resentment among on-duty operators and may even lead to the equipment being damaged.
Disclosure of Invention
In view of the above analysis, the present invention aims to provide a method and a system for detecting violation behaviors of on-duty personnel, so as to solve the problems that existing detection methods are inefficient, costly, and have low recognition accuracy.
The purpose of the invention is mainly realized by the following technical scheme:
on one hand, the invention discloses a method for detecting the violation of duty personnel, which comprises the following steps:
step S1, constructing a target detection network model and training it with a data set to obtain a trained target detection network model;
step S2, acquiring in real time multiple videos of the same scene from different angles, and performing multi-target detection and tracking on each video with the trained target detection network model and a target tracking algorithm to obtain the personnel information in each video;
step S3, integrating the acquired personnel information in each path of video according to the personnel matching relationship among the multiple paths of videos; and judging whether the behavior of the person on duty is abnormal or not according to the integrated data.
The invention has the following beneficial effects: the method performs intelligent video analysis with environment camera video as the input video source, supports the input of multiple video sources with fusion analysis, greatly improves the accuracy of violation behavior identification by means of deep learning and data modeling, realizes real-time and accurate monitoring of the on-duty behavior of on-duty personnel in scenes such as monitoring centers, duty rooms and command centers, and guarantees the normal operation of on-site safety management work.
On the basis of the scheme, the invention is further improved as follows:
further, the step S1 specifically includes the following steps:
step S101, first establishing a lightweight target classification network, and then constructing a target detection network model on that basis;
step S102, acquiring a data set: data are first collected by acquiring open-source data, crawling data from the Internet and self-making related data; the collected data are then screened to obtain data relevant to the scene; finally, the screened data are annotated.
And step S103, training the constructed target detection network model with the acquired data set to obtain the trained target detection network model.
Further, the step S101 specifically includes the following steps:
step S10101, constructing a lightweight target classification network, wherein the network adopts a two-way dense structure with two branches: the first branch consists of one 1x1 convolution and one 3x3 convolution; the second branch consists of one 1x1 convolution and two 3x3 convolutions; and the first and second branches are concatenated as the output of the two-way dense structure.
Step S10102, adding two branches - a target multi-class classification sub-network and a target coordinate regression sub-network - on the basis of the constructed lightweight target classification network to obtain the target detection network model; the target multi-class classification sub-network uses a 3x3 convolution to generate multi-class probabilities based on the candidate boxes, for target multi-class classification; the target coordinate regression sub-network uses a 3x3 convolution to generate coordinate offsets based on the candidate boxes, for target coordinate regression.
Further, in step S102, the data set includes:
a classification dataset comprising an Imagenet2012 dataset for training a lightweight target classification network;
a detection data set comprising VOC2007 and VOC2012 data sets for training a target detection network model;
and (3) capturing pictures of the monitoring room on-duty personnel from the network through a crawler by using the monitoring room on-duty personnel data set, manually marking the pictures, marking out frames of the on-duty personnel, and making the frames into corresponding data sets for fine tuning training of the target detection network model.
Further, step S103 specifically includes the following steps:
step S10301, training the constructed light weight type target classification network by using the classification data set to obtain the trained light weight type target classification network;
step S10302, removing the fully connected layers of the trained lightweight target classification network and taking the remaining part as the body of the target detection network; nesting multilayer 3x3 and 1x1 convolutions so that the output size of the last layer is 1x1, selecting feature maps of sizes 38x38, 19x19, 10x10, 5x5, 3x3 and 1x1 as the inputs of the classification sub-network and the coordinate regression sub-network, and taking the sum of the classification loss and the localization loss as the loss function to obtain the target detection network model;
step S10303, training the obtained target detection network model with the detection data set to obtain a preliminarily trained target detection network model.
And step S10304, performing fine-tuning training on the preliminarily trained target detection network model with the monitoring-room on-duty personnel data set to obtain the trained target detection network model.
Further, the training the obtained target detection network model by using the detection data set includes:
after candidate frames with different sizes and aspect ratios corresponding to different feature maps are obtained, the IOU is used as a screening index, and the candidate frames are labeled to distinguish positive and negative samples;
after the positive and negative samples are screened out, the positive and negative samples are input into a target detection network model for training, a back propagation algorithm is used for training a classification sub-network and a regression sub-network according to the loss value of the loss function, and an mAP index is used as a performance measurement index of the target detection network to obtain the preliminarily trained target detection network model.
Further, the step S2 includes the steps of:
step S201, reading the multiple video streams of the same scene from different angles, and parsing and preprocessing them;
step S202, detecting each parsed frame of image with the trained target detection network model to obtain one or more human body boxes, and determining the position information and number of the on-duty personnel in each video from the positions and number of the human body boxes;
and step S203, performing matching and tracking on the images of adjacent frames in each video with a tracking algorithm, obtaining the motion trajectory information of the on-duty personnel in each video from the positions of the human body boxes in adjacent frames, and calculating the motion amplitude of the on-duty personnel in the scene with a background modeling algorithm combined with the personnel position information.
Further, the matching and tracking of the images of the adjacent frames in each video by using a tracking algorithm includes:
and arranging a corresponding tracker for each human body frame detected in the current frame, detecting one or more new human body frames when the next frame image in the video is read, matching the trackers and the human body frames, thereby completing the tracking of multiple targets in adjacent frames, and generating a tracker state list in real time, wherein the list comprises the position of the tracker in the current frame, the number of the survival frames of the tracker, and whether the tracker is updated in the current frame.
Further, the integration processing includes: searching and matching according to the feature point information in the multi-channel video scenes, calculating the positional relationship among the cameras, and integrating the video contents from different angles and the tracker state lists according to the calculated spatial rotation and translation between the cameras to obtain the aggregated personnel information.
In another aspect, a system for detecting violation of duty personnel is provided, including:
the target detection network model building and training module is used for building a target detection network model and training the target detection network model by using a data set to obtain a trained target detection network model;
the target detection unit module is used for acquiring in real time multiple videos of the same scene from different angles, and performing multi-target detection and tracking on each video with the trained target detection network model and a target tracking algorithm to obtain the personnel information in each video;
the abnormal behavior judgment module is used for integrating the personnel information in each path of video acquired in the target detection unit module according to the personnel matching relationship among the multiple paths of videos; and judging whether the behavior of the person on duty is abnormal or not according to the integrated data.
The invention has the following beneficial effects: the system performs intelligent video analysis with environment camera video as the input video source, supports multi-channel video source input and fusion analysis, greatly improves the accuracy of violation behavior identification by means of deep learning and data modeling, realizes real-time and accurate monitoring of the on-duty behavior of operators in scenes such as monitoring centers, duty rooms and command centers, and, by detecting on-duty violations accurately and in time, guarantees the normal operation of on-site safety management work and provides a basis for retrospective tracing when safety incidents occur.
In the invention, the technical schemes can be combined with each other to realize more preferable combination schemes. Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The drawings are only for purposes of illustrating particular embodiments and are not to be construed as limiting the invention, wherein like reference numerals are used to designate like parts throughout.
FIG. 1 is a flowchart of a method for detecting violation of an attendant according to an embodiment of the present invention;
FIG. 2 is a block diagram of a target detection network model in an embodiment of the present invention;
fig. 3 is a diagram of an application scenario and a real-time effect in an embodiment of the present invention.
Detailed Description
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate preferred embodiments of the invention and together with the description, serve to explain the principles of the invention and not to limit the scope of the invention.
Example 1
The invention discloses a specific embodiment of a method for detecting violation behaviors of staff on duty, which comprises the following steps as shown in figure 1:
step S1, constructing a target detection network model and training it with a data set to obtain a trained target detection network model;
step S2, acquiring in real time multiple videos of the same scene from different angles, and performing multi-target detection and tracking on each video with the trained target detection network model and a target tracking algorithm to obtain the personnel information in each video;
step S3, integrating the acquired personnel information in each path of video according to the personnel matching relationship among the multiple paths of videos; and judging whether the behavior of the person on duty is abnormal or not according to the integrated data.
Compared with the prior art, the method for detecting the violation behaviors of the on-duty personnel provided by the embodiment carries out intelligent video analysis by taking the environment camera video as an input video source, supports multi-channel video source input and fusion analysis, greatly improves the accuracy of violation behavior identification by means of deep learning and data modeling, and realizes real-time and accurate monitoring of the violation behaviors of the on-duty personnel in scenes such as a monitoring center, an on-duty room, a command center and the like.
Specifically, in step S1, a target detection network model is constructed and trained by using a data set, so as to obtain a trained target detection network model; the method specifically comprises the following steps:
step S101, constructing the target detection network model. In order to improve the accuracy and real-time performance of the target detection network model and reduce the consumption of computing resources, in this embodiment a lightweight target classification network is first established, and the target detection network is built on that basis, specifically:
step S10101, constructing a lightweight target classification network, wherein the network adopts a two-way dense structure with two branches: the first branch consists of a 1x1 convolution and a 3x3 convolution; the second branch consists of a 1x1 convolution and two 3x3 convolutions; and the first and second branches are concatenated as the output of the two-way dense structure.
It should be noted that a lightweight network has both few parameters and few floating-point operations, so convolution kernels with large sizes are not suitable; this embodiment uses 3x3 and 1x1 convolutions. In addition, targets in the classification task vary in size, so the network needs different receptive fields, and a two-way dense structure is designed, as shown in fig. 2. The structure has two branches: branch A consists of a 1x1 convolution and a 3x3 convolution, and the 3x3 convolution classifies and detects small objects well; branch B consists of a 1x1 convolution and two 3x3 convolutions, equivalent to one 5x5 convolution, and classifies and detects large objects well. Finally, branches A and B are concatenated as the output of the two-way dense structure.
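For concreteness, the following is a minimal PyTorch sketch of the two-way dense block described above. The channel widths, the BatchNorm/ReLU placement, and the concatenation of the block input into the output (suggested by the dense-connection discussion that follows) are illustrative assumptions, not values fixed by the text:

```python
import torch
import torch.nn as nn

class TwoWayDenseBlock(nn.Module):
    """Two-branch dense block: branch A (1x1 -> 3x3) suits small objects;
    branch B (1x1 -> 3x3 -> 3x3, an effective 5x5 receptive field) suits
    large objects. Both branch outputs are concatenated with the input."""

    def __init__(self, in_ch: int, growth: int = 32):
        super().__init__()
        half = growth // 2
        self.branch_a = nn.Sequential(
            nn.Conv2d(in_ch, half, 1, bias=False),
            nn.BatchNorm2d(half), nn.ReLU(inplace=True),
            nn.Conv2d(half, half, 3, padding=1, bias=False),
        )
        self.branch_b = nn.Sequential(
            nn.Conv2d(in_ch, half, 1, bias=False),
            nn.BatchNorm2d(half), nn.ReLU(inplace=True),
            nn.Conv2d(half, half, 3, padding=1, bias=False),
            nn.BatchNorm2d(half), nn.ReLU(inplace=True),
            nn.Conv2d(half, half, 3, padding=1, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenating the input realizes the skip connection that lets
        # gradients flow directly to earlier layers (feature reuse).
        return torch.cat([x, self.branch_a(x), self.branch_b(x)], dim=1)
```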
Deep networks may suffer from vanishing gradients and be difficult to train, so skip connections between different convolutional layers are designed: they favor feature reuse, reduce the number of network parameters and the consumption of computing resources, and strengthen the flow of information in the network. A preceding convolutional layer can connect directly to a later convolutional layer, so the gradient can propagate directly, effectively solving the vanishing-gradient problem. A 1x1 convolution is generally used for dimensionality reduction, but dimensionality reduction can destroy information, so in this lightweight classification network 1x1 convolutions are used for dimensionality reduction only where they connect to a pooling layer. A fully connected layer is attached at the end, and the network uses the softmax loss function

$$L = -\log \frac{e^{f_{y_i}}}{\sum_j e^{f_j}}$$

where $f_{y_i}$ is the output value of the network on the true category.
Step S10102, building the target detection network: two branches - a target multi-class classification sub-network and a target coordinate regression sub-network - are added on the basis of the built lightweight target classification network, used respectively for target multi-class classification and target coordinate regression, with target boxes of different sizes and aspect ratios serving as the references of the coordinate regression sub-network;
wherein the target multi-class classification sub-network uses a 3x3 convolution, and the number of output channels is num_priors x classes, where num_priors is the number of candidate boxes and classes is the number of categories plus 1, including the background category. This sub-network does not predict a confidence score for the candidate boxes, so a background category must be added to the original categories to distinguish whether a candidate box is background. The sub-network uses a softmax loss function:

$$L_{conf}(x, c) = -\sum_{i \in Pos} x_{ij}^{p} \log \hat{c}_i^{p} - \sum_{i \in Neg} \log \hat{c}_i^{0}, \qquad \hat{c}_i^{p} = \frac{\exp(c_i^{p})}{\sum_{p} \exp(c_i^{p})}$$

where $x_{ij}^{p} \in \{0, 1\}$ indicates whether predicted box i matches real box j with respect to category p: the higher the predicted probability of category p, the smaller the loss. If a predicted box is a negative sample, i.e. there is no object in it, it is a background region, and the higher the probability of predicting background, the smaller the loss.
Target coordinate regression sub-network: using a 3x3 convolution, the number of output channels is num_priors x 4, where num_priors is the number of candidate boxes. For each candidate box, the network predicts the offsets of the 4 coordinates x, y, w, h relative to the candidate box coordinates, where x, y are the center-point coordinates of the target box and w, h are its width and height:

$$t_x = \frac{P_x - A_x}{A_w}, \quad t_y = \frac{P_y - A_y}{A_h}, \quad t_w = \log \frac{P_w}{A_w}, \quad t_h = \log \frac{P_h}{A_h}$$

where $P_x, P_y, P_w, P_h$ are the center-point coordinates, width and height of the predicted box, and $A_x, A_y, A_w, A_h$ are the center-point coordinates, width and height of the candidate box.
The real-box coordinates annotated in the picture label are xmin, ymin, xmax, ymax; they are first converted into the 4 coordinates x, y, w, h and then into the offsets of the real box relative to the candidate box coordinates:

$$\hat{t}_x = \frac{G_x - A_x}{A_w}, \quad \hat{t}_y = \frac{G_y - A_y}{A_h}, \quad \hat{t}_w = \log \frac{G_w}{A_w}, \quad \hat{t}_h = \log \frac{G_h}{A_h}$$

where $G_x, G_y, G_w, G_h$ are the center-point coordinates, width and height of the real box.
This sub-network uses the smooth L1 loss function, applied to the difference between the predicted and real offsets:

$$\text{smooth}_{L1}(x) = \begin{cases} 0.5 x^2 & |x| < 1 \\ |x| - 0.5 & \text{otherwise} \end{cases}$$
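In code, the encoding, decoding and smooth L1 steps look as follows; this is a NumPy sketch under the definitions above, with boxes given as center-size tuples:

```python
import numpy as np

def encode_box(gt, anchor):
    """Turn a real box (Gx, Gy, Gw, Gh) into offsets relative to a
    candidate box (Ax, Ay, Aw, Ah), as in the formulas above."""
    gx, gy, gw, gh = gt
    ax, ay, aw, ah = anchor
    return np.array([(gx - ax) / aw, (gy - ay) / ah,
                     np.log(gw / aw), np.log(gh / ah)])

def decode_box(offsets, anchor):
    """Invert encode_box: recover the predicted box (Px, Py, Pw, Ph)."""
    tx, ty, tw, th = offsets
    ax, ay, aw, ah = anchor
    return np.array([ax + tx * aw, ay + ty * ah,
                     aw * np.exp(tw), ah * np.exp(th)])

def smooth_l1(diff):
    """Smooth L1 loss of the offset residuals (predicted minus real)."""
    a = np.abs(diff)
    return np.where(a < 1.0, 0.5 * a ** 2, a - 0.5).sum()
```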
it should be emphasized that, in this embodiment, the target detection network model is different from the conventional idea of generating the region suggestions first, and then classifying the region suggestions to detect multiple targets, where the network directly generates multiple class probabilities and coordinate offsets based on candidate frames, the candidate frames are obtained by calculating feature maps with different sizes, taking the feature map size H × W as an example, at each (xi, yi) position, candidate frames with different sizes and aspect ratios are generated, the aspect ratio set is {1,2,1/2,3,1/3}, the size needs to be determined according to the feature maps and the size of the original image, the size of the candidate frames generated by the feature maps with different sizes is different, the candidate frame mechanism is similar to a sliding window, sliding of windows with different sizes is performed at each position on the feature map, and the number of required feature maps can be selected according to actual needs, preferably, the candidate frames are calculated by using 6 feature maps, and then the candidate frames of the 6 feature maps are cascaded, so that the detection efficiency can be ensured, and meanwhile, the ideal operation effect is ensured. The loss function of the target detection network is the sum of the classification loss function and the positioning loss function:
$$L = \frac{1}{N} \left( L_{conf} + L_{loc} \right)$$

where N is the number of samples, $L_{loc}$ is the coordinate-regression loss, and $L_{conf}$ is the classification loss.
Step S102, acquiring a data set: training data are obtained from open-source academic data sets, data collected by web crawler, and self-made data, and are used to train the constructed target detection network model. Specifically, data are first collected; the collected data are then screened to obtain data relevant to the scene; finally, the screened data are annotated. The data set is obtained by acquiring open-source data, crawling data from the Internet and self-making related data, specifically:
acquiring a classification data set: the Imagenet2012 data set is used as the data set for training the lightweight target classification network; preferably, the training set contains 1.28 million pictures, the validation set 50,000 pictures, and the test set 100,000 pictures;
acquiring a detection data set: the VOC2007 and VOC2012 data sets are used as the data set for training the target detection network model; preferably, the training set contains 17,000 pictures and the test set 4,952 pictures;
acquiring a monitoring-room on-duty personnel data set: pictures of on-duty personnel in monitoring rooms are crawled from the Internet and made into a corresponding data set for fine-tuning the target detection network model; preferably, the training set contains 20,000 pictures and the test set 5,000 pictures. The pictures are manually annotated by drawing bounding boxes around the on-duty personnel; the annotation format is the same as that of the VOC data sets, and the picture labels are stored in XML format.
And step S103, training the target detection model constructed in step S101 with the data set collected in step S102 to obtain the trained target detection model.
Step S10301, training the lightweight target classification network: the Imagenet2012 data set in the classification data set is used for training to obtain the trained lightweight target classification network.
Step S10302, removing the fully connected layers of the trained lightweight target classification network and taking the remaining part as the body of the target detection network; nesting multilayer 3x3 and 1x1 convolutions so that the output size of the last layer is 1x1, selecting feature maps of sizes 38x38, 19x19, 10x10, 5x5, 3x3 and 1x1 as the inputs of the classification sub-network and the coordinate regression sub-network, and taking the sum of the classification loss and the localization loss as the loss function to obtain the target detection network model;
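For illustration, a sketch of how per-scale classification and regression heads could be attached to the six selected feature maps; the backbone producing those maps is abstracted away, the channel counts follow the num_priors x classes and num_priors x 4 rules stated earlier, and the function name is hypothetical:

```python
import torch.nn as nn

def build_heads(feat_channels, num_priors, num_classes):
    """One pair of 3x3 convolutions per feature map: class scores and box
    offsets. feat_channels lists the channel counts of the 38x38, 19x19,
    10x10, 5x5, 3x3 and 1x1 maps; num_priors lists the candidate boxes per
    position; num_classes already includes the background category."""
    cls_heads, reg_heads = nn.ModuleList(), nn.ModuleList()
    for c, k in zip(feat_channels, num_priors):
        cls_heads.append(nn.Conv2d(c, k * num_classes, 3, padding=1))
        reg_heads.append(nn.Conv2d(c, k * 4, 3, padding=1))
    return cls_heads, reg_heads
```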
step S10303, training the target detection network with the VOC2007 and VOC2012 data in the detection data set. During training, after candidate boxes of different sizes and aspect ratios are obtained from the feature maps of different sizes, the candidate boxes are labeled to distinguish positive and negative samples, with the IOU (Intersection over Union) as the screening index; specifically, an IOU is calculated between each candidate box and every real object box in the picture labels. Candidate boxes with IOU > 0.5 may be set as positive samples and the rest as negative samples, but the number of negative samples is then excessive; in this embodiment the negative samples are sampled to achieve a positive-to-negative ratio of 1:3, so the preferred sampling strategy is to compute the softmax loss of each negative sample, sort the loss values in descending order, and take the first n as negative samples, where n is the number of negatives computed from the positive-to-negative ratio.
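A sketch of this screening and hard-negative-mining logic under the stated thresholds (IOU > 0.5 for positives, negatives ranked by their classification loss, a 1:3 positive-to-negative ratio); the helper names are illustrative:

```python
import numpy as np

def iou(a, b):
    """IOU of two boxes in (xmin, ymin, xmax, ymax) form."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def label_and_mine(anchors, gt_boxes, neg_losses, pos_iou=0.5, neg_ratio=3):
    """Mark candidate boxes with IOU > pos_iou against any real box as
    positive; among the rest, keep the neg_ratio * |positives| hardest
    negatives (largest softmax loss). gt_boxes is assumed non-empty."""
    pos = np.array([max(iou(a, g) for g in gt_boxes) > pos_iou
                    for a in anchors])
    neg_idx = np.where(~pos)[0]
    hardest = neg_idx[np.argsort(-neg_losses[neg_idx])]
    keep_neg = hardest[:neg_ratio * int(pos.sum())]
    return np.where(pos)[0], keep_neg
```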
After the positive and negative samples are screened out, they are input into the network for training; the classification sub-network and the regression sub-network are trained with a back-propagation algorithm according to the loss value of the loss function, the mAP (mean Average Precision) index is used as the performance measure of the target detection network, and the preliminary target detection network is obtained when training finishes. The required mAP value may be set according to actual conditions (such as detection accuracy requirements and processor performance).
Step S10304, fine-tuning training is performed with the monitoring-room on-duty personnel data set to obtain the trained target detection network model, so that the target detection network has good detection performance for on-duty personnel in the monitoring-room environment.
In step S2, multiple videos of the same scene from different angles are acquired in real time, and multi-target detection and tracking are performed on each video with the trained target detection network model and a target tracking algorithm to obtain the personnel information in each video, where the personnel information includes: personnel position, number of personnel, motion trajectory and motion amplitude; the step specifically comprises:
step S201, reading the video streams: multiple video streams of the same scene are read; illustratively, two RTSP video streams are read from two cameras, each stream is parsed, and the parsed frames are preprocessed - including mean subtraction and normalization of the pictures - then packaged and stored in a ring buffer for further processing;
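An illustrative sketch of this reading-and-preprocessing step; the stream URL, per-channel means, normalization constant and buffer size are assumptions for illustration, not values given in the text:

```python
import collections
import cv2
import numpy as np

MEAN = np.array([104.0, 117.0, 123.0])  # assumed per-channel means
ring = collections.deque(maxlen=64)     # ring buffer of packaged frames

def read_stream(url: str, cam_id: int) -> None:
    """Read one RTSP stream, preprocess each frame, push into the buffer."""
    cap = cv2.VideoCapture(url)         # e.g. "rtsp://<camera-ip>/stream"
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Mean subtraction and normalization before detection.
        blob = (frame.astype(np.float32) - MEAN) / 128.0
        ring.append({"cam": cam_id, "raw": frame, "blob": blob})
```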
step S202, acquiring position information and number information of on-duty personnel in each video scene by using the trained deep learning target detection network model, and recording and storing results;
specifically, each parsed frame of image is detected by the trained target detection network model to obtain one or more human body boxes, and the position information and number of the on-duty personnel in each video are determined from the positions and number of the human body boxes; different detection algorithms (e.g., SSD, YOLO, Faster RCNN) can be used for counting people, and illustratively, HOG features can be extracted and an SVM classifier used to determine whether a region contains a person.
Step S203, acquiring the motion amplitude and motion trajectory information of the on-duty personnel in the video scene with a multi-target tracking algorithm and a background modeling algorithm, combined with the position information of the on-duty personnel;
specifically, adjacent-frame images in each video are matched and tracked with a tracking algorithm, the motion trajectory information of the on-duty personnel in each video is obtained from the positions of the human body boxes in adjacent frames, and the motion amplitude of the on-duty personnel in the scene is calculated with a background modeling algorithm combined with the personnel position information.
The matching and tracking of adjacent-frame images in each video with a tracking algorithm proceeds as follows: a corresponding tracker is set for each human body box detected in the current frame; when the next frame image in the video is read, one or more new human body boxes are detected and matched against the trackers, thereby completing the tracking of multiple targets across adjacent frames; and a tracker state list is generated in real time, the list comprising each tracker's position in the current frame, its number of survival frames, and whether it was updated in the current frame.
The detected on-duty person region is tracked; different human body regions can be selected for tracking according to the characteristics of the algorithm - for example, Kalman filtering tracks the whole human body region, while the KCF algorithm tracks the head-and-shoulders or the whole region. Preferably, Kalman filtering is adopted. The Kalman filter is a recursive filter for linear systems that fuses a model prediction with an observation to obtain the output value. It operates in two stages: the first is the prediction stage, in which a linear motion model is established for the target, the target's state variables are propagated, and the covariance of the error is computed; the second is the observation-update stage, in which the Kalman gain is first calculated, the state estimate of the target is updated according to the observation, and finally the error covariance is updated; the filter iterates these stages repeatedly. Specifically, the output of the target detection network serves as the Kalman filter's observation, linear motion is modeled to obtain a state model whose output serves as the prediction, and the Kalman filter's final output is taken as the target's position in the current frame;
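A minimal sketch of such a constant-velocity Kalman filter for a box center; the state layout and the noise covariances Q and R are illustrative assumptions (in practice the whole box, not only its center, may be part of the state):

```python
import numpy as np

class BoxKalman:
    """Track a box center with constant-velocity state [cx, cy, vx, vy]."""

    def __init__(self, cx: float, cy: float):
        self.x = np.array([cx, cy, 0.0, 0.0])
        self.P = np.eye(4) * 10.0        # state covariance
        self.F = np.array([[1., 0., 1., 0.], [0., 1., 0., 1.],
                           [0., 0., 1., 0.], [0., 0., 0., 1.]])
        self.H = np.array([[1., 0., 0., 0.], [0., 1., 0., 0.]])
        self.Q = np.eye(4) * 0.01        # process noise (assumed)
        self.R = np.eye(2) * 1.0         # observation noise (assumed)

    def predict(self):
        # Stage 1: propagate the linear motion model and error covariance.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        # Stage 2: compute the Kalman gain, fuse the detector observation z,
        # then update the error covariance.
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
```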
illustratively, the current frame is denoted fn, the adjacent frame fn+1, the set of tracked targets tracks, and the set of detected target boxes dets. Specifically,
first, whether tracks is empty is checked; if it is empty, the dets of frame fn are used to initialize tracks; otherwise tracks is updated, the update algorithm being Kalman filtering.
Matching tracks and dets takes the IOU as the matching index, a match succeeding when IOU > 0.7. If a tracker fails to match, it is not immediately deleted from tracks but retained for 3 frames; if no match succeeds within those 3 frames, it is deleted from tracks. If a tracker matches successfully, it continues to be updated; and if a detection fails to match, it is used to initialize a new tracker;
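A sketch of this matching loop. A greedy highest-IOU assignment is assumed (the text does not name an assignment strategy); `iou` is the function sketched earlier, and `new_track` stands for a hypothetical constructor of a tracker object carrying a box, an update method and a missed-frame counter:

```python
def match_step(tracks, dets, iou_thresh=0.7, max_missed=3):
    """One frame of track-detection matching: IOU > iou_thresh matches,
    unmatched tracks survive up to max_missed frames, and unmatched
    detections initialize new trackers."""
    if not tracks:
        return [new_track(d) for d in dets]   # initialize from detections
    unmatched = list(range(len(dets)))
    for t in tracks:
        best, best_iou = None, iou_thresh
        for j in unmatched:
            v = iou(t.box, dets[j])
            if v > best_iou:
                best, best_iou = j, v
        if best is not None:
            t.update(dets[best])              # refresh with the matched box
            t.missed = 0
            unmatched.remove(best)
        else:
            t.missed += 1                     # keep for up to max_missed frames
    tracks = [t for t in tracks if t.missed <= max_missed]
    tracks += [new_track(dets[j]) for j in unmatched]
    return tracks
```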
in addition, whether people move or not needs to be calculated at any time in order to observe whether people sleep or not, and meanwhile, besides the movement of a human body, factors capable of enabling pixel color values to change in a monitoring picture are many, such as illumination, flying mosquitoes and the like. Calculating the action amplitude of the person on duty in the scene by a background modeling algorithm in combination with the position information of the person on duty; among them, the background modeling method may use a mixed gaussian model (GMM), a VIBE algorithm, or the like. In the present embodiment, it is assumed that each influencing factor is a gaussian distribution, so a gaussian mixture model is used to model the color value of each pixel in the surveillance video. In the model, each pixel point is formed by mixing and superposing a plurality of Gaussian distributions, and the weight distribution parameters of the Gaussian distributions are updated along with time. When the deviation of the new value of a pixel from the mean value of the Gaussian mixture model is within a certain threshold (which can be determined according to experiments and video contents), the pixel is considered as a background and no motion occurs. Otherwise, it is foreground and there is movement. Further, it is possible to determine whether the person has moved in combination with the result of detecting the position of the person in the adjacent frame.
In step S3, according to the person matching relationship among multiple videos, integrating the obtained person information in each video; and judging whether the behavior of the person on duty is abnormal or not according to the integrated data. The method specifically comprises the following steps:
step S301, integrating the detected multi-channel video information;
for the multiple videos of the same scene from different angles, search and matching are performed according to the feature point information in the scene, the positional relationship among the cameras is calculated, and the video contents from different angles and the tracker state lists are integrated according to the calculated spatial rotation and translation between the cameras, obtaining the aggregated personnel information (number of personnel, motion trajectories, motion amplitudes, and the like).
And step S302, judging whether the behavior of the person on duty is abnormal or not according to a preset rule by using the integrated information.
It should be noted that the abnormal behaviors comprise sleeping at the post (the on-duty person sleeps on duty) and being off post (the on-duty person leaves the post): if an on-duty person remains still for 30 minutes, the behavior is considered sleeping; if the number of on-duty personnel decreases, the behavior is considered off duty.
Specifically, for sleeping behavior, the tracker state list acquired by the tracker is checked and the number of survival frames of each tracker examined; if the number of survival frames is greater than or equal to 30 x 25, the on-duty person tracked by that tracker is considered to be sleeping, and an alarm is sent to the related system, the alarm information comprising the on-duty person's ID, the bounding-box position, the alarm time, and so on. For off-duty behavior, abnormality is judged by counting the on-duty personnel: the tracker state list is checked for changes between the current frame and the adjacent frame to learn whether the number of trackers has changed; if the number of trackers decreases, off-duty behavior of the on-duty personnel is judged to have occurred and the system sends an alarm, the alarm information comprising the on-duty person's ID, the current number of on-duty personnel in the monitoring room, and so on.
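Both rules operate directly on the tracker state list; the sketch below uses the thresholds given in the text (a survival-frame count of 30 x 25 for sleeping, a drop in the tracker count for off-duty), while the dictionary field names are assumptions:

```python
SLEEP_FRAMES = 30 * 25  # survival-frame threshold given in the text

def judge(state_list, prev_count):
    """Apply the sleeping and off-duty rules to the current tracker states;
    each state is assumed to carry id, box, timestamp and alive_frames."""
    alarms = []
    for t in state_list:
        if t["alive_frames"] >= SLEEP_FRAMES:
            alarms.append({"type": "sleeping", "id": t["id"],
                           "box": t["box"], "time": t["timestamp"]})
    if len(state_list) < prev_count:        # a tracker disappeared
        alarms.append({"type": "off-duty", "count": len(state_list)})
    return alarms
```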
It should be noted that, since a single surveillance video may not completely cover the monitored area - leaving blind spots and causing omissions - this embodiment obtains multiple videos, performs target detection and tracking on each, and finally searches and matches videos of the same scene from different angles according to the feature point information in the scene, calculates the positional relationship among the cameras, and integrates the video contents and analysis results from different angles according to the calculated spatial rotation and translation between the cameras in order to judge violation behaviors. Alternatively, the acquired videos can be stitched into one large video, which is then analyzed and detected with the trained deep-learning target detection model to judge violations. In an actual application scene, different strategies - merging multiple videos into one large video (co-located probes) or processing each video independently - can be chosen according to actual conditions and requirements (such as the spatial distribution of the probes and computational-efficiency requirements).
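As an illustration of the feature-point search and matching used above to relate the camera views, the sketch below matches ORB keypoints between two views of the same scene and estimates a homography relating their image planes; recovering the actual rotation and translation between the cameras additionally requires camera intrinsics, which are omitted here. ORB is an assumed choice of feature; the text does not name one:

```python
import cv2
import numpy as np

def relate_views(img_a, img_b):
    """Estimate a homography mapping points in view A to view B from
    matched ORB keypoints (RANSAC rejects bad matches); assumes both
    views yield descriptors."""
    orb = cv2.ORB_create(1000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b),
                     key=lambda m: m.distance)[:100]   # keep best matches
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```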
Example 2
Disclosed is a duty personnel violation detection system, comprising:
the target detection network model building and training module is used for building a target detection network model and training the target detection network model by using a data set to obtain a trained target detection network model;
the target detection unit module is used for acquiring in real time multiple videos of the same scene from different angles, and performing multi-target detection and tracking on each video with the trained target detection network model and a target tracking algorithm to obtain the personnel information in each video;
the abnormal behavior judgment module is used for integrating the personnel information in each path of video acquired in the target detection unit module according to the personnel matching relationship among the multiple paths of videos; and judging whether the behavior of the person on duty is abnormal or not according to the integrated data.
Compared with the prior art, the system for detecting the violation of the operator on duty provided by the embodiment performs intelligent video analysis by taking the environment camera video as an input video source, supports multi-channel video source input and fusion analysis, greatly improves the accuracy of violation identification through deep learning and data modeling means, and realizes real-time and accurate monitoring of the violation of the operator on duty in scenes such as a monitoring center, an on duty room, a command center and the like.
Specifically, the target detection network model construction and training module comprises: the system comprises a target detection network model building unit, a data set building unit and a target detection network model training unit;
the target detection network model building unit is used for building the target detection network model and is divided into a lightweight target classification network building subunit and a target detection network model building subunit. Specifically, in the lightweight target classification network building subunit, the classification network adopts a two-way dense structure with two branches: the first branch consists of one 1x1 convolution and one 3x3 convolution; the second branch consists of one 1x1 convolution and two 3x3 convolutions; and the first and second branches are concatenated as the output of the two-way dense structure.
The target detection network model building subunit is used for adding two branches - a target multi-class classification sub-network and a target coordinate regression sub-network - on the basis of the lightweight target classification network built by the lightweight target classification network building subunit to obtain the target detection network model; the target multi-class classification sub-network uses a 3x3 convolution to generate multi-class probabilities based on the candidate boxes, for target multi-class classification; the target coordinate regression sub-network uses a 3x3 convolution to generate coordinate offsets based on the candidate boxes, for target coordinate regression.
The data set establishing unit is used for first acquiring data, then screening it to obtain data relevant to the scene, and finally annotating it. The data acquisition stage includes acquiring open-source data, crawling data from the Internet, and self-making related data, and covers: a classification data set, comprising the Imagenet2012 data set, for training the lightweight target classification network; a detection data set, comprising the VOC2007 and VOC2012 data sets, for training the target detection network model; and a monitoring-room on-duty personnel data set, in which pictures of on-duty personnel are crawled from the web, manually annotated with bounding boxes of the on-duty personnel, and made into a corresponding data set for fine-tuning the target detection network model.
And the target detection network model training unit is used for training the network constructed in the target detection network model building subunit with the data sets from the data set establishing unit. Specifically, the constructed lightweight target classification network is trained with the classification data set to obtain the trained lightweight target classification network; the fully connected layers of the trained network are removed and the remaining part taken as the body of the target detection network; multilayer 3x3 and 1x1 convolutions are nested so that the output size of the last layer is 1x1, feature maps of sizes 38x38, 19x19, 10x10, 5x5, 3x3 and 1x1 are selected as the inputs of the classification sub-network and the coordinate regression sub-network, and the sum of the classification loss and the localization loss is taken as the loss function, obtaining the target detection network model; the obtained model is trained with the detection data set to obtain the preliminarily trained target detection network model, which is then fine-tuned with the monitoring-room on-duty personnel data set to obtain the trained target detection network model.
The target detection module includes: a video reading unit, which reads the multiple video streams of the same scene from different angles and parses and preprocesses them;
a position information and number information detection unit, which detects each parsed frame of image with the trained target detection network model to obtain one or more human body boxes, and determines the position information and number of the on-duty personnel in each video from the positions and number of the human body boxes;
and a motion trajectory and motion amplitude detection unit, which matches and tracks adjacent-frame images in each video with a tracking algorithm, obtains the motion trajectory information of the on-duty personnel in each video from the positions of the human body boxes in adjacent frames, and calculates the motion amplitude of the on-duty personnel in the scene with a background modeling algorithm combined with the personnel position information. Specifically, a corresponding tracker is set for each human body box detected in the current frame; when the next frame image in the video is read, one or more new human body boxes are detected and matched against the trackers, completing the tracking of multiple targets across adjacent frames; and a tracker state list is generated in real time, the list comprising each tracker's position in the current frame, its number of survival frames, and whether it was updated in the current frame.
The abnormal behavior judgment module is used for integrating the multi-channel video data and detection information and judging the behavior of on-duty personnel according to preset rules to identify abnormal behaviors. It comprises: an integration processing unit, which searches and matches according to the feature point information in the multi-channel video scenes, calculates the positional relationship among the cameras, and integrates the video contents from different angles and the tracker state lists according to the calculated spatial rotation and translation between the cameras to obtain the aggregated personnel information.
The abnormal behavior judgment unit comprises: a sleeping-behavior judgment subunit, which checks the number of survival frames in the tracker state list acquired by the tracker and, if it is greater than or equal to 30 x 25, considers that the on-duty person tracked by that tracker is sleeping, whereupon the system raises an alarm containing the on-duty person's information, the bounding-box position, the alarm time, and so on; and an off-duty behavior judgment subunit, which judges abnormality by counting the on-duty personnel: it checks the tracker state list for changes between the current and adjacent frames to learn whether the number of trackers has changed, and if the number decreases, judges that off-duty behavior has occurred and raises an alarm containing the on-duty person's ID, the current number of on-duty personnel in the monitoring room, and so on.
It should be noted that the technical features of the detection system in this embodiment correspond to those of the detection method in embodiment 1; identical parts are cross-referenced and are not described again.
Those skilled in the art will appreciate that all or part of the flow of the methods in the above embodiments may be implemented by hardware associated with computer program instructions, and the program may be stored in a computer-readable storage medium, which may be a magnetic disk, an optical disk, a read-only memory, or a random access memory.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention.

Claims (10)

1. A method for detecting violation behaviors of staff on duty is characterized by comprising the following steps:
step S1, constructing a target detection network model and training it with a data set to obtain a trained target detection network model;
step S2, acquiring in real time multiple videos of the same scene from different angles, and performing multi-target detection and tracking on each video with the trained target detection network model and a target tracking algorithm to obtain the personnel information in each video;
step S3, integrating the acquired personnel information in each path of video according to the personnel matching relationship among the multiple paths of videos; and judging whether the behavior of the person on duty is abnormal or not according to the integrated data.
2. The method according to claim 1, wherein the step S1 specifically comprises the steps of:
step S101, firstly establishing a lightweight target classification network, and then constructing a target detection network model on the basis;
step S102, acquiring a data set: firstly, acquiring data by acquiring open source data, crawling data from the internet and self-making related data, and then screening the acquired data to obtain data related to a scene; finally, marking the screened data;
and step S103, training the constructed target detection network model by using the acquired data set to obtain the trained target detection network model.
3. The method according to claim 2, wherein the step S101 specifically includes the steps of:
step S10101, constructing a lightweight target classification network, wherein the network adopts a two-way dense structure with two branches: the first branch consists of one 1x1 convolution and one 3x3 convolution; the second branch consists of one 1x1 convolution and two 3x3 convolutions; and the first and second branches are concatenated as the output of the two-way dense structure.
Step S10102, adding two branches - a target multi-class classification sub-network and a target coordinate regression sub-network - on the basis of the constructed lightweight target classification network to obtain the target detection network model; the target multi-class classification sub-network uses a 3x3 convolution to generate multi-class probabilities based on the candidate boxes, for target multi-class classification; the target coordinate regression sub-network uses a 3x3 convolution to generate coordinate offsets based on the candidate boxes, for target coordinate regression.
4. The method of claim 3, wherein in step S102, the data set comprises:
a classification dataset comprising an Imagenet2012 dataset for training a lightweight target classification network;
a detection data set comprising VOC2007 and VOC2012 data sets for training a target detection network model;
and (3) capturing pictures of the monitoring room on-duty personnel from the network through a crawler by using the monitoring room on-duty personnel data set, manually marking the pictures, marking out frames of the on-duty personnel, and making the frames into corresponding data sets for fine tuning training of the target detection network model.
5. The method according to claim 4, wherein step S103 specifically comprises the following steps:
step S10301, training the constructed light weight type target classification network by using the classification data set to obtain the trained light weight type target classification network;
step S10302, removing the fully connected layers of the trained lightweight target classification network and taking the remaining part as the body of the target detection network; nesting multilayer 3x3 and 1x1 convolutions so that the output size of the last layer is 1x1, selecting feature maps of sizes 38x38, 19x19, 10x10, 5x5, 3x3 and 1x1 as the inputs of the classification sub-network and the coordinate regression sub-network, and taking the sum of the classification loss and the localization loss as the loss function to obtain the target detection network model;
step S10303, training the obtained target detection network model with the detection data set to obtain a preliminarily trained target detection network model;
and step S10304, performing fine-tuning training on the preliminarily trained target detection network model with the monitoring-room on-duty personnel data set to obtain the trained target detection network model.
6. The method of claim 5, wherein training the obtained target detection network model using the detection data set comprises:
after candidate frames with different sizes and aspect ratios corresponding to different feature maps are obtained, the IOU is used as a screening index, and the candidate frames are labeled to distinguish positive and negative samples;
after the positive and negative samples are screened out, the positive and negative samples are input into a target detection network model for training, a back propagation algorithm is used for training a classification sub-network and a regression sub-network according to the loss value of the loss function, and an mAP index is used as a performance measurement index of the target detection network to obtain the preliminarily trained target detection network model.
7. The method according to claim 6, wherein step S2 comprises the steps of:
Step S201, reading multi-channel video streams capturing the same scene from different angles, and parsing and preprocessing the multi-channel video streams;
Step S202, detecting each parsed frame with the trained target detection network model to obtain one or more human-body boxes, and determining the position information and the number of on-duty personnel in each video channel from the positions and number of the human-body boxes;
Step S203, matching and tracking adjacent frames in each video channel with a tracking algorithm, deriving the movement trajectory of the on-duty personnel in each channel from the positions of the human-body boxes in adjacent frames, and computing the action amplitude of the on-duty personnel in the scene with a background modeling algorithm combined with the personnel position information (a pipeline sketch follows this claim).
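A hedged sketch of the per-channel pipeline in steps S201-S203 using OpenCV: decode one stream, detect persons per frame, and estimate action amplitude from a background model restricted to the detected boxes. `detector` is a hypothetical callable standing in for the trained target detection network model.

```python
# Per-channel decode -> detect -> background-model pipeline; `detector` is a
# hypothetical callable returning integer [x1, y1, x2, y2] person boxes.
import cv2

def process_channel(url, detector):
    """Yields (box, action_amplitude) pairs for one camera angle."""
    cap = cv2.VideoCapture(url)                  # one video channel
    bg = cv2.createBackgroundSubtractorMOG2()    # background modeling
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        boxes = detector(frame)                  # human-body boxes per frame
        fg = bg.apply(frame)                     # foreground (motion) mask
        for x1, y1, x2, y2 in boxes:
            roi = fg[y1:y2, x1:x2]
            # Fraction of moving pixels inside the box serves as a crude
            # action-amplitude measure for that on-duty person.
            amplitude = float((roi > 0).mean()) if roi.size else 0.0
            yield (x1, y1, x2, y2), amplitude
    cap.release()
```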
8. The method according to claim 7, wherein matching and tracking adjacent frames in each video with a tracking algorithm comprises:
assigning a corresponding tracker to each human-body box detected in the current frame; when the next frame of the video is read, detecting one or more new human-body boxes and matching the trackers against them, thereby completing multi-target tracking across adjacent frames; and generating a tracker state list in real time, the list comprising each tracker's position in the current frame, the number of frames the tracker has survived, and whether the tracker was updated in the current frame (a matching sketch follows).
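A hedged sketch of the tracker state list, using greedy IOU matching between existing trackers and newly detected boxes. The matching threshold and greedy strategy are assumptions; the claim specifies only the list fields (current position, frames survived, updated-this-frame flag).

```python
# Tracker state list and greedy IOU matching against the next frame's boxes;
# `iou_fn` is a pairwise box-IOU function such as the one sketched above.
from dataclasses import dataclass

@dataclass
class TrackerState:
    box: tuple            # tracker position in the current frame (x1, y1, x2, y2)
    age: int = 1          # number of frames the tracker has survived
    updated: bool = True  # whether the tracker was updated in the current frame

def update_trackers(trackers, detections, iou_fn, thresh=0.3):
    for t in trackers:
        t.updated = False
    unmatched = list(detections)
    for t in trackers:
        best, best_iou = None, thresh
        for d in unmatched:
            score = iou_fn(t.box, d)
            if score > best_iou:
                best, best_iou = d, score
        if best is not None:
            t.box, t.updated = best, True
            unmatched.remove(best)
        t.age += 1
    # Every detection left unmatched starts a new tracker.
    trackers.extend(TrackerState(box=d) for d in unmatched)
    return trackers
```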
9. The method of claim 8, wherein performing the integration process comprises: searching for and matching feature points across the multi-channel video scenes; computing the positional relation among the multi-channel cameras; and integrating the video content from the different angles together with the tracker state lists according to the computed spatial rotations and translations between the cameras, to obtain the aggregated personnel information (a registration sketch follows).
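A hedged sketch of the cross-camera registration: ORB feature matching between two camera views, then recovering the relative rotation R and translation t from the essential matrix. The intrinsic matrix K is a placeholder, and the patent does not specify the feature type or solver; this is one standard way to obtain the rotation/translation relation the claim describes.

```python
# Feature-point matching between two camera views and relative pose recovery.
import cv2
import numpy as np

def relative_pose(img_a, img_b, K):
    """K: 3x3 camera intrinsic matrix (assumed known, e.g. from calibration)."""
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K, mask=mask)
    return R, t   # rotation and translation from camera A to camera B
```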
10. An on-duty personnel violation detection system, comprising:
a target detection network model building and training module, for building a target detection network model and training it with the datasets to obtain a trained target detection network model;
a target detection unit module, for acquiring, in real time, multiple video channels of the same scene from different angles, and performing multi-target detection and tracking on each channel with the trained target detection network model and a target tracking algorithm to obtain the personnel information in each channel;
and an abnormal behavior judgment module, for integrating the personnel information acquired from each video channel by the target detection unit module according to the personnel matching relations among the channels, and judging from the integrated data whether the behavior of the on-duty personnel is abnormal (a module-level sketch follows the claims).
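A structural sketch of the three modules in claim 10 and how they compose; all class and method names are illustrative, not taken from the patent.

```python
# Illustrative module skeletons matching claim 10's three components.
class ModelBuildingAndTrainingModule:
    def build_and_train(self, datasets):
        """Constructs the target detection network and runs the staged training."""
        ...

class TargetDetectionUnitModule:
    def __init__(self, model, tracker):
        self.model, self.tracker = model, tracker

    def analyze(self, streams):
        """Runs detection and tracking per channel; returns per-channel personnel info."""
        ...

class AbnormalBehaviorJudgmentModule:
    def judge(self, per_channel_info, camera_relations):
        """Fuses the channels via the camera relations and flags abnormal behavior."""
        ...
```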
CN201811583373.0A 2018-12-24 2018-12-24 Method and system for detecting violation behaviors of staff on duty Active CN109711320B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811583373.0A CN109711320B (en) 2018-12-24 2018-12-24 Method and system for detecting violation behaviors of staff on duty

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811583373.0A CN109711320B (en) 2018-12-24 2018-12-24 Method and system for detecting violation behaviors of staff on duty

Publications (2)

Publication Number Publication Date
CN109711320A true CN109711320A (en) 2019-05-03
CN109711320B CN109711320B (en) 2021-05-11

Family

ID=66256218

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811583373.0A Active CN109711320B (en) 2018-12-24 2018-12-24 Method and system for detecting violation behaviors of staff on duty

Country Status (1)

Country Link
CN (1) CN109711320B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110969645A (en) * 2019-11-28 2020-04-07 北京影谱科技股份有限公司 Unsupervised abnormal track detection method and unsupervised abnormal track detection device for crowded scenes

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101854516A (en) * 2009-04-02 2010-10-06 北京中星微电子有限公司 Video monitoring system, video monitoring server and video monitoring method
CN102629385A (en) * 2012-02-28 2012-08-08 中山大学 Object matching and tracking system based on multiple camera information fusion and method thereof
US20180170730A1 (en) * 2015-03-06 2018-06-21 Walmart Apollo, Llc Systems, devices and methods of controlling motorized transport units in fulfilling product orders
US20170286774A1 (en) * 2016-04-04 2017-10-05 Xerox Corporation Deep data association for online multi-class multi-object tracking
CN107871111A (en) * 2016-09-28 2018-04-03 苏宁云商集团股份有限公司 A kind of behavior analysis method and system
CN108416250A (en) * 2017-02-10 2018-08-17 浙江宇视科技有限公司 Demographic method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Robert J. Wang, et al., "Pelee: A Real-Time Object Detection System on Mobile Devices", 32nd Conference on Neural Information Processing Systems (NIPS) *
Wei Liu, et al., "SSD: Single Shot MultiBox Detector", European Conference on Computer Vision *

Cited By (73)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110197158A (en) * 2019-05-31 2019-09-03 广西南宁市博睿通软件技术有限公司 A kind of security cloud system and its application
CN110443280A (en) * 2019-07-05 2019-11-12 北京达佳互联信息技术有限公司 Training method, device and the storage medium of image detection model
CN110443280B (en) * 2019-07-05 2022-06-03 北京达佳互联信息技术有限公司 Training method and device of image detection model and storage medium
CN110532852A (en) * 2019-07-09 2019-12-03 长沙理工大学 Subway station pedestrian's accident detection method based on deep learning
CN110532852B (en) * 2019-07-09 2022-10-18 长沙理工大学 Subway station pedestrian abnormal event detection method based on deep learning
CN110516538B (en) * 2019-07-16 2022-10-11 广州中科凯泽科技有限公司 Prison double off-duty violation assessment method based on deep learning target detection
CN110516538A (en) * 2019-07-16 2019-11-29 广州中科凯泽科技有限公司 The double violation assessment method of leaving the post in prison based on deep learning target detection
CN110351598A (en) * 2019-07-18 2019-10-18 上海秒针网络科技有限公司 The transmission method and device of multimedia messages
CN112307821A (en) * 2019-07-29 2021-02-02 顺丰科技有限公司 Video stream processing method, device, equipment and storage medium
CN110490126A (en) * 2019-08-15 2019-11-22 成都睿晓科技有限公司 A kind of safety cabinet security management and control system based on artificial intelligence
CN110490126B (en) * 2019-08-15 2023-04-18 成都睿晓科技有限公司 Safe deposit box safety control system based on artificial intelligence
CN110580455A (en) * 2019-08-21 2019-12-17 广州洪森科技有限公司 image recognition-based illegal off-duty detection method and device for personnel
CN110689054A (en) * 2019-09-10 2020-01-14 华中科技大学 Worker violation monitoring method
CN110689054B (en) * 2019-09-10 2022-04-01 华中科技大学 Worker violation monitoring method
CN110728316A (en) * 2019-09-30 2020-01-24 广州海昇计算机科技有限公司 Classroom behavior detection method, system, device and storage medium
CN110807429A (en) * 2019-10-23 2020-02-18 西安科技大学 Construction safety detection method and system based on tiny-YOLOv3
CN110807429B (en) * 2019-10-23 2023-04-07 西安科技大学 Construction safety detection method and system based on tiny-YOLOv3
CN110727688A (en) * 2019-10-24 2020-01-24 甘肃华科信息技术有限责任公司 Key personnel gridding service management system
CN110826514A (en) * 2019-11-13 2020-02-21 国网青海省电力公司海东供电公司 Construction site violation intelligent identification method based on deep learning
CN110889858A (en) * 2019-12-03 2020-03-17 中国太平洋保险(集团)股份有限公司 Automobile part segmentation method and device based on point regression
CN111046797A (en) * 2019-12-12 2020-04-21 天地伟业技术有限公司 Oil pipeline warning method based on personnel and vehicle behavior analysis
CN111160197A (en) * 2019-12-23 2020-05-15 爱驰汽车有限公司 Face detection method and device, electronic equipment and storage medium
CN111191576B (en) * 2019-12-27 2023-04-25 长安大学 Personnel behavior target detection model construction method, intelligent analysis method and system
CN111062364A (en) * 2019-12-28 2020-04-24 青岛理工大学 Deep learning-based assembly operation monitoring method and device
CN111062364B (en) * 2019-12-28 2023-06-30 青岛理工大学 Method and device for monitoring assembly operation based on deep learning
CN111126328A (en) * 2019-12-30 2020-05-08 中祖建设安装工程有限公司 Intelligent firefighter posture monitoring method and system
CN111062366A (en) * 2019-12-30 2020-04-24 中祖建设安装工程有限公司 Method and system for detecting postures of personnel in control room
CN111062366B (en) * 2019-12-30 2023-12-15 中祖建设安装工程有限公司 Method and system for detecting gesture of personnel in control room
CN111339879A (en) * 2020-02-19 2020-06-26 安徽领云物联科技有限公司 Single-person entering monitoring method and device for weapon room
CN111339879B (en) * 2020-02-19 2023-06-02 安徽领云物联科技有限公司 Weapon room single person room entering monitoring method and device
CN111507261B (en) * 2020-04-17 2023-05-26 无锡雪浪数制科技有限公司 Visual target positioning-based process operation quality monitoring method
CN111507261A (en) * 2020-04-17 2020-08-07 无锡雪浪数制科技有限公司 Process operation quality monitoring method based on visual target positioning
CN111709281A (en) * 2020-05-06 2020-09-25 北京图创时代科技有限公司武汉分公司 Intelligent security auxiliary system
CN111553305A (en) * 2020-05-09 2020-08-18 中国石油天然气集团有限公司 Violation video identification system and method
CN111553305B (en) * 2020-05-09 2023-05-05 中国石油天然气集团有限公司 System and method for identifying illegal videos
CN111885349A (en) * 2020-06-08 2020-11-03 北京市基础设施投资有限公司(原北京地铁集团有限责任公司) Pipe rack abnormity detection system and method
CN111885349B (en) * 2020-06-08 2023-05-09 北京市基础设施投资有限公司 Pipe gallery abnormality detection system and method
CN112001230B (en) * 2020-07-09 2024-07-30 浙江大华技术股份有限公司 Sleep behavior monitoring method and device, computer equipment and readable storage medium
CN112001230A (en) * 2020-07-09 2020-11-27 浙江大华技术股份有限公司 Sleeping behavior monitoring method and device, computer equipment and readable storage medium
WO2022037280A1 (en) * 2020-08-19 2022-02-24 广西电网有限责任公司贺州供电局 Multi-channel cnn based method for detecting power transformation field operation violations
CN112101212A (en) * 2020-09-15 2020-12-18 山东鲁能软件技术有限公司 Method for judging positions of personnel in electric power safety control complex scene
CN112200021A (en) * 2020-09-22 2021-01-08 燕山大学 Target crowd tracking and monitoring method based on limited range scene
CN112200021B (en) * 2020-09-22 2022-07-01 燕山大学 Target crowd tracking and monitoring method based on limited range scene
CN112149586A (en) * 2020-09-28 2020-12-29 上海翰声信息技术有限公司 Automatic video clip extraction system and method based on neural network
CN112257549B (en) * 2020-10-19 2022-08-02 中国电子科技集团公司第五十八研究所 Floor danger detection early warning method and system based on computer vision
CN112257549A (en) * 2020-10-19 2021-01-22 中国电子科技集团公司第五十八研究所 Floor danger detection early warning method and system based on computer vision
CN112257545A (en) * 2020-10-19 2021-01-22 安徽领云物联科技有限公司 Violation real-time monitoring and analyzing method and device and storage medium
CN112215200A (en) * 2020-10-28 2021-01-12 新东方教育科技集团有限公司 Identity recognition method and device
CN112509000A (en) * 2020-11-20 2021-03-16 合肥市卓迩无人机科技服务有限责任公司 Moving target tracking algorithm for multi-path 4K quasi-real-time spliced video
CN112580449B (en) * 2020-12-06 2022-10-21 江苏集萃未来城市应用技术研究所有限公司 Method for judging abnormal behaviors of people on intelligent construction site
CN112580449A (en) * 2020-12-06 2021-03-30 江苏集萃未来城市应用技术研究所有限公司 Method for judging abnormal behaviors of personnel on intelligent construction site
CN112750222A (en) * 2020-12-29 2021-05-04 杭州拓深科技有限公司 Fire-fighting on-duty room personnel on-duty identification method based on intelligent algorithm
CN113158730A (en) * 2020-12-31 2021-07-23 杭州拓深科技有限公司 Multi-person on-duty identification method and device based on human shape identification, electronic device and storage medium
CN113052049B (en) * 2021-03-18 2023-12-19 国网内蒙古东部电力有限公司 Off-duty detection method and device based on artificial intelligent tool identification
CN113052049A (en) * 2021-03-18 2021-06-29 国网内蒙古东部电力有限公司 Off-duty detection method and device based on artificial intelligence tool identification
WO2022213540A1 (en) * 2021-04-09 2022-10-13 神思电子技术股份有限公司 Object detecting, attribute identifying and tracking method and system
CN113052127A (en) * 2021-04-09 2021-06-29 上海云从企业发展有限公司 Behavior detection method, behavior detection system, computer equipment and machine readable medium
CN113392776A (en) * 2021-06-17 2021-09-14 深圳市千隼科技有限公司 Seat leaving behavior detection method and storage device combining seat information and machine vision
CN113392776B (en) * 2021-06-17 2022-07-12 深圳日海物联技术有限公司 Seat leaving behavior detection method and storage device combining seat information and machine vision
CN113269142A (en) * 2021-06-18 2021-08-17 中电科大数据研究院有限公司 Method for identifying sleeping behaviors of person on duty in field of inspection
CN113688804A (en) * 2021-10-25 2021-11-23 腾讯科技(深圳)有限公司 Multi-angle video-based action identification method and related equipment
CN114283492A (en) * 2021-10-28 2022-04-05 平安银行股份有限公司 Employee behavior-based work saturation analysis method, device, equipment and medium
CN114283492B (en) * 2021-10-28 2024-04-26 平安银行股份有限公司 Staff behavior-based work saturation analysis method, device, equipment and medium
CN114500871B (en) * 2021-12-15 2023-11-14 山东信通电子股份有限公司 Multipath video analysis method, equipment and medium
CN114500871A (en) * 2021-12-15 2022-05-13 山东信通电子股份有限公司 Multi-channel video analysis method, equipment and medium
CN114241397A (en) * 2022-02-23 2022-03-25 武汉烽火凯卓科技有限公司 Frontier defense video intelligent analysis method and system
CN114821647A (en) * 2022-04-25 2022-07-29 济南博观智能科技有限公司 Sleeping post identification method, device, equipment and medium
CN114898342A (en) * 2022-07-15 2022-08-12 深圳市城市交通规划设计研究中心股份有限公司 Method for detecting call receiving and making of non-motor vehicle driver in driving
CN115346169A (en) * 2022-08-08 2022-11-15 航天神舟智慧系统技术有限公司 Method and system for detecting sleep post behaviors
CN116071744B (en) * 2023-01-10 2023-06-30 山东省气候中心 Mature-period tomato identification method and system based on Faster RCNN network
CN116071744A (en) * 2023-01-10 2023-05-05 山东省气候中心 Mature-period tomato identification method and system based on Faster RCNN network
CN116245425A (en) * 2023-05-04 2023-06-09 武汉理工大学 Ship attendant alertness characterization and evaluation method based on wireless signals
CN116245425B (en) * 2023-05-04 2023-08-01 武汉理工大学 Ship attendant alertness characterization and evaluation method based on wireless signals

Also Published As

Publication number Publication date
CN109711320B (en) 2021-05-11

Similar Documents

Publication Publication Date Title
CN109711320B (en) Method and system for detecting violation behaviors of staff on duty
CN107818571B (en) Ship automatic tracking method and system based on deep learning network and average drifting
CN110688925B (en) Cascade target identification method and system based on deep learning
CN107123131B (en) Moving target detection method based on deep learning
CN105243356B (en) A kind of method and device that establishing pedestrian detection model and pedestrian detection method
CN113536972B (en) Self-supervision cross-domain crowd counting method based on target domain pseudo label
CN112949511A (en) Construction site personnel management method based on machine learning and image recognition
CN111476160A (en) Loss function optimization method, model training method, target detection method, and medium
Doulamis Coupled multi-object tracking and labeling for vehicle trajectory estimation and matching
CN114708537A (en) Multi-view-angle-based system and method for analyzing abnormal behaviors of complex places
KR20190088087A (en) method of providing categorized video processing for moving objects based on AI learning using moving information of objects
Liu et al. An improved faster R-CNN for UAV-based catenary support device inspection
Rezaee et al. Deep-Transfer-learning-based abnormal behavior recognition using internet of drones for crowded scenes
CN114663796A (en) Target person continuous tracking method, device and system
CN116824641B (en) Gesture classification method, device, equipment and computer storage medium
CN112069997B (en) Unmanned aerial vehicle autonomous landing target extraction method and device based on DenseHR-Net
CN117953009A (en) Space-time feature-based crowd personnel trajectory prediction method
Castellano et al. Crowd flow detection from drones with fully convolutional networks and clustering
CN115187884A (en) High-altitude parabolic identification method and device, electronic equipment and storage medium
Rakshit et al. Railway Track Fault Detection using Deep Neural Networks
CN117636454A (en) Intelligent video behavior analysis method based on computer vision
Gu et al. Real-Time Vehicle Passenger Detection Through Deep Learning
Prezioso et al. Integrating Object Detection and Advanced Analytics for Smart City Crowd Management
Li et al. Pedestrian Motion Path Detection Method Based on Deep Learning and Foreground Detection
Alhaq et al. Forensic Analysis of Car Accident Using MobileNet and Optical Flow

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant