CN107977646B - Partition delivery detection method - Google Patents

Partition delivery detection method

Info

Publication number
CN107977646B
CN107977646B (application CN201711372450.3A)
Authority
CN
China
Prior art keywords
target
convolutional neural
probability
neural network
tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711372450.3A
Other languages
Chinese (zh)
Other versions
CN107977646A (en)
Inventor
张恩伟 (Zhang Enwei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Shengxun Technology Co ltd
Original Assignee
Beijing Bravevideo Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Bravevideo Technology Co ltd
Priority to CN201711372450.3A
Publication of CN107977646A
Application granted
Publication of CN107977646B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Abstract

The invention provides a partition delivery detection method based on the combination of deep learning, a mean shift tracking algorithm and a Bayesian network. Exploiting the strength of convolutional neural networks in target detection, human bodies and articles are detected in the current frame of the video without any prediction information. On the basis of the targets detected by the convolutional neural network, a mean shift tracking algorithm computes the predicted target position in the next frame and feeds it back to the candidate-target selection layer of the convolutional neural network. The matching rate between detected and tracked targets is computed from their degree of overlap, and the tracked target's trajectory, class probability and the like are updated. The trajectory and class-probability data are then input to a Bayesian network to judge whether an act of passing an article across the barrier has occurred. By combining a deep learning algorithm with traditional computer vision algorithms, the invention realizes video-based detection of barrier delivery behavior and greatly improves the security of the perimeter area.

Description

Partition delivery detection method
Technical Field
The invention belongs to the field of video monitoring in security technology, and relates to pattern recognition, image processing, video analysis and the like.
Background
For safety reasons, fences are common in real life as conventional perimeter-precaution facilities: they physically isolate a space to prevent unauthorized people, vehicles and other objects from entering it. A fence differs from a wall: a wall is generally solid and cannot be passed through, whereas a fence is usually assembled from railings, so a hand can reach through it, and even a small child or a kitten or puppy can pass through its gaps. Barrier delivery means that two or more people pass articles to each other through a fence. For example, in a subway security system, fence gates and the like divide the space into a pre-security-check area and a post-security-check area, and all passengers must pass through security doors, screening machines or manual checks before entering the platform and boarding. However, passengers inside can exchange articles across the fence with people who have not entered, and the people and articles outside the fence have in many cases not been security-checked, which creates a potential safety hazard for the subway; if dangerous articles are passed inside, the personal safety of other passengers is threatened. Fences are widely distributed, and keeping human eyes on video monitors for all of them is basically impossible. A schematic of barrier delivery is shown in FIG. 1.
A fence is essentially a kind of perimeter, and existing perimeter alarm methods and equipment deter barrier deliveries to some extent and can reduce them; but barrier delivery differs from ordinary fence climbing, so traditional perimeter alarm methods are not suitable for detecting it. Perimeter alarms with infrared correlation, for example, are typically mounted above the fence in order to detect a person climbing it, but a person passing an article generally does not need to climb: the article is simply handed through a gap in the middle of the fence. A vibration optical cable detects the vibration produced when a person climbs the fence, but a barrier delivery usually involves no contact with the fence, produces no vibration, and may go undetected; conversely, people leaning on the fence or falling leaves do produce vibration and cause many false alarms. Traditional perimeter alarm algorithms therefore often fail at detecting barrier deliveries.
With the development of artificial intelligence, video analysis technology has made great progress, so that by analyzing the content of video, the detection of barrier deliveries based on video analysis has become possible.
In recent years, target detection algorithms based on deep learning have made major breakthroughs, with convolutional neural networks chiefly responsible for the large improvement in target detection and recognition rates. The task of target detection is, given an image, to locate the position and size of the objects in it and to give their categories, as in face detection and pedestrian detection. A convolutional neural network can detect the people and various articles in a scene, but such algorithms usually work on single-frame images and therefore cannot detect the behavior of barrier delivery.
Disclosure of Invention
In order to detect barrier delivery behavior, the invention provides a barrier delivery detection algorithm based on the combination of deep learning, a mean shift tracking algorithm and a Bayesian network. Exploiting the strength of convolutional neural networks in target detection, human bodies and articles are detected in a single image (one frame of the video) without any prediction information. On the basis of the targets detected by the convolutional neural network (including target coordinates, width and height, category, probability and the like), the predicted target position in the next frame is computed with a mean shift tracking algorithm and fed back to the candidate-target selection layer of the convolutional neural network. The matching rate between detected and tracked targets is computed from their degree of overlap, and the tracked target's trajectory, class probability and the like are updated. The trajectory and class-probability data are then input to a Bayesian network to judge whether an act of passing an article across the barrier has occurred. By combining a deep learning algorithm with traditional computer vision algorithms, the invention realizes video-based detection of barrier delivery behavior and greatly improves the security of the perimeter area.
The barrier delivery detection algorithm provided by the invention, based on the combination of deep learning, a mean shift tracking algorithm and a Bayesian network, comprises the following steps:
the method includes the steps of acquiring a video stream from a high definition network camera (IPC) or a Network Video Recorder (NVR), generally taking a first code stream, namely a high definition code stream such as 1080p, decoding the video stream into a frame-by-frame image, generally in an h.264 or h.265 encoding format, generally obtaining a YUV image from the decoded image, and then converting the YUV image into an RGB image through a color space, which is hereinafter collectively referred to as a frame image.
Pixels of interest are extracted from the frame image using a preset region of interest (ROI), namely the regions on both sides of the fence, where the probability of barrier delivery is high; regions far from the fence are excluded so as to avoid false alarms caused by distant people. In the present invention, any pixel outside the region of interest is filled with the value R = G = B = 128. This yields a frame image F in which only the pixels within the ROI are retained.
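As a minimal sketch of this ROI step (the function and parameter names are illustrative, not from the patent), the masking can be expressed with NumPy as follows, assuming the ROI has been drawn once at installation time as a boolean mask:

```python
import numpy as np

def apply_roi_mask(frame_rgb: np.ndarray, roi_mask: np.ndarray) -> np.ndarray:
    """Keep only the pixels inside the region of interest and fill the
    rest with the neutral value R = G = B = 128 described above.

    frame_rgb: HxWx3 uint8 image; roi_mask: HxW boolean array, True for
    the regions on both sides of the fence.
    """
    out = np.full_like(frame_rgb, 128)   # neutral gray everywhere
    out[roi_mask] = frame_rgb[roi_mask]  # retain only ROI pixels
    return out
```

A plausible reason for the mid-gray fill, presumably, is that 128 is the neutral value of an 8-bit channel and avoids introducing strong artificial edges at the ROI boundary.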
The frame image F is input to a deep-learning target detection module based on position prediction, where the position prediction comes from the mean shift tracking algorithm; if F is the first frame, default single-frame target detection is used. The module adopts a regional convolutional neural network that recognizes 7 categories: human bodies, backpacks, handbags, luggage cases, carrier bags, mineral water bottles and drinking cups. F is scaled to an image I at the invention's base resolution of 480x480; the regional convolutional neural network extracts features over the whole of I, and I is divided into 15x15 blocks B. For each block B, if the block has no tracking-feedback prediction information, 5 target frames (each comprising a target width, a target height and target center coordinates x, y) and a class confidence are predicted on it; if the block contains tracking-feedback prediction information, only 2 target frames are predicted, one whose position information is taken from the tracking feedback and the other identical to the case without tracking feedback. The probability that each target frame belongs to each of the 7 classes is computed from the features extracted by the convolutional neural network, so the whole image yields at most 15x15x5 = 1125 prediction frames carrying position and class-probability information, and at least 15x15x2 = 450. The prediction frames are merged by a mechanism determined by the overlap rate, merging only frames of the same class, and the detection results for the 7 target categories over the whole image are finally output, each result comprising the target's center coordinates, width and height, and the probability of the class it belongs to.
For each detected target, a mean shift tracking algorithm iteratively searches for the region best matching the target, starting from the target center and using a color histogram as the feature, with the Bhattacharyya coefficient as the similarity measure between the target template and the candidate target; the coordinate point that best matches the detected target within a local range is finally obtained and fed back to the convolutional neural network as prediction information. In the present invention, all blocks B covered by the region finally obtained by mean shift share the prediction information.
When a target detected in the current frame and a target detected in the previous frame belong to the same class, the overlap area is computed between every pair of such targets, producing an overlap-area matrix that serves as the feature-matching matrix between the previous-frame targets and the currently detected targets. When the overlap area exceeds a certain threshold, the two are considered the same target (the tracked target is matched), and the target's trajectory and class probability are updated; a current-frame target with no match in the previous frame is treated as a new target, and a new tracked target is established; a previous-frame target with no match among the currently detected targets is regarded as having disappeared and is deleted from the tracking queue. Through this step, tracking trajectories of the human bodies and the various articles are established frame by frame, and the class probabilities are updated in real time.
Through the above steps, the following variables are obtained: the number of people on the left side of the fence $N_L$, the number of people on the right side of the fence $N_R$, the average probability $P_H$ of the human bodies detected in the ROI on both sides of the fence, the average movement direction $V_{HL}$ of the left-side human bodies, the average movement direction $V_{HR}$ of the right-side human bodies, the average probability $P_O$ of the articles detected between human bodies in the vicinity of the fence, and the average movement direction $V_O$ of the articles. All movement directions take the horizontal direction perpendicular to the fence as the reference direction; the angle $\theta$ to this reference direction is the movement-direction angle, and $\cos\theta$ is used as the probability value of the movement direction. Letting A be the barrier-delivery alarm variable, a Bayesian network over A and $N_L, N_R, P_H, V_{HL}, V_{HR}, P_O, V_O$ can be constructed; by observing $N_L, N_R, P_H, V_{HL}, V_{HR}, P_O, V_O$, the probability of A occurring is estimated, finally realizing the detection of barrier deliveries.
Traditional detection of barrier deliveries mainly relies on people staring at monitoring screens, which causes fatigue over long periods, or on measures such as infrared correlation and vibration detection, which produce a large number of false alarms. The barrier delivery detection algorithm based on the combination of deep learning, the mean shift tracking algorithm and the Bayesian network can free security personnel from the high-load work of staring at monitoring screens for long periods and greatly reduce false alarms.
Drawings
FIG. 1 is a schematic representation of the barrier delivery of the present invention.
FIG. 2 is a flow chart of a barrier object detection algorithm based on a combination of deep learning, mean shift tracking algorithm and Bayesian network.
FIG. 3 is a schematic diagram illustrating the layers of the convolutional neural network of the present invention.
FIG. 4 is a diagram of a barrier delivery Bayesian network of the present invention.
Detailed Description
The invention is further explained below with reference to the figures and specific examples. It should be noted that the examples described below are intended to aid understanding of the invention and cover only part of it; they therefore do not limit the scope of protection of the invention.
As shown in FIG. 2, the invention realizes the series of steps from the collection of each frame image to the triggering of the alarm.
In step 201, a video stream is collected from a front-end device, which may be an IPC, NVR, DVR or the like, but is not limited thereto, as long as a video stream can be obtained from it. After the video stream is collected, a decoder decodes it into frame images in YUV format, which are then converted through a color-space transform into RGB images.
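As a rough sketch of step 201 (the RTSP address is hypothetical; real IPC/NVR URLs and credentials vary by vendor), the acquisition and color-space conversion can be done with OpenCV, whose decoder handles the H.264/H.265 decoding and YUV stages internally:

```python
import cv2

# Hypothetical stream URL; substitute the camera's real RTSP address.
cap = cv2.VideoCapture("rtsp://192.168.1.64/Streaming/Channels/101")

while True:
    ok, frame_bgr = cap.read()  # one decoded frame (OpenCV returns BGR)
    if not ok:
        break
    # Color-space conversion to the RGB frame image used downstream.
    frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    # ... pass frame_rgb to the ROI filter of step 202 ...
```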
In step 202, using the ROI calibrated manually in advance, any pixel falling inside the ROI is treated as an effective pixel. This step filters out the influence of distant interfering targets and finally forms the frame image input to the convolutional neural network.
In step 203, as shown in FIG. 3, the invention adopts a deep learning network of 16 convolutional layers, 4 pooling layers, 1 merging layer and 1 fully-connected layer, followed by 1 classification layer. The convolutional layers employ 7x7, 5x5 and 3x3 kernels. Each pooling layer uses a 2x2 window, halving the size of the feature maps. The outputs of the 16th and 19th layers are merged and output as the 20th layer. The network parameters are obtained by pre-training on two million calibrated samples, then fine-tuning on images of the 7 target categories in monitoring scenes (subway scenes, residential entrances, perimeter scenes and the like) until convergence. During detection, the frame image is scaled to a uniform 480x480 image I, features are extracted by the convolutional neural network, I is divided into 15x15 blocks, each block screens its prediction frames according to the tracking-feedback information, the probabilities, coordinates, widths and heights of the 7 target categories are estimated, and finally the frames scoring above a certain threshold are merged to form the detection results for the 7 target categories. Each detection result is represented by center coordinates, width and height, and a probability.
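The per-block box budget and the same-class merge of step 203 can be sketched as follows; this is an assumed NMS-style reading of the "merging mechanism determined by the overlap rate", with illustrative names and an illustrative threshold:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-9)

def num_prediction_boxes(blocks_with_feedback, grid=15, k_default=5, k_fb=2):
    """Blocks with tracking feedback predict 2 boxes, the rest 5, so the
    total over the 15x15 grid ranges from 15*15*2 = 450 to 15*15*5 = 1125."""
    return (blocks_with_feedback * k_fb
            + (grid * grid - blocks_with_feedback) * k_default)

def merge_same_class(boxes, scores, classes, iou_thresh=0.5):
    """Greedy suppression that only compares boxes of the same class."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order
                 if classes[j] != classes[i] or iou(boxes[i], boxes[j]) < iou_thresh]
    return keep
```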
In step 204, the target position in the next frame image is estimated with the mean shift algorithm from the targets detected in step 203. Mean shift is a gradient optimization algorithm that uses mean shift iterations to search for the region best matching the target model; it finds a locally optimal solution. In the invention, the Bhattacharyya coefficient is used as the similarity measure between the target template and the candidate target. Let $\{x_i\}_{i=1,\dots,n_h}$ be the coordinates of the pixels in the candidate target region centered at $y$, and let the window width of the kernel $k(x)$ be $h$. The probability of feature $u = 1, \dots, m$ in the candidate is

$$\hat{p}_u(y) = C_h \sum_{i=1}^{n_h} k\!\left(\left\|\frac{y - x_i}{h}\right\|^2\right) \delta\big[b(x_i) - u\big],$$

where $b(x_i)$ maps pixel $x_i$ to its histogram bin, $\delta$ is the Kronecker delta, and

$$C_h = \frac{1}{\sum_{i=1}^{n_h} k\!\left(\left\|\frac{y - x_i}{h}\right\|^2\right)}$$

is a normalization coefficient. If the selected feature is color, then $\hat{p}_u(y)$ is a normalized, weighted color histogram, the weights being determined by the distance of each pixel from the center point $y$ through the kernel function $k(x)$. Given the feature distribution $\hat{q} = \{\hat{q}_u\}_{u=1,\dots,m}$ of the tracked target and the feature distribution $\hat{p}(y) = \{\hat{p}_u(y)\}_{u=1,\dots,m}$ of the candidate target, the Bhattacharyya coefficient can be defined as

$$\rho(y) \equiv \rho\big[\hat{p}(y), \hat{q}\big] = \sum_{u=1}^{m} \sqrt{\hat{p}_u(y)\,\hat{q}_u},$$

and the distance between the tracked target and the candidate target features as

$$d(y) = \sqrt{1 - \rho\big[\hat{p}(y), \hat{q}\big]}.$$

Minimizing $d(y)$ yields the iterative formula for the new target coordinate point:

$$y_1 = \frac{\sum_{i=1}^{n_h} x_i\, w_i\, g\!\left(\left\|\frac{y_0 - x_i}{h}\right\|^2\right)}{\sum_{i=1}^{n_h} w_i\, g\!\left(\left\|\frac{y_0 - x_i}{h}\right\|^2\right)},$$

where

$$w_i = \sum_{u=1}^{m} \sqrt{\frac{\hat{q}_u}{\hat{p}_u(y_0)}}\;\delta\big[b(x_i) - u\big], \qquad g(x) = -k'(x).$$
the target predicted position obtained by mean shift is fed back to the convolutional neural network of step 203.
In step 205, when a target detected in the current frame and a target detected in the previous frame (from steps 203 and 204) belong to the same class, the overlap area is computed between every pair of such targets to generate an overlap-area matrix, and the overlap coefficient serves as the feature-matching matrix between the previous-frame targets and the currently detected targets. The overlap coefficient between two targets A and B is

$$\eta = \frac{\operatorname{area}(A \cap B)}{\operatorname{area}(A \cup B)}.$$
When $\eta$ exceeds a certain threshold, the two are considered the same target (the tracked target is matched), and the target's trajectory and class probability are updated; a current-frame target with no match in the previous frame is treated as a new target, and a new tracked target is established; a previous-frame target with no match among the currently detected targets is regarded as having disappeared and is deleted from the tracking queue. Through this step, tracking trajectories of the human bodies and the various articles are established frame by frame, and the class probabilities are updated in real time.
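A sketch of the matching bookkeeping of step 205, reusing the `iou` helper from the step 203 sketch; the threshold value is illustrative, since the patent only requires $\eta$ to exceed "a certain threshold":

```python
import numpy as np

def match_tracks(prev_boxes, cur_boxes, prev_cls, cur_cls, thresh=0.3):
    """Overlap matrix between same-class targets of consecutive frames,
    split into matched / new / vanished targets (greedy, for brevity)."""
    n, m = len(prev_boxes), len(cur_boxes)
    overlap = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            if prev_cls[i] == cur_cls[j]:
                overlap[i, j] = iou(prev_boxes[i], cur_boxes[j])
    matched = {(i, int(overlap[i].argmax()))
               for i in range(n) if m and overlap[i].max() > thresh}
    new = set(range(m)) - {j for _, j in matched}        # start new tracks
    vanished = set(range(n)) - {i for i, _ in matched}   # delete from queue
    return matched, new, vanished
```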
In step 206, the variables obtained from step 205 are: the number of people on the left side of the fence $N_L$, the number of people on the right side of the fence $N_R$, the average probability $P_H$ of the human bodies detected in the ROI on both sides of the fence, the average movement direction $V_{HL}$ of the left-side human bodies, the average movement direction $V_{HR}$ of the right-side human bodies, the average probability $P_O$ of the articles detected between human bodies in the vicinity of the fence, and the average movement direction $V_O$ of the articles. A Bayesian network as shown in FIG. 4 is established. Assuming that, conditioned on A, the variables $N_L, N_R, P_H, V_{HL}, V_{HR}, P_O, V_O$ are mutually independent, then

$$P(N_L, N_R, P_H, P_O, V_{HL}, V_{HR}, V_O \mid A) = P(N_L \mid A)\,P(N_R \mid A)\,P(P_H \mid A)\,P(P_O \mid A)\,P(V_{HL} \mid A)\,P(V_{HR} \mid A)\,P(V_O \mid A).$$

Therefore, having observed $N_L, N_R, P_H, V_{HL}, V_{HR}, P_O, V_O$, the probability that A occurs satisfies

$$P(A \mid N_L, N_R, P_H, P_O, V_{HL}, V_{HR}, V_O) \propto P(N_L \mid A)\,P(N_R \mid A)\,P(P_H \mid A)\,P(P_O \mid A)\,P(V_{HL} \mid A)\,P(V_{HR} \mid A)\,P(V_O \mid A).$$

Assuming that $P(N_L \mid A), P(N_R \mid A), P(P_H \mid A), P(P_O \mid A), P(V_{HL} \mid A), P(V_{HR} \mid A), P(V_O \mid A)$ each obey a Gaussian distribution, actual samples are used to estimate the parameters of the Bayesian network. Finally, the probability that a barrier delivery is occurring is estimated from the observed $N_L, N_R, P_H, V_{HL}, V_{HR}, P_O, V_O$. In general, when two people approach the fence from the two sides and pass an article such as a bag through it, the probability of barrier delivery is high.
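Under the Gaussian assumption, the posterior score is just a product of per-feature likelihoods. A minimal sketch follows (all distribution parameters below are placeholders to be estimated from labeled samples, as the patent states; the variable names follow the text):

```python
import numpy as np

def gauss_pdf(x, mu, sd):
    """Gaussian density used for each P(feature | A)."""
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

# (mean, std) of each feature conditioned on A; placeholder values only.
params = {"NL": (1.0, 0.5), "NR": (1.0, 0.5), "PH": (0.9, 0.1),
          "PO": (0.7, 0.2), "VHL": (0.9, 0.2), "VHR": (0.9, 0.2),
          "VO": (0.8, 0.2)}

def delivery_score(obs):
    """Score proportional to P(A | observations): the product of the
    conditionally independent likelihoods P(x | A) from the equations
    above. An alarm would compare this against a calibrated threshold."""
    return float(np.prod([gauss_pdf(obs[k], mu, sd)
                          for k, (mu, sd) in params.items()]))
```

Note that each direction feature enters as $\cos\theta$ relative to the fence normal, so values near 1 indicate motion straight toward or across the fence.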
The method combines the traditional mean shift algorithm, the Bayesian network and the like with a deep learning algorithm and predicts barrier-passing behavior by probability estimation, giving higher accuracy and better generalization.

Claims (6)

1. A method for detecting barrier deliveries, characterized by: combining deep learning, a mean shift tracking algorithm and a Bayesian network; training on a large number of samples to obtain a convolutional neural network model for 7 categories, namely human bodies, backpacks, handbags, luggage cases, carrier bags, mineral water bottles and drinking cups; feeding the target predicted positions obtained by mean shift back to the convolutional neural network; detecting targets with the convolutional neural network; using the targets detected in the current frame and those detected in the previous frame to establish, frame by frame, tracking trajectories of the human bodies and the various articles together with class probabilities updated in real time; then obtaining the number of people on the left side of the fence $N_L$, the number of people on the right side of the fence $N_R$, the average probability $P_H$ of the human bodies detected in the ROI on both sides of the fence, the average movement direction $V_{HL}$ of the left-side human bodies, the average movement direction $V_{HR}$ of the right-side human bodies, the average probability $P_O$ of the articles detected between human bodies in the vicinity of the fence, and the average movement direction $V_O$ of the articles; constructing a Bayesian network with the barrier-delivery alarm variable A as the parent node and $N_L, N_R, P_H, V_{HL}, V_{HR}, P_O, V_O$ as direct child nodes of A; assuming that, conditioned on A, the variables $N_L, N_R, P_H, V_{HL}, V_{HR}, P_O, V_O$ are mutually independent, so that the probability $P(N_L, N_R, P_H, P_O, V_{HL}, V_{HR}, V_O \mid A)$ equals $P(N_L \mid A)\,P(N_R \mid A)\,P(P_H \mid A)\,P(P_O \mid A)\,P(V_{HL} \mid A)\,P(V_{HR} \mid A)\,P(V_O \mid A)$, and therefore, having observed $N_L, N_R, P_H, V_{HL}, V_{HR}, P_O, V_O$, the probability of A occurring $P(A \mid N_L, N_R, P_H, P_O, V_{HL}, V_{HR}, V_O)$ is proportional to $P(N_L \mid A)\,P(N_R \mid A)\,P(P_H \mid A)\,P(P_O \mid A)\,P(V_{HL} \mid A)\,P(V_{HR} \mid A)\,P(V_O \mid A)$; assuming that $P(N_L \mid A), P(N_R \mid A), P(P_H \mid A), P(P_O \mid A), P(V_{HL} \mid A), P(V_{HR} \mid A), P(V_O \mid A)$ all obey Gaussian distributions, estimating the parameters of the Bayesian network with actual samples; and finally estimating the probability that a barrier delivery occurs from the observed $N_L, N_R, P_H, V_{HL}, V_{HR}, P_O, V_O$.
2. The barrier delivery detection method of claim 1, characterized in that the video stream is extracted from a front-end device and decoded into YUV images, which are then converted into RGB images; the regions on both sides of the fence are retained through the ROI, and the pixels of the distant regions are filled with the value R = G = B = 128.
3. The barrier delivery detection method of claim 1, characterized in that the 7 categories of human bodies, backpacks, handbags, luggage cases, carrier bags, mineral water bottles and drinking cups are recognized jointly by a convolutional neural network; the frame image F is scaled to an image I at the reference resolution of 480x480; a regional convolutional neural network then extracts features over the whole of I; I is divided into 15x15 blocks B, each block adopting a different target-frame estimation mode according to whether tracking information has been fed back; the target probabilities, widths and heights contained in each block are then estimated; and finally the targets are merged by an overlap-rate mechanism, and the center coordinates, width and height of each target and the probability of the class it belongs to are output.
4. The barrier delivery detection method of claim 1, characterized in that, for each detected target, a mean shift tracking algorithm iteratively searches for the region best matching the target, starting from the target center and using a color histogram as the feature, and the coordinates, width and height of that region are fed back to the convolutional neural network as prediction information.
5. The method according to claim 1, characterized in that, when a target in the current frame and a target detected in the previous frame belong to the same class, an overlap-area matrix is generated by computing the overlap area between every pair of such targets; the overlap-area matrix is used as the feature-matching matrix between the previous-frame targets and the currently detected targets, and the tracked trajectories are updated through this matrix, including the generation, update and deletion of tracked targets.
6. The barrier delivery detection method of claim 1, characterized in that the convolutional neural network model is trained on a large amount of data and fine-tuned with data from the monitored scene.
CN201711372450.3A 2017-12-19 2017-12-19 Partition delivery detection method Active CN107977646B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711372450.3A CN107977646B (en) 2017-12-19 2017-12-19 Partition delivery detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711372450.3A CN107977646B (en) 2017-12-19 2017-12-19 Partition delivery detection method

Publications (2)

Publication Number Publication Date
CN107977646A CN107977646A (en) 2018-05-01
CN107977646B true CN107977646B (en) 2021-06-29

Family

ID=62006918

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711372450.3A Active CN107977646B (en) 2017-12-19 2017-12-19 Partition delivery detection method

Country Status (1)

Country Link
CN (1) CN107977646B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110443834A (en) * 2018-05-04 2019-11-12 大猩猩科技股份有限公司 A kind of distributed object tracking system
CN109597069A (en) * 2018-12-25 2019-04-09 山东雷诚电子科技有限公司 A kind of active MMW imaging method for secret protection
CN112668377A (en) * 2019-10-16 2021-04-16 清华大学 Information recognition system and method thereof
CN111144232A (en) * 2019-12-09 2020-05-12 国网智能科技股份有限公司 Transformer substation electronic fence monitoring method based on intelligent video monitoring, storage medium and equipment
CN111091098B (en) * 2019-12-20 2023-08-15 浙江大华技术股份有限公司 Training method of detection model, detection method and related device
CN112016528B (en) * 2020-10-20 2021-07-20 成都睿沿科技有限公司 Behavior recognition method and device, electronic equipment and readable storage medium
CN112818844A (en) * 2021-01-29 2021-05-18 成都商汤科技有限公司 Security check abnormal event detection method and device, electronic equipment and storage medium
CN112967320B (en) * 2021-04-02 2023-05-30 浙江华是科技股份有限公司 Ship target detection tracking method based on bridge anti-collision
CN113901946A (en) * 2021-10-29 2022-01-07 上海商汤智能科技有限公司 Abnormal behavior detection method and device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101146218A (en) * 2007-11-02 2008-03-19 北京博睿视科技有限责任公司 Video monitoring system of built-in smart video processing device based on serial port
CN102103684A (en) * 2009-12-21 2011-06-22 新谊整合科技股份有限公司 Image identification system and method
CN105100727A (en) * 2015-08-14 2015-11-25 河海大学 Real-time tracking method for specified object in fixed position monitoring image
CN105825198A (en) * 2016-03-29 2016-08-03 深圳市佳信捷技术股份有限公司 Pedestrian detection method and device
CN106503761A (en) * 2016-10-31 2017-03-15 紫光智云(江苏)物联网科技有限公司 Drawing system and method are sentenced in article safety check

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105300347A (en) * 2015-06-29 2016-02-03 国家电网公司 Distance measuring device and method
US10102635B2 (en) * 2016-03-10 2018-10-16 Sony Corporation Method for moving object detection by a Kalman filter-based approach

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101146218A (en) * 2007-11-02 2008-03-19 北京博睿视科技有限责任公司 Video monitoring system of built-in smart video processing device based on serial port
CN102103684A (en) * 2009-12-21 2011-06-22 新谊整合科技股份有限公司 Image identification system and method
CN105100727A (en) * 2015-08-14 2015-11-25 河海大学 Real-time tracking method for specified object in fixed position monitoring image
CN105825198A (en) * 2016-03-29 2016-08-03 深圳市佳信捷技术股份有限公司 Pedestrian detection method and device
CN106503761A (en) * 2016-10-31 2017-03-15 紫光智云(江苏)物联网科技有限公司 Drawing system and method are sentenced in article safety check

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Salvador, A., et al. "Faster R-CNN Features for Instance Search." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2016. *
Chahyati, D., et al. "Tracking People by Detection Using CNN Features." Procedia Computer Science, 31 Oct. 2017, full text. *
Fu Luyao. "Research on Recognition of Abnormal Human Behavior in Video Data under Scene Constraints." China Masters' Theses Full-text Database, Dec. 2016, full text. *
Yao Yuting. "Design and Development of an Internet-of-Things Integration System for Zhejiang No. 2 Prison." China Masters' Theses Full-text Database, Dec. 2015, full text. *

Also Published As

Publication number Publication date
CN107977646A (en) 2018-05-01

Similar Documents

Publication Publication Date Title
CN107977646B (en) Partition delivery detection method
US7596241B2 (en) System and method for automatic person counting and detection of specific events
CN108416250B (en) People counting method and device
CN110298278B (en) Underground parking garage pedestrian and vehicle monitoring method based on artificial intelligence
Wang et al. Detection of abnormal visual events via global optical flow orientation histogram
KR20180135898A (en) Systems and methods for training object classifiers by machine learning
Lim et al. iSurveillance: Intelligent framework for multiple events detection in surveillance videos
CN111144247A (en) Escalator passenger reverse-running detection method based on deep learning
US20080193010A1 (en) Behavioral recognition system
US20090319560A1 (en) System and method for multi-agent event detection and recognition
CN111191667A (en) Crowd counting method for generating confrontation network based on multiple scales
Mehta et al. Motion and region aware adversarial learning for fall detection with thermal imaging
Cao et al. Learning spatial-temporal representation for smoke vehicle detection
Teja Static object detection for video surveillance
CN113688761A (en) Pedestrian behavior category detection method based on image sequence
Park et al. A track-based human movement analysis and privacy protection system adaptive to environmental contexts
Mishra et al. Real-Time pedestrian detection using YOLO
Lee et al. Hostile intent and behaviour detection in elevators
CN112580633B (en) Public transport passenger flow statistics device and method based on deep learning
Chae et al. CCTV high-speed analysis algorithm for real-time monitoring of building access
Annapareddy et al. A robust pedestrian and cyclist detection method using thermal images
Arivazhagan Versatile loitering detection based on non-verbal cues using dense trajectory descriptors
Wang et al. Event detection and recognition using histogram of oriented gradients and hidden markov models
CN113223081A (en) High-altitude parabolic detection method and system based on background modeling and deep learning
Babiyola et al. A hybrid learning frame work for recognition abnormal events intended from surveillance videos

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20231130

Address after: Room 609-1, 6th Floor, Import and Export Exhibition and Trading Center, Huanghua Comprehensive Bonded Zone, Huanghua Town, Lingkong Block, Changsha Area, Changsha Free Trade Zone, Hunan Province, 410137

Patentee after: Hunan Shengxun Technology Co.,Ltd.

Address before: 100190 Room 403, 4th floor, building 6, No.13, Beiertiao, Zhongguancun, Haidian District, Beijing

Patentee before: BEIJING BRAVEVIDEO TECHNOLOGY CO.,LTD.