CN107977646A - Partition delivery detection method - Google Patents

Partition delivery detection method Download PDF

Info

Publication number
CN107977646A
CN107977646A CN201711372450.3A
Authority
CN
China
Prior art keywords
target
fence
delivery
algorithm
convolutional neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711372450.3A
Other languages
Chinese (zh)
Other versions
CN107977646B (en
Inventor
张恩伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Shengxun Technology Co ltd
Original Assignee
Beijing Boruishi Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Boruishi Technology Co., Ltd.
Priority to CN201711372450.3A priority Critical patent/CN107977646B/en
Publication of CN107977646A publication Critical patent/CN107977646A/en
Application granted granted Critical
Publication of CN107977646B publication Critical patent/CN107977646B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 - Classification techniques relating to the classification model, based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/46 - Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Abstract

To detect cross-fence delivery (the passing of articles between people on opposite sides of a fence), the present invention proposes a partition delivery detection algorithm that combines deep learning, the mean-shift tracking algorithm and a Bayesian network. Exploiting the strength of convolutional neural networks in object detection, humans and articles are detected in each single image (one frame of the video) without any prior prediction information. On the basis of the targets detected by the convolutional neural network (target coordinates, width and height, class, probability, etc.), the mean-shift tracking algorithm computes each target's predicted position in the next frame, and this prediction is fed back to the candidate-selection layer of the convolutional neural network. The matching rate between a detected target and a tracked target is computed from their degree of overlap, and the tracked target's trajectory and class probability are updated accordingly. The trajectories and class probabilities are then input to a Bayesian network, which decides whether cross-fence delivery has occurred. By combining deep-learning algorithms with traditional computer-vision algorithms, the invention realizes video-based detection of cross-fence delivery and greatly improves the security of perimeter regions.

Description

Partition delivery detection method
Technical field
The invention belongs to the field of video surveillance in security technology and relates to pattern recognition, graphics and image processing, and video analysis. It detects cross-fence delivery using object detection, tracking and probabilistic reasoning, mainly employing deep learning, the mean-shift (MeanShift) tracking algorithm and a Bayesian network.
Background technology
For security reasons, fences are very common in daily life and are a conventional perimeter-defence facility: they physically partition space and prevent unauthorized targets such as people or vehicles from entering. A fence differs from a wall. A wall is solid and cannot be penetrated, whereas a fence is typically assembled from railings: a hand can reach through it, and small targets such as children or pets can even pass through its gaps. So-called cross-fence delivery is the passing of articles through a fence between two or more people. For example, in a subway security system, fence gates divide the space into a pre-screening region and a post-screening region; all passengers must pass through security gates or X-ray machines and manual checks before entering the platform and taking the subway. However, many passengers exchange articles across the fence with people who have not entered the station. The person and the articles outside the fence have in many cases passed no security check, which creates a safety hazard for the subway; if the articles passed through the fence are dangerous goods, other passengers' personal safety is threatened. Fences are also widely distributed, so keeping close watch on them by manual video monitoring is practically impossible. A schematic diagram of cross-fence delivery is shown in Figure 1.
A fence is essentially a kind of perimeter. Perimeter-alarm methods and devices deter cross-fence delivery to some extent and can reduce its occurrence, but cross-fence delivery differs from ordinary fence climbing, so traditional perimeter-alarm methods do not apply to its detection. For example, infrared-beam perimeter alarms are usually installed at the top of the fence to detect people climbing over it, but cross-fence delivery usually requires no climbing: articles are simply passed through the gaps in the middle of the fence. Vibration-sensing optical cables, likewise, rely on detecting the vibration caused by a person climbing the fence; cross-fence delivery usually does not touch the fence at all and produces no vibration, so the cable probably detects nothing. Conversely, people leaning on the fence to rest, or falling leaves, do produce vibration and cause large numbers of false alarms. Traditional perimeter-alarm algorithms therefore generally fail to detect the behaviour of cross-fence delivery.
With the development of artificial-intelligence technology, video-analysis technology has also made major progress, making video-based detection of cross-fence delivery possible.
In recent years, object-detection algorithms based on deep learning have achieved important breakthroughs, chiefly because convolutional neural networks have substantially improved detection accuracy. The object-detection task is: given an image, locate the position and size of each target in it and output its class, as in face detection and pedestrian detection. Convolutional neural networks can detect the people and various articles in a scene, but these algorithms typically operate on single frames, which by itself cannot detect the behaviour of cross-fence delivery.
Summary of the invention
To detect cross-fence delivery, the present invention proposes a partition delivery detection algorithm combining deep learning, the mean-shift tracking algorithm and a Bayesian network. Exploiting the strength of convolutional neural networks in object detection, humans and articles are detected in each single image (one frame of the video) without any prior prediction information. On the basis of the targets detected by the convolutional neural network (target coordinates, width and height, class, probability, etc.), the mean-shift tracking algorithm computes the predicted target position in the next frame, and this prediction is fed back to the candidate-selection layer of the convolutional neural network. The matching rate between a detected target and a tracked target is computed from their degree of overlap, and the tracked target's trajectory and class probability are updated accordingly. Once the trajectories and class probabilities have been obtained, they are input to a Bayesian network to decide whether cross-fence delivery has occurred. By combining deep-learning algorithms with traditional computer-vision algorithms, the invention realizes video-based detection of cross-fence delivery and greatly improves the security of perimeter regions.
The partition delivery detection algorithm provided by the invention, combining deep learning, the mean-shift tracking algorithm and a Bayesian network, includes the following.
A video stream is obtained from a high-definition network camera (IPC) or a network video recorder (NVR), usually the primary stream, i.e. the stream at a high resolution such as 1080p, typically in H.264 or H.265 coded format. The stream is decoded into individual images, usually in YUV format, which are converted to RGB images by colour-space conversion; these are hereafter referred to as frame images.
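The colour-space conversion in this step can be sketched as follows. This is a minimal NumPy illustration using the standard BT.601 full-range formulas; in practice the Y, U and V planes would come from the H.264/H.265 decoder rather than being constructed by hand.

```python
import numpy as np

def yuv_to_rgb(y, u, v):
    """Convert full-resolution Y, U, V planes (uint8) to an RGB image
    using the BT.601 full-range conversion commonly applied to decoder output."""
    y = y.astype(np.float32)
    u = u.astype(np.float32) - 128.0
    v = v.astype(np.float32) - 128.0
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)

# A mid-gray frame (Y=128, U=V=128) maps to RGB (128, 128, 128).
frame = yuv_to_rgb(np.full((4, 4), 128, np.uint8),
                   np.full((4, 4), 128, np.uint8),
                   np.full((4, 4), 128, np.uint8))
```

For subsampled formats such as I420, the U and V planes would first be upsampled to full resolution before applying the same formulas.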
Using a preset region of interest (ROI), the pixels of interest are extracted from the frame image. These pixels cover the areas on both sides of the fence, where cross-fence delivery is most likely to occur; regions far from the fence are excluded in the present invention to avoid false alarms caused by people in the distance. Every pixel outside the region of interest is filled with the value R=G=B=128, yielding a frame image F that retains only the pixels inside the ROI.
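The ROI filling described above can be sketched as follows; this is a minimal NumPy illustration, and the boolean ROI mask itself would come from the manual calibration of the fence region.

```python
import numpy as np

def apply_roi(frame_rgb, roi_mask):
    """Keep only pixels inside the calibrated ROI along the fence;
    everything outside is filled with R=G=B=128 as described above."""
    out = np.full_like(frame_rgb, 128)
    out[roi_mask] = frame_rgb[roi_mask]
    return out

img = np.zeros((6, 6, 3), dtype=np.uint8)   # all-black frame for illustration
mask = np.zeros((6, 6), dtype=bool)
mask[2:4, 2:4] = True                       # ROI: a small strip near the fence
f = apply_roi(img, mask)
```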
The frame image F is input to a deep-learning object-detection module with position prediction; the position predictions come from the mean-shift tracking algorithm, and for the first frame the default single-frame detection is used. The detection module uses a region convolutional neural network that recognizes 7 classes: human body, backpack, handbag, suitcase, satchel, mineral-water bottle and cup. F is scaled to the invention's reference resolution of 480x480 to form image I; region-convolutional-neural-network features are extracted over the whole of I, and I is divided into 15x15 blocks B. For each block B without tracking-feedback prediction information, 5 target boxes are predicted (each comprising target width, target height, target-centre coordinates x and y, and a classification confidence). If the block does contain tracking-feedback prediction information, only 2 target boxes are predicted on it: the position of one is obtained from the tracking feedback, and the other is predicted as in the no-feedback case. For each target box, the probability of belonging to each of the 7 classes is computed from the features extracted by the convolutional neural network. The whole image thus yields at most 15x15x5 = 1125 prediction boxes with position and class-probability information, and at least 15x15x2 = 450. Prediction boxes are merged by a mechanism determined by the overlap rate, and only boxes of the same class may merge. The final output is the set of detections of the 7 target classes over the whole image; each result comprises the target's centre coordinates, width and height, and the probability of each class.
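The overlap-rate merging of same-class prediction boxes can be sketched as follows. The greedy keep-highest-probability rule and the 0.5 threshold are illustrative assumptions; the patent states only that merging is governed by the overlap rate and restricted to boxes of the same class.

```python
def iou(a, b):
    """Intersection-over-union of boxes given as (cx, cy, w, h, ...)."""
    ax1, ay1, ax2, ay2 = a[0]-a[2]/2, a[1]-a[3]/2, a[0]+a[2]/2, a[1]+a[3]/2
    bx1, by1, bx2, by2 = b[0]-b[2]/2, b[1]-b[3]/2, b[0]+b[2]/2, b[1]+b[3]/2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2]*a[3] + b[2]*b[3] - inter
    return inter / union if union > 0 else 0.0

def merge_boxes(boxes, thr=0.5):
    """Greedy merge: keep the highest-probability box of each overlapping
    same-class group (a simplified stand-in for the patent's merge rule).
    Each box is (cx, cy, w, h, class_id, prob)."""
    boxes = sorted(boxes, key=lambda b: -b[5])
    kept = []
    for b in boxes:
        if all(k[4] != b[4] or iou(k, b) < thr for k in kept):
            kept.append(b)
    return kept

dets = merge_boxes([(100, 100, 40, 40, 0, 0.9),
                    (102, 101, 42, 38, 0, 0.6),   # same class, heavy overlap: merged away
                    (300, 100, 40, 40, 1, 0.8)])  # different class: kept
```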
Each detected target is tracked starting from its centre, with a colour histogram as the feature. The mean-shift tracking algorithm iteratively searches for the region that best matches the target, using the Bhattacharyya coefficient as the similarity measure between the target template and the candidate target. This finally yields, within a local range, the coordinate point that best matches the detected target, and this point is fed back to the convolutional neural network as prediction information. In the present invention, every block B covered by a region finally obtained by mean shift shares the prediction information.
When a target detected in the current frame and a target detected in the previous frame belong to the same class, the pairwise overlapping areas of the targets are computed, and the resulting overlap-area matrix serves as the feature-matching matrix between the previous frame's targets and the currently detected targets. When the overlapping area exceeds a certain threshold, the two are considered the same target (a tracking match), and the target's trajectory and class probability are updated. A current-frame detection with no match in the previous frame is treated as a newly appeared target, and a new track is established for it. A previous-frame target with no match among the current detections is considered to have disappeared and is deleted from the tracking queue. Through this step, the trajectories of humans and articles are built up frame by frame, and their class probabilities are updated in real time.
Through the above steps the following variables are obtained: the number of people on the left of the fence, N_L; the number on the right, N_R; the average probability P_H of the humans detected in the ROI on both sides of the fence; the average motion direction V_HL of the left-side humans and V_HR of the right-side humans; the average probability P_O of the articles detected near the fence between the humans; and the average motion direction V_O of the articles. All motion directions above are measured relative to the horizontal direction perpendicular to the fence; with θ the angle between a motion direction and this reference direction, cos θ is taken as the probability value of the motion direction. Let A be the variable indicating a cross-fence-delivery alarm. A Bayesian network over A and N_L, N_R, P_H, V_HL, V_HR, P_O, V_O can then be built, and the probability that A occurs is estimated from the observed variables N_L, N_R, P_H, V_HL, V_HR, P_O, V_O, finally realizing the detection of cross-fence delivery.
Traditional detection of cross-fence delivery relies mainly on people staring at monitor screens, which causes fatigue over time, or on means such as infrared beams and vibration detection, which cause large numbers of false alarms. The partition delivery detection algorithm of the present invention, combining deep learning, the mean-shift tracking algorithm and a Bayesian network, frees security staff from the heavy workload of continuously watching monitor screens and greatly reduces false alarms.
Brief description of the drawings
Fig. 1 is a schematic diagram of cross-fence delivery in the present invention.
Fig. 2 is a flow chart of the partition delivery detection algorithm of the present invention, combining deep learning, the mean-shift tracking algorithm and a Bayesian network.
Fig. 3 is a schematic diagram illustrating the layers of the convolutional neural network of the present invention.
Fig. 4 is the Bayesian-network structure diagram for cross-fence delivery in the present invention.
Embodiment
The present invention is further explained below with reference to the accompanying drawings and concrete examples. It should be noted that the examples described below are intended to aid understanding of the invention; they are only a part of it and do not limit its scope of protection.
As shown in Fig. 2, the present invention is realized through a series of steps, from capturing each frame image to triggering the alarm.
In step 201, the video stream is captured from a front-end device, which may be an IPC, an NVR, a DVR or the like, but is not limited to these: any front-end device from which a video stream can be obtained will do. The collected stream is decoded into YUV-format frame images, which are then converted to RGB images by colour-space conversion.
In step 202, a manually pre-calibrated ROI layer is applied: every pixel falling inside the ROI layer is treated as a valid pixel. This step filters out the influence of distant interfering targets and finally forms the frame image that is input to the convolutional neural network.
In step 203, as shown in Fig. 3, the present invention uses a deep-learning network of 16 convolutional layers, 4 pooling layers, 1 merge layer and 1 fully connected layer, ending with a classification layer. The convolutional layers use 7x7, 5x5 and 3x3 kernels. The pooling layers use 2x2 windows to reduce the size of the feature maps. The 16th layer and the 19th layer are merged, and their output is the 20th layer. The network parameters are trained by first pre-training on 2,000,000 class-labelled samples, then fine-tuning on images of the 7 target classes from monitoring scenes (subway scenes, residential entrances and surrounding scenes, etc.); the network parameters are obtained after convergence. During detection, the frame image is scaled to a uniform 480x480 image I, features are extracted by the convolutional neural network, I is divided into 15x15 blocks, prediction boxes are screened for each block according to the tracking feedback, and the probabilities, coordinates and widths and heights of the 7 target classes are estimated; finally, prediction boxes whose overlap exceeds a certain threshold are merged to form the 7-class detection results. Each per-class detection result is expressed by centre coordinates, width and height, and a probability.
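The assignment of a tracker-predicted centre to the block that receives the feedback can be sketched as follows; the floor-division scheme is an assumption consistent with the 480/15 = 32-pixel block size, not a formula stated in the patent.

```python
GRID, INPUT = 15, 480
CELL = INPUT // GRID  # 32-pixel blocks

def block_index(cx, cy):
    """Map a predicted target centre (in 480x480 input coordinates)
    to the (row, col) of the 15x15 block that receives the tracking feedback."""
    col = min(GRID - 1, max(0, int(cx) // CELL))
    row = min(GRID - 1, max(0, int(cy) // CELL))
    return row, col

# A centre at (100, 250) falls in block (row 7, col 3).
rc = block_index(100, 250)
```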
In step 204, the position of each target in the next frame is estimated with the mean-shift algorithm from the targets detected in step 203. Mean shift is a gradient-based optimization method that iteratively searches for the region best matching the target model; it finds a locally optimal solution. In the present invention the Bhattacharyya coefficient is used as the similarity measure between the target template and the candidate target. Let {x_i}, i = 1, ..., n, be the pixel coordinates in the candidate-target region centred at y, and let h be the window width of the kernel profile k(x). The probability of feature u = 1, ..., m is then given by

p_u(y) = C_h · Σ_{i=1..n} k(‖(y − x_i)/h‖²) · δ[b(x_i) − u],

where

C_h = 1 / Σ_{i=1..n} k(‖(y − x_i)/h‖²)

is the normalization coefficient and b(x_i) maps pixel x_i to its feature bin. If the chosen feature is colour, p_u(y) is exactly a normalized, weighted colour histogram, the weight of each pixel being determined by its distance from the centre y through the kernel k(x). Given the feature distribution q_u of the tracked target and the distribution p_u(y) of the candidate target, the Bhattacharyya coefficient can be defined as

ρ(y) = Σ_{u=1..m} √(p_u(y) · q_u),

and the distance between the tracked-target and candidate-target features as

d(y) = √(1 − ρ(y)).

Minimizing d(y) yields the iterative formula for the new target coordinate:

y_1 = Σ_{i=1..n} x_i · w_i · g(‖(y_0 − x_i)/h‖²) / Σ_{i=1..n} w_i · g(‖(y_0 − x_i)/h‖²),

where

w_i = Σ_{u=1..m} √(q_u / p_u(y_0)) · δ[b(x_i) − u],   g(x) = −k′(x).
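One iteration of the update above can be sketched as follows, assuming the Epanechnikov profile for k (so that g = −k′ is constant inside the bandwidth and the update reduces to a weighted mean). The histogram-binning function b(x_i) is represented by a precomputed bin index per pixel.

```python
import numpy as np

def mean_shift_step(y0, pixels, bins, q, h):
    """One mean-shift update for colour-histogram tracking.
    y0     -- current centre estimate, shape (2,)
    pixels -- pixel coordinates in the candidate window, shape (n, 2)
    bins   -- histogram bin index b(x_i) of each pixel, shape (n,)
    q      -- target-model histogram q_u, shape (m,)
    h      -- kernel bandwidth (window radius)"""
    m = len(q)
    # candidate histogram p_u(y0) over pixels inside the bandwidth
    d2 = np.sum((pixels - y0) ** 2, axis=1) / h**2
    inside = d2 < 1.0
    p = np.bincount(bins[inside], minlength=m).astype(float)
    p /= max(p.sum(), 1e-12)
    # weights w_i = sqrt(q_u / p_u(y0)) at each pixel's bin
    w = np.sqrt(q[bins] / np.maximum(p[bins], 1e-12))
    w[~inside] = 0.0
    return (pixels * w[:, None]).sum(axis=0) / max(w.sum(), 1e-12)

# Pixels of the model colour clustered right of the start point pull the centre rightward.
pts = np.array([[10.0, 0.0], [12.0, 0.0], [11.0, 1.0]])
b = np.array([0, 0, 0])
q = np.array([1.0])
y1 = mean_shift_step(np.array([8.0, 0.0]), pts, b, q, h=10.0)
```

In practice the step is repeated until ‖y_1 − y_0‖ falls below a small tolerance, and the converged point is the coordinate fed back to the network.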
The target position predicted by mean shift is fed back to the convolutional neural network of step 203.
In step 205, when a target detected in the current frame and a target detected in the previous frame (from steps 203 and 204) belong to the same class, the pairwise overlapping areas of the targets are computed to generate the overlap-area matrix, and the overlap coefficient is computed as the feature-matching matrix between the previous frame's targets and the currently detected targets. The overlap coefficient η between two targets A and B is computed from their overlapping area (e.g. the intersection area divided by the smaller of the two target areas). When η exceeds a certain threshold, the two are considered the same target (a tracking match), and the target's trajectory and class probability are updated. A current-frame detection with no match in the previous frame is treated as a newly appeared target, and a new track is established for it. A previous-frame target with no match among the current detections is considered to have disappeared and is deleted from the tracking queue. Through this step, the trajectories of humans and articles are built up frame by frame, and their class probabilities are updated in real time.
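The per-frame bookkeeping of step 205 can be sketched as follows; the dictionary representation and the intersection-over-minimum form of η are illustrative assumptions.

```python
def area(t):
    return t['w'] * t['h']

def overlap(a, b):
    """Intersection area of two axis-aligned boxes given by centre and size."""
    iw = min(a['x']+a['w']/2, b['x']+b['w']/2) - max(a['x']-a['w']/2, b['x']-b['w']/2)
    ih = min(a['y']+a['h']/2, b['y']+b['h']/2) - max(a['y']-a['h']/2, b['y']-b['h']/2)
    return max(0.0, iw) * max(0.0, ih)

def update_tracks(tracks, detections, thr=0.5):
    """Match same-class detections to existing tracks by the overlap coefficient
    eta = intersection / min(area); matched tracks are updated, unmatched
    detections start new tracks, unmatched tracks are deleted."""
    new_tracks, used = [], set()
    for t in tracks:
        for i, d in enumerate(detections):
            if i in used or d['cls'] != t['cls']:
                continue
            eta = overlap(t, d) / min(area(t), area(d))
            if eta > thr:
                new_tracks.append({**d, 'trail': t['trail'] + [(d['x'], d['y'])]})
                used.add(i)
                break
    for i, d in enumerate(detections):
        if i not in used:
            new_tracks.append({**d, 'trail': [(d['x'], d['y'])]})
    return new_tracks

tracks = [{'x': 50, 'y': 50, 'w': 20, 'h': 20, 'cls': 'person', 'trail': [(50, 50)]}]
dets   = [{'x': 52, 'y': 51, 'w': 20, 'h': 20, 'cls': 'person'},
          {'x': 200, 'y': 80, 'w': 10, 'h': 10, 'cls': 'bag'}]
tracks = update_tracks(tracks, dets)
```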
In step 206, the following variables are obtained from step 205: the number of people on the left of the fence, N_L; the number on the right, N_R; the average probability P_H of the humans detected in the ROI on both sides of the fence; the average motion direction V_HL of the left-side humans and V_HR of the right-side humans; the average probability P_O of the articles detected near the fence between the humans; and the average motion direction V_O of the articles. The Bayesian network shown in Fig. 4 is established. Assuming that, conditioned on A, the variables N_L, N_R, P_H, V_HL, V_HR, P_O and V_O are mutually independent, then

P(N_L, N_R, P_H, P_O, V_HL, V_HR, V_O | A) = P(N_L|A) · P(N_R|A) · P(P_H|A) · P(P_O|A) · P(V_HL|A) · P(V_HR|A) · P(V_O|A).
So when N_L, N_R, P_H, V_HL, V_HR, P_O and V_O are observed, the probability that A occurs satisfies

P(A | N_L, N_R, P_H, P_O, V_HL, V_HR, V_O) ∝ P(N_L|A) · P(N_R|A) · P(P_H|A) · P(P_O|A) · P(V_HL|A) · P(V_HR|A) · P(V_O|A) · P(A).
Assuming that P(N_L|A), P(N_R|A), P(P_H|A), P(P_O|A), P(V_HL|A), P(V_HR|A) and P(V_O|A) all follow Gaussian distributions, the parameters of this Bayesian network are estimated from actual samples. Finally, the probability that cross-fence delivery has occurred is estimated from the observed values of N_L, N_R, P_H, V_HL, V_HR, P_O and V_O. Broadly speaking, when two people approach the fence from opposite sides and a bag or other article passes over or through the fence between them, the probability of cross-fence delivery is high.
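The posterior computation can be sketched with a two-class naive-Bayes model over a subset of the variables; the Gaussian parameters and the prior below are illustrative placeholders, not values estimated from real samples.

```python
import math

def gaussian(x, mu, sigma):
    """Gaussian density N(x; mu, sigma^2)."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def alarm_posterior(obs, params_a, params_not_a, prior_a=0.01):
    """Posterior P(A | observations) for the naive-Bayes model above:
    each variable is conditionally independent given A, with Gaussian
    class-conditional densities for both A and not-A."""
    like_a = prior_a
    like_n = 1.0 - prior_a
    for name, x in obs.items():
        like_a *= gaussian(x, *params_a[name])
        like_n *= gaussian(x, *params_not_a[name])
    return like_a / (like_a + like_n)

# Hypothetical (mean, std) of each variable given A / not-A.
pa = {'NL': (1.5, 0.7), 'NR': (1.5, 0.7), 'PO': (0.8, 0.15)}
pn = {'NL': (0.3, 0.5), 'NR': (0.3, 0.5), 'PO': (0.2, 0.2)}
# One person on each side and a high-probability article near the fence.
p = alarm_posterior({'NL': 2, 'NR': 1, 'PO': 0.85}, pa, pn)
```

With these placeholder parameters the evidence strongly favours the alarm hypothesis, mirroring the "two people plus an article at the fence" pattern described above.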
The present invention combines traditional algorithms such as mean shift and Bayesian networks with deep-learning algorithms and detects the occurrence of cross-fence delivery through probability estimation, achieving higher accuracy and better generalization.

Claims (7)

1. A partition delivery detection method, characterized in that: deep learning, the mean-shift tracking algorithm and a Bayesian network are combined; a convolutional-neural-network model recognizing 7 classes, namely human body, backpack, handbag, suitcase, satchel, mineral-water bottle and cup, is trained with a large number of samples; the mean-shift tracking results are fed back to the convolutional neural network; and finally the probability that cross-fence delivery has occurred is estimated with a Bayesian network.
2. The partition delivery detection method according to claim 1, characterized in that the video stream is extracted from a front-end device and decoded into YUV images, which are then converted into RGB images; the areas on both sides of the fence are retained through an ROI, and the pixels of distant areas are filled with R=G=B=128.
3. The partition delivery detection method according to claim 1, characterized in that the convolutional neural network recognizes 7 classes, namely human body, backpack, handbag, suitcase, satchel, mineral-water bottle and cup; the frame image F is scaled to the invention's reference resolution of 480x480 to form image I; region-convolutional-neural-network features are extracted over the whole of I; I is divided into 15x15 blocks B, and each block estimates target boxes in a different mode according to whether it has tracking-information feedback; the target probabilities and widths and heights contained in each block are then estimated and merged through the overlap-rate mechanism, finally outputting each target's centre coordinates, width and height, and the probability of each class.
4. The partition delivery detection method according to claim 1, characterized in that each detected target is tracked starting from its centre, with a colour histogram as the feature; the mean-shift tracking algorithm iteratively searches for the region that best matches the target, and the region's coordinates, width and height are fed back to the convolutional neural network as prediction information.
5. The partition delivery detection method according to claim 1, characterized in that when a target of the current frame and a target detected in the previous frame belong to the same class, the pairwise overlapping areas of the targets are computed to generate the overlap-area matrix, which serves as the feature-matching matrix between the previous frame's targets and the currently detected targets; target tracks, including their creation, update and deletion, are maintained through the feature-matching matrix.
6. The partition delivery detection method according to claim 1, characterized in that a Bayesian network is established over variables including the number of people on the left of the fence, the number of people on the right, the average probability of the humans detected in the ROI on both sides of the fence, the average motion direction of the left-side humans, the average motion direction of the right-side humans, the average probability of the articles detected near the fence between the humans, and the average motion direction of the articles; the parameters of the Bayesian network are estimated from samples and used for cross-fence delivery detection.
7. The partition delivery detection method according to claim 1, characterized in that the convolutional-neural-network model is trained on a large amount of data and fine-tuned with data from the monitoring scene.
CN201711372450.3A 2017-12-19 2017-12-19 Partition delivery detection method Active CN107977646B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711372450.3A CN107977646B (en) 2017-12-19 2017-12-19 Partition delivery detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711372450.3A CN107977646B (en) 2017-12-19 2017-12-19 Partition delivery detection method

Publications (2)

Publication Number Publication Date
CN107977646A true CN107977646A (en) 2018-05-01
CN107977646B CN107977646B (en) 2021-06-29

Family

ID=62006918

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711372450.3A Active CN107977646B (en) 2017-12-19 2017-12-19 Partition delivery detection method

Country Status (1)

Country Link
CN (1) CN107977646B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109597069A (en) * 2018-12-25 2019-04-09 山东雷诚电子科技有限公司 An active millimetre-wave imaging method for privacy protection
CN110443834A (en) * 2018-05-04 2019-11-12 大猩猩科技股份有限公司 A distributed object tracking system
CN111091098A (en) * 2019-12-20 2020-05-01 浙江大华技术股份有限公司 Training method and detection method of detection model and related device
CN111144232A (en) * 2019-12-09 2020-05-12 国网智能科技股份有限公司 Transformer substation electronic fence monitoring method based on intelligent video monitoring, storage medium and equipment
CN112016528A (en) * 2020-10-20 2020-12-01 成都睿沿科技有限公司 Behavior recognition method and device, electronic equipment and readable storage medium
CN112668377A (en) * 2019-10-16 2021-04-16 清华大学 Information recognition system and method thereof
CN112818844A (en) * 2021-01-29 2021-05-18 成都商汤科技有限公司 Security check abnormal event detection method and device, electronic equipment and storage medium
CN112967320A (en) * 2021-04-02 2021-06-15 浙江华是科技股份有限公司 Ship target detection tracking method based on bridge collision avoidance
WO2023071188A1 (en) * 2021-10-29 2023-05-04 上海商汤智能科技有限公司 Abnormal-behavior detection method and apparatus, and electronic device and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101146218A (en) * 2007-11-02 2008-03-19 北京博睿视科技有限责任公司 Video monitoring system of built-in smart video processing device based on serial port
CN102103684A (en) * 2009-12-21 2011-06-22 新谊整合科技股份有限公司 Image identification system and method
CN105100727A (en) * 2015-08-14 2015-11-25 河海大学 Real-time tracking method for specified object in fixed position monitoring image
CN105300347A (en) * 2015-06-29 2016-02-03 国家电网公司 Distance measuring device and method
CN105825198A (en) * 2016-03-29 2016-08-03 深圳市佳信捷技术股份有限公司 Pedestrian detection method and device
CN106503761A (en) * 2016-10-31 2017-03-15 紫光智云(江苏)物联网科技有限公司 Article security-inspection image judging system and method
US20170263005A1 (en) * 2016-03-10 2017-09-14 Sony Corporation Method for moving object detection by a kalman filter-based approach

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101146218A (en) * 2007-11-02 2008-03-19 北京博睿视科技有限责任公司 Video monitoring system of built-in smart video processing device based on serial port
CN102103684A (en) * 2009-12-21 2011-06-22 新谊整合科技股份有限公司 Image identification system and method
CN105300347A (en) * 2015-06-29 2016-02-03 国家电网公司 Distance measuring device and method
CN105100727A (en) * 2015-08-14 2015-11-25 河海大学 Real-time tracking method for specified object in fixed position monitoring image
US20170263005A1 (en) * 2016-03-10 2017-09-14 Sony Corporation Method for moving object detection by a kalman filter-based approach
CN105825198A (en) * 2016-03-29 2016-08-03 深圳市佳信捷技术股份有限公司 Pedestrian detection method and device
CN106503761A (en) * 2016-10-31 2017-03-15 紫光智云(江苏)物联网科技有限公司 Article security-inspection image judging system and method

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
CHAHYATI, D., ET AL.: "Tracking people by detection using CNN features", PROCEDIA COMPUTER SCIENCE *
SALVADOR, A., ET AL.: "Faster R-CNN features for instance search", PROCEEDINGS OF THE IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS *
付路瑶: "Research on recognition of abnormal human behaviour in video data under scene constraints", China Master's Theses Full-text Database *
姚雨婷: "Design and development of the Internet-of-Things integration system for Zhejiang No. 2 Prison", China Master's Theses Full-text Database *
朱安娜: "Research on scene text localization and multi-oriented character recognition based on convolutional neural networks", China Doctoral Dissertations Full-text Database *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110443834A (en) * 2018-05-04 2019-11-12 大猩猩科技股份有限公司 Distributed object tracking system
CN109597069A (en) * 2018-12-25 2019-04-09 山东雷诚电子科技有限公司 Active millimeter-wave (MMW) imaging method for privacy protection
CN112668377A (en) * 2019-10-16 2021-04-16 清华大学 Information recognition system and method thereof
CN111144232A (en) * 2019-12-09 2020-05-12 国网智能科技股份有限公司 Transformer substation electronic fence monitoring method based on intelligent video monitoring, storage medium and equipment
CN111091098A (en) * 2019-12-20 2020-05-01 浙江大华技术股份有限公司 Training method and detection method of detection model and related device
CN111091098B (en) * 2019-12-20 2023-08-15 浙江大华技术股份有限公司 Training method of detection model, detection method and related device
CN112016528A (en) * 2020-10-20 2020-12-01 成都睿沿科技有限公司 Behavior recognition method and device, electronic equipment and readable storage medium
CN112818844A (en) * 2021-01-29 2021-05-18 成都商汤科技有限公司 Security check abnormal event detection method and device, electronic equipment and storage medium
WO2022160569A1 (en) * 2021-01-29 2022-08-04 成都商汤科技有限公司 Method and apparatus for detecting security check anomaly event, electronic device, and storage medium
CN112967320A (en) * 2021-04-02 2021-06-15 浙江华是科技股份有限公司 Ship target detection tracking method based on bridge collision avoidance
WO2023071188A1 (en) * 2021-10-29 2023-05-04 上海商汤智能科技有限公司 Abnormal-behavior detection method and apparatus, and electronic device and storage medium

Also Published As

Publication number Publication date
CN107977646B (en) 2021-06-29

Similar Documents

Publication Publication Date Title
CN107977646A (en) Detection algorithm for objects passed across a barrier
CN112257557B (en) High-altitude parabolic detection and identification method and system based on machine vision
Lopez-Fuentes et al. Review on computer vision techniques in emergency situations
Wang et al. Detection of abnormal visual events via global optical flow orientation histogram
US8131012B2 (en) Behavioral recognition system
Liu et al. Intelligent video systems and analytics: A survey
Lim et al. iSurveillance: Intelligent framework for multiple events detection in surveillance videos
CN111932583A (en) Space-time information integrated intelligent tracking method based on complex background
CN103902966B (en) Video interactive affair analytical method and device based on sequence space-time cube feature
US10210392B2 (en) System and method for detecting potential drive-up drug deal activity via trajectory-based analysis
Patil et al. Suspicious movement detection and tracking based on color histogram
Zhao et al. Exploiting spatial-temporal correlations for video anomaly detection
Ansari et al. An expert video surveillance system to identify and mitigate shoplifting in megastores
Afsar et al. Automatic human trajectory destination prediction from video
CN109754411A (en) Building pivot frame larceny detection method and system are climbed based on optical flow method target following
Pouyan et al. Propounding first artificial intelligence approach for predicting robbery behavior potential in an indoor security camera
Mishra et al. Real-Time pedestrian detection using YOLO
Lee et al. Hostile intent and behaviour detection in elevators
Kanthaseelan et al. CCTV Intelligent Surveillance on Intruder Detection
CN112580633B (en) Public transport passenger flow statistics device and method based on deep learning
Song et al. A novel laser-based system: Fully online detection of abnormal activity via an unsupervised method
Pathak et al. Applying transfer learning to traffic surveillance videos for accident detection
Masood et al. Identification of Anomaly Scenes in Videos Using Graph Neural Networks
Mahin et al. A simple approach for abandoned object detection
Babiyola et al. A hybrid learning frame work for recognition abnormal events intended from surveillance videos

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231130

Address after: Room 609-1, 6th Floor, Import and Export Exhibition and Trading Center, Huanghua Comprehensive Bonded Zone, Huanghua Town, Lingkong Block, Changsha Area, Changsha Free Trade Zone, Hunan Province, 410137

Patentee after: Hunan Shengxun Technology Co.,Ltd.

Address before: 100190 Room 403, 4th floor, building 6, No.13, Beiertiao, Zhongguancun, Haidian District, Beijing

Patentee before: BEIJING BRAVEVIDEO TECHNOLOGY CO.,LTD.