CN110222735A - Method for recognizing stolen and abandoned objects based on neural networks and background modeling - Google Patents
Method for recognizing stolen and abandoned objects based on neural networks and background modeling
- Publication number
- CN110222735A CN110222735A CN201910415787.0A CN201910415787A CN110222735A CN 110222735 A CN110222735 A CN 110222735A CN 201910415787 A CN201910415787 A CN 201910415787A CN 110222735 A CN110222735 A CN 110222735A
- Authority
- CN
- China
- Prior art keywords
- stolen
- background
- mode
- residue
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
Abstract
The invention proposes a method for recognizing stolen and abandoned objects based on neural networks and background modeling. First, a background model is established; then pedestrian detection is performed on each video frame, and the detected pedestrian regions are replaced with the corresponding background-model pixels to generate a template image of the frame. The template image and the background model are compared pixel by pixel to produce a binary comparison map, which is cleaned by morphological processing to obtain candidate regions of stolen or abandoned objects. Each candidate region is fed into an image-classification network based on convolutional neural networks, which precisely classifies whether it is a stolen or abandoned object. Comparing the background model with the frame's template image accurately distinguishes abandonment from theft, and the convolutional classification network adapts to a variety of complex scenes, maintaining high detection accuracy while reducing false alarms.
Description
Technical field
The present invention relates to the field of video detection with deep features, and in particular to a method for recognizing stolen and abandoned objects based on neural networks and background modeling.
Background art
As awareness of security risks grows worldwide, intelligent video surveillance systems are being deployed ever more widely. Detection of abandoned and lost objects is an important component of such systems and is widely applied in public places such as railway stations, airports, and museums. At present, problems such as occlusion and the crowding of moving targets in the scene make research on abandoned-object detection in intelligent video surveillance systems difficult.
Existing recognition techniques for such targets fall roughly into three classes. The first is stolen/abandoned-object detection based on background modeling: the monitored scene is modeled by one of several methods, such as Gaussian-mixture modeling or dual-background modeling with two learning rates; the background model is then compared with the live feed to find suspicious objects, and subsequent morphological operations determine the exact target position. The second is stolen/abandoned-object detection based on target tracking: moving targets are located by optical-flow computation or other methods and tracked continuously; when a target's motion state changes from moving to static, or from static to moving, the event is classified as abandonment or theft. The third is stolen/abandoned-object detection based on convolutional-neural-network target detection: with the rise of convolutional neural networks, target recognition based on them has achieved important breakthroughs; researchers treat abandoned and stolen objects as a specific target class and, using the network's automatically learned features, regress the target position to finally detect stolen or abandoned objects.
Each of these three classes currently has its own defects. The first class depends heavily on the quality of the background model; in practice, scenes are complex, and a sufficiently good background model is hard to establish, leading to many false alarms and missed detections. The second class relies on the precision of target tracking; in practice, targets are easily occluded or overlapping, so tracks are readily lost or switched, again causing serious false alarms and missed detections. The third class, based on target-detection neural networks, is computationally expensive and hard to run in real time; moreover, most abandoned and stolen objects are small targets, for which such algorithms have low detection accuracy, resulting in slow response and serious missed detections.
Summary of the invention
In view of the deficiencies of the prior art, the invention proposes a method for recognizing stolen and abandoned objects based on neural networks and background modeling that adapts to a variety of complex scenes, maintaining high detection accuracy while reducing false alarms.
To achieve the above object, the invention adopts the following technical solution:
A method for recognizing stolen and abandoned objects based on neural networks and background modeling, comprising the following steps:
S1. Learn the video images with a Gaussian mixture model to establish the background model. The Gaussian-mixture background modeling algorithm proceeds as follows:
S11. Compare each new pixel value X_t with the current K modes by the formula below, looking for a distribution mode that matches the new pixel value, i.e., one whose mean deviates from it by at most 2.5σ: |X_t − μ_{i,t−1}| ≤ 2.5σ_{i,t−1}.
S12. If the matched mode satisfies the background requirement, the pixel belongs to the background; otherwise it belongs to the foreground.
S13. Update the weight of each mode as follows, where α is the learning rate, M_{k,t} = 1 for the matched mode and M_{k,t} = 0 otherwise, and then normalize the weights: ω_{k,t} = (1 − α)·ω_{k,t−1} + α·M_{k,t}.
S14. Keep the mean μ and standard deviation σ of unmatched modes unchanged, and update the parameters of the matched mode as follows:
ρ = α·η(X_t | μ_k, σ_k)
μ_t = (1 − ρ)·μ_{t−1} + ρ·X_t
S15. If no mode matches in step S11, replace the mode with the smallest weight: set its mean to the current pixel value, its standard deviation to a large initial value, and its weight to a small value.
S16. Sort the modes in descending order of ω/σ, so that modes with large weight and small standard deviation come first.
S17. Select the first B modes as the background, where B is the smallest number of modes whose cumulative weight exceeds the proportion T occupied by the background: B = argmin_b (Σ_{k=1}^{b} ω_k > T).
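The per-pixel update of steps S11–S15 can be sketched in pure Python as follows. The learning rate and the "large" initial standard deviation and "small" initial weight used when replacing a mode are illustrative assumptions, since the patent does not fix their values.

```python
import math

ALPHA = 0.01          # learning rate α (assumed value)
INIT_SIGMA = 30.0     # "larger" initial standard deviation for a replaced mode
INIT_WEIGHT = 0.05    # "smaller" initial weight for a replaced mode

def gaussian_pdf(x, mu, sigma):
    """η(x | μ, σ) for a one-dimensional Gaussian."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def update_pixel(modes, x):
    """One mixture update for a single gray-level pixel.

    `modes` is a list of dicts with keys 'mu', 'sigma', 'w'.
    Returns the updated, weight-normalized mode list.
    """
    matched = None
    for m in modes:
        if abs(x - m['mu']) <= 2.5 * m['sigma']:   # S11: match within 2.5σ
            matched = m
            break

    if matched is None:
        # S15: no match -> replace the lowest-weight mode
        weakest = min(modes, key=lambda m: m['w'])
        weakest.update(mu=float(x), sigma=INIT_SIGMA, w=INIT_WEIGHT)
    else:
        for m in modes:                            # S13: ω ← (1−α)·ω + α·M
            m['w'] = (1 - ALPHA) * m['w'] + ALPHA * (1.0 if m is matched else 0.0)
        rho = ALPHA * gaussian_pdf(x, matched['mu'], matched['sigma'])  # S14
        matched['mu'] = (1 - rho) * matched['mu'] + rho * x

    total = sum(m['w'] for m in modes)             # S13: normalize the weights
    for m in modes:
        m['w'] /= total
    return modes
```

Calling `update_pixel` once per frame and per pixel reproduces the qualitative behavior described above: a matched mode gains weight and drifts toward the observed value, while an unmatched observation evicts the weakest mode.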
S2. Perform pedestrian detection on each video frame using a pedestrian detection algorithm based on convolutional neural networks.
Preferably, the pedestrian detection algorithm is the YOLOv3 algorithm.
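The detector in step S2 can be treated as a black box that emits labeled boxes; the sketch below only shows the post-processing that keeps confident pedestrian detections. The tuple format, the class name `person`, and the 0.5 threshold are assumptions of this sketch, not fixed by the patent.

```python
def filter_pedestrians(detections, conf_threshold=0.5):
    """Keep only confident 'person' boxes from a detector's raw output.

    `detections` is assumed to be a list of (class_name, confidence,
    (x, y, w, h)) tuples, as a YOLOv3-style detector might return after
    decoding; only the boxes of confident pedestrians are kept.
    """
    return [box for cls, conf, box in detections
            if cls == 'person' and conf >= conf_threshold]
```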
S3. Replace the pedestrian regions with the background model obtained in step S1 to generate the template image of the video frame.
S4. Compare the template image and the background model pixel by pixel, setting a pixel to 1 if the two are equal and to 0 otherwise, to generate the binary comparison map.
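Steps S3 and S4 amount to the following sketch, using plain row-major lists of gray values. Note one assumed convention change: this sketch sets 1 where template and background *differ*, so white pixels directly mark candidate regions; the patent states the mirror convention (equal pixels set to 1).

```python
def make_template(frame, background, person_boxes):
    """Step S3 as a sketch: paint each detected pedestrian box (x, y, w, h)
    with the corresponding background-model pixels, so pedestrians do not
    appear in the pixel-wise comparison."""
    template = [row[:] for row in frame]
    for (x, y, w, h) in person_boxes:
        for r in range(y, y + h):
            for c in range(x, x + w):
                template[r][c] = background[r][c]
    return template

def compare_map(template, background):
    """Step S4 as a sketch: pixel-by-pixel comparison of template and
    background, marking differing pixels with 1 (assumed convention)."""
    return [[1 if t != b else 0 for t, b in zip(trow, brow)]
            for trow, brow in zip(template, background)]
```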
S5. Apply morphological processing to the binary map to obtain the candidate regions of stolen or abandoned objects.
S6. Feed each candidate region into an image-classification network based on convolutional neural networks, which precisely classifies whether it is a stolen or abandoned object.
Preferably, a ResNet network followed by a fully connected layer classifies background regions versus stolen or abandoned objects.
Further preferably, abandoned objects are distinguished from stolen objects by the source of the candidate region: if it comes from the background model, a theft has occurred; conversely, if it comes from the template image of the video frame, an abandonment has occurred.
The beneficial effects of the invention are as follows: compared with existing methods, the invention removes pedestrian interference with a pedestrian detection network; comparing the background model with the frame's template image accurately distinguishes abandonment from theft; and the convolutional classification network adapts to a variety of complex scenes, maintaining high detection accuracy while reducing false alarms and detecting stolen or abandoned objects more accurately.
Brief description of the drawings
Fig. 1 is a flow chart of the method for recognizing stolen and abandoned objects of the invention.
Fig. 2 is the structure of the convolutional pedestrian detection network of Embodiment 5, where A extracts target features through convolutional layers and B performs position regression and target classification on the extracted features.
Fig. 3 is the binary comparison map between the video-frame template image and the background model of Embodiment 5.
Fig. 4 is the structure of the classification network based on convolutional neural networks of Embodiment 5.
Specific embodiments
To explain the purpose and the technical solution of the invention more clearly and in detail, the invention is further described below through related embodiments. The following embodiments only illustrate implementations of the invention and do not limit its scope of protection.
Embodiment 1
A method for recognizing stolen and abandoned objects based on neural networks and background modeling, comprising the following steps:
S1. Learn the video images with a Gaussian mixture model to establish the background model.
S2. Perform pedestrian detection on each video frame using a pedestrian detection algorithm based on convolutional neural networks.
S3. Replace the pedestrian regions with the background model obtained in step S1 to generate the template image of the video frame.
S4. Compare the template image and the background model pixel by pixel, setting a pixel to 1 if the two are equal and to 0 otherwise, to generate the binary comparison map.
S5. Apply morphological processing to the binary map to obtain the candidate regions of stolen or abandoned objects.
S6. Feed each candidate region into an image-classification network based on convolutional neural networks, which precisely classifies whether the region is a stolen or abandoned object.
Embodiment 2
On the basis of embodiment 1:
The video images are learned with a Gaussian mixture model to establish the background model, comprising the following steps:
S11. Compare each new pixel value X_t with the current K models (K = 2–10) by the formula below, looking for a distribution mode that matches the new pixel value, i.e., one whose mean deviates from it by at most 2.5σ: |X_t − μ_{i,t−1}| ≤ 2.5σ_{i,t−1};
S12. If the matched mode satisfies the background requirement, the pixel belongs to the background; otherwise it belongs to the foreground;
S13. Update the weight of each mode as follows, where α is the learning rate, M_{k,t} = 1 for the matched mode and M_{k,t} = 0 otherwise, and then normalize the weights: ω_{k,t} = (1 − α)·ω_{k,t−1} + α·M_{k,t};
S14. Keep the mean μ and standard deviation σ of unmatched modes unchanged, and update the parameters of the matched mode as follows:
ρ = α·η(X_t | μ_k, σ_k)
μ_t = (1 − ρ)·μ_{t−1} + ρ·X_t
S15. If no mode matches in step S11, replace the mode with the smallest weight: set its mean to the current pixel value, its standard deviation to a large initial value, and its weight to a small value.
S16. Sort the modes in descending order of ω/σ, so that modes with large weight and small standard deviation come first.
S17. Select the first B modes as the background, where B is the smallest number of modes whose cumulative weight exceeds the proportion T occupied by the background: B = argmin_b (Σ_{k=1}^{b} ω_k > T).
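Steps S16–S17 can be sketched as follows; ranking by the fitness ratio ω/σ follows the standard Gaussian-mixture background-subtraction formulation, which this sketch assumes, and T = 0.7 is an illustrative value.

```python
def select_background_modes(modes, T=0.7):
    """S16-S17 as a sketch: rank modes by ω/σ so that high-weight,
    low-variance modes come first, then keep the first B modes whose
    cumulative weight first exceeds the background proportion T.

    `modes` is a list of dicts with keys 'w' and 'sigma'.
    """
    ranked = sorted(modes, key=lambda m: m['w'] / m['sigma'], reverse=True)
    total, background = 0.0, []
    for m in ranked:
        background.append(m)
        total += m['w']
        if total > T:          # S17: smallest B with Σ ω_k > T
            break
    return background
```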
Embodiment 3
On the basis of embodiment 1:
The pedestrian detection algorithm of step S2 is the YOLOv3 algorithm.
In step S6, a ResNet network followed by a fully connected layer classifies background regions versus stolen or abandoned objects. The convolutional classification network adapts to a variety of complex scenes, maintaining high detection accuracy while reducing false alarms.
Embodiment 4
On the basis of embodiment 1:
The pedestrian detection algorithm of step S2 is the YOLOv3 algorithm.
In step S6, a ResNet network followed by a fully connected layer classifies background regions versus stolen or abandoned objects.
Abandoned objects are distinguished from stolen objects by the source of the candidate region: if it comes from the background model, a theft has occurred; conversely, if it comes from the template image of the video frame, an abandonment has occurred. An object present in the background model but missing from the template image of the video frame indicates that an item has disappeared from the monitored scene, i.e., a theft may have occurred; an object present in the template image of the video frame but absent from the original background model indicates that an item has been left behind and may have been lost in the scene.
Embodiment 5
The method for recognizing stolen and abandoned objects of the invention is applied to the video captured by a surveillance camera covering a certain area of a subway station, with the following specific steps:
S1. Learn the video images with a Gaussian mixture model to establish the background model:
S11. Compare each new pixel value X_t with the current 5 models by the formula below, looking for a distribution mode that matches the new pixel value, i.e., one whose mean deviates from it by at most 2.5σ: |X_t − μ_{i,t−1}| ≤ 2.5σ_{i,t−1}.
S12. If the matched mode satisfies the background requirement, the pixel belongs to the background; otherwise it belongs to the foreground.
S13. Update the weight of each mode as follows, where α is the learning rate, M_{k,t} = 1 for the matched mode and M_{k,t} = 0 otherwise, and then normalize the weights: ω_{k,t} = (1 − α)·ω_{k,t−1} + α·M_{k,t}.
S14. Keep the mean μ and standard deviation σ of unmatched modes unchanged, and update the parameters of the matched mode as follows:
ρ = α·η(X_t | μ_k, σ_k)
μ_t = (1 − ρ)·μ_{t−1} + ρ·X_t
S15. If no mode matches in step S11, replace the mode with the smallest weight: set its mean to the current pixel value, its standard deviation to a large initial value, and its weight to a small value.
S16. Sort the modes in descending order of ω/σ, so that modes with large weight and small standard deviation come first.
S17. Select the first B modes as the background, where B is the smallest number of modes whose cumulative weight exceeds the proportion T occupied by the background: B = argmin_b (Σ_{k=1}^{b} ω_k > T).
S2. Perform pedestrian detection on each video frame using the YOLOv3 algorithm based on convolutional neural networks; the network structure is shown in Fig. 2, where Fig. 2A extracts features with residual convolution blocks and Fig. 2B uses the extracted features to classify pedestrian versus background and regress the pedestrian position.
S3. Replace the pedestrian regions with the background model obtained in step S1 to generate the template image of the video frame.
S4. Compare the template image and the background model pixel by pixel, setting a pixel to 1 if the two are equal and to 0 otherwise, to generate the binary comparison map shown in Fig. 3, where the white areas indicate candidate regions.
S5. Apply morphological processing to the binary map to obtain the candidate regions of stolen or abandoned objects.
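After the binary map is cleaned, step S5 must still turn white pixels into candidate boxes; a minimal sketch using connected components follows. The `min_area` filter stands in for the morphological noise removal (an assumption of this sketch; the patent does not specify the operations used).

```python
from collections import deque

def candidate_regions(binary, min_area=4):
    """Extract connected white regions of a binary map as (x, y, w, h) boxes.

    Uses 4-connected breadth-first search; components smaller than
    `min_area` pixels are discarded as noise.
    """
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] and not seen[sy][sx]:
                q = deque([(sy, sx)])
                seen[sy][sx] = True
                ys, xs = [], []
                while q:
                    y, x = q.popleft()
                    ys.append(y)
                    xs.append(x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(ys) >= min_area:
                    boxes.append((min(xs), min(ys),
                                  max(xs) - min(xs) + 1, max(ys) - min(ys) + 1))
    return boxes
```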
S6. Feed each candidate region into a ResNet network followed by a fully connected layer to classify background regions versus stolen or abandoned objects; the network structure is shown in Fig. 4. The structure is similar to Fig. 2, except that only a classification decision is made after feature extraction, with no position regression.
Abandoned objects are distinguished from stolen objects by the source of the candidate region: if it comes from the background model, a theft has occurred; conversely, if it comes from the template image of the video frame, an abandonment has occurred. Identifying video from several time periods in this way achieved an accuracy rate of 100%, i.e., accurate recognition.
The above embodiments only express specific implementations of the invention, and their description is relatively specific and detailed, but they should not therefore be interpreted as limiting the scope of the patent. It should be pointed out that those of ordinary skill in the art can make various modifications and improvements without departing from the inventive concept, and these all belong to the scope of protection of the invention.
Claims (5)
1. A method for recognizing stolen and abandoned objects based on neural networks and background modeling, characterized by comprising the following steps:
S1. learning the video images with a Gaussian mixture model to establish a background model;
S2. performing pedestrian detection on each video frame using a pedestrian detection algorithm based on convolutional neural networks;
S3. replacing the pedestrian regions with the background model obtained in step S1 to generate a template image of the video frame;
S4. comparing the template image and the background model pixel by pixel, setting a pixel to 1 if the two are equal and to 0 otherwise, to generate a binary comparison map;
S5. applying morphological processing to the binary map to obtain candidate regions of stolen or abandoned objects;
S6. feeding each candidate region into an image-classification network based on convolutional neural networks to classify whether it is a stolen or abandoned object.
2. The method for recognizing stolen and abandoned objects based on neural networks and background modeling according to claim 1, characterized in that the background model is built by the following steps:
S11. comparing each new pixel value X_t with the current K modes by the formula below, looking for a distribution mode that matches the new pixel value, i.e., one whose mean deviates from it by at most 2.5σ: |X_t − μ_{i,t−1}| ≤ 2.5σ_{i,t−1};
S12. if the matched mode satisfies the background requirement, the pixel belongs to the background; otherwise it belongs to the foreground;
S13. updating the weight of each mode as follows, where α is the learning rate, M_{k,t} = 1 for the matched mode and M_{k,t} = 0 otherwise, and then normalizing the weights: ω_{k,t} = (1 − α)·ω_{k,t−1} + α·M_{k,t};
S14. keeping the mean μ and standard deviation σ of unmatched modes unchanged, and updating the parameters of the matched mode as follows:
ρ = α·η(X_t | μ_k, σ_k)
μ_t = (1 − ρ)·μ_{t−1} + ρ·X_t
S15. if no mode matches in step S11, replacing the mode with the smallest weight: its mean is set to the current pixel value, its standard deviation to a large initial value, and its weight to a small value;
S16. sorting the modes in descending order of ω/σ, so that modes with large weight and small standard deviation come first;
S17. selecting the first B modes as the background, where B is the smallest number of modes whose cumulative weight exceeds the proportion T occupied by the background: B = argmin_b (Σ_{k=1}^{b} ω_k > T).
3. The method for recognizing stolen and abandoned objects based on neural networks and background modeling according to claim 1, characterized in that the pedestrian detection algorithm of step S2 is the YOLOv3 algorithm.
4. The method for recognizing stolen and abandoned objects based on neural networks and background modeling according to claim 1, characterized in that in step S6 a ResNet network followed by a fully connected layer classifies background regions versus stolen or abandoned objects.
5. The method for recognizing stolen and abandoned objects based on neural networks and background modeling according to claim 4, characterized in that abandoned objects are distinguished from stolen objects by the source of the candidate region: if it comes from the background model, a theft has occurred; conversely, if it comes from the template image of the video frame, an abandonment has occurred.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910415787.0A CN110222735A (en) | 2019-05-18 | 2019-05-18 | Method for recognizing stolen and abandoned objects based on neural networks and background modeling
Publications (1)
Publication Number | Publication Date |
---|---|
CN110222735A true CN110222735A (en) | 2019-09-10 |
Family
ID=67821454
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910415787.0A Pending CN110222735A (en) Method for recognizing stolen and abandoned objects based on neural networks and background modeling
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110222735A (en) |
- 2019-05-18: CN CN201910415787.0A patent/CN110222735A/en, status Pending
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111401128A (en) * | 2020-01-16 | 2020-07-10 | 杭州电子科技大学 | Method for improving vehicle recognition rate |
CN111556278A (en) * | 2020-05-21 | 2020-08-18 | 腾讯科技(深圳)有限公司 | Video processing method, video display device and storage medium |
CN112651355A (en) * | 2020-12-29 | 2021-04-13 | 四川警察学院 | Hazardous article identification early warning method based on Gaussian mixture model and convolutional neural network |
CN114022468A (en) * | 2021-11-12 | 2022-02-08 | 珠海安联锐视科技股份有限公司 | Method for detecting article leaving and losing in security monitoring |
CN114022468B (en) * | 2021-11-12 | 2022-05-13 | 珠海安联锐视科技股份有限公司 | Method for detecting article left-over and lost in security monitoring |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | |
Application publication date: 20190910 |