CN107944384B - Delivered object behavior detection method based on video - Google Patents
- Publication number
- CN107944384B (application number CN201711167860.4A)
- Authority
- CN
- China
- Prior art keywords
- suspected area
- difference
- area
- value
- suspected
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/215—Motion-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/254—Analysis of motion involving subtraction of images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20224—Image subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a video-based delivery behavior detection method, which specifically comprises the following steps: acquiring a video image, obtaining the motion area on the grayscale image by a frame difference method, then obtaining a suspected area by blob detection, and marking foreground pixels; computing a gradient difference over the whole image of the suspected area to obtain the contour of the object in the suspected area; and judging, from the foreground pixels and the gradient difference values, whether the suspected area corresponds to a delivery behavior. The method does not need to model or classify the object, which reduces the amount of computation; it runs fast and is highly practical. It does not require fine image detail, which relaxes the requirement on image sharpness, improves processing efficiency and accuracy, and makes the method easy to apply.
Description
Technical Field
The invention belongs to the technical field of video detection, and particularly relates to a video-based delivery behavior detection method.
Background
During an interrogation, the passing of articles between interrogation personnel and prisoners must be watched closely. However, an interrogation usually lasts a long time, and several interrogations may even take place at the same time, which requires monitoring personnel to watch all surveillance videos for long periods with extremely high patience and attention; as a result they tire easily and miss events. An intelligent algorithm that detects delivery behavior during monitored interrogations can therefore free up manpower and help monitoring personnel find problems in time, and has very high application value.
Disclosure of Invention
In view of the above, the present invention is directed to a video-based delivery behavior detection method, which detects delivery behavior under various environmental conditions and assists monitoring by raising an alarm or reporting to the monitoring center, so as to prevent delivery events.
In order to achieve this purpose, the technical solution of the invention is realized as follows:
A video-based delivery behavior detection method specifically comprises the following steps:
(1) acquiring a video image, obtaining the motion area on the grayscale image by a frame difference method, then obtaining a suspected area by blob detection, and marking foreground pixels;
(2) computing a gradient difference over the whole image of the suspected area to obtain the contour of the object in the suspected area;
(3) judging, from the foreground pixels and the gradient difference values, whether the suspected area corresponds to a delivery behavior.
Further, in step (1), the difference between the gray values of each pixel in the two frames of images is calculated; a pixel whose absolute difference is greater than a threshold is marked as a foreground pixel, otherwise it is marked as a background pixel. The gray values of the image are transformed into gradient values, the frame-to-frame difference of the transformed values is calculated and recorded, and a pixel whose difference is greater than a threshold is marked as a contour pixel, otherwise it is discarded. Blob detection is then performed on the foreground image formed by the foreground pixels, and adjacent foreground pixels are fused into targets to obtain the suspected areas.
Further, in step (2), a rule line is drawn. The rule line is taken as the line segment across which a delivery behavior is deemed to occur, and a suspected area that crosses this line segment is taken as a valid candidate for a delivery behavior. Whether the suspected area intersects the rule line is judged; if it does, the number of contour pixels in the area is counted, the area is marked as a real object if the count is greater than a threshold and as a shadow if it is smaller, and otherwise the suspected area is discarded.
Further, in step (3), contour pixels near the rule line are counted within the obtained real-object area, and whether a delivery behavior has occurred is finally judged from the statistical result.
Compared with the prior art, the video-based delivery behavior detection method has the following advantages:
(1) the method does not need to model or classify the object, which reduces the amount of computation; it runs fast and is highly practical;
(2) the method does not require fine image detail, which relaxes the requirement on image sharpness, improves processing efficiency and accuracy, and makes the method easy to apply.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a flowchart illustrating a method for detecting a delivery behavior based on video according to an embodiment of the present invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
The present invention will be described in detail below with reference to the embodiments and the attached drawings.
As shown in fig. 1, the present invention provides a video-based delivery behavior detection method, which specifically comprises the following steps:
1. First, the difference between the gray values of each pixel in two consecutive frames is calculated, and a pixel whose absolute difference is greater than the threshold is marked as a foreground pixel:
|p_t(x, y) - p_{t-1}(x, y)| > Threshold
If the inequality holds, (x, y) belongs to the foreground, otherwise to the background. p_t(x, y) is the gray value at coordinate (x, y) at time t, p_{t-1}(x, y) is the gray value at coordinate (x, y) at time t-1, and Threshold is the decision threshold.
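A minimal NumPy sketch of this frame-difference step (the function name, array types and the threshold value of 25 are illustrative assumptions, not part of the patent):

```python
import numpy as np

def foreground_mask(prev_gray: np.ndarray, cur_gray: np.ndarray,
                    threshold: int = 25) -> np.ndarray:
    """Mark pixels whose frame-to-frame gray-value difference exceeds the threshold."""
    # Work in a signed type so the subtraction cannot wrap around for uint8 input.
    diff = np.abs(cur_gray.astype(np.int16) - prev_gray.astype(np.int16))
    return diff > threshold   # True = foreground pixel, False = background pixel
```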
2. The gray values of the image are transformed into gradient values, the frame-to-frame difference of the transformed values is calculated, and whether a pixel is a contour pixel is judged against a threshold:
q_t(x, y) = |p_t(x-1, y) - p_t(x+1, y)| + |p_t(x, y-1) - p_t(x, y+1)|
where p_t(x, y) is the gray value at coordinate (x, y) at time t and q_t(x, y) is the gradient value at coordinate (x, y) at time t.
w_t(x, y) = |q_t(x, y) - q_{t-1}(x, y)|
where w_t(x, y) is the contour pixel difference of the object, q_t(x, y) is the gradient value at coordinate (x, y) at time t, and q_{t-1}(x, y) is the gradient value at coordinate (x, y) at time t-1.
v_t(x, y) is the flag indicating whether the pixel at coordinate (x, y) at time t belongs to the contour, and OutlineThreshold is the contour decision threshold: a pixel whose w_t(x, y) is greater than the threshold is judged to be a contour pixel.
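A corresponding sketch of the gradient-difference contour step (the cross-shaped gradient follows the formula for q_t above; the OutlineThreshold value of 30 is a placeholder):

```python
import numpy as np

def gradient_map(gray: np.ndarray) -> np.ndarray:
    """q_t(x, y): sum of absolute horizontal and vertical central differences."""
    g = gray.astype(np.int16)
    q = np.zeros_like(g)
    q[1:-1, 1:-1] = (np.abs(g[1:-1, :-2] - g[1:-1, 2:]) +   # |p(x-1, y) - p(x+1, y)|
                     np.abs(g[:-2, 1:-1] - g[2:, 1:-1]))    # |p(x, y-1) - p(x, y+1)|
    return q

def contour_mask(prev_gray: np.ndarray, cur_gray: np.ndarray,
                 outline_threshold: int = 30) -> np.ndarray:
    """v_t(x, y): True where the gradient value changes enough between frames."""
    w = np.abs(gradient_map(cur_gray) - gradient_map(prev_gray))   # w_t(x, y)
    return w > outline_threshold
```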
3. Blob detection is performed on the foreground image, and connected foreground pixels are fused into targets to obtain the suspected areas of delivery behavior.
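One possible realization of this blob-detection and fusion step is connected-component labelling on the foreground mask; the sketch below uses OpenCV purely as an assumed choice (the patent does not prescribe a library), and the minimum-area filter is illustrative:

```python
import cv2
import numpy as np

def suspected_regions(fg_mask: np.ndarray, min_area: int = 50):
    """Fuse connected foreground pixels into candidate bounding boxes (x, y, w, h)."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(
        fg_mask.astype(np.uint8), connectivity=8)
    boxes = []
    for i in range(1, n):                      # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:                   # drop tiny noise blobs
            boxes.append((int(x), int(y), int(w), int(h)))
    return boxes
```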
4. A rule line is drawn. The rule line is taken as the line segment across which a delivery behavior is deemed to occur, and a suspected area that crosses this line segment is taken as a valid candidate for a delivery behavior.
5. Whether the rule line intersects the suspected area is judged; if they intersect, a delivery behavior is suspected to have occurred, otherwise the area is ignored. An illustrative intersection test is sketched below.
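A possible intersection test between the rule line and a suspected area, under the assumption that the suspected area is represented by its axis-aligned bounding box (all names are illustrative):

```python
def _ccw(a, b, c):
    """Signed area test: >0 counter-clockwise, <0 clockwise, 0 collinear."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def _segments_intersect(p1, p2, p3, p4):
    """Segment-segment intersection (collinear edge cases treated loosely)."""
    d1, d2 = _ccw(p3, p4, p1), _ccw(p3, p4, p2)
    d3, d4 = _ccw(p1, p2, p3), _ccw(p1, p2, p4)
    return d1 * d2 <= 0 and d3 * d4 <= 0

def line_hits_box(line_p1, line_p2, box) -> bool:
    """True if the rule line segment touches the (x, y, w, h) bounding box."""
    x, y, w, h = box
    # Either an endpoint of the rule line lies inside the box ...
    for px, py in (line_p1, line_p2):
        if x <= px <= x + w and y <= py <= y + h:
            return True
    # ... or the rule line crosses one of the four box edges.
    corners = [(x, y), (x + w, y), (x + w, y + h), (x, y + h)]
    edges = zip(corners, corners[1:] + corners[:1])
    return any(_segments_intersect(line_p1, line_p2, a, b) for a, b in edges)
```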
6. The number of contour pixels in the suspected area is counted; an area whose count is greater than a threshold is marked as a real object, otherwise it is marked as a shadow.
Count > ShadowThreshold
where (x, y) ranges over the points in the suspected area and Count is the number of contour pixels in the suspected area. ShadowThreshold is the shadow decision threshold: if Count is greater than the threshold, the suspected area is considered a real-object area; otherwise it is judged to be a shadow area and filtered out.
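The shadow filter then amounts to counting contour pixels inside each candidate box (the ShadowThreshold value of 40 is an illustrative assumption):

```python
import numpy as np

def is_real_object(contours: np.ndarray, box, shadow_threshold: int = 40) -> bool:
    """A moving blob with too few contour pixels inside it is treated as a shadow."""
    x, y, w, h = box
    count = int(contours[y:y + h, x:x + w].sum())    # Count of contour pixels in the area
    return count > shadow_threshold                   # Count > ShadowThreshold
```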
7. The real-object area is judged again according to whether there are enough contour pixels near the rule line; if the count is greater than the threshold, it is judged that a delivery behavior has occurred and an alarm is raised.
Count* > ShadowThreshold*
where (x*, y*) denotes a point near the rule line and Count* is the number of contour pixels counted near the rule line. ShadowThreshold* is the corresponding decision threshold: if Count* is greater than the threshold, the suspected area is considered a real delivery area and an alarm is raised; otherwise no alarm is given.
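Finally, a sketch of the near-line statistic and alarm decision (the 5-pixel band around the rule line and the threshold of 20 are assumptions; the patent only requires counting contour pixels near the line):

```python
import numpy as np

def delivery_alarm(contours: np.ndarray, line_p1, line_p2,
                   band: int = 5, line_threshold: int = 20) -> bool:
    """Count contour pixels within `band` pixels of the rule line segment."""
    ys, xs = np.nonzero(contours)
    if xs.size == 0:
        return False
    p1 = np.asarray(line_p1, dtype=float)
    p2 = np.asarray(line_p2, dtype=float)
    d = p2 - p1
    length_sq = float(d @ d) or 1.0                       # guard against a zero-length line
    pts = np.stack([xs, ys], axis=1).astype(float)
    t = np.clip((pts - p1) @ d / length_sq, 0.0, 1.0)     # projection onto the segment
    dist = np.linalg.norm(pts - (p1 + t[:, None] * d), axis=1)
    count_star = int((dist <= band).sum())                # Count* near the rule line
    return count_star > line_threshold                    # Count* > ShadowThreshold*
```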
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (4)
1. A video-based delivery behavior detection method, characterized in that the method specifically comprises the following steps:
(1) acquiring a video image, obtaining the motion area on the grayscale image by a frame difference method, then obtaining a suspected area by blob detection, and marking foreground pixels;
(2) computing a gradient difference over the whole image of the suspected area to obtain the contour of the object in the suspected area;
(3) judging, from the foreground pixels and the gradient difference values, whether the suspected area corresponds to a delivery behavior;
the process of judging whether the suspected area corresponds to a delivery behavior is as follows:
drawing a rule line, the rule line being taken as the line segment across which a delivery behavior is deemed to occur, and a suspected area that crosses this line segment being taken as a valid candidate for a delivery behavior;
judging whether the rule line intersects the suspected area; if they intersect, a delivery behavior is suspected to have occurred, otherwise the area is ignored;
counting the number of contour pixels in the suspected area, and marking the area as a real object if the count is greater than the ShadowThreshold threshold, otherwise marking it as a shadow;
re-judging the real-object area according to the number of contour pixels near the rule line; if the count is greater than the ShadowThreshold* threshold, a delivery behavior is considered to have occurred.
2. The method according to claim 1, characterized in that: in step (1), the difference between the gray values of each pixel in the two frames of images is calculated; a pixel whose absolute difference is greater than a threshold is marked as a foreground pixel, otherwise it is marked as a background pixel; the gray values of the image are transformed into gradient values, the frame-to-frame difference of the transformed values is calculated and recorded, and a pixel whose difference is greater than a threshold is marked as a contour pixel, otherwise it is discarded; blob detection is performed on the foreground image formed by the foreground pixels, and adjacent foreground pixels are fused into targets to obtain the suspected areas.
3. The method according to claim 1, characterized in that: in step (2), a rule line is drawn; the rule line is taken as the line segment across which a delivery behavior is deemed to occur, and a suspected area that crosses this line segment is taken as a valid candidate for a delivery behavior; whether the suspected area intersects the rule line is judged; if it does, the number of contour pixels in the area is counted, the area is marked as a real object if the count is greater than a threshold and as a shadow if it is smaller, and otherwise the suspected area is discarded.
4. The method according to claim 3, characterized in that: in step (3), contour pixels near the rule line are counted within the obtained real-object area, and whether a delivery behavior has occurred is finally judged from the statistical result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711167860.4A CN107944384B (en) | 2017-11-21 | 2017-11-21 | Delivered object behavior detection method based on video |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107944384A CN107944384A (en) | 2018-04-20 |
CN107944384B true CN107944384B (en) | 2021-08-20 |
Family
ID=61930572
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711167860.4A Active CN107944384B (en) | 2017-11-21 | 2017-11-21 | Delivered object behavior detection method based on video |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107944384B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118314632B (en) * | 2024-06-07 | 2024-09-27 | 杭州海康威视系统技术有限公司 | Method and device for detecting delivery behavior, electronic equipment and storage medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2266320A2 (en) * | 2008-04-11 | 2010-12-29 | Thomson Licensing | System and method for enhancing the visibility of an object in a digital picture |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101268478A (en) * | 2005-03-29 | 2008-09-17 | 斯达普力特有限公司 | Method and apparatus for detecting suspicious activity using video analysis |
CN102339465A (en) * | 2011-08-31 | 2012-02-01 | 中国科学院计算技术研究所 | Method and system for detecting the mutual closing and/or contact of moving objects |
CN103425960A (en) * | 2012-05-25 | 2013-12-04 | 信帧电子技术(北京)有限公司 | Method for detecting fast-moving objects in video |
CN106339677A (en) * | 2016-08-23 | 2017-01-18 | 天津光电高斯通信工程技术股份有限公司 | Video-based railway wagon dropped object automatic detection method |
CN106599788A (en) * | 2016-11-21 | 2017-04-26 | 桂林远望智能通信科技有限公司 | System and method for detecting line crossing of video moving target |
CN106826822A (en) * | 2017-01-25 | 2017-06-13 | 南京阿凡达机器人科技有限公司 | A kind of vision positioning and mechanical arm crawl implementation method based on ROS systems |
CN107133971A (en) * | 2017-04-19 | 2017-09-05 | 南京邮电大学 | A kind of abnormal track-detecting method of personage transmitted based on network node energy |
Non-Patent Citations (2)
Title |
---|
Multiple sensor fusion and classification for moving object detection and tracking; Ricardo Omar Chavez-Garcia et al.; IEEE Transactions on Intelligent Transportation Systems; 2015-09-29; pp. 525-534 *
Research and implementation of human abnormal behavior analysis in video surveillance; Lin Ting; China Master's Theses Full-text Database, Information Science and Technology; 2012-06-15; pp. I138-2125 *
Also Published As
Publication number | Publication date |
---|---|
CN107944384A (en) | 2018-04-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8902053B2 (en) | Method and system for lane departure warning | |
CN109146860B (en) | Full-automatic mechanical equipment installation leakage detection method and device | |
CN108229475B (en) | Vehicle tracking method, system, computer device and readable storage medium | |
US7982774B2 (en) | Image processing apparatus and image processing method | |
CN110191320B (en) | Video jitter and freeze detection method and device based on pixel time sequence motion analysis | |
CN107389693B (en) | Automatic detection method for defects of printed matter based on machine vision | |
CN107742307A (en) | Based on the transmission line galloping feature extraction and parameters analysis method for improving frame difference method | |
CN103559498A (en) | Rapid man and vehicle target classification method based on multi-feature fusion | |
US8538079B2 (en) | Apparatus capable of detecting location of object contained in image data and detection method thereof | |
CN111325048B (en) | Personnel gathering detection method and device | |
CN105975907B (en) | SVM model pedestrian detection method based on distributed platform | |
CN112528861B (en) | Foreign matter detection method and device applied to ballast bed in railway tunnel | |
CN106851302B (en) | A kind of Moving Objects from Surveillance Video detection method based on intraframe coding compression domain | |
CN110781853A (en) | Crowd abnormality detection method and related device | |
CN110782409B (en) | Method for removing shadow of multiple moving objects | |
CN108830204B (en) | Method for detecting abnormality in target-oriented surveillance video | |
CN107944384B (en) | Delivered object behavior detection method based on video | |
CN107977983A (en) | A kind of ghost and static target suppressing method based on modified ViBe | |
CN113221603A (en) | Method and device for detecting shielding of monitoring equipment by foreign matters | |
CN113469974B (en) | Method and system for monitoring state of grate plate of pellet grate | |
CN108009480A (en) | A kind of image human body behavioral value method of feature based identification | |
Kim et al. | Background subtraction using generalised Gaussian family model | |
CN102034243B (en) | Method and device for acquiring crowd density map from video image | |
CN103577805A (en) | Gender identification method based on continuous gait images | |
CN110930362B (en) | Screw safety detection method, device and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||
TA01 | Transfer of patent application right | Effective date of registration: 2021-08-09. Address after: No. 8, Haitai Huake 2nd Road, Huayuan Industrial Zone, Binhai New Area, Tianjin, 300450. Applicant after: TIANDY TECHNOLOGIES Co.,Ltd. Address before: Room A221, complex building, No. 8, Haitai Huake Second Road, Huayuan Industrial Zone (outside the ring), high tech Zone, Binhai New Area, Tianjin, 300384. Applicant before: TIANJIN YINGTIAN VIDEO SIGNAL TECHNOLOGY Co.,Ltd. |