CN109636835B - Foreground object detection method based on template optical flow - Google Patents
Foreground object detection method based on template optical flow
- Publication number
- CN109636835B (application CN201811536946.4A)
- Authority
- CN
- China
- Prior art keywords
- template
- optical flow
- foreground object
- picture
- current frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/269—Analysis of motion using gradient-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention discloses a foreground object detection method based on template optical flow, relating to the technical field of computer vision object detection. The technical scheme is as follows: S1, an initial template is selected; S2, the optical flow field between the template picture and each frame of the subsequent video stream is acquired; S3, the optical flow field obtained in step S2 is analysed by counting the lengths of its vectors, and when the statistic exceeds a threshold it is judged that a target different from the template exists in the current frame; S4, the template is updated: at every time interval T the current frame picture is set as a new template. The beneficial effects of the invention are as follows: the optical flow method has low requirements on machine configuration and saves cost; it needs no large amount of labelled data, so compared with deep-learning-based target detection methods it is technically simpler, saves time, and is highly practical; and by updating the template, the method adapts to a slowly changing video background.
Description
Technical Field
The invention relates to the technical field of computer vision target detection, in particular to a foreground target detection method based on template optical flow.
Background
Foreground target detection against a quasi-static background belongs to the field of computer vision and has huge practical demand across industries: detecting targets that intrude into a specific area is a core function in environments such as production workshops, railway tracks, airport aprons and home security.
In the field of general-purpose target detection research, the rapidly developing deep learning technology has recently reached practical performance; representative methods include, but are not limited to, YOLO (You Only Look Once) and SSD (Single Shot MultiBox Detector). However, deep-learning-based target detection methods generally require a large amount of labelled data for model pre-training (which is very time-consuming and expensive) as well as a highly configured hardware environment (high-performance graphics cards, etc.), which limits their application in cost-sensitive industries.
In view of this, how to efficiently and inexpensively realize target detection becomes a valuable research problem.
Disclosure of Invention
To address the above technical problems, the present invention provides a foreground object detection method based on template optical flow.
The technical scheme is as follows: the method detects a foreground target by statistical analysis of the optical flow field vectors of each frame of a video stream, and comprises the following steps:
S1, selecting an initial template;
in the initial stage of the video stream, arbitrarily designate one frame as the template; if a foreground target to be detected is present in that picture, cancel the current template and set the template again once no foreground target is present in the video stream;
S2, acquiring the optical flow field between the template picture and each frame picture of the subsequent video stream;
S3, analysing the optical flow field obtained in step S2 by counting the lengths of its vectors, and judging that a target different from the template exists in the current frame when the statistic is greater than a threshold; the threshold is set according to experimental results obtained during pre-training.
S4, updating the template;
When lighting changes, camera deflection or vibration occur, the difference between the pictures in the real-time video stream and the template picture is gradually amplified, which can cause false detections, so the template must be updated in time. Specifically, at every time interval T, if the current frame contains no foreground object, the current frame picture is set as the new template; if the current frame does contain a foreground object, subsequent frames of the video stream are checked until one without a foreground object appears, and that frame is used to update the template.
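By way of illustration only, the following Python/OpenCV sketch shows one way steps S1 to S4 could be wired together. The function name, the parameter values (threshold V = 1.0, update interval T in seconds) and the helper mean_flow_length (sketched under step S3 below) are assumptions made for this example, not part of the claimed method.

```python
import time
import cv2

def detect_foreground(video_path, threshold_v=1.0, update_interval_t=60.0):
    """Sketch of S1-S4: template selection, per-frame optical-flow statistics
    against the template, thresholding, and periodic template update."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        return
    # S1: an early foreground-free frame serves as the initial template
    # (here simply the first frame; in practice it is checked manually).
    template = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    last_update = time.time()
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # S2 + S3: average optical-flow vector length against the template.
        has_foreground = mean_flow_length(template, gray) > threshold_v
        yield has_foreground
        # S4: every T seconds, refresh the template, but only from a frame
        # in which no foreground object was detected.
        if time.time() - last_update > update_interval_t and not has_foreground:
            template = gray
            last_update = time.time()
    cap.release()
```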
Preferably, in step S3, the optical flow field statistics are computed as follows:
apply a classical optical flow method (the pyramid Lucas-Kanade optical flow method is preferred to improve computational efficiency) to compute the optical flow between each frame image of the video stream and the template image obtained in S1, and calculate the average length $\bar{L}$ of all optical flow vectors in the optical flow field, namely:

$$\bar{L} = \frac{1}{n}\sum_{i=1}^{n}\lVert \mathbf{v}_i \rVert$$

where n is the total number of optical flow vectors in the optical flow field, and i is the index of the corresponding i-th optical flow vector $\mathbf{v}_i$.
When the average optical flow vector length $\bar{L}$ increases sharply, it is determined that a foreground object exists. This is because the current frame image and the template image then differ greatly, so the optical flow method computes a mismatched optical flow field (a feature point in the background of the template image is matched to an unrelated feature point on the foreground object in the real-time video stream). Such a flow field no longer reflects the displacement of the same physical point, as optical flow normally does, but it can be used to judge whether a new object has appeared.
Preferably, a threshold V is set by pre-training; when $\bar{L} > V$, it is judged that a foreground object exists.
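As an illustrative sketch only (not the patented implementation), the statistic $\bar{L}$ can be computed with OpenCV's pyramid Lucas-Kanade routines roughly as follows; the corner-detection parameters are assumed values.

```python
import cv2
import numpy as np

def mean_flow_length(template_gray, frame_gray, max_corners=500):
    """Average length of the sparse optical-flow vectors between the
    template image and the current frame (pyramid Lucas-Kanade)."""
    # Feature points are taken from the template image.
    pts = cv2.goodFeaturesToTrack(template_gray, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return 0.0
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(template_gray, frame_gray,
                                                 pts, None)
    ok = status.flatten() == 1
    if not ok.any():
        return 0.0
    v = (nxt[ok] - pts[ok]).reshape(-1, 2)            # flow vectors v_i
    return float(np.linalg.norm(v, axis=1).mean())    # L-bar = (1/n) * sum ||v_i||
```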
Preferably, in step S4, when the time interval t between the current frame of the video stream and the template is greater than the preset value T, the picture of the current frame is set as the template.
Preferably, when the template is updated in S4, it is first detected whether a foreground object exists in the current frame; if not, the current frame picture is used as the new template;
if so, the template is not updated for the moment and subsequent frames are checked until a frame without a foreground object appears, which is then used as the new template.
Preferably, the template update time interval T is set according to experimental results obtained during pre-training.
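A minimal sketch of this refined update rule, assuming the mean_flow_length helper above and a hypothetical threshold value:

```python
def maybe_update_template(template_gray, frame_gray, elapsed_t,
                          interval_T, threshold_v=1.0):
    """Refined S4 rule: once the elapsed time t exceeds T, adopt the first
    subsequent frame that contains no foreground object as the new template."""
    if elapsed_t <= interval_T:
        return template_gray, False        # too early to update
    if mean_flow_length(template_gray, frame_gray) > threshold_v:
        return template_gray, False        # foreground present: keep waiting
    return frame_gray, True                # foreground-free frame: update
```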
The technical scheme provided by the embodiments of the invention has the following beneficial effects: the optical flow method has low requirements on machine configuration and saves cost; it needs no large amount of labelled data, so compared with deep-learning-based target detection methods it is technically simpler, saves time, and is highly practical; and by updating the template, the method adapts to a slowly changing video background.
Drawings
FIG. 1 is an initial template display diagram of an embodiment of the present invention.
FIG. 2 is a representation of an optical flow field with no foreground object according to an embodiment of the invention.
FIG. 3 is a diagram showing an optical flow field for a foreground object according to an embodiment of the present invention.
FIG. 4 is a new template display diagram of an embodiment of the present invention.
Fig. 5 is a flowchart of a foreground object detection method based on template optical flow according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. Of course, the specific embodiments described herein are for purposes of illustration only and are not intended to limit the invention.
It should be noted that, without conflict, the embodiments of the present invention and features of the embodiments may be combined with each other.
In the description of the invention, it should be understood that the terms "center," "longitudinal," "transverse," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate orientations or positional relationships that are based on the orientation or positional relationships shown in the drawings, merely to facilitate describing the invention and simplify the description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be configured and operate in a particular orientation, and therefore should not be construed as limiting the invention. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first", "a second", etc. may explicitly or implicitly include one or more such feature. In the description of the invention, unless otherwise indicated, the meaning of "a plurality" is two or more.
In the description of the invention, it should be noted that, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the creation of the present invention can be understood by those of ordinary skill in the art in a specific case.
Example 1
Referring to FIGS. 1 to 5, the present invention provides a foreground object detection method based on template optical flow. Taking the detection of whether a train is present on a railway track as an example, the operation steps of the invention are described in detail below.
Pre-training: real historical video data are selected, a template is set, the optical flow vectors between the template and subsequent video frames are computed, and the presence or absence of a foreground object is judged manually. The average optical flow vector length is computed separately for the frames with and without a foreground object; since the two results differ greatly, their mean is taken as the threshold. After the detection has run for a certain period of time t0, the optical flow statistics of frames with and without a foreground object become very close, so the matching result becomes unreliable; to avoid this adverse result, the template must be updated while the elapsed time is still smaller than t0. In particular, the template update interval T can be set to approximately half of t0.
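Illustratively, the pre-training threshold could be derived from manually labelled historical frames along the following lines; this is a sketch under the assumption that the per-frame statistics have already been computed with a helper such as mean_flow_length above.

```python
import numpy as np

def calibrate_threshold(flow_means_without, flow_means_with):
    """Pre-training: take the threshold V midway between the average statistic
    of foreground-free frames and that of frames manually labelled as
    containing a foreground object."""
    m_empty = float(np.mean(flow_means_without))   # no foreground object
    m_object = float(np.mean(flow_means_with))     # foreground object present
    return 0.5 * (m_empty + m_object)              # the two values differ greatly
```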
Step 1, selecting an initial template.
After manually confirming that the current video contains no target to be detected, an arbitrary frame is selected as the initial template; FIG. 1 shows an example template.
and 2, calculating an optical flow field and detecting a foreground target.
The optical flow field between each frame picture of the video stream and the template picture is computed with the pyramid Lucas-Kanade optical flow method. FIG. 2 and FIG. 3 show the computed optical flow fields without and with a foreground object present, respectively; the statistical average optical flow vector lengths for the two cases are listed in the following two tables.
Table 1. Optical flow vector length statistics (no foreground object present)
Table 2. Optical flow vector length statistics (foreground object present)
When a foreground object is present, a large number of erroneous optical flow vectors are produced because the foreground object in the current frame does not match the background in the template, and the lengths of these vectors are much larger than in the case without a foreground object; choosing a suitable threshold V is therefore sufficient to distinguish the two cases. For example, with V = 1.0: when the statistic is greater than this threshold, it is judged that a foreground object is present; otherwise, it is not.
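For illustration, detection with V = 1.0 might be driven as follows, reusing the hypothetical detect_foreground generator sketched earlier; the video file name is an assumption.

```python
# Hypothetical usage for the railway-track example, with V = 1.0.
for has_train in detect_foreground("track_camera.mp4",
                                   threshold_v=1.0,
                                   update_interval_t=60.0):
    if has_train:
        print("Foreground object (train) detected in the current frame.")
```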
And 3, updating the template.
To cope with a slowly changing background picture, a template update algorithm is introduced. At every time interval, the current frame is checked for a foreground object; if none is present, the current frame picture is used as the new template; if one is present, the template is not updated for the moment and subsequent frames are checked until a frame without a foreground object appears, which is then used as the new template. FIG. 4 shows an example of an updated template (its brightness differs from that of the original template).
The foregoing describes only preferred embodiments of the invention and is not intended to limit the invention to the precise form disclosed; any modifications, equivalent replacements and improvements made within the spirit and scope of the invention are intended to be included within its scope of protection.
Claims (2)
1. A foreground object detection method based on template optical flow, the method detecting a foreground object based on statistical analysis of the optical flow field vectors of each frame picture of a video stream, characterized by comprising the following steps:
s1, selecting an initial template;
at the initial stage of the video stream, randomly designating a frame of picture as a template, and if a foreground target to be detected exists in the picture, canceling the current template, and setting the template again when the foreground target does not exist in the video stream;
s2, acquiring an optical flow field between a template picture and each frame picture in a subsequent video stream;
s3, calculating the optical flow field obtained in the step S2, counting the vector length of the optical flow field, and judging that a target different from the template exists in the current frame when the counting result is greater than a threshold value;
s4, updating the template; every interval timeTSetting a current frame picture as a new template if the current frame does not have a foreground object, and updating the template according to video streaming if the current frame has the foreground object until the current frame does not have the foreground object;
in the step S3, the statistical method of the optical flow field is as follows:
performing optical flow calculation between each frame image of the video stream and the template image obtained in S1, and calculating the average length $\bar{L}$ of all optical flow vectors in the optical flow field according to the formula

$$\bar{L} = \frac{1}{n}\sum_{i=1}^{n}\lVert \mathbf{v}_i \rVert$$

wherein n is the total number of optical flow vectors in the optical flow field and $\mathbf{v}_i$ is the i-th optical flow vector;
judging that a foreground object exists when the average optical flow vector length $\bar{L}$ increases sharply;
in the step S4, when the time interval t between the current frame of the video stream and the template is greater than the preset value T, setting the picture of the current frame as the template;
when the template is updated in S4, first detecting whether a foreground object exists in the current frame; if not, using the current frame picture as the new template; if so, not updating the template for the moment and continuing to check subsequent frames until a frame without a foreground object appears, which is then used as the new template;
and setting the template update time interval T according to experimental results obtained during pre-training.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811536946.4A CN109636835B (en) | 2018-12-14 | 2018-12-14 | Foreground object detection method based on template optical flow |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109636835A CN109636835A (en) | 2019-04-16 |
CN109636835B true CN109636835B (en) | 2023-07-04 |
Family
ID=66074326
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811536946.4A Active CN109636835B (en) | 2018-12-14 | 2018-12-14 | Foreground object detection method based on template optical flow |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109636835B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110675369B (en) * | 2019-04-26 | 2022-01-14 | 深圳市豪视智能科技有限公司 | Coupling mismatch detection method and related equipment |
CN110233967B (en) * | 2019-06-20 | 2021-10-01 | 漳州智觉智能科技有限公司 | Mold template image generation system and method |
CN110264458B (en) * | 2019-06-20 | 2023-01-06 | 漳州智觉智能科技有限公司 | Mold monitoring system and method |
CN111754550B (en) * | 2020-06-12 | 2023-08-11 | 中国农业大学 | Method and device for detecting dynamic obstacle in movement state of agricultural machine |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102800106A (en) * | 2012-06-29 | 2012-11-28 | 刘怡光 | Self-adaptation mean-shift target tracking method based on optical flow field estimation |
US9251416B2 (en) * | 2013-11-19 | 2016-02-02 | Xerox Corporation | Time scale adaptive motion detection |
CN105894530A (en) * | 2014-12-11 | 2016-08-24 | 深圳市阿图姆科技有限公司 | Detection and tracking solution scheme aiming at motion target in video |
CN104966305B (en) * | 2015-06-12 | 2017-12-15 | 上海交通大学 | Foreground detection method based on motion vector division |
CN105654522B (en) * | 2015-12-30 | 2018-09-04 | 青岛海尔股份有限公司 | The frosting detection method and frosting detecting system of evaporator |
CN106874949B (en) * | 2017-02-10 | 2019-10-11 | 华中科技大学 | Movement imaging platform moving target detecting method and system based on infrared image |
- 2018-12-14: CN application CN201811536946.4A filed; patent CN109636835B (legal status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN109636835A (en) | 2019-04-16 |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |