CN110264422B - Optical image processing method for eliminating optical flicker pixels based on ViBe model - Google Patents
- Legal status: Active (an assumption by Google Patents, not a legal conclusion)
Classifications
- G06T5/77 — Image enhancement or restoration: retouching; inpainting; scratch removal
- G06V20/42 — Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
- G06T2207/20036 — Morphological image processing
Abstract
The invention discloses an optical image processing method for eliminating optical flicker pixels based on a ViBe model. The method comprises the following specific steps: (1) acquiring optical video information; (2) calculating pixel-point distance weights in the first frame image of the video; (3) establishing a ViBe model of the first frame image using the Euclidean distance weights; (4) selecting a frame image, other than the first frame, that has not been selected; (5) binarizing the selected pixel points; (6) judging whether all pixel points in the selected frame image have been selected; (7) eliminating the flicker pixel points with a morphological algorithm; (8) judging whether all frame images have been selected; (9) outputting the target detection image without flicker pixel points. The method improves target detection accuracy and robustness to diverse, changing backgrounds in video images.
Description
Technical Field
The invention belongs to the field of computer technology, and more specifically relates to an optical image processing method for eliminating optical flicker pixels based on a ViBe model, in the field of moving target detection in optical video images. The method can rapidly eliminate optical flicker pixel points in video frame images when security systems and intelligent traffic monitoring systems perform target detection.
Background
Flicker pixel points in video frame images degrade subsequent detection and visual observation when security systems and intelligent traffic monitoring systems perform target detection. An accurate target detection method is therefore of great significance to these applications.
Anhui University proposed a video image target detection method that improves the VIBE algorithm in the patent document "an improved moving target detection VIBE algorithm" (application No. 201810498273.1, publication No. CN108805897A). The method uses the VIBE algorithm to distinguish whether a static area is a ghost or a static target, describes the degree of difference between the current pixel and the samples in the background model, and dynamically adjusts the radius of the discrimination model so that more foreground points are detected when the scene in the video image changes; finally, the foreground points are binarized to realize moving target detection. Compared with other optical video image target detection methods, it detects targets and suppresses ghosts more efficiently. However, the method still has a defect: when the 20-sample ViBe model is built from 8-neighborhood pixel points, the repetition rate of sample values in the model is high, so the accuracy of target detection in video images is low.
Jiangnan University proposed an improved ViBe ghost and stationary target suppression method in the patent document "a ghost and stationary target inhibition method based on improved ViBe" (application No. 201711200955.1, publication No. CN107977983A). The method first performs ViBe foreground extraction on two consecutive frames against the background model; the common part of the two extractions is the detected ghost and static target area. It then judges whether the area contains a ghost or a static target by comparing the variance of the current-frame pixel values with that of the background-model pixel values in the area, and finally updates the background model with different strategies to suppress ghosts and static targets. The method quickly suppresses ghosts in video target detection results, but it still has a defect: when the video background exhibits slight motion, flicker pixel points appear in the detection result, so the method is not robust to diverse, changing backgrounds in video images.
Disclosure of Invention
The invention aims to provide an optical image processing method for eliminating optical flicker pixels based on a ViBe model, addressing the defects of the prior art. Unlike the prior-art ViBe target detection methods, the method establishes the ViBe model of the image using Euclidean distance weights and then eliminates the flicker pixel points in the binary image with a morphological algorithm, improving target detection accuracy and robustness.
The idea of the invention is as follows: first, the distance weights of the pixel points in the first frame image of the video are calculated, and the ViBe model of the first frame image is established using the Euclidean distance weights sorted in descending order; then the pixel points of each selected frame image are compared one by one with the samples in the ViBe model to obtain a binary image; finally, the flicker pixel points are eliminated with a morphological algorithm and the target detection image is output.
The method comprises the following specific steps:
(1) acquiring optical video information:
acquiring frame width, frame height and frame rate information of each frame of optical video from the input optical video;
(2) calculating the distance weight of a pixel point in the first frame image:
(2a) randomly selecting a pixel block of 5 × 5 pixel points from the first frame image, calculating the Euclidean distance between the pixel point at the central position of the selected block and each of the remaining pixel points in the block, and forming a distance map from all the Euclidean distances of the selected block;
(2b) calculating the Euclidean distance weight of each pixel point by using the following formula:

S_k = d_k / Σ_i d_i

wherein S_k represents the Euclidean distance weight of the k-th pixel point among the remaining pixel points of the pixel block, d_k represents the Euclidean distance between the k-th pixel point and the central pixel point of the block, Σ represents the summation operation, i indexes the remaining pixel points of the block, and d_i represents the Euclidean distance between the i-th remaining pixel point and the central pixel point of the block;
(2c) sorting the Euclidean distance weights from large to small;
(3) establishing a ViBe model of the first frame image by using the Euclidean distance weight:
(3a) selecting an unselected pixel point from the first frame image;
(3b) selecting a 5 × 5 pixel block centered on the selected pixel point, calculating the Euclidean distance between the central pixel point and each of the remaining pixel points in the block, forming a distance map from all the Euclidean distances of the selected block, and forming a ViBe model with 20 samples for the selected pixel point from the pixel values of the first 20 pixel points in the descending Euclidean-distance-weight order of the distance map;
(3c) judging whether all pixel points in the first frame image are selected, if so, executing step (3d), otherwise, executing step (3a);
(3d) forming a first-frame-image ViBe model from the ViBe models of all pixel points in the first frame image;
(4) selecting a frame image which is not selected except the first frame;
(5) binarization of the selected pixel points:
(5a) selecting an unselected pixel point from the selected frame image;
(5b) calculating the difference value between the selected pixel point and each sample in the ViBe model in which the pixel point is positioned by using a sample difference value formula;
(5c) calculating the identification value of the selected pixel point by using the following formula:

S(x, y) = 0, if the number of samples m satisfying |V(x, y, m)| < R is at least min;
S(x, y) = 1, otherwise,

wherein S(x, y) represents the identification value of the selected pixel point located at coordinate (x, y), V(x, y, m) represents the difference between the selected pixel value at (x, y) and the m-th sample in the ViBe model, R represents the pixel distance threshold with value range [12, 30], and min represents the sample set threshold;
(5d) judging whether S(x, y) of the selected pixel point is 0, if so, executing step (5e), otherwise, executing step (5f);
(5e) representing the selected pixel points as background pixel points, and changing the pixel values of the background pixel points to be 0;
(5f) representing the selected pixel points as foreground pixel points, and changing the pixel values of the foreground pixel points to 255;
(6) judging whether all pixel points in the selected frame image are selected completely, if so, forming a binary image by all foreground pixel points and background pixel points in the selected frame image, and then executing the step (7), otherwise, executing the step (5);
(7) eliminating flicker pixel points by using a morphological algorithm:
sequentially carrying out morphological opening operation and morphological closing operation on the binary image to obtain a target detection image of the selected frame image after eliminating the flicker pixel points;
(8) judging whether all the frame images are selected, if so, executing the step (9), otherwise, executing the step (4);
(9) outputting the target detection image with the flicker pixel points eliminated.
Compared with the prior art, the invention has the following advantages:
First, the invention establishes the ViBe model of the first frame image using Euclidean distance weights: the pixel values of the first 20 pixel points in descending weight order form a 20-sample ViBe model for each selected pixel point. This overcomes the prior-art defect that building a 20-sample ViBe model from 8-neighborhood pixel points yields a high repetition rate of sample values in the background model, thereby improving target detection accuracy.
Second, because the invention eliminates flicker pixel points with a morphological algorithm, it overcomes the prior-art defect that flicker pixel points appear in the target detection image when the video background exhibits slight motion, thereby improving robustness to diverse, changing backgrounds in video images.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of the Euclidean distances between the central pixel point and the neighboring pixel points in a 5 × 5 pixel block according to the present invention;
FIG. 3 is an image of frame 18, frame 54, frame 74 and frame 109 of a video captured by the present invention;
FIG. 4 is a graph of experimental results of a prior art ViBe target detection method;
FIG. 5 is a graph showing the results of an experiment in the method for detecting an object of the present invention;
Detailed Description
The present invention will be described in further detail below with reference to the accompanying drawings.
The specific implementation steps of the present invention are described in further detail with reference to fig. 1.
Step 1, acquiring optical video information. Frame width, frame height and frame rate information of each frame are obtained from the input optical video.
Step 2, calculating the distance weights of the pixel points in the first frame image.
FIG. 2 illustrates the Euclidean distances between the central pixel point and the neighboring pixel points in a 5 × 5 pixel block.
The center of the pixel block in FIG. 2 represents the position of the selected pixel point, and the numbers denote the Euclidean distances between the central pixel point and each of the remaining pixel points in the block; all Euclidean distances of the selected block form a distance map.
The Euclidean distance weight of each pixel point is calculated by the following formula:

S_k = d_k / Σ_i d_i

where S_k represents the Euclidean distance weight of the k-th pixel point among the remaining pixel points of the pixel block, d_k represents the Euclidean distance between the k-th pixel point and the central pixel point of the block, Σ represents the summation operation, and d_i represents the Euclidean distance between the i-th remaining pixel point and the central pixel point of the block.
The Euclidean distance weights are then sorted from large to small.
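As a minimal sketch of step 2, the distance map and weights for a 5 × 5 block can be computed as follows. The normalized form S_k = d_k / Σ d_i is reconstructed from the variable definitions above, and the function name is illustrative:

```python
import numpy as np

def euclidean_distance_weights(block_size=5):
    # Distance map for a block_size x block_size pixel block: the
    # Euclidean distance from the central pixel to every other pixel.
    c = block_size // 2
    ys, xs = np.mgrid[0:block_size, 0:block_size]
    dist_map = np.sqrt((ys - c) ** 2 + (xs - c) ** 2)
    d = dist_map.ravel()
    d = d[d > 0]                  # drop the central pixel itself
    weights = d / d.sum()         # assumed form: S_k = d_k / sum_i(d_i)
    order = np.argsort(-weights)  # sorted from large to small (step 2c)
    return dist_map, weights, order
```

The 24 non-central weights sum to 1, and `order` gives their descending ranking used when the model is built.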
Step 3, establishing the ViBe model of the first frame image using the Euclidean distance weights.
(3.1) An unselected pixel point is selected from the first frame image.
(3.2) A 5 × 5 pixel block centered on the selected pixel point is chosen; the Euclidean distance between the central pixel point and each of the remaining pixel points in the block is calculated, all Euclidean distances of the block form a distance map, and the pixel values of the first 20 pixel points in descending Euclidean-distance-weight order form a 20-sample ViBe model for the selected pixel point.
(3.3) Whether all pixel points in the first frame image have been selected is judged; if so, step (3.4) is executed; otherwise, step (3.1) is executed.
(3.4) The ViBe models of all pixel points in the first frame image together form the first-frame-image ViBe model.
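The construction in steps (3.1)-(3.4) can be sketched as below. Since the assumed weight S_k is proportional to the distance d_k, sorting by descending weight is equivalent to sorting by descending distance; edge padding at image borders and the function name are implementation assumptions not fixed by the patent:

```python
import numpy as np

def build_first_frame_model(frame, n_samples=20, block=5):
    # For each pixel, take the values of the n_samples neighbours whose
    # Euclidean-distance weights are largest within its block x block
    # neighbourhood (the 20 farthest of the 24 non-central positions).
    c = block // 2
    ys, xs = np.mgrid[0:block, 0:block]
    d = np.sqrt((ys - c) ** 2 + (xs - c) ** 2).ravel()
    keep = np.argsort(-d)[:n_samples]      # offsets of the farthest neighbours
    offsets = [(k // block - c, k % block - c) for k in keep]
    h, w = frame.shape
    padded = np.pad(frame, c, mode='edge')  # border handling: assumption
    model = np.empty((h, w, n_samples), dtype=frame.dtype)
    for s, (dy, dx) in enumerate(offsets):
        # shifted view of the frame: neighbour (dy, dx) of every pixel
        model[:, :, s] = padded[c + dy:c + dy + h, c + dx:c + dx + w]
    return model
```

The result is an h × w × 20 sample array, one 20-sample ViBe model per pixel.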
Step 4, a frame image that has not been selected, other than the first frame, is selected.
Step 5, binarizing the selected pixel points.
(5.1) An unselected pixel point is selected from the selected frame image.
(5.2) The difference between the selected pixel point and each sample in the ViBe model where it is located is calculated with the sample difference formula.
(5.3) The identification value of the selected pixel point is calculated by the following formula:

S(x, y) = 0, if the number of samples m satisfying |V(x, y, m)| < R is at least min;
S(x, y) = 1, otherwise,

where S(x, y) represents the identification value of the selected pixel point at coordinate (x, y), V(x, y, m) represents the difference between the selected pixel value at (x, y) and the m-th sample in the ViBe model, R represents the pixel distance threshold with value range [12, 30], and min represents the sample set threshold.
(5.4) Whether S(x, y) of the selected pixel point is 0 is judged; if so, step (5.5) is executed, otherwise step (5.6).
(5.5) The selected pixel point is a background pixel point, and its pixel value is set to 0.
(5.6) The selected pixel point is a foreground pixel point, and its pixel value is set to 255.
Step 6, whether all pixel points in the selected frame image have been selected is judged; if so, all foreground and background pixel points of the selected frame image form a binary image and step 7 is executed; otherwise, step 5 is executed.
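The per-pixel test of steps (5.1)-(5.6) can be sketched as follows; R = 20 lies within the stated range [12, 30], and min_matches = 2 (the usual ViBe default) stands in for the unspecified sample set threshold min:

```python
import numpy as np

def classify_pixel(value, samples, R=20, min_matches=2):
    # V(x, y, m) = I(x, y) - M(x, y, m); the pixel is background (0) when
    # at least `min_matches` samples lie within distance R of its value,
    # and foreground (255) otherwise. R and min_matches are assumptions
    # inside the ranges described in the text.
    diffs = np.abs(np.asarray(samples, dtype=np.int32) - int(value))
    matches = int(np.count_nonzero(diffs < R))
    return 0 if matches >= min_matches else 255
```

Applying this to every pixel of the selected frame yields the binary image of step 6.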
Step 7, eliminating the flicker pixel points with a morphological algorithm.
A morphological opening operation and a morphological closing operation are applied to the binary image in sequence, yielding the target detection image of the selected frame with the flicker pixel points eliminated.
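Step 7 can be sketched with a plain NumPy implementation of binary opening and closing; the 3 × 3 structuring element is an assumption, as the patent does not specify a kernel size:

```python
import numpy as np

def _erode(img, k=3):
    # binary erosion with a k x k square structuring element
    p = k // 2
    padded = np.pad(img, p, mode='constant', constant_values=0)
    h, w = img.shape
    out = np.full_like(img, 255)
    for dy in range(k):
        for dx in range(k):
            out = np.minimum(out, padded[dy:dy + h, dx:dx + w])
    return out

def _dilate(img, k=3):
    # binary dilation with a k x k square structuring element
    p = k // 2
    padded = np.pad(img, p, mode='constant', constant_values=0)
    h, w = img.shape
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out = np.maximum(out, padded[dy:dy + h, dx:dx + w])
    return out

def eliminate_flicker(binary, k=3):
    # Opening (erode then dilate) deletes isolated flicker pixels;
    # closing (dilate then erode) fills small holes in the target.
    opened = _dilate(_erode(binary, k), k)
    return _erode(_dilate(opened, k), k)
```

An isolated foreground pixel is removed by the opening, while a solid target region survives both operations.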
Step 8, whether all frame images have been selected is judged; if so, step 9 is executed; otherwise, step 4 is executed.
Step 9, the target detection image with the flicker pixel points eliminated is output.
The effects of the present invention can be further illustrated by the following simulation experiments.
1. Simulation conditions:
The hardware platform of the simulation experiment is an Intel Core i7-4790 CPU with a 3.6 GHz clock frequency and 12 GB of memory.
The software platform is the Windows 10 operating system, Visual Studio 2013 and OpenCV 3.0.
The input video is an optical video shot with a Robotic C270 camera; the data were collected at the Guanghua Luo laboratory, Amanian district, Xi'an city, Shaanxi province, in March 2019. The image size is 544 × 960 pixels, the video contains 518 frames in total, and the format is MP4.
2. Simulation content and result analysis:
the simulation experiment of the invention adopts the ViBe target detection method of the invention and the prior art to respectively carry out target detection on the input optical video image to obtain a target detection result graph.
In the simulation experiment, the adopted prior art refers to that:
the prior art ViBe target detection method is derived from the patent document "an improved moving target detection VIBE algorithm" applied by Anhui university (patent application No.: 201810498273.1, publication No.: CN 108805897A).
FIG. 3 shows the video frame images selected for target detection: (a), (b), (c) and (d) are the 18th, 54th, 74th and 109th frame images of the selected video, respectively; all 4 frame images are received in real time.
FIG. 4 shows the target detection results (a)-(d) obtained by applying the prior-art ViBe target detection method to the 4 frame images of FIG. 3.
Fig. 4(a) is a target detection result diagram obtained when the target to be detected changes from a static state to a moving state, and it can be seen that a large number of flicker pixels exist in fig. 4 (a). Fig. 4(b) and 4(c) are diagrams of target detection results obtained when the target to be detected has left the video area, respectively, and it can be seen that both the diagrams have flicker pixels. Fig. 4(d) is a target detection result diagram obtained when the target to be detected appears in the video region again, and it can be seen that a large number of flicker pixels still exist around the target to be detected in the diagram.
FIG. 5 shows the target detection results (a)-(d), with flicker pixels eliminated, obtained by applying the target detection method of the invention to the 4 frame images of FIG. 3.
FIG. 5(a) is the target detection result obtained when the target changes from a static to a moving state; a few flicker pixels are still visible. FIGS. 5(b) and 5(c) are the results obtained after the target has left the video area; the number of flicker pixel points gradually decreases until only a few remain. FIG. 5(d) is the result obtained when the target reappears in the video area; no flicker pixel points remain around the target, which is clearly detected.
Comparing the target detection results of FIGS. 4 and 5 shows that the result of the invention is clearer and that flicker pixel points are eliminated faster and more thoroughly, whereas a large number of flicker pixel points persist throughout the results of the prior-art ViBe method. The invention therefore achieves a better moving-target detection effect.
In conclusion, establishing the ViBe model with Euclidean distance weights and then applying morphological operations to the binary images effectively improves target detection accuracy and robustness.
Claims (3)
1. An optical image processing method for eliminating optical flicker pixels based on a ViBe model, characterized in that a ViBe model of the first frame image is established using Euclidean distance weights, and flicker pixel points are eliminated by applying a morphological algorithm to the binary image generated by comparing the ViBe model with each video frame image; the method comprises the following steps:
(1) acquiring optical video information:
acquiring frame width, frame height and frame rate information of each frame of optical video from the input optical video;
(2) calculating the distance weight of a pixel point in the first frame image:
(2a) randomly selecting a pixel block of 5 × 5 pixel points from the first frame image, calculating the Euclidean distance between the pixel point at the central position of the selected block and each of the remaining pixel points in the block, and forming a distance map from all the Euclidean distances of the selected block;
(2b) calculating the Euclidean distance weight of each pixel point by using the following formula:

S_k = d_k / Σ_i d_i

wherein S_k represents the Euclidean distance weight of the k-th pixel point among the remaining pixel points of the pixel block, d_k represents the Euclidean distance between the k-th pixel point and the central pixel point of the block, Σ represents the summation operation, i indexes the remaining pixel points of the block, and d_i represents the Euclidean distance between the i-th remaining pixel point and the central pixel point of the block;
(2c) sorting the Euclidean distance weights from large to small;
(3) establishing a ViBe model of the first frame image by using the Euclidean distance weight:
(3a) selecting an unselected pixel point from the first frame image;
(3b) selecting a 5 × 5 pixel block centered on the selected pixel point, calculating the Euclidean distance between the central pixel point and each of the remaining pixel points in the block, forming a distance map from all the Euclidean distances of the selected block, and forming a ViBe model with 20 samples for the selected pixel point from the pixel values of the first 20 pixel points in the descending Euclidean-distance-weight order of the distance map;
(3c) judging whether all pixel points in the first frame image are selected, if so, executing step (3d), otherwise, executing step (3a);
(3d) forming a first frame image ViBe model by ViBe models of all pixel points in the first frame image;
(4) selecting a frame image which is not selected except the first frame;
(5) binarization of the selected pixel points:
(5a) selecting an unselected pixel point from the selected frame image;
(5b) calculating the difference value between the selected pixel point and each sample in the ViBe model in which the pixel point is positioned by using a sample difference value formula;
(5c) calculating the identification value of the selected pixel point by using the following formula:

S(x, y) = 0, if the number of samples m satisfying |V(x, y, m)| < R is at least min;
S(x, y) = 1, otherwise,

wherein S(x, y) represents the identification value of the selected pixel point located at coordinate (x, y), V(x, y, m) represents the difference between the selected pixel value at (x, y) and the m-th sample in the ViBe model, R represents the pixel distance threshold with value range [12, 30], and min represents the sample set threshold;
(5d) judging whether S(x, y) of the selected pixel point is 0, if so, executing step (5e), otherwise, executing step (5f);
(5e) representing the selected pixel points as background pixel points, and changing the pixel values of the background pixel points to be 0;
(5f) representing the selected pixel points as foreground pixel points, and changing the pixel values of the foreground pixel points to 255;
(6) judging whether all pixel points in the selected frame image are selected completely, if so, forming a binary image by all foreground pixel points and background pixel points in the selected frame image, and then executing the step (7), otherwise, executing the step (5);
(7) eliminating flicker pixel points by using a morphological algorithm:
sequentially carrying out morphological opening operation and morphological closing operation on the binary image to obtain a target detection image of the selected frame image after eliminating the flicker pixel points;
(8) judging whether all the frame images are selected, if so, executing the step (9), otherwise, executing the step (4);
(9) outputting the target detection image with the flicker pixel points eliminated.
2. The method for optical image processing based on ViBe model elimination of optical flicker pixels according to claim 1, wherein the sample difference formula in step (5b) is as follows:
V(x, y, m) = I(x, y) - M(x, y, m)
where V(x, y, m) represents the difference between the pixel value at coordinate (x, y) and the m-th sample in the ViBe model, I(x, y) represents the pixel value at coordinate (x, y), and M(x, y, m) represents the value of the m-th sample in the ViBe model of the pixel point at coordinate (x, y).
3. The method for optical image processing based on ViBe model elimination of optical flicker pixels according to claim 1, wherein the sample set threshold in step (5c) is the minimum total number of samples in the ViBe model.
Priority Applications (1)
- CN201910517578.7A (CN110264422B), priority and filing date 2019-06-14: Optical image processing method for eliminating optical flicker pixels based on ViBe model
Publications (2)
- CN110264422A, published 2019-09-20
- CN110264422B, published 2020-12-08
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant