CN106651918B - Foreground extraction method under shaking background - Google Patents


Info

Publication number
CN106651918B
CN106651918B
Authority
CN
China
Prior art keywords
image
slider
motion vector
frame
average motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710083910.4A
Other languages
Chinese (zh)
Other versions
CN106651918A (en)
Inventor
何冰
侯晓明
顾俊杰
印明骋
陆涛
柴忠良
赖志超
王欣庭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Shanghai Electric Power Co Ltd
Original Assignee
State Grid Shanghai Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Shanghai Electric Power Co Ltd
Priority to CN201710083910.4A
Publication of CN106651918A
Application granted
Publication of CN106651918B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence

Landscapes

  • Image Analysis (AREA)
  • Studio Devices (AREA)
  • Studio Circuits (AREA)

Abstract

The invention relates to a foreground extraction method under a shaking background, comprising: S1, initializing the size of an image slider, the moving step length of the slider, and the position of the slider; S2, reading two adjacent frame images from a video and obtaining binarized images be_frame and af_frame; S3, extracting the image slider at the same position in be_frame and af_frame and calculating the difference of the centers of gravity of the two extracted patches to obtain the motion vector of the frame pair at the current slider position; S4, moving the image slider by the step length and repeating step S3 until the whole image has been covered; S5, calculating the average motion vector of the frame pair from all the obtained motion vectors; S6, reversely translating the rear frame image according to the average motion vector to obtain a jitter-reduced image; and S7, extracting the foreground of the adjusted frame pair with a Gaussian mixture model. Compared with the prior art, the method has advantages such as a wider application range.

Description

Foreground extraction method under shaking background
Technical Field
The invention relates to foreground extraction methods, and in particular to a foreground extraction method under a shaking background.
Background
In order to improve the quality and level of urban security, camera equipment has been deployed in almost all public places, but the video acquired by this large amount of equipment cannot at present be analyzed comprehensively, intelligently, and accurately. Checking surveillance video manually is neither practical nor economical: research shows that after watching surveillance video for 20 minutes, a person's attention drops to an unacceptable degree, and unconventional phenomena appearing in the video may be missed. A video foreground extraction algorithm that can adapt to complex scenes can automatically extract the foreground in a video and serve as the input of pattern recognition and motion analysis systems for intelligent video analysis.
Foreground extraction techniques also figure in many fields of research: after a patient takes certain medicines, doctors can use foreground extraction to track the trace of the medicine in the patient's body and verify whether it accurately reaches the focus and takes effect; scholars studying animal behavior need not monitor an observed target individual in person for long periods, but can rely on an intelligent video analysis system instead. In military affairs, foreground extraction and target tracking can also be used to capture the behavior of key targets, strengthening defense and assisting attack.
When the equipment shooting the video is almost static, the Gaussian mixture model can extract foreground objects fairly accurately, but it lacks adaptability to video that jitters.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a foreground extraction method under a shaking background.
The purpose of the invention can be realized by the following technical scheme:
A foreground extraction method under a shaking background comprises the following steps:
step S1: initializing the size of an image slider, the moving step length of the slider, and the position of the slider;
step S2: reading two adjacent frame images from the video and obtaining a binarized image be_frame of the previous frame image and a binarized image af_frame of the subsequent frame image;
step S3: extracting the image slider at the same position in the binarized images of the previous and subsequent frame images, and calculating the difference of the centers of gravity of the two extracted patches to obtain the motion vector of the frame pair at the current slider position;
step S4: moving the image slider by the step length and repeating step S3 until the whole image has been covered;
step S5: calculating the average motion vector of the frame pair from the motion vectors obtained at all slider positions;
step S6: reversely translating the subsequent frame image according to the obtained average motion vector to obtain a jitter-reduced image;
step S7: performing foreground extraction on the jitter-adjusted previous and subsequent frame images using a Gaussian mixture model (an end-to-end sketch follows).
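For orientation only, here is a minimal end-to-end sketch of steps S1 to S7 in Python with OpenCV. It is an illustration under stated assumptions, not the patented implementation: the helper functions compute_avg_motion and compensate_shake are hypothetical names sketched later in this description, and the Canny thresholds (50, 150) are arbitrary choices the patent does not specify.

```python
import cv2

def extract_foreground(video_path):
    cap = cv2.VideoCapture(video_path)
    mog = cv2.createBackgroundSubtractorMOG2()  # step S7: OpenCV's Gaussian mixture model
    masks = []
    ok, prev = cap.read()
    while ok:
        ok, curr = cap.read()
        if not ok:
            break
        # Step S2: edge-binarized images of the two adjacent frames.
        be_frame = cv2.Canny(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), 50, 150)
        af_frame = cv2.Canny(cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY), 50, 150)
        # Steps S1 and S3-S5: average barycenter-difference motion vector.
        v = compute_avg_motion(be_frame, af_frame, size=50, step=25)
        # Step S6: reverse-translate the rear frame to cancel the shake.
        curr = compensate_shake(curr, v, be_frame, af_frame)
        # Step S7: foreground mask from the mixture-of-Gaussians model.
        masks.append(mog.apply(curr))
        prev = curr
    cap.release()
    return masks
```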
The step S6 specifically includes:
step S61: judging whether the magnitude of the current average motion vector is greater than or equal to 2; if so, executing step S62; otherwise, executing step S64;
step S62: enlarging the image slider to twice its current size, repeating steps S3 to S5 to obtain another average motion vector, taking the average of this motion vector and the original average motion vector as the final average motion vector, and executing step S63;
step S63: reversely translating the subsequent frame image according to the final average motion vector to obtain the jitter-reduced image;
step S64: enlarging both the binarized image of the previous frame and the binarized image of the subsequent frame to 10 times their current size, repeating steps S3 to S5 to obtain another average motion vector, taking this as the final average motion vector, and executing step S65;
step S65: enlarging the subsequent frame image to 10 times its current size, reversely translating the enlarged image according to the final average motion vector, and then reducing the image to its original size to obtain the jitter-reduced image (a sketch of this branch follows).
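Read as a whole, steps S61 to S65 amount to the following sketch (Python/OpenCV, hypothetical helper names; compute_avg_motion is sketched after the barycenter formula below, and keeping the 25-pixel step for the doubled slider is an assumption the text does not settle):

```python
import cv2
import numpy as np

def translate(img, dx, dy):
    # Shift an image by (dx, dy) pixels with an affine warp.
    h, w = img.shape[:2]
    M = np.float32([[1, 0, dx], [0, 1, dy]])
    return cv2.warpAffine(img, M, (w, h))

def compensate_shake(frame, v, be_frame, af_frame):
    if np.hypot(v[0], v[1]) >= 2:  # step S61: large jitter
        # Step S62: re-estimate with the slider doubled to 100x100, then average.
        v2 = compute_avg_motion(be_frame, af_frame, size=100, step=25)
        vx, vy = (v[0] + v2[0]) / 2, (v[1] + v2[1]) / 2
        return translate(frame, -vx, -vy)  # step S63: reverse translation
    # Step S64: magnify both binarized frames 10x and re-estimate.
    big_be = cv2.resize(be_frame, None, fx=10, fy=10)
    big_af = cv2.resize(af_frame, None, fx=10, fy=10)
    vx, vy = compute_avg_motion(big_be, big_af, size=50, step=25)
    # Step S65: shift the magnified rear frame, then shrink back to size.
    big = translate(cv2.resize(frame, None, fx=10, fy=10), -vx, -vy)
    return cv2.resize(big, (frame.shape[1], frame.shape[0]))
```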
The initial size of the image slider is 50 pixels by 50 pixels with a step size of 25 pixels.
The motion vector of the frame pair at a slider position is specifically:

$$V = (Cx_{af} - Cx_{be},\ Cy_{af} - Cy_{be})$$

wherein: Cx_be and Cy_be are the x- and y-coordinates of the center of gravity of the portion of the previous frame image extracted by the image slider, and Cx_af and Cy_af are the x- and y-coordinates of the center of gravity of the portion of the subsequent frame image extracted by the image slider.
The barycentric coordinates are specifically:

$$Cx = \frac{\sum_{i=1}^{m} w_i x_i}{W}, \qquad Cy = \frac{\sum_{i=1}^{m} w_i y_i}{W}, \qquad W = \sum_{i=1}^{m} w_i$$

wherein: Cx is the x-coordinate of the center of gravity, Cy is the y-coordinate of the center of gravity, W is the sum of the pixel values of all the pixel points in the portion of the image extracted by the image slider, w_i is the pixel value of pixel point i, x_i and y_i are the x- and y-coordinates of pixel point i, and m is the total number of pixel points in the extracted portion.
Compared with the prior art, the invention has the following advantages:
1) Converting the color image into a binary image speeds up computation and reduces the program's storage requirements. On this basis, the sliding extraction by the image slider lets an iterative algorithm cover the whole image automatically, and averaging over repeated estimates at different scales amplifies the differences, improving sensitivity and hence the jitter-correction effect.
2) If the numerator terms of an image block's center of gravity differ greatly, the size of the image block can be adjusted so that the denominator of the center-of-gravity calculation becomes large, weakening the change in the numerator and keeping the computed center of gravity accurate; the current frame is then reversely translated according to the average motion vector to obtain a jitter-reduced image. If the deviation produced by image jitter is small, the front and rear frame images are magnified N times, the motion vector is calculated from the centers of gravity, the current frame is reversely translated according to the average motion vector, and the result is finally reduced N times, yielding a jitter-reduced image.
Drawings
FIG. 1 is a schematic flow chart of the main steps of the method of the present invention;
FIG. 2 is the front frame image of an example;
FIG. 3 is the rear frame image of an example;
FIG. 4(a) is an image of non-jittered video 1 in the experiment;
FIG. 4(b) is the foreground image extracted from non-jittered video 1 by the Gaussian mixture model method in the experiment;
FIG. 4(c) is the foreground image extracted from non-jittered video 1 by the method of the present application in the experiment;
FIG. 5(a) is an image of jittered video 2 in the experiment;
FIG. 5(b) is the foreground image extracted from jittered video 2 by the Gaussian mixture model method in the experiment;
FIG. 5(c) is the foreground image extracted from jittered video 2 by the method of the present application in the experiment;
FIG. 6(a) is an image of jittered video 3 in the experiment;
FIG. 6(b) is the foreground image extracted from jittered video 3 by the Gaussian mixture model method in the experiment;
FIG. 6(c) is the foreground image extracted from jittered video 3 by the method of the present application in the experiment.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The embodiments are implemented on the premise of the technical solution of the present invention and give a detailed implementation manner and a specific operation process, but the scope of the present invention is not limited to the following embodiments.
During video shooting, if the camera moves, the position in the picture of an object that is actually still changes, say from (x1, y1) to (x2, y2); the motion vector is then:

$$v = (x_2 - x_1,\ y_2 - y_1)$$
the direct result of the camera shake is that the objects in the video generate the same displacement, the block motion estimation method divides the current frame into small blocks with the same size, then searches the most similar block in the previous frame for every small blocks, and compares the coordinates between the two blocks to obtain the displacement of each image block.
The block motion estimation algorithm here is based on the moment features of a random variable: for a random variable X, the k-th order central moment is E(X - EX)^k. In principle an n-th order moment can be used to describe the motion vector of an image block; in this application, the center of gravity, i.e. the first-order moment of the image block, is used for block motion estimation.
As shown in FIG. 2 and FIG. 3, which show the image blocks at the same position in the frame images of a video at time t-1 and time t respectively, the image inside the block is shifted (the hollow circle moves upwards) due to camera shake, and the center of gravity of the image block changes accordingly.
A foreground extraction method under a shaking background, as shown in FIG. 1, includes:
step S1: initializing the size of the image slider, the moving step length of the slider, and the position of the slider, wherein the initial size of the image slider is 50 pixels × 50 pixels and the step length is 25 pixels;
step S2: reading two adjacent frame images from the video and obtaining a binarized image be_frame of the previous frame image and a binarized image af_frame of the subsequent frame image;
step S3: extracting the image slider at the same position in the binarized images of the previous and subsequent frame images, and calculating the difference of the centers of gravity of the two extracted patches to obtain the motion vector of the frame pair at the current slider position;
step S4: moving the image slider by the step length and repeating step S3 until the whole image has been covered;
step S5: calculating the average motion vector of the frame pair from the motion vectors obtained at all slider positions,
the motion vector at a slider position being specifically:

$$V = (Cx_{af} - Cx_{be},\ Cy_{af} - Cy_{be})$$

wherein: Cx_be and Cy_be are the x- and y-coordinates of the center of gravity of the portion of the previous frame image extracted by the image slider, and Cx_af and Cy_af are the x- and y-coordinates of the center of gravity of the portion of the subsequent frame image extracted by the image slider;
step S6: reversely translating the subsequent frame image according to the obtained average motion vector to obtain a jitter-reduced image, which specifically comprises the following steps:
step S61: judging whether the magnitude of the current average motion vector is greater than or equal to 2; if so, executing step S62; otherwise, executing step S64;
step S62: enlarging the image slider to twice its current size, repeating steps S3 to S5 to obtain another average motion vector, taking the average of this motion vector and the original average motion vector as the final average motion vector, and executing step S63;
step S63: reversely translating the subsequent frame image according to the final average motion vector to obtain the jitter-reduced image;
step S64: enlarging both the binarized image of the previous frame and the binarized image of the subsequent frame to 10 times their current size, repeating steps S3 to S5 to obtain another average motion vector, taking this as the final average motion vector, and executing step S65;
step S65: enlarging the subsequent frame image to 10 times its current size, reversely translating the enlarged image according to the final average motion vector, and then reducing the image to its original size to obtain the jitter-reduced image.
The barycentric coordinates are specifically:

$$Cx = \frac{\sum_{i=1}^{m} w_i x_i}{W}, \qquad Cy = \frac{\sum_{i=1}^{m} w_i y_i}{W}, \qquad W = \sum_{i=1}^{m} w_i$$

wherein: Cx is the x-coordinate of the center of gravity, Cy is the y-coordinate of the center of gravity, W is the sum of the pixel values of all the pixel points in the portion of the image extracted by the image slider, w_i is the pixel value of pixel point i, x_i and y_i are the x- and y-coordinates of pixel point i, and m is the total number of pixel points in the extracted portion.
When performing block analysis of a video's time sequence, if the difference between the pixel points shifted out of the previous frame and those shifted into the next frame is ignored, the motion vector of the block's center of gravity can be taken as the motion vector of the block. In most cases, however, the shifted-out and shifted-in pixels do differ, sometimes greatly; to minimize the effect of these differences, this application takes the following two measures:
the color image uses the RGB color space to describe the value of the pixel point, which is often larger, and when the size of the image block is larger, the calculation amount of calculating the gravity center of the block is larger, so the Canny algorithm is adopted to convert the color image into the edge binary image, the calculation speed can be accelerated, and the storage space of a program can be reduced.
(1) Motion vector calculation of preceding and succeeding frame images with large jitter
Block comparison directly on an arbitrary binary image is problematic, because the relationship between the shifted-out and shifted-in pixels cannot be determined: if the shifted-out pixels are black and the shifted-in pixels are white, the block's center of gravity shifts downward, and conversely it shifts upward. In the experiments here, block comparison is therefore performed on the binarized image of extracted edges over a black background, so the pixels shifted out of the previous frame and into the next frame are basically all black points, and the motion vector of the block's center of gravity is obtained reliably. When the divided image blocks are small, a large difference in the numerator of the center-of-gravity formula produces a large error; to reduce this error, the block size can be adjusted so that the denominator becomes large, weakening the change in the numerator and keeping the center of gravity accurate. Finally, the current frame is reversely translated according to the average motion vector to obtain the jitter-reduced image.
(2) Motion vector calculation of preceding and succeeding frame images with small jitter
When the jitter is small, the center-of-gravity deviation computed at the original scale is only 0 or 1 pixel, so the sub-pixel displacement cannot be resolved. Therefore, when the computed center-of-gravity deviation is 0 or 1, the front and rear frame images are magnified N times, the motion vector is calculated from the centers of gravity, the current frame is reversely translated according to this average motion vector, and the result is finally reduced N times to obtain the jitter-reduced image.
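As a worked example with illustrative numbers: take N = 10 and suppose the true inter-frame shift is 0.3 pixels. At the original scale the barycenter deviation rounds to 0 and the shift is invisible; in the 10x images it becomes 3 pixels and is measurable, and translating the magnified rear frame by -3 pixels before shrinking it 10 times applies an effective correction of 0.3 pixels, finer than any whole-pixel shift at the original scale.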
Step S7: foreground extraction is performed on the jitter-adjusted front and rear frame images using a Gaussian mixture model. Foreground extraction with a Gaussian mixture model is a mature prior art, so a detailed description is omitted and only a rough introduction follows:
given batches of observed data X ═ { X ═1,x2,…,xNThe batch of data is generated by M single Gaussian models, but a specific data xiProportion α of each single Gaussian model in the mixture model to which single Gaussian model belongsjMathematical expectation of mujSum covariance CjUnknown, these sample data from different Gaussian distributions are mixed together at to become a Gaussian mixture model.
$$p(x \mid \Theta) = \sum_{j=1}^{M} \alpha_j \, N(x; \mu_j, C_j)$$

wherein

$$N(x; \mu_j, C_j) = \frac{1}{(2\pi)^{d/2} |C_j|^{1/2}} \exp\left(-\frac{1}{2}(x - \mu_j)^{\mathrm{T}} C_j^{-1} (x - \mu_j)\right)$$

$$\alpha_j \ge 0, \qquad \sum_{j=1}^{M} \alpha_j = 1$$

Let

$$\Theta = \{\alpha_1, \ldots, \alpha_M;\ \mu_1, \ldots, \mu_M;\ C_1, \ldots, C_M\}$$

denote all the parameters of the Gaussian mixture model, estimated from the sample set X. The probability density function of the sample set X is

$$p(X \mid \Theta) = \prod_{i=1}^{N} p(x_i \mid \Theta)$$
A background model is extracted from the video by the Gaussian mixture model: when a new pixel is read, it is matched one by one against the known single Gaussian models in order of model priority; if the pixel matches one of the single Gaussian models, it is considered to belong to a background point, and the parameters of that single Gaussian model are updated with the pixel.
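In practice this match-and-update loop is available off the shelf; a sketch using OpenCV's mixture-of-Gaussians background subtractor (the parameter values are illustrative, and stabilized_frame stands for a jitter-compensated frame from step S6):

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)
fg_mask = subtractor.apply(stabilized_frame)  # 255 = foreground, 0 = background
```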
Experiments
For foreground extraction under complex backgrounds, two algorithms are compared: the Gaussian mixture model (prior art) and the method of the present application. To analyze the adaptability of the algorithms to different video scenes, 20 groups of videos were recorded for foreground extraction experiments, of which 5 groups are non-jittering videos and 15 groups are jittering videos. Each group of videos was processed with both algorithms, and 3 of the 20 groups are presented below.
Each example group is presented with three video frames, and for each frame the extraction results of the Gaussian mixture model and of the method of the present application are shown.
The video shown in FIG. 4(a)(b)(c) is a non-jittered video, from which it can be seen that the extraction results of the Gaussian mixture model and of the method here do not differ significantly when the video does not jitter. FIG. 5(a)(b)(c) and FIG. 6(a)(b)(c) show the foreground extraction results for jittered videos.
The 20 experimental videos each contain about 100 to 200 frames. 10 frames and 20 frames were randomly selected from the two types of videos, respectively. Table 1 shows, for each video, the proportion of the randomly selected frames in which the foreground extracted by the proposed algorithm improves on the Gaussian mixture model. V1 through V5 are non-jittered videos; the rest are jittered videos.
TABLE 1 comparison of foreground extraction results
[Table 1: per-video improved-frame ratios; provided as an image in the original document]
In non-jittered videos, the algorithm performs identically to the Gaussian mixture model, so the improved-frame ratio is always 0. In jittered videos, the improvement is obvious when the extracted frame happens to jitter; when the jitter amplitude of the extracted frame is small, the room for improvement shrinks correspondingly.
The experiments show that, compared with the Gaussian mixture model, the algorithm adapts better to foreground extraction from videos with jitter.

Claims (4)

1. A foreground extraction method under a shaking background, comprising:
step S1: initializing the size of the image slider, the moving step length of the slider, and the position of the slider,
step S2: reading two adjacent frame images from the video and obtaining a binarized image be_frame of the previous frame image and a binarized image af_frame of the subsequent frame image,
step S3: extracting the image slider at the same position in the binarized images of the previous and subsequent frame images, and calculating the difference of the centers of gravity of the two extracted patches to obtain the motion vector of the frame pair at the current slider position,
step S4: moving the image slider by the step length and repeating step S3 until the whole image has been covered,
step S5: calculating the average motion vector of the frame pair from the motion vectors obtained at all slider positions,
step S6: reversely translating the subsequent frame image according to the obtained average motion vector to obtain a jitter-reduced image,
step S7: performing foreground extraction on the jitter-adjusted previous and subsequent frame images using a Gaussian mixture model;
the step S6 specifically includes:
step S61: judging whether the magnitude of the current average motion vector is greater than or equal to 2; if so, executing step S62; if not, executing step S64,
step S62: enlarging the image slider to twice its current size, repeating steps S3 to S5 to obtain another average motion vector, taking the average of this motion vector and the original average motion vector as the final average motion vector, and executing step S63,
step S63: reversely translating the subsequent frame image according to the final average motion vector to obtain the jitter-reduced image,
step S64: enlarging both the binarized image of the previous frame and the binarized image of the subsequent frame to 10 times their current size, repeating steps S3 to S5 to obtain another average motion vector, taking this as the final average motion vector, and executing step S65,
step S65: enlarging the subsequent frame image to 10 times its current size, reversely translating the enlarged image according to the final average motion vector, and then reducing the image to its original size to obtain the jitter-reduced image.
2. The foreground extraction method under a shaking background of claim 1, wherein the initial size of the image slider is 50 pixels by 50 pixels and the step length is 25 pixels.
3. The foreground extraction method under a shaking background as recited in claim 1, wherein the motion vector of the frame pair at a slider position is specifically:

$$V = (Cx_{af} - Cx_{be},\ Cy_{af} - Cy_{be})$$

wherein: Cx_be and Cy_be are the x- and y-coordinates of the center of gravity of the portion of the previous frame image extracted by the image slider, and Cx_af and Cy_af are the x- and y-coordinates of the center of gravity of the portion of the subsequent frame image extracted by the image slider.
4. The foreground extraction method under a shaking background of claim 1, wherein the barycentric coordinates are specifically:

$$Cx = \frac{\sum_{i=1}^{m} w_i x_i}{W}, \qquad Cy = \frac{\sum_{i=1}^{m} w_i y_i}{W}, \qquad W = \sum_{i=1}^{m} w_i$$

wherein: Cx is the x-coordinate of the center of gravity, Cy is the y-coordinate of the center of gravity, W is the sum of the pixel values of all the pixel points in the portion of the image extracted by the image slider, w_i is the pixel value of pixel point i, x_i and y_i are the x- and y-coordinates of pixel point i, and m is the total number of pixel points in the extracted portion.
CN201710083910.4A 2017-02-16 2017-02-16 Foreground extraction method under shaking background Active CN106651918B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710083910.4A CN106651918B (en) 2017-02-16 2017-02-16 Foreground extraction method under shaking background


Publications (2)

Publication Number Publication Date
CN106651918A CN106651918A (en) 2017-05-10
CN106651918B true CN106651918B (en) 2020-01-31

Family

ID=58846281

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710083910.4A Active CN106651918B (en) 2017-02-16 2017-02-16 Foreground extraction method under shaking background

Country Status (1)

Country Link
CN (1) CN106651918B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109697689B (en) * 2017-10-23 2023-09-01 北京京东尚科信息技术有限公司 Storage medium, electronic device, video synthesis method and device
CN109724992A (en) * 2018-07-23 2019-05-07 永康市柴迪贸易有限公司 Cabinet for TV cleannes analytical mechanism
CN110458820A (en) * 2019-08-06 2019-11-15 腾讯科技(深圳)有限公司 A kind of multimedia messages method for implantation, device, equipment and storage medium


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1211872A (en) * 1997-06-04 1999-03-24 株式会社日立制作所 Image signal system converter and TV set
CN1647113A (en) * 2002-04-11 2005-07-27 皇家飞利浦电子股份有限公司 Motion estimation unit and method of estimating a motion vector
US8325810B2 (en) * 2002-06-19 2012-12-04 Stmicroelectronics S.R.L. Motion estimation method and stabilization method for an image sequence
CN1921628A (en) * 2005-08-23 2007-02-28 松下电器产业株式会社 Motion vector detection apparatus and motion vector detection method
CN101090456A (en) * 2006-06-14 2007-12-19 索尼株式会社 Image processing device and method, image pickup device and method
CN104410855A (en) * 2014-11-05 2015-03-11 广州中国科学院先进技术研究所 Jitter detection method of monitoring video

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A Nonlinear Aspect-Ratio Transformation Method Based on Visual Perception; 胡彦婷 (Hu Yanting); Journal of Image and Graphics (中国图象图形学报); 2009-06-30; Vol. 14, No. 6; pp. 1082-1087 *

Also Published As

Publication number Publication date
CN106651918A (en) 2017-05-10

Similar Documents

Publication Publication Date Title
Li et al. Real-time visual tracking using compressive sensing
Cavallaro et al. Video object extraction based on adaptive background and statistical change detection
CN109685045B (en) Moving target video tracking method and system
CN112184759A (en) Moving target detection and tracking method and system based on video
CN109614933B (en) Motion segmentation method based on deterministic fitting
CN111383252B (en) Multi-camera target tracking method, system, device and storage medium
CN106651918B (en) Foreground extraction method under shaking background
KR101173559B1 (en) Apparatus and method for the automatic segmentation of multiple moving objects from a monocular video sequence
JP2015095897A (en) Method for processing video acquired from scene
CN109784215B (en) In-vivo detection method and system based on improved optical flow method
Zhang et al. An optical flow based moving objects detection algorithm for the UAV
Bhattacharyya et al. Long-term image boundary extrapolation
Nguyen et al. Real time human tracking using improved CAM-shift
CN116188535A (en) Video tracking method, device, equipment and storage medium based on optical flow estimation
Zhao et al. Shooting for smarter motion detection in cameras: improvements for the visual background extractor algorithm using optical flow
Ma et al. Automatic video object segmentation using depth information and an active contour model
Morerio et al. Optimizing superpixel clustering for real-time egocentric-vision applications
Bright et al. Mitigating motion blur for robust 3d baseball player pose modeling for pitch analysis
Abdulghafoor et al. Real-time object detection with simultaneous denoising using low-rank and total variation models
Yang et al. A hierarchical approach for background modeling and moving objects detection
Yin et al. Deep motion boundary detection
Kalsotra et al. Threshold-based moving object extraction in video streams
Fang et al. Video stabilization with local rotational motion model
Kaittan et al. Tracking of Video Objects Based on Kalman Filter
Kim et al. Unsupervised single-image reflection separation using perceptual deep image priors

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant