KR101527962B1 - method of detecting foreground in video - Google Patents

Info

Publication number
KR101527962B1
Authority
KR
South Korea
Prior art keywords
image
motion object
edge
frames
extracting
Prior art date
Application number
KR1020140036204A
Other languages
Korean (ko)
Inventor
김진영
뷔넉남
민소희
김정기
Original Assignee
전남대학교산학협력단
Priority date
Filing date
Publication date
Application filed by 전남대학교산학협력단
Priority to KR1020140036204A
Application granted
Publication of KR101527962B1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion


Abstract

The present invention relates to a method for extracting a motion object from a video image, which comprises the steps of: detecting, by an edge detector, edges of each input image frame; creating a corrected image by adding a predetermined weight value to the edges detected for the image frame; extracting a primary motion object based on RPCA from the corresponding corrected image frames when the number of obtained corrected image frames reaches a predetermined number of frames; performing Gaussian filtering to remove high-frequency noise from the primary motion object; and generating a secondary motion object by filling empty regions within the outline regions of the filtered data with a value corresponding to the outline. According to the present invention, the method can detect the motion object well even from a small number of image frames, about ten, so that the processing speed for motion object extraction is improved.

Description

A method of extracting motion objects from a video image

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to a motion object extraction method for a video image, and more particularly, to a method for extracting the foreground from a continuously input video image.

In order to extract and recognize a moving object in a continuously input sequence, it is necessary to separate a foreground from a background.

A method of detecting motion objects by subtracting consecutive images from each other is disclosed in Korean Patent No. 10-0377067, among others.

If the background is fixed, the background image can easily be stored and subtracted. Even in this case, however, noise objects are generated by camera distortion, distortion caused by illumination changes, and the like.

Therefore, even for a fixed background, methods robust to such distortion are needed.

In addition to the method using the difference between successive images, there are also methods of separating the background using optical flow or a simple cumulative value of the image.

Among recently proposed methods, there is also a method of separating the motion object (foreground) using robust principal component analysis (RPCA).

However, to obtain good motion object extraction performance when successively input video images are processed directly by RPCA, the arithmetic operations must be applied over a long sequence of image frames, for example, 30 frames or more.

As a result, when RPCA is applied to real-time image processing, performance degradation is serious and application may even be impossible, because only a limited number of frame images can be used for real-time processing. Even outside real time, increasing the number of frames increases the amount of calculation and the computational burden.

It is an object of the present invention to provide a method of extracting a motion object from a video image that can extract a good motion object by RPCA even with a small number of frames.

According to an aspect of the present invention, there is provided a method of extracting a motion object from a video image, the method comprising: a. detecting an edge of an image frame by an edge detector with respect to an input image frame; b. generating a corrected image by adding a weight value set for the edges detected for the image frame; c. extracting a primary motion object based on RPCA from the corresponding corrected image frames when the number of corrected image frames obtained through steps a and b reaches a set number of target frames; d. performing Gaussian filtering to remove high-frequency noise from the primary motion object extracted in step c; and e. generating a secondary motion object by filling blank areas in the outline regions of the data obtained through step d with a value corresponding to the outline.

Preferably, step a comprises applying a Canny edge detector to detect the edges.

In addition, step b generates the corrected image by multiplying the binary edge image detected for the frame by 255 and adding it to the gray image (0 to 255) of the image frame.

More preferably, the method further comprises generating final motion object information by removing areas smaller than a predetermined basic size from the motion object areas generated through step e.

According to the method for extracting a motion object from a video image according to the present invention, a motion object can be detected well from a small number of image frames, about 10 or less, so that the processing speed for motion object extraction is improved.

FIG. 1 is a flowchart showing a motion object extraction process of a video image according to the present invention,
FIG. 2 is a flow chart showing the edge detection process of FIG. 1,
FIG. 3 is a view for explaining a blank area filling process of FIG. 1,
FIG. 4 is an image showing an example of extracting a motion object according to the present invention.

Hereinafter, a method for extracting a motion object of a video image according to a preferred embodiment of the present invention will be described in detail with reference to the accompanying drawings.

FIG. 1 is a flowchart illustrating a motion object extraction process of a video image according to the present invention.

First, an edge of an image frame is detected by an edge detector with respect to an image frame sequentially input from a video image (step 10).

Here, the edge detector applied is a Canny edge detector; a detailed description will be given later.

Next, a weight value set for the edges detected in the image frame is added to generate a corrected image, and the generated corrected image is stored in a queue for each frame (step 20).

In step 20, the binary edge image is multiplied by 255 and added to the gray image (0 to 255) of the frame to generate the corrected image.

Next, it is determined whether the number of corrected image frames obtained has reached the set target frame number T (step 30). If the target frame number T has been reached, a primary motion object is extracted from the corresponding corrected image frames based on RPCA (step 40).

Preferably, the number of target frames is 8 to 12, more preferably 10.

Thereafter, Gaussian filtering is performed on the extracted primary motion object to remove high-frequency noise (step 50), and holes in the outline regions of the filtered data are filled with a value corresponding to the outline (step 60).

Here, the contour line refers to a line that follows the boundary of an object formed between the edges in the form of a closed loop.

After step 60, areas smaller than the set basic size are removed from the generated motion object areas (step 70), and final motion object information is generated (step 80).

Here, the size threshold for removal may be chosen in consideration of the number of pixels of the applied image; preferably, objects corresponding to regions smaller than about 1/1000 to 1/2000 of the number of pixels of the frame image are removed.

As an example, in the case of an HD (1280x720) class image, an area smaller than a size corresponding to 300 pixels may be set to be removed.
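The small-region removal of steps 70 and 80 can be sketched with a simple 4-connected flood-fill labelling. This is an illustrative sketch, not the patent's implementation; the function name and pixel threshold are assumptions.

```python
import numpy as np

def remove_small_regions(mask, min_size):
    """Remove 4-connected foreground regions smaller than min_size pixels.

    A hedged sketch of the small-object removal step; flood fill collects
    one connected component at a time and keeps it only if large enough.
    """
    mask = mask.astype(bool)
    out = np.zeros_like(mask)
    visited = np.zeros_like(mask)
    H, W = mask.shape
    for i in range(H):
        for j in range(W):
            if mask[i, j] and not visited[i, j]:
                # flood fill to collect one connected component
                stack, comp = [(i, j)], []
                visited[i, j] = True
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W \
                                and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                if len(comp) >= min_size:  # keep only sufficiently large regions
                    for y, x in comp:
                        out[y, x] = True
    return out
```

For an HD frame, `min_size` would be set around 300 as described above.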

Hereinafter, the motion object extraction process will be described in more detail.

First, let's look at the definition of RPCA.

RPCA was proposed by E. J. Candes et al. (E. J. Candes, et al., "Robust Principal Component Analysis," ACM, Vol. 58, 2009). First, assume that there is a large-scale data matrix M. PCA then expresses the matrix M as Equation (1):

M = L0 + N0     (1)

In the representation above, L0 has low rank and N0 is a small perturbation matrix. RPCA extends this to Equation (2) below.

minimize ||L||_* + λ||S||_1 subject to M = L + S     (2)

In Equation (2), ||L||_* is the nuclear norm of the matrix and ||S||_1 is the l1 norm. When the above problem is solved, the low-rank L represents the background, and the sparse S represents the motion object (foreground).

The above problem can be solved by convex optimization.

In the motion object separation of the image processing, the matrix M becomes a matrix of vectorized image frames.

However, when RPCA is applied in real time, a problem arises: as described above, the amount of calculation is large, and image frames spanning a considerable time must be processed. To solve this problem, the present invention performs RPCA using edge information.

That is, a corrected image, which is an edge-enhanced image, is generated, and RPCA is performed on the generated corrected images.

When edge information is used, irregular noise may be generated inside the motion object. To solve this problem, Gaussian filtering and hole filling are used.

First, edge information is extracted using a Canny edge detector.

Let E(k) denote the edge image detected by the Canny detector, where k is the frame index. The Canny edge detector performs the processing shown in FIG. 2.

First, the input image is smoothed through a Gaussian filter (step 11).

These Gaussian filters are used for noise reduction.

The Gaussian filter is a 5×5 binomial (Gaussian) kernel K, applied by two-dimensional convolution:

B = K * A

where A is the input image.
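The smoothing of step 11 can be sketched as a plain 2-D convolution. The particular 5×5 kernel below (divisor 159) is the one commonly used in Canny implementations; the patent shows its kernel only as an image, so this kernel is an assumption.

```python
import numpy as np

# Commonly used 5x5 Gaussian kernel from Canny implementations (sums to 1);
# assumed here, since the patent's kernel is given only as an image.
K = (1.0 / 159.0) * np.array([
    [2,  4,  5,  4, 2],
    [4,  9, 12,  9, 4],
    [5, 12, 15, 12, 5],
    [4,  9, 12,  9, 4],
    [2,  4,  5,  4, 2],
])

def gaussian_smooth(A):
    """2-D convolution of image A with the 5x5 kernel K (zero padding)."""
    P = np.pad(A.astype(float), 2)
    H, W = A.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(P[i:i + 5, j:j + 5] * K)
    return out
```

Because the kernel sums to 1, a constant region is left unchanged away from the borders.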

Next, the gradient is calculated for the Gaussian-filtered image (step 12).

The gradient calculation uses the Sobel edge detector below:

Gx = [-1 0 1; -2 0 2; -1 0 1] * B,   Gy = [-1 -2 -1; 0 0 0; 1 2 1] * B

where B is the smoothed image and * denotes two-dimensional convolution.

Afterwards, the intensity (magnitude) of the gradient is obtained as

G = sqrt(Gx^2 + Gy^2)

(step 13).
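Steps 12 and 13 can be sketched with the standard Sobel kernels. The helper `conv2` uses cross-correlation with zero padding; since the Sobel kernels are antisymmetric, this only flips signs, which does not affect the gradient magnitude used here.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T  # vertical-gradient kernel

def conv2(A, K3):
    """3x3 cross-correlation with zero padding."""
    P = np.pad(A.astype(float), 1)
    H, W = A.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(P[i:i + 3, j:j + 3] * K3)
    return out

def gradient_magnitude(A):
    """G = sqrt(Gx^2 + Gy^2) as in step 13."""
    Gx = conv2(A, SOBEL_X)
    Gy = conv2(A, SOBEL_Y)
    return np.sqrt(Gx ** 2 + Gy ** 2)
```

A vertical step edge produces a nonzero response only near the step.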

Next, in the obtained G image, local maxima, that is, values larger than their surroundings, are found and determined as edge candidates (step 14), and hysteresis is performed on the determined edge candidates (step 15).

That is, strong edges are determined first, and a weak edge survives only when it is connected to a strong edge.
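The hysteresis rule of step 15 can be sketched as a flood fill starting from strong-edge pixels; the threshold values are assumptions, as the patent does not state them.

```python
import numpy as np

def hysteresis(G, low, high):
    """Edge tracking by hysteresis (step 15).

    Pixels with gradient >= high are strong edges; pixels >= low survive
    only if connected (8-neighbourhood) to a strong edge.
    """
    strong = G >= high
    weak = G >= low
    out = strong.copy()
    H, W = G.shape
    stack = list(zip(*np.nonzero(strong)))
    while stack:
        y, x = stack.pop()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < H and 0 <= nx < W \
                        and weak[ny, nx] and not out[ny, nx]:
                    out[ny, nx] = True
                    stack.append((ny, nx))
    return out
```

Weak pixels not reachable from any strong pixel are discarded.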

The edge image obtained by the Canny edge detector is binarized, that is, represented by 0 and 1, to give the binary edge image e(k). The edge-enhanced image described in step 20 above is then obtained through Equation (3):

Ic(k) = I(k) + 255 · e(k)     (3)

where I(k) is the P×Q two-dimensional gray (black-and-white) image of the original. That is, the binary edge image is multiplied by 255 and added to the original gray image. This is the edge-enhanced corrected image.
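Equation (3) is a one-liner in code. The clipping to 255 is an assumption; the patent does not state how overflow beyond the gray range is handled.

```python
import numpy as np

def enhance_edges(gray, edge):
    """Equation (3): Ic = I + 255 * e.

    gray: gray image with values 0-255; edge: binary Canny edge map.
    The clip to [0, 255] is an assumption about overflow handling.
    """
    Ic = gray.astype(float) + 255.0 * edge.astype(float)
    return np.clip(Ic, 0.0, 255.0)
```

Edge pixels are pushed to full intensity while non-edge pixels keep their gray value.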

These corrected images are stored in a buffer, which can be expressed by Equation (4) below:

Buff(k) = [vec(Ic(k−N+1)), ..., vec(Ic(k))]     (4)

In Equation (4), vec(·) converts the P×Q two-dimensional image signal into a PQ-dimensional vector. Thus, Buff(k) is a PQ × N matrix: the past N−1 images are appended to the current frame k and placed in the buffer Buff(k).
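The buffer of Equation (4) can be sketched as column-stacking the vectorized frames:

```python
import numpy as np

def build_buffer(frames):
    """Stack the last N corrected frames as columns of a PQ x N matrix,
    as in Equation (4): Buff(k) = [vec(Ic(k-N+1)), ..., vec(Ic(k))]."""
    return np.column_stack([f.reshape(-1) for f in frames])
```

Each column is one vectorized P×Q frame, so ten 720p frames give a 921600 × 10 matrix.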

RPCA is applied to the Buff(k) matrix above to obtain a matrix S(k). S(k) is the sparse part of the matrix Buff(k) and corresponds to the motion object (foreground). In other words, Buff(k) = L(k) + S(k), where L(k) has low rank and corresponds to the background.

At this time, RPCA solves

minimize ||L(k)||_* + λ||S(k)||_1 subject to Buff(k) = L(k) + S(k)

and is calculated by the Principal Component Pursuit by Alternating Directions algorithm, following the calculation procedure shown in Table 1 below. In Table 1, M corresponds to Buff(k).

[Table 1: Principal Component Pursuit by Alternating Directions algorithm]
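The Principal Component Pursuit step can be sketched with the usual inexact augmented-Lagrangian iteration: singular-value thresholding for L, soft thresholding for S. The parameter choices below (λ = 1/√max(m,n), the μ heuristic, the growth factor) are the common defaults from the Candes et al. line of work, not values stated in the patent.

```python
import numpy as np

def _shrink(X, tau):
    """Elementwise soft thresholding."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_pcp(M, lam=None, tol=1e-7, max_iter=500):
    """Principal Component Pursuit by alternating directions (inexact ALM).

    Returns (L, S) with M ~ L + S, L low rank (background), S sparse
    (foreground). A sketch under standard defaults, not the patent's
    exact Table 1.
    """
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))   # standard PCP weight
    norm2 = np.linalg.norm(M, 2)
    mu = 1.25 / norm2                    # common step-size heuristic
    Y = M / max(norm2, np.abs(M).max() / lam)  # dual initialisation
    S = np.zeros_like(M)
    for _ in range(max_iter):
        # singular-value thresholding -> low-rank background L
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(_shrink(sig, 1.0 / mu)) @ Vt
        # soft thresholding -> sparse foreground S
        S = _shrink(M - L + Y / mu, lam / mu)
        Z = M - L - S
        Y = Y + mu * Z        # dual update
        mu *= 1.05            # inexact ALM increases mu each iteration
        if np.linalg.norm(Z) / np.linalg.norm(M) < tol:
            break
    return L, S
```

On an easy rank-1-plus-sparse matrix this recovers the sparse entries almost exactly.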

Through this process, the image is reconstructed from S(k) corresponding to the motion object (foreground), and Gaussian filtering is performed on the image as described above.

An example of the two-dimensional Gaussian filter used is shown below.

<Example of Gaussian filter>

[Gaussian filter kernel]

Then, hole filling is performed on the motion object estimation image whose noise has been removed by the Gaussian filtering; a well-known morphological filter is used to fill the empty spaces.

In this empty space filling process, as shown in FIG. 3, starting from a point X0 inside a hole of the noise-removed moving object A, which has a closed boundary, the following iteration is repeated to fill the empty space:

Xk = (Xk−1 ⊕ B) ∩ Ac,  k = 1, 2, 3, ...

Here, B is the symmetric structuring element shown in FIG. 3, ⊕ denotes dilation, and Ac is the complement of A. The iteration is repeated until Xk = Xk−1.
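The hole-filling iteration can be sketched directly, with dilation by a cross-shaped (plus) structuring element implemented as shift-and-OR. The cross-shaped B and the known seed point are assumptions consistent with Fig. 3.

```python
import numpy as np

def dilate(X):
    """Dilation of a boolean mask by the cross-shaped structuring
    element B (centre plus 4 neighbours), as shift-and-OR."""
    Y = X.copy()
    Y[1:, :] |= X[:-1, :]
    Y[:-1, :] |= X[1:, :]
    Y[:, 1:] |= X[:, :-1]
    Y[:, :-1] |= X[:, 1:]
    return Y

def fill_hole(A, seed):
    """Iterate Xk = (Xk-1 dilated by B) ∩ Ac from a seed inside the hole
    until X stops changing; return A ∪ X, the object with the hole filled."""
    Ac = ~A
    X = np.zeros_like(A)
    X[seed] = True
    while True:
        Xn = dilate(X) & Ac
        if (Xn == X).all():
            return A | X
        X = Xn
```

The growing region X stays inside the complement of A, so it stops at the object boundary.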

Finally, the moving object is separated, and objects occupying small areas are removed to eliminate noise objects.

As an example of this processing procedure, FIG. 4 shows the result obtained from 10 frames; the motion object can be extracted with a small number of frames, about 10.

Claims (5)

(Claims 1 to 4 deleted)

A method for extracting a motion object from a video image, comprising:
a. detecting an edge of an image frame by an edge detector with respect to an input image frame;
b. generating a corrected image by adding a weight value set for the edges detected for the image frame;
c. extracting a primary motion object based on RPCA from the corresponding corrected image frames when the number of corrected image frames obtained through steps a and b reaches a set number of target frames;
d. performing Gaussian filtering to remove high-frequency noise from the primary motion object extracted in step c; and
e. generating a secondary motion object by filling blank areas in the outline regions of the data obtained through step d with a value corresponding to the outline,
wherein step a comprises applying a Canny edge detector to detect the edges,
step b generates the corrected image by multiplying the binary edge image by 255 and adding it to the gray image obtained from the image frame, and
the number of target frames is 8 to 12.
KR1020140036204A 2014-03-27 2014-03-27 method of detecting foreground in video KR101527962B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020140036204A KR101527962B1 (en) 2014-03-27 2014-03-27 method of detecting foreground in video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020140036204A KR101527962B1 (en) 2014-03-27 2014-03-27 method of detecting foreground in video

Publications (1)

Publication Number Publication Date
KR101527962B1 true KR101527962B1 (en) 2015-06-11

Family

ID=53503259

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020140036204A KR101527962B1 (en) 2014-03-27 2014-03-27 method of detecting foreground in video

Country Status (1)

Country Link
KR (1) KR101527962B1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105303584A (en) * 2015-09-18 2016-02-03 南京航空航天大学 Laser radar-based moving object detection method and device
CN105303584B (en) * 2015-09-18 2018-09-18 南京航空航天大学 Moving target detecting method based on laser radar and device
KR101829976B1 (en) * 2016-03-18 2018-02-19 연세대학교 산학협력단 Appratus and method of noise relieving using noise tolerant optical detection in coms image sensor based visible light communication
CN109377515A (en) * 2018-08-03 2019-02-22 佛山市顺德区中山大学研究院 A kind of moving target detecting method and system based on improvement ViBe algorithm


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20060035513A (en) * 2004-10-22 2006-04-26 이호석 Method and system for extracting moving object
KR20070080572A (en) * 2006-02-07 2007-08-10 소니 가부시끼 가이샤 Image processing apparatus and method, recording medium, and program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
C. Town, et al., "Robust Fusion of Colour Appearance Models for Object Tracking," BMVC, pp. 1-10, 2004. *



Legal Events

Date Code Title Description
E701 Decision to grant or registration of patent right
GRNT Written decision to grant
FPAY Annual fee payment (payment date: 20180618; year of fee payment: 4)
FPAY Annual fee payment (payment date: 20190523; year of fee payment: 5)