KR101690050B1 - Intelligent video security system - Google Patents

Intelligent video security system

Info

Publication number
KR101690050B1
KR101690050B1 (application KR1020150151839A)
Authority
KR
South Korea
Prior art keywords
background
model
foreground
updating
information
Prior art date
Application number
KR1020150151839A
Other languages
Korean (ko)
Inventor
박성기
김찬수
송동희
Original Assignee
한국과학기술연구원
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 한국과학기술연구원 filed Critical 한국과학기술연구원
Priority to KR1020150151839A priority Critical patent/KR101690050B1/en
Application granted granted Critical
Publication of KR101690050B1 publication Critical patent/KR101690050B1/en

Classifications

    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G06T 7/20: Image analysis; Analysis of motion
    • H04N 5/144: Picture signal circuitry for video frequency region; Movement detection
    • H04N 5/272: Studio circuitry; Means for inserting a foreground image in a background image, i.e. inlay, outlay

Abstract

An intelligent video security system is provided, including: a background modeling and updating unit that generates a probability model on a pixel-by-pixel basis for continuously input image information and excludes the region in which the tracked object is located from updates of the background model; an object tracking unit that tracks the position of the object using the current input image, the background model, the object mask, and the object outline model; and an object information storage unit that stores information on the object being tracked.

Description

INTELLIGENT VIDEO SECURITY SYSTEM AND OBJECT TRACKING METHOD

The present invention relates to an intelligent image security system and an object tracking method using the same.

Object tracking is a fundamental and important technology in image-based surveillance, safety, and security systems. Tracking is applied directly to the detection of anomalous events such as illegal intrusion, loitering, and lingering, and is used as an underlying technology for applications such as people and vehicle counting.

The tracking technology applied to a video-based surveillance system requires real-time processing speed and high accuracy. To satisfy this, a system that predicts the moving position of the object and searches the predicted area efficiently is needed. For this purpose, background-modeling-based motion region detection and the particle filter have been used, but the two have not been combined effectively: in the simplest form, motion regions are detected and the particle filter is then applied within the detected regions.

To overcome this problem, an object tracking system that effectively combines background-foreground separation and the particle filter is needed.

Conventionally, a background modeling method has been used to detect motion regions in successive frames, and the object has been tracked by applying a particle filter to the detected regions, relying on the object outline model to predict the position of the object.

However, since motion regions are obtained by the background difference technique, erroneous or unwanted motion regions are also detected. Because the particle filter is applied to every detected region, much of the filtering is performed in unnecessary regions. As a result, the amount of computation increases, and when the discriminative power of the object outline model deteriorates, misclassification can occur.

Korean Patent Publication No. 10-1547255

The present invention has been made to solve the above-mentioned problems and disadvantages of related-art image security systems. It is an object of the present invention to provide an intelligent image security system including: an image acquisition unit for acquiring image information using an optical device such as a camera; a background modeling unit for generating a probability model on a pixel-by-pixel basis for continuously input image information, together with a background updating unit for excluding the region in which the object being tracked is located from updates of the background model; an object tracking unit for tracking the position of the object using the current input image, the background model, the object mask, and the object outline model; and an object information storage unit for storing information of the object being tracked.

The object of the present invention is achieved by an image processing apparatus including an image acquisition unit for acquiring image information using an optical device such as a camera, a background modeling unit for generating a probability model on a pixel-by-pixel basis for continuously input image information, a background updating unit for excluding the region in which the tracked object is located from updates of the background model, an object tracking unit for tracking the position of the object using the current input image, the background model, the object mask, and the object outline model, and an object information storage unit for storing information of the object being tracked.

At this time, the background modeling and updating unit may update the background through the predicted object position generated by the object tracking unit, and the object tracking unit may perform foreground-background separation, object tracking region selection, and particle filtering.

Another object of the present invention is achieved by an object tracking method of a video security system, including: a background modeling and background updating step of forming a background model using image information obtained from a camera and updating the background using object position and size information; a foreground-background separating step of extracting pixels in the foreground region using the background model, performing binarization and pixel grouping (grouping/labeling), and expressing the foreground region; an area selection step of extracting an object activity radius and performing area filtering; a particle filter step of generating particles from the image information passed through the area filtering and predicting the object position using the weight or importance of the generated particles; a position accuracy determination step of determining the accuracy of the predicted object position; and an object information storing and updating step of storing and updating the object information that satisfies the accuracy.

In this case, the background updating step updates the background model by reflecting the object position and size information generated in the position accuracy determination step. In the foreground-background separating step, the background probability is calculated using the background model generated in the background modeling and updating step, and the foreground pixels are extracted by calculating the foreground probability using the object mask generated in the object information storing and updating step.

As described above, by using the image security system and object tracking method according to the present invention, unnecessary particles can be minimized and the object can be tracked with only a small number of particles, because foreground-background separation is performed to select the area where the object exists; thus the real-time property of the video security system can be ensured.

In addition, since objects can be tracked using foreground reliability together with object appearance information, degradation of tracking performance due to deterioration of the object appearance model can be prevented, and false tracking, in which the background is predicted as the object, can be reduced.

In addition, it is possible to improve the accuracy of the background model by reflecting the position of the object in the background model update, and to prevent degradation of the discriminative power of the object outline model by using the foreground region when updating the object outline model.

FIG. 1 is a block diagram of an intelligent video security system according to an embodiment of the present invention.
FIG. 2 is a flowchart showing the functional configuration of an intelligent video security system according to an embodiment of the present invention.
FIG. 3 is a flowchart of an object tracking method of a video security system according to an embodiment of the present invention.
FIG. 4 is a schematic view illustrating the process of selecting an object tracking area using the object mask, the input image, and the background model in the object tracking method of the video security system according to an exemplary embodiment of the present invention.
FIG. 5 is a schematic view explaining the procedure for obtaining particle weights from the similarity to the object appearance model and the foreground reliability.
FIG. 6 shows the one-dimensional histograms of Hue (color), Intensity (brightness), and Local Binary Pattern used to measure the similarity of the object appearance model.
FIG. 7 is a flowchart illustrating a conventional object tracking method.
FIG. 8 is a simplified flowchart of an object tracking method of the present invention.

The advantages and features of the present invention and the manner of achieving them will become apparent with reference to the embodiments described in detail below in conjunction with the accompanying drawings. The present invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art. Like reference numerals refer to like elements throughout the specification.

The terms used herein are intended to describe the embodiments and are not intended to limit the invention. In this specification, the singular form includes the plural form unless otherwise specified. The terms 'comprise' and/or 'comprising', as used herein, do not preclude the presence or addition of one or more components, steps, operations, and/or elements other than those mentioned.

In addition, the embodiments described herein will be described with reference to cross-sectional views and/or plan views, which are idealized illustrations of the present invention. In the drawings, the thicknesses of films and regions are exaggerated for effective description of the technical content. The illustrated shapes may vary according to manufacturing techniques and/or tolerances. Accordingly, the embodiments of the present invention are not limited to the specific forms shown, but also include changes in shape produced by the manufacturing process. For example, a region shown at right angles may be rounded or have a certain curvature.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

The present invention will be described in detail with reference to the accompanying drawings.

FIG. 1 is a block diagram of an intelligent image security system according to an embodiment of the present invention, and FIG. 2 is a flowchart illustrating a functional configuration of an intelligent image security system according to an embodiment of the present invention.

Referring to FIGS. 1 and 2, an image security system 100 according to an exemplary embodiment of the present invention includes an image acquisition unit 110, a background modeling and updating unit 120, an object tracking unit 130, and an object information storage unit 140.

The image acquisition unit 110 acquires an image input from an image acquisition device such as a camera.

The background modeling and updating unit 120 may include a modeling unit for generating a probability model on a pixel-by-pixel basis for successively input images, and an updating unit for excluding an area in which the object being tracked is located from the update of the background model.

The object tracking unit 130 may estimate an object position using a current input image, a background model, an object mask, and an object outline model.

The object information storage unit 140 may store information on the object being tracked (including its position, size, and appearance model).

When the image obtained through the image acquisition unit is transferred to the background modeling and updating unit 120, the modeling unit generates the background model as a probability model on a pixel-by-pixel basis for the continuously input images.

In this case, the probability model used for background modeling is a Gaussian Mixture Model (GMM). The Gaussian mixture model is a representative method of expressing a complex probability distribution as a combination of several Gaussian distributions. With a fixed camera, only a limited area is continuously monitored, so the scene shown on the screen is nearly constant and the variation of each pixel value is limited. Therefore, by continuously observing the image over a certain number of frames, the distribution of each pixel's values can be used as the background model, and the Gaussian mixture model is a probability model that can express this distribution effectively.

The Gaussian mixture model is fitted so that, for each pixel, frequently occurring values form the dominant components, each with a mean near the frequent value and a corresponding standard deviation. Once a background model based on the Gaussian mixture model has been generated, it is determined whether the pixel value of the input image lies within a predetermined deviation range of the model, and the pixel is classified as background or foreground accordingly.
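For illustration only, the sketch below keeps a single running Gaussian per pixel and applies the deviation-range test described above; the patent's model maintains a mixture of several Gaussians per pixel, and the learning rate and deviation multiplier used here are assumed values, not taken from the patent.

```python
import numpy as np

class SingleGaussianBackground:
    """Simplified per-pixel Gaussian background model (the patent keeps a
    *mixture* of Gaussians per pixel; one Gaussian is enough to show the
    deviation-range test)."""

    def __init__(self, first_frame, learning_rate=0.01, k=2.5):
        self.mean = first_frame.astype(np.float32)
        self.var = np.full(first_frame.shape, 15.0 ** 2, dtype=np.float32)
        self.alpha = learning_rate   # assumed update rate
        self.k = k                   # assumed deviation range, in std devs

    def apply(self, frame):
        """Classify each pixel as foreground (True) or background (False)
        and update the model only where the pixel matched the background."""
        frame = frame.astype(np.float32)
        diff = frame - self.mean
        foreground = diff ** 2 > (self.k ** 2) * self.var
        bg = ~foreground
        self.mean[bg] += self.alpha * diff[bg]
        self.var[bg] += self.alpha * (diff[bg] ** 2 - self.var[bg])
        return foreground
```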

The background model is continuously updated by reflecting the pixel values of the input image in the Gaussian mixture model. The background model at time (t-1) is therefore the model built from the images preceding the current input image at time (t); to distinguish foreground from background in the (t) input image, it must be compared against the (t-1) background model.

Thereafter, the object tracking unit 130 tracks the position of the object using the generated (t-1) background model.

At this time, an object position predicted by the object tracking unit 130 is generated and fed back to the background modeling and updating unit so that the area where the object under tracking is located can be excluded from the update of the background model.
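A minimal sketch of this feedback, assuming the predicted object position is available as a bounding box and that the background statistics are simple per-pixel mean/variance arrays (both assumptions for illustration): the statistics are updated only outside the tracked-object region.

```python
import numpy as np

def object_mask_from_box(shape, box):
    """Binary mask that is True inside the tracked object's bounding box.
    `box` = (x, y, width, height) in image coordinates (assumed format)."""
    mask = np.zeros(shape, dtype=bool)
    x, y, w, h = box
    mask[y:y + h, x:x + w] = True
    return mask

def update_background_excluding_object(mean, var, frame, box, alpha=0.01):
    """Update per-pixel background statistics (running mean and variance)
    everywhere except the region occupied by the tracked object, which is
    fed back from the object tracking unit."""
    frame = frame.astype(np.float32)
    outside = ~object_mask_from_box(frame.shape, box)
    diff = frame - mean
    mean[outside] += alpha * diff[outside]
    var[outside] += alpha * (diff[outside] ** 2 - var[outside])
    return mean, var
```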

The position of the object estimated by the object tracking unit 130 may be transferred to and stored in the object information storage unit 140, and the object information storage unit may feed the (t-1) object information back to the object tracking unit.

At this time, the object information may include the position (image coordinates), the size (width and height), the object outline model (Hue, Intensity, and LBP in this patent, but not limited to these), the object mask, and the object trajectory and moving radius, and this object information may be used for object tracking at later time points.

Referring to FIG. 2, the image security system of the present embodiment selects the foreground region through foreground-background separation and tracks the object by applying the particle filter only to the selected region, so the amount of computation can be reduced.

Also, since the robustness of the object tracking can be increased by using the background model and the object outline model at the same time, the performance of the image security system can be improved.

FIG. 3 is a flowchart of the object tracking method of the image security system according to an embodiment of the present invention; FIG. 4 is a schematic view illustrating the process of selecting the object tracking area using the object mask, the input image, and the background model; FIG. 5 is a schematic view explaining the process of obtaining particle weights from the similarity to the object outline model and the foreground reliability; and FIG. 6 shows the one-dimensional histograms of Hue (color), Intensity (brightness), and Local Binary Pattern used to measure the similarity of the object outline model.

Referring to FIG. 3, the object tracking method according to an exemplary embodiment of the present invention includes a background modeling and updating step (S100), a foreground-background separating step (S200), a region selection step (S300), a particle filter step (S400), a position accuracy determination step (S500), and an object information storing and updating step (S600).

The background modeling and updating step S100 may include a background modeling step S110 and a background updating step S120.

In the background modeling step (S110), a background model may be generated using RGB, a Gaussian Mixture Model (GMM), or the like, using an image input from a video input device such as a camera.

In the background update step S120, the background update may be performed using the object position and size information generated in the position accuracy determination step S500, which will be described later.

The foreground-background separation step (S200) includes a background probability calculation step (S210), a foreground probability calculation step (S220), a foreground pixel extraction step (S230), a binarization and pixel grouping step (S240), and a foreground region representation step (S250); the foreground region is expressed by extracting the pixels in the foreground region using the background model and performing binarization and pixel grouping (grouping/labeling).

In the background probability calculation step (S210), the background probability is calculated using the background model generated in the background modeling step (S110).

At this time, the background probability can be calculated by the following equation (1), and is calculated based on the degree of similarity between the background model and the input image.

[Equation 1]

In Equation (1), one probability distribution comes from the background model and the other from the input image. The subscripts denote the current time point and the previous time point (in the temporal sense), and the quantity being computed is the probability that the position in question belongs to the background. A mask corresponding to the object size is centered at that position, and the probability distributions obtained by applying this mask to the (t-1) background model and to the current input image are expressed as histograms; the background probability is proportional to the degree of similarity between these two histograms.

In this embodiment, for example, the similarity of the histogram is estimated based on the Bhattacharyya distance, but it is also possible to estimate the similarity between the histograms through other methods.
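The background probability of Equation (1) can therefore be sketched as the similarity of two masked histograms. In the snippet below, the 32-bin histogram, the value range, and the use of the Bhattacharyya coefficient as the similarity score are illustrative assumptions consistent with the text.

```python
import numpy as np

def masked_histogram(image, mask, bins=32, value_range=(0, 256)):
    """Normalized 1-D histogram of the pixels selected by a binary mask."""
    hist, _ = np.histogram(image[mask > 0], bins=bins, range=value_range)
    hist = hist.astype(np.float64)
    return hist / max(hist.sum(), 1e-12)

def bhattacharyya_similarity(p, q):
    """Bhattacharyya coefficient between two normalized histograms
    (1.0 for identical distributions); the Bhattacharyya distance would be
    sqrt(1 - coefficient)."""
    return float(np.sum(np.sqrt(p * q)))

# Sketch of Equation (1): the background probability at a position is the
# similarity between the masked histogram of the (t-1) background model and
# that of the current input image, using the object-sized mask m_x.
# p_bg = bhattacharyya_similarity(masked_histogram(background, m_x),
#                                 masked_histogram(frame, m_x))
```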

In the foreground probability calculation step (S220), the foreground probability is calculated by Equations (2), (3), and (4): Equation (3) is the similarity between the object mask and the input image, Equation (4) is the similarity involving the background model, and Equation (2) is the arithmetic mean of the two.

[Equations 2, 3, and 4]

Here, the three probability distributions involved are those of the background model, of the input image, and of the object mask. The probability that a pixel at a specific position belongs to the foreground (Equation 2) is defined as the arithmetic mean of two similarities. The first similarity (Equation 3), between the masked region of the current input image and the object mask, selects areas of the current input image whose color characteristics resemble the object mask. The second similarity (Equation 4) is between the distribution of absolute differences between the background model mask and the object mask and the distribution of absolute differences between the background model mask and the input image mask; it lowers the similarity in regions where the input image is already explained by the background image. In Equations (3) and (4), these absolute differences between masks are written in abbreviated form.

In the foreground pixel extraction step (S230), whether each pixel is a foreground pixel is determined according to Equation (5).

[Equation 5]

That is, a pixel whose foreground probability is greater than its background probability is judged to be foreground.

The foreground and background probabilities generated in the foreground pixel extraction step S230 may be used in the foreground reliability calculation step S330 described later.

In the foreground binarization and pixel grouping step (S240), foreground binarization is performed according to Equation (6).

[Equation 6]

The result is a binary image composed of 0s and 1s, which is used to group adjacent pixels into regions. The threshold in Equation (6) serves as a reference value for removing low-probability pixels from among the pixels classified as foreground. The foreground binarization reduces the amount of data to be processed and thus improves the processing speed of the entire system.
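A rough sketch of Equations (5) and (6) plus the pixel grouping, assuming per-pixel foreground and background probability maps are already available; the threshold value, the minimum region size, and the use of scipy.ndimage.label for grouping are illustrative choices rather than details from the patent.

```python
import numpy as np
from scipy import ndimage

def extract_foreground_regions(p_fg, p_bg, threshold=0.6, min_pixels=50):
    """Equation (5): a pixel is foreground when p_fg > p_bg.
    Equation (6): keep only foreground pixels above a reference probability,
    then group adjacent pixels into labeled regions."""
    foreground = (p_fg > p_bg) & (p_fg >= threshold)   # binary image of 0s and 1s
    labels, num_regions = ndimage.label(foreground)    # group adjacent pixels
    regions = []
    for i in range(1, num_regions + 1):
        ys, xs = np.nonzero(labels == i)
        if ys.size < min_pixels:                       # drop tiny blobs
            continue
        regions.append({
            "center": (float(xs.mean()), float(ys.mean())),
            "size": (int(xs.ptp()) + 1, int(ys.ptp()) + 1),
            "n_pixels": int(ys.size),
        })
    return regions
```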

In the foreground region representation step (S250), the foreground region information and the particle allocation are expressed according to Equations (7) and (8).

[Equations 7 and 8]

In Equation (7), each foreground region is modeled by a normal distribution and its number of foreground pixels: the mean of the normal distribution is the center point of the foreground region, its standard deviation is derived from the length of the foreground region through an adjustment variable (chosen so that the spread of the distribution matches the extent of the foreground region), and the pixel count is the number of foreground pixels in the region. Equation (8) allocates the particles: given the total number of particles, i.e. the number of particles the particle filter collects and evaluates at each iteration while searching for the object position, the number of particles assigned to each foreground region is proportional to its number of foreground pixels.

As described above, the background probability and the foreground probability are calculated using the generated background model, foreground pixels are extracted, and binarization and pixel grouping are performed to express the foreground region.
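Following Equations (7) through (9), the sketch below allocates a fixed particle budget to the foreground regions in proportion to their foreground pixel counts and samples each particle from a normal distribution around the region center, with an extra noise term reflecting possible inaccuracy of the foreground region; the parameter names and default values are assumptions.

```python
import numpy as np

def allocate_and_sample_particles(regions, total_particles=200,
                                  spread_div=6.0, noise_std=2.0, rng=None):
    """Eq. (8): distribute the particle budget over foreground regions in
    proportion to their pixel counts. Eq. (7)/(9): draw each particle from a
    normal distribution around the region center, plus a noise term that
    reflects possible inaccuracy of the foreground region."""
    rng = rng or np.random.default_rng()
    total_pixels = sum(r["n_pixels"] for r in regions)
    particles = []
    for r in regions:
        n = int(round(total_particles * r["n_pixels"] / max(total_pixels, 1)))
        cx, cy = r["center"]
        w, h = r["size"]
        # The patent sets the divisor to 6 so that the sampling spread
        # matches the size of the foreground region.
        xs = rng.normal(cx, w / spread_div, n) + rng.normal(0.0, noise_std, n)
        ys = rng.normal(cy, h / spread_div, n) + rng.normal(0.0, noise_std, n)
        particles.extend(zip(xs, ys))
    return np.array(particles)
```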

The region selection step S300 may include an object activity radius extraction step S310, an area filtering step S320, and a foreground reliability calculation step S330.

In the object activity radius extraction step (S310), the activity radius of the object is extracted using the object trajectory information generated in the object information storing and updating step (S600) and the information generated in the foreground region representation step (S250).

At this time, the object activity radius can be taken as the average of the object's movement distances along its trajectory.

That is, foreground regions may occur at various locations in the image, and those generated at positions where the object cannot exist should be excluded. For this purpose, the movable distance in image coordinates is used: from the previous object positions, the moving direction and moving distance of the object can be derived. The moving direction can be arbitrary (360 degrees), but the moving distance has a physical limit. Therefore, the movable radius of the object, inferred as the average of the moving distances over a certain number of frames, is used as a condition for selecting the foreground region.
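A small sketch of this test: the allowed radius is the average per-frame displacement over the stored trajectory, and foreground regions farther than that radius (times an assumed margin) from the last known position are discarded.

```python
import numpy as np

def activity_radius(trajectory):
    """Average per-frame displacement over the stored trajectory
    (a list of (x, y) object positions from previous frames)."""
    pts = np.asarray(trajectory, dtype=float)
    if len(pts) < 2:
        return np.inf                      # no history yet: accept everything
    steps = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    return float(steps.mean())

def filter_regions_by_radius(regions, trajectory, margin=1.5):
    """Drop foreground regions outside the object's plausible moving radius."""
    radius = activity_radius(trajectory) * margin
    last = np.asarray(trajectory[-1], dtype=float)
    return [r for r in regions
            if np.linalg.norm(np.asarray(r["center"]) - last) <= radius]
```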

In the area filtering step (S320), the object area may be filtered using the information generated in the object activity radius extracting step (S310).

In the foreground reliability calculation step (S330), the foreground reliability is calculated using the foreground and background probabilities generated in the foreground pixel extraction step (S230), and this information is passed to the particle weight calculation step (S420) described later.

The particle filter step (S400) may include a particle generation step (S410), a particle weight calculation step (S420), and an object position prediction step (S430). In this step, particles are generated from the image information passed through the area filtering, and the object position is predicted by calculating the particle weights using the weight or importance of the generated particles.

The particle generation step (S410) may generate the particles to which the particle filter will be applied using the information generated in the area filtering step (S320): particle positions are drawn from the normal distribution that represents each foreground region, as expressed by Equation (9) below, with an arbitrary error following that distribution added, until the number of particles assigned to each region has been generated.

[Equation 9]

In Equation (9), the sampled quantity is the i-th particle (sample), and the variable that adjusts the radius (the standard deviation of the normal distribution) used to collect the samples is set to 6 in this embodiment, so that the sampling spread matches the size of the foreground region; the user may set it arbitrarily according to the application environment. A further variable perturbs the extracted position at random: the user may select any probability model suitable for the application environment, and in this embodiment a value drawn at random from the normal distribution inferred from the object's movement radius is used. Because the foreground region is derived from the object mask, the input image, and the background model, it does not always provide reliable information; to reflect the possibility that the foreground region is inaccurate, this arbitrary noise component is added to each sample position.

In the particle weight calculation step (S420), the particle weight is calculated from the object appearance model, using the particles generated in the particle generation step (S410) and the information generated in the foreground reliability calculation step (S330). As expressed by Equations (10) and (11), the weight is the product of the object appearance model similarity and the foreground reliability, which is then normalized.

[Equations 10 and 11]

The particle filter consists largely of a particle collection process and an evaluation process. To evaluate the collected particles, the probability that each particle corresponds to the tracked object must be obtained (Equation 10). The tracked object is described by a model of its characteristics (in the present embodiment, an appearance model), and each particle collected as a candidate position for the tracked object is evaluated by extracting an appearance model at the particle position and computing its similarity to the object model. This similarity depends entirely on the object model and can therefore be wrong when the object model is inaccurate; to compensate, the present embodiment combines the similarity with the foreground confidence.

Equation (11) then expresses the weight of each particle from the similarity and the foreground reliability and normalizes the weights.

In this case, the foreground reliability is obtained by Equation (12), and the object appearance model similarity is computed from the one-dimensional histograms of Hue (color), Intensity (brightness), and Local Binary Pattern shown in FIG. 6.

[Equation 12]
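A sketch of Equations (10) through (12): each particle's weight is the product of the appearance-model similarity at its position and the foreground confidence there, and the weights are then normalized. The appearance similarity is left as an abstract callable, and the window-averaged ratio used below as the foreground confidence is only an assumed stand-in for Equation (12).

```python
import numpy as np

def foreground_confidence(p_fg, p_bg, x, y, half=4):
    """Assumed stand-in for Equation (12): foreground-to-total probability
    ratio averaged over a small window around the particle position."""
    r0, r1 = max(int(y) - half, 0), int(y) + half + 1
    c0, c1 = max(int(x) - half, 0), int(x) + half + 1
    fg = p_fg[r0:r1, c0:c1].mean()
    bg = p_bg[r0:r1, c0:c1].mean()
    return fg / max(fg + bg, 1e-12)

def particle_weights(particles, appearance_similarity, p_fg, p_bg):
    """Equations (10)-(11): weight = appearance similarity * foreground
    confidence, normalized to sum to one. `appearance_similarity(x, y)`
    compares the Hue/Intensity/LBP histograms extracted at (x, y) with the
    tracked object's appearance model (left abstract here)."""
    w = np.array([
        appearance_similarity(x, y) * foreground_confidence(p_fg, p_bg, x, y)
        for x, y in particles
    ])
    return w / max(w.sum(), 1e-12)
```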

In the object position predicting step S430, the position of the object may be predicted using the information generated in the particle weight calculating step S420, and may be calculated as a sum of weighted particle positions through the following equation (13).

[Equation 13]

To estimate the object position from the particles collected by the particle filter, the particle positions and their weights are used. The particles represent candidate locations of the tracked object, and the normalized weights indicate how likely the object is to exist at each location, so the weighted sum converges toward the area where the object exists: particles collected in regions where the object is unlikely receive low weights, particles collected in regions where the object is likely receive high weights, and the weighted combination converges toward the heavily weighted particles.
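Equation (13) then reduces to a weighted average of the particle positions, as in the short sketch below.

```python
import numpy as np

def predict_position(particles, weights):
    """Equation (13): the object position estimate is the weighted sum of
    particle positions (weights are already normalized to sum to one)."""
    return np.average(particles, axis=0, weights=weights)
```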

In the position accuracy determination step (S500), the position accuracy can be determined using the information generated in the object position prediction step (S430). If the accuracy criterion of Equation (14) is satisfied, the information is passed to the object information storing and updating step (S600); if not, it is fed back to the particle generation step (S410) and the calculation is performed again.

[Equation 14]

In object tracking, the particle filter estimates the object position by repeatedly collecting and evaluating particles, so a condition is needed to decide that the estimated position is final and to stop the iteration. In this embodiment two reference values are used. The similarity reference value is compared with the similarity between the appearance model extracted at the position estimated by the particle filter and the appearance model of the tracked object; when the similarity exceeds this reference value, the operation of the particle filter is stopped. The position change rate reference value is compared with the distance between the object positions estimated by the previous and current particle filter iterations; when this difference falls below the reference value, the operation of the particle filter is also stopped.
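A sketch of the two stopping conditions, with assumed threshold values; whether the conditions are combined with 'or' is also an assumption, since the text describes them separately.

```python
import numpy as np

def position_is_accurate(similarity, prev_estimate, curr_estimate,
                         sim_threshold=0.8, move_threshold=1.0):
    """Equation (14) stopping test (assumed thresholds and combination):
    stop when the appearance similarity at the estimated position is high
    enough, or when the estimate barely moved since the previous iteration."""
    moved = np.linalg.norm(np.asarray(curr_estimate, dtype=float)
                           - np.asarray(prev_estimate, dtype=float))
    return similarity >= sim_threshold or moved <= move_threshold
```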

In the object information storing and updating step (S600), the object information satisfying the position accuracy, namely the position, the size, the object appearance model, the object trajectory over the most recent frames, and the object mask, is stored, and the object appearance model is updated by blending the existing model with the appearance model newly extracted from the foreground region, weighted according to the similarity. This update can be expressed as Equation (15).

[Equation 15]

In general, the appearance of a moving object changes according to the influence of surrounding environment (illumination, shadow, etc.) and the pose of the object, so the object appearance model must be updated appropriately. In this embodiment, the area extracted from the foreground is used to update the object appearance model.

In Equation (15), the similarity at the predicted object position (from Equation 10) weights the blend between the object appearance model at the current time point and the object appearance model at the previous time point. The newly extracted model is the appearance model obtained from a foreground region of similar size located near the predicted object position; in this patent, the object appearance model is updated in this way whenever an estimated object position and a corresponding foreground region exist. In other words, when the discriminative power of the current object model is high, the current appearance model is largely retained, and as the discriminative power decreases, the new appearance model is reflected more strongly.
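A sketch of this update rule, under the assumptions that each appearance model is a set of normalized one-dimensional histograms (Hue, Intensity, LBP) and that the blending weight is the similarity score of Equation (10).

```python
import numpy as np

def update_appearance_model(prev_model, foreground_model, similarity):
    """Assumed form of Equation (15): blend the previous appearance model
    with the model extracted from the foreground region near the predicted
    position. A high similarity (high discriminative power) keeps the
    previous model; a low similarity lets the new model in.

    Each model is a dict of normalized 1-D histograms, e.g.
    {"hue": ..., "intensity": ..., "lbp": ...} (assumed representation)."""
    s = float(np.clip(similarity, 0.0, 1.0))
    return {key: s * prev_model[key] + (1.0 - s) * foreground_model[key]
            for key in prev_model}
```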

FIG. 7 is a simplified flowchart of the conventional object tracking method, and FIG. 8 is a simplified flowchart of the object tracking method of the present invention.

Referring to FIGS. 7 and 8, the general approach of a conventional image-security object tracking system consists of image acquisition, particle filtering, object position prediction, and object appearance model update steps, as shown in FIG. 7. Because the particle information of the previous frame is used to generate new particles, particles can be generated in unnecessary background regions; therefore a large number of particles is required and the amount of computation increases.

Also, in the existing system, since the particle weights are calculated solely from the similarity to the object outline model, the tracking performance may be lowered when the continuously changing object outline model deteriorates.

In the system of the present invention, however, region selection based on foreground-background separation and the foreground reliability are both reflected, as shown in FIG. 8. The foreground-background separation selects the area where the object exists, which minimizes unnecessary particles, and since the particle weights are calculated using the foreground reliability together with the object appearance information, degradation of tracking performance due to deterioration of the object appearance model can be prevented and false tracking, in which the background is predicted as the object, can be reduced.

Also, the system of the present invention can improve the accuracy of the background model by reflecting the position of the object in the background model update, and can prevent degradation of the discriminative power of the object outline model by using the foreground region when updating the object outline model.

The foregoing detailed description is illustrative of the present invention. The foregoing merely illustrates preferred embodiments of the invention, which may be used in various other combinations, modifications, and environments. That is, changes and modifications are possible within the scope of the inventive concept disclosed herein, within the scope of equivalents of the disclosure, and/or within the skill and knowledge of those in the art. The described embodiments illustrate the best mode for carrying out the technical idea of the present invention, and various changes required for specific applications and uses of the invention are also possible. Accordingly, the foregoing description is not intended to limit the invention to the embodiments disclosed, and the appended claims should be construed to include other embodiments as well.

100: Intelligent video security system
110: image acquisition unit
120: background modeling and updating unit
130: object tracking unit
140: Object information storage unit

Claims (6)

An image acquiring unit acquiring image information using an optical device such as a camera;
A background modeling unit for generating a probability model on a pixel-by-pixel basis for continuously input image information; and a background updating unit for excluding an area in which the object being tracked is located from updating of the background model.
An object tracking unit for tracking a position of an object using a current input image, a background model, an object mask, and an object outline model; And
And an object information storage unit for storing information of the object being tracked,
And the background modeling and updating unit updates the background through the predicted object position generated by the object tracking unit.
delete

The system according to claim 1, wherein the object tracking unit performs foreground-background separation, object tracking area selection, and particle filtering.
A background modeling and background updating step of forming a background model using the image information acquired from the camera and updating the background through the object position and size information;
A foreground-background separating step of extracting pixels in the foreground region using the background model, performing binarization and pixelation (grouping / labeling), and expressing foreground regions;
An area selection step of extracting an object activity radius and performing area filtering;
A particle filter step of generating particles through the image information through the area filtering and calculating a particle weight using the weight or importance of the generated particle to predict an object position;
Determining a position accuracy of the predicted object position; And
Storing and updating object information for storing and updating object information satisfying the positional accuracy,
Wherein the background update step updates the background by reflecting the object position and size information generated in the position accuracy determination step.
delete

The method of claim 4, wherein in the foreground-background separating step, the background probability is calculated using the background model generated in the background modeling and updating step, and foreground pixels are extracted by calculating the foreground probability using the object mask generated in the object information storing and updating step.
KR1020150151839A 2015-10-30 2015-10-30 Intelligent video security system KR101690050B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150151839A KR101690050B1 (en) 2015-10-30 2015-10-30 Intelligent video security system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150151839A KR101690050B1 (en) 2015-10-30 2015-10-30 Intelligent video security system

Publications (1)

Publication Number Publication Date
KR101690050B1 true KR101690050B1 (en) 2016-12-27

Family

ID=57736730

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150151839A KR101690050B1 (en) 2015-10-30 2015-10-30 Intelligent video security system

Country Status (1)

Country Link
KR (1) KR101690050B1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102050890B1 (en) 2018-11-30 2019-12-02 주식회사우경정보기술 Server to secure video based on streaming, method for providing secured video between sever and client, and computer-readable recording media
KR102050882B1 (en) * 2018-11-30 2019-12-02 주식회사우경정보기술 Method, server and computer-readable recording media for video security using zero-watermarking based on stream cipher
WO2020096437A1 (en) * 2018-11-09 2020-05-14 에스케이텔레콤 주식회사 Apparatus and method for estimating location of vehicle
KR20220060587A (en) 2020-11-04 2022-05-12 한국전자기술연구원 Method, apparatus and system for detecting abnormal event

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120014515A (en) * 2010-08-09 2012-02-17 삼성테크윈 주식회사 Apparatus for separating foreground from background and method thereof
KR101547255B1 (en) 2015-05-21 2015-08-25 주식회사 넥스파시스템 Object-based Searching Method for Intelligent Surveillance System

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120014515A (en) * 2010-08-09 2012-02-17 삼성테크윈 주식회사 Apparatus for separating foreground from background and method thereof
KR101547255B1 (en) 2015-05-21 2015-08-25 주식회사 넥스파시스템 Object-based Searching Method for Intelligent Surveillance System

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Paper: Journal of the Institute of Electronics Engineers of Korea - SP, 48(5) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020096437A1 (en) * 2018-11-09 2020-05-14 에스케이텔레콤 주식회사 Apparatus and method for estimating location of vehicle
KR20200053920A (en) * 2018-11-09 2020-05-19 에스케이텔레콤 주식회사 Apparatus and method for estimating location of vehicle
KR102604821B1 (en) * 2018-11-09 2023-11-20 에스케이텔레콤 주식회사 Apparatus and method for estimating location of vehicle
US11898851B2 (en) 2018-11-09 2024-02-13 Sk Telecom Co., Ltd. Apparatus and method for estimating location of vehicle
KR102050890B1 (en) 2018-11-30 2019-12-02 주식회사우경정보기술 Server to secure video based on streaming, method for providing secured video between sever and client, and computer-readable recording media
KR102050882B1 (en) * 2018-11-30 2019-12-02 주식회사우경정보기술 Method, server and computer-readable recording media for video security using zero-watermarking based on stream cipher
WO2020111403A1 (en) * 2018-11-30 2020-06-04 주식회사우경정보기술 Stream cipher-based image security method using zero-watermarking, server, and computer readable recording medium
KR20220060587A (en) 2020-11-04 2022-05-12 한국전자기술연구원 Method, apparatus and system for detecting abnormal event

Similar Documents

Publication Publication Date Title
CN107527009B (en) Remnant detection method based on YOLO target detection
CN109076198B (en) Video-based object tracking occlusion detection system, method and equipment
US9323991B2 (en) Method and system for video-based vehicle tracking adaptable to traffic conditions
CN109035304B (en) Target tracking method, medium, computing device and apparatus
US9213901B2 (en) Robust and computationally efficient video-based object tracking in regularized motion environments
US10373320B2 (en) Method for detecting moving objects in a video having non-stationary background
US10896495B2 (en) Method for detecting and tracking target object, target object tracking apparatus, and computer-program product
CN111881853B (en) Method and device for identifying abnormal behaviors in oversized bridge and tunnel
CN110781836A (en) Human body recognition method and device, computer equipment and storage medium
Bedruz et al. Real-time vehicle detection and tracking using a mean-shift based blob analysis and tracking approach
KR101690050B1 (en) Intelligent video security system
CN102346854A (en) Method and device for carrying out detection on foreground objects
CN111027370A (en) Multi-target tracking and behavior analysis detection method
CN108710879B (en) Pedestrian candidate region generation method based on grid clustering algorithm
Ali et al. Vehicle detection and tracking in UAV imagery via YOLOv3 and Kalman filter
Koniar et al. Machine vision application in animal trajectory tracking
Ghahremannezhad et al. Automatic road detection in traffic videos
Weng et al. Weather-adaptive flying target detection and tracking from infrared video sequences
Almomani et al. Segtrack: A novel tracking system with improved object segmentation
Suganyadevi et al. OFGM-SMED: An efficient and robust foreground object detection in compressed video sequences
CN114821441A (en) Deep learning-based airport scene moving target identification method combined with ADS-B information
Płaczek A real time vehicle detection algorithm for vision-based sensors
Sawalakhe et al. Foreground background traffic scene modeling for object motion detection
Sujatha et al. An innovative moving object detection and tracking system by using modified region growing algorithm
Kim et al. Abnormal object detection using feedforward model and sequential filters

Legal Events

Date Code Title Description
E701 Decision to grant or registration of patent right
GRNT Written decision to grant
FPAY Annual fee payment

Payment date: 20191203

Year of fee payment: 4