KR101868103B1 - A video surveillance apparatus for identification and tracking multiple moving objects and method thereof - Google Patents

A video surveillance apparatus for identification and tracking multiple moving objects and method thereof

Info

Publication number
KR101868103B1
Authority
KR
South Korea
Prior art keywords
moving object
moving
clustering
label
image
Prior art date
Application number
KR1020170088638A
Other languages
Korean (ko)
Inventor
주영훈
Original Assignee
군산대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 군산대학교 산학협력단
Priority to KR1020170088638A
Application granted
Publication of KR101868103B1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration using histogram techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/254Analysis of motion involving subtraction of images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20224Image subtraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a video surveillance method for identifying and tracking multiple moving objects, in which a label merging method based on FCM clustering is used to recognize and track a moving object. The method of the present invention comprises the following steps: extracting the region of the moving object using a difference image, binarization, and the erosion and dilation operations of morphology in order to analyze the image information obtained from a camera; when several labels are generated because part of the moving object is lost, extracting data from the divided labels and performing FCM clustering to merge the divided labels so that the moving object is recognized as a single whole object; continuously extracting and comparing the color histogram of the moving object recognized by the label merging in order to track it; and accumulating the extracted color histograms as information of the recognized moving object and obtaining their average, thereby monitoring the moving object robustly.

Description

The present invention relates to a video surveillance apparatus for identifying and tracking multiple moving objects, and to a method thereof.

More particularly, the present invention relates to a video surveillance technique that receives an input image and identifies and tracks a plurality of moving objects in the image even when the objects separate from or overlap one another while moving, and to a recording medium on which the method is recorded.

Recently, as crimes in public places or restricted areas occur frequently, locks or CCTV (Closed-Circuit Television) systems are installed to monitor a designated area. However, when an operator monitors CCTV in real time, criminal activity is often missed. In such cases, the images recorded on a VCR (Video Cassette Recorder) or DVR (Digital Video Recorder) are reviewed to detect the event after the fact. Recently, intelligent video surveillance systems have attracted attention; that is, methods for detecting events in real time using image analysis with IP (Internet Protocol) cameras are being studied.

An intelligent video surveillance system installs several IP cameras in the surveillance area, analyzes the real-time video information, detects specific situations, and generates alarms or notifies a monitoring agency when such a situation arises. By applying input-image analysis and computer vision, such surveillance systems are used in many fields, including prevention of criminal acts and terrorism, fire protection, street security, traffic-volume measurement, DMZ border monitoring, and illegal-parking enforcement. Video surveillance technology generally consists of a moving-object extraction technique that extracts the moving region for object detection, and a tracking method that uses feature information of the detected moving object. Real-time tracking has become a core technology, and much research has been devoted to it.

T. W. Jang, Y. T. Shin, and J. B. Kim, "A Study on the Object Extraction and Tracking System for Intelligent Surveillence", Journal of the Korean Institute of Communications and Information Sciences, Vol. 38, No. 7, pp. 589-595, 2013
L. Y. Shi and Y. H. Joo, "Multiple moving objects detection and tracking algorithm for intelligent surveillance system", Journal of the Korean Institute of Intelligent System, Vol. 22, No. 6, pp. 741-747, 2012
C. Y. Jeong and J. W. Han, "Technical trends of abnormal event detection in video analytics", Electronics and Telecommunications Trends, Vol. 136, pp. 114-122, 2012

SUMMARY OF THE INVENTION It is an object of the present invention to overcome the limitations of existing moving-object extraction methods, in which the moving object is not extracted smoothly when its amount of motion is small or the processing speed decreases as the amount of computation grows, and to resolve the drawbacks of tracking that uses feature information of the moving object, namely that it is affected by illumination changes or noise and that real-time processing is difficult because of excessive computation.

According to an aspect of the present invention, there is provided a method of recognizing and tracking moving objects using a label merging method based on FCM clustering. First, to analyze the image information acquired from the camera, the moving-object region is extracted using a difference image, binarization, and the erosion and dilation operations of morphology. Next, when several labels are generated because part of a moving object is lost, data are extracted from the divided labels and FCM clustering is performed to merge them so that the object is recognized as a single moving object. In addition, the color histogram of the moving object recognized by the label merging is continuously extracted and compared for tracking. The extracted color histograms are accumulated as information of the recognized moving object and averaged, yielding a method robust to change.

According to the embodiments of the present invention, the elements that best fit the actual region of a moving object can be extracted easily, without reducing the processing speed, regardless of whether the amount of motion of the moving object is large or small; the object can be extracted accurately even when part of it is lost; and identification and tracking that are robust to changes of the moving object can be provided.

FIG. 1 is an overall block diagram of the video surveillance system proposed by embodiments of the present invention.
FIG. 2 is a diagram illustrating the difference-image, binarization, and morphology processing of an input image in the video surveillance method according to an embodiment of the present invention.
FIG. 3 is a view for explaining the process of extracting candidate data from one label of each moving object in the video surveillance method according to an embodiment of the present invention.
FIG. 4 is a diagram illustrating an example of the process of extracting the actual data of a moving object in the video surveillance method according to an embodiment of the present invention.
FIG. 5 is a view for explaining the process of counting the number of moving objects in the video surveillance method according to an embodiment of the present invention.
FIG. 6 is a view for explaining the process of determining overlapping and separation of moving objects in the video surveillance method according to an embodiment of the present invention.
FIG. 7 is a flowchart illustrating a video surveillance method according to an embodiment of the present invention.
FIG. 8 is a detailed flowchart illustrating the video surveillance method of FIG. 7 according to an embodiment of the present invention.
FIG. 9 is a diagram illustrating the result of recognizing a lost moving object in an experiment using the video surveillance method according to an embodiment of the present invention.
FIG. 10 is a diagram illustrating the clustering process and results in an experiment using the video surveillance method according to an embodiment of the present invention.
FIG. 11 is a view illustrating moving-object identification and tracking using FCM clustering in an experiment using the video surveillance method according to an embodiment of the present invention.
FIG. 12 is a diagram comparing the results of applying the conventional method and the video surveillance method according to an embodiment of the present invention to moving objects.

Prior to describing the embodiments of the present invention, the problems in existing image processing related to intelligent video surveillance will be briefly reviewed, and the technical means adopted by the embodiments of the present invention to solve them will then be introduced in order.

First, among moving-object extraction methods, the difference-image method extracts the difference between a background image and the input image; it can quickly separate the moving object from the background, but is sensitive to illumination changes and noise. A moving-object extraction method using GMM (Gaussian Mixture Models) learns and adapts to changes of the background and is robust in noisy environments, but has the drawback that the object cannot be extracted smoothly when its amount of motion is small. There is also a method that sets a window around each corner-point pixel of the input image using optical flow, finds the location with the highest matching rate in the next frame, and detects the moving object by extracting the resulting motion vectors; this method requires a large amount of computation and slows down when many windows must be set. With such extraction methods, part of the moving object can be lost when the reference image used for comparison and the moving-object portion of the input image do not differ greatly. If a moving object is lost, one moving object may be divided into several parts, so a technique is needed that merges the divided parts and recognizes them as one moving object. Among such techniques, the shortest-distance matching method computes the distance between the divided parts and, if another part lies within a predetermined distance, judges it to belong to the same moving object and merges it. This method is simple and fast to implement, but because it merges purely according to a distance condition, it reaches its limits when there are several moving objects or when the loss is large.
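For illustration only, the conventional shortest-distance matching just described can be sketched as follows; this is a minimal sketch, not the patented method, and the bounding-box format (x_min, y_min, x_max, y_max) and the distance threshold are assumptions.

    # Sketch of conventional shortest-distance matching: bounding boxes whose
    # center points are closer than a fixed threshold are judged to belong to
    # the same moving object and merged into their union box.
    import math

    def merge_by_shortest_distance(boxes, dist_threshold=50.0):
        boxes = [list(b) for b in boxes]
        merged = True
        while merged:
            merged = False
            for i in range(len(boxes)):
                for j in range(i + 1, len(boxes)):
                    cxi = (boxes[i][0] + boxes[i][2]) / 2.0
                    cyi = (boxes[i][1] + boxes[i][3]) / 2.0
                    cxj = (boxes[j][0] + boxes[j][2]) / 2.0
                    cyj = (boxes[j][1] + boxes[j][3]) / 2.0
                    if math.hypot(cxi - cxj, cyi - cyj) < dist_threshold:
                        # Same moving object: replace the pair with their union box.
                        boxes[j] = [min(boxes[i][0], boxes[j][0]), min(boxes[i][1], boxes[j][1]),
                                    max(boxes[i][2], boxes[j][2]), max(boxes[i][3], boxes[j][3])]
                        del boxes[i]
                        merged = True
                        break
                if merged:
                    break
        return boxes

As the preceding paragraph notes, merging purely by such a distance condition can wrongly fuse distinct objects that merely pass close to each other, which is the limitation the FCM-based approach below addresses.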

There are many methods for tracking a moving object using its feature information; the SIFT (Scale Invariant Feature Transform) algorithm and the MeanShift algorithm are widely used. The SIFT algorithm extracts features that are invariant to size and rotation and is robust against changes in size, illumination, and geometry, but has the disadvantage of high computational complexity. The MeanShift algorithm stores the histogram of the moving object in a database and finds the stored histogram most similar to the histogram extracted from the input image. However, this method also has difficulty with real-time processing because of the influence of illumination change, noise, and the amount of computation.

Therefore, embodiments of the present invention propose a moving-object extraction and tracking method that improves the identification and tracking of moving objects in order to overcome the above problems. First, an FCM (Fuzzy C-Means) clustering algorithm is used to merge a divided moving object back into a single moving object, solving the loss problem that occurs during extraction. A method of extracting data from the moving object and a method of counting moving objects to determine the number of clusters are also proposed, so that the conditions for performing the FCM clustering algorithm are satisfied. Next, a method of continuously tracking the merged moving objects is proposed: a color histogram is extracted from the feature information of each moving object, the histograms are accumulated so that tracking does not react sensitively to noise or change, and their average is obtained and stored. Thereafter, whenever a plurality of moving objects overlap and then separate, the stored color histograms are compared so that each moving object is recognized correctly.

Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. In the following description and the accompanying drawings, detailed descriptions of well-known functions or constructions that may obscure the subject matter of the present invention are omitted. Throughout the specification, "including" certain elements does not exclude other elements unless specifically stated to the contrary; other elements may also be included.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the invention. Singular expressions include plural expressions unless the context clearly dictates otherwise. In the present application, terms such as "comprise" and "have" are intended to specify the presence of the stated features, integers, steps, operations, elements, components, or combinations thereof, and do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, or combinations thereof.

Unless defined otherwise, all terms used herein, including technical or scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Terms such as those defined in commonly used dictionaries should be construed as having a meaning consistent with their meaning in the context of the relevant art, and are not to be construed in an idealized or overly formal sense unless expressly so defined in the present application.

(1) Overall system block diagram

FIG. 1 is a block diagram of the overall system proposed by embodiments of the present invention, illustrating the moving-object extraction method using label merging based on the FCM clustering algorithm and the tracking method using the color histogram of the recognized moving object.

First, to separate the background and the moving object in the input RGB (Red-Green-Blue) image, a difference-image technique is applied to extract the moving region of the input image. The moving region is then made distinct through binarization, and the moving object is extracted by removing noise with the erosion and dilation operations of the morphology technique. However, when the moving object is similar to the background environment, it is extracted with loss. To solve this problem, labeling is performed on the extracted moving object, data (coordinate values) for clustering are extracted from the recognized labels through the equations given below, and the data are clustered. The moving object is recognized by merging the labels using the outermost coordinates of each cluster. Then the color histogram within the label is extracted, stored as information of the moving object, continuously compared, and its cumulative average is calculated and stored. Thereafter, when a plurality of moving objects overlap and separate again, each moving object is identified again using the stored information and tracked.

(2) Moving object recognition and tracking

2.1) Moving object extraction through difference image, binarization, and morphology operations

Moving-object extraction methods mainly use the difference between the pixel values of the foreground and the background, or the direction of motion. With such methods, even if an object moves in the input image, it cannot be detected when the difference between the reference image and the input image is small.

In the embodiments of the present invention, a difference-image technique is used to separate the moving object from the input image; that is, the moving region is extracted by applying the difference-image technique to a background image and the input image. The extracted moving region contains, in addition to the actual moving object, noise caused by light sources and the environment. FIG. 2 shows the process of extracting a moving object using a difference image, binarization, and morphology operations.

FIG. 2 illustrates the process of extracting an object from an input image. FIG. 2(a) is the background image and FIG. 2(b) is the input image. FIG. 2(c) shows the difference image between the input image and the background image, and FIGS. 2(d) and 2(e) show, in order, the binarization and morphology-operation results applied to the difference image.

When the difference image between the background and the input image is computed, the result shown in FIG. 2(c) is obtained: the difference between the pixel values of the background and the input image at each coordinate is stored, and the larger the difference, the brighter the pixel. The binarization process, shown in FIG. 2(d), sets a pixel to 255 when its value in the 0-255 image exceeds a certain threshold and to 0 otherwise. Since the binarized image still contains noise, the erosion and dilation operations of morphology are performed to remove it. The erosion operation removes small noise by spreading the pixel value 0 and thereby eroding regions with the pixel value 255, and the dilation operation removes noise by expanding the area of regions with the pixel value 255. In this way, the moving object is extracted from the input image through the difference image, binarization, and morphology operations.
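As a concrete illustration of the steps just described, the following is a minimal sketch using OpenCV and NumPy; the threshold value, kernel size, and iteration counts are illustrative assumptions, not values specified in the patent.

    # Minimal sketch of the extraction steps above: difference image,
    # binarization, then morphological erosion and dilation to remove noise.
    import cv2
    import numpy as np

    def extract_moving_region(background_bgr, frame_bgr, thresh=30, kernel_size=3):
        # (c) difference image between the stored background and the input frame
        diff = cv2.absdiff(background_bgr, frame_bgr)
        gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)

        # (d) binarization: pixels above the threshold become 255, the rest 0
        _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)

        # (e) morphology: erosion removes small noise, dilation restores the object area
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
        eroded = cv2.erode(binary, kernel, iterations=1)
        dilated = cv2.dilate(eroded, kernel, iterations=2)
        return dilated

    # Example usage (two frames captured from the same camera):
    # background = cv2.imread("background.png")
    # frame = cv2.imread("frame.png")
    # mask = extract_moving_region(background, frame)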

2.2) Moving object recognition using FCM clustering

A cluster is a set of data items satisfying the same condition, and clustering refers to grouping multiple data items that satisfy the same condition into one cluster. Typical clustering methods include K-means and FCM.

K-means clustering assigns each data item to the nearest of the arbitrarily initialized clusters, computes the center point of the clustered data, re-clusters with the new center points, and terminates when the error between the previous and current center points falls below a certain level. It is easy to implement and fast, but the result may vary depending on the initial clusters, so it is not suitable for complicated environments. FCM clustering follows the same procedure as K-means, except that the clustering uses a weight of each data item for each cluster. Here the weight of a data item expresses how strongly it belongs to each cluster center, that is, the probability that the data item is included in each cluster. Moreover, since the cluster center is computed from the coordinates of the data and their corresponding weights rather than from the simple average of the data, it is not sensitive to the selection of the initial center points. However, as with K-means clustering, the number of clusters must be specified, so an additional method is needed.

Embodiments of the present invention therefore include a method of recognizing each moving object by clustering, with the FCM method, the data extracted from a moving object that has been lost and divided, and merging the result; a method of extracting such data from the moving object; and a method of counting the number of moving objects in order to determine the number of clusters.

2.2.1) Data extraction for clustering

To perform FCM clustering, data to be clustered are necessary, so data are extracted from the moving object. Because the FCM clustering algorithm clusters using the weights between data and clusters and recalculates the center points, the recognition performance for moving objects improves as the number of data increases. However, if the data include points unrelated to the actual moving object, the data are not clustered accurately for each object when several moving objects approach each other. Therefore, the embodiments of the present invention propose a method of filtering out data that do not belong to the area of each moving object.

To this end, a labeling process is first performed on the moving-object extraction image obtained through the difference image, binarization, and morphology operations described in 2.1); several labels are formed on a moving object that has been divided into several parts. FIG. 3 illustrates the method of extracting candidate data.

In FIG. 3, (X_min, Y_min) and (X_max, Y_max) denote the minimum and maximum coordinates of the label, respectively, and W and H denote the width and height of the label. The candidate data extracted at given coordinates within one label are generated at regular intervals, and the coordinates of the candidate data do not exceed the maximum label coordinate (X_max, Y_max).

The width W and the height H of the label are expressed by the following equation (1).

    W = X_max − X_min,   H = Y_max − Y_min        … (1)

    CD = { CD_nm = (x_n, y_m) | n, m = 0, 1, 2, … }        … (2)

In Equation (2), CD_nm is the set of candidate data having coordinates (x_n, y_m), where n and m are integers of 0 or more.

The mathematical expression for extracting the coordinates of CD_nm is as follows.

    x_n = X_min + n · EG,   y_m = Y_min + m · EG   (x_n ≤ X_max, y_m ≤ Y_max)        … (3)

Here, EG denotes the interval between candidate data. As EG becomes smaller, more data are extracted; as it becomes larger, fewer data are extracted. Therefore, setting EG smaller increases the accuracy of clustering, but the amount of computation may also increase because of the larger amount of data.

FIG. 4(a) shows the extracted candidate data, and FIG. 4(b) shows the area of the recognized moving object. The equation for FIG. 4 is as follows.

    RD_k(x_k, y_k) = CD_nm(x_n, y_m),   if IMG(x_n, y_m) > 0        … (4)

Here, IMG(x_n, y_m) is the value of the image containing the recognized moving object at the candidate coordinate, and RD_k(x_k, y_k) denotes the data of FIG. 4(d). The IMG values and the candidate data CD are multiplied, and the coordinates that remain are stored as the k-th coordinate values of RD. The coordinates of RD obtained after all the data have passed through this process are defined as the actual data.

If the actual data are extracted as described above, the data best matching the actual region of the moving object can be found, but the width and height of the recognized moving object become unclear. Therefore, in the embodiments of the present invention, this problem is solved by adding the following coordinates.

    [Equation (5): the end-point coordinates storing the width and height of each label, added to the data set]

Equation (5) gives the blue coordinates in FIG. 4(d). These are used to store the width and height end points of each label generated before extracting the candidate data, and to count the number of moving objects described in the next section.
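The data extraction of this section can be sketched as follows; this is a minimal sketch under stated assumptions, since Equations (1)-(5) are only partly reproduced above, so details such as the grid origin and the set of end points kept are assumptions.

    # Sketch of Section 2.2.1: candidate data CD are generated on a regular grid
    # of spacing EG inside each label's bounding box (Equations (1)-(3)), kept as
    # actual data RD only where the binary moving-object mask is foreground
    # (Equation (4)), and the label end points are added so that the width and
    # height of the object remain defined (Equation (5)).
    import numpy as np

    def extract_real_data(mask, label_boxes, eg=8):
        """mask: binary image (0/255); label_boxes: list of (x_min, y_min, x_max, y_max)."""
        rd = []
        for (x_min, y_min, x_max, y_max) in label_boxes:
            # candidate data at interval EG, never exceeding (x_max, y_max)
            for y in range(y_min, y_max + 1, eg):
                for x in range(x_min, x_max + 1, eg):
                    if mask[y, x] > 0:          # keep only coordinates on the moving object
                        rd.append((x, y))
            # end points of the label, kept so the object's width/height stay defined
            rd.extend([(x_min, y_min), (x_max, y_min), (x_min, y_max), (x_max, y_max)])
        return np.array(rd, dtype=np.float32)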

2.2.2) How to count the number of moving objects

To perform FCM clustering, the number of clusters must be determined in advance. In the embodiments of the present invention, one cluster corresponds to one moving object, so the following method of counting the number of moving objects is proposed.

First, it is assumed that a moving object always appears at the border (edge) of the image. When a moving object appears, its label is in contact with the border; when the label completely separates from the border, it is counted as a moving object.

FIG. 5(a) shows the state in which the label overlaps the image border when a moving object appears from the border, and FIG. 5(b) shows the moving object being counted when the border and the label separate. FIG. 5(c) shows the case where a moving object approaches the border again; in this case, it is not counted.
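A minimal sketch of this counting rule is given below; the border margin is an illustrative assumption, and the correspondence of a label between consecutive frames is assumed to be maintained elsewhere.

    # Sketch of Section 2.2.2: a moving object is counted at the moment its label
    # stops touching the image border (FIG. 5(b)); a label approaching the border
    # again (FIG. 5(c)) is not counted.
    def touches_border(box, img_w, img_h, margin=1):
        x_min, y_min, x_max, y_max = box
        return (x_min <= margin or y_min <= margin or
                x_max >= img_w - 1 - margin or y_max >= img_h - 1 - margin)

    def count_on_separation(was_touching, box, img_w, img_h, count):
        """Update the object count for one tracked label between two frames."""
        now_touching = touches_border(box, img_w, img_h)
        if was_touching and not now_touching:
            count += 1          # label has just separated from the border
        return count, now_touching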

2.2.3) Moving object recognition through FCM clustering-based label merging

When the data to be clustered and the number of clusters have been determined, FCM clustering is performed. The weights of the data and the center points of the clusters are calculated by the following equations.

    w_ij = 1 / Σ_{k=1..C} ( ‖RD_i − c_j‖ / ‖RD_i − c_k‖ )^(2/(f−1))        … (6)

w_ij is the weight of RD_i for the j-th cluster, and C is the number of moving objects counted in Section 2.2.2). f is an integer of 2 or more that determines the degree of dispersion of the clusters; the larger f is, the smaller the change of the clusters between iterations. c_j is the center point of a cluster; it is set arbitrarily at first and, in each iteration step, is determined through the following Equation (7).

    c_j = Σ_{i=1..N} (w_ij)^f · RD_i / Σ_{i=1..N} (w_ij)^f        … (7)

N is the total number of extracted actual data, and the cluster center point is calculated through Equation (7).

The clustering process is repeatedly performed, and the clustering process is terminated when the change rate of the cluster center point is less than a predetermined level. The rate of change of the cluster center point is determined by equation (9).

    c_j^t = (x_j^t, y_j^t)        … (8)

    Err = ‖ c_j^(t+1) − c_j^t ‖        … (9)

Equation (8) represents the coordinates of the cluster center point calculated through Equation (7) at the current iteration t. Err in Equation (9) is the change of the center point during the iteration process, where c_j^(t+1) denotes the center point of the cluster calculated at the next iteration. The clustering process is repeated until Err becomes equal to or less than a predetermined value, and the cluster center point at that time is output. Equation (10) expresses the condition for assigning the actual data to a cluster.

    RD_i ∈ Cluster_j   if   w_ij = max( w_i1, w_i2, …, w_iC )        … (10)

Each data item RD_i has C weights. If the weight with the largest value is w_ij, RD_i is included in Cluster_j. Labeling is then performed by finding the maximum (outermost) coordinates of the data clustered in each Cluster_j, and the labels are merged to recognize the moving object.
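The following is a minimal NumPy sketch of the FCM updates and the label merging corresponding to Equations (6)-(10); the fuzzifier f, the convergence tolerance, the iteration cap, and the random initialization are illustrative assumptions, not values fixed by the patent.

    # Sketch of FCM clustering over the actual data RD and merging of labels by cluster.
    import numpy as np

    def fcm(rd, c_count, f=2.0, tol=1e-3, max_iter=100, seed=0):
        """rd: (N, 2) array of actual data; c_count: number of counted moving objects."""
        rng = np.random.default_rng(seed)
        centers = rd[rng.choice(len(rd), size=c_count, replace=False)]
        for _ in range(max_iter):
            # Equation (6): weight of each data point for each cluster
            dist = np.linalg.norm(rd[:, None, :] - centers[None, :, :], axis=2) + 1e-9
            ratio = (dist[:, :, None] / dist[:, None, :]) ** (2.0 / (f - 1.0))
            w = 1.0 / ratio.sum(axis=2)
            # Equation (7): new cluster centers from the weighted data
            wf = w ** f
            new_centers = (wf.T @ rd) / wf.sum(axis=0)[:, None]
            # Equation (9): stop when the change of the centers falls below the tolerance
            if np.linalg.norm(new_centers - centers) < tol:
                centers = new_centers
                break
            centers = new_centers
        return w, centers

    def merge_labels_by_cluster(rd, w):
        """Equation (10): assign each point to its highest-weight cluster, then take the
        outermost coordinates of each cluster as the merged label's bounding box."""
        assign = np.argmax(w, axis=1)
        boxes = []
        for j in range(w.shape[1]):
            pts = rd[assign == j]
            if len(pts) == 0:
                continue
            boxes.append((pts[:, 0].min(), pts[:, 1].min(), pts[:, 0].max(), pts[:, 1].max()))
        return boxes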

2.3) Judgment of overlapped state and separation state of plural moving objects

When a plurality of moving objects overlap each other while moving, it is necessary to determine both the overlapped state and the separated state after the overlap. This is an important part of recognizing and tracking a particular moving object. FCM clustering has the advantage of recognizing each moving object even when two moving objects are close to each other; however, when two moving objects overlap, each object cannot be recognized individually. Embodiments of the present invention propose a method of determining the overlapped and separated states to compensate for this.

First, the overlapping state is determined from two conditions: the first is overlap of the moving-object label areas, and the second is the distance between the label center points. FIG. 6 illustrates the two conditions and the determination of overlap.

FIG. 6(a) shows the case where the labels of two moving objects overlap; FIG. 6(b) shows the overlapped state, in which both conditions are satisfied; and FIG. 6(c) shows the separated state, in which both conditions are released after the overlapped state.

When moving objects separate after overlapping, the recognition (identity) of each object recognized before the overlap may be reversed. In the next section, a method is proposed for recognizing each moving object correctly after overlapping.
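The two-condition test described above can be sketched as follows; the bounding-box format and the center-distance threshold are assumptions made for illustration.

    # Sketch of Section 2.3: two label boxes are judged overlapped when (1) their
    # areas intersect and (2) their center points are closer than a threshold; the
    # separated state is declared when both conditions are released again.
    import math

    def boxes_intersect(a, b):
        ax1, ay1, ax2, ay2 = a
        bx1, by1, bx2, by2 = b
        return ax1 <= bx2 and bx1 <= ax2 and ay1 <= by2 and by1 <= ay2

    def center_distance(a, b):
        ca = ((a[0] + a[2]) / 2.0, (a[1] + a[3]) / 2.0)
        cb = ((b[0] + b[2]) / 2.0, (b[1] + b[3]) / 2.0)
        return math.hypot(ca[0] - cb[0], ca[1] - cb[1])

    def is_overlapped(a, b, dist_threshold=60.0):
        # Both conditions must hold for the overlapped state (FIG. 6(b)).
        return boxes_intersect(a, b) and center_distance(a, b) < dist_threshold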

2.4) Storing moving object information using color histogram

In the embodiments of the present invention, a method of recognizing each separated moving object after overlap is proposed that uses a color histogram. The color histogram is robust to changes of image size and angle, and is easy to extract and performs well compared with texture, shape, and other features. Therefore, the color histogram is used to store the information of each moving object being tracked and to recognize it again.

To this end, a color histogram of each moving-object label region is first extracted. The color histogram of the moving object is extracted in the HSV (Hue-Saturation-Value: hue-saturation-brightness) color space; because the HSV color space separates color from brightness, a histogram robust to illumination can be extracted. After the information about each moving object is stored, each moving object is tracked and recognized through histogram matching. However, a single histogram may vary greatly depending on the situation. To solve this problem, the embodiments of the present invention accumulate the histograms extracted for each moving object so as to be resistant to color change caused by illumination, and obtain and store their average.

    F_(t+1)(H) = ( t · F_t(H) + F(H) ) / ( t + 1 )        … (11)

Here, t is the time, F(H) is the current frequency of the color value H (0 ≤ H ≤ 180), and F_t(H) is the average value of the color value H up to the previous time. Using these, the average F_(t+1)(H) of the cumulative frequencies up to the present is obtained.

Using the above equation, the average color histogram of the moving object can be calculated. The color histogram of the moving object is extracted using Equation (11) and matched with the previous value, so that the moving object is tracked continuously. That is, if overlap occurs while each moving object is being tracked, extraction of its histogram is stopped, and after separation the objects are matched against the histograms stored before the overlap and tracking resumes.
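A minimal OpenCV sketch of this histogram bookkeeping is shown below; the bin count and the correlation-based comparison are illustrative choices, not requirements of the patent.

    # Sketch of Section 2.4 / Equation (11): hue histogram of each label region in
    # HSV space, accumulated as a running average over time, and compared by
    # histogram matching when objects separate after an overlap.
    import cv2
    import numpy as np

    def hue_histogram(frame_bgr, box):
        x_min, y_min, x_max, y_max = box
        roi = frame_bgr[y_min:y_max, x_min:x_max]
        hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0], None, [180], [0, 180])   # hue channel, 0..180
        return cv2.normalize(hist, hist).flatten()

    def update_average_hist(avg_hist, t, new_hist):
        # Equation (11): F_(t+1)(H) = (t * F_t(H) + F(H)) / (t + 1)
        if avg_hist is None:
            return new_hist.copy(), 1
        return (t * avg_hist + new_hist) / (t + 1), t + 1

    def match_object(stored_hists, query_hist):
        """Return the index of the stored object whose average histogram is most similar."""
        scores = [cv2.compareHist(h.astype(np.float32), query_hist.astype(np.float32),
                                  cv2.HISTCMP_CORREL) for h in stored_hists]
        return int(np.argmax(scores))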

The above-described procedure is summarized in time series in FIGS. 7 and 8. FIG. 7 is a flowchart of the overall system proposed by an embodiment of the present invention, and FIG. 8 is a detailed flowchart of the system.

(3) Experiments and Results

To demonstrate the performance of the moving-object recognition and tracking method using FCM clustering-based label merging proposed by the embodiments of the present invention, various experiments were performed in a real environment. The experimental environment was an IBM PC with an i5-750 2.67 GHz CPU and 4 GB RAM, and a 640x480-pixel, 30-fps web camera.

FIG. 9 shows the results of moving-object extraction and recognition of a lost moving object. FIG. 9(a) is the original image input from the camera; FIG. 9(b) is the result of binarizing the difference image of the original image; FIG. 9(c) is the result of the erosion and dilation (morphology) operations; and FIG. 9(d) is the result of labeling the lost moving-object area. As shown in FIG. 9(d), when part of the moving object is lost, the label is divided into several parts.

FIG. 10 illustrates the clustering procedure and result for one frame. FIGS. 10(a) to 10(c) are performed in order to label the lost moving object, and the candidate data shown in FIG. 10(d) are extracted. The actual data are then extracted as in FIG. 10(e) through multiplication with FIG. 10(b), and clustering is performed on the extracted actual data, giving the result of FIG. 10(f).

FIG. 11 shows the result of applying the method proposed by the embodiments of the present invention. In FIG. 11(a), only one moving object is present and therefore only one is found; FIG. 11(b) shows the state in which moving objects are counted and two moving objects are recognized. FIGS. 11(c) and 11(g) show overlapped states; when the objects separate from the overlapped state, the color histogram of each moving object is used so that their identities are not reversed, as shown in FIGS. 11(d) and 11(h). As can be seen from FIG. 11, the proposed method can recognize each moving object because multiple moving objects are not merged merely according to distance.

FIG. 12 compares recognition of moving objects by the shortest-distance matching method and by the method proposed in the embodiments of the present invention. FIG. 12(a) shows the case where two moving objects come close to each other under the conventional shortest-distance matching method, and FIG. 12(b) shows the result of the proposed method.

    [Table 1: number of moving objects recognized in states (1)-(4) by the shortest-distance matching method and by the proposed method]

Table 1 shows the number of moving objects recognized by the shortest-distance matching method and by the proposed method in each state; the state changes in the order (1) → (4). When the moving objects are far apart, both methods give similar results. However, when overlap occurs, the proposed method recognizes the plural moving objects individually, without merging them.

As described above, the embodiments of the present invention propose a method of recognizing and tracking moving objects using a label merging method based on FCM clustering. First, to analyze the image information acquired from the camera, the moving-object region is extracted using a difference image, binarization, and the erosion and dilation operations of morphology. Then, when several labels are generated because part of a moving object is lost, data are extracted from the divided labels and FCM clustering is performed to merge them so that the object is recognized as a single moving object. In addition, the color histogram of the moving object recognized by label merging is continuously extracted and compared for tracking; the extracted color histograms are accumulated as information of the recognized moving object and averaged, providing a method robust to change.

Meanwhile, the embodiments of the present invention can be embodied as computer readable codes on a computer readable recording medium. A computer-readable recording medium includes all kinds of recording apparatuses in which data that can be read by a computer system is stored.

Examples of the computer-readable recording medium include ROM, RAM, CD-ROM, magnetic tape, floppy disk, optical data storage, and the like. In addition, the computer-readable recording medium may be distributed over network-connected computer systems so that computer readable codes can be stored and executed in a distributed manner. In addition, functional programs, codes, and code segments for implementing the present invention can be easily deduced by programmers skilled in the art to which the present invention belongs.

The present invention has been described above with reference to various embodiments. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the disclosed embodiments should be considered in an illustrative rather than a restrictive sense. The scope of the present invention is defined by the appended claims rather than by the foregoing description, and all differences within the scope of equivalents thereof should be construed as being included in the present invention.

Claims (20)

(a) extracting a moving region for the moving object by removing a background from an input image including the moving object;
(b) assigning a label to the extracted motion region; And
(c) recognizing the moving object by merging the labels using the coordinates of each of the clusters obtained through clustering on the image to which the label is assigned,
The step (c)
(c1) setting coordinates of candidate data at predetermined intervals within each area to which the label is allocated, and extracting those coordinates as actual data for clustering if pixel values of the moving object exist at the coordinates of the set candidate data; and
(c2) merging a plurality of separated labels into one moving object by performing FCM (Fuzzy C-Means) clustering on the extracted actual data, and counting a moving object when a label generated at an edge of the entire image separates from the edge.
The method according to claim 1,
The step (a)
Wherein a motion image of only a moving object is acquired by generating a difference image with an input image including a moving object using an image including only a pre-stored background.
3. The method of claim 2,
The step (a)
Performing binarization of pixel values through comparison with a preset reference value for a moving region of only the obtained moving object; And
And extracting a moving object from which noise has been removed by using a morphology operation on the binarized image.
The method of claim 3,
The morphology operation includes:
an erosion operation that removes small noise by diffusing the pixel value '0' in the binarized image to erode portions having the pixel value '255', and a dilation operation that removes noise by expanding the area of portions having the pixel value '255'.
The method according to claim 1,
The step (b)
And assigning a unique label to each of the plurality of separated parts in the extracted motion area in correspondence with the rectangular area.
delete
The method according to claim 1,
The step (c1)
Wherein the coordinates of the moving object corresponding to the coordinates of the set candidate data are stored and coordinates indicating the width and height of the area belonging to the label are stored together.
delete
The method according to claim 1,
The step (c2)
wherein clustering is repeated by calculating the weight of the actual data and the center point of each cluster based on the extracted actual data and the counted number of clusters, so as to find the maximum coordinates of the clustered data, and clustering is terminated when the rate of change of the center point is less than a reference value.
The method according to claim 1,
(d) extracting a color histogram in each region for each of the merged labels and storing the histogram as identification information of the moving object.
(a) extracting a moving region for the moving object by removing a background from an input image including the moving object;
(b) assigning a label to the extracted motion region;
(c) recognizing the moving object by merging the labels using the coordinates of clusters acquired through clustering of the images assigned with the labels; And
(d) extracting a color histogram in each region for each of the merged labels, storing the histogram as identification information of the moving object, and tracking the moving object using the stored color histogram,
The step (c)
(c1) setting coordinates of candidate data at predetermined intervals within each area to which the label is allocated, and extracting those coordinates as actual data for clustering if pixel values of the moving object exist at the coordinates of the set candidate data; and
(c2) merging a plurality of separated labels into one moving object by performing FCM (Fuzzy C-Means) clustering on the extracted actual data, and counting a moving object when a label generated at an edge of the entire image separates from the edge.
12. The method of claim 11,
The step (b)
And assigning a unique label to each of the plurality of separated parts in the extracted motion area in correspondence with the rectangular area.
delete
12. The method of claim 11,
The step (c1)
Wherein the coordinates of the moving object corresponding to the coordinates of the set candidate data are stored and coordinates indicating the width and height of the area belonging to the label are stored together.
delete
12. The method of claim 11,
The step (c2)
wherein clustering is repeated by calculating the weight of the actual data and the center point of each cluster based on the extracted actual data and the counted number of clusters, so as to find the maximum coordinates of the clustered data, and clustering is terminated when the rate of change of the center point is less than a reference value.
12. The method of claim 11,
The step (d)
(d1) extracting a color histogram in which the hue and brightness in each region are separated for each of the merged labels, and storing the extracted color histogram as identification information of the moving object; And
(d2) when it is detected that a plurality of moving objects overlap in the input image, histogram-matching the moving objects after they separate, using the per-label color histograms stored in advance, and tracking each moving object.
18. The method of claim 17,
The step (d1)
Wherein the color histogram is accumulated by using a frequency of color values appearing according to a change of time, and is set as identification information of a moving object by calculating an average of frequencies.
18. The method of claim 17,
The step (d2)
wherein the color histogram of the moving object is extracted in real time and matched with the previously stored value so that the moving object is continuously tracked; if overlapping of moving objects occurs, extraction of the color histogram is stopped, and after separation the objects are matched with the stored color histograms and tracking of each moving object is resumed.
A computer-readable recording medium storing a program for causing a computer to execute the method of any one of claims 1 to 5, 7, 9 to 12, 14, and 16 to 19.
KR1020170088638A 2017-07-12 2017-07-12 A video surveillance apparatus for identification and tracking multiple moving objects and method thereof KR101868103B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020170088638A KR101868103B1 (en) 2017-07-12 2017-07-12 A video surveillance apparatus for identification and tracking multiple moving objects and method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020170088638A KR101868103B1 (en) 2017-07-12 2017-07-12 A video surveillance apparatus for identification and tracking multiple moving objects and method thereof

Publications (1)

Publication Number Publication Date
KR101868103B1 true KR101868103B1 (en) 2018-06-18

Family

ID=62767765

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020170088638A KR101868103B1 (en) 2017-07-12 2017-07-12 A video surveillance apparatus for identification and tracking multiple moving objects and method thereof

Country Status (1)

Country Link
KR (1) KR101868103B1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111918034A (en) * 2020-07-28 2020-11-10 上海电机学院 Embedded unattended base station intelligent monitoring system
KR20210072475A (en) * 2019-12-09 2021-06-17 재단법인대구경북과학기술원 Method and Device for Processing Segments of Video
KR20210133562A (en) * 2020-04-29 2021-11-08 군산대학교산학협력단 Method for multiple moving object tracking using similarity between probability distributions and object tracking system thereof
KR20220066245A (en) * 2020-11-11 2022-05-24 베이징 센스타임 테크놀로지 디벨롭먼트 컴퍼니 리미티드 Target tracking method and apparatus, electronic device and storage medium
CN114764897A (en) * 2022-03-29 2022-07-19 深圳市移卡科技有限公司 Behavior recognition method, behavior recognition device, terminal equipment and storage medium
WO2024071813A1 (en) * 2022-09-27 2024-04-04 삼성전자 주식회사 Electronic device for classifying object area and background area, and operating method of electronic device
US12002218B2 (en) 2020-11-26 2024-06-04 Samsung Electronics Co., Ltd. Method and apparatus with object tracking

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101508310B1 (en) * 2014-04-10 2015-04-07 군산대학교산학협력단 Apparatus and method for tracking multiple moving objects in video surveillance system
KR20160144149A (en) * 2015-06-08 2016-12-16 군산대학교산학협력단 A video surveillance apparatus for removing overlap and tracking multiple moving objects and method thereof
KR101731243B1 (en) * 2015-12-15 2017-04-28 군산대학교 산학협력단 A video surveillance apparatus for identification and tracking multiple moving objects with similar colors and method thereof
KR20170073963A (en) * 2015-12-21 2017-06-29 한국과학기술연구원 Device and method for tracking group-based multiple object

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101508310B1 (en) * 2014-04-10 2015-04-07 군산대학교산학협력단 Apparatus and method for tracking multiple moving objects in video surveillance system
KR20160144149A (en) * 2015-06-08 2016-12-16 군산대학교산학협력단 A video surveillance apparatus for removing overlap and tracking multiple moving objects and method thereof
KR101731243B1 (en) * 2015-12-15 2017-04-28 군산대학교 산학협력단 A video surveillance apparatus for identification and tracking multiple moving objects with similar colors and method thereof
KR20170073963A (en) * 2015-12-21 2017-06-29 한국과학기술연구원 Device and method for tracking group-based multiple object

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
C. Y. Jeong and J. W. Han, "Technical trends of abnormal event detection in video analytics", Electronics and Telecommunications Trends, No. 136. pp. 114-122, 2012
L. Y. Shi and Y. H. Joo, "Multiple moving objects detection and tracking algorithm for intelligent surveillance system", Journal of Korean Institute of Intelligent System, Vol. 22. No. 6. pp. 741-747. 2012
T. W. Jang, Y. T. Shin, and J. B. Kim, "A study on the object extraction and tracking system for intelligent surveillence", Jounal of Korean Institute of Communications and Information Sciences, Vol. 38, No. 7, pp. 589-595, 2013

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20210072475A (en) * 2019-12-09 2021-06-17 재단법인대구경북과학기술원 Method and Device for Processing Segments of Video
KR102382962B1 (en) 2019-12-09 2022-04-06 재단법인대구경북과학기술원 Method and Device for Processing Segments of Video
KR20210133562A (en) * 2020-04-29 2021-11-08 군산대학교산학협력단 Method for multiple moving object tracking using similarity between probability distributions and object tracking system thereof
KR102370228B1 (en) 2020-04-29 2022-03-04 군산대학교 산학협력단 Method for multiple moving object tracking using similarity between probability distributions and object tracking system thereof
CN111918034A (en) * 2020-07-28 2020-11-10 上海电机学院 Embedded unattended base station intelligent monitoring system
KR20220066245A (en) * 2020-11-11 2022-05-24 베이징 센스타임 테크놀로지 디벨롭먼트 컴퍼니 리미티드 Target tracking method and apparatus, electronic device and storage medium
KR102446688B1 (en) * 2020-11-11 2022-09-23 베이징 센스타임 테크놀로지 디벨롭먼트 컴퍼니 리미티드 Target tracking method and apparatus, electronic device and storage medium
US12002218B2 (en) 2020-11-26 2024-06-04 Samsung Electronics Co., Ltd. Method and apparatus with object tracking
CN114764897A (en) * 2022-03-29 2022-07-19 深圳市移卡科技有限公司 Behavior recognition method, behavior recognition device, terminal equipment and storage medium
WO2024071813A1 (en) * 2022-09-27 2024-04-04 삼성전자 주식회사 Electronic device for classifying object area and background area, and operating method of electronic device

Similar Documents

Publication Publication Date Title
KR101868103B1 (en) A video surveillance apparatus for identification and tracking multiple moving objects and method thereof
KR101764845B1 (en) A video surveillance apparatus for removing overlap and tracking multiple moving objects and method thereof
EP2801078B1 (en) Context aware moving object detection
Elhabian et al. Moving object detection in spatial domain using background removal techniques-state-of-art
JP4741650B2 (en) Method of object tracking in video sequence
Guan Spatio-temporal motion-based foreground segmentation and shadow suppression
KR101508310B1 (en) Apparatus and method for tracking multiple moving objects in video surveillance system
Shukla et al. Moving object tracking of vehicle detection: a concise review
Fradi et al. Spatial and temporal variations of feature tracks for crowd behavior analysis
Patel et al. Moving object tracking techniques: A critical review
Zang et al. Object classification and tracking in video surveillance
Fradi et al. Spatio-temporal crowd density model in a human detection and tracking framework
Angelo A novel approach on object detection and tracking using adaptive background subtraction method
Hardas et al. Moving object detection using background subtraction shadow removal and post processing
KR102019301B1 (en) A video surveillance apparatus for detecting agro-livestock theft and method thereof
Almomani et al. Segtrack: A novel tracking system with improved object segmentation
Chebi et al. Dynamic detection of anomalies in crowd's behavior analysis
Fradi et al. Sparse feature tracking for crowd change detection and event recognition
Monteiro et al. Robust segmentation for outdoor traffic surveillance
Luvison et al. Automatic detection of unexpected events in dense areas for videosurveillance applications
KR20120054381A (en) Method and apparatus for detecting objects in motion through background image analysis by objects
Ali et al. A framework for human tracking using kalman filter and fast mean shift algorithms
Kavitha et al. A robust multiple moving vehicle tracking for intelligent transportation system
Malavika et al. Moving object detection and velocity estimation using MATLAB
RU2676028C1 (en) Method of detecting left object in video stream

Legal Events

Date Code Title Description
E701 Decision to grant or registration of patent right
GRNT Written decision to grant