WO2001084844A1 - System for tracking and monitoring multiple moving objects - Google Patents

System for tracking and monitoring multiple moving objects

Info

Publication number
WO2001084844A1
Authority
WO
WIPO (PCT)
Prior art keywords
moving object
image
movement
camera
tracking
Prior art date
Application number
PCT/KR2001/000711
Other languages
French (fr)
Inventor
Whoi-Yul Kim
Heun-Su Shin
Chan-Soo Lee
Original Assignee
Network Korea Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Network Korea Co., Ltd. filed Critical Network Korea Co., Ltd.
Priority to AU2001255087A priority Critical patent/AU2001255087A1/en
Publication of WO2001084844A1 publication Critical patent/WO2001084844A1/en

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S3/00Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
    • G01S3/78Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received using electromagnetic waves other than radio waves
    • G01S3/782Systems for determining direction or deviation from predetermined direction
    • G01S3/785Systems for determining direction or deviation from predetermined direction using adjustment of orientation of directivity characteristics of a detector or detector system to give a desired condition of signal derived from that detector or detector system
    • G01S3/786Systems for determining direction or deviation from predetermined direction using adjustment of orientation of directivity characteristics of a detector or detector system to give a desired condition of signal derived from that detector or detector system the desired condition being maintained automatically
    • G01S3/7864T.V. type tracking systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S3/00Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
    • G01S3/78Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received using electromagnetic waves other than radio waves
    • G01S3/7803Means for monitoring or calibrating
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S3/00Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
    • G01S3/78Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received using electromagnetic waves other than radio waves
    • G01S3/781Details
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing

Definitions

  • the present invention relates to an object tracking and monitoring system, and more particularly, to a system for tracking and monitoring multiple moving objects using a broad area monitoring camera and a local area monitoring camera.
  • a system for tracking and monitoring multiple moving objects comprising a broad area monitoring apparatus for registering and tracking moving objects within a monitoring area at a fixed position, and a local monitoring apparatus including at least one local area monitoring camera for tracking and monitoring a moving object to be tracked among the registered objects by the broad area monitoring apparatus until the moving object is out of a monitoring range, wherein the broad area monitoring apparatus catches a moving object, recognizes the position of the moving object, and gives an order of tracking to the local area monitoring apparatus so that the local area monitoring apparatus tracks the moving object until the moving object is out of the monitoring area.
  • the broad area monitoring apparatus comprises a camera, an image input unit for receiving an image from the camera, an image correction unit for correcting the input image, a movement detection unit for detecting a movement from the corrected image, a moving object extraction unit for extracting a moving object, and a moving object tracking unit for tracking the position and the speed of the extracted moving object.
  • the local area monitoring apparatus comprises at least one camera, an image input unit for receiving an image from the camera, an image correction unit for correcting the input image, a movement detection unit for detecting a movement from the corrected image, a moving object extraction unit for extracting a moving object, a moving object tracking unit for tracking the position and the speed of the extracted moving object, and a camera movement correction unit for correcting the movement of the camera according to the movement of the moving object.
  • the image correction unit uses a noise removing filter to remove a noise component of an input image.
  • the noise removing filter is one of a median filter, a Gaussian filter, a leveling filter, and a mean filter.
  • the mean filter reuses the previously calculated mean value as the block slides over the image, rather than recomputing the sum at every position.
  • the moving object extraction unit extracts one object by segmentation and merging using movement area information.
  • the moving object tracking unit estimates and tracks a next position of the moving object by applying information on the speed and position of the moving object to an estimation algorithm using a Kalman filter.
  • the moving object tracking unit tracks the moving object by using template matching when an object being tracked stops so that there is no movement or when many objects move across so that accurate tracking is not possible.
  • the camera movement correction unit extracts edges of continuously input image data in a panning mode of the camera, projects the extracted edge in X and Y axes, and corrects the movement of the camera by comparing the projected values.
  • FIG. 1 is a diagram showing the structure of a system for tracking and monitoring multiple moving objects according to a preferred embodiment of the present invention
  • FIG. 2 is a diagram showing the structure of a broad area monitoring camera in the system for tracking and monitoring multiple moving objects according to a preferred embodiment of the present invention
  • FIG. 3 is a diagram showing the structure of a local area monitoring camera in the system for tracking and monitoring multiple moving objects according to a preferred embodiment of the present invention
  • FIG. 4 shows movement area extraction using an input image (left column) and movement area extraction using a filtered image (right column);
  • FIG. 5 shows input images, disturbance maps, and the results of extraction of a moving area
  • FIG. 6 shows the results of a comparison between images before the binary filtering (left column) and images after the binary filtering (right column);
  • FIG. 7 shows a comparison of images before (left column) and after (right column) objects merge
  • FIG. 8 is a flowchart for explaining a Kalman filter
  • FIG. 9 shows the results of projection in vertical and horizontal directions with respect to a two-dimensional image
  • FIG. 10 shows the vector representation of a projection signal and the vector sum thereof
  • FIG. 11 shows the tracking of an object by template matching
  • FIG. 12 shows an input image and a background image at the initial frame
  • FIG. 13 shows the input image and the background image after a predetermined time has passed;
  • FIG. 14 shows a movement extraction image by using the difference between the input image and the background image;
  • FIG. 15A shows an edge image at a presently input frame
  • FIG. 15B shows the resultant image
  • FIG. 16 shows a template (up) registered at the previous frame and a template (down) found at the present frame
  • FIGS. 17A through 17C show a comparison between the results of the movement extraction before and after camera movement correction, in which FIG. 17A shows input images, FIG. 17B shows the result of the movement extraction before the camera movement correction, and FIG. 17C shows the result of the movement extraction after the camera movement correction;
  • FIGS. 18A and 18B show the extraction of a motion vector by using the block matching, in which FIG. 18A shows the previous frame and FIG. 18B shows the present frame;
  • FIG. 19 shows the result of extraction of a movement area by using the block matching;
  • FIGS. 20A through 20E show the movement extraction process by using an input image and the background image, in which FIG. 20A shows the input image and the background image at the initial frame, FIG. 20B shows the input image and the background image after a predetermined time, FIG. 20C shows a movement image extracted by using the difference between the input image and the background image, FIG. 20D shows an edge image at the presently input image, and FIG. 20E shows the result image;
  • FIG. 21 shows difference images between the respective frames
  • FIG. 22 shows an outline image of the moving object extracted by using the difference from the present image
  • FIG. 23 is a table showing the movement of a camera frame by frame
  • FIG. 24 shows an image input to the camera
  • FIG. 25 shows an object independently moving after the movement of the camera is corrected.
  • FIG. 1 shows the structure of a system for tracking and monitoring multiple moving objects according to the present invention which includes a broad area monitoring apparatus 10 and a local area monitoring apparatus 20.
  • the broad area monitoring apparatus 10 for monitoring a broad area by using a fixed camera, extracts movement by analyzing an input image and registers a moving object by using the extracted movement information to track the movement of the object.
  • the broad area monitoring apparatus 10 includes a camera 110, an image input unit 120, an image correction unit 130, a movement detection unit 140, a moving object extraction unit 150, and a moving object tracking unit 160.
  • the camera 110 collects image data of a monitored area by using a fixed camera.
  • the image input unit 120 receives the image collected by the camera 110.
  • the image correction unit 130 corrects the image received from the image input unit 120 by means of filtering.
  • the image correction unit 130 is a means for correcting the image by removing a noise component from the input image, enabling accurate detection of a movement and extraction of a moving object.
  • the image correction unit 130 uses a filter in removing noise from the input image.
  • the movement detection unit 140 detects a movement from the image corrected by the image correction unit 130.
  • the movement detection unit 140 detects a portion where movement occurs by using a disturbance map.
  • the moving object extraction unit 150 is a means for separating an actually moving object from the movement area extracted by using the disturbance map.
  • the moving object extraction unit 150 extracts an actual object by means of segmentation and merging using movement area information.
  • the moving object tracking unit 160 is a means for tracking the position and speed of the extracted moving object. For fast and accurate tracking of a moving object, the moving object tracking unit 160 tracks the movement of the moving object by accurately anticipating its next position, applying information on the position and speed of the moving object to an estimation algorithm using a Kalman filter.
  • the moving object tracking unit 160 enables continuous tracking of a moving object without omission by using template matching when an object being tracked stops so that there is no movement or when many objects move across so that accurate tracking is not possible.
  • the local monitoring apparatus 20 monitors a moving object by moving a pan/tilt camera (210 of FIG. 3) a short distance.
  • the camera moves to the left and right while monitoring the moving object.
  • the camera zooms in to detect the moving object and moves following the path of the movement of the object.
  • in addition to the steps performed in the broad area monitoring apparatus 10, steps of correcting the movement of the camera and of tracking while moving the camera to follow the object once a moving object is registered are needed.
  • the local area monitoring apparatus 20 includes a camera 210, an image input unit 220, an image correction unit 230, a movement detection unit 240, a moving object extraction unit 250, a moving object tracking unit 260, and a camera movement correction unit 270.
  • the camera 210 collects image data of a monitored area by using a pan/tilt camera covering a short distance.
  • the image input unit 220 receives the image collected by the camera 210.
  • the image correction unit 230 corrects the image received from the image input unit 220 by means of filtering.
  • the image correction unit 230 is a means for correcting the image by removing a noise component from the input image, enabling accurate detection of a movement and extraction of a moving object.
  • the image correction unit 230 uses a filter in removing noise from the input image.
  • the movement detection unit 240 detects a movement from the image corrected by the image correction unit 230.
  • the movement detection unit 240 detects a portion where a movement occurs by using a disturbance map.
  • the moving object extraction unit 250 is a means for separating an actually moving object from the movement area extracted by using the disturbance map.
  • the moving object extraction unit 250 extracts an actual object by means of segmentation and merging using movement area information.
  • the moving object tracking unit 260 is a means for tracking the position and speed of the extracted moving object.
  • In local area monitoring, the camera 210 is operated in an auto-panning mode. When a movement is detected in this state, the overall movement caused by the motion of the camera 210 is detected as well, so extraction of the actual movement of a moving object is difficult. To solve this problem, the camera movement correction unit 270 detects and corrects the movement of the camera 210 so that only the actual movements of the moving objects are detected.
  • image filtering is needed to effectively remove a noise component.
  • as a filter for removing the noise component from an image, various filters such as a median filter, a Gaussian smoothing filter, a leveling filter, or a mean filter can be used.
  • a mean filter is used to remove the noise component from an image input by the camera.
  • a normal mean filter is realized by applying a 3X3 mask to an image.
  • FIG. 4 shows movement area extraction using an input image (left column) and movement area extraction using a filtered image (right column). Referring to FIG. 4, it can be seen that a movement area by noise is extracted in the case of the unfiltered image.
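The mean filtering just described can be sketched in pure Python as below. This is an illustrative sketch only: it applies the plain 3x3 mask, without the sliding-sum optimization the text mentions, and handles borders simply by copying them.

```python
def mean_filter_3x3(img):
    """Smooth a grayscale image (list of lists) with a 3x3 mean mask.

    Border pixels are copied unchanged; each interior pixel is replaced
    by the average of its 3x3 neighbourhood, suppressing impulse noise."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            s = sum(img[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = s // 9
    return out

# A single noisy pixel (255) in a flat region is averaged down,
# so it no longer triggers a spurious movement area.
noisy = [[10] * 5 for _ in range(5)]
noisy[2][2] = 255
smoothed = mean_filter_3x3(noisy)
```

A production version would keep a running sum as the mask slides one column at a time, adding the entering column and subtracting the leaving one instead of re-summing all nine pixels.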
  • Movement extraction algorithm: to track a moving object in an input image, the movement of the moving object must first be detected and its position identified.
  • as methods of detecting a movement area, there are a method which uses the difference between two images, a method which uses optical flow, and a method which uses a motion vector.
  • the method using the difference has the advantage of a simple and fast calculation.
  • however, the method is very sensitive to changes in illumination and noise.
  • the disturbance map method generates a disturbance map from the present image and a background image (an average of the previously input images) and detects the movement area of a moving object from that map.
  • the disturbance map enables simple and fast calculation without being affected by noise and a change in illumination.
  • the disturbance map is a method which uses the concept of a temporal average.
  • the disturbance map is obtained by generating a background image by applying a historical weight to the average up to the previous frame and obtaining the difference between the present frame and the generated background.
  • the disturbance map has a predetermined value in an area where movement exists and a value close to "0" in the background where no movement exists.
  • a thresholding method is applied to the disturbance map to extract areas where actual movement exists: after taking the absolute value over the entire disturbance map, a pixel whose value is not less than a predetermined threshold is classified as a movement area, and a pixel whose value is less than the threshold is classified as a background area.
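The background update, disturbance map, and thresholding steps described above can be sketched in pure Python as follows. This is an illustrative sketch, not the patent's implementation: the historical weight `alpha` and the threshold `t` are assumed example values.

```python
def update_background(bg, frame, alpha=0.05):
    """Blend the new frame into the running background image
    (the 'historical weight' applied to the average of previous frames)."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def disturbance_map(bg, frame):
    """Absolute difference between the present frame and the background."""
    return [[abs(f - b) for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def threshold(dmap, t):
    """1 = movement area, 0 = background area."""
    return [[1 if v >= t else 0 for v in row] for row in dmap]

# A static scene in which one pixel suddenly brightens: only that pixel
# exceeds the threshold and is classified as a movement area.
bg = [[10.0] * 4 for _ in range(4)]
frame = [[10.0] * 4 for _ in range(4)]
frame[1][2] = 200.0
mask = threshold(disturbance_map(bg, frame), t=30)
```

In the quiet background the map stays near zero, so only genuine movement survives the threshold.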
  • FIG. 5 shows input images, disturbance maps, and the results of extraction of a moving area according to a lapse of time.
  • the movement area extracted by the disturbance map is divided into each area by a labeling process.
  • an area of which the size is less than a reference value is considered to be noise and thus removed.
  • the center of weight of each of the labeled objects is obtained.
  • the obtained center of weight is set to be the position of a target.
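The labeling, small-area noise removal, and center-of-weight steps can be sketched as below. This is a minimal pure-Python illustration (4-connected flood-fill labeling; the `min_size` noise cutoff is an assumed example value, not taken from the patent).

```python
from collections import deque

def label_regions(mask):
    """4-connected component labelling of a binary movement mask.
    Returns {label: [(y, x), ...]} for each connected region."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    regions, next_label = {}, 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                next_label += 1
                labels[y][x] = next_label
                q, pixels = deque([(y, x)]), []
                while q:
                    cy, cx = q.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = next_label
                            q.append((ny, nx))
                regions[next_label] = pixels
    return regions

def centroids(regions, min_size=2):
    """Centre of weight of each labelled region; regions smaller than
    min_size are considered noise and removed."""
    out = {}
    for lab, px in regions.items():
        if len(px) >= min_size:
            out[lab] = (sum(p[0] for p in px) / len(px),
                        sum(p[1] for p in px) / len(px))
    return out

mask = [[0, 0, 0, 0, 0],
        [0, 1, 1, 0, 0],
        [0, 1, 1, 0, 1],   # the lone pixel at (2, 4) is treated as noise
        [0, 0, 0, 0, 0]]
c = centroids(label_regions(mask))
```

The surviving region's centroid is taken as the position of the target.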
  • a binary filtering of a movement area is required.
  • the binary filtering also provides a noise removing effect.
  • the binary filtering provides an effect of incorporation of objects which are separated by one or two pixels.
  • FIG. 6 shows the results of a comparison between images before the binary filtering (left column) and images after the binary filtering (right column). Although noise is removed and the objects are incorporated by the binary filtering, the incorporation is not perfect.
  • a process of incorporating objects assumed to be a single object is performed by using information on the position of objects generated after labeling and information on the speed and direction used for tracking.
  • FIG. 7 shows a comparison of images before (left column) and after (right column) objects merge.
  • the moving object extracted in the previous step has position information at the present frame only.
  • a method is needed for anticipating the next position by seeking correlation between the objects extracted at the respective frames based on their position information, classifying them as a single moving object, and recognizing the motion of the moving object up to the present.
  • a Kalman filter is used mainly for tracking of a moving object.
  • the Kalman filter is used to track a target from the measured image information.
  • the Kalman filter has a linear filter structure. It is well known that, when given conditions are met in a linear system, the filter is optimal.
  • x_k is the state vector at time k
  • A is a constant matrix representing the dynamics of the system.
  • w_k denotes process noise added to the system
  • v_k in the measurement equation denotes noise generated when the signal is measured. It is assumed that the noises w_k and v_k are uncorrelated with each other, that their covariances are Q_k and R_k respectively, and that each is white noise with zero mean.
  • when the measurement y_k is input at time k, the estimate needs to be reinforced with the information in the measured value.
  • the estimated value x_k obtained by reinforcing the estimate with the information included in a new measured value can be expressed as a linear structure as in Equation 4-3.
  • the design of the filter changes to a problem of determining gain of the filter, K k , in Equation 4-3.
  • P_k in Equation 4-4 is the covariance matrix of the estimation error of x_k before the information of the new measured value is reinforced, and can be expressed as in Equation 4-5.
  • A cost function, the sum of the diagonal elements of the covariance matrix expressed by Equation 4-4, is a quadratic function of the filter gain K_k. Since the coefficient of the quadratic term is the positive covariance of the state variable and the measurement noise, the function always has a minimum. Thus, the value of K_k that makes the derivative with respect to K_k zero is given by Equation 4-6. When this value is substituted into Equation 4-4, the covariance matrix of the estimation error after the estimate-reinforcement step is expressed by Equation 4-7.
  • The estimated value of the state variable given by Equation 4-3 is continuously propagated into an optimal estimate according to the dynamics of the system embodied in the filter, until the next measured value is input.
  • Equation 4-8 shows the estimate of the state variable propagated by the dynamics of the system until a new measured value of the state variable is input.
  • Equation 4-9 shows the covariance matrix of the propagated state estimate x_{k+1}.
  • the reinforced estimate is then obtained as in Equation 4-3 and its covariance matrix P_{k+1} as in Equation 4-7.
  • the Kalman filter, which is an optimal filter, can be obtained by repeating the above method whenever a new measured value is present.
  • the Kalman filter can be summarized by the flowchart shown in FIG. 8.
  • FIG. 8 is a flowchart showing the Kalman filter.
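The predict/update cycle summarized in FIG. 8 can be illustrated with a small constant-velocity Kalman filter in pure Python. This is a generic sketch, not the patent's Equations 4-1 through 4-9: the state model (position and velocity, with only the position measured) and the noise covariances `q` and `r` are assumptions chosen for illustration.

```python
def kalman_track(measurements, q=1e-3, r=1.0, dt=1.0):
    """Track a 1-D position with a constant-velocity Kalman filter.

    State x = [position, velocity]; only the position is measured
    (H = [1, 0]).  q is the process-noise and r the measurement-noise
    covariance.  Returns the final state estimate."""
    x = [float(measurements[0]), 0.0]      # initial state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]           # covariance of the estimation error
    for z in measurements[1:]:
        # Predict: propagate state and covariance by the system dynamics
        # x' = A x,  P' = A P A^T + Q,  with A = [[1, dt], [0, 1]].
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update: gain K = P H^T / (H P H^T + R), then reinforce the
        # estimate with the new measurement and shrink the covariance.
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]
        innovation = z - x[0]
        x = [x[0] + K[0] * innovation, x[1] + K[1] * innovation]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
    return x

# An object moving at a steady 2 px/frame: the filter learns the velocity
# and can therefore anticipate the next position of the moving object.
state = kalman_track([0, 2, 4, 6, 8, 10, 12, 14])
```

The learned velocity is what lets the tracker predict where to look for the object in the next frame.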
  • a projection method improves processing speed by comparing the degree of similarity of one-dimensional signals obtained by projecting the two-dimensional image, instead of the conventional method of comparing two-dimensional image signals directly.
  • FIG. 9 shows the result of projection in the vertical and horizontal directions with respect to a two-dimensional image.
  • the vector sum is a method in which a projected one-dimensional signal is expanded in the form of vectors and the sum of the vectors is obtained and expressed as a single vector, which provides a great improvement in speed.
  • FIG. 10 shows the vector expression of a projection signal and the vector sum thereof.
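The projection idea can be sketched as follows: each image is reduced to its row sums and column sums, and similarity is measured between those short 1-D signals instead of the full 2-D images. This is an illustrative sketch (the sum-of-absolute-differences comparison is an assumption, not the patent's exact measure), and it omits the vector-sum refinement.

```python
def project(img):
    """Project a 2-D image onto the vertical axis (row sums)
    and the horizontal axis (column sums)."""
    rows = [sum(row) for row in img]
    cols = [sum(img[y][x] for y in range(len(img))) for x in range(len(img[0]))]
    return rows, cols

def projection_distance(a, b):
    """Dissimilarity of two equal-sized images compared through their 1-D
    projections; comparing two short 1-D signals is much cheaper than
    comparing the full 2-D images pixel by pixel."""
    (ra, ca), (rb, cb) = project(a), project(b)
    return sum(abs(p - q) for p, q in zip(ra, rb)) + \
           sum(abs(p - q) for p, q in zip(ca, cb))

same = [[1, 2], [3, 4]]
other = [[9, 2], [3, 4]]
d0 = projection_distance(same, same)   # identical images
d1 = projection_distance(same, other)  # differing images
```

Identical images give a distance of zero; any pixel change shows up in the projections.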
  • FIG. 11 shows tracking of an object by using a template matching method.
  • In FIG. 11, the result of registering a moving car as a template and searching for the position of the car by matching at the next frame is shown.
  • By using the matching method, the position of an object present in an image can be found regardless of whether the object is stopped or moving. Such a result may be used for tracking a moving object detected in a local monitoring system.
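Template matching, as used here to recover a stopped or occluded target, can be sketched as an exhaustive search for the best-matching window. This is a minimal pure-Python illustration; the sum-of-absolute-differences score is an assumed choice, not necessarily the patent's matching measure.

```python
def match_template(img, tpl):
    """Find the (y, x) position where the template best matches the image,
    scored by the sum of absolute differences (lower = better match)."""
    ih, iw = len(img), len(img[0])
    th, tw = len(tpl), len(tpl[0])
    best, best_pos = None, None
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            score = sum(abs(img[y + j][x + i] - tpl[j][i])
                        for j in range(th) for i in range(tw))
            if best is None or score < best:
                best, best_pos = score, (y, x)
    return best_pos

# The 2x2 patch registered at the previous frame is found again
# at its new position in the present frame.
frame = [[0, 0, 0, 0, 0],
         [0, 0, 0, 5, 6],
         [0, 0, 0, 7, 8],
         [0, 0, 0, 0, 0]]
tpl = [[5, 6], [7, 8]]
pos = match_template(frame, tpl)
```

Because the search does not depend on frame differencing, it still works when the object has stopped moving.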
  • the conventional movement extracting method using a disturbance map has a disadvantage in that one moving object is divided into many moving objects. Thus, an additional step of incorporating the divided object into one is needed.
  • the present method is a new movement extracting method to solve the above problem by accurately extracting a moving object in an image obtained by a fixed camera and segmenting the extracted moving object.
  • the method includes the following steps. A. Step of generating a background image: to generate a background image from an input image
  • B. Step of generating a difference image: to extract a portion where there is a change by using the difference between the input image and the background image
  • C. Step of extracting image edges: to detect the outline of the moving portion of the image.
  • a background image is introduced from an input image and a portion where there is movement is segmented by using the difference between the present image and the background image.
  • the outline of the segmented area is detected and indicated.
  • since the change in the background is smaller than that obtained by the conventional disturbance method, the phenomenon in which a single moving object having a size over a predetermined value is segmented into two or more parts is overcome.
  • the system is suitable for a gradual change in illumination so that it can be used as a system for monitoring traffic or a particular space.
  • the generation of a proper background image to search for a moving object is important in the system.
  • the method proposed in the present application minimizes the change in the background image so that the background image does not change when a change occurs rapidly in the image.
  • the background image generation algorithm is described as follows.
  • a first input image is input as a background image.
  • FIG. 12 shows the input image and the background image at the initial frame.
  • FIG. 13 shows the input image and the background image after a predetermined time has passed.
  • a portion where a change occurs is displayed by using the difference
  • the portion is expanded by applying a morphology method.
  • the difference image created as above is shown in FIG. 14.
  • FIG. 14 shows a movement extraction image by using the difference between an input image and a background image.
  • An image obtained by performing edge-detection on the present input image is generated. Only an outline included in an area image is extracted from the generated edge image and displayed.
  • the edge detection is performed by using a Sobel mask method.
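As a rough illustration of the Sobel edge step, the two 3x3 Sobel masks can be convolved with the grayscale image and the gradient magnitude thresholded into a binary edge map. This is a generic sketch; the gradient-magnitude approximation |gx| + |gy| and the threshold value are assumptions, not parameters given in the patent.

```python
def sobel_edges(img, t=100):
    """Binary edge map of a grayscale image using the Sobel masks."""
    gx_mask = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient
    gy_mask = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_mask[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_mask[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            # |gx| + |gy| approximates the gradient magnitude.
            if abs(gx) + abs(gy) >= t:
                edges[y][x] = 1
    return edges

# A vertical intensity step produces an edge along the boundary.
img = [[0, 0, 0, 200, 200]] * 5
edges = sobel_edges(img)
```

Only the edges falling inside the difference-image area would then be kept as the outline of the moving object.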
  • FIG. 15A shows an edge image of the present input frame and FIG. 15B shows the resultant image.
  • the present invention is characterized in that segmentation is smoothly performed with respect to a single object by minimizing the amount of change allowed after the background image is generated. Also, the effect of segmentation of an object is improved by using the edge information of the image.
  • When the camera operates in an automatic monitoring mode in the local area monitoring system, it monitors a large area while automatically rotating. When the camera moves in this way, it is difficult to find the movement of an object by the conventional movement detecting method. To solve this problem, a method of detecting movement by estimating the movement of the camera and correcting the image is needed.
  • templates are registered at particular positions and at a particular interval in the image and the positions of the templates are searched at the next frame.
  • the movement of the camera can be estimated by comparing the difference between the searched positions of the templates at the next frame and the positions of the templates at the previous frame.
  • FIG. 16 shows a template (up) registered at the previous frame and a template (down) found at the present frame.
  • a motion vector is obtained by using the other templates and the movement of the camera is corrected by using the obtained motion vector.
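The camera-motion estimation just described can be sketched as follows: small templates registered at fixed positions in the previous frame are re-located in the present frame, and their common displacement is taken as the camera's motion vector. This is an illustrative pure-Python sketch (sum-of-absolute-differences matching and averaging of the per-template shifts are assumed choices).

```python
def estimate_camera_motion(prev, curr, positions, size=2):
    """Estimate the camera's motion vector by re-locating small templates
    that were registered at fixed positions in the previous frame."""
    def sad(tpl, y, x):
        return sum(abs(curr[y + j][x + i] - tpl[j][i])
                   for j in range(size) for i in range(size))
    shifts = []
    for ty, tx in positions:
        tpl = [row[tx:tx + size] for row in prev[ty:ty + size]]
        best, best_pos = None, None
        for y in range(len(curr) - size + 1):
            for x in range(len(curr[0]) - size + 1):
                s = sad(tpl, y, x)
                if best is None or s < best:
                    best, best_pos = s, (y, x)
        shifts.append((best_pos[0] - ty, best_pos[1] - tx))
    # Average the per-template shifts into a single motion vector.
    n = len(shifts)
    return (sum(s[0] for s in shifts) / n, sum(s[1] for s in shifts) / n)

# The whole scene shifts one pixel to the right between frames (panning).
prev = [[ 1,  2,  3,  4, 0],
        [ 5,  6,  7,  8, 0],
        [ 9, 10, 11, 12, 0],
        [13, 14, 15, 16, 0]]
curr = [[0] + row[:4] for row in prev]   # shifted right by one pixel
mv = estimate_camera_motion(prev, curr, [(0, 0), (2, 1)])
```

Subtracting this vector from the image coordinates cancels the camera's movement, leaving only the independent motion of the objects.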
  • FIGS. 17A through 17C show a comparison between the result of the detection of movement with respect to an image from an actually moving camera and the result of estimation of movement after calculating and correcting the movement of the camera.
  • FIGS. 17A through 17C show a comparison between the results of the movement extraction before and after camera movement correction.
  • FIG. 17A shows input images.
  • FIG. 17B shows the result of the movement extraction before the camera movement correction.
  • FIG. 17C shows the result of the movement extraction after the camera movement correction.
  • the method of detecting movement using the difference from an input image is difficult to apply when there is a movement of a camera.
  • the motion vector of the camera and the motion vector of an object differ from each other, so that extraction of the movement of the object is still possible when there is a movement of the camera.
  • an input image is divided into small NxN blocks and a search is conducted to determine where each block moves in the next input image.
  • FIGS. 18A and 18B show the extraction of a motion vector by using the block matching.
  • FIG. 18A shows the previous frame and
  • FIG. 18B shows the present frame.
  • the block matching algorithm searches for the motion vector by examining every point in the search area around the position the block occupied in the previous frame.
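A minimal full-search block matching sketch is given below (pure Python, illustrative only). Two assumptions not in the patent: the match score is the sum of absolute differences, and the search is seeded with the zero displacement so a perfectly static block keeps the vector (0, 0) instead of drifting to some other identical-looking region.

```python
def block_motion(prev, curr, n=2):
    """Full-search block matching: for each n x n block of the previous
    frame, find the displacement to its best match in the present frame."""
    h, w = len(prev), len(prev[0])

    def sad(blk, y, x):
        return sum(abs(curr[y + j][x + i] - blk[j][i])
                   for j in range(n) for i in range(n))

    vectors = {}
    for by in range(0, h - n + 1, n):
        for bx in range(0, w - n + 1, n):
            blk = [row[bx:bx + n] for row in prev[by:by + n]]
            # Seed with the zero vector so a static block keeps (0, 0).
            best, best_v = sad(blk, by, bx), (0, 0)
            for y in range(h - n + 1):
                for x in range(w - n + 1):
                    s = sad(blk, y, x)
                    if s < best:
                        best, best_v = s, (y - by, x - bx)
            vectors[(by, bx)] = best_v
    return vectors

# A 2x2 bright patch moves from (0, 0) to (2, 2); its block reports a
# non-zero motion vector while an unchanged background block reports (0, 0).
prev = [[9, 9, 0, 0],
        [9, 9, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
curr = [[0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 9, 9],
        [0, 0, 9, 9]]
mv = block_motion(prev, curr)
```

The exhaustive inner search is what makes full-search block matching optimal but expensive, which motivates the faster search patterns described next.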
  • although an optimal motion vector can be found by the block matching algorithm, the algorithm requires too large an amount of calculation.
  • a three-step search algorithm is a simple and effective algorithm in which a motion vector is searched according to a predetermined search pattern so that the amount of calculation is reduced. However, when the first search is not correct, the algorithm may fall into a local optimum.
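The three-step search can be sketched as follows: nine candidate displacements around the current best position are scored, the center moves to the best one, the step size is halved, and the process repeats. A pure-Python sketch under the same assumed SAD score as above; step sizes 4, 2, 1 are the classic choice, not values from the patent.

```python
def three_step_search(prev, curr, by, bx, n=2, step=4):
    """Three-step search for the motion vector of one n x n block:
    evaluate 9 candidate displacements around the current centre, move
    there, halve the step, and repeat.  Far fewer score evaluations
    than a full search, at the risk of a local optimum."""
    h, w = len(curr), len(curr[0])
    blk = [row[bx:bx + n] for row in prev[by:by + n]]

    def sad(y, x):
        # Candidates falling outside the frame get an infinite score.
        if 0 <= y <= h - n and 0 <= x <= w - n:
            return sum(abs(curr[y + j][x + i] - blk[j][i])
                       for j in range(n) for i in range(n))
        return float("inf")

    cy, cx = by, bx
    while step >= 1:
        # Pick the best of the 9 positions spaced `step` pixels apart.
        _, cy, cx = min((sad(cy + dy * step, cx + dx * step),
                         cy + dy * step, cx + dx * step)
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        step //= 2
    return (cy - by, cx - bx)

# A 2x2 patch moves by (3, 2) between frames; the three-step search
# converges on that displacement in three rounds of 9 evaluations.
prev = [[0] * 8 for _ in range(8)]
prev[1][1], prev[1][2], prev[2][1], prev[2][2] = 9, 8, 7, 6
curr = [[0] * 8 for _ in range(8)]
curr[4][3], curr[4][4], curr[5][3], curr[5][4] = 9, 8, 7, 6
v = three_step_search(prev, curr, 1, 1)
```

With steps 4, 2, and 1 the search covers displacements up to ±7 pixels while scoring at most 27 candidates.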
  • An unrestricted center-biased diamond search algorithm increases the probability of an accurate search for a motion vector by using the center-biased property of the motion vector.
  • the diamond search algorithm is not appropriate for a fast moving image.
  • a prediction search algorithm uses information about a motion vector at the previous adjacent block. However, the performance of the algorithm is lowered when the correlation between motion vectors of the adjacent blocks is low.
  • An adaptive prediction directivity search algorithm is an algorithm which can reduce the amount of calculation in an overall area search algorithm and prevent a problem of inability to conduct a local search due to insufficient information in a high speed search algorithm.
  • This algorithm is a method of calculating a motion vector by using temporal correlation (i.e., that movements are consistent across consecutive frames) and spatial correlation between blocks within the present frame.
  • FIG. 19 shows the result of extraction of a movement area by using the block matching.
  • the method proposed here minimizes the change in the background image so that the background image is not disturbed when a rapid change is generated in the image.
  • the background image generating algorithm is described as follows.
  • FIG. 20A shows an input image and a background image at the initial frame and FIG. 20B shows an input image and a background image after a predetermined time.
  • a portion where a change occurs is indicated by using the difference from the background image with respect to the presently input image.
  • the area is slightly enlarged by applying a morphology method.
  • the difference image generated as above is as follows.
  • FIG. 20C shows a movement image extracted by using the difference between the input image and the background image. Only the outlines included in the movement area are extracted from the edge image generated as above and are indicated. The edge detection is performed by using a Sobel mask method.
  • FIG. 20D shows an edge image at the presently input image
  • FIG. 20E shows the result image. A disturbance map is often used as a method for creating a background image from input frames. However, since this method responds sensitively to the recent frames, it has the effect of dividing a single object.
  • the amount of change is restricted to be minimized after a background image is generated such that segmentation can be smoothly performed with respect to a single object. Also, the effect of the segmentation of an object is increased by using the edge information of an image.
  • 3-difference image method: a change between an image at the previous frame and an image at the next frame is detected from the present frame.
  • a moving object is detected from a comparison between the images while an object that does not move is not detected.
  • a moving object at the present frame is segmented by using the difference detected from the comparison between the previous frame and the present frame and the difference obtained from the comparison between the present frame and the next frame.
  • FIG. 21 shows difference images between the respective frames.
  • FIG. 22 shows an outline image of the moving object extracted by using the difference from the present image.
  • the segmentation method of a moving object overcomes the residual effect generated in the above-described adaptive background method and provides a simple algorithm that adapts to a fast moving object.
  • this method has a disadvantage in that a moving object at the present frame is segmented excessively.
  • a correlation method for each image line is used as an algorithm for estimating the movement of the camera.
  • the previous frame and the present frame are compared line by line with respect to an input image to detect a movement for each line, and the overall movement is corrected based on the detected movement for each line. That is, the image is moved to the left and right for each line and the shift where the correlation is greatest is set to be the range of movement of the line. The most frequently occurring value among the ranges of movement generated for each line is set to be the entire movement range. This method is applied equally in the horizontal direction and the vertical direction for the detection.
  • FIG. 23 is a table showing the movement of a camera frame by frame.
  • FIG. 24 shows an image input to the camera.
  • FIG. 25 shows an independently moving object after the movement of a camera is corrected.
  • the present invention is a real time moving object detecting and tracking system using an input image to a camera, which may be applied to various fields.
  • the configuration of a 24-hour unmanned monitoring system is possible with respect to various military facilities, e.g., monitoring of a missile station, and monitoring of an ammunition depot or an armory.
  • the configuration of an unmanned system is possible with respect to various public facilities or private facilities, e.g., monitoring of a water intake structure and a water purification structure, monitoring of a harbor, and monitoring of the flow of traffic.
  • the system provides a moving object identification function, automatic tracking and zoom in/out, setting of a monitoring area and a monitoring method, speed calculation and analysis, a recording function which works only when multiple object tracking is needed, a superior compression rate, tracking in bad weather or in the dark, coverage of various objects to be monitored such as indoor, outdoor and dangerous objects, automation of monitoring (escape from manned monitoring), the establishment of a combined image monitoring system (automatic detection, tracking, and alarming of an intruder and analysis of an image), reduction of the load on a monitor, reduction of cost, saving of lives as a partner in the conservation of poor environments, and automatic detection of multiple moving objects while the monitoring image is moved.
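The exhaustive block matching search described in the items above can be illustrated with a small sketch. The following is a minimal pure-Python version, not the implementation of the invention: the function names, the use of the sum of absolute differences (SAD) as the matching cost, and the search radius are illustrative assumptions.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def get_block(image, top, left, n):
    """Extract an NxN block; the caller keeps it inside the image."""
    return [row[left:left + n] for row in image[top:top + n]]

def full_search_motion_vector(prev, curr, top, left, n, search):
    """Find where the NxN block at (top, left) in `prev` moved in `curr`.

    Every displacement within +/- `search` pixels is tried, and the one
    with the lowest SAD is returned as (dy, dx) -- the exhaustive block
    matching described above.
    """
    ref = get_block(prev, top, left, n)
    h, w = len(curr), len(curr[0])
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ty, tx = top + dy, left + dx
            if not (0 <= ty <= h - n and 0 <= tx <= w - n):
                continue  # candidate block would fall outside the frame
            cost = sad(ref, get_block(curr, ty, tx, n))
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best
```

A three-step or diamond search would replace the two nested displacement loops with a coarse-to-fine search pattern to reduce the amount of calculation, at the risk of local optima noted above.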
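The line-by-line correlation method for estimating camera movement can likewise be sketched. This is an illustrative approximation, assuming a mean absolute difference score in place of the unspecified correlation measure; applying the same routine to columns would give the vertical movement.

```python
from collections import Counter

def line_shift(prev_line, curr_line, max_shift):
    """Shift s minimizing the mean absolute difference between
    prev_line[i] and curr_line[i + s] over the overlapping pixels."""
    best, best_cost = 0, float("inf")
    n = len(prev_line)
    for s in range(-max_shift, max_shift + 1):
        overlap = [(prev_line[i], curr_line[i + s])
                   for i in range(n) if 0 <= i + s < n]
        cost = sum(abs(a - b) for a, b in overlap) / len(overlap)
        if cost < best_cost:
            best_cost, best = cost, s
    return best

def camera_shift(prev, curr, max_shift=4):
    """Per-line shifts are computed and the most frequent one is taken
    as the horizontal camera movement, as described in the text."""
    shifts = [line_shift(p, c, max_shift) for p, c in zip(prev, curr)]
    return Counter(shifts).most_common(1)[0][0]
```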

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

A system for tracking and monitoring multiple moving objects includes a broad area monitoring apparatus (10) for registering and tracking moving objects within a monitoring area at a fixed position, and a local monitoring apparatus (20) including at least one local area monitoring camera (110) for tracking and monitoring a moving object to be tracked among the registered objects by the broad area monitoring apparatus until the moving object is out of a monitoring range. In the system, the broad area monitoring apparatus catches a moving object, recognizes the position of the moving object, and gives an order of tracking to the local area monitoring apparatus so that the local area monitoring apparatus tracks the moving object until the moving object is out of the monitoring area. Thus, multiple moving objects can be tracked and monitored in a fast and accurate manner.

Description

SYSTEM FOR TRACKING AND MONITORING MULTIPLE MOVING
OBJECTS
Technical Field
The present invention relates to an object tracking and monitoring system, and more particularly, to a system for tracking and monitoring multiple moving objects using a broad area monitoring camera and a local area monitoring camera.
Background Art
In the conventional object tracking system, there is a problem in that multiple moving objects such as persons, animals, vehicles, or airplanes cannot be identified and that additional equipment and cost are required to change the area to be monitored. Also, since a sensor by itself has a single function (line monitor or screen monitor), the cost for establishing a system is high, and the system is complicated.
Disclosure of the Invention
To solve the above problems, it is an objective of the present invention to provide a system for tracking and monitoring multiple moving objects to establish a real-time monitoring system.
To accomplish the above object of the present invention, there is provided a system for tracking and monitoring multiple moving objects comprising a broad area monitoring apparatus for registering and tracking moving objects within a monitoring area at a fixed position, and a local monitoring apparatus including at least one local area monitoring camera for tracking and monitoring a moving object to be tracked among the registered objects by the broad area monitoring apparatus until the moving object is out of a monitoring range, wherein the broad area monitoring apparatus catches a moving object, recognizes the position of the moving object, and gives an order of tracking to the local area monitoring apparatus so that the local area monitoring apparatus tracks the moving object until the moving object is out of the monitoring area.
It is preferred in the present invention that the broad area monitoring apparatus comprises a camera, an image input unit for receiving an image from the camera, an image correction unit for correcting the input image, a movement detection unit for detecting a movement from the corrected image, a moving object extraction unit for extracting a moving object, and a moving object tracking unit for tracking the position and the speed of the extracted moving object. It is preferred in the present invention that the local area monitoring apparatus comprises at least one camera, an image input unit for receiving an image from the camera, an image correction unit for correcting the input image, a movement detection unit for detecting a movement from the corrected image, a moving object extraction unit for extracting a moving object, a moving object tracking unit for tracking the position and the speed of the extracted moving object, and a camera movement correction unit for correcting the movement of the camera according to the movement of the moving object.
It is preferred in the present invention that the image correction unit uses a noise removing filter to remove a noise component of an input image.
It is preferred in the present invention that the noise removing filter is one of a median filter, a Gaussian filter, a leveling filter, and a mean filter.
It is preferred in the present invention that the mean filter calculates the mean by using a previously calculated mean value while moving a block.
It is preferred in the present invention that the moving object extraction unit extracts one object by segmentation and merging using movement area information.
It is preferred in the present invention that the moving object tracking unit estimates and tracks a next position of the moving object by applying information on the speed and position of the moving object to an estimation algorithm using a Kalman filter.
It is preferred in the present invention that the moving object tracking unit tracks the moving object by using template matching when an object being tracked stops so that there is no movement or when many objects move across so that accurate tracking is not possible.
It is preferred in the present invention that the camera movement correction unit extracts edges of continuously input image data in a panning mode of the camera, projects the extracted edge in X and Y axes, and corrects the movement of the camera by comparing the projected values.
Brief Description of the Drawings
FIG. 1 is a diagram showing the structure of a system for tracking and monitoring multiple moving objects according to a preferred embodiment of the present invention; FIG. 2 is a diagram showing the structure of a broad area monitoring camera in the system for tracking and monitoring multiple moving objects according to a preferred embodiment of the present invention;
FIG. 3 is a diagram showing the structure of a local area monitoring camera in the system for tracking and monitoring multiple moving objects according to a preferred embodiment of the present invention;
FIG. 4 shows movement area extraction using an input image (left column) and movement area extraction using a filtered image (right column);
FIG. 5 shows input images, disturbance maps, and the results of extraction of a moving area;
FIG. 6 shows the results of a comparison between images before the binary filtering (left column) and images after the binary filtering (right column);
FIG. 7 shows a comparison of images before (left column) and after (right column) objects merge;
FIG. 8 is a flowchart for explaining a Kalman filter; FIG. 9 shows the results of projection in vertical and horizontal directions with respect to a two-dimensional image;
FIG. 10 shows the vector representation of a projection signal and the vector sum thereof; FIG. 11 shows the tracking of an object by template matching;
FIG. 12 shows an input image and a background image at the initial frame;
FIG. 13 shows the input image and the background image after a predetermined time has passed; FIG. 14 shows a movement extraction image by using the difference between the input image and the background image;
FIG. 15A shows an edge image at a presently input frame;
FIG. 15B shows the resultant image;
FIG. 16 shows a template (top) registered at the previous frame and a template (bottom) found at the present frame;
FIGS. 17A through 17C show a comparison between the results of the movement extraction before and after camera movement correction, in which FIG. 17A shows input images, FIG. 17B shows the result of the movement extraction before the camera movement correction, and FIG. 17C shows the result of the movement extraction after the camera movement correction;
FIGS. 18A and 18B show the extraction of a motion vector by using the block matching, in which FIG. 18A shows the previous frame and FIG. 18B shows the present frame; FIG. 19 shows the result of extraction of a movement area by using the block matching;
FIGS. 20A through 20E show the movement extraction process by using an input image and the background image, in which FIG. 20A shows the input image and the background image at the initial frame, FIG. 20B shows the input image and the background image after a predetermined time, FIG. 20C shows a movement image extracted by using the difference between the input image and the background image, FIG. 20D shows an edge image at the presently input image, and FIG. 20E shows the result image;
FIG. 21 shows difference images between the respective frames; FIG. 22 shows an outline image of the moving object extracted by using the difference from the present image;
FIG. 23 is a table showing the movement of a camera frame by frame;
FIG. 24 shows an image input to the camera; and FIG. 25 shows an object independently moving after the movement of the camera is corrected.
Best mode for carrying out the Invention
FIG. 1 shows the structure of a system for tracking and monitoring multiple moving objects according to the present invention which includes a broad area monitoring apparatus 10 and a local area monitoring apparatus 20.
The broad area monitoring apparatus 10, which monitors a broad area by using a fixed camera, extracts movement by analyzing an input image and registers a moving object by using the extracted movement information to track the movement of the object. The broad area monitoring apparatus 10 registers and tracks every moving object within a monitoring range.
The broad area monitoring apparatus 10 includes a camera 110, an image input unit 120, an image correction unit 130, a movement detection unit 140, a moving object extraction unit 150, and a moving object tracking unit 160.
The camera 110 collects image data of a monitored area by using a fixed camera. The image input unit 120 receives the image collected by the camera 110. The image correction unit 130 corrects the image received from the image input unit 120 by means of filtering. Here, the image correction unit 130 is a means for correcting the image by removing a noise component from the input image, enabling accurate detection of a movement and extraction of a moving object. The image correction unit 130 uses a filter in removing noise from the input image.
The movement detection unit 140 detects a movement from the image corrected by the image correction unit 130. The movement detection unit 140 detects a portion where movement occurs by using a disturbance map.
The moving object extraction unit 150 is a means for separating an actually moving object from the movement area extracted by using the disturbance map. The moving object extraction unit 150 extracts an actual object by means of segmentation and merging using movement area information.
The moving object tracking unit 160 is a means for tracking the position and speed of the extracted moving object. For fast and accurate tracking of a moving object, the moving object tracking unit 160 tracks the movement of the moving object by accurately anticipating the next position of the moving object by applying information on the position and speed of the moving object to an estimation algorithm using a Kalman filter.
Also, the moving object tracking unit 160 enables continuous tracking of a moving object without omission by using template matching when an object being tracked stops so that there is no movement or when many objects move across so that accurate tracking is not possible.
The local monitoring apparatus 20 monitors a moving object by moving a pan/tilt camera (210 of FIG. 3) a short distance. In an automatic visible mode, the camera moves to the left and right while monitoring the moving object. When movement is detected, the camera zooms in to detect the moving object and moves following the path of the movement of the object.
Also, since the movement of the camera occurs frequently in the local monitoring apparatus 20, the steps of correcting the movement of the camera and performing tracking while moving the camera according to the object when a moving object is registered, are needed in addition to the steps performed in the broad area monitoring apparatus 10.
The local area monitoring apparatus 20 includes a camera 210, an image input unit 220, an image correction unit 230, a movement detection unit 240, a moving object extraction unit 250, a moving object tracking unit 260, and a camera movement correction unit 270.
The camera 210 collects image data of a monitored area by using a pan/tilt camera covering a short distance. The image input unit 220 receives the image collected by the camera 210. The image correction unit 230 corrects the image received from the image input unit 220 by means of filtering. Here, the image correction unit 230 is a means for correcting the image by removing a noise component from the input image, enabling accurate detection of a movement and extraction of a moving object. The image correction unit 230 uses a filter in removing noise from the input image.
The movement detection unit 240 detects a movement from the image corrected by the image correction unit 230. The movement detection unit 240 detects a portion where a movement occurs by using a disturbance map. The moving object extraction unit 250 is a means for separating an actually moving object from the movement area extracted by using the disturbance map. The moving object extraction unit 250 extracts an actual object by means of segmentation and merging using movement area information. The moving object tracking unit 260 is a means for tracking the position and speed of the extracted moving object. When movement is detected in the local area monitoring apparatus 20, the camera 210 is locked onto the moving object and moves according to the movement of the moving object while tracking it. As methods for quickly and accurately tracking a moving object, methods using color information, a matching technique, or an optical flow are used. In local area monitoring, the camera 210 is operated in an auto-panning mode. When a movement is detected in this state, since the overall movement caused by the movement of the camera 210 is detected, extraction of the actual movement of the moving object is difficult. To solve this problem, the camera movement correction unit 270 detects and corrects the movement of the camera 210 so that only the actual movements of the moving objects are detected.
The algorithm used for a system for tracking and monitoring multiple moving objects according to the present invention is described as follows.
1. Noise removing algorithm by means of filtering
Noise introduced when an image is captured using a CCD camera, or a change due to weather, may greatly increase the likelihood that an error will occur in the movement detecting step. Thus, image filtering is needed to effectively remove the noise component.
As a filter for removing the noise component from an image, various filters such as a median filter, a Gaussian smoothing filter, a leveling filter or a mean filter are used. To establish a real-time system, a mean filter is used to remove the noise component from an image input by the camera.
A normal mean filter is realized by applying a 3X3 mask to an image.
However, for faster realization, a method of quickly calculating the mean by reusing the previously calculated mean value while moving the block, without performing a full mask calculation, is used. FIG. 4 shows movement area extraction using an input image (left column) and movement area extraction using a filtered image (right column). Referring to FIG. 4, it can be seen that a movement area caused by noise is extracted in the case of the unfiltered image.
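The fast mean described above — reusing the previously calculated sum while the block moves — can be sketched for a single image row. This is an illustrative pure-Python fragment; the window size and the function name are assumptions.

```python
def running_mean_row(row, k=3):
    """Sliding-window mean over one image row.

    Rather than resumming the k values at every position, the window
    sum is updated incrementally (add the entering pixel, drop the
    leaving one) -- the reuse-the-previous-mean idea in the text.
    """
    if len(row) < k:
        return []
    window = sum(row[:k])
    means = [window / k]
    for i in range(k, len(row)):
        window += row[i] - row[i - k]  # O(1) update per position
        means.append(window / k)
    return means
```

The same incremental update, applied first along rows and then along columns, yields a full 3x3 box mean with two additions and two subtractions per pixel.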
2. Movement extraction algorithm
To track a moving object in an input image, detecting the movement of the moving object and identifying its position must be accomplished first. As methods of detecting a movement area, there are a method which uses the difference between two images, a method which uses optical flow, and a method which uses a motion vector.
Of the methods of detecting movement, the method using the difference has the advantage of a simple and fast calculation. However, the method is very sensitive to a change in illumination and noise.
Meanwhile, the methods using optical flow or motion vector have disadvantages in that they require a large amount of calculation.
In the present invention, a disturbance map is used to solve the above problems. The disturbance map method generates a disturbance map from the present image and a background image (an average image of the previously input images) and detects the movement area of a moving object by using the map. The disturbance map enables simple and fast calculation without being affected by noise or a change in illumination.
The disturbance map is a method which uses the concept of a temporal average. The disturbance map is obtained by generating a background image by applying a historical weight to the average up to the previous frame and obtaining the difference between the present frame and the generated background.
[Equation 2-1]
Dt = It - At-1
At = (1 - w)It + wAt-1
w = history factor, 0 < w < 1
It: current frame, At: temporal average
The disturbance map has a predetermined value in an area where movement exists and a value close to "0" in the background where no movement exists. The disturbance map using the above values uses a thresholding method to extract an area where an actual movement exists. By taking the absolute value for the entire disturbance map, when the value is not less than a predetermined threshold, a portion is classified as a movement area, and when the value is less than a predetermined threshold, the portion is classified as a background area.
[Equation 2-2]
abs(D(x,y)t) >= Threshold: Movement Area
abs(D(x,y)t) < Threshold: Background Area
FIG. 5 shows input images, disturbance maps, and the results of extraction of a moving area according to a lapse of time.
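Equations 2-1 and 2-2 can be written out per pixel as follows. This is a hedged sketch, not the patent's implementation: images are plain nested lists, and the history factor and threshold values are illustrative.

```python
def update_average(avg, frame, w=0.9):
    """At = (1 - w) * It + w * At-1, applied per pixel (Equation 2-1)."""
    return [[(1 - w) * i + w * a for i, a in zip(ri, ra)]
            for ri, ra in zip(frame, avg)]

def disturbance_map(frame, avg):
    """Dt = It - At-1 per pixel; the sign is kept, abs is taken later."""
    return [[i - a for i, a in zip(ri, ra)]
            for ri, ra in zip(frame, avg)]

def movement_mask(dmap, threshold):
    """Equation 2-2: pixels whose absolute disturbance reaches the
    threshold are classified as movement area."""
    return [[abs(v) >= threshold for v in row] for row in dmap]
```

Per frame, the mask is computed against the running average, and the average is then updated with the new frame, so a pixel with steady brightness drifts toward disturbance zero.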
3. Moving object extraction algorithm
The movement area extracted by the disturbance map is divided into each area by a labeling process. Here, an area of which the size is less than a reference value is considered to be noise and thus removed. The center of weight of each of the labeled objects is obtained. The obtained center of weight is set to be the position of a target.
However, when the disturbance map is applied to an actual image, the case in which many movement areas are extracted with respect to a single object often occurs. This is because an object is formed of many portions having different brightnesses so that each portion of the object is recognized as an independent object.
To incorporate the separately recognized objects into a single area, a binary filtering of a movement area is required. The binary filtering also provides a noise removing effect. For a divided object, the binary filtering provides an effect of incorporation of objects which are separated by one or two pixels. FIG. 6 shows the results of a comparison between images before the binary filtering (left column) and images after the binary filtering (right column). Although noise is removed and the objects are incorporated by the binary filtering, the incorporation is not perfect. Thus, a process of incorporating objects assumed to be a single object is performed by using information on the position of objects generated after labeling and information on the speed and direction used for tracking.
Presently, the objects are incorporated using information on the distance between two objects only. FIG. 7 shows a comparison of images before (left column) and after (right column) objects merge.
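The labeling, centre-of-weight, and distance-only merging steps described above might be sketched as follows; the 4-connectivity, the greedy merging loop, and all names are illustrative assumptions.

```python
def label_regions(mask):
    """4-connected component labeling of a binary mask; returns a list
    of regions, each a list of (y, x) pixel coordinates."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                regions.append(pixels)
    return regions

def centroid(pixels):
    """Centre of weight of a region -- the target position in the text."""
    n = len(pixels)
    return (sum(p[0] for p in pixels) / n, sum(p[1] for p in pixels) / n)

def merge_close(regions, max_dist):
    """Greedily merge regions whose centroids lie within max_dist,
    mirroring the distance-only merging step described above."""
    regions = [list(r) for r in regions]
    merged = True
    while merged:
        merged = False
        for i in range(len(regions)):
            for j in range(i + 1, len(regions)):
                (y1, x1), (y2, x2) = centroid(regions[i]), centroid(regions[j])
                if ((y1 - y2) ** 2 + (x1 - x2) ** 2) ** 0.5 <= max_dist:
                    regions[i] += regions.pop(j)
                    merged = True
                    break
            if merged:
                break
    return regions
```

A region whose pixel count falls below a reference value would be discarded as noise before merging, as the extraction step above prescribes.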
4. Position and speed tracking algorithm using Kalman filter
The moving object extracted in the previous step has position information at the present frame only. To track the moving object, a method is used which anticipates the next position by seeking the correlation between the objects extracted at the respective frames based on their position information, classifying them into a single moving object, and recognizing the action of the moving object up to the present. By recognizing the dynamic feature of the moving object and anticipating its next position, fast and accurate tracking of the moving object is possible.
A Kalman filter is used mainly for tracking a moving object. In the present invention, the Kalman filter is used to track a target from the measured image information. The Kalman filter has a linear filter structure. It is well known that, when the given conditions are met in a linear system, the filter is optimal. In the discrete linear system given in Equation 4-1, xk is the state vector at time k, and A is a constant matrix describing the feature of the system. The measurement equation, whose output is obtained by applying the sensor characteristic matrix H to the state vector, is expressed by Equation 4-2.
[Equation 4-1] xk+1 = Axk + wk
[Equation 4-2] yk=Hxk+vk
Here, wk signifies the process noise added to the system, and vk in the measurement equation signifies the noise generated when a signal is measured. It is assumed that the noises wk and vk have no correlation with each other, that their covariances are respectively Qk and Rk, and that each is white noise with zero mean. When yk is input at time k, the information of the measured value needs to be reinforced. Here, given that the structure of the filter is linear, the estimated value x̂k obtained by reinforcing the information included in a new measured value can be expressed as a linear structure as in Equation 4-3. The design of the filter then becomes a problem of determining the gain of the filter, Kk, in Equation 4-3.
[Equation 4-3]
x̂k = x̂k^- + Kk(yk - Hx̂k^-)
(x̂k^-: the estimate propagated from the previous step, before the measured value yk is reinforced)
When the performance index of the filter is set such that the square mean of an estimated error can be minimized, it is necessary to minimize the sum of diagonal elements in a covariance matrix of the estimated error given as in Equation 4-4.
[Equation 4-4]
Pk = E{(xk - x̂k)(xk - x̂k)^T} = (I - KkH)Pk^-(I - KkH)^T + KkRKk^T
Pk^- in Equation 4-4 is the covariance matrix of the estimation error of x̂k^- before the information of the measured value is reinforced, and can be expressed as in Equation 4-5.
[Equation 4-5]
Pk^- = E{(xk - x̂k^-)(xk - x̂k^-)^T}
A cost function which is the sum of the diagonal elements of the covariance matrix expressed by Equation 4-4 is a quadratic equation with respect to the gain Kk of the filter. Since the coefficient of the quadratic term is a positive covariance of the state variable and the measurement noise, the cost function always has a minimum value. Thus, the value of Kk making the linear differential equation with respect to Kk equal to "0" is expressed by Equation 4-6. When this value is substituted in Equation 4-4, the covariance matrix of the estimated error after the estimated value reinforcement step can be expressed by Equation 4-7.
[Equation 4-6] Kk = Pk^-H^T(HPk^-H^T + R)^-1
[Equation 4-7]
Pk = (I - KkH)Pk^-
The estimated value of the state variable given by Equation 4-3 can be continuously propagated, according to the dynamic property of the system modeled by the filter, until the next measured value is input. Equation 4-8 shows the propagation equation of the state variable driven by the dynamic property of the system until a new measured value is input. Equation 4-9 shows the covariance matrix of the propagated estimate x̂k+1^- of the state variable.
[Equation 4-8] x̂k+1^- = Ax̂k
[Equation 4-9] Pk+1^- = APkA^T + Q
The estimate obtained by using Equation 4-8 and Equation 4-9 and its covariance matrix determine the gain of the filter for the next step given by Equation 4-6. When the next measured value yk+1 is input, the estimated value x̂k+1 of the state variable is obtained as in Equation 4-3 and the covariance matrix Pk+1 is obtained as in Equation 4-7. The Kalman filter, which is an optimal filter, can be obtained by repeating the above procedure whenever a measured value is present. The Kalman filter can be summarized by the flowchart shown in FIG. 8.
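By way of illustration, the predict/update cycle of Equations 4-3 through 4-9 for a single image coordinate, with a constant-velocity state [position, velocity], can be sketched as below. The matrix helpers, the noise values q and r, and the measurement model H = [1 0] are assumptions for the sketch, not values from the disclosure.

```python
def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def mat_add(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def transpose(a):
    return [list(col) for col in zip(*a)]

class Kalman1D:
    """Constant-velocity Kalman filter for one coordinate.

    State x = [position, velocity]^T; A advances the state by one frame
    (Eq. 4-8/4-9), H = [1 0] measures position only, and the gain and
    update follow Eq. 4-6, 4-3 and 4-7.
    """
    def __init__(self, q=1e-4, r=1.0):
        self.A = [[1.0, 1.0], [0.0, 1.0]]
        self.H = [[1.0, 0.0]]
        self.Q = [[q, 0.0], [0.0, q]]
        self.R = [[r]]
        self.x = [[0.0], [0.0]]                   # estimate
        self.P = [[1.0, 0.0], [0.0, 1.0]]         # error covariance

    def predict(self):
        self.x = mat_mul(self.A, self.x)                      # Eq. 4-8
        self.P = mat_add(mat_mul(mat_mul(self.A, self.P),
                                 transpose(self.A)), self.Q)  # Eq. 4-9
        return self.x[0][0]                       # anticipated position

    def update(self, z):
        # Eq. 4-6: K = P^- H^T (H P^- H^T + R)^-1 (scalar measurement)
        s = mat_mul(mat_mul(self.H, self.P), transpose(self.H))[0][0] \
            + self.R[0][0]
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s
        innov = z - self.x[0][0]                  # yk - H x^- (Eq. 4-3)
        self.x = [[self.x[0][0] + k0 * innov],
                  [self.x[1][0] + k1 * innov]]
        # Eq. 4-7: P = (I - K H) P^-, written out for H = [1 0]
        self.P = [[(1 - k0) * self.P[0][0], (1 - k0) * self.P[0][1]],
                  [self.P[1][0] - k1 * self.P[0][0],
                   self.P[1][1] - k1 * self.P[0][1]]]
        return self.x[0][0]
```

Feeding the filter the measured positions of a target moving at a constant speed, the predicted next position converges to the true trajectory within a few frames.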
5. Template matching algorithm
In general, when a moving object being tracked by using detection of movement stops, no detection of movement occurs and thus the object being tracked is often missing. To solve this problem, a template matching method for directly searching for an object by using an input image is used.
Since a typical template matching method requires a large amount of calculation, it is difficult to apply to a real-time tracking system. To solve this problem, in the present invention, a high-speed template matching algorithm using the sum of projection vectors is used. The projection method improves processing speed by comparing the degree of similarity of one-dimensional signals obtained by projecting the two-dimensional image, instead of the conventional method of comparing the degree of similarity of the two-dimensional image signals. FIG. 9 shows the result of projection in the vertical and horizontal directions with respect to a two-dimensional image.
Although speed is greatly improved by the projection method, when the number of stopped objects increases, the total calculation time increases. A vector sum method is introduced to solve this problem. The vector sum is a method in which the projected one-dimensional signal is expanded in the form of vectors and the sum of the vectors is obtained and expressed as a single vector, which provides a great improvement in speed.
FIG. 10 shows the vector expression of a projection signal and the vector sum thereof. FIG. 11 shows tracking of an object by using a template matching method. In FIG. 11, the result of registering a moving car as a template and searching for the position of the car by matching at the next frame is shown. By using the matching method, the position of an object present in an image can be found regardless of whether the object is stopped or moving. Such a result may be used for tracking a moving object detected in a local monitoring system.
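The projection-based matching can be sketched as follows: each candidate window is compared to the template through its row and column sums (one-dimensional signals) rather than pixel by pixel. The absolute-difference score and the exhaustive window scan are illustrative simplifications.

```python
def project(block):
    """Row and column sums of a 2-D block -- the projection signals."""
    rows = [sum(r) for r in block]
    cols = [sum(c) for c in zip(*block)]
    return rows, cols

def projection_distance(a, b):
    """Dissimilarity of two blocks via their 1-D projections only."""
    ra, ca = project(a)
    rb, cb = project(b)
    return (sum(abs(x - y) for x, y in zip(ra, rb)) +
            sum(abs(x - y) for x, y in zip(ca, cb)))

def match_template(image, template):
    """Position (top, left) in `image` whose window projections are
    closest to the template's -- 1-D comparisons in place of full 2-D
    matching, as the projection method above describes."""
    th, tw = len(template), len(template[0])
    h, w = len(image), len(image[0])
    best, best_cost = (0, 0), float("inf")
    for top in range(h - th + 1):
        for left in range(w - tw + 1):
            window = [row[left:left + tw] for row in image[top:top + th]]
            cost = projection_distance(window, template)
            if cost < best_cost:
                best_cost, best = cost, (top, left)
    return best
```

The vector sum method described in the text would further collapse the two projection signals into a single vector per stopped object before comparison, reducing the per-window cost again.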
6. Motion segmentation algorithm
The conventional movement extracting method using a disturbance map has a disadvantage in that one moving object is divided into many moving objects. Thus, an additional step of incorporating the divided objects into one is needed. The present method is a new movement extracting method which solves the above problem by accurately extracting a moving object in an image obtained by a fixed camera and segmenting the extracted moving object. The method includes the following steps.
A. Step of generating background image: to generate a background image from an input image
B. Step of generating difference image: to extract a portion where there is a change by using the difference between an input image and the background image
C. Step of extracting image edge: to detect an outline of an image which moves.
A background image is generated from the input images, and the portion where there is movement is segmented by using the difference between the present image and the background image. The outline of the segmented area is then detected and indicated. Since the background changes less in this method than in the conventional disturbance-map method, the phenomenon in which a single moving object larger than a predetermined size is segmented into two or more pieces is overcome. The system copes well with gradual changes in illumination, so it can be used to monitor traffic or a particular space.
Generating a proper background image for searching for a moving object is important in this system. The method proposed in the present application minimizes changes in the background image, so that the background image is not disturbed when a rapid change occurs in the image.
The background image generation algorithm is described as follows.
A. The first input image is used as the background image.
B. Since a moving object may be present in the first image, changes over a predetermined amount are incorporated into the background image during the initial frames.
C. After a predetermined time has passed, changes to the background are minimized so that the effect of the presently input image is reduced. FIG. 12 shows the input image and the background image at the initial frame. FIG. 13 shows the input image and the background image after a predetermined time has passed.
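The three steps above amount to an exponentially weighted background update whose learning rate drops after the initial frames. A minimal sketch, assuming concrete values for the initial frame count and the two learning rates, which the text leaves unspecified:

```python
import numpy as np

# Hypothetical constants: a large learning rate during the initial frames,
# then a minimal one so rapid scene changes barely disturb the background.
INIT_FRAMES = 10
ALPHA_INIT = 0.5
ALPHA_STEADY = 0.01

def update_background(background, frame, frame_index):
    """Steps A-C: seed with the first frame, adapt quickly at first,
    then nearly freeze the background after the initial period."""
    if frame_index == 0:
        return frame.astype(float)  # A: the first frame is the background
    # B: large changes are absorbed during the initial frames;
    # C: afterwards the update weight is minimized.
    alpha = ALPHA_INIT if frame_index < INIT_FRAMES else ALPHA_STEADY
    return (1.0 - alpha) * background + alpha * frame
```

With `ALPHA_STEADY` near zero, a fast-moving object contributes almost nothing to the background, which is the behavior the text relies on to avoid ghosting.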
A portion where a change occurs is displayed by using the difference between the presently input image and the background image. For the case in which the difference in brightness between the background image and the moving object is small, the changed portion is expanded by applying a morphology method. The difference image created as above is shown in FIG. 14.
FIG. 14 shows a movement extraction image obtained by using the difference between an input image and a background image. An edge image is generated by performing edge detection on the present input image, and only the outlines included in the area image are extracted from it and displayed. The edge detection uses a Sobel mask method.
FIG. 15A shows an edge image of the present input image and FIG. 15B shows the resultant image. A disturbance map is often used as a method of creating a background image from the input frames. However, since this method responds sensitively to the recent frames, a single object tends to be segmented into pieces. To compensate for this, the present invention is characterized in that the amount of change is restricted to a minimum after the background image is generated, so that segmentation is smoothly performed with respect to a single object. Also, the segmentation of an object is improved by using the edge information of the image.
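The difference-plus-edge pipeline of this section can be sketched as follows. The naive square dilation stands in for the morphology step, and the threshold value is an assumption; the Sobel masks themselves are the standard 3×3 kernels.

```python
import numpy as np

def difference_mask(frame, background, thresh=20, dilate=1):
    """Threshold |frame - background|, then dilate so low-contrast
    moving objects are not lost (a stand-in for the morphology step)."""
    mask = np.abs(frame.astype(int) - background.astype(int)) > thresh
    out = mask.copy()
    # Naive binary dilation with a (2*dilate+1)^2 square structuring element.
    for y, x in zip(*np.nonzero(mask)):
        out[max(0, y - dilate):y + dilate + 1,
            max(0, x - dilate):x + dilate + 1] = True
    return out

def sobel_magnitude(img):
    """Gradient magnitude with the 3x3 Sobel masks (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
    ky = kx.T
    img = img.astype(float)
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            win = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)
```

Only the edge pixels that fall inside the dilated difference mask would then be kept, which is the "outline included in the area image" step of the text.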
7. Camera movement correcting algorithm
When the camera operates in an automatic monitoring mode in the local area monitoring system, it monitors a large area while automatically rotating. When the camera moves in this way, it is difficult to find the movement of an object by the conventional movement detecting method. To solve this problem, a method of detecting movement by estimating the movement of the camera and correcting the image is needed.
To correct the movement of the camera by using the input image, templates are registered at particular positions at a particular interval in the image and the positions of the templates are searched for at the next frame. The movement of the camera can then be estimated by comparing the positions of the templates found at the next frame with the positions of the templates at the previous frame.
Here, to avoid incorrect estimation of the camera movement due to matching errors, a template whose position shows a large error is removed from the estimation by using the positional relations among the templates at the previous frame and among the templates found at the present frame. Thus, a more accurate estimation of the camera movement is possible. FIG. 16 shows the templates registered at the previous frame (top) and the templates found at the present frame (bottom). A template (gray) showing a large error between its position found at the present frame and its position set at the previous frame, as shown in FIG. 16, is removed from the estimation of the movement. A motion vector is obtained by using the remaining templates and the movement of the camera is corrected by using the obtained motion vector.
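The outlier-rejection step above can be illustrated with a small sketch. The median-based rejection rule and the tolerance value are assumptions standing in for the positional-relation test the text describes.

```python
import numpy as np

def estimate_camera_motion(prev_pts, cur_pts, tol=2.0):
    """Estimate global camera motion as the mean displacement of the
    tracked templates, after discarding templates whose displacement
    deviates from the median by more than `tol` pixels (the mismatch
    rejection step of the text, under an assumed median rule)."""
    prev_pts = np.asarray(prev_pts, float)
    cur_pts = np.asarray(cur_pts, float)
    disp = cur_pts - prev_pts                 # per-template displacement
    med = np.median(disp, axis=0)             # robust central displacement
    keep = np.abs(disp - med).max(axis=1) <= tol
    return disp[keep].mean(axis=0)            # motion vector of the camera
```

The returned vector would then be subtracted from the frame coordinates before the usual movement detection is applied.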
FIG. 13 shows a comparison between the result of detecting movement in an image from an actual moving camera and the result of detection after the movement of the camera has been estimated and corrected. When the present algorithm is applied, the movement of the camera is largely removed, so that the movements of actual moving objects can be detected.
FIGS. 17A through 17C show a comparison between the results of the movement extraction before and after camera movement correction. FIG. 17A shows input images. FIG. 17B shows the result of the movement extraction before the camera movement correction. FIG. 17C shows the result of the movement extraction after the camera movement correction.
8. Movement detection algorithm using block matching
In general, the method of detecting movement using the difference between input images is difficult to apply when the camera moves. In this case, when a block matching method is used, in which the image is divided into small blocks and a motion vector is extracted for each block, the motion vector of the camera and the motion vector of an object appear different from each other, so that the movement of the object can be extracted even when the camera moves.
In the block matching algorithm, an input image is divided into small N×N blocks and a search is conducted to determine where each block moves in the next input image.
FIGS. 18A and 18B show the extraction of a motion vector by using the block matching. FIG. 18A shows the previous frame and FIG. 18B shows the present frame.
The block matching algorithm searches for a motion vector by examining every point in the search area around the position of each block of the previous frame, starting from the same coordinates in the present frame. Although an optimal motion vector can be found by this exhaustive search, it requires too large an amount of calculation. A three-step search algorithm is a simple and effective alternative in which the motion vector is searched for according to a predetermined search pattern so that the amount of calculation is reduced. However, when the first search step is not correct, the algorithm may fall into a local optimum.
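The exhaustive block matching just described can be sketched directly; the block size, search range, and sum-of-absolute-differences cost are illustrative choices.

```python
import numpy as np

def block_motion(prev, cur, block=8, search=4):
    """Exhaustive block matching: for each block of the previous frame,
    find the displacement in the current frame with the minimum SAD."""
    h, w = prev.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = prev[by:by + block, bx:bx + block].astype(int)
            best, best_sad = (0, 0), float("inf")
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue  # candidate block falls outside the frame
                    sad = np.abs(
                        cur[y:y + block, x:x + block].astype(int) - ref).sum()
                    if sad < best_sad:
                        best, best_sad = (dy, dx), sad
            vectors[(by, bx)] = best
    return vectors
```

Under camera motion, most blocks share one dominant vector (the camera), and blocks whose vectors deviate from it belong to independently moving objects; the faster search algorithms discussed next reduce the cost of the inner loop.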
An unrestricted center-biased diamond search algorithm increases the probability of finding the motion vector accurately by using the center-biased property of motion vectors. However, the diamond search algorithm is not appropriate for fast-moving images.
A prediction search algorithm uses information about the motion vectors of previously processed adjacent blocks. However, its performance is lowered when the correlation between the motion vectors of adjacent blocks is low.
An adaptive prediction directivity search algorithm reduces the amount of calculation compared with the full search algorithm, while avoiding the problem of high-speed search algorithms being unable to conduct a local search due to insufficient information. It calculates a motion vector by using temporal correlation, that is, the consistency of movements over consecutive frames, and by using spatial correlation between blocks within the present frame.
FIG. 19 shows the result of extracting a movement area by using block matching. Although only the case in which there is no camera movement was tested, when the camera does move, the movement can still be extracted by distinguishing the motion vector of the camera from the motion vector of the object.
9. Adaptive background method
Generation of a proper background image is important in this system in order to search for a moving object. The method proposed here minimizes changes in the background image so that the background image is not disturbed when a rapid change occurs in the image. The background image generating algorithm is described as follows.
A. The first input image is used as the background image.
B. Since a moving object may be present in the first image, changes in the image over a predetermined amount are incorporated into the background image during the subsequent frames for a predetermined time.
C. After a predetermined time has passed, the effect of the presently input image on the background image is reduced by minimizing changes in the background image.
FIG. 20A shows an input image and a background image at the initial frame, and FIG. 20B shows an input image and a background image after a predetermined time has passed.
A portion where a change occurs is indicated by using the difference between the presently input image and the background image. In preparation for the case in which the difference in brightness between the background image and the moving object is small, the area is enlarged by slightly applying a morphology method. The difference image created as above is as follows.
FIG. 20C shows a movement image extracted by using the difference between the input image and the background image. Only the outlines included in the area image are extracted from the edge image generated as above and are indicated. The edge detection is performed by using a Sobel mask method.
FIG. 20D shows an edge image of the presently input image, and FIG. 20E shows the resultant image. A disturbance map is often used as a method for creating a background image from the input frames. However, since this method responds sensitively to the recent frames, a single object tends to be divided.
In the present invention, to compensate for the above disadvantage, the amount of change is restricted to a minimum after the background image is generated, so that segmentation can be smoothly performed with respect to a single object. Also, the segmentation of an object is improved by using the edge information of the image.
10. 3-difference image method
A change between the image at the previous frame and the image at the next frame is detected with respect to the present frame. Here, a moving object is detected from the comparison between the images, while an object that does not move is not detected. The moving object at the present frame is segmented by using the difference detected from the comparison between the previous frame and the present frame and the difference obtained from the comparison between the present frame and the next frame.
FIG. 21 shows the difference images between the respective frames. FIG. 22 shows an outline image of the moving object extracted by using the differences from the present image. This segmentation method overcomes the residual effect generated in the above-described adaptive background method and provides a simple algorithm that adapts to fast-moving objects. However, it has the disadvantage that the moving object at the present frame may be segmented excessively.
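The double-difference rule above reduces to an AND of two thresholded frame differences; a minimal sketch, with the threshold value assumed:

```python
import numpy as np

def three_frame_difference(prev, cur, nxt, thresh=20):
    """Double-difference segmentation: a pixel belongs to a moving object
    only if it changed both from the previous frame and to the next one."""
    d1 = np.abs(cur.astype(int) - prev.astype(int)) > thresh
    d2 = np.abs(nxt.astype(int) - cur.astype(int)) > thresh
    return d1 & d2
```

The AND suppresses the "uncovered background" left behind the object in either single difference, which is why this method avoids the residual effect of the adaptive background method.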
11. Global motion detection and moving object segmentation
11.1 Global motion detection
It is very difficult to recognize the movement of a camera from the input images alone, and accurate movement correction is currently being studied by numerous institutes. In the present invention, a line-by-line correlation method is used as the algorithm for estimating the movement of the camera. In the correlation method, the previous frame and the present frame of the input image are compared line by line to detect the movement of each line, and the overall movement is corrected based on the detected per-line movements. That is, each line of the image is shifted to the left and right, and the shift at which the correlation is greatest is set as the movement of that line. The most frequently occurring value among the movements detected for the lines is set as the movement of the entire image. The same method is applied in both the horizontal and the vertical direction.
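The per-line shift voting can be sketched for the horizontal direction as follows. Sum of absolute differences is used in place of correlation, and the vote is a simple mode over rows; both are assumptions standing in for the unspecified correlation measure.

```python
import numpy as np
from collections import Counter

def global_shift(prev, cur, max_shift=4):
    """Estimate the global horizontal camera shift: for every row, find
    the shift with the lowest mean absolute difference, then take the
    most frequent shift over all rows as the global motion."""
    shifts = []
    w = prev.shape[1]
    for r in range(prev.shape[0]):
        best, best_cost = 0, float("inf")
        for s in range(-max_shift, max_shift + 1):
            # Overlapping parts of the row under shift s:
            # zero cost when cur[x] == prev[x - s].
            a = prev[r, max(0, -s): w + min(0, -s)]
            b = cur[r, max(0, s): w + min(0, s)]
            cost = np.abs(a.astype(int) - b.astype(int)).mean()
            if cost < best_cost:
                best, best_cost = s, cost
        shifts.append(best)
    return Counter(shifts).most_common(1)[0][0]  # mode of per-row shifts
```

Running the same routine on the transposed frames gives the vertical component, matching the text's statement that the method is applied in both directions.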
11.2 Moving object segmentation
When the movement of the camera is recognized and inversely corrected, the two images can be treated as if there were no camera movement. By performing the segmentation of the above-described 3-difference image method while the camera movement is corrected over the incoming frames, an independently moving object, whose movement differs from the movement path of the camera, can be detected even with a moving camera.
FIG. 23 is a table showing the movement of a camera frame by frame. FIG. 24 shows an image input to the camera. FIG. 25 shows an independently moving object after the movement of a camera is corrected.
The present invention is a real-time moving object detecting and tracking system using the image input to a camera, and it may be applied to various fields.
A. Field of military use
The configuration of a 24-hour unmanned monitoring system is possible for various military facilities, e.g., monitoring of a missile station and monitoring of an ammunition depot or an armory.
B. Field of civilian use
The configuration of an unmanned system is possible for various public or private facilities, e.g., monitoring of a water intake structure and a water purification structure, monitoring of harbors, and monitoring of the flow of traffic.
The drawings and specification reveal only an example of the present invention. It is noted that the present invention is not limited to the preferred embodiment described above, and it is apparent that variations and modifications by those skilled in the art can be effected within the spirit and scope of the present invention defined in the appended claims.
Industrial Applicability
According to the present invention, the following effects are obtained: a moving object identification function; automatic tracking and zoom in/out; setting of a monitoring area and a monitoring method; speed calculation and analysis; a recording function which operates only when multiple object tracking is needed; a superior compression rate; tracking in bad weather or in the dark; monitoring of various objects, whether indoor, outdoor, or dangerous; automation of monitoring (freedom from manned monitoring) and the establishment of a system with combined image monitoring functions (automatic detection, tracking, and alarming of an intruder, and analysis of an image); reduction of the load on the person monitoring; reduction of cost; the saving of lives and service as a partner in the conservation of the environment, even in poor environments; and automatic detection of multiple moving objects while the monitoring image is moving.

Claims

What is claimed is:
1. A system for tracking and monitoring multiple moving objects comprising: a broad area monitoring apparatus for registering and tracking moving objects within a monitoring area at a fixed position; and a local monitoring apparatus including at least one local area monitoring camera for tracking and monitoring a moving object to be tracked among the registered objects by the broad area monitoring apparatus until the moving object is out of a monitoring range, wherein the broad area monitoring apparatus catches a moving object, recognizes the position of the moving object, and gives an order of tracking to the local area monitoring apparatus so that the local area monitoring apparatus tracks the moving object until the moving object is out of the monitoring area.
2. The system of claim 1, wherein the broad area monitoring apparatus comprises: a camera; an image input unit for receiving an image from the camera; an image correction unit for correcting the input image; a movement detection unit for detecting a movement from the corrected image; a moving object extraction unit for extracting a moving object; and a moving object tracking unit for tracking the position and the speed of the extracted moving object.
3. The system of claim 1, wherein the local area monitoring apparatus comprises: at least one camera; an image input unit for receiving an image from the camera; an image correction unit for correcting the input image; a movement detection unit for detecting a movement from the corrected image; a moving object extraction unit for extracting a moving object; a moving object tracking unit for tracking the position and the speed of the extracted moving object; and a camera movement correction unit for correcting the movement of the camera according to the movement of the moving object.
4. The system of claim 2 or claim 3, wherein the image correction unit uses a noise removing filter to remove a noise component of an input image.
5. The system of claim 4, wherein the noise removing filter is one of a median filter, a Gaussian filter, a leveling filter, and a mean filter.
6. The system of claim 5, wherein the mean filter calculates the mean by reusing a previously calculated mean value while moving a block.
7. The system of claim 2 or claim 3, wherein the moving object extraction unit extracts one object by segmentation and merging using movement area information.
8. The system of claim 2 or claim 3, wherein the moving object tracking unit estimates and tracks a next position of the moving object by applying information on the speed and position of the moving object to an estimation algorithm using a Kalman filter.
9. The system of claim 2 or claim 3, wherein the moving object tracking unit tracks the moving object by using template matching when an object being tracked stops so that there is no movement, or when many objects cross one another so that accurate tracking is otherwise not possible.
10. The system of claim 3, wherein the camera movement correction unit extracts edges of continuously input image data in a panning mode of the camera, projects the extracted edges onto the X and Y axes, and corrects the movement of the camera by comparing the projected values.
PCT/KR2001/000711 2000-04-28 2001-04-28 System for tracking and monitoring multiple moving objects WO2001084844A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2001255087A AU2001255087A1 (en) 2000-04-28 2001-04-28 System for tracking and monitoring multiple moving objects

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020000022818A KR100364582B1 (en) 2000-04-28 2000-04-28 System tracking and watching multi moving object
KR2000/22818 2000-04-28

Publications (1)

Publication Number Publication Date
WO2001084844A1 true WO2001084844A1 (en) 2001-11-08

Family

ID=19667507

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2001/000711 WO2001084844A1 (en) 2000-04-28 2001-04-28 System for tracking and monitoring multiple moving objects

Country Status (3)

Country Link
KR (1) KR100364582B1 (en)
AU (1) AU2001255087A1 (en)
WO (1) WO2001084844A1 (en)

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100442535B1 (en) * 2001-07-06 2004-07-30 소재영 Shortage of hands image apparatus and system using color tracing, and thereof method
US6930638B2 (en) * 2001-08-01 2005-08-16 Roke Manor Research Limited Passive moving object detection system and method using signals transmitted by a mobile telephone station
KR100436049B1 (en) * 2001-12-31 2004-06-12 주식회사 하이닉스반도체 Movement prediction system
KR100455294B1 (en) * 2002-12-06 2004-11-06 삼성전자주식회사 Method for detecting user and detecting motion, and apparatus for detecting user within security system
KR20030064668A (en) * 2003-06-07 2003-08-02 나병호 Advanced Image Processing Digital Video Recorder System
KR20050078714A (en) * 2004-01-31 2005-08-08 엘지전자 주식회사 Local position tracking method, and system for the same
KR100630088B1 (en) 2004-12-28 2006-09-27 삼성전자주식회사 Apparatus and method for supervising vehicle using optical flow
KR100656345B1 (en) * 2005-03-30 2006-12-11 한국전자통신연구원 Method and apparatus for tracking moving object by using two-cameras
KR100757261B1 (en) * 2005-07-08 2007-09-11 전자부품연구원 Tracking method and system for tracking with multiple point-of view
KR20070038656A (en) * 2005-10-06 2007-04-11 엘지전자 주식회사 Method for controlling cooperation monitoring in digital video recorder
KR100872878B1 (en) * 2006-09-29 2008-12-10 (주)로그시스 Imaging System of Security Camera by Event Detection
KR100887942B1 (en) * 2007-01-31 2009-03-30 (주)씽크게이트테크놀러지 System for sensing abnormal phenomenon on realtime and method for controlling the same
KR101187909B1 (en) 2007-10-04 2012-10-05 삼성테크윈 주식회사 Surveillance camera system
CN101426080B (en) * 2007-10-29 2012-10-17 三星电子株式会社 Device and method for detecting and suppressing influence generated by camera moving in monitoring system
KR100879623B1 (en) * 2008-07-05 2009-01-21 주식회사 일리시스 Automated wide area surveillance system using ptz camera and method therefor
KR100877227B1 (en) * 2008-08-07 2009-01-07 주식회사 케이씨에스정보 Zeegbee multiple cameras system for multi moving object automatic tracking and operation method by using the same
KR101072399B1 (en) 2009-05-25 2011-10-11 김명수 Intelligent control method based on object recognition using camera image analysis
KR101120131B1 (en) * 2009-05-29 2012-03-22 주식회사 영국전자 Intelligent Panorama Camera, Circuit and Method for Controlling thereof, and Video Monitoring System
KR101320350B1 (en) * 2009-12-14 2013-10-23 한국전자통신연구원 Secure management server and video data managing method of secure management server
US20120257064A1 (en) * 2010-02-01 2012-10-11 Youngkook Electronics Co, Ltd Tracking and monitoring camera device and remote monitoring system using same
US9082278B2 (en) 2010-03-19 2015-07-14 University-Industry Cooperation Group Of Kyung Hee University Surveillance system
KR101142100B1 (en) * 2010-05-20 2012-05-03 주식회사 와이즈오토모티브 Apparatus and method for sensing data using concentration scan
KR101410985B1 (en) 2013-11-27 2014-07-04 주식회사 휴먼시스템 monitoring system and monitoring apparatus using security camera and monitoring method thereof
KR101592732B1 (en) 2014-07-09 2016-02-05 주식회사 에스원 Apparatus for removing snow rain image in camera images and method for removing snow rain image using the same
KR101496287B1 (en) 2014-11-11 2015-02-26 (주) 강동미디어 Video synopsis system and video synopsis method using the same
KR101548639B1 (en) 2014-12-10 2015-09-01 한국건설기술연구원 Apparatus for tracking the objects in surveillance camera system and method thereof
KR20160093253A (en) 2015-01-29 2016-08-08 쿠도커뮤니케이션 주식회사 Video based abnormal flow detection method and system
KR101723028B1 (en) * 2016-09-26 2017-04-07 서광항업 주식회사 Image processing system for integrated management of image information changing in real time
KR102169211B1 (en) * 2018-11-15 2020-10-22 한국철도공사 apparatus and method for automatically detecting bird's cast
KR102140195B1 (en) * 2020-05-15 2020-07-31 메타빌드(주) Method for detecting invasion of wild animal using radar and system thereof
KR102211734B1 (en) * 2020-08-28 2021-02-03 주식회사 문창 Video surveillance device using image correction filter

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR930019033A (en) * 1992-02-28 1993-09-22 강진구 Surveillance Camera System
JPH08147477A (en) * 1994-09-20 1996-06-07 Fujitsu Ltd Local area image tracking device
KR960039974A (en) * 1995-04-21 1996-11-25 이헌일 Subject tracking method using multiple surveillance cameras


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1283499A1 (en) * 2001-08-07 2003-02-12 Samsung Electronics Co., Ltd. Moving object tracking apparatus and method
WO2004111943A1 (en) * 2003-05-30 2004-12-23 Robert Bosch Gmbh Method and device for locating objects for motor vehicles
FR2883382A1 (en) * 2005-03-21 2006-09-22 Giat Ind Sa Object e.g. enemy vehicle, or event e.g. fire accident, locating and perceiving method, involves determining zone in which event or object is found and is detected by sensor, and recording images in zone by video cameras by recording unit
EP1705451A1 (en) 2005-03-21 2006-09-27 Giat Industries Device and method for locating and perceiving an object or an event
US8542872B2 (en) 2007-07-03 2013-09-24 Pivotal Vision, Llc Motion-validating remote monitoring system
US9286518B2 (en) 2007-07-03 2016-03-15 Pivotal Vision, Llc Motion-validating remote monitoring system
US10275658B2 (en) 2007-07-03 2019-04-30 Pivotal Vision, Llc Motion-validating remote monitoring system

Also Published As

Publication number Publication date
AU2001255087A1 (en) 2001-11-12
KR20010000107A (en) 2001-01-05
KR100364582B1 (en) 2002-12-16

Similar Documents

Publication Publication Date Title
WO2001084844A1 (en) System for tracking and monitoring multiple moving objects
KR101808587B1 (en) Intelligent integration visual surveillance control system by object detection and tracking and detecting abnormal behaviors
Sidla et al. Pedestrian detection and tracking for counting applications in crowded situations
Zhou et al. Efficient road detection and tracking for unmanned aerial vehicle
US5757287A (en) Object recognition system and abnormality detection system using image processing
US6628805B1 (en) Apparatus and a method for detecting motion within an image sequence
RU2484531C2 (en) Apparatus for processing video information of security alarm system
CN110610150B (en) Tracking method, device, computing equipment and medium of target moving object
US8094884B2 (en) Apparatus and method for detecting object
KR101569919B1 (en) Apparatus and method for estimating the location of the vehicle
CN108038415B (en) Unmanned aerial vehicle automatic detection and tracking method based on machine vision
CN103824070A (en) Rapid pedestrian detection method based on computer vision
CN102598057A (en) Method and system for automatic object detection and subsequent object tracking in accordance with the object shape
US9213904B1 (en) Autonomous lock-on target tracking with geospatial-aware PTZ cameras
CN108804992B (en) Crowd counting method based on deep learning
Zang et al. Object classification and tracking in video surveillance
CN107045630B (en) RGBD-based pedestrian detection and identity recognition method and system
CN115346155A (en) Ship image track extraction method for visual feature discontinuous interference
Funde et al. Object detection and tracking approaches for video surveillance over camera network
Liu et al. A real-time vision-based vehicle tracking and traffic surveillance
KR100994722B1 (en) Method for tracking moving object on multiple cameras using probabilistic camera hand-off
Kanhere et al. Real-time detection and tracking of vehicle base fronts for measuring traffic counts and speeds on highways
Foedisch et al. Adaptive road detection through continuous environment learning
Unno et al. Vehicle motion tracking using symmetry of vehicle and background subtraction
Sincan et al. Moving object detection by a mounted moving camera

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP