WO2004057537A1 - Method and Apparatus for Tracking a Moving Object in an Image - Google Patents
Method and Apparatus for Tracking a Moving Object in an Image
- Publication number
- WO2004057537A1 (PCT/JP2003/016058)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- moving object
- motion vector
- block
- time
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/223—Analysis of motion using block-matching
- G06T7/231—Analysis of motion using block-matching using full search
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/255—Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/54—Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30236—Traffic on road, railway or crossing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Definitions
- the present invention relates to a moving object tracking method and apparatus for processing a time-series image to track a moving object (a movable object such as a car, a bicycle, or an animal) in the image.
- automated recognition is expected. In order to raise the recognition rate of traffic accidents, it is necessary to process images captured by cameras and accurately track moving objects.
- each image is divided into blocks each composed of, for example, 8 × 8 pixels, and each block is compared with the corresponding block of the background image to determine the presence or absence of a moving object.
- an object of the present invention is to provide a method and apparatus for tracking a moving object in an image that can track the moving object with a smaller number of temporarily stored time-series images.
- another object of the present invention is to provide a method and an apparatus for tracking a moving object in an image that can improve the boundary recognition accuracy of the moving object without making it difficult to determine a motion vector.
- yet another object of the present invention is to provide a method and apparatus for tracking a moving object that do not require a special background image.
- each image is divided into blocks each including a plurality of pixels, an identification code of the moving object is assigned in units of blocks, and the motion vector of the moving object is obtained in units of blocks.
- each object can be traced back in time before one cluster is separated into a plurality of objects.
- the storage capacity of the image memory can be reduced, and the load on the CPU can be reduced by reducing the amount of image processing.
- each image is divided into blocks each including a plurality of pixels, the motion vector of the moving object is determined in units of blocks, and there is a first block whose motion vector is undetermined,
- each image is divided into blocks each including a plurality of pixels.
- the motion vector of the moving object is obtained in units of blocks while the identification code of the moving object is assigned in units of blocks.
- the motion vector and the identification code ID can be rationally determined simultaneously.
- step (c) may be such a step.
- the quantity relating to the degree of correlation in step (c) is, for example, Σ |MV − MVneighbor| / L, where MVneighbor is the motion vector of a block that has the same ID as the identification code ID of the block BLK among the blocks around the block BLK, Σ means the sum over the blocks having the same ID, and L is the number of blocks having the same ID.
- each image is divided into blocks each including a plurality of pixels.
- the motion vector of the moving object is obtained in units of blocks while the identification code of the moving object is assigned in units of blocks.
- the motion vector from a block-size region of the image at time t1 to the block of interest of the image at time t2 is estimated as MV, and, using a first region that is larger than the block-size region and concentric with it,
- the amount of similarity between the image of the first region and the image of a second region having the same shape as the first region and concentric with the block of interest is obtained,
- the value of an evaluation function including the quantity relating to the similarity is obtained for each position by moving the first region within a predetermined range, and the motion vector MV is determined based on a substantially optimum value of the evaluation function.
- a method for tracking a moving object in an image by processing a time-series image
- the background image is also regarded as one of the moving objects, and the identification code of the moving object is assigned in block units, and the motion vector of the moving object is obtained in block units. According to this configuration, it is not necessary to use a special background image, and the background image can be identified even if the camera shakes.
- each image is divided into blocks each including a plurality of pixels
- a plurality of object maps in which the motion vector of a moving object at a certain time is obtained in units of blocks are stored at different times, and
- a motion vector of the region obtained by moving the region of interest in the positive or negative direction of time by the obtained motion vector is determined based on the object map at the time after the movement, and
- step (b) is repeated a plurality of times with the moved region as the region of interest on the object map at the time after the movement, thereby tracking the region of interest.
- FIG. 1 is a diagram schematically showing an intersection and a moving object tracking device according to a first embodiment of the present invention disposed at the intersection.
- FIG. 2 is a functional block diagram of the moving object tracking device in FIG.
- FIG. 3 is an explanatory diagram showing the slits set at the four entrances to the intersection and the four exits from the intersection in the frame image, and IDs of the moving objects given to the blocks, respectively.
- FIG. 4 shows diagrams schematically showing images at times t-1 and t, respectively, together with block boundaries.
- FIG. 5 shows diagrams schematically showing images at times t-1 and t together with pixel boundaries.
- FIG. 6 shows diagrams schematically showing the images at times t-1 and t together with the motion vectors assigned to the blocks.
- FIG. 7 shows diagrams schematically illustrating the motion vectors and object boundaries on the object maps at times t-1 and t, respectively.
- FIG. 8 is a flowchart showing a method for estimating an undetermined motion vector.
- FIG. 11 is a flowchart showing a method for creating an object map according to the second embodiment of the present invention.
- FIG. 12 shows explanatory diagrams of the spatiotemporal texture correlation degree.
- FIG. 13 shows explanatory diagrams of the spatial ID correlation degree.
- FIGS. 15A and 15B are diagrams showing experimental results of the second embodiment of the present invention: an image taken at an intersection and its ID object map, respectively.
- FIGS. 16A and 16B are diagrams showing experimental results of the second embodiment of the present invention: a low-angle image taken on an expressway and its ID object map, respectively.
- FIGS. 17A and 17B are diagrams showing experimental results of the second embodiment of the present invention: an image taken at a pedestrian crossing, and that image superimposed on the mesh of the ID-assigned portion of the ID object map, respectively.
- FIG. 18 is a flowchart showing a method of determining whether or not the object boundary for dividing a cluster has been determined according to the third embodiment of the present invention.
- FIG. 19 shows diagrams for explaining the processing of FIG. 18.
- FIG. 20 is an explanatory diagram of block matching according to the fourth embodiment of the present invention, in which (A) and (B) schematically show images at times t-1 and t together with block boundaries.
- FIGS. 21A and 21B are diagrams for explaining the fifth embodiment of the present invention, wherein FIG. 21A schematically shows an image and FIG. 21B shows an object map of the obtained motion vectors.
- FIG. 22 shows views for explaining the fifth embodiment, in which (A) shows an object map of the motion vectors determined in the second stage, and (B) shows an ID object map.
- FIG. 23 is a diagram showing a time-series object map for describing a sixth embodiment of the present invention for tracking a region of interest.
- FIG. 24 shows explanatory diagrams of a method of tracking a region of interest by going back in time.
- FIG. 26 is a diagram showing a histogram of the absolute value of the motion vector for one cluster.
- FIG. 27 is a flowchart showing an object boundary recognition method according to the eighth embodiment of the present invention.
- FIG. 28 is a diagram schematically showing a time-series image captured by a camera installed above the center line of the road.
- FIG. 1 schematically shows an intersection and a moving object tracking device according to a first embodiment of the present invention disposed at the intersection.
- This device includes an electronic camera 10 that captures an intersection and outputs an image signal, and a moving object tracking device 20 that processes the image to track a moving object.
- FIG. 2 is a functional block diagram of the moving object tracking device 20.
- the components other than the storage unit can be configured by computer software, dedicated hardware, or a combination of computer software and dedicated hardware.
- a time-series image captured by the electronic camera 10 is stored in the image memory 21 at a rate of, for example, 12 frames/second, and the oldest frame is rewritten with a new frame image.
- the image conversion unit 22 copies each frame image in the image memory 21 to the frame buffer memory 23, and uses the copied image data to convert the corresponding frame image in the image memory 21 into a spatial difference frame image. This conversion is performed in two stages.
- letting the pixel value (luminance value) at the i-th row and j-th column of the original frame image be G(i, j), the pixel value H(i, j) at the i-th row and j-th column after the first-stage conversion is given by H(i, j) = c Σ |G(i, j) − G(i + di, j + dj)|, where c is a natural number and the sum Σ is taken over the eight pixels adjacent to the pixel at the i-th row and j-th column.
- when the illumination changes, the pixel value G(i, j) and its neighboring pixel values G(i + di, j + dj) change in the same way, so that the image of H(i, j) is invariant.
- H(i, j) is normalized as follows: H(i, j) ← H(i, j) · Gmax / Gi,j,max, where Gi,j,max is the maximum of the original pixel values used to calculate H(i, j), and Gmax is the maximum value that a pixel value G(i, j) can take, for example 255 if the pixel value is represented by 8 bits.
- This H (i, j) is converted to I (i, j) by the following equation using a sigmoid function.
- the image conversion unit 22 converts the image having the pixel value G (i, j) into a spatial difference frame image having the pixel value I (i, j) based on the above equations (2) and (3). Stored in image memory 21.
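The two-stage conversion above can be sketched as follows; this is a minimal Python illustration, where the exact normalization constant and the sigmoid gain of equations (2) and (3) are not given in the text, so the forms used below are assumptions.

```python
import math

def spatial_difference(G, c=1.0, Gmax=255):
    # First stage: H(i, j) = sum of absolute differences between pixel
    # (i, j) and its eight neighbours; second stage: a sigmoid maps the
    # normalized H(i, j) to the final value I(i, j).
    # Dividing by the local maximum is an assumed form of the
    # normalisation described in the text.
    rows, cols = len(G), len(G[0])
    I = [[0.0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            neighbours = [G[i + di][j + dj]
                          for di in (-1, 0, 1) for dj in (-1, 0, 1)
                          if (di, dj) != (0, 0)]
            h = sum(abs(G[i][j] - g) for g in neighbours)
            h = c * Gmax * h / max(max(neighbours), 1)  # normalisation (assumed form)
            I[i][j] = 1.0 / (1.0 + math.exp(-h))        # sigmoid of equation (3)
    return I
```

For a uniform region H is zero, so interior pixels map to the sigmoid midpoint 0.5, while texture edges are pushed toward 1.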
- the background image generation unit 24, the ID generation / disappearance unit 25, and the moving object tracking unit 27 perform processing based on the spatial difference frame image in the image memory 21.
- the spatial difference frame image is simply referred to as a frame image.
- the background image generation unit 24 includes a storage unit and a processing unit.
- the processing unit accesses the image memory 21 and, for example, obtains a histogram of pixel values for corresponding pixels of all frame images in the past 10 minutes.
- An image having the mode value as the pixel value of the pixel is generated as a background image in which no moving object exists, and is stored in the storage unit.
- the background image is updated by performing this process periodically.
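The mode-based background generation described above can be sketched as follows; a minimal illustration in which the frame list stands in for the frames of the past 10 minutes.

```python
from collections import Counter

def generate_background(frames):
    # For each pixel position, take the mode (most frequent value) of
    # that pixel over all stored frames; the resulting image is the
    # background in which no moving object exists.
    rows, cols = len(frames[0]), len(frames[0][0])
    background = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            values = [f[i][j] for f in frames]
            background[i][j] = Counter(values).most_common(1)[0][0]
    return background
```

Rerunning this periodically on the latest frames implements the background update mentioned above.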
- in the ID generation/deletion unit 25, the position and size data of the slits EN1 to EN4 and EX1 to EX4, which are respectively arranged at the four entrances to the intersection and the four exits from the intersection in the frame image, are set in advance.
- the ID generation/deletion unit 25 reads the image data in the entrance slits EN1 to EN4 from the image memory 21 and determines whether or not a moving object exists in these entrance slits in units of blocks.
- the mesh cells in FIG. 3 are blocks, and one block is, for example, 8 ⁇ 8 pixels. When one frame is 480 ⁇ 640 pixels, one frame is divided into 60 ⁇ 80 blocks.
- Whether or not a moving object exists in a certain block is determined based on whether or not the sum of absolute values of differences between each pixel in the block and a corresponding pixel in the background image is equal to or larger than a predetermined value. This determination is also performed by the moving object tracking unit 27.
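The block-wise presence test can be sketched as follows: the sum of absolute differences between a block and the corresponding background block is compared with a threshold. The block size of 8 matches the example above, but the threshold value here is illustrative; the text only says the sum is compared with "a predetermined value".

```python
def block_has_object(frame, background, i0, j0, size=8, threshold=200):
    # Sum of absolute differences between the block whose top-left
    # corner is (i0, j0) and the corresponding background block.
    sad = sum(abs(frame[i][j] - background[i][j])
              for i in range(i0, i0 + size)
              for j in range(j0, j0 + size))
    # A moving object is judged present when the sum reaches the
    # predetermined value (threshold is an assumed example).
    return sad >= threshold
```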
- when the ID generation/deletion unit 25 determines that a moving object exists in a block, it assigns a new object identification code (ID) to the block.
- the ID generation/deletion unit 25 assigns the same ID to a block adjacent to a block that has already been assigned that ID.
- the ID is assigned to the corresponding block in the object map storage unit 26.
- the object map storage unit 26 stores the object map of 60 × 80 blocks in the case described above; each block is provided with a flag indicating whether or not an ID is assigned, and, when an ID is assigned, the ID number and a block motion vector described later are added as block information. Alternatively, without using the flag, it may be determined that no ID is assigned when ID = 0, or the most significant bit of the ID may be used as the flag.
- the moving object tracking unit 27 assigns an ID to the block in the moving direction and erases the ID of the block in the direction opposite to the movement, that is, performs a cluster tracking process.
- the tracking processing by the moving object tracking unit 27 is performed up to the exit slit for each cluster.
- the ID generation / deletion unit 25 further checks whether or not IDs are assigned to the blocks in the exit slits EX1 to EX4 based on the contents of the object map storage unit 26.
- a deleted ID can be reused as the next ID to be generated.
- the moving object tracking unit 27 creates an object map at time t in the storage unit 26 based on the object map at time t-1 stored in the object map storage unit 26 and the frame images at times t-1 and t in the image memory 21.
- this will now be described.
- FIGS. 4 to 7 schematically show images at times t-1 and t.
- the dotted lines in FIG. 4, FIG. 6 and FIG. 7 are block boundaries, and the dotted line in FIG. 5 is a pixel boundary line.
- the block at the i-th row and j-th column is denoted B(i, j), and the block at the i-th row and j-th column at time t is denoted B(t: i, j).
- the motion vector of block B(t-1: 1, 4) is denoted MV.
- the degree of correlation between the image of block B(t: 1, 5) and the image of a block-size region AX at time t-1 is obtained for each position while shifting the region AX one pixel at a time within a predetermined range AM (block matching).
- the range AM is larger than the block; one side of it is, for example, 1.5 times the number of pixels on one side of the block.
- the center of the range AM is the pixel at the position to which the center of block B(t: 1, 5) has been moved by approximately -MV.
- the correlation is, for example, a spatiotemporal texture correlation, and is assumed to be larger as the evaluation value UD, which is the sum of absolute values of the differences between the corresponding pixel values of the block B (t: 1, 5) and the area AX, is smaller.
- the region AX where the degree of correlation is maximum within the range AM is found, and the vector starting from the center of this region and ending at the center of block B(t: 1, 5) is determined as the motion vector of block B(t: 1, 5). Also, the ID of the block at time t-1 closest to the region AX where the degree of correlation is maximum is determined as the ID of block B(t: 1, 5).
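The block-matching step can be sketched as follows. For simplicity this sketch searches all displacements up to a fixed radius rather than the range AM centred on the predicted position, and uses the sum of absolute differences (the evaluation value UD) as the correlation measure; the returned vector points from the matched region at time t-1 to the block of interest, as described above.

```python
def match_block(curr, prev, i0, j0, size=8, search=4):
    # Find, in the previous frame, the region best matching the block of
    # the current frame whose top-left corner is (i0, j0); the smaller
    # the SAD (evaluation value UD), the greater the correlation.
    best_mv, best_ud = (0, 0), float("inf")
    for di in range(-search, search + 1):
        for dj in range(-search, search + 1):
            ud, valid = 0, True
            for i in range(size):
                for j in range(size):
                    pi, pj = i0 + i + di, j0 + j + dj
                    if not (0 <= pi < len(prev) and 0 <= pj < len(prev[0])):
                        valid = False
                        break
                    ud += abs(curr[i0 + i][j0 + j] - prev[pi][pj])
                if not valid:
                    break
            if valid and ud < best_ud:
                # The motion vector ends at the block of interest, so it
                # is the negative of the matching displacement.
                best_ud, best_mv = ud, (-di, -dj)
    return best_mv
```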
- the moving object tracking unit 27 assigns the same ID to adjacent blocks whose motion vector difference has an absolute value equal to or less than a predetermined value. As a result, even a single cluster is divided into a plurality of objects (moving objects) having different IDs. In FIG. 6, the boundaries between objects are shown by bold lines.
- FIG. 6 schematically illustrates the moving object on the object map for easy understanding.
- Figure 7 shows the boundaries of the objects in the object map in bold lines, and corresponds to Figure 6.
- in Patent Literature 1, individual objects are tracked back in time after one cluster has separated into a plurality of clusters.
- the storage capacity of the image memory 21 can be reduced, and the CPU load can be reduced by reducing the amount of image processing.
- the motion vector of such a block is estimated by the method shown in FIG.
- step S1: if there is an undetermined motion vector, the process proceeds to step S2; otherwise, the undetermined-motion-vector estimation process ends.
- step S3: if there is a motion vector determined in step S2, the process proceeds to step S4; otherwise, it proceeds to step S6.
- step S6: the motion vector estimated in step S5 is regarded as a determined motion vector, and the process returns to step S1.
- the undetermined motion vector can be uniquely estimated.
- in FIG. 9(A), the motion vector of block B(i, j) at the i-th row and j-th column is denoted MV(i, j).
- in FIG. 9(A), the motion vectors of blocks B(2, 2), B(2, 4) and B(3, 3) are undetermined.
- the motion vectors of the blocks around block B(2, 4) fall into a group of MV(2, 3), MV(3, 4) and MV(3, 5) and a group of MV(1, 3), MV(1, 4), MV(1, 5) and MV(2, 5); using the latter group,
- MV(2, 4) = (MV(1, 3) + MV(1, 4) + MV(1, 5) + MV(2, 5)) / 4
- the determined motion vectors of the blocks around block B(3, 3) are MV(2, 3), MV(3, 2), MV(4, 2), MV(4, 4) and MV(3, 4), so
- MV(3, 3) = (MV(2, 3) + MV(3, 2) + MV(4, 2) + MV(4, 4) + MV(3, 4)) / 5
- an object map as shown in FIG. 9(B) is generated.
- the boundaries of the objects are indicated by bold lines.
- in step S6, the estimated vector is regarded as a determined motion vector, and steps S1 to S5 are executed again, so that the motion vector of block B(3, 4) is uniquely estimated as shown in FIG. 10(C).
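The estimation of an undetermined motion vector as the average of the determined motion vectors of the surrounding same-ID blocks can be sketched as follows (the use of `None` for "undetermined" is an assumption of this illustration):

```python
def estimate_missing_mv(mv_map, id_map, i, j):
    # Collect the determined motion vectors of the eight surrounding
    # blocks that carry the same ID as block (i, j).
    neighbours = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if (di, dj) == (0, 0):
                continue
            ni, nj = i + di, j + dj
            if 0 <= ni < len(mv_map) and 0 <= nj < len(mv_map[0]):
                mv = mv_map[ni][nj]
                if mv is not None and id_map[ni][nj] == id_map[i][j]:
                    neighbours.append(mv)
    if not neighbours:
        # Cannot be estimated yet; retried after other vectors are filled
        # in, mirroring the loop of steps S1 to S6.
        return None
    return (sum(v[0] for v in neighbours) / len(neighbours),
            sum(v[1] for v in neighbours) / len(neighbours))
```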
- one cluster is divided into a plurality of objects having different IDs by assigning the same ID to a block whose absolute value of the difference between the motion vectors of adjacent blocks is equal to or less than a predetermined value.
- the moving object tracking unit 27 stores the time series of object maps stored in the object map storage unit 26 on a hard disk (not shown) as the tracking result.
- since an undetermined motion vector is determined based only on the motion vectors of the blocks around it, when there are many undetermined motion vectors, the accuracy of determining the IDs and motion vectors of the blocks is reduced.
- the ID and the motion vector of all the blocks are simultaneously determined based on the value of an evaluation function described later.
- the moving object tracking unit 27 in FIG. 2 creates an object map at time t in the storage unit 26 based on the object map at time (t-1) stored in the object map storage unit 26 and the frame images at times (t-1) and t stored in the image memory 21.
- the evaluation function U (i, j) of an arbitrary block B (t: i, j) including a part of a moving object will be described.
- the evaluation function U(i, j) is expressed by a linear combination of four sub-evaluation functions, as in the following equation: U(i, j) = a·UD + b·UM + c·UN + f·UV ... (1)
- a to c and f are constants determined by trial and error.
- one block is assumed to consist of m × m pixels
- the value of the pixel at the g-th row and the h-th column of the image at time t is represented by G (t: g, h)
- let MV denote the estimated motion vector of block B(t: i, j), represented by (MVX, MVY), with i ≥ 0 and j ≥ 0.
- the sub-evaluation function UD indicates the spatiotemporal texture correlation; it is the same as described in the first embodiment and is expressed by the following equation:
- UD(i, j, MV) = Σ |G(t: g, h) − G(t−1: g − MVY, h − MVX)|, where the sum Σ is taken over the m × m pixels (g, h) of block B(t: i, j).
- FIG. 12(B) shows the case where the estimated motion vector of the block of interest B(t: 1, 2) is MV, and FIG. 12(A) shows the region AX obtained by shifting the position of block B(t: 1, 2) by -MV in the image at time t-1.
- the evaluation function UD(1, 2, MV) between the image of block B(t: 1, 2) and the image of the region AX is calculated.
- the value of the UD changes. The smaller the value of the UD, the greater the degree of texture correlation between the image of the block B (t: 1, 2) and the image of the area AX.
- the MV at which UD attains its minimum value is the most likely motion vector. Since the speed of the moving object is limited, the region AX is moved within a predetermined range from the center of the block of interest B(t: 1, 2), for example within ±25 pixels vertically and ±25 pixels horizontally, to find the minimum value of UD. As described in the first embodiment, the predetermined range may be the range AM predicted using the motion vector at time t-1.
- FIGS. 13 (A) and 13 (B) correspond to FIGS. 12 (A) and (B), respectively, and the hatched portion indicates a block that has been determined to have a moving object.
- the sub-evaluation function UM indicates the spatio-temporal ID correlation, and is expressed by the following equation.
- in FIG. 13(B), when the ID of the block of interest B(t: 1, 2) is estimated to be ID1, let N be the number of blocks whose ID is ID1 among the eight surrounding blocks B(t: 0, 1), B(t: 0, 2), B(t: 0, 3), B(t: 1, 3), B(t: 2, 3), B(t: 2, 2), B(t: 2, 1) and B(t: 1, 1). If the IDs of the hatched portions in FIG. 13(B) are all the same, the value of N for the block of interest B(t: 1, 2) is 5.
- the sub-evaluation function UN indicates the spatial ID correlation, and is expressed by the following equation.
- by moving the region AX within the above-mentioned predetermined range and finding the minimum value of a·UD + b·UM + c·UN, it is possible to determine the ID and MV of the block of interest simultaneously.
- for some blocks, however, the motion vector MV cannot be determined in this way.
- such a motion vector MV can be estimated to be almost the same as the motion vector of a nearby block having the same ID. Therefore, the following sub-evaluation function UV, which indicates the spatial MV correlation, is defined.
- UV(i, j) = Σ |MV − MVneighbor| / L
- where MV is the estimated motion vector of the block of interest B(t: i, j) described in (1) above, MVneighbor is the motion vector of one of the eight blocks around the block of interest B(t: i, j) that has the same ID, Σ represents the sum over the blocks having the same ID, and L is the number of blocks having the same ID.
- for example, UV(1, 2) = (|MV − MV1| + |MV − MV2| + … ) / L.
- by finding the MV and ID for which the evaluation function U of the above equation (1) takes its minimum value, it is possible to determine the ID and MV of the block of interest simultaneously.
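The simultaneous determination can be sketched as follows: each candidate (MV, ID) pair is scored with U = a·UD + b·UM + c·UN + f·UV and the minimizing pair is selected. In this sketch the four sub-evaluation values are assumed to be precomputed per candidate, and the unit weights are placeholders for the trial-and-error constants.

```python
def best_candidate(candidates, a=1.0, b=1.0, c=1.0, f=1.0):
    # Each candidate is (MV, ID, UD, UM, UN, UV) with the four
    # sub-evaluation values precomputed for that (MV, ID) pair.
    def U(cand):
        _mv, _obj_id, ud, um, un, uv = cand
        # Linear combination of equation (1).
        return a * ud + b * um + c * un + f * uv
    mv, obj_id, *_ = min(candidates, key=U)
    return mv, obj_id
```

In a full implementation the candidate list would enumerate every displacement within the search range combined with every plausible ID for the block of interest.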
- MVneighbor only needs to be the motion vector of a block around the block of interest B(t: i, j); for example, it may be the motion vector of any of the four blocks above, below, left of and right of the block of interest (one round), or of any of the 24 blocks around the block of interest (two rounds), that has the same ID as the estimated ID of the block of interest. Further, MVneighbor may be approximated by the corresponding motion vector at time t-1.
- letting B(t-1: p, q) be the block to which the center of the region obtained by shifting the block of interest B(t: i, j) by -MV belongs, MVneighbor may be the motion vector of a block near block B(t-1: p, q) that has the same ID as the estimated ID of the block of interest B(t: i, j).
- ID and MV are determined by an approximation method as shown in Fig. 11 in order to shorten the processing time and enable real-time processing.
- first, the motion vector MV that minimizes the value of the evaluation function UD in (2) is obtained; however, this cannot be done for blocks that are not suitable for obtaining a motion vector in this way. Next, for each such block,
- the motion vector MV that minimizes the value of the evaluation function UV of (5) is obtained.
- the motion vector MV may be uniquely determined by adding the processes of steps S1 to S3 and S6 in FIG.
- when steps S13 and S14 have been repeated a predetermined number of times, or when it is determined that the sum UT has converged, the process is terminated.
- in step S16, the MV of one block is shifted by one pixel within the predetermined range, or the ID of one block is changed; when the process returns to step S15, if the sum UT is larger than the previous value, the changed MV or ID is restored in step S16, and if it has become smaller, the same change is tried for the next block.
- the predetermined range is, for example, ±4 pixels in each of the vertical and horizontal directions.
- alternatively, a process expected to reduce the sum UT may be estimated in advance and performed, and the sum UT (or the affected portion of UT) recalculated; if the sum has decreased, that object map is used, and if not, the object map before the processing may be adopted.
- in step S11, the motion vector of a block that is not suitable for obtaining a motion vector may be left undetermined, and determined instead by the process of steps S13 to S15 or by the above-described process.
- Figures 16 (A) and (B) show the low-angle shot image on the expressway and its ID object map, respectively.
- FIGS. 17 (A) and 17 (B) show an image taken at a pedestrian crossing, and an image obtained by superimposing this image on the mesh of the ID adding section of the ID object map.
- the numbers given to the rectangles in FIGS. 16 (A) and 17 (B) are the IDs of the objects.
- this problem can also be solved by increasing the predetermined value in the rule that assigns the same ID to adjacent blocks whose motion vector difference is within that value, but in that case the starting point of the tracking process going back in time is delayed.
- the method shown in FIG. 18 is performed to determine the starting point of the tracking process going back in time.
- when the degree of correlation between the same object in temporally adjacent images is equal to or greater than a predetermined value for N consecutive images, for example three images, it is determined that the reliability of the object boundary is high.
- step S23: if one cluster includes a plurality of objects, the process proceeds to step S24; otherwise, it proceeds to step S27.
- the figure obtained by moving one object OBJ1(t-1) in FIG. 19(A) by the average motion vector of that object is compared with the corresponding object in FIG. 19(B), and
- the area A0 may be the area of the figure of the object OBJ1(t).
- step S25: if A1/A0 is greater than or equal to the predetermined value r0, the process proceeds to step S26; otherwise, it proceeds to step S27.
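The reliability test above can be sketched as a run-length check on the overlap ratio A1/A0: the boundary is judged reliable once the ratio stays at or above r0 for N consecutive images. The value of r0 below is illustrative; the text gives N = 3 images as an example.

```python
def boundary_reliable(overlap_ratios, r0=0.9, n=3):
    # overlap_ratios: A1/A0 for each pair of temporally adjacent images.
    # The boundary is reliable when the ratio meets r0 for n images in a
    # row (r0 = 0.9 is an assumed example value).
    run = 0
    for ratio in overlap_ratios:
        run = run + 1 if ratio >= r0 else 0
        if run >= n:
            return True
    return False
```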
- FIG. 20 is an explanatory diagram of an object map according to the fourth embodiment of the present invention.
- the size of the block B'(i, j) used for determining the motion vector of each block B(i, j), to which the ID and the motion vector MV are assigned, is larger than the size of block B(i, j).
- block B'(i, j) is concentric with block B(i, j), and block B'(i, j) contains block B(i, j).
- block B ′ (t: 3,10) is for obtaining the motion vector of block B (t: 3,10).
- the degree of texture correlation between the image of block B'(t: 3, 10) and the image of a region AX of the same size at time t-1 is calculated every time the region AX is moved by one pixel within the predetermined range AM.
- in the above embodiments, the presence or absence of an object is checked by comparison with the background image in units of blocks, so that the background image must be specially prepared. Also, since the background image is generated based on, for example, the captured images of the past 10 minutes, if the camera shakes, this shift cannot be reflected in the background image.
- an object map is created by regarding a background image as an object.
- the object map generation method is the same as in the above-described first, second, third or fourth embodiment, except that no comparison with a background image is made to determine whether or not a moving object exists in a block; since the background image is also regarded as an object, block matching is performed on all blocks to assign IDs and determine MVs.
- by performing this processing on an image such as that shown in FIG. 21(A), an object map of motion vectors as shown in FIG. 21(B) is obtained.
- the dotted line is the boundary of the block, and the dot indicates that the motion vector is zero.
- a motion vector MV that minimizes the value of the evaluation function UV in the above equation (5) is obtained.
- an object map of the motion vector as shown in Fig. 22 (A) is obtained.
- the processing of steps S12 to S15 is the same as in the second embodiment.
- in step S12, an ID object map as shown in FIG. 22(B) is obtained.
- an image is divided into blocks, the ID and MV of an object are determined in block units, and a part of a moving object is tracked independently of block boundaries.
- the object map storage unit 26 in FIG. 2 stores object maps OM(t) to OM(t-5) corresponding to the time-series images.
- t ← t-1 is set, that is, the object maps OM(t) to OM(t-5) become OM(t-1) to OM(t-6), respectively, and then the oldest object map OM(t-6) is overwritten with the new object map OM(t).
- tracking of a part of the moving object is performed as follows.
- Let MV(t) denote the motion vector of the region of interest A(t) on the object map OM(t).
- The dotted lines in FIG. 24(A) are block boundaries; in this example, the region of interest A(t) coincides with one block.
- The region of interest A(t−1) on the object map OM(t−1) is obtained as the area corresponding to the region of interest A(t) moved back by MV(t).
- The motion vector MV(t−1) of the region of interest A(t−1) is obtained by the following weighted average:
- MV(t−1) = (MV1·S1 + MV2·S2 + MV3·S3 + MV4·S4) / (S1 + S2 + S3 + S4)
- Here, MV1 to MV4 are the motion vectors of the first to fourth blocks that overlap the region of interest A(t−1), and S1 to S4 are the areas of overlap between the region of interest A(t−1) and the first to fourth blocks, respectively.
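The weighted average above can be written generically for any number of overlapping blocks. A minimal sketch; the representation of motion vectors as (vx, vy) pairs and all names are our assumptions:

```python
def region_motion_vector(overlaps):
    """Weighted average of block motion vectors, each weighted by the area
    of overlap between its block and the region of interest.
    overlaps: list of ((vx, vy), area) pairs."""
    total_area = sum(area for _, area in overlaps)
    vx = sum(mv[0] * area for mv, area in overlaps) / total_area
    vy = sum(mv[1] * area for mv, area in overlaps) / total_area
    return (vx, vy)
```

Blocks with a larger overlap area thus contribute proportionally more to the region's motion vector, which is what lets the region be tracked without snapping to block boundaries.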
- the motion vector MV (t-2) of the region of interest A (t-2) is obtained in the same manner as above.
- In this way, the target region can be tracked independently of block boundaries. That is, for the region of interest A(t) at time t, the regions of interest A(t−1) to A(t−5) at times t−1 to t−5 can be obtained.
- According to the sixth embodiment of the present invention, it is possible to track a region of interest that is part of a moving object, and, for example, to analyze or classify the behavior pattern of the region of interest, or to determine whether it matches a specific behavior pattern.
- An example of a specific behavior pattern is a relative behavior pattern between a plurality of regions of interest.
- the motion vector of the region of interest is obtained by weighted averaging as described above.
- Alternatively, tracking may start from the region of interest A(t−5) on the object map OM(t−5) and proceed in the forward time direction by moving the region of interest along its motion vector.
- In this case, the region of interest can be tracked by finding the region of interest A(t) every time a new object map OM(t) is obtained.
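Whether run backwards from OM(t) or forwards from OM(t−5), the tracking loop has the same shape: look up the region's motion vector on the current map and shift the region to reach the neighbouring map. A sketch of the backward direction, where `motion_vector_of` is a hypothetical helper standing in for the weighted-average lookup:

```python
def track_back(region, object_maps, motion_vector_of):
    """Trace a region of interest backwards through stored object maps:
    at each step the region centre is shifted by the negative of its
    motion vector to obtain its position one frame earlier."""
    trail = [region]                      # region = (cx, cy) centre position
    for om in object_maps:                # OM(t-1), OM(t-2), ...
        vx, vy = motion_vector_of(om, trail[-1])
        cx, cy = trail[-1]
        trail.append((cx - vx, cy - vy))  # move back against the motion
    return trail
```

The forward variant is symmetric: add the motion vector instead of subtracting it, and append the region found on each newly generated object map.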
- region of interest may be larger or smaller than the block size.
- Let the motion vectors of adjacent regions of interest Ai(t) and Aj(t) on the object map OM(t) be MVi(t) and MVj(t), respectively.
- The regions of interest Ai(t−5) and Aj(t−5) on the object map OM(t−5) are determined by the method of the sixth embodiment.
- The motion vector from the center of the region Ai(t−5) to the center of the region Ai(t) is obtained as a fast-forward motion vector MVi(t−5, t).
- Similarly, the motion vector from the center of the region Aj(t−5) to the center of the region Aj(t) is obtained as a fast-forward motion vector MVj(t−5, t).
- The case where |MVi(t−k, t) − MVj(t−k, t)| is evaluated with k = 5 has been explained; when the relative velocity on the image between overlapping moving objects is small, it is preferable to increase the value of k.
- The threshold constant is determined by trial and error, and the bracket notation denotes rounding to an integer.
- FIG. 27 is a flowchart showing a moving object boundary recognition method according to the eighth embodiment of the present invention.
- For the two regions of interest, the value of k is changed from 0 up to a maximum value kmax; kmax is, for example, 5 at 10 frames per second.
- In step S32, if |MVi(t−k, t) − MVj(t−k, t)| exceeds the threshold, the process proceeds to step S33; otherwise, it proceeds to step S34.
- In this way, the value of k is determined automatically without creating a histogram.
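Steps S31 to S34 amount to scanning k upward until the fast-forward motion vectors of the two regions differ by more than the threshold. A hedged sketch: the Euclidean norm is our assumption for the vector difference, and `threshold` corresponds to the trial-and-error constant mentioned above:

```python
def vectors_separate(mvi, mvj, threshold):
    """True when two fast-forward motion vectors differ by more than the
    threshold, i.e. the regions likely belong to different moving objects."""
    dist = ((mvi[0] - mvj[0]) ** 2 + (mvi[1] - mvj[1]) ** 2) ** 0.5
    return dist > threshold

def smallest_separating_k(mv_pairs, threshold):
    """Scan k = 0 .. kmax over precomputed (MVi(t-k, t), MVj(t-k, t)) pairs
    and return the first k at which the vectors separate, else None."""
    for k, (mvi, mvj) in enumerate(mv_pairs):
        if vectors_separate(mvi, mvj, threshold):
            return k
    return None
```

Returning the first separating k matches the idea that a larger k is needed only while the relative velocity on the image remains small.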
- The present invention also includes various modified examples.
- For example, although a moving object is tracked by processing a spatial difference image in the above embodiments, the present invention may also employ a configuration in which a moving object (including a part of a moving object) is tracked by processing various edge images or the original images.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/540,217 US7929613B2 (en) | 2002-12-20 | 2003-12-15 | Method and device for tracking moving objects in image |
CA002505563A CA2505563A1 (en) | 2002-12-20 | 2003-12-15 | Method and device for tracing moving object in image |
EP03778936.9A EP1574992B1 (en) | 2002-12-20 | 2003-12-15 | Method and device for tracking moving objects in images |
AU2003289096A AU2003289096A1 (en) | 2002-12-20 | 2003-12-15 | Method and device for tracing moving object in image |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2002371047A JP4217876B2 (ja) | 2002-12-20 | 2002-12-20 | Method and device for tracking moving objects in images |
JP2002-371047 | 2002-12-20 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2004057537A1 true WO2004057537A1 (ja) | 2004-07-08 |
Family
ID=32677192
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2003/016058 WO2004057537A1 (ja) | 2002-12-20 | 2003-12-15 | Method and device for tracking moving objects in images |
Country Status (8)
Country | Link |
---|---|
US (1) | US7929613B2 (ja) |
EP (1) | EP1574992B1 (ja) |
JP (1) | JP4217876B2 (ja) |
KR (2) | KR20050085842A (ja) |
CN (1) | CN100385462C (ja) |
AU (1) | AU2003289096A1 (ja) |
CA (1) | CA2505563A1 (ja) |
WO (1) | WO2004057537A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021117930A1 (ko) * | 2019-12-10 | 2021-06-17 | Korea Electronics Technology Institute | Semantic filtering module system for improving detection of objects in overlapping areas |
Families Citing this family (40)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4734568B2 (ja) * | 2006-01-12 | 2011-07-27 | The University of Tokyo | Method and device for determining measurement points of a moving object on an image |
JP4714872B2 (ja) * | 2006-01-12 | 2011-06-29 | The University of Tokyo | Method and device for segmenting overlapping moving objects on an image |
US8085849B1 (en) * | 2006-11-03 | 2011-12-27 | Keystream Corporation | Automated method and apparatus for estimating motion of an image segment using motion vectors from overlapping macroblocks |
KR20080057500A (ko) * | 2006-12-20 | 2008-06-25 | Research Institute of Industrial Science & Technology | Human motion tracking system and method |
JP2009053815A (ja) * | 2007-08-24 | 2009-03-12 | Nikon Corp | Subject tracking program and subject tracking device |
US20090097704A1 (en) * | 2007-10-10 | 2009-04-16 | Micron Technology, Inc. | On-chip camera system for multiple object tracking and identification |
KR100921821B1 (ko) | 2007-12-07 | 2009-10-16 | Yeungnam University Industry-Academic Cooperation Foundation | Method for building a feature-space trajectory database and multi-angle target identification method using the same |
US8325976B1 (en) * | 2008-03-14 | 2012-12-04 | Verint Systems Ltd. | Systems and methods for adaptive bi-directional people counting |
US9019381B2 (en) | 2008-05-09 | 2015-04-28 | Intuvision Inc. | Video tracking systems and methods employing cognitive vision |
JP4507129B2 (ja) * | 2008-06-06 | 2010-07-21 | Sony Corporation | Tracking point detection device and method, program, and recording medium |
CN101615294B (zh) * | 2008-06-26 | 2012-01-18 | 睿致科技股份有限公司 | Method for tracking multiple objects |
KR100958379B1 (ko) * | 2008-07-09 | 2010-05-17 | (주)지아트 | Method and device for tracking multiple objects, and storage medium |
US8325227B2 (en) * | 2008-07-15 | 2012-12-04 | Aptina Imaging Corporation | Method and apparatus for low cost motion detection |
US20100119109A1 (en) * | 2008-11-11 | 2010-05-13 | Electronics And Telecommunications Research Institute Of Daejeon | Multi-core multi-thread based kanade-lucas-tomasi feature tracking method and apparatus |
CN101719278B (zh) * | 2009-12-21 | 2012-01-04 | Xidian University | Automatic cell tracking method for video microscopy images based on the KHM algorithm |
JP5561524B2 (ja) * | 2010-03-19 | 2014-07-30 | Sony Corporation | Image processing device and method, and program |
JP5338978B2 (ja) * | 2010-05-10 | 2013-11-13 | Fujitsu Limited | Image processing device and image processing program |
JP5459154B2 (ja) * | 2010-09-15 | 2014-04-02 | Toyota Motor Corporation | Vehicle surroundings image display device and method |
JP5218861B2 (ja) * | 2010-09-30 | 2013-06-26 | JVC Kenwood Corporation | Target tracking device and target tracking method |
JP5828210B2 (ja) * | 2010-10-19 | 2015-12-02 | Sony Corporation | Image processing device and method, and program |
US10018703B2 (en) * | 2012-09-13 | 2018-07-10 | Conduent Business Services, Llc | Method for stop sign law enforcement using motion vectors in video streams |
WO2013077562A1 (ko) * | 2011-11-24 | 2013-05-30 | SK Planet Co., Ltd. | Device and method for setting feature points, and device and method for object tracking using the same |
KR101939628B1 (ko) | 2012-05-30 | 2019-01-17 | Samsung Electronics Co., Ltd. | Motion detection method and motion detector |
CN103678299B (zh) * | 2012-08-30 | 2018-03-23 | ZTE Corporation | Method and device for summarizing surveillance video |
US9311338B2 (en) * | 2013-08-26 | 2016-04-12 | Adobe Systems Incorporated | Method and apparatus for analyzing and associating behaviors to image content |
KR102161212B1 (ko) | 2013-11-25 | 2020-09-29 | Hanwha Techwin Co., Ltd. | Motion detection system and method |
US9598011B2 (en) * | 2014-01-09 | 2017-03-21 | Northrop Grumman Systems Corporation | Artificial vision system |
US10290287B1 (en) * | 2014-07-01 | 2019-05-14 | Xilinx, Inc. | Visualizing operation of a memory controller |
CN104539864B (zh) * | 2014-12-23 | 2018-02-02 | Xiaomi Inc. | Method and device for recording images |
CN106296725B (zh) * | 2015-06-12 | 2021-10-19 | Fu Tai Hua Industry (Shenzhen) Co., Ltd. | Real-time moving-target detection and tracking method, and target detection device |
CN105719315B (zh) * | 2016-01-29 | 2019-01-22 | Shenzhen Institutes of Advanced Technology | Method for tracking an object in video images on a mobile terminal |
JP6526589B2 (ja) * | 2016-03-14 | 2019-06-05 | Toshiba Corporation | Image processing device and image processing program |
KR102553598B1 (ko) * | 2016-11-18 | 2023-07-10 | Samsung Electronics Co., Ltd. | Image processing apparatus and control method thereof |
US11405581B2 (en) | 2017-12-26 | 2022-08-02 | Pixart Imaging Inc. | Motion detection methods and image sensor devices capable of generating ranking list of regions of interest and pre-recording monitoring images |
JP7227969B2 (ja) * | 2018-05-30 | 2023-02-22 | Panasonic Intellectual Property Corporation of America | Three-dimensional reconstruction method and three-dimensional reconstruction device |
CN111105434A (zh) * | 2018-10-25 | 2020-05-05 | ZTE Corporation | Motion trajectory synthesis method and electronic device |
CN111192286A (zh) * | 2018-11-14 | 2020-05-22 | Xi'an ZTE New Software Co., Ltd. | Image synthesis method, electronic device, and storage medium |
CN109584575B (zh) * | 2018-12-19 | 2020-09-18 | Shandong Jiaotong University | Road safety speed-limit warning system and method based on visibility analysis |
US11277723B2 (en) * | 2018-12-27 | 2022-03-15 | Continental Automotive Systems, Inc. | Stabilization grid for sensors mounted on infrastructure |
US20210110552A1 (en) * | 2020-12-21 | 2021-04-15 | Intel Corporation | Methods and apparatus to improve driver-assistance vision systems using object detection based on motion vectors |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS62230180A (ja) * | 1986-03-31 | 1987-10-08 | Nippon Hoso Kyokai <Nhk> | Motion vector detection method |
JPH05137050A (ja) * | 1991-11-15 | 1993-06-01 | Sony Corp | Camera-shake correction device for images |
JPH09161071A (ja) * | 1995-12-12 | 1997-06-20 | Sony Corp | Region association device and region association method |
JPH09185720A (ja) * | 1995-12-28 | 1997-07-15 | Canon Inc | Image extraction device |
JP2000285245A (ja) * | 1999-03-31 | 2000-10-13 | Toshiba Corp | Collision prevention device and method for a moving body, and recording medium |
JP2001307104A (ja) * | 2000-04-26 | 2001-11-02 | Nippon Hoso Kyokai <Nhk> | Object extraction device for moving images |
JP2002133421A (ja) * | 2000-10-18 | 2002-05-10 | Fujitsu Ltd | Moving object recognition method and device |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6002428A (en) * | 1994-10-21 | 1999-12-14 | Sanyo Electric Co., Ltd. | Motion vector detection circuit and object tracking camera device utilizing the same |
JP3434979B2 (ja) * | 1996-07-23 | 2003-08-11 | Fujitsu Limited | Local area image tracking device |
KR100501902B1 (ko) * | 1996-09-25 | 2005-10-10 | Pantech & Curitel Communications, Inc. | Image information encoding/decoding apparatus and method |
KR100244291B1 (ko) * | 1997-07-30 | 2000-02-01 | Koo Bon-Joon | Video motion vector coding method |
WO2000019375A1 (en) * | 1998-09-29 | 2000-04-06 | Koninklijke Philips Electronics N.V. | Partition coding method and device |
US6968004B1 (en) * | 1999-08-04 | 2005-11-22 | Kabushiki Kaisha Toshiba | Method of describing object region data, apparatus for generating object region data, video processing method, and video processing apparatus |
US7367042B1 (en) * | 2000-02-29 | 2008-04-29 | Goldpocket Interactive, Inc. | Method and apparatus for hyperlinking in a television broadcast |
JP3920535B2 (ja) * | 2000-06-12 | 2007-05-30 | Hitachi, Ltd. | Vehicle detection method and vehicle detection device |
US20030161399A1 (en) * | 2002-02-22 | 2003-08-28 | Koninklijke Philips Electronics N.V. | Multi-layer composite objective image quality metric |
-
2002
- 2002-12-20 JP JP2002371047A patent/JP4217876B2/ja not_active Expired - Lifetime
-
2003
- 2003-12-15 KR KR1020057011609A patent/KR20050085842A/ko not_active Application Discontinuation
- 2003-12-15 CA CA002505563A patent/CA2505563A1/en not_active Abandoned
- 2003-12-15 CN CNB2003801069714A patent/CN100385462C/zh not_active Expired - Fee Related
- 2003-12-15 WO PCT/JP2003/016058 patent/WO2004057537A1/ja active Application Filing
- 2003-12-15 EP EP03778936.9A patent/EP1574992B1/en not_active Expired - Fee Related
- 2003-12-15 KR KR1020077010090A patent/KR20070065418A/ko not_active Application Discontinuation
- 2003-12-15 AU AU2003289096A patent/AU2003289096A1/en not_active Abandoned
- 2003-12-15 US US10/540,217 patent/US7929613B2/en not_active Expired - Fee Related
Non-Patent Citations (1)
Title |
---|
See also references of EP1574992A4 * |
Also Published As
Publication number | Publication date |
---|---|
KR20050085842A (ko) | 2005-08-29 |
KR20070065418A (ko) | 2007-06-22 |
EP1574992A4 (en) | 2009-11-11 |
JP4217876B2 (ja) | 2009-02-04 |
EP1574992A1 (en) | 2005-09-14 |
CA2505563A1 (en) | 2004-07-08 |
US7929613B2 (en) | 2011-04-19 |
CN100385462C (zh) | 2008-04-30 |
CN1729485A (zh) | 2006-02-01 |
JP2004207786A (ja) | 2004-07-22 |
EP1574992B1 (en) | 2015-03-04 |
AU2003289096A1 (en) | 2004-07-14 |
US20060092280A1 (en) | 2006-05-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2004057537A1 (ja) | Method and device for tracking moving objects in images | |
US7139411B2 (en) | Pedestrian detection and tracking with night vision | |
JP5102410B2 (ja) | Moving body detection device and moving body detection method | |
JP4782901B2 (ja) | Moving body detection device and moving body detection method | |
JP5136504B2 (ja) | Object identification device | |
JP5371040B2 (ja) | Moving object tracking device, moving object tracking method, and moving object tracking program | |
EP1345175B1 (en) | Method and apparatus for tracking moving objects in pictures | |
KR20220032681A (ko) | Parking management method for an on-street parking lot | |
JP2002133421A (ja) | Moving object recognition method and device | |
JP4543106B2 (ja) | Method and device for tracking moving objects in images | |
KR100566629B1 (ko) | Moving object detection system and method | |
JP3763279B2 (ja) | Object extraction system, object extraction method, and object extraction program | |
JP4818430B2 (ja) | Moving object recognition method and device | |
JP4923268B2 (ja) | Method and device for tracking moving objects in images | |
JP5165103B2 (ja) | Method and device for tracking moving objects in images | |
Jayanthi et al. | Deep learning-based vehicles tracking in traffic with image processing techniques | |
Wu et al. | Gradient map based Lane detection using CNN and RNN | |
Mudjirahardjo | A Study on Human Motion Detection-Toward Abnormal Motion Identification | |
CN115909497A (zh) | Human posture recognition method and device | |
Giardino et al. | MULTIPLE VEHICLE DETECTION AND TRACKING USING AN ADAPTIVE SYSTEM | |
Mu et al. | Multiple Vehicle Detection and Tracking in Highway |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2505563 Country of ref document: CA |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2003778936 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2006092280 Country of ref document: US Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020057011609 Country of ref document: KR Ref document number: 20038A69714 Country of ref document: CN Ref document number: 10540217 Country of ref document: US |
|
WWP | Wipo information: published in national office |
Ref document number: 1020057011609 Country of ref document: KR |
|
WWP | Wipo information: published in national office |
Ref document number: 2003778936 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 10540217 Country of ref document: US |