WO2004021281A1 - Object tracking device, object tracking method, and object tracking program - Google Patents
Object tracking device, object tracking method, and object tracking program
- Publication number
- WO2004021281A1 PCT/JP2003/010837
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- region
- information
- feature amount
- state
- correspondence
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
- G06V10/7515—Shifting the patterns to accommodate for positional errors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/62—Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
Definitions
- the present invention relates to an object tracking device, an object tracking method, and an object tracking program.
- the present invention relates to an object tracking device, an object tracking method, and an object tracking program that track objects based on image information, and more particularly to an object tracking device, object tracking method, and object tracking program that can continue tracking even when a plurality of objects overlap on the image. Background art
- An object tracking device that tracks an object such as a person based on image information input moment by moment from a video camera or the like needs to continue tracking the tracking target object even when a plurality of objects overlap on the image.
- Japanese Patent Application Laid-Open No. H06-169458 describes an object tracking device capable of continuing to track an object even when a plurality of objects intersect during tracking.
- FIG. 1 is a block diagram showing the configuration of a conventional object tracking device described in Japanese Patent Application Laid-Open No. H06-169458.
- the object tracking device includes: object region extracting means 41 for extracting an object region from image information periodically input from a video camera or the like to an image input terminal 40; tracking state detecting means 42 for comparing the extracted object region with the tracking target object region output by region selecting means 45 (described later) to detect the state of the tracking target object; feature amount generating means 43 for generating the feature amount of the object based on the image information and the object region extracted by the object region extracting means 41; object identifying means 44 for selecting the object region having the feature amount closest to the feature amount of the object input from the feature amount generating means 43; and region selecting means 45 for outputting the tracking target object region.
- the object region is a region including an object in the image.
- the state of the tracking target object output by the tracking state detecting means 42 includes an overlapping state, in which the tracking target object intersects with another object, and a tracking state, in which the tracking target object exists independently and is being tracked.
- the feature amount generating means 43 has storage means for storing the generated feature amount, and further includes updating means for updating the feature amount stored in the storage means.
- while the object is in the tracking state, the feature amount generating means 43 updates the feature amount stored in the storage means.
- while the object is in the overlapping state, the feature amount generating means 43 maintains the feature amount stored in the storage means without updating it.
- the object identifying means 44 selects the object region having the feature amount closest to the feature amount stored in the feature amount generating means 43 only when the state of the object is the overlapping state. When the state of the object changes from the overlapping state to the tracking state, the region selecting means 45 selects the object region selected by the object identifying means 44 as the tracking target object region. In all other cases, the region selecting means 45 selects, from all the object regions newly extracted by the object region extracting means 41, the object region closest to the position of the previous tracking target object region. The region selecting means 45 then outputs the selected object region as the new tracking target object region.
- the previous tracking target object region is, for example, the tracking target object region determined based on the image information one frame before the image information newly input from a video camera or the like.
- the object tracking device continues to track the object region using the newly input image information.
- when the state of the object changes from the tracking state to the overlapping state, the feature amount of the object immediately before the change is stored.
- when the state of the object returns from the overlapping state to the tracking state, the tracking of the object region is continued with the object region selected by the object identifying means 44 as the tracking target object region. Therefore, when the state of the object changes from the tracking state to the overlapping state and then returns to the tracking state, the tracking of the object region can be continued.
- even when a plurality of objects intersect, the tracking of the object region can be continued by associating each object with an object region based on the feature amount of the object immediately before it changed to the overlapping state.
- however, the conventional object tracking device may associate an object with the wrong object region. It selects, for each object, the region most similar to that object, which does not necessarily yield an association that is consistent for the objects as a whole.
- an object of the present invention is to provide an object tracking device, an object tracking method, and an object tracking program that do not erroneously associate an object with an object region even when a plurality of objects separate from a state of overlapping on the image, when a plurality of objects remain in one image region, or when the combination of objects overlapping on the image changes before and after the separation. Another object of the present invention is to make it possible, when associating objects with object regions, to select the association that is optimal for the objects as a whole.
- An object tracking device according to the present invention is an object tracking device that tracks objects based on image information, and includes: feature amount synthesizing means for generating a combined feature amount by combining object feature amounts, each indicating the feature amount of an object included in the image information; and associating means for associating an object region, which is a region extracted from the image information and containing objects, with the objects based on the similarity between the feature amount of the object region and the combined feature amount.
- preferably, the feature amount synthesizing means synthesizes feature amounts for all necessary combinations of a plurality of objects to generate respective combined feature amounts, and the associating means associates the objects with the object region by comparing each combined feature amount with the region feature amount indicating the feature amount of the object region.
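As an illustration of this idea, the sketch below (not from the patent; the function names and the use of area-normalized color histograms as feature amounts are assumptions) combines the feature amounts of a candidate subset of objects and scores the result against a region's feature amount:

```python
import numpy as np

def combine_features(histograms, weights=None):
    """Combine per-object (area-normalized) color histograms into one
    combined feature amount; `weights` plays the role of the synthesis
    ratio and defaults to equal proportions."""
    h = np.asarray(histograms, dtype=float)
    if weights is None:
        weights = np.full(len(h), 1.0 / len(h))
    combined = np.average(h, axis=0, weights=weights)
    return combined / combined.sum()

def similarity(a, b):
    """Histogram intersection: 1.0 means identical distributions."""
    return float(np.minimum(a, b).sum())
```

A region covering two overlapping objects should then score higher against the combination of both objects' features than against either object's feature alone.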
- the object tracking device may further include: object region extracting means for extracting an object region from the image information and outputting object region information including the image information of the object region; tracking state determining means for determining, for each object or each object region, a tracking state indicating the relative positional relationship with other objects; and feature amount generating means for generating the region feature amount and the object feature amounts using the image information, the object region information, and the determination result of the tracking state determining means.
- the feature amount synthesizing means generates the combined feature amount using the object feature amounts and the determination result of the tracking state determining means. According to such a configuration, a combined feature amount can be generated based on the relative positional relationship with other objects.
- the tracking state determining means may determine the tracking state for each object or each object region from the object region information and fixed correspondence information indicating the correspondence between object regions and objects before the current time, and output first region correspondence information indicating the correspondence among the object regions, the objects, and the tracking state. The feature amount generating means generates the region feature amount and the object feature amounts, and the feature amount synthesizing means generates, from the object feature amounts and the first region correspondence information, each combined feature amount that is a candidate to be associated with an object region (that is, it synthesizes the object feature amounts for the region), and outputs combined feature amount information, which includes each combined feature amount and the correspondence between the combined feature amount and the objects used to generate it.
- the associating means includes correspondence determining means for associating the objects with the object regions from the first region correspondence information, the region feature amount information, which is information indicating the region feature amount, and the combined feature amount information, and outputting the fixed correspondence information for the current time. According to such a configuration, it is possible to prevent an erroneous association between an object and an object region when a plurality of objects remain after separation or when objects are interchanged at an intersection.
- preferably, the tracking state is one of a single state, in which only one object exists in the object region; an overlapping state, in which a plurality of objects correspond to one object region; and a separation state, a transitional state in which one object region is dividing into a plurality of object regions. In this way, the relative positional relationship with other objects can be determined.
- the feature amount generating means may generate, as the region feature amount, one of, or a combination of, the color histogram of the object region, its area, an image template, and the color histogram normalized by the area; it may also obtain the object region corresponding to each object from the first region correspondence information and generate, as the object feature amount, one of, or a combination of, the color histogram, area, image template, and area-normalized color histogram of that object region.
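These feature amounts can be computed directly from the pixels of a region. The following sketch is illustrative (not from the patent; the pixel-array representation and bin count are assumptions) and produces the area, a joint RGB color histogram, and its area-normalized form:

```python
import numpy as np

def region_features(pixels, bins=8):
    """Feature amounts for one object region.

    `pixels`: (N, 3) array of RGB values inside the region mask.
    Returns the area, a joint color histogram, and the histogram
    normalized by the area.
    """
    area = len(pixels)
    # Quantize each channel into `bins` levels and build a joint histogram.
    q = (np.asarray(pixels) * bins // 256).clip(0, bins - 1)
    idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    hist = np.bincount(idx, minlength=bins ** 3).astype(float)
    return area, hist, hist / area
```

The area-normalized histogram is what makes feature amounts of differently sized objects comparable before they are combined.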
- the tracking state determining means may include: object region storing means for storing the object region information; object tracking means for tracking the objects from the object region information, the fixed correspondence information, and the object region information before the current time output from the object region storing means, and outputting second region correspondence information indicating the correspondence between the objects and the object regions; and state determining means for determining the tracking state of the objects from the second region correspondence information, the object region information, and the object region information before the current time, and outputting the first region correspondence information.
- the state determining means may classify the objects and the object regions into a plurality of classes based on the second region correspondence information and the object region information, using the correspondence between objects and object regions, the distance between object regions, the duration for which object regions have been separated, or a combination thereof: objects whose corresponding object regions share a common region are grouped, and those objects and their corresponding object regions are classified into one class, while an object whose corresponding object region differs from that of every other object is classified, together with its corresponding object region, into its own class. The tracking state may then be determined based on the classified classes.
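The grouping step can be realized with a small union-find pass. The sketch below is illustrative; the dict-of-sets input is a stand-in for the second region correspondence information:

```python
from collections import defaultdict

def classify(correspondence):
    """Group objects into classes: objects whose corresponding object
    regions share a region fall into the same class.

    `correspondence`: dict mapping object id -> set of region ids.
    Returns a list of (objects, regions) classes.
    """
    parent = {o: o for o in correspondence}

    def find(o):
        while parent[o] != o:
            parent[o] = parent[parent[o]]  # path halving
            o = parent[o]
        return o

    # Union objects that have any region in common.
    by_region = defaultdict(list)
    for obj, regions in correspondence.items():
        for r in regions:
            by_region[r].append(obj)
    for objs in by_region.values():
        for other in objs[1:]:
            parent[find(other)] = find(objs[0])

    classes = defaultdict(lambda: (set(), set()))
    for obj, regions in correspondence.items():
        objs, regs = classes[find(obj)]
        objs.add(obj)
        regs.update(regions)
    return list(classes.values())
```

An object sharing no region with any other object ends up alone in its own class, matching the second classification rule above.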
- the tracking state includes the separation state, a transitional state in which an object region is dividing into a plurality of object regions. The state determining means may determine that a classified class satisfies the condition of being in the separation state when the class includes two or more object regions and, when that condition is satisfied, determine the tracking state of the objects and object regions included in the class to be the separation state.
- the state determining means may determine the tracking state of the objects and object regions included in a class to be the separation state when the class satisfies the condition of being in the separation state and, in addition, the class includes two or more objects and the distance between the object regions included in the class is equal to or greater than a predetermined threshold, the duration for which the object regions included in the class have been separated is equal to or greater than a predetermined threshold, or a combination of these holds. According to such a configuration, it is possible to prevent the relative positional relationship with other objects from being erroneously determined to be the separation state.
- the tracking state includes the separation state and a single state, in which only one object exists in the object region. When a classified class includes a single object and the tracking state of the objects and object regions included in the class is not the separation state, the state determining means may determine the tracking state of the object and object region included in the class to be the single state.
- the tracking state includes the separation state and an overlapping state, in which a plurality of objects correspond to one object region. When a classified class includes two or more objects and the tracking state of the objects and object regions included in the class is not the separation state, the state determining means may determine the tracking state of the objects and object regions included in the class to be the overlapping state.
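Taken together, these rules amount to a small decision function per class. The sketch below is illustrative and omits the optional distance and duration thresholds:

```python
def tracking_state(num_objects, num_regions):
    """Decide the tracking state of one class from its contents.

    A class holding two or more object regions is in the transitional
    separation state; otherwise one object means the single state, and
    several objects sharing one region means the overlapping state.
    """
    if num_regions >= 2:
        return "separation"
    if num_objects == 1:
        return "single"
    return "overlapping"
```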
- the feature amount generating means may include: feature amount extracting means for extracting the region feature amount from the image information, the object region information, and the first region correspondence information, and outputting region feature amount information, which is information indicating the region feature amount; feature amount storing means for storing the object feature amounts and selecting a stored object feature amount according to a request; and feature amount updating means for updating the object feature amounts stored in the feature amount storing means from the region feature amount information, the first region correspondence information, the fixed correspondence information, and the object feature amounts generated before the current time.
- the tracking state includes the separation state, a transitional state in which an object region is dividing into a plurality of object regions. When the tracking state of an object region is other than the separation state, the feature amount extracting means may include, in the region feature amount information, information indicating that association with an object is unnecessary for that region, and the correspondence determining means may exclude from the association any object region for which the region feature amount information indicates that association with an object is unnecessary. According to such a configuration, the amount of calculation required for computing similarities can be reduced.
- the tracking state includes the single state, in which only one object exists in the object region. The feature amount updating means may determine from the first region correspondence information or the fixed correspondence information whether the tracking state of the object is the single state and, when it is not, refrain from updating the object feature amount stored in the feature amount storing means.
- the feature amount synthesizing means may determine all combinable combinations of objects and object regions based on the object feature amounts generated by the feature amount generating means and the first region correspondence information, and synthesize the object feature amounts only for the determined combinations of objects and object regions to generate the combined feature amounts. According to such a configuration, the generation of unnecessary combined feature amounts can be reduced.
- the feature amount synthesizing means may obtain a synthesis ratio, which is a coefficient for correcting the proportions in which the object feature amounts are combined, and generate the combined feature amount from the synthesis ratio and the object feature amounts. According to such a configuration, when the size of an object on the image differs from the actual size of the object, the error based on the size of the object can be corrected.
- the feature amount synthesizing means may receive the region feature amount together with the object feature amounts from the feature amount generating means, calculate combined feature amounts for arbitrary synthesis ratios based on the input region feature amount information and the object feature amounts, and output the combined feature amount corresponding to the synthesis ratio at which the similarity between the calculated combined feature amount and the region feature amount is highest. According to such a configuration, the similarity for each combination of objects and object region needs to be calculated for only one combined feature amount, and unnecessary calculation can be omitted.
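For two objects, this search can be sketched as a one-dimensional sweep over the synthesis ratio. The code is illustrative (not from the patent); histogram intersection is assumed as the similarity measure:

```python
import numpy as np

def best_ratio_feature(region_hist, obj_hists, steps=20):
    """Sweep the synthesis ratio for two objects and return the
    combined feature amount most similar to the region feature.

    The ratio r weights the first object's (area-normalized)
    histogram against the second's.
    """
    best = (-1.0, None, None)
    a, b = (np.asarray(h, dtype=float) for h in obj_hists)
    for r in np.linspace(0.0, 1.0, steps + 1):
        combined = r * a + (1.0 - r) * b
        sim = float(np.minimum(combined, region_hist).sum())  # histogram intersection
        if sim > best[0]:
            best = (sim, r, combined)
    return best  # (similarity, ratio, combined feature amount)
```

Only the winning combined feature amount is then passed on, so each object/region combination contributes a single similarity to the later correspondence calculation.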
- the tracking state includes the separation state, a transitional state in which an object region is dividing into a plurality of object regions, and the feature amount synthesizing means may generate combined feature amounts only for object regions whose tracking state is indicated to be the separation state. According to such a configuration, the generation of unnecessary combined feature amounts can be reduced.
- the object feature amount may include the area of the object, and the feature amount synthesizing means may obtain the synthesis ratio, which is a coefficient for correcting the proportions in which the object feature amounts are combined, from the areas of the objects included in the object feature amounts, and generate the combined feature amount from the synthesis ratio and the object feature amounts. According to such a configuration, the amount of calculation required to obtain the synthesis ratio can be reduced.
- the feature amount synthesizing means may limit the synthesis ratio to a predetermined range based on the change in the area of the object. According to such a configuration, a correct synthesis ratio can be obtained even when the area of the object on the image changes.
- the feature amount synthesizing means may receive the region feature amount together with the object feature amounts from the feature amount generating means, generate combined feature amounts within the range of variation of the objects' areas based on the input region feature amount and the object feature amounts, and output the combined feature amount having the highest similarity with the object region. According to such a configuration, only one combined feature amount needs to be output for each combination of objects, and unnecessary processing can be omitted.
- the object feature amount may include an image template, and the feature amount synthesizing means may determine the depth ordering of the objects from the image templates and the region feature amount, and combine the image templates based on the determined depth ordering. According to such a configuration, even when an object is partially hidden behind another object, the correspondence between the objects and the object region can be calculated correctly.
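Combining templates under a known depth ordering amounts to painting the nearest object first and letting it occlude the others. The sketch below is illustrative (the `(image, mask)` representation on a common canvas is an assumption):

```python
import numpy as np

def composite_templates(templates, order):
    """Combine image templates respecting depth order.

    `templates`: dict of object id -> (image, mask) pairs on a common
    canvas; `order` lists object ids from nearest to farthest.
    Pixels already covered by a nearer object stay hidden.
    """
    first = next(iter(templates.values()))[0]
    canvas = np.zeros_like(first)
    covered = np.zeros(first.shape[:2], dtype=bool)
    for obj in order:  # paint near objects first; they occlude the rest
        image, mask = templates[obj]
        visible = mask & ~covered
        canvas[visible] = image[visible]
        covered |= mask
    return canvas, covered
```

Comparing composites for each candidate ordering against the region's appearance is one way to recover the depth ordering itself.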
- the correspondence determining means may include: correspondence calculating means for obtaining, based on the combined feature amount information, the region feature amount information, and the first region correspondence information, the most similar combination from all possible combinations of objects and object regions, selecting the obtained combination of objects and object regions as the optimum correspondence, and generating optimum correspondence information indicating the optimum correspondence between the objects and the object regions; and correspondence deciding means for deciding the correspondence between the objects and the object regions based on the first region correspondence information and the optimum correspondence information, and outputting fixed correspondence information, which is information including the decided correspondence between the objects and the object regions. According to such a configuration, when associating objects with object regions, the optimum correspondence between the objects and the object regions can be selected.
- the correspondence calculating means may obtain, for each combination, the total similarity, which is the sum of the similarities between the feature amounts of the object regions and the combined feature amounts in that combination, and take as the most similar combination the combination with the highest total similarity among all possible combinations.
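Maximizing the total similarity, rather than choosing the best region for each object independently, is what makes the association consistent for the objects as a whole. A brute-force sketch under illustrative names:

```python
from itertools import permutations

def most_similar_assignment(similarity):
    """Find the assignment of objects to regions whose total similarity
    (sum of per-pair similarities) is highest, by brute force.

    `similarity[i][j]` is the similarity between object i's (combined)
    feature amount and region j's feature amount; assumes as many
    regions as objects.
    """
    n = len(similarity)
    best_total, best_perm = float("-inf"), None
    for perm in permutations(range(n)):
        total = sum(similarity[i][perm[i]] for i in range(n))
        if total > best_total:
            best_total, best_perm = total, perm
    return best_perm, best_total
```

With the similarity matrix `[[0.9, 0.8], [0.85, 0.1]]`, greedy per-object choice pairs object 0 with region 0 and forces object 1 onto a poor match (total 1.0), while the exhaustive search returns the swapped assignment with total 1.65.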
- the first region correspondence information may include stillness information indicating whether an object region is stationary or moving, and the correspondence calculating means may use the stillness information to exclude combinations of object regions and objects from the set of all possible combinations. According to such a configuration, by considering only the combinations in which an object region indicated as stationary is not associated with any object, the amount of calculation of the correspondence calculating means can be reduced.
- when the combination similarity of the most similar combination, obtained from the similarities of the object and object region pairs constituting that combination, exceeds that of the other combinations by no more than a predetermined threshold, the correspondence calculating means may select, from all possible combinations, every combination whose combination similarity lies within the predetermined threshold of the maximum combination similarity. The correspondences between objects and object regions that are common to all the selected combinations are included in the optimum correspondence information as the optimum correspondence, while objects and object regions not included in the common correspondences are marked in the optimum correspondence information as having no optimum correspondence. For objects and object regions indicated as having no optimum correspondence, the correspondence deciding means outputs, as the fixed correspondence information, the correspondence between objects and object regions included in the first region correspondence information; otherwise it outputs the correspondence included in the optimum correspondence information. According to such a configuration, the selection of an erroneous correspondence between an object and an object region can be prevented.
- the tracking state includes the separation state, a transitional state in which an object region is dividing into a plurality of object regions, and the correspondence deciding means may decide the correspondence between the objects and the object regions indicated in the optimum correspondence information only for object regions whose tracking state is indicated to be the separation state. According to such a configuration, the processing for deciding the correspondence between objects and object regions can be reduced.
- the tracking state includes the separation state, a transitional state in which an object region is dividing into a plurality of object regions, and, for object regions whose tracking state is indicated to be a state other than the separation state, the correspondence deciding means may output, as the fixed correspondence information, the correspondence between the objects and the object regions included in the first region correspondence information. According to such a configuration, the processing for deciding the correspondence between objects and object regions can be reduced.
- An object tracking method according to the present invention is an object tracking method for tracking objects based on image information, in which a combined feature amount is generated by combining object feature amounts, each indicating the feature amount of an object included in the image information, and an object region, which is a region extracted from the image information and containing objects, is associated with the objects based on the similarity between the feature amount of the object region and the combined feature amount.
- preferably, feature amounts are synthesized for all necessary combinations of a plurality of objects to generate respective combined feature amounts, and the objects are associated with the object region by comparing each generated combined feature amount with the region feature amount indicating the feature amount of the object region. According to such a method, when associating objects with object regions, the optimum correspondence between the objects and the object regions can be selected.
- the object tracking method preferably extracts an object region from the image information, outputs object region information including the image information of the object region, determines for each object or each object region a tracking state indicating the relative positional relationship with other objects, generates the region feature amount and the object feature amounts using the image information, the object region information, and the determination result, and generates the combined feature amount using the object feature amounts and the determination result. According to such a method, a combined feature amount can be generated based on the relative positional relationship with other objects.
- the tracking state may be determined for each object or each object region from the object region information and fixed correspondence information indicating the correspondence between object regions and objects before the current time, and first region correspondence information indicating the correspondence among the object regions, the objects, and the tracking state may be output. Region feature amount information, which is information indicating the region feature amount, and the object feature amounts are obtained from the image information at the current time, the first region correspondence information, and the fixed correspondence information. From the object feature amounts and the first region correspondence information, each combined feature amount that is a candidate to be associated with an object region is generated, and combined feature amount information, which is information including each combined feature amount and the correspondence between the combined feature amount and the objects used to generate it, is output. The objects are then associated with the object regions from the first region correspondence information, the region feature amount information, and the combined feature amount information.
- the tracking state includes a single state, in which only one object exists in the object region; an overlapping state, in which a plurality of objects correspond to one object region; and a separation state, a transitional state in which the object region is dividing into a plurality of object regions.
- the object tracking method may generate, as the region feature amount, one of, or a combination of, the color histogram of the object region, its area, an image template, and the color histogram normalized by the area; it may also obtain the object region corresponding to each object from the first region correspondence information and generate, as the object feature amount, one of, or a combination of, the color histogram, area, image template, and area-normalized color histogram of that object region.
- the object tracking method may store the object region information, track the objects based on the object region information, the fixed correspondence information, and the object region information before the current time, output second region correspondence information indicating the correspondence between the objects and the object regions, determine the tracking state of the objects from the second region correspondence information, the object region information, and the object region information before the current time, and output the first region correspondence information.
- The object tracking method may group objects based on the second region correspondence information and the object region information, using the correspondence between objects and object regions, the distance between object regions, the duration of separation of the object regions, or a combination thereof.
- Objects whose corresponding object regions share a common region are classified, together with those object regions, into one class; an object whose corresponding object region is not shared with any other object is classified, together with that object region, into its own class. The objects and object regions are thus classified into a plurality of classes, and the tracking state may be determined based on the classified classes.
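The grouping rule above (objects sharing any object region fall into one class) can be sketched as a union-find pass over the object-to-region correspondences. This is an illustrative implementation, not the patent's; all names (`group_into_classes`, `obj_to_regions`) are hypothetical.

```python
from collections import defaultdict

def group_into_classes(obj_to_regions):
    """Group objects that share any object region into one class.

    obj_to_regions: dict mapping object id -> set of region ids.
    Returns a list of (objects, regions) pairs, one per class.
    """
    # Union-find over object ids.
    parent = {o: o for o in obj_to_regions}

    def find(o):
        while parent[o] != o:
            parent[o] = parent[parent[o]]  # path halving
            o = parent[o]
        return o

    def union(a, b):
        parent[find(a)] = find(b)

    # Objects whose corresponding regions intersect belong together.
    region_owner = {}
    for obj, regions in obj_to_regions.items():
        for r in regions:
            if r in region_owner:
                union(obj, region_owner[r])
            else:
                region_owner[r] = obj

    classes = defaultdict(lambda: (set(), set()))
    for obj, regions in obj_to_regions.items():
        objs, regs = classes[find(obj)]
        objs.add(obj)
        regs.update(regions)
    return list(classes.values())
```

For example, objects A and B sharing region 1 form one class, while object C with only region 3 forms its own class, matching the classification described above.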
- the tracking state includes a separation state, which is a transient state in which an object region is divided into a plurality of object regions.
- The object tracking method may determine that a class satisfies the condition for the separated state when the classified class includes two or more object regions, and, when that condition is satisfied, determine the tracking state of the objects and object regions included in the class to be the separated state.
- The object tracking method may determine the tracking state of the objects and object regions included in a class to be the separated state when the class satisfies the condition for the separated state, the class includes two or more objects, and the distance between the object regions included in the class is equal to or greater than a predetermined threshold, the duration of separation of the object regions included in the class is equal to or greater than a predetermined threshold, or a combination thereof holds. According to such a method, the relative positional relationship with other objects can be prevented from being erroneously determined to be the separated state.
- the tracking state includes a separated state and a single state in which only one object exists in the object region.
- The object tracking method may determine the tracking state of the object and the object region included in a class to be the single state when the classified class includes one object and the tracking state of that object and object region is not the separated state.
- the tracking state includes the separated state and an overlapping state in which a plurality of objects correspond to one object region.
- The object tracking method may determine the tracking state of the objects and the object region included in a class to be the overlapping state when the classified class includes two or more objects and the tracking state of those objects and the object region is not the separated state.
- The object tracking method may extract region feature amounts from the image information, the object region information, and the first region correspondence information; output region feature amount information indicating the region feature amounts; store object feature amounts;
- select and output the stored object feature amounts, selecting the object feature amounts to be stored based on the region feature amount information and the first region correspondence information or the fixed correspondence information;
- and update the stored object feature amounts from the region feature amount information, the fixed correspondence information, and the object feature amounts generated before the current time.
- The object tracking method may be configured such that, where the tracking state includes the separation state (the transient state in which an object region is divided into a plurality of object regions), information indicating that association with an object is unnecessary is included in the region feature amount information for object regions whose tracking state is other than the separation state, and object regions so indicated are excluded, using the region feature amount information, from the targets of association. According to such a method, the amount of calculation for computing similarities can be reduced.
- The tracking state includes the single state in which only one object exists in the object region; whether the tracking state of an object is the single state may be determined from the first region correspondence information or the fixed correspondence information, and if the tracking state of the object is not the single state, the stored object feature amount need not be updated. According to such a method, unnecessary update processing of object feature amounts can be reduced.
- The object tracking method may be configured to determine, based on the generated object feature amounts and the first region correspondence information, all possible combinations of objects and object regions, and to generate combined feature amounts by combining object feature amounts only for the determined combinations. According to such a method, the generation of unnecessary combined feature amounts can be reduced.
- the object tracking method may be configured to obtain a combination ratio, which is a coefficient for correcting the ratio of combining the object feature amounts, and generate the combined feature amount from the combination ratio and the object feature amount. According to such a method, when the size of the object on the image is different from the actual size of the object, an error based on the size of the object can be corrected.
- A region feature amount may be input together with the object feature amounts, combined feature amounts may be calculated at arbitrary combination ratios based on the input region feature amount information and the object feature amounts, and the combined feature amount at the combination ratio that maximizes the similarity between the calculated combined feature amount and the region feature amount may be output. According to such a configuration, similarity need be calculated for only one combined feature amount per combination of an object and an object region, and unnecessary calculation processing can be omitted.
- The object tracking method may be configured to generate combined feature amounts only for object regions whose tracking state is indicated to be the separation state, the transient state in which an object region is divided into a plurality of object regions. According to such a method, the generation of unnecessary combined feature amounts can be reduced.
- The object feature amount may include the area of the object, and a combination ratio, which is a coefficient for correcting the ratio at which object feature amounts are combined, may be obtained from the areas included in the object feature amounts to generate the combined feature amount from the object feature amounts. According to such a method, the amount of calculation for obtaining the combination ratio can be reduced.
- The object tracking method may be configured to limit the combination ratio to a predetermined range based on the change in the area of the object. According to such a method, a correct combination ratio can be obtained even when the area of the object on the image changes.
- A region feature amount may be input together with the object feature amounts, combined feature amounts may be generated from the input region feature amount and the object feature amounts within the range of area change of the objects, and the combined feature amount with the highest similarity to the object region may be output. According to such a configuration, only one combined feature amount need be output per combination of objects, and unnecessary processing can be omitted.
- The object feature amount may include an image template, and the feature amount synthesizing unit may determine the front-to-back relationship of the objects from the image templates and the region feature amount, and synthesize the image templates based on the determined front-to-back relationship of the objects. According to such a configuration, even when an object is partially hidden behind another object, the correspondence between objects and object regions can be calculated correctly.
- The object tracking method may obtain, based on the combined feature amount information, the region feature amount information, and the first region correspondence information, the most similar combination among all possible combinations of objects and object regions,
- select the obtained combination of objects and object regions as the optimal association, and generate optimal correspondence information indicating the optimal correspondence between objects and object regions.
- The configuration may be such that the correspondence between objects and object regions is fixed based on the optimal correspondence information, and fixed correspondence information, which is information including the fixed correspondence between objects and object regions, is output.
- A total similarity, which is the sum of the similarities between the feature amounts of the object regions and the combined feature amounts in each combination, may be computed,
- and the combination with the highest total similarity may be taken as the most similar combination.
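The total-similarity selection above can be sketched as an exhaustive search over candidate assignments of combined features to regions. Histogram intersection is used here as an illustrative similarity measure (the patent does not fix one), and all names are hypothetical.

```python
from itertools import permutations

def histogram_intersection(h1, h2):
    """Similarity between two feature vectors (illustrative choice)."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def best_association(region_feats, combined_feats):
    """Pick the assignment of combined features to regions whose total
    similarity (sum over regions) is highest.

    region_feats: list of region feature vectors.
    combined_feats: dict mapping candidate object grouping -> feature vector.
    Returns (best assignment as a tuple of groupings, total similarity).
    """
    best, best_total = None, float('-inf')
    for assign in permutations(combined_feats, len(region_feats)):
        total = sum(
            histogram_intersection(region_feats[i], combined_feats[g])
            for i, g in enumerate(assign))
        if total > best_total:
            best, best_total = assign, total
    return best, best_total
```

An exhaustive search is tractable here only because, as the surrounding text notes, the candidate combinations are already restricted by the first region correspondence information.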
- The first region correspondence information may include static-state information indicating whether each object region is stationary or moving, and, based on this information,
- combinations in which an object region indicated as stationary is associated with any object may be excluded from the possible combinations. According to such a configuration, only combinations in which stationary object regions are not associated with any object need be evaluated, so the amount of calculation can be reduced.
- When the combination similarity obtained from the similarities between the objects and object regions constituting the combination determined to be most similar is equal to or less than a predetermined threshold, the object tracking method may select, from all possible combinations of objects and object regions, those combinations whose combination similarity is within a predetermined threshold of the maximum combination similarity.
- The correspondences between objects and object regions common to all selected combinations are included in the optimal correspondence information as optimal correspondences, and for the objects and object regions not covered by those common correspondences,
- information indicating that no optimal correspondence exists between the object and the object region is included in the optimal correspondence information. For objects not marked as having no optimal correspondence, the correspondences between objects and object regions included in the optimal correspondence information are output as fixed correspondence information; for objects marked in the optimal correspondence information as having no optimal correspondence,
- the correspondences between objects and object regions included in the first region correspondence information may be output as fixed correspondence information. According to such a method, an incorrect correspondence between an object and an object region can be prevented from being selected.
- The object tracking method may be configured to fix the correspondence between objects and object regions as indicated by the optimal correspondence information only for object regions whose tracking state is indicated to be the separation state. According to such a method, the processing for fixing the correspondence between objects and object regions can be reduced.
- The object tracking method may be configured to output, as fixed correspondence information, the correspondence between objects and object regions included in the first region correspondence information only for object regions whose indicated tracking state is other than the separated state. According to such a method, the processing for fixing the correspondence between objects and object regions can be reduced.
- An object tracking program is a program that tracks objects based on image information, and causes a computer to execute: a process of inputting image information; a process of generating combined feature amounts by combining the object feature amounts indicating the features of the objects included in the input image information; and a process of associating object regions with objects based on the similarity between the combined feature amounts and the feature amounts of the object regions, which are regions including objects extracted from the image information.
- An object tracking program for associating objects with object regions in input image information may cause a computer to execute: a process of inputting image information; a process of extracting object regions and outputting object region information including image information of the object regions; a process of determining, for each object or each object region, the correspondence between object regions and objects and the tracking state from the object region information and the fixed correspondence information indicating the correspondence between object regions and objects before the current time, and outputting first region correspondence information indicating that correspondence and tracking state;
- a process of outputting combined feature amount information using the image information, the object region information, and the first region correspondence information; and a process of associating objects with object regions from the first region correspondence information, the region feature amount information, and the combined feature amount information, and outputting fixed correspondence information for the current time.
- An object tracking program for associating objects with object regions in input image information may cause a computer to execute: a process of inputting image information; a process of determining, for each object or each object region, the tracking state and the correspondence between object regions and objects from the fixed correspondence information indicating the correspondence between object regions and objects before the current time, and outputting first region correspondence information indicating that correspondence and tracking state;
- a process of generating region feature amounts indicating the features of object regions and object feature amounts indicating the features of objects; a process of fixing, for objects whose tracking state is determined to be other than the separation state (the transient state in which an object region is divided into a plurality of object regions), the correspondence between objects and object regions included in the first region correspondence information; and a process of combining object feature amounts for objects whose tracking state is determined to be the separated state,
- and associating the object corresponding to the combined feature amount with the highest similarity with the object region.
- the present invention has the following effects.
- The feature amounts of a plurality of objects are combined, and the similarity of feature amounts between objects and object regions is obtained using the combined feature amounts, so objects can be tracked correctly even when they switch positions as they cross. In particular, the correspondence between objects and object regions when an overlapping state is resolved can be determined without error.
- FIG. 1 is a block diagram showing the configuration of a conventional object tracking device.
- FIG. 2 is a block diagram illustrating an example of the configuration of the object tracking device according to the present invention.
- FIG. 3 is a block diagram illustrating an example of the configuration of the association unit.
- FIG. 4 is a flowchart showing an example of the processing progress of the object tracking device according to the present invention.
- FIG. 5 is an explanatory diagram showing an example of the first area correspondence information.
- FIG. 6 is a block diagram illustrating an example of the configuration of the tracking state determination unit.
- FIG. 7 is an explanatory diagram showing an example of the second area correspondence information.
- FIG. 8 is a block diagram illustrating an example of the configuration of the feature amount generation unit.
- FIG. 9 is a block diagram showing an example of the configuration of the correspondence determination unit.
- FIG. 10 is an explanatory diagram showing an example of a feature amount synthesizing method using a template.
- FIG. 11 is a block diagram showing another example of the configuration of the object tracking device according to the present invention. FIG. 12 is a block diagram showing still another example of the configuration of the object tracking device according to the present invention.
- FIG. 13 is a block diagram illustrating an example of a configuration of the first control unit.
- FIG. 14 is a flowchart showing another example of the processing progress of the object tracking device according to the present invention.
- FIG. 2 is a block diagram showing an example of the configuration of an object tracking device according to the present invention.
- The object tracking device includes an image input terminal 11 to which image information is input moment by moment from a video camera or the like; an object region extraction unit 12 that extracts object regions from the image information input to the image input terminal 11;
- a feature amount generation unit 14 that generates the feature amounts of objects and object regions based on the image information input to the image input terminal 11; a feature amount synthesis unit 15 that synthesizes the feature amounts of a plurality of objects;
- and an associating unit 17 that associates objects with object regions.
- The object region extraction unit 12, the feature amount generation unit 14, the feature amount synthesis unit 15, and the associating unit 17 can be realized by hardware, but can also be realized by software. That is, they can be realized by a CPU that executes processing based on a program, together with a program stored in a storage device that implements the functions of the object region extraction unit 12, the feature amount generation unit 14, the feature amount synthesis unit 15, and the associating unit 17 described below.
- The object tracking device continuously captures images of a predetermined range (hereinafter referred to as the tracking range) using a video camera or the like, and recognizes the objects in the captured images to track them.
- a video camera is fixed so that it does not move, and the video camera continuously captures a predetermined range of images.
- the video camera may be capable of changing the shooting range in the vertical and horizontal directions.
- another image capturing device that captures images at fixed time intervals may be used.
- a device that captures image information as still images, such as a digital camera, may be used.
- the object area is an area including an object in an image.
- The object region is extracted, for example, as follows. The tracking range is photographed in advance by the video camera or the like when no object is present, and the object region extraction unit 12 stores the captured image information in a background image storage unit as background image information. The object region extraction unit 12 then compares the image information input to the image input terminal 11 with the background image information stored in the background image storage unit pixel by pixel, and calculates the difference between pixel values. Pixels whose calculated difference is equal to or larger than a predetermined threshold (hereinafter referred to as object pixels) are extracted. The object region extraction unit 12 selects all adjacent object pixels among the extracted object pixels and extracts a region by connecting the selected object pixels. Finally, the object region extraction unit 12 assigns a region number to the extracted region and outputs it as an object region.
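The extraction procedure above (background subtraction, thresholding, then connecting adjacent object pixels into numbered regions) can be sketched as follows. This is a minimal pure-Python illustration with grayscale pixels and 4-connectivity; all names are illustrative, not from the patent.

```python
def extract_object_regions(image, background, threshold):
    """Background subtraction followed by connected-component labeling.

    image, background: 2-D lists of pixel values of the same size.
    Returns a dict mapping region number -> list of (row, col) object pixels.
    """
    rows, cols = len(image), len(image[0])
    # Object pixels: difference from the background at or above the threshold.
    mask = [[abs(image[r][c] - background[r][c]) >= threshold
             for c in range(cols)] for r in range(rows)]
    regions, seen, label = {}, set(), 0
    for r in range(rows):
        for c in range(cols):
            if not mask[r][c] or (r, c) in seen:
                continue
            label += 1          # assign a new region number
            stack, pixels = [(r, c)], []
            seen.add((r, c))
            while stack:        # connect all adjacent object pixels
                y, x = stack.pop()
                pixels.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and mask[ny][nx] and (ny, nx) not in seen):
                        seen.add((ny, nx))
                        stack.append((ny, nx))
            regions[label] = pixels
    return regions
```

In practice a library routine (e.g. a connected-component labeler) would replace the flood fill, but the structure mirrors the steps described above.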
- the background image storage unit may store a plurality of pieces of background image information instead of storing only one piece of background image information.
- the background image storage unit stores background image information captured during the day (hereinafter, referred to as daytime image) and background image information captured at night (hereinafter, nighttime image). ) May be stored, and daytime images may be used during the day, and nighttime images may be used at night.
- the object tracking device does not have a background image storage unit, but instead uses information such as the shape, color, and motion of the object as object model information in advance. It may have an object model storage device for storing. Then, the object region extraction unit 12 may collate the input image information with the object model stored in the object model storage device and extract a region matching the object model as the object region.
- the feature amount is an amount used when associating an object with an object region.
- the feature amount of the object region indicates a feature amount obtained from each object region, and the feature amount of the object indicates a feature amount obtained from a set of object regions constituting the object.
- the feature amount generation unit 14 calculates, for example, a color histogram of the object area. Then, the calculated color histogram of the object region is output as the feature amount of the object region.
- the feature amount of the object region may be the area of the object region, or may be an image template or the like. Also, a color histogram that is normalized by area may be used. Further, a combination of two or more of a color histogram, an area, an image template, and a color histogram normalized by the area may be used.
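Of the region feature amounts listed above, the color histogram and its area-normalized variant can be sketched as follows. This is an illustrative single-channel version with an arbitrary bin count; the patent does not prescribe a bin layout or color space.

```python
def color_histogram(pixels, bins=8):
    """Color histogram of an object region and its area-normalized variant.

    pixels: iterable of pixel values in 0..255 (single channel for brevity).
    Returns (histogram, histogram normalized by the region's area).
    """
    hist = [0] * bins
    for v in pixels:
        hist[min(v * bins // 256, bins - 1)] += 1
    area = len(pixels)                  # the region's area in pixels
    normalized = [h / area for h in hist]
    return hist, normalized
```

The normalized histogram sums to 1 regardless of region size, which is what makes it comparable across regions of different areas.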
- The feature amount of the object may likewise be a color histogram, an area, an image template, a color histogram normalized by the area, or a combination thereof.
- the feature amount generation unit 14 calculates the feature amount of the object based on the feature amount of the object region.
- The feature amount of an object is determined based on the feature amounts of its object regions; when the object is composed of a plurality of object regions, the feature amount of the object is determined from the sum of the feature amounts of those object regions.
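The summation rule above can be sketched directly; here the per-region features are additive vectors such as unnormalized histograms. Function and parameter names are illustrative.

```python
def object_feature(region_feats, region_ids):
    """Feature of an object spanning several object regions: the sum of
    the feature vectors of those regions.

    region_feats: dict mapping region id -> feature vector (list of numbers).
    region_ids: the regions that constitute the object.
    """
    total = None
    for rid in region_ids:
        feat = region_feats[rid]
        if total is None:
            total = list(feat)                       # copy the first vector
        else:
            total = [a + b for a, b in zip(total, feat)]  # element-wise sum
    return total
```

Note that this additivity is why unnormalized histograms and areas combine cleanly, while area-normalized histograms would need re-normalizing after the sum.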
- The object region extraction unit 12 is a specific example of object region extraction means that extracts object regions from image information and outputs object region information including image information of the object regions.
- the feature amount generation unit 14 is a specific example of a feature amount generation unit that generates an area feature amount and an object feature amount.
- The feature amount synthesis unit 15 is a specific example of feature amount synthesizing means that synthesizes object feature amounts for all required combinations of a plurality of objects to generate the respective combined feature amounts.
- FIG. 3 is a block diagram illustrating an example of the configuration of the association unit 17.
- the association unit 17 includes a tracking state determination unit 13 and a correspondence determination unit 16.
- the tracking state determination unit 13 determines the tracking state by associating the object with the object area.
- the correspondence determination unit 16 determines the correspondence between the object and the object region based on the tracking state of the object.
- The tracking state indicates the positional relationship relative to other objects, such as overlapping with or separating from another object.
- Tracking states include, for example, a single state in which only one object exists in the object region, an overlapping state in which a plurality of objects correspond to one object region, and a separation state, a transient state in which an object region is divided into a plurality of object regions.
- That is, in the present embodiment, the tracking state includes the single state, the overlapping state, and the separation state. In general, the tracking state transitions from the overlapping state to the single state via the separation state.
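The three states and the typical overlap → separation → single transition noted above can be written down as a small state model. The transition table here is an illustrative reading of the text, not an exhaustive specification from the patent.

```python
from enum import Enum

class TrackingState(Enum):
    SINGLE = "single"          # only one object exists in the object region
    OVERLAP = "overlap"        # a plurality of objects correspond to one region
    SEPARATION = "separation"  # transient: one region splitting into several

# Typical transitions implied by the text (a state may also persist):
# overlap -> separation -> single; single objects may come to overlap.
NEXT_STATES = {
    TrackingState.SINGLE: {TrackingState.SINGLE, TrackingState.OVERLAP},
    TrackingState.OVERLAP: {TrackingState.OVERLAP, TrackingState.SEPARATION},
    TrackingState.SEPARATION: {TrackingState.SEPARATION, TrackingState.SINGLE},
}
```

Modeling the separation state explicitly is what lets later steps restrict the expensive combined-feature matching to regions in that state.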
- the tracking state determination unit 13 is a specific example of a tracking state determination unit that determines a tracking state indicating a relative positional relationship with another object for each object or each object region.
- The correspondence determination unit 16 is a specific example of correspondence determination means that associates object regions with objects based on the similarity between the feature amounts of the object regions, which are regions including objects, and the combined feature amounts, and outputs fixed correspondence information for the current time.
- the feature amount corresponding to the object region is synthesized using the feature amount of the object, and the object is associated with the object region.
- the reliability of determining which objects are in an object region can be improved compared with the case where combined feature amounts are not used.
- Since the feature amounts of a plurality of objects are combined and the similarity of feature amounts between objects and object regions is calculated using the combined feature amounts even when a plurality of objects exist in a separated object region, the association between objects and object regions is not mistaken even when a plurality of objects exist in a separated region or when objects switch positions as they cross.
- FIG. 4 is a flowchart showing an example of the processing progress of the object tracking device according to the present invention.
- The object region extraction unit 12 inputs image information via the image input terminal 11 (step S301).
- the object area extraction unit 12 extracts an object area from the input image information (step S302).
- the object region extraction unit 12 outputs object region information including image information of the object region (object region image information).
- the tracking state determination unit 13 stores the object region information output by the object region extraction unit 12.
- The tracking state determination unit 13 tracks objects and associates them with object regions based on the current-time object region information output from the object region extraction unit 12, the past object region information stored by the tracking state determination unit 13 itself, and the past fixed correspondence information input from the correspondence determination unit 16, and determines the tracking state of objects and object regions for each object or each object region.
- When focusing on an object, this is expressed as the tracking state of the object; when focusing on an object region, it is expressed as the tracking state of the object region. Once an object is associated with an object region, the two expressions are substantially synonymous.
- the tracking state determination unit 13 outputs the first area correspondence information (Step S303).
- the first area correspondence information is information indicating the correspondence between the object and the object area and the tracking state (information indicating the correspondence between the object area and the object and the tracking state).
- the determined correspondence information is information indicating the correspondence between the object determined by the correspondence determining unit 16 and the object region. The procedure by which the correspondence determination unit 16 determines the correspondence between the object and the object area will be described later.
- FIG. 5 is an explanatory diagram showing an example of the first area correspondence information.
- The first region correspondence information is, for example, as follows: if the object region information includes image information of three object regions and four objects A, B, C, and D exist, it is information as shown in FIG. 5A, indicating the correspondence between the three object regions and the objects A, B, C, and D together with the tracking state. Alternatively, it may be information indicating, for each object, the corresponding object region and tracking state, as shown in FIG. 5B.
- Object tracking is to associate the tracked object with the object region.
- The position of the object is obtained from the object region corresponding to the object in the past, the object region closest to that position is found, and the found object region is associated with the object being tracked.
- the position of the center of gravity of the object area is set as the position of the object. A method of associating an object with an object region will be described later.
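The nearest-region rule above, with the centroid (center of gravity) of each object region taken as its position, can be sketched as follows. Names are illustrative.

```python
def associate_by_centroid(prev_position, regions):
    """Associate a tracked object with the nearest object region.

    prev_position: (row, col) position of the object at the previous time.
    regions: dict mapping region id -> list of (row, col) pixels.
    Returns the id of the region whose centroid is closest.
    """
    def centroid(pixels):
        n = len(pixels)
        return (sum(p[0] for p in pixels) / n,
                sum(p[1] for p in pixels) / n)

    def sq_dist(a, b):
        # Squared Euclidean distance; the square root is not needed
        # for choosing the minimum.
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    return min(regions,
               key=lambda rid: sq_dist(prev_position, centroid(regions[rid])))
```

As the text notes, this simple proximity rule is only the tracking step; the combined-feature matching described later resolves the ambiguous separation cases it cannot.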
- The feature amount generation unit 14 generates the feature amount of each object region based on the image information input via the image input terminal 11 and the object region information input from the object region extraction unit 12 (step S304), and outputs region feature amount information to the feature amount synthesis unit 15 and the associating unit 17.
- The region feature amount information is information indicating the feature amount (region feature amount) of each object region extracted by the object region extraction unit 12. Further, the feature amount generation unit 14 calculates the feature amount of each object based on the first region correspondence information output from the tracking state determination unit 13 and the region feature amounts generated in step S304,
- and updates the stored object feature amounts (step S305). Then, the feature amount generation unit 14 outputs the object feature amounts in response to requests from the feature amount synthesis unit 15.
- Specifically, the feature amount generation unit 14 updates the feature amount only for objects whose tracking state is the single state in the first area correspondence information output by the tracking state determination unit 13. If objects overlap on the image, the feature amount of an object cannot be accurately calculated. Therefore, when the tracking state is other than the single state, the feature amount generation unit 14 does not update the stored feature amount.
- The update of the feature amount of the object is performed, for example, according to Equation (1): H_t = (1 − λ) H_{t−1} + λ H_in … (1)
- Here, H_in indicates the currently observed feature amount of the object (the feature amount obtained from the set of object areas constituting the object at the current time), H_{t−1} indicates the stored feature amount of the object at time t − 1, H_t indicates the feature amount of the object after the update at time t, and λ indicates the update coefficient of the feature amount of the object.
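The exact form of Equation (1) is not reproduced in this excerpt; the sketch below assumes the standard exponential update consistent with the variable definitions above, with the feature amount represented as a numeric vector (for example, a color histogram) and `lam` standing in for the update coefficient λ.

```python
import numpy as np

# Sketch of the running update of Equation (1):
#   H_t = (1 - lam) * H_{t-1} + lam * H_in
# h_prev is the stored feature amount, h_in the currently observed one.
def update_feature(h_prev, h_in, lam=0.1):
    return (1.0 - lam) * np.asarray(h_prev, dtype=float) \
         + lam * np.asarray(h_in, dtype=float)
```

A small λ keeps the stored feature stable against noise; λ close to 1 makes it follow the current observation.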
- the feature amount synthesizing unit 15 performs all necessary combinations of a plurality of objects based on the feature amount of the object stored in the feature amount generating unit 14 and the first area correspondence information output by the tracking state determination unit 13. Then, a feature amount obtained by combining the feature amounts of the objects (hereinafter, referred to as a combined feature amount) is calculated (step S306). That is, the feature amount synthesizing unit 15 determines all possible combinations of the object and the object region that can be combined based on the feature amount of the object generated by the feature amount generating unit 14 and the first area correspondence information. Then, only for the determined combination of the object and the object region, a combined feature amount (a concept including the object feature amount of a single object) is generated.
- the feature amount synthesizing unit 15 outputs combined feature amount information including the calculated combined feature amount.
- the combined feature information is information including the calculated combined feature and the correspondence between the object used for calculating the combined feature and the combined feature.
- the feature amount generation unit 14 selects the stored feature amount of the object in response to a request from the feature amount synthesis unit 15 and outputs the feature amount of the selected object to the feature amount synthesis unit 15.
- the feature amount synthesizing unit 15 determines the feature amount of the object requested to the feature amount generating unit 14 based on the first area correspondence information.
- All the necessary combinations of multiple objects arise, for example, when objects A, B, and C overlap on the image (the overlapping state) and some of the objects then separate so that two object regions are observed.
- Here, a notation such as (AB, C) indicates that two object areas exist, that objects A and B exist in one object area, and that object C exists in the other object area.
- The combined feature amount information includes, for example, the combined feature amount of objects A and B, together with information indicating that the objects associated with that calculated combined feature amount are object A and object B.
- the correspondence determining unit 16 includes: a region feature amount information output by the feature amount generation unit 14; a synthesized feature amount information output by the feature amount synthesis unit 15; and a first feature amount output by the tracking state determination unit 13. The correspondence between the object and the object region is determined based on the region correspondence information (step S307). Then, the correspondence determining unit 16 outputs the determined correspondence information.
- Specifically, for an object whose tracking state is indicated as the separated state in the first region correspondence information output from the tracking state determination unit 13, the correspondence determination unit 16 determines the correspondence between the object and the object region based on the region feature amount information and the combined feature amount information. For an object whose tracking state is indicated as other than the separated state, the correspondence with the object area included in the first area correspondence information output by the tracking state determination unit 13 is determined as the correspondence between the object and the object region. Then, the correspondence determining unit 16 sets the information on the determined correspondence between the object and the object region as the determined correspondence information.
- the tracking state determination unit 13 determines whether the tracking state of the object is a single state, an overlapping state, or a separated state by the following method.
- The tracking state determination unit 13 classifies the objects and the object regions into classes based on the current-time object region information output from the object region extraction unit 12, the past object region information stored in the tracking state determination unit 13 itself, and the past determined correspondence information output from the correspondence determination unit 16.
- For example, when the object A corresponds to the object area α, the tracking state determination unit 13 classifies the object A and the object area α into a class X.
- Similarly, when the object B corresponds to the object region β, the tracking state determination unit 13 classifies the object B and the object region β into a class Y different from the class X.
- When an object region γ corresponds to both the object A included in the class X and the object B included in the class Y, the tracking state determination unit 13 merges the classes so that the object region γ, all the elements included in the class X, and all the elements included in the class Y belong to the same class.
- In this way, each object and each object region is classified so that elements associated with at least one other object or object region belong to the same class.
- The tracking state determination unit 13 then determines whether the tracking state is the single state, the overlapping state, or the separated state based on the number of elements in each class. When only one object element is included in the elements of a class, the tracking state determination unit 13 determines that the tracking state is the single state. When two or more object elements are included in the elements of a class but only one object area element is included, the tracking state determination unit 13 determines that the tracking state is the overlapping state. When two or more object elements and two or more object area elements are included in the elements of a class, the tracking state determination unit 13 determines that the tracking state is the separated state.
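The class formation and state decision just described can be sketched with a simple union-find over object and region identifiers. This is an illustrative sketch under the assumption that correspondences are given as (object, region) pairs; the identifiers and state labels are hypothetical names, not the patent's.

```python
# Sketch of the class-based state decision: objects and regions that are
# associated (directly or transitively) end up in the same class.
def build_classes(pairs):
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)
    for obj, region in pairs:
        union(('obj', obj), ('reg', region))
    classes = {}
    for x in parent:
        classes.setdefault(find(x), []).append(x)
    return list(classes.values())

def tracking_state(cls):
    """Decide the state from the element counts of one class."""
    n_obj = sum(1 for kind, _ in cls if kind == 'obj')
    n_reg = sum(1 for kind, _ in cls if kind == 'reg')
    if n_obj == 1:
        return 'single'
    if n_reg == 1:
        return 'overlapping'   # two or more objects share one region
    return 'separated'         # two or more objects, two or more regions
```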
- the correspondence determination unit 16 calculates the combination similarity for all combinations of the object and the object region. Then, the correspondence determining unit 16 selects a combination of the object and the object region having the highest combination similarity. The selected combination is the correspondence between the object and the object area.
- the combination similarity is an index indicating the validity of the combination obtained from the similarity of the pair of the object and the object region constituting the combination. A high combination similarity indicates that the combination is valid and that the pair of the object and the object region constituting the combination is similar.
- As the combination similarity, for example, the total similarity, which is the sum of the similarities between the feature amount of each object region and the corresponding combined feature amount in a combination, is used; however, the combination similarity is not limited to this. For example, when the objects A, B, and C may be distributed over two object regions, the candidate combinations of the object and the object region include (A, BC), (B, CA), and (C, AB), among others.
- the correspondence determination unit 16 sets the combination of the object and the object region in the case where the calculated combination similarity is the largest as the optimum association.
- Specifically, the correspondence determining unit 16 calculates the distance between the combined feature amount of the objects and the feature amount of the object region (hereinafter referred to as the distance between the feature amounts), and obtains the combination similarity from the distance of the combination.
- The correspondence determination unit 16 determines that the combination similarity is high when the calculated combination distance is small, and that the combination similarity is low when the calculated combination distance is large.
- The distance of a combination is, for example, the sum of the distances between the feature amounts, and is calculated as in Equation (2): D_k = Σ_i ‖Hr_i − Hg_{k,i}‖² … (2)
- Here, i is the index of the object area, k is the index of the combination of the object and the object area, Hr_i is the feature amount of the i-th object area, Hg_{k,i} is the combined feature amount of the objects corresponding to the i-th object area in the k-th combination, and D_k indicates the distance of the k-th combination.
- For example, for the combination (A, BC), the distance D_k is obtained from the distance between the feature amounts of the pair consisting of the object A and its object region, and the distance for the pair consisting of the objects B and C and their object region.
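Equation (2) can be sketched directly, assuming feature amounts are numeric vectors of equal length (for example, color histograms). The function name and argument layout are illustrative only.

```python
import numpy as np

# Sketch of Equation (2): the distance of combination k is the sum of
# squared differences between each region feature Hr_i and the combined
# object feature Hg_{k,i} assigned to that region.
def combination_distance(region_feats, combined_feats):
    d = 0.0
    for hr, hg in zip(region_feats, combined_feats):
        diff = np.asarray(hr, dtype=float) - np.asarray(hg, dtype=float)
        d += float(np.dot(diff, diff))
    return d
```

The combination with the smallest distance is the one the correspondence determination unit selects as the optimal association.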
- Note that the feature amount synthesizing unit 15 may calculate a combined feature amount according to an arbitrary synthesis ratio ε. In that case, the correspondence determination unit 16 calculates the combination similarity based on the combined feature amount calculated according to the synthesis ratio ε, and selects the optimal combination of the object and the object region.
- The synthesis ratio ε is a coefficient that accounts for the proportion of the feature amount of each object included in an object region. For example, because of differences in the distance from the camera, the apparent sizes of objects on the image may differ. In this case, if the combined feature amount is calculated simply by summing the feature amounts of the objects, the calculated combined feature amount differs from the feature amount of the actual object region, and the correct association cannot be determined. Since the feature amount synthesizing unit 15 calculates the combined feature amount according to the predetermined synthesis ratio ε, the correspondence determination unit 16 can avoid determining an erroneous association between the object and the object region.
- For example, when combining the feature amounts of two objects A and B, the feature amount combining unit 15 generates the combined feature amount based on Equation (3): Hg(ε) = (1 − ε) H_A + ε H_B … (3)
- Then, the correspondence determination unit 16 calculates the combination distance based on Equation (4), the sum of squared differences between the feature amount of the object region and the combined feature amount.
- The feature amount synthesizing unit 15 changes the synthesis ratio ε from 0 to 1 (for example, at intervals of 0.1), and calculates the combined feature amount for every changed value of ε.
- The correspondence determination unit 16 selects the combined feature amount that minimizes the distance between the feature amount of the object region and the combined feature amount of the objects. Then, the correspondence determining unit 16 calculates the combination distance based on the selected combined feature amount as in Equation (4), and takes the combination of the object and the object region for which the combination distance is smallest as the optimal correspondence.
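The ε sweep described above can be sketched as follows. The step size of 0.1 follows the text; everything else (function name, vector representation) is an illustrative assumption.

```python
import numpy as np

# Sketch of the epsilon sweep: compute Hg(eps) = (1-eps)*Ha + eps*Hb
# (Equation (3)) for eps = 0.0, 0.1, ..., 1.0 and keep the value that
# minimizes the squared distance to the region feature.
def best_synthesis(h_a, h_b, h_region, step=0.1):
    h_a, h_b, h_region = (np.asarray(v, dtype=float)
                          for v in (h_a, h_b, h_region))
    best = None
    for eps in np.arange(0.0, 1.0 + 1e-9, step):
        hg = (1.0 - eps) * h_a + eps * h_b
        dist = float(np.sum((h_region - hg) ** 2))
        if best is None or dist < best[1]:
            best = (float(eps), dist, hg)
    return best  # (eps, distance, combined feature amount)
```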
- In the above description, the correspondence determination unit 16 calculates the distance between the feature amounts based on the L2 norm. However, the distance between the feature amounts may be calculated based on another distance measure, as long as the similarity can be calculated from it.
- Here, the L2 norm refers to a distance measure calculated by the sum of squared differences, as shown in Equations (2) and (4).
- As described above, the object tracking device calculates the combined feature amount of the objects in the overlapping state, calculates the similarity between the feature amount of the object region and the combined feature amount, and associates the object with the object region based on the calculated similarity.
- the object tracking device calculates the similarity between the feature amount of the object region and the combined feature amount based on all combinations of the object and the object region, and based on the calculated similarity. Since the optimal combination of the object and the object area is selected, when the object to be tracked is associated with the area where the object to be tracked exists, the optimum association can be selected for the entire object.
- FIG. 6 is a block diagram illustrating an example of the configuration of the tracking state determination unit 13.
- the tracking state determination unit 13 includes an object tracking unit 131, a state determination unit 132, and an object area information storage unit 133.
- The object tracking unit 131 is a specific example of an object tracking unit that tracks the object from the object region information, the determined correspondence information, and the past object region information output from the object region storage unit, and outputs second area correspondence information indicating the correspondence between the object and the object region.
- The object area information storage unit 133 is a specific example of an object area storage unit that stores the object area information and the first area correspondence information.
- The state determination unit 132 is a specific example of a state determination unit that determines the tracking state of an object from the second area correspondence information, the object area information, and the past object area information, and outputs the first area correspondence information.
- The object tracking unit 131 tracks the object based on the current-time object region information output by the object region extraction unit 12, the past object region information stored by the object region information storage unit 133, and the past determined correspondence information output by the correspondence determination unit 16, and associates the object with the object area. Then, the object tracking unit 131 generates the second region correspondence information, and outputs the generated second region correspondence information to the state determination unit 132.
- the second area correspondence information is information indicating a correspondence between an object and an object area. Unlike the first area correspondence information, the second area correspondence information does not include the tracking state information.
- FIG. 7 is an explanatory diagram showing an example of the second area correspondence information.
- The second region correspondence information is, for example, as follows. When the object region information includes image information of three object regions α, β, and γ, four objects A, B, C, and D exist in the past correspondence information, and the object regions α, β, and γ correspond to the objects A, B, C, and D as shown in FIG. 7C, the second region correspondence information is information as shown in FIG. 7A, which indicates the correspondence between each object region and the objects. Alternatively, it may be information indicating the correspondence between each object and the object region, as shown in FIG. 7B.
- The state determination unit 132 determines the tracking state of the object based on the object region information output by the object region extraction unit 12 and the second region correspondence information output by the object tracking unit 131. Then, the state determination unit 132 outputs the first area correspondence information based on the second area correspondence information and the determination result.
- the object area information storage unit 133 stores the object area information output by the object area extraction unit 12.
- the object area information stored in the object area information storage unit 133 becomes past object area information used for generating the second area correspondence information at the next time.
- FIG. 8 is a block diagram illustrating an example of the configuration of the feature amount generation unit 14.
- The feature amount generation unit 14 includes a feature amount extraction unit 141, a feature amount update unit 142, and a feature amount storage unit 143.
- The feature amount extraction unit 141 is a specific example of a feature amount extraction unit that extracts a feature amount from the image information, the object area information, and the first area correspondence information, and outputs region feature amount information, which is information including the feature amount.
- The feature amount storage unit 143 is a specific example of a feature amount storage unit that selects and outputs a stored object feature amount in response to a request. The feature amount update unit 142 is a specific example of a feature amount updating unit that updates the object feature amount stored in the feature amount storage unit from the region feature amount information, the first area correspondence information, and the object feature amount generated before the current time.
- The feature amount extraction unit 141 calculates the feature amount of each object region based on the image information input via the image input terminal 11 and the object region information output by the object region extraction unit 12. Then, the feature amount extraction unit 141 outputs region feature amount information including the calculated feature amounts of the object regions. The output region feature amount information is input to the feature amount update unit 142 and the correspondence determination unit 16.
- The feature amount updating unit 142 updates the feature amount stored in the feature amount storage unit 143 based on the region feature amount information output from the feature amount extraction unit 141, for an object whose tracking state is indicated as the single state in the first region correspondence information. The feature amount update unit 142 does not update the feature amount stored in the feature amount storage unit 143 for an object whose tracking state is indicated as other than the single state in the first area correspondence information. The feature amount of the object stored in the feature amount storage unit 143 is used by the feature amount synthesis unit 15 to generate a combined feature amount. That is, in response to a request from the feature amount synthesizing unit 15, the stored feature amount of the object is selected, and the feature amount of the selected object is output to the feature amount synthesizing unit 15.
- FIG. 9 is a block diagram illustrating an example of the configuration of the correspondence determination unit 16.
- the correspondence determination unit 16 includes a correspondence calculation unit 161 and a correspondence determination unit 162.
- the correspondence calculation unit 161 is a specific example of a correspondence calculation unit that generates optimal correspondence information indicating an optimal correspondence between an object and an object region.
- the correspondence determining unit 162 is a specific example of a correspondence determining unit that outputs fixed correspondence information, which is information including the correspondence between the determined object and the object region.
- The correspondence calculation unit 161 calculates, based on the region feature amount information output by the feature amount generation unit 14, the combined feature amount information output by the feature amount synthesis unit 15, and the first region correspondence information output by the tracking state determination unit 13, the similarity between the feature amount of the object region and the combined feature amount for all possible combinations of the object and the object region, and obtains the combination similarity. Then, the correspondence calculation unit 161 selects the combination of the object and the object region having the largest combination similarity as the optimal correspondence, generates the optimal correspondence information, and outputs it.
- the optimal correspondence information is information indicating an optimal correspondence relationship between an object and an object region (a combination of an object and an object region when the combination similarity is maximized).
- For an object whose tracking state in the first area correspondence information output from the tracking state determining unit 13 is the separated state, the correspondence determining unit 162 determines the correspondence relationship with the object region included in the optimal correspondence information output by the correspondence calculation unit 161 as the correspondence between that object and the object region.
- For an object whose tracking state is other than the separated state, the correspondence determination unit 162 determines the correspondence with the object region included in the first region correspondence information output by the tracking state determination unit 13 as the correspondence between the object and the object region.
- Further, the correspondence determination unit 162 compares the information on the objects included in the first area correspondence information with the information on the determined correspondence between the objects and the object areas, and if an uncorresponded object area that is not associated with any object occurs, the correspondence determination unit 162 generates a new object and associates it with the uncorresponded object area. Then, the correspondence determining unit 162 outputs the information on the determined correspondence between the objects and the object regions as the determined correspondence information. For example, because two objects A and B exist in the object area α in a completely overlapping state, the object tracking device may recognize only one object A in the object area α.
- In that case, when the hidden object later appears as an uncorresponded object region β, the correspondence determination unit 162 newly generates an object B and associates the object B with the object region β.
- The components shown in FIGS. 2, 3, 6, 8, and 9 can be realized by software. That is, in the present embodiment, the object tracking device can be realized by an object tracking program that causes a computer to execute: image input processing for inputting image information; object area extraction processing for extracting an object area from the input image information and outputting object area information including image information of the object area; tracking state determination processing for determining, from the object area information and the determined correspondence information indicating the correspondence between the object area and the object before the current time, the tracking state for each object or each object area, and outputting first area correspondence information indicating the correspondence among the object area, the object, and the tracking state; feature amount generation processing for generating an object feature amount indicating the feature amount of the object; feature amount synthesis processing for synthesizing feature amounts based on all necessary combinations of a plurality of objects from the object feature amount and the first area correspondence information, and outputting combined feature amount information, which is information including the combined feature amount and the correspondence between the objects used for generating the combined feature amount and the combined feature amount; and correspondence determination processing for associating an object with an object area from the object area information, the first area correspondence information, the region feature amount information, and the combined feature amount information, and outputting determined correspondence information for the current time.
- The tracking state determination unit 13 may also determine the tracking state of the object by the following method. For example, suppose only one object is associated with an object area, so the tracking state is determined to be the single state. When the object area is then divided into a plurality of areas, the state is not determined to have transitioned from the overlapping state to the separated state, because there is only one object; it may instead be determined that the state remains the single state. In order to cope with the separation of multiple objects from an object in the single state, the tracking state determination unit 13 may determine the tracking state to be the separated state if a class contains only one object element but two or more object area elements.
- In the above description, an example was shown in which the object region extraction unit 12 extracts the object region by taking the difference between the pixel values of the image information input via the image input terminal 11 and background image information captured in advance.
- However, the object region extraction unit 12 may fail to extract the object region correctly due to factors such as the object having features similar to the background (for example, when the colors of the object and the background are similar).
- In that case, one object region may be extracted as two or more separate regions, and the state determination unit 132 of the tracking state determination unit 13 may erroneously determine that the tracking state has transitioned to the separated state. In order to determine the tracking state correctly in such a case, the state determination unit 132 may use the spatial distance between the object regions.
- Specifically, when the distance between object regions is less than or equal to a certain threshold, the state determination unit 132 classifies all of the associated objects and object regions into the same class. For example, consider a case where there are objects A and B and object regions α and β, and the object region α is associated with the object A while the object region β is associated with the object B.
- Normally, the state determination unit 132 generates two classes: a class to which the object region α and the object A belong, and a class to which the object region β and the object B belong.
- However, if the distance between the object regions α and β is less than or equal to the threshold, the state determination unit 132 may combine the object regions α and β and the objects A and B into one class.
- In this way, even if the object area in a class that includes two or more objects is separated into two or more areas, the state determination unit 132 can determine that the tracking state remains the overlapping state as long as the distance between the areas is less than or equal to the threshold. Then, when the distance exceeds the threshold, the state determination unit 132 determines the tracking state to be the separated state.
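The distance-based class merging of this modification can be sketched as follows. This is an illustrative sketch; it represents each region by its centroid and merges classes whose regions come within `dist_threshold` of each other, which is one plausible reading of "the spatial distance between the object regions".

```python
from math import hypot

# Sketch of Modification 1: if the centroids of two object regions are
# within dist_threshold, treat their classes as one (the overlapping
# state is kept); only regions farther apart are allowed to separate.
def merge_close_regions(region_centroids, classes, dist_threshold):
    """classes: list of sets of region ids; returns merged class list."""
    merged = [set(c) for c in classes]
    changed = True
    while changed:
        changed = False
        for i in range(len(merged)):
            for j in range(i + 1, len(merged)):
                close = any(
                    hypot(region_centroids[a][0] - region_centroids[b][0],
                          region_centroids[a][1] - region_centroids[b][1])
                    <= dist_threshold
                    for a in merged[i] for b in merged[j])
                if close:
                    merged[i] |= merged[j]
                    del merged[j]
                    changed = True
                    break
            if changed:
                break
    return merged
```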
- Alternatively, the state determination unit 132 may deal with a case where an object region is separated only temporarily by chance, based on the duration during separation of the object region.
- The duration during separation indicates how long the state in which the object region can be regarded as separated has continued, that is, the duration for which, after one former object region separates, the distance between the separated object regions remains greater than or equal to a certain threshold (which may be 0).
- When classifying, the state determination unit 132 classifies an object region whose duration during separation is less than a certain threshold together with the object region it corresponds to, and maintains the previous tracking state. Then, when the duration during separation exceeds the threshold, the state determination unit 132 determines the tracking state to be the separated state.
- the duration during separation is updated and stored as follows.
- the duration during separation is stored in the object area information storage unit 133.
- The object tracking unit 131 acquires the duration during separation stored in the object area information storage unit 133, associates the duration during separation with each object area, and calculates a new duration during separation. Then, the object tracking unit 131 updates the duration during separation stored in the object region information storage unit 133.
- The object tracking unit 131 associates the object region with the duration during separation as follows.
- First, based on the object region at the current time included in the object region information output by the object region extraction unit 12 and the past determined correspondence information output by the correspondence determination unit 16, the object tracking unit 131 associates the current object area with the past object area.
- Then, the object tracking unit 131 associates the duration during separation corresponding to the past object region stored in the object region information storage unit 133 with the current object region that corresponds to that past object region.
- When a plurality of durations during separation correspond to one current object area, the object tracking unit 131 may associate the largest of the plurality of durations during separation with the object area.
- Alternatively, the object tracking unit 131 may associate the smallest of the plurality of durations during separation with the current object area.
- The object tracking unit 131 may also calculate the average of the durations during separation and associate the calculated average with the current object area.
- The state determination unit 132 acquires the duration during separation from the object region information storage unit 133.
- Based on the acquired duration during separation, the state determination unit 132 determines the tracking state of the object area.
- The state determination unit 132 then updates the duration during separation stored in the object area information storage unit 133 based on the determination result. For example, when the state determination unit 132 determines that the object area is in a separable state, it continues (increments) the duration during separation stored in the object area information storage unit 133.
- Otherwise, the state determination unit 132 resets the duration during separation. Also, instead of storing the duration during separation, a separation start time indicating the time when the separation started may be stored. In this case, the duration during separation can be obtained by taking the difference between the current time and the separation start time.
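The separation bookkeeping just described, using the separation start time variant, can be sketched as follows. The class and function names are illustrative, not the patent's.

```python
# Sketch of the separation bookkeeping: the duration during separation is
# the difference between the current time and the stored start time, and
# it resets whenever the regions are no longer separable.
class SeparationTimer:
    def __init__(self):
        self.start_time = None      # None while not separable

    def update(self, separable, now):
        """Call once per frame; returns the current separation duration."""
        if not separable:
            self.start_time = None  # reset on re-merge
            return 0
        if self.start_time is None:
            self.start_time = now   # separation just started
        return now - self.start_time

def is_separated(duration, threshold):
    """Transition to the separated state once the duration passes the threshold."""
    return duration >= threshold
```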
- Note that the object tracking unit 131 may generate the second area correspondence information so that it includes the duration during separation, instead of storing the duration in the object area information storage unit 133.
- In this case, the state determination unit 132 acquires the duration during separation from the second area correspondence information. By passing the duration during separation directly from the object tracking unit 131 to the state determination unit 132 without going through the object region information storage unit 133, there is an advantage that the processing load of the object region information storage unit 133 can be reduced.
- Modification Example 1 and Modification Example 2 may be used in combination.
- In this case, objects whose corresponding object regions share a common region are put together, and those objects and the object regions corresponding to them may be classified into one class.
- Then, if a classified class satisfies the conditions for the separated state, namely that the class includes two or more objects, that the distance between the object regions included in the class is equal to or greater than a predetermined threshold, that the duration during separation of the object regions included in the class is equal to or greater than a predetermined threshold, or a combination of these, the tracking state of the objects or object regions included in the class may be determined to be the separated state.
- Alternatively, the object tracking device may combine the feature amounts of the objects and associate the objects with the object regions by the following method. That is, the feature amount synthesizing unit 15 acquires not only the feature amounts of the objects but also the region feature amount information from the feature amount generation unit 14. Based on the region feature amount information output from the feature amount generation unit 14 and the feature amounts of the objects, the feature amount synthesizing unit 15 calculates combined feature amounts for arbitrary values of the synthesis ratio α. Then, the feature amount synthesizing unit 15 outputs the combined feature amount for the synthesis ratio α at which the similarity between the calculated combined feature amount and the feature amount of the object region included in the region feature amount information is highest. The correspondence determination unit 16 calculates, based on the output combined feature amounts, the similarity between the feature amount of the object region and the combined feature amount for each combination of object and object region, and obtains the combination similarity.
- Since the feature amount synthesizing unit 15 outputs the combined feature amount corresponding to the synthesis ratio α at which the similarity between the object and the object region is highest, the correspondence determination unit 16 need only calculate the similarity of each combination of object and object region based on that single combined feature amount. Therefore, the correspondence determination unit 16 can omit wasteful calculation processing.
- Alternatively, the object tracking device may combine the feature amounts and associate the object with the object region by the following method. That is, the feature amount synthesizing unit 15 acquires not only the feature amounts of the objects but also the region feature amount information from the feature amount generation unit 14, and calculates the value of the synthesis ratio α that minimizes the distance according to equations (5) to (7). Equation (5) calculates the distance f(α) between the combined feature amount of the objects and the feature amount of the object region, based on the region feature amount information output by the feature amount generation unit 14 and the feature amounts of the objects.
- Next, the feature amount synthesizing unit 15 calculates the value of the synthesis ratio α at which the expression obtained by differentiating the distance f(α) with respect to the synthesis ratio α becomes "0". That is, the feature amount synthesizing unit 15 calculates the synthesis ratio α at which the distance f(α) is minimized. The feature amount synthesizing unit 15 generates a combined feature amount by equation (3) based on the calculated synthesis ratio α. Then, the correspondence determination unit 16 calculates the similarity based on the generated combined feature amount.
- Here, an example of the combined feature amount of the objects A and B is given.
- Hg_i is the combined feature amount of the feature amount H_A of the object A and the feature amount H_B of the object B,
- and H_ri is the feature amount of the object region.
- Here, the feature amount is a color histogram, and the number of bins in the color histogram is n.
- When the value of the synthesis ratio α calculated by equation (7) falls outside the range of 0 to 1, whichever of 0 and 1 is closer to the calculated value is selected as the value of the synthesis ratio α.
- The feature amount synthesizing unit 15 then generates a combined feature amount based on the calculated synthesis ratio α.
- The correspondence determination unit 16 obtains the distance of each combination shown in equation (2) based on the generated combined feature amounts, and determines the combination of objects and object regions that minimizes the distance as the optimal correspondence.
- In this method, the feature amount synthesizing unit 15 need only generate the combined feature amount for a single specific synthesis ratio α. Therefore, the amount of calculation required to compute the similarity can be reduced compared with generating combined feature amounts for many synthesis ratios α. Note that the feature amount synthesizing unit 15 can generate a combined feature amount by the same method even when three or more objects overlap. Also, although the feature amount synthesizing unit 15 uses the L2 norm as the distance measure between feature amounts here, another distance measure may be used.
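Equations (5) to (7) are not reproduced in this excerpt, so the sketch below assumes the distance f(α) is the squared L2 distance between the combined histogram and the region histogram (the synthesis ratio is written α here), consistent with the L2 norm mentioned above; setting the derivative with respect to α to zero then gives a closed form, clamped to the nearer of 0 and 1 as described for equation (7). Function names are illustrative, not from the patent:

```python
def optimal_synthesis_ratio(h_a, h_b, h_r):
    """Synthesis ratio a minimizing f(a) = sum_i (a*h_a[i] + (1-a)*h_b[i] - h_r[i])^2.

    Setting df/da = 0 yields a closed form; the result is clamped to the
    nearer of 0 and 1 when it falls outside [0, 1].
    """
    d = [a - b for a, b in zip(h_a, h_b)]                      # H_A - H_B per bin
    num = sum(di * (ri - bi) for di, ri, bi in zip(d, h_r, h_b))
    den = sum(di * di for di in d)
    if den == 0:                                               # identical histograms
        return 1.0
    alpha = num / den
    return min(1.0, max(0.0, alpha))                           # clamp to [0, 1]

def combined_feature(h_a, h_b, alpha):
    """Combined histogram Hg = a*H_A + (1-a)*H_B (cf. equation (3))."""
    return [alpha * a + (1.0 - alpha) * b for a, b in zip(h_a, h_b)]
```

Only this single α is then used to build the combined feature, matching the point that the similarity calculation need not be repeated for many ratios.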
- Alternatively, the object tracking device may combine the feature amounts and associate the objects with the object regions by the following method. That is, the feature amount synthesizing unit 15 generates a combined feature amount using the area ratio of the objects as the synthesis ratio. Then, the correspondence determination unit 16 calculates the similarity based on the generated combined feature amount. This method is based on the fact that the synthesis ratio of the feature amounts when objects overlap is almost equal to the ratio of the areas of the objects. By using the area ratio of the objects as the synthesis ratio, the feature amount synthesizing unit 15 can omit the calculation for obtaining the synthesis ratio.
- the combined feature is calculated by equation (8).
- H_A and H_B indicate the feature amounts of the objects A and B, respectively.
- S_A and S_B indicate the areas of the objects A and B on the image information, respectively.
- Hg is a combined feature value.
- The correspondence determination unit 16 obtains the sum of the distances shown in equation (2) using the generated combined feature amount Hg, and determines the combination of objects and object regions that minimizes the sum of the distances as the optimal association.
- Note that the feature amount generation unit 14 takes the number of pixels included in the image of the object region as the area of the object, and appends the area to the feature amount of the object, handling it in the same manner as the other feature amounts of the object. The feature amount storage unit 143 of the feature amount generation unit 14 stores the feature amount with the appended area, and updates the feature amount with the appended area when the tracking state in the first area correspondence information indicates a state other than the overlap state.
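Following the description above (equation (8) itself is not reproduced in this excerpt), using the area ratio as the synthesis ratio amounts to an area-weighted average of the two histograms; a minimal sketch with an illustrative function name:

```python
def area_ratio_combined_feature(h_a, s_a, h_b, s_b):
    """Combined feature in the sense of equation (8):
    Hg = (S_A * H_A + S_B * H_B) / (S_A + S_B).

    The area ratio of the overlapping objects serves as the synthesis
    ratio, so no ratio search or closed-form solve is needed.
    """
    total = float(s_a + s_b)
    return [(s_a * a + s_b * b) / total for a, b in zip(h_a, h_b)]
```

With α = S_A / (S_A + S_B) this is exactly the α-weighted combination used in the previous modifications, which is why the separate ratio calculation can be omitted.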
- the object tracking device may combine the feature amounts and associate the object with the object region by the following method.
- In the overlap state, the area of each of the overlapping objects may change within the range of S1 to S2, and the feature amount synthesizing unit 15 generates the combined feature amount by equation (9) within this change range. Equation (9) calculates the combined feature amount of the feature amount of the object A and the feature amount of the object B.
- The correspondence determination unit 16 calculates the similarity between the object and the object region based on the combined feature amount that has the highest similarity with the region feature amount among the combined feature amounts generated according to equation (10), and obtains the combination similarity.
- ΔS indicates the amount of change in the area of the object A immediately before the object A and the object B overlap.
- Specifically, the feature amount synthesizing unit 15 acquires not only the feature amounts of the objects but also the region feature amount information from the feature amount generation unit 14, generates combined feature amounts within the range of the area change from the region feature amount information and the object feature amounts, and outputs the combined feature amount having the highest similarity to the object region.
- The correspondence determination unit 16 may then obtain the combination similarity of each combination of object and object region based on that combined feature amount. That is, the synthesis ratio α may be limited to a predetermined range based on the change in the area of the objects. Even in this case, the feature amount synthesizing unit 15 only needs to output one combined feature amount for each combination of objects, so wasteful processing can be omitted compared with generating combined feature amounts for many synthesis ratios α.
- the feature amount synthesizing unit 15 may calculate the synthesized feature amount using the template of the object as the feature amount.
- a template is a partial image obtained by extracting a portion corresponding to an object from an image, and corresponds to a description of the shape and color of the object.
- the template is generated by extracting a region corresponding to the object in the image based on the correspondence between the object and the object region.
- Alternatively, an image modeled in advance, such as a standard face image, may be used as a template.
- First, the feature amount synthesizing unit 15 scans the template over the object region and calculates the similarity between the template and the object region at each position in the object region. The feature amount synthesizing unit 15 obtains the position where the similarity is maximum. The feature amount synthesizing unit 15 calculates the similarity using equations (11) and (12); equation (11) is the sum of squared differences between the pixel values of the template T and the object region R at an offset (a, b):

  f(a, b) = Σ_(x, y) { T(x − a, y − b) − R(x, y) }²  ... (11)
- That is, the feature amount synthesizing unit 15 calculates the sum of squares of the differences between the pixel values of the template and the object region as the distance between the feature amounts. Then, as shown in equation (12), the position where the distance between the feature amounts is minimum is determined to give the maximum similarity between the template and the object region. Further, the feature amount synthesizing unit 15 sets the position (a, b) at which the similarity between the template and the object region is maximum as the position of the object.
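The scan described by equations (11) and (12) can be sketched as a brute-force search, assuming grayscale pixel values stored as 2-D lists (the function name `match_template_ssd` is illustrative, not from the patent):

```python
def match_template_ssd(region, template):
    """Scan `template` over `region` (2-D lists of pixel values) and return
    (a, b, ssd): the offset minimizing the sum of squared differences,
    i.e. the position of maximum similarity in the sense of
    equations (11) and (12)."""
    rh, rw = len(region), len(region[0])
    th, tw = len(template), len(template[0])
    best = None
    for b in range(rh - th + 1):          # vertical offset
        for a in range(rw - tw + 1):      # horizontal offset
            ssd = 0
            for y in range(th):
                for x in range(tw):
                    d = template[y][x] - region[b + y][a + x]
                    ssd += d * d          # squared pixel difference
            if best is None or ssd < best[2]:
                best = (a, b, ssd)
    return best
```

A production implementation would normally use an optimized routine (for example squared-difference template matching in an image library) rather than explicit Python loops.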
- the feature amount synthesizing unit 15 determines the order of the overlapping objects based on the similarity between the template and the object region.
- the feature amount synthesizing unit 15 determines that a template having a high similarity to the object region blocks a template having a low similarity to the object region.
- the feature amount synthesizing unit 15 generates a synthesized template corresponding to the synthesized feature amount by synthesizing the template.
- That is, the feature amount synthesizing unit 15 determines the order of each object from the image templates and the feature amount of the object region, and synthesizes the image templates based on the determined order of the objects to obtain the combined feature amount.
- For example, the feature amount synthesizing unit 15 may select the template having the maximum similarity with the object region and determine that the selected template occludes the templates that were not selected. At this time, the area where the object region overlaps the selected template (the template having the highest similarity) is excluded from the object region, and based on the remaining object region and the unselected templates, the calculation of the similarity between template and object region, the selection of the template with the maximum similarity, the calculation of the position of the template at maximum similarity, and the determination of occlusion may be performed repeatedly.
- FIG. 10 is an explanatory diagram showing the concept of generating a combined feature using a template.
- In FIG. 10, the feature amount synthesizing unit 15 scans the template A and the template B over the object region. Since the maximum value of the similarity between template B and the object region is larger than the maximum value of the similarity between template A and the object region, the feature amount synthesizing unit 15 can determine that template B occludes template A. As shown in FIG. 10, the feature amount synthesizing unit 15 combines template A and template B at the positions where their similarities with the object region are maximum, and generates a combined template X.
- When generating the combined template X, the feature amount synthesizing unit 15 performs the synthesis using the pixel values of template B for the portion where template A and template B overlap. Note that (a1, b1) is the position where the similarity between template A and the object region is maximum, and (a2, b2) is the position where the similarity between template B and the object region is maximum.
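The generation of the combined template X can be sketched as follows, assuming the occlusion order has already been determined and templates are pasted back-to-front so that the occluding template overwrites the overlap, as template B overwrites template A in FIG. 10 (names and data shapes are illustrative):

```python
def synthesize_template(canvas_h, canvas_w, placements):
    """Build a combined template by pasting templates at their best-match
    positions.  `placements` is a list of (template, (a, b)) pairs ordered
    from back to front, so a later (occluding) template overwrites the
    overlapping portion."""
    canvas = [[None] * canvas_w for _ in range(canvas_h)]
    for template, (a, b) in placements:
        for y, row in enumerate(template):
            for x, v in enumerate(row):
                canvas[b + y][a + x] = v   # front template wins on overlap
    return canvas
```

Cells left as `None` mark positions covered by no template; a real implementation would typically restrict later comparisons to the covered pixels.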
- The correspondence determination unit 16 can calculate the similarity between the feature amount of the object region and the combined feature amount by equation (2) based on the color histogram of the combined template, and obtain the combination similarity. Alternatively, the correspondence determination unit 16 may calculate the similarity between the feature amount of the object region and the combined feature amount by comparing the object region with the combined template, as in equation (12), and obtain the combination similarity. By using templates, even when an object is partially hidden behind another object, the correspondence between objects and object regions can be calculated correctly.
- When associating an object with an object region, if the correspondence determination unit 16 forcibly performs the association even when the combination similarity is low, the object and the object region may not be associated correctly. Therefore, when the combination similarity of the optimal combination (that is, the combination similarity of the combination having the largest combination similarity) is low, the correspondence determination unit 16 does not associate the object with the object region.
- Specifically, the correspondence calculation unit 161 selects the optimal combination of objects and object regions, and when its combination similarity is equal to or less than a predetermined threshold, the correspondence calculation unit 161 judges that there is no optimal object region corresponding to the object. Then, the correspondence calculation unit 161 outputs optimal correspondence information including information indicating that there is no optimal correspondence between the object and the object region.
- In this case, the correspondence determination unit 162 determines the information on the correspondence between the object and the object region included in the first region correspondence information as the correspondence between the object and the object region as it is, and outputs it as the determined correspondence information. Otherwise, it outputs the information indicating the correspondence between the object and the object region included in the optimal correspondence information as the determined correspondence information.
- For example, the correspondence calculation unit 161 calculates the combination similarity from the similarity of the pair consisting of the object region α and the object A and the similarity of the pair consisting of the object region β and the objects B and C, and judges whether the combination similarity is equal to or less than the threshold. If the combination similarity is equal to or less than the threshold, the correspondence determination unit 162 determines the information on the correspondence between the objects A, B, and C and the object regions included in the first region correspondence information as it is as the correspondence between the objects A, B, and C and the object regions.
- When associating an object with an object region, if the correspondence determination unit 16 forcibly performs the association even when the combination similarity is low, the object and the object region may not be associated correctly. Therefore, when the combination similarity of the optimal combination (that is, the combination similarity of the combination having the largest combination similarity) is low, the correspondence determination unit 16 does not associate the object with the object region.
- Specifically, the correspondence calculation unit 161 selects the optimal combination of objects and object regions, and when the similarity between an object region and its paired object is equal to or less than a predetermined threshold, the correspondence calculation unit 161 judges that there is no optimal object region corresponding to that object.
- Then, the correspondence calculation unit 161 outputs optimal correspondence information including information indicating that there is no optimal correspondence between the object and the object region.
- In this case, the correspondence determination unit 162 determines the correspondence information on the object region corresponding to the object A included in the first region correspondence information as it is as the correspondence between the object A and the object region.
- Alternatively, the correspondence calculation unit 161 may associate only clearly identifiable objects with object regions among the optimal combinations of objects and object regions. In that case, the correspondence calculation unit 161 arranges the combinations of all objects and object regions in descending order of combination similarity. Then, when the difference in combination similarity between the top combinations is small (for example, equal to or less than a predetermined threshold) and there is an object and an object region common to the top combinations, the correspondence calculation unit 161 sets only the combination of that object and that object region as the optimal association.
- The correspondence calculation unit 161 then outputs optimal correspondence information including information only on that combination of object and object region. Accordingly, the optimal correspondence information includes information indicating that there is no combination giving an optimal correspondence between the other objects and the other object regions.
- For example, the correspondence calculation unit 161 sets only the combination of the object C and the object region γ as the optimal correspondence. In this case, the correspondence calculation unit 161 outputs optimal correspondence information including information that makes only the combination of the object C and the object region γ the optimal correspondence.
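A minimal sketch of this ambiguity check follows, assuming each candidate overall assignment is given as a (similarity, {object: region}) pair; the function name, the `margin` parameter, and the data shapes are illustrative, not from the patent:

```python
def select_unambiguous_correspondences(combinations, margin):
    """Keep only the object-region pairs on which all near-optimal
    assignments agree.

    `combinations` is a list of (similarity, assignment) pairs, where
    each assignment maps objects to object regions.  Assignments whose
    similarity is within `margin` of the best one count as "top"
    combinations; a pair is kept only if every top assignment contains it.
    """
    ranked = sorted(combinations, key=lambda c: c[0], reverse=True)
    best_sim = ranked[0][0]
    top = [asg for sim, asg in ranked if best_sim - sim <= margin]
    kept = {}
    for obj, region in top[0].items():
        if all(asg.get(obj) == region for asg in top):
            kept[obj] = region   # pair common to all near-optimal assignments
    return kept
```

Objects dropped here would carry the "no optimal correspondence" mark in the optimal correspondence information, and fall back to the first region correspondence information as described.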
- This method may be combined with the seventh modification. That is, for the optimal combination of objects and object regions, when the similarity between an object region and its paired object is equal to or less than a predetermined threshold, when the difference in combination similarity between the top combinations is small (for example, equal to or less than a predetermined threshold), or when there is no object and object region common to the top combinations, the correspondence calculation unit 161 may include, in the optimal correspondence information, information indicating that there is no combination giving an optimal correspondence between the other objects and the other object regions.
- The correspondence determination unit 162 determines, for an object for which the optimal correspondence information indicates that there is no optimal correspondence, the correspondence between the object and the object region included in the first region correspondence information as the correspondence between the object and the object region.
- Alternatively, the correspondence calculation unit 161 selects, from all possible combinations, the combinations whose combination similarity is within a predetermined threshold of the combination similarity of the combination having the maximum combination similarity. The correspondences between the objects and the object regions that are common to the selected combinations are included in the optimal correspondence information as the optimal correspondences, and for the objects and object regions not included in those common correspondences, information indicating that there is no optimal correspondence is included in the optimal correspondence information.
- Then, the correspondence determination unit 162 outputs the information indicating the correspondences between the objects and the object regions included in the optimal correspondence information as the determined correspondence information, and for the objects for which the optimal correspondence information indicates that there is no optimal correspondence with an object region, it may output the information indicating the correspondence between the object and the object region included in the first region correspondence information as the determined correspondence information.
- the feature amount generation unit 14 may change information included in the area feature amount information output to the correspondence determination unit 16 according to the tracking state.
- For example, for an object region whose tracking state is the separated state, the region feature amount information includes the feature amount of the object region. For an object region whose tracking state indicates a state other than the separated state, the feature amount may also be included, but since the correspondence determination unit 16 does not need it, it may be omitted.
- Alternatively, the region feature amount information may include, for an object region, information indicating that association with an object is unnecessary. The correspondence determination unit 16 excludes from the association targets any object region for which the region feature amount information indicates that association with an object is unnecessary. As a result, the amount of calculation of the correspondence determination unit 16 can be reduced.
- Further, the feature amount synthesizing unit 15 need not acquire the feature amounts of all the objects stored in the feature amount storage unit 143; based on the first region correspondence information output from the tracking state determination unit 13, it may acquire the feature amounts stored in the feature amount storage unit 143 only for the objects whose tracking state is indicated to be the separated state. By doing so, the wasteful processing of acquiring feature amounts of objects that are not needed for generating the combined feature amount can be omitted, and the feature amount synthesizing unit 15 can speed up the generation of the combined feature amount.
- In this case, the first region correspondence information and the determined correspondence information are the same. The information included in the determined correspondence information fed back to the tracking state determination unit 13 may be limited to the correspondences for the separated objects only.
- Alternatively, the tracking state determination unit 13 may use, instead of the determined correspondence information, the correspondences that the tracking state determination unit 13 itself obtained in the past (included in the first region correspondence information). Even in this case, a device equivalent to the object tracking device using each of the methods described above can be realized.
- In the above description, the feature amount updating unit 142 updates the feature amount of the object based on the first region correspondence information, but the feature amount of the object may instead be updated based on the determined correspondence information.
- In that case, the feature amount extraction unit 141 includes, in the region feature amount information output to the feature amount updating unit 142, the feature amounts of the object regions whose tracking state is indicated to be the single state or the separated state based on the first region correspondence information.
- After the correspondence determination unit 16 determines the association between the object and the object region, the feature amount updating unit 142 determines the tracking state of the object based on the determined correspondence information output by the correspondence determination unit 16.
- The feature amount updating unit 142 updates the stored feature amount of the object, based on the region feature amount information output by the feature amount extraction unit 141, only for the objects whose tracking state is the single state. Even in this case, a device equivalent to the object tracking device using each of the methods described above can be realized.
- Note that the correspondence determination unit 16 may determine the tracking state of the object. In this case, the correspondence determination unit 16 outputs the determined correspondence information including the information on the determined tracking state of the object. By doing so, the feature amount updating unit 142 can omit the processing of determining the tracking state of the object. Even in this case, a device equivalent to the object tracking device using each of the methods described above can be realized.
- Alternatively, the feature amount updating unit 142 may update the feature amount of the object stored in the feature amount storage unit 143 based not only on the first region correspondence information but also on the determined correspondence information.
- In that case, the feature amount extraction unit 141 includes, in the region feature amount information output to the feature amount updating unit 142, the feature amounts of the object regions that are indicated to be in the single state or the separated state in the first region correspondence information.
- The feature amount updating unit 142 updates the feature amount of the object stored in the feature amount storage unit 143, based on the region feature amount information output from the feature amount extraction unit 141 and the first region correspondence information output from the tracking state determination unit 13, only for the objects that are indicated to be in the single state.
- Further, the feature amount updating unit 142 determines the tracking state of the object based on the determined correspondence information output by the correspondence determination unit 16, and updates the feature amount of the object stored in the feature amount storage unit 143, based on the region feature amount information output by the feature amount extraction unit 141, only for the objects whose tracking state has transitioned from the separated state to the single state.
- Note that the correspondence determination unit 16 may determine the tracking state of the object. In that case, the correspondence determination unit 16 outputs the determined correspondence information including the information on the determined tracking state of the object. By doing so, the processing of determining the tracking state of the object can be omitted.
- The first region correspondence information in the present embodiment includes still/moving state information indicating whether the object region is in a stationary state or in a moving state.
- The still/moving state includes a stationary state indicating that the object region is stationary and a moving state indicating that the object region is moving.
- The object tracking device treats an object region whose still/moving state is the stationary state as the background.
- The first region correspondence information also includes background update information indicating whether or not the object region is to be treated as the background.
- The background update information is represented by, for example, the binary values "0" and "1". If the background update information of an object region is "1", the object region is treated as the background; if the background update information of an object region is "0", the object region is not treated as the background.
- In this way, the object tracking device can track only persons, for example by regarding a stationary object separated from a person as the background.
- As a method of determining the still/moving state, for example, when the duration of the stationary state of the extracted object region (hereinafter referred to as the stationary duration) is equal to or greater than a predetermined threshold, the still/moving state of the object region is determined to be the stationary state, and when the stationary duration is smaller than the predetermined threshold, the still/moving state of the object region is determined to be the moving state.
- By introducing background update information, the object tracking device regards object regions treated as the background as object regions that do not need to be tracked and excludes them from the tracking targets, so that only objects that need tracking are tracked. That is, combinations that associate an object with an object region whose still/moving state information indicates the stationary state are excluded from all possible combinations. For example, only for an object region for which the stationary duration is determined to exceed the predetermined threshold, the still/moving state is the stationary state, and furthermore there is no object associated with it in the determined correspondence information, does the object tracking device treat the object region as part of the background. In addition, by setting the threshold for determining whether an object is the background, the object tracking device can prevent an object region in which an object to be tracked exists from being mistakenly regarded as the background even if the object tracking fails.
- FIG. 11 is a block diagram showing a second embodiment of the object tracking device according to the present invention.
- The components of the object tracking device are the same as those in the first embodiment, but the object region extraction unit 12 extracts object regions based not only on the image information input via the image input terminal 11 but also on the determined correspondence information of the association unit 17.
- The configuration of the association unit 17 is the same as the configuration shown in FIG. 3, and the configurations of the tracking state determination unit 13 and the remaining components are the same as the corresponding configurations shown in the first embodiment.
- the object area information storage unit 133 stores the stationary duration of the object area.
- The object tracking unit 131 associates the past stationary duration of each object region stored in the object region information storage unit 133 with the object region at the current time. Then, the object tracking unit 131 calculates the stationary duration at the current time.
- The object region information storage unit 133 stores the calculated stationary duration of each object region at the current time.
- For example, the object tracking unit 131 selects the largest stationary duration from the plurality of past stationary durations and associates it with the stationary duration at the current time. Note that the shortest stationary duration may instead be selected and associated with the stationary duration at the current time, or the average of the stationary durations may be calculated and the calculated average associated with the stationary duration at the current time.
- Specifically, the object tracking unit 131 calculates the stationary duration at the current time by updating the stationary duration according to the following procedure.
- the object tracking unit 131 calculates, for each object region, a difference vector between the center of gravity of the past object region and the center of gravity of the latest object region. If the magnitude of the difference vector is equal to or smaller than a predetermined threshold, the object tracking unit 131 determines that the object region is stationary and extends the stationary duration. If the magnitude of the difference vector exceeds the threshold, the object tracking unit 131 determines that the object region is moving and resets the stationary duration.
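The update rule above can be sketched as follows. This is only an illustrative sketch: the function name, data layout, and threshold value are assumptions, not taken from the patent.

```python
import math

def update_stationary_duration(prev_centroid, curr_centroid,
                               prev_duration, threshold=2.0):
    """Update the stationary duration of one object region.

    If the centroid moved no more than `threshold` pixels between
    frames, the region is judged stationary and the duration grows;
    otherwise the region is judged moving and the duration resets.
    """
    dx = curr_centroid[0] - prev_centroid[0]
    dy = curr_centroid[1] - prev_centroid[1]
    if math.hypot(dx, dy) <= threshold:   # magnitude of difference vector
        return prev_duration + 1          # stationary: extend the duration
    return 0                              # moving: reset the duration
```

In practice the duration would be accumulated per region across frames, e.g. `d = update_stationary_duration(c_prev, c_curr, d)`.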
- the state determination unit 132 determines the still/moving state of the object region based on the stationary duration of the object region stored in the object region information storage unit 133. Then, the state determination unit 132 outputs the first region correspondence information including the information on the stationary duration and the still/moving state of the object region. In addition, the object region information storage unit 133 updates the still/moving state of the object region based on the determination by the state determination unit 132.
- in the first embodiment, the correspondence calculation unit 161 calculated the similarity with the combined feature amount for all possible combinations of objects and object regions; in the present embodiment, it calculates the similarity only for combinations in which an object region whose still/moving state indicates stationary corresponds to no object. That is, any combination that associates an object with an object region whose still/moving state indicates stationary is excluded from all possible combinations. In this way, the correspondence calculation unit 161 can reduce the amount of calculation required to compute the similarity between objects and object regions.
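The pruning described above can be illustrated with a minimal sketch. The dictionary layout and function name here are assumptions for illustration only:

```python
def candidate_pairs(objects, regions):
    """Enumerate object/region pairs whose similarity must be scored,
    skipping every pairing with a region whose still/moving state
    indicates stationary: such regions are taken to correspond to no
    object, so their similarity need not be computed.
    """
    pairs = []
    for obj in objects:
        for region in regions:
            if region["state"] == "stationary":
                continue  # excluded from all possible combinations
            pairs.append((obj["id"], region["id"]))
    return pairs
```

With many long-stationary regions in the scene, this pruning shrinks the set of combinations roughly in proportion to the fraction of stationary regions.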
- the correspondence determination unit 162 outputs the determined correspondence information including the information on the stationary duration of the object region calculated by the object tracking unit 131.
- the correspondence determination unit 162 determines whether or not the object region can be regarded as the background based on the stationary duration information and the still/moving state information. Specifically, the object region is determined to be regardable as the background if the stationary duration exceeds a predetermined threshold, the still/moving state indicates stationary, and no object is associated with the region in the determined correspondence information. If the object region can be regarded as the background, the correspondence determination unit 162 includes, in the determined correspondence information, background update information indicating that the background in the object region is to be updated.
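The three-part test above can be sketched as a single predicate. The function name and threshold value are illustrative assumptions:

```python
def can_treat_as_background(duration, state, associated_objects,
                            duration_threshold=300):
    """Decide whether an object region may be treated as background:
    the stationary duration must exceed a threshold, the still/moving
    state must indicate stationary, and no object may be associated
    with the region in the determined correspondence information.
    """
    return (duration > duration_threshold
            and state == "stationary"
            and len(associated_objects) == 0)
```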
- the object region extraction unit 12 updates the background image in the object region based on the image information input via the image input terminal 11 and the determined correspondence information output by the associating unit 17. The object region extraction unit 12 updates the background image based on Expression (13).
- B_t(x, y) = (1 − α) · B_{t−1}(x, y) + α · I_t(x, y) … (13)
- here, (x, y) denotes the pixel coordinates, I_t(x, y) denotes the pixel value of the image information at the coordinates (x, y), B_t(x, y) and B_{t−1}(x, y) denote the pixel values of the background image at the coordinates (x, y) at times t and t−1, respectively, and α denotes the update coefficient.
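Expression (13) is a per-pixel running average; a minimal sketch of this update over a grayscale image stored as nested lists (the function name and the value of α are assumptions):

```python
def update_background(bg_prev, frame, alpha=0.05):
    """Blend the latest frame into the background per Expression (13):
    B_t(x, y) = (1 - alpha) * B_{t-1}(x, y) + alpha * I_t(x, y),
    applied independently to every pixel.
    """
    return [[(1 - alpha) * b + alpha * i
             for b, i in zip(bg_row, img_row)]
            for bg_row, img_row in zip(bg_prev, frame)]
```

A small α updates the background slowly, so brief foreground motion barely disturbs it; in the present embodiment the update would only be applied inside regions flagged by the background update information.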
- as described above, the state determination unit 132 outputs the first region correspondence information including the stationary duration and the still/moving state of the object region, and the associating unit 17 outputs the determined correspondence information including the background update information.
- by including the still/moving state information in the first region correspondence information, the object tracking device excludes stationary objects from the candidates to be associated with object regions. As a result, the amount of calculation for the similarity computation performed by the correspondence calculation unit 161 can be reduced, and only moving objects are tracked.
- in the present embodiment, the object region information storage unit 133 stores the stationary duration and the state determination unit 132 acquires the stationary duration from the object region information storage unit 133; instead, the object tracking unit 131 may output the second region correspondence information including the stationary duration information directly to the state determination unit 132. In this way, the state determination unit 132 can acquire the stationary duration information without going through the object region information storage unit 133.
- the feature amount extraction unit 141 may include in the region feature amount information the feature amount of an object region whose tracking state indicates the separated state and whose still/moving state indicates stationary, but may also omit it, since it is unnecessary in the correspondence calculation unit 161. Alternatively, instead of omitting it entirely, information indicating that no association is necessary may be included. As a result, the output of unnecessary feature amounts can be eliminated.
- FIG. 12 is a configuration diagram showing a third embodiment of the object tracking device according to the present invention.
- the object tracking device includes an image input terminal 21, a first control unit 22, an object region extraction unit 23, a feature amount generation unit 24, and a feature amount synthesis unit 25.
- the first control unit 22 is connected to the image input terminal 21, the object region extraction unit 23, the feature amount generation unit 24, and the feature amount synthesis unit 25, and controls them. In addition, the first control unit 22 associates the object with the object region and outputs fixed correspondence information.
- the object region extraction unit 23 extracts the object region based on the image information from the first control unit 22, and outputs object region information, including the image information of the object region together with a region number, to the first control unit 22.
- the feature amount generation unit 24 extracts the feature amount of the object region based on the image information and the object region information from the first control unit 22, and outputs region feature amount information including the extracted feature amount of the object region to the first control unit 22.
- the feature amount synthesizing unit 25 calculates, based on the region feature amount information and the first region correspondence information from the first control unit 22, combined feature amounts obtained by combining the feature amounts of the objects for all necessary combinations of a plurality of objects. Then, the feature amount synthesizing unit 25 outputs combined feature amount information including the combined feature amounts to the first control unit 22.
- the first region correspondence information includes information on the correspondence between the object and the object region and information on the tracking state of the object, as in the first embodiment.
- the first control unit 22 generates first area correspondence information.
- the first control unit 22 inputs image information via an image input terminal 21 to which an image signal is input from a video camera or the like.
- the first control unit 22 outputs the image information to the object region extraction unit 23, and inputs the object region information output by the object region extraction unit 23.
- the first control unit 22 generates the first region correspondence information based on the object region information and the past determined correspondence information, and outputs the generated first region correspondence information, the image information, and the object region information to the feature amount generation unit 24.
- the first control unit 22 inputs the region feature amount information output by the feature amount generation unit 24, and outputs the region feature amount information and the first region correspondence information to the feature amount synthesis unit 25.
- the first control unit 22 inputs the combined feature amount information output by the feature amount synthesizing unit 25.
- the first control unit 22 calculates the similarity between the feature amount of the object region included in the region feature amount information and the combined feature amount included in the combined feature amount information to obtain the combination similarity, and determines the optimal association between the objects and the object regions. Then, the first control unit 22 outputs the information on the determined correspondence between the objects and the object regions as the determined correspondence information.
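Selecting the optimal association amounts to taking the candidate assignment with the highest combination similarity. A minimal sketch, where the scoring of each candidate is assumed to have been done already (the function name and data layout are illustrative assumptions):

```python
def best_assignment(similarities):
    """Pick the object/region assignment with the highest combination
    similarity. `similarities` maps a candidate assignment (a tuple of
    (object, region) pairs) to its precomputed combination similarity.
    """
    return max(similarities, key=similarities.get)
```

For example, with two objects and two regions the candidates are the two possible one-to-one assignments, and the one scoring higher wins.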
- FIG. 13 is a block diagram showing an example of the configuration of the first control unit 22.
- the first control unit 22 includes a second control unit 221, an object tracking unit 222, a state determination unit 223, and a correspondence calculation unit 224.
- the first control unit 22 (the second control unit 221, the object tracking unit 222, the state determination unit 223, and the correspondence calculation unit 224), the object region extraction unit 23, the feature amount generation unit 24, and the feature amount synthesis unit 25 can be realized by hardware, but can also be realized by software. That is, they can be realized by a CPU that executes processing based on a program and by a program, stored in a storage device, for realizing the functions of the first control unit 22, the object region extraction unit 23, the feature amount generation unit 24, and the feature amount synthesis unit 25 described below.
- when image information is input via the image input terminal 21, the second control unit 221 outputs the image information to the object region extraction unit 23.
- the second control unit 221 outputs, to the object tracking unit 222, the current-time object region information output by the object region extraction unit 23, the past object region information stored by the second control unit 221 itself, and the past determined correspondence information stored by the second control unit 221 itself.
- the second control unit 221 outputs the image information input via the image input terminal 21 and the object region information output by the object region extraction unit 23 to the feature amount generation unit 24.
- the contents of the determined correspondence information are the same as those in the first embodiment.
- the second control unit 221 outputs the second region correspondence information and the object region information to the state determination unit 223. Note that the contents of the second region correspondence information are the same as those in the first embodiment.
- based on the first region correspondence information, the second control unit 221 outputs the feature amounts of the objects and the first region correspondence information to the feature amount synthesizing unit 25 only for object regions whose tracking state indicates the separated state. The contents of the first region correspondence information are the same as those in the first embodiment.
- the second control unit 221 outputs the region feature amount information, the combined feature amount information, and the first region correspondence information to the correspondence calculation unit 224.
- for an object whose tracking state in the first region correspondence information output by the state determination unit 223 indicates a state other than the separated state, the second control unit 221 determines the correspondence between the object and the object region included in the first region correspondence information as the correspondence between the object and the object region.
- for the remaining objects, the second control unit 221 determines the correspondence between the object and the object region included in the optimal correspondence information output by the correspondence calculation unit 224 as the correspondence between the object and the object region. Note that the contents of the optimal correspondence information output by the correspondence calculation unit 224 are the same as those in the first embodiment.
- the object tracking unit 222 tracks the object based on the current-time object region information input from the second control unit 221, the past object region information stored by the second control unit 221, and the past determined correspondence information, and outputs the second region correspondence information to the second control unit 221. Note that the method by which the object tracking unit 222 tracks the object is the same as the tracking method in the first embodiment.
- the state determination unit 223 determines the tracking state of the object based on the second region correspondence information and the object region information input from the second control unit 221, and outputs the first region correspondence information to the second control unit 221.
- FIG. 14 is a flowchart illustrating an example of the processing progress of the object tracking device.
- the object region extraction unit 23 extracts an object region from the image information (step S1402). Then, object region information including image information of the object region is output.
- the object tracking unit 222 tracks the object, associates the object with the object area, and outputs second area correspondence information.
- the state determination unit 223 determines the tracking state of the object based on the second region correspondence information and the object region information (step S1403).
- the feature amount generation unit 24 calculates the feature amount of the object region (step S1404).
- the second control unit 221 has a counter i.
- the second control unit 221 sets the number of objects in the counter i (step S1405).
- the second control unit 221 determines whether the value of the counter i is “0” (step S1406). When the value of the counter i is “0”, the correspondence has been determined for all objects. When the value of the counter i is not “0”, the second control unit 221 instructs the state determination unit 223 to execute the process of the next step S1407.
- the state determination unit 223 determines whether the tracking state of the object is the separated state (step S1407).
- the feature amount synthesis unit 25 combines the feature amounts of the objects for all necessary combinations of a plurality of objects to generate combined feature amounts (step S1408).
- the correspondence calculation unit 224 calculates, for all possible combinations of the objects and the object regions, the similarity between the combined feature amount of the objects and the feature amount of the object region, thereby obtaining the combination similarity.
- the second control unit 221 determines the combination of the object and the object region having the highest combination similarity calculated by the correspondence calculation unit 224 as the optimal correspondence between the object and the object region (step S1409).
- when the state determination unit 223 determines in step S1407 that the tracking state is a state other than the separated state, the second control unit 221 determines the correspondence between the object and the object region included in the first region correspondence information as the optimal correspondence between the object and the object region (step S1409).
- the feature amount generation unit 24 updates the feature amount of the object whose correspondence has been determined (step S1410). However, the feature amount generation unit 24 updates the feature amount only for an object whose tracking state indicates the single state.
- the second control unit 221 decreases the value of the counter i by “1” (step S1411). The processing from step S1407 to step S1410 is repeatedly executed until the value of the counter i is determined to be “0” in step S1406; that is, the processing from step S1407 to step S1410 is performed for every object until its correspondence with the optimal object region is determined. After the processing for determining the correspondence between all objects and the object regions (steps S1407 to S1410) is completed, the processing is executed again from step S1401.
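The per-object loop of steps S1405 to S1411 can be sketched as follows. The function names and the callback interface are illustrative assumptions; only the counter-driven control flow mirrors the flowchart:

```python
def process_objects(objects, is_separated, match_by_features, match_by_tracking):
    """Sketch of steps S1405-S1411: initialise a counter with the number
    of objects, then resolve each object's correspondence in turn,
    decrementing the counter until it reaches zero.
    """
    results = {}
    i = len(objects)                         # step S1405
    while i != 0:                            # step S1406
        obj = objects[len(objects) - i]
        if is_separated(obj):                # step S1407
            results[obj] = match_by_features(obj)   # steps S1408-S1409
        else:
            results[obj] = match_by_tracking(obj)   # step S1409
        i -= 1                               # step S1411
    return results
```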
- an object tracking device having the same function as that of the first embodiment can be realized.
- in addition, an object tracking device can be realized by an object tracking program for executing: image input processing for inputting image information; tracking state determination processing for determining the tracking state for each object or for each object region based on the image information, the object region information, and the determined correspondence information indicating the correspondence between the object region and the object before the current time, and for outputting first region correspondence information indicating the correspondence between the object region and the object together with the tracking state; feature amount generation processing for generating, based on the image information, the object region information, and the determination result of the tracking state determination processing, a region feature amount indicating the feature amount of the object region and an object feature amount indicating the feature amount of the object; determination processing for determining, for an object whose tracking state indicates a state other than the separated state, the correspondence between the object and the object region included in the first region correspondence information as the correspondence between the object and the object region; and combining processing for generating combined feature amounts by combining the object feature amounts for all necessary combinations of a plurality of objects based on the object feature amounts and the first region correspondence information, comparing each combined feature amount with the region feature amount, and associating the object corresponding to the combined feature amount having the highest similarity with the object region.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP03791330A EP1538565A4 (en) | 2002-08-30 | 2003-08-27 | OBJECT TRACKING DEVICE, OBJECT TRACKING PROCEDURE AND OBJECT PURPOSE PROGRAM |
US10/525,874 US20050254686A1 (en) | 2002-08-30 | 2003-08-27 | Object trace device, object method, and object trace program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2002254493A JP4240957B2 (ja) | 2002-08-30 | 2002-08-30 | 物体追跡装置、物体追跡方法および物体追跡プログラム |
JP2002-254493 | 2002-08-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2004021281A1 true WO2004021281A1 (ja) | 2004-03-11 |
Family
ID=31972842
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2003/010837 WO2004021281A1 (ja) | 2002-08-30 | 2003-08-27 | 物体追跡装置、物体追跡方法および物体追跡プログラム |
Country Status (6)
Country | Link |
---|---|
US (1) | US20050254686A1 (ja) |
EP (1) | EP1538565A4 (ja) |
JP (1) | JP4240957B2 (ja) |
KR (1) | KR100664425B1 (ja) |
CN (1) | CN1320502C (ja) |
WO (1) | WO2004021281A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11212572B2 (en) * | 2018-02-06 | 2021-12-28 | Nippon Telegraph And Telephone Corporation | Content determination device, content determination method, and program |
Families Citing this family (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7526750B2 (en) * | 2003-10-15 | 2009-04-28 | Microsoft Corporation | Object-based systematic state space exploration of software |
JP4140567B2 (ja) | 2004-07-14 | 2008-08-27 | 松下電器産業株式会社 | 物体追跡装置および物体追跡方法 |
CN100435555C (zh) * | 2005-11-17 | 2008-11-19 | 中国科学院半导体研究所 | 高速目标跟踪方法及其电路系统 |
CA2661136A1 (en) * | 2006-08-11 | 2008-02-14 | Thomson Licensing | Accurate motion portrayal for display and digital projectors |
US7920717B2 (en) * | 2007-02-20 | 2011-04-05 | Microsoft Corporation | Pixel extraction and replacement |
JP4970195B2 (ja) | 2007-08-23 | 2012-07-04 | 株式会社日立国際電気 | 人物追跡システム、人物追跡装置および人物追跡プログラム |
JP2009053815A (ja) * | 2007-08-24 | 2009-03-12 | Nikon Corp | 被写体追跡プログラム、および被写体追跡装置 |
US8660300B2 (en) * | 2008-12-12 | 2014-02-25 | Silicon Laboratories Inc. | Apparatus and method for optical gesture recognition |
JP2011015244A (ja) * | 2009-07-03 | 2011-01-20 | Sanyo Electric Co Ltd | ビデオカメラ |
US8509482B2 (en) * | 2009-12-21 | 2013-08-13 | Canon Kabushiki Kaisha | Subject tracking apparatus, subject region extraction apparatus, and control methods therefor |
JP5488076B2 (ja) | 2010-03-15 | 2014-05-14 | オムロン株式会社 | 対象物追跡装置、対象物追跡方法、および制御プログラム |
US8850003B2 (en) * | 2010-04-20 | 2014-09-30 | Zte Corporation | Method and system for hierarchical tracking of content and cache for networking and distribution to wired and mobile devices |
KR101054736B1 (ko) * | 2010-05-04 | 2011-08-05 | 성균관대학교산학협력단 | 3차원 물체 인식 및 자세 추정 방법 |
JP5338978B2 (ja) | 2010-05-10 | 2013-11-13 | 富士通株式会社 | 画像処理装置および画像処理プログラム |
RU2546327C1 (ru) | 2011-03-28 | 2015-04-10 | Нек Корпорейшн | Устройство для отслеживания человека, способ отслеживания человека и невременный машиночитаемый носитель, хранящий программу для отслеживания человека |
JP6067547B2 (ja) * | 2013-12-13 | 2017-01-25 | 国立大学法人 東京大学 | 物体認識装置、ロボット及び物体認識方法 |
US9230366B1 (en) * | 2013-12-20 | 2016-01-05 | Google Inc. | Identification of dynamic objects based on depth data |
JP6203077B2 (ja) * | 2014-02-21 | 2017-09-27 | 株式会社東芝 | 学習装置、密度計測装置、学習方法、学習プログラム、及び密度計測システム |
KR101596436B1 (ko) * | 2014-05-19 | 2016-02-23 | 한국과학기술연구원 | 다중 목표 추적 시스템 및 방법 |
JP6340957B2 (ja) * | 2014-07-02 | 2018-06-13 | 株式会社デンソー | 物体検出装置および物体検出プログラム |
US20170053172A1 (en) * | 2015-08-20 | 2017-02-23 | Kabushiki Kaisha Toshiba | Image processing apparatus, and image processing method |
JP6504711B2 (ja) * | 2016-03-29 | 2019-04-24 | Kddi株式会社 | 画像処理装置 |
US10867394B2 (en) | 2016-05-18 | 2020-12-15 | Nec Corporation | Object tracking device, object tracking method, and recording medium |
JP6787075B2 (ja) | 2016-11-22 | 2020-11-18 | 富士通株式会社 | 画像処理システム、画像処理装置および画像処理方法 |
US10462449B2 (en) * | 2017-02-23 | 2019-10-29 | Novatek Microelectronics Corp. | Method and system for 360-degree video playback |
KR102022971B1 (ko) * | 2017-10-18 | 2019-09-19 | 한국전자통신연구원 | 영상의 객체 처리 방법 및 장치 |
JP6962145B2 (ja) | 2017-11-13 | 2021-11-05 | 富士通株式会社 | 画像処理プログラム、画像処理方法および情報処理装置 |
WO2023176405A1 (ja) * | 2022-03-17 | 2023-09-21 | 国立研究開発法人量子科学技術研究開発機構 | 画像認識プログラム、これを用いた画像認識装置、検出対象個体数計数方法、およびこれらに使用する画像認識学習用モデル画像作成装置 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0259976A (ja) * | 1988-08-26 | 1990-02-28 | Matsushita Electric Works Ltd | ブロック統合処理方式 |
JPH04245579A (ja) * | 1991-01-31 | 1992-09-02 | N T T Data Tsushin Kk | 図形の部分照合方法 |
JPH0721475A (ja) * | 1993-06-30 | 1995-01-24 | Fuji Electric Co Ltd | 侵入物体監視装置 |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2620952B2 (ja) * | 1988-04-18 | 1997-06-18 | 日本電信電話株式会社 | 微細パターン形成方法 |
US6185314B1 (en) * | 1997-06-19 | 2001-02-06 | Ncr Corporation | System and method for matching image information to object model information |
US6545706B1 (en) * | 1999-07-30 | 2003-04-08 | Electric Planet, Inc. | System, method and article of manufacture for tracking a head of a camera-generated image of a person |
-
2002
- 2002-08-30 JP JP2002254493A patent/JP4240957B2/ja not_active Expired - Fee Related
-
2003
- 2003-08-27 US US10/525,874 patent/US20050254686A1/en not_active Abandoned
- 2003-08-27 EP EP03791330A patent/EP1538565A4/en not_active Withdrawn
- 2003-08-27 WO PCT/JP2003/010837 patent/WO2004021281A1/ja active Application Filing
- 2003-08-27 CN CNB038247518A patent/CN1320502C/zh not_active Expired - Fee Related
- 2003-08-27 KR KR1020057003601A patent/KR100664425B1/ko not_active IP Right Cessation
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0259976A (ja) * | 1988-08-26 | 1990-02-28 | Matsushita Electric Works Ltd | ブロック統合処理方式 |
JPH04245579A (ja) * | 1991-01-31 | 1992-09-02 | N T T Data Tsushin Kk | 図形の部分照合方法 |
JPH0721475A (ja) * | 1993-06-30 | 1995-01-24 | Fuji Electric Co Ltd | 侵入物体監視装置 |
Non-Patent Citations (1)
Title |
---|
See also references of EP1538565A4 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11212572B2 (en) * | 2018-02-06 | 2021-12-28 | Nippon Telegraph And Telephone Corporation | Content determination device, content determination method, and program |
Also Published As
Publication number | Publication date |
---|---|
JP4240957B2 (ja) | 2009-03-18 |
CN1695167A (zh) | 2005-11-09 |
KR20050057096A (ko) | 2005-06-16 |
EP1538565A1 (en) | 2005-06-08 |
JP2004096402A (ja) | 2004-03-25 |
KR100664425B1 (ko) | 2007-01-03 |
US20050254686A1 (en) | 2005-11-17 |
CN1320502C (zh) | 2007-06-06 |
EP1538565A4 (en) | 2010-11-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2004021281A1 (ja) | 物体追跡装置、物体追跡方法および物体追跡プログラム | |
JP7317919B2 (ja) | 外観検索のシステムおよび方法 | |
US10599228B2 (en) | Information processing device and method, program and recording medium for identifying a gesture of a person from captured image data | |
WO2007105768A1 (ja) | 顔画像登録装置、顔画像登録方法、顔画像登録プログラム、および記録媒体 | |
JP5219795B2 (ja) | 被写体追跡装置、その制御方法、撮像装置、表示装置及びプログラム | |
EP2339536B1 (en) | Image processing system, image processing apparatus, image processing method, and program | |
CN111950424B (zh) | 一种视频数据处理方法、装置、计算机及可读存储介质 | |
JP4874150B2 (ja) | 移動物体追跡装置 | |
WO2009143279A1 (en) | Automatic tracking of people and bodies in video | |
JP2022548915A (ja) | 人体属性の認識方法、装置、電子機器及びコンピュータプログラム | |
JP7159384B2 (ja) | 画像処理装置、画像処理方法、及びプログラム | |
JP4969291B2 (ja) | 移動物体追跡装置 | |
JP2010039788A (ja) | 画像処理装置及びその方法並びに画像処理プログラム | |
JP2011089784A (ja) | 物体方向推定装置 | |
US11335027B2 (en) | Generating spatial gradient maps for a person in an image | |
JP2024045460A (ja) | 情報処理システム、情報処理装置、情報処理方法、およびプログラム | |
EP2639744B1 (en) | Image processor, image processing method, control program, and recording medium | |
JP2016219879A (ja) | 画像処理装置、画像処理方法及びプログラム | |
JP5241687B2 (ja) | 物体検出装置及び物体検出プログラム | |
JPH09265538A (ja) | 自動追尾装置 | |
JP6350331B2 (ja) | 追尾装置、追尾方法及び追尾プログラム | |
CN117156259B (zh) | 一种视频流获取方法及电子设备 | |
JP2017033390A (ja) | 画像解析装置及びプログラム | |
JPH11296673A (ja) | ジェスチャ認識装置 | |
JP3122290B2 (ja) | ジェスチャ動画像認識方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): CN KR US |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR |
|
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 10525874 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020057003601 Country of ref document: KR |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2003791330 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 20038247518 Country of ref document: CN |
|
WWP | Wipo information: published in national office |
Ref document number: 2003791330 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 1020057003601 Country of ref document: KR |