WO2010095437A1 - Object position estimation system, object position estimation device, object position estimation method, and object position estimation program - Google Patents
- Publication number
- WO2010095437A1 (PCT/JP2010/001038)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S5/00—Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
- G01S5/02—Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using radio waves
- G01S5/0294—Trajectory determination or predictive filtering, e.g. target tracking or Kalman filtering
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/02—Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
- G01S13/06—Systems determining position data of a target
- G01S13/46—Indirect determination of position data
- G01S2013/466—Indirect determination of position data by Trilateration, i.e. two antennas or two sensors determine separately the distance to a target, whereby with the knowledge of the baseline length, i.e. the distance between the antennas or sensors, the position data of the target is determined
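The trilateration referred to in G01S2013/466 can be sketched briefly; this is an illustrative example for the reader, not part of the patent text. With two sensors on a known baseline, each range measurement defines a circle, and the target lies at their intersection, up to a mirror ambiguity about the baseline:

```python
import math

def trilaterate_2d(baseline, r1, r2):
    """Locate a target from two range measurements in 2D.

    Sensor 1 sits at the origin, sensor 2 at (baseline, 0).
    Returns the two mirror-symmetric candidate positions.
    """
    # Intersection of the circles |p| = r1 and |p - (baseline, 0)| = r2
    x = (r1 ** 2 - r2 ** 2 + baseline ** 2) / (2 * baseline)
    y_squared = r1 ** 2 - x ** 2
    if y_squared < 0:
        raise ValueError("range circles do not intersect")
    y = math.sqrt(y_squared)
    return (x, y), (x, -y)

# A target at (3, 4): range 5 from the origin, sqrt(65) from (10, 0).
above, below = trilaterate_2d(10.0, 5.0, math.sqrt(65))
```

The mirror ambiguity is why the classification speaks of determining the position "with the knowledge of the baseline length"; in practice a third measurement or prior knowledge selects which of the two candidates is the true position.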
Definitions
- the present invention relates to an object position estimation system, an object position estimation apparatus, an object position estimation method, and an object position estimation program for estimating the ID and position of an object based on information from a plurality of different types of sensors (observation apparatuses).
- a wireless tag or a camera may be used as a sensor that can detect the position of an object.
- the wireless tag basically does not mistake the ID of an object, because ID identification is performed based on the ID information transmitted from a tag attached to the object to be identified, but its positioning accuracy is not as good as that of a camera. Furthermore, tag detection becomes unstable because radio waves are absorbed by moisture. Therefore, in an ordinary, uncontrolled environment such as a factory, not much positioning accuracy can be expected from the wireless tag. In particular, when the tag is carried by a person or another object containing a lot of moisture, the position error can reach 1 m or more, or the tag itself may go undetected.
- with a camera, the positioning accuracy is better than that of the wireless tag, but the object ID must be identified from image features (shape, color, etc.) obtained from the camera, so the ID identification accuracy cannot reach 100%.
- in particular, the identification rate for objects having similar image features is low. For example, it is difficult to distinguish objects of similar color or shape, such as tomatoes and apples, with high accuracy.
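The tomato/apple difficulty can be made concrete with a small sketch (illustrative only; the labels and color values are hypothetical, and real systems would use richer features than a single mean color):

```python
def identify_by_color(observed_rgb, references):
    """Return the reference label whose mean color is closest (Euclidean)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(references, key=lambda label: distance(observed_rgb, references[label]))

# Two classes with nearly identical red hues, as with tomatoes and apples:
references = {"tomato": (200, 30, 30), "apple": (190, 35, 40)}
label = identify_by_color((195, 32, 35), references)  # observation between the two
```

Because the two reference colors are only a few units apart, small lighting changes flip the answer, which is exactly why the ID identification accuracy of a camera cannot reach 100%.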
- Non-Patent Document 1 proposes a technique for estimating the position of an object while compensating for the limited observation accuracy of each type of sensor by combining a plurality of sensors and integrating the observation information of the plurality of different types of sensors.
- Hirofumi Kanazaki, Takehisa Yairi, Kazuo Machida, Kenji Kondo, and Yoshihiko Matsukawa, "Variational Approximation Data Association Filter", 15th European Signal Processing Conference (EUSIPCO 2007).
- however, the technique of Non-Patent Document 1 has a problem in that the position of an object cannot be estimated when an observation apparatus that cannot perform ID identification is included.
- consequently, the situations in which the preconditions of the prior art are satisfied are extremely limited, and the technique has the problem that its range of application is narrow.
- an object of the present invention is to provide an object position estimation system, an object position estimation apparatus, an object position estimation method, and an object position estimation program that can estimate the position of an object even when an observation apparatus that cannot identify objects, typified by a camera, is included. Specifically, the present invention solves the above problem by calculating a numerical value corresponding to the object ID likelihood of the observation apparatus that cannot perform ID identification, based on observation information from sources other than that observation apparatus.
- the present invention is configured as follows.
- an object position estimation system for estimating the position of an object
- a first observation unit for observing the object at different times and obtaining first observation information including the position and ID of the object
- First object position likelihood determining means for determining a first object position likelihood that is an estimated position of the object at each time based on the first observation information respectively observed by the first observation unit
- First object ID likelihood determining means for determining a first object ID likelihood of the object at each time based on the first object position likelihood determined by the first object position likelihood determining means
- a second observation unit that observes the object at different times to obtain second observation information including the position and feature of the object, and attaches a second observation ID to each second observation information
- Second object position likelihood determining means for determining a second object position likelihood that is an estimated position of the object at each time based on the second observation information respectively observed by the second observation unit
- Object tracking status determination means for detecting two pieces of the second observation information that have the same feature quantity but different times, and determining the tracking status information of the object by associating the second observation IDs of the two detected pieces of second observation information
- Second object ID likelihood determining means for determining a second object ID likelihood of the second observation information based on the tracking status information of the object and the estimated position of the object
- Association means for calculating a first object association value based on the first object ID likelihood and the first object position likelihood of the object, and a second object association value based on the second object ID likelihood and the second object position likelihood
- Object position estimation means for estimating the position of the object based on the first object ID likelihood, the first object position likelihood, and the first object association value of the object, and/or the second object ID likelihood, the second object position likelihood, and the second object association value. An object position estimation system is provided.
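The claims above do not fix a formula for the association value. As a hedged illustration only, a common choice in data association is to take the product of the ID likelihood and the position likelihood for each candidate object and normalize; the function below is such a sketch, not the patented method itself:

```python
def association_values(id_likelihoods, pos_likelihoods):
    """Combine per-object ID and position likelihoods into association
    values for one observation.

    id_likelihoods / pos_likelihoods: dict mapping object ID -> likelihood.
    The product rule and the normalization are illustrative assumptions;
    the claim only states that the association value is calculated from
    the two likelihoods.
    """
    raw = {oid: id_likelihoods[oid] * pos_likelihoods[oid]
           for oid in id_likelihoods}
    total = sum(raw.values())
    if total == 0:
        # No evidence at all: fall back to a uniform association.
        n = len(raw)
        return {oid: 1.0 / n for oid in raw}
    return {oid: value / total for oid, value in raw.items()}

# A tag observation that strongly identifies person A but is positionally
# ambiguous between persons A and B:
assoc = association_values({"A": 0.9, "B": 0.1}, {"A": 0.5, "B": 0.5})
```

With an uninformative position likelihood the association is driven entirely by the ID likelihood, and vice versa, which matches the intuition that each sensor contributes whichever kind of evidence it actually has.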
- according to another aspect, an object position estimation method for estimating the position of an object is provided, in which: the object is observed at different times and first observation information including the position and ID of the object is acquired in a first observation unit; based on the first observation information observed by the first observation unit, first object position likelihood determining means determines a first object position likelihood that is an estimated position of the object at each time; based on the first object position likelihood determined by the first object position likelihood determining means, first object ID likelihood determining means determines a first object ID likelihood of the object at each time; and the object is observed at different times, second observation information including the position and feature quantity of the object is acquired by a second observation unit, and a second observation ID is attached to each piece of second observation information by the second observation unit;
- second object position likelihood determining means determines a second object position likelihood that is an estimated position of the object at each time; two pieces of the second observation information having the same feature quantity but different times are detected, and the tracking status information of the object is determined by associating the second observation IDs of the two detected pieces of second observation information;
- a first object association value is calculated by association means based on the first object ID likelihood and the first object position likelihood of the object;
- a second object association value is calculated by the association means based on the second object ID likelihood and the second object position likelihood of the object; and
- the position of the object is estimated by object position estimation means. An object position estimation method is thus provided.
- according to still another aspect, an object position estimation program is provided for causing a computer to realize: a function of observing the object at different times and acquiring, in a first observation unit, first observation information including the position and ID of the object; a function of determining, by first object position likelihood determining means, a first object position likelihood that is an estimated position of the object at each time, based on the first observation information observed by the first observation unit; a function of determining, by first object ID likelihood determining means, a first object ID likelihood of the object at each time, based on the first object position likelihood determined by the first object position likelihood determining means;
- a function of determining, by second object position likelihood determining means, a second object position likelihood that is an estimated position of the object at each time; a function of detecting two pieces of the second observation information having the same feature quantity but different times, and determining the tracking status information of the object by associating the second observation IDs of the two detected pieces of second observation information; and
- a function of estimating the position of the object by object position estimation means based on the association values.
- First object position likelihood determining means for determining a first object position likelihood that is an estimated position of the object at each time;
- First object ID likelihood determining means for determining a first object ID likelihood of the object at each time based on the first object position likelihood determined by the first object position likelihood determining means;
- Second object position likelihood determining means for determining a second object position likelihood that is an estimated position of the object at each time;
- Object tracking status determination means for determining the tracking status information of the object by associating the second observation IDs of two pieces of second observation information in which the same feature quantity was observed at different times;
- Second object ID likelihood determining means for determining a second object ID likelihood of the second observation information based on the tracking status information of the object and the estimated position of the object;
- Association means for calculating a first object association value based on the first object ID likelihood and the first object position likelihood of the object, and a second object association value based on the second object ID likelihood and the second object position likelihood; and
- Object position estimation means for estimating the position of the object based on at least one of (1) the first object ID likelihood, the first object position likelihood, and the first object association value of the object, and (2) the second object ID likelihood, the second object position likelihood, and the second object association value. An object position estimation apparatus is provided.
- with this configuration, a quantity corresponding to the object ID likelihood of the second observation device can be calculated based on the observation information of the first observation device, which is capable of ID identification, so the position of an object can be estimated even when the second observation device has no ID identification function.
- in other words, the object ID likelihood can be determined based on information from sources other than the observation apparatus that cannot identify object IDs. Therefore, the position of the object can be estimated even when that observation apparatus has no object ID identification function.
- FIG. 1A is a block diagram showing a configuration of an object position estimation system according to the first embodiment of the present invention
- FIG. 1B is a block diagram showing a configuration of an object position estimation apparatus according to a modification of the first embodiment of the present invention
- FIG. 2 is a diagram illustrating an operation example of the Kalman filter used in the object position estimation system according to the first embodiment of the present invention
- FIG. 3 is a diagram illustrating a room as a living space that is an observation target in the object position estimation system according to the first embodiment of the present invention
- FIG. 4 is a diagram showing an example of a person ID conversion table of the object position estimation system according to the first embodiment of the present invention
- FIG. 5 is a diagram illustrating an example of an output history of the first observation device of the object position estimation system according to the first embodiment of the present invention
- FIG. 6 is a diagram illustrating an example of an output history of the second observation apparatus of the object position estimation system according to the first embodiment of the present invention
- FIG. 7 is a diagram showing an example of an output history of the object position estimation means of the object position estimation system according to the first embodiment of the present invention
- FIG. 8 is a diagram showing an example of an output history of the second object ID likelihood determining means of the object position estimation system according to the first embodiment of the present invention
- FIG. 9A is a diagram showing an example of the actual position of a person at time 2008/09/02_12:00:00 in the room to be observed by the object position estimation system according to the first embodiment of the present invention;
- FIG. 9B is a diagram showing an example of the actual position of a person at time 2008/09/02_12:00:01 in the room to be observed by the object position estimation system according to the first embodiment of the present invention;
- FIG. 9C is a diagram showing an example of the actual position of a person at time 2008/09/02_12:00:02 in the room to be observed by the object position estimation system according to the first embodiment of the present invention;
- FIG. 10A is a diagram showing an example of a person detection position (observation position) at time 2008/09/02_12:00:00 in the room to be observed by the object position estimation system according to the first embodiment of the present invention;
- FIG. 10B is a diagram showing an example of a person detection position (observation position) at time 2008/09/02_12:00:01 in the room to be observed by the object position estimation system according to the first embodiment of the present invention;
- FIG. 10C is a diagram showing an example of a person detection position (observation position) at time 2008/09/02_12:00:02 in the room to be observed by the object position estimation system according to the first embodiment of the present invention;
- FIG. 11A is a diagram showing an example of the initial position of a person (a position before application of observation values) in the room to be observed when the object position estimation system according to the first embodiment of the present invention is activated;
- FIG. 11B is a diagram showing an example of the estimated position of a person (a position after application of observation values) at time 2008/09/02_12:00:00 in the room to be observed by the object position estimation system according to the first embodiment of the present invention;
- FIG. 11C is a diagram showing an example of the initial position of a person (a position before application of observation values) at time 2008/09/02_12:00:01 in the room to be observed by the object position estimation system according to the first embodiment of the present invention;
- FIG. 11D is a diagram showing an example of the estimated position of a person (a position after application of observation values) at time 2008/09/02_12:00:01 in the room to be observed by the object position estimation system according to the first embodiment of the present invention;
- FIG. 11E is a diagram showing an example of the initial position of a person (a position before application of observation values) at time 2008/09/02_12:00:02 in the room to be observed by the object position estimation system according to the first embodiment of the present invention;
- FIG. 11F is a diagram showing an example of the estimated position of a person (a position after application of observation values) at time 2008/09/02_12:00:02 in the room to be observed by the object position estimation system according to the first embodiment of the present invention;
- FIG. 12A is a diagram showing a distance between human detection positions at a time T in a room to be observed by the object position estimation system according to the first embodiment of the present invention
- FIG. 12B is a diagram showing a distance between human detection positions at a time T + 1 in a room to be observed by the object position estimation system according to the first embodiment of the present invention
- FIG. 13 is a diagram showing a human detection position at time T + 2 in a room to be observed by the object position estimation system according to the first embodiment of the present invention
- FIG. 14 is a flowchart showing processing of the first observation apparatus in a room to be observed by the object position estimation system according to the first embodiment of the present invention
- FIG. 15 is a flowchart showing processing of the second observation apparatus in a room to be observed by the object position estimation system according to the first embodiment of the present invention
- FIG. 16 is a flowchart showing processing of the object position estimation system according to the first embodiment of the present invention
- FIG. 17 is a diagram for explaining the observation state of the camera when two persons having the same color feature quantity pass by each other in the object position estimation system according to the first embodiment of the present invention;
- FIG. 18 is a diagram illustrating an example of setting a reference time in the object position estimation system according to the first embodiment of the present invention.
- FIG. 19A is a diagram showing an example of an environment map (environment map information) provided in the object position estimation means in the object position estimation system according to the first embodiment of the present invention;
- FIG. 19B is a diagram illustrating an example of an environment map provided in the camera in the object position estimation system according to the first embodiment of the present invention.
- according to a second aspect, the object tracking status determination means further outputs a tracking success likelihood indicating the probability that tracking of the object has succeeded and a tracking failure likelihood indicating the probability that tracking of the object has failed, and the second object ID likelihood determining means sets, as the second object ID likelihood of the object, the sum of (a) the association value calculated at the previous detection of the object multiplied by the tracking success likelihood and (b) the tracking failure likelihood divided by the number of all objects to be detected. The object position estimation system according to the first aspect is provided with this configuration.
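One reading of this aspect, offered as an illustrative sketch only: the second object ID likelihood carries the previous association value forward weighted by the tracking-success likelihood, while the tracking-failure likelihood is spread uniformly over all N objects to be detected:

```python
def second_id_likelihood(prev_assoc, p_track_success, p_track_fail, n_objects):
    """Second object ID likelihood, per one reading of the second aspect.

    prev_assoc:      association value from the previous detection of the object
    p_track_success: likelihood that tracking of the object succeeded
    p_track_fail:    likelihood that tracking of the object failed
    n_objects:       number of all objects to be detected
    """
    # Carried-over evidence when tracking holds, plus an uninformative
    # uniform share when the track may have been lost.
    return prev_assoc * p_track_success + p_track_fail / n_objects

# If tracking is certain to have succeeded, the previous association
# carries over unchanged:
lik = second_id_likelihood(prev_assoc=0.8, p_track_success=1.0,
                           p_track_fail=0.0, n_objects=3)
```

At the other extreme, if tracking has certainly failed, the likelihood collapses to the uninformative uniform value 1/N, i.e. the camera track no longer tells us which object this observation belongs to.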
- in another aspect, the second observation device tracks the second observation information by detecting two pieces of the second observation information having the same feature quantity but different times.
- in another aspect, the association means associates the object detected by the second observation device based on the ID of the object and the position of the object estimated by the object position estimation means.
- in another aspect, an environment map (environment map information) is provided in which is recorded entrance information including the position of an entrance through which a person in the environment enters and exits, blind spot information of the first observation device, or blind spot information of the second observation device. An object position estimation system according to any one of the first to third aspects is thus provided.
- in another aspect, the object tracking status determination means further determines the probability that a plurality of overlapping objects are detected as one object. An object position estimation system according to one of the above aspects is thus provided.
- FIG. 1A is a diagram showing a configuration of an object position estimation system according to the first embodiment of the present invention.
- the object position estimation system includes a first observation apparatus 101, a second observation apparatus 102, second object ID likelihood determining means (second object ID likelihood determination unit) 107, object position estimating means (object position estimation unit) 108, and association means (association unit) 109.
- the first observation apparatus 101 includes first object position likelihood determining means (first object position likelihood determination unit) 103, a first detection unit 101a functioning as a first observation unit, first object ID likelihood determining means (first object ID likelihood determination unit) 104, and a first internal storage unit 110.
- the second observation apparatus 102 includes a second detection unit 102a that functions as a second observation unit, an image processing unit 102b, second object position likelihood determining means (second object position likelihood determination unit) 105, object tracking status determination means (object tracking status determination unit) 106, and a second internal storage unit 111.
- As a modification, as shown in FIG. 1B, the components of the object position estimation system according to the first embodiment of the present invention other than the first detection unit 101a functioning as the first observation unit, the first timer 101t, the second detection unit 102a functioning as the second observation unit, and the second timer 102t may be configured as an object position estimation device 99.
- If the first detection unit 101a and the first timer 101t are added to constitute the first observation apparatus 101, and the second detection unit 102a, the second timer 102t, and the image processing unit 102b are added to constitute the second observation apparatus 102, the result is the configuration of the object position estimation system according to the first embodiment of FIG. 1A.
- the object position estimation device 99 includes, as main components, first object position likelihood determining means (first object position likelihood determination unit) 103, first object ID likelihood determining means (first object ID likelihood determination unit) 104, second object position likelihood determining means (second object position likelihood determination unit) 105, object tracking status determination means (object tracking status determination unit) 106, second object ID likelihood determining means (second object ID likelihood determination unit) 107, association means (association unit) 109, and object position estimating means (object position estimation unit) 108.
- the object position estimation device 99 may include a first storage unit 110m and a second storage unit 111m.
- information from the first detection unit 101a functioning as the first observation unit and from the first timer 101t is input to the first object position likelihood determining means (first object position likelihood determination unit) 103, and may also be stored in the first storage unit 110m.
- information from the first object position likelihood determining unit (first object position likelihood determining unit) 103 is input to the first object ID likelihood determining unit (first object ID likelihood determining unit) 104.
- information from the first object position likelihood determining means (first object position likelihood determination unit) 103 and information from the first object ID likelihood determining means (first object ID likelihood determination unit) 104 may also be stored in the first storage unit 110m.
- Information from the first object ID likelihood determining unit (first object ID likelihood determining unit) 104 is input to the object position estimating unit (object position estimating unit) 108.
- information from the second detection unit 102a functioning as the second observation unit and the second timer 102t is input to the second object position likelihood determining unit (second object position likelihood determining unit) 105.
- it may be stored in the second storage unit 111m.
- information from the second object position likelihood determining unit (second object position likelihood determining unit) 105 is input to the object tracking state determining unit (object tracking state determining unit) 106.
- Information from the second object position likelihood determining means (second object position likelihood determination unit) 105 and information from the object tracking status determination means (object tracking status determination unit) 106 may also be stored in the second storage unit 111m.
- Information from the object tracking status determination means (object tracking status determination unit) 106 is input to the object position estimating means (object position estimation unit) 108, the association means (association unit) 109, and the second object ID likelihood determining means (second object ID likelihood determination unit) 107.
- the object position estimation apparatus 99 of FIG. 1B having such a configuration can also achieve the same effects as the corresponding means (corresponding part) of the object position estimation system according to the first embodiment of FIG. 1A.
- FIG. 3 shows an exemplary room 301, a concrete example of a living environment, which is an example of an environment including the first observation apparatus 101 and the second observation apparatus 102 that are components of the object position estimation system according to the first embodiment of the present invention.
- a UWB (Ultra Wide Band) tag reader 304 and a stereo camera 305 are respectively installed at different corners of the ceiling of the rectangular room 301.
- the tag reader 304 functions as an example of the first detection unit 101a of the first observation apparatus 101
- the stereo camera 305 functions as an example of the second detection unit 102a of the second observation apparatus 102.
- Both the first observation apparatus 101 and the second observation apparatus 102 respectively detect a person 302 existing in a room 301 that is a specific example of a living environment.
- the first observation apparatus 101 includes a first detection unit 101a that functions as a first observation unit, first object position likelihood determining means 103, first object ID likelihood determining means 104, and a first internal storage unit 110.
- the first observation apparatus 101 determines the first ID likelihood (first object ID likelihood) and the first position likelihood (first object position likelihood) of the person 302 existing in the room 301. Information on the determined first ID likelihood and first position likelihood of the person 302 can be output from the first observation apparatus 101 to the object position estimating means 108.
- the ID likelihood, in the case of the first ID likelihood and of the second ID likelihood described later, is a probabilistic representation of which ID the detected object (for example, the person 302 here) has.
- When the object A is detected by a device that can identify IDs reliably, such as a tag reader, the ID likelihood of the object A is 1, and the probability of its being any other object is 0.
- When the object A is detected by a camera, by contrast, it cannot be reliably identified as a specific object. For example, even when the object A is identified through the camera, there is a possibility that an object other than the object A (the object B or the object C) was actually detected.
- In such a case, for example, the probability that the detected object is the object A (the ID likelihood) is 0.8, the probability that it is the object B is 0.1, and the probability that it is the object C is 0.1; that is, a probability is assigned to every existing object.
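The ID likelihood described above can be sketched as a normalized probability distribution over all known object IDs. The helper below is a minimal illustration, not part of the patent; the object names and probabilities are the illustrative values from the text.

```python
# Sketch: an ID likelihood as a probability distribution over all known
# object IDs (names and probabilities are illustrative only).

def id_likelihood(detected_probs, all_ids):
    """Assign a probability to every existing object; unmentioned IDs get 0."""
    dist = {oid: detected_probs.get(oid, 0.0) for oid in all_ids}
    total = sum(dist.values())
    # Normalize so the probabilities over all objects sum to 1.
    return {oid: p / total for oid, p in dist.items()}

# A tag reader identifies object A with certainty:
tag_reading = id_likelihood({"A": 1.0}, ["A", "B", "C"])
# A camera identification is uncertain, as in the example above:
camera_reading = id_likelihood({"A": 0.8, "B": 0.1, "C": 0.1}, ["A", "B", "C"])
```

With a tag reader, the distribution collapses to probability 1 on the read ID; with a camera, probability mass is spread over all candidate objects.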
- the first object ID likelihood determining unit 104 determines the first ID likelihood of the person 302 detected by the first detection unit 101 a of the first observation apparatus 101.
- the camera 305 as an example of the second detection unit 102a of the second observation apparatus 102 does not have an object ID identification function.
- Therefore, the second object ID likelihood determining means 107, which is separate from the second observation apparatus 102, determines the second ID likelihood of the person 302 detected by the camera 305 and the image processing unit 102b, which are examples of the second detection unit 102a of the second observation apparatus 102.
- the position likelihood, in the case of the first position likelihood and of the second position likelihood described later, is a probabilistic representation of which ID a newly observed object seems to have, based on the position of each object estimated at a certain point in time. For example, assume that the object A exists at position 10 on a one-dimensional coordinate, the object B at position 20, and the object C at position 40, and that in this situation an object is detected at position 0.
- the position likelihood at this time can be obtained by, for example, taking the reciprocal of the distance from the estimated position of each of the objects A, B, and C and normalizing, giving a probability of about 0.58 that the detected object is the object A, about 0.28 that it is the object B, and about 0.14 that it is the object C.
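The reciprocal-distance calculation above can be sketched directly. Note that the exact normalized values work out to roughly 0.57, 0.29, and 0.14 (the text's figures appear to be rounded slightly differently).

```python
# Sketch of the reciprocal-distance position likelihood described above.
# Estimated positions (1-D): A at 10, B at 20, C at 40; detection at 0.

def position_likelihood(detected_pos, estimated_positions):
    inv = {oid: 1.0 / abs(detected_pos - pos)
           for oid, pos in estimated_positions.items()}
    total = sum(inv.values())
    # Normalize the reciprocal distances into probabilities summing to 1.
    return {oid: v / total for oid, v in inv.items()}

probs = position_likelihood(0, {"A": 10, "B": 20, "C": 40})
# probs["A"] ≈ 0.57, probs["B"] ≈ 0.29, probs["C"] ≈ 0.14
```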
- the tag reader 304 can reliably identify the person ID. Therefore, the position where the person 302 (the tag 303 held by the person 302) is detected is likely to be closer to the actual position of the person 302 than an (initial) estimated position of the person determined at random. The second position likelihood may therefore be obtained based on the position where the tag reader 304 detects the person 302 (the tag 303 held by the person 302), rather than on the estimated position of the object.
- For example, up to the Nth observation, the second position likelihood may be determined based on the position where the tag reader 304 detects the person 302 (the tag 303 possessed by the person), and it may be determined based on the estimated position of the object in the Nth and subsequent observations of the camera 305.
- the optimum value of the number N of times the second position likelihood is determined based on the position where the tag reader 304 detects the person 302 (the tag 303 held by the person 302) varies depending on the performance of the observation apparatus. For this reason, it is necessary to estimate an optimal value in advance by a preliminary experiment or the like.
- the first position likelihood is determined by the first object position likelihood determining unit 103 of the first observation apparatus 101.
- the second position likelihood is determined by the second object position likelihood determining unit 105 of the second observation apparatus 102.
- a tag reader 304 can be used as the first detection unit 101a of the first observation apparatus 101.
- the first object position likelihood determining unit 103 determines the first position likelihood (first object position likelihood) of the person 302 detected by the first detection unit 101a of the first observation apparatus 101.
- the first object position likelihood determining means 103 can determine the first position likelihood of the person 302 using, for example, the principle of three-point surveying (trilateration).
- the installation position of each wireless tag reader itself is stored in advance in the internal storage unit of the first object position likelihood determining unit 103 or the first internal storage unit 110.
- the first object position likelihood determining means 103 draws, for each wireless tag reader, a spherical surface centered on the installation position of that wireless tag reader. More specifically, the first object position likelihood determining means 103 draws a spherical surface whose radius is calculated from the detected position and the installation position. The first object position likelihood determining means 103 then takes the position where the spherical surfaces overlap most as the position where the tag 303 owned by the person 302 exists, and hence as the position where the person 302 exists.
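The sphere-intersection idea can be sketched in two dimensions, where the spheres become circles. This is a hedged illustration, not the patent's implementation: the reader positions and distances below are made-up values, and a real system would work in 3-D with noisy radii.

```python
import math

# Hedged 2-D trilateration sketch: locate a tag from three readers'
# distance estimates by intersecting circles (the text describes spheres).

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Intersect three circles by linearizing the distance equations."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the first circle equation from the other two yields a
    # linear system in (x, y).
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Readers at three room corners; the tag is actually at (1, 2).
pos = trilaterate((0, 0), math.sqrt(5),
                  (4, 0), math.sqrt(13),
                  (0, 4), math.sqrt(5))
```

With noisy radii the circles do not meet in a single point, which is why the patent speaks of the position where the spheres "overlap most" rather than an exact intersection.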
- the first object ID likelihood determining means 104 determines the first ID likelihood (first object ID likelihood) of the person 302 detected by the first detection unit 101a of the first observation apparatus 101.
- When the tag reader 304 is used as the first detection unit 101a of the first observation apparatus 101, recording the ID of the person 302 in the tag 303 allows the tag reader 304 to read the ID of the person 302 directly from the tag 303. Thereby, the probability that the first ID likelihood assigns to the ID of the person 302 can be 1.
- a person ID conversion table that can determine the first ID likelihood of the person 302 from the tag ID may be used.
- the person ID conversion table may be recorded in the first internal storage unit 110 of the first observation apparatus 101. Alternatively, it may be recorded in an external database or the like, and the first object ID likelihood determining means 104 may acquire the necessary information from the external database and determine the first ID likelihood of the person 302 as necessary.
- An example of human detection by the first observation apparatus 101 is shown in FIG. 5, which is an output example of the tag reader 304 with an observation period of 1 second.
- For each detection, the observation ID, the time when the person 302 was detected, the position (xy coordinates) where the person 302 was detected, and the ID of the tag owned by the person 302 are output.
- the first detection unit 101a of the first observation apparatus 101 is the tag reader 304.
- In step S1401, the first detection unit 101a detects the tag 303 existing in the room 301, which is a specific example of the environment, and detects the ID and position of the tag 303. Based on the position of the tag 303, the first object position likelihood determining means 103 determines the first position likelihood.
- In step S1402, the first detection unit 101a detects the ID of the person 302 holding the tag 303, and the first object ID likelihood determining means 104 determines the person's first ID likelihood.
- In step S1403, the first ID likelihood and the first position likelihood of the person are output from the first observation apparatus 101 to the object position estimating means 108.
- the second observation apparatus 102 includes a second detection unit 102a that functions as a second observation unit, an image processing unit 102b, second object position likelihood determining means 105, object tracking status determination means 106, and a second internal storage unit 111.
- the second observation apparatus 102 acquires the second position likelihood (second object position likelihood) of the person 302 existing in the room 301 and the tracking status information of the person 302.
- the second observation device 102 can output to the second object ID likelihood determination means 107, the object position estimation means 108, and the association means 109, respectively.
- a camera 305 is used as an example of the second detection unit 102a of the second observation apparatus 102.
- the background difference method can be used in the image processing unit 102b.
- the method is as follows.
- the image processing unit 102b compares background image data of the environment without the person 302 (for example, image data of the room 301 captured in advance by the camera 305) with the current image data captured by the camera 305. The image processing unit 102b then extracts an area with differing pixel values as a difference area, and detects that difference area as the person 302.
- If the difference area is sufficiently small relative to the person 302, the image processing unit 102b may determine that the difference area is not the person 302.
- the case where the difference area is sufficiently small with respect to the person 302 may be a case where the number of pixels in the difference area is equal to or less than a threshold value set in advance based on the minimum number of pixels that can be recognized as the person 302.
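The background-difference method and the small-area rejection can be sketched as follows. This is a hedged toy illustration: the frames are tiny nested lists rather than camera images, and the threshold values are made up.

```python
# Minimal background-difference sketch on grayscale frames represented as
# nested lists. pixel_thresh and min_area are illustrative assumptions.

def detect_person(background, current, pixel_thresh=30, min_area=3):
    """Return the difference-area pixels and their centroid, or None when
    the difference area is too small to be the person."""
    diff = [(x, y)
            for y, row in enumerate(current)
            for x, v in enumerate(row)
            if abs(v - background[y][x]) > pixel_thresh]
    if len(diff) < min_area:
        return None  # difference area too small: not the person
    cx = sum(x for x, _ in diff) / len(diff)
    cy = sum(y for _, y in diff) / len(diff)
    return diff, (cx, cy)

bg = [[10] * 5 for _ in range(5)]
cur = [row[:] for row in bg]
for (x, y) in [(1, 1), (2, 1), (1, 2), (2, 2)]:  # a 2x2 "person" blob
    cur[y][x] = 200
result = detect_person(bg, cur)
```

The centroid returned here is the barycentric position of the difference area, which is exactly what the second object position likelihood determining means 105 uses in the next paragraph.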
- the second object position likelihood determining means 105 determines the second position likelihood (second object position likelihood) of the person 302 detected by the second detection unit 102a and the image processing unit 102b of the second observation apparatus 102. It is assumed that the camera 305, as an example of the second detection unit 102a of the second observation apparatus 102, is installed so as to look down vertically on the floor from the ceiling. At this time, for example, the second object position likelihood determining means 105 can determine the second position likelihood of the person 302 based on the barycentric position of the difference area acquired by the image processing unit 102b.
- the object tracking status determination unit 106 determines tracking status information (tracking status information) of the person 302 detected by the second detection unit 102a and the image processing unit 102b of the second observation apparatus 102.
- When the camera 305 is used as an example of the second detection unit 102a of the second observation apparatus 102, tracking of the person 302 can be realized by, for example, storing the color distribution of the difference area acquired by the image processing unit 102b in the second internal storage unit 111. This tracking will be described using the human detection history database stored in the second internal storage unit 111 shown in FIG.
- the second observation information with observation ID OBS_CAM_001 represents that the camera 305 detected the person 302 at the position (150, 410) at time 2008/09/02_12:00:00. Further, "red" is recorded as the color feature amount, indicating that, when the image processing unit 102b analyzed the color distribution of the difference area, the red component was the largest.
- the object tracking status determination means 106 determines, as the tracking status information, that OBS_CAM_004 is an observation obtained by tracking OBS_CAM_001.
- the object tracking situation determination means 106 determines the tracking situation information that the person has been detected for the first time.
- an RGB component ratio or the like may be used as the color feature amount. It is assumed that the second observation apparatus 102 includes a second timer 102t for acquiring observation period and time information.
- Suppose instead that OBS_CAM_004 could not be obtained.
- In that case, the object tracking status determination means 106 may determine that OBS_CAM_007 is a person detected for the first time. Alternatively, when the human detection performance of the camera 305 is low, the object tracking status determination means 106 may determine that a detection was missed in the observation preceding the one in which OBS_CAM_007 was obtained, and may further determine that OBS_CAM_001, which has the same color feature amount "red" among the earlier observation values, is the same person as OBS_CAM_007.
- the object tracking status determination means 106 determines that the person 302 detected as OBS_CAM_004 is the same person as the person 302 detected as OBS_CAM_001. However, even if OBS_CAM_001 and OBS_CAM_004 have the same color feature amount, the object tracking status determination means 106 may determine that a different person has been detected, based on the difference in detection time and the difference in detection position. For example, such a determination is made when it is clear that a person walking could not move from the detection position (150, 401) of OBS_CAM_001 to the detection position (320, 390) of OBS_CAM_004 within the difference in detection time.
- As a judgment criterion, for example, the two observations are not associated when the distance between the detection position (150, 401) and the detection position (320, 390) is larger than the walking distance given by the product of the time difference and the maximum walking speed of a person (for example, 5 m/s).
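The walking-speed criterion can be sketched as a simple gate. This is a hedged illustration: the patent does not state the coordinate units, so centimeters are assumed here, and the time difference used below is illustrative.

```python
import math

# Hedged sketch of the walking-speed association rule: two observations
# are associated only if the implied speed is physically plausible.
# Units are an assumption (positions in cm, speed in cm/s).

MAX_WALK_SPEED = 500.0  # 5 m/s, expressed in cm/s

def can_associate(pos_a, pos_b, dt_seconds, max_speed=MAX_WALK_SPEED):
    dist = math.hypot(pos_b[0] - pos_a[0], pos_b[1] - pos_a[1])
    return dist <= max_speed * dt_seconds

# OBS_CAM_001 -> OBS_CAM_004: about 170 cm of movement is plausible walking.
ok = can_associate((150, 401), (320, 390), 2.0)
# A 100 m jump in one second is rejected even with matching features.
bad = can_associate((0, 0), (10000, 0), 1.0)
```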
- In some cases, the object tracking status determination means 106 cannot determine which of the previously obtained observation IDs the observation ID obtained this time is tracking.
- the object tracking situation determination means 106 may consider a human motion model.
- FIG. 17 shows the observation state of the camera 305 when two people having the same color feature amount pass each other. Observation values of the tag reader 304 should originally also be obtained, but they are omitted here.
- The object tracking status determination means 106 determines that OBS_CAM_103 is an observation value obtained by tracking OBS_CAM_101, which has the same color feature amount and is within a distance that can be covered within one second. Similarly, the object tracking status determination means 106 determines that OBS_CAM_104 is an observation value obtained by tracking OBS_CAM_102, which has the same color feature amount and is within a distance that can be covered within one second.
- In this way, observation IDs can be associated using only the feature amount of the object, and the resulting association can be handled as the tracking status information.
- Alternatively, the observation IDs may be associated in consideration of not only the feature amount but also the position where the object exists at each time. For example, a rule such as not performing association, even when the feature amounts are the same, if the person would have had to move 100 m or more in one second, may be stored in advance so that the object tracking status determination means 106 can apply it.
- Here, the object tracking status determination means 106 determines that OBS_CAM_103 is traveling in the -X direction at a speed of 2 m/s and that OBS_CAM_104 is traveling in the +X direction at a speed of 2 m/s.
- Both OBS_CAM_103 and OBS_CAM_104 have the same color feature amount as OBS_CAM_105 and lie within a distance that can be covered within one second, so both are candidate tracking sources of OBS_CAM_105.
- Therefore, the object tracking status determination means 106 refers to the observation history information of OBS_CAM_103 and OBS_CAM_104 in the human detection history database of the second internal storage unit 111, and determines which observation value is the tracking source of OBS_CAM_105.
- Since OBS_CAM_103 was traveling in the -X direction at a speed of 2 m/s, the object tracking status determination means 106 can determine that its next observation (the observation at 12:00:03) is highly likely to be obtained at coordinates (550, 350). Likewise, since OBS_CAM_104 was traveling in the +X direction at a speed of 2 m/s, the object tracking status determination means 106 can determine that its next observation is highly likely to be obtained at coordinates (450, 250).
- Because OBS_CAM_105 was actually obtained at coordinates (550, 350), the object tracking status determination means 106 can determine that it is an observation value obtained by tracking OBS_CAM_103. Similarly, the object tracking status determination means 106 can determine that OBS_CAM_106 is an observation value obtained by tracking OBS_CAM_104.
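The disambiguation above, resolving two same-colored tracks with a constant-velocity motion model, can be sketched as follows. The track histories and coordinates below are illustrative assumptions chosen so the predictions land on the (550, 350) and (450, 250) values from the text; they are not the actual FIG. 17 data.

```python
# Sketch: extrapolate each track's next position from its last two
# observations, then assign a new observation to the nearest prediction.
# Coordinates and histories are illustrative, not the patent's data.

def predict(track):
    """Constant-velocity extrapolation from the last two positions."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    return (2 * x1 - x0, 2 * y1 - y0)

def assign(observation, tracks):
    """Return the name of the track whose predicted position is nearest."""
    def sq_dist(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(tracks,
               key=lambda name: sq_dist(predict(tracks[name]), observation))

tracks = {
    "OBS_CAM_103": [(750, 350), (650, 350)],  # moving in the -X direction
    "OBS_CAM_104": [(250, 250), (350, 250)],  # moving in the +X direction
}
winner = assign((550, 350), tracks)  # the new observation, OBS_CAM_105
```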
- the human detection history database may be recorded in the second internal storage unit 111 of the second observation apparatus 102 as described above.
- Alternatively, the database may be recorded in an external database or the like, and the object tracking status determination means 106 may acquire the necessary object tracking status information from the external database as necessary.
- the observation periods of the first observation apparatus 101 and the second observation apparatus 102 need not be the same.
- the second detection unit 102a of the second observation apparatus 102 will be described as the camera 305.
- In step S1501, the second detection unit 102a detects the person 302 existing in the room 301, which is a specific example of the environment, and the second object position likelihood determining means 105 determines the second position likelihood based on the position detected by the image processing unit 102b.
- In step S1502, the information detected by the second detection unit 102a and the image processing unit 102b is recorded by them in the human detection history database of the second internal storage unit 111.
- In step S1503, the object tracking status determination means 106 refers to the human detection history database and determines the tracking status of the person 302 detected by the second detection unit 102a and the image processing unit 102b.
- In step S1504, the second position likelihood and the tracking status information described above are output from the second observation apparatus 102 to the second object ID likelihood determining means 107, the object position estimating means 108, and the association means 109, respectively.
- the object position estimation means 108 includes the first ID likelihood and the first position likelihood of the person 302 determined (detected) by the first observation apparatus 101, and the second of the person 302 determined (detected) by the second observation apparatus 102. Based on the position likelihood and the second ID likelihood determined by the second object ID likelihood determining means 107, the position of the person 302 is estimated.
- As the position estimation method, when the association means 109 receives the first ID likelihood and the first position likelihood, the association means 109 calculates an association value based on the received first ID likelihood and first position likelihood of the object.
- Based on the first ID likelihood and the first position likelihood of the object received from the first observation apparatus 101, the second position likelihood of the object received from the second observation apparatus 102, and the second ID likelihood of the object received from the second object ID likelihood determining means 107, the association means 109 probabilistically determines the ID of the detected object.
- the detection of the object is performed by the first detection unit 101a of the first observation apparatus 101, the second detection unit 102a of the second observation apparatus 102, and the image processing unit 102b.
- the first observation device 101 and the second observation device 102 are collectively referred to as “observation device”, and the first ID likelihood and the second ID likelihood are collectively referred to as “ID likelihood”.
- the first position likelihood and the second position likelihood are collectively referred to as "position likelihood".
- The association value is a value representing the degree of association between the actual object and the ID likelihood and position likelihood information observed by the observation apparatus (the second ID likelihood being the output of the second object ID likelihood determining means 107).
- ID likelihood and the position likelihood of the object received from the observation apparatus are values that probabilistically indicate which ID of the object is obtained by detecting the object.
- In other words, the ID likelihood and the position likelihood of an object are values representing the certainty that the observation information (observation data) is information (data) obtained by observing that object.
- Here, the association value is expressed as the product of the ID likelihood and the position likelihood. Note that the second ID likelihood of the object detected by the second observation apparatus 102, which the association means 109 uses to calculate the association value, is calculated not by the second observation apparatus 102 but by the second object ID likelihood determining means 107.
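The product form of the association value can be sketched in a few lines; the numbers reuse the illustrative ID and position likelihoods from the earlier examples.

```python
# Sketch: the association value as the product of ID likelihood and
# position likelihood, per candidate object (numbers are illustrative).

def association_values(id_likelihood, position_likelihood):
    return {oid: id_likelihood[oid] * position_likelihood[oid]
            for oid in id_likelihood}

id_lik = {"A": 0.8, "B": 0.1, "C": 0.1}      # e.g. the camera example above
pos_lik = {"A": 0.57, "B": 0.29, "C": 0.14}  # e.g. reciprocal-distance values
assoc = association_values(id_lik, pos_lik)
best = max(assoc, key=assoc.get)  # the observation most likely belongs to "A"
```

A high association value thus requires both a confident identification and a position close to the object's estimate, which is what lets it weight the position update later on.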
- the position of the person 302 is estimated by the object position estimation unit 108 based on the calculation result of the association unit 109 and the tracking status information of the object.
- The end of the calculation of the association value by the association means 109 can be determined by, for example, inputting the calculation result from the association means 109 to the object position estimating means 108.
- When the calculation result is input, the object position estimating means 108 determines that the calculation of the association value has finished, and then estimates the position of the person 302 based on the calculation result of the association means 109 and the tracking status information of the object.
- a Bayesian estimation framework represented by a Kalman filter or the like can be used in the object position estimation means 108.
- the position of the person 302 is estimated by the object position estimating means 108.
- the update amount of the position of the person 302 is weighted by the association value.
- the weighting information based on the association value is output from the association unit 109 to the object position estimation unit 108.
- the higher the association value, the larger the update amount of the position of the object.
- In other words, the contribution to the position update is high for observation data that is likely to be observation data of that object.
- The Kalman filter estimates the most likely state of the system on the assumption that both the information on the state of the system (for example, the position of an object in the first embodiment of the present invention) and the observation data (observation information) of the first observation apparatus 101 and the second observation apparatus 102 contain noise. In other words, the most probable state is estimated from the candidate states that the system can take.
- FIG. 2 shows an example of using the Kalman filter for object position estimation processing.
- the vertical axis represents probability and the horizontal axis represents position.
- the second observation device 102 can obtain the observed value 203 obtained by (Equation 2).
- A represents a motion model of an object
- x represents an object position
- v represents process noise generated during movement
- y represents an observation value
- H represents an observation model that associates the object position x and the observation value y
- w represents observation noise
- t represents time.
- N (0, Q) represents a Gaussian distribution with an average of 0 and a variance of Q.
- N (0, R) similarly represents a Gaussian distribution with an average of 0 and a variance of R.
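From the symbol definitions above, (Equation 1) and (Equation 2) are presumably the standard linear state and observation models; a reconstruction consistent with those definitions (hedged, since the equations themselves are not reproduced in this text) is:

```latex
x_t = A x_{t-1} + v, \qquad v \sim N(0, Q) \qquad \text{(Equation 1)}
```

```latex
y_t = H x_t + w, \qquad w \sim N(0, R) \qquad \text{(Equation 2)}
```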
- First, the object position estimating means 108 updates the prior probability distribution 201 (hereinafter referred to as the "prior distribution") relating to the currently obtained position of the object, and obtains the predicted probability distribution 202 (hereinafter referred to as the "predicted distribution").
- the average (position) of the predicted distribution 202 can be obtained by the object position estimating unit 108 using (Expression 5), and the variance of the predicted distribution 202 can be obtained by the object position estimating unit 108 using (Expression 6).
- the notation “x a|b ” represents an estimated value of x at time a based on the information up to time b.
- for example, “x t|t-1 ” in (Expression 5) represents an estimated value of the object position x at time t based on the information up to time t-1.
- P represents the variance of the distribution.
- the posterior distribution 204 is obtained by the object position estimating means 108 from the observed value 203 and the predicted distribution 202.
- the average (position) of the posterior distribution can be obtained by the object position estimating means 108 using (Expression 7), and the variance of the posterior distribution using (Expression 8).
- K is a value called Kalman gain, and is obtained by (Equation 9).
- the Kalman gain is a value that determines the update amount. When the accuracy of the observed value is high (the variance R is very small), the Kalman gain becomes large, increasing the update amount. Conversely, when the accuracy of the prior distribution is high (the variance P is very small), the Kalman gain becomes small, reducing the update amount.
- when the update amount of the position of the article A is weighted using the association value, (Equation 9) may be replaced with (Equation 9A), as described later.
- “D” represents an association value for the article A.
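For illustration only, the scalar predict/update cycle of (Equation 5) through (Equation 9) might be sketched as below; the function name and the reading of (Equation 9A) as a gain scaled by the association value D are assumptions, not the patent's implementation.

```python
# Scalar Kalman filter step: predict with the model x_t = A*x_{t-1} + v,
# v ~ N(0, Q); update with the observation y = H*x + w, w ~ N(0, R).
def kalman_step(x_prev, P_prev, y, A=1.0, H=1.0, Q=1.0, R=1.0, D=1.0):
    # Prediction (Eq. 5, Eq. 6): mean and variance of the predicted distribution.
    x_pred = A * x_prev
    P_pred = A * P_prev * A + Q

    # Kalman gain (Eq. 9); assumed weighting by an association value D in [0, 1]
    # (Eq. 9A) scales down the update for weakly associated observations.
    K = D * P_pred * H / (H * P_pred * H + R)

    # Update (Eq. 7, Eq. 8): posterior mean and variance.
    x_post = x_pred + K * (y - H * x_pred)
    P_post = (1.0 - K * H) * P_pred
    return x_post, P_post

# When R is small (accurate observation), K is large and the update is big;
# when P is small (accurate prior), K is small and the update is small.
x, P = kalman_step(x_prev=0.0, P_prev=4.0, y=2.0, Q=0.0, R=1.0)
print(x, P)  # the estimate moves most of the way toward the observation
```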
- FIG. 7 shows an example of the output history of the object position estimating means 108. It is assumed that the object position estimating means 108 has grasped position error characteristics, such as the standard deviation, of the positions observed by the first observation apparatus 101 and the second observation apparatus 102. In other words, it is assumed here that information on these position error characteristics is input to the object position estimation means 108 and stored in its internal storage unit.
- the second object ID likelihood determining unit 107 determines the second ID likelihood (second object ID likelihood) of the observed value observed by the camera 305 and the image processing unit 102b, based on the object tracking state information of the object tracking state determining unit 106 and the association value obtained by the association unit 109.
- FIG. 7 shows an example of an output diagram of the estimation result of the object position estimation means 108.
- the estimated position is calculated based on the previously estimated position of the person and the observation values of the observation apparatus.
- initially, the previously estimated position of the person is the initial value (in this example, the center of the room), so even when the observation values of the observation apparatus are taken into account, it is difficult to estimate the person's position correctly.
- the association means 109 obtains the association value based on the currently estimated person ID and person position.
- the obtained association value is recorded in the association value database 109a of the internal storage unit built into the association means 109.
- FIG. 8 shows an example of the association value database 109a.
- in the association value database 109a, an observation ID of the camera 305, which is an example of the second detection unit 102a of the second observation apparatus 102, and an association value for each person 302 are recorded.
- the person detected by OBS_CAM_004 has the highest probability of being the person HUM_001, at 0.69, followed by a probability of 0.19 of being the person HUM_002, and the lowest probability, 0.12, of being the person HUM_003.
- association values for each person are recorded for other observation IDs.
- the association value database 109a may be recorded in the internal storage unit of the association unit 109.
- the information may be recorded in an external database or the like, and the association unit 109 may obtain necessary information from the external database and calculate an association value as necessary.
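As an illustration only, the association value database 109a of FIG. 8 could be modeled as a mapping from observation ID to per-person association values; the in-memory structure below is an assumption, with the IDs and values taken from the example above.

```python
# Hypothetical in-memory stand-in for the association value database 109a:
# observation ID -> {person ID: association value}.
association_db = {
    "OBS_CAM_004": {"HUM_001": 0.69, "HUM_002": 0.19, "HUM_003": 0.12},
}

def lookup_association(obs_id):
    """Return the per-person association values recorded for an observation,
    or None if the observation ID has no recorded association value."""
    return association_db.get(obs_id)

values = lookup_association("OBS_CAM_004")
best = max(values, key=values.get)
print(best)  # the person ID with the highest association value
```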
- the second object ID likelihood determining means 107 determines, as the second ID likelihood, the previous association value that the association means 109 obtained for the observed value.
- the association means 109 obtains the current association value based on the currently estimated person ID and person position.
- the obtained association value is recorded in the association value database 109a by the association means 109.
- the camera 305 tracks the same object by associating the previous observation value with the current observation value, and the second observation apparatus 102 outputs the observation value to the second object ID likelihood determining unit 107.
- a random variable T representing the probability that the camera 305 correctly tracks the observed value may be introduced by the second object ID likelihood determining means 107, and the second ID likelihood may be determined by the second object ID likelihood determining means 107 according to (Equation 10).
- in (Equation 10), p id is the second ID likelihood, r is the association value, N is the number of people in the room 301, and t is time.
- the object tracking situation determination unit 106 outputs a tracking success likelihood indicating the probability of successful tracking of the object and a tracking failure likelihood indicating a probability of failure of tracking the object.
- the second object ID likelihood determining means 107 uses, as the second ID likelihood p id of the object, the sum of the association value calculated at the previous detection of the object multiplied by the tracking success likelihood, and the tracking failure likelihood divided by the number of all objects to be detected.
- the tracking success likelihood is a random variable T
- the tracking failure likelihood is (1-T).
- when tracking succeeds, the ID likelihood of the object is estimated.
- the estimated ID likelihood is assigned the association value of the previous observation value, weighted by the tracking success likelihood. This corresponds to the term (r t-1 T t ) in (Equation 10).
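Under the reading above, (Equation 10) combines the carried-over association value with a uniform fallback; a minimal sketch (function and variable names assumed):

```python
def second_id_likelihood(r_prev, T, N):
    """Second ID likelihood p_id per (Equation 10): the previous association
    value r_{t-1} carried over with the tracking success likelihood T, plus a
    uniform share of the tracking failure likelihood (1 - T) over the N
    people in the room."""
    return r_prev * T + (1.0 - T) / N

# With perfect tracking (T = 1) the previous association value is reused;
# with complete failure (T = 0) the likelihood falls back to uniform 1/N.
print(second_id_likelihood(0.69, 1.0, 3))  # 0.69
print(second_id_likelihood(0.69, 0.0, 3))  # 0.333...
```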
- as a method of obtaining the random variable T, a preliminary experiment may be performed to determine the probability that tracking actually succeeds, and that information may be stored, for example, in the internal storage unit of the second object ID likelihood determining means 107. Further, when a plurality of observation values that cannot be identified by color feature amounts, as described for the object tracking method of the second observation apparatus 102, are dense and the probability of tracking failure increases, processing such as lowering the value of the random variable T in the second object ID likelihood determining means 107 may be performed.
- the second object ID likelihood determining means 107 can determine whether “a plurality of observation values that cannot be identified by color feature amounts or the like are dense,” for example, as follows.
- when the second object ID likelihood determining means 107 determines that the detected positions are close to one another and, for example, all three people have the same color feature amount, processing such as lowering the value of the random variable T can be performed.
- FIG. 12B shows the positions of the person H1 and the person H2 detected by the camera 305 at time T + 1 and the distance between the detected positions of the person H1 and the person H2.
- the object tracking situation determination unit 106 determines that the distance between the H1 person and the H2 person has decreased from 600 cm to 200 cm.
- the object tracking state determination unit 106 determines that the camera 305 can track the person H1 and the person H2.
- FIG. 13 shows the human detection status of the camera 305 at time T + 2. According to FIG. 13, only the person H1 is detected at time T + 2. It can be considered that the person of H2 is hidden in the blind spot caused by the presence of the person of H1.
- the variable r in Expression 10 may be an average value of the association value of the person H1 at time T + 1 and the association value of the person H2 at time T + 1.
- the second object ID likelihood determining means 107 can determine whether the person H2 is hidden in the blind spot caused by the presence of the person H1 by setting a threshold on the distance between the human detection positions. That is, if the detection position B of the person H2 lies within the threshold distance of the detection position A of the person H1, and the person H1 is then no longer detected, the second object ID likelihood determining means 107 determines that the person H1 is concealed by the person H2. The same determination is made by the second object ID likelihood determining means 107 for the person H2 or other persons.
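A sketch of this threshold-based concealment check; the function name, coordinate layout, and the 100 cm threshold are assumptions for illustration.

```python
import math

def is_concealed(pos_a, pos_b, detected_a, threshold=100.0):
    """Hypothetical concealment check: if detection position B lies within
    `threshold` (cm) of detection position A and the person last seen at A
    is no longer detected, treat that person as concealed by the other."""
    dist = math.hypot(pos_a[0] - pos_b[0], pos_a[1] - pos_b[1])
    return dist < threshold and not detected_a

# H1 was last seen at (810, 220); H2 is detected 30 cm away; H1 is not detected now.
print(is_concealed((810, 220), (810, 250), detected_a=False))  # True
```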
- FIGS. 9A to 9C show the actual positions of the person 302A, person 302B, and person 302C existing in the room 301.
- FIG. 9A shows the actual positions of the person 302A, person 302B, and person 302C at time 2008/09/02_12:00:00.
- FIG. 9B shows the actual positions of the person 302A, person 302B, and person 302C at time 2008/09/02_12:00:01.
- FIG. 9C shows the actual positions of the person 302A, person 302B, and person 302C at time 2008/09/02_12:00:02.
- FIG. 9 is a diagram for explaining the object position estimating means 108 and the second object ID likelihood determining means 107; the information shown in FIG. 9 is not used in other descriptions of the first embodiment of the present invention.
- FIG. 10A to FIG. 10C show the state of human detection by the tag reader 304 and the camera 305 in the room 301.
- FIG. 10A shows the human detection situation at time 2008/09/02_12:00:00.
- FIG. 10B shows the human detection situation at time 2008/09/02_12:00:01.
- FIG. 10C shows the human detection situation at time 2008/09/02_12:00:02.
- the person 302A of the tag T7 is detected at the position of coordinates (620,100), and the person 302C of the tag T8 is detected at the position of coordinates (620,630) (see FIG. 5).
- three camera person detection positions 1002 are also obtained: one person is detected at coordinates (150,410), one at coordinates (810,220), and one at coordinates (810,650) (see FIG. 6).
- FIGS. 11A to 11F show the positions of the person 302 in the room 301 estimated by the object position estimating means 108.
- FIG. 11A shows the initial values of the estimated positions at time 2008/09/02_12:00:00, when the object position estimation system according to the first embodiment of the present invention is started.
- the initial position is randomly determined, but it may be the center of the room 301 or the like.
- if the object position estimation system possesses an environment map (environment map information) describing entrance/exit information, such as the position of the entrance/exit of the room 301, the entrance/exit of the room 301 may be set as the initial position.
- the environment map may include blind spot information of the first observation device or blind spot information of the second observation device.
- the environment map (environment map information) in which the entrance / exit information is recorded may be stored in the internal storage unit of the object position estimation means 108.
- the environment map (environment map information) in which the camera blind spot information referred to by the object tracking situation determination unit 106 is recorded may be stored in the internal storage unit of the second observation apparatus 102.
- the environment map includes exit / entrance environment map data in which position coordinate information of the entrance / exit is recorded as shown in FIG. 19A, and camera blind spot environment map data in which the blind spot information of the camera 305 is recorded as shown in FIG. 19B.
- the entrance / exit environment map is stored and recorded in the internal storage unit of the object position estimation unit 108, and the camera blind spot environment map is stored and recorded in the internal storage unit of the camera 305.
- the object position estimation unit 108 can determine the initial position by referring to the entrance/exit environment map.
- the blind spot of the camera 305 is expressed as a rectangle with two points recorded in the camera blind spot environment map as diagonal points.
- when the object tracking situation determination means 106 refers to the camera blind spot environment map, it can be seen that both the observed value A and the observed value B are close to “camera blind spot 1”. That is, at time 13:00:01 and time 13:00:02, the object tracking situation determination unit 106 can determine that the observation value may not have been obtained because the person entered the blind spot of the camera. Therefore, the object tracking state determination means 106 can determine that the observed value B is not an observed value observed for the first time but an observed value obtained by tracking the observed value A.
- to judge nearness to the camera blind spot, the object tracking situation determination means 106 can use the relationship between the observation cycle of the camera and the walking speed of the person. For example, assuming that the observation cycle of the camera is once per second and the walking speed of the person is 70 cm per second, the object tracking situation determination means 106 can determine that observation values observed within 70 cm of the camera blind spot are observation values near the camera blind spot.
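The blind-spot rectangle (two diagonal corners, as recorded in the camera blind spot environment map) and the cycle-times-speed rule above might be combined as follows; the function name and the point-to-rectangle distance formulation are assumptions.

```python
def near_blind_spot(obs_pos, blind_rect, cycle_s=1.0, walk_speed_cm_s=70.0):
    """Judge whether an observation lies within reach of a camera blind spot:
    any point within observation-cycle * walking-speed (here 70 cm) of the
    rectangle counts as near the blind spot."""
    (x1, y1), (x2, y2) = blind_rect  # two diagonal corner points
    lo_x, hi_x = min(x1, x2), max(x1, x2)
    lo_y, hi_y = min(y1, y2), max(y1, y2)
    # Distance from the point to the rectangle (0 if the point is inside).
    dx = max(lo_x - obs_pos[0], 0, obs_pos[0] - hi_x)
    dy = max(lo_y - obs_pos[1], 0, obs_pos[1] - hi_y)
    return (dx * dx + dy * dy) ** 0.5 <= cycle_s * walk_speed_cm_s

# A point 20 cm below a hypothetical blind-spot rectangle is "near" it.
print(near_blind_spot((350, 400), ((300, 300), (400, 380))))  # True
```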
- FIG. 11B shows the estimated positions of the persons estimated based on the person information detected by the tag reader 304 and the camera 305 at time 2008/09/02_12:00:00.
- FIG. 11C shows the initial values of the estimated positions at time 2008/09/02_12:00:01.
- FIG. 11D shows the estimated positions of the persons estimated using the person information detected by the tag reader 304 and the camera 305 at time 2008/09/02_12:00:01.
- FIG. 11E shows the initial values of the estimated positions at time 2008/09/02_12:00:02.
- FIG. 11F shows the estimated positions of the persons estimated using the person information detected by the tag reader 304 and the camera 305 at time 2008/09/02_12:00:02.
- the object position estimation apparatus 120 performs operation control.
- the second object ID likelihood determination means 107, the object position estimation means 108, and the association means 109 constitute an object position estimation device 120.
- the object position estimation system will be described as including the object position estimation apparatus 120, a first observation apparatus 101 having a tag reader 304, and a second observation apparatus 102 having a camera 305.
- the tag reader 304 is an example of the first detection unit 101a
- the camera 305 is an example of the second detection unit 102a.
- the object position estimation unit 108 sets the estimated position of the person 302 to an initial position (see FIG. 11A).
- the object position estimation system according to the first embodiment includes a counter in the object position estimation unit 108 for grasping how many times the object position has been updated. When this counter is 0, that is, when the object position estimation system has not been updated, the position of the person 302 is set to the initial position.
- a gate type tag reader 304d that functions as a tag reader of the first observation apparatus 101 is installed at the entrance 301D of the room 301 indicated by a one-dot chain line in FIG.
- when the person 302 enters the room 301 through the entrance 301D, the gate type tag reader 304d can read all the tags of the person 302 who entered the room 301. Therefore, it is assumed that all IDs of the persons 302 existing in the room 301 are known to the first observation apparatus 101. Furthermore, the IDs of the persons 302 existing in the room 301 may be recorded, for example, in the first internal storage unit 110 of the first observation apparatus 101 of the object position estimation system. In an environment where a gate type tag reader cannot be installed, the initial position of a person 302 may be set when the tag reader 304 detects that person 302 for the first time.
- in step S1602, the object position estimation device 120 receives the outputs of the first observation device 101 and the second observation device 102, that is, the observation information.
- the person 302 is detected for the first time by the tag reader 304, the camera 305, and the image processing unit 102b (see FIG. 10A for the detection status).
- the object position estimation apparatus 120 receives observation information.
- Information detected by the tag reader 304 and the camera 305 is output to the object position estimation apparatus 120.
- the observation times of the tag reader 304 and the camera 305 are both described as 2008/09/02_12:00:00, but in practice the observation times may be shifted.
- in that case, reference times are set at intervals of 1 second, and the system may determine that all observation values obtained within an allowable time range from a reference time, for example ±500 msec, are observation values obtained at the same time.
- FIG. 18 shows an example of setting the reference time.
- in the case of the observation situation in FIG. 18, the system determines that OBS_CAM_201 and OBS_TAG_201, which are observed within a range of ±500 msec of the reference time 12:00:01:000, are observed at the same time.
- similarly, the system determines that OBS_CAM_202 and OBS_TAG_202, and likewise OBS_CAM_203 and OBS_TAG_203, are each observed at the same time.
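The grouping by reference time could be sketched as below; the function name, timestamp representation (milliseconds), and rounding to the nearest reference time are assumptions.

```python
def group_by_reference(observations, tolerance_ms=500, interval_ms=1000):
    """Group observations whose timestamps (in ms) fall within +/-tolerance_ms
    of 1-second reference times, treating each group as simultaneous."""
    groups = {}
    for obs_id, t_ms in observations:
        ref = round(t_ms / interval_ms) * interval_ms  # nearest reference time
        if abs(t_ms - ref) <= tolerance_ms:
            groups.setdefault(ref, []).append(obs_id)
    return groups

# A camera observation 120 ms before and a tag observation 300 ms after the
# same reference time end up in one group, i.e. treated as simultaneous.
obs = [("OBS_CAM_201", 880), ("OBS_TAG_201", 1300)]
print(group_by_reference(obs))  # both grouped under reference time 1000
```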
- the human detection information (OBS_CAM_001, OBS_CAM_002, OBS_CAM_003) detected by the camera 305 and the image processing unit 102b of the second detection unit 102a of the second observation apparatus 102 does not include the second ID likelihood that the object position estimation unit 108 requires for processing. Therefore, the person detection information is first output from the second observation apparatus 102 to the second object ID likelihood determination means 107, and the second object ID likelihood determination means 107 determines the second ID likelihood of the person 302 detected by the second observation apparatus 102. In step S1603, it is determined whether or not the human detection information has been output from the second observation apparatus 102 to the second object ID likelihood determination means 107.
- if the person detection information is output from the second observation apparatus 102 to the second object ID likelihood determining means 107, the process proceeds to step S1604; otherwise, the process proceeds to step S1609. In this example, since the camera 305 outputs human detection information, the determination in step S1603 is YES and the process proceeds to step S1604. On the other hand, if the camera 305 had not output human detection information, for example because the person 302 had entered the blind spot of the camera 305, the determination in step S1603 would be NO and the process would proceed to step S1609.
- the second object ID likelihood determining means 107 determines the second ID likelihood for each person detected by the second observation apparatus 102. That is, based on the output of the object tracking situation determining means 106 of the second observation apparatus 102 to the second object ID likelihood determining means 107, the second object ID likelihood determining means 107 determines that all of the detected person information OBS_CAM_001, OBS_CAM_002, and OBS_CAM_003 results from detecting new persons. The second object ID likelihood determining means 107 can make this determination, for example, when no detection information for these persons exists within a certain time before this information, as described above.
- the second object ID likelihood determining means 107 evenly assigns, as the second ID likelihood of each person detected by OBS_CAM_001, OBS_CAM_002, and OBS_CAM_003, the probability of being each person existing in the room 301. That is, the second object ID likelihood determining means 107 sets the probability that the person detected by OBS_CAM_001 is the person 302A to 1/3, the probability of being the person 302B to 1/3, and the probability of being the person 302C to 1/3.
- the second object ID likelihood determining means 107 assigns probabilities to OBS_CAM_002 and OBS_CAM_003 in the same manner.
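The uniform assignment for a newly detected observation could be written as follows (function name assumed):

```python
def uniform_second_id_likelihood(person_ids):
    """For a newly detected observation, assign each person in the room an
    equal second ID likelihood of 1/(number of persons)."""
    p = 1.0 / len(person_ids)
    return {pid: p for pid in person_ids}

# With three persons in the room, each gets probability 1/3.
likelihoods = uniform_second_id_likelihood(["302A", "302B", "302C"])
print(likelihoods)
```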
- in step S1605, the association unit 109 obtains an association value using the person position detected by the camera 305, the second ID likelihood of the person output by the second object ID likelihood determination unit 107, and the person ID and person position estimated by the object position estimation means 108 shown in FIG. 11A. The obtained association value is recorded in the association value database 109a (see FIG. 8).
- in step S1606, the human detection information detected by the tag reader 304 of the first detection unit 101a of the first observation apparatus 101 is output directly from the first observation apparatus 101 to the object position estimation means 108.
- if the human detection information is output from the first observation apparatus 101 to the object position estimation unit 108, the process proceeds to step S1607; if not, the process proceeds to step S1608.
- in this example, since the tag reader 304 outputs the person detection information, the determination in step S1606 is YES and the process proceeds to step S1607.
- if the tag reader 304 had not output the person detection information, for example because the person 302 had entered the blind spot of the tag reader 304, the determination in step S1606 would be NO and the process would proceed to step S1608.
- in step S1609 as well, the human detection information detected by the tag reader 304 of the first detection unit 101a of the first observation apparatus 101 is output directly from the first observation apparatus 101 to the object position estimation means 108.
- the above-described person detection information includes the first ID likelihood and the first position likelihood that the object position estimation unit 108 requires for processing.
- in step S1609, it is determined whether or not the human detection information including the first ID likelihood and the first position likelihood has been output from the first observation apparatus 101 to the object position estimation unit 108. If the person detection information is output from the first observation apparatus 101 to the object position estimation means 108, the process proceeds to step S1610; if not, the process returns to step S1602.
- in this example, since the tag reader 304 outputs the person detection information, the determination in step S1609 is YES and the process proceeds to step S1610.
- if the tag reader 304 had not output the person detection information, for example because the person 302 had entered the blind spot of the tag reader 304, the determination in step S1609 would be NO and the process would return to step S1602.
- in step S1607, the object position estimating unit 108 estimates the position of the person based on the person's first ID likelihood and first position likelihood detected by the tag reader 304 of the first detection unit 101a of the first observation apparatus 101, the second position likelihood of the person detected by the camera 305 of the second detection unit 102a of the second observation apparatus 102, and the second ID likelihood of the person output by the second object ID likelihood determining means 107. Based on this position estimation, the person's ID and position are updated (see FIG. 11B, which updates FIG. 11A). In this example, since both the tag reader 304 and the camera 305 output the person detection information, the process of step S1607 is performed. Thereafter, the process returns to step S1602.
- in step S1608, the object position estimation unit 108 estimates the position of the person based on the second position likelihood of the person detected by the camera 305 of the second detection unit 102a of the second observation apparatus 102 and the second ID likelihood of the person output by the second object ID likelihood determination unit 107. Based on this position estimation, the person's ID and position are updated (see FIG. 11B, which updates FIG. 11A). Thereafter, the process returns to step S1602.
- in step S1610, the object position estimation unit 108 estimates the position of the person using the person's first ID likelihood and first position likelihood detected by the tag reader 304 of the first detection unit 101a of the first observation apparatus 101. Based on this position estimation, the person's ID and position are updated (see FIG. 11B, which updates FIG. 11A). Thereafter, the process returns to step S1602.
- in step S1602, after returning from step S1607, step S1608, or step S1610, the person 302 is detected by the tag reader 304 and the camera 305 at the next time, 2008/09/02_12:00:01 (see FIG. 10B for the detection status and FIGS. 5 and 6 for the detected position coordinates).
- in step S1603, it is determined whether or not the person detection information has been output from the second observation apparatus 102 to the second object ID likelihood determining means 107. If so, the process proceeds to step S1604; otherwise, the process proceeds to step S1609.
- the second object ID likelihood determining means 107 determines the second ID likelihood for each person detected by the second observation apparatus 102. That is, as at time 2008/09/02_12:00:00, at time 2008/09/02_12:00:01 the second object ID likelihood determining means 107 must obtain the second ID likelihood of each person detected by OBS_CAM_004, OBS_CAM_005, and OBS_CAM_006. In this case, however, suppose that the object tracking situation determination means 106 has determined, from the identity of the color feature amounts it outputs, that OBS_CAM_004 is human detection information obtained by tracking OBS_CAM_001.
- similarly, suppose that the object tracking situation determination means 106 has determined that OBS_CAM_005 is human detection information obtained by tracking OBS_CAM_002 and that OBS_CAM_006 is human detection information obtained by tracking OBS_CAM_003. Then, based on this determination by the object tracking situation determination unit 106, the second object ID likelihood determining unit 107 outputs the association value of OBS_CAM_001 recorded in the association value database 109a as the second ID likelihood of the person of OBS_CAM_004. The second ID likelihoods for OBS_CAM_005 and OBS_CAM_006 are output by the second object ID likelihood determining means 107 in the same way.
- more specifically, the second object ID likelihood determining unit 107 receives from the object tracking situation determining unit 106 the observation ID OBS_CAM_004 paired with the observation ID OBS_CAM_001, and the second object ID likelihood determining means 107 reads the corresponding association value from the association value database 109a.
- in this case, a method is required for distinguishing the observation ID for which the ID likelihood is to be obtained from the observation ID whose association value is to be read out. For example, an observation value whose association value is already recorded in the association value database 109a can be determined to be the observation value from which the association value is read.
- in steps S1605, S1606, and S1609, processing similar to that described above is performed.
- in step S1607, the object position estimating unit 108 estimates the position of the person based on the person's first ID likelihood and first position likelihood detected by the tag reader 304 of the first detection unit 101a of the first observation apparatus 101, the second position likelihood of the person detected by the camera 305 of the second detection unit 102a of the second observation apparatus 102, and the second ID likelihood of the person output by the second object ID likelihood determining means 107. Based on this position estimation, the person's ID and position are updated (see FIG. 11D, which updates FIG. 11C). Thereafter, the process returns to step S1602.
- in step S1608, the object position estimation unit 108 estimates the position of the person based on the second position likelihood of the person detected by the camera 305 of the second detection unit 102a of the second observation apparatus 102 and the second ID likelihood of the person output by the second object ID likelihood determination unit 107. Based on this position estimation, the person's ID and position are updated (see FIG. 11D, which updates FIG. 11C). Thereafter, the process returns to step S1602.
- in step S1610, the object position estimation unit 108 estimates the position of the person using the person's first ID likelihood and first position likelihood detected by the tag reader 304 of the first detection unit 101a of the first observation apparatus 101. Based on this position estimation, the person's ID and position are updated (see FIG. 11D, which updates FIG. 11C). Thereafter, the process returns to step S1602.
- the explanation has been given by taking the indoor space of the room 301 as an example.
- the system can be used even outdoors. For example, by installing a gate-type tag reader at a school gate or an entrance that is located between the school building and the ground, it is possible to distinguish between students present in the school building and students present on the ground. Then, by installing a tag reader and a camera on the ground fence or school building, it is possible to estimate where the student is in the ground. If the ground area is large and one tag reader and camera cannot observe the entire ground area, the number of tag readers and cameras may be increased.
- the association value obtained when the object was detected last time can be substituted for the second ID likelihood of the object detected this time by the second detection unit 102a and the image processing unit 102b of the second observation device 102. Thereby, the processing of the object position estimating means 108 can be performed.
- The person positions in FIG. 11A differ greatly from the actual positions of the persons (FIG. 9A) because the initial values are set at random.
- FIG. 11B shows the result of object position estimation based on the person information detected by the first observation apparatus 101 and the second observation apparatus 102 at time 2008/09/02_12:00:00.
- Based on the object ID and the object position detected by the first detection unit 101a of the first observation device 101, the association unit 109 may obtain an association value for the object detected by the second detection unit 102a and the image processing unit 102b of the second observation device 102.
- The tag reader 304 is used as an example of the first detection unit 101a of the first observation apparatus 101.
- The association unit 109 may change the information used to calculate the association value between the period before the object position estimation result of the object position estimation unit 108 converges and the period after convergence.
- Until convergence, an association value for the object detected by the second detection unit 102a and the image processing unit 102b of the second observation apparatus 102 is obtained based on the object ID and the object position detected by the tag reader 304.
- After convergence, an association value for the object detected by the second detection unit 102a and the image processing unit 102b of the second observation apparatus 102 may be obtained.
- When the average variance of all objects falls below a threshold, the association unit 109 may determine that the object positions have converged.
- Alternatively, the association means 109 may grasp the convergence behavior of the object positions in advance through a preliminary experiment (for example, the information that the object positions converge at the Nth observation from the start, where N is an integer greater than 1), and determine that the object positions have not converged from the start of the position estimation system until the Nth observation.
- The object position estimating means 108 includes a counter for keeping track of how many times the object positions have been updated.
- In this way, the association means 109 can obtain the association value using the output information of the first observation apparatus 101. A more accurate association value, that is, a more accurate second ID likelihood, can thereby be calculated.
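The two convergence checks described above (an average-variance threshold, and a fixed observation count N known from a prior experiment, backed by an update counter) could be combined as in the following sketch. This is only an illustration of the logic, not the patented implementation; the class and parameter names (`ConvergenceChecker`, `variance_threshold`, `n_startup_observations`) are assumptions.

```python
# Sketch of the two convergence checks described above:
# (1) average variance of all object position estimates below a threshold,
# (2) a fixed observation count N known from a prior experiment.
# All names here are illustrative assumptions, not from the patent.

class ConvergenceChecker:
    def __init__(self, variance_threshold=0.25, n_startup_observations=10):
        self.variance_threshold = variance_threshold          # e.g. 0.25 m^2
        self.n_startup_observations = n_startup_observations  # "N" from a prior experiment
        self.update_count = 0                                 # counter held by the estimator

    def record_update(self):
        """Called each time the object positions are updated."""
        self.update_count += 1

    def has_converged(self, variances):
        """variances: per-object variance of the estimated positions."""
        if self.update_count < self.n_startup_observations:
            return False  # before the Nth observation, assume not converged
        avg_variance = sum(variances) / len(variances)
        return avg_variance < self.variance_threshold


checker = ConvergenceChecker(variance_threshold=0.25, n_startup_observations=3)
for _ in range(3):
    checker.record_update()
print(checker.has_converged([0.1, 0.2, 0.15]))  # → True (average 0.15 < 0.25)
```

Keeping the counter inside the estimator, as the text suggests, lets the association step switch information sources with a single query instead of re-deriving convergence from the particle set each cycle.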
- The first detection unit 101a that functions as the first observation unit, the first object position likelihood determination unit 103, the first object ID likelihood determination unit 104, the units that function as the second observation unit, or any part thereof, can each be implemented in software. Therefore, for example, a computer program having steps constituting the control operations of each embodiment of the present specification can be stored readably in a recording medium such as a storage device (a hard disk or the like), and each function or step described above can be executed by reading the computer program into a temporary storage device of the computer (a semiconductor memory or the like) and executing it using the CPU.
- The object position estimation system, object position estimation apparatus, object position estimation method, and object position estimation program according to the present invention can estimate the position of an object even when an observation apparatus without an object ID identification function is included.
- Surveillance camera systems and the like are already in widespread use. However, it is common for images of the monitored area to be accumulated and for subjects to be identified by manual browsing. If persons and their positions can be specified without human intervention according to the present invention, automatic acquisition and management of a person's position or flow line is realized, and a person management system with unprecedented features can be provided as a security application, which is very useful. The invention can also be applied to a system for managing the positions of articles such as containers at a distribution site.
Abstract
Description
a first observation unit that observes the object at different times and acquires, at each time, first observation information including the position and ID of the object;
first object position likelihood determination means that determines, based on the first observation information observed by the first observation unit, a first object position likelihood, which is the estimated position of the object at each of the times;
first object ID likelihood determination means that determines a first object ID likelihood of the object at each time, based on the first object position likelihood determined by the first object position likelihood determination means;
a second observation unit that observes the object at different times, acquires second observation information including the position and a feature value of the object, and assigns a second observation ID to each piece of second observation information;
second object position likelihood determination means that determines, based on the second observation information observed by the second observation unit, a second object position likelihood, which is the estimated position of the object at each of the times;
object tracking status determination means that detects two pieces of the second observation information that have different times but the same feature value of the object, and determines tracking status information of the object by associating the second observation IDs of the two detected pieces of second observation information with each other;
second object ID likelihood determination means that determines a second object ID likelihood of the second observation information, based on the tracking status information of the object and the estimated position of the object;
association means that calculates a first object association value based on the first object ID likelihood and the first object position likelihood of the object, and calculates a second object association value based on the second object ID likelihood and the second object position likelihood;
object position estimation means that estimates the position of the object based on the first object ID likelihood, the first object position likelihood, and the first object association value of the object, and/or the second object ID likelihood, the second object position likelihood, and the second object association value;
There is provided an object position estimation system characterized by comprising the above.
First observation information including the position and ID of the object is acquired by a first observation unit by observing the object at different times;
a first object position likelihood, which is the estimated position of the object at each of the times, is determined by first object position likelihood determination means based on the first observation information observed by the first observation unit;
a first object ID likelihood of the object is determined at each time by first object ID likelihood determination means, based on the first object position likelihood determined by the first object position likelihood determination means;
second observation information including the position and a feature value of the object is acquired by a second observation unit by observing the object at different times, and a second observation ID is assigned by the second observation unit to each piece of second observation information;
a second object position likelihood, which is the estimated position of the object at each of the times, is determined by second object position likelihood determination means based on the second observation information observed by the second observation unit;
two pieces of the second observation information that have different times but the same feature value of the object are detected, and tracking status information of the object is determined by object tracking status determination means by associating the second observation IDs of the two detected pieces of second observation information with each other;
a second object ID likelihood of the second observation information is determined by second object ID likelihood determination means, based on the tracking status information of the object and the estimated position of the object;
a first object association value is calculated by association means based on the first object ID likelihood and the first object position likelihood of the object;
a second object association value is calculated by the association means based on the second object ID likelihood and the second object position likelihood of the object; and
the position of the object is estimated by object position estimation means based on the first object ID likelihood, the first object position likelihood, and the first object association value of the object, and/or the second object ID likelihood, the second object position likelihood, and the second object association value.
There is provided an object position estimation method characterized by the above.
A function of acquiring, with a first observation unit, first observation information including the position and ID of an object by observing the object at different times;
a function of determining, with first object position likelihood determination means, a first object position likelihood, which is the estimated position of the object at each of the times, based on the first observation information observed by the first observation unit;
a function of determining, with first object ID likelihood determination means, a first object ID likelihood of the object at each time, based on the first object position likelihood determined by the first object position likelihood determination means;
a function of acquiring, with a second observation unit, second observation information including the position and a feature value of the object by observing the object at different times, and assigning, with the second observation unit, a second observation ID to each piece of second observation information;
a function of determining, with second object position likelihood determination means, a second object position likelihood, which is the estimated position of the object at each of the times, based on the second observation information observed by the second observation unit;
a function of detecting two pieces of the second observation information that have different times but the same feature value of the object, and determining, with object tracking status determination means, tracking status information of the object by associating the second observation IDs of the two detected pieces of second observation information with each other;
a function of determining, with second object ID likelihood determination means, a second object ID likelihood of the second observation information, based on the tracking status information of the object and the estimated position of the object;
a function of calculating, with association means, a first object association value based on the first object ID likelihood and the first object position likelihood of the object;
a function of calculating, with the association means, a second object association value based on the second object ID likelihood and the second object position likelihood of the object; and
a function of estimating, with object position estimation means, the position of the object based on the first object ID likelihood, the first object position likelihood, and the first object association value of the object, and/or the second object ID likelihood, the second object position likelihood, and the second object association value. An object position estimation program for realizing these functions is provided.
first object ID likelihood determination means that determines a first object ID likelihood of the object at each time, based on the first object position likelihood determined by the first object position likelihood determination means;
second object position likelihood determination means that determines a second object position likelihood, which is the estimated position of the object at each of the times, based on the second observation information from a second observation unit that assigns a second observation ID to each piece of second observation information, including the position and feature value of the object, acquired by observing the object at different times;
object tracking status determination means that determines tracking status information of the object by associating with each other the second observation IDs of two pieces of the second observation information in which the same feature value was observed at mutually different times;
second object ID likelihood determination means that determines a second object ID likelihood of the second observation information, based on the tracking status information of the object and the estimated position of the object;
association means that calculates a first object association value based on the first object ID likelihood and the first object position likelihood of the object, and a second object association value based on the second object ID likelihood and the second object position likelihood;
object position estimation means that estimates the position of the object based on at least one of (1) the first object ID likelihood, the first object position likelihood, and the first object association value of the object, and (2) the second object ID likelihood, the second object position likelihood, and the second object association value;
An object position estimation device comprising the above is provided.
There is provided the object position estimation system according to the first aspect, wherein the second object ID likelihood determination means takes, as the second ID likelihood of the object, the sum of a value obtained by multiplying the association value calculated at the previous detection of the object by the tracking success likelihood and a value obtained by dividing the tracking failure likelihood by the number of all objects under detection.
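The second ID likelihood defined above is a simple weighted mixture: the association value from the previous detection, weighted by the tracking success likelihood, plus the tracking failure likelihood spread uniformly over all objects under detection. A minimal sketch of this calculation follows; the function and variable names are illustrative assumptions, not from the patent:

```python
def second_id_likelihood(prev_association, p_track_success, p_track_failure, n_objects):
    """Second object ID likelihood per the second aspect: the previous
    association value weighted by the tracking success likelihood, plus
    the tracking failure likelihood spread uniformly over all objects
    under detection."""
    return prev_association * p_track_success + p_track_failure / n_objects

# Example: per-object ID distribution from the previous detection,
# with a 0.9 tracking success likelihood, 0.1 failure likelihood,
# and 3 objects in total. Note the result still sums to 1.
prev = {"A": 0.8, "B": 0.15, "C": 0.05}
likelihoods = {obj_id: second_id_likelihood(v, 0.9, 0.1, len(prev))
               for obj_id, v in prev.items()}
print(likelihoods)
```

The uniform failure term acts as absorption of identity uncertainty: when tracking fails, the observation could belong to any object, so its ID mass is divided equally among all candidates.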
<System configuration>
FIG. 1A is a diagram showing the configuration of an object position estimation system according to the first embodiment of the present invention.
The first observation device 101 and the second observation device 102 each detect a person 302 present in a room 301, which is a concrete example of a living environment.
The first observation device 101 includes a first detection unit 101a functioning as the first observation unit, first object position likelihood determination means 103, first object ID likelihood determination means 104, and a first internal storage unit 110. The first observation device 101 determines the first ID likelihood (first object ID likelihood) and the first position likelihood (first object position likelihood) of the person 302 present in the room 301. Information on the determined first ID likelihood and first position likelihood of the person 302 can be output from the first observation device 101 to the object position estimation means 108.
Note that this is one example of determining the position likelihood, and the present invention is not limited to it. The first position likelihood is determined by the first object position likelihood determination means 103 of the first observation device 101. The second position likelihood is determined by the second object position likelihood determination means 105 of the second observation device 102.
The second observation device 102 includes a second detection unit 102a functioning as the second observation unit, an image processing unit 102b, second object position likelihood determination means 105, object tracking status determination means 106, and a second internal storage unit 111. The second observation device 102 acquires the second position likelihood (second object position likelihood) of the person 302 present in the room 301 and information on the tracking status of the person 302 (tracking status information), and can output them from the second observation device 102 to the second object ID likelihood determination means 107, the object position estimation means 108, and the association means 109, respectively. A camera 305 is used as an example of the second detection unit 102a of the second observation device 102.
The object position estimation means 108 estimates the position of the person 302 based on the first ID likelihood and the first position likelihood of the person 302 determined (detected) by the first observation device 101, the second position likelihood of the person 302 determined (detected) by the second observation device 102, and the second ID likelihood determined by the second object ID likelihood determination means 107.
A Kalman filter estimates the most plausible state of a system (in this first embodiment of the present invention, for example, the position of an object) under the assumption that both the state information and the observation data (observation information) of the first observation device 101 and the second observation device 102 contain noise. In other words, it estimates the most probable state among the candidate states the system can take.
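As a concrete illustration of this idea, the one-dimensional update step of a textbook Kalman filter fuses a noisy state estimate with a noisy observation and returns the most plausible state. This is a generic sketch under standard assumptions, not the patent's implementation, and all names are illustrative:

```python
def kalman_update(x_est, p_est, z_obs, r_obs):
    """One-dimensional Kalman filter update step.
    x_est, p_est: current state estimate (position) and its variance.
    z_obs, r_obs: observation (e.g. from a tag reader or camera) and its
    noise variance. Both the estimate and the observation are assumed noisy.
    Returns the fused, most plausible state and its reduced variance."""
    k = p_est / (p_est + r_obs)          # Kalman gain: how much to trust the observation
    x_new = x_est + k * (z_obs - x_est)  # move the estimate toward the observation
    p_new = (1.0 - k) * p_est            # uncertainty shrinks after fusing
    return x_new, p_new

# Fuse a prior estimate (x = 2.0 m, variance 1.0) with an observation
# (z = 3.0 m, variance 1.0): the result lands halfway, with half the variance.
x, p = kalman_update(2.0, 1.0, 3.0, 1.0)
print(x, p)  # → 2.5 0.5
```

Because the gain depends on the relative variances, a precise sensor (small `r_obs`) pulls the estimate strongly toward its observation, while a noisy one barely moves it; this is exactly the weighting behavior the text attributes to the filter.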
The second object ID likelihood determination means 107 determines the second ID likelihood (second object ID likelihood) of the observation values observed by the camera 305 and the image processing unit 102b, based on the object tracking status information from the object tracking status determination means 106 and the association value obtained by the association means 109.
The object position estimation means 108 and the second object ID likelihood determination means 107 are described with reference to FIGS. 9A to 11F.
A detailed description along the flow of time follows, with reference to the flowchart of FIG. 16. The procedure below is controlled by the object position estimation device 120. Here, the second object ID likelihood determination means 107, the object position estimation means 108, and the association means 109 of the object position estimation system constitute the object position estimation device 120. The object position estimation system is described as comprising the object position estimation device 120, the first observation device 101 having the tag reader 304, and the second observation device 102 having the camera 305. The tag reader 304 is an example of the first detection unit 101a, and the camera 305 is an example of the second detection unit 102a.
Now consider the estimation status of the object position estimation means 108. The person positions shown in FIG. 11A are quite different from the actual positions of the persons (FIG. 9A) because the initial values are set at random. FIG. 11B shows the result of object position estimation based on the person information detected by the first observation device 101 and the second observation device 102 at time 2008/09/02_12:00:00. Although the estimates are closer to the actual positions (FIG. 9A), there is still a position error of nearly 2 m. This is because the person positions are updated from the initial person positions. In other words, immediately after the object position estimation system starts operating, the accuracy of the object position estimates is low. Accordingly, the accuracy of the association values of the association means 109, which uses the object position estimation results of the object position estimation means 108, is also low.
Claims (8)
- An object position estimation system for estimating the position of an object, comprising:
a first observation unit that observes the object at different times and acquires, at each time, first observation information including the position and ID of the object;
first object position likelihood determination means that determines, based on the first observation information observed by the first observation unit, a first object position likelihood, which is the estimated position of the object at each of the times;
first object ID likelihood determination means that determines a first object ID likelihood of the object at each time, based on the first object position likelihood determined by the first object position likelihood determination means;
a second observation unit that observes the object at different times, acquires second observation information including the position and a feature value of the object, and assigns a second observation ID to each piece of second observation information;
second object position likelihood determination means that determines, based on the second observation information observed by the second observation unit, a second object position likelihood, which is the estimated position of the object at each of the times;
object tracking status determination means that detects two pieces of the second observation information that have different times but the same feature value of the object, and determines tracking status information of the object by associating the second observation IDs of the two detected pieces of second observation information with each other;
second object ID likelihood determination means that determines a second object ID likelihood of the second observation information, based on the tracking status information of the object and the estimated position of the object;
association means that calculates a first object association value based on the first object ID likelihood and the first object position likelihood of the object, and calculates a second object association value based on the second object ID likelihood and the second object position likelihood; and
object position estimation means that estimates the position of the object based on the first object ID likelihood, the first object position likelihood, and the first object association value of the object, and/or the second object ID likelihood, the second object position likelihood, and the second object association value. - Furthermore, the object tracking status determination means outputs a tracking success likelihood indicating the probability that tracking of the object has succeeded and a tracking failure likelihood indicating the probability that tracking of the object has failed, and
the second object ID likelihood determination means takes, as the second ID likelihood of the object, the sum of a value obtained by multiplying the association value calculated at the previous detection of the object by the tracking success likelihood and a value obtained by dividing the tracking failure likelihood by the number of all objects under detection. The object position estimation system according to claim 1. - When the object tracking status determination means has determined, by detecting two pieces of the second observation information that have different times but the same feature value of the object, that the tracking status of the object detected by the second observation device is "being tracked", the association means obtains the ID of the object detected by the second observation device based on the ID of the object and the position of the object estimated by the object position estimation means. The object position estimation system according to claim 1 or 2.
- Further comprising an environment map in which doorway information including the position of a doorway through which the person present in the environment enters and exits, blind spot information of the first observation device, or blind spot information of the second observation device is recorded. The object position estimation system according to claim 1 or 2.
- Further, the object tracking status determination means determines a probability that a plurality of overlapping objects have been detected as one object. The object position estimation system according to claim 1 or 2.
- An object position estimation method for estimating the position of an object, comprising:
acquiring, with a first observation unit, first observation information including the position and ID of the object by observing the object at different times;
determining, with first object position likelihood determination means, a first object position likelihood, which is the estimated position of the object at each of the times, based on the first observation information observed by the first observation unit;
determining, with first object ID likelihood determination means, a first object ID likelihood of the object at each time, based on the first object position likelihood determined by the first object position likelihood determination means;
acquiring, with a second observation unit, second observation information including the position and a feature value of the object by observing the object at different times, and assigning, with the second observation unit, a second observation ID to each piece of second observation information;
determining, with second object position likelihood determination means, a second object position likelihood, which is the estimated position of the object at each of the times, based on the second observation information observed by the second observation unit;
detecting two pieces of the second observation information that have different times but the same feature value of the object, and determining, with object tracking status determination means, tracking status information of the object by associating the second observation IDs of the two detected pieces of second observation information with each other;
determining, with second object ID likelihood determination means, a second object ID likelihood of the second observation information, based on the tracking status information of the object and the estimated position of the object;
calculating, with association means, a first object association value based on the first object ID likelihood and the first object position likelihood of the object;
calculating, with the association means, a second object association value based on the second object ID likelihood and the second object position likelihood of the object; and
estimating, with object position estimation means, the position of the object based on the first object ID likelihood, the first object position likelihood, and the first object association value of the object, and/or the second object ID likelihood, the second object position likelihood, and the second object association value. An object position estimation method. - In a computer:
a function of acquiring, with a first observation unit, first observation information including the position and ID of an object by observing the object at different times;
a function of determining, with first object position likelihood determination means, a first object position likelihood, which is the estimated position of the object at each of the times, based on the first observation information observed by the first observation unit;
a function of determining, with first object ID likelihood determination means, a first object ID likelihood of the object at each time, based on the first object position likelihood determined by the first object position likelihood determination means;
a function of acquiring, with a second observation unit, second observation information including the position and a feature value of the object by observing the object at different times, and assigning, with the second observation unit, a second observation ID to each piece of second observation information;
a function of determining, with second object position likelihood determination means, a second object position likelihood, which is the estimated position of the object at each of the times, based on the second observation information observed by the second observation unit;
a function of detecting two pieces of the second observation information that have different times but the same feature value of the object, and determining, with object tracking status determination means, tracking status information of the object by associating the second observation IDs of the two detected pieces of second observation information with each other;
a function of determining, with second object ID likelihood determination means, a second object ID likelihood of the second observation information, based on the tracking status information of the object and the estimated position of the object;
a function of calculating, with association means, a first object association value based on the first object ID likelihood and the first object position likelihood of the object;
a function of calculating, with the association means, a second object association value based on the second object ID likelihood and the second object position likelihood of the object; and
a function of estimating, with object position estimation means, the position of the object based on the first object ID likelihood, the first object position likelihood, and the first object association value of the object, and/or the second object ID likelihood, the second object position likelihood, and the second object association value. An object position estimation program for causing the computer to realize these functions. - First object position likelihood determination means that determines a first object position likelihood, which is the estimated position of an object at each of different times, based on first observation information from a first observation unit that observes the object at the different times and acquires the first observation information including the position and ID of the object;
first object ID likelihood determination means that determines a first object ID likelihood of the object at each time, based on the first object position likelihood determined by the first object position likelihood determination means;
second object position likelihood determination means that determines a second object position likelihood, which is the estimated position of the object at each of the times, based on the second observation information from a second observation unit that assigns a second observation ID to each piece of second observation information, including the position and feature value of the object, acquired by observing the object at different times;
object tracking status determination means that determines tracking status information of the object by associating with each other the second observation IDs of two pieces of the second observation information in which the same feature value was observed at mutually different times;
second object ID likelihood determination means that determines a second object ID likelihood of the second observation information, based on the tracking status information of the object and the estimated position of the object;
association means that calculates a first object association value based on the first object ID likelihood and the first object position likelihood of the object, and a second object association value based on the second object ID likelihood and the second object position likelihood;
object position estimation means that estimates the position of the object based on at least one of (1) the first object ID likelihood, the first object position likelihood, and the first object association value of the object, and (2) the second object ID likelihood, the second object position likelihood, and the second object association value;
An object position estimation device comprising the above.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2010800021760A CN102105811B (zh) | 2009-02-19 | 2010-02-18 | 物体位置推定系统、物体位置推定装置、物体位置推定方法及物体位置推定程序 |
US12/933,888 US8527235B2 (en) | 2009-02-19 | 2010-02-18 | Object position estimation system, object position estimation device, object position estimation method and object position estimation program |
JP2010518663A JP4709943B2 (ja) | 2009-02-19 | 2010-02-18 | 物体位置推定システム、物体位置推定装置、物体位置推定方法、及び物体位置推定プログラム |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2009-036202 | 2009-02-19 | ||
JP2009036202 | 2009-02-19 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2010095437A1 true WO2010095437A1 (ja) | 2010-08-26 |
Family
ID=42633724
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2010/001038 WO2010095437A1 (ja) | 2009-02-19 | 2010-02-18 | 物体位置推定システム、物体位置推定装置、物体位置推定方法、及び物体位置推定プログラム |
Country Status (4)
Country | Link |
---|---|
US (1) | US8527235B2 (ja) |
JP (1) | JP4709943B2 (ja) |
CN (1) | CN102105811B (ja) |
WO (1) | WO2010095437A1 (ja) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TW201206192A (en) * | 2010-07-23 | 2012-02-01 | Hon Hai Prec Ind Co Ltd | Detection device and method |
US9077343B2 (en) * | 2011-06-06 | 2015-07-07 | Microsoft Corporation | Sensing floor for locating people and devices |
JP5919665B2 (ja) * | 2011-07-19 | 2016-05-18 | 日本電気株式会社 | 情報処理装置、物体追跡方法および情報処理プログラム |
TWI474173B (zh) * | 2012-02-21 | 2015-02-21 | Hon Hai Prec Ind Co Ltd | 行走輔助系統及行走輔助方法 |
US9820677B2 (en) * | 2013-01-03 | 2017-11-21 | St. Jude Medical, Atrial Fibrillation Division, Inc. | Cointegration filter for a catheter navigation system |
US9196150B2 (en) * | 2013-03-15 | 2015-11-24 | Honeywell International Inc. | System and method for video monitoring of restricted areas below bucket trucks, lineworkers on power distribution poles or other elevated loads |
JP5812061B2 (ja) * | 2013-08-22 | 2015-11-11 | 株式会社デンソー | 物標検出装置およびプログラム |
JP6482844B2 (ja) * | 2014-12-11 | 2019-03-13 | 株式会社メガチップス | 状態推定装置、プログラムおよび集積回路 |
JP6770299B2 (ja) * | 2015-03-25 | 2020-10-14 | パナソニック株式会社 | 物体検出装置および物体検出方法 |
US10007991B2 (en) | 2016-01-29 | 2018-06-26 | International Business Machines Corporation | Low-cost method to reliably determine relative object position |
JP6730177B2 (ja) * | 2016-12-28 | 2020-07-29 | 株式会社デンソーテン | 画像生成装置および画像生成方法 |
US10304207B2 (en) * | 2017-07-07 | 2019-05-28 | Samsung Electronics Co., Ltd. | System and method for optical tracking |
WO2021202380A1 (en) * | 2020-03-30 | 2021-10-07 | Wiser Systems, Inc. | Integrated camera and ultra- wideband location devices and related systems |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005031955A (ja) * | 2003-07-11 | 2005-02-03 | Kddi Corp | 移動体追跡システム |
JP2006270456A (ja) * | 2005-03-23 | 2006-10-05 | Matsushita Electric Works Ltd | 情報提示システム |
WO2007074671A1 (ja) * | 2005-12-28 | 2007-07-05 | Matsushita Electric Industrial Co., Ltd. | 物体検出装置、物体検出方法、及び、物体検出用コンピュータプログラム |
JP2008008684A (ja) * | 2006-06-27 | 2008-01-17 | Ezis Solutions:Kk | 位置特定装置 |
JP2008275324A (ja) * | 2007-04-25 | 2008-11-13 | Doshisha | 物体位置特定システム、物体位置特定方法、物体位置算出装置、及びコンピュータプログラム |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US66513A (en) * | 1867-07-09 | Benjamin | ||
US6061644A (en) * | 1997-12-05 | 2000-05-09 | Northern Digital Incorporated | System for determining the spatial position and orientation of a body |
CN1199054A (zh) | 1998-04-20 | 1998-11-18 | 台湾塑胶工业股份有限公司 | 催化合成方法 |
US6882959B2 (en) * | 2003-05-02 | 2005-04-19 | Microsoft Corporation | System and process for tracking an object state using a particle filter sensor fusion technique |
JP4490076B2 (ja) | 2003-11-10 | 2010-06-23 | 日本電信電話株式会社 | 物体追跡方法、物体追跡装置、プログラム、および、記録媒体 |
JP3931879B2 (ja) | 2003-11-28 | 2007-06-20 | 株式会社デンソー | センサフュージョンシステム及びそれを用いた車両制御装置 |
KR100754385B1 (ko) | 2004-09-30 | 2007-08-31 | 삼성전자주식회사 | 오디오/비디오 센서를 이용한 위치 파악, 추적 및 분리장치와 그 방법 |
CN101052982A (zh) | 2005-08-04 | 2007-10-10 | 松下电器产业株式会社 | 检索物品推定装置和方法、及检索物品推定装置用服务器 |
JP3989527B2 (ja) | 2005-08-04 | 2007-10-10 | 松下電器産業株式会社 | 検索物品推定装置及び方法、並びに、検索物品推定装置用サーバ |
JP2007255977A (ja) * | 2006-03-22 | 2007-10-04 | Nissan Motor Co Ltd | 物体検出方法および物体検出装置 |
-
2010
- 2010-02-18 US US12/933,888 patent/US8527235B2/en active Active
- 2010-02-18 JP JP2010518663A patent/JP4709943B2/ja active Active
- 2010-02-18 CN CN2010800021760A patent/CN102105811B/zh not_active Expired - Fee Related
- 2010-02-18 WO PCT/JP2010/001038 patent/WO2010095437A1/ja active Application Filing
Non-Patent Citations (2)
Title |
---|
"Annual Conference of JSAI Ronbunshu (CD-ROM), 2007", vol. 21ST, article YOHEI SHIRASAKA: "RFID ni yoru Gazo Shikibetsu Model Parameter no Jido Gakushu", pages: 1 - 4, XP008141832 * |
HIROFUMI KANAZAKI; TAKEHISA YAIRI; KAZUO MACHIDA; KENJI KONDO; YOSHIHIKO MATSUKAWA: "Variational Approximation Data Association Filter", 15TH EUROPEAN SIGNAL PROCESSING CONFERENCE, 3 September 2007 (2007-09-03), pages 1872 - 1876 |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013072858A (ja) * | 2011-09-29 | 2013-04-22 | Panasonic Corp | 移動体位置推定装置、移動体位置推定方法、及び、移動体位置推定プログラム |
WO2013128852A1 (ja) * | 2012-02-29 | 2013-09-06 | 日本電気株式会社 | 動線情報生成システム、動線情報生成方法および動線情報生成プログラム |
JPWO2013128852A1 (ja) * | 2012-02-29 | 2015-07-30 | 日本電気株式会社 | 動線情報生成システム、動線情報生成方法および動線情報生成プログラム |
US10648803B2 (en) | 2012-02-29 | 2020-05-12 | Nec Corporation | Movement line information generation system, movement line information generation method and movement line information generation program |
US10895454B2 (en) | 2012-02-29 | 2021-01-19 | Nec Corporation | Movement line information generation system, movement line information generation method and movement line information generation program |
JP2015527573A (ja) * | 2012-06-28 | 2015-09-17 | ノースロップ グラマン システムズ コーポレーション | Wifiマッピング及び動き検出 |
JP2016024704A (ja) * | 2014-07-23 | 2016-02-08 | Kddi株式会社 | ユーザ特定装置、ユーザ特定方法、およびプログラム |
JPWO2020230645A1 (ja) * | 2019-05-13 | 2020-11-19 | ||
JP7384199B2 (ja) | 2019-05-13 | 2023-11-21 | 日本電気株式会社 | 位置推定システム、位置推定方法、プログラム、及び記録媒体 |
US11210536B2 (en) | 2020-01-06 | 2021-12-28 | Toyota Jidosha Kabushiki Kaisha | Moving object recognition system, moving object recognition method, and program |
JP7457948B2 (ja) | 2021-03-10 | 2024-03-29 | パナソニックIpマネジメント株式会社 | 位置推定システム、位置推定方法、位置情報管理システム、位置情報管理方法及びプログラム |
Also Published As
Publication number | Publication date |
---|---|
JP4709943B2 (ja) | 2011-06-29 |
CN102105811B (zh) | 2013-10-16 |
US8527235B2 (en) | 2013-09-03 |
US20110029278A1 (en) | 2011-02-03 |
CN102105811A (zh) | 2011-06-22 |
JPWO2010095437A1 (ja) | 2012-08-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4709943B2 (ja) | 物体位置推定システム、物体位置推定装置、物体位置推定方法、及び物体位置推定プログラム | |
US11386616B2 (en) | Automated spatial indexing of images based on floorplan features | |
KR101203232B1 (ko) | 이동 장치의 로케이션 역학을 결정하는 시스템 및 방법, 이동 장치를 추적하는 것을 용이하게 하는 머신 구현 시스템, 무선 장치 로케이션 시스템, 및 컴퓨터 판독가능 기록 매체 | |
KR102297478B1 (ko) | 주변 신호를 이용한 궤적 매칭 | |
US20200142388A1 (en) | Assisting execution of manual protocols at production equipment | |
US20080130949A1 (en) | Surveillance System and Method for Tracking and Identifying Objects in Environments | |
US20120148102A1 (en) | Mobile body track identification system | |
JP4677060B1 (ja) | 位置校正情報収集装置、位置校正情報収集方法、及び位置校正情報収集プログラム | |
US8731829B2 (en) | Flow line detection system, flow line detection method, and flow line detection program | |
JPWO2013128852A1 (ja) | 動線情報生成システム、動線情報生成方法および動線情報生成プログラム | |
EP1927947A1 (en) | Computer implemented method and system for tracking objects using surveillance database | |
US11386151B2 (en) | Image search in walkthrough videos | |
US11734882B2 (en) | Machine learning based object identification using scaled diagram and three-dimensional model | |
Joseph et al. | Indoor positioning using WiFi fingerprint | |
US20170339528A1 (en) | Method and system for calculating a position of a mobile communication device within an environment | |
US10614436B1 (en) | Association of mobile device to retail transaction | |
Guan et al. | Towards a crowdsourced radio map for indoor positioning system | |
US20200068344A1 (en) | Method and system for wireless localization data acquisition and calibration with image localization | |
Keser et al. | A priori verification and validation study of RFKON database | |
Aung et al. | Construction and management of fingerprint database with estimated reference locations for wifi indoor positioning systems | |
JP2008211781A (ja) | ある環境において物体の移動をモデル化するためのシステムおよびコンピュータにより実施されるその方法 | |
Palamarchuk | ORCID: 0000-0002-7443-099X METHODS OF STUDENT INTERACTION WITH THE AUTOMATED ATTENDANCE SYSTEM IN HIGHER EDUCATION INSTITUTIONS | |
WO2012042864A1 (ja) | 物体位置推定システム、物体位置推定装置、物体位置推定方法、及び物体位置推定プログラム | |
CN115326071A (zh) | 一种自适应融合的室内定位方法、计算机设备和可读存储介质 | |
Papaioannou | Tracking multiple mobile devices in CCTV-enabled areas |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201080002176.0 Country of ref document: CN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2010518663 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 12933888 Country of ref document: US |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 10743566 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2010743566 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 10743566 Country of ref document: EP Kind code of ref document: A1 |