WO2012029878A1 - Device and method for creating an environment map, and device and method for action prediction - Google Patents

Device and method for creating an environment map, and device and method for action prediction

Info

Publication number
WO2012029878A1
Authority
WO
WIPO (PCT)
Prior art keywords
probability density
moving body
group
moving
map
Prior art date
Application number
PCT/JP2011/069832
Other languages
English (en)
Japanese (ja)
Inventor
Stefano Pellegrini
Andreas Ess
Luc Van Gool
Ryuji Funayama
Original Assignee
Toyota Motor Corporation
Eidgenössische Technische Hochschule Zürich (ETH Zurich)
Application filed by Toyota Motor Corporation and Eidgenössische Technische Hochschule Zürich (ETH Zurich)
Publication of WO2012029878A1

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/16 Anti-collision systems
    • G08G1/166 Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes

Definitions

  • The present invention relates to an environment map creation apparatus and method for creating an environment map of a moving body such as a pedestrian, and to an action prediction apparatus and method for predicting the action of a moving body.
  • As a conventional behavior prediction apparatus and method, the one described in Patent Document 1, for example, is known.
  • In Patent Document 1, a pedestrian is detected from pixel-pattern information and the like of a moving body, the double-leg support time and the maximum stride of the pedestrian are obtained, and whether or not the pedestrian will stop is determined from this information.
  • The future position of the pedestrian is then predicted using the determination result, and the range in which the pedestrian can exist is specified.
  • An object of the present invention is to provide an environment map creation device and method, and a behavior prediction device and method, that can estimate the movement behavior of a moving body with high accuracy.
  • The present invention is an environment map creation device for creating an environment map of a moving body, comprising moving body detection means for detecting the moving body, and probability density map creation means for creating, as the environment map, a probability density map representing the probability density of the position of the detected moving body after a predetermined time.
  • In this environment map creation device, a probability density map representing the probability density of the position of the moving body after a predetermined time is created. When the course of the moving body is predicted using such a probability density map, a plurality of predicted courses can be obtained for one moving body, for example by selecting a plurality of points with high probability on the map. The estimation accuracy of the movement behavior of the moving body is thereby improved.
  • Preferably, the device further comprises mixed normal distribution generation means for expressing the probability density map as a mixture of normal distributions.
  • By obtaining the parameters (center value, height, and the like) of the plurality of normal distributions constituting the mixture, a plurality of predicted courses can be reliably obtained for one moving body.
  • Preferably, the device further comprises group determination means for determining, when a plurality of moving bodies are detected by the moving body detection means, whether or not the plurality of moving bodies belong to one group. When the group determination means determines that a plurality of moving bodies belong to one group, the probability density map creation means creates a probability density map treating the group as a single moving body.
  • When a plurality of moving bodies act as one group, the moving speed and moving direction of each moving body should be the same. Therefore, when it is determined that a plurality of moving bodies belong to one group, a probability density map is created with the group treated as a single moving body, and the course of the moving bodies is predicted using this map.
  • Preferably, the group determination means determines that the moving bodies belong to one group when the distance between them is equal to or less than a predetermined value.
  • The present invention is also a behavior prediction apparatus for predicting the behavior of a moving body, comprising moving body detection means for detecting the moving body, probability density map creation means for creating a probability density map representing the probability density of the position of the detected moving body after a predetermined time, and course prediction means for predicting the course of the moving body using the created probability density map.
  • In this behavior prediction apparatus, a probability density map representing the probability density of the position of the moving body after a predetermined time is created, and the course of the moving body is predicted using the map. A plurality of predicted courses can therefore be obtained for one moving body, improving the estimation accuracy of its movement behavior.
  • Preferably, the course prediction means predicts the course of the moving body by arranging a plurality of hypotheses on the probability density map. In this way, a plurality of predicted courses can be reliably obtained for one moving body.
  • Preferably, the apparatus further comprises mixed normal distribution generation means for expressing the probability density map as a mixture of normal distributions and parameter acquisition means for obtaining the parameters of the mixture, and the course prediction means predicts the courses of a moving body and of other moving bodies adjacent to it based on the parameters of the mixed normal distributions related to those moving bodies. In this case, a plurality of predicted courses can be reliably obtained for the moving body and the adjacent moving bodies.
  • Preferably, the parameter acquisition means obtains the parameters of the mixed normal distribution based on a discretized map obtained by discretizing the probability density map. The parameters can then be obtained easily with a small amount of calculation.
  • Preferably, the apparatus further comprises group determination means for determining, when a plurality of moving bodies are detected by the moving body detection means, whether or not they belong to one group; when the group determination means determines that they do, the probability density map creation means creates a probability density map treating the group as a single moving body.
  • Preferably, the group determination means determines that the moving bodies belong to one group when the distance between them is equal to or less than a predetermined value.
  • The course prediction means may use the courses of the moving bodies predicted with the probability density map as initial values and model the states of the plurality of moving bodies with a conditional random field, so that the determination of whether the moving bodies belong to one group and the prediction of their courses are performed simultaneously.
  • This eliminates the time needed for a separate group determination, so the courses of moving bodies acting as a group can be predicted in a short time.
  • The present invention is also an environment map creation method for creating an environment map of a moving body, comprising a step of detecting the moving body and a step of creating, as the environment map, a probability density map representing the probability density of the position of the moving body after a predetermined time.
  • Because such a probability density map is created, a plurality of predicted courses can be obtained for one moving body as described above, improving the estimation accuracy of its movement behavior.
  • The present invention is also a behavior prediction method for predicting the behavior of a moving body, comprising a step of detecting the moving body, a step of creating a probability density map representing the probability density of the position of the moving body after a predetermined time, and a step of predicting the course of the moving body using the probability density map.
  • In this behavior prediction method, a plurality of predicted courses can likewise be obtained for one moving body, improving the estimation accuracy of its movement behavior.
  • According to the present invention, the movement behavior of a moving body can be estimated with high accuracy. As a result, when a moving body is tracked using this estimation technique, for example, the tracking performance can be improved.
  • FIG. 1 is a block diagram showing the schematic configuration of the first embodiment of the behavior prediction apparatus according to the present invention. FIG. 2 is a flowchart showing the details of the processing procedure executed by the pedestrian course prediction unit shown in FIG. 1. FIG. 3 is a diagram showing an example of a likelihood map expressing the validity of the next velocity vector of a pedestrian in polar coordinates. FIG. 4 is a diagram showing an example of a likelihood map expressing the validity of the next velocity vector of a pedestrian in rectangular coordinates. FIG. 5 is a diagram showing the probability density map corresponding to the likelihood map shown in FIG. 4. FIG. 6 is a diagram showing a state in which hypotheses are arranged on the probability density map.
  • FIG. 9 is a diagram showing a state in which the probability density map shown in FIG. 5 is expressed as a mixed normal distribution. FIG. 10 is a diagram showing the individual normal probability distributions constituting the mixed normal distribution shown in FIG. 9. A schematic diagram shows an example of the next velocity vectors of two pedestrians, and another schematic diagram shows how the position of each pedestrian at the next unit time is calculated.
  • FIG. 20 is a block diagram showing the schematic configuration of the fourth embodiment of the behavior prediction apparatus according to the present invention. FIG. 21 is a flowchart showing the details of the processing procedure executed by the group determination / pedestrian course prediction unit shown in FIG. 20. FIG. 22 is a schematic diagram showing an example of a graph model of a conditional random field.
  • FIG. 1 is a block diagram showing a schematic configuration of a first embodiment of the behavior prediction apparatus according to the present invention.
  • The behavior prediction device 1 of the present embodiment detects a pedestrian by image recognition, creates an environment map related to the pedestrian, and predicts the movement behavior of the pedestrian using the environment map.
  • The behavior prediction apparatus 1 includes a camera 2 that images pedestrians, an ECU (Electronic Control Unit) 3, and an output display 4.
  • The camera 2 may be fixedly installed at a high place such as on a building, or may be mounted on a vehicle or the like.
  • The ECU 3 includes a CPU, memories such as a ROM and a RAM, input/output circuits, and the like, and has an image processing unit 5, a storage unit 6, and a pedestrian course prediction unit 7.
  • The image processing unit 5 performs image processing such as filtering, binarization, and feature extraction on the captured image acquired by the camera 2 to generate an image frame (image data) containing the pedestrian.
  • The storage unit 6 stores parameters and the like used in the calculation processing of the pedestrian course prediction unit 7.
  • The pedestrian course prediction unit 7 performs predetermined processing based on the image frame generated by the image processing unit 5 to predict the pedestrian's course, and outputs the prediction result to the output display 4.
  • FIG. 2 is a flowchart showing details of a processing procedure executed by the pedestrian course prediction unit 7.
  • In step S101, a likelihood map (potential map) representing the validity of the next velocity vector (direction and speed) of the pedestrian is created based on the image frame generated by the image processing unit 5.
  • The likelihood map shown in FIG. 3 includes an energy term for the condition that the distance between pedestrians is kept above a certain level, an energy term for the condition that a pedestrian tries to maintain a constant speed, and an energy term for the condition that the pedestrian heads toward a destination.
  • In FIG. 3, white points A1 and A2 indicate the velocity vectors obtained when pedestrians P1 and P2 keep their current speed and direction for the next unit time (for example, 0.2 seconds).
  • Candidate vectors are generated by changing the direction in 10-degree increments up to 60 degrees to the left and right of the directions indicated by points A1 and A2, and by changing the speed by ±0.3 m/s around the speeds indicated by those points.
  • The validity (plausibility) as a velocity vector increases in the order of the black portions, hatched portions, dotted portions, and white portions.
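The candidate enumeration described above (headings within ±60 degrees of the current heading in 10-degree steps, speeds varied by ±0.3 m/s) can be sketched as follows. The function name and the clamping of negative speeds to zero are illustrative assumptions, not part of the patent.

```python
import math

def candidate_velocities(speed, heading_deg,
                         angle_range=60, angle_step=10, speed_delta=0.3):
    """Enumerate candidate next velocity vectors: headings within
    +/- angle_range degrees of the current heading in angle_step
    increments, speeds varied by +/- speed_delta around the current
    speed (clamped at zero)."""
    candidates = []
    for da in range(-angle_range, angle_range + 1, angle_step):
        for ds in (-speed_delta, 0.0, speed_delta):
            s = max(speed + ds, 0.0)
            a = math.radians(heading_deg + da)
            candidates.append((s * math.cos(a), s * math.sin(a)))
    return candidates

# A pedestrian walking at 1.2 m/s heading straight "up" (90 degrees)
vecs = candidate_velocities(1.2, 90.0)  # 13 headings x 3 speeds = 39 vectors
```

Each candidate would then be scored by the energy terms described above to form the likelihood map.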
  • FIG. 4 shows an example of a likelihood map expressing the validity of the next velocity vector of a pedestrian in rectangular coordinates.
  • Note that the likelihood map shown in FIG. 4 does not correspond to the likelihood map shown in FIG. 3.
  • The x-axis of this likelihood map indicates the lateral (left/right) position when the pedestrian P advances to the next unit time, and the y-axis indicates the forward position when the pedestrian P advances to the next unit time.
  • The next velocity vector of the pedestrian is thus represented as an x-y coordinate position after the pedestrian P has advanced by one unit time; the lower (deeper) the likelihood map, the more appropriate the velocity vector.
  • In step S102, the likelihood map obtained in step S101 is converted into a probability density map.
  • The probability density map is an environment map representing the probability density of the position of the pedestrian after a unit time.
  • FIG. 5 is obtained by converting the likelihood map shown in FIG. 4 into a probability density map.
  • This probability density map is represented in the form of a Markov random field or a Gibbs random field.
  • The following conversion formula is used to convert the likelihood map into the probability density map: p(v_i^t) ∝ exp(−E(v_i^t)), where E(v_i^t) is the energy assigned by the likelihood map to the velocity vector v_i^t of pedestrian i at time t.
  • In fact, regions S1 and S2 with low height in the likelihood map shown in FIG. 4 correspond to regions S1 and S2 with high probability density in the probability density map shown in FIG. 5.
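The energy-to-probability conversion can be sketched as follows, assuming the Gibbs-style form p ∝ exp(−E) normalized over all cells; the grid values are hypothetical.

```python
import math

def energy_to_probability(energy_map):
    """Convert an energy (likelihood) map E into a probability density
    map via p proportional to exp(-E), normalised so all cells sum to 1.
    Low-energy (highly plausible) cells become high-probability cells."""
    weights = [[math.exp(-e) for e in row] for row in energy_map]
    total = sum(sum(row) for row in weights)
    return [[w / total for w in row] for row in weights]

# Hypothetical 2x2 energy grid: the two 0.5-energy cells are more plausible
E = [[2.0, 0.5],
     [0.5, 2.0]]
P = energy_to_probability(E)
```

The normalization constant plays the role of the partition function of the Markov/Gibbs random field mentioned above.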
  • In step S103, a plurality (n) of hypotheses are arranged in regions of high probability density on the probability density map obtained in step S102.
  • The hypotheses are arranged randomly or evenly according to the probabilities of the probability density map, or on its maximum points.
  • FIG. 6 shows five hypotheses K arranged on the probability density map shown in FIG. 5.
  • In step S104, n predicted movement paths corresponding to the n hypotheses are calculated.
  • Each hypothesis corresponds to one point on the probability density map and assumes that the pedestrian advances for the next unit time at the velocity vector corresponding to that point; the predicted movement path is obtained from this velocity vector.
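One way to realise step S103 is to draw hypothesis cells in proportion to the probabilities of the map; the cell-level sampling and the fixed seed below are assumptions for illustration.

```python
import random

def place_hypotheses(prob_map, n, rng=None):
    """Arrange n hypotheses on a 2-D probability density map, drawing
    each hypothesis cell with probability proportional to its value."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    cells = [(i, j) for i, row in enumerate(prob_map)
             for j in range(len(row))]
    weights = [prob_map[i][j] for i, j in cells]
    return rng.choices(cells, weights=weights, k=n)

prob_map = [[0.05, 0.10, 0.05],
            [0.10, 0.40, 0.10],
            [0.05, 0.10, 0.05]]
hyps = place_hypotheses(prob_map, 5)  # five hypothesis cells, as in FIG. 6
```

Each sampled cell corresponds to a velocity vector, and advancing the pedestrian by one unit time along that vector yields one predicted movement path.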
  • In step S105, it is determined whether or not n is larger than 1. When n is larger than 1, n is decremented by 1 (step S106); otherwise step S106 is skipped.
  • In step S107, it is determined whether or not steps S101 to S104 have been repeated a specified number of times. If not, the process returns to step S101.
  • In this way, the process of obtaining the predicted movement paths of the pedestrian P is repeated a plurality of times.
  • For example, if the unit time is 0.2 seconds, the course of the pedestrian P up to 2 seconds ahead is predicted by repeating steps S101 to S104 ten times.
  • With each repetition, the number of hypotheses arranged on the probability density map decreases from n toward one. As a result, a plurality of predicted movement paths are obtained for the pedestrian P.
  • When steps S101 to S104 have been repeated the specified number of times, the plurality of predicted movement paths of the pedestrian up to the predetermined time are output to the output display 4 (step S108).
  • In the above, the camera 2 and the image processing unit 5 of the ECU 3 constitute moving body detection means for detecting a moving body.
  • Steps S101 and S102 executed by the pedestrian course prediction unit 7 of the ECU 3 constitute probability density map creation means for creating a probability density map representing the probability density of the position of the detected moving body after a predetermined time, and steps S103 to S107 constitute course prediction means for predicting the course of the moving body using the created probability density map.
  • As described above, in the present embodiment, the likelihood map of the pedestrian's velocity vectors obtained from the image captured by the camera 2 is converted into a probability density map, and a plurality of different hypotheses are arranged on the probability density map.
  • A plurality of predicted movement paths can thereby be obtained for one pedestrian.
  • For example, when a pedestrian approaches an obstacle, courses passing to both the left and the right of the obstacle are predicted, so the prediction of the pedestrian's movement path is prevented from deviating greatly, and the movement behavior of the pedestrian can be estimated with high accuracy.
  • FIG. 8 is a flowchart showing details of a processing procedure executed by the pedestrian course prediction unit 7 in the second embodiment of the behavior prediction apparatus according to the present invention.
  • In the second embodiment, steps S101 and S102 are executed as in the processing shown in FIG. 2. Subsequently, the probability density map obtained in step S102 is expressed as a mixed normal distribution (step S111).
  • FIG. 9 represents the probability density map shown in FIG. 5 as a mixed normal distribution. As shown in FIG. 10, the mixed normal distribution is a combination of a plurality of normal probability distributions.
  • In step S112, the parameters of the mixed normal distribution are obtained.
  • The parameters are the center value μ, the spread σ, and the height w of each normal probability distribution, and they are determined using the known EM algorithm. As shown in FIG. 10, w is a weight determined by the height of each normal distribution; the larger this weight, the more likely the pedestrian moves according to that distribution.
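As a hedged illustration of fitting μ, σ, and w with the EM algorithm, here is a minimal one-dimensional two-component version; the patent operates on two-dimensional probability density maps, and the data below are synthetic.

```python
import math, random

def em_gmm_1d(data, n_iter=50):
    """Fit a two-component 1-D Gaussian mixture by EM, returning the
    parameters named in the text: centre mu, spread sigma, weight w."""
    mu = [min(data), max(data)]
    sigma = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[k] / (sigma[k] * math.sqrt(2 * math.pi))
                 * math.exp(-(x - mu[k]) ** 2 / (2 * sigma[k] ** 2))
                 for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate parameters from the responsibilities
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            sigma[k] = math.sqrt(max(var, 1e-6))
            w[k] = nk / len(data)
    return mu, sigma, w

rng = random.Random(0)
data = [rng.gauss(0.0, 0.3) for _ in range(200)] + \
       [rng.gauss(3.0, 0.3) for _ in range(200)]
mu, sigma, w = em_gmm_1d(data)
```

The recovered centers approximate the two modes, corresponding to the two course candidates a bimodal probability density map would yield.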
  • In step S113, the next velocity vector is obtained from the parameters of the mixed normal distribution. In step S114, it is determined whether the next velocity vector has been obtained for all pedestrians; if not, the process returns to step S101.
  • Suppose the next velocity vector of pedestrian A is represented by a mixture of two normal distributions. The velocity vectors corresponding to the center values μ of the two distributions then indicate two course candidates along which pedestrian A may proceed next; let these velocity vectors be s1 and s2.
  • Similarly, the next velocity vector of pedestrian B is represented by a mixture of two normal distributions, giving two velocity vectors t1 and t2 indicating the course candidates along which pedestrian B may proceed next.
  • When it is determined in step S114 that the next velocity vector has been obtained for all pedestrians, a plurality of combinations of the movements of a pedestrian and the surrounding pedestrians are considered, and the product of their weight values is calculated (step S115). The top m combinations with the largest products are then selected (step S116).
  • In the example above, the products of the weight values are w(s1)·w(t1), w(s1)·w(t2), w(s2)·w(t1), and w(s2)·w(t2).
  • When m is two, for example, the two patterns with the largest product of weight values are selected from the four combination patterns (see FIG. 12).
  • These two patterns are kept as the combinations of positions that pedestrians A and B can take after a unit time.
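The selection in steps S115 and S116 can be sketched as follows; the pedestrian labels and weights are the hypothetical ones from the example above.

```python
from itertools import product

def top_m_combinations(candidates, m):
    """Rank joint motion patterns of several pedestrians by the product
    of their component weight values and keep the top m.  `candidates`
    maps each pedestrian to a list of (label, weight) pairs."""
    names = list(candidates)
    combos = []
    for choice in product(*(candidates[n] for n in names)):
        score = 1.0
        for _, weight in choice:
            score *= weight
        combos.append((score, tuple(label for label, _ in choice)))
    combos.sort(reverse=True)
    return combos[:m]

candidates = {
    "A": [("s1", 0.7), ("s2", 0.3)],  # pedestrian A's two course weights
    "B": [("t1", 0.6), ("t2", 0.4)],  # pedestrian B's two course weights
}
best = top_m_combinations(candidates, 2)
```

With these weights the products are 0.42, 0.28, 0.18, and 0.12, so the combinations (s1, t1) and (s1, t2) survive.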
  • In step S117, it is determined whether steps S101, S102, and S111 to S116 have been repeated a specified number of times; if not, the process returns to step S101.
  • When they have been repeated the specified number of times, the plurality of predicted movement paths of the pedestrians up to the predetermined time are output to the output display 4 (step S108).
  • In the above, the top m patterns with the largest weight-value products are selected from the combination patterns of the movements of adjacent pedestrians; alternatively, the patterns whose weight-value product is equal to or greater than a predetermined value may be selected.
  • Step S111 constitutes mixed normal distribution generation means for expressing the probability density map as a mixed normal distribution, step S112 constitutes parameter acquisition means for obtaining the parameters of the mixed normal distribution, and steps S113 to S117 constitute course prediction means for predicting the course of the moving body using the created probability density map.
  • As described above, in the present embodiment, the parameters of a mixed normal distribution fitted to the probability density map are obtained, and the next velocity vectors of a plurality of pedestrians are obtained based on those parameters.
  • A plurality of predicted movement paths can thus be obtained.
  • The weight values at each time along the finally obtained predicted movement paths may also be summed for each path, and only the top several paths with the largest totals output.
  • Because a weight value is attached to each calculated predicted movement path, subsequent processing can make use of these weights.
  • In the above, the parameters of the mixed normal distribution are obtained using the EM algorithm. Because the EM algorithm converges by iterative calculation, however, the amount of computation is large; it is therefore desirable to obtain the parameters of the mixed normal distribution more simply, by a method other than the EM algorithm.
  • FIG. 13 is a flowchart showing details of a processing procedure for obtaining a parameter of a mixed normal distribution by a simple method.
  • In step S121, the probability density map is discretized into, for example, 10 × 10 cells, and the maximum points in the discretized map are extracted.
  • A point whose value is larger than those of all surrounding points is a maximum point. The number of maximum points may be limited to the top k.
  • When the probability density map shown in FIG. 14A is discretized, a discretized map such as that shown in FIG. 14B is obtained.
  • In FIG. 14, the map is shown simplified in two dimensions, and maximum points E1 and E2 are extracted from the discretized map.
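The extraction of maximum points in step S121 can be sketched as follows; the grid values are hypothetical, and a point is treated as a maximum when it is strictly greater than all of its 8 neighbours.

```python
def local_maxima(grid):
    """Return cells of a discretised probability map that are strictly
    greater than all of their (up to 8) neighbours."""
    rows, cols = len(grid), len(grid[0])
    maxima = []
    for i in range(rows):
        for j in range(cols):
            neighbours = [grid[i + di][j + dj]
                          for di in (-1, 0, 1) for dj in (-1, 0, 1)
                          if not (di == 0 and dj == 0)
                          and 0 <= i + di < rows and 0 <= j + dj < cols]
            if all(grid[i][j] > v for v in neighbours):
                maxima.append((i, j))
    return maxima

grid = [[0.1, 0.2, 0.1, 0.0],
        [0.2, 0.9, 0.2, 0.1],
        [0.1, 0.2, 0.1, 0.7],
        [0.0, 0.1, 0.2, 0.3]]
peaks = local_maxima(grid)  # two peaks, analogous to E1 and E2
```

Each extracted cell then seeds the refinement on the original map described in step S122.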
  • In step S122, the center of each maximum point in the discretized map is used as an initial value, and the corresponding maximum point on the original (pre-discretization) probability density map is obtained using the quasi-Newton method or the like.
  • The maximum point obtained in this way gives the center value μ of one normal distribution of the mixture (see FIG. 14).
  • In step S123, the points around each maximum point in the discretized map are examined. A surrounding point whose value is smaller than that of the maximum point is labeled as belonging to the same group as the maximum point; the surroundings of newly labeled points are examined in the same way, and points with smaller values are labeled in turn. The result is a connected region containing one maximum point. The same is done for all maximum points, and the variance-covariance matrix Σ of the original probability density map is obtained from each of these regions (step S123).
  • In step S124, the height w of each normal distribution is obtained from the center value μ obtained in step S122 and the variance-covariance matrix Σ obtained in step S123; a height w that minimizes the difference between the normal distribution and the probability density map is chosen.
  • As the difference between the normal distribution and the probability density map, the sum of squared differences between points on the normal distribution and the corresponding points on the probability density map is used.
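For a fixed center μ and spread σ, minimizing the sum of squared differences Σ(p − w·g)² over the height w has the closed form w = Σ p·g / Σ g², where g is the unit-height Gaussian. A one-dimensional sketch with a hypothetical sample grid:

```python
import math

def best_height(prob, centres, mu, sigma):
    """Closed-form least-squares height: minimising sum((p - w*g)^2)
    over w gives w = sum(p*g) / sum(g*g), where g is the unit-height
    Gaussian with centre mu and spread sigma."""
    g = [math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) for x in centres]
    return sum(p * gi for p, gi in zip(prob, g)) / sum(gi * gi for gi in g)

# Hypothetical samples: an exact half-height Gaussian recovers w = 0.5
centres = [i * 0.1 for i in range(-30, 31)]
prob = [0.5 * math.exp(-x ** 2 / (2 * 0.4 ** 2)) for x in centres]
w = best_height(prob, centres, 0.0, 0.4)
```

This avoids any iteration, which is consistent with the goal of obtaining the parameters with a small amount of calculation.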
  • In this way, the parameters of the mixed normal distribution can be obtained easily with a small amount of calculation.
  • Although the parameters obtained by this method are not exact, experiments have confirmed that they are sufficiently practical for the task of predicting pedestrian behavior.
  • FIG. 15 is a block diagram showing the schematic configuration of the third embodiment of the behavior prediction apparatus according to the present invention.
  • In this embodiment, the ECU 3 of the behavior prediction apparatus 1 has, in addition to the image processing unit 5, the storage unit 6, and the pedestrian course prediction unit 7, a group determination unit 8 that determines whether a plurality of pedestrians belong to one group.
  • FIG. 16 is a flowchart showing the details of a processing procedure executed by the group determination unit 8. Pedestrians forming a group often talk to each other, in which case they face each other; pedestrians forming a group may also hold hands.
  • In step S131, the face orientations and positions of a plurality of pedestrians are detected based on the image frame generated by the image processing unit 5. In step S132, it is determined whether the distance between the pedestrians is equal to or less than a predetermined value and their faces are turned toward each other.
  • When both conditions hold, it is determined that the pedestrians form a group (step S133). Thus, when two pedestrians face each other while remaining within the predetermined distance, the two are determined to form a group; and when one or both of them also faces another pedestrian within the predetermined distance, the three pedestrians are determined to form a group.
  • Hand-holding is determined by comparing the region V between pedestrians P1 and P2 with a hand-holding pattern obtained in advance by learning.
  • As the pattern matching method, the Viola-Jones (V&J) method, HOG features, or the like can be used.
  • When the pedestrians are holding hands, it is determined that they form a group (step S133); when they are not, it is determined that they do not form a group (step S135).
  • FIG. 18 is a flowchart showing the details of another processing procedure executed by the group determination unit 8, based on the assumption that pedestrians forming a group stay within a certain distance of each other for a certain time or longer.
  • In step S141, the positions of a plurality of pedestrians are detected based on the image frames generated by the image processing unit 5. In step S142, it is determined whether the situation in which the distance between the pedestrians is equal to or less than a predetermined value has continued for a predetermined time (for example, 3 seconds).
  • When the situation has continued for the predetermined time, it is determined that the pedestrians form a group (step S143); otherwise, it is determined that they do not form a group (step S144).
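The sustained-distance criterion of steps S142 to S144 can be sketched as follows; the distance threshold, frame count, and track format are assumptions for illustration, not values from the patent.

```python
import math

def forms_group(track_a, track_b, dist_thresh=1.5, min_frames=15):
    """Decide whether two pedestrians form a group: their distance must
    stay at or below dist_thresh for at least min_frames consecutive
    frames.  Tracks are lists of (x, y) positions sampled at a fixed
    frame rate; both thresholds here are illustrative assumptions."""
    run = best = 0
    for (xa, ya), (xb, yb) in zip(track_a, track_b):
        if math.hypot(xa - xb, ya - yb) <= dist_thresh:
            run += 1
            best = max(best, run)
        else:
            run = 0
    return best >= min_frames

# Two pedestrians walking side by side, 1 m apart, for 20 frames
a = [(0.1 * t, 0.0) for t in range(20)]
b = [(0.1 * t, 1.0) for t in range(20)]
grouped = forms_group(a, b)
```

A group detected this way would then be handed to the course prediction unit as a single virtual pedestrian.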
  • As shown in FIG. 19(a), when the distance between pedestrians P1 and P2 remains at or below the predetermined value for a long time, it is determined that they form a group. Conversely, as shown in FIG. 19(b), when the distance does not remain at or below the predetermined value for long, it is determined that they do not form a group.
  • When the group determination unit 8 determines that a plurality of pedestrians form a group, the pedestrian course prediction unit 7 regards the group as a single virtual pedestrian and predicts its course in the same manner as in the first or second embodiment.
  • In the above, the group determination unit 8 constitutes group determination means for determining, when a plurality of moving bodies are detected by the moving body detection means, whether or not they belong to one group.
  • the course is predicted on the assumption that the group is a single pedestrian.
  • a person acts as a group as a group, it is possible to prevent the pedestrian's prediction of the movement path from being erroneous.
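Treating a detected group as one virtual pedestrian might, for example, place that virtual pedestrian at the group centroid. This is a hypothetical choice for illustration; the patent does not specify how the virtual pedestrian's position is derived.

```python
def virtual_pedestrian(positions):
    """Collapse a group of pedestrian positions into one virtual pedestrian.

    Placing the virtual pedestrian at the group centroid is an
    illustrative assumption; the patent does not specify the placement.
    positions: list of (x, y) tuples, one per group member.
    """
    n = len(positions)
    x = sum(p[0] for p in positions) / n
    y = sum(p[1] for p in positions) / n
    return (x, y)
```

The course of this single virtual pedestrian would then be predicted exactly as for an individual pedestrian in the earlier embodiments.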
  • FIG. 20 is a block diagram showing a schematic configuration of the fourth embodiment of the behavior prediction apparatus according to the present invention.
  • The ECU 3 of the behavior prediction apparatus 1 of the present embodiment has a group determination/pedestrian course prediction unit 9 instead of the group determination unit 8 and the pedestrian course prediction unit 7 shown in FIG.
  • FIG. 21 is a flowchart showing details of a processing procedure executed by the group determination / pedestrian course prediction unit 9.
  • First, initial values of the movement paths of a plurality of pedestrians are obtained (step S151).
  • The calculation of these initial values is performed using the environment map related to the pedestrians, as in the first or second embodiment.
  • Subsequently, the determination of whether the pedestrians belong to one group and the prediction of each pedestrian's movement path are performed simultaneously (step S152).
  • FIG. 22 shows an example of a conditional random field graph model.
  • The first term represents the movement and appearance characteristics of a pedestrian P.
  • The second term represents the relationship between the positions and orientations of two pedestrians P.
  • The third term indicates whether two pedestrians P form a group.
  • The fourth term confirms that, when two two-pedestrian groups share a common pedestrian P, the group includes those three pedestrians P.
  • Each of these terms is a function that outputs a probability.
  • H represents the predicted courses of the pedestrians, L represents the group states between the pedestrians, I represents the input image sequence, and Θ represents the observed parameters.
  • When I and Θ are given, the meaning of the above formula is that H and L are determined such that log P is maximized.
  • ψ_motion represents the movement of the pedestrian, and ψ_app represents the appearance characteristics of the pedestrian. The speed given as a parameter represents the pedestrian's speed, and the appearance given as a parameter represents the pedestrian's appearance likelihood. ψ_pos represents the positional relationship between two pedestrians, and ψ_ang represents the relationship between the orientations of the two pedestrians.
  • i is a number identifying each target pedestrian. If there are n pedestrians in the scene, the Σ in the first term on the right side of the above equation (A) means the sum over i = 1 to n. That is, a probability is calculated for each pedestrian, and the probability for the entire scene is obtained by adding all of these probabilities.
  • The time subscript represents the elapsed time.
  • The speed given as a parameter is the speed of pedestrian i at each elapsed unit time, obtained from h_i; that is, it means the speed of pedestrian i after 0.2 seconds, after 0.4 seconds, ..., and after 2.0 seconds.
  • Since h_i represents the position of pedestrian i at each elapsed unit time, the velocity at each point in time can also be calculated from it.
  • The corresponding term means the probability that the course at that time is h_i when the speed is given.
  • This probability is obtained by determining the relationship between each pedestrian's walking speed and direction from past observation results. Although the details are not described here, it is defined as equation (B). In other words, for every pedestrian, the probability that the hypothesized course h_i can occur is obtained, and these probabilities are added over all pedestrians. When this summed probability is maximized, the scene is most likely, so the h_i obtained at that maximum are the courses to be obtained.
  • Since ψ_motion, ψ_app, ψ_pos, and ψ_ang are all functions whose values depend on h_i, the h_i that makes the combined probability of the scene highest is the most appropriate h_i in view of all of the elements ψ_motion, ψ_app, ψ_pos, and ψ_ang.
  • A probability function is similarly defined for the group state, and the probabilities represented by ψ_pos, ψ_ang, and the group-related terms vary depending on the group state.
  • For the actual maximization, an existing technique such as the Dual-Decomposition method is used.
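The joint maximization over H and L can be illustrated for a tiny two-pedestrian scene by exhaustive search. The `unary` and `pair` callables below are stand-ins for the ψ terms, not the patent's actual potential definitions; brute force is only feasible for such toy scenes, which is why a technique such as Dual-Decomposition is needed in practice.

```python
import math
from itertools import product

def joint_log_prob(h1, h2, grouped, unary, pair):
    """Schematic log P for a two-pedestrian scene, after equation (A):
    unary(h) stands in for the per-pedestrian psi_motion/psi_app terms,
    pair(h1, h2, grouped) for the pairwise psi_pos/psi_ang terms, whose
    value depends on the group state L as described in the text."""
    return (math.log(unary(h1)) + math.log(unary(h2))
            + math.log(pair(h1, h2, grouped)))

def maximize(cands1, cands2, unary, pair):
    """Exhaustively search for the (h1, h2, L) maximizing log P.

    Enumerating every combination is feasible only for tiny scenes;
    realistic inference uses a method such as Dual-Decomposition."""
    best, best_score = None, -math.inf
    for h1, h2, grouped in product(cands1, cands2, (False, True)):
        s = joint_log_prob(h1, h2, grouped, unary, pair)
        if s > best_score:
            best, best_score = (h1, h2, grouped), s
    return best, best_score
```

For example, with a unary term favoring straight courses and a pairwise term that rewards matching courses only when the pedestrians are grouped, the search returns both pedestrians walking straight as one group.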
  • the group determination / pedestrian course prediction unit 9 constitutes a course prediction unit that predicts the course of the moving object using the probability density map created by the probability density map creation unit.
  • In the present embodiment, the initial values of the movement paths of the plurality of pedestrians are obtained using the environment map described above.
  • However, the method is not limited to this; the initial values of the movement paths of the plurality of pedestrians may also be obtained using the conditional random field graph model.
  • The above embodiments create an environment map for a pedestrian and predict the behavior of the pedestrian, but an environment map may also be created for a moving body other than a pedestrian in order to predict the behavior of that moving body.
  • Moving bodies such as pedestrians may also be detected using sensors such as a laser radar or a millimeter-wave radar, or using GPS.
  • According to the present invention, the movement behavior of a moving body can be estimated with high accuracy. As a result, for example, when a moving body is tracked using this technique for estimating its movement behavior, the tracking performance can be improved.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)
  • Alarm Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present invention provides a behavior prediction device and method capable of accurately estimating the movement behavior of a moving body. The behavior prediction device creates a likelihood map representing the suitability of a pedestrian's next velocity vector based on a photographed image of the pedestrian, and then converts the likelihood map into a probability density map representing the probability density of the pedestrian's position after one unit time. The behavior prediction device places a plurality of hypotheses in the region of high probability density on the probability density map and calculates a plurality of predicted movement paths corresponding to the hypotheses. A hypothesis here corresponds to a single point on the probability density map and means that the pedestrian moves with the velocity vector corresponding to that point during the next unit time. Consequently, a predicted movement path can be obtained from the velocity vector.
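The hypothesis-placement step described in the abstract can be sketched as follows. The function names, the mapping from map cells to velocity vectors, and the constant-velocity roll-out of each course are illustrative assumptions, not the patent's exact construction.

```python
import numpy as np

def predict_courses(density, cell_to_velocity, pos, dt=1.0, n_hyp=3, steps=5):
    """Sketch of the abstract's procedure: take the n_hyp cells of highest
    probability density, read each as a velocity vector for the next unit
    time, and extrapolate a predicted movement path from position pos.

    density: 2-D array, the probability density map.
    cell_to_velocity: assumed callable mapping a map cell index to (vx, vy).
    """
    top = np.argsort(density, axis=None)[::-1][:n_hyp]  # highest-density cells
    courses = []
    for idx in top:
        cell = np.unravel_index(idx, density.shape)
        v = np.asarray(cell_to_velocity(cell), dtype=float)
        # Hypothesis: the pedestrian keeps this velocity for each unit time.
        course = [np.asarray(pos, dtype=float) + v * dt * k
                  for k in range(1, steps + 1)]
        courses.append(course)
    return courses
```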
PCT/JP2011/069832 2010-09-03 2011-08-31 Device and method for creating an environment map, and device and method for behavior prediction WO2012029878A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2010198212A JP2012058780A (ja) Environment map creation device and method, and behavior prediction device and method
JP2010-198212 2010-09-03

Publications (1)

Publication Number Publication Date
WO2012029878A1 true WO2012029878A1 (fr) 2012-03-08

Family

ID=45772955

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2011/069832 WO2012029878A1 (fr) Device and method for creating an environment map, and device and method for behavior prediction

Country Status (2)

Country Link
JP (1) JP2012058780A (fr)
WO (1) WO2012029878A1 (fr)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7284576B2 (ja) * 2018-12-19 2023-05-31 Subaru Corporation Behavior prediction system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001056805A (ja) * 1999-08-18 2001-02-27 Sony Corp 行動予測方法及びその装置
JP2004320217A (ja) * 2003-04-14 2004-11-11 Sony Corp 情報提供システム,携帯端末装置,グループ化装置,情報提供装置,サービス提供側装置,情報提供方法およびこれらに関するコンピュータプログラム
WO2007102405A1 (fr) * 2006-03-01 2007-09-13 Toyota Jidosha Kabushiki Kaisha Procédé destiné à déterminer une trajectoire pour un trajet de véhicule et dispositif destiné à déterminer un parcours de véhicule
JP2010003024A (ja) * 2008-06-19 2010-01-07 Panasonic Corp 行動予測装置、行動予測方法およびナビゲーション装置
JP2010145115A (ja) * 2008-12-16 2010-07-01 Nec Corp 目的地予測システム、目的地予測方法及びプログラム


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016045543A (ja) * 2014-08-20 2016-04-04 Yahoo Japan Corporation Information processing apparatus, information processing method, and program
WO2017130639A1 (fr) * 2016-01-28 2017-08-03 Ricoh Company, Ltd. Image processing device, imaging device, mobile body apparatus control system, image processing method, and program
JPWO2017130639A1 (ja) * 2016-01-28 2018-11-08 Ricoh Company, Ltd. Image processing device, imaging device, mobile body apparatus control system, image processing method, and program
US10984509B2 (en) 2016-01-28 2021-04-20 Ricoh Company, Ltd. Image processing apparatus, imaging device, moving body device control system, image information processing method, and program product
CN114402575A (zh) * 2020-03-25 2022-04-26 Hitachi, Ltd. Action recognition server, action recognition system, and action recognition method
CN114402575B (zh) * 2020-03-25 2023-12-12 Hitachi, Ltd. Action recognition server, action recognition system, and action recognition method

Also Published As

Publication number Publication date
JP2012058780A (ja) 2012-03-22

Similar Documents

Publication Publication Date Title
US10748061B2 (en) Simultaneous localization and mapping with reinforcement learning
JP6917878B2 (ja) Moving body behavior prediction device
US8024072B2 (en) Method for self-localization of robot based on object recognition and environment information around recognized object
O'Callaghan et al. Contextual occupancy maps using Gaussian processes
Ćesić et al. Radar and stereo vision fusion for multitarget tracking on the special Euclidean group
EP2202672B1 (fr) Appareil, procédé et programme de traitement d'informations
CN110189366B (zh) 一种激光粗配准方法、装置、移动终端及存储介质
Spaan et al. Active cooperative perception in network robot systems using POMDPs
JP5563796B2 (ja) Pedestrian movement estimation device and method
JP2017059207A (ja) Image recognition method
US20110169923A1 (en) Flow Separation for Stereo Visual Odometry
JP2005032196A (ja) Path planning system for mobile robot
WO2017051480A1 (fr) Image processing device and image processing method
Lin et al. Intelligent filter-based SLAM for mobile robots with improved localization performance
KR101517937B1 (ko) 클라이언트, 서버 및 이를 포함하는 무선 신호 지도 작성 시스템
WO2012029878A1 (fr) Device and method for creating an environment map, and device and method for behavior prediction
Chame et al. Neural network for black-box fusion of underwater robot localization under unmodeled noise
Skoglar et al. Pedestrian tracking with an infrared sensor using road network information
EP3098682B1 (fr) Commande d'objet mobile, programme et circuit intégré
JP2012003401A (ja) Landmark detection method, robot, and program
Munz et al. A sensor independent probabilistic fusion system for driver assistance systems
Silva et al. Towards a grid based sensor fusion for visually impaired navigation using sonar and vision measurements
Madhavan et al. Moving object prediction for off-road autonomous navigation
CN115131752A (zh) 学习方法、学习装置以及程序记录介质
CN112433193A (zh) 一种基于多传感器的模位置定位方法及系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11821893

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11821893

Country of ref document: EP

Kind code of ref document: A1