WO2014034779A1 - Individual characteristic estimation unit and individual characteristic estimation method - Google Patents

Individual characteristic estimation unit and individual characteristic estimation method Download PDF

Info

Publication number
WO2014034779A1
Authority
WO
WIPO (PCT)
Prior art keywords
characteristic
scene
instantaneous
driver
vehicle
Prior art date
Application number
PCT/JP2013/073140
Other languages
French (fr)
Japanese (ja)
Inventor
Mitsunobu Kaminuma (充伸 神沼)
Original Assignee
Nissan Motor Co., Ltd. (日産自動車株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nissan Motor Co., Ltd.
Publication of WO2014034779A1 publication Critical patent/WO2014034779A1/en

Links

Images

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • B60W40/09Driving style or behaviour
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems
    • G08G1/165Anti-collision systems for passive traffic, e.g. including static obstacles, trees
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems
    • G08G1/166Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2420/00Indexing codes relating to the type of sensors based on the principle of their operation
    • B60W2420/40Photo, light or radio wave sensitive means, e.g. infrared sensors
    • B60W2420/403Image sensing, e.g. optical camera

Definitions

  • The present invention relates to a personal characteristic estimation device and a personal characteristic estimation method for estimating a driver's characteristics based on the driver's operation information, the vehicle's behavior, and the driving scene while the driver is driving the vehicle.
  • Japanese Patent Application Laid-Open No. 2011-34430 (Patent Document 1) describes a known apparatus that estimates a driver's characteristics (a general term for personality, tendency, temperament, and the like) from the driver's behavior while driving a vehicle. Patent Document 1 discloses recognizing a driver's habits from the driving operations performed while driving and supporting driving based on those habits.
  • However, the apparatus of Patent Document 1 recognizes a driver's habits in order to support driving; it does not estimate the driver's basic characteristics.
  • An object of the present invention is to provide a personal characteristic estimation device and a personal characteristic estimation method capable of estimating a driver's basic characteristics.
  • A personal characteristic estimation device according to one aspect of the present invention includes: vehicle information acquisition means for acquiring vehicle information; scene estimation means for estimating, based on the vehicle information, a scene in which the vehicle travels; instantaneous characteristic model storage means for setting a plurality of scenes and a plurality of characteristic tendency labels indicating instantaneous characteristic tendencies of the driver, and for storing, as an instantaneous characteristic model, a characteristic model that associates each scene with the characteristic tendency labels likely to be expressed in that scene; instantaneous characteristic estimation means for estimating the driver's instantaneous characteristic tendency by referring to the instantaneous characteristic model based on the scene estimated by the scene estimation means; and characteristic tendency estimation means for statistically analyzing the instantaneous characteristic tendencies estimated by the instantaneous characteristic estimation means and estimating a static characteristic tendency of the driver.
  • A personal characteristic estimation method according to one aspect of the present invention sets a plurality of scenes and a plurality of characteristic tendency labels indicating instantaneous characteristic tendencies of the driver; stores, as an instantaneous characteristic model, a characteristic model that associates each scene with the characteristic tendency labels likely to be expressed in that scene; estimates, based on vehicle information, the scene in which the vehicle travels; estimates the driver's instantaneous characteristic tendency by referring to the instantaneous characteristic model based on the estimated scene; and estimates a static characteristic tendency of the driver by statistically analyzing the estimated instantaneous characteristic tendencies.
  • FIG. 1 is a block diagram showing the configuration of the personal characteristic estimation apparatus according to the first embodiment of the present invention.
  • FIG. 2 is an explanatory diagram schematically showing processing of the estimation unit of the personal characteristic estimation device according to the first embodiment of the present invention.
  • FIG. 3 is a flowchart showing a processing procedure of the personal characteristic estimation apparatus according to the first embodiment of the present invention.
  • FIG. 4 is a block diagram showing a configuration of the personal characteristic estimation apparatus according to the second embodiment of the present invention.
  • FIG. 5 is an explanatory diagram schematically showing an instantaneous characteristic model of the personal characteristic estimation device according to the second embodiment of the present invention.
  • FIG. 6 is an explanatory diagram schematically showing how a characteristic model is selected by the model switching unit of the personal characteristic estimation device according to the second embodiment of the present invention.
  • FIG. 7 is a flowchart showing a processing procedure of the personal characteristic estimation apparatus according to the second embodiment of the present invention.
  • FIG. 8A is an explanatory diagram illustrating an example of a likelihood threshold according to each embodiment of the present invention.
  • FIG. 8B is an explanatory diagram illustrating an example of a calculated likelihood value according to each embodiment of the present invention.
  • FIG. 9A is an explanatory diagram illustrating a relationship between a frequency threshold value and a frequency result according to each embodiment of the present invention.
  • FIG. 9B is an explanatory diagram illustrating a relationship between a frequency threshold value and a frequency result according to each embodiment of the present invention.
  • FIG. 10 is an explanatory diagram showing a relationship between a plurality of scenes and a characteristic tendency label that can be detected in each scene according to each embodiment of the present invention.
  • FIG. 11A is an explanatory diagram showing the situation of the scene (1) according to each embodiment of the present invention.
  • FIG. 11B is an explanatory diagram showing the situation of the scene (2) according to each embodiment of the present invention.
  • FIG. 11C is an explanatory diagram showing a situation of the scene (3) according to each embodiment of the present invention.
  • FIG. 11D is an explanatory diagram showing a situation of the scene (4) according to each embodiment of the present invention.
  • FIG. 12A is an explanatory diagram showing the situation of the scene (5) according to each embodiment of the present invention.
  • FIG. 12B is an explanatory diagram showing the situation of the scene (6) according to each embodiment of the present invention.
  • FIG. 12C is an explanatory diagram showing the situation of the scene (7) according to each embodiment of the present invention.
  • FIG. 12D is an explanatory diagram showing a situation of the scene (8) according to each embodiment of the present invention.
  • FIG. 13 is an explanatory diagram showing the relationship between the driving signal and the driver's static characteristics according to each embodiment of the present invention.
  • FIG. 14A is an explanatory diagram showing a procedure for designing an environmental corpus according to each embodiment of the present invention.
  • FIG. 14B is an explanatory diagram showing a procedure for designing an environmental corpus according to each embodiment of the present invention.
  • FIG. 15 is an explanatory diagram showing a procedure for designing a characteristic corpus according to each embodiment of the present invention.
  • FIG. 16 is an explanatory diagram showing the structure of the environmental model and the characteristic model according to each embodiment of the present invention.
  • FIG. 1 is a block diagram showing the configuration of the personal characteristic estimation apparatus according to this embodiment.
  • The personal characteristic estimation apparatus 100 includes a signal detection unit (vehicle information acquisition means) 11, an estimation unit 12, an instantaneous characteristic model storage unit (instantaneous characteristic model storage means) 15, and a tendency estimation unit (characteristic tendency estimation means) 16.
  • The estimation unit 12 includes a scene estimation unit (scene estimation means) 13 and an instantaneous characteristic estimation unit (instantaneous characteristic estimation means) 14.
  • The signal detection unit 11 acquires information on the operation of the vehicle and information on the behavior of the vehicle from a CAN (Controller Area Network) or the like mounted on the vehicle.
  • Examples of information on the operation of the vehicle include brake operation, steering angle, and accelerator opening.
  • Examples of information on the behavior of the vehicle include the traveling speed of the vehicle, the yaw rate, and the discharge level of the battery.
  • The signal detection unit 11 also acquires information on the environment outside the vehicle.
  • For example, the signal detection unit 11 acquires information on the external environment from a surrounding image captured by an in-vehicle camera (not shown) or from the traveling path of the host vehicle detected by a GPS device or the like.
  • In addition, the signal detection unit 11 acquires information on the operation history of the navigation device, air conditioner, and audio system mounted on the vehicle.
  • Hereinafter, the information on vehicle operation, the information on vehicle behavior, and the information on operation history are collectively referred to as the "vehicle signal".
  • The term "vehicle information" refers to at least one of: the information on vehicle operation, the information on vehicle behavior, the information on operation history, and the information on the environment outside the vehicle.
  • The instantaneous characteristic model storage unit 15 stores a plurality of preset scenes (scenes (1) to (8) shown in FIGS. 11A to 12D, described later).
  • The instantaneous characteristic model storage unit 15 also stores a plurality of characteristic tendency labels (hereinafter abbreviated as "labels") indicating the driver's characteristics, an environmental model used for estimating the scene, and, as the instantaneous characteristic model, characteristic models that associate each scene with the labels likely to be expressed in that scene.
  • Specifically, as shown in the correspondence table of FIG. 10, in which a plurality of scenes related to vehicle travel are arranged in the row direction and a plurality of labels in the column direction, the instantaneous characteristic model storage unit 15 stores the environmental model used for scene estimation and the characteristic models that associate each scene with the labels (the driver's instantaneous characteristics) prominently expressed in that scene.
  • In the row direction, scenes (1) to (8) are set, classified according to conditions such as road shape, time of day, traffic congestion information, external conditions, the presence or absence of pedestrians, the presence or absence of other vehicles, and the presence or absence of oncoming vehicles.
  • In the column direction, characteristic tendency labels (1) to (9), indicating various characteristics of the driver, are set.
  • In the table, the one or more labels likely to be expressed in each scene are indicated by circles. For example, in scene (1), in which the vehicle passes a "crosswalk without a signal on a narrow road" (FIG. 11A), labels (1) and (4) are circled.
  • Accordingly, the instantaneous characteristic model storage unit 15 stores two characteristic models in which labels (1) and (4) are associated with scene (1).
  • Since the estimation unit 12 uses characteristic models that each combine a scene with a characteristic (label) when estimating the driver's instantaneous characteristics, the instantaneous characteristic model storage unit 15 stores sixteen characteristic models in total, corresponding to the number of circles shown in FIG. 10.
  • Note that the correspondence table shown in FIG. 10 is merely an example.
  • For instance, the "determinism" column of label (9) is not circled for scene (1). This does not mean that a driver characteristic such as determinism is difficult to express in scene (1); it merely indicates that no data showing a relationship between scene (1) and label (9) has been obtained. The correspondence table shown in FIG. 10 may therefore be revised as research progresses.
  • Each characteristic tendency label is set for a characteristic whose correlation with a static characteristic detected by a general, static characteristic tendency measurement method, such as the AQ test (Autism-Spectrum Quotient), is greater than a predetermined value (for example, 0.2 or higher).
  • the “instantaneous characteristic” is an instantaneous characteristic estimated based on the behavior of the driver.
  • a “static characteristic” which is a personal characteristic of the driver (generic name such as personality, tendency, and temperament) is obtained based on a number of “instantaneous characteristics”.
  • The scene estimation unit 13 shown in FIG. 1 estimates the scene in which the host vehicle travels by referring to the environmental model stored in the instantaneous characteristic model storage unit 15, based on the vehicle signals (information on vehicle operation, vehicle behavior, and operation history) detected by the signal detection unit 11 and the information on the environment outside the vehicle.
  • For example, the scene estimation unit 13 grasps the situation in which the host vehicle is currently traveling from a surrounding image captured by an in-vehicle camera (not shown) and estimates the scene.
  • Referring to the environmental model, the scene estimation unit 13 estimates which of the scenes (1) to (8), shown in the row direction of the correspondence table of FIG. 10, the current situation corresponds to.
  • Based on the vehicle signal detected by the signal detection unit 11 and the scene estimated by the scene estimation unit 13, the instantaneous characteristic estimation unit 14 refers to the characteristic models stored in the instantaneous characteristic model storage unit 15 and estimates the driver's instantaneous characteristics.
  • Specifically, the instantaneous characteristic estimation unit 14 obtains the likelihood of each label from the vehicle signal using an HMM (Hidden Markov Model) algorithm (an algorithm for statistically analyzing characteristic tendencies) or the like. The detailed procedure is described later.
  • The personal characteristic estimation apparatus 100 can be configured as an integrated computer including a central processing unit (CPU), RAM, ROM, and storage means such as a hard disk.
  • A general questionnaire C1 asks the subject (in this embodiment, the driver) about his or her behavior and introspection when a specific environment is given, and estimates the subject's static characteristics from the answers. Therefore, to estimate, from driving operation signals and vehicle behavior signals (hereinafter, "driving signals"), static characteristics similar to those estimated by the general questionnaire C1, it suffices to be able to estimate, from the driving signals, the specific environment given by the questionnaire C1 and the driver's behavior when that environment is given.
  • Since driving behavior is determined instantaneously by the driver's characteristics, the environment, the surrounding conditions, traffic rules, and manners, the driver's instantaneous characteristics can be detected by observing the driving behavior.
  • However, the driving signal itself does not contain information representing the relationship between driving behavior and the driver's static characteristics, so a driving-version questionnaire C2 is required to represent that relationship.
  • In the driving questionnaire C2, the subject is given a question such as "Do you think you should always yield to pedestrians who are trying to cross?"
  • The subject answers the question while imagining the actual driving scene, for example "I would stop before the pedestrian crossing" or "I would go first while observing the pedestrian."
  • The driver's static characteristics can thus be estimated by observing the driving behaviors described in the driving questionnaire C2.
  • For example, when a signal indicating that the driver stepped on the brake and stopped once appears before the pedestrian crossing, a behavior such as "going first while observing the pedestrian" is expressed in the driving signal.
  • By extracting the driving signal C3 produced when the driver acts in a specific environment, a label explaining the driving behavior (for example, "other-person sympathy") can be attached to it.
  • As a result, the driving behavior can be estimated from the driving signal alone, and the driver's instantaneously observed characteristics can be computed from the estimated driving behavior.
  • Consequently, static characteristics equivalent to those expected from the score (answer result) of the psychological test (general questionnaire C1) can be estimated automatically.
  • First, the estimation unit 12 divides the driving signal input from the signal detection unit 11 into unit-time segments and performs environment recognition.
  • Using an environmental model based on an HMM (Hidden Markov Model), the estimation unit 12 recognizes, from the input driving signal, one pattern among patterns prepared in advance that represent a plurality of environments, such as road shapes (right/left turns, narrow roads) and traffic congestion situations.
  • Next, the estimation unit 12 selects a characteristic model based on the environment represented by the recognized pattern.
  • For example, when a "right turn" is recognized, the estimation unit 12 selects a characteristic model trained in advance so that the driver's instantaneous characteristics can be recognized in the "right turn" environment.
  • The estimation unit 12 then recognizes the driver's instantaneously observed characteristics using the selected characteristic model.
  • The recognized instantaneous characteristics are accumulated each time a recognition result is produced.
  • Finally, the tendency estimation unit 16 obtains the driver's static characteristics based on the occurrence frequency and distribution of the accumulated instantaneous characteristics, as sketched below.
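The flow just described can be summarized in a short structural sketch. Everything below is an assumption about shape rather than the patent's implementation: the model objects are hypothetical stand-ins exposing `recognize()` and `score()` methods, and the chunking helper is invented for illustration.

```python
from collections import Counter

def split_per_unit_time(signal, unit_time):
    """Divide the driving signal into unit-time segments (simple chunking)."""
    return [signal[i:i + unit_time] for i in range(0, len(signal), unit_time)]

def estimate_static_characteristic(driving_signal, env_model, char_models,
                                   unit_time, likelihood_threshold):
    """Structural sketch of the FIG. 2 flow (all APIs are hypothetical)."""
    histogram = Counter()
    for segment in split_per_unit_time(driving_signal, unit_time):
        env = env_model.recognize(segment)    # environment recognition
        models = char_models[env]             # select matching characteristic model
        label, likelihood = max(
            ((lbl, m.score(segment)) for lbl, m in models.items()),
            key=lambda pair: pair[1])         # instantaneous characteristic
        if likelihood > likelihood_threshold[label]:
            histogram[label] += 1             # accumulate recognition results
    # The static characteristic is read off the occurrence frequency and
    # distribution of the accumulated instantaneous characteristics.
    return histogram.most_common(1)[0][0] if histogram else None
```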
  • The environmental corpus and the characteristic corpus are designed in advance by the following method.
  • The environmental model is obtained by prior learning based on the environmental corpus, and the characteristic model is obtained by prior learning based on the characteristic corpus.
  • FIGS. 14A and 14B are explanatory diagrams showing an example of collecting and extracting driving signals (vehicle speed in this example) in three environments from each of the drivers D1 to Dn.
  • Environment S1 in FIG. 14A shows congestion on a straight road, S2 a narrow-road crossing, and S3 a right turn at a T-junction.
  • FIG. 14B shows vehicle speed data cut out for each environment (S1 to S3), with environmental symbols (straight-road congestion, narrow-road crossing, T-junction right turn) attached to the extracted data.
  • For the characteristic corpus, driving signals are collected in a plurality of environments from each of the drivers D1 to Dn over a long period, and vehicle speed data are extracted and classified by environmental symbol. The vehicle speed data collected for each environmental symbol are further classified by the driving behavior corresponding to the driver's instantaneous characteristics, and a characteristic symbol is attached to the vehicle speed data collected for each driving behavior.
  • FIG. 15 shows characteristic symbols (empathic behavior, non-empathic behavior) being attached to the vehicle speed data collected for each driving behavior (a1, a2) when the environmental symbol is "narrow-road crossing, with pedestrians".
  • FIG. 16 is an explanatory diagram showing the structure of the environmental model and the characteristic model, both of which use the left-to-right model used in speech recognition.
  • The instantaneous characteristic model is generated by using each corpus to learn, for each symbol, the state transition probabilities and the output probabilities produced when an input parameter sequence is observed.
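As one concrete reading of this step, a left-to-right HMM could be trained per symbol with a library such as hmmlearn. The sketch below is an assumption: the patent does not prescribe a library, the state count is arbitrary, and the Δ/ΔΔ feature layout is borrowed from the parameterization described later.

```python
import numpy as np
from hmmlearn import hmm  # assumed dependency; any HMM toolkit would do

def train_left_to_right_hmm(sequences, n_states=4):
    """Train one left-to-right HMM on all parameter sequences sharing a symbol.

    sequences: list of (T_i, n_features) arrays of Δ/ΔΔ parameters extracted
    from the corpus for one environmental or characteristic symbol.
    """
    X = np.vstack(sequences)
    lengths = [len(s) for s in sequences]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            init_params="mc", n_iter=50)
    # Left-to-right structure as in FIG. 16: start in state 0; each state may
    # only stay or advance. Structural zeros survive Baum-Welch re-estimation.
    model.startprob_ = np.eye(n_states)[0]
    transmat = np.zeros((n_states, n_states))
    for i in range(n_states - 1):
        transmat[i, i] = transmat[i, i + 1] = 0.5
    transmat[-1, -1] = 1.0
    model.transmat_ = transmat
    model.fit(X, lengths)  # learns transition and output probabilities
    return model
```

One such model would be trained per symbol, that is, per environment for the environmental model and per scene-label pair for the characteristic models; recognition then reduces to scoring an input sequence against each candidate model.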
  • In step S11, the signal detection unit 11 acquires various vehicle signals and information on the environment outside the vehicle. Specifically, it acquires, via the CAN mounted on the vehicle, vehicle signals such as accelerator opening, brake operation, steering angle, vehicle speed, and yaw rate, together with external video signals captured by an in-vehicle camera and current position information from GPS.
  • In step S12, the signal detection unit 11 calculates and parameterizes the primary differences between the acquired signal values at a fixed time interval (each referred to as a difference Δ) and the secondary differences between the primary differences (each referred to as a difference ΔΔ).
  • For example, for the accelerator opening, the signal detection unit 11 calculates the difference Δ1 from the values obtained at times t1 and t2, the difference Δ2 from times t2 and t3, the difference Δ3 from times t3 and t4, and the difference Δ4 from times t4 and t5. The signal detection unit 11 further obtains the difference ΔΔ1 between Δ1 and Δ2, the difference ΔΔ2 between Δ2 and Δ3, and the difference ΔΔ3 between Δ3 and Δ4.
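Numerically, this parameterization amounts to first and second discrete differences of each sampled signal. A minimal sketch, assuming a uniformly sampled NumPy array:

```python
import numpy as np

def parameterize(signal: np.ndarray) -> np.ndarray:
    """Step S12 sketch: first differences Δ and second differences ΔΔ.

    signal: 1-D array of one vehicle signal (e.g. accelerator opening at
    times t1, t2, ...) sampled at a fixed time interval.
    """
    delta = np.diff(signal)   # Δ1 = x(t2)-x(t1), Δ2 = x(t3)-x(t2), ...
    delta2 = np.diff(delta)   # ΔΔ1 = Δ2-Δ1, ΔΔ2 = Δ3-Δ2, ...
    # Align lengths and stack into an observation matrix for the HMM.
    return np.column_stack([delta[1:], delta2])

# Five accelerator samples yield Δ1..Δ4 and ΔΔ1..ΔΔ3, as in the example above.
print(parameterize(np.array([0.10, 0.15, 0.22, 0.26, 0.27])))
```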
  • Next, the scene estimation unit 13 refers to the environmental model and estimates the scene in which the host vehicle is traveling, based on the various vehicle signals detected by the signal detection unit 11 and the information on the environment outside the vehicle.
  • The instantaneous characteristic estimation unit 14 then refers to the characteristic models and calculates the likelihood of each label associated with the estimated scene (the labels circled in the correspondence table shown in FIG. 10).
  • Specifically, the instantaneous characteristic estimation unit 14 calculates the likelihood of each label using an HMM (Hidden Markov Model) algorithm, substituting the differences Δ and ΔΔ obtained in step S12 into the algorithm. Since the details of the HMM algorithm are a well-known technique, they are not described here.
  • For example, when the scene estimation unit 13 estimates that the vehicle is traveling through the "crosswalk without a signal on a narrow road" of scene (1) in the correspondence table shown in FIG. 10, the likelihood of "other-person sympathy" (label (1)) and the likelihood of "non-sympathy" (label (4)) are calculated.
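In terms of the training sketch given earlier, the likelihood calculation might look as follows. This is an assumption about structure; note that hmmlearn's `score` returns a log-likelihood, so the absolute scale of the threshold values in FIG. 8A would not carry over directly.

```python
def label_likelihoods(scene, observations, scene_labels, label_hmms):
    """Score the Δ/ΔΔ observation matrix against each circled label of a scene.

    scene_labels: mapping scene -> circled labels (the FIG. 10 table)
    label_hmms:   mapping (scene, label) -> trained characteristic HMM
    """
    return {label: label_hmms[(scene, label)].score(observations)
            for label in scene_labels[scene]}

# For scene (1), only "other-person sympathy" (label (1)) and "non-sympathy"
# (label (4)) are scored; no other label's model is evaluated.
```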
  • In step S14, the instantaneous characteristic estimation unit 14 determines whether the likelihood of the label having the maximum likelihood exceeds a preset threshold value.
  • For example, in the case of scene (1) in the correspondence table shown in FIG. 10, the likelihood threshold for "other-person sympathy" (label (1)) and the likelihood threshold for "non-sympathy" (label (4)) are set to "3000" and "2000" respectively, as shown in FIG. 8A.
  • The instantaneous characteristic estimation unit 14 maintains a histogram for each label.
  • In step S15, the tendency estimation unit 16 increments the histogram value of the label having the maximum likelihood, provided that its likelihood exceeds the threshold.
  • For example, assume that at detection time t1 the HMM algorithm yields a likelihood of "2500" for "other-person sympathy" (label (1)) and "2300" for "non-sympathy" (label (4)), as shown in FIG. 8B. Although label (1) has the maximum likelihood at detection time t1, its likelihood "2500" does not exceed the likelihood threshold "3000" (NO in step S14), so the tendency estimation unit 16 does not increment the histogram value of label (1).
  • Conversely, when "non-sympathy" (label (4)) has the maximum likelihood and its likelihood exceeds the threshold "2000", the tendency estimation unit 16 increments the histogram value of label (4).
  • That is, the histogram accumulates the number of times the likelihood has exceeded the likelihood threshold.
  • The histogram is stored in a memory or the like.
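Steps S14 and S15 together implement a thresholded vote: a label's histogram bin is incremented only when that label both has the maximum likelihood and clears its own likelihood threshold. A minimal sketch reusing the FIG. 8A/8B numbers:

```python
from collections import Counter

LIKELIHOOD_THRESHOLD = {"other_person_sympathy": 3000,  # label (1), FIG. 8A
                        "non_sympathy": 2000}           # label (4), FIG. 8A
histogram = Counter()  # one bin per label, kept in memory

def accumulate(likelihoods: dict) -> None:
    """Steps S14-S15: increment the histogram only for a confident winner."""
    label, value = max(likelihoods.items(), key=lambda kv: kv[1])
    if value > LIKELIHOOD_THRESHOLD[label]:
        histogram[label] += 1

accumulate({"other_person_sympathy": 2500, "non_sympathy": 2300})
print(histogram)  # stays empty: 2500 does not exceed 3000 (the FIG. 8B case)
```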
  • FIG. 9A is an explanatory diagram showing the relationship between the frequency threshold and the frequency result.
  • FIG. 9B is an explanatory diagram showing a frequency threshold and a histogram of frequency results.
  • As described above, when the likelihood of the label having the maximum likelihood exceeds the likelihood threshold, the tendency estimation unit 16 increments the "frequency result" value for that label, shown in FIGS. 9A and 9B.
  • In step S16, the tendency estimation unit 16 determines whether the maximum frequency value appearing in the histogram exceeds a preset frequency threshold value.
  • In the example shown in FIG. 9B, the frequency result of "non-sympathy" (label (4)), which is the maximum frequency value, exceeds the frequency threshold value of "4". The tendency estimation unit 16 therefore determines that the maximum frequency value appearing in the histogram exceeds the frequency threshold (YES in step S16).
  • In step S17, the tendency estimation unit 16 outputs "non-sympathy" (label (4)) as a static characteristic of the driver. That is, the tendency estimation unit 16 determines that the static characteristic of the driver driving the vehicle is "non-sympathetic" and outputs the determination result to a downstream device (not shown).
  • The frequency results shown in the histogram of FIG. 9B increase with time and will eventually exceed the frequency threshold. To avoid this phenomenon, it is also possible to accumulate the frequency results over a fixed period or a fixed number of detections and to use a value obtained by normalizing the accumulated data as the frequency result.
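One way to read this normalization remark is to keep only a fixed window of recent detections and compare each label's share of that window, rather than its raw count, against the frequency threshold. The window size and threshold below are assumptions for illustration:

```python
from collections import Counter, deque

WINDOW = 100          # assumed: number of recent detections retained
FREQ_THRESHOLD = 0.4  # assumed: normalized share required for a decision

recent = deque(maxlen=WINDOW)  # most recent maximum-likelihood labels

def static_characteristic():
    """Return a label once its normalized frequency clears the threshold."""
    if not recent:
        return None
    label, count = Counter(recent).most_common(1)[0]
    return label if count / len(recent) > FREQ_THRESHOLD else None
```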
  • Scene (1) is the situation shown in FIG. 11A.
  • Next, scenes (2) to (8) in the correspondence table shown in FIG. 10 are described with reference to FIGS. 11B to 12D.
  • FIG. 11B shows a situation in which the host vehicle passes through the “crosswalk without a stop line on a narrow road” in the scene (2).
  • FIG. 11C shows a situation in which the vehicle makes a “left turn from a narrow road to a wide road” in the scene (3).
  • FIG. 11D shows a situation in which the host vehicle passes through “a pedestrian crossing on a one-lane road with good visibility” in scene (4).
  • FIG. 12A shows a situation where the host vehicle makes a “left turn from a wide road to a narrow road” in the scene (5).
  • FIG. 12B shows a situation in which the host vehicle travels the "straight line in a congested narrow road" of scene (6).
  • FIG. 12C shows a situation in which the host vehicle travels in “Merge with heavy traffic” in scene (7).
  • FIG. 12D shows a situation in which the host vehicle performs the “lane changing action in a traffic jam” of the scene (8).
  • That is, the instantaneous characteristic estimation unit 14 calculates the likelihood of each label associated with the scene estimated by the scene estimation unit 13 among scenes (1) to (8) (the labels circled in the correspondence table shown in FIG. 10).
  • As described above, the personal characteristic estimation device 100 stores, as the instantaneous characteristic model, the environmental model used for estimating the scene and the characteristic models that associate each scene with the labels (the driver's instantaneous characteristics) prominently expressed in that scene.
  • The personal characteristic estimation device 100 estimates the scene in which the host vehicle travels based on the vehicle information, referring to the environmental model.
  • The personal characteristic estimation device 100 then refers to the characteristic models and calculates the likelihood of each label associated with the estimated scene using an HMM or similar algorithm, based on the driver's operation information.
  • The personal characteristic estimation device 100 increments the histogram value of the label having the maximum likelihood.
  • Finally, the personal characteristic estimation device 100 determines that the label having the maximum frequency value (the driver's instantaneous characteristic) is the driver's static characteristic.
  • Thus, the personal characteristic estimation device 100 can estimate the driver's static characteristics (a general term for personality, tendency, temperament, and the like) with high accuracy, based on the behaviors the driver takes while driving the vehicle and the surrounding environment of the vehicle.
  • For example, when guidance is provided to the driver through an HMI (Human Machine Interface) based on the estimated static characteristics, an appropriate response corresponding to the driver's static characteristics can be taken.
  • As a concrete case, the following three types of guidance are conceivable when the navigation device detects a traffic jam and searches for a detour: (1) explain to the driver the reason for detouring and set the detour automatically; (2) let the driver select whether to take the detour or the original route; (3) set the detour automatically without explaining anything to the driver. If the driver's static characteristic estimated by the personal characteristic estimation device 100 is used, an appropriate guidance can be selected from the guidances (1) to (3) according to the driver's static characteristic. A hypothetical mapping is sketched below.
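A navigation HMI could branch on the estimated characteristic along these lines. The mapping below from characteristics to guidances (1) to (3) is purely hypothetical: the text lists the three guidances but does not specify which characteristic should receive which.

```python
def choose_detour_guidance(static_characteristic: str) -> str:
    """Pick one of the three guidances; the mapping itself is an assumption."""
    if static_characteristic == "other_person_sympathy":
        # (1) explain the reason and set the detour automatically
        return "explain_and_auto_detour"
    if static_characteristic == "determinism":
        # (2) let the driver choose between detour and original route
        return "ask_driver"
    # (3) set the detour automatically without explanation
    return "silent_auto_detour"
```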
  • Conventionally, a driver's static characteristics have been estimated by psychological methods, such as having the subject take an AQ test using a questionnaire.
  • However, such conventional methods cannot always obtain truthful answers, and they also carry the risk of leaking personal information.
  • In contrast, the personal characteristic estimation device 100 of the present embodiment estimates the driver's static characteristics from the driver's behavior in each scene while driving the vehicle, so the driver's static characteristics can be estimated accurately and at low cost.
  • Further, since the instantaneous characteristic model storage unit 15 stores characteristic models that each combine a scene and a characteristic (label), the personal characteristic estimation device 100 can reduce the required storage capacity.
  • The signal detection unit 11 acquires at least one signal among the vehicle's accelerator opening, brake operation, steering angle, yaw rate, vehicle speed, and battery discharge level. Since the estimation unit 12 estimates the scene with the scene estimation unit 13 based on the signals acquired by the signal detection unit 11 and detects the driver's behavior with the instantaneous characteristic estimation unit 14, the personal characteristic estimation device 100 can estimate the driver's static characteristics with high accuracy from the vehicle signals.
  • Since the scene estimation unit 13 estimates the scene in which the host vehicle travels based on data such as road shape, time of day, traffic congestion information, external conditions, and the presence or absence of pedestrians, other vehicles, and oncoming vehicles, the personal characteristic estimation device 100 can estimate an appropriate scene according to the situation around the host vehicle and can estimate the driver's static characteristics with high accuracy.
  • The instantaneous characteristic estimation unit 14 uses a hidden Markov model (HMM) as the method for statistically analyzing the results estimated with the instantaneous characteristic model.
  • Since the personal characteristic estimation device 100 sets, as characteristic tendency labels, characteristics having a correlation of a predetermined value or more (for example, 0.2 or more) with characteristics detected by a conventional general, static characteristic tendency measurement method such as the AQ test, labels from which the driver's static characteristics can easily be detected can be appropriately selected, and the driver's static characteristics can be estimated with high accuracy.
  • Since the tendency estimation unit 16 detects the label having the maximum likelihood at arbitrary intervals, accumulates the detected labels to form a histogram, calculates the distribution for each label, and estimates the driver's static characteristic based on the label whose histogram value exceeds a threshold, the personal characteristic estimation device 100 can estimate the driver's static characteristics with high accuracy.
  • FIG. 4 is a block diagram showing the configuration of the personal characteristic estimation apparatus according to this embodiment.
  • As shown in FIG. 4, the personal characteristic estimation apparatus 101 includes a signal detection unit 11, a scene estimation unit 13, an instantaneous characteristic estimation unit 14, an instantaneous characteristic model storage unit 15, a tendency estimation unit 16, and a model switching unit (model switching means) 17.
  • Since the signal detection unit 11 has the same configuration as the signal detection unit 11 shown in FIG. 1, its description is omitted here.
  • The instantaneous characteristic model storage unit 15 stores a plurality of preset scenes (scenes (1) to (8) shown in FIGS. 11A to 12D).
  • The instantaneous characteristic model storage unit 15 also stores a plurality of characteristic tendency labels (hereinafter abbreviated as "labels") indicating the driver's characteristics, an environmental model used for estimating the scene, and, as the instantaneous characteristic model, characteristic models that associate each scene with the labels likely to be expressed in that scene.
  • Specifically, as shown in the correspondence table of FIG. 10, in which a plurality of scenes related to vehicle travel are arranged in the row direction and a plurality of labels in the column direction, the instantaneous characteristic model storage unit 15 stores characteristic models in which each scene is associated with the labels (the driver's instantaneous characteristics) prominently expressed in that scene.
  • The instantaneous characteristic model storage unit 15 stores the plurality of characteristic models classified by scene. Specifically, as shown in FIG. 5, label (1), label (4), and a garbage model (others) are stored as the characteristic models of scene (1), and label (2), label (6), and a garbage model are stored as the characteristic models of scene (2). Similarly, for scene (3) onward, the one or more labels associated with each scene are stored as its characteristic models.
  • Based on the vehicle signals (information on vehicle operation, vehicle behavior, and operation history) and the information on the environment outside the host vehicle detected by the signal detection unit 11, the scene estimation unit 13 estimates the scene in which the host vehicle travels by referring to the environmental model stored in the instantaneous characteristic model storage unit 15.
  • For example, the scene estimation unit 13 grasps the situation in which the host vehicle is currently traveling from a surrounding image captured by an in-vehicle camera (not shown) and estimates the scene.
  • Referring to the environmental model, the scene estimation unit 13 estimates which of the scenes (1) to (8), shown in the row direction of the correspondence table of FIG. 10, the current situation corresponds to.
  • The model switching unit 17 selects the desired characteristic model from the plurality of characteristic models stored in the instantaneous characteristic model storage unit 15, based on the scene estimated by the scene estimation unit 13. That is, as illustrated in FIG. 6, the model switching unit 17 selects the characteristic models associated with the scene estimated by the scene estimation unit 13 from among the plurality of characteristic models. Specifically, when the current scene is the "crosswalk without a signal on a narrow road" of scene (1) in the correspondence table shown in FIG. 10, the model switching unit 17 selects the characteristic models associated with scene (1), namely "other-person sympathy" (label (1)), "non-sympathy" (label (4)), and "garbage".
  • The instantaneous characteristic estimation unit 14 estimates the driver's instantaneous characteristics using the vehicle signals detected by the signal detection unit 11 and the characteristic models selected by the model switching unit 17. For example, as shown in FIG. 6, when the model switching unit 17 selects the characteristic models of scene (2), the label having the maximum likelihood is extracted from among label (2), label (6), and garbage, which are stored as the scene (2) characteristic models, and is output to the tendency estimation unit 16. The method of extracting the maximum-likelihood label is described later; a sketch of this division of labor follows below.
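Only the models of the estimated scene, including its garbage model, are scored, which is what reduces the computational load discussed later. The object layout in this sketch is an assumption.

```python
def estimate_instantaneous_characteristic(scene, observations, scene_models):
    """Second-embodiment sketch: switch models per scene, then pick the winner.

    scene_models: mapping scene -> {label: hmm, ..., "garbage": hmm}
    (the per-scene characteristic models of FIG. 5, garbage model included)
    """
    selected = scene_models[scene]                    # model switching unit 17
    scores = {label: model.score(observations)        # instantaneous characteristic
              for label, model in selected.items()}   # estimation unit 14
    return max(scores.items(), key=lambda kv: kv[1])  # (label, likelihood)
```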
  • In step S31, the signal detection unit 11 acquires various vehicle signals and information on the environment outside the vehicle. Specifically, it acquires, via the CAN mounted on the vehicle, vehicle signals such as accelerator opening, brake operation, steering angle, vehicle speed, and yaw rate, together with external video signals captured by an in-vehicle camera and position information from GPS navigation.
  • In step S32, the signal detection unit 11 calculates and parameterizes the primary differences between the acquired signal values at a fixed time interval (each referred to as a difference Δ) and the secondary differences between the primary differences (each referred to as a difference ΔΔ).
  • Next, the scene estimation unit 13 refers to the environmental model and, based on the information on the environment outside the vehicle detected by the signal detection unit 11 (specifically, the external image captured by the in-vehicle camera and the position of the host vehicle measured by a GPS device), calculates the scene likelihood of the host vehicle and estimates the scene in which it is traveling.
  • In step S34, the model switching unit 17 determines whether the scene estimated by the scene estimation unit 13 is garbage. If the estimated scene is garbage, the process returns to step S31; if not, the process proceeds to step S35.
  • In step S35, the model switching unit 17 selects the characteristic models associated with the estimated scene from among the plurality of characteristic models respectively associated with scenes (1) to (8) (see FIG. 5).
  • For example, when scene (2) is estimated, the characteristic models associated with scene (2) are selected as shown in FIG. 6.
  • In step S36, the instantaneous characteristic estimation unit 14 calculates the likelihood (characteristic likelihood) of each label using an HMM (Hidden Markov Model) algorithm, based on the difference Δ and the difference ΔΔ obtained in step S32.
  • For example, for the accelerator opening, the signal detection unit 11 calculates the difference Δ1 from the values obtained at times t1 and t2, the difference Δ2 from times t2 and t3, the difference Δ3 from times t3 and t4, and the difference Δ4 from times t4 and t5. It further obtains the difference ΔΔ1 between Δ1 and Δ2, the difference ΔΔ2 between Δ2 and Δ3, and the difference ΔΔ3 between Δ3 and Δ4.
  • The instantaneous characteristic estimation unit 14 calculates the likelihood of each label by substituting these data into the HMM algorithm.
  • For example, when the scene estimation unit 13 estimates that the host vehicle is traveling through the "crosswalk without a signal on a narrow road" of scene (1) in the correspondence table shown in FIG. 10, the instantaneous characteristic estimation unit 14 calculates the likelihood of "other-person sympathy" (label (1)) and the likelihood of "non-sympathy" (label (4)).
  • In step S37, the instantaneous characteristic estimation unit 14 determines whether the likelihood of the label having the maximum likelihood exceeds a preset threshold value.
  • For example, in the case of scene (1) in the correspondence table shown in FIG. 10, the likelihood threshold for "other-person sympathy" (label (1)) and the likelihood threshold for "non-sympathy" (label (4)) are set to "3000" and "2000" respectively, as shown in FIG. 8A.
  • The instantaneous characteristic estimation unit 14 maintains a histogram for each label.
  • When the threshold is exceeded, the tendency estimation unit 16 increments the histogram value of the label having the maximum likelihood.
  • For example, assume that at detection time t1 the HMM algorithm yields a likelihood of "2500" for "other-person sympathy" (label (1)) and "2300" for "non-sympathy" (label (4)), as shown in FIG. 8B.
  • The likelihood "2500" of label (1) does not exceed the likelihood threshold "3000" (NO in step S37), so the tendency estimation unit 16 does not increment the histogram value of label (1).
  • Conversely, when "non-sympathy" (label (4)) has the maximum likelihood and its likelihood exceeds the threshold "2000", the tendency estimation unit 16 increments the histogram value of label (4).
  • That is, the histogram accumulates the number of times the likelihood has exceeded the likelihood threshold.
  • The histogram is stored in a memory or the like.
  • FIG. 9A is an explanatory diagram showing the relationship between the frequency threshold and the frequency result.
  • FIG. 9B is an explanatory diagram showing a histogram of frequency results.
  • As described above, when the likelihood of the label having the maximum likelihood exceeds the likelihood threshold, the tendency estimation unit 16 increments the "frequency result" value for that label, shown in FIGS. 9A and 9B.
  • In step S39, the tendency estimation unit 16 determines whether the normalized maximum frequency value appearing in the histogram exceeds a preset frequency threshold value.
  • In the example shown in FIG. 9B, the frequency result of "non-sympathy" (label (4)), which is the normalized maximum frequency value, exceeds the frequency threshold value of "4".
  • The tendency estimation unit 16 therefore determines that the normalized maximum frequency value appearing in the histogram exceeds the frequency threshold (YES in step S39).
  • In step S40, the tendency estimation unit 16 outputs "non-sympathy" (label (4)) as a static characteristic of the driver. That is, the tendency estimation unit 16 determines that the static characteristic of the driver driving the vehicle is "non-sympathetic" and outputs the determination result to a downstream device (not shown).
  • As described above, the personal characteristic estimation device 101 stores, as the instantaneous characteristic model, the environmental model used for estimating the scene and the characteristic models that associate each scene with the labels (the driver's instantaneous characteristics) prominently expressed in that scene.
  • The personal characteristic estimation apparatus 101 estimates the scene in which the host vehicle travels based on the vehicle information, referring to the environmental model.
  • The personal characteristic estimation device 101 then refers to the characteristic models, recognizes the labels associated with the estimated scene, and calculates the likelihood of each label using an HMM or similar algorithm, based on the driver's operation information.
  • The personal characteristic estimation apparatus 101 increments the histogram value of the label having the maximum likelihood.
  • Then, the personal characteristic estimation device 101 determines that the label having the normalized maximum frequency value (the driver's instantaneous characteristic) is the driver's static characteristic.
  • Thus, the personal characteristic estimation apparatus 101 can estimate the driver's static characteristics (a general term for personality, tendency, temperament, and the like) with high accuracy, based on the behaviors the driver takes while driving the vehicle and the surrounding environment of the vehicle.
  • For example, when guidance based on the estimated static characteristics is provided to the driver through the HMI (Human Machine Interface), an appropriate response corresponding to the driver's static characteristics can be taken.
  • Since the model switching unit 17 selects the characteristic models associated with the scene estimated by the scene estimation unit 13 from among the plurality of characteristic models (see FIG. 5), and the instantaneous characteristic estimation unit 14 calculates the likelihood of each label based only on the selected models, the personal characteristic estimation device 101 can reduce the computational load of calculating the likelihood of each label.
  • The tendency estimation unit 16 accumulates the maximum-likelihood labels detected by the instantaneous characteristic estimation unit 14 at arbitrary intervals, normalizes them over a fixed period or a fixed number of detections, forms a histogram, and calculates the distribution of each label. Since the personal characteristic estimation device 101 thus converts the histogram values into appropriately normalized values before estimating the driver's static characteristic, the driver's static characteristics can be estimated with high accuracy.
  • The personal characteristic estimation apparatus and personal characteristic estimation method of the present invention are not limited to the first and second embodiments and can be modified as appropriate without departing from the gist of the present invention.
  • For example, although the HMM (Hidden Markov Model) is used as the statistical analysis technique in the embodiments described above, the present invention is not limited to this; other techniques having equivalent functions may be used instead.
  • The present invention can be used to estimate personal characteristics based on behavior during vehicle driving.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Mathematical Physics (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

Individual characteristic estimation devices (100, 101) comprise a signal detection unit (11), a scene estimation unit (13), an instantaneous characteristic estimation unit (14), an instantaneous characteristic model storage unit (15), and a tendency estimation unit (16). The signal detection unit (11) acquires vehicle information. The scene estimation unit (13) estimates, based on the vehicle information, the scene in which the host vehicle travels. The instantaneous characteristic model storage unit (15) stores, as an instantaneous characteristic model, a characteristic model in which each scene is associated with the characteristic tendency labels likely to be expressed in that scene. Based on the scene estimated by the scene estimation unit (13), the instantaneous characteristic estimation unit (14) refers to the instantaneous characteristic model and estimates the driver's instantaneous characteristic tendency. The tendency estimation unit (16) statistically analyzes the estimated instantaneous characteristic tendencies using an HMM or other algorithm and estimates the driver's static characteristic tendency.

Description

Personal characteristic estimation apparatus and personal characteristic estimation method
The present invention relates to a personal characteristic estimation device and a personal characteristic estimation method for estimating a driver's characteristics based on the driver's operation information, the vehicle's behavior, and the driving scene while the driver is driving the vehicle.
As an apparatus that estimates a driver's characteristics (a general term for personality, tendency, temperament, and the like) from the driver's behavior while driving a vehicle, the one described in Japanese Patent Application Laid-Open No. 2011-34430 (Patent Document 1) is known. Patent Document 1 discloses recognizing a driver's habits from the driving operations performed while driving and supporting driving based on those habits.
JP 2011-34430 A
However, the conventional example disclosed in Patent Document 1 recognizes a driver's habits in order to support driving; it does not estimate the driver's basic characteristics.

An object of the present invention is to provide a personal characteristic estimation device and a personal characteristic estimation method capable of estimating a driver's basic characteristics.

A personal characteristic estimation device according to one aspect of the present invention includes: vehicle information acquisition means for acquiring vehicle information; scene estimation means for estimating, based on the vehicle information, a scene in which the vehicle travels; instantaneous characteristic model storage means for setting a plurality of scenes and a plurality of characteristic tendency labels indicating instantaneous characteristic tendencies of the driver, and for storing, as an instantaneous characteristic model, a characteristic model that associates each scene with the characteristic tendency labels likely to be expressed in that scene; instantaneous characteristic estimation means for estimating the driver's instantaneous characteristic tendency by referring to the instantaneous characteristic model based on the scene estimated by the scene estimation means; and characteristic tendency estimation means for statistically analyzing the instantaneous characteristic tendencies estimated by the instantaneous characteristic estimation means and estimating a static characteristic tendency of the driver.

A personal characteristic estimation method according to one aspect of the present invention sets a plurality of scenes and a plurality of characteristic tendency labels indicating instantaneous characteristic tendencies of the driver; stores, as an instantaneous characteristic model, a characteristic model that associates each scene with the characteristic tendency labels likely to be expressed in that scene; estimates, based on vehicle information, the scene in which the vehicle travels; estimates the driver's instantaneous characteristic tendency by referring to the instantaneous characteristic model based on the estimated scene; and estimates a static characteristic tendency of the driver by statistically analyzing the estimated instantaneous characteristic tendencies.
FIG. 1 is a block diagram showing the configuration of the personal characteristic estimation apparatus according to the first embodiment of the present invention.
FIG. 2 is an explanatory diagram schematically showing processing of the estimation unit of the personal characteristic estimation device according to the first embodiment of the present invention.
FIG. 3 is a flowchart showing a processing procedure of the personal characteristic estimation apparatus according to the first embodiment of the present invention.
FIG. 4 is a block diagram showing a configuration of the personal characteristic estimation apparatus according to the second embodiment of the present invention.
FIG. 5 is an explanatory diagram schematically showing an instantaneous characteristic model of the personal characteristic estimation device according to the second embodiment of the present invention.
FIG. 6 is an explanatory diagram schematically showing how a characteristic model is selected by the model switching unit of the personal characteristic estimation device according to the second embodiment of the present invention.
FIG. 7 is a flowchart showing a processing procedure of the personal characteristic estimation apparatus according to the second embodiment of the present invention.
FIG. 8A is an explanatory diagram showing an example of a likelihood threshold according to each embodiment of the present invention.
FIG. 8B is an explanatory diagram showing an example of a calculated likelihood value according to each embodiment of the present invention.
FIG. 9A is an explanatory diagram showing the relationship between a frequency threshold and a frequency result according to each embodiment of the present invention.
FIG. 9B is an explanatory diagram showing the relationship between a frequency threshold and a frequency result according to each embodiment of the present invention.
FIG. 10 is an explanatory diagram showing the relationship between a plurality of scenes and the characteristic tendency labels detectable in each scene according to each embodiment of the present invention.
FIGS. 11A to 11D are explanatory diagrams showing the situations of scenes (1) to (4) according to each embodiment of the present invention.
FIGS. 12A to 12D are explanatory diagrams showing the situations of scenes (5) to (8) according to each embodiment of the present invention.
FIG. 13 is an explanatory diagram showing the relationship between driving signals and the driver's static characteristics according to each embodiment of the present invention.
FIGS. 14A and 14B are explanatory diagrams showing a procedure for designing an environmental corpus according to each embodiment of the present invention.
FIG. 15 is an explanatory diagram showing a procedure for designing a characteristic corpus according to each embodiment of the present invention.
FIG. 16 is an explanatory diagram showing the structure of the environmental model and the characteristic model according to each embodiment of the present invention.
Hereinafter, the first and second embodiments of the present invention will be described with reference to FIGS. 1 to 16.
(First Embodiment)
FIG. 1 is a block diagram showing the configuration of the personal characteristic estimation apparatus according to this embodiment. As shown in FIG. 1, the personal characteristic estimation apparatus 100 includes a signal detection unit (vehicle information acquisition means) 11, an estimation unit 12, an instantaneous characteristic model storage unit (instantaneous characteristic model storage means) 15, and a tendency estimation unit (characteristic tendency estimation means) 16. The estimation unit 12 includes a scene estimation unit (scene estimation means) 13 and an instantaneous characteristic estimation unit (instantaneous characteristic estimation means) 14.
The signal detection unit 11 acquires information on vehicle operation and information on vehicle behavior from a CAN (Controller Area Network) or the like mounted on the vehicle. Examples of information on vehicle operation include brake operation, steering angle, and accelerator opening. Examples of information on vehicle behavior include traveling speed, yaw rate, and battery discharge level.
The signal detection unit 11 further acquires information on the environment outside the vehicle. For example, the signal detection unit 11 acquires information on the external situation based on surrounding images captured by an in-vehicle camera (not shown) and on the traveling route of the host vehicle detected by a GPS device or the like. In addition, the signal detection unit 11 acquires information on the operation history of the navigation device, air conditioner, and audio system mounted on the vehicle. Hereinafter, the information on vehicle operation, the information on vehicle behavior, and the information on operation history are collectively referred to as the "vehicle signal". A concept indicating at least one of the information on vehicle operation, the information on vehicle behavior, the information on operation history, and the information on the environment outside the vehicle is referred to as vehicle information.
The instantaneous characteristic model storage unit 15 stores a plurality of preset scenes (scenes (1) to (8) shown in FIGS. 11A to 12D, described later). The instantaneous characteristic model storage unit 15 sets a plurality of characteristic tendency labels indicating the driver's characteristics, and stores, as the instantaneous characteristic model, an environmental model used to estimate the scene together with a characteristic model in which each scene is associated with the characteristic tendency labels (hereinafter abbreviated as "labels") readily expressed in that scene. Specifically, following the correspondence table shown in FIG. 10, in which a plurality of scenes related to vehicle travel are arranged in the row direction and a plurality of labels in the column direction, the instantaneous characteristic model storage unit 15 stores the environmental model used to estimate the scene and a characteristic model that associates each scene with the labels (the driver's instantaneous characteristics) prominently expressed in that scene.
In the correspondence table shown in FIG. 10, scenes (1) to (8), classified by conditions such as road shape, time of day, congestion information, external conditions, presence of pedestrians, presence of other vehicles, and presence of oncoming vehicles, are arranged in the row direction. Various characteristic tendency labels (labels (1) to (9)) indicating the driver's characteristics are arranged in the column direction. In the table, the one or more labels readily expressed in each scene are marked with circles. For example, in scene (1), in which the vehicle travels through a "crosswalk without a signal on a narrow road" (FIG. 11A), behavior from which driver characteristics such as "empathy for others" (label (1)) and "non-empathy" (label (4)) are easy to judge is expressed, so labels (1) and (4) are circled in the table. In other words, by detecting the driver's behavior when the vehicle travels through a crosswalk without a signal on a narrow road, the driver's "empathy for others" and "non-empathy" can be judged. Accordingly, the instantaneous characteristic model storage unit 15 stores two characteristic models in which labels (1) and (4) are associated with scene (1). As shown in FIG. 2, the estimation unit 12 uses characteristic models that combine a scene with a characteristic (label) when estimating the driver's instantaneous characteristics, so the instantaneous characteristic model storage unit 15 stores sixteen characteristic models, corresponding to the number of circles in FIG. 10.
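As an illustration of how such a correspondence table might be held in software, the following minimal sketch encodes a few rows of FIG. 10 as a mapping from scenes to the labels circled for them. The identifier names are hypothetical paraphrases, not the patent's wording, and only scenes whose label sets are spelled out in the text are shown.

```python
# A minimal sketch (assumed names) of the FIG. 10 correspondence table:
# each scene maps to the characteristic tendency labels readily
# expressed in it. In total the full table holds sixteen scene-label
# pairs, one per circle in FIG. 10.
SCENE_LABELS: dict[str, list[str]] = {
    "scene1_narrow_crosswalk_no_signal": ["label1_empathy_for_others",
                                          "label4_non_empathy"],
    "scene2_narrow_crosswalk_no_stop_line": ["label2", "label6"],
}

def labels_for(scene: str) -> list[str]:
    """Return the labels whose likelihoods are evaluated for a scene."""
    return SCENE_LABELS.get(scene, [])
```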
Note that the correspondence table shown in FIG. 10 is merely an example. For instance, for scene (1), "crosswalk without a signal on a narrow road", no circle is placed in the column of label (9), "decisiveness". This does not mean that a driver characteristic such as decisiveness is difficult to express in scene (1); it means that no data indicating a relationship between scene (1) and label (9) has been obtained. The correspondence table shown in FIG. 10 may therefore be revised by future research.
Also, an instantaneous characteristic having a correlation value equal to or greater than a predetermined value (for example, 0.2 or greater) with a static characteristic detected by a general method of measuring static characteristic tendencies, such as the AQ test (Autism-Spectrum Quotient), can be set as a label. Furthermore, it is also possible to infer the driver's introspection from moving images or speech and to assign the inferred introspection as a label.
The "instantaneous characteristic" is a momentary characteristic estimated based on the driver's behavior. In the present invention, the "static characteristic", which is the driver's personal characteristic (a general term for personality, tendency, temperament, and the like), is obtained based on a large number of "instantaneous characteristics".
The scene estimation unit 13 shown in FIG. 1 estimates the scene in which the host vehicle is traveling by referring to the environmental model stored in the instantaneous characteristic model storage unit 15, based on the vehicle signal detected by the signal detection unit 11 (information on vehicle operation, information on vehicle behavior, and information on operation history) and on the information on the environment outside the vehicle. For example, the scene estimation unit 13 grasps the situation in which the host vehicle is currently traveling from surrounding images captured by an in-vehicle camera (not shown) and estimates the scene. In doing so, the scene estimation unit 13 refers to the environmental model to estimate which of scenes (1) to (8), arranged in the row direction of the correspondence table in FIG. 10, the grasped situation corresponds to.
The instantaneous characteristic estimation unit 14 estimates the driver's instantaneous characteristics by referring to the characteristic models stored in the instantaneous characteristic model storage unit 15, based on the vehicle signal detected by the signal detection unit 11 and the scene estimated by the scene estimation unit 13. Specifically, the instantaneous characteristic estimation unit 14 obtains the likelihood of each label from the vehicle signal using an HMM (Hidden Markov Model) algorithm (an algorithm that statistically analyzes characteristic tendencies) or the like. The detailed procedure will be described later.
The personal characteristic estimation apparatus 100 can be configured as an integrated computer including a central processing unit (CPU), a RAM, a ROM, and storage means such as a hard disk.
Next, a method for estimating the driver's static characteristics will be described with reference to FIG. 13. The questionnaire method can be cited as a method for estimating the driver's static characteristics (personality, tendency, temperament, and the like). The general questionnaire C1 asks the subject (the driver in this embodiment) about his or her own behavior and introspection when a specific environment is given, and estimates the subject's static characteristics from the answers. Therefore, in order to estimate, from driving operation signals and vehicle behavior signals (hereinafter referred to as driving signals), static characteristics equivalent to those estimated by the general questionnaire C1, it suffices to be able to estimate, from the driving signals, the specific environment given in the general questionnaire C1 and the driver's behavior when that environment is given.
Since driving behavior is determined instantaneously by the driver's characteristics, the environment, the surrounding situation, traffic rules, and manners, the driver's instantaneous characteristics can be detected by grasping the driving behavior. However, the driving signals contain no information representing the relationship between driving behavior and the driver's static characteristics, so a driving-version questionnaire C2 is required to represent that relationship.
For example, as a question for investigating the tendency regarding empathy for others, the subject is given a question such as "Do you think you should always yield when a pedestrian is about to cross?". The subject answers the question while considering actual driving scenes, such as "Should I stop before the crosswalk?" or "Should I go first while watching the pedestrian?".
Thus, if there is a correlation between the answers to the driving-version questionnaire C2, whose question items assume actual driving scenes, and the answers to the general questionnaire C1, which investigates the general tendency regarding empathy for others, the driver's static characteristics can be estimated by observing the driving behavior described in the driving-version questionnaire C2.
For example, in the case of "stopping before the crosswalk", a signal indicating a single stop with the brake applied before the crosswalk appears, whereas in the case of "going first while watching the pedestrian", behavior of opening and closing the accelerator appears. Based on these relationships, if the driving signal C3 produced when the driver acts in a specific environment can be extracted and given a label explaining that driving behavior (for example, "empathy for others"), then the driving behavior can be estimated from the driving signals alone, and the driver's momentarily observed instantaneous characteristics, estimated from the driving behavior, can be calculated. By measuring the frequency of occurrence of these instantaneous characteristics, static characteristics equivalent to those expected from the score (answers) obtained in the psychological test (general questionnaire C1) can, as a result, be estimated automatically.
The estimation unit 12 first divides the driving signal input from the signal detection unit 11 into unit-time segments and performs environment recognition. In environment recognition, the estimation unit 12 uses an environmental model based on an HMM (Hidden Markov Model) to recognize, from the input driving signal, one pattern from among patterns prepared in advance that represent a plurality of environments, such as road shapes including right/left turns and narrow roads, and congestion conditions. The estimation unit 12 then selects a characteristic model based on the environment represented by the recognized pattern.
For example, when a "right turn" environment is recognized, the estimation unit 12 selects a characteristic model trained in advance so that the driver's instantaneous characteristics can be recognized in the "right turn" environment. Next, the estimation unit 12 recognizes the driver's momentarily observed instantaneous characteristics using the selected characteristic model. The recognized instantaneous characteristics are accumulated each time a recognition result is produced. When a certain amount of recognition results has been accumulated, the tendency estimation unit 16 obtains the driver's static characteristics based on the frequency of occurrence and distribution of the plurality of recognized instantaneous characteristics.
The environmental corpus and the characteristic corpus are designed in advance by the following methods. The environmental model is obtained by prior training based on the environmental corpus, and the characteristic model is obtained by prior training based on the characteristic corpus.
(Design of environmental corpus)
FIGS. 14A and 14B are explanatory diagrams showing an example of collecting and segmenting driving signals (vehicle speed, in this example) in three environments for each of drivers D1 to Dn. In FIG. 14A, environment S1 represents congestion on a straight road, S2 a crosswalk on a narrow road, and S3 a right turn at a T-junction. FIG. 14B shows the vehicle speed data for each environment (S1 to S3) being cut out and given an environment symbol (straight-road congestion, narrow-road crosswalk, T-junction right turn). Collecting driving signals from as many drivers as possible stabilizes the distribution of the driving signals (vehicle speed data) for each environment symbol.
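A minimal sketch of this segmentation step, with assumed field names, might pool the cut-out vehicle speed traces across drivers and tag each with its environment symbol:

```python
# Sketch of the FIG. 14B segmentation: vehicle-speed traces cut out
# per environment and tagged with an environment symbol, pooled
# across drivers D1..Dn. Field names are assumptions.
from dataclasses import dataclass

@dataclass
class CorpusEntry:
    driver_id: str            # "D1" .. "Dn"
    env_symbol: str           # e.g. "straight_road_congestion"
    speed_trace: list[float]  # the cut-out vehicle speed segment

def build_environment_corpus(segments) -> list[CorpusEntry]:
    """segments: iterable of (driver_id, env_symbol, speed_trace)
    tuples already cut out of the raw driving logs."""
    return [CorpusEntry(d, s, list(v)) for d, s, v in segments]
```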
(Design of characteristic corpus)
As with the environmental corpus, the characteristic corpus is built by collecting driving signals in a plurality of environments from each of drivers D1 to Dn over a long period, and cutting out and classifying the vehicle speed data for each environment symbol. Furthermore, the plurality of vehicle speed data collected for each environment symbol is classified by driving behavior corresponding to the driver's instantaneous characteristics, and a characteristic symbol is assigned to the vehicle speed data collected for each driving behavior. FIG. 15 shows characteristic symbols (emotional empathy behavior, non-empathy behavior) being assigned to the vehicle speed data collected for each driving behavior (a1, a2) when the environment symbol is "narrow-road crosswalk, pedestrian present".
When the environment symbol is "narrow-road crosswalk, pedestrian present", two driving behaviors are observed: driving behavior a1, "stop temporarily and start after the pedestrian has passed", and driving behavior a2, "pass through the crosswalk without stopping while watching the pedestrian". The vehicle speed data in which driving behavior a1 is observed is given "emotional empathy behavior" as its characteristic symbol. The vehicle speed data in which driving behavior a2 is observed is given "non-empathy behavior" as its characteristic symbol. Unlike the environment symbols, the characteristic symbols do not directly name the driving behavior.
FIG. 16 is an explanatory diagram showing the structure of the environmental model and the characteristic model, which use the left-to-right model employed in speech recognition. The instantaneous characteristic model is generated by training, for each symbol and using each corpus, the inter-state transition probabilities and the output probabilities produced when an input parameter sequence is observed.
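As a concrete illustration of the left-to-right structure named here, the sketch below builds the transition matrix of such a model. The number of states and the self-transition probability are illustrative assumptions; in practice these probabilities would be re-estimated from the corpora during training.

```python
import numpy as np

def left_to_right_transmat(n_states: int, p_stay: float = 0.6) -> np.ndarray:
    """Left-to-right HMM transitions: each state either stays in place
    (p_stay) or advances to the next state; no backward jumps, and the
    final state is absorbing. Training (e.g. Baum-Welch over a corpus)
    would re-estimate these probabilities together with the emissions."""
    A = np.zeros((n_states, n_states))
    for i in range(n_states - 1):
        A[i, i] = p_stay
        A[i, i + 1] = 1.0 - p_stay
    A[-1, -1] = 1.0
    return A
```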
Next, the operation of the personal characteristic estimation apparatus 100 will be described with reference to the flowchart shown in FIG. 3.
First, in step S11, the signal detection unit 11 acquires various vehicle signals and information on the environment outside the vehicle. Specifically, it acquires vehicle signals such as accelerator opening, brake operation, steering angle, vehicle speed, and yaw rate via the CAN mounted on the vehicle, as well as video signals of the external environment captured by the in-vehicle camera and the current position information from GPS.
In step S12, the signal detection unit 11 computes the first differences of the acquired signal at fixed time intervals (denoted difference Δ) and the second differences between consecutive first differences (denoted difference ΔΔ), and parameterizes the signal. For example, when the acquired signal is the accelerator opening and the accelerator opening is detected at times t1, t2, t3, t4, and t5 at fixed intervals, the signal detection unit 11 computes the difference Δ1 of the accelerator opening between times t1 and t2, the difference Δ2 between times t2 and t3, the difference Δ3 between times t3 and t4, and the difference Δ4 between times t4 and t5. The signal detection unit 11 further obtains the difference ΔΔ1 between Δ1 and Δ2, the difference ΔΔ2 between Δ2 and Δ3, and the difference ΔΔ3 between Δ3 and Δ4.
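A minimal sketch of this parameterization step, assuming an evenly sampled scalar signal, follows; the accelerator values are invented for illustration.

```python
import numpy as np

def parameterize(signal: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Step S12 sketch: first differences (delta) between samples taken
    at a fixed interval, and second differences (delta-delta) between
    consecutive first differences."""
    delta = np.diff(signal)        # five samples give delta1..delta4
    delta_delta = np.diff(delta)   # and delta-delta1..delta-delta3
    return delta, delta_delta

# Accelerator opening at t1..t5 (illustrative values):
accel = np.array([10.0, 12.0, 15.0, 14.0, 13.0])
delta, delta_delta = parameterize(accel)
```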
In step S13, the scene estimation unit 13 refers to the environmental model and estimates the scene in which the host vehicle is traveling, based on the various vehicle signals detected by the signal detection unit 11 and on the information on the environment outside the vehicle. The instantaneous characteristic estimation unit 14 then refers to the characteristic models and calculates the likelihood of each label associated with the estimated scene (the labels circled in the correspondence table shown in FIG. 10). In this processing, the instantaneous characteristic estimation unit 14 calculates the likelihood of each label from the difference Δ and the difference ΔΔ obtained in step S12, using an HMM (Hidden Markov Model) algorithm. Specifically, the instantaneous characteristic estimation unit 14 obtains each label's likelihood by feeding the difference Δ and difference ΔΔ data into the HMM algorithm. Since the details of the HMM algorithm are well known, their description is omitted here.
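The patent leaves the HMM computation to the literature; as one hedged sketch of what "feeding Δ and ΔΔ into the HMM" could look like, the following scores a feature sequence against each label's Gaussian-emission HMM with the forward algorithm and picks the maximum-likelihood label. All parameter values and names are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

def forward_likelihood(obs, A, pi, means, stds):
    """Forward algorithm for a Gaussian-emission HMM: returns
    P(obs | model). obs is a 1-D feature sequence (e.g. delta and
    delta-delta values); A, pi, means, stds are model parameters."""
    alpha = pi * norm.pdf(obs[0], means, stds)
    for o in obs[1:]:
        alpha = (alpha @ A) * norm.pdf(o, means, stds)
    return float(alpha.sum())

def best_label(obs, label_models):
    """Score obs against each label's HMM and return the label with
    the maximum likelihood, as in step S13."""
    scores = {name: forward_likelihood(obs, *m)
              for name, m in label_models.items()}
    label = max(scores, key=scores.get)
    return label, scores[label]

# Illustrative two-state left-to-right model for one label:
A = np.array([[0.6, 0.4], [0.0, 1.0]])
pi = np.array([1.0, 0.0])
means, stds = np.array([0.0, 2.0]), np.array([1.0, 1.0])
label, score = best_label(np.array([0.1, 1.8, 2.2]),
                          {"label1_empathy_for_others": (A, pi, means, stds)})
```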
Specifically, when the scene estimation unit 13 estimates that the host vehicle is traveling through the "crosswalk without a signal on a narrow road" of scene (1) in the correspondence table shown in FIG. 10, the instantaneous characteristic estimation unit 14 calculates the likelihood of "empathy for others" (label (1)) and the likelihood of "non-empathy" (label (4)).
In step S14, the instantaneous characteristic estimation unit 14 determines whether the likelihood of the label with the maximum likelihood exceeds a preset threshold. For example, in the case of scene (1) in the correspondence table shown in FIG. 10, the likelihood threshold for "empathy for others" (label (1)) and the likelihood threshold for "non-empathy" (label (4)) are set to "3000" and "2000", respectively, as shown in FIG. 8A.
The instantaneous characteristic estimation unit 14 sets up a histogram for each label. When the likelihood of the label with the maximum likelihood exceeds the likelihood threshold, the tendency estimation unit 16 increments the histogram value of that label in step S15.
As a specific example, suppose that, as shown in FIG. 8B, the processing by the HMM algorithm recognizes the likelihood of "empathy for others" (label (1)) as "2500" and the likelihood of "non-empathy" (label (4)) as "2300". If, at detection time t1, the label with the maximum likelihood is "empathy for others" (label (1)), its likelihood "2500" does not exceed the likelihood threshold "3000" (NO in step S14), so the tendency estimation unit 16 does not increment the histogram value of label (1).
On the other hand, if, at detection time t2, the label with the maximum likelihood is "non-empathy" (label (4)), its likelihood "2300" exceeds the likelihood threshold "2000" (YES in step S14), so in step S15 the tendency estimation unit 16 increments the histogram value of label (4). Here, the histogram accumulates the number of times the likelihood has exceeded the likelihood threshold, and is stored in a memory or the like.
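A compact sketch of steps S14 and S15 under these example values (threshold figures taken from FIG. 8A; identifier names assumed):

```python
from collections import Counter

LIKELIHOOD_THRESHOLDS = {"label1_empathy_for_others": 3000,
                         "label4_non_empathy": 2000}  # FIG. 8A values
histogram: Counter = Counter()  # counts of threshold-exceeding detections

def accumulate(likelihoods: dict[str, float]) -> None:
    """Steps S14-S15: find the maximum-likelihood label and increment
    its histogram bin only if it clears that label's threshold."""
    label = max(likelihoods, key=likelihoods.get)
    if likelihoods[label] > LIKELIHOOD_THRESHOLDS[label]:
        histogram[label] += 1

# t1 example from FIG. 8B: label (1) has the maximum likelihood (2500)
# but stays below its threshold (3000), so nothing is incremented.
accumulate({"label1_empathy_for_others": 2500, "label4_non_empathy": 2300})
```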
FIG. 9A is an explanatory diagram showing the relationship between the frequency threshold and the frequency results. FIG. 9B is an explanatory diagram showing a histogram of the frequency threshold and the frequency results. As described above, in step S15, the tendency estimation unit 16 increments the histogram value of the label with the maximum likelihood when that likelihood exceeds the likelihood threshold. Specifically, as shown in FIG. 8B, since the likelihood of "non-empathy" (label (4)) exceeds the likelihood threshold, the tendency estimation unit 16 increments the "frequency result" value shown in FIGS. 9A and 9B.
In step S16, the tendency estimation unit 16 determines whether the maximum frequency value appearing in the histogram exceeds a preset frequency threshold. In the example shown in FIG. 9B, the frequency result of "non-empathy" (label (4)), which is the maximum frequency value, exceeds the frequency threshold (that is, the frequency result is "6" against a frequency threshold of "4"), so the tendency estimation unit 16 determines that the maximum frequency value appearing in the histogram has exceeded the frequency threshold (YES in step S16).
In step S17, the tendency estimation unit 16 outputs "non-empathy" (label (4)) as the driver's static characteristic. That is, the tendency estimation unit 16 determines that the static characteristic of the driver of this vehicle is "non-empathy" and outputs this determination result to downstream equipment (not shown). Note that the frequency results shown in the histogram of FIG. 9B increase over time and will eventually exceed the frequency threshold regardless of the driver. To avoid this phenomenon, it is also possible to accumulate the frequency results over a fixed time or a predetermined number of detections and to use values normalized from the accumulated data as the frequency results.
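Steps S16 and S17, together with the normalization variant just mentioned, might look like the following sketch (frequency threshold from FIG. 9A; the windowing scheme is an assumption):

```python
FREQUENCY_THRESHOLD = 4  # FIG. 9A value

def static_characteristic(histogram) -> str | None:
    """Steps S16-S17: output the most frequent instantaneous
    characteristic as the driver's static characteristic once its
    count exceeds the frequency threshold; None while undecided."""
    if not histogram:
        return None
    label, count = max(histogram.items(), key=lambda kv: kv[1])
    return label if count > FREQUENCY_THRESHOLD else None

def normalized_rates(histogram, detections: int) -> dict[str, float]:
    """Variant that keeps counts from growing without bound: normalize
    per label over a fixed number of detections, then compare the
    rates against a rate threshold instead of a raw count."""
    return {label: count / detections for label, count in histogram.items()}
```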
The above description took as an example the scene in which the host vehicle travels through the "crosswalk without a signal on a narrow road" of scene (1) in the correspondence table shown in FIG. 10. Scene (1) is the situation shown in FIG. 11A. The situations of scenes (2) to (8) in the correspondence table of FIG. 10 are described below with reference to FIGS. 11B to 12D.
FIG. 11B shows the situation in which the host vehicle passes through scene (2), a "crosswalk without a stop line on a narrow road". FIG. 11C shows the situation in which the host vehicle performs scene (3), a "left turn from a narrow road onto a wide road". FIG. 11D shows the situation in which the host vehicle passes through scene (4), a "crosswalk on a single-lane road with good visibility". FIG. 12A shows the situation in which the host vehicle performs scene (5), a "left turn from a wide road onto a narrow road". FIG. 12B shows the situation in which the host vehicle performs scene (6), "traveling straight on a congested narrow road". FIG. 12C shows the situation in which the host vehicle travels through scene (7), a "merge with heavy traffic". FIG. 12D shows the situation in which the host vehicle performs scene (8), a "lane change in congestion".
The instantaneous characteristic estimation unit 14 calculates the likelihood of the labels associated with the scene estimated by the scene estimation unit 13 among scenes (1) to (8) (the labels circled in the correspondence table shown in FIG. 10).
As described above, the personal characteristic estimation apparatus 100 stores, as the instantaneous characteristic model, the environmental model used to estimate the scene and the characteristic models in which each scene is associated with the labels (the driver's instantaneous characteristics) prominently expressed in that scene. Referring to the environmental model, the personal characteristic estimation apparatus 100 estimates the scene in which the host vehicle is traveling based on the vehicle information. Referring to the characteristic models, it calculates the likelihood of each label associated with the estimated scene from the driver's operation information, using the HMM algorithm or the like. When the likelihood of the label with the maximum likelihood exceeds the likelihood threshold, the apparatus increments the histogram value of that label. When the maximum frequency value appearing in the histogram exceeds the preset frequency threshold, the apparatus determines that the label with this maximum frequency value (the driver's instantaneous characteristic) is the driver's static characteristic.
Accordingly, the personal characteristic estimation apparatus 100 can estimate the driver's static characteristics (a general term for personality, tendency, temperament, and the like) with high accuracy based on the behavior the driver exhibits while driving and on the vehicle's surrounding environment. As a result, for example, when providing guidance to the driver through an HMI (Human Machine Interface) based on the estimated static characteristics, an appropriate response suited to those characteristics can be taken.
For example, the following three forms of guidance are conceivable when the navigation device detects congestion and searches for a detour: (1) explain the reason for the detour to the driver and set the detour automatically; (2) let the driver choose between the detour and the original route; (3) set the detour automatically without explaining anything to the driver. Using the driver's static characteristics estimated by the personal characteristic estimation apparatus 100 makes it possible to select the appropriate guidance among (1) to (3) according to those characteristics.
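One hypothetical way to wire the estimated static characteristic to these three guidance policies is sketched below; the pairing of characteristics to policies is an assumption for illustration, not specified by the source.

```python
def choose_guidance(static_char: str) -> str:
    """Map the estimated static characteristic to one of the three
    detour-guidance policies (1)-(3); the pairing is illustrative."""
    policy = {
        "label1_empathy_for_others": "explain_then_auto_detour",  # (1)
        "label9_decisiveness": "let_driver_choose",               # (2)
        "label4_non_empathy": "auto_detour_silently",             # (3)
    }
    return policy.get(static_char, "let_driver_choose")
```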
Conventional methods of estimating a driver's static characteristics relied on psychological techniques, such as having the subject take an AQ test using a questionnaire. However, such methods have the problems that honest answers are not always obtained and that personal information may leak. In contrast, the personal characteristic estimation apparatus 100 of this embodiment estimates the driver's static characteristics based on the driver's behavior in each scene while driving the vehicle, so it avoids these problems and can estimate the driver's static characteristics accurately and at low cost.
As shown in FIG. 2, the instantaneous characteristic model storage unit 15 stores characteristic models that combine a scene with a characteristic (label). In this embodiment, the number of scene-label combinations, represented by the circles in the correspondence table of FIG. 10, is relatively small, so the personal characteristic estimation apparatus 100 can reduce the required storage capacity.
The signal detection unit 11 acquires at least one of the following signals: accelerator opening, brake operation, steering angle, yaw rate, vehicle speed, and battery discharge level. Based on the signals acquired by the signal detection unit 11, the estimation unit 12 estimates the scene using the scene estimation unit 13 and detects the driver's behavior using the instantaneous characteristic estimation unit 14, so the personal characteristic estimation apparatus 100 can estimate the driver's static characteristics with high accuracy from the vehicle signals.
The scene estimation unit 13 estimates the scene in which the host vehicle is traveling based on data such as road shape, time of day, congestion information, external conditions, presence of pedestrians, presence of other vehicles, and presence of oncoming vehicles, so the personal characteristic estimation apparatus 100 can estimate an appropriate scene according to the situation around the host vehicle and can estimate the driver's static characteristics with high accuracy.
The instantaneous characteristic estimation unit 14 uses a Hidden Markov Model (HMM) as the method for statistically analyzing the results estimated with the instantaneous characteristic model, which enables highly accurate analysis and thus highly accurate estimation of the driver's static characteristics.
The personal characteristic estimation apparatus 100 sets as characteristic tendency labels those characteristics having a correlation of a predetermined value or more (for example, 0.2 or more) with characteristics detected by conventional methods of measuring general, static characteristic tendencies, such as the AQ test. Labels from which the driver's static characteristics are easy to detect can thus be selected appropriately, and the driver's static characteristics can be estimated with high accuracy.
The tendency estimation unit 16 detects the label with the maximum likelihood at arbitrary intervals, accumulates the detected labels into a histogram, calculates the distribution for each label, and estimates the driver's static characteristics based on the labels whose histogram values exceed the threshold, so the personal characteristic estimation apparatus 100 can estimate the driver's static characteristics with high accuracy.
(Second Embodiment)
FIG. 4 is a block diagram showing the configuration of the personal characteristic estimation apparatus according to this embodiment. As shown in FIG. 4, the personal characteristic estimation apparatus 101 includes a signal detection unit 11, a scene estimation unit 13, an instantaneous characteristic estimation unit 14, an instantaneous characteristic model storage unit 15, a tendency estimation unit 16, and a model switching unit (model switching means) 17.
The signal detection unit 11 has the same configuration as the signal detection unit 11 shown in FIG. 1, so its detailed description is omitted.
The instantaneous characteristic model storage unit 15 stores a plurality of preset scenes (scenes (1) to (8) shown in FIGS. 11A to 12D). The instantaneous characteristic model storage unit 15 sets a plurality of characteristic tendency labels indicating the driver's characteristics, and stores, as the instantaneous characteristic model, an environmental model used to estimate the scene together with a characteristic model in which each scene is associated with the characteristic tendency labels (hereinafter abbreviated as "labels") readily expressed in that scene. Specifically, following the correspondence table shown in FIG. 10, in which a plurality of scenes related to vehicle travel are arranged in the row direction and a plurality of labels in the column direction, the instantaneous characteristic model storage unit 15 stores the environmental model used to estimate the scene and a characteristic model that associates each scene with the labels (the driver's instantaneous characteristics) prominently expressed in that scene.
Furthermore, the instantaneous characteristic model storage unit 15 stores the plurality of characteristic models classified by scene. Specifically, as shown in FIG. 5, label (1), label (4), and garbage (others) are stored as the characteristic models of scene (1). Label (2), label (6), and garbage are stored as the characteristic models of scene (2). Similarly, one or more labels associated with each scene from scene (3) onward are stored as that scene's characteristic models.
As in the first embodiment, the scene estimation unit 13 estimates the scene in which the host vehicle is traveling by referring to the environmental model stored in the instantaneous characteristic model storage unit 15, based on the information on the environment outside the host vehicle detected by the signal detection unit 11 and on the vehicle signal (information on vehicle operation, information on vehicle behavior, and information on operation history). For example, the scene estimation unit 13 grasps the situation in which the host vehicle is currently traveling from surrounding images captured by an in-vehicle camera (not shown) and estimates the scene. In doing so, the scene estimation unit 13 refers to the environmental model to estimate which of scenes (1) to (8), arranged in the row direction of the correspondence table in FIG. 10, the grasped situation corresponds to.
The model switching unit 17 performs processing to select the desired characteristic model from among the plurality of characteristic models stored in the instantaneous characteristic model storage unit 15, based on the scene estimated by the scene estimation unit 13. That is, as shown in FIG. 6, the model switching unit 17 selects, from the plurality of characteristic models, the characteristic model associated with the scene estimated by the scene estimation unit 13. Specifically, when the current scene is scene (1) in the correspondence table of FIG. 10, a "crosswalk without a signal on a narrow road", the model switching unit 17 selects the characteristic models associated with scene (1), which include "empathy for others" (label (1)), "non-empathy" (label (4)), and "garbage".
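A minimal sketch (assumed names) of the per-scene model switch of FIG. 6, including the garbage model and the garbage-scene check of step S34 described later:

```python
# Per-scene characteristic-model sets as in FIG. 5; names assumed.
SCENE_MODELS = {
    "scene1_narrow_crosswalk_no_signal":
        ["label1_empathy_for_others", "label4_non_empathy", "garbage"],
    "scene2_narrow_crosswalk_no_stop_line":
        ["label2", "label6", "garbage"],
}

def switch_model(estimated_scene: str):
    """Steps S34/S35 sketch: return None for a garbage scene (the main
    loop then goes back to signal acquisition); otherwise return the
    characteristic-model set associated with the estimated scene."""
    if estimated_scene == "garbage":
        return None
    return SCENE_MODELS.get(estimated_scene)
```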
The instantaneous characteristic estimation unit 14 estimates the driver's instantaneous characteristics using the vehicle signal detected by the signal detection unit 11 and the characteristic model selected by the model switching unit 17. For example, as shown in FIG. 6, when the model switching unit 17 selects the characteristic models of scene (2), the instantaneous characteristic estimation unit 14 extracts, from among label (2), label (6), and garbage stored as the characteristic models of scene (2), the label with the maximum likelihood, and outputs it to the tendency estimation unit 16. The method of extracting the label with the maximum likelihood will be described later.
Next, the operation of the personal characteristic estimation apparatus 101 will be described with reference to the flowchart shown in FIG. 7.
First, in step S31, the signal detection unit 11 acquires various vehicle signals and information on the environment outside the vehicle. Specifically, it acquires vehicle signals such as accelerator opening, brake operation, steering angle, vehicle speed, and yaw rate via the CAN mounted on the vehicle, as well as video signals of the external environment captured by the in-vehicle camera and navigation information from GPS.
In step S32, the signal detection unit 11 computes the first differences of the acquired signal at fixed time intervals (denoted difference Δ) and the second differences between consecutive first differences (denoted difference ΔΔ), and parameterizes the signal.
In step S33, the scene estimation unit 13 refers to the environmental model, calculates the scene likelihood of the host vehicle based on the information on the environment outside the vehicle detected by the signal detection unit 11 (specifically, external images captured by the in-vehicle camera and the position information of the host vehicle measured by the GPS device), and estimates the scene in which the host vehicle is traveling.
In step S34, the model switching unit 17 determines whether the scene estimated by the scene estimation unit 13 is garbage. If the estimated scene is garbage, the process returns to step S31. If the estimated scene is not garbage, the process proceeds to step S35.
In step S35, the model switching unit 17 selects, from among the plurality of characteristic models respectively associated with scenes (1) to (8) (see FIG. 5), the characteristic model associated with the estimated scene. For example, when the estimated scene is scene (2) in the correspondence table of FIG. 10, the characteristic model associated with scene (2) is selected, as shown in FIG. 6.
In step S36, the instantaneous characteristic estimation unit 14 calculates the likelihood (characteristic likelihood) of each label using the HMM (Hidden Markov Model) algorithm with the difference Δ and the difference ΔΔ obtained in step S32.
For example, when the acquired vehicle signal is the accelerator opening and the accelerator opening is detected at times t1, t2, t3, t4, and t5 at fixed intervals, the signal detection unit 11 computes the difference Δ1 of the accelerator opening between times t1 and t2, the difference Δ2 between times t2 and t3, the difference Δ3 between times t3 and t4, and the difference Δ4 between times t4 and t5. The signal detection unit 11 further obtains the difference ΔΔ1 between Δ1 and Δ2, the difference ΔΔ2 between Δ2 and Δ3, and the difference ΔΔ3 between Δ3 and Δ4. The instantaneous characteristic estimation unit 14 obtains the likelihood of each label by feeding these data into the HMM algorithm.
As a specific example, when the scene estimation unit 13 estimates that the host vehicle is traveling through the "crosswalk without a signal on a narrow road" of scene (1) in the correspondence table shown in FIG. 10, the instantaneous characteristic estimation unit 14 calculates the likelihood of "empathy for others" (label (1)) and the likelihood of "non-empathy" (label (4)).
In step S37, the instantaneous characteristic estimation unit 14 determines whether the likelihood of the label with the maximum likelihood exceeds a preset threshold. For example, in the case of scene (1) in the correspondence table shown in FIG. 10, the likelihood threshold for "empathy for others" (label (1)) and the likelihood threshold for "non-empathy" (label (4)) are set to "3000" and "2000", respectively, as shown in FIG. 8A.
The instantaneous characteristic estimation unit 14 sets up a histogram for each label. When the likelihood of the label with the maximum likelihood exceeds the likelihood threshold (YES in step S37), the tendency estimation unit 16 increments the histogram value of that label in step S38.
As a specific example, suppose that, as shown in FIG. 8B, the processing by the HMM algorithm recognizes the likelihood of "empathy for others" (label (1)) as "2500" and the likelihood of "non-empathy" (label (4)) as "2300". If, at detection time t1, the label with the maximum likelihood is "empathy for others" (label (1)), its likelihood "2500" does not exceed the likelihood threshold "3000" (NO in step S37), so the tendency estimation unit 16 does not increment the histogram value of label (1).
On the other hand, if, at detection time t2, the label with the maximum likelihood is "non-empathy" (label (4)), its likelihood "2300" exceeds the likelihood threshold "2000" (YES in step S37), so in step S38 the tendency estimation unit 16 increments the histogram value of label (4). Here, the histogram accumulates the number of times the likelihood has exceeded the likelihood threshold, and is stored in a memory or the like.
FIG. 9A is an explanatory diagram showing the relationship between the frequency threshold and the frequency results. FIG. 9B is an explanatory diagram showing a histogram of the frequency results. As described above, in step S38, the tendency estimation unit 16 increments the histogram value of the label with the maximum likelihood when that likelihood exceeds the likelihood threshold. Specifically, as shown in FIG. 8B, since the likelihood of "non-empathy" (label (4)) exceeds the likelihood threshold, the tendency estimation unit 16 increments the "frequency result" value shown in FIGS. 9A and 9B.
In step S39, the tendency estimation unit 16 determines whether the normalized maximum frequency value appearing in the histogram exceeds a preset frequency threshold. In the example shown in FIG. 9B, the frequency result of "non-empathy" (label (4)), which is the normalized maximum frequency value, exceeds the frequency threshold (that is, the frequency result is "6" against a frequency threshold of "4"), so the tendency estimation unit 16 determines that the normalized maximum frequency value appearing in the histogram has exceeded the frequency threshold (YES in step S39).
 In step S40, the tendency estimation unit 16 outputs the "non-empathy" of label (4) as the driver's static characteristic. That is, the tendency estimation unit 16 determines that the static characteristic of the driver of this vehicle is "non-empathy", and outputs this determination result to a subsequent device (not shown).
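 The decision of steps S39 and S40 can likewise be sketched as follows; a single scalar frequency threshold and already-accumulated counts are assumed, and the normalization of the counts is simplified away in this sketch.

FREQUENCY_THRESHOLD = 4  # preset frequency threshold (value from FIG. 9B)

def estimate_static_characteristic(histogram: dict):
    # Step S39: compare the maximum frequency value in the histogram
    # (normalization omitted here) with the frequency threshold.
    if not histogram:
        return None
    best_label = max(histogram, key=histogram.get)
    if histogram[best_label] > FREQUENCY_THRESHOLD:
        # Step S40: output this label as the driver's static characteristic.
        return best_label
    return None  # keep accumulating detections

# With a frequency result of 6 for label (4), 6 > 4 yields "non_empathy".
print(estimate_static_characteristic({"non_empathy": 6, "empathy_toward_others": 2}))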
 As described above, the personal characteristic estimation device 101 stores, as its instantaneous characteristic model, an environment model used for estimating scenes together with characteristic models that associate each scene with the labels (the driver's instantaneous characteristics) prominently expressed in that scene. Referring to the environment model, the personal characteristic estimation device 101 estimates the scene in which the host vehicle is traveling based on the vehicle information. Referring to the characteristic model, the device recognizes the labels associated with the estimated scene and calculates the likelihood of each label from the driver's operation information using an HMM algorithm or the like. When the likelihood of the maximum-likelihood label exceeds the likelihood threshold, the device increments that label's histogram value. When the normalized maximum frequency value appearing in the histogram exceeds the preset frequency threshold, the device determines the label (the driver's instantaneous characteristic) having this normalized maximum frequency value to be the driver's static characteristic.
 Accordingly, the personal characteristic estimation device 101 can estimate the driver's static characteristics (a general term for personality, tendency, temperament, and the like) with high accuracy based on the behavior the driver exhibits while driving and on the surrounding environment of the vehicle. As a result, when guiding the driver through an HMI (Human Machine Interface) based on the estimated static characteristics, for example, an appropriate response suited to those characteristics can be taken.
 The model switching unit 17 selects, from the plurality of characteristic models (see FIG. 5), the characteristic model associated with the scene estimated by the scene estimation unit 13, and the instantaneous characteristic estimation unit 14 calculates the likelihood of each label based on the selected characteristic model alone. The personal characteristic estimation device 101 can therefore reduce the computational load of calculating the likelihood of each label.
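 The effect of the model switch on computational load can be illustrated with the following sketch: only the labels of the characteristic model selected for the current scene are scored, rather than every label of every model. The scene names, label sets, and stub scoring function are hypothetical, not taken from the embodiment.

def score_label_hmm(label: str, signals: list) -> float:
    # Stand-in for evaluating a trained per-label HMM on the driver's
    # operation signals; a real system would compute an HMM likelihood.
    return sum(signals) * (1 + len(label))

# One characteristic model (a set of labels) per scene (see FIG. 5).
characteristic_models = {
    "narrow_street_with_pedestrians": ["empathy_toward_others", "non_empathy"],
    "congested_highway": ["patience", "impatience"],
}

def likelihoods_for_scene(scene: str, signals: list) -> dict:
    # Model switching unit 17: pick the model for the estimated scene,
    # so the instantaneous characteristic estimation unit 14 scores
    # only that model's labels.
    labels = characteristic_models[scene]
    return {label: score_label_hmm(label, signals) for label in labels}

print(likelihoods_for_scene("congested_highway", [0.4, 0.7, 0.2]))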
 At arbitrary time intervals, the tendency estimation unit 16 accumulates the maximum-likelihood labels detected by the instantaneous characteristic estimation unit 14 over a fixed period or a predetermined number of detections, normalizes the accumulated values, builds a histogram from them, and calculates the distribution of each label. Because the personal characteristic estimation device 101 thus converts the histogram values into properly scaled values before estimating the driver's static characteristic, it can estimate that characteristic with high accuracy.
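 A minimal sketch of this normalization step follows, assuming a window of a fixed number of detections and division by the window total; the window size and the exact normalization rule are assumptions, since this passage leaves them open.

from collections import Counter

WINDOW_SIZE = 50  # detections per accumulation window (assumed)

def normalized_distribution(detected_labels: list) -> dict:
    # Accumulate the most recent window of max-likelihood labels and
    # normalize the counts into a relative-frequency histogram.
    window = detected_labels[-WINDOW_SIZE:]
    counts = Counter(window)
    total = sum(counts.values()) or 1
    return {label: count / total for label, count in counts.items()}

print(normalized_distribution(["non_empathy"] * 6 + ["empathy_toward_others"] * 2))
# {'non_empathy': 0.75, 'empathy_toward_others': 0.25}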
 The personal characteristic estimation device and personal characteristic estimation method of the present invention are not limited to the first and second embodiments described above, and can be modified as appropriate without departing from the gist of the present invention.
 For example, in the first and second embodiments an HMM (hidden Markov model) is used as the method for statistically analyzing the results of estimating the driver's instantaneous characteristics, but the present invention is not limited to this; other methods having equivalent functions may also be used.
 The present invention can be used to estimate personal characteristics based on behavior during vehicle driving.
This application claims priority based on Japanese Patent Application No. 2012-190952, filed on August 31, 2012, the entire contents of which are incorporated herein by reference.
DESCRIPTION OF SYMBOLS
 11 Signal detection unit
 12 Estimation unit
 13 Scene estimation unit
 14 Instantaneous characteristic estimation unit
 15 Instantaneous characteristic model storage unit
 16 Tendency estimation unit
 17 Model switching unit
 100, 101 Personal characteristic estimation device

Claims (9)

  1. A personal characteristic estimation device for estimating a driver's characteristics based on a driving operation of a vehicle, comprising:
     vehicle information acquisition means for acquiring, as vehicle information, at least one of information on operations by the driver, information on behavior of the vehicle, and information on conditions of the outside world;
     scene estimation means for estimating a scene in which the host vehicle travels based on the vehicle information;
     instantaneous characteristic model storage means for setting a plurality of scenes, setting a plurality of characteristic tendency labels indicating the driver's instantaneous characteristic tendencies, and storing, as an instantaneous characteristic model, a characteristic model in which each scene is associated with the characteristic tendency labels likely to be expressed in that scene;
     instantaneous characteristic estimation means for estimating the driver's instantaneous characteristic tendency with reference to the instantaneous characteristic model, based on the scene estimated by the scene estimation means; and
     characteristic tendency estimation means for statistically analyzing the instantaneous characteristic tendency estimated by the instantaneous characteristic estimation means to estimate the driver's static characteristic tendency.
  2. The personal characteristic estimation device according to claim 1, wherein the vehicle information acquisition means acquires at least one of an accelerator opening of the vehicle, a brake operation, a steering angle, a yaw rate, a vehicle speed, and a battery discharge level.
  3. The personal characteristic estimation device according to claim 1 or claim 2, further comprising model switching means for selecting, from the plurality of characteristic models classified for each scene in the instantaneous characteristic model storage means, the characteristic model classified into the scene estimated by the scene estimation means.
  4. The personal characteristic estimation device according to any one of claims 1 to 3, wherein the plurality of scenes are classified based on at least one of road shape, time zone, traffic congestion information, state of the outside world, presence or absence of pedestrians, presence or absence of vehicles, and presence or absence of oncoming vehicles.
  5. The personal characteristic estimation device according to any one of claims 1 to 4, wherein a hidden Markov model is used as the method for statistically analyzing the instantaneous characteristic tendency estimated by the instantaneous characteristic estimation means.
  6. The personal characteristic estimation device according to any one of claims 1 to 5, wherein a characteristic having a correlation equal to or greater than a predetermined value with a characteristic detected by means for measuring a general and static characteristic tendency is used as the characteristic tendency label.
  7. The personal characteristic estimation device according to any one of claims 1 to 6, wherein the characteristic tendency estimation means detects, at arbitrary time intervals, the label having the maximum likelihood, accumulates the detected labels into a histogram, and calculates the distribution of each label.
  8. The personal characteristic estimation device according to any one of claims 1 to 6, wherein the characteristic tendency estimation means accumulates the label having the maximum likelihood over a fixed period or a predetermined number of detections, builds a histogram from the normalized values, and calculates the distribution of each label.
  9. A personal characteristic estimation method for estimating a driver's characteristics based on a driving operation of a vehicle, the method comprising:
     setting a plurality of scenes;
     setting a plurality of characteristic tendency labels indicating the driver's instantaneous characteristic tendencies;
     storing, as an instantaneous characteristic model, a characteristic model in which each scene is associated with the characteristic tendency labels likely to be expressed in that scene;
     acquiring, as vehicle information, at least one of information on operations by the driver, information on behavior of the vehicle, and information on conditions of the outside world;
     estimating a scene in which the host vehicle travels based on the vehicle information;
     estimating the driver's instantaneous characteristic tendency with reference to the instantaneous characteristic model, based on the estimated scene; and
     statistically analyzing the estimated instantaneous characteristic tendency to estimate the driver's static characteristic tendency.
PCT/JP2013/073140 2012-08-31 2013-08-29 Individual characteristic estimation unit and individual characteristic estimation method WO2014034779A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012190952 2012-08-31
JP2012-190952 2012-08-31

Publications (1)

Publication Number Publication Date
WO2014034779A1 true WO2014034779A1 (en) 2014-03-06

Family

ID=50183584

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/073140 WO2014034779A1 (en) 2012-08-31 2013-08-29 Individual characteristic estimation unit and individual characteristic estimation method

Country Status (1)

Country Link
WO (1) WO2014034779A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019016238A (en) * 2017-07-07 2019-01-31 Kddi株式会社 Estimation apparatus, vehicle terminal, program, and method for estimating road section from which personal characteristic can be easily specified from driving vehicle signal
WO2019193660A1 (en) * 2018-04-03 2019-10-10 株式会社ウフル Machine-learned model switching system, edge device, machine-learned model switching method, and program
EP4273013A3 (en) * 2022-05-02 2024-01-10 Toyota Jidosha Kabushiki Kaisha Individual characteristics management device, individual characteristics management method, non-transitory storage medium storing a program, and method of generating learned model

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008126908A (en) * 2006-11-22 2008-06-05 Denso Corp Driving behavior estimation method and device
JP2009073465A (en) * 2007-08-28 2009-04-09 Fuji Heavy Ind Ltd Safe driving support system
JP2010221962A (en) * 2009-03-25 2010-10-07 Denso Corp Driving behavior prediction device
JP2012113631A (en) * 2010-11-26 2012-06-14 Toyota Motor Corp Driving support system and driving support management center


Similar Documents

Publication Publication Date Title
JP6074553B1 (en) Information processing system, information processing method, and program
CN110114810B (en) Information processing system, information processing method, and storage medium
JP6447929B2 (en) Information processing system, information processing method, and program
JP6341311B2 (en) Real-time creation of familiarity index for driver&#39;s dynamic road scene
JP6307356B2 (en) Driving context information generator
EP3159235B1 (en) Method and system for assisting a driver of a vehicle in driving the vehicle and computer program
JP6800575B2 (en) Methods and systems to assist drivers in their own vehicles
JP5867296B2 (en) Driving scene recognition device
JP5840046B2 (en) Information providing apparatus, information providing system, information providing method, and program
CN112203916A (en) Method and device for determining lane change related information of target vehicle, method and device for determining vehicle comfort measure for predicting driving maneuver of target vehicle, and computer program
JP6418574B2 (en) Risk estimation device, risk estimation method, and computer program for risk estimation
JP6206022B2 (en) Driving assistance device
CN110765807A (en) Driving behavior analysis method, driving behavior processing method, driving behavior analysis device, driving behavior processing device and storage medium
JP2008146549A (en) Drive support device, map generator and program
JP6511982B2 (en) Driving operation discrimination device
JP2012226602A (en) Vehicle information providing-device
WO2014034779A1 (en) Individual characteristic estimation unit and individual characteristic estimation method
JP2022502750A (en) Methods and devices for analyzing sensor data flows, as well as methods for guiding vehicles.
Rahman et al. Driving behavior profiling and prediction in KSA using smart phone sensors and MLAs
Westny et al. Vehicle behavior prediction and generalization using imbalanced learning techniques
JP4919172B2 (en) Vehicle guidance device
CN105447511B (en) A kind of SVM object detection method based on Adaboost Haar-Like feature
JP2014046820A (en) Driver&#39;s property estimation system
JP6926644B2 (en) Anomaly estimation device and display device
US20230048304A1 (en) Environmentally aware prediction of human behaviors

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13833851

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13833851

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP