WO2009107185A1 - On-vehicle robot - Google Patents

On-vehicle robot

Info

Publication number
WO2009107185A1
Authority
WO
WIPO (PCT)
Prior art keywords
driver
vehicle
familiarity
robot
index
Prior art date
Application number
PCT/JP2008/053147
Other languages
French (fr)
Japanese (ja)
Inventor
裕昭 柴崎
Original Assignee
パイオニア株式会社 (Pioneer Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by パイオニア株式会社 (Pioneer Corporation)
Priority to PCT/JP2008/053147
Also published as JPWO2009107185A1 (application JP2010500462A)
Publication of WO2009107185A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00: Programme-controlled manipulators
    • B25J 9/0003: Home robots, i.e. small robots for domestic use
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63H: TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
    • A63H 3/00: Dolls
    • A63H 3/28: Arrangements of sound-producing means in dolls; Means in dolls for producing sounds
    • A63H 2200/00: Computerized interactive toys, e.g. dolls

Definitions

  • the present invention relates to a vehicle-mounted robot that reacts to a driver.
  • a conventional robot interaction device simply reacts and exchanges simple conversation regardless of who the driver is, and such simple conversation alone gives the driver no particular motivation to drive safely.
  • the problems to be solved by the present invention include the above-mentioned problems as an example.
  • the invention comprises an operation determination unit that determines an operation to be performed in reaction according to the degree of familiarity with the driver, an operation mechanism that performs an operation according to the determination by the operation determination unit, and a control unit that corrects the degree of familiarity with the driver based on the driver's reaction to the operation performed by the operation mechanism and changes the operation to be performed by the operation mechanism.
  • FIG. 1 is a perspective view illustrating a configuration example of a part of a vehicle 100 according to the present embodiment.
  • the vehicle 100 includes, in its interior, left and right seats 6a and 6b, a side brake 7, a door 5, a handle (steering wheel) 4, a meter 7, an equipment panel 21, and a dashboard 9.
  • the equipment panel 21 has a display unit 15.
  • the display unit 15 is, for example, a display device for a car navigation device or a television receiver.
  • the door 5 opens and closes to let a driver or a passenger get in or out of the vehicle.
  • the left and right seats 6a and 6b are seats for the driver and passengers to sit on.
  • the handle 4 is the steering wheel, an operation member operated by the driver to steer the vehicle.
  • the side brake 7 is a braking device that holds the vehicle in place while it is stopped.
  • the meter 7 is a display unit that shows the speed of the vehicle 100, the engine speed, the water temperature, and the like.
  • the dashboard 9 is the member in front on the passenger-seat side of the left and right seats 6a and 6b.
  • an in-vehicle robot 1, corresponding to the robot of the invention, is disposed at a position that enters the driver's field of view while driving.
  • the in-vehicle robot 1 is arranged on the dashboard 9, near the center between the driver's seat and the passenger seat 6a, 6b.
  • the in-vehicle robot 1 may be placed on the dashboard 9 so as to be detachable or may be fixed.
  • This in-vehicle robot 1 has a photographing function, a recognition function, an individual reaction function, and the like, described later; it determines familiar persons, distinguishing a familiar driver from passengers who ride frequently, and greets and reacts to each of them individually.
  • In the following, the reaction of the vehicle-mounted robot 1 to the driver is described as an example.
  • FIGS. 2 and 3 are enlarged perspective views showing an example of an appearance in which the vehicle-mounted robot 1 shown in FIG. 1 is operating.
  • the in-vehicle robot 1 includes a head cover 1C, a head 1H, a main body 1B, a left arm 1L, a right arm 1R, and a leg 1FT.
  • the head cover 1C is a transparent member provided on the upper part of the main body 1B and covers the head 1H.
  • the left arm portion 1L and the right arm portion 1R can be moved relative to the main body 1B.
  • the in-vehicle robot 1 has a head 1H that moves relative to the main body 1B, and a camera 5 is provided at a position corresponding to the human eye in the head 1H.
  • the camera 5 is given the shape of an eye at the position corresponding to a human eye, and a shape corresponding to a human mouth is also provided; the head 1H thus presents an expression 1F formed by the shapes of the eyes and mouth.
  • the in-vehicle robot 1 can show a facial expression 1F that creates a gentle atmosphere, engaging the driver in a friendly, intimate manner.
  • the vehicle-mounted robot 1 can also show a facial expression 1F that creates an unpleasant atmosphere, engaging the driver in an unfriendly manner.
  • the facial expression 1F of the in-vehicle robot 1 is not limited to these; for example, it may express a frightened or uneasy atmosphere.
  • FIG. 4 is a block diagram showing an example of functions of the in-vehicle robot 1 shown in FIGS.
  • the in-vehicle robot 1 includes a camera 5, a subject recognition unit 6, a familiarity index calculation unit 11, a memory 19, a factor acquisition unit 18, an operation control unit 13, an evaluation unit 12, and an operation mechanism 14.
  • the familiarity index calculation unit 11 and the operation control unit 13 correspond to an operation determination unit, and determine an operation to react according to the familiarity with the driver. Further, the familiarity index calculation unit 11 and the operation control unit 13 may determine an operation to be reacted according to the ride history of the driver in addition to the familiarity with the driver.
  • the familiarity index calculation unit 11 corresponds to a familiarity index calculation means, and calculates a familiarity index in which the degree of familiarity with the driver is quantified.
  • the operation control unit 13 corresponds to an operation control unit, and controls the operation mechanism 14 by determining an operation to be reacted according to the familiarity index.
  • the operation mechanism 14 corresponds to an operation mechanism, and performs an operation according to the determination by the familiarity index calculation unit 11 and the operation control unit 13 as operation determining means.
  • This operation mechanism 14 is a generic term for actuators for moving the above-described head 1H, right arm 1R, and left arm 1L relative to the main body 1B.
  • the evaluation unit 12 corresponds to control means; it corrects the degree of familiarity with the driver based on the driver's reaction to the operation performed by the operation mechanism 14, and changes the operation to be performed by the operation mechanism 14.
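The units above form a closed loop: the familiarity index drives the chosen operation, and the observed reaction feeds back into the stored familiarity. A minimal Python sketch of that loop, with all class names, scoring rules, and thresholds invented for illustration:

```python
# Hypothetical sketch of the feedback loop of FIG. 4; the class names,
# scoring rules, and thresholds are illustrative, not from the patent.

class FamiliarityIndexCalculator:          # unit 11
    """Quantifies familiarity from the stored factors 16 and 17."""
    def calculate(self, quantitative: float, qualitative: float) -> float:
        return quantitative + qualitative

class OperationController:                 # unit 13
    """Maps a familiarity index to the operation to perform."""
    def decide(self, index: float) -> str:
        return "friendly_gesture" if index >= 0 else "unfriendly_gesture"

class Evaluator:                           # unit 12
    """Corrects a stored factor from the driver's observed reaction."""
    def correct(self, quantitative: float, reaction_score: float) -> float:
        return quantitative + reaction_score   # feedback into the factor

# One turn of the loop: calculate, decide, act, observe, correct.
calc, ctrl, evaluator = FamiliarityIndexCalculator(), OperationController(), Evaluator()
quantitative, qualitative = 2.0, 1.0
index = calc.calculate(quantitative, qualitative)    # 3.0
action = ctrl.decide(index)                          # "friendly_gesture"
quantitative = evaluator.correct(quantitative, reaction_score=-0.5)  # 1.5
```

The key design point the patent stresses is the last line: the driver's reaction is not merely displayed back but written into the stored factors, so the next decision differs.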
  • the function of the vehicle-mounted robot 1 will be specifically described.
  • the camera 5 is a photographing device that captures the scene within the field of view from the head 1H of the in-vehicle robot 1 disposed in the vehicle 100.
  • This landscape includes at least one of a driver and a passenger.
  • the camera 5 starts to operate when a driver (or one of the passengers) gets into the vehicle 100.
  • the camera 5 may start operating when the power is turned on to start the engine.
  • the target person recognition unit 6 corresponds to a recognition means and recognizes each of the plurality of drivers.
  • the target person recognition unit 6 has a function of recognizing at least one of a driver and a passenger as a target person by at least one of voice recognition and face recognition, for example.
  • the target person recognition unit 6 includes, for example, an analog/digital conversion unit; it converts the analog signal captured by the camera 5 into a digital signal and identifies the driver's face from the digital image data.
  • the memory 19 is a storage medium capable of storing information, and stores factors for calculating the later-described familiarity index, which expresses the degree of familiarity between the in-vehicle robot 1 and the driver as a numerical value. Specifically, the memory 19 holds a quantitative factor 17 and a qualitative factor 16.
  • the quantitative factor 17 is boarding history information representing past boarding history.
  • the quantitative factor 17 will be described in detail later.
  • the quantitative factor 17 may also include the detection result of the driving state by a G sensor, as an example of the factor acquisition unit 18, in view of the later-described method of calculating the familiarity index with the driving determination included as a factor.
  • the evaluation by the evaluation unit 12 may be one item of the quantitative factor 17. That item may then be updated independently of the other items; that is, among the items of the quantitative factor 17, some items receive feedback from the evaluation unit 12, and only their contents may be updated.
  • the qualitative factor 16 includes, for example: a determination of the driver's emotion toward the robot based on the tone of voice when calling to it; a determination based on the semantic content of the recognized speech; a determination of whether the person called out using a predetermined word such as the robot's name; and a determination by image recognition of whether the person's gestures, reactions, facial expressions, and so on were gentle, apart from the tone of voice.
  • the evaluation by the evaluation unit 12 may likewise be one item of the qualitative factor 16. That item may then be updated independently of the other items; that is, among the items of the qualitative factor 16, some items receive feedback from the evaluation unit 12, and only their contents may be updated.
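The idea that only some factor items accept feedback from the evaluation unit 12, and are updated independently of the others, can be sketched as follows; the item names and the "feedback" flag layout are assumptions, not taken from the patent:

```python
# Illustrative layout of the memory 19: each factor item carries a flag
# marking whether the evaluation unit 12 may feed back into it.
qualitative_factor_16 = {
    "voice_tone_emotion": {"value": 0.5, "feedback": True},
    "called_by_name":     {"value": 1.0, "feedback": False},
}

def apply_feedback(factor: dict, item: str, delta: float) -> None:
    """Update only an item that accepts evaluation feedback,
    independently of the other items."""
    if factor[item]["feedback"]:
        factor[item]["value"] += delta

apply_feedback(qualitative_factor_16, "voice_tone_emotion", 0.25)  # updated
apply_feedback(qualitative_factor_16, "called_by_name", 0.25)      # unchanged
```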
  • the familiarity index calculation unit 11 calculates the later-described familiarity index based on at least one of the quantitative factor 17 and the qualitative factor 16 read from the memory 19 corresponding to the driver recognized by the target person recognition unit 6.
  • This familiarity index is a numerical index that represents the degree of familiarity between the vehicle-mounted robot 1 showing various facial expressions and the driver.
  • the motion control unit 13 has a function of determining an operation to be performed by the in-vehicle robot 1 based on the familiarity index calculated by the familiarity index calculation unit 11.
  • the operation control unit 13 controls the operation mechanism 14 based on the determination items to perform an operation to be reacted. As described above, the operation mechanism 14 operates to move any one of the head 1H, the right arm 1R, and the left arm 1L of the in-vehicle robot 1.
  • From the content of such a movement of the head 1H, right arm 1R, or left arm 1L, the driver in contact with the vehicle-mounted robot 1 feels, for example, that the robot regards him as someone special and familiar, or, conversely, that the robot is frightened of him.
  • the evaluation unit 12 evaluates how the driver's attitude toward the vehicle-mounted robot 1 has changed as a result of the operation of the operation mechanism 14, based on the recognition result of the driver's image captured by the camera 5 and recognized by the target person recognition unit 6.
  • The above is one configuration example of the in-vehicle robot 1. Next, an example of the procedure of the operation control process based on the familiarity factors in this configuration example will be described with reference to FIGS. 1 to 4.
  • FIG. 5 is a flowchart illustrating an example of a procedure included in the operation control process based on the familiarity factor.
  • In step S1, the target person recognition unit 6 recognizes the target driver from an image of the interior of the vehicle 100 captured by the camera 5.
  • In step S2, the familiarity index calculation unit 11 acquires factors from the memory 19.
  • the memory 19 includes a quantitative factor 17 and a qualitative factor 16 as factors.
  • the familiarity index calculation unit 11 acquires at least one of the quantitative factor 17 and the qualitative factor 16 from the memory 19.
  • In step S3, the familiarity index calculation unit 11 calculates a familiarity index based on the acquired factors. Variations of the calculation method used by the familiarity index calculation unit 11 will be described later.
  • In step S4, the operation control unit 13 selects a determination method. Variations of this determination method will also be described later.
  • When a determination method has been selected by the operation control unit 13, the familiarity with the driver is determined using that method, and the operation mechanism 14 is operated based on the familiarity index described above.
  • In that case, the operation mechanism 14, for example, moves the left arm 1L downward relative to the main body 1B as shown in FIG. and gives the head 1H an unfriendly expression 1F.
  • the driver notices that the in-vehicle robot 1, which had felt like a special, close presence, is now showing such an unfriendly expression; he realizes that his own driving behavior has had some unfavorable influence on the vehicle-mounted robot 1, and changes his driving attitude, deciding that he must drive carefully.
  • In step S5, after some time has passed, the target person recognition unit 6 again recognizes the target driver from the image captured by the camera 5, and the evaluation unit 12 acquires the driver's reaction from the facial expression (or voice).
  • In step S6, the evaluation unit 12 evaluates and judges the driver's reaction acquired in this way.
  • In step S7, the familiarity index calculation unit 11 calculates the familiarity index again based on the evaluation unit 12's determination of the driver's reaction. At this time, the familiarity index calculation unit 11 may recalculate the familiarity index based not only on this determination result but also on at least one of the quantitative factor 17 and the qualitative factor 16 described above.
  • In step S8, the operation control unit 13 determines whether or not the new familiarity index is higher than the previous familiarity index. If the new familiarity index is higher, step S9 described later is executed; otherwise, step S10 described later is executed.
  • In step S9, the operation control unit 13 controls the operation mechanism 14 to execute an operation with higher familiarity than before. Specifically, the operation control unit 13 controls the operation mechanism 14 to move the left arm 1L upward relative to the main body while giving the head 1H an intimate facial expression 1F as shown in FIG.
  • Thereby, the vehicle-mounted robot 1 expresses an attitude that makes the driver feel intimate, the driver feels that the vehicle-mounted robot 1 is an even more special and closer presence, and the driver tries to continue driving safely. The process then returns to step S5.
  • In step S10, when the operation control unit 13 has determined in step S8 that the new familiarity index is not higher than the previous one, it controls the operation mechanism 14 to perform an operation with the same or lower familiarity than before.
  • Thereby, the in-vehicle robot 1 expresses an unfriendly attitude toward the driver, and the driver feels estranged from the robot that had seemed a special, close presence.
  • the driver then reconsiders his driving, realizing that unless he drives safely the robot will not be friendly to him, and takes care to drive carefully so that the in-vehicle robot 1 he is attached to becomes intimate with him again.
  • the in-vehicle robot 1 is mounted in the vehicle 100 as described above.
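The flow of FIG. 5 (steps S1 to S10) can be sketched as a simple loop; the scoring functions below are invented placeholders, and the real units 6, 11, 12, and 13 are far richer:

```python
# A minimal sketch, with made-up scoring, of the control flow of FIG. 5.

def calc_index(factors, reaction_score=0.0):          # S3 / S7
    return sum(factors.values()) + reaction_score

def decide_action(new_index, old_index):              # S8-S10
    if new_index > old_index:
        return "friendlier"          # S9: raise the shown familiarity
    return "same_or_less_friendly"   # S10

def control_loop(factors, reactions):
    index = calc_index(factors)                       # S3 (after S1-S2)
    actions = []
    for reaction_score in reactions:                  # S5-S6 per observation
        new_index = calc_index(factors, reaction_score)   # S7
        actions.append(decide_action(new_index, index))   # S8-S10
        index = new_index
    return actions

# Driver first responds well, then poorly:
print(control_loop({"rides": 1.0}, [0.5, -0.5]))
# -> ['friendlier', 'same_or_less_friendly']
```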
  • the familiarity index calculation unit 11 may calculate the familiarity index using at least one of the following quantitative factor 17 and qualitative factor 16 items.
  • Quantitative factor 17:
    (1) Cumulative number of rides in the vehicle 100
    (2) Cumulative boarding time in the vehicle 100
    - Frequency of boarding the vehicle 100.
    - Whether the person usually rides as the regular driver, determined by the ratio of the number of times, period, and frequency of boarding as a driver to the total number of times, period, and frequency.
    - Even for the same person, whether a given boarding is a boarding as the driver or not.
    - The relationship with the driver, determined from whether the person usually sits in the passenger seat or the back seat; an example of such a relationship is the familiarity between the in-vehicle robot 1 and the driver.
    - The driving determination result of safe driving or rough driving.
  • Qualitative factor 16:
    - The emotion toward the vehicle-mounted robot 1 may be determined from the tone of the driver's calling voice.
    - It may be determined from the semantic content of the driver's speech.
    - It may be determined according to whether the person called out with a predetermined word such as the robot's name.
    - A word that a stranger would not normally use, such as "nice to see you again," may be treated the same as a call by name.
    - Whether the person's gestures, reactions, facial expressions, and so on are gentle, apart from the tone of voice, may be determined by image recognition.
  • The state of driving may be determined from an image captured by a camera, or from engine sound, road noise, the presence or absence of drowsiness recognized from the driver's facial expression, and the like.
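As a hedged illustration, the listed determinations might be turned into numeric scores like this; the robot name, phrases, weights, and thresholds are all invented for the sketch:

```python
# Hypothetical scoring of the qualitative factor 16 and quantitative
# factor 17 items; every constant here is an illustrative assumption.

ROBOT_NAME = "robo"  # hypothetical name given to the robot
FAMILIAR_PHRASES = ("met you again", "good to see you")

def qualitative_score(utterance: str, gesture_gentle: bool) -> int:
    score = 0
    if ROBOT_NAME in utterance.lower():
        score += 2                    # called the robot by name
    if any(p in utterance.lower() for p in FAMILIAR_PHRASES):
        score += 2                    # phrase a stranger would not use
    if gesture_gentle:
        score += 1                    # gentle gesture/expression, by image recognition
    return score

def quantitative_score(ride_count: int, driver_ratio: float, safe_driving: bool) -> int:
    score = ride_count // 10          # cumulative rides, coarsely bucketed
    if driver_ratio > 0.5:
        score += 1                    # usually boards as the driver
    score += 1 if safe_driving else -1
    return score

print(qualitative_score("Hi robo, met you again!", gesture_gentle=True))  # -> 5
print(quantitative_score(25, 0.8, safe_driving=False))                    # -> 2
```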
  • the familiarity index calculation unit 11 described above uses the following factors singly or in combination.
  • the familiarity index calculation unit 11 calculates a familiarity index for each driver, and the operation control unit 13 determines an action to be taken based on the familiarity index.
  • the quantitative factor 17 as accumulated data may be accumulated for each driver.
  • Quantitative factor 17:
    (1) Cumulative number of rides in the vehicle 100
    (1-1) Methods of determining that a driver or other person has boarded the vehicle 100:
    - The determination may be based on the robot's recognition of the person's face and voice.
    - Data from a personal authentication system other than the robot (navigation system, vehicle fingerprint sensor, car key detection, etc.) may be used.
    - A seating sensor or a door opening/closing sensor (not shown) may be used in combination.
    (1-2) Entry and exit other than during traveling may be excluded from the count.
    (1-3) Boardings and entries/exits shorter than a predetermined time may be excluded.
    (1-4) Entry and exit that can be determined from the stop-off point to be a stop along the way may be excluded; the determination may be made when the point is a place where temporary boarding and exiting commonly occur, such as a convenience store, gas station, restaurant, shopping center, or city parking lot.
    (1-5) Entry and exit that can be determined to be a stopover from the set destination or set route may be excluded:
    - entry and exit at a point other than the destination
    - entry and exit partway along the set route
    (1-6) Any number of boardings in one day (or in a predetermined period such as 24 hours) may be counted as one.
  • Frequency of boarding the vehicle 100:
    (3-1) The determination may be based on the period elapsed from the driver's previous boarding to the current boarding.
    (3-2) The determination may be based on the number of boardings or the boarding time within a predetermined period.
    (3-3) The boarding frequency may be obtained, for example, as the ratio of the number of days actually boarded to the number of days in a predetermined period.
    (3-4) The frequency may be calculated per predetermined counting period, and it may be determined whether the frequency is increasing.
  • The determination may be based on whether the person usually rides as the regular driver; that is, the number of times, period, and frequency of boarding as a driver, and their ratio to the totals, may be used as a reference. For example, the face at the driver's seat is recognized; the driver's seat position may be registered in advance, or the vehicle-mounted robot 1 may recognize the handle 4 by image.
  • the relationship (intimacy) with the driver may be determined, and taken into account, based on whether the person usually sits in the passenger seat 6b or in the back seat.
  • the seat position may be registered in advance, or the vehicle-mounted robot 1 may recognize the presence or absence of the handle 4 or of the front seat.
  • the passenger seat 6b may be determined to indicate higher intimacy, the determination may be made per age group from the passenger's appearance (a child may be determined to have higher intimacy even in the rear seat), and this determination may be performed only when there are multiple passengers besides the driver.
  • the number of times, period, frequency, and so on described above may be accumulated from the first boarding, or only over a predetermined period such as the last month.
  • the judgment values may be graphed in chronological order or in order of occurrence, and the familiarity may be judged from the pattern of this graph.
  • for example, even if the familiarity index generally shows a certain positive value but occasionally dips to a particularly low negative value so that the cumulative total is low, it may be judged from the graph pattern that the person's emotional swings are simply large and that the underlying trend is positive.
  • the total may be accumulated from the first boarding, or over a predetermined period such as the last month. In the present embodiment, values may also be accumulated only during one drive and reset at the next drive, or only data from a predetermined drive, such as the previous drive only, may be stored, with the next drive starting from there. As a specific example, the evaluation unit 12 may make the determination by adding the current total to the previous total.
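A sketch of counting rules (1-3), (1-4), and (1-6) and the frequency ratio of (3-3); the minimum duration, the stop-off place list, and the sample data are illustrative assumptions:

```python
# Hypothetical ride-counting per rules (1-3), (1-4), (1-6) and the
# frequency ratio of (3-3); all thresholds are invented for the sketch.
from datetime import datetime, timedelta

STOPOFF_PLACES = {"convenience store", "gas station", "restaurant", "parking lot"}
MIN_DURATION = timedelta(minutes=10)     # (1-3): drop very short boardings

def count_rides(boardings):
    """boardings: list of (start, end, place) tuples for one driver."""
    counted_days = set()
    total = 0
    for start, end, place in boardings:
        if end - start < MIN_DURATION:   # (1-3): too short
            continue
        if place in STOPOFF_PLACES:      # (1-4): stop along the way
            continue
        day = start.date()
        if day in counted_days:          # (1-6): at most once per day
            continue
        counted_days.add(day)
        total += 1
    return total

def boarding_frequency(days_boarded: int, days_in_period: int) -> float:
    """(3-3): ratio of days actually boarded to days in the period."""
    return days_boarded / days_in_period

rides = [
    (datetime(2008, 2, 1, 8),  datetime(2008, 2, 1, 9),    "home"),         # counted
    (datetime(2008, 2, 1, 18), datetime(2008, 2, 1, 19),   "home"),         # same day
    (datetime(2008, 2, 2, 8),  datetime(2008, 2, 2, 8, 5), "home"),         # too short
    (datetime(2008, 2, 3, 8),  datetime(2008, 2, 3, 9),    "gas station"),  # stop-off
]
print(count_rides(rides))              # -> 1
print(boarding_frequency(15, 30))      # -> 0.5
```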
  • FIG. 6 is a diagram showing a first example in which the quantitative factor 17 is quantified.
  • the vertical axis indicates the evaluation
  • the horizontal axis indicates the number of times, the frequency, and the like.
  • the evaluation unit 12 can employ a pattern in which the quantitative factor 17 is evaluated only in the positive direction by a stacking method. That is, when the evaluation unit 12 performs numerical evaluation of the quantitative factor 17 and calculates a familiarity index to perform the evaluation, the evaluation increases to the right as illustrated in accordance with an increase in the number of times, frequency, and the like.
  • FIG. 7 is a diagram showing a second example in which the quantitative factor 17 is digitized.
  • the vertical axis indicates the evaluation
  • the horizontal axis indicates the number of times, the frequency, and the like.
  • the evaluation unit 12 can employ a pattern that evaluates positively or negatively depending on the numerical value at that time. That is, when the evaluation unit 12 converts the quantitative factor 17 into a numerical value, calculates a familiarity index, and performs the evaluation, the evaluation increases or decreases with the evaluation value 0 as a boundary according to an increase in the number of times, frequency, and the like.
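The two numerification patterns of FIG. 6 and FIG. 7 can be expressed as simple evaluation functions; the unit step and the neutral point below are arbitrary choices for the sketch:

```python
# FIG. 6 / FIG. 7 evaluation patterns; step size and neutral point
# are illustrative assumptions, not values from the patent.

def evaluate_positive_only(count: int, step: float = 1.0) -> float:
    """FIG. 6: stack evaluation in the positive direction only."""
    return max(count, 0) * step

def evaluate_signed(value: float, neutral: float = 5.0, step: float = 1.0) -> float:
    """FIG. 7: positive above a neutral point, negative below it."""
    return (value - neutral) * step

print(evaluate_positive_only(3))   # -> 3.0
print(evaluate_signed(3.0))        # -> -2.0 (below the neutral point)
```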
  • a negative determination of a numerical value corresponding to the period may be made (see a graph application example described later).
  • -It is possible to accumulate both positive and negative values of a factor (predetermined period) and determine whether the familiarity index is changed to positive or negative depending on whether the cumulative value is positive or negative.
  • the numerical value obtained by the graph as in the graph application example described later is accumulated over a predetermined period such as the last month, and the result value is positive or negative It may be determined whether the familiarity index is changed to plus or minus.
  • It is also possible to accumulate both positive and negative values of a plurality of factors over a predetermined period, and determine whether the familiarity index changes to positive or negative depending on whether the cumulative value is positive or negative.
  • factors determined only in the positive direction, factors determined only in the negative direction, and factors determined in both directions may be mixed, each multiplied by a predetermined coefficient.
  • Familiarity index = p × [cumulative ride count evaluation] + q × [evaluation of the period elapsed since the previous ride] + r × [name-call evaluation], where p, q, and r are predetermined weighting coefficients.
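The weighted combination above can be written directly in code; the coefficient values below are invented, since the patent names the coefficients but not their magnitudes:

```python
# Weighted familiarity index; p, q, r values are illustrative assumptions.

def familiarity_index(ride_eval: float, elapsed_eval: float, name_eval: float,
                      p: float = 0.5, q: float = 0.3, r: float = 0.2) -> float:
    return p * ride_eval + q * elapsed_eval + r * name_eval

# Many rides, some time since the last one, driver uses the robot's name:
print(round(familiarity_index(ride_eval=8.0, elapsed_eval=-2.0, name_eval=5.0), 3))
# -> 4.4
```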
  • FIG. 8 is a diagram illustrating a first example of how the familiarity index decreases according to the period elapsed since the driver last boarded.
  • the evaluation unit 12 may adopt a pattern in which the familiarity index decreases according to the period elapsed since the driver last boarded.
  • FIG. 9 is a diagram illustrating a second example of how the familiarity index decreases according to the period elapsed since the driver last boarded.
  • the evaluation unit 12 may adopt a pattern in which the familiarity index is held comparatively long, with the qualitative factor 16 or the quantitative factor 17 retained in the memory 19, even after time has elapsed since the driver last boarded.
  • FIG. 10 is a diagram illustrating a third example of how the familiarity index decreases according to the period elapsed since the driver last boarded.
  • the evaluation unit 12 may adopt a pattern in which the familiarity index decreases quickly soon after the driver last boarded.
  • a plurality of graphs may be prepared, and the graph to be applied may be changed according to the character, growth, and mood of the vehicle-mounted robot 1.
  • the character of the vehicle-mounted robot 1 here represents an individual difference.
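The three decay patterns of FIGS. 8 to 10, and the idea of selecting a curve by the robot's character, might be sketched as follows; the time constants and the character-to-curve mapping are assumptions:

```python
# Illustrative decay shapes roughly matching FIGS. 8-10; all rates and
# the character->curve mapping are invented for the sketch.
import math

def decay_linear(index: float, days: float) -> float:           # FIG. 8
    return max(index - 0.1 * days, 0.0)

def decay_hold_then_drop(index: float, days: float) -> float:   # FIG. 9
    return index if days < 30 else max(index - 0.2 * (days - 30), 0.0)

def decay_fast(index: float, days: float) -> float:             # FIG. 10
    return index * math.exp(-days / 5)

CHARACTER_CURVES = {"patient": decay_hold_then_drop,
                    "average": decay_linear,
                    "moody":   decay_fast}

def decayed_index(character: str, index: float, days_since_last_ride: float) -> float:
    """Apply the decay curve chosen by the robot's character."""
    return CHARACTER_CURVES[character](index, days_since_last_ride)

print(decayed_index("average", 10.0, days_since_last_ride=20))  # -> 8.0
print(decayed_index("patient", 10.0, days_since_last_ride=20))  # -> 10.0
```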
  • As described above, the in-vehicle robot 1 in the above embodiment includes the operation determination means 11 and 13 (corresponding to the familiarity index calculation unit and the operation control unit) that determine the operation to be performed in reaction according to the familiarity with the driver, the operation mechanism 14 that performs an operation according to the determination by the operation determination means 11 and 13, and the control means 12 (corresponding to the evaluation unit) that corrects the degree of familiarity with the driver based on the driver's reaction to the operation performed by the operation mechanism 14 and changes the operation to be performed by the operation mechanism 14.
  • In response to safe driving, the in-vehicle robot 1 performs a friendly gesture to show familiarity to the driver. Meeting the intimate reaction of the in-vehicle robot 1, the driver tries to continue such safe driving.
  • In response to rough driving, the vehicle-mounted robot 1 makes a frightened response. Meeting the distant response of the in-vehicle robot, the driver notices that the in-vehicle robot 1 he cares about is frightened, and tries to correct such rough driving. As a result, the driver's own reflection on his driving can be promoted, and he can be led to safer driving. This effect grows as the vehicle-mounted robot 1 becomes, for the driver, an object of attachment or emotional transference rather than a mere machine.
  • the in-vehicle robot 1 performs a grave motion, uneasy behavior, and voice utterances when rough driving such as sudden steering or sudden braking is detected by a G sensor or the like.
  • the vehicle-mounted robot 1 gradually shows friendlier gestures to a familiar person; but if the driver continues rough driving, its usual behavior toward the driver, which had been growing accustomed and increasingly friendly, shifts to something distant.
  • When the vehicle-mounted robot 1 recognizes a familiar driver at boarding and begins to show a friendlier gesture than before, it will instead show a frightened gesture if the driver drives roughly. If the driver continues such driving, the familiarity accumulated so far is lost; the driver recognizes that, to make the vehicle-mounted robot 1 behave amiably again, he must re-familiarize from the beginning and must not squander the accumulation of safe driving, and a psychological restraint against rough driving works on him.
  • the vehicle-mounted robot 1 changes its operation and reaction according to such a familiarity level (corresponding to the familiarity index). That is, when the vehicle-mounted robot 1 detects rough driving, it changes its operation and reaction according to the degree of familiarity with the driver at that time. For example, when a highly familiar driver drives roughly, the robot may react more exaggeratedly than usual: as if the trust the driver has built with it had been betrayed, the in-vehicle robot 1 performs a more fearful operation, and if the driving is corrected immediately, it may make an even friendlier gesture than before, as the relationship of trust grows stronger.
  • the operation determination means 11 and 13 (corresponding to the familiarity index calculation unit and the operation control unit) determine the operation to be performed in reaction not only according to the degree of familiarity with the driver but also according to the driver's boarding history 17 (corresponding to the quantitative factor).
  • Thereby, the vehicle-mounted robot 1 can perform an operation that also reflects the driver's boarding history.
  • the in-vehicle robot 1 in the embodiment further includes, in addition to the above configuration, recognition means 6 (corresponding to the target person recognition unit) that identifies and recognizes each of a plurality of drivers, and the operation determination means 11 and 13 are characterized in that the familiarity degree is managed according to the reaction of each driver identified by the recognition means 6.
• Therefore, even when any of a plurality of drivers happens to be driving, the vehicle-mounted robot 1 can react differently to each driver.
• The motion determination means further include a familiarity index calculation unit 11 (corresponding to familiarity index calculation means) that calculates a familiarity index quantifying the familiarity with the driver, and a motion control means 13 (corresponding to an operation control unit) that determines the operation to be performed according to the familiarity index.
• Using the familiarity index, which quantifies its relationship with the driver, the vehicle-mounted robot 1 objectively determines its familiarity with the driver and, according to that index, can for example behave in an unfriendly way when no intimate relationship has been established and behave favorably when one has.
• The motion determination means 11 and 13 further determine the operation to be performed based on the familiarity degree according to the driver's speech content recognized by the recognition means 6, and control the operation mechanism 14 accordingly.
• The in-vehicle robot 1 can thus determine the degree of familiarity from the voice-recognized speech of the driver and perform a reaction that accurately matches the degree of familiarity with the driver.
• The control unit 12 (corresponding to the evaluation unit) is further characterized in that it corrects the familiarity index according to the utterance content of the driver recognized by the recognition unit 6.
• The familiarity index calculation means 11 further calculates the familiarity index depending on whether or not a predetermined word is included in the voice-recognized speech content.
• As the predetermined word, the name given to the vehicle-mounted robot 1 can be cited, for example.
  • the vehicle-mounted robot 1 can easily determine the degree of familiarity with the driver based on whether or not a predetermined word is present in the utterance content of the driver.
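As a rough illustration of this word-based correction, the following Python sketch checks a voice-recognized utterance for a predetermined word and nudges the familiarity index accordingly. All names, the example words, and the bonus value are assumptions for illustration, not values taken from the patent.

```python
# Hypothetical example words: the robot's given name and a familiar phrase.
PREDETERMINED_WORDS = {"robo", "we meet again"}

def contains_predetermined_word(utterance: str) -> bool:
    """True if the voice-recognized utterance contains a predetermined word."""
    text = utterance.lower()
    return any(word in text for word in PREDETERMINED_WORDS)

def correct_familiarity_index(index: float, utterance: str,
                              bonus: float = 5.0) -> float:
    """Add a small bonus when the driver used a predetermined word;
    the 0-100 cap on the index is an assumption."""
    if contains_predetermined_word(utterance):
        index += bonus
    return min(index, 100.0)
```

For example, `correct_familiarity_index(40.0, "Good morning, Robo!")` would return 45.0 under these assumptions.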
• The motion determination means 11 and 13 determine the operation to be performed based on the degree of familiarity according to the driver's facial expression recognized by image recognition by the recognition means 6.
• The vehicle-mounted robot 1 can determine the degree of familiarity with the driver from the image-recognized facial expression of the driver and perform a reaction that matches that degree of familiarity.
• The control unit 12 further corrects the familiarity index according to the driver's facial expression recognized by image recognition by the recognition unit 6.
• One of the two types of familiarity index calculation methods described later, such as the latter calculation method, can be selectively employed.
• When the driver has boarded many times, the in-vehicle robot 1 becomes accustomed to the driver. In this case, even when the driver drives roughly, the familiarity index does not change, so the familiarity itself remains, but the robot shows an uneasy expression; when the driver returns to safe driving, it again shows its familiar, pleased appearance. The driver therefore reflects on seeing the long-familiar in-vehicle robot 1 look uneasy and returns to safe driving. This exploits the psychological effect that a relationship built up over time is more compelling than one that is familiar from the beginning.
  • the motion control unit 13 and the familiar index calculation unit 11 may be integrated.
• Although the in-vehicle robot 1 placed inside the vehicle 100 has been illustrated as an example, the invention is not restricted to this; it may be applied, for example, to a robot or toy placed in a house, such as at the front door or in a room.
• The in-vehicle robot 1 may also be applied as a robot intended to contribute to purposes other than the safe driving described above.
• A robot having substantially the same functions as the in-vehicle robot 1 may, instead of performing the driving determination described above, watch over a room: if image recognition finds the room messy, the robot becomes displeased, and when the room is tidied it shows a pleased reaction. In this case, the displeased state represents a state in which familiarity has decreased.
• FIG. 4 is a block diagram illustrating an example of the functions of the robot illustrated in FIGS. 1 to 3. A flowchart shows an example of the procedure included in the operation control process based on the familiarity factors; one diagram shows a first example in which the quantitative factor is quantified, and another shows a second example in which the quantitative factor is quantified.
  • Target person recognition unit (equivalent to recognition means)
  • Familiarity index calculation unit (equivalent to familiarity index calculation means, action determination means)
  • Evaluation part (equivalent to control means)
  • Operation control unit (equivalent to operation control means and operation determination means)
  • Operating mechanism (equivalent to operating mechanism)
• Quantitative factor
• 100 Vehicle

Abstract

An on-vehicle robot (1) includes a familiarity index calculating part (11) and an operation control part (13) that decide an operation to be performed in accordance with the degree of familiarity with the driver; an operation mechanism (14) that performs the operation decided by the familiarity index calculating part (11) and the operation control part (13); and an evaluating part (12) that corrects the degree of familiarity with the driver based on the reaction of the driver to the operation performed by the operation mechanism (14), thereby changing the operation that the operation mechanism (14) is to perform.

Description

In-vehicle robot
The present invention relates to a vehicle-mounted robot that reacts to a driver.
In recent years, not only industrial robots but also various humanoid robots that perform human-like operations have been developed. As such a humanoid robot, there is a conventional robot dialogue device that recognizes the speech of an operator by voice recognition, analyzes the speech content, and replies to the operator with corresponding content (see Patent Document 1). If this robot dialogue device is mounted on a vehicle, it can hold simple conversations with the driver or passengers according to their utterances.
JP 2004-291176 A (paragraph 0025, FIG. 1)
In the above conventional technology, the robot dialogue device merely reacts and exchanges simple conversation regardless of who the driver is, and such simple conversation alone could not serve as a trigger for the driver to take care to drive safely.
The problems to be solved by the present invention include the above problem as an example.
In order to solve the above problem, the invention according to claim 1 comprises: operation determination means for determining an operation to be performed according to the degree of familiarity with a driver; an operation mechanism for performing the operation determined by the operation determination means; and control means for correcting the degree of familiarity with the driver based on the reaction of the driver to the operation performed by the operation mechanism, thereby changing the operation to be performed by the operation mechanism.
Hereinafter, an embodiment of the present invention will be described with reference to the drawings.
FIG. 1 is a perspective view illustrating a configuration example of part of the interior of a vehicle 100 according to the present embodiment.
The vehicle 100 includes, in its interior, left and right seats 6a and 6b, a side brake 7, a door 5, a steering wheel 4, a meter 7, a device panel 21, and a dashboard 9.
The device panel 21 has a display unit 15, which is, for example, the display of a car navigation device or a television receiver. The door 5 is opened and closed when the driver or a passenger gets in or out of the vehicle. The left and right seats 6a and 6b are seats for the driver and a passenger to sit on. The steering wheel 4 is an operation member operated by the driver when driving. The side brake 7 is a kind of braking device for holding the stopped vehicle in place so that it does not move. The meter 7 is a display unit that shows the speed, engine speed, water temperature, and the like of the vehicle 100. The dashboard 9 is the member directly in front of the passenger seat among the left and right seats 6a and 6b.
On the dashboard 9, an in-vehicle robot 1 corresponding to a robot is disposed, particularly at a position that enters the driver's field of view while driving. In the present embodiment, it is arranged approximately midway between the driver's seat and the passenger seat 6a, 6b on the dashboard 9. The in-vehicle robot 1 may simply be placed detachably on the dashboard 9, or may be fixed to it. The in-vehicle robot 1 has a photographing function, a recognition function, an individual reaction function, and the like, described later; it judges whether a person is familiar, identifies the familiar driver and passengers who ride frequently, and greets and reacts to each of them individually. In the following description, the in-vehicle robot 1 is described, as an example, as reacting to the driver.
FIGS. 2 and 3 are enlarged perspective views each showing an example of the appearance of the in-vehicle robot 1 shown in FIG. 1 in operation.
As shown in FIG. 2, the in-vehicle robot 1 includes a head cover 1C, a head 1H, a main body 1B, a left arm 1L, a right arm 1R, and legs 1FT. The head cover 1C is a transparent member provided on the upper part of the main body 1B and covers the head 1H. The left arm 1L and the right arm 1R can each move relative to the main body 1B.
In the in-vehicle robot 1, the head 1H moves relative to the main body 1B, and a camera 5 is provided in the head 1H at the position corresponding to human eyes. On the head 1H, eye shapes are formed at the position of the camera 5 corresponding to human eyes, and a mouth shape is formed at the position corresponding to a human mouth. The head 1H thus expresses a facial expression 1F through the shapes of these eyes and mouth. The eyes and mouth can be deformed by an operation mechanism described later.
In FIG. 2, the in-vehicle robot 1 has a facial expression 1F that creates a mild atmosphere, showing that it is on intimate terms with the driver. In FIG. 3, by contrast, the expression 1F creates a distant atmosphere, showing that the robot is treating the driver coldly. The expression 1F of the in-vehicle robot 1 is not limited to these; for example, it may also create a frightened, uneasy atmosphere.
FIG. 4 is a block diagram showing an example of the functions of the in-vehicle robot 1 shown in FIGS. 1 to 3.
The in-vehicle robot 1 includes a camera 5, a target person recognition unit 6, a familiarity index calculation unit 11, a memory 19, a factor acquisition unit 18, an operation control unit 13, an evaluation unit 12, and an operation mechanism 14.
Of these, the familiarity index calculation unit 11 and the operation control unit 13 correspond to operation determination means, and determine the operation to be performed according to the degree of familiarity with the driver. The familiarity index calculation unit 11 and the operation control unit 13 may also determine the operation to be performed according to the driver's boarding history in addition to the degree of familiarity with the driver.
Specifically, the familiarity index calculation unit 11 corresponds to familiarity index calculation means and calculates a familiarity index that quantifies the degree of familiarity with the driver. The operation control unit 13 corresponds to operation control means, and controls the operation mechanism 14 by determining the operation to be performed according to the familiarity index.
The operation mechanism 14 corresponds to an operation mechanism and performs the operation determined by the familiarity index calculation unit 11 and the operation control unit 13 serving as operation determination means. The operation mechanism 14 is a generic term for the actuators that move the head 1H, the right arm 1R, and the left arm 1L described above relative to the main body 1B.
The evaluation unit 12 corresponds to control means; it corrects the degree of familiarity with the driver based on the reaction of the driver to the operation performed by the operation mechanism 14, and changes the operation that the operation mechanism 14 is to perform. The functions of the in-vehicle robot 1 will now be described in detail.
The camera 5 is a photographing device for photographing the scene that comes into view from the head 1H of the in-vehicle robot 1 placed inside the vehicle 100, with the driver and passengers in the field of view. The scene includes at least one of the driver and a passenger. The camera 5 starts operating when the driver (or a passenger) gets into the vehicle 100. Alternatively, the camera 5 may start operating when the power is turned on to start the engine.
The target person recognition unit 6 corresponds to recognition means and recognizes each of a plurality of drivers. The target person recognition unit 6 has a function of recognizing at least one of the driver and a passenger as a target person by at least one of voice recognition and face recognition, for example. The target person recognition unit 6 incorporates, for example, an analog/digital conversion unit, converts the analog signal captured by the camera 5 into a digital signal, and identifies the face of the photographed driver based on the digitized image data.
The memory 19 is a storage medium capable of storing information, and stores factors for calculating the familiarity index, described later, which numerically expresses the degree of familiarity between the in-vehicle robot 1 and the driver. Specifically, the memory 19 holds a quantitative factor 17 and a qualitative factor 16.
An example of the quantitative factor 17 is boarding history information representing past boarding history. As described in detail later, the quantitative factor 17 can include, for example, the cumulative number of times the driver has boarded the vehicle 100, the cumulative boarding time in the vehicle 100, and a driving determination result as to whether the driving is safe or rough. In view of the later-described method of calculating the familiarity index with the driving determination included as a familiarity factor, the quantitative factor 17 may also include the driving state detected by a G sensor, which is an example of the factor acquisition unit 18. The evaluation by the evaluation unit 12 may be one item of the quantitative factor 17. That item may be updated independently of the other items, or there may be several items of the quantitative factor 17 to which the evaluation of the evaluation unit 12 is fed back, with only those items being updated.
On the other hand, as described in detail later, examples of the qualitative factor 16 include: a judgment of the person's feeling toward the robot from the tone of the calling voice; a judgment based on the meaning of the voice-recognized call; a judgment of whether the person has ever made a call including the name of the in-vehicle robot 1; and an image-recognition judgment of whether the person's gestures, reactions, facial expressions, and so on, rather than the tone of voice, were gentle. The evaluation by the evaluation unit 12 may be one item of the qualitative factor 16. That item may be updated independently of the other items, or there may be several items of the qualitative factor 16 to which the evaluation of the evaluation unit 12 is fed back, with only those items being updated.
The familiarity index calculation unit 11 calculates the familiarity index, described later, based on at least one of the quantitative factor 17 and the qualitative factor 16 read from the memory 19, in correspondence with the driver recognized by the target person recognition unit 6. The familiarity index is a numerical indicator of the degree of familiarity between the in-vehicle robot 1, which shows various facial expressions, and the driver.
The operation control unit 13 has a function of determining the operation with which the in-vehicle robot 1 should react, based on the familiarity index calculated by the familiarity index calculation unit 11. The operation control unit 13 controls the operation mechanism 14 based on that determination so that the reaction is performed. As described above, the operation mechanism 14 operates to move any of the head 1H, the right arm 1R, and the left arm 1L of the in-vehicle robot 1.
From the content of a movement of the head 1H, the right arm 1R, or the left arm 1L, the driver who witnesses it comes to feel, for example, that the in-vehicle robot 1 is a unique being close to him or her, or, conversely, that the robot seems afraid of the driver.
The evaluation unit 12 evaluates how the driver's attitude has changed as a result of the operation of the operation mechanism 14. The evaluation unit 12 has a function of evaluating how the driver's attitude toward the in-vehicle robot 1 has changed, based on the recognition result obtained when the target person recognition unit 6 subsequently recognizes the driver's image captured by the camera 5 as described above.
The in-vehicle robot 1 has the above configuration as one example. Next, an example of the procedure included in the operation control process based on the familiarity factors in this configuration example will be described with reference to FIGS. 1 to 4.
FIG. 5 is a flowchart showing an example of the procedure included in the operation control process based on the familiarity factors.
In step S1, the target person recognition unit 6 recognizes the driver to be targeted from the in-vehicle image of the vehicle 100 captured by the camera 5. Next, in step S2, the familiarity index calculation unit 11 acquires factors from the memory 19. Specifically, the memory 19 holds the quantitative factor 17 and the qualitative factor 16 as factors, and the familiarity index calculation unit 11 acquires at least one of them from the memory 19.
Next, in step S3, the familiarity index calculation unit 11 calculates the familiarity index based on the acquired factors. Variations of the familiarity index calculation method used by the familiarity index calculation unit 11 are described later. Next, in step S4, the operation control unit 13 selects a determination method. Variations of this determination method are also described later.
In step S4, when the operation control unit 13 has selected a determination method, it determines the degree of familiarity with the driver using that method and operates the operation mechanism 14 based on the familiarity index described above. Here, for example, when the driver is driving roughly and the familiarity index is low, the operation mechanism 14 moves the left arm 1L downward relative to the main body 1B and gives the head 1H an unfriendly expression 1F, as shown in FIG. 3.
The driver then notices that the in-vehicle robot 1, which had felt like a unique being close to him or her, is showing such an unfriendly expression, realizes that his or her own behavior, such as the driving, is having some bad influence on the in-vehicle robot 1, and amends the driving attitude, resolving to drive more carefully.
Next, in step S5, after some time has passed, the evaluation unit 12 obtains the reaction of the target driver from the facial expression (or voice) recognized by the target person recognition unit 6 in the driver's image captured by the camera 5. Next, in step S6, the evaluation unit 12 evaluates and judges the driver's reaction obtained in this way. Next, in step S7, the familiarity index calculation unit 11 recalculates the familiarity index based on the evaluation unit 12's judgment of the driver's reaction. At this time, the familiarity index calculation unit 11 may recalculate the familiarity index based not only on this judgment result but also on at least one of the quantitative factor 17 and the qualitative factor 16 described above.
Next, in step S8, the operation control unit 13 judges whether the new familiarity index is higher than the previous familiarity index; if it is higher, step S9 described later is executed, and if it is not, step S10 described later is executed.
In step S9, the operation control unit 13 controls the operation mechanism 14 to execute an operation showing a higher degree of familiarity than before. Specifically, the operation control unit 13 controls the operation mechanism 14 so that the head 1H takes on an intimate expression 1F while the left arm 1L moves upward relative to the main body, as shown in FIG. 2.
The in-vehicle robot 1 thereby expresses an attitude that makes the driver feel close to it; the driver feels all the more that the in-vehicle robot 1 is a unique being close to him or her, and tries to continue driving safely. The process then returns to step S5 described above.
On the other hand, when the operation control unit 13 judges in step S8 that the new familiarity index is not higher than the previous familiarity index, in step S10 it controls the operation mechanism 14 to execute an operation showing the same or a lower degree of familiarity than before.
The in-vehicle robot 1 then expresses an unfriendly attitude toward the driver. The driver, who had felt the in-vehicle robot 1 to be a unique being close to him or her, is made to feel that the cause of its distant behavior may lie with the driver, and comes to realize that the robot will not be friendly unless the driving is corrected and made safe. The driver therefore takes care to drive carefully so as to be liked by the in-vehicle robot 1, with which he or she feels intimate.
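The S1-S10 flow described above can be sketched in Python as follows. The component interfaces standing in for the target person recognition unit 6, the familiarity index calculation unit 11, the operation control unit 13, and the evaluation unit 12 are hypothetical, chosen only to make the control flow explicit; `cycles` bounds the S5-S10 feedback loop for illustration.

```python
def operation_control_loop(recognizer, index_calc, controller, evaluator,
                           cycles=1):
    """One driver session through the FIG. 5 flow (interfaces are assumed)."""
    driver = recognizer.recognize_driver()                  # S1: recognize driver
    factors = index_calc.acquire_factors(driver)            # S2: get factors
    index = index_calc.calculate(factors)                   # S3: familiarity index
    controller.react(index)                                 # S4: operate mechanism 14
    for _ in range(cycles):
        reaction = evaluator.observe_reaction(driver)       # S5: driver's reaction
        judgement = evaluator.judge(reaction)               # S6: evaluate reaction
        new_index = index_calc.recalculate(judgement, factors)  # S7: recalc index
        if new_index > index:                               # S8: compare indices
            controller.react_more_familiar()                # S9: friendlier action
        else:
            controller.react_same_or_less_familiar()        # S10: same or colder
        index = new_index
    return index
```

Any objects providing these methods can be plugged in, e.g. stubs for testing or wrappers around the actual recognition and actuator units.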
The following techniques can also be adopted in each step of the above flow. The examples below assume, as above, that the in-vehicle robot 1 is mounted inside the vehicle 100.
<Examples of factors that determine the familiarity index>
The familiarity index calculation unit 11 may calculate the familiarity index using at least one of the following quantitative factor 17 and qualitative factor 16.
1. Quantitative factor 17
(1) Cumulative number of times the person has boarded the vehicle 100
(2) Cumulative boarding time in the vehicle 100
- Frequency of boarding the vehicle 100.
- Whether the person is someone who usually boards as the driver, judged by the ratio of the number of times/period/frequency of boarding as the driver to the total number of times/period/frequency.
- Even for the same person, whether the boarding at that time is as the driver is judged and taken into account.
- The relationship with the driver is judged and taken into account from whether the person usually sits in the passenger seat or in the rear seat. An example of such a relationship is the intimacy between the in-vehicle robot 1 and the driver.
(3) Driving determination result: safe driving or rough driving
2. Qualitative factor 16
- The feeling toward the in-vehicle robot 1 may be judged from the tone of the person's calling voice.
- It may be judged from the meaning of the voice-recognized call.
- It may be judged according to whether the person has ever made a call including a predetermined word, such as the robot's name.
- If the person says something that only a non-first-time visitor would normally say, such as "We meet again," it may be treated as equivalent to calling the robot by name.
- Whether the person's gestures, reactions, facial expressions, and so on, rather than the tone of voice, were gentle may be judged by image recognition.
- The driving state may be judged from images captured by the camera, or from the engine sound, road noise, drowsiness detected by recognizing the driver's facial expression, and the like.
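One way to picture how such factors could be combined is the following sketch. The weights, caps, and field names are illustrative assumptions, not values given in the patent; it simply maps items of the quantitative factor 17 and the qualitative factor 16 onto a single familiarity index on a 0-100 scale.

```python
def familiarity_index(quantitative: dict, qualitative: dict) -> float:
    """Combine assumed factor items into one 0-100 familiarity index."""
    score = 0.0
    # Quantitative factor 17
    score += min(quantitative.get("boarding_count", 0), 50)        # item (1)
    score += min(quantitative.get("boarding_hours", 0.0), 20)      # item (2)
    if quantitative.get("safe_driving", False):                    # item (3)
        score += 15
    # Qualitative factor 16
    if qualitative.get("friendly_tone", False):                    # tone of voice
        score += 5
    if qualitative.get("called_by_name", False):                   # robot's name used
        score += 5
    if qualitative.get("gentle_expression", False):                # image-recognized
        score += 5
    return min(score, 100.0)
```

A driver with 10 boardings, 5 hours on board, a safe-driving judgment, and a habit of calling the robot by name would score 35.0 under these assumed weights.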
 <Example of determination method>
The familiarity index calculation unit 11 described above uses the following factors singly or in combination. The familiarity index calculation unit 11 calculates a familiarity index for each driver, and the operation control unit 13 determines the action to be taken based on that familiarity index. The quantitative factors 17, as cumulative data, may be accumulated and stored for each driver.
1. Quantitative factors 17
(1) Cumulative number of times the person has ridden in the vehicle 100
(1-1) Methods for determining that a driver or other person has "ridden in" the vehicle 100
・The determination may be based on the robot recognizing the person's face or voice.
・Data from a personal authentication system other than the robot (a navigation system, the vehicle's fingerprint sensor, detection of the car key, etc.) may be used.
・These may be used in combination with a seating sensor or a door opening/closing sensor (not shown).
(1-2) Entries and exits while the vehicle is not traveling may be excluded from the count.
(1-3) Rides or entries and exits shorter than a predetermined time may be excluded.
For example, when the driver stops at a convenience store or gas station during a drive, gets out, and then gets back in, such a predetermined time may be used as a convenient criterion to treat the driver as not having entered or exited the vehicle.
(1-4) Entries and exits that can be determined from the stop-off point to be a brief stopover may be excluded.
・The determination may be made when the point is a place where people generally get in and out temporarily, such as a convenience store, gas station, restaurant, shopping center, or in-town parking lot.
(1-5) Entries and exits that can be determined to be stopovers from the set destination or set route may be excluded.
・Entries and exits at points other than the destination
・Entries and exits partway along the set route
(1-6) Any number of rides within one day (or another predetermined period such as 24 hours) may be counted as a single ride.
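The counting rules (1-2) to (1-6) above can be sketched roughly as follows. This is only an illustrative reading of the embodiment: the 10-minute threshold, the set of "transient" place categories, and the event field names are assumptions introduced for the example, not values taken from the specification.

```python
from datetime import datetime, timedelta

# Assumed stand-ins for rules (1-3) and (1-4); the embodiment leaves the
# concrete threshold and place list open.
TRANSIENT_PLACES = {"convenience_store", "gas_station", "restaurant",
                    "shopping_center", "parking_lot"}
MIN_RIDE = timedelta(minutes=10)

def count_rides(events):
    """Count boardings, at most one per calendar day (rule 1-6).

    Each event is a dict with keys: start/end (datetime), moving (bool,
    rule 1-2), place (str or None, rule 1-4), on_route (bool, rule 1-5).
    """
    counted_days = set()
    for ev in events:
        if not ev["moving"]:                      # (1-2) not during travel
            continue
        if ev["end"] - ev["start"] < MIN_RIDE:    # (1-3) too short
            continue
        if ev.get("place") in TRANSIENT_PLACES:   # (1-4) brief stopover
            continue
        if ev.get("on_route"):                    # (1-5) stopover on set route
            continue
        counted_days.add(ev["start"].date())      # (1-6) once per day
    return len(counted_days)
```

A real implementation would feed this from the seating/door sensors and the navigation data mentioned in (1-1).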
(2) Cumulative riding time in the vehicle 100
(2-1) The riding time is accumulated.
(3) Frequency of riding in the vehicle 100
(3-1) The determination may be based on the period elapsed from the driver's previous ride to the current ride.
(3-2) The determination may be based on the number of rides or the riding time within a predetermined period.
(3-3) This number of rides may be obtained, for example, by calculating the ratio of the number of days the person actually rode to the number of days in the predetermined period.
(3-4) The frequency may be calculated for each predetermined unit period, and whether the frequency is on an increasing trend may be determined.
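The frequency measures (3-1), (3-3), and (3-4) can be sketched as below. The window length and the monotonicity test used for the "increasing trend" check are assumptions for illustration only.

```python
from datetime import date

def days_since_last_ride(ride_dates, today):
    """(3-1) period elapsed from the previous ride to the current day."""
    return (today - max(ride_dates)).days

def ride_day_ratio(ride_dates, window_start, window_days):
    """(3-3) ratio of days actually ridden to days in the window."""
    in_window = {d for d in ride_dates
                 if 0 <= (d - window_start).days < window_days}
    return len(in_window) / window_days

def is_increasing(frequencies):
    """(3-4) crude trend check over per-period frequency values."""
    return all(a <= b for a, b in zip(frequencies, frequencies[1:]))
```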
(4) Whether or not the person usually rides as the driver may be used as a criterion. That is, the ratio of the number of times, period, and frequency of riding as the driver to the total number of times, period, and frequency may be used as the criterion.
・For example, the face at the driver's seat position is recognized. The driver's seat position may be registered in advance, or the in-vehicle robot 1 may recognize the steering wheel 4 by image recognition.
(5) Even for the same person, whether the current ride is as the driver or not
・For example, the face at the driver's seat position is recognized. The position of the driver's seat 6a may be registered in advance, or the in-vehicle robot 1 may recognize the steering wheel 4 by image recognition.
(6) The relationship (intimacy) with the driver is judged and taken into account based on whether the person usually sits in the passenger seat 6b or in the rear seat.
・The seat positions may be registered in advance, or the in-vehicle robot 1 may recognize the presence or absence of the steering wheel 4 or of the front seats by image recognition.
・For example, the passenger seat 6b may be judged to indicate higher intimacy, or the judgment may be made for each age group based on the passenger's appearance (a child may be judged to have high intimacy even in the rear seat), and this judgment may be performed only when there are multiple passengers besides the driver.
 In the present embodiment, the numbers of times, periods, frequencies, and so on described above may each be cumulative totals from the first ride, or cumulative totals over a predetermined period such as the most recent month.
2. Qualitative factors 16
(1) Result of judging the emotion toward the robot from the tone of the person's calling voice
・A tone expressing affection is judged as positive and raises the familiarity index, while an angry-sounding tone is judged as negative and lowers the familiarity index.
・This judgment may be accumulated and stored each time the robot is called to, and reflected in the index through cumulative totals of the data and the like.
・The numerical values of the individual judgments may be summed, positives and negatives together, and the judgment made from the total degree of positivity or negativity.
・The judgment may be made from whether positives tend to be increasing or decreasing.
・When accumulating, abnormal values, such as an exceptionally low value within a positive trend or the reverse, may be excluded from the judgment.
・The individual judgment values may be graphed in time series or in order of occurrence, and affection judged from the pattern of this graph.
For example, even if the familiarity index is generally somewhat positive but occasional exceptionally low negative judgments appear and the cumulative total is low, the graph pattern shows that this is because the person's emotions fluctuate sharply, so the underlying tone is judged to be positive.
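The accumulation of per-call tone judgments described in (1), with abnormal values excluded, might look like the following sketch. The specific outlier rule (dropping values more than two standard deviations from the mean) is an assumed stand-in for the embodiment's unspecified "exclude abnormal values" step.

```python
from statistics import mean, pstdev

def tone_total(judgments, drop_outliers=True):
    """Sum signed tone judgments (+ for affectionate, - for angry calls)."""
    vals = list(judgments)
    if drop_outliers and len(vals) > 2:
        m, s = mean(vals), pstdev(vals)
        if s > 0:
            # Assumed outlier rule: drop values beyond 2 standard deviations,
            # so one sharp emotional swing does not dominate the total.
            vals = [v for v in vals if abs(v - m) <= 2 * s]
    return sum(vals)
```

With filtering on, a single very negative call amid a positive trend no longer drags the cumulative total negative, matching the graph-pattern judgment described above.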
(2) Result of judging from the semantic content of the person's speech-recognized calling voice
・If the utterance of the driver or other person has a meaning expressing affection, it may be judged as positive and raise the familiarity index, and if it has an angry meaning, it may be judged as negative and lower the familiarity index.
・The judgment may be made from the meaning of the calling words alone, or the intention and meaning of the calling words may be judged from the surrounding context.
・This judgment may be accumulated and stored each time the robot is called to, and reflected in the index through cumulative totals of the data and the like.
(3) Result of judging whether or not the person has ever called out using the name of the in-vehicle robot 1
・That is, whether the person knows or remembers the robot (the in-vehicle robot 1) is judged and taken into account.
・A person who has called the name even once in the past may be judged positively; the judgment may be based on whether the person has called the name even once within a predetermined period such as the most recent month; or the judgment may be reset for each drive and based on whether the person called the name during the current drive.
・If the person utters words that a first-time rider would not normally say, such as "Nice to see you again," this may be treated as equivalent to calling the robot by name.
(4) Result of judging by image recognition whether the person's gestures, reactions, facial expressions, and so on, rather than the tone of voice, were gentle
・The judgment may be made when communication with the robot occurs, such as when the robot is called to. The judgments may also be accumulated and stored for the determination.
・The person's reaction may be judged when the in-vehicle robot 1 performs some action.
・Such judgment by image recognition may be used in combination with judgment of the tone of voice.
 In the present embodiment, the cumulative total may be the total from the first ride, or the total over a predetermined period such as the most recent month. Furthermore, in the present embodiment, the total may be accumulated only during the current drive and reset for the next drive, or only the data of predetermined drives, for example only the previous drive, may be stored and the next drive started from there. As a specific example, the evaluation unit 12 may make its determination each time by adding the current total to the previous total.
 <Quantifying the degree of familiarity>
1. Quantitative factors 17
FIG. 6 is a diagram showing a first example in which the quantitative factors 17 are quantified. In FIG. 6, the vertical axis indicates the evaluation, and the horizontal axis indicates the number of times, frequency, and so on.
The evaluation unit 12 can adopt, for example, a pattern in which the quantitative factors 17 are evaluated only in the positive direction by a stacking method. That is, when the evaluation unit 12 quantifies the quantitative factors 17 and calculates the familiarity index for evaluation, the evaluation rises to the right as shown in the figure as the number of times, frequency, and so on increase.
 FIG. 7 is a diagram showing a second example in which the quantitative factors 17 are quantified. In FIG. 7, the vertical axis indicates the evaluation, and the horizontal axis indicates the number of times, frequency, and so on.
The evaluation unit 12 can adopt, for example, a pattern that evaluates either positively or negatively according to the value at each point in time. That is, when the evaluation unit 12 quantifies the quantitative factors 17 and calculates the familiarity index for evaluation, the evaluation rises and falls around an evaluation value of 0 as shown in the figure as the number of times, frequency, and so on change.
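The contrast between the two evaluation patterns of FIG. 6 and FIG. 7 can be sketched as below. The per-period counts and the baseline used in the signed pattern are illustrative assumptions; the figures themselves give only the qualitative shapes.

```python
def stacked_evaluation(counts):
    """FIG. 6 style: accumulate only in the positive direction, so the
    evaluation can only rise to the right."""
    total, history = 0, []
    for c in counts:
        total += max(c, 0)
        history.append(total)
    return history

def signed_evaluation(counts, baseline=2):
    """FIG. 7 style: each period scores above or below 0 depending on
    whether its value exceeds an (assumed) baseline."""
    return [c - baseline for c in counts]
```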
2. Qualitative factors 16
(1) Binarization or qualitative judgment criteria are established and stepwise quantification is performed.
(2) The judgment may be changed according to the situation or each person's cumulative data.
(3) For example, regarding whether or not the person calls the name of the in-vehicle robot 1: at a stage when the number of rides is still small, not calling the name is not counted as negative while calling it is counted as positive; once the person has appeared many times, however, calling the name is no longer counted as positive, while not calling it may be counted as negative.
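The stage-dependent name-calling rule in (3) can be written as a small scoring function. The ride-count threshold of 10 is an assumed value chosen only to make the example concrete.

```python
def name_call_score(called_name, ride_count, threshold=10):
    """Early on, calling the robot's name is a bonus and silence is neutral;
    once the person is a regular, calling it is expected and silence is
    penalized (rule (3) above)."""
    if ride_count < threshold:
        return 1 if called_name else 0
    return 0 if called_name else -1
```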
 <Variation of the familiarity index>
1. The operation may be such that only positive factors are accumulated and the familiarity index rises accordingly.
・For example, the number of rides may simply be accumulated so that the familiarity index rises each time the count increases.
2. Alternatively, the familiarity index may be judged with negative factors included as well, so that it can also vary in the direction of decreasing familiarity.
・Whether the familiarity index is varied upward or downward may be decided by whether the value of the judgment factor at that time is positive or negative.
・For example, when the period elapsed from the previous ride to the current ride reaches or exceeds a predetermined length, a negative judgment with a value corresponding to the length of that period may be made (see the graph application examples described later).
・Both the positive and negative values of a given factor may be accumulated (over a predetermined period), and whether the familiarity index is varied upward or downward may be decided according to whether the cumulative value is positive or negative.
・For example, the values obtained from a graph such as those in the graph application examples described later, based on the period elapsed from the previous ride to the current ride, may be summed over a predetermined period such as the most recent month, and whether the familiarity index is varied upward or downward may be decided according to whether the resulting value is positive or negative.
・Both the positive and negative values of multiple factors may be accumulated over a predetermined period, and whether the familiarity index is varied upward or downward may be decided according to whether the cumulative value is positive or negative. Factors that are judged only positively, factors that are judged only negatively, and factors that can be judged either way may be combined, each multiplied by a predetermined coefficient.
Here, the following formula can be given as an example of calculating the familiarity index k:
Familiarity index k = p[cumulative ride count evaluation] + q[evaluation of the period elapsed since the previous ride] + r[evaluation of whether the name is called]
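The weighted-sum formula for the familiarity index k can be computed as below. The weight values p, q, r and the idea that q typically scales a negative elapsed-period term are assumptions for the example; the embodiment leaves the coefficients open.

```python
def familiarity_index(ride_count_eval, elapsed_eval, name_eval,
                      p=1.0, q=0.5, r=0.8):
    """k = p*[cumulative ride count evaluation]
         + q*[elapsed-period evaluation]
         + r*[name-calling evaluation].

    elapsed_eval is expected to be negative when a long time has passed
    since the previous ride (see the graph application examples)."""
    return p * ride_count_eval + q * elapsed_eval + r * name_eval
```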
 <Graph application examples>
FIG. 8 is a diagram showing a first example of how the familiarity index decreases according to the period elapsed since the driver last rode.
The evaluation unit 12 may adopt a pattern in which the familiarity index decreases according to the period elapsed since the driver last rode.
 FIG. 9 is a diagram showing a second example of how the familiarity index decreases according to the period elapsed since the driver last rode.
The evaluation unit 12 may adopt a pattern in which the familiarity index is retained for a relatively long time, with the qualitative factors 16 or quantitative factors 17 held in the memory 19, even as the period since the driver's last ride elapses.
 FIG. 10 is a diagram showing a third example of how the familiarity index decreases according to the period elapsed since the driver last rode.
The evaluation unit 12 may adopt a pattern in which the familiarity index decreases immediately once a certain period since the driver's last ride has elapsed.
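The three decay patterns of FIGS. 8 to 10 might be modeled by curves such as the following. Only the qualitative shapes come from the figures; the specific functions, rates, and the 7-day grace period are assumptions chosen to illustrate them.

```python
import math

def decay_linear(k0, days):
    """FIG. 8 style: familiarity falls steadily with the elapsed days."""
    return max(0.0, k0 - 0.1 * days)

def decay_retentive(k0, days):
    """FIG. 9 style: held nearly flat for a long time, then eased down
    (a Gaussian-shaped retention curve as an assumed example)."""
    return k0 * math.exp(-(days / 60.0) ** 2)

def decay_cliff(k0, days, grace=7):
    """FIG. 10 style: unchanged within a grace period, then drops at once."""
    return k0 if days <= grace else k0 * 0.2
```

Preparing several such curves and switching among them would realize the per-robot "character" selection described next.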
 In FIGS. 8 to 10, multiple graphs (formulas) may be prepared, and which graph is applied may be changed according to the character, growth, and momentary mood of the in-vehicle robot 1. The character of the in-vehicle robot 1 here represents individual differences.
 The in-vehicle robot 1 in the above embodiment is characterized by comprising: operation determining means 11, 13 (corresponding to the familiarity index calculation unit and the operation control unit) that determine the action with which to react according to the degree of familiarity with the driver (corresponding to the qualitative factors); an operation mechanism 14 (corresponding to the operation mechanism) that performs an action according to the determination by the operation determining means 11, 13; and control means 12 (corresponding to the evaluation unit) that correct the degree of familiarity with the driver based on the driver's reaction to the action performed by the operation mechanism 14 and change the action to be performed by the operation mechanism 14.
 With this configuration, for example, when the driver drives safely, the in-vehicle robot 1 comes to perform amiable gestures showing familiarity toward the driver in response to that safe driving. The driver, encountering the in-vehicle robot 1's intimate reaction, is then motivated to continue such safe driving.
 On the other hand, for example, when the driver drives roughly, the in-vehicle robot 1 comes to respond in a frightened manner to such rough driving. The driver, encountering the in-vehicle robot's distant reaction, realizes that his or her own driving is frightening the in-vehicle robot 1 that he or she has come to cherish, and tries to correct such rough driving. This encourages the driver's self-reflection on his or her own driving and can lead the driver to safer driving. The more the in-vehicle robot 1 is, for the driver, an object of attachment and emotional investment beyond a mere machine, the greater this effect.
 More specifically, when the in-vehicle robot 1 detects rough driving such as sharp steering or sudden braking by a G sensor or the like, it performs frightened motions, behaves anxiously, and produces speech or sounds. The in-vehicle robot 1 gradually shows more amiable gestures to a person it has become used to seeing, but if that driver continues to drive roughly, the robot's usual gestures toward the driver shift to distant ones, even though it had finally grown accustomed and amiable.
 When the in-vehicle robot 1, having recognized a familiar driver at boarding, has begun showing more amiable gestures than before, it will show frightened gestures if that driver drives roughly. The driver then comes to recognize that continuing such driving would nullify the accumulated effort of familiarization built up so far, and that to make the in-vehicle robot 1 behave amiably again he or she would have to start the familiarization over from the beginning while accumulating safe driving; a psychological restraint against rough driving thus comes into play.
 At this time, the in-vehicle robot 1 changes its actions and reactions according to the degree of familiarity (corresponding to the familiarity index above). That is, when the in-vehicle robot 1 detects rough driving, it changes its actions and reactions according to its degree of familiarity with that driver at that time. For example, when a person with a high degree of familiarity drives roughly, the in-vehicle robot 1 may act more exaggeratedly than usual: treating the rough driving as a betrayal of the trust the driver has built with it, the in-vehicle robot 1 performs more strongly frightened actions, and if the driving is promptly corrected it may, treating the relationship of trust as having become stronger, make even more amiable gestures than before. This is because a driver with a high familiarity index can be considered a person with a high degree of emotional investment, so even if the degree of fright and the reaction are somewhat exaggerated to aim for a strong psychological restraining effect, the emotional investment is unlikely to be impaired. Conversely, for a person who is not yet very familiar, emotional investment is insufficient, and making the degree of fright too large may backfire by impairing that emotional investment, for example by provoking aversion; the in-vehicle robot 1 may therefore react in a restrained manner.
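The familiarity-dependent choice of reaction to rough driving described above could be sketched as a simple threshold mapping. The thresholds and reaction labels are entirely hypothetical; the embodiment specifies only the qualitative behavior (exaggerated fright for highly familiar drivers, restrained unease for unfamiliar ones).

```python
def rough_driving_reaction(k):
    """Map the familiarity index k to a reaction intensity: high familiarity
    tolerates an exaggerated display, low familiarity gets a restrained one
    to avoid provoking aversion."""
    if k >= 50:
        return "exaggerated_fright"
    if k >= 20:
        return "normal_fright"
    return "restrained_unease"
```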
 The in-vehicle robot 1 in the above embodiment is further characterized in that, in addition to the configuration described above, the operation determining means 11, 13 (corresponding to the familiarity index calculation unit and the operation control unit) determine the action with which to react according not only to the degree of familiarity with the driver but also to the driver's riding history 17 (corresponding to the quantitative factors).
 With this configuration, the in-vehicle robot 1 comes to act toward the driver according to the degree of familiarity while taking the driver's riding history into account, and can act in a way that better reflects the driver's intimacy with the in-vehicle robot 1.
 The in-vehicle robot 1 in the above embodiment further comprises, in addition to the configuration described above, recognition means 6 (corresponding to the target person recognition unit) for individually identifying and recognizing a plurality of drivers, and the operation determining means 11, 13 are characterized by managing the degree of familiarity according to the reaction of each driver identified by the recognition means 6.
 With this configuration, even when any one of a plurality of drivers drives at random, the in-vehicle robot 1 can operate so as to react differently according to each driver.
 The in-vehicle robot 1 in the above embodiment is further characterized in that, in addition to the configuration described above, the operation determining means comprise familiarity index calculation means 11 (corresponding to the familiarity index calculation unit) for calculating a familiarity index quantifying the degree of familiarity with the driver, and operation control means 13 (corresponding to the operation control unit) for determining the action with which to react according to the familiarity index.
 With this configuration, the in-vehicle robot 1 objectively judges its degree of familiarity with the driver using the familiarity index, which quantifies the relationship with the driver, and, according to that familiarity index, can act distantly, for example, if an intimate relationship with the driver has not been built, and favorably, for example, if such a relationship has been built.
 The in-vehicle robot 1 in the above embodiment is further characterized in that, in addition to the configuration described above, the operation determining means 11, 13 determine the action with which to react based on the degree of familiarity according to the content of the driver's utterances speech-recognized by the recognition means 6, and control the operation mechanism 14 accordingly.
 With this configuration, the in-vehicle robot 1 can judge the degree of familiarity from the speech-recognized utterances of the driver, and can accurately perform the action with which it should react according to its degree of familiarity with that driver.
 The in-vehicle robot 1 in the above embodiment is further characterized in that, in addition to the configuration described above, the control means 12 (corresponding to the evaluation unit) correct the familiarity index according to the content of the driver's utterances speech-recognized by the recognition means 6.
 In the in-vehicle robot 1 in the above embodiment, in addition to the configuration described above, the familiarity index calculation means 11 further calculate the familiarity index according to whether or not a predetermined word is included in the speech-recognized utterance content. An example of such a predetermined word is the name given to the in-vehicle robot 1.
 With this configuration, the in-vehicle robot 1 can easily judge its degree of familiarity with the driver by whether or not a predetermined word is present in the content of the driver's utterances.
 The in-vehicle robot 1 in the above embodiment is further characterized in that, in addition to the configuration described above, the operation determining means 11, 13 determine the action with which to react based on the degree of familiarity according to the driver's facial expression image-recognized by the recognition means 6.
 With this configuration, the in-vehicle robot 1 can judge its degree of familiarity with the driver according to the image-recognized facial expression of the driver, and can perform the action with which it should react according to that degree of familiarity.
 The in-vehicle robot 1 in the above embodiment is further characterized in that, in addition to the configuration described above, the control means 12 correct the familiarity index according to the driver's facial expression image-recognized by the recognition means 6.
 As described above, in the present embodiment, either of the following two methods of calculating the familiarity index can be selectively adopted: the former calculation method or the latter calculation method below.
1. Method of calculating the familiarity index with the driving judgment included among the familiarity factors (the above embodiment)
 The in-vehicle robot 1 grows familiar with a driver as that driver rides in the vehicle many times; however, even once familiar, if the driver drives roughly, the familiarity index is reduced and the degree of familiarity declines, so the robot's subsequent manner risks reverting to its original, less familiar state. The driver therefore comes to drive safely in order to restore the in-vehicle robot 1's familiarity.
2. Method of calculating the familiarity index without including the driving judgment among the familiarity factors (application example)
 The in-vehicle robot 1 grows familiar with a driver as that driver rides in the vehicle many times. If that driver drives roughly, the familiarity index does not change, so the degree of familiarity is unchanged, but the robot shows an anxious expression; when the driving returns to safe driving, the robot behaves happily, its familiar manner unchanged. The driver, seeing the familiar presence of the in-vehicle robot 1 showing an anxious expression, therefore reflects on his or her driving and feels glad to have returned to safe driving. This exploits the effect that a relationship built up over time as a base yields a greater psychological effect than one that is over-familiar from the start.
 The present embodiment is not limited to the above; various modifications are possible. Such modifications are described below in order.
 In the above embodiment, the motion control unit 13 and the familiarity index calculation unit 11 may be integrated.
 Although the above embodiment illustrates the in-vehicle robot 1 arranged inside the vehicle 100, the invention is not limited to this; the robot may also be applied, for example, as a robot or toy placed in a home, such as at the entrance or in a room. The in-vehicle robot 1 may likewise be applied to a robot intended to serve purposes other than the safe driving described above. For example, in place of the driving evaluation, a robot having substantially the same functions as the in-vehicle robot 1 may survey a room, become displeased if image recognition finds the room messy, and thereby encourage its occupant to tidy up. Here, "becoming displeased" represents, for example, a state of reduced familiarity.
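Under the same hedged framing, the room-tidiness variant could swap the driving evaluation for a messiness score produced by image recognition; the score range, threshold, and mood labels below are assumptions for illustration, not part of the specification.

```python
# Hypothetical sketch: a home variant where a messiness score from image
# recognition plays the role of the driving evaluation.
def room_mood(messiness_score, threshold=0.5):
    """Map an image-recognition messiness score in [0, 1] to a mood.

    'displeased' stands in for a state of reduced familiarity,
    nudging the occupant to tidy up.
    """
    return "displeased" if messiness_score > threshold else "content"
```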
 In addition to what has already been described, the techniques of the above embodiment and the modifications may be used in appropriate combination.
Brief Description of the Drawings
FIG. 1 is a perspective view showing an example configuration of part of the interior of a vehicle in this embodiment.
FIG. 2 is an enlarged perspective view showing an example of the appearance of the robot of FIG. 1 in operation.
FIG. 3 is an enlarged perspective view showing another example of the appearance of the robot of FIG. 1 in operation.
FIG. 4 is a block diagram showing an example of the functions of the robot shown in FIGS. 1 to 3.
FIG. 5 is a flowchart showing an example of the procedure included in the motion control process based on familiarity factors.
FIG. 6 shows a first example of quantified quantitative factors.
FIG. 7 shows a second example of quantified quantitative factors.
FIG. 8 shows a first example of how the familiarity index decreases with the time elapsed since the driver last rode in the vehicle.
FIG. 9 shows a second example of how the familiarity index decreases with the time elapsed since the driver last rode in the vehicle.
FIG. 10 shows a third example of how the familiarity index decreases with the time elapsed since the driver last rode in the vehicle.
Explanation of Reference Numerals
 1 In-vehicle robot
 6 Subject recognition unit (corresponding to the recognition means)
 11 Familiarity index calculation unit (corresponding to the familiarity index calculation means and the action determination means)
 12 Evaluation unit (corresponding to the control means)
 13 Motion control unit (corresponding to the motion control means and the action determination means)
 14 Motion mechanism (corresponding to the motion mechanism)
 16 Qualitative factor
 17 Quantitative factor
 100 Image display control device

Claims (9)

  1.  An in-vehicle robot comprising:
     action determination means for determining an action to perform in response to a driver, in accordance with a degree of familiarity with the driver;
     a motion mechanism that performs the action determined by the action determination means; and
     control means for correcting the degree of familiarity with the driver based on the driver's reaction to the action performed by the motion mechanism, thereby changing the action to be performed by the motion mechanism.
  2.  The in-vehicle robot according to claim 1, wherein
     the action determination means determines the action to perform in accordance not only with the degree of familiarity with the driver but also with the driver's riding history.
  3.  The in-vehicle robot according to claim 1 or 2, further comprising
     recognition means for individually recognizing each of a plurality of drivers, wherein
     the action determination means manages the degree of familiarity according to the reaction of each driver recognized by the recognition means.
  4.  The in-vehicle robot according to claim 3, wherein
     the action determination means comprises:
     familiarity index calculation means for calculating a familiarity index that quantifies the degree of familiarity with the driver; and
     motion control means for determining the action to perform according to the familiarity index and controlling the motion mechanism.
  5.  The in-vehicle robot according to claim 3, wherein
     the action determination means determines the action to perform based on the degree of familiarity according to the content of the driver's utterances voice-recognized by the recognition means.
  6.  The in-vehicle robot according to claim 4, wherein
     the control means corrects the familiarity index according to the content of the driver's utterances voice-recognized by the recognition means.
  7.  The in-vehicle robot according to claim 4, wherein
     the familiarity index calculation means calculates the familiarity index according to whether the voice-recognized utterance content contains predetermined wording.
  8.  The in-vehicle robot according to claim 3, wherein
     the action determination means determines the action to perform based on the degree of familiarity according to the driver's facial expression image-recognized by the recognition means.
  9.  The in-vehicle robot according to claim 4, wherein
     the control means corrects the familiarity index according to the driver's facial expression image-recognized by the recognition means.
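Claims 6, 7, and 9 above describe correcting the familiarity index from voice-recognized wording and from an image-recognized facial expression. A minimal sketch of how such corrections could combine is shown below; the phrase list, expression labels, and adjustment values are hypothetical assumptions, not taken from the specification.

```python
# Hypothetical correction of the familiarity index from speech and expression.
FRIENDLY_PHRASES = {"good morning", "thank you"}   # assumed predetermined wording
EXPRESSION_DELTAS = {"smiling": 3, "angry": -3}    # assumed adjustments

def correct_index(index, utterance, expression,
                  phrase_bonus=2, max_index=100, min_index=0):
    """Apply a claim 6/7-style speech correction and a claim 9-style
    expression correction to the familiarity index."""
    words = utterance.lower()
    if any(p in words for p in FRIENDLY_PHRASES):
        index += phrase_bonus                      # predetermined wording found
    index += EXPRESSION_DELTAS.get(expression, 0)  # neutral faces change nothing
    return max(min_index, min(max_index, index))
```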
PCT/JP2008/053147 2008-02-25 2008-02-25 On-vehicle robot WO2009107185A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2008/053147 WO2009107185A1 (en) 2008-02-25 2008-02-25 On-vehicle robot
JP2010500462A JPWO2009107185A1 (en) 2008-02-25 2008-02-25 In-vehicle robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2008/053147 WO2009107185A1 (en) 2008-02-25 2008-02-25 On-vehicle robot

Publications (1)

Publication Number Publication Date
WO2009107185A1 true WO2009107185A1 (en) 2009-09-03

Family

ID=41015599

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2008/053147 WO2009107185A1 (en) 2008-02-25 2008-02-25 On-vehicle robot

Country Status (2)

Country Link
JP (1) JPWO2009107185A1 (en)
WO (1) WO2009107185A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015066621A (en) * 2013-09-27 2015-04-13 株式会社国際電気通信基礎技術研究所 Robot control system, robot, output control program and output control method
JP2016090775A (en) * 2014-11-04 2016-05-23 トヨタ自動車株式会社 Response generation apparatus, response generation method, and program
CN108021633A (en) * 2017-11-24 2018-05-11 交通宝互联网技术有限公司 A kind of intelligent vehicle-carried robot system
CN110562164A (en) * 2019-09-11 2019-12-13 广东法波德机器人科技有限公司 Vehicle-mounted robot and vehicle
JPWO2019220733A1 (en) * 2018-05-15 2021-06-17 ソニーグループ株式会社 Control devices, control methods and programs
JP2022553773A (en) * 2020-06-23 2022-12-26 上海商▲湯▼▲臨▼港智能科技有限公司 In-vehicle digital human-based interaction method, device, and storage medium
WO2023017291A1 (en) * 2021-08-11 2023-02-16 日産自動車株式会社 Driving assistance device and driving assistance method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002166379A (en) * 2000-09-19 2002-06-11 Toyota Motor Corp Robot to be mounted on movable body and movable body with the robot
JP2002239959A (en) * 2001-02-20 2002-08-28 Toyota Motor Corp Electronic partner system for vehicle
JP2003117866A (en) * 2001-10-16 2003-04-23 Nec Corp Robot device and its control method
JP2004090109A (en) * 2002-08-29 2004-03-25 Sony Corp Robot device and interactive method for robot device


Also Published As

Publication number Publication date
JPWO2009107185A1 (en) 2011-06-30

Similar Documents

Publication Publication Date Title
WO2009107185A1 (en) On-vehicle robot
JP4380541B2 (en) Vehicle agent device
JP4682714B2 (en) Dialog system
JP6466385B2 (en) Service providing apparatus, service providing method, and service providing program
US7020544B2 (en) Vehicle information processing device, vehicle, and vehicle information processing method
US20190016344A1 (en) Method for adjusting at least one operating parameter of a transportation vehicle, system for adjusting at least one operating parameter of a transportation vehicle, and transportation vehicle
US20180275652A1 (en) Driving assistance device
JP4973722B2 (en) Voice recognition apparatus, voice recognition method, and navigation apparatus
CN108688670B (en) Vehicle driving support system
JP2018060192A (en) Speech production device and communication device
US20170270916A1 (en) Voice interface for a vehicle
JP2018055550A (en) Facility satisfaction calculation device
JP2019158975A (en) Utterance system
CN112035034B (en) Vehicle-mounted robot interaction method
JP6075577B2 (en) Driving assistance device
JP6213489B2 (en) Vehicle occupant emotion response control device
JP2019182244A (en) Voice recognition device and voice recognition method
JP2007216920A (en) Seat controller for automobile, seat control program and on-vehicle navigation device
JP2015158573A (en) Vehicle voice response system and voice response program
Brewer et al. Supporting people with vision impairments in automated vehicles: Challenge and opportunities
JP2019101472A (en) Emotion estimation device
JP6419134B2 (en) Vehicle emotion display device, vehicle emotion display method, and vehicle emotion display program
JP2006313287A (en) Speech dialogue apparatus
JP6785889B2 (en) Service provider
JP6213488B2 (en) Vehicle occupant emotion response control device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08711909

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2010500462

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08711909

Country of ref document: EP

Kind code of ref document: A1