WO2019064650A1 - Information transfer support system for vehicle - Google Patents

Information transfer support system for vehicle Download PDF

Info

Publication number
WO2019064650A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
information
conversation
robot
speaker
Prior art date
Application number
PCT/JP2018/012651
Other languages
French (fr)
Japanese (ja)
Inventor
真吾 入方
宗義 難波
Original Assignee
三菱自動車工業株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 三菱自動車工業株式会社 (Mitsubishi Motors Corporation)
Publication of WO2019064650A1

Links

Images

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/36 Input/output arrangements for on-board computers
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L15/10 Speech classification or search using distance or distortion measures between unknown speech and reference templates
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q SELECTING
    • H04Q9/00 Arrangements in telecontrol or telemetry systems for selectively calling a substation from a main station, in which substation desired apparatus is selected for applying a control signal thereto or for obtaining measured values therefrom

Definitions

  • The present invention relates to a vehicle information transmission support system for supporting information transmission between a vehicle and the outside of the vehicle.
  • Patent Documents 1 and 2 propose techniques in which a vehicle is provided with a robot having AI (hereinafter referred to as an AI robot) to support the driver.
  • Patent Document 1 proposes a technology in which a vehicle is equipped with an AI robot and a management system that manages the robot is provided outside the vehicle, enabling conversation between the on-board robot and the driver or other occupants.
  • The on-board robot responds to the voice of the driver and converses; the information the robot needs for its responses is sent from the management system by communication.
  • By preparing a large amount of information on the management system side, varied information exchange with the driver through the on-board robot is said to become possible.
  • In Patent Document 2, the vehicle is equipped with an AI robot, a degree of familiarity (a numerical value) is calculated from history data on the driver's utterances, facial expressions and driving behavior (rough driving, etc.), and the robot's expressions and motions are changed according to that degree.
  • In the technology of Patent Document 1, however, the services that the on-board robot can provide to the driver are limited to those based on the information prepared on the management system side.
  • The technology of Patent Document 2 concerns information transmission only between the driver or other occupants in the vehicle and the on-board robot. For the driver, convenience would be further improved if, for example, it were possible to stay in contact with family at home while driving; there is therefore room for further technical development regarding such information transmission.
  • The present invention was devised with these problems in mind, and its purpose is to provide a vehicle information transmission support system that uses AI to further improve convenience for the driver of a vehicle and others.
  • A vehicle information transmission support system according to the present invention comprises: an on-vehicle conversation robot that is mounted on a vehicle, has artificial intelligence, and exchanges information through conversation with a vehicle-side speaker in the vehicle; a first communication means connected to the on-vehicle conversation robot; an outside-vehicle conversation robot that is installed in a facility outside the vehicle or in another vehicle, has artificial intelligence, and exchanges information through conversation with an outside speaker in that facility or other vehicle; and a second communication means connected to the outside-vehicle conversation robot.
  • The on-vehicle conversation robot extracts, from its conversation with the vehicle-side speaker, information to be conveyed to the outside speaker and transmits it to the outside-vehicle conversation robot through the first communication means; the outside-vehicle conversation robot receives this extracted information through the second communication means and conveys it to the outside speaker by conversation. Conversely, the outside-vehicle conversation robot extracts, from its conversation with the outside speaker, information to be conveyed to the vehicle-side speaker and transmits it to the on-vehicle conversation robot through the second communication means; the on-vehicle conversation robot receives this extracted information through the first communication means and conveys it to the vehicle-side speaker by conversation.
  • Preferably, the system further comprises an external server having a third communication means for communicating with the outside and a database that stores the extracted information obtained from the on-vehicle conversation robot side and from the outside-vehicle conversation robot side through communication by the third communication means, and information transfer between the on-vehicle conversation robot and the outside-vehicle conversation robot is performed via the external server.
  • Preferably, the on-vehicle conversation robot comprises: a first database storing analysis information for analyzing voice recognition information and image recognition information; a first voice receiving means that receives the voice of the vehicle-side speaker; a first image acquisition means that acquires images of the surroundings; a first voice recognition means that recognizes the voice received by the voice receiving means; a first image recognition means that recognizes the images acquired by the first image acquisition means; a first speaker recognition means that recognizes the vehicle-side speaker from at least one of the voice recognition information of the first voice recognition means and the image recognition information of the first image recognition means together with the analysis information of the first database; a first conversation-meaning understanding means that understands the meaning of the vehicle-side speaker's conversation from the recognition information of the first voice recognition means and the analysis information of the first database; a first conversation content generation means that generates, from the conversation-meaning understanding information of the first conversation-meaning understanding means and the analysis information of the first database, conversation content to be returned to the vehicle-side speaker; a first voice synthesis means that synthesizes reply voice corresponding to the conversation content generated by the first conversation content generation means; a first voice transmission means that emits the reply voice synthesized by the first voice synthesis means; and a first information extraction means that extracts, from the conversation-meaning understanding information of the first conversation-meaning understanding means, information to be conveyed to the outside speaker.
  • Preferably, the on-vehicle conversation robot further comprises a first emotion estimation means that estimates the emotional state of the vehicle-side speaker from the recognition information of at least one of the first voice recognition means and the first image recognition means, and the first conversation content generation means generates conversation content for the vehicle-side speaker from information that includes the emotion estimation information of the first emotion estimation means.
  • Preferably, the vehicle is provided with on-vehicle devices that are mounted on the vehicle and operate according to commands, and the on-vehicle conversation robot extracts vehicle-operation-related information from the conversation-meaning understanding information of the first conversation-meaning understanding means and outputs commands corresponding to the extracted information to the on-vehicle devices.
  • Preferably, the outside-vehicle conversation robot comprises: a second database storing analysis information for analyzing voice recognition information and image recognition information; a second voice receiving means that receives the voice of the vehicle-side speaker; a second image acquisition means that acquires images of the surroundings; a second voice recognition means that recognizes the voice received by the voice receiving means; a second image recognition means that recognizes the images acquired by the second image acquisition means; a second speaker recognition means that recognizes the vehicle-side speaker from at least one of the voice recognition information of the second voice recognition means and the image recognition information of the second image recognition means together with the analysis information of the second database; a second conversation-meaning understanding means that understands the meaning of the vehicle-side speaker's conversation from the recognition information of the second voice recognition means and the analysis information of the second database; a second conversation content generation means that generates, from the conversation-meaning understanding information of the second conversation-meaning understanding means and the analysis information of the second database, conversation content to be returned to the vehicle-side speaker; a second voice synthesis means that synthesizes reply voice corresponding to the conversation content generated by the second conversation content generation means; a second voice transmission means that emits the reply voice synthesized by the second voice synthesis means; and a second information extraction means that extracts, from the conversation-meaning understanding information of the second conversation-meaning understanding means, information to be conveyed to the vehicle-side speaker.
  • Preferably, the outside-vehicle conversation robot further comprises a second emotion estimation means that estimates the emotional state of the outside speaker from the recognition information of at least one of the second voice recognition means and the second image recognition means, and the second conversation content generation means generates conversation content for the outside speaker from information that includes the emotion estimation information of the second emotion estimation means.
  • Preferably, the facility or other vehicle is provided with specific devices that are installed in the facility or other vehicle and operate according to commands, and the outside-vehicle conversation robot extracts facility-operation-related information from the conversation-meaning understanding information of the second conversation-meaning understanding means and outputs commands corresponding to the extracted information to the specific devices.
  • According to the present invention, through the conversation between the vehicle-side speaker and the on-vehicle conversation robot, the conversation between the outside speaker and the outside-vehicle conversation robot, and the exchange of information to be conveyed between the two conversation robots, the necessary information can be passed between the vehicle-side speaker and the outside speaker while the vehicle-side speaker drives the vehicle and the outside speaker carries out some task in the facility (for example, the home of the vehicle-side speaker) or drives another vehicle. Necessary information can therefore be conveyed without interfering with driving or with work in the facility.
  • As shown in FIG. 1, the vehicle information transmission support system according to the present embodiment comprises an on-vehicle conversation robot 100 mounted on a vehicle 1, an outside-vehicle conversation robot 200 installed in a home 2 serving as a facility outside the vehicle, and a cloud server (hereinafter, cloud) 300 serving as an external server.
  • The vehicle 1 is assumed to be an electric vehicle.
  • The on-vehicle conversation robot 100 is an AI robot having computer-based artificial intelligence (hereinafter, AI); it includes the first database 101 and exchanges information through conversation with a vehicle-side speaker (for example, the driver) 10 in the vehicle. The on-vehicle conversation robot 100 therefore requires a function corresponding to an "ear" that listens to the voice of the vehicle-side speaker 10 and a function corresponding to a "mouth" that emits voice toward the driver 10, but it does not need a body-like part as in a humanoid robot. The on-vehicle conversation robot 100 of the present embodiment additionally has a function corresponding to an "eye" that observes the state of the vehicle-side speaker 10. Hereinafter, the on-vehicle conversation robot 100 is also referred to as the on-vehicle AI robot 100.
  • The vehicle 1 is equipped with on-vehicle devices 11, such as an air conditioner and a navigation device, that operate according to commands; the on-vehicle AI robot 100 is connected to these on-vehicle devices 11 and can output commands to them.
  • The vehicle 1 is also equipped with a communication device (first communication means) 12 for communicating with the outside; the on-vehicle AI robot 100 is connected to the communication device 12 and can exchange information signals with it.
  • The outside-vehicle conversation robot 200 is likewise an AI robot having computer-based AI; it includes the second database 201 and exchanges information through conversation with an outside speaker (for example, a member of the driver's family) 20 in the home.
  • The outside-vehicle conversation robot 200 therefore requires a function corresponding to an "ear" that listens to the voice of the outside speaker 20 and a function corresponding to a "mouth" that emits voice toward the outside speaker 20, but it does not need a body-like part as in a humanoid robot.
  • Nevertheless, the outside-vehicle conversation robot 200 of the present embodiment is configured as a humanoid robot that also has a function corresponding to an "eye" that observes the state of the outside speaker 20.
  • Hereinafter, the outside-vehicle conversation robot 200 is also referred to as the home AI robot 200.
  • The home 2 is equipped with home electric devices (specific devices) 21, such as an air conditioner and AV equipment such as a television, that operate according to commands; the home AI robot 200 is connected to these home electric devices 21 and can output commands to them.
  • The home 2 is also equipped with a communication device (second communication means) 22 for communicating with the outside; the home AI robot 200 is connected to the communication device 22 and can exchange information signals with it.
  • The cloud 300 is configured as a cloud computer and includes a database 301.
  • Input information is stored in the database 301 and managed securely.
  • The cloud 300 is equipped with a communication device (third communication means) 32 that communicates with the outside and can exchange information signals with terminals connected via the Internet.
  • These terminals include the on-vehicle AI robot 100 and the home AI robot 200, so that information signals can be exchanged between the on-vehicle AI robot 100, the home AI robot 200 and the cloud 300 through the communication devices 12, 22 and 32.
  • A vast amount of big data is stored in the database 301, and each terminal can extract the data it needs from the big data in the database 301 to carry out its processing.
  • As shown in FIG. 2, the on-vehicle AI robot 100 comprises an in-vehicle HMI (Human Machine Interface) unit 110 serving as the interface with the vehicle-side speaker 10, an in-vehicle HMI processing unit 120 that processes input information from and output information to the in-vehicle HMI unit 110, an in-vehicle AI control unit 130 that recognizes and processes the information processed by the in-vehicle HMI processing unit 120 and the information from the home AI robot 200 obtained from the cloud 300 by communication, an in-vehicle AI assist unit 140 that supplements the processing of the in-vehicle AI control unit 130, and the database 101.
  • The in-vehicle HMI unit 110 has a microphone (first voice receiving means) 111 that receives the voice of the vehicle-side speaker 10, a loudspeaker (first voice transmission means) 112 that emits voice toward the vehicle-side speaker 10, an in-vehicle camera (first image acquisition means) 113 that acquires images of the surroundings including the vehicle-side speaker 10, and sensors 114 that detect the vital signs of the driving occupant and the vehicle environment (tire air pressure, remaining battery charge, and the like).
  • Information detected or acquired by the microphone 111, the loudspeaker 112, the in-vehicle camera 113 and the sensors 114 is sent to the in-vehicle HMI processing unit 120 and the in-vehicle AI assist unit 140.
  • The in-vehicle HMI processing unit 120 has a voice recognition unit (first voice recognition means) 121 that recognizes the voice received by the microphone 111, a voice synthesis unit (first voice synthesis means) 122 that synthesizes the voice emitted from the loudspeaker 112, and an image recognition unit (first image recognition means) 123 that recognizes the surrounding images acquired by the in-vehicle camera 113.
  • Information from the in-vehicle AI assist unit 140 is also used as appropriate for voice recognition and image recognition.
  • Information from the in-vehicle AI control unit 130 is used for voice synthesis, supplemented as appropriate by information from the in-vehicle AI assist unit 140.
  • The in-vehicle AI control unit 130 has a speaker recognition unit (first speaker recognition means) 131 that identifies the speaker and recognizes his or her attributes from the recognition results of the voice recognition unit 121 and the image recognition unit 123, a conversation-meaning understanding unit (first conversation-meaning understanding means) 132 that understands the meaning of the speaker's conversation by referring to a pre-stored word map based on those recognition results, a personal emotion estimation unit (first emotion estimation means) 133 that estimates the speaker's emotional state by referring to a pre-stored emotion map based on those recognition results, and an information extraction unit (first information extraction means) 134 that extracts, from the conversation with the vehicle-side speaker 10, information to be conveyed to the outside speaker 20. Information from the in-vehicle AI assist unit 140 is also used as appropriate for speaker recognition, conversation-meaning understanding, personal emotion estimation and information extraction.
  • The in-vehicle AI assist unit 140 accesses the database 101 and the cloud 300 as appropriate, obtains the information needed for voice recognition, image recognition and voice synthesis from the database 101 and the database (big data) 301, and performs analysis and evaluation for voice recognition, image recognition and voice synthesis. It also exchanges information with the in-vehicle AI control unit 130 to assist the speaker recognition, conversation-meaning understanding and personal emotion estimation performed by that unit. The in-vehicle AI assist unit 140 further outputs command signals to the on-vehicle devices.
  • The communication device 12 includes a communication unit 12A, which performs authentication of received data and encryption of transmitted data.
  • As shown in FIG. 3, the home AI robot 200 comprises a home HMI unit 210 serving as the interface with the outside speaker 20, a home HMI processing unit 220 that processes input information from and output information to the home HMI unit 210, a home AI control unit 230 that recognizes and understands the information processed by the home HMI processing unit 220 and the information from the on-vehicle AI robot 100 obtained from the cloud 300 by communication, a home AI assist unit 240 that supplements the processing of the home AI control unit 230, and the database 201.
  • The home HMI unit 210 has a microphone (second voice receiving means) 211 that receives the voice of the outside speaker 20, a loudspeaker (second voice transmission means) 212 that emits voice toward the outside speaker 20, a home camera (second image acquisition means) 213 that acquires images of the surroundings including the outside speaker 20, and sensors 214 that detect the security, air-conditioning and lighting states of the home.
  • Information detected or acquired by the microphone 211, the loudspeaker 212, the home camera 213 and the sensors 214 is sent to the home HMI processing unit 220 and the home AI assist unit 240.
  • The home HMI processing unit 220 has a voice recognition unit (second voice recognition means) 221 that recognizes the voice received by the microphone 211, a voice synthesis unit (second voice synthesis means) 222 that synthesizes the voice emitted from the loudspeaker 212, and an image recognition unit (second image recognition means) 223 that recognizes the surrounding images acquired by the home camera 213. Information from the home AI assist unit 240 is also used as appropriate for voice recognition and image recognition, and information from the home AI control unit 230 is used for voice synthesis, supplemented as appropriate by information from the home AI assist unit 240.
  • The home AI control unit 230 has a speaker recognition unit (second speaker recognition means) 231 that identifies the speaker and recognizes his or her attributes from the recognition results of the voice recognition unit 221 and the image recognition unit 223, a conversation-meaning understanding unit (second conversation-meaning understanding means) 232 that understands the meaning of the speaker's conversation by referring to a pre-stored word map based on those recognition results, a personal emotion estimation unit (second emotion estimation means) 233 that estimates the speaker's emotional state by referring to a pre-stored emotion map based on those recognition results, and an information extraction unit (second information extraction means) 234 that extracts, from the conversation with the outside speaker 20, information to be conveyed to the vehicle-side speaker 10.
  • Information from the home AI assist unit 240 is also used as appropriate for speaker recognition, conversation-meaning understanding, personal emotion estimation and information extraction.
  • The home AI assist unit 240 accesses the database 201 and the cloud 300 as appropriate, obtains the information needed for voice recognition, image recognition and voice synthesis from the database 201 and the database (big data) 301, and performs analysis and evaluation for voice recognition, image recognition and voice synthesis. It also exchanges information with the home AI control unit 230 to assist the speaker recognition, conversation-meaning understanding and personal emotion estimation performed by that unit. The home AI assist unit 240 further outputs command signals to the home electric devices.
  • The database 201 stores, for example, personal attribute data such as data on the driver and his or her family, communication attribute data needed for communication by the communication unit, vehicle attribute data, event management data for managing specific events, schedule data such as the schedules of the driver and family, and vital data such as the driver's state of health.
  • The communication device 22 includes a communication unit 22A, which performs authentication of received data and encryption of transmitted data.
  • Initial setting: the on-vehicle AI robot 100 and the home AI robot 200 are first initialized; this initialization is performed, for example, as shown in the flowchart of FIG. 4. The initial setting may be performed periodically, for example every day, or as needed.
  • On the on-vehicle AI robot 100 side, vehicle-side personal attribute data is initialized (step S110): for example, face recognition and facial-expression recognition of the driver and his or her family are performed and the results are registered as vehicle-side personal attribute data. Personal attribute data is likewise initialized on the home AI robot 200 side (step S130), where face recognition and facial-expression recognition of the driver and family are performed and registered. In this way, personal attribute data is shared between the vehicle 1 side and the home 2 side.
  • Next, vehicle-side communication attribute data is initialized on the on-vehicle AI robot 100 side (step S112), and home-side communication attribute data is initialized on the home AI robot 200 side (step S132); for example, the incoming call address and the authentication method are set.
  • Vehicle attribute data is then initialized on the on-vehicle AI robot 100 side (step S114): for example, vehicle information such as the tire-pressure state of the vehicle 1 and the remaining battery charge (SOC) is registered. This information is sent from the vehicle side to the home side by information synchronization, and home-side vehicle attribute data is initialized on the home AI robot 200 side (step S134). The vehicle attribute data is thus shared between the vehicle 1 side and the home 2 side.
  • Vehicle-side event management data is also initialized (step S116): for example, events of the vehicle 1 (normal and emergency) and the operations corresponding to them are registered.
  • Similarly, home-side event management data is initialized (step S136): for example, events at the home 2 (normal and emergency) and the operations corresponding to them are registered.
  • The event management data is likewise shared between the vehicle 1 side and the home 2 side by information synchronization.
  • The events of the vehicle 1 include various events that occur in connection with the travel and operation of the vehicle, and the events at the home 2 include various events that occur in family life.
  • Vehicle-side schedule data is initialized (step S118): for example, behavior data of the vehicle occupants (the driver and others) is registered. Home-side schedule data is likewise initialized (step S138): for example, behavior data of the people at home (family members and others) is registered.
  • Vehicle-side vital data is also initialized (step S120): for example, vital data of the vehicle occupants (driver and passengers) is registered. This information is sent from the vehicle side to the home side by information synchronization, and the vehicle-side vital data is also initialized on the home AI robot 200 side (step S140), so that the vehicle-side vital data is shared between the vehicle 1 side and the home 2 side.
  • Processing in the on-vehicle AI robot 100 proceeds as shown in FIG. 5: the initialization of FIG. 4 is first performed (step S100), after which the robot enters a standby state (step S200). If there is a voice conversation input (step S210), a camera image input (step S220), a sensor data input (step S230) or a data input via the cloud (step S240), the corresponding processing is performed (a schematic sketch of this loop is given after this list).
  • For a voice conversation input, voice recognition (step S212), speaker recognition (step S214) and understanding of the meaning of the conversation (step S216) are performed. For a camera image input, image recognition is performed (step S222) and the emotion of the driver or other occupant is understood (step S224).
  • For a sensor data input, the vital signs of the driver or other occupant and the vehicle environment, such as tire pressure and remaining battery charge, are understood (step S232).
  • For a data input via the cloud, a message input (step S242) is followed by message understanding (step S244).
  • After such comprehension of conversation meaning, emotion, vital signs, vehicle environment and messages, the AI selects the information to extract and the reaction to express (step S300).
  • When a spoken response is selected, voice synthesis is performed (step S312) and the speech is output (step S314).
  • When control of an on-vehicle device such as the on-vehicle air conditioner is selected (step S320), a control command is issued to that on-vehicle device (step S322).
  • When control of the navigation device is selected (step S330), for example a search for service facilities and display of the results (step S332), a route search, prediction and display of the arrival time, and a service reservation are performed (step S334).
  • When access to the database 101 is selected (step S340), the database 101 is accessed, the data is updated and the information is synchronized (step S342).
  • When connection to the home AI robot 200 is selected (step S350), a message is transmitted via the cloud (step S352).
  • The flows drawn in parallel can be executed simultaneously, and the process returns to the standby state of step S200; when the initial settings are updated, however, the process returns to step S100.
  • In this way, the on-vehicle AI robot 100 can extract, from its conversation with the vehicle-side speaker 10, information to be conveyed to the outside speaker 20 and transmit it to the home AI robot 200 by communication. The on-vehicle AI robot 100 can also receive, by communication, the extracted information transmitted from the home AI robot 200 and convey it to the vehicle-side speaker 10 by conversation. Furthermore, the on-vehicle AI robot 100 can generate conversation content using information that includes the emotion estimation information of the vehicle-side speaker 10.
  • Processing in the home AI robot 200 is similar (see FIG. 6): the initialization of FIG. 4 is performed (step S400), after which the robot enters a standby state (step S500). If there is a voice conversation input (step S510), a camera image input (step S520), a sensor data input (step S530) or a data input via the cloud (step S540), the corresponding processing is performed.
  • For a voice conversation input, voice recognition (step S512), speaker recognition (step S514) and understanding of the meaning of the conversation (step S516) are performed.
  • For a camera image input, image recognition is performed (step S522) and the emotion of the speaker is understood (step S524).
  • For a sensor data input, the security, air-conditioning and lighting data of the home are recognized (step S532).
  • For a data input via the cloud, conversation input (step S542), image input (step S544) and sensor data input from the vehicle side (step S546) are handled.
  • Then, processing for selecting the information to extract and the reaction to express by the AI is performed (step S600).
  • When a spoken response is selected, voice synthesis is performed (step S612) and the speech is output (step S614).
  • When control of a home appliance is selected, a control command is issued to it (step S620).
  • When access to the database 201 is selected (step S630), the database 201 is accessed, the data is updated and the information is synchronized (step S632).
  • When connection to the on-vehicle AI robot 100 is selected (step S640), a message is transmitted via the cloud (step S642). In FIG. 6 as well, the flows drawn in parallel can be executed simultaneously, and the process returns to the standby state of step S500; when the initial settings are updated, the process returns to step S400.
  • In this way, the home AI robot 200 can extract, from its conversation with the outside speaker 20, information to be conveyed to the vehicle-side speaker 10 and transmit it to the on-vehicle AI robot 100 by communication.
  • The home AI robot 200 can also receive, by communication, the extracted information transmitted from the on-vehicle AI robot 100 and convey it to the outside speaker 20 by conversation.
  • Furthermore, the home AI robot 200 can generate conversation content using information that includes the emotion estimation information of the outside speaker 20.
  • The on-vehicle AI robot 100 and the home AI robot 200 share information by synchronizing through the cloud 300; each AI robot 100, 200 accesses the cloud 300 at a predetermined cycle, so that the on-vehicle AI robot 100 acquires information from the home AI robot 200 and the home AI robot 200 acquires information from the on-vehicle AI robot 100 in substantially real time.
  • As a result, the driver on the vehicle 1 side and the family on the home 2 side can exchange information in real time through the AI robots 100 and 200.
  • Because the driver on the vehicle 1 side and the family on the home 2 side only converse with the AI robots 100 and 200, the necessary information can be communicated without the driver's attention being taken from driving and without the family's housework or other tasks being interrupted.
  • For example, the following kinds of information can be transmitted.
  • The home AI robot 200 can grasp conditions at home and the content of conversations there and send them to the on-vehicle AI robot 100; the on-vehicle AI robot 100 then judges whether the received information needs to be passed on and, watching the driver's situation, can inform the driver at an appropriate timing.
  • The on-vehicle AI robot 100 can also instruct the navigation system about the route of the vehicle 1 based on the conversation with the driver 10, calculate the fastest route and the arrival time from various information, and send them to the home AI robot 200; the home AI robot 200 judges whether the received information is needed and can convey the necessary information to the people at home at an appropriate timing.
  • When a vehicle failure occurs, the on-vehicle AI robot 100 can likewise estimate the state of the vehicle and the time until recovery and send this to the home AI robot 200, which judges whether the information is needed and conveys the necessary information to the people at home at an appropriate timing.
  • Conversely, the home AI robot 200 can grasp conditions at home and the content of conversations there and send them to the on-vehicle AI robot 100, which judges whether the received information is needed and conveys the necessary information to the driver at an appropriate timing.
  • The home AI robot 200 can also search for a route to a given destination and transmit it to the on-vehicle AI robot 100, which can add further information to the received route and set an optimum route. The on-vehicle AI robot 100 can likewise identify the number and attributes of the occupants and send them to the home AI robot 200, which judges whether the received information is needed and conveys it to the people at home at an appropriate timing.
  • The home AI robot 200 can also grasp the driver's sleep time and state of health through, for example, a wearable device and transmit them to the on-vehicle AI robot 100; based on the analysis of the driver's state sent from the home AI robot 200, the on-vehicle AI robot 100 can provide support such as changing the settings of the safety functions and recommending rest stops.
  • Conversely, when the on-vehicle AI robot 100 transmits information such as the in-vehicle air-conditioning temperature setting, a judgment of whether the driver feels hot or cold, and the distance and time to home, the home AI robot 200 can use the transmitted information and the behavior pattern of the person concerned to set the temperature of the home air conditioning, the bath temperature, the hot-water supply timing, and so on.
  • Although an embodiment has been described above, the facility outside the vehicle, illustrated here as a home, may instead be an office or the like.
  • Although the cloud 300 is used in the embodiment, the invention is not limited to this; other media may be used, or the vehicle side and the outside location such as the home may communicate directly.
  • In the embodiment, only the home AI robot 200 is a humanoid robot, but a humanoid robot may also be adopted as the on-vehicle AI robot 100; alternatively, functional robots equipped with only the necessary functions, rather than humanoid robots, may be adopted as both the on-vehicle AI robot 100 and the home AI robot 200.
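
The flow described above for the on-vehicle AI robot (standby state S200, the four input branches, and the reaction branches through step S352) is essentially an event-dispatch loop. The following Python sketch restates that loop in outline only; the RobotStub class, its method names and the keyword-based "understanding" are illustrative assumptions and are not taken from the publication.

```python
# Schematic restatement of the on-vehicle robot's main loop (cf. FIG. 5).
# The RobotStub and its handler names are illustrative assumptions, not the flowchart itself.
from collections import deque


class RobotStub:
    """Duck-typed stand-in for the on-vehicle AI robot 100."""

    def __init__(self):
        self.voice_in = deque(["i'm cold, tell home i'll be late"])
        self.inbox = deque()                       # messages arriving via the cloud

    def poll_voice(self):                          # S210: voice conversation input
        return self.voice_in.popleft() if self.voice_in else None

    def understand(self, text):                    # S212-S216: recognition and meaning (toy logic)
        return {
            "reply": "Understood.",
            "device_command": "air_conditioner: warm up" if "cold" in text else None,
            "for_outside": text if "home" in text else None,
        }

    def speak(self, text):                         # S310-S314: synthesis and utterance
        print("robot says:", text)

    def command_device(self, cmd):                 # S320-S334: on-vehicle device control
        print("device command:", cmd)


def main_loop(robot, outbox, cycles=2):
    for _ in range(cycles):                        # standby state, step S200
        if (utterance := robot.poll_voice()):
            meaning = robot.understand(utterance)
            if meaning["reply"]:
                robot.speak(meaning["reply"])
            if meaning["device_command"]:
                robot.command_device(meaning["device_command"])
            if meaning["for_outside"]:
                outbox.append(meaning["for_outside"])   # S350-S352: send via the cloud
        while robot.inbox:                         # S240-S244: message from the home robot
            robot.speak("Message from home: " + robot.inbox.popleft())


outbox = []
main_loop(RobotStub(), outbox)
print("queued for home robot:", outbox)
```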

Abstract

An information transfer support system for a vehicle includes an internal conversation robot 100 that has an artificial intelligence (AI) to exchange information with a speaker 10 inside a vehicle via conversation, and an external conversation robot 200 that exchanges information with a speaker 20 outside the vehicle in external facilities or in another vehicle via conversation. The internal conversation robot 100 extracts information that should be transmitted to the speaker 20 outside the vehicle from the conversation with the speaker 10 inside the vehicle and transmits the extracted information to the external conversation robot. The external conversation robot 200 receives the extracted information transmitted from the internal conversation robot 100 and transmits the extracted information to the speaker 20 outside the vehicle via conversation. The external conversation robot 200 extracts information that should be transmitted to the speaker 10 inside the vehicle from the conversation with the speaker 20 outside the vehicle and transmits the extracted information to the internal conversation robot 100. The internal conversation robot 100 receives the extracted information transmitted from the external conversation robot 200 and transmits the extracted information to the speaker 10 inside the vehicle via conversation. Thereby, the present invention improves convenience for vehicle drivers using an AI.

Description

車両用情報伝達支援システム (Information Transmission Support System for Vehicles)
 本発明は、車両と車両外部との間での情報伝達を支援する車両用情報伝達支援システムに関する。 The present invention relates to a vehicle information transmission support system for supporting information transmission between a vehicle and the outside of the vehicle.
 近年、AI(Artificial Intelligence,人工知能)を利用して車両の運転者等の利便性を向上させようとする技術が開発されている。例えば、特許文献1,2には、AIを有するロボット(以下、AIロボットという)を車両に装備して、運転者の支援を行なう技術が提案されている。 In recent years, a technology has been developed which attempts to improve the convenience of the driver of a vehicle or the like using artificial intelligence (AI). For example, Patent Literatures 1 and 2 propose a technique for providing a robot with an AI (hereinafter, referred to as an AI robot) to a vehicle to support the driver.
 特許文献1には、車両にAIロボットを装備し、車両外部にこのロボットを管理する管理システムを装備し、車載のロボットと運転者等との会話等を可能とする技術が提案されている。車載のロボットは運転者等の音声に反応し会話をするが、このときロボットの反応に必要な情報は管理システムから通信によって送られる。管理システム側に膨大な情報を用意しておくことで、車載のロボットを通して運転者等との多彩な情報交換が可能となるとされている。 Patent Document 1 proposes a technology in which a vehicle is equipped with an AI robot, and a management system for managing the robot outside the vehicle is equipped to enable conversation between a vehicle-mounted robot and a driver or the like. The on-board robot responds to the voice of the driver etc. to talk, and at this time, information necessary for the reaction of the robot is sent from the management system by communication. By preparing a large amount of information on the management system side, it is supposed that various information exchange with the driver etc. becomes possible through the on-vehicle robot.
 また、特許文献2には、車両にAIロボットを装備し、運転者の発話内容や表情や運転状況(乱暴な運転等)の履歴データに基づいて、馴れ度合い(数値)を算出して、馴れ度合いに応じてロボットの表情・動作を変えるようにした技術が提案されている。 Further, according to Patent Document 2, the vehicle is equipped with an AI robot, and the degree of familiarity (numerical value) is calculated based on history contents of the driver's uttered content, facial expressions and driving conditions (rough driving, etc.). A technology has been proposed that changes the expression / motion of the robot according to the degree.
特開2003-280688号公報 (JP 2003-280688 A); 国際公開第2009/107185号 (WO 2009/107185)
 しかしながら、上記の特許文献1の技術では、車載ロボットが運転者等に提供できるサービスは管理システム側に用意された情報に基づくものだけである。
 また、上記の特許文献2の技術は、車両内における運転者等と車載ロボットとの間だけでの情報伝達に関するものである。
 運転者にとっては例えば自宅にいる家族と運転しながら連絡をとったりすることができれば、利便性をより向上させることができる。したがって、かかる情報伝達に関しては更なる技術開発の余地がある。
However, in the technology of Patent Document 1 described above, the services that the on-vehicle robot can provide to the driver are limited to those based on the information prepared on the management system side.
Further, the technology of Patent Document 2 described above concerns information transmission only between the driver or other occupants in the vehicle and the on-vehicle robot.
For the driver, convenience could be further improved if, for example, it were possible to stay in contact with family at home while driving; there is therefore room for further technical development regarding such information transmission.
　本発明は、このような課題に着目して創案されたもので、AIを利用して車両の運転者等の利便性をより向上させることができるようにした、車両用情報伝達支援システムを提供することを目的としている。 The present invention was devised in view of such problems, and its object is to provide a vehicle information transmission support system that uses AI to further improve convenience for the driver of a vehicle and others.
　(1)本発明の車両用情報伝達支援システムは、車両に装備され、人工知能を有し、前記車両内の車両側話者と会話を交わして情報を交換する車載会話ロボットと、前記車載会話ロボットに接続された第1通信手段と、前記車両外部の施設内または他の車両内に装備され、人工知能を有し、前記施設内または他の車両内の車外側話者と会話を交わして情報を交換する車外会話ロボットと、前記車外会話ロボットに接続された第2通信手段と、を備え、前記車載会話ロボットは、前記車両側話者との会話から前記車外側話者へ伝達すべき情報を抽出して前記第1通信手段によって前記車外会話ロボットに送信すると共に、前記車外会話ロボットは、前記車載会話ロボット側から送信された抽出情報を前記第2通信手段によって受信し当該抽出情報を前記車外側話者に会話によって伝達し、前記車外会話ロボットは、前記車外側話者との会話から前記車両側話者へ伝達すべき情報を抽出して前記第2通信手段によって前記車載会話ロボットに送信すると共に、前記車載会話ロボットは、前記車外会話ロボット側から送信された抽出情報を前記第1通信手段によって受信し当該抽出情報を前記車両側話者に会話によって伝達することを特徴としている。 (1) A vehicle information transmission support system according to the present invention comprises: an on-vehicle conversation robot that is mounted on a vehicle, has artificial intelligence, and exchanges information through conversation with a vehicle-side speaker in the vehicle; a first communication means connected to the on-vehicle conversation robot; an outside-vehicle conversation robot that is installed in a facility outside the vehicle or in another vehicle, has artificial intelligence, and exchanges information through conversation with an outside speaker in that facility or other vehicle; and a second communication means connected to the outside-vehicle conversation robot. The on-vehicle conversation robot extracts, from its conversation with the vehicle-side speaker, information to be conveyed to the outside speaker and transmits it to the outside-vehicle conversation robot through the first communication means, and the outside-vehicle conversation robot receives this extracted information through the second communication means and conveys it to the outside speaker by conversation; the outside-vehicle conversation robot extracts, from its conversation with the outside speaker, information to be conveyed to the vehicle-side speaker and transmits it to the on-vehicle conversation robot through the second communication means, and the on-vehicle conversation robot receives this extracted information through the first communication means and conveys it to the vehicle-side speaker by conversation.
　(2)外部と通信する第3通信手段と、前記第3通信手段による通信により得られた前記車載会話ロボット側からの前記抽出情報、並びに、前記第3通信手段による通信により得られた前記車外会話ロボット側からの前記抽出情報を記憶するデータベースと、を有する、外部サーバーを備え、前記車載会話ロボットと前記車外会話ロボットとの間の情報伝達は前記外部サーバーを介して行われることが好ましい。 (2) Preferably, the system further comprises an external server having a third communication means for communicating with the outside and a database that stores the extracted information obtained from the on-vehicle conversation robot side and from the outside-vehicle conversation robot side through communication by the third communication means, and information transfer between the on-vehicle conversation robot and the outside-vehicle conversation robot is performed via the external server.
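
The exchange defined in (1) and (2) can be pictured as two conversational agents that post extracted items to a shared store and poll it for items addressed to their own speaker. The Python sketch below is a minimal illustration of that relay under assumed names (CloudStore, RelayRobot, extract_for_remote and so on); it is not an implementation disclosed in the publication.

```python
# Minimal sketch of the bidirectional relay in items (1)-(2).
# All names here are illustrative; the patent does not define an API.
from dataclasses import dataclass, field
from typing import List


@dataclass
class CloudStore:
    """Stands in for the external server's database (item (2))."""
    messages: List[dict] = field(default_factory=list)

    def post(self, sender: str, receiver: str, text: str) -> None:
        self.messages.append({"from": sender, "to": receiver, "text": text})

    def fetch(self, receiver: str) -> List[dict]:
        out = [m for m in self.messages if m["to"] == receiver]
        self.messages = [m for m in self.messages if m["to"] != receiver]
        return out


class RelayRobot:
    """One conversation robot; 'vehicle' and 'home' are two instances."""

    def __init__(self, name: str, peer: str, cloud: CloudStore):
        self.name, self.peer, self.cloud = name, peer, cloud

    def extract_for_remote(self, utterance: str):
        # Placeholder for the information-extraction means: here we simply
        # forward utterances that contain an explicit request to relay.
        return utterance if "tell" in utterance.lower() else None

    def on_local_utterance(self, utterance: str) -> None:
        extracted = self.extract_for_remote(utterance)
        if extracted:
            self.cloud.post(self.name, self.peer, extracted)  # 1st / 2nd communication means

    def poll_and_convey(self) -> None:
        for msg in self.cloud.fetch(self.name):
            # In the patent this is delivered as synthesized conversation.
            print(f"[{self.name}] conveys to its speaker: {msg['text']}")


cloud = CloudStore()
vehicle = RelayRobot("vehicle", "home", cloud)
home = RelayRobot("home", "vehicle", cloud)

vehicle.on_local_utterance("Tell my family I will be home around seven.")
home.poll_and_convey()
```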
 (3)前記車載会話ロボットは、音声認識情報の解析及び画像認識情報の解析のための解析情報を記憶する第1データベースと、前記車両側話者の音声を受信する第1音声受信手段と、周囲画像を取得する第1画像取得手段と、前記音声受信手段が受信した音声を認識する第1音声認識手段と、前記第1画像取得手段が取得した画像を認識する第1画像認識手段と、前記第1音声認識手段の音声認識情報及び前記画像認識手段の画像認識情報の少なくとも何れかの認識情報と前記第1データベースの解析情報とから前記車両側話者を認識する第1話者認識手段と、前記第1音声認識手段の認識情報と前記第1データベースの解析情報とから前記車両側話者の会話の意味を理解する第1会話意味理解手段と、前記第1会話意味理解手段の会話意味理解情報と前記第1データベースの解析情報とから前記車両側話者へ返信する会話内容を生成する第1会話内容生成手段と、前記第1会話内容生成手段により生成された前記会話内容に対応する返信音声を合成する第1音声合成手段と、前記第1音声合成手段により合成した返信音声を発信する第1音声発信手段と、前記第1会話意味理解手段による会話意味理解情報から前記車外側話者へ伝達すべき情報を抽出する第1情報抽出手段と、を備えていることが好ましい。
 (4)前記車載会話ロボットは、前記第1音声認識手段及び前記第1画像認識手段の少なくとも何れかの認識情報から前記車両側話者の感情の状態を推定する第1感情推定手段をさらに備え、前記第1会話内容生成手段は、前記第1感情推定手段による感情推定情報を含めた情報から前記車両側話者への会話内容を生成することが好ましい。
 (5)前記車両は、前記車両に搭載され指令に応じて作動する車載機器を備え、前記車載会話ロボットは、前記第1会話意味理解手段による会話意味理解情報から車両操作関連情報を抽出して抽出した情報に応じた指令を前記車載機器に出力することが好ましい。
(3) Preferably, the on-vehicle conversation robot comprises: a first database storing analysis information for analyzing voice recognition information and image recognition information; a first voice receiving means that receives the voice of the vehicle-side speaker; a first image acquisition means that acquires images of the surroundings; a first voice recognition means that recognizes the voice received by the voice receiving means; a first image recognition means that recognizes the images acquired by the first image acquisition means; a first speaker recognition means that recognizes the vehicle-side speaker from at least one of the voice recognition information of the first voice recognition means and the image recognition information of the first image recognition means together with the analysis information of the first database; a first conversation-meaning understanding means that understands the meaning of the vehicle-side speaker's conversation from the recognition information of the first voice recognition means and the analysis information of the first database; a first conversation content generation means that generates, from the conversation-meaning understanding information of the first conversation-meaning understanding means and the analysis information of the first database, conversation content to be returned to the vehicle-side speaker; a first voice synthesis means that synthesizes reply voice corresponding to the conversation content generated by the first conversation content generation means; a first voice transmission means that emits the reply voice synthesized by the first voice synthesis means; and a first information extraction means that extracts, from the conversation-meaning understanding information of the first conversation-meaning understanding means, information to be conveyed to the outside speaker.
(4) Preferably, the on-vehicle conversation robot further comprises a first emotion estimation means that estimates the emotional state of the vehicle-side speaker from the recognition information of at least one of the first voice recognition means and the first image recognition means, and the first conversation content generation means generates conversation content for the vehicle-side speaker from information that includes the emotion estimation information of the first emotion estimation means.
(5) Preferably, the vehicle is provided with on-vehicle devices that are mounted on the vehicle and operate according to commands, and the on-vehicle conversation robot extracts vehicle-operation-related information from the conversation-meaning understanding information of the first conversation-meaning understanding means and outputs commands corresponding to the extracted information to the on-vehicle devices.
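
Items (3) to (5) above describe a processing chain: voice and image recognition, speaker recognition, conversation-meaning understanding, emotion estimation, reply generation, voice synthesis, information extraction and device commands. The sketch below walks one utterance through a toy version of that chain; every function is a stub with assumed, keyword-based logic and does not reflect the actual recognizers or databases of the patent.

```python
# Illustrative pipeline for the on-vehicle conversation robot (items (3)-(5)).
# Real implementations would use trained recognizers; here each stage is a stub.

def recognize_speech(audio: str) -> str:
    return audio.strip().lower()             # stand-in for the 1st voice recognition means


def recognize_speaker(text: str, known: dict) -> str:
    return next((name for name, phrase in known.items() if phrase in text), "unknown")


def understand_meaning(text: str) -> dict:
    intent = "set_temperature" if "cold" in text else "chat"
    return {"intent": intent, "text": text}   # 1st conversation-meaning understanding means


def estimate_emotion(text: str) -> str:
    return "negative" if any(w in text for w in ("tired", "cold", "late")) else "neutral"


def extract_for_outside(meaning: dict):
    return meaning["text"] if "home" in meaning["text"] else None   # 1st information extraction


def generate_reply(meaning: dict, emotion: str) -> str:
    prefix = "Take it easy. " if emotion == "negative" else ""
    return prefix + "Understood: " + meaning["intent"].replace("_", " ")


def device_command(meaning: dict):
    if meaning["intent"] == "set_temperature":                      # on-vehicle device command
        return {"device": "air_conditioner", "action": "warm_up"}
    return None


audio_in = "I'm cold, and tell home I'll be late"
text = recognize_speech(audio_in)
speaker = recognize_speaker(text, {"driver": "i'm"})
meaning = understand_meaning(text)
emotion = estimate_emotion(text)
print(generate_reply(meaning, emotion))      # reply handed to voice synthesis / loudspeaker
print(device_command(meaning))               # command to on-vehicle device 11
print(extract_for_outside(meaning))          # forwarded to the outside-vehicle robot
```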
 (6)前記車外会話ロボットは、音声認識情報の解析及び画像認識情報の解析のための解析情報を記憶する第2データベースと、前記車両側話者の音声を受信する第2音声受信手段と、周囲画像を取得する第2画像取得手段と、前記音声受信手段が受信した音声を認識する第2音声認識手段と、前記第2画像取得手段が取得した画像を認識する第2画像認識手段と、前記第2音声認識手段の音声認識情報及び前記画像認識手段の画像認識情報の少なくとも何れかの認識情報と前記第2データベースの解析情報とから前記車両側話者を認識する第2話者認識手段と、前記第2音声認識手段の認識情報と前記第2データベースの解析情報とから前記車両側話者の会話の意味を理解する第2会話意味理解手段と、前記第2会話意味理解手段の会話意味理解情報と前記第2データベースの解析情報とから前記車両側話者へ返信する会話内容を生成する第2会話内容生成手段と、前記第2会話内容生成手段により生成された前記会話内容に対応する返信音声を合成する第2音声合成手段と、前記第2音声合成手段により合成した返信音声を発信する第2音声発信手段と、前記第2会話意味理解手段による会話意味理解情報から前記車両側話者へ伝達すべき情報を抽出する第2情報抽出手段と、を備えていることが好ましい。
 (7)前記車外会話ロボットは、前記第2音声認識手段及び前記第2画像認識手段の少なくとも何れかの認識情報から前記車外側話者の感情の状態を推定する第2感情推定手段をさらに備え、前記第2会話内容生成手段は、前記第2感情推定手段による感情推定情報を含めた情報から前記車外側話者への会話内容を生成することが好ましい。
 (8)前記施設または他の車両は、前記施設内または他の車両内に装備され、指令に応じて作動する特定の機器を備え、前記車外会話ロボットは、前記第2会話意味理解手段による会話意味理解情報から設備操作関連情報を抽出して抽出した情報に応じた指令を前記特定の機器に出力することが好ましい。
(6) Preferably, the outside-vehicle conversation robot comprises: a second database storing analysis information for analyzing voice recognition information and image recognition information; a second voice receiving means that receives the voice of the vehicle-side speaker; a second image acquisition means that acquires images of the surroundings; a second voice recognition means that recognizes the voice received by the voice receiving means; a second image recognition means that recognizes the images acquired by the second image acquisition means; a second speaker recognition means that recognizes the vehicle-side speaker from at least one of the voice recognition information of the second voice recognition means and the image recognition information of the second image recognition means together with the analysis information of the second database; a second conversation-meaning understanding means that understands the meaning of the vehicle-side speaker's conversation from the recognition information of the second voice recognition means and the analysis information of the second database; a second conversation content generation means that generates, from the conversation-meaning understanding information of the second conversation-meaning understanding means and the analysis information of the second database, conversation content to be returned to the vehicle-side speaker; a second voice synthesis means that synthesizes reply voice corresponding to the conversation content generated by the second conversation content generation means; a second voice transmission means that emits the reply voice synthesized by the second voice synthesis means; and a second information extraction means that extracts, from the conversation-meaning understanding information of the second conversation-meaning understanding means, information to be conveyed to the vehicle-side speaker.
(7) Preferably, the outside-vehicle conversation robot further comprises a second emotion estimation means that estimates the emotional state of the outside speaker from the recognition information of at least one of the second voice recognition means and the second image recognition means, and the second conversation content generation means generates conversation content for the outside speaker from information that includes the emotion estimation information of the second emotion estimation means.
(8) Preferably, the facility or other vehicle is provided with specific devices that are installed in the facility or other vehicle and operate according to commands, and the outside-vehicle conversation robot extracts facility-operation-related information from the conversation-meaning understanding information of the second conversation-meaning understanding means and outputs commands corresponding to the extracted information to the specific devices.
 本発明によれば、車両側話者と車載会話ロボットとの会話、車外側話者と車外会話ロボットとの会話、及び、車載会話ロボットと車外会話ロボットとの間での伝達すべき情報の授受によって、車両側話者は車両を運転しながら、車外側話者は施設(例えば車両側話者の自宅)内での何らかの作業をしながら、或いは、他の車両内で車両を運転しながら、それぞれの会話ロボットと会話することによって、両会話ロボットを通じて車両側話者と車外側話者との間で必要な情報の授受を行うことができる。したがって、車両の運転や施設内での作業に支障をきたすことなく、必要な情報の伝達を行うことができる。 According to the present invention, the conversation between the vehicle side speaker and the in-vehicle conversation robot, the conversation between the outside speaker and the outside conversation robot, and the exchange of information to be transmitted between the in-vehicle conversation robot and the outside conversation robot Thus, while the vehicle-side speaker drives the vehicle, the outside-vehicle speaker performs some work in the facility (for example, the home of the vehicle-side speaker) or while driving the vehicle in another vehicle, By having conversations with the respective conversation robots, it is possible to exchange necessary information between the vehicle-side speaker and the outside speaker through both conversation robots. Therefore, necessary information can be transmitted without interfering with the operation of the vehicle and the work in the facility.
一実施形態に係る車両用情報伝達支援システムの概略構成図である。It is a schematic block diagram of the information transmission support system for vehicles concerning one embodiment. 一実施形態に係る車載会話ロボットの構成を示すブロック図である。It is a block diagram which shows the structure of the vehicle-mounted conversation robot which concerns on one Embodiment. 一実施形態に係る車外会話ロボットの構成を示すブロック図である。It is a block diagram showing composition of the conversational robot outside the car concerning one embodiment. 一実施形態に係る車載会話ロボット及び車外会話ロボットの初期設定を示すフローチャートである。It is a flowchart which shows the initial setting of the vehicle-mounted conversation robot which concerns on one Embodiment, and a conversation robot outside a vehicle. 一実施形態に係る車載会話ロボットにおける処理を示すフローチャートである。It is a flowchart which shows the process in the vehicle-mounted conversation robot which concerns on one Embodiment. 一実施形態に係る車外会話ロボットにおける処理を示すフローチャートである。It is a flowchart which shows the process in the conversation robot outside which concerns on one Embodiment.
 以下、図面により実施の形態について説明する。なお、以下に示す実施形態はあくまでも例示に過ぎず、以下の実施形態で明示しない種々の変形や技術の適用を排除する意図はない。以下の実施形態の各構成は、それらの趣旨を逸脱しない範囲で種々変形して実施することができると共に、必要に応じて取捨選択することができ、あるいは適宜組み合わせることが可能である。 Hereinafter, embodiments will be described with reference to the drawings. Note that the embodiments described below are merely illustrative, and there is no intention to exclude the application of various modifications and techniques that are not specified in the following embodiments. The configurations of the following embodiments can be variously modified and implemented without departing from the scope of the embodiments, and can be selected as needed or can be combined as appropriate.
[1. Schematic configuration of the system]
As shown in FIG. 1, the vehicle information transfer support system according to the present embodiment includes an in-vehicle conversation robot 100 mounted on a vehicle 1, an out-of-vehicle conversation robot 200 installed in a home 2 serving as a facility outside the vehicle, and a cloud server (hereinafter referred to as the cloud) 300 serving as an external server. The vehicle 1 is assumed to be an electric vehicle.
The in-vehicle conversation robot 100 is an AI robot having computer-based artificial intelligence (hereinafter referred to as AI). It includes a first database 101 and exchanges information through conversation with a vehicle-side speaker (for example, the driver) 10 in the vehicle. For this purpose, the in-vehicle conversation robot 100 must have a function corresponding to an "ear" that listens to the voice of the vehicle-side speaker 10 and a function corresponding to a "mouth" that emits a voice to the vehicle-side speaker 10, but it does not need to have a part corresponding to a body as a so-called humanoid robot does. The in-vehicle conversation robot 100 of the present embodiment also has a function corresponding to an "eye" that observes the state of the vehicle-side speaker 10. Hereinafter, the in-vehicle conversation robot 100 is also referred to as the in-vehicle AI robot 100.
The vehicle 1 is equipped with in-vehicle devices 11, such as an air conditioner and a navigation device, that operate in response to commands. The in-vehicle AI robot 100 is connected to these in-vehicle devices 11 and can output commands to them. The vehicle 1 is also equipped with a communication device (first communication means) 12 for communicating with the outside, and the in-vehicle AI robot 100 is connected to the communication device 12 so that information signals can be exchanged with it.
The out-of-vehicle conversation robot 200 is also an AI robot having computer-based AI. It includes a second database 201 and exchanges information through conversation with an outside speaker (for example, a member of the driver's family) 20 in the home. For this purpose, the out-of-vehicle conversation robot 200 must have a function corresponding to an "ear" that listens to the voice of the outside speaker 20 and a function corresponding to a "mouth" that emits a voice to the outside speaker 20, but it does not need to have a part corresponding to a body as a so-called humanoid robot does. The out-of-vehicle conversation robot 200 of the present embodiment, however, is configured as a humanoid robot that also has a function corresponding to an "eye" that observes the state of the outside speaker 20. Hereinafter, the out-of-vehicle conversation robot 200 is also referred to as the home AI robot 200.
The home 2 is equipped with home appliances (specific devices) 21, such as an air conditioner and AV equipment such as a television, that operate in response to commands. The home AI robot 200 is connected to these home appliances 21 and can output commands to them. The home 2 is also equipped with a communication device (second communication means) 22 for communicating with the outside, and the home AI robot 200 is connected to the communication device 22 so that information signals can be exchanged with it.
The cloud 300 is constituted by a cloud computer and includes a database 301. It stores input information in the database 301 and manages this information while ensuring security. The cloud 300 is equipped with a communication device (third communication means) 32 for communicating with the outside and can exchange information signals with terminals connected via the Internet. These terminals include the in-vehicle AI robot 100 and the home AI robot 200, so information signals can be exchanged between the in-vehicle AI robot 100 or the home AI robot 200 and the cloud 300 through the communication devices 12, 22, and 32. The database 301 stores, in particular, a huge amount of big data, and each terminal can retrieve the data it needs from the big data in the database 301 to perform its processing.
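As a rough illustration of this cloud-mediated exchange, the following Python sketch models the cloud 300 as a simple message store that both robots write to and read from. The class and method names (CloudRelay, post, fetch_since) are hypothetical and are not taken from the patent; a real system would sit behind the communication devices 12, 22, and 32.

import time
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str        # "vehicle" or "home"
    payload: dict      # extracted information to be conveyed
    timestamp: float = field(default_factory=time.time)

class CloudRelay:
    """Minimal stand-in for the cloud 300: stores messages and hands
    each robot the messages posted by the other side."""
    def __init__(self):
        self._messages: list[Message] = []

    def post(self, sender: str, payload: dict) -> None:
        self._messages.append(Message(sender, payload))

    def fetch_since(self, receiver: str, since: float) -> list[Message]:
        # Return messages from the other side that are newer than 'since'.
        return [m for m in self._messages
                if m.sender != receiver and m.timestamp > since]

# Usage: the in-vehicle robot posts an arrival-time estimate,
# and the home robot later retrieves it.
cloud = CloudRelay()
cloud.post("vehicle", {"topic": "arrival", "eta_min": 25})
for msg in cloud.fetch_since("home", since=0.0):
    print(msg.payload)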
[2. Schematic configuration of the in-vehicle AI robot]
As shown in FIG. 2, the in-vehicle AI robot 100 includes an in-vehicle HMI (Human Machine Interface) unit 110 serving as an interface with the vehicle-side speaker 10, an in-vehicle HMI processing unit 120 that processes input information from and output information to the in-vehicle HMI unit 110, an in-vehicle AI control unit 130 that recognizes and understands the information processed by the in-vehicle HMI processing unit 120 and the information from the home AI robot 200 obtained by communication via the cloud 300, an in-vehicle AI assist unit 140 that complements the processing of the in-vehicle AI control unit 130, and the database 101.
The in-vehicle HMI unit 110 includes a microphone (first voice receiving means) 111 that receives the voice of the vehicle-side speaker 10, a speaker (first voice transmitting means) 112 that emits voice toward the vehicle-side speaker 10, an in-vehicle camera (first image acquisition means) 113 that acquires a surrounding image including the vehicle-side speaker 10, and a sensor 114 that detects the driver's vital signs and the vehicle environment (such as tire air pressure and remaining battery charge). Information detected or acquired by the microphone 111, the speaker 112, the in-vehicle camera 113, and the sensor 114 is sent to the in-vehicle HMI processing unit 120 and the in-vehicle AI assist unit 140.
The in-vehicle HMI processing unit 120 includes a speech recognition unit (first speech recognition means) 121 that recognizes the voice received by the microphone 111, a speech synthesis unit (first speech synthesis means) 122 that synthesizes the voice to be emitted from the speaker 112, and an image recognition unit (first image recognition means) 123 that recognizes the surrounding image acquired by the in-vehicle camera 113. Information from the in-vehicle AI assist unit 140 is also used as appropriate for speech recognition and image recognition. Information from the in-vehicle AI control unit 130 is used for speech synthesis, and information from the in-vehicle AI assist unit 140 is also used as appropriate.
The in-vehicle AI control unit 130 includes a speaker recognition unit (first speaker recognition means) 131 that identifies the speaker and recognizes his or her attributes from the recognition results of the speech recognition unit 121 and the image recognition unit 123, a conversation meaning understanding unit (first conversation meaning understanding means) 132 that understands the meaning of the speaker's conversation by referring to a prestored word map based on those recognition results, a personal emotion estimation unit (first personal emotion estimation means) 133 that estimates the speaker's emotional state by referring to a prestored emotion map based on those recognition results, and an information extraction unit (first information extraction means) 134 that extracts, from the conversation with the vehicle-side speaker 10, information to be transmitted to the outside speaker 20. Information from the in-vehicle AI assist unit 140 is also used as appropriate for speaker recognition, conversation meaning understanding, personal emotion estimation, and information extraction.
The in-vehicle AI assist unit 140 accesses the database 101 and the cloud 300 as appropriate, obtains information necessary for speech recognition, image recognition, and speech synthesis from the database 101 and the database (big data) 301, and performs analysis and evaluation for speech recognition, image recognition, and speech synthesis. It also exchanges information with the in-vehicle AI control unit 130 to assist the speaker recognition, conversation meaning understanding, and personal emotion estimation performed by the in-vehicle AI control unit 130. The in-vehicle AI assist unit 140 also outputs command signals to the in-vehicle devices.
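The processing chain described above (speech recognition, speaker recognition, conversation meaning understanding, emotion estimation, and information extraction) can be pictured, under loose assumptions, as a simple pipeline. The Python sketch below uses hypothetical names (recognize_speaker, understand_meaning, and so on) and toy heuristics; the patent does not specify how each unit is implemented.

class InVehicleAIControl:
    """Minimal sketch of units 131-134 of the in-vehicle AI control unit 130.
    Every helper below stands in for a component the patent leaves abstract."""

    def __init__(self, word_map: dict, emotion_map: dict):
        self.word_map = word_map        # prestored word map (unit 132); values are dicts
        self.emotion_map = emotion_map  # prestored emotion map (unit 133)

    def process_utterance(self, text: str, face_features: dict) -> dict:
        speaker = self.recognize_speaker(face_features)          # unit 131
        meaning = self.understand_meaning(text)                  # unit 132
        emotion = self.estimate_emotion(face_features)           # unit 133
        to_send = self.extract_info_to_transmit(meaning)         # unit 134
        return {"speaker": speaker, "meaning": meaning,
                "emotion": emotion, "to_transmit": to_send}

    def recognize_speaker(self, face_features):
        return face_features.get("registered_name", "unknown")

    def understand_meaning(self, text):
        # Keep only words registered in the word map, as a crude "meaning".
        return [w for w in text.split() if w in self.word_map]

    def estimate_emotion(self, face_features):
        return self.emotion_map.get(face_features.get("expression"), "neutral")

    def extract_info_to_transmit(self, meaning):
        # Forward only the items flagged as relevant to the other side.
        return [w for w in meaning if self.word_map[w].get("share")]

# Usage with invented map contents:
control = InVehicleAIControl(
    word_map={"milk": {"share": True}, "tired": {"share": False}},
    emotion_map={"smile": "happy", "frown": "annoyed"},
)
print(control.process_utterance("please buy milk on the way home",
                                {"registered_name": "driver", "expression": "smile"}))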
The database 101 stores, for example, personal attribute data such as data on the driver and his or her family, communication attribute data necessary for communication by the communication device, vehicle attribute data on the vehicle 1, event management data for managing registered specific events, scheduling data such as the schedules of the driver and his or her family, and vital data such as the driver's health condition.
The communication device 12 includes a communication unit 12A, and the communication unit 12A authenticates received data and encrypts transmitted data.
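As an illustration of what "authenticating received data and encrypting transmitted data" might look like, the following sketch combines HMAC signatures from Python's standard hmac module with symmetric encryption from the third-party cryptography package. The CommUnit name and the key-handling scheme are assumptions, not details given in the patent.

import hmac, hashlib, json
from cryptography.fernet import Fernet

class CommUnit:
    """Rough stand-in for communication units 12A / 22A."""
    def __init__(self, shared_secret: bytes, fernet_key: bytes):
        self._secret = shared_secret
        self._cipher = Fernet(fernet_key)

    def encrypt_outgoing(self, payload: dict) -> bytes:
        raw = json.dumps(payload).encode()
        sig = hmac.new(self._secret, raw, hashlib.sha256).hexdigest()
        # Ship the signature together with the data so the peer can verify it.
        return self._cipher.encrypt(json.dumps({"data": payload, "sig": sig}).encode())

    def authenticate_incoming(self, blob: bytes) -> dict:
        msg = json.loads(self._cipher.decrypt(blob))
        raw = json.dumps(msg["data"]).encode()
        expected = hmac.new(self._secret, raw, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, msg["sig"]):
            raise ValueError("authentication failed")
        return msg["data"]

# Usage: both units would be provisioned with the same keys during the initial setting.
key = Fernet.generate_key()
vehicle_unit = CommUnit(b"shared-secret", key)
home_unit = CommUnit(b"shared-secret", key)
print(home_unit.authenticate_incoming(vehicle_unit.encrypt_outgoing({"eta_min": 25})))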
[3. Schematic configuration of the home AI robot]
As shown in FIG. 3, the home AI robot 200 includes a home HMI unit 210 serving as an interface with the outside speaker 20, a home HMI processing unit 220 that processes input information from and output information to the home HMI unit 210, a home AI control unit 230 that recognizes and understands the information processed by the home HMI processing unit 220 and the information from the in-vehicle AI robot 100 obtained by communication via the cloud 300, a home AI assist unit 240 that complements the processing of the home AI control unit 230, and the database 201.
The home HMI unit 210 includes a microphone (second voice receiving means) 211 that receives the voice of the outside speaker 20, a speaker (second voice transmitting means) 212 that emits voice toward the outside speaker 20, a home camera (second image acquisition means) 213 that acquires a surrounding image including the outside speaker 20, and a sensor 214 that detects the security state of the home and the states of the air conditioning and lighting. Information detected or acquired by the microphone 211, the speaker 212, the home camera 213, and the sensor 214 is sent to the home HMI processing unit 220 and the home AI assist unit 240.
The home HMI processing unit 220 includes a speech recognition unit (second speech recognition means) 221 that recognizes the voice received by the microphone 211, a speech synthesis unit (second speech synthesis means) 222 that synthesizes the voice to be emitted from the speaker 212, and an image recognition unit (second image recognition means) 223 that recognizes the surrounding image acquired by the home camera 213. Information from the home AI assist unit 240 is also used as appropriate for speech recognition and image recognition. Information from the home AI control unit 230 is used for speech synthesis, and information from the home AI assist unit 240 is also used as appropriate.
The home AI control unit 230 includes a speaker recognition unit (second speaker recognition means) 231 that identifies the speaker and recognizes his or her attributes from the recognition results of the speech recognition unit 221 and the image recognition unit 223, a conversation meaning understanding unit (second conversation meaning understanding means) 232 that understands the meaning of the speaker's conversation by referring to a prestored word map based on those recognition results, a personal emotion estimation unit (second personal emotion estimation means) 233 that estimates the speaker's emotional state by referring to a prestored emotion map based on those recognition results, and an information extraction unit (second information extraction means) 234 that extracts, from the conversation with the outside speaker 20, information to be transmitted to the vehicle-side speaker 10. Information from the home AI assist unit 240 is also used as appropriate for speaker recognition, conversation meaning understanding, personal emotion estimation, and information extraction.
The home AI assist unit 240 accesses the database 201 and the cloud 300 as appropriate, obtains information necessary for speech recognition, image recognition, and speech synthesis from the database 201 and the database (big data) 301, and performs analysis and evaluation for speech recognition, image recognition, and speech synthesis. It also exchanges information with the home AI control unit 230 to assist the speaker recognition, conversation meaning understanding, and personal emotion estimation performed by the home AI control unit 230. The home AI assist unit 240 also outputs command signals to the home appliances.
Like the database 101, the database 201 stores, for example, personal attribute data such as data on the driver and his or her family, communication attribute data necessary for communication by the communication device, vehicle attribute data on the vehicle 1, event management data for managing registered specific events, scheduling data such as the schedules of the driver and his or her family, and vital data such as the driver's health condition.
The communication device 22 includes a communication unit 22A, and the communication unit 22A authenticates received data and encrypts transmitted data.
[4. Initial setting]
The in-vehicle AI robot 100 and the home AI robot 200 first undergo an initial setting (also referred to as initialization), which is performed, for example, as shown in the flowchart of FIG. 4. This initial setting may be performed periodically, for example once a day, or may be performed so as to update the data as needed.
As shown in FIG. 4, the vehicle-side personal attribute data are first initialized on the in-vehicle AI robot 100 side (step S110); for example, face recognition and facial expression recognition of the driver and his or her family are performed, and the results are registered as vehicle-side personal attribute data. Similarly, the personal attribute data are also initialized on the home AI robot 200 side (step S130), where face recognition and facial expression recognition of the driver and his or her family are likewise performed and registered. In this way, the personal attribute data are shared between the vehicle 1 side and the home 2 side.
Next, the vehicle-side communication attribute data are initialized on the in-vehicle AI robot 100 side (step S112); for example, the receiving address and the authentication method are set. Similarly, the home-side communication attribute data are initialized on the home AI robot 200 side (step S132), where the receiving address and the authentication method are likewise set. These communication attribute data are also shared between the vehicle 1 side and the home 2 side.
Next, the vehicle attribute data are initialized on the in-vehicle AI robot 100 side (step S114); for example, vehicle information such as the tire pressure state of the vehicle 1 and the remaining battery charge (SOC) is registered. This information is sent from the vehicle side to the home side by information synchronization, and the home-side vehicle attribute data are also initialized on the home AI robot 200 side (step S134). In this way, the vehicle attribute data are shared between the vehicle 1 side and the home 2 side.
Next, the vehicle-side event management data are initialized on the in-vehicle AI robot 100 side (step S116); for example, events of the vehicle 1 (in normal and emergency situations) and the actions corresponding to each event are registered. On the home AI robot 200 side, the home-side event management data are initialized (step S136); for example, events at the home 2 (in normal and emergency situations) and the actions corresponding to each event are registered. These event management data are also shared between the vehicle 1 side and the home 2 side by information synchronization. The events of the vehicle 1 include various events that occur in relation to the traveling and driving of the vehicle, and the events at the home 2 include various events that occur in the family's daily life.
Next, the vehicle-side schedule data are initialized on the in-vehicle AI robot 100 side (step S118); for example, the activity data of the vehicle occupants (the driver and others) are registered. On the home AI robot 200 side, the home-side schedule data are initialized (step S138); for example, the activity data of the people at home (family members and others) are registered. These schedule data are also shared between the vehicle 1 side and the home 2 side by information synchronization.
Next, the vehicle-side vital data are initialized on the in-vehicle AI robot 100 side (step S120); for example, the vital data of the vehicle occupants (the driver and passengers) are registered. This information is sent from the vehicle side to the home side by information synchronization, and the vehicle-side vital data are also initialized on the home AI robot 200 side (step S140). In this way, the vehicle-side vital data are shared between the vehicle 1 side and the home 2 side.
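A compact way to picture this initial setting is as a set of named data categories that each robot fills in locally and then mirrors to the other side via the cloud. The sketch below is illustrative only; the category keys and the sync_initial_data helper are hypothetical names, and real data would come from face registration, vehicle sensors, wearables, and so on.

INIT_CATEGORIES = [
    "personal_attributes",       # steps S110 / S130
    "communication_attributes",  # steps S112 / S132
    "vehicle_attributes",        # steps S114 / S134
    "event_management",          # steps S116 / S136
    "schedules",                 # steps S118 / S138
    "vital_data",                # steps S120 / S140
]

def sync_initial_data(local_db: dict, cloud_store: dict, side: str) -> None:
    """Push this side's initialization data to the cloud and pull the
    other side's data, so both databases end up with the shared set."""
    cloud_store.setdefault(side, {})
    for category in INIT_CATEGORIES:
        cloud_store[side][category] = local_db.get(category, {})
    other = "home" if side == "vehicle" else "vehicle"
    for category, value in cloud_store.get(other, {}).items():
        local_db[f"{other}_{category}"] = value

# Usage: the vehicle registers tire pressure and SOC, then synchronizes.
vehicle_db = {"vehicle_attributes": {"tire_pressure_ok": True, "soc_percent": 68}}
cloud_store: dict = {}
sync_initial_data(vehicle_db, cloud_store, side="vehicle")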
[5. Processing in the in-vehicle AI robot]
In the in-vehicle AI robot 100, processing is performed as shown in the flowchart of FIG. 5.
First, the initialization shown in FIG. 4 above is performed (step S100). The robot then enters a standby state (step S200), and when there is a voice conversation input (step S210), a camera image input (step S220), a sensor data input (step S230), or a data input via the cloud (step S240), the corresponding processing is performed.
When there is a voice conversation input, speech recognition is performed (step S212), speaker recognition is performed (step S214), and the meaning of the conversation is understood (step S216).
When there is a camera image input, image recognition is performed (step S222), and the emotion of the driver or other occupant is understood (step S224).
When there is a sensor data input, the vital signs of the driver and others and the vehicle environment, such as tire pressure and remaining battery charge, are understood (step S232).
When there is a data input via the cloud, a message is input (step S242) and message understanding is then performed (step S244).
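These understanding steps can be quite simple. For the sensor path (step S232), for example, understanding could amount to mapping raw readings onto a few meaningful states; the thresholds and field names in the sketch below are invented for illustration and are not specified in the patent.

def understand_vehicle_environment(sensor_data: dict) -> dict:
    """Turn raw sensor readings into the kind of facts the robot can talk
    about or forward (step S232). Threshold values are illustrative."""
    findings = {}
    if sensor_data.get("tire_pressure_kpa", 240) < 200:
        findings["tire_pressure"] = "low"
    soc = sensor_data.get("soc_percent", 100)
    findings["battery"] = "low" if soc < 20 else "ok"
    hr = sensor_data.get("driver_heart_rate")
    if hr is not None and hr > 110:
        findings["driver_vitals"] = "elevated heart rate"
    return findings

print(understand_vehicle_environment({"tire_pressure_kpa": 180, "soc_percent": 15}))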
After such conversation meaning understanding, emotion understanding, vital sign understanding, vehicle environment understanding, and message understanding, the AI performs information extraction and selects a response, such as a reactive expression (step S300).
When a corresponding voice response is selected based on this processing result (step S310), speech synthesis is performed (step S312) and a spontaneous conversational output is produced (step S314).
When control of an in-vehicle device such as the in-vehicle air conditioner is selected (step S320), a control command is issued to the in-vehicle device (step S322).
When control of the navigation device is selected (step S330), for example a search for service facilities and their display are performed (step S332), and a route search, prediction and display of the arrival time, and a service reservation are performed (step S334).
When access to the database 101 is selected (step S340), the database 101 is accessed to update data and synchronize information (step S342).
When connection to the home AI robot 200 is selected (step S350), a message is transmitted via the cloud (step S352).
In FIG. 5, the flows drawn in parallel can proceed concurrently, and the process returns from "return" to the standby state of step S200. However, when the initial setting is updated, the process returns to step S100.
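Taken together, the standby loop of FIG. 5 behaves like an event dispatcher: classify the input, understand it, then select and carry out one or more actions. The Python sketch below compresses that structure; the handler names, the Action enum, and the selection logic are assumptions made for illustration, not elements of the patent.

from enum import Enum, auto

class Action(Enum):
    VOICE_RESPONSE = auto()   # steps S310-S314
    DEVICE_CONTROL = auto()   # steps S320-S322
    NAVI_CONTROL = auto()     # steps S330-S334
    DB_SYNC = auto()          # steps S340-S342
    SEND_MESSAGE = auto()     # steps S350-S352

def standby_loop(next_input, handlers, select_actions, dispatch):
    """next_input() yields (kind, data) events (S210/S220/S230/S240);
    handlers[kind] understands the event (S212-S216, S222-S224, S232, S242-S244);
    select_actions() picks the responses (S300); dispatch[action] carries them out."""
    while True:
        kind, data = next_input()
        understood = handlers[kind](data)
        for action in select_actions(understood):
            dispatch[action](understood)

# A dispatch table could be as simple as:
dispatch = {
    Action.VOICE_RESPONSE: lambda u: print("robot says:", u.get("reply", "...")),
    Action.DEVICE_CONTROL: lambda u: print("command to device:", u.get("device")),
    Action.NAVI_CONTROL:   lambda u: print("route request:", u.get("destination")),
    Action.DB_SYNC:        lambda u: print("synchronize database 101"),
    Action.SEND_MESSAGE:   lambda u: print("post to cloud:", u.get("to_transmit")),
}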
Thus, the in-vehicle AI robot 100 can extract, from the conversation with the vehicle-side speaker 10, information to be transmitted to the outside speaker 20 and send it to the home AI robot 200 by communication. The in-vehicle AI robot 100 can also receive, by communication, the extracted information sent from the home AI robot 200 and convey that information to the vehicle-side speaker 10 through conversation. Furthermore, the in-vehicle AI robot 100 can generate conversation content using information that includes the emotion estimation information of the vehicle-side speaker 10.
[6. Processing in the home AI robot]
In the home AI robot 200, processing is performed as shown in the flowchart of FIG. 6.
First, the initialization shown in FIG. 4 above is performed (step S400). The robot then enters a standby state (step S500), and when there is a voice conversation input (step S510), a camera image input (step S520), a sensor data input (step S530), or a data input via the cloud (step S540), the corresponding processing is performed.
When there is a voice conversation input, speech recognition is performed (step S512), speaker recognition is performed (step S514), and the meaning of the conversation is understood (step S516).
When there is a camera image input, image recognition is performed (step S522), and the emotion of the outside speaker, such as a family member, is understood (step S524).
When there is a sensor data input, security data and air-conditioning and lighting data are recognized (step S532).
When there is a data input via the cloud, a conversation input (step S542), an image input (step S544), and a sensor data input from the vehicle side (step S546) are each processed.
After such conversation meaning understanding, emotion understanding, vital sign understanding, vehicle environment understanding, and data input, the AI performs information extraction and selects a response, such as a reactive expression (step S600).
When a corresponding voice response is selected based on this processing result (step S610), speech synthesis is performed (step S612) and a spontaneous conversational output is produced (step S614).
When control of a home appliance is selected (step S620), a control command is issued to the home appliance (step S622).
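Steps S620 and S622 amount to turning an understood request (from the local speaker or from the vehicle side) into a concrete appliance command. The sketch below is one guess at that mapping; the command vocabulary and the HomeAppliance interface are invented for illustration.

class HomeAppliance:
    """Stand-in for a home appliance 21 that acts on commands."""
    def __init__(self, name: str):
        self.name = name

    def execute(self, command: dict) -> None:
        print(f"{self.name} <- {command}")

def dispatch_appliance_command(understood: dict, appliances: dict) -> None:
    """Map an understood request onto a registered appliance (steps S620-S622)."""
    request = understood.get("appliance_request")
    if not request:
        return
    appliance = appliances.get(request["target"])
    if appliance is None:
        print("no such appliance:", request["target"])
        return
    appliance.execute(request["command"])

# Usage: the vehicle side reports that the driver feels cold and arrives in
# 20 minutes, so the home robot pre-heats the living room.
appliances = {"living_room_ac": HomeAppliance("living_room_ac")}
dispatch_appliance_command(
    {"appliance_request": {"target": "living_room_ac",
                           "command": {"mode": "heat", "setpoint_c": 23}}},
    appliances,
)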
When access to the database 201 is selected (step S630), the database 201 is accessed to update data and synchronize information (step S632).
When connection to the in-vehicle AI robot 100 is selected (step S640), a message is transmitted via the cloud (step S642).
In FIG. 6 as well, the flows drawn in parallel can proceed concurrently, and the process returns from "return" to the standby state of step S500. In this case too, when the initial setting is updated, the process returns to step S400.
Thus, the home AI robot 200 can extract, from the conversation with the outside speaker 20, information to be transmitted to the vehicle-side speaker 10 and send it to the in-vehicle AI robot 100 by communication. The home AI robot 200 can also receive, by communication, the extracted information sent from the in-vehicle AI robot 100 and convey that information to the outside speaker 20 through conversation. Furthermore, the home AI robot 200 can generate conversation content using information that includes the emotion estimation information of the outside speaker 20.
The in-vehicle AI robot 100 and the home AI robot 200 share information by synchronizing it through the cloud 300. Each AI robot accesses the cloud 300 at a predetermined cycle, so that the in-vehicle AI robot 100 acquires the information from the home AI robot 200, and the home AI robot 200 acquires the information from the in-vehicle AI robot 100, in both cases substantially in real time.
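One simple realization of "accessing the cloud at a predetermined cycle" is periodic polling. The sketch below reuses the hypothetical CloudRelay from the earlier sketch and adds a poll loop; the cycle length and the on_message callback are arbitrary choices, not values taken from the patent.

import time

def poll_cloud(relay, side: str, on_message, period_s: float = 5.0, cycles: int = 3):
    """Fetch new messages from the other side every 'period_s' seconds
    (a finite number of cycles here, so the example terminates)."""
    last_seen = 0.0
    for _ in range(cycles):
        for msg in relay.fetch_since(side, last_seen):
            last_seen = max(last_seen, msg.timestamp)
            on_message(msg.payload)
        time.sleep(period_s)

# Usage (with the CloudRelay sketch above):
# poll_cloud(cloud, "home", on_message=lambda p: print("home robot received:", p))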
[7. Operation and effects]
Because the present embodiment is configured as described above, on the vehicle side the driver, as the vehicle-side speaker, converses with the in-vehicle AI robot 100, and outside the vehicle a family member, as the outside speaker, converses with the home AI robot 200. Each of the AI robots 100 and 200 extracts, from its own conversation, the information to be transmitted between the vehicle side and the home side and sends the extracted information to the cloud 300.
Once the information has been sent to the cloud 300, the in-vehicle AI robot 100 can acquire the information from the home AI robot 200, and the home AI robot 200 can acquire the information from the in-vehicle AI robot 100, in both cases substantially in real time, so the driver on the vehicle 1 side and the family at the home 2 can exchange information substantially in real time through the AI robots 100 and 200. Since the driver and the family only have to converse with the AI robots 100 and 200, the necessary information can be transmitted without the driver being hindered in driving the vehicle and without the family being hindered in housework or other activities.
Specifically, for example, the following kinds of information can be transmitted.
For example, when the home AI robot 200 grasps the situation at home and the content of conversations there and sends them to the in-vehicle AI robot 100, the in-vehicle AI robot 100 judges whether the received information needs to be conveyed and, if it does, conveys the information to the driver at an appropriate timing chosen while observing the driver's situation.
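This "convey it or not, and when" judgment can be sketched as a small rule-based filter over the received message and the current driver state. The priorities, state labels, and function name below are invented for the sake of the example; the patent leaves the decision logic open.

def decide_delivery(message: dict, driver_state: dict) -> str:
    """Return 'now', 'defer', or 'drop' for a message received from the
    home side, based on urgency and on how busy the driver seems."""
    urgency = message.get("urgency", "low")        # e.g. "high" for emergencies
    busy = driver_state.get("workload", "normal")  # e.g. "high" while merging
    if urgency == "high":
        return "now"
    if urgency == "low" and not message.get("relevant_to_driver", True):
        return "drop"
    return "defer" if busy == "high" else "now"

# Usage: a casual update waits until the driver is less busy.
print(decide_delivery({"urgency": "low", "relevant_to_driver": True},
                      {"workload": "high"}))   # -> "defer"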
The in-vehicle AI robot 100 can also instruct the navigation system about the route of the vehicle 1 based on the conversation with the driver 10, calculate the fastest route and the arrival time from various information, and send them to the home AI robot 200; the home AI robot 200 judges whether the received information is needed and conveys the necessary information to the people at home at an appropriate timing.
When a vehicle malfunction occurs, the in-vehicle AI robot 100 can also estimate information on the state of the vehicle and the time until recovery and send them to the home AI robot 200; the home AI robot 200 judges whether the received information is needed and conveys the necessary information to the people at home at an appropriate timing.
The home AI robot 200 can grasp the situation at home and the content of conversations there and send them to the in-vehicle AI robot 100; the in-vehicle AI robot 100 judges whether the received information is needed and conveys the necessary information to the driver at an appropriate timing.
The home AI robot 200 can also search for a route to a given destination and send it to the in-vehicle AI robot 100, and the in-vehicle AI robot 100 can add further information to the received route and set an optimal route.
The in-vehicle AI robot 100 can grasp the number and attributes of the occupants and send them to the home AI robot 200, and the home AI robot 200 can judge whether the received information is needed and convey it to the people at home at an appropriate timing.
The home AI robot 200 can also grasp sleep duration and health condition, for example through a wearable device, and send them to the in-vehicle AI robot 100; based on the analysis of the driver's state sent from the home AI robot 200, the in-vehicle AI robot 100 can provide support such as changing the settings of safety functions and recommending rest stops.
When the in-vehicle AI robot 100 sends the home AI robot 200 information such as the in-vehicle air-conditioning temperature setting, its judgment as to whether the driver appears to feel hot or cold, and the distance and time to home, the home AI robot 200 can set the home air conditioning, the bath temperature, the hot-water supply timing, and the like based on the transmitted information and the behavior pattern of the person concerned.
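As a final illustration, the following sketch shows how the home side might turn such a report from the vehicle into pre-conditioning commands for the appliances; the fields of the report and the decision rules are assumptions, not behavior specified in the patent.

def plan_home_preconditioning(report: dict) -> list:
    """Derive appliance commands from a vehicle-side report such as
    {'feels': 'cold', 'eta_min': 20, 'cabin_setpoint_c': 25}."""
    commands = []
    eta = report.get("eta_min", 999)
    if eta <= 30:
        if report.get("feels") == "cold":
            commands.append({"target": "living_room_ac",
                             "command": {"mode": "heat",
                                         "setpoint_c": report.get("cabin_setpoint_c", 23)}})
            commands.append({"target": "bath", "command": {"fill": True, "temp_c": 41}})
        elif report.get("feels") == "hot":
            commands.append({"target": "living_room_ac",
                             "command": {"mode": "cool", "setpoint_c": 26}})
    return commands

print(plan_home_preconditioning({"feels": "cold", "eta_min": 20, "cabin_setpoint_c": 25}))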
[8. Other]
Although an embodiment has been described above, the embodiment illustrates a home as the facility outside the vehicle; the facility outside the vehicle is not limited to a home and may instead be an office or the like.
In the embodiment, communication is performed via the cloud 300, but the system is not limited to this; another medium may be used, or the vehicle side may communicate directly with the outside of the vehicle, such as the home.
In the above embodiment, only the home AI robot 200 is a humanoid robot, but the in-vehicle AI robot 100 may also be a humanoid robot.
Alternatively, instead of humanoid robots, functional robots equipped only with the necessary functions may be adopted for both the in-vehicle AI robot 100 and the home AI robot 200.
In the embodiment, an example was shown in which the necessary information is exchanged between the vehicle-side speaker and the outside speaker between the vehicle and a facility outside the vehicle; however, the necessary information may instead be exchanged between the vehicle-side speaker and an outside speaker (an occupant of another vehicle, for example its driver) between the vehicle and the other vehicle.
1 vehicle
10 vehicle-side speaker (for example, the driver)
11 in-vehicle device
12 communication device (first communication means)
2 home
20 outside speaker (for example, the driver's family)
21 home appliance (specific device)
22 communication device (second communication means)
32 communication device (third communication means)
100 in-vehicle AI robot (in-vehicle conversation robot)
101 first database
110 in-vehicle HMI unit
111 microphone (first voice receiving means)
112 speaker (first voice transmitting means)
113 in-vehicle camera (first image acquisition means)
120 in-vehicle HMI processing unit
121 speech recognition unit (first speech recognition means)
122 speech synthesis unit (first speech synthesis means)
123 image recognition unit (first image recognition means)
130 in-vehicle AI control unit
131 speaker recognition unit (first speaker recognition means)
132 conversation meaning understanding unit (first conversation meaning understanding means)
133 personal emotion estimation unit (first personal emotion estimation means)
134 information extraction unit (first information extraction means)
140 in-vehicle AI assist unit
200 home AI robot (out-of-vehicle conversation robot)
201 second database
210 home HMI unit
211 microphone (second voice receiving means)
212 speaker (second voice transmitting means)
213 home camera (second image acquisition means)
220 home HMI processing unit
221 speech recognition unit (second speech recognition means)
222 speech synthesis unit (second speech synthesis means)
223 image recognition unit (second image recognition means)
230 home AI control unit
231 speaker recognition unit (second speaker recognition means)
232 conversation meaning understanding unit (second conversation meaning understanding means)
233 personal emotion estimation unit (second personal emotion estimation means)
234 information extraction unit (second information extraction means)
240 home AI assist unit
300 cloud server (cloud) as an external server
301 database

Claims (8)

  1.  An information transfer support system for a vehicle, comprising:
     an in-vehicle conversation robot that is mounted on a vehicle, has artificial intelligence, and exchanges information through conversation with a vehicle-side speaker in the vehicle;
     first communication means connected to the in-vehicle conversation robot;
     an out-of-vehicle conversation robot that is installed in a facility outside the vehicle or in another vehicle, has artificial intelligence, and exchanges information through conversation with an outside speaker in the facility or the other vehicle; and
     second communication means connected to the out-of-vehicle conversation robot, wherein
     the in-vehicle conversation robot extracts, from the conversation with the vehicle-side speaker, information to be transmitted to the outside speaker and transmits the information to the out-of-vehicle conversation robot by the first communication means, and the out-of-vehicle conversation robot receives, by the second communication means, the extracted information transmitted from the in-vehicle conversation robot and conveys the extracted information to the outside speaker through conversation, and
     the out-of-vehicle conversation robot extracts, from the conversation with the outside speaker, information to be transmitted to the vehicle-side speaker and transmits the information to the in-vehicle conversation robot by the second communication means, and the in-vehicle conversation robot receives, by the first communication means, the extracted information transmitted from the out-of-vehicle conversation robot and conveys the extracted information to the vehicle-side speaker through conversation.
  2.  The information transfer support system for a vehicle according to claim 1, further comprising an external server that has third communication means for communicating with the outside and a database that stores the extracted information from the in-vehicle conversation robot obtained by communication through the third communication means and the extracted information from the out-of-vehicle conversation robot obtained by communication through the third communication means,
     wherein the information transfer between the in-vehicle conversation robot and the out-of-vehicle conversation robot is performed via the external server.
  3.  The information transfer support system for a vehicle according to claim 1 or 2, wherein the in-vehicle conversation robot comprises:
     a first database that stores analysis information for analyzing speech recognition information and image recognition information;
     first voice receiving means for receiving the voice of the vehicle-side speaker;
     first image acquisition means for acquiring a surrounding image;
     first speech recognition means for recognizing the voice received by the first voice receiving means;
     first image recognition means for recognizing the image acquired by the first image acquisition means;
     first speaker recognition means for recognizing the vehicle-side speaker from at least one of the speech recognition information of the first speech recognition means and the image recognition information of the first image recognition means, and from the analysis information of the first database;
     first conversation meaning understanding means for understanding the meaning of the conversation of the vehicle-side speaker from the recognition information of the first speech recognition means and the analysis information of the first database;
     first conversation content generation means for generating conversation content to be returned to the vehicle-side speaker from the conversation meaning understanding information of the first conversation meaning understanding means and the analysis information of the first database;
     first speech synthesis means for synthesizing a reply voice corresponding to the conversation content generated by the first conversation content generation means;
     first voice transmitting means for emitting the reply voice synthesized by the first speech synthesis means; and
     first information extraction means for extracting information to be transmitted to the outside speaker from the conversation meaning understanding information of the first conversation meaning understanding means.
  4.  The information transfer support system for a vehicle according to claim 3, wherein the in-vehicle conversation robot further comprises first emotion estimation means for estimating the emotional state of the vehicle-side speaker from the recognition information of at least one of the first speech recognition means and the first image recognition means, and
     the first conversation content generation means generates the conversation content for the vehicle-side speaker from information including the emotion estimation information produced by the first emotion estimation means.
  5.  The information transfer support system for a vehicle according to claim 3 or 4, wherein the vehicle comprises an in-vehicle device that is mounted on the vehicle and operates in response to a command, and
     the in-vehicle conversation robot extracts vehicle-operation-related information from the conversation meaning understanding information of the first conversation meaning understanding means and outputs a command corresponding to the extracted information to the in-vehicle device.
  6.  The information transfer support system for a vehicle according to any one of claims 3 to 5, wherein the out-of-vehicle conversation robot comprises:
     a second database that stores analysis information for analyzing speech recognition information and image recognition information;
     second voice receiving means for receiving the voice of the outside speaker;
     second image acquisition means for acquiring a surrounding image;
     second speech recognition means for recognizing the voice received by the second voice receiving means;
     second image recognition means for recognizing the image acquired by the second image acquisition means;
     second speaker recognition means for recognizing the outside speaker from at least one of the speech recognition information of the second speech recognition means and the image recognition information of the second image recognition means, and from the analysis information of the second database;
     second conversation meaning understanding means for understanding the meaning of the conversation of the outside speaker from the recognition information of the second speech recognition means and the analysis information of the second database;
     second conversation content generation means for generating conversation content to be returned to the outside speaker from the conversation meaning understanding information of the second conversation meaning understanding means and the analysis information of the second database;
     second speech synthesis means for synthesizing a reply voice corresponding to the conversation content generated by the second conversation content generation means;
     second voice transmitting means for emitting the reply voice synthesized by the second speech synthesis means; and
     second information extraction means for extracting information to be transmitted to the vehicle-side speaker from the conversation meaning understanding information of the second conversation meaning understanding means.
  7.  The information transfer support system for a vehicle according to claim 6, wherein the out-of-vehicle conversation robot further comprises second emotion estimation means for estimating the emotional state of the outside speaker from the recognition information of at least one of the second speech recognition means and the second image recognition means, and
     the second conversation content generation means generates the conversation content for the outside speaker from information including the emotion estimation information produced by the second emotion estimation means.
  8.  The information transfer support system for a vehicle according to claim 6 or 7, wherein the facility or the other vehicle comprises a specific device that is installed in the facility or the other vehicle and operates in response to a command, and
     the out-of-vehicle conversation robot extracts equipment-operation-related information from the conversation meaning understanding information of the second conversation meaning understanding means and outputs a command corresponding to the extracted information to the specific device.
PCT/JP2018/012651 2017-09-28 2018-03-28 Information transfer support system for vehicle WO2019064650A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017-188911 2017-09-28
JP2017188911 2017-09-28

Publications (1)

Publication Number Publication Date
WO2019064650A1 true WO2019064650A1 (en) 2019-04-04

Family

ID=65901722

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/012651 WO2019064650A1 (en) 2017-09-28 2018-03-28 Information transfer support system for vehicle

Country Status (1)

Country Link
WO (1) WO2019064650A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111063349A (en) * 2019-12-17 2020-04-24 苏州思必驰信息科技有限公司 Key query method and device based on artificial intelligence voice
CN113393687A (en) * 2020-03-12 2021-09-14 奥迪股份公司 Driving assistance device, driving assistance method, vehicle, and medium
DE102022119837A1 (en) 2022-08-08 2024-02-08 Audi Aktiengesellschaft Emergency call procedure with low consumption of spectral resources

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001325202A (en) * 2000-05-12 2001-11-22 Sega Corp Conversation method in virtual space and system therefor
JP2003280688A (en) * 2002-03-25 2003-10-02 Nissan Diesel Motor Co Ltd Information exchange system
JP2004090109A (en) * 2002-08-29 2004-03-25 Sony Corp Robot device and interactive method for robot device
WO2005086051A1 (en) * 2004-03-08 2005-09-15 National Institute Of Information And Communications Technology Interactive system, interactive robot, program, and recording medium
JP2016012340A (en) * 2014-06-05 2016-01-21 ソフトバンク株式会社 Action control system and program
WO2016103881A1 (en) * 2014-12-25 2016-06-30 エイディシーテクノロジー株式会社 Robot

Similar Documents

Publication Publication Date Title
US10155524B2 (en) Vehicle with wearable for identifying role of one or more users and adjustment of user settings
US20190031127A1 (en) System and method for determining a user role and user settings associated with a vehicle
US10908677B2 (en) Vehicle system for providing driver feedback in response to an occupant's emotion
CN107415938B (en) Controlling autonomous vehicle functions and outputs based on occupant position and attention
WO2019064650A1 (en) Information transfer support system for vehicle
CN102039898B (en) Emotive advisory system
CN106164398B (en) Mobile device, vehicle remote operation system and vehicle remote operation method
JP7146585B2 (en) Line-of-sight detection device, program, and line-of-sight detection method
JP2018060192A (en) Speech production device and communication device
JP6800249B2 (en) Conversation processing server, conversation processing server control method, and terminal
US20190251973A1 (en) Speech providing method, speech providing system and server
CN111190480A (en) Control device, agent device, and computer-readable storage medium
US10631140B2 (en) Server, client, and system
KR20190006741A (en) Connectivity Integration Management Method and Connected Car thereof
CN111547063A (en) Intelligent vehicle-mounted emotion interaction device for fatigue detection
CN111750885B (en) Control device, control method, and storage medium storing program
CN111144539A (en) Control device, agent device, and computer-readable storage medium
JP6879220B2 (en) Servers, control methods, and control programs
US20210326659A1 (en) System and method for updating an input/output device decision-making model of a digital assistant based on routine information of a user
JP7084848B2 (en) Control equipment, agent equipment and programs
JP2018039282A (en) Air-conditioning operation proposal method and air-conditioning operation proposal system
JP7291476B2 (en) Seat guidance device, seat guidance method, and seat guidance system
JP2020060623A (en) Agent system, agent method, and program
JP6739017B1 (en) Tourism support device, robot equipped with the device, tourism support system, and tourism support method
US20230158899A1 (en) Information processing apparatus and information processing system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18861258

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18861258

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP