WO2022124164A1 - Attention object sharing device, and attention object sharing method - Google Patents


Info

Publication number
WO2022124164A1
Authority
WO
WIPO (PCT)
Prior art keywords
child
interest
driver
information
target
Prior art date
Application number
PCT/JP2021/044133
Other languages
French (fr)
Japanese (ja)
Inventor
加奈子 金澤
好文 伊藤
大地 八木
昊舟 李
友英 西野
裕子 中村
Original Assignee
DENSO CORPORATION
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by DENSO CORPORATION
Priority to CN202180081745.3A (CN116547729A)
Publication of WO2022124164A1

Classifications

    • G06F16/332 Query formulation
    • G06F16/787 Retrieval of video data characterised by using metadata, e.g. using geographical or spatial information such as location
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06V30/10 Character recognition
    • G08G1/00 Traffic control systems for road vehicles
    • G10L13/00 Speech synthesis; Text to speech systems
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G16Y10/40 Economic sectors: Transportation
    • G16Y20/20 Information sensed or collected by the things relating to the thing itself
    • G16Y40/20 IoT information processing: Analytics; Diagnosis
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • This disclosure relates to a technique by which a driver shares information about things outside the vehicle that a person seated in the back seat is interested in.
  • Patent Document 1 discloses a technique for displaying, on a display visible to the driver, an image captured by an exterior camera when a passenger, i.e., an occupant other than the driver, makes a specific facial expression or gesture, utters a voice expressing a specific emotion, or performs a predetermined instruction operation.
  • The image shown on the display is an image of a subject located outside the vehicle interior in the passenger's line-of-sight direction. In other words, an image including a subject that the passenger may be interested in is displayed on the display.
  • In some cases, a child is seated in a child seat provided in the back seat while a guardian (for example, a parent) sits in the driver's seat.
  • Children tend to take interest in a wider variety of things than adults, and they often ask the parent who is driving about things that catch their attention, or point out that those things exist.
  • However, a parent who is driving must concentrate on driving and cannot always respond attentively to the child's questions. It is also difficult for the driver to know what the child sitting in the back seat is looking at. Even when the driving load is light, it is therefore hard for the driver to respond closely to the child's expressions of interest. This becomes even harder when the child cannot yet verbalize things well or has a limited vocabulary.
  • With the technique of Patent Document 1, an image including a subject that may interest the child is shown on the display, so the driver can get some idea of what the child is interested in.
  • However, only an image is displayed. For the parent to respond or sympathize with the child's reaction to something outside the vehicle interior, the parent must identify from the displayed image the subject the child is likely focusing on, put it into words, and speak; the burden on the driver therefore remains large.
  • the driver here refers to a driver's seat occupant who is a person seated in the driver's seat.
  • The present disclosure has been made in view of these circumstances, and its purpose is to provide an attention-object sharing device and an attention-object sharing method that make it easier for the driver's-seat occupant to respond to a child's reaction to things outside the vehicle interior.
  • The attention-object sharing device for achieving this purpose is an attention-object sharing device used in a vehicle provided with a child seat, which is a seat for a child to sit on. It comprises: a child information acquisition unit that acquires the child's line-of-sight direction based on an image from a vehicle-interior camera whose imaging range includes at least the face of the child seated in the child seat; an interest-reaction detection unit that detects the child's interest reaction to things outside the vehicle interior based on at least one of the child's biological information, the child's voice, and the line-of-sight direction; an attention-target detection unit that detects the attention target, that is, the thing the child is interested in, based on the line-of-sight direction acquired by the child information acquisition unit and a captured image from an exterior camera (28) attached to the vehicle; and a unit that acquires verbalized information about the attention target from a database arranged inside or outside the vehicle and provides the verbalized information to at least one of the child and the driver's-seat occupant.
  • With the configuration of notifying the guardian, as the driver's-seat occupant, of information about the attention target, not only the image but also verbalized information is provided, so the guardian can easily recognize what the child is paying attention to. It therefore becomes easier to hold a conversation with the child about the thing the child is attending to.
  • With the configuration of notifying the child, verbalized information about the attention target is also provided, and based on that information it becomes easier for the child to tell the driver's-seat occupant what he or she is paying attention to. As a result, it becomes easier for the driver's-seat occupant to respond to the child's interest reaction.
  • The attention-object sharing method for achieving the above purpose is a method, executed by at least one processor, for letting a guardian share the things that a child seated in a preset child seat of a vehicle is interested in. The method includes: acquiring the child's line-of-sight direction based on an image from a vehicle-interior camera whose imaging range includes at least the face of the child seated in the child seat; detecting the child's interest reaction to things outside the vehicle interior based on at least one of the child's biological information, the child's voice, and the line-of-sight direction; detecting the attention target, that is, the thing the child is interested in, based on the acquired line-of-sight direction and an image captured by an exterior camera attached to the vehicle; acquiring verbalized information about the attention target from a database arranged inside or outside the vehicle; and providing the acquired information to at least one of the driver's-seat occupant and the child, using at least one of a text display corresponding to the information and a voice output of the information (S110, S111, S113).
  • The above method corresponds to the method executed by the above-described attention-object sharing device, and achieves the same effects through the same operations.
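  • The processing flow described above may be sketched, for reference, as the following Python fragment. All function names, the detection threshold, and the toy database are illustrative assumptions and do not represent the actual implementation of the disclosed device.

```python
# Sketch of the attention-sharing pipeline: detect an interest reaction,
# verbalize the attention target via a database, and provide the text.
from dataclasses import dataclass

@dataclass
class InterestReaction:
    gaze_azimuth_deg: float  # child's line-of-sight direction (vehicle-relative)
    detected: bool           # whether an interest reaction was detected

def detect_interest(gaze_azimuth_deg, heart_rate_bpm, utterance):
    """Detect the child's interest reaction from at least one of the
    line-of-sight direction, biological information, and voice."""
    excited = heart_rate_bpm > 110   # assumed biometric threshold
    spoke = bool(utterance)          # the child said something
    return InterestReaction(gaze_azimuth_deg, excited or spoke)

def verbalize(attention_target, database):
    """Acquire verbalized information about the attention target from a
    database arranged inside or outside the vehicle."""
    return database.get(attention_target, attention_target)

def share(reaction, attention_target, database):
    """Provide the verbalized information to the driver-seat occupant
    and/or the child, e.g. as displayed text or synthesized speech."""
    if not reaction.detected:
        return None
    return "Your child is looking at: " + verbalize(attention_target, database)
```

For example, `share(detect_interest(35.0, 120, "Woof!"), "dog", {"dog": "a dog walking on the sidewalk"})` returns the verbalized message for the guardian.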
  • FIG. 1 is a diagram showing an example of the schematic configuration of an attention-target sharing system Sys to which the attention-object sharing device according to the present disclosure is applied.
  • Some or all of the elements constituting the attention target sharing system Sys of the present disclosure are mounted on the vehicle. Some of the functional elements included in the attention target sharing system Sys may be provided outside the vehicle.
  • the attention target sharing system Sys can be understood as a system that supports communication between a child and a guardian as a driver in one aspect. Therefore, the attention target sharing system Sys can also be called a communication support system.
  • the driver in the present disclosure refers to a person sitting in the driver's seat, that is, a driver's seat occupant.
  • the expression "driver” is not limited to a person who actually performs a part or all of the driving operation.
  • During automatic driving, "driver" refers to the person who is to receive driving authority from the automatic driving system.
  • In the following, the operation of the attention-target sharing system Sys will be described assuming a case where a child is in a child seat installed in the back seat and the child's guardian is seated in the driver's seat as the driver.
  • The concept of a guardian includes the child's mother, father, and relatives such as grandparents. Depending on the customs or laws of the region where the system is used, babysitters may also be included among guardians.
  • a child seat is fixed to a predetermined position among the rear seats of the vehicle, such as a seat located behind the driver's seat, by a mechanism such as a seat belt or ISOFIX.
  • ISOFIX has a configuration defined in ISO13216-1: 1999, and is also referred to as LATCH (Lower Anchors and Tethers for Children).
  • The child seat here refers to an occupant protection or restraint device for children in a broad sense, and may include booster seats, junior seats, and the like for adjusting the height of the seating surface.
  • Seats where children are seated, such as seats with child seats, are hereinafter referred to as children's seats.
  • A child seat need not necessarily be installed on the children's seat.
  • Children's seats need only be seats that children can sit in, and are not necessarily exclusively for children. Children's seats may sometimes be configured as seats that adults can also sit in.
  • The level pointed to by "automatic driving" in the present disclosure may be, for example, equivalent to Level 3 as defined by SAE International, or may be Level 4 or higher.
  • Level 3 refers to the level at which the system executes all driving tasks within the operational design domain (ODD), while driving authority is transferred from the system to the user in an emergency.
  • The ODD defines the conditions under which automatic driving can be executed, such as the vehicle traveling on a highway.
  • Level 3 corresponds to so-called conditional automated driving.
  • Level 4 is a level at which the system can perform all driving tasks except under specific circumstances such as unsupported roads and extreme environments.
  • Level 5 is the level at which the system can perform all driving tasks in any environment.
  • Level 4 or higher automated driving refers to the level at which the automated driving device performs all driving tasks, that is, the automated level at which the driver's seat occupants are allowed to sleep.
  • level 2 or lower corresponds to a driving support level in which the driver executes at least a part of driving tasks of steering and acceleration / deceleration.
  • the system here refers to an in-vehicle system including the attention target sharing system Sys.
  • the attention target sharing system Sys includes an HCU (HMI Control Unit) 1 that controls the operation of an HMI (Human Machine Interface) such as a display.
  • The HCU 1 corresponds to the attention-object sharing device.
  • the HCU1 is used by being connected to various in-vehicle devices such as a driver status monitor (hereinafter, DSM: Driver Status Monitor) 21.
  • the HCU 1 is connected to a DSM 21, a driver sensor 22, a driver microphone 23, an input device 24, a child camera 25, a child sensor 26, a child microphone 27, an outside camera 28, and a locator 29.
  • the HCU 1 is also connected to a communication device 31, a meter display 32, a center display 33, a head-up display (HUD) 34, a rear seat display 35, a window display 36, a dialogue device 37, a speaker 38, and the like.
  • the HCU1 is also connected to various sensors / devices (not shown in FIG. 1) via the in-vehicle network Nw, which is a communication network constructed in the vehicle.
  • The HCU 1 is configured to be capable of intercommunication, via the in-vehicle network Nw, with a computer that controls vehicle travel, such as an automatic driving device, and can receive a signal indicating whether the vehicle is currently in automatic driving mode.
  • Detection results from various in-vehicle sensors and the like are also input to the HCU 1 via the in-vehicle network Nw.
  • Examples of the in-vehicle sensors include sensors that detect vehicle speed, acceleration, steering angle, shift position, accelerator depression amount, brake depression amount, and the like.
  • In-vehicle sensors also include sensors / switches that detect the operating state of the parking brake and the power state of the vehicle.
  • the HCU1 and various devices may be connected by a dedicated line or may be connected via the in-vehicle network Nw. Further, an ECU (Electronic Control Unit) may be interposed between the HCU 1 and the in-vehicle device.
  • the DSM21 is a device that sequentially detects the user's state based on the user's face image.
  • the DSM 21 includes, for example, a near-infrared light source, a near-infrared camera, and a control module for controlling them.
  • the DSM 21 is installed in a posture in which the near-infrared camera faces the direction in which the headrest of the driver's seat is present, for example, on the upper surface of the steering column portion, the upper surface of the instrument panel, or the like.
  • the DSM 21 uses a near-infrared camera to photograph the head of the driver irradiated with near-infrared light by a near-infrared light source.
  • the image captured by the near-infrared camera is image-analyzed by the control module.
  • the control module extracts driver state information, which is information indicating the driver's state, such as the opening of the driver's eyes, from the captured image input from the near-infrared camera.
  • the camera constituting the DSM 21 may be a visible light camera.
  • the DSM 21 outputs the driver status information extracted from the driver's face image to the HCU1.
  • the driver status information includes, for example, the direction of the driver's face, the direction of the line of sight, the opening of the eyelids, the opening of the pupil, the opening of the mouth, the posture, and the like.
  • the DSM 21 may be configured to estimate the facial expression, emotion, etc. of the driver based on the distribution of facial feature points and the like.
  • As the method of estimating facial expressions and emotions, various methods can be used, such as methods based on the distribution pattern of facial feature points or on facial muscle movements, so a detailed description is omitted here.
  • For example, a method of estimating facial expressions based on the pattern of detection scores for individual action units (AUs) can be adopted. Examples of AUs include lowering of the eyebrows, raising of the cheeks, raising of the mouth corners, and raising of the upper lip.
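  • The AU-score-based estimation mentioned above may be sketched as follows. The AU names, score patterns, and thresholds are assumptions for illustration only, not the DSM's actual model.

```python
# Map each candidate expression to the minimum AU scores (0.0 .. 1.0)
# required for that expression; illustrative values only.
AU_PATTERNS = {
    "smile":    {"cheek_raiser": 0.5, "lip_corner_puller": 0.5},
    "frown":    {"brow_lowerer": 0.5},
    "surprise": {"upper_lip_raiser": 0.5},
}

def estimate_expression(au_scores):
    """Return the expression whose required AUs are all present in the
    detected score pattern, preferring the pattern matching the most AUs."""
    best, best_n = "neutral", 0
    for expr, required in AU_PATTERNS.items():
        if all(au_scores.get(au, 0.0) >= thr for au, thr in required.items()):
            if len(required) > best_n:
                best, best_n = expr, len(required)
    return best
```

For instance, a score pattern with strong cheek raising and raised mouth corners would be classified as a smile under these assumed patterns.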
  • the DSM21 may estimate the degree of driver tension based on the driver's facial expression and the like.
  • the health condition of the driver may be estimated based on the color information of the driver's face.
  • the driver's status information can include facial expressions, emotions, tension, health status, and the like.
  • the HCU1 may have a function of detecting the state of the driver based on image analysis. In that case, the DSM 21 may be configured to be able to output the driver's face image to the HCU1. The functional arrangement between DSM21 and HCU1 can be changed as appropriate.
  • the driver sensor 22 is a biosensor that senses the biometric information of the driver.
  • the driver sensor 22 is, for example, a pulse wave sensor that senses a pulse wave.
  • The driver sensor 22 may be a sensor whose detection target is at least one of blood pressure, electrocardiogram, heart rate, sweating amount, body temperature, heat dissipation from the body, breathing rhythm, breathing depth, exhaled-breath components, body composition, posture, body movement, electrodermal activity, facial-muscle activity potential, and peripheral blood flow.
  • Peripheral blood flow refers to blood flow in the peripheral part such as a fingertip.
  • Biological sensors include temperature sensors, pulse wave sensors, humidity sensors, heart rate sensors and the like.
  • the concept of biometric information can include various state quantities as described above.
  • a plurality of types of driver sensors 22 having different biological information to be detected may be connected to the HCU 1.
  • the above-mentioned DSM 21 can also be included in the driver sensor 22 in a broad sense.
  • The driver sensor 22 may be built into the backrest or headrest of the driver's seat, or may be provided in the steering wheel. In addition, a millimeter-wave radar that detects the driver's heart rate, body movement, and posture by transmitting and receiving millimeter waves as probe waves toward the driver's seat can also be included among the biosensors.
  • The driver sensor 22 may be a thermography camera. Sensors based on various detection principles, such as radio-wave sensors and infrared sensors, can be adopted as the driver sensor 22.
  • the various driver sensors 22 may be wearable devices that are worn and used on the driver, for example, the wrist.
  • various shapes such as a wristband type, a wristwatch type, a ring type, a glasses type, and an earphone type can be adopted.
  • the wearable device as the driver sensor 22 is configured to be capable of intercommunication with the HCU 1 via a communication device 31 mounted on the vehicle.
  • the connection mode between the wearable device and the communication device 31 may be a wired connection or a wireless connection.
  • a wireless connection method short-range wireless communication standards such as Bluetooth (registered trademark) and Wi-Fi (registered trademark) can be adopted.
  • As the driver sensor 22, an electrodermal activity (EDA) sensor that detects changes in skin conductance derived from sympathetic nervous system activity can also be adopted.
  • the driver microphone 23 is a device that converts ambient sounds such as voices spoken by passengers in the front seats into electrical signals and inputs them to the HCU 1.
  • the driver microphone 23 is arranged, for example, on the upper surface of the steering column cover, the steering wheel, the central portion of the instrument panel, or the like so as to easily collect the voice spoken by the driver.
  • the input device 24 is an operating member for receiving a driver's instruction to the HCU1.
  • the input device 24 may be a mechanical switch (so-called steer switch) provided on the spoke portion of the steering wheel, or may be a voice input device that recognizes the utterance content of the driver.
  • the input device 24 may be a touch panel laminated on the display panel of the display provided on the instrument panel, for example, the center display 33.
  • the input device 24 may be a driver's smartphone.
  • the touch panel and display of the smartphone possessed by the driver can be used as the input device 24.
  • the child camera 25 is a camera that captures the face of a child sitting on a child seat.
  • the child camera 25 is attached to the back of the front seat, which is located in front of the child's seat.
  • the child camera 25 may be arranged on the ceiling or the like in the vehicle interior so that the face of the child sitting in the rear seat can be imaged.
  • a camera that includes the entire rear seat in the imaging range may be used as the child camera 25.
  • the child camera 25 may be attached to the central portion of the ceiling portion, the upper end portion of the windshield, the overhead console, or the like.
  • the image data captured by the child camera 25 is output toward the HCU 1. It should be noted that the concept of image data here can also include a video signal.
  • the child camera 25 corresponds to a vehicle interior camera.
  • The child sensor 26 is a biological sensor that senses biological information of the occupant (that is, the child) sitting in the children's seat. Like the driver sensor 22, the child sensor 26 is a sensor that detects at least one of various state quantities such as blood pressure, heart rate, sweating amount, and body temperature. For example, the child sensor 26 is a pulse wave sensor that senses a pulse wave. A plurality of types of child sensors 26, each detecting different biological information, may be connected to the HCU 1.
  • the biometric information of a child can include various items as well as the biometric information of a driver.
  • the child sensor 26 is built in, for example, a child seat.
  • the child sensor 26 may be a non-contact type sensor that acquires various vital information using millimeter waves, infrared rays, or the like.
  • the child sensor 26 may be a thermography camera.
  • the child sensor 26 may be a wearable device that is worn and used on, for example, a wrist of a child.
  • the wearable device as the child sensor 26 may be configured to be capable of intercommunication with the HCU 1 via a communication device 31 mounted on the vehicle.
  • The child microphone 27 is a device that converts the voice spoken by an occupant in the back seat, particularly the child sitting in the children's seat, into an electric signal and inputs it to the HCU 1.
  • The child microphone 27 is arranged, for example, on the rear portion of the front seat located in front of the children's seat or on the central portion of the vehicle-interior ceiling, so that the voice of the child seated in the children's seat can be easily collected.
  • the child's microphone 27 may be provided near the headrest of the child seat.
  • the out-of-vehicle camera 28 is an in-vehicle camera that photographs the surroundings of the own vehicle and outputs the data of the captured image to the HCU 1.
  • the out-of-vehicle camera 28 includes at least a lens and an image sensor, and electronically acquires an image showing the periphery of the own vehicle.
  • the number of external cameras 28 may be one or a plurality.
  • the front camera, the rear camera, the left side camera, and the right side camera are connected to the HCU 1 as the outside camera 28.
  • the front camera is a camera that captures an image of the front of the vehicle at a predetermined angle of view, and is attached to the front end of the own vehicle such as a front grill.
  • the rear camera is a camera that captures the rear of the vehicle at a predetermined angle of view, and is arranged at a predetermined position on the rear surface of the vehicle body, for example, near the rear license plate or the rear window.
  • the left side camera is a camera that captures the left side of the vehicle and is attached to the left side mirror.
  • the right side camera is a camera that captures the right side of the vehicle and is attached to the right side mirror.
  • A wide-angle lens such as a fisheye lens is adopted for these exterior cameras 28, and each exterior camera 28 has an angle of view of 180 degrees or more. By using the four cameras 28 together, the entire circumference (that is, 360°) of the vehicle can be imaged.
  • The mounting position of each camera 28 described above can be changed as appropriate.
  • the front camera may be attached to a rearview mirror, the upper end of the windshield, or the like.
  • the left and right side cameras may be arranged near the bases of the A pillar and the B pillar.
  • the external camera 28 may be mounted on the roof or may be mounted on the ceiling in the vehicle interior.
  • Some or all of the exterior cameras 28 may be retrofitted cameras mounted, for example, on the roof, on the dashboard, or near the window frame of a rear-seat door.
  • the out-of-vehicle camera 28 may be a compound-eye camera that includes a plurality of sets of lenses and image pickup elements and can take 360 ° images with one unit.
  • the attention target sharing system Sys may include a plurality of cameras having different imaging distance ranges as the outside camera 28.
  • the attention target sharing system Sys may include a short-distance camera for taking a short-distance image and a telephoto camera for taking a relatively long-distance image as the out-of-vehicle camera 28.
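  • With a four-camera arrangement covering the full circumference as above, selecting the exterior camera that images the child's line-of-sight direction may be sketched as follows. The mounting azimuths are assumptions (0° = vehicle front, angles increasing clockwise), not the actual configuration.

```python
# Assumed optical-axis azimuths of the four exterior cameras.
CAMERA_AZIMUTH_DEG = {"front": 0.0, "right": 90.0, "rear": 180.0, "left": 270.0}

def select_camera(gaze_azimuth_deg):
    """Return the camera whose optical axis is closest to the child's
    gaze bearing; with 180-degree fields of view, that camera is
    guaranteed to cover the gazed direction."""
    def angular_gap(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)   # shortest angular distance
    gaze = gaze_azimuth_deg % 360.0
    return min(CAMERA_AZIMUTH_DEG,
               key=lambda name: angular_gap(CAMERA_AZIMUTH_DEG[name], gaze))
```

For example, a gaze bearing of -80° (80° to the left of straight ahead) selects the left-side camera under these assumed azimuths.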
  • the locator 29 is a device for positioning the current position of the own vehicle.
  • the locator 29 is realized by using, for example, a GNSS receiver, an inertial sensor, and a map database (hereinafter, DB).
  • A GNSS receiver is a device that sequentially (for example, every 100 milliseconds) detects its current position by receiving navigation signals transmitted from the positioning satellites constituting a GNSS (Global Navigation Satellite System).
  • the locator 29 sequentially positions the position of its own vehicle by combining the positioning result of the GNSS receiver and the measurement result of the inertial sensor.
  • The positioned vehicle position is output toward the HCU 1. Further, the locator 29 reads map data of a predetermined range determined based on the current position from the map DB and provides it to the HCU 1.
  • the map DB may be stored locally in the vehicle or may be located on the cloud.
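  • The combination of GNSS positioning results and inertial measurements performed by the locator may be sketched as the following predict/correct loop. A real locator would typically use a Kalman filter; the simple blend below, with an assumed gain, is only an illustration.

```python
# Dead-reckoning sketch: propagate with the inertial sensor between GNSS
# fixes, then pull the estimate toward each fresh fix.

def propagate(pos_xy, velocity_xy, dt_s):
    """Advance the position estimate using inertial-derived velocity."""
    return (pos_xy[0] + velocity_xy[0] * dt_s,
            pos_xy[1] + velocity_xy[1] * dt_s)

def correct(predicted_xy, gnss_xy, gain=0.8):
    """Blend the predicted position toward a GNSS fix (gain is assumed)."""
    return tuple(p + gain * (g - p) for p, g in zip(predicted_xy, gnss_xy))
```

Between fixes, `propagate` keeps the estimate current; when a fix arrives, `correct` reduces the accumulated drift.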
  • the communication device 31 is a device for wirelessly or wired communication with an external device.
  • the communication device 31 carries out data communication with a smartphone 4 or a wearable device brought into the vehicle interior in accordance with a standard such as Bluetooth (registered trademark).
  • the smartphone 4 is mainly a smartphone owned by the driver.
  • The communication device 31 is also configured to perform wireless communication conforming to standards such as LTE (Long Term Evolution), 4G, and 5G, and carries out data communication with a predetermined server 5.
  • the server 5 stores data about an object of interest, which is an object outside the vehicle interior that the child is interested in, as will be described later. Further, the server 5 includes a database and the like in which explanatory information about various objects is registered. The server 5 is configured to be able to transmit explanatory information about the child's object of interest detected by the HCU1 based on the request from the vehicle. The server 5 may have a function of acquiring information on the Internet and returning the information on the object of interest transmitted from the vehicle to the vehicle. The server 5 may be a Web server.
  • The data about the attention target transmitted from the vehicle may be image data; in that case, the server 5 need only be configured to analyze the received image and obtain text data verbalizing the attention target for use as a search word.
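  • The vehicle-to-server exchange described above may be sketched as follows. The request shape, the recognizer, and the response format are hypothetical illustrations of the described behavior, not the server 5's actual interface.

```python
import json

def build_request(image_bytes, vehicle_position):
    """Vehicle side: package the captured image of the attention target
    together with the vehicle position for transmission to the server."""
    return {"image_size": len(image_bytes),   # stand-in for the image payload
            "position": vehicle_position}

def server_verbalize(request, recognizer):
    """Server side: analyze the received image data and return text data
    verbalizing the attention target as a search word."""
    label = recognizer(request)               # e.g. an image classifier
    return json.dumps({"search_word": label})
```

For example, with a recognizer that labels the image "fire truck", the server would return `{"search_word": "fire truck"}` for the vehicle to display or speak.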
  • the meter display 32 is a display arranged in an area located in front of the driver's seat on the instrument panel. As the display, a liquid crystal display, an organic EL display, or the like can be adopted.
  • the center display 33 is a display arranged near the center in the vehicle width direction on the instrument panel.
  • the HUD 34 is a device that projects a virtual image that can be perceived by the user by projecting image light onto a predetermined area of the windshield based on control signals and video data input from the HCU 1 or a navigation device. The HUD 34 displays an image superimposed on the landscape in front of the vehicle.
  • Each of the meter display 32, the center display 33, and the HUD 34 displays an image corresponding to the signal input from the HCU 1.
  • the meter display 32, the center display 33, and the HUD 34 correspond to displays for drivers.
  • the center display 33 can also be visually recognized by the passenger in the passenger seat. Therefore, the center display 33 can be understood as a display for front seat occupants including the passenger seat occupant.
  • the rear seat display 35 is a display for passengers in the rear seats, mainly passengers seated in children's seats.
  • the rear seat display 35 is also referred to as a rear monitor.
  • the rear seat display 35 is arranged, for example, on the back portion of the front seat located in front of the child seat, the ceiling portion in the vehicle interior, or the like.
  • the rear seat display 35 also operates based on the input signal from the HCU 1.
  • the rear seat display 35 is provided with a touch panel and is configured to be able to receive instruction operations by the child on the display screen.
  • the window display 36 is a device that displays an image on a window glass on the side of a vehicle, particularly a side window adjacent to a child's seat, by irradiating the window glass with display light.
  • the window display 36 includes, for example, as shown in FIG. 2, a projector 361 for emitting image light and a screen 362.
  • the screen 362 has a film-like structure for reflecting the image light toward the vehicle interior, and is attached to the surface of the window glass 61 to be irradiated with the image light on the vehicle interior side.
  • a mirror 363 for magnified reflection may be interposed between the projector 361 and the screen 362.
  • the mirror 363 is preferably a concave mirror, but may be a plane mirror.
  • the projector 361 and the mirror 363 are arranged on the surface of the roof portion 62 on the vehicle interior side, that is, on the vehicle interior ceiling portion.
  • the dialogue device 37 is a device that interacts with various occupants such as a child and a driver sitting on a child's seat.
  • the dialogue device 37 recognizes, for example, the child's voice data acquired by the child's microphone 27, creates an answer to the child's input voice, and outputs the answer by voice.
  • the dialogue device 37 is configured to, for example, use artificial intelligence to recognize the utterance content of the occupant and generate an answer.
  • the process of recognizing the utterance content and the process of generating an answer may be configured to be performed by the server 5 via the communication device 31.
  • the dialogue device 37 may also have a function of displaying an image of a predetermined agent on the rear seat display 35.
  • An agent is, for example, a character such as a fictitious person or an anthropomorphic animal.
  • the agent may be a driver's avatar or the like preset by the driver.
  • when the dialogue device 37 outputs a predetermined message by voice, it can display on the rear seat display 35 an animation in which the agent moves as if it were speaking.
  • the operation of the dialogue device 37, including the display of the animation using the agent, is controlled by the HCU1. Further, the display destination of the animation using the agent is not limited to the rear seat display 35, and may be the window display 36.
  • the speaker 38 generates sound in the passenger compartment of the vehicle.
  • Types of voice output include voice messages that read a predetermined text, music, and alarms.
  • the term voice here also includes mere sounds.
  • the vehicle is provided with a speaker 38A for a driver and a speaker 38B for a child as the speaker 38.
  • the speaker 38A for the driver is provided, for example, on the instrument panel, the headrest of the driver's seat, or the like.
  • the child speaker 38B is built in the child seat.
  • the child speaker 38B may be provided on the side wall portion or the ceiling portion near the child seat.
  • the HCU 1 is a computer that comprehensively controls the presentation of information to a user using a display or the like.
  • the HCU 1 is configured as a computer including a processor 11, a RAM (Random Access Memory) 12, a storage 13, a communication interface 14 (I / O in the figure), a bus line connecting these configurations, and the like.
  • the processor 11 is, for example, an arithmetic core such as a CPU (Central Processing Unit).
  • the processor 11 executes various processes by accessing the RAM 12.
  • the RAM 12 is a volatile memory.
  • the communication interface 14 is a circuit for the HCU 1 to communicate with another device.
  • the communication interface 14 may be realized by using an analog circuit element, an IC, or the like.
  • the storage 13 has a configuration including a non-volatile storage medium such as a flash memory.
  • the storage 13 stores an attention target sharing program, which is a program for making the computer function as the HCU 1.
  • Executing the attention target sharing program by the processor 11 corresponds to executing the attention target sharing method which is a method corresponding to the attention target sharing program.
  • data indicating the installation position of the external camera 28 in the vehicle, the position of the child's seat, the installation position of the child's camera 25, and the like are registered in the storage 13.
  • the HCU 1 provides each functional unit shown in FIG. 3 by the processor 11 executing the attention target sharing program stored in the storage 13. That is, the HCU 1 includes a child information acquisition unit F1, an outside vehicle information acquisition unit F2, a driver information acquisition unit F3, and a vehicle information acquisition unit F4. Further, the HCU 1 includes an interest reaction detection unit F5, an object identification unit F6, an explanatory information acquisition unit F7, a driving load estimation unit F8, a notification control unit F9, a recording processing unit FA, and an interest target management unit FB as functional units. ..
  • the notification control unit F9 includes a timing arbitration unit F91 and a target control unit F92 as finer functional units. The notification control unit F9 corresponds to the notification processing unit.
  • the child information acquisition unit F1 acquires various information regarding the condition of the child sitting on the child seat from the child camera 25 and the child sensor 26. For example, the child information acquisition unit F1 analyzes an image provided by the child camera 25 to estimate at least part of the direction of the child's face, the direction of the line of sight, the degree of eyelid opening, the degree of pupil opening, the degree of mouth opening, the posture, the body movement, and the like. Body movements also include the behavior of pointing a finger or hand out of the window. The child information acquisition unit F1 may be configured to estimate the facial expression, emotion, etc. of the child based on the distribution of facial feature points included in the image from the child camera 25.
  • the rhythm of breathing may be estimated from the pattern of changes in body movements, mainly the position of the chest or abdomen.
  • the child camera 25 may have a function / processing module for estimating the state of the child by analyzing its captured images. Further, the above function may be provided in an external server. In that case, the communication device 31 transmits the image data of the child to the server 5, the received image is analyzed on the server 5, and the result is returned to the vehicle. In this way, the various functions for realizing the configuration of the present disclosure may be distributed between the edge and the cloud. The arrangement of the various functions can be changed as appropriate.
  • the child information acquisition unit F1 acquires detection results such as pulse wave information from the child sensor 26.
  • the child sensor 26 can detect pulse rate, blood pressure, electrocardiogram, heart rate, sweating amount, body temperature, heat dissipation amount, breathing rhythm, breathing depth, exhalation component, body composition, posture, body movement, etc.
  • the child information acquisition unit F1 can also acquire such information.
  • the child status information acquired by the child information acquisition unit F1 is stored in the RAM 12 with a time stamp indicating the acquisition time.
  • the information on the state amount of the child acquired by the child information acquisition unit F1 is classified for each information type and is stored in, for example, the RAM 12 for a certain period of time. Data with different acquisition times can be sorted and saved in order of acquisition time so that the data with the latest acquisition time comes first.
  • the data retention period can be, for example, 2 minutes or 5 minutes.
  • the child information acquisition unit F1 calculates the normal values of the pulse rate, the opening of the eyes, the heart rate, the body temperature, the skin surface potential, etc., based on the detection results of the latest predetermined time.
  • the normal value can be, for example, the average value or the median value of the observed values within the latest predetermined time.
  • the child information acquisition unit F1 calculates the normal value of each state quantity by averaging the observed values within the last 1 minute for each of the pulse rate, the opening of the eyes, and the direction of the face. These normal values can be used as criteria for detecting that a child is interested in something and becomes excited.
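As a concrete illustration of the baseline computation described above, the following sketch averages observations within the last minute to obtain a normal value; the function name, the 60-second window constant, and the sample values are illustrative assumptions, not part of the disclosure.

```python
import statistics
import time

WINDOW_SEC = 60.0  # averaging window, per the "last 1 minute" in the text

def update_normal_value(samples, new_value, now=None):
    """Append a time-stamped observation and return the normal (baseline)
    value as the mean of observations within the last WINDOW_SEC seconds.

    samples: list of (timestamp, value) tuples, oldest first.
    """
    now = time.time() if now is None else now
    samples.append((now, new_value))
    # Drop observations older than the averaging window.
    while samples and now - samples[0][0] > WINDOW_SEC:
        samples.pop(0)
    return statistics.mean(v for _, v in samples)

# Example: pulse-rate observations arriving once per second.
pulse_log = []
for t, bpm in [(0.0, 90), (1.0, 92), (2.0, 94)]:
    normal_pulse = update_normal_value(pulse_log, bpm, now=t)
print(normal_pulse)
```

The same routine can be reused per state quantity (eye opening, face direction, and so on), each with its own sample list. The median can be substituted for the mean where outlier robustness is preferred.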
  • the child information acquisition unit F1 detects that the child has spoken based on the input signal from the child microphone 27. Further, more preferably, when the child utters some words, the child information acquisition unit F1 acquires information indicating the utterance content and the loudness of the voice. The content of the utterance can be specified by voice recognition processing. Various information acquired or detected by the child information acquisition unit F1 is used by the interest reaction detection unit F5 and the object identification unit F6.
  • the out-of-vehicle information acquisition unit F2 is configured to acquire out-of-vehicle information from an out-of-vehicle camera 28, a locator 29, or the like.
  • the vehicle exterior information acquisition unit F2 sequentially acquires image data captured by the vehicle exterior camera 28 and temporarily stores the image data in a RAM 12 or the like.
  • the image data storage area can be configured as a ring buffer. That is, when the conserved quantity reaches a certain upper limit, the oldest data is sequentially deleted and new data is saved.
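The ring-buffer behaviour described above can be sketched as follows; the class and method names are illustrative assumptions, with `collections.deque(maxlen=...)` supplying the drop-oldest-on-overflow semantics.

```python
from collections import deque

class FrameRingBuffer:
    """Ring buffer for exterior-camera frames: once the upper limit is
    reached, saving a new frame silently discards the oldest one."""

    def __init__(self, capacity):
        self._frames = deque(maxlen=capacity)

    def save(self, timestamp, position, image):
        # Each frame is stored with shooting time and vehicle position,
        # matching the association described for the RAM 12 above.
        self._frames.append({"t": timestamp, "pos": position, "img": image})

    def frames_since(self, t0):
        """Frames shot at or after t0 (e.g. within a retroactive time)."""
        return [f for f in self._frames if f["t"] >= t0]

buf = FrameRingBuffer(capacity=3)
for t in range(5):  # five frames pushed into a 3-slot buffer
    buf.save(t, (35.0, 137.0), image=None)
print([f["t"] for f in buf._frames])
```

Only the three most recent frames survive; the two oldest were overwritten, which is exactly the behaviour of the image-data storage area described above.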
  • the vehicle outside information acquisition unit F2 stores the image data input from the vehicle outside camera 28 in the RAM 12 in association with the vehicle position information and the time information at the time of shooting.
  • the vehicle exterior information acquisition unit F2 may be configured to specify the position and type of the subject included in the image by analyzing the image input from the vehicle exterior camera 28. For the image analysis, an identifier using, for example, a CNN (Convolutional Neural Network) or a DNN (Deep Neural Network) can be used.
  • the driver information acquisition unit F3 acquires various information regarding the driver status from the DSM 21 and the driver sensor 22.
  • the driver information acquisition unit F3 corresponds to the driver's seat occupant information acquisition unit.
  • the driver information acquisition unit F3 acquires at least a part of the driver's face direction, line-of-sight direction, eyelid opening degree, pupil opening degree, mouth opening degree, posture, and the like from DSM21.
  • the driver information acquisition unit F3 can also acquire such information.
  • the presence or absence of driver utterance may be detected based on the time-series change pattern of the distribution of the feature points around the mouth.
  • the driver's respiratory rhythm may be estimated from body movements, primarily patterns of changes in chest or abdominal position.
  • the rhythm of breathing can serve as material for estimating the degree of tension and the driving load.
  • the function included in the DSM 21, that is, the function / processing module for estimating the state of the driver by analyzing the image of the driver may be provided in the driver information acquisition unit F3 or the server 5.
  • the vehicle information acquisition unit F4 also acquires the state (on / off) of the driving power source, vehicle speed, acceleration, steering angle, accelerator depression amount, brake depression amount, and the like.
  • the traveling power source is a power source for the vehicle to travel, and refers to an ignition power source when the vehicle is a gasoline-powered vehicle.
  • when the vehicle is an electric vehicle or a hybrid vehicle, the driving power source refers to the system main relay.
  • the vehicle information acquisition unit F4 acquires information indicating the current position of the vehicle from the locator 29.
  • the position information of the own vehicle can be expressed by latitude, longitude, altitude, and the like.
  • the interest reaction detection unit F5 determines that there is an interest reaction when the child sitting on the child's seat keeps looking at the same thing for a predetermined time or more. Keeping looking at the same thing can include not only the line-of-sight direction remaining constant for a predetermined time, but also the line of sight moving toward the direction opposite to the traveling direction of the vehicle (for example, rearward) so as to follow the object of interest with the eyes.
  • the interest reaction detection unit F5 may determine that there is an interest reaction when the child has a facial expression expressing a specific emotion.
  • the specific emotion here includes wonder and admiration, and more specifically, surprise, delight, smiling, and the like.
  • the interest reaction detection unit F5 may determine that there is an interest reaction when it detects that the child has spoken with his / her face turned to the outside of the vehicle interior. At that time, the height of interest may be evaluated according to the loudness of the child's voice. Further, it may be determined that the child shows a high interest reaction based on the fact that the same word is repeatedly spoken a predetermined number of times or more with the child facing the outside of the vehicle interior.
  • an interest reaction may be detected based on the child uttering an onomatopoeic or mimetic word (so-called onomatopoeia) indicating a specific animal or state.
  • it may be determined that the child has an interest reaction based on the action of pointing a finger or hand out of the window, that is, the action of pointing at something.
  • the interest reaction detection unit F5 may detect the child's interest reaction based on biological information such as the child's pulse, heartbeat, and eye opening. For example, it may be determined that there is an interest reaction based on the fact that the pulse becomes faster than the normal value by a predetermined threshold value or more. Further, it may be determined that there is an interest reaction based on the fact that the opening degree of the eyes is increased by a predetermined value or more from the normal value.
  • as material for detecting an interest reaction, changes in body temperature, respiratory conditions, body composition, and the like can also be used. The state of respiration includes the speed of respiration, the exhaled breath components, and the like. In this way, the interest reaction detection unit F5 may detect that the child is in an unusual or excited state and is interested in something.
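The OR-combination of cues described above can be sketched as follows; all threshold constants, dictionary keys, and the function name are illustrative assumptions rather than values taken from the disclosure.

```python
PULSE_DELTA = 10.0      # bpm above normal regarded as excitement (assumed)
EYE_OPEN_DELTA = 0.2    # eye-opening increase regarded as surprise (assumed)
GAZE_HOLD_SEC = 2.0     # dwell time on the same thing (assumed)

def has_interest_reaction(state, normal):
    """state / normal: dicts with 'pulse', 'eye_opening', 'gaze_dwell_sec'.
    Any one cue exceeding its threshold counts as an interest reaction,
    mirroring the OR-combination of conditions in the text."""
    if state["gaze_dwell_sec"] >= GAZE_HOLD_SEC:
        return True  # kept looking at the same thing
    if state["pulse"] - normal["pulse"] >= PULSE_DELTA:
        return True  # pulse faster than the normal value
    if state["eye_opening"] - normal["eye_opening"] >= EYE_OPEN_DELTA:
        return True  # eyes opened wider than usual
    return False

normal = {"pulse": 95.0, "eye_opening": 0.6}
calm = {"pulse": 97.0, "eye_opening": 0.62, "gaze_dwell_sec": 0.5}
excited = {"pulse": 112.0, "eye_opening": 0.85, "gaze_dwell_sec": 0.5}
print(has_interest_reaction(calm, normal), has_interest_reaction(excited, normal))
```

Further cues from the text (utterance loudness, repeated words, pointing gestures) would slot in as additional `if` branches of the same shape.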
  • when the interest reaction detection unit F5 detects the child's interest reaction, the object identification unit F6 identifies the object of interest, that is, the object in which the child shows interest, based on the child's line-of-sight direction within a predetermined retroactive time from that point.
  • the concept of the object of interest here includes facilities, landmarks, stationary objects such as signs, moving objects such as pedestrians, animals, situations (events), landscapes, and the like. For example, unfamiliar buildings, people in unusual clothing, animals such as dogs and cats, emergency vehicles such as fire trucks, commercial signs featuring characters popular with children, and corporate signs can be objects of interest.
  • the object identification unit F6 corresponds to the object detection unit of interest.
  • the object identification unit F6 identifies a subject existing in the line-of-sight direction of the child as an object of interest in the image captured by the external camera 28, for example, at the time when the interest reaction is detected or within the retroactive time.
  • the retroactive time here can be 200 milliseconds, 500 milliseconds, 1 second, or the like.
  • the direction of the child's line of sight with respect to the outside of the vehicle interior can be calculated based on the position of the child's eyes in the vehicle interior and the line-of-sight direction starting from that position, both of which are specified based on the image of the child camera 25.
  • the position of the child's eyes in the vehicle interior can be calculated based on the installation position and posture of the child camera 25 and the position of the child's eyes in the image. Further, by combining information such as the azimuth angle at which the vehicle body is facing with the direction of the child's line of sight with respect to the outside of the vehicle interior, the absolute direction in which the child is looking can be calculated.
  • the absolute direction here is a direction corresponding to a predetermined azimuth angle such as north, south, east, and west.
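The conversion to an absolute direction can be sketched as follows, assuming the gaze yaw relative to the vehicle's forward axis and the vehicle's azimuth have already been extracted from the child camera 25 and the locator 29; the function names and the flat-ground projection are illustrative assumptions.

```python
import math

def absolute_gaze_azimuth(vehicle_heading_deg, gaze_yaw_deg):
    """Combine the vehicle body's azimuth with the child's gaze yaw
    (relative to the vehicle's forward axis, clockwise positive) into an
    absolute compass bearing: 0 = north, 90 = east."""
    return (vehicle_heading_deg + gaze_yaw_deg) % 360.0

def gaze_target_position(eye_pos, azimuth_deg, distance_m):
    """Project the gaze ray from the eye position (x east, y north, in
    metres) out to an assumed distance, giving a rough target position."""
    rad = math.radians(azimuth_deg)
    return (eye_pos[0] + distance_m * math.sin(rad),
            eye_pos[1] + distance_m * math.cos(rad))

# Vehicle heading east (90 deg), child looking 45 deg right of travel.
az = absolute_gaze_azimuth(90.0, 45.0)
print(az)
```

Intersecting this bearing with the exterior camera's field of view is what lets a subject in the captured image be matched to the child's line of sight.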
  • the object identification unit F6 acquires the object of interest as an image.
  • the object specifying unit F6 may specify the type and name of the object of interest by analyzing the image of the object of interest, that is, the image of the subject identified as the object of interest.
  • the attention target image which is an image of the attention target acquired by the object identification unit F6, is output to the explanatory information acquisition unit F7.
  • the object specifying unit F6 also specifies the direction in which the object of interest exists when viewed from the vehicle or the driver's seat in the process of specifying the object of interest. Information about the direction in which the object of interest exists is also output to the explanatory information acquisition unit F7 and the notification control unit F9.
  • the explanatory information acquisition unit F7 acquires explanatory information about the object of interest specified by the object specifying unit F6 from the dictionary database arranged inside or outside the vehicle.
  • the dictionary database is a database in which explanatory information about various objects is registered.
  • the dictionary database may be installed in the vehicle or may be located in the cloud.
  • the dictionary database may be a Web server or the like.
  • the explanatory information acquisition unit F7 corresponds to the target information acquisition unit.
  • the explanatory information acquisition unit F7 acquires explanatory information about the object of interest from the server 5 by transmitting an image of the object of interest to the server 5.
  • the explanatory information here includes, in addition to the target name, at least one of a target category (major classification), a role, a background, similar things, and the like.
  • when the object of interest is a building, its proper name / general name, role, height, year of construction, etc. are included in the explanatory information.
  • when the object of interest is an animal, the explanatory information includes a large classification such as dog, cat, or bird, as well as a more detailed name, size, place of origin, characteristics (character), and the like.
  • when the object of interest is an event, the explanatory information includes its name, historical background, and the time at which it takes place.
  • when the object of interest is a natural phenomenon such as a rainbow, its name, generation principle, etc. can be the explanatory information.
  • when the object of interest is a vehicle such as an ambulance or a construction vehicle, its name, services, features, etc. are included in the explanatory information.
  • when the object of interest is a company or a shop, the explanatory information may include the type of service provided, the year of establishment, representative products, etc., in addition to the name of the company / shop.
  • the explanatory information can be mainly information published in pictorial books, dictionaries, guidebooks, and the like.
  • the server 5 identifies the name of the object of interest by analyzing the image of the object of interest based on the inquiry from the explanatory information acquisition unit F7, and also acquires information other than the name related to the object of interest.
  • Information other than the name may be acquired by searching the Internet using the name as a search key, or by referring to the dictionary database owned by the server 5 itself. Then, the server 5 returns the information collected using the name of the object of interest as the search key to the vehicle as explanatory information.
  • the explanatory information acquisition unit F7 can acquire information about various objects of interest even if the vehicle does not have a database containing a huge amount of data.
  • the explanatory information acquisition unit F7 may specify the type and name of the object of interest by analyzing the image of the object of interest.
  • the name of the object of interest may be used as a search key, and supplementary information such as the service and background of the object of interest may be acquired from a database arranged inside or outside the vehicle.
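A minimal sketch of looking up supplementary information with the identified name as the search key; the in-memory dictionary and its entries stand in for the in-vehicle or cloud dictionary database and are purely illustrative.

```python
# Hypothetical dictionary database keyed by the identified name.
DICTIONARY_DB = {
    "fire truck": {"category": "emergency vehicle",
                   "role": "firefighting and rescue"},
    "rainbow": {"category": "natural phenomenon",
                "role": "light refracted by water droplets"},
}

def explanatory_info(name):
    """Look up supplementary information using the identified name as the
    search key; fall back to name-only info when nothing is registered,
    as a stand-in for the server-side Internet search described above."""
    entry = DICTIONARY_DB.get(name.lower())
    info = {"name": name}
    if entry:
        info.update(entry)
    return info

print(explanatory_info("Fire truck")["category"])
```

In the configuration above, the name itself would first be obtained by image analysis (locally or on the server 5), and the resulting record would be the data set referred to by the notification control unit F9 and the recording processing unit FA.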
  • the information acquired by the explanatory information acquisition unit F7 is temporarily stored in the RAM 12 or the like in association with the object image.
  • the data set is referred to by the notification control unit F9 and the recording processing unit FA.
  • the driving load estimation unit F8 determines whether or not the driver's driving load is high based on at least one of the driver's state information, the driving environment of the vehicle, and whether or not automatic driving is in progress.
  • as the driver status information, the information acquired / detected by the driver information acquisition unit F3 is used.
  • as the traveling environment information, the information around the vehicle acquired by the vehicle information acquisition unit F4 or the vehicle exterior information acquisition unit F2 can be used. Whether or not automatic driving is in progress can be input from the automatic driving device via the in-vehicle network Nw.
  • a signal indicating the level of automatic driving, that is, any of levels 0 to 5, may be input from the automatic driving device.
  • the driving load estimation unit F8 determines that the driving load is not high, for example, when the automatic operation of level 3 or higher is in progress and the remaining time until the handover is equal to or longer than a predetermined threshold value.
  • the handover corresponds to transferring the authority of the driving operation from the system to the driver's seat occupant. On the other hand, even during automatic operation, it can be determined that the driver's driving load is high when the system requests the driver to perform a handover.
  • the driving load estimation unit F8 may determine whether or not the driving load is high based on biological information such as the driver's pulse, breathing interval and depth, skin electrical activity, facial muscle action potential, and peripheral blood flow. For example, when the driver is in a tense state, such as when the pulse or respiration is faster than the normal value by a predetermined value or more, it can be determined that the driving load is high.
  • the tension state of the driver can be estimated from various biological information such as the strength of gripping the handle, the posture, the electrical activity of the skin, the interval between blinks, and the peripheral blood flow. Various methods can be used to determine the tension state and driving load.
  • the driving load estimation unit F8 may determine that the driving load is high based on the fact that the vehicle is traveling near a branch / confluence point or an intersection of an expressway or that the lane is being changed.
  • the vicinity of the intersection can be, for example, a section where the remaining distance to the intersection is within 50 m.
  • the vicinity of the intersection also includes the inside of the intersection.
  • whether or not the vehicle is traveling near a branch / merge point or an intersection of an expressway, whether or not a lane change is planned, and the like correspond to indicators of the safety level of the driving environment.
  • the above configuration corresponds to a configuration in which the driver's driving load is evaluated according to the safety level of the driving environment, in other words, the magnitude of the potential risk determined according to the driving environment.
  • Various information / conditions can be adopted as the judgment material / judgment condition for judging that the operating load is high.
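The judgment materials above can be combined into a single decision as sketched below; the ordering follows the text, but the constants, dictionary keys, and the specific rule set are illustrative assumptions rather than the claimed algorithm.

```python
AUTO_LEVEL_EASY = 3         # level 3+ automation relieves the driver (per text)
HANDOVER_MARGIN_SEC = 60.0  # assumed threshold for remaining handover time
PULSE_DELTA = 15.0          # assumed tension threshold above normal pulse

def driving_load_is_high(ctx):
    """ctx: dict with 'auto_level', 'handover_remaining_sec',
    'handover_requested', 'pulse', 'normal_pulse', 'near_intersection',
    and 'changing_lanes'.  Rules follow the order given in the text."""
    if ctx["handover_requested"]:
        return True   # system is asking the driver to take over
    if (ctx["auto_level"] >= AUTO_LEVEL_EASY
            and ctx["handover_remaining_sec"] >= HANDOVER_MARGIN_SEC):
        return False  # automation active with ample handover margin
    if ctx["pulse"] - ctx["normal_pulse"] >= PULSE_DELTA:
        return True   # tense state inferred from biometrics
    if ctx["near_intersection"] or ctx["changing_lanes"]:
        return True   # low-safety driving environment
    return False

ctx = dict(auto_level=0, handover_remaining_sec=0.0, handover_requested=False,
           pulse=80.0, normal_pulse=78.0, near_intersection=True,
           changing_lanes=False)
print(driving_load_is_high(ctx))  # manual driving near an intersection
```

Other materials from the text (grip strength, skin electrical activity, blink interval) would extend the biometric branch in the same pattern.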
  • the notification control unit F9 is configured to control the notification related to the information about the object of interest acquired by the explanatory information acquisition unit F7. For example, the notification control unit F9 comprehensively controls, based on the determination result of the driving load estimation unit F8, the person / seat to be notified, the display timing of the image related to the object of interest, the timing of voice output, the output destination when displaying the image, and the like.
  • the functional unit that arbitrates the timing of displaying the image and the audio output related to the object of interest corresponds to the timing arbitration unit F91.
  • adjusting the display destination and audio output destination of the image related to the object of interest corresponds to selecting the occupant to be notified of the information related to the object of interest.
  • excluding the displays for the driver, such as the meter display 32 and the HUD 34, from the display destinations is equivalent to excluding the driver from the notification targets of the image information.
  • the functional unit that adjusts the display destination and audio output destination of the image related to the object of interest corresponds to the target control unit F92.
  • the notification control unit F9 may be configured to display an icon image indicating that the child has shown an interest reaction on a display for the driver, such as the HUD 34, when the interest reaction of the child is detected by the interest reaction detection unit F5. The details of the notification control unit F9 will be described later.
  • the recording processing unit FA stores the data about the object of interest acquired by the explanatory information acquisition unit F7 in a predetermined storage medium in association with the position information at the time when the interest reaction is shown.
  • the device to store the data may be an external device such as the driver's smartphone 4 or the server 5, or an internal device such as the storage 13.
  • the recorded data which is the data to be recorded, preferably includes the image data of the object of interest.
  • the image data included in the recorded data may be a still image or a video. Further, the recorded data may include text data of explanatory information determined by analyzing the image.
  • in the recorded data, not only the position information of the vehicle at the time when the interest reaction is detected, but also the direction in which the object of interest exists, the position information of the object of interest, and the detection time information may be stored in association with one another.
  • the recorded data may include image data of a child when looking at an object of interest.
  • the recorded data may include voice data in the vehicle within a predetermined time determined based on the time when the interest reaction is detected.
  • when the storage destination of the above data is the server 5, the data stored in the server 5 can be referred to from a smartphone or the like owned by the driver, grandparents, or the like. This configuration allows children to share things that interest them with family members living apart.
  • the HCU 1 or the server 5 may notify the device registered in advance that the recorded data has been updated.
  • the above sharing process may be realized by cooperation with a social networking service (SNS: Social Networking Service).
  • the recording processing unit FA saves the data about the object of interest in association with various information, so that the data of the object to be recorded can be referred to later.
  • parents are more likely to talk with their child about what the child has seen, for example during a break in driving or after returning home, by referring to the recorded data about the object of interest. There is also the advantage that parents can feel the growth of their child by later seeing the things that the child has paid attention to.
  • the interest target management unit FB is configured to specify the interest target category, which is the category of things that the child is interested in, based on the information such as the type of the attention object detected in the past.
  • Vehicles, animals, buildings, signboards, plants, characters, fashion, etc. are assumed as the categories of interest. Characters may be classified in more detail by the name of the animation or the like.
  • Vehicles may also be subdivided into four-wheeled vehicles, two-wheeled vehicles, trains, and the like.
  • Fashion may also be subdivided into clothes and hairstyles. Animals may be subdivided into dogs, cats, birds and the like.
  • when the interest category, which is the category of things in which the child is interested, is identifiable, the HCU 1 may adjust the threshold value for detecting the child's interest reaction in accordance with whether things belonging to the interest category are captured by the outside camera 28. For example, when things belonging to the interest category are captured by the outside camera 28, the threshold value for detecting the interest reaction of the child may be lowered.
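The threshold adjustment described above can be sketched as follows; the score scale, the constants, and the category labels are illustrative assumptions.

```python
BASE_THRESHOLD = 0.8        # assumed base score needed to report a reaction
CATEGORY_DISCOUNT = 0.25    # assumed reduction when a favourite is in view

def detection_threshold(interest_categories, visible_categories):
    """Lower the interest-reaction threshold while something belonging to
    one of the child's known interest categories is captured by the
    exterior camera; otherwise use the base threshold."""
    if interest_categories & visible_categories:
        return BASE_THRESHOLD - CATEGORY_DISCOUNT
    return BASE_THRESHOLD

# Subdivided categories, as in the text (vehicles into trains, etc.).
child_likes = {"vehicles:train", "animals:dog"}
print(detection_threshold(child_likes, {"buildings"}))
print(detection_threshold(child_likes, {"animals:dog", "plants"}))
```

The visible categories would come from the subject types identified by the vehicle exterior information acquisition unit F2, and the interest categories from the interest target management unit FB's history of past objects of interest.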
  • the communication support process executed by the HCU 1 will be described with reference to FIG.
  • the communication support process includes steps S101 to S115.
  • the number of steps constituting the communication support process and the process order can be changed as appropriate.
  • the communication support process shown in FIG. 4 is started when a predetermined start event occurs.
  • the start event for example, the ignition of the vehicle is turned on, the running of the vehicle is started, the start instruction by the driver is input, and the like can be adopted.
  • the communication support process may be started with the detection of the interest reaction of the child sitting on the child's seat as a trigger by the interest reaction detection unit F5. In that case, steps S101 to S104 can be sequentially executed as a process independent of the communication support process.
  • In step S101, the HCU 1 acquires the information necessary for processing from the various devices connected to it, and moves to step S102.
  • the child information acquisition unit F1 acquires the biological information of the child from the child sensor 26 or the like.
  • the out-of-vehicle information acquisition unit F2 acquires information on the environment outside the vehicle from the out-of-vehicle camera 28 and the like.
  • the driver information acquisition unit F3 acquires the biological information of the driver from the DSM21, the driver sensor 22, and the like.
  • the vehicle information acquisition unit F4 acquires information on the current position and traveling speed of the vehicle from the in-vehicle network Nw, the locator 29, and the like.
  • The various information acquired in step S101 is stored in a predetermined memory such as the RAM 12, together with information indicating the acquisition time.
  • Such step S101 can be called an information acquisition step and an information acquisition process.
  • the information acquisition step can be sequentially executed in a predetermined cycle such as 100 milliseconds, 200 milliseconds, and 500 milliseconds even after step S102.
  • In step S102, the driving load estimation unit F8 determines whether or not the driver's driving load is high, for example by the above-mentioned algorithm, based on the driver's biological information acquired by the driver information acquisition unit F3.
  • the determination result relating to the driver's operating load may be expressed by a level value of a plurality of stages. In that case, the state in which the determination value is equal to or higher than the predetermined threshold value corresponds to the state in which the operating load is high. Further, whether or not the driver's operating load is high may be managed by a flag. For example, when the operating load is high, the operating load flag may be set to 1 (on), while when the operating load is not high, the operating load flag may be set to 0 (off).
  • When step S102 is completed, the process proceeds to step S103.
  • Such step S102 can be called a driver state estimation step.
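The multi-stage level value and the on/off flag described above might be related as in the following sketch; the level scale and the threshold of 3 are illustrative assumptions.

```python
# Hypothetical mapping from a multi-stage driving-load level to the 1/0 flag.
# The level range (e.g. 0-5) and HIGH_LOAD_THRESHOLD are assumed for illustration.

HIGH_LOAD_THRESHOLD = 3  # levels at or above this count as "driving load is high"

def driving_load_flag(load_level):
    """Return 1 (on) when the driving load is high, otherwise 0 (off)."""
    return 1 if load_level >= HIGH_LOAD_THRESHOLD else 0
```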
  • In step S103, the interest reaction detection unit F5 determines whether or not the child has shown an interest reaction, based on the child's biological information within the latest predetermined time acquired by the child information acquisition unit F1. If there is an interest reaction, step S104 is affirmatively determined and the process proceeds to step S105. On the other hand, if there is no interest reaction, the process proceeds to step S115, where it is determined whether or not a predetermined end condition is satisfied.
  • the processing of steps S103 to S104 can be referred to as an interest reaction detection step.
  • In step S105, the object identification unit F6 estimates the direction in which the object of interest exists when viewed from the vehicle, based on the direction of the child's line of sight and the position of the child's eyes. Then, from the subjects included in the image data of the external camera 28, the thing existing in the estimated direction of the object of interest is extracted as the object of interest, and the process proceeds to step S106.
  • step S105 can be referred to as an object specifying step of interest.
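Step S105 can be sketched as a bearing comparison: estimate the gaze bearing from the child's eye position and line of sight, then choose the camera-detected subject closest to that bearing. The flat two-dimensional geometry and the 10-degree tolerance are simplifying assumptions, not the disclosed method.

```python
import math

# Hypothetical sketch of step S105: pick the camera-detected subject whose
# bearing from the child's eyes best matches the gaze direction.

GAZE_TOLERANCE_DEG = 10.0  # assumed angular tolerance

def bearing_deg(origin, point):
    """Bearing of `point` as seen from `origin`, in degrees."""
    return math.degrees(math.atan2(point[1] - origin[1], point[0] - origin[0]))

def extract_object_of_interest(eye_pos, gaze_deg, subjects):
    """subjects: list of (label, (x, y)) detected by the outside camera.
    Returns the label closest to the gaze direction within tolerance, else None."""
    best, best_err = None, GAZE_TOLERANCE_DEG
    for label, pos in subjects:
        # wrap the angular difference into [-180, 180] before taking |.|
        err = abs((bearing_deg(eye_pos, pos) - gaze_deg + 180) % 360 - 180)
        if err <= best_err:
            best, best_err = label, err
    return best
```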
  • In step S106, the object identification unit F6 extracts the image portion in which the object of interest is captured from the image captured by the outside camera 28 as the image of the object of interest, and moves to step S107.
  • step S106 can be called an object image acquisition step of interest.
  • In step S107, the explanatory information acquisition unit F7 cooperates with the communication device 31 to access the server 5, the Internet, or the like, acquires explanatory information about the object of interest, and moves to step S108.
  • the explanatory information is data in which the features, services, names, etc. of the object of interest are verbalized by text or voice data.
  • step S107 can be called an explanatory information acquisition step.
  • In step S108, it is determined whether or not the driving load has been determined to be high. The determination result of step S102 can be reused as the driving-load information used in step S108. If the driving load is high, step S108 is affirmatively determined and the process proceeds to step S109. On the other hand, if the driving load is not high, step S108 is negatively determined and the process proceeds to step S111.
  • As an operation mode of the HCU 1, a proxy response mode in which the system automatically responds instead of the driver may be provided. When the HCU 1 has been set to the proxy response mode by the driver's operation, the process may move to step S109 regardless of the driving-load determination value.
  • In step S109, the notification control unit F9 excludes the driver from the notification targets of the information related to the object of interest.
  • the notification target of the information related to the object of interest is set only for children. If an occupant is also in the passenger seat, the notification target of the information related to the object of interest may be set to the child and the passenger seat occupant. Whether or not a person is in the passenger seat can be specified from the detection signal of the seating sensor provided in the passenger seat.
  • the information related to the object of interest here is an image or explanatory information of the object of interest.
  • the explanatory information may be output as an image such as text or an icon, or may be output as a voice message.
  • the image of the object of interest includes not only an image of the object of interest itself, but also a text image of explanatory information.
  • Information about the object of interest can be realized using at least one of image display and audio output.
  • the notification device here refers to a device that outputs information related to an object of interest as an image or voice. Further, including the driver in the notification target corresponds to adopting at least one of the meter display 32, the center display 33, the HUD 34, and the speaker 38A as the notification device. Including the passenger seat occupant in the notification target corresponds to adopting at least one of the center display 33 and the speaker 38A as the notification device.
  • When the attention target sharing system Sys is provided with a passenger seat display, that is, a display arranged in front of the passenger seat, the passenger seat display can also be adopted as a notification device for the passenger seat occupant.
  • the passenger display may be part of a display that is continuously provided from the right edge to the left edge of the instrument panel.
  • the notification target for image display and the notification target for voice output may be set separately.
  • the child may be notified by both image and voice, while the driver may be controlled to notify only by voice.
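The notification-target selection of steps S108 and S109 might look like the following sketch; the target labels and the proxy-mode handling are assumptions drawn from the description above.

```python
# Hypothetical selection of notification targets. The driver is excluded when
# the driving load is high or the proxy response mode is set; the passenger
# seat occupant is included only when the seating sensor reports someone there.

def notification_targets(load_is_high, proxy_mode, passenger_seated):
    targets = {"child"}
    if passenger_seated:
        targets.add("passenger")
    if not (load_is_high or proxy_mode):
        targets.add("driver")
    return targets
```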
  • In step S110, the notification control unit F9 displays the image of the object of interest and its explanatory information on at least one of the rear seat display 35 and the window display 36, and outputs the voice corresponding to the explanatory information from the speaker 38B.
  • The system response in step S110 corresponds to a proxy response process, that is, a process in which the system responds on behalf of the driver.
  • the proxy response process can also be understood as a process of controlling the operation of the dialogue device 37 in one aspect.
  • The system responds promptly on behalf of the driver, so that explanatory information about the object of interest can be provided before the child loses interest in it.
  • the need to immediately respond to a child's question or the like is reduced, so that the driver can concentrate on the driving operation.
  • step S110 the dialogue device 37 may be activated to build a system state capable of responding to additional questions of the child. With such a configuration, the system will continue to be able to respond to children's questions about the presented explanatory information.
  • the interaction between the dialogue device 37 and the child can be performed using agent images.
  • In step S111, information about the object of interest is displayed on the display for the driver's seat and the display for children, and the process proceeds to step S112.
  • a predetermined sound effect may be output from each speaker 38 so that the driver or the child can easily notice that the image is displayed.
  • In step S112, it is determined whether or not the driver has spoken within a predetermined response waiting time from the image display in step S111.
  • the step S112 corresponds to a process of determining whether or not the driver has made any response to the child's interest reaction.
  • The response waiting time can be, for example, 4 seconds or 6 seconds. If the driver's utterance is detected within the response waiting time, it is considered that the driver has responded to the child's interest reaction, the voice output is omitted, and the process proceeds to step S114. On the other hand, if no driver utterance is detected within the response waiting time, the process proceeds to step S113.
  • In step S113, explanatory information about the object of interest is output by voice, and the process proceeds to step S114.
  • Steps S110, S111, and S113 can be called a notification processing step because they present at least one of the driver and the child with verbalized information about the object of interest.
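Steps S112 and S113 amount to a timeout on the driver's response. A sketch under the assumption that driver utterances arrive as timestamps:

```python
# Hypothetical check for steps S112/S113: the system reads out the explanatory
# information only if the driver did not speak within the response waiting time.

RESPONSE_WAIT_SEC = 4.0  # example value from the description (4 or 6 seconds)

def should_voice_output(driver_utterance_times, display_time):
    """driver_utterance_times: timestamps of detected driver utterances.
    Returns True when no utterance fell inside the waiting window."""
    deadline = display_time + RESPONSE_WAIT_SEC
    return not any(display_time <= t <= deadline for t in driver_utterance_times)
```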
  • In step S114, the recording processing unit FA saves the image data of the object of interest in a predetermined recording device in association with the position information, the voice data in the vehicle interior, the time information, and the like, and moves to step S115.
  • In step S115, it is determined whether or not the predetermined end condition is satisfied.
  • The end conditions include, for example, that the vehicle's running power has been turned off, that the vehicle has arrived at the destination, and that the driver has given a termination instruction. If the end condition is satisfied (S115: YES), this process ends.
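The overall S101-S115 flow described above can be condensed into the following sketch. The `sensors` and `ui` objects stand in for the functional units F1-FA; all method names are illustrative assumptions, not the disclosed implementation.

```python
# Condensed, hypothetical sketch of the communication support process.

def communication_support_step(sensors, ui):
    info = sensors.acquire()                      # S101: child/driver/vehicle info
    load_high = sensors.driving_load_high(info)   # S102: driver state estimation
    if not sensors.interest_reaction(info):       # S103-S104: reaction detection
        return "no_reaction"
    obj = sensors.identify_object(info)           # S105-S106: object + image
    desc = sensors.fetch_description(obj)         # S107: explanatory information
    if load_high:                                 # S108: branch on driving load
        ui.exclude_driver()                       # S109: driver not notified
        ui.show_and_speak(obj, desc)              # S110: proxy response
        result = "proxy_response"
    else:
        ui.show_to_driver_and_child(obj, desc)    # S111: image display
        if not ui.driver_spoke_within_wait():     # S112: wait for the driver
            ui.speak(desc)                        # S113: voice output
        result = "driver_assisted"
    ui.record(obj, info)                          # S114: recording processing
    return result                                 # S115 handled by the caller
```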
  • The notification image Px may include a target position information image Pd indicating information about the object of interest, such as the direction in which the object identification unit F6 detected it and whether or not it can still be captured by the external camera 28.
  • the target position information image Pd may be composed of text, an icon, or the like.
  • FIG. 5 is a diagram simulating the notification image Px when the object of interest is, for example, a dog walking on a sidewalk.
  • the notification image Px when the object of interest is a dog includes a text image Pt such as a name (breed), a country of origin, and a character, in addition to the image Pi of the object of interest itself.
  • FIG. 6 is a diagram simulating the notification image Px when the object of interest is another vehicle traveling in the adjacent lane.
  • The notification image Px when the object of interest is a car includes, in addition to the image Pi of the object of interest itself, a text image Pt of the name (vehicle type), the manufacturer (so-called maker), features, and the like.
  • the characteristics of a vehicle refer to whether it is an electric vehicle or an engine vehicle, size, running performance, and the like.
  • the notification image Px when the object of interest is a building such as a tower shows the name, role, year of construction, height, etc. of the building in addition to the image Pi of the object of interest itself.
  • the role of a building includes, for example, a radio tower that transmits broadcast waves such as television and radio, a commercial complex, government offices, factories, schools, and houses. Roles can also be rephrased as provided services and attributes.
  • The notification image Px of a building serving as a landmark may display, in a touch-selectable manner, a photograph from another angle, or an exterior photograph Pj taken at a time of day different from the present, such as at night.
  • FIG. 8 is a diagram simulating the notification image Px when the object of interest is a rainbow.
  • the notification image Px when the object of interest is a natural phenomenon such as a rainbow includes, in addition to the image Pi of the object of interest itself, a text image Pt showing the reason for occurrence, features, and the like.
  • the characteristic of the rainbow includes information about the color gradation, for example, the outermost colors are red and purple.
  • the notification control unit F9 may display the video content related to the object of interest or a list thereof as the notification image Px.
  • When the object of interest is a character of a certain animation, content such as a moving image of that animation may be displayed.
  • the HCU 1 presents to the driver not only the image of the object of interest that the child is interested in, but also explanatory information in which the name of the object of interest is verbalized.
  • The driver can easily recognize what the child has focused on. As a result, the driver is more likely to respond closely to the child's interest reaction.
  • the system when it is determined that the driver's driving load is high, the system answers the child's question or the like without waiting for the driver's response to the child's interest reaction. According to this configuration, the driver tends to concentrate on driving. Also, the child can get the information he wants to know from the system. In addition, the system responds to what the child is interested in, reducing the risk of traveling time being boring.
  • The explanatory information is output by voice based on the elapse of a predetermined response waiting time after the image is displayed.
  • the guardian himself as a driver can easily respond to the interest reaction of the child. This can be expected to have the effect of activating communication between the guardian as a driver and the child.
  • the system responds on behalf of the driver. According to the above configuration, the child can easily enjoy the traveling time because at least one of the guardian and the system responds to his / her interest.
  • Things that attract the child's interest are recorded in association with location information and the like. According to this, what the child is interested in can be identified later, even if the driver cannot identify it immediately after the interest reaction is detected. For example, even after passing the object of interest, it becomes possible to identify the things that the child is interested in by looking at the image of the object of interest. Further, for example, it is possible to look back at the image of the object of interest after driving is completed or after returning home.
  • In cooperation with the server 5 or the like, the data of the object of interest is configured so that family members living apart can also refer to it. Therefore, it becomes easier for family members and relatives other than the driver to share the things that the child is interested in.
  • the driver is notified of the type of the object of interest and its existence direction by image or voice.
  • the driver can easily see the object that the child is paying attention to outside the vehicle interior even while driving.
  • a guardian such as a driver can easily notice the growth of the child from the change of the object of interest to the child. Therefore, according to the above configuration, the travel time can also be used as an opportunity for parents to know the growth of their children.
  • things that are detected as objects of interest are not limited to map elements such as facilities and landmarks that are registered in map data.
  • Various objects such as pedestrians, automobiles, trains, animals, and rainbows are detected as objects of interest.
  • the explanatory information about the detected object of interest is acquired from the Internet or the like and presented to at least one of the child and the driver. With such a configuration, the child can actively obtain information about various things that he / she is interested in, so that he / she can learn various things during the traveling time.
  • the child can learn by specifically linking the knowledge on the database with the real world.
  • the knowledge and memories that you actually get with your own eyes are easier to remember than the information that you get from textbooks. Therefore, according to the above configuration, it is possible to efficiently acquire knowledge by using the travel time to a cram school or school.
  • travel time can be used as an opportunity to acquire knowledge.
  • the configuration of the present disclosure has an advantage that the travel time can be easily utilized as an opportunity for knowledge acquisition and communication between parents and children.
  • the image of the object of interest and the voice data showing the conversation in the car are also saved. According to this configuration, the conversation in the car about the object of interest can be reproduced at a later date.
  • the configuration of the present disclosure by facilitating communication between the child and the guardian while driving, the risk of the child causing tantrum can be reduced, and as a result, the effect of suppressing an increase in the driving load can be expected.
  • the amount, type, and expression method of information to be notified to the child may be changed based on the child's age, knowledge level, and ability level. For example, if the child is still unable to read the characters in his / her mother tongue, the text and voice in his / her mother tongue may be output as a set. According to this structure, the effect of accelerating the acquisition of characters in the native language can be expected. In addition, if the child is around the age of being able to read the characters in his / her native language to some extent, the text in his / her native language and the translated text in another language may be output as a set. According to this structure, the effect of learning a language other than the mother tongue can be expected.
  • the amount, type, expression method, etc. of information to be notified to the child may be changed based on the arousal level, posture, boarding time, etc. of the child while riding.
  • For example, when the child's arousal level is low, the amount of information may be smaller than when the child is awake.
  • the amount of characters may be reduced to increase the number of images. According to this configuration, it is possible to reduce the risk of causing annoyance to the child due to excessive information.
  • When the child's riding posture is disordered, it is highly likely that the child is tired. Therefore, when the riding posture of the child is disordered, the amount of information to be presented may be smaller than when the riding posture is not disordered.
  • Only content whose playback time is shorter than the remaining time until arrival at the destination may be presented to the child.
  • the amount and type of information to be notified to the child via the rear seat display 35 or the like may be configured so that the driver can set it via a predetermined setting screen.
  • Information such as the age of the child may be manually input by the driver via the setting screen or the like, or may be estimated by the HCU 1 based on an image of the child camera 25 or the like.
  • The amount of information notified to the child may be changed depending on whether or not the interest reaction was detected based on the child uttering a compound phrase, that is, a phrase combining multiple words, such as "What is that?". Since a child who can utter a compound phrase can be expected to have memorized the corresponding words, more detailed explanatory information may be notified. On the other hand, when an interest reaction is detected based on an utterance other than a compound phrase, the content of the notification may be relatively simple information. In this way, the amount of information notified to the child may be increased or decreased according to the content of the utterance used as the detection trigger of the interest reaction.
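The utterance-dependent scaling described above might be sketched as follows. Treating any multi-word utterance as a compound phrase is a deliberate simplification, and the detail labels are assumed.

```python
# Hypothetical mapping from the detection trigger to the notification detail:
# compound phrases yield detailed information, other utterances simple
# information, and detections without speech are limited to image display.

def notification_detail(utterance_words):
    """utterance_words: list of words the child spoke, or None if no speech."""
    if utterance_words is None:
        return "image_only"
    if len(utterance_words) >= 2:  # compound phrase: multiple words combined
        return "detailed"
    return "simple"
```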
  • the information notified to the child may be controlled depending on whether or not the interest reaction is detected by using the spoken voice of the child. For example, information about an object of interest detected in the absence of a child's utterance may be limited to an image display. This is because there is a relatively high risk of erroneous detection when the child does not speak, and there is a risk of causing trouble to users including children if voice output is performed. According to the above configuration, when the risk of erroneous detection is relatively high, the risk of causing trouble to the user can be reduced by omitting the audio output.
  • The HCU 1 evaluates the degree of attention based on the length of time spent gazing at the same object, the content of speech, the degree of excitement indicated by biological information such as heartbeat, and the like, and when the degree of attention is equal to or less than a predetermined threshold value, the audio output for the object of interest may be omitted. Even with this configuration, it is possible to reduce the risk of voice output of information about things that are not, or are only slightly, of interest. As a result, the risk of causing trouble to the user can be reduced.
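One way to combine the cues named above into a single attention degree, with assumed weights and threshold:

```python
# Hypothetical attention-degree evaluation: gaze duration, presence of speech,
# and an excitement measure are combined into one score, and voice output is
# suppressed at or below a threshold. Weights and threshold are assumptions.

ATTENTION_THRESHOLD = 0.5  # at or below this, voice output is omitted

def attention_degree(gaze_sec, spoke, excitement):
    """gaze_sec: seconds gazing at the same object; excitement in [0, 1]."""
    gaze_term = min(gaze_sec / 5.0, 1.0) * 0.4   # saturate at 5 seconds
    speech_term = 0.3 if spoke else 0.0
    return gaze_term + speech_term + excitement * 0.3

def voice_output_allowed(gaze_sec, spoke, excitement):
    return attention_degree(gaze_sec, spoke, excitement) > ATTENTION_THRESHOLD
```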
  • The HCU 1 may display status information indicating the child's body temperature, drowsiness, and the like on a driver-oriented display such as the HUD 34, either at all times or at predetermined notification intervals.
  • Such a configuration makes it easier to recognize the child's appearance, such as whether the child is sleepy or hot. In addition, it becomes easier to make a voice call according to the child's condition.
  • A control example is disclosed above in which the mode of presenting information about the object of interest to the driver is changed according to the determination result of whether or not the driving load is high; however, the parameter for switching the mode of presenting information to the driver is not limited to the level of the driving load.
  • the HCU 1 may change its operation when presenting information about an object of interest to the driver, depending on whether the vehicle is running or not. For example, when the vehicle is stopped, both the image display and the audio output may be performed, while when the vehicle is running, the image display may not be performed and only the audio output may be performed.
  • HCU1 may change the operation when presenting information about the object of interest to the driver depending on whether or not the automatic operation of level 3 or higher is in progress. For example, when the automatic operation of level 3 or higher is performed, both the image display and the audio output may be performed, while when the automatic operation is not performed, the image display may not be performed and only the audio output may be performed.
  • The HCU 1 may change the combination of display destinations for the image of the object of interest by using not only the running state of the vehicle but also whether or not another occupant is in the passenger seat. For example, if an occupant is also seated in the passenger seat, information about the object of interest may be displayed only on the passenger seat display and the child display while automatic driving is not in progress, whereas during automatic driving the information image may also be displayed on the driver display.
  • When the time remaining until the transfer of driving authority is less than a predetermined value, the presentation of information about the object of interest to the driver may be omitted. By refraining from presenting information about the object of interest to the driver in that case, it becomes easier for the driver to concentrate on preparing to resume the driving operation.
  • the HCU 1 may change the place where information about the object of interest is displayed and the notification mode according to the driving state of the driver.
  • the HCU 1 may be configured so that the timing of displaying an image can be adjusted based on the driver's situation and the driver's instruction.
  • the driver's instruction operation can be accepted by voice, touch, gesture, or the like.
  • As display timing selection candidates, immediately, 5 minutes later, when the vehicle is temporarily stopped, when the vehicle is parked, when the driving load is reduced, when the automatic operation is started, and the like can be adopted.
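The deferred display timing could be evaluated as in this sketch; the candidate identifiers and the use of plain timestamps are assumptions (the "when parked" candidate would be handled analogously to "when stopped").

```python
# Hypothetical evaluation of the driver-selected display timing.

def display_due(choice, now, requested_at,
                vehicle_stopped, load_high, auto_driving):
    """Return True when the deferred image display should now be performed."""
    if choice == "immediately":
        return True
    if choice == "in_5_minutes":
        return now - requested_at >= 300  # seconds
    if choice == "when_stopped":
        return vehicle_stopped
    if choice == "when_load_reduced":
        return not load_high
    if choice == "when_auto_driving":
        return auto_driving
    return False
```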
  • the HCU1 may adjust the threshold value for detecting the interest reaction of the child based on the past detection results of the object of interest. For example, the interest reaction detection unit F5 may lower the threshold value for detecting the interest reaction of a child when traveling near an object that has shown an interest reaction in the past.
  • the object of interest management unit FB may register an object that has shown an interest reaction a plurality of times in the past as a favorite object that is an object of particular interest.
  • The HCU 1 may notify the child of the existence of the favorite object when the remaining distance to the favorite object is less than a predetermined distance, or when the favorite object is imaged by the outside camera 28.
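The favorite-object handling above can be sketched as follows; the required reaction count and the notification distance are assumed values.

```python
# Hypothetical favorite-object management: objects that drew an interest
# reaction multiple times become favorites, and the child is notified when one
# is near or visible to the outside camera.

FAVORITE_MIN_REACTIONS = 3   # reactions needed to register a favorite (assumed)
NOTIFY_DISTANCE_M = 500.0    # notification distance in meters (assumed)

def update_favorites(reaction_counts):
    """reaction_counts: dict object_id -> number of past interest reactions."""
    return {obj for obj, n in reaction_counts.items()
            if n >= FAVORITE_MIN_REACTIONS}

def should_notify_favorite(obj, favorites, remaining_m, in_camera_view):
    return obj in favorites and (remaining_m < NOTIFY_DISTANCE_M or in_camera_view)
```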
  • the HCU1 may be configured to learn an object that the child is interested in based on past detection results.
  • When the HCU 1 is configured to manage interest target categories and detects that the child is interested in an object that does not belong to any existing interest target category, it may notify the driver in a mode different from the normal mode.
  • The normal mode here refers to the case where the child is interested in an object belonging to an existing interest category.
  • the driver may be able to know that the child has begun to take an interest in new things that are different from the past, in other words, the change of interests as he grows up.
  • the driver can know the interest target of the child that the driver himself did not know. In other words, the management of the interest category by the HCU1 can help to know the unexpected side of the child.
  • The object identification unit F6 may be configured to detect the object of interest based on the child's preference data. That is, it may be configured to preferentially extract, from the camera image corresponding to the child's line-of-sight direction, a subject related to the child's favorite things as the object of interest. According to this configuration, it is possible to reduce the possibility that things in which the child has little or no interest will be extracted as objects of interest. In other words, malfunctions of the system can be suppressed. Further, the HCU 1 may change the amount of information to be notified to the child, the expression method, and the like based on the child's preference information.
  • If the object of interest is one of the child's favorite things, a larger amount of, or more detailed, information may be displayed than would otherwise be the case. Also, if the object of interest is a favorite thing of the child, an image related to the object of interest may be displayed together with a notification sound, while if it is not, the image related to the object of interest may be displayed without a notification sound.
  • the driver and the child may be configured to be able to talk through a camera and a microphone.
  • the image of the child camera 25 is displayed on the meter display 32, and the voice acquired by the child microphone 27 is output from the speaker 38A.
  • the image of the DSM 21 is displayed on the rear seat display 35, and the sound acquired by the driver microphone 23 is output from the speaker 38B.
  • According to such a configuration, communication becomes easier even when the driver's seat and the child's seat are separated, for example, when the seat configuration has three or more rows and the child is seated in the third row or later.
  • the HCU 1 may be configured so that the image pickup direction and the enlargement ratio of the child camera 25 can be changed based on the voice command of the driver acquired via the driver microphone 23.
  • the imaging direction of the child camera 25 is expressed by a pitch angle, a roll angle, a yaw angle, and the like. Changing the imaging direction of the child camera 25 corresponds to changing the posture angle.
  • Changing the imaging direction of the child camera 25 can be realized, for example, by controlling a motor that controls the posture of the child camera 25.
  • the driver can adjust the imaging range of the child camera 25 by voice, and it becomes easy to check the facial expression of the child.
  • the control of the imaging range of the child camera 25 is not limited to voice input, and may be configured to be executable via an operating member such as a haptic device.
  • The devices, systems, and methods described in the present disclosure may be implemented by a dedicated computer comprising a processor programmed to perform one or more functions embodied by a computer program. Further, the devices and methods described in the present disclosure may be realized by using a dedicated hardware logic circuit. Further, the devices and methods described in the present disclosure may be realized by one or more dedicated computers configured by a combination of a processor that executes a computer program and one or more hardware logic circuits. Further, the computer program may be stored in a computer-readable non-transitory tangible recording medium as instructions to be executed by the computer.
  • The means and/or functions provided by the HCU 1 and the like can be provided by software recorded in a tangible memory device and a computer that executes the software, by software only, by hardware only, or by a combination thereof.
  • some or all of the functions included in the HCU 1 may be realized as hardware.
  • a mode in which a certain function is realized as hardware includes a mode in which one or more ICs are used.
  • the HCU 1 may be realized by using an MPU, a GPU, or a DFP (Data Flow Processor) instead of the CPU.
  • the HCU 1 may be realized by combining a plurality of types of arithmetic processing devices such as a CPU, an MPU, and a GPU.
  • HCU1 may be realized as a system-on-chip (SoC: System-on-Chip). Further, various processing units may be realized by using FPGA (Field-Programmable Gate Array) or ASIC (Application Specific Integrated Circuit).
  • Various programs may be stored in a non-transitory tangible storage medium.
  • As the program storage medium, various storage media such as an HDD (Hard-disk Drive), an SSD (Solid State Drive), a flash memory, and an SD (Secure Digital) card can be adopted.
  • the non-transitional substantive recording medium also includes a ROM such as EPROM (Erasable Programmable Read Only Memory).
  • the plurality of functions possessed by one component in the above embodiment may be realized by a plurality of components, or one function possessed by one component may be realized by a plurality of components. Further, a plurality of functions possessed by the plurality of components may be realized by one component, or one function realized by the plurality of components may be realized by one component. In addition, a part of the configuration of the above embodiment may be omitted. Further, at least a part of the configuration of the above embodiment may be added or replaced with the configuration of the other above embodiment.
  • the scope of the present disclosure also includes a program for making a computer function as a shared device of interest, a non-transitional actual recording medium such as a semiconductor memory in which this program is recorded, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Acoustics & Sound (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Business, Economics & Management (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Economics (AREA)
  • General Business, Economics & Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Marketing (AREA)
  • Development Economics (AREA)
  • Signal Processing (AREA)
  • Biomedical Technology (AREA)
  • Strategic Management (AREA)
  • Operations Research (AREA)
  • Accounting & Taxation (AREA)
  • Tourism & Hospitality (AREA)
  • Primary Health Care (AREA)
  • Human Resources & Organizations (AREA)
  • Mathematical Physics (AREA)
  • Library & Information Science (AREA)
  • Traffic Control Systems (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)

Abstract

When an interest reaction by a child is detected based on input data from the child camera (25) and a child sensor, the HCU (1) identifies the attention target, that is, the thing outside the vehicle in which the child has shown interest, based on the child's line-of-sight information and an image from the exterior camera at the time of that detection. The name, type, characteristics, and the like of the attention target are acquired from a server (5) and presented to the driver and the child. Information about the attention target is saved in the server, a smartphone, or the like so that it can be referenced later.

Description

Attention target sharing device, attention target sharing method

Cross-reference of related applications
 This application is based on Japanese Patent Application No. 2020-206064 filed in Japan on December 11, 2020, the entire contents of which are incorporated herein by reference.
 The present disclosure relates to a technique by which a driver shares information about things outside the vehicle in which a person seated in a rear seat has shown interest.
 Patent Document 1 discloses a technique for displaying a captured image from an exterior camera on a display visible to the driver when a fellow passenger, that is, an occupant other than the driver, makes a specific facial expression or gesture, utters a voice expressing a specific emotion, or performs a predetermined instruction operation. The displayed image is an image of a subject located outside the vehicle cabin in the direction of the passenger's line of sight. Thus, in the configuration disclosed in Patent Document 1, an image including a subject in which the passenger may have shown interest is displayed on the display.
Patent Document 1: Japanese Unexamined Patent Publication No. 2014-96632
 As one usage pattern of a vehicle, a child may sit in a child seat provided on a rear seat while the child's guardian (for example, a parent) sits in the driver's seat. Children can take interest in a greater variety of objects than adults do, and they often ask the guardian, who is driving, about things that have caught their interest, or point out their existence.
 However, since a guardian acting as the driver must concentrate on driving, he or she cannot always respond attentively to the child's questions. It is also difficult for the driver to grasp what the child seated in the rear seat is looking at. Therefore, even when the driving load is light, it is difficult for the driver to respond closely to the child's expressions of interest. This becomes even harder when the child is at an age where he or she cannot yet verbalize things well or has a limited vocabulary.
 With respect to such problems, according to the technique disclosed in Patent Document 1, an image including a subject in which the child may have shown interest is displayed, so the driver can get some idea of what the child is interested in by looking at the image. However, the configuration of Patent Document 1 merely displays an image. For the parent to answer or empathize with the child's reaction to something outside the cabin, the parent must identify, from the displayed image, the subject the child likely focused on, verbalize it, and then speak, so the burden on the driver remains large. The term "driver" here refers to the driver's seat occupant, that is, the person seated in the driver's seat.
 The present disclosure has been made in view of these circumstances, and its object is to provide an attention target sharing device and an attention target sharing method that make it easier for the driver's seat occupant to respond to a child's reaction to things outside the vehicle cabin.
 An attention target sharing device for achieving this object is an attention target sharing device used in a vehicle provided with a child's seat, which is a seat for a child to sit on. The device includes: a child information acquisition unit that acquires, as information representing the state of the child seated in the child's seat, the child's line-of-sight direction based on an image from a vehicle interior camera whose imaging range includes at least the child's face; an interest reaction detection unit that detects the child's interest reaction to things outside the vehicle cabin based on at least one of the child's biological information, the child's voice, and the line-of-sight direction; an attention target detection unit that detects the attention target, that is, the object in which the child has shown interest, based on the line-of-sight direction acquired by the child information acquisition unit and a captured image from an exterior camera (28) mounted on the vehicle so as to image the outside of the vehicle; a target information acquisition unit that acquires verbalized information about the attention target from a database arranged inside or outside the vehicle; and a notification processing unit that notifies at least one of the driver's seat occupant and the child of the information acquired by the target information acquisition unit, using at least one of display of text corresponding to the information and voice output.
 In the above configuration, the thing the child is paying attention to is identified as the attention target based on the child's line-of-sight information, and verbalized information about the attention target is acquired and provided to at least one of the child and the driver's seat occupant. According to the configuration in which the guardian as the driver's seat occupant is notified of information about the attention target, not only an image but also verbalized information is presented, so the guardian can easily recognize what the child is paying attention to. This makes it easier to converse with the child about the thing the child is focused on.
 Further, according to the configuration in which the child is notified of information about the attention target, verbalized information about the target is also presented to the child, which makes it easier for the child to use that information to tell the driver's seat occupant what he or she is paying attention to. As a result, it becomes easier for the driver's seat occupant to respond to the child's interest reaction.
 An attention target sharing method for achieving the above object is a method, executed by at least one processor, for a guardian to share things in which a child seated in a child's seat preset in a vehicle has shown interest. The method includes: acquiring the child's line-of-sight direction based on an image from a vehicle interior camera whose imaging range includes at least the face of the child seated in the child's seat; detecting the child's interest reaction to things outside the vehicle cabin based on at least one of the child's biological information, the child's voice, and the line-of-sight direction; detecting the attention target, that is, the object in which the child has shown interest, based on the acquired line-of-sight direction and a captured image from an exterior camera mounted on the vehicle so as to image the outside of the vehicle; acquiring verbalized information about the attention target from a database arranged inside or outside the vehicle; and notifying at least one of the driver's seat occupant and the child of the acquired information about the attention target, using at least one of display of text corresponding to the information and voice output (S110, S111, S113).
 The above method is executed by the attention target sharing device described above. According to this method, the same effects are obtained through the same operations as those of the attention target sharing device.
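The sequence of steps in the above method can be sketched in code as follows. This is a purely illustrative sketch: the function names, the data structure, the biometric threshold, and the simple angular matching between the gaze direction and exterior objects are assumptions made for illustration, not the implementation of this disclosure.

```python
# Illustrative sketch of the attention target sharing method:
# detect an interest reaction, match the gaze direction against objects
# recognized in the exterior camera image, and produce a verbalized
# notification. All names and thresholds here are hypothetical.

from dataclasses import dataclass

@dataclass
class DetectedObject:
    name: str           # verbalized label, e.g. obtained from a database
    bearing_deg: float  # direction of the object as seen from the vehicle

def detect_interest_reaction(heart_rate: float, utterance: str) -> bool:
    """Rough stand-in for interest detection from biometrics and voice."""
    return heart_rate > 100 or "!" in utterance

def detect_attention_target(gaze_bearing_deg: float,
                            exterior_objects: list,
                            tolerance_deg: float = 10.0):
    """Pick the exterior object closest to the child's gaze direction."""
    best = None
    for obj in exterior_objects:
        # Smallest signed angular difference, wrapped to [-180, 180).
        diff = abs((obj.bearing_deg - gaze_bearing_deg + 180) % 360 - 180)
        if diff <= tolerance_deg and (best is None or diff < best[0]):
            best = (diff, obj)
    return best[1] if best else None

def share_attention_target(heart_rate, utterance, gaze_deg, objects):
    """Detect reaction, find the target, and build the notification text."""
    if not detect_interest_reaction(heart_rate, utterance):
        return None
    target = detect_attention_target(gaze_deg, objects)
    if target is None:
        return None
    return f"The child is looking at: {target.name}"
```

In an actual system, the notification string would be rendered as text on a display or spoken via the speaker 38, and the object labels would come from the database lookup described above rather than being supplied directly.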
 The reference numerals in parentheses in the claims indicate, as one example, correspondence with specific means described in the embodiments below, and do not limit the technical scope of the present disclosure.
FIG. 1 is a block diagram showing an example of the overall configuration of the attention target sharing system.
FIG. 2 is a diagram showing an example of the configuration of a window display.
FIG. 3 is a functional block diagram of the HCU.
FIG. 4 is a flowchart for explaining the operation of the HCU.
FIG. 5 is a diagram showing an example of a notification image for an attention target (a dog).
FIG. 6 is a diagram showing an example of a notification image for an attention target (a car).
FIG. 7 is a diagram showing an example of a notification image for an attention target (a building).
FIG. 8 is a diagram showing an example of a notification image for an attention target (a rainbow).
 Hereinafter, embodiments of the present disclosure will be described with reference to the drawings. FIG. 1 is a diagram showing an example of a schematic configuration of an attention target sharing system Sys to which the attention target sharing device according to the present disclosure is applied. Some or all of the elements constituting the attention target sharing system Sys are mounted on a vehicle, and some of its functional elements may be provided outside the vehicle. In one aspect, the attention target sharing system Sys can also be understood as a system that supports communication between a child and a guardian acting as the driver, and may therefore also be called a communication support system.
 <Preface>
 The driver in the present disclosure refers to the person seated in the driver's seat, that is, the driver's seat occupant. The term is not limited to a person who is actually performing some or all of the driving operation; during automated driving, it refers to the person who should receive driving authority from the automated driving system. As an example, the operation of the attention target sharing system Sys is described here assuming that a child rides in a child seat installed on a rear seat and that the child's guardian is seated in the driver's seat as the driver. The concept of a guardian includes the child's mother or father as well as relatives such as grandparents. A babysitter may also be included among guardians, depending on the customs or laws of the region where the system is used.
 A child seat is fixed at a predetermined position among the rear seats of the vehicle, for example the seat located behind the driver's seat, by a mechanism such as a seat belt or ISOFIX. ISOFIX is a configuration defined in ISO 13216-1:1999 and is also called LATCH (Lower Anchors and Tethers for Children). The child seat here refers to a child restraint or occupant protection device in the broad sense, and may include booster seats, junior seats, and the like for adjusting the height of the seating surface. A seat in which a child sits, such as a seat on which a child seat is installed, is hereinafter referred to as a child's seat. When conditions are met under which the restraint device may be omitted, for example when the child's body is sufficiently large, no child seat need be installed on the child's seat. The child's seat need only be a seat in which a child can sit; it is not necessarily dedicated to children and may at times be configured as a seat in which an adult can also sit.
 The level referred to by "automated driving" in the present disclosure may correspond, for example, to Level 3 as defined by SAE International, or to Level 4 or higher. Level 3 refers to a level at which the system performs all driving tasks within the operational design domain (ODD), while operating authority is transferred from the system to the user in an emergency. The ODD defines the conditions under which automated driving can be executed, such as the travel position being on an expressway. At Level 3, the driver is required to be able to respond promptly when the system requests a handover of driving; Level 3 corresponds to so-called conditional automated driving. Level 4 is a level at which the system can perform all driving tasks except under specific circumstances, such as on roads that cannot be handled or in extreme environments. Level 5 is a level at which the system can perform all driving tasks in any environment. Automated driving of Level 4 or higher refers to a level at which the automated driving device performs all driving tasks, that is, an automation level at which the driver's seat occupant is allowed to sleep. Level 2 or lower corresponds to a driving assistance level at which the driver performs at least some of the driving tasks of steering and acceleration/deceleration. The system here refers to the in-vehicle system including the attention target sharing system Sys.
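The distinction drawn above, namely whether the driver's seat occupant must remain ready to take over, can be summarized in a small helper. This is a hypothetical illustration of the level definitions given in the text, not part of the disclosed system.

```python
# Hypothetical helper mapping an SAE-style automation level (0-5) to
# whether the driver's seat occupant must remain ready to drive.
# Follows the level summary in the text: at Level 3 and below the driver
# performs tasks or must take over on request; at Level 4 and above the
# system performs all driving tasks (sleep is permitted).

def driver_must_stay_ready(level: int) -> bool:
    if not 0 <= level <= 5:
        raise ValueError("SAE level must be an integer from 0 to 5")
    return level <= 3
```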
 <Configuration of the attention target sharing system Sys>
 As shown in FIG. 1, the attention target sharing system Sys includes an HCU (HMI Control Unit) 1 that controls the operation of an HMI (Human Machine Interface) such as a display. The HCU 1 corresponds to the attention target sharing device.
 The HCU 1 is used while connected to various in-vehicle devices such as a driver status monitor (hereinafter, DSM) 21. For example, the HCU 1 is connected to the DSM 21, a driver sensor 22, a driver microphone 23, an input device 24, a child camera 25, a child sensor 26, a child microphone 27, an exterior camera 28, and a locator 29. The HCU 1 is also connected to a communication device 31, a meter display 32, a center display 33, a head-up display (HUD) 34, a rear-seat display 35, a window display 36, a dialogue device 37, a speaker 38, and the like.
 In addition, the HCU 1 is connected, via an in-vehicle network Nw that is a communication network constructed within the vehicle, to various sensors and devices not shown in FIG. 1. For example, the HCU 1 is configured to communicate via the in-vehicle network Nw with a computer that controls the running of the vehicle, such as an automated driving device, and can receive a signal indicating whether the vehicle is currently in the automated driving mode. Detection results from various in-vehicle sensors are also input to the HCU 1 via the in-vehicle network Nw. Examples of such sensors include those that detect vehicle speed, acceleration, steering angle, shift position, accelerator depression amount, and brake depression amount, as well as sensors and switches that detect the operating state of the parking brake and the power state of the vehicle.
 The HCU 1 and the various devices may be connected by dedicated lines or via the in-vehicle network Nw. An ECU (Electronic Control Unit) may also be interposed between the HCU 1 and an in-vehicle device.
 The DSM 21 is a device that sequentially detects the user's state based on the user's face image. The DSM 21 includes, for example, a near-infrared light source, a near-infrared camera, and a control module that controls them. The DSM 21 is installed, for example, on the upper surface of the steering column or the upper surface of the instrument panel, with the near-infrared camera oriented toward the headrest of the driver's seat. The DSM 21 photographs, with the near-infrared camera, the head of the driver illuminated with near-infrared light from the near-infrared light source, and the captured image is analyzed by the control module. The control module extracts from the captured image driver state information, that is, information indicating the driver's state such as the opening degree of the driver's eyes. The camera constituting the DSM 21 may instead be a visible-light camera. The DSM 21 outputs the driver state information extracted from the driver's face image to the HCU 1.
 The driver state information includes, for example, the driver's face orientation, line-of-sight direction, eyelid opening degree, pupil opening degree, mouth opening degree, and posture. The DSM 21 may also be configured to estimate the driver's facial expression, emotion, and the like based on the distribution of facial feature points. Since various methods can be used to detect facial expressions from image analysis, such as methods using the distribution pattern of facial feature points or the movement of facial muscles, a detailed description is omitted here. As one example, a method can be adopted that detects multiple types of Action Units (hereinafter, AUs), defined on the basis of anatomical knowledge as units of movement corresponding to individual facial muscles, and estimates the expression from the pattern of detection amounts (scores) for each AU. Examples of AUs include lowering of the eyebrows, raising of the cheeks, raising of the corners of the mouth, and raising of the upper lip.
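The AU-based approach mentioned above, estimating an expression from a pattern of per-AU scores, can be sketched as follows. The AU names, templates, weights, and thresholds in this sketch are hypothetical examples chosen for illustration; they are not values specified in this disclosure.

```python
# Illustrative sketch of expression estimation from Action Unit (AU)
# scores: each candidate expression is described by a template of AUs,
# and the expression whose template best matches the observed scores is
# selected. All templates and thresholds here are hypothetical.

# Hypothetical expression templates: which AUs (with weights)
# characterize each expression.
EXPRESSION_TEMPLATES = {
    "smile": {"cheek_raiser": 1.0, "lip_corner_puller": 1.0},
    "surprise": {"brow_raiser": 1.0, "upper_lid_raiser": 0.8, "jaw_drop": 0.6},
    "frown": {"brow_lowerer": 1.0, "lip_corner_depressor": 0.7},
}

def estimate_expression(au_scores: dict) -> str:
    """Return the expression whose AU template best matches the scores.

    au_scores maps AU names to detection scores in [0, 1]; a match
    below the 0.3 floor is reported as "neutral".
    """
    best_label, best_match = "neutral", 0.3
    for label, template in EXPRESSION_TEMPLATES.items():
        total_weight = sum(template.values())
        # Weighted average of the observed scores for this template's AUs.
        match = sum(w * au_scores.get(au, 0.0)
                    for au, w in template.items()) / total_weight
        if match > best_match:
            best_label, best_match = label, match
    return best_label
```

A production implementation would typically use a trained classifier over the AU score vector rather than fixed templates; the sketch only shows how a score pattern maps to an expression label.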
 In addition, the DSM 21 may estimate the driver's degree of tension based on the driver's facial expression and the like, or may estimate the driver's health condition based on color information of the driver's face. The driver state information can thus include facial expression, emotion, degree of tension, health condition, and so on. The function of detecting the driver's state based on image analysis may instead be provided by the HCU 1; in that case, the DSM 21 need only be configured to output the driver's face image to the HCU 1. The allocation of functions between the DSM 21 and the HCU 1 can be changed as appropriate.
 The driver sensor 22 is a biological sensor that senses the driver's biological information, for example a pulse wave sensor that senses a pulse wave. The driver sensor 22 may be a sensor whose detection target is at least one of blood pressure, cardiac potential, heart rate, amount of perspiration, body temperature, heat dissipation from the body, breathing rhythm, breathing depth, exhaled-breath components, body composition, posture, body movement, electrodermal activity, facial muscle action potential, and peripheral blood flow. Peripheral blood flow refers to blood flow in a peripheral part such as a fingertip. Biological sensors include temperature sensors, pulse wave sensors, humidity sensors, heart rate sensors, and the like, and the concept of biological information can include the various state quantities described above. Multiple types of driver sensors 22 with different detection targets may be connected to the HCU 1. The DSM 21 described above can also, in the broad sense, be included among the driver sensors 22.
 The driver sensor 22 may be built into the backrest or headrest of the driver's seat, or provided in the steering wheel. A millimeter-wave radar that detects the driver's heart rate, body movement, and posture by transmitting and receiving millimeter waves as probe waves toward the driver's seat can also be included among the biological sensors. The driver sensor 22 may be a thermography camera. Sensors based on various detection principles, such as radio-wave types and infrared types, can be adopted as the driver sensor 22.
 In addition, some or all of the various driver sensors 22 may be wearable devices worn on, for example, the driver's wrist. Wearable devices of various forms can be adopted, such as wristband, wristwatch, ring, eyeglass, or earphone types. A wearable device serving as the driver sensor 22 is configured to communicate with the HCU 1 via a communication device 31 mounted on the vehicle. The connection between the wearable device and the communication device 31 may be wired or wireless; as the wireless connection method, short-range wireless communication standards such as Bluetooth (registered trademark) or Wi-Fi (registered trademark) can be adopted. An electrodermal activity (EDA) sensor that detects changes in skin conductance (skin surface potential) derived from sympathetic nervous system activity can also be adopted as a biological sensor.
 The driver microphone 23 is a device that converts surrounding sounds, such as speech by front-seat occupants, into an electric signal and inputs it to the HCU 1. To easily pick up the driver's speech, the driver microphone 23 is arranged, for example, on the upper surface of the steering column cover, on the steering wheel, or at the center of the instrument panel.
 The input device 24 is an operating member for receiving the driver's instructions to the HCU 1. The input device 24 may be a mechanical switch provided on a spoke of the steering wheel (a so-called steer switch), or a voice input device that recognizes the content of the driver's speech. It may also be a touch panel laminated on the display panel of a display provided on the instrument panel, such as the center display 33. Furthermore, the input device 24 may be the driver's smartphone; for example, the touch panel and display of a smartphone carried by the driver can serve as the input device 24.
 The child camera 25 is a camera that images the face of the child seated in the child's seat. For example, the child camera 25 is attached to the back of the front seat located in front of the child's seat. The child camera 25 may instead be arranged on the ceiling of the vehicle cabin or the like so that it can image the face of a child seated in the rear seat, and a camera whose imaging range covers the entire rear seat may serve as the child camera 25. For example, the child camera 25 may be attached to the center of the ceiling, the upper edge of the windshield, an overhead console, or the like. Image data captured by the child camera 25 is output to the HCU 1; the concept of image data here can also include a video signal. The child camera 25 corresponds to the vehicle interior camera.
The child sensor 26 is a biometric sensor that senses biometric information of the occupant (that is, the child) seated in the child seat. Like the driver sensor 22, the child sensor 26 detects at least one of various state quantities such as blood pressure, cardiac potential, heart rate, amount of sweating, and body temperature. For example, the child sensor 26 is a pulse wave sensor that senses a pulse wave. Multiple types of child sensors 26 that detect different items of biometric information may be connected to the HCU 1. Like the driver's biometric information, the child's biometric information can include a variety of items.
The child sensor 26 is built into the child seat, for example. The child sensor 26 may instead be a non-contact sensor that acquires various vital information using millimeter waves, infrared rays, or the like. The child sensor 26 may be a thermography camera. In addition, the child sensor 26 may be a wearable device worn on, for example, the child's wrist. A wearable device serving as the child sensor 26 need only be configured to communicate bidirectionally with the HCU 1 via the communication device 31 mounted on the vehicle.
The child microphone 27 is a device that converts speech uttered by a rear-seat occupant, particularly the child seated in the child seat, into an electric signal and inputs it to the HCU 1. To readily pick up the speech of the child seated in the child seat, the child microphone 27 is arranged, for example, on the back of the front seat located in front of the child seat or at the center of the cabin ceiling. The child microphone 27 may also be provided near the headrest of the child seat.
The exterior camera 28 is an in-vehicle camera that photographs the surroundings of the own vehicle and outputs the captured image data to the HCU 1. The exterior camera 28 includes at least a lens and an image sensor, and electronically acquires images showing the surroundings of the own vehicle. One exterior camera 28 or a plurality of exterior cameras 28 may be provided.
For example, a front camera, a rear camera, a left side camera, and a right side camera are connected to the HCU 1 as exterior cameras 28. The front camera captures the area ahead of the vehicle at a predetermined angle of view and is attached to the front end of the own vehicle, for example the front grille. The rear camera captures the area behind the vehicle at a predetermined angle of view and is arranged at a predetermined position on the rear of the vehicle body, for example near the rear license plate or the rear window. The left side camera captures the left side of the own vehicle and is attached to the left side mirror. The right side camera captures the right side of the own vehicle and is attached to the right side mirror. A wide-angle lens such as a fisheye lens is adopted for each of these exterior cameras 28, and each exterior camera 28 has an angle of view of 180 degrees or more. Consequently, using these four cameras makes it possible to image the entire surroundings (that is, 360°) of the own vehicle.
The mounting position of each of the cameras described above can be changed as appropriate. The front camera may be attached to the rearview mirror, the upper edge of the windshield, or the like. The left and right side cameras may be arranged near the bases of the A-pillars or B-pillars. An exterior camera 28 may be mounted on the roof or on the cabin ceiling. Some or all of the exterior cameras 28 may be retrofitted cameras placed, for example, on the roof, on the dashboard, or near the window frame of a rear-seat door. The exterior camera 28 may be a compound-eye camera that includes multiple sets of lenses and image sensors and can capture 360° with a single unit. The attention object sharing system Sys may also include, as exterior cameras 28, multiple cameras with different imaging distance ranges. For example, the attention object sharing system Sys may include, as exterior cameras 28, a short-range camera for imaging nearby areas and a telephoto camera for imaging relatively distant areas.
The locator 29 is a device that determines the current position of the own vehicle. The locator 29 is implemented using, for example, a GNSS receiver, an inertial sensor, and a map database (hereinafter, DB). The GNSS receiver is a device that sequentially (for example, every 100 milliseconds) detects its current position by receiving navigation signals transmitted from the positioning satellites constituting a GNSS (Global Navigation Satellite System). The locator 29 sequentially determines the position of the own vehicle by combining the positioning results of the GNSS receiver with the measurement results of the inertial sensor. The determined vehicle position is output to the HCU 1. The locator 29 also reads out, from the map DB, map data for a predetermined range determined with reference to the current position and provides it to the HCU 1. The map DB may be stored locally in the vehicle or hosted in the cloud.
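The locator's combination of GNSS fixes with inertial measurements can be pictured as a dead-reckoning estimator that is nudged toward each fresh GNSS fix. This is a minimal sketch, not the disclosed implementation; the class name, the fixed blend weight, and the 2-D east/north frame are all assumptions made for illustration:

```python
from dataclasses import dataclass


@dataclass
class Pose:
    x: float  # east position in metres
    y: float  # north position in metres


class SimpleLocator:
    """Toy position estimator: dead-reckons with inertial velocity,
    then blends in a GNSS fix whenever one arrives."""

    def __init__(self, x=0.0, y=0.0, gnss_weight=0.3):
        self.pose = Pose(x, y)
        self.gnss_weight = gnss_weight  # how strongly a fix corrects the estimate

    def predict(self, vx, vy, dt):
        # Inertial dead-reckoning between GNSS fixes.
        self.pose.x += vx * dt
        self.pose.y += vy * dt

    def correct(self, gnss_x, gnss_y):
        # Pull the predicted position toward the received fix.
        w = self.gnss_weight
        self.pose.x = (1 - w) * self.pose.x + w * gnss_x
        self.pose.y = (1 - w) * self.pose.y + w * gnss_y
```

A production locator would typically use a Kalman filter with modeled sensor noise rather than a fixed blend weight; the sketch only shows why combining the two sources yields a position that updates smoothly between fixes.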
The communication device 31 is a device for wireless or wired communication with external devices. For example, the communication device 31 performs data communication, in accordance with a standard such as Bluetooth (registered trademark), with a smartphone 4 or a wearable device brought into the cabin. The smartphone 4 is primarily a smartphone owned by the driver. The communication device 31 is also configured to perform wireless communication conforming to standards such as LTE (Long Term Evolution), 4G, and 5G, and performs data communication with a predetermined server 5.
The server 5 stores data on attention objects, that is, objects outside the cabin in which the child has shown interest, as described separately below. The server 5 also includes a database in which explanatory information on a variety of objects is registered. The server 5 is configured to be able to transmit, in response to a request from the vehicle, explanatory information on the child's attention object detected by the HCU 1. The server 5 may also have a function of acquiring information from the Internet about an attention object reported by the vehicle and returning that information to the vehicle. The server 5 may be a web server. The data on the attention object transmitted from the vehicle may be image data; in that case, the server 5 need only be configured to analyze the received image and thereby obtain, as a search term, text data that verbalizes the attention object.
The meter display 32 is a display arranged in the region of the instrument panel located directly in front of the driver's seat. A liquid crystal display, an organic EL display, or the like can be adopted as the display. The center display 33 is a display arranged near the center of the instrument panel in the vehicle width direction. The HUD 34 is a device that projects image light onto a predetermined region of the windshield, based on control signals and video data input from the HCU 1, a navigation device, or the like, thereby presenting a virtual image perceivable by the user. The HUD 34 displays images superimposed on the scenery ahead of the vehicle. The meter display 32, the center display 33, and the HUD 34 each display images according to signals input from the HCU 1. The meter display 32, the center display 33, and the HUD 34 correspond to displays for the driver. Of course, the center display 33 is also visible to the front passenger. The center display 33 can therefore also be understood as a display for front-seat occupants, including the front passenger.
The rear seat display 35 is a display for rear-seat occupants, mainly the occupant seated in the child seat. The rear seat display 35 is also referred to as a rear monitor. The rear seat display 35 is arranged, for example, on the back of the front seat located in front of the child seat, on the cabin ceiling, or the like. The rear seat display 35 likewise operates based on input signals from the HCU 1. Here, as an example, the rear seat display 35 includes a touch panel and is configured to accept instruction operations performed by the child on its display screen.
The window display 36 is a device that displays images on a window glass on the side of the vehicle, particularly the side window adjacent to the child seat, by irradiating the window glass with display light. As shown in FIG. 2, for example, the window display 36 includes a projector 361 for emitting image light and a screen 362. The screen 362 is a film-like component for reflecting image light toward the cabin interior and is attached to the cabin-side surface of the window glass 61 to be irradiated with the image light. A mirror 363 for magnifying reflection may be interposed between the projector 361 and the screen 362. The mirror 363 is preferably a concave mirror, but may be a plane mirror. The projector 361 and the mirror 363 are arranged on the cabin-side surface of the roof portion 62, that is, on the cabin ceiling.
The dialogue device 37 is a device that converses with various occupants, such as the child seated in the child seat and the driver. The dialogue device 37, for example, recognizes the child's voice data acquired by the child microphone 27, composes a response to the child's spoken input, and outputs the response as speech. The dialogue device 37 is configured, for example, to use artificial intelligence to recognize the content of an occupant's speech and to generate responses. The speech recognition processing and the response generation processing may be configured to be performed by the server 5 via the communication device 31. The dialogue device 37 may also have a function of displaying an image of a predetermined agent on the rear seat display 35. An agent is a character such as a fictitious person or an anthropomorphized animal. The agent may be, for example, an avatar of the driver set up in advance by the driver.
When outputting a predetermined message as speech, the dialogue device 37 can display on the rear seat display 35 an animation in which the agent moves as if speaking. The operation of the dialogue device 37, including the display of agent animations, is controlled by the HCU 1. The display destination of agent animations is not limited to the rear seat display 35 and may be the window display 36.
The speaker 38 generates sound in the vehicle cabin. Types of audio output include voice messages that read out predetermined text, music, and alarms. The term "sound" here also covers simple tones. The vehicle is provided with, as speakers 38, a driver speaker 38A and a child speaker 38B. The driver speaker 38A is provided, for example, in the instrument panel or in the headrest of the driver's seat. The child speaker 38B is built into the child seat. The child speaker 38B may instead be provided on a side wall or on the ceiling near the child seat.
<Configuration of the HCU 1>
The HCU 1 is a computer that integrally controls the presentation of information to users via the displays and other devices. The HCU 1 is configured as a computer including a processor 11, a RAM (Random Access Memory) 12, a storage 13, a communication interface 14 (I/O in the figure), and bus lines connecting these components.
The processor 11 is an arithmetic core such as a CPU (Central Processing Unit). The processor 11 executes various processes by accessing the RAM 12. The RAM 12 is volatile memory. The communication interface 14 is a circuit through which the HCU 1 communicates with other devices. The communication interface 14 may be implemented using analog circuit elements, an IC, or the like.
The storage 13 includes a non-volatile storage medium such as flash memory. The storage 13 stores an attention object sharing program, which is a program for causing the computer to function as the HCU 1. Execution of the attention object sharing program by the processor 11 corresponds to execution of the attention object sharing method, the method corresponding to that program. The storage 13 also holds registered data indicating, among other things, the installation positions of the exterior cameras 28 in the vehicle, the position of the child seat, and the installation position of the child camera 25.
The HCU 1 provides each of the functional units shown in FIG. 3 by having the processor 11 execute the attention object sharing program stored in the storage 13. That is, the HCU 1 includes a child information acquisition unit F1, an exterior information acquisition unit F2, a driver information acquisition unit F3, and a vehicle information acquisition unit F4. As further functional units, the HCU 1 includes an interest reaction detection unit F5, an object identification unit F6, an explanatory information acquisition unit F7, a driving load estimation unit F8, a notification control unit F9, a recording processing unit FA, and an interest target management unit FB. The notification control unit F9 includes, as finer functional units, a timing arbitration unit F91 and a target control unit F92. The notification control unit F9 corresponds to a notification processing unit.
The child information acquisition unit F1 acquires, from the child camera 25 and the child sensor 26, diverse information on the state of the child seated in the child seat. For example, the child information acquisition unit F1 analyzes the images provided by the child camera 25 to estimate at least some of the child's face orientation, gaze direction, eyelid opening, pupil opening, mouth opening, posture, body movement, and the like. Body movement includes gestures such as pointing a finger or hand toward the window. The child information acquisition unit F1 may be configured to estimate the child's facial expression, emotion, and the like based on the distribution of facial feature points in the images from the child camera 25. It may additionally be configured to detect whether the child is speaking based on the time-series change pattern of the distribution of feature points around the mouth. Furthermore, the breathing rhythm may be estimated from body movement, mainly the pattern of change in the position of the chest or abdomen.
Note that the function/processing module that estimates the child's state by analyzing images captured by the child camera 25 may be included in the child camera 25 itself. The above function may also be provided by an external server. In that case, the communication device 31 transmits the child's image data to the server 5, the server 5 analyzes the received images, and the result is returned to the vehicle. In this way, the various functions for realizing the configuration of the present disclosure may be distributed between the edge and the cloud. The arrangement of the various functions can be changed as appropriate.
The child information acquisition unit F1 also acquires detection results such as pulse wave information from the child sensor 26. Of course, if the child sensor 26 is configured to detect the pulse rate, blood pressure, cardiac potential, heart rate, amount of sweating, body temperature, amount of heat dissipation, breathing rhythm, breathing depth, exhaled breath components, body composition, posture, body movement, and the like, the child information acquisition unit F1 can acquire that information as well.
The child state information acquired by the child information acquisition unit F1 is stored in the RAM 12 with a time stamp indicating the acquisition time. The child state quantity information acquired by the child information acquisition unit F1 is classified by information type and stored, for example, in the RAM 12 for a certain period. Data with different acquisition times can be stored sorted in order of acquisition time, with the most recently acquired data first. The data retention period can be, for example, 2 minutes or 5 minutes.
In addition, the child information acquisition unit F1 calculates a normal value for each of the pulse rate, eye opening, heart rate, body temperature, skin surface potential, and so on, based on the detection results over the most recent predetermined period. The normal value can be, for example, the mean or median of the observations within that most recent period. For example, the child information acquisition unit F1 calculates the normal value of each state quantity, such as the pulse rate, eye opening, and face orientation, by averaging the observations within the most recent one minute. These normal values can be used as reference criteria for detecting that the child has become excited by interest in something.
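The normal-value computation described above can be sketched as a rolling window over the most recent observations whose mean or median serves as the baseline. This is an illustrative sketch only; the window size, class name, and one-sample-per-second assumption are not taken from the disclosure:

```python
from collections import deque
from statistics import mean, median


class BaselineTracker:
    """Keeps recent observations of one biometric signal (e.g. pulse rate)
    and reports its 'normal value' as the mean or median of the window."""

    def __init__(self, window_size=60):  # e.g. one sample per second, 1-minute window
        self.samples = deque(maxlen=window_size)  # old samples drop out automatically

    def add(self, value):
        self.samples.append(value)

    def normal_value(self, use_median=False):
        # No observations yet: no baseline can be reported.
        if not self.samples:
            return None
        return median(self.samples) if use_median else mean(self.samples)
```

Because the deque is bounded, each new observation pushes the oldest one out, so the reported baseline always reflects only the most recent predetermined period.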
Furthermore, when the child speaks, the child information acquisition unit F1 detects the utterance based on the input signal from the child microphone 27. More preferably, when the child utters words, the child information acquisition unit F1 acquires information indicating the content of the utterance and the loudness of the voice. The utterance content can be identified by speech recognition processing. The various items of information acquired or detected by the child information acquisition unit F1 are used by the interest reaction detection unit F5 and the object identification unit F6.
The exterior information acquisition unit F2 acquires information about the outside of the vehicle from the exterior camera 28, the locator 29, and the like. For example, the exterior information acquisition unit F2 sequentially acquires the image data captured by the exterior camera 28 and temporarily stores it in the RAM 12 or the like. The image data storage area can be configured as a ring buffer: once the stored amount reaches a fixed upper limit, new data is saved while the oldest data is deleted in order.
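The ring-buffer behavior described here (once the store hits its capacity limit, the oldest data is discarded as new data arrives) can be sketched with a bounded deque. The record layout, capacity, and method names are assumptions for illustration:

```python
from collections import deque


class FrameBuffer:
    """Bounded store of (timestamp, position, frame) records; once full,
    appending a new record silently discards the oldest one."""

    def __init__(self, capacity=100):
        self.buf = deque(maxlen=capacity)  # deque drops the oldest entry when full

    def store(self, timestamp, position, frame):
        # Each frame is saved with the capture time and vehicle position.
        self.buf.append((timestamp, position, frame))

    def frames_since(self, t0):
        # Retrieve records captured at or after time t0, oldest first.
        return [rec for rec in self.buf if rec[0] >= t0]
```

A lookback method such as `frames_since` would let a downstream unit pull the frames captured within a retroactive window, which matches how the gaze history is later matched against recent exterior images.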
The exterior information acquisition unit F2 stores the image data input from the exterior camera 28 in the RAM 12 in association with the position information of the vehicle at the time of capture and with time information. The exterior information acquisition unit F2 may also be configured to analyze the images input from the exterior camera 28 to identify the positions and types of the subjects contained in those images. For object recognition, for example, CNN (Convolutional Neural Network) or DNN (Deep Neural Network) techniques employing deep learning can be used.
The exterior information acquisition unit F2 also acquires map information for the area around the vehicle from the locator 29, that is, information on facilities, landmarks, and the like existing around the vehicle. The map information also includes information on the road structure, such as the type of road currently being traveled. The road type indicates, for example, whether the road is an ordinary road or an expressway. Information on the road structure further includes the curvature of the road and the remaining distances to branch/merge points, intersections, and traffic lights ahead of the vehicle.
The driver information acquisition unit F3 acquires diverse information on the driver's state from the DSM 21 and the driver sensor 22. The driver information acquisition unit F3 corresponds to a driver's-seat occupant information acquisition unit. For example, the driver information acquisition unit F3 acquires from the DSM 21 at least some of the driver's face orientation, gaze direction, eyelid opening, pupil opening, mouth opening, posture, and the like. If the DSM 21 is configured to estimate the driver's facial expression, degree of tension, and the like, the driver information acquisition unit F3 can acquire that information as well. It may additionally be configured to detect whether the driver is speaking based on the time-series change pattern of the distribution of feature points around the mouth. Furthermore, the driver's breathing rhythm may be estimated from body movement, mainly the pattern of change in the position of the chest or abdomen. The breathing rhythm can serve as material for estimating the degree of tension and the driving load. As noted above, the functions of the DSM 21, that is, the function/processing module that estimates the driver's state by analyzing images of the driver, may instead be provided by the driver information acquisition unit F3 or the server 5.
The vehicle information acquisition unit F4 acquires the state (on/off) of the traveling power supply, as well as the vehicle speed, acceleration, steering angle, accelerator depression amount, brake depression amount, and the like. The traveling power supply is the power supply that enables the vehicle to travel; when the vehicle is a gasoline vehicle, it refers to the ignition power supply. When the vehicle is an electrically driven vehicle such as an electric vehicle or a hybrid vehicle, the traveling power supply refers to the system main relay. The vehicle information acquisition unit F4 also acquires information indicating the current position of the vehicle from the locator 29. The position information of the own vehicle can be expressed in latitude, longitude, altitude, and the like.
The interest reaction detection unit F5 determines, based on the child state information acquired by the child information acquisition unit F1, whether the child has shown interest in something outside the cabin. The child showing interest in something outside the cabin is also referred to herein as an interest reaction. Determining that an interest reaction has occurred corresponds to detecting an interest reaction. A child's interest may be drawn to a wide variety of things: stationary objects such as buildings and signboards, moving objects such as pedestrians and animals, scenery, and so on. The term "things" here covers not only stationary and moving objects but also situations, states, landscapes, experiences, and the like.
For example, the interest reaction detection unit F5 determines that an interest reaction has occurred when the child seated in the child seat has kept looking at the same thing for a predetermined time or longer. "Kept looking at the same thing for a predetermined time or longer" covers not only the case where the gaze direction remains constant for the predetermined time, but also the case where the child shifts the gaze direction toward the rear of the vehicle, opposite the direction of travel, as if visually tracking the object of attention.
The interest reaction detection unit F5 may also determine that an interest reaction has occurred when the child shows a facial expression of a specific emotion. Specific emotions here include admiration and astonishment, and more concretely surprise, being impressed, smiling, and the like. Further, the interest reaction detection unit F5 may determine that an interest reaction has occurred upon detecting that the child spoke while facing the outside of the cabin. In doing so, the level of interest may be evaluated according to the loudness of the child's voice. It may also be determined that the child is showing a strong interest reaction based on the child repeating the same word a predetermined number of times or more while facing the outside of the cabin.
 Alternatively, an interest reaction may be determined based on the child uttering a phrase that poses a question to the driver, such as "Look!" or "What is that?". An interest reaction may also be detected based on the child uttering an onomatopoeic or mimetic word (so-called onomatopoeia) that denotes a particular animal or state. In addition, an interest reaction may be determined based on the child pointing a finger or hand out of the window, that is, performing a gesture indicating something.
 In addition, the interest reaction detection unit F5 may detect the child's interest reaction based on biological information such as the child's pulse, heart rate, and eye opening degree. For example, it may determine that an interest reaction has occurred when the pulse becomes faster than its normal value by a predetermined threshold or more, or when the eye opening degree increases from its normal value by a predetermined amount or more. Changes in body temperature, respiratory state, body composition, and the like can also be used as material for detecting an interest reaction; the respiratory state includes the breathing rate, exhaled breath components, and so on. In this way, the interest reaction detection unit F5 may detect that the child is interested in something based on the child being in an unusual or excited state.
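 The biometric checks described above can be sketched as follows. This is a minimal illustration only: the baseline values, margins, and field names are assumptions for the sketch, not values taken from the embodiment.

```python
# Minimal sketch of the biometric interest check: compare a current sample
# against the child's normal (baseline) values. Margins are illustrative.

def detect_interest_from_biometrics(sample, baseline,
                                    pulse_margin=15.0,    # bpm above normal
                                    eye_open_margin=0.2):  # opening-degree rise
    """Return True if the child's state deviates from baseline in a way
    that suggests an interest reaction (faster pulse or wider eye opening)."""
    pulse_up = sample["pulse"] - baseline["pulse"] >= pulse_margin
    eyes_wide = sample["eye_opening"] - baseline["eye_opening"] >= eye_open_margin
    return pulse_up or eyes_wide

baseline = {"pulse": 90.0, "eye_opening": 0.6}
print(detect_interest_from_biometrics({"pulse": 110.0, "eye_opening": 0.62}, baseline))  # True
print(detect_interest_from_biometrics({"pulse": 95.0, "eye_opening": 0.65}, baseline))   # False
```

A real implementation would of course learn the baseline per child and smooth the signals over time rather than compare single samples.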
 When the interest reaction detection unit F5 detects the child's interest reaction, the object identification unit F6 identifies the attention object, that is, the object the child showed interest in, based on the child's gaze direction within a predetermined retroactive time immediately preceding that point. The concept of the attention object here includes facilities, landmarks, stationary objects such as signboards, moving objects such as pedestrians, animals, situations (events), scenery, and the like. For example, an unfamiliar building, a person in a costume, a dog, a cat, an emergency vehicle such as a fire truck, a commercial billboard featuring a character popular with children, or a corporate signboard can each be an attention object. The object identification unit F6 corresponds to an attention object detection unit.
 The object identification unit F6 identifies, as the attention object, a subject that exists in the child's gaze direction within the image captured by the exterior camera 28, for example at the time the interest reaction was detected or within the retroactive time. The retroactive time here can be, for example, 200 milliseconds, 500 milliseconds, or 1 second.
 The child's gaze direction with respect to the outside of the cabin can be calculated from the position of the child's eyes within the cabin, which is specified based on the image from the child camera 25, and the gaze direction originating at that eye position. The position of the child's eyes within the cabin can be calculated from the installation position and orientation of the child camera 25 and the position of the child's eyes within the image. Further, by combining the child's gaze direction relative to the cabin exterior with information such as the azimuth the vehicle body is facing, the absolute direction in which the child was looking can also be calculated. The absolute direction here is a direction expressed with respect to predetermined azimuth references such as the cardinal directions.
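 The combination of gaze direction and vehicle heading described above reduces to simple angle arithmetic. The following sketch assumes (as an illustrative convention, not one stated in the embodiment) that azimuths are in degrees measured clockwise from north, and that gaze yaw is measured relative to the vehicle's forward axis with positive to the right.

```python
# Hedged sketch: combine the vehicle body azimuth with the child's in-cabin
# gaze yaw to obtain an absolute (compass) gaze direction.

def absolute_gaze_azimuth(vehicle_heading_deg, gaze_yaw_deg):
    """vehicle_heading_deg: vehicle body azimuth, clockwise from north.
    gaze_yaw_deg: gaze yaw relative to the vehicle's forward axis
    (positive = to the right). Returns an azimuth in [0, 360)."""
    return (vehicle_heading_deg + gaze_yaw_deg) % 360.0

# Vehicle heading east (90 deg), child looking 45 deg to the left -> north-east.
print(absolute_gaze_azimuth(90.0, -45.0))  # 45.0
```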
 By matching the child's gaze direction in this way against the image from the exterior camera 28 acquired and stored at, or immediately before, the time the interest reaction was detected, the object the child was paying attention to can be identified automatically.
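 One way to realize this matching step is to compare the gaze azimuth against the bearings of subjects detected in the recently saved camera frames and take the closest one. The data shapes and the angular tolerance below are assumptions for illustration; the embodiment does not prescribe a particular matching algorithm.

```python
# Illustrative sketch: among subjects detected in exterior-camera frames
# saved within the retroactive window, pick the one whose bearing is
# closest to the child's gaze direction.

def pick_attention_object(gaze_azimuth_deg, detections, max_diff_deg=10.0):
    """detections: list of (label, bearing_deg) pairs from recent frames.
    Returns the label best matching the gaze direction, or None."""
    best = None
    best_diff = max_diff_deg
    for label, bearing in detections:
        # Smallest signed angular difference, wrapped to [-180, 180).
        diff = abs((bearing - gaze_azimuth_deg + 180.0) % 360.0 - 180.0)
        if diff <= best_diff:
            best, best_diff = label, diff
    return best

detections = [("signboard", 80.0), ("dog", 42.0), ("bus", 300.0)]
print(pick_attention_object(45.0, detections))  # dog
```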
 As one example here, the object identification unit F6 acquires the attention object as an image. Of course, as another aspect, the object identification unit F6 may identify the type, name, and so on of the attention object by analyzing the attention object image, that is, the image of the subject identified as the attention object. The attention object image acquired by the object identification unit F6 is output to the explanatory information acquisition unit F7. In the process of identifying the attention object, the object identification unit F6 also identifies the direction in which the attention object exists as seen from the vehicle or the driver's seat. Information about this direction is likewise output to the explanatory information acquisition unit F7 and the notification control unit F9.
 The explanatory information acquisition unit F7 acquires explanatory information about the attention object identified by the object identification unit F6 from a dictionary database located inside or outside the vehicle. The dictionary database is a database in which explanatory information about a wide variety of objects is registered. It may be mounted in the vehicle or located in the cloud, and it may be a web server or the like. The explanatory information acquisition unit F7 corresponds to a target information acquisition unit.
 For example, the explanatory information acquisition unit F7 acquires explanatory information about the attention object from the server 5 by transmitting the attention object image to the server 5. The explanatory information here includes, in addition to the name of the object, at least one of its category (broad classification), role, background, similar things, and so on. For example, if the attention object is a building, its proper or common name, role, height, year of construction, and the like are included in the explanatory information. If the attention object is an animal, the explanatory information includes its broad classification such as dog, cat, or bird, as well as its more specific breed name, size, place of origin, and characteristics (temperament). If the attention object is an event such as a festival, the explanatory information includes its name, historical background, and dates. If the attention object is a natural phenomenon such as a rainbow, its name, the principle behind its occurrence, and the like can serve as explanatory information. If the attention object is a vehicle such as an ambulance or a construction vehicle, its name, service, features, and so on are included. If the attention object is the signboard of a company or shop, the explanatory information may include, besides the name of the company or shop, the type of service provided, the year of establishment, representative products, and the like. The explanatory information can mainly be the kind of information published in illustrated encyclopedias, dictionaries, guidebooks, and so on.
 Based on the inquiry from the explanatory information acquisition unit F7, the server 5 identifies the name of the attention object by analyzing the attention object image, and also acquires information about the attention object other than its name. The information other than the name may be acquired by an Internet search using the name as a search key, or by referring to a dictionary database held by the server 5 itself. The server 5 then returns the information collected using the name of the attention object as a search key to the vehicle as explanatory information. With such a configuration, the explanatory information acquisition unit F7 can acquire information about a wide variety of attention objects even if the vehicle itself does not carry a database containing an enormous amount of data.
 Alternatively, the explanatory information acquisition unit F7 itself may identify the type, name, and so on of the attention object by analyzing the attention object image. In that case, supplementary information such as the role and background of the attention object may be acquired, using its name as a search key, from a database located inside or outside the vehicle. The information acquired by the explanatory information acquisition unit F7 is temporarily stored in the RAM 12 or the like in association with the object image. This data set is referenced by the notification control unit F9 and the recording processing unit FA.
 The driving load estimation unit F8 determines whether the driver's driving load is high based on at least one of the driver's state information, the vehicle's driving environment, and whether automated driving is in progress. The information acquired/detected by the driver information acquisition unit F3 is used as the driver's state information. Information about the vehicle's surroundings acquired by the vehicle information acquisition unit F4 or the exterior information acquisition unit F2 can be used as the driving environment information. Whether automated driving is in progress can be input from the automated driving device via the in-vehicle network Nw. A signal indicating the level of automated driving, that is, which of levels 0 to 5 applies, may also be input from the automated driving device.
 The driving load estimation unit F8 determines that the driving load is not high, for example, when automated driving at level 3 or higher is in progress and the remaining time until handover is at least a predetermined threshold. Handover corresponds to transferring authority over the driving operation from the system to the driver's-seat occupant. On the other hand, even during automated driving, the driver's driving load can be judged to be high, for example, when the system is requesting a handover from the driver.
 The driving load estimation unit F8 may also determine whether the driving load is high based on biological information such as the driver's pulse, breathing interval and depth, electrodermal activity, facial muscle action potentials, and peripheral blood flow. For example, when such information indicates that the driver is in a tense state, such as when the pulse or breathing is faster than its normal value by a predetermined amount or more, the driving load can be judged to be high. The driver's tension can be estimated from a variety of biological information, such as grip strength on the steering wheel, posture, electrodermal activity, blink interval, and peripheral blood flow. A variety of methods can be employed for determining tension and driving load.
 In addition, the driving load estimation unit F8 may determine that the driving load is high based on the vehicle traveling near an expressway branch/merge point or an intersection, or being in the middle of a lane change. "Near an intersection" can be, for example, a section in which the remaining distance to the intersection is within 50 m, and it also includes the interior of the intersection. The driving load may likewise be judged high based on the vehicle traveling through an intersection with poor visibility, a road where pedestrians frequently run out, or a section where traffic accidents occur frequently. Information about such intersections, roads, and sections can be acquired as map data from the locator 29 or a map server.
 Whether the vehicle is traveling near an expressway branch/merge point or an intersection, whether a lane change is planned, and the like correspond to indicators of the safety level of the driving environment. The above configuration corresponds to evaluating the driver's driving load according to the safety level of the driving environment, in other words, according to the magnitude of the latent risk determined by the driving environment. A variety of information and conditions can be adopted as the criteria for determining that the driving load is high.
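 The load criteria discussed above (automation level and time to handover, biometric tension, and environment-based latent risk) can be condensed into one predicate. This is a sketch under assumed field names and thresholds; the embodiment leaves the concrete combination of conditions open.

```python
# Condensed sketch of the driving-load decision. All keys and thresholds
# are illustrative assumptions.

def is_driving_load_high(ctx):
    """ctx: dict of driver/vehicle/environment observations."""
    # Level 3+ automation with ample time to handover -> load not high.
    if ctx.get("automation_level", 0) >= 3 and ctx.get("sec_to_handover", 0) >= 60:
        return False
    if ctx.get("handover_requested"):           # system asking driver to take over
        return True
    if ctx.get("pulse_above_normal", 0) >= 10:  # tension indicator (bpm)
        return True
    # Environment-based latent risk: near an intersection, merging, lane change.
    return bool(ctx.get("near_intersection") or ctx.get("lane_change_planned"))

print(is_driving_load_high({"automation_level": 3, "sec_to_handover": 120}))  # False
print(is_driving_load_high({"near_intersection": True}))                      # True
```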
 The notification control unit F9 controls notifications relating to the information about the attention object acquired by the explanatory information acquisition unit F7. For example, the notification control unit F9 performs integrated control, based on the determination result of the driving load estimation unit F8, of the person/seat to be notified, the display timing of images relating to the attention object, the timing of audio output, the output destination when images are displayed, and so on. The functional unit that arbitrates the timing of image display and audio output relating to the attention object corresponds to a timing arbitration unit F91. Adjusting the display destination and audio output destination of images relating to the attention object corresponds to selecting the occupants to be notified of the information about it: excluding driver-facing displays such as the meter display 32 and the HUD 34 from the display destinations is equivalent to excluding the driver from the targets of image presentation. The functional unit that adjusts the display destination and audio output destination corresponds to a target control unit F92. As one control mode, the notification control unit F9 may be configured so that, when the interest reaction detection unit F5 detects the child's interest reaction, it displays on the HUD 352 an icon image indicating that the child has shown an interest reaction. Details of the notification control unit F9 will be described separately later.
 The recording processing unit FA stores the data about the attention object acquired by the explanatory information acquisition unit F7 in a predetermined storage medium in association with the position information at the time the interest reaction was shown. The device serving as the storage destination (the so-called storage device) may be an external device such as the driver's smartphone 4 or the server 5, or an internal device such as the storage 13. The recorded data, that is, the data to be recorded, preferably includes image data of the attention object; this image data may be a still image or video. The recorded data may also include text data of the explanatory information determined by analyzing the image. In addition to the vehicle's position at the time the interest reaction was detected, the recorded data may be stored in association with the direction in which the attention object exists, the position of the attention object, and the detection time. The recorded data may further include image data of the child while looking at the attention object, as well as in-cabin audio data from within a predetermined time determined with reference to the time the interest reaction was detected.
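 One recorded-data entry bundling the items listed above might be structured as follows. The field names and the JSON serialization are purely illustrative assumptions; the embodiment specifies what to associate, not a storage format.

```python
# Sketch of a single recorded-data entry: object image, vehicle position,
# object direction, detection time, and explanatory text kept together.

import json
import dataclasses

@dataclasses.dataclass
class InterestRecord:
    detected_at: str           # detection time (ISO 8601)
    vehicle_position: tuple    # (lat, lon) when the reaction was detected
    object_bearing_deg: float  # direction in which the attention object exists
    object_image_path: str     # still image or video clip of the object
    explanation_text: str      # explanatory information from the database

rec = InterestRecord("2021-12-01T10:15:00", (35.0, 137.0), 45.0,
                     "clips/fire_truck.jpg", "A fire truck is ...")
payload = json.dumps(dataclasses.asdict(rec))  # what would be sent to storage
print(type(payload).__name__)  # str
```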
 When the server 5 is the storage destination for the above data, it is preferable that the data stored on the server 5 can be referenced from smartphones or the like owned by the driver, grandparents, and so on. With this configuration, the things the child took an interest in can be shared even with family members living far away. When data storage is triggered by the detection of an interest reaction, the HCU 1 or the server 5 may notify pre-registered devices that the recorded data has been updated. The above sharing process may be realized through cooperation with a social networking service (SNS).
 Because the recording processing unit FA stores the data about the attention object in association with various information as described above, the recorded data can be referenced later. As a result, a parent can more easily talk with the child about what the child saw, for example during a break from driving or after returning home, while referring to the recorded data about the attention object. It also has the advantage that a parent can sense the child's growth by later reviewing the things the child paid attention to.
 The interest target management unit FB identifies interest categories, that is, categories of things the child is interested in, based on information such as the types of attention objects detected in the past. Possible interest categories include vehicles, animals, buildings, signboards, plants, characters, and fashion. Characters may be subdivided in more detail, for example by the name of the animation. Vehicles may likewise be subdivided into four-wheeled vehicles, two-wheeled vehicles, trains, and so on; fashion into clothes, hairstyles, and so on; and animals into dogs, cats, birds, and so on.
 When the HCU 1 is configured to be able to identify the interest categories the child is interested in, it may adjust the threshold for detecting the child's interest reaction when something belonging to an interest category appears in the image from the exterior camera 28. For example, the threshold for detecting the child's interest reaction may be lowered when something belonging to an interest category is visible in the exterior camera 28 image.
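 The threshold adjustment just described can be sketched as a small function. The multiplicative lowering factor and the label/category representation are assumptions for illustration.

```python
# Sketch: lower the interest-reaction detection threshold while something
# from a known interest category is visible to the exterior camera.

def detection_threshold(base_threshold, visible_labels, interest_categories,
                        boost=0.7):
    """Return a (possibly lowered) detection threshold.
    boost < 1 lowers the threshold when an interest-category item is visible."""
    if any(label in interest_categories for label in visible_labels):
        return base_threshold * boost
    return base_threshold

interest = {"vehicle", "animal"}
print(detection_threshold(1.0, ["building", "animal"], interest))  # 0.7
print(detection_threshold(1.0, ["building"], interest))            # 1.0
```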
 <Example Operation of the HCU 1>
 Here, the communication support process executed by the HCU 1 will be described with reference to FIG. 4. As one example, the communication support process comprises steps S101 to S115. Of course, the number of steps constituting the communication support process and their order can be changed as appropriate.
 The communication support process shown in FIG. 4 is started when a predetermined start event occurs. The start event may be, for example, the vehicle's ignition being turned on, the vehicle starting to travel, or a start instruction being input by the driver. Alternatively, the communication support process may be started with the interest reaction detection unit F5 detecting an interest reaction of the child seated in the child seat as a trigger. In that case, steps S101 to S104 can be executed sequentially as processes independent of the communication support process.
 First, in step S101, the HCU 1 acquires the information required for processing from the various devices connected to it, and moves to step S102. For example, the child information acquisition unit F1 acquires the child's biological information from the child sensor 26 and the like; the exterior information acquisition unit F2 acquires information about the environment outside the vehicle from the exterior camera 28 and the like; the driver information acquisition unit F3 acquires the driver's biological information from the DSM 21, the driver sensor 22, and the like; and the vehicle information acquisition unit F4 acquires information about the vehicle's current position, travel speed, and so on from the in-vehicle network Nw, the locator 29, and the like. The various information acquired in step S101 is stored in a predetermined memory such as the RAM 12 together with information indicating the acquisition time. Step S101 can be called an information acquisition step or information acquisition process. The information acquisition step can also continue to be executed at a predetermined cycle, for example every 100, 200, or 500 milliseconds, even after step S102.
 In step S102, the driving load estimation unit F8 determines whether the driver's driving load is high, for example by the algorithm described above, based on the driver's biological information acquired by the driver information acquisition unit F3, and then moves to step S103. The determination result relating to the driver's driving load may be expressed as a level value with multiple stages; in that case, a state in which the determination value is at or above a predetermined threshold corresponds to the high-driving-load state. Whether the driver's driving load is high may also be managed by a flag: for example, the driving load flag may be set to 1 (on) when the driving load is high and to 0 (off) when it is not. Step S102 can be called a driver state estimation step.
 In step S103, the interest reaction detection unit F5 determines whether the child has shown an interest reaction based on the child's biological information for the most recent predetermined time acquired by the child information acquisition unit F1. If there has been an interest reaction, step S104 is answered affirmatively and the process moves to step S105. If there is no interest reaction, it is determined in step S115 whether a predetermined end condition is satisfied. The processing of steps S103 to S104 can be called an interest reaction detection step.
 In step S105, the object identification unit F6 estimates the direction in which the attention object exists as seen from the vehicle, based on the child's gaze direction and the position of the child's eyes. It then extracts, from among the subjects included in the image data of the exterior camera 28, the thing that exists in the estimated direction as the attention object, and moves to step S106. Step S105 can be called an attention object identification step.
 In step S106, the object identification unit F6 extracts the image portion showing the attention object from the image captured by the exterior camera 28 as the attention object image, and moves to step S107. Step S106 can be called an attention object image acquisition step.
 In step S107, the explanatory information acquisition unit F7 cooperates with the communication device 31 to access the server 5, the Internet, and the like, acquires explanatory information about the attention object, and moves to step S108. The explanatory information is data in which the features, role, name, and so on of the attention object are verbalized as text or audio data. Step S107 can be called an explanatory information acquisition step.
 In step S108, it is determined whether the driving load has been judged to be high. The information on the driving load used in step S108 can reuse the determination result from step S102. If the driving load is high, step S108 is answered affirmatively and the process moves to step S109; otherwise, step S108 is answered negatively and the process moves to step S111. As an operation mode of the HCU 1, a proxy response mode in which the system responds automatically in place of the driver may be provided. When the HCU 1 has been set to the proxy response mode by the driver's operation, it may be configured to move to step S109 regardless of the driving load determination value.
 In step S109, the notification control unit F9 excludes the driver from the targets of notification of the information about the attention object. For example, the notification target of the information about the attention object is set to the child only. When an occupant is also present in the passenger seat, the notification targets may be set to the child and the passenger seat occupant. Whether a person is in the passenger seat can be identified from the detection signal of a seating sensor provided in the passenger seat.
 The information related to the object of interest here consists of an image of the object and its explanatory information. As described above, the explanatory information may be output as an image such as text or an icon, or as a voice message. Images of the object of interest include not only an image of the object itself but also a text image of the explanatory information. The information related to the object of interest can be presented using at least one of image display and audio output.
 Including the child among the notification targets corresponds to adopting at least one of the rear-seat display 35, the window display 36, and the speaker 38B as a notification device. A notification device here is a device that outputs information related to the object of interest as an image or sound. Likewise, including the driver among the notification targets corresponds to adopting at least one of the meter display 32, the center display 33, the HUD 34, and the speaker 38A as a notification device, and including the passenger-seat occupant corresponds to adopting at least one of the center display 33 and the speaker 38A. When the attention target sharing system Sys includes a passenger-seat display provided in front of the passenger seat, that display can also be adopted as a notification device for the passenger-seat occupant. The passenger-seat display may be part of a display extending continuously from the right edge to the left edge of the instrument panel.
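The correspondence between notification targets and candidate notification devices described above can be sketched as a simple lookup. This is an illustrative model only; the device names follow the reference numerals in this description, and the selection rule is an assumption, not an actual implementation.

```python
# Hypothetical sketch of the target-to-device mapping described above.
# Device names follow the reference numerals in this description
# (rear-seat display 35, window display 36, speaker 38B, and so on).

TARGET_DEVICES = {
    "child": {"rear_seat_display_35", "window_display_36", "speaker_38B"},
    "driver": {"meter_display_32", "center_display_33", "hud_34", "speaker_38A"},
    "front_passenger": {"center_display_33", "speaker_38A"},
}

def notification_devices(targets):
    """Return the union of candidate notification devices for the
    notification targets selected (e.g. after step S109)."""
    devices = set()
    for target in targets:
        devices |= TARGET_DEVICES[target]
    return devices
```

For example, when the driver is excluded in step S109 and only the child remains, `notification_devices({"child"})` yields only the rear-seat devices.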
 Of course, the notification targets for image display and those for audio output may be set separately. For example, the child may be notified by both image and sound while the driver is notified by sound only. When the selection of notification targets in step S109 is complete, the process proceeds to step S110.
 In step S110, the notification control unit F9 displays the image of the object of interest and its explanatory information on at least one of the rear-seat display 35 and the window display 36, and outputs the audio corresponding to that explanatory information from the speaker 38B. If the interest reaction detected in step S104 was of the type in which the child asks the driver about the object of interest, the system response in step S110 corresponds to proxy response processing, in which the system answers in place of the driver. In one aspect, the proxy response processing can also be understood as processing that controls the operation of the dialogue device 37.
 When the driver's driving load is high in this way, the system responds promptly in place of the driver, so that explanatory information about the object of interest can be presented before the child loses interest in it. In addition, since the need for the driver to respond immediately to the child's question is reduced, the driver can concentrate on the driving operation.
 In step S110, the dialogue device 37 may also be activated to establish a system state capable of responding to additional questions from the child. With such a configuration, the system can continue to answer the child's questions about the presented explanatory information. The interaction between the dialogue device 37 and the child can be carried out using an agent image. When step S110 is complete, the process proceeds to step S114.
 In step S111, information about the object of interest is displayed on the driver's-seat display and the child's display, and the process proceeds to step S112. When displaying images on the various displays, a predetermined sound effect may be output from each speaker 38 so that the driver and the child can easily notice that an image has been displayed.
 In step S112, it is determined whether the driver has spoken within a predetermined response waiting time from the image display in step S111. Step S112 corresponds to processing that determines whether the driver has made some response to the child's interest reaction. The response waiting time can be, for example, 4 or 6 seconds. If the driver's utterance is detected within the response waiting time, the driver is considered to have responded to the child's interest reaction, the audio output is omitted, and the process proceeds to step S114. On the other hand, if no driver utterance is detected even after the response waiting time has elapsed from the start of the image display in step S111, the driver is considered unable to respond to the child's interest reaction, and the process proceeds to step S113. In step S113, the explanatory information about the object of interest is output as audio, and the process proceeds to step S114. Steps S110, S111, and S113 can each be called a notification processing step because they present verbalized information about the object of interest to at least one of the driver and the child.
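The branching from step S108 through step S113 can be summarized as follows. The function and flag names are illustrative assumptions; the response waiting time follows the example values given above.

```python
RESPONSE_WAIT_S = 4.0  # example value from the description (4 or 6 seconds)

def notify_about_object(driving_load_high, proxy_mode, driver_spoke_within):
    """Return the sequence of notification actions, following the
    S108-S113 branching described above.

    driver_spoke_within: callable(timeout) -> bool, True if the driver's
    utterance was detected within the response waiting time (S112).
    """
    actions = []
    if driving_load_high or proxy_mode:               # S108, plus proxy response mode
        actions.append("exclude_driver")              # S109
        actions.append("child_image_and_audio")       # S110
    else:
        actions.append("show_images_all")             # S111
        if not driver_spoke_within(RESPONSE_WAIT_S):  # S112
            actions.append("audio_explanation")       # S113
    return actions
```

For example, when the load is low and the driver stays silent past the waiting time, the flow falls back to the spoken explanation of step S113.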
 In step S114, the recording processing unit FA saves the image data of the object of interest in a predetermined recording device in association with position information, in-cabin voice data, time information, and the like, and proceeds to step S115.
 In step S115, it is determined whether a predetermined end condition is satisfied. End conditions include, for example, that the vehicle's driving power has been turned off, that the vehicle has arrived at the destination, and that the driver has issued an end instruction. If the end condition is satisfied (S115: YES), this processing ends.
 <Display examples of notification images for the object of interest>
 Here, examples of display images showing information about the object of interest will be described with reference to FIGS. 5 to 8.
 The notification image about the object of interest provided to the driver preferably includes a target position information image Pd indicating the position of the object of interest, such as the direction in which the object identification unit F6 detected it and whether the exterior camera 28 can still capture it. The target position information image Pd may consist of text, an icon, or the like. Presenting the target position information image Pd makes it easier for the driver to find and confirm the object of interest with his or her own eyes. Furthermore, presenting information indicating whether the exterior camera 28 can still capture the object helps keep the driver from searching for something that has already gone out of sight due to movement of the vehicle, the object of interest, or both.
 FIG. 5 schematically shows the notification image Px when the object of interest is, for example, a dog being walked on a sidewalk. As shown in FIG. 5, when the object of interest is a dog, the notification image Px includes, in addition to an image Pi of the object itself, a text image Pt giving the name (breed), country of origin, temperament, and so on. FIG. 6 schematically shows the notification image Px when the object of interest is another vehicle traveling in the adjacent lane. As shown in FIG. 6, when the object of interest is a car, the notification image Px includes, in addition to the image Pi of the object itself, a text image Pt giving the name (model), manufacturer, features, and so on. Features of the vehicle refer to, for example, whether it is an electric or engine-powered vehicle, its size, and its driving performance.
 When the object of interest is a structure such as a tower, the notification image Px includes, as shown in FIG. 7, the name of the building, its role, year of construction, height, and so on as a text image Pt, in addition to the image Pi of the object itself. The role of a building includes, for example, a radio tower transmitting broadcast waves for television or radio, a commercial complex, a government office, a factory, a school, or housing. The role can also be rephrased as the provided service or attribute. The notification image Px of a structure serving as a landmark may also display, in a touch-selectable manner, photographs from other angles or an exterior photograph Pj taken at a time of day different from the present, such as at night.
 FIG. 8 schematically shows the notification image Px when the object of interest is a rainbow. As shown in FIG. 8, when the object of interest is a natural phenomenon such as a rainbow, the notification image Px includes, in addition to the image Pi of the object itself, a text image Pt describing the cause of its occurrence, its features, and so on. The features of a rainbow include information about its color gradation, for example that the outermost colors are red and purple.
 In addition, the notification control unit F9 may display, as the notification image Px, video content related to the object of interest or a list of such content. For example, when the object of interest is a character from a certain animation, content such as videos of that animation may be displayed.
 <Effects>
 In the configuration above, the HCU 1 presents to the driver not only an image of the object the child showed interest in but also explanatory information that verbalizes the name and so on of that object. With such a configuration, the driver can easily recognize what the child focused on. As a result, the driver can more easily respond to the child's interest reaction in a way that engages with it.
 Also, in the example above, when the driver's driving load is determined to be high, the system answers the child's question without waiting for the driver to respond to the child's interest reaction. With this configuration, the driver can more easily concentrate on driving, and the child can obtain the desired information from the system. In addition, because the system responds to what the child showed interest in, the risk that the child finds the travel time boring can be reduced.
 Also, in the example configuration above, even when the driver is notified of information about the object of interest by both image and audio, the audio output of the explanatory information is performed only after the predetermined response waiting time has elapsed since the image display. Providing such a time difference between image display and audio output makes it easier for the guardian driving the vehicle to respond to the child's interest reaction personally, which can be expected to stimulate communication between the guardian as driver and the child. Furthermore, in the configuration above, if no driver utterance can be detected even after the predetermined response waiting time has elapsed since the image was displayed on the driver's display, the system responds in the driver's place. With this configuration, the child can more easily enjoy the travel time because at least one of the guardian and the system responds to his or her interest.
 In addition, in the configuration above, things the child showed interest in are recorded in association with position information and the like. With this, even if the driver could not identify what the child showed interest in immediately after the interest reaction was detected, it can be identified later. For example, even after the vehicle has passed the object of interest, looking at its image makes it possible to identify what caught the child's interest. The image of the object of interest can also be reviewed after the drive has ended or after returning home, for example. Moreover, the data on the object of interest is configured, in cooperation with the server 5 and the like, so that family members living apart can also refer to it. This makes it easier for family members and relatives other than the driver to share what the child is interested in.
 In the example configuration above, the type of the object of interest and the direction in which it exists are communicated to the driver by image or sound. With this configuration, the driver can more easily see, even while driving, the object outside the cabin that the child is paying attention to. Moreover, a guardian such as the driver can more easily notice the child's growth from changes in what the child pays attention to. Therefore, with the configuration above, the travel time can also serve as an opportunity for the guardian to learn about the child's development.
 Furthermore, the things detected as objects of interest are not limited to map elements registered in map data, such as facilities and landmarks. A wide variety of things, such as pedestrians, automobiles, trains, animals, and rainbows, are detected as objects of interest. Explanatory information about the detected object of interest is then acquired from the Internet or the like and presented to at least one of the child and the driver. With such a configuration, the child can actively obtain information about the diverse things he or she is interested in, and can therefore learn many things even during travel time.
 Also, with the configuration above, the child can learn by concretely linking knowledge in a database with the real world. Knowledge gained by actually seeing things with one's own eyes, and the memories of doing so, are easier to retain than information obtained from textbooks and the like. Therefore, with the configuration above, knowledge can be acquired efficiently using the travel time to a cram school or school; in other words, travel time becomes usable as an opportunity to acquire knowledge. In particular, when a child attends multiple cram schools, making effective use of travel time can be important for both the child and the accompanying guardian. In such circumstances, the configuration of the present disclosure has the advantage of making travel time easier to use as an opportunity for knowledge acquisition and parent-child communication.
 In addition, in the example configuration above, voice data capturing the conversation in the vehicle is saved together with the image of the object of interest. With this configuration, the in-vehicle conversation about the object of interest can also be replayed at a later date.
 With the configuration above, supporting communication between the child and the guardian serving as driver can make travel time in the vehicle more enjoyable. Small children may also, at times, raise their voices or cry when they get no response from a parent. If the child throws a tantrum, the driver's driving load can rise as a result. According to the configuration of the present disclosure, facilitating communication between the child and the guardian while driving can reduce the risk of the child throwing a tantrum, and consequently an effect of suppressing an increase in the driving load can also be expected.
 Although embodiments of the present disclosure have been described above, the present disclosure is not limited to the embodiments described above; the various supplements and modifications described below are also included in the technical scope of the present disclosure, and, beyond those below, various other changes can be made without departing from the gist. For example, the various modifications can be combined as appropriate as long as no technical contradiction arises. Members having the same functions as members described in the foregoing embodiment are given the same reference numerals, and their description is omitted. When only part of a configuration is mentioned, the configuration of the embodiment described earlier can be applied to the other parts.
 <Example of controlling the content notified to the child>
 The amount, type, and manner of expression of the information notified to the child may be changed based on the child's age, knowledge level, and ability level. For example, if the child is still at an age at which he or she cannot read the characters of the native language, text in the native language and its audio may be output as a set; such a configuration can be expected to accelerate acquisition of the written native language. If the child is at an age at which he or she can read the native language to some extent, text in the native language and a translation into another language may be output as a set; such a configuration can be expected to support learning a language other than the native one.
 Also, the amount, type, and manner of expression of the information notified to the child may be changed based on the child's wakefulness level, posture, time on board, and so on. When the child appears sleepy, the amount of information may be reduced compared with when the child is awake, or the amount of text may be reduced in favor of more images. With this configuration, the risk of annoying the child with excessive information can be reduced. Also, when the child's seated posture has slumped, the child is likely to be tired; hence, when the posture has slumped, the amount of information presented may be smaller than when it has not. In addition, the system may be configured to reduce the amount of information as the elapsed time since boarding (that is, the ride time) grows longer. Furthermore, in a configuration that presents the child with a list of video content related to the object of interest, the list may be narrowed to content whose playback time is shorter than the remaining time until arrival at the destination.
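The adjustments just described, reducing the information amount for a sleepy or tired child and filtering a content list by the remaining travel time, could be sketched as follows. The scoring rule, thresholds, and field names are purely illustrative assumptions and do not appear in this disclosure.

```python
def information_level(awake, posture_ok, ride_minutes):
    """Return a coarse information-amount level ("full", "reduced",
    "minimal") from the child's state; the thresholds are illustrative."""
    level = 2  # start from full information
    if not awake:
        level -= 1  # sleepy child: less information
    if not posture_ok:
        level -= 1  # slumped posture suggests fatigue
    if ride_minutes > 60:
        level -= 1  # long ride: reduce further (assumed threshold)
    return ("minimal", "reduced", "full")[max(level, 0)]

def playable_contents(contents, minutes_to_destination):
    """Keep only video contents whose playback time fits within the
    remaining time to the destination."""
    return [c for c in contents if c["minutes"] < minutes_to_destination]
```

For instance, a sleepy child with slumped posture on a long ride would receive only minimal information, and a 30-minute video would be dropped from the list when only 10 minutes of travel remain.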
 The amount and type of information notified to the child via the rear-seat display 35 and the like may be configured so that the driver can set them via a predetermined settings screen. Information such as the child's age may be entered manually by the driver via a settings screen or the like, or may be estimated by the HCU 1 from images from the child camera 25 or the like.
 Also, the amount of information in the notification to the child may be changed depending on whether the interest reaction was detected based on the child uttering a compound phrase, that is, a phrase combining multiple words, such as "What is that?". Being able to utter a compound phrase suggests the child has acquired a corresponding vocabulary, so more detailed explanatory information may be provided. On the other hand, when the interest reaction was detected from an utterance other than a compound phrase, the notification may be kept to comparatively simple information. In this way, the amount of information notified to the child may be increased or decreased according to the utterance content that triggered the detection of the interest reaction.
 Also, the information notified to the child may be controlled depending on whether the interest reaction was detected using the child's spoken voice. For example, information about an object of interest detected without any utterance from the child may be limited to an image display. This is because the risk of false detection is comparatively high when no utterance accompanies the reaction, and going as far as audio output could annoy users including the child. With this configuration, when the risk of false detection is comparatively high, the risk of annoying users can be reduced by omitting the audio output.
 The HCU 1 may also evaluate the degree of attention from the length of time the child gazes at the same object, the utterance content, the degree of excitement indicated by biometric information such as heart rate, and so on, and may be configured to omit the audio output about the object of interest when the degree of attention is at or below a predetermined threshold. This configuration, too, can reduce the risk of outputting audio about things the child has little or no interest in, and consequently the risk of annoying the user.
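The attention-degree evaluation just described could be modeled as a weighted score compared against a threshold. The weights, normalization, and threshold below are illustrative assumptions only, not values from this disclosure.

```python
def attention_score(gaze_seconds, spoke, excitement):
    """Combine gaze duration, presence of an utterance, and an excitement
    level (0.0 to 1.0 from biometric signals) into one score in [0, 1].
    All weights are illustrative assumptions."""
    gaze_part = min(gaze_seconds / 5.0, 1.0) * 0.4  # saturate at 5 s of gaze
    speech_part = 0.3 if spoke else 0.0
    return gaze_part + speech_part + excitement * 0.3

def should_output_audio(gaze_seconds, spoke, excitement, threshold=0.5):
    """Omit audio output when the attention degree is at or below the
    threshold, as described above."""
    return attention_score(gaze_seconds, spoke, excitement) > threshold
```

A long gaze accompanied by speech and high excitement clears the threshold, while a brief silent glance does not, so its audio output would be suppressed.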
 <Example of controlling the content notified to the driver>
 Even when no interest reaction has been detected, the HCU 1 may display status information indicating the child's body temperature, drowsiness, and so on, continuously or at predetermined notification intervals, on a driver-facing display such as the HUD 34. Such a configuration makes it easier to recognize the child's condition, for example whether the child seems sleepy or hot, and makes it easier to speak to the child in a way suited to that condition.
 <Supplement on notification control for the driver>
 The embodiment described above disclosed a control example in which the manner of presenting information about the object of interest to the driver is changed according to the determination of whether the driving load is high, but the parameter used to switch the presentation manner is not limited to the level of driving load. For example, the HCU 1 may change its operation when presenting information about the object of interest to the driver depending on whether the vehicle is traveling. For instance, both image display and audio output may be performed while the vehicle is stopped, whereas only audio output, without image display, may be performed while the vehicle is traveling.
 The HCU 1 may change its operation when presenting information about the object of interest to the driver depending on whether level 3 or higher automated driving is in progress. For example, both image display and audio output may be performed during level 3 or higher automated driving, whereas only audio output, without image display, may be performed when automated driving is not in progress.
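The presentation-mode switching described in the last two paragraphs, by vehicle state and automation level, can be sketched as below. The rule shown is one possible reading of the examples above, not a definitive implementation.

```python
def driver_presentation(stopped, automated_level):
    """Choose how to present object-of-interest information to the driver,
    following the examples above: full image-plus-audio output when the
    vehicle is stopped or in level 3 or higher automated driving, audio
    only otherwise. The rule is illustrative."""
    if stopped or automated_level >= 3:
        return {"image", "audio"}
    return {"audio"}
```

A vehicle driving manually in traffic would thus receive audio-only notifications, while the same notification during level 3 automated driving would also include the image.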
 In addition to the traveling state of the vehicle, the HCU 1 may also take into account whether another occupant is in the passenger seat when changing the combination of display destinations for images about the object of interest. For example, when an occupant is also seated in the passenger seat, information about the object of interest may be displayed only on the passenger-seat display and the child's display while the driver is driving manually, whereas during automated driving the information image may also be displayed on the driver's display.
 Furthermore, even when the driver's driving load is determined not to be high, the presentation of information about the object of interest to the driver may be omitted when a handover of authority from the system to the driver is imminent. By withholding the presentation of information about the object of interest when the remaining time until the handover is less than a predetermined value, the driver can more easily concentrate on preparing to resume the driving operation.
 The HCU 1 may also change where information about the object of interest is displayed, and the notification manner, according to the driver's driving state. The HCU 1 may be configured so that the timing of the image display can be adjusted based on the driver's situation or the driver's instruction. The driver's instruction can be accepted by voice, touch, gesture, or the like. Candidate display timings include, for example, immediately, in five minutes, when temporarily stopped, when parked, when the driving load decreases, and when automated driving starts.
 <Use of past detection results>
 The HCU 1 may adjust the threshold used to detect the child's interest reaction based on past detection results for objects of interest. For example, the interest reaction detection unit F5 may lower the detection threshold while the vehicle is traveling near an object that has triggered an interest reaction before. The interest target management unit FB may also register an object that has triggered an interest reaction multiple times in the past as a favorite object, that is, an object of particular interest. When the remaining distance to a favorite object falls below a predetermined distance, or when a favorite object is captured by the outside camera 28, the HCU 1 may notify the child of its presence. The HCU 1 may thus be configured to learn, from past detection results, what the child is interested in.
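The two ideas above, a lowered detection threshold near previously flagged objects and promotion of repeatedly flagged objects to favorites, can be sketched together. The threshold values and the three-reaction cutoff are assumptions for illustration; the patent leaves them unspecified.

```python
from collections import Counter

BASE_THRESHOLD = 0.7      # assumed normal interest-detection threshold
NEAR_PAST_OBJECT = 0.5    # assumed lowered threshold near a known object
FAVORITE_MIN_HITS = 3     # assumed reaction count that makes a favorite

class InterestHistory:
    """Tracks past interest reactions per object (identifiers are illustrative)."""

    def __init__(self):
        self.hits = Counter()

    def record_reaction(self, object_id: str) -> None:
        self.hits[object_id] += 1

    def threshold(self, near_known_object: bool) -> float:
        # Be more sensitive when approaching something that drew interest before.
        return NEAR_PAST_OBJECT if near_known_object else BASE_THRESHOLD

    def favorites(self) -> set:
        # Objects that triggered reactions repeatedly become "favorite objects".
        return {obj for obj, n in self.hits.items() if n >= FAVORITE_MIN_HITS}
```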
 When the HCU 1 is configured to manage interest categories and detects that the child has shown interest in an object that does not belong to any existing interest category, it may notify the driver in a manner different from the normal case. The normal case here refers to the child showing interest in an object that does belong to an existing interest category. With this configuration, the driver can learn that the child has begun to take an interest in something new, in other words, can follow how the child's interests shift with growth. In one respect, the configuration also lets the driver discover interests of the child that the driver had not been aware of; the management of interest categories by the HCU 1 can thus help reveal an unexpected side of the child.
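The category-novelty rule reduces to a membership test, as in this sketch. The category names and the "highlight"/"normal" labels are assumptions standing in for the two notification modes.

```python
# Hypothetical sketch: pick a notification style depending on whether the
# newly detected object's category is already a known interest category.

def notification_style(object_category: str, known_categories: set) -> str:
    if object_category not in known_categories:
        return "highlight"  # a new kind of interest: notify in a distinct manner
    return "normal"         # an already-established interest category
```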
 <Use of preset objects of interest>
 The foregoing describes a mode in which the HCU 1 learns what the child is interested in from past detection results, but the disclosure is not limited to this. Information on the child's favorite things and preferences may be registered in advance in the HCU 1 or in the server 5 via a predetermined setting screen.
 When data on the child's favorite things has been registered by manual entry or by automatic learning, the object identification unit F6 may be configured to detect the object of interest based on that data. That is, it may preferentially extract, from the camera image corresponding to the child's gaze direction, subjects related to the child's favorite things as the object of interest. This configuration reduces the risk that things the child has little or no interest in are extracted as objects of interest; in other words, it suppresses misdetections by the system. The HCU 1 may also change the amount of information presented to the child, the manner of expression, and so on, based on the child's preference information. When the object of interest is one of the child's favorite things, more detailed information may be displayed than otherwise; likewise, the image of the object may be displayed together with a notification sound when it is a favorite thing, and without one when it is not.
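The preference-driven behavior above has two parts: ranking gaze candidates so registered favorites win, and enriching the presentation for favorites. A hedged sketch under assumed names and an assumed additive scoring scheme:

```python
# Hypothetical sketch of preference-weighted extraction and presentation.
# The +0.5 bonus, field names, and labels are illustrative assumptions.

def rank_candidates(candidates, preferences):
    """candidates: list of (label, detection_score); preferences: set of labels."""
    def score(item):
        label, detection = item
        bonus = 0.5 if label in preferences else 0.0  # favor registered likes
        return detection + bonus
    return sorted(candidates, key=score, reverse=True)

def presentation_for(label, preferences):
    """Richer, chimed notification for favorite things; brief and silent otherwise."""
    if label in preferences:
        return {"detail": "rich", "chime": True}
    return {"detail": "brief", "chime": False}
```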
 <Supplementary notes on the communication method>
 The driver and the child may be able to converse through cameras and microphones. For example, when the conversation function is set to on, the image from the child camera 25 is shown on the meter display 32 and the audio picked up by the child microphone 27 is output from the speaker 38A, while the image from the DSM 21 is shown on the rear-seat display 35 and the audio picked up by the driver microphone 23 is output from the speaker 38B. This configuration makes it easier to communicate even when the driver's seat and the child's seat are far apart, for example in a vehicle with three or more seat rows where the child sits in the third row or further back.
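The source-to-sink routing described above can be written out as a table, which makes the symmetry of the two directions explicit. The device identifiers mirror the reference numerals in the text, but this pairing is a sketch, not the patent's wiring.

```python
# Hypothetical routing table for conversation mode: each tuple is
# (media source, playback destination). Names follow the reference numerals.

def conversation_routes(enabled: bool):
    if not enabled:
        return []
    return [
        ("child_camera_25", "meter_display_32"),      # child's face to the driver
        ("child_mic_27",    "speaker_38A"),           # child's voice to the front
        ("dsm_camera_21",   "rear_seat_display_35"),  # driver's face to the rear
        ("driver_mic_23",   "speaker_38B"),           # driver's voice to the rear
    ]
```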
 <Controlling the imaging range of the child camera>
 The HCU 1 may be configured so that the imaging direction and magnification of the child camera 25 can be changed based on a driver voice command acquired through the driver microphone 23. The imaging direction of the child camera 25 is expressed by a pitch angle, a roll angle, a yaw angle, and the like; changing the imaging direction corresponds to changing the camera's attitude angle, which can be realized, for example, by controlling a motor that adjusts the attitude of the child camera 25.
 With the above configuration, the driver can adjust the imaging range of the child camera 25 by voice, making it easier to check the child's facial expression and the like. Control of the imaging range of the child camera 25 is not limited to voice input and may also be executable via an operating member such as a haptic device.
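A recognized voice command ultimately becomes a pan/tilt/zoom adjustment of the camera's attitude state. The command vocabulary, step sizes, and zoom limits below are assumptions for illustration only.

```python
# Hypothetical translation of a recognized voice command into an update of the
# child camera's attitude/zoom state. The motor control itself is out of scope.

ZOOM_MIN, ZOOM_MAX = 1.0, 4.0  # assumed magnification limits

def apply_command(state: dict, command: str) -> dict:
    """state holds yaw_deg, pitch_deg, zoom; returns a new, updated state."""
    s = dict(state)
    if command == "pan_left":
        s["yaw_deg"] -= 10.0
    elif command == "pan_right":
        s["yaw_deg"] += 10.0
    elif command == "tilt_up":
        s["pitch_deg"] += 5.0
    elif command == "tilt_down":
        s["pitch_deg"] -= 5.0
    elif command == "zoom_in":
        s["zoom"] = min(ZOOM_MAX, s["zoom"] * 1.5)
    elif command == "zoom_out":
        s["zoom"] = max(ZOOM_MIN, s["zoom"] / 1.5)
    return s
```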
 <Additional remarks>
 The devices and systems described in the present disclosure, and the methods thereof, may be realized by a dedicated computer comprising a processor programmed to execute one or more functions embodied by a computer program. They may also be realized by dedicated hardware logic circuits, or by one or more dedicated computers configured as a combination of a processor executing a computer program and one or more hardware logic circuits. The computer program may be stored, as instructions to be executed by a computer, on a computer-readable non-transitory tangible storage medium. That is, the means and/or functions provided by the HCU 1 and the like can be provided by software recorded on a tangible memory device together with a computer that executes it, by software alone, by hardware alone, or by a combination thereof. For example, some or all of the functions of the HCU 1 may be realized as hardware, including modes that use one or more ICs. The HCU 1 may be realized using an MPU, a GPU, or a DFP (Data Flow Processor) instead of a CPU, may be realized by combining multiple types of processing units such as a CPU, an MPU, and a GPU, or may be realized as a system-on-chip (SoC). The various processing units may also be realized using an FPGA (Field-Programmable Gate Array) or an ASIC (Application Specific Integrated Circuit). The various programs need only be stored on a non-transitory tangible storage medium; diverse storage media such as an HDD (Hard-disk Drive), an SSD (Solid State Drive), flash memory, and an SD (Secure Digital) card can be adopted. Non-transitory tangible storage media also include ROMs such as an EPROM (Erasable Programmable Read Only Memory).
 A plurality of functions possessed by one component in the above embodiments may be realized by a plurality of components, and one function possessed by one component may be realized by a plurality of components. Conversely, a plurality of functions possessed by a plurality of components may be realized by one component, and one function realized by a plurality of components may be realized by one component. In addition, part of the configuration of the above embodiments may be omitted, and at least part of the configuration of one embodiment may be added to, or substituted for, the configuration of another embodiment.
 Various other forms, such as a system having the above-described attention object sharing device as a component, are also included in the scope of the present disclosure. For example, the scope of the present disclosure also includes a program for causing a computer to function as the attention object sharing device, and a non-transitory tangible storage medium, such as a semiconductor memory, on which this program is recorded.

Claims (10)

  1.  An attention object sharing device used in a vehicle provided with a child seat, which is a seat for a child to sit on, the device comprising:
     a child information acquisition unit (F1) that acquires, as information representing the state of the child seated in the child seat, the gaze direction of the child based on an image from a vehicle interior camera (25) whose imaging range includes at least the child's face;
     an interest reaction detection unit (F5) that detects the child's interest reaction to things outside the vehicle cabin based on at least one of the child's biological information, the child's voice, and the gaze direction;
     an attention object detection unit (F6) that detects an attention object, which is an object the child has taken an interest in, based on the gaze direction acquired by the child information acquisition unit and an image captured by an exterior camera (28) attached to the vehicle so as to image the outside of the vehicle;
     an object information acquisition unit (F7) that acquires verbalized information about the attention object from a database arranged inside or outside the vehicle; and
     a notification processing unit (F9) that notifies at least one of a driver-seat occupant and the child of the information acquired by the object information acquisition unit, using at least one of display of text corresponding to the information and audio output.
  2.  The attention object sharing device according to claim 1, wherein:
     the object information acquisition unit acquires, as the information about the attention object, the type of the attention object;
     the attention object detection unit acquires the direction in which the attention object exists relative to the vehicle; and
     the notification processing unit is configured to notify the driver-seat occupant of the type of the attention object and the direction in which the attention object exists.
  3.  The attention object sharing device according to claim 1 or 2, wherein:
     the attention object detection unit is configured to be capable of detecting, as the attention object, at least one of a pedestrian, another vehicle, an animal, an event, a natural phenomenon, and a picture of a predetermined character;
     the object information acquisition unit acquires, as the information about the attention object, at least one of the name, characteristics, manufacturer, country of origin, reason for occurrence, size, year of construction, role, and historical background of the attention object; and
     the notification processing unit is configured to display text corresponding to the information acquired by the object information acquisition unit on a predetermined display together with an image of the attention object.
  4.  The attention object sharing device according to any one of claims 1 to 3, comprising an interest target management unit (FB) that manages detection results for the attention object, wherein:
     the interest target management unit identifies, based on a history of past detection results for the attention object, interest categories, which are categories of objects the child is interested in; and
     when a newly detected attention object does not belong to any existing interest category, the interest target management unit is configured to notify the driver-seat occupant in a manner different from when the newly detected attention object belongs to an existing interest category.
  5.  The attention object sharing device according to any one of claims 1 to 4, comprising a recording processing unit (FA) that records the attention object in a predetermined storage device in association with position information, wherein:
     the notification processing unit is configured to announce the presence of an attention object to at least one of the child and the driver-seat occupant based on the vehicle having approached to within a predetermined distance of the attention object detected in the past.
  6.  The attention object sharing device according to any one of claims 1 to 5, comprising a driver-seat occupant information acquisition unit (F3) that acquires information indicating the state of the driver-seat occupant, including voice uttered by the driver-seat occupant, wherein:
     the notification processing unit is configured to output by audio the information acquired by the object information acquisition unit based on the driver-seat occupant information acquisition unit not having acquired speech from the driver-seat occupant even after a predetermined response waiting time has elapsed since the interest reaction detection unit detected the child's interest reaction.
  7.  The attention object sharing device according to claim 6, wherein:
     the driver-seat occupant information acquisition unit is configured to be capable of acquiring information indicating the driving load of the driver-seat occupant; and
     the notification processing unit:
     when the driver-seat occupant information acquisition unit has not acquired information indicating that the driving load of the driver-seat occupant is high, outputs by audio the information acquired by the object information acquisition unit on the condition that the driver-seat occupant information acquisition unit did not acquire speech from the driver-seat occupant even after the response waiting time elapsed following detection of the interest reaction; and
     when the driver-seat occupant information acquisition unit has acquired information indicating that the driving load of the driver-seat occupant is high, outputs the information acquired by the object information acquisition unit by audio from a speaker provided in the vehicle without waiting for the response waiting time to elapse.
  8.  The attention object sharing device according to claim 7, wherein:
     the notification processing unit, based on the interest reaction detection unit having detected the interest reaction, displays information about the attention object, including the information acquired by the object information acquisition unit, on each of a display visible to the driver-seat occupant and a display visible to the child; and
     when the driver-seat occupant information acquisition unit has not acquired information indicating that the driving load of the driver-seat occupant is high, the notification processing unit outputs by audio the information acquired by the object information acquisition unit on the condition that the driver-seat occupant information acquisition unit did not acquire speech from the driver-seat occupant even after the response waiting time elapsed following display of the information about the attention object on the displays.
  9.  The attention object sharing device according to any one of claims 1 to 8, wherein:
     the notification processing unit is configured to change the mode of notifying the child of the information about the attention object based on at least one of the child's age, the child's preferences, the elapsed time since boarding, and the remaining time to the destination.
  10.  An attention object sharing method, executed by at least one processor, for enabling a guardian to share things that a child seated in a child seat preset in a vehicle has shown interest in, the method comprising:
     acquiring the gaze direction of the child based on an image from a vehicle interior camera (25) whose imaging range includes at least the face of the child seated in the child seat (S101);
     detecting the child's interest reaction to things outside the vehicle cabin based on at least one of the child's biological information, the child's voice, and the gaze direction (S104);
     detecting an attention object, which is an object the child has taken an interest in, based on the acquired gaze direction of the child and an image captured by an exterior camera (28) attached to the vehicle so as to image the outside of the vehicle (S105);
     acquiring verbalized information about the attention object from a database arranged inside or outside the vehicle (S107); and
     notifying at least one of a driver-seat occupant and the child of the acquired information about the attention object, using at least one of display of text corresponding to the information and audio output (S110, S111, S113).
PCT/JP2021/044133 2020-12-11 2021-12-01 Attention object sharing device, and attention object sharing method WO2022124164A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202180081745.3A CN116547729A (en) 2020-12-11 2021-12-01 Attention object sharing device and attention object sharing method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020206064A JP7537259B2 (en) 2020-12-11 2020-12-11 Attention target sharing device, attention target sharing method
JP2020-206064 2020-12-11

Publications (1)

Publication Number Publication Date
WO2022124164A1 true WO2022124164A1 (en) 2022-06-16

Family

ID=81973944

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/044133 WO2022124164A1 (en) 2020-12-11 2021-12-01 Attention object sharing device, and attention object sharing method

Country Status (3)

Country Link
JP (1) JP7537259B2 (en)
CN (1) CN116547729A (en)
WO (1) WO2022124164A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4421614A1 (en) 2023-02-27 2024-08-28 Bayerische Motoren Werke Aktiengesellschaft Data processing device and method for providing a video to a passenger of a vehicle

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004061259A (en) * 2002-07-29 2004-02-26 Mazda Motor Corp System, method, and program for providing information
JP2014223887A (en) * 2013-05-17 2014-12-04 日産自動車株式会社 Vehicle cabin indoor monitoring device
JP2017067849A (en) * 2015-09-28 2017-04-06 株式会社デンソー Interactive device and interactive method
JP2020057066A (en) * 2018-09-28 2020-04-09 パナソニックIpマネジメント株式会社 Information presentation server, information presentation system and information presentation method

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4292646B2 (en) 1999-09-16 2009-07-08 株式会社デンソー User interface device, navigation system, information processing device, and recording medium
JP2005157535A (en) 2003-11-21 2005-06-16 Canon Inc Content extraction method, content extraction device, content information display method, and display device
JP4604597B2 (en) 2004-07-30 2011-01-05 トヨタ自動車株式会社 State estimating device, state estimating method, information providing device using the same, information providing method
JP4556586B2 (en) 2004-09-22 2010-10-06 トヨタ自動車株式会社 Driving assistance device
JP2007110272A (en) 2005-10-12 2007-04-26 Nissan Motor Co Ltd System, device, and method of providing information
JP2008045962A (en) 2006-08-14 2008-02-28 Nissan Motor Co Ltd Navigation device for vehicle
JP2013011483A (en) 2011-06-28 2013-01-17 Denso Corp Driving support device
JP2015021836A (en) 2013-07-18 2015-02-02 株式会社デンソー Navigation apparatus and route calculation device
JP6449504B1 (en) 2018-05-16 2019-01-09 オムロン株式会社 Information processing apparatus, information processing method, and information processing program
JP2020112733A (en) 2019-01-15 2020-07-27 株式会社デンソーテン Information processing apparatus and information processing method


Also Published As

Publication number Publication date
CN116547729A (en) 2023-08-04
JP7537259B2 (en) 2024-08-21
JP2022093012A (en) 2022-06-23

Similar Documents

Publication Publication Date Title
JP7288911B2 (en) Information processing device, mobile device, method, and program
JP7155122B2 (en) Vehicle control device and vehicle control method
JP7080598B2 (en) Vehicle control device and vehicle control method
US11993293B2 (en) Information processing apparatus, moving apparatus, and method, and program
US10908677B2 (en) Vehicle system for providing driver feedback in response to an occupant&#39;s emotion
US10317900B2 (en) Controlling autonomous-vehicle functions and output based on occupant position and attention
KR102669020B1 (en) Information processing devices, mobile devices, and methods, and programs
JPWO2019202881A1 (en) Information processing equipment, mobile devices, information processing systems, and methods, and programs
WO2021145131A1 (en) Information processing device, information processing system, information processing method, and information processing program
JP6083441B2 (en) Vehicle occupant emotion response control device
WO2021049219A1 (en) Information processing device, mobile device, information processing system, method, and program
JP6115577B2 (en) Vehicle occupant emotion response control device
JP2019131096A (en) Vehicle control supporting system and vehicle control supporting device
CN113260547A (en) Information processing apparatus, mobile apparatus, method, and program
JP2021128349A (en) Information processing device, information processing system, information processing method, and program
WO2022124164A1 (en) Attention object sharing device, and attention object sharing method
JP6213488B2 (en) Vehicle occupant emotion response control device
JP2016137202A (en) Control device for coping with feeling of passenger for vehicle
JP7238193B2 (en) Vehicle control device and vehicle control method
JP2018018184A (en) Vehicle event discrimination device
JP2024083068A (en) Information processing method and information processing apparatus
JP2024073110A (en) Control method and information processing apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21903268

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202180081745.3

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21903268

Country of ref document: EP

Kind code of ref document: A1