WO2019221070A1 - Information processing device, information processing method, and information processing program - Google Patents

Information processing device, information processing method, and information processing program

Info

Publication number
WO2019221070A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
driver
interest level
generation unit
processing apparatus
Prior art date
Application number
PCT/JP2019/018971
Other languages
French (fr)
Japanese (ja)
Inventor
知柔 今林
洋治 森
秀人 大前
悠介 鵜飼
Original Assignee
OMRON Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by OMRON Corporation
Publication of WO2019221070A1 publication Critical patent/WO2019221070A1/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/09 Arrangements for giving variable traffic instructions
    • G08G1/0962 Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0968 Systems involving transmission of navigation instructions to the vehicle

Definitions

  • the present invention relates to an information processing apparatus, an information processing method, and an information processing program.
  • Patent Literature 1 discloses a portable terminal device that identifies viewer-side trends in a display image, such as which portions and what content attract attention.
  • One embodiment of the present invention has been made in view of the above problems, and an object thereof is to realize an information processing apparatus capable of sensing the degree of interest of a driver.
  • an information processing apparatus includes a face information acquisition unit that acquires face information of a driver, a line-of-sight detection unit that detects a line of sight from the face information acquired from the face information acquisition unit, an object detection unit that detects each object present in the driver's field of view, a visual object determination unit that determines the object the driver is viewing from the detected line of sight and the detected objects, and an interest level information generation unit that generates interest level information indicating the driver's degree of interest in the object determined by the visual object determination unit.
  • an information processing apparatus that can sense the interest level of a driver can be realized.
  • the information processing apparatus 1 is an apparatus that performs face information acquisition processing, line-of-sight detection processing, object detection processing, visual object determination processing, interest level information generation processing, and the like.
  • the information processing apparatus 1 is mounted on a vehicle or the like, for example, but this does not limit the present embodiment.
  • the mobile terminal device 2 may be a smartphone, a tablet terminal, or the like.
  • FIG. 1 is a block diagram showing components of the information processing apparatus 1 according to the present embodiment.
  • the information processing apparatus 1 includes a face information acquisition unit 12, a line-of-sight detection unit 13, an object detection unit 11, a visual object determination unit 14, an interest level information generation unit 15, a recommendation engine (proposal information generation unit) 16, and a destination presentation unit (navigation system) 17.
  • the face information acquisition unit 12 acquires driver face information.
  • the driver's face information acquired by the face information acquisition unit 12 is, for example, a captured image of the driver.
  • an imaging device that captures the driver's face is arranged in a vehicle or the like on which the information processing apparatus 1 is mounted.
  • however, provided that it contains information sufficient for the line-of-sight detection unit 13 to detect the line of sight, the driver's face information acquired by the face information acquisition unit 12 may instead be positional information of each part of the face extracted from the captured image of the driver, or the like.
  • the method by which the face information acquisition unit 12 acquires the driver's face information is not particularly limited.
  • for example, the driver's face information can be acquired using a face information acquisition device that implements a known technique for detecting a person's face.
  • the face information acquisition unit 12 outputs the acquired face information to the line-of-sight detection unit 13.
  • the line-of-sight detection unit 13 detects the line of sight from the face information acquired from the face information acquisition unit 12. That is, the gaze detection unit 13 detects the gaze direction of the driver included in the acquired face image. Specifically, the line-of-sight detection unit 13 specifies the line-of-sight angle of the driver imaged in the face image that is the face information, and refers to the line-of-sight direction specifying information indicating the correspondence between the line-of-sight angle and the line-of-sight direction. A line-of-sight direction corresponding to the specified line-of-sight angle is specified.
  • the correspondence relationship between the gaze angle and the gaze direction may be set as appropriate, a predetermined number of gaze directions may be set, and the range of the corresponding gaze angle may be arbitrarily set.
  • the line-of-sight direction specifying information is set in advance. However, since it is desirable to set the correspondence between line-of-sight angle and line-of-sight direction based on the positions of the camera and the driver's face, it is desirable to correct the line-of-sight direction specifying information for each driver, or each time the line-of-sight direction is set.
  • the line-of-sight detection unit 13 may also identify a plurality of line-of-sight angles from a plurality of images captured during a predetermined period, and specify the line-of-sight direction corresponding to the average, mode, or median of those angles. This allows a reliable line-of-sight direction to be specified even when the driver's line-of-sight angle is unstable.
  • the line-of-sight detection unit 13 may specify the line-of-sight angle of the driver imaged in the image, and output the specified line-of-sight angle itself to the visual object determination unit 14.
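  • As an illustration of the angle-to-direction lookup and the averaging described above, the following Python sketch shows one possible realization. The angle bins, the direction labels, and the choice of the median are illustrative assumptions, not part of the disclosure.

```python
import statistics

# Hypothetical line-of-sight direction specifying information: each entry maps
# a horizontal gaze-angle range (degrees) to a direction label.
DIRECTION_TABLE = [
    ((-90.0, -30.0), "far_left"),
    ((-30.0, -10.0), "left"),
    ((-10.0, 10.0), "ahead"),
    ((10.0, 30.0), "right"),
    ((30.0, 90.0), "far_right"),
]

def angle_to_direction(angle_deg: float) -> str:
    """Look up the line-of-sight direction corresponding to a gaze angle."""
    for (lo, hi), label in DIRECTION_TABLE:
        if lo <= angle_deg < hi:
            return label
    return "out_of_range"

def robust_direction(angles_deg: list[float]) -> str:
    """Specify a direction from several per-frame angles (median for stability)."""
    return angle_to_direction(statistics.median(angles_deg))

# Angles measured over a short window of frames; the outlier barely shifts the median.
print(robust_direction([12.1, 14.0, 11.7, 55.0, 13.2]))  # -> "right"
```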
  • the method for detecting the user's line-of-sight direction from the image is not limited to the above-described method, and a known technique can be used.
  • the line-of-sight direction may be detected based on the position of the iris.
  • the line-of-sight detection method is not particularly limited. For example, the information processing apparatus 1 may be provided with a point light source (not shown), and the destination of the line-of-sight movement may be detected by capturing, with the imaging unit, corneal reflection images of light from the point light source over a predetermined time.
  • the type of the point light source is not particularly limited, and examples thereof include visible light and infrared light. For example, by using an infrared LED, it is possible to detect the line of sight without causing discomfort to the user.
  • if the line of sight does not move for a predetermined time or longer, it can be said that the same place is being gazed at.
  • the line-of-sight detection unit 13 may detect not only the driver's line of sight but also the state of the driver by referring to the state of the pupil of the driver and the number of blinks.
  • the method for detecting the state of the pupil is not particularly limited, and examples thereof include a method for detecting a circular pupil from an eye image using Hough transform.
  • the degree of concentration of the driver can be evaluated by detecting the size of the pupil. For example, when the pupil size is detected for a predetermined time and the pupil is enlarged within the predetermined time, it can be said that there is a high possibility that the driver is gazing at an object.
  • a threshold value may be set for the pupil size, and may be evaluated as “open” when the pupil size is equal to or larger than the threshold value, and “closed” when the pupil size is smaller than the threshold value.
  • the method for detecting the number of blinks is not particularly limited.
  • for example, there is a method of irradiating the driver's eyes with infrared light and detecting the difference in the amount of reflected infrared light between when the eyes are open and when they are closed.
  • humans tend to blink at stable intervals and at a low frequency when they are concentrating. Therefore, the driver's concentration level can be evaluated by detecting the number of blinks. For example, if the number of blinks is detected for a predetermined time and blinking occurs at stable intervals within that time, it can be said that there is a high possibility that the driver is gazing at an object.
  • the line-of-sight detection unit 13 may detect at least the line of sight of the driver, but preferably combines the line of sight of the driver and the state of the pupil, or the line of sight of the driver and the number of blinks. By combining the detection methods in this way, the line-of-sight detection unit 13 can suitably evaluate the driver's concentration when viewing a certain object.
  • driver concentration described above can be used as an index for evaluating the driver's interest level described later.
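  • A minimal sketch of how the pupil and blink cues above might be combined into a concentration score is given below; the dilation threshold, the units, and the equal weighting are assumptions for illustration only.

```python
import statistics

def concentration_score(pupil_sizes: list[float],
                        blink_times: list[float],
                        dilation_threshold: float = 4.0) -> float:
    """Combine pupil and blink cues into a rough 0..1 concentration score."""
    # Pupil cue: fraction of samples in which the pupil exceeds the assumed
    # dilation threshold (here, millimetres).
    pupil_cue = sum(s >= dilation_threshold for s in pupil_sizes) / len(pupil_sizes)

    # Blink cue: low frequency and stable inter-blink intervals suggest focus.
    intervals = [b - a for a, b in zip(blink_times, blink_times[1:])]
    if len(intervals) < 2:
        blink_cue = 1.0  # very few blinks in the window: treat as focused
    else:
        cv = statistics.stdev(intervals) / statistics.mean(intervals)
        blink_cue = max(0.0, 1.0 - cv)  # stable intervals -> cue near 1

    return 0.5 * pupil_cue + 0.5 * blink_cue  # equal weighting is an assumption

# Pupil samples (mm) and blink timestamps (s) over one observation window.
print(concentration_score([3.8, 4.2, 4.5, 4.1], [0.0, 3.1, 6.0, 9.2]))
```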
  • the line-of-sight detection unit 13 measures the gaze time for the object that the driver is viewing.
  • the line-of-sight detection unit 13 outputs the specified line-of-sight direction to the visual object determination unit 14.
  • the object detection unit 11 detects each object present in the driver's field of view.
  • the object detection unit 11 includes an imaging unit and an object extraction unit.
  • the imaging unit captures a scene outside the vehicle and generates a captured image.
  • FIG. 2 is a diagram illustrating an example of a captured image 100 showing a scenery outside the vehicle captured by the imaging unit of the object detection unit 11.
  • the captured image 100 captured by the imaging unit of the object detection unit 11 only needs to include an image of the scenery outside the vehicle, and may include an image of the scenery inside the vehicle.
  • Examples of the object included in the image of the scenery inside the vehicle include a fuel meter, a clock, a display screen of a navigation system, and a passenger.
  • the object extraction unit extracts an object from the captured image 100 acquired from the imaging unit.
  • the object extraction unit detects an object from the captured image 100 acquired from the imaging unit, and extracts the detected object.
  • the object extraction unit generates object information indicating the extracted object, and outputs the generated object information to the visual object determination unit.
  • the object extracting unit may add position information and size information indicating the position and size of each object in the captured image 100 as additional information of the object to the object information.
  • an object refers generally to a target object contained in a captured image. More specifically, examples of objects outside the vehicle include buildings and signboards, such as stores and office buildings, roads, traffic signs, traffic lights, people, and animals.
  • examples of the object in the vehicle include a fuel meter, a clock, a display screen of a navigation system, and a passenger.
  • the object information may be information indicating the pixel values of the pixel group in the object region of the captured image 100, or information indicating an object feature amount, such as edge information indicating the edge (contour) of the object. Further, the additional information of the object need not include both the position information and the size information, and may include at least one of them.
  • the object extraction unit includes an object detection unit and a region extraction unit, and more specifically, the object detection unit and the region extraction unit generate object information.
  • the object detection unit reads an image template that is a standard image of the object from the storage unit further included in the information processing apparatus 1. Then, the captured image and the image template are matched, and it is determined whether or not the captured image includes the same object as the matched image template. When the object detection unit determines that the same object as the matched image template is included, the object detection unit extracts the object from the captured image, and generates object information indicating the extracted object.
  • alternatively, the object detection unit reads a feature amount template indicating the feature amount of a standard image of the object from the storage unit, calculates the feature amount of the captured image, and matches the feature amount of the captured image against the feature amount template. It then determines whether the captured image includes the same object as the object having the feature amount indicated by the matched template. When the object detection unit determines that such an object is included, it extracts the object from the captured image and generates object information indicating the extracted object.
  • the object detection unit detects an object outside the vehicle or an object inside the vehicle.
  • the object detection unit may add object name information indicating the name of the object to the object information indicating the extracted object, as additional information of the object.
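  • As a concrete illustration of the template matching described above, the following sketch uses OpenCV's normalized cross-correlation matcher; the 0.8 acceptance threshold and the file names are assumptions, and a production system would typically search multiple scales and templates.

```python
import cv2

def detect_by_template(captured_bgr, template_bgr, threshold: float = 0.8):
    """Return (x, y, w, h) of the best template match, or None if no match."""
    result = cv2.matchTemplate(captured_bgr, template_bgr, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return None  # the captured image does not contain the template object
    h, w = template_bgr.shape[:2]
    x, y = max_loc
    # Object information: position and size of the object in the captured image.
    return (x, y, w, h)

captured = cv2.imread("captured_image.png")        # hypothetical file names
template = cv2.imread("store_sign_template.png")
print(detect_by_template(captured, template))
```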
  • the area extraction unit extracts a characteristic area (pixel group) from the query image using algorithms such as Saliency Map and segmentation (segmentation), identifies the extracted area as an object area, Information is generated.
  • when using a Saliency Map, the area extraction unit generates feature maps indicating the contrast of feature amounts such as color, luminance, and edges from the captured image, adds and averages the feature maps pixel by pixel to generate a saliency map (SM), and extracts a region having high contrast in the SM (for example, a pixel group whose pixel values are equal to or larger than a predetermined value).
  • Saliency Map is a model of human visual processing, and by extracting a region using Saliency Map, it is possible to automatically specify a region that is likely to be noticed by humans.
  • for segmentation, an area dividing process that integrates adjacent pixels, one that classifies pixel features, or one that uses edges, such as the technique called snakes, may be applied.
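  • The following is a minimal sketch of the feature-map averaging described above, using only luminance-contrast and edge features. A full Saliency Map implementation uses color opponency and multi-scale center-surround differences, so this is a simplified, assumption-laden stand-in.

```python
import cv2
import numpy as np

def simple_saliency_map(image_bgr: np.ndarray) -> np.ndarray:
    """Average normalized luminance-contrast and edge feature maps into an SM."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)

    # Luminance-contrast feature: difference from a heavily blurred copy.
    blurred = cv2.GaussianBlur(gray, (0, 0), sigmaX=8)
    luminance_contrast = np.abs(gray - blurred)

    # Edge feature: gradient magnitude via Sobel filters.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    edge = np.sqrt(gx * gx + gy * gy)

    def normalize(m: np.ndarray) -> np.ndarray:
        return (m - m.min()) / (m.max() - m.min() + 1e-6)

    # Add and average the per-pixel feature maps to obtain the saliency map.
    return (normalize(luminance_contrast) + normalize(edge)) / 2.0

def salient_region_mask(sm: np.ndarray, threshold: float = 0.6) -> np.ndarray:
    """Extract the high-contrast region: pixels at or above a predetermined value."""
    return (sm >= threshold).astype(np.uint8)
```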
  • the object detection unit and the region extraction unit may be configured to be realized using machine learning, similarly to the visual object determination unit 14 described later.
  • the visual recognition object determination unit 14 determines the object that the driver is viewing from the line of sight detected by the line-of-sight detection unit 13 and the object detected by the object detection unit 11. A specific process of the visual object determination unit 14 will be described with reference to FIG.
  • the visual recognition object determination unit 14 can acquire the coordinates of the image area of each object by referring to the position information of the vehicle in which the driver rides and map information for the vicinity of that position.
  • the visual recognition object determination unit 14 collates the acquired image area coordinates of each object with the line-of-sight information acquired from the line-of-sight detection unit 13, and determines whether the position coordinates of the driver's line of sight are within the coordinates of the image area of an object. For example, as illustrated in FIG. 2, when the position coordinates of the driver's line of sight enter the coordinates of the image area of the object OB, the visual object determination unit 14 starts counting the time during which the coordinates remain within the image area of the object OB. By comparing the position coordinates of the driver's line of sight with the coordinates of the image area of each object in this way, it is possible to suitably determine which object the driver is viewing. Further, in addition to the line-of-sight information, referring to the detection results for the pupil state and the number of blinks, as described in (Gaze detection unit), makes it possible to suitably determine which object the driver is concentrating on.
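  • A minimal sketch of this collation and dwell-time counting follows; the rectangular object regions and the timestamped gaze samples are assumed input formats, not formats specified by the disclosure.

```python
from dataclasses import dataclass

@dataclass
class ObjectRegion:
    name: str
    x: int  # top-left corner of the object's image area, in image coordinates
    y: int
    w: int
    h: int

    def contains(self, gx: float, gy: float) -> bool:
        """True if the gaze position coordinates fall inside this image area."""
        return self.x <= gx < self.x + self.w and self.y <= gy < self.y + self.h

def gaze_dwell_time(region: ObjectRegion,
                    gaze_samples: list[tuple[float, float, float]]) -> float:
    """Total time the gaze coordinates remain within the object's image area.

    gaze_samples: time-ordered (timestamp_sec, x, y) tuples.
    """
    dwell = 0.0
    for (t0, x0, y0), (t1, _, _) in zip(gaze_samples, gaze_samples[1:]):
        if region.contains(x0, y0):
            dwell += t1 - t0
    return dwell

sign = ObjectRegion("signboard_OB", x=400, y=120, w=150, h=90)
samples = [(0.0, 410, 130), (0.2, 430, 140), (0.4, 700, 300), (0.6, 420, 150)]
print(gaze_dwell_time(sign, samples))  # -> 0.4 (first two intervals are inside)
```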
  • the visual recognition object determination unit 14 may detect the line-of-sight movement time.
  • the method for detecting the line-of-sight movement time is not particularly limited. For example, there is a method of measuring the time from when part of a specific object appears in the captured image until the position coordinates of the driver's line of sight move into the coordinates of the image area of that object. There is also a method of measuring the time until the position coordinates of the driver's line of sight move from a specific object to another object.
  • the visual object determination unit 14 can preferably determine an object with a high interest level of the driver by referring to the measured time, a change in the facial expression of the driver, and the like.
  • the visual object determination unit 14 can also calculate the object information that the driver is visually recognizing by machine learning.
  • the specific configuration of the learning process for acquiring the object information viewed by the driver does not limit the present embodiment. For example, any one of the following machine learning methods, or a combination thereof, can be used.
  • the data may be processed and used in advance for input to the neural network.
  • in addition to arranging the data in a one-dimensional or multidimensional array, a method such as data augmentation can be used.
  • a convolutional neural network (CNN) including convolution processing may be used.
  • for example, a configuration may be adopted in which a convolution layer that performs a convolution operation is provided as one or more of the layers included in the neural network, and a filter operation (product-sum operation) is performed on the data input to that layer. When performing the filter operation, processing such as padding may be used together, and an appropriately set stride width may be adopted.
  • a multi-layer or super multi-layer neural network having several tens to several thousand layers may be used.
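  • As an illustration of the filter operation (product-sum operation) with padding and stride mentioned above, here is a minimal single-channel convolution in NumPy; real CNN layers add multiple channels, learned kernels, and bias terms.

```python
import numpy as np

def conv2d(x: np.ndarray, kernel: np.ndarray,
           stride: int = 1, pad: int = 1) -> np.ndarray:
    """Single-channel filter operation (product-sum) with zero padding and stride."""
    x = np.pad(x, pad, mode="constant")  # zero padding around the input
    kh, kw = kernel.shape
    oh = (x.shape[0] - kh) // stride + 1
    ow = (x.shape[1] - kw) // stride + 1
    out = np.empty((oh, ow), dtype=np.float32)
    for i in range(oh):
        for j in range(ow):
            patch = x[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)  # the product-sum operation
    return out

edge_kernel = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], dtype=np.float32)
feature_map = conv2d(np.random.rand(32, 32).astype(np.float32), edge_kernel, stride=2)
print(feature_map.shape)  # (16, 16): padding 1, stride 2 halves each dimension
```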
  • the interest level information generation unit 15 generates interest level information indicating the driver's degree of interest in the object determined by the visual object determination unit 14. For example, the interest level information generation unit 15 creates the interest level information by referring to at least one of the number of objects detected by the object detection unit 11, the gaze time and line-of-sight movement time detected by the line-of-sight detection unit 13, the pupil state, the number of blinks, and the degree of concentration.
  • the relationship between the various information referred to by the interest level information generation unit 15 and the generated interest level information includes, for example, the following.
  • the interest level information generation unit 15 determines that the smaller the ratio of the number of objects viewed by the driver to the number of objects in the captured image, the higher the degree of interest in the viewed object, and creates interest level information including the determination result.
  • for example, when there are ten objects in the captured image and the driver views one of them, the driver can be said to be more focused on the viewed object than when there are ten objects and the driver views five of them. In the former case, therefore, the degree of interest in the viewed object is determined to be higher than in the latter case.
  • the interest level information generation unit 15 determines that the interest level for an object is higher as the gaze time for a certain object is longer, and generates interest level information including the determination result.
  • the interest level information generation unit 15 determines that the shorter the time from when part of a specific object appears in the captured image until the position coordinates of the driver's line of sight move into the coordinates of the image area of that object, the higher the degree of interest in the object, and creates interest level information including the determination result.
  • the interest level information generation unit 15 determines that the longer the time until the position coordinates of the driver's line of sight move from a specific object to another object, the higher the degree of interest in the viewed object, and creates interest level information including the determination result.
  • the interest level information generation unit 15 determines that the larger the driver's pupil when viewing an object, the higher the degree of interest in the object, and creates interest level information including the determination result.
  • the interest level information generation unit 15 determines that the lower the frequency and the more stable the intervals of the driver's blinks when viewing an object, the higher the degree of interest in the object, and creates interest level information including the determination result.
  • the degree-of-interest information generation unit 15 determines that the degree of interest in the object is higher as the driver's degree of concentration when viewing a certain object is higher, and creates interest level information including the determination result.
  • the interest level information generation unit 15 may determine whether or not the interest level is high by providing predetermined thresholds for various types of information to be referred to.
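  • To make the tendencies listed above concrete, the following toy scoring function combines three of the cues; the weights, the 5-second gaze saturation, and the 3-second reaction window are illustrative assumptions.

```python
def interest_level(num_objects_in_image: int,
                   num_objects_viewed: int,
                   gaze_time_sec: float,
                   reaction_time_sec: float) -> float:
    """Rough 0..1 interest score from the cues described in the text."""
    # Fewer viewed objects relative to those visible -> more focused viewing.
    focus = 1.0 - num_objects_viewed / max(num_objects_in_image, 1)
    # Longer gaze time -> higher interest (saturating at an assumed 5 s).
    gaze = min(gaze_time_sec / 5.0, 1.0)
    # Quicker gaze movement to a newly appeared object -> higher interest.
    reaction = max(0.0, 1.0 - reaction_time_sec / 3.0)
    return (focus + gaze + reaction) / 3.0

# Mirrors the example above: 1 of 10 objects viewed scores higher than 5 of 10.
print(interest_level(10, 1, 4.0, 0.5) > interest_level(10, 5, 4.0, 0.5))  # True
```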
  • the interest level information generation unit 15 may calculate the interest level information by machine learning, similarly to the visual object determination unit 14.
  • the interest level information generation unit 15 outputs the created interest level information to the recommendation engine 16.
  • the recommendation engine 16 refers to the interest level information generated by the interest level information generation unit 15 and generates proposal information for the driver.
  • the recommendation engine 16 refers to information, included in the interest level information generated by the interest level information generation unit 15, on an object in which the driver's potential interest is indicated, and generates proposal information including proposals related to that object.
  • the proposal information generated by the recommendation engine 16 is notified to the driver by display or by voice via the destination presentation unit 17. The generated proposal information may also be output to the mobile terminal device 2, such as a smartphone.
  • when the interest level information includes information indicating that the driver is paying attention to a store or signboard belonging to the first category, the recommendation engine 16 includes, in the proposal information, a proposal to stop at at least one of a rest area and a toilet.
  • CASE 1 in Table 1 shows an example of an object noticed in a common sight of the town and proposal information corresponding to the object.
  • examples of stores belonging to the first category include, as shown in Table 1, convenience stores, fuel supply stations such as gas stations, power supply stations, and restaurants such as family restaurants.
  • when the interest level information includes information indicating that the driver is paying attention to a store or signboard belonging to the second category, the recommendation engine 16 includes, in the proposal information, a proposal to stop at at least one of a restaurant and a toilet.
  • CASE 2 in Table 1 shows an example of the store or signboard belonging to the second category noted in the common sight of the town, and proposal information corresponding to the store or signboard belonging to the second category.
  • examples of stores belonging to the second category include, as shown in Table 1, restaurants such as family restaurants and fast food stores, and shopping centers.
  • when the interest level information includes information indicating that the driver is paying attention to a predetermined object in the vehicle, the recommendation engine 16 includes, in the proposal information, a proposal to stop at at least one of a fuel supply station, a power supply station, and an automobile supply store.
  • CASE 3 in Table 1 shows an example of an object in the vehicle that has been noticed, and proposal information corresponding to that object.
  • Examples of the objects in the vehicle include a fuel meter, a clock, a display screen of a navigation system, and a passenger.
  • Fuel stations include gas stations and hydrogen stations.
  • when the interest level information includes information indicating that the driver is paying attention to a specific logo or mark, the recommendation engine 16 includes, in the proposal information, information regarding the specific logo or mark or regarding products related to the specific logo or mark.
  • CASE 4 in Table 1 shows an example of a display that is noticed in the town and proposal information corresponding to the display.
  • the information regarding the specific logo or mark or the product related to the specific logo or mark may be output to a mobile terminal device such as a smartphone after returning home after driving.
  • when the interest level information includes information indicating that the driver is paying attention to a specific vehicle type, the recommendation engine 16 includes, in the proposal information, information regarding automobiles belonging to that vehicle type.
  • CASE 5 in Table 1 shows an example of a car noticed on the road and proposal information corresponding to the car.
  • the vehicle type may be a manufacturer name or a model name of each manufacturer's automobile.
  • the destination presentation unit 17 refers to the proposal information for the driver generated by the recommendation engine 16 and presents a destination to the driver.
  • the destination presentation unit 17 may present the destination by voice, may present the destination as an image on the display unit, or may present the destination by combining them. In addition, the destination presentation unit 17 may present a route to the destination.
  • the destination presentation unit 17 provides voice guidance of the number of nearby spots, the distance to the spot, and the time.
  • the destination presentation unit 17 sets the destination and the shortest route.
  • the face information acquisition unit 12 acquires the driver's face information. Details of the processing are as described in (Face Information Acquisition Unit).
  • the face information acquisition unit 12 transmits the acquired face information to the line-of-sight detection unit 13, and proceeds to S12.
  • the line-of-sight detection unit 13 detects and extracts the line of sight from the acquired face image. Details of the processing are as described in (Gaze detection unit).
  • the line-of-sight detection unit 13 determines from the detected line of sight whether the driver's line of sight has focused on something. If it has (Yes in S13), the specified line-of-sight direction is transmitted to the visual object determination unit 14 and the process proceeds to S15. If the driver's line of sight has not focused (No in S13), the process of S11 is performed again.
  • the object detection unit 11 detects and extracts each object existing in the driver's field of view. Specifically, after the imaging unit captures a scene outside the vehicle, the object extraction unit extracts an object from the acquired captured image 100. Details of the processing are as described in (Object Detection Unit). The object extraction unit of the object detection unit 11 transmits the extracted object to the visual object determination unit, and proceeds to S15.
  • the visual recognition object determination unit 14 extracts an object that the driver is viewing from the visual line detected by the visual line detection unit 13 and the object detected by the object detection unit 11, and the process proceeds to S16.
  • the visual object determination unit 14 determines whether or not the driver has focused on the object that the extracted driver is viewing for a certain period of time, and if the driver has been focused on for a certain period of time (Yes in S16), Proceed to S17. Details of the processing are as described in (visual recognition object determination unit). If the driver has not paid attention for a certain time (No in S16), the process of S15 is performed again.
  • the interest level information generation unit 15 generates interest level information related to the driver's degree of interest. Details of the processing are as described in (Interest level information generation unit).
  • the interest level information generation unit 15 transmits the created interest level information to the recommendation engine 16 and proceeds to S18.
  • the recommendation engine 16 refers to the interest level information generated by the interest level information generation unit, and generates proposal information to the driver. Details of the processing are as described in (Recommendation Engine).
  • FIG. 4 is a block diagram showing components of the information processing apparatus 1 according to the present embodiment.
  • the information processing apparatus 1 further includes a position information acquisition unit 21 and a facial expression estimation unit 22.
  • the position information acquisition unit 21 acquires position information of a vehicle on which the driver is boarded.
  • the position information acquisition unit 21 includes at least one of a GPS antenna, a Wi-Fi (registered trademark) antenna, a geomagnetic compass, an acceleration sensor, and the like.
  • the position information acquisition unit 21 can acquire position information from a position detection unit (not shown) configured to be able to detect a direction in which the vehicle is facing, position information such as a current position, and the like.
  • the position information acquisition unit 21 may acquire the position of the vehicle from other than the position detection unit.
  • the position information of the vehicle may be acquired from a wireless communication base station.
  • the term “location information” refers to vehicle position information.
  • the position information acquisition unit 21 outputs the acquired position information of the vehicle on which the driver is boarded to the object detection unit 11.
  • the facial expression estimation unit 22 estimates the driver's facial expression from the driver's face information.
  • the expression estimation unit 22 estimates expression of the driver by acquiring expression information indicating the expression of the driver from the driver face information.
  • the facial expression estimation unit 22 estimates the driver's facial expression by using the feature amount of each part of the face as face information in addition to using the eye feature amount detected by the line-of-sight detection unit 13.
  • the driver's face information only needs to include information sufficient to estimate the driver's facial expression. Examples of such expression information include smile information indicating the degree of smiling, sadness information indicating the degree of sadness, and tension information indicating the degree of tension.
  • the method by which the facial expression estimation unit 22 estimates the facial expression of the driver is not particularly limited.
  • for example, the facial expression estimation unit 22 can estimate the driver's facial expression using a facial expression estimation device equipped with a known technique for estimating a person's facial expression, such as OKAO (registered trademark) Vision.
  • the facial expression estimation unit 22 refers to the facial information, estimates the facial expression of the driver, and calculates facial expression information.
  • the facial expression estimation unit 22 outputs facial expression information to the interest level information generation unit 15.
  • the facial expression estimation unit 22 may be provided in the vehicle or may be provided outside the vehicle such as a server, as will be described later.
  • the facial expression estimation unit 22 may be configured to calculate information related to the facial expression of the driver by machine learning, similar to the visual object determination unit 14.
  • the facial expression estimation unit 22 outputs the estimated emotion information to the interest level information generation unit 15.
  • the interest level information generation unit 15 generates the interest level information by further referring to the vehicle position information and the driver's facial expression.
  • the interest level information generation unit 15 determines that the greater the traffic volume at the position of the vehicle when the driver views an object, the higher the degree of interest in the object. In addition, the interest level information generation unit 15 may determine that the degree of interest in an object is high when the position of the vehicle at the time of viewing is a predetermined place. The interest level information generation unit 15 then creates interest level information including the determination result.
  • the interest level information generation unit 15 determines that the happier or more surprised the driver's expression when viewing an object, the higher the degree of interest in the object, and creates interest level information including the determination result.
  • the interest level information generation unit 15 may generate the interest level information by further referring to at least one of the driver's biological information, date / time information, and weather information.
  • the biometric information includes, for example, driver's brain information, vital information, and the like.
  • the brain information includes information such as a driver's brain wave.
  • the method for measuring the electroencephalogram is not particularly limited, and examples thereof include a method for measuring with a known electroencephalograph.
  • the vital information includes, for example, information such as the driver's pulse, blood pressure, body temperature, and sweating amount.
  • the method of measuring vitals is not particularly limited, and examples thereof include a method of measuring with a known pulse meter, blood pressure meter, thermometer, sweat amount detector, and the like.
  • the date / time information is date / time information when the driver is driving, and the weather information is weather information on the position of the vehicle acquired from the position information acquisition unit 21.
  • the interest level information generation unit 15 determines that the degree of interest in an object is high when the driver's brain waves when viewing the object are within a predetermined frequency range, and creates interest level information including the determination result.
  • the interest level information generation unit 15 determines that the greater the pulse rate of the driver when the driver visually recognizes the object, the higher the interest level with respect to the object, and creates interest level information including the determination result.
  • the interest level information generation unit 15 determines that the higher the blood pressure of the driver when the driver visually recognizes the object, the higher the interest level for the object, and creates the interest level information including the determination result.
  • the interest level information generation unit 15 determines that the higher the body temperature of the driver when the driver visually recognizes the object, the higher the interest level for the object, and creates the interest level information including the determination result.
  • the degree-of-interest information generation unit 15 determines that the degree of interest in the object is higher as the amount of sweating of the driver when the driver visually recognizes the object is larger, and creates the degree-of-interest information including the determination result.
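  • The vital-sign tendencies above can likewise be folded into a score; the baselines and scaling constants in the sketch below are assumptions, since the text only states the direction of each relationship.

```python
def vital_interest_cue(pulse_bpm: float,
                       body_temp_c: float,
                       sweat_rate: float,
                       baseline_pulse: float = 70.0,
                       baseline_temp: float = 36.5) -> float:
    """Rough 0..1 cue from vitals measured while the driver views an object."""
    def clamp(v: float) -> float:
        return max(0.0, min(v, 1.0))

    pulse_cue = clamp((pulse_bpm - baseline_pulse) / 30.0)  # higher pulse -> higher
    temp_cue = clamp(body_temp_c - baseline_temp)           # higher temperature -> higher
    sweat_cue = clamp(sweat_rate)                           # assumed pre-normalized 0..1
    return (pulse_cue + temp_cue + sweat_cue) / 3.0

print(vital_interest_cue(pulse_bpm=95.0, body_temp_c=36.9, sweat_rate=0.3))
```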
  • the recommendation engine 16 refers to information on objects in which the driver's potential interest is suggested, included in the interest level information that the interest level information generation unit 15 generates by further referring to the vehicle position information and the driver's facial expression, and generates proposal information including proposals related to those objects.
  • examples of the proposal information include those described in the first embodiment.
  • the recommendation engine 16 may acquire accident information indicating that an accident has occurred, and generate accident information including object information indicating the objects viewed by the driver in a predetermined period before the accident occurrence time indicated by the acquired accident information.
  • the accident information may be generated by an accident information generation unit mounted on the vehicle when an impact sensor (acceleration sensor) mounted on the vehicle detects an impact of a predetermined magnitude or more. A configuration may also be adopted in which such impact information is transmitted from each vehicle to a server, and the server generates the accident information.
  • the recommendation engine 16 may include date and time information when the accident occurs, vehicle position information, and weather information in the accident information.
  • the generated accident information can be used as information for skill evaluation of safe driving.
  • the position information acquisition unit 21 acquires the position information of the vehicle on which the driver is boarded. Details of the processing are as described in (Position information acquisition unit).
  • the position information acquisition unit 21 transmits the acquired position information of the vehicle on which the driver is boarded to the object detection unit 11, and proceeds to S14.
  • the facial expression estimation unit 22 estimates the driver's facial expression from the driver's face information, and proceeds to S23. Details of the processing are as described in (Expression estimation unit).
  • the facial expression estimation unit 22 measures the emotional change to the visual recognition object, transmits the measured emotion information to the interest level information generation unit 15, and proceeds to S17.
  • FIG. 6 is a block diagram illustrating a configuration of an information processing system according to the third embodiment of the present invention. As shown in FIG. 6, the information processing system includes a vehicle 3, a server 4, and a mobile terminal device 2.
  • the vehicle 3 includes an in-vehicle camera 31 and a destination presentation unit 17.
  • in the first embodiment, the destination presentation unit 17 is included in the information processing apparatus 1, but in the present embodiment, the destination presentation unit 17 is provided in the vehicle 3.
  • the server 4 includes a face information acquisition unit 12, a line-of-sight detection unit 13, an object detection unit 11, a visual object determination unit 14, an interest level information generation unit 15, and a recommendation engine 16.
  • the server 4 according to the present embodiment includes a configuration other than the destination presentation unit 17 among the configurations included in the information processing apparatus 1 according to the first embodiment.
  • the server 4 may be configured to further include at least one of the position information acquisition unit 21 and the facial expression estimation unit 22 described in the second embodiment.
  • the server and the information processing apparatus include a communication unit (not shown), and information exchange between the server and the information processing apparatus is performed via the communication unit.
  • the in-vehicle camera 31 has the same configuration as that of the imaging unit included in the object detection unit 11 described in the first embodiment.
  • the in-vehicle camera 31 captures the scenery outside the vehicle to acquire surrounding environment information, and outputs the acquired surrounding environment information to the object detection unit 11 of the server 4. Further, the in-vehicle camera 31 captures the driver's face to acquire face information, and outputs the acquired face information to the face information acquisition unit 12 of the server 4.
  • the recommendation engine 16 outputs the generated proposal information to the driver to the destination presentation unit 17 as in the first and second embodiments.
  • the processing can be performed via the server 4.
  • the driver's interest level can be sensed.
  • the control block of the information processing apparatus 1 may be realized by a logic circuit (hardware) formed in an integrated circuit (IC chip) or the like, or may be realized by software.
  • in the latter case, the information processing apparatus 1 includes a computer that executes the instructions of a program, which is software realizing each function.
  • the computer includes, for example, at least one processor (control device) and at least one computer-readable recording medium storing the program.
  • the processor reads the program from the recording medium and executes the program, thereby achieving the object of the present invention.
  • as the processor, for example, a CPU (Central Processing Unit) can be used.
  • as the recording medium, a "non-transitory tangible medium" such as a ROM (Read Only Memory), a tape, a disk, a card, a semiconductor memory, or a programmable logic circuit can be used.
  • a RAM (Random Access Memory) for loading the program may be further provided.
  • the program may be supplied to the computer via an arbitrary transmission medium (such as a communication network or a broadcast wave) that can transmit the program.
  • one embodiment of the present invention can also be realized in the form of a data signal embedded in a carrier wave, in which the program is embodied by electronic transmission.
  • the information processing apparatus may be realized by a computer.
  • the information processing apparatus can be realized by causing a computer to operate as each unit (software element) included in the information processing apparatus.
  • the control program for the information processing apparatus to be realized in this way and a computer-readable recording medium on which the control program is recorded also fall within the scope of the present invention.
  • An information processing apparatus according to an aspect of the present invention includes: a face information acquisition unit that acquires face information of a driver; a line-of-sight detection unit that detects a line of sight from the face information acquired from the face information acquisition unit; an object detection unit that detects each object present in the driver's field of view; a visual object determination unit that determines the object the driver is viewing from the line of sight detected by the line-of-sight detection unit and the objects detected by the object detection unit; and an interest level information generation unit that generates interest level information indicating the driver's degree of interest in the object determined by the visual object determination unit.
  • An information processing apparatus further includes a position information acquisition unit that acquires position information of a vehicle on which the driver is boarded, and the interest level information generation unit further refers to the position information of the vehicle. Interest level information may be generated.
  • the information processing apparatus further includes a facial expression estimation unit that estimates the facial expression of the driver from the facial information of the driver, and the interest level information generation unit further refers to the facial expression of the driver, Interest level information may be generated.
  • the interest level information generation unit may generate the interest level information by further referring to the biological information of the driver.
  • the interest level information generation unit may generate the interest level information by further referring to the date / time information.
  • the interest level information generation unit may generate the interest level information by further referring to the weather information.
  • the information processing apparatus may further include a proposal information generation unit that generates the proposal information for the driver with reference to the interest level information generated by the interest level information generation unit.
  • in the information processing apparatus according to an aspect of the present invention, the proposal information generation unit may acquire accident information indicating that an accident has occurred, and generate accident information including object information indicating the objects viewed by the driver in a predetermined period before the accident occurrence time indicated by the accident information.
  • in the information processing apparatus according to an aspect of the present invention, when the interest level information generated by the interest level information generation unit includes information indicating that the driver is paying attention to a store or a signboard belonging to the first category, the proposal information generation unit may include, in the proposal information, a proposal to stop at at least one of a rest place and a toilet.
  • in the information processing apparatus according to an aspect of the present invention, when the interest level information generated by the interest level information generation unit includes information indicating that the driver is paying attention to a store or a signboard belonging to the second category, the proposal information generation unit may include, in the proposal information, a proposal to stop at at least one of a restaurant and a toilet.
  • in the information processing apparatus according to an aspect of the present invention, when the interest level information generated by the interest level information generation unit includes information indicating that the driver is paying attention to a predetermined object in the vehicle, the proposal information generation unit may include, in the proposal information, a proposal to stop at at least one of a fuel supply station, a power supply station, and an automobile supply store.
  • the information processing apparatus may further include a destination presentation unit that refers to the proposal information and presents the destination to the driver.
  • in the information processing apparatus according to an aspect of the present invention, when the interest level information generated by the interest level information generation unit includes information indicating that the driver is paying attention to a specific logo or mark, the proposal information generation unit may include, in the proposal information, information related to the specific logo or mark or a product related to the specific logo or mark.
  • in the information processing apparatus according to an aspect of the present invention, when the interest level information includes information indicating that the driver is paying attention to a specific vehicle type, the proposal information generation unit may include, in the proposal information, information regarding the specific vehicle type or an automobile belonging to the specific vehicle type.
  • An information processing method according to an aspect of the present invention is an information processing method executed by an information processing apparatus, and includes: a face information acquisition step of acquiring face information of a driver; a line-of-sight detection step of detecting a line of sight from the face information acquired in the face information acquisition step; an object detection step of detecting each object present in the driver's field of view; a visual object determination step of determining the object the driver is viewing from the line of sight detected in the line-of-sight detection step and the objects detected in the object detection step; and an interest level information generation step of generating interest level information indicating the driver's degree of interest in the object determined in the visual object determination step.
  • the program according to an aspect of the present invention is an information processing program for causing a computer to function as an information processing apparatus, and may cause the computer to function as the interest level information generation unit.

Abstract

Implemented is an information processing device capable of sensing the interest level of a driver. The present invention involves: acquiring face information about a driver; detecting the visual line from the acquired face information; detecting objects existing in the visual field of the driver; determining an object that is being visually recognized by the driver on the basis of the detected visual line and the detected objects; and generating interest level information indicating the interest level of the driver in the determined object.

Description

Information processing apparatus, information processing method, and information processing program
 The present invention relates to an information processing apparatus, an information processing method, and an information processing program.
 Conventionally, there are methods of tracking the spots to which a consumer is paying attention by analyzing the gaze of a specific person using a fixed-point camera and by analyzing brain information with a wearable electroencephalograph. For example, Patent Literature 1 discloses a portable terminal device that identifies viewer-side trends, such as which portions and what content of a displayed image attract attention.
Japanese Unexamined Patent Application Publication No. 2009-5094 (JP 2009-5094 A)
 On the other hand, the inventors found that the ability to evaluate how much interest the driver of a moving body such as an automobile shows in objects, such as buildings, that the driver views would be valuable from the viewpoints of effective advertising and urban development.
 Such a technique cannot be realized with a conventional configuration using a fixed camera.
 One embodiment of the present invention has been made in view of the above problems, and an object thereof is to realize an information processing apparatus capable of sensing the degree of interest of a driver.
 In order to solve the above problems, an information processing apparatus according to an aspect of the present invention includes: a face information acquisition unit that acquires face information of a driver; a line-of-sight detection unit that detects a line of sight from the face information acquired from the face information acquisition unit; an object detection unit that detects each object present in the driver's field of view; a visual object determination unit that determines the object the driver is viewing from the line of sight detected by the line-of-sight detection unit and the objects detected by the object detection unit; and an interest level information generation unit that generates interest level information indicating the driver's degree of interest in the object determined by the visual object determination unit.
 According to one aspect of the present invention, an information processing apparatus that can sense the interest level of a driver can be realized.
FIG. 1 is a block diagram showing the components of the information processing apparatus according to Embodiment 1 of the present invention.
FIG. 2 is a diagram showing an example of a captured image of the scenery outside the vehicle taken by the imaging unit in the information processing apparatus according to Embodiment 1 of the present invention.
FIG. 3 is a flowchart showing an example of the processing flow of the information processing apparatus according to Embodiment 1 of the present invention.
FIG. 4 is a block diagram showing the components of the information processing apparatus according to Embodiment 2 of the present invention.
FIG. 5 is a flowchart showing the processing flow of the information processing apparatus according to Embodiment 2 of the present invention.
FIG. 6 is a block diagram showing the components of the information processing system according to Embodiment 3 of the present invention.
 <Embodiment 1>
 Hereinafter, an embodiment of the present invention will be described in detail. Where a configuration in one of the following items (embodiments) is the same as a configuration described in another item, its description may be omitted. For convenience of explanation, members having the same functions as the members shown in each item are given the same reference numerals, and their description is omitted as appropriate.
 [Information processing apparatus]
 The information processing apparatus 1 according to the present embodiment is an apparatus that performs face information acquisition processing, line-of-sight detection processing, object detection processing, visual object determination processing, interest level information generation processing, and the like. The information processing apparatus 1 is mounted on, for example, a vehicle, but this does not limit the present embodiment.
 The mobile terminal device 2 may be a smartphone, a tablet terminal, or the like.
 FIG. 1 is a block diagram showing the components of the information processing apparatus 1 according to the present embodiment.
 The information processing apparatus 1 includes a face information acquisition unit 12, a line-of-sight detection unit 13, an object detection unit 11, a visual object determination unit 14, an interest level information generation unit 15, a recommendation engine (proposal information generation unit) 16, and a destination presentation unit (navigation system) 17.
(Face information acquisition unit)
The face information acquisition unit 12 acquires the driver's face information. The driver's face information acquired by the face information acquisition unit 12 is, for example, a captured image of the driver. To obtain this captured image, an imaging device that captures images of the driver's face is installed in the vehicle or the like in which the information processing apparatus 1 is mounted.
However, as long as it contains sufficient information for the line-of-sight detection unit 13 to detect the line of sight, the driver's face information acquired by the face information acquisition unit 12 may instead be, for example, position information of each facial part extracted from a captured image of the driver. The method by which the face information acquisition unit 12 acquires the driver's face information is not particularly limited; for example, the face information can be acquired using a face information acquisition device equipped with a known technique for capturing a person's face. The face information acquisition unit 12 outputs the acquired face information to the line-of-sight detection unit 13.
(Line-of-sight detection unit)
The line-of-sight detection unit 13 detects the line of sight from the face information acquired from the face information acquisition unit 12. That is, the line-of-sight detection unit 13 detects the gaze direction of the driver included in the acquired face image. Specifically, the line-of-sight detection unit 13 identifies the gaze angle of the driver captured in the face image serving as the face information, and identifies the gaze direction corresponding to the identified gaze angle by referring to gaze-direction specifying information that indicates the correspondence between gaze angles and gaze directions.
In the gaze-direction specifying information, the correspondence between gaze angles and gaze directions may be set as appropriate: a predetermined number of gaze directions may be defined, and the range of gaze angles corresponding to each direction may be set arbitrarily.
The gaze-direction specifying information is set in advance. However, since it is desirable to set the correspondence between gaze angle and gaze direction based on the position of the camera and the position of the driver's face, it is desirable to correct the gaze-direction specifying information for each driver or each time the gaze direction is set.
The line-of-sight detection unit 13 may also identify a plurality of gaze angles from a plurality of images captured during a predetermined period, and identify the gaze direction corresponding to the average, mode, or median of those gaze angles. This makes it possible to identify the driver's gaze direction with high reliability even when the driver's gaze angle is unstable.
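As an illustration of this aggregation step, the following Python sketch maps gaze-angle samples to a gaze direction; the angle ranges, direction labels, and function names are hypothetical examples rather than values from this disclosure.

```python
import statistics

# Hypothetical gaze-direction specifying information: horizontal angle
# ranges (degrees) mapped to named gaze directions.
GAZE_DIRECTION_TABLE = [
    ((-45.0, -15.0), "left"),
    ((-15.0, 15.0), "front"),
    ((15.0, 45.0), "right"),
]

def direction_from_angle(angle_deg: float) -> str:
    """Look up the gaze direction corresponding to a gaze angle."""
    for (low, high), direction in GAZE_DIRECTION_TABLE:
        if low <= angle_deg < high:
            return direction
    return "out_of_range"

def robust_gaze_direction(angle_samples: list[float]) -> str:
    """Identify a reliable gaze direction from several per-frame angles
    by aggregating with the median (the mean or mode would also work)."""
    return direction_from_angle(statistics.median(angle_samples))

# Example: noisy per-frame estimates still resolve to "front".
print(robust_gaze_direction([2.1, -3.4, 40.0, 1.0, 0.5]))
```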
Alternatively, the line-of-sight detection unit 13 may identify the gaze angle of the driver captured in the image and output the identified gaze angle itself to the visual object determination unit 14.
Note that the method for detecting the user's gaze direction from an image is not limited to the above; a known technique can be used. For example, the gaze direction may be detected based on the position of the iris.
The line-of-sight detection method is not particularly limited; one example is to provide the information processing apparatus 1 with a point light source (not shown) and detect where the user's line of sight moves by having the imaging unit capture, for a predetermined time, the corneal reflection image of light from the point light source. The type of point light source is not particularly limited, and visible light and infrared light can be used; using an infrared LED, for example, makes it possible to detect the line of sight without causing the user discomfort. In gaze detection, if the line of sight does not move for a predetermined time or longer, the user can be regarded as gazing at the same place.
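The fixation criterion just described (the line of sight does not move for a predetermined time or longer) can be sketched as follows; the dispersion radius and dwell threshold are illustrative assumptions.

```python
# A minimal fixation-detection sketch, assuming gaze points arrive as
# (timestamp_seconds, x, y) tuples in image coordinates.
import math

FIXATION_RADIUS_PX = 20.0   # gaze treated as "not moving" within this radius
FIXATION_MIN_SEC = 1.0      # predetermined time required for a fixation

def is_fixating(gaze_points: list[tuple[float, float, float]]) -> bool:
    """Return True if the most recent gaze points stayed within a small
    radius for at least FIXATION_MIN_SEC (i.e. the same place is watched)."""
    if len(gaze_points) < 2:
        return False
    t_end, x0, y0 = gaze_points[-1]
    for t, x, y in reversed(gaze_points):
        if math.hypot(x - x0, y - y0) > FIXATION_RADIUS_PX:
            return False  # gaze left the region before enough time passed
        if t_end - t >= FIXATION_MIN_SEC:
            return True   # stayed in the region for the required time
    return False
```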
The line-of-sight detection unit 13 may detect not only the driver's line of sight but also the driver's state, referring to the state of the driver's pupils, the number of blinks, and the like.
The method for detecting the state of the pupil is not particularly limited; one example is detecting the circular pupil from an eye image using the Hough transform. In general, human pupils tend to dilate when a person is concentrating, so the degree of the driver's concentration can be evaluated by detecting the pupil size. For example, if the pupil size is detected over a predetermined time, the periods within that time during which the pupil is dilated are likely to be periods in which the driver is gazing at some object. A threshold may be set for the pupil size, evaluating the pupil as "open" when its size is at or above the threshold and "closed" when its size is below the threshold.
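A minimal sketch of the Hough-transform pupil detection and the open/closed thresholding described above, using OpenCV; all parameter values are illustrative assumptions.

```python
import cv2
import numpy as np

PUPIL_OPEN_THRESHOLD_PX = 18  # radius threshold separating "open"/"closed"

def pupil_state(eye_image_gray: np.ndarray) -> str:
    """Detect the pupil as a circle via the Hough transform and classify
    it as open or closed by its radius."""
    blurred = cv2.medianBlur(eye_image_gray, 5)
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
        param1=100, param2=30, minRadius=5, maxRadius=40,
    )
    if circles is None:
        return "not_detected"
    radius = float(circles[0, 0, 2])  # radius of the strongest circle
    return "open" if radius >= PUPIL_OPEN_THRESHOLD_PX else "closed"
```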
The method for detecting the number of blinks is also not particularly limited; one example is irradiating the driver's eyes with infrared light and detecting the difference in the amount of reflected infrared light between when the eyes are open and when they are closed. In general, when a person is concentrating, blinks tend to occur at a low frequency and at stable intervals, so the driver's degree of concentration can be evaluated by detecting the number of blinks. For example, if the number of blinks is detected over a predetermined time and the blinks occur at stable intervals within that time, it is likely that the driver is gazing at some object.
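The blink criterion (low frequency, stable intervals) might be evaluated as in the following sketch; both thresholds are illustrative assumptions rather than values from this disclosure.

```python
import statistics

MAX_BLINKS_PER_MINUTE = 15.0   # assumed "low frequency" bound
MAX_INTERVAL_CV = 0.3          # "stable intervals": low coefficient of variation

def blinks_suggest_concentration(blink_times_sec: list[float]) -> bool:
    """Evaluate blink timestamps collected over a predetermined period:
    True when blinking is infrequent and the intervals are stable."""
    if len(blink_times_sec) < 3:
        return False
    duration_min = (blink_times_sec[-1] - blink_times_sec[0]) / 60.0
    if duration_min <= 0:
        return False
    rate = (len(blink_times_sec) - 1) / duration_min
    intervals = [b - a for a, b in zip(blink_times_sec, blink_times_sec[1:])]
    cv = statistics.stdev(intervals) / statistics.mean(intervals)
    return rate <= MAX_BLINKS_PER_MINUTE and cv <= MAX_INTERVAL_CV
```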
The line-of-sight detection unit 13 need only detect at least the driver's line of sight, but it is preferable to combine the line of sight with the pupil state, or the line of sight with the number of blinks. By combining detection methods in this way, the line-of-sight detection unit 13 can suitably evaluate the driver's degree of concentration while viewing a certain object.
Note that the above-described degree of concentration can be used as an index for evaluating the driver's interest level, described later.
The line-of-sight detection unit 13 measures the gaze time for the object the driver is viewing.
The line-of-sight detection unit 13 outputs the identified gaze direction to the visual object determination unit 14.
(Object detection unit)
The object detection unit 11 detects each object present in the driver's field of view.
The object detection unit 11 includes an imaging unit and an object extraction unit. The imaging unit captures the scenery outside the vehicle and generates a captured image.
FIG. 2 is a diagram showing an example of a captured image 100 of the scenery outside the vehicle captured by the imaging unit of the object detection unit 11. Note that the captured image 100 need only mainly contain an image of the scenery outside the vehicle, and may also contain an image of the scenery inside the vehicle. Objects included in the image of the scenery inside the vehicle include, for example, a fuel meter, a clock, the display screen of a navigation system, and a passenger.
The object extraction unit extracts objects from the captured image 100 acquired from the imaging unit.
The object extraction unit detects objects in the captured image 100 acquired from the imaging unit and extracts the detected objects. The object extraction unit generates object information indicating the extracted objects and outputs the generated object information to the visual object determination unit.
When generating the object information, the object extraction unit may also attach position information and size information, indicating the position and size of each object in the captured image 100, to the object information as additional information about that object.
In this specification, an object refers generally to a target contained in a captured image. As more specific examples, objects outside the vehicle include buildings such as stores and office buildings, signboards, roads, traffic signs, traffic lights, people, and animals, while objects inside the vehicle include, for example, a fuel meter, a clock, the display screen of a navigation system, and a passenger.
The object information may be information indicating the pixel values of the pixel group in the object's region in the captured image 100, or may be information indicating a feature amount of the object, such as edge information indicating the object's edges (contour). The additional information about an object need not include both position information and size information; it suffices to include at least one of them.
As one example, the object extraction unit includes an object detection unit and a region extraction unit; more specifically, the object detection unit and the region extraction unit generate the object information.
The object detection unit reads an image template, which is a standard image of an object, from a storage unit further included in the information processing apparatus 1. It then performs matching between the captured image and the image template and determines whether the captured image contains the same object as the matched image template. When the object detection unit determines that the same object as the matched image template is contained, it extracts that object from the captured image and generates object information indicating the extracted object.
Alternatively, the object detection unit reads a feature amount template, which indicates the feature amounts of a standard image of an object, from the storage unit, calculates the feature amounts of the captured image, and performs matching between the feature amounts of the captured image and the feature amount template. It then determines whether the captured image contains the same object as the object having the feature amounts indicated by the matched feature amount template. When the object detection unit determines that such an object is contained, it extracts that object from the captured image and generates object information indicating the extracted object.
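A minimal sketch of the image-template matching step with OpenCV, assuming grayscale inputs; the acceptance threshold is an illustrative assumption. The returned bounding box can serve as the position/size additional information described above.

```python
import cv2
import numpy as np

MATCH_THRESHOLD = 0.8  # assumed normalized-correlation acceptance threshold

def detect_by_template(captured_gray: np.ndarray,
                       template_gray: np.ndarray):
    """Return (x, y, w, h) of the matched object, or None if no match."""
    result = cv2.matchTemplate(captured_gray, template_gray,
                               cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < MATCH_THRESHOLD:
        return None  # the template's object is judged not to be present
    h, w = template_gray.shape[:2]
    return (max_loc[0], max_loc[1], w, h)
```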
The object detection unit detects the objects outside the vehicle or the objects inside the vehicle described above.
When the name of the object indicated by a template is associated with the image template or feature amount template, the object detection unit may attach object name information indicating the object's name to the object information of the extracted object as additional information.
The region extraction unit extracts a characteristic region (pixel group) from the query image using an algorithm such as a Saliency Map or region division processing (segmentation), identifies the extracted region as an object's region, and generates object information.
When using a Saliency Map, for example, the region extraction unit generates, from the captured image, feature maps indicating the contrast of feature amounts such as color, luminance, and edges, averages the pixels of the feature maps to generate a saliency map (SM), and extracts high-contrast regions in the SM (for example, pixel groups whose pixel values are at or above a predetermined value). The Saliency Map models human visual processing, and extracting regions with a Saliency Map makes it possible to automatically identify regions that humans are likely to pay attention to.
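The following is a simplified sketch in the spirit of this feature-map averaging, using luminance and edge contrast only (color is omitted for brevity); it is not the exact algorithm, and the threshold is an illustrative assumption.

```python
import cv2
import numpy as np

def simple_saliency_regions(image_bgr: np.ndarray,
                            thresh: float = 0.6) -> np.ndarray:
    """Return a binary mask of salient (high-contrast) regions."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)

    # Luminance contrast: deviation from a heavily blurred background.
    lum_contrast = np.abs(gray - cv2.GaussianBlur(gray, (31, 31), 0))

    # Edge feature map: gradient magnitude via Sobel filters.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    edge_map = cv2.magnitude(gx, gy)

    # Normalize each feature map to [0, 1] and average them (the "SM").
    maps = [cv2.normalize(m, None, 0.0, 1.0, cv2.NORM_MINMAX)
            for m in (lum_contrast, edge_map)]
    sm = sum(maps) / len(maps)

    return (sm >= thresh).astype(np.uint8)  # 1 where saliency is high
```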
Specifically, applicable region division processes include region division by merging neighboring pixels, region division by classifying pixel feature amounts, and region division by an edge-based technique called snakes.
The object detection unit and the region extraction unit may also be configured to be realized using machine learning, like the visual object determination unit 14 described later.
(Visual object determination unit)
The visual object determination unit 14 determines the object the driver is viewing from the line of sight detected by the line-of-sight detection unit 13 and the objects detected by the object detection unit 11. Specific processing of the visual object determination unit 14 will be described with reference to FIG. 2.
For example, assume that images of objects are arranged in the captured image 100 as shown in FIG. 2. The visual object determination unit 14 can acquire the coordinates of each object's image region by referring to the position information of the vehicle the driver is riding in and map information for the vicinity of that position. The visual object determination unit 14 compares the acquired coordinates of each object's image region with the line-of-sight information acquired from the line-of-sight detection unit 13 to determine within which object's image region the position coordinates of the driver's line of sight fall. For example, as shown in FIG. 2, when it is determined that the position coordinates of the gaze point A of the driver's line of sight are within the coordinates of the image region of the object OB, the visual object determination unit 14 starts counting the time during which the position coordinates of the driver's line of sight remain within the coordinates of the image region of the object OB. Comparing the position coordinates of the driver's line of sight with the coordinates of object image regions in this way makes it possible to suitably determine which object the driver is viewing. Furthermore, in addition to the line-of-sight information, referring to the detection results for the pupil state and the number of blinks, as described in (Line-of-sight detection unit), makes it possible to determine even more suitably which object the driver is viewing with concentration.
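The dwell-time counting described here might look like the following sketch, assuming objects are supplied as axis-aligned bounding boxes in image coordinates; the class and field names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class DwellTracker:
    """Accumulates how long the gaze point stays inside each object."""
    dwell_sec: dict[str, float] = field(default_factory=dict)

    def update(self, gaze_xy: tuple[float, float],
               boxes: dict[str, tuple[float, float, float, float]],
               dt: float) -> None:
        """boxes maps object id -> (x, y, w, h); dt is the frame period."""
        gx, gy = gaze_xy
        for obj_id, (x, y, w, h) in boxes.items():
            if x <= gx <= x + w and y <= gy <= y + h:
                self.dwell_sec[obj_id] = self.dwell_sec.get(obj_id, 0.0) + dt

# Example: gaze point inside object "OB" for one 33 ms frame.
tracker = DwellTracker()
tracker.update((120.0, 80.0), {"OB": (100.0, 60.0, 50.0, 40.0)}, dt=0.033)
print(tracker.dwell_sec)  # {'OB': 0.033}
```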
The visual object determination unit 14 may also detect the line-of-sight movement time. The method for detecting the line-of-sight movement time is not particularly limited; one example is measuring the time from when part of a specific object appears in the captured image until the position coordinates of the driver's line of sight move into the coordinates of that object's image region. Another example is measuring the time until the position coordinates of the driver's line of sight move from a specific object to another object.
By referring to the measured times, changes in the driver's facial expression, and the like, the visual object determination unit 14 can suitably determine objects in which the driver's interest level is high.
Note that the visual object determination unit 14 can also calculate the object information on what the driver is viewing by machine learning. The specific configuration of the learning process for acquiring this object information does not limit the present embodiment; for example, any of the following machine learning techniques, or a combination thereof, can be used.
・Support Vector Machine (SVM)
・Clustering
・Inductive Logic Programming (ILP)
・Genetic Programming (GP)
・Bayesian Network (BN)
・Neural Network (NN)
When using a neural network, the data may be processed in advance for input to the neural network. In addition to arranging the data into one-dimensional or multidimensional arrays, techniques such as data augmentation can be used for such processing.
When using a neural network, a convolutional neural network (CNN) including convolution processing may be used. More specifically, a convolution layer that performs a convolution operation may be provided as one or more of the layers included in the neural network, and a filter operation (product-sum operation) may be performed on the input data to that layer. When performing the filter operation, processing such as padding may be used together, and an appropriately set stride width may be adopted.
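A minimal sketch of such a convolution layer with padding and a stride width, here in PyTorch; the channel counts and sizes are arbitrary examples, not part of this disclosure.

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(
    in_channels=3,    # e.g. an RGB captured image
    out_channels=16,  # number of learned filters
    kernel_size=3,    # 3x3 filter for the product-sum operation
    stride=2,         # stride width set as appropriate
    padding=1,        # padding used together with the filter operation
)

x = torch.randn(1, 3, 64, 64)  # one dummy 64x64 input image
y = conv(x)
print(y.shape)                 # torch.Size([1, 16, 32, 32])
```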
Alternatively, a multilayer or ultra-multilayer neural network with several tens to several thousands of layers may be used as the neural network.
(Interest level information generation unit)
The interest level information generation unit 15 generates interest level information indicating the driver's interest level with respect to the object determined by the visual object determination unit 14. For example, the interest level information generation unit 15 creates the interest level information by referring to at least one of the number of objects detected by the object detection unit 11, the gaze time detected by the line-of-sight detection unit 13, the line-of-sight movement time, the pupil state, the number of blinks, and the degree of concentration.
Examples of the relationship between the various information referred to by the interest level information generation unit 15 and the interest level information it generates are as follows.
(Number of objects)
The interest level information generation unit 15 determines that the smaller the ratio of the number of objects viewed by the driver to the number of objects in the captured image, the higher the driver's interest level with respect to the viewed objects, and creates interest level information including the determination result.
For example, when there are 20 objects in the captured image and the driver views one of them, the driver can be said to be paying more attention to the viewed object than when there is only one object in the captured image and the driver views it, assuming the gaze time is the same. Therefore, in the former case, the interest level with respect to the viewed object is determined to be higher than in the latter case.
Likewise, when there are 10 objects in the captured image and the driver views one of them, the driver can be said to be paying more attention to the viewed object than when there are 10 objects in the captured image and the driver views five of them. Therefore, in the former case, the interest level with respect to the viewed object is determined to be higher than in the latter case.
(Gaze time)
The interest level information generation unit 15 determines that the longer the gaze time for a certain object, the higher the interest level with respect to that object, and generates interest level information including the determination result.
(Line-of-sight movement time)
The interest level information generation unit 15 determines that the shorter the time from when part of a specific object appears in the captured image until the position coordinates of the driver's line of sight move into the coordinates of that object's image region, the higher the interest level with respect to that object, and creates interest level information including the determination result.
The interest level information generation unit 15 also determines that the longer the time interval until the position coordinates of the driver's line of sight move from a specific object to another object, the higher the interest level with respect to each object the driver views, and creates interest level information including the determination result.
(Pupil state)
The interest level information generation unit 15 determines that the larger the driver's pupil size while viewing a certain object, the higher the interest level with respect to that object, and creates interest level information including the determination result.
(Number of blinks)
The interest level information generation unit 15 determines that the lower the frequency and the more stable the intervals of the driver's blinks while viewing a certain object, the higher the interest level with respect to that object, and creates interest level information including the determination result.
(Degree of concentration)
The interest level information generation unit 15 determines that the higher the driver's degree of concentration while viewing a certain object, the higher the interest level with respect to that object, and creates interest level information including the determination result.
The interest level information generation unit 15 may determine whether the interest level is high by setting predetermined thresholds for the various information it refers to.
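One possible threshold-based combination of the indicators above is sketched below; the threshold values and the majority rule are illustrative assumptions, not part of this disclosure.

```python
from dataclasses import dataclass

@dataclass
class GazeIndicators:
    viewed_ratio: float    # viewed objects / objects in the captured image
    gaze_time_sec: float   # gaze time on the object
    pupil_open: bool       # pupil size at or above its threshold
    blinks_stable: bool    # low-frequency, stable-interval blinking

def interest_is_high(ind: GazeIndicators) -> bool:
    """Count how many indicators clear their thresholds; require a majority."""
    votes = [
        ind.viewed_ratio <= 0.2,   # few objects viewed among many present
        ind.gaze_time_sec >= 2.0,  # long gaze time
        ind.pupil_open,
        ind.blinks_stable,
    ]
    return sum(votes) >= 3

print(interest_is_high(GazeIndicators(0.05, 3.1, True, False)))  # True
```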
Note that, like the visual object determination unit 14, the interest level information generation unit 15 may be configured to calculate the interest level information by machine learning.
The interest level information generation unit 15 outputs the created interest level information to the recommendation engine 16.
(Recommendation engine)
The recommendation engine 16 refers to the interest level information generated by the interest level information generation unit 15 and generates proposal information for the driver.
As one example, the recommendation engine 16 refers to information, contained in the interest level information generated by the interest level information generation unit 15, on objects for which the driver's potential interest is suggested, and generates proposal information including proposals related to those objects.
The proposal information generated by the recommendation engine 16 is communicated to the driver by the destination presentation unit 17 through display or voice. The proposal information generated by the recommendation engine 16 may also be transmitted to the mobile terminal device 2, either directly or via another server or a base station, and the mobile terminal device 2 may display the proposal information.
Specific interest level information used by the recommendation engine 16 is exemplified below with reference to Table 1.
[Table 1]
(CASE 1)
When the interest level information generated by the interest level information generation unit 15 contains information indicating that the driver is paying attention to a store or signboard belonging to a first category, the recommendation engine 16 includes in the proposal information a proposal to stop at at least one of a rest area and a toilet.
CASE 1 in Table 1 shows an example of objects noticed in a common town scene and the corresponding proposal information. Stores belonging to the first category include, as shown in Table 1, convenience stores, fuel supply stations such as gas stations, charging stations, and restaurants such as family restaurants.
(CASE 2)
When the interest level information generated by the interest level information generation unit 15 contains information indicating that the driver is paying attention to a store or signboard belonging to a second category, the recommendation engine 16 includes in the proposal information a proposal to stop at at least one of a restaurant and a toilet.
CASE 2 in Table 1 shows an example of stores or signboards belonging to the second category noticed in a common town scene and the corresponding proposal information. Stores belonging to the second category include, as shown in Table 1, restaurants such as family restaurants and fast-food outlets, shopping centers, and the like.
(CASE 3)
When the interest level information generated by the interest level information generation unit 15 contains information indicating that the driver is paying attention to a predetermined object inside the vehicle, the recommendation engine 16 includes in the proposal information a proposal to stop at at least one of a fuel supply station, a charging station, and an automotive supply store.
CASE 3 in Table 1 shows an example of in-vehicle objects that caught the driver's eye and the corresponding proposal information. In-vehicle objects include, for example, a fuel meter, a clock, the display screen of a navigation system, and a passenger. Fuel supply stations include gas stations and hydrogen stations.
(CASE 4)
When the interest level information generated by the interest level information generation unit 15 contains information indicating that the driver is paying attention to a specific logo or mark, the recommendation engine 16 includes in the proposal information information about that specific logo or mark, or about products related to that specific logo or mark.
CASE 4 in Table 1 shows an example of displays noticed in town and the corresponding proposal information. The information about the specific logo or mark, or about products related to it, may be output to a mobile terminal device such as a smartphone after the driver finishes driving and returns home.
(CASE 5)
When the interest level information generated by the interest level information generation unit 15 contains information indicating that the driver is paying attention to a specific vehicle model, the recommendation engine 16 includes in the proposal information information about that specific vehicle model, or about automobiles belonging to that specific vehicle model.
CASE 5 in Table 1 shows an example of automobiles noticed on the road and the corresponding proposal information. The vehicle model may be a manufacturer name or the model name of each manufacturer's automobile.
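Taken together, CASE 1 to CASE 5 amount to a mapping from the category of the attended object to a proposal, which might be sketched as follows; the category keys and proposal strings are hypothetical.

```python
PROPOSALS = {
    "category1_store_or_sign": "Suggest stopping at a rest area and/or toilet",
    "category2_store_or_sign": "Suggest stopping at a restaurant and/or toilet",
    "in_vehicle_object": "Suggest a fuel supply station, charging station, "
                         "or automotive supply store",
    "logo_or_mark": "Present information on the logo/mark or related products",
    "vehicle_model": "Present information on the model or such automobiles",
}

def make_proposal(attended_category: str) -> str:
    """Generate proposal information from the attended-object category
    carried by the interest level information."""
    return PROPOSALS.get(attended_category, "No proposal")

print(make_proposal("in_vehicle_object"))
```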
(Destination presentation unit)
The destination presentation unit 17 refers to the proposal information for the driver generated by the interest level information generation unit 15 and presents a destination to the driver. The destination presentation unit 17 may present the destination by voice, may present it as an image on a display unit, or may present it by a combination of these. The destination presentation unit 17 may also present a route to the destination.
For example, as shown in Table 1, in CASE 1 and CASE 2, the destination presentation unit 17 gives voice guidance on the number of nearby spots and the distance and time to each spot. In CASE 3, the destination presentation unit 17 sets the destination and the shortest route.
[Processing example 1 of the information processing apparatus]
Next, the processing of the information processing apparatus 1 will be described with reference to the flowchart in FIG. 3.
First, use of the information processing apparatus 1 is started, and the processing proceeds to steps (hereinafter, "step" is omitted) S11 and S14.
In S11, the face information acquisition unit 12 acquires the driver's face information. The details of this processing are as described in (Face information acquisition unit). The face information acquisition unit 12 transmits the acquired face information to the line-of-sight detection unit 13, and the processing proceeds to S12.
In S12, the line-of-sight detection unit 13 detects and extracts the line of sight from the acquired face image. The details of this processing are as described in (Line-of-sight detection unit).
In S13, the line-of-sight detection unit 13 determines from the detected line of sight whether the driver's gaze was focused on something. If it was (Yes in S13), the unit transmits the identified gaze direction to the visual object determination unit 14, and the processing proceeds to S15. If the driver's gaze was not focused (No in S13), the processing of S11 is performed again.
In S14, the object detection unit 11 detects and extracts each object present in the driver's field of view. Specifically, after the imaging unit captures the scenery outside the vehicle, the object extraction unit extracts objects from the acquired captured image 100. The details of this processing are as described in (Object detection unit). The object extraction unit of the object detection unit 11 transmits the extracted objects to the visual object determination unit, and the processing proceeds to S15.
In S15, the visual object determination unit 14 extracts the object the driver is viewing from the line of sight detected by the line-of-sight detection unit 13 and the objects detected by the object detection unit 11, and the processing proceeds to S16.
In S16, the visual object determination unit 14 determines whether the driver has been paying attention to the extracted viewed object for a certain time. If so (Yes in S16), the processing proceeds to S17. The details of this processing are as described in (Visual object determination unit). If the driver has not been paying attention for a certain time (No in S16), the processing of S15 is performed again.
In S17, the interest level information generation unit 15 generates interest level information about the driver's interest level. The details of this processing are as described in (Interest level information generation unit). The interest level information generation unit 15 transmits the created interest level information to the recommendation engine 16, and the processing proceeds to S18.
In S18, the recommendation engine 16 refers to the interest level information generated by the interest level information generation unit and generates proposal information for the driver. The details of this processing are as described in (Recommendation engine).
<Embodiment 2>
Another embodiment of the present invention is described below. For convenience of explanation, members having the same functions as members described in the above embodiment are given the same reference signs, and their descriptions are not repeated.
FIG. 4 is a block diagram showing the components of the information processing apparatus 1 according to the present embodiment.
The information processing apparatus 1 further includes a position information acquisition unit 21 and a facial expression estimation unit 22.
(Position information acquisition unit)
The position information acquisition unit 21 acquires position information of the vehicle the driver is riding in. The position information acquisition unit 21 includes at least one of a GPS antenna, a Wi-Fi (registered trademark) antenna, a compass, an acceleration sensor, and the like. The position information acquisition unit 21 can acquire position information from a position detection unit (not shown) configured to be able to detect the direction the vehicle is facing, its current position, and other position information. Alternatively, the position information acquisition unit 21 may acquire the vehicle's position from a source other than the position detection unit. For example, if the vehicle moves while performing wireless communication, the vehicle's position information may be acquired from a wireless communication base station. In the description of the present embodiment, "position information" by itself refers to the vehicle's position information. The position information acquisition unit 21 outputs the acquired position information of the vehicle the driver is riding in to the object detection unit 11.
(Facial expression estimation unit)
The facial expression estimation unit 22 estimates the driver's facial expression from the driver's face information. The facial expression estimation unit 22 estimates the driver's expression by acquiring, from the driver's face information, expression information indicating the driver's expression. In addition to using the eye feature amounts detected by the line-of-sight detection unit 13, the facial expression estimation unit 22 uses feature amounts of each part of the face, as face information, in combination to estimate the driver's expression.
The driver's face information need only contain expression information sufficient for estimating the driver's expression; it includes emotion information such as, for example, smile information indicating the degree of smiling, sadness information indicating the degree of sadness, and tension information indicating the degree of tension. The method by which the facial expression estimation unit 22 estimates the driver's expression is not particularly limited; for example, the expression can be estimated using a facial expression estimation device equipped with a known technique for estimating a person's expression, such as OKAO (registered trademark) Vision.
The facial expression estimation unit 22 refers to the face information, estimates the driver's expression, and calculates expression information. The facial expression estimation unit 22 outputs the expression information to the interest level information generation unit 15. The facial expression estimation unit 22 may be provided in the vehicle or, as described later, outside the vehicle, for example in a server.
Note that, like the visual object determination unit 14, the facial expression estimation unit 22 may be configured to calculate information about the driver's expression by machine learning.
The facial expression estimation unit 22 outputs the estimated emotion information to the interest level information generation unit 15.
(Interest level information generation unit)
The interest level information generation unit 15 generates the interest level information by further referring to the vehicle's position information and the driver's facial expression.
(Position information)
The interest level information generation unit 15 determines that the higher the traffic volume at the vehicle's position when the driver views a certain object, the higher the interest level with respect to that object. The interest level information generation unit 15 may also determine that the interest level with respect to an object is high when the vehicle's position at the time the object is viewed is a predetermined place. The interest level information generation unit 15 then creates interest level information including the determination result.
(Driver's facial expression)
The interest level information generation unit 15 determines that the more pleased or surprised the driver's expression is when viewing a certain object, the higher the interest level with respect to that object, and creates interest level information including the determination result.
The interest level information generation unit 15 may generate the interest level information by further referring to at least one of the driver's biological information, date and time information, and weather information.
The biological information includes, for example, the driver's brain information and vital information. The brain information includes information such as the driver's brain waves. The method for measuring brain waves is not particularly limited; one example is measurement with a known electroencephalograph. The vital information includes, for example, information such as the driver's pulse, blood pressure, body temperature, and perspiration amount. The method for measuring vitals is not particularly limited; examples include measurement with a known pulse meter, sphygmomanometer, thermometer, perspiration detector, or the like. The date and time information is the date and time when the driver is driving, and the weather information is the weather information for the vehicle's position acquired from the position information acquisition unit 21.
(Brain waves)
The interest level information generation unit 15 determines that the interest level with respect to an object is high when the driver's brain waves while viewing that object are within a predetermined frequency range, and creates interest level information including the determination result.
(Pulse)
The interest level information generation unit 15 determines that the higher the driver's pulse rate while viewing a certain object, the higher the interest level with respect to that object, and creates interest level information including the determination result.
(Blood pressure)
The interest level information generation unit 15 determines that the higher the driver's blood pressure while viewing a certain object, the higher the interest level with respect to that object, and creates interest level information including the determination result.
(Body temperature)
The interest level information generation unit 15 determines that the higher the driver's body temperature while viewing a certain object, the higher the interest level with respect to that object, and creates interest level information including the determination result.
(Perspiration amount)
The interest level information generation unit 15 determines that the greater the driver's perspiration amount while viewing a certain object, the higher the interest level with respect to that object, and creates interest level information including the determination result.
(Recommendation engine)
As one example, the recommendation engine 16 refers to information, contained in the interest level information that the interest level information generation unit 15 generated with further reference to the vehicle's position information and the driver's facial expression, on objects for which the driver's potential interest is suggested, and generates proposal information including proposals related to those objects.
Specific examples of the proposal information include those described in Embodiment 1.
The recommendation engine 16 may also acquire accident information indicating that an accident has occurred and generate accident information including object information indicating the objects the driver viewed in a predetermined period before the accident occurrence time indicated by that accident information.
The accident information may be generated by an accident information generation unit mounted on the vehicle when an impact sensor (acceleration sensor) mounted on the vehicle detects an impact at or above a predetermined value. Alternatively, such impact information may be transmitted from each vehicle to a server, and the server may generate the accident information by referring to the impact information.
The recommendation engine 16 may include, in the accident information, the date and time of the accident, the vehicle's position information, and weather information. The generated accident information can be used as information for evaluating safe-driving skill.
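One way to realize this accident-information generation is to keep a rolling window of recently viewed objects and emit a record when the impact sensor reading exceeds a threshold, as sketched below; the field names and the threshold are hypothetical.

```python
import time
from collections import deque

IMPACT_THRESHOLD_G = 4.0   # assumed acceleration threshold for "accident"
WINDOW_SEC = 10.0          # predetermined period before the accident

viewed_log: deque[tuple[float, str]] = deque()  # (timestamp, object id)

def record_viewed(obj_id: str) -> None:
    """Log a viewed object and drop entries outside the rolling window."""
    now = time.time()
    viewed_log.append((now, obj_id))
    while viewed_log and now - viewed_log[0][0] > WINDOW_SEC:
        viewed_log.popleft()

def on_impact(accel_g: float):
    """Generate accident information when the impact exceeds the threshold."""
    if accel_g < IMPACT_THRESHOLD_G:
        return None
    t = time.time()
    return {
        "accident_time": t,
        "viewed_objects": [o for ts, o in viewed_log if t - ts <= WINDOW_SEC],
    }
```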
[Processing example 2 of the information processing apparatus]
Next, the processing of the information processing apparatus 1 will be described with reference to the flowchart in FIG. 5. S11 to S17 are as described in [Processing example 1 of the information processing apparatus], so only the differences are described.
First, use of the information processing apparatus 1 is started, and the processing proceeds to S11 and S21.
In S21, the position information acquisition unit 21 acquires the position information of the vehicle the driver is riding in. The details of this processing are as described in (Position information acquisition unit). The position information acquisition unit 21 transmits the acquired position information to the object detection unit 11, and the processing proceeds to S14.
In S16, if the driver has been paying attention to the object for a certain time (Yes in S16), the processing proceeds to S22.
In S22, the facial expression estimation unit 22 estimates the driver's expression from the driver's face information, and the processing proceeds to S23. The details of this processing are as described in (Facial expression estimation unit).
In S23, the facial expression estimation unit 22 measures the change in emotion toward the viewed object, transmits the measured emotion information to the interest level information generation unit 15, and the processing proceeds to S17.
<Embodiment 3>
Another embodiment of the present invention is described below. For convenience of explanation, members having the same functions as members described in the above embodiments are given the same reference signs, and their descriptions are not repeated.
(Configuration of the information processing system)
FIG. 6 is a block diagram showing the configuration of an information processing system according to Embodiment 3 of the present invention. As shown in FIG. 6, the information processing system includes a vehicle 3, a server 4, and a mobile terminal device 2.
The vehicle 3 includes an in-vehicle camera 31 and a destination presentation unit 17. In Embodiment 1, the destination presentation unit 17 was included in the information processing apparatus 1, but in the present embodiment, the destination presentation unit 17 is provided in the vehicle 3.
The server 4 includes a face information acquisition unit 12, a line-of-sight detection unit 13, an object detection unit 11, a visual object determination unit 14, an interest level information generation unit 15, and a recommendation engine 16. In this way, the server 4 according to the present embodiment includes, among the components of the information processing apparatus 1 according to Embodiment 1, all components other than the destination presentation unit 17.
 サーバ4は、実施形態2で説明した位置情報取得部21、及び表情推定部22の少なくとも1つを更に備える構成としてもよい。 The server 4 may be configured to further include at least one of the position information acquisition unit 21 and the facial expression estimation unit 22 described in the second embodiment.
 また、サーバ及び情報処理装置は、図示しない通信部を備えており、サーバと情報処理装置との情報のやり取りは当該通信部を介して行われる。 Further, the server and the information processing apparatus include a communication unit (not shown), and information exchange between the server and the information processing apparatus is performed via the communication unit.
 車載カメラ31は、実施形態1において説明したオブジェクト検出部11が備える撮像部と同様の構成であり、車外の景色を撮影して周辺環境情報を取得し、取得した周辺環境情報をサーバ4のオブジェクト検出部11に出力する。また、車載カメラ31は、ドライバの顔を撮影して顔情報を取得し、取得した顔情報をサーバ4の顔情報取得部12に出力する。 The in-vehicle camera 31 has the same configuration as that of the imaging unit included in the object detection unit 11 described in the first embodiment. The in-vehicle camera 31 captures the scenery outside the vehicle to acquire the surrounding environment information, and uses the acquired surrounding environment information as the object of the server 4. Output to the detector 11. Further, the in-vehicle camera 31 captures the face of the driver to acquire face information, and outputs the acquired face information to the face information acquisition unit 12 of the server 4.
 リコメンドエンジン16は、実施形態1、2と同様に、生成したドライバへの提案情報を目的地提示部17に出力する。 The recommendation engine 16 outputs the generated proposal information to the driver to the destination presentation unit 17 as in the first and second embodiments.
 このように、車両3に、視認オブジェクト判定部14、関心度情報生成部15、リコメンドエンジン等が含まれていない態様であっても、サーバ4を経由することにより、処理を行うことができる。 As described above, even if the vehicle 3 does not include the visual object determination unit 14, the interest level information generation unit 15, the recommendation engine, and the like, the processing can be performed via the server 4.
 このような構成であっても、ドライバの関心度をセンシングすることができるという効果を奏する。 Even with such a configuration, the driver's interest level can be sensed.
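To make the division of roles concrete, the following is a minimal sketch of the Embodiment 3 split, in which the vehicle 3 only captures images and presents destinations while the server 4 performs all analysis. The transport, message format, and class names are assumptions made for illustration, not part of the disclosure.

```python
import json

class FakeServer:
    """Stand-in for server 4: object detection, gaze analysis, interest
    level generation, and the recommendation engine would run here."""
    def post(self, path: str, body: str) -> str:
        data = json.loads(body)  # surroundings + face images from the vehicle
        # ... detection / determination / interest generation (stubbed) ...
        return json.dumps({"destination": "rest area 2 km ahead"})

def present_destination(dest: str) -> None:
    # Destination presentation unit 17, on the vehicle side
    print(f"Suggested destination: {dest}")

def vehicle_tick(scene: str, face: str, server: FakeServer) -> None:
    # In-vehicle camera 31 output: scenery outside the vehicle + driver's face
    payload = json.dumps({"surroundings": scene, "face": face})
    reply = json.loads(server.post("/analyze", payload))  # via the communication units
    if reply.get("destination"):
        present_destination(reply["destination"])

vehicle_tick("scene frame", "face frame", FakeServer())
```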
[Example of software implementation]
The control blocks of the information processing apparatus 1 may be realized by a logic circuit (hardware) formed in an integrated circuit (IC chip) or the like, or may be realized by software.
In the latter case, each unit included in the information processing apparatus 1 is provided with a computer that executes the instructions of a program, that is, software realizing each function. This computer includes, for example, at least one processor (control device) and at least one computer-readable recording medium storing the program. The object of the present invention is achieved when the processor reads the program from the recording medium and executes it. As the processor, for example, a CPU (Central Processing Unit) can be used. As the recording medium, a "non-transitory tangible medium" can be used, for example a ROM (Read Only Memory), a tape, a disk, a card, a semiconductor memory, or a programmable logic circuit. The computer may further include a RAM (Random Access Memory) into which the program is loaded. The program may be supplied to the computer via any transmission medium (such as a communication network or a broadcast wave) capable of transmitting the program. One aspect of the present invention can also be realized in the form of a data signal embedded in a carrier wave, in which the program is embodied by electronic transmission.
The information processing apparatus according to each aspect of the present invention may be realized by a computer. In this case, a control program for the information processing apparatus that realizes the information processing apparatus on a computer by causing the computer to operate as each unit (software element) of the information processing apparatus, and a computer-readable recording medium on which the control program is recorded, also fall within the scope of the present invention.
[Summary]
An information processing apparatus according to one aspect of the present invention includes: a face information acquisition unit that acquires face information of a driver; a line-of-sight detection unit that detects a line of sight from the face information acquired by the face information acquisition unit; an object detection unit that detects each object present in the driver's field of view; a visual object determination unit that determines the object the driver is visually recognizing, from the line of sight detected by the line-of-sight detection unit and the objects detected by the object detection unit; and an interest level information generation unit that generates interest level information indicating the driver's level of interest in the object determined by the visual object determination unit.
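As an illustration of how these five units fit together, the following sketch chains them into a single pipeline. The class, the method names, and the dwell-time-based interest score are hypothetical; the summary above does not prescribe a concrete implementation.

```python
class InterestPipeline:
    """Illustrative stubs for the five units named in the summary."""

    def acquire_face_info(self, frame: dict) -> dict:          # face information acquisition unit
        return frame["face"]

    def detect_gaze(self, face: dict) -> tuple[float, float]:  # line-of-sight detection unit
        return face["gaze_direction"]

    def detect_objects(self, frame: dict) -> list[dict]:       # object detection unit
        return frame["objects"]

    def determine_viewed(self, gaze, objects) -> dict | None:  # visual object determination unit
        # Pick the object whose bounding region contains the gaze point
        for obj in objects:
            (x0, y0), (x1, y1) = obj["bbox"]
            if x0 <= gaze[0] <= x1 and y0 <= gaze[1] <= y1:
                return obj
        return None

    def generate_interest(self, viewed) -> dict | None:        # interest level information generation unit
        if viewed is None:
            return None
        return {"object": viewed["label"], "interest": min(viewed["fixation_s"] / 5.0, 1.0)}

pipe = InterestPipeline()
frame = {
    "face": {"gaze_direction": (0.4, 0.5)},
    "objects": [{"label": "signboard", "bbox": ((0.3, 0.4), (0.6, 0.7)), "fixation_s": 3.0}],
}
viewed = pipe.determine_viewed(pipe.detect_gaze(pipe.acquire_face_info(frame)), pipe.detect_objects(frame))
print(pipe.generate_interest(viewed))
```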
The information processing apparatus according to one aspect of the present invention may further include a position information acquisition unit that acquires position information of the vehicle in which the driver is riding, and the interest level information generation unit may generate the interest level information by further referring to the position information of the vehicle.
The information processing apparatus according to one aspect of the present invention may further include a facial expression estimation unit that estimates the driver's facial expression from the driver's face information, and the interest level information generation unit may generate the interest level information by further referring to the driver's facial expression.
In the information processing apparatus according to one aspect of the present invention, the interest level information generation unit may generate the interest level information by further referring to biological information of the driver.
In the information processing apparatus according to one aspect of the present invention, the interest level information generation unit may generate the interest level information by further referring to date and time information.
In the information processing apparatus according to one aspect of the present invention, the interest level information generation unit may generate the interest level information by further referring to weather information.
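These aspects leave open how the additional signals are combined with the gaze-based observation. The sketch below assumes one possible realization, a simple weighted score; all parameter names, weights, and thresholds are illustrative assumptions and not part of the disclosure.

```python
def interest_level(fixation_s: float,
                   expression: str = "neutral",
                   heart_rate_delta: float = 0.0,   # biological information
                   near_poi: bool = False,          # from vehicle position information
                   mealtime: bool = False,          # from date and time information
                   bad_weather: bool = False) -> float:  # from weather information
    score = min(fixation_s / 5.0, 1.0)              # base: gaze dwell time
    if expression in ("smile", "surprise"):
        score += 0.2                                # facial expression raises interest
    score += min(abs(heart_rate_delta) / 30.0, 0.2) # arousal from biological signals
    if near_poi:
        score += 0.1                                # near a point of interest
    if mealtime:
        score += 0.1                                # time of day makes food-related objects salient
    if bad_weather:
        score -= 0.1                                # gaze may reflect weather, not interest
    return max(0.0, min(score, 1.0))

print(interest_level(3.0, expression="smile", mealtime=True))  # -> 0.9
```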
The information processing apparatus according to one aspect of the present invention may further include a proposal information generation unit that generates proposal information for the driver with reference to the interest level information generated by the interest level information generation unit.
In the information processing apparatus according to one aspect of the present invention, the proposal information generation unit may acquire accident information indicating that an accident has occurred, and may generate accident information including object information indicating the objects the driver visually recognized during a predetermined period before the accident occurrence time indicated by the acquired accident information.
When the interest level information generated by the interest level information generation unit includes information indicating that the driver is paying attention to a store or signboard belonging to a first category, the proposal information generation unit may include, in the proposal information, a proposal to stop at at least one of a rest area and a toilet.
When the interest level information generated by the interest level information generation unit includes information indicating that the driver is paying attention to a store or signboard belonging to a second category, the proposal information generation unit may include, in the proposal information, a proposal to stop at at least one of a restaurant and a toilet.
When the interest level information generated by the interest level information generation unit includes information indicating that the driver is paying attention to a predetermined object inside the vehicle, the proposal information generation unit may include, in the proposal information, a proposal to stop at at least one of a fuel station, a charging station, and an automotive supply store.
The information processing apparatus according to one aspect of the present invention may further include a destination presentation unit that presents a destination to the driver with reference to the proposal information.
When the interest level information generated by the interest level information generation unit includes information indicating that the driver is paying attention to a specific logo or mark, the proposal information generation unit may include, in the proposal information, information on the specific logo or mark, or on products related to the specific logo or mark.
When the interest level information generated by the interest level information generation unit includes information indicating that the driver is paying attention to a specific vehicle type, the proposal information generation unit may include, in the proposal information, information on the specific vehicle type, or on automobiles belonging to the specific vehicle type.
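Taken together, the proposal-related aspects above amount to a mapping from the category of the attended object to a suggestion. A minimal sketch of such a lookup is shown below; the category keys and the table structure are illustrative assumptions, and the suggestion strings merely paraphrase the text.

```python
# Category of the attended object -> proposal, paraphrasing the aspects above.
PROPOSAL_RULES = {
    "first_category_store_or_sign": "stop at a rest area and/or toilet",
    "second_category_store_or_sign": "stop at a restaurant and/or toilet",
    "in_vehicle_object": "stop at a fuel station, charging station, or automotive supply store",
    "specific_logo_or_mark": "information on the logo/mark or related products",
    "specific_vehicle_type": "information on the vehicle type or cars belonging to it",
}

def generate_proposal(interest_info: dict) -> str | None:
    """Proposal information generation unit (recommendation engine 16), sketched."""
    return PROPOSAL_RULES.get(interest_info.get("category"))

print(generate_proposal({"object": "fuel gauge", "category": "in_vehicle_object"}))
```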
An information processing method according to one aspect of the present invention is an information processing method executed by an information processing apparatus, and includes: a face information acquisition step of acquiring face information of a driver; a line-of-sight detection step of detecting a line of sight from the face information acquired in the face information acquisition step; an object detection step of detecting each object present in the driver's field of view; a visual object determination step of determining the object the driver is visually recognizing, from the line of sight detected in the line-of-sight detection step and the objects detected in the object detection step; and an interest level information generation step of generating interest level information indicating the driver's level of interest in the object determined in the visual object determination step.
A program according to one aspect of the present invention is an information processing program for causing a computer to function as the information processing apparatus, and may cause the computer to function as the interest level information generation unit.
The present invention is not limited to the above-described embodiments, and various modifications are possible within the scope of the claims. Embodiments obtained by appropriately combining the technical means disclosed in different embodiments are also included in the technical scope of the present invention. Furthermore, new technical features can be formed by combining the technical means disclosed in each embodiment.
1 Information processing apparatus
2 Mobile terminal device
3 Vehicle
4 Server
11 Object detection unit
12 Face information acquisition unit
13 Line-of-sight detection unit
14 Visual object determination unit
15 Interest level information generation unit
16 Recommendation engine (proposal information generation unit)
17 Destination presentation unit (navigation system)
21 Position information acquisition unit
22 Facial expression estimation unit
31 In-vehicle camera

Claims (16)

1. An information processing apparatus comprising:
    a face information acquisition unit that acquires face information of a driver;
    a line-of-sight detection unit that detects a line of sight from the face information acquired by the face information acquisition unit;
    an object detection unit that detects each object present in the driver's field of view;
    a visual object determination unit that determines the object the driver is visually recognizing, from the line of sight detected by the line-of-sight detection unit and the objects detected by the object detection unit; and
    an interest level information generation unit that generates interest level information indicating the driver's level of interest in the object determined by the visual object determination unit.
2. The information processing apparatus according to claim 1, further comprising a position information acquisition unit that acquires position information of a vehicle in which the driver is riding,
    wherein the interest level information generation unit generates the interest level information by further referring to the position information of the vehicle.
3. The information processing apparatus according to claim 1 or 2, further comprising a facial expression estimation unit that estimates the driver's facial expression from the driver's face information,
    wherein the interest level information generation unit generates the interest level information by further referring to the driver's facial expression.
4. The information processing apparatus according to any one of claims 1 to 3, wherein the interest level information generation unit generates the interest level information by further referring to biological information of the driver.
5. The information processing apparatus according to any one of claims 1 to 4, wherein the interest level information generation unit generates the interest level information by further referring to date and time information.
6. The information processing apparatus according to any one of claims 1 to 5, wherein the interest level information generation unit generates the interest level information by further referring to weather information.
7. The information processing apparatus according to any one of claims 1 to 6, further comprising a proposal information generation unit that generates proposal information for the driver with reference to the interest level information generated by the interest level information generation unit.
8. The information processing apparatus according to claim 7, wherein the proposal information generation unit acquires accident information indicating that an accident has occurred, and generates accident information including object information indicating objects visually recognized by the driver during a predetermined period before the accident occurrence time indicated by the acquired accident information.
9. The information processing apparatus according to claim 7 or 8, wherein, when the interest level information generated by the interest level information generation unit includes information indicating that the driver is paying attention to a store or signboard belonging to a first category, the proposal information generation unit includes, in the proposal information, a proposal to stop at at least one of a rest area and a toilet.
10. The information processing apparatus according to any one of claims 7 to 9, wherein, when the interest level information generated by the interest level information generation unit includes information indicating that the driver is paying attention to a store or signboard belonging to a second category, the proposal information generation unit includes, in the proposal information, a proposal to stop at at least one of a restaurant and a toilet.
11. The information processing apparatus according to any one of claims 7 to 10, wherein, when the interest level information generated by the interest level information generation unit includes information indicating that the driver is paying attention to a predetermined object inside the vehicle, the proposal information generation unit includes, in the proposal information, a proposal to stop at at least one of a fuel station, a charging station, and an automotive supply store.
12. The information processing apparatus according to any one of claims 7 to 11, further comprising a destination presentation unit that presents a destination to the driver with reference to the proposal information.
13. The information processing apparatus according to any one of claims 7 to 12, wherein, when the interest level information generated by the interest level information generation unit includes information indicating that the driver is paying attention to a specific logo or mark, the proposal information generation unit includes, in the proposal information, information on the specific logo or mark, or on products related to the specific logo or mark.
14. The information processing apparatus according to any one of claims 7 to 13, wherein, when the interest level information generated by the interest level information generation unit includes information indicating that the driver is paying attention to a specific vehicle type, the proposal information generation unit includes, in the proposal information, information on the specific vehicle type, or on automobiles belonging to the specific vehicle type.
15. An information processing method executed by an information processing apparatus, the method comprising:
    a face information acquisition step of acquiring face information of a driver;
    a line-of-sight detection step of detecting a line of sight from the face information acquired in the face information acquisition step;
    an object detection step of detecting each object present in the driver's field of view;
    a visual object determination step of determining the object the driver is visually recognizing, from the line of sight detected in the line-of-sight detection step and the objects detected in the object detection step; and
    an interest level information generation step of generating interest level information indicating the driver's level of interest in the object determined in the visual object determination step.
16. An information processing program for causing a computer to function as the information processing apparatus according to claim 1, the information processing program causing the computer to function as the interest level information generation unit.
PCT/JP2019/018971 2018-05-16 2019-05-13 Information processing device, information processing method, and information processing program WO2019221070A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018094884A JP6449504B1 (en) 2018-05-16 2018-05-16 Information processing apparatus, information processing method, and information processing program
JP2018-094884 2018-05-16

Publications (1)

Publication Number Publication Date
WO2019221070A1 true WO2019221070A1 (en) 2019-11-21

Family

ID=64960352

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/018971 WO2019221070A1 (en) 2018-05-16 2019-05-13 Information processing device, information processing method, and information processing program

Country Status (2)

Country Link
JP (1) JP6449504B1 (en)
WO (1) WO2019221070A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022118899A1 (en) * 2020-12-03 2022-06-09 京セラ株式会社 Electronic apparatus, information processing device, degree of concentration calculation program, degree of concentration calculation method, and computer training method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006048171A (en) * 2004-07-30 2006-02-16 Toyota Motor Corp Status estimation device, status estimation method, information providing device using the same, and information providing method
JP2007072567A (en) * 2005-09-05 2007-03-22 Denso Corp Vehicle traveling information recording device
JP2009058431A (en) * 2007-08-31 2009-03-19 Aisin Aw Co Ltd Navigation apparatus and navigation program
JP2009168473A (en) * 2008-01-10 2009-07-30 Nissan Motor Co Ltd On-vehicle navigation device, and interest-level estimation method of driver in on-vehicle navigation device
JP2014096632A (en) * 2012-11-07 2014-05-22 Denso Corp Imaging system
JP2018055296A (en) * 2016-09-28 2018-04-05 損害保険ジャパン日本興亜株式会社 Information processor, information processing method and information processing program

Also Published As

Publication number Publication date
JP6449504B1 (en) 2019-01-09
JP2019200614A (en) 2019-11-21

Similar Documents

Publication Publication Date Title
EP3488382B1 (en) Method and system for monitoring the status of the driver of a vehicle
CN110167823B (en) System and method for driver monitoring
JP6933668B2 (en) Driving condition monitoring methods and devices, driver monitoring systems, and vehicles
Mbouna et al. Visual analysis of eye state and head pose for driver alertness monitoring
Seshadri et al. Driver cell phone usage detection on strategic highway research program (SHRP2) face view videos
US9750420B1 (en) Facial feature selection for heart rate detection
US20210081754A1 (en) Error correction in convolutional neural networks
EP2779045A1 (en) Computer-based method and system for providing active and automatic personal assistance using an automobile or a portable electronic device
Wang et al. A survey on driver behavior analysis from in-vehicle cameras
García et al. Driver monitoring based on low-cost 3-D sensors
JP2014160394A (en) Service provision system
CN110582781A (en) Sight tracking system and method
JP2008237625A (en) Degree of visibility judging apparatus
JP6459856B2 (en) Vehicle driving support device, vehicle driving support method, and program
JP6449504B1 (en) Information processing apparatus, information processing method, and information processing program
CN110803170B (en) Driving assistance system with intelligent user interface
JP2018110023A (en) Target detection method
US20200279110A1 (en) Information processing apparatus, information processing method, and program
KR20120070888A (en) Method, electronic device and record medium for provoding information on wanted target
US20220319232A1 (en) Apparatus and method for providing missing child search service based on face recognition using deep-learning
JP6305483B2 (en) Computer apparatus, service providing system, service providing method, and program
JP2019152734A (en) Digital information display system
US20210279486A1 (en) Collision avoidance and pedestrian detection systems
Mihai et al. Using dual camera smartphones as advanced driver assistance systems: Navieyes system architecture
JP2010108167A (en) Face recognition device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19802890

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19802890

Country of ref document: EP

Kind code of ref document: A1