US20190251374A1 - Travel assistance device and computer program - Google Patents

Travel assistance device and computer program

Info

Publication number: US20190251374A1
Application number: US16/331,392
Authority: US (United States)
Prior art keywords: image, captured image, risk factor, map information, map
Legal status: Abandoned
Inventors: Takamitsu Sakai, Tomoaki Hirota
Original and current assignee: Aisin AW Co., Ltd.
Application filed by Aisin AW Co., Ltd.; assigned to Aisin AW Co., Ltd. by assignors Takamitsu Sakai and Tomoaki Hirota.

Classifications

    • G06K9/00805
    • G06K9/00798
    • B60W50/14: Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/146: Display means
    • B60R21/00: Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
    • B60W30/0956: Predicting travel path or likelihood of collision, the prediction being responsive to traffic or environmental parameters
    • B60W40/09: Driving style or behaviour
    • G01C21/30: Map- or contour-matching
    • G01C21/32: Structuring or formatting of map data
    • G01C21/3697: Output of additional, non-guidance related information, e.g. low fuel level
    • G01C21/3815: Road data
    • G06F18/2163: Partitioning the feature space
    • G06F3/013: Eye tracking input arrangements
    • G06N3/042: Knowledge-based neural networks; Logical representations of neural networks
    • G06N3/045: Combinations of networks
    • G06N3/08: Learning methods
    • G06N5/045: Explanation of inference; Explainable artificial intelligence [XAI]; Interpretable artificial intelligence
    • G06T1/00: General purpose image data processing
    • G06T7/00: Image analysis
    • G06T7/50: Depth or shape recovery
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/20081: Training; Learning
    • G06T2207/30256: Lane; Road marking
    • G06T2207/30261: Obstacle
    • G06V10/764: Image or video recognition or understanding using classification, e.g. of video objects
    • G06V10/82: Image or video recognition or understanding using neural networks
    • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/588: Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G06V20/597: Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G06V2201/08: Detecting or categorising vehicles
    • G08G1/16: Anti-collision systems

Definitions

  • The risk factor is a factor to which attention is to be paid when the mobile unit travels, and includes, for example, an obstacle such as another vehicle or a pedestrian, a road feature such as a crossroad or a crosswalk, a section where a lane increases or decreases, and the entrance of a building facing a road, which are present in locations that are difficult to visually identify from the mobile unit (the same applies hereinafter).
  • To determine a risk factor such as those described above, it is possible, for example, to make a determination by comparing the current location and orientation of the mobile unit with map information, or to make a determination using a camera, a sensor, or a communication device installed on the mobile unit.
  • JP 2012-192878 A discloses that during traveling of a vehicle, the locations and movement speeds of obstacles such as other vehicles present around the vehicle are detected in real time using a camera, a sensor, and a communication device installed on the vehicle, and when it is determined that the vehicle has a blind spot region and the risk level of the blind spot region is high, driving assistance for avoiding an obstacle present in the blind spot region is provided.
  • In JP 2012-192878 A, however, although a blind spot region is determined in a bird's-eye-view manner based on the positional relationship between the vehicle and another vehicle, the surrounding environment that the driver can actually see is not considered; as a result, the region determined to be a blind spot region may differ from the driver's actual blind spot region.
  • In addition, because the risk level of a blind spot region is determined from the behaviors of a preceding vehicle and an oncoming vehicle, it has been difficult to accurately determine whether the blind spot region is actually risky, even when the preceding or oncoming vehicle exhibits behavior judged to be high in risk level.
  • Consequently, a blind spot region that does not actually constitute a risk factor may also be determined to be a risk factor and thus become an assistance target.
  • Exemplary embodiments of the broad inventive principles described herein solve the above-described conventional problem, and provide a travel assistance device and a computer program that make it possible to more accurately determine a risk factor present in the surrounding environment of a mobile unit.
  • Exemplary embodiments provide travel assistance devices and programs that obtain a captured image capturing the surrounding environment of a mobile unit and obtain, as a map information image, a map image that three-dimensionally represents a map and that represents the same range as the image capturing range of the captured image from the same direction as the image capturing direction of the captured image.
  • The devices and programs extract a risk factor based on learning data by inputting the captured image and the map information image to machine learning as input images with a plurality of channels.
  • The risk factor is present in the surrounding environment of the mobile unit and is not included in the captured image.
  • The “mobile unit” is not limited to a vehicle and may be anything that moves on a road, such as a pedestrian or a bicycle.
  • The “risk factor” is a factor to which attention is to be paid when the mobile unit travels, and includes, for example, an obstacle such as another vehicle or a pedestrian, a road feature such as a crossroad or a crosswalk, a section where a lane increases or decreases, and the entrance of a building facing a road, which are present in locations that are difficult to visually identify from the mobile unit (the same applies hereinafter).
  • With the travel assistance device and computer program having the above-described configurations, by inputting a map image that three-dimensionally represents a map and a captured image of the area around a mobile unit to machine learning as input images with a plurality of channels, it becomes possible to more accurately determine a risk factor present in the surrounding environment of the mobile unit based on learning data.
  • FIG. 1 is a block diagram showing a navigation device according to the present embodiment.
  • FIG. 2 is a flowchart of a machine learning processing program according to the present embodiment.
  • FIG. 3 is a diagram showing an image capturing range of a captured image.
  • FIG. 4 is a diagram showing a map information image created based on a captured image.
  • FIG. 5 is a diagram describing an example of machine learning performed on images.
  • FIG. 6 is a diagram for comparison between the captured image and the map information image.
  • FIG. 7 is a flowchart of a risk factor determination processing program according to the present embodiment.
  • FIG. 1 is a block diagram showing the navigation device 1 according to the present embodiment.
  • the navigation device 1 includes a current location detecting part 11 that detects a current location of a vehicle having the navigation device 1 mounted thereon; a data recording part 12 having various types of data recorded therein; a navigation ECU 13 that performs various types of arithmetic processing based on inputted information; an operating part 14 that accepts operations from a user; a liquid crystal display 15 that displays a map of an area around the vehicle, information about a guided route set on the navigation device 1 , etc., to the user; a speaker 16 that outputs audio guidance on route guidance, an alert against risk factors, etc.; a DVD drive 17 that reads a DVD which is a storage medium; and a communication module 18 that performs communication with information centers such as a probe center and a VICS (registered trademark: Vehicle Information and Communication System) center.
  • the term “storage medium” does not encompass transitory signals.
  • In addition, an exterior camera 19 installed on the vehicle having the navigation device 1 mounted thereon is connected to the navigation device 1 through an in-vehicle network such as a CAN.
  • the current location detecting part 11 includes a GPS 21 , a vehicle speed sensor 22 , a steering sensor 23 , a gyro sensor 24 , etc., and can detect the current location, orientation, and travel speed of the vehicle, the current time, etc.
  • the vehicle speed sensor 22 is a sensor for detecting the movement distance and vehicle speed of the vehicle, and generates pulses according to the rotation of drive wheels of the vehicle and outputs a pulse signal to the navigation ECU 13 . Then, the navigation ECU 13 counts the generated pulses and thereby calculates the rotational speed of the drive wheels and a movement distance.
  • the navigation device 1 does not need to include all of the above-described four types of sensors and may be configured to include only one or a plurality of types of sensors among those sensors.
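As a rough illustration of the pulse counting described above, the following sketch converts a pulse count measured over a sampling interval into a movement distance and a vehicle speed. The pulse-per-revolution and tire-circumference constants are assumptions for illustration; the patent does not specify them.

```python
# Minimal sketch of wheel-pulse odometry; the two constants are assumed values.
PULSES_PER_REVOLUTION = 48    # pulses generated per drive-wheel rotation (assumption)
TIRE_CIRCUMFERENCE_M = 1.9    # rolling circumference of the drive wheel in meters (assumption)

def distance_and_speed(pulse_count: int, interval_s: float) -> tuple[float, float]:
    """Return (movement distance [m], vehicle speed [km/h]) for one sampling interval."""
    revolutions = pulse_count / PULSES_PER_REVOLUTION
    distance_m = revolutions * TIRE_CIRCUMFERENCE_M
    speed_kmh = (distance_m / interval_s) * 3.6
    return distance_m, speed_kmh

# Example: 400 pulses counted over a 1-second interval
print(distance_and_speed(400, 1.0))
```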
  • the data recording part 12 includes a hard disk (not shown) serving as an external storage device and a recording medium; and a recording head (not shown) which is a driver for reading a map information DB 31 , a captured-image DB 32 , a predetermined program, etc., recorded on the hard disk, and writing predetermined data to the hard disk.
  • the data recording part 12 may include a memory card or an optical disc such as a CD or a DVD instead of the hard disk.
  • the map information DB 31 and the captured-image DB 32 may be stored on an external server, and the navigation device 1 may obtain the map information DB 31 and the captured-image DB 32 by communication.
  • The map information DB 31 stores therein two-dimensional map information 33 and three-dimensional map information 34.
  • the two-dimensional map information 33 is general map information used in the navigation device 1 and includes, for example, link data about roads (links), node data about node points, facility data about facilities, search data used in a route search process, map display data for displaying a map, intersection data about each intersection, and retrieval data for retrieving points.
  • the three-dimensional map information 34 is information about a map image that three-dimensionally represents a map.
  • the three-dimensional map information 34 is information about a map image that three-dimensionally represents road outlines.
  • The map image may also represent information other than road outlines.
  • the map image may also three-dimensionally represent the shapes of facilities, the section lines of roads, road signs, signs, etc.
  • the navigation device 1 performs general functions such as display of a map image on the liquid crystal display 15 and a search for a guided route, using the two-dimensional map information 33 .
  • a process related to a determination of a risk factor is performed using the three-dimensional map information 34 .
  • the captured-image DB 32 is storage means for storing captured images 35 captured by the exterior camera 19 .
  • the captured images 35 captured by the exterior camera 19 are cumulatively stored in the captured-image DB 32 and are deleted in turn from the old ones.
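The "cumulatively stored and deleted in turn from the old ones" behavior can be pictured as a bounded buffer. The sketch below is only an illustration; the capacity and the record layout are assumptions, not details from the patent.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class CapturedImage:
    timestamp: float   # capture time
    pixels: bytes      # encoded image data

class CapturedImageDB:
    """Keeps the most recent captured images; the oldest are discarded automatically."""
    def __init__(self, capacity: int = 1000):   # capacity is an assumed value
        self._images = deque(maxlen=capacity)

    def store(self, image: CapturedImage) -> None:
        self._images.append(image)   # when full, the oldest entry is dropped

    def latest(self) -> CapturedImage:
        return self._images[-1]      # the most recently captured image
```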
  • the navigation ECU (electronic control unit) 13 is an electronic control unit that performs overall control of the navigation device 1 , and includes a CPU 41 serving as a computing device and a control device; and internal storage devices such as a RAM 42 that is used as a working memory when the CPU 41 performs various types of arithmetic processing and that stores route data obtained when a route is searched for, etc., a ROM 43 having recorded therein a machine learning processing program (see FIG. 2 ) and a risk factor determination processing program (see FIG. 7 ) which will be described later, etc., in addition to a program for control, and a flash memory 44 that stores a program read from the ROM 43 .
  • the navigation ECU 13 includes various types of means serving as processing algorithms.
  • surrounding environment imaging means obtains a captured image that captures the surrounding environment of the vehicle.
  • Map information image obtaining means obtains, as a map information image, a map image that represents three-dimensional map information 34 of the same range as an image capturing range of the captured image from the same direction as an image capturing direction of the captured image.
  • Risk factor extracting means extracts, based on learning data, a risk factor that is present in the surrounding environment of the vehicle and that is not included in the captured image, by inputting the captured image and the map information image to machine learning as input images with a plurality of channels.
  • the operating part 14 is operated when, for example, a point of departure serving as a travel start point and a destination serving as a travel end point are inputted, and includes a plurality of operating switches such as various types of keys and buttons (not shown). Based on switch signals outputted by, for example, pressing each switch, the navigation ECU 13 performs control to perform corresponding various types of operation.
  • the operating part 14 may include a touch panel provided on the front of the liquid crystal display 15 .
  • the operating part 14 may include a microphone and an audio recognition device.
  • On the liquid crystal display 15, there are displayed a map image including roads, traffic information, operation guidance, an operation menu, guidance on keys, a guided route set on the navigation device 1, guidance information according to the guided route, news, a weather forecast, the time, e-mail, TV programs, etc.
  • A HUD or an HMD may be used instead of the liquid crystal display 15.
  • Guidance on the result of a determination of a risk factor is also displayed.
  • the speaker 16 outputs audio guidance that provides guidance on travel along a guided route or guidance on traffic information, based on an instruction from the navigation ECU 13 .
  • guidance on a result of a determination of a risk factor is also outputted.
  • the DVD drive 17 is a drive that can read data recorded on a recording medium such as a DVD or a CD. Then, based on the read data, for example, music or video is played back or the map information DB 31 is updated. Note that a card slot for performing reading and writing on a memory card may be provided instead of the DVD drive 17 .
  • The communication module 18 is a communication device for receiving traffic information transmitted from traffic information centers, e.g., a VICS center and a probe center, and corresponds, for example, to a mobile phone or a DCM.
  • the exterior camera 19 is composed of, for example, a camera using a solid-state imaging device such as a CCD, and is attached to the back of a vehicle's rearview mirror, a vehicle's front bumper, etc., and is placed such that an optical-axis direction is downward at a predetermined angle relative to the horizontal.
  • the exterior camera 19 captures an image of the surrounding environment ahead in a vehicle's traveling direction.
  • the navigation ECU 13 determines a risk factor present around the vehicle, by inputting a captured image having been captured together with an image of the three-dimensional map information 34 to machine learning as input images with a plurality of channels.
  • the exterior camera 19 may be configured to be also disposed on the side or rear of the vehicle.
  • the placement position of the exterior camera 19 is substantially the same as a driver's eye position (a start point of the line of sight) and the optical-axis direction is substantially the same as a driver's normal line-of-sight direction.
  • the above-described risk factor determined by machine learning in the navigation device 1 according to the present embodiment is a factor to which attention is to be paid (guidance on which is to be provided) when the vehicle travels.
  • The risk factor includes, for example, an obstacle such as another vehicle or a pedestrian, a road feature such as a crossroad or a crosswalk, a section where a lane increases or decreases, and the entrance of a building facing a road, which are present in locations that are difficult to visually identify from the vehicle (the same applies hereinafter).
  • the “entrance of a building facing a road” is a point where a pedestrian may possibly newly appear on the road, and is a location to which attention is to be paid when the vehicle travels.
  • the “section where a lane increases or decreases” is a point where another vehicle may possibly change its lane, and is a location to which attention is to be paid when the vehicle travels.
  • the three-dimensional map information 34 is information about a map image that three-dimensionally represents particularly road outline lines.
  • a risk factor that serves as a determination target by inputting a map image of the three-dimensional map information 34 to machine learning is a factor related to a road, e.g., a “crossroad present in a location where it is difficult to visually identify it from the vehicle.” Note, however, that by increasing the number of pieces of information to be included in the three-dimensional map information 34 , risk factors of other types can also serve as determination targets.
  • For example, by including the section lines of roads in the three-dimensional map information 34, a “section where a lane increases or decreases and which is present in a location where it is difficult to visually identify it from the vehicle” serves as a determination target for a risk factor.
  • Likewise, by including the shapes of facilities, an “entrance of a building facing a road that is present in a location where it is difficult to visually identify it from the vehicle” serves as a determination target for a risk factor.
  • FIG. 2 is a flowchart of the machine learning processing program according to the present embodiment.
  • the machine learning processing program is a program that is executed after turning on a vehicle's ACC, and sets supervisory signals for learning (correct values) used upon performing machine learning for determining a risk factor based on a captured image captured by the exterior camera 19 and the three-dimensional map information 34 .
  • the machine learning processing program ( FIG. 2 ) is executed in parallel with a risk factor determination processing program ( FIG. 7 ) for determining a risk factor which will be described later.
  • Namely, while a risk factor is determined by the risk factor determination processing program (FIG. 7), more appropriate supervisory signals for learning (correct values) for determining a risk factor are set by the machine learning processing program (FIG. 2).
  • the programs shown in the following flowcharts of FIGS. 2 and 7 are stored in the RAM 42 , the ROM 43 , etc., included in the navigation ECU 13 , and executed by the CPU 41 .
  • the CPU 41 obtains a vehicle's current location and orientation based on results of detection by the current location detecting part 11 . Specifically, positional coordinates on a map that indicate a vehicle's current location are obtained using the two-dimensional map information 33 . Note that upon detection of a vehicle's current location, a map-matching process for matching the vehicle's current location to the two-dimensional map information 33 is also performed. Furthermore, the vehicle's current location may be identified using a high-accuracy location technique.
  • The high-accuracy location technique is a technique that enables detection of a travel lane or a high-accuracy vehicle location by detecting, through image recognition, white lines and road surface painting information captured by a camera at the rear of the vehicle, and by further checking that information against a map information DB stored in advance.
  • the details of the high-accuracy location technique are already publicly known and thus are omitted. Note that it is desirable that the vehicle's current location and orientation be ultimately identified on a map of the three-dimensional map information 34 .
  • the CPU 41 obtains particularly three-dimensional map information 34 for an area around the vehicle's current location which is identified at the above-described S 1 (e.g., an area within 300 m from the vehicle's current location) among the three-dimensional map information 34 stored in the map information DB 31 .
  • the CPU 41 obtains a captured image captured recently by the exterior camera 19 from the captured-image DB 32 .
  • the captured image captured by the exterior camera 19 is an image that captures the environment ahead in a vehicle's traveling direction, i.e., the environment ahead visually identified by the driver (driver's field of vision), to correspond to a start point of a driver's line of sight (eye point) and a driver's line-of-sight direction.
  • the CPU 41 obtains an image capturing range of the captured image obtained at the above-described S 3 .
  • The image capturing range of a captured image 35 can be identified from the position of the focal point P, the optical-axis direction α, and the angle of view ω of the exterior camera 19 at the point in time of image capturing.
  • The angle of view ω is a fixed value determined in advance by the exterior camera 19.
  • The position of the focal point P is determined based on the vehicle's current location obtained at the above-described S1 and the placement position of the exterior camera 19 on the vehicle.
  • The optical-axis direction α is determined based on the vehicle's orientation obtained at the above-described S1 and the placement direction of the exterior camera 19 on the vehicle.
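A minimal sketch of how the horizontal extent of the capturing range might be derived from these three quantities is given below. It is a flat-ground, two-dimensional simplification with assumed parameter names; the patent itself does not give formulas.

```python
import math

def capture_range_edges(focal_point_xy, optical_axis_deg, view_angle_deg, depth_m=50.0):
    """Return the left/right edge points of the capturing range at a given depth.

    focal_point_xy   -- focal point P as (x, y) on the map plane
    optical_axis_deg -- optical-axis direction (vehicle orientation plus camera offset)
    view_angle_deg   -- angle of view, a fixed value per camera
    depth_m          -- how far ahead to evaluate the range (assumed value)
    """
    px, py = focal_point_xy
    half = view_angle_deg / 2.0
    edges = []
    for bearing in (optical_axis_deg - half, optical_axis_deg + half):
        rad = math.radians(bearing)
        edges.append((px + depth_m * math.cos(rad), py + depth_m * math.sin(rad)))
    return edges  # two map points spanning the field of view at depth_m

# Example: camera at the origin, optical axis along +x, 90-degree angle of view
print(capture_range_edges((0.0, 0.0), 0.0, 90.0))
```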
  • the CPU 41 creates a bird's-eye-view image (hereinafter, referred to as a map information image) that three-dimensionally represents a map of the same range as the image capturing range of the captured image obtained at the above-described S 4 from the same direction as an image capturing direction of the captured image, using the three-dimensional map information 34 obtained at the above-described S 2 .
  • The map information image itself is a two-dimensional image, the same as the captured image.
  • FIG. 4 is a diagram showing an example of a map information image 52 created for a captured image 51 .
  • the map information image 52 is an image in which lines indicating outlines 53 of roads included in an image capturing range of the captured image 51 (i.e., in the driver's field of vision) are drawn.
  • In the map information image 52, there is also drawn the outline of a road that is hidden in the captured image 51 by obstacles such as other vehicles.
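One way to picture the creation of the map information image is to project the road-outline vertices stored in the three-dimensional map information onto the same image plane as the exterior camera. The pinhole-projection sketch below is an illustration under assumed camera intrinsics, not the patent's actual rendering method; a full renderer would also draw the line segments connecting the projected vertices.

```python
import numpy as np

def render_outline_points(outline_points_world, camera_pose, fx, fy, cx, cy, size):
    """Project 3D road-outline points into a blank image the size of the captured image.

    outline_points_world -- (N, 3) array of outline vertices in world coordinates
    camera_pose          -- (R, t): rotation matrix and translation, world -> camera
    fx, fy, cx, cy       -- assumed pinhole intrinsics matching the exterior camera
    size                 -- (height, width) of the captured image
    """
    R, t = camera_pose
    cam = (R @ outline_points_world.T).T + t   # world -> camera coordinates
    cam = cam[cam[:, 2] > 0]                   # keep points in front of the camera
    u = fx * cam[:, 0] / cam[:, 2] + cx
    v = fy * cam[:, 1] / cam[:, 2] + cy
    image = np.zeros(size, dtype=np.uint8)
    for ui, vi in zip(u.astype(int), v.astype(int)):
        if 0 <= vi < size[0] and 0 <= ui < size[1]:
            image[vi, ui] = 255                # mark a road-outline pixel
    return image                               # single-channel map information image
```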
  • the CPU 41 inputs the captured image obtained at the above-described S 3 and the map information image created at the above-described S 5 to machine learning as input images with a plurality of channels.
  • Since the captured image is input to the machine learning as three-channel input images represented by the three RGB colors, a total of four channels of input images, including the map information image, are input.
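A minimal sketch of the four-channel stacking described above (the array shapes are assumptions):

```python
import numpy as np

def make_network_input(captured_rgb: np.ndarray, map_image: np.ndarray) -> np.ndarray:
    """Stack the RGB captured image (H, W, 3) and the map information image (H, W)
    into a single 4-channel input array of shape (4, H, W)."""
    assert captured_rgb.shape[:2] == map_image.shape, "images must cover the same range"
    rgb = captured_rgb.astype(np.float32).transpose(2, 0, 1) / 255.0   # 3 channels
    map_ch = map_image.astype(np.float32)[np.newaxis, ...] / 255.0     # 1 channel
    return np.concatenate([rgb, map_ch], axis=0)                       # (4, H, W)
```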
  • For the machine learning, deep learning using a convolutional neural network with a multilayer structure is used.
  • The captured image and the map information image are first input to a convolutional neural network (hereinafter referred to as the convolutional CNN) 55.
  • the convolutional CNN 55 repeats a ‘convolutional layer’ and a ‘pooling layer’ a plurality of times, and thereby outputs particularly important feature maps 56 for determining a risk factor.
  • the ‘convolutional layer’ is a layer for filtering (convoluting) an inputted image.
  • By the convolution of an image, patterns (features) in the image can be detected.
  • The filtering also reduces the size of the image to be output.
  • the outputted image is also called a feature map.
  • filters used in the convolutional layer do not need to be set by a designer, and can be obtained by learning. Note that as learning proceeds, filters suitable for extraction of particularly important features for determining a risk factor are set. Supervisory signals for learning (correct values) which are set by the machine learning processing program also include the above-described filters.
  • the ‘pooling layer’ is placed immediately after the convolutional layer and reduces the positional sensitivity of an extracted feature. Specifically, by coarsely resampling a convolution output, a difference caused by some image shift is absorbed. In the pooling layer, too, the size of an output image is reduced compared to an input image.
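As an illustration of the "convolutional layer and pooling layer repeated a plurality of times" structure, here is a small PyTorch-style sketch that accepts the four-channel input. The number of layers, channel counts, and kernel sizes are assumptions for illustration, not values from the patent.

```python
import torch
import torch.nn as nn

class ConvolutionalCNN(nn.Module):
    """Feature extractor: repeated convolution + pooling over the 4-channel input."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=5, padding=2),   # filters are learned, not hand-set
            nn.ReLU(),
            nn.MaxPool2d(2),                              # pooling reduces positional sensitivity
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.features(x)   # feature maps used for determining a risk factor
```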
  • The feature maps 56 output from the convolutional CNN 55 are converted into a fully-connected multilayer perceptron, i.e., one-dimensional vector data.
  • The fully-connected multilayer perceptron output from the convolutional CNN 55 is then input to the input layer of a neural network for determining a risk factor (hereinafter referred to as the risk determination CNN) 57.
  • The risk determination CNN 57 inputs, to the next intermediate layers, data obtained by multiplying the output data processed in the input layer by a weight (weight coefficient) for each neuron.
  • Similarly, data obtained by multiplying the output data processed in the intermediate layers by a weight (weight coefficient) for each neuron is input to the next output layer.
  • In the output layer, a final determination of a risk factor is made using the data input from the intermediate layers, and a result of the determination (i.e., an extracted risk factor) is output.
  • As learning proceeds, the above-described weights (weight coefficients) change to more suitable values and are set accordingly.
  • a first intermediate layer is a layer for detecting the locations and motions of objects present in a risk determination area
  • a second intermediate layer is a layer for recognizing the detected locations and motions of the objects as a vehicle's surrounding situation (scene)
  • the output layer is a layer for determining a risk factor from the vehicle's surrounding situation (scene).
  • the supervisory signals for learning (correct values) which are set by the machine learning processing program also include the above-described weights (weight coefficients).
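A sketch of the risk determination network fed by the flattened feature maps follows. The layer widths, the flattened feature size, and the set of risk-factor classes are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn

RISK_CLASSES = ["none", "hidden_crossroad", "hidden_lane_change_section",
                "hidden_building_entrance"]   # assumed label set for illustration

class RiskDeterminationCNN(nn.Module):
    """Fully connected layers: input layer -> intermediate layers -> output layer."""
    def __init__(self, feature_dim: int = 64 * 30 * 40):   # assumed flattened feature size
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(feature_dim, 512),        # input layer: each neuron's output is weighted
            nn.ReLU(),
            nn.Linear(512, 128),                # intermediate layers: recognize the surrounding scene
            nn.ReLU(),
            nn.Linear(128, len(RISK_CLASSES)),  # output layer: final risk-factor determination
        )

    def forward(self, feature_maps: torch.Tensor) -> torch.Tensor:
        flat = feature_maps.flatten(start_dim=1)   # one-dimensional vector per sample
        return self.layers(flat)                   # scores for each risk-factor class
```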
  • The CPU 41 estimates a differential area, in which the captured image and the map information image differ, to be an area that contains a target which is present in the map image but has disappeared from (has not been captured in) the captured image for some reason, and which is therefore a blind spot for an occupant of the vehicle. Therefore, by determining a risk factor particularly from feature portions extracted in the differential area by machine learning, it becomes possible to further reduce the processing related to the determination of a risk factor.
  • the feature portions extracted by machine learning include an object that is included in the map information image but is not included in the captured image, i.e., an object that is present in the driver's field of vision but cannot be visually identified.
  • When the captured image 51 (in practice, three images represented by RGB) and the map information image 52 shown in FIG. 6 are compared, part of a road outline present in the map information image 52 has disappeared from the captured image 51 due to other vehicles.
  • In FIG. 6, the area enclosed by a broken line is the differential area 58.
  • the above-described feature portions extracted by machine learning are other vehicles that cover road outline lines and the disappeared outline lines. Since the feature portions are present in the differential area 58 , by allowing the differential area 58 to be easily identified, it becomes possible to facilitate a process related to the extraction of the above-described feature portions.
  • Although in this example the road outline lines disappear from the captured image 51 due to other vehicles, the road outline lines may also disappear due to a building, etc., in addition to other vehicles.
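A minimal sketch of identifying such a differential area, as the set of pixels where a road outline is drawn in the map information image but no corresponding edge appears in the captured image, is shown below; the edge-extraction step and the alignment tolerance are assumptions.

```python
import numpy as np

def differential_area(map_image: np.ndarray, captured_edges: np.ndarray,
                      dilate: int = 3) -> np.ndarray:
    """Return a boolean mask of outline pixels present in the map image but
    missing from an edge image derived from the captured image.

    map_image      -- (H, W) rendered road-outline image (255 on outlines)
    captured_edges -- (H, W) edge map extracted from the captured image (e.g., by Canny)
    dilate         -- tolerance in pixels for small alignment errors (assumed value)
    """
    outline = map_image > 0
    # Tolerate small misalignment by checking a small neighborhood in the edge map.
    pad = np.pad(captured_edges > 0, dilate)
    h, w = outline.shape
    neighborhood_hit = np.zeros_like(outline)
    for dy in range(2 * dilate + 1):
        for dx in range(2 * dilate + 1):
            neighborhood_hit |= pad[dy:dy + h, dx:dx + w]
    return outline & ~neighborhood_hit   # outline drawn on the map but absent in the photo
```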
  • the CPU 41 sets supervisory signals for learning (correct values) which are learning data, based on learning results obtained at the above-described S 6 .
  • the supervisory signals for learning include, as described above, filters used in the convolutional layer and weights (weight coefficients) used in the risk determination CNN 57 .
  • the supervisory signals for learning include learning data for determining, as a risk factor, an object that is included in the map information image but is not included in the captured image (i.e., an object that is present in the driver's field of vision but cannot be visually identified).
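In practice, setting the supervisory signals for learning amounts to updating the convolutional filters and the weight coefficients from labeled examples. The generic training-step sketch below reuses the two network sketches above together with a hypothetical labeled batch; none of these names come from the patent.

```python
import torch
import torch.nn as nn

def training_step(conv_cnn, risk_cnn, optimizer, batch_inputs, batch_labels):
    """One gradient step refining the learned filters (conv_cnn) and the
    weight coefficients (risk_cnn) toward the correct risk-factor labels.

    batch_inputs -- tensor of shape (B, 4, H, W): stacked captured + map images
    batch_labels -- tensor of shape (B,): correct risk-factor class indices
    """
    criterion = nn.CrossEntropyLoss()
    optimizer.zero_grad()
    scores = risk_cnn(conv_cnn(batch_inputs))   # forward pass through both networks
    loss = criterion(scores, batch_labels)      # compare against the correct values
    loss.backward()                             # gradients for filters and weights
    optimizer.step()                            # move them toward more suitable values
    return loss.item()
```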
  • FIG. 7 is a flowchart of the risk factor determination processing program according to the present embodiment.
  • the risk factor determination processing program is a program that is executed after turning on the vehicle's ACC, and determines a risk factor present around the vehicle based on a captured image captured by the exterior camera 19 and the three-dimensional map information 34 and outputs a result of the determination.
  • the risk factor determination processing program ( FIG. 7 ) is executed in parallel with the aforementioned machine learning processing program ( FIG. 2 ). Namely, while a risk factor is determined by the risk factor determination processing program ( FIG. 7 ), more appropriate supervisory signals for learning (correct values) for determining a risk factor are set by the machine learning processing program ( FIG. 2 ).
  • the CPU 41 obtains a vehicle's current location and orientation based on results of detection by the current location detecting part 11 . Note that details are the same as those of S 1 and thus are omitted.
  • the CPU 41 obtains particularly three-dimensional map information 34 for an area around the vehicle's current location which is identified at the above-described S 11 (e.g., an area within 300 m from the vehicle's current location) among the three-dimensional map information 34 stored in the map information DB 31 .
  • the CPU 41 obtains a captured image captured recently by the exterior camera 19 from the captured-image DB 32 .
  • the captured image captured by the exterior camera 19 is an image that captures the environment ahead in a vehicle's traveling direction, i.e., the environment ahead visually identified by the driver (driver's field of vision), to correspond to a start point of a driver's line of sight (eye point) and a driver's line-of-sight direction.
  • The CPU 41 obtains the image capturing range of the captured image obtained at the above-described S13. Note that details are the same as those of S4 and thus are omitted.
  • The CPU 41 creates a bird's-eye-view image (map information image) that three-dimensionally represents a map of the same range as the image capturing range obtained at the above-described S14, from the same direction as the image capturing direction of the captured image, using the three-dimensional map information 34 obtained at the above-described S12.
  • the CPU 41 inputs the captured image obtained at the above-described S 13 and the map information image created at the above-described S 15 to machine learning as input images with a plurality of channels.
  • Since the captured image is input to the machine learning as three-channel input images represented by the three RGB colors, a total of four channels of input images, including the map information image, are input.
  • For the machine learning, deep learning using a convolutional neural network with a multilayer structure is used, together with the supervisory signals for learning (correct values) set by the aforementioned machine learning processing program (FIG. 2). Note that the content of the machine learning is described at S6 and thus the details thereof are omitted.
  • the CPU 41 determines a risk factor present in the surrounding environment of the vehicle, based on a result of the machine learning at the above-described S 16 .
  • a risk factor is determined particularly from feature portions extracted in the differential area by machine learning.
  • Since the supervisory signals for learning include learning data for determining, as a risk factor, an object that is included in the map information image but is not included in the captured image (i.e., an object that is present in the driver's field of vision but cannot be visually identified), when a candidate for a risk factor, such as a crossroad that cannot be visually identified by the driver, is present in the surrounding environment of the vehicle, it becomes possible to accurately determine that crossroad as a risk factor based on learning data such as the supervisory signals for learning.
  • Subsequently, the navigation device 1 outputs the result of the determination of a risk factor (i.e., an extracted risk factor).
  • For example, guidance on the presence of the risk factor may be provided to the user, or vehicle control for avoiding the risk factor may be performed.
  • As guidance, for example, the message “watch ahead of the vehicle” is provided.
  • Guidance that more specifically identifies the risk factor (e.g., “watch for a crossroad present in a blind spot ahead of the vehicle”) may also be provided.
  • Guidance on the location of the risk factor may also be provided.
  • As vehicle control, for example, deceleration control is performed.
  • The vehicle control can also be applied to a self-driving vehicle; in that case, for example, it is possible to perform control such as setting a travel route that avoids the risk factor.
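The output step can be pictured as a small dispatcher that either announces the extracted risk factor or requests avoidance control. The message texts and the speaker/display/vehicle interfaces below are illustrative assumptions, not the patent's actual interfaces.

```python
def output_determination(risk_factor, speaker, display, vehicle=None):
    """Announce an extracted risk factor and, optionally, request avoidance control.

    risk_factor -- e.g., "hidden_crossroad", or None when nothing was extracted
    speaker / display / vehicle -- hypothetical output and control interfaces
    """
    if risk_factor is None:
        return
    messages = {   # assumed guidance texts keyed by risk-factor class
        "hidden_crossroad": "Watch for a crossroad present in a blind spot ahead of the vehicle.",
        "hidden_lane_change_section": "Watch for a lane change section hidden ahead.",
        "hidden_building_entrance": "Watch for a building entrance hidden ahead.",
    }
    text = messages.get(risk_factor, "Watch ahead of the vehicle.")
    speaker.say(text)    # audio guidance
    display.show(text)   # on-screen guidance
    if vehicle is not None:
        vehicle.request_deceleration()   # example of control for avoiding the risk factor
```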
  • As described above, the navigation device 1 and the computer program executed by the navigation device 1 obtain a captured image that captures the surrounding environment of the vehicle (S13); obtain, as a map information image, a map image that three-dimensionally represents a map and that represents the same range as the image capturing range of the captured image from the same direction as the image capturing direction of the captured image (S15); and extract, based on learning data, a risk factor that is present in the surrounding environment of the vehicle and that is not included in the captured image, by inputting the captured image and the map information image to machine learning as input images with a plurality of channels (S16 and S17).
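Putting the steps together, the per-frame flow summarized above can be sketched as follows, reusing the helper names introduced in the earlier sketches; the nav interface and all helper names are assumptions rather than the patent's actual interfaces.

```python
import torch

def determine_risk_factor_once(nav, conv_cnn, risk_cnn):
    """One pass of the risk factor determination flow described above."""
    location, orientation = nav.current_location_and_orientation()   # S11
    map3d = nav.three_dimensional_map_around(location)               # S12
    captured = nav.latest_captured_image()                           # S13
    cam_range = nav.image_capturing_range(location, orientation)     # S14
    map_image = nav.render_map_information_image(map3d, cam_range)   # S15
    x = make_network_input(captured, map_image)                      # 4-channel stack
    with torch.no_grad():
        scores = risk_cnn(conv_cnn(torch.from_numpy(x)[None]))       # S16: machine learning
    return RISK_CLASSES[int(scores.argmax())]                        # S17: extracted risk factor
```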
  • Although in the present embodiment the three-dimensional map information 34 is information about a map image that three-dimensionally represents road outlines, the three-dimensional map information 34 may be map information representing information other than road outlines.
  • the map image may also represent the shapes of facilities, the section lines of roads, road signs, signs, etc.
  • a factor other than a crossroad can also serve as a determination target for a risk factor.
  • a “section where a lane increases or decreases and which is present in a location where it is difficult to visually identify it from the vehicle” can be determined as a risk factor.
  • Although in the present embodiment machine learning (deep learning) using a convolutional neural network with a multilayer structure is used, other machine learning can also be used.
  • Although in the present embodiment the machine learning processing program (FIG. 2) and the risk factor determination processing program (FIG. 7) are executed in parallel, the programs do not necessarily need to be executed simultaneously.
  • For example, the machine learning processing program (FIG. 2) may be executed after executing the risk factor determination processing program (FIG. 7).
  • In addition, although in the present embodiment the machine learning processing program (FIG. 2) and the risk factor determination processing program (FIG. 7) are executed by the navigation device 1, those programs may be configured to be executed by an in-vehicle device other than the navigation device.
  • An external server may also perform some of the processes.
  • the travel assistance device can also have the following configurations, and in that case, the following advantageous effects are provided.
  • a first configuration is as follows:
  • a travel assistance device includes surrounding environment imaging means ( 41 ) for obtaining a captured image ( 51 ) that captures the surrounding environment of a mobile unit; map information image obtaining means ( 41 ) for obtaining, as a map information image ( 52 ), a map image that three-dimensionally represents a map and that represents the same range as an image capturing range of the captured image from the same direction as an image capturing direction of the captured image; and risk factor extracting means ( 41 ) for extracting, based on learning data, a risk factor that is present in the surrounding environment of the mobile unit and that is not included in the captured image, by inputting the captured image and the map information image to machine learning as input images with a plurality of channels.
  • With the travel assistance device having the above-described configuration, by inputting a map image that three-dimensionally represents a map and a captured image of the area around a mobile unit to machine learning as input images with a plurality of channels, it becomes possible to more accurately determine a risk factor present in the surrounding environment of the mobile unit, based on learning data.
  • According to a second configuration, the risk factor extracting means (41) extracts a risk factor present in the surrounding environment of the mobile unit from a feature portion in a differential area having a difference between the captured image (51) and the map information image (52).
  • With the travel assistance device having the above-described configuration, it becomes possible to reduce the processing load for the determination of a risk factor compared to the conventional case.
  • According to a third configuration, the risk factor extracting means (41) extracts, as the feature portion, an object that is included in the map information image (52) but is not included in the captured image (51).
  • With the travel assistance device having the above-described configuration, it becomes possible to easily extract, as a feature, an object that constitutes a risk factor present in a blind spot for the driver, i.e., in a location where it cannot be visually identified by the driver.
  • According to a fourth configuration, the map image is a map image that three-dimensionally represents road outlines, and when there is a road outline that is included in the map information image (52) but is not included in the captured image (51), the risk factor extracting means (41) determines a road corresponding to that outline to be a risk factor.
  • With the travel assistance device having the above-described configuration, it becomes possible to determine, as a risk factor for the vehicle, particularly a road present in a location that is a blind spot for the vehicle. In addition, by using information that identifies road outlines as the three-dimensional map, it becomes possible to suppress the amount of information of the three-dimensional map.
  • According to a fifth configuration, the risk factor extracting means (41) determines a risk factor present in the surrounding environment of the mobile unit by machine learning using a neural network with a multilayer structure.
  • With the travel assistance device having the above-described configuration, by machine learning using a neural network with a multilayer structure, features of an image can be learned at a deeper level and recognized with very high accuracy, and thus it becomes possible to more accurately determine a risk factor. In addition, optimal filters can be set by learning.

Abstract

Travel assistance devices and programs obtain a captured image that captures a surrounding environment of a mobile unit and obtain, as a map information image, a map image that three-dimensionally represents a map and that represents a same range as an image capturing range of the captured image from a same direction as an image capturing direction of the captured image. The devices and programs extract a risk factor based on learning data by inputting the captured image and the map information image to machine learning as input images with a plurality of channels. The risk factor is present in the surrounding environment of the mobile unit and is not included in the captured image.

Description

    TECHNICAL FIELD
  • Related technical fields include travel assistance devices and computer programs that provide travel assistance for a mobile unit.
  • BACKGROUND
  • In recent years, for example, as one type of travel assistance for a mobile unit such as a vehicle, travel assistance has been performed in which a risk factor present around the mobile unit is extracted and guidance on the extracted risk factor is provided. The risk factor is a factor to which attention is to be paid when the mobile unit travels, and includes, for example, an obstacle such as another vehicle or a pedestrian, a road feature such as a crossroad or a crosswalk, a section where a lane increases or decreases, and the entrance of a building facing a road, which are present in locations that are difficult to visually identify from the mobile unit (the same applies hereinafter). To determine a risk factor such as those described above, it is possible, for example, to make a determination by comparing the current location and orientation of the mobile unit with map information, or to make a determination using a camera, a sensor, or a communication device installed on the mobile unit.
  • For example, JP 2012-192878 A discloses that during traveling of a vehicle, the locations and movement speeds of obstacles such as other vehicles present around the vehicle are detected in real time using a camera, a sensor, and a communication device installed on the vehicle, and when it is determined that the vehicle has a blind spot region and the risk level of the blind spot region is high, driving assistance for avoiding an obstacle present in the blind spot region is provided.
  • SUMMARY
  • However, in the above-described JP 2012-192878 A, although a blind spot region is determined in a bird's-eye-view manner based on the positional relationship between the vehicle and another vehicle, the surrounding environment that the driver can actually see is not considered; as a result, the region determined to be a blind spot region can differ from the driver's actual blind spot region. In addition, because the risk level of a blind spot region is determined from the behaviors of a preceding vehicle and an oncoming vehicle, it has been difficult to accurately determine whether the blind spot region is actually risky, even when the preceding or oncoming vehicle exhibits behavior judged to be high in risk level. Thus, there is a possibility that a blind spot region that does not actually constitute a risk factor is also determined to be a risk factor and thus becomes an assistance target.
  • Exemplary embodiments of the broad inventive principles described herein solve the above-described conventional problem, and provide a travel assistance device and a computer program that make it possible to more accurately determine a risk factor present in the surrounding environment of a mobile unit.
  • Exemplary embodiments provide travel assistance devices and programs that obtain a captured image capturing the surrounding environment of a mobile unit and obtain, as a map information image, a map image that three-dimensionally represents a map and that represents the same range as the image capturing range of the captured image from the same direction as the image capturing direction of the captured image. The devices and programs extract a risk factor based on learning data by inputting the captured image and the map information image to machine learning as input images with a plurality of channels. The risk factor is present in the surrounding environment of the mobile unit and is not included in the captured image.
  • Note that the “mobile unit” is not limited to a vehicle and may be anything that moves on a road, such as a pedestrian or a bicycle.
  • Note also that the “risk factor” is a factor to which attention is to be paid when the mobile unit travels, and includes, for example, an obstacle such as another vehicle or a pedestrian, a road feature such as a crossroad or a crosswalk, a section where a lane increases or decreases, and the entrance of a building facing a road, which are present in locations that are difficult to visually identify from the mobile unit (the same applies hereinafter).
  • According to the travel assistance device and computer program having the above-described configurations, by inputting a map image that three-dimensionally represents a map and a captured image of the area around a mobile unit to machine learning as input images with a plurality of channels, it becomes possible to more accurately determine a risk factor present in the surrounding environment of the mobile unit based on learning data.
  • BRIEF DESCRIPTION OF DRAWINGS
  • FIG. 1 is a block diagram showing a navigation device according to the present embodiment.
  • FIG. 2 is a flowchart of a machine learning processing program according to the present embodiment.
  • FIG. 3 is a diagram showing an image capturing range of a captured image.
  • FIG. 4 is a diagram showing a map information image created based on a captured image.
  • FIG. 5 is a diagram describing an example of machine learning performed on images.
  • FIG. 6 is a diagram for comparison between the captured image and the map information image.
  • FIG. 7 is a flowchart of a risk factor determination processing program according to the present embodiment.
  • DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
  • A travel assistance device will be described in detail below based on one embodiment that embodies a navigation device and with reference to the drawings. First, a schematic configuration of a navigation device 1 according to the present embodiment will be described using FIG. 1. FIG. 1 is a block diagram showing the navigation device 1 according to the present embodiment.
  • As shown in FIG. 1, the navigation device 1 according to the present embodiment includes a current location detecting part 11 that detects a current location of a vehicle having the navigation device 1 mounted thereon; a data recording part 12 having various types of data recorded therein; a navigation ECU 13 that performs various types of arithmetic processing based on inputted information; an operating part 14 that accepts operations from a user; a liquid crystal display 15 that displays a map of an area around the vehicle, information about a guided route set on the navigation device 1, etc., to the user; a speaker 16 that outputs audio guidance on route guidance, an alert against risk factors, etc.; a DVD drive 17 that reads a DVD which is a storage medium; and a communication module 18 that performs communication with information centers such as a probe center and a VICS (registered trademark: Vehicle Information and Communication System) center. As used herein, the term “storage medium” does not encompass transitory signals. In addition, an exterior camera 19 installed on the vehicle having the navigation device 1 mounted thereon is connected to the navigation device 1 through an in-vehicle network such as a CAN.
  • Each component included in the navigation device 1 will be described below in turn.
  • The current location detecting part 11 includes a GPS 21, a vehicle speed sensor 22, a steering sensor 23, a gyro sensor 24, etc., and can detect the current location, orientation, and travel speed of the vehicle, the current time, etc. Here, particularly, the vehicle speed sensor 22 is a sensor for detecting the movement distance and vehicle speed of the vehicle, and generates pulses according to the rotation of drive wheels of the vehicle and outputs a pulse signal to the navigation ECU 13. Then, the navigation ECU 13 counts the generated pulses and thereby calculates the rotational speed of the drive wheels and a movement distance. Note that the navigation device 1 does not need to include all of the above-described four types of sensors and may be configured to include only one or a plurality of types of sensors among those sensors.
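  • As a rough illustration of the pulse-based calculation described above, the following sketch converts a counted pulse number into a movement distance and a speed; the pulses-per-revolution and wheel-circumference constants are assumptions for illustration only, not values from the present embodiment.

```python
# Minimal sketch (not from the patent): converting wheel-speed pulses into
# distance and speed, using assumed sensor constants.
PULSES_PER_REVOLUTION = 48      # assumed pulses per drive-wheel revolution
WHEEL_CIRCUMFERENCE_M = 1.9     # assumed wheel circumference in meters

def distance_and_speed(pulse_count: int, interval_s: float):
    """Return (distance travelled [m], speed [m/s]) for pulses counted over interval_s."""
    revolutions = pulse_count / PULSES_PER_REVOLUTION
    distance_m = revolutions * WHEEL_CIRCUMFERENCE_M
    speed_mps = distance_m / interval_s if interval_s > 0 else 0.0
    return distance_m, speed_mps

# Example: 120 pulses counted over 0.1 s
print(distance_and_speed(120, 0.1))
```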
  • In addition, the data recording part 12 includes a hard disk (not shown) serving as an external storage device and a recording medium; and a recording head (not shown) which is a driver for reading a map information DB 31, a captured-image DB 32, a predetermined program, etc., recorded on the hard disk, and writing predetermined data to the hard disk. Note that the data recording part 12 may include a memory card or an optical disc such as a CD or a DVD instead of the hard disk. Note also that the map information DB 31 and the captured-image DB 32 may be stored on an external server, and the navigation device 1 may obtain the map information DB 31 and the captured-image DB 32 by communication.
  • Here, the map information DB 31 stores therein two-dimensional map information 33 and three-dimensional map information 34. The two-dimensional map information 33 is general map information used in the navigation device 1 and includes, for example, link data about roads (links), node data about node points, facility data about facilities, search data used in a route search process, map display data for displaying a map, intersection data about each intersection, and retrieval data for retrieving points.
  • On the other hand, the three-dimensional map information 34 is information about a map image that three-dimensionally represents a map. Particularly, in the present embodiment, the three-dimensional map information 34 is information about a map image that three-dimensionally represents road outlines. Note that the map image may also represent information other than road outlines. For example, the map image may also three-dimensionally represent the shapes of facilities, the section lines of roads, road signs, signs, etc.
  • The navigation device 1 performs general functions such as display of a map image on the liquid crystal display 15 and a search for a guided route, using the two-dimensional map information 33. In addition, as will be described later, a process related to a determination of a risk factor is performed using the three-dimensional map information 34.
  • In addition, the captured-image DB 32 is storage means for storing captured images 35 captured by the exterior camera 19. Note that the captured images 35 captured by the exterior camera 19 are cumulatively stored in the captured-image DB 32 and are deleted in order starting from the oldest.
  • Meanwhile, the navigation ECU (electronic control unit) 13 is an electronic control unit that performs overall control of the navigation device 1, and includes a CPU 41 serving as a computing device and a control device; and internal storage devices such as a RAM 42 that is used as a working memory when the CPU 41 performs various types of arithmetic processing and that stores route data obtained when a route is searched for, etc., a ROM 43 having recorded therein a machine learning processing program (see FIG. 2) and a risk factor determination processing program (see FIG. 7) which will be described later, etc., in addition to a program for control, and a flash memory 44 that stores a program read from the ROM 43. Note that the navigation ECU 13 includes various types of means serving as processing algorithms. For example, surrounding environment imaging means obtains a captured image that captures the surrounding environment of the vehicle. Map information image obtaining means obtains, as a map information image, a map image that represents three-dimensional map information 34 of the same range as an image capturing range of the captured image from the same direction as an image capturing direction of the captured image. Risk factor extracting means extracts, based on learning data, a risk factor that is present in the surrounding environment of the vehicle and that is not included in the captured image, by inputting the captured image and the map information image to machine learning as input images with a plurality of channels.
  • The operating part 14 is operated when, for example, a point of departure serving as a travel start point and a destination serving as a travel end point are inputted, and includes a plurality of operating switches such as various types of keys and buttons (not shown). Based on switch signals outputted by, for example, pressing each switch, the navigation ECU 13 performs control to perform corresponding various types of operation. Note that the operating part 14 may include a touch panel provided on the front of the liquid crystal display 15. Note also that the operating part 14 may include a microphone and an audio recognition device.
  • In addition, on the liquid crystal display 15 there are displayed a map image including roads, traffic information, operation guidance, an operation menu, guidance on keys, a guided route set on the navigation device 1, guidance information according to the guided route, news, a weather forecast, time, an e-mail, a TV program, etc. Note that a HUD or an HMD may be used instead of the liquid crystal display 15. In addition, in the present embodiment, particularly, guidance on a result of a determination of a risk factor is also displayed.
  • In addition, the speaker 16 outputs audio guidance that provides guidance on travel along a guided route or guidance on traffic information, based on an instruction from the navigation ECU 13. In addition, in the present embodiment, particularly, guidance on a result of a determination of a risk factor is also outputted.
  • In addition, the DVD drive 17 is a drive that can read data recorded on a recording medium such as a DVD or a CD. Then, based on the read data, for example, music or video is played back or the map information DB 31 is updated. Note that a card slot for performing reading and writing on a memory card may be provided instead of the DVD drive 17.
  • In addition, the communication module 18 is a communication device for receiving traffic information transmitted from traffic information centers, e.g., a VICS center and a probe center, and corresponds, for example, to a mobile phone or a DCM.
  • In addition, the exterior camera 19 is composed of, for example, a camera using a solid-state imaging device such as a CCD, and is attached to the back of a vehicle's rearview mirror, a vehicle's front bumper, etc., and is placed such that an optical-axis direction is downward at a predetermined angle relative to the horizontal. The exterior camera 19 captures an image of the surrounding environment ahead in a vehicle's traveling direction. In addition, the navigation ECU 13, as will be described later, determines a risk factor present around the vehicle, by inputting a captured image having been captured together with an image of the three-dimensional map information 34 to machine learning as input images with a plurality of channels. Note that the exterior camera 19 may be configured to be also disposed on the side or rear of the vehicle. In addition, it is desirable to make an adjustment such that the placement position of the exterior camera 19 is substantially the same as a driver's eye position (a start point of the line of sight) and the optical-axis direction is substantially the same as a driver's normal line-of-sight direction. By doing so, an image captured by the exterior camera 19 matches the driver's field of vision, enabling to more appropriately determine a risk factor.
  • The above-described risk factor determined by machine learning in the navigation device 1 according to the present embodiment is a factor to which attention is to be paid (guidance on which is to be provided) when the vehicle travels. The risk factor includes, for example, an obstacle such as another vehicle and a pedestrian, a road sign for a crossroad, a crosswalk, etc., a section where a lane increases or decreases, and the entrance of a building facing a road, which are present in locations where it is difficult to visually identify them from the vehicle (the same hereinafter). For example, the “entrance of a building facing a road” is a point where a pedestrian may possibly newly appear on the road, and is a location to which attention is to be paid when the vehicle travels. In addition, the “section where a lane increases or decreases” is a point where another vehicle may possibly change its lane, and is a location to which attention is to be paid when the vehicle travels.
  • Note that in the present embodiment, as described above, the three-dimensional map information 34 is information about a map image that three-dimensionally represents particularly road outline lines. Thus, a risk factor that serves as a determination target by inputting a map image of the three-dimensional map information 34 to machine learning is a factor related to a road, e.g., a “crossroad present in a location where it is difficult to visually identify it from the vehicle.” Note, however, that by increasing the number of pieces of information to be included in the three-dimensional map information 34, risk factors of other types can also serve as determination targets. For example, by including information on section lines in the three-dimensional map information 34, a “section where a lane increases or decreases and which is present in a location where it is difficult to visually identify it from the vehicle” serves as a determination target for a risk factor. Furthermore, by including information on the shapes of facilities in the three-dimensional map information 34, an “entrance of a building facing a road that is present in a location where it is difficult to visually identify it from the vehicle” serves as a determination target for a risk factor.
  • Next, a machine learning processing program executed by the CPU 41 in the navigation device 1 according to the present embodiment that has the above-described configuration will be described based on FIG. 2. FIG. 2 is a flowchart of the machine learning processing program according to the present embodiment. Here, the machine learning processing program is a program that is executed after turning on a vehicle's ACC, and sets supervisory signals for learning (correct values) used upon performing machine learning for determining a risk factor based on a captured image captured by the exterior camera 19 and the three-dimensional map information 34. Note that the machine learning processing program (FIG. 2) is executed in parallel with a risk factor determination processing program (FIG. 7) for determining a risk factor which will be described later. Namely, while a risk factor is determined by the risk factor determination processing program (FIG. 7), more appropriate supervisory signals for learning (correct values) for determining a risk factor are set by the machine learning processing program (FIG. 2). In addition, the programs shown in the following flowcharts of FIGS. 2 and 7 are stored in the RAM 42, the ROM 43, etc., included in the navigation ECU 13, and executed by the CPU 41.
  • First, in the machine learning processing program, at step (hereinafter, abbreviated as S) 1, the CPU 41 obtains a vehicle's current location and orientation based on results of detection by the current location detecting part 11. Specifically, positional coordinates on a map that indicate a vehicle's current location are obtained using the two-dimensional map information 33. Note that upon detection of a vehicle's current location, a map-matching process for matching the vehicle's current location to the two-dimensional map information 33 is also performed. Furthermore, the vehicle's current location may be identified using a high-accuracy location technique. Here, the high-accuracy location technique is a technique that enables detection of a travel lane or a high-accuracy vehicle location by detecting, through image recognition, white line and road surface painting information captured by a camera at the rear of the vehicle and checking that information against a map information DB stored in advance. Note that the details of the high-accuracy location technique are already publicly known and thus are omitted. Note that it is desirable that the vehicle's current location and orientation be ultimately identified on a map of the three-dimensional map information 34.
  • Then, at S2, the CPU 41 obtains particularly three-dimensional map information 34 for an area around the vehicle's current location which is identified at the above-described S1 (e.g., an area within 300 m from the vehicle's current location) among the three-dimensional map information 34 stored in the map information DB 31.
  • Subsequently, at S3, the CPU 41 obtains a captured image captured recently by the exterior camera 19 from the captured-image DB 32. Note that the captured image captured by the exterior camera 19 is an image that captures the environment ahead in a vehicle's traveling direction, i.e., the environment ahead visually identified by the driver (driver's field of vision), to correspond to a start point of a driver's line of sight (eye point) and a driver's line-of-sight direction.
  • Thereafter, at S4, the CPU 41 obtains an image capturing range of the captured image obtained at the above-described S3. Here, as shown in FIG. 3, the image capturing range of a captured image 35 can be identified by the position of a focal point P, an optical-axis direction a, and the angle of view φ of the exterior camera 19 obtained at the point in time of image capturing. Note that the angle of view φ is a fixed value which is determined in advance by the exterior camera 19. On the other hand, the position of the focal point P is determined based on the vehicle's current location obtained at the above-described S1 and the placement position of the exterior camera 19 on the vehicle. In addition, the optical-axis direction a is determined based on the vehicle's orientation obtained at the above-described S1 and the placement direction of the exterior camera 19 on the vehicle.
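  • The relationship described above can be pictured with the following sketch, which approximates the image capturing range on the map plane as a triangle spanned by the focal point P, the optical-axis direction, and the angle of view φ; the function name and the maximum range value are assumptions introduced for illustration.

```python
import math

# Minimal sketch (assumptions, not the patent's implementation): the image
# capturing range on the 2D map is approximated as a triangle defined by the
# focal point P, the optical-axis direction alpha, and the angle of view phi.
def capture_range_polygon(p_xy, alpha_deg, phi_deg, max_range_m=300.0):
    """Return corner points of a triangular capturing range on the map plane."""
    px, py = p_xy
    left = math.radians(alpha_deg - phi_deg / 2.0)
    right = math.radians(alpha_deg + phi_deg / 2.0)
    corner_left = (px + max_range_m * math.cos(left), py + max_range_m * math.sin(left))
    corner_right = (px + max_range_m * math.cos(right), py + max_range_m * math.sin(right))
    return [p_xy, corner_left, corner_right]

# Example: camera at the origin, optical axis pointing east, 60-degree angle of view
print(capture_range_polygon((0.0, 0.0), alpha_deg=0.0, phi_deg=60.0))
```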
  • Then, at S5, the CPU 41 creates a bird's-eye-view image (hereinafter, referred to as a map information image) that three-dimensionally represents a map of the same range as the image capturing range of the captured image obtained at the above-described S4 from the same direction as an image capturing direction of the captured image, using the three-dimensional map information 34 obtained at the above-described S2. Note that the map information image itself is a two-dimensional image which is the same as the captured image.
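  • One possible way to realize such a map information image, shown below as a hedged sketch rather than the embodiment's actual implementation, is to project the road-outline vertices of the three-dimensional map information 34, expressed in camera coordinates, through a simple pinhole model that shares the viewpoint of the captured image; the focal length, image size, and function names are assumptions, and a full implementation would also rasterize line segments between the projected vertices.

```python
# Minimal sketch: pinhole projection of 3D road-outline vertices into an image
# plane that shares the captured image's viewpoint.
def project_outline(points_cam, f_px=800.0, width=1280, height=720):
    """Project 3D points given in camera coordinates (x right, y down, z forward);
    return pixel coordinates of points in front of the camera."""
    cx, cy = width / 2.0, height / 2.0
    pixels = []
    for x, y, z in points_cam:
        if z <= 0.1:                 # skip points behind or too close to the camera
            continue
        u = f_px * x / z + cx
        v = f_px * y / z + cy
        if 0 <= u < width and 0 <= v < height:
            pixels.append((int(u), int(v)))
    return pixels

# Example: a road edge running straight ahead of the camera at ground level (y = 1.5 m)
edge = [(-1.75, 1.5, z) for z in range(5, 100, 5)]
print(project_outline(edge))
```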
  • Here, FIG. 4 is a diagram showing an example of a map information image 52 created for a captured image 51. As shown in FIG. 4, the map information image 52 is an image in which lines indicating outlines 53 of roads included in an image capturing range of the captured image 51 (i.e., in the driver's field of vision) are drawn. In the map information image 52 there is also drawn an outline of a road which is hidden in the captured image 51 by obstacles such as other vehicles.
  • Thereafter, at S6, the CPU 41 inputs the captured image obtained at the above-described S3 and the map information image created at the above-described S5 to machine learning as input images with a plurality of channels. Since the captured image is inputted to the machine learning as three input channels corresponding to its R, G, and B components, a total of four input channels, including the map information image, are inputted. Note that in the present embodiment, particularly, machine learning (deep learning) using a convolutional neural network with a multilayer structure is used.
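  • The four-channel input described above can be pictured as follows; this is a minimal sketch using assumed tensor shapes and the PyTorch library, not code from the present embodiment.

```python
# Minimal sketch: the captured image contributes three RGB channels and the map
# information image one more, giving a four-channel input for the learning model.
import torch

captured_rgb = torch.rand(3, 224, 224)   # dummy captured image, channels = R, G, B
map_image = torch.rand(1, 224, 224)      # dummy map information image, single channel

four_channel_input = torch.cat([captured_rgb, map_image], dim=0)  # shape: (4, 224, 224)
batch = four_channel_input.unsqueeze(0)                           # shape: (1, 4, 224, 224)
print(batch.shape)
```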
  • As shown in FIG. 5, when captured images 51 represented by the three RGB colors, respectively, and a map information image 52 are inputted to a learning model as four-channel input images, first, image processing based on a convolutional neural network (hereinafter, referred to as convolutional CNN) 55 is performed. The convolutional CNN 55 repeats a ‘convolutional layer’ and a ‘pooling layer’ a plurality of times, and thereby outputs particularly important feature maps 56 for determining a risk factor.
  • In addition, the ‘convolutional layer’ is a layer for filtering (convoluting) an inputted image. By the convolution of an image, patterns (features) in the image can be detected. In addition, there are a plurality of convolutional filters. By providing a plurality of filters, it becomes possible to capture various features in the inputted image. In addition, by filtering, the size of an image to be outputted is reduced. The outputted image is also called a feature map. In addition, filters used in the convolutional layer do not need to be set by a designer, and can be obtained by learning. Note that as learning proceeds, filters suitable for extraction of particularly important features for determining a risk factor are set. Supervisory signals for learning (correct values) which are set by the machine learning processing program also include the above-described filters.
  • On the other hand, the ‘pooling layer’ is placed immediately after the convolutional layer and reduces the positional sensitivity of an extracted feature. Specifically, by coarsely resampling a convolution output, a difference caused by some image shift is absorbed. In the pooling layer, too, the size of an output image is reduced compared to an input image.
  • Then, after the above-described ‘convolutional layer’ and ‘pooling layer’ are repeated a plurality of times, the resulting feature maps are finally flattened and fed to a fully-connected multilayer perceptron, whose output is one-dimensional vector data.
  • Thereafter, the one-dimensional vector data outputted from the fully-connected multilayer perceptron of the convolutional CNN 55 is inputted to the input layer of a neural network for determining a risk factor (hereinafter, referred to as risk determination CNN) 57. In the risk determination CNN 57, the output data processed in the input layer is multiplied by weights (weight coefficients) for each neuron and inputted to the next intermediate layers; likewise, the output data processed in the intermediate layers is multiplied by weights (weight coefficients) for each neuron and inputted to the next output layer. Then, in the output layer, a final determination of a risk factor is made using the data inputted from the intermediate layers, and a result of the determination (i.e., an extracted risk factor) is outputted. Note that in the risk determination CNN 57, as learning proceeds, the above-described weights (weight coefficients) are adjusted to more suitable values. In the present embodiment, particularly, a first intermediate layer is a layer for detecting the locations and motions of objects present in a risk determination area, a second intermediate layer is a layer for recognizing the detected locations and motions of the objects as a vehicle's surrounding situation (scene), and the output layer is a layer for determining a risk factor from the vehicle's surrounding situation (scene). The supervisory signals for learning (correct values) which are set by the machine learning processing program also include the above-described weights (weight coefficients).
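  • A minimal architectural sketch of the processing flow described above is given below, assuming PyTorch and arbitrarily chosen layer sizes and class counts; it is meant only to illustrate the combination of the convolutional CNN 55 (convolutional and pooling layers) with the input, intermediate, and output layers of the risk determination CNN 57, not to reproduce the embodiment's actual network.

```python
import torch
import torch.nn as nn

class RiskFactorNet(nn.Module):
    def __init__(self, num_risk_classes: int = 5):
        super().__init__()
        # Convolutional layers extract feature maps; pooling layers reduce
        # positional sensitivity and shrink the feature maps.
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Fully connected part: input layer -> intermediate layers -> output layer.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 28 * 28, 512), nn.ReLU(),   # input layer
            nn.Linear(512, 128), nn.ReLU(),            # intermediate layer 1
            nn.Linear(128, 64), nn.ReLU(),             # intermediate layer 2
            nn.Linear(64, num_risk_classes),           # output layer: risk factor scores
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: one four-channel 224x224 input image
model = RiskFactorNet()
scores = model(torch.rand(1, 4, 224, 224))
print(scores.shape)  # torch.Size([1, 5])
```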
  • In addition, particularly, in the present embodiment, by inputting the captured image 51 and the map information image 52 which are input targets in an overlapping manner using the same channel, it becomes possible to easily identify correlation between identical pixels (i.e., the same area around the vehicle), i.e., a differential area having differences between the captured image and the map information image. The CPU 41 estimates the differential area having differences between the captured image and the map information image to be an area that has a target which is present in the map image but has disappeared (has not been captured) in the captured image for some reason, and that is a blind spot for an occupant of the vehicle. Therefore, by determining a risk factor particularly from feature portions extracted in the differential area by machine learning, it becomes possible to further reduce a process related to the determination of a risk factor. Note that the feature portions extracted by machine learning include an object that is included in the map information image but is not included in the captured image, i.e., an object that is present in the driver's field of vision but cannot be visually identified.
  • For example, when a captured image 51 (in practice, three images represented by RGB) and a map information image 52 which are shown in FIG. 6 are inputted to machine learning, a part of a road outline present in the map information image 52 disappears in the captured image 51 due to other vehicles. Namely, in an example shown in FIG. 6, an area enclosed by a broken line is a differential area 58. In addition, the above-described feature portions extracted by machine learning are other vehicles that cover road outline lines and the disappeared outline lines. Since the feature portions are present in the differential area 58, by allowing the differential area 58 to be easily identified, it becomes possible to facilitate a process related to the extraction of the above-described feature portions. Therefore, for example, when a candidate for a risk factor such as a crossroad that cannot be visually identified by the driver is present in the differential area 58, it becomes easier to determine the crossroad as a risk factor by machine learning. Note that although in the example shown in FIG. 6 the road outline lines disappear from the captured image 51 due to other vehicles, the road outline lines may also disappear due to a building, etc., in addition to other vehicles.
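  • The notion of a differential area can be illustrated with the following sketch, in which binary masks mark where road outline lines appear in the map information image and where they remain visible in the captured image; in the embodiment itself this area is identified implicitly through machine learning, so the masks and sizes here are assumptions for illustration only.

```python
# Minimal sketch: the differential area is where a road outline is present on
# the map but missing from the capture (e.g., hidden by another vehicle).
import numpy as np

map_outline_mask = np.zeros((720, 1280), dtype=bool)
captured_outline_mask = np.zeros((720, 1280), dtype=bool)

map_outline_mask[400, 200:1000] = True          # outline known from the 3D map
captured_outline_mask[400, 200:600] = True      # part of it is hidden in the captured image

differential_area = map_outline_mask & ~captured_outline_mask
print(differential_area.sum())                  # number of outline pixels that disappeared
```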
  • Thereafter, at S7, the CPU 41 sets supervisory signals for learning (correct values) which are learning data, based on learning results obtained at the above-described S6. Note that the supervisory signals for learning include, as described above, filters used in the convolutional layer and weights (weight coefficients) used in the risk determination CNN 57. As a result, by repeatedly executing the machine learning processing program, filters suitable for extraction of particularly important features for identifying a risk factor are set, enabling to more accurately determine a risk factor from a vehicle's surrounding situation (scene). Particularly, in the present embodiment, the supervisory signals for learning include learning data for determining, as a risk factor, an object that is included in the map information image but is not included in the captured image (i.e., an object that is present in the driver's field of vision but cannot be visually identified).
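  • One learning step of the kind described above might look like the following sketch, which reuses the RiskFactorNet class from the architecture sketch and assumes a standard cross-entropy loss; the supervisory signals appear here as correct class labels, and the optimizer update adjusts the convolutional filters and weight coefficients.

```python
# Minimal sketch (assumed loss and labels, not the patent's procedure): one
# learning step in which supervisory signals (correct values) are used to update
# the convolutional filters and the weight coefficients.
import torch
import torch.nn as nn

model = RiskFactorNet()                       # class from the earlier architecture sketch
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

inputs = torch.rand(8, 4, 224, 224)           # batch of four-channel input images
labels = torch.randint(0, 5, (8,))            # supervisory signals: correct risk classes

optimizer.zero_grad()
loss = criterion(model(inputs), labels)       # compare prediction with correct values
loss.backward()                               # propagate the error
optimizer.step()                              # filters and weight coefficients are updated
print(float(loss))
```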
  • Next, a risk factor determination processing program executed by the CPU 41 in the navigation device 1 according to the present embodiment will be described based on FIG. 7. FIG. 7 is a flowchart of the risk factor determination processing program according to the present embodiment. Here, the risk factor determination processing program is a program that is executed after turning on the vehicle's ACC, and determines a risk factor present around the vehicle based on a captured image captured by the exterior camera 19 and the three-dimensional map information 34 and outputs a result of the determination. Note that the risk factor determination processing program (FIG. 7) is executed in parallel with the aforementioned machine learning processing program (FIG. 2). Namely, while a risk factor is determined by the risk factor determination processing program (FIG. 7), more appropriate supervisory signals for learning (correct values) for determining a risk factor are set by the machine learning processing program (FIG. 2).
  • First, in the risk factor determination processing program, at S11, the CPU 41 obtains a vehicle's current location and orientation based on results of detection by the current location detecting part 11. Note that details are the same as those of S1 and thus are omitted.
  • Then, at S12, the CPU 41 obtains particularly three-dimensional map information 34 for an area around the vehicle's current location which is identified at the above-described S11 (e.g., an area within 300 m from the vehicle's current location) among the three-dimensional map information 34 stored in the map information DB 31.
  • Subsequently, at S13, the CPU 41 obtains a captured image captured recently by the exterior camera 19 from the captured-image DB 32. Note that the captured image captured by the exterior camera 19 is an image that captures the environment ahead in a vehicle's traveling direction, i.e., the environment ahead visually identified by the driver (driver's field of vision), to correspond to a start point of a driver's line of sight (eye point) and a driver's line-of-sight direction.
  • Thereafter, at S14, the CPU 41 obtains an image capturing range of the captured image obtained at the above-described S13. Note that details are the same as those of S4 and thus are omitted.
  • Then, at S15, the CPU 41 creates a bird's-eye-view image (map information image) that three-dimensionally represents a map of the same range as the image capturing range of the captured image obtained at the above-described S14 from the same direction as an image capturing direction of the captured image, using the three-dimensional map information 34 obtained at the above-described S12.
  • Thereafter, at S16, the CPU 41 inputs the captured image obtained at the above-described S13 and the map information image created at the above-described S15 to machine learning as input images with a plurality of channels. Since the captured image is inputted to the machine learning as three input channels corresponding to its R, G, and B components, a total of four input channels, including the map information image, are inputted. Note that in the present embodiment, particularly, machine learning (deep learning) using a convolutional neural network with a multilayer structure is used.
  • Note also that the latest supervisory signals for learning (correct values) which are set by the aforementioned machine learning processing program (FIG. 2) are used. Note that the content of the machine learning is described at S6 and thus the details thereof are omitted.
  • Then, at S17, the CPU 41 determines a risk factor present in the surrounding environment of the vehicle, based on a result of the machine learning at the above-described S16. As described above, in the present embodiment, by inputting the captured image and the map information image which are input targets in an overlapping manner using the same channel, it becomes possible to easily identify correlation between identical pixels (i.e., the same area around the vehicle), i.e., a differential area having differences between the captured image and the map information image. Then, a risk factor is determined particularly from feature portions extracted in the differential area by machine learning. By this, it becomes possible to further reduce processes related to the determination and extraction of a risk factor. Note that since the supervisory signals for learning include learning data for determining, as a risk factor, an object that is included in the map information image but is not included in the captured image (i.e., an object that is present in the driver's field of vision but cannot be visually identified), when a candidate for a risk factor such as a crossroad that cannot be visually identified by the driver is present in the surrounding environment of the vehicle, it becomes possible to accurately determine the crossroad as a risk factor based on the learning data such as the supervisory signals for learning.
  • For example, when the captured image 51 and the map information image 52 which are shown in FIG. 6 are inputted to machine learning, other vehicles that cover road outline lines and the road outline lines that disappear due to other vehicles are extracted as feature portions in the differential area 58. Then, in machine learning, it becomes possible to determine, based on the extracted feature portions, that a road corresponding to the road outline lines that disappear due to other vehicles is a crossroad which is a blind spot for the vehicle due to other vehicles, and is present as a risk factor. If accuracy is further improved by repeating learning, then it becomes also possible to determine a risk factor that the driver cannot notice.
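  • At determination time, the flow of S16 and S17 can be pictured with the following inference sketch, which again reuses the RiskFactorNet class from the earlier sketch; the class names and the random stand-in input are assumptions for illustration, not part of the embodiment.

```python
# Minimal inference sketch: score a four-channel input with the learned
# parameters and take the top class as the determination result.
import torch

RISK_CLASSES = ["none", "blind-spot crossroad", "lane change section",
                "building entrance", "pedestrian"]

model = RiskFactorNet()                       # class from the earlier architecture sketch
model.eval()
four_channel = torch.rand(1, 4, 224, 224)     # stand-in for captured image + map information image
with torch.no_grad():
    scores = model(four_channel)
print(RISK_CLASSES[int(scores.argmax(dim=1))])
```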
  • Thereafter, the navigation device 1 outputs a result of the determination of a risk factor (i.e., an extracted risk factor). Specifically, when it is determined that a risk factor is present, guidance on the presence of the risk factor may be provided to the user, or vehicle control for avoiding the risk factor may be performed. For example, the guidance “watch ahead of the vehicle” is provided. In addition, if a type of the risk factor can be distinguished by machine learning, then guidance that more specifically identifies the risk factor (e.g., “watch a crossroad present in a blind spot ahead of the vehicle”) may be provided. In addition, guidance on the location of the risk factor may also be provided. On the other hand, when vehicle control is performed, for example, deceleration control is performed. In addition, vehicle control can also be applied to a self-driving vehicle. In that case, for example, it is possible to perform control such as setting a travel route to avoid the risk factor.
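  • The output step described above might be sketched as follows; the guidance strings other than those quoted in the text, the parameter names, and the control hook are assumptions for illustration.

```python
# Minimal sketch: mapping the determination result to audio/visual guidance or,
# for a self-driving vehicle, to an avoidance/deceleration request.
from typing import Optional

def output_risk_result(risk_factor: Optional[str],
                       can_distinguish_type: bool,
                       self_driving: bool) -> str:
    if risk_factor is None:
        return "no guidance"
    if self_driving:
        return "request deceleration / reroute to avoid: " + risk_factor
    if can_distinguish_type:
        return f"watch a {risk_factor} present in a blind spot ahead of the vehicle"
    return "watch ahead of the vehicle"

print(output_risk_result("crossroad", can_distinguish_type=True, self_driving=False))
```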
  • As described in detail above, the navigation device 1 and a computer program executed by the navigation device 1 according to the present embodiment obtain a captured image that captures the surrounding environment of a vehicle (S13); obtain a map information image which is a map image that three-dimensionally represents a map and that represents the same range as an image capturing range of the captured image from the same direction as an image capturing direction of the captured image (S15); and extract, based on learning data, a risk factor that is present in the surrounding environment of the vehicle and that is not included in the captured image, by inputting the captured image and the map information image to machine learning as input images with a plurality of channels (S16 and S17). Thus, it becomes possible to more accurately determine a risk factor present in the surrounding environment of the vehicle.
  • In addition, in the conventional art (e.g., JP 2012-192878 A), during traveling of a vehicle, the locations and movement speeds of a preceding vehicle and an oncoming vehicle present around the vehicle need to be detected at all times using a camera, a sensor, and a communication device installed on the vehicle, and when the number of corresponding vehicles increases, there is a problem that the processing load for a determination of a risk factor becomes very heavy. On the other hand, in the present embodiment, the processing load for a determination of a risk factor can be reduced compared to the conventional case.
  • Note that various improvements and modifications may, of course, be made without departing from the broad inventive principles.
  • For example, although in the present embodiment the three-dimensional map information 34 is information about a map image that three-dimensionally represents road outlines, the three-dimensional map information 34 may be map information representing information other than road outlines. For example, the map image may also represent the shapes of facilities, the section lines of roads, road signs, signs, etc. In that case, a factor other than a crossroad can also serve as a determination target for a risk factor. For example, by including information on section lines in the three-dimensional map information 34, a “section where a lane increases or decreases and which is present in a location where it is difficult to visually identify it from the vehicle” can be determined as a risk factor.
  • In addition, although in the present embodiment, as machine learning, particularly, machine learning (deep learning) using a convolutional neural network with a multilayer structure is used, other machine learning can also be used.
  • In addition, although in the present embodiment the machine learning processing program (FIG. 2) and the risk factor determination processing program (FIG. 7) are executed in parallel, the programs do not necessarily need to be executed simultaneously. For example, the machine learning processing program (FIG. 2) may be executed after executing the risk factor determination processing program (FIG. 7).
  • In addition, although in the present embodiment the machine learning processing program (FIG. 2) and the risk factor determination processing program (FIG. 7) are executed by the navigation device 1, those programs may be configured to be executed by an in-vehicle device other than the navigation device. In addition, instead of an in-vehicle device performing all processes, an external server may perform some of the processes.
  • In addition, although an implementation example in which the travel assistance device is embodied is described above, the travel assistance device can also have the following configurations, and in that case, the following advantageous effects are provided.
  • For example, a first configuration is as follows:
  • A travel assistance device includes surrounding environment imaging means (41) for obtaining a captured image (51) that captures the surrounding environment of a mobile unit; map information image obtaining means (41) for obtaining, as a map information image (52), a map image that three-dimensionally represents a map and that represents the same range as an image capturing range of the captured image from the same direction as an image capturing direction of the captured image; and risk factor extracting means (41) for extracting, based on learning data, a risk factor that is present in the surrounding environment of the mobile unit and that is not included in the captured image, by inputting the captured image and the map information image to machine learning as input images with a plurality of channels.
  • According to the travel assistance device having the above-described configuration, by inputting a map image that three-dimensionally represents a map and a captured image of an area around a mobile unit to machine learning as input images with a plurality of channels, it becomes possible to more accurately determine a risk factor present in the surrounding environment of the mobile unit, based on learning data.
  • In addition, a second configuration is as follows:
  • The risk factor extracting means (41) extracts a risk factor present in the surrounding environment of the mobile unit from a feature portion in a differential area having a difference between the captured image (51) and the map information image (52).
  • According to the travel assistance device having the above-described configuration, it becomes possible to reduce the processing load for a determination of a risk factor compared to the conventional case.
  • In addition, a third configuration is as follows:
  • The risk factor extracting means (41) extracts, as the feature portion, an object that is included in the map information image (52) but is not included in the captured image (51).
  • According to the travel assistance device having the above-described configuration, it becomes possible to easily extract, as a feature, an object that results in a risk factor present in a blind spot for the driver, i.e., in a location where it cannot be visually identified by the driver.
  • In addition, a fourth configuration is as follows:
  • The map image is a map image that three-dimensionally represents a road outline, and when there is a road outline that is included in the map information image (52) but is not included in the captured image (51), the risk factor extracting means (41) determines a road corresponding to the outline as a risk factor.
  • According to the travel assistance device having the above-described configuration, it becomes possible to determine particularly a road present in a location that is a blind spot for the vehicle, as a risk factor. In addition, by using information that identifies road outlines as a three-dimensional map, it becomes possible to suppress the amount of information of the three-dimensional map.
  • In addition, a fifth configuration is as follows:
  • The risk factor extracting means (41) determines a risk factor present in the surrounding environment of the mobile unit by machine learning using a neural network with a multilayer structure.
  • According to the travel assistance device having the above-described configuration, by machine learning using a neural network with a multilayer structure, features of an image can be learned at a deeper level and recognized with very high accuracy, making it possible to more accurately determine a risk factor. Particularly, optimal filters for extracting features can be obtained by learning, without the user having to set them.

Claims (6)

1. A travel assistance device comprising:
a processor programmed to:
obtain a captured image that captures a surrounding environment of a mobile unit;
obtain, as a map information image, a map image that three-dimensionally represents a map and that represents a same range as an image capturing range of the captured image from a same direction as an image capturing direction of the captured image; and
extract a risk factor based on learning data by inputting the captured image and the map information image to machine learning as input images with a plurality of channels, the risk factor being present in the surrounding environment of the mobile unit and being not included in the captured image.
2. The travel assistance device according to claim 1, wherein the processor is programmed to extract a risk factor present in the surrounding environment of the mobile unit from a feature portion in a differential area having a difference between the captured image and the map information image.
3. The travel assistance device according to claim 2, wherein the processor is programmed to extract, as the feature portion, an object that is included in the map information image but is not included in the captured image.
4. The travel assistance device according to claim 3, wherein:
the map image is a map image that three-dimensionally represents a road outline; and
the processor is programmed to extract, when there is a road outline that is included in the map information image but is not included in the captured image, a road corresponding to the road outline as the feature portion.
5. The travel assistance device according to claim 4, wherein the processor is programmed to extract a risk factor present in the surrounding environment of the mobile unit by machine learning using a neural network with a multilayer structure.
6. A computer-readable storage medium storing a computer-executable travel assistance program that causes a computer to perform functions, comprising:
obtaining a captured image that captures a surrounding environment of a mobile unit;
obtaining, as a map information image, a map image that three-dimensionally represents a map and that represents a same range as an image capturing range of the captured image from a same direction as an image capturing direction of the captured image; and
extracting a risk factor based on learning data by inputting the captured image and the map information image to machine learning as input images with a plurality of channels, the risk factor being present in the surrounding environment of the mobile unit and being not included in the captured image.
US16/331,392 2016-10-07 2017-10-09 Travel assistance device and computer program Abandoned US20190251374A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2016-199254 2016-10-07
JP2016199254 2016-10-07
PCT/JP2017/036562 WO2018066712A1 (en) 2016-10-07 2017-10-09 Travel assistance device and computer program

Publications (1)

Publication Number Publication Date
US20190251374A1 true US20190251374A1 (en) 2019-08-15

Family

ID=61831788

Family Applications (3)

Application Number Title Priority Date Filing Date
US16/331,336 Active US10733462B2 (en) 2016-10-07 2017-10-09 Travel assistance device and computer program
US16/331,362 Active 2037-12-12 US10878256B2 (en) 2016-10-07 2017-10-09 Travel assistance device and computer program
US16/331,392 Abandoned US20190251374A1 (en) 2016-10-07 2017-10-09 Travel assistance device and computer program

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US16/331,336 Active US10733462B2 (en) 2016-10-07 2017-10-09 Travel assistance device and computer program
US16/331,362 Active 2037-12-12 US10878256B2 (en) 2016-10-07 2017-10-09 Travel assistance device and computer program

Country Status (5)

Country Link
US (3) US10733462B2 (en)
EP (3) EP3496069B1 (en)
JP (3) JP6700623B2 (en)
CN (3) CN109791053A (en)
WO (3) WO2018066711A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190266448A1 (en) * 2016-09-30 2019-08-29 General Electric Company System and method for optimization of deep learning architecture
US11200381B2 (en) * 2017-12-28 2021-12-14 Advanced New Technologies Co., Ltd. Social content risk identification
US11495028B2 (en) * 2018-09-28 2022-11-08 Intel Corporation Obstacle analyzer, vehicle control system, and methods thereof

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102535540B1 (en) * 2017-01-12 2023-05-23 모빌아이 비젼 테크놀로지스 엘티디. Navigation based on vehicle activity
CN112204343A (en) * 2018-03-02 2021-01-08 迪普迈普有限公司 Visualization of high definition map data
US10809081B1 (en) * 2018-05-03 2020-10-20 Zoox, Inc. User interface and augmented reality for identifying vehicles and persons
US11846514B1 (en) 2018-05-03 2023-12-19 Zoox, Inc. User interface and augmented reality for representing vehicles and persons
US10837788B1 (en) 2018-05-03 2020-11-17 Zoox, Inc. Techniques for identifying vehicles and persons
EP3865822A1 (en) * 2018-05-15 2021-08-18 Mobileye Vision Technologies Ltd. Systems and methods for autonomous vehicle navigation
JP6766844B2 (en) * 2018-06-01 2020-10-14 株式会社デンソー Object identification device, mobile system, object identification method, object identification model learning method and object identification model learning device
US11002066B2 (en) * 2018-06-19 2021-05-11 Apple Inc. Systems with dynamic pixelated windows
JP7070157B2 (en) * 2018-06-29 2022-05-18 富士通株式会社 Image processing program, image processing device and image processing method
US11195030B2 (en) * 2018-09-14 2021-12-07 Honda Motor Co., Ltd. Scene classification
US20200090501A1 (en) * 2018-09-19 2020-03-19 International Business Machines Corporation Accident avoidance system for pedestrians
US11055857B2 (en) * 2018-11-30 2021-07-06 Baidu Usa Llc Compressive environmental feature representation for vehicle behavior prediction
EP3911921A1 (en) * 2019-01-18 2021-11-24 Vestel Elektronik Sanayi ve Ticaret A.S. Head-up display system
US10824947B2 (en) * 2019-01-31 2020-11-03 StradVision, Inc. Learning method for supporting safer autonomous driving without danger of accident by estimating motions of surrounding objects through fusion of information from multiple sources, learning device, testing method and testing device using the same
JP7228472B2 (en) * 2019-06-07 2023-02-24 本田技研工業株式会社 Recognition device, recognition method, and program
JP7332726B2 (en) * 2019-06-10 2023-08-23 華為技術有限公司 Detecting Driver Attention Using Heatmaps
KR20210030136A (en) * 2019-09-09 2021-03-17 현대자동차주식회사 Apparatus and method for generating vehicle data, and vehicle system
JP7139300B2 (en) * 2019-10-03 2022-09-20 本田技研工業株式会社 Recognition device, recognition method, and program
JP7407034B2 (en) * 2020-03-19 2023-12-28 本田技研工業株式会社 Travel route setting device, method and program for setting travel route
JP7380443B2 (en) * 2020-06-22 2023-11-15 トヨタ自動車株式会社 Partial image generation device and computer program for partial image generation
US11948372B2 (en) * 2020-11-27 2024-04-02 Nissan Motor Co., Ltd. Vehicle assist method and vehicle assist device
JP2022123940A (en) * 2021-02-15 2022-08-25 本田技研工業株式会社 vehicle controller
JP2022161066A (en) * 2021-04-08 2022-10-21 トヨタ自動車株式会社 Display control system, display control method, and program
CN113899355A (en) * 2021-08-25 2022-01-07 上海钧正网络科技有限公司 Map updating method and device, cloud server and shared riding equipment
US20230177839A1 (en) * 2021-12-02 2023-06-08 Nvidia Corporation Deep learning based operational domain verification using camera-based inputs for autonomous systems and applications

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9330256B2 (en) * 2013-02-01 2016-05-03 Qualcomm Incorporated Location based process-monitoring
US20180183650A1 (en) * 2012-12-05 2018-06-28 Origin Wireless, Inc. Method, apparatus, and system for object tracking and navigation
US20180188427A1 (en) * 2016-12-29 2018-07-05 Uber Technologies, Inc. Color Filter Array for Image Capture Device
US20180189574A1 (en) * 2016-12-29 2018-07-05 Uber Technologies, Inc. Image Capture Device with Customizable Regions of Interest

Family Cites Families (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07333317A (en) * 1994-06-14 1995-12-22 Mitsubishi Electric Corp Device for recognizing distance between mobile objects
JPH09196695A (en) * 1996-01-18 1997-07-31 Matsushita Electric Ind Co Ltd Navigation guidance apparatus
US6285317B1 (en) * 1998-05-01 2001-09-04 Lucent Technologies Inc. Navigation system with three-dimensional display
EP1378724B1 (en) * 2002-07-01 2006-03-29 Mazda Motor Corporation Route guidance system based on visual activity of the driver
JP3932127B2 (en) * 2004-01-28 2007-06-20 マツダ株式会社 Image display device for vehicle
JP2006172215A (en) 2004-12-16 2006-06-29 Fuji Photo Film Co Ltd Driving support system
JP4483764B2 (en) * 2005-10-28 2010-06-16 株式会社デンソー Driving support system and program
US8103442B2 (en) 2006-04-28 2012-01-24 Panasonic Corporation Navigation device and its method
JP2008046766A (en) * 2006-08-11 2008-02-28 Denso Corp Vehicle external information display device
JP5171629B2 (en) * 2006-09-04 2013-03-27 パナソニック株式会社 Driving information providing device
CN101583842B (en) * 2006-12-05 2011-11-16 株式会社纳维泰 Navigation system, portable terminal device, and peripheral-image display method
JP5017242B2 (en) * 2008-12-03 2012-09-05 本田技研工業株式会社 Visual support device
JP5771889B2 (en) * 2009-03-04 2015-09-02 日産自動車株式会社 Route guidance device and route guidance method
JP2010261747A (en) * 2009-04-30 2010-11-18 Denso Corp Alarm device
JP5407898B2 (en) * 2010-01-25 2014-02-05 株式会社豊田中央研究所 Object detection apparatus and program
JP4990421B2 (en) * 2010-03-16 2012-08-01 三菱電機株式会社 Road-vehicle cooperative safe driving support device
US9424468B2 (en) * 2010-09-08 2016-08-23 Toyota Jidosha Kabushiki Kaisha Moving object prediction device, hypothetical movable object prediction device, program, moving object prediction method and hypothetical movable object prediction method
JP5704902B2 (en) 2010-11-26 2015-04-22 東芝アルパイン・オートモティブテクノロジー株式会社 Driving support device and driving support method
JP5743576B2 (en) * 2011-02-02 2015-07-01 スタンレー電気株式会社 Object detection system
JP2012192878A (en) 2011-03-17 2012-10-11 Toyota Motor Corp Risk determination system
JP5840046B2 (en) * 2012-03-23 2016-01-06 株式会社豊田中央研究所 Information providing apparatus, information providing system, information providing method, and program
EP2833336A4 (en) * 2012-03-29 2015-09-02 Toyota Motor Co Ltd Driving assistance system
DE102012218360A1 (en) * 2012-10-09 2014-04-10 Robert Bosch Gmbh Visual field display for a vehicle
JP6136237B2 (en) 2012-12-19 2017-05-31 アイシン・エィ・ダブリュ株式会社 Driving support system, driving support method, and computer program
JP2015104930A (en) * 2013-11-28 2015-06-08 株式会社デンソー Head-up display device
JP6241235B2 (en) * 2013-12-04 2017-12-06 三菱電機株式会社 Vehicle driving support device
CN104112370B (en) * 2014-07-30 2016-08-17 哈尔滨工业大学深圳研究生院 Parking lot based on monitoring image intelligent car position recognition methods and system
JP6361403B2 (en) * 2014-09-17 2018-07-25 アイシン・エィ・ダブリュ株式会社 Automatic driving support system, automatic driving support method, and computer program
WO2016070193A1 (en) * 2014-10-31 2016-05-06 Nodal Inc. Systems, apparatus, and methods for improving safety related to movable/moving objects
JP2016122308A (en) * 2014-12-25 2016-07-07 クラリオン株式会社 Vehicle controller
CN104571513A (en) * 2014-12-31 2015-04-29 东莞市南星电子有限公司 Method and system for simulating touch instructions by shielding camera shooting area
JP6418574B2 (en) * 2015-01-14 2018-11-07 株式会社デンソーアイティーラボラトリ Risk estimation device, risk estimation method, and computer program for risk estimation
CA2983172C (en) * 2015-04-21 2018-12-04 Nissan Motor Co., Ltd. Vehicle guidance device and vehicle guidance method
MX371164B (en) * 2015-10-09 2020-01-21 Nissan Motor Vehicular display device and vehicular display method.
JP6623044B2 (en) 2015-11-25 2019-12-18 日立オートモティブシステムズ株式会社 Stereo camera device
CN105930833B (en) * 2016-05-19 2019-01-22 重庆邮电大学 A kind of vehicle tracking and dividing method based on video monitoring

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180183650A1 (en) * 2012-12-05 2018-06-28 Origin Wireless, Inc. Method, apparatus, and system for object tracking and navigation
US9330256B2 (en) * 2013-02-01 2016-05-03 Qualcomm Incorporated Location based process-monitoring
US20180188427A1 (en) * 2016-12-29 2018-07-05 Uber Technologies, Inc. Color Filter Array for Image Capture Device
US20180189574A1 (en) * 2016-12-29 2018-07-05 Uber Technologies, Inc. Image Capture Device with Customizable Regions of Interest

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190266448A1 (en) * 2016-09-30 2019-08-29 General Electric Company System and method for optimization of deep learning architecture
US11017269B2 (en) * 2016-09-30 2021-05-25 General Electric Company System and method for optimization of deep learning architecture
US11200381B2 (en) * 2017-12-28 2021-12-14 Advanced New Technologies Co., Ltd. Social content risk identification
US11495028B2 (en) * 2018-09-28 2022-11-08 Intel Corporation Obstacle analyzer, vehicle control system, and methods thereof

Also Published As

Publication number Publication date
WO2018066710A1 (en) 2018-04-12
EP3496069A1 (en) 2019-06-12
US10878256B2 (en) 2020-12-29
CN109791053A (en) 2019-05-21
EP3496069A4 (en) 2019-06-12
US20190344803A1 (en) 2019-11-14
JPWO2018066710A1 (en) 2019-06-24
JP6700623B2 (en) 2020-05-27
US20190197323A1 (en) 2019-06-27
JP6566145B2 (en) 2019-08-28
WO2018066712A1 (en) 2018-04-12
EP3496033A1 (en) 2019-06-12
CN109791737A (en) 2019-05-21
JP6658905B2 (en) 2020-03-04
JPWO2018066712A1 (en) 2019-04-25
EP3496068A4 (en) 2019-06-12
US10733462B2 (en) 2020-08-04
WO2018066711A1 (en) 2018-04-12
CN109791738B (en) 2021-12-21
EP3496068A1 (en) 2019-06-12
EP3496069B1 (en) 2022-10-26
CN109791738A (en) 2019-05-21
JPWO2018066711A1 (en) 2019-06-24
EP3496033A4 (en) 2019-10-23

Similar Documents

Publication Publication Date Title
US20190251374A1 (en) Travel assistance device and computer program
US20210365750A1 (en) Systems and methods for estimating future paths
JP6834704B2 (en) Driving support device and computer program
JP6484228B2 (en) Visually enhanced navigation
JP4321821B2 (en) Image recognition apparatus and image recognition method
JP2020064046A (en) Vehicle position determining method and vehicle position determining device
JP5729176B2 (en) Movement guidance system, movement guidance apparatus, movement guidance method, and computer program
JP2006208223A (en) Vehicle position recognition device and vehicle position recognition method
JP2019164611A (en) Traveling support device and computer program
JP2018173860A (en) Travel support system and computer program
JP4968369B2 (en) In-vehicle device and vehicle recognition method
JP2017062706A (en) Travel support system, travel support method, and computer program
JP3999088B2 (en) Obstacle detection device
JP2018173861A (en) Travel support system and computer program
JP2007071539A (en) On-vehicle navigation device
CN116524454A (en) Object tracking device, object tracking method, and storage medium
JP5573266B2 (en) Vehicle object image recognition apparatus, vehicle object image recognition method, and computer program
JP6582798B2 (en) Driving support system, driving support method, and computer program
JP2014154125A (en) Travel support system, travel support method and computer program
JP5712844B2 (en) Moving body position detection system, moving body position detection apparatus, moving body position detection method, and computer program
JP2014153867A (en) Travel support system, travel support method and computer program
JP2014145672A (en) Travel guide system, travel guide method, and computer program

Legal Events

Date Code Title Description
AS Assignment

Owner name: AISIN AW CO., LTD., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SAKAI, TAKAMITSU;HIROTA, TOMOAKI;REEL/FRAME:048532/0807

Effective date: 20181206

AS Assignment

Owner name: AISIN AW CO., LTD., JAPAN

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNEE ADDRESS PREVIOUSLY RECORDED AT REEL: 048532 FRAME: 0807. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT;ASSIGNORS:SAKAI, TAKAMITSU;HIROTA, TOMOAKI;REEL/FRAME:050305/0135

Effective date: 20181206

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION