EP3444145A1 - Moving body surroundings display method and moving body surroundings display apparatus - Google Patents
- Publication number
- EP3444145A1 (application EP16898634.7A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- attention
- mobile body
- required range
- region
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R1/00—Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/20—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/22—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
- B60R1/23—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
- B60R1/27—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view providing all-round vision, e.g. using omnidirectional cameras
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/166—Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60K—ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
- B60K35/00—Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R1/00—Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/002—Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles specially adapted for covering the peripheral part of the vehicle, e.g. for viewing tyres, bumpers or the like
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R21/00—Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
- B60R21/01—Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents
- B60R21/013—Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents including means for detecting collisions, impending collisions or roll-over
- B60R21/0134—Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents including means for detecting collisions, impending collisions or roll-over responsive to imminent contact with an obstacle, e.g. using radar systems
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/10—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used
- B60R2300/105—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used using multiple cameras
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/20—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of display used
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/165—Anti-collision systems for passive traffic, e.g. including static obstacles, trees
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/168—Driving aids for parking, e.g. acoustic or visual feedback on parking space
Definitions
- the present invention relates to a mobile body surroundings display method and a mobile body surroundings display apparatus.
- In Patent Literature 1, a detected attention-required object is displayed on a head-up display.
- Patent Literature 1 Japanese Patent Application Publication No. 2001-23091
- In Patent Literature 1, an attention-required object is displayed on a head-up display as an icon.
- in that case, information that an occupant has empirically acquired during normal driving, such as the attribute of the attention-required object (whether the object is an elderly person or a child) and the eye direction of the object, may be lost.
- the present invention has been made in view of the above problem and has an objective to provide a mobile body surroundings display method and a mobile body surroundings display apparatus capable of informing an occupant of details of information to which attention needs to be paid.
- a driving assistance method according to the present invention acquires surroundings information on a mobile body by image capturing, creates, using the surroundings information acquired, a virtual image representing a situation around the mobile body, detects an attention-required range around the mobile body, creates a captured image of the attention-required range detected, and displays the captured image of the attention-required range on a display.
- the present invention displays a captured image of an attention-required range on a display, and therefore allows an occupant to be informed of details of information to which attention needs to be paid.
- a mobile body surroundings display apparatus 1 according to a first embodiment is described with reference to Fig. 1 .
- the mobile body surroundings display apparatus 1 includes an environment detector 10, a front camera 20, a right camera 21, a left camera 22, a rear camera 23, a controller 40, and a display 50.
- the mobile body surroundings display apparatus 1 is an apparatus mainly used for an autonomous driving vehicle with autonomous driving capability.
- the environment detector 10 is a device that detects the environment surrounding the host vehicle, and is, for example, a laser range finder.
- a laser range finder detects obstacles (such as a pedestrian, a bicycle, a two-wheel vehicle, and a different vehicle) located around (e.g., within 30 meters from) the host vehicle.
- an infrared sensor, an ultrasonic sensor, or the like may be used as the environment detector 10, or a combination of these may constitute the environment detector 10.
- the environment detector 10 may be configured including cameras such as the front camera 20 and the rear camera 23 to be described later, or including a different camera.
- the environment detector 10 may be configured including a GPS receiver.
- the environment detector 10 can transmit information on the position of the host vehicle received with the GPS receiver to a cloud and receive map information around the host vehicle from the cloud.
- the environment detector 10 outputs detected environment information to the controller 40.
- the environment detector 10 does not necessarily have to be provided to the host vehicle, and data detected by a sensor installed outside the vehicle may be acquired through wireless communication.
- the environment detector 10 may detect the environment surrounding the host vehicle through wireless communication with other vehicles (vehicle-to-vehicle communication) or wireless communication with obstacles and intersections (vehicle-to-infrastructure communication).
- the front camera 20, the right camera 21, the left camera 22, and the rear camera 23 are each a camera having an image capturing element such as a charge-coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS).
- the four cameras, namely the front camera 20, the right camera 21, the left camera 22, and the rear camera 23, are collectively referred to as "vehicle cameras 20 to 23".
- the vehicle cameras 20 to 23 acquire surroundings information on the host vehicle by capturing images of the front side, the right side, the left side, and the back side of the host vehicle, respectively, and output the acquired surroundings information to the controller 40.
- the controller 40 is a circuit that processes information acquired from the environment detector 10 and the vehicle cameras 20 to 23, and is configured with, for example, an IC, an LSI, or the like.
- when seen functionally, the controller 40 can be classified into a virtual image creation unit 41, a driving scene determination unit 42, an attention-required range identification unit 43, a storage unit 44, a camera image creation unit 45, and a synthesis unit 46.
- the virtual image creation unit 41 creates a virtual image representing the surrounding situation of the host vehicle using information acquired from the environment detector 10.
- a virtual image is a computer graphic image obtained by three-dimensional mapping of, for example, geographic information, obstacle information, road sign information, and the like, and is different from a camera image to be described later.
- the virtual image creation unit 41 outputs the created virtual image to the synthesis unit 46.
- the driving scene determination unit 42 determines the current driving scene using information acquired from the environment detector 10. Examples of driving scenes determined by the driving scene determination unit 42 include a regular travelling scene, a parking scene, a scene where the host vehicle merges onto an expressway, and a scene where the host vehicle enters an intersection.
- the driving scene determination unit 42 outputs the determined driving scene to the attention-required range identification unit 43.
- based on the driving scene determined by the driving scene determination unit 42, the attention-required range identification unit 43 identifies an area to which an occupant needs to pay attention (hereinafter referred to as an attention-required range). More specifically, the attention-required range identification unit 43 identifies an attention-required range using a database stored in the storage unit 44. Although attention-required ranges will be described in detail later, the attention-required range is, in a side-by-side parking scene for example, a region from the vicinity of the rear wheel on the inner side of turning, to the back of the host vehicle, to the front of the host vehicle on the right side, and in a parallel parking scene, a region around the host vehicle including its front and rear wheels. Attention-required ranges according to driving scenes are stored in the storage unit 44 in advance. The attention-required range identification unit 43 outputs the identified attention-required range to the camera image creation unit 45 and the synthesis unit 46.
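The scene-to-range lookup performed by the attention-required range identification unit 43 can be sketched as a table lookup. The scene names and range descriptions below are illustrative assumptions, not data from the patent:

```python
# Hypothetical stand-in for the database in the storage unit 44: each driving
# scene maps to a set of regions, named relative to the host vehicle.
ATTENTION_RANGE_DB = {
    "side_by_side_parking": ["inner_rear_wheel", "rear", "right_front"],
    "parallel_parking": ["front_wheels", "rear_wheels", "surroundings"],
    "expressway_merge": ["right_front"],
    "intersection_left_turn": ["entire_intersection"],
}

def identify_attention_required_range(driving_scene: str) -> list:
    """Return the stored attention-required range for the given scene,
    or an empty range when the scene is not in the database."""
    return ATTENTION_RANGE_DB.get(driving_scene, [])
```

For example, an expressway-merge scene would resolve to the region ahead of the right side of the host vehicle.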
- using information acquired from the vehicle cameras 20 to 23, the camera image creation unit 45 creates a camera image (a captured image) of the attention-required range identified by the attention-required range identification unit 43.
- the camera image creation unit 45 outputs the created camera image to the synthesis unit 46.
- although the vehicle cameras are used for the captured image in the present embodiment, the cameras are not limited to particular types and may be any cameras such as color cameras, monochrome cameras, infrared cameras, or radio cameras.
- the synthesis unit 46 replaces an attention-required range on a virtual image with a camera image.
- the synthesis unit 46 then outputs the thus-synthesized image to the display 50.
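The replacement performed by the synthesis unit 46 can be sketched as a masked pixel copy, assuming the virtual image and the camera image are aligned arrays of the same size and the attention-required range is given as a boolean mask; all names are illustrative assumptions:

```python
import numpy as np

def synthesize(virtual_image: np.ndarray,
               camera_image: np.ndarray,
               attention_mask: np.ndarray) -> np.ndarray:
    """Replace the masked attention-required range on the virtual image
    with the corresponding pixels of the camera image."""
    out = virtual_image.copy()          # keep the original virtual image intact
    out[attention_mask] = camera_image[attention_mask]
    return out
```

The rest of the virtual image stays untouched, which matches the idea of showing actual captured pixels only where attention is required.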
- the display 50 is, for example, a liquid crystal display installed in an instrument panel or a liquid crystal display used in a car navigation apparatus, and presents various pieces of information to an occupant.
- the driving scene illustrated in Fig. 2(a) is a scene where a host vehicle M1 parks side by side between a different vehicle M2 and a different vehicle M3.
- An attention-required range for a case of side-by-side parking is, as indicated by the region R, a region from the vicinity of the rear wheel on the inner side of turning, to the back of the host vehicle, to the front of the host vehicle on the right side, and is a range where the host vehicle M1 may travel.
- the attention-required range identification unit 43 identifies the region R as an attention-required range, and the camera image creation unit 45 creates a camera image of the region R.
- the synthesis unit 46 replaces the region R on a virtual image P with the camera image.
- the display 50 displays the region R to which an occupant needs to pay attention with the camera image, i.e., an actual captured image.
- the occupant can be informed of detailed information about the region R.
- next, with reference to Fig. 2(b), a description is given of a driving scene in which a host vehicle M1 performs parallel parking between a different vehicle M2 and a different vehicle M3.
- An attention-required range for a case of parallel parking is, as indicated by the region R, a region around the vehicle including its front and rear wheels.
- the attention-required range identification unit 43 identifies the region R as an attention-required range, and the camera image creation unit 45 creates a camera image of the region R.
- the following processing is the same as that described in connection to Fig. 2(a) , and will therefore not be described here.
- An attention-required range for a case of pulling over to the left on a narrow road is, as indicated by the region R, a region covering the front and the left side of the host vehicle.
- the attention-required range identification unit 43 identifies the region R as an attention-required range, and the camera image creation unit 45 creates a camera image of the region R.
- the following processing is the same as described in connection to Fig. 2(a) , and will therefore not be described here.
- the attention-required range identification unit 43 may identify, as an attention-required range, a region where the host vehicle M1 gets close to the different vehicle M2 when passing by the different vehicle M2.
- An attention-required range for a case of travelling along an S-shaped path on a narrow road is, as indicated by the region R, a region covering both the left and right sides of the different vehicle M3 and the front of the host vehicle, including the positions where the tires touch the ground.
- the region R may include an oncoming different vehicle M2.
- the attention-required range identification unit 43 may set, as an attention-required range, a region where the oncoming vehicle travels, within a region where the host vehicle M1 travels to avoid the parked vehicle.
- the attention-required range identification unit 43 identifies the region R as an attention-required range, and the camera image creation unit 45 creates a camera image of the region R.
- the following processing is the same as that described in connection to Fig. 2(a) , and will therefore not be described here.
- An attention-required range for a case of passing the narrowest part is, as indicated with the region R, a region covering the front of the vehicle including the width of the narrowest part (the width of the road between a different vehicle M2 and the telephone pole T).
- the attention-required range identification unit 43 identifies the region R as an attention-required range, and the camera image creation unit 45 creates a camera image of the region R.
- the following processing is the same as that described in connection to Fig. 2(a) , and will therefore not be described here.
- An attention-required range for such a driving scene is, as indicated with the region R, a region from the right side of the host vehicle to a region therebehind.
- the attention-required range identification unit 43 identifies the region R as an attention-required range, and the camera image creation unit 45 creates a camera image of the region R.
- the following processing is the same as that described in connection to Fig. 2(a) , and will therefore not be described here.
- the attention-required range for the driving scene illustrated in Fig. 4(a) may be a range reflected in the right door mirror of the host vehicle.
- next, with reference to Fig. 4(b), a description is given of a driving scene where a host vehicle M1 merges onto an expressway with a different vehicle M3 in front.
- An attention-required range for such a driving scene is, as indicated with the region R, a region ahead of the right side of the host vehicle.
- the attention-required range identification unit 43 identifies the region R as an attention-required range, and the camera image creation unit 45 creates a camera image of the region R.
- the following processing is the same as that described in connection to Fig. 2(a) , and will therefore not be described here.
- An attention-required range for such a driving scene is, as indicated with the region R, a region ahead of and behind the right side of the host vehicle.
- the attention-required range identification unit 43 identifies the region R as an attention-required range, and the camera image creation unit 45 creates a camera image of the region R.
- the following processing is the same as that described in connection to Fig. 2(a) , and will therefore not be described here.
- An attention-required range for a case of taking a left turn at an intersection is, as indicated with the region R, a region of the entire intersection including the travelling direction (the left-turn direction) of the host vehicle M1.
- the attention-required range identification unit 43 identifies the region R as an attention-required range, and the camera image creation unit 45 creates a camera image of the region R.
- the following processing is the same as that described in connection to Fig. 2(a) , and will therefore not be described here.
- in step S101, the environment detector 10 and the vehicle cameras 20 to 23 acquire information about the surroundings of the host vehicle.
- in step S102, the virtual image creation unit 41 creates a virtual image using the information about the surroundings of the host vehicle.
- in step S103, the driving scene determination unit 42 determines a driving scene using the information about the surroundings of the host vehicle.
- in step S104, based on the driving scene determined by the driving scene determination unit 42, the attention-required range identification unit 43 identifies an attention-required range using the database in the storage unit 44.
- in step S105, the camera image creation unit 45 creates a camera image of the attention-required range identified by the attention-required range identification unit 43.
- in step S106, the synthesis unit 46 replaces the attention-required range on the virtual image with the camera image.
- in step S107, the controller 40 displays the synthesized image synthesized by the synthesis unit 46 on the display 50.
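The steps above can be sketched as one pipeline. The class, method names, and data representations are assumptions for illustration only; the patent does not specify concrete interfaces:

```python
class SurroundingsDisplayPipeline:
    """Hypothetical orchestration of the first embodiment's processing flow."""

    def __init__(self, scene_to_range):
        # stands in for the database held in the storage unit 44
        self.scene_to_range = scene_to_range

    def run(self, sensor_data, camera_frames, display):
        # S101: acquire surroundings information (sensors and camera frames)
        surroundings = {"sensors": sensor_data, "frames": dict(camera_frames)}
        # S102: create a virtual image (a labelled placeholder here)
        virtual = {"type": "virtual", "source": surroundings["sensors"],
                   "replaced": {}}
        # S103: determine the current driving scene from the sensor data
        scene = sensor_data.get("scene", "regular_travel")
        # S104: identify the attention-required range for that scene
        att_range = self.scene_to_range.get(scene)
        # S105: create a camera image of the attention-required range
        camera_img = camera_frames.get(att_range) if att_range else None
        # S106: replace the attention-required range on the virtual image
        if camera_img is not None:
            virtual["replaced"][att_range] = camera_img
        # S107: display the synthesized image
        display.append(virtual)
        return virtual
```

With no matching scene in the database, the virtual image is displayed unchanged, which is consistent with replacing only identified attention-required ranges.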
- the mobile body surroundings display apparatus 1 according to the first embodiment as described above can produce the following advantageous effects.
- the mobile body surroundings display apparatus 1 first creates a virtual image using information about the surroundings of the host vehicle. Next, the mobile body surroundings display apparatus 1 identifies an attention-required range based on a driving scene, and creates a camera image of the attention-required range identified. Then, the mobile body surroundings display apparatus 1 replaces the attention-required range on the virtual image with the camera image, and displays the thus-synthesized image on the display 50. Thereby, an occupant can be informed of detailed information on the attention-required range.
- the mobile body surroundings display apparatus 1 has been described as an apparatus mainly used for an autonomous driving vehicle with autonomous driving capability.
- the mobile body surroundings display apparatus 1 displays a virtual image except for an attention-required range, and thus can reduce the amount of information given to an occupant.
- thus, the mobile body surroundings display apparatus 1 is less likely to bother an occupant.
- the mobile body surroundings display apparatus 1 can give an occupant detailed information for a region to which the occupant needs to pay attention (an attention-required range), and reduce excessive information for a region other than the attention-required range. Thereby, the occupant can correctly acquire only necessary information.
- attention-required ranges are places on a road where an occupant needs to pay attention, such as a merging point on an expressway where vehicles cross each other's paths, or an intersection where vehicles and pedestrians cross each other's paths. Since the mobile body surroundings display apparatus 1 replaces an attention-required range on a virtual image with a camera image and displays the thus-synthesized image, an occupant can be informed of detailed information on the attention-required range.
- the second embodiment differs from the first embodiment in that the mobile body surroundings display apparatus 2 includes an object detector 60 and an attention-required object identification unit 47 and does not include the driving scene determination unit 42, the attention-required range identification unit 43, or the storage unit 44.
- the same constituents as those in the first embodiment are denoted by the same reference numerals as in the first embodiment and will not be described here. Different points will be mainly discussed below.
- the object detector 60 is an object detection sensor that detects an object present around the host vehicle, and detects an object present in the periphery of a road on which the host vehicle is travelling.
- a radar sensor can be used as the object detector 60.
- objects detected by the object detector 60 include mobile bodies such as a different vehicle, a motorcycle, a pedestrian, and a bicycle, traffic signals, and road signs.
- the object detector 60 may be a sensor other than the radar sensor, and may be an image recognition sensor using an image captured by a camera.
- a laser sensor, an ultrasonic sensor, or the like may be used as the object detector 60.
- the object detector 60 outputs information on detected objects to the attention-required object identification unit 47.
- the attention-required object identification unit 47 identifies, among the objects detected by the object detector 60, an object to which an occupant needs to pay attention (hereinafter referred to as an attention-required object). Examples of an attention-required object include a different vehicle, a motorcycle, a pedestrian, a bicycle, an animal (like a dog or a cat), a telephone pole, an advertising display, a traffic light, a road sign, and a fallen object on a road.
- the attention-required object identification unit 47 outputs the identified attention-required object to the camera image creation unit 45 and the synthesis unit 46.
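The identification step can be sketched as filtering the detected objects by class, with the class list taken from the examples in the text; the object representation is an assumption:

```python
# Classes listed in the text as examples of attention-required objects.
ATTENTION_REQUIRED_CLASSES = {
    "vehicle", "motorcycle", "pedestrian", "bicycle", "animal",
    "telephone_pole", "advertising_display", "traffic_light",
    "road_sign", "fallen_object",
}

def identify_attention_required_objects(detected):
    """Keep only detected objects whose class requires an occupant's
    attention; each object is assumed to be a dict with a 'class' key."""
    return [obj for obj in detected
            if obj["class"] in ATTENTION_REQUIRED_CLASSES]
```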
- when a pedestrian W is shown as a symbol, information such as the attribute of the pedestrian W (whether the pedestrian W is an elderly person or a child) and the eye direction of the pedestrian W may be lost.
- the attention-required object identification unit 47 identifies the pedestrian W detected by the object detector 60 as an attention-required object, and the camera image creation unit 45 creates a camera image of the pedestrian W identified, as illustrated in Fig. 8 .
- the synthesis unit 46 replaces the pedestrian W on the virtual image P with the camera image.
- the camera image created of an attention-required object may be a camera image of a region including the pedestrian W as illustrated in Fig. 8 , or a camera image cutting out the pedestrian W along its contour.
- the attention-required object identification unit 47 identifies the different vehicle M2 as an attention-required object, and the camera image creation unit 45 creates a camera image of the different vehicle M2.
- the following processing is the same as that described in connection to Fig. 8 , and will therefore not be described here.
- the object detector 60 detects different vehicles M2 and M3, a pedestrian W, bicycles B1 to B3, and a road sign L.
- the attention-required object identification unit 47 identifies these objects as attention-required objects.
- the camera image creation unit 45 creates camera images of these objects. The following processing is the same as that described in connection to Fig. 8 , and will therefore not be described here.
- in step S201, the environment detector 10 and the object detector 60 acquire information about the surroundings of the host vehicle.
- in step S202, the virtual image creation unit 41 creates a virtual image using the information about the surroundings of the host vehicle.
- in step S203, the attention-required object identification unit 47 identifies an attention-required object around the host vehicle.
- in step S204, the camera image creation unit 45 creates a camera image of the attention-required object identified by the attention-required object identification unit 47.
- in step S205, the synthesis unit 46 replaces the attention-required object on the virtual image with the camera image.
- in step S206, the controller 40 displays the synthesized image synthesized by the synthesis unit 46 on the display 50.
- the mobile body surroundings display apparatus 2 according to the second embodiment as described above produces the following advantageous effects.
- the mobile body surroundings display apparatus 2 first creates a virtual image using information about the surroundings of the host vehicle. Next, the mobile body surroundings display apparatus 2 identifies an attention-required object and creates a camera image of the attention-required object identified. Then, the mobile body surroundings display apparatus 2 replaces the attention-required object on the virtual image with the camera image, and displays the thus-synthesized image on the display 50. Thereby, an occupant can be informed of detailed information on the attention-required object.
- information on an attention-required object may be lost.
- when a human is shown as a symbol, information such as the attribute of that person (whether the person is an elderly person or a child) and the eye direction of the person may be lost.
- when a vehicle is shown as a symbol, information such as the size, shape, and color of the vehicle may be lost.
- the mobile body surroundings display apparatus 2 displays an attention-required object on a virtual image after replacing it with a camera image, and thus can compensate for the loss of information that image virtualization may cause. Thereby, an occupant can more easily predict the motion of the attention-required object.
- the mobile body surroundings display apparatus 2 can inform an occupant of detailed information on an attention-required object, such as a pedestrian, an animal, a bicycle, a vehicle, or a road sign, by displaying the attention-required object after replacing it with a camera image.
- the mobile body surroundings display apparatus 3 differs from the first embodiment in that the mobile body surroundings display apparatus 3 includes the object detector 60, the attention-required object identification unit 47, and a highlight portion identification unit 48.
- the same constituents as those in the first embodiment are denoted by the same reference numerals as used in the first embodiment, and are not described below. Different points will be mainly discussed below. Note that the object detector 60 and the attention-required object identification unit 47 are the same as those described in the second embodiment, and will therefore not be described below.
- the highlight portion identification unit 48 identifies a highlight portion to which an occupant needs to pay attention. Specifically, when an attention-required object identified by the attention-required object identification unit 47 is located within an attention-required range identified by the attention-required range identification unit 43, the highlight portion identification unit 48 identifies this attention-required object as a highlight portion. The highlight portion identification unit 48 outputs the identified highlight portion to the camera image creation unit 45 and the synthesis unit 46.
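- For illustration only (this sketch is not part of the embodiments; names such as `AttentionRange` and `identify_highlights` are hypothetical, and the attention-required range is simplified to an axis-aligned rectangle), the containment test performed by the highlight portion identification unit 48 can be expressed as:

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str   # e.g. "pedestrian", "bicycle", "road sign"
    x: float     # position relative to the host vehicle (metres)
    y: float

@dataclass
class AttentionRange:
    """Axis-aligned rectangle standing in for the region R."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def contains(self, obj: DetectedObject) -> bool:
        return (self.x_min <= obj.x <= self.x_max
                and self.y_min <= obj.y <= self.y_max)

def identify_highlights(objects, attention_range):
    """Return the attention-required objects located within the range,
    i.e. the highlight portions."""
    return [o for o in objects if attention_range.contains(o)]

region_r = AttentionRange(-10.0, 0.0, 10.0, 20.0)
detected = [
    DetectedObject("pedestrian W", 2.0, 5.0),   # inside R: highlight portion
    DetectedObject("vehicle M2", 30.0, 40.0),   # outside R: stays virtual
]
highlights = identify_highlights(detected, region_r)
```

In practice the region R need not be rectangular; the embodiments only require a test of whether each attention-required object lies within the identified attention-required range.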
- An attention-required range for a case of a T intersection is, as indicated by the region R, a region around the host vehicle including ranges on the left and right sides of the center of the T intersection.
- the attention-required range identification unit 43 identifies the region R as an attention-required range.
- the attention-required object identification unit 47 identifies different vehicles M2 and M3, a pedestrian W, bicycles B1 to B3, and a road sign L detected by the object detector 60, as attention-required objects.
- the highlight portion identification unit 48 identifies an attention-required object located within the region R as a highlight portion.
- the highlight portion identification unit 48 identifies the pedestrian W, the bicycles B1 to B3, and the road sign L as highlight portions.
- the camera image creation unit 45 creates camera images of the pedestrian W, the bicycles B1 to B3, and the road sign L identified as the highlight portions.
- the synthesis unit 46 replaces the pedestrian W, the bicycles B1 to B3, and the road sign L on a virtual image P with the camera images.
- the display 50 displays the pedestrian W, the bicycles B1 to B3, and the road sign L by use of the camera images, i.e., actual captured images.
- This allows an occupant to be informed of detailed information on the pedestrian W, the bicycles B1 to B3, and the road sign L.
- an attention-required object which is partially located within the region R such as the bicycles B2 and B3, is also identified as a highlight portion, but only an attention-required object which is entirely located within the region R may be identified as a highlight portion.
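- The two policies just described, partial containment and entire containment, can be sketched as follows (illustrative only; the bounding-box model and the `policy` parameter are assumptions for exposition, not taken from the embodiments):

```python
def _intersects(box, region):
    """True when the box overlaps the region at least partially."""
    bx0, by0, bx1, by1 = box
    rx0, ry0, rx1, ry1 = region
    return bx0 < rx1 and rx0 < bx1 and by0 < ry1 and ry0 < by1

def _inside(box, region):
    """True only when the box lies entirely within the region."""
    bx0, by0, bx1, by1 = box
    rx0, ry0, rx1, ry1 = region
    return rx0 <= bx0 and ry0 <= by0 and bx1 <= rx1 and by1 <= ry1

def is_highlight(box, region, policy="partial"):
    """policy="partial": any overlap with the region R counts (as with the
    bicycles B2 and B3 above). policy="entire": the attention-required
    object must be located entirely within the region R."""
    if policy == "entire":
        return _inside(box, region)
    return _intersects(box, region)

region_r = (0.0, 0.0, 10.0, 10.0)
straddling = (8.0, 4.0, 12.0, 6.0)   # like bicycle B2: partly inside R
```

Here `is_highlight(straddling, region_r)` holds under the partial policy but not under the entire-containment policy.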
- the attention-required range identification unit 43 may identify an attention-required range in real time, or attention-required ranges may be preset on a map or the like.
- the highlight portion identification unit 48 does not identify any highlight portion when no attention-required object is detected within the region R.
- the synthesis unit 46 does not replace the region R on the virtual image P with a camera image.
- the reason for this is that when no attention-required object, such as a different vehicle or a pedestrian, is detected within the region R, the risk of the host vehicle colliding is low, and there is little need to inform an occupant by replacing the region R with a camera image.
- the mobile body surroundings display apparatus 3 displays only the virtual image P and thus can reduce the amount of information given to an occupant. Consequently, the mobile body surroundings display apparatus 3 can bother an occupant less.
- the object detector 60 may detect an object within an attention-required range identified by the attention-required range identification unit 43. Limiting the range in which to detect an object can reduce the time it takes for the object detector 60 to detect an object. In turn, the time it takes for the attention-required object identification unit 47 to identify an attention-required object can be reduced as well. In addition, this limitation can reduce the processing load on the controller 40.
- An attention-required range for a case of taking a left turn at an intersection is, as indicated with the region R, the entire region of the intersection, including the travelling direction (left-turn direction) of the host vehicle M1.
- the attention-required range identification unit 43 identifies the region R as an attention-required range
- the attention-required object identification unit 47 identifies different vehicles M2 to M4, a bicycle B, and a traffic light S as attention-required objects.
- the highlight portion identification unit 48 identifies an attention-required object located within the region R as a highlight portion.
- the highlight portion identification unit 48 identifies the different vehicles M2 to M4, the bicycle B, and the traffic light S as highlight portions.
- the following processing is the same as that described in connection to Fig. 13 , and will therefore not be described here. Note that as depicted in Fig. 14 , the attention-required range can be set to suit a turning-left situation.
- the attention-required range identification unit 43 can set an attention-required range suitable for a travelling scene and a driving operation, and can make it less likely that attention is paid to the outside of the attention-required range.
- An attention-required range for a case of taking a right turn at an intersection is, as indicated with the region R, the entire region of the intersection including the travelling direction (right-turn direction) of the host vehicle M1 and excluding the right side of the host vehicle.
- the attention-required range identification unit 43 identifies the region R as an attention-required range
- the attention-required object identification unit 47 identifies different vehicles M2 to M4, a pedestrian W, bicycles B1 and B2, and road signs L1 and L2 as attention-required objects.
- the highlight portion identification unit 48 identifies an attention-required object located within the region R as a highlight portion.
- the highlight portion identification unit 48 identifies the different vehicles M2 to M4, the pedestrian W, the bicycles B1 and B2, and the road signs L1 and L2 as highlight portions.
- the following processing is the same as that described in connection to Fig. 13 , and will therefore not be described here.
- the attention-required range identification unit 43 may set the attention-required range to suit a turning-right situation, as depicted in Fig. 15 .
- This flowchart is initiated when, for example, an ignition switch is turned on.
- In Step S301, the environment detector 10, the object detector 60, and the vehicle cameras 20 to 23 acquire information about the surroundings of the host vehicle.
- In Step S302, the virtual image creation unit 41 creates a virtual image using the information about the surroundings of the host vehicle.
- In Step S303, the driving scene determination unit 42 determines a driving scene using the information about the surroundings of the host vehicle.
- In Step S304, based on the driving scene determined by the driving scene determination unit 42, the attention-required range identification unit 43 identifies an attention-required range using the database in the storage unit 44.
- In Step S305, the attention-required object identification unit 47 identifies an attention-required object around the host vehicle.
- In Step S306, the highlight portion identification unit 48 determines whether an attention-required object is located within the attention-required range. When an attention-required object is located within the attention-required range (Yes in Step S306), the highlight portion identification unit 48 identifies the attention-required object located within the attention-required range, and the processing proceeds to Step S307. When no attention-required object is located within the attention-required range (No in Step S306), the processing proceeds to Step S310.
- In Step S307, the camera image creation unit 45 creates a camera image of the attention-required object identified by the highlight portion identification unit 48.
- In Step S308, the synthesis unit 46 replaces the attention-required object on the virtual image with the camera image.
- In Step S309, the controller 40 displays the synthesized image synthesized by the synthesis unit 46 on the display 50.
- In Step S310, the controller 40 displays the virtual image on the display 50.
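- The branch of Steps S306 to S310 can be summarized by the following illustrative sketch (the function `render_frame` and its tuple-based inputs are assumptions for exposition, not part of the embodiments):

```python
def render_frame(objects, region):
    """Decide what to display (Steps S306 to S310).

    objects: attention-required objects as (label, x, y) tuples (Step S305)
    region:  attention-required range as (x_min, y_min, x_max, y_max) (S304)
    Returns ("synthesized", highlights) when at least one object lies within
    the range (Yes in S306, leading to camera-image replacement in S307 to
    S309), otherwise ("virtual", []) so that only the virtual image is
    displayed (No in S306, leading to S310).
    """
    x0, y0, x1, y1 = region
    highlights = [(label, x, y) for (label, x, y) in objects
                  if x0 <= x <= x1 and y0 <= y <= y1]
    if highlights:
        return ("synthesized", highlights)
    return ("virtual", [])
```

The early "virtual only" return mirrors the advantageous effect noted below: when no highlight portion exists, no camera image is synthesized and the occupant is given less information.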
- the mobile body surroundings display apparatus 3 according to the third embodiment as described above can produce the following advantageous effects.
- the mobile body surroundings display apparatus 3 first creates a virtual image using information about the surroundings of the host vehicle. Next, the mobile body surroundings display apparatus 3 identifies an attention-required range based on a driving scene and identifies an attention-required object within the attention-required range. The mobile body surroundings display apparatus 3 creates a camera image of the attention-required object within the attention-required range, replaces the attention-required object within the attention-required range on the virtual image with the camera image, and displays the thus-synthesized image on the display 50. Thereby, an occupant can be informed of detailed information on the attention-required object.
- the mobile body surroundings display apparatus 3 displays only the virtual image on the display 50 when no attention-required object is located within the attention-required range.
- the mobile body surroundings display apparatus 3 can reduce the amount of information given to an occupant.
- the mobile body surroundings display apparatus 3 can thus bother an occupant less.
- an attention-required range and an attention-required object are replaced with a camera image.
- an attention-required object is replaced with a camera image if the attention-required object is located within an attention-required range.
- An attention-required range and an attention-required object both indicate a range to which an occupant needs to pay attention, and therefore the two can collectively be rephrased as an attention-required range.
- an attention-required range can also include a region the level of attention of which is equal to or above a predetermined value.
- an attention-required range is replaced with a camera image, but the present invention is not limited to this.
- the mobile body surroundings display apparatuses 1 to 3 may calculate a level of attention for the host vehicle and perform the replacement according to the level of attention calculated.
- the level of attention for the host vehicle can be obtained based on a relative speed or a relative distance to the host vehicle.
- the environment detector 10 and/or the object detector 60 may have the capability of detecting a relative speed and a relative distance to the host vehicle.
- the mobile body surroundings display apparatuses 1 to 3 may calculate and set a level of attention such that the higher the relative speed to the host vehicle, the higher the level of attention. Further, the mobile body surroundings display apparatuses 1 to 3 may set a level of attention such that the shorter the relative distance to the host vehicle, the higher the level of attention.
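- One conceivable form of such a level of attention, shown for illustration only (the linear weighting and the inverse-distance term are assumptions, not specified by the embodiments), is:

```python
def attention_level(rel_speed, rel_distance, w_speed=1.0, w_dist=10.0):
    """Level of attention that increases monotonically with the relative
    speed (m/s) to the host vehicle and increases as the relative distance
    (m) shortens; the floor on the distance avoids division by zero for
    objects very close to the host vehicle."""
    return w_speed * rel_speed + w_dist / max(rel_distance, 0.1)
```

Any function satisfying the two stated monotonicity properties (higher relative speed, higher level; shorter relative distance, higher level) would serve the same purpose.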
- the mobile body surroundings display apparatuses 1 to 3 calculate a level of attention of an object located around the host vehicle, and when the level of attention calculated is equal to or above a predetermined value, create a camera image of a region where the object is located, replace that region on a virtual image with the camera image, and display the thus-synthesized image. Thereby, the mobile body surroundings display apparatuses 1 to 3 can inform an occupant of detailed information on a region to which attention needs to be paid, without identifying the attribute of the object (whether the object is a human or an animal) or the like.
- the predetermined value can be obtained beforehand through experiment or simulation.
- the mobile body surroundings display apparatuses 1 to 3 may divide a virtual image into a plurality of parts, calculate a level of attention for each of regions corresponding to the respective divided parts of the image, and replace a region the calculated level of attention of which is equal to or above a predetermined value with a camera image. This way, the mobile body surroundings display apparatuses 1 to 3 can reduce the load for the attention level calculation.
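- The per-part thresholding can be sketched as follows (illustrative only; representing the divided virtual image as a 2-D grid of levels is an assumption):

```python
def cells_to_replace(levels, threshold):
    """levels: 2-D list holding the calculated level of attention of the
    region corresponding to each divided part of the virtual image.
    Returns the (row, col) indices of the parts to be replaced with camera
    imagery, i.e. those whose level is equal to or above the threshold."""
    return [(r, c)
            for r, row in enumerate(levels)
            for c, level in enumerate(row)
            if level >= threshold]

levels = [[0.2, 0.9],
          [0.5, 0.1]]
```

With a threshold of 0.5, only two of the four parts would be replaced, so the level calculation and the synthesis are confined to a subset of the image.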
- the attention-required range identification unit 43 identifies an attention-required range using a database stored in the storage unit 44
- the present invention is not limited to this.
- the attention-required range identification unit 43 may transmit information on the position of the host vehicle to a cloud, and identify an attention-required range using information from the cloud corresponding to the information on the position of the host vehicle.
- the attention-required range identification unit 43 may identify an attention-required range using information acquired from a different vehicle through vehicle-to-vehicle communication.
- the synthesis unit 46 of the present embodiments replaces an attention-required range on a virtual image with a camera image
- the present invention is not necessarily limited to this.
- the synthesis unit 46 may generate a camera image around the host vehicle and replace a region other than an attention-required range with a virtual image. In other words, any approach may be implemented as long as an attention-required range is displayed by use of a camera image.
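- This alternative composition order, starting from a camera image of the surroundings and overwriting everything outside the attention-required range with the virtual image, can be sketched as follows (illustrative only; images are modelled as plain 2-D grids and `in_range` is an assumed predicate):

```python
def composite_from_camera(camera, virtual, in_range):
    """Start from the camera image and replace every pixel OUTSIDE the
    attention-required range with the virtual image, so that the range
    itself remains an actual captured image. Both inputs are equally
    sized 2-D lists; a new grid is returned."""
    height, width = len(camera), len(camera[0])
    return [[camera[y][x] if in_range(x, y) else virtual[y][x]
             for x in range(width)]
            for y in range(height)]

camera_img = [["C", "C"], ["C", "C"]]
virtual_p = [["V", "V"], ["V", "V"]]
# Assume the attention-required range is the left column of the image.
result = composite_from_camera(camera_img, virtual_p,
                               in_range=lambda x, y: x == 0)
```

The displayed result is identical to replacing the range on the virtual image with the camera image; only the order of composition differs.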
- a processing circuit includes a programmed processing device such as a processing device including an electric circuit.
- a processing circuit includes a device such as an application-specific integrated circuit (ASIC) adapted to execute functions described in the embodiments or a conventional circuit component.
Description
- The present invention relates to a mobile body surroundings display method and a mobile body surroundings display apparatus.
- There is conventionally known a technique for detecting an attention-required object in the travelling direction of a vehicle and informing the driver of the attention-required object detected. In Patent Literature 1, a detected attention-required object is displayed on a head-up display.
- Patent Literature 1: Japanese Patent Application Publication No. 2001-23091
- In Patent Literature 1, an attention-required object is displayed on a head-up display as an icon. Thus, information that an occupant has empirically acquired during normal driving, such as the attribute of the attention-required object (whether the object is an elderly person or a child) and the eye direction of the object, may be lost.
- The present invention has been made in view of the above problem and has an objective to provide a mobile body surroundings display method and a mobile body surroundings display apparatus capable of informing an occupant of details of information to which attention needs to be paid.
- A driving assistance method according to an aspect of the present invention acquires surroundings information on a mobile body by image capturing, creates a captured image using the surroundings information acquired and a virtual image representing a situation around the mobile body, detects an attention-required range around the mobile body, creates a captured image of the attention-required range detected, and displays the captured image of the attention-required range on a display.
- The present invention displays a captured image of an attention-required range on a display, and therefore allows an occupant to be informed of details of information to which attention needs to be paid.
Fig. 1 is a diagram of the configuration of a mobile body surroundings display apparatus according to a first embodiment of the present invention. -
Figs. 2(a) and 2(b) are diagrams illustrating an example of synthesis of a virtual image with a camera image according to the first embodiment of the present invention. -
Figs. 3(a), 3(b), and 3(c) are diagrams illustrating another example of synthesis of a virtual image with a camera image according to the first embodiment of the present invention. -
Figs. 4(a), 4(b), and 4(c) are diagrams illustrating yet another example of synthesis of a virtual image with a camera image according to the first embodiment of the present invention. -
Fig. 5 is a diagram illustrating still another example of synthesis of a virtual image with a camera image according to the first embodiment of the present invention. -
Fig. 6 is a flowchart illustrating an example operation of the mobile body surroundings display apparatus according to the first embodiment of the present invention. -
Fig. 7 is a diagram of the configuration of a mobile body surroundings display apparatus according to a second embodiment of the present invention. -
Fig. 8 is a diagram illustrating synthesis of a virtual image with a camera image according to the second embodiment of the present invention. -
Fig. 9 is a diagram illustrating an example of synthesis of a virtual image with a camera image according to the second embodiment of the present invention. -
Fig. 10 is a diagram illustrating another example of synthesis of a virtual image with a camera image according to the second embodiment of the present invention. -
Fig. 11 is a flowchart illustrating an example operation of the mobile body surroundings display apparatus according to the second embodiment of the present invention. -
Fig. 12 is a diagram of the configuration of a mobile body surroundings display apparatus according to a third embodiment of the present invention. -
Fig. 13 is a diagram illustrating an example of synthesis of a virtual image with a camera image according to the third embodiment of the present invention. -
Fig. 14 is a diagram illustrating another example of synthesis of a virtual image with a camera image according to the third embodiment of the present invention. -
Fig. 15 is a diagram illustrating yet another example of synthesis of a virtual image with a camera image according to the third embodiment of the present invention. -
Fig. 16 is a flowchart illustrating an example operation of the mobile body surroundings display apparatus according to the third embodiment of the present invention. - Embodiments of the present invention are described below with reference to the drawings. Throughout the drawings, the same portions are denoted by the same reference numerals and are not described repeatedly.
- A mobile body surroundings display apparatus 1 according to a first embodiment is described with reference to Fig. 1. As illustrated in Fig. 1, the mobile body surroundings display apparatus 1 includes an environment detector 10, a front camera 20, a right camera 21, a left camera 22, a rear camera 23, a controller 40, and a display 50. Note that the mobile body surroundings display apparatus 1 is an apparatus mainly used for an autonomous driving vehicle with autonomous driving capability.
- The environment detector 10 is a device that detects the environment surrounding the host vehicle, and is, for example, a laser range finder. A laser range finder detects obstacles (such as a pedestrian, a bicycle, a two-wheel vehicle, and a different vehicle) located around (e.g., within 30 meters from) the host vehicle. Instead, an infrared sensor, an ultrasonic sensor, or the like may be used as the environment detector 10, or a combination of these may constitute the environment detector 10. Further, the environment detector 10 may be configured including cameras such as the front camera 20 and the rear camera 23 to be described later, or including a different camera. Also, the environment detector 10 may be configured including a GPS receiver. The environment detector 10 can transmit information on the position of the host vehicle received with the GPS receiver to a cloud and receive map information around the host vehicle from the cloud. The environment detector 10 outputs detected environment information to the controller 40. In addition, the environment detector 10 does not necessarily have to be provided to the host vehicle, and data detected by a sensor installed outside the vehicle may be acquired through wireless communication. In other words, the environment detector 10 may detect the environment surrounding the host vehicle through wireless communication with other vehicles (vehicle-to-vehicle communication) or wireless communication with obstacles and intersections (vehicle-to-infrastructure communication).
- The front camera 20, the right camera 21, the left camera 22, and the rear camera 23 are each a camera having an image capturing element such as a charge-coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS). Hereinbelow, the four cameras, namely the front camera 20, the right camera 21, the left camera 22, and the rear camera 23, are collectively referred to as "vehicle cameras 20 to 23". The vehicle cameras 20 to 23 acquire surroundings information on the host vehicle by capturing images of the front side, the right side, the left side, and the back side of the host vehicle, respectively, and output the acquired surroundings information to the controller 40.
- The controller 40 is a circuit that processes information acquired from the environment detector 10 and the vehicle cameras 20 to 23, and is configured with, for example, an IC, an LSI, or the like. The controller 40, when seen functionally, can be classified into a virtual image creation unit 41, a driving scene determination unit 42, an attention-required range identification unit 43, a storage unit 44, a camera image creation unit 45, and a synthesis unit 46.
- The virtual image creation unit 41 creates a virtual image representing the surrounding situation of the host vehicle using information acquired from the environment detector 10. In the first embodiment, a virtual image is a computer graphic image obtained by three-dimensional mapping of, for example, geographic information, obstacle information, road sign information, and the like, and is different from a camera image to be described later. The virtual image creation unit 41 outputs the created virtual image to the synthesis unit 46.
- The driving scene determination unit 42 determines the current driving scene using information acquired from the environment detector 10. Examples of driving scenes determined by the driving scene determination unit 42 include a regular travelling scene, a parking scene, a scene where the host vehicle merges onto an expressway, and a scene where the host vehicle enters an intersection. The driving scene determination unit 42 outputs the determined driving scene to the attention-required range identification unit 43.
- Based on the driving scene determined by the driving scene determination unit 42, the attention-required range identification unit 43 identifies an area to which an occupant needs to pay attention (hereinafter referred to as an attention-required range). More specifically, the attention-required range identification unit 43 identifies an attention-required range using a database stored in the storage unit 44. Although a description will be given later of an attention-required range, the attention-required range is, in a side-by-side parking scene for example, a region from the vicinity of the rear wheel on the inner side of turning, to the back of the host vehicle, to the front of the host vehicle on the right side, and in a parallel parking scene, a region around the host vehicle including its front and rear wheels. In the storage unit 44, attention-required ranges according to driving scenes are stored in advance. The attention-required range identification unit 43 outputs the identified attention-required range to the camera image creation unit 45 and the synthesis unit 46.
- Using information acquired from the vehicle cameras 20 to 23, the camera image creation unit 45 creates a camera image (a captured image) of an attention-required range identified by the attention-required range identification unit 43. The camera image creation unit 45 outputs the created camera image to the synthesis unit 46. Although the vehicle cameras are used for the captured image in the present embodiment, the vehicle cameras are not limited to particular types, and may be any cameras such as color cameras, monochrome cameras, infrared cameras, or radio cameras.
- The synthesis unit 46 replaces an attention-required range on a virtual image with a camera image. The synthesis unit 46 then outputs the thus-synthesized image to the display 50.
- The display 50 is, for example, a liquid crystal display installed in an instrument panel or a liquid crystal display used in a car navigation apparatus, and presents various pieces of information to an occupant.
- Next, with reference to
Figs. 2 to 5, examples of camera image synthesis for various driving scenes are described.
- The driving scene illustrated in Fig. 2(a) is a scene where a host vehicle M1 parks side by side between a different vehicle M2 and a different vehicle M3. An attention-required range for a case of side-by-side parking is, as indicated by the region R, a region from the vicinity of the rear wheel on the inner side of turning, to the back of the host vehicle, to the front of the host vehicle on the right side, and is a range where the host vehicle M1 may travel. The attention-required range identification unit 43 identifies the region R as an attention-required range, and the camera image creation unit 45 creates a camera image of the region R. Then, the synthesis unit 46 replaces the region R on a virtual image P with the camera image. Thereby, the display 50 displays the region R to which an occupant needs to pay attention with the camera image, i.e., an actual captured image. Thus, the occupant can be informed of detailed information about the region R.
- Next, with reference to Fig. 2(b), a description is given of a driving scene in which a host vehicle M1 performs parallel parking between a different vehicle M2 and a different vehicle M3. An attention-required range for a case of parallel parking is, as indicated by the region R, a region around the vehicle including its front and rear wheels. The attention-required range identification unit 43 identifies the region R as an attention-required range, and the camera image creation unit 45 creates a camera image of the region R. The following processing is the same as that described in connection to Fig. 2(a), and will therefore not be described here.
- Next, with reference to Fig. 3(a), a description is given of a driving scene where a host vehicle M1 diverts to the left to avoid colliding with a different vehicle M2 while travelling a narrow road. An attention-required range for a case of diverting to the left on a narrow road is, as indicated by the region R, a region covering the front and the left side of the host vehicle. The attention-required range identification unit 43 identifies the region R as an attention-required range, and the camera image creation unit 45 creates a camera image of the region R. The following processing is the same as described in connection to Fig. 2(a), and will therefore not be described here. In addition, the attention-required range identification unit 43 may identify, as an attention-required range, a region where the host vehicle M1 gets close to the different vehicle M2 when passing by the different vehicle M2.
- Next, with reference to Fig. 3(b), a description is given of a letter-S travelling scene where a host vehicle M1 travelling a narrow road avoids a parked different vehicle M3. An attention-required range for a case of letter-S travelling on a narrow road is, as indicated by the region R, a region covering both the left and right sides of the different vehicle M3 and the front of the host vehicle including the positions where the tires touch the ground. Note that the region R may include an oncoming different vehicle M2. Further, the attention-required range identification unit 43 may set, as an attention-required range, a region where the oncoming vehicle travels within a region where the host vehicle M1 travels to avoid the parked vehicle. The attention-required range identification unit 43 identifies the region R as an attention-required range, and the camera image creation unit 45 creates a camera image of the region R. The following processing is the same as that described in connection to Fig. 2(a), and will therefore not be described here.
- Next, with reference to Fig. 3(c), a description is given of a driving scene where a host vehicle M1 travelling a narrow road passes the narrowest place (hereinafter referred to as a narrowest part) due to the presence of a telephone pole T or the like. An attention-required range for a case of passing the narrowest part is, as indicated with the region R, a region covering the front of the vehicle including the width of the narrowest part (the width of the road between a different vehicle M2 and the telephone pole T). The attention-required range identification unit 43 identifies the region R as an attention-required range, and the camera image creation unit 45 creates a camera image of the region R. The following processing is the same as that described in connection to Fig. 2(a), and will therefore not be described here.
- Next, with reference to Fig. 4(a), a description is given of a driving scene where a host vehicle M1 merges onto an expressway with a different vehicle M2 behind. An attention-required range for such a driving scene is, as indicated with the region R, a region from the right side of the host vehicle to a region therebehind. The attention-required range identification unit 43 identifies the region R as an attention-required range, and the camera image creation unit 45 creates a camera image of the region R. The following processing is the same as that described in connection to Fig. 2(a), and will therefore not be described here. Note that the attention-required range for the driving scene illustrated in Fig. 4(a) may be a range reflected in the right door mirror of the host vehicle.
- Next, with reference to Fig. 4(b), a description is given of a driving scene where a host vehicle M1 merges onto an expressway with a different vehicle M3 in front. An attention-required range for such a driving scene is, as indicated with the region R, a region ahead of the right side of the host vehicle. The attention-required range identification unit 43 identifies the region R as an attention-required range, and the camera image creation unit 45 creates a camera image of the region R. The following processing is the same as that described in connection to Fig. 2(a), and will therefore not be described here.
- Next, with reference to Fig. 4(c), a description is given of a driving scene where a host vehicle M1 merges onto an expressway with a different vehicle M2 behind and a different vehicle M3 in front. An attention-required range for such a driving scene is, as indicated with the region R, a region ahead of and behind the right side of the host vehicle. The attention-required range identification unit 43 identifies the region R as an attention-required range, and the camera image creation unit 45 creates a camera image of the region R. The following processing is the same as that described in connection to Fig. 2(a), and will therefore not be described here.
- Next, with reference to Fig. 5, a description is given of a driving scene where a host vehicle M1 takes a left turn at an intersection. An attention-required range for a case of taking a left turn at an intersection is, as indicated with the region R, a region of the entire intersection including the travelling direction (the left-turn direction) of the host vehicle M1. The attention-required range identification unit 43 identifies the region R as an attention-required range, and the camera image creation unit 45 creates a camera image of the region R. The following processing is the same as that described in connection to Fig. 2(a), and will therefore not be described here.
- Next, an example operation of the mobile body surroundings display apparatus 1 is described with reference to the flowchart in Fig. 6. This flowchart is initiated when, for example, an ignition switch is turned on.
- In Step S101, the
environment detector 10 and the vehicle cameras 20 to 23 acquire information about the surroundings of the host vehicle. - In Step S102, the virtual
image creation unit 41 creates a virtual image using the information about the surroundings of the host vehicle. - In Step S103, the driving
scene determination unit 42 determines a driving scene using the information about the surroundings of the host vehicle. - In Step S104, based on the driving scene determined by the driving
scene determination unit 42, the attention-required range identification unit 43 identifies an attention-required range using the database in the storage unit 44. - In Step S105, the camera
image creation unit 45 creates a camera image of the attention-required range identified by the attention-required range identification unit 43. - In Step S106, the
synthesis unit 46 replaces the attention-required range on the virtual image with the camera image. - In Step S107, the
controller 40 displays the image synthesized by the synthesis unit 46 on the display 50. - The mobile body surroundings display apparatus 1 according to the first embodiment as described above can produce the following advantageous effects.
- The mobile body surroundings display apparatus 1 first creates a virtual image using information about the surroundings of the host vehicle. Next, the mobile body surroundings display apparatus 1 identifies an attention-required range based on a driving scene, and creates a camera image of the attention-required range identified. Then, the mobile body surroundings display apparatus 1 replaces the attention-required range on the virtual image with the camera image, and displays the thus-synthesized image on the
display 50. Thereby, an occupant can be informed of detailed information on the attention-required range. - Earlier, the mobile body surroundings display apparatus 1 has been described as an apparatus mainly used for an autonomous driving vehicle with autonomous driving capability. When many pieces of information are given to an occupant during autonomous driving, the occupant may find them bothersome. The mobile body surroundings display apparatus 1, however, displays a virtual image except for an attention-required range, and thus can reduce the amount of information given to an occupant. Thus, the mobile body surroundings display apparatus 1 can bother an occupant less. By thus displaying an attention-required range with a camera image and displaying a region other than the attention-required range with a virtual image, the mobile body surroundings display apparatus 1 can give an occupant detailed information for a region to which the occupant needs to pay attention (an attention-required range), and reduce excessive information for a region other than the attention-required range. Thereby, the occupant can correctly acquire only necessary information.
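The flow of Steps S101 to S107 can be sketched as follows. This is an illustrative assumption only: the patent specifies behaviour, not an implementation, so the scene-to-range table, coordinates, and function names below are invented.

```python
# Stand-in for the database in the storage unit 44 (S104): each driving
# scene maps to an attention-required range, given here as a rectangle
# (x0, y0, x1, y1) in the coordinates of the top-view virtual image.
SCENE_TO_RANGE = {
    "merge_vehicle_ahead": (2, 0, 4, 2),        # ahead of the right side
    "left_turn_at_intersection": (0, 0, 4, 3),  # entire intersection
}

def synthesize(virtual_img, camera_img, region):
    """S105/S106: replace the attention-required range on the virtual
    image with the corresponding part of the camera image."""
    x0, y0, x1, y1 = region
    out = [row[:] for row in virtual_img]  # copy; keep the virtual image intact
    for y in range(y0, y1):
        out[y][x0:x1] = camera_img[y][x0:x1]
    return out

# 4x4 toy images: 'V' marks virtual pixels, 'C' camera pixels.
virtual = [["V"] * 4 for _ in range(4)]
camera = [["C"] * 4 for _ in range(4)]
region = SCENE_TO_RANGE["merge_vehicle_ahead"]       # S103/S104
display_image = synthesize(virtual, camera, region)  # shown on display 50 (S107)
```

Only the pixels inside the identified region come from the camera image; everything else remains virtual, which is what keeps the amount of information shown to the occupant low.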
- As illustrated in
Figs. 2 to 5, attention-required ranges are places on a road, such as an expressway merging point, where vehicles cross each other's paths, or an intersection, where a vehicle and a pedestrian cross paths. An occupant needs to pay attention in such places. Since the mobile body surroundings display apparatus 1 replaces an attention-required range on a virtual image with a camera image and displays the thus-synthesized image, an occupant can be informed of detailed information on the attention-required range. - Next, with reference to
Fig. 7, a description is given of a mobile body surroundings display apparatus 2 according to a second embodiment of the present invention. As illustrated in Fig. 7, the second embodiment differs from the first embodiment in that the mobile body surroundings display apparatus 2 includes an object detector 60 and an attention-required object identification unit 47 and does not include the driving scene determination unit 42, the attention-required range identification unit 43, or the storage unit 44. The same constituents as those in the first embodiment are denoted by the same reference numerals as used in the first embodiment and will not be described here. Different points will be mainly discussed below. - The
object detector 60 is an object detection sensor that detects an object present around the host vehicle, including an object present in the periphery of the road on which the host vehicle is travelling. For example, a radar sensor can be used as the object detector 60. Examples of objects detected by the object detector 60 include mobile bodies such as a different vehicle, a motorcycle, a pedestrian, and a bicycle, as well as traffic signals and road signs. Note that the object detector 60 may be a sensor other than a radar sensor, for example an image recognition sensor using an image captured by a camera; a laser sensor, an ultrasonic sensor, or the like may also be used as the object detector 60. The object detector 60 outputs information on detected objects to the attention-required object identification unit 47. - The attention-required
object identification unit 47 identifies, among the objects detected by the object detector 60, an object to which an occupant needs to pay attention (hereinafter referred to as an attention-required object). Examples of an attention-required object include a different vehicle, a motorcycle, a pedestrian, a bicycle, an animal (such as a dog or a cat), a telephone pole, an advertising display, a traffic light, a road sign, and a fallen object on a road. The attention-required object identification unit 47 outputs the identified attention-required object to the camera image creation unit 45 and the synthesis unit 46. - Next, with reference to
Fig. 8, an example of camera image synthesis is described. As illustrated in Fig. 8, a pedestrian W is shown as a symbol on a virtual image P. In this case, information such as the attribute of the pedestrian W (whether the pedestrian W is an elderly person or a child) and the gaze direction of the pedestrian W may be lost. Thus, the attention-required object identification unit 47 identifies the pedestrian W detected by the object detector 60 as an attention-required object, and the camera image creation unit 45 creates a camera image of the identified pedestrian W, as illustrated in Fig. 8. Then, the synthesis unit 46 replaces the pedestrian W on the virtual image P with the camera image. Thereby, the display 50 displays the pedestrian W, to whom an occupant needs to pay attention, using the camera image, i.e., an actual captured image. Thereby, the occupant can be informed of detailed information on the pedestrian W. Note that the camera image created for an attention-required object may be a camera image of a region including the pedestrian W, as illustrated in Fig. 8, or a camera image cutting out the pedestrian W along the contour. - Next, with reference to
Figs. 9 and 10, a description is given of examples of camera image synthesis for various driving scenes. - As illustrated in
Fig. 9, when the object detector 60 detects a different vehicle M2 while a host vehicle M1 is travelling on a narrow road, the attention-required object identification unit 47 identifies the different vehicle M2 as an attention-required object, and the camera image creation unit 45 creates a camera image of the different vehicle M2. The following processing is the same as that described in connection to Fig. 8, and will therefore not be described here. - Next, with reference to
Fig. 10, a description is given of a situation where a host vehicle M1 enters a T intersection. When the object detector 60 detects different vehicles M2 and M3, a pedestrian W, bicycles B1 to B3, and a road sign L, the attention-required object identification unit 47 identifies these objects as attention-required objects, and the camera image creation unit 45 creates camera images of these objects. The following processing is the same as that described in connection to Fig. 8, and will therefore not be described here. - Next, with reference to the flowchart in
Fig. 11 , an example operation of the mobile body surroundings display apparatus 2 is described. This flowchart is initiated when, for example, an ignition switch is turned on. - In Step S201, the
environment detector 10 and the object detector 60 acquire information about the surroundings of the host vehicle. - In Step S202, the virtual
image creation unit 41 creates a virtual image using the information about the surroundings of the host vehicle. - In Step S203, the attention-required
object identification unit 47 identifies an attention-required object around the host vehicle. - In Step S204, the camera
image creation unit 45 creates a camera image of the attention-required object identified by the attention-required object identification unit 47. - In Step S205, the
synthesis unit 46 replaces the attention-required object on the virtual image with the camera image. - In Step S206, the
controller 40 displays the image synthesized by the synthesis unit 46 on the display 50. - The mobile body surroundings display apparatus 2 according to the second embodiment as described above produces the following advantageous effects.
- The mobile body surroundings display apparatus 2 first creates a virtual image using information about the surroundings of the host vehicle. Next, the mobile body surroundings display apparatus 2 identifies an attention-required object and creates a camera image of the attention-required object identified. Then, the mobile body surroundings display apparatus 2 replaces the attention-required object on the virtual image with the camera image, and displays the thus-synthesized image on the
display 50. Thereby, an occupant can be informed of detailed information on the attention-required object. - On a virtual image, information on an attention-required object may be lost. For example, if a human is shown as a symbol, information such as the attribute of that person (whether the person is an elderly person or a child) and the person's gaze direction may be lost. Further, if a vehicle is shown as a symbol, information such as the size, shape, and color of the vehicle may be lost. The mobile body surroundings display apparatus 2, however, displays an attention-required object on a virtual image after replacing it with a camera image, and thus can compensate for the loss of information which may be caused by image virtualization. Thereby, an occupant is better able to predict the motion of the attention-required object.
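A minimal sketch of this second-embodiment flow (Steps S203 to S205) follows. The class list, the `Detection` type, and the bounding boxes are assumptions made for illustration; the patent does not disclose an implementation.

```python
from dataclasses import dataclass

# Object classes treated as attention-required; the set is an assumption
# based on the examples named in the text.
ATTENTION_CLASSES = {"vehicle", "motorcycle", "pedestrian", "bicycle",
                     "animal", "traffic_light", "road_sign", "fallen_object"}

@dataclass
class Detection:
    cls: str
    bbox: tuple  # (x0, y0, x1, y1), same coordinates on both images

def identify_attention_objects(detections):
    """Attention-required object identification unit 47 (S203)."""
    return [d for d in detections if d.cls in ATTENTION_CLASSES]

def replace_objects(virtual_img, camera_img, objects):
    """Synthesis unit 46 (S204/S205): paste camera pixels over each
    attention-required object's bounding box so attributes lost in the
    symbol (age, gaze, size, color) stay visible."""
    out = [row[:] for row in virtual_img]
    for d in objects:
        x0, y0, x1, y1 = d.bbox
        for y in range(y0, y1):
            out[y][x0:x1] = camera_img[y][x0:x1]
    return out

virtual = [["V"] * 6 for _ in range(4)]
camera = [["C"] * 6 for _ in range(4)]
dets = [Detection("pedestrian", (1, 1, 3, 3)),  # kept: attention-required
        Detection("tree", (4, 0, 6, 2))]        # dropped: not in the set
out = replace_objects(virtual, camera, identify_attention_objects(dets))
```

Note how the non-attention object stays virtual: only the classes the occupant must watch are shown as real camera pixels.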
- In addition, the mobile body surroundings display apparatus 2 can inform an occupant of detailed information on an attention-required object, such as a pedestrian, an animal, a bicycle, a vehicle, or a road sign, by displaying the attention-required object after replacing it with a camera image.
- Next, with reference to
Fig. 12, a description is given of a mobile body surroundings display apparatus 3 according to a third embodiment of the present invention. The third embodiment differs from the first embodiment in that the mobile body surroundings display apparatus 3 includes the object detector 60, the attention-required object identification unit 47, and a highlight portion identification unit 48. The same constituents as those in the first embodiment are denoted by the same reference numerals as used in the first embodiment, and are not described below. Different points will be mainly discussed below. Note that the object detector 60 and the attention-required object identification unit 47 are the same as those described in the second embodiment, and will therefore not be described below. - The highlight
portion identification unit 48 identifies a highlight portion to which an occupant needs to pay attention. Specifically, when an attention-required object identified by the attention-required object identification unit 47 is located within an attention-required range identified by the attention-required range identification unit 43, the highlight portion identification unit 48 identifies this attention-required object as a highlight portion. The highlight portion identification unit 48 outputs the identified highlight portion to the camera image creation unit 45 and the synthesis unit 46. - Next, with reference to
Figs. 13 to 15 , examples of camera image synthesis for various driving scenes are described. - First, with reference to
Fig. 13, a description is given of a driving scene where a host vehicle M1 enters a T intersection. An attention-required range for a case of a T intersection is, as indicated by the region R, a region around the host vehicle including the left and right sides of the center of the T intersection. The attention-required range identification unit 43 identifies the region R as an attention-required range. Next, the attention-required object identification unit 47 identifies the different vehicles M2 and M3, the pedestrian W, the bicycles B1 to B3, and the road sign L detected by the object detector 60 as attention-required objects. Next, out of the attention-required objects identified by the attention-required object identification unit 47, the highlight portion identification unit 48 identifies the attention-required objects located within the region R as highlight portions. In the example illustrated in Fig. 13, the highlight portion identification unit 48 identifies the pedestrian W, the bicycles B1 to B3, and the road sign L as highlight portions. Next, the camera image creation unit 45 creates camera images of the pedestrian W, the bicycles B1 to B3, and the road sign L identified as the highlight portions. Then, the synthesis unit 46 replaces the pedestrian W, the bicycles B1 to B3, and the road sign L on a virtual image P with the camera images. Thereby, the display 50 displays the pedestrian W, the bicycles B1 to B3, and the road sign L by use of the camera images, i.e., actual captured images. This allows an occupant to be informed of detailed information on the pedestrian W, the bicycles B1 to B3, and the road sign L. Note that, in the example illustrated in Fig. 13, an attention-required object which is only partially located within the region R, such as the bicycles B2 and B3, is also identified as a highlight portion; alternatively, only an attention-required object which is entirely located within the region R may be identified as a highlight portion.
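The note on partially versus entirely contained objects amounts to two different box tests. A hedged sketch on axis-aligned boxes (x0, y0, x1, y1) follows; the coordinates are invented for illustration.

```python
def overlaps(box, region):
    """True if the object box lies at least partially inside the region
    (the criterion that also admits B2 and B3 in the Fig. 13 example)."""
    bx0, by0, bx1, by1 = box
    rx0, ry0, rx1, ry1 = region
    return bx0 < rx1 and rx0 < bx1 and by0 < ry1 and ry0 < by1

def contained(box, region):
    """True only if the object box lies entirely inside the region
    (the stricter alternative mentioned in the note)."""
    bx0, by0, bx1, by1 = box
    rx0, ry0, rx1, ry1 = region
    return rx0 <= bx0 and ry0 <= by0 and bx1 <= rx1 and by1 <= ry1

R = (0, 0, 10, 10)            # attention-required range
bicycle_b2 = (8, 8, 12, 12)   # straddles the boundary of R
pedestrian_w = (2, 2, 4, 4)   # fully inside R
```

Under the first criterion `bicycle_b2` becomes a highlight portion; under the second it does not, which is exactly the design choice the note leaves open.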
Further, the attention-required range identification unit 43 may identify an attention-required range in real time, or attention-required ranges may be preset on a map or the like. - Further, the highlight
portion identification unit 48 does not identify any highlight portion when no attention-required object is detected within the region R. When the highlight portion identification unit 48 does not identify any highlight portion, the synthesis unit 46 does not replace the region R on the virtual image P with a camera image. - The reason for this is that when no attention-required object, such as a different vehicle or a pedestrian, is detected in a region R, the risk of the host vehicle colliding is low, and there is little need to show the occupant the region R as a camera image. When no attention-required object is detected in a region R, the mobile body surroundings display apparatus 3 displays only the virtual image P and thus can reduce the amount of information given to an occupant. Consequently, the mobile body surroundings display apparatus 3 can bother an occupant less. Note that the
object detector 60 may detect an object only within an attention-required range identified by the attention-required range identification unit 43. Limiting the detection range in this way can reduce the time it takes for the object detector 60 to detect an object. In turn, the time it takes for the attention-required object identification unit 47 to identify an attention-required object can be reduced as well. In addition, limiting the detection range can reduce the processing load on the controller 40. - Next, with reference to
Fig. 14, a description is given of a driving scene where a host vehicle M1 takes a left turn at an intersection. An attention-required range for a case of taking a left turn at an intersection is, as indicated with the region R, the entire region of the intersection, including the travelling direction (left-turn direction) of the host vehicle M1. The attention-required range identification unit 43 identifies the region R as an attention-required range, and the attention-required object identification unit 47 identifies different vehicles M2 to M4, a bicycle B, and a traffic light S as attention-required objects. Next, out of the attention-required objects identified by the attention-required object identification unit 47, the highlight portion identification unit 48 identifies the attention-required objects located within the region R as highlight portions. In the example illustrated in Fig. 14, the highlight portion identification unit 48 identifies the different vehicles M2 to M4, the bicycle B, and the traffic light S as highlight portions. The following processing is the same as that described in connection to Fig. 13, and will therefore not be described here. Note that, as depicted in Fig. 14, the attention-required range can be set to suit a left-turn situation. When an attention-required range and the outside of the attention-required range are thus set according to a travelling scene, a driving operation currently being exercised, a driving operation expected to be exercised in the future, and the like, the attention-required range identification unit 43 can set an attention-required range suitable for the travelling scene and the driving operation, and can make it less likely that attention is drawn to the outside of the attention-required range. - Next, with reference to
Fig. 15, a description is given of a driving scene where a host vehicle M1 takes a right turn at an intersection. An attention-required range for a case of taking a right turn at an intersection is, as indicated with the region R, the entire region of the intersection, including the travelling direction (right-turn direction) of the host vehicle M1 and excluding the right side of the host vehicle. The attention-required range identification unit 43 identifies the region R as an attention-required range, and the attention-required object identification unit 47 identifies different vehicles M2 to M4, a pedestrian W, bicycles B1 and B2, and road signs L1 and L2 as attention-required objects. Next, out of the attention-required objects identified by the attention-required object identification unit 47, the highlight portion identification unit 48 identifies the attention-required objects located within the region R as highlight portions. In the example illustrated in Fig. 15, the highlight portion identification unit 48 identifies the different vehicles M2 to M4, the pedestrian W, the bicycles B1 and B2, and the road signs L1 and L2 as highlight portions. The following processing is the same as that described in connection to Fig. 13, and will therefore not be described here. Further, as in Fig. 14, the attention-required range identification unit 43 may set the attention-required range to suit a right-turn situation, as depicted in Fig. 15. - Next, with reference to the flowchart illustrated in
Fig. 16 , a description is given of an example operation of the mobile body surroundings display apparatus 3. - This flowchart is initiated when, for example, an ignition switch is turned on.
- In Step S301, the
environment detector 10, the object detector 60, and the vehicle cameras 20 to 23 acquire information about the surroundings of the host vehicle. - In Step S302, the virtual
image creation unit 41 creates a virtual image using the information about the surroundings of the host vehicle. - In Step S303, the driving
scene determination unit 42 determines a driving scene using the information about the surroundings of the host vehicle. - In Step S304, based on the driving scene determined by the driving
scene determination unit 42, the attention-required range identification unit 43 identifies an attention-required range using the database in the storage unit 44. - In Step S305, the attention-required
object identification unit 47 identifies an attention-required object around the host vehicle. - In Step S306, the highlight
portion identification unit 48 determines whether an attention-required object is located within the attention-required range. When an attention-required object is located within the attention-required range (Yes in Step S306), the highlight portion identification unit 48 identifies the attention-required object located within the attention-required range, and the processing proceeds to Step S307. When no attention-required object is located within the attention-required range (No in Step S306), the processing proceeds to Step S310. - In Step S307, the camera
image creation unit 45 creates a camera image of the attention-required object identified by the highlight portion identification unit 48. - In Step S308, the
synthesis unit 46 replaces the attention-required object on the virtual image with the camera image. - In Step S309, the
controller 40 displays the image synthesized by the synthesis unit 46 on the display 50. - In Step S310, the
controller 40 displays the virtual image on the display 50. - The mobile body surroundings display apparatus 3 according to the third embodiment as described above can produce the following advantageous effects.
- The mobile body surroundings display apparatus 3 first creates a virtual image using information about the surroundings of the host vehicle. Next, the mobile body surroundings display apparatus 3 identifies an attention-required range based on a driving scene and identifies an attention-required object within the attention-required range. The mobile body surroundings display apparatus 3 creates a camera image of the attention-required object within the attention-required range, replaces the attention-required object within the attention-required range on the virtual image with the camera image, and displays the thus-synthesized image on the
display 50. Thereby, an occupant can be informed of detailed information on the attention-required object. - Further, the mobile body surroundings display apparatus 3 displays only the virtual image on the
display 50 when no attention-required object is located within the attention-required range. Thus, the mobile body surroundings display apparatus 3 can reduce the amount of information given to an occupant. The mobile body surroundings display apparatus 3 can thus bother an occupant less. - Hereinabove, the embodiments of the present invention have been described. However, it should not be understood that the descriptions and drawings which constitute part of the disclosure limit the present invention. From this disclosure, various alternative embodiments, examples, and operation techniques will be easily found by those skilled in the art.
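The Step S306 branch described above — synthesize only when a highlight portion exists, otherwise fall back to the plain virtual image (Step S310) — can be sketched as follows; all names and box coordinates are illustrative assumptions.

```python
def highlight_portions(object_boxes, attention_range):
    """Highlight portion identification unit 48: keep only the object
    boxes (x0, y0, x1, y1) that fall at least partially within the
    attention-required range."""
    rx0, ry0, rx1, ry1 = attention_range
    return [b for b in object_boxes
            if b[0] < rx1 and rx0 < b[2] and b[1] < ry1 and ry0 < b[3]]

def choose_display(virtual_img, synthesized_img, portions):
    """S306 branch: show the synthesized image only when something needs
    highlighting (S307-S309); otherwise show the virtual image alone
    (S310), which keeps the amount of displayed information low."""
    return synthesized_img if portions else virtual_img

R = (0, 0, 10, 10)
portions = highlight_portions([(2, 2, 4, 4), (20, 20, 22, 22)], R)
```

The empty-portions fallback is the mechanism behind the reduced-information effect: with nothing to highlight, the occupant sees only the calm virtual rendering.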
- In the first and second embodiments, an attention-required range and an attention-required object, respectively, are replaced with a camera image. In the third embodiment, an attention-required object is replaced with a camera image if the attention-required object is located within an attention-required range. An attention-required range and an attention-required object indicate a range to which an occupant needs to pay attention, and an attention-required range and an attention-required object can collectively be rephrased as an attention-required range. In addition, as will be described later, an attention-required range can also include a region the level of attention of which is equal to or above a predetermined value.
- In the first to third embodiments, an attention-required range is replaced with a camera image, but the present invention is not limited to this. For example, the mobile body surroundings display apparatuses 1 to 3 may calculate a level of attention for the host vehicle and perform the replacement according to the level of attention calculated. The level of attention for the host vehicle can be obtained based on a relative speed or a relative distance to the host vehicle. For example, the
environment detector 10 and/or the object detector 60 may have the capability of detecting a relative speed and a relative distance to the host vehicle. - For example, the mobile body surroundings display apparatuses 1 to 3 may calculate and set a level of attention such that the higher the relative speed to the host vehicle, the higher the level of attention. Further, the mobile body surroundings display apparatuses 1 to 3 may set a level of attention such that the shorter the relative distance to the host vehicle, the higher the level of attention.
- A specific description is given of a display method which is based on a level of attention. The mobile body surroundings display apparatuses 1 to 3 calculate a level of attention of an object located around the host vehicle, and when the level of attention calculated is equal to or above a predetermined value, create a camera image of a region where the object is located, replace that region on a virtual image with the camera image, and display the thus-synthesized image. Thereby, the mobile body surroundings display apparatuses 1 to 3 can inform an occupant of detailed information on a region to which attention needs to be paid, without identifying the attribute of the object (whether the object is a human or an animal) or the like. Note that the predetermined value can be obtained beforehand through experiment or simulation.
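One plausible attention-level rule consistent with the paragraphs above (higher relative speed or shorter relative distance means a higher level) is sketched below. The formula, the epsilon term, and the threshold value are assumptions; the patent states only the monotonicity and that the predetermined value is found beforehand by experiment or simulation.

```python
def attention_level(rel_speed_mps, rel_distance_m, eps=1.0):
    """Monotonically increases with relative speed and decreases with
    relative distance; eps avoids division by zero at distance 0."""
    return rel_speed_mps / (rel_distance_m + eps)

def needs_camera_image(rel_speed_mps, rel_distance_m, threshold=0.5):
    """Replace the object's region with a camera image when the level
    reaches or exceeds the predetermined value."""
    return attention_level(rel_speed_mps, rel_distance_m) >= threshold
```

A fast, close object thus triggers replacement without the apparatus ever needing to classify what the object is, matching the attribute-free display method described above.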
- Also, the mobile body surroundings display apparatuses 1 to 3 may divide a virtual image into a plurality of parts, calculate a level of attention for each of regions corresponding to the respective divided parts of the image, and replace a region the calculated level of attention of which is equal to or above a predetermined value with a camera image. This way, the mobile body surroundings display apparatuses 1 to 3 can reduce the load for the attention level calculation.
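The grid variant just described — divide the virtual image into parts, score each part, and replace only parts at or above the predetermined value — might look like the following sketch; the cell size and the toy level map are assumptions for illustration.

```python
def replace_by_grid(virtual_img, camera_img, level_of, cell=2, threshold=0.5):
    """Replace each grid cell whose attention level (level_of(cx, cy))
    is at or above the threshold with the matching camera pixels."""
    h, w = len(virtual_img), len(virtual_img[0])
    out = [row[:] for row in virtual_img]
    for y0 in range(0, h, cell):
        for x0 in range(0, w, cell):
            if level_of(x0 // cell, y0 // cell) >= threshold:
                x1 = min(x0 + cell, w)
                for y in range(y0, min(y0 + cell, h)):
                    out[y][x0:x1] = camera_img[y][x0:x1]
    return out

virtual = [["V"] * 4 for _ in range(4)]
camera = [["C"] * 4 for _ in range(4)]
# Toy level map: only the top-left cell is attention-required.
grid_out = replace_by_grid(virtual, camera,
                           lambda cx, cy: 1.0 if (cx, cy) == (0, 0) else 0.0)
```

Scoring one coarse level per cell instead of per pixel or per object is what gives the load reduction mentioned above.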
- Although the attention-required
range identification unit 43 identifies an attention-required range using a database stored in the storage unit 44, the present invention is not limited to this. For example, the attention-required range identification unit 43 may transmit information on the position of the host vehicle to a cloud, and identify an attention-required range using information from the cloud corresponding to the information on the position of the host vehicle. Also, the attention-required range identification unit 43 may identify an attention-required range using information acquired from a different vehicle through vehicle-to-vehicle communication. - Although the
synthesis unit 46 of the present embodiments replaces an attention-required range on a virtual image with a camera image, the present invention is not necessarily limited to this. The synthesis unit 46 may generate a camera image around the host vehicle and replace a region other than an attention-required range with a virtual image. In other words, any approach may be implemented as long as an attention-required range is displayed by use of a camera image. - Note that each function of the foregoing embodiments may be implemented by one or a plurality of processing circuits. A processing circuit includes a programmed processing device such as a processing device including an electric circuit. A processing circuit also includes a device such as an application-specific integrated circuit (ASIC) adapted to execute the functions described in the embodiments, or a conventional circuit component.
- 10 environment detector
- 20 front camera
- 21 right camera
- 22 left camera
- 23 rear camera
- 40 controller
- 41 virtual image creation unit
- 42 driving scene determination unit
- 43 attention-required range identification unit
- 44 storage unit
- 45 camera image creation unit
- 46 synthesis unit
- 47 attention-required object identification unit
- 48 highlight portion identification unit
- 50 display
- 60 object detector
Claims (7)
- A mobile body surroundings display method performed by a mobile body surroundings display apparatus including an image capturing element that acquires surroundings information on a mobile body by image capturing, a controller that creates a captured image using the surroundings information and a virtual image representing a situation around the mobile body, and a display that displays the virtual image, the method comprising: detecting an attention-required range around the mobile body; creating the captured image of the attention-required range; and displaying the attention-required range by use of the captured image.
- The mobile body surroundings display method according to claim 1, wherein the attention-required range is a region on a road where two vehicles, or a vehicle and a person, travel across each other's paths.
- The mobile body surroundings display method according to claim 1 or 2, comprising: detecting an attention-required object around the mobile body; when detecting the attention-required object, detecting a region including the attention-required object as the attention-required range; and displaying the attention-required range by use of the captured image.
- The mobile body surroundings display method according to claim 1 or 2, comprising: detecting an attention-required object around the mobile body within the attention-required range; when detecting the attention-required object within the attention-required range, detecting a region including the attention-required object as a highlight portion; and displaying the highlight portion by use of the captured image.
- The mobile body surroundings display method according to claim 3 or 4, wherein the attention-required object is a pedestrian, an animal, a bicycle, a vehicle, or a road sign located around the mobile body.
- The mobile body surroundings display method according to claim 1, comprising: calculating a level of attention around the mobile body; detecting a region the level of attention of which is equal to or above a predetermined value as the attention-required range; and displaying the attention-required range by use of the captured image.
- A mobile body surroundings display apparatus comprising: an image capturing element that acquires surroundings information on a mobile body by image capturing; a controller that creates a captured image using the surroundings information and a virtual image representing a situation around the mobile body; and a display that displays the virtual image, wherein the controller detects an attention-required range around the mobile body, creates the captured image of the attention-required range detected, and displays the captured image of the attention-required range on the display.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2016/062014 WO2017179174A1 (en) | 2016-04-14 | 2016-04-14 | Moving body surroundings display method and moving body surroundings display apparatus |
Publications (2)
Publication Number | Publication Date |
---|---|
EP3444145A1 true EP3444145A1 (en) | 2019-02-20 |
EP3444145A4 EP3444145A4 (en) | 2019-08-14 |
Family
ID=60042543
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP16898634.7A Ceased EP3444145A4 (en) | 2016-04-14 | 2016-04-14 | Moving body surroundings display method and moving body surroundings display apparatus |
Country Status (10)
Country | Link |
---|---|
US (1) | US10864856B2 (en) |
EP (1) | EP3444145A4 (en) |
JP (1) | JP6555413B2 (en) |
KR (1) | KR102023863B1 (en) |
CN (1) | CN109070799B (en) |
BR (1) | BR112018071020B1 (en) |
CA (1) | CA3020813C (en) |
MX (1) | MX367700B (en) |
RU (1) | RU2715876C1 (en) |
WO (1) | WO2017179174A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7195200B2 (en) * | 2019-03-28 | 2022-12-23 | 株式会社デンソーテン | In-vehicle device, in-vehicle system, and surrounding monitoring method |
US11554668B2 (en) * | 2019-06-25 | 2023-01-17 | Hyundai Mobis Co., Ltd. | Control system and method using in-vehicle gesture input |
CN116964637A (en) * | 2021-03-08 | 2023-10-27 | 索尼集团公司 | Information processing device, information processing method, program, and information processing system |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06278531A (en) | 1993-03-25 | 1994-10-04 | Honda Motor Co Ltd | Visual recognition assisting device for vehicle |
JP4334686B2 (en) | 1999-07-07 | 2009-09-30 | 本田技研工業株式会社 | Vehicle image display device |
JP4956915B2 (en) * | 2005-05-20 | 2012-06-20 | 日産自動車株式会社 | Video display device and video display method |
EP1899899A1 (en) | 2005-06-30 | 2008-03-19 | Norlitech, LLC | Monolithic image perception device and method |
JP4775123B2 (en) | 2006-06-09 | 2011-09-21 | 日産自動車株式会社 | Vehicle monitoring device |
CN101689244B (en) * | 2007-05-04 | 2015-07-22 | 高通股份有限公司 | Camera-based user input for compact devices |
US8947421B2 (en) * | 2007-10-29 | 2015-02-03 | Interman Corporation | Method and server computer for generating map images for creating virtual spaces representing the real world |
JP2011118482A (en) * | 2009-11-30 | 2011-06-16 | Fujitsu Ten Ltd | In-vehicle device and recognition support system |
EP2512133B1 (en) * | 2009-12-07 | 2018-07-18 | Clarion Co., Ltd. | Vehicle periphery image display system |
JP2011205513A (en) * | 2010-03-26 | 2011-10-13 | Aisin Seiki Co Ltd | Vehicle periphery monitoring device |
WO2011135778A1 (en) * | 2010-04-26 | 2011-11-03 | パナソニック株式会社 | Image processing device, car navigation system, and on-street camera system |
JP2012053533A (en) * | 2010-08-31 | 2012-03-15 | Daihatsu Motor Co Ltd | Driving support device |
JP5998496B2 (en) * | 2011-02-02 | 2016-09-28 | 日産自動車株式会社 | Parking assistance device |
US20120287277A1 (en) * | 2011-05-13 | 2012-11-15 | Koehrsen Craig L | Machine display system |
CN104903946B (en) * | 2013-01-09 | 2016-09-28 | 三菱电机株式会社 | Vehicle surrounding display device |
JP2013255237A (en) * | 2013-07-08 | 2013-12-19 | Alpine Electronics Inc | Image display device and image display method |
JP5901593B2 (en) * | 2013-09-11 | 2016-04-13 | 本田技研工業株式会社 | Vehicle display device |
US20150138099A1 (en) * | 2013-11-15 | 2015-05-21 | Marc Robert Major | Systems, Apparatus, and Methods for Motion Controlled Virtual Environment Interaction |
US9639968B2 (en) * | 2014-02-18 | 2017-05-02 | Harman International Industries, Inc. | Generating an augmented view of a location of interest |
2016
- 2016-04-14 EP EP16898634.7A patent/EP3444145A4/en not_active Ceased
- 2016-04-14 RU RU2018139676A patent/RU2715876C1/en active
- 2016-04-14 US US16/092,334 patent/US10864856B2/en active Active
- 2016-04-14 BR BR112018071020-2A patent/BR112018071020B1/en active IP Right Grant
- 2016-04-14 CA CA3020813A patent/CA3020813C/en active Active
- 2016-04-14 MX MX2018012119A patent/MX367700B/en active IP Right Grant
- 2016-04-14 CN CN201680084502.4A patent/CN109070799B/en active Active
- 2016-04-14 KR KR1020187030092A patent/KR102023863B1/en active IP Right Grant
- 2016-04-14 JP JP2018511838A patent/JP6555413B2/en active Active
- 2016-04-14 WO PCT/JP2016/062014 patent/WO2017179174A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN109070799B (en) | 2021-02-26 |
JPWO2017179174A1 (en) | 2019-04-04 |
CA3020813A1 (en) | 2017-10-19 |
MX367700B (en) | 2019-09-03 |
US20190337455A1 (en) | 2019-11-07 |
BR112018071020A2 (en) | 2019-02-12 |
JP6555413B2 (en) | 2019-08-14 |
EP3444145A4 (en) | 2019-08-14 |
KR102023863B1 (en) | 2019-09-20 |
US10864856B2 (en) | 2020-12-15 |
WO2017179174A1 (en) | 2017-10-19 |
MX2018012119A (en) | 2019-02-11 |
CA3020813C (en) | 2019-05-14 |
KR20180123553A (en) | 2018-11-16 |
RU2715876C1 (en) | 2020-03-03 |
CN109070799A (en) | 2018-12-21 |
BR112018071020B1 (en) | 2022-06-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11535155B2 (en) | Superimposed-image display device and computer program | |
CN109515434B (en) | Vehicle control device, vehicle control method, and storage medium | |
US9589194B2 (en) | Driving assistance device and image processing program | |
CA3002628C (en) | Display control method and display control device | |
US9469248B2 (en) | System and method for providing situational awareness in a vehicle | |
EP3487172A1 (en) | Image generation device, image generation method, and program | |
CN111595357B (en) | Visual interface display method and device, electronic equipment and storage medium | |
CN104691447A (en) | System and method for dynamically focusing vehicle sensors | |
US20190244515A1 (en) | Augmented reality dsrc data visualization | |
CN108604413B (en) | Display device control method and display device | |
JP7011559B2 (en) | Display devices, display control methods, and programs | |
US11104356B2 (en) | Display device and method for a vehicle | |
CN109927629B (en) | Display control apparatus, display control method, and vehicle for controlling projection apparatus | |
CN111936364B (en) | Parking assist device | |
CA3020813C (en) | Mobile body surroundings display method and mobile body surroundings display apparatus | |
US11545035B2 (en) | Driver notification system | |
JP6102509B2 (en) | Vehicle display device | |
JP2021149319A (en) | Display control device, display control method, and program | |
EP3544293B1 (en) | Image processing device, imaging device, and display system | |
CN115503745A (en) | Display device for vehicle, display system for vehicle, display method for vehicle, and non-transitory storage medium storing program | |
JP6824809B2 (en) | Driving support device, imaging system, vehicle, and driving support system | |
JP3222638U (en) | Safe driving support device | |
JP7427556B2 (en) | Operation control device, operation control method and program | |
JP6989418B2 (en) | In-vehicle system | |
JP7259377B2 (en) | VEHICLE DISPLAY DEVICE, VEHICLE, DISPLAY METHOD AND PROGRAM |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20181109 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
DAV | Request for validation of the european patent (deleted) | ||
DAX | Request for extension of the european patent (deleted) | ||
A4 | Supplementary search report drawn up and despatched |
Effective date: 20190712 |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: G08G 1/16 20060101ALI20190708BHEP Ipc: H04N 7/18 20060101ALI20190708BHEP Ipc: B60R 1/00 20060101AFI20190708BHEP |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
17Q | First examination report despatched |
Effective date: 20200423 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R003 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED |
|
18R | Application refused |
Effective date: 20210504 |