JP2009184554A - Safe travel assisting system

Info

Publication number: JP2009184554A
Application number: JP2008027602A
Authority: JP (Japan)
Legal status: Withdrawn
Original language: Japanese (ja)
Inventor: Yuji Takasaki (高崎 裕史)
Original assignee: Denso Corp (株式会社デンソー)

Abstract

PROBLEM TO BE SOLVED: In a technology that notifies the driver of a vehicle about areas that are difficult for the driver to see, to provide the notification at the timing the driver actually needs it.
SOLUTION: The safe travel assisting system identifies the driver's blind-spot ranges 47 and 48 when the vehicle 20 is started or the like, and sets a warning range 60 according to the driver's designation so that the identified blind spots are completely included. The system then displays the image captured by an on-vehicle camera on a head-up display on the condition that another vehicle is present within the set warning range 60.
COPYRIGHT: (C)2009, JPO&INPIT

Description

  The present invention relates to a safe driving support system for supporting safe driving of a vehicle.

Conventionally, safety confirmation when changing lanes or when traveling straight ahead has been performed by the driver's own visual observation and by using the rearview mirror and side mirrors. However, sufficient safety confirmation may not be possible depending on the vehicle speed and on the driver's judgment, concentration, and mental state. Even when a mirror is used, it is difficult for the driver to visually confirm blind-spot areas. As a technique for avoiding this problem, there is a technique for informing the driver of information, such as an image, about areas that are in the driver's blind spot (see, for example, Patent Documents 1 to 9).
Patent Document 1: JP-A-5-24492
Patent Document 2: JP-A-6-258426
Patent Document 3: JP-A-8-2356
Patent Document 4: JP-A-10-250508
Patent Document 5: JP-A-2001-315601
Patent Document 6: JP-A-2002-204446
Patent Document 7: JP-A-2005-171904
Patent Document 8: JP-A-2006-248374
Patent Document 9: JP-A-2007-164328

However, in the conventional technology described above, the driver is informed of blind-spot information even at times when the driver does not necessarily feel it is needed, which may cause the driver to feel annoyed.

SUMMARY OF THE INVENTION: In view of the above points, an object of the present invention is, in a technique for notifying a driver of information about areas that are difficult to see from the driver of a vehicle, to provide the notification at the timing the driver needs it.

In order to achieve the above object, the invention according to claim 1 is a safe driving support system including cameras (2a, 2b, 2c) that capture the side and rear of a vehicle (20). The system sets a warning range according to a designation by the driver (40), and displays the image captured by the cameras (2a, 2b, 2c) to the driver (40) on the condition that a moving obstacle (63) is present within the set warning range.

As described above, since the images captured by the cameras (2a, 2b, 2c) are displayed on the condition that a moving obstacle (63) is present within the warning range set according to the designation of the driver (40), a notification can be performed that directly reflects the warning range that differs for each driver (40) (that is, the range about which the driver (40) wants to be warned). In other words, information can be provided by an image at the timing the driver (40) needs it.

Further, as described in claim 2, the safe driving support system may prohibit setting a warning range that protrudes beyond the shooting range of the cameras (2a, 2b, 2c). In this way, when an image is displayed because a moving obstacle (63) is present in the warning range, the inconvenience of the moving obstacle (63) not appearing in the displayed image does not occur.

According to a third aspect of the present invention, the safe driving support system may identify, outside the vehicle (20), a visually difficult range corresponding to locations that are difficult for the driver (40) of the vehicle (20) to see. In this case, the safe driving support system may prohibit setting a warning range that does not completely include the identified visually difficult range.

As a result, it is possible to prevent a situation in which the driver (40) carelessly excludes part (or all) of the visually difficult range from the warning range and, as a consequence, the image display is not performed even when a moving obstacle (63) enters the visually difficult range.

According to a fourth aspect of the present invention, the safe driving support system may display the captured image to the driver (40) when the vehicle (20) changes lanes, on the condition that a moving obstacle (63) is present in the warning range. In this case, the safe driving support system may identify, as the visually difficult range, the area that the driver (40) cannot see even by using the mirrors (31, 34, 37) of the vehicle (20).

This is because, when changing lanes, the driver (40) is likely to check the side and rear by looking at the mirrors (31, 34, 37), so in such a case it is not always necessary to include the area visible through the mirrors (31, 34, 37) in the visually difficult range. Therefore, by doing so, the possibility that information can be displayed at the timing the driver (40) needs it is further improved.

According to a fifth aspect of the present invention, when the vehicle (20) travels straight ahead, the safe driving support system may display the captured image to the driver (40) on the condition that a moving obstacle (63) has entered the warning range. In this case, the safe driving support system may identify, as the visually difficult range, both the area that the driver (40) cannot see even by using the mirrors (31, 34, 37) of the vehicle (20) and the area that the driver (40) can see only by using the mirrors (31, 34, 37).

When traveling straight ahead, unlike when changing lanes, the driver (40) is unlikely to look at the mirrors (31, 34, 37) to check the side and rear. Therefore, in such a case, by also treating the area that can be seen only through the mirrors (31, 34, 37) as a visually difficult range as described above, the possibility that information can be displayed at the timing the driver (40) needs it is further improved.

According to a sixth aspect of the present invention, the safe driving support system may identify the visually difficult range based on the eye positions of the driver (40) and the orientations of the mirrors (31, 34, 37). In this way, an appropriate visually difficult range can be identified in accordance with the physical characteristics of the individual driver (40), so the possibility that information can be displayed at the timing the driver (40) needs it is further improved.

Further, according to a seventh aspect, the safe driving support system may control the traveling of the vehicle (20) so that the vehicle (20) moves to a safe position, on the condition that the moving obstacle (63) has entered a range narrower than the warning range.

In this way, even in an emergency in which the driver (40) sees the display but cannot operate the vehicle (20) in time, the vehicle (20) can be automatically moved to a safe position.

The reference numerals in parentheses above and in the claims indicate the correspondence between the terms recited in the claims and the concrete elements described in the embodiment below that exemplify those terms.

Hereinafter, an embodiment of the present invention will be described. FIG. 1 shows the configuration of an in-vehicle safe driving support system according to the present embodiment. This safe driving support system includes a vehicle navigation device 1, in-vehicle cameras 2a to 2c, a head-up display 3, radar sensors 4, a danger avoidance automatic control device 5, a blinker 6, and a safe driving support unit 7.

  The vehicle navigation apparatus 1 includes a position detector 11, a monitor 12, an operation unit 13, a map data acquisition unit 14, and a control circuit 17.

The position detector 11 includes well-known sensors (not shown) such as a geomagnetic sensor, a gyroscope, a vehicle speed sensor, and a GPS receiver, and outputs to the control circuit 17 information for specifying the current position, heading, and speed of the vehicle based on the characteristics of each of these sensors. The GPS receiver can output current position information with high accuracy (an error within 1 meter) using DGPS, interferometric positioning GPS, or the like.

The monitor 12 displays video for the driver based on the video signal output from the control circuit 17. The operation unit 13 includes a plurality of mechanical switches provided on the vehicle navigation device 1 and an input device such as a touch panel provided so as to overlap the display surface of the monitor 12, and outputs signals to the control circuit 17 based on the driver pressing the mechanical switches or touching the touch panel.

The map data acquisition unit 14 includes a nonvolatile storage medium such as a DVD, a CD, or an HDD, and a device that reads data from (and, if possible, writes data to) the storage medium. The storage medium stores programs executed by the control circuit 17, map data for route guidance, and the like.

The map data includes road data and facility data. The road data includes link position information, link type information, the number of lanes in each link, position information of the lanes in each link, node position information and type information, information on the connection relationship between nodes and links, and the like. The facility data has a plurality of records, one for each facility, and each record contains data indicating the name, location, lot number, facility type, and the like of the corresponding facility.
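
To make the map data layout described above concrete, the sketch below models it with Python dataclasses. The class and field names (Link, Node, FacilityRecord, lane_count, and so on) are illustrative assumptions, not identifiers from the patent or from any navigation library.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Node:
    node_id: int
    position: Tuple[float, float]              # node position information (lat, lon)
    node_type: str                             # node type information
    connected_links: List[int] = field(default_factory=list)  # node/link connections

@dataclass
class Link:
    link_id: int
    shape: List[Tuple[float, float]]           # link position information (polyline)
    link_type: str                             # link type information
    lane_count: int                            # number of lanes in the link
    lane_centerlines: List[List[Tuple[float, float]]] = field(default_factory=list)
    start_node: int = 0                        # connection relationship to nodes
    end_node: int = 0

@dataclass
class FacilityRecord:
    name: str                                  # facility name information
    position: Tuple[float, float]              # facility location information
    lot_number: str                            # land lot number information
    facility_type: str                         # facility type information
```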

The control circuit 17 (corresponding to a computer) is a microcomputer having a CPU, RAM, ROM, I/O, and the like. The CPU executes programs for the operation of the vehicle navigation device 1 read from the ROM or the map data acquisition unit 14. When executing a program, it reads information from the RAM, the ROM, and the map data acquisition unit 14, writes information to the RAM (and, if possible, to the storage medium of the map data acquisition unit 14), and exchanges signals with the position detector 11, the monitor 12, the operation unit 13, the cameras 2a to 2c, the head-up display 3, the radar sensors 4, the danger avoidance automatic control device 5, the blinker 6, and the safe driving support unit 7.

  Specific processes performed by the control circuit 17 executing the program include a current position specifying process, a navigation process, a lane change support process, and a straight travel support process.

The current position specifying process is a process for specifying, based on the signal from the position detector 11 and using a known technique such as map matching, the current position of the vehicle, the link on which the vehicle is traveling, the lane in which the vehicle is traveling, the heading of the vehicle, the vehicle speed, and the like.

The navigation process is a process for calculating an optimum route to a destination that the driver inputs using the operation unit 13, and for guiding the vehicle along the route by voice guidance using a speaker (not shown) and by enlarged map display and the like using the monitor 12.

  The lane change support process and the straight travel support process are processes for notifying the driver as necessary when changing the lane of the vehicle and when traveling straight. Details of the lane change support process and the straight travel support process will be described later.

The in-vehicle cameras 2a to 2c are devices that photograph predetermined shooting ranges on the side and rear of the vehicle and output the captured images to the control circuit 17. Specifically, as shown in FIG. 2, the right-side camera 2a is attached to the right side surface of the host vehicle 20 and captures the area to the right and right rear of the host vehicle 20 (more specifically, the area sandwiched between the lines 21 and 22). The left-side camera 2b is attached to the left side surface of the host vehicle 20 and captures the area to the left and left rear of the host vehicle 20 (more specifically, the area sandwiched between the lines 23 and 24). The rear camera 2c is attached to the rear of the host vehicle 20 and captures the area behind the host vehicle 20 (more specifically, the area sandwiched between the lines 25 and 26).

The shooting range of the right-side camera 2a covers most of the area that the driver of the vehicle 20 can see through the right door mirror 37 (specifically, the area sandwiched between the lines 38 and 39), and also covers an area that cannot be seen using any of the rearview mirror 31, the left door mirror 34, and the right door mirror 37.

The shooting range of the left-side camera 2b covers most of the area that the driver can see through the left door mirror 34 (specifically, the area sandwiched between the lines 35 and 36), and also covers an area that cannot be seen using any of the rearview mirror 31, the left door mirror 34, and the right door mirror 37.

The shooting range of the rear camera 2c covers most of the area that the driver can see through the rearview mirror 31 (specifically, the area sandwiched between the lines 32 and 33), and also covers an area that cannot be seen using any of the rearview mirror 31, the left door mirror 34, and the right door mirror 37.

  The head-up display 3 is a device that projects an image on the windshield portion of the vehicle based on the control of the control circuit 17.

The radar sensors 4 are attached to the front, right side, left side, and rear of the vehicle. By irradiating the surroundings with a laser beam and receiving the reflected wave, each radar sensor 4 detects objects around the vehicle 20 (pedestrians, other vehicles, and the like), together with their direction as viewed from the host vehicle 20, their distance from the host vehicle 20, and their relative speed with respect to the host vehicle 20. The detection range of each radar sensor 4 is within about 100 meters of the sensor.

The danger avoidance automatic control device 5 is a control device such as a microcomputer, and controls the traveling behavior of the vehicle by controlling the steering, accelerator, brake, and the like of the vehicle (not shown) according to the positional relationship between the host vehicle 20 and the vehicles around it. Details of the operation of the danger avoidance automatic control device 5 will be described later.

The blinker 6 is a device that blinks the right lamp for right-turn indication or the left lamp for left-turn indication of the vehicle 20 according to the driver's operation, and a signal indicating the operating state of the blinker 6 is output to the control circuit 17.

  The safe driving support unit 7 includes a control unit 71, an eye point measurement unit 72, and a mirror angle measurement unit 73.

As shown in FIGS. 3 and 4, the eye point measurement unit 72 is attached at the position of the rearview mirror 31 of the vehicle 20, and is a device for specifying the three-dimensional positions X, Y, Z1, and Z2 of the right eye 41 and the left eye 42 of the driver 40 of the vehicle 20 relative to the eye point measurement unit 72.

Specifically, the eye point measurement unit 72 includes a camera that captures the face of the driver 40, a light emitting element that irradiates the camera's imaging range with light, a light receiving element that receives the reflected wave of the light emitted by the light emitting element, and a control unit.

The eye point measurement unit 72 irradiates the face of the driver 40 with the light emitting element and receives the reflection with the light receiving element while slightly vibrating the light emitting element and the light receiving element in the traveling direction of the light, and calculates the distance X from the eye point measurement unit 72 to the eyes 41 and 42 based on the variation of the received light and its intensity caused by the minute vibration. The eye point measurement unit 72 also calculates the height difference Y of the eyes 41 and 42 relative to the eye point measurement unit 72 and the lateral position differences Z1 and Z2 based on the identified distance X and the positions of the eyes 41 and 42 in the image captured by the camera. Details of the calculation of the positions of the eyes 41 and 42 of the driver 40 by the eye point measurement unit 72 are described in, for example, Japanese Patent Application Laid-Open No. 2001-21315.

The coordinates of the attachment position of the eye point measurement unit 72 in the vehicle 20 are predetermined. Therefore, the positions of the eyes 41 and 42 of the driver 40 relative to the vehicle 20 can be specified from the three-dimensional positions X, Y, Z1, and Z2.
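
The patent defers the measurement details to JP-A-2001-21315, which is not reproduced here. As a rough illustration only, the sketch below assumes a simple pinhole-camera model: given the measured distance X and the pixel coordinates of an eye in the face image, it estimates the offsets Y and Z and converts them to vehicle coordinates using the known mounting position of the eye point measurement unit 72. The model, the names, and the axis conventions are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class EyePointUnitMount:
    """Known mounting position of the eye point measurement unit 72 in vehicle coordinates (assumed)."""
    x: float   # forward
    y: float   # left
    z: float   # up

def eye_offsets_from_pixels(distance_x, eye_px, eye_py, cx, cy, focal_px):
    """Pinhole-model estimate (an assumption, not the method of JP-A-2001-21315):
    height difference Y and lateral difference Z from pixel coordinates and measured distance X."""
    z = distance_x * (eye_px - cx) / focal_px   # lateral difference (Z1 or Z2)
    y = distance_x * (eye_py - cy) / focal_px   # height difference (Y)
    return y, z

def eye_position_in_vehicle(mount: EyePointUnitMount, distance_x, y_offset, z_offset):
    """Convert unit-relative offsets to vehicle coordinates, assuming the unit faces straight
    back along the vehicle's longitudinal axis; signs depend on the chosen conventions."""
    return (mount.x - distance_x, mount.y + z_offset, mount.z - y_offset)
```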

The mirror angle measurement unit 73 is a device that detects the orientation of each of the rearview mirror 31, the left door mirror 34, and the right door mirror 37 and outputs the detection result to the control circuit 17 (for example, a rotary encoder that detects the rotation angle of the rotating part that changes each mirror's angle).

The control unit 71 (corresponding to a computer) is a microcomputer having a CPU, RAM, ROM, I/O, and the like. The CPU executes a program for the operation of the safe driving support unit 7 read from the ROM; it reads information from the RAM and ROM, writes information to the RAM, and exchanges signals with the eye point measurement unit 72, the mirror angle measurement unit 73, the vehicle navigation device 1, and the like.

A specific process performed by the control unit 71 executing a program is a warning range setting process. FIG. 5 shows a flowchart of the program 100 that the control unit 71 executes for the warning range setting process. When the driver 40 inputs to the operation unit 13 of the vehicle navigation device 1 that the safe driving support system is to be used, the control circuit 17 transmits a warning range setting process execution command to the control unit 71, and the control unit 71 starts executing the program 100 upon receiving the execution command. Alternatively, the control unit 71 may start executing the program 100 when the vehicle is started (for example, when the ignition switch is turned on).

In the execution of the program 100, the control unit 71 first acquires, in step 105, relative position information of the eye positions of the driver 40 with respect to the eye point measurement unit 72 from the eye point measurement unit 72, and then specifies the three-dimensional position coordinates of the eye positions of the driver 40 in the vehicle 20 based on the preset position coordinate information of the eye point measurement unit 72 in the vehicle 20.

  Subsequently, in step 110, angle information indicating the orientation of each mirror 31, 34, 37 is acquired from the mirror angle measurement unit 73.

Subsequently, in step 115, visually difficult ranges (that is, ranges of positions that are difficult to confirm) are determined based on the information on the eye positions of the driver 40 and the orientations of the mirrors 31, 34, and 37 acquired in steps 105 and 110. Two types of visually difficult ranges are determined here: a visually difficult range for lane changes and a visually difficult range for straight travel.

The visually difficult range for lane changes is the area shown as areas 47 and 48 in FIG. 6. In other words, it is the blind-spot area that the driver 40 cannot confirm visually with the face facing forward, cannot confirm through any of the rearview mirror 31, the left door mirror 34, and the right door mirror 37, and cannot confirm even by turning the face sideways (that is, to the left or right).

Alternatively, the visually difficult range for lane changes may be the blind-spot area that the driver 40 cannot confirm with the face facing forward and cannot confirm through any of the rearview mirror 31, the left door mirror 34, and the right door mirror 37.

The visually difficult range for straight travel is the area shown as areas 44, 45, 46, 47, and 48 in FIG. 6. That is, it is an area that includes both the area that the driver 40 can see only through one of the rearview mirror 31, the left door mirror 34, and the right door mirror 37 when the face is facing forward (that is, areas 44 to 46 in FIG. 6) and the above-described visually difficult range for lane changes.

The road surface area that the driver 40 can see directly with the face facing forward, without looking through the mirrors 31, 34, and 37 (hereinafter referred to as area A), may be calculated using information such as the shape and size of the window glass of the vehicle 20, the shape and size of the vehicle body, and the size and shape of each of the mirrors 31, 34, and 37 (hereinafter referred to as vehicle body structure information) together with the identified eye position information of the driver 40, when the vehicle body structure information is recorded in the ROM of the control unit 71. Alternatively, when data on the correspondence between the eye positions of the driver 40 and the specific extent of area A is recorded in the ROM of the control unit 71 in advance, area A may be specified by applying the identified eye position information of the driver 40 to the correspondence data.

Similarly, the road surface area that the driver 40 can confirm through the rearview mirror 31, the left door mirror 34, and the right door mirror 37 (hereinafter referred to as area B) may be calculated using the vehicle body structure information, the identified eye position information of the driver 40, and the identified mirror orientation information, when the vehicle body structure information is recorded in the ROM of the control unit 71. Alternatively, when data on the correspondence between combinations of the eye positions of the driver 40 and the orientation of each mirror and the specific extent of area B is recorded in the ROM of the control unit 71 in advance, area B may be specified by applying the identified eye position information of the driver 40 and the orientation of each mirror to the correspondence data.

The road surface area that the driver 40 can confirm by turning the face sideways, without looking through the mirrors 31, 34, and 37 (hereinafter referred to as area C), may be calculated using the vehicle body structure information and the identified eye position information of the driver 40, when the vehicle body structure information is recorded in the ROM of the control unit 71. In this case, the angle by which the driver 40 turns the face sideways is set to a predetermined value (for example, 90°). Alternatively, when data on the correspondence between the eye positions of the driver 40 and the specific extent of area C is recorded in the ROM of the control unit 71 in advance, area C may be specified by applying the identified eye position information of the driver 40 to the correspondence data.

The road surface area that the driver 40 can confirm through the rearview mirror 31, the left door mirror 34, and the right door mirror 37 with the face facing forward (hereinafter referred to as area D) may be calculated using the vehicle body structure information, the identified eye position information of the driver 40, and the identified mirror orientation information, when the vehicle body structure information is recorded in the ROM of the control unit 71. Alternatively, when data on the correspondence between combinations of the eye positions of the driver 40 and the orientation of each mirror and the specific extent of area D is recorded in the ROM of the control unit 71 in advance, area D may be specified by applying the identified eye position information of the driver 40 and the orientation of each mirror to the correspondence data.
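
The determination in step 115 can be viewed as set operations on the areas A to D above. A minimal sketch using the shapely geometry library is given below; the polygons, the "surveillance" region bounding the computation, and the choice of library are assumptions for illustration only.

```python
from shapely.geometry import Polygon
from shapely.ops import unary_union

def visually_difficult_ranges(surveillance: Polygon,
                              area_a: Polygon,   # directly visible, face forward
                              area_b: Polygon,   # visible through the mirrors
                              area_c: Polygon,   # visible with the face turned sideways
                              area_d: Polygon):  # visible through the mirrors, face forward
    """Return (lane_change_range, straight_travel_range) as set differences/unions."""
    # Lane changes: not visible facing forward, nor via mirrors, nor by turning sideways.
    visible_when_changing_lanes = unary_union([area_a, area_b, area_c])
    lane_change = surveillance.difference(visible_when_changing_lanes)   # areas 47, 48

    # Straight travel: add the part seen only through the mirrors with the face forward.
    mirror_only = area_d.difference(area_a)                              # areas 44-46
    straight_travel = unary_union([lane_change, mirror_only])
    return lane_change, straight_travel
```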

Subsequently, in step 120, the driver 40 is asked whether to activate the lane change support process (that is, whether to enable its execution). Specifically, an inquiry message asking whether to activate the lane change support process is output to the control circuit 17. The control circuit 17 then displays this inquiry message on the monitor 12 or outputs it through a speaker (not shown). When the control circuit 17 receives a positive or negative response to the inquiry via the operation unit 13, it transmits the content of the response to the control unit 71. The control unit 71 then executes step 125 if the received response is affirmative, and executes step 135 if it is negative.

In step 125, a lane change warning range is determined. This lane change warning range is used as a criterion for determining whether or not to issue a warning in the lane change support process described later. A specific method for determining the lane change warning range is as follows. A predetermined enlargement process is applied to the visually difficult range for lane changes specified in step 115, and the resulting range is set as the default lane change warning range. One example of the enlargement process is to double the area of the visually difficult range for lane changes without changing its center position.

The range resulting from the enlargement process is a range that completely includes the visually difficult range for lane changes and is completely contained in the shooting range of the in-vehicle cameras 2a to 2c. The information on the shooting ranges of the in-vehicle cameras 2a to 2c is recorded in the ROM of the control unit 71 in advance.

  Alternatively, the visually difficult range for lane change itself may be the default lane change warning range.
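
A sketch of the default-range computation described above, again using shapely: the visually difficult range for lane changes is scaled about its centroid by the square root of 2 (doubling its area) and then intersected with the combined camera coverage. The intersection step and the fallback are assumptions about how the two constraints could be reconciled.

```python
import math
from shapely.affinity import scale
from shapely.geometry import Polygon

def default_lane_change_warning_range(difficult: Polygon, camera_coverage: Polygon) -> Polygon:
    """Enlarge the visually difficult range to twice its area about its centroid,
    then keep only the part inside the combined camera shooting range."""
    factor = math.sqrt(2.0)  # doubling the area without moving the center
    enlarged = scale(difficult, xfact=factor, yfact=factor, origin='centroid')
    candidate = enlarged.intersection(camera_coverage)
    # The result must still fully contain the difficult range; otherwise fall back to
    # using the difficult range itself as the default (the alternative mentioned above).
    return candidate if candidate.covers(difficult) else difficult
```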

Further, in step 125, the control unit 71 changes the default lane change warning range according to the driver's operation of the operation unit 13. FIG. 7 shows an example of a lane change warning range that includes the visually difficult ranges 47 and 48 for lane changes. In this example, the dotted line 60 indicates the boundary of the lane change warning range.

  This change process may be realized by an interactive and graphical method in which the image display on the monitor 12 and the operation of the touch panel of the operation unit 13 are linked.

For example, the control unit 71 causes the monitor 12, via the control circuit 17, to display the vehicle 20, the visually difficult ranges 47 and 48 for lane changes around the vehicle 20, and the default lane change warning range. When the driver 40 uses the touch panel superimposed on the monitor 12 to trace, on the monitor 12, the boundary of the range to be set as the lane change warning range, the control unit 71 detects the traced boundary via the control circuit 17 and specifies the area inside it as the lane change warning range.

Also, for example, the control unit 71 causes the monitor 12, via the control circuit 17, to display the vehicle 20, the visually difficult ranges 47 and 48 for lane changes around the vehicle 20, and the boundary line of the default lane change warning range. When the driver 40 uses the touch panel superimposed on the monitor 12 to touch a point X on the boundary line of the default lane change warning range on the monitor 12 and moves the finger while maintaining the touch, the control unit 71 detects the operation via the control circuit 17 and changes the lane change warning range so that the position to which the finger was moved becomes the new boundary point replacing the point X.

However, in either method, when the area designated by the driver's touch panel operation does not completely include the visually difficult range for lane changes, the control circuit 17 ignores the designation and maintains the lane change warning range that was in effect immediately before the designation.

Also, if the area designated by the driver's touch panel operation protrudes beyond the shooting range of the in-vehicle cameras 2a to 2c, the control circuit 17 ignores the designation and maintains the lane change warning range that was in effect immediately before the designation.

That is, the control circuit 17 prohibits setting, as the lane change warning range, a range that does not completely include the visually difficult range for lane changes or a range that protrudes beyond the shooting range of the in-vehicle cameras 2a to 2c.
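
The rejection rules in the preceding paragraphs reduce to a single validity check. A minimal sketch (the arguments are shapely geometries; the function name is assumed):

```python
def apply_driver_designation(designated, current_range, difficult, camera_coverage):
    """Accept the driver-designated warning range only if it completely includes the
    visually difficult range and stays inside the camera shooting range; otherwise
    ignore the designation and keep the range in effect immediately before it."""
    if designated.covers(difficult) and camera_coverage.covers(designated):
        return designated
    return current_range
```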

When the driver 40 performs an operation on the operation unit 13 indicating that the lane change warning range is not to be changed, and the control unit 71 acquires a signal to that effect via the control circuit 17, the default lane change warning range is set as the lane change warning range to be used.

Subsequently, in step 130, the lane change support flag in the RAM of the control unit 71 is turned on, and then step 135 is executed. The lane change support flag is set to off at the start of execution of the program 100.

In step 135, the driver 40 is asked whether to activate the straight travel support process (that is, whether to enable its execution). Specifically, an inquiry message asking whether to activate the straight travel support process is output to the control circuit 17. The control circuit 17 then displays this inquiry message on the monitor 12 or outputs it through a speaker (not shown). When the control circuit 17 receives a positive or negative response to the inquiry via the operation unit 13, it transmits the content of the response to the control unit 71. The control unit 71 then executes step 140 if the received response is affirmative, and ends the execution of the program 100 if it is negative.

In step 140, a straight travel warning range is determined. This straight travel warning range is used as a criterion for determining whether or not to issue a warning in the straight travel support process described later. The specific method of determining the straight travel warning range is the same as the method of setting the lane change warning range in step 125, with the lane change warning range replaced by the straight travel warning range and the visually difficult range for lane changes replaced by the visually difficult range for straight travel. FIG. 8 shows an example of a straight travel warning range that includes the visually difficult ranges 44 to 48 for straight travel. In this example, the dotted line 70 indicates the boundary of the straight travel warning range.

Subsequently, in step 145, the straight travel support flag in the RAM of the control unit 71 is turned on, and then the execution of the program 100 ends. The straight travel support flag is set to off at the start of execution of the program 100.

At the end of the program 100, the control unit 71 outputs to the control circuit 17 the value of the lane change support flag, the value of the straight travel support flag, the visually difficult range for lane changes, and the visually difficult range for straight travel, and the control circuit 17 records the output information in its RAM.

By executing the program 100 as described above, the control unit 71 calculates the visually difficult range for lane changes and the visually difficult range for straight travel based on information on the eye positions of the driver 40 (see step 105) and the orientations of the mirrors 31, 34, and 37 (see step 110) (see step 115), and determines, automatically or in response to the driver's operation, a lane change warning range and a straight travel warning range as ranges that completely include the corresponding visually difficult range and are completely contained in the shooting range of the in-vehicle cameras 2a to 2c (see steps 125 and 140).

The control circuit 17 executes the lane change support process only when the value of the lane change support flag in the RAM is on, and executes the straight travel support process only when the value of the straight travel support flag in the RAM is on. Therefore, the driver 40 can independently choose not to execute each of the lane change support process and the straight travel support process (see steps 120, 130, 135, and 145 in FIG. 5). Since only the support processes the driver 40 actually needs are executed, the driver 40 is less likely to find the notifications annoying.

In addition, the control unit 71 sets the lane change warning range and the straight travel warning range only when the driver 40 has chosen to execute the corresponding lane change support process or straight travel support process. Therefore, no settings need to be made for a process that will not be executed, and the workload on the driver 40 is reduced.

The control circuit 17 executes the program 200 shown in FIG. 9 for the lane change support process, and executes the program 300 shown in FIG. 10 for the straight travel support process. Details of the lane change support process and the straight travel support process are described below.

In the execution of the program 200, the control circuit 17 first acquires information on the current position based on the signal from the position detector 11 in step 210. Subsequently, in step 220, it specifies the road and lane to which the acquired current position belongs based on the map data. The position information acquired in step 210 has an accuracy sufficient to specify the lane in which the vehicle 20 is traveling (for example, an error of 1 meter or less).

Subsequently, in step 230, it is determined whether the road on which the vehicle 20 is traveling has two or more lanes in each direction. If it has two or more lanes in each direction, step 240 is subsequently executed; if it has fewer than two lanes in each direction, step 210 is executed again.

In step 240, it is determined whether the vehicle 20 is about to change lanes based on the signal from the blinker 6 and the lane in which the vehicle 20 is traveling. Specifically, it is determined that the vehicle 20 is about to change lanes only when the vehicle 20 is traveling in a lane other than the rightmost lane and the right lamp of the blinker 6 is blinking, or when the vehicle 20 is traveling in a lane other than the leftmost lane and the left lamp of the blinker 6 is blinking. In this way, the possibility of misinterpreting blinking of the blinker 6 for a right or left turn as blinking for a lane change is reduced. If the determination result of step 240 is affirmative, step 250 is subsequently executed; if it is negative, step 210 is executed again.
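
A sketch of the step 240 decision; the lane-index encoding and function name are assumptions:

```python
def is_lane_change_intended(lane_index: int, lane_count: int,
                            right_blinking: bool, left_blinking: bool) -> bool:
    """Step 240 (sketch). Lanes are numbered 0 (leftmost) to lane_count - 1 (rightmost).
    A blinker only counts as a lane-change signal when there is a lane to move into on
    that side, which filters out blinking intended for a right or left turn."""
    right_change = right_blinking and lane_index < lane_count - 1
    left_change = left_blinking and lane_index > 0
    return right_change or left_change
```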

In step 250, the radar sensors 4 are activated (if they have not been activated), and in step 260, it is determined based on the signals from the radar sensors 4 whether a moving obstacle is present in the lane change warning range (see FIG. 7). Whether an obstacle (a vehicle, a pedestrian, or the like) is a moving obstacle is determined by calculating its speed relative to the road surface from the traveling speed of the host vehicle 20 and the relative speed of the obstacle with respect to the host vehicle 20; if that speed is not zero, the obstacle is determined to be a moving obstacle. If the determination result of step 260 is affirmative, step 270 is subsequently executed; if it is negative, step 250 is executed again.
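
Step 260 combines the radar returns with the host vehicle speed. A sketch, with an assumed radar record format and the simplification that targets move parallel to the host vehicle's direction of travel:

```python
from dataclasses import dataclass
from shapely.geometry import Point

@dataclass
class RadarTarget:
    """Assumed shape of one detection from a radar sensor 4."""
    x: float               # position relative to the host vehicle 20 [m]
    y: float
    relative_speed: float  # speed relative to the host vehicle 20 [m/s]

def moving_obstacle_in_warning_range(targets, own_speed, warning_range, eps=0.1):
    """Step 260 (sketch): an obstacle is 'moving' when its speed over the road surface,
    reconstructed from the host speed and the radar relative speed, is non-zero
    (here: above a small tolerance) and it lies inside the warning range."""
    for t in targets:
        ground_speed = own_speed + t.relative_speed   # assumes motion along the host's axis
        if abs(ground_speed) > eps and warning_range.covers(Point(t.x, t.y)):
            return True
    return False
```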

In step 270, the images acquired by the in-vehicle cameras 2a to 2c are displayed on the head-up display 3 in real time, and a voice warning is issued using a speaker (not shown). The voice warning notifies the driver that there is an obstacle at a position that is difficult for the driver 40 to see.

Here, real-time display means that the delay of the image display timing relative to the image capture timing of the in-vehicle cameras 2a to 2c is minimized, and that captured images are repeatedly displayed at the same time interval as the repeated capture interval of the in-vehicle cameras 2a to 2c.

FIG. 11 schematically shows a method for generating the image to be displayed on the head-up display 3 from the images captured by the in-vehicle cameras 2a to 2c. First, the control circuit 17 joins together the three images 51a to 51c captured by the in-vehicle cameras 2a to 2c at the same timing.

The joining arrangement is such that the combined image matches the scenery that the driver 40 would see when looking back. Specifically, the image 51a captured by the right-side camera 2a is arranged on the left, the image 51b captured by the left-side camera 2b is arranged on the right, and the image 51c captured by the rear camera 2c is arranged in the center, and the three are joined together.

The joints of the images 51a to 51c are adjusted so that the same object is not displayed in multiple places and no object goes undisplayed, that is, so that the driver 40 does not feel a sense of incongruity. Specifically, using the information on the shooting ranges of the in-vehicle cameras 2a to 2c recorded in advance in the ROM of the control circuit 17, the positions at which the same subject appears in each image are identified, and the images 51a to 51c are joined using those positions as the seams.

In addition, the control circuit 17 may correct distortion introduced when the images 51a to 51c were captured at the time of this joining. Alternatively, whether to correct the distortion may be determined based on a selection operation by the driver 40 on the operation unit 13.

The control circuit 17 further performs an inversion process on the images 51a to 51c combined by the joining so that their orientation is the same as when the driver 40 looks rearward through the rearview mirror 31, generates the display images 52a to 52c, and displays them on the head-up display 3.

In this inversion process, the control circuit 17 arranges, on the right, an image 52a obtained by horizontally flipping the right-side camera image 51a of the combined images 51a to 51c, arranges, on the left, an image 52b obtained by horizontally flipping the left-side camera image 51b, arranges, in the center, an image 52c obtained by horizontally flipping the rear camera image 51c, and joins the images 52a to 52c in that arrangement. In this way, the display images 52a to 52c are generated.
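
A sketch of the joining and inversion described above, with NumPy arrays standing in for camera frames of equal height; the seam adjustment based on the ROM-stored shooting ranges is omitted:

```python
import numpy as np

def compose_head_up_display_image(img_right_cam, img_rear_cam, img_left_cam):
    """Generate the display images 52a-52c (sketch). Each camera frame is flipped
    horizontally and the left/right positions are swapped relative to the joined
    panorama of FIG. 11, which is equivalent to horizontally flipping the whole
    panorama and matches the orientation seen in the rearview mirror 31."""
    img_52a = np.fliplr(img_right_cam)   # right-side camera, flipped, placed on the right
    img_52b = np.fliplr(img_left_cam)    # left-side camera, flipped, placed on the left
    img_52c = np.fliplr(img_rear_cam)    # rear camera, flipped, placed in the center
    return np.hstack([img_52b, img_52c, img_52a])
```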

When the driver 40 views the display images 52a to 52c projected onto the front window by the head-up display 3 in this way, the driver 40 can easily and intuitively grasp whether an obstacle is on the left or right side.

Subsequently, in step 280, it is determined whether the moving obstacle detected in step 260 has entered the danger range. FIG. 12 schematically shows a case where another vehicle 63 enters the danger range whose boundary is the dotted line 62. The danger range is defined in advance in the ROM of the control circuit 17 and is, for example, the area within a circle having a radius of 5 meters centered on the rear end of the vehicle 20. If the determination result of step 280 is affirmative, step 290 is subsequently executed; if it is negative, step 270 is executed again.
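
The step 280 check reduces to a distance test. A sketch with the 5-meter example value; the rear-end offset is a placeholder, not a value from the patent:

```python
import math

def in_danger_range(obstacle_x, obstacle_y, rear_end_x=-2.0, rear_end_y=0.0, radius=5.0):
    """Step 280 (sketch): true when the obstacle lies within a circle of the given radius
    centered on the rear end of the vehicle 20. Coordinates are vehicle-relative [m]."""
    return math.hypot(obstacle_x - rear_end_x, obstacle_y - rear_end_y) <= radius
```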

In step 290, the danger avoidance automatic control device 5 is caused to perform automatic control. Specifically, a signal instructing the danger avoidance automatic control device 5 to start operation is output. When requested by the danger avoidance automatic control device 5, the control circuit 17 outputs to it the information from the radar sensors 4 and environmental information around the vehicle 20 (the number of road lanes, the presence or absence of intersections, the lane in which the vehicle 20 is traveling, and so on). After step 290, step 210 is executed again.

Here, the operation of the danger avoidance automatic control device 5 when it receives the signal instructing it to start operation from the control circuit 17 will be described with reference to FIGS. 13, 14, and 15. Based on the information from the radar sensors 4 and the environmental information around the vehicle (the number of road lanes, the presence or absence of intersections, and so on), the danger avoidance automatic control device 5 selects, from acceleration, deceleration, lateral movement, and the like, the automatic control pattern judged to be the safest for the situation of the vehicles around the vehicle 20, and controls the behavior of the vehicle 20 according to that selection.

Specifically, as shown in FIG. 13, when the other vehicle 63 that has entered the danger range is to the right of the current position 65 of the vehicle 20, a following vehicle 64 is immediately behind the current position 65 (that is, within a predetermined distance behind), and there is no other vehicle within a predetermined distance ahead of the current position 65, the vehicle 20 is accelerated so that it escapes forward or slightly forward and to the left (that is, toward the side without the other vehicle 63).

Further, as shown in FIG. 14, when the other vehicle 63 that has entered the danger range is to the right of the current position 65 of the vehicle 20, a preceding vehicle 66 is immediately ahead of the current position 65 (that is, within a predetermined distance ahead), and there is no other vehicle within a predetermined distance behind the current position 65, the vehicle 20 is decelerated so that it escapes rearward or slightly rearward and to the left (that is, toward the side without the other vehicle 63).

Further, as shown in FIG. 15, when the other vehicle 63 that has entered the danger range is to the right of the current position 65 of the vehicle 20, a preceding vehicle 66 is immediately ahead of the current position 65, and a following vehicle 64 is immediately behind the current position 65, the vehicle 20 is decelerated gently (that is, at a deceleration rate lower than that in the case of FIG. 14) so that it eases slightly rearward toward the side without the other vehicle 63.
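
The three patterns of FIGS. 13 to 15 can be summarized as the decision below. The returned labels are illustrative only; the actual device 5 would translate the chosen pattern into steering, accelerator, and brake commands.

```python
def select_avoidance_pattern(vehicle_immediately_behind: bool,
                             vehicle_immediately_ahead: bool) -> str:
    """Assumes the intruding vehicle 63 is beside the host vehicle 20 (FIGS. 13-15)."""
    if vehicle_immediately_behind and not vehicle_immediately_ahead:
        return "accelerate: escape forward, slightly away from the intruder (FIG. 13)"
    if vehicle_immediately_ahead and not vehicle_immediately_behind:
        return "decelerate: escape rearward, slightly away from the intruder (FIG. 14)"
    if vehicle_immediately_ahead and vehicle_immediately_behind:
        return "decelerate gently: ease slightly rearward away from the intruder (FIG. 15)"
    return "no predefined pattern: choose the safest control from the surrounding situation"
```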

By executing the program 200 as described above, the control circuit 17 performs the lane change support process: when the vehicle 20 changes lanes (see step 240) on a road with two or more lanes in each direction (see step 230), it determines based on the signals from the radar sensors 4 whether the other vehicle 63 is within the lane change warning range (see step 260), and while that is the case it repeatedly displays the real-time captured images 52a to 52c from the in-vehicle cameras 2a to 2c on the head-up display 3 (see step 270). However, when the other vehicle 63 enters the danger range (see step 280), the traveling of the vehicle 20 is controlled so as to avoid the danger, and the vehicle 20 is automatically moved to a safe position (see step 290).

Next, the straight travel support process will be described. In the straight travel support process, the control circuit 17 executes the program 300 shown in FIG. 10. In step 310, it first activates the radar sensors 4 (if they have not been activated); then, in step 320, it acquires the current position of the vehicle 20 in the same manner as in step 210 of FIG. 9; and in step 330, it specifies the road and lane to which the acquired current position belongs based on the map data, in the same manner as in step 220.

Subsequently, in step 340, it is determined based on the signals from the radar sensors 4 whether a moving obstacle has entered the straight travel warning range (see FIG. 8). Alternatively, it may be determined whether a moving obstacle has entered the straight travel warning range (see FIG. 8) and is approaching the vehicle 20. If the determination result in step 340 is affirmative, step 350 is subsequently executed; if it is negative, step 310 is executed again.

  The processing contents of steps 350, 360, and 370 are the same as the processing contents of steps 270, 280, and 290, respectively.

By executing the program 300 as described above, in the straight travel support process, the control circuit 17 determines based on the signals from the radar sensors 4 whether the other vehicle 63 is within the straight travel warning range (see step 340), and while that is the case it repeatedly displays the real-time captured images 52a to 52c from the in-vehicle cameras 2a to 2c on the head-up display 3 (see step 350). However, if the other vehicle 63 enters the danger range (see step 360), the traveling of the vehicle 20 is controlled so as to avoid the danger, and the vehicle 20 is automatically moved to a safe position (see step 370).

As described above, the safe driving support system sets the lane change warning range and the straight travel warning range independently according to the designation of the driver 40, for example when the vehicle 20 is started, and displays the images 52a to 52c captured by the in-vehicle cameras 2a to 2c on the head-up display 3 on the condition that a moving obstacle 63 is present within the set warning range.

As described above, since the images 52a to 52c captured by the in-vehicle cameras 2a to 2c are displayed on the condition that another vehicle (or a pedestrian; the same applies hereinafter) 63 is present within the warning range set according to the designation of the driver 40, a danger notification can be performed that directly reflects the warning range that differs for each driver 40 (that is, the range about which the driver 40 wants to be warned). In other words, information can be provided by an image at the timing the driver 40 needs it.

In addition, the safe driving support system prohibits setting a warning range that protrudes beyond the shooting range of the in-vehicle cameras 2a to 2c. In this way, when an image is displayed because the other vehicle 63 is present in the warning range, the problem of the other vehicle 63 not appearing in the displayed image does not occur.

In addition, the safe driving support system identifies a visually difficult range outside the vehicle 20 that is difficult for the driver 40 to see, and prohibits setting a warning range that does not completely include the identified visually difficult range.

As a result, it is possible to prevent a situation in which the driver 40 carelessly excludes part (or all) of the visually difficult range from the warning range and, as a consequence, the image display is not performed even when the moving obstacle 63 enters the visually difficult range.

The safe driving support system displays the captured image on the head-up display 3 when the vehicle 20 changes lanes, on the condition that the other vehicle 63 is present in the lane change warning range. In this case, the safe driving support system identifies, as the visually difficult range, the area that the driver 40 cannot see even by using the mirrors 31, 34, and 37 of the vehicle 20.

This is because, when changing lanes, the driver 40 is likely to check the side and rear by looking at the mirrors 31, 34, and 37, so in such a case it is not always necessary to treat the area visible through the mirrors 31, 34, and 37 as a visually difficult range. By doing so, the possibility that information can be displayed at the timing the driver 40 needs it is further improved.

In addition, the safe driving support system displays the captured image on the head-up display 3 when the vehicle 20 travels straight ahead, on the condition that the moving obstacle 63 has entered the straight travel warning range. In this case, the safe driving support system identifies, as the visually difficult range, both the area that the driver 40 cannot see even by using the mirrors 31, 34, and 37 of the vehicle 20 and the area that the driver 40 can see only by using the mirrors 31, 34, and 37.

When traveling straight ahead, unlike when changing lanes, the driver 40 is unlikely to look at the mirrors 31, 34, and 37 to check the side and rear. Therefore, in such a case, by also treating the area that can be seen only through the mirrors 31, 34, and 37 as a visually difficult range as described above, the possibility that information can be displayed at the timing the driver 40 needs it is further improved.

  In addition, the safe driving support system is configured to identify a difficult-to-view range based on the positions 41 and 42 of the eyes of the driver 40 and the orientations of the mirrors 31, 34, and 37. With this configuration, it is possible to identify an appropriate visual difficulty range in accordance with the physical characteristics of each driver 40. Therefore, the possibility that information can be displayed at a timing necessary for the driver is further improved.

In addition, the safe driving support system controls the traveling of the vehicle 20 so that the vehicle 20 moves to a safe position, on the condition that the other vehicle 63 has entered the danger range, which is narrower than the warning range. In this way, the vehicle 20 can be automatically moved to a safe position even in an emergency in which the driver 40 sees the display but cannot operate the vehicle 20 in time.

Further, the safe driving support system causes the head-up display 3 to perform display only after the other vehicle 63 has entered the warning range. Therefore, compared with the case where the head-up display 3 always performs display, the possibility that the driver finds the display of the head-up display 3 annoying is reduced.

In addition, the driver's attention to the displayed content when the head-up display 3 performs display is higher than when the head-up display 3 always performs display. Therefore, there is a high possibility that the driver 40 can quickly recognize a dangerous situation.

  Further, the display images 52a to 52c are displayed on the head-up display 3, so that the driver 40 can check the display images 52a to 52c while monitoring the front.

Moreover, even when only one of the in-vehicle cameras 2a to 2c captures a moving obstacle within the warning range, the safe driving support system displays not only the image of that camera but all of the images captured by the in-vehicle cameras 2a to 2c.

By doing so, the driver can not only confirm the position of the obstacle, but can also confirm from the display a safe place to which to move away from the obstacle (that is, a place where no other vehicle is approaching from behind).

(Other embodiments)
Although an embodiment of the present invention has been described above, the scope of the present invention is not limited to the embodiment described above, and includes various forms capable of realizing the functions of the features of the invention.

For example, when it is found in step 330 of FIG. 10 that the vehicle 20 is traveling on a two-way road with one lane in each direction or on a one-way road with a single lane, the straight travel warning range may be set uniformly to a range within 2 meters of the outer periphery of the vehicle 20, regardless of the setting by the driver 40. In that case, the control circuit 17 may be configured to execute step 310 after step 350 without executing the processing of steps 360 and 370.

Further, the control unit 71 of the safe driving support unit 7 may record the eye positions of the driver 40 measured using the eye point measurement unit 72, together with identification information of the driver 40, in a writable nonvolatile memory (not shown) of the safe driving support unit 7. The identification information of the driver 40 may be specified based on, for example, the driver 40 entering an identification number using the operation unit 13. Then, at vehicle start-up, the control unit 71 again acquires the identification information of the current driver 40 via the operation unit 13, and when the acquired identification information matches the driver identification information in the nonvolatile memory, it may specify the eye positions of the current driver 40 in step 105 of FIG. 5 by reading the eye position information of the driver 40 from the nonvolatile memory, without causing the eye point measurement unit 72 to measure the eye positions of the driver 40 again.

In this way, the eye positions of a driver are registered in the nonvolatile memory once they have been measured, and the eye point measurement unit 72 is not used again to measure the eye positions of a person whom the control unit 71 determines to be the same driver. Therefore, the eye positions do not need to be measured multiple times for the same driver, and the processing load on the eye point measurement unit 72 is reduced.
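
A sketch of the caching behavior described above, with a dictionary standing in for the nonvolatile memory and the measurement call left as a placeholder to be wired to the eye point measurement unit 72:

```python
class EyePositionCache:
    """Re-measure a driver's eye positions only when the driver ID has not been seen
    before; otherwise reuse the stored values (nonvolatile memory modeled as a dict)."""

    def __init__(self, measure_fn):
        self._store = {}            # driver_id -> (X, Y, Z1, Z2)
        self._measure = measure_fn  # callable wrapping the eye point measurement unit 72

    def eye_positions(self, driver_id):
        if driver_id not in self._store:
            self._store[driver_id] = self._measure()
        return self._store[driver_id]
```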

Further, the display range of the captured images may be wider during straight travel support than during lane change support. In this case, during lane change support, the left and right end portions (corresponding to the areas to the left and right of the vehicle 20) of the combined images 51a to 51c from the in-vehicle cameras 2a to 2c may be cut off (trimmed) and the result of the inversion process may be displayed on the head-up display 3, while during straight travel support, the inverted versions of the combined images 51a to 51c from the in-vehicle cameras 2a to 2c may be displayed on the head-up display 3 as they are.

Alternatively, when the shooting ranges of the in-vehicle cameras 2a to 2c are variable, the areas to the left and right of the vehicle 20 may be excluded from the shooting range by narrowing the shooting range during lane change support, and included in the shooting range by widening the shooting range during straight travel support.

In that case, the control circuit 17 may store two types of information in the ROM in advance: the shooting ranges of the in-vehicle cameras 2a to 2c for lane change support and the shooting ranges of the in-vehicle cameras 2a to 2c for straight travel support. When the lane change warning range is set, the warning range may be prohibited from protruding beyond the shooting range of the in-vehicle cameras 2a to 2c for lane change support, and when the straight travel warning range is set, the warning range may be prohibited from protruding beyond the shooting range of the in-vehicle cameras 2a to 2c for straight travel support.

  Further, after specifying the difficult-to-view range in step 115 of FIG. 5, the control unit 71 may determine the shooting range of the in-vehicle cameras 2a to 2c or the display image range of the head-up display 3 so as to include the difficult-to-view range.

  When the shooting range of the in-vehicle cameras 2a to 2c does not match the display target range of the head-up display 3, the control circuit 17 may prohibit a setting in which the warning range deviates from the display target range of the head-up display 3.
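  The containment relations described in the two preceding paragraphs could be expressed, purely as an illustration with hypothetical rectangle helpers (none of these names appear in the embodiment), as:

    def union(a, b):
        # Smallest rectangle (x_min, y_min, x_max, y_max) containing both a and b.
        return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

    def covers(outer, inner):
        # True when `inner` lies completely inside `outer`.
        return (outer[0] <= inner[0] and outer[1] <= inner[1]
                and outer[2] >= inner[2] and outer[3] >= inner[3])

    def fit_range_to_difficult_area(camera_or_display_range, difficult_to_view_range):
        # Widen the shooting or display range so the difficult-to-view range is included.
        return union(camera_or_display_range, difficult_to_view_range)

    def check_warning_range(warning_range, display_target_range):
        # Prohibit a warning range that deviates from the head-up display's target range.
        if not covers(display_target_range, warning_range):
            raise ValueError("warning range deviates from the display target range")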

  In the safe driving support system, an event other than the entry of the other vehicle 63 into the warning range (for example, an explicit display request operation by the driver) may also trigger display on the head-up display 3. That is, the images captured by the in-vehicle cameras 2a to 2c need not always be displayed on the head-up display 3; as long as display on the head-up display 3 is triggered by the presence of another vehicle within the warning range, the effect of the present invention is achieved.

  In the above embodiment, when displaying an image on the head-up display 3 during the lane change support process (step 270) and during the straight travel support process (step 350), the control unit 71 of the safe driving support unit 7 may detect the positions of the eyes 41 and 42 of the driver 40 using the eye point measurement unit 72, calculate the line-of-sight direction of the driver 40 from those positions, and, when the calculated direction does not face the head-up display 3 and that state continues for a predetermined time (for example, 1 second), further issue a warning by voice. By doing so, the possibility that the driver 40 will miss the other vehicle 63 that has entered the warning range is reduced.
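  A possible sketch of this gaze check, assuming a periodic update call and a hypothetical play_alert callback (none of these names or the timing implementation appear in the embodiment):

    import time

    GAZE_AWAY_LIMIT_S = 1.0  # predetermined time from the text (example value)

    class GazeWarning:
        # Issues an audible alert when the driver has not faced the head-up display
        # for longer than the predetermined time while an image is being shown.

        def __init__(self, now=time.monotonic):
            self._now = now
            self._away_since = None

        def update(self, gaze_on_hud, play_alert):
            if gaze_on_hud:
                self._away_since = None
                return
            if self._away_since is None:
                self._away_since = self._now()
            elif self._now() - self._away_since >= GAZE_AWAY_LIMIT_S:
                play_alert()             # e.g. a voice warning
                self._away_since = None  # avoid repeating the alert every cycle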

  Moreover, the control circuit 17 may be configured to perform the processing of the control unit 71 described above. In the above embodiment, each function realized by the control circuit 17, the control unit 71, and the danger avoidance automatic control device 5 executing a program may instead be realized by hardware (for example, an FPGA whose circuit configuration can be programmed).

Brief description of the drawings

FIG. 1 is a configuration diagram of a safe driving support system according to an embodiment of the present invention.
FIG. 2 is a diagram showing the mounting positions and shooting ranges of the in-vehicle cameras 2a to 2c and the ranges the driver can view using the mirrors 31, 34, 37.
FIG. 3 is a diagram showing the mounting position of the eye point measurement unit.
FIG. 4 is a diagram showing the imaging target of the eye point measurement unit.
FIG. 5 is a flowchart of the program 100 executed by the control unit 71.
FIG. 6 is a diagram illustrating the difficult-to-view areas 44 to 48.
FIG. 7 is a diagram showing an example of the lane change warning range including the blind spots 47 and 48.
FIG. 8 is a diagram showing an example of the straight travel warning range including the difficult-to-view areas 44 to 48.
FIG. 9 is a flowchart of the program 200 executed by the control circuit 17.
FIG. 10 is a flowchart of the program 200 executed by the control circuit 17.
FIG. 11 is a diagram schematically showing a method of generating the image displayed on the head-up display from the images captured by the in-vehicle cameras 2a to 2c.
FIG. 12 is a diagram schematically showing a case where the other vehicle 63 enters the danger range.
FIG. 13 is a diagram showing the control content of the safe driving support unit 7 when there is a following vehicle 64.
FIG. 14 is a diagram showing the control content of the safe driving support unit 7 when there is a preceding vehicle 66.
FIG. 15 is a diagram showing the control content of the safe driving support unit 7 when there are both a following vehicle 64 and a preceding vehicle 66.

Explanation of symbols

1 Vehicle navigation apparatus
2a-2c In-vehicle camera
3 Head-up display
4 Radar sensor
5 Danger avoidance automatic control device
20 Own vehicle
31, 34, 37 Mirror
40 Driver
44-46 Mirror viewing range
47, 48 Blind spot
51a-51c Camera-captured image
52a-52c Display image
60, 70 Warning range boundary
63, 64, 66 Other vehicle
72 Eye point measurement unit
73 Mirror angle measurement unit

Claims (7)

  1. A safe driving support system comprising:
    cameras (2a, 2b, 2c) for photographing the sides and rear of a vehicle (20);
    warning range setting means (125, 140) for setting a warning range according to a designation by the driver; and
    display means (3) for displaying, to the driver (40), images captured by the cameras (2a, 2b, 2c) due to the presence of a moving obstacle (63) in the warning range.
  2.   The safe driving support system according to claim 1, wherein the warning range setting means (125, 140) prohibits the setting of a warning range that protrudes from the shooting range of the cameras (2a, 2b, 2c).
  3. The safe driving support system according to claim 1 or 2, further comprising difficult-to-view range specifying means (115) for specifying a difficult-to-view range corresponding to a range outside the vehicle (20) that is difficult for the driver (40) of the vehicle (20) to view,
    wherein the warning range setting means (125, 140) prohibits the setting of a warning range that does not completely include the specified difficult-to-view range.
  4. The safe driving support system according to claim 3, wherein the display means (3) displays the captured images to the driver (40) due to the presence of a moving obstacle (63) in the warning range when the vehicle (20) changes lanes, and
    the difficult-to-view range specifying means (115) specifies, as the difficult-to-view range, an area that the driver (40) cannot view even using the mirrors (31, 34, 37) of the vehicle (20).
  5. The safe driving support system according to claim 3 or 4, wherein the display means (3) displays the captured images to the driver (40) due to a moving obstacle (63) entering the warning range when the vehicle (20) is traveling straight ahead, and
    the difficult-to-view range specifying means (115) specifies, as the difficult-to-view range, both an area that the driver (40) cannot view even using the mirrors (31, 34, 37) of the vehicle (20) and an area that the driver (40) can view only by using the mirrors (31, 34, 37).
  6.   The safe driving support system according to any one of claims 3 to 5, wherein the difficult-to-view range specifying means (115) specifies the difficult-to-view range based on the eye position of the driver (40) and the orientation of the mirrors (31, 34, 37).
  7.   The safe driving support system according to any one of claims 1 to 6, further comprising travel control means (5) for controlling travel of the vehicle (20) so that the vehicle (20) moves to a safe position based on the moving obstacle (63) entering a range narrower than the warning range.
JP2008027602A 2008-02-07 2008-02-07 Safe travel assisting system Withdrawn JP2009184554A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2008027602A JP2009184554A (en) 2008-02-07 2008-02-07 Safe travel assisting system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2008027602A JP2009184554A (en) 2008-02-07 2008-02-07 Safe travel assisting system

Publications (1)

Publication Number Publication Date
JP2009184554A true JP2009184554A (en) 2009-08-20

Family

ID=41068263

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2008027602A Withdrawn JP2009184554A (en) 2008-02-07 2008-02-07 Safe travel assisting system

Country Status (1)

Country Link
JP (1) JP2009184554A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012164704A1 (en) * 2011-06-01 2012-12-06 パイオニア株式会社 Heads-up display, heads-up display image display method, and image display program
WO2013031095A1 (en) 2011-08-31 2013-03-07 日産自動車株式会社 Vehicle driving assistance device
US9142131B2 (en) 2011-08-31 2015-09-22 Nissan Motor Co., Ltd. Vehicle driving support apparatus
KR20140073709A (en) * 2012-12-06 2014-06-17 현대자동차주식회사 System and method of monitoring dead angle area for car
US10363871B2 (en) 2014-08-21 2019-07-30 Denso Corporation Vehicle notification apparatus
JP2016162229A (en) * 2015-03-02 2016-09-05 トヨタ自動車株式会社 Vehicle control unit
KR20170142939A (en) * 2016-06-20 2017-12-28 메크라 랑 게엠베하 운트 코 카게 Mirror replacement system for a vehicle
KR102000929B1 (en) 2016-06-20 2019-07-17 메크라 랑 게엠베하 운트 코 카게 Mirror replacement system for a vehicle
US10762357B2 (en) 2017-04-14 2020-09-01 Sakai Display Products Corporation Shading device and image display module
JP2018177204A (en) * 2018-03-15 2018-11-15 堺ディスプレイプロダクト株式会社 Sunshade apparatus and image display module


Legal Events

Date Code Title Description
A300 Withdrawal of application because of no request for examination

Free format text: JAPANESE INTERMEDIATE CODE: A300

Effective date: 20110510