JP2011192070A - Apparatus for monitoring surroundings of vehicle - Google Patents

Apparatus for monitoring surroundings of vehicle Download PDF

Info

Publication number
JP2011192070A
JP2011192070A (publication of application JP2010058280A)
Authority
JP
Japan
Prior art keywords
vehicle
display
displayed
good
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2010058280A
Other languages
Japanese (ja)
Other versions
JP5192009B2 (en)
Inventor
Hiroshi Iwami
浩 岩見
Original Assignee
Honda Motor Co Ltd
本田技研工業株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honda Motor Co Ltd, 本田技研工業株式会社 filed Critical Honda Motor Co Ltd
Priority to JP2010058280A priority Critical patent/JP5192009B2/en
Publication of JP2011192070A publication Critical patent/JP2011192070A/en
Application granted granted Critical
Publication of JP5192009B2 publication Critical patent/JP5192009B2/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Abstract

To provide a display method that allows the driver to quickly recognize the degree of danger and that prompts the driver to view the area ahead directly, without depending on the image on the display device, regardless of the visibility state.
A predetermined object around a vehicle is detected based on an image acquired by an imaging device, and a display image generated from the captured image is displayed on a display device so that a vehicle occupant can visually recognize it. The object is highlighted on the display image when it is in a predetermined positional relationship with the vehicle. In addition, the visibility state around the vehicle is estimated. When the visibility state is estimated to be good, a first highlight is displayed at a predetermined distance from the vehicle on the display image; when the visibility state is estimated to be not good, the first highlight is displayed closer to the vehicle than when the visibility state is estimated to be good.
[Selection] Figure 7

Description

  The present invention relates to an apparatus for monitoring the periphery of a vehicle, and more specifically to an apparatus for controlling the form of display presented as a result of monitoring the periphery of the vehicle.

  Conventionally, a device that detects objects around a vehicle is mounted on the vehicle and issues an alarm for a detected object. Patent Document 1 below discloses an apparatus that predicts the time of arrival at an obstacle detected by a radar, immediately sounds an alarm if the arrival time is within 2 seconds, and presents the alarm on a display if the arrival time is 2 seconds or more.

JP 2001-23094 A

  An alarm such as the one described above is preferably issued in a form that allows the driver to recognize the danger. In particular, when the display device is installed in a place that requires the driver to move his or her line of sight to view it, it is desirable that the alarm on the display device be presented in a form that allows the driver to recognize the danger in a shorter time.

  In the above patent document, the arrival time at the obstacle is predicted and the form of the alarm is changed depending on whether the predicted arrival time is within a predetermined time. However, the specific form of the alarm display is not described.

  On the other hand, when the visibility around the vehicle is good, making the driver recognize the degree of danger through the display described above makes it easy for the driver to move his or her line of sight forward, find the object that requires attention, and perform an avoidance operation. When visibility is poor, however, such as in rain or snow, it may be difficult to find the object even after moving the line of sight forward. As a result, the driver may come to depend on the video on the display device and may miss the opportunity to notice a dangerous situation approaching ahead.

  Therefore, one object of the present invention is to provide a form of display that enables the driver to quickly recognize the degree of danger and prompts the driver to view the area ahead directly, without depending on the image on the display device, regardless of the visibility state.

  According to one aspect of the present invention, a vehicle periphery monitoring device comprises display means that detects a predetermined object around a vehicle based on an image acquired by an imaging device that images the periphery of the vehicle, displays a display image generated based on the image on a display device so that an occupant of the vehicle can visually recognize it, and highlights the object on the display image when the object is in a predetermined positional relationship with the vehicle. The device further comprises means for estimating the visibility state around the vehicle. When the visibility state is estimated to be good, the display means displays a first highlight at a predetermined distance from the vehicle on the display image; when the visibility state is estimated to be not good, the display means displays the first highlight closer to the vehicle than when the visibility state is estimated to be good.

  According to the present invention, since the first highlight is displayed at a predetermined distance from the vehicle, the driver can easily recognize the degree of danger from the first highlight. In addition, when an object present in the predicted course of the vehicle is displayed on the display device in an emphasized manner, the positional relationship between the object and the first highlight can be grasped instantly, so the degree of danger of collision with the object can be recognized quickly. After recognizing the danger from such a display, the driver can easily find the object to be noted, such as a pedestrian, by moving the line of sight forward. If the first highlight were displayed at the same position in poor visibility as in good visibility, the driver might not be able to find the object by actual visual observation and might keep gazing at the display screen. In the present invention, however, when visibility is poor, the first highlight is displayed closer to the vehicle, so gazing at the display screen can be prevented; as in the case of good visibility, the driver recognizes the degree of danger from the first highlight and can find the object by moving the line of sight forward.

  According to an embodiment of the present invention, when the visibility state is estimated to be not good, the display means displays the first highlight brought closer to the vehicle, up to a predetermined position set so as to be visible to the driver even in a situation where the visibility state is not good. In this way, even when visibility is poor, the driver can visually observe the position in real space corresponding to the first highlight, so the driver can be prevented from gazing at the display screen.

  According to an embodiment of the present invention, means for detecting the speed of the vehicle is provided, and the position at the predetermined distance at which the first highlight is displayed when the visibility state is estimated to be good is a position corresponding to a predetermined expected arrival time of the vehicle calculated based on the speed of the vehicle. Since the first highlight is thus displayed at a position corresponding to the predetermined arrival time of the vehicle, the position of the object can be grasped in association with the arrival time.

  According to an embodiment of the present invention, the position at the predetermined distance is a first position corresponding to the case where the expected arrival time is a first predetermined value. The display means further displays a second highlight at a second position corresponding to the case where the expected arrival time calculated based on the speed of the vehicle is a second predetermined value smaller than the first predetermined value. The display means displays the second highlight at the same position in the display image both when the visibility state is estimated to be good and when it is estimated to be not good. Thus, the display position of the first highlight, which indicates a relatively low degree of danger, is changed according to the visibility state, whereas the second highlight, which indicates a relatively high degree of danger, is displayed without changing its position.

  According to an embodiment of the present invention, when the visibility state is estimated to be not good, the display means displays the first highlight at a position on the display image corresponding to a predetermined position within the irradiation range of the headlights of the vehicle. In this way, even when visibility is poor, the driver can more reliably observe, under the headlight illumination, the position in real space corresponding to the first highlight, so the driver can be prevented from gazing at the display screen. In one embodiment, the predetermined position within the irradiation range of the headlights is changed depending on whether the headlights are emitting a high beam.

  Other features and advantages of the present invention will be apparent from the detailed description that follows.

FIG. 1 is a block diagram showing the configuration of a vehicle periphery monitoring apparatus according to one embodiment of the present invention.
FIG. 2 is a diagram showing the attachment positions of the display device and the cameras according to one embodiment of the present invention.
FIG. 3 is a diagram showing an example of the display of an auxiliary line (highlight) according to one embodiment of the present invention.
FIG. 4 is a diagram showing an example of the position at which the auxiliary line is displayed according to the visibility state, according to one embodiment of the present invention.
FIG. 5 is a diagram showing an example of the position at which the auxiliary line is displayed according to the visibility state, according to one embodiment of the present invention.
FIG. 6 is a diagram showing an example of the position at which the auxiliary line is displayed according to the visibility state, according to one embodiment of the present invention.
FIG. 7 is a flowchart illustrating a process in the image processing unit according to one embodiment of the present invention.
FIG. 8 is a diagram showing another example of the display of auxiliary lines and an example of the highlighting of an object according to one embodiment of the present invention.
FIG. 9 is a flowchart showing an object detection process according to one embodiment of the present invention.

  Next, an embodiment of the present invention will be described with reference to the drawings. FIG. 1 is a block diagram showing the configuration of a vehicle periphery monitoring device that uses the display device of a navigation device according to an embodiment of the present invention, and FIG. 2 is a diagram showing the attachment of the display device and the cameras to the vehicle.

  A navigation device is mounted on the vehicle, and the navigation device includes a navigation unit 5 and a display device 4. As shown in FIG. 2(a), the display device 4 is attached at a position separated by a predetermined distance, in the vehicle width direction, from the line that passes through the center of the steering wheel (handle) 21 and extends in the front-rear direction of the vehicle, so that it is visible to the driver. In this embodiment, the display device 4 is embedded in the dashboard 23 of the vehicle.

  The navigation unit 5 is realized by a computer having a central processing unit (CPU) and a memory. The navigation unit 5 receives, via a communication device (not shown) provided in the navigation unit 5, a GPS signal for measuring the position of the vehicle 10 using artificial satellites, and detects the current position of the vehicle 10 based on the GPS signal. The navigation unit 5 superimposes the current position on map information around the vehicle (which can be stored in a storage device of the navigation device or received from a predetermined server via the communication device) and displays it on the display screen 25 of the display device 4. The display screen 25 of the display device 4 constitutes a touch panel, and the occupant can input a destination to the navigation unit 5 via the touch panel or via another input device 27 such as keys or buttons. The navigation unit 5 can calculate the optimal route of the vehicle to the destination, superimpose an image showing the optimal route on the map information, and display it on the display screen 25 of the display device 4.

  A speaker 3 is also connected to the navigation unit 5. When necessary, for example when giving route guidance at a temporary stop or an intersection, the navigation unit 5 can notify the occupant not only by the display on the display device 4 but also by sound or voice through the speaker 3. Recent navigation devices are equipped with various other functions, such as providing traffic information and guidance to facilities near the vehicle; any appropriate navigation device can be used in this embodiment.

  The vehicle periphery monitoring device is mounted on the vehicle and comprises two infrared cameras 1R and 1L capable of detecting far-infrared radiation and an image processing unit 2 for detecting objects around the vehicle based on the image data captured by the cameras 1R and 1L. The image processing unit 2 is connected to the display device 4 and the speaker 3. The display device 4 is used to display the image obtained through imaging by the camera 1R or 1L and to display an alarm about the presence of an object around the vehicle detected from that image. The speaker 3 is used to issue an alarm by sound or voice based on the result of object detection.

  In this embodiment, as shown in FIG. 2(b), the cameras 1R and 1L are arranged at positions symmetrical with respect to the central axis passing through the center of the vehicle width, at the front portion of the vehicle 10, so as to image the area ahead of the vehicle 10. The two cameras 1R and 1L are fixed to the vehicle so that their optical axes are parallel to each other and their heights from the road surface are equal. The infrared cameras 1R and 1L have the characteristic that the higher the temperature of an object, the higher the level of their output signal (that is, the higher the luminance in the captured image).

  The image processing unit 2 includes an A/D conversion circuit that converts an input analog signal into a digital signal, an image memory that stores the digitized image signal, a central processing unit (CPU) that performs various arithmetic processing, a RAM (Random Access Memory) used by the CPU to store data during computation, a ROM (Read Only Memory) that stores programs executed by the CPU and data used by them (including tables and maps), and an output circuit that outputs drive signals for the speaker 3, display signals for the display device 4, and the like. The output signals of the cameras 1R and 1L are converted into digital signals and input to the CPU.

  The traveling state detection unit 7 includes a sensor that detects the speed of the vehicle. The sensor for detecting the speed of the vehicle may be a so-called vehicle speed sensor, or alternatively an acceleration sensor or a wheel speed sensor. In this embodiment, the traveling state detection unit 7 further includes sensors for detecting other traveling states used for predicting the course of the vehicle; in one embodiment, these may include a yaw rate sensor for detecting the yaw rate of the vehicle and a steering angle sensor for detecting the steering angle of the vehicle. Data indicating the traveling state of the vehicle detected by the traveling state detection unit 7 is sent to the image processing unit 2.

  The visibility state estimation unit 9 is provided as one of the functions of the image processing unit 2 (alternatively, its function may be realized by another electronic control unit) and estimates whether the visibility state around the vehicle as seen by the driver is good. The visibility state estimation unit 9 can perform this estimation by any appropriate method. In one example, a raindrop sensor (not shown) is provided in the vehicle; if raindrops are detected by the sensor, the visibility state is estimated to be not good, and if not detected, it is estimated to be good. In another example, the navigation unit 5 is used to acquire weather information by connecting to a predetermined server: weather information for the area including the current position of the vehicle is acquired, and if it indicates clear or cloudy conditions the visibility state is estimated to be good, whereas if it indicates rain, snow, or fog it is estimated to be not good.

  In yet another example, since visibility is often poorer at night than in the daytime, the visibility state may be estimated to be good in the daytime and not good at night. The visibility state may also be estimated from whether the headlights of the vehicle are turned on: if the headlights are not turned on, it is estimated to be daytime and the visibility state is estimated to be good; if they are turned on, it is estimated to be nighttime, or dim even in the daytime, and the visibility state is estimated to be not good.
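  As a minimal illustration of the estimation logic described above, the following Python sketch combines the three information sources mentioned (raindrop sensor, weather information acquired via the navigation unit, and headlight state) into a single good/not-good decision. The function name, argument names, and the priority order among the sources are assumptions made for this sketch and are not specified by the embodiment.

```python
def estimate_visibility_good(raindrop_detected=None, weather=None, headlights_on=None):
    """Return True if the visibility around the vehicle is estimated to be good.

    Each argument may be None when the corresponding source is unavailable.
    The priority order (raindrop sensor, then weather information, then
    headlight state) is an assumption made for this sketch.
    """
    # Raindrop sensor: any detected raindrop implies poor visibility.
    if raindrop_detected is not None:
        return not raindrop_detected
    # Weather information obtained via the navigation unit for the current area.
    if weather is not None:
        return weather in ("clear", "cloudy")   # rain, snow, fog -> not good
    # Headlight state: lights on suggests night-time or dim conditions.
    if headlights_on is not None:
        return not headlights_on
    # With no information at all, assume good visibility (assumption of this sketch).
    return True


# Example: the raindrop sensor reports rain -> visibility is estimated as not good.
print(estimate_visibility_good(raindrop_detected=True))   # False
```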

  The image processing unit 2 predicts the course of the vehicle based on the traveling state detected by the traveling state detection unit 7, and applies highlighting within the predicted-course region of the image that is captured by the cameras 1R and 1L and displayed on the display device 4, in order to inform the driver of the degree of danger.

  Thus, in this embodiment, the display device 4 of the navigation device is used both to display the image obtained through imaging by the cameras 1R and 1L and to display a notification (alarm) to the driver about the presence of a predetermined object detected from that image. As described above, unlike a head-up display (HUD), which presents its screen on the front window in front of the driver, the display device 4 is located a predetermined distance from the steering wheel 21 in the vehicle width direction, so the driver must move his or her line of sight horizontally to view the screen of the display device 4 while driving, compared with a HUD. It is therefore preferable to provide a display that allows the driver to recognize the degree of danger in a shorter time; in the present invention, this is realized by highlighting.

  In this embodiment, an auxiliary line is superimposed on the display as an example of the highlight. FIG. 3 shows an example of the auxiliary line 111 superimposed on the display image 101 on the display device 4. The display image 101 is a grayscale image obtained through imaging by the cameras 1R and 1L (fine changes in luminance are not shown in the drawing for ease of viewing), and the area surrounded by the lines 103 and 105 on the image 101 is the image area corresponding to the predicted course. The auxiliary line 111 is superimposed on the predicted course. As will be described later, since the auxiliary line is displayed at a predetermined distance from the vehicle, the driver can easily grasp the sense of distance ahead from the display of the auxiliary line and recognize the degree of danger. For example, if an object such as a pedestrian is imaged in the vicinity of the auxiliary line, the position of the object can be grasped realistically, and the object can easily be found by moving the line of sight forward so that an avoidance operation can be performed.

  Furthermore, in the present invention, the display position of the auxiliary line is changed according to the visibility state estimated by the visibility state estimation unit 9. Several forms of this change will be described.

  FIG. 4 shows the first form and indicates the distance from the vehicle to which the display position of the auxiliary line corresponds. In this form, as shown in (a), when the visibility state is estimated to be good, the auxiliary line 111 is displayed at the display position corresponding to the position a predetermined distance Da from the vehicle 10. As shown in (b), when the visibility state is estimated to be not good, the auxiliary line 111 is displayed at the display position corresponding to the position a predetermined distance Db from the vehicle 10. Here, the distance Db is smaller than the distance Da. Thus, when visibility is not good, the auxiliary line is displayed closer to the vehicle than when visibility is good.

  When visibility is good, the driver recognizes the degree of danger from the auxiliary line display and can easily find an object near the auxiliary line by moving the line of sight forward. However, if the auxiliary line were displayed at the same position when visibility is not good, the driver might not be able to see the vicinity of the auxiliary line by actual visual observation, even though the infrared cameras can still detect an object there. As a result, the driver might keep watching the image displayed on the display device 4 and neglect attention to the area ahead.

  To prevent this, the auxiliary line 111 is displayed closer to the vehicle 10. Thus, as in the case of good visibility, the driver recognizes the degree of danger from the auxiliary line display and can find an object near the auxiliary line by moving the line of sight forward.

  As described above, the auxiliary line serving as the highlight has the function of first informing the driver of the presence of an object and prompting the driver to find the object by visual observation. Therefore, preferably, Da is set to a distance value at which the driver of the vehicle can see the corresponding position when visibility is good, and Db is set to a distance value at which the driver can see the corresponding position even when visibility is poor. These visually observable distance values can be set to predetermined values determined in advance by experiment, for example.

  In the second form of the auxiliary line display, the predetermined distance Da used when the visibility state is estimated to be good is set according to the speed of the vehicle 10. For example, the predetermined distance Da can be the distance corresponding to a predetermined expected arrival time TTC calculated based on the vehicle speed detected by the traveling state detection unit 7. Here, the expected arrival time TTC can be set to any appropriate time (for example, several seconds). With the current vehicle speed V, the distance Da is calculated as V × TTC, and the auxiliary line 111 is displayed at the position corresponding to this distance. When the visibility state is not good, the distance Db is determined so that Da > Db as described above, and the auxiliary line 111 is displayed at the display position corresponding to the distance Db.
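  The calculation in this second form reduces to a single multiplication, as the following sketch illustrates. The factor used to bring the line closer to the vehicle in poor visibility is an assumed example value; the embodiment only requires that Db be smaller than Da (for example, a distance that remains visible or a position within the headlight range).

```python
def auxiliary_line_distance(speed_mps, ttc_seconds, visibility_good, poor_visibility_factor=0.5):
    """Distance ahead of the vehicle at which the auxiliary line is drawn.

    Da = V * TTC when visibility is good; when visibility is not good the line
    is brought closer to the vehicle (Db < Da).  The factor 0.5 is an assumed
    example value, not a value given by the embodiment.
    """
    da = speed_mps * ttc_seconds
    return da if visibility_good else da * poor_visibility_factor


# Example: 50 km/h (about 13.9 m/s) with a 3-second expected arrival time.
v = 50 / 3.6
print(round(auxiliary_line_distance(v, 3.0, True), 1))    # ~41.7 m (Da, good visibility)
print(round(auxiliary_line_distance(v, 3.0, False), 1))   # ~20.8 m (Db, poor visibility)
```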

  In this form, the auxiliary line 111 indicates the position that the vehicle will reach after the predetermined time, so the driver can quickly recognize the relationship between an object and the arrival time. If there is an object in the vicinity of the auxiliary line, the driver can recognize that the vehicle will reach it after the arrival time TTC, visually find the object, and avoid it more reliably. In addition, since the auxiliary line is displayed closer to the vehicle when visibility is poor than when it is good, the driver is, as described above, prevented from continuing to watch the image on the display device 4 and is encouraged to look ahead.

  As described above, even when the visibility state is poor, it is preferable to place the auxiliary line at a position the driver can see; then, if the driver recognizes the danger indicated by the auxiliary line on the display screen and moves the line of sight forward, the object to be noted can be found more reliably. Therefore, in the third form shown in FIG. 5, as shown in FIG. 5(b), the position of the auxiliary line 111 when the visibility state is estimated to be not good is set within the irradiation range 151 of light from the headlights of the vehicle 10. Since the irradiation range 151 is determined in advance, the distance Db can be determined so that the position of the auxiliary line is included in the irradiation range. In this way, if the line of sight is moved forward, the object can be found by the light of the headlights.

  Here, the size of the irradiation range 151 may be changed according to whether the headlights are switched to low beam or high beam. Whether the headlights are emitting a low beam or a high beam is detected by any appropriate means; if low-beam irradiation is being performed, the auxiliary line 111 is displayed at the distance Db determined according to the range 151 irradiated by the low beam, and if high-beam irradiation is being performed, the auxiliary line 111 can be displayed at the distance Db determined according to the range 151 irradiated by the high beam.

  In the third form, the position of the auxiliary line 111 when the visibility state is good, shown in (a), is the same as in the first or second form described with reference to FIG. 4; the distance Da is set to be larger than the distance Db.

  FIG. 6 shows a fourth form, in which a plurality of highlights are used. In this example, a plurality of auxiliary lines are used as the plurality of highlights. The auxiliary line 111 is a first auxiliary line, and in addition a second auxiliary line 112 and a third auxiliary line 113 are used. The first auxiliary line 111 can be displayed according to the first to third forms described above. The second auxiliary line 112 is used to make the driver recognize a higher degree of danger than the first auxiliary line 111 and is displayed closer to the vehicle than the first auxiliary line 111. The third auxiliary line 113 is for making the driver recognize an even higher degree of danger and is displayed closer to the vehicle than the second auxiliary line 112.

  Therefore, when the visibility state is estimated to be good as shown in (a), Da > Da2 > Da3, and when the visibility state is estimated to be not good as shown in (b), Db > Db2 > Db3. The distances Da2 and Db2 of the second auxiliary line 112 can be determined so that the ratio between Da2 and Db2 is the same as the ratio between Da and Db.

  Since the third auxiliary line 113 indicates the highest degree of danger, it is preferably displayed at the same position regardless of the visibility state; therefore, in the figure, Da3 = Db3.

  When the display position of the first auxiliary line 111 is determined based on the expected arrival time TTC, the display positions of the second auxiliary line 112 and the third auxiliary line 113 can also be determined based on expected arrival times TTC. Here, the second TTC time used to determine the display position of the second auxiliary line 112 is smaller than the first TTC time used to determine the display position of the first auxiliary line 111, and the third TTC time used to determine the display position of the third auxiliary line 113 is smaller than the second TTC time.

Preferably, the distances Da3 and Db3 of the third auxiliary line 113 are determined so that the vehicle can stop if a braking operation at a predetermined deceleration is started at the current time. For example, if the predetermined deceleration of the brake (expressed as a positive value) is G and the current speed of the vehicle is V, then V²/(2·G) is the minimum distance over which the vehicle can be stopped. Accordingly, the distances Da3 and Db3 of the third auxiliary line 113 are preferably set to a value equal to or greater than this distance from the vehicle. In this way, even when the driver performs a braking operation upon recognizing that an object is present at the position of the third auxiliary line 113, the object can be avoided more reliably.
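  The lower bound on Da3 and Db3 follows directly from the stopping-distance formula above. A short worked example, with an assumed deceleration value, is shown below.

```python
def minimum_stopping_distance(speed_mps, deceleration_mps2):
    """Minimum distance V^2 / (2*G) needed to stop from speed V at deceleration G."""
    return speed_mps ** 2 / (2.0 * deceleration_mps2)


# Assumed example: 60 km/h and a braking deceleration of 5 m/s^2.
v = 60 / 3.6                        # ~16.7 m/s
d_min = minimum_stopping_distance(v, 5.0)
print(round(d_min, 1))              # ~27.8 m -> Da3 and Db3 should be at least this value
```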

  In this example, three auxiliary lines are used, but the number is not limited to three and may be larger or smaller; for example, only the first and third auxiliary lines may be displayed.

  In the above examples, highlighting using an auxiliary line is performed; however, the highlight is not limited to the form of an auxiliary line and may be realized by superimposing other forms of display (for example, a figure such as a mark).

  FIG. 7 is a flowchart illustrating a process performed by the image processing unit 2 according to one embodiment of the present invention. The process is performed at predetermined time intervals and is based on the fourth form (FIG. 6) described above.

  In step S11, the output signals of the cameras 1R and 1L (that is, the captured image data) are received as input, A/D converted, and stored in the image memory. The stored image data is a grayscale image containing luminance information, and this is the image (display image) displayed on the display device 4.

  In step S12, the course of the vehicle is predicted based on the traveling state of the vehicle detected by the traveling state detection unit 7. Any appropriate method can be used for the course prediction. In one embodiment, the course of the vehicle can be predicted based on the detected vehicle speed and yaw rate; for example, the technique described in Japanese Patent Laid-Open No. 7-104062 can be used. Instead of the yaw rate, the steering angle detected by the steering angle sensor may be used; a method for predicting the course of the vehicle based on the vehicle speed and the steering angle is described in, for example, Japanese Patent Application Laid-Open No. 2009-16663.
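  As one concrete illustration of course prediction from the vehicle speed and yaw rate, the following sketch integrates a constant-speed, constant-yaw-rate (circular arc) motion model. This simple model is an assumption made for illustration; the methods in the cited documents may differ in detail.

```python
import math

def predict_course(speed_mps, yaw_rate_rps, horizon_s=3.0, step_s=0.1):
    """Predict the vehicle course as (x, y) points in vehicle coordinates.

    A constant-speed, constant-yaw-rate model is assumed here purely for
    illustration.  x is forward, y is to the left of the vehicle.
    """
    points = []
    x = y = heading = 0.0
    t = 0.0
    while t < horizon_s:
        x += speed_mps * step_s * math.cos(heading)
        y += speed_mps * step_s * math.sin(heading)
        heading += yaw_rate_rps * step_s
        points.append((x, y))
        t += step_s
    return points


# Example: 40 km/h with a gentle left turn (yaw rate 0.1 rad/s).
path = predict_course(40 / 3.6, 0.1)
print(path[-1])   # approximate position reached after about 3 seconds
```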

  Information acquired by the navigation unit 5 may also be used for the course prediction. For example, assuming that the vehicle maintains its current traveling direction, the route that the vehicle will follow from its current position is obtained based on the map information; for a vehicle traveling on a certain road, the route that continues along that road is the predicted course. Alternatively, when the vehicle is being guided along the optimal route calculated by the navigation unit 5, the optimal route may be used as the predicted course.

  In step S13, positions corresponding to predetermined expected arrival times TTC of the vehicle are calculated based on the vehicle speed detected by the traveling state detection unit 7. As described above for the fourth form, in this embodiment three values are used as the expected arrival time TTC (in seconds): the first TTC time, the second TTC time, and the third TTC time (first TTC time > second TTC time > third TTC time). The position corresponding to the case where the expected arrival time TTC is the first TTC time (referred to as the first TTC position) is the position that the vehicle will reach if it travels at the detected current speed for the first TTC time from the present time. Therefore, if the current speed of the vehicle is V and the first TTC time is t1 seconds, the position reached by traveling along the predicted course from the current position of the vehicle by the distance V × t1 is identified as the first TTC position. The same applies to the second TTC position corresponding to the second TTC time and the third TTC position corresponding to the third TTC time. As a result, in terms of distance from the vehicle, first TTC position > second TTC position > third TTC position. As described above, the third TTC time is preferably determined so that the distance from the vehicle to the third TTC position is a distance at which the vehicle can be stopped by a braking operation at a predetermined deceleration started at the present time.
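  The computation of step S13 can be summarized as follows. The TTC times and the deceleration used to bound the third position are assumed example values, and the function name is hypothetical.

```python
def ttc_positions(speed_mps, ttc_times_s=(3.0, 2.0, 1.0), deceleration_mps2=5.0):
    """Return the first/second/third TTC positions as distances ahead of the vehicle.

    Each distance is V * t for the corresponding TTC time; the third distance is
    kept at or above the minimum stopping distance V^2 / (2*G).  The TTC times
    and the deceleration are assumed example values.
    """
    t1, t2, t3 = ttc_times_s
    d1, d2, d3 = speed_mps * t1, speed_mps * t2, speed_mps * t3
    d3 = max(d3, speed_mps ** 2 / (2.0 * deceleration_mps2))
    return d1, d2, d3


v = 50 / 3.6
print([round(d, 1) for d in ttc_positions(v)])   # [41.7, 27.8, 19.3]
```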

  In step S14, the visibility state estimation unit 9 estimates whether the visibility around the vehicle is good. As described above, the visibility state may be estimated, for example, based on the detection result of a raindrop sensor or on whether the headlights are turned on. Alternatively, the visibility state may be estimated based on information about the weather in the area including the current position of the vehicle, acquired through the navigation unit 5.

  If the visibility state is estimated to be good (Yes in S15), the process proceeds to step S16, where the image area corresponding to the predicted course is identified on the display image obtained in step S11, the first to third auxiliary lines 111 to 113 are superimposed at the positions on the image corresponding to the first to third TTC positions on the predicted course, and the superimposed image is displayed on the display device 4. Here, the first to third auxiliary lines 111 to 113 are preferably displayed in different colors and/or different shapes.

  FIG. 8(a) shows an example of the first to third auxiliary lines 111 to 113 superimposed on the display image 101. As described above, the display image is a grayscale image obtained through imaging (fine luminance changes are not shown in the figure for ease of viewing), and the region surrounded by the lines 103 and 105 on the image is the image region corresponding to the predicted course. The first auxiliary line 111 corresponds to the first TTC position, the second auxiliary line 112 to the second TTC position, and the third auxiliary line 113 to the third TTC position.

  In this embodiment, these auxiliary lines are displayed in different colors (indicated by different types of hatching in the figure); for example, the first auxiliary line 111 is displayed in green or blue, the second auxiliary line 112 in yellow, and the third auxiliary line 113 in red. Thus, as with a traffic light, red is used for the third TTC position, where the degree of danger is highest, and green or blue for the first TTC position, where the degree of danger is lowest. By using such color coding, the driver can be promptly informed that the degree of danger increases from the first auxiliary line toward the third auxiliary line, and can get a feel for how far from the vehicle the positions of high, medium, and low danger are located.

  In this example, the first to third auxiliary lines 111 to 113 are displayed in different colors, but alternatively or in addition to the difference in color, they may be displayed in different shapes. For example, the first auxiliary line 111 displayed at the first TTC position can be drawn as the thinnest line, the third auxiliary line 113 displayed at the third TTC position as the thickest line, and the second auxiliary line 112 displayed at the second TTC position as a line of intermediate thickness. The driver can thus instantly grasp the degree of danger from the difference in color or shape. Furthermore, when an object is imaged, the degree of danger of the object can be understood instantly from the positional relationship between the object and the auxiliary lines, so the driver can be kept from fixing his or her gaze on the display device 4 in order to determine where the object is.
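  The color and thickness assignment described above amounts to a simple lookup from the auxiliary-line index to display attributes. The mapping below is an assumed example following the traffic-light analogy; the colors follow the embodiment, while the pixel thickness values are illustrative.

```python
# Assumed example mapping from auxiliary-line index (1 = farthest, 3 = nearest)
# to display attributes, following the traffic-light analogy described above.
AUXILIARY_LINE_STYLE = {
    1: {"color": "green",  "thickness_px": 2},   # lowest risk, thinnest line
    2: {"color": "yellow", "thickness_px": 4},   # intermediate risk
    3: {"color": "red",    "thickness_px": 6},   # highest risk, thickest line
}

def style_for_line(index):
    """Return the display style for the given auxiliary line (1, 2 or 3)."""
    return AUXILIARY_LINE_STYLE[index]


print(style_for_line(3))   # {'color': 'red', 'thickness_px': 6}
```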

  Returning to FIG. 7, if the visibility state is estimated to be not good (No in S15), in step S17 the image region corresponding to the predicted course is identified on the display image obtained in step S11, and the TTC positions for poor visibility are displayed in that image region. In this embodiment, the form shown in FIG. 6(b) is used for the TTC positions for poor visibility. Accordingly, the first and second TTC positions are brought closer to the vehicle than when the visibility state is good (step S16), the first auxiliary line 111 and the second auxiliary line 112 are superimposed at the positions on the image corresponding to these closer first and second TTC positions, and the third auxiliary line 113 is superimposed at the position on the image corresponding to the same third TTC position as when the visibility state is good. As described above, the first TTC position can be brought closer to a position corresponding to a distance set in advance so as to be visible, or to a position corresponding to a predetermined position within the irradiation range of the headlights, and the second TTC position can be brought closer in the same ratio as the first TTC position. As described for step S16, the first to third auxiliary lines 111 to 113 can also be displayed in step S17 in different colors and/or different shapes.

  In step S21 and subsequent steps, processing related to alarm display is performed for a predetermined object detected from the captured image (the detection processing itself will be described later with reference to FIG. 9). In step S21, it is determined whether the detected predetermined object exists in the predicted course. If it does (Yes in S21), the expected arrival time TTC to the object is calculated in step S22. Specifically, the expected arrival time TTC (= distance / relative speed) can be calculated based on the distance from the host vehicle to the object and the relative speed of the vehicle with respect to the object. In one embodiment, the speed V of the host vehicle can be used as the relative speed, assuming an object that crosses the course of the vehicle. Alternatively, the object may be tracked over time to calculate its relative speed with respect to the vehicle; a specific method is described in, for example, Japanese Patent Application Laid-Open No. 2001-6096.

  In step S23, the TTC time closest to the expected arrival time TTC calculated for the object is selected from the first to third TTC times, and the TTC position corresponding to the selected TTC time is specified. By comparing the expected arrival time TTC to the object with the first to third TTC times in this way, it is possible to determine how dangerous the position where the object exists is. Alternatively, the TTC position closest to the current distance of the object from the vehicle may be selected from the first to third TTC positions.
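  Steps S22 and S23 amount to computing TTC = distance / relative speed and picking the nearest of the three TTC times. A minimal sketch, with assumed TTC time values and a hypothetical function name, is shown below.

```python
def classify_object(distance_m, relative_speed_mps, ttc_times_s=(3.0, 2.0, 1.0)):
    """Compute the expected arrival time to an object and pick the closest TTC level.

    TTC = distance / relative speed (steps S22 and S23).  The three TTC times
    are assumed example values; the returned index (1..3) selects the auxiliary
    line whose display form the object should share.
    """
    ttc = distance_m / relative_speed_mps
    # Index of the TTC time closest to the computed arrival time.
    diffs = [abs(ttc - t) for t in ttc_times_s]
    return ttc, diffs.index(min(diffs)) + 1


# Example: pedestrian 20 m ahead, own speed 36 km/h (10 m/s) used as the relative speed.
ttc, level = classify_object(20.0, 10.0)
print(round(ttc, 1), level)   # 2.0 2 -> displayed like the second auxiliary line
```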

  In step S24, the object on the display image is displayed in a form correlated with the display of the TTC position specified in step S23.

  FIG. 8(b) shows an example in which the object is highlighted on the display image 101 of FIG. 8(a). In this example, as described above, the first auxiliary line 111, in green or blue, is displayed at the first TTC position. Therefore, if the TTC position corresponding to the TTC time closest to the expected arrival time TTC of the object is specified as the first TTC position, the object is displayed in green or blue, as indicated by reference numeral 131. Similarly, if it is specified as the second TTC position, the object is displayed in yellow, as indicated by reference numeral 132, and if it is specified as the third TTC position, the object is displayed in red, as indicated by reference numeral 133.

  The object may be displayed in a predetermined color by, for example, converting the color of the pixels in the region extracted as the object into the predetermined color, or by superimposing an icon image imitating a human, in the predetermined color, on the region extracted as the object. In this way, the object is displayed so as to correlate with the color of the TTC position determined to be closest to it.

  Instead of displaying the object itself in the color corresponding to each TTC position as described above, an emphasis frame may be displayed so as to surround the object, with the color of the frame set to the color corresponding to each TTC position.

  When the first to third auxiliary lines 111 to 113 have different shapes, the object may be displayed so as to correlate with the shape of the closest TTC position specified as described above. For example, different TTC positions are displayed with different marks, and the same mark as the one attached to the TTC position determined to be closest is attached to the object when it is displayed. In this way, the fact that they are related to each other can be represented by the mark.

  Thus, by displaying the object in a display form similar to that of the corresponding TTC position, the driver can quickly grasp which arrival time the object is associated with. Since the TTC positions are displayed in stages according to the degree of danger, the danger of the position where the object is located can be determined instantly.

  Returning to FIG. 7, step S25 is provided so that an alarm is given not only by the highlighted alarm display described above but also by an alarm sound through the speaker 3. This alarm sound can also be made different for each TTC position. For example, one short sound (for example, a beep) is output if the object is displayed so as to correlate with the first TTC position, two short sounds if it correlates with the second TTC position, and one long sound if it correlates with the third TTC position. When a plurality of objects exist in the predicted course and are displayed so as to correlate with different TTC positions, the alarm sound for the TTC position with the highest degree of danger may be output; for example, as shown in FIG. 8(b), when three pedestrians 131 to 133 correlate with the first to third TTC positions, the alarm sound for the third TTC position, which has the highest degree of danger, is output.
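  The alarm-sound selection described for step S25 can be sketched as follows. The textual pattern strings stand in for actual speaker output, and treating the numerically largest level as the most dangerous is an assumption of this sketch.

```python
def alarm_pattern(ttc_levels):
    """Return the alarm sound pattern for the most dangerous TTC level present.

    Level 1 -> one short beep, level 2 -> two short beeps, level 3 -> one long
    beep, as described above.  A real system would drive the speaker 3 instead
    of returning a string.
    """
    patterns = {1: "short", 2: "short short", 3: "long"}
    if not ttc_levels:
        return None
    return patterns[max(ttc_levels)]   # highest level = highest degree of danger


# Three pedestrians correlated with TTC levels 1, 2 and 3 -> alarm for level 3.
print(alarm_pattern([1, 2, 3]))   # "long"
```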

  Returning to step S21, if the predetermined object detected from the captured image does not exist in the predicted course (No in S21), the object exists outside the predicted course. In this case, the process proceeds to step S26, and the object is displayed with a reduced degree of emphasis compared with the alarm display for an object in step S24. In one example, the object may not be highlighted at all. In another example, the object can be displayed in a predetermined color that is less conspicuous (harder to identify) than the highlighting in step S24: whereas the highlighting in step S24 uses color, in step S26 the object is highlighted in a monochrome shade (for example, an intermediate shade between white and black), either by displaying the object in that predetermined monochrome shade or by surrounding the object with a frame of that shade. To distinguish this clearly from the highlighting in step S24, a color different from the colors used for the first to third TTC positions may be used in step S26.

  FIG. 8(b) also shows a display example of such an object: since the pedestrian indicated by reference numeral 135 exists outside the predicted course, it is displayed in a predetermined gray value. In this way, an object outside the predicted course, which presents a low degree of danger, can be displayed as a mere notification.

  Of course, when no predetermined object is detected from the image acquired through imaging, the processing in step S21 and subsequent steps is not executed, and only the display image on which the auxiliary lines are superimposed is displayed on the display device 4.

  FIG. 9 shows a flowchart of the process for detecting an object from the captured image. In step S31, the right image captured by the camera 1R is used as the reference image (alternatively, the left image may be used), and its image signal is binarized. Specifically, regions brighter than a luminance threshold value ITH are set to "1" (white) and darker regions to "0" (black). By this binarization, an object at a temperature higher than a predetermined temperature, such as a living body, is extracted as a white region. The luminance threshold value ITH can be determined by any appropriate technique.
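  A minimal sketch of the binarization in step S31 is shown below, operating on a toy luminance array rather than an actual camera image.

```python
def binarize(gray_image, ith):
    """Binarize a grayscale image: pixels brighter than threshold ITH become 1 (white).

    gray_image is a list of rows of luminance values; a real implementation
    would operate on the camera image buffer stored in the image memory.
    """
    return [[1 if px > ith else 0 for px in row] for row in gray_image]


# Toy 3x5 image: the warm (bright) region in the middle is extracted as white.
image = [
    [10, 12, 11, 13, 10],
    [11, 200, 210, 205, 12],
    [10, 13, 12, 11, 10],
]
print(binarize(image, 128))
# [[0, 0, 0, 0, 0], [0, 1, 1, 1, 0], [0, 0, 0, 0, 0]]
```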

  In step S32, the binarized image data is converted into run-length data. Specifically, for each region that has become white through binarization, each white run (referred to as a line) in each pixel row is represented by the coordinates of its start point (the leftmost pixel of the line) and its length (expressed in number of pixels) up to its end point (the rightmost pixel of the line). Here, the y axis is taken in the vertical direction of the image and the x axis in the horizontal direction. For example, if the white region in the pixel row with y coordinate y1 is the line from (x1, y1) to (x3, y1), this line consists of three pixels and is represented by the run-length data (x1, y1, 3).

  In steps S33 and S34, the objects are labeled and extracted. That is, among the lines converted into run-length data, lines that overlap each other in the y direction are regarded as one object, and a label is assigned to it. In this way, one or more objects are extracted.
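  The run-length conversion of step S32 and the labeling of steps S33 and S34 can be sketched together as follows. The grouping rule used here (runs on adjacent rows whose x ranges overlap belong to the same object) is one straightforward reading of the description above; the helper names are hypothetical.

```python
def run_length_encode(binary_image):
    """Convert a binary image to run-length data: one (x_start, y, length) per white run."""
    runs = []
    for y, row in enumerate(binary_image):
        x = 0
        while x < len(row):
            if row[x] == 1:
                start = x
                while x < len(row) and row[x] == 1:
                    x += 1
                runs.append((start, y, x - start))
            else:
                x += 1
    return runs


def label_objects(runs):
    """Group runs that overlap in x on adjacent rows into labeled objects (steps S33-S34).

    A simple greedy union is used here; the embodiment only states that runs
    overlapping in the y direction are treated as one object.
    """
    objects = []                          # each object is a list of runs
    for run in runs:
        x, y, length = run
        merged = None
        for obj in objects:
            for ox, oy, olen in obj:
                if oy == y - 1 and x < ox + olen and ox < x + length:
                    merged = obj
                    break
            if merged:
                break
        if merged:
            merged.append(run)
        else:
            objects.append([run])
    return objects


binary = [
    [0, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
runs = run_length_encode(binary)
print(runs)                        # [(1, 0, 2), (1, 1, 3)]
print(len(label_objects(runs)))    # 1 object: the two runs overlap on adjacent rows
```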

  In step S35, it is determined whether each of the objects extracted in this way is a predetermined object to be noted. In this embodiment, the predetermined object to be noted is a pedestrian, and animals may be included in addition to pedestrians. The process of determining whether an object is a pedestrian or an animal can be realized by any appropriate technique. For example, using known pattern matching, the similarity between the object extracted as described above and a predetermined pattern representing a pedestrian is calculated, and if the similarity is high, the object is determined to be a pedestrian; an animal can be determined in the same way. As examples of such determination processing, processes for determining whether an object is a pedestrian are described in, for example, Japanese Patent Application Laid-Open Nos. 2007-241740 and 2007-334751, and processes for determining whether an object is an animal are described in, for example, Japanese Patent Application Laid-Open Nos. 2007-310705 and 2007-310706.

  The processing from step S21 of FIG. 7 onward is then executed for the objects determined to be pedestrians.

  Note that highlighting as in steps S24 and S26 described above may be performed only for objects within a predetermined distance from the vehicle. In this case, the predetermined distance is preferably set to be equal to or greater than the distance of the first TTC position; for example, when the first TTC time is t1 seconds, a value calculated as t2 (≥ t1) × vehicle speed V can be used as the predetermined distance. In step S35 of FIG. 9, the determination is then made, among the objects extracted in step S34, only for objects within the predetermined distance range, and the processing from step S21 of FIG. 7 onward is performed for the objects determined as a result to be pedestrians (which may include animals).

Further, when the display position of the auxiliary line is changed because the visibility state is estimated to be not good as described above, the occupant may be notified of the change. For example, in order to inform the occupant of the distance from the vehicle to which the currently displayed auxiliary line corresponds, the auxiliary line may be displayed together with the corresponding distance value. Alternatively, when the visibility state is estimated to be not good, a display showing how much closer to the vehicle the position of the auxiliary line has been brought may be presented, or the amount by which it has been brought closer may be announced by voice.

  In the above embodiments, the display device provided in the navigation device is used. As described above, in the present invention the highlight (the auxiliary line in the embodiments) is displayed according to the visibility state, so even when a display device that requires the driver to move the line of sight is used, the driver can promptly recognize the degree of danger. However, a HUD as described above may also be used as the display device.

  Furthermore, although far-infrared cameras are used in the above embodiments, the present invention is also applicable to other cameras (for example, visible-light cameras).

  Specific embodiments of the present invention have been described above; however, the present invention is not limited to these embodiments.

1R, 1L: infrared camera (imaging means)
2: image processing unit
3: speaker
4: display device

Claims (6)

  1. A vehicle periphery monitoring device comprising:
    means for detecting a predetermined object around a vehicle based on an image acquired by an imaging device that images the periphery of the vehicle;
    display means for displaying a display image generated based on the image on a display device so that an occupant of the vehicle can visually recognize it, and for highlighting the object on the display image when the object is in a predetermined positional relationship with the vehicle; and
    means for estimating the visibility state around the vehicle,
    wherein, when the visibility state is estimated to be good, the display means displays a first highlight on the display image at a predetermined distance from the vehicle, and when the visibility state is estimated to be not good, the display means displays the first highlight closer to the vehicle than when the visibility state is estimated to be good.
  2. The vehicle periphery monitoring device according to claim 1, wherein, when the visibility state is estimated to be not good, the display means brings the first highlight closer to the vehicle, up to a predetermined position set so as to be visible to the driver in a situation where the visibility state is not good.
  3. The vehicle periphery monitoring device according to claim 1 or 2, further comprising means for detecting the speed of the vehicle,
    wherein the position at the predetermined distance at which the first highlight is displayed when the visibility state is estimated to be good is a position corresponding to a predetermined expected arrival time of the vehicle calculated based on the speed of the vehicle.
  4. The vehicle periphery monitoring device according to claim 3, wherein the position at the predetermined distance is a first position corresponding to a case where the expected arrival time is a first predetermined value,
    the display means further displays a second highlight at a second position corresponding to a case where the expected arrival time calculated based on the speed of the vehicle is a second predetermined value smaller than the first predetermined value, and
    the display means displays the second highlight at the same position in the display image both when the visibility state is estimated to be good and when it is estimated to be not good.
  5. The vehicle periphery monitoring device according to any one of claims 1 to 4, wherein, when the visibility state is estimated to be not good, the display means displays the first highlight at a position on the display image corresponding to a predetermined position within the irradiation range of the headlights of the vehicle.
  6. The vehicle periphery monitoring device according to claim 5, wherein the predetermined position within the irradiation range of the headlights of the vehicle is changed according to whether or not the headlights are emitting a high beam.
JP2010058280A 2010-03-15 2010-03-15 Vehicle periphery monitoring device Active JP5192009B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2010058280A JP5192009B2 (en) 2010-03-15 2010-03-15 Vehicle periphery monitoring device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2010058280A JP5192009B2 (en) 2010-03-15 2010-03-15 Vehicle periphery monitoring device

Publications (2)

Publication Number Publication Date
JP2011192070A true JP2011192070A (en) 2011-09-29
JP5192009B2 JP5192009B2 (en) 2013-05-08

Family

ID=44796899

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2010058280A Active JP5192009B2 (en) 2010-03-15 2010-03-15 Vehicle periphery monitoring device

Country Status (1)

Country Link
JP (1) JP5192009B2 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000211542A (en) * 1999-01-26 2000-08-02 Mazda Motor Corp Vehicular driving supporting device
JP2006123610A (en) * 2004-10-26 2006-05-18 Toyota Motor Corp Vehicle posture control system
JP2007272350A (en) * 2006-03-30 2007-10-18 Honda Motor Co Ltd Driving support device for vehicle
JP2009040107A (en) * 2007-08-06 2009-02-26 Denso Corp Image display control device and image display control system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013207746A (en) * 2012-03-29 2013-10-07 Mazda Motor Corp Device for photographing rear lateral side of vehicle
JP2015070280A (en) * 2013-09-26 2015-04-13 京セラ株式会社 Image processing system, camera system, and image processing method

Also Published As

Publication number Publication date
JP5192009B2 (en) 2013-05-08

Legal Events

Date | Code | Title | Description
20120919 | A977 | Report on retrieval | Free format text: JAPANESE INTERMEDIATE CODE: A971007
20120925 | A131 | Notification of reasons for refusal | Free format text: JAPANESE INTERMEDIATE CODE: A131
20121121 | A521 | Written amendment | Free format text: JAPANESE INTERMEDIATE CODE: A523
| TRDD | Decision of grant or rejection written |
20130122 | A01 | Written decision to grant a patent or to grant a registration (utility model) | Free format text: JAPANESE INTERMEDIATE CODE: A01
20130130 | A61 | First payment of annual fees (during grant procedure) | Free format text: JAPANESE INTERMEDIATE CODE: A61
| R150 | Certificate of patent or registration of utility model | Free format text: JAPANESE INTERMEDIATE CODE: R150
| FPAY | Renewal fee payment (event date is renewal date of database) | Free format text: PAYMENT UNTIL: 20160208; Year of fee payment: 3