JP2011191859A - Apparatus for monitoring surroundings of vehicle - Google Patents

Apparatus for monitoring surroundings of vehicle

Info

Publication number
JP2011191859A
Authority
JP
Japan
Prior art keywords
vehicle
display
ttc
image
displayed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2010055567A
Other languages
Japanese (ja)
Other versions
JP5192007B2 (en)
Inventor
Hiroshi Iwami
浩 岩見
Original Assignee
Honda Motor Co Ltd
本田技研工業株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honda Motor Co Ltd (本田技研工業株式会社)
Priority to JP2010055567A
Publication of JP2011191859A
Application granted
Publication of JP5192007B2
Legal status: Expired - Fee Related

Abstract

PROBLEM TO BE SOLVED: To enable a driver to recognize the degree of danger around a vehicle in a short time.
SOLUTION: A predetermined object around the vehicle is detected based on an image acquired by an imaging device that images the periphery of the vehicle. A display image generated from the captured image is shown on a display device so that an occupant of the vehicle can view it, and when the object is in a predetermined positional relationship with the vehicle, the object is highlighted on the display image. The speed of the vehicle is detected, and a plurality of auxiliary lines corresponding to different expected arrival times calculated from the vehicle speed are displayed on the display image, with at least one of the color and shape of each auxiliary line made different. Because the auxiliary lines for the different arrival times of the vehicle differ in color or shape, the driver can quickly recognize the degree of danger in a stepwise manner.
COPYRIGHT: (C)2011,JPO&INPIT

Description

  The present invention relates to a vehicle periphery monitoring apparatus, and more specifically to an apparatus that controls the form of display according to the monitoring of the vehicle's surroundings.

  Conventionally, a device that detects objects around a vehicle is mounted on the vehicle and issues an alarm for the detected objects. Patent Document 1 below discloses an apparatus that predicts the arrival time to an obstacle detected by radar, immediately sounds an audible alarm if the arrival time is within 2 seconds, and presents a visual alarm on a display if the arrival time is 2 seconds or more.

JP 2001-23094 A

  Such an alarm is preferably issued in a form that allows the driver to recognize the danger readily. In particular, when the display device is installed in a place that requires the driver to move the line of sight to view it, it is desirable that the alarm on the display device be presented in a form that lets the driver recognize the danger in a shorter time.

  In the above patent document, the arrival time to an obstacle is predicted and the form of the alarm is changed depending on whether the predicted arrival time is within a predetermined time. However, the specific form of the alarm display is not described.

  Accordingly, an object of the present invention is to provide a technique for displaying information so that the driver can recognize the degree of danger in a shorter time.

  According to one aspect of the present invention, a vehicle periphery monitoring device comprises means for detecting a predetermined object around a vehicle based on an image acquired by an imaging device that images the periphery of the vehicle; display means for displaying a display image generated from the captured image on a display device so that a vehicle occupant can view it, and for highlighting the object on the display image when the object is in a predetermined positional relationship with the vehicle; and means for detecting the speed of the vehicle. The display means further displays, on the display image, a plurality of auxiliary lines corresponding to different expected arrival times calculated based on the vehicle speed, with at least one of the color and shape of each auxiliary line made different.

  According to the present invention, because an auxiliary line is displayed for each arrival time of the vehicle, the driver can recognize, stepwise and quickly, the positions the vehicle will reach after the respective times from the color or shape of each auxiliary line, and can therefore quickly recognize the degree of danger. In addition, when an object is present on the predicted course of the vehicle and is highlighted on the display device, the positional relationship between the object and the auxiliary lines can be understood at a glance, so the degree of danger of a collision with that object can also be recognized quickly.

  According to one embodiment of the present invention, among the plurality of auxiliary lines, the auxiliary line closest to the detected object is identified, and at least one of the color and shape of the highlighting applied to the object is displayed so as to correlate with at least one of the color and shape of that auxiliary line.

  By providing such a correlation, the driver can instantly grasp how dangerous the object is. For example, when an object is displayed so as to correlate with the auxiliary line corresponding to the arrival time after t seconds, the driver can quickly recognize that the vehicle may reach the object after t seconds.

  According to one embodiment of the present invention, the device is provided with means for predicting the course of the vehicle, and when the object does not exist on the predicted course, the degree of emphasis on the object is reduced. By reducing the degree of emphasis on an object that is not on the course of the vehicle and therefore poses a lower risk, the driver can be informed that an object of low risk is present.

  Other features and advantages of the present invention will be apparent from the detailed description that follows.

FIG. 1 is a block diagram showing the configuration of a vehicle periphery monitoring device according to one embodiment of the present invention. FIG. 2 shows the mounting positions of the display device and the cameras according to one embodiment of the present invention. FIG. 3 is a flowchart illustrating processing in the image processing unit according to one embodiment of the present invention. FIG. 4 shows an example of the display of auxiliary lines for the respective arrival times of the vehicle according to one embodiment of the present invention. FIG. 5 is a flowchart showing the object detection process according to one embodiment of the present invention.

  Next, an embodiment of the present invention will be described with reference to the drawings. FIG. 1 is a block diagram showing the configuration of a vehicle periphery monitoring device that uses the display device of a navigation device according to an embodiment of the present invention, and FIG. 2 shows how the display device and the cameras are mounted on the vehicle.

  A navigation device is mounted on the vehicle, and the navigation device includes a navigation unit 5 and a display device 4. As shown in FIG. 2A, the display device 4 is attached so as to be visible to the driver at a position separated by a predetermined distance, in the vehicle width direction, from a line that passes through the center of the steering wheel (handle) 21 and extends in the front-rear direction of the vehicle. In this embodiment, the display device 4 is embedded in the dashboard 23 of the vehicle.

  The navigation unit 5 is realized by a computer having a central processing unit (CPU) and a memory. The navigation unit 5 receives GPS signals for measuring the position of the vehicle 10 from artificial satellites via a communication device (not shown) provided in the navigation unit 5, and detects the current position of the vehicle based on the received signals. The navigation unit 5 superimposes the current position on map information around the vehicle (which can be stored in a storage device of the navigation device or received from a predetermined server via the communication device) and displays it on the display screen 25 of the display device 4. The display screen 25 of the display device 4 constitutes a touch panel, and an occupant can input a destination to the navigation unit 5 via the touch panel or another input device 27 such as keys or buttons. The navigation unit 5 can calculate an optimal route of the vehicle to the destination, superimpose an image showing the optimal route on the map information, and display it on the display screen 25 of the display device 4.

  Further, a speaker 3 is connected to the navigation unit 5, and when necessary, for example, when giving route guidance at a stop or an intersection, the occupant can be notified not only by the display on the display device 4 but also by sound or voice through the speaker 3. Note that recent navigation devices are equipped with various other functions such as providing traffic information and guidance to facilities near the vehicle; any appropriate navigation device can be used in this embodiment.

  The vehicle periphery monitoring device is mounted on the vehicle and comprises two infrared cameras 1R and 1L capable of detecting far infrared rays, and an image processing unit 2 for detecting objects around the vehicle based on the image data captured by the cameras 1R and 1L. The image processing unit 2 is connected to the display device 4 and the speaker 3. The display device 4 is used to display the image obtained through imaging by the camera 1R or 1L and to display an alarm about an object around the vehicle detected from that image. The speaker 3 is used to issue an alarm by sound or voice based on the detection result of the object.

  In this embodiment, as shown in FIG. 2B, the cameras 1R and 1L are arranged at the front portion of the vehicle 10, at positions symmetrical with respect to the central axis passing through the center of the vehicle width, so as to image the area ahead of the vehicle 10. The two cameras 1R and 1L are fixed to the vehicle so that their optical axes are parallel to each other and their heights from the road surface are equal. The infrared cameras 1R and 1L have the characteristic that the higher the temperature of an object, the higher the level of the output signal (that is, the higher the luminance in the captured image).

  The image processing unit 2 includes an A/D conversion circuit that converts input analog signals into digital signals, an image memory that stores the digitized image signals, a central processing unit (CPU) that performs various arithmetic processing, a RAM (Random Access Memory) used by the CPU to store data during processing, a ROM (Read Only Memory) that stores programs to be executed by the CPU and data to be used (including tables and maps), and an output circuit that outputs drive signals for the speaker 3 and display signals for the display device 4. The output signals of the cameras 1R and 1L are converted into digital signals and input to the CPU.

  The traveling state detection unit 7 includes a sensor that detects the speed of the vehicle. The sensor for detecting the speed of the vehicle may be a so-called vehicle speed sensor, or alternatively an acceleration sensor or a wheel speed sensor. Furthermore, in this embodiment, the traveling state detection unit 7 includes sensors for detecting other traveling states used for predicting the course of the vehicle; in one embodiment, it can include a yaw rate sensor for detecting the yaw rate of the vehicle and a steering angle sensor for detecting the steering angle of the vehicle.

  Data indicating the traveling state of the vehicle detected by the traveling state detection unit 7 is sent to the image processing unit 2. The image processing unit 2 predicts the course of the vehicle based on the detected traveling state, and displays auxiliary lines in the region of the predicted course on the image that is captured by the cameras 1R and 1L and displayed on the display device 4, so as to inform the driver of the degree of danger.

  Thus, in this embodiment, the display device 4 of the navigation device is used both to display the image obtained through imaging by the cameras 1R and 1L and to display a notification (alarm) to the driver of the presence of a predetermined object detected from that image. Unlike a head-up display (HUD), which is provided so that its screen is displayed on the front window in front of the driver, the display device 4 is located a predetermined distance from the steering wheel 21 in the vehicle width direction; compared with a HUD, the driver therefore needs to move the line of sight in the horizontal direction to view the screen of the display device 4 while driving. It is therefore preferable to present a display that allows the driver to recognize the degree of danger in a shorter time. The present invention realizes this by displaying auxiliary lines, and a specific method is described below.

  FIG. 3 is a flowchart illustrating a process performed by the image processing unit 2 according to one embodiment of the present invention. The process is performed at predetermined time intervals.

  In step S11, the output signals of the cameras 1R and 1L (that is, the captured image data) are received as input, A/D converted, and stored in the image memory. The stored image data is a grayscale image containing luminance information, and this is the image (display image) displayed on the display device 4.

  In step S12, the course of the vehicle is predicted based on the traveling state of the vehicle detected by the traveling state detection unit 7. Any appropriate method can be used for the course prediction. In one embodiment, the course of the vehicle can be predicted based on the detected vehicle speed and yaw rate; for example, the technique described in Japanese Patent Laid-Open No. 7-104062 can be used. Instead of the yaw rate, the steering angle detected by the steering angle sensor may be used; a method for predicting the course of the vehicle based on the vehicle speed and the steering angle is described in, for example, Japanese Patent Application Laid-Open No. 2009-16663.
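
  As a rough illustration only (the patent defers the actual prediction method to the publications cited above), the following Python sketch integrates an assumed constant speed and yaw rate forward in time to obtain points of a predicted course; the horizon, time step, and function name are illustrative assumptions.

```python
import math

def predict_course(speed_mps, yaw_rate_rad_s, horizon_s=3.0, step_s=0.1):
    """Predicted course points (x forward, y left, in metres) in vehicle
    coordinates, assuming speed and yaw rate remain constant."""
    points, x, y, heading = [], 0.0, 0.0, 0.0
    for _ in range(int(horizon_s / step_s)):
        x += speed_mps * step_s * math.cos(heading)
        y += speed_mps * step_s * math.sin(heading)
        heading += yaw_rate_rad_s * step_s
        points.append((x, y))
    return points
```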

  Information acquired by the navigation unit 5 may also be used for the course prediction. For example, assuming that the vehicle maintains its current traveling direction, the route the vehicle would follow from its current position is obtained based on the map information; for a vehicle traveling on a certain road, the route that continues along that road becomes the predicted course. Alternatively, when the vehicle is being guided along the optimal route calculated by the navigation unit 5, that optimal route may be used as the predicted course.

  In step S13, positions corresponding to predetermined expected arrival times TTC of the vehicle are calculated based on the vehicle speed detected by the traveling state detection unit 7. In this embodiment, three values, the first TTC time, the second TTC time, and the third TTC time, are used as the expected arrival times TTC (seconds), with the relationship first TTC time > second TTC time > third TTC time. The position corresponding to the first TTC time (referred to as the first TTC position) is the position the vehicle will reach if it travels along the current predicted course at the currently detected speed for the first TTC time. Therefore, if the current speed of the vehicle is V and the first TTC time is t1 seconds, the point reached by traveling the distance V × t1 along the predicted course from the current position of the vehicle is identified as the first TTC position. The same applies to the second TTC position corresponding to the second TTC time and the third TTC position corresponding to the third TTC time. As a result, in terms of distance from the vehicle, first TTC position > second TTC position > third TTC position.
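
  The relation described here, distance to each TTC position = current speed × TTC time along the predicted course, can be written as a minimal Python sketch. The concrete TTC times (3, 2, and 1 seconds) and the function name are assumptions for illustration; the patent does not specify numerical values.

```python
def ttc_distances(speed_mps, ttc_times_s=(3.0, 2.0, 1.0)):
    """Distances (m) from the vehicle to the first, second and third TTC
    positions; since first > second > third TTC time, the distances keep
    the same ordering."""
    return [speed_mps * t for t in ttc_times_s]

# Example: at 40 km/h (about 11.1 m/s) the auxiliary lines would be drawn
# roughly 33.3 m, 22.2 m and 11.1 m ahead along the predicted course.
first, second, third = ttc_distances(40 / 3.6)
```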

Preferably, the third TTC time is determined so that the distance from the vehicle to the third TTC position is a distance within which the vehicle can stop by a braking operation at a predetermined deceleration. For example, if the predetermined braking deceleration (expressed as a positive value) is G, then V²/(2·G) is the minimum distance in which a vehicle traveling at speed V can stop. Therefore, it is preferable to determine the third TTC time so that the distance from the vehicle to the third TTC position is equal to or greater than this value. In this way, even when the driver performs a braking operation upon recognizing that an object is present at the third TTC position, the object can be avoided more reliably.
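
  A minimal sketch of this constraint, with an assumed deceleration value for illustration: since the distance to the third TTC position is speed × t3 and it must be at least V²/(2·G), the lower bound on the third TTC time is V/(2·G).

```python
def min_third_ttc_time(speed_mps, decel_mps2=6.0):
    """Smallest third TTC time (s) whose TTC position lies at or beyond the
    stopping distance V^2 / (2*G); decel_mps2 (G) is an assumed value."""
    if speed_mps <= 0:
        return 0.0
    stopping_distance_m = speed_mps ** 2 / (2.0 * decel_mps2)
    # speed * t3 >= stopping distance  =>  t3 >= V / (2*G)
    return stopping_distance_m / speed_mps
```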

  In step S14, the image region corresponding to the predicted course is identified on the display image obtained in step S11, predetermined TTC displays are superimposed on the predicted course in the image at the positions corresponding to the first to third TTC positions, and the superimposed image is displayed on the display device 4. Here, the TTC displays superimposed at the first to third TTC positions are shown in different colors and/or different shapes.

  FIG. 4A shows an example of the TTC display superimposed on the display image 101. As described above, the display image is a grayscale image obtained through imaging (fine luminance changes are not shown in the figure for ease of viewing), and the region surrounded by lines 103 and 105 on the image is the image region corresponding to the predicted course. The TTC display is superimposed as auxiliary lines 111 to 113 on the predicted course: the first auxiliary line 111 corresponds to the first TTC position, the second auxiliary line 112 to the second TTC position, and the third auxiliary line 113 to the third TTC position.

  In this embodiment, these auxiliary lines are displayed in different colors (indicated by different types of hatching in the figure): for example, the first auxiliary line 111 is displayed in green or blue, the second auxiliary line 112 in yellow, and the third auxiliary line 113 in red. Thus, as with a traffic light, red is used for the third TTC position with the highest risk and green or blue for the first TTC position with the lowest risk. By using such color coding, the driver can be quickly informed that the risk increases from the first auxiliary line toward the third auxiliary line, and can get a feel for how far from the vehicle the positions of high, medium, and low risk are located.

  In this example, the first to third auxiliary lines 111 to 113 are displayed in different colors, but alternatively or in addition, they may be displayed in different shapes. For example, the first auxiliary line 111 displayed at the first TTC position can be drawn as the thinnest line, the third auxiliary line 113 displayed at the third TTC position as the thickest line, and the second auxiliary line 112 displayed at the second TTC position as a line of intermediate thickness. In this way, the driver can instantly grasp the degree of danger from the difference in color or shape. Furthermore, when an object is imaged, the degree of danger it poses can be understood at a glance from the positional relationship between the object and the auxiliary lines, so the driver is kept from fixing the line of sight on the display device 4 in order to determine where the object is.

  Returning to FIG. 3, step S21 and the subsequent steps perform processing related to the alarm display for a predetermined object detected from the captured image (the detection processing is described later with reference to FIG. 5). In step S21, it is determined whether the detected predetermined object exists on the predicted course. If it does (Yes in S21), the expected arrival time TTC to the object is calculated in step S22. Specifically, the expected arrival time TTC (= distance / relative speed) can be calculated from the distance from the host vehicle to the object and the relative speed of the vehicle with respect to the object. In one embodiment, assuming an object that crosses the course of the vehicle, the speed V of the host vehicle can be used as the relative speed. Alternatively, the object may be tracked over time to calculate its relative speed with respect to the vehicle; a specific method is described in, for example, Japanese Patent Application Laid-Open No. 2001-6096.
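
  A minimal sketch of the TTC computation in step S22, under the simplification named above (the host vehicle speed stands in for the relative speed when no tracked relative speed is available); the names are illustrative.

```python
def expected_arrival_time(distance_m, host_speed_mps, relative_speed_mps=None):
    """Expected arrival time (s) to the object, TTC = distance / relative speed.
    Falls back to the host vehicle speed if no relative speed is supplied."""
    speed = host_speed_mps if relative_speed_mps is None else relative_speed_mps
    if speed <= 0:
        return float("inf")  # not closing on the object
    return distance_m / speed
```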

  In step S23, the TTC time closest to the expected arrival time TTC calculated for the object is selected from the first to third TTC times, and the TTC position corresponding to the selected TTC time is identified. By comparing the expected arrival time TTC to the object against the first to third TTC times in this way, it can be determined how dangerous the position where the object exists is. Alternatively, the TTC position closest to the object's current distance from the vehicle may be selected from the first to third TTC positions.
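
  The selection in step S23 reduces to picking the predefined TTC time with the smallest difference from the object's expected arrival time; a sketch (with the same assumed TTC times as above) follows.

```python
def nearest_ttc_index(object_ttc_s, ttc_times_s=(3.0, 2.0, 1.0)):
    """Index of the TTC time closest to the object's expected arrival time
    (0 = first, 1 = second, 2 = third TTC position)."""
    return min(range(len(ttc_times_s)),
               key=lambda i: abs(ttc_times_s[i] - object_ttc_s))
```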

  In step S24, the object on the display image is highlighted so as to correlate with the TTC display at the TTC position identified in step S23.

  FIG. 4B shows an example in which objects are highlighted on the display image 101 of FIG. 4A. In this example, as described above, the first auxiliary line 111 at the first TTC position is green or blue. Therefore, if the TTC position corresponding to the TTC time closest to the object's expected arrival time TTC is identified as the first TTC position, the object is displayed in green or blue, as indicated by reference numeral 131. Similarly, if it is identified as the second TTC position, the object is displayed in yellow, as indicated by reference numeral 132, and if it is identified as the third TTC position, the object is displayed in red, as indicated by reference numeral 133.

  The object may be displayed in the predetermined color by, for example, converting the color of the pixels in the region extracted as the object into the predetermined color, or by superimposing an icon image of the predetermined color simulating a human figure on the region extracted as the object. In either case, the object is displayed so as to correlate with the color of the TTC position determined to be closest.
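
  As one possible rendering of the first option (recoloring the pixels of the region extracted as the object), the following NumPy sketch blends a per-TTC color into a masked region of the display image. The BGR color values and the blending weight are assumptions for illustration, not values taken from the patent.

```python
import numpy as np

# Assumed colours per TTC index (BGR order): blue, yellow, red.
TTC_COLOURS_BGR = {0: (255, 0, 0), 1: (0, 255, 255), 2: (0, 0, 255)}

def highlight_object(display_bgr, object_mask, ttc_index, alpha=0.6):
    """Blend the colour of the nearest TTC position into the pixels selected
    by the boolean object_mask of an HxWx3 BGR display image."""
    colour = np.array(TTC_COLOURS_BGR[ttc_index], dtype=np.float32)
    out = display_bgr.astype(np.float32)
    out[object_mask] = (1.0 - alpha) * out[object_mask] + alpha * colour
    return out.astype(np.uint8)
```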

  Instead of displaying the object itself in the color corresponding to the respective TTC position as described above, an emphasis frame may be displayed so as to surround the object, with the color of the emphasis frame set to the color corresponding to that TTC position.

  When the first to third auxiliary lines 111 to 113 differ in shape, the object may instead be displayed so as to correlate with the shape of the closest TTC position identified as described above. For example, each auxiliary line is displayed with a different mark, and the same mark as the one attached to the TTC position determined to be closest is attached to the object when it is displayed. In this way, the mutual relationship can be represented by the mark.

  Thus, by displaying the object in a display form similar to that of the corresponding TTC position, the driver can quickly grasp which arrival time the object is associated with. Since the TTC positions are displayed stepwise according to the degree of danger, the danger posed by the object can be judged instantly.

  Returning to FIG. 3, step S25 provides not only the alarm display by highlighting as described above but also an alarm sound through the speaker 3. This alarm sound can also be made different for each TTC position. For example, if the object is displayed so as to correlate with the first TTC position, one short sound (e.g., a single beep) is output; if it correlates with the second TTC position, two short sounds are output; and if it correlates with the third TTC position, one long sound is output. When a plurality of objects exist on the predicted course and are displayed so as to correlate with different TTC positions, the alarm sound for the TTC position with the highest risk may be output. For example, as shown in FIG. 4B, when three pedestrians 131 to 133 are displayed in correlation with the first to third TTC positions, the alarm sound for the third TTC position, which has the highest degree of risk, is output.
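
  A minimal sketch of this selection logic; the beep patterns are placeholder strings standing in for actual speaker output, and the "highest risk" rule is modeled as taking the largest TTC index among the objects on the predicted course.

```python
# Assumed beep patterns per TTC index (0 = first, 2 = third TTC position).
BEEP_PATTERNS = {0: "short beep", 1: "two short beeps", 2: "long beep"}

def select_alarm(ttc_indices):
    """Return the beep pattern for the highest-risk object, or None if no
    object lies on the predicted course."""
    if not ttc_indices:
        return None
    return BEEP_PATTERNS[max(ttc_indices)]
```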

  Returning to step S21, if the predetermined object detected from the captured image does not exist on the predicted course (No in S21), the object is outside the predicted course. In this case, the process proceeds to step S26, and the object is displayed with a reduced degree of emphasis compared with the highlighting of step S24. In one example, the object may not be highlighted at all. In another example, the object can be displayed in a predetermined color that is less conspicuous than the highlighting in step S24. For example, whereas step S24 highlights the object using color, step S26 may highlight it with a monochrome color (for example, an intermediate gray between white and black): the object may be displayed in a predetermined monochrome color, or surrounded by a frame of that monochrome color. In order to distinguish this clearly from the highlighting in step S24, a color different from the colors used for the first to third TTC positions may be used in step S26.

  FIG. 4B also shows a display example of such an object: since the pedestrian indicated by reference numeral 135 is outside the predicted course, it is displayed in a predetermined gray value. In this way, an object outside the predicted course, which poses a low risk, can be presented as a mere notification.

  Of course, when no predetermined object is detected from the acquired image, the processing of step S21 and the subsequent steps is not executed, and only the display image on which the auxiliary lines are superimposed is shown on the display device 4.

  FIG. 5 shows a flowchart of the process for detecting an object from the captured image. In step S31, the right image captured by the camera 1R is used as the reference image (alternatively, the left image may be used), and its image signal is binarized. Specifically, regions brighter than a luminance threshold value ITH are set to "1" (white) and darker regions to "0" (black). By this binarization, an object whose temperature is higher than a predetermined temperature, such as a living body, is extracted as a white region. The luminance threshold value ITH can be determined by any appropriate technique.
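
  The binarization of step S31 amounts to a single threshold comparison; a NumPy sketch is shown below. The threshold value is an assumption, since the patent leaves the choice of ITH open.

```python
import numpy as np

def binarize(grayscale, ith=200):
    """Return a uint8 image with 1 ("white") where luminance exceeds the
    threshold ITH and 0 ("black") elsewhere."""
    return (grayscale > ith).astype(np.uint8)
```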

  In step S32, the binarized image data is converted into run-length data. Specifically, for the regions that have become white through binarization, each horizontal run of white pixels in a pixel row (referred to as a line) is represented by the coordinates of its start point (the leftmost pixel of the line) and its length (expressed as a number of pixels) up to the end point (the rightmost pixel of the line). Here, the y-axis is taken in the vertical direction of the image and the x-axis in the horizontal direction. For example, if the white region in the pixel row at y-coordinate y1 is a line from (x1, y1) to (x3, y1), this line consists of three pixels and is represented by the run-length data (x1, y1, 3).
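
  A small Python sketch of this run-length conversion, matching the (x1, y1, 3) example above; it assumes the binary image can be iterated row by row (for example, a NumPy array or a list of lists).

```python
def to_run_length(binary):
    """Convert a binary image into run-length data: one (x_start, y, length)
    tuple per horizontal run of white pixels."""
    runs = []
    for y, row in enumerate(binary):
        x, width = 0, len(row)
        while x < width:
            if row[x]:
                start = x
                while x < width and row[x]:
                    x += 1
                runs.append((start, y, x - start))
            else:
                x += 1
    return runs
```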

  In steps S33 and S34, the objects are labeled and extracted. That is, among the lines converted into run-length data, lines that overlap each other in the y direction are regarded as one object, and a label is assigned to it. In this way, one or a plurality of objects are extracted.
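
  One common way to realize this grouping (not necessarily the patent's exact procedure) is a union-find over the run-length data produced above, merging runs on adjacent rows whose x-extents overlap; the sketch below returns one label per run.

```python
def label_runs(runs):
    """Assign a label (0, 1, 2, ...) to each (x, y, length) run so that runs
    overlapping across adjacent rows share a label, i.e. form one object."""
    parent = list(range(len(runs)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i, (xi, yi, li) in enumerate(runs):
        for j, (xj, yj, lj) in enumerate(runs):
            # adjacent rows with overlapping x-extents belong to one object
            if j > i and abs(yi - yj) == 1 and xi < xj + lj and xj < xi + li:
                union(i, j)

    labels = {}
    return [labels.setdefault(find(i), len(labels)) for i in range(len(runs))]
```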

  In step S35, it is determined whether each of the objects extracted in this way is a predetermined object to be noted. In this embodiment, the predetermined object to be noted is a pedestrian, and may also include animals. The process of determining whether an object is a pedestrian or an animal can be realized by any appropriate technique. For example, using known pattern matching, the similarity between the extracted object and a predetermined pattern representing a pedestrian is calculated, and if the similarity is high, the object is determined to be a pedestrian; an animal can be determined in the same way. As examples of such determination processing, processes for determining whether an object is a pedestrian are described in, for example, Japanese Patent Application Laid-Open Nos. 2007-241740 and 2007-334751, and processes for determining whether an object is an animal are described in, for example, Japanese Patent Application Laid-Open Nos. 2007-310705 and 2007-310706.

  In this way, the processing from step S21 onward in FIG. 3 is executed for objects determined to be pedestrians (which may include animals).

  Note that the highlighting of steps S24 and S26 described above may be applied only to objects within a predetermined distance from the vehicle. In this case, the predetermined distance is preferably set equal to or greater than the distance of the first TTC position. For example, when the first TTC time is t1 seconds, a value calculated as t2 (≥ t1) × vehicle speed V can be used as the predetermined distance. In step S35 of FIG. 5, among the objects extracted in step S34, only objects within this predetermined distance are examined, and the processing from step S21 onward in FIG. 3 is performed for those determined to be pedestrians (possibly including animals).
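
  The distance gate described here reduces to a single comparison; in the sketch below, t2 is an assumed value chosen to satisfy t2 ≥ t1.

```python
def within_alert_range(object_distance_m, speed_mps, t2_s=4.0):
    """True if the object is within t2 * V of the vehicle and should proceed
    to the alarm processing of step S21 onward."""
    return object_distance_m <= t2_s * speed_mps
```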

  The above embodiment uses the display device provided in the navigation device. As described above, according to the present invention, auxiliary lines of different colors or shapes are displayed for the respective arrival times of the vehicle, so the driver can be made to recognize the degree of danger promptly even when a display device that requires the driver to move the line of sight is used. However, a HUD as described above may also be used as the display device.

  Furthermore, although a far-infrared camera is used in the above embodiment, the present invention is also applicable to other cameras (for example, a visible-light camera).

  Specific embodiments of the present invention have been described above; however, the present invention is not limited to these embodiments.

1R, 1L Infrared camera (imaging means)
2 Image processing unit
3 Speaker
4 Display device

Claims (3)

  1. A vehicle periphery monitoring device comprising: means for detecting a predetermined object around a vehicle based on an image acquired by an imaging device that images the periphery of the vehicle; display means for displaying a display image, generated based on the captured image, on a display device so that a vehicle occupant can visually recognize it, and for highlighting the object on the display image when the object is in a predetermined positional relationship with the vehicle; and means for detecting the speed of the vehicle,
    wherein the display means further displays, on the display image, a plurality of auxiliary lines corresponding to different expected arrival times calculated based on the vehicle speed, and makes at least one of the color and shape of each auxiliary line different.
  2. The vehicle periphery monitoring device according to claim 1, wherein the display means identifies, among the plurality of auxiliary lines, the auxiliary line closest to the detected object, and displays at least one of the color and shape of the highlighting for the object so as to correlate with at least one of the color and shape of the identified auxiliary line.
  3. The vehicle periphery monitoring device according to claim 1 or 2, further comprising means for predicting the course of the vehicle, wherein the display means displays the object with a reduced degree of emphasis when the object does not exist on the predicted course.
JP2010055567A 2010-03-12 2010-03-12 Vehicle periphery monitoring device Expired - Fee Related JP5192007B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2010055567A JP5192007B2 (en) 2010-03-12 2010-03-12 Vehicle periphery monitoring device

Publications (2)

Publication Number Publication Date
JP2011191859A (en) 2011-09-29
JP5192007B2 JP5192007B2 (en) 2013-05-08

Family

ID=44796735

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2010055567A Expired - Fee Related JP5192007B2 (en) 2010-03-12 2010-03-12 Vehicle periphery monitoring device

Country Status (1)

Country Link
JP (1) JP5192007B2 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2894620A1 (en) * 2013-12-27 2015-07-15 Toyota Jidosha Kabushiki Kaisha Vehicle information display device and vehicle information display method
EP2894621A1 (en) * 2013-12-27 2015-07-15 Toyota Jidosha Kabushiki Kaisha Vehicle information display device and vehicle information display method
JP2017004181A (en) * 2015-06-08 2017-01-05 富士通テン株式会社 Obstacle alarm system and obstacle alarm system method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000142284A (en) * 1998-11-13 2000-05-23 Nissan Shatai Co Ltd Obstruction detecting display device
JP2007323578A (en) * 2006-06-05 2007-12-13 Honda Motor Co Ltd Vehicle periphery monitoring device
JP2008123215A (en) * 2006-11-10 2008-05-29 Aisin Seiki Co Ltd Driving support device, method and program
JP2009040107A (en) * 2007-08-06 2009-02-26 Denso Corp Image display control device and image display control system

Also Published As

Publication number Publication date
JP5192007B2 (en) 2013-05-08

Legal Events

A977  Report on retrieval (JAPANESE INTERMEDIATE CODE: A971007; effective date: 2012-09-19)
A131  Notification of reasons for refusal (JAPANESE INTERMEDIATE CODE: A131; effective date: 2012-09-25)
A521  Written amendment (JAPANESE INTERMEDIATE CODE: A523; effective date: 2012-11-21)
TRDD  Decision of grant or rejection written
A01   Written decision to grant a patent or to grant a registration (utility model) (JAPANESE INTERMEDIATE CODE: A01; effective date: 2013-01-22)
A61   First payment of annual fees (during grant procedure) (JAPANESE INTERMEDIATE CODE: A61; effective date: 2013-01-30)
R150  Certificate of patent or registration of utility model (JAPANESE INTERMEDIATE CODE: R150)
FPAY  Renewal fee payment, event date is renewal date of database (PAYMENT UNTIL: 2016-02-08; year of fee payment: 3)
LAPS  Cancellation because of no payment of annual fees