WO2011132388A1 - Vehicle periphery monitoring device - Google Patents
- Publication number
- WO2011132388A1 (PCT/JP2011/002206)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- display
- driver
- vehicle
- information
- Prior art date
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R1/00—Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/20—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/22—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
- B60R1/23—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
- B60R1/24—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view in front of the vehicle
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R1/00—Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/20—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/30—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles providing vision in the non-visible spectrum, e.g. night or infrared vision
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/593—Recognising seat occupancy
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
- G08G1/0962—Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/166—Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/10—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used
- B60R2300/105—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used using multiple cameras
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/10—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used
- B60R2300/106—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used using night vision cameras
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/20—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of display used
- B60R2300/205—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of display used using a head-up display
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/20—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of display used
- B60R2300/207—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of display used using multi-purpose displays, e.g. camera image and navigation or video on same display
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/30—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing
- B60R2300/307—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing virtually distinguishing relevant parts of a scene from the background of the scene
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/80—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
- B60R2300/8033—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for pedestrian protection
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/80—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
- B60R2300/8093—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for obstacle warning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
- G06T2207/30261—Obstacle
Definitions
- the present invention relates to an apparatus for monitoring the periphery of a vehicle, and more specifically to an apparatus for controlling a display form according to monitoring of the periphery of the vehicle.
- an image pickup device that captures an image of the periphery of a vehicle is mounted on the vehicle, and the image of the vehicle periphery acquired by the image pickup device is displayed on a display device. When a predetermined object is present in the acquired image, a warning about the object is issued to alert the driver.
- Patent Document 1: Japanese Patent No. 2929927
- Patent Document 1 describes determining the notification level of information to be notified based on the frequency with which the direction the driver is gazing matches the direction that should be gazed at in order to obtain information necessary for driving. Thereby, appropriate information about situations the driver is not aware of is notified while reducing annoyance to the driver.
- one object of the present invention is to provide a display form that controls the amount of information on each display screen according to the position of the display device, thereby optimizing the balance between the driver's gaze on the display screen and the driver's forward gaze.
- a vehicle periphery monitoring device according to the present invention comprises: detection means for detecting an object around a vehicle based on an image acquired by an imaging device that images the periphery of the vehicle; display means for displaying, on a display screen, a display image generated based on the captured image; and alarm means for warning the driver of the presence of the object through the display means when the object is detected.
- the display means are arranged at a plurality of positions visible to the driver, and the alarm means reduces the amount of information displayed on the display screen of a display means that requires a larger line-of-sight movement, relative to the driver's forward-facing line of sight, for the driver to visually recognize that display screen.
- as the amount of information on a display screen increases, its instantaneous readability decreases. According to the present invention, the amount of information is reduced for display means (display devices) requiring larger line-of-sight movement, so instantaneous readability can be improved, and the driver's gaze on the display screen and forward gaze can therefore be optimized.
- the present invention may further be provided with means for calculating an arrival time until the vehicle reaches the object; the longer the arrival time, the smaller the difference in information amount among the display screens of the display means arranged at the plurality of positions is set.
- the longer the arrival time, the greater the margin for checking the object. In such a case, displaying a similar amount of information on any display means improves the occupants' convenience.
- the present invention may further be provided with means for detecting an occupant other than the driver; when the presence of such an occupant is detected, the reduction of the information amount is suppressed for the display means, among those arranged at the plurality of positions, that is near the position of the occupant.
- according to the present invention, for example, if an occupant is present in the passenger seat, the reduction of the information amount on the display device close to that occupant is suppressed, so that the occupant's convenience can be improved.
- FIG. 3 is a flowchart illustrating a process in an image processing unit according to an embodiment of the present invention.
- FIG. 7 is a flowchart showing a process in an image processing unit according to another embodiment of the present invention.
- a block diagram showing the configuration of a vehicle periphery monitoring device according to yet another embodiment of the present invention.
- FIG. 6 is a flowchart illustrating a process in an image processing unit according to still another embodiment of the present invention.
- a diagram showing an example of display forms on the plurality of display devices according to yet another embodiment of the present invention.
- FIG. 1 is a block diagram illustrating a configuration of a vehicle periphery monitoring device including a plurality of display devices according to an embodiment of the present invention
- FIG. 2 is a diagram illustrating attachment of the plurality of display devices and cameras to the vehicle.
- the plurality of display devices are shown as first to third display devices 31 to 33, and all of the display devices are arranged at a plurality of positions that are visible to the driver.
- the periphery monitoring device is mounted on the vehicle and includes two infrared cameras 1R and 1L capable of detecting far infrared rays, an image processing unit 2 for detecting an object around the vehicle based on image data obtained by the cameras 1R and 1L, a speaker 3 for generating an audible alarm based on the detection result, and a first display device 31 for displaying a display image based on the image from the camera 1R or 1L. Further, the periphery monitoring device includes a yaw rate sensor 6 that detects the yaw rate of the vehicle and a vehicle speed sensor 7 that detects the traveling speed (vehicle speed) of the vehicle; the detection results of these sensors are sent to the image processing unit 2.
- the cameras 1R and 1L are arranged at positions in the front portion of the vehicle 10, symmetrical with respect to the central axis passing through the center of the vehicle width, so as to image the area in front of the vehicle 10.
- the two cameras 1R and 1L are fixed to the vehicle so that their optical axes are parallel to each other and their height from the road surface is equal.
- the infrared cameras 1R and 1L have a characteristic that the level of the output signal becomes higher (that is, the luminance in the captured image becomes higher) as the temperature of the object is higher.
- the first display device 31 is a so-called head-up display (HUD), provided on the front window so that its screen is displayed at a position in front of the driver.
- line L1 passes through the center of the steering wheel 21 of the vehicle and extends in the front-rear direction of the vehicle (in the drawing, it is shown extending in the vertical direction of the page for ease of understanding); it indicates the line-of-sight direction of the driver when the driver faces the front.
- the first display device 31 is arranged such that the center of the display screen in the vehicle width direction is positioned substantially on the line L1.
- a navigation device is mounted on a vehicle, and the device includes a navigation unit 5 and a third display device 33.
- the third display device 33 is disposed on the dashboard of the vehicle, and is provided at a position separated from the line L1 by a predetermined distance.
- the navigation unit 5 is realized by a computer having a central processing unit (CPU) and a memory.
- the navigation unit 5 receives, for example, a GPS signal for measuring the position of the vehicle 10 using an artificial satellite via a communication device (not shown) provided in the navigation unit 5, and receives the GPS signal. Based on this, the current position of the vehicle is detected.
- the navigation unit 5 superimposes the current position on map information around the vehicle (which can be stored in a storage device of the navigation device or received from a predetermined server via the communication device) and displays it on the third display device 33.
- the display screen of the third display device 33 constitutes a touch panel, and the occupant can enter a destination into the navigation unit 5 via the touch panel or another input device (not shown) such as a key or a button.
- the navigation unit 5 can calculate the optimal route of the vehicle to the destination, superimpose an image showing the optimal route on the map information, and display it on the third display device 33.
- a speaker 3 is connected to the navigation unit 5, so that route guidance, such as notification of a temporary stop or an intersection, is given not only by display on the third display device 33 but also to the occupants by sound or voice through the speaker 3.
- recent navigation devices are equipped with various other functions, such as providing traffic information and guidance on facilities in the vicinity of the vehicle; any appropriate navigation device can be used in this embodiment.
- a second display device 32 is further provided; as shown in FIG. 2, it is arranged between the first display device 31 and the third display device 33 in the vehicle width (horizontal) direction, and is provided on the instrument panel. Therefore, the distance of the second display device 32 from the line L1 is smaller than that of the third display device 33.
- the second display device 32 is a liquid crystal display configured to display a plurality of predetermined pieces of information, a so-called multi-information display (MID).
- the second display device 32 is configured to be able to display a plurality of pieces of information related to the running state of the vehicle (such as the vehicle speed, rotational speed, and fuel consumption).
- the image processing unit 2 includes an A/D conversion circuit that converts input analog signals into digital signals, an image memory that stores the digitized image signals, a central processing unit (CPU) that performs various arithmetic processing, a RAM (Random Access Memory) used by the CPU to store data, a ROM (Read Only Memory) storing programs executed by the CPU and data (including tables and maps), and an output circuit for outputting a driving signal for the speaker 3, display signals for the first to third display devices 31 to 33, and the like.
- the output signals of the cameras 1R and 1L are converted into digital signals and input to the CPU.
- each of the first to third display devices 31 to 33 is connected to the image processing unit 2 and can display an image obtained as a result of processing by the image processing unit 2.
- a switching mechanism for the contents to be displayed can be provided for the second and third display devices 32 and 33.
- the second display device 32 can switch between displaying the image from the image processing unit 2 and displaying the normal predetermined information, and the third display device 33 can switch between displaying the image from the image processing unit 2 and displaying the information output from the navigation unit 5.
- a recent vehicle may be provided with a plurality of display devices as described above.
- a display device arranged farther from the line L1 requires a larger amount of line-of-sight movement for the driver to visually recognize its display screen; it is therefore desirable to improve its instantaneous readability.
- in the present invention, the information amount of the display image generated based on the processing result of the image processing unit 2 is controlled according to the position of the display device. Specifically, the information amount of the display image is reduced as the position of the display device becomes farther, in the horizontal direction, from the driver's forward-facing line of sight (the line L1). Thereby, display content with high instantaneous readability can be achieved even for a display device positioned farther from the driver's line of sight.
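The position-dependent control described above can be sketched as follows. This is a minimal Python illustration, not the patent's implementation: the display names, distances, and thresholds are all hypothetical, since the patent specifies no numeric values.

```python
def information_level(distance_from_l1_m):
    """Map a display's lateral distance from the driver's forward
    line of sight (line L1) to an information level:
    2 = full image, 1 = object only, 0 = substantially hidden.
    Thresholds are illustrative assumptions."""
    if distance_from_l1_m < 0.2:      # e.g. head-up display near L1
        return 2
    elif distance_from_l1_m < 0.5:    # e.g. multi-information display
        return 1
    else:                             # e.g. navigation display
        return 0

# Hypothetical lateral distances (meters) for the three displays.
displays = {"HUD": 0.0, "MID": 0.35, "NAVI": 0.7}
levels = {name: information_level(d) for name, d in displays.items()}
```

A farther display thus receives a lower information level, mirroring the first to third images of the embodiment.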
- FIG. 3 is a flowchart showing a process executed by the image processing unit 2 according to an embodiment of the present invention. The process is performed at predetermined time intervals.
- the output signals of the cameras 1R and 1L (that is, captured image data) are received as input, A / D converted, and stored in the image memory.
- the stored image data is a gray scale image including luminance information.
- in step S14, the right image captured by the camera 1R is used as a reference image (alternatively, the left image may be used as the reference image), and the image signal is binarized: a region brighter than a luminance threshold value ITH is set to “1” (white) and a darker region to “0” (black).
- the luminance threshold value ITH can be determined by any appropriate technique.
- by this binarization processing, an object at a temperature higher than a predetermined temperature, such as a living body, is extracted as a white region.
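The binarization of step S14 can be sketched in plain Python as follows; the image is represented as a list of pixel rows, and the determination of ITH is left out because the patent allows any appropriate technique for it.

```python
def binarize(gray, ith):
    """Binarize a grayscale image: pixels brighter than the
    luminance threshold ITH become 1 (white), others 0 (black)."""
    return [[1 if px > ith else 0 for px in row] for row in gray]

# Warm objects have high far-infrared luminance, so they come out
# as white regions. Pixel values here are illustrative.
gray = [
    [10, 10, 200, 210, 10],
    [10, 10, 220, 215, 10],
]
binary = binarize(gray, ith=100)
```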
- in step S15, the binarized image data is converted into run-length data.
- the run-length data is represented, for each pixel row, by the coordinates of the start point of each white region in that row (such a region is referred to as a line, and its start point is its leftmost pixel) and by the length from the start point to the end point (the rightmost pixel of the line), expressed as a number of pixels.
- the y-axis is taken in the vertical direction in the image
- the x-axis is taken in the horizontal direction.
- in steps S16 and S17, objects are labeled and a process of extracting (detecting) the objects is performed. That is, among the lines converted into run-length data, lines having portions overlapping in the y direction are regarded as one object, and a label is assigned to it. In this way, one or a plurality of objects is extracted (detected).
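One plausible reading of the run-length conversion (step S15) and the overlap-based labeling (steps S16 and S17) is sketched below. The `(y, start_x, length)` tuple layout is an assumption for illustration; "overlapping in the y direction" is interpreted as runs on adjacent rows whose x-ranges overlap.

```python
def run_length_encode(binary):
    """Convert a binarized image into run-length data:
    one (row y, start x, length) tuple per horizontal run of 1s."""
    runs = []
    for y, row in enumerate(binary):
        x = 0
        while x < len(row):
            if row[x] == 1:
                start = x
                while x < len(row) and row[x] == 1:
                    x += 1
                runs.append((y, start, x - start))
            else:
                x += 1
    return runs

def label_objects(runs):
    """Group runs on adjacent rows with overlapping x-ranges into
    one labeled object (the S16/S17 extraction step)."""
    labels = [-1] * len(runs)
    current = 0
    for i in range(len(runs)):
        if labels[i] != -1:
            continue
        labels[i] = current
        stack = [i]
        while stack:
            j = stack.pop()
            yj, xj, lj = runs[j]
            for k in range(len(runs)):
                if labels[k] != -1:
                    continue
                yk, xk, lk = runs[k]
                # Adjacent rows and overlapping x-intervals.
                if abs(yk - yj) == 1 and xk < xj + lj and xj < xk + lk:
                    labels[k] = current
                    stack.append(k)
        current += 1
    return labels

runs = run_length_encode([[0, 1, 1], [1, 1, 0], [0, 0, 1]])
labels = label_objects(runs)
```

Here the first two runs merge into one object and the isolated run on the last row becomes a second object.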
- a process of determining whether a detected object is a predetermined object such as a pedestrian can be realized by any appropriate method. For example, using known pattern matching, the similarity between the object extracted as described above and a predetermined pattern representing a pedestrian is calculated, and if the similarity is high, the object can be determined to be a pedestrian. Examples of such determination processing are described in Japanese Patent Application Laid-Open Nos. 2007-241740 and 2007-334751.
- in step S18, the detected object is displayed, thereby outputting an alarm for the object.
- images to be displayed on the first to third display devices 31 to 33 (referred to as first to third images) are generated based on the above-described grayscale image in which the object is captured, and the generated first to third images are displayed on the first to third display devices 31 to 33, respectively. The generation is performed so that the amount of information decreases from the first image to the third image.
- here, the information amount indicates the image content that a person can read from the actually displayed image. The more objects are captured in the image (not only living bodies such as pedestrians but also artificial structures such as buildings and other vehicles), the greater the amount of information, and the more difficult it becomes to read the content of the image instantly (instantaneous readability decreases).
- hereinafter, the display form in which the amount of information decreases from the first image to the third image is referred to as a first display form.
- in one example, the first image is an image in which objects other than the detected object are also visible on the display screen, the second image is an image in which substantially only the detected object is visible on the display screen, and the third image is an image in which even the detected object cannot be seen on the display screen.
- the first image is the gray scale image.
- for the second image, the image region other than the image region corresponding to the object detected in step S17 (the object region) is made substantially unreadable. For example, the luminance value of each pixel in the region other than the object may be decreased by a predetermined value or changed to a predetermined low luminance value. By doing so, a second image in which substantially only the object region is readable can be generated.
- for the third image, the luminance values of all the pixels in the grayscale image are decreased by a predetermined value, or changed to a predetermined low luminance value, so that the image is indecipherable even in the object region. As a result, the third image appears as if nothing is captured, and the captured image can be substantially hidden.
- alternatively, the image display itself on the third display device 33 may simply be suppressed (non-display), without performing any pixel luminance conversion.
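The luminance-based generation of the second and third images described above might be sketched as follows. The drop amount and the binary object-mask format are illustrative assumptions, not values from the patent.

```python
def reduce_luminance(gray, object_mask, drop, protect_object=True):
    """Lower each pixel's luminance by `drop` (clamped at 0).
    With protect_object=True, pixels inside the object region keep
    their original value, yielding a 'second image' in which only
    the detected object stays readable; with protect_object=False,
    every pixel is darkened, approximating the 'third image'."""
    out = []
    for y, row in enumerate(gray):
        new_row = []
        for x, px in enumerate(row):
            if protect_object and object_mask[y][x]:
                new_row.append(px)          # keep object region readable
            else:
                new_row.append(max(0, px - drop))
        out.append(new_row)
    return out

gray = [[100, 200], [120, 40]]
mask = [[0, 1], [0, 0]]                     # object occupies one pixel
second = reduce_luminance(gray, mask, drop=150)
third = reduce_luminance(gray, mask, drop=150, protect_object=False)
```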
- FIG. 4 shows examples of these images: (a1) shows the first image, (b1) the second image, and (c1) the third image.
- the first image is a grayscale image, and is displayed so that other vehicles 103 and street lamps 105 can be seen in addition to the pedestrian 101.
- the second image is obtained by reducing the contrast of the region other than the object, and is substantially an image in which only the object called the pedestrian 101 is visible.
- in the third image, the image is substantially hidden; it is the result of converting all the pixels of the grayscale image to a predetermined low luminance value (in this example, the luminance value of black).
- the amount of information that the driver can read from the display image thus decreases from the first image to the third image. Since the first image contains information such as the pedestrian 101, the other vehicle 103, and the street lamp 105, the driver tries to recognize all of these objects. Since only the pedestrian 101 carries substantial information in the second image, the second image can be read in a shorter time than the first. Since nothing is displayed in the third image, nothing is received as information. The less information is received from the display image, the more the driver can be urged to pay attention forward.
- as another example, the first image may be generated as the grayscale image, and the second image as an image in which the contrast of the entire grayscale image is reduced. A second image with reduced contrast can be generated by changing the luminance value of each pixel in the grayscale image so that the difference between the maximum and minimum luminance values becomes small.
- it is preferable to set the degree of decrease in the contrast of the second image to such an extent that the detected object can be visually recognized. In this way, it is possible to generate a second image in which only the object region is substantially visible.
- the third image can be an image in which the contrast of the second image is further reduced so that the image is substantially not displayed.
- a darkened image may be generated by lowering the luminance value of all the pixels in the grayscale image by a predetermined value. In this case as well, for the second image, it is preferable to lower the luminance value so that only the object region is substantially visible.
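The contrast reduction described above (shrinking the gap between maximum and minimum luminance) might be sketched as follows. Compressing values toward the image mean is one simple way to do it, chosen here for illustration; the patent does not prescribe a specific method.

```python
def reduce_contrast(gray, factor):
    """Compress luminance values toward the image mean so the
    difference between maximum and minimum shrinks.
    factor=1.0 leaves the image unchanged; factor=0.0 flattens it
    completely (substantially hiding the image, as in the third
    image)."""
    flat = [px for row in gray for px in row]
    mean = sum(flat) / len(flat)
    return [[int(mean + (px - mean) * factor) for px in row]
            for row in gray]

gray = [[0, 100], [50, 150]]
mild = reduce_contrast(gray, 0.5)   # second image: object still visible
flat_img = reduce_contrast(gray, 0.0)  # third image: uniform, hidden
```

A bright object region (such as a pedestrian in a far-infrared image) stays relatively readable at a moderate factor, matching the behavior shown in (b2) of FIG. 4.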
- FIG. 4 also shows one form of an image that has undergone such contrast reduction.
- (a2) shows the first image, which is the grayscale image.
- (b2) shows the second image, in which the contrast of the entire grayscale image is lowered. Since the object region in which the pedestrian 101 is captured has a high luminance value in the grayscale image, the pedestrian remains visible as shown in the figure.
- (c2) shows the third image, in which the contrast of the entire image is lowered further than in (b2). By increasing the amount of contrast reduction, the image can be substantially hidden as shown in the figure.
- alternatively, the first image may be generated by increasing the contrast of the grayscale image, the second image may be the grayscale image itself, and the third image may be generated by reducing the contrast of the grayscale image.
- as still another example, the first image is an image in which an emphasis display is added to the object detected in step S17, the second image is an image without such an emphasis display, and the third image is an image that cannot be visually recognized on the display screen.
- (a3) shows the first image; it differs from the image of (a1) in FIG. 4 in that a frame 111 highlighting the detected object (a pedestrian in this embodiment) is added. Since this frame 111 is also recognized by the driver as one piece of information, the first image of (a3) can be regarded as having a larger amount of information than the image of (a1) in FIG. 4.
- (b3) shows the second image, which is the same grayscale image as (a1) in FIG. 4. Alternatively, a second image may be obtained by superimposing an emphasis frame on the image of (b1) or (b2) in FIG. 4.
- (c3) shows the third image, which is the same as (c1) in FIG. 4. Alternatively, an image with reduced contrast as shown in (c2) of FIG. 4 may be used as the third image.
- FIG. 6 is a flowchart of a process executed by the image processing unit 2 according to another embodiment of the present invention. The process is performed at predetermined time intervals. The difference from FIG. 3 is that the display form is changed according to the distance to the object or the arrival time of the vehicle to the object.
- Steps S11 to S17 are the same as those in FIG.
- In step S28, the distance to the object extracted (detected) in step S17 is calculated. This can be done by any known technique; one example is described in Japanese Patent Application Laid-Open No. 2001-6096.
- Alternatively, the time until the vehicle reaches the object may be calculated. This arrival time can be calculated by dividing the distance by the vehicle speed detected by the vehicle speed sensor 7 of the vehicle.
- In step S29, it is determined whether the calculated distance (or arrival time) is greater than a predetermined value. If the determination is No, in step S30 the first to third display images are generated according to the first display form described above and displayed on the first to third display devices 31 to 33, respectively, thereby outputting an alarm for the object. If the determination is Yes, in step S31 the first to third display images are generated according to the second display form and displayed on the first to third display devices 31 to 33, respectively, and an alarm is output for the object.
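The computation of step S28 and the branch of steps S29 to S31 can be sketched as follows; the threshold value, the units, and the handling of a stationary vehicle are illustrative assumptions, since the description does not specify them.

```python
def select_display_form(distance_m, speed_mps, threshold_s=4.0):
    """Steps S28-S29: compute the arrival time, then pick a display form.

    The arrival time is the distance to the object divided by the vehicle
    speed from the vehicle speed sensor 7 (step S28).  If it exceeds the
    threshold (step S29: Yes), the second display form with its smaller
    information-amount differences is used (step S31); otherwise the
    first display form is used (step S30).
    """
    # Assumption: a stationary vehicle never reaches the object, so the
    # relaxed second form applies.
    arrival_s = float("inf") if speed_mps <= 0 else distance_m / speed_mps
    return "second" if arrival_s > threshold_s else "first"
```

For example, an object 100 m ahead at 10 m/s gives a 10 s arrival time and the second form, while the same object at 20 m gives 2 s and the first form.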
- In the first display form, the first to third display images are generated and displayed so that the amount of information decreases from the first image toward the third image.
- The second display form is a technique in which the first to third display images are generated and displayed so as to alleviate (reduce) the differences in information amount among the first to third images relative to the first display form.
- (a4), (b4), and (c4) in FIG. 7 show an example of the second display form; here, the first display form is assumed to be (a1), (b1), and (c1) in FIG. 4.
- (a4) shows the first image, which is the same as (a1) in FIG. 4.
- (b4) shows the second image, which is obtained by reducing the contrast of the region other than the object (here, the pedestrian 101) in the grayscale image of (a1). However, the amount of contrast reduction is smaller than that used to generate the image of (b1) in FIG. 4.
- As a result, information about the vehicle 103, in addition to the pedestrian 101, can be read from the image of (b4); compared with the difference in information amount between (a1) and (b1) in FIG. 4, the difference in information amount between (a4) and (b4) is small.
- (c4) shows the third image, which is obtained by further reducing the contrast of the region other than the object in the image of (b4); it is the same as the image of (b1) in FIG. 4. In the image of (c4), substantially only the pedestrian 101 remains as readable information. Compared with the difference in information amount between (a1) and (c1) in FIG. 4, the difference in information amount between (a4) and (c4) is smaller.
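The region-selective contrast reduction of (b4) and (c4) can be sketched as follows; the pixel-set representation of the object region and the mid-gray scaling are illustrative assumptions, not the actual implementation.

```python
def dim_outside_region(gray, region, factor):
    """Reduce contrast only outside the detected-object region.

    `region` is a set of (row, col) coordinates covering the object
    (e.g. the pedestrian 101); pixels inside it keep their original
    value, while the rest are pulled toward mid-gray (128) by `factor`.
    """
    return [
        [p if (r, c) in region else int(128 + (p - 128) * factor)
         for c, p in enumerate(row)]
        for r, row in enumerate(gray)
    ]

image = [[40, 250, 40]]          # background, object, background
obj = {(0, 1)}                   # the object pixel
second = dim_outside_region(image, obj, 0.5)  # like (b4): background softened
third = dim_outside_region(image, obj, 0.0)   # like (c4): only the object remains
```

A mild factor keeps background detail (the vehicle 103) legible, while a strong factor leaves substantially only the object readable, matching the progression from (b4) to (c4).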
- (a5), (b5), and (c5) of FIG. 7 show another example of the second display form; here, the first display form is assumed to be (a3), (b3), and (c3).
- (a5) shows the first image, which is the same as the image of (a3) described above.
- (b5) shows the second image, which is the same as the image of (b3) described above.
- (c5) shows the third image, which is the same as (b1) in FIG. 4.
- The difference in information amount between (a5) and (b5) is the same as that between (a3) and (b3), but because the pedestrian 101 remains substantially legible in (c5), the difference in information amount between (a5) or (b5) and (c5) is smaller than the difference in information amount between (a3) or (b3) and (c3).
- In the second display form, the difference in information amount between the images displayed on any two of the plurality of display devices can thus be reduced.
- When the distance or arrival time to the object is comparatively long, reducing the difference in the amount of information in this way tolerates a certain amount of gaze time toward the second and third display devices 32 and 33.
- Alternatively, the first to third images may all be the same (in this case, the difference in information amount among the first to third images disappears).
- In the above examples, the first to third images are generated so that the object region can be visually recognized, but the present invention is not limited to this; the third image may also be generated as a substantially hidden image, as shown in (c2) of FIG. 4.
- FIG. 8 shows a block diagram of a vehicle periphery monitoring device according to still another embodiment of the present invention.
- An occupant detection means 9 is provided for detecting whether an occupant other than the driver is present. In this embodiment, it detects whether an occupant is present in the passenger seat as the occupant other than the driver.
- The occupant detection means 9 can be realized by any appropriate means. For example, a seating sensor that detects that a person is seated can be provided in the passenger seat, and the determination can be made from the sensor's output. Alternatively, a camera that images the passenger seat may be provided in the vehicle; if a person is detected in the captured image, the presence of an occupant is detected.
- In this embodiment, when an occupant is detected, the first to third images are generated according to a third display form, instead of the first display form described above, and displayed on the first to third display devices 31 to 33. This process will be described with reference to FIG. 9.
- FIG. 9 is a flowchart of a process executed by the image processing unit 2 in the embodiment of FIG. 8; it is performed at predetermined time intervals. Steps S11 to S17 are the same as those in FIG. 3.
- In step S38, the detection result of the occupant detection means 9 is acquired.
- In step S39, it is determined from the acquired detection result whether an occupant is present in the passenger seat. If no occupant is present in the passenger seat, the process proceeds to step S40, and an alarm is output according to the first display form. This is the same as step S18 in FIG. 3 and step S30 in FIG. 6.
- If an occupant is present in the passenger seat, the process proceeds to step S41, and an alarm is output according to the third display form.
- The third display form suppresses the amount by which the information content of the image displayed on the display device close to the occupant is reduced. Specifically, the display device closest to the occupant is identified, and the information amount of the image displayed on that device is not reduced as it would be in the first display form. In this embodiment, that device is the third display device 33, so the information amount of the third image is changed: the third image is generated so as to have the same amount of information as the second or first image, and is displayed on the third display device 33.
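The branch of steps S38 to S41 and the resulting per-display treatment can be sketched as follows; the three information-level labels, and the assumption that the third display device 33 is the one nearest the passenger seat, are illustrative.

```python
def choose_alarm_images(occupant_in_passenger_seat):
    """Steps S38-S41: pick an information level for each display.

    Without a passenger (step S40), the first display form applies and
    the amount of information decreases toward the displays that require
    a larger gaze movement from the driver.  With a passenger detected
    (step S41), the third display form keeps the display nearest the
    passenger, here the third display device 33, at an undiminished
    level so the passenger can read it and inform the driver.
    """
    if occupant_in_passenger_seat:
        return {"display1": "full", "display2": "reduced", "display3": "full"}
    return {"display1": "full", "display2": "reduced", "display3": "minimal"}
```

Only the display nearest the detected occupant changes between the two branches; the displays aimed at the driver keep their first-display-form levels.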
- FIG. 10 shows an example of the third display form.
- the images (a6) to (c6) respectively show a first image, a second image, and a third image.
- Here, the first display form is assumed to be (a1) to (c1) in FIG. 4.
- The third image of (c6) is the same as the first image of (a6); it can be seen that its information amount is not reduced from that of the first image, unlike in the first display form.
- the third image of (c6) has a larger amount of information than the second image shown in (b6).
- The images (a7) to (c7) show the first, second, and third images, respectively; here, the first display form is assumed to be (a3) to (c3).
- The third image of (c7) is the same as the second image of (b7); it can be seen that its information amount is not reduced from that of the second image, unlike in the first display form. It can also be seen that the reduction in information amount from the first image shown in (a7) is smaller than in the first display form.
- In the above examples, the third image in the third display form is generated in the same manner as the first or second image. Alternatively, the third image need not have as much information as the first or second image, as long as it has a larger amount of information than the third image of the first display form. For example, an image in which substantially only the detected object (in this example, the pedestrian 101) is visible may be used as the third image.
- In this way, the reduction in the amount of information on the display device close to the occupant is suppressed, so a display screen that is easy for the occupant to view can be provided. The occupant can view the display screen and inform the driver of its contents.
- The embodiment using the second display form (FIG. 6) and the embodiment using the third display form (FIG. 10) may also be combined. In that case, the first to third images are generated so that the differences in information amount among them are smaller than in the first display form, while the image for the display device closest to the detected occupant is generated with the reduction in its information amount suppressed compared to the first display form.
- In the above embodiments, three display devices are used, but the present invention can be applied to any number of display devices, provided there are two or more. It is sufficient that the information amount be reduced in the first display form, the difference in information amount be reduced in the second display form, and the reduction in information amount be suppressed in the third display form for some of the display devices; it is not always necessary to do this for all of them.
- Although a far-infrared camera is used in the above embodiments, the present invention can also be applied to other cameras (for example, a visible-light camera).
- In the above embodiments, a pedestrian is detected as the predetermined object, but an animal may be detected instead.
- In the above embodiments, an alarm for the detected object is issued via the display devices; in addition, the driver may also be notified of the presence of the object via the speaker 3.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Mechanical Engineering (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Traffic Control Systems (AREA)
- Closed-Circuit Television Systems (AREA)
- Image Processing (AREA)
Abstract
Description
2 Image processing unit
3 Speaker
31 First display device
32 Second display device
33 Third display device
Claims (3)
- A vehicle periphery monitoring device comprising: means for detecting an object in the vicinity of a vehicle based on an image acquired by an imaging device that images the surroundings of the vehicle; display means for displaying, on a display screen, a display image generated based on the captured image; and alarm means for alerting a driver to the presence of the object via the display means when the object is detected,
wherein the display means are arranged at a plurality of positions visible to the driver, and the alarm means reduces the amount of information displayed on the display screen of a display means to a greater extent the larger the gaze movement, measured from the driver's forward-facing line of sight, required for the driver to view that display screen. - The periphery monitoring device further comprising means for calculating an arrival time until the vehicle reaches the object,
wherein the longer the arrival time, the smaller the difference in information amount between the display screens of the display means arranged at the plurality of positions is set to be,
The periphery monitoring device according to claim 1. - The periphery monitoring device further comprising means for detecting an occupant other than the driver of the vehicle,
wherein, when the presence of the occupant is detected, the amount by which the information is reduced is suppressed for the display means, among those arranged at the plurality of positions, that is close to the position of the occupant,
The periphery monitoring device according to claim 1 or 2.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012511534A JP5689872B2 (ja) | 2010-04-19 | 2011-04-14 | 車両の周辺監視装置 |
US13/639,924 US20130044218A1 (en) | 2010-04-19 | 2011-04-14 | System for monitoring surroundings of a vehicle |
CN201180018482.8A CN102859567B (zh) | 2010-04-19 | 2011-04-14 | 车辆周边监视装置 |
EP11771731.4A EP2546819B1 (en) | 2010-04-19 | 2011-04-14 | Device for monitoring vicinity of vehicle |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010096054 | 2010-04-19 | ||
JP2010-096054 | 2010-04-19 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011132388A1 true WO2011132388A1 (ja) | 2011-10-27 |
Family
ID=44833932
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/002206 WO2011132388A1 (ja) | 2010-04-19 | 2011-04-14 | 車両の周辺監視装置 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20130044218A1 (ja) |
EP (1) | EP2546819B1 (ja) |
JP (1) | JP5689872B2 (ja) |
CN (1) | CN102859567B (ja) |
WO (1) | WO2011132388A1 (ja) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2881927A4 (en) * | 2012-07-30 | 2016-03-23 | Ichikoh Industries Ltd | VEHICLE WARNING DEVICE AND EXTERIOR VEHICLE MIRROR DEVICE |
KR20180050360A (ko) * | 2015-09-30 | 2018-05-14 | 닛산 지도우샤 가부시키가이샤 | 정보 제시 장치 및 정보 제시 방법 |
WO2019198172A1 (ja) * | 2018-04-11 | 2019-10-17 | 三菱電機株式会社 | 視線誘導装置および視線誘導方法 |
US10960761B2 (en) | 2017-07-05 | 2021-03-30 | Mitsubishi Electric Corporation | Display system and display method |
JP2022040146A (ja) * | 2018-10-09 | 2022-03-10 | 住友建機株式会社 | ショベル |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9073484B2 (en) * | 2010-03-03 | 2015-07-07 | Honda Motor Co., Ltd. | Surrounding area monitoring apparatus for vehicle |
JP5991647B2 (ja) * | 2013-03-28 | 2016-09-14 | 株式会社デンソー | 車両用周辺監視制御装置 |
JP6205923B2 (ja) * | 2013-07-11 | 2017-10-04 | 株式会社デンソー | 走行支援装置 |
JP6252365B2 (ja) * | 2014-06-11 | 2017-12-27 | 株式会社デンソー | 安全確認支援システム、安全確認支援方法 |
US9950619B1 (en) | 2015-09-30 | 2018-04-24 | Waymo Llc | Occupant facing vehicle display |
DE102016201939A1 (de) * | 2016-02-09 | 2017-08-10 | Volkswagen Aktiengesellschaft | Vorrichtung, Verfahren und Computerprogramm zur Verbesserung der Wahrnehmung bei Kollisionsvermeidungssystemen |
US10540689B2 (en) * | 2016-06-27 | 2020-01-21 | International Business Machines Corporation | System, method, and recording medium for fuel deal advertisements |
WO2019031291A1 (ja) * | 2017-08-10 | 2019-02-14 | 日本精機株式会社 | 車両用表示装置 |
CN111251994B (zh) * | 2018-11-30 | 2021-08-24 | 华创车电技术中心股份有限公司 | 车辆周边物件检测方法及车辆周边物件检测系统 |
JP2021002182A (ja) * | 2019-06-21 | 2021-01-07 | 矢崎総業株式会社 | 車両警報システム |
JP7349859B2 (ja) * | 2019-09-18 | 2023-09-25 | 株式会社Subaru | 車外モニタ装置 |
JP7304378B2 (ja) * | 2021-03-30 | 2023-07-06 | 本田技研工業株式会社 | 運転支援装置、運転支援方法、およびプログラム |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2929927B2 (ja) | 1993-12-14 | 1999-08-03 | 日産自動車株式会社 | 走行情報提供装置 |
JP2000168474A (ja) * | 1998-12-11 | 2000-06-20 | Mazda Motor Corp | 車両の警報装置 |
JP2000168403A (ja) * | 1998-12-11 | 2000-06-20 | Mazda Motor Corp | 車両の表示装置 |
JP2001006096A (ja) | 1999-06-23 | 2001-01-12 | Honda Motor Co Ltd | 車両の周辺監視装置 |
JP2006224700A (ja) * | 2005-02-15 | 2006-08-31 | Denso Corp | 車両用死角監視装置及び車両用運転支援システム |
JP2007241740A (ja) | 2006-03-09 | 2007-09-20 | Honda Motor Co Ltd | 車両周辺監視装置 |
JP2007334751A (ja) | 2006-06-16 | 2007-12-27 | Honda Motor Co Ltd | 車両周辺監視装置 |
JP2008123443A (ja) * | 2006-11-15 | 2008-05-29 | Aisin Aw Co Ltd | 運転支援装置 |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU2003300514A1 (en) * | 2003-12-01 | 2005-06-24 | Volvo Technology Corporation | Perceptual enhancement displays based on knowledge of head and/or eye and/or gaze position |
US7561966B2 (en) * | 2003-12-17 | 2009-07-14 | Denso Corporation | Vehicle information display system |
JP4650349B2 (ja) * | 2005-10-31 | 2011-03-16 | 株式会社デンソー | 車両用表示システム |
JP4855158B2 (ja) * | 2006-07-05 | 2012-01-18 | 本田技研工業株式会社 | 運転支援装置 |
JP5194679B2 (ja) * | 2007-09-26 | 2013-05-08 | 日産自動車株式会社 | 車両用周辺監視装置および映像表示方法 |
JP2009205268A (ja) * | 2008-02-26 | 2009-09-10 | Honda Motor Co Ltd | 障害物表示装置 |
JP5341402B2 (ja) * | 2008-06-04 | 2013-11-13 | トヨタ自動車株式会社 | 車載表示システム |
JP2010044561A (ja) * | 2008-08-12 | 2010-02-25 | Panasonic Corp | 乗物搭載用監視装置 |
CN102063204A (zh) * | 2009-11-13 | 2011-05-18 | 深圳富泰宏精密工业有限公司 | 触控笔 |
WO2011108217A1 (ja) * | 2010-03-01 | 2011-09-09 | 本田技研工業株式会社 | 車両の周辺監視装置 |
-
2011
- 2011-04-14 EP EP11771731.4A patent/EP2546819B1/en active Active
- 2011-04-14 US US13/639,924 patent/US20130044218A1/en not_active Abandoned
- 2011-04-14 CN CN201180018482.8A patent/CN102859567B/zh not_active Expired - Fee Related
- 2011-04-14 WO PCT/JP2011/002206 patent/WO2011132388A1/ja active Application Filing
- 2011-04-14 JP JP2012511534A patent/JP5689872B2/ja not_active Expired - Fee Related
Non-Patent Citations (1)
Title |
---|
See also references of EP2546819A4 |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2881927A4 (en) * | 2012-07-30 | 2016-03-23 | Ichikoh Industries Ltd | VEHICLE WARNING DEVICE AND EXTERIOR VEHICLE MIRROR DEVICE |
US9919649B2 (en) | 2012-07-30 | 2018-03-20 | Ichikoh Industries, Ltd. | Warning device for vehicle and outside mirror device for vehicle |
KR20180050360A (ko) * | 2015-09-30 | 2018-05-14 | 닛산 지도우샤 가부시키가이샤 | 정보 제시 장치 및 정보 제시 방법 |
KR101957117B1 (ko) * | 2015-09-30 | 2019-03-11 | 닛산 지도우샤 가부시키가이샤 | 정보 제시 장치 및 정보 제시 방법 |
US10538252B2 (en) | 2015-09-30 | 2020-01-21 | Nissan Motor Co., Ltd. | Information presenting device and information presenting method |
US10960761B2 (en) | 2017-07-05 | 2021-03-30 | Mitsubishi Electric Corporation | Display system and display method |
WO2019198172A1 (ja) * | 2018-04-11 | 2019-10-17 | 三菱電機株式会社 | 視線誘導装置および視線誘導方法 |
JP2022040146A (ja) * | 2018-10-09 | 2022-03-10 | 住友建機株式会社 | ショベル |
Also Published As
Publication number | Publication date |
---|---|
EP2546819A4 (en) | 2014-01-15 |
JPWO2011132388A1 (ja) | 2013-07-18 |
EP2546819B1 (en) | 2015-06-03 |
EP2546819A1 (en) | 2013-01-16 |
CN102859567B (zh) | 2015-06-03 |
US20130044218A1 (en) | 2013-02-21 |
CN102859567A (zh) | 2013-01-02 |
JP5689872B2 (ja) | 2015-03-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5689872B2 (ja) | 車両の周辺監視装置 | |
JP5706874B2 (ja) | 車両の周辺監視装置 | |
JP5503728B2 (ja) | 車両の周辺監視装置 | |
JP5577398B2 (ja) | 車両の周辺監視装置 | |
JP6379779B2 (ja) | 車両用表示装置 | |
JP2010130646A (ja) | 車両周辺確認装置 | |
JP5855206B1 (ja) | 車両用透過表示装置 | |
JP6182629B2 (ja) | 車両用表示システム | |
EP3694740A1 (en) | Display device, program, image processing method, display system, and moving body | |
JP6857695B2 (ja) | 後方表示装置、後方表示方法、およびプログラム | |
JP2010044561A (ja) | 乗物搭載用監視装置 | |
WO2018042976A1 (ja) | 画像生成装置、画像生成方法、記録媒体、および画像表示システム | |
JP5131152B2 (ja) | 視覚支援装置 | |
JP5192007B2 (ja) | 車両の周辺監視装置 | |
JP5192009B2 (ja) | 車両の周辺監視装置 | |
JP7481333B2 (ja) | 表示装置 | |
JP2007280203A (ja) | 情報提示装置、自動車、及び情報提示方法 | |
JP5831331B2 (ja) | 車両の後側方撮影装置 | |
JP2005294954A (ja) | 表示装置及び補助表示装置 | |
JP2010191666A (ja) | 車両の周辺監視装置 | |
JP2006015803A (ja) | 車両用表示装置、および車両用表示装置を搭載した車両 | |
JP7236674B2 (ja) | 表示装置 | |
JP2005075081A (ja) | 車両の表示方法及び表示装置 | |
JP2006078635A (ja) | 前方道路標示制御装置および前方道路標示制御プログラム | |
JP2020158019A (ja) | 表示制御装置、表示制御方法、および、表示制御プログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201180018482.8 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11771731 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2012511534 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13639924 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2011771731 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |