DE102006012773A1 - Object e.g. animal, image displaying method for e.g. driver assisting system, involves dividing image data of objects into two types of fields whose image data are manipulated in different way, where one field has surrounding field

Object e.g. animal, image displaying method for e.g. driver assisting system, involves dividing image data of objects into two types of fields whose image data are manipulated in different way, where one field has surrounding field

Info

Publication number
DE102006012773A1
Authority
DE
Germany
Prior art keywords
image data
image
areas
characterized
objects
Prior art date
Legal status
Withdrawn
Application number
DE200610012773
Other languages
German (de)
Inventor
Helmuth Dr.-Ing. Eggers
Gerhard Dipl.-Ing. Kurz
Otto Dr.-Ing. Löhlein
Matthias Dipl.-Ing. Oberländer
Werner Dr.Rer.Nat. Ritter
Roland Dipl.-Inf. Schweiger
Current Assignee
Daimler AG
Original Assignee
DaimlerChrysler AG
Priority date
Filing date
Publication date
Application filed by DaimlerChrysler AG
Priority to DE200610012773
Priority claimed from DE102006047777A
Publication of DE102006012773A1
Legal status: Withdrawn


Classifications

    • B60R1/00 Optical viewing arrangements
    • G06T11/00 2D [Two Dimensional] image generation
    • B60R2300/106 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of camera system used, using night vision cameras
    • B60R2300/205 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of display used, using a head-up display
    • B60R2300/30 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of image processing
    • B60R2300/301 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of image processing, combining image information with other obstacle sensor information, e.g. using RADAR/LIDAR/SONAR sensors for estimating risk of collision
    • B60R2300/307 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of image processing, virtually distinguishing relevant parts of a scene from the background of the scene
    • B60R2300/8053 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the intended use of the viewing arrangement, for bad weather conditions or night vision

Abstract

The method involves highlighting recognized objects by dividing the image data (1) into two types of regions. The first type of region comprises the recognized objects to be highlighted together with a surrounding area directly adjacent to each object. The second type of region comprises the image data not assigned to the first type. The image data of the two types of regions are then manipulated in different ways. Independent claims are also included for: (1) an image display comprising a camera, and (2) the use of the image display in a driver assistance system.

Description

  • The invention relates to a method for highlighting objects in image data, and to an image display suitable for carrying out the method, according to the preambles of claims 1 and 17.
  • To support drivers in guiding their vehicles, assistance systems are increasingly used which capture image data of the vehicle's surroundings by means of camera systems and present them on a display. In particular, to support drivers during night driving, so-called night vision systems are known which extend the area visible to the driver beyond the range illuminated by the dipped beam. Here, the image data of the surroundings are captured in the infrared wavelength range and presented to the driver. Since image data originating from the infrared wavelength range are, owing to their unusual appearance, not directly accessible to a driver, it is advisable to process this image data by means of image processing systems before presenting it.
  • In order to draw a driver's attention to a pedestrian, the German publication DE 101 31 720 A1 describes showing, on a head-up display in the driver's field of vision, a symbolic representation of a pedestrian located in the vehicle's surroundings, at the position where the pedestrian is. To emphasize this further, it is also proposed to display a frame around the symbolic representation.
  • In order to avoid obscuring relevant details through symbolic overlays on the image information, it is, however, also conceivable to manipulate the image data of relevant objects by means of image processing before displaying them. The German patent application DE 10 2004 034 532 A1 thus proposes to highlight relevant objects by manipulating their image data so as to brighten or color them in the resulting image representation. In addition, the German Offenlegungsschrift DE 10 2004 028 324 A1 describes highlighting objects, especially living beings, in the image data by enhancing their contours.
  • In order to further improve the distinguishability of objects highlighted in the image data in this way, the German Offenlegungsschrift DE 102 59 882 A1 additionally proposes to subject the objects to a type classification before the color manipulation of the image data, and to apply a type-specific coloring on that basis.
  • The object of the invention is to find a method, and an image display suitable for carrying out the method, with which the highlighting of relevant objects in image data can be further improved.
  • This object is achieved by a method, and by an image display suitable for carrying out the method, with the features of claims 1 and 17. Advantageous developments and refinements of the invention are described in the dependent claims.
  • In the method for image display, image data from the surroundings of a vehicle are recorded by means of a camera, and the image data (1) are at least partially shown on a display after object recognition and image processing. After the image data have been captured by the camera, they are processed by an object recognition in order to detect objects (2, 3, 4, 5) in the recorded image data. The recognized objects (2, 3, 4, 5) are at least partially highlighted in the representation of the image data (1) on the display. According to the invention, the highlighting of the detected objects (2, 3, 4, 5) takes place in such a way that the image data to be displayed (1) are divided into two types of regions. The first type of region comprises the recognized objects to be highlighted (2, 3, 4, 5) together with a surrounding area (2a, 3a, 4a) directly adjacent to each of them. The second type of region comprises those image data (1) which have not been assigned to the first type. The highlighting of the objects in the image data (1) then takes place in such a way that the image data of the two types of regions are manipulated in different ways.
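The division of the image data into the two kinds of regions can be sketched as follows. This is a minimal Python/NumPy illustration, not the patent's implementation; the function name `region_masks`, the corner-box format, and the fixed pixel margin are illustrative assumptions.

```python
import numpy as np

def region_masks(shape, boxes, margin=12):
    """Divide an image plane into the two kinds of regions.

    shape  : (height, width) of the image data
    boxes  : detected objects as (x0, y0, x1, y1) pixel boxes
    margin : width of the directly adjacent surrounding area, in pixels

    Returns a boolean mask of the first kind of region (objects plus
    surround); the second kind is simply its complement.
    """
    h, w = shape
    first = np.zeros((h, w), dtype=bool)
    for x0, y0, x1, y1 in boxes:
        # Grow each object box by the surrounding margin, clipped to the image.
        first[max(0, y0 - margin):min(h, y1 + margin),
              max(0, x0 - margin):min(w, x1 + margin)] = True
    return first

# Example: one detected pedestrian box in a 100x100 frame.
first_kind = region_masks((100, 100), [(40, 30, 60, 80)], margin=10)
second_kind = ~first_kind
```

Note that the mask deliberately exceeds the detected box: the margin is what later produces the "flashlight" surround, so an imprecise object contour is tolerable.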
  • In contrast to the prior art, in which the image data of the objects to be highlighted are either replaced by symbolic representations or in which specifically the image data directly assignable to the object are manipulated (contrast enhancement or color change), the highlighting of the objects (2, 3, 4, 5) is now achieved in a particularly advantageous manner by emphasizing not only the object itself but additionally the image area immediately adjacent to it. This combination yields an image region of the first type, which is subsequently treated uniformly in the manner according to the invention. Treating an object jointly with its adjoining immediate surrounding area profitably gives the viewer of the image display the impression of a virtual illumination of the object and its immediate environment (flashlight effect). Especially in night scenes, this creates an extremely intuitively perceptible emphasis of the object to be highlighted.
  • A further advantage stems from the fact that, although the relevant objects to be highlighted must be recognized in the image data recorded by the camera, their object contour need not be determined exactly. This is because the image regions of the first type comprise, in addition to the objects, their immediate surrounding area. With suitable dimensioning of these direct surrounding areas, it is generally achieved that the objects are highlighted in their full extent even when the object recognition algorithm of the image processing could not extract the object in its entirety from the image scene; for complex objects such as pedestrians, this is often the case in weakly lit scenes with poor image contrast.
  • The invention is explained in detail below with the aid of figures, in which:
  • 1 shows a traffic scene in front of a vehicle,
  • 2 the image data (1) of the traffic scene of 1, after object recognition, with highlighted image areas of the first type containing the pedestrians (2, 3),
  • 3 the image data (1) of the traffic scene of 1, after object recognition, with highlighted image areas of the first type containing the pedestrians (2, 3), as well as image areas of the third type comprising the vehicles (4, 5),
  • 4 the image data (1) of the traffic scene of 1, after object recognition, with highlighted image areas of the first type containing the pedestrians (2, 3), and a further image area of the first type containing the preceding vehicle (4).
  • 1 shows a typical traffic scene in front of a motor vehicle, as the driver can observe it when looking through the windshield, or as it can be captured by the camera associated with the image processing system according to the invention. The traffic scene includes a road with lane boundaries (6). Along the course of the road, several trees (7) stand in the surrounding area. On the road, in front of the driver's own vehicle, there is a preceding or parked vehicle (4), while an oncoming vehicle (5) is located on the opposite lane. Furthermore, the traffic scene comprises a person (2) moving from the left onto the roadway in the foreground of the scene, as well as a group of people (3) at a greater distance, moving from the right onto the roadway in the area of the groups of trees. Observing this traffic scene demands a great deal of attention from the driver, since a multitude of different objects must be perceived and tracked; this is quite demanding, for example, for the distant group of people (3) in the area of the trees (7). In addition, the driver is also required to assess the risk relevance of the objects in terms of their position and movement relative to his own vehicle.
  • In order to relieve the driver of this task, it therefore makes sense to capture the traffic scene by means of a camera system and to process the image data thus obtained so that, when shown on a display, they facilitate comprehension of the traffic scene. Here it is useful to select the objects requiring particular attention in the image data and to display them highlighted in the representation. Such processed image data (1) of the traffic scene illustrated in 1 are shown in 2. The image data obtained from the traffic scene by means of a camera were here subjected to an object classification, by means of which the pedestrians (2, 3) contained in the traffic scene were recognized, so that they could be highlighted in the displayed image data together with their immediate surrounding areas (2a, 3a).
  • Of course, depending on the field and purpose of the application, the object classification can be directed at detecting further or different objects. It is thus conceivable to detect, in addition to the pedestrians, also vehicles on the roadway; in other fields of application, the objects to be recognized could also be traffic signs or pedestrian crossings. On the other hand, the object recognition or classification can be designed so that the directions of movement and speeds of the objects are taken into account, so that, for example, a person located further away and moving away from the roadway is no longer selected and highlighted in the presentation.
  • As can clearly be seen in 2, the objects (2, 3) recognized by the object classification and to be emphasized in the representation are each combined with their immediate surrounding area (2a, 3a), resulting in two regions of the first type, which are then treated differently from the remaining image data (image regions of the second type) in order to achieve their emphasis.
  • In order to highlight the image areas of the first type when displaying the image data, it is possible on the one hand to brighten the image data assigned to these areas. On the other hand, a highlighting of the image areas of the first type can also be achieved by lowering the intensity of the image areas of the second type. This corresponds to the image data (1) schematically represented in 2. A comparison with the intensities of the traffic scene illustrated in 1 shows that the objects (2, 3) and their directly surrounding image areas (2a, 3a) were kept constant in intensity, while the other image areas were significantly lowered in intensity and thus in perceptibility. The image areas of the first type are thus highlighted, as it were, by laying a kind of dark veil over the remaining image areas. Depending on the particular field and purpose of the application, however, it may also be profitable to simultaneously brighten the image areas of the first type and lower the intensity of the image areas of the second type.
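The "dark veil" manipulation described above can be sketched as follows. This is an illustrative Python/NumPy fragment under simplifying assumptions (grayscale image normalized to [0, 1]); `dark_veil` and its parameters are hypothetical names, not from the patent.

```python
import numpy as np

def dark_veil(image, first_kind, dim=0.35, brighten=1.0):
    """Highlight first-kind regions by lowering the intensity everywhere else.

    image      : grayscale image as float array in [0, 1]
    first_kind : boolean mask of objects plus their surround
    dim        : factor applied to second-kind regions (the 'dark veil')
    brighten   : optional gain for first-kind regions (1.0 = keep as recorded)
    """
    out = image * dim                                   # veil over the whole frame ...
    out[first_kind] = np.clip(image[first_kind] * brighten, 0.0, 1.0)
    return out                                          # ... lifted where objects are

# Example: a uniform frame with a small first-kind region in the middle.
frame = np.full((4, 4), 0.8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
shown = dark_veil(frame, mask)
```

Setting `brighten` above 1.0 realizes the variant in which both manipulations are applied at once.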
  • From 2 it can also be seen that the additional highlighting of the area (2a, 3a) directly surrounding the relevant objects (2, 3) also, in a particularly advantageous manner, emphasizes further detail information useful for assessing the traffic scene; here, the depiction of the roadside falls within the highlighted image area of the first type. If, as described in the prior art, only the object (2, 3) itself were highlighted in the image data, environmental details that are likewise important for assessing the traffic situation would recede into the background, making perception by the driver additionally more difficult.
  • The perceptibility of the manipulated image data (1) of the traffic scene presented to the driver by means of the image display can also be improved by, alternatively or in addition to darkening the image areas of the second type, at least partially simplifying these image areas, for example rendering them with reduced contrast, or schematically, for example by overlaying them with a texture, or symbolically. This makes sense in particular for subregions located further away from the road, which are in any case irrelevant for the driver's estimation of potential dangers.
  • 3 shows a further particularly advantageous embodiment of the invention. Here, those image data from the image areas of the second type which are assigned to other recognized objects (4, 5) are assigned to a third type of region. As shown here, the other objects can be, for example, vehicles (4, 5). An image representation primarily focused on the visualization of pedestrians (2, 3) can thus be further improved by additionally recognizing other objects, here vehicles, and representing them separately in image areas of the third type. As can clearly be seen in 3, in addition to the highlighted image areas of the first type, the vehicles (4, 5) contained in the image areas of the third type are also depicted. It is thus possible, in addition to the objects to be highlighted in the image areas of the first type, to clearly depict further potentially relevant objects on the image display. In this case, however, it is advisable to manipulate the image data of the image regions of the third type in a different manner than those of the first type, in particular in order to avoid unnecessarily weakening the emphasis of the objects in the image regions of the first type. In a preferred embodiment, in which the image areas of the first type are brightened and the image areas of the second type are lowered in intensity, the image areas of the third type could be displayed with the intensity originally captured by the camera, so that an easily perceivable three-level intensity grouping results in the picture.
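The three-level intensity grouping of the preferred embodiment can be sketched as follows (again an illustrative Python/NumPy fragment; `three_level` and the numeric gains are assumptions chosen for the example):

```python
import numpy as np

def three_level(image, first_kind, third_kind, gain=1.3, dim=0.35):
    """Three-level intensity grouping: first-kind regions brightened,
    third-kind regions shown at the originally recorded intensity,
    second-kind regions dimmed ('dark veil')."""
    out = image * dim                       # second kind: veil everywhere first
    out[third_kind] = image[third_kind]     # third kind: original camera intensity
    out[first_kind] = np.clip(image[first_kind] * gain, 0.0, 1.0)
    return out

# Example: disjoint first- and third-kind regions in a uniform frame.
frame = np.full((6, 6), 0.5)
first = np.zeros((6, 6), dtype=bool)
first[0:2, 0:2] = True                      # e.g. a pedestrian plus surround
third = np.zeros((6, 6), dtype=bool)
third[4:6, 4:6] = True                      # e.g. a vehicle
shown3 = three_level(frame, first, third)
```

The ordering matters: first-kind pixels are written last so an overlap with a third-kind region would keep the stronger emphasis.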
  • As an advantageous alternative to the direct manipulation of the image data assigned to the image areas of the first type in the sense of a brightening or the like, it is also conceivable to draw on the data of other sensors or sensor streams and to use these to replace the original image data supplied by the camera of the image display system. For example, the image data of an infrared camera could be partially replaced by the image data of a color camera also located in the vehicle. This can be of great advantage especially if the objects to be highlighted are traffic lights. Here, in the 'black-and-white' data of the infrared camera, the highlighting of the traffic lights could be effected by replacing the image data in the image areas of the first type with colored image information from a color camera. Since the image areas of the first type comprise both the relevant objects (here, traffic lights) and their immediate environment, it is not particularly critical if, owing to parallax errors between the two camera systems, jumps or distortions in the image representation of the traffic scene result at the transitions between the image areas of the first type and those of the second type. Alternatively, it would very well also be conceivable, for example, to replace an image area of the first type in an infrared image with processed information from a radar system, in particular a high-resolution (imaging) radar system.
  • The image representation in 4 is a special embodiment, already described above, of the object recognition. In this case, by means of the object recognition, the persons (2, 3) as well as another object located directly in front of the driver's own vehicle (4), here likewise a vehicle, were selected as objects to be highlighted. From 4 it can be seen that the outline of the surrounding area (2a, 3a, 4a) associated with the objects (2, 3, 4, 5) in the image areas of the first type can be chosen differently. Thus, the surrounding areas 2a and 3a have an elliptically shaped outline, while the direct surrounding area (4a) associated with the vehicle 4 has an outline substantially adapted to the object shape. Of course, depending on the application, any other types of outline are conceivable, in particular round or rectangular ones.
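An elliptical first-kind outline, as used for the surrounding areas 2a and 3a, can be sketched as follows (an illustrative Python/NumPy fragment; `elliptical_region` and the enlargement factor are assumptions for the example):

```python
import numpy as np

def elliptical_region(shape, box, scale=1.4):
    """First-kind region with an elliptical outline: an ellipse fitted to
    the object box and enlarged by 'scale' so it also covers the surround."""
    h, w = shape
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0          # ellipse center
    ax, ay = scale * (x1 - x0) / 2.0, scale * (y1 - y0) / 2.0  # semi-axes
    yy, xx = np.mgrid[0:h, 0:w]
    # Points inside the ellipse satisfy the normalized-distance inequality.
    return ((xx - cx) / ax) ** 2 + ((yy - cy) / ay) ** 2 <= 1.0

# Example: the same pedestrian box as before, now with an elliptical surround.
ell = elliptical_region((100, 100), (40, 30, 60, 80))
```

A circular outline is the special case of equal semi-axes; a shape-adapted outline (as for 4a) would instead dilate the object's segmentation mask.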
  • For highlighting the relevant objects contained in the traffic scene, it is also conceivable to manipulate the image data of the image areas of the first type in their color. On the one hand, this can be a simple coloring, in particular toward yellow or red shades. On the other hand, it is also possible to design the color or brightness gradients of the image data so that there are soft transitions, without hard boundaries, to the image areas of the second type. Such a manipulation of the image data is particularly useful for image areas of the first type in which the image data were derived from other sensors or sensor streams.
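One way to obtain such soft transitions is to blur the binary first-kind mask into an alpha map and blend the two renderings. This is a sketch under illustrative assumptions (a separable box blur; `feathered_blend` is a hypothetical name):

```python
import numpy as np

def feathered_blend(plain, highlighted, first_kind, radius=5):
    """Blend the highlighted rendering into the plain one with a soft edge,
    by box-blurring the binary first-kind mask into an alpha map."""
    alpha = first_kind.astype(float)
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    # Separable box blur: first along rows, then along columns.
    alpha = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode='same'), 1, alpha)
    alpha = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode='same'), 0, alpha)
    return alpha * highlighted + (1.0 - alpha) * plain
```

Near the mask boundary, alpha falls off gradually from 1 to 0, so the highlighted region fades into the second-kind rendering without a hard edge.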
  • The image areas of the first type are highlighted particularly clearly in the image data (1) when these areas are displayed flashing or pulsing. In this case, it is conceivable to make the frequency of the flashing or pulsing dependent on a danger potential emanating from the objects to be highlighted, in particular also on their distance or their relative speed with respect to the driver's own vehicle.
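A danger-dependent pulsing can be sketched as a time-varying gain applied to a first-kind region. The specific danger heuristic (closing speed over distance) and the name `pulse_gain` are illustrative assumptions, not from the patent:

```python
import math

def pulse_gain(t, distance_m, closing_speed_ms, base_hz=1.0):
    """Pulsation gain in [0.5, 1.0] for a first-kind region at time t (s).

    The pulsing frequency rises as the object gets closer and as the
    closing speed grows -- a crude stand-in for 'danger potential'.
    """
    danger = closing_speed_ms / max(distance_m, 1.0)
    hz = base_hz * (1.0 + danger)
    return 0.75 + 0.25 * math.sin(2.0 * math.pi * hz * t)
```

Multiplying the brightened first-kind pixels by this gain each frame yields the pulsing emphasis; a nearby, fast-approaching object pulses noticeably faster.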
  • Advantageously, the manipulation of the image data in the image areas of the first type can take place in such a way that the highlighting of these areas in the displayed image data (1) varies in its perceptibility over time. In this way, it can be achieved, for example, that image areas assigned to newly detected objects are first highlighted only weakly, and then more strongly as the duration of the recognition progresses. Recognition errors of the object recognition within the image data captured by the camera then have only insignificant effects, since the initially slight highlighting of such incorrectly detected objects does not distract the driver unnecessarily.
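The gradual ramp-up of a new highlight can be sketched as a simple gain over recognition duration (the name `fade_in_gain` and the 1.5 s ramp are illustrative choices):

```python
def fade_in_gain(seconds_recognized, ramp=1.5):
    """Perceptibility of a highlight as a function of recognition duration.

    Newly detected objects start almost invisible, so a spurious detection
    barely distracts the driver; a confirmed object reaches full emphasis
    after 'ramp' seconds.
    """
    return min(1.0, max(0.0, seconds_recognized / ramp))
```

A false positive that disappears after a few frames never gets past a faint emphasis, which is exactly the robustness argument of the paragraph above.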
  • In general, it is advantageous if the highlighting of the image areas of the first type is chosen or varied as a function of parameters of the object (2, 3, 4, 5) contained in the image area. It is conceivable, for example, to use those parameters of an object (2, 3, 4, 5) which describe its distance, the danger potential emanating from it, or its object type. It would thus be conceivable to color image areas depicting objects that are moving quickly toward the driver's own vehicle with red shades, or to display image areas containing distant objects with only a faint coloring. Also, for example, image areas to be highlighted that contain persons could be colored with a different hue than areas containing vehicles, thus assisting intuitive object perception.
  • If the vehicle provided with the image display according to the invention moves along the roadway, its dynamic behavior is frequently subject to relatively rapid changes (pitching and shaking), in particular due to road bumps. Owing to the rigid coupling between camera and vehicle body, these rapid dynamic changes cause a 'wobbling' of the image on the display. If the image data recorded by the camera are displayed unchanged on the display, this 'shaking' is not perceived very strongly. However, if objects or entire object areas are shown highlighted in the image display, it can have a particularly disturbing effect on the driver, especially when these areas are brightened on a display in the darkened vehicle interior. Therefore, in an advantageous embodiment of the invention, the surrounding area (2a, 3a, 4a) associated with the objects to be highlighted (2, 3, 4, 5) in the highlighted image areas of the first type is chosen to be temporally variable in its position relative to the assigned object, so that the positions of the image areas in the representation of the image data (1) change only slowly on the display.
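One simple way to make the displayed region position change only slowly is a per-frame low-pass filter on its center (exponential smoothing is my assumption here; `smooth_position` and the filter constant are illustrative):

```python
def smooth_position(prev, measured, alpha=0.15):
    """Low-pass filter for the displayed position of a first-kind region.

    prev     : (x, y) position shown in the previous frame
    measured : (x, y) position delivered by the object recognition this frame
    alpha    : fraction of each measured jump followed per frame

    Only a small fraction of each jump is followed, so body-induced camera
    shake no longer makes the bright region wobble on the display.
    """
    px, py = prev
    mx, my = measured
    return (px + alpha * (mx - px), py + alpha * (my - py))
```

The object itself may still shake inside the region; only the highlighted frame around it is stabilized, which matches the embodiment's intent.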
  • The image areas of the first type are defined by the object (2, 3, 4, 5) and the adjoining surrounding area (2a, 3a, 4a). If the object recognition, in the course of object tracking, temporarily no longer recognizes an object to be highlighted in the image data captured by the camera, this would inevitably lead to abandoning the highlighting of the corresponding image area of the first type. However, this is generally not desirable, since especially in poor lighting conditions a temporary non-detection of an object that is actually present must always be expected. The resulting switching on and off of the highlighting of a corresponding image area of the first type would have a very disturbing effect on the viewer.
  • Therefore, in a particularly advantageous embodiment of the invention, it is provided that when an object (2, 3, 4, 5) potentially imaged in an image area of the first type highlighted in the image data (1) can no longer be recognized by the object recognition, a highlighted representation of this image area is nevertheless continued for a certain period of time. If the object recognition then succeeds in recognizing the object again in the meantime, the highlighting continues essentially without disturbance. The viewer will merely perceive that for a short time no object is recognizable in the highlighted image area; this is not annoying, however, since the object may still be dimly perceptible.
  • If, over a defined (longer) period, a previously recognized object cannot be recognized again in the image data, it is advisable to cancel the emphasis of the assigned image area of the first type and then to treat this image area like an image area of the second type. Preferably, the emphasis is not interrupted abruptly but varied over time, so that this is perceived as a kind of slow fading out of the image area.
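The hold-then-fade behavior of the last two paragraphs can be sketched as a small state function per tracked region (the `hold` and `fade` durations are illustrative assumptions, as is the name `highlight_state`):

```python
def highlight_state(detected, seconds_since_seen, hold=0.5, fade=1.0):
    """Highlight strength for a tracked region when detection is intermittent.

    While the object is detected, strength is 1. After the detection is
    lost, the highlight is held at full strength for 'hold' seconds, then
    faded out linearly over 'fade' seconds instead of vanishing abruptly.
    """
    if detected:
        return 1.0
    if seconds_since_seen <= hold:
        return 1.0           # bridge short detection dropouts
    t = seconds_since_seen - hold
    return max(0.0, 1.0 - t / fade)  # slow fade-out toward second-kind treatment
```

A re-detection within the hold window returns the strength to 1 with no visible flicker, which is the stated purpose of continuing the highlight.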
  • Profitably, the image display according to the invention is associated with means for generating acoustic or haptic signals. This makes it possible, in addition to an optical highlighting of image areas, to point out relevant objects in the surroundings of the vehicle.
  • The invention is advantageously suitable for a driver assistance system for improving a driver's vision at night (night vision system), for which a camera sensitive in the infrared wavelength range is preferably used for image capture. On the other hand, a profitable use is also conceivable as part of a driver assistance system for improving the perception of relevant information in inner-city scenarios (downtown assistant), in which in particular traffic lights and/or traffic signs are highlighted as relevant objects.
  • When used as part of a driver assistance system, the image display can on the one hand be implemented as a head-up display; on the other hand, placing the display in the dashboard area directly in front of the driver is also advantageous, since the driver then only has to glance down briefly at the display.

Claims (22)

  1. Method for image display, in which by means of a camera image data from the environment of a vehicle are recorded, in which by means of an object recognition objects ( 2 . 3 . 4 . 5 ) are recognized in the recorded image data, and in which the image data ( 1 ) are at least partially displayed on a display, wherein in the image data ( 1 ) the recognized objects ( 2 . 3 . 4 . 5 ) are at least partially highlighted, characterized in that the highlighting of the detected objects ( 2 . 3 . 4 . 5 ) in such a way that the displayed image data ( 1 ) are divided into two types of regions, the first type of regions being the recognized and to be highlighted objects ( 2 . 3 . 4 . 5 ) and a corresponding directly adjacent thereto surrounding area ( 2a . 3a . 4a ), and wherein the second type of areas are those image data ( 1 ), which have not been assigned to the first type of areas, and that the image data of the two types of areas are manipulated in different ways.
  2. Method according to claim 1, characterized that in the context of the manipulation of the image data, the image areas of the first kind brightened.
  3. Method according to one of the preceding claims, characterized characterized in that in the context of the manipulation of the image data the Image areas of the second kind are darkened or at least partially simplified or by schematic or symbolic representations be replaced.
  4. Method according to one of the preceding claims, characterized in that the image data of the second kind of areas, which recognized objects ( 2 . 3 . 4 . 5 ), are assigned to a third kind of areas, and that the image data of this third kind of areas are manipulated differently from the image data of the second kind of areas.
  5. Method according to claim 4, characterized that the image data of the image areas of the third kind of no manipulation be subjected.
  6. Method according to one of the preceding claims, characterized characterized in that in the context of the manipulation of the image data the Image data of the areas of the first kind by the corresponding image data be replaced by another sensor or another sensor current.
  7. Method according to one of the preceding claims, characterized in that in the image areas of the first type the objects ( 2 . 3 . 4 . 5 ) associated environmental area ( 2a . 3a . 4a ) is selected in its outline elliptical or circular or adapted to the object shape.
  8. Method according to one of the preceding claims, thereby in that the image data of the image areas of the first Kind of being manipulated in their color, and / or that they are in their color or brightness gradients with soft transition, displayed without hard limits to the image areas of the second kind become.
  9. Method according to one of the preceding claims, characterized in that the image data of the image regions of the first type are displayed flashing or pulsating.
  10. Method according to one of the preceding claims, characterized in that the manipulation of the image data in the image regions of the first type takes place in such a way that the highlighting of these regions in the displayed image data (1) varies in its perceptibility over time.
  11. Method according to claim 10, characterized in that the temporal variation takes place in such a way that the perceptibility of the highlighting increases with the progressive duration of the recognition of the object (2, 3, 4, 5) contained in the image region.
  12. Method according to one of the preceding claims, characterized in that the highlighting of the image regions of the first type is selected or varied as a function of parameters of the object (2, 3, 4, 5) contained in the image region, the parameters describing the distance of the object (2, 3, 4, 5), the danger potential emanating from it, or its object type.
  13. Method according to one of the preceding claims, characterized in that, in the image regions of the first type, the position of the surrounding area (2a, 3a, 4a) around the respective objects (2, 3, 4, 5) is selected to be temporally variable relative to the assigned object, so that the positions of the image regions change only slowly in the representation of the image data (1) on the display.
  14. Method according to one of the preceding claims, characterized in that, when an object (2, 3, 4, 5) potentially imaged in an image region of the first type highlighted in the image data (1) can no longer be detected by the object recognition, the highlighted representation of this image region is continued for a certain period of time.
  15. Method according to claim 14, characterized in that, when the object recognition still does not recognize an object (2, 3, 4, 5) in the image region after the expiry of the period of time, the highlighting of the image region is cancelled, and the image region is subsequently treated as an image region of the second type.
  16. Method according to claim 15, characterized in that the cancellation of the highlighting takes place with temporal variation.
  17. Image display, comprising a camera for capturing image data from the surroundings of a vehicle, an object recognition for recognizing objects (2, 3, 4, 5) in the captured image data, and a display for displaying at least portions of the captured image data, wherein an image-processing device is provided which manipulates the image data so that recognized objects are at least partially highlighted on the display, characterized in that the image display comprises means for dividing the image data into at least two types of regions, the first type of region comprising the recognized objects to be highlighted (2, 3, 4, 5) together with the respectively adjoining surrounding area (2a, 3a, 4a), and the second type of region comprising those image data (1) which have not been assigned to the first type.
  18. Image display according to claim 17, characterized in that the camera captures image data from the infrared wavelength range.
  19. Image display according to claim 17 or 18, characterized in that the image display is a display located in the dashboard of the vehicle.
  20. Image display according to one of claims 17 to 19, characterized in that the image display is connected to means for generating acoustic or haptic signals, by means of which, in addition to an optical highlighting of image regions, further signals can indicate the presence of relevant objects in the surroundings of the vehicle.
  21. Use of the image display or of the method for image display according to one of the preceding claims as a driver assistance system for improving the visibility of a driver at night (night vision system).
  22. Use of the image display or of the method for image display according to one of the preceding claims as a driver assistance system for improving the perception of relevant information in city center scenarios (city center assistant).
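The core of claim 1, with the manipulations of claims 2 and 3, amounts to masking the image into two region types and scaling their brightness differently. The following is a minimal sketch of that idea, not the patented implementation; the function name, the grayscale list-of-lists image format, and the parameter values (margin, brighten, darken) are all illustrative assumptions:

```python
def virtual_spotlight(image, boxes, margin=8, brighten=1.5, darken=0.5):
    """Divide a grayscale image (list of rows of 0-255 ints) into two
    region types and manipulate them differently (sketch of claim 1).

    boxes: detected-object bounding boxes as (x0, y0, x1, y1) tuples.
    Each box plus a surrounding margin forms a first-type region;
    everything else is second-type."""
    h, w = len(image), len(image[0])

    # Mark first-type regions: object boxes plus adjoining surround.
    mask = [[False] * w for _ in range(h)]
    for x0, y0, x1, y1 in boxes:
        for y in range(max(0, y0 - margin), min(h, y1 + margin)):
            for x in range(max(0, x0 - margin), min(w, x1 + margin)):
                mask[y][x] = True

    # Brighten first-type regions, darken second-type (claims 2 and 3).
    return [
        [min(255, round(px * (brighten if mask[y][x] else darken)))
         for x, px in enumerate(row)]
        for y, row in enumerate(image)
    ]
```

A rectangular surround is used here for brevity; claim 7 would instead shape the first-type mask as an ellipse, circle, or object contour, and claim 8 would replace the hard mask edge with a soft gradient.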
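The temporal behaviour of claims 10, 11, and 14 to 16 (perceptibility ramps up while an object stays recognized, the highlight persists for a while after detection is lost, then fades out) can be sketched as a small per-region state machine. Class and parameter names (ramp, hold_frames, fade) are assumptions for illustration, not taken from the patent:

```python
class HighlightTracker:
    """Per-region highlight strength in [0, 1], updated once per frame.

    - While the object is detected, strength ramps up (claim 11).
    - After detection is lost, the highlight is held unchanged for
      hold_frames frames (claim 14).
    - Once the hold period expires, the highlight fades out rather
      than vanishing abruptly (claims 15 and 16)."""

    def __init__(self, ramp=0.1, hold_frames=10, fade=0.2):
        self.ramp = ramp
        self.hold_frames = hold_frames
        self.fade = fade
        self.strength = 0.0   # current highlight perceptibility
        self.missed = 0       # consecutive frames without a detection

    def update(self, detected):
        if detected:
            self.missed = 0
            self.strength = min(1.0, self.strength + self.ramp)
        else:
            self.missed += 1
            if self.missed > self.hold_frames:
                self.strength = max(0.0, self.strength - self.fade)
        return self.strength
```

The returned strength could then scale the brightening factor of the corresponding first-type region, so that a region whose object disappears is eventually treated as second-type, as claim 15 requires.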
DE200610012773 2006-03-17 2006-03-17 Object e.g. animal, image displaying method for e.g. driver assisting system, involves dividing image data of objects into two types of fields whose image data are manipulated in different way, where one field has surrounding field Withdrawn DE102006012773A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
DE200610012773 DE102006012773A1 (en) 2006-03-17 2006-03-17 Object e.g. animal, image displaying method for e.g. driver assisting system, involves dividing image data of objects into two types of fields whose image data are manipulated in different way, where one field has surrounding field

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
DE200610012773 DE102006012773A1 (en) 2006-03-17 2006-03-17 Object e.g. animal, image displaying method for e.g. driver assisting system, involves dividing image data of objects into two types of fields whose image data are manipulated in different way, where one field has surrounding field
DE102006047777A DE102006047777A1 (en) 2006-03-17 2006-10-06 Virtual spotlight for marking objects of interest in image data
PCT/EP2007/002134 WO2007107259A1 (en) 2006-03-17 2007-03-12 Virtual spotlight for distinguishing objects of interest in image data
US12/293,364 US20090102858A1 (en) 2006-03-17 2007-03-12 Virtual spotlight for distinguishing objects of interest in image data
JP2008558696A JP5121737B2 (en) 2006-03-17 2007-03-12 Virtual spotlight for identifying important objects in image data
EP07723183A EP1997093B1 (en) 2006-03-17 2007-03-12 Virtual spotlight for distinguishing objects of interest in image data

Publications (1)

Publication Number Publication Date
DE102006012773A1 true DE102006012773A1 (en) 2006-11-30

Family

ID=37387834

Family Applications (1)

Application Number Title Priority Date Filing Date
DE200610012773 Withdrawn DE102006012773A1 (en) 2006-03-17 2006-03-17 Object e.g. animal, image displaying method for e.g. driver assisting system, involves dividing image data of objects into two types of fields whose image data are manipulated in different way, where one field has surrounding field

Country Status (1)

Country Link
DE (1) DE102006012773A1 (en)


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102007039110A1 (en) 2007-08-18 2009-02-19 Bayerische Motoren Werke Aktiengesellschaft Driver assistance system i.e. advanced driver assistance system, for motor vehicle, has main warning device cooperating with verification device so that outputting of warning signals is caused by verifying critical driving situation
EP2080668A1 (en) * 2007-08-27 2009-07-22 Mazda Motor Corporation Driving assist device, method and computer program product for a vehicle
US8004424B2 (en) 2007-08-27 2011-08-23 Mazda Motor Corporation Driving assist device for vehicle
EP2103485A3 (en) * 2008-03-19 2009-10-28 Mazda Motor Corporation Surroundings monitoring device, method and computer program product for a vehicle
US8054201B2 (en) 2008-03-19 2011-11-08 Mazda Motor Corporation Surroundings monitoring device for vehicle
DE102009051485A1 (en) 2009-10-30 2010-06-17 Daimler Ag Method for controlling driving light of e.g. car, involves automatically activating light source of headlamp, and manually activating or deactivating light function based on control elements e.g. pushbuttons
US9123179B2 (en) 2010-09-15 2015-09-01 Toyota Jidosha Kabushiki Kaisha Surrounding image display system and surrounding image display method for vehicle
EP3147149A4 (en) * 2014-05-23 2018-02-28 Nippon Seiki Co., Ltd. Display device
DE102015214777A1 (en) * 2015-08-03 2017-02-09 Continental Automotive Gmbh A method of verifying a condition of a system for automated driving of a vehicle
DE102015013600A1 (en) * 2015-10-21 2017-04-27 Audi Ag Method for operating a headlamp of a motor vehicle
DE102018209192B3 (en) 2018-05-17 2019-05-09 Continental Automotive Gmbh Method and device for operating a camera monitor system for a motor vehicle
WO2019219440A1 (en) * 2018-05-17 2019-11-21 Continental Automotive Gmbh Method and apparatus for operating a camera monitor system for a motor vehicle

Similar Documents

Publication Publication Date Title
US10081370B2 (en) System for a vehicle
CN104185010B (en) Enhanced three-dimensional view generation in the curb observing system of front
US9139133B2 (en) Vehicle collision warning system and method
JP6346614B2 (en) Information display system
US9505338B2 (en) Vehicle driving environment recognition apparatus
US8405491B2 (en) Detection system for assisting a driver when driving a vehicle using a plurality of image capturing devices
US9384401B2 (en) Method for fog detection
DE10247371B4 (en) Vehicle information providing device
ES2280049T3 (en) Procedure and device for visualizing an environment of a vehicle.
CN101088027B (en) Stereo camera for a motor vehicle
DE102014117854B4 (en) Device and method for displaying information of a head-up display (HUD)
EP2720458A1 (en) Image generation device
JP3941926B2 (en) Vehicle periphery monitoring device
US8754760B2 (en) Methods and apparatuses for informing an occupant of a vehicle of surroundings of the vehicle
US9230178B2 (en) Vision support apparatus for vehicle
US9415719B2 (en) Headlight control device
JP5577398B2 (en) Vehicle periphery monitoring device
CN103098097B (en) Surrounding image display system and surrounding image display method for vehicle
CN102197418B (en) Device for monitoring surrounding area of vehicle
CN100413324C (en) On-vehicle night vision camera system, display device and display method
EP1005421B1 (en) Method for displaying information in a motor vehicle
JP2012064096A (en) Vehicle image display device
US9649980B2 (en) Vehicular display apparatus, vehicular display method, and vehicular display program
JPWO2008029802A1 (en) Driving information providing device
CN103770708B (en) The dynamic reversing mirror self adaptation dimming estimated by scene brightness is covered

Legal Events

Date Code Title Description
OAV Applicant agreed to the publication of the unexamined application as to paragraph 31 lit. 2 z1
OP8 Request for examination as to paragraph 44 patent law
8143 Withdrawn due to claiming internal priority