EP1395852A1 - Procede pour preparer des informations imagees - Google Patents

Procede pour preparer des informations imagees

Info

Publication number
EP1395852A1
EP1395852A1 (application EP02743204A)
Authority
EP
European Patent Office
Prior art keywords
image
video
pixels
depth
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP02743204A
Other languages
German (de)
English (en)
Inventor
Ulrich Lages
Klaus Dietmayer
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ibeo Automobile Sensor GmbH
Original Assignee
Ibeo Automobile Sensor GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from DE10128954A external-priority patent/DE10128954A1/de
Priority claimed from DE10132335A external-priority patent/DE10132335A1/de
Priority claimed from DE10148062A external-priority patent/DE10148062A1/de
Priority claimed from DE10154861A external-priority patent/DE10154861A1/de
Application filed by Ibeo Automobile Sensor GmbH filed Critical Ibeo Automobile Sensor GmbH

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S15/00Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/88Sonar systems specially adapted for specific applications
    • G01S15/93Sonar systems specially adapted for specific applications for anti-collision purposes
    • G01S15/931Sonar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G01S2015/932Sonar systems specially adapted for specific applications for anti-collision purposes of land vehicles for parking operations
    • G01S2015/933Sonar systems specially adapted for specific applications for anti-collision purposes of land vehicles for parking operations for measuring the dimensions of the parking space when driving past
    • G01S2015/935Sonar systems specially adapted for specific applications for anti-collision purposes of land vehicles for parking operations for measuring the dimensions of the parking space when driving past for measuring the contour, e.g. a trajectory of measurement points, representing the boundary of the parking space
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section

Definitions

  • the present invention relates to a method for providing image information about a surveillance area.
  • Monitoring areas are frequently monitored with devices for image acquisition in order to detect changes in these areas.
  • methods for recognizing and tracking objects are also used, in which objects that correspond to objects in the monitoring area are recognized and tracked on the basis of sequentially captured images of the monitoring area.
  • An important area of application of such methods is the monitoring of the area in front of a vehicle or of the entire near area around the vehicle.
  • Devices for image acquisition are preferably used for object recognition and tracking, with which depth-resolved images can be acquired.
  • Such depth-resolved images contain information about the position of detected objects relative to the image-capturing device, in particular the distance of at least some points on the surface of such objects from the image-capturing device, or data from which this distance can be derived.
  • Laser scanners, for example, can be used as image-capturing devices for capturing depth-resolved images. During a scan, such a laser scanner sweeps its field of view with at least one pulsed radiation beam covering a predetermined angular range and detects the radiation pulses that are reflected, mostly diffusely, from a point or area of an object.
  • the transit time of the emitted, reflected and detected radiation pulses is recorded for distance measurement.
  • the raw data recorded in this way for a pixel can then contain as coordinates the angle at which the reflection was recorded and the distance from the object point determined from the transit time of the radiation pulses.
  • the radiation can in particular be visible or infrared light.
  • Although laser scanners provide very precise position information, in particular very precise distances between object points and the laser scanner, this information is generally only available in the detection plane in which the radiation beam is moved, so that it can be very difficult to classify a detected object based solely on the position information in this plane.
  • For example, a traffic light of which only the pole bearing the light signals is detected cannot easily be distinguished from a lamppost or a tree with a trunk of the same diameter in the detection plane.
  • Another important example for the application would be the distinction between a person and a tree.
  • Depth-resolved images can also be recorded with video systems with stereo cameras.
  • the accuracy of the depth information decreases with increasing distance between the object and the stereo camera system, which makes object detection and tracking more difficult.
  • Moreover, for the greatest possible accuracy of the depth information, the distance between the cameras of the stereo camera system should be as large as possible, which is problematic where installation space is limited, as is the case in particular in a vehicle. It is therefore the object of the present invention to provide a method with which image information can be provided that enables good object recognition and tracking.
  • the object is achieved by a method having the features of claim 1.
  • According to a first alternative, a method is provided for providing image information about a monitoring area which lies in the field of view of an optoelectronic sensor for detecting the position of objects in at least one detection plane and in the field of view of a video system with at least one video camera. In this method, depth images recorded by the optoelectronic sensor, each containing image points with position coordinates of corresponding object points on one or more detected objects in the monitoring area, and video images recorded by the video system of an area containing the object points, which comprise image points with data captured by the video system, are provided; on the basis of the detected position coordinates of at least one of the object points, at least one image point captured by the video system and corresponding to that object point is determined; and the data of this image point of the video image and the image point of the depth image and/or the position coordinates of the object point are assigned to one another.
  • the images of two devices for image acquisition are used, the viewing areas of which each include the monitoring area, which in particular can also correspond to one of the two viewing areas.
  • The viewing area of a video system is usually three-dimensional, whereas that of an optoelectronic sensor for position detection, for example a laser scanner, is only two-dimensional.
  • the wording that the monitoring area is in the field of view of a sensor is therefore understood to mean that the projection of the monitoring area onto the detection plane in which the optoelectronic sensor detects position information is within the field of vision of the optoelectronic sensor.
  • One device for image acquisition is at least one optoelectronic sensor for acquiring the position of objects in at least one detection plane, i.e. for capturing depth-resolved images which contain, directly or indirectly, data about the distances of object points from the sensor in the direction of the electromagnetic radiation received by the sensor and emanating from the respective object points.
  • Such depth-resolved images of the optoelectronic sensor are referred to as depth images in this application.
  • optoelectronic sensors for recording such depth-resolved images are known.
  • systems with stereo video cameras can be used which have a device for converting the intensity images recorded by the cameras into depth-resolved images.
  • laser scanners are preferably used, which allow a very precise position determination. In particular, it can be the laser scanners mentioned at the outset.
  • A video system with at least one video camera is used as the second device for image capture; the video camera can be, for example, a camera with a row of photodetection elements or, preferably, a camera with a CCD or CMOS area sensor.
  • the video cameras can work in the visible or in the infrared range of the electromagnetic spectrum.
  • the video system can have at least one monocular video camera or also a stereo camera or stereo camera arrangement.
  • the video system captures video images of its viewing area, which can contain pixels with, for example, intensity and / or color information.
  • The photodetection elements of a camera, arranged in a row, a column or on a surface, can either be fixed relative to the optoelectronic sensor for capturing depth images or - when using laser scanners of the aforementioned type - preferably be moved synchronously with the radiation beam and/or with at least one photodetection element of the laser scanner that detects radiation reflected or remitted from the radiation beam.
  • First, depth images recorded by the optoelectronic sensor, each containing image points with position coordinates of object points on one or more detected objects in the monitoring area, and video images recorded by the video system of an area containing these object points, which comprise image points with data captured by the video system, are provided. They can be provided by direct transmission of the images from the sensor or the video system, or by reading them out of a storage medium in which the corresponding data are stored. For the images it is only important that both contain an image of the same area, which may in principle be smaller than the monitoring area, so that pixels corresponding to the object point can appear both in the depth image and in the video image.
  • At least one image point corresponding to the object point and detected by the video system is then determined.
  • a corresponding pixel in the video image is determined for a pixel of the depth image.
  • Position data of the depth image are thus assigned to video data of the video image, which can be any data that result directly or through an intermediate evaluation from the pixels of the video image.
  • the data can contain intensity or color information, and, if infrared cameras are used, temperature information.
  • Data obtained in this way for an object point can, for example, be output as new pixels with data elements for position coordinates and intensity or color information, stored or used directly in a parallel process, for example for object detection and tracking.
  • In this way, data can be provided for an object point not only with regard to either its position or other, for example optical, properties, as is the case with simple sensors and video cameras alone, but with regard to both the position and these other characteristics.
  • intensity and / or color can be provided for an object point.
  • the larger number of data assigned to a pixel allows not only the position information but also video information to be used in object recognition and tracking methods. This can be very advantageous, for example, in the case of segmentation, segment-object assignment or the classification of objects, since the larger number of information or data permits more reliable identification.
  • a pixel in the video image corresponding to an object point in the depth image can be determined in various ways.
  • The relative position of the optoelectronic sensor to the video camera or the video system, that is to say their distance in space and relative orientation, is preferably known.
  • the relative position can be determined, for example, by calibration.
  • Another preferred design is the combination of video system and laser scanner in one device, so that the calibration can be carried out once during the manufacturing process.
  • the determination can be made solely by comparing the position information.
  • The image point of the video image corresponding to an object point of a depth image is preferably determined as a function of the imaging properties of the video system.
  • The imaging properties are understood to include, in particular, the focal lengths of imaging devices of the video camera or video system and their distance from receiving elements such as CCD or CMOS area sensors.
  • If the video camera has an imaging device, such as a lens system, that images the field of view onto a photodetector field, e.g. a CCD or CMOS area sensor, it can be calculated from the position coordinates of an image point in the depth image, taking into account the imaging properties of the imaging device, onto which of the photodetector elements in the photodetector field the object point corresponding to the image point is mapped; this determines which pixel of the video image corresponds to the pixel of the depth image.
  • Depending on the resolution of the imaging device and the position of the object point, several image points of the video image can also be assigned to one object point.
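To make the projection step concrete, here is a minimal sketch of how an object point from the depth image could be mapped to a video pixel with a simple pinhole model. It is not taken from the patent; the function name, the calibration quantities R and t, the focal length, the pixel pitch and the principal point are all assumptions chosen for illustration.

```python
import numpy as np

def project_to_video_pixel(point_sensor, R, t, f, pixel_size, principal_point):
    """Map a 3D object point given in the laser-scanner frame (metres) to a
    pixel of the video camera's area sensor using a pinhole model.
    R, t: assumed rotation/translation from scanner frame to camera frame,
    f: assumed focal length (m), pixel_size: assumed pixel pitch (m),
    principal_point: (u0, v0), pixel hit by the optical axis."""
    x, y, z = R @ np.asarray(point_sensor, dtype=float) + t   # camera frame, z ahead
    if z <= 0.0:
        return None                      # point lies behind the camera
    u = principal_point[0] + f * x / (z * pixel_size)          # column index
    v = principal_point[1] + f * y / (z * pixel_size)          # row index
    return int(round(u)), int(round(v))

# usage sketch: a depth-image point 10 m ahead and 0.5 m to the left,
# camera mounted 0.3 m above the detection plane (all values assumed)
pixel = project_to_video_pixel([0.5, 0.0, 10.0],
                               R=np.eye(3), t=np.array([0.0, -0.3, 0.0]),
                               f=0.008, pixel_size=6e-6,
                               principal_point=(320, 240))
```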
  • Since the viewing angles of the optoelectronic sensor and the video system differ, it may happen, when many objects are present in the monitoring area, that an object point visible in the depth image is completely or partially hidden in the video image by another object. It is therefore preferred that, on the basis of the position coordinates of an object point detected by the optoelectronic sensor in the depth image and at least the position and orientation of the video system, it is determined whether the object point is completely or partially covered in the video image recorded by the video system.
  • For this purpose, the position of the video camera of the video system relative to the optoelectronic sensor should be known. This position can either be specified by attaching the optoelectronic sensor and the video system in a precise relative position and orientation, or in particular also be determined by calibration.
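A rough way to implement such a covering test is sketched below: every depth-image point is projected into the camera (for example with the projection sketch above), and a point counts as covered if another detected point falls on almost the same video pixel but lies closer to the camera. This is an assumption-laden approximation, not the patent's own procedure, and the tolerance value is illustrative.

```python
def is_covered(idx, pixels, camera_ranges, tol=1):
    """pixels: (u, v) video-pixel coordinates of all projected depth points;
    camera_ranges: their distances from the video camera.  The point with
    index idx counts as (at least partially) covered if some other point
    maps to nearly the same pixel but lies closer to the camera."""
    u0, v0 = pixels[idx]
    r0 = camera_ranges[idx]
    for i, ((u, v), r) in enumerate(zip(pixels, camera_ranges)):
        if i != idx and abs(u - u0) <= tol and abs(v - v0) <= tol and r < r0:
            return True
    return False
```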
  • the determination of image points of the video image corresponding to object points and the assignment of corresponding data to image points of the depth image corresponding to object points is carried out for object points in a predetermined fusion region.
  • the fusion area can initially be any area in the monitoring area, which can be specified, for example, depending on the use of the data to be provided. This means that, regardless of the monitoring area, in particular a smaller area within the monitoring area can be specified in which the addition of data is to take place.
  • the fusion region then corresponds to a "region of interest". The process can be significantly accelerated by specifying such fusion areas.
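As a simple illustration of restricting the processing to such a fusion area, the sketch below keeps only the depth-image points inside a rectangular region of interest in the detection plane; the rectangle limits and variable names are assumptions, not values from the patent.

```python
def in_fusion_area(point, x_range=(0.0, 30.0), y_range=(-5.0, 5.0)):
    """point: (x, y) coordinates in the detection plane (x ahead, y lateral).
    Returns True if the point lies inside the assumed rectangular fusion area."""
    x, y = point
    return x_range[0] <= x <= x_range[1] and y_range[0] <= y <= y_range[1]

depth_image_points = [(12.0, 0.4), (40.0, 2.0), (8.0, -6.5)]   # illustrative points
relevant_points = [p for p in depth_image_points if in_fusion_area(p)]
# only (12.0, 0.4) survives; the other points lie outside the region of interest
```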
  • The depth image and the video image are each segmented first. At least one segment of the video image that contains pixels corresponding to at least some of the pixels of a segment of the depth image is then assigned to that segment of the depth image.
  • While the segmentation of the depth image and the segmentation of the video image can take place according to the same criteria, the segmentation of the depth image is preferably carried out using position information, in particular neighborhood criteria, and the segmentation of the video image according to other criteria, for example criteria known from the image processing of video images, such as intensities, colors, textures and/or edges of image areas.
  • the corresponding data can be determined by preprocessing stages, for example image data filtering.
  • This assignment makes it possible to assign segments of the video image as data to pixels of the depth image. In particular, information can thus also be obtained in directions perpendicular to the detection plane of the depth image in which the scanning by the optoelectronic sensor takes place. This can be, for example, the extent of the segment, or of an object assigned to this segment, in a third dimension. Using such information, the classification of objects in an object recognition and tracking process can be made much easier. For example, a single guide post on a street can easily be distinguished from a street lamp simply because of its height, although the two objects do not differ, or differ only little, in the depth image.
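The neighborhood-criterion segmentation of the depth image mentioned above could look roughly like the following sketch, which starts a new segment whenever the gap between consecutive scan points exceeds a maximum distance; the threshold and names are illustrative assumptions.

```python
import math

def segment_depth_image(points, max_gap=0.3):
    """points: pixels of a depth image as (x, y) coordinates in the detection
    plane, ordered by scan angle.  A new segment starts whenever the Euclidean
    distance to the previous point exceeds the assumed max_gap (metres)."""
    segments, current = [], []
    for p in points:
        if current and math.dist(current[-1], p) > max_gap:
            segments.append(current)
            current = []
        current.append(p)
    if current:
        segments.append(current)
    return segments

# e.g. two neighbouring points on a pile and one distant point on a wall
print(segment_depth_image([(0.50, 10.0), (0.55, 10.0), (3.0, 14.0)]))
# -> [[(0.5, 10.0), (0.55, 10.0)], [(3.0, 14.0)]]
```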
  • In another development, the depth image is segmented, a predetermined pattern is searched for in a region of the video image containing pixels that correspond to at least one segment of the depth image, and the result of the search is assigned as data to the segment and/or the pixels forming the segment.
  • the pattern can generally be an image of an area of an object, for example an image of a traffic sign or an image of a road marking.
  • the pattern in the video image can be identified using pattern recognition methods known from video image processing. This development of the method is particularly advantageous if, based on information about possible objects in the monitoring area, it is already possible to make assumptions as to what a segment in the depth image could correspond to for objects or objects representing them.
  • For example, a cutout of the video image, whose extent is given by the size and position of the segment and by the extent of the largest expected object, can be examined for the image of a certain traffic sign, and the segment is then assigned corresponding information, for example the type of traffic sign.
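Such a pattern search in a cutout of the video image could, for instance, be realised with normalised cross-correlation template matching, as in the following sketch; the use of OpenCV, the threshold and the function names are assumptions for illustration only.

```python
import cv2

def find_sign_in_cutout(video_image, cutout_rect, sign_template, threshold=0.7):
    """Search a cutout of the video image, given by the segment position and
    the extent of the largest expected object, for the image of a certain
    traffic sign.  cutout_rect: (x, y, w, h) in pixel coordinates.
    Returns (found, score, location within the cutout)."""
    x, y, w, h = cutout_rect
    cutout = video_image[y:y + h, x:x + w]
    scores = cv2.matchTemplate(cutout, sign_template, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_loc = cv2.minMaxLoc(scores)
    return best_score >= threshold, best_score, best_loc
```

The search result, for example the recognised sign type, would then be assigned to the depth-image segment as additional data.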
  • If an infrared camera is used, the temperature can also be inferred, which significantly simplifies the classification of a person.
  • the combination of information from the video image with that from the depth image can also be used to identify objects or specific areas on the objects that are only present in one of the images, or to support an interpretation of one of the images.
  • a video system can detect the white line of a lane boundary marking that cannot be detected with a scanner with a comparatively low depth and angle resolution.
  • it can also be concluded from the movement of the other objects and from the lane edge detection that the lane detection from the video image is plausible.
  • According to a second alternative, a method is provided for providing image information about a monitoring area which lies in the field of view of an optoelectronic sensor for detecting the position of objects in at least one detection plane and in the field of view of a video system for recording depth-resolved, three-dimensional video images with at least one video camera. In this method, depth images recorded by the optoelectronic sensor, each containing image points of object points on one or more detected objects in the monitoring area, and video images recorded by the video system of an area containing the object points, whose image points contain position coordinates of the object points, are provided; pixels of the video image that lie close to or in the detection plane of the depth image are adapted by translation and/or rotation to corresponding pixels of the depth image; and the position coordinates of these pixels of the video image are corrected according to the determined translation and/or rotation.
  • the video system which, like the video system in the method according to the first alternative, has at least one video camera to which the explanations above also apply accordingly, is designed in the method according to the second alternative for capturing depth-resolved, three-dimensional video images.
  • The video system can have a monocular camera and an evaluation device which, using known methods, determines position data for pixels from successively captured video images.
  • preference is given to using video systems with stereo video cameras which are designed to provide depth-resolved images in the above-mentioned sense and can have corresponding evaluation devices for determining the depth-resolved images from the data recorded by the video cameras.
  • the video cameras can have CCD or CMOS area sensors and an imaging device which maps the field of vision of the video cameras to the area sensors.
  • pixels in the video image that are near or in the detection plane of the depth image are adapted by translation and / or rotation to corresponding pixels of the depth image.
  • For this, at least the relative orientation of the optoelectronic sensor and the video camera and their relative position should be known, in particular the distance of the video system from the detection plane in which the depth image is recorded by the optoelectronic sensor, measured in a direction perpendicular to that plane; this distance is referred to below as the "height".
  • the adjustment can be done in different ways.
  • the position coordinates of all pixels of a segment are projected onto the detection plane of the optoelectronic sensor.
  • a position of the segment in the detection plane of the optoelectronic sensor is then defined by averaging the image points thus projected.
  • In this variant, the adaptation essentially amounts to an averaging over the coordinates in the detection plane.
  • pixels of the depth-resolved video image that are in or near the detection plane are used for adaptation. Those pixels which are at a predetermined maximum distance from the detection plane are preferably understood as pixels lying near the detection plane. If the video image is segmented, the maximum distance can be given, for example, by the distance in a direction perpendicular to the detection plane of adjacent pixels of a segment of the video image intersecting the detection plane.
  • The adaptation can be carried out by means of optimization methods in which, for example, the simple or squared distance of corresponding pixels, or the sum of the simple or squared distances of all the pixels under consideration, is minimized; depending on the available computing time, the minimization may, if necessary, be carried out only partially.
  • Distance is understood here to mean any function of the coordinates of the image points that fulfills criteria for a distance from points in a vector space.
  • at least one translation and / or rotation is determined, which is necessary in order to adapt the pixels of the video image to those of the depth image.
  • the position coordinates of these pixels of the video image are then corrected in accordance with the determined translation and / or rotation.
  • Pixels of the video image lying offset in the direction perpendicular to the detection plane are preferably also corrected accordingly.
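For the case where the pixels of the video image near the detection plane can be paired with pixels of the depth image, the least-squares translation and rotation can be found in closed form (a 2D Procrustes/Kabsch fit), as sketched below. This is only one possible realisation under the assumption of known point correspondences; the patent also allows unequal point counts and a merely partial minimisation.

```python
import numpy as np

def fit_translation_rotation(video_pts, depth_pts):
    """Find the rotation matrix R and translation t in the detection plane
    that move video_pts onto depth_pts with minimal sum of squared distances.
    video_pts, depth_pts: arrays of shape (N, 2) with paired points."""
    v = np.asarray(video_pts, dtype=float)
    d = np.asarray(depth_pts, dtype=float)
    v_c, d_c = v.mean(axis=0), d.mean(axis=0)
    H = (v - v_c).T @ (d - d_c)           # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # enforce a proper rotation
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = d_c - R @ v_c
    return R, t

def correct_video_points(points, R, t):
    """Apply the determined translation/rotation to correct the position
    coordinates of video-image pixels, e.g. of a whole segment."""
    return np.asarray(points, dtype=float) @ R.T + t
```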
  • The adaptation thus corresponds to a calibration of the relative position of the depth image and the video image.
  • the corrected coordinates can then be output, stored, in particular as an image, or used in a parallel process.
  • The accuracy of the position information in the three-dimensional, depth-resolved video image is thereby increased, which makes object detection and tracking much easier.
  • objects can be classified very easily on the basis of the three-dimensional information available.
  • It is preferred that the captured images are each segmented, that at least one segment of the video image that has pixels in or near the detection plane of the depth image is adapted, at least by translation and/or rotation, to a corresponding segment of the depth image, and that the position coordinates of these pixels of the segment of the video image are corrected in accordance with the translation and/or rotation.
  • the position coordinates of all pixels of the segment are particularly preferably corrected.
  • the segmentation can take place for both images on the basis of corresponding criteria, which usually means segmentation according to distance criteria between neighboring pixels.
  • different criteria can also be used for the depth image and the video image; in particular, known criteria can be used for the video image in the image processing of video images, for example segmentation according to intensity, color and / or edges. By correcting the positions of all the pixels of the segment, this is then brought into a more precise position overall.
  • The method according to this embodiment has the advantage that the same number of pixels does not necessarily have to be present in the depth image and in the depth-resolved video image, or in corresponding sections thereof.
  • For the adaptation, methods corresponding to the adaptation of individual pixels can be used; in particular, the sum of the simple or squared distances of all pixels of the segment of the depth image from all pixels of the segment of the video image in or near the detection plane, in the sense of the first or second variant, can be used as the function to be minimized, so that a simple but precise adaptation can be implemented.
  • the method according to the second alternative can be carried out individually for each segment, so that essentially a local correction takes place.
  • the adaptation is carried out jointly for all segments of the depth image, so that the depth image and video image in the detection plane are brought together as well as possible, which equates to a calibration of the relative position and orientation of the optoelectronic sensor and video system.
  • the adaptation is carried out only for segments in a predetermined fusion area, which is a predetermined partial area of the monitoring area and can be selected, for example, depending on the later use of the image information to be provided.
  • the method can be considerably accelerated by this defined restriction to a part of the monitoring area which is only of interest for further processing ("region of interest").
  • the following developments relate to the method according to the invention according to the first and the second alternative.
  • the methods according to the invention can be carried out intertwined with other methods, for example for object detection and tracking.
  • the image information, ie in the method according to the first alternative at least the position information and the further data from the video image, and in the method according to the second alternative the corrected position information can only be formed if necessary.
  • the image information provided contains at least the position coordinates of object points and is used as a depth-resolved image.
  • the data provided in this way can then be treated like a depth-resolved image, ie output or stored, for example.
  • the fusion area is determined on the basis of a predetermined section of the video image and the imaging properties of the video system.
  • the depth image can be used to obtain position information from the depth image for selected sections of the video image, which information is required for the evaluation of the video image. This also greatly simplifies the identification of an object in a video image, since a suspected object often stands out from others solely on the basis of the depth information.
  • object recognition and tracking is carried out on the basis of the data of one of the depth-resolved images or the merged image information, and the fusion area is determined on the basis of data of the object recognition and tracking.
  • This can be used to supplement position information from the depth image that is used for object detection and tracking with corresponding information from the video image.
  • the fusion area can be given by the expansion of segments in the depth image or the size of a search area used for object detection for the objects being tracked.
  • the additional information from the video image can then be used to classify objects or assign segments to objects with high certainty.
  • For example, the presumed position of an object in the video image can be indicated on the basis of the data of the optoelectronic sensor without a classification already having taken place.
  • Video image processing then only needs to search for objects in the restricted fusion area, which considerably improves the speed and reliability of the search algorithms.
  • both the geometric measurements of the optoelectronic sensor, in particular a laser scanner, and the visual properties determined by the video image processing can then be used, which likewise significantly improves the reliability of the statements made.
  • a laser scanner can detect lane boundaries in the form of guiding or delimiting posts, from which the position of the lane can be deduced. This information can be used by the video system to find the white lane boundary lines in the video image more quickly.
  • the fusion area is determined on the basis of data about the presumed position of objects or specific areas on the objects.
  • the presumed location of objects can result from information from other systems.
  • The fusion region can preferably be determined using data from a digital road map, possibly in conjunction with a global positioning system receiver. Using the digital map, for example, the course of the road can be predicted with great accuracy. This prediction can then be used to support the interpretation of the depth and/or video images.
  • a plurality of depth images of one or more optoelectronic sensors are used which contain position information of objects in different detection planes.
  • Laser scanners which receive emitted electromagnetic radiation with a plurality of adjacent detectors which are not arranged parallel to the detection planes in which the scanning radiation beam moves can be used particularly preferably for this purpose.
  • the adaptation for segments in at least two of the several depth images takes place simultaneously.
  • the adjustment for several depth images in one step enables a consistent correction of the position information in the video image, so that the position data, in particular in a depth-resolved image, can also be corrected very precisely for inclined surfaces.
  • Certain types of optoelectronic sensors, such as laser scanners, acquire depth images by capturing the image points sequentially as the field of view is scanned. If the optoelectronic sensor moves relative to objects in the field of view, different object points of the same object appear shifted relative to one another because of this relative movement. Furthermore, there may be shifts relative to the video image of the video system, since the video images are recorded practically instantaneously on the time scale on which the field of view of a laser scanner is scanned (scan rates typically in the range of approximately 10 Hz). Such shifts not only lead to inaccuracies in the positions of the pixels of the depth image, but also to difficulties in merging depth images and video images.
  • the position coordinates of the image points of the depth image are each corrected in accordance with the actual or an approximated movement of the optoelectronic sensor and the difference between the acquisition times of the respective pixels of the depth image and a reference time. If segmentation is carried out, the correction is preferably carried out before segmentation.
  • the movement of the sensor can be taken into account, for example, depending on the quality of the correction via its speed or also via its speed and acceleration, here vectorial quantities, that is to say quantities with magnitude and direction.
  • the data on these kinematic quantities can be read in, for example.
  • For example, the vehicle's own speed and the steering angle or the yaw rate, obtained via appropriate vehicle sensors, can be used to specify the movement of the sensor. The movement of the sensor can then be calculated from these kinematic data of the vehicle, taking into account the sensor's position on the vehicle. However, the movement of the sensor or the kinematic data can also be determined from a corresponding parallel object detection and tracking in the optoelectronic sensor or from a subsequent object detection. Furthermore, a GPS position detection system, preferably with a digital map, can be used.
  • Kinematic data are preferably used which are acquired by the sensor in close proximity to the scanning and particularly preferably during the scanning.
  • Preferably, the displacements caused by the movement within the time difference are calculated from the kinematic data of the movement and the time difference between the acquisition time of the respective image point of the depth image and a reference time, and the coordinates of the image points of the depth image are corrected accordingly.
  • modified kinematic relationships can also be used.
  • a reverse transformation after the correction may be useful.
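A first-order version of this correction is sketched below: each pixel of the depth image is shifted to a common reference time using an assumed constant speed and yaw rate of the sensor during the scan. The coordinate conventions, variable names and the small-motion approximation are assumptions for illustration, not the patent's prescribed formulas.

```python
import math

def correct_for_ego_motion(points, times, t_ref, speed, yaw_rate):
    """points: (x, y) pixel coordinates in the sensor frame at their own
    acquisition times; times: acquisition time of each pixel; t_ref: reference
    time; speed: sensor speed along its x axis (m/s); yaw_rate: rad/s.
    Returns the pixel coordinates expressed in the sensor pose at t_ref."""
    corrected = []
    for (x, y), t in zip(points, times):
        dt = t_ref - t
        dx = speed * dt                    # sensor displacement up to t_ref
        dpsi = yaw_rate * dt               # sensor heading change up to t_ref
        xs, ys = x - dx, y                 # shift into the moved sensor origin
        xr = math.cos(dpsi) * xs + math.sin(dpsi) * ys   # rotate into new frame
        yr = -math.sin(dpsi) * xs + math.cos(dpsi) * ys
        corrected.append((xr, yr))
    return corrected
```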
  • An error in the positions of the pixels of the depth image can also be caused by two objects, one detected at the start of a scan and the other towards its end, moving against one another at high speed. The time lag between the acquisition times can then lead to the positions of the objects being shifted against each other. It is therefore preferred, when depth images are used that were obtained by successive scans of the field of view of the optoelectronic sensor, to carry out an object detection and/or tracking on the basis of the pixels of a sequence of depth images of the monitoring area, with image points being assigned to each detected object and movement data calculated during the object tracking being assigned to each of these image points, and to correct the position coordinates of the pixels of the depth image using the results of the object detection and/or tracking before the corresponding pixels in the video image are determined or before the segments are formed.
  • an object recognition and tracking is carried out in parallel with the image acquisition and evaluation, which processes the acquired data of at least the optoelectronic sensor or laser scanner.
  • Known methods can be used for object detection and / or tracking for each scan, although in principle comparatively simple methods are sufficient.
  • such a method can be carried out independently of a complex object detection and tracking method, in which the recorded data are processed and, for example, a complex object classification is carried out, with a tracking of segments in the depth image being sufficient.
  • This correction also reduces the risk that problems arise when pixels of the depth image are merged with pixels of the video image.
  • the subsequent processing of the pixels is facilitated.
  • the position coordinates of the pixels are particularly preferably corrected in accordance with the movement data assigned to them and the difference between the acquisition time of the pixels of the depth image and a reference time.
  • the movement data can in particular be kinematic data, the shifts used for the correction being carried out as described above from the vectorial speeds and possibly accelerations of the objects and the time difference between the acquisition time of a pixel of the depth image and the reference time.
  • corrections mentioned can be applied alternatively or cumulatively.
  • Although the reference point in time can in principle be chosen freely, it is preferably selected separately for each scan but chosen in the same way each time, since then no large time differences accumulate even after a large number of scans, and the positions are not shifted by a varying reference time of successive scans when the sensor is moving, which could make subsequent object detection and tracking more difficult.
  • the reference point in time is the point in time at which the video image is captured. This choice of the reference time corrects, in particular, a shift of pixels which correspond to objects that are moved relative to the sensor or objects that are moved relative to one another, on account of the detection time shifted relative to the video system, as a result of which the combination of depth and video images leads to better results.
  • the acquisition time of the video image can be synchronized with the scanning of the field of view of the optoelectronic sensor, it is particularly preferred that the acquisition time and thus the reference time between the earliest time of a scan defined as the acquisition time of a pixel of the depth image and the last time as the acquisition time of a pixel of the Depth image defined time of the scan is. This ensures that errors caused by the approximation in the kinematic description are kept as low as possible.
  • The acquisition time of one of the pixels of the scan can particularly advantageously be selected as the reference time, so that this pixel receives time zero as its acquisition time within the scan.
  • a depth image and a video image are recorded as the first step and their data are provided for the further method steps.
  • Another object of the invention is a method for recognizing and tracking objects, in which image information about the surveillance area is provided with a method according to one of the preceding claims, and on the basis of the image information provided, object detection and tracking is carried out.
  • the invention also relates to a computer program with program code means for carrying out one of the methods according to the invention when the program is executed on a computer.
  • the invention also relates to a computer program product with program code means which are stored on a computer-readable data carrier in order to carry out one of the methods according to the invention when the computer program product is executed on a computer.
  • a computer is understood here to mean any data processing device with which the method can be carried out.
  • it can have digital signal processors and / or microprocessors with which the method is carried out in whole or in part.
  • The invention further relates to a device for providing depth-resolved images of a monitoring area, with at least one optoelectronic sensor for detecting the position of objects in at least one plane, in particular a laser scanner, a video system with at least one video camera, and a data processing device connected to the optoelectronic sensor and the video system and designed to carry out one of the methods according to the invention.
  • the video system preferably has a stereo camera.
  • the video system is particularly preferably designed for capturing depth-resolved, three-dimensional images.
  • the device required to form the depth-resolved video image from the images of the stereo camera can either be contained in the video system or given by the data processing device in which the corresponding operations are carried out.
  • An optical axis of an imaging device of a video camera of the video system particularly preferably lies at least in the area of the optoelectronic sensor, preferably in the detection plane. This arrangement allows a particularly simple determination of mutually assigned pixels of the depth image and the video image.
  • In a further development, the video system has an arrangement of photodetection elements and the optoelectronic sensor is a laser scanner, the arrangement of photodetection elements and the laser scanner being pivotable, in particular about a common axis, since this also reduces the problems with regard to the synchronization of the acquisition of the video image and the depth image.
  • The arrangement of photodetection elements can in particular be a row, a column or a two-dimensional arrangement such as a matrix.
  • a column or a planar arrangement is preferably also used for the detection of pixels in a direction perpendicular to the detection plane.
  • Fig. 1 is a schematic plan view of a vehicle with a laser scanner, a video system with a monocular video camera, and a pile located in front of the vehicle,
  • FIG. 2 is a partially schematic side view of the vehicle and the pile in FIG. 1,
  • FIG. 3 shows a schematic partial illustration of a video image captured by the video system in FIG. 1,
  • Fig. 4 is a schematic plan view of a vehicle with a laser scanner, a video system with a stereo camera and a pile in front of the vehicle, and
  • FIG. 5 is a partial, schematic side view of the vehicle and the pile in FIG. 4.
  • a vehicle 10 for monitoring the area in front of the vehicle carries on its front a laser scanner 12 and a video system 14 with a monocular video camera 16. In the vehicle there is also one with the laser scanner 12 and the video system. 14 connected data processing device 18.
  • a pile 20 is located in front of the vehicle in the direction of travel.
  • the laser scanner 12 has a viewing area 22 which is only partially shown in FIG. 1 and, due to the mounting position, covers an angle of somewhat more than 180 ° symmetrically to the longitudinal axis of the vehicle 10.
  • the viewing area 22 is only shown schematically in FIG. 1 and is too small for a better illustration, in particular in the radial direction.
  • the viewing area 22 there is, for example, the pile 20 as the object to be detected.
  • The laser scanner 12 scans its field of view 22 in a generally known manner with a pulsed laser radiation beam 24 rotating at a constant angular velocity, it being detected, circumferentially at constant time intervals Δt at times τi in fixed angular ranges around an average angle αi, whether the radiation beam 24 is reflected from a point 26 or an area of an object such as the pile 20.
  • The index i runs from 1 to the number of angular ranges in the field of view 22. Of these angular ranges, only the one assigned to the average angle αi is shown in FIG. 1; it is shown exaggerated for clarity.
  • As can be seen in FIG. 2, the field of view 22 is two-dimensional except for the widening of the beam 24 and lies in a detection plane.
  • the sensor distance di of the object point 26 from the laser scanner 12 is determined on the basis of the transit time of the laser beam pulse.
  • the laser scanner 12 therefore detects, as coordinates in the image point for the object point 26 of the pile 20, the angle ⁇ i and the distance di determined at this angle, that is to say the position of the object point 26 in polar coordinates.
  • the amount of pixels recorded during a scan forms a depth image in the sense of the present application.
  • the laser scanner 12 scans its field of view 22 in successive scans, so that a chronological sequence of scans and corresponding depth images is produced.
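Expressed in code, the conversion of such a scan into Cartesian coordinates in the detection plane could look like the following minimal sketch (with x pointing ahead and y to the left); the example values are illustrative and not taken from the patent.

```python
import math

def scan_to_cartesian(scan):
    """scan: list of (angle alpha_i in radians, distance d_i in metres)
    measured by the laser scanner.  Returns (x, y) points in the detection plane."""
    return [(d * math.cos(a), d * math.sin(a)) for a, d in scan]

# e.g. three object points roughly 10 m ahead, slightly left of the vehicle axis
depth_image = scan_to_cartesian([(0.05, 10.0), (0.06, 10.0), (0.07, 10.1)])
```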
  • The monocular video camera 16 of the video system 14 is a conventional black-and-white video camera with a CCD area sensor 28 and an imaging device, which is shown schematically in FIGS. 1 and 2 as a simple lens 30 but actually consists of a lens system, and which images incident light from the viewing area 32 of the video system onto the CCD area sensor 28.
  • The CCD area sensor 28 has photodetection elements arranged in a matrix. The signals of the photodetection elements are read out, and video images are formed with pixels which contain the positions of the photodetection elements in the matrix, or another identifier for the photodetection elements, and in each case an intensity value corresponding to the intensity of the light received by the corresponding photodetection element.
  • the video images are acquired at the same rate at which depth images are acquired by the laser scanner 12.
  • a monitoring area 34 is shown schematically in FIGS. 1 and 2 approximately by a dotted line and is given by the part of the viewing area 32 of the video system whose projection onto the level of the viewing area 22 of the laser scanner lies within the viewing area 22.
  • the pile 20 is located within this monitoring area 34.
  • the data processing device 18 which is connected to the laser scanner 12 and the video system 14, is provided for processing the images of the laser scanner 12 and the video system 14.
  • the data processing device 18 has, inter alia, a digital signal processor programmed to carry out the method according to the invention and a memory device connected to the digital signal processor.
  • the data processing device can also have a conventional processor with which an inventive computer program for executing the method according to the invention is stored in the data processing device.
  • a depth image is first captured by the laser scanner 12 and a video image is captured by the video system 14.
  • The depth image captured by the laser scanner 12 then has image points 26', 36' and 38', which correspond to the object points 26, 36 and 38. These pixels are identified in FIG. 1 and FIG. 2 together with the corresponding object points. Of the video image, only the pixels 40 are shown in FIG. 3; they have essentially the same intensity values since they correspond to the pile 20.
  • a segment of the depth image is formed from pixels, at least two of which have at most a predetermined maximum distance.
  • the pixels 26 ', 36' and 38 ' form a segment.
  • the segments of the video image contain picture elements whose intensity values differ by less than a small predetermined maximum value.
  • The result of the segmentation is shown in FIG. 3.
  • Pixels of the video image are not shown which do not belong to the segment shown which corresponds to the pile 20.
  • the segment therefore essentially has a rectangular shape which corresponds to that of the pile 20.
  • the information from the video image is also used.
  • The entire monitoring area 34 is specified here as the fusion area. From the position coordinates of the pixels 26', 36' and 38' of the depth image, taking into account the relative position of the video system 14 to the detection plane of the laser scanner 12, its relative position to the laser scanner 12 and the imaging properties of the lens 30, those photodetection elements or pixels 39 of the video image are calculated which correspond to the object points 26, 36 and 38 and which are also shown in FIG. 3.
  • the segment formed from the image points 40 is assigned to the image points 26 ′, 36 ′ and 38 ′ corresponding to the object points 26, 36 and 38 or the segment of the depth image formed therefrom.
  • the height of the pile 20 can then be calculated from the height of the segment at a given distance, taking into account the imaging properties of the lens 30.
  • This information can also be assigned to the image points 26 ', 36' and 38 'of the depth image corresponding to the object points 26, 36 and 38.
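The height estimate mentioned above follows from the pinhole relation between image size, focal length and distance; here is a hedged sketch with assumed camera parameters (pixel pitch and focal length are illustrative, not values from the patent).

```python
def object_height_from_segment(n_rows, pixel_size, focal_length, distance):
    """Estimate the real height of an object from the vertical extent of its
    video-image segment (n_rows photodetection rows), the assumed pixel pitch
    and focal length of the lens, and the distance known from the depth image.
    Pinhole relation: height = n_rows * pixel_size * distance / focal_length."""
    return n_rows * pixel_size * distance / focal_length

# e.g. a segment 120 rows high, 6 um pixels, 8 mm focal length, pile at 10 m
print(object_height_from_segment(120, 6e-6, 0.008, 10.0))   # ~0.9 m
```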
  • These pixels of the depth image can also be output or stored together with the associated information.
  • A second method according to a further embodiment of the invention according to the first alternative differs from the first method in that it is not the information from the depth image but rather the information of the video image that is to be supplemented.
  • the fusion area is therefore defined differently.
  • the position of the segment formed from the pixels 40 of the video image in the area or at the level of the detection plane of the laser scanner 12 is used to calculate which area in the detection plane of the laser scanner 12 the segment in the Video image can correspond. Since the distance of the segment formed from the pixels 40 from the video system 14 is initially unknown, an entire fusion region results. For pixels of the depth image lying in this fusion region, an assignment to the segment in the video image is then determined, as in the first method. The distance of the segment in the video image from the laser scanner 12 can thus be determined. This information then represents an addition to the video image data, which can be taken into account in image processing of the video image.
  • Another embodiment of the invention according to the second alternative will now be described with reference to FIGS. 4 and 5.
  • the same reference numerals are used in the following and reference is made to the above exemplary embodiment for a more detailed description.
  • a vehicle 10 carries a laser scanner 12 and a video system 42 with a stereo camera 44 for monitoring the area in front of the vehicle.
  • A data processing device 46 connected to the laser scanner 12 and the video system 42 is also located in the vehicle.
  • the video system 42 with the stereo camera 44 is provided, which is designed for capturing depth-resolved images.
  • the stereo camera is formed by two monocular video cameras 48a and 48b attached to the front outer edges of the vehicle 10 and an evaluation device 50, which is connected to the video cameras 48a and 48b and processes their signals into three-dimensional video images with a high resolution.
  • the monocular video cameras 48a and 48b are each constructed like the video camera 16 of the first exemplary embodiment and are oriented towards one another in a predetermined geometry such that their viewing areas 52a and 52b overlap.
  • the overlap area of the viewing areas 52a and 52b forms the viewing area 32 of the stereo camera 44 and the video system 42, respectively.
  • the pixels detected by the video cameras 48a and 48b within the viewing area 32 of the video system are fed to the evaluation device 50, which calculates a depth-resolved image from these pixels taking into account the position and orientation of the video cameras 48a and 48b, which contains pixels with three-dimensional position coordinates and intensity information.
  • the monitoring area 34 is provided by the viewing areas 22 and 32 of the laser scanner 12 and of the video system 42.
  • the laser scanner detects pixels 26 ', 36' and 38 'with high accuracy, which correspond to the object points 26, 36 and 38 on the pile 20.
  • the video system captures pixels in three dimensions.
  • The image points 26", 36" and 38" shown in FIG. 4, captured by the video system 42 and corresponding to the object points 26, 36 and 38, have, due to the method used for the acquisition, greater positional inaccuracies in the depth direction of the image; this means that the distances from the video system given by the position coordinates of a pixel are not very precise.
  • FIG. 5 shows further pixels 54 of the depth-resolved video image which do not correspond directly to any pixels in the depth image of the laser scanner 12, since they are not in or near the detection plane in which the object points 26, 36 and 38 lie. For the sake of clarity, further pixels have been omitted.
  • the data processing device 46 is provided, which is connected to the laser scanner 12 and the video system 42 for this purpose.
  • The data processing device 46 has, inter alia, a digital signal processor programmed to carry out the method according to the invention in accordance with the second alternative and a memory device connected to the digital signal processor.
  • the data processing device can also have a conventional processor with which an inventive computer program for executing an embodiment of the method according to the invention described below is executed.
  • a depth image is captured and read in by the laser scanner 12 and a depth-resolved, three-dimensional video image by the video system 42.
  • the images are then segmented, wherein the segmentation of the video image may also have taken place in the evaluation device 50 before or during the calculation of the depth-resolved images.
  • the pixels 26 ', 36' and 38 'of the depth image corresponding to the object points 26, 36 and 38 form a segment of the depth image.
  • In the video image, those pixels are determined which lie at most a predetermined maximum distance from the detection plane in which the radiation beam 24 is moved. If one assumes that the depth-resolved images have layers of pixels in the direction perpendicular to the detection plane, corresponding to the structure of the CCD area sensors of the video cameras 48a and 48b, the maximum distance can be given, for example, by the spacing of the layers.
  • This step provides a sub-segment of the segment of the video image with the pixels 26 ", 36" and 38 ", which corresponds to the segment of the depth image.
  • the position of the sub-segment is now adapted to the much more precisely determined position of the depth segment.
  • For this purpose, the sum of the squared distances between the position coordinates of all pixels of the segment of the depth image and the position coordinates of all pixels of the sub-segment transformed by a translation and/or rotation is minimized as a function of the translation and/or rotation.
  • the position coordinates are transformed with the optimal translation and / or rotation determined in this way.
  • As a result, the entire segment of the video image is aligned such that, in the area in which it intersects the detection plane, it has the optimal position with respect to the segment of the depth image determined by the laser scanner.
  • a suitable segment of the video image can also be determined on the basis of a segment of the depth image, an exact three-dimensional image with a high resolution being again provided after the adaptation.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Facsimile Heads (AREA)
  • Silver Salt Photography Or Processing Solution Therefor (AREA)
  • Non-Silver Salt Photosensitive Materials And Non-Silver Salt Photography (AREA)
  • Photosensitive Polymer And Photoresist Processing (AREA)
  • Image Processing (AREA)
EP02743204A 2001-06-15 2002-06-14 Procede pour preparer des informations imagees Withdrawn EP1395852A1 (fr)

Applications Claiming Priority (9)

Application Number Priority Date Filing Date Title
DE10128954A DE10128954A1 (de) 2001-06-15 2001-06-15 Korrekturverfahren für Daten mehrerer optoelektronischer Sensoren
DE10128954 2001-06-15
DE10132335A DE10132335A1 (de) 2001-07-04 2001-07-04 Verfahren und Vorrichtung zur Lokalisierung von Objekten im Raum
DE10132335 2001-07-04
DE10148062A DE10148062A1 (de) 2001-09-28 2001-09-28 Verfahren zur Verarbeitung eines tiefenaufgelösten Bildes
DE10148062 2001-09-28
DE10154861 2001-11-08
DE10154861A DE10154861A1 (de) 2001-11-08 2001-11-08 Verfahren zur Bereitstellung von Bildinformationen
PCT/EP2002/006599 WO2002103385A1 (fr) 2001-06-15 2002-06-14 Procede pour preparer des informations imagees

Publications (1)

Publication Number Publication Date
EP1395852A1 true EP1395852A1 (fr) 2004-03-10

Family

ID=27437982

Family Applications (4)

Application Number Title Priority Date Filing Date
EP02743204A Withdrawn EP1395852A1 (fr) 2001-06-15 2002-06-14 Procede pour preparer des informations imagees
EP02013172A Expired - Lifetime EP1267178B1 (fr) 2001-06-15 2002-06-14 Procédé de traitement d'une image à haute définition
EP02013171A Ceased EP1267177A1 (fr) 2001-06-15 2002-06-14 Procédé et dispositif pour la localisation d'objets dans l'espace
EP02751031A Expired - Lifetime EP1405100B1 (fr) 2001-06-15 2002-06-14 Procede de correction de donnees relatives a plusieurs capteurs optoelectroniques

Family Applications After (3)

Application Number Title Priority Date Filing Date
EP02013172A Expired - Lifetime EP1267178B1 (fr) 2001-06-15 2002-06-14 Procédé de traitement d'une image à haute définition
EP02013171A Ceased EP1267177A1 (fr) 2001-06-15 2002-06-14 Procédé et dispositif pour la localisation d'objets dans l'espace
EP02751031A Expired - Lifetime EP1405100B1 (fr) 2001-06-15 2002-06-14 Procede de correction de donnees relatives a plusieurs capteurs optoelectroniques

Country Status (6)

Country Link
US (1) US7570793B2 (fr)
EP (4) EP1395852A1 (fr)
JP (2) JP4669661B2 (fr)
AT (2) ATE405853T1 (fr)
DE (2) DE50212810D1 (fr)
WO (2) WO2002103385A1 (fr)

Families Citing this family (99)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6124886A (en) 1997-08-25 2000-09-26 Donnelly Corporation Modular rearview mirror assembly
US6326613B1 (en) 1998-01-07 2001-12-04 Donnelly Corporation Vehicle interior mirror assembly adapted for containing a rain sensor
US6445287B1 (en) 2000-02-28 2002-09-03 Donnelly Corporation Tire inflation assistance monitoring system
US6278377B1 (en) 1999-08-25 2001-08-21 Donnelly Corporation Indicator for vehicle accessory
US8288711B2 (en) 1998-01-07 2012-10-16 Donnelly Corporation Interior rearview mirror system with forwardly-viewing camera and a control
US6420975B1 (en) 1999-08-25 2002-07-16 Donnelly Corporation Interior rearview mirror sound processing system
US7480149B2 (en) 2004-08-18 2009-01-20 Donnelly Corporation Accessory module for vehicle
AU2001243285A1 (en) 2000-03-02 2001-09-12 Donnelly Corporation Video mirror systems incorporating an accessory module
US6396408B2 (en) 2000-03-31 2002-05-28 Donnelly Corporation Digital electrochromic circuit with a vehicle network
US6824281B2 (en) 2002-01-31 2004-11-30 Donnelly Corporation Vehicle accessory module
DE10312249A1 (de) * 2003-03-19 2004-09-30 Ibeo Automobile Sensor Gmbh Verfahren zur gemeinsamen Verarbeitung von tiefenaufgelösten Bildern und Videobildern
DE10312611A1 (de) 2003-03-21 2004-09-30 Daimlerchrysler Ag Verfahren und Vorrichtung zum Erfassen eines Objekts im Umfeld eines Kraftfahrzeugs
DE10325709A1 (de) * 2003-06-06 2004-12-23 Valeo Schalter Und Sensoren Gmbh Vorrichtung und Verfahren zum Erkennen des Konturverlaufes eines Hindernisses
DE10336638A1 (de) 2003-07-25 2005-02-10 Robert Bosch Gmbh Vorrichtung zur Klassifizierung wengistens eines Objekts in einem Fahrzeugumfeld
JP3941765B2 (ja) 2003-09-11 2007-07-04 トヨタ自動車株式会社 物体検出装置
US7047132B2 (en) * 2004-01-12 2006-05-16 Steven Jacobs Mobile vehicle sensor array
EP1839290B1 (fr) 2004-12-01 2013-08-21 Zorg Industries (Hong Kong) Limited Systeme integre pour vehicule permettant d'eviter des chocs a faible vitesse
EP1827908B1 (fr) 2004-12-15 2015-04-29 Magna Electronics Inc. Systeme module accessoire pour fenetre d'un vehicule
USRE46672E1 (en) 2006-07-13 2018-01-16 Velodyne Lidar, Inc. High definition LiDAR system
DE102006057277A1 (de) * 2006-12-05 2008-06-12 Robert Bosch Gmbh Verfahren zum Betrieb eines Radarsystems bei möglicher Zielobjektverdeckung sowie Radarsystem zur Durchführung des Verfahrens
US8662397B2 (en) * 2007-09-27 2014-03-04 Symbol Technologies, Inc. Multiple camera imaging-based bar code reader
JP5136927B2 (ja) 2007-10-09 2013-02-06 オプテックス株式会社 レーザエリアセンサ
US20100001075A1 (en) * 2008-07-07 2010-01-07 Symbol Technologies, Inc. Multi-imaging scanner for reading images
US8570374B2 (en) 2008-11-13 2013-10-29 Magna Electronics Inc. Camera for vehicle
US8415609B2 (en) * 2009-01-31 2013-04-09 Keyence Corporation Safety photoelectric switch
US8717225B2 (en) 2009-07-31 2014-05-06 Honda Motor Co., Ltd. Object detection device for vehicle and object detection method for vehicle
DE202010012985U1 (de) 2010-11-25 2012-02-27 Sick Ag Sensoranordnung zur Objekterkennung
DE102010060942A1 (de) 2010-12-01 2012-06-06 Sick Ag Sensoranordnung zur Objekterkennung
RU2468382C1 (ru) * 2011-05-24 2012-11-27 Открытое Акционерное Общество "Производственное Объединение "Уральский Оптико-Механический Завод" Имени Э.С. Яламова" (Оао "По "Уомз") Способ формирования сигнала управления в следящей системе
US8885151B1 (en) 2012-09-04 2014-11-11 Google Inc. Condensing sensor data for transmission and processing
JP6019959B2 (ja) * 2012-09-06 2016-11-02 富士通株式会社 物体検出装置、物体検出プログラムおよび車両
US10557939B2 (en) 2015-10-19 2020-02-11 Luminar Technologies, Inc. Lidar system with improved signal-to-noise ratio in the presence of solar background noise
CN108369274B (zh) 2015-11-05 2022-09-13 路明亮有限责任公司 用于高分辨率深度映射的具有经改进扫描速度的激光雷达系统
EP3411660A4 (fr) 2015-11-30 2019-11-27 Luminar Technologies, Inc. Système lidar avec laser distribué et plusieurs têtes de détection et laser pulsé pour système lidar
US10627490B2 (en) 2016-01-31 2020-04-21 Velodyne Lidar, Inc. Multiple pulse, LIDAR based 3-D imaging
JP7149256B2 (ja) 2016-03-19 2022-10-06 ベロダイン ライダー ユーエスエー,インコーポレイテッド Lidarに基づく3次元撮像のための統合された照射及び検出
US10429496B2 (en) 2016-05-27 2019-10-01 Analog Devices, Inc. Hybrid flash LIDAR system
CA3024510C (fr) 2016-06-01 2022-10-04 Velodyne Lidar, Inc. Lidar a balayage a pixels multiples
US10942257B2 (en) 2016-12-31 2021-03-09 Innovusion Ireland Limited 2D scanning high precision LiDAR using combination of rotating concave mirror and beam steering devices
US9905992B1 (en) 2017-03-16 2018-02-27 Luminar Technologies, Inc. Self-Raman laser for lidar system
US9810775B1 (en) 2017-03-16 2017-11-07 Luminar Technologies, Inc. Q-switched laser for LIDAR system
US9810786B1 (en) 2017-03-16 2017-11-07 Luminar Technologies, Inc. Optical parametric oscillator for lidar system
US9869754B1 (en) 2017-03-22 2018-01-16 Luminar Technologies, Inc. Scan patterns for lidar systems
US10139478B2 (en) 2017-03-28 2018-11-27 Luminar Technologies, Inc. Time varying gain in an optical detector operating in a lidar system
US10061019B1 (en) 2017-03-28 2018-08-28 Luminar Technologies, Inc. Diffractive optical element in a lidar system to correct for backscan
US10732281B2 (en) 2017-03-28 2020-08-04 Luminar Technologies, Inc. Lidar detector system having range walk compensation
US10267899B2 (en) 2017-03-28 2019-04-23 Luminar Technologies, Inc. Pulse timing based on angle of view
US10114111B2 (en) 2017-03-28 2018-10-30 Luminar Technologies, Inc. Method for dynamically controlling laser power
US10121813B2 (en) 2017-03-28 2018-11-06 Luminar Technologies, Inc. Optical detector having a bandpass filter in a lidar system
US10209359B2 (en) 2017-03-28 2019-02-19 Luminar Technologies, Inc. Adaptive pulse rate in a lidar system
US10254388B2 (en) 2017-03-28 2019-04-09 Luminar Technologies, Inc. Dynamically varying laser output in a vehicle in view of weather conditions
US11119198B2 (en) 2017-03-28 2021-09-14 Luminar, Llc Increasing operational safety of a lidar system
US10545240B2 (en) 2017-03-28 2020-01-28 Luminar Technologies, Inc. LIDAR transmitter and detector system using pulse encoding to reduce range ambiguity
US10007001B1 (en) 2017-03-28 2018-06-26 Luminar Technologies, Inc. Active short-wave infrared four-dimensional camera
US10976417B2 (en) 2017-03-29 2021-04-13 Luminar Holdco, Llc Using detectors with different gains in a lidar system
US10983213B2 (en) 2017-03-29 2021-04-20 Luminar Holdco, Llc Non-uniform separation of detector array elements in a lidar system
US10641874B2 (en) 2017-03-29 2020-05-05 Luminar Technologies, Inc. Sizing the field of view of a detector to improve operation of a lidar system
US10191155B2 (en) 2017-03-29 2019-01-29 Luminar Technologies, Inc. Optical resolution in front of a vehicle
US10663595B2 (en) 2017-03-29 2020-05-26 Luminar Technologies, Inc. Synchronized multiple sensor head system for a vehicle
US10969488B2 (en) 2017-03-29 2021-04-06 Luminar Holdco, Llc Dynamically scanning a field of regard using a limited number of output beams
US10254762B2 (en) 2017-03-29 2019-04-09 Luminar Technologies, Inc. Compensating for the vibration of the vehicle
WO2018183715A1 (fr) 2017-03-29 2018-10-04 Luminar Technologies, Inc. Procédé de commande de puissance de crête et de puissance moyenne par le biais d'un récepteur laser
US10088559B1 (en) 2017-03-29 2018-10-02 Luminar Technologies, Inc. Controlling pulse timing to compensate for motor dynamics
US11002853B2 (en) 2017-03-29 2021-05-11 Luminar, Llc Ultrasonic vibrations on a window in a lidar system
US10241198B2 (en) 2017-03-30 2019-03-26 Luminar Technologies, Inc. Lidar receiver calibration
US10295668B2 (en) 2017-03-30 2019-05-21 Luminar Technologies, Inc. Reducing the number of false detections in a lidar system
US10401481B2 (en) 2017-03-30 2019-09-03 Luminar Technologies, Inc. Non-uniform beam power distribution for a laser operating in a vehicle
US9989629B1 (en) 2017-03-30 2018-06-05 Luminar Technologies, Inc. Cross-talk mitigation using wavelength switching
US10684360B2 (en) 2017-03-30 2020-06-16 Luminar Technologies, Inc. Protecting detector in a lidar system using off-axis illumination
US11022688B2 (en) 2017-03-31 2021-06-01 Luminar, Llc Multi-eye lidar system
EP3593166B1 (fr) 2017-03-31 2024-04-17 Velodyne Lidar USA, Inc. Commande de puissance d'éclairage à lidar intégré
US20180284246A1 (en) 2017-03-31 2018-10-04 Luminar Technologies, Inc. Using Acoustic Signals to Modify Operation of a Lidar System
US10677897B2 (en) 2017-04-14 2020-06-09 Luminar Technologies, Inc. Combining lidar and camera data
JP2020519881A (ja) 2017-05-08 2020-07-02 ベロダイン ライダー, インク. Lidarデータ収集及び制御
DE102017118156A1 (de) 2017-08-09 2019-02-14 Valeo Schalter Und Sensoren Gmbh Verfahren zum Überwachen eines Umgebungsbereiches eines Kraftfahrzeugs, Sensorsteuergerät, Fahrerassistenzsystem sowie Kraftfahrzeug
US10211593B1 (en) 2017-10-18 2019-02-19 Luminar Technologies, Inc. Optical amplifier with multi-wavelength pumping
US10451716B2 (en) 2017-11-22 2019-10-22 Luminar Technologies, Inc. Monitoring rotation of a mirror in a lidar system
US10663585B2 (en) 2017-11-22 2020-05-26 Luminar Technologies, Inc. Manufacturing a balanced polygon mirror
US11294041B2 (en) 2017-12-08 2022-04-05 Velodyne Lidar Usa, Inc. Systems and methods for improving detection of a return signal in a light ranging and detection system
US11493601B2 (en) 2017-12-22 2022-11-08 Innovusion, Inc. High density LIDAR scanning
WO2019165294A1 (fr) 2018-02-23 2019-08-29 Innovusion Ireland Limited Système de direction bidimensionnelle de systèmes lidar
WO2020013890A2 (fr) 2018-02-23 2020-01-16 Innovusion Ireland Limited Orientation d'impulsions à longueurs d'onde multiples dans des systèmes lidar
US10578720B2 (en) 2018-04-05 2020-03-03 Luminar Technologies, Inc. Lidar system with a polygon mirror and a noise-reducing feature
US11029406B2 (en) 2018-04-06 2021-06-08 Luminar, Llc Lidar system with AlInAsSb avalanche photodiode
FR3080922B1 (fr) * 2018-05-03 2020-09-18 Transdev Group Dispositif electronique et procede de detection d'un objet via un lidar a balayage, vehicule automobile autonome et programme d'ordinateur associes
US10348051B1 (en) 2018-05-18 2019-07-09 Luminar Technologies, Inc. Fiber-optic amplifier
US10591601B2 (en) 2018-07-10 2020-03-17 Luminar Technologies, Inc. Camera-gated lidar system
US10627516B2 (en) * 2018-07-19 2020-04-21 Luminar Technologies, Inc. Adjustable pulse characteristics for ground detection in lidar systems
US10551501B1 (en) 2018-08-09 2020-02-04 Luminar Technologies, Inc. Dual-mode lidar system
US10340651B1 (en) 2018-08-21 2019-07-02 Luminar Technologies, Inc. Lidar system with optical trigger
US11971507B2 (en) 2018-08-24 2024-04-30 Velodyne Lidar Usa, Inc. Systems and methods for mitigating optical crosstalk in a light ranging and detection system
US10712434B2 (en) 2018-09-18 2020-07-14 Velodyne Lidar, Inc. Multi-channel LIDAR illumination driver
US11082010B2 (en) 2018-11-06 2021-08-03 Velodyne Lidar Usa, Inc. Systems and methods for TIA base current detection and compensation
US11885958B2 (en) 2019-01-07 2024-01-30 Velodyne Lidar Usa, Inc. Systems and methods for a dual axis resonant scanning mirror
US11774561B2 (en) 2019-02-08 2023-10-03 Luminar Technologies, Inc. Amplifier input protection circuits
CN111745635B (zh) * 2019-03-28 2022-03-04 苏州科瓴精密机械科技有限公司 识别反光标方法、移动机器人定位方法及移动机器人系统
US10613203B1 (en) 2019-07-01 2020-04-07 Velodyne Lidar, Inc. Interference mitigation for light detection and ranging
US11556000B1 (en) 2019-08-22 2023-01-17 Red Creamery Llc Distally-actuated scanning mirror
JP2021148746A (ja) * 2020-03-23 2021-09-27 株式会社リコー 測距装置及び測距方法

Family Cites Families (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE2021566A1 (de) 1970-05-02 1971-11-25 Ibm Deutschland Anordnung zur raeumlichen und zeitlichen Modulation eines Lichtstrahles
JPS5843009A (ja) * 1981-09-07 1983-03-12 Toyota Motor Corp 自動車速制御装置
DE3832720A1 (de) 1988-09-27 1990-03-29 Bosch Gmbh Robert Abstandsmesseinrichtung zur beruehrungslosen abstands- und winkelerkennung
DE3915631A1 (de) 1989-05-12 1990-11-15 Dornier Luftfahrt Navigationsverfahren
US5189619A (en) * 1989-09-05 1993-02-23 Toyota Jidosha Kabushiki Kaisha AI-based adaptive vehicle control system
US5128874A (en) 1990-01-02 1992-07-07 Honeywell Inc. Inertial navigation sensor integrated obstacle detection system
DE69124726T2 (de) 1990-10-25 1997-07-03 Mitsubishi Electric Corp Vorrichtung zur Abstandsdetektion für ein Kraftfahrzeug
FR2676284B1 (fr) * 1991-05-07 1994-12-02 Peugeot Procede de detection d'obstacles presents devant un vehicule automobile et dispositif pour la mise en oeuvre d'un tel procede.
DE4119180A1 (de) 1991-06-11 1992-12-17 Merkel Peter Dr Verfahren und vorrichtung zur vermessung und dokumentation der geometrie, struktur- und materialbeschaffenheit von dreidimensionalen objekten, wie fassaden und raeume sowie deren waende, einrichtungen und installationen
DE4142702A1 (de) 1991-12-21 1993-06-24 Leuze Electronic Gmbh & Co Einrichtung zum abtasten dreidimensionaler objekte
US5756981A (en) * 1992-02-27 1998-05-26 Symbol Technologies, Inc. Optical scanner for reading and decoding one- and-two-dimensional symbologies at variable depths of field including memory efficient high speed image processing means and high accuracy image analysis means
DE4301538A1 (de) 1992-03-17 1994-07-28 Peter Dr Ing Brueckner Verfahren und Anordnung zur berührungslosen dreidimensionalen Messung, insbesondere zur Messung von Gebißmodellen
DE4222642A1 (de) 1992-07-10 1994-01-13 Bodenseewerk Geraetetech Bilderfassende Sensoreinheit
US5275354A (en) * 1992-07-13 1994-01-04 Loral Vought Systems Corporation Guidance and targeting system
JP3232724B2 (ja) * 1992-12-08 2001-11-26 株式会社デンソー 車間距離制御装置
JP3263699B2 (ja) 1992-12-22 2002-03-04 三菱電機株式会社 走行環境監視装置
DE4320485B4 (de) 1993-06-21 2007-04-19 Eads Deutschland Gmbh Verfahren zur Objektvermessung mittels intelligenter Entfernungsbildkamera
JP3183598B2 (ja) * 1993-12-14 2001-07-09 三菱電機株式会社 障害物検知装置
CN1043464C (zh) * 1993-12-27 1999-05-26 现代电子产业株式会社 使用激光的汽车碰撞预防装置及其方法
JP3212218B2 (ja) 1994-05-26 2001-09-25 三菱電機株式会社 車両用障害物検出装置
JP2854805B2 (ja) * 1994-06-15 1999-02-10 富士通株式会社 物体認識方法および視覚装置
DE4434233A1 (de) 1994-09-24 1995-11-16 Peter Dr Ing Brueckner Verfahren und Anordnung zur berührungslosen dreidimensionalen Messung, insbesondere von ungleichförmig bewegten Meßobjekten
JPH08122060A (ja) * 1994-10-21 1996-05-17 Mitsubishi Electric Corp 車両周辺監視システム
JP3498406B2 (ja) 1995-02-17 2004-02-16 マツダ株式会社 自動車の走行制御装置
DE19516324A1 (de) 1995-04-24 1996-10-31 Gos Ges Zur Foerderung Angewan Meßverfahren und Anordnung zur Messung der Lage-, Form- und Bewegungsparameter entfernter Objekte
JP3230642B2 (ja) * 1995-05-29 2001-11-19 ダイハツ工業株式会社 先行車検出装置
IL114278A (en) * 1995-06-22 2010-06-16 Microsoft Internat Holdings B Camera and method
US5717401A (en) * 1995-09-01 1998-02-10 Litton Systems, Inc. Active recognition system with optical signal processing
JP3336825B2 (ja) * 1995-09-25 2002-10-21 三菱自動車工業株式会社 障害物認識装置
DE19603267A1 (de) 1996-01-30 1997-07-31 Bosch Gmbh Robert Vorrichtung zur Abstands- und/oder Positionsbestimmung
DE59802284D1 (de) 1997-02-20 2002-01-17 Volkswagen Ag Verfahren und Vorrichtung zum Bestimmen eines Abstandes eines Hindernisses von einem Fahrzeug
DE19757840C1 (de) 1997-12-24 1999-09-30 Johann F Hipp Vorrichtung zur optischen Erfassung und Abstandermittlung von Objekten von einem Fahrzeug aus
US6175652B1 (en) * 1997-12-31 2001-01-16 Cognex Corporation Machine vision system for analyzing features based on multiple object images
US5966090A (en) * 1998-03-16 1999-10-12 Mcewan; Thomas E. Differential pulse radar motion sensor
DE19815149A1 (de) 1998-04-03 1999-10-07 Leuze Electronic Gmbh & Co Sensoranordnung
US5966678A (en) * 1998-05-18 1999-10-12 The United States Of America As Represented By The Secretary Of The Navy Method for filtering laser range data
US6055490A (en) * 1998-07-27 2000-04-25 Laser Technology, Inc. Apparatus and method for determining precision reflectivity of highway signs and other reflective objects utilizing an optical range finder instrument
JP2000075030A (ja) 1998-08-27 2000-03-14 Aisin Seiki Co Ltd スキャン型レーザーレーダ装置
WO2000013037A1 (fr) * 1998-08-31 2000-03-09 Osaka Gas Co., Ltd. Procede de recherche tridimensionnel, procede d'affichage de donnees de voxels tridimensionnelles, et dispositif de realisation de ces procedes
DE19910667A1 (de) 1999-03-11 2000-09-21 Volkswagen Ag Vorrichtung mit mindestens einem Lasersensor und Verfahren zum Betreiben eines Lasersensors
US6662649B1 (en) * 1999-03-19 2003-12-16 Simmons Sirvey Corporation Material level monitoring and reporting
DE19947766A1 (de) 1999-10-02 2001-05-10 Bosch Gmbh Robert Einrichtung zur Überwachung der Umgebung eines einparkenden Fahrzeugs
DE10143504A1 (de) * 2001-09-05 2003-03-20 Sick Ag Überwachungsverfahren und optoelektronischer Sensor

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BECKER J.C.: "Fusion of heterogeneous sensors for the guidance of an autonomous vehicle", PROCEEDINGS OF THE THIRD INTERNATIONAL CONFERENCE ON INFORMATION FUSION, vol. 2, July 2000 (2000-07-01), PARIS, FRANCE, pages WED5/11 - WED5/18 *
STILLER ET AL.: "Multisensor obstacle detection and tracking", IMAGE AND VISION COMPUTING *

Also Published As

Publication number Publication date
WO2003001241A1 (fr) 2003-01-03
US7570793B2 (en) 2009-08-04
DE50212810D1 (de) 2008-11-06
EP1405100B1 (fr) 2008-09-24
ATE409320T1 (de) 2008-10-15
EP1267178A1 (fr) 2002-12-18
JP2004530902A (ja) 2004-10-07
EP1267178B1 (fr) 2008-08-20
JP4669661B2 (ja) 2011-04-13
ATE405853T1 (de) 2008-09-15
DE50212663D1 (de) 2008-10-02
WO2002103385A1 (fr) 2002-12-27
EP1405100A1 (fr) 2004-04-07
JP2004530144A (ja) 2004-09-30
US20050034036A1 (en) 2005-02-10
EP1267177A1 (fr) 2002-12-18

Similar Documents

Publication Publication Date Title
EP1395852A1 (fr) Procede pour preparer des informations imagees
DE10154861A1 (de) Verfahren zur Bereitstellung von Bildinformationen
EP1589484B1 (fr) Procédé pour la détection et/ou le suivi d'objets
EP3034995B1 (fr) Procédé de détermination d'un décalage d'orientation ou de position d'un appareil de mesure géodésique et appareil de mesure correspondant
EP2742319B1 (fr) Appareil de mesure pour déterminer la position spatiale d'un instrument de mesure auxiliaire
EP0842395B1 (fr) Procede et dispositif de detection rapide de la position d'un repere cible
DE102010012811B4 (de) Verfahren zur Messung von Geschwindigkeiten und Zuordnung der gemessenen Geschwindigkeiten zu angemessenen Fahrzeugen durch Erfassen und Zusammenführen von Objekt-Trackingdaten und Bild-Trackingdaten
AT412028B (de) Einrichtung zur aufnahme eines objektraumes
EP3396409B1 (fr) Procédé d'étalonnage d'une caméra et d'un balayeur laser
EP1460454B1 (fr) Procédé de traitement combiné d'images à haute définition et d'images vidéo
DE112010004767T5 (de) Punktwolkedaten-Verarbeitungsvorrichtung, Punktwolkedaten-Verarbeitungsverfahren und Punktwolkedaten-Verarbeitungsprogramm
DE102004033114A1 (de) Verfahren zur Kalibrierung eines Abstandsbildsensors
DE112017007467T5 (de) Verfahren zum Bereitstellen einer Interferenzreduzierung und eines dynamischen Bereichs von Interesse in einem LIDAR-System
DE102004010197A1 (de) Verfahren zur Funktionskontrolle einer Positionsermittlungs- oder Umgebungserfassungseinrichtung eines Fahrzeugs oder zur Kontrolle einer digitalen Karte
DE102018108027A1 (de) Objekterfassungsvorrichtung
EP1298454A2 (fr) Procédé de reconnaissance et de suivi d'objets
EP1953568A1 (fr) Élément semi-conducteur d'imageur, système de caméra et procédé de production d'une image
DE102009030644B4 (de) Berührungslose Erfassungseinrichtung
DE102020129096A1 (de) Erzeugung dreidimensionaler punktwolken mittels einer polarimetrischen kamera in einem mit einem fahrassistenzsystem ausgestatteten fahrzeug
DE10148064A1 (de) Verfahren zur Erkennung und Verfolgung von Objekten
DE102006039104A1 (de) Verfahren zur Entfernungsmessung von Objekten auf von Bilddaten eines Monokamerasystems
EP1298012B1 (fr) Procédé de reconnaissance et de poursuite d'objets
EP3663881B1 (fr) Procédé de commande d'un véhicule autonome en fonction des vecteurs de mouvement estimés
DE10148062A1 (de) Verfahren zur Verarbeitung eines tiefenaufgelösten Bildes
DE10019462A1 (de) Straßenflächensensorsystem

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20031030

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

AX Request for extension of the european patent

Extension state: AL LT LV MK RO SI

17Q First examination report despatched

Effective date: 20041230

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20080410