WO2022263683A1 - Method for detecting an object in a road surface, method for autonomous driving and automotive lighting device - Google Patents
- Publication number
- WO2022263683A1 (PCT/EP2022/066769)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- stripes
- light
- image
- light pattern
- stripe
- Prior art date
- 2021-06-18
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
- G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements
- G06V10/145—Illumination specially adapted for pattern recognition, e.g. using gratings
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/42—Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
- G06V10/431—Frequency domain transformation; Autocorrelation
Abstract
The present invention refers to a method for detecting an object in a road surface, the method comprising the steps of projecting a light pattern on the road surface, acquiring an image of the projected light pattern, detecting a shadow in the acquired image and using some features of the shadow to obtain information about features of an object. The invention also provides a method for autonomous driving using this object detection and an automotive lighting device.
Description
This invention is related to the field of automotive luminous devices, and more particularly, to the ones used in autonomous driving conditions.
Autonomous driving is being developed to provide vehicles which are capable of sensing their environment and moving safely with a reduced human input.
To achieve this goal, these autonomous vehicles combine a variety of sensors to perceive their surroundings. Advanced control systems receive the sensor data and build a representation of the vehicle's surroundings to identify appropriate navigation paths, as well as obstacles and relevant signalling.
A classification system with six levels, ranging from fully manual to fully automated systems, was published in 2014 by SAE International, an automotive standardization body, as J3016. This classification is based on the amount of driver intervention and attentiveness required, rather than the vehicle's capabilities, although these are loosely related. In 2016, SAE updated its classification, called J3016_201609.
In Level 3 of this classification, the driver can safely turn their attention away from the driving tasks, e.g. the driver can text or watch a movie. The vehicle will handle situations that call for an immediate response, like emergency braking. The driver must still be prepared to intervene within some limited time, specified by the manufacturer, when called upon by the vehicle to do so.
The luminous performance of lighting devices is essential for the sensors to receive all the relevant information necessary to achieve this driving mode, especially at night.
The present invention provides a solution for this problem by means of a method for detecting an object in a road surface, the method comprising the steps of
- projecting a light pattern on a road surface, the light pattern comprising light stripes;
- acquiring an image of the projected light pattern;
- detecting object data in the light stripes of the acquired image; and
- using the object data of the light stripes to infer features of an object.
In this method, a lighting device projects a light pattern on the road surface where the object is intended to be detected. This pattern comprises light stripes. An image device, such as a camera, acquires the image of the road illuminated by this light pattern. When a debris object is present in this image, it causes a deformation in the stripes of the light pattern. This deformation is easily identifiable, since the processing unit which analyses the image has information about the original light pattern projected by the lighting device. The shape and dimensions of the deformed portion are used to provide information about the detected object and its importance. Thus, this lighting method can be used as an aid to night autonomous driving, improving safety and accuracy in the object detection.
In some particular embodiments, the stripes are not continuous.
The fact that the stripes are not continuous means that the stripes may have some portions with a first luminous intensity and some other portions, between the first portions, with a second luminous intensity (or even completely dark, to increase the contrast). With this approach, a higher detail is obtained, and a more accurate identification of the object is provided.
In some particular embodiments, the stripes are horizontal and the object data comprises the frequency of the stripes.
In this case, the term “horizontal” is understood as parallel to the plane of the road and perpendicular to the advance direction of the vehicle (i.e. to the lane lines). It is the usual sense of this term. As a consequence, if an object is detected, the frequency of the stripes (number of lines per metre) is different from the frequency in the original pattern, thus indicating that there is an object. Since this distorted zone ends abruptly on each side of the object, the horizontal lines are particularly advantageous for measuring the width of the object.
In some particular embodiments, the step of detecting object data comprises performing a Fast Fourier Transform or a Discrete Cosine Transform on the acquired image.
A Fast Fourier Transform (FFT) is suitable for measuring this distortion in the frequency of the lines. The parameters of the FFT are set using the relative height between the image sensor and the light module.
A Discrete Cosine Transform (DCT) is also suitable for measuring this distortion. It is sometimes more adequate, since it only uses real number operations.
A co-occurrence matrix can also be used in addition to these two transforms. This statistical approach provides valuable information about the neighbouring pixels in an image, which helps to detect those changes (between lighted stripe and dark space).
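As an illustration of how such a frequency analysis could be carried out, the following Python sketch estimates the dominant stripe frequency in vertical bands of a grayscale image and flags bands that deviate from the median; all function names, band widths and thresholds are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def stripe_frequency(image, column_band):
    """Estimate the dominant vertical frequency (cycles per pixel) of
    horizontal stripes inside a band of image columns, using a 1-D FFT."""
    profile = image[:, column_band].mean(axis=1)  # average columns into one profile
    profile = profile - profile.mean()            # remove the DC component
    spectrum = np.abs(np.fft.rfft(profile))
    freqs = np.fft.rfftfreq(profile.size)
    return freqs[np.argmax(spectrum[1:]) + 1]     # skip the zero-frequency bin

def bands_with_frequency_change(image, band_width=32, rel_threshold=0.2):
    """Flag column bands whose stripe frequency deviates from the median
    frequency of the whole image, hinting at an object in that zone."""
    n_cols = image.shape[1]
    bands = [slice(c, c + band_width) for c in range(0, n_cols, band_width)]
    freqs = np.array([stripe_frequency(image, b) for b in bands])
    median = np.median(freqs)
    return [i for i, f in enumerate(freqs)
            if abs(f - median) > rel_threshold * median]
```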
In some particular embodiments, the stripes are vertical and the object data comprises the angle of a portion of one stripe with respect to the rest of the stripe.
In this case, the term “vertical” is understood as perpendicular to the plane of the road. It is the usual sense of this term. As a consequence, if an object is detected, the lines are “refracted”, changing the angle of the portion of the line projected over the object, thus indicating that there is an object. Since this distorted zone ends abruptly at the top side of the object, the vertical lines are particularly advantageous for measuring the height of the object.
In some particular embodiments, the step of projecting the light pattern is performed by a light module, the step of acquiring the image is performed by an image sensor and the method further comprises a first step of optimizing the horizontal distance between the light module and the image sensor to maximize the angle between two portions of a stripe when an object is detected.
The horizontal distance (i.e., the distance measured in a horizontal line, without considering the height difference) between the light module and the image sensor has an influence on the angle that the deformed portion of the stripe forms with the non-deformed portion of the stripe. Hence, optimizing the horizontal distance between the light module and the image sensor helps the processing unit to detect and identify the deformed portion of the light pattern caused by the object.
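A minimal numerical sketch of this geometric effect, under an assumed simplified pinhole model with illustrative focal length, baseline and distance values:

```python
def stripe_pixel_shift(baseline_m, focal_px, road_dist_m, object_dist_m):
    """Approximate horizontal pixel shift of a stripe point that falls on
    an object instead of the road. Standard parallax: disparity = f*b/z,
    so the shift is the difference between the two depths. The shift (and
    hence the angle of the deformed stripe portion) grows with the
    horizontal baseline between the light module and the image sensor."""
    return focal_px * baseline_m * (1.0 / object_dist_m - 1.0 / road_dist_m)

# Doubling the baseline doubles the shift, which is why mounting the two
# elements far apart helps detection (illustrative numbers):
print(stripe_pixel_shift(0.4, 1200, 20.0, 18.0))   # ~2.7 px, narrow baseline
print(stripe_pixel_shift(1.6, 1200, 20.0, 18.0))   # ~10.7 px, wide baseline
```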
In some particular embodiments, the stripes are oriented forming an angle between 1º and 89º with respect to the horizontal.
The stripes can also be diagonal, forming an angle with respect to the horizontal (the “horizontal” concept is the same as in the rest of the document). With diagonal stripes, the advantages of the horizontal stripes and the vertical stripes are combined: the object on the road modifies both the angle and the frequency of the diagonal stripes, so that the processing unit may obtain the shape and dimensions of the object more accurately. An FFT may be used to detect a change in the frequency, and the change in the angle of a portion of the stripes may also be detected.
In some particular embodiments, the method further comprises the step of optimizing the angle of the stripe with respect to the horizontal to maximize the angle that a portion of the stripe forms with respect to the rest of the stripe in the acquired images.
One optimization possibility includes maximizing the angle that the deformed portion forms with respect to the original projected stripe. This optimizes the information provided to the processing unit, for a better identification of the portion of the light pattern deformed by the object.
In some particular embodiments, the method further comprises the step of increasing the luminous intensity of the light pattern when an object is detected.
It is possible to have a first luminous intensity level for standard lighting (to save energy) and then, when the object is detected, to increase the luminous intensity for a better accuracy.
In some particular embodiments, the light pattern is projected by a headlamp or by a reverse light.
The method may be used for lighting the road ahead or behind the vehicle, like in a parking operation or in any other reverse manoeuvring.
In some particular embodiments, the features of the object comprise the position, the width and/or the height of the object.
These features are useful for assessing the relevance of the detected object, in order to make the best decision possible.
In some particular embodiments, the method further comprises the step of defining the distance between two consecutive stripes as a function of a desired detection range.
Depending on the resolution of the light module and the desired accuracy in the detection of the obstacles, the width of the stripes and the distance between them may be chosen.
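One possible way to quantify this choice, assuming a simple flat-road geometry with the light module at a known mounting height; the formula follows from d = h/tan(θ), and all numbers are illustrative:

```python
import math

def angular_step_for_spacing(mount_height_m, ground_range_m, spacing_m):
    """Angular step (degrees) between two consecutive projected stripes so
    that they land `spacing_m` apart on the road at `ground_range_m`, for a
    light module mounted `mount_height_m` above the road.
    From d = h / tan(theta): |dd/dtheta| = (d^2 + h^2) / h."""
    dtheta_rad = spacing_m * mount_height_m / (ground_range_m**2 + mount_height_m**2)
    return math.degrees(dtheta_rad)

# The same ground spacing requires a much finer angular step far away:
print(angular_step_for_spacing(0.7, 10.0, 0.5))   # ~0.20 deg at 10 m
print(angular_step_for_spacing(0.7, 50.0, 0.5))   # ~0.008 deg at 50 m
```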
In some particular embodiments,
- the method comprises a first step of providing the lighting device with a labelled database of debris objects, wherein the database contains objects with different sizes, materials, shapes, orientations and shadows;
- the step of using the object data of the light stripes to obtain information about features of an object is carried out by a machine learning process; and
- the machine learning process includes a pre-processing of the images, which includes an image equalization to enhance the contrast between the lighted surface and the shadow created thereby.
The image equalization improves the contrast, thus boosting the learning process.
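A minimal pre-processing sketch of such an equalization step, assuming OpenCV is available; the choice of CLAHE over global equalization and its parameters are illustrative assumptions:

```python
import cv2

def preprocess(image_bgr):
    """Pre-processing sketch: grayscale conversion plus histogram
    equalization to boost the contrast between the lighted stripes and
    the dark gaps, before the image is fed to the learning process."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Local (CLAHE) equalization can be more robust on road scenes than
    # global equalization, since the pattern brightness falls off with
    # distance; plain cv2.equalizeHist(gray) would also work.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray)
```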
In a second inventive aspect, the invention provides a method for autonomous managing of a vehicle, comprising the steps of
- performing the detection of an object with a method according to the first inventive aspect;
- using the obtained features of the object to decide a suitable vehicle manoeuvre;
- checking if the vehicle manoeuvre can be performed in security conditions; and
- performing the manoeuvre.
The method for detecting an object may be used for a method for the autonomous driving of a vehicle. When the object is detected, the detection method provides the necessary features that allow the adoption of a correct manoeuvre to avoid collision.
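A rough sketch of this decision sequence; the object fields, thresholds and the lane-availability check are hypothetical, since the patent does not specify them:

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    # Features inferred from the deformed stripes (illustrative fields).
    width_m: float
    height_m: float

def decide_manoeuvre(obj, lane_clearance_m=1.5, overrun_height_m=0.08,
                     adjacent_lane_free=False):
    """Choose a manoeuvre from the object features, check that it can be
    performed in security conditions, and return it."""
    if obj.height_m <= overrun_height_m:
        return "continue"                  # object low enough to drive over
    if obj.width_m < lane_clearance_m and adjacent_lane_free:
        return "change_lane"               # only when the adjacent lane is free
    return "brake"                         # fall back to slowing or stopping

print(decide_manoeuvre(DetectedObject(width_m=0.6, height_m=0.25),
                       adjacent_lane_free=True))   # -> change_lane
```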
In a further inventive aspect, the invention provides an automotive lighting device comprising
- a plurality of solid-state light sources, configured to project the light pattern in a method according to a previous inventive aspect;
- an image sensor configured to acquire an image of the projected light pattern in a method according to a previous inventive aspect; and
- a processing unit configured to perform the rest of the steps of a method according to a previous inventive aspect.
The term "solid state" refers to light emitted by solid-state electroluminescence, which uses semiconductors to convert electricity into light. Compared to incandescent lighting, solid state lighting creates visible light with reduced heat generation and less energy dissipation. The typically small mass of a solid-state electronic lighting device provides for greater resistance to shock and vibration compared to brittle glass tubes/bulbs and long, thin filament wires. They also eliminate filament evaporation, potentially increasing the lifespan of the illumination device. Some examples of these types of lighting comprise semiconductor light-emitting diodes (LEDs), organic light-emitting diodes (OLED), or polymer light-emitting diodes (PLED) as sources of illumination rather than electrical filaments, plasma or gas.
A matrix arrangement is a typical example for this method. The rows may be grouped in projecting distance ranges and each column of each group represents an angle interval. This angle value depends on the resolution of the matrix arrangement, which is typically between 0.01º per column and 0.5º per column. As a consequence, many light sources may be managed at the same time.
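A small sketch of such an addressing scheme, with illustrative distance bands and angular resolution:

```python
def pixel_for_target(distance_m, angle_deg, range_edges, deg_per_column):
    """Map a ground target (distance, horizontal angle) to a (row, col)
    address of the LED matrix, assuming rows are grouped into distance
    ranges and each column covers a fixed angle interval."""
    row = next(i for i, edge in enumerate(range_edges) if distance_m <= edge)
    col = int(angle_deg / deg_per_column)
    return row, col

# Illustrative layout: four distance bands, 0.1 degrees per column.
print(pixel_for_target(35.0, 2.5, [10, 25, 50, 100], 0.1))  # -> (2, 25)
```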
In some particular embodiments, the image sensor and the plurality of solid-state light sources are located at extreme horizontal positions, maximizing the horizontal distance between them.
The image sensor and the lighting module with the solid-state light sources may be located within the same lighting device or may be arranged in different locations. When the lighting module and the image sensor are located at extreme positions in the vehicle (i.e., one element at the left edge and the other one at the right edge), the horizontal distance between them is maximum, and the angle perceived by the image sensor of the stripes projected by the lighting module is optimal.
Unless otherwise defined, all terms (including technical and scientific terms) used herein are to be interpreted as is customary in the art. It will be further understood that terms in common usage should also be interpreted as is customary in the relevant art and not in an idealised or overly formal sense unless expressly so defined herein.
In this text, the term “comprises” and its derivations (such as “comprising”, etc.) should not be understood in an excluding sense, that is, these terms should not be interpreted as excluding the possibility that what is described and defined may include further elements, steps, etc.
To complete the description and in order to provide for a better understanding of the invention, a set of drawings is provided. Said drawings form an integral part of the description and illustrate an embodiment of the invention, which should not be interpreted as restricting the scope of the invention, but just as an example of how the invention can be carried out. The drawings comprise the following figures:
Figure 1, which shows a general perspective view of an automotive lighting device according to the invention.
In these figures, the following reference numbers are used:
1 Lighting device
2 LEDs
3 Control unit
4 Camera
5 Road surface
6 Light pattern
7 Stripes
8 Object zone
9 Projection direction
11 Sensing direction
100 Automotive vehicle
The example embodiments are described in sufficient detail to enable those of ordinary skill in the art to embody and implement the systems and processes herein described. It is important to understand that embodiments can be provided in many alternate forms and should not be construed as limited to the examples set forth herein.
Accordingly, while embodiments can be modified in various ways and take on various alternative forms, specific embodiments thereof are shown in the drawings and described in detail below as examples. There is no intent to limit to the particular forms disclosed. On the contrary, all modifications, equivalents, and alternatives falling within the scope of the appended claims should be included.
This headlamp 1 is installed in an automotive vehicle 100 and comprises
- a matrix arrangement of LEDs 2, intended to provide a light pattern;
- a control unit 3 to control the operation of the LEDs 2; and
- a camera 4 intended to provide some external data.
This matrix configuration is a high-resolution module, having a resolution greater than 2000 pixels. However, no restriction is attached to the technology used for producing the projection modules.
A first example of this matrix configuration comprises a monolithic source. This monolithic source comprises a matrix of monolithic electroluminescent elements arranged in several columns by several rows. In a monolithic matrix, the electroluminescent elements can be grown from a common substrate and are electrically connected to be selectively activatable either individually or by a subset of electroluminescent elements. The substrate may be predominantly made of a semiconductor material. The substrate may comprise one or more other materials, for example non-semiconductors (metals and insulators). Thus, each electroluminescent element/group can form a light pixel and can therefore emit light when its/their material is supplied with electricity. The configuration of such a monolithic matrix allows the arrangement of selectively activatable pixels very close to each other, compared to conventional light-emitting diodes intended to be soldered to printed circuit boards. The monolithic matrix may comprise electroluminescent elements whose main dimension of height, measured perpendicularly to the common substrate, is substantially equal to one micrometre.
The monolithic matrix is coupled to the control centre so as to control the generation and/or the projection of a pixelated light beam by the matrix arrangement. The control centre is thus able to individually control the light emission of each pixel of the matrix arrangement.
As an alternative to what has been presented above, the matrix arrangement may comprise a main light source coupled to a matrix of mirrors. Thus, the pixelated light source is formed by the assembly of at least one main light source formed of at least one light emitting diode emitting light and an array of optoelectronic elements, for example a matrix of micro-mirrors, also known by the acronym DMD, for "Digital Micro-mirror Device", which directs the light rays from the main light source by reflection to a projection optical element. Where appropriate, an auxiliary optical element can collect the rays of at least one light source to focus and direct them to the surface of the micro-mirror array.
Each micro-mirror can pivot between two fixed positions, a first position in which the light rays are reflected towards the projection optical element, and a second position in which the light rays are reflected in a different direction from the projection optical element. The two fixed positions are oriented in the same manner for all the micro-mirrors and form, with respect to a reference plane supporting the matrix of micro-mirrors, a characteristic angle of the matrix of micro-mirrors defined in its specifications. Such an angle is generally less than 20° and may be usually about 12°. Thus, each micro-mirror reflecting a part of the light beams which are incident on the matrix of micro-mirrors forms an elementary emitter of the pixelated light source. The actuation and control of the change of position of the mirrors for selectively activating this elementary emitter to emit or not an elementary light beam is controlled by the control centre.
In different embodiments, the matrix arrangement may comprise a scanning laser system wherein a laser light source emits a laser beam towards a scanning element which is configured to explore the surface of a wavelength converter with the laser beam. An image of this surface is captured by the projection optical element.
The exploration of the scanning element may be performed at a speed sufficiently high so that the human eye does not perceive any displacement in the projected image.
The synchronized control of the switching-on of the laser source and the scanning movement of the beam makes it possible to generate a matrix of elementary emitters that can be activated selectively at the surface of the wavelength converter element. The scanning means may be a mobile micro-mirror for scanning the surface of the wavelength converter element by reflection of the laser beam. The micro-mirrors mentioned as scanning means are for example of the MEMS type, for "Micro-Electro-Mechanical Systems". However, the invention is not limited to such a scanning means and can use other kinds of scanning means, such as a series of mirrors arranged on a rotating element, the rotation of the element causing a scanning of the transmission surface by the laser beam.
In another variant, the light source may be complex and include both at least one segment of light elements, such as light emitting diodes, and a surface portion of a monolithic light source.
Every 0.2 seconds, the camera of the lighting device acquires image data of the projected light pattern 6, which comprises a plurality of horizontal stripes 7 projected on the road surface 5. When an object is present on the road surface 5, the acquired image data contains a zone 8 where the width and frequency of the stripes are different from the rest of the image, due to the fact that the pattern is projected over a surface (the surface of an object) which forms an angle with respect to the road.
At this stage, there are two options. The first is to analyse the image as such; the second is to modify the light pattern for a better identification of the object, by increasing the luminous intensity of the light stripes.
In any case, the processing unit receives the acquired image with the deformed zone, either with standard luminous intensity or with an increased one.
The processing unit performs a Fast Fourier Transform (FFT) over the image. In the resulting image, the FFT clearly indicates that there is a change in the frequency of the light stripes, thus indicating that there is an object on the road, and provides information about its height and width. Therefore, the autonomous driving system of the vehicle is able to use this information to decide the best way of avoiding the object (braking, reducing speed or changing lane). However, other methods, such as a Discrete Cosine Transform (DCT) or a co-occurrence matrix, could be used for this purpose.
An alternative pattern of horizontal stripes comprises non-continuous stripes, arranged in a chessboard pattern. With this approach, a higher detail is obtained, and a more accurate identification of the object is provided. However, the rest of the steps of the method remain the same.
As in the previous case, every 0.2 seconds the camera of the lighting device acquires image data of the projected light pattern 6. When an object is present on the road, the acquired image data contains a zone 8 where a portion of each affected stripe (the number of affected stripes depending on the width of the object) forms an angle with respect to the remaining stripe, due to the fact that it is projected over a surface (the surface of an object) which forms an angle with respect to the road.
At this stage, there are two options. The first is to analyse the image as such; the second is to modify the light pattern for a better identification of the object, by increasing the luminous intensity of the light stripes.
In any case, the processing unit receives the acquired image with the deformed zone, either with standard luminous intensity or with an increased one.
In this case, the processing unit performs an image analysis to identify the stripe portions which are oriented in a different way with respect to the original pattern. When they are identified, the zone to which these biased stripes belong is categorized as an object, so that its position and dimensions may be obtained.
The angle that the biased portions of the stripes form with respect to the non-biased portions of the stripes depends on the horizontal distance between the lighting module (projecting the light pattern) and the image sensor (acquiring an image of the projected light pattern).
This angle is maximum when the image sensor is at the greatest possible distance from the light projector. If the lighting module is located at the extreme position of a headlamp, the image sensor should be located at the extreme position of the opposite headlamp. Hence, the horizontal distance between these two elements is maximum, thus maximizing the angle between the projection direction 9 and the sensing direction 11. With such an arrangement, the angle between the biased portions and the non-biased portions of the stripes is maximized as well, thus improving the operation of the processing unit.
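One plausible way to identify the biased stripe portions is a line-segment detector such as a probabilistic Hough transform; the patent does not prescribe this technique, so the following OpenCV sketch, with illustrative Canny and Hough parameters, is only an assumption:

```python
import math
import cv2
import numpy as np

def biased_stripe_angles(gray, angle_tol_deg=5.0):
    """Find stripe segments whose orientation deviates from the vertical,
    marking the portion of the pattern projected onto an object."""
    edges = cv2.Canny(gray, 50, 150)
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                               minLineLength=20, maxLineGap=5)
    biased = []
    if segments is None:
        return biased
    for x1, y1, x2, y2 in segments.reshape(-1, 4):
        # Angle measured from the vertical image axis: 0 deg for an
        # undeformed vertical stripe, non-zero for a biased portion.
        angle = math.degrees(math.atan2(x2 - x1, y2 - y1))
        if abs(angle) > angle_tol_deg:
            biased.append(((x1, y1, x2, y2), angle))
    return biased
```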
In any of these examples (image of the light pattern without modifications or image of the modified light pattern), the processing unit receives the image and processes it.
At these stages, there are different methods for the processing unit to analyse the shadow.
A first optional stage comprises performing an image equalization, to enhance the contrast between the lighted stripes and the black zones between stripes. This enhanced contrast will be useful for the processing unit, for a better identification and quantification of the object zone.
A second optional stage is the use of machine learning. The processing unit may undergo a supervised learning process before being installed in the automotive vehicle: a database of debris objects is provided within a preliminary training stage.
The processing unit comprises a convolutional neural network with some convolutional blocks, each convolutional block comprising several convolutional layers and a max pool layer, which operate on the input (the image from the database). The network further comprises the same number of deconvolutional blocks, each block comprising an upsampling layer and convolutional layers. The output of this process will lead to the dimensions of the object zone.
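A minimal PyTorch sketch of such an encoder-decoder layout; the channel counts, depths and the per-pixel output head are illustrative assumptions, not taken from the patent:

```python
import torch
import torch.nn as nn

class StripeSegNet(nn.Module):
    """Two convolutional blocks with max pooling, mirrored by the same
    number of upsampling + convolutional blocks, ending in a per-pixel
    "object zone" mask from which the zone dimensions can be read."""
    def __init__(self):
        super().__init__()
        def down(cin, cout):                      # conv block + max pool
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2))
        def up(cin, cout):                        # upsampling + conv block
            return nn.Sequential(
                nn.Upsample(scale_factor=2),
                nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())
        self.encoder = nn.Sequential(down(1, 16), down(16, 32))
        self.decoder = nn.Sequential(up(32, 16), up(16, 8))
        self.head = nn.Conv2d(8, 1, 1)

    def forward(self, x):
        return torch.sigmoid(self.head(self.decoder(self.encoder(x))))
```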
An alternative arrangement of a convolutional neural network may comprise some convolutional layers, which operate on the input (the image from the database). This network comprises skip connections to propagate input features faster through residual connections. This network also comprises a final fully-connected layer.
In one operational example, since the objects present in the images are limited, the network learns to classify a provided shadow within a predefined set of surface ranges. The task is for the network to estimate the surface as if it were a probability function.
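A compact PyTorch sketch of this alternative network, with an assumed number of surface bins:

```python
import torch
import torch.nn as nn

class SurfaceClassifier(nn.Module):
    """Convolutional layers with a residual (skip) connection and a final
    fully-connected layer outputting a probability distribution over
    predefined surface bins."""
    def __init__(self, n_bins=16):
        super().__init__()
        self.stem = nn.Conv2d(1, 32, 3, padding=1)
        self.block = nn.Sequential(
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1))
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(32, n_bins)

    def forward(self, x):
        x = torch.relu(self.stem(x))
        x = torch.relu(x + self.block(x))         # residual skip connection
        logits = self.fc(self.pool(x).flatten(1))
        return torch.softmax(logits, dim=1)       # surface as a probability
```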
This invention may also be used in different situations: the light may be projected by the headlamp, but it may also be projected by a rear lamp, such as a reverse light. This example would be useful when parking the car, where an accurate map of the obstacles is needed for an autonomous operation.
This invention may also be used when dealing with the object. The processing unit, once it has detected and identified the presence, position, orientation and size of the object, decides the best way of overcoming it: either by changing the lane, by decreasing the speed, or even by totally stopping the vehicle. Autonomous driving steps are performed according to the invention for the most suitable operation that avoids being damaged by the object. However, sometimes, the system has to check if this manoeuvre is possible (because there are no nearby vehicles) before performing it.
Claims (15)
- Method for detecting an object in a road surface, the method comprising the steps of:
- projecting a light pattern on a road surface, the light pattern comprising light stripes;
- acquiring an image of the projected light pattern;
- detecting object data in the light stripes of the acquired image; and
- using the object data of the light stripes to infer features of an object.
- Method according to claim 1, wherein the stripes are not continuous.
- Method according to any of the preceding claims, wherein the stripes are horizontal and the object data comprises the frequency of the stripes.
- Method according to any of the preceding claims, wherein the step of detecting object data comprises performing a Fast Fourier Transform on the acquired image.
- Method according to any of claims 1 or 2, wherein the stripes are vertical and the object data comprises the angle of a portion of one stripe with respect to the rest of the stripe.
- Method according to claim 5, wherein the step of projecting the light pattern is performed by a light module, the step of acquiring the image is performed by an image sensor and the method further comprises a first step of optimizing the horizontal distance between the light module and the image sensor to maximize the angle between two portions of a stripe when an object is detected.
- Method according to any of claims 1 or 2, wherein the stripes are oriented forming an angle between 1º and 89º with respect to the horizontal.
- Method according to claim 7, further comprising the step of optimizing the angle of the stripe with respect to the horizontal to maximize the angle that a portion of the stripe forms with respect to the rest of the stripe in the acquired images.
- Method according to any of the preceding claims, further comprising the step of increasing the luminous intensity of the light pattern when an object is detected.
- Method according to any of the preceding claims, wherein the features contain the position, the width and/or the height of the object.
- Method according to any of the preceding claims, further comprising the step of defining the distance between two consecutive stripes as a function of a desired detection range.
- Method according to any of the preceding claims, wherein
- the method comprises a first step of providing the lighting device with a labelled database of debris objects, wherein the database contains objects with different sizes, materials, shapes, orientations and shadows;
- the step of using the object data of the light stripes to obtain information about features of an object is carried out by a machine learning process; and
- the machine learning process includes a pre-processing of the images, which includes an image equalization to enhance the contrast between the lighted surface and the shadow created thereby.
- Method for autonomous managing of a vehicle, comprising the steps of
- performing the detection of an object with a method according to any of the preceding claims;
- using the obtained features of the object to decide a suitable vehicle manoeuvre;
- checking if the vehicle manoeuvre can be performed in security conditions; and
- performing the manoeuvre.
- Automotive lighting device implementing the method for detecting an object according to any of claims 1 to 12, comprising
- a plurality of solid-state light sources, configured to project the light pattern;
- an image sensor configured to acquire an image of the projected light pattern; and
- a processing unit configured to perform the rest of the steps.
- Automotive lighting device according to claim 14, wherein the image sensor and the plurality of solid-state light sources are located at extreme horizontal positions, maximizing the horizontal distance between them.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR2106505A FR3124253A1 (en) | 2021-06-18 | 2021-06-18 | Method for detecting an object on a road surface, autonomous driving method and automotive lighting device |
FR2106505
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022263683A1 true WO2022263683A1 (en) | 2022-12-22 |
Family
ID=78536268
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2022/066769 WO2022263683A1 (en) | 2021-06-18 | 2022-06-20 | Method for detecting an object in a road surface, method for autonomous driving and automotive lighting device |
Country Status (2)
Country | Link |
---|---|
FR (1) | FR3124253A1 (en) |
WO (1) | WO2022263683A1 (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080063239A1 (en) * | 2006-09-13 | 2008-03-13 | Ford Motor Company | Object detection system and method |
US20160378117A1 (en) * | 2015-06-24 | 2016-12-29 | Brain Corporation | Bistatic object detection apparatus and methods |
US20190220677A1 (en) * | 2018-01-17 | 2019-07-18 | GM Global Technology Operations LLC | Structured light illumination system for object detection |
- 2021-06-18: FR application FR2106505A filed; published as FR3124253A1 (status: active, pending)
- 2022-06-20: PCT application PCT/EP2022/066769 filed; published as WO2022263683A1 (status: unknown)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080063239A1 (en) * | 2006-09-13 | 2008-03-13 | Ford Motor Company | Object detection system and method |
US20160378117A1 (en) * | 2015-06-24 | 2016-12-29 | Brain Corporation | Bistatic object detection apparatus and methods |
US20190220677A1 (en) * | 2018-01-17 | 2019-07-18 | GM Global Technology Operations LLC | Structured light illumination system for object detection |
Non-Patent Citations (1)
Title |
---|
AVINASH SHARMA: "PROJECTED TEXTURE FOR 3D OBJECT RECOGNITION", 1 July 2008 (2008-07-01), pages 1 - 80, XP055286933, Retrieved from the Internet <URL:http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.469.6918&rep=rep1&type=pdf> [retrieved on 20160708] * |
Also Published As
Publication number | Publication date |
---|---|
FR3124253A1 (en) | 2022-12-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11505112B2 (en) | Method for controlling a light pattern using a matrix of light sources responsive to steering angle | |
CN113330246B (en) | Method for correcting light pattern and automobile lighting device | |
CN113226849B (en) | Method for correcting light pattern and motor vehicle lighting device assembly | |
WO2022263683A1 (en) | Method for detecting an object in a road surface, method for autonomous driving and automotive lighting device | |
WO2022263685A1 (en) | Method for detecting an object in a road surface, method for autonomous driving and automotive lighting device | |
WO2022263684A1 (en) | Method for detecting an object in a road surface, method for autonomous driving and automotive lighting device | |
EP3672369B1 (en) | Method for controlling a light pattern and automotive lighting device | |
US20240130025A1 (en) | Method for controlling an automotive lighting device | |
EP4201740A1 (en) | Automotive lighting device and automotive vehicle | |
EP4202495A1 (en) | Automotive lighting device and automotive vehicle | |
EP4201741A1 (en) | Automotive lighting device and automotive vehicle | |
EP4202292A1 (en) | Automotive lighting device and automotive vehicle | |
EP4202496A1 (en) | Automotive lighting arrangement and automotive vehicle | |
EP4202384A1 (en) | Automotive lighting device and automotive vehicle | |
EP3702215B1 (en) | Method for correcting a light pattern and automotive lighting device | |
EP4202503A1 (en) | Automotive lighting device and automotive vehicle | |
FR3120258A1 (en) | Method for controlling an automotive lighting device | |
EP3670262A1 (en) | Method for correcting a light pattern and automotive lighting device assembly | |
WO2023118130A1 (en) | Automotive lighting arrangement and automotive vehicle | |
FR3120214A1 (en) | Method for controlling an automotive lighting device and automotive lighting device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22737807; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |