FR3120214A1 - Method for controlling an automotive lighting device and automotive lighting device

Method for controlling an automotive lighting device and automotive lighting device

Info

Publication number
FR3120214A1
Authority
FR
France
Prior art keywords
lighting device
zone
entering
image
visibility zone
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
FR2101884A
Other languages
French (fr)
Inventor
Ali Kanj
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Valeo Vision SAS
Original Assignee
Valeo Vision SAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Valeo Vision SAS filed Critical Valeo Vision SAS
Priority to FR2101884A priority Critical patent/FR3120214A1/en
Priority to FR2108144A priority patent/FR3120213A3/en
Publication of FR3120214A1 publication Critical patent/FR3120214A1/en
Pending legal-status Critical Current

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60Q - ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q1/00 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor
    • B60Q1/02 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to illuminate the way ahead or to illuminate other areas of way or environments
    • B60Q1/04 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to illuminate the way ahead or to illuminate other areas of way or environments the devices being headlights
    • B60Q1/06 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to illuminate the way ahead or to illuminate other areas of way or environments the devices being headlights adjustable, e.g. remotely-controlled from inside vehicle
    • B60Q1/08 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to illuminate the way ahead or to illuminate other areas of way or environments the devices being headlights adjustable, e.g. remotely-controlled from inside vehicle automatically
    • B60Q1/085 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to illuminate the way ahead or to illuminate other areas of way or environments the devices being headlights adjustable, e.g. remotely-controlled from inside vehicle automatically due to special conditions, e.g. adverse weather, type of road, badly illuminated road signs or potential dangers
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/765 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects using rules for classification or partitioning the feature space
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60Q - ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q1/00 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor
    • B60Q1/0017 - Devices integrating an element dedicated to another function
    • B60Q1/0023 - Devices integrating an element dedicated to another function the element being a sensor, e.g. distance sensor, camera
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60Q - ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q2300/00 - Indexing codes for automatically adjustable headlamps or automatically dimmable headlamps
    • B60Q2300/30 - Indexing codes relating to the vehicle environment
    • B60Q2300/33 - Driving situation
    • B60Q2300/337 - Tunnels or bridges

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Mechanical Engineering (AREA)
  • Lighting Device Outwards From Vehicle And Optical Signal (AREA)

Abstract

The present invention refers to a method for controlling an automotive lighting device. This method comprises capturing an image of a region in front of the lighting device, recognizing the presence of a low visibility zone in the image and, when the presence of a low visibility zone is recognized, extracting data from the image, wherein the data comprises a distance between the lighting device and the low visibility zone. Afterwards, the method comprises estimating an entering time for the lighting device to enter the low visibility zone and activating a lighting functionality in the lighting device a comfort time before the time estimated for entering the low visibility zone. Figure for the abstract: Figure 3.

Description

Method for controlling an automotive lighting device and automotive lighting device

This invention is related to the field of automotive luminous devices, and more particularly, to the automatic management thereof.

Automotive lighting devices comprise light sources, so that the lighting device may provide light for lighting and/or signalling purposes. Several types of light source families are used nowadays, all of them having advantages and disadvantages.

In some scenarios, it is useful to provide automatic switching on and off of the lighting devices, since it relieves the driver from having to turn the lights on and off when lighting conditions suddenly change (for example, when entering or exiting a tunnel).

Systems for automatic switching on and off of the lighting devices are known, and are based on a sensor which is configured to sense exterior luminosity. When low luminosity conditions are detected, instructions are sent to the control unit to activate the corresponding lighting functionality.

However, these sensors introduce a delay between the moment the conditions are detected and the moment the lighting functionality is activated.

This delay may cause visual discomfort for the driver, who enters a tunnel and only receives the required illumination one or two seconds later.

A solution for this problem is therefore sought.

The invention provides a solution for this problem by means of a method for controlling an automotive lighting device, the method comprising the steps of

  • capturing an image of a region in front of the lighting device;
  • recognizing the presence of a low visibility zone in the image;
  • when the presence of a low visibility zone is recognized, extracting data from the image, wherein the data comprises a distance between the lighting device and the low visibility zone;
  • estimating an entering time for the lighting device to enter the low visibility zone; and
  • activating a lighting functionality in the lighting device a comfort time before the time estimated for entering the low visibility zone.

With such a method, the delay between entering a dark location and the activation of the corresponding lighting functionality is eliminated, since the system foresees the presence of the dark location and calculates the remaining distance before entering it. The system does not respond to a sudden decrease in ambient light, but is aware of the dark location well before entering it. Hence, the delay is eliminated, since the system may prepare to provide a suitable lighting functionality at the optimal moment.
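
The timing behind this behaviour can be made concrete with a minimal sketch (a hedged illustration only; the function and variable names are assumptions, not part of the patent): the entering time is derived from the extracted distance and the vehicle speed, and the functionality is triggered one comfort time earlier.

```python
def time_until_activation(distance_m: float, speed_mps: float, comfort_time_s: float) -> float:
    """How long to wait before activating the lighting functionality.

    distance_m: distance between the lighting device and the low visibility zone,
                as extracted from the captured image.
    speed_mps: current vehicle speed in metres per second.
    comfort_time_s: margin so that the light is already on when the zone is reached.
    """
    entering_time_s = distance_m / max(speed_mps, 0.1)  # avoid division by zero at standstill
    return max(entering_time_s - comfort_time_s, 0.0)   # activate immediately if already too close


# Illustrative values: entry detected 80 m ahead, vehicle at 20 m/s, 2 s comfort time
print(time_until_activation(80.0, 20.0, 2.0))  # -> 2.0 seconds until the functionality is switched on
```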

In some particular embodiments, the step of extracting data from the image comprises recognizing an entering zone and calculating the distance from the lighting device to the entering zone.

The entering zone may be used as the basis for calculation.

In some particular embodiments, the method comprises a first step of providing the lighting device with a labelled database of low visibility zones, and the step of recognizing the presence of a low visibility zone is carried out by a machine learning process.

This process is based on the ability of the system to incorporate the features of labelled images, so that the identification of these dark zones in real time is improved.

In some particular embodiments, the labelled database comprises data of entering zones, and the step of recognizing the entering zone is carried out by a machine learning process.

This process is based on the ability of the system to incorporate the features of labelled images, so that the identification of the entries of these dark zones in real time is improved.

In some particular embodiments, the machine learning process comprises a decision tree built using a learning data set.

The machine learning algorithm for recognizing the entering zone includes a decision tree.

The decision tree may be a decision tree obtained by means of a supervised training algorithm of the ID3 type (standing for “Iterative Dichotomiser 3”) applied to a set of training data comprising a plurality of samples acquired beforehand.

Each sample includes several attributes determined at the time of acquisition of this sample, including shapes, contours and contrast ratios, further comprising data concerning the distance between the vehicle and the entry of a tunnel or parking at the time of acquisition and the speed of the vehicle at the time of acquisition.

According to this algorithm, the decision tree will be built, recursively, by selecting at each step of the recursion the attribute for which the entropy gain, estimated on the training data set used at this step, is maximum, then by partitioning this training data set into at least two subsets using the selected attribute, and repeating these steps on each of the subsets, said selected attribute forming a node of the decision tree, a leaf of the tree being reached, and the recursion ending, when all the samples of a subset obtained at the end of a partition have the same label.

The decision tree thus constructed will contain a set of branches connected by nodes and leading to leaves, making it possible to predict, from an instance comprising the contour and contrast ratio of a tunnel wall, the distance between the vehicle and the tunnel. Browsing the decision tree as a function of the values of these attributes thus makes it possible to arrive at a leaf, which makes it possible to reach a conclusion on the existence of a risk of dazzling the driver when exiting the tunnel.
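
As a minimal illustration of the ID3-style construction described above (a sketch only; the attribute names and the toy data set are assumptions, not taken from the patent), the tree is grown recursively by choosing, at each step, the attribute with the highest entropy gain and partitioning the training samples accordingly:

```python
import math
from collections import Counter

def entropy(labels):
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def id3(samples, attributes):
    """samples: list of (features_dict, label); returns a nested-dict decision tree."""
    labels = [label for _, label in samples]
    if len(set(labels)) == 1:          # leaf: all samples share the same label
        return labels[0]
    if not attributes:                 # no attribute left: majority vote
        return Counter(labels).most_common(1)[0][0]

    def gain(attr):                    # entropy gain of splitting on this attribute
        subsets = {}
        for feats, label in samples:
            subsets.setdefault(feats[attr], []).append(label)
        remainder = sum(len(s) / len(samples) * entropy(s) for s in subsets.values())
        return entropy(labels) - remainder

    best = max(attributes, key=gain)   # attribute with maximum entropy gain
    tree = {best: {}}
    for value in {feats[best] for feats, _ in samples}:
        subset = [(f, l) for f, l in samples if f[best] == value]
        tree[best][value] = id3(subset, [a for a in attributes if a != best])
    return tree

# Toy training set with hypothetical attribute values (contrast class, contour shape)
training = [
    ({"contrast": "high", "contour": "arched"}, "entry_near"),
    ({"contrast": "high", "contour": "flat"},   "no_entry"),
    ({"contrast": "low",  "contour": "arched"}, "entry_far"),
    ({"contrast": "low",  "contour": "flat"},   "no_entry"),
]
print(id3(training, ["contrast", "contour"]))
```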

If desired, the machine learning algorithm could be a random forest (also called a "Random Forest Classifier"), constructed using a set of training data. For example, each sample of the training data set could include several attributes determined at the time of acquisition of this sample, including shapes, contours and contrast ratios, further comprising data concerning the distance between the vehicle and the entry of a tunnel or parking at the time of acquisition; the speed of the vehicle at the time of acquisition; as well as the dimensions of the windshield of the motor vehicle, the shape of the windshield, the distance from the driver's seat to the windshield and from the trim of the motor vehicle.
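
A hedged sketch of such a random forest variant using scikit-learn (the feature names and values are illustrative assumptions; the patent does not prescribe a specific library):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative feature vectors: [contrast_ratio, contour_score, vehicle_speed_mps]
X = np.array([
    [0.85, 0.9, 22.0],   # strong dark opening ahead
    [0.20, 0.1, 25.0],   # open road
    [0.75, 0.8, 13.0],   # parking entrance
    [0.15, 0.2, 30.0],   # open road
])
y = np.array(["entry_near", "no_entry", "entry_near", "no_entry"])

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[0.80, 0.85, 20.0]]))  # expected to classify as an approaching entry
```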

In some particular embodiments, the method further comprises the steps of

  • capturing an image of a region in front of the lighting device;
  • recognizing the presence of a high visibility zone in the image;
  • when the presence of a high visibility zone is recognized, extracting data from the image, wherein the data comprises a distance between the lighting device and the high visibility zone;
  • estimating an entering time for the lighting device to enter the high visibility zone; and
  • adapting a lighting functionality in the lighting device a comfort time before the time estimated for entering the high visibility zone.

This method may also be used for recognizing the exit from a dark zone, so that the lighting functionalities are re-adapted accordingly.

In some particular embodiments, the low visibility zone is a tunnel or a parking entry.

This method is especially designed to make the system recognize when the vehicle is entering a tunnel or a parking lot, so that the lighting functionalities are activated before entering the dark zone, and not as a response to the decrease in ambient luminosity once the vehicle is inside the dark zone.

In some particular embodiments, the comfort time is comprised between 0 and 5 seconds.

When a lighting functionality is activated well before the entrance into a dark zone, the driver does not experience any visual discomfort. A comfort time between 0 and 5 seconds is enough to cover the different options of car manufacturers.

In some particular embodiments, the comfort time is predefined by the user in a previous step.

The user may have access to the control centre to define the comfort time.

In a second inventive aspect, the invention provides an automotive lighting device comprising a plurality of light sources, a camera intended to provide some external data and a control unit configured to selectively control the activation of the plurality of light sources, wherein the control unit is configured to carry out a method according to the first inventive aspect.

In some particular embodiments, the control unit comprises at least part of a convolutional neural network, wherein the convolutional neural network comprises

  • a plurality of convolutional blocks, each convolutional block comprising a convolutional layer, an activation layer and a pooling layer, and
  • final blocks of activation functions configured to reduce the data size to no more than 5 bits of information.

A convolutional neural network may be used to improve feature recognition from the images acquired by the camera, as a complementary action to the acquisition of further data from external servers. By using a convolutional neural network, the features of dark zones can be learned from a dataset (in a training process) to be recognized (in a testing process). Hence, the database may be enriched and improved.

In some particular embodiments, the light sources are a matrix arrangement of solid-state light sources.

The term "solid state" refers to light emitted by solid-state electroluminescence, which uses semiconductors to convert electricity into light. Compared to incandescent lighting, solid state lighting creates visible light with reduced heat generation and less energy dissipation. The typically small mass of a solid-state electronic lighting device provides for greater resistance to shock and vibration compared to brittle glass tubes/bulbs and long, thin filament wires. They also eliminate filament evaporation, potentially increasing the lifespan of the illumination device. Some examples of these types of lighting comprise semiconductor light-emitting diodes (LEDs), organic light-emitting diodes (OLED), or polymer light-emitting diodes (PLED) as sources of illumination rather than electrical filaments, plasma or gas.The term "solid state" refers to light emitted by solid-state electroluminescence, which uses semiconductors to convert electricity into light. Compared to incandescent lighting, solid state lighting creates visible light with reduced heat generation and less energy dissipation. The typically small mass of a solid-state electronic lighting device provides for greater resistance to shock and vibration compared to brittle glass tubes/bulbs and long, thin filament wires. They also eliminate filament evaporation, potentially increasing the lifespan of the illumination device. Some examples of these types of lighting include semiconductor light-emitting diodes (LEDs), organic light-emitting diodes (OLED), or polymer light-emitting diodes (PLED) as sources of illumination rather than electrical filaments, plasma or gas.

This lighting device provides the advantageous functionality of adapting the light pattern to the conditions transmitted by the data acquired by the camera, in such a way that the new light pattern provided by the control unit improves visual comfort and safety.

In some particular embodiments, the matrix arrangement comprises at least 2000 solid-state light sources.

A matrix arrangement is a typical example for this method. The rows may be grouped in projecting distance ranges, and each column of each group represents an angle interval. This angle value depends on the resolution of the matrix arrangement, which is typically between 0.01° per column and 0.5° per column. As a consequence, many light sources may be managed at the same time.
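
To make the orders of magnitude concrete (illustrative values only; the patent does not fix a field of view), the number of columns required for a given horizontal field follows directly from the angular resolution:

```python
def columns_needed(field_of_view_deg: float, resolution_deg_per_column: float) -> int:
    """Number of matrix columns needed to cover a horizontal field of view."""
    return int(round(field_of_view_deg / resolution_deg_per_column))

# Illustrative: a 20° horizontal field at 0.1°/column needs 200 columns;
# with 10 rows grouped in distance ranges, that already gives 2000 addressable sources.
print(columns_needed(20.0, 0.1))  # -> 200
```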

Unless otherwise defined, all terms (including technical and scientific terms) used herein are to be interpreted as is customary in the art. It will be further understood that terms in common usage should also be interpreted as is customary in the relevant art and not in an idealised or overly formal sense unless expressly so defined herein.

In this text, the term “comprises” and its derivations (such as “comprising”, etc.) should not be understood in an excluding sense, that is, these terms should not be interpreted as excluding the possibility that what is described and defined may include further elements, steps, etc.

To complete the description and in order to provide for a better understanding of the invention, a set of drawings is provided. Said drawings form an integral part of the description and illustrate an embodiment of the invention, which should not be interpreted as restricting the scope of the invention, but just as an example of how the invention can be carried out. The drawings comprise the following figures:

Figure 1 shows a general perspective view of an automotive lighting device according to the invention.

Figure 2 shows an example of a block diagram of the operation of this lighting device.

Figure 3 shows an example of an image captured by the camera.

Figure 4 shows the evolution of the loss and accuracy of the method when interpreting the tunnel detection using 200 images.

The example embodiments are described in sufficient detail to enable those of ordinary skill in the art to embody and implement the systems and processes herein described. It is important to understand that embodiments can be provided in many alternate forms and should not be construed as limited to the examples set forth herein.

Accordingly, while embodiments can be modified in various ways and take on various alternative forms, specific embodiments thereof are shown in the drawings and described in detail below as examples. There is no intent to limit to the particular forms disclosed. On the contrary, all modifications, equivalents, and alternatives falling within the scope of the appended claims should be included.

Figure 1 shows a general perspective view of an automotive lighting device according to the invention.

This headlamp 1 is installed in an automotive vehicle 100 and comprises

  • a matrix arrangement of LEDs 2, intended to provide a light pattern;
  • a control unit 3 to perform a control of the operation of the LEDs 2; and
  • a camera 4 intended to provide some external data.

This matrix configuration is a high-resolution module, having a resolution greater than 2000 pixels. However, no restriction is attached to the technology used for producing the projection modules.

A first example of this matrix configuration comprises a monolithic source. This monolithic source comprises a matrix of monolithic electroluminescent elements arranged in several columns by several rows. In a monolithic matrix, the electroluminescent elements can be grown from a common substrate and are electrically connected to be selectively activatable either individually or by a subset of electroluminescent elements. The substrate may be predominantly made of a semiconductor material. The substrate may comprise one or more other materials, for example non-semiconductors (metals and insulators). Thus, each electroluminescent element/group can form a light pixel and can therefore emit light when its/their material is supplied with electricity. The configuration of such a monolithic matrix allows the arrangement of selectively activatable pixels very close to each other, compared to conventional light-emitting diodes intended to be soldered to printed circuit boards. The monolithic matrix may comprise electroluminescent elements whose main dimension of height, measured perpendicularly to the common substrate, is substantially equal to one micrometre.

The monolithic matrix is coupled to the control centre so as to control the generation and/or the projection of a pixelated light beam by the matrix arrangement. The control centre is thus able to individually control the light emission of each pixel of the matrix arrangement.

Alternatively to what has been presented above, the matrix arrangement may comprise a main light source coupled to a matrix of mirrors. Thus, the pixelated light source is formed by the assembly of at least one main light source formed of at least one light emitting diode emitting light and an array of optoelectronic elements, for example a matrix of micro-mirrors, also known by the acronym DMD, for "Digital Micro-mirror Device", which directs the light rays from the main light source by reflection to a projection optical element. Where appropriate, an auxiliary optical element can collect the rays of at least one light source to focus and direct them to the surface of the micro-mirror array.

Each micro-mirror can pivot between two fixed positions, a first position in which the light rays are reflected towards the projection optical element, and a second position in which the light rays are reflected in a different direction from the projection optical element. The two fixed positions are oriented in the same manner for all the micro-mirrors and form, with respect to a reference plane supporting the matrix of micro-mirrors, a characteristic angle of the matrix of micro-mirrors defined in its specifications. Such an angle is generally less than 20° and may typically be about 12°. Thus, each micro-mirror reflecting a part of the light beams which are incident on the matrix of micro-mirrors forms an elementary emitter of the pixelated light source. The actuation and control of the change of position of the mirrors for selectively activating this elementary emitter to emit or not an elementary light beam is controlled by the control centre.

In different embodiments, the matrix arrangement may comprise a scanning laser system wherein a laser light source emits a laser beam towards a scanning element which is configured to explore the surface of a wavelength converter with the laser beam. An image of this surface is captured by the projection optical element.

The exploration of the scanning element may be performed at a speed sufficiently high so that the human eye does not perceive any displacement in the projected image.

The synchronized control of the ignition of the laser source and the scanning movement of the beam makes it possible to generate a matrix of elementary emitters that can be activated selectively at the surface of the wavelength converter element. The scanning means may be a mobile micro-mirror for scanning the surface of the wavelength converter element by reflection of the laser beam. The micro-mirrors mentioned as scanning means are for example MEMS type, for "Micro-Electro-Mechanical Systems". However, the invention is not limited to such a scanning means and can use other kinds of scanning means, such as a series of mirrors arranged on a rotating element, the rotation of the element causing a scanning of the transmission surface by the laser beam.

In another variant, the light source may be complex and include both at least one segment of light elements, such as light emitting diodes, and a surface portion of a monolithic light source.

The control unit, prior to its installation in the automotive headlamp, has been fed with a preliminary database of labelled tunnel and parking images. This database may be obtained from public servers or from private servers, depending on the car manufacturer.

Further, a training process may also be carried out prior to the installation in an automotive vehicle 100 of figure 1, to perform the luminous control of the headlamp 1.

Figure 2 shows an example of a block diagram of the operation of this lighting device: every 0.2 seconds, the camera acquires image data. From this image data, image features are extracted. Then, the control unit compares the extracted image features with the features contained in the database of tunnel and parking features.
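
The cyclic behaviour described above can be summarised with a small control-loop sketch (the object and function names are placeholders; the patent does not define an API):

```python
import time

CAPTURE_PERIOD_S = 0.2  # the camera acquires image data every 0.2 seconds

def control_loop(camera, feature_extractor, classifier, lighting):
    """Periodic loop: capture, extract features, compare against learned tunnel/parking features."""
    while True:
        image = camera.capture()
        features = feature_extractor(image)
        zone_state = classifier.predict(features)   # e.g. "on_road", "near_entry", "inside"
        if zone_state == "near_entry":
            lighting.schedule_low_beam()             # activated a comfort time before the entry
        elif zone_state == "inside":
            lighting.ensure_low_beam()
        time.sleep(CAPTURE_PERIOD_S)
```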

The comparison is done in a convolutional neural network which comprises different convolutional blocks 5, each block comprising a convolutional layer, an activation layer and a pooling layer. The number of neurons increases in each block 5, while the size of the data decreases.

For example, some convolutional blocks 5 would receive the image in a 222x222 size and reduce the shape of the input data to a 12x12 array, by increasing the number of neurons in each block.

Final blocks of activation functions reduce the size of the data to 3 bits of information, so that the essential information is extracted. In this particular embodiment, the information is related to the vehicle being on road (no tunnel or parking ahead), inside the tunnel or near a tunnel.
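
A minimal Keras-style sketch of such a network (layer counts, filter sizes and class names are illustrative assumptions; only the 222x222 input, the downsampling convolutional blocks and the three output classes follow the description above):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_zone_classifier(input_size: int = 222, num_classes: int = 3) -> tf.keras.Model:
    """Convolutional blocks (conv + activation + pooling) followed by a small classification head."""
    model = models.Sequential([layers.Input(shape=(input_size, input_size, 3))])
    for filters in (16, 32, 64, 128):            # neurons increase while the spatial size decreases
        model.add(layers.Conv2D(filters, 3, activation="relu", padding="same"))
        model.add(layers.MaxPooling2D(2))         # 222 -> 111 -> 55 -> 27 -> 13, roughly the 12x12 of the text
    model.add(layers.GlobalAveragePooling2D())
    model.add(layers.Dense(num_classes, activation="softmax"))  # on road / near a tunnel / inside a tunnel
    return model

model = build_zone_classifier()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```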

Figure 3 shows an example of an image 6 captured by the camera. In this case, the neural network would use this image and reduce the dimension to focus on the presence of the tunnel entry 7.

Figure 4 shows the evolution of the loss and accuracy of the method when interpreting the tunnel detection using 200 images. Stars and crosses show the accuracy of the measurements in the learning stage (stars) and in the validation stage (crosses). Triangles and circles show the error rate in the learning stage (triangles) and in the validation stage (circles).

With an epoch of 35 (i.e., when the neural network has been allowed to update the internal parameters 35 times), the accuracy in the validation stage is higher than 90%, while the error is lower than 10%. However, these figures could be improved by using a greater number of training images.

Once the vehicle has been detected to be approaching a tunnel or a parking lot, there are two options to calculate the distance from the vehicle to the entry zone.

The first option is to use the same neural network but trained with images taken at different distances from a tunnel or parking. Some images would be “a tunnel at a distance of 20 m”, other ones would represent “a tunnel at a distance of 40 m”, and so on. The neural network would not output just three options (on road, inside a tunnel or approaching a tunnel), but would output more options: on road, inside a tunnel, at 20 m from a tunnel, at 40 m from a tunnel, at 60 m from a tunnel, and so on.

As an alternative, the standard neural network would be used and then an additional algorithm with a decision tree is used to recognize the entry zone and calculate the distance between the automotive vehicle and the entry zone.

The decision tree is obtained by means of a supervised training algorithm of the ID3 type (standing for “Iterative Dichotomiser 3”) applied to a set of training data comprising a plurality of samples acquired beforehand.

Each sample includes several attributes determined at the time of acquisition of this sample, including shapes, contours and contrast ratios, further comprising data concerning the distance between the vehicle and the entry of a tunnel or parking at the time of acquisition and the speed of the vehicle at the time of acquisition.

With this estimate of time, the entering time for the vehicle into the tunnel or parking is calculated. The corresponding lighting functionality (usually, a low beam pattern) is activated some time (for example, two seconds) before reaching the entry zone. This comfort time may be predefined by the user, or customizable during the operation of the vehicle, depending on the car manufacturer.

This method may also be extended to the recognition of the exit from the tunnel or parking. In these cases, the exit zone is recognized, and the distance is calculated in a way analogous to the distance from the entry zone in the embodiment detailed above. In this case, the lighting functionality is adapted before the time estimated for entering the high visibility zone.

Claims (11)

1. Method for controlling an automotive lighting device, the method comprising the steps of:
  • capturing an image (6) of a region in front of the lighting device (1);
  • recognizing the presence of a low visibility zone (7) in the image (6);
  • when the presence of a low visibility zone (7) is recognized, extracting data from the image (6), wherein the data comprises a distance between the lighting device (1) and the low visibility zone (7);
  • estimating an entering time for the lighting device to enter the low visibility zone (7); and
  • activating a lighting functionality in the lighting device (1) a comfort time before the time estimated for entering the low visibility zone (7).
2. Method according to claim 1, wherein the step of extracting data from the image (6) comprises recognizing an entering zone (7) and calculating the distance from the lighting device to the entering zone (7).
3. Method according to any of the preceding claims, wherein the method comprises a first step of providing the lighting device with a labelled database of low visibility zones, and the step of recognizing the presence of a low visibility zone is carried out by a machine learning process.
4. Method according to claim 3, wherein the labelled database comprises data of entering zones, and the step of recognizing the entering zone is carried out by a machine learning process.
5. Method according to any of the preceding claims, wherein the machine learning process comprises a decision tree built using a learning data set.
6. Method according to any of the preceding claims, further comprising the steps of:
  • capturing an image of a region in front of the lighting device;
  • recognizing the presence of a high visibility zone in the image;
  • when the presence of a high visibility zone is recognized, extracting data from the image, wherein the data comprises a distance between the lighting device and the high visibility zone;
  • estimating an entering time for the lighting device to enter the high visibility zone; and
  • adapting a lighting functionality in the lighting device a comfort time before the time estimated for entering the high visibility zone.
7. Method according to any of the preceding claims, wherein the low visibility zone is a tunnel or a parking entry.
8. Method according to any of the preceding claims, wherein the comfort time is comprised between 0 and 5 seconds.
9. Method according to any of the preceding claims, wherein the comfort time is predefined by the user in a previous step.
10. Automotive lighting device comprising a plurality of light sources (2), a camera (4) intended to provide some external data and a control unit (3) configured to selectively control the activation of the plurality of light sources, wherein the control unit (3) is configured to carry out a method according to any of the preceding claims.
11. Automotive lighting device according to claim 10, wherein the control unit comprises at least part of a convolutional neural network, wherein the convolutional neural network comprises:
  • a plurality of convolutional blocks (5), each convolutional block comprising a convolutional layer, an activation layer and a pooling layer, and
  • final blocks of activation functions configured to reduce the data size to no more than 5 bits of information.
FR2101884A 2021-02-26 2021-02-26 Method for controlling an automotive lighting device and automotive lighting device Pending FR3120214A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
FR2101884A FR3120214A1 (en) 2021-02-26 2021-02-26 Method for controlling an automotive lighting device and automotive lighting device
FR2108144A FR3120213A3 (en) 2021-02-26 2021-07-27 Method for controlling an automotive lighting device and automotive lighting device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR2101884 2021-02-26
FR2101884A FR3120214A1 (en) 2021-02-26 2021-02-26 Method for controlling an automotive lighting device and automotive lighting device

Publications (1)

Publication Number Publication Date
FR3120214A1 true FR3120214A1 (en) 2022-09-02

Family

ID=83050155

Family Applications (2)

Application Number Title Priority Date Filing Date
FR2101884A Pending FR3120214A1 (en) 2021-02-26 2021-02-26 Method for controlling an automotive lighting device and automotive lighting device
FR2108144A Pending FR3120213A3 (en) 2021-02-26 2021-07-27 Method for controlling an automotive lighting device and automotive lighting device

Family Applications After (1)

Application Number Title Priority Date Filing Date
FR2108144A Pending FR3120213A3 (en) 2021-02-26 2021-07-27 Method for controlling an automotive lighting device and automotive lighting device

Country Status (1)

Country Link
FR (2) FR3120214A1 (en)

Also Published As

Publication number Publication date
FR3120213A3 (en) 2022-09-02
