WO2022180253A1 - Method for controlling a lighting system of a motor vehicle - Google Patents
Method for controlling a lighting system of a motor vehicle
- Publication number
- WO2022180253A1 (PCT/EP2022/054886)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- lighting
- initial
- detection
- zone
- objects
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/582—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60Q—ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
- B60Q1/00—Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor
- B60Q1/02—Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to illuminate the way ahead or to illuminate other areas of way or environments
- B60Q1/04—Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to illuminate the way ahead or to illuminate other areas of way or environments the devices being headlights
- B60Q1/14—Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to illuminate the way ahead or to illuminate other areas of way or environments the devices being headlights having dimming means
- B60Q1/1415—Dimming circuits
- B60Q1/1423—Automatic dimming circuits, i.e. switching between high beam and low beam due to change of ambient light or light level in road traffic
- B60Q1/143—Automatic dimming circuits, i.e. switching between high beam and low beam due to change of ambient light or light level in road traffic combined with another condition, e.g. using vehicle recognition from camera images or activation of wipers
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60Q—ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
- B60Q1/00—Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor
- B60Q1/26—Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to indicate the vehicle, or parts thereof, or to give signals, to other traffic
- B60Q1/2603—Attenuation of the light according to ambient luminosity, e.g. for braking or direction indicating lamps
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
- G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements
- G06V10/141—Control of illumination
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/584—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60Q—ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
- B60Q2800/00—Features related to particular types of vehicles not otherwise provided for
- B60Q2800/10—Autonomous vehicles
Definitions
- the invention relates to the field of automotive lighting. More specifically, the subject of the invention is a lighting system for a motor vehicle.
- Modern motor vehicles are increasingly equipped with partial or total autonomous driving systems.
- This type of system is intended to replace the human driver of the vehicle, either during part of the journey only, under certain conditions of speed or environment, or during the entire journey.
- the autonomous driving system controls, among other things, all or part of the various components of the motor vehicle likely to affect its trajectory or its speed, in particular the steering, braking and powertrain or transmission components.
- In order to be able to carry out this control automatically, without endangering the lives of the occupants of the vehicle or those of other road users, the vehicle is equipped with a set of sensors and one or more computers capable of processing the data acquired by these sensors in order to estimate the environment in which the vehicle is traveling.
- the autonomous driving system thus controls the various components mentioned according to a route instruction and this estimate of the environment in order to lead its passengers to their destination while guaranteeing their safety and that of third parties.
- The set of sensors fitted to a vehicle generally includes a camera capable of acquiring images of all or part of the road scene.
- This type of sensor is appreciable given the high image resolutions and acquisition frequency it is capable of offering.
- this sensor has a significant drawback, namely its dependence on the illumination of the road scene. The road scene must in fact be sufficiently illuminated for the objects present in it to be detectable by the image processing software used in the computer(s) of the autonomous driving system. In the absence of sufficient lighting, an object would not be detected, which would be particularly harmful if this object were a road user or an obstacle towards which the vehicle is heading.
- these lighting systems emit light beams whose emission zones on the road and the photometries in these zones of emission are intended to assist the driver in perceiving objects.
- these light beams are absolutely not optimized for a camera, and their emission zones and/or their photometries in these zones may not be sufficient or suitable to allow the detection of an object in an image acquired by this camera.
- the present invention is thus placed in this context and aims to meet the cited need by proposing a solution capable of producing, from a motor vehicle, road lighting distinct from that obtained by means of existing lighting beams, and making it possible to maximize the probability that an object on the road can be detected from an image of the road scene acquired by a camera of the vehicle.
- the invention thus proposes to collect data relating to the position of objects on the road, classified into at least one set of object types, and in particular several sets of object types, defined beforehand. These data make it possible to describe at least one zone in which any new object, belonging to one of the types of this or these sets, which will be detected by the detection system of the motor vehicle will be likely to be present.
- each of these sets of object types may require lighting characteristics specific to this set, in particular due to the ability of these types of objects to reflect the light they receive towards the detection system, or due to the ability of these types of objects to contrast with the rest of the road scene depending on the light they receive. It is thus possible to define, for each set of object types, a photometry making it possible to maximize the probability that an object of this type is actually detected by the detection system.
- the lighting likely to be emitted by the lighting system can thus be segmented into light beams, each light beam being emitted in one of said initial detection zones, with its own photometry, and dedicated to the types of objects likely to appear in this zone. It is therefore understood that the dedicated zones and photometries are thus intended entirely for the support of the image acquisition system, and no longer for the driver of the motor vehicle. These light beams are thus "default" light beams, emitted prior to any detection subsequently operated by the detection system. Each detection of an object in an initial detection zone, operated by the detection system, may then lead to a modification of the light beam emitted in this zone, for example for the purpose of tracking the object or of not dazzling the object.
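As a minimal sketch of the idea above (all names such as `LightingModel` and the zone labels and intensity values are invented for illustration, not taken from the patent), each set of object types could be paired with its own default beam:

```python
# Illustrative sketch only: a "lighting model" pairs an initial detection
# zone with the initial photometry dedicated to one set of object types.
# Zone labels and intensity values are hypothetical.
from dataclasses import dataclass

@dataclass
class LightingModel:
    object_set: str    # e.g. "traffic_signs"
    zone: str          # initial detection zone, e.g. "upper"
    intensity: float   # initial photometry, simplified to a single scalar

def default_beams(models):
    """Beams emitted by default, before any object is detected."""
    return {m.object_set: (m.zone, m.intensity) for m in models}

models = [
    LightingModel("traffic_signs", "upper", 0.2),    # low: retroreflective
    LightingModel("road_users", "central", 0.5),
    LightingModel("ground_markings", "lower", 0.9),  # high: grazing angle
]
beams = default_beams(models)
```

A detection in one zone would then trigger a modification of only that set's beam, leaving the other default beams untouched.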
- the image acquisition system can be a camera capable of acquiring images of a road scene at the front, or at the rear, of the motor vehicle, or, as a variant, one or more cameras capable of acquiring images of the road scene all around the motor vehicle.
- the detection system may comprise one or more processing units arranged to implement processing algorithms on the images acquired by the image acquisition system in order to detect objects, in particular objects of the types of said set of types, in said images.
- the detection system may comprise one or more additional sensors, in particular a laser scanner, a radar or an infrared sensor, and possibly a processing unit arranged to implement algorithms for fusing data from the image acquisition system and from this or these other sensors.
- the data set relating to the position of the objects can be acquired, beforehand, under daytime conditions.
- the data set relating to the position of the objects, acquired during the acquisition step, comprises, for each object, the so-called initial position of this object at the moment when it was detected by the detection system.
- the step of determining said model comprises, for each type of object of said set, a step of modeling, from the data set, a zone, called the first detection zone of said type of object, encompassing all the initial positions of objects of said type of object.
- said initial detection zone is determined from the first detection zones of all the types of objects of said set.
- the step of determining said model may include, for each type of object of each set, a step of modeling, from the data set associated with this set, a zone, called the first detection zone of said type of object, encompassing all initial positions of objects of said object type.
- each initial detection zone is determined from the first detection zones of all the types of objects in the same set.
- the or each initial detection zone could be formed from the combination of all the first detection zones of all the types of objects of the same set.
- each step of modeling the first detection zone of a type of object implements a machine learning algorithm, making it possible to determine the first detection zone from the initial positions of objects of said object type.
- said machine learning algorithm may comprise, without being limiting, a supervised or unsupervised learning algorithm, for example of the type: linear or nonlinear regression, naive Bayesian classifier, support vector machine, neural network, or a K-means type algorithm.
- the machine learning algorithm could be trained to determine, from a plurality of data sets each comprising initial positions, in the environment of the vehicle, of a plurality of objects of types belonging to one of said sets, a first detection zone for each type of object, so that the initial detection zones, each formed by the combination of all the first detection zones of the types of objects of the same set, are disjoint.
- the machine learning algorithm can be trained to determine, for each type of object, a border of a zone such that the probability that an object of said type of object is detected there is greater than a given threshold and/or such that the probability that an object of a type other than said type of object is detected there is less than a given threshold.
- each threshold may be separate for each type of object.
- said initial photometry of the light beam is determined as a function of at least one of the types of objects of the set of types of objects.
- said initial photometry of the light beam is determined according to the zones of first detection of each of the types of objects of the set of types of objects, and in particular according to the position of each first detection zone in the environment of the motor vehicle.
- the method comprises a step of supplying at least one range of values of a parameter relating to the behavior of the motor vehicle or to the environment.
- the step of determining the lighting model associated with said set is a step of determining a lighting model, associated with said set, variable according to said values of the parameter.
- the parameter relating to the behavior of the motor vehicle could be the speed of the motor vehicle and/or the trajectory of the motor vehicle and/or the yaw of the motor vehicle.
- the parameter relating to the environment of the motor vehicle may be the meteorological conditions and/or the profile of the road, and in particular its curvature and/or its slope, and/or a datum of the position of the motor vehicle, in particular a GPS (Global Positioning System) data.
- by variable lighting model is meant a lighting model whose initial detection zone has a shape, dimensions and/or a position in the environment of the vehicle which vary according to the value of said parameter, and/or whose initial photometry varies as a function of the value of said parameter.
- the variable lighting model defines a plurality of initial detection zones and/or initial photometries associated with the same set of object types, each associated with a given value of said range of values of said parameter.
- the step of determining said model comprises, for each type of object of said set and for each value of said range of values of said parameter, a step of modeling, from the data set, a first detection zone of said type of object, encompassing all the initial positions of the objects of said type of object for which the parameter had said value during the acquisition of this initial position.
- each of the initial detection zones associated with the same set of object types is determined from the first detection zones of all the object types of said set which are associated with the same value of said parameter.
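The bullets above can be sketched as a lookup keyed by the parameter value. The following is a hypothetical illustration only, using vehicle speed as the parameter; the speed ranges, zone labels and intensities are invented, not specified by the patent:

```python
# Hypothetical sketch: a variable lighting model keyed by speed range.
# The zone/photometry associated with a set varies with vehicle speed.
SPEED_RANGES = [(0.0, 50.0), (50.0, 90.0), (90.0, 130.0)]  # km/h, assumed

def range_index(speed_kmh):
    """Return l such that the speed falls in range ΔV_l."""
    for l, (lo, hi) in enumerate(SPEED_RANGES):
        if lo <= speed_kmh < hi:
            return l
    raise ValueError("speed outside modelled ranges")

# variable model: per set, one (zone, intensity) per speed range index
variable_model = {
    "road_users": [("central-near", 0.6), ("central", 0.5), ("central-far", 0.4)],
}

def initial_beam(set_name, speed_kmh):
    return variable_model[set_name][range_index(speed_kmh)]
```

At higher speeds the zone associated with a set would typically shift further ahead of the vehicle, which is what the per-range entries are meant to capture.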
- the initial detection zone determined for the first model could be a lower zone
- the initial detection zone determined for the second model could be a central zone
- the initial detection zone determined for the third model could be an upper zone.
- the light beam has, in the initial detection zone, an initial photometry adapted to help the object detection system to detect the appearance of objects of a given type.
- the motor vehicle and/or the detected object can move, causing a movement of the detected object in the frame of reference of the image acquisition system.
- the initial photometry, although adapted at the time of the initial detection of this object, may no longer be appropriate later, due to this displacement.
- the step of detecting the object of the given type can include a sub-step of estimating the position of this object.
- the step of controlling the lighting system comprises a step of generating a zone in the light beam at the level of the detected object, the zone having a photometry adapted to the type of the detected object, and a step of moving said zone according to the displacement of the object detected in the frame of reference of the image acquisition system.
- "Zone with suitable photometry" means a zone whose dimensions, shape, position in the road scene and/or photometry are adapted to the type of the detected object.
- the zone may be a zone centered on the detected vehicle and whose light intensity is lower than a given glare threshold.
- the zone may be a zone centered on the detected pedestrian and whose light intensity is greater than a given detection threshold.
- Said predetermined regulatory lighting and/or signaling beam may, for example, be a regulatory passing-beam (low-beam) type or a regulatory driving-beam (high-beam) type.
- the control step may include a sub-step of extinguishing the light beam exhibiting the initial photometry in the initial detection zone.
- the invention also relates to a motor vehicle, comprising an object detection system comprising a system for acquiring images of all or part of the environment of the vehicle, a lighting system, an autonomous driving system partial or total, and a controller of the lighting system, the controller being arranged to implement the control step of the method according to the invention.
- the invention also relates to a lighting system of a motor vehicle according to the invention.
- the lighting system comprises at least one light module capable of emitting a pixelated light beam, and a controller capable of receiving an instruction to emit a given light function and arranged to control the light module to emit a pixelated lighting beam having characteristics determined according to said instruction.
- the light module is arranged so that the pixelated light beam is a light beam comprising a plurality of pixels, for example 500 pixels with dimensions between 0.05° and 0.3°, distributed according to a plurality of rows and columns, for example 20 rows and 25 columns.
- the light module may comprise a plurality of elementary light sources and an optical device arranged to emit together said pixelated light beam.
- the controller can be arranged to selectively control each of the elementary light sources of the light module so that this light source emits an elementary light beam forming one of the pixels of the pixelated light beam.
- "Light source" means any light source, possibly associated with an electro-optical element, capable of being activated and controlled selectively to emit an elementary light beam whose light intensity is controllable. It may in particular be a light-emitting semiconductor chip, a light-emitting element of a monolithic pixelated light-emitting diode, a portion of a light-converting element excitable by a light source, or even a light source associated with a liquid crystal or a micro-mirror.
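The pixelated beam described above (the description gives 500 pixels arranged in 20 rows and 25 columns as an example) can be sketched as a grid of individually controllable intensities. The function names and the [0, 1] intensity scale below are assumptions for illustration:

```python
# Sketch of the pixelated beam from the description: 500 pixels laid out
# as 20 rows x 25 columns, each with an individually controllable intensity.
ROWS, COLS = 20, 25

def blank_beam(intensity=0.0):
    """A beam with every pixel at the same intensity."""
    return [[intensity for _ in range(COLS)] for _ in range(ROWS)]

def set_pixel(beam, row, col, intensity):
    """Set one elementary light beam's intensity, clamped to [0, 1]."""
    beam[row][col] = max(0.0, min(1.0, intensity))

beam = blank_beam()
set_pixel(beam, 10, 12, 0.8)
```

Each grid cell stands for one elementary light beam emitted by an elementary light source, selectively driven by the controller.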
- the motor vehicle 1, shown in the figure, comprises an object detection system 2.
- This detection system 2 comprises an image acquisition system 21.
- This system 21 comprises a camera capable of acquiring images of the road scene all around the motor vehicle 1.
- the detection system 2 also comprises a processing unit (not shown) arranged to implement processing algorithms on the images acquired by the camera 21 in order to detect objects in said images.
- the motor vehicle 1 comprises a lighting system 3, comprising a plurality of light modules 31 to 36, each capable of emitting a pixelated light beam in a given direction, the lighting system 3 thus being able to illuminate the road all around the motor vehicle 1.
- the motor vehicle 1 comprises a controller of the lighting system 3, able to selectively control each of the light modules 31 to 36 and to selectively control each of the pixels of the pixelated light beams likely to be emitted by these light modules 31 to 36.
- the motor vehicle 1 comprises a total autonomous driving system arranged to control, when the motor vehicle is in an autonomous driving mode, the steering components, the braking components and the engine or transmission components of the motor vehicle, in particular as a function of the objects detected by the processing unit of the detection system 2 in the images acquired by the camera 21.
- the method will be described as a method of controlling the light modules 31 and 32, in connection with figures which each represent a road scene in front of the vehicle, as it can be viewed by the camera 21 and as it can be illuminated by the light modules 31 and 32, it being understood that the method is also implemented for side road scenes and road scenes at the rear of the vehicle, by controlling the light modules 33 to 36.
- a plurality of sets of object types G 1 to G N will have been defined beforehand, each set Gi grouping together one or more types of objects T i , j .
- this step E1 is simplified by defining a first set G 1 of object type T 1,1 grouping road signs, a second set G 2 of object types T 2,1 and T 2,2 grouping, respectively, pedestrians and vehicles, and a third set G 3 of object type T 3,1 grouping ground markings and obstacles likely to be reached by the vehicle in less than two seconds.
- objects of type T 1,1 will be represented by squares
- objects of type T 2,1 will be represented by circles
- objects of type T 2,2 will be represented by triangles
- objects of type T 3,1 will be represented by stars.
- a plurality of data sets S 1 to S N are acquired.
- Each datum P i,j,k of a data set S i represents a set of positions of an object O i,j,k of a type T i,j belonging to a set G i , estimated by a detection system of a motor vehicle, similar to the detection system 2 and comprising a camera similar to the camera 21.
- This set of positions P i,j,k combines all the positions of this object O i,j,k , from an initial position P i,j,k (0), estimated at the moment when it was detected by the detection system in the field of the camera, up to a final position, estimated at the last instant preceding the object's disappearance from the camera's field.
- Each data set S i further comprises, for each datum P i , j , k of this set representing a set of positions of an object, the speed V i,j,k of the motor vehicle when the set of positions of this object has been estimated.
- each of the data sets S 1 to S N is split into a plurality of sub-data sets S 1,1 to S N,M , each data item P i,j,k of a data set S i being attributed to a subset S i,l if the speed V i,j,k (0) of the motor vehicle, at the time of acquisition of the initial position P i,j,k (0) of the object O i,j,k , is included in the range ΔV l .
- the subset S i,l thus contains all the initial positions P i,j,k (0) of the objects O i,j,k whose type T i,j belongs to the set G i and whose initial speed V i,j,k (0) is within the range ΔV l .
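The splitting step just described can be sketched as simple binning by initial speed. The speed ranges and the tuple layout of a datum below are illustrative assumptions, not values from the patent:

```python
# Sketch of the splitting step: each datum carries the object's type, its
# initial position and the vehicle speed at initial detection; data are
# binned into sub-sets S_{i,l} by speed range ΔV_l. Names are illustrative.
SPEED_RANGES = [(0.0, 50.0), (50.0, 90.0), (90.0, 130.0)]  # km/h, assumed

def split_by_speed(dataset):
    """Bin data by the vehicle speed at the time of initial detection."""
    subsets = {l: [] for l in range(len(SPEED_RANGES))}
    for datum in dataset:  # datum: (obj_type, initial_position, initial_speed)
        _, _, v0 = datum
        for l, (lo, hi) in enumerate(SPEED_RANGES):
            if lo <= v0 < hi:
                subsets[l].append(datum)
                break
    return subsets

data = [("pedestrian", (0.1, 0.5), 30.0),
        ("vehicle", (0.0, 0.6), 70.0),
        ("pedestrian", (-0.2, 0.5), 85.0)]
subsets = split_by_speed(data)
```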
- for each type of object T i,j of each set G i and for each speed range ΔV l , a zone Z i,j,l , called the first detection zone of this type of object, is modelled.
- This zone Z i,j,l encompasses all the initial positions P i,j,k (0) of objects O i,j,k of object type T i,j whose initial speed V i,j,k (0) is included in the range ΔV l .
- a support vector machine has been previously trained to determine, in a supervised manner and from a plurality of points labeled with different labels and positioned in a space, for each label, a border of an area such that the number of points labeled with this label and present in this zone is greater than a given threshold and such that the number of points labeled with a label other than this label and present in this zone is less than a given threshold.
- each of the sub-data sets S i ,l for the same range ⁇ V l is then supplied as input to the support vector machine trained beforehand, as well as thresholds for each type of object and for each range, to determine zones Z i, j , l of first detection of objects of type T i, j .
- Each zone Z i, j, l thus encompasses the initial positions P i , j , k (0) of the objects O i , j , k of object type T i,j and whose initial speed V i,j,k (0) is included in the range ⁇ V l .
- each zone Z i,j,l is thus modeled by the support vector machine so that the probability that an object O i,j,k of object type T i,j is detected there, when the initial speed V i,j,k (0) is within the range ΔV l , is maximal, and so that the probability that an object O i,j,k of a type other than said type of object T i,j is detected there, when the initial speed V i,j,k (0) is within the range ΔV l , is minimal.
- an initial detection zone A i,l is determined by combining the first detection zones Z i, j,l of the objects of type T i,j belonging to the same set G i .
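As a drastically simplified stand-in for the trained support vector machine (no classifier here, just geometry; every name and point below is invented for illustration), each first detection zone can be approximated by the bounding box of one type's initial positions, and the initial detection zone by the box enclosing all zones of the set:

```python
# Simplified stand-in for the trained model: each first detection zone
# Z_{i,j,l} is just the bounding box of the initial positions of one object
# type, and the initial detection zone A_{i,l} is the box enclosing all
# zones of the set. Purely illustrative; the patent uses a trained SVM.
def bounding_box(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

def initial_detection_zone(positions_by_type):
    """positions_by_type: {type_name: [(x, y), ...]} for one set G_i."""
    first_zones = {t: bounding_box(pts) for t, pts in positions_by_type.items()}
    boxes = list(first_zones.values())
    merged = (min(b[0] for b in boxes), min(b[1] for b in boxes),
              max(b[2] for b in boxes), max(b[3] for b in boxes))
    return first_zones, merged

zones, a_zone = initial_detection_zone({
    "pedestrian": [(-0.3, 0.4), (-0.1, 0.5)],
    "vehicle": [(0.0, 0.5), (0.2, 0.7)],
})
```

A real implementation would instead use the per-type probability thresholds described above to draw the zone borders.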
- One figure shows the sub-data sets S 1,2 , S 2,2 and S 3,2 for initial speeds comprised between 50 and 90 km/h.
- It also shows the zones Z 1,1,2 , Z 2,1,2 , Z 2,2,2 and Z 3,1,2 , respectively associated with the types T 1,1 , T 2,1 , T 2,2 and T 3,1 , determined at the end of step E51, and the zones A 1,2 , A 2,2 and A 3,2 determined at the end of step E52.
- Another figure shows the zones Z 1,1,3 , Z 2,1,3 , Z 2,2,3 and Z 3,1,3 , respectively associated with the types T 1,1 , T 2,1 , T 2,2 and T 3,1 , determined at the end of step E51, and the zones A 1,3 , A 2,3 and A 3,3 determined at the end of step E52.
- the zones A 1,1 , A 1,2 and A 1,3 , associated with the set G 1 of the traffic signs, are zones located rather in the upper part of the road scene
- the zones A 2,1 , A 2,2 and A 2,3 , associated with the set G 2 of road users, are zones located rather in the center of the road scene
- the zones A 3,1 , A 3,2 and A 3,3 , associated with the set G 3 of the objects in the immediate navigable space of the vehicle, are zones located rather in the lower part of the road scene.
- Each initial detection zone A i,l is a zone of space in which the probability that an object, of type T i,j belonging to a set G i associated with this zone, can be detected by the detection system 2 from an image acquired by the camera 21, is particularly high.
- an initial photometry P i,l is determined making it possible to improve the detection performance of the detection system 2 taking into account the types of objects of this set G i .
- the determination of this initial photometry P i,l may comprise the determination of a minimum, average and/or maximum light intensity of a light beam intended to be emitted by the lighting system 3 in the initial detection zone A i,l , or even the determination of a light intensity for a plurality of pixels, for a plurality of groups of pixels, or even for all the pixels of a light beam intended to be emitted by the lighting system 3 in the initial detection zone A i,l .
- the lighting emitted by the light modules 31 and 32 is substantially parallel to the ground.
- the retroreflection of this lighting towards the camera 21 will therefore not be very intense, so that the average light intensity of a light beam emitted in these zones must be high in order to allow the detection of a marking or of an obstacle in these zones.
- the lighting emitted by the light modules 31 and 32 will be substantially perpendicular to a road user.
- This lighting will therefore be reflected satisfactorily towards the camera 21, so that the average light intensity of a light beam emitted in these zones may be lower than that of a beam emitted in the zones A 3,1 , A 3,2 and A 3,3 .
- the lighting emitted by the light modules 31 and 32 will also be substantially perpendicular to a traffic sign. Since a traffic sign is usually provided with a reflective coating, this lighting will be retroreflected by being amplified. It is therefore necessary for the average light intensity of a light beam emitted in these areas to be low so as not to saturate the sensors of the camera 21.
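The intensity reasoning of the last few bullets can be summarized in a small table. The numeric values below are invented placeholders; only their ordering reflects the description:

```python
# Sketch of the photometry choices discussed above (values are invented):
# grazing incidence on markings -> high intensity; road users -> medium;
# retroreflective signs -> low, to avoid saturating the camera sensor.
INITIAL_INTENSITY = {
    "ground_markings": 0.9,  # weak retroreflection toward the camera
    "road_users": 0.5,       # near-normal incidence, good reflection
    "traffic_signs": 0.2,    # retroreflective coating amplifies the return
}

def initial_photometry(set_name):
    """Average intensity setpoint for the beam emitted in a set's zone."""
    return INITIAL_INTENSITY[set_name]
```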
- at the end of step E52, all of the initial detection zones A i,l and the initial photometries P i,l , for all the ranges ΔV 1 to ΔV M and for the same set G i , form a lighting model M i associated with this set G i .
- the steps E1 to E52 making it possible to determine these lighting models M 1 to M N , for the sets G 1 to G N , are carried out by a computer unit comprising a memory, in which are stored the sets G 1 to G N and the speed ranges ΔV 1 to ΔV M defined in steps E1 and E1', as well as the data sets S 1 to S N , and a processor able to implement these steps.
- the computer unit is separate from the motor vehicle 1, the steps E1 to E52 thus being carried out prior to the following steps.
- the models M 1 to M N are loaded into a memory of the controller of the lighting system 3, for example in the form of images in which each pixel represents a pixel of a pixelated light beam intended to be emitted by the modules 31 and 32, the gray level of the pixel of the image representing a light intensity setpoint of an elementary light beam capable of being emitted by these modules 31 and 32 to form the pixel of the pixelated light beam.
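The image encoding mentioned above can be sketched as a simple mapping between an intensity setpoint and a gray level. The 0-255 range and the [0, 1] setpoint scale are assumptions consistent with common grayscale images, not values given by the patent:

```python
# Sketch of storing a lighting model as a grayscale image: each pixel's
# gray level (0-255) encodes the intensity setpoint (assumed 0.0-1.0) of
# the corresponding pixel of the pixelated beam. The round trip is
# approximate because of quantization.
def to_gray(intensity):
    return round(max(0.0, min(1.0, intensity)) * 255)

def from_gray(level):
    return level / 255.0

row = [0.0, 0.5, 1.0]
encoded = [to_gray(v) for v in row]
decoded = [from_gray(g) for g in encoded]
```

Loading a model then amounts to reading the image and driving each elementary light source with the decoded setpoint of its pixel.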
- in a step E6, when the motor vehicle 1 is in autonomous driving mode, the light modules 31 and 32 of the lighting system 3 are controlled by the controller to emit, towards the front of the vehicle, an overall light beam F formed of several light beams F i , each conforming to one of the models M 1 to M N .
- The speed of the motor vehicle falling within one of the ranges ΔVl, each light beam Fi is emitted in the corresponding initial detection zone Ai,l with the initial photometry Pi,l.
- These light beams F 1 to F N are light beams emitted by default, in the absence of detection of an object on the road.
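The selection of the default zone and photometry for the current speed range can be sketched as below. The speed ranges and model contents are illustrative assumptions, not values taken from the document.

```python
# Sketch: choosing the default beam parameters (A_i,l, P_i,l) for each set
# G_i from the current vehicle speed. Ranges and contents are assumptions.

SPEED_RANGES = [(0, 50), (50, 90), (90, 130)]  # ΔV_1, ΔV_2, ΔV_3 in km/h

def speed_range_index(speed_kmh):
    """Index l of the range ΔV_l containing the given speed."""
    for l, (lo, hi) in enumerate(SPEED_RANGES):
        if lo <= speed_kmh < hi:
            return l
    return len(SPEED_RANGES) - 1  # clamp above the last range

def default_beams(models, speed_kmh):
    """models[i][l] holds (zone, photometry) for set G_i and range ΔV_l."""
    l = speed_range_index(speed_kmh)
    return [model[l] for model in models]

models = [
    [("A1,1", "P1,1"), ("A1,2", "P1,2"), ("A1,3", "P1,3")],  # set G1
    [("A2,1", "P2,1"), ("A2,2", "P2,2"), ("A2,3", "P2,3")],  # set G2
]
beams = default_beams(models, 70)  # 70 km/h falls within ΔV_2
```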
- The figure represents a road scene illuminated by means of the beams F1, F2 and F3, emitted simultaneously by the light modules 31 and 32 to together form a global segmented light beam F.
- the motor vehicle travels at a speed of between 50 and 90 km/h.
- Steps E7 and E8 relate to the adaptation of the global segmented beam F carried out following the detection of an object O, while step E9 relates to the transition of the vehicle from autonomous driving mode to manual driving mode.
- An object O1 is detected by the detection system 2, and is classified by this detection system 2 as being of a type T2,1 belonging to the set G2.
- Another object O 2 is detected by the detection system 2, and is classified by this detection system 2 as being of a type T 2 , 2 belonging to this set G 2 .
- the object O 1 is a motor vehicle and the object O 2 is a pedestrian, these objects being indeed in the initial detection zone A 2,2 .
- The objects O1 and O2 are thus illuminated by the beam F2, whose photometry P2,2 makes it possible to improve the detection performance of these types of objects by the detection system 2.
- In a step E8, following the detection of an object O, the controller controls the lighting system 3 to generate a zone B in the light beam, centered on the object O and having a photometry adapted to the type of this object O.
- the controller controls the modules 31 and 32 to generate, in the beam F2, a zone B 1 of less intensity, centered on the object O 1 and an over-intensified zone B 2 , centered on the object O 2 .
- the zone B 1 allows the detection system 2 to continue to detect the vehicle O 1 during its movement and the movement of the vehicle 1 without however dazzling a possible driver of this vehicle.
- Zone B 2 allows the detection system 2 to continue to detect the pedestrian O 2 during the movement of the vehicle 1.
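The generation of the zones B1 and B2 can be sketched as a local gain applied to the pixelated beam: a gain below 1 dims the zone (anti-glare, for the vehicle O1) and a gain above 1 over-intensifies it (for the pedestrian O2). The grid size, zone shape and gain values are assumptions for illustration.

```python
# Sketch: carving a zone B into a pixelated beam, centered on a detected
# object. Grid size, square zone shape and gains are assumptions.

def apply_zone(beam, center, half_width, gain):
    """Return a copy of beam with intensities scaled by gain in a square
    zone of the given half-width around center = (row, col)."""
    rows, cols = len(beam), len(beam[0])
    r0, c0 = center
    out = [row[:] for row in beam]
    for r in range(max(0, r0 - half_width), min(rows, r0 + half_width + 1)):
        for c in range(max(0, c0 - half_width), min(cols, c0 + half_width + 1)):
            out[r][c] = beam[r][c] * gain
    return out

beam_f2 = [[100.0] * 5 for _ in range(5)]
with_b1 = apply_zone(beam_f2, (1, 1), 1, 0.2)   # dimmed zone B1
with_b2 = apply_zone(with_b1, (3, 3), 0, 1.5)   # over-intensified zone B2
```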
- The zones B1 and B2 thus remain centered on these objects O1 and O2 during their movements in the field of the camera 21, the estimation of the position of these objects O1 and O2 at a given instant allowing the controller to move the zones B1 and B2 at the following instant, as shown in the figure, until the objects O1 and O2 leave the field of the camera.
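The re-centering of a zone between control instants can be sketched with a constant-velocity position estimate; once the predicted position leaves the camera field, the zone is dropped and the default model applies again. The motion model, units and field representation are assumptions.

```python
# Sketch: re-centering zone B at each control instant from a constant-
# velocity estimate of the object position (an assumed motion model).

def update_zone_center(center, velocity, field, dt=0.1):
    """Return the new (row, col) center of the zone, or None once the
    predicted position leaves the camera field = (rows, cols)."""
    r = center[0] + velocity[0] * dt
    c = center[1] + velocity[1] * dt
    if 0 <= r < field[0] and 0 <= c < field[1]:
        return (r, c)
    return None  # object out of field: revert to the default model

inside = update_zone_center((2.0, 2.0), (0.0, 10.0), (5, 5))
gone = update_zone_center((2.0, 4.9), (0.0, 10.0), (5, 5))
```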
- the controller of the lighting system then controls the modules 31 and 32 so that the light beam F2 conforms to the default lighting model M 2 .
- In a step E9, when the autonomous driving system receives an instruction I to resume manual control of the motor vehicle 1, the controller controls the lighting system, and in particular the light modules 31 and 32, to gradually transform the overall light beam F into a beam LB of the regulatory low-beam type. If the autonomous driving system receives an instruction to switch the motor vehicle 1 into autonomous mode, the controller then controls the lighting system 3 for the emission, by the light modules 31 and 32, of the beams F1, F2 and F3, conforming respectively to the models M1, M2 and M3.
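The gradual transformation of the segmented beam F into the regulatory beam LB can be sketched as a pixel-wise cross-fade over a fixed number of control cycles. The linear interpolation and the step count are assumptions, not details from the document.

```python
# Sketch: gradual transition from the segmented beam F to a regulatory
# low-beam pattern LB, as a linear pixel-wise cross-fade (assumed scheme).

def blend(beam_a, beam_b, alpha):
    """Linear interpolation pixel by pixel, alpha in [0, 1]."""
    return [[(1 - alpha) * a + alpha * b for a, b in zip(ra, rb)]
            for ra, rb in zip(beam_a, beam_b)]

def transition(beam_f, beam_lb, steps=10):
    """Sequence of intermediate beams, from beam_f (k=0) to beam_lb (k=steps)."""
    return [blend(beam_f, beam_lb, k / steps) for k in range(steps + 1)]

frames = transition([[100.0]], [[0.0]], steps=4)
```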
- The preceding description clearly explains how the invention makes it possible to achieve the objectives it has set itself, in particular by proposing a method for controlling a lighting system of a motor vehicle in which data relating to the position of objects, classified according to their types, make it possible to describe at least one zone in which any new object belonging to one of these types is likely to be present, and in which a photometry is defined that maximizes the probability that an object of this type is actually detected by a detection system of the motor vehicle. Thanks to the invention, the light beams emitted by the lighting system are thus entirely intended to support the image acquisition system of the detection system.
- The invention is not limited to the embodiments specifically described in this document, and extends in particular to all equivalent means and to any technically effective combination of these means. In particular, other types of detection system than the one described can be envisaged, and notably systems combining an image acquisition system with other types of sensors, the detection and position estimation of objects on the road then being produced, for example, by a fusion of multi-sensor data. Other types of objects than those described can also be envisaged, as can other examples of methods for modeling the first-detection zones, and in particular other types of machine learning algorithm than the one described. It is also possible to model the first-detection zones as a function of parameters other than the speed of the vehicle.
Abstract
Description
- Definition of at least one set of object types intended to be detected by the detection system of the motor vehicle,
- Acquisition, by the detection system, of a data set relating to the position, in the environment of the vehicle, of a plurality of objects of types belonging to said set,
- Determination, from the data set, of a lighting model associated with said set, defining at least one zone, referred to as the initial detection zone, associated with this set of object types and addressable by the lighting system, and a photometry, referred to as initial, of a light beam intended to be emitted by the lighting system in the initial detection zone associated with this set,
- Control of the lighting system according to the determined lighting model for the emission of a light beam exhibiting the initial photometry in the initial detection zone of this lighting model.
- the definition step comprises the definition of a plurality of distinct sets of object types;
- the acquisition step comprises the acquisition, for each set, of a data set relating to the position, in the environment of the vehicle, of a plurality of objects of types belonging to said set;
- the determination step comprises the determination, from each data set, of a lighting model associated with the set associated with this data set, each model defining at least one zone, referred to as the initial detection zone, associated with this set of object types and addressable by the lighting system, and a photometry, referred to as initial, of a light beam intended to be emitted by the lighting system in the initial detection zone associated with this set.
- different types of traffic signs and traffic lights;
- different types of road users, in particular pedestrians, cyclists and vehicles, as well as different types of animals;
- different types of road markings and obstacles likely to be reached by the vehicle in a time below a given threshold, for example two seconds.
- the definition step comprises the definition of at least three sets of object types, including a first set comprising at least objects of the road-marking type, a second set comprising at least objects of the road-user type and a third set comprising at least objects of the traffic-sign type,
- the determination step comprises the determination of three lighting models, each associated with one of the sets, including a first lighting model associated with the first set, a second lighting model associated with the second set and a third lighting model associated with the third;
- and the step of controlling the lighting system comprises controlling the lighting system according to the determined lighting models for the emission, in particular simultaneous, of a first light beam exhibiting the initial photometry of the first lighting model in the initial detection zone of this first model, of a second light beam exhibiting the initial photometry of the second lighting model in the initial detection zone of this second model, and of a third light beam exhibiting the initial photometry of the third lighting model in the initial detection zone of this third model.
- Detection of an object of a given type from said set of object types by the object detection system of the vehicle,
- Control of the lighting system for the modification of the light beam according to the type of the detected object.
- Reception of an instruction to resume manual control of the motor vehicle by an occupant of the vehicle,
- Control of the lighting system for the emission of at least one predetermined regulatory lighting and/or signaling beam.
Claims (11)
- Method for controlling a lighting system (3) of a motor vehicle (1) equipped with an object detection system (2) comprising a system (21) for acquiring images of all or part of the environment of the vehicle, the method comprising the following steps:
- (E1) Definition of at least one set of object types (Gi) intended to be detected by the detection system of the motor vehicle,
- (E2) Acquisition, by the detection system, of a data set (Si) relating to the position (Pi,j,k), in the environment of the vehicle, of a plurality of objects (Oi,j,k) of types (Ti,j) belonging to said set,
- (E4, E51, E52) Determination, from the data set, of a lighting model (Mi) associated with said set, defining at least one zone (Ai,l), referred to as the initial detection zone, associated with this set of object types and addressable by the lighting system, and a photometry (Pi,l), referred to as initial, of a light beam (Fi) intended to be emitted by the lighting system in the initial detection zone associated with this set,
- (E6) Control of the lighting system according to the determined lighting model for the emission of a light beam exhibiting the initial photometry in the initial detection zone of this lighting model.
- Method according to the preceding claim, wherein the data set (Si) relating to the position (Pi,j,k) of the objects (Oi,j,k), acquired during the acquisition step (E2), comprises, for each object, the position (Pi,j,k(0)), referred to as initial, of this object at the moment when it was detected by the detection system (2).
- Method according to the preceding claim, wherein the step (E4, E51, E52) of determining said model (Mi) comprises, for each object type (Ti,j) of said set (Gi), a step (E4) of modeling, from the data set (Si), a zone (Zi,j,l), referred to as the first-detection zone of said object type, encompassing all the initial positions (Pi,j,k(0)) of the objects (Oi,j,k) of said object type, and wherein said initial detection zone (Ai,l) is determined from the first-detection zones of all the object types of said set.
- Method according to the preceding claim, wherein each step (E4) of modeling the first-detection zone (Zi,j,l) of an object type (Ti,j) implements a machine learning algorithm making it possible to determine the first-detection zone from the initial positions (Pi,j,k(0)) of the objects (Oi,j,k) of said object type.
- Method according to one of the preceding claims, wherein, in the step (E4, E51, E52) of determining said model (Mi), said initial photometry (Pi,l) of the light beam (Fi) is determined according to at least one of the object types (Ti,j) of the set of object types (Gi).
- Method according to the preceding claim, the method comprising a step (E1') of providing at least one range of values (ΔVL) of a parameter (Vi,j,k(0)) relating to the behavior of the motor vehicle (1) or to the environment, and wherein the step (E4, E51, E52) of determining the lighting model (Mi) associated with said set (Gi) is a step of determining a lighting model, associated with said set, that is variable according to said values of the parameter.
- Method according to one of the preceding claims, wherein:
- the definition step (E1) comprises the definition of at least three sets (G1, G2, G3) of object types (T1,1, T2,1, T2,2, T3,1), including a first set (G1) comprising at least objects of the road-marking type, a second set (G2) comprising at least objects of the road-user type and a third set (G3) comprising at least objects of the traffic-sign type,
- the determination step (E4, E51, E52) comprises the determination of three lighting models (M1, M2, M3), each associated with one of the sets, including a first lighting model associated with the first set, a second lighting model associated with the second set and a third lighting model associated with the third;
- and the step (E6) of controlling the lighting system (3) comprises controlling the lighting system according to the determined lighting models for the emission of a first light beam (F1) exhibiting the initial photometry (P1,2) of the first lighting model in the initial detection zone (A1,2) of this first model, of a second light beam (F2) exhibiting the initial photometry (P2,2) of the second lighting model in the initial detection zone (A2,2) of this second model, and of a third light beam (F3) exhibiting the initial photometry (P3,2) of the third lighting model in the initial detection zone (A3,2) of this third model.
- Method according to one of the preceding claims, the method further comprising the following steps:
- (E7) Detection of an object (O) of a given type (Ti,j) from said set of object types (Gi) by the object detection system (2) of the vehicle (1),
- (E8) Control of the lighting system (3) for the modification of the light beam (Fi) according to the type of the detected object.
- Method according to the preceding claim, wherein the step (E8) of controlling the lighting system (3) comprises a step of generating a zone (B) in the light beam (Fi) at the level of the detected object (O), the zone exhibiting a photometry adapted to the type (Ti,j) of the detected object, and a step of moving said zone according to the movement of the detected object in the frame of reference of the image acquisition system (21).
- Method according to one of the preceding claims, the motor vehicle (1) being equipped with a partially or fully autonomous driving system, wherein the implementation of the step (E6) of controlling the lighting system (3) is conditional on the activation of the autonomous driving system, the method comprising the following steps:
- Reception of an instruction (I) to resume manual control of the motor vehicle by an occupant of the vehicle,
- (E9) Control of the lighting system for the emission of at least one predetermined regulatory lighting and/or signaling beam (LB).
- Motor vehicle (1) comprising an object detection system (2) comprising a system (21) for acquiring images of all or part of the environment of the vehicle, a lighting system (3), a partially or fully autonomous driving system, and a controller of the lighting system, the controller being arranged to implement the control step (E6) of the method according to the invention.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202280017299.4A CN116888636A (zh) | 2021-02-26 | 2022-02-25 | 用于控制机动车辆照明系统的方法 |
EP22712839.4A EP4298612A1 (en) | 2021-02-26 | 2022-02-25 | Method for controlling a lighting system of a motor vehicle |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR2101882A FR3120212B1 (en) | 2021-02-26 | 2021-02-26 | Method for controlling a lighting system of a motor vehicle |
FRFR2101882 | 2021-02-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022180253A1 true WO2022180253A1 (fr) | 2022-09-01 |
Family
ID=75339949
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2947223A1 (fr) * | 2009-06-29 | 2010-12-31 | Valeo Vision | Procede de commande de faisceau d'eclairage pour vehicules |
US20210046862A1 (en) * | 2018-10-31 | 2021-02-18 | SZ DJI Technology Co., Ltd. | Method and apparatus for controlling a lighting system of a vehicle |
Also Published As
Publication number | Publication date |
---|---|
CN116888636A (zh) | 2023-10-13 |
EP4298612A1 (fr) | 2024-01-03 |
FR3120212B1 (fr) | 2023-07-14 |
FR3120212A1 (fr) | 2022-09-02 |