EP3704625A1 - Method of processing data for system for aiding the driving of a vehicle and associated system for aiding driving
Info
- Publication number
- EP3704625A1 (application number EP18789168.4A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- data
- vehicle
- environment
- processing method
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- method: title, claims, abstract, description (55)
- sampling: claims, description (52)
- processing method: claims, description (19)
- peripheral: description (25)
- temporal: description (12)
- detection: description (9)
- augmented: description (8)
- solid: description (5)
- decrease: description (2)
- function: description (2)
- material: description (2)
- treatments: description (2)
- calculation: description (1)
- image analysis: description (1)
- matrix: description (1)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/96—Management of image or video recognition tasks
Definitions
- the present invention generally relates to the field of processing data acquired by a sensor.
- It relates more particularly to a method of processing data acquired by the sensor of a vehicle driving assistance system, and to such a driving assistance system.
- Such driving assistance systems generally comprise a data sensor, of the video camera type, capable of acquiring images of the vehicle environment, a processing unit able to apply processing algorithms to the data acquired in order to extract the relevant information, and a display unit allowing the driver to view the extracted information.
- the present invention proposes a data processing method for a vehicle driving assistance system comprising a step of acquiring a set of data relating to the environment of the vehicle, said set of data comprising a first set of data relating to a first part of the vehicle environment and a second set of data relating to a second part of the vehicle environment,
- the method further comprises steps of: processing the first set of data according to a first process and at a first instant,
- processing the second set of data according to a second process distinct from the first process or at a second instant distinct from the first instant.
- the driver assistance system makes it possible to reduce the volume of processed data and to increase the speed of display of the information obtained. Indeed, the system makes it possible to allocate more hardware resources to the processing of the data sets yielding the most relevant information than to those of little interest (a code sketch of this scheme is given at the end of this summary).
- the second set of data is processed at the second instant and the second instant is after the first instant
- the first process comprises sampling the first set of data according to a first sampling rate
- the second process comprises sampling the second set of data according to a second sampling rate, the first sampling rate being greater than the second sampling rate,
- the sampling of the first set of data and the sampling of the second set of data are spatial sampling
- sampling of the first set of data and the sampling of the second set of data are time sampling
- the method further comprises a step of detecting an object in the vehicle environment and/or a step of determining a subset of data corresponding to said object in the data set,
- a number of algorithms is applied to the subset of data, and in which said number of algorithms is a function of a location of the object,
- the location of the object comprises whether the subset of data belongs to the first set of data (the subset of data then forming a first subset of data) or to the second set of data (the subset of data then forming a second subset of data),
- a second number of algorithms is applied to said second subset of data, the first number of algorithms being greater than the second number of algorithms,
- the location of the object comprises a distance between the object and the vehicle
- the method further comprises a step of superimposing information obtained by the processing of the first set of data on the object by means of a head-up display (for example a head-up display with augmented reality),
- the first set of data represents a first image part
- the first image part corresponds to a display area of the driver assistance system.
- the invention also proposes a vehicle assistance system comprising:
- a sensor adapted to acquire a set of data relating to the environment of the vehicle, said set of data comprising a first set of data relating to a first part of the environment of the vehicle and a second set of data relating to a second part of the environment of the vehicle,
- an electronic processing unit able to process the first set of data according to a first process and at a first instant, and able to process the second set of data according to a second process distinct from the first process or at a second instant distinct from the first instant.
- the system comprises a head-up display (for example a head-up display with augmented reality),
- the electronic processing unit is able to produce information by means of the first process
- said head-up display (augmented reality) is capable of superimposing said information on an object in the environment of the vehicle.
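As a purely illustrative aside (not part of the patent text), the claimed scheme can be sketched in a few lines of Python; every name below (EnvironmentData, first_process, second_process) is hypothetical and stands in for processing chains the patent leaves open:

```python
# A minimal sketch, assuming a simple in-memory representation of the two
# data sets; all names are illustrative, not taken from the patent.
from dataclasses import dataclass

@dataclass
class EnvironmentData:
    first_set: list   # data relating to the first part of the environment
    second_set: list  # data relating to the second part of the environment

def first_process(data):
    # stands in for the richer processing chain (detection, tracking, ...)
    return [f"rich({d})" for d in data]

def second_process(data):
    # stands in for a lighter, distinct processing chain
    return [f"light({d})" for d in data]

def process_environment(env):
    info_first = first_process(env.first_set)     # first instant t1
    info_second = second_process(env.second_set)  # second instant t2, after t1
    return info_first, info_second

print(process_environment(EnvironmentData(["pixel block A"], ["pixel block B"])))
```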
- FIG. 1 schematically represents a vehicle seen from above equipped with a driving assistance system
- FIG. 2 schematically represents an image acquired by the driver assistance system and processed by means of the data processing method according to the invention
- FIG. 3 schematically represents an image acquired by the driver assistance system and processed according to a variant of the data processing method
- FIG. 4 shows schematically an image acquired by the driver assistance system and processed according to another variant of the data processing method.
- FIG. 1 diagrammatically shows the main elements of a driving assistance system 1 intended to equip a vehicle 3, for example a motor vehicle, a train, a boat such as a barge, a tramway or a bus, to facilitate driving.
- This driving assistance system 1 here comprises a head-up display.
- the head-up display includes a light source, a projection unit and a partially transparent plate.
- the partially transparent plate serves as a display area 5 (see FIG. 2) and makes it possible to display information 2 relating to the vehicle 3 and/or its environment in the form of a virtual image.
- the driver assistance system 1 described here is particularly advantageous when the aforementioned head-up display is an augmented-reality head-up display.
- Such an augmented-reality head-up display makes it possible to superimpose (for the driver of the vehicle) the displayed information 2 on an object (pedestrian, animal, other vehicle, etc.) located in front of the vehicle 3.
- the partially transparent plate comprises for example the windshield 9 of the vehicle 3, and the display zone 5 can then extend over at least part of said windshield 9.
- the driver assistance system 1 further comprises a detection device.
- Said detection device typically comprises a sensor adapted to acquire a set of data relating to the environment of the vehicle 3.
- the sensor comprises for example an image acquisition unit 11, for example a video camera.
- the set of data relating to the environment represents an acquired image 13 (see FIG. 2) and / or a sequence of images 13.
- the acquired image 13 comprises a plurality of pixels, generally arranged in the form of a matrix of pixels.
- the image acquisition unit 11 is arranged at the front of the vehicle 3, for example at the interior (central) rearview mirror of the vehicle 3.
- the image acquisition unit 11 has an angle of field 111 which extends facing the vehicle 3.
- the angle of field 111 covers the solid angle corresponding to the windshield 9 of the vehicle (as seen from the driver) and extends beyond the solid angle corresponding to said windshield 9.
- the driver assistance system 1 further comprises an electronic processing unit 15.
- the electronic processing unit 15 is programmed to process data of the acquired image 13.
- This processing comprises at least one algorithm selected from a plurality of algorithms.
- Said plurality of algorithms is stored by the driver assistance system 1, for example in a memory unit 17.
- the plurality of algorithms comprises, for example, algorithms capable of obtaining information 2 on the acquired image 13, for example detecting the presence or absence of an object 7, identifying the object 7, following the object 7 over a sequence of acquired images 13, predicting the evolution of the position of the object 7 over time, extracting the characteristics of the object 7 (distance, orientation, size, speed, etc.), or calculating the time remaining before a potential collision between the object 7 and the vehicle 3.
- the plurality of algorithms also includes sampling algorithms for sampling the acquired image 13 and thus reducing the volume of data to be processed thereafter.
- the image acquisition unit 11 acquires the image 13 of the environment located at the front of the vehicle 3.
- the dataset includes a first set of data and a second set of data.
- the first set of data here represents a first image portion 19 relating to a first part of the environment.
- the second data set represents a second image portion 21 relating to a second part of the environment.
- said first and second image portions 19, 21 have pixel coordinates stored in the memory unit 17.
- the first image portion 19 corresponds to the display area 5 of the head-up display, which as already indicated corresponds to at least part of the windshield 9.
- the display area 5 is the area on which the information 2 is superimposed on the environment, and therefore the area for which analysis of the acquired image 13 benefits the driver the most.
- the second image portion 21 includes a peripheral zone surrounding the first image portion 19.
- the acquired image 13 is transmitted to the electronic processing unit 15.
- the electronic processing unit 15 applies a first process to the first image portion 19 at a first instant t1. More specifically, the first process is applied to the pixels of the first image portion 19.
- the first process may comprise in practice a single first algorithm or a plurality of first algorithms.
- the first process uses, for example, an algorithm for detecting whether an object 7 located in the environment of the vehicle 3 is located, more precisely, in the solid angle corresponding to the display area 5.
- the first process implements a second algorithm making it possible to extract a first subset of data 719 corresponding to the object 7 in the first image portion 19.
- the electronic processing unit 15 applies the second process to the pixels of the second image portion 21.
- the electronic processing unit 15 applies the first process to the pixels of the second image portion 21 at a second instant t2 distinct from the first instant t1, the first instant t1 being prior to the second instant t2.
- the first image portion 19 is thus processed in priority, which allows a rapid display of the information 2 obtained through the first process.
- the first process implements the algorithm for detecting whether an object 7 located in the environment of the vehicle 3 is located outside the solid angle corresponding to the display area 5.
- the first process makes it possible to extract a second subset of data 721 corresponding to the object 7 in the second image portion 21.
- the driver assistance system 1 projects the information 2 onto the display area 5, which allows the driver to view it in the form of virtual images.
- the information 2 is projected so that it appears superimposed on the objects 7 in the environment of the vehicle 3.
- the information 2 contains for example a symbol or a contour which, superimposed on the object 7 or placed nearby, emphasizes its presence. This information 2 is particularly relevant in the case of objects 7 that are difficult for the driver to see.
- the electronic processing unit 15 applies, at the first instant t1, a second process distinct from the first process to the pixels of the second image portion 21.
- the second process may comprise a single second algorithm or a plurality of second algorithms.
- the plurality of second algorithms may comprise one or more of the plurality of first algorithms.
- This embodiment is particularly advantageous when an object 7 is detected in the environment of the vehicle 3.
- the electronic processing unit 15 can then apply a plurality of algorithms to the first subset 719 or the second subset 721 in order to measure a plurality of features of the object 7.
- the electronic processing unit 15 can apply a first number of first algorithms to the pixels of the first subset 719, and a second number of second algorithms to the pixels of the second subset 721.
- the first number of first algorithms is greater than the second number of second algorithms, so more information 2 is obtained on a first subset 719 (for example the size, distance, nature, etc. of the object 7 it represents) than on a second subset 721 (for example only the distance).
- the data processing method thus makes it possible to reduce the number of operations performed by the electronic processing unit 15 (an illustrative sketch of this per-location algorithm selection follows).
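For illustration only, a minimal Python sketch of selecting more algorithms for the first subset than for the second is given below; the algorithm names and the values they return are invented placeholders, not the patent's algorithms:

```python
# Illustrative sketch (hypothetical names): more algorithms are run on a
# subset of pixels inside the first image portion than on one outside it.
def measure_size(subset):     return {"size": len(subset)}
def measure_distance(subset): return {"distance_m": 20.0}  # dummy value
def classify_nature(subset):  return {"nature": "pedestrian"}

FIRST_ALGORITHMS = [measure_size, measure_distance, classify_nature]  # first number
SECOND_ALGORITHMS = [measure_distance]                                # smaller second number

def process_subset(subset, in_first_portion):
    info = {}
    for algorithm in (FIRST_ALGORITHMS if in_first_portion else SECOND_ALGORITHMS):
        info.update(algorithm(subset))
    return info

print(process_subset([(0, 0), (0, 1)], in_first_portion=True))  # size, distance, nature
print(process_subset([(9, 9)], in_first_portion=False))         # distance only
```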
- the first process and the second process comprise a sampling of the pixels of the acquired image 13. During this sampling, the second image portion 21 is subsampled with respect to the first image portion 19. Only the sampled pixels will subsequently be processed by the other algorithms. This makes it possible to reduce the volume of data to be processed and thus to accelerate the image processing while reducing the necessary electronic resources.
- the sampling is for example implemented by the electronic processing unit 15.
- the sampling comprises a spatial sampling of the pixels.
- the pixels of the first image portion 19 are sampled according to a first spatial sampling rate ts1.
- the pixels of the second image portion 21 are sampled according to a second spatial sampling rate ts2.
- the first spatial sampling rate ts1 is greater than the second spatial sampling rate ts2, so the resolution of the first sampled image portion is greater than the resolution of the second sampled image portion.
- the first spatial sampling rate ts1 is equal to 1
- the second spatial sampling rate ts2 is equal to 1/2, 1/3, 1/4, and so on.
- the size of an object 7 corresponding to the first subset 719 is calculated more accurately than the size of an object 7 corresponding to the second subset 721 (this size can then be underestimated or overestimated).
- the volume of data to be processed is reduced, and thus the number of operations to be performed to obtain the information 2 relating to the environment of the vehicle 3, while maintaining good accuracy for the relevant information 2 (see the spatial-sampling sketch below).
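A minimal sketch of such spatial subsampling, assuming the acquired image is a NumPy pixel matrix; the function name and the chosen rates are illustrative:

```python
# Spatial subsampling sketch: a rate of 1 keeps every pixel, a rate of 1/n
# keeps one pixel out of n along each axis.
import numpy as np

def spatial_sample(pixels, n):
    """Apply a spatial sampling rate of 1/n (n = 1 keeps full resolution)."""
    return pixels[::n, ::n]

image = np.arange(64).reshape(8, 8)
first_portion  = spatial_sample(image[:, :4], 1)  # ts1 = 1
second_portion = spatial_sample(image[:, 4:], 2)  # ts2 = 1/2
print(first_portion.shape, second_portion.shape)  # (8, 4) (4, 2)
```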
- the sampling comprises a temporal sampling of the pixels.
- the pixels of the first image portion 19 are sampled according to a first temporal sampling rate tt1.
- the pixels of the second image portion 21 are sampled according to a second temporal sampling rate tt2.
- the first temporal sampling rate tt1 is greater than the second temporal sampling rate tt2; thus the information 2 obtained is updated more frequently for the first image portion 19 than for the second image portion 21.
- the first temporal sampling rate is equal to 1
- the second temporal sampling rate is equal to 1/2, 1/3, 1/4, and so on.
- the position of an object 7 corresponding to the first subset 719 is calculated on every first image portion 19 when the first temporal sampling rate tt1 is equal to 1.
- the position of an object 7 corresponding to the second subset 721 is calculated on only one second image portion 21 out of two when the second temporal sampling rate tt2 is equal to 1/2.
- the position of the object 7 corresponding to the first subset 719 is therefore known precisely over time, which allows for example a precise tracking of said object 7.
- the tracking of an object 7 corresponding to the second subset 721 is less accurate.
- the volume of data to be processed is reduced, and thus the number of operations to be performed to obtain the information 2 relating to the environment of the vehicle 3 (see the temporal-sampling sketch below).
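A minimal sketch of such temporal subsampling; the frame representation is illustrative:

```python
# Temporal subsampling sketch: with tt1 = 1 every frame of the first image
# portion is processed; with tt2 = 1/2 only every other frame of the second.
def temporally_sampled(frames, n):
    """Keep one frame out of n (temporal sampling rate = 1/n)."""
    return [(i, f) for i, f in enumerate(frames) if i % n == 0]

frames = ["N", "N+1", "N+2", "N+3"]
print(temporally_sampled(frames, 1))  # first portion: all four frames
print(temporally_sampled(frames, 2))  # second portion: frames N and N+2
```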
- the second image portion 21 may comprise a plurality of zones, for example a first zone and a second zone.
- the zones of the second image portion 21 form peripheral zones 211, 213 arranged concentrically around the first image portion 19, as shown in FIG. 3.
- a first peripheral zone 211 surrounds the first image portion 19 and a second peripheral zone 213 surrounds the first peripheral zone 211.
- the first image portion 19 is processed at the first instant t1
- the first peripheral zone 211 is processed at the second instant t2 distinct from the first instant t1
- the second peripheral zone 213 is processed at a third instant t3.
- the third instant t3 is later than the second instant t2, the second instant t2 being itself later than the first instant t1.
- the third number of third algorithms applied to the second peripheral zone 213 is smaller than the second number of second algorithms applied to the first peripheral zone 211.
- the second number of second algorithms applied to the first peripheral zone 211 is itself lower than the first number of first algorithms applied to the first image portion 19.
- the spatial sampling rates ts and the temporal sampling rates tt decrease with distance from the first image portion 19.
- all images are selected for the first image portion 19, only every other image is selected for the first peripheral zone 211, and one image out of four is selected for the second peripheral zone 213.
- the information 2 extracted from the peripheral zones 211, 213 is therefore less precise than that of the first image portion 19 (see the zone-scheduling sketch below).
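For illustration, the concentric-zone scheduling could look as follows in Python; the zone table simply restates the 1, 1/2 and 1/4 rates above and is not prescribed by the patent:

```python
# Sketch of the concentric-zone variant: sampling rates decrease with
# distance from the first image portion 19.
ZONES = [
    ("first image portion 19",    1),  # every image
    ("first peripheral zone 211", 2),  # one image out of two
    ("second peripheral zone 213", 4), # one image out of four
]

def zones_to_process(frame_index):
    """Return the zones whose turn it is on this frame."""
    return [name for name, n in ZONES if frame_index % n == 0]

for i in range(4):
    print(i, zones_to_process(i))
# frame 0: all three zones; frame 1: portion 19 only; frame 2: portion 19 and 211; ...
```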
- FIG. 4 illustrates the positions 7N, 7N+1, 7N+2, 7N+3 of an object 7 on a superposition of a sequence of acquired images N, N+1, N+2, N+3.
- the initial position 7N represents the object 7 on an initial image N, etc.
- the object 7, for example a pedestrian, is first detected at a first position 7N in the second peripheral zone 213 of the first acquired image N.
- since the second peripheral zone 213 has a temporal sampling rate tt213 equal to 1/4, the second position 7N+1 of the object 7 will not be known on the second acquired image N+1. Indeed, the object 7 is still in the second peripheral zone 213.
- the temporal sampling rate tt211 of the first peripheral zone 211 is equal to 1/2.
- the object 7 has entered the first peripheral zone 211.
- the object 7 is thus detected at a third position 7N+2 in the first peripheral zone 211 on the third acquired image N+2.
- the object 7 then enters the first image portion 19, and its fourth position 7N+3 can be detected on the fourth acquired image N+3.
- the driver assistance system 1 comprises a hybrid detection device.
- the hybrid detection device comprises multiple sensors.
- the hybrid detection device comprises a telemetry device 23 for measuring a distance between the object 7 and the vehicle 3.
- the telemetry device 23 comprises for example a lidar ("Light Detection And Ranging") or a radar. Said telemetry device 23 is able to emit an electromagnetic wave and to acquire the electromagnetic wave reflected by an object 7 present in the detection field of the telemetry device 23. The distance between the object 7 and the vehicle 3 is then calculated by measuring a time of flight of the light wave or of the radio wave reflected by this object 7, this time of flight being obtained from the echo signal captured by the lidar or radar (see the sketch below).
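The time-of-flight calculation mentioned above reduces to halving the product of the propagation speed and the measured flight time, since the wave travels out to the object and back; a one-line sketch with an illustrative function name:

```python
# Time-of-flight distance as used by lidar/radar telemetry.
C = 299_792_458.0  # propagation speed of the electromagnetic wave, in m/s

def distance_from_time_of_flight(flight_time_s):
    # the wave covers twice the object distance (out and back)
    return C * flight_time_s / 2.0

print(distance_from_time_of_flight(133e-9))  # an echo after ~133 ns -> ~20 m
```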
- each image portion is determined as a set of pixels corresponding to objects located over a certain range of distances as measured by the telemetry device 23. (Each image portion can then be formed of disjoint subparts.)
- the first image portion 19 may comprise all the pixels corresponding to objects situated at a measured distance less than a threshold value (for example of 20 m).
- the telemetry device 23 makes it possible to determine a first image portion 19 of reduced size and thus to increase the speed of the process and to reduce the number of operations.
- a plurality of objects 7 is detected by telemetry, each object 7 being at a different distance from the vehicle 3.
- the first image portion 19 then corresponds to the first set of pixels representing an object, or to a plurality of sets of pixels representing a plurality of objects, located at a distance less than the threshold value.
- the second image portion 21 then corresponds to the second set of pixels representing an object, or to a plurality of sets of pixels representing a plurality of objects, located at a distance greater than the threshold value.
- a plurality of threshold values can be stored in the memory unit 17 of the vehicle.
- the second image portion 21 may comprise a plurality of defined areas according to the threshold values.
- the first zone of the second image portion 21 includes the set(s) of pixels representing one or more objects located between 20 m and 30 m from the vehicle 3.
- the second zone corresponds to the third set of pixels representing one or more objects located more than 30 m from the vehicle 3, etc. (a partitioning sketch follows).
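A sketch of this distance-based partitioning, reusing the example thresholds of 20 m and 30 m; the function name and the pixel representation are illustrative:

```python
# Partition pixels into image portions by measured distance; each resulting
# portion may be made of disjoint pixel sets, as noted above.
def partition_by_distance(pixel_distances, thresholds=(20.0, 30.0)):
    """pixel_distances maps pixel -> distance in metres; returns one set per band."""
    bands = [set() for _ in range(len(thresholds) + 1)]
    for pixel, distance in pixel_distances.items():
        # count how many thresholds the distance meets or exceeds
        bands[sum(distance >= t for t in thresholds)].add(pixel)
    return bands

first_portion, zone_20_to_30, zone_beyond_30 = partition_by_distance(
    {(0, 0): 12.0, (0, 1): 25.0, (5, 7): 42.0})
print(first_portion, zone_20_to_30, zone_beyond_30)
```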
- This fourth embodiment can be combined with the other embodiments and their variants.
- the processes applied to the first image portion 19, the first zone and the second zone may be those previously described.
- the information 2 corresponding to the object closest to the vehicle 3 can be displayed in priority.
- the telemetry device 23 can furthermore make it possible to confirm certain information 2 obtained by means of the image acquisition unit 11, for example the distance between the object 7 and the vehicle 3.
- the display step d) is performed as previously described.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Traffic Control Systems (AREA)
- Control Of Vehicle Engines Or Engines For Specific Uses (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR1760252A FR3072931B1 (en) | 2017-10-30 | 2017-10-30 | DATA PROCESSING METHOD FOR A DRIVING ASSISTANCE SYSTEM OF A VEHICLE AND ASSOCIATED DRIVING ASSISTANCE SYSTEM |
PCT/EP2018/079208 WO2019086314A1 (en) | 2017-10-30 | 2018-10-24 | Method of processing data for system for aiding the driving of a vehicle and associated system for aiding driving |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3704625A1 true EP3704625A1 (en) | 2020-09-09 |
Family
ID=61027895
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP18789168.4A Pending EP3704625A1 (en) | 2017-10-30 | 2018-10-24 | Method of processing data for system for aiding the driving of a vehicle and associated system for aiding driving |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP3704625A1 (en) |
FR (1) | FR3072931B1 (en) |
WO (1) | WO2019086314A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7151234B2 (en) * | 2018-07-19 | 2022-10-12 | 株式会社デンソー | Camera system and event recording system |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5147874B2 (en) * | 2010-02-10 | 2013-02-20 | 日立オートモティブシステムズ株式会社 | In-vehicle image processing device |
JP5626578B2 (en) * | 2010-12-02 | 2014-11-19 | アイシン・エィ・ダブリュ株式会社 | Driving support system, driving support program, and driving support method |
DE102012213291A1 (en) * | 2012-07-27 | 2014-01-30 | Robert Bosch Gmbh | Method and device for determining situation data based on image data |
US20150153184A1 (en) * | 2013-12-04 | 2015-06-04 | GM Global Technology Operations LLC | System and method for dynamically focusing vehicle sensors |
- 2017
  - 2017-10-30 FR FR1760252A patent/FR3072931B1/en active Active
- 2018
  - 2018-10-24 EP EP18789168.4A patent/EP3704625A1/en active Pending
  - 2018-10-24 WO PCT/EP2018/079208 patent/WO2019086314A1/en unknown
Also Published As
Publication number | Publication date |
---|---|
WO2019086314A1 (en) | 2019-05-09 |
FR3072931B1 (en) | 2021-07-23 |
FR3072931A1 (en) | 2019-05-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2016042106A1 (en) | Localisation and mapping method and system | |
FR2932595A1 (en) | METHOD FOR DISPLAYING PARKING ASSIST. | |
WO2022002531A1 (en) | System and method for detecting an obstacle in an area surrounding a motor vehicle | |
EP3704625A1 (en) | Method of processing data for system for aiding the driving of a vehicle and associated system for aiding driving | |
EP2043044A1 (en) | Method and device for automobile parking assistance | |
FR3056531B1 (en) | OBSTACLE DETECTION FOR MOTOR VEHICLE | |
WO2018060380A1 (en) | Detection of obstacles by merging objects for a motor vehicle | |
FR3106108A1 (en) | Method and device for determining the trajectory of a road | |
WO2019077010A1 (en) | Data processing method and associated onboard system | |
FR3107114A1 (en) | Method and device for validating mapping data of a vehicle road environment | |
EP3155446B1 (en) | Method and system for estimating a parameter in a motor vehicle, and motor vehicle provided with such a system | |
WO2019141788A1 (en) | Head-up display for motor vehicle and assisted-driving system including such a display | |
FR3092545A1 (en) | ASSISTANCE IN DRIVING A VEHICLE, BY DETERMINING THE TRAFFIC LANE IN WHICH AN OBJECT IS LOCATED | |
FR3084628A1 (en) | METHOD FOR DETERMINING A TYPE OF PARKING LOCATION | |
WO2018069060A1 (en) | Locating device and device for producing integrity data | |
EP3830741B1 (en) | Driving assistance for control of a motor vehicle including parallel processing steps of transformed images | |
WO2023161568A1 (en) | Method for computing three-dimensional surfaces for a vehicle equipped with a driver-assistance system | |
EP4165601A1 (en) | Method for calibrating a camera and associated device | |
FR2937775A1 (en) | METHOD FOR DETECTING A TARGET OBJECT FOR A MOTOR VEHICLE | |
WO2020174142A1 (en) | Vehicle driving assistance by reliable determination of objects in deformed images | |
WO2022033902A1 (en) | Method for aligning at least two images formed by three-dimensional points | |
FR3100641A1 (en) | DETERMINATION OF REAL-TIME ENVIRONMENTAL INFORMATION AND SELECTIVE REGRESSION, FOR A SYSTEM | |
WO2021099395A1 (en) | Method for detecting intensity peaks of a specularly reflected light beam | |
FR3105961A1 (en) | Method and device for determining a lane change indicator for a vehicle | |
FR3135682A1 (en) | Method for managing the trajectory of a motor vehicle in its lane |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STAA | Information on the status of an ep patent application or granted ep patent | STATUS: UNKNOWN |
| STAA | Information on the status of an ep patent application or granted ep patent | STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | ORIGINAL CODE: 0009012 |
| STAA | Information on the status of an ep patent application or granted ep patent | STATUS: REQUEST FOR EXAMINATION WAS MADE |
2020-04-28 | 17P | Request for examination filed | |
| AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| AX | Request for extension of the european patent | Extension state: BA ME |
| DAV | Request for validation of the european patent (deleted) | |
| DAX | Request for extension of the european patent (deleted) | |
| STAA | Information on the status of an ep patent application or granted ep patent | STATUS: EXAMINATION IS IN PROGRESS |
2022-10-06 | 17Q | First examination report despatched | |
| RAP3 | Party data changed (applicant data changed or rights of an application transferred) | Owner name: VALEO COMFORT AND DRIVING ASSISTANCE |
2023-05-28 | P01 | Opt-out of the competence of the unified patent court (upc) registered | |
| RAP3 | Party data changed (applicant data changed or rights of an application transferred) | Owner name: VALEO COMFORT AND DRIVING ASSISTANCE |