EP4264535A1 - System and method for calculating a final image of a surrounding area of a vehicle - Google Patents

System and method for calculating a final image of a surrounding area of a vehicle

Info

Publication number
EP4264535A1
EP4264535A1
Authority
EP
European Patent Office
Prior art keywords
image
vehicle
environment
camera
distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP21820553.2A
Other languages
English (en)
French (fr)
Inventor
Andrea Ancora
Fabrizio Tomatis
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ampere Sas
Original Assignee
Renault SAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Renault SAS filed Critical Renault SAS
Publication of EP4264535A1
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/08Projecting images onto non-planar surfaces, e.g. geodetic screens
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image
    • G06V10/247Aligning, centring, orientation detection or correction of the image by affine transforms, e.g. correction due to perspective effects; Quadrilaterals, e.g. trapezoids
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/803Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of input or preprocessed data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/81Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261Obstacle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30264Parking

Definitions

  • a fish-eye lens is a lens characterized by an extremely short focal length. This very short focal length directly widens the angle of view. Using a fisheye camera therefore introduces a strong curving distortion on all straight lines that do not pass through the image center.
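As an illustration of why fisheye images bend straight lines, the sketch below compares an ideal pinhole projection with the equidistant fisheye model r = f·θ, a common idealization that the patent does not name explicitly; the function names and focal length are illustrative assumptions.

```python
import math

def pinhole_radius(theta, f):
    """Radial image distance of a ray at angle theta (rad) for an ideal pinhole lens."""
    return f * math.tan(theta)

def fisheye_radius(theta, f):
    """Radial image distance under the equidistant fisheye model r = f * theta."""
    return f * theta

# Near the optical axis the two models agree; far off-axis the fisheye
# compresses the scene, which is what bends straight lines in the image.
f = 1.0
for deg in (5, 40, 80):
    t = math.radians(deg)
    print(deg, round(pinhole_radius(t, f), 3), round(fisheye_radius(t, f), 3))
```

At 80° off-axis the pinhole radius diverges toward infinity while the fisheye radius stays finite, which is how a fisheye lens fits a very wide field of view onto a finite sensor.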
  • the perspective transformation operation involves knowledge of the three-dimensional characteristics of the environment. These features are not available with state-of-the-art camera-based approaches, and in the prior art the three-dimensional environment is traditionally assumed to be constant.
  • the 3D environment is often pre-computed to be a flat surface onto which the image is projected.
  • the flat reference surface is generally the ground. This assumption is correct for image elements whose pixels are on the ground (e.g. lane markings).
  • the projection on the ground creates a distortion.
  • This distortion can be likened to the phenomenon of shadow casting: a tall object is stretched as if it were spread out on the ground. It leads to an uncomfortable image perception for the driver, because the distorted tall objects do not respect the same proportions as the elements actually on the ground.
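The shadow-casting stretch can be quantified with similar triangles: assuming the camera sits at a known height, the ray through the top of a vertical object hits the ground plane farther away than the object's true position. This is a minimal sketch under that assumption; the parameter names are illustrative.

```python
def ground_projection_of_top(cam_height, obj_height, obj_distance):
    """Ground distance at which the top of a vertical object appears when
    every pixel is assumed to lie on the ground plane (camera at cam_height).
    Valid for obj_height < cam_height: by similar triangles, the ray from the
    camera through the object's top reaches the ground beyond the object,
    so the object is drawn 'stretched' along the ground."""
    assert obj_height < cam_height
    return obj_distance * cam_height / (cam_height - obj_height)

# A 1 m post standing 2 m away, seen by a camera mounted 2 m high, appears
# to extend out to 4 m on the ground: twice its true distance.
print(ground_projection_of_top(2.0, 1.0, 2.0))  # 4.0
```

The closer the object's height gets to the camera height, the longer the stretch, which is why tall obstacles look the most deformed in a flat-ground projection.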
  • the invention aims to overcome all or part of the problems mentioned above by proposing a solution capable of better modeling the 3D environment of the vehicle and compensating for the deformation of tall objects located in the space surrounding the vehicle. This results in a final image of the environment, or of a portion of the environment, of the vehicle with better precision, which allows the driver to better assess the elements surrounding his vehicle and thus navigate the environment in complete safety.
  • the subject of the invention is a method for calculating a final image of an environment of a vehicle, said method being implemented in the vehicle, from data coming from a perception system on board the vehicle, the perception system comprising:
  • at least one panoramic-vision camera, each being positioned on the vehicle and configured to capture at least one image in a first angular portion of the environment of the vehicle;
  • at least one measurement sensor configured to determine a distance between the vehicle and a point of the environment in a second angular portion located in one of the first angular portions of the environment of the vehicle; said method comprising:
  • a perspective transformation step applied to the corrected image using a matrix storing, for each pixel of the corrected image, a pre-calculated distance between said camera and the point of the environment corresponding to said pixel projected onto a reference surface, to generate a transformed image;
  • the perspective transformation step further comprising a step of updating, for at least part of the matrix, the pre-calculated distance by the distance determined by the measurement sensor.
  • the perception system comprising a man-machine interface capable of displaying an image
  • the calculation method according to the invention further comprises a step of displaying the final image on the man-machine interface.
  • the perspective transformation step further comprises a step of extruding a zone of the second angular portion into a sector oriented at a predefined angle with respect to the reference surface, said zone being positioned at the distance determined by the at least one measurement sensor.
  • since the distortion in the captured image comes from the lens of the camera, the method comprises, prior to the step of correcting the distortion, a step of characterizing the lens.
  • the invention also relates to a computer program product, said computer program comprising code instructions making it possible to perform the steps of the detection method according to the invention, when said program is executed on a computer.
  • the perception system according to the invention further comprises a man-machine interface capable of displaying the final image.
  • the measurement sensor comprises a sonar, a LIDAR, a 2D or 3D radar, and/or a module for estimating distance from images by calculation, alone or in combination.
  • Figure 1 schematically shows a vehicle equipped with a perception system according to the invention
  • Figure 3 shows the steps of the method for calculating a final image according to the invention
  • the perception system 20 further comprises a man-machine interface 70 capable of displaying the final image.
  • the man-machine interface can in particular be a screen positioned close to the driving position.
  • the final image is thus displayed on this screen and the driver of the vehicle thus has a final image of the environment of the vehicle allowing him to make decisions, to maneuver in complete safety by checking, thanks to the display of the final image, that there is no obstacle (object or pedestrian, cyclist, etc.) in the environment of the vehicle.
  • the final image obtained is compensated for the deformation of tall objects located in the space surrounding the vehicle 10. This results in a final image of the environment, or of a portion of the environment of the vehicle, with better accuracy.
  • the driver is thus able to assess the surrounding elements of his vehicle in a relevant manner and thus navigate the environment in complete safety.
  • the spatial distance between the LIDAR and the contact point on the obstacle is calculated from the delay between the emitted pulse and its return.
  • the LIDAR makes it possible to obtain a distance between a surrounding obstacle and the vehicle. If there are several obstacles (for example, one on the left and one on the right of the vehicle), the LIDAR provides two distances, one corresponding to the obstacle on the left and another corresponding to the obstacle on the right.
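The round-trip ranging principle above can be sketched in a few lines: the pulse travels out and back, so the range is half the delay times the wave speed. The constants and example delays below are illustrative, not taken from the patent.

```python
def time_of_flight_distance(delay_s, wave_speed):
    """Distance to an obstacle from the round-trip delay of a pulse:
    the pulse travels out and back, so range = speed * delay / 2."""
    return wave_speed * delay_s / 2.0

SPEED_OF_LIGHT = 299_792_458.0   # m/s, for a LIDAR pulse
SPEED_OF_SOUND = 343.0           # m/s in air at ~20 degC, for a sonar ping

# A LIDAR echo after ~66.7 ns corresponds to an obstacle ~10 m away;
# a sonar echo after 20 ms corresponds to ~3.43 m.
print(round(time_of_flight_distance(66.7e-9, SPEED_OF_LIGHT), 2))
print(round(time_of_flight_distance(0.020, SPEED_OF_SOUND), 2))
```

The same formula serves both sensor families; only the propagation speed (and hence the timing precision required) differs between sonar and LIDAR.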
  • FIG. 2 is a flowchart representing the method of calculating a final image of a vehicle environment according to the invention.
  • the method for calculating a final image Ifinal of an environment of a vehicle 10 is intended to be implemented in the vehicle 10, from data coming from a perception system 20 on board the vehicle 10.
  • the perception system 20 comprises:
  • at least one camera 21, 22 with panoramic vision, each being positioned on the vehicle 10 and configured to capture at least one image I1, I2 in a first angular portion 31, 32 of the environment of the vehicle 10;
  • at least one measurement sensor 41, 42 configured to determine a distance d1, d2 between the vehicle 10 and a point of the environment in a second angular portion 51, 52 located in one of the first angular portions 31, 32 of the vehicle environment.
  • step 300 of perspective transformation further comprises a step 500 of updating, for at least part of the matrix D, the pre-calculated distance dcalc1, dcalc2 by the distance d1, d2 determined by the measurement sensor 41 and/or 42.
  • the step 500 of updating the distance matrix associated with each pixel of the image is crucial for better consideration of the three-dimensional aspect of the environment. It makes it possible to correct the bias of a matrix which models a plane reference surface 60 (or even a surface with a slight curvature, such as the surface 60' represented in FIG. 1) and which therefore yields a constant, pre-calculated distance, independent of the real three-dimensional environment in which the vehicle 10 moves.
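A minimal sketch of this update step, under simplifying assumptions not stated in the patent: the matrix D holds one flat-ground distance per pixel, and a sensor's angular sector is approximated by a range of image columns. Pixels in that sector whose pre-calculated distance exceeds the measured obstacle distance are clamped to it. The function name and the column-range stand-in for the angular sector are illustrative.

```python
import numpy as np

def update_distance_matrix(D, sector_cols, measured_d):
    """Update the per-pixel distance matrix D (one pre-calculated
    camera-to-reference-surface distance per pixel) with a sensor reading:
    inside the columns covered by the sensor's angular sector, any pixel
    whose flat-ground distance exceeds the measured obstacle distance is
    clamped to that distance. Returns a new matrix; D is left untouched."""
    D = D.copy()
    c0, c1 = sector_cols
    D[:, c0:c1] = np.minimum(D[:, c0:c1], measured_d)
    return D

# Toy 3x6 matrix of pre-calculated ground distances (metres); a sonar
# covering columns 2..4 reports an obstacle at 3.0 m.
D = np.full((3, 6), 5.0)
D_new = update_distance_matrix(D, (2, 4), 3.0)
print(D_new[0])  # [5. 5. 3. 3. 5. 5.]
```

Clamping with a minimum reflects the physical constraint that a ray cannot travel past the first obstacle it meets, which is exactly the bias the flat-ground assumption ignores.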
  • This transformation step 300 will be the subject of a more detailed description below.
  • the perception system 20 comprises a man-machine interface 70 capable of displaying an image, in which case the method according to the invention further comprises a step 600 of displaying the final image Ifinal on the man-machine interface 70.
  • the display of the final image representative of the environment of the vehicle allows the driver of the vehicle to better understand the environment of the vehicle. This results in better safety for the driver and any passengers, as well as for the surrounding area.
  • the calculation method according to the invention may further comprise, after step 100 of image capture, for at least one of the captured images I1, I2, a step 700 of resizing the captured image I1, I2.
  • the resizing step 700 makes it possible to obtain images with the correct dimensions for the subsequent steps of the method.
  • the distortion in the captured image comes from the lens of the camera 21, 22.
  • the camera lenses are not uniform, which introduces fish-eye type aberrations into the image. These aberrations appear especially on the edges of the image because the thickness of the lens is less regular on the edges. Image distortion is therefore linked to the intrinsic characteristics of the camera and especially of the lens. It is necessary to compensate for the intrinsic parameters of the camera.
  • the method according to the invention may comprise, prior to the step 200 of correcting the distortion, a step 800 of characterizing the characteristics of the lens.
  • This characterization step 800 may be performed just once in the lifetime of a camera, or it may be performed at more or less regular intervals to ensure proper correction of distortions over time. Characterization step 800 is generally based on data from the camera supplier.
  • Figure 3 shows the steps of the method for calculating a final image according to the invention.
  • An image I1 is captured.
  • Step 200 corrects the distortion in the captured image to generate a corrected image Icorr.
  • step 300 of perspective transformation specific to the invention is more precise than a perspective transformation of the prior art, thanks to the enrichment of the captured image frames with information from the measurement sensors.
  • the method according to the invention provides additional information in the calculation of the final image by recovering distance data between the vehicle and the three-dimensional objects of the environment, in order to take into account the height of these objects and to allow a faithful visual rendering of them on the final image.
  • the calculation method according to the invention can be summarized as follows:
  • the matrix D takes into account the distance information supplied by the measurement sensor or sensors, whether sonar or of another type. This update of the information in matrix D provides better consideration of the reality on the ground.
  • FIG. 4 illustrates in more detail step 300 of perspective transformation of the method for calculating a final image according to the invention.
  • Matrix D is represented with only 3 rows and 3 columns. Obviously, this matrix comprises many more rows and columns in accordance with the number of pixels of the image considered.
  • the matrix D stores for each pixel a pre-calculated distance (dcalc1, dcalc2) between said camera 21, 22 and the point of the environment corresponding to said pixel projected on a reference surface 60.
  • the 3D shape is calculated online using the measurement sensor, preferably the sonar sensors, which are already commonly fitted on vehicles, to better model the 3D environment and compensate for the deformation of tall objects located in the space surrounding the vehicle.
  • a pre-calculated matrix D is used (whether flat, following the ground 60, or bowl-shaped, following the bowl 60'). This is the matrix shown on the left side of the figure. An update of this matrix D is then performed (step 500) according to the distance data provided by the measurement sensor, yielding the updated matrix represented on the right part of the figure.
  • a distance measurement sensor such as a sonar
  • step 300 of perspective transformation further comprises a step 510 of extruding a zone 55, 56 of the second angular portion 51, 52 into a sector oriented at a predefined angle θ with respect to the reference surface 60, said zone 55, 56 being positioned at the distance d1, d2 determined by the at least one measurement sensor 41, 42.
  • the plane surface represents the pre-calculated ground (reference surface 60).
  • the zone 55, that is to say the portion of the second angular portion 51 situated beyond the distance d1 with respect to the vehicle, is extruded by a predefined angle of inclination θ.
  • the inclination is controlled by a so-called visual comfort angle θ.
  • the angle of inclination θ can be adjusted during the calibration phase or by other means (for example as a function of the height of the object). This angle is chosen between 0° and 90°. It can be predefined or adapted depending on the situation. In particular, its value may vary depending on the distance from the zone 55 to the vehicle. This extrusion makes it possible to take into account the three-dimensional aspect of the surrounding objects in the calculation of the final image.
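The extruded projection surface can be sketched as a simple piecewise profile: flat up to the sensor-measured obstacle distance d1, then rising at the comfort angle θ. This is a hedged illustration of the geometry only; the function name and sample values are assumptions, not taken from the patent.

```python
import math

def extruded_surface_height(ground_range, obstacle_distance, theta_deg):
    """Height of the projection surface at a given ground range: flat (0) up
    to the sensor-measured obstacle distance, then rising at the
    visual-comfort angle theta (between 0 and 90 deg) measured from the
    ground plane."""
    if ground_range <= obstacle_distance:
        return 0.0
    return (ground_range - obstacle_distance) * math.tan(math.radians(theta_deg))

# With an obstacle at 3 m and theta = 45 deg, the surface stays flat out to
# 3 m and is 1 m high at a ground range of 4 m.
print(extruded_surface_height(2.0, 3.0, 45.0))           # 0.0
print(round(extruded_surface_height(4.0, 3.0, 45.0), 3)) # 1.0
```

At θ close to 90° the profile approaches the quasi-vertical surface described below, while θ near 0° recovers the flat-ground projection of the prior art.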
  • the sonar sensors are often unobtrusive and placed in sufficient number to cover 360 degrees around the vehicle; their number and their specific azimuth field of view determine the angular sector α that they cover.
  • a quasi-vertical surface rises from the ground surface at a point at distance d1 from the origin (for example, the center) of the car.
  • the distance d1 corresponds to the distance measurement coming from the sensor (for example sonar) processing a specific angular sector.
  • This updating and extrusion step is performed for each measurement sensor available on the vehicle.
  • the invention allows a more faithful reconstruction of the bird's eye view when large objects are present in the vicinity of the vehicle. This invention allows better comfort and better reliability in estimating the distances of these objects to facilitate maneuvers.
  • the invention also relates to a computer program product, said computer program comprising code instructions making it possible to perform the steps of the method according to the invention described above, when said program is executed on a computer.
  • the system or subsystems according to the embodiments of the invention can be implemented in various ways by hardware, software, or a combination of hardware and computer software, especially in the form of program code that can be distributed as a program product, in various forms.
  • the program code may be distributed using computer readable media, which may include computer readable storage media and communication media.
  • the methods described in the present description may in particular be implemented in the form of computer program instructions executable by one or more processors in a computing device. These computer program instructions may also be stored in a computer-readable medium.

EP21820553.2A 2020-12-17 2021-11-26 System and method for calculating a final image of a surrounding area of a vehicle Pending EP4264535A1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR2013460A FR3118253B1 (fr) 2020-12-17 2020-12-17 System and method for calculating a final image of an environment of a vehicle
PCT/EP2021/083198 WO2022128413A1 (fr) 2020-12-17 2021-11-26 System and method for calculating a final image of an environment of a vehicle

Publications (1)

Publication Number Publication Date
EP4264535A1 (de) 2023-10-25

Family

ID=74592234

Family Applications (1)

Application Number Title Priority Date Filing Date
EP21820553.2A Pending EP4264535A1 (de) 2020-12-17 2021-11-26 System and method for calculating a final image of a surrounding area of a vehicle

Country Status (5)

Country Link
US (1) US20240107169A1 (de)
EP (1) EP4264535A1 (de)
CN (1) CN116685998A (de)
FR (1) FR3118253B1 (de)
WO (1) WO2022128413A1 (de)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190349571A1 (en) * 2018-05-11 2019-11-14 Ford Global Technologies, Llc Distortion correction for vehicle surround view camera projections
US10796402B2 (en) * 2018-10-19 2020-10-06 Tusimple, Inc. System and method for fisheye image processing

Also Published As

Publication number Publication date
FR3118253A1 (fr) 2022-06-24
FR3118253B1 (fr) 2023-04-14
US20240107169A1 (en) 2024-03-28
CN116685998A (zh) 2023-09-01
WO2022128413A1 (fr) 2022-06-23

Similar Documents

Publication Publication Date Title
EP2884226B1 (de) Verfahren zur Winkelkalibrierung der Position einer Videokamera an Bord eines Kraftfahrzeugs
US9123247B2 (en) Surrounding area monitoring apparatus for vehicle
FR2883826A1 (fr) Appareil d'assistance a la conduite d'un vehicule
NL2004996C2 (nl) Werkwijze voor het vervaardigen van een digitale foto, waarbij ten minste een deel van de beeldelementen positieinformatie omvatten en een dergelijke digitale foto.
EP2133237B1 (de) Verfahren zur Anzeige einer Einparkhilfe
FR2899332A1 (fr) Dispositif de mesure de champ de visibilite pour vehicule et systeme d'assistance a la conduite pour vehicule
US20120236287A1 (en) External environment visualization apparatus and method
EP1785966B1 (de) Verfahren zur Bewertung der Eigenschaften eines Frontelements mittels eines Kraftfahrzeugs
US20200117918A1 (en) Vehicle lane marking and other object detection using side fisheye cameras and three-fold de-warping
FR2861488A1 (fr) Dispositif de determination de possibilite de collision
FR2965765A1 (fr) Procede et dispositif pour former une image d'un objet dans l'environnement d'un vehicule
EP2043044B1 (de) Verfahren und Vorrichtung zur Einparkhilfe eines Kraftfahrzeugs
FR2965956A1 (fr) Procede et dispositif de representation optique de l'environnement d'un vehicule
FR3054673B1 (fr) Fusion de donnees de detection et de suivi d'objets pour vehicule automobile
EP4264535A1 (de) System und verfahren zur berechnung eines endbildes eines umgebungsbereichs eines fahrzeugs
FR3067999B1 (fr) Procede d'aide a la conduite d'un vehicule automobile
FR2998956A1 (fr) Procede de calibration d'une camera mise en place dans un vehicule automobile
WO2021156026A1 (fr) Procédé de calibration des caractéristiques extrinsèques d'un lidar
JP5580062B2 (ja) 障害物検知警報装置
FR3100914A1 (fr) Procédé d’élaboration d’une image distordue
FR2938228A1 (fr) Procede de mesure de distance au moyen d'une camera embarquee dans un vehicule automobile
FR3078667A1 (fr) Procede et systeme d'assistance au stationnement par vision active a lumiere structuree
EP3704625A1 (de) Verfahren zur datenverarbeitung für ein fahrhilfesystem eines fahrzeugs und zugehöriges fahrhilfesystem
FR3060775A1 (fr) Procede pour determiner une zone d'affichage d'un element dans un dispositif d'affichage
WO2017216465A1 (fr) Procédé et dispositif de traitement d'images acquises par une caméra d'un véhicule automobile

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20230605

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: AMPERE SAS