EP3555802A1 - Object recognition system based on an adaptive generic 3D model - Google Patents

Object recognition system based on an adaptive generic 3D model

Info

Publication number
EP3555802A1
Authority
EP
European Patent Office
Prior art keywords
objects
generic
model
images
class
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP17811644.8A
Other languages
English (en)
French (fr)
Inventor
Loïc LECERF
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Marelli Europe SpA
Original Assignee
Magneti Marelli SpA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Magneti Marelli SpA
Publication of EP3555802A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2148 Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/28 Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/772 Determining representative reference patterns, e.g. averaging or distorting patterns; Generating dictionaries

Definitions

  • The invention relates to mobile object recognition systems, including systems based on machine learning.
  • The isolation and tracking of a moving object in a sequence of images can be performed by relatively unsophisticated generic algorithms based, for example, on background subtraction.
  • It is more difficult to classify the objects thus isolated into the categories one wishes to detect, that is to say, to recognize whether the object is a person, a car, a bicycle, an animal, etc.
  • The objects can have a great variety of morphologies in the images of the sequence (position, size, orientation, distortion, texture, configuration of possible appendages and articulated elements, etc.).
  • The morphologies also depend on the viewing angle and the lens of the camera that films the scene to be monitored. Sometimes one also wants to recognize subclasses (car model, gender of a person).
  • To classify and detect objects, a machine learning system is generally used. The classification is then based on a knowledge base, or dataset, developed by learning.
  • An initial dataset is generally generated during a so-called supervised learning phase, where an operator views sequences of images produced in context and manually annotates the image areas corresponding to the objects to be recognized. This phase is generally long and tedious, because ideally one seeks to capture all the possible variants of the morphology of the objects of the class, or at least enough variants to obtain a satisfactory recognition rate.
  • A characteristic of techniques that synthesize training images from 3D models is that they generate many synthesized images that, although they conform to the parameters and constraints of the 3D models, have improbable morphologies. This clutters the dataset with unnecessary images and can slow recognition.
  • A method is provided for automatically configuring a recognition system for a class of objects of variable morphology, comprising the steps of: providing a machine learning system with an initial dataset sufficient to recognize instances of objects of the class in a sequence of images of a target scene; providing a generic three-dimensional model specific to the class of objects, whose morphology can be defined by a set of parameters; acquiring a sequence of images of the scene using a camera; recognizing image instances of objects of the class in the acquired image sequence using the initial dataset; conforming the generic three-dimensional model to the recognized image instances; recording the ranges of variation of the parameters resulting from the conformations of the generic model; synthesizing multiple three-dimensional objects from the generic model by varying the parameters within the recorded ranges of variation; and completing the dataset of the learning system with projections of the synthesized objects in the plane of the images.
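
By way of illustration only, the following self-contained sketch reduces the method to a one-parameter "model" (object length). The function names and the fixed pixel scale are assumptions made for the example, not elements of the method, which works on a full 3D landmark mesh.

```python
import random

PX_PER_M = 30.0  # assumed image scale at the observed positions

def fit_generic_model(instance_width_px):
    """Conform the one-parameter 'model' to an image instance: recover length."""
    return instance_width_px / PX_PER_M

def project(length_m):
    """Project a synthesized object back into the image plane."""
    return length_m * PX_PER_M

# 1. Image instances recognized with the initial dataset (widths in pixels).
instances = [120, 150, 138, 160]

# 2-3. Conform the model to each instance; record the parameter's range.
lengths = [fit_generic_model(w) for w in instances]
lo, hi = min(lengths), max(lengths)

# 4-5. Synthesize objects within the range and add their projections to the
# dataset as self-annotated samples.
dataset = [project(random.uniform(lo, hi)) for _ in range(100)]
print(f"range: [{lo:.2f}, {hi:.2f}] m; {len(dataset)} synthetic samples")
```
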
  • The method may comprise the following steps: defining the parameters of the generic three-dimensional model by the relative positions of landmarks of a mesh of the model, the positions of the other nodes of the mesh being bound to the landmarks by constraints; and performing conformations of the generic three-dimensional model by positioning the landmarks of a projection of the model in the plane of the images.
  • The method may further include the steps of: recording textures from areas of the recognized image instances; and applying to each synthesized object a texture among the recorded textures.
  • The initial dataset of the learning system can be obtained by supervised learning involving at least two objects of the class whose morphologies are at opposite ends of an observed domain of variation of the morphologies.
  • Figure 1 shows a schematic three-dimensional generic model of an object, projected at different positions of a scene seen by a camera.
  • Figure 2 schematically illustrates a configuration phase of a machine learning system for recognizing objects according to the generic model of Figure 1.
  • Figure 1 illustrates a simplified generic model of an object, for example a car, projected onto an image at different positions of an example scene monitored by a fixed camera.
  • The scene here is, for simplicity's sake, a street crossing the field of view of the camera horizontally.
  • In the background, the model is projected at three aligned positions: in the center and near the left and right edges of the image. In the foreground, the model is projected at a position slightly to the left. All these projections come from a model of the same dimensions, and show the morphological variations of the projections in the image as a function of position in the scene. In a more complex scene, for example a curved street, one would also see variations of morphology depending on the orientation of the model.
  • The variations of morphology as a function of position are defined by the projection of the plane on which the objects evolve, here the street.
  • The projection of this plane of evolution is defined by equations that depend on the characteristics of the camera (angle of view, focal length and distortion of the lens). Edges perpendicular to the camera axis change size homothetically as a function of distance from the camera, and edges parallel to the camera axis follow the vanishing lines of the perspective.
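
A minimal numeric illustration of the homothetic scaling, assuming a pinhole camera with an arbitrary focal length and edge length:

```python
# A pinhole camera projects an edge of length L, perpendicular to the camera
# axis at distance Z, to f * L / Z pixels (f and L below are assumptions).

f = 800.0   # focal length in pixels (assumed)
L = 4.0     # real edge length in meters (assumed, e.g. a car's length)

for Z in (10.0, 20.0, 40.0):
    print(f"Z = {Z:4.0f} m -> projected length = {f * L / Z:5.1f} px")
# Doubling the distance halves the projected size; edges parallel to the
# camera axis instead shorten along the image's vanishing lines.
```
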
  • Thus the projections of the same object in the image at different positions or orientations have a variable morphology, even if the real object has a fixed morphology.
  • Real objects can also have a variable morphology, whether from one object to another (e.g. between two cars of different models) or during the movement of the same object (e.g. a pedestrian).
  • Learning systems are well suited to this situation when they have been configured with enough data to represent the range of the most likely projected morphologies.
  • The envisaged generic three-dimensional model, for example of the Point Distribution Model (PDM) type, may comprise a mesh of nodes linked to each other by constraints, that is to say parameters that establish the relative displacements between adjacent nodes, or the deformations of the mesh caused by displacements of certain nodes, known as landmarks.
  • The landmarks are chosen so that their displacements make it possible to reach all the desired morphologies of the model within the defined constraints.
  • A simplified generic car model may include, for the bodywork, a mesh of 16 nodes defining 10 rectangular surfaces and having 10 landmarks.
  • Eight landmarks K0 to K7 define one of the side faces of the car, and the two remaining landmarks K8 and K9, on the other side face, define the width of the car.
  • A single landmark would be enough to define the width of the car, but the presence of two or more landmarks makes it possible to conform the model to the projection of a real object while taking into account the deformations of the projection.
  • The wheels are a characteristic element of a car and can be assigned a specific set of landmarks, not shown, defining the wheelbase, the diameter and the points of contact with the road.
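
The following sketch shows one possible encoding of this simplified model. It is an assumption for illustration, not the patent's data structure: the side-face landmarks are free parameters, and the constraint binding the remaining nodes simply mirrors the side profile across the car's width.

```python
import numpy as np

# 16 mesh nodes, of which 10 are landmarks: K0..K7 outline one side face;
# K8 and K9, on the opposite face, fix the width (all coordinates assumed).

# Side-face landmarks K0..K7: (x, z) profile in meters, at y = 0.
K_side = np.array([
    [0.0, 0.0], [4.2, 0.0], [4.2, 1.0], [3.4, 1.4],
    [1.3, 1.4], [0.8, 1.0], [0.2, 0.9], [0.0, 0.4],
])
width = 1.8  # set by landmarks K8 and K9 (K0 and K1 shifted by `width`)

def build_mesh(profile, width):
    """The constraint binding the non-landmark nodes: the second face is
    the first face translated by the car's width along y."""
    near = np.column_stack([profile[:, 0], np.zeros(len(profile)), profile[:, 1]])
    far = near + np.array([0.0, width, 0.0])
    return np.vstack([near, far])      # 16 nodes: 8 per face

mesh = build_mesh(K_side, width)
print(mesh.shape)  # (16, 3)
```
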
  • The illustrated generic 3D model is simplistic, to clarify the presentation.
  • In practice, the model will include a finer mesh to define edges and curved surfaces.
  • Figure 2 schematically illustrates a configuration phase of a machine learning system for recognizing cars according to the generic model of Figure 1, by way of example.
  • The learning system comprises a dataset 10 associated with a camera 12 installed to film a scene to be monitored, for example that of Figure 1.
  • The configuration phase can start from an existing dataset, which may be summary and offer only a low recognition rate.
  • This existing dataset may have been produced by quick and uncomplicated supervised learning. The following steps are used to complete the dataset in order to achieve a satisfactory recognition rate.
  • The recognition system is started and begins recognizing and tracking cars in successive images captured by the camera.
  • An image instance of each recognized car is extracted at 14.
  • The camera typically produces multiple images that each contain an instance of the same car at different positions. One can choose the largest instance, which will have the best resolution for subsequent operations.
  • The generic 3D model is conformed to each image instance thus extracted.
  • This can be done by a conventional conformation ("fitting") algorithm which seeks, for example, the best matches between the image and the landmarks of the model as projected in the plane of the image.
  • Suitable algorithms based on landmark detection are described, for example, in "One Millisecond Face Alignment with an Ensemble of Regression Trees", Vahid Kazemi et al., IEEE CVPR 2014.
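
Below is a hedged sketch of such a conformation step, reduced to fitting a scale and a 2D translation of the projected landmarks by least squares; the template, observations and noise level are made up. A real system would optimize the full landmark set under the mesh constraints, or use a trained landmark detector as in the cited paper.

```python
import numpy as np
from scipy.optimize import least_squares

# Model landmarks projected in the image plane (assumed template), and their
# positions detected in one image instance (synthetic, with detector noise).
template = np.array([[0.0, 0.0], [4.2, 0.0], [4.2, 1.0], [1.3, 1.4]])
observed = template * 32.0 + np.array([140.0, 80.0])
observed += np.random.default_rng(0).normal(0.0, 1.5, observed.shape)

def residuals(p):
    """Reprojection error of the transformed template landmarks."""
    s, tx, ty = p
    return (template * s + [tx, ty] - observed).ravel()

fit = least_squares(residuals, x0=[1.0, 0.0, 0.0])
print("scale=%.2f  tx=%.1f  ty=%.1f" % tuple(fit.x))
```
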
  • It is preferable that other faces of the cars also be visible in the instances, so that the model can be defined in a complete way.
  • The conformation operations produce 3D models that are meant to be at the scale of the real objects.
  • For this, the conformation operations can use the equations of the projection of the object's plane of evolution. These equations can be determined manually from the camera characteristics and the configuration of the scene, or estimated by the system in a calibration phase using, if necessary, adapted tools such as depth cameras. Knowing that the objects evolve on a plane, the equations can also be deduced from the variation of the size of the instances of a tracked object as a function of position.
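
As an illustration of that last approach, the sketch below fits a size-versus-row law from a few made-up observations of one tracked car: for a level camera over a flat plane, the apparent height of an object is roughly affine in the image row of its ground contact.

```python
import numpy as np

# (image_row_of_base, bbox_height_px) for a single tracked car (assumed data).
track = np.array([[420, 176], [350, 120], [300, 80], [260, 48], [230, 24]])

# Fit height ≈ a * row + b from the track.
a, b = np.polyfit(track[:, 0], track[:, 1], 1)

def expected_height(row):
    """Predicted pixel height of a car whose base sits at `row`."""
    return a * row + b

print(f"expected height at row 500: {expected_height(500):.0f} px")
```
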
  • For each recognized car, a scaled 3D model representing it is recorded at 16.
  • In the figure, the models are illustrated in two dimensions, in correspondence with the extracted lateral faces 14. (Note that the generic 3D model used is conformable to cars as well as vans or even buses, so the system here is rather intended to recognize any four-wheeled vehicle.)
  • During the conformation step, it is also possible to sample the image zones corresponding to the different faces of the car, and to store these image zones in the form of textures at 18. After a certain acquisition period, the system will have collected, without supervision, a multitude of 3D models representing different cars, as well as their textures. If the recognition rate offered by the initial dataset is low, it is sufficient to extend the acquisition time to reach a collection with a satisfactory number of models.
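
A minimal sketch of the texture-recording step, using plain axis-aligned crops for brevity (the box coordinates and image are stand-ins; a real system would warp each quadrilateral face of the conformed model to a rectangular texture):

```python
import numpy as np

frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)  # stand-in image

def sample_texture(frame, box):
    """Crop the image zone (x0, y0, x1, y1) covered by one face of the model."""
    x0, y0, x1, y1 = box
    return frame[y0:y1, x0:x1].copy()

textures = []                        # the collection stored at 18
side_face_box = (140, 80, 340, 170)  # from the conformed model's projection
textures.append(sample_texture(frame, side_face_box))
print(textures[0].shape)
```
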
  • The models are compared with each other at 20, and a range of variation is established for each landmark.
  • An example of a range of variation is illustrated for landmark K6.
  • The ranges of variation can define relative variations affecting the shape of the model itself, or absolute variations such as the position and orientation of the model.
  • One of the landmarks, for example K0, can serve as an absolute reference. It can be assigned ranges of absolute variation that determine the possible positions and orientations of the car in the image. These ranges are in fact not directly deducible from the recorded models, since a recorded model may come from a single instance chosen among a multitude of instances produced during the displacement of a car. The variations of position and orientation can instead be estimated from the multiple instances of a tracked car, without having to carry out a complete conformation of the generic model on each instance.
  • For the landmark diametrically opposite K0, a range of variation relative to landmark K0 can be established, which determines the length of the car.
  • For landmark K8, a range of variation relative to landmark K0 can be established, which determines the width of the car.
  • The range of variation of each of the other landmarks can be established relative to one of its adjacent landmarks.
  • Multiple cars are then synthesized at 22 from the generic model by varying the landmarks within their recorded ranges; the variations of the landmarks can be random, incremental, or a combination of both.
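
A sketch of these two steps under assumed array shapes: the recorded models are compared to obtain per-landmark bounds, and new models are drawn uniformly within those bounds (an incremental sweep would use np.linspace instead of random draws).

```python
import numpy as np

rng = np.random.default_rng(1)

# Collected conformed models: (n_models, n_landmarks, 3) landmark coordinates
# (synthetic stand-in data for the collection recorded at 16).
models = rng.normal(0.0, 1.0, (50, 10, 3)) * 0.2 + np.arange(10)[None, :, None]

lo = models.min(axis=0)   # per-landmark lower bound (step 20)
hi = models.max(axis=0)   # per-landmark upper bound

def synthesize(n):
    """Draw n new models with every landmark inside its recorded range (step 22)."""
    return rng.uniform(lo, hi, size=(n,) + lo.shape)

synthetic = synthesize(200)
print(synthetic.shape)  # (200, 10, 3)
```
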
  • Each synthesized car is projected in the image plane of the camera to form a self-annotated image instance for completing the dataset 10 of the learning system.
  • These projections also use the equations of the projection of the plane of evolution of the cars.
  • The same synthesized car can be projected at several positions and in different orientations, according to the absolute ranges of variation previously determined. In general, the orientations are correlated with the positions, so the two parameters will not be varied independently, unless one wants to detect abnormal situations, such as a car across the road.
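
A sketch of this projection step with assumed camera intrinsics: the synthesized model's nodes are projected through a pinhole camera matrix, and the resulting bounding box becomes a self-annotated training sample.

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],   # focal length and principal point: assumed
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])

def project_points(points_3d, K):
    """Pinhole projection of (n, 3) camera-frame points to (n, 2) pixels."""
    uvw = points_3d @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

# A synthesized car (16 mesh nodes) placed 20 m ahead, left of the camera axis.
car = np.random.default_rng(2).uniform(0.0, 1.0, (16, 3)) * [4.2, 1.5, 1.8]
car += np.array([-3.0, 0.5, 20.0])

px = project_points(car, K)
x0, y0 = px.min(axis=0)
x1, y1 = px.max(axis=0)
annotation = {"label": "car", "bbox": (x0, y0, x1, y1)}  # self-annotated sample
print(annotation)
```
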
  • The dataset completed by this procedure could still have gaps preventing the detection of certain car models. In this case, an automatic configuration phase can be reiterated starting from the completed dataset.
  • This dataset normally offers a recognition rate higher than the initial dataset, which will lead to the constitution of a more varied collection of models 16, making it possible to refine the ranges of variation of the parameters and to synthesize models 22 that are at once more accurate and more varied, to feed the dataset 10 again.
  • The initial dataset can be produced by simple and fast supervised learning.
  • An operator views images of the filmed scene and, using a graphical interface, annotates the image areas corresponding to instances of the objects to be recognized. Since the subsequent configuration procedure is based on the morphological variations of the generic model, the operator may take care to annotate the objects exhibiting the most significant variations. The operator can thus annotate at least two objects whose morphologies are at opposite ends of a range of variation that he has visually observed.
  • The interface can be designed to establish the equations of the projection of the plane of evolution with the assistance of the operator. The interface can then propose that the operator manually conform the generic model to image areas, providing both an annotation and the creation of the first models of the collection 16.
  • This annotation phase is summary and rapid, the objective being to obtain a restricted initial dataset allowing the start of the automatic configuration phase that will complete the dataset.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Computer Graphics (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
EP17811644.8A 2016-12-14 2017-11-21 Objekterkennungssystem basierend auf einem adaptiven generischen 3d-modell Withdrawn EP3555802A1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR1662455A FR3060170B1 (fr) 2016-12-14 2016-12-14 Systeme de reconnaissance d'objets base sur un modele generique 3d adaptatif
PCT/FR2017/053191 WO2018109298A1 (fr) 2016-12-14 2017-11-21 Système de reconnaissance d'objets basé sur un modèle générique 3d adaptatif

Publications (1)

Publication Number Publication Date
EP3555802A1 (de) 2019-10-23

Family

ID=58501514

Family Applications (1)

Application Number Title Priority Date Filing Date
EP17811644.8A Withdrawn EP3555802A1 (de) 2016-12-14 2017-11-21 Objekterkennungssystem basierend auf einem adaptiven generischen 3d-modell

Country Status (9)

Country Link
US (1) US11036963B2 (de)
EP (1) EP3555802A1 (de)
JP (1) JP7101676B2 (de)
KR (1) KR102523941B1 (de)
CN (1) CN110199293A (de)
CA (1) CA3046312A1 (de)
FR (1) FR3060170B1 (de)
IL (1) IL267181B2 (de)
WO (1) WO2018109298A1 (de)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3060170B1 (fr) * 2016-12-14 2019-05-24 Smart Me Up Systeme de reconnaissance d'objets base sur un modele generique 3d adaptatif
US11462023B2 (en) 2019-11-14 2022-10-04 Toyota Research Institute, Inc. Systems and methods for 3D object detection
FR3104054B1 (fr) * 2019-12-10 2022-03-25 Capsix Dispositif de definition d’une sequence de deplacements sur un modele generique
US11736748B2 (en) * 2020-12-16 2023-08-22 Tencent America LLC Reference of neural network model for adaptation of 2D video for streaming to heterogeneous client end-points
KR20230053262A (ko) 2021-10-14 2023-04-21 주식회사 인피닉 2d 현실공간 이미지를 기반의 3d 객체 인식 및 변환 방법과 이를 실행하기 위하여 기록매체에 기록된 컴퓨터 프로그램

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5397985A (en) * 1993-02-09 1995-03-14 Mobil Oil Corporation Method for the imaging of casing morphology by twice integrating magnetic flux density signals
DE10252298B3 (de) * 2002-11-11 2004-08-19 Mehl, Albert, Prof. Dr. Dr. Verfahren zur Herstellung von Zahnersatzteilen oder Zahnrestaurationen unter Verwendung elektronischer Zahndarstellungen
JP2006520054A (ja) * 2003-03-06 2006-08-31 アニメトリクス,インク. 不変視点からの画像照合および2次元画像からの3次元モデルの生成
JP4501937B2 (ja) * 2004-11-12 2010-07-14 オムロン株式会社 顔特徴点検出装置、特徴点検出装置
JP4653606B2 (ja) * 2005-05-23 2011-03-16 株式会社東芝 画像認識装置、方法およびプログラム
JP2007026400A (ja) * 2005-07-15 2007-02-01 Asahi Engineering Kk 可視光を用いた照度差の激しい場所での物体検出・認識システム、及びコンピュータプログラム
JP4991317B2 (ja) * 2006-02-06 2012-08-01 株式会社東芝 顔特徴点検出装置及びその方法
JP4585471B2 (ja) * 2006-03-07 2010-11-24 株式会社東芝 特徴点検出装置及びその方法
JP4093273B2 (ja) * 2006-03-13 2008-06-04 オムロン株式会社 特徴点検出装置、特徴点検出方法および特徴点検出プログラム
JP4241763B2 (ja) * 2006-05-29 2009-03-18 株式会社東芝 人物認識装置及びその方法
JP4829141B2 (ja) * 2007-02-09 2011-12-07 株式会社東芝 視線検出装置及びその方法
US7872653B2 (en) * 2007-06-18 2011-01-18 Microsoft Corporation Mesh puppetry
US20100123714A1 (en) * 2008-11-14 2010-05-20 General Electric Company Methods and apparatus for combined 4d presentation of quantitative regional parameters on surface rendering
JP5361524B2 (ja) * 2009-05-11 2013-12-04 キヤノン株式会社 パターン認識システム及びパターン認識方法
EP2333692A1 (de) * 2009-12-11 2011-06-15 Alcatel Lucent Verfahren und Anordnung für verbessertes Bild-Matching
KR101697184B1 (ko) * 2010-04-20 2017-01-17 삼성전자주식회사 메쉬 생성 장치 및 그 방법, 그리고, 영상 처리 장치 및 그 방법
KR101681538B1 (ko) * 2010-10-20 2016-12-01 삼성전자주식회사 영상 처리 장치 및 방법
JP6026119B2 (ja) * 2012-03-19 2016-11-16 株式会社東芝 生体情報処理装置
GB2515510B (en) * 2013-06-25 2019-12-25 Synopsys Inc Image processing method
US9299195B2 (en) * 2014-03-25 2016-03-29 Cisco Technology, Inc. Scanning and tracking dynamic objects with depth cameras
CN106133756B (zh) * 2014-03-27 2019-07-12 赫尔实验室有限公司 过滤、分割和识别对象的系统、方法及非暂时性计算机可读介质
FR3021443B1 (fr) * 2014-05-20 2017-10-13 Essilor Int Procede de construction d'un modele du visage d'un individu, procede et dispositif d'analyse de posture utilisant un tel modele
CN104182765B (zh) * 2014-08-21 2017-03-22 南京大学 一种互联网图像驱动的三维模型最优视图自动选择方法
US10559111B2 (en) * 2016-06-23 2020-02-11 LoomAi, Inc. Systems and methods for generating computer ready animation models of a human head from captured data images
FR3060170B1 (fr) * 2016-12-14 2019-05-24 Smart Me Up Systeme de reconnaissance d'objets base sur un modele generique 3d adaptatif

Also Published As

Publication number Publication date
IL267181B (en) 2022-11-01
KR20190095359A (ko) 2019-08-14
FR3060170A1 (fr) 2018-06-15
CA3046312A1 (fr) 2018-06-21
WO2018109298A1 (fr) 2018-06-21
JP2020502661A (ja) 2020-01-23
FR3060170B1 (fr) 2019-05-24
IL267181A (en) 2019-08-29
US11036963B2 (en) 2021-06-15
JP7101676B2 (ja) 2022-07-15
US20190354745A1 (en) 2019-11-21
KR102523941B1 (ko) 2023-04-20
CN110199293A (zh) 2019-09-03
IL267181B2 (en) 2023-03-01


Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20190619

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20210621

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20211103