EP3555802A1 - Object recognition system based on an adaptive 3d generic model - Google Patents
Object recognition system based on an adaptive 3D generic model
- Publication number
- EP3555802A1 (application EP17811644.8A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- objects
- generic
- model
- images
- class
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/2148—Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/28—Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/772—Determining representative reference patterns, e.g. averaging or distorting patterns; Generating dictionaries
Definitions
- The invention relates to systems for recognizing moving objects, including systems based on machine learning.
- the isolation and tracking of a moving object in a sequence of images can be performed by relatively unsophisticated generic algorithms based, for example, on background subtraction.
- It is more difficult to classify the objects thus isolated into the categories one wishes to detect, that is to say, to recognize whether the object is a person, a car, a bicycle, an animal, etc.
- the objects can have a great variety of morphologies in the images of the sequence (position, size, orientation, distortion, texture, configuration of possible appendages and articulated elements, etc.).
- The morphologies also depend on the viewing angle and the lens of the camera filming the scene to be monitored. Sometimes one also wishes to recognize subclasses (car model, gender of a person).
- To classify and detect objects, a machine learning system is generally used. Classification is then based on a knowledge base, or data set, developed by learning.
- An initial data set is generally generated during a so-called supervised learning phase, where an operator views sequences of images produced in context and manually annotates the image areas corresponding to the objects to be recognized. This phase is generally long and tedious, because one seeks ideally to capture all the possible variants of the morphology of the objects of the class, at least enough variants to obtain a satisfactory recognition rate.
- a characteristic of these techniques is that they generate many synthesized images that, although they conform to the parameters and constraints of the 3D models, have improbable morphologies. This clutters the dataset with unnecessary images and can slow recognition.
- A method of automatically configuring a recognition system for a class of variable-morphology objects, comprising the steps of: providing a machine learning system with an initial data set sufficient to recognize image instances of objects of the class in a sequence of images of a target scene; providing a generic three-dimensional model specific to the class of objects, whose morphology can be defined by a set of parameters; acquiring a sequence of images of the scene using a camera; recognizing image instances of objects of the class in the acquired image sequence using the initial data set; conforming the generic three-dimensional model to the recognized image instances; recording the ranges of variation of the parameters resulting from the conformations of the generic model; synthesizing multiple three-dimensional objects from the generic model by varying the parameters within the recorded ranges of variation; and completing the data set of the learning system with projections of the synthesized objects in the plane of the images.
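To make the sequence of steps concrete, here is a minimal sketch of the configuration loop in Python. All names (`train_detector`, the `fit`, `instantiate` and `project` methods, `dataset.add`) are hypothetical placeholders standing in for the components described above, not an API defined by this document.

```python
import random

def auto_configure(dataset, generic_model, camera, images, n_synthetic=1000):
    """Hypothetical sketch of the automatic configuration method above."""
    detector = train_detector(dataset)  # machine learning system (placeholder)
    ranges = {}                         # recorded ranges of variation
    for image in images:                                  # acquired sequence
        for instance in detector.recognize(image):        # recognize instances
            params = generic_model.fit(instance, camera)  # conform the 3D model
            for name, value in params.items():            # record the ranges
                lo, hi = ranges.get(name, (value, value))
                ranges[name] = (min(lo, value), max(hi, value))
    for _ in range(n_synthetic):                          # synthesize objects
        sample = {n: random.uniform(lo, hi) for n, (lo, hi) in ranges.items()}
        dataset.add(camera.project(generic_model.instantiate(sample)))
    return dataset
```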
- The method may comprise the following steps: defining the parameters of the generic three-dimensional model by the relative positions of landmarks of a mesh of the model, the positions of the other nodes of the mesh being bound to the landmarks by constraints; and performing conformations of the generic three-dimensional model by positioning the landmarks of a projection of the model in the plane of the images.
- The method may further include the steps of: registering textures from areas of the recognized image instances; and applying to each synthesized object a texture chosen from among the recorded textures.
- the initial data set of the learning system can be obtained by supervised learning involving at least two objects of the class whose morphologies are at opposite ends of a domain of observed variation of the morphologies.
- Figure 1 shows a schematic three-dimensional generic model of an object, projected in different positions of a scene seen by a camera
- FIG. 2 schematically illustrates a configuration phase of a machine learning system for recognizing objects according to the generic model of FIG. 1.
- Figure 1 illustrates a simplified generic model of an object, for example a car, projected onto an image in different positions of an example of a scene monitored by a fixed camera.
- The scene here is, for simplicity's sake, a street crossing the field of view of the camera horizontally.
- In the background, the model is projected in three aligned positions: in the center and near the left and right edges of the image. In the foreground, the model is projected in a position slightly to the left. All these projections are derived from the same model, with the same dimensions, and show how the morphology of the projections in the image varies as a function of position in the scene. In a more complex scene, for example a curved street, variations of morphology would also be seen depending on the orientation of the model.
- the variations of morphology as a function of the position are defined by the projection of the plane on which the objects evolve, here the street.
- The projection of the plane of evolution is defined by equations that depend on the characteristics of the camera (angle of view, focal length and distortion of the lens). Edges perpendicular to the camera axis change size homothetically as a function of distance from the camera, and edges parallel to the camera axis follow receding lines toward a vanishing point.
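As an illustration of this homothetic scaling, the following sketch projects points with a simple pinhole model; the focal length `f` and principal point `(cx, cy)` are assumed for the example, not taken from the document. An edge perpendicular to the camera axis at twice the distance appears half as long.

```python
import numpy as np

def project_points(points_3d, f, cx, cy):
    """Pinhole projection of Nx3 camera-frame points to pixel coordinates.

    The apparent size of an edge perpendicular to the camera axis scales
    homothetically as f / Z, where Z is the distance from the camera."""
    X, Y, Z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    return np.stack([f * X / Z + cx, f * Y / Z + cy], axis=1)

# Example: the same 2 m edge, seen at 10 m and then at 20 m,
# appears half as long in the image.
edge = np.array([[0.0, 0.0, 10.0], [2.0, 0.0, 10.0]])
near = project_points(edge, f=1000, cx=640, cy=360)
far = project_points(edge + [0, 0, 10.0], f=1000, cx=640, cy=360)
print(near[1, 0] - near[0, 0], far[1, 0] - far[0, 0])  # 200.0 100.0
```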
- the projections of the same object in the image at different positions or orientations have a variable morphology, even if the real object has a fixed morphology.
- real objects can also have a variable morphology, whether from one object to another (between two cars of different models), or during the movement of the same object (pedestrian).
- Learning systems are well suited to this situation when they have been configured with enough data to represent the range of most likely projected morphologies.
- The envisaged three-dimensional generic model, for example of the Point Distribution Model (PDM) type, may comprise a mesh of nodes linked to each other by constraints, that is to say parameters that establish the relative displacements between adjacent nodes, or the deformations of the mesh caused by the displacements of certain nodes, known as landmarks.
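A minimal sketch of such a landmark-driven mesh, assuming simple linear constraints (interpolation weights) tying the dependent nodes to the landmarks; the coordinates and weights are illustrative only.

```python
import numpy as np

# A few landmark nodes are the free parameters of the model; the remaining
# nodes are bound to them by fixed linear constraints.
landmarks = np.array([[0.0, 0.0, 0.0],   # K0-like corner (absolute reference)
                      [4.0, 0.0, 0.0],   # opposite corner on the same face
                      [0.0, 0.0, 1.8]])  # corner on the other side face

# Each dependent node is a weighted combination of the landmark positions.
weights = np.array([[0.5, 0.5, 0.0],     # node midway along the side face
                    [0.5, 0.0, 0.5]])    # node midway across the width
dependent_nodes = weights @ landmarks
print(dependent_nodes)
```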
- The landmarks are chosen so that their displacements make it possible to reach all the desired morphologies of the model within the defined constraints.
- A simplified generic car model may include, for the bodywork, a mesh of 16 nodes defining 10 rectangular surfaces and having 10 landmarks.
- Eight landmarks K0 to K7 define one of the side faces of the car, and the two remaining landmarks K8 and K9, on the other side face, define the width of the car.
- A single landmark would be enough to define the width of the car, but the presence of two or more landmarks makes it possible to conform the model to a projection of a real object while taking into account the deformations of the projection.
- The wheels are a characteristic element of a car and can be assigned a specific set of landmarks, not shown, defining the wheelbase, the diameter, and the points of contact with the road.
- The illustrated generic 3D model is deliberately simplistic, for clarity of presentation.
- In practice, the model will include a finer mesh to define edges and curved surfaces.
- FIG. 2 schematically illustrates a configuration phase of a machine learning system for recognizing cars according to the generic model of FIG. 1, by way of example.
- The learning system comprises a data set 10 associated with a camera 12 installed to film a scene to be monitored, for example that of figure 1.
- The configuration phase can start from an existing data set, which may be rudimentary and offer only a low recognition rate.
- This existing dataset may have been produced by a quick and uncomplicated supervised learning. The following steps are used to complete the dataset to achieve a satisfactory recognition rate.
- The recognition system is started and begins recognizing and tracking cars in successive images captured by the camera.
- An image instance of each recognized car is extracted at 14.
- the camera typically produces multiple images that each contain an instance of the same car at different positions. One can choose the largest instance, which will have the best resolution for subsequent operations.
- The generic 3D model is conformed to each image instance thus extracted.
- This can be done by a conventional conformation ("fitting") algorithm which seeks, for example, the best matches between the image and the landmarks of the model as projected in the plane of the image.
- One can use algorithms based on landmark detection as described, for example, in "One Millisecond Face Alignment with an Ensemble of Regression Trees", Vahid Kazemi and Josephine Sullivan, IEEE CVPR 2014.
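Under simplifying assumptions, the conformation step can be illustrated as a least-squares adjustment of the model parameters so that the projected landmarks match 2D points detected in the image instance. The toy model below (two parameters, four landmarks, fixed camera depth) is an illustration only, not the method of the cited paper, which concerns the detection of the 2D landmark positions with regression trees.

```python
import numpy as np
from scipy.optimize import least_squares

def morph_landmarks(params):
    """Toy generic model: (length, height) -> four side-face landmarks in 3D."""
    length, height = params
    return np.array([[0, 0, 0], [length, 0, 0],
                     [length, 0, height], [0, 0, height]], float)

def project(points_3d, f=1000.0, z=10.0):
    """Pinhole projection at an assumed, fixed camera depth."""
    return f * points_3d[:, :2] / (points_3d[:, 2] + z)[:, None]

# Stand-in for landmark positions detected in the image instance.
detected_2d = project(morph_landmarks([4.2, 1.5]))

def residuals(params):
    """Reprojection error between projected model landmarks and detections."""
    return (project(morph_landmarks(params)) - detected_2d).ravel()

fit = least_squares(residuals, x0=[3.0, 1.0])
print(fit.x)  # recovers approximately [4.2, 1.5]
```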
- It is preferable that other faces of the cars are visible in the instances so that the model can be completely defined.
- conformation operations produce 3D models that are supposed to be scaled to real objects.
- the conformation operations can use the equations of the projection of the plane of evolution of the object. These equations can be determined manually from the camera characteristics and the configuration of the scene, or estimated by the system in a calibration phase using, if necessary, adapted tools such as depth cameras. Knowing that objects evolve on a plane, the equations can be deduced from the variation of the size according to the position of the instances of a tracked object.
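One simple way to exploit tracked instances for this estimation is sketched below, under the assumption that the apparent size of an object varies roughly linearly with the image row of its base, which holds approximately for a camera looking obliquely at a ground plane. The observation values are invented for the example.

```python
import numpy as np

# (image row of the object's base, apparent height in pixels), collected
# by tracking the same object across several positions in the scene.
observations = np.array([[200, 30], [300, 45], [400, 62], [500, 78]], float)
rows, heights = observations[:, 0], observations[:, 1]

# Fit the size-versus-position relation by linear regression.
slope, intercept = np.polyfit(rows, heights, 1)

def expected_height(row):
    """Predicted apparent size at a given image row on the evolution plane."""
    return slope * row + intercept

print(expected_height(350))  # roughly 54 px for these observations
```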
- Each conformation thus produces, at 16, a scaled 3D model representing the recognized car.
- the models are illustrated in two dimensions, in correspondence with the extracted lateral faces 14. (Note that the generic 3D model used is conformable to cars as well as vans or even buses, so the system is here rather intended to recognize any four-wheeled vehicle.)
- During the conformation step, it is also possible to sample the image zones corresponding to the different faces of the car, and to store these image zones in the form of textures at 18. After a certain acquisition period, the system will have collected, without supervision, a multitude of 3D models representing different cars, as well as their textures. If the recognition rate offered by the initial data set is low, it suffices to extend the acquisition time to reach a collection with a satisfactory number of models.
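The texture-sampling step at 18 can be sketched with standard perspective rectification, for example with OpenCV; the frame and corner coordinates below are illustrative stand-ins for the projected landmark positions of one face.

```python
import cv2
import numpy as np

image = np.zeros((720, 1280, 3), np.uint8)          # stand-in camera frame
face_corners = np.float32([[300, 400], [700, 380],  # projected corners of a
                           [700, 520], [300, 560]]) # recognized side face
tex_w, tex_h = 256, 96
target = np.float32([[0, 0], [tex_w, 0], [tex_w, tex_h], [0, tex_h]])

# Rectify the image zone of the face into a rectangular texture.
H = cv2.getPerspectiveTransform(face_corners, target)
texture = cv2.warpPerspective(image, H, (tex_w, tex_h))  # stored at 18
```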
- The models are compared with each other at 20, and a range of variation is established for each landmark.
- An example of a range of variation for landmark K6 is illustrated.
- the ranges of variation can define relative variations affecting the shape of the model itself, or absolute variations such as the position and orientation of the model.
- One of the landmarks, for example K0, can serve as an absolute reference. It can be assigned ranges of absolute variation that determine the possible positions and orientations of the car in the image. These ranges are in fact not directly deducible from the registered models, since a registered model can come from a single instance chosen among a multitude of instances produced during the displacement of a car. The variations of position and orientation can be estimated by deducing them from the multiple instances of a tracked car, without having to carry out a complete conformation of the generic model on each instance.
- For the landmark diametrically opposed to K0, for example K6, a range of variation relative to K0 can be established, which determines the length of the car.
- For landmark K8, a range of variation relative to K0 can be established, which determines the width of the car.
- The range of variation of each of the other landmarks can be established relative to one of its adjacent landmarks.
- The variations of the landmarks can be random, incremental, or a combination of both.
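A short sketch of both sampling strategies over recorded ranges; the parameter names and bounds are illustrative.

```python
import itertools
import random

# Recorded ranges of variation (illustrative values, in meters).
ranges = {"length": (3.5, 5.2), "width": (1.6, 2.0), "height": (1.3, 1.9)}

def random_samples(n):
    """Random variation: draw each parameter uniformly within its range."""
    return [{k: random.uniform(lo, hi) for k, (lo, hi) in ranges.items()}
            for _ in range(n)]

def grid_samples(steps=3):
    """Incremental variation: sweep each parameter on a regular grid."""
    axes = {k: [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
            for k, (lo, hi) in ranges.items()}
    return [dict(zip(axes, combo))
            for combo in itertools.product(*axes.values())]

# A combination of both strategies.
params_list = random_samples(100) + grid_samples()
```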
- Each synthesized car is projected into the camera image plane to form a self-annotated image instance completing the data set 10 of the learning system.
- These projections also use the equations of the projection of the plane of evolution of the cars.
- The same synthesized car can be projected in several positions and different orientations, according to the absolute ranges of variation previously determined. In general, the orientations are correlated with the positions, so the two parameters are not varied independently, unless one wishes to detect abnormal situations, such as a car across the road.
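Finally, projecting a synthesized object and deriving its annotation can be sketched as follows; the camera matrix K and the object's pose are assumed for the example, and the annotation is reduced here to a class label and a bounding box.

```python
import numpy as np

# Illustrative intrinsic camera matrix (focal length and principal point).
K = np.array([[1000, 0, 640], [0, 1000, 360], [0, 0, 1]], float)

def self_annotate(vertices_3d, label="car"):
    """Project camera-frame vertices and return a self-annotation."""
    uvw = (K @ vertices_3d.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]          # perspective division
    x0, y0 = uv.min(axis=0)
    x1, y1 = uv.max(axis=0)
    return {"label": label, "bbox": (x0, y0, x1, y1)}

# Eight corners of a synthesized car's bounding volume, 10 m from the camera.
box_3d = np.array([[x, y, z] for x in (0, 4.2)
                             for y in (0, 1.5)
                             for z in (10, 11.8)], float)
print(self_annotate(box_3d))
```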
- the dataset complemented by this procedure could still have gaps preventing the detection of certain car models. In this case, one can reiterate an automatic configuration phase starting from the completed dataset.
- This data set normally offers a recognition rate higher than the initial set, which leads to the constitution of a more varied collection of models 16, making it possible to refine the ranges of variation of the parameters and to synthesize models 22 that are both more accurate and more varied, to feed the data set 10 again.
- the initial dataset can be produced by simple and fast supervised learning.
- An operator views images of the filmed scene and, using a graphical interface, annotates the image areas corresponding to instances of the objects to be recognized. Since the subsequent configuration procedure is based on the morphological variations of the generic model, the operator may wish to annotate the objects exhibiting the largest variations. The operator can thus annotate at least two objects whose morphologies are at opposite ends of a visually observed range of variation.
- The interface can be designed to establish the equations of the projection of the plane of evolution with the assistance of the operator. The interface can then invite the operator to manually conform the generic model to image areas, providing both an annotation and the creation of the first models in the collection 16.
- This annotation phase is rudimentary and rapid, the objective being to obtain a restricted initial data set allowing the start of the automatic configuration phase that will complete the data set.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR1662455A FR3060170B1 (en) | 2016-12-14 | 2016-12-14 | OBJECT RECOGNITION SYSTEM BASED ON AN ADAPTIVE 3D GENERIC MODEL |
PCT/FR2017/053191 WO2018109298A1 (en) | 2016-12-14 | 2017-11-21 | Object recognition system based on an adaptive 3d generic model |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3555802A1 true EP3555802A1 (en) | 2019-10-23 |
Family
ID=58501514
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP17811644.8A Withdrawn EP3555802A1 (en) | 2016-12-14 | 2017-11-21 | Object recognition system based on an adaptive 3d generic model |
Country Status (9)
Country | Link |
---|---|
US (1) | US11036963B2 (en) |
EP (1) | EP3555802A1 (en) |
JP (1) | JP7101676B2 (en) |
KR (1) | KR102523941B1 (en) |
CN (1) | CN110199293A (en) |
CA (1) | CA3046312A1 (en) |
FR (1) | FR3060170B1 (en) |
IL (1) | IL267181B2 (en) |
WO (1) | WO2018109298A1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR3060170B1 (en) * | 2016-12-14 | 2019-05-24 | Smart Me Up | OBJECT RECOGNITION SYSTEM BASED ON AN ADAPTIVE 3D GENERIC MODEL |
US11462023B2 (en) | 2019-11-14 | 2022-10-04 | Toyota Research Institute, Inc. | Systems and methods for 3D object detection |
US11736748B2 (en) * | 2020-12-16 | 2023-08-22 | Tencent America LLC | Reference of neural network model for adaptation of 2D video for streaming to heterogeneous client end-points |
KR20230053262A (en) | 2021-10-14 | 2023-04-21 | 주식회사 인피닉 | A 3D object recognition method based on a 2D real space image and a computer program recorded on a recording medium to execute the same |
Family Cites Families (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5397985A (en) * | 1993-02-09 | 1995-03-14 | Mobil Oil Corporation | Method for the imaging of casing morphology by twice integrating magnetic flux density signals |
DE10252298B3 (en) * | 2002-11-11 | 2004-08-19 | Mehl, Albert, Prof. Dr. Dr. | Process for the production of tooth replacement parts or tooth restorations using electronic tooth representations |
US7853085B2 (en) | 2003-03-06 | 2010-12-14 | Animetrics, Inc. | Viewpoint-invariant detection and identification of a three-dimensional object from two-dimensional imagery |
EP1811456B1 (en) * | 2004-11-12 | 2011-09-28 | Omron Corporation | Face feature point detector and feature point detector |
JP4653606B2 (en) * | 2005-05-23 | 2011-03-16 | 株式会社東芝 | Image recognition apparatus, method and program |
JP2007026400A (en) | 2005-07-15 | 2007-02-01 | Asahi Engineering Kk | Object detection/recognition system at place with sharp difference in illuminance using visible light and computer program |
JP4991317B2 (en) * | 2006-02-06 | 2012-08-01 | 株式会社東芝 | Facial feature point detection apparatus and method |
JP4585471B2 (en) * | 2006-03-07 | 2010-11-24 | 株式会社東芝 | Feature point detection apparatus and method |
JP4093273B2 (en) * | 2006-03-13 | 2008-06-04 | オムロン株式会社 | Feature point detection apparatus, feature point detection method, and feature point detection program |
JP4241763B2 (en) * | 2006-05-29 | 2009-03-18 | 株式会社東芝 | Person recognition apparatus and method |
JP4829141B2 (en) * | 2007-02-09 | 2011-12-07 | 株式会社東芝 | Gaze detection apparatus and method |
US7872653B2 (en) * | 2007-06-18 | 2011-01-18 | Microsoft Corporation | Mesh puppetry |
US20100123714A1 (en) * | 2008-11-14 | 2010-05-20 | General Electric Company | Methods and apparatus for combined 4d presentation of quantitative regional parameters on surface rendering |
JP5361524B2 (en) * | 2009-05-11 | 2013-12-04 | キヤノン株式会社 | Pattern recognition system and pattern recognition method |
EP2333692A1 (en) * | 2009-12-11 | 2011-06-15 | Alcatel Lucent | Method and arrangement for improved image matching |
KR101697184B1 (en) * | 2010-04-20 | 2017-01-17 | 삼성전자주식회사 | Apparatus and Method for generating mesh, and apparatus and method for processing image |
KR101681538B1 (en) * | 2010-10-20 | 2016-12-01 | 삼성전자주식회사 | Image processing apparatus and method |
JP6026119B2 (en) * | 2012-03-19 | 2016-11-16 | 株式会社東芝 | Biological information processing device |
GB2515510B (en) * | 2013-06-25 | 2019-12-25 | Synopsys Inc | Image processing method |
US9299195B2 (en) * | 2014-03-25 | 2016-03-29 | Cisco Technology, Inc. | Scanning and tracking dynamic objects with depth cameras |
US9633483B1 (en) * | 2014-03-27 | 2017-04-25 | Hrl Laboratories, Llc | System for filtering, segmenting and recognizing objects in unconstrained environments |
FR3021443B1 (en) * | 2014-05-20 | 2017-10-13 | Essilor Int | METHOD FOR CONSTRUCTING A MODEL OF THE FACE OF AN INDIVIDUAL, METHOD AND DEVICE FOR ANALYZING POSTURE USING SUCH A MODEL |
CN104182765B (en) * | 2014-08-21 | 2017-03-22 | 南京大学 | Internet image driven automatic selection method of optimal view of three-dimensional model |
US10559111B2 (en) * | 2016-06-23 | 2020-02-11 | LoomAi, Inc. | Systems and methods for generating computer ready animation models of a human head from captured data images |
FR3060170B1 (en) * | 2016-12-14 | 2019-05-24 | Smart Me Up | OBJECT RECOGNITION SYSTEM BASED ON AN ADAPTIVE 3D GENERIC MODEL |
- 2016
- 2016-12-14 FR FR1662455A patent/FR3060170B1/en active Active
- 2017
- 2017-11-21 CN CN201780076996.6A patent/CN110199293A/en active Pending
- 2017-11-21 US US16/469,561 patent/US11036963B2/en active Active
- 2017-11-21 EP EP17811644.8A patent/EP3555802A1/en not_active Withdrawn
- 2017-11-21 CA CA3046312A patent/CA3046312A1/en active Pending
- 2017-11-21 IL IL267181A patent/IL267181B2/en unknown
- 2017-11-21 KR KR1020197020073A patent/KR102523941B1/en active IP Right Grant
- 2017-11-21 WO PCT/FR2017/053191 patent/WO2018109298A1/en unknown
- 2017-11-21 JP JP2019531411A patent/JP7101676B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN110199293A (en) | 2019-09-03 |
IL267181B (en) | 2022-11-01 |
IL267181A (en) | 2019-08-29 |
FR3060170B1 (en) | 2019-05-24 |
US20190354745A1 (en) | 2019-11-21 |
IL267181B2 (en) | 2023-03-01 |
KR102523941B1 (en) | 2023-04-20 |
WO2018109298A1 (en) | 2018-06-21 |
FR3060170A1 (en) | 2018-06-15 |
CA3046312A1 (en) | 2018-06-21 |
JP7101676B2 (en) | 2022-07-15 |
KR20190095359A (en) | 2019-08-14 |
JP2020502661A (en) | 2020-01-23 |
US11036963B2 (en) | 2021-06-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3555802A1 (en) | Object recognition system based on an adaptive 3d generic model | |
FR2998401A1 (en) | METHOD FOR 3D RECONSTRUCTION AND PANORAMIC 3D MOSAICKING OF A SCENE | |
CA2701698A1 (en) | Method for synchronising video streams | |
EP3614306B1 (en) | Method for facial localisation and identification and pose determination, from a three-dimensional view | |
WO2012007382A1 (en) | Method for detecting a target in stereoscopic images by learning and statistical classification on the basis of a probability law | |
WO2005010820A2 (en) | Automated method and device for perception associated with determination and characterisation of borders and boundaries of an object of a space, contouring and applications | |
FR3025918A1 (en) | METHOD AND SYSTEM FOR AUTOMATED MODELING OF A PART | |
FR3002673A1 (en) | METHOD AND DEVICE FOR THREE-DIMENSIONAL IMAGING OF A PARTIAL REGION OF THE ENVIRONMENT OF A VEHICLE | |
WO2019166743A1 (en) | 3d scene modelling system by multi-view photogrammetry | |
WO2012117210A1 (en) | Method and system for estimating a similarity between two binary images | |
EP3145405A1 (en) | Method of determining at least one behavioural parameter | |
FR3083352A1 (en) | METHOD AND DEVICE FOR FAST DETECTION OF REPETITIVE STRUCTURES IN THE IMAGE OF A ROAD SCENE | |
FR3033913A1 (en) | METHOD AND SYSTEM FOR RECOGNIZING OBJECTS BY ANALYZING DIGITAL IMAGE SIGNALS OF A SCENE | |
EP3504683B1 (en) | Method for determining the placement of the head of a vehicle driver | |
WO2018206331A1 (en) | Method for calibrating a device for monitoring a driver in a vehicle | |
FR3039919A1 (en) | TRACKING A TARGET IN A CAMERAS NETWORK | |
WO2018011498A1 (en) | Method and system for locating and reconstructing in real time the posture of a moving object using embedded sensors | |
WO2018015654A1 (en) | Method and device for aiding the navigation of a vehicule | |
WO2014053437A1 (en) | Method for counting people for a stereoscopic appliance and corresponding stereoscopic appliance for counting people | |
FR3015099A1 (en) | SYSTEM AND DEVICE FOR ASSISTING AUTOMATED DETECTION AND MONITORING OF MOVING OBJECTS OR PEOPLE | |
FR3138951A1 (en) | System and method for aiding the navigation of a mobile system by means of a model for predicting terrain traversability by the mobile system | |
EP1958157A1 (en) | Method of bringing steoreoscopic images into correspondence | |
FR3127295A1 (en) | METHOD FOR AUTOMATIC IDENTIFICATION OF TARGET(S) WITHIN IMAGE(S) PROVIDED BY A SYNTHETIC APERTURE RADAR | |
FR3112401A1 (en) | Method and system for stereoscopic vision of a celestial observation scene | |
WO2000004503A1 (en) | Method for modelling three-dimensional objects or scenes |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: UNKNOWN |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
| 17P | Request for examination filed | Effective date: 20190619 |
| AK | Designated contracting states | Kind code of ref document: A1. Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| AX | Request for extension of the european patent | Extension state: BA ME |
| DAV | Request for validation of the european patent (deleted) | |
| DAX | Request for extension of the european patent (deleted) | |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: EXAMINATION IS IN PROGRESS |
| 17Q | First examination report despatched | Effective date: 20210621 |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
| 18D | Application deemed to be withdrawn | Effective date: 20211103 |