EP2335220A2 - Method for distributed and minimum-support point matching in two or more images of a 3D scene taken with a video or stereo camera - Google Patents

Method for distributed and minimum-support point matching in two or more images of a 3D scene taken with a video or stereo camera

Info

Publication number
EP2335220A2
Authority
EP
European Patent Office
Prior art keywords
tuples
points
image
images
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP09786002A
Other languages
English (en)
French (fr)
Inventor
Sergei Startchik
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Publication of EP2335220A2
Legal status: Withdrawn

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images

Definitions

  • The present invention relates to a method of high-density point registration between images.
  • Registration between images taken by a stereo camera, or between images from a sequence taken by a video camera, is a critical and unavoidable step in applications such as 3D scene reconstruction, photogrammetry, scene capture, augmented reality, depth computation, navigation and security surveillance.
  • The registration establishes a correspondence between points of two or more images, where the points correspond to the same object points in the observed 3D scene. Once such a registration is in place, it is used in the applications mentioned above to infer information important for the application at hand.
  • The 3D information, or depth, of the scene represents the main interest. It is further interpreted to compute information such as the distance to an obstacle, the height of a person, etc.
  • 3D reconstruction with metric information is targeted at measuring precise object size. Higher precision in depth estimation leads to higher precision in navigation or object size estimation.
  • The reconstruction can be summarized by the scheme in Fig. 19. Two images, the Left image (1) and the Right image (2), taken from different viewpoints, are used to reconstruct a 3D surface corresponding to the observed scene. In scene capture for movies, that reconstruction is covered with texture taken from the images.
  • This patent focuses on a method for high spatial precision point registration in images containing views of rigid and other objects that can move and occlude each other under changing illumination conditions. The method is also well suited to implementation on a graphics processing unit or another device or physical material that can perform massively parallel tasks. Registration is a match of spatial locations between two or more images. The registration methods used in the applications described above differ from one another in the type of spatial elements used for matching (points, lines, raw pixels) and in the constraints they use to reduce the possible number of matches between locations.
  • An Initial matching step (80) is used to estimate the general scene motion and calibrate the optical system of the camera. It is applied in the absence of any information about the scene and thus relies on the assumption that the majority of the image shows one rigid surface (the scene background). Very stable points (e.g. corners) are found in both images. The match between groups of such points allows inferring the relative position between the left and right images and computing the parameters of the optical system (i.e. calibrating it) in the step Calibration of camera (81). Once the system is calibrated, in the next step Epipolar geometry (82), a so-called epipolar geometry between the two images, Left image (1) and Right image (2), is built as shown in Fig. 19.
  • The Advanced matching step (84) allows matching every point in the Left image (1) to a line in the Right image (2) on which the corresponding point is constrained to lie.
  • More constraints are required for this, and they are applied in the Advanced matching step (84).
  • The ordering constraint, the continuity (or smoothness) constraint, which assume that the surfaces are smooth and that some ordering between points is preserved, and the chromatic constraint are used.
  • A medium level of occlusion rapidly deteriorates the performance of these constraints, and the precision of matching degrades significantly.
  • Previous methods rely on constraints (such as the epipolar constraint) that in turn rely on assumptions about the scene (for example, that no more than 30% of the scene moves) and on local continuity. Another drawback of these methods is that they work with only one line at a time, so no information on the relative position between lines is taken into account.
  • Finally, 3D Reconstruction (85) or Augmented Reality (86) is performed.
  • The present application describes a method for the automatic matching of point tuples in two or more images taken with a stereo camera or a video camera, comprising the steps of sampling two images with curves exhibiting geometric invariant properties and of finding point tuples, along those curves, that exhibit representative geometric and illumination invariant properties.
  • Such point tuples preferably correspond to groups of six or more points linked by 2D projective geometric invariants that have the same values as the corresponding 3D projective invariants.
  • One such invariant is preferably the cross-ratio of a four-point tuple on a crunodal cubic in the 2D image, the value of this cross-ratio being the same as the cross-ratio of four planes in 3D, where the 2D crunodal cubic is the projection of the 3D twisted cubic.
  • Besides geometric invariants, tuples are characterised by chromatic invariants that are independent of illumination changes. The camera is modelled by at least a projective transformation.
  • The method comprises the steps of analysing each pixel in an image, constructing a polar-transformed version of the image, and sampling points of the polar-transformed image with crunodal cubics.
  • Tuples of six or more points are selected and used as locally unique points to represent one image if they are characterised by chromatic and geometric invariants that are different from those of other tuples in their neighbourhood, while no surrounding support region of any individual point is used in the representation itself.
  • The point selection process provides that every point of one image is used in several representative tuples that include points from various parts of the image in a distributed manner, that points are selected to densely cover the scene visible in the image, and that redundancy is provided by using each point in as many representative tuples as possible.
  • The method is preferably optimized by transforming the image according to the sampling curves, and all coefficients and values are precomputed before applying the method to a given image size.
  • The method is well suited to implementation on parallel processor architectures.
  • Fig. 1 Preservation of the cross-ratio on the 3D twisted cubic during projection to a 2D crunodal cubic.
  • Fig. 2 Measuring the cross-ratio in one image.
  • Fig. 3 A classical algorithm for registration that contains common steps followed by either 3D reconstruction or augmented reality.
  • Fig. 4 General algorithm for dense matching of point tuples.
  • Fig. 5 Algorithm for finding representative tuples in the image.
  • Fig. 6 Algorithm for inserting a tuple into the histogram of representative tuples.
  • Fig. 14 Sampling the radial image along one cubic.
  • Fig. 15 Spatial coordinates of the point tuples.
  • Fig. 16 Computing chromatic descriptions.
  • Fig. 17 Schematic illustration of the table (or histogram) for storing representative tuples.
  • The present method for the registration of points disclosed in this patent is based on the massive and independent matching of tuples of points.
  • The general algorithm for registration between two images is presented in Fig. 4 and comprises several stages, explained in the following sections.
  • The preparation step Allocate memory and initialize structures (60) prepares optimized storage structures to host the data.
  • The step Define subset of cubics and precompute parameters (61) allows precomputing numerous parameter values (of the cubic curves in particular) that remain the same during the computations and can thus save significant time.
  • The main computation starts with the selection of two images in Select first and second image (62). These can be two subsequent or two distant images in a video sequence, or two images from a stereo camera.
  • The first part of the algorithm, Find representative tuples (63), is composed of several steps shown in Fig. 5 and detailed in section .3.
  • The central concept is to find representative point tuples and characterise them with geometric properties reflected by values that are invariant to the geometric 3D-to-2D projective transformation produced by a camera. The definition and properties of such values are described in section .1.
  • The algorithm takes specific curves along which invariant values are measured and uses them to sample the image and find locally representative points. Only certain types of curves can be used for sampling. The selection of a set of curves satisfying all the constraints is outlined in section .2.
  • The second part of the algorithm is the step Matching of tuples (64), which corresponds to matching the tuples representing the left and right images. This step is detailed in Fig. 7 and described in section .4. After the matching between the two images is done, the results can be used for reconstruction, motion estimation or the other applications mentioned above.
  • The new registration algorithm has several advantages. First, the low support of each point (limited to one pixel, not its neighbourhood) allows dealing with scenes with high depth discontinuities and with the presence of many objects in the field of view occluding each other. No assumptions about scene smoothness or continuity are made.
  • An element used in the present invention is the property of a curve called the twisted cubic of having an invariant that is preserved during projection from 3D to 2D, as illustrated in Fig. 1.
  • The Twisted cubic in 3D (21) is defined by the equation:

    $$\mathbf{X}(\theta) = A\,\begin{bmatrix}\theta^3 & \theta^2 & \theta & 1\end{bmatrix}^{T}$$

    where the $X_i$ are projective coordinates in 3D, $\theta$ is the parameter of the curve, and $A$ is the $4 \times 4$ matrix of parameters $a_{ij}$ that defines the form of the curve. This curve has 15 parameters (or degrees of freedom) in its definition.
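As a numerical sketch of how this equation is used (Python with NumPy; the matrix A and the camera P below are arbitrary stand-ins, not values from the patent), points on the twisted cubic are generated from the parameter θ and projected by a 3×4 projective camera, yielding a plane cubic as in Fig. 1:

```python
import numpy as np

# Hypothetical 4x4 parameter matrix A; the identity gives the standard
# twisted cubic X(theta) = (theta^3, theta^2, theta, 1) in projective 3D.
A = np.eye(4)

def twisted_cubic_point(A, theta):
    """Homogeneous 3D point on the twisted cubic for parameter theta."""
    return A @ np.array([theta**3, theta**2, theta, 1.0])

# Hypothetical 3x4 projective camera matrix (any rank-3 matrix works here).
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])

def project(P, X):
    """Project a homogeneous 3D point to inhomogeneous 2D image coordinates."""
    x = P @ X
    return x[:2] / x[2]

# Sample the projected curve; the image of a twisted cubic is a plane cubic.
# The parameter range is chosen so no sample projects to infinity for this P.
thetas = np.linspace(0.5, 2.5, 9)
curve_2d = np.array([project(P, twisted_cubic_point(A, t)) for t in thetas])
print(curve_2d)
```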
  • The contents of the left and right images are shown in more detail in Fig. 2.
  • The cross-ratio of the four rays in the left and right images can be measured in two ways. First, by measuring the distances between the points of intersection A, B, C, D of those rays with any line and taking the ratios of those distances as:

    $$\{A,B;C,D\} = \frac{\overline{AC}\cdot\overline{BD}}{\overline{BC}\cdot\overline{AD}}$$

  • Such a cross-ratio has the same value in the Left image (1) and the Right image (2) in general, and is independent of the viewpoint from which the curve is viewed (given that the nodal point exists in both views).
  • This invariant property will be used to characterise tuples of more than four points with several invariant values (one for each group of four) and to match them between two images. For example, if six points lying on a twisted cubic are visible in the left and right images, they can be characterised with two cross-ratio values that take the same values in both images.
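The invariance relied on here can be checked directly: the cross-ratio of four collinear points is preserved by any 1D projective (Möbius) transformation, which is how a change of viewpoint acts along the intersecting line. A minimal sketch, with arbitrary example positions and coefficients:

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio {A,B;C,D} = (AC * BD) / (BC * AD) of four collinear
    points given by their scalar positions along the line."""
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

# Positions of the intersection points A, B, C, D along an arbitrary line.
a, b, c, d = 0.0, 1.0, 2.5, 4.0

# A 1D projective map x -> (p*x + q) / (r*x + s) models the new viewpoint.
p, q, r, s = 2.0, 1.0, 0.3, 1.5
h = lambda x: (p * x + q) / (r * x + s)

print(cross_ratio(a, b, c, d))              # 1.25
print(cross_ratio(h(a), h(b), h(c), h(d)))  # 1.25 again, up to rounding
```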
  • Nodal cubics are selected and used to sample the image for representative points lying on them.
  • Only nodal cubics that have certain specific properties can be used. These properties define the class of curves that can be used.
  • The first property is that the cubic has a nodal point where the curve crosses itself; such a cubic is called a crunodal cubic.
  • The nodal point is preferably not close to the epipole. Otherwise there is an ambiguity in matching between the left and right images, leading to erroneous information about point motion. The position of the epipole depends on the camera motion, which cannot be known in advance, so ambiguous matching cases cannot be avoided entirely, but their number can be reduced. The probability of the epipole position can be estimated relative to the possible motions, and that position avoided for the nodal point.
  • In Fig. 9 several camera motions are shown with their corresponding epipole positions.
  • The definition of the first image as "left" and the second image as "right" is used for simplicity only.
  • The most frequent motion is the pan motion shown in Fig. 9.a.
  • The Left epipole (8) is defined as the projection of C1, the Optical center of the left camera (6), onto the sensor plane of the right camera, whose centre is the Optical center of the right camera (7).
  • The position of the epipole falls within the image if the displacement is less than the sensor size.
  • The epipole occurs more often within the image frame if, again, the motion is less than the sensor size. Since lateral translational motions are much more frequent than vertical ones, the epipole tends to occur within a horizontal corridor on the image. To reduce the risk of such ambiguities, the system avoids selecting points in that corridor as nodal points.
  • The third constraint is to provide sufficient stability of the cross-ratio. This stability depends, first, on the stability of the ray orientations and, second, on the presence of the nodal point in both the left and right images.
  • The stability of the rays is influenced by the minimum distance between two points on the cubic defined by the rays, denoted Dmin. Given that at least six points will be taken on the cubic, the opening of the loop of the crunodal cubic must not be too small, as shown in Fig. 10; this opening should be at least Δθmin.
  • This section shows how the invariant property described above can be used to characterise the tuples of points.
  • This characterisation allows selecting numerous independent point tuples that are representative in the image.
  • Representative tuples are those whose properties (geometric and chromatic invariant values) are sufficiently different from the properties of all other point tuples in their neighbourhood. When tuples are matched between two images, this representativeness ensures that the chance of mismatches is low.
  • The algorithm for finding such representative tuples in one image is outlined in Fig. 5.
  • First, non-uniform areas, i.e. areas composed of pixels not surrounded by neighbours of almost the same colour, are selected. The geometric stability of a point in a uniform area is low, and this step is applied to reject such instability. Selection is done by standard filtering that detects low-frequency variations, as shown in Fig. 18. Note that this step does not perform edge detection. Nor does it introduce support regions for the pixels (detection of tuples does not rely on the surrounding area).
  • Each pixel pi = [xi, yi] in the non-uniform areas selected previously is used as a nodal point, the Central point (44), of a set of cubics.
  • The algorithm step Make polar transform (52) uses the selected pixel pi as the centre of a polar transform and transforms the original image into the Polar image (45), as depicted in Fig. 11.
  • The [x, y] coordinates are replaced by [θ, r].
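A minimal sketch of such a polar transform follows. The patent does not fix the angular and radial resolution or the interpolation scheme, so the nearest-neighbour resampling and grid sizes here are assumptions for illustration:

```python
import numpy as np

def polar_transform(image, center, n_theta=360, n_r=None):
    """Resample `image` (H x W, optionally x C) around `center` = (cx, cy)
    so that rows index the angle theta and columns index the radius r."""
    h, w = image.shape[:2]
    cx, cy = center
    if n_r is None:  # reach the farthest image border by default
        n_r = int(np.hypot(max(cx, w - cx), max(cy, h - cy)))
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rs = np.arange(n_r)
    # Cartesian coordinates of every (theta, r) sample, nearest-neighbour.
    xs = np.rint(cx + np.outer(np.cos(thetas), rs)).astype(int)
    ys = np.rint(cy + np.outer(np.sin(thetas), rs)).astype(int)
    valid = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    polar = np.zeros((n_theta, n_r) + image.shape[2:], dtype=image.dtype)
    polar[valid] = image[ys[valid], xs[valid]]
    return polar

# Usage: treat one selected non-uniform pixel as the nodal point.
img = np.random.rand(120, 160, 3)   # stand-in for the input image
polar = polar_transform(img, center=(80, 60))
```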
  • The curve is defined (as before) by four degrees of freedom: θ1, θ2, r1, r2.
  • A point on the cubic (13) is defined by the four coordinates of the control polygon and pre-defined polynomials.
  • To sample the polar image, one has to define which cubics are acceptable for sampling and select the corresponding part of the four-parameter space. This part of the space will be used to generate cubics for sampling and corresponds to the step Define subset of cubics and precompute parameters (61) of the algorithm described in Fig. 4.
  • The angle parameter θ cannot be set simply with an interval.
  • The difference between the two values θ1 and θ2 cannot be lower than θmin or higher than θmax, as shown in Fig. 13.b.
  • A cubic is allowed to have part of itself outside the image. For simplicity we consider that only half of each cubic can be outside the image, but this fraction can be different in general.
  • The values that the parameters θ1 and θ2 can take, and their interdependence, are given in Fig. 13.c as a hatched area.
  • The graph contains the parameter θimage, which corresponds to the maximum value of the angle for the radial image.
  • Once this subset is defined, the image can be sampled with those curves. Taking one cubic with parameters θ1, θ2, r1, r2 from the set defined previously, one needs to define the way the image is sampled with that cubic.
  • The coefficients in the first matrix are stored as part of the step Define subset of cubics and precompute parameters (61), since their values will be used numerous times for each nodal point in the image.
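The sketch below shows one way this precomputation and sampling could fit together. It assumes the "control polygon and pre-defined polynomials" of the curve are realised as a cubic Bézier in (θ, r) space whose loop starts and ends at the nodal point (r = 0); the control-point layout and sample count are illustrative assumptions, not the patent's exact construction:

```python
import numpy as np

def bernstein_basis(ts):
    """Cubic Bernstein basis evaluated at sample parameters ts, shape
    (len(ts), 4). Computed once per image size and reused for every
    nodal point: the 'precompute parameters' step."""
    ts = np.asarray(ts)
    u = 1.0 - ts
    return np.stack([u**3, 3 * u**2 * ts, 3 * u * ts**2, ts**3], axis=1)

def sample_cubic(polar, theta1, theta2, r1, r2, basis):
    """Sample the polar image along one loop defined by the four degrees
    of freedom (theta1, theta2, r1, r2). The loop leaves and re-enters
    the nodal point (r = 0) at angles theta1 and theta2."""
    ctrl = np.array([[theta1, 0.0],
                     [theta1, r1],
                     [theta2, r2],
                     [theta2, 0.0]])
    pts = basis @ ctrl                      # (n_samples, 2) in (theta, r)
    n_theta, n_r = polar.shape[:2]
    ti = (pts[:, 0] / (2 * np.pi) * n_theta).astype(int) % n_theta
    ri = np.clip(pts[:, 1].astype(int), 0, n_r - 1)
    return polar[ti, ri]

# The basis depends only on the sampling density, so compute it once...
basis = bernstein_basis(np.linspace(0.0, 1.0, 64))
# ...and reuse it for every cubic drawn from the precomputed subset.
polar = np.random.rand(360, 100, 3)  # stand-in polar image (theta x r x RGB)
samples = sample_cubic(polar, 0.3, 1.4, 40.0, 55.0, basis)
```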
  • The first step, Compute cross ratios (55), evaluates three geometric invariants.
  • The rays Bisecant (15) originating from the Nodal point (14) in the original image become vertical parallel lines, Vertical ray (46), in the radial image.
  • The cross-ratio between the rays Bisecant (15) in the original image is equal to the cross-ratio between the parallel lines in the radial image. Measuring that cross-ratio is, however, much simpler computationally in the radial image, since it is just the cross-ratio between the horizontal coordinates, Vertical ray coordinate (47), of those rays. For three sets of four rays, the cross-ratios are computed; these three combinations are shown in the lower part of Fig. 14.
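In code this reduction is immediate: for a six-point tuple, the three invariants are cross-ratios of the six vertical-ray coordinates. The text does not list the three four-ray combinations of Fig. 14, so the sliding windows below are an assumed choice:

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio (AC * BD) / (BC * AD) from scalar coordinates."""
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

def tuple_cross_ratios(xs):
    """Three cross-ratios of a six-point tuple, from the horizontal
    coordinates of its six vertical rays in the radial image."""
    assert len(xs) == 6
    # Assumed combinations: (1,2,3,4), (2,3,4,5), (3,4,5,6).
    return [cross_ratio(*xs[i:i + 4]) for i in range(3)]

print(tuple_cross_ratios([0.1, 0.5, 1.2, 2.0, 3.3, 4.1]))
```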
  • This invariant value will be computed for the point tuple as shown in Fig. 16, corresponding to the step Compute chromatic properties (56) in the algorithm. It should be noted that additional values can be computed as chromatic invariants. For the current setup, three geometric invariant values and one chromatic invariant value provide the invariant description of the six-point tuple.
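The exact chromatic formula is left to Fig. 16, so the sketch below only illustrates the kind of value that qualifies: under a diagonal illumination model, where each colour channel is scaled by an unknown factor, per-channel ratios between points of the tuple are unchanged. The particular scalar summary is a hypothetical stand-in, not the patent's definition:

```python
import numpy as np

def chromatic_invariant(colors):
    """One possible illumination-invariant scalar for a six-point tuple.
    `colors` is a (6, 3) array of RGB values sampled at the tuple points.
    Scaling any channel by a constant cancels in the ratios below."""
    c = np.asarray(colors, dtype=float) + 1e-9   # guard against zeros
    ratios = c[1:] / c[:-1]                      # (5, 3) channel-wise ratios
    return float(np.mean(np.abs(np.log(ratios))))

colors = [[120, 80, 60], [130, 90, 70], [90, 60, 40],
          [200, 150, 110], [60, 40, 30], [150, 100, 80]]
print(chromatic_invariant(colors))
# Same value (up to the epsilon guard) after per-channel rescaling:
print(chromatic_invariant(np.array(colors) * [2.0, 0.5, 1.3]))
```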
  • In the step Insert tuple in the histogram (57), the obtained tuples are compared to those already stored in order to determine their representativeness. Only stable and representative point tuples, distinct from their neighbouring tuples, are of interest for matching (thus reducing the risk of mismatches). This step filters the stable tuples and stores them for matching.
  • Tuples having the same geometric and chromatic invariants are potentially similar.
  • Tuples are stored in a four-dimensional look-up table with I1, I2, I3 and Ic as indices, as schematically shown in Fig. 17.
  • One cell of such a table contains the tuples that have the same (similar up to a predefined delta) values of those four invariants.
  • Insertion of the current tuple into this table occurs according to the algorithm described in Fig. 6.
  • First, the look-up table is accessed and all elements having similar values are retrieved as a set H1.
  • This set thus contains all tuples having invariant values similar up to a certain threshold Δ1.
  • With the comparison set thus reduced, a more refined comparison can be applied.
  • One by one, the stored tuples are compared with the current tuple for close resemblance in invariant values.
  • This difference is computed vector-wise and is based on metrics that take the chromatic transformation into account. If the difference is lower than the threshold, the tuples are considered chromatically similar and processing continues with the geometric positions of the individual points.
  • Next, the geometric positions of the tuples are compared. Spatial coordinates are retrieved from storage in order to be compared with the current tuple. The geometric coordinates of the tuples in the original image are computed from the coordinates in the radial image by applying the original-image coordinates of the control polygon vertices.
  • If a match is found, the tuple that was found similar in the list is marked as "duplicate". Also, the difference between the tuple in the histogram and the current tuple is stored. This is done to define the neighbourhood of the tuple within which other tuples should be considered similar to it.
  • Otherwise, the tuple is considered representative (until another tuple is found close to it). It is therefore added to the histogram cell at the end of the list of tuples.
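A compact sketch of this insertion logic (the algorithm of Fig. 6), together with the cleaning step used later, might look as follows. The bin width, the similarity threshold, and the restriction of the search to a single cell (rather than all cells within Δ1) are simplifying assumptions:

```python
import numpy as np
from collections import defaultdict

class TupleTable:
    """Sketch of the 4-D look-up table of Fig. 17: the invariants
    I1, I2, I3, Ic are quantised into a cell key, and each cell holds
    a list of tuple records."""

    def __init__(self, delta=0.05, sim_threshold=0.01):
        self.delta = delta                  # assumed bin width
        self.sim_threshold = sim_threshold  # assumed refined threshold
        self.cells = defaultdict(list)      # (i1,i2,i3,ic) -> records

    def _key(self, inv):
        return tuple(int(v // self.delta) for v in inv)

    def insert(self, inv, coords, chroma):
        """Fetch stored tuples with similar invariants, mark near
        matches as duplicates, then append the new tuple."""
        record = {"inv": np.asarray(inv, float), "coords": coords,
                  "chroma": chroma, "duplicate": False}
        cell = self.cells[self._key(inv)]
        for stored in cell:
            if np.linalg.norm(stored["inv"] - record["inv"]) < self.sim_threshold:
                stored["duplicate"] = True  # a close tuple already exists
        cell.append(record)
        return record

    def clean(self):
        """The 'Clean histogram' step: drop duplicates, leaving the
        compact representation table (RTT)."""
        for key in list(self.cells):
            self.cells[key] = [t for t in self.cells[key]
                               if not t["duplicate"]]
```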
  • The spatial coordinates and chromatic values of the tuple are stored as part of the tuple description. The computation of spatial coordinates from the radial representation is shown in Fig. 15. The representative tuples are described with invariant values and stored as characteristics of the current view.
  • A density histogram is constructed for the left image, showing in how many tuples each pixel was used. The use of heavily-used pixels in further tuples is reduced (they will, however, still be used in neighbouring tuples for comparison).
  • The redundancy of the representation is important: each pixel should participate in a fixed number of tuples (for example, ten). Using one pixel in too many tuples is avoided so as not to depend excessively on that single point.
  • In the step Clean histogram, all cells of the histogram are scanned. For each cell, the list of tuples is fetched and scanned, and all tuples marked as "duplicate" are removed from the list. The cleaned list is stored back in the histogram. The more compact histogram is kept as the representation of the scene with representative tuples and is denoted the representation table RTT1.
  • For matching, every cell of the table storing representative tuples is analysed individually. All elements of the two corresponding cells from RTT1 and RTT2 are fetched and compared one by one. When two tuples are compared, their invariant values are compared together with their chromatic values. As in the insertion step, a transformation is computed to see how closely the two tuples can be matched to one another by a projective transformation. All possible matches are stored as pairs of tuples in the table RTH12, which reflects the matching between the two images.
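A sketch of this cell-by-cell comparison, reusing the table layout of the previous sketch, is given below; the simple invariant-distance test stands in for the full projective-transformation fit described above:

```python
import numpy as np

def match_tables(rtt1, rtt2, threshold=0.02):
    """Compare corresponding cells of the representation tables RTT1 and
    RTT2 (dicts mapping a quantised key to tuple records) and collect
    candidate pairs into the match table RTH12."""
    rth12 = []
    for key, tuples1 in rtt1.items():
        for t1 in tuples1:
            for t2 in rtt2.get(key, []):
                # Geometric and chromatic invariants are compared together;
                # the full method would also fit a projective transform
                # between the two point tuples here.
                if np.linalg.norm(t1["inv"] - t2["inv"]) < threshold:
                    rth12.append((t1, t2))
    return rth12

# Usage with two TupleTable instances from the previous sketch:
#   rth12 = match_tables(table1.cells, table2.cells)
```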
  • The algorithms for learning tuples and searching for them can be implemented as image processing software modules. Such software can run on a computer with CPU, memory and storage, on a DSP within an embedded system, or on a Graphics Processing Unit with a parallel architecture.
  • Cameras usable for realizing the current invention include a stereo camera able to take pictures of the scene from different viewpoints, a video camera that takes a video sequence while moving or stationary, or, finally, a synchronised multi-camera system.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
EP09786002A 2008-07-06 2009-07-06 Method for distributed and minimum-support point matching in two or more images of a 3D scene taken with a video or stereo camera Withdrawn EP2335220A2 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CH2008000303 2008-07-06
PCT/IB2009/006206 WO2010004417A2 (en) 2008-07-06 2009-07-06 Method for distributed and minimum-support point matching in two or more images of 3d scene taken with video or stereo camera.

Publications (1)

Publication Number Publication Date
EP2335220A2 (de) 2011-06-22

Family

ID=41507493

Family Applications (1)

Application Number Title Priority Date Filing Date
EP09786002A Withdrawn EP2335220A2 (de) Method for distributed and minimum-support point matching in two or more images of a 3D scene taken with a video or stereo camera

Country Status (2)

Country Link
EP (1) EP2335220A2 (de)
WO (1) WO2010004417A2 (de)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9619933B2 (en) 2014-06-16 2017-04-11 Occipital, Inc Model and sizing information from smartphone acquired image sequences

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104299228B (zh) * 2014-09-23 2017-08-25 中国人民解放军信息工程大学 A dense matching method for remote sensing images based on an accurate point-position prediction model
CN105139375B (zh) * 2015-07-15 2017-09-29 武汉大学 A satellite image cloud detection method combining a global DEM and stereo vision
CN105160702B (zh) * 2015-08-20 2017-09-29 武汉大学 LiDAR point cloud assisted dense matching method and system for stereo images
CN105205808B (zh) * 2015-08-20 2018-01-23 武汉大学 Multi-view image dense matching and fusion method and system based on multiple features and multiple constraints
CN105225233B (zh) * 2015-09-15 2018-01-26 武汉大学 A stereo image dense matching method and system based on two types of dilation
CN107240110A (zh) * 2017-06-05 2017-10-10 张洋 Automatic recognition method for projection mapping areas based on machine vision technology
CN109840457B (zh) * 2017-11-29 2021-05-18 深圳市掌网科技股份有限公司 Augmented reality registration method and augmented reality registration device
CN112767460B (zh) * 2020-12-31 2022-06-14 武汉大学 Feature description and matching method for spatial fingerprint map registration primitives

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None *


Also Published As

Publication number Publication date
WO2010004417A2 (en) 2010-01-14
WO2010004417A3 (en) 2014-01-16

Similar Documents

Publication Publication Date Title
Liu et al. 3D imaging, analysis and applications
US10540576B1 (en) Panoramic camera systems
EP2335220A2 (de) Verfahren für verteilten punktabgleich mit mindestabstand auf zwei oder mehr bildern einer mit einer video- oder stereokamera aufgenommenen 3d-szene
Hornacek et al. Depth super resolution by rigid body self-similarity in 3d
Newcombe et al. Live dense reconstruction with a single moving camera
US8452081B2 (en) Forming 3D models using multiple images
US8432435B2 (en) Ray image modeling for fast catadioptric light field rendering
US8447099B2 (en) Forming 3D models using two images
US5432712A (en) Machine vision stereo matching
US6476803B1 (en) Object modeling system and process employing noise elimination and robust surface extraction techniques
JP5249221B2 (ja) Method for determining a depth map from images, and device for determining a depth map
US20120242795A1 (en) Digital 3d camera using periodic illumination
EP1589482A2 (de) Apparatus and method for three-dimensional image measurement
WO2012096747A1 (en) Forming range maps using periodic illumination patterns
EP1063614A2 (de) Apparatus for using multiple face images from different viewpoints to generate a face image from a new viewpoint, method therefor, apparatus and storage medium
Lin et al. Vision system for fast 3-D model reconstruction
JP2001236522A (ja) Image processing device
Dante et al. Precise real-time outlier removal from motion vector fields for 3D reconstruction
Hu et al. Multiple-view 3-D reconstruction using a mirror
Cornelius et al. Towards complete free-form reconstruction of complex 3D scenes from an unordered set of uncalibrated images
CN117635875B (zh) A three-dimensional reconstruction method, device and terminal
Zaharescu et al. Camera-clustering for multi-resolution 3-d surface reconstruction
Tran et al. A simple model generation system for computer graphics
Yu Automatic 3d modeling of environments: a sparse approach from images taken by a catadioptric camera
Zabulis et al. Efficient, precise, and accurate utilization of the uniqueness constraint in multi-view stereo

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR

AX Request for extension of the european patent

Extension state: AL BA RS

DAX Request for extension of the european patent (deleted)
R17D Deferred search report published (corrected)

Effective date: 20140116

RBV Designated contracting states (corrected)

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR

17P Request for examination filed

Effective date: 20140625

17Q First examination report despatched

Effective date: 20190531

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20191011