EP3018625B1 - Method for calibrating a sight system - Google Patents

Method for calibrating a sight system

Info

Publication number
EP3018625B1
Authority
EP
European Patent Office
Prior art keywords
viewfinder
point
function
angular
calibration method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP15186507.8A
Other languages
German (de)
French (fr)
Other versions
EP3018625A1 (en)
EP3018625B8 (en)
Inventor
Benoît MALRAT
Jean Beaudet
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Idemia Identity and Security France SAS
Original Assignee
Morpho SA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Morpho SA
Publication of EP3018625A1
Publication of EP3018625B1
Application granted
Publication of EP3018625B8
Legal status: Active

Classifications

    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B13/00 Viewfinders; Focusing aids for cameras; Means for focusing for cameras; Autofocus systems for cameras
    • G03B13/18 Focusing aids
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/002 Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose

Definitions

  • The invention relates to a method of calibrating a sighting system for aiming at an object by means of a viewfinder, the position of the object being located in a reference frame external to the viewfinder, and to a system implementing said method.
  • The invention finds application in particular in the field of acquiring high-resolution images at a precise position, such as the acquisition of images of biometric traits of individuals.
  • Calibrating a sighting system means determining a control law to be applied to a viewfinder so that, from the position of an object in a reference frame external to the viewfinder, the viewfinder can be oriented so as to aim at the object.
  • An example of an application is a system for acquiring images of biometric traits of individuals, such as iris images.
  • In order to acquire high-resolution images of an individual's iris, the system requires a high-resolution camera. Because of this high resolution, the camera cannot have a fixed wide field; it must be mobile in order to be able to aim at an individual's iris.
  • The system therefore also includes two fixed, wide-field cameras of lower resolution, which can detect the position of an individual's iris.
  • The position of the iris in the reference frame of the wide-field cameras must be exploited by the high-resolution camera to aim at the iris and then acquire it.
  • In this type of system, the control law of the aiming camera has been determined from an a priori kinematic model of the system comprising all the cameras.
  • This model makes it possible to estimate the relative positions of the various cameras of the system and, from these positions, the position of the object in the reference frame of the aiming camera, and then the commands to be applied to this camera to aim at the object.
  • This approach may require simplifying the design of the sighting system in order to simplify the kinematic model, which can be constraining.
  • Precision errors may occur if the aiming system has not been assembled with the required accuracy and if there are discrepancies between the a priori model and the actual system.
  • The document US 2010/0289869 describes another type of camera calibration, comprising the determination of intrinsic and extrinsic parameters of a camera, the extrinsic parameters including in particular the viewing angles of the camera with respect to a point, obtained from the camera commands used to aim at that point.
  • This document does not make it possible to establish a control law to be applied to the camera to reach these viewing angles.
  • Moreover, this document cannot be applied to the acquisition of iris images, because the long focal length used for this purpose prevents precise determination of the intrinsic and extrinsic parameters of the camera.
  • The object of the invention is to propose a method for calibrating a sighting system that is simple and quick to implement, and less restrictive on the design and manufacture of the sighting system.
  • The invention also aims to propose a universal calibration method, that is to say one that does not depend on the composition or structure of the sighting system.
  • The invention also relates to an aiming system comprising a viewfinder, detection optics for detecting a position of an object to be aimed at, and a processing unit comprising processing means, the aiming system being characterized in that it is adapted to implement the method according to the foregoing description.
  • The proposed calibration method has the advantage of being automatic and simple to implement on any sighting system.
  • This method does not require the development of an a priori kinematic model of the system, which makes it possible to relax the constraints on mechanical reproducibility of the system during manufacture and to reduce its cost. Constraints on system design aimed at simplifying the kinematic model are also relaxed.
  • In addition, the proposed method does not require shots with an overlapping area.
  • Figure 1a shows an example of a sighting system 1 that can be calibrated according to the method described below.
  • This sighting system 1 comprises at least one viewfinder 10, which can be an image acquisition device such as a camera.
  • The viewfinder 10 is rotatable about two axes, yaw and pitch, the rotation of the viewfinder about these two axes being actuated by a motor (not shown).
  • The viewfinder 10 also has a variable focus for focusing over a range of depths.
  • The aiming system 1 also comprises detection optics 11 for detecting the position of an object O in space, for example in the form of two cameras.
  • The position of the object O is located in the reference frame of the detection optics 11, which is a different reference frame from that of the viewfinder 10. This reference frame is fixed arbitrarily and can, if necessary, be orthonormal.
  • Figures 1a and 1b show a non-limiting example of such a reference frame.
  • In this case, the reference frame is orthogonal, with a z axis corresponding to a longitudinal sighting axis of the detection optics 11 and an x axis corresponding to the horizontal of the image obtained by these detection optics.
  • This reference frame is the one used in the following.
  • The aiming system comprises a processing unit 12, for example an integrated circuit, comprising processing means adapted to control the rotation and the focusing of the viewfinder 10 by a control law, from the position of the object detected by the detection optics 11.
  • In figure 1b, an alternative embodiment of the sighting system is shown, comprising a mirror 13 mounted between the object O to be aimed at and the sighting system, the mirror being rotatable about two axes.
  • The viewfinder 10 is in this case fixed and aimed at the mirror 13, and rotating the mirror makes it possible to move the line of sight of the viewfinder 10 to aim at the object O.
  • Since the positions of the object O and of the point M are recorded in the reference frame of the detection optics 11, the coordinates of the object O in this frame can be converted into spherical coordinates with respect to the point M. Figure 1c shows the conversion of the coordinates of the point O into the spherical frame centered on the point M.
  • The spherical coordinates of the object O comprise two angle values α and β and a distance value.
  • In the example where the reference frame of the detection optics 11 is an orthonormal frame, and denoting xM, yM and zM the coordinates of the point of concurrence M in this frame, the coordinates of the object O are αi = tan⁻¹((xi − xM)/(zi − zM)) and βi = cos⁻¹((yi − yM)/‖O − M‖),
  • where ‖O − M‖ is the distance between the point O and the point M, denoted ρ in the following.
  • The calibration method comprises the determination of a control law {Cy, Cp, Cd} to be applied to the viewfinder to aim at an object O, and the determination of the position of the point of concurrence of the lines of sight M.
  • Step 200 is implemented by the processing means of the processing unit, by executing an appropriate program.
  • The control law comprises two angular commands Cy, Cp, these commands being rotation commands of the viewfinder, in yaw and in pitch respectively, to be applied to the viewfinder so that it is oriented towards the object O.
  • The two angular commands are determined as a function of the position of the object O to be aimed at with respect to the point M. They are therefore functions of the angles α and β: Cy(α, β), Cp(α, β), α and β themselves depending on the position of the point M.
  • The control law also comprises a focus command Cd(ρ) of the viewfinder as a function of the distance ρ between the point of concurrence of the lines of sight M and the object O, ρ itself depending on the position of the point M.
  • The method comprises a first step 100 of acquiring n reference positions of aimed objects and the corresponding commands to be applied to the viewfinder to aim at these positions, n being an integer greater than or equal to 6.
  • The reference positions acquired during this step are denoted Pi, the corresponding angular commands Cyi, Cpi and the corresponding focus commands Cdi.
  • This step 100 is advantageously implemented by means of a test pattern equipped with several barcodes, positioned in at least two different positions.
  • The viewfinder 10 can be controlled manually so as to successively acquire a sharp image of each of the barcodes of the test pattern (this corresponds to both aiming accuracy and focus accuracy).
  • Alternatively, the viewfinder can acquire images of the test pattern without being controlled to specifically aim at a barcode, and the aimed position is determined a posteriori according to what appears in the image, preferably by reading a barcode appearing in the image.
  • Advantageously, the test pattern comprises at least ten, or even twenty, barcodes, which correspond to as many positions Pi, and the test pattern is itself positioned in several places with respect to the viewfinder to multiply the number of positions Pi.
  • Each position Pi is acquired in the reference frame of the detection optics 11.
  • The method then comprises a step 200 of determining the position of the point of concurrence M and the commands Cy, Cp, Cd making up the control law.
  • This step is implemented by determining the minimum of a function of the second derivative of the control law. In this way, the commands obtained are the least chaotic for a fixed position of the object O, and therefore the most robust in case of inaccuracy in the measurement of the position of the object O.
  • The function to be minimized, which can be called the deformation energy of the control law, is the integral of the sum of the second derivatives of the commands making up the control law.
  • It is written as:
    f = \iint \left[ \left(\frac{\partial^2 C_y}{\partial\alpha^2}\right)^2 + 2\left(\frac{\partial^2 C_y}{\partial\alpha\,\partial\beta}\right)^2 + \left(\frac{\partial^2 C_y}{\partial\beta^2}\right)^2 \right] d\alpha\, d\beta + \iint \left[ \left(\frac{\partial^2 C_p}{\partial\alpha^2}\right)^2 + 2\left(\frac{\partial^2 C_p}{\partial\alpha\,\partial\beta}\right)^2 + \left(\frac{\partial^2 C_p}{\partial\beta^2}\right)^2 \right] d\alpha\, d\beta + k \int \left(\frac{d^2 C_d}{d\rho^2}\right)^2 d\rho
    where k is a predetermined weighting constant, for example equal to 1.
  • This step is carried out by first determining, during a sub-step 210, the angular commands Cy, Cp, and then, in a second stage, the focus command Cd.
  • This step 210 is implemented by iteratively determining 211 the optimal position of the point M corresponding to the commands Cy, Cp minimizing the function f, and then determining 212 the angular commands minimizing said function once the point M has been determined at the end of step 211.
  • Step 211 of determining the optimal position of the point M can be implemented in different ways.
  • f* denotes the term of f corresponding only to the angular commands (f deprived of its term depending on the focus command Cd).
  • f^* = \iint \left[ \left(\frac{\partial^2 C_y}{\partial\alpha^2}\right)^2 + 2\left(\frac{\partial^2 C_y}{\partial\alpha\,\partial\beta}\right)^2 + \left(\frac{\partial^2 C_y}{\partial\beta^2}\right)^2 \right] d\alpha\, d\beta + \iint \left[ \left(\frac{\partial^2 C_p}{\partial\alpha^2}\right)^2 + 2\left(\frac{\partial^2 C_p}{\partial\alpha\,\partial\beta}\right)^2 + \left(\frac{\partial^2 C_p}{\partial\beta^2}\right)^2 \right] d\alpha\, d\beta
  • Given f*, it is known how to determine argmin over Cy, Cp of f*(M, Cy, Cp) for a fixed M (obtaining the commands minimizing f* is described below in step 212, which describes this step for the particular case of the optimal M); that is to say, it is known how to compute the functions Cy and Cp minimizing the function f*.
  • F^*(M) = \min_{C_y,\, C_p} f^*(M, C_y, C_p)
  • Step 211 of determining the optimal position of the point M is a step of minimizing the function F* (resp. G*). This step can be implemented by gradient descent.
  • In the variant based on G*, this step is implemented iteratively, for example by gradient descent, by computing, for each position of the point M, the angles α and β expressed as functions of the commands Cy, Cp minimizing the function g*, and then adjusting the position of the point M.
  • At the end of step 211, whichever variant is implemented, an optimal position of the point M is thus obtained.
  • with: U(r) = r^2 \log r^2
  • The points Pi correspond to the n points obtained in step 100, to which correspond the respective angle values αi, βi.
  • Having at least 6 points Pi makes it possible to obtain at least as many known points as degrees of freedom (3 degrees of freedom for a1, aα and aβ, and three more for the position of the point M).
  • Denoting K the n×n matrix with entries K_{ij} = U\!\left(\lvert(\alpha_i, \beta_i) - (\alpha_j, \beta_j)\rvert\right), P the n×3 matrix whose i-th row is (1, \alpha_i, \beta_i), and L = \begin{pmatrix} K & P \\ P^T & 0 \end{pmatrix},
  • the vector W = (w_1, ..., w_n) and the coefficients a_1, a_α and a_β are provided by the equation:
  • L^{-1} Y = (W \mid a_1\; a_\alpha\; a_\beta)^T, where Y = (C_{y1}, ..., C_{yn}, 0, 0, 0)^T.
  • The angular commands Cy, Cp may be regularized thin-plate spline functions, as described in the publication by G. Donato and S. Belongie, Approximate Thin Plate Spline Mappings, Computer Vision - ECCV 2002, Springer Berlin Heidelberg, 2002, 21-31.
  • In this case, the commands Cy and Cp are not the commands minimizing the function f* exactly, but ones approaching the minimum.
  • The minimized function is therefore not f* but a function f_r defined as the sum of f* and residual errors:
  • f_r = f^* + \lambda \sum_{i} \left[ \left( C_y\!\left(\alpha_{P_i - M}, \beta_{P_i - M}\right) - C_{y_i} \right)^2 + \left( C_p\!\left(\alpha_{P_i - M}, \beta_{P_i - M}\right) - C_{p_i} \right)^2 \right]
    where λ is a predetermined constant, for example equal to 1.
  • The method also includes a step 220 of determining the focus command Cd (a minimal fitting sketch is given after this list).
  • fd denotes the term of f related to the focus command Cd.
  • In an alternative embodiment, the angular and focus commands are calculated simultaneously.
  • The command determination step, also denoted 200, is then an iterative step comprising, successively, a first sub-step 211' in which the optimal position of the point M is determined by finding the commands Cy, Cp, Cd minimizing the function f for each fixed M, and a second sub-step 212' during which the commands Cy, Cp and Cd are determined for the fixed optimal point M resulting from step 211'.
  • The proposed method has the advantage that it can be implemented on any sighting system without prior knowledge of its kinematic model. It does not induce constraints on the design or use of the system.
  • The aiming system 1 comprises, as the viewfinder 10, a narrow-field, high-resolution mobile camera, for example with a resolution of the order of 200 dpi at one meter.
  • The detection optics 11 comprise two fixed cameras with a resolution lower than that of the viewfinder, for example of the order of 30 dpi at one meter, and a field wider than that of the viewfinder, in order to be able to locate in a scene an iris of an individual whose image is to be acquired.
  • The position of the iris is acquired by the detection optics and communicated to the viewfinder which, having been calibrated by the method described above, can be positioned to accurately aim at the iris and acquire an image of it.
  • This method also makes it possible to aim at an object such as an iris in a scene even if its position is not known a priori. This is therefore less restrictive for users whose iris image is acquired, because they do not have to position themselves in a particular way or at a specific place for an image of their iris to be acquired.
  • The method is nevertheless not limited to the field of acquisition of biometric trait images, but is applicable to any object that one wishes to aim at with a sighting system.
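As announced above, here is a minimal fitting sketch for the focus command, assuming Python with numpy and scipy and using illustrative names (not the patent's own implementation): a natural cubic spline through the reference pairs (ρi, Cdi) minimizes the integral of the squared second derivative among interpolants, which is precisely the term of f associated with Cd.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def fit_focus_command(P, M, c_d_ref):
    """Fit Cd(rho) as a natural cubic spline through the reference focus commands.

    P       : (n, 3) array of reference positions Pi in the detection-optics frame (hypothetical)
    M       : (3,) point of concurrence of the lines of sight, found beforehand
    c_d_ref : (n,) focus commands Cdi recorded during step 100
    """
    rho = np.linalg.norm(P - M, axis=1)      # distances between M and each Pi
    order = np.argsort(rho)                  # CubicSpline needs increasing abscissae
    return CubicSpline(rho[order], c_d_ref[order], bc_type="natural")

# Usage with hypothetical data: C_d = fit_focus_command(P, M_opt, c_d_ref); C_d(0.85)
```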

Description

FIELD OF THE INVENTION

The invention relates to a method of calibrating a sighting system for aiming at an object by means of a viewfinder, the position of the object being located in a reference frame external to the viewfinder, and to a system implementing said method.

The invention finds application in particular in the field of acquiring high-resolution images at a precise position, such as the acquisition of images of biometric traits of individuals.

STATE OF THE ART

Calibrating a sighting system means determining a control law to be applied to a viewfinder so that, from the position of an object in a reference frame external to the viewfinder, the viewfinder can be oriented so as to aim at the object.

An example of an application is a system for acquiring images of biometric traits of individuals, such as iris images. In order to acquire high-resolution images of an individual's iris, the system requires a high-resolution camera. Because of this high resolution, the camera cannot have a fixed wide field; it must be mobile in order to be able to aim at an individual's iris.

The system therefore also includes two fixed, wide-field cameras of lower resolution, which can detect the position of an individual's iris.

The position of the iris in the reference frame of the wide-field cameras must be exploited by the high-resolution camera to aim at the iris and then acquire it.

In this type of system, the control law of the aiming camera (in the preceding example, the high-resolution camera) has been determined from an a priori kinematic model of the system comprising all the cameras.

This model makes it possible to estimate the relative positions of the various cameras of the system and, from these positions, the position of the object in the reference frame of the aiming camera, and then the commands to be applied to this camera to aim at the object.

However, this approach has many problems. First of all, a kinematic model must be determined for each new aiming system, since the model depends on the relative positions of the different components of the system. This process of determining the kinematic model is long and complex.

In addition, this approach may require simplifying the design of the sighting system in order to simplify the kinematic model, which can be constraining.

Finally, this approach is very demanding in terms of mechanical precision during the manufacture of the sighting system, in order to ensure the relevance of the kinematic model once the sighting system is assembled.

Precision errors may occur if the aiming system has not been assembled with the required accuracy and if there are discrepancies between the a priori model and the actual system.

Another solution has been proposed in the article by Junejo, I.N., and Foroosh, H., Optimizing PTZ camera calibration from two images, Machine Vision and Applications, 23(2), 375-389, 2012.

Nevertheless, this method requires acquiring two images with overlapping views, hence, for a long focal length, with a small angular difference between the shots, which is constraining and unfavorable to accuracy.

The document US 2010/0289869 describes another type of camera calibration, comprising the determination of intrinsic and extrinsic parameters of a camera, the extrinsic parameters including in particular the viewing angles of the camera with respect to a point, obtained from the camera commands used to aim at that point.

This document does not make it possible to establish a control law to be applied to the camera to reach these viewing angles. Moreover, it cannot be applied to the acquisition of iris images, because the long focal length used for this purpose prevents precise determination of the intrinsic and extrinsic parameters of the camera.

PRESENTATION OF THE INVENTION

The object of the invention is to propose a method for calibrating a sighting system that is simple and quick to implement, and less restrictive on the design and manufacture of the sighting system.

The invention also aims to propose a universal calibration method, that is to say one that does not depend on the composition or structure of the sighting system.

To this end, the subject of the invention is a method of calibrating a sighting system comprising a viewfinder and detection optics for detecting the position of an object in space, characterized in that it comprises the determination of a control law to be applied to the viewfinder to aim at the object according to its position, said position being determined in a reference frame of the detection optics, the control law comprising two angular commands and a focus command of the viewfinder, expressed as a function of the relative positions between the object to be aimed at and a point of concurrence of all the lines of sight of the viewfinder,
the method comprising the steps of:

  • aiming, with the viewfinder, at objects located at at least six different positions known in the reference frame of the sighting system and recording the corresponding commands,
  • from the positions of the object and the corresponding commands, determining the position of the point of concurrence of the lines of sight and the control law by minimizing a function of the second derivative of the control law.

Advantageously, but optionally, the method according to the invention may further comprise at least one of the following characteristics:

  • the step of determining the position of the point of concurrence and the control law comprises the implementation of the steps of:
    • ∘ determining a position of the point of concurrence corresponding to commands minimizing the function of the second derivative of the control law, and
    • ∘ once the position of the point of concurrence has been determined, determining the commands minimizing the function of the second derivative of the control law;
  • the step of determining the position of the point of concurrence comprises the minimization, as a function of said position, of the integral of the sum of the second derivatives of the angular commands;
  • the step of determining the position of the point of concurrence comprises the minimization, as a function of said position, of the integral of the sum of the second derivatives of the relative angular positions between the point to be aimed at and the point of concurrence, expressed as a function of the angular commands of the viewfinder;
  • each angular command is a thin-plate spline function of two angles;
  • each angular command is a regularized thin-plate spline function of two angles;
  • the determination of the focus command is carried out during the determination of the angular commands, or after the determination of the angular commands and of the point M;
  • the determination of the focus command comprises the minimization of the integral of the second derivative of the focus command;
  • the focus command is a cubic spline function.

The invention also relates to an aiming system comprising a viewfinder, detection optics for detecting a position of an object to be aimed at, and a processing unit comprising processing means, the aiming system being characterized in that it is adapted to implement the method according to the foregoing description.

The invention finally relates to the use of such an aiming system, comprising the steps of:

  • acquiring a position of an object to be aimed at in a reference frame of the detection optics,
  • deducing, from the relative positions of the object and a point of concurrence of the lines of sight of the viewfinder, the coordinates of the object in a spherical frame centered on the point of concurrence, and
  • from the control law determined during the calibration, deducing a command to be applied to the viewfinder to aim at the object (a minimal sketch of this flow is given below).

The proposed calibration method has the advantage of being automatic and simple to implement on any sighting system.

This method does not require the development of an a priori kinematic model of the system, which makes it possible to relax the constraints on mechanical reproducibility of the system during manufacture and to reduce its cost. Constraints on system design aimed at simplifying the kinematic model are also relaxed.

In addition, the proposed method does not require shots with an overlapping area.

DESCRIPTION OF THE FIGURES

Other features, objects and advantages of the present invention will become apparent on reading the detailed description which follows, with reference to the appended figures, given by way of non-limiting examples, in which:

  • figure 1a is a schematic two-dimensional view of a sighting system,
  • figure 1b is a schematic two-dimensional view of an alternative embodiment of the sighting system of figure 1a,
  • figure 1c shows an example of a reference frame for measuring the relative positions of a point to be aimed at and a point of concurrence of the lines of sight of the viewfinder of the sighting system,
  • figures 2a and 2b schematically show the main steps of a method for calibrating the sighting system according to two embodiments,
  • figure 3 schematically shows the steps of using a sighting system calibrated according to the method of figure 2a or 2b.

DETAILED DESCRIPTION OF AT LEAST ONE EMBODIMENT OF THE INVENTION

Aiming system

Figure 1a shows an example of a sighting system 1 that can be calibrated according to the method described below.

This sighting system 1 comprises at least one viewfinder 10, which can be an image acquisition device such as a camera. The viewfinder 10 is rotatable about two axes, yaw and pitch, the rotation of the viewfinder about these two axes being actuated by a motor (not shown).

The viewfinder 10 also has a variable focus for focusing over a range of depths.

The aiming system 1 also comprises detection optics 11 for detecting the position of an object O in space, for example in the form of two cameras. The position of the object O is located in the reference frame of the detection optics 11, which is a different reference frame from that of the viewfinder 10. This reference frame is fixed arbitrarily and can, if necessary, be orthonormal.

We denote (xi, yi, zi) the coordinates of the object O in the reference frame of the detection optics.

Figures 1a and 1b show a non-limiting example of such a reference frame. In this case, the reference frame is orthogonal, with a z axis corresponding to a longitudinal sighting axis of the detection optics 11 and an x axis corresponding to the horizontal of the image obtained by these detection optics. This reference frame is the one used in the following.

Finally, the aiming system comprises a processing unit 12, for example an integrated circuit, comprising processing means adapted to control the rotation and the focusing of the viewfinder 10 by a control law, from the position of the object detected by the detection optics 11.

In figure 1b, an alternative embodiment of the sighting system is shown, comprising a mirror 13 mounted between the object O to be aimed at and the sighting system, the mirror being rotatable about two axes. The viewfinder 10 is in this case fixed and aimed at the mirror 13, and rotating the mirror makes it possible to move the line of sight of the viewfinder 10 to aim at the object O.

In the following, it is assumed that in both cases there is a point of concurrence M of all the lines of sight of the viewfinder 10. In the case of figure 1b, this point M corresponds to the intersection between the mirror 13 and the portion of the line of sight of the viewfinder 10 extending between the viewfinder and the mirror 13.

In figure 1a, considering that the viewfinder 10 is mounted on a perfect ball joint whose center of rotation is on the optical axis, the point M corresponds to the center of rotation.

In the following, the relative positions of the object O and of the point M are exploited to deduce the control law of the viewfinder 10.

In particular, since the positions of the object O and of the point M are recorded in the reference frame of the detection optics 11, the coordinates of the object O in this frame can be converted into spherical coordinates with respect to the point M. Figure 1c shows the conversion of the coordinates of the point O into the spherical frame centered on the point M.

The spherical coordinates of the object O comprise two angle values α and β and a distance value.

According to the preceding example, in which the reference frame of the detection optics 11 is an orthonormal frame, and denoting xM, yM and zM the coordinates of the point of concurrence M in the reference frame of the detection optics 11, the coordinates αi and βi of the object O are written as follows:

\alpha_i = \tan^{-1}\!\left(\frac{x_i - x_M}{z_i - z_M}\right)

\beta_i = \cos^{-1}\!\left(\frac{y_i - y_M}{\lVert O - M \rVert}\right)

where ‖O − M‖ is the distance between the point O and the point M, denoted ρ in the following.
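During calibration, this conversion is applied to all the reference positions Pi for a candidate point M. A minimal vectorised sketch, assuming Python with numpy and illustrative names:

```python
import numpy as np

def to_spherical(P, M):
    """Angles (alpha_i, beta_i) and distances rho_i of the points P relative to M.

    P : (n, 3) positions in the detection-optics frame; M : (3,) candidate concurrence point.
    """
    d = P - M
    rho = np.linalg.norm(d, axis=1)
    alpha = np.arctan2(d[:, 0], d[:, 2])   # tan^-1((x_i - x_M)/(z_i - z_M))
    beta = np.arccos(d[:, 1] / rho)        # cos^-1((y_i - y_M)/||O - M||)
    return alpha, beta, rho
```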

Calibration process

The calibration method, whose main steps are shown in figures 2a and 2b, comprises the determination of a control law {Cy, Cp, Cd} to be applied to the viewfinder to aim at an object O, and the determination of the position of the point of concurrence of the lines of sight M.

This method is implemented by the sighting system 1; in particular, step 200 is implemented by the processing means of the processing unit, by executing an appropriate program.

The control law comprises two angular commands Cy, Cp, these commands being rotation commands of the viewfinder, in yaw and in pitch respectively, to be applied to the viewfinder so that it is oriented towards the object O.

The two angular commands are determined as a function of the position of the object O to be aimed at with respect to the point M. They are therefore functions of the angles α and β: Cy(α, β), Cp(α, β), α and β themselves depending on the position of the point M.

The control law also comprises a focus command Cd(ρ) of the viewfinder as a function of the distance ρ between the point of concurrence of the lines of sight M and the object O, ρ itself depending on the position of the point M.

The control law is denoted C = {Cy(α, β), Cp(α, β), Cd(ρ)}.

The method comprises a first step 100 of acquiring n reference positions of aimed objects and the corresponding commands to be applied to the viewfinder to aim at these positions, n being an integer greater than or equal to 6. The reference positions acquired during this step are denoted Pi, the corresponding angular commands Cyi, Cpi and the corresponding focus commands Cdi.

This step 100 is advantageously implemented by means of a test pattern equipped with several barcodes, positioned in at least two different positions.

The viewfinder 10 can be controlled manually so as to successively acquire a sharp image of each of the barcodes of the test pattern (this corresponds to both aiming accuracy and focus accuracy).

Alternatively, the viewfinder can acquire images of the test pattern without being controlled to specifically aim at a barcode, and the aimed position is determined a posteriori according to what appears in the image, advantageously by reading a barcode appearing in the image.

Advantageously, the test pattern comprises at least ten, or even twenty, barcodes, which correspond to as many positions Pi, and the test pattern is itself positioned in several places with respect to the viewfinder to multiply the number of positions Pi.

Each position Pi is acquired in the reference frame of the detection optics 11.

Once a barcode of the test pattern is correctly aimed at, the corresponding angular commands Cyi, Cpi and focus command Cdi of the viewfinder 10 are recorded.
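By way of illustration, the outcome of step 100 can be represented very simply; the sketch below, assuming Python with numpy and illustrative names, groups the n reference positions and the recorded commands into the arrays used by step 200.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ReferenceShot:
    """One observation from step 100: a barcode position and the commands that aimed at it."""
    P: np.ndarray   # barcode position Pi in the detection-optics frame (3-vector)
    c_y: float      # yaw command Cyi that gave a sharp, centred image of the barcode
    c_p: float      # pitch command Cpi
    c_d: float      # focus command Cdi

def stack_reference_data(shots):
    """Stack n >= 6 reference shots into the arrays consumed by step 200."""
    assert len(shots) >= 6, "the method requires at least six reference positions"
    P = np.stack([s.P for s in shots])
    return (P,
            np.array([s.c_y for s in shots]),
            np.array([s.c_p for s in shots]),
            np.array([s.c_d for s in shots]))
```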

The method then comprises a step 200 of determining the position of the point of concurrence M and the commands Cy, Cp, Cd making up the control law.

This step is implemented by determining the minimum of a function of the second derivative of the control law. In this way, the commands obtained are the least chaotic for a fixed position of the object O, and therefore the most robust in case of inaccuracy in the measurement of the position of the object O.

The function to be minimized, which can be called the deformation energy of the control law, is the integral of the sum of the second derivatives of the commands making up the control law.

It is written as follows:

f = \iint \left[ \left(\frac{\partial^2 C_y}{\partial\alpha^2}\right)^2 + 2\left(\frac{\partial^2 C_y}{\partial\alpha\,\partial\beta}\right)^2 + \left(\frac{\partial^2 C_y}{\partial\beta^2}\right)^2 \right] d\alpha\, d\beta + \iint \left[ \left(\frac{\partial^2 C_p}{\partial\alpha^2}\right)^2 + 2\left(\frac{\partial^2 C_p}{\partial\alpha\,\partial\beta}\right)^2 + \left(\frac{\partial^2 C_p}{\partial\beta^2}\right)^2 \right] d\alpha\, d\beta + k \int \left(\frac{d^2 C_d}{d\rho^2}\right)^2 d\rho

where k is a predetermined weighting constant, for example equal to 1.
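For intuition only, this deformation energy can be approximated numerically by finite differences on regular grids of (α, β) and ρ. A rough sketch, assuming Python with numpy and candidate command functions passed as callables (names illustrative):

```python
import numpy as np

def deformation_energy(C_y, C_p, C_d, alphas, betas, rhos, k=1.0):
    """Finite-difference approximation of f for candidate command functions.

    C_y, C_p : callables taking (alpha, beta) meshgrids; C_d : callable taking a rho vector.
    alphas, betas, rhos : regularly spaced 1-D grids covering the working range.
    """
    A, B = np.meshgrid(alphas, betas, indexing="ij")
    da, db, dr = alphas[1] - alphas[0], betas[1] - betas[0], rhos[1] - rhos[0]
    energy = 0.0
    for C in (C_y, C_p):
        Z = C(A, B)
        Z_aa = np.gradient(np.gradient(Z, da, axis=0), da, axis=0)
        Z_ab = np.gradient(np.gradient(Z, da, axis=0), db, axis=1)
        Z_bb = np.gradient(np.gradient(Z, db, axis=1), db, axis=1)
        energy += np.sum(Z_aa**2 + 2 * Z_ab**2 + Z_bb**2) * da * db
    d2 = np.gradient(np.gradient(C_d(rhos), dr), dr)
    return energy + k * np.sum(d2**2) * dr
```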

According to a first embodiment, shown in figure 2a, this step is carried out by first determining, during a sub-step 210, the angular commands Cy, Cp, and then, in a second stage, the focus command Cd.

This step 210 is implemented by iteratively determining 211 the optimal position of the point M corresponding to the commands Cy, Cp minimizing the function f, and then determining 212 the angular commands minimizing said function once the point M has been determined at the end of step 211.

Step 211 of determining the optimal position of the point M can be implemented in different ways.

According to a first possibility, f* denotes the term of f corresponding only to the angular commands (f deprived of its term depending on the focus command Cd):

f^* = \iint \left[ \left(\frac{\partial^2 C_y}{\partial\alpha^2}\right)^2 + 2\left(\frac{\partial^2 C_y}{\partial\alpha\,\partial\beta}\right)^2 + \left(\frac{\partial^2 C_y}{\partial\beta^2}\right)^2 \right] d\alpha\, d\beta + \iint \left[ \left(\frac{\partial^2 C_p}{\partial\alpha^2}\right)^2 + 2\left(\frac{\partial^2 C_p}{\partial\alpha\,\partial\beta}\right)^2 + \left(\frac{\partial^2 C_p}{\partial\beta^2}\right)^2 \right] d\alpha\, d\beta

Given f*, it is known how to determine argmin over Cy, Cp of f*(M, Cy, Cp) for a fixed M (obtaining the commands minimizing f* is described below in step 212, which describes this step for the particular case of the optimal M); that is to say, it is known how to compute the functions Cy and Cp minimizing the function f*.

F* denotes the function from R³ to R defined as follows:

F^*(M) = \min_{C_y,\, C_p} f^*(M, C_y, C_p)

Step 211 of determining the optimal position of the point M is a step of minimizing the function F* (resp. G*). This step can be implemented by gradient descent.

It is implemented iteratively as follows:

  • determination of a position of the point M,
  • computation of the Cy, Cp minimizing f* for this fixed M,
  • iterative adjustment of the value of the point M to determine new Cy, Cp minimizing f* (a numerical sketch of this loop is given below).
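One way to realise this loop numerically (not the patent's own implementation): for a fixed M, the minimiser of f* over interpolating commands is the thin-plate spline introduced in step 212 below, and its bending energy equals w·K·w up to a constant factor, so F*(M) can be evaluated through a TPS fit and minimised over M with a generic optimiser. A self-contained sketch, assuming Python with numpy and scipy and illustrative names:

```python
import numpy as np
from scipy.optimize import minimize

def U(r):
    # TPS radial basis U(r) = r^2 log(r^2), with the convention U(0) = 0
    with np.errstate(divide="ignore", invalid="ignore"):
        out = r**2 * np.log(r**2)
    return np.where(r == 0.0, 0.0, out)

def tps_weights(angles, values):
    """Solve the thin-plate-spline system interpolating (alpha_i, beta_i) -> value_i."""
    n = len(angles)
    K = U(np.linalg.norm(angles[:, None, :] - angles[None, :, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), angles])              # rows (1, alpha_i, beta_i)
    L = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    sol = np.linalg.solve(L, np.concatenate([values, np.zeros(3)]))
    return sol[:n], sol[n:], K                            # w, (a1, a_alpha, a_beta), K

def to_angles(P, M):
    d = P - M
    return np.column_stack([np.arctan2(d[:, 0], d[:, 2]),
                            np.arccos(d[:, 1] / np.linalg.norm(d, axis=1))])

def F_star(M, P, c_y, c_p):
    """Bending energy of the interpolating TPS commands for a candidate concurrence point M."""
    ang = to_angles(P, M)
    total = 0.0
    for values in (c_y, c_p):
        w, _, K = tps_weights(ang, values)
        total += float(w @ K @ w)                         # TPS bending energy, up to a constant
    return total

# Outer minimisation over M with hypothetical data P (n, 3), c_y, c_p (n,):
# res = minimize(F_star, x0=P.mean(axis=0), args=(P, c_y, c_p), method="Nelder-Mead")
# M_opt = res.x
```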

According to a second possibility, it is possible to invert the computation by expressing the angles α and β as functions of the commands Cy, Cp, and not the other way round. This gives the function g* such that:

g^* = \iint \left[ \left(\frac{\partial^2 \alpha}{\partial C_y^2}\right)^2 + 2\left(\frac{\partial^2 \alpha}{\partial C_y\,\partial C_p}\right)^2 + \left(\frac{\partial^2 \alpha}{\partial C_p^2}\right)^2 \right] dC_y\, dC_p + \iint \left[ \left(\frac{\partial^2 \beta}{\partial C_y^2}\right)^2 + 2\left(\frac{\partial^2 \beta}{\partial C_y\,\partial C_p}\right)^2 + \left(\frac{\partial^2 \beta}{\partial C_p^2}\right)^2 \right] dC_y\, dC_p

In this case, step 211 comprises determining the position of the point M minimizing the function G*, defined from R³ to R as follows:
$$ G^*(M) = \min_{\alpha,\, \beta} g^*(M, \alpha, \beta) $$

This step is carried out iteratively, for example by gradient descent, by computing, for each position of the point M, the angles α and β expressed as functions of the commands Cy, Cp minimizing the function g*, then adjusting the position of the point M.
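For completeness, a hedged sketch of this second variant, under the same assumed angle convention as above: the interpolation sites become the recorded command pairs (Cyi, Cpi) and the interpolated values are the angles, so that G*(M) sums the bending energies of α and β seen as functions of the commands.

```python
import numpy as np

def U(r):
    """Thin-plate spline kernel U(r) = r^2 log r^2, with U(0) = 0."""
    with np.errstate(divide="ignore", invalid="ignore"):
        out = r ** 2 * np.log(r ** 2)
    return np.where(r == 0.0, 0.0, out)

def bending_energy(sites, values):
    """Bending energy of the TPS interpolating `values` at the 2D `sites`."""
    n = len(sites)
    K = U(np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=-1))
    Q = np.column_stack([np.ones(n), sites])
    L = np.block([[K, Q], [Q.T, np.zeros((3, 3))]])
    w = np.linalg.lstsq(L, np.concatenate([values, np.zeros(3)]), rcond=None)[0][:n]
    return float(w @ K @ w)

def G_star(M, ref_positions, cy_ref, cp_ref):
    """G*(M): the sites are the command pairs, the values are the angles of P_i - M."""
    d = ref_positions - M
    alpha = np.arctan2(d[:, 0], d[:, 2])
    beta = np.arctan2(d[:, 1], np.hypot(d[:, 0], d[:, 2]))
    cmd_sites = np.column_stack([cy_ref, cp_ref])
    return bending_energy(cmd_sites, alpha) + bending_energy(cmd_sites, beta)

# With the same hypothetical data as in the previous sketch:
# M_opt = scipy.optimize.minimize(G_star, np.zeros(3),
#                                 args=(ref_positions, cy_ref, cp_ref),
#                                 method="Nelder-Mead").x
```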

At the end of step 211, whichever variant is used, an optimal position of the point M is thus obtained.

It is then possible to determine, in a step 212, the angular commands Cy, Cp minimizing f* for this position of the point M.

The minimization of this term is carried out under the constraints resulting from step 100, according to which:
$$ C_y\big(\alpha(P_i - M), \beta(P_i - M)\big) = C_{yi} $$
$$ C_p\big(\alpha(P_i - M), \beta(P_i - M)\big) = C_{pi} $$
for i = 1, …, n, where n is the number of reference positions recorded in step 100.

According to the article by F. L. Bookstein, "Principal Warps: Thin-Plate Splines and the Decomposition of Deformations", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 11, No. 6, June 1989, the angular commands minimizing the function f* for a fixed M are of the thin-plate spline type (TPS).

The angular command Cy (resp. Cp) is written as follows:
$$ C_y(\alpha, \beta) = a_1 + a_\alpha\,\alpha + a_\beta\,\beta + \sum_{i=1}^{n} w_i\, U\Big(\big|\big(\alpha(P_i - M), \beta(P_i - M)\big) - (\alpha, \beta)\big|\Big) $$
with:
$$ U(r) = r^2 \log r^2 $$
The points Pi are the n points obtained in step 100, to which correspond respective angle values αi, βi. Having at least 6 points Pi provides at least as many known points as degrees of freedom (3 degrees of freedom for a1, aα and aβ, and three more for the position of the point M).
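As a small illustration, once the coefficients are known, evaluating a command of this form is direct. The sketch below is illustrative only; the coefficient names (a1, a_alpha, a_beta, w) and the `sites` array of pairs (α(Pi − M), β(Pi − M)) are assumptions about how the fitted quantities are stored, for example as produced by the linear system described next.

```python
import numpy as np

def U(r):
    """U(r) = r^2 log r^2, with the convention U(0) = 0."""
    with np.errstate(divide="ignore", invalid="ignore"):
        out = r ** 2 * np.log(r ** 2)
    return np.where(r == 0.0, 0.0, out)

def eval_angular_command(alpha, beta, a1, a_alpha, a_beta, w, sites):
    """C(alpha, beta) = a1 + a_alpha*alpha + a_beta*beta + sum_i w_i U(|site_i - (alpha, beta)|)."""
    r = np.linalg.norm(sites - np.array([alpha, beta]), axis=1)
    return a1 + a_alpha * alpha + a_beta * beta + float(w @ U(r))
```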

The following matrices are defined:
$$ K = \begin{pmatrix} 0 & U\big(|(\alpha_1,\beta_1)-(\alpha_2,\beta_2)|\big) & \cdots & U\big(|(\alpha_1,\beta_1)-(\alpha_n,\beta_n)|\big) \\ U\big(|(\alpha_2,\beta_2)-(\alpha_1,\beta_1)|\big) & 0 & \cdots & U\big(|(\alpha_2,\beta_2)-(\alpha_n,\beta_n)|\big) \\ \vdots & \vdots & \ddots & \vdots \\ U\big(|(\alpha_n,\beta_n)-(\alpha_1,\beta_1)|\big) & U\big(|(\alpha_n,\beta_n)-(\alpha_2,\beta_2)|\big) & \cdots & 0 \end{pmatrix}, \quad n \times n $$
where |Pi − Pj| denotes the distance between the control points Pi = (αi, βi) and Pj = (αj, βj),
$$ Q = \begin{pmatrix} 1 & \alpha_1 & \beta_1 \\ 1 & \alpha_2 & \beta_2 \\ \vdots & \vdots & \vdots \\ 1 & \alpha_n & \beta_n \end{pmatrix}, \quad n \times 3 $$
and
$$ L = \begin{pmatrix} K & Q \\ Q^T & O \end{pmatrix}, \quad (n+3) \times (n+3) $$
where Q^T is the transpose of Q and O is a 3×3 zero matrix.

Let V = (v1, …, vn) be a vector of length n containing the commands Cyi (resp. Cpi) acquired in step 100, and let Y = (V | 0 0 0) be a column vector of dimension n + 3. The vector W = (w1, …, wn) and the coefficients a1, aα and aβ are given by the equation:
$$ L^{-1} Y = \big(W \mid a_1\; a_\alpha\; a_\beta\big)^T $$
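A near-literal transcription of this linear system might read as follows (a sketch only: `angle_sites` is assumed to hold the pairs (α(Pi − M), β(Pi − M)) for the chosen M, and `v` the commands Cyi, or respectively Cpi, recorded in step 100).

```python
import numpy as np

def U(r):
    """Thin-plate spline kernel U(r) = r^2 log r^2, with U(0) = 0."""
    with np.errstate(divide="ignore", invalid="ignore"):
        out = r ** 2 * np.log(r ** 2)
    return np.where(r == 0.0, 0.0, out)

def fit_tps(angle_sites, v):
    """Solve L^{-1} Y = (W | a1 a_alpha a_beta)^T for one angular command."""
    n = len(angle_sites)
    # K: n x n, entries U(|(alpha_i, beta_i) - (alpha_j, beta_j)|), zero diagonal.
    K = U(np.linalg.norm(angle_sites[:, None, :] - angle_sites[None, :, :], axis=-1))
    # Q: n x 3, rows (1, alpha_i, beta_i).
    Q = np.column_stack([np.ones(n), angle_sites])
    # L: (n+3) x (n+3) block matrix [[K, Q], [Q^T, 0]].
    L = np.block([[K, Q], [Q.T, np.zeros((3, 3))]])
    # Y: the commands padded with three zeros.
    Y = np.concatenate([v, np.zeros(3)])
    sol = np.linalg.solve(L, Y)
    w, (a1, a_alpha, a_beta) = sol[:n], sol[n:]
    return w, a1, a_alpha, a_beta
```

Calling the same function with the Cpi values gives the coefficients of Cp; the bending energy of the fitted spline is proportional to wᵀKw, which is also what the earlier sketches use as the value of F*(M) in step 211.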

Advantageously, but optionally, the angular commands Cy, Cp may be regularized thin-plate spline functions, as described in the publication by G. Donato and S. Belongie, "Approximate Thin Plate Spline Mappings", Computer Vision - ECCV 2002, Springer Berlin Heidelberg, 2002, pp. 21-31.

The use of regularized splines relaxes the constraint on the angular commands at the reference points Pi sighted in step 100 (the commands obtained do not take exactly the values Cyi and Cpi when the object is at position Pi), and thus takes possible measurement noise into account. This yields a more robust control law.

In practice, according to an advantageous embodiment, during step 100 the commands Cy and Cp are then not the commands minimizing the function f* exactly, but commands approaching the minimum. The function minimized is therefore not f* but a function fr defined as the sum of f* and of residual errors:
$$ f_r = f^* + \lambda \sum_{i=1}^{n} \Big[ \big(C_y(\alpha(P_i - M), \beta(P_i - M)) - C_{yi}\big)^2 + \big(C_p(\alpha(P_i - M), \beta(P_i - M)) - C_{pi}\big)^2 \Big] $$
where λ is a predetermined constant, for example equal to 1.

The expressions of Cy and Cp obtained when minimizing the function fr are identical to those obtained when minimizing f*, but with different values for a1, aα, aβ and the wi.
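As a hedged illustration of this regularized variant: following Donato and Belongie, a common implementation replaces K by K + λreg·I in the system above, which relaxes exact interpolation at the Pi. The patent does not spell out this substitution, and its λ weights the residuals rather than the bending term, so λreg plays, up to constants, the role of 1/λ in the expression of fr; the sketch below should be read with that caveat.

```python
import numpy as np

def U(r):
    """Thin-plate spline kernel U(r) = r^2 log r^2, with U(0) = 0."""
    with np.errstate(divide="ignore", invalid="ignore"):
        out = r ** 2 * np.log(r ** 2)
    return np.where(r == 0.0, 0.0, out)

def fit_tps_regularized(angle_sites, v, lam_reg=1.0):
    """Same linear system as before, with K replaced by K + lam_reg * I (regularized TPS)."""
    n = len(angle_sites)
    K = U(np.linalg.norm(angle_sites[:, None, :] - angle_sites[None, :, :], axis=-1))
    K = K + lam_reg * np.eye(n)          # relaxes exact interpolation at the points P_i
    Q = np.column_stack([np.ones(n), angle_sites])
    L = np.block([[K, Q], [Q.T, np.zeros((3, 3))]])
    sol = np.linalg.solve(L, np.concatenate([v, np.zeros(3)]))
    return sol[:n], sol[n:]              # (w_i) and (a1, a_alpha, a_beta)
```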

Returning to figure 2a, the method also comprises a step 220 of determining the focus command Cd.

This step comprises minimizing the function f including the term related to the focus command, denoted fd (f = f* + fd), with M, Cy and Cp fixed. As a variant, only the term fd related to the focus command is minimized; this variant is equivalent because f* does not depend on the focus command Cd. The document by D. Eberly, "Thin-Plate Splines", Geometric Tools LLC, available at www.geometrictools.com, teaches the solution to the minimization of the function fd. The focus command Cd obtained is a cubic spline, which is written as follows:
$$ C_d = a_1 + a_\rho\,\rho + \sum_{i=1}^{n} w_i\, U_{1D}\big(\big|\rho - \rho(P_i - M)\big|\big) $$
with U1D(r) = r³, and with the constraint resulting from step 100 according to which:
$$ C_d\big(\rho(P_i - M)\big) = C_{di} $$

The coefficients a1, aρ and wi are computed in a manner analogous to the preceding description of the computation of regularized thin-plate splines: definition of the matrices K, Q and L, then obtaining the coefficients from the matrix L and from a vector V containing the focus commands Cdi corresponding to the known positions Pi.
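A possible transcription of this 1D fit, by analogy with the previous sketches (with the assumption that the affine part has only the two coefficients a1 and aρ, so Q has two columns and the zero block is 2×2); `rho_ref` stands for the distances ρ(Pi − M) and `cd_ref` for the focus commands Cdi of step 100.

```python
import numpy as np

def U1D(r):
    """1D kernel U_1D(r) = r^3 (r is a non-negative distance here)."""
    return r ** 3

def fit_focus_spline(rho_ref, cd_ref, lam_reg=0.0):
    """Fit Cd(rho) = a1 + a_rho*rho + sum_i w_i U_1D(|rho - rho_i|), optionally regularized."""
    n = len(rho_ref)
    K = U1D(np.abs(rho_ref[:, None] - rho_ref[None, :])) + lam_reg * np.eye(n)
    Q = np.column_stack([np.ones(n), rho_ref])     # affine part (1, rho_i): n x 2
    L = np.block([[K, Q], [Q.T, np.zeros((2, 2))]])
    sol = np.linalg.solve(L, np.concatenate([cd_ref, np.zeros(2)]))
    w, (a1, a_rho) = sol[:n], sol[n:]
    return w, a1, a_rho

def eval_focus_command(rho, w, a1, a_rho, rho_ref):
    """Evaluate the fitted focus command at a distance rho."""
    return a1 + a_rho * rho + float(w @ U1D(np.abs(rho_ref - rho)))
```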

According to an alternative embodiment shown in figure 2b, in step 200 the angular commands and the focus command are computed simultaneously.

In this case the command-determination step is denoted 200'; it is also an iterative step comprising, successively, a first sub-step 211' in which the optimal position of the point M is determined, by determining the commands Cy, Cp, Cd minimizing the function f for each fixed M, and a second sub-step 212' in which the commands Cy, Cp and Cd are determined for the fixed optimal point M resulting from step 211'.

Once the control law has been determined, the sighting system is used as follows, as illustrated in figure 3 (a minimal end-to-end sketch is given after this list):
  • The position of an object O in the reference frame of the detection optics 11 is determined in a step 410.
  • From the position of the object O and the position of the point M determined during the calibration method, the values of the angles α and β and of the distance d between the object O and the point M are deduced (420).
  • The control law determined during the calibration method then makes it possible to deduce (430) the commands to be applied to the motor and to the viewfinder in order to sight the object O precisely.
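Putting the pieces together, this runtime use might look as follows. The sketch reuses the angle convention and spline forms assumed in the earlier examples; `M_opt`, the fitted coefficient tuples, `angle_sites` and `rho_ref` are the hypothetical outputs of the calibration, and `O` is the object position reported by the detection optics.

```python
import numpy as np

def U(r):
    """Thin-plate spline kernel U(r) = r^2 log r^2, with U(0) = 0."""
    with np.errstate(divide="ignore", invalid="ignore"):
        out = r ** 2 * np.log(r ** 2)
    return np.where(r == 0.0, 0.0, out)

def spherical_from(O, M):
    """Step 420: angles (alpha, beta) and distance rho of the object O relative to M."""
    d = np.asarray(O, dtype=float) - np.asarray(M, dtype=float)
    rho = np.linalg.norm(d)
    alpha = np.arctan2(d[0], d[2])
    beta = np.arctan2(d[1], np.hypot(d[0], d[2]))
    return alpha, beta, rho

def commands_for(O, M, tps_cy, tps_cp, focus_spline, angle_sites, rho_ref):
    """Step 430: commands (Cy, Cp, Cd) to apply to the viewfinder to sight O."""
    alpha, beta, rho = spherical_from(O, M)
    r = np.linalg.norm(angle_sites - np.array([alpha, beta]), axis=1)
    w_y, a1_y, aa_y, ab_y = tps_cy                 # coefficients fitted for Cy
    w_p, a1_p, aa_p, ab_p = tps_cp                 # coefficients fitted for Cp
    cy = a1_y + aa_y * alpha + ab_y * beta + float(w_y @ U(r))
    cp = a1_p + aa_p * alpha + ab_p * beta + float(w_p @ U(r))
    w_d, a1_d, ar_d = focus_spline                 # coefficients fitted for Cd
    cd = a1_d + ar_d * rho + float(w_d @ np.abs(rho_ref - rho) ** 3)
    return cy, cp, cd
```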

The proposed method has the advantage that it can be implemented on any sighting system without prior knowledge of its kinematic model. It therefore imposes no constraints on the design or use of the system.

The method has a preferred application in the acquisition of iris images at a distance. In this case, the sighting system 1 comprises, as viewfinder 10, a movable narrow-field, high-resolution camera, for example with a resolution of the order of 200 dpi at one meter.

The detection optics 11 comprises two fixed cameras with a resolution relatively lower than that of the viewfinder, for example of the order of 30 dpi at one meter, and with a field relatively wider than that of the viewfinder, so as to be able to locate, in a scene, an iris of an individual whose image is to be acquired.

The position of the iris is acquired by the detection optics and communicated to the viewfinder which, having been calibrated by the method described above, can position itself to sight the iris precisely and acquire an image of it.

This method also makes it possible to sight an object such as an iris in a scene even if its position is not known a priori. It is therefore less constraining for the users whose iris image is acquired, since they do not have to position themselves in a particular way or at a specific location for an image of their iris to be acquired.

The method is nevertheless not limited to the field of acquiring images of biometric features, but is applicable to any object that one wishes to sight with a sighting system.

Claims (11)

  1. A method for calibrating a system for acquiring images of biometric features of individuals, said system comprising a viewfinder (10) and an optics (11) for detecting the position of an object (O) in space, characterized in that said calibration method comprises the determination of a control law to be applied to the viewfinder (10) to sight the object (O) depending on its position, said position being determined in a reference frame of the detection optics (11) and the control law comprising two angular controls (Cy, Cp) and a focus control (Cd) of the viewfinder (10), expressed as a function of the relative positions between the object to be sighted (O) and a concurrent point (M) of all lines of sight of the viewfinder (10),
    the calibration method comprising the steps consisting of:
    - sighting (100), with the viewfinder (10), objects located at least at six different reference positions (Pi) known in the reference frame of the detection optics (11) and noting the corresponding controls (Cyi, Cpi, Cdi) for each reference position,
    - from the reference positions (Pi) and the corresponding controls (Cyi, Cpi, Cdi) for each reference position, determining (200, 200') the position of the concurrent point (M) of the lines of sight and the control law (Cy, Cp) by minimizing a function of the second derivative of the control law by taking into account, as constraints, the controls (Cyi, Cpi, Cdi), noted for the various sighted points (Pi).
  2. The method for calibrating a system for acquiring images of biometric features of individuals according to claim 1, wherein the step of determining (200, 200') the position of the concurrent point (M) and the control law comprises the implementation of the steps consisting of:
    - determining (211, 211') a position of the concurrent point (M) corresponding to controls (Cy, Cp, Cd) minimizing the function of the second derivative of the control law, and
    - once the position of the concurrent point (M) is determined, determining (212, 212') the controls (Cy, Cp, Cd) minimizing the function of the second derivative of the control law.
  3. The calibration method according to claim 2, wherein the step of determining (211, 211') the position of the concurrent point (M) comprises the minimization, depending on said position, of the integral of the sum of the second derivatives of the angular controls (Cy, Cp).
  4. The calibration method according to claim 2, wherein the step of determining (211, 211') the position of the concurrent point (M) comprises the minimization, depending on said position, of the integral of the sum of the second derivatives of the relative angular positions between the point to be sighted (O) and the concurrent point (M), expressed as a function of the angular controls of the viewfinder (α(Cy, Cp), β(Cy, Cp)).
  5. The calibration method according to any of the preceding claims, wherein each angular control (Cy, Cp) is a function of two angles (α, β) of the thin-plate spline type.
  6. The calibration method according to claim 5, wherein each angular control (Cy, Cp) is a function of two angles (α, β) of the regularized thin-plate spline type.
  7. The calibration method according to any of claims 2 to 6, wherein the determination of the focus control is implemented when determining the angular controls (Cy, Cp) or subsequently to the determination of the angular controls and the point M.
  8. The calibration method according to claim 7, wherein the determination of the focus control (220, 211') comprises the minimization of the integral of the second derivative of the focus control.
  9. The calibration method according to any of the preceding claims, wherein the focus control (Cd) is a function of the cubic spline type.
  10. A system (1) for acquiring images of biometric features of individuals, comprising a viewfinder (10), an optics (11) for detecting a position of an object to be sighted, and a processing unit (12) including processing means, the system (1) for acquiring images of biometric features of individuals being characterized in that it is adapted to implement the method according to any of the preceding claims.
  11. A use of a system (1) for acquiring images of biometric features of individuals according to the preceding claim, comprising the steps consisting of:
    - acquiring (410) a position of an object (O) to be sighted in a reference frame of the detection optics,
    - deducing (420) relative positions between the object (O) and a concurrent point (M) of the lines of sight of the viewfinder (10), coordinates of the object (O) in a spherical reference frame centered on the concurrent point (M), and
    - from the control law determined during the calibration method according to any of claims 1 to 9, deducing (430) therefrom a control to be applied to the viewfinder (10) to sight the object.
EP15186507.8A 2014-11-05 2015-09-23 Method for calibrating a sight system Active EP3018625B8 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
FR1460691A FR3028128A1 (en) 2014-11-05 2014-11-05 METHOD OF CALIBRATING A VISEE SYSTEM

Publications (3)

Publication Number Publication Date
EP3018625A1 EP3018625A1 (en) 2016-05-11
EP3018625B1 true EP3018625B1 (en) 2018-12-26
EP3018625B8 EP3018625B8 (en) 2019-02-27

Family

ID=52807871

Family Applications (1)

Application Number Title Priority Date Filing Date
EP15186507.8A Active EP3018625B8 (en) 2014-11-05 2015-09-23 Method for calibrating a sight system

Country Status (3)

Country Link
US (1) US10018893B2 (en)
EP (1) EP3018625B8 (en)
FR (1) FR3028128A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9868212B1 (en) * 2016-02-18 2018-01-16 X Development Llc Methods and apparatus for determining the pose of an object based on point cloud data

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070035627A1 (en) * 2005-08-11 2007-02-15 Cleary Geoffrey A Methods and apparatus for providing fault tolerance in a surveillance system

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6442293B1 (en) * 1998-06-11 2002-08-27 Kabushiki Kaisha Topcon Image forming apparatus, image forming method and computer-readable storage medium having an image forming program
US6369818B1 (en) * 1998-11-25 2002-04-09 Be Here Corporation Method, apparatus and computer program product for generating perspective corrected data from warped information
US7058204B2 (en) * 2000-10-03 2006-06-06 Gesturetek, Inc. Multiple camera control system
GB0208909D0 (en) * 2002-04-18 2002-05-29 Canon Europa Nv Three-dimensional computer modelling
US7259784B2 (en) * 2002-06-21 2007-08-21 Microsoft Corporation System and method for camera color calibration and image stitching
JP4191449B2 (en) * 2002-09-19 2008-12-03 株式会社トプコン Image calibration method, image calibration processing device, image calibration processing terminal
US20070065004A1 (en) * 2005-08-01 2007-03-22 Topcon Corporation Three-dimensional measurement system and method of the same, and color-coded mark
WO2008089791A1 (en) * 2007-01-26 2008-07-31 Trimble Jena Gmbh Optical instrument and method for obtaining distance and image information
TWI389558B (en) * 2009-05-14 2013-03-11 Univ Nat Central Method of determining the orientation and azimuth parameters of the remote control camera

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070035627A1 (en) * 2005-08-11 2007-02-15 Cleary Geoffrey A Methods and apparatus for providing fault tolerance in a surveillance system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WOOD S N: "Thin plate regression splines", JOURNAL OF THE ROYAL STATISTICAL SOCIETY - SERIES B: STATISTICAL METHODOLOGY, BLACKWELL PUBLISHING, vol. 65, no. 1, 28 January 2003 (2003-01-28), pages 95 - 114, XP002518497, ISSN: 1369-7412, DOI: 10.1111/1467-9868.00374 *

Also Published As

Publication number Publication date
US20160124287A1 (en) 2016-05-05
EP3018625A1 (en) 2016-05-11
EP3018625B8 (en) 2019-02-27
FR3028128A1 (en) 2016-05-06
US10018893B2 (en) 2018-07-10

Similar Documents

Publication Publication Date Title
EP0948760B1 (en) Method for calibrating the initial position and the orientation of one or several mobile cameras
FR2836215A1 (en) Portable object photogrammetry system has gyrometer and accelerometer sets
EP2813434B1 (en) Test bench for star sensor, and test method
WO2006064051A1 (en) Method for processing images using automatic georeferencing of images derived from a pair of images captured in the same focal plane
FR2756626A1 (en) SYSTEM FOR MEASURING GAMES AND FLOORS BETWEEN OPPOSITE PARTS
EP2502202B1 (en) Method for estimating the movement of a travelling observation instrument flying over a celestial body
FR3054897A1 (en) METHOD FOR PRODUCING DIGITAL IMAGE, COMPUTER PROGRAM PRODUCT AND OPTICAL SYSTEM THEREOF
Harshaw et al. The Speckle Toolbox: A Powerful Data Reduction Tool for CCD Astrometry
EP2289026A2 (en) Method and device for the invariant affine recognition of shapes
FR3006296A1 (en) DRONE COMPRISING A MULTISPECTRAL IMAGE DEVICE FOR THE GENERATION OF MAPS REPRESENTING A PLANT STATE OF A CULTURE
EP2909671B1 (en) Method for designing a single-path imager able to estimate the depth of field
EP3018625B1 (en) Method for calibrating a sight system
WO2018007628A1 (en) Method and system for reconstructing a three-dimensional representation
EP3070643B1 (en) Method and device for object recognition by analysis of digital image signals representative of a scene
EP3073441B1 (en) Method for correcting an image of at least one object presented remotely to an imager and illuminated by an illumination system and camera system for carrying out said method
EP0608945B1 (en) Star sensor with CCD-array, method of detection and application to reposition a spacecraft
WO2021009431A1 (en) Method for determining extrinsic calibration parameters for a measuring system
EP2113460B1 (en) Method for characterising vibrations for an observation satellite.
WO2021156026A1 (en) Method for calibrating the extrinsic characteristics of a lidar
FR2981149A1 (en) Aircraft, has attitude measurement device including optical sensor that captures images of stars, where attitude measurement device measures attitude of aircraft at both day and night from images taken by sensor
EP1371958A1 (en) Method and apparatus for extracting the spectral signature of a point target
EP3182374A1 (en) Automated method for three-dimensional optical measurement
FR3047830A1 (en) METHOD FOR DETERMINING THE DIRECTION OF MOVING OBJECTS IN A SCENE
EP4260284A1 (en) Method for calibrating an ultra wide angle camera
EP3170303B1 (en) Method for processing high-frequency movements in an optronic system, optronic system, computer program product and storage means

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20150923

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20170217

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20180921

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 1082465

Country of ref document: AT

Kind code of ref document: T

Effective date: 20190115

RAP2 Party data changed (patent owner data changed or rights of a patent transferred)

Owner name: IDEMIA IDENTITY & SECURITY FRANCE

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602015022169

Country of ref document: DE

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: FRENCH

REG Reference to a national code

Ref country code: CH

Ref legal event code: PK

Free format text: RECTIFICATION B8

REG Reference to a national code

Ref country code: AT

Ref legal event code: REZ

Ref document number: 1082465

Country of ref document: AT

Kind code of ref document: T

Effective date: 20181226

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181226

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190326

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181226

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181226

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181226

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190326

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20181226

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181226

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181226

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190327

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181226

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181226

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181226

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181226

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190426

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181226

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181226

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190426

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181226

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181226

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181226

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181226

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602015022169

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181226

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20190927

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181226

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181226

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181226

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190923

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190930

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190923

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190930

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20190930

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190930

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20190923

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190923

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181226

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181226

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20150923

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181226

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20181226

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230428

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20230822

Year of fee payment: 9

Ref country code: DE

Payment date: 20230822

Year of fee payment: 9