EP4182775A1 - Computer method, device and program for assisting the positioning of extended reality systems - Google Patents

Computer method, device and program for assisting the positioning of extended reality systems

Info

Publication number
EP4182775A1
Authority
EP
European Patent Office
Prior art keywords
point
user
points
marker
sight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP21746530.1A
Other languages
English (en)
French (fr)
Inventor
Fabrice Malaingre
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Theoris
Original Assignee
Theoris
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Theoris filed Critical Theoris
Publication of EP4182775A1
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes

Definitions

  • the invention relates to extended reality systems, for example to augmented reality or mixed reality systems, and to positioning assistance, to correlate real and virtual spaces and thus allow the addition of virtual objects to a user's perception of a real environment.
  • Such an insertion typically consists of inserting virtual objects into a user's field of vision, for example by projecting a representation of these virtual objects onto a transparent surface such as spectacle lenses, or adding virtual objects to a representation of the real environment, usually in images from one or more video streams.
  • an extended reality system comprises a rendering engine capable of enriching a real environment with a virtual scene, that is to say of adding a representation of virtual objects to reality, according to a particular point of view, typically the point of view of the user of the system.
  • each rendering engine uses a predetermined reference frame of its own, rigidly linked to the real environment to be enriched.
  • the position and orientation of the virtual objects to be added to reality must be determined with precision according to elements of the real environment, for example according to the position of singular points of the real scene such as characteristic points of a real object to be enriched. Regardless of how the position and orientation of the virtual objects are determined, it is important to establish a link between the real and virtual environments to ensure spatial and temporal coherence between these environments, that is to say that virtual objects are placed in the right place and at the right time in the user's perception of the real world.
  • an extended reality system determines in real time the perception that its user has of the real environment.
  • many extended reality systems are mobile, since in general at least a part of these systems is carried by their user; it is therefore necessary to determine in real time the position and orientation of the mobile part of these systems with respect to the real environment, in order to ensure spatial and temporal coherence between the real scene and the virtual scene.
  • for these purposes, a localization with 6 degrees of freedom (6DoF tracking), implementing techniques of the SLAM type (acronym for Simultaneous Localization And Mapping) or equivalent, is often used.
  • a passage matrix is a matrix correlating the real and virtual environments, converting coordinates expressed in one of the two frames into coordinates expressed in the other.
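For illustration (this note is not part of the original description): such a passage matrix is commonly written as a 4×4 homogeneous transform combining a rotation R and a translation t, so that coordinates expressed in the real frame are converted into coordinates expressed in the virtual frame:

$$
\begin{pmatrix} x^{v} \\ y^{v} \\ z^{v} \\ 1 \end{pmatrix}
=
\begin{pmatrix} R_{3\times 3} & t \\ 0\;\;0\;\;0 & 1 \end{pmatrix}
\begin{pmatrix} x^{r} \\ y^{r} \\ z^{r} \\ 1 \end{pmatrix}
$$

The description does not impose this particular parameterization; the homogeneous matrix is simply one common convention.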
  • the present invention helps in particular to solve these problems.
  • the invention proposes a positioning aid mechanism for extended reality systems, allowing a real/virtual correlation.
  • the extended reality system comprising at least one part that is mobile in a real environment and being provided with a location device for determining a position and/or a relative orientation of the mobile part according to a first benchmark rigidly linked to the real environment, the first benchmark being defined by the location device, the extended reality system further comprising a device for tracking the gaze of a user of the extended reality system, the method comprising:
  • the method according to the invention thus makes it possible to establish, in a simple and rapid way and without particular knowledge, a link between a real marker linked to a real environment and defined by a localization device on the one hand, and a predefined virtual marker also linked to this real environment on the other hand.
  • the virtual marker is for example a marker used by a rendering engine.
  • the mobile part comprises the location device, the location device being a simultaneous location and mapping device of the SLAM type.
  • the gaze tracking device is rigidly linked to the location device.
  • the obtaining of several estimates of the position of a point fixed by the user comprises an evaluation of dispersion, the position estimates obtained being such that their dispersion is below a threshold.
  • obtaining several estimates of the position of a point fixed by the user comprises filtering of estimated positions, the filtering being based on a distribution of estimated positions.
  • At least one position estimate is obtained from at least two lines of sight and, if the at least two lines of sight are not coplanar, the at least one estimated position is the position of the center of a segment perpendicular to the at least two lines of sight.
  • the second marker is a marker used by a rendering engine of the extended reality system.
  • Another aspect of the invention relates to a processing unit configured to execute each of the steps of the method described above. The advantages and variants of this processing unit are similar to those mentioned above.
  • a computer program, implementing all or part of the method described above, installed on pre-existing equipment, is in itself advantageous, since it makes it possible to establish in a simple and rapid manner a link between a real marker linked to a real environment, defined by a localization device, and a predefined virtual marker also linked to this real environment.
  • the present invention also relates to a computer program comprising instructions for implementing the method described above, when this program is executed by a processor.
  • This program can use any programming language (for example, an object-oriented language or another) and be in the form of interpretable source code, partially compiled code or fully compiled code.
  • Another aspect relates to a non-transitory storage medium for a computer-executable program, comprising a set of data representing one or more programs, said one or more programs comprising instructions for causing a computer, comprising a processing unit operationally coupled to memory means and to an input/output interface module, to execute all or part of the method described above when it executes said one or more programs.
  • FIG. 1 illustrates an example of an environment in which the invention can be implemented according to particular embodiments of the invention
  • FIG. 2, comprising Figures 2a and 2b, schematically illustrates the estimation of the coordinates of a point fixed by a user, according to a projection on a transverse plane and according to a projection on a sagittal plane;
  • FIG. 3 schematically illustrates the estimation of the coordinates of a point fixed by a user from several lines of sight obtained from several positions of the user, when the latter moves while keeping his gaze on the same fixed point;
  • FIG. 4 illustrates an example of estimation of the coordinates of a point fixed by a user, from two lines of sight relative to this same point fixed by the user from two different positions;
  • FIGS. 5a and 5b schematically illustrate the results of measurements carried out to estimate the coordinates of a singular point of a real environment, fixed by a user, using a gaze tracking device providing an estimate of the coordinates of a fixed point and a gaze tracking device providing a characterization of the line of sight passing through the fixed point, respectively;
  • FIG. 6 illustrates an example of steps for determining a passage matrix between a real marker defined by a localization device and a predetermined virtual marker, the real and virtual markers being rigidly linked to the same real environment;
  • FIG. 7 illustrates a first example of steps for estimating the coordinates of points measured by a gaze tracking device when the user is aiming at a singular point in a real environment from several distinct observation positions, in a real frame linked to a localization device of an extended reality system, the gaze tracking device used being capable of directly supplying the coordinates of the point fixed by the user;
  • FIG. 8 illustrates a second example of steps for estimating the coordinates of points measured by a gaze tracking device when the user is aiming at a singular point in a real environment from several distinct observation positions, in a real frame linked to a localization device of an extended reality system, the gaze tracking device used being limited to providing the characterization of the line of sight passing through the point fixed by the user;
  • FIG. 9 illustrates an example of steps for calculating the coordinates of a set of measured points, representing the same point fixed by a user, from a set of lines of sight;
  • FIG. 10 illustrates an example of steps for estimating the coordinates of a point fixed by a user from the coordinates of several measured points (cloud of measured points);
  • FIG. 11 illustrates an example of a device that can be used to implement, at least partially, one embodiment of the invention, in particular the steps described with reference to Figures 6 to 10.
  • the invention provides a positioning aid for an extended reality system comprising a location device, by establishing a relationship between a predetermined reference frame rigidly linked to the real environment, for example a frame used by a rendering engine, called a virtual marker, and a frame defined by the location device of this extended reality system, also rigidly linked to this same real environment, called a real marker.
  • the extended reality system comprises a gaze tracking device making it possible to estimate the coordinates of a point in the real environment, fixed by a user, in the real frame defined by the location device.
  • FIG. 1 illustrates an example of an environment in which the invention can be implemented according to particular embodiments.
  • a user 100 provided with an extended reality headset 105 can move around in a real environment 110.
  • the extended reality headset 105 here comprises a gaze tracking device (not shown), rigidly linked to the extended reality headset.
  • the extended reality headset 105 further comprises, in particular, a localization device as well as a rendering engine (not represented), or is connected to such means, to estimate the movements of the extended reality headset 105 (changes of position and/or orientation) and display virtual objects according to particular positions and orientations.
  • the rendering engine can be implemented in information processing means such as the device described with reference to FIG. 11.
  • the location device also comprises information processing means, the same as those used by the rendering engine or others, as well as one or more motion sensors.
  • the user can move freely in the real environment 110.
  • while the real environment here is an interior space, it could also be an exterior space.
  • the user can move his head in three distinct directions (forward-backward, left-right and up-down) and turn it along three distinct axes (generally called yaw, pitch and roll), i.e. 6 degrees of freedom.
  • the real environment 110 includes fixed elements, for example the windows 115-1 and 115-2 and the table 120. It can also include elements that are normally fixed but can be moved, for example the plant 125 and the armchair 130.
  • the invention makes it possible to establish a relationship between the position of fixed elements of a real environment, for example the windows 115-1 and 115-2 and the table 120, according to a predetermined virtual reference frame used by a rendering engine, and the position of these elements in a real frame defined by the localization device. For these purposes, it is necessary to identify at least three different singular points of the real environment and to determine the coordinates of these points both in the virtual frame and in the real frame. Knowledge of the coordinates of these three points in the virtual and real coordinate systems makes it possible to establish a passage matrix converting coordinates expressed in the virtual system into coordinates expressed in the real system (and/or vice versa).
  • These singular points are preferably chosen so as to be easily identifiable by a user, for example the user 100, but also easily identifiable on a corresponding digital model. They may be, for example, the upper left and lower right corners of the windows 115-1 and 115-2.
  • a gaze tracking device makes it possible to estimate the coordinates of a point fixed by a user in a reference frame linked to the gaze tracking device, or in any other reference frame for which a passage matrix with a frame linked to the gaze tracking device is known. Indeed, the position of the center of the pupil in relation to the eye makes it possible to determine an optical axis. The intersection of the optical axes from the two eyes represents the fixed point.
  • Figure 2, comprising Figures 2a and 2b, schematically illustrates the estimation of the coordinates of a point fixed by a user, according to a projection on a transverse plane and according to a projection on a sagittal plane.
  • a first optical axis 200-1 can be defined from a first eyeball 205-1, by the line passing through the center 210-1 of the eyeball and the center of the pupil 215-1.
  • a second optical axis 200-2 can be defined from a second eyeball 205-2, by the line passing through the center 210-2 of the eyeball and the center of the pupil 215-2.
  • the intersection of the optical axes 200-1 and 200-2 corresponds to the point fixed by the user, referenced 220.
  • FIG. 2b being a view according to a projection on a sagittal plane, the optical axes 200-1 and 200-2 are superimposed. Only the optical axis 200-1 is "visible" here.
  • The line of sight 225, which corresponds to the direction of the user's gaze, can be defined as the straight line passing through the point fixed by the user and an optical center.
  • the latter is a point rigidly linked to the user which can be defined in different ways, in particular depending on the gaze tracking device. It can for example correspond to the center of the eyeball of the dominant eye of the user or to the center of the interpupillary segment, the ends of which correspond to the centers of the eyeballs (referenced 230 in FIG. 2a).
  • a SLAM-type location device is capable of defining its position and its orientation in a real reference defined by this location device, this reference being rigidly linked to the real environment.
  • since this location device is rigidly linked to the gaze tracking device, it is possible to determine, depending on the capabilities of this gaze tracking device, either the coordinates of a point fixed by a user, or a characterization of the line of sight passing through the point fixed by the user, in the real reference frame defined by the localization device, this frame being rigidly linked to the real environment.
  • the gaze tracking device implemented in the mixed reality headset marketed by Microsoft under the name HoloLens 2 makes it possible to obtain in real time a characterization of the line of sight passing through the point fixed by the gaze of the user in a real benchmark defined by the localization device implemented in this mixed reality headset (this real benchmark being rigidly linked to the real environment ).
  • if the gaze tracking device is limited to providing a characterization of the line of sight passing through the point fixed by a user, it is possible to estimate the coordinates of the fixed point by asking the user to move while continuing to stare at the same point.
  • FIG. 3 schematically illustrates the estimation of the coordinates of a point fixed by a user from several lines of sight obtained from several positions of the user, when the latter moves while keeping his gaze on the same fixed point.
  • a gaze tracking device capable of determining a line of sight can be used to determine the line of sight 300-1 of a user located at a first position 305-1 whose gaze is fixed on a point 310.
  • when the user moves to a second position 305-2 while keeping his gaze on the same point, the gaze tracking device can then determine the line of sight 300-2.
  • the coordinates of the target point 310 can then be calculated from the lines of sight 300-1 and 300-2. It is observed that if the lines of sight do not belong to the same plane, the target point can, for example, be defined as the middle of a segment perpendicular to the two lines of sight, each end of which belongs to one of the two lines of sight, as illustrated in FIG. 4, the length of this segment being the shortest possible distance between the two lines of sight.
  • the aiming, from several distinct locations, of a point fixed by a user, in order to estimate the coordinates of this point, can be called a moving sighting.
  • FIG. 4 illustrates an example of estimation of the coordinates of a point fixed by a user, from two lines of sight relating to this same point fixed by the user from two different positions.
  • the point 400, located in the middle of the segment 405 perpendicular to the lines of sight 300-1 and 300-2, one end of which belongs to the line of sight 300-1 and the other end to the line of sight 300-2, can be considered as the point fixed by the user, being a point equidistant from the two lines of sight 300-1 and 300-2. It is recalled here that if the lines of sight are coplanar, the point fixed by the user is the point of intersection of these lines of sight.
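As an illustrative sketch (not taken from the patent text), the construction of FIG. 4 can be written as follows in Python with NumPy; the function name and the tolerance eps are choices of this sketch:

```python
import numpy as np

def estimate_fixed_point(o1, d1, o2, d2, eps=1e-9):
    """Estimate the point fixed by the user from two lines of sight.

    Each line of sight is given by an optical center o and a direction d,
    expressed in the real frame defined by the localization device.
    Returns the middle of the segment perpendicular to both lines
    (point 400 of FIG. 4); returns None for parallel lines.
    """
    o1, d1, o2, d2 = (np.asarray(v, dtype=float) for v in (o1, d1, o2, d2))
    w = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b            # vanishes when the lines are parallel
    if abs(denom) < eps:
        return None
    s = (b * e - c * d) / denom      # parameter of the closest point on line 1
    t = (a * e - b * d) / denom      # parameter of the closest point on line 2
    p1 = o1 + s * d1                 # end of segment 405 on line of sight 300-1
    p2 = o2 + t * d2                 # end of segment 405 on line of sight 300-2
    return (p1 + p2) / 2.0           # point 400, the estimated fixed point
```

For coplanar, non-parallel lines the returned midpoint coincides with their intersection, which matches the remark above.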
  • the relationship between the real environment and the virtual environment is determined on the basis of the stereoscopic vision of a user equipped with an extended reality system (for example an extended reality headset).
  • if the extended reality system allows direct vision (called optical see-through or OST), the user's vision is not distorted and the calculations described above can be implemented directly.
  • if the vision is indirect (called video see-through or VST), for example when the environment is acquired using sensors, for example stereoscopic video sensors, then reproduced, for example displayed on screens, it is necessary to verify that the perception of the environment is on a 1:1 scale to directly implement the calculations described previously. If the scale is not 1:1, a corresponding correction is necessary.
  • Examples of OST-type extended reality headsets that can be used to implement the invention are the headset known as HoloLens 2 from the company Microsoft and the headset known as Magic Leap One from the company Magic Leap (Magic Leap One and Magic Leap are trademarks).
  • Examples of VST-type extended reality headsets that can be used to implement the invention are the headset known as XR-1 from the company Varjo (XR-1 and Varjo are trademarks) and the headset known under the name Lynx-R1 from the company Lynx (Lynx-R1 and Lynx are trademarks).
  • the coordinates of a point fixed by a user are estimated using monocular vision, its coordinates then being estimated, for example, from two lines of sight determined by observing the same point with the same eye from two different locations, as described with reference to Figure 3.
  • the establishment of a relationship between a real marker defined by a localization device and a predetermined virtual marker (for example a marker of a rendering engine), the real and virtual markers being rigidly linked to the same real environment, is carried out by a moving sighting of several singular points.
  • a user provided with an extended reality device comprising a gaze tracking device aims at a singular point in the real environment by staring at it for a certain time, for example a few seconds, while moving and continuing to fix the singular point. The user does so for at least three singular points.
  • preferably, no virtual object is displayed in the user's field of vision during this correlation phase.
  • FIG. 5 comprising FIGS. 5a and 5b, schematically illustrates the results of measurements carried out to estimate the coordinates of a singular point of a real environment, fixed by a user, using a gaze tracking device providing the coordinates of a fixed point and a gaze tracking device providing a characterization of the line of sight passing through the fixed point, respectively.
  • this estimation of the coordinates of the point set by the user is based on several measurements from the gaze tracking device, this plurality of measurements possibly making it possible to overcome the limitations of this device.
  • the main limitation lies in the imprecision of the coordinates provided. This imprecision results in particular from the imprecision of the gaze tracking device itself, but also from the imprecision of the location device, as well as their combination.
  • the use of a VST-type extended reality device can also increase this inaccuracy because a user may have to adapt his vision to a distance inconsistent with the convergence distance of his eyes (the distance of the device display in relation to the distance from the targeted singular point).
  • a mobile sighting carried out using a gaze tracking device providing the coordinates of a point fixed by a user can produce as results the measured points 510-1 to 510-4 when the user aims at the singular point 500 from positions 505-1 to 505-4, respectively.
  • the main limitation resides in the fact that it is necessary to have at least two lines of sight to be able to estimate the coordinates of the target point.
  • To this limitation can also be added an inaccuracy linked to the gaze tracking device itself, or linked to the location device, or even to their combination.
  • a gaze tracking device providing a characterization of the line of sight passing through a point fixed by a user can produce as results the measured lines of sight 515-1 to 515-4, when the user fixes the singular point 500' from the positions 505'-1 to 505'-4. These measured lines of sight can then be used to estimate the coordinates of measured points such as measured point 520.
  • the results of a moving sight can be formalized in the form of a cloud of measured points which are distributed around the singular point fixed by a moving user, and whose coordinates in the reference frame of the gaze tracking device have been estimated.
  • the coordinates of this singular point can then be determined as being the coordinates of the center of gravity of the cloud of points measured resulting from the moving sight.
  • it is also possible to filter the measured points relating to the same singular point fixed by the user, to keep only the measured points closest to each other.
  • the distribution of these measured points can be taken into account to ignore the most distant points and consider only the remaining points for the purpose of calculating the coordinates of the center of gravity, which can be considered as being the fixed singular point.
  • FIG. 6 illustrates an example of steps for determining a passage matrix between a real marker defined by a location device and a predetermined virtual marker, the real and virtual markers being rigidly linked to the same real environment.
  • the virtual marker is, for example, a marker used by the rendering engine of the extended reality system.
  • the steps illustrated in FIG. 6 are implemented in a computer connected to or integrated into an extended reality system, at least one part of which comprises a gaze tracking device and a location device rigidly linked to one another.
  • the purpose of these steps is in particular to estimate the coordinates of singular points of a real environment in a real reference defined by a localization device and to determine a transition matrix between a predetermined virtual reference and the real reference.
  • the coordinates of the singular points in the predetermined virtual frame can, for example, be obtained from an external system or determined according to standard measurement techniques.
  • the number of singular points to be fixed by a user, denoted n, is greater than or equal to 3.
  • a first step 600 includes the initialization of a variable i, representing an index of singular points to be fixed in order to establish a relationship between the real benchmark defined by the location device and a predetermined virtual benchmark, here at the value 1.
  • the coordinates of the singular point having index i, in the predetermined virtual frame, are then obtained (step 605). They can, for example, be obtained from the rendering engine of the extended reality system. These coordinates in the virtual coordinate system are noted here (x_i^v, y_i^v, z_i^v).
  • a next step consists in inhibiting the display of virtual elements (step 615) in order to avoid disturbing the gaze of the user when he fixes the singular point of index i.
  • a following step has as its object the sighting of the singular point itself. Embodiments of this step 620 are described in more detail with reference to Figures 7 and 8, which illustrate two different embodiments of this sighting, either using a gaze tracking device providing the coordinates of the fixed point (Figure 7), or using a gaze tracking device providing a characterization of the line of sight passing through the fixed point (Figure 8).
  • step 620 allows the recording of a set of points measured from several different observation positions of the user, for example from a number of observation positions denoted p (the result of step 620 is thus a cloud of p measured points).
  • the estimated coordinates, in the real coordinate system linked to the localization device of the extended reality system, of the point measured when the user aims at the singular point of index i, denoted (x_{i,k}^r, y_{i,k}^r, z_{i,k}^r), are obtained.
  • the number of positions p can vary from the estimation of the coordinates of a singular point to the estimation of the coordinates of another singular point. It is also observed that while the number of observation positions p is greater than or equal to 1 for a gaze tracking device capable of directly providing the coordinates of the fixed point, the number p is greater than or equal to 2 in the case of a gaze tracking device providing a characterization of the line of sight (at least 2 measured lines of sight are needed to estimate the coordinates of 1 measured point).
  • the coordinates of the singular point having the index i, in the real coordinate system defined by the location device, denoted (x_i^r, y_i^r, z_i^r), are calculated (step 625).
  • the estimated coordinates (x_i^r, y_i^r, z_i^r) are here calculated from the coordinates (x_{i,k}^r, y_{i,k}^r, z_{i,k}^r) of the cloud of measured points recorded when the user fixed the singular point having the index i from the different observation positions k.
  • a test is then carried out to determine if all the singular points have been targeted by the user, that is to say if the value of the index i is less than the number of singular points to be targeted, denoted n (step 630).
  • if the value of the index i is less than n, the index i is incremented by 1 (step 635) and the previous steps (steps 605 to 625) are repeated to determine the coordinates of the singular point having the new value of the index i.
  • otherwise, when all the singular points have been targeted, the passage matrix between the real landmark defined by the location device and the predetermined virtual landmark is calculated (step 640). This matrix makes it possible in particular to determine the position and orientation at which virtual objects must be rendered in order to be spatially coherent with the real environment in which the extended reality system evolves.
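The description does not prescribe a particular algorithm for step 640. A common way to fit a rigid passage matrix from n ≥ 3 point correspondences (the singular points expressed in the real frame and in the virtual frame) is the SVD-based Kabsch/Umeyama solution sketched below; the function name and the homogeneous-matrix convention are assumptions of this sketch:

```python
import numpy as np

def passage_matrix(real_pts, virtual_pts):
    """Rigid transform (4x4 homogeneous matrix) mapping real-frame coordinates
    onto virtual-frame coordinates, fitted from n >= 3 point correspondences.

    real_pts[i] and virtual_pts[i] are the coordinates of the same singular
    point i in the real frame (localization device) and in the virtual frame
    (rendering engine), respectively.
    """
    P = np.asarray(real_pts, dtype=float)      # n x 3, real frame
    Q = np.asarray(virtual_pts, dtype=float)   # n x 3, virtual frame
    cp, cq = P.mean(axis=0), Q.mean(axis=0)    # centroids of the two sets
    H = (P - cp).T @ (Q - cq)                  # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                         # best-fit rotation (det = +1)
    t = cq - R @ cp                            # translation
    M = np.eye(4)
    M[:3, :3], M[:3, 3] = R, t
    return M    # [x^v, y^v, z^v, 1]^T = M @ [x^r, y^r, z^r, 1]^T
```

With exactly three non-collinear singular points the fit is exact up to measurement noise; additional points simply give a least-squares solution.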
  • FIG. 7 illustrates a first example of steps for estimating the coordinates of points measured by a gaze tracking device when the user is aiming at a singular point in a real environment from several distinct observation positions, in a real landmark linked to a localization device of an extended reality system, the gaze tracking device used being capable of directly supplying the coordinates of the point fixed by the user.
  • the singular point targeted by the user is here the singular point having the index i.
  • the estimated coordinates of the point measured when the user fixes the singular point of index i from the observation position k, in the real reference linked to the localization device, are here noted (x_{i,k}^r, y_{i,k}^r, z_{i,k}^r).
  • a first step here is to initialize the value of the index k of the positions, to the value 1 (step 700).
  • the coordinates (x_{i,k}^r, y_{i,k}^r, z_{i,k}^r) of the point measured when the user aims at the singular point having the index i are then obtained from the gaze tracking device (step 705).
  • these coordinates are obtained in the real frame linked to the location device of the extended reality system comprising the gaze tracking device, the two devices being rigidly linked.
  • a test is carried out to determine whether a sufficient number of measurements have been carried out (step 710), that is to say, according to the example of FIG. 7, whether an estimate has been made for a number p of observation positions, by comparing the value of the index k with the number p.
  • this number p can be determined as a function of a predetermined duration, for example a duration comprised between 1 and 7 seconds, so as to perform as many measurements as possible in the predetermined time.
  • this number p can be chosen between two values, for example between 1 and 300.
  • if the value of the index k is strictly lower than the number p, the value of the index k is incremented by 1 and a pause of a predetermined duration is performed so that the user has time to move (step 715). This duration is, for example, a few tens of milliseconds. The previous steps (steps 705 and 710) are then repeated to obtain new measurements.
  • an estimation of the dispersion of the cloud of measured points is preferably carried out (step 720), to check the consistency of the measurements carried out.
  • the dispersion can be estimated as being the difference between the smallest abscissa and the largest abscissa, between the smallest ordinate and the largest ordinate, and between the smallest z-coordinate (cote) and the largest z-coordinate (which amounts to calculating the bounding box containing all the measured points).
  • these values are compared with a threshold, or with several thresholds, for example an abscissa threshold, an ordinate threshold and a z-coordinate threshold, to determine whether the dispersion is acceptable or not.
  • if the dispersion is not acceptable, the previous steps are re-executed to obtain new measurements. Otherwise, if the dispersion is acceptable, the measured coordinates (x_{i,k}^r, y_{i,k}^r, z_{i,k}^r), with k varying here from 1 to p, are memorized to be then used to estimate the coordinates of the targeted singular point of index i in the real reference linked to the localization device.
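A minimal sketch of the dispersion check of step 720 (also applicable to step 825 below), assuming a bounding-box criterion with one threshold per axis; the 5 cm thresholds are purely illustrative:

```python
import numpy as np

def dispersion_acceptable(points, thresholds=(0.05, 0.05, 0.05)):
    """Check the consistency of a cloud of measured points.

    The dispersion is estimated as the extent of the axis-aligned bounding box
    of the cloud along the x (abscissa), y (ordinate) and z (cote) axes, and
    compared with one threshold per axis.  The threshold values are left open
    by the description; 0.05 (e.g. 5 cm) is only an example.
    """
    pts = np.asarray(points, dtype=float)           # p x 3 measured points
    extents = pts.max(axis=0) - pts.min(axis=0)     # bounding-box size per axis
    return bool(np.all(extents <= np.asarray(thresholds)))
```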
  • FIG. 8 illustrates a second example of steps for estimating the coordinates of points measured by a gaze tracking device when the user is aiming at a singular point in a real environment from several distinct observation positions, in a real landmark linked to a localization device of an extended reality system, the gaze tracking device used being limited to providing the characterization of the line of sight passing through the point fixed by the user.
  • the singular point targeted by the user is here the singular point having the index i.
  • the characterization of the line of sight measured when the user fixes the singular point of index i from the observation position k, in the real reference linked to the localization device, formalized by a point and a direction, is noted here (O_{i,k}^r; d_{i,k}^r), or more simply (O_k^r; d_k^r), the index i not being repeated in order to improve readability.
  • a first step is to initialize the value of the index k, representing an index of observation positions, to the value 1 (step 800).
  • a characterization (O_k^r; d_k^r) of the line of sight corresponding to the gaze of the user when he fixes the singular point having the index i is obtained from the gaze tracking device (step 805).
  • This characterization of the line of sight obtained is expressed in the real reference linked to the localization device of the extended reality system comprising the gaze tracking device (which is rigidly linked to it).
  • a test is carried out to determine whether a sufficient number of measurements have been carried out (step 810), that is to say, according to the example of FIG. 8, whether a line of sight was obtained for a number p of observation positions, by comparing the value of the index k with the number p.
  • this number can be determined according to a predetermined duration, for example a duration between 1 and 7 seconds, so as to perform as many measurements as possible over the predetermined time.
  • this number p can be chosen between two values, for example between 2 and 300.
  • if the value of the index k is strictly lower than the number p, the value of the index k is incremented by 1 and a pause of a predetermined duration is performed so that the user has time to move (step 815). This duration is, for example, a few tens of milliseconds. The previous steps (steps 805 and 810) are then repeated to obtain new measurements.
  • a cloud of measured points is calculated (step 820).
  • This cloud, composed for example of r measured points, is calculated from the p lines of sight previously obtained.
  • an estimation of the dispersion of the cloud of measured points obtained is preferably carried out (step 825), to check the consistency of the measurements.
  • the dispersion can be estimated as being the difference between the smallest abscissa and the largest abscissa, between the smallest ordinate and the largest ordinate, and between the smallest z-coordinate and the largest z-coordinate (calculation of the bounding box containing all the measured points). These values are compared with a threshold (or with several thresholds, for example an abscissa threshold, an ordinate threshold and a z-coordinate threshold) to determine whether the dispersion is acceptable or not (step 830).
  • if the dispersion is not acceptable, the previous steps are re-executed to obtain new measurements. Otherwise, if the dispersion is acceptable, the measured coordinates (x_{i,j}^r, y_{i,j}^r, z_{i,j}^r), with j varying here from 1 to r, are memorized to be then used to estimate the coordinates of the targeted singular point of index i in the real landmark linked to the localization system.
  • FIG. 9 illustrates an example of steps for calculating the coordinates of a set of measured points, representing the same point set by a user, from lines of sight. These steps can be performed when performing step 820 of Figure 8.
  • the estimated coordinates of the measured points (x_{i,j}^r, y_{i,j}^r, z_{i,j}^r), with j varying here from 1 to r, corresponding to the same singular point of index i, are here obtained from p lines of sight characterized in the form (O_k^r; d_k^r), with k varying here from 1 to p, in a real frame linked to the extended reality system.
  • a first step is to initialize the values of the indexes u, v and j (step 900).
  • the indexes u and v are line of sight indexes. They are initialized here to values 1 and 2, respectively.
  • the index j is an index of measured points whose value is here initialized to 1.
  • the lines of sight having indices u and v, that is to say the lines of sight (O_u^r; d_u^r) and (O_v^r; d_v^r), are then selected (step 905).
  • it is determined whether these lines of sight are coplanar (step 910).
  • this test can be carried out by determining whether the vector formed by the points O_u^r and O_v^r is orthogonal to the vector resulting from the cross product of the vectors d_u^r and d_v^r, that is to say whether the scalar product (O_u^r O_v^r) · (d_u^r × d_v^r) is zero.
  • if the lines of sight are coplanar, a test is performed to determine whether they are parallel (step 915). This test may consist, for example, in determining whether the cross product of the vectors d_u^r and d_v^r, namely d_u^r × d_v^r, is zero.
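The coplanarity and parallelism tests of steps 910 and 915 can be sketched as follows; the function name, the returned labels and the tolerance eps are assumptions of this sketch:

```python
import numpy as np

def classify_pair(o_u, d_u, o_v, d_v, eps=1e-9):
    """Classify a pair of lines of sight (steps 910 and 915).

    Returns "parallel", "coplanar" (secant lines, step 940 applies) or "skew"
    (step 950 applies).  The cross product d_u x d_v vanishes for parallel
    lines; the scalar product (O_v - O_u) . (d_u x d_v) vanishes for coplanar
    lines.
    """
    o_u, d_u, o_v, d_v = (np.asarray(v, dtype=float) for v in (o_u, d_u, o_v, d_v))
    n = np.cross(d_u, d_v)
    coplanar = abs((o_v - o_u) @ n) < eps        # step 910
    if coplanar and np.linalg.norm(n) < eps:     # step 915: discard parallel pair
        return "parallel"
    if coplanar:
        return "coplanar"                        # intersection point (step 940)
    return "skew"                                # common perpendicular segment (step 950)
```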
  • a test is then performed to determine whether the value of the index v is strictly less than the number p of lines of sight considered (step 920). If the value of the index v is strictly less than the number p, this value is incremented by 1 (step 925) and a new pair of lines of sight is processed to estimate, if necessary, the coordinates of a measured point. Otherwise, if the value of the index v is not strictly less than the number p, a test is performed to determine whether the value of the index u is strictly less than the number p minus 1 of lines of sight considered (step 930).
  • if the value of the index u is strictly less than the number (p - 1), this value is incremented by 1, the value of index v is modified to be equal to that of index u plus 1 (step 935), and a new pair of lines of sight is processed to estimate, if necessary, the coordinates of a measured point. If, on the contrary, the value of the index u is greater than or equal to the number (p - 1), the search for measured points ends, all the lines of sight having been taken into consideration.
  • if the lines of sight are coplanar but not parallel (steps 910 and 915), the coordinates (x_{i,j}^r, y_{i,j}^r, z_{i,j}^r) of the point of intersection of these lines are calculated (step 940) then stored as those of a measured point having the index j (step 945).
  • the value of the index j is incremented by 1 then the test(s) described previously on the values of the indices u and v are carried out to, if necessary, process a new pair of lines of sight.
  • if the lines of sight are not coplanar (step 910), the segment perpendicular to these two lines of sight is calculated (step 950). If the point S_u is considered to be the end of the segment belonging to the line of sight (O_u^r; d_u^r) and the point S_v to be the end of the segment belonging to the line of sight (O_v^r; d_v^r), the coordinates of these points can be determined according to the following relations:
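The relations themselves are missing from this extract. With the notation above, a standard closest-points formulation (which may differ in form from the patent's own relations) gives, writing $\vec{w} = O^r_u - O^r_v$:

$$
a = \vec{d}^{\,r}_u \cdot \vec{d}^{\,r}_u,\quad
b = \vec{d}^{\,r}_u \cdot \vec{d}^{\,r}_v,\quad
c = \vec{d}^{\,r}_v \cdot \vec{d}^{\,r}_v,\quad
d = \vec{d}^{\,r}_u \cdot \vec{w},\quad
e = \vec{d}^{\,r}_v \cdot \vec{w},
$$

$$
S_u = O^r_u + \frac{be - cd}{ac - b^2}\,\vec{d}^{\,r}_u,
\qquad
S_v = O^r_v + \frac{ae - bd}{ac - b^2}\,\vec{d}^{\,r}_v .
$$

The measured point of index j is then taken as the middle of the segment [S_u, S_v], consistent with the construction of FIG. 4.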
  • the corresponding measured point is then stored (step 945), the value of the index j is incremented by 1, and the test(s) described previously on the values of the indices u and v are carried out to, if necessary, process a new pair of lines of sight.
  • a set of measured points comprising (j-1) points, corresponding to the same point targeted by a user, is obtained.
  • the coordinates of the points making up this cloud of measured points can be used to estimate the coordinates of the point targeted by the user.
  • FIG. 10 illustrates an example of steps for estimating the coordinates of a point fixed by a user from coordinates of several measured points (cloud of measured points).
  • the measured points are for example the points of the set of points obtained from a gaze tracking device during step 620 of FIG. 6, or the points of the set of points determined from lines of sight, as described with reference to Figure 9.
  • a first step is to filter the points measured according to their abscissa (step 1000).
  • this filtering may consist in selecting the points according to their distribution on the abscissa axis in order to eliminate the extreme points, for example the points belonging to the upper and lower c percentiles, where the value c is for example chosen between 1 and 10.
  • a second step has the purpose of filtering the points according to their ordinate (step 1005). Again, this filtering can consist in selecting the points according to their distribution on the y-axis in order to eliminate the extreme points.
  • a third step is to filter the points according to their z-coordinate (cote) (step 1010).
  • again, this filtering can consist of selecting the points according to their distribution along the z-axis in order to eliminate the extreme points.
  • the steps 1000 to 1010 can be combined in order to simultaneously consider the three dimensions of space and filter the measured points according to the distances which separate them, in order to eliminate the points located at the periphery of the point cloud. It is also possible to filter the points by eliminating those which lie in the neighborhood of the smallest convex envelope (convex hull) encompassing the cloud of measured points.
  • the coordinates (x_i^r, y_i^r, z_i^r) of the center of gravity of the cloud of measured points considered, after filtering, are then calculated (step 1015). These coordinates can be estimated as follows:
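The formula is missing from this extract; the center of gravity of the points retained after filtering is simply their arithmetic mean (the symbol m for the number of retained points is an assumption of this note):

$$
x^r_i = \frac{1}{m}\sum_{j=1}^{m} x^r_{i,j},
\qquad
y^r_i = \frac{1}{m}\sum_{j=1}^{m} y^r_{i,j},
\qquad
z^r_i = \frac{1}{m}\sum_{j=1}^{m} z^r_{i,j},
$$

where the sums run over the m measured points kept after the filtering steps 1000 to 1010.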
  • the center of gravity here represents an estimate of the singular point with index i set by the user.
  • this center of gravity could be projected onto the nearest point of a geometric surface determined by the location device, this geometric surface corresponding to a virtual reconstruction of the real surface on which the singular point fixed by the user is located.
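Putting the steps of FIG. 10 together, a minimal sketch in Python/NumPy, assuming percentile-based filtering on each axis with c = 5 and omitting the optional projection onto a reconstructed surface:

```python
import numpy as np

def estimate_singular_point(measured_pts, c=5.0):
    """Estimate a singular point from a cloud of measured points (FIG. 10).

    Steps 1000-1010: on each axis (abscissa, ordinate, cote), keep only the
    points between the c-th and (100 - c)-th percentiles, eliminating extreme
    points.  Step 1015: return the center of gravity of the remaining points.
    The percentile value c is only an example, as in the description.
    """
    pts = np.asarray(measured_pts, dtype=float)     # (n, 3) cloud of measured points
    keep = np.ones(len(pts), dtype=bool)
    for axis in range(3):                           # filter on x, y, then z
        lo, hi = np.percentile(pts[:, axis], [c, 100.0 - c])
        keep &= (pts[:, axis] >= lo) & (pts[:, axis] <= hi)
    kept = pts[keep] if keep.any() else pts         # fall back to the full cloud
    return kept.mean(axis=0)                        # center of gravity (x_i^r, y_i^r, z_i^r)
```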
  • Figure 11 illustrates an example of a device that can be used to implement, at least partially, an embodiment of the invention, in particular the steps described with reference to Figures 6 to 10.
  • the device 1100 is for example a computer or a calculator.
  • the device 1100 preferably comprises a communication bus 1102 to which are connected:
  • a central processing unit or microprocessor 1104 (CPU, acronym for Central Processing Unit in English terminology);
  • a read only memory 1106 (ROM, acronym for Read Only Memory in English terminology) which may include the operating system and programs such as "Prog";
  • a random access memory or cache memory 1108 (RAM, acronym for Random Access Memory in English terminology) comprising registers adapted to record variables and parameters created and modified during the execution of the aforementioned programs;
  • an input/output interface 1110 connected to one or more sensors 1112, in particular one or more gaze tracking devices, one or more video sensors, one or more localization devices (e.g. 6DoF) and/or one or more motion sensors; and
  • a graphics card 1114 connected to one or more rendering devices 1116 (e.g. screens, projectors, etc.).
  • the device 1100 can also have the following elements:
  • a hard disk 1120 which may include the aforementioned "Prog” programs and data processed or to be processed according to the invention.
  • any device 1122 allowing a user to interact with the device 1100 such as a voice recognition system, an object tracking device (e.g. hand tracking), a joystick or a remote control; and
  • a communication interface 1126 connected to a distributed communication network 1128, for example a wireless communication network and/or a local communication network.
  • the communication bus allows communication and interoperability between the different elements included in the device 1100 or connected to it.
  • the representation of the bus is not limiting and, in particular, the central unit is capable of communicating instructions to any element of the device 1100 directly or via another element of the device 1100.
  • the executable code of each program allowing the programmable device to implement the processes according to the invention can be stored, for example, in the hard disk 1120 or in ROM 1106. According to a variant, the executable code of the programs could be received via the communication network 1128, via the interface 1126, to be stored in the same way as described previously.
  • the program(s) can be loaded into one of the storage means of the device 1100 before being executed.
  • the central unit 1104 will control and direct the execution of the instructions or portions of software code of the program or programs according to the invention, instructions which are stored in the hard disk 1120 or in the ROM 1106 or else in the other aforementioned storage elements.
  • the program or programs which are stored in a non-volatile memory for example the hard disk 1120 or the ROM 1106, are transferred to the random access memory 1108 which then contains the executable code of the program or programs according to the invention, as well as registers for storing the variables and parameters necessary for the implementation of the invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Dermatology (AREA)
  • Neurosurgery (AREA)
  • Neurology (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)
EP21746530.1A 2020-07-16 2021-07-08 Computerverfahren, vorrichtung und programm zur unterstützung der positionierung von systemen der erweiterten realität Pending EP4182775A1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR2007478A FR3112639A1 (fr) 2020-07-16 2020-07-16 Procédé, dispositif et programme d’ordinateur d’aide au positionnement pour système de réalité étendue
PCT/FR2021/051265 WO2022013492A1 (fr) 2020-07-16 2021-07-08 Procede, dispositif et programme d'ordinateur d'aide au positionnement pour systemes de realite etendue

Publications (1)

Publication Number Publication Date
EP4182775A1 (de) 2023-05-24

Family

ID=72801727

Family Applications (1)

Application Number Title Priority Date Filing Date
EP21746530.1A Pending EP4182775A1 (de) 2020-07-16 2021-07-08 Computerverfahren, vorrichtung und programm zur unterstützung der positionierung von systemen der erweiterten realität

Country Status (3)

Country Link
EP (1) EP4182775A1 (de)
FR (1) FR3112639A1 (de)
WO (1) WO2022013492A1 (de)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2954494B1 (fr) * 2009-12-18 2012-07-27 Thales Sa Procede de calibrage d'un instrument de mesure d'un systeme optronique
DE102011122206A1 (de) * 2011-12-23 2013-06-27 Volkswagen Aktiengesellschaft Verfahren zum Betrieb eines Augmented-Reality-Systems
US11861062B2 (en) * 2018-02-03 2024-01-02 The Johns Hopkins University Blink-based calibration of an optical see-through head-mounted display

Also Published As

Publication number Publication date
FR3112639A1 (fr) 2022-01-21
WO2022013492A1 (fr) 2022-01-20


Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20230104

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)