EP3156880A1 - Zoom effect in gaze tracking interface - Google Patents
Zoom effect in gaze tracking interface
- Publication number
- EP3156880A1 EP3156880A1 EP15306629.5A EP15306629A EP3156880A1 EP 3156880 A1 EP3156880 A1 EP 3156880A1 EP 15306629 A EP15306629 A EP 15306629A EP 3156880 A1 EP3156880 A1 EP 3156880A1
- Authority
- EP
- European Patent Office
- Prior art keywords
- space
- point
- graphical representation
- user
- attention
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04842—Selection of displayed objects or displayed text elements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G5/00—Traffic control systems for aircraft, e.g. air-traffic control [ATC]
- G08G5/0017—Arrangements for implementing traffic-related aircraft activities, e.g. arrangements for generating, displaying, acquiring or managing traffic information
- G08G5/0026—Arrangements for implementing traffic-related aircraft activities, e.g. arrangements for generating, displaying, acquiring or managing traffic information located on the ground
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G5/00—Traffic control systems for aircraft, e.g. air-traffic control [ATC]
- G08G5/0073—Surveillance aids
- G08G5/0082—Surveillance aids for monitoring traffic from a ground station
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/048—Indexing scheme relating to G06F3/048
- G06F2203/04806—Zoom, i.e. interaction techniques or interactors for controlling the zooming operation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
Definitions
- the present invention relates to graphical user interfaces supporting gaze tracking, and in particular scale shifting or zoom features in such interfaces.
- a method of managing a graphical representation of a physical space within a larger space, comprising the steps of generating a graphical representation of a selected space, wherein the selected space lies within a larger space and wherein the size of the selected space is defined with regard to a predetermined scale, and displaying the representation.
- a point of attention of a user within the representation is determined with reference to the point of regard of the user, and responsive to the user providing an input via a user interface, the selected physical space is redefined to correspond to a new selected space positioned with respect to and containing the point of attention, where the new selected space is situated in the larger space, and the new selected space is defined at a new scale.
- the graphical representation is then regenerated accordingly.
- This approach supports an instinctive relationship with the interface, where the focus of a user's attention is automatically selected for closer scrutiny.
- Ready and instinctive awareness of ongoing events outside the user's direct field of vision can improve the user's ability to anticipate emerging problems, and take remedial measures earlier than with conventional systems. Depending on context, this will translate into improved safety and reduced costs.
- the graphical representation is generated so as to incorporate a graphical user interface cursor.
- By providing a graphical user interface cursor such as a mouse pointer, compatibility with existing graphical user interface platforms is ensured, providing synergies by offering the user a choice of interface mechanisms from which he may select based on the nature of the task at hand.
- the steps of generating, displaying and determining are repeated iteratively.
- the graphical representation is generated so as to incorporate an indication of the point of attention.
- the zoom feature may seem to behave erratically since the point of attention selected by the system may differ from that intended by the user.
- the system may indicate the point that the system currently considers to reflect the point of attention.
- the step of determining the point of attention of the user within the representation comprises determining a weighted average of the user's point of regard over a predetermined duration.
- a weighted average over a predetermined duration makes it possible to more accurately determine the point of attention, by filtering out transient movements of the user's point of regard, and taking into account other predictive factors. This leads to a more transparent experience for the user, where the user interface seems to implicitly understand his true intention. This correspondingly translates into more rapid and efficient user interactions.
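- Purely by way of illustration, the following sketch (Python code introduced here, not part of the original disclosure) shows one way such a recency-weighted average of point-of-regard samples could be computed; the window length and decay constant are assumptions chosen for the example.

```python
from collections import deque

class AttentionEstimator:
    """Estimate the point of attention as a recency-weighted average of recent
    point-of-regard samples. Window length and half-life are illustrative
    assumptions, not values taken from the disclosure."""

    def __init__(self, window_s=1.0, half_life_s=0.3):
        self.window_s = window_s          # predetermined duration of the average
        self.half_life_s = half_life_s    # how quickly older samples lose weight
        self.samples = deque()            # (timestamp, x, y) in screen coordinates

    def add_sample(self, t, x, y):
        self.samples.append((t, x, y))
        # Discard samples older than the predetermined duration.
        while self.samples and t - self.samples[0][0] > self.window_s:
            self.samples.popleft()

    def point_of_attention(self, now):
        if not self.samples:
            return None
        wx = wy = total = 0.0
        for t, x, y in self.samples:
            w = 0.5 ** ((now - t) / self.half_life_s)  # recent samples weigh more
            wx += w * x
            wy += w * y
            total += w
        return wx / total, wy / total
```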
- the step of determining the point of attention of a user within the representation comprises positioning said point of attention preferentially with respect to certain types of feature within said graphical representation.
- Positioning the point of attention preferentially with respect to certain types of feature makes it possible to more reliably predict the point of attention at a statistical level, by paying less attention to less likely inputs. This leads to a more transparent experience for the user, where the user interface seems to implicitly understand his true intention. This correspondingly translates into more rapid and efficient user interactions.
- the centre of the new selected space is situated at the point of attention.
- the selected space, the larger space and the graphical representation are two dimensional.
- the selected space, the larger space and the graphical representation are three dimensional.
- the centre of the selected space is unchanged with respect to the larger space.
- the zoom process is transparent, as the user has the impression of merely focussing his attention more closely on a fixed point. This may improve the immersive nature of the interaction, improving concentration and situational awareness.
- a computer program adapted to implement the steps of the first aspect.
- a computer readable medium incorporating the computer program of the second aspect.
- an apparatus adapted to generate a graphical representation of a selected space, wherein the selected space lies within a larger space and wherein the size of the selected space is defined with regard to a predetermined scale.
- the apparatus is further adapted to cause a display unit to display the representation.
- the apparatus is further adapted to determine a point of attention of a user within the representation with reference to signals received from an eye tracking system, and to redefine the selected space to correspond to a new selected space positioned with respect to and containing the point of attention responsive to receiving an input via a user interface, the new selected space being situated in the larger space, and the new selected space being defined at a new scale.
- the apparatus is further adapted to regenerate the graphical representation on the basis of said redefined selected space, and to cause the display unit to display the regenerated representation.
- This approach supports an instinctive relationship with the interface, where the focus of a user's attention is automatically selected for closer scrutiny.
- Ready and instinctive awareness of ongoing events outside the user's direct field of vision can improve the user's ability to anticipate emerging problems, and take remedial measures earlier than with conventional systems. Depending on context, this will translate into improved safety and reduced costs.
- gaze based user interfaces are particularly suited to applications where a user must track or monitor a large number of moving elements in the interface over a long period, and where the implications of a misinterpretation or failed manipulation of the interface in real time are sufficiently serious to justify the cost of such systems, such as air traffic control displays, head up displays in vehicles, and so on.
- the user will need to move between a high level overview covering a large volume of space (which may be a representation of real space, or a virtual space existing only within the interface environment) or number of entities with minimal detail, and a more focussed view providing more detail on a selected volume of space, number of entities, etc.
- Moving between such views is often referred to as a zoom effect, as the user may have the impression of physically moving closer to or away from the point of interest.
- This type of effect will necessarily be based around a selected point, which is taken as the focal point towards which or away from which the user seems to move.
- this point will often be designated by specifically selecting a point for this purpose e.g. by clicking on it, or otherwise taking whatever interface element currently has focus, i.e. was selected most recently, as this focal point.
- the current position of the mouse cursor may be taken to be the focal point. Since in such cases it is possible to move the mouse whilst implementing the zoom, the focal point of the zoom may also change during the zoom process.
- the zoom effect is implemented by a scroll wheel on the mouse or elsewhere.
- Eye tracking devices are mostly off-the-shelf products and need to be integrated in existing systems by the customers themselves. Such integration can be a problem, especially when an existing environment such as a flight or driving simulator does not allow communication with third-party software.
- eye trackers produce large amounts of data which need to be stored and then processed.
- When an eye tracker is used as a system input, the data must be processed in real time or near real time, thus adding further complications.
- Figure 1 shows the steps of a method according to a first embodiment.
- There is provided a method of managing a graphical representation of a selected space within a larger space. This may be a representation of a real physical space, for example a portion of land or airspace, or alternatively a virtual desktop or other computer generated environment.
- the method begins at step 11 of generating a representation of a selected portion of the larger space.
- This representation will inherently have a scale, either as a fraction of the larger space, or in terms of the real dimensions of the corresponding physical space.
- the representation itself may be generated from any suitable source including geographic data files, other predefined graphical elements or live video signals, or any combination of these or other data types. In particular, it may be generated wholly or partially as a representation of the operating system of the device generating the representation.
- the selected space, and the larger space may be two or three dimensional.
- the representation may also be two or three dimensional.
- the representation may be a two dimensional representation of the surface of the earth, which is of course inherently three dimensional given the generally spherical form of the earth, and the variations in its diameter at different points on its surface. It is nevertheless common to represent portions of the earth's surface two dimensionally by applying a suitable projection, and disregarding local deviations from the average local diameter (hills, etc.).
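- As an illustration of such a projection, the following sketch (assumed code, not part of the disclosure) maps a latitude/longitude to local plane coordinates around a reference point using a simple equirectangular approximation; the patent does not prescribe any particular projection.

```python
import math

def local_projection(lat_deg, lon_deg, ref_lat_deg, ref_lon_deg, earth_radius_km=6371.0):
    """Project a latitude/longitude onto a local 2-D plane centred on a
    reference point, disregarding local deviations such as hills.
    Equirectangular approximation, chosen purely for illustration."""
    x_km = (earth_radius_km * math.radians(lon_deg - ref_lon_deg)
            * math.cos(math.radians(ref_lat_deg)))               # east-west offset
    y_km = earth_radius_km * math.radians(lat_deg - ref_lat_deg)  # north-south offset
    return x_km, y_km

# Example: offset in kilometres of a point one degree north-east of the reference.
# local_projection(lat_deg=49.0, lon_deg=3.0, ref_lat_deg=48.0, ref_lon_deg=2.0)
```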
- At step 12, the graphical representation is displayed to the user.
- This display may be by means of a conventional CRT or LCD monitor, whether as a 2d or 3d image, by holographic means or otherwise.
- the method next proceeds to step 13, at which the point of attention of the user is determined, with reference to the point of regard of the user.
- Various systems are used to track eye movements, and may be adapted to implement this step. Any such system may be used, including head-mounted, table-based, or remote systems. These devices commonly use video cameras and processing software to compute the gaze position from the pupil/corneal reflection of an infra-red emissive source. To increase data accuracy with table devices, it is possible to limit head movement with a fixed chin rest on the table.
- a calibration process is also common, to ensure system accuracy. The calibration process usually consists of displaying several points in different locations of the viewing scene; the Eye Tracking software will compute a transformation that processes pupil position and head location.
- Table-based eye trackers are usually binocular and can thus calculate eye divergence and output raw coordinates of the Gaze Intersection Point (GIP) in x-y pixels applied to a screen in real time. This feature allows integration of gaze position as an input for the HMI. Areas Of Interest (AOIs) are then defined to interact with the user. When the gaze meets an AOI, an event is generated and a specific piece of information is sent. When an AOI is an element of the interface with some degree of freedom (a scrollbar, for instance), it is referred to as a dynamic AOI (dAOI). Tracking a dAOI is more challenging than tracking a static one.
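- A minimal sketch of AOI handling of this kind is given below (assumed Python code, not taken from the disclosure): each AOI is tested against the incoming GIP and a callback fires when the gaze enters it.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class AOI:
    """Axis-aligned Area Of Interest in screen pixels."""
    name: str
    x: float
    y: float
    width: float
    height: float

    def contains(self, px: float, py: float) -> bool:
        return (self.x <= px <= self.x + self.width
                and self.y <= py <= self.y + self.height)

def dispatch_gaze_event(gip: Tuple[float, float],
                        aois: List[AOI],
                        on_enter: Callable[[AOI], None]) -> None:
    """Fire a callback for every AOI containing the Gaze Intersection Point."""
    for aoi in aois:
        if aoi.contains(*gip):
            on_enter(aoi)

# Example with a hypothetical AOI around an aircraft label:
# dispatch_gaze_event((412, 300), [AOI("aircraft_206", 400, 280, 40, 40)], print)
```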
- the point of attention may simply be taken to be the instantaneous point of regard, that is, whatever point the eye tracking system considers the user to be looking at when the input is received at step 14.
- the point of attention may take into account other factors such as system status, historical information and the like.
- the determination of the point of attention of the user may involve determining a weighted average of the user's point of regard over a predetermined duration; further embodiments are described hereafter.
- There are two kinds of Eye Tracking data collection method. The first, and the most common, is to use the original software (for data recording and analysis) that is often provided by the device manufacturer. The second is to develop a specific software module (using a Software Development Kit (SDK), usually provided with the eye tracker) for data collection.
- Various parameters will impact the precision of raw data issued from the Eye Tracking system. Among them, the video frame rate and the camera resolution are critical for the Eye Tracking software. Existing systems use a video frame rate from 30 to 2000 Hz, and it is expected that higher frame rates will be used in future systems. For high precision Eye Tracking, a high frame rate will improve data filtering but will also increase the data size and processing time, which is critical for online processing.
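- The trade-off between frame rate and processing load can be illustrated with a simple block-averaging down-sampler (assumed code, not part of the disclosure), which reduces data size at the cost of temporal resolution.

```python
def downsample_gaze(samples, source_hz, target_hz):
    """Reduce a high-frequency gaze stream (list of (timestamp, x, y)) to a
    lower rate by block averaging. A minimal sketch: anti-aliasing and exact
    timestamp handling are deliberately simplified."""
    step = max(1, int(source_hz // target_hz))
    reduced = []
    for i in range(0, len(samples) - step + 1, step):
        block = samples[i:i + step]
        t = block[-1][0]                                  # keep the latest timestamp
        x = sum(s[1] for s in block) / len(block)
        y = sum(s[2] for s in block) / len(block)
        reduced.append((t, x, y))
    return reduced

# Example: reduce a 2000 Hz recording to roughly 100 Hz for online processing.
# downsample_gaze(raw_samples, source_hz=2000, target_hz=100)
```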
- Eye tracking data collected during an experiment can be analyzed by statistical methods and visualization techniques to reveal characteristics of eye movements (fixations, hot spots, saccades, and scanpaths). Fixation, saccade, and smooth pursuit events can be computed from raw data coordinates. To correlate these pieces of information with the Human-Machine Interface (HMI), some interface-related data have to be collected (i.e. object coordinates within the interface, HMI events like mouse hover, etc.). This information can be used to infer user behaviour:
- Saccades are rapid eye movements that serve to change the point of fixation, and during which, as is often considered, no information is encoded.
- Fixations occur when the user fixates an object (usually for at least a 150 ms threshold) and encodes relevant information. Sometimes shorter fixations are taken into account. Unlike long fixations, which are considered to be part of top-down visual processing, short ones are regarded as part of a bottom-up process. It is estimated that 90% of viewing time is dedicated to fixations. Other complex ocular events like glissades or retro-saccades could also be considered.
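- For illustration only, a dispersion-threshold fixation detector in the spirit of the above description might look as follows (assumed code; the 150 ms and pixel thresholds are illustrative).

```python
def detect_fixations(samples, max_dispersion_px=30.0, min_duration_s=0.150):
    """I-DT style fixation detection over (timestamp, x, y) samples.
    Returns (start_time, duration, centre_x, centre_y) tuples."""
    fixations = []
    start = 0
    while start < len(samples):
        end = start
        # Grow the window while its dispersion stays under the threshold.
        while end + 1 < len(samples):
            xs = [s[1] for s in samples[start:end + 2]]
            ys = [s[2] for s in samples[start:end + 2]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion_px:
                break
            end += 1
        duration = samples[end][0] - samples[start][0]
        if end > start and duration >= min_duration_s:
            n = end - start + 1
            cx = sum(s[1] for s in samples[start:end + 1]) / n
            cy = sum(s[2] for s in samples[start:end + 1]) / n
            fixations.append((samples[start][0], duration, cx, cy))
            start = end + 1       # continue after the fixation
        else:
            start += 1            # treat the sample as part of a saccade
    return fixations
```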
- Variation of the pupil diameter can also be used as an indication of the cognitive workload, defined as task-evoked pupillary response (TEPR).
- Pupil diameter is, however, also affected by light sources (the environment, electronic displays, etc.): the pupil light reflex is more pronounced than the impact of cognition on pupil size, and depends on the luminance of the fixation area even when the luminance of the computer screen does not change.
- Scanpaths can also provide insight on HMI usage.
- collected and cleaned data can be analyzed to infer causal links, statistics, and user behaviour. By considering these various factors, the system attempts to continuously maintain an indication of the point in the display which represents the user's current focus.
- this step will generally be repeated at intervals or continuously for the purposes of other functions of the user interface.
- At step 15, the selected space is redefined to correspond to a space in said larger space at a new scale, positioned with respect to the point of regard determined at step 13.
- the new selected space may be centred on the point of regard determined at step 13, or alternatively, offset in any direction by a predetermined absolute distance, or proportion of the display area.
- the system may seek to intelligently define the selected space so as to contain the point of regard and as many entities as possible, or as many entities of a particular type as possible.
- the new scale may be larger or smaller than the scale used at step 11, corresponding to an inward zoom or outward zoom respectively.
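- A minimal sketch of such a redefinition is given below (assumed Python code, not part of the disclosure): the selected space is modelled as a square window of the larger space, recentred on the point of attention and resized according to the zoom factor, with a simple clamping policy so that it remains inside the larger space.

```python
from dataclasses import dataclass

@dataclass
class SelectedSpace:
    """A square region of the larger space: centre coordinates and width,
    the width acting as the scale of the representation."""
    cx: float
    cy: float
    width: float

def zoom_selected_space(current, attention_xy, zoom_factor, larger_width):
    """Redefine the selected space around the point of attention.
    zoom_factor > 1 zooms in (smaller selected space), < 1 zooms out.
    The clamping policy and the square larger space are illustrative assumptions."""
    new_width = min(current.width / zoom_factor, larger_width)
    half = new_width / 2
    cx, cy = attention_xy
    # Keep the new selected space within the bounds of the larger space.
    cx = min(max(cx, half), larger_width - half)
    cy = min(max(cy, half), larger_width - half)
    return SelectedSpace(cx, cy, new_width)

# Example: zoom in by a factor of 2 around an assumed point of attention.
# zoom_selected_space(SelectedSpace(50.0, 50.0, 100.0), (62.0, 40.0), 2.0, 100.0)
```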
- the user input may be provided by conventional interface operations such as performing a "click" operation with the mouse, by touching a zone of the display where the display has a touchscreen interface or the like, or by using designated keys on a keypad, a foot pedal, a mouse or keyboard scroll wheel, a jog dial, a joystick or the like.
- the user will generally have an option of zooming in or out, unless already at the maximum zoom level in either direction.
- the rate of change of scale may be determined as being a function of the rate at which the wheel or dial is turned.
- this may be achieved progressively by means of a series of intermediate positions between the current selected space and the initial selected space, so as to simulate the effect from the point of view of a user of travelling towards, or away from (depending on whether they are zooming in or out) the point of regard.
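- One possible way of generating such intermediate positions (assumed code, not part of the disclosure) is to interpolate the centre linearly and the width geometrically, so the perceived zoom rate stays roughly even.

```python
def intermediate_spaces(start, end, steps):
    """Yield intermediate (cx, cy, width) selected spaces between a starting
    and a final selected space, to animate the zoom progressively."""
    sx, sy, sw = start
    ex, ey, ew = end
    for i in range(1, steps + 1):
        t = i / steps
        yield (sx + (ex - sx) * t,               # centre: linear interpolation
               sy + (ey - sy) * t,
               sw * (ew / sw) ** t)              # width: geometric interpolation

# Example: regenerate the representation for 20 intermediate frames.
# for cx, cy, w in intermediate_spaces((50.0, 50.0, 100.0), (62.0, 40.0, 25.0), 20):
#     ...  # regenerate and display the representation at (cx, cy, w)
```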
- the graphical representation is generated at this new scale at step 16.
- gaze tracking interfaces are particularly common in situations where a user must track a large number of moving elements in the interface over a long period, and where the implications of a misinterpretation or failed manipulation of the interface in real time are sufficiently serious to justify the cost of such systems.
- One example of such installations is in air traffic control displays, where air traffic controllers must monitor moving aircraft in a designated airspace. An embodiment will now be described in such a context.
- Figure 2a presents an embodiment in a first phase.
- In figure 2a there is presented a circular perimeter 201, divided by a reticule whose centre corresponds to the centre of this circular region, which is centred on a point of interest which may be the location of the display itself, and hence the user of the display.
- the circular region is furthermore divided by three progressively smaller circles sharing the axis of the circular perimeter 201.
- This circle represents a substantially cylindrical volume of space in the real world, as projected on the underlying ground, and accordingly is associated with a scale.
- This space accordingly corresponds to the selected space with respect to which the representation of figure 2a is generated, in accordance with the method described with respect to figure 1 above.
- the diameter of the smallest circle is one quarter of the diameter of the circular perimeter, the diameter of the second smallest circle is half that of the circular perimeter, and the diameter of the largest circle is three quarters of the diameter of the circular perimeter, so that these circles can be used to determine the distance of any item on the display from the point of interest.
- the four axes of the reticule conventionally correspond to the cardinal points of the compass, and as shown the circular perimeter is additionally provided with markings indicating 10 degree increments around the perimeter, so that the bearing of any object on the display with respect to the point of interest may also readily be determined.
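- For illustration, the following sketch (assumed code) converts a display position into range and bearing from the point of interest, using the radius of the circular perimeter as the calibration between pixels and real-world distance; the nautical-mile radius is an assumed calibration value.

```python
import math

def range_and_bearing(px, py, centre_px, centre_py,
                      perimeter_radius_px, perimeter_radius_nm):
    """Convert a display position to (range, bearing) from the point of
    interest at the reticule centre. perimeter_radius_nm is the assumed
    real-world radius represented by the circular perimeter 201."""
    dx = px - centre_px
    dy = centre_py - py                     # screen y grows downwards; north is up
    r_px = math.hypot(dx, dy)
    range_nm = r_px / perimeter_radius_px * perimeter_radius_nm
    bearing_deg = math.degrees(math.atan2(dx, dy)) % 360.0   # 0 = north, clockwise
    return range_nm, bearing_deg

# Example: an aircraft icon north-east of the centre of the reticule.
# range_and_bearing(650, 250, 500, 400, perimeter_radius_px=400, perimeter_radius_nm=40)
```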
- While the foregoing display features are generally static, there are furthermore shown a number of features that are dynamically generated. These include geographical features 205, representing for example features of the ground at the bottom of the volume under observation. Other features include predetermined flight paths 203 structured around way markers 204. Aircraft 206 are represented by small squares, and associated with lines indicating their current bearing. Information 207 associated with each aircraft is represented in the proximity of that aircraft.
- the display shown in figure 2a is generally associated with a graphical user interface, which may permit the user to change the point of interest, or to obtain more information about a particular object shown in the representation. For example, where a user requires more detailed information about a particular aircraft, this may be selected as shown by the dark box 208, causing additional details associated with the same aircraft to be displayed in the box 209.
- There is also shown a mouse cursor 210, which is directed by the user with a mouse or similar cursor control device and may be used to interact with the interface in a conventional manner.
- There is further shown a gaze cursor 211. This reflects the position that the system currently considers to be the focus of the user's gaze, on the basis of the various considerations described above. It will be appreciated that the movement of the gaze cursor 211 is entirely independent of the movement of the mouse cursor 210.
- the selected space is redefined on the basis of a new scale.
- this may be a larger or smaller scale, corresponding to a zoom in operation or zoom out operation respectively, and may zoom by a greater or lesser extent, with or without intermediate positions, with or without an accompanying change in orientation, etc. as discussed above.
- the graphical representation is regenerated on the basis of the redefined selected space.
- Figure 2b presents the embodiment of figure 2a in a second phase.
- This second phase corresponds to the graphical representation as regenerated on the basis of the redefined selected space.
- In figure 2b there is presented a circular perimeter 201, divided by a reticule whose centre corresponds to the centre of this circular region, which is centred on a point of interest which may be the location of the display itself, and hence the user of the display.
- the circular region is furthermore divided by three progressively smaller circles sharing the axis of the circular perimeter 201.
- This circle represents a substantially cylindrical volume of space in the real world, as projected on the underlying ground, and accordingly is associated with a scale.
- This space accordingly corresponds to the new selected space with respect to which the representation of figure 2b is generated, in accordance with the method described with respect to figure 1 above.
- the elements 201, 202, 203, 204, 205, 206, 207, 208, 209, 210 and 211 correspond to the elements with the same reference numerals in figure 2a . As shown, although the relative positions of the different elements are retained, the elements still visible take up a larger proportion of the selected area.
- the gaze cursor 211 remains in the same position relative to its position in figure 2a , and this point is the focus of the zoom operation.
- the gaze cursor may be panned to the centre of the display, as discussed below, whilst retaining its relative position with respect to the other elements. In other words, the system has zoomed in on the position of the gaze cursor as shown in figure 2a , to produce the graphical representation of figure 2b .
- the gaze cursor 211 may begin to move around the display again as it follows the user's gaze.
- the system may further redefine the selected space to revert to the initial selected physical space. This may occur automatically after a predetermined time, or in response to an action by the user for example by means of conventional interface operations such as moving a cursor over a graphical user interface widget with a mouse, trackerball or the like, and performing a "click" operation, by touching the widget in question where the display has a touchscreen interface or the like, or by any other suitable graphical user interface operation. Still further, a simple action by means of a keypad, foot pedal, mouse button may be sufficient in some cases to cause the system to revert to the initial selected space.
- this may be achieved progressively by means of a series of intermediate scales between the current scale and that used in generating the graphical representation of the initial selected physical space so as to simulate the effect from the point of view of a user of moving in towards, or back away from their starting position.
- This simulated journey may follow a direct line between the two points, or follow some other route, for example as dictated by available ground routes or flight paths. Where a particular path was followed from the initial selected physical space to the entity of interest, the same path may be followed in reverse back to the initial selected physical space, or not as desired.
- Where a series of intermediate selected spaces are defined, they may correspondingly adopt intermediate orientations, so as to simulate a progressive re-orientation.
- While a gaze cursor is rendered as part of the graphical representation in the foregoing embodiments, the gaze cursor may equally be invisible to the user in a variant of this or any other embodiment described herein, or otherwise.
- Indeed, rendering the gaze cursor may disturb the user, since a visible gaze cursor can attract the user's attention and create possibly undesirable effects such as the phenomenon known as cursor shift.
- the definition of the point of regard, and where applicable, the location of the gaze cursor is not a trivial matter.
- the gaze of the user moves continually over the graphical representation, and it may not always be clear that the user is deliberately fixing his regard in a particular location.
- the gaze of the user may often pause in a particular location without this necessarily indicating a requirement by the user that any particular step be taken with regard to the information conveyed in that area.
- Filtering this input so as to identify the point in the graphical representation that can best be considered to represent the instantaneous focus of the user's attention (which may differ from the instantaneous point of gaze) helps improve the prospects of achieving the user's true intention when he or she operates the user interface, for example at step 14 of the method described with respect to figure 1.
- Figure 3 shows a further embodiment offering improved definition of the point of attention.
- Figure 3 shows a similar graphical representation to that of figure 2 , and correspondingly numbered elements can be considered to be identical.
- As shown, a shaded zone 310 incorporates five different areas, each shaded with one of three different levels of shading. Specifically, area 311 is lightly shaded, areas 312, 313 and 314 are heavily shaded, and area 315 is black. Furthermore, these areas are disposed in a hierarchical manner, so that the heavily shaded areas are situated within the lightly shaded area, and the black area is situated within a heavily shaded area.
- This type of representation may be referred to as a heat map.
- If the graphical representation is divided into a plurality of regions, and the location of the instantaneous point of regard in each of these regions is recorded, a graphical representation of the average presence in these regions might look something like the zone 310, where the intensity of the shading represents the proportion of the recorded duration for which the point of regard was present in each region: the darker the shading, the more time was spent in the area in question.
- the centre of the darkest area may be selected as the current point of attention, as determined at step 13 of the method described with reference to figure 1 .
- the recorded duration may be a rolling window, whereby the record is continually updated with recent values, whilst the oldest values are discarded.
- the prominence given to each region may take into account not only the duration for which the instantaneous point of regard was located in that region, but also how recently that presence was recorded, so that more recent activity will be accorded more prominence than older activity.
- a region may be given more prominence if it is in a defined proximity to other regions through which the point of regard has passed during the recorded duration, so as to further emphasise areas recording persistent activity at the expense of regions receiving only occasional or sporadic attention.
- the instantaneous point of regard may be considered to have an area of effect greater in size than the recorded regions, so that a number of adjacent regions may record the presence of the instantaneous point of regard for any given instant.
- a greater weighting may be accorded to regions closer to the centre of the adjacent regions. It will be appreciated that the areas need not be displayed to the user, but may merely be calculated as the basis for defining the most likely point of interest to the user. Any number of levels of duration of presence (represented by different degrees of shading in figure 3) may be defined, and the number of levels and/or the thresholds between levels may be varied dynamically depending on system conditions.
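- A minimal sketch of such a region-based accumulation is given below (assumed Python code; the grid size, window length and decay constant are illustrative assumptions).

```python
from collections import deque

class GazeHeatMap:
    """Accumulate point-of-regard presence over a grid of regions with a
    rolling time window and recency weighting, and report the centre of the
    most heavily weighted region as the estimated point of attention."""

    def __init__(self, cols=32, rows=18, window_s=3.0, half_life_s=1.0):
        self.cols, self.rows = cols, rows
        self.window_s = window_s
        self.half_life_s = half_life_s
        self.hits = deque()                    # (timestamp, col, row)

    def record(self, t, x_norm, y_norm):
        """x_norm, y_norm: point of regard in [0, 1) display coordinates."""
        col = min(int(x_norm * self.cols), self.cols - 1)
        row = min(int(y_norm * self.rows), self.rows - 1)
        self.hits.append((t, col, row))
        # Rolling window: discard the oldest values.
        while self.hits and t - self.hits[0][0] > self.window_s:
            self.hits.popleft()

    def point_of_attention(self, now):
        weights = {}
        for t, col, row in self.hits:
            w = 0.5 ** ((now - t) / self.half_life_s)   # recent presence counts more
            weights[(col, row)] = weights.get((col, row), 0.0) + w
        if not weights:
            return None
        col, row = max(weights, key=weights.get)
        # Centre of the "darkest" region, in normalised display coordinates.
        return (col + 0.5) / self.cols, (row + 0.5) / self.rows
```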
- the point of regard data of a plurality of users may be combined to define the point of attention for one or more of those users, or for another user.
- Another mechanism for determining the point of attention may involve positioning the point of attention preferentially with respect to certain types of feature within the graphical representation. This may involve assigning the point of attention preferentially to particular parts of the screen. Certain features, objects, entities or icons may be defined in the graphical representation for which a zoom action is particularly likely. Where this is the case, such objects, entities or icons may be afforded a magnetic effect in the user interface, such that whenever the point of regard is in the vicinity of that feature, object, entity or icon, the point of attention is assumed to be the nearby feature, object, entity or icon.
- the two preceding approaches may of course be combined, for example by assigning particular weightings to particular regions of the graphical representation, such that time spent by the point of regard on certain regions has a greater effect than in certain other regions, where the high weighting regions are those corresponding to the features, objects, entities or icons.
- different regions may have a capture radius, where certain regions only register the passage of the point of regard if it passes right over them, whilst other regions register the passage of the point of regard if it merely passes close by, where the greater capture radius regions are those corresponding to the features, objects, entities or icons.
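- The following sketch (assumed code, not part of the disclosure) illustrates one way such a magnetic effect with capture radii and per-feature weights could be combined with a fallback estimate.

```python
import math

def snap_point_of_attention(gaze_xy, features, fallback=None):
    """Snap the point of attention to the best-scoring 'magnetic' feature whose
    capture radius contains the point of regard; otherwise return the fallback
    (e.g. a heat-map estimate). features: (x, y, capture_radius, weight) tuples.
    The scoring rule is an illustrative assumption."""
    gx, gy = gaze_xy
    best_xy = fallback if fallback is not None else gaze_xy
    best_score = 0.0
    for fx, fy, radius, weight in features:
        d = math.hypot(gx - fx, gy - fy)
        if d <= radius:
            score = weight * (1.0 - d / radius)   # nearer, heavier features win
            if score > best_score:
                best_score, best_xy = score, (fx, fy)
    return best_xy

# Example with two hypothetical features (an aircraft icon and a way marker):
# snap_point_of_attention((410, 295), [(420, 300, 40, 2.0), (500, 150, 25, 1.0)])
```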
- There is accordingly provided a zoom operation in an eye tracking based graphical user interface, where the zoom operation, as initiated by a scroll wheel etc., takes place with respect to the focus of the user's attention as determined on the basis of eye tracking data.
- Mechanisms for determining the point of attention based on a sliding window recording point of regard and subject to a variety of weighting functions are proposed.
- the disclosed methods can take the form of an entirely hardware embodiment (e.g. FPGA), an entirely software embodiment (for example to control a system according to the invention) or an embodiment containing both hardware and software elements.
- Software embodiments include but are not limited to firmware, resident software, microcode, etc.
- the invention can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or an instruction execution system.
- A computer-usable or computer-readable medium can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
- the medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium.
- the methods and processes described herein may be implemented in whole or part by a user device. These methods and processes may be implemented by computer-application programs or services, an application-programming interface (API), a library, and/or other computer-program product, or any combination of such entities.
- the user device may be a mobile device such as a smart phone or tablet, a computer or any other device with processing capability, such as a robot or other connected device.
- Figure 4 shows a generic computing system suitable for implementation of embodiments of the invention.
- a system includes a logic device 401 and a storage device 402.
- the system may optionally include a display subsystem 411, input subsystem 412, 413, 414, communication subsystem 420, and/or other components not shown.
- Logic device 401 includes one or more physical devices configured to execute instructions.
- the logic device 401 may be configured to execute instructions that are part of one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs.
- Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more components, achieve a technical effect, or otherwise arrive at a desired result.
- the logic device 401 may include one or more processors configured to execute software instructions. Additionally or alternatively, the logic device may include one or more hardware or firmware logic devices configured to execute hardware or firmware instructions. Processors of the logic device may be single-core or multi-core, and the instructions executed thereon may be configured for sequential, parallel, and/or distributed processing. Individual components of the logic device 401 optionally may be distributed among two or more separate devices, which may be remotely located and/or configured for coordinated processing. Aspects of the logic device 401 may be virtualized and executed by remotely accessible, networked computing devices configured in a cloud-computing configuration.
- Storage device 402 includes one or more physical devices configured to hold instructions executable by the logic device to implement the methods and processes described herein. When such methods and processes are implemented, the state of storage device 402 may be transformed, e.g. to hold different data.
- Storage device 402 may include removable and/or built-in devices.
- Storage device 402 may comprise one or more types of storage device including optical memory (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory (e.g., RAM, EPROM, EEPROM, etc.), and/or magnetic memory (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), among others.
- Storage device may include volatile, non-volatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices.
- the system may comprise an interface 403 adapted to support communications between the Logic device 401 and further system components.
- additional system components may comprise removable and/or built-in extended storage devices.
- Extended storage devices may comprise one or more types of storage device including optical memory 432 (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory 433 (e.g., RAM, EPROM, EEPROM, FLASH etc.), and/or magnetic memory 431 (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), among others.
- Such extended storage device may include volatile, non-volatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices.
- storage device includes one or more physical devices, and excludes propagating signals per se.
- aspects of the instructions described herein alternatively may be propagated by a communication medium (e.g., an electromagnetic signal, an optical signal, etc.), as opposed to being stored on a storage device.
- logic device 401 and storage device 402 may be integrated together into one or more hardware-logic components.
- Such hardware-logic components may include field-programmable gate arrays (FPGAs), program- and application-specific integrated circuits (PASIC/ASICs), program- and application-specific standard products (PSSP/ASSPs), system-on-a-chip (SOC), and complex programmable logic devices (CPLDs), for example.
- The term "program" may be used to describe an aspect of the computing system implemented to perform a particular function.
- a program may be instantiated via logic device executing machine-readable instructions held by storage device. It will be understood that different modules may be instantiated from the same application, service, code block, object, library, routine, API, function, etc. Likewise, the same program may be instantiated by different applications, services, code blocks, objects, routines, APIs, functions, etc.
- program may encompass individual or groups of executable files, data files, libraries, drivers, scripts, database records, etc.
- system of figure 4 may be used to implement embodiments of the invention.
- a program implementing the steps described with respect to figure 1 may be stored in storage device 402 and executed by logic device 401.
- Data used for the creation of the graphical representation of the selected space, including data describing the larger space may be stored in storage 402 or the extended storage devices 432, 433 or 431.
- the Logic device may use data received from the camera 416 or eye tracking system 460 to determine the instantaneous point of regard, and the display 411 used to display the graphical representation.
- the invention may be embodied in the form of a computer program.
- the elements of figure 4 may constitute an apparatus adapted to generate a graphical representation of a selected space, wherein said selected space lies within a larger space and wherein the size of said selected space is defined with regard to a predetermined scale.
- This apparatus may further be adapted to cause a display unit to display said representation.
- the apparatus may further be adapted to determine a point of attention of a user within said representation with reference to signals received from an eye tracking system.
- the apparatus may further be adapted to redefine the selected space to correspond to a new selected space positioned with respect to and containing said point of attention responsive to receiving an input via a user interface, said new selected space being situated in the larger space, and the new selected space being defined at a new scale, and the apparatus may further be adapted to regenerate said graphical representation on the basis of said redefined selected space, and to cause said display unit to display said regenerated representation.
- a “service”, as used herein, is an application program executable across multiple user sessions.
- a service may be available to one or more system components, programs, and/or other services.
- a service may run on one or more server-computing devices.
- display subsystem 411 may be used to present a visual representation of data held by storage device.
- This visual representation may take the form of a graphical user interface (GUI).
- the state of display subsystem 411 may likewise be transformed to visually represent changes in the underlying data.
- Display subsystem 411 may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with logic device and/or storage device in a shared enclosure, or such display devices may be peripheral display devices.
- input subsystem may comprise or interface with one or more user-input devices such as a keyboard 412, mouse 413, touch screen 411, or game controller, button, footswitch, etc. (not shown).
- the input subsystem may comprise or interface with selected natural user input (NUI) componentry.
- Example NUI componentry may include a microphone for speech and/or voice recognition; an infrared, colour, stereoscopic, and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker 460, accelerometer, and/or gyroscope for motion detection and/or intent recognition; as well as electric-field sensing componentry for assessing brain activity.
- communication subsystem 420 may be configured to communicatively couple computing system with one or more other computing devices.
- the communication subsystem may communicatively couple the computing device to a remote service hosted for example on a remote server 476 via a network of any size including for example a personal area network, local area network, wide area network, or the internet.
- Communication subsystem may include wired and/or wireless communication devices compatible with one or more different communication protocols.
- the communication subsystem may be configured for communication via a wireless telephone network 474, or a wired or wireless local- or wide-area network.
- the communication subsystem may allow computing system to send and/or receive messages to and/or from other devices via a network such as the Internet 475.
- the communications subsystem may additionally support short range inductive communications 421 with passive devices (NFC, RFID etc).
- the system of figure 4 is intended to reflect a broad range of different types of information handling system. It will be appreciated that many of the subsystems and features described with respect to figure 4 are not required for implementation of the invention, but are included to reflect possible systems in accordance with the present invention. It will be appreciated that system architectures vary widely, and the relationship between the different sub-systems of figure 4 is merely schematic, and is likely to vary in terms of layout and the distribution of roles in systems. It will be appreciated that, in practice, systems are likely to incorporate different subsets of the various features and subsystems described with respect to figure 4 . Figures 5 , 6 and 7 disclose further example devices in accordance with the present invention. Those of ordinary skill in the art will appreciate that systems may be employed in the future which also operate in accordance with the present invention.
- Figure 5 shows a smartphone device adaptable to constitute an embodiment.
- the smartphone device incorporates elements 401, 402, 403, 420, 433, 414, 415, 416, 411 as described above. It is in communication with the telephone network 474 and a server 476 via the network 475.
- elements 431, 432, 417, 412, 413 are omitted.
- the features disclosed in this figure may also be included within a tablet device as well.
- the dedicated eye tracking hardware 460 is omitted, and the device depends on the camera 416 with suitable software, for determining the point of regard.
- Figure 6 shows a vehicle adaptable to constitute an embodiment.
- the vehicle comprises elements 401, 402, 403, 420, 421, 433, 414, 415, 416, 460 and 421 as described above. It may be in communication with a server 476 via the mobile telephone network 474. On the other hand, elements 431, 432, 416, 417, 412, 413 and 475 are omitted.
- Figure 7 shows a computer device adaptable to constitute an embodiment.
- the computer device incorporates elements 401, 402, 403, 420, 430, 431, 432, as described above. It is in communication with elements 414, 415, 417, 412, 460 and 413 as peripheral devices which may also be incorporated in the same computer device, and with a server 476 via the network 475.
- elements 433, 421 and 474 are omitted, and element 411 is an ordinary display with or without touchscreen functionality.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Aviation & Aerospace Engineering (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- User Interface Of Digital Computer (AREA)
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP15306629.5A EP3156880A1 (fr) | 2015-10-14 | 2015-10-14 | Effet de zoom dans une interface de suivi du regard |
US15/291,186 US10416761B2 (en) | 2015-10-14 | 2016-10-12 | Zoom effect in gaze tracking interface |
CN201611157193.7A CN106774886A (zh) | 2015-10-14 | 2016-10-14 | 视线跟踪界面中的缩放效果 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP15306629.5A EP3156880A1 (fr) | 2015-10-14 | 2015-10-14 | Effet de zoom dans une interface de suivi du regard |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3156880A1 true EP3156880A1 (fr) | 2017-04-19 |
Family
ID=54360373
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP15306629.5A Withdrawn EP3156880A1 (fr) | 2015-10-14 | 2015-10-14 | Effet de zoom dans une interface de suivi du regard |
Country Status (3)
Country | Link |
---|---|
US (1) | US10416761B2 (fr) |
EP (1) | EP3156880A1 (fr) |
CN (1) | CN106774886A (fr) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10496162B2 (en) * | 2017-07-26 | 2019-12-03 | Microsoft Technology Licensing, Llc | Controlling a computer using eyegaze and dwell |
US10768696B2 (en) | 2017-10-05 | 2020-09-08 | Microsoft Technology Licensing, Llc | Eye gaze correction using pursuit vector |
CN109799908B (zh) * | 2019-01-02 | 2022-04-01 | 东南大学 | 一种基于眼动信号的图像缩放及拖拽方法 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013169237A1 (fr) * | 2012-05-09 | 2013-11-14 | Intel Corporation | Accentuation sélective basée sur un suivi oculaire de parties d'un dispositif d'affichage |
DE102013001327A1 (de) * | 2013-01-26 | 2014-07-31 | Audi Ag | Verfahren und Anzeigesystem zum blickrichtungsabhängigen Skalieren einer Darstellung |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050047629A1 (en) * | 2003-08-25 | 2005-03-03 | International Business Machines Corporation | System and method for selectively expanding or contracting a portion of a display using eye-gaze tracking |
US7561143B1 (en) * | 2004-03-19 | 2009-07-14 | The University of the Arts | Using gaze actions to interact with a display |
US8130260B2 (en) * | 2005-11-09 | 2012-03-06 | Johns Hopkins University | System and method for 3-dimensional display of image data |
US8808164B2 (en) * | 2008-03-28 | 2014-08-19 | Intuitive Surgical Operations, Inc. | Controlling a robotic surgical tool with a display monitor |
US8155479B2 (en) * | 2008-03-28 | 2012-04-10 | Intuitive Surgical Operations Inc. | Automated panning and digital zooming for robotic surgical systems |
US9728006B2 (en) * | 2009-07-20 | 2017-08-08 | Real Time Companies, LLC | Computer-aided system for 360° heads up display of safety/mission critical data |
US20130278631A1 (en) * | 2010-02-28 | 2013-10-24 | Osterhout Group, Inc. | 3d positioning of augmented reality information |
US9377852B1 (en) * | 2013-08-29 | 2016-06-28 | Rockwell Collins, Inc. | Eye tracking as a method to improve the user interface |
CN102830918B (zh) * | 2012-08-02 | 2016-05-04 | 东莞宇龙通信科技有限公司 | 移动终端及该移动终端调节显示字体大小的方法 |
US8797604B2 (en) * | 2012-08-24 | 2014-08-05 | Xerox Corporation | Methods and systems for creating structural documents |
US20140316543A1 (en) * | 2013-04-19 | 2014-10-23 | Qualcomm Incorporated | Configuring audio for a coordinated display session between a plurality of proximate client devices |
WO2016018488A2 (fr) * | 2014-05-09 | 2016-02-04 | Eyefluence, Inc. | Systèmes et procédés de discernement de signaux oculaires et d'identification biométrique continue |
- 2015-10-14: EP EP15306629.5A, patent EP3156880A1 (fr), not active (withdrawn)
- 2016-10-12: US US15/291,186, patent US10416761B2 (en), not active (expired, fee related)
- 2016-10-14: CN CN201611157193.7A, patent CN106774886A (zh), active (pending)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013169237A1 (fr) * | 2012-05-09 | 2013-11-14 | Intel Corporation | Accentuation sélective basée sur un suivi oculaire de parties d'un dispositif d'affichage |
DE102013001327A1 (de) * | 2013-01-26 | 2014-07-31 | Audi Ag | Verfahren und Anzeigesystem zum blickrichtungsabhängigen Skalieren einer Darstellung |
Also Published As
Publication number | Publication date |
---|---|
CN106774886A (zh) | 2017-05-31 |
US20170108924A1 (en) | 2017-04-20 |
US10416761B2 (en) | 2019-09-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10955914B2 (en) | Gaze-based object placement within a virtual reality environment | |
EP3172644B1 (fr) | Projection de regard multiutilisateur à l'aide de dispositifs de visiocasque | |
CN107810465B (zh) | 用于产生绘制表面的系统和方法 | |
KR102235410B1 (ko) | 머리-장착형 디스플레이에서의 메뉴 내비게이션 | |
EP3014411B1 (fr) | Navigation dans une interface d'utilisateur | |
JP6387404B2 (ja) | 位置信号を介したユーザインターフェイスエレメントの選択 | |
CN113168725A (zh) | 使用语音命令和定义的视角优化虚拟数据视图 | |
US20170108923A1 (en) | Historical representation in gaze tracking interface | |
EP3185107A1 (fr) | Interaction avec une interface utilisateur pour les visiocasques transparents | |
US20180046363A1 (en) | Digital Content View Control | |
US20190310756A1 (en) | System, method, computer readable medium, and viewer-interface for prioritized selection of mutually occluding objects in a virtual environment | |
US10416761B2 (en) | Zoom effect in gaze tracking interface | |
US20170109007A1 (en) | Smart pan for representation of physical space | |
CN113457144A (zh) | 游戏中的虚拟单位选取方法及装置、存储介质及电子设备 | |
KR20240112287A (ko) | 메타버스 콘텐츠 모달리티 매핑 | |
EP3483713A1 (fr) | Système et procédé de modulation d'une rétroaction d'interface de commande | |
CN108499102B (zh) | 信息界面展示方法及装置、存储介质、电子设备 | |
de Lacerda Campos | Augmented Reality in Industrial Equipment | |
CN117075770A (zh) | 基于扩展现实的交互控制方法、装置、电子设备和存储介质 | |
CN117784919A (zh) | 虚拟输入设备的显示方法、装置、电子设备以及存储介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20171018 |
|
RBV | Designated contracting states (corrected) |
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
REG | Reference to a national code |
Ref country code: HK Ref legal event code: DE Ref document number: 1236653 Country of ref document: HK |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
17Q | First examination report despatched |
Effective date: 20200207 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20200818 |
|
REG | Reference to a national code |
Ref country code: HK Ref legal event code: WD Ref document number: 1236653 Country of ref document: HK |