WO2021221676A1 - Frames of reference - Google Patents

Frames of reference

Info

Publication number
WO2021221676A1
Authority
WO
WIPO (PCT)
Prior art keywords
user device
circuitry
output image
data
frame
Prior art date
Application number
PCT/US2020/030849
Other languages
French (fr)
Inventor
Raimon Castells DEMONET
Josep Tarradas I JUAN
Original Assignee
Hewlett-Packard Development Company, L.P.
Priority date
Filing date
Publication date
Application filed by Hewlett-Packard Development Company, L.P. filed Critical Hewlett-Packard Development Company, L.P.
Priority to PCT/US2020/030849 priority Critical patent/WO2021221676A1/en
Publication of WO2021221676A1 publication Critical patent/WO2021221676A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/038Indexing scheme relating to G06F3/038
    • G06F2203/0383Remote input, i.e. interface arrangements in which the signals generated by a pointing device are transmitted to a PC at a remote location, e.g. to a PC in a LAN

Definitions

  • a wide range of user devices are available for interacting with computers.
  • a mouse allows a user to interact with a displayed image to perform numerous actions such as invoking functionality.
  • the functionality can be associated with a feature of software, a universal resource locator (URL) and the like.
  • some user devices are not closely coupled with computers.
  • a laser-pointer can be used to great effect during a presentation by drawing the attention of an audience to certain aspects of the presentation.
  • the laser-pointer is limited in its functionality to projecting a light spot onto the projected image and stepping either forwards or backwards through the presentation.
  • the laser-pointer does not support invoking functionality in the same way that a mouse and a corresponding pointer can invoke functionality.
  • figure 1 shows a view of a user device according to example implementations
  • figure 2 illustrates a user device defined frame of reference according to example implementations
  • figures 3 to 5 depict defining reference points of an output image according to example implementations
  • figure 6 shows a further user device according to example implementations;
  • figure 7 depicts an output image with defined reference points within the user device defined frame of reference of figure 2 according to example implementations;
  • figure 8 shows the output image of figure 7 and operation of a user device according to example implementations;
  • figure 9 depicts a flow chart for establishing the reference points depicted in figures 3 to 5 according to example implementations;
  • figure 10 illustrates a flow chart for establishing the user device frame of reference of figure 2 according to example implementations
  • figure 11 shows a flow chart for defining the reference points of figures 3 to 5 according to example implementations
  • figure 12 depicts a further flow chart for defining the reference points of figures 3 to 5 according to further example implementations;
  • figure 13 illustrates mapping of the reference points of figures 3 to 5 to an output image according to example implementations;
  • figure 14 shows a flow chart for determining the location of an output image of the user device within the user device reference frame and generating an associated indicium within the output image according to example implementations;
  • figure 15 depicts a flow chart for invoking functionality associated with an output image using the user device according to example implementations; and
  • figure 16 illustrates machine-readable storage and machine-executable instructions according to example implementations.
  • Referring to FIG. 1, there is shown a view 100 of a user device 102 for interacting with an output image 104 produced by an output peripheral 106 controlled by a computer 108.
  • the output peripheral 106 can be a projector. Consequently, the output image 104 can be a projected image output by the projector.
  • the output image 104 can have an associated indicium 110.
  • the indicium 110 can be produced by the user device 102, the computer 108 or both.
  • the user device 102 comprises a light source 112.
  • the light source 112 can comprise a light emitting diode (LED), laser diode, collimated light or any other source of light to produce the indicium 110.
  • the user device 102 comprises at least one motion sensor 114 for sensing relative movement of the user device 102 in 3-dimensional space and attitude of the user device 102.
  • the motion sensor 114 comprises circuitry 116 to detect relative translation and relative rotation of the user device 102.
  • the circuitry 116 to detect relative translation and relative rotation of the user device 102 detects movement in any or all of the 3D translation directions and rotations in any or all rotational directions.
  • the rotation in any or all rotational directions can comprise one, or both, of pitch and yaw movements.
  • Example implementations can be realised that detect roll in addition to either, or both, of pitch and yaw.
  • the motion sensor 114 also produces, or can be used to derive, position data 118 and attitude data 120.
  • the attitude data 120 comprises an indication of current orientation of the user device 102.
  • the position data represents the position of the user device within the user device frame of reference.
  • Position circuitry 121 is provided to generate a user device frame of reference associated with the user device, and to derive the position data 118 and attitude data 120, responsive to the motion sensor 114, comprising an indication of the device position and attitude expressed relative to the device frame of reference.
  • At least one, or both, of the position circuitry 121 or the motion sensor 114, taken jointly or severally, are an example of image plane circuitry to define the region of the output image, that is, circuitry to establish a set of points that define the region of the output image.
  • the position data 118 and attitude data 120 are used to produce a user device frame of reference 122.
  • Data 124 representing the user device frame of reference 122 is stored in a memory 126 of the user device 102.
  • the user device comprises an actuator 128.
  • the actuator 128 can be used to generate actuation data or control signals that perform respective functions.
  • the actuator 128 can control whether or not the light source 112 outputs a beam of light 130.
  • the actuator 128 can, additionally or alternatively, generate software control signals to be output to a data processing system.
  • a data processing system is an electronic device to perform data processing operations using a processor or other circuitry to process data.
  • An example data processing system is a computer, which may be a computer of any form factor such as a desktop tower computer or a laptop computer.
  • the actuator can also control operations of the user device 102 such as, for example, calibration operations.
  • the user device 102 comprises communication circuitry 132.
  • the communication circuitry 132 is circuitry to perform communication to a separate device.
  • the communication circuitry 132 can be realized using a transceiver, for example.
  • the communication circuitry 132 is used to communicate with a data processing system such as the computer 108 illustrated.
  • the communication circuitry 132 can be realised using, for example, wireless communication protocols such as, for example, BLUETOOTH, ZIGBEE, or the like.
  • the user device frame of reference 122 is defined using the user device 102. The process of defining the frame of reference is described below.
  • the frame of reference 122 is used to determine and track the position and attitude of the user device within that frame of reference and to determine the position and attitude of the output image 104 within that frame of reference 122.
  • the frame of reference 122 comprises an orthonormal basis defined by respective unit basis vectors 134 to 138 (e1, e2 and e3). Movement, position and attitude of the user device 102 within the user device frame of reference 122 can be determined by the user device 102 or can be determined by the computer 108.
  • the frame of reference is also used to determine frame of reference coordinates of a number of reference points 140’, 140”, 140’” associated with the output image 104.
  • The number of reference points 140’, 140”, 140’” will be referred to collectively as reference points 140 unless otherwise indicated.
  • the reference points 140 are selected and defined by the user device 102. In the example depicted, the reference points 140 correspond to corners of the output image 104 that were defined using position and attitude data of the user device 102 within the frame of reference 122.
  • the reference points 140 define an example implementation of a region of the output image 104 of interest, that is, they define the expanse of the output image 104, or the expanse of a sub-region of interest of the output image 104. It will be appreciated that the reference points 140 are an example implementation of a set of points associated with the output image that are derived from or associated with at least one, or both, of position data and attitude data.
  • the computer 108 comprises communication circuitry 142.
  • the communication circuitry 142 is circuitry used to communicate with the user device 102.
  • the communication circuitry 142 receives at least one, all, or any or all permutations, of data associated with the user device frame of reference 122, the position data 118 and attitude data 120 of the user device 102, at least one, or both, of the position or attitude of the output image 104, or the position of the indicium 110 taken jointly and severally, collectively known as the user device reference frame coordinates or data 143.
  • the computer 108 comprises a mapper 144.
  • the mapper 144 is used to map, or otherwise associate, the position of at least one, or both, of the reference points 140 or the indicium 110 into or with image data 146 from which the output image 104 is derived.
  • Indicium generator circuitry 148 generates an indicium 150.
  • the position of the indicium 150 within the image data 146 corresponds to, or is otherwise in registry with, the indicium 110 in the output image 104.
  • the image data 146 is output by image output software 152.
  • the image output software 152 can comprise, for example, presentation software.
  • the computer 108 comprises mouse and pointer control circuitry 145.
  • the mouse and pointer control circuitry can control the position of a pointer in response to movement of a mouse.
  • Example implementations pass coordinate data associated with the indicium, expressed in terms of the user device reference frame or image data coordinates, to the mouse and pointer control circuitry 145 to allow a region of the image data, and consequently the output image 104, to be selected to thereby invoke functionality associated with that region of the image data 146 or output image 104.
  • Figure 2 illustrates the user device defined frame of reference 122 according to example implementations.
  • the frame of reference 122 comprises an orthonormal basis of unit vectors 134 to 138.
  • the frame of reference 122 is established by positioning the light spot or indicium 110 on an initial reference point 202 and actuating the actuator 128 in a manner to indicate that such an initial reference point 202 is being defined.
  • the foregoing can be achieved by, for example, placing the user device 102 in a calibration mode, or by having a respective mode of actuating the actuator 128 of the user device 102 such as holding a predetermined button for a predetermined period of time or pressing the predetermined button a number of times.
  • Indicating such an initial reference point causes the user device 102 to store its present attitude data as an initial unit basis vector 134 and sets the origin 204 of the frame of reference 122.
  • the basis vector 134 is the initial basis vector.
  • the other two basis vectors 136 and 138 are calculated from the initial basis vector to form a set of mutually orthogonal basis vectors 134 to 138 that define the frame of reference 122.
  • the position within the frame of reference 122 of the initial reference point 202 is determined. For example, knowing the initial basis vector 134, the distance r1 between the user device 102 and the initial reference point 202 can be used to calculate the position of the initial reference point 202 within the frame of reference 122.
  • the attitude data 120 of the user device 102 when setting the initial reference point 202, forms an axis 134 of the frame of reference 122.
  • the basis vectors 134 to 138 can correspond to x, y, z axes.
  • the reference point 140’ is an example of the initial reference point 202.
  • a state vector 206 is determined when setting the frame of reference and setting the initial reference point, P1 202.
  • the state vector 206 represents the state of the user device 102 at the time of establishing the initial basis vector 134 and reference point 202.
  • the state vector 206 can comprise coordinates of the user device 102 within the frame of reference 122 and attitude data of the user device 102 within the reference frame. Since setting the initial reference point 202 concurrently defines the origin 204 of the frame of reference, the position of the user device 102 will be (0,0,0).
  • Origin circuitry is circuitry to define the origin of the user device frame of reference and/or a set of orthogonal basis vectors of the user device frame of reference.
  • the attitude data 120 will represent current outputs or indications of attitude of the user device in terms of a number of, or any or all permutations of, roll, pitch and yaw. Therefore, the state vector 206 for the user device 102, at the point of establishing the frame of reference 122, will be SVP1(0, 0, 0, γP1, βP1, αP1), where γP1, βP1 and αP1 represent roll, pitch and yaw respectively. It can be seen that the initial reference point 202, when expressed in terms of the basis vectors 134 to 138 of the frame of reference 122, has coordinates P1(r1, 0, 0), since the reference point 202 lies on the axis defined by the initial basis vector 134.
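By way of illustration only, the state-vector bookkeeping described above can be sketched in Python. The class name, the field layout and the pitch/yaw-to-direction convention below are assumptions made for the sketch and are not taken from the example implementations themselves.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class StateVector:
        # Position of the user device in its own frame of reference (e1, e2, e3 axes).
        position: np.ndarray
        roll: float    # radians
        pitch: float   # radians
        yaw: float     # radians

        def pointing_direction(self) -> np.ndarray:
            """Unit vector along the device's pointing axis, derived from the attitude."""
            cp, sp = np.cos(self.pitch), np.sin(self.pitch)
            cy, sy = np.cos(self.yaw), np.sin(self.yaw)
            return np.array([cp * cy, cp * sy, sp])

    # When the initial reference point P1 is set, the device position defines the
    # origin, so the state vector reduces to (0, 0, 0, roll, pitch, yaw).
    sv_p1 = StateVector(position=np.zeros(3), roll=0.0, pitch=0.0, yaw=0.0)
    print(sv_p1.pointing_direction())   # -> [1. 0. 0.], i.e. along the e1 axis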
  • a second reference point 208 can be defined by moving the user device 102 to a second or further position 210.
  • the second or further position 210 is determined by monitoring the outputs of the motion sensors 114 and producing a transformation matrix, T12, 214 that represents the total translation and rotation of the user device 102 from the first position 212 to the second position 210.
  • a second state vector 216, SVP2, is determined when the user device is setting the second or further reference point P2.
  • the second state vector 216 represents the state of the user device 102 at the time of defining the second or further reference point 208.
  • the second state vector 216 can comprise coordinates of the user device 102 within the frame of reference 122 and attitude data of the user device 102 within the reference frame.
  • the position of the user device 102 will be determined by transforming the point (0,0,0) by the transformation matrix T12 214.
  • the attitude data of the second state vector 216 will represent current outputs or indications of orientation of the user device in terms of a number of, or any or all permutations of, roll, pitch and yaw when the user device is in the second position.
  • the second state vector 216 for the user device 102 will be SVP2(gP2,1·e1, gP2,2·e2, gP2,3·e3, γP2, βP2, αP2), where gP2,1, gP2,2 and gP2,3 represent the coordinates of the user device within the frame of reference and γP2, βP2 and αP2 represent roll, pitch and yaw respectively. It can be seen that the second reference point 208, when expressed in terms of the basis vectors 134 to 138 of the frame of reference 122, has coordinates P2(bP2,1·e1, bP2,2·e2, bP2,3·e3), where bP2,1, bP2,2 and bP2,3 are constants.
  • a third reference point 218 can be defined by moving the user device 102 to a third or still further position 220.
  • the third position 220 is determined by monitoring the outputs of the motion sensors 114 and producing a transformation matrix, T23, 222 that represents the total translation and rotation of the user device 102 from the second position 210 to the third position 220.
  • a third state vector 224 is determined when the user device is setting the third or still further reference point P3.
  • the third state vector 224 represents the state of the user device 102 at the time of defining the third or further reference point 218.
  • the third state vector 224 can comprise coordinates of the user device 102 within the frame of reference 122 and attitude data of the user device 102 within the reference frame.
  • the position of the user device 102 will be determined by transforming the point (0,0,0) by the transformation matrix T12 214 and transforming the result by the transformation matrix T23 222.
  • the attitude data of the third state vector 224 will represent current outputs or indications of orientation of the user device 102 in terms of a number of, or any or all permutations of, roll, pitch and yaw when the user device is in the third position 220. Therefore, the third state vector 224 for the user device 102 will be SVP3(gP3,1·e1, gP3,2·e2, gP3,3·e3, γP3, βP3, αP3), where gP3,1, gP3,2 and gP3,3 represent the coordinates of the user device within the frame of reference and γP3, βP3 and αP3 represent roll, pitch and yaw respectively. It can be seen that the third reference point 218, when expressed in terms of the basis vectors 134 to 138 of the frame of reference 122, has coordinates P3(bP3,1·e1, bP3,2·e2, bP3,3·e3), where bP3,1, bP3,2 and bP3,3 are constants.
  • the user device 102 in the first 212, second 210 and third 220 positions is at distances r1, r2 and r3 from the respective reference points 202, 208, 218.
  • the reference points 202, 208, 218 are examples of the reference points 140 described with reference to figure 1.
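The chaining of the relative transformations T12 and T23 described above can be illustrated with homogeneous 4x4 transforms. This is a rough sketch only: the rotation is restricted to yaw for brevity and the angles and translations are made-up values.

    import numpy as np

    def transform(yaw: float, translation) -> np.ndarray:
        """4x4 homogeneous transform: rotation about the vertical axis, then a translation."""
        c, s = np.cos(yaw), np.sin(yaw)
        T = np.eye(4)
        T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
        T[:3, 3] = translation
        return T

    def apply(T: np.ndarray, point) -> np.ndarray:
        """Apply a homogeneous transform to a 3D point."""
        p = np.append(np.asarray(point, dtype=float), 1.0)
        return (T @ p)[:3]

    T12 = transform(yaw=np.radians(10.0), translation=[0.5, 0.0, 0.0])   # position 212 -> 210
    T23 = transform(yaw=np.radians(-5.0), translation=[0.3, 0.2, 0.0])   # position 210 -> 220

    origin = [0.0, 0.0, 0.0]
    second_position = apply(T12, origin)          # device position when setting P2
    third_position = apply(T23 @ T12, origin)     # transform by T12, then by T23, as above
    print(second_position, third_position)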
  • Figures 3 to 5 demonstrate example implementations for determining the coordinates of the reference points 202, 208 and 218.
  • Referring to figure 3, there is shown a view 300 of an example implementation for setting the first reference point 202 and establishing coordinates, expressed within the user device frame of reference 122, of that first reference point 202.
  • the coordinates of the first reference point 202 are determined by illuminating a common point, that is, the same reference point 202, with the user device from two different perspectives as follows.
  • the user device 102 is used, that is, positioned and orientated, at an initial position and an initial attitude 302 to set the first reference point 202 associated with the output image 104 by placing the indicium 110, created by the light-source 112, on the output image 104 where desired and actuating the actuator 128 to capture the corresponding position data 118 and attitude data 120 of the user device 102 at that initial position 302.
  • the first position 212 is an example of the initial position.
  • the initial position 302 has a corresponding initial position state vector 304.
  • the user device 102 is moved to a further position 306.
  • the user device 102 is used, that is, positioned and orientated, at the further position and further attitude 306, to set the same, that is, the first, reference point 202 associated with the output image 104 by placing the indicium 110, created by the light-source 112, on the output image 104 where desired and actuating the actuator 128 to capture the corresponding position data 118 and attitude data 120 of the user device 102 at that further position and further attitude 306.
  • the further position 306 has a corresponding further position state vector 308 corresponding to the first reference point P1. The position data of the first reference point P1 is calculated from the point of intersection between the lines defined by the initial state vector 304 and the further state vector 308.
  • the point of intersection comprises an actual point of intersection, assuming that the lines actually do intersect, as well as a point of sufficient convergence or minimal separation between the two lines in circumstances where the two lines do not intersect.
  • the two lines may not actually intersect where, for example, there is a margin of error in positioning the indicium 110 on precisely the same point.
  • the transformation between the initial position 302 and the further position 306 can be represented using a transformation matrix 310 that represents the translations and rotations to move the user device 102 from the initial position 302 to the further position 306, as determined by the outputs of the motion sensors 114.
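The point of intersection, or of closest approach, between the two pointing lines can be computed with standard geometry. The following sketch is a generic closest-point routine rather than code from the example implementations; the positions and directions used at the end are assumed values.

    import numpy as np

    def closest_point_between_lines(p1, d1, p2, d2):
        """Midpoint of the shortest segment joining line p1 + t*d1 and line p2 + s*d2."""
        p1, d1, p2, d2 = (np.asarray(v, dtype=float) for v in (p1, d1, p2, d2))
        d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
        w0 = p1 - p2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ w0, d2 @ w0
        denom = a * c - b * b
        if abs(denom) < 1e-9:               # nearly parallel sightings: no reliable estimate
            raise ValueError("pointing lines are (nearly) parallel")
        t = (b * e - c * d) / denom         # parameter on the first line
        s = (a * e - b * d) / denom         # parameter on the second line
        return 0.5 * ((p1 + t * d1) + (p2 + s * d2))

    # Initial position 302 (the origin) and further position 306, both aimed at P1.
    p_initial, dir_initial = [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]
    p_further, dir_further = [0.0, 1.0, 0.0], [1.0, -0.5, 0.0]
    print(closest_point_between_lines(p_initial, dir_initial, p_further, dir_further))
    # -> approximately [2. 0. 0.]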
  • Referring to figure 4, there is shown a view 400 of an example implementation for setting the second reference point 208 and establishing coordinates, expressed within the user device frame of reference 122, of that second reference point 208.
  • the coordinates of the second reference point 208 are determined by illuminating a common point, that is, the same reference point 208, with the user device from two different perspectives as follows.
  • the user device 102 is used, that is, positioned and orientated, at an initial position and an initial attitude 402 to set the second reference point 208 associated with the output image 104 by placing the indicium 110, created by the light-source 112, on the output image 104 where desired and actuating the actuator 128 to capture the corresponding position data 118 and attitude data 120 of the user device 102 at that initial position 402.
  • the second reference point 208 is an example of the second reference point 140”.
  • the initial position 402 has a corresponding initial position state vector 404.
  • the user device 102 is moved to a further position 406.
  • the user device 102 is used, that is, positioned and orientated, at the further position and further attitude 406 to set the same, that is, second, reference point 208 associated with the output image 104 by placing the indicium 110, created by the light-source 112, on the output image 104 where desired and actuating the actuator 128 to capture the corresponding position data 118 and attitude data 120 of the user device 102 at that further position and further attitude 406.
  • the further position 406 has a corresponding further position state vector SVPF2(gF2,1·e1, gF2,2·e2, gF2,3·e3, γF2, βF2, αF2) 408 corresponding to the second reference point P2 140”.
  • the position data of the second reference point P2(bP2,1·e1, bP2,2·e2, bP2,3·e3) 140” is calculated from the point of intersection between the lines defined by the initial state vector 404 and the further state vector 408.
  • the point of intersection comprises an actual point of intersection assuming that lines do intersect or a point of sufficient convergence or minimal separation between the two lines in circumstances where the two lines do not actually intersect.
  • the two lines may not actually intersect where, for example, there is a margin of error in positioning the indicium 110 on precisely the same point.
  • the transformation between the initial position 402 and the further position 406 can be represented using a transformation matrix 410 that represents the translations and rotations to move the user device 102 from the initial position 402 to the further position 406, as determined by the outputs of the motion sensors 114.
  • the state vector of the further position 406 can be determined by applying the combined transformation, that is, the transformation matrix 410, to the initial position state vector 404.
  • Referring to figure 5, there is shown a view 500 of an example implementation for setting the third reference point 218 and establishing coordinates, expressed within the user device frame of reference 122, of that third reference point 218.
  • the coordinates of the third reference point 218 are determined by illuminating a common point, that is, the same reference point 218, with the user device from two different perspectives as follows.
  • the user device 102 is used, that is, positioned and orientated, at an initial position and an initial attitude 502 to set the third reference point 218 associated with the output image 104 by placing the indicium 110, created by the light-source 112, on the output image 104 where desired and actuating the actuator 128 to capture the corresponding position data 118 and attitude data 120 of the user device 102 at that initial position 502.
  • the third reference point 218 is an example of the third reference point 140”’.
  • the initial position 502 has a corresponding initial position state vector 504.
  • the user device 102 is moved to a further position 506.
  • the user device 102 is used, that is, positioned and orientated, at the further position and further attitude 506 to set the same, that is, the third, reference point 218 associated with the output image 104 by placing the indicium 110, created by the light-source 112, on the output image 104 where desired and actuating the actuator 128 to capture the corresponding position data 118 and attitude data 120 of the user device 102 at that further position and further attitude 506.
  • the further position 506 has a corresponding further position state vector 508 corresponding to the third reference point P3 140”’.
  • the point of intersection comprises an actual point of intersection assuming that lines actually do intersect or a point of sufficient convergence or minimal separation between the two lines in circumstances where the two lines do not actually intersect.
  • the two lines may not actually intersect where, for example, there is a margin of error in positioning the indicium 110 on precisely the same point.
  • the transformation between the initial position 502 and the further position 506 can be represented using a transformation matrix 510 that represents the translations and rotations to move the user device 102 from the initial position 502 to the further position 506, as determined by the outputs of the motion sensors 114.
  • the state vector for the further position 506 can be found by applying the transformation matrix 510 to the initial position state vector 504.
  • Referring to figure 6, there is shown a view 600 of a further user device 602 according to example implementations.
  • the user device 602 is substantially similar to the above user device 102, but for the addition of a range finder 604.
  • the range finder 604 determines the distance r1 606 to an object identified by the indicium 110.
  • Example implementations of the range finder 604 can comprise a laser range finder.
  • the user device 602 is used to determine the positions of the number of reference points 140 associated with the output image 104.
  • the user device reference frame 122 is formed in the manner described above with reference to figure 1, but for the coordinates of the reference points 140, in user device reference frame coordinates, being determined from the position data 118 and attitude data 120 of the user device 602 together with the measured range or distance.
  • the state vectors of the user device 602 at the various positions, together with the respective ranges r1, r2 and r3, give the reference point coordinates within the frame of reference 122.
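With a range finder, no second sighting is required: a reference point can be placed directly along the pointing direction at the measured range. A minimal sketch, in which the attitude-to-direction convention and the numeric values are assumptions:

    import numpy as np

    def direction_from_attitude(pitch: float, yaw: float) -> np.ndarray:
        """Unit pointing vector for a given pitch and yaw (radians)."""
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        return np.array([cp * cy, cp * sy, sp])

    def reference_point(device_position, pitch, yaw, range_r):
        """Reference point = device position + measured range * pointing direction."""
        return np.asarray(device_position, dtype=float) + range_r * direction_from_attitude(pitch, yaw)

    # Device at the origin of its own frame, pointing slightly up and to the right,
    # with the range finder reporting 3.2 m to the illuminated corner of the image.
    p1 = reference_point([0.0, 0.0, 0.0], pitch=np.radians(5.0), yaw=np.radians(-10.0), range_r=3.2)
    print(p1)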
  • Figure 7 depicts the output image 104 with defined reference points 140, expressed using the user device defined frame of reference 122 of figure 2, according to example implementations.
  • the output image 104 bears the indicium 110 from the light-source 112.
  • the indicium 110 can be navigated towards an active or invokable region 704 of the image 104.
  • Examples of such an active or invokable region of such an image comprise embedded video, a URL or any other active or invokable region. If the indicium was a pointer under the control of a mouse and the image was an image on a computer screen, the active or invokable region could be invoked using the mouse and pointer.
  • where the output image 104 is projected, or where the indicium 110 is otherwise independent of the image data 146 from which the output image 104 is derived, invoking the active region would not, but for example implementations described herein, be possible.
  • Example implementations can be realised that track the position of the indicium 110 in the user device frame of reference 122 and map that position, using the mapper 144, into the image data 146.
  • the coordinates of the indicium within the image data 146 are passed to the mouse and pointer control circuitry 145 and any active or invokable region of the image data 146 is invoked, thereby giving the impression of invoking the active or invokable region 704 of the output image 104 using the user device.
  • Referring to figure 8, there is shown a view 800 of the output image 104 of figure 7 and operation of a user device according to example implementations.
  • the optical path 802 of the user device 102, in a position that is different to position 702, is obscured by an object 804.
  • the state vector 806 associated with the user device 102 is still communicated to the computer 108, which allows the indicium 110 to be placed within the image data 146 and, therefore, allows the indicium 110 to be displayed on the output image 104 notwithstanding the object 804 obscuring the optical path 802 to the output image 104.
  • Referring to figure 9, there is illustrated a view 900 of a flow chart for establishing the reference points depicted in figures 3 to 5 according to example implementations.
  • the user device frame of reference 122 is established at 902. As indicated above, the frame of reference 122 is defined by and established with reference to the user device.
  • the reference points 140 in the output image 104 are established as indicated above.
  • the reference points 140 are expressed in terms of user device frame of reference coordinates. Data associated with any or all permutations of the reference points, the frame of reference, the indicium, the position data or attitude data of the user device can be output to the computer 108 at 906.
  • Figure 10 shows a view of a flow chart 1000 comprising further details for establishing the user device frame of reference as described at 902 above in figure 9 according to example implementations.
  • a set-up or calibration mode of the user device 102 is entered or commenced by detecting an initial calibration actuation of the actuator 128.
  • the attitude data, or attitude output signals, associated with the motion sensors 114 is noted at the time of actuation of the actuator 128.
  • the attitude data, or attitude signals is used to establish a first unit basis vector 134 of the orthonormal basis 122 at 1006.
  • the remainder of the unit basis vectors 136 and 138 are established at 1008 to produce the orthonormal basis 122.
  • attitude data 120 and position data 118 are established for the user device 102 at an initial position when illuminating a target or desired reference point 140.
  • position data 118 and attitude data 120 are established for the user device at a further position when illuminating the same target or desired reference point 140.
  • the coordinates of the reference point 140 expressed in user device frame of reference coordinates, are determined at 1106 from the initial position data 118 and attitude data 120 and the further position data 118 and further attitude data 120.
  • the coordinates of the reference point 140 are output for further processing such as, for example, transmission to the computer 108.
  • Referring to figure 12, there is depicted a further flow chart 1200 for defining reference points 140 of each of figures 3 to 5 according to further example implementations using the user device 602 of figure 6.
  • the orientation of the user device 602 at an initial position is established when illuminating a respective reference point together with a range measurement to the reference point 140 and the position data 118 of the user device 602.
  • the reference point coordinates, expressed in terms of user device frame of reference coordinates, are established at 1204 from the attitude data 120, the initial position data 118 and the range r to the reference point 140.
  • the coordinates of the reference point 140 are output for further processing at 1206. The above is repeated for each target or desired reference point at 1208.
  • Figure 13 illustrates a view 1300 of the processing performed by the mapper 144.
  • the image data 146 will have a predetermined resolution or one of a set of possible predetermined resolutions. Maximum horizontal 1302 and vertical 1304 resolutions are predetermined.
  • the coordinates of the reference points 140 are mapped onto the image resolution such that the reference points 140’, 140”, 140”' map to the corners 1306, 1308, 1310 respectively of the image data 146. Mapping the corners in such a manner allows the computer-generated indicium 150, generated by the indicium generator circuitry 148, to be mapped appropriately given the user device reference frame coordinates of the indicium 110 output by the user device 102 or 602.
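One possible realisation of the mapper 144 is to treat the reference points as spanning the image plane and to express a point on that plane as fractions along the two edges before scaling by the image resolution. The corner assignment (140’ as top-left, 140” as top-right, 140”’ as bottom-left) and the numeric values below are assumptions of the sketch.

    import numpy as np

    def map_to_pixels(point, p1, p2, p3, width, height):
        """Express `point` as p1 + u*(p2 - p1) + v*(p3 - p1) and scale (u, v) by the resolution."""
        p1, p2, p3, point = (np.asarray(v, dtype=float) for v in (p1, p2, p3, point))
        edges = np.column_stack((p2 - p1, p3 - p1))          # 3x2 basis spanning the image plane
        uv, *_ = np.linalg.lstsq(edges, point - p1, rcond=None)
        u, v = uv
        return u * width, v * height

    # Three corners of the projected image measured in the device frame (made-up values)
    # and an indicium located half-way across and a quarter of the way down.
    corner_tl, corner_tr, corner_bl = [3.0, -1.0, 0.5], [3.0, 1.0, 0.5], [3.0, -1.0, -0.7]
    indicium = [3.0, 0.0, 0.2]
    print(map_to_pixels(indicium, corner_tl, corner_tr, corner_bl, width=1920, height=1080))
    # -> (960.0, 270.0)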
  • Referring to figure 14, there is illustrated a flow chart 1400 for determining the location of an output of the user device within the user device reference frame and generating an associated indicium within the output image.
  • the output of the user device can comprise the indicium 110.
  • current position data and attitude data of the user device are determined.
  • a point of intersection of a projection from the user device with the output image 104 is determined in user device reference frame coordinates.
  • Example implementations can realise the foregoing by determining the point of intersection of an equation of a line based on the position data and attitude data of the user device with an equation of a plane based on the reference points 140.
  • the coordinates of the point of intersection are output to the computer 108.
  • the computer processes the coordinates to generate the computer-generated indicium 150 to be incorporated into the image data 146 and output by the image output device 106.
  • the coordinates of the indicium can be mapped into image coordinates of the image data 146 and output to mouse and pointer control circuitry to allow the latter to invoke or otherwise activate functionality associated with the image data 146 in response to the computer 108 receiving actuation data from the user device 102 or 602.
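The intersection of the line defined by the user device's position and attitude with the plane defined by the reference points 140 is a standard ray-plane computation. A sketch under assumed positions and directions:

    import numpy as np

    def ray_plane_intersection(origin, direction, p1, p2, p3):
        """Return the point where the ray origin + t*direction (t >= 0) meets the plane through p1, p2, p3."""
        origin, direction, p1, p2, p3 = (np.asarray(v, dtype=float) for v in (origin, direction, p1, p2, p3))
        normal = np.cross(p2 - p1, p3 - p1)
        denom = normal @ direction
        if abs(denom) < 1e-9:
            raise ValueError("ray is parallel to the image plane")
        t = (normal @ (p1 - origin)) / denom
        if t < 0:
            raise ValueError("image plane is behind the device")
        return origin + t * direction

    # Device at the origin of its frame, pointing roughly towards the image plane x = 3.
    p1, p2, p3 = [3.0, -1.0, 0.5], [3.0, 1.0, 0.5], [3.0, -1.0, -0.7]
    hit = ray_plane_intersection([0.0, 0.0, 0.0], [1.0, 0.1, 0.0], p1, p2, p3)
    print(hit)   # -> [3.  0.3 0. ]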
  • Referring to figure 15, there is depicted a flow chart 1500 for invoking functionality associated with an output image using the user device.
  • the current position data and attitude data of the user device are determined.
  • a point of intersection of a projection from the user device with the output image 104 is determined in user device reference frame coordinates.
  • Example implementations can realise the foregoing by determining the point of intersection of an equation of a line based on the position data and attitude data of the user device with an equation of a plane based on the reference points 140.
  • the coordinates of the point of intersection are output to the computer 108.
  • the computer 108 processes the coordinates to map the coordinates of the indicium into image coordinates of the image data 146.
  • the computer 108 can display an indicium via the mouse and pointer control circuitry at the indicium image data coordinates at 1508.
  • the computer 108 outputs, at 1510, the mapped coordinates to the mouse and pointer control circuitry 145 to allow the latter to invoke or otherwise activate functionality associated with the image data 146 in response to the computer 108 receiving actuation data from the user device 102 or 602.
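The hand-off to the mouse and pointer control circuitry 145 could, on a desktop operating system, be realised with an off-the-shelf automation package such as pyautogui; the example implementations do not name any particular mechanism, so the following is only one possible sketch.

    import pyautogui

    def on_indicium_update(x_pixels: float, y_pixels: float) -> None:
        """Keep the computer-generated pointer in registry with the indicium."""
        pyautogui.moveTo(int(x_pixels), int(y_pixels))

    def on_actuation(x_pixels: float, y_pixels: float) -> None:
        """Invoke whatever lies under the indicium (URL, embedded video, ...)."""
        pyautogui.click(int(x_pixels), int(y_pixels))

    # Example: the mapper reported pixel coordinates (960, 270) for the indicium.
    on_indicium_update(960, 270)
    # ... actuation data received from the user device over the wireless link ...
    on_actuation(960, 270)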
  • circuitry as used herein can comprise any of physical electronic circuitry, software (such as machine-readable and machine-executable instructions), hardware, application specific integrated circuitry, or the like, taken jointly or severally in any and all permutations.
  • implementations also provide machine-readable storage storing such machine-executable instructions.
  • the machine-readable storage can comprise transitory or non-transitory machine-readable storage.
  • the machine can comprise one or more processors, or other circuitry, for executing the instructions or implementing the instructions.
  • Referring to figure 16, there is shown a view 1600 of implementations of at least one of machine-executable instructions or machine-readable storage.
  • Figure 16 shows machine-readable storage 1602.
  • the machine-readable storage 1602 can be realised using any type of volatile or non-volatile storage such as, for example, memory, a ROM, RAM, EEPROM, or other electrical storage, or magnetic or optical storage or the like.
  • the machine-readable storage 1602 can be transitory or non-transitory.
  • the machine-readable storage 1602 stores machine-executable instructions (MEIs) 1604.
  • the MEIs 1604 comprise instructions that are executable by a processor or other instruction execution, or instruction implementation, circuitry 1606.
  • the processor or other circuitry 1606 is responsive to executing or implementing the MEIs 1604 to perform any and all activities, operations, or methods described and/or claimed in this application such as the operations described with reference to at least one or more of figures 1 to 15.
  • the processor or other circuitry 1606 can output one or more than one control signal 1608 for controlling other devices 1610.
  • Example implementations of such other devices 1610 comprise, for example, the image output device 106.
  • the MEIs 1604 can comprise MEIs to implement any methods of operation described herein, such as operations corresponding to any flow chart described herein, or any part thereof taken jointly and severally with any other part thereof.
  • Example implementations can be realised in which the user devices are pointer devices such as, for example, laser-pointers.
  • Clause 1 A user device for controlling interactions with an output image; the device comprising: a motion sensor to detect movement of the device; position circuitry to: generate a user device frame of reference associated with the user device, and to derive position data and attitude data, responsive to the motion sensor, comprising an indication of device position and attitude expressed relative to the device frame of reference; and communication circuitry to communicate at least one of the position and attitude data to a data processing system.
  • Clause 2 The device of clause 1 , in which the position circuitry to generate a user device frame of reference associated with the user device comprises origin circuitry to define at least one of an origin of the user device frame of reference or a set of orthogonal basis vectors of the user device frame of reference.
  • Clause 3 The device of any preceding clause, in which the position circuitry to generate a user device frame of reference associated with the user device comprises image plane circuitry to define a region of the output image.
  • Clause 4 The device of clause 3, in which the image plane circuitry to define the region of the output image comprises circuitry to establish a set of points that define the region of the output image.
  • Clause 5 The device of clause 4, in which the circuitry to establish a set of points comprises circuitry to define at least two points (P1, P2), associated with the output image, derived from at least one, or both, of the position or attitude data.
  • Clause 6 The device of clause 5, in which the circuitry to define at least two points associated with the output image comprises circuitry to define three points (P1, P2, P3) associated with the output image derived from at least one, or both, of the position or attitude data.
  • Clause 7 The device of any preceding clause, in which the output image is projected onto a surface from a projector or displayed on a display device.
  • Clause 8 The device of any preceding clause, comprising circuitry to generate an indicium associated with the output image using at least one, or both, of the position data or attitude data.
  • Clause 9 The device of clause 8, in which the indicium comprises a light-based indicium associated with light emitted by the user device or a computer-generated indicium forming part of the output image.
  • Clause 10 The device of clause 9, in which the light-based indicium and the computer-generated indicium have an overlapping relationship within, or on, the output image.
  • Clause 11 The device of any preceding clause, comprising a range finder to determine a distance between the user device and a surface associated with the output image.
  • Clause 12 A method of calibrating a user device for influencing interactions with an output image, the method comprising defining, using the user device, a user device frame of reference relative to the user device, and defining, using the user device, at least a region of interest of the output image.
  • Clause 13 The method of clause 12, in which defining at least a region of interest of the output image comprises defining a first point (P1) of the region of interest and defining at least one of an origin of the user device frame of reference, as an initial position of the user device at the time of defining the first point of the region of interest, or an initial basis vector (e1) of the user device frame of reference using the attitude of the user device.
  • Clause 14 The method of clause 13, comprising defining a set of orthogonal basis vectors with reference to the initial basis vector (e1).
  • Clause 15 The method of clause 13 or clause 14, comprising defining a set of further points (P 2 , P 3 ) associated with the region of interest.
  • Clause 16 Machine-readable storage storing machine executable instructions arranged, when executed, to implement the method of any of clauses 12 to 14.
  • Clause 17 An apparatus for actuating functionality associated with an output image, the apparatus comprising: circuitry to define, within an apparatus frame of reference, data relating to: positions of reference indicia associated with the output image and the position and attitude of the apparatus within the apparatus frame of reference, a movement sensor to monitor movement of the apparatus, and circuitry to determine a point of intersection of a line extending from the apparatus with the output image, and a transmitter to output data associated with said movement of the apparatus.
  • Clause 18 The apparatus of clause 17, in which the circuitry to define data relating to positions of reference indicia associated with the output image comprises circuitry to define data relating to a plurality of points of the output image selected to define an expanse of the output image.
  • Clause 19 The apparatus of any of clauses 17 to 18, comprising an actuator to generate actuation data for output to a data processing system.
  • Clause 20 The apparatus of clause 19, in which the actuation data is associated with the data processing system performing a predetermined operation in response to the actuation data.
  • Clause 21 The apparatus of clause 20, in which the predetermined operation comprises invoking the functionality associated with the output image.
  • Clause 22 A method of calibrating a pointer device, the method comprising determining a position of a feature of a displayed image within a user device coordinate system defined by and with reference to the pointer device, tracking movement and attitude of the user device within the user device coordinate system; and outputting data associated with the movement and attitude.
  • Clause 23 A method of calibrating a user device, comprising establishing an initial relative position and orientation between an aspect of a projected image and the user device within a user device frame of reference, monitoring relative movement of the user device to a further position and orientation, establishing a further relative position and orientation between the aspect of the projected image and the user device, determining at least a point of intersection between projections from the user device at the initial and further positions; and defining a plane of the projected image within the user device frame of reference.
  • Clause 24 A method of calibrating a user device, comprising establishing locations of a number of indicia associated with a projected image within a coordinate system defined with reference to the user device, and outputting data associated with the indicia to a device associated with the projected image.
  • Clause 25 A method of calibrating a user device, comprising establishing a position of at least one user device defined point (P1) of a projected image within a coordinate system defined with reference to the user device, and outputting data associated with the position of said at least one user device defined point (P1) of the projected image.
  • Clause 26 The method of clause 25, in which said establishing comprises establishing positions of a number of user device defined points of the projected image.
  • Clause 27 The method of clause 26, in which said establishing positions of a number of user device defined points comprises establishing positions of at least two user device defined points (P1, P2), or at least three user device defined points (P1, P2, P3) of the projected image.
  • Clause 28 The method of any of clauses 25 to 27, in which each user device defined point of said at least one user device defined feature (P1) comprises a respective corner of the projected image.
  • Clause 29 Apparatus comprising circuitry to implement a method of any preceding clause.
  • Clause 30 Machine readable storage storing instructions, when executed, to implement a method of any preceding clause or to implement a device or apparatus of any preceding clause.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Example implementations provide devices comprising: circuitry to establish a frame of reference using locations of a number of indicia associated with a projected image within a coordinate system defined with reference to the user device, and circuitry to output data associated with the indicia to a device associated with the projected image.

Description

FRAMES OF REFERENCE
BACKGROUND
[0001] A wide range of user devices are available for interacting with computers. For example, a mouse allows a user to interact with a displayed image to perform numerous actions such as invoking functionality. The functionality can be associated with a feature of software, a universal resource locator (URL) and the like. However, some user devices are not closely coupled with computers. For example, a laser-pointer can be used to great effect during a presentation by drawing the attention of an audience to certain aspects of the presentation. The laser-pointer, however, is limited in its functionality to projecting a light spot onto the projected image and stepping either forwards or backwards through the presentation. The laser-pointer does not support invoking functionality in the same way that a mouse and a corresponding pointer can invoke functionality.
BRIEF DESCRIPTION OF THE DRAWINGS
[0002] Example implementations will now be described, by way of example, with reference to the accompanying drawings in which:
[0003] figure 1 shows a view of a user device according to example implementations;
[0004] figure 2 illustrates a user device defined frame of reference according to example implementations;
[0005] figures 3 to 5 depict defining reference points of an output image according to example implementations;
[0006] figure 6 shows a further user device according to example implementations;
[0007] figure 7 depicts an output image with defined reference points within the user device defined frame of reference of figure 2 according to example implementations;
[0008] figure 8 shows the output image of figure 7 and operation of a user device according to example implementations;
[0009] figure 9 depicts a flow chart for establishing the reference points depicted in figures 3 to 5 according to example implementations;
[0010] figure 10 illustrates a flow chart for establishing the user device frame of reference of figure 2 according to example implementations;
[0011] figure 11 shows a flow chart for defining the reference points of figures 3 to 5 according to example implementations;
[0012] figure 12 depicts a further flow chart for defining the reference points of figures 3 to 5 according to further example implementations;
[0013] figure 13 illustrates mapping of the reference points of figures 3 to 5 to an output image according to example implementations;
[0014] figure 14 shows a flow chart for determining the location of an output image of the user device within the user device reference frame and generating an associated indicium within the output image according to example implementations;
[0015] figure 15 depicts a flow chart for invoking functionality associated with an output image using the user device according to example implementations; and
[0016] figure 16 illustrates machine-readable storage and machine-executable instructions according to example implementations.
[0017] DETAILED DESCRIPTION
[0018] Referring to figure 1 , there is shown a view 100 of a user device 102 for interacting with an output image 104 produced by an output peripheral 106 controlled by a computer 108. The output peripheral 106 can be a projector. Consequently, the output image 104 can be a projected image output by the projector. The output image 104 can have an associated indicium 110. The indicium 110 can be produced by the user device 102, the computer 108 or both.
[0019] The user device 102 comprises a light source 112. The light source 112 can comprise a light emitting diode (LED), laser diode, collimated light or any other source of light to produce the indicium 110. The user device 102 comprises at least one motion sensor 114 for sensing relative movement of the user device 102 in 3-dimensional space and attitude of the user device 102. The motion sensor 114 comprises circuitry 116 to detect relative translation and relative rotation of the user device 102. The circuitry 116 to detect relative translation and relative rotation of the user device 102 detects movement in any or all of the 3D translation directions and rotations in any or all rotational directions. For example, the rotation in any or all rotational directions can comprise one, or both, of pitch and yaw movements. Example implementations can be realised that detect roll in addition to either, or both, of pitch and yaw. The motion sensor 114 also produces, or can be used to derive, position data 118 and attitude data 120. The attitude data 120 comprises an indication of current orientation of the user device 102. The position data represents the position of the user device within the user device frame of reference. Position circuitry 121 is provided to generate a user device frame of reference associated with the user device, and to derive the position data 118 and attitude data 120, responsive to the motion sensor 114, comprising an indication of the device position and attitude expressed relative to the device frame of reference. At least one, or both, of the position circuitry 121 or the motion sensor 114, taken jointly or severally, are an example of image plane circuitry to define the region of the output image, that is, circuitry to establish a set of points that define the region of the output image.
[0020] The position data 118 and attitude data 120 are used to produce a user device frame of reference 122. Data 124 representing the user device frame of reference 122 is stored in a memory 126 of the user device 102.
[0021] The user device comprises an actuator 128. The actuator 128 can be used to generate actuation data or control signals that perform respective functions. The actuator 128 can control whether or not the light source 112 outputs a beam of light 130. The actuator 128 can, additionally or alternatively, generate software control signals to be output to a data processing system. As used herein, a data processing system is an electronic device to perform data processing operations using a processor or other circuitry to process data. An example data processing system is a computer, which may be a computer of any form factor such as a desktop tower computer or a laptop computer. The actuator can also control operations of the user device 102 such as, for example, calibration operations.
[0022] The user device 102 comprises communication circuitry 132. The communication circuitry 132 is circuitry to perform communication to a separate device. The communication circuitry 132 can be realized using a transceiver, for example. The communication circuitry 132 is used to communicate with a data processing system such as the computer 108 illustrated. The communication circuitry 132 can be realised using, for example, wireless communication protocols such as, for example, BLUETOOTH, ZIGBEE, or the like.
[0023] The user device frame of reference 122 is defined using the user device 102. The process of defining the frame of reference is described below. The frame of reference 122 is used to determine and track the position and attitude of the user device within that frame of reference and to determine the position and attitude of the output image 104 within that frame of reference 122. The frame of reference 122 comprises an orthonormal basis defined by respective unit basis vectors 134 to 138 (e1, e2 and e3). Movement, position and attitude of the user device 102 within the user device frame of reference 122 can be determined by the user device 102 or can be determined by the computer 108. The frame of reference is also used to determine frame of reference coordinates of a number of reference points 140’, 140”, 140’” associated with the output image 104. The number of reference points 140’, 140”, 140’” will be referred to collectively as reference points 140 unless otherwise indicated. The reference points 140 are selected and defined by the user device 102. In the example depicted, the reference points 140 correspond to corners of the output image 104 that were defined using position and attitude data of the user device 102 within the frame of reference 122. The reference points 140 define an example implementation of a region of the output image 104 of interest, that is, they define the expanse of the output image 104, or the expanse of a sub-region of interest of the output image 104. It will be appreciated that the reference points 140 are an example implementation of a set of points associated with the output image that are derived from or associated with at least one, or both, of position data and attitude data.
[0024] The computer 108 comprises communication circuitry 142. The communication circuitry 142 is circuitry used to communicate with the user device 102. The communication circuitry 142 receives at least one, all, or any or all permutations, of data associated with the user device frame of reference 122, the position data 118 and attitude data 120 of the user device 102, at least one, or both, of the position or attitude of the output image 104, or the position of the indicium 110, taken jointly and severally, collectively known as the user device reference frame coordinates or data 143.
[0025] The computer 108 comprises a mapper 144. The mapper 144 is used to map, or otherwise associate, the position of at least one, or both, of the reference points 140 or the indicium 110 into or with image data 146 from which the output image 104 is derived. Indicium generator circuitry 148 generates an indicium 150. The position of the indicium 150 within the image data 146 corresponds to, or is otherwise in registry with, the indicium 110 in the output image 104. The image data 146 is output by image output software 152. The image output software 152 can comprise, for example, presentation software.
[0026] The computer 108 comprises mouse and pointer control circuitry 145. The mouse and pointer control circuitry can control the position of a pointer in response to movement of a mouse. Example implementations pass coordinate data associated with the indicium, expressed in terms of the user device reference frame or image data coordinates, to the mouse and pointer control circuitry 145 to allow a region of the image data, and consequently the output image 104, to be selected to thereby invoke functionality associated with that region of the image data 146 or output image 104.
[0027] Figure 2 illustrates the user device defined frame of reference 122 according to example implementations. As indicated above, the frame of reference 122 comprises an orthonormal basis of unit vectors 134 to 138. The frame of reference 122 is established by positioning the light spot or indicium 110 on an initial reference point 202 and actuating the actuator 128 in a manner to indicate that such an initial reference point 202 is being defined. The foregoing can be achieved by, for example, placing the user device 102 in a calibration mode, or by having a respective mode of actuating the actuator 128 of the user device 102 such as holding a predetermined button for a predetermined period of time or pressing the predetermined button a number of times.
[0028] Indicating such an initial reference point causes the user device 102 to store its present attitude data as an initial unit basis vector 134 and sets the origin 204 of the frame of reference 122. In the example indicated, the basis vector 134 is the initial basis vector. The other two basis vectors 136 and 138 are calculated from the initial basis vector to form a set of mutually orthogonal basis vectors 134 to 138 that define the frame of reference 122.
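A minimal sketch of how the remaining two basis vectors could be completed from the initial basis vector is given below. The helper-vector choice and the NumPy representation are illustrative assumptions, not the claimed implementation.

```python
import numpy as np

def build_basis(e1: np.ndarray) -> tuple[np.ndarray, np.ndarray, np.ndarray]:
    """Complete an orthonormal basis (e1, e2, e3) from a single unit vector e1,
    for example the device attitude captured when the initial reference point
    is set."""
    e1 = e1 / np.linalg.norm(e1)
    # Pick any helper vector that is not (nearly) parallel to e1.
    helper = np.array([0.0, 0.0, 1.0])
    if abs(np.dot(helper, e1)) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    e2 = np.cross(helper, e1)
    e2 /= np.linalg.norm(e2)
    e3 = np.cross(e1, e2)   # unit length and orthogonal to both e1 and e2
    return e1, e2, e3
```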
[0029] The position within the frame of reference 122 of the initial reference point 202 is determined. For example, knowing the initial basis vector 134, the distance $r_1$ between the user device 102 and the initial reference point 202 can be used to calculate the position of the initial reference point 202 within the frame of reference 122. The attitude data 120 of the user device 102, when setting the initial reference point 202, forms an axis 134 of the frame of reference 122. The basis vectors 134 to 138 can correspond to the x, y, z axes. The reference point 140’ is an example of the initial reference point 202.
[0030] A state vector 206, $SV_{P1}(\ldots)$, is determined when setting the frame of reference and setting the initial reference point, P1 202. The state vector 206 represents the state of the user device 102 at the time of establishing the initial basis vector 134 and reference point 202. The state vector 206 can comprise coordinates of the user device 102 within the frame of reference 122 and attitude data of the user device 102 within the reference frame. Since setting the initial reference point 202 concurrently defines the origin 204 of the frame of reference, the position of the user device 102 will be (0,0,0). Origin circuitry is circuitry to define the origin of the user device frame of reference and/or a set of orthogonal basis vectors of the user device frame of reference. The attitude data 120 will represent current outputs or indications of attitude of the user device in terms of a number of, or any or all permutations of, roll, pitch and yaw. Therefore, the state vector 206 for the user device 102, at the point of establishing the frame of reference 122, will be $SV_{P1}(0, 0, 0, \gamma_1, \beta_1, \alpha_1)$, where $\gamma_1$, $\beta_1$ and $\alpha_1$ represent roll, pitch and yaw respectively. It can be seen that the initial reference point 202, when expressed in terms of the basis vectors 134 to 138 of the frame of reference 122, has coordinates $P_1(a_1\hat{e}_1, a_2\hat{e}_2, a_3\hat{e}_3)$. Since the reference point 202 lies on the axis defined by the initial basis vector $\hat{e}_1$ 134, its coordinates reduce to $P_1(r_1\hat{e}_1, 0, 0)$, where $r_1$ is the distance between the user device 102 and the initial reference point 202.
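One possible in-memory representation of such a state vector is sketched below. The field names and the example angle values are illustrative only and are not taken from the application.

```python
from dataclasses import dataclass

@dataclass
class StateVector:
    """Device state captured when a reference point is set: position within the
    user device frame of reference plus attitude as roll, pitch and yaw."""
    x: float
    y: float
    z: float
    roll: float
    pitch: float
    yaw: float

# At the moment the frame of reference is established the device sits at the
# origin, so only the attitude terms are non-trivial (values illustrative).
sv_p1 = StateVector(0.0, 0.0, 0.0, roll=0.10, pitch=-0.25, yaw=0.05)
```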
[0031] A second reference point 208 can be defined by moving the user device 102 to a second or further position 210. The second or further position 210 is determined by monitoring the outputs of the motion sensors 114 and producing a transformation matrix, $T_{12}$, 214 that represents the total translation and rotation of the user device 102 from the first position 212 to the second position 210.
[0032] A second state vector 216, $SV_{P2}(\ldots)$, is determined when the user device is setting the second or further reference point P2. The second state vector 216 represents the state of the user device 102 at the time of defining the second or further reference point 208. The second state vector 216 can comprise coordinates of the user device 102 within the frame of reference 122 and attitude data of the user device 102 within the reference frame. The position of the user device 102 will be determined by transforming the point (0,0,0) by the transformation matrix $T_{12}$ 214. The attitude data of the second state vector 216 will represent current outputs or indications of orientation of the user device in terms of a number of, or any or all permutations of, roll, pitch and yaw when the user device is in the second position. Therefore, the second state vector 216 for the user device 102 will be $SV_{P2}(x_2, y_2, z_2, \gamma_2, \beta_2, \alpha_2)$, where $x_2$, $y_2$ and $z_2$ represent the coordinates of the user device within the frame of reference and $\gamma_2$, $\beta_2$ and $\alpha_2$ represent roll, pitch and yaw respectively. It can be seen that the second reference point 208, when expressed in terms of the basis vectors 134 to 138 of the frame of reference 122, has coordinates $P_2(b_1\hat{e}_1, b_2\hat{e}_2, b_3\hat{e}_3)$, such that $P_2 = b_1\hat{e}_1 + b_2\hat{e}_2 + b_3\hat{e}_3$, where $b_1$, $b_2$ and $b_3$ are constants.
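The transformation matrices such as $T_{12}$ could, for example, be accumulated from incremental motion-sensor outputs as 4x4 homogeneous transforms. The sketch below is one possible formulation under that assumption; the function names are illustrative.

```python
import numpy as np

def homogeneous(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Pack a 3x3 rotation and a 3-vector translation into a 4x4 matrix."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def accumulate(increments: list[tuple[np.ndarray, np.ndarray]]) -> np.ndarray:
    """Compose incremental motion-sensor outputs (rotation, translation) into a
    single transformation such as T12 between two device positions."""
    T = np.eye(4)
    for rotation, translation in increments:
        T = homogeneous(rotation, translation) @ T
    return T

# Position of the device at the second pose: transform the origin (0, 0, 0).
# second_position = (T12 @ np.array([0.0, 0.0, 0.0, 1.0]))[:3]
```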
[0033] A third reference point 218 can be defined by moving the user device 102 to a third or still further position 220. The third position 220 is determined by monitoring the outputs of the motion sensors 114 and producing a transformation matrix, $T_{23}$, 222 that represents the total translation and rotation of the user device 102 from the second position 210 to the third position 220.
[0034] A third state vector 224, $SV_{P3}(\ldots)$, is determined when the user device is setting the third or still further reference point P3. The third state vector 224 represents the state of the user device 102 at the time of defining the third or further reference point 218. The third state vector 224 can comprise coordinates of the user device 102 within the frame of reference 122 and attitude data of the user device 102 within the reference frame. The position of the user device 102 will be determined by transforming the point (0,0,0) by the transformation matrix $T_{12}$ 214 and transforming the result by the transformation matrix $T_{23}$ 222. The attitude data of the third state vector 224 will represent current outputs or indications of orientation of the user device 102 in terms of a number of, or any or all permutations of, roll, pitch and yaw when the user device is in the third position 220. Therefore, the third state vector 224 for the user device 102 will be $SV_{P3}(x_3, y_3, z_3, \gamma_3, \beta_3, \alpha_3)$, where $x_3$, $y_3$ and $z_3$ represent the coordinates of the user device within the frame of reference and $\gamma_3$, $\beta_3$ and $\alpha_3$ represent roll, pitch and yaw respectively. It can be seen that the third reference point 218, when expressed in terms of the basis vectors 134 to 138 of the frame of reference 122, has coordinates $P_3(c_1\hat{e}_1, c_2\hat{e}_2, c_3\hat{e}_3)$, such that $P_3 = c_1\hat{e}_1 + c_2\hat{e}_2 + c_3\hat{e}_3$, where $c_1$, $c_2$ and $c_3$ are constants. It can be seen that the user device 102 in the first 212, second 210 and third 220 positions is at distances $r_1$, $r_2$ and $r_3$ from the respective reference points 202, 208, 218.
[0035] The reference points 202, 208, 218 are examples of the reference points 140 described with reference to figure 1.
[0036] Figures 3 to 5 demonstrate example implementations for determining the coordinates of the reference points 202, 208 and 218.
[0037] Referring to figure 3, there is shown a view 300 of an example implementation for setting the first reference point 202 and establishing coordinates, expressed within the user device frame of reference 122, of that first reference point 202. The coordinates of the first reference point 202 are determined by illuminating a common point, that is, the same reference point 202, with the user device from two different perspectives as follows.
[0038] The user device 102 is used, that is, positioned and orientated, at an initial position and an initial attitude 302 to set the first reference point 202 associated with the output image 104 by placing the indicium 110, created by the light-source 112, on the output image 104 where desired and actuating the actuator 128 to capture the corresponding position data 118 and attitude data 120 of the user device 102 at that initial position 302. The first position 212 is an example of the initial position. The initial position 302 has a corresponding initial position state vector $SV_{PI1}(g_{I11}\hat{e}_1, g_{I12}\hat{e}_2, g_{I13}\hat{e}_3, \gamma_{I1}, \beta_{I1}, \alpha_{I1})$ 304. The user device 102 is moved to a further position 306. The user device 102 is used, that is, positioned and orientated, at the further position and further attitude 306, to set the same, that is, the first, reference point 202 associated with the output image 104 by placing the indicium 110, created by the light-source 112, on the output image 104 where desired and actuating the actuator 128 to capture the corresponding position data 118 and attitude data 120 of the user device 102 at that further position and further attitude 306. The further position 306 has a corresponding further position state vector $SV_{PF1}(g_{F11}\hat{e}_1, g_{F12}\hat{e}_2, g_{F13}\hat{e}_3, \gamma_{F1}, \beta_{F1}, \alpha_{F1})$ 308 corresponding to the first reference point P1 140’.
[0039] The position data $P_1(a_1\hat{e}_1, a_2\hat{e}_2, a_3\hat{e}_3)$ of the reference point P1 140’ is calculated from the point of intersection between the lines defined by the initial state vector 304 and the further state vector 308. The point of intersection comprises an actual point of intersection, assuming that the lines actually do intersect, as well as a point of sufficient convergence or minimal separation between the two lines in circumstances where the two lines do not intersect. The two lines may not actually intersect where, for example, there is a margin of error in positioning the indicium 110 on precisely the same point.
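The point of intersection, or of minimal separation, between the two lines can be computed in closed form. The following sketch is one standard formulation, not text from the application, and returns the midpoint of the shortest segment between the two rays when they do not intersect exactly.

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Return the midpoint of the shortest segment between two rays
    (origin p, unit direction d): the 'point of sufficient convergence'
    when the rays do not intersect exactly."""
    p1, d1, p2, d2 = map(np.asarray, (p1, d1, p2, d2))
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:                  # rays are (nearly) parallel
        s, t = 0.0, (e / c if c else 0.0)
    else:
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
    closest_1 = p1 + s * d1                # closest point on the first ray
    closest_2 = p2 + t * d2                # closest point on the second ray
    return (closest_1 + closest_2) / 2.0
```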
[0040] The transformation between the initial position 302 and the further position 306 can be represented using a transformation matrix $T_{SV_{PI1} \to SV_{PF1}}$ 310 that represents the translations and rotations to move the user device 102 from the initial position 302 to the further position 306 as determined by the outputs of the motion sensors 114.
[0041] Referring to figure 4, there is shown a view 400 of an example implementation for setting the second reference point 208 and establishing coordinates, expressed within the user device frame of reference 122, of that second reference point 208. The coordinates of the second reference point 208 are determined by illuminating a common point, that is, the same reference point 208, with the user device from two different perspectives as follows.
[0042] The user device 102 is used, that is, positioned and orientated, at an initial position and an initial attitude 402 to set the second reference point 208 associated with the output image 104 by placing the indicium 110, created by the light-source 112, on the output image 104 where desired and actuating the actuator 128 to capture the corresponding position data 118 and attitude data 120 of the user device 102 at that initial position 402. The second reference point 208 is an example of the second reference point 140”. The initial position 402 has a corresponding initial position state vector $SV_{PI2}(g_{I21}\hat{e}_1, g_{I22}\hat{e}_2, g_{I23}\hat{e}_3, \gamma_{I2}, \beta_{I2}, \alpha_{I2})$ 404.
[0043] The user device 102 is moved to a further position 406. The user device 102 is used, that is, positioned and orientated, at the further position and further attitude 406 to set the same, that is, the second, reference point 208 associated with the output image 104 by placing the indicium 110, created by the light-source 112, on the output image 104 where desired and actuating the actuator 128 to capture the corresponding position data 118 and attitude data 120 of the user device 102 at that further position and further attitude 406. The further position 406 has a corresponding further position state vector $SV_{PF2}(g_{F21}\hat{e}_1, g_{F22}\hat{e}_2, g_{F23}\hat{e}_3, \gamma_{F2}, \beta_{F2}, \alpha_{F2})$ 408 corresponding to the second reference point P2 140”.
[0044] The position data $P_2(b_1\hat{e}_1, b_2\hat{e}_2, b_3\hat{e}_3)$ of the reference point P2 140” is calculated from the point of intersection between the lines defined by the initial state vector 404 and the further state vector 408. The point of intersection comprises an actual point of intersection, assuming that the lines do intersect, or a point of sufficient convergence or minimal separation between the two lines in circumstances where the two lines do not actually intersect. The two lines may not actually intersect where, for example, there is a margin of error in positioning the indicium 110 on precisely the same point.
[0045] The transformation between the initial position 402 and the further position 406 can be represented using a transformation matrix $T_{SV_{PI2} \to SV_{PF2}}$ 410 that represents the translations and rotations to move the user device 102 from the initial position 402 to the further position 406 as determined by the outputs of the motion sensors 114. The state vector of the further position 406 can be determined from the combined transformation of the transformations of the user device 102 between its successive positions, as determined by the outputs of the motion sensors 114.
[0046] Referring to figure 5, there is shown a view 500 of an example implementation for setting the third reference point 218 and establishing coordinates, expressed within the user device frame of reference 122, of that third reference point 218. The coordinates of the third reference point 218 are determined by illuminating a common point, that is, the same reference point 218, with the user device from two different perspectives as follows.
[0047] The user device 102 is used, that is, positioned and orientated, at an initial position and an initial attitude 502 to set the third reference point 218 associated with the output image 104 by placing the indicium 110, created by the light-source 112, on the output image 104 where desired and actuating the actuator 128 to capture the corresponding position data 118 and attitude data 120 of the user device 102 at that initial position 502. The third reference point 218 is an example of the third reference point 140”’. The initial position 502 has a corresponding initial position state vector $SV_{PI3}(g_{I31}\hat{e}_1, g_{I32}\hat{e}_2, g_{I33}\hat{e}_3, \gamma_{I3}, \beta_{I3}, \alpha_{I3})$ 504. The user device 102 is moved to a further position 506. The user device 102 is used, that is, positioned and orientated, at the further position and further attitude 506 to set the same, that is, the third, reference point 218 associated with the output image 104 by placing the indicium 110, created by the light-source 112, on the output image 104 where desired and actuating the actuator 128 to capture the corresponding position data 118 and attitude data 120 of the user device 102 at that further position and further attitude 506. The further position 506 has a corresponding further position state vector $SV_{PF3}(g_{F31}\hat{e}_1, g_{F32}\hat{e}_2, g_{F33}\hat{e}_3, \gamma_{F3}, \beta_{F3}, \alpha_{F3})$ 508 corresponding to the third reference point P3 140”’.
[0048] The position data $P_3(c_1\hat{e}_1, c_2\hat{e}_2, c_3\hat{e}_3)$ of the reference point P3 140”’ is calculated from the point of intersection between the lines defined by the initial state vector 504 and the further state vector 508. The point of intersection comprises an actual point of intersection, assuming that the lines actually do intersect, or a point of sufficient convergence or minimal separation between the two lines in circumstances where the two lines do not actually intersect. The two lines may not actually intersect where, for example, there is a margin of error in positioning the indicium 110 on precisely the same point.
[0049] The transformation between the initial position 502 and the further position 506 can be represented using a transformation matrix $T_{SV_{PI3} \to SV_{PF3}}$ 510 that represents the translations and rotations to move the user device 102 from the initial position 502 to the further position 506 as determined by the outputs of the motion sensors 114. The state vector for the further position 506 can be found from the combined transformation of the transformations of the user device 102 between its successive positions, as determined by the outputs of the motion sensors 114.
[0050] Referring to figure 6, there is shown a view 600 of a further user device 602 according to example implementations. The user device 602 is substantially similar to the above user device 102, but for the addition of a range finder 604. The range finder 604 determines the distance $r_1$ 606 to an object identified by the indicium 110. Example implementations of the range finder 604 can comprise a laser range finder. In the present example, the user device 602 is used to determine the positions of the number of reference points 140 associated with the output image 104. The user device reference frame 122 is formed in the manner described above with reference to figure 1, but for the coordinates of the reference points 140, in user device reference frame coordinates, being determined from the position data 118 and attitude data 120 of the user device 602 together with the range or distance $r_1$. The state vectors of the user device 602 at the various positions, together with the respective ranges $r_1$, $r_2$, $r_3$, give the reference point coordinates within the frame of reference 122.
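With a range finder, each reference point can be obtained from a single pose. Below is a minimal sketch under the assumption that the pointing direction is derived from the attitude data as a unit vector; the names are illustrative.

```python
import numpy as np

def reference_point_from_range(device_position: np.ndarray,
                               direction: np.ndarray,
                               range_r: float) -> np.ndarray:
    """Reference point in user device frame coordinates when a range finder
    gives the distance r along the pointing direction derived from the
    attitude data."""
    unit = direction / np.linalg.norm(direction)
    return device_position + range_r * unit
```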
[0051] Referring to figure 7, there is depicted the output image 104 with defined reference points 140, expressed using the user device defined frame of reference 122 of figure 2, according to example implementations. The output image 104 bears the indicium 110 from the light-source 112. The indicium 110 can be navigated towards an active or invokable region 704 of the image 104. Examples of such an active or invokable region of such an image comprise embedded video, a URL or any other active or invokable region. If the indicium were a pointer under the control of a mouse and the image were an image on a computer screen, the active or invokable region could be invoked using the mouse and pointer. However, when the output image 104 is projected, or the indicium 110 is otherwise independent of the image data 146 from which the output image 104 is derived, invoking the active region would not be possible but for example implementations.
[0052] Example implementations can be realised that track the position of the indicium 110 in the user device frame of reference 122 and map that position, using the mapper 144, into the image data 146. When actuation data has been received, the coordinates of the indicium within the image data 146 are passed to the mouse and pointer control circuitry 145 and any active or invokable region of the image data 146 is invoked, thereby giving the impression of invoking the active or invokable region 704 of the output image 104 using the user device.
[0053] Referring to figure 8, there is shown a view 800 of the output image 104 of figure 7 and operation of a user device according to example implementations. It can be appreciated that the optical path 802 of the user device 102, in a position that is different to position 702, is obscured by an object 804. However, the state vector SVPX(...) 806 associated with the user device 102 is still communicated to the computer 108, which allows the indicium 110 to be placed within the image data 146 and, therefore, allows the indicium 110 to be displayed on the output image 104 notwithstanding the object 804 obscuring the optical path 802 to the output image 104.
[0054] Referring to figure 9, there is illustrated a view 900 of a flow chart for establishing the reference points depicted in figures 3 to 5 according to example implementations. The user device frame of reference 122 is established at 902. As indicated above, the frame of reference 122 is defined by and established with reference to the user device. At 904, the reference points 140 in the output image 104 are established as indicated above. The reference points 140 are expressed in terms of user device frame of reference coordinates. Data associated with any or all permutations of the reference points, the frame of reference, the indicium, the position data or attitude data of the user device can be output to the computer 108 at 906.
[0055] Figure 10 shows a view of a flow chart 1000 comprising further details for establishing the user device frame of reference as described at 902 above in figure 9 according to example implementations. At 1002, a set-up or calibration mode of the user device 102 is entered or commenced by detecting an initial calibration actuation of the actuator 128. The attitude data, or attitude output signals, associated with the motion sensors 114 is noted at the time of actuation of the actuator 128. The attitude data, or attitude signals, is used to establish a first unit basis vector 134 of the orthonormal basis 122 at 1006. The remainder of the unit basis vectors 136 and 138 are established at 1008 to produce the orthonormal basis 122.
[0056] Referring to figure 11 , there is shown a flow chart 1100 for defining the reference points 140 of each of figures 3 to 5 according to example implementations. At 1102, attitude data 120 and position data 118 are established for the user device 102 at an initial position when illuminating a target or desired reference point 140. Similarly, position data 118 and attitude data 120 are established for the user device at a further position when illuminating the same target or desired reference point 140. The coordinates of the reference point 140, expressed in user device frame of reference coordinates, are determined at 1106 from the initial position data 118 and attitude data 120 and the further position data 118 and further attitude data 120. The coordinates of the reference point 140 are output for further processing such as, for example, transmission to the computer 108.
[0057] Referring to figure 12, there is depicted a further flow chart 1200 for defining reference points 140 of each of figures 3 to 5 according to further example implementations using the user device 602 of figure 6. At 1202, the orientation of the user device 602, at an initial position, is established when illuminating a respective reference point together with a range measurement to the reference point 140 and the position data 118 of the user device 602. The reference point coordinates, expressed in terms of user device frame of reference coordinates, are established at 1204 from the attitude data 120, initial position data 118 and range n to the reference point 140. The coordinates of the reference point 140 are output for further processing at 1206. The above is repeated for each target or desired reference point at 1208.
[0058] Figure 13 illustrates a view 1300 of the processing performed by the mapper 144. The image data 146 will have a predetermined resolution or one of a set of possible predetermined resolutions. Maximum horizontal 1302 and vertical 1304 resolutions are predetermined. The coordinates of the reference points 140 are mapped onto the image resolution such that the reference points 140’, 140”, 140”' map to the corners 1306, 1308, 1310 respectively of the image data 146. Mapping the corners in such a manner allows the computer-generated indicium 150, generated by the indicium generator circuitry 148, to be mapped appropriately given the user device reference frame coordinates of the indicium 110 output by the user device 102 or 602.
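A possible formulation of the mapping performed by the mapper 144 is sketched below. It assumes a rectangular image region whose corners are given in user device frame coordinates; the corner correspondence and the function names are chosen for illustration only.

```python
import numpy as np

def map_to_image(point, p1, p2, p3, width, height):
    """Map a point lying in the plane of the output image (user device frame
    coordinates) to pixel coordinates, given three corners: p1 (origin corner),
    p2 (end of the horizontal edge) and p3 (end of the vertical edge)."""
    point, p1, p2, p3 = map(np.asarray, (point, p1, p2, p3))
    u_axis = p2 - p1                      # horizontal edge of the image region
    v_axis = p3 - p1                      # vertical edge of the image region
    rel = point - p1
    u = rel @ u_axis / (u_axis @ u_axis)  # fraction along the horizontal edge
    v = rel @ v_axis / (v_axis @ v_axis)  # fraction along the vertical edge
    return int(round(u * (width - 1))), int(round(v * (height - 1)))
```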
[0059] Referring to figure 14, there is illustrated a flow chart 1400 for determining the location of an output of the user device within the user device reference frame and generating an associated indicium within the output image. The output of the user device can comprise the indicium 110. At 1402, current position data and attitude data of the user device are determined. At 1404, a point of intersection of a projection from the user device with the output image 104 is determined in user device reference frame coordinates. Example implementations can realise the foregoing by determining the point of intersection of an equation of a line based on the position data and attitude data of the user device with an equation of a plane based on the reference points 140. The coordinates of the point of intersection are output to the computer 108. The computer processes the coordinates to generate the computer-generated indicium 150 to be incorporated into the image data 146 and output by the image output device 106. In an example, the coordinates of the indicium can be mapped into image coordinates of the image data 146 and output to mouse and pointer control circuitry to allow the latter to invoke or otherwise activate functionality associated with the image data 146 in response to the computer 108 receiving actuation data from the user device 102 or 602.
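The point of intersection of the projection from the user device with the plane of the output image could be computed as follows; the ray origin and direction would come from the current position data and attitude data, and the plane from the reference points 140. The formulation and names are illustrative assumptions.

```python
import numpy as np

def ray_plane_intersection(origin, direction, p1, p2, p3):
    """Intersect the ray cast from the device (origin, direction) with the
    plane spanned by the three reference points p1, p2, p3. Returns None if
    the ray is parallel to the plane."""
    origin, direction, p1, p2, p3 = map(np.asarray,
                                        (origin, direction, p1, p2, p3))
    normal = np.cross(p2 - p1, p3 - p1)   # plane normal from two edges
    denom = normal @ direction
    if abs(denom) < 1e-9:
        return None
    t = normal @ (p1 - origin) / denom
    return origin + t * direction
```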
[0060] Referring to figure 15, there is depicted a flow chart 1500 for invoking functionality associated with an output image using the user device. At 1502, the current position data and attitude data of the user device are determined. At 1504, a point of intersection of a projection from the user device with the output image 104 is determined in user device reference frame coordinates. Example implementations can realise the foregoing by determining the point of intersection of an equation of a line based on the position data and attitude data of the user device with an equation of a plane based on the reference points 140. The coordinates of the point of intersection are output to the computer 108. At 1506, the computer 108 processes the coordinates to map the coordinates of the indicium into image coordinates of the image data 146. In an example, the computer 108 can display an indicium via the mouse and pointer control circuitry at the indicium image data coordinates at 1508. The computer 108 outputs, at 1510, the mapped coordinates to the mouse and pointer control circuitry 145 to allow the latter to invoke or otherwise activate functionality associated with the image data 146 in response to the computer 108 receiving actuation data from the user device 102 or 602.
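Purely as an illustration of handing the mapped coordinates to mouse and pointer control, the sketch below uses the pyautogui package as a stand-in for the mouse and pointer control circuitry 145; the package choice and the function name are assumptions, not part of the application.

```python
import pyautogui  # stand-in for the mouse and pointer control circuitry 145

def on_device_update(indicium_xy, actuated: bool):
    """Move the on-screen pointer to the mapped indicium coordinates and,
    when actuation data has been received, invoke whatever lies under it."""
    x, y = indicium_xy
    pyautogui.moveTo(x, y)
    if actuated:
        pyautogui.click(x, y)
```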
[0061] Referring to figure 16, there is illustrated machine-readable storage and machine- executable instructions according to example implementations.
[0062] It will be appreciated that circuitry as used herein can comprise any of physical electronic circuitry, software (such as machine-readable and machine-executable instructions), hardware, application specific integrated circuitry, or the like, taken jointly or severally in any and all permutations.
[0063] Therefore, implementations also provide machine-readable storage storing such machine-executable instructions. The machine-readable storage can comprise transitory or non-transitory machine-readable storage. The machine can comprise one or more processors, or other circuitry, for executing the instructions or implementing the instructions.
[0064] Accordingly, referring to figure 16, there is shown a view 1600 of implementations of at least one of machine-executable instructions or machine-readable storage. Figure 16 shows machine-readable storage 1602. The machine-readable storage 1602 can be realised using any type of volatile or non-volatile storage such as, for example, memory, a ROM, RAM, EEPROM, or other electrical storage, or magnetic or optical storage or the like. The machine-readable storage 1602 can be transitory or non-transitory. The machine-readable storage 1602 stores machine-executable instructions (MEIs) 1604. The MEIs 1604 comprise instructions that are executable by a processor or other instruction execution, or instruction implementation, circuitry 1606. The processor or other circuitry 1606 is responsive to executing or implementing the MEIs 1604 to perform any and all activities, operations, or methods described and/or claimed in this application such as the operations described with reference to at least one or more of figures 1 to 15.
[0065] The processor or other circuitry 1606 can output one or more than one control signal 1608 for controlling other devices 1610. Example implementations of such other devices 1610 comprise, for example, the image output device 106.
[0066] The MEIs 1604 can comprise MEIs to implement any methods of operation described herein, such as operations corresponding to any flow chart described herein, or any part thereof taken jointly and severally with any other part thereof.
[0067] Example implementations can be realised in which the user devices are pointer devices such as, for example, laser-pointers.
[0068] Implementations can be realised in accordance with the following example implementations or clauses:
[0069] Clause 1 : A user device for controlling interactions with an output image; the device comprising: a motion sensor to detect movement of the device; position circuitry to: generate a user device frame of reference associated with the user device, and to derive position data and attitude data, responsive to the motion sensor, comprising an indication of device position and attitude expressed relative to the device frame of reference; and communication circuitry to communicate at least one of the position and attitude data to a data processing system.
[0070] Clause 2: The device of clause 1 , in which the position circuitry to generate a user device frame of reference associated with the user device comprises origin circuitry to define at least one of an origin of the user device frame of reference or a set of orthogonal basis vectors of the user device frame of reference.
[0071] Clause 3: The device of any preceding clause, in which the position circuitry to generate a user device frame of reference associated with the user device comprises image plane circuitry to define a region of the output image.
[0072] Clause 4: The device of clause 3, in which the image plane circuitry to define the region of the output image comprises circuitry to establish a set of points that define the region of the output image.
[0073] Clause 5: The device of clause 4, in which the circuitry to establish a set of points comprises circuitry to define at least two points (Pi, P2), associated with the output image, derived from at least one, or both, of the position or attitude data.
[0074] Clause 6: The device of clause 5, in which the circuitry to define at least two points associated with the output image comprises circuitry to define three points (P1, P2, P3) associated with the output image derived from at least one, or both, of the position or attitude data.
[0075] Clause 7: The device of any preceding clause, in which the output image is projected onto a surface from a projector or displayed on a display device.
[0076] Clause 8: The device of any preceding clause, comprising circuitry to generate an indicium associated with the output image using at least one, or both, of the position data or attitude data.
[0077] Clause 9: The device of clause 8, in which the indicium comprises a light-based indicium associated with light emitted by the user device or a computer-generated indicium forming part of the output image.
[0078] Clause 10: The device of clause 9, in which the light-based indicium and the computer-generated indicium have an overlapping relationship within, or on, the output image.
[0079] Clause 11: The device of any preceding clause, comprising a range finder to determine a distance between the user device and a surface associated with the output image.
[0080] Clause 12: A method of calibrating a user device for influencing interactions with an output image, the method comprising defining, using the user device, a user device frame of reference relative to the user device, and defining, using the user device, at least a region of interest of the output image.
[0081] Clause 13: The method of clause 12, in which defining at least a region of interest of the output image comprises defining a first point (P1) of the region of interest and defining at least one of an origin of the user device frame of reference, as an initial position of the user device at the time of defining the first point of the region of interest, or an initial basis vector ($\hat{e}_1$) of the user device frame of reference using the attitude of the user device.
[0082] Clause 14: The method of clause 13, comprising defining a set of orthogonal basis vectors ($\hat{e}_1$, $\hat{e}_2$, $\hat{e}_3$) with reference to the initial basis vector ($\hat{e}_1$).
[0083] Clause 15: The method of clause 13 or clause 14, comprising defining a set of further points (P2, P3) associated with the region of interest.
[0084] Clause 16: Machine-readable storage storing machine executable instructions arranged, when executed, to implement the method of any of clauses 12 to 14.
[0085] Clause 17: An apparatus for actuating functionality associated with an output image, the apparatus comprising: circuitry to define, within an apparatus frame of reference, data relating to: positions of reference indicia associated with the output image and the position and attitude of the apparatus within the apparatus frame of reference, a movement sensor to monitor movement of the apparatus, and circuitry to determine a point of intersection of a line extending from the apparatus with the output image, and a transmitter to output data associated with said movement of the apparatus.
[0086] Clause 18: The apparatus of clause 17, in which the circuitry to define data relating to positions of reference indicia associated with the output image comprises circuitry to define data relating to a plurality of points of the output image selected to define an expanse of the output image.
[0087] Clause 19: The apparatus of any of clauses 17 to 18, comprising an actuator to generate actuation data for output to a data processing system.
[0088] Clause 20: The apparatus of clause 19, in which the actuation data is associated with the data processing system performing a predetermined operation in response to the actuation data.
[0089] Clause 21: The apparatus of clause 20, in which the predetermined operation comprises invoking the functionality associated with the output image.
[0090] Clause 22: A method of calibrating a pointer device, the method comprising determining a position of a feature of a displayed image within a user device coordinate system defined by and with reference to the pointer device, tracking movement and attitude of the user device within the user device coordinate system; and outputting data associated with the movement and attitude.
[0091] Clause 23: A method of calibrating a user device, the method comprising establishing an initial relative position and orientation between an aspect of a projected image and the user device within a user device frame of reference, monitoring relative movement of the user device to a further position and orientation, establishing a further relative position and orientation between the aspect of the projected image and the user device, determining at least a point of intersection between projections from the user device at the initial and further positions; and defining a plane of the projected image within the user device frame of reference.
[0092] Clause 24: A method of calibrating a user device, comprising establishing locations of a number of indicia associated with a projected image within a coordinate system defined with reference to the user device, and outputting data associated with the indicia to a device associated with the projected image.
[0093] Clause 25: A method of calibrating a user device, comprising establishing a position of at least one user device defined point (P1) of a projected image within a coordinate system defined with reference to the user device, and outputting data associated with the position of said at least one user device defined point (P1) of the projected image.
[0094] Clause 26: The method of clause 25, in which said establishing comprises establishing positions of a number of user device defined points of the projected image.
[0095] Clause 27: The method of clause 26, in which said establishing positions of a number of user device defined points comprises establishing positions of at least two user device defined points (P1, P2), or at least three user device defined points (P1, P2, P3) of the projected image.
[0096] Clause 28: The method of any of clauses 25 to 27, in which each user device defined point of said at least one user device defined feature (P1) comprises a respective corner of the projected image.
[0097] Clause 29: Apparatus comprising circuitry to implement a method of any preceding clause.
[0098] Clause 30: Machine readable storage storing instructions, when executed, to implement a method of any preceding clause or to implement a device or apparatus of any preceding clause.

Claims

1. A device for controlling interactions with an output image; the device comprising: a motion sensor to detect movement of the device; position circuitry to: generate a device frame of reference associated with the device, and derive position data and attitude data, responsive to the motion sensor, comprising an indication of device position and device attitude expressed relative to the device frame of reference; and communication circuitry to communicate the position data and attitude data to a data processing system.
2. The device of claim 1, in which the position circuitry to generate the device frame of reference associated with the device comprises origin circuitry to define an origin of the user device frame of reference or a set of orthogonal basis vectors of the user device frame of reference.
3. The device of claim 1, in which the position circuitry to generate the device frame of reference associated with the device comprises image plane circuitry to define a region of the output image.
4. The device of claim 3, in which the image plane circuitry to define the region of the output image comprises circuitry to establish a set of points that define the region of the output image.
5. The device of claim 4, in which the circuitry to establish the set of points comprises circuitry to define at least two points (P1, P2), associated with the output image, derived from one, or both, of the position data or attitude data.
6. The device of claim 5, in which the circuitry to define at least two points associated with the output image comprises circuitry to define three points (P1 , P2, P3) associated with the output image derived from one, or both, of the position data or attitude data.
7. The device of claim 1 , comprising circuitry to generate a light-based indicium associated with light emitted by the device or a computer-generated indicium forming part of the output image.
8. The device of claim 7, in which the light-based indicium and the computer-generated indicium have an overlapping relationship within or on the output image.
9. The device of claim 1 , comprising a range finder to determine a distance between the user device and a surface associated with the output image.
10. A method of calibrating a user device for influencing interactions with an output image, the method comprising defining, using the user device, a user device frame of reference relative to the user device, and defining, using the user device, a region of interest of the output image expressed in terms of the user device frame of reference.
11. The method of claim 10, in which defining the region of interest of the output image comprises defining a set of points comprising at least a first point (P1) of the region of interest and defining at least one of an origin of the user device frame of reference, as an initial position of the user device at the time of defining said at least a first point of the region of interest, or an initial basis vector ($\hat{e}_1$) of the user device frame of reference using attitude data of the user device.
12. The method of claim 11, comprising defining a set of orthogonal basis vectors ($\hat{e}_1$, $\hat{e}_2$, $\hat{e}_3$) with reference to the initial basis vector ($\hat{e}_1$).
13. The method of claim 11 , comprising defining a set of further points (P2, P3) associated with the region of interest.
14. Machine-readable storage storing machine executable instructions, to, when executed: define, within a frame of reference defined with reference to an apparatus, data relating to positions of reference indicia associated with an output image, and the position and attitude of the apparatus; monitor movement of the apparatus; determine a point of intersection of a line extending from the apparatus with the output image; and output data associated with the movement of the apparatus or the point of intersection.
15. An apparatus comprising: circuitry to establish a frame of reference using locations of a number of indicia associated with a projected image within a coordinate system defined with reference to the apparatus, and circuitry to output data associated with the indicia to a device associated with the projected image.