EP3776160A1 - Device operation control - Google Patents
Info
- Publication number
- EP3776160A1 (Application EP19717551.6A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- motion
- user input
- display regions
- information displayed
- identifying
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/163—Wearable computers, e.g. on a belt
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2200/00—Indexing scheme relating to G06F1/04 - G06F1/32
- G06F2200/16—Indexing scheme relating to G06F1/16 - G06F1/18
- G06F2200/163—Indexing scheme relating to constructional details of the computer
- G06F2200/1637—Sensing arrangement for detection of housing movement or orientation, e.g. for controlling scrolling or cursor movement on the display of an handheld computer
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/048—Indexing scheme relating to G06F3/048
- G06F2203/04803—Split screen, i.e. subdividing the display area or the window area into separate subareas
Definitions
- the present invention relates to a device and a method of controlling the device using detected motion types.
- Portable and moveable devices such as mobile telephones, tablet computer devices and laptops are widely used.
- one of the challenges for the designers of portable devices is to design an apparatus that is light, battery-efficient and convenient to use.
- One facet of ease of use is the ease with which a user can input control commands.
- the present invention provides a device comprising a display to display information in a plurality of independent display regions, each display region being independently identifiable, a memory to store motion type control data comprising identifications of motion types and corresponding operations to be performed on information displayed in one or more identified display regions; a motion detecting arrangement to detect motion of the device; a user input arrangement to receive a user input identifying operations to be performed on information displayed in one or more identified display regions during a learning phase; and a processor programmed, in the learning phase when a user input is received identifying an operation to be performed on information displayed in one or more identified display regions, to identify a motion type from motion detected by the motion detecting arrangement and to store the received user input in correspondence with data identifying the motion type in the motion type control data, and, subsequent to the learning phase when the motion detecting arrangement detects motion, to use the motion type control data to identify a motion type and execute the corresponding operation on information displayed in one or more identified display regions.
- the present invention also provides a method of controlling the display of information in a plurality of independent display regions on a device having a motion detector arrangement to detect motion of the device, each display region being independently identifiable, the method comprising, during a learning phase, receiving a user input identifying an operation to be performed on information displayed in one or more identified display regions; detecting motion of the device using the motion detector arrangement; identifying a motion type from the motion detected by the motion detecting arrangement; storing the received user input in correspondence with data identifying the motion type as motion type control data; and repeating the receiving, detecting, identifying and storing steps; and in an operation phase detecting motion of the device using the motion detector arrangement; and using the motion type control data to identify a motion type and execute the corresponding operation on information displayed in one or more identified display regions.
- the present invention also provides a device comprising a memory to store control data comprising identifications of motion types, corresponding control operations, and corresponding gesture type data; a motion detecting arrangement to detect motion of the device; a gesture detecting arrangement to detect gestures; a user input arrangement to receive a user input identifying control operations during a learning phase; and a processor programmed in the learning phase when a user input is received identifying a control operation, to identify a motion type from motion detected by the motion detecting arrangement, to store the received user input in correspondence with data identifying the motion type in the control data, and to determine corresponding gesture type data, and subsequent to the learning phase, when the motion detecting arrangement detects motion, to use the control data to identify a motion type and execute the corresponding control operation, or when the gesture detecting arrangement detects a gesture, to use the control data to identify a gesture type and execute the corresponding control operation.
- Figure 1 is a schematic diagram illustrating an example handheld device
- Figure 2 is a schematic diagram illustrating an alternative display of an example device
- Figure 3 is a schematic diagram illustrating an alternative display of an example device
- Figure 4 is a schematic diagram illustrating the electronic components of an example device
- Figure 5 is a flow diagram illustrating an example learning phase
- Figure 6 is a flow diagram illustrating an example operational phase.
DETAILED DESCRIPTION
- a data store or memory is intended to encompass any computer readable storage medium and/or device (or collection of data storage mediums and/or devices).
- data stores include, but are not limited to, optical disks (e.g., CD-ROM, DVD-ROM, etc.), magnetic disks (e.g., hard disks, floppy disks, etc.), memory circuits (e.g., solid state drives, random-access memory (RAM), etc.), and/or the like.
- the functions or algorithms described herein are implemented in hardware, software or a combination of software and hardware in one embodiment.
- the software comprises computer executable instructions stored on computer readable carrier media such as memory or other type of storage devices.
- described functions may correspond to modules, which may be software, hardware, firmware, or any combination thereof. Multiple functions are performed in one or more modules as desired, and the embodiments described are merely examples.
- the software is executed on a digital signal processor, ASIC, microprocessor, or other type of processor.
- Some embodiments implement the functions in two or more specific interconnected hardware modules or devices with related control and data signals communicated between and through the modules, or as portions of an application-specific integrated circuit.
- the exemplary process flow is applicable to software, firmware, and hardware implementations.
- a generalized embodiment provides a device comprising a display to display information in a plurality of independent display regions, each display region being independently identifiable, a memory to store motion type control data comprising identifications of motion types and corresponding operations to be performed on information displayed in one or more identified display regions; a motion detecting arrangement to detect motion of the device; a user input arrangement to receive a user input identifying operations to be performed on information displayed in one or more identified display regions during a learning phase; and a processor programmed, in the learning phase when a user input is received identifying an operation to be performed on information displayed in one or more identified display regions, to identify a motion type from motion detected by the motion detecting arrangement and to store the received user input in correspondence with data identifying the motion type in the motion type control data, and, subsequent to the learning phase when the motion detecting arrangement detects motion, to use the motion type control data to identify a motion type and execute the corresponding operation on information displayed in one or more identified display regions.
- a user of the device is able to train the device on preferred motion types to be used as inputs to cause operations to be performed on information displayed in one or more identified display regions. In this way, a user is able to tailor the way the user interfaces with specific identified display regions of the device to select preferred motion types for certain operations to be performed on information displayed in one or more identified display regions.
- Identifications of motion types can comprise recorded motion data for each motion type for matching with detected motion data in order to identify a motion type from the motion data. In one embodiment, not all of the motion types need to be learnt; some can be prestored.
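- As an illustration only, the motion type control data might be organised along the following lines (a minimal Python sketch; the names, fields and types are assumptions, not taken from the application):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MotionTypeEntry:
    """One learnt (or prestored) motion type and the operation it triggers."""
    motion_id: str          # e.g. "shake_x" or "rotate_clockwise" (hypothetical labels)
    mean_pattern: list      # averaged motion samples recorded in the learning phase
    max_deviation: float    # tolerated distance between new motion data and mean_pattern
    operation: str          # e.g. "erase" or "paste"
    region_ids: List[int] = field(default_factory=list)  # identified display regions

# the motion type control data is then simply a collection of such entries
motion_type_control_data: List[MotionTypeEntry] = []
```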
- the invention is suited to a portable device but is not limited to such.
- the device can be any device that can be moved in a reciprocal, rotational or shaking motion. Such a device is typically portable and wireless but could be simply moveable and connected by wires.
- the device could be part of another apparatus and removable therefrom in use to allow for the shaking, rotating or reciprocating motion to be effected by a user to control the device.
- the device can comprise a mobile telephone, a tablet computer, a wearable device, such as a smart watch or personal monitor device, a laptop, or a machine control device.
- the operation to be performed on information displayed in one or more identified display regions can comprise any operation on the information in a display region, such as cutting, pasting, insertion or deletion of displayed information in one or more display regions of a display.
- the operation can be limited to an operation on one display region, a subset of the display regions or all of the display regions.
- Types of motion that the user can use for identification and execution of control operations can comprise any of a rectilinear motion, a reciprocating motion, a rotational motion, a swaying motion, a see-sawing motion, a swinging motion or a rocking motion, for example.
- the displacement or amplitude of the ‘shake’, the frequency or rate of the ‘shake’ and/or the speed/velocity or vigour of the ‘shake’ can also be components of the type of motion of the device.
- the motion types can comprise a single motion type or a combination of motion types, e.g. a translational reciprocal motion and a rotational motion, and the motions can be in multiple directions.
- Motion thresholds can be set, whereby the motion type is only identified from motion data when the speed or amplitude of the motion is above a threshold. This avoids erroneous motion type identification and hence erroneous control operation execution.
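- A threshold check of this kind could be as simple as the following sketch, assuming motion is sampled as a list of acceleration magnitudes (the representation and the threshold value are illustrative assumptions):

```python
def exceeds_motion_threshold(samples, amplitude_threshold=1.5):
    """Return True only if the motion is vigorous enough to be treated as input.

    samples: acceleration magnitudes (e.g. in m/s^2, gravity removed); motion
    below the threshold is ignored rather than matched against motion types.
    """
    return max(abs(s) for s in samples) >= amplitude_threshold
```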
- the motion can comprise a complex mix or aggregation of motions or motion types e.g. a rotation simultaneously with a shake, or a sequence of motions e.g. a shake followed by a rotation, or a rotation in one direction followed by a rotation in another direction.
- the sequence of motions can hence be a sequence of the same motion types or different motion types.
- the device may require the user to carry out the motion type more than once so that the motion can be detected more than once and the motion pattern recorded each time so that some form of average of the desired motion pattern can be stored as the data identifying the motion type in the motion type control data.
- This smoothing or averaging of the recorded motion that the user wishes to use as the motion type for an operation to be performed on information displayed in one or more identified display regions assists the processor in tolerating deviations in the motion data received during an operational phase when a user intends to cause such an operation.
- a user may input a sequence of user inputs representing a sequence of desired motion types to be used to execute corresponding operations.
- a user input can be received followed by the motion data, input as a pair, and such pairs can be input as a series to generate motion type control data for a plurality of control operations to be performed on information displayed in one or more identified display regions.
- user selections of one or more display regions can be received to identify the one or more display regions as part of the user input.
- the user selections of one or more display regions can be made using a pointer or touch input, or simply by selecting a display region from a list of display regions listed by identifiers.
- a menu can be displayed to the user to allow the user to generate the user input simply by selecting a displayed menu item and one or more display regions to which the operation is to be applied.
- the device, in the learning phase, may have a set list of control operations, and the user input comprises a selection to implement the list to learn motion types for all of the listed operations to be performed on information displayed in one or more identified display regions.
- the device then simply requires the user to perform the required type of motion of the device sequentially for each of the listed operations to be performed on information displayed in one or more identified display regions.
- the device can indicate to the user when each type of motion should be carried out for each control operation. This can be by a displayed message, an indicator light or a sound output for example.
- the motion types and control operations can be learned for individual users. For example, a user could enter their name or sign in to use the display apparatus and the learnt motion types and control operations are then stored for that user for use when the user enters their name or signs in.
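- A per-user store could, for example, be keyed on the entered name or sign-in (a sketch under that assumption, reusing the hypothetical MotionTypeEntry from the earlier sketch):

```python
from typing import Dict, List

# hypothetical per-user store: sign-in name -> that user's learnt motion type entries
user_profiles: Dict[str, List["MotionTypeEntry"]] = {}

def control_data_for(user_name: str) -> List["MotionTypeEntry"]:
    """Fetch (or create) the learnt motion type control data for the signed-in user."""
    return user_profiles.setdefault(user_name, [])
```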
- the user may use each region of the display simultaneously. Further, the display screen regions may be positioned as required by the user and may be pre-set or set by the user. The user may use the motion control to manipulate a display region, e.g. point to choose and position or re-position a part of the display screen region or data within the chosen display region to relocate it.
- the user is able to select and control a display region and to open and adjust the display region, e.g. by zooming in on it, or by tapping on a chosen display region to open up the display region to the size required or desired by the user. This may enable the user to fully open the display region, e.g. to cover the whole screen.
- the device can comprise a flexible device with one or more displays and one or more display regions, where display data can be transitioned between display regions, including appearing and transitioning on one or more sides of the device display.
- a device with displays on both sides may comprise an electronic document reader that can display and control the displayed data. For example, data can be displayed on the front side A of the device and then transitioned to side B, the reverse side of the device.
- the device comprises a flexible device and the data may be displayed on one or more folded or flexed regions of the flexible device and the user can use motion or gestures to cause the data to transition between display regions on the same display or another display if more than one is provided.
- the user input arrangement can comprise any conventional device input arrangement, such as a keyboard, pointing device (e.g. a mouse, track pad, or tracker ball), a touch sensitive display and/or a pen.
- a pen allows a user to input handwritten text and drawings.
- the device can have a display that can comprise a fixed or flexible display or displays.
- the display or displays may be detachable.
- the displays can be of different types, e.g. one flexible and one rigid.
- an embodiment provides a carrier medium, such as a non-transient storage medium storing code for execution by a processor of a machine to carry out the method, or a transient medium carrying processor executable code for execution by a processor of a machine to carry out the method.
- Embodiments can be implemented in programmable digital logic that implements computer code. The code can be supplied to the programmable logic, such as a processor or microprocessor, on a carrier medium.
- one form of carrier medium is a transient medium, i.e. a signal such as an electrical, electromagnetic, acoustic, magnetic, or optical signal.
- Another form of carrier medium is a non-transitory storage medium that stores the code, such as a solid-state memory, magnetic media (hard disk drive), or optical media (Compact disc (CD) or digital versatile disc (DVD)).
- the device can, in one example, be used as the display unit in any of the examples of the display apparatus disclosed in co-pending application GB 1722249.8 filed on 29th December 2017, the contents of which are hereby incorporated by reference in their entirety, or with the device disclosed in co-pending UK patent application GB 1805278.7 entitled “Display Apparatus” filed on the same date as this application, the contents of which are hereby incorporated by reference in their entirety.
- Specific embodiments will now be described with reference to the drawings.
- Figure 1 is a schematic diagram illustrating a handheld device 1.
- the device 1 can comprise any handheld device, such as a mobile telephone.
- the motion directions that the device 1 can experience as a method of entering a command for one or more independent display regions of a device are illustrated by the arrows in figure 1.
- the motion can be a back and forth linear or reciprocating motion in any of three directions along three axes x, y and z in three dimensions as shown by the intersecting arrows.
- the motion can be non-linear, curved or rotational as shown by the curved arrows A and B.
- the rotational motion can be rotation about any of the x, y or z axes.
- Figure 2 is a schematic diagram illustrating a mobile device 2.
- the device 2 comprises a display screen 3 capable of displaying information.
- the display screen 3 is displaying three types of information, namely typed text 11, handwritten text 41 and an image 31 in three different display regions.
- although different information is illustrated as being displayed in the different regions, the invention applies also to the display of the same information type or a mixture of information types in the same or different display regions.
- the displayed information can for example be a document or form that has regions to be completed or filled in, e.g. an address region and a name region in a standard form letter, or fields in a form.
- the information displayed in a display region can comprise any of text, images, or videos.
- the information can be displayed by any application executing on the apparatus, such as a web browser, email application, word processor, spreadsheet, image editor etc.
- the handwritten text 41 can be input using a pen device 50.
- the display screen 3 can be sensitive to the proximity of the pen device 50 to detect the coordinates of the pen device 50 to track its location and hence to generate the input handwritten text 41.
- the display screen 3 can be touch sensitive to allow for input of information and commands to the device 2, e.g. by the display of touch sensitive options and/or a keyboard.
- a motion of the device 2 can cause execution of a control operation to modify any of the displayed information in any of the display regions, such as by erasing all of the displayed information or selective parts, such as one or more of the typed text 11, the handwritten text 41 or the image 31, or by entering predefined information, e.g. entering text or image data, into a display region.
- Figure 3 illustrates an alternative display screen 20 of a device. In this example, there are three different display regions that are clearly defined by boundary markings, namely a typed text display region 10 with displayed typed text 11, an image display region 30 displaying an image 31, and a handwritten text display region 40 displaying handwritten text 41, created using a pen 50.
- while figures 2 and 3 illustrate motion control of displayed information in the form of text or images, the motion control can be applied to any displayed information.
- a virtual shopping cart could be moved, filled or emptied and items could be checked out.
- Each control operation can have an associated control motion that the device is taught in a learning phase.
- Some control options can be prestored and do not require teaching.
- the motion of the device of this example can be a back and forth linear or reciprocating motion in any of three directions along three axes x, y and z in three dimensions as shown by the intersecting arrows of figure 1. Also, or alternatively, the motion can be non-linear, curved or rotational as shown by the curved arrows in figure 1.
- a motion of the device can erase all or some of the displayed information in one or more of the display regions 10, 30 and 40.
- the one or more display regions and the operation performed on the information in those regions are defined as a command specific to a motion of the device.
- the device stores a set of motion patterns with corresponding display region identifiers and information operations so that a specific motion type will cause a specific operation on information displayed in only one display region, only a subset of the display regions or all of the display regions.
- the motion detecting arrangement of the device can detect any one of a plurality of types of patterns of motion in any dimension or direction. Examples of different patterns of motion are: shaking back and forth, shaking side to side, shaking up and down, turning side to side (tilting side to side), turning up and down (tilting up and down) and rotating. Each of these represents a lateral (shaking) or rotational movement along or about one of the three axes illustrated in figure 1. It is possible to detect two types of each of these motion types, based on the starting motion. For example, back and forth could be back and forth or forth and back, side to side could be left to right or right to left, rotating could be clockwise or anticlockwise, etc.
- the vigour, e.g. the speed or frequency of the shaking or reciprocation, can also form part of the motion type.
- the same spatial displacements of the display apparatus may correspond to different motion types depending upon the speed, acceleration and frequency that the displacement is executed.
- complex motion types can comprise complex patterns of motion comprising a combination of basic motion types, e.g. a lateral motion and a rotational motion, or combinations of reciprocal lateral and rotational motions.
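- One conceivable way to distinguish the basic motion types from sensor data is sketched below, assuming three-axis accelerometer and gyroscope samples are available; the classification rules are a deliberate simplification for illustration, not the application's method:

```python
import numpy as np

def classify_basic_motion(accel: np.ndarray, gyro: np.ndarray) -> str:
    """Toy classifier for basic motion types.

    accel: (n, 3) linear acceleration samples along x, y, z
    gyro:  (n, 3) angular velocity samples about x, y, z
    """
    if np.abs(gyro).max() > np.abs(accel).max():       # rotation dominates (toy comparison)
        axis = int(np.abs(gyro).mean(axis=0).argmax())
        sense = "cw" if gyro[:, axis].mean() > 0 else "acw"
        return f"rotate_{'xyz'[axis]}_{sense}"         # e.g. clockwise about z
    axis = int(np.abs(accel).mean(axis=0).argmax())    # lateral motion dominates
    # repeated sign changes along the dominant axis suggest a reciprocating shake
    reversals = int((np.diff(np.sign(accel[:, axis])) != 0).sum())
    return f"shake_{'xyz'[axis]}" if reversals > 1 else f"move_{'xyz'[axis]}"
```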
- a memory of the device can store a set of operations to be implemented on identified one or more display regions by the processor for each type of motion detected by the motion detecting arrangement and identified by the processor from the signal from the motion detecting arrangement. These operations can be predefined or downloaded from a data source and they can be user defined or supplemented. The user definition or supplementing can take place during a learning phase.
- Figure 4 is a schematic diagram illustrating the electronic components of an example device.
- a processor 100 is connected to static or non-volatile memory 102 storing code used by the processor 100, such as operating system code and for the storage of data and information for display in a non-volatile manner.
- Volatile memory 103 is provided for the storage of application code for implementation by the processor 100 and data and information for display.
- the volatile memory can store code (e.g. a code module) comprising code for identifying motion patterns detected by the motion sensor 104, and code (e.g. a code module) for controlling a display screen 105 in response, to perform operations on information displayed in independent display regions.
- the memory can also store data mapping operations on display regions and identifiers for the display regions against motion types, prestored or learnt during a learning phase.
- the display screen 105 is connected to the processor 100 for the display of information.
- the display screen 105 can comprise a touch sensitive display to provide an input arrangement.
- a pen 50 shown in figure 2 can be used with the display screen 105 to allow for the input of freehand text or drawings.
- a camera 106 can also optionally be connected to the processor 100 to capture images. The camera 106 can also be used as an input arrangement for the input of information and commands using gesture recognition implemented by the processor on the images captured by the camera 106.
- a network connector 101 is connected to the processor 100 for communication with a communications or computer network. This communication link can pass data, information and controls between the device and other devices or computers over the network.
- the network connector 101 can be connected to a communications or computer network using a wired or wireless connection.
- a wired connection could be an Ethernet connection to a LAN or a telephone connection, such that the network connector 101 acts as a modem or router.
- a wireless connection could be Wi-Fi, WiMAX, CDMA, GSM, GPRS, wireless local loop or WAN.
- Bluetooth™ may be used.
- a motion sensor 104 is connected to the processor 100 to detect motion of the device.
- the motion sensor 104 can comprise any conventional motion sensor, such as an accelerometer.
- the motion sensor 104 is capable of detecting motion of the device in any direction i.e. it is a multi-axis motion sensor to detect motion along any of three orthogonal axes in three dimensions.
- the device may have a display screen on both sides, so that it can be turned over to view information on both sides.
- the control operation could apply to both sides or to just one side. This, in one example, can be selectively set.
- the display or displays can also be detached from a mount frame.
- the device can comprise a docking part or mount to which a display screen is mounted in a detachable or demountable manner so that the display screen can be used separately from the mount.
- the mount and display screen can communicate between each other using a wireless link so that the mount can contain some components of the display apparatus, such as the processor and storage, while the display screen can contain only those components necessary to operate the display screen, detect motion of the display screen, and communicate with the processor.
- more than one display screen can be provided detachable from a mount, with each display screen operating independently to provide the erase operation by motion detection using a shared processor in the mount.
- the device can include one or more loudspeakers for audio output and one or more microphones for audio input. Also, the device can connect to a peripheral device such as a keyboard and/or pointing device by a wired or wireless connection to allow for input of information and commands to the device.
- Figure 5 is a flow diagram illustrating a learning phase of the device.
- in step S1 the device enters the learning phase. This can be automatic on start-up of the device or as a result of some other operation of the device, or it can be as a result of a user selection, i.e. a user input, such as using an input arrangement of the device.
- in step S2 a display of the device displays a menu of defined operations to be performed on information displayed in one or more identified display regions that a user can choose to execute using motion as a user input.
- the displayed menu can simply be a list, or a drop-down menu for example. The list could be organized into categories of types of operations.
- the menu can also display a list of display regions that the operations can be applied to and a user can select one or more display regions.
- the display can display the regions in a selectable manner so that a user can select one or more regions to which the selected operation is to be applied using a pointer device or a touch input for example to select the region or regions of the display.
- in step S3 a user selection of an operation menu item is received and then in step S4 the device waits for the motion detector arrangement, e.g. an accelerometer, to detect motion of the device. The motion is recorded over a short period of time appropriate to capture the motion pattern, e.g. 1-2 seconds.
- in step S5 the process loops back to require the user to repeat the motion input a number (N) of times, e.g. anything from 3 to 10 times. More iterations improve the accuracy of the matching process but reduce the usability of the device: a user will lose patience with a requirement to repeat the motion pattern too many times.
- in step S6 the recorded motion patterns for the repeated recordings are averaged and the average is used in step S7 for storage as part of the information on the motion type in the stored motion type control data.
- Data indicating the degree of deviation between the recorded motion patterns can also be stored as part of the motion type data to assist with the matching process during the operational phase i.e. to assist with assessing whether a match lies within an expected deviation of a stored average motion pattern.
- An identifier identifying the corresponding control operation is also stored as part of the stored motion type control data.
- the process then returns to step S2 to allow the user to select another operation from the menu. When the user has finished selecting operations, the user can select to exit the learning phase.
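- Steps S3 to S7 could be sketched as follows, reusing the hypothetical MotionTypeEntry above; select_operation and record_motion stand in for the menu and the motion detector arrangement and are assumptions, not part of the application:

```python
import numpy as np

N_REPEATS = 5  # within the 3 to 10 repetitions suggested above

def learn_motion_type(select_operation, record_motion, control_data):
    """One pass of the learning phase of figure 5 (a sketch)."""
    operation, region_ids = select_operation()          # step S3: menu selection
    recordings = [record_motion(duration_s=2.0)         # step S4: record ~1-2 s of motion
                  for _ in range(N_REPEATS)]            # step S5: repeat N times
    patterns = np.stack(recordings)                     # assumes equal-length (samples, axes) recordings
    mean_pattern = patterns.mean(axis=0)                # step S6: average the recordings
    # record how far the individual recordings strayed from the average (step S7)
    max_deviation = float(np.linalg.norm(patterns - mean_pattern, axis=(1, 2)).max())
    control_data.append(MotionTypeEntry(
        motion_id=f"user_motion_{len(control_data)}",
        mean_pattern=mean_pattern.tolist(),
        max_deviation=max_deviation,
        operation=operation,
        region_ids=region_ids,
    ))
```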
- Figure 6 is a flow diagram illustrating an operational phase of the device following the learning phase.
- in step S10 the device enters the operational phase and in step S11 motion of the device is detected.
- in step S12 the pattern of motion is compared with the stored patterns for the motion types in the motion type control data to determine if the motion matches a motion type. If no match is found (step S13), the process returns to step S11 to detect further motion. If a match is found with the motion data for a motion type in step S13, then in step S14 a corresponding operation on information displayed in one or more identified display regions is identified in the motion type control data, and in step S15 the operation on information displayed in one or more identified display regions is executed. The process then returns to step S11 to detect the next motion of the device for further operations on information displayed in one or more identified display regions.
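- The matching of steps S12 to S14 could then look like the following sketch: a nearest-neighbour comparison against the stored averages, accepted only within the learnt deviation (a plain distance is used here for illustration; a real implementation might align or resample the patterns first, or use dynamic time warping):

```python
import numpy as np

def match_motion(detected, control_data):
    """Find the stored motion type closest to the detected pattern (a sketch)."""
    detected = np.asarray(detected)
    best, best_dist = None, float("inf")
    for entry in control_data:
        mean = np.asarray(entry.mean_pattern)
        if mean.shape != detected.shape:    # naive guard; real code would resample/align
            continue
        dist = float(np.linalg.norm(detected - mean))
        if dist < best_dist:
            best, best_dist = entry, dist
    if best is not None and best_dist <= best.max_deviation:   # step S13: match found
        return best.operation, best.region_ids                 # step S14: operation + regions
    return None   # no match: back to step S11 to detect further motion
```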
- in the examples above, an average of the motion data for a number of recorded motions is taken.
- the invention is not restricted to taking an average: only one motion detection may be used, or a number may be recorded and stored individually for individual matching with a motion pattern during the operational phase.
- a user is presented with a menu to select an operation to teach the device the motion type for that control operation.
- the user may also be able to define their own operation to be performed on information displayed in one or more identified display regions using the user input arrangement.
- the device can include other input arrangements, such as an audio sensor for audio input, a keyboard, a pointer device (e.g. a mouse), or a touch screen, to enable a user to input controls and information in conjunction with the motion detection control input method.
- a camera can be provided to allow image capture to enable gesture input as a method of control or information input.
- the device can include an infrared sensor, e.g. a passive infrared sensor (PIR).
- the information displayed in a display region can comprise video (moving images, television, 3D video, panoramic views, etc.).
- the device can include a remote control for controlling the device.
- the remote control can be motion responsive for motion control of the device.
- a camera in addition to or alternatively to the motion control input, can be provided in the device to capture video in a direction facing the user so that the user can use gestures to input a control in dependence upon the type of identified gesture.
- the gesture control can comprise the recognition of movements of a user’s body parts i.e. hands, facial features or the overall body position. Pattern recognition techniques can be used to recognize the type of gestures and to identify them. Example gesture types can include sweeping of a hand to the left or right, or up or down, or a waving of a hand.
- the gesture control erase operation can be used, for example, where the user is unable to pick up the device, e.g. they have wet or dirty hands.
- the device can be controlled to be responsive to the detected gesture type and to identify a comparable or associated motion type that inputs the same control and to control a display screen to cause the display screen to mimic the motion equivalent command to the gesture command. For example, if the motion command is a left to right rotation, when an equivalent or associated gesture command e.g. waving the hand from left to right, is identified, the display screen can cause the display to rotate from left to right to act as a confirmation back to the user that the gesture command has been recognized as the motion command.
- the equivalent gesture type can be automatically determined by the processor from a corresponding motion type and hence the motion type data can include corresponding gesture type data for the same operation.
- the use of a gesture that matches the motion command in general movement means that the gesture will be intuitive for the user, e.g. a rotation of the device can be equated to a rotation of the hands of a user.
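- The correspondence between motion types and gesture types could be held as a simple lookup alongside the control data; the pairings below are illustrative assumptions, chosen only to show the idea of a gesture mimicking the equivalent motion:

```python
from typing import Optional

# hypothetical mapping: learnt motion type -> gesture type inputting the same control
MOTION_TO_GESTURE = {
    "rotate_z_cw": "wave_hand_left_to_right",   # rotating the device ~ rotating the hand
    "shake_x": "sweep_hand_left_right",
    "shake_y": "sweep_hand_up_down",
}

def equivalent_gesture(motion_id: str) -> Optional[str]:
    """Gesture type data stored in correspondence with a motion type (a sketch)."""
    return MOTION_TO_GESTURE.get(motion_id)
```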
- This aspect of the invention can be applied to the input of any type of command to the device, including the execution of an application, the taking of a picture using a camera of the device or any other operation changing the state of the device.
- This aspect can use the hardware of figure 4 and the method of the flow diagrams of figures 5 and 6 for the learning and execution of control operations.
- This aspect is not limited to the control of display operations in display regions and instead can be applied to the input of any control operations to the device.
- the motion types and control operations can be learned for individual users. For example, a user could enter their name or sign in to use the display apparatus and the learnt motion types and operations are then stored for that user for use when the user enters their name or signs in.
- the invention provides gesture control to supplement motion control.
- the device can comprise a memory to store control data comprising identifications of motion types, corresponding control operations, and corresponding gesture type data; a motion detecting arrangement to detect motion of the device; a gesture detecting arrangement to detect gestures; a user input arrangement to receive a user input identifying control operations during a learning phase; and a processor programmed in the learning phase when a user input is received identifying a control operation, to identify a motion type from motion detected by the motion detecting arrangement, to store the received user input in correspondence with data identifying the motion type in the control data, and to determine corresponding gesture type data, and subsequent to the learning phase, when the motion detecting arrangement detects motion, to use the control data to identify a motion type and execute the corresponding control operation, or when the gesture detecting arrangement detects a gesture, to use the control data to identify a gesture type and execute the corresponding control operation.
- the device can be mobile or fixed/attached.
- a camera can be provided in the device to capture video in a direction facing the user so that the user can use gestures as a control input in dependence upon the type of identified gesture.
- the gesture control can comprise the recognition of movements of a user’s body parts i.e. hands, facial features or the overall body position or the recognition of an object moved by the user, such as a stick, ruler or something worn by the user.
- Pattern recognition techniques can be used to recognize the type of gestures and to identify them.
- Example gesture types can include sweeping of a hand to the left or right, or up or down, or a waving of a hand. The gesture matches the motion of the device that was learnt in the learning phase e.g. a rotation of the device can be mimicked by a rotation of the user’s hand.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- User Interface Of Digital Computer (AREA)
- Position Input By Displaying (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1805269.6A GB2572434A (en) | 2018-03-29 | 2018-03-29 | Device operation control |
PCT/GB2019/050933 WO2019186203A1 (en) | 2018-03-29 | 2019-03-29 | Device operation control |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3776160A1 (en) | 2021-02-17 |
Family
ID=62142364
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP19717551.6A (Withdrawn) | Device operation control | 2018-03-29 | 2019-03-29 |
Country Status (8)
Country | Link |
---|---|
EP (1) | EP3776160A1 (en) |
JP (2) | JP2021519977A (en) |
KR (1) | KR20210002512A (en) |
CN (1) | CN112041804A (en) |
GB (1) | GB2572434A (en) |
SG (1) | SG11202009628UA (en) |
WO (1) | WO2019186203A1 (en) |
ZA (1) | ZA202100682B (en) |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6798429B2 (en) * | 2001-03-29 | 2004-09-28 | Intel Corporation | Intuitive mobile device interface to virtual spaces |
US20050212760A1 (en) * | 2004-03-23 | 2005-09-29 | Marvit David L | Gesture based user interface supporting preexisting symbols |
KR100819880B1 (en) * | 2006-12-01 | 2008-04-07 | 삼성전자주식회사 | Method for executing function in wireless terminal |
US8391786B2 (en) * | 2007-01-25 | 2013-03-05 | Stephen Hodges | Motion triggered data transfer |
KR100912310B1 (en) * | 2008-04-17 | 2009-08-14 | 엘지전자 주식회사 | User interface controlling method by detecting user's gestures |
KR20100096425A (en) * | 2009-02-24 | 2010-09-02 | 삼성전자주식회사 | Method for recognizing motion based on motion sensor and mobile terminal using the same |
US9483085B2 (en) * | 2011-06-01 | 2016-11-01 | Blackberry Limited | Portable electronic device including touch-sensitive display and method of controlling same |
CN102420942A (en) * | 2011-11-28 | 2012-04-18 | 康佳集团股份有限公司 | Photograph device and photograph control method based on same |
CN102722239A (en) * | 2012-05-17 | 2012-10-10 | 上海冠勇信息科技有限公司 | Non-contact control method of mobile device |
US20140184495A1 (en) * | 2012-12-31 | 2014-07-03 | Joseph Patrick Quin | Portable Device Input by Configurable Patterns of Motion |
US9229529B2 (en) * | 2013-12-19 | 2016-01-05 | Sony Corporation | Apparatus and control method based on motion |
JP6324203B2 (en) * | 2014-05-14 | 2018-05-16 | キヤノン株式会社 | Information processing apparatus, control method therefor, program, and recording medium |
KR102275653B1 (en) * | 2014-10-21 | 2021-07-09 | 삼성전자주식회사 | Wearable device and method for transmitting contents |
CN104639966A (en) * | 2015-01-29 | 2015-05-20 | 小米科技有限责任公司 | Method and device for remote control |
US9996164B2 (en) * | 2016-09-22 | 2018-06-12 | Qualcomm Incorporated | Systems and methods for recording custom gesture commands |
-
2018
- 2018-03-29 GB GB1805269.6A patent/GB2572434A/en not_active Withdrawn
-
2019
- 2019-03-29 EP EP19717551.6A patent/EP3776160A1/en not_active Withdrawn
- 2019-03-29 JP JP2020552899A patent/JP2021519977A/en active Pending
- 2019-03-29 KR KR1020207031364A patent/KR20210002512A/en active Search and Examination
- 2019-03-29 CN CN201980029126.2A patent/CN112041804A/en active Pending
- 2019-03-29 WO PCT/GB2019/050933 patent/WO2019186203A1/en active Application Filing
- 2019-03-29 SG SG11202009628UA patent/SG11202009628UA/en unknown
-
2021
- 2021-01-29 ZA ZA2021/00682A patent/ZA202100682B/en unknown
-
2024
- 2024-02-02 JP JP2024014730A patent/JP2024056764A/en active Pending
Also Published As
Publication number | Publication date |
---|---|
GB201805269D0 (en) | 2018-05-16 |
JP2024056764A (en) | 2024-04-23 |
ZA202100682B (en) | 2023-10-25 |
KR20210002512A (en) | 2021-01-08 |
CN112041804A (en) | 2020-12-04 |
WO2019186203A1 (en) | 2019-10-03 |
GB2572434A (en) | 2019-10-02 |
SG11202009628UA (en) | 2020-10-29 |
JP2021519977A (en) | 2021-08-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20200257373A1 (en) | Terminal and method for controlling the same based on spatial interaction | |
US12073008B2 (en) | Three-dimensional object tracking to augment display area | |
KR100790896B1 (en) | Controlling method and apparatus for application using image pickup unit | |
US10949668B2 (en) | Electronic apparatus and method for controlling thereof | |
JP2016511471A (en) | Method for controlling display of a plurality of objects by movement-related input to portable terminal and portable terminal | |
CN107407945A (en) | From the system and method for screen locking capture images | |
US20130155108A1 (en) | Augmented Reality User Interaction Methods, Computing Devices, And Articles Of Manufacture | |
US20150063785A1 (en) | Method of overlappingly displaying visual object on video, storage medium, and electronic device | |
US9904444B2 (en) | Method of providing user interface of device and device including the user interface | |
CN204945943U (en) | For providing the remote control equipment of remote control signal for external display device | |
US10175780B2 (en) | Behind-display user interface | |
KR102138233B1 (en) | Display control apparatus and method for controlling the same | |
CN110827412A (en) | Method, apparatus and computer-readable storage medium for adapting a plane | |
WO2019186203A1 (en) | Device operation control | |
WO2019186201A1 (en) | Display apparatus | |
CN110827413B (en) | Method, apparatus and computer readable storage medium for controlling a change in a form of a virtual object | |
JP2016015078A (en) | Display control device, display control method, and program | |
US20240312154A1 (en) | Authoring edge-based opportunistic tangible user interfaces in augmented reality | |
KR101898162B1 (en) | Apparatus and method of providing additional function and feedback to other apparatus by using information of multiple sensor | |
CN109753143A (en) | A kind of method and apparatus optimizing cursor position |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: UNKNOWN |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
| 17P | Request for examination filed | Effective date: 20201015 |
| AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| AX | Request for extension of the european patent | Extension state: BA ME |
| DAV | Request for validation of the european patent (deleted) | |
| DAX | Request for extension of the european patent (deleted) | |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
| 18D | Application deemed to be withdrawn | Effective date: 20210518 |