EP3776160A1 - Device operation control - Google Patents

Device operation control

Info

Publication number
EP3776160A1
Authority
EP
European Patent Office
Prior art keywords
motion
user input
display regions
information displayed
identifying
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP19717551.6A
Other languages
German (de)
French (fr)
Inventor
Maria Francisca Jones
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Publication of EP3776160A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 - Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16 - Constructional details or arrangements
    • G06F1/1613 - Constructional details or arrangements for portable computers
    • G06F1/163 - Wearable computers, e.g. on a belt
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482 - Interaction with lists of selectable items, e.g. menus
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2200/00 - Indexing scheme relating to G06F1/04 - G06F1/32
    • G06F2200/16 - Indexing scheme relating to G06F1/16 - G06F1/18
    • G06F2200/163 - Indexing scheme relating to constructional details of the computer
    • G06F2200/1637 - Sensing arrangement for detection of housing movement or orientation, e.g. for controlling scrolling or cursor movement on the display of an handheld computer
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 - Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048 - Indexing scheme relating to G06F3/048
    • G06F2203/04803 - Split screen, i.e. subdividing the display area or the window area into separate subareas

Definitions

  • the present invention relates to a device and a method of controlling the device using detected motion types.
  • Portable and moveable devices such as mobile telephones, tablet computer devices and laptops are widely used.
  • the challenge for the designers of portable devices is to design an apparatus that is light, battery efficient and convenient to use.
  • One facet of ease of use is the ease with which a user can input control commands.
  • the present invention provides a device comprising a display to display information in a plurality of independent display regions, each display region being independently identifiable, a memory to store motion type control data comprising identifications of motion types and corresponding operations to be performed on information displayed in one or more identified display regions; a motion detecting arrangement to detect motion of the device; a user input arrangement to receive a user input identifying operations to be performed on information displayed in one or more identified display regions during a learning phase; and a processor programmed, in the learning phase when a user input is received identifying an operation to be performed on information displayed in one or more identified display regions, to identify a motion type from motion detected by the motion detecting arrangement and to store the received user input in correspondence with data identifying the motion type in the motion type control data, and, subsequent to the learning phase when the motion detecting arrangement detects motion, to use the motion type control data to identify a motion type and execute the corresponding operation on information displayed in one or more identified display regions.
  • the present invention also provides a method of controlling the display of information in a plurality of independent display regions on a device having a motion detector arrangement to detect motion of the device, each display region being independently identifiable, the method comprising, during a learning phase, receiving a user input identifying an operation to be performed on information displayed in one or more identified display regions; detecting motion of the device using the motion detector arrangement; identifying a motion type from the motion detected by the motion detecting arrangement; storing the received user input in correspondence with data identifying the motion type as motion type control data; and repeating the receiving, detecting, identifying and storing steps; and in an operation phase detecting motion of the device using the motion detector arrangement; and using the motion type control data to identify a motion type and execute the corresponding operation on information displayed in one or more identified display regions.
  • the present invention also provides a device comprising a memory to store control data comprising identifications of motion types, corresponding control operations, and corresponding gesture type data; a motion detecting arrangement to detect motion of the device; a gesture detecting arrangement to detect gestures; a user input arrangement to receive a user input identifying control operations during a learning phase; and a processor programmed in the learning phase when a user input is received identifying a control operation, to identify a motion type from motion detected by the motion detecting arrangement, to store the received user input in correspondence with data identifying the motion type in the control data, and to determine corresponding gesture type data, and subsequent to the learning phase, when the motion detecting arrangement detects motion, to use the control data to identify a motion type and execute the corresponding control operation, or when the gesture detecting arrangement detects a gesture, to use the control data to identify a gesture type and execute the corresponding control operation.
  • Figure 1 is a schematic diagram illustrating an example handheld device
  • Figure 2 is a schematic diagram illustrating an alternative display of an example device
  • Figure 3 is a schematic diagram illustrating an alternative display of an example device
  • Figure 4 is a schematic diagram illustrating the electronic components of an example device
  • Figure 5 is a flow diagram illustrating an example learning phase
  • Figure 6 is a flow diagram illustrating an example operational phase.
  • the term data store or memory is intended to encompass any computer readable storage medium and/or device (or collection of data storage mediums and/or devices).
  • examples of data stores include, but are not limited to, optical disks (e.g., CD-ROM, DVD-ROM, etc.), magnetic disks (e.g., hard disks, floppy disks, etc.), memory circuits (e.g., solid state drives, random-access memory (RAM), etc.), and/or the like.
  • the functions or algorithms described herein are implemented in hardware, software or a combination of software and hardware in one embodiment.
  • the software comprises computer executable instructions stored on computer readable carrier media such as memory or other type of storage devices.
  • described functions may correspond to modules, which may be software, hardware, firmware, or any combination thereof. Multiple functions are performed in one or more modules as desired, and the embodiments described are merely examples.
  • the software is executed on a digital signal processor, ASIC, microprocessor, or other type of processor.
  • Some embodiments implement the functions in two or more specific interconnected hardware modules or devices with related control and data signals communicated between and through the modules, or as portions of an application- specific integrated circuit.
  • the exemplary process flow is applicable to software, firmware, and hardware implementations.
  • a generalized embodiment provides a device comprising a display to display information in a plurality of independent display regions, each display region being independently identifiable, a memory to store motion type control data comprising identifications of motion types and corresponding operations to be performed on information displayed in one or more identified display regions; a motion detecting arrangement to detect motion of the device; a user input arrangement to receive a user input identifying operations to be performed on information displayed in one or more identified display regions during a learning phase; and a processor programmed, in the learning phase when a user input is received identifying an operation to be performed on information displayed in one or more identified display regions, to identify a motion type from motion detected by the motion detecting arrangement and to store the received user input in correspondence with data identifying the motion type in the motion type control data, and, subsequent to the learning phase when the motion detecting arrangement detects motion, to use the motion type control data to identify a motion type and execute the corresponding operation on information displayed in one or more identified display regions.
  • a user of the device is able to train the device on preferred motion types to be used as inputs to cause operations to be performed on information displayed in one or more identified display regions. In this way, a user is able to tailor the way the user interfaces with specific identified display regions of the device to select preferred motion types for certain operations to be performed on information displayed in one or more identified display regions.
  • Identifications of motion types can comprise recorded motion data for each motion type for matching with detected motion data in order to identify a motion type from the motion data. In one embodiment not all of the motion types need to be learnt, some can be prestored.
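  • as an illustration only (the patent does not prescribe any storage format), the motion type control data might be organized as one record per motion type holding the recorded reference pattern, a match tolerance, the corresponding operation and the identified display regions; all names in this sketch are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class MotionType:
    """One learnt motion type and the operation it triggers."""
    motion_id: str           # e.g. "shake_x" or "rotate_cw" (illustrative)
    reference_pattern: list  # averaged (x, y, z) motion samples
    max_deviation: float     # tolerated distance from the reference
    operation: str           # e.g. "erase" or "paste"
    region_ids: list = field(default_factory=list)  # display regions to act on

# motion type control data: identifications of motion types mapped to
# corresponding operations on identified display regions
motion_type_control_data: dict[str, MotionType] = {}
```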
  • the invention is suited to a portable device but is not limited to such.
  • the device can be any device that can be moved in a reciprocal, rotational or shaking motion. Such a device is typically portable and wireless but could be simply moveable and connected by wires.
  • the device could be part of another apparatus and removable therefrom in use, to allow the shaking, rotating or reciprocating motion to be effected by a user to control the device.
  • the device can comprise a mobile telephone, a tablet computer, a wearable device, such as a smart watch or personal monitor device, a laptop, or a machine control device.
  • the operation to be performed on information displayed in one or more identified display regions can comprise any operation on the information in a display region, such as cutting, pasting, insertion or deletion of displayed information in one or more display regions of a display.
  • the operation can be limited to an operation on one display region, a subset of the display regions or all of the display regions.
  • Types of motion that the user can use for identification and execution of control operations can comprise any of a rectilinear motion, a reciprocating motion, a rotational motion, a swaying motion, a see-sawing motion, a swinging motion or a rocking motion, for example.
  • the displacement or amplitude of the ‘shake’, the frequency or rate of the ‘shake’ and/or the speed/velocity or vigour of the ‘shake’ can also be components of the type of motion of the device.
  • the motion types can comprise a single motion type or a combination of motion types, e.g. a translational reciprocal motion and a rotational motion, and the motions can be in multiple directions.
  • Motion thresholds can be set, whereby the motion type is only identified from motion data when the speed or amplitude of the motion is above a threshold. This avoids erroneous motion type identification and hence erroneous control operation execution.
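  • a minimal sketch of such a threshold gate, assuming motion arrives as (x, y, z) acceleration samples; the cutoff value is purely illustrative:

```python
def exceeds_motion_threshold(samples, threshold=1.5):
    """Return True only when the motion is vigorous enough to be deliberate.

    samples: iterable of (x, y, z) acceleration tuples; threshold is a peak
    magnitude cutoff in whatever units the motion sensor reports.
    """
    peak = max((x * x + y * y + z * z) ** 0.5 for x, y, z in samples)
    return peak >= threshold
```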
  • the motion can comprise a complex mix or aggregation of motions or motion types e.g. a rotation simultaneously with a shake, or a sequence of motions e.g. a shake followed by a rotation, or a rotation in one direction followed by a rotation in another direction.
  • the sequence of motions can hence be a sequence of the same motion types or different motion types.
  • the device may require the user to carry out the motion type more than once so that the motion can be detected more than once and the motion pattern recorded each time so that some form of average of the desired motion pattern can be stored as the data identifying the motion type in the motion type control data.
  • This smoothing or averaging of the recorded motion that the user wishes to be the motion type for an operation to be performed on information displayed in one or more identified display regions assists the processor in learning the deviations in the motion data that might be received in an operational phase when a user intends to cause an operation on information displayed in one or more identified display regions.
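  • one way such averaging could be realized is sketched below (reused by the learning and operational phase sketches later); it assumes each recording is first resampled to a fixed length so recordings can be compared sample by sample, and it keeps the largest observed deviation alongside the average as a match tolerance:

```python
def resample(recording, length=64):
    """Pick `length` evenly spaced (x, y, z) samples from a recording."""
    step = (len(recording) - 1) / (length - 1)
    return [recording[round(i * step)] for i in range(length)]

def pattern_distance(a, b):
    """Euclidean distance between two equal-length (x, y, z) patterns."""
    return sum(
        sum((p - q) ** 2 for p, q in zip(sa, sb)) for sa, sb in zip(a, b)
    ) ** 0.5

def average_pattern(recordings, length=64):
    """Average repeated recordings of one motion into a reference pattern.

    Returns (reference, max_deviation): the per-sample mean of the aligned
    recordings, and the largest distance of any single recording from that
    mean, kept with the reference as a tolerance for later matching.
    """
    aligned = [resample(r, length) for r in recordings]
    reference = [
        tuple(sum(sample[axis] for sample in column) / len(column)
              for axis in range(3))
        for column in zip(*aligned)
    ]
    max_deviation = max(pattern_distance(r, reference) for r in aligned)
    return reference, max_deviation
```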
  • a user may input a sequence of user inputs representing a sequence of desired motion types to be used to execute corresponding operations.
  • a user input can be received followed by motion data, with each such pair input in series, to generate motion type control data for a plurality of control operations to be performed on information displayed in one or more identified display regions.
  • user selections of one or more display region can be received to identify the one or more display regions as part of the user input.
  • the user selections of one or more display regions can be made using a pointer or touch input, or simply by selecting a display region from a list of display regions listed by identifiers.
  • a menu can be displayed to the user to allow the user to generate the user input simply by selecting a displayed menu item and one or more display regions to which the operation is to be applied.
  • the device in the learning phase, may have a set list of control operations and the user input comprises a selection to implement the list to learn motion types for all of the listed operations to be performed on information displayed in one or more identified display regions.
  • the device simply requires the user to cause the required motion type for the device sequentially for the listed operations to be performed on information displayed in one or more identified display regions.
  • the device can indicate to the user when each type of motion should be carried out for each control operation. This can be by a displayed message, an indicator light or a sound output for example.
  • the motion types and control operations can be learned for individual users. For example, a user could enter their name or sign in to use the display apparatus and the learnt motion types and control operations are then stored for that user for use when the user enters their name or signs in.
  • the user may use each region of the display simultaneously. Further, the display screen regions may be positioned as required by the user and may be pre-set or set by the user. The user may use the motion control to manipulate a display region, e.g. point to choose and position or re-position a part of the display screen region, or data within the chosen display region, to relocate it.
  • the user is able to select and control a display region and to open and adjust the display region e.g. by zooming in on it, e.g. by tapping on a chosen display region to open up the display region to the given size as required or desired by the user. This may enable the user to fully open the display region e.g. to cover the whole screen.
  • the device comprises a flexible device with one or more displays, with one or more display regions where display data can be transitioned between one or more display regions including appearing and transitioning on one or more sides of the device display.
  • a device with displays on both sides may comprise an electronic document reader that can display and control the displayed data. For example, data can be displayed on the front side A of the device and then transitioned to side B, the reverse side of the device.
  • the device comprises a flexible device and the data may be displayed on one or more folded or flexed regions of the flexible device and the user can use motion or gestures to cause the data to transition between display regions on the same display or another display if more than one is provided.
  • the user input arrangement can comprise any conventional device input arrangement, such as a keyboard, pointing device (e.g. a mouse, track pad, or tracker ball), a touch sensitive display and/or a pen.
  • a pen allows a user to input handwritten text and drawings.
  • the device can have a display that can comprise a fixed or flexible display or displays.
  • the display or displays may be detachable.
  • the displays can be of different types, e.g. one flexible and one rigid.
  • an embodiment also provides a carrier medium, such as a non-transient storage medium storing code for execution by a processor of a machine to carry out the method, or a transient medium carrying processor executable code for execution by a processor of a machine to carry out the method.
  • Embodiments can be implemented in programmable digital logic that implements computer code. The code can be supplied to the programmable logic, such as a processor or microprocessor, on a carrier medium.
  • one form of carrier medium is a transient medium, i.e. a signal such as an electrical, electromagnetic, acoustic, magnetic, or optical signal.
  • Another form of carrier medium is a non-transitory storage medium that stores the code, such as a solid-state memory, magnetic media (hard disk drive), or optical media (Compact disc (CD) or digital versatile disc (DVD)).
  • the device can, in one example, be used as the display unit in any of the examples of the display apparatus disclosed in co-pending application GB 1722249.8 filed on 29th December 2017, the contents of which are hereby incorporated by reference in their entirety, or with the device disclosed in co-pending UK patent application GB 1805278.7 entitled “Display Apparatus” filed on the same date as this application, the contents of which are hereby incorporated by reference in their entirety.
  • Specific embodiments will now be described with reference to the drawings.
  • Figure 1 is a schematic diagram illustrating a handheld device 1.
  • the device 1 can comprise any handheld device, such as a mobile telephone.
  • the motion directions that the device 1 can experience as a method of entering a command for one or more independent display regions of a device are illustrated by the arrows in figure 1.
  • the motion can be a back and forth linear or reciprocating motion in any of three directions along three axes x, y and z in three dimensions as shown by the intersecting arrows.
  • the motion can be non-linear, curved or rotational as shown by the curved arrows A and B.
  • the rotational motion can be rotation about any of the x, y or z axes.
  • Figure 2 is a schematic diagram illustrating a mobile device 2.
  • the device 2 comprises a display screen 3 capable of displaying information.
  • the display screen 3 is displaying three types of information, namely typed text 11, handwritten text 41 and an image 31 in three different display regions.
  • although different information is illustrated as being displayed in the different regions, the invention applies also to the display of the same information type or a mixture of information types in the same or different display regions.
  • the displayed information can for example be a document or form that has regions to be completed or filled in, e.g. an address region and a name region in a standard form letter, or fields in a form.
  • the information displayed in a display region can comprise any of text, images, or videos.
  • the information can be displayed by any application executing on the apparatus, such as a web browser, email application, word processor, spreadsheet, image editor etc.
  • the handwritten text 41 can be input using a pen device 50.
  • the display screen 3 can be sensitive to the proximity of the pen device 50 to detect the coordinates of the pen device 50 to track its location and hence to generate the input handwritten text 41.
  • the display screen 3 can be touch sensitive to allow for input of information and commands to the device 2, e.g. by the display of touch sensitive options and/or a keyboard.
  • a motion of the device 2 can cause execution of a control operation to modify any of the displayed information in any of the display regions, such as by erasing all of the displayed information or selective parts, such as one or more of the typed text 11, the handwritten text 41 or the image 31, or by entering predefined information, e.g. entering text or image data, into a display region.
  • Figure 3 illustrates an alternative display screen 20 of a device. In this example, there are three different display regions that are clearly defined by boundary markings, namely a typed text display region 10 with displayed typed text 11, an image display region 30 displaying an image 31, and a handwritten text display region 40 displaying handwritten text 41, created using a pen 50.
  • although figures 2 and 3 illustrate motion control of displayed information in the form of text or images, the motion control can be applied to any displayed information.
  • a virtual shopping cart could be moved, filled or emptied and items could be checked out.
  • Each control operation can have an associated control motion that the device is taught in a learning phase.
  • Some control options can be prestored and do not require teaching.
  • the motion of the device of this example can be a back and forth linear or reciprocating motion in any of three directions along three axes x, y and z in three dimensions as shown by the intersecting arrows of figure 1. Also, or alternatively, the motion can be non-linear, curved or rotational as shown by the curved arrows in figure 1.
  • a motion of the device can erase all or some of the displayed information in one or more of the display regions 10, 30 and 40.
  • the one or more displayed regions and the operation performed on the information in the displayed region are defined as a command specific to a motion of the device.
  • the device stores a set of motion patterns with corresponding display region identifiers and information operations so that a specific motion type will cause a specific operation on information displayed in only one display region, only a subset of the display regions or all of the display regions.
  • the motion detecting arrangement of the device can detect any one of a plurality of types of patterns of motion in any dimension or direction. Examples of different patterns of motion are: shaking back and forth, shaking side to side, shaking up and down, turning side to side (tilting side to side), turning up and down (tilting up and down) and rotating. Each of these represents a lateral (shaking) or rotational movement along or about one of the three axes illustrated in figure 1. It is possible to detect two types of each of these motion types, based on the starting motion. For example, back and forth could be back and forth or forth and back, side to side could be left to right or right to left, rotating could be clockwise or anticlockwise etc.
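  • purely as a toy illustration of distinguishing such basic patterns (real recognition would match against the learnt motion data), a classifier might look at the dominant axis of a window of accelerometer samples and at the sign of the first strong sample to separate the two variants of each motion type; the axis-to-name mapping is an assumption:

```python
def classify_shake(samples):
    """Crudely label a shake by dominant axis and starting direction.

    samples: list of (x, y, z) acceleration tuples. A stand-in for matching
    against learnt motion patterns, not the patent's method.
    """
    # the axis carrying the most energy decides the shake type
    energy = [sum(s[axis] ** 2 for s in samples) for axis in range(3)]
    axis = energy.index(max(energy))
    names = ["side to side", "back and forth", "up and down"]
    # the sign of the first strong sample splits e.g. the left-first and
    # right-first variants of the same shake
    first = next((s[axis] for s in samples if abs(s[axis]) > 0.5),
                 samples[0][axis])
    start = "positive first" if first >= 0 else "negative first"
    return f"shaking {names[axis]} ({start})"
```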
  • the vigour, e.g. the speed or frequency of the shaking or reciprocation, can also distinguish motion types: the same spatial displacements of the display apparatus may correspond to different motion types depending upon the speed, acceleration and frequency with which the displacement is executed.
  • complex motion types can comprise complex patterns of motion comprising a combination of basic motion types, e.g. a lateral motion and a rotational motion, or combinations of reciprocal lateral and rotational motions.
  • a memory of the device can store a set of operations to be implemented on identified one or more display regions by the processor for each type of motion detected by the motion detecting arrangement and identified by the processor from the signal from the motion detecting arrangement. These operations can be predefined or downloaded from a data source and they can be user defined or supplemented. The user definition or supplementing can take place during a learning phase.
  • Figure 4 is a schematic diagram illustrating the electronic components of an example device.
  • a processor 100 is connected to static or non-volatile memory 102 storing code used by the processor 100, such as operating system code and for the storage of data and information for display in a non-volatile manner.
  • Volatile memory 103 is provided for the storage of application code for implementation by the processor 100 and data and information for display.
  • the volatile memory can store code (e.g. a code module) comprising code for identifying motion patterns detected by the motion sensor 104, and code (e.g. a code module) for controlling a display screen 105 in response, so as to perform operations on information displayed in independent display regions.
  • the memory can also store data mapping operations on display regions and identifiers for the display regions against motion types, prestored or learnt during a learning phase.
  • the display screen 105 is connected to the processor 100 for the display of information.
  • the display screen 105 can comprise a touch sensitive display to provide an input arrangement.
  • a pen 50 shown in figure 2 can be used with the display screen 105 to allow for the input of freehand text or drawings.
  • a camera 106 can also optionally be connected to the processor 100 to capture images. The camera 106 can also be used as an input arrangement for the input of information and commands using gesture recognition implemented by the processor on the images captured by the camera 106.
  • a network connector 101 is connected to the processor 100 for communication with a communications or computer network. This communication link can pass data, information and controls between the device and other devices or computers over the network.
  • the network connector 101 can be connected to a communications or computer network using a wired or wireless connection.
  • a wired connection could be an ethernet connection to a LAN or a telephone connection, such that the network connector 101 acts as a modem or router.
  • a wireless connection could be WIFI, WiMAX, CDMA, GSM, GPRS, wireless local loop or WAN.
  • Bluetooth™ may be used.
  • a motion sensor 104 is connected to the processor 100 to detect motion of the device.
  • the motion sensor 104 can comprise any conventional motion sensor, such as an accelerometer.
  • the motion sensor 104 is capable of detecting motion of the device in any direction i.e. it is a multi-axis motion sensor to detect motion along any of three orthogonal axes in three dimensions.
  • the device may have a display screen on both sides, so that it can be turned over to view information on both sides.
  • the control operation could apply to both sides or to just one side. This, in one example, can be selectively set.
  • the or a display or displays can also be detached from a mount frame.
  • the device can comprise a docking part or mount to which a display screen is mounted in a detachable or demountable manner so that the display screen can be used separately from the mount.
  • the mount and display screen can communicate between each other using a wireless link so that the mount can contain some components of the display apparatus, such as the processor and storage, while the display screen can contain only those components necessary to operate the display screen, detect motion of the display screen, and communicate with the processor.
  • more than one display screen can be provided detachable from a mount, with each display screen operating independently to provide the erase operation by motion detection using a shared processor in the mount.
  • the device can include one or more loudspeakers for audio output and one or more microphones for audio input. Also, the device can connect to a peripheral device such as a keyboard and/or pointing device by a wired or wireless connection to allow for input of information and commands to the device.
  • Figure 5 is a flow diagram illustrating a learning phase of the device.
  • in step S1 the device enters the learning phase. This can be automatic on start up of the device or as a result of some other operation of the device, or it can be as a result of a user selection, i.e. a user input, such as using an input arrangement of the device.
  • in step S2, a display of the device displays a menu of defined operations to be performed on information displayed in one or more identified display regions that a user can choose to execute using motion as a user input.
  • the displayed menu can simply be a list, or a drop-down menu for example. The list could be organized into categories of types of operations.
  • the menu can also display a list of display regions that the operations can be applied to and a user can select one or more display regions.
  • the display can display the regions in a selectable manner so that a user can select one or more regions to which the selected operation is to be applied using a pointer device or a touch input for example to select the region or regions of the display.
  • in step S3 a user selection of an operation menu item is received and then in step S4, the device waits for the motion detector arrangement, e.g. an accelerometer, to detect motion of the device. The motion is recorded over a short period of time appropriate to capture the motion pattern, e.g. 1-2 seconds.
  • the process loops back to require the user to repeat the motion input a number (N) of times, e.g. anything from 3 to 10 times. More iterations improve the accuracy of the matching process but reduce the usability of the device: a user will lose patience with a requirement to repeat the motion pattern too many times.
  • in step S6 the recorded motion patterns for the repeated recordings are averaged and the average is used in step S7 for storage as part of the information on the motion type in the stored motion type control data.
  • Data indicating the degree of deviation between the recorded motion patterns can also be stored as part of the motion type data to assist with the matching process during the operational phase i.e. to assist with assessing whether a match lies within an expected deviation of a stored average motion pattern.
  • An identifier identifying the corresponding control operation is also stored as part of the stored motion type control data.
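  • steps S3 to S7 might be glued together as in the sketch below, reusing the MotionType record and average_pattern helper sketched earlier; record_motion() is a hypothetical stand-in for the motion detector arrangement returning one 1-2 second recording:

```python
def learn_motion_type(operation, region_ids, record_motion, repeats=5):
    """Learning phase sketch, steps S3-S7.

    Records the user's chosen motion `repeats` times (steps S4-S5),
    averages the recordings (step S6) and stores the result with the
    selected operation and display regions (step S7).
    """
    recordings = [record_motion() for _ in range(repeats)]   # S4-S5
    reference, max_deviation = average_pattern(recordings)   # S6
    motion_id = f"motion_for_{operation}"                    # illustrative key
    motion_type_control_data[motion_id] = MotionType(        # S7
        motion_id, reference, max_deviation, operation, list(region_ids)
    )
    return motion_id
```

A call might look like learn_motion_type("erase", ["region_1"], record_motion), with the recorder supplied by the device.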
  • the process will then return to step S2 to allow the user to select another operation from the menu. If the user has finished selecting operations, the user can select to exit the learning phase.
  • Figure 6 is a flow diagram illustrating an operational phase of the device following the learning phase.
  • in step S10 the device enters the operational phase and in step S11 motion of the device is detected.
  • in step S12 the pattern of motion is compared with the stored patterns for the motion types in the motion type control data to determine if the motion matches a motion type. If no match is found (step S13), the process returns to step S11 to detect further motion. If a match is found with motion data for a motion type in step S13, in step S14 a corresponding operation on information displayed in one or more identified display regions is identified in the motion type control data and in step S15 the operation on information displayed in one or more identified display regions is executed. The process then returns to step S11 to detect the next motion of the device for further operations on information displayed in one or more identified display regions.
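  • a matching sketch of the operational phase of figure 6, again reusing the earlier helpers; execute_operation is a hypothetical hook into the display controller:

```python
def match_motion_type(pattern, control_data, length=64):
    """Steps S12-S13: best-matching learnt motion type, or None."""
    best, best_distance = None, float("inf")
    aligned = resample(pattern, length)
    for motion_type in control_data.values():
        d = pattern_distance(aligned, motion_type.reference_pattern)
        # only accept matches within the deviation recorded while learning
        if d <= motion_type.max_deviation and d < best_distance:
            best, best_distance = motion_type, d
    return best

def on_motion_detected(pattern, execute_operation):
    """Steps S14-S15: run the operation tied to a recognized motion."""
    motion_type = match_motion_type(pattern, motion_type_control_data)
    if motion_type is not None:
        execute_operation(motion_type.operation, motion_type.region_ids)
```

Note how the deviation stored during learning serves directly as the match tolerance here.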
  • an average of motion data for a number of recorded motion operations is taken.
  • the invention is not restricted to taking an average and only one motion detection may be used, or a number may be recorded and they can be stored individually for individual matching with a motion pattern during the operation phase.
  • a user is presented with a menu to select an operation to teach the device the motion type for that control operation.
  • the user may also be able to define their own operation to be performed on information displayed in one or more identified display regions using the user input arrangement.
  • the device can include other input arrangements, such as an audio sensor for audio input, a keyboard, a pointer device (e.g. a mouse), or a touch screen, to enable a user to input controls and information in conjunction with the motion detection control input method.
  • a camera can be provided to allow image capture to enable gesture input as a method of control or information input.
  • the device can include an infrared sensor, e.g. a passive infrared (PIR) sensor.
  • the information displayed in a display region can comprise video (moving images, television, 3D video, panoramic views, etc).
  • the device can include a remote control for controlling the device.
  • the remote control can be motion responsive for motion control of the device.
  • a camera in addition to or alternatively to the motion control input, can be provided in the device to capture video in a direction facing the user so that the user can use gestures to input a control in dependence upon the type of identified gesture.
  • the gesture control can comprise the recognition of movements of a user’s body parts i.e. hands, facial features or the overall body position. Pattern recognition techniques can be used to recognize the type of gestures and to identify them. Example gesture types can include sweeping of a hand to the left or right, or up or down, or a waving of a hand.
  • the gesture control erase operation can be used, for example, where the user is unable to pick up the device, e.g. they have wet or dirty hands.
  • the device can be controlled to be responsive to the detected gesture type and to identify a comparable or associated motion type that inputs the same control and to control a display screen to cause the display screen to mimic the motion equivalent command to the gesture command. For example, if the motion command is a left to right rotation, when an equivalent or associated gesture command e.g. waving the hand from left to right, is identified, the display screen can cause the display to rotate from left to right to act as a confirmation back to the user that the gesture command has been recognized as the motion command.
  • the equivalent gesture type can be automatically determined by the processor from a corresponding motion type and hence the motion type data can include corresponding gesture type data for the same operation.
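  • a correspondence table between motion types and gesture types might then be as simple as the sketch below; the names and pairings are assumptions for illustration, not taken from the patent:

```python
# hypothetical pairing of each learnt motion type with the gesture type
# that inputs the same control, so either detector triggers the operation
EQUIVALENT_GESTURE = {
    "rotate_left_to_right": "hand_wave_left_to_right",
    "shake_up_down": "hand_sweep_up_down",
}

def motion_for_gesture(gesture_id):
    """Resolve a detected gesture to its equivalent motion type, whose
    operation is executed and whose motion the display can mimic as a
    confirmation back to the user."""
    gesture_to_motion = {g: m for m, g in EQUIVALENT_GESTURE.items()}
    return gesture_to_motion.get(gesture_id)
```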
  • the use of a gesture that matches the motion command in general movement means that the gesture will be intuitive for the user, e.g. a rotation of the device can be equated to a rotation of the hands of a user.
  • This aspect of the invention can be applied to the input of any type of command to the device, including the execution of an application, the taking of a picture using a camera of the device or any other operation changing the state of the device.
  • This aspect can use the hardware of figure 4 and the method of the flow diagrams of figures 5 and 6 for the learning and execution of control operations.
  • This aspect is not limited to the control of display operations in display regions and instead can be applied to the input of any control operations to the device.
  • the motion types and control operations can be learned for individual users. For example, a user could enter their name or sign in to use the display apparatus and the learnt motion types and operations are then stored for that user for use when the user enters their name or signs in.
  • the invention provides gesture control to supplement motion control.
  • the device can comprise a memory to store control data comprising identifications of motion types, corresponding control operations, and corresponding gesture type data; a motion detecting arrangement to detect motion of the device; a gesture detecting arrangement to detect gestures; a user input arrangement to receive a user input identifying control operations during a learning phase; and a processor programmed in the learning phase when a user input is received identifying a control operation, to identify a motion type from motion detected by the motion detecting arrangement, to store the received user input in correspondence with data identifying the motion type in the control data, and to determine corresponding gesture type data, and subsequent to the learning phase, when the motion detecting arrangement detects motion, to use the control data to identify a motion type and execute the corresponding control operation, or when the gesture detecting arrangement detects a gesture, to use the control data to identify a gesture type and execute the corresponding control operation.
  • the device can be mobile or fixed/attached.
  • a camera can be provided in the device to capture video in a direction facing the user so that the user can use gestures as a control input in dependence upon the type of identified gesture.
  • the gesture control can comprise the recognition of movements of a user’s body parts i.e. hands, facial features or the overall body position or the recognition of an object moved by the user, such as a stick, ruler or something worn by the user.
  • Pattern recognition techniques can be used to recognize the type of gestures and to identify them.
  • Example gesture types can include sweeping of a hand to the left or right, or up or down, or a waving of a hand. The gesture matches the motion of the device that was learnt in the learning phase e.g. a rotation of the device can be mimicked by a rotation of the user’s hand.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • User Interface Of Digital Computer (AREA)
  • Position Input By Displaying (AREA)

Abstract

A device comprises a display to display information in a plurality of independent display regions, each display region being independently identifiable; a memory to store motion type control data comprising identifications of motion types and corresponding operations to be performed on information displayed in one or more identified display regions; a motion detecting arrangement to detect motion of the device; a user input arrangement to receive a user input identifying operations to be performed on information displayed in one or more identified display regions during a learning phase; and a processor programmed, in the learning phase when a user input is received identifying an operation to be performed on information displayed in one or more identified display regions, to identify a motion type from motion detected by the motion detecting arrangement and to store the received user input in correspondence with data identifying the motion type in the motion type control data, and, subsequent to the learning phase when the motion detecting arrangement detects motion, to use the motion type control data to identify a motion type and execute the corresponding operation on information displayed in one or more identified display regions.

Description

DEVICE OPERATION CONTROL
FIELD OF THE INVENTION
[0001] The present invention relates to a device and a method of controlling the device using detected motion types.
BACKGROUND INFORMATION
[0002] Portable and moveable devices, such as mobile telephones, tablet computer devices and laptops, are widely used. The challenge for the designers of portable devices is to design an apparatus that is light, battery efficient and convenient to use. One facet of ease of use is the ease with which a user can input control commands.
SUMMARY OF THE INVENTION
[0003] The present invention provides a device comprising a display to display information in a plurality of independent display regions, each display region being independently identifiable, a memory to store motion type control data comprising identifications of motion types and corresponding operations to be performed on information displayed in one or more identified display regions; a motion detecting arrangement to detect motion of the device; a user input arrangement to receive a user input identifying operations to be performed on information displayed in one or more identified display regions during a learning phase; and a processor programmed, in the learning phase when a user input is received identifying an operation to be performed on information displayed in one or more identified display regions, to identify a motion type from motion detected by the motion detecting arrangement and to store the received user input in correspondence with data identifying the motion type in the motion type control data, and, subsequent to the learning phase when the motion detecting arrangement detects motion, to use the motion type control data to identify a motion type and execute the corresponding operation on information displayed in one or more identified display regions.
[0004] The present invention also provides a method of controlling the display of information in a plurality of independent display regions on a device having a motion detector arrangement to detect motion of the device, each display region being independently identifiable, the method comprising, during a learning phase, receiving a user input identifying an operation to be performed on information displayed in one or more identified display regions; detecting motion of the device using the motion detector arrangement; identifying a motion type from the motion detected by the motion detecting arrangement; storing the received user input in correspondence with data identifying the motion type as motion type control data; and repeating the receiving, detecting, identifying and storing steps; and in an operation phase detecting motion of the device using the motion detector arrangement; and using the motion type control data to identify a motion type and execute the corresponding operation on information displayed in one or more identified display regions.
[0005] The present invention also provides a device comprising a memory to store control data comprising identifications of motion types, corresponding control operations, and corresponding gesture type data; a motion detecting arrangement to detect motion of the device; a gesture detecting arrangement to detect gestures; a user input arrangement to receive a user input identifying control operations during a learning phase; and a processor programmed in the learning phase when a user input is received identifying a control operation, to identify a motion type from motion detected by the motion detecting arrangement, to store the received user input in correspondence with data identifying the motion type in the control data, and to determine corresponding gesture type data, and subsequent to the learning phase, when the motion detecting arrangement detects motion, to use the control data to identify a motion type and execute the corresponding control operation, or when the gesture detecting arrangement detects a gesture, to use the control data to identify a gesture type and execute the corresponding control operation.
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] Figure 1 is a schematic diagram illustrating an example handheld device;
[0007] Figure 2 is a schematic diagram illustrating an alternative display of an example device;
[0008] Figure 3 is a schematic diagram illustrating an alternative display of an example device;
[0009] Figure 4 is a schematic diagram illustrating the electronic components of an example device;
[0010] Figure 5 is a flow diagram illustrating an example learning phase; and
[0011] Figure 6 is a flow diagram illustrating an example operational phase.
DETAILED DESCRIPTION
[0012] In the following detailed description, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration specific embodiments in which the inventive subject matter may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice them, and it is to be understood that other embodiments may be utilized, and that structural, logical, and electrical changes may be made without departing from the scope of the inventive subject matter. Such embodiments of the inventive subject matter may be referred to, individually and/or collectively, herein by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single invention or inventive concept if more than one is in fact disclosed.
[0013] The following description is, therefore, not to be taken in a limited sense, and the scope of the inventive subject matter is defined by the appended claims and their equivalents.
[0014] In the following embodiments, like components are labelled with like reference numerals.
[0015] In the following embodiments, the term data store or memory is intended to encompass any computer readable storage medium and/or device (or collection of data storage mediums and/or devices). Examples of data stores include, but are not limited to, optical disks (e.g., CD-ROM, DVD-ROM, etc.), magnetic disks (e.g., hard disks, floppy disks, etc.), memory circuits (e.g., solid state drives, random-access memory (RAM), etc.), and/or the like.
[0016] The functions or algorithms described herein are implemented in hardware, software or a combination of software and hardware in one embodiment. The software comprises computer executable instructions stored on computer readable carrier media such as memory or other type of storage devices. Further, described functions may correspond to modules, which may be software, hardware, firmware, or any combination thereof. Multiple functions are performed in one or more modules as desired, and the embodiments described are merely examples. The software is executed on a digital signal processor, ASIC, microprocessor, or other type of processor.
[0017] Some embodiments implement the functions in two or more specific interconnected hardware modules or devices with related control and data signals communicated between and through the modules, or as portions of an application- specific integrated circuit. Thus, the exemplary process flow is applicable to software, firmware, and hardware implementations.
[0018] A generalized embodiment provides a device comprising a display to display information in a plurality of independent display regions, each display region being independently identifiable, a memory to store motion type control data comprising identifications of motion types and corresponding operations to be performed on information displayed in one or more identified display regions; a motion detecting arrangement to detect motion of the device; a user input arrangement to receive a user input identifying operations to be performed on information displayed in one or more identified display regions during a learning phase; and a processor programmed, in the learning phase when a user input is received identifying an operation to be performed on information displayed in one or more identified display regions, to identify a motion type from motion detected by the motion detecting arrangement and to store the received user input in correspondence with data identifying the motion type in the motion type control data, and, subsequent to the learning phase when the motion detecting arrangement detects motion, to use the motion type control data to identify a motion type and execute the corresponding operation on information displayed in one or more identified display regions.
[0019] A user of the device is able to train the device on preferred motion types to be used as inputs to cause operations to be performed on information displayed in one or more identified display regions. In this way, a user is able to tailor the way the user interfaces with specific identified display regions of the device to select preferred motion types for certain operations to be performed on information displayed in one or more identified display regions. Identifications of motion types can comprise recorded motion data for each motion type for matching with detected motion data in order to identify a motion type from the motion data. In one embodiment not all of the motion types need to be learnt, some can be prestored.
[0020] The invention is suited to a portable device but is not limited to such. The device can be any device that can be moved in a reciprocal, rotational or shaking motion. Such a device is typically portable and wireless but could be simply moveable and connected by wires. The device could be part of another apparatus and removable therefrom in use to allow for the shaking, rotating or reciprocating motion to be effected by a user to control the device. The device can comprise a mobile telephone, a tablet computer, a wearable device, such as a smart watch or personal monitor device, a laptop, or a machine control device.
[0021] The operation to be performed on information displayed in one or more identified display regions can comprise any operation on the information in a display region, such as cutting, pasting, insertion or deletion of displayed information in one or more display regions of a display. The operation can be limited to an operation on one display region, a subset of the display regions or all of the display regions.
[0022] Types of motion that the user can use for identification and execution of control operations can comprise any of a rectilinear motion, a reciprocating motion, a rotational motion, a swaying motion, a see-sawing motion, a swinging motion or a rocking motion, for example. The displacement or amplitude of the ‘shake’, the frequency or rate of the ‘shake’ and/or the speed/velocity or vigour of the ‘shake’ can also be components of the type of motion of the device. The motion types can comprise a single motion type or a combination of motion types, e.g. a translational reciprocal motion and a rotational motion, and the motions can be in multiple directions. Motion thresholds can be set, whereby the motion type is only identified from motion data when the speed or amplitude of the motion is above a threshold. This avoids erroneous motion type identification and hence erroneous control operation execution. The motion can comprise a complex mix or aggregation of motions or motion types, e.g. a rotation simultaneously with a shake, or a sequence of motions, e.g. a shake followed by a rotation, or a rotation in one direction followed by a rotation in another direction. The sequence of motions can hence be a sequence of the same motion types or different motion types.
[0023] In order to reduce errors in the recognition of a motion type by the device, during the learning phase, after a user input has been received to identify an operation to be performed on information displayed in one or more identified display regions, the device may require the user to carry out the motion type more than once so that the motion can be detected more than once and the motion pattern recorded each time, so that some form of average of the desired motion pattern can be stored as the data identifying the motion type in the motion type control data. This smoothing or averaging of the recorded motion that the user wishes to be the motion type for an operation to be performed on information displayed in one or more identified display regions assists the processor in learning the deviations in the motion data that might be received in an operational phase when a user intends to cause an operation on information displayed in one or more identified display regions. Errors in the matching of the motion data from the motion detecting arrangement with the stored motion data in the motion type control data are thereby reduced.
[0024] In the learning phase, a user may input a sequence of user inputs representing a sequence of desired motion types to be used to execute corresponding operations. Hence, in the learning phase, a user input can be received followed by motion data, with each such pair input in series, to generate motion type control data for a plurality of control operations to be performed on information displayed in one or more identified display regions.
[0025] In the learning phase, user selections of one or more display regions can be received to identify the one or more display regions as part of the user input. The user selections of one or more display regions can be made using a pointer or touch input, or simply by selecting a display region from a list of display regions listed by identifiers.
[0026] In the learning phase, a menu can be displayed to the user to allow the user to generate the user input simply by selecting a displayed menu item and one or more display regions to which the operation is to be applied.
[0027] In an alternative example, in the learning phase, the device may have a set list of control operations and the user input comprises a selection to implement the list, to learn motion types for all of the listed operations to be performed on information displayed in one or more identified display regions. Thus, in the learning phase, the device simply requires the user to perform the required motion type for the device sequentially for each of the listed operations to be performed on information displayed in one or more identified display regions. The device can indicate to the user when each type of motion should be carried out for each control operation. This can be by a displayed message, an indicator light or a sound output, for example.
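As an illustration only, a minimal Python sketch of this preset-list learning flow follows; the operation names and the prompt_user() and record_motion() helpers are assumptions for the sketch, not part of the disclosed embodiment:

    # The device steps through a fixed list of operations, indicating to the
    # user when to carry out each motion (message, light or sound), and pairs
    # each recorded motion pattern with its operation.
    PRESET_OPERATIONS = ["erase region 10", "erase all regions", "paste into region 30"]

    def learn_preset_list(prompt_user, record_motion):
        control_data = {}
        for operation in PRESET_OPERATIONS:
            prompt_user("Perform the motion for: " + operation)  # indicate when to move
            control_data[operation] = record_motion()            # capture the motion pattern
        return control_data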
[0028] The motion types and control operations can be learned for individual users. For example, a user could enter their name or sign in to use the display apparatus and the learnt motion types and control operations are then stored for that user for use when the user enters their name or signs in.
[0029] The user may use each region of the display simultaneously. Further, the display screen regions may be positioned as required by the user and may be pre-set or set by the user. The user may use the motion control to manipulate a display region, e.g. point to choose and position or re-position a part of the display screen region, or data within the chosen display region, to relocate it.
[0030] In one embodiment, the user is able to select and control a display region and to open and adjust the display region, e.g. by zooming in on it, for example by tapping on a chosen display region to open up the display region to the size required or desired by the user. This may enable the user to fully open the display region, e.g. to cover the whole screen.

[0031] In one embodiment, the device comprises a flexible device with one or more displays, with one or more display regions where display data can be transitioned between one or more display regions, including appearing and transitioning on one or more sides of the device display. A device with displays on both sides may comprise an electronic document reader that can display and control the displayed data. For example, data can be displayed on the front side A of the device and then transitioned to side B, the reverse side of the device.
[0032] In one embodiment, the device comprises a flexible device and the data may be displayed on one or more folded or flexed regions of the flexible device and the user can use motion or gestures to cause the data to transition between display regions on the same display or another display if more than one is provided. The user input arrangement can comprise any conventional device input arrangement, such as a keyboard, pointing device (e.g. a mouse, track pad, or tracker ball), a touch sensitive display and/or a pen. A pen allows a user to input handwritten text and drawings.
[0033] The device can have a display that can comprise a fixed or flexible display or displays. The display or displays may be detachable. The displays can be of different types, e.g. one flexible and one rigid.
[0034] One aspect provides a carrier medium, such as a non-transitory storage medium storing code for execution by a processor of a machine to carry out the method, or a transient medium carrying processor executable code for execution by a processor of a machine to carry out the method. Embodiments can be implemented in programmable digital logic that implements computer code. The code can be supplied to the programmable logic, such as a processor or microprocessor, on a carrier medium. One such embodiment of a carrier medium is a transient medium, i.e. a signal, such as an electrical, electromagnetic, acoustic, magnetic, or optical signal. Another form of carrier medium is a non-transitory storage medium that stores the code, such as a solid-state memory, magnetic media (a hard disk drive), or optical media (a compact disc (CD) or digital versatile disc (DVD)).
[0035] The device can, in one example, be used as the display unit in any of the examples of the display apparatus disclosed in co-pending application GB 1722249.8 filed on 29th December 2017, the contents of which are hereby incorporated by reference in their entirety, or with the device disclosed in co-pending UK patent application GB 1805278.7 entitled “Display Apparatus” filed on the same date as this application, the contents of which are hereby incorporated by reference in their entirety.

[0036] Specific embodiments will now be described with reference to the drawings.
[0037] Figure 1 is a schematic diagram illustrating a handheld device 1. The device 1 can comprise any handheld device, such as a mobile telephone.
[0038] The motion directions that the device 1 can experience as a method of entering a command for one or more independent display regions of a device are illustrated by the arrows in figure 1. The motion can be a back and forth linear or reciprocating motion in any of three directions along three axes x, y and z in three dimensions as shown by the intersecting arrows. Also, or alternatively, the motion can be non-linear, curved or rotational as shown by the curved arrows A and B. The rotational motion can be rotation about any of the x, y or z axes.
[0039] Figure 2 is a schematic diagram illustrating a mobile device 2.
[0040] The device 2 comprises a display screen 3 capable of displaying information. In this example, the display screen 3 is displaying three types of information, namely typed text 11, handwritten text 41 and an image 31 in three different display regions. Although in this embodiment different information is illustrated as being displayed in the different regions, the invention applies also to the display of the same information type or a mixture of information types in the same or different display regions. The displayed information can for example be a document or form that has regions to be completed or filled in, e.g. an address region and a name region in a standard form letter, or fields in a form. The information displayed in a display region can comprise any of text, images, or videos. The information can be displayed by any application executing on the apparatus, such as a web browser, email application, word processor, spreadsheet, image editor etc. The handwritten text 41 can be input using a pen device 50. The display screen 3 can be sensitive to the proximity of the pen device 50 to detect the coordinates of the pen device 50 to track its location and hence to generate the input handwritten text 41. The display screen 3 can be touch sensitive to allow for input of information and commands to the device 2, e.g. by the display of touch sensitive options and/or a keyboard.
[0041] In the example of figure 2, a motion of the device 2 can cause execution of a control operation to modify any of the displayed information in any of the display regions, such as by erasing all of the displayed information or selective parts, such as one or more of the typed text 11, the handwritten text 41 or the image 31, or by entering predefined information, e.g. entering text or image data, into a display region.

[0042] Figure 3 illustrates an alternative display screen 20 of a device. In this example, there are three different display regions that are clearly defined by boundary markings, namely a typed text display region 10 with displayed typed text 11, an image display region 30 displaying an image 31, and a handwritten text display region 40 displaying handwritten text 41, created using a pen 50.
[0043] Although figures 2 and 3 illustrate motion control of displayed information in the form of text or images, the motion control can be applied to any displayed information. In a shopping application, for example, a virtual shopping cart could be moved, filled or emptied, and items could be checked out. Each control operation can have an associated control motion that the device is taught in a learning phase. Some control operations can be prestored and do not require teaching.
[0044] The motion of the device of this example can be a back and forth linear or reciprocating motion in any of three directions along three axes x, y and z in three dimensions as shown by the intersecting arrows of figure 1. Also, or alternatively, the motion can be non-linear, curved or rotational as shown by the curved arrows in figure 1.
[0045] In the example of figure 3, a motion of the device can erase all or some of the displayed information in one or more of the display regions 10, 30 and 40. The one or more display regions and the operation performed on the information in those display regions are defined as a command specific to a motion of the device. The device stores a set of motion patterns with corresponding display region identifiers and information operations so that a specific motion type will cause a specific operation on information displayed in only one display region, only a subset of the display regions or all of the display regions.
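For illustration, one possible (assumed) shape for such stored motion type control data is sketched below in Python; the field names are assumptions, not taken from the embodiment, and the region identifiers follow figure 3:

    # Each learnt motion pattern maps to the identifiers of the display regions
    # it targets and the operation to perform on the information displayed there.
    motion_type_control_data = [
        {"motion_pattern": [0.1, -0.4, 0.3],   # placeholder averaged sensor samples
         "region_ids": [40],                   # only the handwritten text region
         "operation": "erase"},
        {"motion_pattern": [0.8, 0.7, -0.9],
         "region_ids": [10, 30, 40],           # all of the display regions
         "operation": "erase"},
    ]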
[0046] Although different types of information are illustrated and described as being displayed in the different display regions in figure 3, the different display regions could display the same information type or any combination of information types. Also, a display region could display a mix of information types. The example of figure 3 is just for illustration.
[0047] The examples of figures 2 and 3 describe a control operation controlling the display of information in specific display regions according to recognized motion patterns or types.
[0048] A motion detecting arrangement of the device can detect any one of a plurality of types of patterns of motion in any dimension or direction. Examples of different patterns of motion are: shaking back and forth, shaking side to side, shaking up and down, turning side to side (tilting side to side), turning up and down (tilting up and down) and rotating. Each of these represents a lateral (shaking) or rotational movement along or about one of the three axes illustrated in figure 1. It is possible to detect two types of each of these motion types, based on the starting motion. For example, back and forth could be back and forth or forth and back, side to side could be left to right or right to left, rotating could be clockwise or anticlockwise, etc. Further, the vigour (e.g. speed or frequency of shaking or reciprocation) with which the motion is executed can be used as a distinguishing feature for different motion types or patterns. Hence, the same spatial displacements of the display apparatus may correspond to different motion types depending upon the speed, acceleration and frequency with which the displacement is executed. Further, complex motion types can comprise complex patterns of motion comprising a combination of basic motion types, e.g. a lateral motion and a rotational motion, or combinations of reciprocal lateral and rotational motions.
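As a hedged illustration of how such patterns might be distinguished, the following Python sketch reduces raw accelerometer samples to simple features (dominant axis, sense of the starting motion, and vigour); it assumes numpy and a fixed sample rate, and a real device may use richer pattern recognition:

    import numpy as np

    def motion_features(samples, rate_hz):
        a = np.asarray(samples, dtype=float)            # shape (n, 3): x, y, z acceleration
        axis = int(np.argmax(np.abs(a).mean(axis=0)))   # axis with the most movement
        start_sign = float(np.sign(a[0, axis]))         # back-and-forth vs forth-and-back
        spectrum = np.abs(np.fft.rfft(a[:, axis]))
        freqs = np.fft.rfftfreq(len(a), d=1.0 / rate_hz)
        shake_freq = float(freqs[1:][int(np.argmax(spectrum[1:]))])  # vigour: dominant frequency (DC skipped)
        amplitude = float(np.ptp(a[:, axis]))           # vigour: peak-to-peak displacement proxy
        return axis, start_sign, shake_freq, amplitude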
[0049] A memory of the device can store a set of operations to be implemented on identified one or more display regions by the processor for each type of motion detected by the motion detecting arrangement and identified by the processor from the signal from the motion detecting arrangement. These operations can be predefined or downloaded from a data source and they can be user defined or supplemented. The user definition or supplementing can take place during a learning phase.
[0050] Figure 4 is a schematic diagram illustrating the electronic components of an example device.
[0051] A processor 100 is connected to static or non-volatile memory 102 storing code used by the processor 100, such as operating system code, and for the storage of data and information for display in a non-volatile manner. Volatile memory 103 is provided for the storage of application code for implementation by the processor 100 and data and information for display. The volatile memory can store code (e.g. a code module) comprising code for identifying motion patterns detected by the motion sensor 104, and code (e.g. a code module) for controlling a display screen 105 in response to identified motion patterns, to perform operations on information displayed in independent display regions. The memory can also store data mapping operations on display regions and identifiers for the display regions against motion types, prestored or learnt during a learning phase.
[0052] The display screen 105 is connected to the processor 100 for the display of information. The display screen 105 can comprise a touch sensitive display to provide an input arrangement. A pen 50 (shown in figure 2) can be used with the display screen 105 to allow for the input of freehand text or drawings. A camera 106 can also optionally be connected to the processor 100 to capture images. The camera 106 can also be used as an input arrangement for the input of information and commands using gesture recognition implemented by the processor on the images captured by the camera 106.
[0053] A network connector 101 is connected to the processor 100 for communication with a communications or computer network. This communication link can pass data, information and controls between the device and other devices or computers over the network. The network connector 101 can be connected to a communications or computer network using a wired or wireless connection. For example, a wired connection could be an ethernet connection to a LAN or a telephone connection, such that the network connector 101 acts as a modem or router. A wireless connection could be WIFI, WiMAX, CDMA, GSM, GPRS, wireless local loop or WAN. For low range and low power wireless communications, Bluetooth™ may be used.
[0054] A motion sensor 104 is connected to the processor 100 to detect motion of the device. The motion sensor 104 can comprise any conventional motion sensor, such as an accelerometer. The motion sensor 104 is capable of detecting motion of the device in any direction i.e. it is a multi-axis motion sensor to detect motion along any of three orthogonal axes in three dimensions.
[0055] In any of the examples above, the device may have a display screen on both sides, so that it can be turned over to view information on both sides. The control operation could apply to both sides or to just one side. This, in one example, can be selectively set. The display or displays can also be detached from a mount frame.
[0056] In one or more embodiments, the device can comprise a docking part or mount to which a display screen is mounted in a detachable or demountable manner so that the display screen can be used separately from the mount. The mount and display screen can communicate with each other using a wireless link, so that the mount can contain some components of the display apparatus, such as the processor and storage, while the display screen can contain only those components necessary to operate the display screen, detect motion of the display screen, and communicate with the processor. In one embodiment, more than one display screen can be provided detachable from a mount, with each display screen operating independently to provide the erase operation by motion detection using a shared processor in the mount.
[0057] The device can include one or more loudspeakers for audio output and one or more microphones for audio input. Also, the device can connect to a peripheral device such as a keyboard and/or pointing device by a wired or wireless connection to allow for input of information and commands to the device.
[0058] A method of operating an example device will now be described with reference to figures 5 and 6.
[0059] Figure 5 is a flow diagram illustrating a learning phase of the device.
[0060] In step S1, the device enters the learning phase. This can be automatic on start-up of the device or as a result of some other operation of the device, or it can be as a result of a user selection, i.e. a user input, such as using an input arrangement of the device. In step S2, a display of the device displays a menu of defined operations to be performed on information displayed in one or more identified display regions that a user can choose to execute using motion as a user input. The displayed menu can simply be a list, or a drop-down menu, for example. The list could be organized into categories of types of operations. The menu can also display a list of display regions that the operations can be applied to, and a user can select one or more display regions. Alternatively, the display can display the regions in a selectable manner so that a user can select, using a pointer device or a touch input for example, one or more regions to which the selected operation is to be applied.
[0061] In step S3, a user selection of an operation menu item is received, and then in step S4 the device waits for the motion detector arrangement, e.g. an accelerometer, to detect motion of the device. The motion is recorded over a short period of time appropriate to capture the motion pattern, e.g. 1-2 seconds. In order to improve the accuracy of the matching process and to allow for variations in the pattern of motion by the user, in step S5 the process loops back to require the user to repeat the motion input a number (N) of times, e.g. anything from 3 to 10 times. More iterations improve the accuracy of the matching process but reduce the usability of the device: a user will lose patience with a requirement to repeat the motion pattern too many times.
[0062] At the end of the loop-back process, in step S6 the recorded motion patterns for the repeated recordings are averaged, and the average is used in step S7 for storage as part of the information on the motion type in the stored motion type control data. Data indicating the degree of deviation between the recorded motion patterns can also be stored as part of the motion type data to assist with the matching process during the operational phase, i.e. to assist with assessing whether a match lies within an expected deviation of a stored average motion pattern. An identifier identifying the corresponding control operation is also stored as part of the stored motion type control data.
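A minimal Python sketch of steps S6 and S7 follows, assuming the repeated recordings have been resampled to a common length; the names and the stored fields are assumptions for illustration:

    import numpy as np

    def build_motion_type_entry(recordings, operation, region_ids):
        # Stack the N repeated recordings, average them into a single stored
        # pattern (step S6), and keep a deviation figure as a matching
        # tolerance for the operational phase (step S7).
        patterns = np.stack([np.asarray(r, dtype=float) for r in recordings])
        return {
            "pattern": patterns.mean(axis=0),                 # averaged motion pattern
            "deviation": float(patterns.std(axis=0).mean()),  # expected spread across repeats
            "operation": operation,                           # identifier of the control operation
            "region_ids": region_ids,                         # display regions the operation applies to
        }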
[0063] The process will then return to step S2 to allow the user to select another operation from the menu. If the user has finished selecting operations, a user can select to exit the learning phase.
[0064] Figure 6 is a flow diagram illustrating an operational phase of the device following the learning phase.
[0065] In step S10, the device enters the operational phase, and in step S11 motion of the device is detected. In step S12, the pattern of motion is compared with the stored patterns for the motion types in the motion type control data to determine if the motion matches a motion type. If no match is found (step S13), the process returns to step S11 to detect further motion. If a match is found with the motion data for a motion type in step S13, in step S14 a corresponding operation on information displayed in one or more identified display regions is identified in the motion type control data, and in step S15 the operation on information displayed in one or more identified display regions is executed. The process then returns to step S11 to detect the next motion of the device for further operations on information displayed in one or more identified display regions.
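For illustration, a minimal Python sketch of this matching loop is given below; detect_motion() and execute() are assumed helpers, the tolerance factor is an assumption, and the entries take the shape sketched above for the learning phase:

    import numpy as np

    def operational_loop(control_data, detect_motion, execute, tolerance=3.0):
        while True:
            motion = np.asarray(detect_motion(), dtype=float)   # steps S11/S12: capture a motion
            for entry in control_data:
                distance = np.linalg.norm(motion - entry["pattern"])
                if distance <= tolerance * entry["deviation"] * np.sqrt(motion.size):
                    # step S13: match within the learnt deviation of the stored pattern
                    execute(entry["operation"], entry["region_ids"])  # steps S14/S15
                    break
            # no match: loop back to step S11 to detect further motion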
[0066] In figure 5, an average of motion data for a number of recorded motion operations is taken. However, the invention is not restricted to taking an average: only one motion detection may be used, or a number may be recorded and stored individually for individual matching with a motion pattern during the operational phase.
[0067] In figure 5 a user is presented with a menu to select an operation to teach the device the motion type for that control operation. The user may also be able to define their own operation to be performed on information displayed in one or more identified display regions using the user input arrangement.
[0068] In any of the examples above, the device can include other input arrangements, such as an audio sensor for audio input, a keyboard, a pointer device (e.g. a mouse), or a touch screen, to enable a user to input controls and information in conjunction with the motion detection control input method. Also, a camera can be provided to allow image capture to enable gesture input as a method of control or information input. The device can include an infrared sensor, e.g. a passive infrared (PIR) sensor.
[0069] The information displayed in a display region can comprise video (moving images, television, 3D video, panoramic views, etc.).

[0070] The device can include a remote control for controlling the device. The remote control can be motion responsive for motion control of the device.
[0071] In embodiments, in addition to or alternatively to the motion control input, a camera can be provided in the device to capture video in a direction facing the user so that the user can use gestures to input a control in dependence upon the type of identified gesture. The gesture control can comprise the recognition of movements of a user’s body parts i.e. hands, facial features or the overall body position. Pattern recognition techniques can be used to recognize the type of gestures and to identify them. Example gesture types can include sweeping of a hand to the left or right, or up or down, or a waving of a hand.
[0072] Where the gesture control is provided in conjunction with the motion control, the gesture-controlled erase operation can be used, for example, where the user is unable to pick up the device, e.g. they have wet or dirty hands. Further, the device can be controlled to be responsive to the detected gesture type, to identify a comparable or associated motion type that inputs the same control, and to control a display screen to cause the display screen to mimic the motion equivalent of the gesture command. For example, if the motion command is a left to right rotation, when an equivalent or associated gesture command, e.g. waving the hand from left to right, is identified, the display screen can cause the display to rotate from left to right to act as a confirmation back to the user that the gesture command has been recognized as the motion command. The equivalent gesture type can be automatically determined by the processor from a corresponding motion type, and hence the motion type data can include corresponding gesture type data for the same operation. The use of a gesture that matches the motion command in general movement means that the gesture will be intuitive for the user, e.g. a rotation of the device can be equated to rotation of the hands of a user.
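As an illustration of such motion-to-gesture equivalence, a minimal Python sketch follows; the table entries and the rotate_display() and execute_motion_command() helpers are assumptions for the sketch, not part of the disclosed embodiment:

    # Each learnt motion type is paired with the hand gesture that mimics it,
    # so a recognized gesture triggers the same control operation and the
    # display mimics the equivalent motion as confirmation to the user.
    MOTION_TO_GESTURE = {
        "rotate_left_to_right": "wave_hand_left_to_right",
        "shake_up_and_down": "sweep_hand_up_and_down",
    }

    def on_gesture(gesture_type, execute_motion_command, rotate_display):
        for motion_type, gesture in MOTION_TO_GESTURE.items():
            if gesture == gesture_type:
                rotate_display(motion_type)           # mimic the equivalent motion on screen
                execute_motion_command(motion_type)   # run the same control operation
                return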
[0073] This aspect of the invention can be applied to the input of any type of command to the device, including the execution of an application, the taking of a picture using a camera of the device or any other operation changing the state of the device. This aspect can use the hardware of figure 4 and the method of the flow diagrams of figures 5 and 6 for the learning and execution of control operations. This aspect is not limited to the control of display operations in display regions and instead can be applied to the input of any control operations to the device.
[0074] The motion types and control operations can be learned for individual users. For example, a user could enter their name or sign in to use the display apparatus and the learnt motion types and operations are then stored for that user for use when the user enters their name or signs in.
[0075] In one aspect, the invention provides gesture control to supplement motion control. In this aspect the device can comprise a memory to store control data comprising identifications of motion types, corresponding control operations, and corresponding gesture type data; a motion detecting arrangement to detect motion of the device; a gesture detecting arrangement to detect gestures; a user input arrangement to receive a user input identifying control operations during a learning phase; and a processor programmed in the learning phase when a user input is received identifying a control operation, to identify a motion type from motion detected by the motion detecting arrangement, to store the received user input in correspondence with data identifying the motion type in the control data, and to determine corresponding gesture type data, and subsequent to the learning phase, when the motion detecting arrangement detects motion, to use the control data to identify a motion type and execute the corresponding control operation, or when the gesture detecting arrangement detects a gesture, to use the control data to identify a gesture type and execute the corresponding control operation.
[0076] In this aspect, the device can be mobile or fixed/attached. A camera can be provided in the device to capture video in a direction facing the user so that the user can use gestures as a control input in dependence upon the type of identified gesture. The gesture control can comprise the recognition of movements of a user’s body parts i.e. hands, facial features or the overall body position or the recognition of an object moved by the user, such as a stick, ruler or something worn by the user. Pattern recognition techniques can be used to recognize the type of gestures and to identify them. Example gesture types can include sweeping of a hand to the left or right, or up or down, or a waving of a hand. The gesture matches the motion of the device that was learnt in the learning phase e.g. a rotation of the device can be mimicked by a rotation of the user’s hand.
[0077] It will be readily understood to those skilled in the art that various other changes in the details, material, and arrangements of the parts and method stages which have been described and illustrated in order to explain the nature of the inventive subject matter may be made without departing from the principles and scope of the inventive subject matter as expressed in the subjoined claims.

Claims

1. A device comprising:
a display to display information in a plurality of independent display regions, each display region being independently identifiable;
a memory to store motion type control data comprising identifications of motion types and corresponding operations to be performed on information displayed in one or more identified display regions;
a motion detecting arrangement to detect motion of the device;
a user input arrangement to receive a user input identifying operations to be performed on information displayed in one or more identified display regions during a learning phase; and
a processor programmed, in the learning phase when a user input is received identifying an operation to be performed on information displayed in one or more identified display regions, to identify a motion type from motion detected by the motion detecting arrangement and to store the received user input in correspondence with data identifying the motion type in the motion type control data, and, subsequent to the learning phase when the motion detecting arrangement detects motion, to use the motion type control data to identify a motion type and execute the corresponding operation on information displayed in one or more identified display regions.
2. A device according to claim 1, wherein the processor is programmed, in the learning phase when a user input is received identifying an operation to be performed on information displayed in one or more identified display regions, to identify a said motion type by averaging the detected motion received from the motion detecting arrangement a plurality of times and to store the received user input in correspondence with data identifying the motion type in the motion type control data.
3. A device according to claim 1 or claim 2, wherein the processor is programmed to, in the learning phase, receive a plurality of user inputs and to identify a said motion type from motion detected by the motion detecting arrangement for each user input and to store the received user inputs in correspondence with data identifying the respective motion type in the motion type control data.
4. A device according to any one of claims 1 to 3, wherein the processor is programmed to identify a plurality of motion types comprising a combination of motions in multiple directions.
5. A device according to claim 4, wherein the directions comprise at least one of lateral and/or rotational directions.
6. A device according to any one of claims 1 to 5, wherein the processor is programmed, in the learning phase, to receive user selections of one or more display regions to identify the one or more display regions as part of the user input.
7. A device according to any one of claims 1 to 5, wherein the processor is programmed in the learning phase to control the display to display a menu of operations to be performed on information displayed in one or more identified display regions and the user input arrangement is adapted to receive a user selection of one of the operations to be performed on information displayed in one or more identified display regions in the menu and a selection of one or more of the display regions to which the operation applies as the user input.
8. A device according to any one of claims 1 to 5, wherein the processor is programmed in the learning phase to control the display to display a list of operations to be performed on information displayed in one or more identified display regions, the user input arrangement is adapted to receive a user selection to select the list as a list of user inputs identifying operations to be performed on information displayed in one or more identified display regions and a selection of one or more of the display regions to which the operations apply as the user input, and the processor is programmed to identify a said motion type from motion detected by the motion detecting arrangement for each user input and to store the user inputs in correspondence with data identifying the respective motion type in the motion type control data.
9. A device according to any preceding claim, wherein the processor is programmed to identify a plurality of patterns of motion in multiple directions, and to execute the corresponding operation on information displayed in one or more identified display regions.
10. A method of controlling the display of information in a plurality of independent display regions on a device having a motion detector arrangement to detect motion of the device, each display region being independently identifiable, the method comprising:
during a learning phase:
receiving a user input identifying an operation to be performed on information displayed in one or more identified display regions;
detecting motion of the device using the motion detector arrangement;
identifying a motion type from the motion detected by the motion detecting arrangement;
storing the received user input in correspondence with data identifying the motion type as motion type control data;
and repeating the receiving, detecting, identifying and storing steps; and
in an operation phase:
detecting motion of the device using the motion detector arrangement; and
using the motion type control data to identify a motion type and execute the corresponding operation on information displayed in one or more identified display regions.
11. A method according to claim 10, wherein in the learning phase, when a user input is received identifying an operation to be performed on information displayed in one or more identified display regions, a said motion type is identified by averaging the detected motion received from the motion detecting arrangement a plurality of times, and the received user input is stored in correspondence with data identifying the motion type in the motion type control data.
12. A method according to claim 10 or claim 11, wherein the motion types comprise a combination of motions in multiple directions.
13. A method according to claim 12, wherein the directions comprise at least one of lateral and/or rotational directions.
14. A method according to any one of claims 10 to 13, including, in the learning phase, receiving user selections of one or more display regions to identify the one or more display regions as part of the user input.
15. A method according to any one of claims 10 to 13, including, in the learning phase, displaying a menu of operations to be performed on information displayed in one or more identified display regions and the user input is received as a user selection of one of the operations to be performed on information displayed in one or more identified display regions in the menu and a selection of one or more of the display regions to which the operations apply as the user input.
16. A method according to any one of claims 10 to 13, including in the learning phase displaying a list of operations to be performed on information displayed in one or more identified display regions, wherein the user input is received as a user selection to select the list as a list of user inputs identifying operations to be performed on information displayed in one or more identified display regions and a selection of one or more of the display regions to which the operations apply as the user input, motion types are identified from the motion detected by the motion detecting arrangement for each user input and the user inputs are stored in correspondence with data identifying the respective motion type in the motion type control data.
17. A method according to any one of claims 10 to 16, wherein a plurality of patterns of motion in multiple directions are identified to execute the corresponding operation to be performed on information displayed in one or more identified display regions.
18. A carrier medium carrying processor implementable code for execution by a processor to implement the method of any one of claims 10 to 17.
19. A device comprising:
a memory to store control data comprising identifications of motion types, corresponding control operations, and corresponding gesture type data;
a motion detecting arrangement to detect motion of the device;
a gesture detecting arrangement to detect gestures;
a user input arrangement to receive a user input identifying control operations during a learning phase; and
a processor programmed: in the learning phase when a user input is received identifying a control operation, to identify a motion type from motion detected by the motion detecting arrangement, to store the received user input in correspondence with data identifying the motion type in the control data, and to determine corresponding gesture type data, and
subsequent to the learning phase, when the motion detecting arrangement detects motion, to use the control data to identify a motion type and execute the corresponding control operation, or when the gesture detecting arrangement detects a gesture, to use the control data to identify a gesture type and execute the corresponding control operation.
20. A device according to claim 19, wherein the gesture detecting arrangement comprises a camera.
EP19717551.6A 2018-03-29 2019-03-29 Device operation control Withdrawn EP3776160A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1805269.6A GB2572434A (en) 2018-03-29 2018-03-29 Device operation control
PCT/GB2019/050933 WO2019186203A1 (en) 2018-03-29 2019-03-29 Device operation control

Publications (1)

Publication Number Publication Date
EP3776160A1 true EP3776160A1 (en) 2021-02-17

Family

ID=62142364

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19717551.6A Withdrawn EP3776160A1 (en) 2018-03-29 2019-03-29 Device operation control

Country Status (8)

Country Link
EP (1) EP3776160A1 (en)
JP (2) JP2021519977A (en)
KR (1) KR20210002512A (en)
CN (1) CN112041804A (en)
GB (1) GB2572434A (en)
SG (1) SG11202009628UA (en)
WO (1) WO2019186203A1 (en)
ZA (1) ZA202100682B (en)

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6798429B2 (en) * 2001-03-29 2004-09-28 Intel Corporation Intuitive mobile device interface to virtual spaces
US20050212760A1 (en) * 2004-03-23 2005-09-29 Marvit David L Gesture based user interface supporting preexisting symbols
KR100819880B1 (en) * 2006-12-01 2008-04-07 삼성전자주식회사 Method for executing function in wireless terminal
US8391786B2 (en) * 2007-01-25 2013-03-05 Stephen Hodges Motion triggered data transfer
KR100912310B1 (en) * 2008-04-17 2009-08-14 엘지전자 주식회사 User interface controlling method by detecting user's gestures
KR20100096425A (en) * 2009-02-24 2010-09-02 삼성전자주식회사 Method for recognizing motion based on motion sensor and mobile terminal using the same
US9483085B2 (en) * 2011-06-01 2016-11-01 Blackberry Limited Portable electronic device including touch-sensitive display and method of controlling same
CN102420942A (en) * 2011-11-28 2012-04-18 康佳集团股份有限公司 Photograph device and photograph control method based on same
CN102722239A (en) * 2012-05-17 2012-10-10 上海冠勇信息科技有限公司 Non-contact control method of mobile device
US20140184495A1 (en) * 2012-12-31 2014-07-03 Joseph Patrick Quin Portable Device Input by Configurable Patterns of Motion
US9229529B2 (en) * 2013-12-19 2016-01-05 Sony Corporation Apparatus and control method based on motion
JP6324203B2 (en) * 2014-05-14 2018-05-16 キヤノン株式会社 Information processing apparatus, control method therefor, program, and recording medium
KR102275653B1 (en) * 2014-10-21 2021-07-09 삼성전자주식회사 Wearable device and method for transmitting contents
CN104639966A (en) * 2015-01-29 2015-05-20 小米科技有限责任公司 Method and device for remote control
US9996164B2 (en) * 2016-09-22 2018-06-12 Qualcomm Incorporated Systems and methods for recording custom gesture commands

Also Published As

Publication number Publication date
GB2572434A (en) 2019-10-02
ZA202100682B (en) 2023-10-25
GB201805269D0 (en) 2018-05-16
JP2024056764A (en) 2024-04-23
KR20210002512A (en) 2021-01-08
WO2019186203A1 (en) 2019-10-03
JP2021519977A (en) 2021-08-12
SG11202009628UA (en) 2020-10-29
CN112041804A (en) 2020-12-04

Similar Documents

Publication Publication Date Title
US20200257373A1 (en) Terminal and method for controlling the same based on spatial interaction
US20220129060A1 (en) Three-dimensional object tracking to augment display area
US20200201901A1 (en) Information search method and device and computer readable recording medium thereof
KR100790896B1 (en) Controlling method and apparatus for application using image pickup unit
EP2790089A1 (en) Portable device and method for providing non-contact interface
US10949668B2 (en) Electronic apparatus and method for controlling thereof
JP2016511471A (en) Method for controlling display of a plurality of objects by movement-related input to portable terminal and portable terminal
CN107407945A (en) From the system and method for screen locking capture images
US20130155108A1 (en) Augmented Reality User Interaction Methods, Computing Devices, And Articles Of Manufacture
US20150063785A1 (en) Method of overlappingly displaying visual object on video, storage medium, and electronic device
US9904444B2 (en) Method of providing user interface of device and device including the user interface
CN204945943U (en) For providing the remote control equipment of remote control signal for external display device
US10175780B2 (en) Behind-display user interface
WO2019186203A1 (en) Device operation control
CN110827412A (en) Method, apparatus and computer-readable storage medium for adapting a plane
WO2019186201A1 (en) Display apparatus
KR102138233B1 (en) Display control apparatus and method for controlling the same
JP2016015078A (en) Display control device, display control method, and program
KR101898162B1 (en) Apparatus and method of providing additional function and feedback to other apparatus by using information of multiple sensor
CN109753143A (en) A kind of method and apparatus optimizing cursor position

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20201015

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20210518