WO2016106659A1 - Method, apparatus, computer program product for executing gesture-based command on touch screen - Google Patents

Method, apparatus, computer program product for executing gesture-based command on touch screen

Info

Publication number
WO2016106659A1
WO2016106659A1 (PCT/CN2014/095844)
Authority
WO
WIPO (PCT)
Prior art keywords
entity
touch screen
gesture
command
predefined
Prior art date
Application number
PCT/CN2014/095844
Other languages
French (fr)
Inventor
Christian Kraft
Original Assignee
Nokia Technologies Oy
Navteq (Shanghai) Trading Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Nokia Technologies Oy and Navteq (Shanghai) Trading Co., Ltd.
Priority to PCT/CN2014/095844
Publication of WO2016106659A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method, an apparatus and corresponding computer programs for executing a gesture-based command are disclosed. Gesture-based commands are executed on a touch screen in the following steps: displaying on a touch screen selectable entities of files or computer programs represented by icons or hyperlinks, or a pre-selected sequence of alphanumeric data in a data file; activating an entity to invoke a gesture recognition mode by sensing at said entity on the touch screen the contact or proximity of at least one finger; recognizing a predefined gesture along the surface of the touch screen which corresponds to a command pertaining to said activated entity; identifying said gesture and executing the command on said entity.

Description

A METHOD, AN APPARATUS AND A COMPUTER PROGRAM PRODUCT FOR EXECUTING A GESTURE-BASED COMMAND ON A TOUCH SCREEN

FIELD OF THE INVENTION
The invention relates to improved user interfaces for touch screens. In particular, it relates to issuing commands and viewing the result of commands on a touch screen display.
BACKGROUND OF THE INVENTION
Issuing short-cut commands on a physical keyboard, when working with files containing alphanumeric information, is simple enough. Usually this is done by pressing and holding down the Control key followed by a letter for the specific command in question, e.g. Ctrl-S for save, Ctrl-F for find, etc. The command is executed and, if it is expected to return an item to be displayed, the result is visible on the computer display.
In any case, the real estate of the display may be continuously used to display the information in full. However, with touch screens, usually a virtual keyboard is used, i.e. a keyboard whose keys are displayed on the display itself. Such a virtual keyboard may occupy a significant part of the real estate of the display, up to 30-50 per cent of it.
On a touch screen display having a virtual keyboard, the problem is that there is much less room to display information. This problem is accentuated by the fact that touch screen displays are often found in relatively small devices, such as smartphones. For example, when searching for a word or expression in a text in a document, a web page, a contact list etc., the virtual keyboard needs to be brought up on the display, where it may take about half of the screen; on a small display this makes it difficult to see the search result. Also, when browsing through the document in search of further hits, a small display is easily obscured by search dialog boxes or other means for repeating the search.
Other examples of difficulties with the user interface of small touch screen devices include situations where commands need to be issued in various applications. Some features may be common through the standard drag-and-drop support in the applications, but such support is available mostly for move and delete commands using file manager capabilities. Mostly, the commands need to be entered one way or another using a virtual keyboard or a menu, which again occupies the real estate of the display.
SUMMARY OF THE INVENTION
Prior art solutions are directed to rather specific problems, using special-purpose software to achieve certain UI effects in one application. The various approaches presented do not fill the need for a universal and simple gesture-based command system that can be used in a similar way across many applications, that does not occupy the real estate of the display screen, that does not require any special hardware, and that can be implemented without re-designing the various software and applications the system is intended to work with.
The present invention suggests a new and intuitive way to issue commands, e.g. on a web page, in a contacts list, in images, in a text document etc. The invention relates to the idea of issuing and viewing the result of commands on a touch screen display without using a virtual keyboard or other pop-up boxes on the display. The invention is characterized by what is presented in the appended claims.
In some embodiments, command activation is done by first marking up, in a normal way, a text or alphanumeric string to be searched. The text is then long-tapped or double-tapped, or the user may hover his finger above the marked text, whereafter a gesture is performed on the display screen. The gestured letter “F” would then find the next occurrence of the marked text in the document, the letter “C” would copy the text, etc.
By “document” is herein meant any file with alphanumeric information content that can be accessed and edited, and where strings of alphanumeric information can be selected by the user, e.g. for copy, cut-and-paste, delete and search operations.
In some embodiments, the command activation is done by long-tapping, double-tapping or hovering over a symbol or icon of a file, a message file or other data file, whereafter a gesture is performed on the display screen. A letter “D” would then delete the file, for example. In the case of a message, a letter “F” would initiate a forward command of the message. It is to be noted that the same gesture may mean a different thing in a different context.
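As an illustration of this context dependence, the following TypeScript sketch models a per-context gesture command table. The context names, the command set and the console output are hypothetical stand-ins, not part of the claimed method.

```typescript
// Hypothetical sketch of a context-dependent gesture command table.
// Context names, command names and the wiring are illustrative assumptions.

type Context = "text" | "mail" | "app";
type Command = (entity: string) => void;

// The same gesture letter maps to different commands in different contexts.
const commandTables: Record<Context, Record<string, Command>> = {
  text: {
    F: (s) => console.log(`find next occurrence of "${s}"`),
    C: (s) => console.log(`copy "${s}" to clipboard`),
  },
  mail: {
    F: (msg) => console.log(`forward message ${msg}`),
    D: (msg) => console.log(`delete message ${msg}`),
  },
  app: {
    D: (app) => console.log(`uninstall application ${app}`),
  },
};

function executeGesture(context: Context, letter: string, entity: string): void {
  const command = commandTables[context][letter.toUpperCase()];
  if (command) {
    command(entity);
  } else {
    console.log(`no command bound to "${letter}" in context "${context}"`);
  }
}

// "F" means "find next" on marked text, but "forward" on a mail icon:
executeGesture("text", "F", "Ut enim ad minim veniam");
executeGesture("mail", "F", "Email 3");
```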
In some embodiments, the activation may be done by triple-tapping or by other means suitable for selecting an item on the screen out of a number of items for further processing. The inventive method is not restricted to any particular way of pointing at an item on a touch-sensitive screen. The recognition of a selected item may be made according to the normal standard the software product is using, and the subsequent gesture which follows on the screen will tell the software what command is in question.
The inventive concept brings considerable advantages. One advantage of the invention is that there is no need to bring up a virtual keyboard, menu or dialog box to issue a command, e.g. to perform a search on a specific webpage or when browsing a (text) document. In such a case, the user can, with simple touch and gesture operations, perform a further search without blocking half the screen with a virtual keyboard, and does not need to re-type the search phrase on e.g. a webpage after already having typed it into e.g. a Google or other search. Another important advantage is that there is no need to load multiple software applications onto a mobile device, thus saving device storage space and money.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 shows an exemplary embodiment of the present invention relating to text editing,
Fig. 2 shows an embodiment of the present invention where it is used to handle mail lists,
Fig. 3 shows an embodiment of the present invention where it is used to handle applications,
Fig. 4 shows an example of an apparatus capable of supporting the present invention,
Fig. 5 shows a typical system configuration capable of supporting the present invention,
Fig. 6 shows a block diagram of an example on how the present invention works.
DETAILED DESCRIPTION OF EMBODIMENTS
In Fig. 1 is shown an example embodiment of the present invention. The user has made a search with a search engine or a search function for the phrase “Ut enim ad minim veniam”, and found the phrase 11 (marked in bold) in a text 10. The user 12 activates the text by long pressing or double tapping on the phrase to invoke the gesture recognition mode of the device. If the gesture performed on the screen is an “N”, for example, in the context of Fig. 1 that would mean “find next”, which is at 13 (marked with italics for clarity). In this case the user can very easily do a further search for an identical or almost identical word or phrase on any web page or text file, without having to bring up the virtual keyboard that may take up half the screen or more.
Correspondingly, if the user searches e.g. for contacts, he may first find a person with a certain first or last name. By activating the found name and performing the correct gesture, he would then find other matches to that specific part of the name. Backward searching can be easily configured, e.g. by gesturing the letter “B” instead of the letter “N”. A command gesture forming a symbol or a letter may be continuous, i.e. the symbol or letter is written without lifting the finger from the screen. Alternatively, the device may be adapted to wait for and compile several gestures into one recognizable symbol.
The search could also be made less precise, searching for expressions “similar” to the one the user is looking for. A letter “S” instead of the letter “N” in Fig. 1, for example, could be configured to retrieve not only identical, but also similar words or expressions.
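A minimal sketch of the difference between the two search gestures, assuming the “similar” search is approximated by a case-insensitive match; a real implementation might instead use stemming or edit distance:

```typescript
// Hypothetical "N" (find next identical) and "S" (find similar) behaviours.
// The similarity rule (case-insensitive match) is an illustrative assumption.

function findNext(text: string, phrase: string, fromIndex: number): number {
  // Exact search for the next occurrence after the current hit.
  return text.indexOf(phrase, fromIndex + 1);
}

function findSimilar(text: string, phrase: string, fromIndex: number): number {
  // Looser search: ignore case, so "ut enim ..." also matches "Ut enim ...".
  return text.toLowerCase().indexOf(phrase.toLowerCase(), fromIndex + 1);
}

const doc = "Ut enim ad minim veniam ... ut enim ad minim veniam ...";
const first = doc.indexOf("Ut enim ad minim veniam");
console.log(findNext(doc, "Ut enim ad minim veniam", first));    // -1: no identical hit
console.log(findSimilar(doc, "Ut enim ad minim veniam", first)); // index of the lower-case hit
```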
For images or photos provided with metadata, e.g. about the object, location or the creator, the metadata item of interest can be activated in order to find other images or photos that match the metadata.
To avoid any ambiguity in the terms describing the physical interaction with the touch screen, a “gesture” means in this context a move of a finger or pen along the (horizontal) surface of a touch screen. In contrast, a long press, a tap or hovering is a movement or a posture in a generally perpendicular direction with regard to the touch screen. Similarly, “hover” refers to a way of combining self- and mutual-capacitive sensing in a touch screen display that emulates mouse-over-like functionality in a handheld device. Hovering may include magnification and assisted text selection. It thus allows further ways of interaction with touch screen devices.
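To make the distinction concrete, a hypothetical classifier might separate a surface gesture from a tap or long press by the path length travelled along the screen and by the press duration. The thresholds below are illustrative assumptions, not values from the disclosure.

```typescript
// Hypothetical classification of touch input into the categories used above:
// a "gesture" moves along the screen surface, while a tap or long press is
// essentially stationary. Thresholds (20 px, 500 ms) are assumptions.

interface TouchSample { x: number; y: number; t: number } // position (px), time (ms)

type TouchKind = "tap" | "long-press" | "gesture";

function classifyTouch(samples: TouchSample[]): TouchKind {
  const duration = samples[samples.length - 1].t - samples[0].t;
  // Total path length travelled along the surface.
  let path = 0;
  for (let i = 1; i < samples.length; i++) {
    path += Math.hypot(samples[i].x - samples[i - 1].x, samples[i].y - samples[i - 1].y);
  }
  if (path > 20) return "gesture";              // moved along the surface
  return duration > 500 ? "long-press" : "tap"; // stationary: split on duration
}

console.log(classifyTouch([{ x: 0, y: 0, t: 0 }, { x: 1, y: 0, t: 80 }]));     // "tap"
console.log(classifyTouch([{ x: 0, y: 0, t: 0 }, { x: 2, y: 1, t: 700 }]));    // "long-press"
console.log(classifyTouch([{ x: 0, y: 0, t: 0 }, { x: 40, y: 60, t: 300 }]));  // "gesture"
```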
In Figs. 2A and 2B is shown an example of an embodiment where the inventive solution is used to handle a mail list (Email 1-Email 6). Here, each message is represented by an icon 22. The messages are displayed on a touch screen display 21 of a mobile device 20. In Fig. 2A, the user 12 activates one message item 23 by a long press or a double or triple tap. The message icon 23 of the activated message then becomes highlighted or otherwise marked, as shown.
In Fig. 2B, a gesture forming the letter “F” is subsequently drawn on the touch screen 21. “F” stands in this context for “Forward”, which means a mail routine in the device 20 for forwarding message 23 is executed. In this context, the most popular commands to be configured as gesture commands would probably be Delete, Forward, Reply, Move and Save. The activated icon 23 may follow the gesture (as shown) or remain in place during the gesture on the touch screen 21.
In Figs. 3A and 3B is shown an example of an embodiment where the inventive solution is used to manage applications (App 1-App 6). Each application is represented by an icon 32 which is displayed on the touch screen display 31 of the mobile device 30. In Fig. 3A, the user 12 activates one application item 33 by a long press or a double or triple tap. The icon 33 of the activated application then becomes highlighted or otherwise marked, as shown.
In Fig. 3B, a gesture forming the letter “D” is subsequently drawn on the touch screen 31. “D” stands in this context for “Delete”, which means a file manager routine in the device 30 for deleting the application 33 is executed.
Fig. 4 illustrates an example apparatus capable of supporting at least some embodiments of the present invention. Illustrated is device 40, which may comprise, for example, a mobile communication device such as a smartphone. Comprised in device 40 is processor 41, which may comprise, for example, a single- or multi-core processor, wherein a single-core processor comprises one processing core and a multi-core processor comprises more than one processing core. Processor 41 may comprise a Qualcomm Snapdragon 800 processor, for example. Processor 41 may comprise more than one processor. A processing core may comprise, for example, a Cortex-A8 processing core designed by ARM Holdings or a Brisbane processing core produced by Advanced Micro Devices Corporation. Processor 41 may comprise at least one application-specific integrated circuit, ASIC. Processor 41 may comprise at least one field-programmable gate array, FPGA. Processor 41 may be means for performing method steps in device 40. Processor 41 may be configured, at least in part by computer instructions, to perform actions.
The device 40 may include a separate memory unit 42, which may comprise a random-access memory and/or permanent memory. Memory 42 may comprise at least one RAM chip. Memory 42 may comprise magnetic, optical and/or holographic memory, for example. Memory 42 is at least in part accessible to processor 41 and may at least partly be a storage of computer instructions that processor 41 is configured to execute. When computer instructions configured to cause processor 41 to perform certain actions are stored in memory 42, and device 40 overall is configured to run under the direction of processor 41 using computer instructions from memory 42, processor 41 and/or its at least one processing core may be considered to be configured to perform said certain actions.
The device 40 has a transceiver unit 46, which comprises a transmitter 43 and a receiver 44. Transmitter 43 and receiver 44 are configured to transmit and receive, respectively, information in accordance with at least one cellular or non-cellular standard. Transmitter 43 may comprise more than one transmitter. Receiver 44 may comprise more than one receiver. Transmitter 43 and/or receiver 44 may be configured to operate in accordance with global system for mobile communication, GSM, wideband code division multiple access, WCDMA, long term evolution, LTE, IS-95, wireless local area network, WLAN, Ethernet and/or worldwide interoperability for microwave access, WiMAX, standards, for example.
Device 40 also comprises a short-range radio communication transceiver 45. The transceiver 45 supports at least one short-range technology, such as Bluetooth, WLAN, Wi-Fi Direct, LTE D2D, Wibree or similar technologies.
The device 40 comprises a user interface. A user interface (UI) may comprise at least one of a touch screen display 48, a keyboard (not shown), a vibrator arranged to signal to a user by causing device 40 to vibrate, a speaker and a microphone. A user is able to operate device 40 via the UI, for example to accept incoming telephone calls, to originate telephone calls or video calls, to browse the Internet, to manage digital files stored in memory 42 or on a cloud accessible via transmitter 43 and receiver 44, or via NFC transceiver 45, and/or to play games.
The device 40 may also be arranged to accept a user identity module (not shown), such as a subscriber identity module (SIM) card installable in device 40. The device 40 may comprise further devices not illustrated in Fig. 4, such as at least one digital camera.
The processor 41 may be furnished with a transmitter arranged to output information from processor 41, via electrical leads internal to device 40, to other devices comprised in device 40. Such a transmitter may comprise a serial bus transmitter arranged to, for example, output information via at least one electrical lead to memory 42 for storage therein. Alternatively to a serial bus, the transmitter may comprise a parallel bus transmitter. Likewise processor 41 may comprise a receiver arranged to receive information in processor 41, via electrical leads internal to device 40, from other devices comprised in device 40. Such a receiver may comprise a serial bus receiver arranged, for example, to receive information via at least one electrical lead from receiver 44 for processing in processor 41. Alternatively to a serial bus, the receiver may comprise a parallel bus receiver.
Processor 41, memory 42, transmitter 43, receiver 44, the transceiver 45, touch screen 48 and/or any other modules or devices may be interconnected by electrical leads internal to device 40 in a multitude of different ways. For example, each of the aforementioned devices may be separately connected to a master bus internal to device 40, to allow for the devices to exchange information. However, as the skilled person will appreciate, this is only one example and depending on the embodiment various ways of interconnecting at least two of the aforementioned devices may be selected without departing from the scope of the present invention.
The technical implementation of the invention may be carried out in a number of ways. The solution may include both a device and a server, and the functionality can be split between them, or it can be done purely on the server or on the mobile device. A web page itself does not understand gesture commands, but the device may know this and translate the gesture into a command like "next page" or similar, which the webpage in turn may understand. In Fig. 5 is shown a typical configuration with a mobile device 50, a wireless network 51, an HTTP server (webserver) 52 and a host or content server 53. Any search or retrieval of information is done from the host 53, while the HTTP server 52 renders the webpages to be looked at in a browser in the mobile device 50.
At least some aspects of the invention can be implemented purely in software, with an off-the-shelf or slightly modified browser on the device, which can process and read a webpage, a document, a contact list or similar files. Many browsers know where in a document the originally entered search keywords reside, or the resulting search result page knows it. For instance, the originally searched word(s) may be highlighted with a different color. Thus the user makes a search, the results of which are displayed. The user chooses a specific word or phrase and gives a gesture command on the touch screen to look for the next hit in the same document, or to make some other operation with the selected alphanumeric string.
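A minimal browser-side sketch of such a "find next" operation, assuming the result page marks hits with a highlight class; the class name "hl" is a hypothetical stand-in for whatever markup the page actually uses.

```typescript
// Hypothetical "find next hit" using highlighted search terms already
// present in the rendered page. Only standard DOM APIs are used.

let currentHit = -1;

function findNextHit(): void {
  const hits = Array.from(document.querySelectorAll<HTMLElement>("span.hl"));
  if (hits.length === 0) return;
  currentHit = (currentHit + 1) % hits.length;          // wrap around at the end
  hits[currentHit].scrollIntoView({ block: "center" }); // bring the hit into view
}
```

A gesture recognizer could simply call findNextHit() whenever it recognizes the letter “N” while a highlighted phrase is activated.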
Script language injection, notably JavaScript injection, is a technique that allows one to alter a site's contents in order to manipulate parameters or cookies. One can also inject JavaScript into dynamic pages to cause the page to render differently, i.e. to add functionality. Here, the activation needed to invoke the gesture recognition mode may be enabled by a script language code injection that is retrieved from a remote server and then added to a webpage being rendered.
Javascript injection can be done at the content server or host as follows:
●the browser makes a search request to a search engine with a few keywords
●the search engine locates a webpage in its memory which matches the keywords. The webpage is not aware of the keywords
●the search engine modifies the webpage and injects Javascript to it to make activation of the keywords possible
●the search engine returns the modified webpage to the browser for rendering.
At an HTTP or web server, Javascript injection can be done as follows (a sketch of the injection step follows the list):
●the browser makes a search request to a search engine with a few keywords
●the search engine locates a URL to a web page that matches the keywords
●the search engine appends the keywords as additional query parameters and returns the URL to the browser
●the browser makes a request using the modified URL
●the HTTP server that hosts the webpage injects Javascript into the webpage with the keywords (which then may be activated) from the modified URL, and returns the result to the browser.
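The injection step common to both flows can be sketched as follows. The keyword-wrapping markup, the class name and the script URL are assumptions, and a production version would operate on the parsed DOM rather than on raw HTML:

```typescript
// Hypothetical server-side injection step: wrap each search keyword in an
// activatable span and append a script that enables the gesture mode.
// Markup, class name and script URL are illustrative assumptions.

function injectKeywordActivation(html: string, keywords: string[]): string {
  let page = html;
  for (const kw of keywords) {
    // Naive global replace; a real implementation would parse the DOM
    // and skip matches inside tags or scripts.
    page = page.split(kw).join(`<span class="activatable-kw">${kw}</span>`);
  }
  // Add the client-side code that turns the spans into gesture targets.
  return page.replace(
    "</body>",
    `<script src="https://example.com/gesture-activation.js"></script></body>`
  );
}

const modified = injectKeywordActivation(
  "<html><body>Ut enim ad minim veniam</body></html>",
  ["Ut enim ad minim veniam"]
);
console.log(modified);
```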
Referring to the example shown in Fig. 6, we assume certain keywords represent some special interests of the user based on his context or location. A search for such keywords may be initiated by a local application at 60. For example, if a user is travelling, such an application in his mobile device will know the current location, and may retrieve keywords from a server at 61 about the location itself, or from any relevant context. The user will then at 62 retrieve a webpage from an HTTP server to the browser in the mobile device. The HTTP server and the webpage need not be aware of any keywords. The browser may then at 63 execute JavaScript code to enable the retrieved keywords to be activated by the user. The keywords may become highlighted or otherwise marked. The user may then long press, double tap or hover on one of them to activate it at 64, whereafter a gesture command may be given to act on the activated keyword, e.g. to do a search for the next hit in the document; or, if it is a hyperlinked keyword, the user may want to follow the link to display further information.
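A hypothetical client-side counterpart of steps 63-64: the injected script arms a keyword on a long press and then applies the next recognized gesture letter to it. The span class matches the injection sketch above; the long-press timeout and the dispatch hook are assumptions.

```typescript
// Hypothetical client-side wiring: a long press on an injected keyword span
// arms the gesture mode; the next surface gesture is then interpreted as a
// command on that keyword. Standard DOM pointer events are used; the 600 ms
// threshold and the onGestureRecognized hook are assumptions.

let armedKeyword: HTMLElement | null = null;

document.querySelectorAll<HTMLElement>("span.activatable-kw").forEach((span) => {
  let pressTimer: number | undefined;
  span.addEventListener("pointerdown", () => {
    // A press held for ~600 ms counts as a long press and arms the keyword.
    pressTimer = window.setTimeout(() => {
      armedKeyword = span;
      span.classList.add("armed"); // visual feedback: highlighted
    }, 600);
  });
  span.addEventListener("pointerup", () => window.clearTimeout(pressTimer));
});

// Called by the gesture recognizer once it has produced a letter:
function onGestureRecognized(letter: string): void {
  if (!armedKeyword) return;
  if (letter === "N") {
    console.log(`find next hit for "${armedKeyword.textContent}"`);
  }
  armedKeyword.classList.remove("armed");
  armedKeyword = null;
}
```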
In some embodiments of the invention, the touch screen user interface is managed by the device, as is the detection of e.g. search terms within a webpage, document or contacts list.
At least some aspects of the invention can be implemented in the mobile device itself. For example, the device may have a dedicated "find function" that could search the entire device and/or the web. When the user sees the results, a further search for more hits in one of the found documents may be done by activating (selecting, marking or highlighting on the display) the document, and then by making a gesture command as described above. The device software may be aware of all existing major search engines and how they work, which enables the device to adapt to the browser used when the user is e.g. typing the initial search term.
It is to be understood that the embodiments of the invention disclosed are not limited to the particular structures, process steps or components disclosed herein, but are extended to equivalents thereof as would be recognized by those ordinarily skilled in the relevant arts. It should also be understood that terminology employed herein is used for the purpose of describing particular embodiments only and is not intended to be limiting.
Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment.
As used herein, a plurality of items, actions and/or components may be presented in a common list for convenience. However, these lists should be construed as though each member of the list is individually identified as a separate and unique member. Thus, no individual member of such a list should be construed as a de facto equivalent of any other member of the same list solely based on their presentation in a common group without indications to the contrary. In addition, various embodiments and examples of the present invention may be referred to herein along with alternatives for the various components thereof. It is understood that such embodiments, examples, and alternatives are not to be construed as de facto equivalents of one another, but are to be considered as separate and autonomous representations of the present invention.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
While the foregoing examples are illustrative of the principles of the present invention in one or more particular applications, it will be apparent to those of ordinary skill in the art that numerous modifications in form, usage and details of implementation can be made without the exercise of inventive faculty, and without departing from the principles and concepts of the invention. Accordingly, it is not intended that the invention be limited, except as by the claims set forth below.

Claims (18)

  1. A method for executing a gesture-based command on a touch screen including the steps of:
    -displaying on a touch screen selectable entities of files or computer programs represented by icons or hyperlinks, or a pre-selected sequence of alphanumeric data in a data file;
    -activating an entity to invoke a gesture recognition mode by sensing at said entity on the touch screen the contact or proximity of at least one finger;
    -recognizing a predefined gesture along the surface of the touch screen which corresponds to a command pertaining to said activated entity;
    -identifying said gesture and executing the command on said entity.
  2. A method according to claim 1, wherein the contact or proximity of at least one finger is sensed by one of the following: a long tap, a double tap, a triple tap or a hovering.
  3. A method according to claims 1 or 2, wherein an application detecting and executing said predefined gesture command determines the present context out of a set of contexts, each having a context-dependent command set, and wherein a set of commands matching the context is used for detecting and executing said gesture commands.
4. A method according to claim 3, wherein the different contexts include one or several of the following: text files, mail message icons, executable software icons, hypertext documents, contact lists or hyperlinks.
  5. A method according to any of claims 1-4, wherein said selectable entity is a pre-selected alphanumeric string in a document, and that said activation of said alphanumeric string invokes a gesture recognition mode having a set of commands that act upon the pre-selected alphanumeric string in a number of ways when receiving one of a set of predefined gestures forming a legible symbol.
  6. A method according to claim 5, wherein one of the said commands consisting of a predefined symbol-forming gesture initiates a search command for the next identical or similar alphanumeric string in the same document, which is then displayed on said touch screen.
7. A method according to any of claims 1-4, wherein said selectable entity is an icon representing a data or message file, and that said activation of said icon invokes a gesture recognition mode having a set of commands that act upon the icon in a number of ways when receiving one of a set of predefined gestures forming a legible symbol.
  8. A method according to any of claims 1-4, wherein said selectable entity is a hyperlinked text file or list, and that activation of an alphanumeric string in said hyperlinked text file or list invokes a gesture recognition mode having a set of commands that act upon the hyperlink in a number of ways when receiving one of a set of predefined gestures forming a legible symbol.
  9. A method according to any of claims 1-8, wherein said activation is enabled by a script language code injection retrieved from a remote server.
  10. A method according to any of claims 1-9, wherein at least some of the predefined gestures form a letter.
  11. An apparatus comprising at least one processing core, at least one memory including computer program code, at least one communication transceiver module operable in at least one local radio communications network and at least one touch screen display, the at least one memory and the computer program code being configured to, with the at least one processing core, cause the apparatus at least to:
    -display on said touch screen selectable entities of files or computer programs represented by icons or hyperlinks, or a pre-selected sequence of alphanumeric data in a data file;
    -activate a displayed entity configured to invoke a gesture recognition mode in response to sensing at said entity on the touch screen the contact or proximity of at least one finger;
    -recognize a predefined gesture along the surface of said touch screen which corresponds to a command pertaining to said activated entity; and
    -execute the corresponding command on said entity.
12. An apparatus according to claim 11, wherein said predetermined tap or sequence of taps for activating a displayed entity is one of the following: a long tap, a double tap, a triple tap or a hovering.
13. An apparatus according to claims 11 or 12, wherein said activation of an entity displayed on the touch screen is configured to invoke a gesture recognition mode having a set of commands that act upon the displayed entity in a number of ways when receiving on said touch screen one of a set of predefined gestures forming a legible symbol.
  14. An apparatus according to any of claims 11-13, wherein said activation is enabled by a script language code injection retrieved from a remote server.
  15. An apparatus according to any of claims 11-13, wherein at least some of the predefined gestures recognized on the touch screen form a letter.
  16. A computer program configured to carry out a method according to at least one of claims 1-10 in an apparatus according to at least one of claims 11-15.
  17. A non-transitory computer readable medium having stored thereon a set of computer readable instructions that, when executed by at least one processor, cause an apparatus to at least:
    -display on a touch screen selectable entities of files or computer programs represented by icons or hyperlinks, or a pre-selected sequence of alphanumeric data in a data file;
    -activate an entity to be acted upon by sensing on said entity a predetermined tap or sequence of taps on the touch screen to invoke a gesture recognition mode;
    -activate a displayed entity configured to invoke a gesture recognition mode in response to sensing at said entity on the touch screen the contact or proximity of at least one finger;
    -recognize a predefined gesture along the surface of the touch screen which corresponds to a command pertaining to said activated entity;
-identify said gesture and execute the command on said entity.
  18. An apparatus, comprising:
    -means for displaying on a touch screen selectable entities of files or computer programs represented by icons or hyperlinks, or a pre-selected sequence of alphanumeric data in a data file;
    -means for activating an entity to invoke a gesture recognition mode by sensing at said entity on the touch screen the contact or proximity of at least one finger;
    -means for recognizing a predefined gesture along the surface of the touch screen which corresponds to a command pertaining to said activated entity;
    -means for identifying said gesture and executing the command on said entity.
PCT/CN2014/095844 2014-12-31 2014-12-31 Method, apparatus, computer program product for executing gesture-based command on touch screen WO2016106659A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2014/095844 WO2016106659A1 (en) 2014-12-31 2014-12-31 Method, apparatus, computer program product for executing gesture-based command on touch screen

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2014/095844 WO2016106659A1 (en) 2014-12-31 2014-12-31 Method, apparatus, computer program product for executing gesture-based command on touch screen

Publications (1)

Publication Number Publication Date
WO2016106659A1 (en)

Family

ID=56283933

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/095844 WO2016106659A1 (en) 2014-12-31 2014-12-31 Method, apparatus, computer program product for executing gesture-based command on touch screen

Country Status (1)

Country Link
WO (1) WO2016106659A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101630231A (en) * 2009-08-04 2010-01-20 苏州瀚瑞微电子有限公司 Operation gesture of touch screen
US20130346893A1 (en) * 2012-06-21 2013-12-26 Fih (Hong Kong) Limited Electronic device and method for editing document using the electronic device
CN102929485A (en) * 2012-10-30 2013-02-13 广东欧珀移动通信有限公司 Character input method and device
CN103970460A (en) * 2013-01-30 2014-08-06 三星电子(中国)研发中心 Touch screen-based operation method and terminal equipment using same

Similar Documents

Publication Publication Date Title
US11907642B2 (en) Enhanced links in curation and collaboration applications
RU2632144C1 (en) Computer method for creating content recommendation interface
US9930167B2 (en) Messaging application with in-application search functionality
CN102819555B (en) A kind of method and apparatus carrying out recommendation information loading in the reading model of webpage
CN103314373B (en) Effective processing of large data sets on mobile device
EP3074888B1 (en) Contextual information lookup and navigation
CN102024064B (en) Rapid searching method and mobile communication terminal
US9922121B2 (en) Search system, search method, terminal apparatus, and non-transitory computer-readable recording medium
JP6599127B2 (en) Information retrieval system and method
US20140053061A1 (en) System for clipping webpages
WO2015081824A1 (en) Method and apparatus for searching
US20140109009A1 (en) Method and apparatus for text searching on a touch terminal
CN105094603B (en) Method and device for associated input
US11157576B2 (en) Method, system and terminal for performing search in a browser
CN104133815B (en) The method and system of input and search
CN103902736A (en) System and method for finger click word-capturing search of words displayed on mobile information equipment screen
CN104462232A (en) Data storage method and device
KR20160001250A (en) Method for providing contents in electronic device and apparatus applying the same
CN103365872B (en) A kind of method and system for realizing planarization search in the terminal
US20130282686A1 (en) Methods, systems and computer program product for dynamic content search on mobile internet devices
US20140379688A1 (en) Methods Performed by Electronic Devices that Facilitate Navigating a Webpage
CN105159993A (en) Search method and device
CN102799650B (en) A kind of communication terminal and character string search method thereof
WO2022089078A1 (en) Method for presenting interface information, and electronic device
CN103020183A (en) Search share operation by selecting string and activating search share bar by left button

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 14909468; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 14909468; Country of ref document: EP; Kind code of ref document: A1