WO2017063141A1 - Setting cursor position in text on display device - Google Patents

Setting cursor position in text on display device Download PDF

Info

Publication number
WO2017063141A1
Authority
WO
WIPO (PCT)
Prior art keywords
subset
characters
text string
electronic device
input
Prior art date
Application number
PCT/CN2015/091840
Other languages
French (fr)
Inventor
Liang Zhang
Original Assignee
Motorola Mobility LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Motorola Mobility LLC
Priority to CN201580083812.XA (CN108351740A)
Priority to PCT/CN2015/091840 (WO2017063141A1)
Publication of WO2017063141A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04812: Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842: Selection of displayed objects or displayed text elements
    • G06F 3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883: Interaction techniques using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G06F 40/00: Handling natural language data
    • G06F 40/10: Text processing
    • G06F 40/103: Formatting, i.e. changing of presentation of documents
    • G06F 40/166: Editing, e.g. inserting or deleting
    • G06F 2203/00: Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/048: Indexing scheme relating to G06F3/048
    • G06F 2203/04805: Virtual magnifying lens, i.e. window or frame movable on top of displayed information to enlarge it for better reading or selection
    • G06F 2203/04806: Zoom, i.e. interaction techniques or interactors for controlling the zooming operation

Definitions

  • Some embodiments visually truncate a text string as spatial expansion is applied to a subset.
  • For example, in text string 510, the spaces contained within the subset of characters “long” have been spatially expanded. Since the spaces contained within the subset of characters “long” are being expanded, this subset becomes a region of priority on the display, in that it becomes a priority to fit the expanded content on the display. Accordingly, as the spaces are expanded, the characters furthest from this region that do not fit on the display are visually truncated.
  • For instance, the letter “T” is visually truncated from subset 512 and the letter “g” is visually truncated from subset 514.
  • Spatial expansion expands spaces or insertion points between elements in displayed content.
  • In the examples above, the displayed content spans from left to right in a horizontal manner.
  • In other embodiments, the spatial expansion vertically expands spaces, such as when the displayed content includes characters from a language in which characters are written from top to bottom instead of left to right, or when text otherwise runs in the vertical direction.
  • In those cases, the spaces between elements are vertically expanded.
  • Alternately or additionally, content can be arranged in a matrix that contains multiple rows and columns, such as a matrix of icons on a screen. In this instance, both the vertical and horizontal spaces between the icons can be expanded, while the icons themselves remain the same size.
  • In each of these cases, spatial expansion further distinguishes elements from one another, increases the size of the spaces between the elements, and allows for more accurate selection of content or insertion points.
  • Figure 6 illustrates a flow chart that describes steps in a method in accordance with one or more embodiments.
  • the method can be performed by any suitable hardware, software, firmware, or combination thereof.
  • aspects of the method can be implemented by one or more suitably configured software modules, such as input identification module 112 and/or expansion module 114 of Figure 1.
  • Step 600 displays content on a display device.
  • Step 602 receives input associated with selection of the displayed content.
  • In some embodiments, the input is received via a user making physical contact with a touchscreen display. Any suitable type of input can be received, examples of which are provided above.
  • Responsive to receiving the input, step 604 automatically identifies the received input as being associated with spatial expansion of content. For example, in some cases, an insert cursor action can automatically be associated with spatial expansion.
  • Responsive to identifying the input, step 606 identifies a subset of content within the displayed content, such as a subset of characters in a text string. This can be achieved in any suitable manner, as further described above.
  • In some embodiments, the subset is determined by identifying a region within the displayed content that contains the subset, such as through the use of a predetermined shape, or a shape of a footprint associated with the input (i.e., contact point(s) on a touchscreen display), positioned on or around the displayed content.
  • For example, the input can be used to identify the positioning of the shape, such as by identifying a center point to position the shape around.
  • In turn, the subset of content can be determined by identifying all elements that completely or partially fall within the identified region.
  • Step 608 modifies the subset of content by applying spatial expansion to the subset of content, such as by increasing the respective horizontal distance between left-to-right text characters in a text string (a self-contained sketch of these steps appears at the end of this list). In some cases, this is achieved by increasing a space character or other element that is included between text characters. In other cases, when an element includes blank space that defines its respective edges, the size of that blank space is simply increased. However, any suitable spatial expansion can be employed, such as vertical expansion as noted above. Alternately or additionally, modifying the subset of content can combine spatial expansion with other techniques, such as magnification.
  • Step 610 displays the modified subset of content (i.e., the spatially expanded subset of content) .
  • In some cases, the modified subset of content is displayed along with the unmodified or original displayed content.
  • Alternately or additionally, portions of the originally displayed content can be modified to fit the available display space, such as by visually truncating what is displayed or visually shrinking portions that are displayed outside of the expanded subset of content.
  • Once the modified subset is displayed, some embodiments receive additional input associated with the spatially expanded subset of content, such as selection of an insertion point for a cursor.
  • Figure 7 illustrates various components of example electronic device 700 that can be utilized to implement the embodiments described herein.
  • Electronic device 700 can be, or include, many different types of devices capable of implementing dynamic spatial expansion, such as mobile phone 102 of Figure 1.
  • Electronic device 700 includes communication transceivers 702 that enable wired or wireless communication of device data 704, such as received data and transmitted data.
  • The term “transceivers” is used here to refer generally to transmit and receive capabilities. While referred to as transceivers, it is to be appreciated that transceivers 702 can additionally include separate transmit antennas and receive antennas without departing from the scope of the claimed subject matter.
  • Example communication transceivers include WPAN radios compliant with various Institute of Electrical and Electronics Engineers (IEEE) 802.15 (Bluetooth TM ) standards, WLAN radios compliant with any of the various IEEE 802.11 (WiFi TM ) standards, WWAN (3GPP-compliant) radios for cellular telephony, wireless metropolitan area network radios compliant with various IEEE 802.16 (WiMAX TM ) standards, and wired LAN Ethernet transceivers.
  • Electronic device 700 may also include one or more data-input ports 706 via which any type of data, media content, and inputs can be received, such as user-selectable inputs, messages, music, television content, recorded video content, and any other type of audio, video, or image data received from any content or data source.
  • Data-input ports 706 may include USB ports, coaxial-cable ports, and other serial or parallel connectors (including internal connectors) for flash memory, DVDs, CDs, and the like. These data-input ports may be used to couple the electronic device to components, peripherals, or accessories such as keyboards, microphones, or cameras.
  • Electronic device 700 of this example includes processor system 710 (e.g., any of application processors, microprocessors, digital-signal processors, controllers, and the like) or a processor and memory system (e.g., implemented in a system-on-chip) , which processes computer-executable instructions to control operation of the device.
  • Alternately or additionally, a processing system may be implemented at least partially in hardware, which can include components of an integrated circuit or on-chip system, a digital-signal processor, an application-specific integrated circuit, a field-programmable gate array, a complex programmable logic device, and other implementations in silicon and other hardware.
  • Alternately or additionally, the electronic device can be implemented with any one or combination of software, hardware, firmware, or fixed-logic circuitry that is implemented in connection with processing and control circuits, which are generally identified at 712 (processing and control 712).
  • Although not shown, electronic device 700 can include a system bus, crossbar, interlink, or data-transfer system that couples the various components within the device.
  • A system bus can include any one or combination of different bus structures, such as a memory bus or memory controller, a data protocol/format converter, a peripheral bus, a universal serial bus, a processor bus, or a local bus that utilizes any of a variety of bus architectures.
  • Electronic device 700 also includes one or more memory devices 714 that enable data storage, examples of which include random access memory (RAM) , non-volatile memory (e.g., read-only memory (ROM) , flash memory, EPROM, EEPROM, etc. ) , and a disk storage device.
  • For example, operating system 718 can be maintained as software instructions within memory devices 714 and executed by processor system 710.
  • Similarly, input identification module 720 and expansion module 722 are embodied in memory devices 714 of electronic device 700 as executable instructions or code.
  • As described above, input identification module 720 identifies a corresponding action associated with received input.
  • Expansion module 722 applies spatial expansion to subsets of content and, in some cases, modifies additional displayed content to fit a display screen.
  • Input identification module 720 and expansion module 722 may be implemented as any form of software application, firmware, or any combination thereof.
  • Electronic device 700 also includes audio and video processing system 724 that processes audio data and passes through the audio and video data to audio system 726 and to display device 728.
  • Audio system 726 and display device 728 may include any modules that process, display, or otherwise render audio, video, display, or image data. Display data and audio signals can be communicated to an audio component and to a display component via a radio-frequency link, S-video link, HDMI, composite-video link, component-video link, digital video interface, analog-audio connection, or other similar communication link, such as media-data port 730.
  • In some implementations, audio system 726 and display device 728 are external components to electronic device 700.
  • Alternately, display device 728 can be an integrated component of the example electronic device, such as part of an integrated display and touchscreen interface. Display device 728 can sometimes include or be associated with touch-sensitive components that are configured to receive input through contact with display device 728, examples of which are provided above.
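
As referenced in the step 608 bullet above, the following is a minimal, self-contained Kotlin sketch of the Figure 6 flow (steps 600-610). The data model, names, and the simple interval test used for the subset area are illustrative assumptions; the document describes these steps in prose only.

/**
 * Self-contained sketch of the Figure 6 flow (steps 600-610).
 * All names are illustrative assumptions, not the patented implementation.
 */
data class PlacedChar(val char: Char, val x: Float, val width: Float)

fun applySpatialExpansion(
    text: List<PlacedChar>,
    touchX: Float,        // step 602: horizontal position of the received input
    subsetRadius: Float,  // half-width of the assumed subset area around the contact
    expandedGap: Float    // step 608: target gap between characters in the subset
): List<PlacedChar> {
    // Steps 604/606: the input is associated with spatial expansion, and the subset
    // is every character that at least partially falls within the subset area.
    val subset = text.indices.filter { i ->
        val c = text[i]
        c.x <= touchX + subsetRadius && c.x + c.width >= touchX - subsetRadius
    }
    if (subset.size < 2) return text

    // Step 608: widen only the gaps between subset characters; character widths
    // (font size) stay fixed, and everything to the right shifts over.
    // Step 610: the caller then redraws the returned characters; content outside
    // the subset could additionally be shrunk or truncated to fit the display.
    var shift = 0f
    return text.mapIndexed { i, c ->
        val moved = c.copy(x = c.x + shift)
        if (i in subset && i != subset.last()) {
            val currentGap = text[i + 1].x - (c.x + c.width)
            shift += (expandedGap - currentGap).coerceAtLeast(0f)
        }
        moved
    }
}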

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments described herein provide spatial expansion to a subset of displayed content effective to accentuate each respective element contained within the subset. Some embodiments identify an input associated with selecting an insertion point within the displayed content. Responsive to identifying the input, a subset of the displayed content is determined (606) based, at least in part, on the input. Upon determining the subset of the displayed content, some embodiments visually expand content within the subset using spatial expansion effective to increase horizontal spacing between each respective element of the subset without increasing each respective element's size. Alternately or additionally, some embodiments magnify all of the content in the subset at a first rate, and additionally apply spatial expansion to spaces (306(n-2), 306(n-1), 306n, 306(n+1), 306(n+2)) between each respective element within the subset by increasing the horizontal spacing between each respective element of the magnified displayed content.

Description

SETTING A CURSOR POSITION IN TEXT ON A DISPLAY DEVICE
BACKGROUND
Touchscreens have become a convenient and straightforward way for users to interact with a device. For instance, a user can invoke an application simply by using his or her finger to tap a region of a touchscreen displaying a corresponding icon, or can copy or modify displayed text by selecting an insertion point on a corresponding region of the touchscreen, and so forth. While a finger is a convenient tool to use with an interactive touchscreen display, its size can sometimes be larger than items being displayed, thus obscuring the displayed items in a manner which makes providing input prone to inaccuracies.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
While the appended claims set forth the features of the present techniques with particularity, these techniques, together with their objects and advantages, may be best understood from the following detailed description taken in conjunction with the accompanying drawings of which:
Figure 1 is an overview of a representative environment in which the present techniques may be practiced;
Figure 2 illustrates an example implementation in which cursor positioning can be employed;
Figure 3 illustrates an example implementation of spatial expansion;
Figure 4 illustrates an example of spatial expansion relative to other techniques;
Figure 5 illustrates an example implementation of display screen management relative to spatial expansion;
Figure 6 illustrates an example flow diagram in which spatial expansion is employed; and
Figure 7 is an illustration of a device that can use the present techniques.
DETAILED DESCRIPTION
Turning to the drawings, wherein like reference numerals refer to like elements, techniques of the present disclosure are illustrated as being implemented in a suitable environment. The following description is based on embodiments of the claims and should not be taken as limiting the claims with regard to alternative embodiments that are not explicitly described herein.
Touchscreen displays provide a user with a convenient way to interact with a respective device. Instead of using additional peripheral devices (e.g., a mouse, a keyboard, and so forth), a touchscreen display provides a dual-purpose, integrated input mechanism that displays content and simultaneously receives input. Performing a correct input gesture at a corresponding region of the touchscreen device, such as through a finger touching the display, invokes an associated action or response. However, while using a finger to interact with a touchscreen device is convenient to a user, it poses certain challenges depending upon which action is being invoked. Consider an example of a user attempting to position a cursor at a precise point in a text string (e.g., at a desired space between two characters in a word). Depending upon the characters being displayed, their associated font size, as well as the size of the user’s finger, the user’s finger may obscure multiple characters when touching the display, thus making it difficult for the user to know where in the string the cursor will be inserted. Similarly, the size of the portion of the finger that makes contact with the touchscreen display can be much larger than the spaces between characters, resulting in less precision on where the cursor is inserted. This issue is further exacerbated as device form factors are reduced.
The embodiments described herein provide spatial expansion to a subset of displayed content effective to accentuate each respective element contained within the subset. Some embodiments identify an input associated with selecting an insertion point within the displayed content. Responsive to identifying the input, a subset of the displayed content is determined based, at least in part, on the input. Upon determining the subset of the displayed content, some embodiments visually expand content within the subset using spatial expansion effective to increase horizontal spacing between a respective element or elements of the subset and an adjacent element, without increasing each respective element’s size. Alternately or additionally, some embodiments magnify all of the content in the subset at a first factor, and additionally apply spatial expansion to spacing between each respective element within the subset by increasing the horizontal spacing between each respective element of the magnified content.
Example Environment
Figure 1 illustrates an example operating environment 100 that includes a computing device in the form of mobile phone 102. It is to be appreciated and understood that the computing device could be embodied as any other suitable type of device without departing from the scope of the claimed subject matter, such as a tablet, a desktop computer, a laptop, a gaming station, and so forth. In this example, mobile phone 102 includes touchscreen display 104.
Touchscreen display 104 generally represents a multi-purpose display device that displays information on a screen, and detects input received through contact with the screen. Touchscreen display 104 may be configured in a variety of ways. For example, touchscreen display 104 may include touch-sensitive components such as sensors that are configured to detect contact with the screen, such as when being touched with a finger of user hand 106. As one example, an X-Y grid may be formed across the touchscreen using near optically transparent conductors (e.g., indium tin oxide) to detect contact at different X-Y locations on the screen of touchscreen display 104. However, other techniques can be used, such as surface capacitance, mutual capacitance, self-capacitance, infrared, optical imaging, dispersive signal technology, acoustic pulse recognition, and so forth.
Mobile phone 102 also includes processor(s) 108 and computer-readable media 110. Processor(s) 108 can be configured as a single or multi-core processor capable of enabling various functionalities of the mobile phone. Processor(s) 108 may be coupled with, and may implement functionalities of, any other components or modules of mobile phone 102 that are described herein. In some cases, processor(s) 108 is coupled (directly or indirectly) to computer-readable media 110. Computer-readable media 110 represents one or more memory storage devices on which information can be stored, such as processor-executable instructions, data, files, digital audio, digital images, etc. Here, computer-readable media 110 includes input identification module 112 and expansion module 114. While input identification module 112 and expansion module 114 are illustrated here as separate modules, in some embodiments the associated functionality can be incorporated into a common module without departing from the scope of the claimed subject matter.
Input identification module 112 represents functionality that receives input from touchscreen display 104 and identifies a corresponding action associated with the input. For example, input identification module 112 can identify a double-tap input gesture on touchscreen display 104 as a zoom-in action on a respective image in a respective region, a single-tap input gesture on a displayed icon as an invocation action of the application associated with the displayed icon, a press-and-hold gesture over displayed text as an insert-cursor action, and so forth. In some embodiments, input identification module 112 identifies or facilitates a text expansion action, as further described below.
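
To make the gesture-to-action mapping concrete, the following is a minimal Kotlin sketch. The patent provides no code, so the gesture names, action names, and function signature here are illustrative assumptions rather than the module's actual API.

// Hypothetical gesture and action names; the document describes the mapping
// in prose only, so this is an illustrative sketch rather than the actual API.
enum class Gesture { SINGLE_TAP, DOUBLE_TAP, PRESS_AND_HOLD, SWIPE_AND_STOP, FORCE_TOUCH }

enum class Action { INVOKE_APPLICATION, ZOOM_IN, INSERT_CURSOR, NONE }

/** Maps a recognized touch gesture to the action it invokes. */
fun identifyAction(gesture: Gesture, overText: Boolean, overIcon: Boolean): Action = when {
    gesture == Gesture.SINGLE_TAP && overIcon -> Action.INVOKE_APPLICATION
    gesture == Gesture.DOUBLE_TAP -> Action.ZOOM_IN
    gesture == Gesture.PRESS_AND_HOLD && overText -> Action.INSERT_CURSOR  // triggers spatial expansion
    else -> Action.NONE
}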
Expansion module 114 represents functionality that modifies a subset of content by dynamically expanding the subset of displayed content through spatial expansion, responsive to receiving a touch input. For instance, in some embodiments, the content in the displayed region corresponds to a subset of characters in a text string.  When a touch input is received and processed by the input identification module, expansion module 114 identifies which characters are included in a subset of characters associated with the touch input. The expansion module 114 then visually alters the spacing between each respective character, by modifying and expanding the spacing between respective characters horizontally, while keeping a fixed sizing for the characters (e.g., keeping the characters at the same font size) . At times, the expansion can simply increase the overall horizontal distance between each respective character from an originally displayed horizontal distance. Doing so visually emphasizes the spacing between characters to facilitate cursor positioning, as described below. This differs from zoom-in functionality, in that zoom-in functionality simply magnifies each respective character, and the spacing between characters, at a same ratio, as further described below. Touchscreen display 104 then displays the updated, expanded content. In turn, a user can now interact with the expanded content to accurately position the cursor.
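
The core behavior of the expansion module can likewise be sketched as plain layout arithmetic. A minimal sketch follows, assuming a simple data model of per-glyph widths and per-gap spacing (names and structure are illustrative, not the patent's implementation): only the gaps inside the touched subset are widened, and glyph widths, that is, font size, stay fixed.

/**
 * Minimal sketch of spatial expansion over a run of text. Only the gaps inside
 * the touched subset are widened; each glyph keeps its original width.
 */
data class Glyph(val char: Char, val width: Float)

/** Gap i is the empty space between glyph i and glyph i + 1. */
fun expandGaps(
    gaps: MutableList<Float>,  // size = glyphs.size - 1
    subset: IntRange,          // indices of the glyphs in the touched subset
    expandedGap: Float         // target gap, e.g. 3.5f times the original gap
) {
    for (i in gaps.indices) {
        // A gap is expanded if either of its neighboring glyphs is in the subset.
        if (i in subset || (i + 1) in subset) {
            gaps[i] = maxOf(gaps[i], expandedGap)
        }
    }
}

/** Recomputes x positions after expansion; glyph widths are never scaled. */
fun layout(glyphs: List<Glyph>, gaps: List<Float>, startX: Float = 0f): List<Float> {
    var x = startX
    return glyphs.mapIndexed { i, glyph ->
        val position = x
        x += glyph.width + gaps.getOrElse(i) { 0f }
        position
    }
}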
Generally, any of the functions described herein can be implemented using software, firmware, hardware (e.g., fixed logic circuitry) , manual processing, or a combination of these implementations. The terms “module, ” “functionality, ” “component” and “logic” as used herein generally represent software, firmware, hardware, or a combination thereof. In the case of a software implementation, the module, functionality, component or logic represents program code that performs specified tasks when executed on or by a processor (e.g., CPU or CPUs) . The program code can be stored in one or more computer readable memory devices.
Having described an example operating environment in which various embodiments can be utilized, consider now a discussion of dynamic spatial expansion of displayed content based on touch input, in accordance with one or more embodiments.
Dynamic Spatial Expansion of Displayed Content
One trend in technology is the push to increase functionality in a device, while decreasing the form factor or size of the device. As a device decreases in size, so too can a corresponding display. When a display screen decreases in size, the device can compensate for the smaller screen size by either reducing the amount of content displayed, or reducing the size of the displayed content. In the case of a device that contains a touchscreen display, the reduced screen size sometimes impacts the accuracy of how input via the touchscreen display is received or interpreted.
Various embodiments spatially expand a subset of displayed content, e.g., text, responsive to receiving touch input, effective to accentuate each respective element of the subset. In some cases, the spatial expansion increases a displayed distance between each respective element of the subset from an originally displayed distance, thereby enabling a user to more accurately position a cursor within the subset.
To further illustrate, consider Figure 2, which includes mobile device 102 and user hand 106 from Figure 1. Here, mobile device 102 displays text string 202 on the touchscreen display. A portion of the text is selected by the user’s finger for inserting a cursor. It is to be appreciated that selection of a cursor’s insertion point in a text string can be associated with any suitable type of action. For example, a user may wish to edit, add, or delete characters in a string, initiate a copy, cut, or paste operation, and the like. Any suitable type of gesture can be utilized to interact with the touchscreen display, such as a press-and-hold gesture, a swipe-and-stop gesture, a double-tap gesture, a single-tap gesture, a force touch gesture, and so forth. Upon receiving the input gesture, mobile device 102 identifies the gesture as being associated with a cursor positioning action.
When the user’s finger connects with the surface of the touchscreen display, the user’s finger and hand visually obscure at least part of the displayed content, making it difficult for a user to know exactly where in text string 202 the action (here, the cursor positioning action) will be applied. Further, the impression or surface contact of the finger covers a larger display area than the spacing between the characters contained within text string 202.
Consider, for example, a case where the user desires a cursor insertion point at the space between the “i” and “n” of the word “During” . At times, the spaces between characters in a text string can vary from one another. This technique, known as kerning, adjusts the space between individual letter forms based upon overhangs, etc., to achieve a visually pleasing result. In turn, this can result in varying, non-uniform horizontal spacing between letters. Here, kerning has been used to space the letters in the text string 202. As a result, the space between the lowercase “i” and lowercase “n” is relatively smaller in size than other spaces contained within text string 202, such as the space between “y” and “e” in “years” , the space between “n” and “g” in “During” , and so forth. In turn, selecting the space between “i” and “n” can be more difficult, since the smaller distance becomes more difficult to select than a larger distance. While the above  example describes non-uniform spacing between characters, it is to be appreciated that the techniques described herein can be utilized in connection with uniform spacing between characters without departing from the scope of the claimed subject matter.
Regardless of whether non-uniform spaces have been used or not, various embodiments expand or increase the horizontal spacing between elements in a subset of displayed content, such as a subset of characters in text string 202, responsive to receiving a touch input. Specifically, when a touch input is received, a subset is identified based upon the touch input. In this example, the subset is identified by subset area 204. Any suitable techniques can be used to define the subset area. For example, the subset area may correspond to the actual shape of the contact area (e.g., the actual footprint of the touch contact) . Alternately or additionally, a predetermined shape can be used to define the subset area, such as one positioned around an approximate or detected point, such as the centroid of contact. Here, subset area 204 is a predetermined shape (a circle) with a predetermined size (diameter 206) . It is to be appreciated, however, that any shape or size can be utilized without departing from the scope of the claimed subject matter. Further, the location or positioning of subset area 204 can be determined in any suitable manner, such as by determining a center point, a left-most point, a right-most point, and so forth, based upon the footprint of the contact area. Subset area 204 covers or includes the subset of characters “uring” . Note that while “u” and “g” are not completely contained within subset area 204, at least a portion of each character is included. Accordingly, in some embodiments, if a portion of a character falls within the subset area, the character can be included in the subset of characters. In other  embodiments, only characters that completely fall within the contact area are considered to be included in a subset of characters.
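
As a sketch of how such a subset area might be evaluated, the following Kotlin snippet tests each character's bounding box against a circular subset area centered on the contact centroid, with partial overlap being enough for inclusion, as in the "uring" example. The geometry helpers and names are assumptions, not the patent's code.

/** Axis-aligned bounding box of a displayed character (assumed data model). */
data class Rect(val left: Float, val top: Float, val right: Float, val bottom: Float)

fun intersectsCircle(box: Rect, cx: Float, cy: Float, radius: Float): Boolean {
    // Distance from the circle center to the closest point of the box.
    val nearestX = cx.coerceIn(box.left, box.right)
    val nearestY = cy.coerceIn(box.top, box.bottom)
    val dx = cx - nearestX
    val dy = cy - nearestY
    return dx * dx + dy * dy <= radius * radius
}

/** Indices of characters that fall at least partly within the circular subset area. */
fun selectSubset(
    charBoxes: List<Rect>,
    contactCentroidX: Float,
    contactCentroidY: Float,
    diameter: Float
): List<Int> = charBoxes.indices.filter {
    intersectsCircle(charBoxes[it], contactCentroidX, contactCentroidY, diameter / 2f)
}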
Continuing the above example, now consider Figure 3, which includes text string 202 from Figure 2. Recall that, responsive to receiving the touch input, a subset of text string 202 ( “uring” ) has been identified via subset area 204. Each respective character of this subset has a respective space between itself and the next character in the subset. For example, “u” and “r” have space 302 (n-2) there between, “r” and “i” have space 302 (n-1) , “i” and “n” have space 302n, “n” and “g” have space 302 (n+1) , and “g” and “t” have space 302 (n+2) . Here, the notations (n-2) , (n-1) , (n) , (n+1) , and (n+2) are used to identify the position of the respective space relative to a current insertion point. For example, space 302n in the subset of text string 202 identifies the current insertion point based upon subset area 204, while space 302 (n-1) identifies the first insertion position to the left of the current insertion point, space 302 (n+1) identifies the first insertion point to the right of the current insertion point, and so forth. In this example, space 302n is identified as the space closest to the center of the subset area. However, this is merely for illustrative purposes, and it is to be appreciated that the space associated with the current insertion point can be identified in any suitable manner, such as a left-most position in the contact area, a right-most position in the contact area, and so forth.
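
A short sketch of the insertion-point indexing described above (again with assumed names): the space whose center lies closest to the center of the subset area becomes the current insertion point n, and the remaining spaces are labeled by their signed offset from it.

/** A candidate insertion point between two characters (assumed data model). */
data class Space(val index: Int, val centerX: Float)

/** Returns a map from each space index to its offset from the current insertion point n. */
fun labelSpaces(spaces: List<Space>, subsetCenterX: Float): Map<Int, Int> {
    val current = spaces.minByOrNull { kotlin.math.abs(it.centerX - subsetCenterX) }
        ?: return emptyMap()
    // 0 corresponds to n, -1 to n-1, +1 to n+1, and so forth.
    return spaces.associate { it.index to (it.index - current.index) }
}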
Some embodiments apply spatial expansion to a subset of elements in displayed content to help emphasize each respective element by expanding the empty space around the respective element. Consider text string 304, which is an example of spatial expansion applied to the above identified subset of characters in subset area 204.  Here, only empty spaces between elements contained within the identified subset have been modified, while the rest of the displayed content, as well as the respective characters within the identified subset, remain as previously displayed. This is further emphasized by using the same notations as those used above to identify the expanded spaces (e.g., space 306 (n-2) corresponds to an expanded version of space 302 (n-2) , space 306 (n-1) corresponds to an expanded version of space 302 (n-1) , and so forth) . Here, each space between characters has been expanded to a uniform size (e.g., space 306 (n-2) , space 306(n-1) , space 306n, space 306 (n+1) , and space 306 (n+2) are each the same size) . However, this is merely for illustrative purposes, and it is to be appreciated that the expanded spaces can be non-uniform in size without departing from the scope of the claimed subject matter. Expanding each empty space horizontally not only increases the isolation of each character from one another, but it additionally enlarges each potential insertion point. In turn, a user can more accurately select an insertion point due to the increased size of the spaces.
To further illustrate, consider Figure 4, which includes example text string 402, example text string 404, and example text string 406. Example text string 402 represents a baseline text string as it is originally displayed on a display device, such as text string 202 of Figure 2. Here, example text string 402 includes the word “Example” and appears at an arbitrary font size. For a particular font size, the letters and spaces have predetermined sizes or lengths. In this example, the lowercase “e” in the word “Example” has a height of size Y, designated here as height 408, while the space between the lowercase “e” and the previous character (e.g., the lowercase “l” in the word “Example” ) has a length of size X, designated here as length 410. Using a space length-to-height ratio, this can be expressed as:
ratio(402) = length 410 / height 408 = X / Y
Now consider example text string 404. Example text string 404 represents a text string on which a zoom-in operation has been performed over the subset of characters “ple” . Here, the subset has been magnified by a factor of two. When applying a zoom-in or magnification operation to the subset of characters, the magnification factor, or zoom-in factor, is applied to the subset as a whole (e.g., the magnification factor is applied to both characters and spaces contained within the subset). This can be seen by examining height 412 of lowercase “e” in example text string 404, which has a size of 2Y. Similarly, length 414, which represents the space between the lowercase “e” and the character preceding it, has a size of 2X. Thus, both measures for spaces and characters have been increased by the same factor. While the horizontal spacing between “l” and “e” has doubled in size, so, too, have the respective characters, thus diminishing the effectiveness of the expanded empty space since the space length-to-height ratio remains the same:
ratio(404) = 2X / 2Y = X / Y
Example text string 406, on the other hand, applies spatial expansion to the same subset of characters “ple”. As in the case of example text string 402, the height of the lowercase “e” has a size of Y, designated here as height 416. However, the space between the lowercase “e” and an adjacent character (the lowercase “l”) has increased to a size of 3.5X, designated here as length 418. In example text string 406, spatial expansion affects only the horizontal spacing between characters, while the characters themselves remain at the same size as originally displayed. Here, spatial expansion increases the size of the spacing without magnifying the size of the characters adjacent to the spacing. This differs from the zoom-in or magnification action discussed with respect to example text string 404, in that zooming in or magnifying increases the spaces and the characters by the same factor. To compare, the same length-to-height ratio metric for example text string 406 becomes:

space length-to-height ratio = 3.5X / Y = 3.5 × (X / Y)
Effectively, the spatial expansion applied in text string 406 creates a length-to-height ratio that is greater than that of originally displayed example text string 402, by increasing the space by a factor larger than the factor applied to the height of the character(s) around that space (here, since the height of the characters did not increase, this equates to an applied factor of 1). In turn, this creates a larger “target” or selection point for the user, relative to the characters, than what a zoom-in or magnification action generates. For simplicity’s sake, example text string 404 and example text string 406 have been discussed separately to further illustrate their differences. However, in some embodiments, a subset of displayed content can be both magnified and spatially expanded. For example, a subset can first be magnified, such as that illustrated with respect to example text string 404, and then spatially expanded, such as that illustrated with respect to example text string 406, resulting in a character height of 2Y and a space length of 7X.
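The comparison can be made concrete with a short, non-limiting Python sketch that computes the length-to-height ratio for each of the cases above, treating X and Y as arbitrary hypothetical base measurements:

X, Y = 4.0, 16.0    # hypothetical base space length and character height

cases = {
    "402 original":           (X,       Y),      # ratio X / Y
    "404 zoomed by 2":        (2 * X,   2 * Y),  # ratio unchanged
    "406 spatially expanded": (3.5 * X, Y),      # ratio grows by 3.5
    "magnified + expanded":   (7 * X,   2 * Y),  # ratio grows by 3.5 at 2x size
}

for name, (space_length, char_height) in cases.items():
    print(f"{name}: {space_length / char_height:.3f}")
# 402 and 404 print the same ratio; 406 and the combined case print 3.5 times that ratio.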
One challenge associated with spatial expansion is balancing the available display area with what is being displayed and expanded. Consider Figure 5, which illustrates text string 502 as it would be displayed on mobile device 102 of Figure 1. The displayed characters “This is a long text string” contained within text string 502 span the horizontal length of the display. Consider now the case where an input gesture (associated with spatial expansion) is applied to the touchscreen display over the subset of characters “long”. Expanding the spaces between the characters “l” and “o”, “o” and “n”, and “n” and “g”, as well as the spaces between the words “long” and “a”, and “long” and “text”, potentially pushes the modified version of text string 502 beyond the horizontal boundaries of the display of mobile device 102.
To compensate for the longer size of a string that contains a spatially expanded subset of characters, some embodiments reduce the size of other subsets of characters within the text string. For example, text string 504 illustrates a version of text string 502 that has spatially expanded spaces within the subset of characters “long”. Text string 504 also includes subset 506 (e.g., “This”) and subset 508 (e.g., “string”). In order to fit text string 504 in its entirety on the display, subset 506 and subset 508 are displayed at a reduced size. Any suitable reduction can be applied without departing from the scope of the claimed subject matter. Here, each character contained within its respective subset has been reduced using the same factor. However, other techniques can be used as well. For example, some embodiments may use a graduated reduction, where the reduction factor for each character of the subset is proportional to how close the respective character is to the edge of the display. Thus, graduated reduction techniques generate an appearance of shrinking as characters are positioned closer to the edge of the display.
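As one non-limiting way to picture the graduated reduction just described, the Python sketch below scales each character outside the expanded subset by a factor that falls off linearly toward the display edge; the linear curve, the minimum scale, and the pixel coordinates are assumptions for illustration only.

def graduated_scale(char_x, region_x, edge_x, min_scale=0.5):
    """Scale factor for a character at char_x: 1.0 next to the expanded
    region, decreasing linearly to min_scale at the display edge."""
    span = abs(edge_x - region_x)
    if span == 0:
        return 1.0
    closeness_to_edge = min(abs(char_x - region_x) / span, 1.0)
    return 1.0 - (1.0 - min_scale) * closeness_to_edge

# Characters moving from the expanded region (x = 300) toward the left edge (x = 0).
for x in (300, 200, 100, 0):
    print(x, round(graduated_scale(x, region_x=300, edge_x=0), 2))
# -> 300 1.0, 200 0.83, 100 0.67, 0 0.5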
Some embodiments visually truncate a text string as spatial expansion is applied to a subset. As in the above case, in text string 510 the spaces contained within the subset of characters “long” have been spatially expanded. Because the spaces contained within this subset are being expanded, the subset becomes a region of priority on the display; fitting the expanded content on the display takes priority. Accordingly, as the spaces are expanded, the characters furthest from this region that do not fit on the display are visually truncated. In this example, the letter “T” is visually truncated from subset 512 and the letter “g” is visually truncated from subset 514.
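Visual truncation of this kind can be sketched, in a non-limiting way, as repeatedly dropping the visible character farthest from the region of priority until the string fits the display; the character widths, display width, and index of the priority region in the Python below are hypothetical.

def truncate_to_fit(char_widths, priority_index, display_width):
    """Return the indices of characters that remain visible after dropping,
    one at a time, the character farthest from the priority region."""
    visible = set(range(len(char_widths)))
    while sum(char_widths[i] for i in visible) > display_width and len(visible) > 1:
        farthest = max(sorted(visible), key=lambda i: abs(i - priority_index))
        visible.remove(farthest)
    return sorted(visible)

widths = [12] * 10                 # ten characters, 12 px each (hypothetical)
print(truncate_to_fit(widths, priority_index=6, display_width=100))
# -> [2, 3, 4, 5, 6, 7, 8, 9]: the characters farthest from the region are truncated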
Spatial expansion expands spaces or insertion points between elements in displayed content. In the above examples, the displayed content spans from left to right in a horizontal manner. However, in some embodiments, the spatial expansion vertically expands spaces, such as when the displayed content includes characters from a language that is written from top to bottom rather than from left to right, or when text otherwise runs in the vertical direction. Here, the spaces between elements are vertically expanded. In another embodiment, content can be arranged in a matrix that contains multiple rows and columns, such as a matrix of icons on a screen. In this instance, both the vertical and horizontal spaces between the icons can be expanded, while the icons themselves remain the same size. Thus, spatial expansion further distinguishes elements from one another, increases the size of the spaces between the elements, and allows for more accurate selection of content or insertion points.
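For content arranged in a matrix, the same operation can be sketched in both axes; in the non-limiting Python below, the icon size and gap values are hypothetical, and only the gaps change:

def icon_positions(rows, cols, icon_size, col_gap, row_gap):
    """Top-left (x, y) of each icon in a grid laid out with the given gaps."""
    return [
        (c * (icon_size + col_gap), r * (icon_size + row_gap))
        for r in range(rows)
        for c in range(cols)
    ]

original = icon_positions(2, 2, icon_size=48, col_gap=8, row_gap=8)
expanded = icon_positions(2, 2, icon_size=48, col_gap=24, row_gap=24)  # gaps widened
print(original)   # -> [(0, 0), (56, 0), (0, 56), (56, 56)]
print(expanded)   # -> [(0, 0), (72, 0), (0, 72), (72, 72)]; icons stay 48 px square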
Figure 6 illustrates a flow chart that describes steps in a method in accordance with one or more embodiments. The method can be performed by any suitable hardware, software, firmware, or combination thereof. In at least some embodiments, aspects of the method can be implemented by one or more suitably configured software modules, such as input identification module 112 and/or expansion module 114 of Figure 1.
Step 600 displays content on a display device. Step 602 receives input associated with selection of the displayed content. In some embodiments, the input is received via a user making physical contact with a touchscreen display. Any suitable type of input can be received, examples of which are provided above. Upon receiving the input, step 604 automatically identifies the received input as being associated with spatial expansion of content. For example, in some cases, an insert cursor action can automatically be associated with spatial expansion.
Responsive to receiving the input, step 606 identifies a subset of content within the displayed content, such as a subset of characters in a text string. This can be achieved in any suitable manner, as further described above. In some cases, the subset is determined by identifying a region within the displayed content that contains the subset, such as through the use of a predetermined shape, or a shape of a footprint associated with the input (i.e., contact point(s) on a touchscreen display), positioned on or around the displayed content. In some embodiments, the input can be used to identify the positioning of the shape, such as by identifying a center point around which to position the shape. The subset of content can be determined by identifying all elements that completely or partially fall within the identified region.
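By way of a non-limiting Python sketch, identifying the subset from a circular footprint can be done by keeping every character whose bounding box at least partially overlaps the circle; the box coordinates, touch center, and radius below are hypothetical pixel values.

def box_overlaps_circle(box, cx, cy, radius):
    """True if the axis-aligned box (x0, y0, x1, y1) overlaps the circle."""
    x0, y0, x1, y1 = box
    nearest_x = min(max(cx, x0), x1)   # point inside the box closest to the center
    nearest_y = min(max(cy, y0), y1)
    return (nearest_x - cx) ** 2 + (nearest_y - cy) ** 2 <= radius ** 2

def subset_in_region(char_boxes, cx, cy, radius):
    """Indices of characters that completely or partially fall within the region."""
    return [i for i, box in enumerate(char_boxes) if box_overlaps_circle(box, cx, cy, radius)]

# One bounding box per character along a text line: (x0, y0, x1, y1).
boxes = [(i * 12, 0, i * 12 + 10, 16) for i in range(10)]
print(subset_in_region(boxes, cx=60, cy=8, radius=20))   # -> [3, 4, 5, 6]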
Responsive to identifying a subset of content, step 608 modifies the subset of content by applying spatial expansion to the subset of content, such as by increasing the respective horizontal distance between left-to-right text characters in a text string. In some cases, this is achieved by increasing a space character or element that is included between text characters. In other cases, when an element includes blank space that defines its respective edges, the size of the blank space for the respective element is simply increased. However, any suitable spatial expansion can be employed, such as vertical expansion as noted above. Alternately or additionally, modifying the subset of content can combine spatial expansion with other techniques, such as magnification.
Step 610 displays the modified subset of content (i.e., the spatially expanded subset of content). In some cases, the modified subset of content is displayed with the unmodified or original displayed content. When the available display space is too small to display the modified subset of content and the rest of the originally displayed content, portions of the originally displayed content can be modified to fit the available display space, such as by visually truncating what is displayed or visually shrinking portions that are displayed outside of the expanded subset of content. After updating the display with modified content, some embodiments receive additional input associated with the spatially expanded subset of content, such as selection of an insertion point for a cursor.
Having considered a discussion of dynamic spatial expansion, consider now a discussion of an example device that can be utilized to implement the embodiments described above.
Example Device
Figure 7 illustrates various components of example electronic device 700 that can be utilized to implement the embodiments described herein. Electronic device 700 can be, or include, many different types of devices capable of implementing dynamic spatial expansion, such as mobile phone 102 of Figure 1.
Electronic device 700 includes communication transceivers 702 that enable wired or wireless communication of device data 704, such as received data and transmitted data. The term transceivers is used here to generally refer to transmit and receive capabilities. While referred to as a transceiver, it is to be appreciated that transceivers 702 can additionally include separate transmit antennas and receive antennas without departing from the scope of the claimed subject matter. Example communication transceivers include WPAN radios compliant with various Institute of Electrical and Electronics Engineers (IEEE) 802.15 (Bluetooth™) standards, WLAN radios compliant with any of the various IEEE 802.11 (WiFi™) standards, WWAN (3GPP-compliant) radios for cellular telephony, wireless metropolitan area network radios compliant with various IEEE 802.16 (WiMAX™) standards, and wired LAN Ethernet transceivers.
Electronic device 700 may also include one or more data-input ports 706 via which any type of data, media content, and inputs can be received, such as user-selectable inputs, messages, music, television content, recorded video content, and any other type of audio, video, or image data received from any content or data source. Data-input ports 706 may include USB ports, coaxial-cable ports, and other serial or parallel connectors (including internal connectors) for flash memory, DVDs, CDs, and the like. These data-input ports may be used to couple the electronic device to components, peripherals, or accessories such as keyboards, microphones, or cameras.
Electronic device 700 of this example includes processor system 710 (e.g., any of application processors, microprocessors, digital-signal processors, controllers, and the like) or a processor and memory system (e.g., implemented in a system-on-chip) , which processes computer-executable instructions to control operation of the device. A processing system may be implemented at least partially in hardware, which can include components of an integrated circuit or on-chip system, digital-signal processor, application-specific integrated circuit, field-programmable gate array, a complex programmable logic device, and other implementations in silicon and other hardware. Alternatively or in addition, the electronic device can be implemented with any one or combination of software, hardware, firmware, or fixed-logic circuitry that is implemented in connection with processing and control circuits, which are generally identified at 712 (processing and control 712) . Although not shown, electronic device 700 can include a system bus, crossbar, interlink, or data-transfer system that couples the various components within the device. A system bus can include any one or combination of  different bus structures, such as a memory bus or memory controller, data protocol/format converter, a peripheral bus, a universal serial bus, a processor bus, or local bus that utilizes any of a variety of bus architectures.
Electronic device 700 also includes one or more memory devices 714 that enable data storage, examples of which include random access memory (RAM) , non-volatile memory (e.g., read-only memory (ROM) , flash memory, EPROM, EEPROM, etc. ) , and a disk storage device. Memory devices 714 are implemented at least in part as a physical device that stores information (e.g., digital or analog values) in storage media, which does not include propagating signals or waveforms. The storage media may be implemented as any suitable types of media such as electronic, magnetic, optic, mechanical, quantum, atomic, and so on. Memory devices 714 provide data storage mechanisms to store the device data 704, other types of information or data, and various device applications 716 (e.g., software applications) . For example, operating system 718 can be maintained as software instructions within memory devices 714 and executed by processors 710. In some embodiments, input identification module 720 and expansion module 722 are embodied in memory devices 714 of electronic device 700 as executable instructions or code. Among other things, input identification module 720 identifies a corresponding action associated with received input. Expansion module 722 applies spatial expansion to subsets of content and, in some cases, modifies additional displayed content to fit a display screen. Although represented as a software implementation, input identification module 720 and expansion module 722 may be implemented as any form of software application, firmware, or any combination thereof.
Electronic device 700 also includes audio and video processing system 724 that processes audio data and passes through the audio and video data to audio system 726 and to display device 728. Audio system 726 and display device 728 may include any modules that process, display, or otherwise render audio, video, display, or image data. Display data and audio signals can be communicated to an audio component and to a display component via a radio-frequency link, S-video link, HDMI, composite-video link, component-video link, digital video interface, analog-audio connection, or other similar communication link, such as media-data port 730. In some implementations, audio system 726 and display device 728 are external components to electronic device 700. Alternatively or additionally, display device 728 can be an integrated component of the example electronic device, such as part of an integrated display and touchscreen interface. Display device 728 can sometimes include or be associated with touch-sensitive components that are configured to receive input through contact with display device 728, examples of which are provided above.
In view of the many possible embodiments to which the principles of the present discussion may be applied, it should be recognized that the embodiments described herein with respect to the drawing figures are meant to be illustrative only and should not be taken as limiting the scope of the claims. Therefore, the techniques as described herein contemplate all such embodiments as may come within the scope of the following claims and equivalents thereof.

Claims (20)

  1. An electronic device comprising:
    a touchscreen display device configured to display content;
    a touch-sensitive component associated with the touchscreen display device and configured to receive an input gesture associated with the displayed content; and
    at least one processor configured to identify a subset of the displayed content based, at least in part, on the input gesture, and provide the touchscreen display device with a spatially expanded subset of the displayed content, for display, by increasing a respective spacing between one or more elements contained within the subset without increasing a size of each respective element associated with the respective spacing effective to generate the spatially expanded subset.
  2. The electronic device of claim 1, wherein the spatially expanded subset of the displayed content is spatially expanded by horizontally expanding the respective spacing to increase a respective distance between respective elements of the one or more elements.
  3. The electronic device of claim 1, wherein the at least one processor is configured to identify the subset of the displayed content by using a shape associated with a footprint of the input gesture.
  4. The electronic device of claim 3, wherein the at least one processor is configured to identify the subset of the displayed content by including all elements of the displayed content that are at least partially included in the shape associated with the footprint.
  5. The electronic device of claim 1, wherein the spatially expanded subset of the displayed content is spatially expanded by visually truncating display of at least some of the displayed content outside of the subset.
  6. The electronic device of claim 1, wherein the touch-sensitive component is configured to receive an input gesture associated with an insert cursor action.
  7. The electronic device of claim 6, wherein the input gesture comprises a press-and-hold gesture.
  8. A method for an electronic device comprising:
    displaying a text string on a display device associated with the electronic device;
    receiving input associated with the text string;
    identifying, based on the input, a subset of characters within the text string relative to which to apply spatial expansion;
    modifying the subset of characters by:
    applying spatial expansion to the subset of characters; and
    magnifying the subset of characters; and
    displaying the modified subset of characters on the display device.
  9. The method of claim 8 further comprising:
    receiving additional input associated with the modified subset of characters; and
    performing an action associated with the additional input.
  10. The method of claim 9, wherein performing the action associated with the additional input comprises selecting a cursor insertion point.
  11. The method of claim 8, wherein identifying the subset of characters comprises using a predetermined shape to identify a region in the text string that contains the subset of characters.
  12. The method of claim 11, wherein the predetermined shape comprises a circle having a predetermined size.
  13. The method of claim 8, wherein displaying the modified subset of characters further comprises shrinking at least some portions of the text string not in the subset of characters.
  14. The method of claim 8, wherein applying the spatial expansion and magnifying further comprises increasing a respective size of a horizontal spacing between two respective characters in the subset of characters at a factor larger than a factor of the magnification applied to a height of each respective character.
  15. An electronic device comprising:
    a touchscreen display device configured to:
    display a text string at a first size that defines a first length-to-height ratio based upon a height of at least one character in the text string and a length of a space between the at least one character and a previous character in the text string;
    a touch-sensitive component associated with the touchscreen display device and configured to receive an input gesture associated with the displayed text string; and
    at least one processor configured to identify a subset of characters in the text string based, at least in part, on the input gesture, and provide the touchscreen display device with a spatially expanded subset of characters to display by modifying the subset of characters to a second length-to-height ratio that is greater than the first length-to-height ratio.
  16. The electronic device of claim 15, wherein the electronic device comprises a mobile phone.
  17. The electronic device of claim 15, wherein the at least one processor is configured to provide a spatially expanded subset of characters to the touchscreen display device by providing a magnified spatially expanded subset of characters.
  18. The electronic device of claim 15, wherein the touch-sensitive component receives the input gesture by receiving an input gesture associated with selecting an insertion point in the text string to edit the text string.
  19. The electronic device of claim 15, wherein the at least one processor is configured to identify the subset of characters by using the input gesture to identify a region in the text string that includes the subset of characters.
  20. The electronic device of claim 15, wherein the touchscreen display device provides the spatially expanded subset of characters by modifying display of at least one portion of the text string outside of the subset of characters by:
    visually truncating the at least one portion of the text string; or
    visually shrinking the at least one portion of the text string.
PCT/CN2015/091840 2015-10-13 2015-10-13 Setting cursor position in text on display device WO2017063141A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201580083812.XA CN108351740A (en) 2015-10-13 2015-10-13 Cursor position is set in text on the display apparatus
PCT/CN2015/091840 WO2017063141A1 (en) 2015-10-13 2015-10-13 Setting cursor position in text on display device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/091840 WO2017063141A1 (en) 2015-10-13 2015-10-13 Setting cursor position in text on display device

Publications (1)

Publication Number Publication Date
WO2017063141A1 true WO2017063141A1 (en) 2017-04-20

Family

ID=58516983

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/091840 WO2017063141A1 (en) 2015-10-13 2015-10-13 Setting cursor position in text on display device

Country Status (2)

Country Link
CN (1) CN108351740A (en)
WO (1) WO2017063141A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111949194B (en) * 2020-08-11 2022-08-19 深圳传音控股股份有限公司 Character input method, terminal device and computer readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101068411A (en) * 2006-05-03 2007-11-07 Lg电子株式会社 Method of displaying text using mobile terminal
US20140132499A1 (en) * 2012-11-12 2014-05-15 Microsoft Corporation Dynamic adjustment of user interface
EP2818994A1 (en) * 2013-06-26 2014-12-31 Honeywell International Inc. Touch screen and method for adjusting touch sensitive object placement thereon
EP2905689A1 (en) * 2012-08-30 2015-08-12 ZTE Corporation Method and apparatus for displaying character on touchscreen

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000284774A (en) * 1999-03-29 2000-10-13 Hitachi Ltd Character display method and display device
US9612670B2 (en) * 2011-09-12 2017-04-04 Microsoft Technology Licensing, Llc Explicit touch selection and cursor placement
CN104111787B (en) * 2013-04-18 2018-09-28 三星电子(中国)研发中心 A kind of method and apparatus for realizing text editing on touch screen interface
WO2015027505A1 (en) * 2013-08-31 2015-03-05 华为技术有限公司 Text processing method and touchscreen device
CN103699324B (en) * 2013-12-24 2016-08-17 小米科技有限责任公司 A kind of method and apparatus that cursor position is controlled

Also Published As

Publication number Publication date
CN108351740A (en) 2018-07-31

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15906026

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15906026

Country of ref document: EP

Kind code of ref document: A1