WO2019136964A1 - Text selecting method and terminal - Google Patents

Text selecting method and terminal

Info

Publication number
WO2019136964A1
Authority
WO
WIPO (PCT)
Prior art keywords
text
terminal
target text
user
target
Prior art date
Application number
PCT/CN2018/099447
Other languages
French (fr)
Chinese (zh)
Inventor
李昂 (Li Ang)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Priority to CN201810025128.1
Priority to CN201810327466.0 (patent CN110032324A)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2019136964A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures

Abstract

Embodiments of the present invention relate to the field of communications, and provide a text selection method and a terminal, capable of reducing the occurrence of selecting more or less text than desired during text selection and improving the operation efficiency of a terminal during text selection. The method comprises: a terminal displays a graphical user interface on a touch screen; the terminal receives a first gesture acting on the graphical user interface, the first gesture comprising a closed trajectory; in response to the first gesture, the terminal determines a target region in the graphical user interface corresponding to the closed trajectory; the terminal determines a first target text comprised in the target region; the terminal performs semantic analysis on the first target text to determine a second target text, the second target text being different from the first target text; and the terminal marks the second target text in the graphical user interface.

Description

Text selection method and terminal

This application claims priority to Chinese Patent Application No. 201810327466.0, filed with the Chinese Patent Office on April 12, 2018 and entitled "A Text Selection Method and Terminal", and to Chinese Patent Application No. 201810025128.1, filed with the Chinese Patent Office on January 11, 2018, both of which are incorporated herein by reference in their entireties.

Technical field

The present invention relates to the field of communications, and in particular, to a text selection method and a terminal.

Background

A text view is one of the controls used to display a character string. When a terminal such as a mobile phone displays text through a text view control and a user inputs a specified operation for editing the text (for example, a long-press operation), as shown in FIG. 1, the terminal can display the text in the text view in an editable state and display the editing options supported for the text (for example, the copy 11, translate 12, or delete 13 options in FIG. 1).

At this time, the user can drag the first cursor 14a and the second cursor 14b located at the two ends of the selected text to expand or reduce the selection character by character, and then choose the desired editing option to perform the corresponding editing function. However, when the font is small or the text contains many characters, the user is likely to select more or less text than intended when dragging the first cursor 14a or the second cursor 14b, which reduces the operation efficiency of the terminal during text selection.

Summary of the invention

The embodiments of the present invention provide a text selection method and a terminal, which can reduce the phenomenon of selecting more or less text than intended during text selection and improve the operation efficiency of the terminal during text selection.

To achieve the above objectives, the present application adopts the following technical solutions:

In a first aspect, the present application provides a text selection method, including: displaying, by a terminal, a graphical user interface (GUI) on a touch screen; receiving, by the terminal, a first gesture acting on the graphical user interface, where the first gesture can generate a closed trajectory in the graphical user interface; determining, by the terminal in response to the first gesture, a target area corresponding to the closed trajectory in the graphical user interface; determining, by the terminal, a first target text included in the target area; performing semantic analysis on the first target text to determine a second target text that is different from the first target text; and marking the second target text in the graphical user interface. In other words, when the user selects target text on the graphical user interface by performing the first gesture, the terminal may correct the first target text actually circled by the user to the second target text according to the semantics, so that the second target text finally selected for the user is semantically more accurate. This reduces the phenomenon of selecting more or less text than intended and improves the operation efficiency of the terminal during text selection.
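
For illustration only, the following is a minimal Java sketch of this pipeline on Android; it is not the disclosed implementation, and the semanticCorrect() and markText() helpers are hypothetical placeholders for the semantic analysis and marking steps described above.

    import android.graphics.Path;
    import android.graphics.RectF;
    import android.text.Layout;
    import android.widget.TextView;

    /** Hypothetical sketch of the first-aspect pipeline; not the disclosed implementation. */
    public class TextSelectionSketch {

        public void onClosedGesture(Path closedTrajectory, TextView textView) {
            // 1. Determine the target area corresponding to the closed trajectory.
            RectF targetArea = new RectF();
            closedTrajectory.computeBounds(targetArea, true);

            // 2. Determine the first target text contained in the target area.
            String firstTargetText = extractTextInArea(textView, targetArea);

            // 3. Perform semantic analysis to obtain the second target text.
            String secondTargetText = semanticCorrect(firstTargetText);

            // 4. Mark (e.g., highlight) the second target text in the GUI.
            markText(textView, secondTargetText);
        }

        // Map the area to character offsets via the TextView's Layout
        // (assumes the view has already been laid out, so getLayout() is non-null).
        private String extractTextInArea(TextView tv, RectF area) {
            Layout layout = tv.getLayout();
            int start = layout.getOffsetForHorizontal(
                    layout.getLineForVertical((int) area.top), area.left);
            int end = layout.getOffsetForHorizontal(
                    layout.getLineForVertical((int) area.bottom), area.right);
            return tv.getText().subSequence(Math.min(start, end), Math.max(start, end)).toString();
        }

        // Placeholder: snap the selection to the nearest semantically complete text.
        private String semanticCorrect(String firstTargetText) {
            return firstTargetText.trim(); // the real semantic analysis is not specified here
        }

        // Placeholder: highlight the text, e.g., via a Spannable background span.
        private void markText(TextView tv, String text) { /* highlighting elided */ }
    }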

In a possible design method, after the terminal displays the graphical user interface on the touch screen and before the terminal receives the first gesture acting on the graphical user interface, the method further includes: displaying, by the terminal, a first prompt in the graphical user interface, where the first prompt includes a selection box for circling text information. In this case, receiving the first gesture acting on the graphical user interface specifically includes: receiving, by the terminal, a first gesture in which the user circles the first target text using the selection box in the graphical user interface. That is to say, before the user circles text, the terminal may prompt the user to circle the target text using the selection box provided by the terminal. This reduces the chance of selecting more or less text than intended when the user manually slides on the graphical user interface to draw a first gesture with a closed trajectory, and improves the user experience.

In a possible design method, after the terminal marks the second target text in the graphical user interface, the method further includes: receiving, by the terminal, a click operation acting on a first character, where the first character is text outside the second target text in the graphical user interface; and in response to the click operation, expanding, by the terminal, the text in the closed region formed between the first target text and the row and column in which the first character is located into a third target text. That is to say, the user can manually expand the second target text into the third target text through a click operation, with the clicked first character serving as the start position or end position of the third target text. This allows the terminal to conveniently let the user select text flexibly, further improving the intelligent interaction between the terminal and the user.

In a possible design method, after the terminal performs semantic analysis on the first target text to determine the second target text, the method may further include: displaying, by the terminal, a first cursor at the start position of the second target text and a second cursor at the end position.

Then, the user can expand or deselect the second target text by dragging a cursor. For example, the terminal may receive a drag operation acting on the first cursor or the second cursor; in response to the drag operation, the terminal may expand the second target text into a third target text in units of phrases, or deselect text in the second target text in units of phrases. Because a phrase is the smallest unit of text with complete semantics, expanding or deselecting the target text in units of phrases reduces the chance that the selected text is semantically incomplete.
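
As a hedged illustration of "units of phrases", a drag handler might snap the dragged selection boundary to the nearest phrase boundary. The phraseBoundaries() helper below is a hypothetical stand-in for the segmentation that the application leaves unspecified; Selection.setSelection() is the standard Android call for moving a selection on a Spannable.

    import android.text.Selection;
    import android.text.Spannable;

    /** Hypothetical sketch: snap a dragged selection boundary to phrase boundaries. */
    public class PhraseSnapHelper {

        // Assumed helper: returns sorted character offsets of phrase boundaries,
        // e.g., produced by a word/phrase segmenter; not specified by the application.
        private int[] phraseBoundaries(CharSequence text) {
            return new int[] {0, text.length()};
        }

        // Snap rawOffset (where the user dragged the cursor) to the nearest phrase boundary.
        public void onCursorDragged(Spannable text, int selectionStart, int rawOffset) {
            int snapped = rawOffset;
            int best = Integer.MAX_VALUE;
            for (int b : phraseBoundaries(text)) {
                int d = Math.abs(b - rawOffset);
                if (d < best) { best = d; snapped = b; }
            }
            // Expand or shrink the marked region in whole-phrase units.
            Selection.setSelection(text, Math.min(selectionStart, snapped),
                                   Math.max(selectionStart, snapped));
        }
    }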

In a possible design method, after the terminal receives the drag operation on the first cursor or the second cursor, the method further includes: when detecting that the user's finger has not left the touch screen, hiding, by the terminal, the first cursor or the second cursor. In this way, when the terminal expands or deselects text in units of phrases, the problem that the cursor does not follow the user's drag operation, which would degrade the user experience, can be avoided.

In a possible design method, after the terminal displays the graphical user interface on the touch screen and before the terminal receives the first gesture acting on the graphical user interface, the method further includes: receiving, by the terminal, a second gesture acting on the graphical user interface, where the second gesture is used to activate the function of circling text.

In a possible design method, after the terminal determines the target area corresponding to the closed trajectory in the graphical user interface, the method further includes: displaying, by the terminal, a boundary of the target area in the graphical user interface, where at least one control block is disposed on the boundary of the target area and is configured to adjust the position or size of the target area. The terminal can then receive a third gesture acting on the control block, and adjust the position or size of the target area according to the third gesture, thereby modifying the target area circled by the user through the first gesture and the first target text within it.

In a possible design method, the second target text includes the first target text, and the number of characters in the second target text is greater than the number of characters in the first target text; or the first target text includes the second target text, and the number of characters in the second target text is smaller than the number of characters in the first target text; or the graphical user interface is a short message interface; or the graphical user interface is an interface containing a picture; or the first target text or the second target text is highlighted in the graphical user interface; or the terminal is a mobile phone.

In a second aspect, the present application provides a terminal, including: a display unit, configured to display a graphical user interface (GUI) on a touch screen; an acquiring unit, configured to receive a first gesture acting on the graphical user interface, where the first gesture includes a closed trajectory; a determining unit, configured to determine a target area corresponding to the closed trajectory in the graphical user interface and determine a first target text included in the target area; and a correcting unit, configured to perform semantic analysis on the first target text to determine a second target text, where the second target text is different from the first target text. The display unit is further configured to mark the second target text in the graphical user interface.

In a possible design method, the display unit is further configured to display a first prompt in the graphical user interface, where the first prompt includes a selection box for circling text information; the acquiring unit is specifically configured to receive a first gesture in which the user circles the first target text using the selection box in the graphical user interface.

In a possible design method, the acquiring unit is further configured to receive a click operation on a first character, where the first character is text other than the second target text in the graphical user interface; the correcting unit is further configured to expand the text in the closed area formed between the first target text and the row and column where the first character is located into a third target text.

In a possible design method, the display unit is further configured to: display a first cursor at a start position of the second target text, and display a second cursor at the end position.

At this time, the acquiring unit is further configured to receive a drag operation acting on the first cursor or the second cursor; the correcting unit is further configured to expand the second target text into a third target text in units of phrases, or to deselect text in the second target text in units of phrases.

In a possible design method, the determining unit is further configured to: when detecting that the user's finger has not left the touch screen, instruct the display unit not to display the first cursor or the second cursor.

In a possible design method, the acquiring unit is further configured to receive a second gesture acting on the graphical user interface, where the second gesture is used to activate the function of circling text.

In a possible design method, the display unit is further configured to display a boundary of the target area in the graphical user interface, where at least one control block is disposed on the boundary of the target area and is used to adjust the position or size of the target area; the acquiring unit is further configured to receive a third gesture acting on the control block; and the determining unit is further configured to adjust the position or size of the target area according to the third gesture.

In a third aspect, the present application provides a terminal, including a touch screen, one or more processors, a memory, a plurality of applications, and one or more programs, where the processor is coupled to the memory and the one or more programs are stored in the memory. When the terminal runs, the processor executes the one or more programs stored in the memory, causing the terminal to perform any of the text selection methods described above.

In a fourth aspect, the present application provides a computer readable storage medium having instructions stored therein that, when executed on any of the terminals described above, cause the terminal to perform any of the text selection methods described above.

In a fifth aspect, the present application provides a computer program product comprising instructions that, when run on any of the above terminals, cause the terminal to perform any of the above text selection methods.

In the present application, the names of the components in the above terminal do not constitute a limitation on the device itself; in actual implementations, these components may appear under other names. As long as the functions of the various components are similar to those in the embodiments of the present application, they fall within the scope of the claims and their equivalents.

In addition, the technical effects brought by the design method of any one of the second aspect to the fifth aspect can be referred to the technical effects brought by different design methods in the above first aspect, and details are not described herein again.

Brief Description of Drawings

FIG. 1 is a schematic diagram 1 of a scenario of editing text in a terminal in the prior art;

FIG. 2 is a schematic structural diagram 1 of a terminal according to an embodiment of the present application;

FIG. 3 is a schematic structural diagram of an operating system according to an embodiment of the present application;

FIG. 4 is a schematic diagram 2 of a scenario of editing text in a terminal in the prior art;

FIG. 5 is a schematic diagram 3 of a scenario of editing text in a terminal in the prior art;

FIG. 6 is a schematic flowchart of a text selection method according to an embodiment of the present application;

FIG. 7 is a schematic diagram 1 of a text selection method according to an embodiment of the present application;

FIG. 8 is a schematic diagram 2 of a text selection method according to an embodiment of the present application;

FIG. 9 is a schematic diagram 3 of a text selection method according to an embodiment of the present application;

FIG. 10 is a schematic diagram 4 of a text selection method according to an embodiment of the present application;

FIG. 11 is a schematic diagram 5 of a text selection method according to an embodiment of the present application;

FIG. 12A is a schematic diagram 6 of a text selection method according to an embodiment of the present application;

FIG. 12B is a schematic diagram 7 of a text selection method according to an embodiment of the present application;

FIG. 13A is a schematic diagram 8 of a text selection method according to an embodiment of the present application;

FIG. 13B is a schematic diagram 9 of a text selection method according to an embodiment of the present application;

FIG. 14 is a schematic diagram 10 of a text selection method according to an embodiment of the present application;

FIG. 15 is a schematic diagram 11 of a text selection method according to an embodiment of the present application;

FIG. 16 is a schematic structural diagram 2 of a terminal according to an embodiment of the present application;

FIG. 17 is a schematic structural diagram 3 of a terminal according to an embodiment of the present application;

FIG. 18 is a schematic structural diagram 4 of a terminal according to an embodiment of the present application.

Detailed Description of Embodiments

In order to facilitate a clear understanding of the following embodiments, a brief introduction of the related art is first given:

Optical character recognition (OCR) technology refers to a technology in which a terminal (such as a mobile phone) optically converts printed characters in an image into a black-and-white dot-matrix image file and then uses recognition software to convert the text in the image into a text format for further editing and processing, such as word processing. OCR technology can recognize text information contained in image-type files such as screenshots.

Control: a software component, usually contained in an application, that controls the data processed by the application and the interaction with that data. A control can provide the user with certain operational functions or display certain content. For controls presented in a graphical user interface (GUI), the user can interact with a control through direct manipulation to read or edit information of the application. In general, controls can include interface elements such as icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, and widgets.

In some embodiments of the present application, the attribute that controls whether a control is visible is referred to as the visibility attribute. The visibility attribute has three values: visible, invisible, and gone. Visible means the control is visible; invisible means the control is invisible but still occupies its layout position; gone means the control is invisible and does not occupy its layout position, in which case other controls can occupy the layout position of the control whose attribute is gone. In the embodiments of the present application, a control whose visibility attribute is visible can be simply understood as a control that the developer intends the user to see, and a control whose visibility attribute is invisible or gone can be simply understood as a control that the developer does not intend the user to see. In addition, during program development, the visibility attribute of some controls can be switched as needed; for example, a control can be set to invisible by default and changed to visible when needed, that is, switched from invisible to visible.
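
On Android, for example, these three values correspond directly to the visibility property of the standard View API; the following brief sketch illustrates them:

    import android.view.View;

    public final class VisibilityExample {
        // The three visibility values described above, as exposed by Android's View API.
        public static void demonstrate(View control) {
            control.setVisibility(View.VISIBLE);   // visible: drawn and occupies its layout position
            control.setVisibility(View.INVISIBLE); // invisible: hidden but still occupies its layout position
            control.setVisibility(View.GONE);      // gone: hidden and frees its layout position for other controls
        }
    }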

In some embodiments of the present application, the attribute that controls whether a control can be edited is referred to as the edit attribute. The values of the edit attribute are editable and non-editable. Editable indicates that the content displayed in the control (such as text information) allows the user to perform one or more editing operations, such as copy, cut-and-paste, or delete operations; for example, text view type controls are generally editable controls. Non-editable indicates that the content displayed in the control does not allow the user to perform any editing operation; for example, image view type controls are generally non-editable controls.

The following describes a terminal, a GUI for such a terminal, and specific embodiments for selecting text using such a terminal. In some embodiments of the present application, the terminal may be a portable terminal that also includes other functions such as a personal digital assistant or a music player function, for example, a mobile phone, a tablet, or a wearable terminal having a wireless communication function (such as a smart watch). Exemplary embodiments of the portable terminal include, but are not limited to, portable terminals running operating systems such as iOS® or Android®. The portable terminal may also be another portable terminal, such as a laptop having a touch-sensitive surface such as a touch panel. It should also be understood that in some other embodiments of the present application, the terminal may not be a portable terminal but a desktop computer having a touch-sensitive surface such as a touch panel.

As shown in FIG. 2, the terminal in the embodiments of the present application may be the mobile phone 100. The embodiments are described below using the mobile phone 100 as an example. It should be understood that the illustrated mobile phone 100 is only one example of a terminal; the mobile phone 100 may have more or fewer components than shown in the figure, may combine two or more components, or may have a different component configuration. The various components shown in the figure can be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing or application-specific integrated circuits.

As shown in FIG. 2, the mobile phone 100 may specifically include components such as a processor 101, a radio frequency (RF) circuit 102, a memory 103, a touch screen 104, a Bluetooth device 105, one or more sensors 106, a WI-FI device 107, a positioning device 108, an audio circuit 109, a peripheral interface 110, a power system 111, and a fingerprint recognizer 112. These components can communicate over one or more communication buses or signal lines (not shown in FIG. 2). It will be understood by those skilled in the art that the hardware structure shown in FIG. 2 does not constitute a limitation on the mobile phone 100; the mobile phone 100 may include more or fewer components than illustrated, may combine some components, or may arrange the components differently.

The various components of the mobile phone 100 will be specifically described below with reference to FIG. 2:

The processor 101 is the control center of the mobile phone 100. It connects various parts of the mobile phone 100 using various interfaces and lines, and performs the various functions of the mobile phone 100 and processes data by running or executing applications stored in the memory 103 and calling data and instructions stored in the memory 103. In some embodiments, the processor 101 may include one or more processing units; the processor 101 may also integrate an application processor and a modem processor, where the application processor primarily handles the operating system, user interfaces, applications, and the like, and the modem processor primarily handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 101. For example, the processor 101 may be a Kirin 960 multi-core processor manufactured by Huawei Technologies Co., Ltd.

The radio frequency circuit 102 can be used to receive and transmit wireless signals during the transmission or reception of information or during a call. Specifically, the radio frequency circuit 102 can receive downlink data from a base station and deliver it to the processor 101 for processing, and can send uplink data to the base station. Generally, a radio frequency circuit includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency circuit 102 can also communicate with other devices through wireless communication. The wireless communication can use any communication standard or protocol, including but not limited to the global system for mobile communications, general packet radio service, code division multiple access, wideband code division multiple access, long term evolution, email, short message service, and the like.

The memory 103 is used to store applications and data, and the processor 101 performs the various functions and data processing of the mobile phone 100 by running the applications and data stored in the memory 103. The memory 103 mainly includes a program storage area and a data storage area, where the program storage area can store the operating system and the applications required by at least one function (such as a sound playing function or an image playing function), and the data storage area can store data created during use of the mobile phone 100 (such as audio data and a phone book). Further, the memory 103 may include a high-speed random access memory, and may also include a nonvolatile memory such as a magnetic disk storage device, a flash memory device, or another volatile solid-state storage device. The memory 103 can store various operating systems, such as the iOS® operating system developed by Apple Inc. or the Android® operating system developed by Google Inc.

The touch screen 104 can include a touch-sensitive surface 104-1 and a display 104-2. The touch-sensitive surface 104-1 (for example, a touch panel) can collect touch events performed by the user of the mobile phone 100 on or near it (for example, an operation performed by the user with a finger, a stylus, or another suitable object on or near the touch-sensitive surface 104-1), and send the collected touch information to another device such as the processor 101. A touch event performed by the user near the touch-sensitive surface 104-1 may be referred to as a hovering touch; a hovering touch means that the user does not need to directly touch the touch panel in order to select, move, or drag a target (for example, an icon), but only needs to be located near the terminal in order to perform the desired function. In the context of a hovering touch application, the terms "touch" and "contact" do not imply direct contact with the touch screen, but contact near or close to it. The touch-sensitive surface 104-1 capable of hovering touch can be implemented using capacitive, infrared, or ultrasonic sensing, or the like. The touch-sensitive surface 104-1 can include two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts the touch information into contact coordinates, and sends them to the processor 101. The touch controller can also receive instructions from the processor 101 and execute them. In addition, the touch-sensitive surface 104-1 can be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. The display (also referred to as a display screen) 104-2 can be used to display information entered by the user or provided to the user, as well as the various menus of the mobile phone 100. The display 104-2 can be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The touch-sensitive surface 104-1 can be overlaid on the display 104-2; when the touch-sensitive surface 104-1 detects a touch event on or near it, the event is transmitted to the processor 101 to determine the type of the touch event, and the processor 101 can then provide a corresponding visual output on the display 104-2 depending on the type of the touch event. Although in FIG. 2 the touch-sensitive surface 104-1 and the display screen 104-2 are implemented as two separate components to implement the input and output functions of the mobile phone 100, in some embodiments the touch-sensitive surface 104-1 is integrated with the display screen 104-2 to implement the input and output functions of the mobile phone 100. It can be understood that the touch screen 104 is formed by stacking multiple layers of materials; only the touch-sensitive surface (layer) and the display screen (layer) are described in the embodiments of the present application, and the other layers are not described.
Additionally, in some other embodiments of the present application, the touch-sensitive surface 104-1 can be overlaid on the display 104-2 with the size of the touch-sensitive surface 104-1 greater than that of the display 104-2, so that the display 104-2 is completely covered by the touch-sensitive surface 104-1; alternatively, the touch-sensitive surface 104-1 may be disposed on the front of the mobile phone 100 in a full-panel form, that is, any touch by the user on the front of the mobile phone 100 can be perceived by the mobile phone, thereby achieving a full touch experience on the front of the phone. In some other embodiments, the touch-sensitive surface 104-1 is disposed on the front of the mobile phone 100 in a full-panel form, and the display screen 104-2 may also be disposed on the front of the mobile phone 100 in a full-panel form, so that a bezel-less structure can be achieved on the front of the phone. In some other embodiments of the present application, the touch screen 104 may further include one or more sets of sensor arrays, so that the touch screen 104 can sense the pressure exerted by the user while sensing the user's touch events.

The mobile phone 100 can also include a Bluetooth device 105 for enabling data exchange between the mobile phone 100 and other short-range terminals (for example, mobile phones or smart watches). The Bluetooth device in the embodiments of the present application may be an integrated circuit, a Bluetooth chip, or the like.

The mobile phone 100 can also include at least one type of sensor 106, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor can include an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display of the touch screen 104 according to the brightness of the ambient light, and the proximity sensor can turn off the power of the display when the mobile phone 100 moves to the ear. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (usually three axes) and can detect the magnitude and direction of gravity when stationary; it can be used for applications that identify the attitude of the mobile phone (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and for vibration-recognition-related functions (such as a pedometer or tapping). The mobile phone 100 can also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described in detail here.

In other embodiments of the present application, the mobile phone 100 may also have a fingerprint recognition function. For example, a fingerprint sensor can be placed on the back of the mobile phone 100 (for example, below the rear camera) or on the front of the mobile phone 100 (for example, below the touch screen 104). In addition, the fingerprint recognition function can also be implemented by configuring a fingerprint sensor in the touch screen 104, that is, the fingerprint sensor can be integrated with the touch screen 104 to implement the fingerprint recognition function of the mobile phone 100. In this case, the fingerprint sensor may be disposed in the touch screen 104, may be a part of the touch screen 104, or may be otherwise disposed in the touch screen 104. In addition, the fingerprint sensor can also be implemented as a full-panel fingerprint sensor, so that the touch screen 104 can be viewed as a panel on which a fingerprint can be collected at any location. The fingerprint sensor can send the collected fingerprint to the processor 101 so that the processor 101 can process the fingerprint (for example, fingerprint verification). The fingerprint sensor in the embodiments of the present application may employ any type of sensing technology, including but not limited to optical, capacitive, piezoelectric, or ultrasonic sensing technologies. In addition, for a specific technical solution of integrating a fingerprint sensor in a touch screen according to the embodiments of the present application, reference may be made to PCT Patent Application No. PCT/CN2017/084602, entitled "Input Method and Terminal", the entire contents of which are incorporated by reference in the various embodiments of the present application.

The WI-FI device 107 is configured to provide the mobile phone 100 with network access complying with WI-FI related standard protocols. The mobile phone 100 can access a WI-FI access point through the WI-FI device 107, helping the user to send and receive emails, browse web pages, access streaming media, and the like; it provides the user with wireless broadband Internet access. In some other embodiments, the WI-FI device 107 can also serve as a WI-FI wireless access point and provide WI-FI network access for other terminals.

The positioning device 108 is configured to provide a geographic location for the mobile phone 100. It can be understood that the positioning device 108 may specifically be a receiver of a positioning system such as the global positioning system (GPS) or the BeiDou satellite navigation system. After receiving the geographic location sent by the positioning system, the positioning device 108 sends the information to the processor 101 for processing, or sends it to the memory 103 for storage. In some other embodiments, the positioning device 108 can be an assisted global positioning system (AGPS) receiver. AGPS is an operation mode that performs GPS positioning with certain assistance: by using the signal of a base station in conjunction with the GPS satellite signal, it can enable the mobile phone 100 to locate faster. In an AGPS system, the positioning device 108 can obtain positioning assistance by communicating with an assisted positioning server (such as a mobile phone positioning server). Acting as an assistance server, the AGPS system helps the positioning device 108 complete ranging and positioning services; in this case, the assisted positioning server provides positioning assistance by communicating over a wireless communication network with the positioning device 108 (that is, the GPS receiver) of a terminal such as the mobile phone 100.

The audio circuit 109, a speaker 113, and a microphone 114 can provide an audio interface between the user and the mobile phone 100. The audio circuit 109 can convert received audio data into an electrical signal and transmit it to the speaker 113, which converts it into a sound signal for output; on the other hand, the microphone 114 converts a collected sound signal into an electrical signal, which the audio circuit 109 receives and converts into audio data, and then outputs the audio data to the RF circuit 102 for transmission to, for example, another mobile phone, or outputs the audio data to the memory 103 for further processing.

The peripheral interface 110 is used to provide various interfaces for external input/output devices (such as a keyboard, a mouse, an external display, an external memory, and a subscriber identity module card). For example, the mobile phone is connected to a mouse through a universal serial bus interface, and is electrically connected, through metal contacts on the card slot, to a subscriber identity module (SIM) card provided by a telecommunications carrier. The peripheral interface 110 can be used to couple the external input/output peripherals described above to the processor 101 and the memory 103.

The mobile phone 100 may further include a power supply device 111 (such as a battery and a power management chip) that supplies power to the various components. The battery may be logically connected to the processor 101 through the power management chip, so that functions such as charging, discharging, and power consumption management are implemented through the power supply device 111.

Although not shown in FIG. 2, the mobile phone 100 may further include a camera, a flash, a micro-projection device, a near field communication (NFC) device, and the like, which are not described herein.

Illustratively, the Android® operating system, a Linux-based mobile device operating system, can be stored in the memory 103 of the mobile phone 100 and implements various functions in combination with the hardware of the mobile phone 100 described above. The software architecture of the Android® operating system is described in detail below. It should be noted that the embodiments of the present application use the Android® operating system only as an example to illustrate the software environment required for the terminal to implement the technical solutions of the embodiments; those skilled in the art can understand that the embodiments of the present application can also be implemented on other operating systems.

Illustratively, FIG. 3 is a schematic diagram of the software architecture of the Android® operating system that can run on the above terminal. The software architecture can be divided into four layers: the application layer, the application framework layer, the function library layer, and the Linux kernel layer.

1. The application (Applications) layer

The application layer is the top layer of the operating system and includes the native applications of the operating system, such as email, text messaging, calls, calendar, browser, and contacts. Of course, developers can also write applications and install them into this layer. In general, applications are developed in the Java language by calling the application programming interfaces (APIs) provided by the application framework layer.

2. The application framework (Application Framework) layer

The application framework layer mainly provides developers with various APIs that can be used to access applications. Developers can interact with the underlying layers of the operating system (such as the function libraries or the Linux kernel) through the application framework to develop their own applications. The application framework is primarily a series of service and management systems of the Android operating system. The application framework mainly includes the following basic services:

Activity Manager: used to manage the application life cycle and provide common navigation rollback functions;

Content Providers: Used to manage data sharing and access between different applications;

Notification Manager: used to control the application to display prompt information (such as Alerts, Notifications, etc.) to the user in the status bar, lock screen interface, etc.;

Resource Manager: Provides non-code resources (such as strings, graphics, layout files, etc.) for use by the application;

Clipboard Manager: mainly provides copy or paste function inside the application or between applications;

View: a rich, extensible collection of views that can be used to build an application, specifically including lists, grids, text, buttons, and images. The main function of an image view is to display pictures, and it is generally presented in the GUI in the form of a non-editable control; the main function of a text view is to display strings, and it is generally presented in the GUI in the form of an editable control.

Location Manager: Mainly allows the application to access the geographic location of the terminal.

3. The function library (Libraries) layer

The function library layer supports the application framework and is an important link between the application framework layer and the Linux kernel layer. The function library layer includes libraries compiled from the C or C++ programming languages. These libraries can be used by different components in the operating system and serve developers through the application framework layer. Specifically, the function libraries may include a libc function library, which is specifically tailored for embedded Linux-based devices; they may also include a multimedia library (Media Framework), which supports playback and recording of audio and video in multiple encoding formats, as well as still image files and common audio and video encoding formats. The function libraries also include an interface management library (Surface Manager), which is mainly responsible for managing access to the display system, managing the interaction between display and access operations when multiple applications are executed, and compositing 2D and 3D drawing for display.

The function library layer may also include other function libraries that implement various functions of the mobile phone, for example: SGL (Scalable Graphics Library), a 2D graphics image processing engine based on XML (Extensible Markup Language) files; SSL (Secure Sockets Layer), located between the TCP/IP protocol and various application layer protocols to provide support for secure data communication; OpenGL/ES for 3D effect support; SQLite, a relational database engine; Webkit, a web browser engine; FreeType for bitmap and vector font support; and so on.

Android Runtime is the runtime environment on the Android® operating system and a new virtual machine used by the Android® operating system. Android Runtime uses AOT (Ahead-Of-Time) technology: when an application is first installed, the application's bytecode is pre-compiled into machine code, making the application a true local application; when it is run again, the compilation step is eliminated, and startup and execution become faster.

In some other embodiments of the present application, Android Runtime may also be replaced by the core libraries (Core Libraries) and the Dalvik virtual machine. The core libraries provide most of the functions of the Java language APIs and provide an interface for the application framework layer, which calls the underlying libraries mainly through the Java native interface (JNI). The core libraries also contain some core APIs of the operating system, such as android.os, android.net, and android.media. The Dalvik virtual machine uses a JIT (Just-In-Time) runtime compilation mechanism: each time a process is started, the virtual machine needs to recompile the bytecode in the background, which has a certain impact on startup speed. Each application runs in an instance of the Dalvik virtual machine, and each Dalvik virtual machine instance is a separate process space. The Dalvik virtual machine is designed to run multiple virtual machines efficiently on a single device. The Dalvik virtual machine executable file format is .dex, a compression format designed for Dalvik that is suitable for systems with limited memory and processor speed. It should be mentioned that the Dalvik virtual machine relies on the Linux kernel to provide basic functions (threading and low-level memory management). It can be understood that Android Runtime and Dalvik are different types of virtual machines, and those skilled in the art can select different types of virtual machines in different situations.

4. The Linux kernel (Linux Kernel) layer

This layer provides the core system services of the operating system, such as security, memory management, process management, the network protocol stack, and the driver model, all based on the Linux kernel. The Linux kernel also serves as an abstraction layer between the hardware and the software stack. This layer contains many drivers related to mobile devices, the main ones being: a display driver; a Linux-based frame buffer driver; a keyboard driver as an input device; a flash driver based on memory technology devices; a camera driver; an audio driver; a WI-FI driver; and so on.

In the embodiments of the present application, taking the Android operating system as an example, as shown in FIG. 4, the application framework layer may further include a clipboard manager (Clipboard Manager) for managing text information selected by the user in an editable control and providing functions such as copying and pasting of text and other information.

The Clipboard Manager can be obtained through the function getSystemService(CLIPBOARD_SERVICE), and the terminal manages the copying and pasting of data between two applications or within an application through the Clipboard Manager. ClipData is the clip object, which contains the data description information and the data itself. The clipboard holds only one clip object at a time; when a new clip object is placed on it, the previous clip object is no longer saved in the clipboard. A clip object can contain one or more ClipData.Item objects, and an Item object can be added to a clip object through the function addItem(ClipData.Item item). The data in an Item object may specifically contain text, a uniform resource identifier (URI), or an Intent. Multiple ClipData.Item objects can be added to one clip object, which allows the user to copy multiple pieces of selected content into the same clip object; for example, if there is a list widget that allows the user to select multiple options at a time, the Clipboard Manager can copy all the selected options to the clipboard at once.
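
The calls named above are standard Android clipboard APIs; the following is a minimal copy-side sketch, assuming it runs with access to a Context (the "selection" label is an arbitrary choice for illustration):

    import android.content.ClipData;
    import android.content.ClipboardManager;
    import android.content.Context;

    public final class ClipboardCopyExample {
        // Copy one or more selected strings into a single clip object.
        public static void copySelections(Context context, CharSequence first, CharSequence second) {
            ClipboardManager clipManager =
                    (ClipboardManager) context.getSystemService(Context.CLIPBOARD_SERVICE);

            // A clip object with its first ClipData.Item.
            ClipData clip = ClipData.newPlainText("selection", first);

            // addItem() lets the same clip object carry multiple selected items.
            clip.addItem(new ClipData.Item(second));

            // Placing the clip on the clipboard replaces any previous clip object.
            clipManager.setPrimaryClip(clip);
        }
    }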

Exemplarily, with reference to the text-copying application scenario shown in FIG. 4 and FIG. 5, as shown in (a) of FIG. 5, WeChat displays a chat interface with Mike when running in the foreground. The interface includes a control 504 (return button icon), a control 505 (title bar), a control 506 (chat detail button icon), a control 507 (avatar icon), a control 508 (conversation content), a control 511 (voice input button icon), a control 512 (input box), and a control 513 (option button icon). When the touch screen detects that the user inputs a long-press operation for editing on the control 508, the IC chip integrated in the touch screen can report touch parameters of the long-press operation, such as the coordinate point and the touch duration, to the Android® operating system of the mobile phone 100. After WeChat in the operating system obtains the above touch parameters, it can determine that the user has performed a long-press operation on the control 508. Further, WeChat can obtain the clipboard service among the system services by calling the getSystemService(CLIPBOARD_SERVICE) function. At this time, as shown in (b) of FIG. 5, because the control 508 is an editable control, the terminal can display a text selection menu 510 and cursors for adjusting the selected text (the cursors include a first cursor 520a at the start of the selected text and a second cursor 520b at the end of the selected text). The text selection menu 510 includes the editing operations supported by the control 508, such as copy 510a, forward 510b, and delete 510c.

Then, the user can select the desired text content in the control 508 by dragging the first cursor 520a and the second cursor 520b. When it is detected that the user clicks the copy 510a option, the clipboard service can call the addItem(ClipData.Item item) function to add the text between the first cursor 520a and the second cursor 520b, as the target text selected by the user, to a new clip object. The clipboard service then puts the new clip object onto the clipboard through the function clipManager.setPrimaryClip(clip); at this point, the original clip object in the clipboard is deleted. Subsequently, as shown in FIG. 4, when the user performs a paste operation in another application or another interface, the clipboard service can be called again to copy the target text saved in the clip object in the clipboard to the position selected by the user, thereby completing the entire copy-and-paste operation.
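
The paste side of this flow reads the current clip object back from the clipboard; a corresponding sketch using the same standard Android APIs:

    import android.content.ClipData;
    import android.content.ClipboardManager;
    import android.content.Context;

    public final class ClipboardPasteExample {
        // Retrieve the text of the clip currently on the clipboard, or null if none.
        public static CharSequence pasteText(Context context) {
            ClipboardManager clipManager =
                    (ClipboardManager) context.getSystemService(Context.CLIPBOARD_SERVICE);
            if (!clipManager.hasPrimaryClip()) {
                return null; // the clipboard holds at most one clip object at a time
            }
            ClipData clip = clipManager.getPrimaryClip();
            // coerceToText() converts text, URI, or Intent items into a CharSequence.
            return clip.getItemAt(0).coerceToText(context);
        }
    }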

However, in the above copy-and-paste process, the user needs to drag the first cursor 520a or the second cursor 520b to select the desired target text, and it is easy to select more or less text than intended during the dragging, which reduces the operation efficiency of the terminal in extracting the target text for the user.

In this regard, a text selection method provided by an embodiment of the present application is specifically described below with reference to the accompanying drawings. The following description takes the case where the terminal is a mobile phone as an example. FIG. 6 is a schematic flowchart of a text selection method according to an embodiment of the present application. The method may specifically include the following steps.

S601. The terminal displays a graphical user interface on the touch screen.

S602. The terminal receives a first gesture acting on the graphical user interface, where the first gesture includes a closed trajectory.

In the process of using the mobile phone, if the user is interested in some text information in the graphical user interface displayed by the mobile phone, and wants to copy, modify, or share that text information, the user needs to first select the text information to be edited (which may be referred to as target text in the embodiments of the present application). At this time, the user may input a first gesture to a first control containing the target text, where the first gesture is used to instruct the mobile phone to select the target text corresponding to the region of the graphical user interface on which the first gesture acts.

The first gesture may be any gesture whose motion trajectory is a closed figure, or any gesture in response to which the mobile phone can form a closed trajectory. For example, the first gesture may specifically be a circling or frame-selection operation in which the motion trajectory of the user's finger or stylus sliding on the touch screen can form a closed figure. Alternatively, the first gesture may be an operation of clicking, double-clicking, or pressing firmly along a closed trajectory after the user touches the touch screen; the embodiments of the present application impose no limitation on this.
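
One plausible way (an assumption for illustration, not stated in the application) to decide whether a sliding gesture forms a closed figure is to test whether its end point returns close to its start point; the pixel threshold below is an arbitrary choice:

    import android.view.MotionEvent;

    /** Hypothetical closed-trajectory detector for a sliding first gesture. */
    public class ClosedTrajectoryDetector {
        private static final float CLOSE_ENOUGH_PX = 48f; // assumed snapping threshold
        private float startX, startY;

        // Returns true on ACTION_UP if the trajectory can be treated as closed.
        public boolean onTouchEvent(MotionEvent event) {
            switch (event.getActionMasked()) {
                case MotionEvent.ACTION_DOWN:
                    startX = event.getX();
                    startY = event.getY();
                    return false;
                case MotionEvent.ACTION_UP:
                    // Closed figure: the finger lifted near where it started.
                    float dx = event.getX() - startX;
                    float dy = event.getY() - startY;
                    return Math.hypot(dx, dy) < CLOSE_ENOUGH_PX;
                default:
                    return false;
            }
        }
    }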

In some embodiments of the present application, after the terminal displays the graphical user interface including the first control (that is, step S601), the terminal may first receive a second gesture acting on the graphical user interface (for example, on the first control in the graphical user interface), where the second gesture is used to activate the function of circling text in the graphical user interface. For example, the second gesture can be a long-press gesture.

Exemplarily, as shown in FIG. 7, the mobile phone displays a chat interface with Sara, and the controls 504 to 513 in the chat interface are all visible controls, among which the control 504, the control 505, and the control 508 all include text information. The user can perform a long-press operation (that is, the second gesture) on any of the control 504, the control 505, and the control 508 (that is, the first control) as needed. After the touch screen detects the long-press operation, it reports touch parameters such as the detected touch duration to the running WeChat in the application layer (that is, the application to which the chat interface belongs) to instruct WeChat to activate the text-circling function in the chat interface, so as to provide subsequent text editing services to the user.

Still taking the user's long-press operation on the control 508 in FIG. 7 as an example, after WeChat obtains the touch event corresponding to the long-press operation, it may further determine whether the activated control 508 is an editable control. The edit attribute of a control of a given name or type is fixed; for example, text view type controls are editable controls, and image view type controls are non-editable controls. Therefore, WeChat can query the name or type of the control 508 by calling a related API to determine whether the control 508 is an editable control.
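
For example, such a type query can be written with runtime type inspection; the following minimal sketch mirrors the rule stated above (treating all other types as non-editable is an assumption made for illustration):

    import android.view.View;
    import android.widget.ImageView;
    import android.widget.TextView;

    public final class EditableCheckExample {
        public static boolean isEditableControl(View control) {
            // Text view type: editable text content.
            if (control instanceof TextView) {
                return true;
            }
            // Image view type (and, by assumption, any other type): non-editable.
            return false;
        }
    }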

Exemplarily, in the chat interface shown in FIG. 7, the controls containing text information include the control 504, the control 505, and the control 508, where the control 508 is an editable control of the text view type, and the control 504 and the control 505 are non-editable controls of the image view type.

Then, if the first control selected by the user is an editable control, when the mobile phone subsequently receives the first gesture of the user acting on the control 508 (for example, a circling operation), the mobile phone can call the Clipboard Manager in the system service to implement functions such as text selection, copying, and pasting, and extract the target text (that is, the first target text) in the target area circled by the user by performing the following steps S603-S606.

Optionally, when the mobile phone determines that the first control is an editable control, if the mobile phone receives the first gesture of selecting text in the first control, it may mark, by animation, voice, or highlighting, the target area selected by the user in the first control through the first gesture. Alternatively, the mobile phone may mark, with two cursors, the start position and the end position of the first target text circled by the user through the first gesture.

Correspondingly, if the first control selected by the user is a non-editable control, the mobile phone may first convert the text information in the first control into an editable state. Then, when the mobile phone subsequently receives the first gesture (for example, a circling operation) of the user acting on the control, the text editing function can be realized by performing the following steps S603-S606. That is to say, in the embodiments of the present application, the text editing function can also be implemented in application scenarios where the text in a picture cannot be edited directly, thereby improving the operation efficiency of text editing on the mobile phone.

Exemplarily, if the first control on which the user performs the above second gesture is a non-editable control, such as an image view type control, the mobile phone may further extract the text information contained in the first control by using OCR technology. In this way, when the mobile phone subsequently receives the first gesture, it may determine, based on the text information extracted by OCR technology, the first target text contained in the target area circled by the first gesture.

In other embodiments of the present application, for an image view type control such as a picture, the mobile phone may extract the text information included in the control based on OCR technology when generating or displaying the control, and store the text information in the content description field of the control.

Then, when the mobile phone determines that the first control on which the second gesture acts is a non-editable control, the mobile phone may first call the interface View.getContentDescription() to query whether the content description field stores the text information of the first control. If the text information of the first control is stored, the mobile phone may extract the text information included in the first control from the content description field; if it is not stored, the mobile phone may identify and extract the text information included in the first control by OCR technology.
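A minimal sketch of this fallback, assuming an Android-style View and a hypothetical OCR helper (recognizeTextWithOcr is not a real API, only a stand-in for whatever OCR engine the terminal integrates):

    import android.view.View;

    final class TextExtractor {
        // First try the cached text in the content description field; only
        // fall back to OCR when nothing was stored for this control.
        static CharSequence extractText(View control) {
            CharSequence cached = control.getContentDescription();
            if (cached != null && cached.length() > 0) {
                return cached;
            }
            return recognizeTextWithOcr(control);
        }

        private static CharSequence recognizeTextWithOcr(View control) {
            // Hypothetical helper: render the control to a bitmap and run an
            // OCR engine over it.
            return "";
        }
    }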

In addition, when the mobile phone receives a second gesture (for example, a long press operation) that the user applies to the user graphical interface to initiate the function of circling text in the user graphical interface, as shown in (a) of FIG. 8, still taking the control 508 as the first control, the handset can also display one or more selection boxes 803 (e.g., a first selection box and a second selection box) for circling text in the current chat interface. Alternatively, the mobile phone may also display a prompt 804 prompting the user to circle the target area in the current chat interface.

Further, as shown in (b)-(c) of FIG. 8, after the mobile phone detects a gesture in which the user selects one of the selection boxes (for example, the selection box 803), the mobile phone can determine, in response to the gesture, the selection box selected by the user. The selected selection box is then used to circle the target text required by the user. Further, the terminal may receive a first gesture in which the user circles the target text using the selection box 803. In response to the first gesture, the handset can form a closed trajectory in the user graphical interface in accordance with the shape of the selection box 803. At this time, the area corresponding to the closed trajectory is the target area 901 that the user desires to select, and the text information included in the target area 901 is the first target text 902.

Of course, the mobile phone may also dispense with the above selection box 803 in the display interface shown in (a) of FIG. 8; in that case, the user can manually draw a closed trajectory as the first gesture input on the touch screen of the mobile phone.

Exemplarily, as shown in (a)-(c) of FIG. 9, if the mobile phone detects that the motion trajectory of the user's finger in the first control (e.g., the control 508) is a closed figure (i.e., the first gesture), then after the mobile phone determines that the control 508 is an editable control, it can call the getSystemService(CLIPBOARD_SERVICE) function to obtain the clipboard service, and the Clipboard Manager in the clipboard service takes the closed area corresponding to the closed trajectory in the touch screen as the target area desired by the user, that is, the target area 901 shown in (c) of FIG. 9.
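The clipboard-service call named above, spelled out as a sketch (context is any valid Android Context; the surrounding helper class is an assumption for illustration):

    import android.content.ClipboardManager;
    import android.content.Context;

    final class ClipboardHelper {
        // Obtain the system clipboard service, as in the
        // getSystemService(CLIPBOARD_SERVICE) call described above.
        static ClipboardManager getClipboard(Context context) {
            return (ClipboardManager) context.getSystemService(Context.CLIPBOARD_SERVICE);
        }
    }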

S603. In response to the first gesture, the terminal determines a target area corresponding to the closed trajectory in the user graphical interface.

Still taking FIG. 8 or FIG. 9 as an example, after the mobile phone receives the first gesture that the user applies to the user graphical interface (e.g., the control 508) and that includes the closed trajectory, the mobile phone may take the area of the user graphical interface corresponding to the closed trajectory as the target area selected by the user (for example, the area 901 in FIG. 8 or FIG. 9).

As shown in (c) of FIG. 8, the closed trajectory formed after the user performs the first gesture in the control 508 using the selection box is the boundary line of the selection box. The mobile phone can then use the position coordinates of the boundary line of the selection box at this time as the position coordinates of the target area 901, thereby determining the target area 901 corresponding to the closed trajectory in the user graphical interface.

In still another embodiment of the present application, as shown in (c) of FIG. 8, when the mobile phone displays the target area 901, at least one control block 903 may be disposed on the boundary of the target area 901, and the control block 903 is used to adjust the position or size of the target area 901.

Then, as shown in FIG. 10, if the terminal receives a third gesture of the user acting on the control block 903, for example, an operation of dragging the control block 903, the terminal can adjust the position or size of the target area 901 according to the third gesture. For example, if the third gesture is the user dragging the control block 903, the terminal can expand or reduce the target area 901 in the direction in which the user drags the control block 903 to form the adjusted target area 901'; accordingly, the first target text selected in the target area 901' is also increased or decreased.
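One way to realize this adjustment, as a sketch only: treat the target area as an android.graphics.Rect and move the dragged edges by the drag delta. Which control block maps to which edge is an assumption; here the dragged block is taken to sit on the bottom-right corner:

    import android.graphics.Rect;

    final class TargetAreaAdjuster {
        // Expand or shrink the target area in the direction of the drag.
        static Rect adjust(Rect target, float dx, float dy) {
            Rect adjusted = new Rect(target);
            adjusted.right += Math.round(dx);   // drag right to expand, left to shrink
            adjusted.bottom += Math.round(dy);  // drag down to expand, up to shrink
            adjusted.sort();                    // keep left <= right and top <= bottom
            return adjusted;
        }
    }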

S604. The terminal determines the first target text included in the target area.

In step S604, after determining the target area (for example, the target area 901) circled by the user with the first gesture, the mobile phone may call the interface View.getText() or View.getContentDescription() to obtain the specific text content included in the target area (i.e., the first target text). For example, when the first control is a control of the text view type, the text field of the control stores all the text content within the control. The mobile phone can then extract, according to the coordinate information of the target area, the first target text corresponding to the target area from all the text content.
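For a TextView-like control, one plausible way to map the target area's coordinates to the first target text is through the view's text layout; this is a sketch under that assumption, not the only possible implementation:

    import android.graphics.Rect;
    import android.text.Layout;
    import android.widget.TextView;

    final class TargetTextResolver {
        // Map the target area's corners (in the view's local coordinates) to
        // character offsets, then cut the first target text out of the full
        // text returned by View.getText().
        static CharSequence resolve(TextView control, Rect targetArea) {
            Layout layout = control.getLayout();
            int firstLine = layout.getLineForVertical(targetArea.top);
            int lastLine = layout.getLineForVertical(targetArea.bottom);
            int start = layout.getOffsetForHorizontal(firstLine, targetArea.left);
            int end = layout.getOffsetForHorizontal(lastLine, targetArea.right);
            CharSequence all = control.getText();
            return all.subSequence(Math.min(start, end), Math.max(start, end));
        }
    }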

As shown in FIG. 8 or FIG. 9, for the control 508, the mobile phone can call the interface View.getText() in conjunction with the target area 901 determined in step S603 to obtain the text content of the first target text 902 in the target area 901 circled by the user:

"Ready to go out

Pieces. If you go abroad for sightseeing

Hand in passport and sign in advance"

The first target text in the target area may be marked by highlighting, bolding, or the like, so that the user can accurately know the specific text content selected, and can subsequently expand or delete the selected text.

Of course, the first target text in the target area may also include text information in various languages, such as numbers, English letters, and Chinese characters. The embodiment of the present application does not impose any limitation on this.

S605. The terminal performs semantic analysis on the first target text to determine a second target text, where the second target text is different from the first target text.

S606. The terminal marks the second target text in the user graphical interface.

In step S605, the terminal may expand, or remove from, the phrases cut off by the boundary of the target area in the first target text to obtain a second target text that is different from the first target text. For example, the mobile phone may identify, by techniques such as semantic analysis or word segmentation, the to-be-corrected text in the first target text whose semantics or word meaning is incomplete. For example, the text to be corrected appearing in the first target text 902 is: "passport, sign". Generally, to-be-corrected text with incomplete semantics or word meaning is mostly caused by the user under-selecting or over-selecting when circling the first target text.

Therefore, the mobile phone can continue to extract the context of the text to be corrected (the context being outside the target area), for example, extract the character "certificate" that follows the fragment "passport, sign". The mobile phone then determines whether the text to be corrected has complete semantics or word meaning after its context is added (i.e., "passport, visa"). If it does, as shown in FIG. 11, the mobile phone can automatically expand the context character "certificate" of the text to be corrected into the selected text. At this time, the selected text is the second target text 1001, which includes, in addition to the first target text, the context of the text to be corrected.

Exemplarily, a dictionary for word segmentation may be preset in the mobile phone, and commonly used Chinese words, phrases, or English words may be stored in the dictionary. After the mobile phone obtains the context character "certificate" of the text to be corrected, it can search the dictionary for whether the word "visa" is included. If it is included, it indicates that the user under-selected when circling the first target text: the fragment "passport, sign" in the first target text acquires a complete word meaning once the character "certificate" is appended after it. Therefore, the mobile phone can add the character "certificate" to the first target text to obtain the second target text.
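A minimal sketch of this dictionary check, with an illustrative (assumed) dictionary; a real dictionary and its segmentation logic would of course be richer:

    import java.util.HashSet;
    import java.util.Set;

    final class SelectionExpander {
        // Returns true when appending the next character outside the target
        // area to the tail fragment of the selection yields a dictionary word,
        // i.e. the first target text was under-selected and should grow.
        static boolean shouldExpand(String tailFragment, char nextChar, Set<String> dictionary) {
            return dictionary.contains(tailFragment + nextChar);
        }

        public static void main(String[] args) {
            Set<String> dictionary = new HashSet<>();
            dictionary.add("签证");  // "visa" = "sign" + "certificate"
            // The tail of the circled text is "签"; the next character is "证".
            System.out.println(shouldExpand("签", '证', dictionary));  // true -> expand
        }
    }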

In addition, the mobile phone can also update commonly used or user-defined words into the above dictionary according to the user's input habits when using the input method, so as to improve the accuracy with which the mobile phone automatically assists the user in expanding the target text.

Alternatively, the mobile phone may send the first target text to a server and request the server to identify the to-be-corrected text in the first target text that does not have complete semantics or word meaning. The mobile phone can also send all the text information in the first control to the server and request the server to determine the words in the first control that need to be expanded; then, according to the result fed back by the server, the mobile phone expands the phrases cut off by the boundary of the target area in the first target text to obtain the second target text.

In other embodiments of the present application, the text to be corrected, "passport, sign", appears in the first target text 902. After determining the text to be corrected, the mobile phone may continue to extract its context outside the selected target area, namely the character "certificate". Based on the above dictionary, the mobile phone can then determine that the character "certificate" needs to be expanded into the first target text. At this time, as shown in FIG. 12A, the mobile phone can take the row 1101 and the column 1102 where the character "certificate" is located as boundaries, and automatically expand the text within the closed region formed by the first target text and the row 1101 and column 1102 into the second target text 1103. The text content of the second target text 1103 obtained after this automatic expansion is:

"Get ready to go out with the card

Pieces. If you go abroad for sightseeing,

Run your passport and visa in advance"

Exemplarily, as shown in FIG. 12B, in combination with the first control 508 shown in FIG. 12A, the target area 901 of the first target text circled by the user is a rectangle, and the mobile phone can obtain the coordinate values of the four vertices A, B, C, and D of the rectangle in the touch screen. When the mobile phone determines that the character "certificate" needs to be expanded into the first target text, it can obtain the coordinate point E(x, y) of the character "certificate" in the touch screen. The mobile phone can then determine that the vertex of the rectangle ABCD closest to the point E is the point D, so the expanded rectangular target area uses the point E instead of the point D as a vertex, while the coordinate values of the vertex A, which lies on the same diagonal as the point E, remain unchanged. Based on the points A and E, the mobile phone can then determine the two vertices on the other diagonal of the expanded rectangular target area: the point B, where the column in which the point E is located intersects the edge AB, and the point F, where the row in which the point E is located intersects the extension line of the edge AC. The mobile phone thereby obtains an expanded rectangular target area with vertices A, B, E, and F, and the text in this rectangular target area is the second target text 1103.
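Sketch of the geometry just described: after the nearest vertex is replaced by E, the expanded target area is simply the axis-aligned rectangle whose diagonal runs from the kept vertex A to E, its other two corners falling where E's row and column meet the extended edges through A:

    import android.graphics.Point;
    import android.graphics.Rect;

    final class RectangleExpander {
        // Find the vertex of the current target area closest to point E.
        static Point nearestVertex(Rect area, Point e) {
            Point[] corners = {
                    new Point(area.left, area.top), new Point(area.right, area.top),
                    new Point(area.left, area.bottom), new Point(area.right, area.bottom)};
            Point best = corners[0];
            long bestDist = Long.MAX_VALUE;
            for (Point c : corners) {
                long dx = c.x - e.x, dy = c.y - e.y;
                long dist = dx * dx + dy * dy;
                if (dist < bestDist) { bestDist = dist; best = c; }
            }
            return best;
        }

        // Build the expanded rectangle with the kept vertex (the corner
        // diagonally opposite the replaced one) and E as a diagonal.
        static Rect expand(Point keptVertex, Point e) {
            return new Rect(Math.min(keptVertex.x, e.x), Math.min(keptVertex.y, e.y),
                    Math.max(keptVertex.x, e.x), Math.max(keptVertex.y, e.y));
        }
    }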

In addition, after the mobile phone expands the first target text, the above method may be repeated to continue word segmentation or semantic analysis on the expanded second target text, thereby correcting any over-selected or under-selected text in the second target text.

In other embodiments of the present application, when the mobile phone determines that text with incomplete semantics or word meaning appears in the first target text (for example, "passport, sign" in FIG. 9), if the character "sign" is followed by punctuation or a new paragraph, the mobile phone can also automatically delete the extra "sign" character from the text to be corrected, thereby correcting the over-selection that occurred when the user circled the first target text.

At this point, based on the first target text circled by the user in the display interface, the mobile phone can correct, by means of text extraction, semantic analysis, and the like, the to-be-corrected text that the user under-selected or over-selected in the first target text, and obtain the corrected second target text, thereby improving the accuracy and operation efficiency with which the user selects the target text in the display interface. Subsequently, based on the second target text selected for the user, the mobile phone can further perform text editing operations such as copying, deleting, and translating, so that the operation efficiency of these text editing operations is also improved.

In other embodiments, after the mobile phone performs the above steps S601-S606, the user may manually expand or shrink the second target text by clicking or dragging a cursor. That is, after step S606, the terminal can also perform the following steps S608-S609 or S610-S611.

S608. The terminal receives a click operation on a first character, where the first character is a character other than the second target text in the user graphical interface.

S609. In response to the click operation, the terminal expands the second target text into a third target text, taking the row and column where the first character is located as boundaries.

As described in step S608, if the user needs to continue to expand the text based on the second target text, the user may input a click operation at the first character, located beyond the second target text, up to which the expansion is desired, to indicate that the target area of the text the user wishes to select extends from the area where the second target text is located to the area containing the first character.

As shown in (a) of FIG. 13A, still taking the control 508 in the chat interface as the first control, the mobile phone automatically expands the first target text 902 circled by the user into the second target text 1103. If the user wishes to continue to expand the second target text 1103, the user may click in the control 508 on the last character of the desired target text; for example, when the user clicks on the character "ticket" 1201 in the control 508, the "ticket" 1201 is the first character described above.

Further, in step S609, in response to the user's click operation on the "ticket" 1201, the mobile phone may take the row and column where the character "ticket" 1201 is located as boundaries, and automatically expand the text in the closed region formed by the second target text 1103 and the row and column in which the character "ticket" 1201 is located into the third target text 1202. At this time, as shown in (b) of FIG. 13A, the first cursor 801 is located at the start position of the third target text 1202, the second cursor 802 is located at the end position of the third target text 1202, and the third target text 1202 is highlighted to indicate that it is in the selected state.

Exemplarily, as shown in FIG. 13B, in combination with the first control 508 shown in FIG. 13A, the target area where the second target text 1103 is located is a rectangle formed by the four vertices A, B, E, and F. When the mobile phone detects that the user clicks on the character "ticket" 1201 in the control 508, the mobile phone can detect that the character "ticket" is at the point E' in the touch screen. The mobile phone can then calculate that, among the four vertices A, B, E, and F, the vertex closest to the point E' is the point E, and therefore use the point E' instead of the point E as a vertex of the expanded rectangular target area. Similarly, the coordinate values of the vertex A, which lies on the same diagonal as the point E', remain unchanged, so the mobile phone can determine the two vertices on the other diagonal of the expanded rectangular target area, that is, the point B', at which the line where the point E' is located intersects the extension line of AB, and the point F', at which the line where the point E' is located intersects the extension line of EF. The mobile phone thereby obtains the rectangular target area with vertices A, B', E', and F' where the third target text 1202 is located.

In parallel with steps S608-S609, the following steps S610-S611 are another method by which the user can manually expand the second target text.

S610. The terminal receives a drag operation of dragging a cursor by a user, where the cursor is located at a start position or an end position of the second target text.

S611. In response to the drag operation, the terminal expands the second target text into the third target text in units of phrases.

Specifically, as shown in (a) of FIG. 14, after the mobile phone automatically expands the phrases cut off in the first target text, the mobile phone may display the first cursor 801 at the start position of the second target text 1101 obtained after the expansion, and display the second cursor 802 at the end position of the second target text 1101. The user can then continue to expand the selection on the basis of the second target text 1101 by dragging the first cursor 801 (or the second cursor 802).

Still as shown in (a) of FIG. 14, the user drags the second cursor 802 backward from "certificate", the end position of the second target text 1101. After the mobile phone detects the drag operation, it can query the preset dictionary as to whether the text currently being expanded by the user forms a phrase. For example, as shown in (b) of FIG. 14, when the user drags the second cursor 802 and the finger moves to the character "jian" in the control 508, the expanded character "jian" alone does not belong to any phrase in the dictionary; the mobile phone therefore does not expand the character "jian" into the text selected by the user at this time, and the second cursor 802 does not respond. Correspondingly, as shown in (c) of FIG. 14, when the user continues to drag the second cursor 802 and the finger moves to the character "kang" of "health" in the control 508, the currently expanded word "health" does belong to a phrase in the dictionary. The mobile phone can therefore expand the word "health" into the text selected by the user, obtaining the expanded third target text 1301. At this time, the second cursor 802 moves to the end position of the third target text 1301 (i.e., after the character "kang" of "health"), and the third target text 1301 marked between the first cursor 801 and the second cursor 802 may also be highlighted.
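A sketch of this phrase-unit expansion (forward drags only, with an assumed preset dictionary): the cursor position is only committed when the characters dragged over since the last committed position complete a dictionary phrase:

    import java.util.Set;

    final class PhraseUnitSelector {
        // fullText: all text in the control; selectionEnd: current committed
        // end offset; dragOffset: offset under the user's finger.
        static int extendSelection(String fullText, int selectionEnd, int dragOffset,
                                   Set<String> dictionary) {
            if (dragOffset <= selectionEnd) {
                return selectionEnd;            // backward drags not handled here
            }
            String candidate = fullText.substring(selectionEnd, dragOffset);
            // "健" alone is not a phrase, so nothing is committed; once the
            // finger reaches "康", "健康" is found and the selection grows.
            return dictionary.contains(candidate) ? dragOffset : selectionEnd;
        }
    }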

That is to say, in the embodiment of the present application, when the mobile phone expands the target text in response to the user manually dragging the cursor, it expands the selection in units of phrases. This effectively reduces over-selection or under-selection when the user drags the cursor to expand the selected text, and improves the operation efficiency with which the mobile phone extracts text for the user.

In addition, when the user drags the cursor to deselect text, the mobile phone can also cancel the selection in units of phrases, thereby reducing over-selection or under-selection when the user drags the cursor to deselect text. For example, as shown in (a) of FIG. 15, the user drags the second cursor 802 forward from "certificate", the end position of the second target text 1101, toward the position behind the character "sign", to perform the function of deselecting text. After the mobile phone detects the drag operation, it can query the preset dictionary as to whether the selected text at the current location of the second cursor 802 forms a phrase. Still as shown in (a) of FIG. 15, the selected text ending in "sign" at the location of the second cursor 802 does not belong to any phrase in the dictionary, so the mobile phone does not need to cancel the selected character "certificate", and the second cursor 802 does not respond, remaining after the character "certificate". Correspondingly, as shown in (b) of FIG. 15, when the user continues to drag the second cursor 802 and the finger moves forward to the character "sign" of "visa", the word "passport" in the text selected up to the current location of the second cursor 802 belongs to a phrase in the dictionary, so the mobile phone can now cancel the already selected word "visa". At this time, the second cursor 802 moves to before the character "sign" of "visa", and the highlighted text between the first cursor 801 and the second cursor 802 is the updated target text.

Optionally, when detecting that the user is performing a drag operation on the first cursor 801 or the second cursor 802, the mobile phone may hide the dragged cursor; when detecting that the user's finger has left the touch screen and the drag operation is no longer being performed, the mobile phone can redisplay the hidden cursor. This avoids the degraded user experience that would otherwise arise because, when the mobile phone expands or deselects text in units of phrases, the cursor does not exactly follow the user's drag operation.

Of course, when the user drags the cursor to deselect text, in order to improve the accuracy of deselecting the target text, the mobile phone may also cancel the selection character by character in response to the user's drag operation. The embodiment of the present application does not impose any restriction on this.

It should be noted that after the mobile phone extracts the first target text in the target area, or expands the first target text into the second target text, or expands the second target text into the third target text, as shown in (b) of FIG. 5, the mobile phone can also display a text selection menu 510 for editing the extracted text content, with options such as copy 510a, forward 510b, and delete 510c.

Subsequently, if it is detected that the user clicks the above copy 510a option, the Clipboard Manager running in the mobile phone can call the addItem(ClipData.Item item) function to add the extracted target text to a new clip object. The Clipboard Manager then puts the new clip object onto the clipboard by calling the function clipManager.setPrimaryClip(clip), completing the copy operation. Later, when the mobile phone detects that the user performs a paste operation, the Clipboard Manager can take the stored clip object (i.e., the target text) out of the clipboard and paste it into the input box specified by the user, completing the paste operation.
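A sketch of this copy/paste flow using the standard Android ClipData APIs (building the clip with ClipData.newPlainText mirrors the addItem(ClipData.Item) call named above; the clip label is an illustrative assumption):

    import android.content.ClipData;
    import android.content.ClipboardManager;

    final class CopyPaste {
        // Copy: wrap the target text in a clip object and put it on the clipboard.
        static void copy(ClipboardManager clipManager, CharSequence targetText) {
            ClipData clip = ClipData.newPlainText("selected text", targetText);
            clipManager.setPrimaryClip(clip);
        }

        // Paste: take the stored clip object back out of the clipboard; the
        // caller then inserts it into the input box specified by the user.
        static CharSequence paste(ClipboardManager clipManager) {
            ClipData clip = clipManager.getPrimaryClip();
            return (clip == null || clip.getItemCount() == 0)
                    ? "" : clip.getItemAt(0).getText();
        }
    }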

It can be understood that, in order to implement the above functions, the above terminal and the like include hardware structures and/or software modules corresponding to each function. Those skilled in the art will readily appreciate that the embodiments of the present application can be implemented in the form of hardware, or a combination of hardware and computer software, in conjunction with the elements and algorithm steps of the various examples described in the embodiments disclosed herein. Whether a function is implemented by hardware or by computer software driving hardware depends on the specific application and design constraints of the solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered to be beyond the scope of the embodiments of the present application.

In the embodiment of the present application, the terminal and the like may be divided into functional modules according to the foregoing method examples. For example, each functional module may be divided according to a corresponding function, or two or more functions may be integrated into one processing module. The above integrated modules can be implemented in the form of hardware or in the form of software functional modules. It should be noted that the division of modules in the embodiment of the present application is schematic and is only a logical function division; an actual implementation may use another division manner.

FIG. 16 is a schematic diagram of a possible structure of the terminal involved in the foregoing embodiments, where the terminal is used to implement the method described in the foregoing method embodiments. Specifically, the terminal includes: a display unit 1601, an obtaining unit 1602, a determining unit 1603, and a correcting unit 1604.

The display unit 1601 is configured to support the terminal in executing the processes S601 and S606 shown in FIG. 6; the obtaining unit 1602 is configured to support the terminal in executing the processes S602, S608, and S610 shown in FIG. 6; the determining unit 1603 is configured to support the terminal in executing the processes S603-S604 shown in FIG. 6; and the correcting unit 1604 is configured to support the terminal in executing the processes S605, S609, and S611 shown in FIG. 6. For all related content of the steps involved in the foregoing method embodiments, reference may be made to the functional descriptions of the corresponding functional modules; details are not described herein again.

In the case of using integrated units, the above determining unit 1603 and correcting unit 1604 can be integrated into a processing module, the display unit 1601 serves as an output module, and the above obtaining unit 1602 serves as an input module. Of course, the terminal may further include a storage module and a communication module. In this case, FIG. 17 shows a possible schematic structural diagram of the terminal involved in the foregoing embodiments, including a processing module 1701, a communication module 1702, an input/output module 1703, and a storage module 1704.

The processing module 1701 is configured to control and manage the action of the terminal. The communication module 1702 is for supporting communication of the terminal with other network entities such as servers or other terminals. The input/output module 1703 is for receiving information input by a user or outputting information provided to the user and various menus of the terminal. The storage module 1704 is configured to save program codes and data of the terminal.

Exemplarily, the processing module 1701 may be a processor or a controller, for example, a central processing unit (CPU), a GPU, a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure. The processor may also be a combination implementing computing functions, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor, and the like.

The communication module 1702 can be a transceiver, a transceiver circuit, an input/output device, a communication interface, or the like. For example, the communication module 1702 can be specifically a Bluetooth device, a Wi-Fi device, a peripheral interface, or the like.

The storage module 1704 can be a memory, which may include high-speed random access memory (RAM) and may also include non-volatile memory, such as magnetic disk storage devices, flash memory devices, or other non-volatile solid state storage devices.

The input/output module 1703 can be an input/output device such as a touch screen, a keyboard, a microphone, and a display. The display may specifically be configured in the form of a liquid crystal display, an organic light emitting diode or the like. In addition, a touch panel can be integrated on the display for collecting touch events on or near the display, and transmitting the collected touch information to other devices (such as a processor, etc.).

As shown in FIG. 18, another embodiment of the present application discloses a terminal, which may include: a touch screen 1801, wherein the touch screen 1801 includes a touch-sensitive surface 1806 and a display screen 1807; one or more processors 1802; a memory 1803; a plurality of applications 1808; and one or more computer programs 1804, where the above components may be coupled by one or more communication buses 1805. The one or more computer programs 1804 are stored in the memory 1803 and configured to be executed by the one or more processors 1802, and the one or more computer programs 1804 include instructions that can be used to execute the various steps in the embodiments corresponding to FIG. 6.

The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented using a software program, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present application are generated in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable device. The computer instructions can be stored in a computer readable storage medium or transferred from one computer readable storage medium to another; for example, the computer instructions can be transferred from a website site, computer, server, or data center to another website site, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave, etc.). The computer readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that includes one or more available media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)).

The foregoing is only a specific embodiment of the present application, but the scope of protection of the present application is not limited thereto; any changes or substitutions within the technical scope disclosed in the present application shall be covered by the scope of protection of the present application. Therefore, the scope of protection of the present application should be determined by the scope of the claims.

Claims (21)

  1. A text selection method, which is implemented in a terminal with a touch screen, and includes:
    The terminal displays a user graphical interface in the touch screen;
    Receiving, by the terminal, a first gesture applied to the user graphical interface, where the first gesture includes a closed trajectory;
    In response to the first gesture, the terminal determines a target area corresponding to the closed trajectory in the user graphical interface;
    The terminal determines a first target text included in the target area;
    The terminal performs semantic analysis on the first target text to determine a second target text, where the second target text is different from the first target text;
    The terminal marks the second target text in the user graphical interface.
  2. The method according to claim 1, wherein after the terminal displays the user graphical interface in the touch screen, and before the terminal receives the first gesture applied to the user graphical interface, the method further includes:
    The terminal displays a first prompt in the user graphical interface, and the first prompt includes a selection box for circling text information;
    The terminal receives the first gesture that is applied to the user graphical interface, and specifically includes:
    The terminal receives a first gesture in which the user circles the first target text using the selection box in the user graphical interface.
  3. The method according to claim 1 or 2, wherein after the terminal marks the second target text in the user graphical interface, the method further comprises:
    Receiving, by the terminal, a click operation on the first character, where the first character is a character other than the second target text in the user graphical interface;
    In response to the click operation, the terminal expands the text in the closed region formed by the first target text and the row and column in which the first character is located into the third target text.
  4. The method according to claim 1 or 2, wherein after the terminal performs semantic analysis on the first target text to determine the second target text, the method further includes:
    The terminal displays a first cursor at a start position of the second target text; the terminal displays a second cursor at an end position of the second target text.
  5. The method according to claim 4, wherein after the first cursor and the second cursor are respectively displayed at the start position and the end position of the second target text, the method further comprises:
    Receiving, by the terminal, a drag operation acting on the first cursor or the second cursor;
    In response to the drag operation, the terminal expands the second target text into a third target text in units of phrases; or, in response to the drag operation, the terminal cancels the phrase in units of phrases The selected text in the second target text.
  6. The method according to claim 5, wherein after the terminal receives the drag operation on the first cursor or the second cursor, the method further comprises:
    While it is detected that the user's finger has not left the touch screen, the terminal does not display the first cursor or the second cursor.
  7. The method according to any one of claims 1 to 6, wherein after the terminal displays the user graphical interface in the touch screen, and before the terminal receives the first gesture acting on the user graphical interface, the method further includes:
    The terminal receives a second gesture acting on the user graphical interface, where the second gesture is used to initiate a function of circling text.
  8. The method according to any one of claims 1 to 7, wherein after the terminal determines the target area corresponding to the closed trajectory in the user graphical interface, the method further includes:
    The terminal displays a boundary of the target area in the user graphical interface, and at least one control block is disposed on a boundary of the target area, where the control block is used to adjust a position or a size of the target area;
    Receiving, by the terminal, a third gesture that acts on the control block;
    The terminal adjusts a position or a size of the target area according to the third gesture.
  9. A method according to any one of claims 1-8, wherein
    The second target text includes the first target text, and the second target text includes a number of characters greater than a number of characters included in the first target text; or
    The first target text includes the second target text, and the second target text includes a number of characters smaller than a number of characters included in the first target text; or
    The user graphical interface is a short message interface; or
    The user graphical interface is an interface including a picture; or
    The first target text or the second target text is highlighted in the user graphical interface; or
    The terminal is a mobile phone.
  10. A terminal, comprising:
    a display unit, configured to: display a graphical user interface on the touch screen;
    An acquiring unit, configured to: receive a first gesture applied to the user graphical interface, where the first gesture includes a closed trajectory;
    a determining unit, configured to: determine a target area corresponding to the closed trajectory in the user graphical interface; and determine a first target text included in the target area;
    a modifying unit, configured to perform semantic analysis on the first target text to determine a second target text, where the second target text is different from the first target text;
    The display unit is further configured to: mark the second target text in the user graphical interface.
  11. The terminal according to claim 10, characterized in that
    The display unit is further configured to: display a first prompt in the user graphical interface, where the first prompt includes a selection box for circling text information;
    The acquiring unit is specifically configured to: receive a first gesture in which the user selects the first target text by using the selection box in the user graphical interface.
  12. A terminal according to claim 10 or 11, wherein
    The obtaining unit is further configured to: receive a click operation on the first character, where the first character is a character other than the second target text in the user graphical interface;
    The correction unit is further configured to: expand the text in the closed area formed by the first target text and the row and column where the first character is located into the third target text.
  13. A terminal according to claim 10 or 11, wherein
    The display unit is further configured to: display a first cursor at a start position of the second target text; and display a second cursor at an end position of the second target text.
  14. The terminal of claim 13 wherein:
    The obtaining unit is further configured to: receive a drag operation that is applied to the first cursor or the second cursor;
    The modifying unit is further configured to: expand the second target text into a third target text in units of phrases; or cancel the selected text in the second target text in units of phrases.
  15. The terminal according to claim 14, wherein
    The determining unit is further configured to: when it is detected that the user's finger has not left the touch screen, instruct the display unit not to display the first cursor or the second cursor.
  16. A terminal according to any one of claims 10-15, characterized in that
    The acquiring unit is further configured to: receive a second gesture applied to the user graphical interface, where the second gesture is used to initiate a function of circling text.
  17. A terminal according to any one of claims 10-16, characterized in that
    The display unit is further configured to: display a boundary of the target area in the user graphic interface, where at least one control block is disposed on a boundary of the target area, where the control block is used to adjust the target area Location or size;
    The acquiring unit is further configured to: receive a third gesture that acts on the control block;
    The determining unit is further configured to: adjust a position or a size of the target area according to the third gesture.
  18. A terminal, comprising a touch screen, a memory, one or more processors, a plurality of applications, and one or more programs; wherein the one or more programs are stored in the memory, and the terminal is configured to perform the text selection method according to any one of claims 1 to 9.
  19. A computer readable storage medium having instructions stored therein, wherein, when the instructions are run on a terminal, the terminal is caused to perform the text selection method according to any one of claims 1-9.
  20. A computer program product comprising instructions, wherein the computer program product, when run on a terminal, causes the terminal to perform the text selection method of any one of claims 1-9.
  21. A text selection method, which is implemented in a mobile phone with a touch screen, wherein the method comprises:
    Displaying an interface of a short message on the touch screen, the interface including text;
    Receiving, by the mobile phone, an operation of starting a circle selection function;
    In response to the operation, the mobile phone displays a first selection box and a second selection box in the touch screen, the first selection box and the second selection box are both used to select text by a fixed shape;
    Receiving, by the mobile phone, a click operation for the first selection box;
    In response to the clicking operation, the handset determines the first selection box as the selection box for circling text;
    Receiving, by the mobile phone, a circle selection gesture for the text;
    In response to the circle selection gesture, the mobile phone determines a target area based on the first selection box and the circle selection gesture;
    Determining, by the mobile phone, the first target text included in the target area, where the number of characters of the first target text is less than the number of characters of the text;
    Performing semantic analysis on the first target text by the mobile phone to determine a second target text, where the second target text is different from the first target text;
    The mobile phone highlights the second target text in the interface;
    Receiving, by the mobile phone, a click operation on the first character, where the first character is a character other than the second target text in the interface;
    In response to the clicking operation, the mobile phone expands text in the closed area formed by the first target text and the row and column in which the first character is located into a third target text;
    The mobile phone highlights the third target text.
PCT/CN2018/099447 2018-01-11 2018-08-08 Text selecting method and terminal WO2019136964A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201810025128.1 2018-01-11
CN201810025128 2018-01-11
CN201810327466.0 2018-04-12
CN201810327466.0A CN110032324A (en) 2018-01-11 2018-04-12 A kind of text chooses method and terminal

Publications (1)

Publication Number Publication Date
WO2019136964A1 true WO2019136964A1 (en) 2019-07-18

Family

ID=67218834

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/099447 WO2019136964A1 (en) 2018-01-11 2018-08-08 Text selecting method and terminal

Country Status (1)

Country Link
WO (1) WO2019136964A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102349046A (en) * 2009-03-12 2012-02-08 诺基亚公司 Method and apparatus for selecting text information
US20150212707A1 (en) * 2014-01-29 2015-07-30 Social Commenting, Llc Computer System and Method to View and Edit Documents from an Electronic Computing Device Touchscreen
CN105094626A (en) * 2015-06-26 2015-11-25 小米科技有限责任公司 Method and device for selecting text contents
CN105653160A (en) * 2016-02-25 2016-06-08 努比亚技术有限公司 Text determining method and terminal


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18900416

Country of ref document: EP

Kind code of ref document: A1