WO2019136964A1 - Text selection method and terminal - Google Patents

Text selection method and terminal

Info

Publication number
WO2019136964A1
WO2019136964A1 (PCT/CN2018/099447, CN2018099447W)
Authority
WO
WIPO (PCT)
Prior art keywords
text
terminal
target text
user
target
Prior art date
Application number
PCT/CN2018/099447
Other languages
English (en)
Chinese (zh)
Inventor
李昂 (Li Ang)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date
Filing date
Publication date
Priority claimed from CN201810327466.0A external-priority patent/CN110032324B/zh
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2019136964A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures

Definitions

  • the present invention relates to the field of communications, and in particular, to a text selection method and a terminal.
  • the text view is one of the controls used to display a string.
  • when a terminal such as a mobile phone displays text through a text-view control,
  • and a user inputs a specified operation for editing (for example, a long-press operation),
  • the terminal can display the text in the text view in an editable state and display the editing options supported by the text (for example, the copy 11, translate 12, or delete 13 options in FIG. 1).
  • the user can then drag the first cursor 14a and the second cursor 14b located at the two ends of the selected text to expand or shrink the selection character by character, and then select the desired editing option to implement the corresponding editing function.
  • however, when the font is small or the text contains many characters, dragging the first cursor 14a or the second cursor 14b to select text is prone to over-selection or under-selection, which lowers the operating efficiency of the terminal during text selection.
  • the embodiments of the invention provide a text selection method and a terminal, which can reduce over-selection and under-selection when text is selected and improve the operating efficiency of the terminal during text selection.
  • the present application provides a text selection method, including: displaying, by a terminal, a graphical user interface (GUI) on a touch screen; receiving, by the terminal, a first gesture acting on the GUI, the first gesture generating a closed track on the GUI; in response to the first gesture, determining, by the terminal, a target area corresponding to the closed track; determining a first target text included in the target area; performing semantic analysis on the first target text to determine a second target text that is different from the first target text; and marking the second target text in the GUI.
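The application does not prescribe how the target area and the first target text are derived from the closed track. As an illustrative sketch only (all names are hypothetical, not from the application), a standard ray-casting point-in-polygon test can decide which on-screen characters fall inside the circled trajectory:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the closed polygon
    described by a list of (px, py) vertices?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from (x, y) cross edge (x1,y1)-(x2,y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def first_target_text(chars, track):
    """chars: list of (character, center_x, center_y) tuples laid out
    on screen; track: the closed gesture trajectory as (x, y) samples.
    Returns the characters whose centers fall inside the track."""
    return "".join(c for c, x, y in chars if point_in_polygon(x, y, track))
```

Any polygon-containment method would serve; ray casting is shown only because it handles the irregular, hand-drawn trajectories a first gesture would produce.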
  • in this way, the terminal may correct the first target text actually circled by the user to the second target text according to semantics, so that the second target text finally selected for the user has more accurate semantics, which reduces over-selection and under-selection when the user selects text and improves the operating efficiency of the terminal during text selection.
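The semantic analysis itself is not specified in this summary. A minimal stand-in, assuming the circled text is a span of a linear string, simply snaps the raw span to whole-word boundaries so the corrected selection never cuts a word in half (the `semantic_correct` helper is hypothetical):

```python
import re

def semantic_correct(full_text, start, end):
    """Given the raw circled span full_text[start:end] (the first
    target text), expand it to whole-word boundaries to produce the
    second target text.  A minimal stand-in for the semantic
    analysis described in the application."""
    # Walk left while the preceding character is part of a word.
    while start > 0 and re.match(r"\w", full_text[start - 1]):
        start -= 1
    # Walk right while the next character is part of a word.
    while end < len(full_text) and re.match(r"\w", full_text[end]):
        end += 1
    return full_text[start:end]
```

A real implementation would segment phrases with a language model or dictionary; this sketch only illustrates why the second target text can contain more (or fewer) characters than the first.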
  • in a possible design, the method further includes: displaying, by the terminal, a first prompt in the GUI, the first prompt including a selection box for circling text; receiving the first gesture then specifically includes: receiving, by the terminal, a gesture in which the user circles the first target text in the GUI using the selection box.
  • in this way, the terminal may prompt the user to circle the target text with the selection box provided by the terminal, which reduces the chance of over-selection or under-selection caused when the user manually slides on the GUI to draw the closed trajectory, improving the user experience.
  • in a possible design, the method further includes: receiving, by the terminal, a click operation acting on a first character, the first character being text in the GUI outside the second target text; in response to the click operation, expanding, by the terminal, the text in the closed region formed by the first target text and the row and column in which the first character is located into a third target text. That is, the user can manually expand the second target text into the third target text with a click operation, using the clicked first character as the starting or ending position of the third target text, so that the terminal conveniently allows the user to select text flexibly, further improving the intelligent interaction between the terminal and the user.
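Read against a linear text buffer, the click-to-expand rule can be sketched as follows. This is a simplified, assumed reading of the row-and-column description; the helper name and signature are hypothetical:

```python
def expand_by_click(sel_start, sel_end, click_index):
    """Extend the selection [sel_start, sel_end) so that the clicked
    character index becomes its new start or end position, yielding
    the third target text's span."""
    if click_index < sel_start:        # click before the selection
        return click_index, sel_end
    if click_index >= sel_end:         # click after the selection
        return sel_start, click_index + 1
    return sel_start, sel_end          # click inside: unchanged
```

In the two-dimensional layout the application describes, the same idea applies per row and column: the clicked character's cell becomes one corner of the closed region containing the third target text.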
  • in a possible design, the method may further include: displaying, by the terminal, a first cursor at the starting position of the second target text and a second cursor at the ending position.
  • the user can then expand or deselect the second target text by dragging a cursor.
  • specifically, the terminal may receive a drag operation acting on the first cursor or the second cursor; in response to the drag operation, the terminal may expand the second target text into the third target text in units of phrases, or deselect text in the second target text in units of phrases. Since a phrase is the smallest unit of text with complete semantics, expanding or deselecting the target text in units of phrases reduces semantic incompleteness in the selected text.
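Assuming the text has already been segmented into phrases, phrase-unit expansion and deselection under a drag can be sketched as below (the helper is hypothetical; how phrases are obtained is left to the semantic analysis):

```python
def expand_by_phrase(phrases, selected_count, direction):
    """phrases: the text split into semantically complete phrases;
    selected_count: how many phrases are currently selected from the
    start; direction: +1 extends the selection by one phrase, -1
    deselects one.  Dragging a cursor thus moves the selection in
    whole-phrase steps, never splitting a phrase."""
    selected_count = max(0, min(len(phrases), selected_count + direction))
    return "".join(phrases[:selected_count]), selected_count
```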
  • in a possible design, the method further includes: while detecting that the user's finger has not left the touch screen, not displaying, by the terminal, the first cursor or the second cursor. This avoids the problem that, when the terminal expands or deselects text in units of phrases, the cursor does not follow the user's drag operation, which would reduce the user experience.
  • in a possible design, the method further includes: receiving, by the terminal, a second gesture acting on the GUI, the second gesture being used to activate the text-circling function.
  • in a possible design, the method further includes: displaying, by the terminal, a boundary of the target area in the GUI, with at least one control block disposed on the boundary, the control block being used to adjust the position or size of the target area; receiving, by the terminal, a third gesture acting on the control block; and adjusting the position or size of the target area according to the third gesture, thereby modifying the target area circled by the user with the first gesture and the first target text within it.
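One plausible model of the control blocks, assuming a rectangular target area, is a corner handle dragged by the third gesture's delta. The names below are illustrative, not from the application:

```python
def drag_control_block(area, corner, dx, dy):
    """area: (left, top, right, bottom) bounding box of the target
    area; corner: which control block is dragged ('tl' top-left or
    'br' bottom-right); (dx, dy): drag delta of the third gesture."""
    left, top, right, bottom = area
    if corner == "tl":                 # top-left control block
        left, top = left + dx, top + dy
    elif corner == "br":               # bottom-right control block
        right, bottom = right + dx, bottom + dy
    # Normalise so the adjusted area stays non-degenerate.
    return (min(left, right), min(top, bottom),
            max(left, right), max(top, bottom))
```

After each adjustment the terminal would recompute which characters fall inside the new area, refreshing the first target text.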
  • in a possible design: the second target text includes the first target text and contains more characters than the first target text; or the first target text includes the second target text and the second target text contains fewer characters than the first target text; or the GUI is a short-message interface; or the GUI is an interface containing a picture; or the first target text or the second target text is highlighted in the GUI; or the terminal is a mobile phone.
  • the present application provides a terminal, including: a display unit, configured to display a first graphical user interface (GUI) on a touch screen; an acquiring unit, configured to receive a first gesture acting on the GUI, the first gesture including a closed trajectory; a determining unit, configured to determine a target area corresponding to the closed trajectory in the GUI and to determine a first target text included in the target area; and a correcting unit, configured to perform semantic analysis on the first target text to determine a second target text different from the first target text; the display unit is further configured to mark the second target text in the GUI.
  • the display unit is further configured to: display a first prompt in the GUI, the first prompt including a selection box for circling text; the obtaining unit is specifically configured to: receive the first gesture in which the user circles the first target text in the GUI using the selection box.
  • the obtaining unit is further configured to: receive a click operation on a first character, the first character being text other than the second target text in the GUI; the correcting unit is further configured to: expand the text in the closed area formed by the first target text and the row and column where the first character is located into a third target text.
  • the display unit is further configured to: display a first cursor at a start position of the second target text, and display a second cursor at the end position.
  • the obtaining unit is further configured to: receive a drag operation applied to the first cursor or the second cursor; the correcting unit is further configured to: expand the second target text into the third target text in units of phrases, or deselect text in the second target text in units of phrases.
  • the determining unit is further configured to: while detecting that the user's finger has not left the touch screen, instruct the display unit not to display the first cursor or the second cursor.
  • the acquiring unit is further configured to: receive a second gesture acting on the GUI, the second gesture being used to activate the text-circling function.
  • the display unit is further configured to: display a boundary of the target area in a graphical interface of the user, where at least one control block is disposed on a boundary of the target area, where the control block is used to adjust the target The location or size of the area; the acquiring unit is further configured to: receive a third gesture that acts on the control block; the determining unit is further configured to: adjust a position or a size of the target area according to the third gesture.
  • the application provides a terminal, comprising: a touch screen, one or more processors, a memory, a plurality of applications, and one or more programs; the processor is coupled to the memory, the one or more programs are stored in the memory, and when the terminal runs, the processor executes the one or more programs stored in the memory to cause the terminal to perform any of the text selection methods described above.
  • the present application provides a computer readable storage medium having instructions stored therein that, when executed on any of the terminals described above, cause the terminal to perform any of the text selection methods described above.
  • the present application provides a computer program product comprising instructions that, when run on any of the terminals described above, cause the terminal to perform any of the above text selection methods.
  • FIG. 1 is a schematic diagram 1 of a scene when editing text in a terminal in the prior art
  • FIG. 2 is a schematic structural diagram 1 of a terminal according to an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of an operating system according to an embodiment of the present disclosure.
  • FIG. 4 is a schematic diagram 2 of a scene when editing text in a terminal in the prior art
  • FIG. 5 is a schematic diagram 3 of a scene when editing text in a terminal in the prior art
  • FIG. 6 is a schematic flowchart of a text selection method according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram 1 of a text selection method according to an embodiment of the present application.
  • FIG. 8 is a second schematic diagram of a text selection method according to an embodiment of the present disclosure.
  • FIG. 9 is a schematic diagram 3 of a text selection method according to an embodiment of the present disclosure.
  • FIG. 10 is a schematic diagram 4 of a text selection method according to an embodiment of the present disclosure.
  • FIG. 11 is a schematic diagram 5 of a text selection method according to an embodiment of the present disclosure.
  • 12A is a schematic diagram 6 of a text selection method according to an embodiment of the present application.
  • FIG. 12B is a schematic diagram 7 of a text selection method according to an embodiment of the present application.
  • FIG. 13A is a schematic diagram 8 of a text selection method according to an embodiment of the present application.
  • FIG. 13B is a schematic diagram 9 of a text selection method according to an embodiment of the present application.
  • FIG. 14 is a schematic diagram 10 of a text selection method according to an embodiment of the present disclosure.
  • FIG. 15 is a schematic diagram 11 of a text selection method according to an embodiment of the present disclosure.
  • FIG. 16 is a schematic structural diagram 2 of a terminal according to an embodiment of the present disclosure.
  • FIG. 17 is a schematic structural diagram 3 of a terminal according to an embodiment of the present disclosure.
  • FIG. 18 is a schematic structural diagram 4 of a terminal according to an embodiment of the present application.
  • Optical character recognition (OCR) technology refers to a process in which a terminal (such as a mobile phone) optically converts printed characters into a black-and-white dot-matrix image file and uses recognition software to convert the text in the image into a text format for further editing and processing, such as word processing.
  • OCR technology can recognize text information contained in image-type files such as screenshots.
  • Control: a software component, usually contained in an application, that manages the data processed by the application and the interactions with that data. It can provide the user with certain operational functions or display certain content. For a control presented in a graphical user interface (GUI), the user can interact with it through direct manipulation to read or edit information about the application.
  • controls can include interface elements such as icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, widgets, and the like.
  • an attribute indicating whether a control can be visible is referred to as the visible attribute.
  • the visible attribute has three values: visible, invisible, and gone.
  • visible means the control is visible; invisible means the control is invisible but still occupies its layout position;
  • gone means the control is invisible and does not occupy a layout position,
  • so other controls can occupy the layout position of a control whose visible attribute is gone.
  • a control whose visible attribute is visible can be simply understood as a control that the developer wants the user to see,
  • and a control whose visible attribute is invisible or gone can be simply understood as a control that the developer does not want the user to see.
  • the visible attribute of some controls can be switched as needed; such a control can be set to invisible by default and changed to visible when needed, i.e., from invisible to visible.
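These three values mirror the visibility semantics of common GUI toolkits (for example, Android's View visibility constants). A minimal model of the distinction between invisible and gone:

```python
from enum import Enum

class Visibility(Enum):
    VISIBLE = "visible"      # drawn, occupies its layout position
    INVISIBLE = "invisible"  # not drawn, still occupies its layout position
    GONE = "gone"            # not drawn, occupies no layout position

def occupies_layout(v):
    """Only a 'gone' control frees its layout position for other
    controls; visible and invisible controls both reserve it."""
    return v in (Visibility.VISIBLE, Visibility.INVISIBLE)
```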
  • an attribute indicating whether a control can be edited is referred to as the edit attribute.
  • the values of the edit attribute are editable and non-editable.
  • editable means the content displayed in the control (such as text information) allows the user to perform one or more editing operations, such as copy, cut, paste, or delete operations;
  • for example, text-view controls are generally editable;
  • non-editable means the content displayed in the control does not allow the user to perform any editing operation;
  • for example, image-view controls are generally non-editable.
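The edit attribute can be modeled minimally as follows. The `Control` class and its method are hypothetical, used only to show that editing operations are gated on the attribute:

```python
class Control:
    """Minimal model of a control with an edit attribute."""
    def __init__(self, kind, editable):
        self.kind = kind
        self.editable = editable

    def copy_text(self, text):
        # Editing operations (copy, cut, paste, delete) are only
        # permitted when the control's edit attribute is editable.
        if not self.editable:
            raise PermissionError(f"{self.kind} is not editable")
        return text

text_view = Control("TextView", editable=True)     # typically editable
image_view = Control("ImageView", editable=False)  # typically not
```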
  • the terminal may be a portable terminal that further includes other functions such as a personal digital assistant or a music player function, such as a mobile phone, a tablet, a wearable terminal having a wireless communication function (such as a smart watch), and the like.
  • Exemplary embodiments of the portable terminal include, but are not limited to, portable terminals running various operating systems.
  • the portable terminal described above may also be another portable terminal, such as a laptop computer having a touch-sensitive surface (for example, a touch panel). It should also be understood that, in some other embodiments of the present application, the terminal may not be a portable terminal but a desktop computer having a touch-sensitive surface (for example, a touch panel).
  • the terminal in the embodiment of the present application may be the mobile phone 100.
  • the embodiments are specifically described below by taking the mobile phone 100 as an example. It should be understood that the illustrated mobile phone 100 is only one example of a terminal; the mobile phone 100 may have more or fewer components than shown in the figures, may combine two or more components, or may have a different component configuration.
  • the various components shown in the figures can be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing or application specific integrated circuits.
  • the mobile phone 100 may specifically include components such as: a processor 101, a radio frequency (RF) circuit 102, a memory 103, a touch screen 104, a Bluetooth device 105, one or more sensors 106, a WI-FI device 107, a positioning device 108, an audio circuit 109, a peripheral interface 110, a power system 111, and a fingerprint recognizer 112. These components can communicate over one or more communication buses or signal lines (not shown in FIG. 2). It will be understood by those skilled in the art that the hardware structure shown in FIG. 2 does not constitute a limitation on the mobile phone 100; the mobile phone 100 may include more or fewer components than illustrated, combine some components, or use a different component arrangement.
  • the processor 101 is the control center of the mobile phone 100; it connects the various parts of the mobile phone 100 using various interfaces and lines, and executes the various functions of the mobile phone 100 and processes data by running or executing applications stored in the memory 103 and calling data and instructions stored in the memory 103.
  • processor 101 may include one or more processing units; processor 101 may also integrate an application processor and a modem processor; wherein the application processor primarily processes operating systems, user interfaces, applications, etc.
  • the modem processor primarily handles wireless communications. It can be understood that the above modem processor may not be integrated into the processor 101.
  • the processor 101 may be a Kirin 960 multi-core processor manufactured by Huawei Technologies Co., Ltd.
  • the radio frequency circuit 102 can be used to receive and transmit wireless signals during the transmission or reception of information or during a call. Specifically, the radio frequency circuit 102 can receive downlink data from a base station and deliver it to the processor 101 for processing, and send uplink data to the base station.
  • radio frequency circuits include, but are not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • the radio frequency circuit 102 can also communicate with other devices through wireless communication.
  • the wireless communication can use any communication standard or protocol, including but not limited to global mobile communication systems, general packet radio services, code division multiple access, wideband code division multiple access, long term evolution, email, short message service, and the like.
  • the memory 103 is used to store applications and data, and the processor 101 executes various functions and data processing of the mobile phone 100 by running applications and data stored in the memory 103.
  • the memory 103 mainly includes a program storage area and a data storage area; the program storage area can store an operating system and the applications required for at least one function (such as a sound playing function or an image playing function), and the data storage area can store data created according to the use of the mobile phone 100 (such as audio data and a phone book).
  • the memory 103 may include a high speed random access memory, and may also include a nonvolatile memory such as a magnetic disk storage device, a flash memory device, or other volatile solid state storage device.
  • the memory 103 can store various operating systems, such as the operating system developed by Apple Inc. or the operating system developed by Google Inc.
  • Touch screen 104 can include touch sensitive surface 104-1 and display 104-2.
  • the touch-sensitive surface 104-1 eg, a touch panel
  • the touch-sensitive surface 104-1 can collect touch events performed by the user on or near the mobile phone 100 (for example, an operation performed by the user with a finger, a stylus, or any other suitable object on or near the touch-sensitive surface 104-1), and transmit the collected touch information to another device such as the processor 101.
  • a touch event performed by the user near the touch-sensitive surface 104-1 may be referred to as a hovering touch; a hovering touch means that the user does not need to directly touch the touch pad to select, move, or drag a target (for example, an icon), but only needs to be located near the terminal to perform the desired function.
  • the touch-sensitive surface 104-1 capable of floating touch can be realized by capacitive, infrared light, ultrasonic, or the like.
  • the touch sensitive surface 104-1 can include two portions of a touch detection device and a touch controller.
  • the touch detection device detects the user's touch orientation, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends them to the processor 101; the touch controller can also receive instructions from the processor 101 and execute them.
  • the touch sensitive surface 104-1 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • a display (also referred to as display) 104-2 can be used to display information entered by the user or information provided to the user as well as various menus of the mobile phone 100.
  • the display 104-2 can be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
  • the touch-sensitive surface 104-1 can be overlaid on the display 104-2; when the touch-sensitive surface 104-1 detects a touch event on or near it, the event is transmitted to the processor 101 to determine its type, and the processor 101 may then provide a corresponding visual output on the display 104-2 depending on the type of touch event.
  • although the touch-sensitive surface 104-1 and the display screen 104-2 are implemented as two separate components to implement the input and output functions of the mobile phone 100, in some embodiments the touch-sensitive surface 104-1 may be integrated with the display screen 104-2 to implement the input and output functions of the mobile phone 100.
  • the touch screen 104 is formed by stacking a plurality of layers of materials. In the embodiment of the present application, only the touch-sensitive surface (layer) and the display screen (layer) are shown, and other layers are not described in the embodiment of the present application.
  • the touch-sensitive surface 104-1 can be overlaid on the display 104-2, and the size of the touch-sensitive surface 104-1 can be greater than that of the display 104-2 so that the display 104-2 is completely covered by the touch-sensitive surface 104-1; alternatively, the touch-sensitive surface 104-1 may be disposed on the front side of the mobile phone 100 in a full-panel form, that is, any touch by the user on the front of the mobile phone 100 can be perceived by the phone, achieving a full-touch experience on the front of the phone.
  • in other embodiments, the touch-sensitive surface 104-1 is disposed on the front side of the mobile phone 100 in a full-panel form, and the display screen 104-2 may also be disposed on the front side of the mobile phone 100 in a full-panel form, so that the front side of the mobile phone can achieve a borderless structure.
  • in addition, the touch screen 104 may further include one or more sets of sensor arrays, so that the touch screen 104 can sense the pressure exerted by the user while sensing a touch event on it, and so on.
  • the mobile phone 100 can also include a Bluetooth device 105 for enabling data exchange between the handset 100 and other short-range terminals (eg, mobile phones, smart watches, etc.).
  • the Bluetooth device in the embodiment of the present application may be an integrated circuit or a Bluetooth chip or the like.
  • the handset 100 can also include at least one type of sensor 106, such as a light sensor, motion sensor, and other sensors.
  • the light sensor can include an ambient light sensor and a proximity sensor.
  • the ambient light sensor can adjust the brightness of the display of the touch screen 104 according to the brightness of the ambient light
  • the proximity sensor can turn off the power of the display when the mobile phone 100 moves to the ear.
  • as one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in all directions (usually three axes), and can detect the magnitude and direction of gravity when stationary; it can be used for applications that identify the phone's attitude (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and for vibration-recognition functions (such as a pedometer or tap detection).
  • the mobile phone 100 can also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described in detail here.
  • the mobile phone 100 may also have a fingerprint recognition function.
  • a fingerprint sensor can be placed on the back of the handset 100 (e.g., below the rear camera) or on the front side of the handset 100 (e.g., below the touch screen 104).
  • the fingerprint recognition function can also be implemented by configuring the fingerprint sensor in the touch screen 104, that is, the fingerprint sensor can be integrated with the touch screen 104 to implement the fingerprint recognition function of the mobile phone 100.
  • the fingerprint sensor may be disposed in the touch screen 104, may be part of the touch screen 104, or may be otherwise disposed in the touch screen 104.
  • the fingerprint sensor can also be implemented as a full-board fingerprint reader, so that the touch screen 104 can be viewed as a panel that can be fingerprinted at any location.
  • the fingerprint sensor can send the collected fingerprint to the processor 101 for the processor 101 to process the fingerprint (eg, fingerprint verification, etc.).
  • the fingerprint sensor in the embodiments of the present application may employ any type of sensing technology, including but not limited to optical, capacitive, piezoelectric or ultrasonic sensing technologies.
  • a specific technical solution for integrating a fingerprint sensor in a touch screen according to an embodiment of the present application can be found in the PCT Patent Application No. PCT/CN2017/084602, entitled "Input Method and Terminal", the entire contents of which are incorporated herein by reference. Apply in various embodiments.
  • the WI-FI device 107 is configured to provide the mobile phone 100 with network access complying with the WI-FI related standard protocol, and the mobile phone 100 can access the WI-FI access point through the WI-FI device 107, thereby helping the user to send and receive emails. Browsing web pages and accessing streaming media, etc., it provides users with wireless broadband Internet access.
  • the WI-FI device 107 can also function as a WI-FI wireless access point, and can provide WI-FI network access for other terminals.
  • the positioning device 108 is configured to provide a geographic location for the mobile phone 100. It can be understood that the positioning device 108 may specifically be a receiver of a positioning system such as the global positioning system (GPS) or the BeiDou satellite navigation system. After receiving the geographic location sent by the positioning system, the positioning device 108 sends the information to the processor 101 for processing, or to the memory 103 for storage. In some other embodiments, the positioning device 108 may be an assisted global positioning system (AGPS) receiver; AGPS is an operation mode that performs GPS positioning with certain assistance, and can use base station signals together with GPS satellite signals to make the mobile phone 100 locate faster. In an AGPS system, the positioning device 108 can obtain positioning assistance by communicating with an assisted positioning server (such as a mobile phone positioning server); the assisted positioning server helps the positioning device 108 perform ranging and positioning services by providing positioning assistance over a wireless communication network to the positioning device 108 (i.e., the GPS receiver) of a terminal such as the mobile phone 100.
  • the audio circuit 109, the speaker 113, and the microphone 114 can provide an audio interface between the user and the handset 100.
  • On the one hand, the audio circuit 109 can convert received audio data into an electrical signal and transmit it to the speaker 113, and the speaker 113 converts the electrical signal into a sound signal for output; on the other hand, the microphone 114 converts a collected sound signal into an electrical signal, which the audio circuit 109 receives and converts into audio data. The audio data is then output to the RF circuit 102 for transmission to, for example, another mobile phone, or output to the memory 103 for further processing.
  • The peripheral interface 110 is used to provide various interfaces for external input/output devices (such as a keyboard, a mouse, an external display, an external memory, and a subscriber identity module card). For example, it is connected to a mouse through a universal serial bus interface, and is electrically connected, through metal contacts in the card slot, to a subscriber identity module (SIM) card provided by a telecommunications carrier.
  • Peripheral interface 110 can be used to couple the external input/output peripherals described above to processor 101 and memory 103.
  • The mobile phone 100 may further include a power supply device 111 (such as a battery and a power management chip) that supplies power to the various components. The battery may be logically connected to the processor 101 through the power management chip, so that functions such as charging, discharging, and power consumption management are implemented through the power supply device 111.
  • the mobile phone 100 may further include a camera, a flash, a micro-projection device, a near field communication (NFC) device, and the like, which are not described herein.
  • The Android operating system is a Linux-based mobile device operating system that implements various functions in combination with the above hardware in the mobile phone 100. The software architecture of the Android operating system is described below. It should be noted that the embodiments of the present application use the Android operating system only as an example to illustrate the software environment required for the terminal to implement the technical solutions of the embodiments; those skilled in the art can understand that the embodiments of the present application can also be implemented on other operating systems.
  • FIG. 3 shows the software architecture of the Android operating system that can run on the above terminal.
  • the software architecture can be divided into four layers, namely the application layer, the application framework layer, the function library layer and the Linux kernel layer.
  • The application layer is the top layer of the operating system and includes the native applications of the operating system, such as the e-mail client, text messages, calls, calendar, browser, contacts, and more. Of course, developers can also write applications of their own and install them into this layer. In general, applications are developed in the Java language by calling the application programming interfaces (APIs) provided by the application framework layer.
  • The application framework layer mainly provides developers with the various APIs that can be used to access applications. Developers can interact with the underlying parts of the operating system (such as the function libraries and the Linux kernel) through the application framework to develop their own applications.
  • the application framework is primarily a series of service and management systems for the Android operating system.
  • the application framework mainly includes the following basic services:
  • Activity Manager: used to manage the application lifecycle and provide a common navigation back function;
  • Content Providers: used to manage data sharing and access between different applications;
  • Notification Manager: used to control how applications display prompt information (such as alerts and notifications) to the user in the status bar, lock-screen interface, and so on;
  • Resource Manager: provides non-code resources (such as strings, graphics, and layout files) for use by applications;
  • Clipboard Manager: mainly provides copy and paste functions within an application or between applications;
  • View: a rich, extensible collection of views that can be used to build an application, specifically including lists, grids, text views, buttons, and image views. The main function of an image view is to display pictures, and it is generally presented in the GUI as a non-editable control; the main function of a text view is to display strings, and it is generally presented in the GUI as an editable control;
  • Location Manager: mainly allows applications to access the geographic location of the terminal.
  • the function library layer is the support of the application framework, and is an important link between the application framework layer and the Linux kernel layer.
  • The function library layer includes libraries compiled from C or C++ code. These libraries can be used by different components in the operating system and serve developers through the application framework layer. The function library may include a libc library that is specifically tailored for embedded Linux-based devices; it may also include a multimedia library (Media Framework) that supports playback and recording of audio and video in multiple encoding formats, and also supports still image files and common audio and video encoding formats.
  • The function library also includes an interface management library (Surface Manager), which is mainly responsible for managing access to the display system; it is used to manage the interaction between display and access operations when multiple applications are executed, and is also responsible for compositing 2D and 3D drawing for display.
  • The function library layer may also include other libraries for implementing various functions of the mobile phone, for example: SGL (Scalable Graphics Library), a 2D graphics image processing engine based on XML (Extensible Markup Language) files; SSL (Secure Sockets Layer), which sits between the TCP/IP protocol and various application-layer protocols and provides support for data communication; OpenGL/ES for 3D effect support; SQLite, a relational database engine; Webkit, a web browser engine; FreeType for bitmap and vector font support; and so on.
  • Android Runtime is the runtime environment on the Android operating system and is a new virtual machine used by the operating system. Android Runtime uses AOT (Ahead-Of-Time) compilation technology: when an application is first installed, the application's bytecode is precompiled into machine code, making the application a true native application. When the application is run again, the compilation step is eliminated, and startup and execution become faster.
  • In other embodiments, the Android Runtime may also be replaced by the core function libraries (Core Libraries) and a Dalvik virtual machine.
  • The core function library provides most of the functionality of the Java language APIs and provides the application framework layer with an interface to call the underlying libraries, mainly through the Java Native Interface (JNI). It also contains some core APIs of the operating system, such as android.os, android.net, android.media, and so on.
  • the Dalvik virtual machine uses a JIT (Just-in-Time) runtime compilation mechanism. Each time a process is started, the virtual machine needs to recompile the bytecode in the background, which will have a certain impact on the startup speed.
  • Each application runs in an instance of a Dalvik virtual machine, and each Dalvik virtual machine instance is a separate process space.
  • the Dalvik virtual machine is designed to run multiple virtual machines efficiently on a single device.
  • the Dalvik virtual machine executable file format is .dex.
  • The dex format is a compression format designed for Dalvik and is suitable for systems with limited memory and processor speed. It should be noted that the Dalvik virtual machine relies on the Linux kernel to provide basic functions (threading, underlying memory management). It can be understood that Android Runtime and the Dalvik virtual machine are different types of virtual machines, and those skilled in the art can select different types of virtual machines in different situations.
  • This layer provides the core system services of the operating system, such as security, memory management, process management, network protocol stack and driver model, all based on the Linux kernel.
  • the Linux kernel also acts as an abstraction layer between the hardware and software stacks. There are many mobile device related drivers at this layer.
  • The main drivers include: the display driver; a Linux-based frame buffer driver; a keyboard driver as an input device; a flash driver based on the memory technology device; a camera driver; an audio driver; a WI-FI driver; and so on.
  • The application framework layer may further include a clipboard manager (Clipboard Manager) for managing text information selected by the user in editable controls and providing functions such as copying and pasting of text and other information.
  • The Clipboard Manager can be obtained through the function getSystemService(CLIPBOARD_SERVICE), and the terminal manages the copying and pasting of data between two applications or within an application through the Clipboard Manager.
  • ClipData is the Clip object, which contains the data description information and the data itself.
  • the Clipboard only has one Clip object at a time. When another Clip object is acquired, the previous Clip object will no longer be saved in the Clipboard.
  • a Clip object can contain one or more ClipData.Item objects. Adding an Item object to a Clip object can be implemented by the function addItem(ClipData.Item item).
  • the data item in the Item object may specifically contain text, a uniform resource identifier (URI), or an Intent.
  • Multiple ClipData.Item objects can be added to one Clip object, which allows the user to copy multiple selected pieces of content into the same Clip object. For example, if there is a list widget that allows the user to select multiple options at a time, the Clipboard Manager can copy all the selected options to the clipboard at one time.
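As an illustration only, the single-primary-clip model described above can be sketched in plain Java. The class names below are hypothetical stand-ins invented for this sketch; the real Android classes are android.content.ClipData, ClipData.Item, and ClipboardManager.

```java
import java.util.ArrayList;
import java.util.List;

public class ClipboardSketch {
    // One data item; in Android a ClipData.Item can hold text, a URI, or an Intent.
    public static class ClipItem {
        public final String text;
        public ClipItem(String text) { this.text = text; }
    }

    // A clip holds description information plus one or more items.
    public static class ClipSketch {
        public final String description;
        public final List<ClipItem> items = new ArrayList<>();
        public ClipSketch(String description, ClipItem first) {
            this.description = description;
            items.add(first); // a clip always contains at least one item
        }
        // Mirrors addItem(ClipData.Item): multi-select adds several items to one clip.
        public void addItem(ClipItem item) { items.add(item); }
    }

    // The clipboard holds only one Clip object at a time.
    private ClipSketch primaryClip;

    // Mirrors setPrimaryClip(clip): the previous clip is no longer saved.
    public void setPrimaryClip(ClipSketch clip) { this.primaryClip = clip; }
    public ClipSketch getPrimaryClip() { return primaryClip; }

    public static void main(String[] args) {
        ClipboardSketch clipboard = new ClipboardSketch();
        ClipSketch clip = new ClipSketch("selected options", new ClipItem("option A"));
        clip.addItem(new ClipItem("option B")); // copy several selections at once
        clipboard.setPrimaryClip(clip);
        System.out.println(clipboard.getPrimaryClip().items.size()); // 2
    }
}
```

The point of the sketch is the replacement semantics: installing a new clip discards the previous one, exactly as described for the clipboard above.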
  • When running in the foreground, WeChat displays a chat interface with Mike; the interface includes a control 504 (return button icon), a control 505 (title bar), a control 506 (chat detail button icon), a control 507 (avatar icon), a control 508 (conversation content), a control 511 (voice input button icon), a control 512 (input box), and a control 513 (option button icon).
  • After WeChat obtains the above touch parameters from the operating system, it can determine that the user has performed a long-press operation on the control 508. Further, WeChat can obtain the clipboard service among the system services by calling the getSystemService(CLIPBOARD_SERVICE) function.
  • The terminal can display a text selection menu 510 and cursors for adjusting the selected text (the cursors include a first cursor 520a at the start position of the selected text and a second cursor 520b at the end position of the selected text).
  • the text selection menu 510 includes editing operations supported by the control 508, such as copy 510a, forwarding 510b, deleting 510c, and the like.
  • the clipboard service may call the addItem (ClipData.Item item) function to add the text between the first cursor 520a and the second cursor 520b as the target text selected by the user to the new clip.
  • the clipboard service puts a new clip object into the clipboard via the function clipManager.setPrimaryClip(clip).
  • The clipboard service can be called again to copy the target text saved in the Clip object in the clipboard to the position selected by the user, thereby completing the entire copy-and-paste operation.
  • In the above process, the user needs to drag the first cursor 520a or the second cursor 520b to select the desired target text, and over-selection or under-selection is likely to occur during dragging; as a result, the efficiency of the terminal in extracting the target text for the user is reduced.
  • FIG. 6 is a schematic flowchart of a method for selecting a text according to an embodiment of the present application.
  • the method may specifically include:
  • the terminal displays a graphical user interface on the touch screen.
  • The terminal receives a first gesture acting on the graphical user interface, where the first gesture includes a closed trajectory.
  • The graphical user interface displayed by the terminal includes text information (which may be referred to as target text in the embodiments of the present application).
  • The user may input the first gesture on the first control that includes the target text; the first gesture is used to instruct the mobile phone to select the target text corresponding to the region of the graphical user interface on which the first gesture acts.
  • the first gesture may be any gesture in which the motion track is a closed figure, or may be any gesture that the mobile phone can form a closed track in response to the first gesture.
  • the first gesture may specifically refer to a circle or frame selection operation in which a motion track of a user sliding a finger or a stylus in the touch screen can form a closed figure.
  • Of course, the first gesture may also refer to an operation in which the user, after touching the touch screen, taps, double-taps, or presses hard so that the mobile phone forms a closed trajectory; the embodiment of the present application does not impose any limitation on this.
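As a rough illustration of the first kind of first gesture, a terminal might decide that a slide gesture's motion track forms a (nearly) closed figure by checking whether the stroke returns close to its starting point. The point representation and the threshold value below are assumptions made for this sketch, not details taken from the embodiment.

```java
public class ClosedTrackCheck {
    // The track is treated as closed if its end point returns within `threshold`
    // of its start point and it has enough points to enclose an area.
    public static boolean isClosed(double[][] points, double threshold) {
        if (points.length < 4) return false; // too few points to enclose anything
        double dx = points[points.length - 1][0] - points[0][0];
        double dy = points[points.length - 1][1] - points[0][1];
        return Math.hypot(dx, dy) <= threshold;
    }

    public static void main(String[] args) {
        double[][] circle = {{0, 0}, {10, 0}, {10, 10}, {0, 10}, {1, 1}};
        double[][] line   = {{0, 0}, {10, 0}, {20, 0}, {30, 0}};
        System.out.println(isClosed(circle, 3.0)); // true: stroke ends near its start
        System.out.println(isClosed(line, 3.0));   // false: open stroke
    }
}
```

A production gesture recognizer would of course be more robust (self-intersection tests, smoothing), but the closure test above captures the idea of a circling gesture.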
  • Before receiving the first gesture, the terminal may first receive a second gesture that the user inputs on the graphical user interface (for example, on the first control in the graphical user interface), where the second gesture is used to start the function of circling text in the graphical user interface.
  • the second gesture can be a long press gesture.
  • the mobile phone displays a chat interface with Sara
  • the controls 504 to 513 in the chat interface are all visible controls.
  • the control 504, the control 505, and the control 508 all include text information.
  • the user can perform a long press operation (ie, a second gesture) on any of the controls 504, 505, and 508 (ie, the first control) as needed.
  • When the touch screen detects the long-press operation, it reports the detected touch parameters, such as the touch time, to the running WeChat application in the application layer (that is, the application to which the chat interface belongs), to instruct WeChat to start the text selection function in the chat interface so as to provide subsequent text editing services to the user.
  • WeChat may further determine whether the activated control 508 is an editable control. The editable property of a control is determined by its name or type: for example, text-view controls are editable controls, and image-view controls are non-editable controls. Therefore, WeChat can query the name or type of the control 508 by calling related APIs to determine whether the control 508 is an editable control.
  • the control containing the text information includes a control 504, a control 505, and a control 508.
  • the control 508 is an editable control of the text view type
  • the control 504 and the control 505 are non-editable controls of the image view type.
  • The mobile phone can call a system service that implements functions such as text selection, copying, and pasting, and extract the target text (i.e., the first target text) in the target area circled by the user by performing the following steps S603-S606.
  • The mobile phone may mark, by animation, sound, or highlighting, the target area that the user has circled within the first control through the first gesture.
  • The mobile phone may use two cursors to mark the start position and the end position of the first target text circled by the user with the first gesture.
  • the mobile phone may first convert the text information in the first control into an editable state.
  • Then the text editing function can be realized by performing the following steps S603-S606. That is to say, in the embodiments of the present application, the text editing function can also be implemented for application scenarios in which the text in a picture cannot be edited directly, thereby improving the operation efficiency of text editing on the mobile phone.
  • The mobile phone may further extract the text information included in the first control by using OCR technology. Subsequently, based on the text information extracted by OCR technology, the mobile phone may determine the first target text included in the target area circled by the first gesture.
  • Alternatively, the mobile phone may extract the text information included in a control based on OCR technology when generating or displaying the control, and store the text information in the content description field of the control.
  • In this case, the mobile phone may first call the interface View.getContentDescription() to query whether the content description field stores the text information of the first control. If the text information of the first control is stored, the mobile phone may extract the text information included in the first control from the content description field; if the text information of the first control is not stored, the mobile phone may recognize and extract the text information included in the first control by OCR technology.
  • The mobile phone can also display, in the current chat interface, one or more selection boxes 803 (e.g., a first selection box and a second selection box) for circling text.
  • the mobile phone may also display a prompt 804 prompting the user to circle the target area in the current chat interface.
  • After the mobile phone detects a gesture in which the user selects a selection box (for example, the selection box 803), the mobile phone can determine, in response to the gesture, the selection box selected by the user. At this time, the selection box selected by the user is used to circle the target text required by the user. Further, the terminal may receive a first gesture in which the user circles the target text using the selection box 803. In response to the first gesture, the mobile phone can form a closed trajectory in the graphical user interface in accordance with the shape of the selection box 803. At this time, the area corresponding to the closed trajectory is the target area 901 that the user desires to select, and the text information included in the target area 901 is the first target text 902.
  • Of course, without using the above selection box 803, the user can also manually draw a closed trajectory in the display interface shown in (a) of FIG. 8 as the first gesture input on the touch screen of the mobile phone.
  • When the mobile phone detects that the motion trajectory of the user's finger in the first control (e.g., the control 508) is a closed figure (i.e., the first gesture), then, after the mobile phone determines that the control 508 is an editable control, it can call the getSystemService(CLIPBOARD_SERVICE) function to obtain the clipboard service, and the Clipboard Manager in the clipboard service takes the closed area corresponding to the closed trajectory in the touch screen as the target area desired by the user, that is, the target area 901 shown in (c) of FIG. 8.
  • the terminal determines a target area corresponding to the closed trajectory in the user graphical interface.
  • The mobile phone may take the area of the graphical user interface corresponding to the closed trajectory as the target area selected by the user (for example, the area 901 in FIG. 8 or FIG. 9).
  • For example, the closed trajectory formed after the user performs the first gesture in the control 508 using the selection box is the boundary line of the selection box. The mobile phone can then use the position coordinates of the boundary line of the selection box at this time as the position coordinates of the target area 901, thereby determining the target area 901 corresponding to the closed trajectory in the graphical user interface.
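One plausible way to turn the boundary line of the closed trajectory into position coordinates for the target area, assuming (purely for this sketch) that the boundary is reported as a list of touch points and the target area is taken as their enclosing axis-aligned rectangle:

```java
public class TargetArea {
    // Returns {left, top, right, bottom} of the smallest axis-aligned
    // rectangle enclosing all points of the boundary trajectory.
    public static int[] boundingRect(int[][] boundaryPoints) {
        int left = Integer.MAX_VALUE, top = Integer.MAX_VALUE;
        int right = Integer.MIN_VALUE, bottom = Integer.MIN_VALUE;
        for (int[] p : boundaryPoints) {
            left = Math.min(left, p[0]);
            top = Math.min(top, p[1]);
            right = Math.max(right, p[0]);
            bottom = Math.max(bottom, p[1]);
        }
        return new int[] {left, top, right, bottom};
    }
}
```

A real implementation could instead hit-test characters against the exact closed polygon; the rectangle is simply the easiest target-area representation consistent with the rectangular areas (ABCD, etc.) used later in this description.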
  • In addition, at least one control block 903 may be disposed on the boundary of the target area 901, and the control block 903 is used to adjust the position or size of the target area 901.
  • If a third gesture in which the user drags the control block 903 is received, the terminal can adjust the position or size of the target area 901 according to the third gesture. For example, the terminal can expand or shrink the target area 901 in the direction in which the user drags the control block 903 to form an adjusted target area 901'; as the target area 901' changes, the first target text selected in it is correspondingly increased or decreased.
  • the terminal determines the first target text included in the target area.
  • In step S604, after determining the target area (for example, the target area 901) circled by the user with the first gesture, the mobile phone may call the interface View.getText() or View.getContentDescription() to obtain the specific text content included in the target area (i.e., the first target text).
  • For example, when the first control is a text-view control, the text field of the control stores all the text content within the control. The mobile phone can then extract, from all the text content, the first target text corresponding to the target area according to the coordinate information of the target area.
  • For example, the mobile phone can call the interface View.getText(), in conjunction with the target area 901 determined in step S603, to obtain the first target text 902 in the target area 901 circled by the user.
  • the content is:
  • In addition, the first target text in the target area may be marked by highlighting, bolding, or the like, so that the user can accurately know the selected specific text content and subsequently expand or reduce the selected text.
  • the first target text in the target area may also include text information in various languages, such as numbers, English letters, and words.
  • the embodiment of the present application does not impose any limitation on this.
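To make the relation between the target area and the first target text in steps S603-S604 concrete, the following toy sketch extracts the text falling inside a rectangular target area. It assumes, purely for illustration, a monospaced grid in which each character maps to one cell; a real implementation would use the per-character bounds from the text layout obtained via the interfaces named above.

```java
public class TextInArea {
    // lines: the control's full text, one string per displayed row.
    // rect = {leftCol, topRow, rightCol, bottomRow}, inclusive character coordinates.
    // Returns the concatenated text covered by the rectangular target area.
    public static String extract(String[] lines, int[] rect) {
        StringBuilder sb = new StringBuilder();
        for (int row = rect[1]; row <= rect[3] && row < lines.length; row++) {
            String line = lines[row];
            int end = Math.min(rect[2] + 1, line.length()); // clamp to line length
            if (rect[0] < end) sb.append(line, rect[0], end);
        }
        return sb.toString();
    }
}
```

Note how a rectangle that cuts through a line mid-word naturally produces the under-/over-selection that step S605 then corrects.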
  • the terminal performs semantic analysis on the first target text to determine a second target text, where the second target text is different from the first target text.
  • the terminal marks the second target text in the user graphical interface.
  • That is, the terminal may expand, or remove from the selection, the phrase that is split by the boundary of the target area in the first target text, to obtain a second target text different from the first target text.
  • the mobile phone may identify the to-be-corrected text whose semantic/information is incomplete in the first target text by using techniques such as semantic analysis or word segmentation.
  • the text to be corrected appearing in the first target text 902 is: “passport, sign”.
  • Text to be corrected with incomplete semantics or word meaning is mostly caused by the user under-selecting or over-selecting when circling the first target text.
  • The mobile phone can continue to extract the context of the text to be corrected (the context being outside the target area), for example, extract the character "certificate" that follows "passport, sign" (in the Chinese example, the characters "sign" and "certificate" together form the word "visa"). The mobile phone then determines whether the text to be corrected has complete semantics/word meaning after its context is added (i.e., "passport, visa"). If it does, as shown in FIG. 11, the mobile phone can automatically expand the context "certificate" of the text to be corrected into the selected text; at this time, the selected text is the second target text 1001, which includes, in addition to the first target text, the context of the text to be corrected.
  • Specifically, a dictionary for word segmentation may be preset in the mobile phone, and commonly used Chinese words, phrases, or English words may be stored in the dictionary. After the mobile phone obtains the context "certificate" of the text to be corrected, it can search the dictionary for the word "visa" formed by "sign" and "certificate"; if the word is included, it indicates that the user under-selected when circling the first target text, that is, the word has complete meaning after the character "certificate" is added following "passport, sign" in the first target text. Therefore, the mobile phone can add the character "certificate" to the first target text to obtain the second target text.
  • In addition, the mobile phone can also update commonly used or user-defined words into the above dictionary according to the user's input habits when using the input method, so as to improve the accuracy with which the mobile phone automatically assists the user in expanding the target text.
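The dictionary-based expansion of an under-selected word can be sketched as follows. The method name and the dictionary contents are illustrative assumptions; in the running example, the characters for "sign" (签) and "certificate" (证) together form the Chinese word for "visa" (签证).

```java
import java.util.Set;

public class SelectionCorrector {
    // Returns the selected text, extended by the character that follows the target
    // area if the last selected character plus that character forms a dictionary
    // word (under-selection detected); otherwise returns the selection unchanged.
    public static String expandIfIncomplete(String selected, char following, Set<String> dictionary) {
        if (selected.isEmpty()) return selected;
        String candidate = "" + selected.charAt(selected.length() - 1) + following;
        return dictionary.contains(candidate) ? selected + following : selected;
    }

    public static void main(String[] args) {
        Set<String> dict = Set.of("签证"); // "visa" = "sign" + "certificate"
        // "passport, sign" + following "certificate" -> extended to "passport, visa"
        System.out.println(SelectionCorrector.expandIfIncomplete("护照，签", '证', dict));
        // No dictionary match: the selection stays as it is.
        System.out.println(SelectionCorrector.expandIfIncomplete("护照", '证', dict));
    }
}
```

A fuller implementation would test multi-character suffixes and prefixes against the segmentation dictionary rather than a single boundary pair; this sketch only shows the lookup-then-extend decision.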
  • In other embodiments, the mobile phone may also send the first target text to a server and request the server to identify the text to be corrected in the first target text that does not have complete semantics/word meaning.
  • Alternatively, the mobile phone can send all the text information in the first control to the server, request the server to determine the words to be corrected in the first control that need to be expanded, and then, according to the result fed back by the server, expand the phrase split by the boundary of the target area in the first target text to obtain the second target text.
  • the text “passport, sign” to be corrected appears in the first target text 902.
  • At this time, the mobile phone may continue to extract the context of the text to be corrected outside the selected target area, and by searching the above dictionary the mobile phone can determine that the character "certificate" needs to be expanded into the first target text.
  • Then, taking the row 1101 and the column 1102 where the character "certificate" is located as boundaries, the mobile phone automatically expands the text within the closed region formed by the first target text and the row 1101 and column 1102 into the second target text 1103.
  • The text content of the second target text 1103 obtained after the automatic expansion is:
  • For example, the target area 901 of the first target text circled by the user is a rectangle, and the mobile phone can obtain the coordinate values of the four vertices A, B, C, and D of the rectangle in the touch screen. Similarly, the coordinate point E(x, y) of the character "certificate" in the touch screen can be obtained.
  • The mobile phone can determine that the vertex of the rectangle ABCD closest to point E is point D; the expanded rectangular target area can then use point E, instead of point D, as a vertex, while the coordinate values of the vertex A on the same diagonal as point E remain unchanged. Then, based on points A and E, the mobile phone can determine the two vertices on the other diagonal of the expanded rectangular target area, namely the intersection point B of the row where point E is located with side AB, and the intersection point F of the column where point E is located with the extension line of side AC. An expanded rectangular target area with vertices A, B, E, and F is thereby obtained, and the text in this rectangular target area is the second target text 1103.
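For an axis-aligned rectangle, the vertex-replacement construction above amounts to taking the bounding rectangle of the original rectangle ABCD and the point E: the vertex nearest E is pulled out to E while the opposite-diagonal vertex keeps its coordinates. A minimal sketch, with the {left, top, right, bottom} coordinate convention assumed for illustration:

```java
public class AreaExpansion {
    // rect = {left, top, right, bottom}; e = {x, y} is the point (e.g. the
    // character "certificate") to be included. Returns the expanded rectangle,
    // i.e. the bounding rectangle of rect and e.
    public static int[] expandToInclude(int[] rect, int[] e) {
        return new int[] {
            Math.min(rect[0], e[0]), Math.min(rect[1], e[1]),
            Math.max(rect[2], e[0]), Math.max(rect[3], e[1])
        };
    }
}
```

When E lies below and to the right of the rectangle, this reproduces the construction in the text: the nearest corner moves to E, the opposite corner A is unchanged, and the other two corners slide along the row and column through E.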
  • Subsequently, the mobile phone may repeat the above method to continue word segmentation or semantic analysis on the expanded second target text, thereby correcting over-selected or under-selected text in the second target text.
  • Correspondingly, when the mobile phone determines that text with incomplete semantics/word meaning appears in the first target text (for example, "passport, sign" in FIG. 9), if the character "sign" is followed by punctuation or a new paragraph, the mobile phone can also automatically delete the extra character "sign" from the text to be corrected, thereby correcting the over-selection that occurred when the user circled the first target text.
  • It can be seen that, based on the first target text circled by the user in the display interface, the mobile phone can correct, by means of text extraction, semantic analysis, and the like, the text to be corrected that the user under-selected or over-selected in the first target text, and obtain the corrected second target text, thereby improving the accuracy and operation efficiency of selecting the target text in the display interface.
  • Based on the corrected second target text, the mobile phone can further perform text editing operations such as copying, deleting, and translating, so that the operation efficiency of these text editing operations is also improved.
  • Further, if the second target text is still not the target text desired by the user, the user may manually expand or reduce the second target text by clicking or dragging a cursor; that is, after step S606, the terminal can also perform the following steps S608-S609 or S610-S611.
  • the terminal receives a click operation on the first character, where the first character is text other than the second target text in the user graphical interface.
  • The terminal expands the second target text into a third target text, taking the row and column where the first character is located as boundaries.
  • As described in step S608, the user may input a click operation at a first character, outside the second target text, to which the user wishes to extend the selection; that is, the target area is to be extended from the area where the second target text is located to an area containing the above first character.
  • The control 508 in the chat interface is still used as an example of the first control, and the mobile phone has automatically expanded the first target text 902 circled by the user into the second target text 1103. If the user wishes to continue to expand the second target text 1103, the user may click, in the control 508, the last character of the desired target text; for example, when the user clicks the character "ticket" 1201 in the control 508, the character "ticket" 1201 is the above first character.
  • In step S609, in response to the user's click operation on the character "ticket" 1201, the mobile phone may, taking the row and column where the character "ticket" 1201 is located as boundaries, automatically expand the text within the closed region formed by the second target text 1103 and that row and column into a third target text 1202.
  • The first cursor 801 is located at the start position of the third target text 1202, the second cursor 802 is located at the end position of the third target text 1202, and the third target text 1202 is highlighted to indicate that the third target text 1202 is in the selected state.
  • the target area where the second target text 1103 is located is a rectangle whose four vertices are A, B, E, and F.
  • when the mobile phone detects that the user clicks the word "ticket" 1201 in the control 508, the mobile phone can detect that the word "ticket" 1201 is located at point E' on the touch screen.
  • the mobile phone can determine that, of the four vertices A, B, E, and F, the one closest to point E' is point E; the mobile phone can then use point E', instead of point E, as a vertex of the expanded rectangular target area.
  • the mobile phone can further determine the other two vertices of the expanded rectangular target area, namely point B', where the line through point E' intersects the extension of AB, and point F', where the line through point E' intersects the extension of EF, thereby obtaining the expanded target area with vertices A, B', E', and F'.
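With axis-aligned screen coordinates, the vertex computation above can be sketched as follows; this is a geometric model only, assuming vertices are (x, y) tuples with vertex A diagonally opposite E, and the function name is illustrative.

```python
def expand_rectangle(vertices, e_prime):
    """Expand a rectangular target area A-B-E-F toward a touch point E':
    replace the nearest vertex (E) with E', keep the diagonally opposite
    vertex (A) fixed, and recompute B' and F' as the intersections of the
    extended sides with the lines through E' (step S609, simplified)."""
    # Vertex of the current target area closest to the touch point E'.
    nearest = min(vertices,
                  key=lambda v: (v[0] - e_prime[0]) ** 2 + (v[1] - e_prime[1]) ** 2)
    # The vertex sharing neither coordinate with `nearest` is the fixed anchor A.
    anchor = next(v for v in vertices if v[0] != nearest[0] and v[1] != nearest[1])
    b_prime = (e_prime[0], anchor[1])   # on the extension of side AB
    f_prime = (anchor[0], e_prime[1])   # on the extension of side EF
    return [anchor, b_prime, e_prime, f_prime]

# A=(0,0), B=(4,0), E=(4,2), F=(0,2); the user taps E'=(6,3) outside the area
print(expand_rectangle([(0, 0), (4, 0), (4, 2), (0, 2)], (6, 3)))
```

For the example above, the expanded area is [(0, 0), (6, 0), (6, 3), (0, 3)]: A stays put, E becomes E', and B' and F' follow from the side extensions.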
  • steps S610-S611 describe another method by which the user can manually expand the second target text.
  • the terminal receives a drag operation in which the user drags a cursor, where the cursor is located at the start position or the end position of the second target text.
  • the terminal expands the second target text into the third target text in units of phrases.
  • the mobile phone may display the first cursor 801 at the start position of the second target text 1101 obtained after the expansion, and display the second cursor 802 at the end position of the second target text 1101. Then, the user can continue to expand the selection on the basis of the second target text 1101 by dragging the first cursor 801 (or the second cursor 802).
  • the user drags the second cursor 802 backward from the end position "certificate" of the second target text 1101 to perform the drag operation.
  • after detecting the drag operation, the mobile phone can query a preset dictionary to determine whether the text currently expanded by the user is a phrase. For example, as shown in (b) of FIG. 14, when the user drags the second cursor 802 and moves the finger to the word "jian" in the control 508, the expanded word "jian" does not belong to any phrase in the dictionary; the mobile phone therefore does not add the word "jian" to the text selected by the user, and the second cursor 802 need not respond.
  • in this way, when the mobile phone manually expands the target text in response to the user dragging the cursor, the selection is expanded in units of phrases, which effectively reduces over-selection or under-selection while the user drags the cursor to expand the selected text, and improves the efficiency with which the mobile phone extracts text for the user.
  • similarly, the mobile phone can also deselect text in units of phrases, thereby reducing over-selection or under-selection when the user drags the cursor to deselect text.
  • the user drags the second cursor 802 forward from the end position "certificate" of the second target text 1101 to a position behind the word "sign" to perform the deselection function.
  • after the mobile phone detects the drag operation, the preset dictionary can be queried to determine whether the selected text at the current location of the second cursor 802 is a phrase, still as shown in (a) of FIG.
  • during the drag operation, the mobile phone may hide the dragged cursor.
  • after the drag operation ends, the mobile phone can redisplay the hidden cursor.
  • this avoids the degraded user experience that would occur if, because the mobile phone expands or deselects text in units of phrases, the cursor did not follow the user's drag operation.
  • of course, the mobile phone can also deselect the selected text word by word in response to the user's drag operation.
  • this embodiment of the present application imposes no restriction on this.
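The phrase-unit expansion of steps S610-S611 can be modeled with a preset dictionary; this is a minimal sketch in which the dictionary is a plain set of word sequences, the text is pre-tokenized, and the cursor advances only when the newly dragged-over words complete a phrase (dictionary contents and names are illustrative, not from the patent).

```python
# illustrative preset dictionary; a real one would hold many phrases
PHRASE_DICT = {"train", "ticket", "train ticket", "identity card"}

def snap_selection_end(words, start, dragged_end):
    """Return the furthest word boundary, not beyond the dragged position,
    at which every newly covered unit is a phrase in the dictionary;
    if the last dragged-over words form no phrase, the cursor stays put."""
    end = start
    for i in range(start, min(dragged_end, len(words))):
        if " ".join(words[end:i + 1]) in PHRASE_DICT:
            end = i + 1   # phrase boundary: the cursor may advance to here
    return end

words = "I bought a train ticket yesterday".split()
# selection currently ends after "a"; the finger is dragged over "train ticket yesterday"
print(snap_selection_end(words, 3, 6))   # stops after "ticket": "yesterday" is no phrase
```

Deselection works symmetrically: the end position only retreats to boundaries at which the remaining selection still ends on a dictionary phrase.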
  • after the mobile phone extracts the first target text in the target area, or expands the first target text into the second target text, or expands the second target text into the third target text, the mobile phone can also display a text selection menu 510 for editing the extracted text content, with options such as copy 510a, forward 510b, delete 510c, and the like.
  • the Clipboard Manager running on the mobile phone can call the addItem(ClipData.Item item) function to add the extracted target text to a new clip object. The Clipboard Manager then puts the new clip object onto the clipboard by calling clipManager.setPrimaryClip(clip), completing the copy operation. Subsequently, when the mobile phone detects that the user performs a paste operation, the Clipboard Manager can take the stored clip object (that is, the target text) out of the clipboard and paste it into the input box specified by the user, completing the paste operation.
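Outside Android, the copy-then-paste sequence above can be mirrored with a small stand-in clipboard; this sketch only models the described call sequence (addItem, setPrimaryClip, then retrieval on paste) and is not the real Android ClipboardManager API.

```python
class ClipData:
    """Stand-in for Android's ClipData: a clip holding one or more text items."""
    def __init__(self):
        self.items = []

    def add_item(self, item):            # mirrors addItem(ClipData.Item item)
        self.items.append(item)


class ClipboardManager:
    """Stand-in for the Clipboard Manager described in the text above."""
    def __init__(self):
        self._primary_clip = None

    def set_primary_clip(self, clip):    # mirrors clipManager.setPrimaryClip(clip)
        self._primary_clip = clip        # copy: place the clip on the clipboard

    def paste_into(self, input_box):     # paste: take the stored items out again
        if self._primary_clip is not None:
            input_box.extend(self._primary_clip.items)
        return input_box


# copy the extracted target text, then paste it into a user-specified input box
clip = ClipData()
clip.add_item("second target text")
manager = ClipboardManager()
manager.set_primary_clip(clip)
print(manager.paste_into([]))   # ['second target text']
```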
  • the above terminal and the like include corresponding hardware structures and/or software modules for performing each function.
  • a person skilled in the art should readily appreciate that, in combination with the example elements and algorithm steps described in the embodiments disclosed herein, the embodiments of the present application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the embodiments of the present application.
  • the embodiment of the present application may divide the terminal and the like into function modules according to the foregoing method examples.
  • each function module may be divided according to each function, or two or more functions may be integrated into one processing module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules. It should be noted that the division of the module in the embodiment of the present application is schematic, and is only a logical function division, and the actual implementation may have another division manner.
  • FIG. 16 is a schematic diagram of a possible structure of the terminal involved in the foregoing embodiments, where the terminal is used to implement the methods described in the foregoing method embodiments and specifically includes: a display unit 1601, an obtaining unit 1602, a determining unit 1603, and a correcting unit 1604.
  • the display unit 1601 is configured to support the terminal in executing the processes S601 and S606 shown in FIG. 6; the obtaining unit 1602 is configured to support the terminal in executing the processes S602, S608, and S610 shown in FIG. 6; the determining unit 1603 is configured to support the terminal in executing the processes S604-S605 shown in FIG. 6; and the correcting unit 1604 is configured to support the terminal in executing the processes S605, S609, and S611 shown in FIG. 6. For all related content of the steps involved in the foregoing method embodiments, reference may be made to the functional descriptions of the corresponding functional modules; details are not described herein again.
  • the above-described determining unit 1603 and correcting unit 1604 can be integrated into a processing module, the display unit 1601 serves as an output module, and the above-mentioned obtaining unit 1602 serves as an input module.
  • the terminal may further include a storage module and a communication module.
  • FIG. 17 shows a possible schematic structural diagram of the terminal involved in the foregoing embodiments, including a processing module 1701, a communication module 1702, an input/output module 1703, and a storage module 1704.
  • the processing module 1701 is configured to control and manage the actions of the terminal.
  • the communication module 1702 is for supporting communication of the terminal with other network entities such as servers or other terminals.
  • the input/output module 1703 is for receiving information input by a user or outputting information provided to the user and various menus of the terminal.
  • the storage module 1704 is configured to save program codes and data of the terminal.
  • the processing module 1701 may be a processor or a controller, for example, a central processing unit (CPU), a GPU, a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure.
  • the processor may also be a combination implementing computing functions, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
  • the communication module 1702 can be a transceiver, a transceiver circuit, an input/output device, a communication interface, or the like.
  • the communication module 1702 can be specifically a Bluetooth device, a Wi-Fi device, a peripheral interface, or the like.
  • the memory module 1704 can be a memory, which can include high-speed random access memory (RAM) and can also include non-volatile memory, such as a magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • the input/output module 1703 can be an input/output device such as a touch screen, a keyboard, a microphone, and a display.
  • the display may specifically be configured in the form of a liquid crystal display, an organic light-emitting diode display, or the like.
  • a touch panel can be integrated on the display for collecting touch events on or near it and transmitting the collected touch information to another device (such as a processor).
  • a terminal is provided, which may include: a touch screen 1801, where the touch screen 1801 includes a touch-sensitive surface 1806 and a display screen 1807; one or more processors 1802; a memory 1803; a plurality of applications 1808; and one or more computer programs 1804, where the above components may be connected by one or more communication buses 1805.
  • the one or more computer programs 1804 are stored in the memory 1803 and configured to be executed by the one or more processors 1802, and the one or more computer programs 1804 include instructions that can be used to execute the steps in FIG. 6 and the corresponding embodiments.
  • the computer program product includes one or more computer instructions.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • the computer instructions can be stored in a computer-readable storage medium or transferred from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions can be transferred from a website, computer, server, or data center to another website, computer, server, or data center by wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) means.
  • the computer readable storage medium can be any available media that can be accessed by a computer or a data storage device such as a server, data center, or the like that includes one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).

Abstract

Embodiments of the present invention relate to the field of communications and provide a text selection method and a terminal, capable of reducing the occurrence, during text selection, of selecting more or less text than desired, and of improving the operating efficiency of a terminal during text selection. The method comprises the following steps: a terminal displays a graphical user interface on a touch screen; the terminal receives a first gesture acting on the graphical user interface, the first gesture comprising a closed trajectory; in response to the first gesture, the terminal determines a target region corresponding to the closed trajectory in the graphical user interface; the terminal determines a first target text contained in the target region; the terminal performs semantic analysis on the first target text to determine a second target text, the second target text being different from the first target text; and the terminal marks the second target text in the graphical user interface.
PCT/CN2018/099447 2018-01-11 2018-08-08 Procédé de sélection de texte et terminal WO2019136964A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201810025128 2018-01-11
CN201810025128.1 2018-01-11
CN201810327466.0 2018-04-12
CN201810327466.0A CN110032324B (zh) 2018-01-11 2018-04-12 一种文本选中方法及终端

Publications (1)

Publication Number Publication Date
WO2019136964A1 true WO2019136964A1 (fr) 2019-07-18

Family

ID=67218834

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/099447 WO2019136964A1 (fr) 2018-01-11 2018-08-08 Procédé de sélection de texte et terminal

Country Status (1)

Country Link
WO (1) WO2019136964A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102349046A (zh) * 2009-03-12 2012-02-08 诺基亚公司 用于选择文本信息的方法和设备
US20150212707A1 (en) * 2014-01-29 2015-07-30 Social Commenting, Llc Computer System and Method to View and Edit Documents from an Electronic Computing Device Touchscreen
CN105094626A (zh) * 2015-06-26 2015-11-25 小米科技有限责任公司 文本内容选择方法及装置
CN105653160A (zh) * 2016-02-25 2016-06-08 努比亚技术有限公司 一种文本确定方法和终端

Similar Documents

Publication Publication Date Title
CN110032324B (zh) 一种文本选中方法及终端
US20220075518A1 (en) Fast Data Copying Method and Electronic Device
KR102113272B1 (ko) 전자장치에서 복사/붙여넣기 방법 및 장치
US8836652B2 (en) Touch event model programming interface
KR102083209B1 (ko) 데이터 제공 방법 및 휴대 단말
WO2019178869A1 (fr) Procédé d'ouverture de compte de carte esim et terminal
US11681432B2 (en) Method and terminal for displaying input method virtual keyboard
US20150220239A1 (en) Global keyboard shortcuts management for web applications
AU2010327453A1 (en) Method and apparatus for providing user interface of portable device
WO2016168983A1 (fr) Procédé et dispositif électronique d'affichage de page web
WO2019200588A1 (fr) Procédé d'affichage lors de la sortie d'une application, et terminal
US20150378549A1 (en) Light dismiss manager
EP4193253A1 (fr) Identification et présentation de caractéristique intelligente
WO2020006669A1 (fr) Procédé de commutation d'icônes, procédé d'affichage de gui, et dispositif électronique
CN108780400B (zh) 数据处理方法及电子设备
US11243679B2 (en) Remote data input framework
WO2021244459A1 (fr) Procédé d'entrée et dispositif électronique
WO2022052677A1 (fr) Procédé d'affichage d'interface et dispositif électronique
WO2019136964A1 (fr) Procédé de sélection de texte et terminal
NL2024634B1 (en) Presenting Intelligently Suggested Content Enhancements
KR20150009035A (ko) 메모 기능이 연동된 메시지 기능 운용 방법 및 장치
CN115698988A (zh) 用于经由远程浏览器实例查看不兼容网页的系统和方法
WO2020000276A1 (fr) Procédé et terminal de commande de bouton de raccourci
US20230153133A1 (en) System and Method of Providing Access to and Managing Virtual Desktops
WO2022154880A1 (fr) Aide à la configuration de bout en bout pour services en nuage

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18900416

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18900416

Country of ref document: EP

Kind code of ref document: A1