CN113835532A - Text input method and system - Google Patents

Text input method and system

Info

Publication number
CN113835532A
CN113835532A (application CN202010564896.1A)
Authority
CN
China
Prior art keywords
user
input
text
phrase
phrases
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010564896.1A
Other languages
Chinese (zh)
Inventor
许兴旺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Bilibili Technology Co Ltd
Original Assignee
Shanghai Bilibili Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Bilibili Technology Co Ltd filed Critical Shanghai Bilibili Technology Co Ltd
Publication of CN113835532A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • G06F3/0236Character input methods using selection techniques to select from displayed items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a text input method comprising the following steps: receiving an activation operation from a user in an area where text is to be entered; obtaining commonly used phrases corresponding to the user and displaying them; receiving the user's selection of at least one of the commonly used phrases; and combining the selected phrases into text and filling the text into the text input area activated by the activation operation, where the user can edit, send, or store it. The application also discloses a text input system, an electronic device, and a computer-readable storage medium. The input box and fixed input area of a traditional input method can thus be dispensed with: the user assembles the required text directly on the screen by selecting phrases, providing a freer and easier text input mode and improving the user experience.

Description

Text input method and system
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a text input method, a text input system, an electronic device, and a computer-readable storage medium.
Background
With the popularization and development of computer technology, users frequently need to input text on electronic devices, and input methods have become important tools for interacting with them. Users across different professional fields, with different interests and usage habits, place ever higher demands on the intelligence of input modes.
The current input mode of electronic devices relies mainly on a physical or virtual keyboard. Whenever the system invokes an input method, it must pop up an input box and display the input method's text entry area (such as a keyboard) and candidate area on the screen, sometimes even switching the display style of the input box. The content originally displayed on the screen is therefore rearranged to make room for the text entry and candidate areas. This input and display mode gives the user a poor experience, especially on crowded screen layouts.
It should be noted that the above-mentioned contents are not intended to limit the scope of protection of the application.
Disclosure of Invention
The present application provides a text input method, system, electronic device, and computer-readable storage medium, and aims to offer a simple, easy-to-use way of entering text without a keyboard or a fixed input area.
In order to achieve the above object, an embodiment of the present application provides a text input method, where the method includes:
receiving an activation operation from the user in an area where text is to be entered;
obtaining commonly used phrases corresponding to the user and displaying them;
receiving the user's selection of at least one of the commonly used phrases; and
combining the phrases selected by the user into text and filling the text into the text input area activated by the activation operation.
Optionally, after displaying the commonly used phrases, the method further includes:
receiving new phrases input by the user through voice, and displaying the new phrases together with the commonly used phrases on the same interface, so as to receive the user's selection of at least one phrase among the commonly used phrases or the new phrases.
Optionally, receiving new phrases input by the user through voice and displaying them with the commonly used phrases on the same interface includes:
receiving a sentence spoken by the user and converting it into text;
automatically segmenting the sentence into one or more phrases; and
displaying the phrases obtained by segmentation, as the new phrases, together with the commonly used phrases on the same interface.
Optionally, the activation operation includes: clicking an input box of an input method, clicking a specific icon, or long-pressing the area where text is to be entered.
Optionally, obtaining the commonly used phrases corresponding to the user includes:
obtaining the commonly used phrases according to the content of the current application interface or the user's historical behavior data.
Optionally, displaying the commonly used phrases includes:
displaying each of the commonly used phrases dispersed across the screen, arranged iteratively according to each phrase's historical use frequency, where iterative display means that phrases used frequently in historical input are shown in a larger font and/or near the center of the screen, and phrases used infrequently are shown in a smaller font and/or near the edge of the screen.
Optionally, the selecting operation includes touching, clicking or dragging the phrase.
Optionally, after receiving the selection operation, the method further includes:
long-pressing a selected phrase to pop up options providing separate format settings for each phrase.
Optionally, after combining the user-selected phrases into text and filling the text into the text input area, the method further includes:
receiving, through preset shortcut operations, the user's repeated insertion or deletion of a phrase.
In addition, to achieve the above object, an embodiment of the present application further provides a text input system, where the system includes:
an activation module, for receiving an activation operation from a user in an area where text is to be entered;
a display module, for obtaining commonly used phrases corresponding to the user and displaying them;
a selection module, for receiving the user's selection of at least one of the commonly used phrases; and
a filling module, for combining the phrases selected by the user into text and filling the text into the text input area activated by the activation operation.
In order to achieve the above object, an embodiment of the present application further provides an electronic device, including: a memory, a processor, and a text input program stored on the memory and executable on the processor, the text input program when executed by the processor implementing a text input method as described above.
To achieve the above object, an embodiment of the present application further provides a computer-readable storage medium, on which a text input program is stored, and the text input program, when executed by a processor, implements the text input method as described above.
The text input method, system, electronic device, and computer-readable storage medium provided by the embodiments of the application dispense with the input box and fixed input area of a traditional input method: the user assembles the required text directly on the screen by selecting phrases, without composing it through syllable, letter, or stroke input. No keyboard is needed on the screen, the user does not have to type, and the existing display content of the interface is not rearranged to make room for an input keyboard, providing a freer and easier text input mode and improving the user experience.
Drawings
FIG. 1 is a diagram of an application environment architecture in which various embodiments of the present application may be implemented;
fig. 2 is a flowchart of a text input method according to a first embodiment of the present application;
fig. 3 is a flowchart of a text input method according to a second embodiment of the present application;
FIG. 4 is a detailed flowchart of step S304 in FIG. 3;
5A-5C are schematic diagrams of an alternative text input interface of the present application;
fig. 6 is a schematic hardware architecture diagram of an electronic device according to a third embodiment of the present application;
FIG. 7 is a block diagram of a text input system according to a fourth embodiment of the present application;
fig. 8 is a block diagram of a text input system according to a fifth embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the descriptions relating to "first", "second", etc. in the embodiments of the present application are only for descriptive purposes and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In addition, technical solutions between various embodiments may be combined with each other, but must be realized by a person skilled in the art, and when the technical solutions are contradictory or cannot be realized, such a combination should not be considered to exist, and is not within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a diagram illustrating an application environment architecture for implementing various embodiments of the present application. The present application is applicable in application environments including, but not limited to, client 2, server 4, network 6.
The client 2 is configured to display an interface of a current application to a user and receive operations such as text input of the user. The client 2 may be a terminal device such as a PC (Personal Computer), a mobile phone, a tablet Computer, a portable Computer, and a wearable device.
The server 4 is used for providing data and technical support for the client 2. The server 4 may be a rack server, a blade server, a tower server, a cabinet server, or other computing devices, may be an independent server, or may be a server cluster formed by a plurality of servers.
The network 6 may be a wireless or wired network such as an intranet, the Internet, a Global System for Mobile Communications (GSM) network, a Wideband Code Division Multiple Access (WCDMA) network, a 4G or 5G network, Bluetooth, or Wi-Fi. The server 4 and one or more clients 2 are connected through the network 6 for data transmission and interaction.
Example one
Fig. 2 is a flowchart of a text input method according to a first embodiment of the present application. It is to be understood that the flow charts in the embodiments of the present method are not intended to limit the order in which the steps are performed.
The method comprises the following steps:
and S200, receiving the activation operation of the user in the area needing to input the text.
This embodiment can be compatible with the input box of a traditional input method, and can also accept text input without any input keyboard or fixed input area. When a user needs to input text on the screen, an activation operation is performed in the area where the text is to be entered, activating text input there. The activation operation includes, but is not limited to: clicking a traditional input box area (compatible with a traditional input method), clicking a specific icon, or long-pressing the area where text is to be entered (requiring no traditional input box or fixed input area). The client 2 receives the user's activation operation and activates the region as a text entry area.
S202, obtaining the commonly used phrases and displaying the commonly used phrases to a user.
After receiving the user's activation operation, the commonly used phrases to be recommended are obtained according to preset criteria such as the content of the current application interface or the user's historical behavior data. For example, if a bullet screen is to be entered on a video playing interface, phrases can be recommended based on the bullet-screen comments most common in that video; if a search keyword is to be entered on a search interface, phrases can be recommended based on user attributes, geographic location, trending keywords, search history, and the like; if the user needs to communicate with a merchant on a product purchase interface, phrases can be recommended based on frequently asked questions about the product. In addition, a user dictionary (phrases used frequently in the user's input history) can be maintained for each user, and the most frequently used phrases taken from it as the recommendations.
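The recommendation step above can be sketched in Python as follows. The scoring scheme (historical frequency plus a small bonus for phrases drawn from the current interface) and all names are illustrative assumptions; the application only requires that recommendations come from the interface content and the user's history.

```python
from collections import Counter

def recommend_phrases(user_history, context_phrases, top_n=10):
    """Rank candidate phrases for display.

    user_history: list of phrases the user has entered before (the "user
    dictionary"); context_phrases: phrases drawn from the current
    interface (e.g. common bullet-screen comments for this video).
    """
    counts = Counter(user_history)  # phrase -> historical use count
    candidates = set(user_history) | set(context_phrases)
    # Hypothetical scoring: frequency plus a +1 context bonus so that
    # scene-relevant phrases surface even without history.
    scored = {p: counts[p] + (1 if p in context_phrases else 0)
              for p in candidates}
    # Sort by descending score, then alphabetically for stable ties.
    ranked = sorted(scored.items(), key=lambda kv: (-kv[1], kv[0]))
    return [p for p, _ in ranked][:top_n]

history = ["nice", "lol", "nice", "first!", "nice"]
context = ["first!", "epic scene"]
print(recommend_phrases(history, context, top_n=3))
```

A real implementation would also weight recency and the user attributes mentioned above; this sketch shows only the frequency-plus-context idea.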
After the commonly used phrases corresponding to the user are obtained, they are displayed on the screen for the user to view and select. In this embodiment, the phrases may be displayed dispersed across the screen over a transparent background, so the existing display content of the current interface is unaffected (it does not have to be compressed to make room for the phrases and the activated text input area). The phrases are displayed iteratively: phrases likely to be selected are placed in more prominent positions, while phrases not selected for a long time are gradually squeezed toward the edges. For example, phrases used frequently in historical input are shown in a larger font and/or near the center of the screen, and infrequently used phrases in a smaller font and/or near the edges. Of course, the specific display mode may be set flexibly according to the actual application scenario and is not limited here.
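One possible mapping from use frequency to the iterative display described above is sketched below. The linear interpolation and the font-size range are assumptions for illustration; the application only requires that frequent phrases appear larger and nearer the screen center.

```python
def layout_phrases(freqs, min_font=12, max_font=28):
    """Map each phrase's historical use frequency to a font size and a
    normalized distance from the screen center (0.0 = center, 1.0 = edge).
    """
    if not freqs:
        return {}
    lo, hi = min(freqs.values()), max(freqs.values())
    span = (hi - lo) or 1  # avoid division by zero when all equal
    layout = {}
    for phrase, f in freqs.items():
        weight = (f - lo) / span  # 0.0 (rare) .. 1.0 (frequent)
        font = round(min_font + weight * (max_font - min_font))
        layout[phrase] = {"font": font, "center_dist": round(1 - weight, 2)}
    return layout

print(layout_phrases({"nice": 9, "lol": 1, "gg": 5}))
```

A renderer would then place each phrase at the given distance from the center along some angle; that placement logic is omitted here.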
It should be noted that the phrases in this embodiment include, but are not limited to, Chinese characters, words, and longer phrases, as well as words or short sentences in other languages (e.g., English).
And S204, receiving the selection operation of the user on the common phrases.
The user can select and splice phrases from the commonly used phrases according to the text content to be entered. In this embodiment, the user may quickly select a phrase by touch. In other embodiments, the selection operation may take other forms, such as clicking or dragging a phrase. In addition, long-pressing a selected phrase pops up options for choosing the style, color, and other attributes of the text, enabling separate format settings for each phrase and providing a richer input experience.
It should be noted that this embodiment may also support operations such as line breaking or word selection through voice, gestures, gaze, and the like. For example, the user selects the first ten characters of a short sentence by saying "select the first 10 words"; or, when the client 2 supports pupil-recognition technology, the user selects a phrase by looking at it and confirms by blinking, and so on.
And S206, combining the phrases selected by the user into a text and filling the text into the currently activated area for the user to edit, send or store.
One or more phrases selected by the user through the selection operation are filled, in order, into the text entry region activated in step S200.
In this embodiment, each phrase selected by the user remains individually editable in the text input area, and preset shortcut operations provide quick editing functions such as repeated insertion or deletion for each phrase. For example, pressing "+" repeats the current phrase and pressing "-" deletes it; the "+" and "-" may be virtual buttons or icons displayed directly above, or at the upper-left or upper-right corner of, the current phrase. Of course, other embodiments may offer other shortcut operations for editing the selected phrases. The edited text can then be sent or stored.
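The per-phrase editing state described above can be modeled as a list of still-separate phrases, as in this sketch. The class and method names are illustrative, not from the application; only the "+" (repeat) and "-" (delete) behaviors follow the text.

```python
class PhraseBuffer:
    """A text input area whose content stays a list of individually
    addressable phrases until the user confirms."""

    def __init__(self, phrases):
        self.phrases = list(phrases)

    def repeat(self, index):
        # The "+" shortcut: duplicate the phrase at this position.
        self.phrases.insert(index, self.phrases[index])

    def delete(self, index):
        # The "-" shortcut: remove the phrase at this position.
        del self.phrases[index]

    def text(self):
        # Final text is the simple concatenation of the phrases.
        return "".join(self.phrases)

buf = PhraseBuffer(["so", " good"])
buf.repeat(1)  # duplicate " good"
print(buf.text())
```

Because the buffer keeps phrases separate rather than a flat string, per-phrase format settings (style, color) could be attached to each list element in the same way.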
The text input method provided by this embodiment removes the input box and fixed input area of a traditional input method: the user assembles the required text directly on the screen by selecting phrases, without composing it through syllable, letter, or stroke input. No keyboard is needed on the screen, the user does not have to type, and the existing display content of the interface is not rearranged to make room for an input keyboard, providing a freer and easier text input mode and improving the user experience.
Example two
Fig. 3 is a flowchart of a text input method according to a second embodiment of the present application. In the second embodiment, the text input method further includes step S304 on the basis of the first embodiment. It is to be understood that the flow charts in the embodiments of the present method are not intended to limit the order in which the steps are performed.
The method comprises the following steps:
and S300, receiving the activation operation of the user in the area needing to input the text.
This embodiment can be compatible with the input box of a traditional input method, and can also accept text input without any input keyboard or fixed input area. When a user needs to input text on the screen, an activation operation is performed in the area where the text is to be entered, activating text input there. The activation operation includes, but is not limited to: clicking a traditional input box area (compatible with a traditional input method), clicking a specific icon, or long-pressing the area where text is to be entered (requiring no traditional input box or fixed input area). The client 2 receives the user's activation operation and activates the region as a text entry area.
S302, obtaining the commonly used phrases and displaying the commonly used phrases to a user.
After receiving the user's activation operation, the commonly used phrases to be recommended are obtained according to preset criteria such as the content of the current application interface or the user's historical behavior data. For example, if a bullet screen is to be entered on a video playing interface, phrases can be recommended based on the bullet-screen comments most common in that video; if a search keyword is to be entered on a search interface, phrases can be recommended based on user attributes, geographic location, trending keywords, search history, and the like; if the user needs to communicate with a merchant on a product purchase interface, phrases can be recommended based on frequently asked questions about the product. In addition, a user dictionary (phrases used frequently in the user's input history) can be maintained for each user, and the most frequently used phrases taken from it as the recommendations.
After the commonly used phrases corresponding to the user are obtained, they are displayed on the screen for the user to view and select. In this embodiment, the phrases may be displayed dispersed across the screen over a transparent background, so the existing display content of the current interface is unaffected (it does not have to be compressed to make room for the phrases and the activated text input area). The phrases are displayed iteratively: phrases likely to be selected (e.g., high-frequency words) are placed in more prominent positions, while phrases not selected for a long time are gradually squeezed toward the edges. For example, phrases used frequently in historical input are shown in a larger font and/or near the center of the screen, and infrequently used phrases in a smaller font and/or near the edges. Of course, the specific display mode may be set flexibly according to the actual application scenario and is not limited here.
It should be noted that the phrases in this embodiment include, but are not limited to, Chinese characters, words, and longer phrases, as well as words or short sentences in other languages (e.g., English).
S304, receiving a new word group input by a user through voice, and displaying the new word group and the common word group on the same interface.
If the currently recommended phrases lack something the user needs, the user can add new phrases by voice. Sentences spoken by the user are automatically broken into phrases, and the phrases suggested by the system and those obtained from the voice input are displayed iteratively on the same interface for the user to select.
Specifically, further refer to fig. 4, which is a schematic view of the detailed flow of step S304. It is to be understood that the flow chart is not intended to limit the order in which the steps are performed. Some steps in the flowchart may be added or deleted as desired. In this embodiment, the step S304 specifically includes:
s3040 receiving a sentence input by a user through voice and converting the sentence into characters.
In this embodiment, the user can speak a sentence containing the required new phrase in order to add it. The spoken sentence only needs to contain the new phrase; its form is otherwise unrestricted. For example, the user may speak phrase A directly, speak a complete sentence B containing phrase A, or speak phrase A followed by phrase C (which together form no complete sentence), and so on. After receiving the user's spoken sentence, the client 2 automatically converts the speech into text.
S3042, automatically segmenting the sentence into phrases by adopting a Chinese word segmentation technology.
When the sentence contains only one phrase, segmentation may be skipped. When it contains several, a Chinese word segmentation technique automatically divides it into multiple phrases. For example, a sentence B spoken by the user is received and automatically divided into phrases A, D, and E. In other embodiments, if the sentence includes languages other than Chinese, other suitable techniques may be used for segmentation.
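The application does not specify a segmentation algorithm; as one illustrative possibility, a forward-maximum-matching segmenter is sketched below. Production Chinese segmenters (for example, the jieba library) are considerably more sophisticated; this greedy dictionary-based sketch only shows the idea of breaking a sentence into phrases.

```python
def fmm_segment(sentence, dictionary, max_len=4):
    """Forward maximum matching: at each position, take the longest
    dictionary word that matches, falling back to a single character."""
    words, i = [], 0
    while i < len(sentence):
        # Try the longest candidate first, down to a single character.
        for l in range(min(max_len, len(sentence) - i), 0, -1):
            piece = sentence[i:i + l]
            if l == 1 or piece in dictionary:
                words.append(piece)
                i += l
                break
    return words

vocab = {"今天", "天气", "真好"}
print(fmm_segment("今天天气真好", vocab))
```

Each phrase returned would then be displayed alongside the commonly used phrases; unknown characters simply become one-character phrases rather than being dropped.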
S3044, the word group obtained by segmentation is used as the new word group and the common word group to be displayed on the same interface.
The one or more phrases obtained by automatically segmenting the sentence are added, as the new phrases, to the current interface to be displayed iteratively together with the commonly used phrases. Moreover, the next time the user inputs text, the new phrases can be merged into the commonly used phrases for preferential recommendation.
It is worth noting that, unlike directly dictating the desired text by voice, this embodiment better protects the user's privacy. For example, in some cases the user may not want to speak a complete sentence aloud and can simply say the new phrases missing from the commonly used ones. As another example, when the user does not want bystanders to hear what is actually being entered, the user can say a different sentence containing the new phrase and then select and re-splice the needed phrases.
Returning to fig. 3, S306, receiving a selection operation of the user on the common phrase and the new phrase.
The user can select and splice from the commonly used phrases and the new phrases displayed on the current interface according to the text content to be entered. In this embodiment, the user may quickly select a phrase by touch. In other embodiments, the selection operation may take other forms, such as clicking or dragging a phrase. In addition, long-pressing a selected phrase pops up options for choosing the style, color, and other attributes of the text, enabling separate format settings for each phrase and providing a richer input experience.
It should be noted that this embodiment may also support operations such as line breaking or word selection through voice, gestures, gaze, and the like. For example, the user selects the first ten characters of a short sentence by saying "select the first 10 words"; or, when the client 2 supports pupil-recognition technology, the user selects a phrase by looking at it and confirms by blinking, and so on.
And S308, combining the phrases selected by the user into a text and filling the text into the currently activated area for the user to edit, send or store.
One or more phrases selected by the user through the selection operation are filled, in order, into the text entry region activated in step S300.
In this embodiment, each phrase selected by the user remains individually editable in the text input area, and preset shortcut operations provide quick editing functions such as repeated insertion or deletion for each phrase. For example, pressing "+" repeats the current phrase and pressing "-" deletes it; the "+" and "-" may be virtual buttons or icons displayed directly above, or at the upper-left or upper-right corner of, the current phrase. Of course, other embodiments may offer other shortcut operations for editing the selected phrases. The edited text can then be sent or stored.
The text input method provided by this embodiment removes the input box and fixed input area of a traditional input method: the user assembles the required text directly on the screen by selecting phrases, without composing it through syllable, letter, or stroke input. In addition, the user can add new phrases by voice to make up for gaps in the phrases automatically recommended by the system, completing the required text while protecting the user's privacy and further improving the user experience.
To explain the above method steps in more detail, a specific embodiment (a user who needs to input a bullet-screen comment while watching a video) is described below as an example. Those skilled in the art should appreciate that the following detailed description is not intended to limit the inventive concept of the present disclosure, and that appropriate extensions can readily be devised based on it.
(1) The user activates the text input area by clicking the subtitle input box, clicking a specific icon, or long-pressing a blank area without subtitles. After receiving the activation operation, the client 2 displays a semi-transparent text editing window in the center of the screen, containing a common-phrase recommendation area and the activated text input area.
(2) Common phrases are obtained from frequently used bullet-screen comments in the video and from the recorded user dictionary, then displayed and recommended to the user; depending on the user's settings, a prompt indicates that voice input is enabled. If the user inputs a sentence by voice, the speech is automatically converted into text, the sentence is divided into several phrases using Chinese word segmentation, and the phrases are arranged in their order within the sentence.
(3) If the user directly selects all phrases of the sentence, the whole sentence is filled into the text input area as input. If the user selects several phrases from among the common phrases and the new phrases input by voice, the phrases are combined in the order the user selected them and filled into the text input area.
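The filling rule in step (3) can be sketched as follows, assuming the client tracks both the sentence order and the order in which the user tapped phrases (the function and parameter names are illustrative, not specified in the application):

```python
def compose_input(sentence_phrases, tapped):
    """Build the text to fill into the input area.

    sentence_phrases: phrases of the voiced sentence, in sentence order.
    tapped: the phrases the user selected, in the order they were tapped.
    """
    if set(tapped) == set(sentence_phrases):
        # All phrases of the sentence were selected: keep the sentence order.
        return "".join(sentence_phrases)
    # Otherwise splice in the order the user selected.
    return "".join(tapped)
```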
(4) In the text input area, each phrase remains separate and editable: the user can repeatedly insert the current phrase with "+", delete it with "-", or set the format of each phrase independently. After all editing is finished, the user clicks the confirmation area, and the entered text is posted as a bullet-screen comment.
Figs. 5A-5C are schematic diagrams of an alternative text input interface according to the present application. In Fig. 5A, an input box 501 and an icon 502 are displayed on the screen 50. The input box 501 may be a conventional input-method input box that activates text input when clicked by the user; the icon 502 is an activation icon specific to this application, used to activate text input when clicked, without requiring a conventional input box or fixed input area. It should be noted that the text input interface of the present application may provide both the input box 501 and the icon 502, or only one of them.
When the user clicks the input box 501 or the icon 502 (or long-presses the area where text is to be input), the text input behavior is activated. As shown in Fig. 5B, after receiving the user's activation operation, the client 2 displays a semi-transparent text editing window 503 in the center of the screen 50; the window contains a common-phrase recommendation area 504 and a text input area 505, and shows the common phrases recommended to the user. In addition, three icons are arranged at the top of the screen 50: a voice icon 506, an emoticon icon 507, and a cloud icon 508. The voice icon 506, when clicked, lets the user input a new phrase by voice; the emoticon icon 507 displays at least one selectable emoticon, from which the user can pick one or more (by clicking, etc.) to insert into the text input area 505 below as part of the text input; the cloud icon 508 synchronizes the user's common phrases from the cloud, for example phrases the user has stored on another device, or phrases recommended by a cloud AI (Artificial Intelligence) system based on the user's historical behavior data. Optionally, in the interface of Fig. 5B, the common phrases may be displayed differently according to their source (local recommendation, voice input, cloud synchronization, etc.), for example distinguished by color.
When the user selects one or more of the common phrases by touch, the selected phrases are filled in order into the text input area 505 below, where they remain editable. The user may repeatedly insert or delete a phrase using the "+" and "-" icons on that phrase in the text input area 505. In addition, the selected phrases may be highlighted on the screen 50 to indicate that they have been selected, and/or moved toward the edge of the screen 50 (since they are less likely to be selected again, the positions near the center of the screen should be yielded to phrases more likely to be selected).
When the user has selected all required phrases from the common phrases and finished editing them in the text input area 505, the text input behavior ends. As shown in Fig. 5C, the input result is displayed in the input box 501 and can then be sent as a bullet-screen comment. When the interface has no input box 501, the input result may be displayed in a specific area of the screen 50 (a preset area, or an area activated by the user's long press) and overlaid semi-transparently on top of the currently displayed content to reduce occlusion.
Example Three
Fig. 6 shows the hardware architecture of an electronic device 20 according to a third embodiment of the present application. In this embodiment, the electronic device 20 may include, but is not limited to, a memory 21, a processor 22, and a network interface 23, communicatively connected to one another through a system bus. It is noted that Fig. 6 shows only the electronic device 20 with components 21-23; not all of the shown components are required, and more or fewer components may be implemented instead. In this embodiment, the electronic device 20 may be the client 2.
The memory 21 includes at least one type of readable storage medium, such as a flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, or optical disk. In some embodiments, the memory 21 may be an internal storage unit of the electronic device 20, such as its hard disk or internal memory. In other embodiments, the memory 21 may also be an external storage device of the electronic device 20, such as a plug-in hard disk, Smart Media Card (SMC), Secure Digital (SD) card, or flash card provided on the electronic device 20. Of course, the memory 21 may also include both an internal storage unit and an external storage device of the electronic device 20. In this embodiment, the memory 21 is generally used to store the operating system and the various application software installed on the electronic device 20, such as the program code of the text input system 60. The memory 21 may also be used to temporarily store various types of data that have been output or are to be output.
The processor 22 may in some embodiments be a central processing unit (CPU), controller, microcontroller, microprocessor, or other data processing chip. The processor 22 is generally used to control the overall operation of the electronic device 20. In this embodiment, the processor 22 is configured to run the program code stored in the memory 21 or to process data, for example to run the text input system 60.
The network interface 23 may include a wireless network interface or a wired network interface, and the network interface 23 is generally used for establishing a communication connection between the electronic apparatus 20 and other electronic devices.
Example Four
Fig. 7 is a block diagram of a text input system 60 according to a fourth embodiment of the present application. The text input system 60 may be divided into one or more program modules, which are stored in a storage medium and executed by one or more processors to implement the embodiments of the present application. A program module in the embodiments of the present application refers to a series of computer program instruction segments capable of performing specific functions; the functions of each program module are described below.
In the present embodiment, the text input system 60 includes:
An activation module 600, configured to receive an activation operation of a user in an area where text is to be input.
This embodiment is compatible with the input box of a traditional input method, and can also perform text input without an input keyboard or fixed input area. When a user needs to input text on the screen, the user performs an activation operation in the area where the text is to be input, to activate text input in that area. The activation operation includes, but is not limited to: clicking a traditional input-box area (compatible with a traditional input method), clicking a specific icon, or long-pressing the area where the text is to be input (when there is no traditional input box or fixed input area). The activation module 600 receives the user's activation operation and activates the area as the text input area.
The display module 602 is configured to obtain a commonly used phrase and display the commonly used phrase to a user.
After receiving the user's activation operation, the common phrases recommended to the user are obtained according to preset criteria such as the content of the current application interface or the user's historical behavior data. For example, if a bullet-screen comment is to be input on a video playback interface, common phrases can be recommended from the frequently used bullet-screen comments counted for that video; if a search keyword is to be input on a search interface, common phrases can be recommended based on user attributes, geographic location, trending keywords, search history, and so on; if the current interface is a product purchase page where the user communicates with the merchant, common phrases can be recommended from frequently asked questions about the product. In addition, a user dictionary (phrases used frequently in the user's input history) can be maintained for each user, and the most frequently used phrases obtained from it can serve as the common phrases recommended to the user.
After the common phrases corresponding to the user are obtained, they are displayed on the screen for the user to view and select. In this embodiment, the phrases may be displayed scattered across the screen with a transparent background, so that the existing content of the current interface is not affected (the current display does not need to be compressed to make room for the common phrases and the activated text input area). The common phrases are displayed iteratively: phrases likely to be selected are placed at more prominent positions, while phrases not selected for a long time are gradually pushed toward the edges. For example, phrases used frequently in the input history are displayed in a larger font near the center of the screen, while phrases used rarely are displayed in a smaller font near the edges. Of course, the specific display manner can be set flexibly according to the actual application scenario and is not limited here.
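One possible reading of this iterative display, sketched in Python under the assumption that historical use counts are available (the slot and font scheme is illustrative, not prescribed by the application):

```python
def layout_phrases(freq, n_center=3):
    """Order phrases for iterative display: the most frequently used go
    to prominent center slots with a larger font, the rest toward the
    edges with a smaller font.  `freq` maps phrase -> historical use count.
    """
    ranked = sorted(freq, key=freq.get, reverse=True)
    layout = []
    for i, phrase in enumerate(ranked):
        zone = "center" if i < n_center else "edge"
        font = "large" if zone == "center" else "small"
        layout.append((phrase, zone, font))
    return layout
```

A client could re-run this ranking after each selection, so phrases already used (and thus less likely to be chosen again) drift toward the edges.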
It should be noted that the phrases in this embodiment include, but are not limited to, Chinese characters, words, and even longer phrases, as well as words or short sentences in other languages (e.g., English).
A selecting module 604, configured to receive a selection operation of the user on the common phrase.
The user can select and splice phrases from the common phrases according to the text the user needs to input. In this embodiment, the user may quickly select a phrase by touch. In other embodiments, the selection operation may take other forms, such as clicking or dragging a phrase. In addition, long-pressing a selected phrase can pop up options for choosing the font style, color, and so on, enabling independent formatting of each phrase and providing a richer input experience.
It should be noted that this embodiment may also combine modalities such as voice, gestures, and gaze to perform operations such as line breaking or word selection. For example, the user selects the first ten characters of a short sentence by saying "select the first 10 words"; or, when the client 2 supports pupil-recognition technology, the user selects a phrase by looking in its direction and confirms the selection by blinking, and so on.
A filling module 606, configured to combine the phrases selected by the user into a text and fill it into the currently activated area for the user to edit, send, or store.
The one or more phrases selected by the user through the selection operation are filled, in order, into the area requiring text input (the text input area) activated in step S200.
In this embodiment, each phrase selected by the user remains in an editable state in the text input area, and quick-editing functions such as repeated insertion or deletion can be provided for each phrase through preset shortcut operations. For example, tapping "+" re-inserts the current phrase, and tapping "-" deletes it. The "+" and "-" may be virtual buttons or icons displayed directly above, at the upper-left corner, or at the upper-right corner of the current phrase. Of course, in other embodiments, other shortcut operations may be provided for editing the selected phrases. The edited text can then be sent or stored.
The text input system provided by this embodiment can dispense with the input box or fixed input area of a traditional input method: the user splices the required text directly on the screen by selecting phrases, without composing it through syllable, letter, or stroke input. Thus no keyboard is needed on the screen, the user does not need to type, and the existing display content of the interface is not altered to make room for an input keyboard, achieving a freer and easier way of text input and improving the user experience.
Example Five
Fig. 8 is a block diagram of a text input system 60 according to a fifth embodiment of the present application. In this embodiment, the text input system 60 further includes a voice module 608 in addition to the activation module 600, display module 602, selection module 604, and filling module 606 of the fourth embodiment.
The voice module 608 is configured to receive new phrases input by the user through voice and to display the new phrases together with the common phrases on the same interface.
If the currently recommended common phrases lack a phrase the user needs, the user can input new phrases by voice. A sentence input by voice is automatically broken into phrases, and the phrases recommended by the system and the phrases input by voice are displayed iteratively on the same interface for the user to select.
Specifically, the process may include:
(1) Receiving a sentence input by the user through voice and converting it into text.
In this embodiment, the user can add a new phrase by speaking a sentence that contains it; the sentence need only contain the new phrase, with no restriction on its form. For example, the user may speak the phrase A to be added directly, speak a complete sentence B containing the phrase A, or speak the phrase A plus a phrase C (which together need not form a meaningful sentence), and so on. The voice module 608 receives the sentence input by the user's voice and automatically converts the speech into text.
(2) Automatically segmenting the sentence into phrases using Chinese word segmentation.
When the sentence contains only one phrase, segmentation may be skipped. When the sentence contains multiple phrases, Chinese word segmentation can automatically divide it into several phrases. For example, upon receiving a sentence B input by the user's voice, it is automatically divided into phrase A, phrase D, and phrase E. In other embodiments, if the sentence contains languages other than Chinese, other suitable techniques may be used for segmentation.
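The application does not mandate a particular segmentation algorithm; a dictionary-based forward-maximum-matching pass is one classic option (production systems typically use a full segmentation library instead). A toy sketch, with an illustrative lexicon:

```python
def fmm_segment(sentence, lexicon, max_len=4):
    """Forward maximum matching: at each position take the longest word
    found in the lexicon, falling back to a single character.  The
    lexicon here is a stand-in for a real segmentation dictionary."""
    words, i = [], 0
    while i < len(sentence):
        # Try the longest candidate first, shrinking down to one character.
        for length in range(min(max_len, len(sentence) - i), 0, -1):
            candidate = sentence[i:i + length]
            if length == 1 or candidate in lexicon:
                words.append(candidate)
                i += length
                break
    return words
```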
(3) Displaying the phrases obtained by segmentation, as the new phrases, together with the common phrases on the same interface.
The one or more phrases obtained by automatically segmenting the sentence are added, as the new phrases, to the current interface and displayed iteratively together with the common phrases. In addition, the next time the user inputs text, the new phrases can be added to the common phrases for preferential recommendation.
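Folding the new phrases into the per-user dictionary for preferential recommendation could look like the following sketch, where the dictionary is represented as a phrase-to-count mapping (an assumed representation, not specified in the application):

```python
from collections import Counter

def update_user_dictionary(user_dict, new_phrases, boost=1):
    """Add newly voiced phrases to the per-user dictionary (a Counter of
    phrase -> use count) so they are recommended preferentially next time."""
    for phrase in new_phrases:
        user_dict[phrase] += boost
    return user_dict

def recommend(user_dict, k=5):
    """Return the k most frequently used phrases as the common phrases."""
    return [phrase for phrase, _ in user_dict.most_common(k)]
```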
It is worth noting that, unlike directly inputting the desired text by voice, this embodiment better protects the user's privacy. For example, in some cases the user may not want to speak a complete sentence aloud and can simply speak the new phrases missing from the common phrases. As another example, when the user does not want others to hear what is being input, the user may speak a different sentence that contains the new phrase, then select the needed phrases and re-splice them.
The selection module 604 is then triggered to receive the user's selection among the common phrases and the new phrases.
The text input system provided by this embodiment can dispense with the input box or fixed input area of a traditional input method: the user splices the required text directly on the screen by selecting phrases, without composing it through syllable, letter, or stroke input. In addition, the user can add new phrases by voice input to make up for gaps in the common phrases automatically recommended by the system, completing the required text input while protecting the user's privacy and further improving the user experience.
Example Six
The present application further provides another embodiment: a computer-readable storage medium storing a text input program executable by at least one processor, causing the at least one processor to perform the steps of the text input method described above.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a(n) …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises it.
The serial numbers of the above embodiments of the present application are for description only and do not indicate the relative merits of the embodiments.
It will be apparent to those skilled in the art that the modules or steps of the embodiments described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of computing devices. Optionally, they may be implemented as program code executable by a computing device, stored in a storage device and executed by that device; in some cases, the steps shown or described may be performed in an order different from that described herein. Alternatively, they may be fabricated as individual integrated-circuit modules, or several of them may be fabricated as a single integrated-circuit module. Thus, the embodiments of the present application are not limited to any specific combination of hardware and software.
The above description covers only preferred embodiments of the present application and does not limit its scope; all equivalent structures or equivalent process transformations made using the contents of the specification and drawings of the present application, applied directly or indirectly in other related technical fields, are likewise included within the scope of protection of the present application.

Claims (12)

1. A method of text input, the method comprising:
receiving an activation operation of a user in an area where text is to be input;
obtaining common phrases corresponding to the user and displaying the common phrases;
receiving a selection operation of the user on at least one phrase of the common phrases; and
combining the phrases selected by the user into a text and filling the text into a text input area activated by the activation operation.
2. The text entry method of claim 1, wherein after displaying the common phrase, the method further comprises:
and receiving a new word group input by the user through voice, and displaying the new word group and the common word group on the same interface so as to receive the selection operation of the user on the common word group or at least one word group in the new word group.
3. The method of claim 2, wherein the receiving a new phrase input by the user through voice and displaying the new phrase and the common phrase on the same interface comprises:
receiving a sentence input by the user through voice and converting the sentence into text;
automatically segmenting the sentence into one or more phrases; and
displaying the phrases obtained by segmentation, as the new phrases, together with the common phrases on the same interface.
4. The text entry method of claim 1, wherein the activating operation comprises: clicking an input box area of an input method, clicking a specific icon or a text input area as long as needed.
5. The text input method according to claim 1, wherein the obtaining of the common phrases corresponding to the user comprises:
and acquiring the commonly used phrases according to the current application program interface content or the historical behavior data of the user.
6. The text entry method of claim 1, wherein said displaying the common phrase comprises:
and dispersedly displaying each phrase in the commonly used phrases in a screen, and performing iterative display according to the historical use frequency of each phrase, wherein the iterative display is to display the phrases with high use frequency in the historical input in a larger font and/or at a position close to the center of the screen, and display the phrases with low use frequency in the historical input in a smaller font and/or at a position close to the edge of the screen.
7. The text input method of claim 1, wherein the selection operation comprises touching, clicking or dragging a phrase.
8. The text entry method of claim 1, further comprising, after receiving the selection operation:
by long pressing the selected phrase pop-up option to provide separate formatting for each phrase.
9. The text entry method of claim 1, wherein after assembling the user-selected phrase into text to fill the text entry area, the method further comprises:
and receiving repeated insertion or deletion operation of the phrase by the user through a shortcut operation mode.
10. A text entry system, the system comprising:
an activation module, configured to receive an activation operation of a user in an area where text is to be input;
a display module, configured to obtain common phrases corresponding to the user and display the common phrases;
a selection module, configured to receive a selection operation of the user on at least one phrase of the common phrases; and
and the filling module is used for combining the phrases selected by the user into texts and filling the texts into the text input area activated by the activation operation.
11. An electronic device, comprising: a memory, a processor, and a text input program stored on the memory and executable on the processor, the text input program when executed by the processor implementing a text input method as recited in any of claims 1-9.
12. A computer-readable storage medium, characterized in that a text input program is stored thereon, which when executed by a processor implements a text input method according to any one of claims 1 to 9.
CN202010564896.1A 2020-06-08 2020-06-19 Text input method and system Pending CN113835532A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010513384 2020-06-08
CN2020105133842 2020-06-08

Publications (1)

Publication Number Publication Date
CN113835532A true CN113835532A (en) 2021-12-24

Family

ID=78963790

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010564896.1A Pending CN113835532A (en) 2020-06-08 2020-06-19 Text input method and system

Country Status (1)

Country Link
CN (1) CN113835532A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108459733A (en) * 2018-02-06 2018-08-28 广州阿里巴巴文学信息技术有限公司 auxiliary input method, device, computing device and storage medium
CN108566565A (en) * 2018-03-30 2018-09-21 科大讯飞股份有限公司 Barrage methods of exhibiting and device
CN109842820A (en) * 2017-11-29 2019-06-04 腾讯数码(天津)有限公司 Barrage data inputting method and device, mobile terminal and readable storage medium storing program for executing
CN110750617A (en) * 2018-07-06 2020-02-04 北京嘀嘀无限科技发展有限公司 Method and system for determining relevance between input text and interest points

Similar Documents

Publication Publication Date Title
US20080282153A1 (en) Text-content features
CN108595445A (en) Interpretation method, device and terminal
JP6771259B2 (en) Computer-implemented methods for processing images and related text, computer program products, and computer systems
CN105929980B (en) Method and apparatus for information input
CN113609834A (en) Information processing method, device, equipment and medium
JP2023549903A (en) Multimedia interaction methods, information interaction methods, devices, equipment and media
CN106899755B (en) Information sharing method, information sharing device and terminal
CN113076499A (en) Page interaction method, device, equipment, medium and program product
CN110889266A (en) Conference record integration method and device
CN106775711B (en) Information processing method, device and computer-readable storage medium for contact persons
CN106873798B (en) Method and apparatus for outputting information
WO2023236795A1 (en) Encyclopedia entry processing method and apparatus, and electronic device, medium and program product
CN107357481B (en) Message display method and message display device
CN113835532A (en) Text input method and system
CN107168627B (en) Text editing method and device for touch screen
CN113110829B (en) Multi-UI component library data processing method and device
CN115329720A (en) Document display method, device, equipment and storage medium
CN115081423A (en) Document editing method and device, electronic equipment and storage medium
CN105630959B (en) Text information display method and electronic equipment
CN111399722A (en) Mail signature generation method, device, terminal and storage medium
KR100785756B1 (en) Method for providing personal dictionary and system thereof
KR101750788B1 (en) Method and system for providing story board, and method and system for transmitting and receiving object selected in story board
CN116107684B (en) Page amplification processing method and terminal equipment
CN102929859B (en) Reading assistive method and device
CN108092875A (en) A kind of expression providing method, medium, device and computing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination