CN112416217A - Input method and related device - Google Patents

Input method and related device

Info

Publication number
CN112416217A
Authority
CN
China
Prior art keywords
text
target
context
input
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910770101.XA
Other languages
Chinese (zh)
Other versions
CN112416217B (en)
Inventor
臧娇娇 (Zang Jiaojiao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sogou Technology Development Co Ltd
Original Assignee
Beijing Sogou Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sogou Technology Development Co Ltd filed Critical Beijing Sogou Technology Development Co Ltd
Priority to CN201910770101.XA
Publication of CN112416217A
Application granted
Publication of CN112416217B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an input method and a related device. The method includes: determining, according to a touch instruction, the input text before the display position of the input cursor as a target following text; inputting the target following text into a following-to-preceding association model, pre-trained on a plurality of training following texts and their corresponding training preceding texts, to obtain a target preceding text of the target following text; and displaying the target preceding text in correspondence with the target following text. Thus, when a user wants to obtain by association the preceding text of an input text, the user does not need to move the input cursor through a cursor-moving operation; instead, a touch instruction triggers the input text before the cursor's display position to be associated with its preceding text. Cumbersome operations and misoperations are avoided, the pre-trained following-to-preceding association model provides a strong preceding-text association capability, and the user's input experience is effectively improved.

Description

Input method and related device
Technical Field
The present application relates to the field of input methods, and in particular, to an input method and a related device.
Background
With the rapid development of intelligent technology, an input method can take the text a user has already entered as the preceding text and associate the following text the user may want to enter next. For example, when the user inputs "天涯共此时" (We share this moment though far apart) and has not yet typed anything further, the input method may display the associated following text "情人怨遥夜" (Lovers complain of the long night) to the user.
After entering text, the user may instead want to associate the preceding text of the input text rather than its following text. For example, when the user inputs "天涯共此时" and wants to obtain by association the preceding line "海上生明月" (The bright moon rises over the sea), the user must move the input cursor from its display position to the front of the input text through a cursor-moving operation, so that the input text falls after the moved cursor and can serve as the following text whose preceding text is then associated.
However, the inventor found that moving the input cursor through a cursor-moving operation is cumbersome and inconvenient, and misoperation easily occurs, so the moved cursor may land at the wrong position, the following text after the cursor is then associated with an incorrect preceding text, and the user's input experience is greatly affected.
Disclosure of Invention
The technical problem to be solved by the present application is to provide an input method and a related device, so that when a user wants to obtain by association the preceding text of an input text, the user can trigger the input text before the display position of the input cursor to be associated with its preceding text without moving the input cursor through a cursor-moving operation, thereby effectively improving the user's input experience.
In a first aspect, an embodiment of the present application provides an input method, where the method includes:
determining, based on a touch instruction, the input text before the display position of the input cursor as a target following text;
inputting the target following text into a following-to-preceding association model to obtain a target preceding text of the target following text, wherein the following-to-preceding association model is pre-trained on a plurality of training following texts and the training preceding texts corresponding to the training following texts;
and displaying the target preceding text in correspondence with the target following text.
Optionally, the determining, based on the touch instruction, the input text before the display position of the input cursor as the target following text includes:
determining the display position of the input cursor based on the touch instruction;
scanning the input text from the display position of the input cursor toward the beginning until the first punctuation mark before the input text is identified;
determining the input text between the first punctuation mark and the display position of the input cursor as the target following text.
Optionally, the inputting the target following text into the following-to-preceding association model to obtain the target preceding text of the target following text includes:
determining a touch type corresponding to the touch instruction, wherein the touch type represents a touch operation mode;
and inputting the target following text into the following-to-preceding association model to obtain the preceding text corresponding to the touch type as the target preceding text.
Optionally, the training step of the following-to-preceding association model specifically includes:
training an initial language model by taking each training following text as input and the training preceding text corresponding to each training following text as output, to obtain the following-to-preceding association model.
Optionally, the touch instruction includes a multi-touch instruction, and the multi-touch instruction includes a 3D Touch instruction.
Optionally, after the displaying of the target preceding text, the method further includes:
inputting the target preceding text before the target following text according to a screen-up instruction for the target preceding text.
Optionally, if a target punctuation mark exists between the target preceding text and the target following text, simultaneously with or after the inputting of the target preceding text, the method further includes:
automatically inputting the target punctuation mark between the target preceding text and the target following text.
In a second aspect, an embodiment of the present application provides an input apparatus, where the apparatus includes:
a target following text determining unit, configured to determine, based on a touch instruction, the input text before the display position of the input cursor as a target following text;
a target preceding text obtaining unit, configured to input the target following text into a following-to-preceding association model to obtain a target preceding text of the target following text, where the following-to-preceding association model is pre-trained on a plurality of training following texts and the training preceding texts corresponding to the training following texts;
and a target preceding text display unit, configured to display the target preceding text in correspondence with the target following text.
Optionally, the target following text determining unit includes:
a first determining subunit, configured to determine the display position of the input cursor based on the touch instruction;
an identifying subunit, configured to scan the input text from the display position of the input cursor toward the beginning until the first punctuation mark before the input text is identified;
and a second determining subunit, configured to determine the input text between the first punctuation mark and the display position of the input cursor as the target following text.
Optionally, the target preceding text obtaining unit includes:
a third determining subunit, configured to determine a touch type corresponding to the touch instruction, where the touch type represents a touch operation mode;
and an obtaining subunit, configured to input the target following text into the following-to-preceding association model to obtain the preceding text corresponding to the touch type as the target preceding text.
Optionally, the apparatus further includes:
a following-to-preceding association model training unit, specifically configured to train an initial language model by taking each training following text as input and the training preceding text corresponding to each training following text as output, to obtain the following-to-preceding association model.
Optionally, the touch instruction includes a multi-touch instruction, and the multi-touch instruction includes a 3D Touch instruction.
Optionally, the apparatus further includes:
a target preceding text input unit, configured to input the target preceding text before the target following text according to a screen-up instruction for the target preceding text.
Optionally, if a target punctuation mark exists between the target preceding text and the target following text, the apparatus further includes:
a target punctuation input unit, configured to automatically input the target punctuation mark between the target preceding text and the target following text.
In a third aspect, an embodiment of the present application provides an apparatus for input, the apparatus comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs including instructions for:
determining, based on a touch instruction, the input text before the display position of the input cursor as a target following text;
inputting the target following text into a following-to-preceding association model to obtain a target preceding text of the target following text, wherein the following-to-preceding association model is pre-trained on a plurality of training following texts and the training preceding texts corresponding to the training following texts;
and displaying the target preceding text in correspondence with the target following text.
In a fourth aspect, an embodiment of the present application provides a machine-readable medium having instructions stored thereon which, when executed by one or more processors, cause an apparatus to perform the input method described in one or more of the above first aspects.
Compared with the prior art, the embodiments of the present application have the following advantages:
According to the technical solution of the embodiments of the present application, first, the input text before the display position of the input cursor is determined as the target following text according to a touch instruction; then, the target following text is input into a following-to-preceding association model pre-trained on a plurality of training following texts and their corresponding training preceding texts, and a target preceding text of the target following text is obtained; finally, the target preceding text is displayed in correspondence with the target following text. Thus, when a user wants to obtain by association the preceding text of an input text, the user does not need to move the input cursor through a cursor-moving operation; a touch instruction triggers the input text before the cursor's display position to be associated with its preceding text, so that cumbersome operations and misoperations are avoided, the pre-trained following-to-preceding association model provides a strong preceding-text association capability, and the user's input experience is effectively improved.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a system framework related to an application scenario in an embodiment of the present application;
FIG. 2 is a schematic flow chart of an input method according to an embodiment of the present application;
Fig. 3 is a schematic diagram of a target preceding text displayed in correspondence with the target following text for a tap 3D Touch operation according to an embodiment of the present application;
Fig. 4 is a schematic diagram of a target preceding text displayed in correspondence with the target following text for a long-press 3D Touch operation according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an input device according to an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of an apparatus for input according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
At present, after the user inputs "天涯共此时" (We share this moment though far apart), the input method may take this input text as the preceding text and associate the following text "情人怨遥夜" (Lovers complain of the long night) to show to the user. If the user instead wishes to associate the preceding text of the input text "天涯共此时", the user must move the input cursor from its display position to the front of the input text through a cursor-moving operation, so that after the cursor is moved, the preceding text "海上生明月" (The bright moon rises over the sea) can be associated from the following text "天涯共此时". However, moving the cursor through a cursor-moving operation is cumbersome and inconvenient, and misoperation easily occurs, so that the moved cursor lands at the wrong position, the following text after the cursor is associated with an incorrect preceding text, and the user's input experience is greatly affected.
To solve this problem, in the embodiments of the present application, first, the input text before the display position of the input cursor is determined as the target following text according to a touch instruction; then, the target following text is input into a following-to-preceding association model pre-trained on a plurality of training following texts and their corresponding training preceding texts, and a target preceding text of the target following text is obtained; finally, the target preceding text is displayed in correspondence with the target following text. Thus, when a user wants to obtain by association the preceding text of an input text, the user does not need to move the input cursor through a cursor-moving operation; a touch instruction triggers the input text before the cursor's display position to be associated with its preceding text, so that cumbersome operations and misoperations are avoided, the pre-trained following-to-preceding association model provides a strong preceding-text association capability, and the user's input experience is effectively improved.
For example, one scenario of the embodiments of the present application may be the scenario shown in Fig. 1, which includes the user terminal 101 and the processor 102. After a user enters text through an input method on the user terminal 101, a touch instruction is generated for the processor 102 in response to the user's touch operation on the user terminal 101. The processor 102 determines, based on the touch instruction, the input text before the display position of the input cursor on the interface of the user terminal 101 as the target following text, inputs the target following text into the following-to-preceding association model, obtains the target preceding text of the target following text, and displays the target preceding text in correspondence with the target following text on the interface of the user terminal 101 for the user to select for screen-up.
It is to be understood that, in the above application scenario, although the actions of the embodiments of the present application are described as being performed by the processor 102, the actions may also be performed by the user terminal 101, or may also be performed partially by the user terminal 101 and partially by the processor 102. The present application is not limited in terms of the execution subject as long as the actions disclosed in the embodiments of the present application are executed.
It is to be understood that the above scenario is only one example of a scenario provided in the embodiment of the present application, and the embodiment of the present application is not limited to this scenario.
The following describes in detail specific implementations of the input method and the related apparatus in the embodiments of the present application with reference to the drawings.
Exemplary method
Referring to Fig. 2, a flow chart of the input method in an embodiment of the present application is shown. In this embodiment, the method may include, for example, the following steps:
Step 201: based on the touch instruction, determine the input text before the display position of the input cursor as the target following text.
It can be understood that, in the prior art, when a user wishes to associate the preceding text of an input text, a cursor-moving operation must be performed to move the input cursor from its display position to the front of the input text, so that the input text falls after the cursor as the following text and its preceding text can be associated. Because moving the input cursor through a cursor-moving operation is complicated and prone to misoperation, the embodiments of the present application instead use a touch operation, based on a touch technology that avoids complicated operations and misoperations, to generate a touch instruction. The touch instruction replaces the cursor-moving operation and triggers the input text before the cursor's display position to be associated with its preceding text.
In an optional embodiment of the present application, the touch technology may be a multi-touch technology, and the corresponding touch instruction is a multi-touch instruction; since the multi-touch technology may be the three-dimensional touch technology 3D Touch, the corresponding multi-touch instruction includes a 3D Touch instruction. Of course, in other optional embodiments of the present application, the touch technology may be a single-point touch technology, with a corresponding single-point touch instruction. This is not specifically limited: the touch instruction may be generated by any convenient touch technology.
It should be noted that, in practice, the purpose of the touch instruction is to take the input text between the display position of the input cursor and the first punctuation mark before the input text as the target following text. Therefore, the display position of the input cursor after the user's input must first be determined; the input text is then scanned from the cursor's display position toward the beginning until the first punctuation mark before the input text is identified; finally, the identified input text before the cursor's display position and after the first punctuation mark is taken as the target following text. Thus, in an optional implementation of this embodiment of the present application, step 201 may include, for example, the following steps:
Step A: determining the display position of the input cursor based on the touch instruction;
Step B: scanning the input text from the display position of the input cursor toward the beginning until the first punctuation mark before the input text is identified;
Step C: determining the input text between the first punctuation mark and the display position of the input cursor as the target following text.
As an example, suppose the user types at the user terminal: 'The poet Wang Wei wrote in "On the Double Ninth, Remembering My Shandong Brothers" (《九月九日忆山东兄弟》): 遥知兄弟登高处 (I know from afar my brothers climb the heights)'. After the user generates a touch instruction through a touch operation on the user terminal, it is first determined, based on the touch instruction, that the display position of the input cursor is after "遥知兄弟登高处". The input text is then scanned from there toward the beginning until the first punctuation mark before the input text, the colon ":", is identified. Finally, the input text between the colon and the cursor's display position, "遥知兄弟登高处", is determined as the target following text.
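As an illustrative, non-limiting sketch of steps A to C (the function name, the punctuation set, and the example string below are assumptions for illustration and are not part of the original disclosure), the backward scan from the cursor position may look as follows:

```python
# Hedged sketch of steps A-C: scan backward from the input cursor's
# display position until the first punctuation mark, and take the span
# in between as the target following text.
PUNCTUATION = set("，。！？；：,.!?;:")  # illustrative punctuation set

def extract_target_following_text(input_text: str, cursor_pos: int) -> str:
    """Return the input text between the first punctuation mark before
    the cursor and the cursor's display position (steps B and C)."""
    start = cursor_pos
    while start > 0 and input_text[start - 1] not in PUNCTUATION:
        start -= 1  # step B: scan toward the beginning of the text
    return input_text[start:cursor_pos].strip()  # step C

text = 'The poet Wang Wei wrote: 遥知兄弟登高处'
print(extract_target_following_text(text, len(text)))  # -> 遥知兄弟登高处
```

Here the cursor position is taken to be the end of the string; in a real input method it would come from the editor state reported together with the touch instruction.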
Step 202: input the target following text into the following-to-preceding association model to obtain the target preceding text of the target following text, wherein the following-to-preceding association model is pre-trained on a plurality of training following texts and the training preceding texts corresponding to the training following texts.
It can be understood that, after the target following text is determined in step 201, the target following text needs to be associated with its target preceding text. In the embodiments of the present application, a following-to-preceding association model is trained in advance on a plurality of training following texts and their corresponding training preceding texts, and is then used to associate the target following text with its target preceding text.
Specifically, when the following-to-preceding association model is trained, a plurality of training following texts and their corresponding training preceding texts are collected as training samples; starting from an initial language model, each training following text is taken as input and its corresponding training preceding text as output, and training proceeds until all training samples have been used, yielding the trained language model as the following-to-preceding association model. Thus, in an optional implementation of this embodiment of the present application, the training step of the following-to-preceding association model is specifically: training an initial language model by taking each training following text as input and the training preceding text corresponding to each training following text as output, to obtain the following-to-preceding association model.
As an example, a plurality of training following texts and their corresponding training preceding texts are collected in advance: the training following text "遥知兄弟登高处" (I know from afar my brothers climb the heights) corresponds to the training preceding text "每逢佳节倍思亲" (On festive days I doubly pine for my kin); the training following text "天涯共此时" (We share this moment though far apart) corresponds to the training preceding text "海上生明月" (The bright moon rises over the sea); …; and the training following text "西出阳关无故人" (West of Yang Pass you will meet no old friend) corresponds to the training preceding text "劝君更尽一杯酒" (I urge you to drain one more cup of wine). The initial language model is trained by taking each training following text as input and its corresponding training preceding text as output, yielding the following-to-preceding association model. For this model, inputting the target following text "遥知兄弟登高处" can obtain "每逢佳节倍思亲" as its target preceding text.
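The patent does not specify a model architecture for the following-to-preceding association model; as a hedged sketch under that caveat, the training step can be illustrated with a small encoder-decoder language model in PyTorch, where the tokenized training following text is the input and the tokenized training preceding text is the output (the class name, vocabulary size, and random placeholder tensors are assumptions for illustration):

```python
import torch
import torch.nn as nn

class FollowingToPrecedingModel(nn.Module):
    """Toy encoder-decoder: encodes the following text and decodes the
    preceding text. Sizes and architecture are illustrative only."""
    def __init__(self, vocab_size: int, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, following_ids, preceding_ids):
        _, state = self.encoder(self.embed(following_ids))
        dec_out, _ = self.decoder(self.embed(preceding_ids), state)
        return self.out(dec_out)

# Placeholder tensors standing in for tokenized (following, preceding)
# training pairs such as ("遥知兄弟登高处", "每逢佳节倍思亲").
pairs = [(torch.randint(0, 5000, (1, 8)), torch.randint(0, 5000, (1, 9)))]

model = FollowingToPrecedingModel(vocab_size=5000)
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for following_ids, preceding_ids in pairs:
    # Teacher forcing: predict each preceding-text token from the
    # previous ones, conditioned on the encoded following text.
    logits = model(following_ids, preceding_ids[:, :-1])
    loss = loss_fn(logits.reshape(-1, logits.size(-1)),
                   preceding_ids[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```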
It should be noted that different touch operations correspond to different touch types; that is, touch instructions generated by different touch operations have different touch types. For example, a touch instruction generated by a tap 3D Touch operation has the touch type "tap", and a touch instruction generated by a long-press 3D Touch operation has the touch type "long press". For one target following text, the following-to-preceding association model may associate a plurality of preceding texts with different characteristics; in the embodiments of the present application, each touch type can be associated in advance with a characteristic of the preceding texts produced by the model. Therefore, when the target following text is input into the following-to-preceding association model to obtain the target preceding text, the touch type corresponding to the touch instruction is determined first, and the target following text is then input into the model to obtain the preceding text corresponding to that touch type as the target preceding text. That is, in an optional implementation of this embodiment of the present application, step 202 may include, for example, the following steps:
Step D: determining a touch type corresponding to the touch instruction, wherein the touch type represents a touch operation mode;
Step E: inputting the target following text into the following-to-preceding association model to obtain the preceding text corresponding to the touch type as the target preceding text.
For example, assume that the touch types include tap and long press. For a target following text, the following-to-preceding association model may associate several preceding texts, such as a preceding text containing one sentence and a preceding text containing two sentences, having the characteristics "one sentence" and "two sentences" respectively; the touch type "tap" is associated in advance with the characteristic "one sentence", and the touch type "long press" with the characteristic "two sentences". Suppose the user types at the user terminal: 'The poet Wang Wei wrote in "On the Double Ninth, Remembering My Shandong Brothers": 遥知兄弟登高处'. Examples for steps D to E are as follows:
As one example, when the user performs a tap 3D Touch operation, the target following text is "遥知兄弟登高处", the touch type corresponding to the touch instruction is tap, and inputting the target following text into the following-to-preceding association model obtains the one-sentence preceding text "每逢佳节倍思亲" (On festive days I doubly pine for my kin) as the target preceding text.
As another example, when the user performs a long-press 3D Touch operation, the target following text is "遥知兄弟登高处", the touch type corresponding to the touch instruction is long press, and inputting the target following text into the following-to-preceding association model obtains the two-sentence preceding text "独在异乡为异客，每逢佳节倍思亲" (Alone, a stranger in a strange land, on festive days I doubly pine for my kin) as the target preceding text.
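A minimal sketch of steps D and E under the same assumptions (the tap/long-press mapping follows the example above; the candidate preceding texts, which a real system would obtain from the following-to-preceding association model, are stubbed as literals):

```python
# Touch type -> characteristic of the associated preceding text,
# following the example: tap -> one sentence, long press -> two.
SENTENCES_FOR_TOUCH = {"tap": 1, "long_press": 2}

def pick_target_preceding_text(candidates, touch_type):
    """candidates maps sentence count -> preceding text associated by
    the following-to-preceding association model (step E); touch_type
    is the result of step D."""
    return candidates[SENTENCES_FOR_TOUCH[touch_type]]

candidates = {
    1: "每逢佳节倍思亲",
    2: "独在异乡为异客，每逢佳节倍思亲",
}
print(pick_target_preceding_text(candidates, "tap"))         # one sentence
print(pick_target_preceding_text(candidates, "long_press"))  # two sentences
```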
Step 203: display the target preceding text in correspondence with the target following text.
It can be understood that, after the target preceding text is obtained in step 202, it needs to be displayed on the user terminal interface in correspondence with the target following text, so that it is visually presented to the user and the user can choose whether to put it on screen. The embodiments of the present application place no limitation on the display position of the target preceding text relative to the target following text on the user terminal interface.
Corresponding to the description of steps D to E under step 202, Fig. 3 is a schematic diagram of the target preceding text displayed for the tap 3D Touch case: 'The poet Wang Wei wrote in "On the Double Ninth, Remembering My Shandong Brothers": 遥知兄弟登高处 (input cursor)' is entered and displayed on the user terminal interface; when the user taps with 3D Touch, the target preceding text "每逢佳节倍思亲" is displayed in correspondence with the target following text "遥知兄弟登高处". Fig. 4 is the corresponding schematic diagram for the long-press 3D Touch case: with the same input displayed, when the user long-presses with 3D Touch, the target preceding text "独在异乡为异客，每逢佳节倍思亲" is displayed in correspondence with the target following text "遥知兄弟登高处" on the user terminal interface.
It should be further noted that, after the target preceding text is displayed in correspondence with the target following text in step 203, the user may perform a screen-up operation on the target preceding text, generating a screen-up instruction for it; the target preceding text then needs to be input before the target following text. Therefore, in an optional implementation of this embodiment of the present application, after step 203, the method may further include, for example, step F: inputting the target preceding text before the target following text according to the screen-up instruction for the target preceding text.
As an example, the target following text is "遥知兄弟登高处" and the target preceding text displayed in correspondence with it is "每逢佳节倍思亲". When the user performs a screen-up operation on the target preceding text "每逢佳节倍思亲", a screen-up instruction is generated, and according to this instruction the target preceding text "每逢佳节倍思亲" is input before the target following text "遥知兄弟登高处".
It should be noted that an indispensable target punctuation mark, indicating a pause or the end of a sentence, may exist between the target preceding text and the target following text. Considering the connection between the two texts and the role of the target punctuation mark, the target punctuation mark needs to be input automatically between the target preceding text and the target following text at the same time as, or after, the target preceding text is input before the target following text. Therefore, in an optional implementation of this embodiment of the present application, if a target punctuation mark exists between the target preceding text and the target following text, simultaneously with or after the inputting of the target preceding text in step F, the method may further include, for example, step G: automatically inputting the target punctuation mark between the target preceding text and the target following text.
As an example, the target following text is "遥知兄弟登高处", the target preceding text displayed in correspondence with it is "每逢佳节倍思亲", and the target punctuation mark between them is the period "。". Then, simultaneously with or after the target preceding text "每逢佳节倍思亲" is input before the target following text "遥知兄弟登高处" according to the screen-up instruction, the period "。" is automatically input between "每逢佳节倍思亲" and "遥知兄弟登高处".
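A hedged sketch of steps F and G (the function name and the buffer layout are illustrative assumptions, not the patent's interface): on the screen-up instruction, the target preceding text and the target punctuation mark are spliced in immediately before the target following text.

```python
def commit_preceding_text(buffer: str, following_start: int,
                          preceding: str, punctuation: str = "。") -> str:
    """Step F inserts `preceding` before the target following text,
    which begins at index `following_start`; step G inserts the target
    punctuation mark between the two texts at the same time."""
    return (buffer[:following_start] + preceding + punctuation
            + buffer[following_start:])

buffer = "遥知兄弟登高处"
print(commit_preceding_text(buffer, 0, "每逢佳节倍思亲"))
# -> 每逢佳节倍思亲。遥知兄弟登高处
```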
Through the various implementations provided by this embodiment, first, the input text before the display position of the input cursor is determined as the target following text according to a touch instruction; then, the target following text is input into a following-to-preceding association model pre-trained on a plurality of training following texts and their corresponding training preceding texts, and a target preceding text of the target following text is obtained; finally, the target preceding text is displayed in correspondence with the target following text. Thus, when a user wants to obtain by association the preceding text of an input text, the user does not need to move the input cursor through a cursor-moving operation; a touch instruction triggers the input text before the cursor's display position to be associated with its preceding text, so that cumbersome operations and misoperations are avoided, the pre-trained following-to-preceding association model provides a strong preceding-text association capability, and the user's input experience is effectively improved.
Exemplary devices
Referring to Fig. 5, a schematic diagram of an input apparatus in an embodiment of the present application is shown. In this embodiment, the apparatus may specifically include:
a target following text determining unit 501, configured to determine, based on a touch instruction, the input text before the display position of the input cursor as a target following text;
a target preceding text obtaining unit 502, configured to input the target following text into a following-to-preceding association model to obtain a target preceding text of the target following text, where the following-to-preceding association model is pre-trained on a plurality of training following texts and the training preceding texts corresponding to the training following texts;
a target preceding text display unit 503, configured to display the target preceding text in correspondence with the target following text.
In an optional implementation of this embodiment of the present application, the target following text determining unit 501 includes:
a first determining subunit, configured to determine the display position of the input cursor based on the touch instruction;
an identifying subunit, configured to scan the input text from the display position of the input cursor toward the beginning until the first punctuation mark before the input text is identified;
a second determining subunit, configured to determine the input text between the first punctuation mark and the display position of the input cursor as the target following text.
In an optional implementation of this embodiment of the present application, the target preceding text obtaining unit 502 includes:
a third determining subunit, configured to determine a touch type corresponding to the touch instruction, where the touch type represents a touch operation mode;
an obtaining subunit, configured to input the target following text into the following-to-preceding association model to obtain the preceding text corresponding to the touch type as the target preceding text.
In an optional implementation of this embodiment of the present application, the apparatus further includes:
a following-to-preceding association model training unit, specifically configured to train an initial language model by taking each training following text as input and the training preceding text corresponding to each training following text as output, to obtain the following-to-preceding association model.
In an optional implementation of this embodiment of the present application, the touch instruction includes a multi-touch instruction, and the multi-touch instruction includes a 3D Touch instruction.
In an optional implementation of this embodiment of the present application, the apparatus further includes:
a target preceding text input unit, configured to input the target preceding text before the target following text according to a screen-up instruction for the target preceding text.
In an optional implementation of this embodiment of the present application, if a target punctuation mark exists between the target preceding text and the target following text, the apparatus further includes:
a target punctuation input unit, configured to automatically input the target punctuation mark between the target preceding text and the target following text.
Through the various implementations provided by this embodiment, first, the input text before the display position of the input cursor is determined as the target following text according to a touch instruction; then, the target following text is input into a following-to-preceding association model pre-trained on a plurality of training following texts and their corresponding training preceding texts, and a target preceding text of the target following text is obtained; finally, the target preceding text is displayed in correspondence with the target following text. Thus, when a user wants to obtain by association the preceding text of an input text, the user does not need to move the input cursor through a cursor-moving operation; a touch instruction triggers the input text before the cursor's display position to be associated with its preceding text, so that cumbersome operations and misoperations are avoided, the pre-trained following-to-preceding association model provides a strong preceding-text association capability, and the user's input experience is effectively improved.
FIG. 6 is a block diagram illustrating an apparatus 600 for input according to an example embodiment. For example, the apparatus 600 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 6, apparatus 600 may include one or more of the following components: processing component 602, memory 604, power component 606, multimedia component 608, audio component 610, input/output (I/O) interface 612, sensor component 614, and communication component 616.
The processing component 602 generally controls overall operation of the device 600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 602 may include one or more processors 620 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 602 can include one or more modules that facilitate interaction between the processing component 602 and other components. For example, the processing component 602 can include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operation at the device 600. Examples of such data include instructions for any application or method operating on device 600, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 604 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power supply component 606 provides power to the various components of device 600. The power components 606 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 600.
The multimedia component 608 includes a screen that provides an output interface between the device 600 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 608 includes a front-facing camera and/or a rear-facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 600 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 610 is configured to output and/or input audio signals. For example, audio component 610 includes a Microphone (MIC) configured to receive external audio signals when apparatus 600 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 604 or transmitted via the communication component 616. In some embodiments, audio component 610 further includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 614 includes one or more sensors for providing status assessment of various aspects of the apparatus 600. For example, the sensor component 614 may detect an open/closed state of the device 600, the relative positioning of components, such as a display and keypad of the apparatus 600, the sensor component 614 may also detect a change in position of the apparatus 600 or a component of the apparatus 600, the presence or absence of user contact with the apparatus 600, orientation or acceleration/deceleration of the apparatus 600, and a change in temperature of the apparatus 600. The sensor assembly 614 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is configured to facilitate communications between the apparatus 600 and other devices in a wired or wireless manner. The apparatus 600 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 616 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 616 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 604 comprising instructions, executable by the processor 620 of the apparatus 600 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer-readable storage medium having instructions therein which, when executed by a processor of a mobile terminal, enable the mobile terminal to perform an input method, the method comprising:
determining, based on a touch instruction, the input text before the display position of the input cursor as a target following text;
inputting the target following text into a following-to-preceding association model to obtain a target preceding text of the target following text, wherein the following-to-preceding association model is pre-trained on a plurality of training following texts and the training preceding texts corresponding to the training following texts;
and displaying the target preceding text in correspondence with the target following text.
Fig. 7 is a schematic structural diagram of a server in the embodiment of the present application. The server 700 may vary significantly depending on configuration or performance, and may include one or more Central Processing Units (CPUs) 722 (e.g., one or more processors) and memory 732, one or more storage media 730 (e.g., one or more mass storage devices) storing applications 742 or data 744. Memory 732 and storage medium 730 may be, among other things, transient storage or persistent storage. The program stored in the storage medium 730 may include one or more modules (not shown), each of which may include a series of instruction operations for the server. Further, the central processor 722 may be configured to communicate with the storage medium 730, and execute a series of instruction operations in the storage medium 730 on the server 700.
The server 700 may also include one or more power supplies 726, one or more wired or wireless network interfaces 750, one or more input-output interfaces 758, one or more keyboards 756, and/or one or more operating systems 741, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing is merely a preferred embodiment of the present application and is not intended to limit the present application in any way. Although the present application has been disclosed above with reference to preferred embodiments, they are not intended to limit it. Those skilled in the art can make numerous possible variations and modifications to the disclosed technical solution, or derive equivalent embodiments, using the methods and technical content disclosed above, without departing from the scope of the technical solution of the present application. Therefore, any simple modification, equivalent change, or refinement made to the above embodiments according to the technical essence of the present application, without departing from the content of the technical solution of the present application, still falls within the protection scope of the technical solution of the present application.

Claims (10)

1. An input method, comprising:
determining, based on a touch instruction, the input text before the display position of the input cursor as a target following text;
inputting the target following text into a following-to-preceding association model to obtain a target preceding text of the target following text, wherein the following-to-preceding association model is pre-trained on a plurality of training following texts and the training preceding texts corresponding to the training following texts;
and displaying the target preceding text in correspondence with the target following text.
2. The method of claim 1, wherein the determining, based on the touch instruction, the input text before the display position of the input cursor as the target following text comprises:
determining the display position of the input cursor based on the touch instruction;
scanning the input text from the display position of the input cursor toward the beginning until the first punctuation mark before the input text is identified;
determining the input text between the first punctuation mark and the display position of the input cursor as the target following text.
3. The method of claim 1, wherein the inputting the target following text into the following-to-preceding association model to obtain the target preceding text of the target following text comprises:
determining a touch type corresponding to the touch instruction, wherein the touch type represents a touch operation mode;
and inputting the target following text into the following-to-preceding association model to obtain the preceding text corresponding to the touch type as the target preceding text.
4. The method according to claim 1, wherein the training step of the following-to-preceding association model specifically comprises:
training an initial language model by taking each training following text as input and the training preceding text corresponding to each training following text as output, to obtain the following-to-preceding association model.
5. The method of any of claims 1-4, wherein the touch instruction comprises a multi-touch instruction, and the multi-touch instruction comprises a 3D Touch instruction.
6. The method of claim 1, further comprising, after the displaying of the target preceding text:
inputting the target preceding text before the target following text according to a screen-up instruction for the target preceding text.
7. The method of claim 6, wherein, if a target punctuation mark exists between the target preceding text and the target following text, the method further comprises, simultaneously with or after the inputting of the target preceding text:
automatically inputting the target punctuation mark between the target preceding text and the target following text.
8. An input apparatus, comprising:
a target following text determining unit, configured to determine, based on a touch instruction, the input text before the display position of the input cursor as a target following text;
a target preceding text obtaining unit, configured to input the target following text into a following-to-preceding association model to obtain a target preceding text of the target following text, wherein the following-to-preceding association model is pre-trained on a plurality of training following texts and the training preceding texts corresponding to the training following texts;
and a target preceding text display unit, configured to display the target preceding text in correspondence with the target following text.
9. An apparatus for input, comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs comprising instructions for:
determining, based on a touch instruction, the input text before the display position of the input cursor as a target following text;
inputting the target following text into a following-to-preceding association model to obtain a target preceding text of the target following text, wherein the following-to-preceding association model is pre-trained on a plurality of training following texts and the training preceding texts corresponding to the training following texts;
and displaying the target preceding text in correspondence with the target following text.
10. A machine-readable medium having instructions stored thereon which, when executed by one or more processors, cause an apparatus to perform the input method of one or more of claims 1-7.
CN201910770101.XA 2019-08-20 2019-08-20 Input method and related device Active CN112416217B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910770101.XA CN112416217B (en) 2019-08-20 2019-08-20 Input method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910770101.XA CN112416217B (en) 2019-08-20 2019-08-20 Input method and related device

Publications (2)

Publication Number Publication Date
CN112416217A true CN112416217A (en) 2021-02-26
CN112416217B CN112416217B (en) 2022-05-06

Family

ID=74779526

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910770101.XA Active CN112416217B (en) 2019-08-20 2019-08-20 Input method and related device

Country Status (1)

Country Link
CN (1) CN112416217B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008276459A (en) * 2007-04-27 2008-11-13 Sanyo Electric Co Ltd Input character string prediction device, input character string prediction program and electronic medical chart system
CN101526878A (en) * 2009-04-10 2009-09-09 无敌科技(西安)有限公司 Association word input system and method thereof
CN102629160A (en) * 2012-03-16 2012-08-08 华为终端有限公司 Input method, input device and terminal
CN104298457A (en) * 2013-07-18 2015-01-21 广州三星通信技术研究有限公司 Character input method and device
CN109558016A (en) * 2017-09-25 2019-04-02 北京搜狗科技发展有限公司 A kind of input method and device


Also Published As

Publication number Publication date
CN112416217B (en) 2022-05-06

Similar Documents

Publication Publication Date Title
US10296201B2 (en) Method and apparatus for text selection
EP3171279A1 (en) Method and device for input processing
US20170300210A1 (en) Method and device for launching a function of an application and computer-readable medium
CN105260360B (en) Name recognition methods and the device of entity
CN107992257B (en) Screen splitting method and device
CN104317402B (en) Description information display method and device and electronic equipment
US9959487B2 (en) Method and device for adding font
JP2017527928A (en) Text input method, apparatus, program, and recording medium
EP3232314A1 (en) Method and device for processing an operation
CN103885632A (en) Input method and input device
CN105511777B (en) Session display method and device on touch display screen
WO2019007236A1 (en) Input method, device, and machine-readable medium
US20210165670A1 (en) Method, apparatus for adding shortcut plug-in, and intelligent device
CN106155703B (en) Emotional state display method and device
JP2017525076A (en) Character identification method, apparatus, program, and recording medium
CN111596832B (en) Page switching method and device
CN113936697B (en) Voice processing method and device for voice processing
CN108073291B (en) Input method and device and input device
CN110554780A (en) sliding input method and device
CN112416217B (en) Input method and related device
CN111679746A (en) Input method and device and electronic equipment
CN109542244B (en) Input method, device and medium
CN113127613B (en) Chat information processing method and device
CN114051157A (en) Input method and device
US20170060822A1 (en) Method and device for storing string

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant