CN112506390A - Multi-party text input method and device - Google Patents
- Publication number
- CN112506390A CN112506390A CN202011456649.6A CN202011456649A CN112506390A CN 112506390 A CN112506390 A CN 112506390A CN 202011456649 A CN202011456649 A CN 202011456649A CN 112506390 A CN112506390 A CN 112506390A
- Authority
- CN
- China
- Prior art keywords
- input
- text
- area
- party
- input text
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0489—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using dedicated keyboard keys or combinations thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
Abstract
The application discloses a multi-party text input method and device. The multi-party text input method comprises the following steps: acquiring a first input text; constructing an input area corresponding to the first input text; inputting the first input text into the input area; acquiring a second input text; acquiring a cursor index area corresponding to the second input text in the current document; and inputting the second input text into the cursor index area. It should be noted that the input mode of the second input text is not associated with the input mode of the first input text.
Description
Technical Field
The present application relates to the field of text input technologies, and in particular, to a method and an apparatus for inputting multi-party text.
Background
When using a mobile phone or computer, many people prefer to enter text with a keyboard, and keyboard input is a relatively mature technology.
In addition, many people enter text using speech recognition, and speech-recognition input is widely applied.
In the course of implementing the prior art, the inventor found that:
current text input modes cannot support entering text through multiple input methods at the same time. For example, when text is entered by speech recognition, the current cursor position is affected, so keyboard input cannot be used simultaneously with voice input.
Therefore, it is necessary to provide a technical solution that enables text entry through multiple input methods simultaneously.
Disclosure of Invention
The embodiments of the application provide a scheme for multi-party text input, which solves the technical problem that text cannot be entered using multiple input modes simultaneously.
The multi-party text input method provided by the application comprises the following steps:
acquiring a first input text;
constructing an input area corresponding to the first input text;
inputting the first input text into the input area;
acquiring a second input text;
acquiring a cursor index area corresponding to a second input text in the current document;
inputting the second input text into the cursor index area;
and the input mode of the second input text is not related to the input mode of the first input text.
Further, the first input text or the second input text is input by at least one of voice recognition, handwriting input and keyboard input.
Further, the input area is constructed on one side of the cursor index area.
Further, the input area is constructed by calling a document operation interface provided by WPS under the Linux system.
Further, the input area is at least one of a Range field and a Cells field.
A multi-party text input device comprising:
the first acquisition module is used for acquiring a first input text;
the first creating module is used for constructing an input area corresponding to the first input text;
a first input module for inputting the first input text into the input area;
the second acquisition module is used for acquiring a second input text;
the third acquisition module is used for acquiring a cursor index area corresponding to the second input text in the current document;
and the second input module is used for inputting the second input text into the cursor index area;
wherein the input mode of the second input text is not related to the input mode of the first input text.
Further, the first input text or the second input text is input by at least one of voice recognition, handwriting input and keyboard input.
Further, the input area is constructed on one side of the cursor index area.
Further, the input area is constructed by calling a document operation interface provided by WPS under the Linux system.
Further, the input area is at least one of a Range field and a Cells field.
The embodiment provided by the application has at least the following beneficial effects:
when multiple users edit the same document, the text can be input simultaneously by adopting multiple input modes, thereby improving the working efficiency.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a flowchart of a method for implementing multi-party text input according to an embodiment of the present disclosure.
Fig. 2 is a block diagram schematically illustrating a structure of a multi-party text input device according to an embodiment of the present disclosure.
Reference numerals:
100 multi-party text input device
110 first acquisition module
120 first creation module
130 first input module
140 second acquisition module
150 third acquisition Module
160 second input module
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, the present application provides a multi-party text input method, which includes the following steps:
s110: a first input text is obtained.
It should be noted that the first input text is input by at least one of speech recognition, handwriting input, and keyboard input.
Specifically, when the input text is entered by speech recognition, the multi-party text input system determines the matched characters from an acoustic model or a language model based on the acquired audio information.
When the input text is entered by handwriting, the multi-party text input system determines the matched characters in its database from the acquired handwriting track.
When the input text is entered by keyboard, the multi-party text input system acquires the matched characters from the user's keyboard input.
It should be particularly noted that the matched characters are combined to generate the first input text.
Specifically, the first input text includes at least one of characters, digits, letters, symbols, and the like.
S120: an input area corresponding to the first input text is constructed.
It is noted that the multi-party text input system will construct at least one input area.
Wherein the input area is for providing a first input text input.
Specifically, the input area may be at least one of a Range field and a Cells field.
It should be understood that each input area described herein serves to receive the first input text; the input area may therefore be constructed in different programming languages or in different ways. The described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort shall fall within the protection scope of the present application.
Specifically, the input area may be constructed on one side of the cursor index area.
In a specific embodiment provided by the present application, the input area may be a Range domain constructed by calling a document operation interface provided by WPS in the Linux system.
In another embodiment provided by the present application, the input area may be a Cells domain constructed by calling a document operation interface provided by WPS in the Linux system.
S130: inputting the first input text into the input area.
It is noted that the multi-party text input system enters the first input text into the input area.
In one embodiment provided by the present application, a multi-party text input system obtains audio information of an operator and generates a first input text through speech recognition.
The multi-party text input system then calls the document operation interface provided by WPS under the Linux system to construct a Cells domain as the input area; the Cells domain is constructed behind the cursor index.
The multi-party text input system inputs the first input text into the Cells field for display.
It should be particularly noted that writing into the Cells field does not move the cursor.
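As an illustration of steps S120 and S130 above, the sketch below models a document as a plain string with a cursor index, and builds an auxiliary input area to one side of that index. All names here (`Document`, `construct_input_area`, `input_to_area`) are hypothetical; this is a toy model, not the actual WPS document operation interface.

```python
# Toy model of S120-S130: construct an input area beside the cursor
# index and write the first input text into it without moving the
# cursor. Illustrative names only, not the WPS API.

class Document:
    def __init__(self, text="", cursor=0):
        self.text = text          # document content
        self.cursor = cursor      # cursor index (caret position)
        self.areas = []           # auxiliary input areas

    def construct_input_area(self):
        """Create an empty input area immediately after the cursor index."""
        area = {"start": self.cursor, "length": 0}
        self.areas.append(area)
        return area

    def input_to_area(self, area, s):
        """Insert text into the area; the cursor index is left unchanged."""
        pos = area["start"] + area["length"]
        self.text = self.text[:pos] + s + self.text[pos:]
        area["length"] += len(s)
        # note: self.cursor is deliberately not updated

doc = Document(text="hello world", cursor=5)   # cursor after "hello"
area = doc.construct_input_area()
doc.input_to_area(area, " [dictated]")
print(doc.text)     # hello [dictated] world
print(doc.cursor)   # 5 -- unchanged, so keyboard input is unaffected
```

The key property, mirroring the embodiment above, is that writing into the constructed area never touches the cursor index.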
S140: and acquiring a second input text.
It should be noted that the second input text is input by at least one of speech recognition, handwriting input, and keyboard input.
Specifically, when the input text is entered by speech recognition, the multi-party text input system determines the matched characters from an acoustic model or a language model based on the acquired audio information.
When the input text is entered by handwriting, the multi-party text input system determines the matched characters in its database from the acquired handwriting track.
When the input text is entered by keyboard, the multi-party text input system acquires the matched characters from the user's keyboard input.
It should be particularly noted that the matched characters are combined to generate the second input text.
Specifically, the second input text includes at least one of characters, digits, letters, symbols, and the like.
It should be noted in particular that the input mode of the second input text is not associated with the input mode of the first input text.
S150: and acquiring a cursor index area corresponding to the second input text in the current document.
It should be noted that the cursor index area corresponding to the second input text in the current document is used for providing the second input text input.
Specifically, the cursor index area provides, in front of the cursor, the area into which the second input text is inserted.
S160: and inputting the second input text to the cursor index area.
It is noted that the multi-party text input system inputs the second input text into the cursor index area.
In one embodiment provided herein, a multi-party text input system obtains audio information of a first operator and generates a first input text through speech recognition.
The multi-party text input system then calls the document operation interface provided by WPS under the Linux system to construct a Range domain as the input area; the Range domain is constructed behind the cursor index.
The multi-party text input system inputs the first input text into the Range field for display.
It should be particularly noted that writing into the Range field does not move the cursor.
Next, the multi-party text input system acquires the text typed by the second operator on the keyboard as the second input text.
The multi-party text input system inputs the second input text at the cursor index area.
Because the input mode of the second input text is not related to the input mode of the first input text, the speech-recognition content can be modified in real time to supplement the speech recognition result.
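The embodiment above can be simulated with a toy model in which dictated text flows into a Range-like area behind the cursor index while typed text is inserted at the cursor. Everything here is illustrative (`MultiPartyDoc`, `voice_input`, and `keyboard_input` are invented names), assuming a simple string-based document rather than the real WPS Range API.

```python
# Toy simulation of the two-operator embodiment: operator 1's
# recognized speech goes into a Range-like area built behind the
# cursor index, while operator 2 types at the cursor. The two input
# paths never disturb each other's positions.

class MultiPartyDoc:
    def __init__(self, text, cursor):
        self.text, self.cursor = text, cursor
        self.range_start = cursor     # Range area begins behind the cursor
        self.range_len = 0

    def voice_input(self, s):
        """First input mode: append recognized text to the Range area."""
        pos = self.range_start + self.range_len
        self.text = self.text[:pos] + s + self.text[pos:]
        self.range_len += len(s)      # the cursor index is not moved

    def keyboard_input(self, s):
        """Second input mode: insert at the cursor index, in front of the Range."""
        self.text = self.text[:self.cursor] + s + self.text[self.cursor:]
        self.cursor += len(s)
        self.range_start += len(s)    # Range area shifts with the insertion

doc = MultiPartyDoc("Minutes: ", cursor=9)
doc.voice_input("agenda item one")   # dictated text lands behind the cursor
doc.keyboard_input("[edited] ")      # typed text lands at the cursor
doc.voice_input(", item two")
print(doc.text)   # Minutes: [edited] agenda item one, item two
```

Interleaving the two calls in any order leaves each mode writing to its own region, which is the independence property the embodiment relies on.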
To support a multi-party text input method, the present application provides a multi-party text input device 100.
Referring to fig. 2, a multi-party text input device 100 provided in the present application includes:
the first obtaining module 110 is configured to obtain a first input text.
It should be noted that the first input text is input by at least one of speech recognition, handwriting input, and keyboard input.
Specifically, when the input text is entered by speech recognition, the first obtaining module 110 determines the matched characters from an acoustic model or a language model based on the acquired audio information.
When the input text is entered by handwriting, the first obtaining module 110 determines the matched characters in its database from the acquired handwriting track.
When the input text is entered by keyboard, the first obtaining module 110 acquires the matched characters from the user's keyboard input.
It should be particularly noted that the first obtaining module 110 combines the matched characters to generate the first input text.
Specifically, the first input text includes at least one of characters, digits, letters, symbols, and the like.
A first creation module 120 for constructing an input area corresponding to the first input text.
It is noted that the first creation module 120 will construct at least one input area.
Wherein the input area is for providing a first input text input.
Specifically, the input area may be at least one of a Range field and a Cells field.
It should be understood that each input area described herein serves to receive the first input text; the input area may therefore be constructed in different programming languages or in different ways. The described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort shall fall within the protection scope of the present application.
Specifically, the input area may be constructed on one side of the cursor index area.
In one embodiment provided by the present application, the input area is a Range domain constructed by the first creation module 120 calling the document operation interface provided by WPS under the Linux system.
In another embodiment provided by the present application, the input area is a Cells domain constructed by the first creation module 120 calling the document operation interface provided by WPS under the Linux system.
A first input module 130, configured to input the first input text into the input area.
It is noted that the first input module 130 inputs the first input text into the input area.
In one embodiment provided by the present application, the first obtaining module 110 obtains audio information of an operator, and generates a first input text through speech recognition.
The first creation module 120 then calls the document operation interface provided by WPS under the Linux system to construct a Cells domain as the input area; the Cells domain is constructed behind the cursor index.
The first input module 130 inputs the first input text into the Cells field for display.
It should be particularly noted that writing into the Cells field does not move the cursor.
And a second obtaining module 140, configured to obtain a second input text.
It should be noted that the second input text is input by at least one of speech recognition, handwriting input, and keyboard input.
Specifically, when the input text is entered by speech recognition, the second obtaining module 140 determines the matched characters from an acoustic model or a language model based on the acquired audio information.
When the input text is entered by handwriting, the second obtaining module 140 determines the matched characters in its database from the acquired handwriting track.
When the input text is entered by keyboard, the second obtaining module 140 acquires the matched characters from the user's keyboard input.
It should be particularly noted that the second obtaining module 140 combines the matched characters to generate the second input text.
Specifically, the second input text includes at least one of characters, digits, letters, symbols, and the like.
It should be noted in particular that the input mode of the second input text is not associated with the input mode of the first input text.
The third obtaining module 150 is configured to acquire the cursor index area corresponding to the second input text in the current document.
It should be noted that the cursor index area corresponding to the second input text in the current document is used for providing the second input text input.
Specifically, the cursor index area provides, in front of the cursor, the area into which the second input text is inserted.
The second input module 160 is configured to input the second input text into the cursor index area.
It is noted that the second input module 160 inputs the second input text into the cursor index area.
In one embodiment provided herein, a computer is equipped with the multi-party text input device 100, and the computer system is a Linux system.
The first obtaining module 110 obtains audio information of a first operator, and generates a first input text through speech recognition.
The first creation module 120 then calls the document operation interface provided by WPS under the Linux system to construct a Range domain as the input area; the Range domain is constructed behind the cursor index.
The first input module 130 inputs the first input text into the Range field for display.
It should be particularly noted that writing into the Range field does not move the cursor.
The second acquiring module 140 acquires text input by the second operator using the keyboard as second input text.
The third obtaining module 150 obtains a cursor index area corresponding to the second input text in the current document.
Next, the second input module 160 inputs a second input text into the cursor index area.
Because the input mode of the second input text is not related to the input mode of the first input text, the speech-recognition content can be modified in real time to supplement the speech recognition result.
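The module wiring of fig. 2 can be sketched as a set of small classes, one per reference numeral 110-160. This decomposition is purely illustrative: the class and method names are hypothetical, and the "document" is reduced to a dictionary so the data flow between modules is visible.

```python
# Structural sketch of the device in fig. 2. Each module from the
# reference numerals (110-160) becomes a small class; the wiring is
# illustrative only and does not reflect a real WPS-backed device.

class FirstAcquisitionModule:        # 110: obtain the first input text
    def acquire(self, source): return source()

class FirstCreationModule:           # 120: construct the input area
    def create(self, doc): return doc.setdefault("area", [])

class FirstInputModule:              # 130: write text into the input area
    def write(self, area, text): area.append(text)

class SecondAcquisitionModule:       # 140: obtain the second input text
    def acquire(self, source): return source()

class ThirdAcquisitionModule:        # 150: obtain the cursor index area
    def acquire(self, doc): return doc.setdefault("cursor_area", [])

class SecondInputModule:             # 160: write text at the cursor index
    def write(self, area, text): area.append(text)

doc = {}
m110, m120, m130 = FirstAcquisitionModule(), FirstCreationModule(), FirstInputModule()
m140, m150, m160 = SecondAcquisitionModule(), ThirdAcquisitionModule(), SecondInputModule()

# First path: acquire -> create area -> write (modules 110, 120, 130).
m130.write(m120.create(doc), m110.acquire(lambda: "dictated text"))
# Second path: acquire -> locate cursor area -> write (modules 140, 150, 160).
m160.write(m150.acquire(doc), m140.acquire(lambda: "typed text"))
print(doc)  # {'area': ['dictated text'], 'cursor_area': ['typed text']}
```

The two paths share the document but write to distinct regions, matching the independence of the two input modes described above.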
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.
Claims (10)
1. A multi-party text input method, comprising the steps of:
acquiring a first input text;
constructing an input area corresponding to the first input text;
inputting the first input text into the input area;
acquiring a second input text;
acquiring a cursor index area corresponding to a second input text in the current document;
inputting the second input text into the cursor index area;
and the input mode of the second input text is not related to the input mode of the first input text.
2. The multi-party text input method of claim 1, wherein the first input text or the second input text is input using at least one of voice recognition, handwriting input, and keyboard input.
3. The multi-party text input method of claim 1, wherein the input area is constructed on one side of a cursor index area.
4. The multi-party text input method of claim 1, wherein the input area is constructed by calling a document operation interface provided by WPS under the Linux system.
5. The multi-party text input method of claim 1, wherein the input area is at least one of a Range field and a Cells field.
6. A multi-party text input device, comprising:
the first acquisition module is used for acquiring a first input text;
the first creating module is used for constructing an input area corresponding to the first input text;
a first input module for inputting the first input text into the input area;
the second acquisition module is used for acquiring a second input text;
the third acquisition module is used for acquiring a cursor index area corresponding to the second input text in the current document;
and the second input module is used for inputting the second input text into the cursor index area;
wherein the input mode of the second input text is not related to the input mode of the first input text.
7. The multi-party text input device of claim 6, wherein the first input text or the second input text is input by at least one of voice recognition, handwriting input, and keyboard input.
8. The multi-party text input device of claim 6, wherein the input area is constructed to one side of a cursor index area.
9. The multi-party text input device of claim 6, wherein the input area is constructed by calling a document operation interface provided by WPS under the Linux system.
10. The multi-party text input device of claim 6, wherein the input area is at least one of a Range field and a Cells field.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011456649.6A CN112506390A (en) | 2020-12-10 | 2020-12-10 | Multi-party text input method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112506390A true CN112506390A (en) | 2021-03-16 |
Family
ID=74973641
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011456649.6A Pending CN112506390A (en) | 2020-12-10 | 2020-12-10 | Multi-party text input method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112506390A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS62191915A (en) * | 1986-02-18 | 1987-08-22 | Yokogawa Electric Corp | Multiinput data storage device |
CN103970467A (en) * | 2013-02-04 | 2014-08-06 | 英华达(上海)科技有限公司 | Multi-input-method handwriting recognition system and method |
CN105988769A (en) * | 2015-02-12 | 2016-10-05 | 中兴通讯股份有限公司 | Hybrid input method and apparatus |
CN110362214A (en) * | 2019-07-22 | 2019-10-22 | 江苏观复科技信息咨询有限公司 | A kind of input method, equipment and program product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20210316 |