CN111752439B - Input method, device, equipment and storage medium - Google Patents


Info

Publication number
CN111752439B
CN111752439B (Application CN202010605146.4A)
Authority
CN
China
Prior art keywords
recognition result
input
operation instruction
matching relation
input content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010605146.4A
Other languages
Chinese (zh)
Other versions
CN111752439A (en)
Inventor
张阔宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010605146.4A priority Critical patent/CN111752439B/en
Publication of CN111752439A publication Critical patent/CN111752439A/en
Application granted granted Critical
Publication of CN111752439B publication Critical patent/CN111752439B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02: Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023: Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233: Character input methods
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481: Interaction techniques based on GUIs based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0484: Interaction techniques based on GUIs for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0485: Scrolling or panning
    • G06F3/0487: Interaction techniques based on GUIs using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488: Interaction techniques based on GUIs using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883: Interaction techniques using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an input method, an input device, input equipment, and a storage medium, relating to the fields of artificial intelligence and intelligent search. The specific implementation scheme is as follows: an operation instruction received in an input area is recognized to obtain a recognition result; input content matching the recognition result is determined according to a pre-established matching relation table; and the matched input content is entered in the input area. The matching relation table records the matching relations between recognition results and input contents. With this scheme, input content can be matched according to user-defined operation instructions, which removes the dependence on fixed key positions found in the prior art; and because the matching relations between operation instructions and input contents are also user-defined, touch typing (eyes-free input) becomes easier for the user.

Description

Input method, device, equipment and storage medium
Technical Field
The present application relates to the field of artificial intelligence, and more particularly to the field of image processing and intelligent search. Specifically, the application provides an input method, an input device, equipment and a storage medium.
Background
With the development of intelligent devices, human-machine information interaction has become an increasingly indispensable part of daily life. Related input methods include the English (QWERTY) keyboard, the nine-grid (T9) keyboard, the handwriting keyboard, and the like. These input methods rely on fixed keys, and key input requires precise touches to enter the corresponding content; they are therefore inconvenient for special groups of users, or on special occasions where touch typing (eyes-free input) is needed.
Disclosure of Invention
The application provides an input method, an input device, input equipment and a storage medium.
According to an aspect of the present application, there is provided an input method including the steps of:
identifying the operation instruction received by the input area to obtain an identification result;
determining input content matched with the recognition result according to a pre-established matching relation table; and
inputting input content matched with the recognition result in the input area;
the matching relation table comprises the matching relation between the recognition result and the input content.
According to another aspect of the present application, there is provided an input device comprising the following components:
the recognition result confirmation module is used for recognizing the operation instruction received by the input area to obtain a recognition result;
the input content determining module is used for determining the input content matched with the recognition result according to a matching relation table established in advance; and
inputting input content matched with the recognition result in the input area;
the matching relation table comprises the matching relation between the recognition result and the input content.
According to a third aspect of the present application, an embodiment of the present application provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform a method provided by any one of the embodiments of the present application.
According to a fourth aspect of the present application, embodiments of the present application provide a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform a method provided by any one of the embodiments of the present application.
According to a fifth aspect of the present application, embodiments of the present application provide a computer program product comprising a computer program which, when executed by a processor, implements the method as described above.
By the scheme, the input content can be matched according to the user-defined operation instruction, the problem that the input depends on the fixed key position in the prior art is solved, and the touch typing input of the user is facilitated because the matching relation between the operation instruction and the input content is also user-defined.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is a flow chart of an input method according to a first embodiment of the present application;
FIG. 2 is a flow chart for obtaining recognition results according to the first embodiment of the present application;
FIG. 3 is a schematic diagram of an input area according to a first embodiment of the present application;
FIG. 4 is a flow chart for determining input content according to a first embodiment of the present application;
FIG. 5 is a schematic diagram of an input device according to a second embodiment of the present application;
FIG. 6 is a block diagram of an electronic device for implementing an input method of an embodiment of the application.
Detailed Description
The following description of the exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments of the application for the understanding of the same, which are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
As shown in fig. 1, an embodiment of the present application provides an input method, including the following steps:
s101: identifying the operation instruction received by the input area to obtain an identification result;
s102: determining input content matched with the recognition result according to a pre-established matching relation table; and
inputting input content matched with the recognition result in the input area;
the matching relation table comprises the matching relation between the recognition result and the input content.
The execution subject of the embodiments of the present application may be a device with a screen, such as a mobile phone, a tablet (PAD), a smart speaker with a screen, and the like. The input method of the embodiments of the present application can be switched to from another input method by manual switching, voice switching, or similar means.
A user can establish, in advance, a matching relation table between operation instructions and input contents. An operation instruction can be defined by the user; for example, it may be a single tap on the screen, a double tap, a tap held for 3 seconds, a single rightward or leftward slide, and so on. The screen serves as the input area; after an operation instruction from the user is received there, the operation instruction is recognized, and the recognition can then be confirmed to the user in text or voice form.
For example, when the user performs one bottom-to-top sliding touch on the input area (the screen) and the sliding distance exceeds a threshold, the feedback "slide-up instruction received" can be played to the user. After the user confirms the instruction, the user can be prompted to enter the content to be matched with the slide-up instruction. If the user enters the letter "a", or the Chinese character "好" (good), a matching relation is established between the one-time slide-up instruction and that letter or character, forming an entry of the matching relation table. Later, during input, when one bottom-to-top sliding touch whose distance exceeds the threshold is received from the user, the input content, such as "a" or "好", is determined from the matching relation table.
In addition, the matching relation table can contain input contents corresponding to control actions such as space, enter (carriage return), and delete. For example, the user may preset the input content corresponding to one top-to-bottom sliding touch as an enter instruction; when one top-to-bottom sliding operation instruction is received, an enter is input.
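The user-defined matching relation table described above can be sketched as a simple mapping from recognized gestures to input contents. The gesture names and contents below are hypothetical examples; the patent does not prescribe a concrete encoding.

```python
# Illustrative sketch of a user-defined matching relation table.
# Gesture names and contents are hypothetical, not the patent's encoding.
MATCHING_TABLE = {
    "swipe_up": "a",          # e.g. one bottom-to-top slide inputs the letter "a"
    "swipe_down": "<enter>",  # control actions (space, enter, delete) can also be mapped
    "double_tap": "<delete>",
}

def lookup(gesture):
    """Return the input content matched to a recognized gesture, or None."""
    return MATCHING_TABLE.get(gesture)
```

An unrecognized or unmapped gesture simply yields no input, which mirrors the table-driven matching the embodiment describes.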
By the scheme, the input content can be matched according to the user-defined operation instruction, the problem that the input depends on the fixed key position in the prior art is solved, and the touch typing input of the user is facilitated because the matching relation between the operation instruction and the input content is also user-defined.
As shown in fig. 2, in an embodiment, the input area includes at least two sub-input areas, and on the basis, step S101 includes:
s1011: identifying a first operation instruction received by the first sub-input area to obtain a first identification result;
s1012: identifying a second operation instruction received by the second sub-input area to obtain a second identification result;
and taking the set of the first recognition result and the second recognition result as the recognition result.
The input area may be divided into two sub-input areas. The division manner may include dividing the input area left and right, or dividing the input area up and down.
The input area is divided into left and right areas as shown in fig. 3. The left area may be a first sub-input area, and the right area may be a second sub-input area.
For example, the obtained first recognition result is a click operation performed by the user in the first sub-input area, and the obtained second recognition result is a left-to-right sliding operation performed by the user in the second sub-input area. Thus, the set of the click operation and the slide operation can be used as the final recognition result. And subsequently, when the matching relation table is inquired, inquiring input contents corresponding to the click operation and the sliding operation for matching.
In the present embodiment, the first and second recognition results are distinguished by the area in which the user's operation instruction is received. That is, by default the user's first operation instruction falls in the first sub-input area (the left area) and the second operation instruction falls in the second sub-input area (the right area), which makes two-handed operation convenient.
In an actual scenario, the first and second recognition results may instead be distinguished by the order of the operation instructions. For example, the first recognition result is a click operation performed first by the user in either sub-input area, and the second recognition result is a subsequent left-to-right sliding operation performed in either sub-input area.
In addition, the operation region and the operation order may be considered in combination. For example, it is recognized that the user first performs a click operation in the first sub-input area, and then performs a left-to-right sliding operation in the second sub-input area, thereby obtaining a set of recognition results. And otherwise, when the user is identified to firstly perform the sliding operation from left to right in the second sub-input area, and then perform the one-time clicking operation in the first sub-input area, so as to obtain another group of identification results.
In addition, with reference to fig. 3, the first sub-input area may also serve as a keyboard input area, such as a nine-grid keyboard or a numeric keyboard; the second sub-input area may serve as a handwriting input area, or as an operation instruction input area as described in the above embodiments.
In the above case, the corresponding input may be performed according to the operation instruction received by the sub input area. Under the condition that the two sub-input areas receive the operation instruction, the input can be carried out according to the sequence of receiving the operation instruction.
Through the scheme, the operation instructions input by both hands of the user can be recognized and confirmed simultaneously. Therefore, the operation instruction modes of the user are enriched, and the input experience of the user is improved.
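The left/right split of the input area can be sketched as follows; the screen width and the gesture encoding are illustrative assumptions, not values from the patent.

```python
# Sketch of sub-area recognition for a screen split into a left (first)
# and right (second) sub-input area. SCREEN_WIDTH is an assumed value.
SCREEN_WIDTH = 1080

def sub_area(touch_x):
    """Classify a touch by which half of the input area received it."""
    return "first" if touch_x < SCREEN_WIDTH // 2 else "second"

def combined_result(first_result, second_result):
    """The set of both sub-area results forms the final recognition result."""
    return (first_result, second_result)
```

A click in the left half paired with a slide in the right half thus yields a two-element recognition result such as `("click", "slide_right")`, which is then looked up in the matching relation table.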
As shown in fig. 4, in one embodiment, step S102 includes:
s1021: screening matching relations in a matching relation table according to the first recognition result in the set, and reserving the matching relation corresponding to the first recognition result;
s1022: and determining the input content matched with the second recognition result in the matching relation corresponding to the first recognition result according to the second recognition result in the set.
Two exemplary matching relation tables are shown in tables 1 and 2. Table 1 is a matching relation table between operation instructions and English input contents; table 2 is a matching relation table between operation instructions and Chinese input contents.
Taking table 1 as an example, there are 30 matching relationships in table 1. The 30 matching relationships are matching relationships between the input content and the set including the first recognition result and the second recognition result.
For example, when the first recognition result is "click", 30 matching relationships may be filtered, and only the top 5 pairs of matching relationships in table 1 are retained. Further, input content is determined in the reserved 5 pairs of matching relations according to the second recognition result. For example, the second recognition result is "click", the input content is determined to be the letter "a". For another example, if the second recognition result is "slide right", the input content is determined to be the letter "E".
The case where the first recognition result is "no contact" is also included in tables 1 and 2. In this case, it may be equivalent to the first recognition result being empty.
[Table 1 appears in the original as images and is not reproduced here: 30 matching relations between (first recognition result, second recognition result) pairs and English input contents.]
TABLE 1
Similarly, table 2 also contains 30 pairs of matching relations, each between an input content and a recognition-result set consisting of a first and a second recognition result. The specific matching process is the same as for table 1 and is not repeated here.
[Table 2 appears in the original as images and is not reproduced here: 30 matching relations between (first recognition result, second recognition result) pairs and Chinese input contents.]
TABLE 2
By the scheme, the time for determining the input content can be shortened by screening, and the determination efficiency is improved.
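The two-step lookup of steps S1021/S1022 can be sketched as a filter followed by a match. The entries below mirror the examples in the text ("click" then "click" gives "A"; "click" then "slide right" gives "E") but are otherwise illustrative.

```python
# Sketch of steps S1021/S1022: screen the matching relation table by the
# first recognition result, then match the second result within the
# retained relations. Entries are illustrative.
MATCHING_TABLE = {
    ("click", "click"): "A",
    ("click", "slide_right"): "E",
    ("no_contact", "slide_up"): "B",
}

def screen_by_first(first):
    """S1021: retain only the matching relations whose first result matches."""
    return {pair: c for pair, c in MATCHING_TABLE.items() if pair[0] == first}

def determine_content(first, second):
    """S1022: within the retained relations, match the second result."""
    return screen_by_first(first).get((first, second))
```

Screening first shrinks the candidate set (from 30 relations to 5 in the text's example) before the second match, which is exactly the efficiency gain the paragraph above claims.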
In one embodiment, the method further comprises:
and the matching relation corresponding to the first recognition result is displayed on a screen.
Since the matching relation table may contain many matching pairs, a user may not be able to remember all of them. For this situation, after the first recognition result is obtained from the user's operation instruction and the matching relations in the table have been screened, the retained matching relations corresponding to the first recognition result can be displayed to prompt the user.
Taking table 2 as an example, when the first recognition result obtained by recognizing the user's operation instruction is one rightward slide, the contents shown in table 3 can be displayed.
Please perform the following operation | Input content
Click | Please note
Slide up | Help me!
Slide left | Thank you
Slide down | Help me dial the telephone
Slide right | I am not feeling well
TABLE 3
In the present embodiment the user is prompted by displaying the matching relations retained after screening; alternatively, the user can be prompted by voice.
By means of the scheme, after the first recognition result is obtained, the subsequent executable operation of the user and the corresponding input content can be prompted in a display mode. Thereby reducing the difficulty of memory of the user.
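The prompting step can be sketched as listing the retained (next operation, input content) pairs after screening by the first recognition result. The entries below echo table 3 and are illustrative.

```python
# Sketch of the prompting step: after screening by the first recognition
# result, list the retained operations and their contents so they can be
# displayed or spoken to the user. Entries are illustrative.
MATCHING_TABLE = {
    ("slide_right", "click"): "Please note",
    ("slide_right", "slide_up"): "Help me!",
    ("slide_right", "slide_left"): "Thank you",
}

def prompt_rows(first):
    """Return (next operation, input content) pairs available after `first`."""
    return [(pair[1], content) for pair, content in MATCHING_TABLE.items()
            if pair[0] == first]
```

The same row list can feed either an on-screen table like table 3 or a text-to-speech prompt, matching the two prompting modes the embodiment mentions.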
In one embodiment, identifying the result includes:
at least one of a click operation instruction, a slide up operation instruction, a slide down operation instruction, a slide left operation instruction, and a slide right operation instruction.
For example, when the user touches the input area and stays for more than a predetermined time, the recognition result may be a click operation instruction. Or, when the user touches the input area with more than a predetermined force, the recognition result can be obtained as a click operation instruction.
For another example, when the user touches the input area and slides for a distance exceeding a predetermined length, it may be determined that the user is a slide-up operation instruction, a slide-down operation instruction, a slide-left operation instruction, or a slide-right operation instruction according to the direction of the slide.
Through the scheme, different operation instructions of the user can be identified.
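Recognizing the five operation instructions from a touch's start and end coordinates can be sketched as below; the distance threshold is an illustrative assumption, and screen y is taken to grow downward as is conventional.

```python
# Sketch of recognizing the five operation instructions (click plus four
# slide directions) from a touch's start/end points. MIN_SLIDE_DISTANCE
# is an assumed threshold; screen y grows downward.
MIN_SLIDE_DISTANCE = 50  # pixels

def recognize(x0, y0, x1, y1):
    dx, dy = x1 - x0, y1 - y0
    if max(abs(dx), abs(dy)) < MIN_SLIDE_DISTANCE:
        return "click"
    if abs(dx) >= abs(dy):
        return "slide_right" if dx > 0 else "slide_left"
    # y grows downward, so a negative dy means an upward slide
    return "slide_up" if dy < 0 else "slide_down"
```

A movement shorter than the threshold counts as a click, consistent with the text's treatment of short touches; longer movements are classified by their dominant axis and sign.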
As shown in fig. 5, a second embodiment of the present application further provides an input device, which includes the following components:
the recognition result confirming module 501 is configured to recognize an operation instruction received by the input area to obtain a recognition result;
an input content determining module 502, configured to determine, according to a matching relationship table established in advance, input content matching the recognition result; and
inputting input content matched with the recognition result in the input area;
the matching relation table comprises the matching relation between the recognition result and the input content.
In one embodiment, the input area includes at least two sub-input areas, and the recognition result confirming module 501 includes:
the first recognition result confirming submodule 5011 is configured to recognize the first operation instruction received by the first sub input area, and obtain a first recognition result;
the second recognition result confirmation submodule 5012 is configured to recognize the second operation instruction received by the second sub input area, and obtain a second recognition result;
and taking the set of the first recognition result and the second recognition result as the recognition result.
In one embodiment, the input content determination module 502 includes:
the matching relationship screening submodule 5021 is used for screening matching relationships in the matching relationship table according to the first recognition result in the set and reserving the matching relationship corresponding to the first recognition result;
the input content determining sub-module 5022 is configured to determine, according to the second recognition result in the set, the input content that matches the second recognition result in the matching relationship corresponding to the first recognition result.
In one embodiment, the input device further comprises:
and a matching relation display module 503, configured to display a matching relation corresponding to the first recognition result on a screen.
In one embodiment, identifying the result includes:
at least one of a click operation instruction, a slide up operation instruction, a slide down operation instruction, a slide left operation instruction, and a slide right operation instruction.
According to embodiments of the present application, an electronic device, a readable storage medium, and a computer program product are also provided.
As shown in fig. 6, the electronic device is a block diagram of an electronic device according to an input method of the embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 6, the electronic apparatus includes: one or more processors 610, memory 620, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). One processor 610 is illustrated in fig. 6.
Memory 620 is a non-transitory computer readable storage medium as provided herein. The memory stores instructions executable by at least one processor to cause the at least one processor to perform the input method provided herein. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to perform the input method provided herein.
The memory 620, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the input method in the embodiment of the present application (for example, the recognition result confirming module 501 and the input content determining module 502 shown in fig. 5). The processor 610 executes various functional applications of the server and data processing, i.e., implements the input method in the above-described method embodiments, by executing non-transitory software programs, instructions, and modules stored in the memory 620.
The memory 620 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device of the input method, and the like. Further, the memory 620 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 620 optionally includes memory located remotely from processor 610, which may be connected to the electronics of the input method via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the input method may further include: an input device 630 and an output device 640. The processor 610, the memory 620, the input device 630, and the output device 640 may be connected by a bus or other means, such as the bus connection in fig. 6.
The input device 630 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic apparatus of the input method, such as an input device of a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, or the like. The output device 640 may include a display device, an auxiliary lighting device (e.g., an LED), a haptic feedback device (e.g., a vibration motor), and the like. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic disks, optical disks, memory, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user may provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the drawbacks of difficult management and weak service scalability found in traditional physical hosts and VPS (virtual private server) services.
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present application can be achieved; no limitation is imposed herein.
The above-described embodiments are not intended to limit the scope of the present disclosure. Those skilled in the art should understand that various modifications, combinations, sub-combinations, and substitutions may be made according to design requirements and other factors. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present application shall fall within the protection scope of the present application.

Claims (10)

1. An input method, comprising:
identifying operation instructions received by at least two sub-input areas to obtain a recognition result; the operation instructions comprise at least one of a click operation instruction, an upward sliding operation instruction, a downward sliding operation instruction, a leftward sliding operation instruction, and a rightward sliding operation instruction; the operation instructions are input by touch typing;
determining input content matched with the recognition result according to a pre-established matching relation table; and
inputting input content matched with the recognition result in an input area;
the matching relation table comprises the matching relation between the recognition result and the input content.
2. The method according to claim 1, wherein the identifying the operation instructions received by the at least two sub-input areas to obtain a recognition result comprises:
identifying a first operation instruction received by the first sub-input area to obtain a first identification result;
identifying a second operation instruction received by the second sub-input area to obtain a second identification result;
and taking the set of the first recognition result and the second recognition result as the recognition result.
3. The method of claim 2, wherein determining input content matching the recognition result according to a pre-established matching relation table comprises:
according to the first recognition result in the set, screening matching relations in the matching relation table, and reserving the matching relation corresponding to the first recognition result;
and according to the second recognition result in the set, determining the input content matched with the second recognition result in the matching relation corresponding to the first recognition result.
4. The method of claim 3, further comprising:
and displaying the matching relation corresponding to the first recognition result on a screen.
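The two-step matching of claims 1 to 4 can be illustrated with a minimal sketch. The gesture names and table entries below are hypothetical, chosen only for illustration; the patent does not prescribe any concrete encoding of operation instructions or table contents.

```python
from typing import Optional, Tuple

# Hypothetical matching relation table: (first sub-area gesture,
# second sub-area gesture) -> input content. Entries are illustrative.
MATCHING_TABLE = {
    ("tap", "tap"): "a",
    ("tap", "swipe_up"): "b",
    ("swipe_left", "tap"): "c",
    ("swipe_left", "swipe_down"): "d",
}

def recognize(first_gesture: str, second_gesture: str) -> Tuple[str, str]:
    """Combine the two sub-area recognition results into one set (claim 2)."""
    return (first_gesture, second_gesture)

def match_input(recognition: Tuple[str, str]) -> Optional[str]:
    """Two-step matching (claim 3): first screen the table by the first
    recognition result, then resolve the second result among the retained
    relations."""
    first, second = recognition
    # Step 1: retain only the relations whose first component matches.
    retained = {k2: v for (k1, k2), v in MATCHING_TABLE.items() if k1 == first}
    # (Claim 4: the retained relations could be displayed on screen here.)
    # Step 2: resolve the second recognition result in the retained relations.
    return retained.get(second)

print(match_input(recognize("tap", "swipe_up")))    # prints b
print(match_input(recognize("swipe_left", "tap")))  # prints c
```

The intermediate `retained` mapping is what claim 4 would display to the user after the first gesture, narrowing the candidates before the second gesture commits an input.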
5. An input device, comprising:
the recognition result confirmation module is used for recognizing the operation instructions received by the at least two sub-input areas to obtain recognition results; the operation instruction comprises at least one of a click operation instruction, an upward sliding operation instruction, a downward sliding operation instruction, a leftward sliding operation instruction and a rightward sliding operation instruction; the operation instruction is input by touch typing;
the input content determining module is used for determining the input content matched with the identification result according to a matching relation table established in advance; and
inputting input content matched with the recognition result in an input area;
the matching relation table comprises the matching relation between the recognition result and the input content.
6. The apparatus of claim 5, wherein the input area comprises at least two sub-input areas;
the recognition result confirming module includes:
the first recognition result confirming submodule is used for recognizing the first operation instruction received by the first sub-input area to obtain a first recognition result;
the second recognition result confirming submodule is used for recognizing the second operation instruction received by the second sub-input area to obtain a second recognition result;
and taking the set of the first recognition result and the second recognition result as the recognition result.
7. The apparatus of claim 6, wherein the input content determination module comprises:
a matching relation screening submodule, configured to screen the matching relations in the matching relation table according to the first recognition result in the set, and retain the matching relation corresponding to the first recognition result;
and the input content determining submodule is used for determining the input content matched with the second recognition result in the matching relation corresponding to the first recognition result according to the second recognition result in the set.
8. The apparatus of claim 7, further comprising:
and the matching relation display module is used for displaying the matching relation corresponding to the first recognition result on a screen.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 4.
10. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 4.
CN202010605146.4A 2020-06-29 2020-06-29 Input method, device, equipment and storage medium Active CN111752439B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010605146.4A CN111752439B (en) 2020-06-29 2020-06-29 Input method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010605146.4A CN111752439B (en) 2020-06-29 2020-06-29 Input method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111752439A CN111752439A (en) 2020-10-09
CN111752439B true CN111752439B (en) 2022-06-24

Family

ID=72677866

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010605146.4A Active CN111752439B (en) 2020-06-29 2020-06-29 Input method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111752439B (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102262441B (en) * 2011-07-21 2018-05-04 中兴通讯股份有限公司 Input method and device
CN103019402A (en) * 2011-09-28 2013-04-03 索尼爱立信移动通讯有限公司 Chinese character input method, keyboard and electronic device comprising keyboard
CN103677311A (en) * 2014-01-02 2014-03-26 朱海威 Handwriting input device rapid input method convenient to change
CN105589572A (en) * 2015-12-10 2016-05-18 努比亚技术有限公司 Information input method and mobile terminal
CN107967103B (en) * 2017-12-01 2019-09-17 上海星佑网络科技有限公司 Method, apparatus and computer readable storage medium for information processing

Also Published As

Publication number Publication date
CN111752439A (en) 2020-10-09

Similar Documents

Publication Publication Date Title
US10248635B2 (en) Method for inserting characters in a character string and the corresponding digital service
US10416868B2 (en) Method and system for character insertion in a character string
JP2021131528A (en) User intention recognition method, device, electronic apparatus, computer readable storage media and computer program
CN112466280B (en) Voice interaction method and device, electronic equipment and readable storage medium
CN111241234B (en) Text classification method and device
CN112153206B (en) Contact person matching method and device, electronic equipment and storage medium
CN110532415B (en) Image search processing method, device, equipment and storage medium
CN111783998A (en) Illegal account recognition model training method and device and electronic equipment
US10248640B2 (en) Input-mode-based text deletion
CN101882025A (en) Hand input method and system
CN111708477B (en) Key identification method, device, equipment and storage medium
CN111752439B (en) Input method, device, equipment and storage medium
CN111310481B (en) Speech translation method, device, computer equipment and storage medium
CN112181582A (en) Method, apparatus, device and storage medium for device control
CN112162800A (en) Page display method and device, electronic equipment and computer readable storage medium
CN112016524A (en) Model training method, face recognition device, face recognition equipment and medium
CN112527110A (en) Non-contact interaction method and device, electronic equipment and medium
CN105589570A (en) Input error processing method and apparatus
CN111339314A (en) Method and device for generating triple-group data and electronic equipment
CN111966432B (en) Verification code processing method and device, electronic equipment and storage medium
CN111723318B (en) Page data processing method, device, equipment and storage medium
CN112329434B (en) Text information identification method, device, electronic equipment and storage medium
CN112652311B (en) Chinese and English mixed speech recognition method and device, electronic equipment and storage medium
CN111665956B (en) Candidate character string processing method and device, electronic equipment and storage medium
CN111209023B (en) Skill service updating method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant