CN111984129A - Input method, device, equipment and machine readable medium - Google Patents


Info

Publication number
CN111984129A
Authority
CN
China
Prior art keywords
input
information
voice
user
text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910426237.9A
Other languages
Chinese (zh)
Inventor
郭云云
耿梦娇
刘蓓
陈帅
崔娜娜
李臣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Banma Zhixing Network Hongkong Co Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201910426237.9A priority Critical patent/CN111984129A/en
Publication of CN111984129A publication Critical patent/CN111984129A/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/26Speech to text systems

Abstract

Embodiments of the present application provide an input method, an input device, a device, and a machine-readable medium. The method includes: in response to a user's invocation of the input method, entering a voice input state of the input method, and displaying a text input interface or an entrance to the text input interface; receiving input information from the user; if the input information is voice information, keeping the input method in the voice input state; or, if the input information is text input information entered through the text input interface, entering a text input state of the input method. The embodiments of the present application are suitable for scenarios where manual operation is inconvenient, and can improve the user's input efficiency, increase the utilization rate of the voice input mode, and improve the efficiency of switching between the voice input state and the text input state.

Description

Input method, device, equipment and machine readable medium
Technical Field
The present application relates to the field of input methods, and in particular, to an input method, an input device, an apparatus, and a machine-readable medium.
Background
An input method refers to an encoding method used to input characters into a computer or other device (such as a mobile phone, a tablet computer, and the like). Users of languages such as Chinese, English, Japanese, and Korean generally need to interact with a computer or other devices through an input method.
The current input process is generally as follows: in response to a user's click on an input control, a keyboard interface of the input method is displayed; the keyboard interface may include a plurality of keys through which the user enters the required characters in a keyboard input mode.
In practical applications, the keyboard input mode usually requires prolonged manual operation and is therefore unsuitable for scenarios where manual operation is inconvenient, such as in-vehicle scenarios and remote home-control scenarios. Taking the in-vehicle scenario as an example: on the one hand, the keyboard mode usually requires sustained operation, and the positions of in-vehicle devices such as the head unit or the rearview mirror are not suited to prolonged operation by the user, so the keyboard mode is not ergonomic and easily causes physical fatigue; on the other hand, using the keyboard mode while driving easily affects driving safety.
Disclosure of Invention
The technical problem to be solved by the embodiments of the present application is to provide an input method that is applicable to scenarios where manual operation is inconvenient, can improve the user's input efficiency, can increase the utilization rate of the voice input mode, and can improve the efficiency of switching between the voice input state and the text input state.
Correspondingly, the embodiments of the present application also provide an input device, a device, and a machine-readable medium, to ensure the implementation and application of the above method.
In order to solve the above problem, an embodiment of the present application discloses an input method, including:
responding to the calling operation of a user on an input method, entering a voice input state of the input method, and displaying a text input interface or an entrance of the text input interface;
receiving input information of a user;
if the input information is voice information, keeping the voice input state of the input method; or if the input information is text input information, entering a text input state of the input method; the text input information is input through the text input interface.
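As a non-authoritative illustration of the claimed flow (all names below are hypothetical and not part of the patent), the voice-first dispatch can be sketched as a small state machine in Python:

```python
from enum import Enum


class InputState(Enum):
    VOICE = "voice"
    TEXT = "text"


class InputMethodSketch:
    """Hypothetical sketch of the claimed flow: enter the voice input state
    on invocation, keep it for voice information, and switch to the text
    input state when text is entered via the text input interface."""

    def __init__(self):
        self.state = None

    def on_invoke(self):
        # In response to the user's invocation, enter the voice input state
        # while also exposing the text input interface (or its entrance).
        self.state = InputState.VOICE
        return {"state": self.state, "text_interface_shown": True}

    def on_input(self, info_kind):
        # info_kind is "voice" for voice information, "text" for text input
        # information entered through the text input interface.
        if info_kind == "voice":
            self.state = InputState.VOICE  # keep the voice input state
        elif info_kind == "text":
            self.state = InputState.TEXT   # enter the text input state
        return self.state
```

In this sketch no extra user action is needed to begin speaking after invocation, which is the claimed efficiency gain over keyboard-first designs.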
On the other hand, the embodiment of the application also discloses an input device, which comprises:
the calling response module is used for responding to the calling operation of a user on the input method, entering a voice input state of the input method and displaying a text input interface or an entrance of the text input interface;
the input receiving module is used for receiving input information of a user; and
the input response module is used for keeping the voice input state of the input method under the condition that the input information is voice information; or entering a text input state of the input method under the condition that the input information is text input information; the text input information is input through the text input interface.
In another aspect, an embodiment of the present application further discloses an apparatus, including:
one or more processors; and
one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the apparatus to perform one or more of the methods described above.
In yet another aspect, embodiments of the present application disclose one or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause an apparatus to perform one or more of the methods described above.
Compared with the prior art, the embodiment of the application has the following advantages:
after the input method is called, the voice input state of the input method is entered, so that a user can input characters quickly in a voice input mode; because the voice input mode can be suitable for scenes which are inconvenient to operate manually, such as vehicle-mounted scenes, home remote scenes and the like, the embodiment of the application can be suitable for scenes which are inconvenient to operate manually.
In addition, after the input method is invoked, the voice input state of the input method is entered, so that the user can quickly input through the voice input mode; this can improve the user's input efficiency and increase the utilization rate of the voice input mode.
In addition, the embodiments of the present application preferentially enter the voice input state while providing both a voice input mode and a text input mode for the user to choose from. Through the embodiments of the present application, the user can quickly switch between the voice input state and the text input state and quickly use either input mode, so that the advantages of both modes can be exploited, the efficiency of switching between the two states can be improved, and the user's input efficiency can be improved.
Drawings
FIG. 1 is an illustration of an application environment for an input method of the present application;
FIG. 2 is a flowchart of the steps of a second embodiment of an input method of the present application;
FIG. 3 is a flowchart of the steps of a third embodiment of an input method of the present application;
FIG. 4 is a flowchart of the steps of a fourth embodiment of an input method of the present application;
FIG. 5 is a flowchart of the steps of a fifth embodiment of an input method of the present application;
FIG. 6 is a flowchart of the steps of a sixth embodiment of an input method of the present application;
FIG. 7 is a block diagram of an embodiment of an input device of the present application; and
FIG. 8 is a schematic structural diagram of an apparatus provided in an embodiment of the present application.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, the present application is described in further detail with reference to the accompanying drawings and the detailed description.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments that can be derived from the embodiments given herein by a person of ordinary skill in the art are intended to be within the scope of the present disclosure.
While the concepts of the present application are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and are described herein in detail. It should be understood, however, that this description is not intended to limit the application to the particular forms disclosed; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the application.
Reference in the specification to "one embodiment," "an embodiment," "a particular embodiment," or the like means that the embodiment described may include a particular feature, structure, or characteristic, but not every embodiment necessarily includes that particular feature, structure, or characteristic. Moreover, such phrases do not necessarily refer to the same embodiment. Further, where a particular feature, structure, or characteristic is described in connection with an embodiment, it is within the knowledge of one skilled in the art to effect such a feature, structure, or characteristic in connection with other embodiments, whether or not explicitly described. In addition, it should be understood that items in a list of the form "at least one of A, B, and C" may mean: (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Likewise, a list of the form "at least one of A, B, or C" may mean: (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).
In some cases, the disclosed embodiments may be implemented as hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be executed by one or more processors. A machine-readable storage medium may be implemented as a storage device, mechanism, or other physical structure (e.g., a volatile or non-volatile memory, a media disc, or another physical storage device) for storing or transmitting information in a form readable by a machine.
In the drawings, some structural or methodical features may be shown in a particular arrangement and/or ordering. However, such a specific arrangement and/or ordering may not be required. Rather, in some embodiments, such features may be arranged in ways and/or orders different from those shown in the figures. Moreover, the inclusion of a structural or methodical feature in a particular figure does not imply that the feature is required in all embodiments; in some embodiments, the feature may not be included or may be combined with other features.
To address the technical problem that the keyboard input mode is unsuitable for scenarios where manual operation is inconvenient, the embodiments of the present application provide an input scheme, which may include: in response to a user's invocation of the input method, entering a voice input state of the input method.
In the embodiments of the present application, the input method is a hosted program that runs in an environment provided by a host program. The host program provides, within the computer environment, the software environment on which the hosted program depends, namely the host environment; the host program can also dynamically load a DLL (Dynamic Link Library) provided by the hosted program, so as to dynamically load external functions.
The calling operation of the embodiment of the application can be used for calling the input method, so that the input method provides services for the user in the host environment. The voice input state can refer to an input state supporting a voice input mode, and the voice input state can collect voice information input by a user, determine and display character candidate items corresponding to the voice information for the user to select.
Alternatively, a speech recognition technique may be used to determine the character candidates corresponding to the voice information. If the voice information is denoted S, a series of processing steps is applied to S to obtain a corresponding speech feature sequence O, denoted O = {O1, O2, …, Oi, …, OT}, where Oi is the i-th speech feature (i is a natural number) and T is the total number of speech features. The sentence corresponding to the voice information S can be regarded as a word string composed of many words, denoted W = {w1, w2, …, wn}, where n is a natural number. The process of speech recognition is to find, given the known speech feature sequence O, the most probable word string W′.
Specifically, speech recognition is a model-matching process. A speech model is first established according to the characteristics of human speech, and the templates required for speech recognition are established by extracting the required features through analysis of input voice information. Recognizing the user's voice information is then a process of comparing the features of that voice information with the templates and determining the best-matching template, thereby obtaining the speech recognition result. The specific speech recognition algorithm may be a training and recognition algorithm based on a statistical hidden Markov model, or another algorithm such as a training and recognition algorithm based on a neural network, a recognition algorithm based on dynamic time warping (DTW) matching, and the like.
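As one hedged illustration of the template-matching idea (not the patent's own implementation), the dynamic time warping variant mentioned above can be sketched as follows. The sketch compares one-dimensional feature sequences against per-word templates; real systems compare multi-dimensional acoustic features such as MFCC vectors:

```python
def dtw_distance(seq_a, seq_b):
    """Minimal dynamic time warping distance between two 1-D feature
    sequences, allowing sequences of different lengths to be aligned."""
    n, m = len(seq_a), len(seq_b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(seq_a[i - 1] - seq_b[j - 1])
            # Extend the cheapest of the three admissible warping paths.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]


def recognize(features, templates):
    """Return the candidate word whose template best matches the input
    feature sequence (the 'best-matching template' step described above)."""
    return min(templates, key=lambda word: dtw_distance(features, templates[word]))
```

`recognize` here is a hypothetical helper: given `templates = {"yes": [...], "no": [...]}` and an input feature sequence, it picks the word with the smallest warping distance, which is the essence of a DTW-matching recognizer.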
After the input method is called, the voice input state of the input method is entered, so that a user can input characters quickly in a voice input mode; because the voice input mode can be suitable for scenes which are inconvenient to operate manually, such as vehicle-mounted scenes, home remote scenes and the like, the embodiment of the application can be suitable for scenes which are inconvenient to operate manually.
In addition, after the input method is invoked, the voice input state of the input method is entered, so that the user can quickly input through the voice input mode; this can improve the user's input efficiency and increase the utilization rate of the voice input mode.
The data processing scheme provided by the embodiments of the present application can be applied to the application environment shown in FIG. 1. As shown in FIG. 1, the client 100 and the server 200 are located in a wired or wireless network, through which the client 100 and the server 200 exchange data.
Optionally, the client may run on a device; for example, the client may be an APP (application program) running on the device, such as an input method APP. The embodiments of the present application do not limit the specific APP corresponding to the client.
Optionally, the device may be provided with a built-in or external screen, and the screen is used for displaying information. For example, the displayed information may include: input controls, or character candidates, etc.
The device may have a built-in or external voice collection apparatus for collecting the voice information input by the user. The voice collection apparatus may include a microphone and the like.
The device may have a built-in or external electro-acoustic transducer for converting an electrical signal into an acoustic signal. The electro-acoustic transducer may include a speaker and the like.
The above devices may specifically include, but are not limited to: smart phones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, PCs (Personal Computers), set-top boxes, smart televisions, wearable devices, in-vehicle devices, smart home devices, and the like. Smart home devices may include smart speakers, smart door locks, smart access control, and the like; in-vehicle devices may include the head unit, the rearview mirror, and the like. It can be understood that the embodiments of the present application do not limit the specific device.
Method embodiment one
In the first embodiment of the input method of the present application, the method may specifically include the following steps:
and responding to the calling operation of the user on the input method, and entering a voice input state of the input method.
At least one step included in the method of the embodiment of the present application may be executed by the client, and of course, the embodiment of the present application does not impose a limitation on a specific execution subject of the step of the method.
The calling operation of the embodiment of the application can be used for calling the input method, so that the input method provides services for the user in the host environment.
In the embodiments of the present application, optionally, the invoking operation may specifically include a trigger operation by the user on an input control. Input controls may be provided by the host program for receiving user-entered information. Examples of input controls include input boxes, such as a search box.
A user's trigger operation on the input control indicates the user's need to input information into that control, so the input method can be invoked accordingly. Optionally, the trigger operation may be a click on the input control, or the like.
In the embodiments of the present application, a control refers to an encapsulation of data and methods. A control can have its own properties and methods: the properties are simple accessors of the control's data, and the methods are simple, visible functions of the control. Creating a control involves design, development, and debugging work, after which the control can be used.
It is understood that the triggering operation of the input control by the user is only an alternative embodiment of the invoking operation, and in fact, a person skilled in the art may determine the invoking operation according to the actual application requirement, for example, the invoking operation may also be: a voice command, an operation of a physical key, an operation of a virtual key, and the like.
After receiving the calling operation of the user on the input method, the embodiment of the application enters the voice input state of the input method, so that the user can input characters quickly in a voice input mode.
Text input may refer to input via a keyboard, a touch screen, or the like. It should be noted that the text input mode is widely used on smart devices such as mobile phones and tablet computers, which leads users to prefer inputting through the text input mode.
Currently, some input methods provide a voice entry in the keyboard interface, allowing the user to trigger the voice input state by triggering the voice entry. However, the text input mode is the primary input mode and the voice input mode is secondary, so users tend to overlook the voice entry in the keyboard interface, which keeps the usage rate of the voice input mode low. Moreover, requiring the user to trigger the voice input state via the voice entry increases the operation cost of triggering the voice input state and affects the user's input efficiency.
After the input method is called, the voice input state of the input method is entered, so that the user can input characters quickly in a voice input mode, and the method and the device can be suitable for scenes which are inconvenient to operate manually, such as vehicle-mounted scenes, home remote scenes and the like; and the input efficiency of the user can be improved, and the utilization rate of the voice input mode can be improved.
In the embodiments of the present application, optionally, entering the voice input state of the input method may specifically include:
displaying prompt information indicating that the input method is in the voice input state; or
displaying a voice input interface.
The prompt information can be used for prompting that the input method is in a voice input state and prompting a user to perform voice input. The voice input interface may be used to characterize an interface corresponding to the voice input, which may include: a voice capture icon, or a voice capture text, etc., it can be understood that the embodiment of the present application is not limited to a specific voice input interface.
In an optional embodiment of the present application, the method may further include:
in response to the user's invocation of the input method, displaying a text input entry or a text input interface;
a voice entry is used to trigger the voice input state; the text input entry is used to trigger display of the text input interface.
Alternatively, the voice entry may be implemented by a control. Optionally, the voice entry may correspond to a voice icon, which may be used to identify voice input. Alternatively, the voice input state may be entered in response to the user's trigger operation on the voice entry.
Alternatively, the text input entry may be implemented by a control. Optionally, the text input entry may correspond to a handwritten icon, and the handwritten icon may be used to identify text input.
In the embodiment of the application, the voice entry and the text entry (or the text input interface) can enable a user to switch between a voice input state and a text input state.
The text input state of the embodiments of the present application may be an input state supporting a text input mode. Text input states may include a keyboard input state, a handwriting input state, and the like. This application mainly takes the keyboard input state as an example; input methods corresponding to other text input states can be understood by analogy. The text input interface corresponding to the keyboard input state may include a keyboard interface, which may include a plurality of keys: alphabetic keys, numeric keys, symbol keys, function keys, and the like. Symbol keys may include punctuation keys, etc.; function keys may include a delete key, a search key, etc.
For example, after entering a voice input state of an input method in response to a call operation of the input method by a user, if a trigger operation of the user on a text input entry (or a text input interface) is received, a text input state may be entered.
For another example, in the case of a text input state, if a trigger operation of the user on the voice entry is received, the voice input state may be entered.
In the embodiments of the present application, optionally, a voice entry and/or a text entry may be presented in the area surrounding the input control. For example, the voice entry may be located to the left of the input control, the text entry to its right, and so on.
In an application example of the present application, assume the input control is a search box with a search icon displayed on its left. In response to the user's trigger operation on the search box, the voice input state of the input method may be entered, and a voice entry may be displayed overlapping the search icon. Optionally, a handwriting entry may also be displayed to the right of the search box.
Method embodiment two
Referring to fig. 2, a flowchart illustrating steps of a second embodiment of an input method according to the present application is shown, which may specifically include the following steps:
Step 201, responding to the calling operation of a user on an input method, and entering a voice input state of the input method;
step 202, responding to the calling operation of the user on the input method, and outputting first prompt information corresponding to the input environment information; the first prompt information is used for prompting input content.
After receiving the calling operation of the user to the input method, the embodiment of the application can also output the first prompt information corresponding to the input environment information, wherein the first prompt information is used for prompting the input content and can play a role in guiding the input of the user.
In the embodiments of the present application, optionally, outputting the first prompt information corresponding to the input environment information in step 202 may specifically include:
playing the first prompt information corresponding to the input environment information; and/or
displaying the first prompt information corresponding to the input environment information in the input control.
According to the embodiment of the application, the first prompt information corresponding to the input environment information can be displayed in the input control, so that a user can check the first prompt information. Or, the first prompt information can be played in a voice mode, and the problem that a user is inconvenient to view a screen in scenes such as a vehicle-mounted scene can be solved to a certain extent.
Optionally, the first prompt information may include information about the input content, such as the category of the input content. Categories of input content may include: address, road, music, radio station, and the like.
The embodiment of the application provides the first prompt information corresponding to the input environment information for the user, the first prompt information can restrict and guide the input of the user, the matching degree between the input content of the user and the input environment information is improved, and the accuracy of the input content can be further improved.
Optionally, the input environment information specifically refers to environment information where the user is located. In practical applications, the input environment information may specifically include: one or more of time environment, geographic environment, physical environment or application environment information. The physical environment may include: weather environment, humidity environment, etc.
Optionally, the input environment information may specifically include at least one of the following information:
application environment information; and/or
Interface environment information.
The application environment information may refer to information about the application the user is in, and may include the category of the application, the name of the application, and so on. For example, when the category of the application is music, the first prompt information may include "Please enter the name of a song". As another example, when the category of the application is radio station, the first prompt information may include "Please enter the name of a station", "Please enter the name of a host", or the like.
The interface environment information may be related to the interface content the user is viewing. For example, if the interface content includes several roads for navigation, the interface environment information may be related to "road", and the corresponding prompt information may include "Please select a road". As another example, if the interface content includes map data near the user's location, the interface environment information may be related to "address", and the corresponding prompt information may include "Please enter an address".
In an optional embodiment of the present application, the method may further include: determining the first prompt information corresponding to the input environment information according to a mapping relationship between input environment information and prompt information. The embodiments of the present application may store this mapping relationship so that the prompt information corresponding to the user's input environment information can be determined from it, thereby providing different first prompt information for different input environment information.
The mapping relation between the input environment information and the prompt information may be determined according to the historical input content of the user under the input environment information. Optionally, the historical input content of the user under the input environment information may be analyzed to obtain target historical input content that accords with the input rule of the input environment information, and the prompt information corresponding to the input environment information may be obtained according to the target historical input content. The process of analyzing the historical input content of the user under the input environment information may include: determining the appearance frequency of the historical input content of the user under the input environment information, and determining the target historical input content according to the appearance frequency. For example, the M historical input contents with the highest frequency of occurrence may be used as the target historical input content, where M may be a natural number. As another example, historical input content with a frequency of occurrence higher than a threshold may be used as the target historical input content.
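The frequency-based selection of target historical input content (top-M entries, or entries above a threshold) can be sketched as follows; the function and variable names are illustrative assumptions:

```python
from collections import Counter

def select_target_history(history_entries, m=3, threshold=None):
    """Pick the most frequent historical inputs for one input environment.

    history_entries: list of past input strings recorded under a single
    input environment (e.g. the "address" environment).  Either the
    top-M entries or all entries above a frequency threshold are kept,
    mirroring the two examples in the text.
    """
    counts = Counter(history_entries)
    if threshold is not None:
        # Variant 2: keep entries whose frequency exceeds the threshold.
        return [text for text, n in counts.items() if n > threshold]
    # Variant 1: keep the M entries with the highest frequency.
    return [text for text, _ in counts.most_common(m)]

history = ["road A", "road B", "road A", "road C", "road A", "road B"]
print(select_target_history(history, m=2))  # the two most frequent entries
```

The selected entries could then seed the prompt information for that environment, e.g. by prompting with the most frequent category of past input.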
Of course, determining the mapping relation according to the historical input content of the user under the input environment information is only an alternative embodiment; in practice, the mapping relation may also be determined by a person skilled in the art or by the user. For example, a collection interface may be provided for the user, so that input environment information and the corresponding prompt information can be collected through the collection interface.
In summary, the input method of the embodiment of the application enters the voice input state of the input method after receiving the call operation of the user on the input method, and outputs the first prompt information corresponding to the input environment information. The first prompt information can quickly guide the input of the user, so that the user can quickly input the required information through voice, and therefore the input efficiency of the user can be improved.
Method embodiment three
Referring to fig. 3, a flowchart illustrating steps of a third embodiment of an input method according to the present application is shown, which may specifically include the following steps:
step 301, responding to the calling operation of a user on an input method, and entering a voice input state of the input method;
step 302, receiving voice information input by a user;
step 303, outputting response information for the voice information.
In this embodiment of the application, entering the voice input state of the input method may specifically include: starting the software and hardware corresponding to the voice input state. For example, the hardware corresponding to the voice input state may include: a voice acquisition device, and the like. As another example, the software corresponding to the voice input state may include: a voice processing module, and the like; the voice processing module can be used to execute steps 302-303 to implement voice processing functions.
In step 302, voice information input by a user may be collected by a voice collecting device.
In step 303, a speech recognition technique may be adopted to determine a speech recognition result corresponding to the speech information, and determine response information for the speech information according to the speech recognition result.
The embodiment of the application can provide the following output modes for outputting the response information for the voice information:
Output mode 1: if the voice information accords with the input condition, outputting data corresponding to the voice information; or
Output mode 2: if the voice information does not accord with the input condition, outputting the input environment information and second prompt information corresponding to the voice information; the second prompt information is used for prompting the input content.
The input condition may be used to characterize an input requirement or an input rule. If the voice information accords with the input condition, it can be considered that the voice information meets the input requirement, so the character candidates corresponding to the voice information can be output.
According to an embodiment, the data corresponding to the voice information may specifically include: character candidates, which may be obtained according to a lexicon corresponding to the input environment information.
The embodiment of the application can display at least one character candidate corresponding to the voice information for the user to select. Optionally, the at least one character candidate corresponding to the voice information may be presented in an area surrounding the input control. Optionally, in response to a selection operation of the user on the character candidates, a target character candidate corresponding to the selection operation may be output to the input control.
In an alternative embodiment of the present application, the input condition may include: information of the input content. The voice information according with the input condition may include: the information of the voice recognition result corresponding to the voice information matching the information of the input content, and the like. The information of the input content may include: the category of the input content, and the like; the category may include: address, road, music, radio station, radio host, and the like. It is understood that a person skilled in the art can determine the input condition according to actual application requirements, and the embodiment of the present application does not limit the specific input condition.
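A minimal sketch of such a category-based input condition check follows; the per-category lexicon-membership test is a simplifying assumption standing in for a real matcher, and all names are illustrative:

```python
def meets_input_condition(recognition_result, expected_category, category_lexicons):
    """Return True if the speech recognition result matches the expected
    category of input content (output mode 1); False would trigger the
    second prompt information (output mode 2).  Membership in a
    per-category lexicon stands in for a real semantic classifier.
    """
    return recognition_result in category_lexicons.get(expected_category, set())

lexicons = {
    "address": {"address A", "address B"},
    "music": {"song X", "singer Y"},
}
print(meets_input_condition("address A", "address", lexicons))  # output mode 1
print(meets_input_condition("song X", "address", lexicons))     # output mode 2
```

In a production system the matcher would classify arbitrary recognition results rather than test set membership, but the decision between the two output modes has the same shape.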
For example, if the speech recognition result corresponding to the speech information A is "address A", and the input condition is related to "address", the character candidates corresponding to the speech information A may include: the name of the POI (Point of Interest) corresponding to address A, provided for the user to select. In a geographic information system, a POI may be a house, a shop, a mailbox, a bus station, and the like.
In this embodiment of the application, optionally, the character candidate may be obtained according to a lexicon corresponding to the input environment information.
According to the embodiment of the application, the corresponding word bank can be established and stored aiming at the input environment information. The character candidate corresponding to the speech information may be obtained from the lexicon corresponding to the input environment information.
Alternatively, the input environment information may correspond to environment keywords, and different environment keywords may correspond to different word banks. The environment keywords may include: address, road, music, radio station, radio head box, etc.
According to another embodiment, the data corresponding to the voice information may specifically include: search results, which may be obtained according to a database corresponding to the input environment information.
According to the embodiment of the application, the corresponding database can be established and stored aiming at the input environment information. The search result corresponding to the voice information may be obtained from the database corresponding to the input environment information. For example, a database corresponding to "address" may include: POI database, etc.
According to the embodiment of the application, the search result corresponding to the voice information can be provided to the user even when the user does not trigger a search. Generally, a user needs to trigger a search of the input content in an input control by triggering a search control corresponding to the input control. The embodiment of the application can provide the search result corresponding to the voice information to the user in advance, without the user triggering the search, so that the search cost of the user can be saved.
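The pre-emptive search described above can be sketched as a direct lookup against the environment's database; the in-memory POI table here is a stand-in for a real geographic information system, and all names are assumptions:

```python
# Hypothetical POI database keyed by recognized text; in a real system
# this would be a query against a geographic information system.
POI_DATABASE = {
    "text B": ["house near text B", "shop near text B", "bus station near text B"],
}

def preemptive_search(recognition_result, database):
    """Return search results for the recognized text without requiring
    the user to trigger the search control, as described above."""
    return database.get(recognition_result, [])

print(preemptive_search("text B", POI_DATABASE))
```

The key point is that the lookup is driven directly by the speech recognition result, so results can be shown before any search control is triggered.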
In an application example of the present application, assume that the speech recognition result corresponding to the speech information B is "text B". If the input environment information corresponding to the speech information B includes "address", the character candidates corresponding to the speech information B may include: POIs corresponding to "text B"; or, if the input environment information corresponding to the speech information B includes "music", the character candidates corresponding to the speech information B may include: songs or singers corresponding to "text B". The character candidates corresponding to the speech information B may be obtained according to the lexicon corresponding to the input environment information.
In output mode 2, if the voice information does not accord with the input condition, this indicates that the voice information does not meet the input requirement, so the input environment information and the second prompt information corresponding to the voice information can be output; the second prompt information is used for prompting the input content so that the user can input voice information according with the input condition again.
Optionally, before receiving the voice information input by the user, the method may further include: and responding to the calling operation of the user on the input method, and outputting first prompt information corresponding to the input environment information, wherein the first prompt information is used for prompting the input content. In this case, if the voice information does not meet the input condition, the input environment information and the second prompt information corresponding to the voice information may be output again. The embodiment of the application can provide prompt information for the user for many times so as to improve the accuracy of the input content of the user.
In an application example of the application, the first prompt information may be "please input an address". Assuming that the voice recognition result corresponding to the user's voice information does not belong to the "address" category, the second prompt information may be output, such as "please input an address", "the information you input is wrong, please input an address", or "the information you input does not meet the requirement, please input an address". The second prompt information can indicate the problem with the input voice information and guide the user to input the correct content, thereby improving the accuracy of the user's input content.
It should be noted that output mode 1 and output mode 2 are only optional embodiments of the output mode for outputting the response information for the voice information; in practice, a person skilled in the art may adopt other output modes according to actual application requirements. For example, in another output mode, data corresponding to the voice information may be output directly, that is, regardless of whether the voice information accords with the input condition. In this case, the character candidates may be obtained according to the speech recognition result: for example, the speech recognition result may be used directly as a character candidate, or the speech recognition result may be corrected by using a language model and the corrected result used as a character candidate.
In the embodiment of the present application, the response information for the voice information may be determined by a server or by a client; the embodiment of the present application does not limit the specific determination manner of the response information.
In summary, according to the input method of the embodiment of the application, after responding to the call operation and entering the voice input state of the input method, the voice information input by the user can be collected through the voice collecting device, and the response information for the voice information can be determined. Through the call operation, the user can quickly perform voice input, so the input efficiency of the user can be improved.
Method embodiment four
Referring to fig. 4, a flowchart illustrating a fourth step of an input method embodiment of the present application is shown, which may specifically include the following steps:
step 401, responding to the calling operation of a user on an input method, and entering a voice input state of the input method;
step 402, responding to the calling operation of the user to the input method, and outputting first prompt information corresponding to the input environment information; the first prompt information is used for prompting input content;
step 403, receiving voice information input by a user;
step 404, judging whether the voice information meets the input condition, if so, executing step 405, otherwise, executing step 406;
Step 405, outputting data corresponding to the voice information; or
Step 406, outputting input environment information and second prompt information corresponding to the voice information; the second prompt message is used to prompt the input content, and the process returns to step 403.
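The loop of steps 403-406 can be sketched as follows; the callbacks and names are placeholders for the modules described in the embodiment, not a definitive implementation:

```python
def voice_input_flow(second_prompt, recognize, meets_condition, fetch_data,
                     max_rounds=3):
    """Sketch of steps 403-406: prompt repeatedly until the user's voice
    input meets the input condition, then output the corresponding data.
    All callbacks are placeholders for the modules in the embodiment."""
    for _ in range(max_rounds):
        text = recognize()            # step 403: receive voice information
        if meets_condition(text):     # step 404: check the input condition
            return fetch_data(text)   # step 405: output corresponding data
        print(second_prompt)          # step 406: prompt, then return to 403
    return None

utterances = iter(["song X", "address A"])
result = voice_input_flow(
    "please input an address",
    recognize=lambda: next(utterances),
    meets_condition=lambda t: t.startswith("address"),
    fetch_data=lambda t: "POI for " + t,
)
print(result)  # the second utterance meets the condition
```

The `max_rounds` cap is an assumption added so the sketch terminates; the embodiment itself simply returns to step 403 after each second prompt.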
According to the embodiment of the application, under the condition that the voice information input by the user accords with the input condition, the data corresponding to the voice information can be output; in the case that the voice information input by the user does not meet the input condition, a second prompt message may be output to the user to guide and assist the user in inputting the voice information meeting the input condition.
Under the condition that the voice information does not accord with the input condition, the embodiment of the application can continuously provide the second prompt information for the user, so that the embodiment of the application can improve the intelligence of the voice input.
Method embodiment five
Referring to fig. 5, a flowchart illustrating steps of a fifth embodiment of an input method according to the present application is shown, which may specifically include the following steps:
step 501, responding to the calling operation of a user on an input method, entering a voice input state of the input method, and displaying a text input interface or an entrance of the text input interface;
Step 502, receiving input information of a user;
step 503, if the input information is voice information, maintaining the voice input state of the input method; or
Step 504, entering a text input state of the input method if the input information is text input information; the text input information may be entered via the text input interface.
After receiving the call operation of the user on the input method, the embodiment of the application can enter the voice input state of the input method and display a voice entry and a text input interface. The voice entry is used for triggering the voice input state, and the text input interface is used for triggering the text input state. The displayed voice entry and text input interface allow the user to quickly switch between the voice input state and the text input state.
In the embodiment of the application, after entering the voice input state of the input method and displaying the voice entry and the text input interface, the supported input modes may include: a voice input mode and a text input mode. The implementation process of the voice input mode may include: inputting voice information. The implementation process of the text input mode may include: inputting text input information by triggering interface elements in the text input interface. Therefore, the embodiment of the application preferentially enters the voice input state, and can provide the voice input mode and the text input mode for the user to select and use. Through the embodiment of the application, the user can quickly switch between the voice input state and the text input state and quickly use either input mode, so the advantages of both the voice input mode and the text input mode can be brought into play, and the input efficiency of the user can be improved.
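A minimal dispatch for steps 503-504 might look like this; the dict-based event shape is an assumption of the sketch:

```python
def handle_input(input_info, state):
    """Steps 503-504: voice information keeps the voice input state;
    text input information switches to the text input state."""
    if input_info["type"] == "voice":
        state["mode"] = "voice"   # step 503: maintain the voice input state
    elif input_info["type"] == "text":
        state["mode"] = "text"    # step 504: enter the text input state
    return state["mode"]

state = {"mode": "voice"}  # step 501 enters the voice input state first
print(handle_input({"type": "text", "payload": "zifuchuan"}, state))
print(handle_input({"type": "voice", "payload": b"\x00"}, state))
```

The state starts as "voice" because the embodiment preferentially enters the voice input state; each incoming event then determines whether that state is kept or switched.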
After the step 501 to the step 502 are executed, the step 503 or the step 504 may be determined to be executed according to the type of the information input in the step 502.
If the type of the input information is voice information, step 503 may be executed, that is, the voice input state of the input method may be maintained. Optionally, after step 503, the method may further include: and outputting response information aiming at the voice information.
The outputting the response information for the voice information may specifically include:
if the voice information accords with the input condition, outputting data corresponding to the voice information; or
If the voice information does not accord with the input condition, outputting input environment information and second prompt information corresponding to the voice information; the second prompt information is used for prompting the input content.
Optionally, the data corresponding to the voice information may specifically include: character candidates, which may be obtained according to the lexicon corresponding to the input environment information.
Optionally, the data corresponding to the voice information may specifically include: search results, which may be obtained according to the database corresponding to the input environment information.
If the type of the input information is text input information, step 504 may be executed, that is, the text input state of the input method may be entered. Optionally, after step 504, the method may further include: processing the text input information and outputting character candidates corresponding to the text input information.
The text input interface may include: a keyboard interface or a handwriting interface, etc.
Taking a keyboard interface as an example, the text input information corresponding to the text input interface may specifically include: keys in the text input interface; optionally, the text input information may be a keystroke string or the like. Taking the keyboard interface being a full keyboard (QWERTY keyboard) as an example, the keystroke string may include: "zifuchuan" and the like.
Taking a handwriting interface as an example, the text input information may be trajectory data input by the user. In this case, the trajectory data input by the user may be processed to obtain character candidates corresponding to the trajectory data.
In an optional embodiment of the present application, the method may further include: after entering the text input state, entering the voice input state if a switching condition is met.
The switching conditions provided by the embodiment of the application may include:
switching condition 1: the text input information is not received within a preset time length; or
switching condition 2: a trigger operation for the voice entry is received; or
switching condition 3: a voice wake-up instruction of the user is received.
The switching condition 1 can realize automatic switching from the text input state to the voice input state, and therefore can improve the switching efficiency of the input state.
The preset time length may be determined by a person skilled in the art or by the user, or may be determined according to the input interval information of the user. The input interval information may characterize the pause time between two adjacent input processes of the user. One input process starts from the initial manual operation information and ends when the user screens the character candidates, that is, when the user selects a character candidate so that the selected character candidate is output to the screen, specifically to an input control on the screen.
Optionally, the preset time length may be greater than or equal to the input interval information. When no text input information has been received within the preset time length, the user can be considered to have no text input requirement, and therefore the voice input state can be entered automatically.
Switching condition 2 and switching condition 3 can realize manual switching from the text input state to the voice input state. The voice wake-up instruction of switching condition 3 may be a spoken instruction, such as "voice input" or "switch to voice" uttered in voice form, and may be applicable to scenes where manual operation is inconvenient for the user.
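The three switching conditions can be sketched as a tiny state holder; the class, attribute names, wake phrases, and the 5-second default are illustrative assumptions:

```python
import time

class InputMethodState:
    """Tiny state holder sketching switching conditions 1-3.

    timeout_s corresponds to the preset time length; all names and the
    wake phrases are illustrative, not taken from the embodiment."""

    def __init__(self, timeout_s=5.0):
        self.state = "text"
        self.timeout_s = timeout_s
        self.last_text_input = time.monotonic()

    def on_text_input(self):
        self.last_text_input = time.monotonic()

    def check_auto_switch(self, now=None):
        # Switching condition 1: no text input within the preset duration.
        now = time.monotonic() if now is None else now
        if self.state == "text" and now - self.last_text_input >= self.timeout_s:
            self.state = "voice"
        return self.state

    def on_voice_entry_tap(self):
        # Switching condition 2: trigger operation on the voice entry.
        self.state = "voice"

    def on_wake_word(self, utterance):
        # Switching condition 3: a spoken wake-up instruction.
        if utterance in ("voice input", "switch to voice"):
            self.state = "voice"

s = InputMethodState(timeout_s=5.0)
print(s.check_auto_switch(now=s.last_text_input + 6.0))  # automatic switch
```

Passing an explicit `now` makes the timeout check testable; a real input method would instead poll or schedule the check against a monotonic clock.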
Optionally, the method may further include: responding to the calling operation of a user on the input method, and outputting first prompt information corresponding to the input environment information; the first prompt information is used for prompting input content.
Optionally, the input environment information may specifically include at least one of the following information:
application environment information; and/or
interface environment information.
Optionally, the method may further include: and determining first prompt information corresponding to the input environment information according to the mapping relation between the input environment information and the prompt information.
Optionally, the outputting the first prompt information corresponding to the input environment information may specifically include:
playing first prompt information corresponding to the input environment information; and/or
displaying first prompt information corresponding to the input environment information in the input control.
Optionally, the invoking operation may specifically include: a trigger operation of the user on the input control.
In summary, the input method of the embodiment of the application preferentially enters the voice input state after receiving the call operation, and can provide a voice input mode and a text input mode for the user to select and use. Through the embodiment of the application, the user can quickly switch between the voice input state and the text input state, and can quickly use either the voice input mode or the text input mode; therefore, the advantages of both modes can be brought into play, the switching efficiency between the voice input state and the text input state can be improved, and the input efficiency of the user can be improved.
Method embodiment six
Referring to fig. 6, a flowchart illustrating steps of a sixth embodiment of an input method according to the present application is shown, which may specifically include the following steps:
step 601, responding to the calling operation of a user on an input method, entering a voice input state of the input method, and displaying a text input interface or an entrance of the text input interface;
step 602, responding to the calling operation of the user on the input method, and outputting first prompt information corresponding to the input environment information; the first prompt information is used for prompting input content;
Step 603, judging whether input information of a user is received, if not, executing step 604, and if so, executing step 605;
step 604, outputting a first prompt message corresponding to the input environment information;
After step 604 is performed, execution may return to step 603.
Step 605, judging whether the input information of the user is voice information, if so, executing step 606, otherwise, executing step 607;
step 606, keeping the voice input state of the input method, and returning to execute step 605;
step 607, entering a text input state of the input method;
the text input information may be entered via the text input interface.
Step 608, under the condition that the search triggering operation of the user is not received, judging whether the pause duration of the text input operation exceeds a preset duration, if so, executing step 609, otherwise, executing step 610;
step 609, entering a voice input state of the input method;
and step 610, keeping the text input state of the input method.
In step 607, the input information not being voice information may include: the input information being text input information. The text input information may correspond to the displayed text input interface; for example, the text input information may be key click information in a keyboard interface. In this case, the embodiment of the application may also display the text input interface when entering the voice input state of the input method, so that the user can quickly input text input information.
In step 608, the search trigger operation may be used to trigger a search for the input content in the input control. When the search trigger operation of the user has not been received, it indicates that the user still has an input demand, and therefore the input state can be switched.
After step 609 is performed, step 603 may be performed. After performing step 610, step 608 may be performed.
In summary, the input method of the embodiment of the application preferentially enters the voice input state after receiving the call operation of the user on the input method and before receiving the input information of the user, and can output the first prompt information corresponding to the input environment information to guide and help the user to input accurate input content.
In addition, the embodiment of the application can provide a voice input mode and a text input mode for the user to select and use. Through the embodiment of the application, the user can quickly switch between the voice input state and the text input state and can quickly use the voice input mode and the text input mode, so that the advantages of the voice input mode and the text input mode can be brought into play, and the input efficiency of the user can be improved.
In addition, the embodiment of the application realizes automatic switching from the text input state to the voice input state when the pause duration of the text input operation exceeds the preset duration, so the switching efficiency of the input state can be improved.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combinations of acts, but those skilled in the art will recognize that the embodiments are not limited by the order of the acts described, as some steps may occur in other orders or concurrently. Further, those skilled in the art will also appreciate that the embodiments described in the specification are preferred embodiments, and that no particular act described is necessarily required by the embodiments of the application.
The embodiment of the application also provides an input device.
Referring to fig. 7, a block diagram of an embodiment of an input device according to the present application is shown, which may specifically include the following modules:
the calling response module 701 is used for responding to the calling operation of a user on the input method, entering a voice input state of the input method and displaying a text input interface or an entrance of the text input interface;
an input receiving module 702, configured to receive input information of a user; and
an input response module 703, configured to maintain a voice input state of the input method when the input information is voice information; or entering a text input state of the input method under the condition that the input information is text input information; the text input information is input through the text input interface.
Optionally, the apparatus may further include:
the first prompt information output module is used for responding to the calling operation of a user on the input method and outputting first prompt information corresponding to the input environment information; the first prompt information is used for prompting input content.
Optionally, the input environment information may include at least one of the following information:
application environment information; and/or
interface environment information.
Optionally, the apparatus may further include:
the first prompt information determining module, configured to determine first prompt information corresponding to the input environment information according to the mapping relation between input environment information and prompt information.
Optionally, the first prompt information output module may include:
the first prompt information playing module, configured to play first prompt information corresponding to the input environment information; and/or
the first prompt information display module, configured to display first prompt information corresponding to the input environment information in the input control.
Optionally, the invoking operation may include:
a trigger operation of the user on the input control.
Optionally, the apparatus may further include:
the response output module, configured to output response information for the voice information when the input information is voice information.
Optionally, the response output module may include:
the data output module, configured to output data corresponding to the voice information if the voice information accords with the input condition; or
the second prompt information output module, configured to output the input environment information and second prompt information corresponding to the voice information if the voice information does not accord with the input condition; the second prompt information is used for prompting the input content.
Optionally, the data corresponding to the voice information may include: character candidates, which may be obtained according to a lexicon corresponding to the input environment information.
Optionally, the data corresponding to the voice information may include: search results, which may be obtained according to a database corresponding to the input environment information.
Optionally, the apparatus may further include:
the switching module, configured to enter the voice input state if a switching condition is met while in the text input state.
Optionally, the handover condition may include:
the text input information is not received within a preset time length; or
a trigger operation for the voice entry is received; or
a voice wake-up instruction of the user is received.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Embodiments of the application can be implemented as a system or apparatus employing any suitable hardware and/or software for the desired configuration. Fig. 8 schematically illustrates an exemplary device 1300 that can be used to implement various embodiments described herein.
For one embodiment, fig. 8 illustrates an exemplary apparatus 1300, which apparatus 1300 may comprise: one or more processors 1302, a system control module (chipset) 1304 coupled to at least one of the processors 1302, system memory 1306 coupled to the system control module 1304, non-volatile memory (NVM)/storage 1308 coupled to the system control module 1304, one or more input/output devices 1310 coupled to the system control module 1304, and a network interface 1312 coupled to the system control module 1304. The system memory 1306 may include instructions 1362, the instructions 1362 being executable by the one or more processors 1302.
Processor 1302 may include one or more single-core or multi-core processors, and processor 1302 may include any combination of general-purpose processors or special-purpose processors (e.g., graphics processors, application processors, baseband processors, etc.). In some embodiments, the device 1300 can be a server, a target device, a wireless device, etc., as described in embodiments herein.
In some embodiments, device 1300 may include one or more machine-readable media having instructions thereon (e.g., system memory 1306 or NVM/storage 1308) and one or more processors 1302 configured, in combination with the one or more machine-readable media, to execute the instructions so as to implement the modules included in the aforementioned apparatus and thereby perform the actions described in the embodiments of the present application.
System control module 1304 for one embodiment may include any suitable interface controller to provide any suitable interface to at least one of processors 1302 and/or any suitable device or component in communication with system control module 1304.
System control module 1304 for one embodiment may include one or more memory controllers to provide an interface to system memory 1306. The memory controller may be a hardware module, a software module, and/or a firmware module.
System memory 1306 for one embodiment may be used to load and store data and/or instructions 1362. For one embodiment, system memory 1306 may include any suitable volatile memory, such as suitable dynamic random access memory (DRAM). In some embodiments, system memory 1306 may include double data rate fourth-generation synchronous dynamic random access memory (DDR4 SDRAM).
System control module 1304 for one embodiment may include one or more input/output controllers to provide an interface to NVM/storage 1308 and input/output device(s) 1310.
NVM/storage 1308 for one embodiment may be used to store data and/or instructions 1382. NVM/storage 1308 may include any suitable non-volatile memory (e.g., flash memory, etc.) and/or may include any suitable non-volatile storage device(s), e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives, etc.
The NVM/storage 1308 may include storage resources that are physically part of the device on which the apparatus 1300 is installed, or storage resources that are accessible by the device without necessarily being part of it. For example, the NVM/storage 1308 may be accessed over a network via the network interface 1312 and/or through the input/output devices 1310.
Input/output device(s) 1310 for one embodiment may provide an interface for device 1300 to communicate with any other suitable device, and input/output devices 1310 may include communication components, audio components, sensor components, and so forth.
Network interface 1312 of one embodiment may provide an interface for device 1300 to communicate with one or more networks and/or with any other suitable apparatus. Device 1300 may communicate wirelessly with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols, for example to access a wireless network based on a communication standard such as WiFi, 2G, or 3G, or a combination thereof.
For one embodiment, at least one of the processors 1302 may be packaged together with logic for one or more controllers (e.g., memory controllers) of the system control module 1304. For one embodiment, at least one of the processors 1302 may be packaged together with logic for one or more controllers of the system control module 1304 to form a System in Package (SiP). For one embodiment, at least one of the processors 1302 may be integrated on the same die with logic for one or more controllers of the system control module 1304. For one embodiment, at least one of the processors 1302 may be integrated on the same die with logic for one or more controllers of the system control module 1304 to form a system on a chip (SoC).
In various embodiments, apparatus 1300 may include, but is not limited to: a computing device such as a desktop computing device or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.). In various embodiments, device 1300 may have more or fewer components and/or different architectures. For example, in some embodiments, device 1300 may include one or more cameras, a keyboard, a Liquid Crystal Display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an Application Specific Integrated Circuit (ASIC), and speakers.
If the display includes a touch panel, the display screen may be implemented as a touch screen display to receive an input signal from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. A touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
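As a rough illustration of how a sensor stack might distinguish the events mentioned above, the following sketch classifies an event from its duration and movement. The thresholds and names are invented, and real touch frameworks expose far richer data (pressure, multi-touch, velocity).

```python
def classify_touch(duration_s: float, moved: bool,
                   long_press_s: float = 0.5) -> str:
    """Classify a touch-panel event from duration and movement.
    The 0.5 s long-press threshold is an arbitrary example value."""
    if moved:
        return "slide"       # finger travelled across the panel
    if duration_s >= long_press_s:
        return "long-press"  # held in place beyond the threshold
    return "tap"
```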
The present application also provides a non-transitory readable storage medium having one or more modules (programs) stored therein; when the one or more modules are applied to an apparatus, they may cause the apparatus to execute the instructions of the methods in the present application.
Provided in one example is an apparatus comprising: one or more processors; and one or more machine-readable media having instructions stored thereon which, when executed by the one or more processors, cause the apparatus to perform a method as in the embodiments of the present application, which may include the method shown in fig. 2, fig. 3, fig. 4, fig. 5, or fig. 6.
Also provided in one example are one or more machine-readable media having instructions stored thereon which, when executed by one or more processors, cause an apparatus to perform a method as in the embodiments of the present application, which may include the method shown in fig. 2, fig. 3, fig. 4, fig. 5, or fig. 6.
The specific manner in which each module of the apparatus in the above embodiments performs its operations has been described in detail in the embodiments related to the method and will not be elaborated here; for relevant points, reference may be made to the description of the method embodiments.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing device, create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, causing a series of operational steps to be performed on the computer or other programmable device so as to produce a computer-implemented process, such that the instructions which execute on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
While preferred embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the true scope of the embodiments of the application.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above has described in detail the input method, apparatus, device, and machine-readable medium provided in the present application. Specific examples are used herein to explain the principles and embodiments of the present application, and the descriptions of the above examples are only intended to help understand the method and core ideas of the present application. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the application scope according to the ideas of the present application. In summary, the content of this specification should not be construed as a limitation on the present application.

Claims (16)

1. An input method, comprising:
in response to a calling operation of a user on an input method, entering a voice input state of the input method, and displaying a text input interface or an entrance of the text input interface;
receiving input information of a user;
if the input information is voice information, keeping the voice input state of the input method; or if the input information is text input information, entering a text input state of the input method; the text input information is input through the text input interface.
2. The method of claim 1, further comprising:
in response to the calling operation of the user on the input method, outputting first prompt information corresponding to input environment information, wherein the first prompt information is used for prompting input content.
3. The method of claim 2, wherein the input environment information comprises at least one of:
application environment information; and/or
interface environment information.
4. The method of claim 2, further comprising:
determining the first prompt information corresponding to the input environment information according to a mapping relationship between input environment information and prompt information.
5. The method of claim 2, wherein outputting the first prompt message corresponding to the input environment information comprises:
playing the first prompt information corresponding to the input environment information; and/or
displaying the first prompt information corresponding to the input environment information in an input control.
6. The method of any one of claims 1 to 5, wherein the calling operation comprises:
a triggering operation performed by the user on the input control.
7. The method according to any one of claims 1 to 5, further comprising:
if the input information is voice information, outputting response information for the voice information.
8. The method of claim 7, wherein outputting the response information for the voice information comprises:
if the voice information meets an input condition, outputting data corresponding to the voice information; or
if the voice information does not meet the input condition, outputting input environment information and second prompt information corresponding to the voice information, wherein the second prompt information is used for prompting the input content.
9. The method of claim 8, wherein the data corresponding to the voice information comprises: text candidates, wherein the text candidates are obtained according to a lexicon corresponding to the input environment information.
10. The method of claim 8, wherein the data corresponding to the voice information comprises: search results, wherein the search results are obtained according to a database corresponding to the input environment information.
11. The method according to any one of claims 1 to 5, further comprising:
after entering the text input state, if a switching condition is met, entering the voice input state.
12. The method of claim 11, wherein the switching condition comprises:
the text input information is not received within a preset duration; or
a trigger operation for the voice entrance is received; or
a voice wake-up instruction of the user is received.
13. The method according to any one of claims 1 to 5, wherein entering the voice input state of the input method comprises:
displaying prompt information of the input method in the voice input state; or
displaying the voice input interface.
14. An input device, comprising:
a calling response module, configured to, in response to a calling operation of a user on an input method, enter a voice input state of the input method and display a text input interface or an entrance of the text input interface;
an input receiving module, configured to receive input information of the user; and
an input response module, configured to keep the voice input state of the input method when the input information is voice information, or to enter a text input state of the input method when the input information is text input information, wherein the text input information is input through the text input interface.
15. An apparatus, comprising:
one or more processors; and
one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the apparatus to perform the method of one or more of claims 1-13.
16. One or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause an apparatus to perform the method recited by one or more of claims 1-13.
CN201910426237.9A 2019-05-21 2019-05-21 Input method, device, equipment and machine readable medium Pending CN111984129A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910426237.9A CN111984129A (en) 2019-05-21 2019-05-21 Input method, device, equipment and machine readable medium


Publications (1)

Publication Number Publication Date
CN111984129A true CN111984129A (en) 2020-11-24

Family

ID=73436218

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910426237.9A Pending CN111984129A (en) 2019-05-21 2019-05-21 Input method, device, equipment and machine readable medium

Country Status (1)

Country Link
CN (1) CN111984129A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1555481A * 2002-03-15 2004-12-15 Mitsubishi Electric Corp. Navigation device for vehicle
CN104346127A * 2013-08-02 2015-02-11 Tencent Technology (Shenzhen) Co., Ltd. Realization method, realization device and terminal for voice input
CN107831994A * 2017-11-28 2018-03-23 Zhuhai Meizu Technology Co., Ltd. Input method enabling method and device, computer device and readable storage medium
CN108062214A * 2017-10-20 2018-05-22 Shenyang MXNavi Co., Ltd. Display method and device of a search interface
CN108737634A * 2018-02-26 2018-11-02 Zhuhai Meizu Technology Co., Ltd. Voice input method and device, computer device and computer-readable storage medium


Similar Documents

Publication Publication Date Title
CN107102746B (en) Candidate word generation method and device and candidate word generation device
US20170076181A1 (en) Converting text strings into number strings, such as via a touchscreen input
US9129011B2 (en) Mobile terminal and control method thereof
CN107436691B (en) Method, client, server and device for correcting errors of input method
WO2021128880A1 (en) Speech recognition method, device, and device for speech recognition
US11749273B2 (en) Speech control method, terminal device, and storage medium
WO2014055791A1 (en) Incremental feature-based gesture-keyboard decoding
KR20150017156A (en) Method and apparatus for providing recommendations on portable terminal
CN110060674B (en) Table management method, device, terminal and storage medium
CN107544684B (en) Candidate word display method and device
WO2014176750A1 (en) Reminder setting method, apparatus and system
US10950221B2 (en) Keyword confirmation method and apparatus
CN107918496B (en) Input error correction method and device for input error correction
CN107564526B (en) Processing method, apparatus and machine-readable medium
CN110727410A (en) Man-machine interaction method, terminal and computer readable storage medium
US20190340233A1 (en) Input method, input device and apparatus for input
KR20150027885A (en) Operating Method for Electronic Handwriting and Electronic Device supporting the same
US11423880B2 (en) Method for updating a speech recognition model, electronic device and storage medium
CN108073291B (en) Input method and device and input device
CN111984129A (en) Input method, device, equipment and machine readable medium
KR20110025510A (en) Electronic device and method of recognizing voice using the same
CN113407099A (en) Input method, device and machine readable medium
CN111103986A (en) User word stock management method and device and input method and device
CN109426359A (en) A kind of input method, device and machine readable media
EP2806364B1 (en) Method and apparatus for managing audio data in electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201218

Address after: Room 603, 6 / F, Roche Plaza, 788 Cheung Sha Wan Road, Kowloon, China

Applicant after: Zebra smart travel network (Hong Kong) Limited

Address before: Fourth Floor, Capital Building, P.O. Box 847, Grand Cayman, Cayman Islands

Applicant before: Alibaba Group Holding Ltd.