CN111176545A - Device control method, system, electronic device and storage medium - Google Patents


Info

Publication number
CN111176545A
Authority
CN
China
Prior art keywords
information, target, touch, instruction, preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911403064.5A
Other languages
Chinese (zh)
Other versions
CN111176545B (en)
Inventor
喻纯 (Yu Chun)
史元春 (Shi Yuanchun)
石伟男 (Shi Weinan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN201911403064.5A
Publication of CN111176545A
Application granted
Publication of CN111176545B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 — Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 — Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures


Abstract

The application discloses a device control method, comprising: determining the target input method currently used by a device; recording touch information of a target touch area, the touch information comprising the position information and touch time of all touch points; generating a target touch point sequence according to the position information and the touch time, and calculating similarity information between the target touch point sequence and preset touch point sequences, where a preset touch point sequence is the touch point sequence, under the target input method, of the text content corresponding to a preset instruction in a target instruction set; selecting, according to the similarity information, a target instruction corresponding to the touch information from all the preset instructions; and controlling the device to execute the operation corresponding to the target instruction. With this method, the electronic device can be accurately controlled without requiring visual participation. The application also discloses a device control system, an electronic device, and a storage medium having the same beneficial effects.

Description

Device control method, system, electronic device and storage medium
Technical Field
The present disclosure relates to the field of computer technology, and in particular to a device control method, a device control system, an electronic device, and a storage medium.
Background
With the development of science and technology, electronic devices have become necessities of daily life. An electronic device can provide a graphical user interface (GUI) through which users perform human-computer interaction. However, when the device is used by a visually impaired person, or by someone who cannot conveniently look at the GUI, interaction through the GUI is impractical, and so is controlling the device through it.
In the related art, instructions are executed through functions provided by a voice assistant, but this voice-assistant-based approach to device control is strongly affected by the network environment and environmental noise, and its recognition accuracy is low.
Therefore, how to achieve accurate control of an electronic device without visual participation is a technical problem that those skilled in the art currently need to solve.
Disclosure of Invention
The present application aims to provide a device control method, a device control system, an electronic device, and a storage medium that can achieve accurate control of an electronic device without requiring visual participation.
To solve the above technical problem, the present application provides a device control method, comprising:
determining a target input method currently used by a device;
recording touch information of a target touch area, the touch information comprising position information and touch time of all touch points;
generating a target touch point sequence according to the position information and the touch time, and calculating similarity information between the target touch point sequence and a preset touch point sequence, where the preset touch point sequence is the touch point sequence, under the target input method, of the text content corresponding to a preset instruction in a target instruction set;
selecting, according to the similarity information, a target instruction corresponding to the touch information from all the preset instructions;
and controlling the device to execute the operation corresponding to the target instruction.
Optionally, before selecting the target instruction corresponding to the touch information from all the preset instructions according to the similarity information, the method further includes:
acquiring reference information, the reference information comprising context information and/or usage frequency information of each preset instruction, where the context information comprises any one or any combination of historical operation records, current page information, and personalized setting information;
correspondingly, selecting the target instruction corresponding to the touch information from all the preset instructions according to the similarity information includes:
selecting the target instruction corresponding to the touch information from all the preset instructions according to the similarity information and the reference information.
Optionally, selecting, according to the similarity information, the target instruction corresponding to the touch information from all the preset instructions includes:
taking, according to the similarity information, the preset instruction corresponding to the preset touch point sequence with the highest similarity as the target instruction corresponding to the touch information.
Optionally, before recording the touch information of the target touch area, the method further includes:
when an identification trigger instruction is received, determining area position information according to the identification trigger instruction;
and setting the area range corresponding to the area position information as the target touch area, so as to identify the touch information of the target touch area.
Optionally, after setting the area range corresponding to the area position information as the target touch area, the method further includes:
generating a keyboard hiding instruction, so as to hide the virtual keyboard interface in the target touch area;
or generating a keyboard disabling instruction, so as to disable the virtual keyboard interface in the target touch area.
Optionally, the target instruction includes a first type of instruction, a second type of instruction, or a third type of instruction;
the first class of instructions are instructions for jumping to a target page, the second class of instructions are instructions for executing operations in a current page, and the third class of instructions are instructions for jumping to the target page and executing operations in the target page.
Optionally, the target instruction set includes instruction information of the target instruction; wherein the instruction information comprises command content and an execution path;
correspondingly, controlling the device to execute the operation corresponding to the target instruction includes:
determining the instruction information corresponding to the target instruction, and controlling the device to execute the operation corresponding to the target instruction according to the instruction information.
Optionally, while the touch information of the target touch area is being recorded, the method further includes:
determining whether a clear instruction has been received;
and if so, deleting the touch information of the target touch area, so as to record the touch information of the target touch area again.
Optionally, before controlling the device to execute the operation corresponding to the target instruction, the method further includes:
playing voice information corresponding to the target instruction, and receiving the user's feedback on the voice information;
when the feedback information meets a preset standard, controlling the device to execute the operation corresponding to the target instruction;
when the feedback information does not meet the preset standard, playing the voice information of the preset instructions corresponding to the preset touch point sequences in descending order of similarity, and setting the preset instruction confirmed by the user as the new target instruction, so as to control the device to execute the operation corresponding to the new target instruction;
wherein the feedback information corresponding to the preset instruction confirmed by the user meets the preset standard.
The present application also provides a device control system, including:
an input method determining module, configured to determine the target input method currently used by the device;
an information acquisition module, configured to record the touch information of the target touch area, the touch information comprising position information and touch time of all touch points;
a sequence matching module, configured to generate a target touch point sequence according to the position information and the touch time, and calculate similarity information between the target touch point sequence and a preset touch point sequence, where the preset touch point sequence is the touch point sequence, under the target input method, of the text content corresponding to a preset instruction in a target instruction set;
an instruction determining module, configured to select, according to the similarity information, a target instruction corresponding to the touch information from all the preset instructions;
and a device control module, configured to control the device to execute the operation corresponding to the target instruction.
The present application also provides a storage medium on which a computer program is stored; when executed, the computer program implements the steps of the above device control method.
The application also provides an electronic device comprising a memory and a processor, wherein a computer program is stored in the memory, and the processor implements the steps of the above device control method when invoking the computer program in the memory.
The application provides a device control method, comprising: determining the target input method currently used by a device; recording touch information of a target touch area, the touch information comprising position information and touch time of all touch points; generating a target touch point sequence according to the position information and the touch time, and calculating similarity information between the target touch point sequence and a preset touch point sequence, where the preset touch point sequence is the touch point sequence, under the target input method, of the text content corresponding to a preset instruction in a target instruction set; selecting, according to the similarity information, a target instruction corresponding to the touch information from all the preset instructions; and controlling the device to execute the operation corresponding to the target instruction.
With this method, the position information and touch time of the user's touch operations in the target touch area can be recorded, and the target touch point sequence entered by the user can be determined from the position information and touch time of all touch points. Once the target input method currently used by the device is known, the preset touch point sequence, under that input method, of the text content corresponding to each preset instruction in the target instruction set can be determined. Comparing the preset touch point sequences with the target touch point sequence by similarity identifies the target instruction in the target instruction set that corresponds to the target touch point sequence, and the device is then controlled to execute the corresponding operation. Because the target touch point sequence is determined from the position information and touch time of the touch points, the user can complete the input from memory of the key positions of the current input method, without referring to any key prompts displayed in the target touch area, and the corresponding target instruction can still be determined. The electronic device can therefore be accurately controlled without visual participation. The application also provides a device control system, an electronic device, and a storage medium with the same beneficial effects, which are not repeated here.
Drawings
In order to more clearly illustrate the embodiments of the present application, the drawings needed for the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained by those skilled in the art without inventive effort.
Fig. 1 is a flowchart of a device control method according to an embodiment of the present application;
FIG. 2 is a schematic interface diagram of the 26-key input method;
FIG. 3 is a schematic diagram of the keys used to input "open WeChat" through the 26-key input method;
FIG. 4 is a key sequence diagram of "open WeChat" input through the 26-key input method;
Fig. 5 is a flowchart of a blind-input instruction triggering method applied to a mobile terminal according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a device control system according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
While using an electronic device, a user controls it by issuing instructions; that is, the electronic device recognizes the user's instruction and performs the corresponding function. However, because screen display space is limited, reaching any of the many instructions on a mobile device usually takes several operations, which is very inefficient for users unfamiliar with the interface. For users with visual impairments, such as blind users, the situation is worse: lacking visual feedback, they can only search linearly for the desired function button by auditory feedback with the help of screen-reading software, a process with many difficulties (for example, some buttons provide no proper voice feedback) and low efficiency.
To achieve vision-independent control of electronic devices, the related art commonly executes instructions through functions provided by a voice assistant. For example, a smartphone can implement device control through its built-in voice assistant, which recognizes the user's speech as text, infers the user's intent by natural language processing, and responds with the corresponding instruction. Most voice assistants support common instructions such as "open WeChat Moments," but do not support the many instructions inside applications. Voice input also has inherent disadvantages: poor privacy, low recognition accuracy, and unstable recognition results and latency (which depend on network conditions), so it still cannot achieve accurate control of the electronic device without visual participation. In view of these problems in the related art, the present application, through the following embodiments, achieves accurate control of an electronic device without visual participation by means of a new device control method.
Referring to fig. 1, fig. 1 is a flowchart of a device control method according to an embodiment of the present application.
The specific steps may include:
s101: determining a target input method currently used by equipment;
the present embodiment can be applied to an electronic device with a touch function, such as a mobile phone, a tablet computer, and a touch all-in-one machine, and the device mentioned in the present embodiment can be an electronic device with a touch function. When a user inputs information by using an electronic device with a touch function, mapping between a touch point and input content needs to be realized according to a target input method currently used by the electronic device. Each input method can have a corresponding keyboard key mapping relation, that is, the input contents of the same touch point can be different under different input methods. Before this step, there may be an operation of the user setting the target input method. As a possible implementation manner, the input method switching information may be generated when the user sets the input method or the electronic device system automatically switches the input method. The specific input method switching information may be voice information of the name of the current input method, and of course, the input method switching information may be displayed in the form of icons or characters on a human-computer interaction interface of the electronic device.
The purpose of this step lies in confirming the target input method that equipment uses at present, and as time goes on, equipment can have input method switching operation, can confirm the target input method that equipment uses at present again after the input method switches over.
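The per-input-method key mapping this step relies on can be sketched as follows. This is an illustrative assumption, not the patent's implementation: the row layout and stagger offsets mimic a typical 26-key phone keyboard, and a real input method would expose its own layout table.

```python
# Hypothetical 26-key (QWERTY) layout: map each letter to the (x, y)
# centre of its key. Offsets are assumed values for illustration.
QWERTY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
ROW_OFFSETS = [0.0, 0.5, 1.5]  # horizontal stagger per row, in key widths

def build_layout(rows, offsets):
    """Map each letter to the (x, y) centre of its key."""
    layout = {}
    for y, (row, dx) in enumerate(zip(rows, offsets)):
        for x, ch in enumerate(row):
            layout[ch] = (x + dx, y)
    return layout

layout_26key = build_layout(QWERTY_ROWS, ROW_OFFSETS)
print(layout_26key["d"])  # → (2.5, 1)
```

Under a different input method (e.g. a 9-key layout), the same touch position would map to different content, which is why S101 must run before the touch information is interpreted.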
S102: recording touch information of a target touch area;
the electronic device may have a touch information receiving device, and a user may input a specific gesture command in a touch area of the touch information receiving device to control the electronic device. As a possible implementation manner, the touch area of the touch information receiving device can receive the touch information of the user and can also be used as a display screen for displaying information.
As a possible implementation, an identification trigger instruction may be received before this step. The identification trigger instruction triggers the recognition of the user's touch information and may be voice information, key information, touch gesture information, or the like. After it is received, the touch information of the target touch area is recorded. To save energy and avoid false recognition, recording may stop once no touch point has been detected in the target touch area for longer than a preset duration. Further, when a trigger gesture command (i.e., an identification trigger instruction in the form of touch gesture information) is a precondition for executing S102, the trigger gesture may be a preset gesture operation, and recording begins when the user is detected to input it in the touch area. For example, when this embodiment is applied to a smartphone, detecting that the user slides down and then slides up on the screen may be treated as receiving the trigger gesture instruction and entering the steps of this embodiment. Of course, this embodiment does not limit the specific form of the trigger gesture instruction; those skilled in the art can set it flexibly according to the application scenario.
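The slide-down-then-slide-up trigger gesture above could be detected roughly as follows. This is a sketch under stated assumptions: y grows toward the bottom of the screen (as on most touch screens), and the `min_travel` noise threshold is an invented value.

```python
def is_trigger_gesture(ys, min_travel=50):
    """Return True if one continuous stroke's y-samples go down, then back up.

    ys: y-coordinates sampled along the stroke, in pixels, with y
    increasing toward the bottom of the screen. min_travel is an
    assumed threshold that filters out jitter.
    """
    if len(ys) < 3:
        return False
    turn = max(range(len(ys)), key=lambda i: ys[i])  # lowest point reached
    down = ys[turn] - ys[0]    # travel on the way down
    up = ys[turn] - ys[-1]     # travel on the way back up
    return 0 < turn < len(ys) - 1 and down >= min_travel and up >= min_travel

print(is_trigger_gesture([100, 200, 320, 210, 120]))  # → True
```

A stroke that only moves one way (e.g. `[100, 150, 200]`) never turns around, so it is rejected, matching the requirement that the gesture contain both phases.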
The target touch area mentioned in this embodiment may be one or several sub-areas of the device's touch information receiving area, or the entire receiving area. The touch information describes the touch actions the user performs in the target touch area and may include the relative positions of all touch points and their touch order. In one possible case, the touch information includes the position information and touch time of all touch points. It is understood that the user inputs touch information in the target touch area based on the target input method.
S103: generating a target touch point sequence according to the position information and the touch time, and calculating similarity information between the target touch point sequence and a preset touch point sequence;
the position information mentioned in this step may include absolute position information of each touch point on the screen, or may include relative positions between any number of touch points, where the touch time refers to an input time of each touch point, and a touch sequence between each touch point and an input time interval between adjacent touch points may be determined according to the touch time. The target touch point sequence obtained in this embodiment is information for describing a relative position and a touch sequence between touch points of a user in the target touch area.
Referring to fig. 2, fig. 3 and fig. 4: fig. 2 is a schematic interface diagram of the 26-key input method, fig. 3 shows the keys used to input "open WeChat" through the 26-key input method, and fig. 4 shows the corresponding key sequence. As fig. 4 shows, entering "open WeChat" (pinyin "da kai wei xin") on the 26-key input method touches 8 keys a total of 11 times, in the order D, A, K, A, I, W, E, I, X, I, N; the numbers in fig. 4 represent the touch order, and bracketed numbers indicate keys touched more than once. In this embodiment, the user only needs to touch the target touch area 11 times following the relative positions of the eight letters D, A, K, I, W, E, X and N to complete the input of the touch information and obtain the target touch point sequence. In other words, the user can input touch information at any position: as long as the relative positions and touch order of the touch points are correct, the same target touch point sequence is obtained, enabling touch input without visual participation.
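The "da kai wei xin" example can be sketched as follows. The grid coordinates are an illustrative simplification (no row stagger); only the relative positions matter, consistent with the paragraph above.

```python
# Map the pinyin "dakaiweixin" onto assumed 26-key grid positions and
# confirm it takes 11 touches on 8 distinct keys, as in Fig. 3 and Fig. 4.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
KEY_POS = {ch: (x, y) for y, row in enumerate(ROWS) for x, ch in enumerate(row)}

def text_to_touch_sequence(text):
    """Expected touch-point sequence for an instruction's text content."""
    return [KEY_POS[ch] for ch in text.lower() if ch in KEY_POS]

seq = text_to_touch_sequence("dakaiweixin")
print(len(seq), len(set(seq)))  # → 11 8
```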
As a possible implementation, the target touch point sequence generated in this embodiment may describe all touch point positions, the touch order, and the touch time intervals, and the components of the user's input can be further divided according to those intervals. For example, suppose the user inputs QUXIAN. By the spelling rules, if only the interval between U and X exceeds 0.5 seconds while all other intervals are below 0.5 seconds, it can be determined that the user input the pinyin "qu xian" ("curve"); if the intervals between U and X and between I and A both exceed 0.5 seconds while the rest are below it, it can be determined that the user input the pinyin "qu xi an" ("go to Xi'an"). When the target touch point sequence includes the touch time intervals, the user's input can therefore be divided more accurately, further improving the accuracy of the similarity information. As a possible implementation, this embodiment may use the name of the operation to be performed, such as "open the browser", as the instruction text; since users already express intentions through language in daily life, a large number of instructions can be encoded efficiently in this way.
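The interval-based segmentation above can be sketched as follows. The 0.5 s threshold comes from the example in the text; the function name and sample timestamps are illustrative.

```python
def segment_by_interval(chars, times, gap=0.5):
    """Split one touch sequence into syllables wherever the pause
    between adjacent touches exceeds `gap` seconds."""
    groups, current = [], [chars[0]]
    for ch, prev_t, t in zip(chars[1:], times, times[1:]):
        if t - prev_t > gap:
            groups.append("".join(current))
            current = []
        current.append(ch)
    groups.append("".join(current))
    return groups

# Pause only between U and X → "qu xian" ("curve"):
print(segment_by_interval(list("quxian"), [0.0, 0.1, 0.8, 0.9, 1.0, 1.1]))
# Pauses after U and after I → "qu xi an" ("go to Xi'an"):
print(segment_by_interval(list("quxian"), [0.0, 0.1, 0.8, 0.9, 1.6, 1.7]))
```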
After the target touch point sequence is obtained, it is compared with the preset touch point sequences, and similarity information is calculated; the similarity information describes the degree of similarity between the target touch point sequence and a preset touch point sequence. A preset touch point sequence is the touch point sequence, under the target input method, of the text content corresponding to a preset instruction in the target instruction set. A target instruction set containing several preset instructions can be built in advance. Each preset instruction has corresponding text content, a textual expression of the instruction's name in a specific language, and one instruction may have several text contents: for example, the text content for the preset instruction "open map" may be the full pinyin "da kai di tu", the abbreviated pinyin "dkdt", or the English text "open map"; spaces may be used as separators between words or characters. With the text content of each preset instruction determined, this embodiment obtains the preset touch point sequence of each text content under the target input method determined in S101; a preset touch point sequence may include the position information, touch order, and touch time interval ranges of the preset instruction's touch points under the target input method.
As a feasible implementation, this embodiment may determine the preset touch point sequence of the text content of every preset instruction in the target instruction set under the target input method, compare all preset touch point sequences with the target touch point sequence, and obtain the similarity of the target touch point sequence to each preset touch point sequence.
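One plausible similarity computation between the recorded sequence and a preset sequence is sketched below: normalize both so the first touch becomes the origin (making the score independent of where on the screen the user typed), then convert a sequence distance into a similarity in (0, 1]. The patent does not fix a particular metric; dynamic time warping is an assumption here.

```python
import math

def normalize(points):
    """Translate so the first touch point becomes the origin."""
    x0, y0 = points[0]
    return [(x - x0, y - y0) for x, y in points]

def dtw(a, b):
    """Dynamic-time-warping distance between two 2-D point sequences."""
    D = [[math.inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    D[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = math.dist(a[i - 1], b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[-1][-1]

def similarity(target_seq, preset_seq):
    """Map distance into (0, 1]; 1.0 means an exact (shifted) match."""
    return 1.0 / (1.0 + dtw(normalize(target_seq), normalize(preset_seq)))

# The same gesture entered at two different screen positions matches exactly:
print(similarity([(0, 0), (3, 0), (3, 5)], [(20, 20), (23, 20), (23, 25)]))  # → 1.0
```

Computing this score against every preset touch point sequence in the target instruction set yields the per-instruction similarity information that S104 consumes.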
S104: selecting, according to the similarity information, the target instruction corresponding to the touch information from all the preset instructions;
This step builds on the obtained similarity information: the target instruction is determined from all preset instructions in the target instruction set according to it. As a possible implementation, a standard similarity may be set, and a preset instruction whose similarity exceeds the standard is taken as the target instruction; if more than one preset instruction exceeds the standard, the one with the highest similarity is chosen, i.e., the preset instruction corresponding to the preset touch point sequence with the highest similarity becomes the target instruction corresponding to the touch information. Alternatively, the most frequently used preset instruction may be taken as the target instruction, or the target instruction may be determined by combining the similarity ranking with usage frequency.
Next, the operation of determining the target instruction corresponding to the input information according to the position information of all the touch points and the touch time in the embodiment is analyzed from the perspective of the electronic device. Because different control instructions correspond to different touch gestures, the embodiment abandons the technical scheme that the display content of the touch area and the touch position of the user need to be in one-to-one correspondence in the related art, and determines the content of the input information according to the relative position between the touch points and the touch sequence. Because the content of the input information can be determined according to the relative position and the touch time between the touch points, the technical scheme provided by the embodiment can achieve the effects of inputting an instruction and controlling the equipment in a touch mode without visual participation of a user. As a possible implementation manner, the first collected touch point may be set as an origin, a coordinate system may be established based on the origin, coordinate information of all touch points in the coordinate system may be obtained according to relative positions of other touch points and the first collected touch point, and relative positions between all coordinate points may be determined according to the coordinate information of all coordinate points. For example, the standard touch point relative position and touch sequence change of the text content of a shutdown instruction is called as: and after the first touch point is input, moving the first touch point to the right for 3 unit distances to touch to obtain a second touch point, and then moving the second touch point to the upper for 5 unit distances to touch to obtain a third touch point. 
Taking the mobile phone screen as the reference frame: when three touch points are detected in sequence at (0,0), (3,0) and (3,5), the user has input a shutdown instruction; when the three touch points are (2,2), (5,2) and (5,7), the user has likewise input a shutdown instruction; and when they are (20,20), (23,20) and (23,25), the same shutdown instruction is recognized. In this embodiment, even if the overall region of the touch screen in which the user's touches fall changes, recognition of the touch information is unaffected, so blind or visually impaired users can control the device without visual feedback. That is, this embodiment recognizes instructions by attending only to the relative positions of the touch points and their touch order, without depending on the user's absolute touch position on the touch screen.
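The translation invariance illustrated above — all three detected sequences encoding the same shutdown gesture — follows from re-expressing every touch point relative to the first one. A minimal sketch; the `normalize` helper is hypothetical:

```python
def normalize(points):
    """Re-express a touch point sequence relative to its first point as origin."""
    x0, y0 = points[0]
    return [(x - x0, y - y0) for x, y in points]

# The three shutdown examples from the text reduce to one relative pattern:
pattern = normalize([(0, 0), (3, 0), (3, 5)])
assert pattern == normalize([(2, 2), (5, 2), (5, 7)])
assert pattern == normalize([(20, 20), (23, 20), (23, 25)])
```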
S105: and controlling the equipment to execute the operation corresponding to the target instruction.
The embodiment is based on determining a target instruction corresponding to the input information, and may control the device to execute an operation corresponding to the target instruction.
In the context of this embodiment, the target instruction refers specifically to the user intent to be communicated to the system. A target instruction can be an action, such as screen capture, meaning that the screen capture operation is to be completed; it can also be a target, such as the friend circle, meaning a jump to the friend circle page. Target instructions fall into two types: instructions without parameters, which are unambiguous and executed directly, such as screen capture; and instructions with parameters, whose target can only be determined once the parameter values are specified, such as a video call, for which the call recipient must be specified.
The embodiment can record the position information and the touch time of the user performing the touch operation in the target touch area, and determine the target touch point sequence input by the user in the target touch area according to the position information and the touch time of all the touch points. On the premise of determining a target input method currently used by the device, a preset touch point sequence of text content corresponding to a preset instruction in a target instruction set under the target input method can be determined. And comparing the similarity of the preset touch point sequence and the target touch point sequence to determine a target instruction corresponding to the target touch point sequence in the target instruction set, so as to control the equipment to execute the operation corresponding to the target instruction. In the identification process, the target touch point sequence is determined according to the position information and the touch time of the touch point, so that the user can complete the input of information according to the key position memory corresponding to the current input method on the premise of not referring to the key position prompt information displayed in the target touch area, and further determine the corresponding target instruction. Therefore, the embodiment can realize the accurate control of the electronic equipment on the premise of no visual participation.
As a further introduction to the embodiment corresponding to fig. 1, before the target instruction corresponding to the touch information is selected from all the preset instructions according to the similarity information, reference information may also be acquired. The reference information comprises context information and/or use frequency information of each preset instruction, and the context information comprises any one or a combination of historical operation records, current page information and personalized setting information. Specifically, the personalized setting information may include instruction selection reference information customized by the user, for example the selection frequency of each preset instruction, or the selection frequency of each preset instruction within each time period.
Correspondingly, the operation in S104 of selecting the target instruction corresponding to the touch information from all the preset instructions according to the similarity information may be: selecting the target instruction corresponding to the touch information from all the preset instructions according to the similarity information and the reference information. This process introduces context information to predict the user's intended instruction (e.g. by analyzing the user's mental model), and combines it with the relative position information input by the user to predict the target instruction. The context information may include page attributes of the currently used page and instructions previously input by the user. For example, when the current page is an address book page, the probability that the target instruction is a call-dialing or message-sending instruction is high; if the user has just input an instruction to open a map APP, the probability that the target instruction is a route query instruction is high.
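One simple way to combine gesture similarity with the context-derived reference information is sketched below, under the assumption that both are available as scores in [0, 1]. The multiplicative combination and all names are hypothetical — the patent does not fix a formula.

```python
def rank_with_context(similarities, context_prior):
    """Rank preset instructions by gesture similarity weighted by a context prior.

    Both arguments map instruction name -> score in [0, 1]; multiplying the two
    favors instructions that fit the gesture AND the current context.
    """
    scores = {k: similarities.get(k, 0.0) * context_prior.get(k, 0.0)
              for k in similarities}
    return sorted(scores, key=scores.get, reverse=True)

# On an address book page, "dial call" gets a high prior, "route query" a low one
ranking = rank_with_context({"dial call": 0.7, "route query": 0.75},
                            {"dial call": 0.9, "route query": 0.1})
```

Here "route query" has the slightly better gesture match, but the context prior from the address book page pushes "dial call" to the top — mirroring the example in the paragraph above.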
As a further description of the embodiment corresponding to fig. 1, the following operations may also be performed before recording the touch information of the target touch area: when an identification trigger instruction is received, determining region position information according to the identification trigger instruction; and setting the area corresponding to the region position information as the target touch area, so that the touch information of the target touch area can be identified. The identification trigger instruction may be voice information, key information, touch gesture information or the like input by the user; after it is received, this embodiment may determine the region position information from it and then set the target touch area based on that information. When the target touch area is rectangular, the region position information may include the coordinates of its four vertices. After the area corresponding to the region position information is set as the target touch area, a keyboard hiding instruction or a keyboard disabling instruction may be generated: the keyboard hiding instruction controls the device to hide the virtual keyboard interface in the target touch area, and the keyboard disabling instruction controls the device to close the virtual keyboard interface in the target touch area. The keyboard hiding or disabling instruction need only be generated after the target touch area is set; other operations may take place between setting the target touch area and generating the instruction.
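A rectangular target touch area defined by four vertex coordinates, as described above, reduces to a simple membership test. A hypothetical sketch, with invented coordinates for illustration:

```python
def make_target_area(vertices):
    """Build a point-membership test for the rectangle spanned by four vertices."""
    xs = [x for x, _ in vertices]
    ys = [y for _, y in vertices]
    x_min, x_max, y_min, y_max = min(xs), max(xs), min(ys), max(ys)

    def contains(point):
        x, y = point
        return x_min <= x <= x_max and y_min <= y <= y_max

    return contains

# Region position information: four vertices of the target touch area
in_area = make_target_area([(0, 0), (0, 100), (80, 0), (80, 100)])
```

Touch events could then be filtered through `in_area` before being recorded as touch information.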
Further, after the identification trigger instruction is received, the target input method can be set by analyzing the identification trigger instruction. In this embodiment, a corresponding identification trigger instruction may be set for each input method, so that when an identification trigger instruction is received it can be analyzed to determine the target input method for the touch information the user will input. The target input method in this embodiment may include a nine-grid (T9) input method, a 26-key input method, a handwriting input method, a five-stroke (Wubi) input method, a double-pinyin input method, or a Cangjie input method.
As a further introduction to the corresponding embodiment of fig. 1, the step of selecting the target instruction corresponding to the touch information from all the preset instructions according to the similarity information in S104 may be: and taking a preset instruction with the highest similarity as a target instruction corresponding to the touch information according to the similarity information.
A plurality of preset instructions may be stored in the target instruction set, and in this embodiment the sample instruction with the highest similarity may be used as the target instruction. As an example, the relative positions of the touch points actually collected (with the first touch point set as the origin) and their touch order are (0,0) → (-0.49, 1) → (6.2, 2) → (5.9, 0) → (-0.5, 1.1). The target instruction set includes 3 sample instructions: the text content of sample instruction A has standard touch point relative positions and standard touch order (0,0) → (6, 1) → (3, 2) → (5, 0) → (5, 1); sample instruction B, (0,0) → (-0.5, 1) → (6, 2) → (6, 0) → (-0.5, 1); and sample instruction C, (0,0) → (6, 2) → (-0.5, 1) → (6, 0) → (-0.5, 1). By comparison, the similarities of the 3 preset instructions in the target instruction set, from high to low, are sample instruction B, sample instruction C and sample instruction A, so sample instruction B can be set as the target instruction corresponding to the input information.
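The patent does not specify the similarity metric, but the ranking in this example can be reproduced with a simple hypothetical one: the negative mean point-wise Euclidean distance between the observed and standard relative-position sequences (assuming equal-length sequences).

```python
import math

def similarity(seq_a, seq_b):
    """Higher is better: negative mean point-wise Euclidean distance
    between two equal-length relative-position sequences."""
    assert len(seq_a) == len(seq_b)
    return -sum(math.dist(p, q) for p, q in zip(seq_a, seq_b)) / len(seq_a)

observed = [(0, 0), (-0.49, 1), (6.2, 2), (5.9, 0), (-0.5, 1.1)]
samples = {
    "A": [(0, 0), (6, 1), (3, 2), (5, 0), (5, 1)],
    "B": [(0, 0), (-0.5, 1), (6, 2), (6, 0), (-0.5, 1)],
    "C": [(0, 0), (6, 2), (-0.5, 1), (6, 0), (-0.5, 1)],
}
ranking = sorted(samples, key=lambda k: similarity(observed, samples[k]),
                 reverse=True)  # -> ["B", "C", "A"]
```

With this metric the computed ranking matches the one stated in the text; a production system might instead use an order-aware alignment such as dynamic time warping.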
As a further introduction to the corresponding embodiment of fig. 1, the preset instructions (including the target instructions) in the target instruction set may be the first type of instructions, the second type of instructions, or the third type of instructions. The first class of instructions are instructions for jumping to a target page, the second class of instructions are instructions for executing operations in a current page, and the third class of instructions are instructions for jumping to the target page and executing operations in the target page. Specifically, the target instruction set may include instruction information of the target instruction; wherein the instruction information comprises command content and an execution path. Correspondingly, in S105, the operation that the control device executes the target instruction may be: and determining instruction information corresponding to the target instruction, and controlling the equipment to execute the operation corresponding to the target instruction according to the instruction information.
As a further description of the embodiment corresponding to fig. 1, when the touch information of the target touch area is recorded in S102, the following operations may also be included: judging whether a clearing instruction has been received; if so, deleting the touch information of the target touch area so that it can be recorded again. Through this clearing mechanism, the previously recorded touch points can be cleared once the user notices an input error, so the device can re-record the touch information of the target touch area when the user re-inputs the content.
As a further introduction to the embodiment corresponding to fig. 1, before the control device executes the operation corresponding to the target instruction, the following operations may be further included: playing voice information corresponding to the target instruction, and receiving feedback information of a user to the voice information; when the feedback information meets a preset standard, controlling the equipment to execute the operation corresponding to the target instruction; when the feedback information does not meet the preset standard, playing voice information of preset instructions corresponding to the preset touch point sequence according to the sequence of similarity from high to low, and setting the preset instructions confirmed by the user as new target instructions so as to control the equipment to execute operations corresponding to the new target instructions; and feedback information corresponding to the preset instruction confirmed by the user meets the preset standard.
Since the embodiment corresponding to fig. 1 selects one target instruction from all the preset instructions in the target instruction set, the selected target instruction may not be the instruction the user requires. In that case, the voice information corresponding to the target instruction may be played so that the user can respond to it. The user's feedback on the voice information may take the form of voice feedback, touch feedback, key feedback and the like, and may also include shaking the electronic device. Preset criteria for the feedback information may be set in advance in this embodiment; for example, the preset criteria may specify particular voice content, a particular gesture, or a particular key. When the feedback information meets the preset criteria, the step of controlling the device to execute the operation corresponding to the target instruction is entered. When it does not, the voice information of the preset instructions corresponding to the preset touch point sequences is played in descending order of similarity, the user's confirmation is monitored continuously while the voice information plays, and the instruction confirmed by the user is taken as the new target instruction. The user's confirmation may include: selecting the instruction currently playing, selecting the instruction played last, or selecting the instruction played first (i.e. the original target instruction). The confirmation may exist in the form of voice information, touch information or key information, and may also be a shake of the electronic device.
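The confirmation loop described above — play candidates in descending similarity order until the user's feedback meets the preset criterion — can be sketched as follows. `speak` and `get_feedback` are hypothetical callbacks standing in for the voice broadcast and the user's key/voice/gesture feedback:

```python
def confirm_loop(candidates, speak, get_feedback):
    """Play candidate instructions in descending similarity order until the
    user's feedback meets the confirmation criterion; return the confirmed one."""
    for instruction in candidates:   # candidates are pre-sorted by similarity
        speak(instruction)           # e.g. voice broadcast of the instruction name
        if get_feedback() == "confirm":
            return instruction
    return None                      # no candidate was confirmed
```

In the full scheme the user could also jump back to an earlier candidate; this sketch only covers the forward pass through the ranked list.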
For example, when the target instruction is determined to be an instruction for a receive-mail operation, the voice message corresponding to "receive mail" may be played by voice broadcast. After hearing it, if this is indeed the instruction the user wants the device to execute, the user may send confirmation information by key press, voice reply, touch gesture or the like; if it is not, the user may send rejection information in the same ways. When confirmation information is received, the step in S105 of controlling the device to execute the operation corresponding to the target instruction may be performed; when rejection information is received, the embodiment may play another preset instruction for the user to select.
The flow described in the above embodiment is explained below by an embodiment in practical use. Referring to fig. 5, fig. 5 is a flowchart illustrating a blind input command triggering method applied to a mobile terminal according to an embodiment of the present disclosure.
The user calls up the input interface for blind instruction input by performing a global trigger gesture on the original interface of the mobile terminal, then performs fuzzy input on that interface according to the textual expression of the target instruction; during input the user can clear the input upon noticing an error. After input is finished, the user decides from the system's voice feedback whether the prediction is correct. If it is incorrect, the user can switch to other candidates by gesture on the basis of the current input, or choose to clear and re-input; if it is correct, the user performs the confirmation gesture, the system automatically completes execution of the instruction, and the input interface exits back to the original interface. At any point before confirmation, the user can return to the original interface by performing the trigger gesture again.
The implementation process of the above embodiment may include two parts, namely, a prediction algorithm for the user instruction in the user input part, and a system execution part after the user confirms to execute the instruction.
The input to the user instruction prediction algorithm is the information of the user's tap operations on the screen, the current context information (the application and page the user is in, and so on) and the instruction set information (the full set of legal instructions and their use frequencies); the output is a ranking of every instruction in the instruction set, from high to low, by the probability that it is the user's input intent. This process amounts to an intelligent prediction method for instruction input that allows the user to reduce input precision (i.e. "fuzzy input") while still obtaining the correct result, thereby improving input efficiency. After the user's target instruction is obtained, the system can, by executing it automatically, save the user the time of searching the operation path and performing multi-step operations, further improving efficiency and user experience.
The instruction execution part needs predefined application structure information which expresses the jump relation between different pages in a graph form. When the instruction is executed, the position of the current page in the graph is found according to the information of the current page, the shortest path from the current position to the target position is calculated on the graph, and the execution is simulated step by step according to the information on the path so as to complete the execution of the whole instruction.
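The shortest-path step can be implemented with a breadth-first search over the page-jump graph, since every jump has equal cost. A minimal sketch; the example graph and page names are invented for illustration:

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search over the page-jump graph; returns the page
    sequence from start to goal, or None if the goal is unreachable."""
    queue, visited = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

# Invented application structure: pages and the jumps between them
app_graph = {
    "desktop": ["wechat", "settings"],
    "wechat": ["chats", "discover"],
    "discover": ["moments"],
}
```

Each edge of the returned path would then be simulated step by step (a tap, a swipe, and so on) to complete execution of the instruction.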
In the execution process of the instruction, if the instruction is a parameter-free instruction, the system can execute the instruction in sequence according to the calculated path information until the execution is finished; if the command is an instruction with parameters, the system pauses when the user needs to input the parameters, prompts the user to input the parameters, and continuously executes the subsequent steps after the parameters are input.
The present embodiment provides a method by which a user can input and recognize instructions without looking at the device screen, or while only glancing at it. When the user confirms triggering an instruction, the system automatically completes execution of the instruction and the interface jumps, and provides feedback when execution finishes. Through this process the user can complete execution of the target instruction more efficiently and easily.
Taking "enter the WeChat friend circle from the desktop" as an example: while on the desktop, the user calls up the input interface with a global gesture (such as sliding down then up on the screen), then taps five times in sequence near the positions of the five letters "wxpyq" on the keyboard (the pinyin initials of "WeChat friend circle"), hears the system's voice prompt "friend circle - WeChat", and confirms execution with a slide-down gesture; after the system jumps to the friend circle page, voice feedback of successful execution is given. Note that the keyboard displayed on the interface is only a prompt: the user can use it without looking at the interface, and need not tap within the exact area of each letter every time.
In the technical scheme provided by this embodiment, the user can complete the whole instruction triggering process with auditory feedback alone, without looking at the screen. This is an advantage for users in situations where viewing the screen is inconvenient, and for blind users of mobile devices. Instruction triggering in this embodiment is also more efficient, in three respects: first, because the instruction is input directly, the time spent searching for the target element in multi-level on-screen menus is saved, accelerating execution; second, the intelligent recognition algorithm tolerates larger input noise, so the user can obtain the desired result without inputting precisely, trading accuracy for input speed; third, after using the same instruction many times, the user forms muscle memory that further accelerates input. The embodiment is also convenient for one-handed use: because the intelligent recognition algorithm recognizes the target instruction from the relative positions of the user's taps, even on a large screen the user can operate entirely within the area reachable by one hand without affecting the result. The instruction input mode is naturally easy to memorize and learn: it uses the name of the instruction as the input basis, which is the way users ordinarily express intent, so it is more natural and memorable than other schemes and better suited to supporting instruction sets of large magnitude. Finally, the embodiment is highly extensible: the user can set different instruction mappings according to personal habits, further improving efficiency.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an apparatus control system according to an embodiment of the present disclosure;
the system may include:
an input method determining module 100, configured to determine a target input method currently used by the device;
the information acquisition module 200 is used for recording touch information of a target touch area; the touch information comprises position information and touch time of all touch points;
the sequence matching module 300 is configured to generate a target touch point sequence according to the position information and the touch time, and calculate similarity information between the target touch point sequence and a preset touch point sequence; the preset touch point sequence is a touch point sequence of text content corresponding to a preset instruction in a target instruction set under the target input method;
an instruction determining module 400, configured to select a target instruction corresponding to the touch information from all the preset instructions according to the similarity information;
and the device control module 500 is configured to control the device to execute an operation corresponding to the target instruction.
The embodiment can record the position information and the touch time of the user performing the touch operation in the target touch area, and determine the target touch point sequence input by the user in the target touch area according to the position information and the touch time of all the touch points. On the premise of determining a target input method currently used by the device, a preset touch point sequence of text content corresponding to a preset instruction in a target instruction set under the target input method can be determined. And comparing the similarity of the preset touch point sequence and the target touch point sequence to determine a target instruction corresponding to the target touch point sequence in the target instruction set, so as to control the equipment to execute the operation corresponding to the target instruction. In the identification process, the target touch point sequence is determined according to the position information and the touch time of the touch point, so that the user can complete the input of information according to the key position memory corresponding to the current input method on the premise of not referring to the key position prompt information displayed in the target touch area, and further determine the corresponding target instruction. Therefore, the embodiment can realize the accurate control of the electronic equipment on the premise of no visual participation.
Further, the method also comprises the following steps:
the information acquisition module is used for acquiring reference information before the target instruction corresponding to the touch information is selected from all the preset instructions according to the similarity information; the reference information comprises context information and/or use frequency information of each preset instruction, and the context information comprises any one or a combination of historical operation records, current page information and personalized setting information;
correspondingly, the instruction determining module 400 is specifically a module configured to select a target instruction corresponding to the touch information from all the preset instructions according to the similarity information and the reference information.
Further, the instruction determining module 400 is specifically a module configured to use, according to the similarity information, a preset instruction corresponding to a preset touch point sequence with the highest similarity as a target instruction corresponding to the touch information.
Further, the method also comprises the following steps:
the position information determining module is used for determining the area position information according to the identification triggering instruction when the identification triggering instruction is received;
and the area setting module is used for setting an area range corresponding to the area position information as the target touch area so as to identify the touch information of the target touch area.
Further, the method also comprises the following steps:
the first instruction generation module is used for generating a keyboard hiding instruction so as to hide a virtual keyboard interface in the target touch area;
or the second instruction generating module is used for generating a keyboard disabling instruction so as to close the virtual keyboard interface in the target touch area.
Further, the target instruction comprises a first type of instruction, a second type of instruction or a third type of instruction;
the first class of instructions are instructions for jumping to a target page, the second class of instructions are instructions for executing operations in a current page, and the third class of instructions are instructions for jumping to the target page and executing operations in the target page.
Further, the target instruction set includes instruction information of the target instruction; wherein the instruction information comprises command content and an execution path;
correspondingly, the device control module 500 is specifically a module for determining instruction information corresponding to the target instruction and controlling the device to execute an operation corresponding to the target instruction according to the instruction information.
Further, the method also comprises the following steps:
the clearing module is used for judging whether a clearing instruction is received or not when the touch information of the target touch area is recorded; and if so, deleting the touch information of the target touch area so as to record the touch information of the target touch area again.
Further, the method also comprises the following steps:
the voice playing module is used for playing voice information corresponding to the target instruction before the control equipment executes the operation corresponding to the target instruction, and receiving feedback information of a user on the voice information;
the first processing module is used for controlling the equipment to execute the operation corresponding to the target instruction when the feedback information meets a preset standard;
the second processing module is used for playing voice information of preset instructions corresponding to the preset touch point sequence according to the sequence from high similarity to low similarity when the feedback information does not meet the preset standard, and setting the preset instructions confirmed by the user as new target instructions so as to control the equipment to execute the operation corresponding to the new target instructions; and feedback information corresponding to the preset instruction confirmed by the user meets the preset standard.
Since the embodiment of the system part corresponds to the embodiment of the method part, the embodiment of the system part is described with reference to the embodiment of the method part, and is not repeated here.
The present application also provides a storage medium having a computer program stored thereon, which when executed may implement the steps provided by the above embodiments. The storage medium may include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The application further provides an electronic device, which may include a memory and a processor, where the memory stores a computer program, and the processor may implement the steps provided by the foregoing embodiments when calling the computer program in the memory. Of course, the electronic device may also include various network interfaces, power supplies, and the like.
The embodiments are described in a progressive manner in the specification, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description. It should be noted that, for those skilled in the art, it is possible to make several improvements and modifications to the present application without departing from the principle of the present application, and such improvements and modifications also fall within the scope of the claims of the present application.
It is further noted that, in the present specification, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.

Claims (12)

1. A device control method, characterized by comprising:
determining a target input method currently used by a device;
recording touch information of a target touch area; the touch information comprises position information and touch time of all touch points;
generating a target touch point sequence according to the position information and the touch time, and calculating similarity information between the target touch point sequence and a preset touch point sequence; the preset touch point sequence is a touch point sequence of text content corresponding to a preset instruction in a target instruction set under the target input method;
selecting a target instruction corresponding to the touch information from all the preset instructions according to the similarity information;
and controlling the device to execute the operation corresponding to the target instruction.
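The matching pipeline of claim 1, together with the highest-similarity selection rule of claim 3, can be sketched as follows. This is a minimal illustration, not the patented implementation: the claims do not specify a similarity metric, so a negative mean Euclidean distance between corresponding touch points is assumed, and all names and key-centre coordinates are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TouchPoint:
    x: float   # position information
    y: float
    t: float   # touch time

def similarity(target, preset):
    """Hypothetical metric: negative mean Euclidean distance between
    corresponding points, so a higher value means a closer match."""
    n = min(len(target), len(preset))
    if n == 0:
        return float("-inf")
    dist = sum(((p.x - qx) ** 2 + (p.y - qy) ** 2) ** 0.5
               for p, (qx, qy) in zip(target, preset)) / n
    return -dist

def select_instruction(target_sequence, preset_sequences):
    # preset_sequences maps each preset instruction to the touch-point
    # sequence of its text content under the current input method.
    scores = {name: similarity(target_sequence, seq)
              for name, seq in preset_sequences.items()}
    best = max(scores, key=scores.get)   # claim 3: highest similarity wins
    return best, scores
```

In use, the recorded touch points would first be sorted by touch time to form the target touch point sequence before matching.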
2. The device control method according to claim 1, further comprising, before selecting a target instruction corresponding to the touch information from all the preset instructions according to the similarity information:
acquiring reference information; the reference information comprises context information and/or use frequency information of each preset instruction, and the context information comprises any one of, or a combination of any of, historical operation records, current page information, and personalized setting information;
correspondingly, selecting the target instruction corresponding to the touch information from all the preset instructions according to the similarity information includes:
and selecting a target instruction corresponding to the touch information from all the preset instructions according to the similarity information and the reference information.
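Claim 2 selects the target instruction from both the similarity information and the reference information. One plausible combination is a weighted sum; the weight `alpha` and the frequency normalization below are assumptions for illustration, since the claim only states that both sources of information are used.

```python
def select_with_reference(similarity_scores, use_frequency, alpha=0.7):
    """Combine similarity information with reference information.

    similarity_scores: per-instruction similarity (higher is better)
    use_frequency:     per-instruction historical use counts
    alpha:             hypothetical weight between the two sources
    """
    total = sum(use_frequency.values()) or 1
    combined = {
        name: alpha * score + (1 - alpha) * use_frequency.get(name, 0) / total
        for name, score in similarity_scores.items()
    }
    return max(combined, key=combined.get)
```

With this weighting, frequently used instructions break ties between equally similar candidates, while a clearly better similarity score still dominates.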
3. The device control method according to claim 1, wherein selecting a target instruction corresponding to the touch information from all the preset instructions according to the similarity information comprises:
and taking a preset instruction corresponding to a preset touch point sequence with the highest similarity as a target instruction corresponding to the touch information according to the similarity information.
4. The device control method according to claim 1, further comprising, before recording the touch information of the target touch area:
when an identification trigger instruction is received, determining area position information according to the identification trigger instruction;
and setting an area range corresponding to the area position information as the target touch area so as to identify the touch information of the target touch area.
5. The device control method according to claim 4, wherein after setting the area range corresponding to the area position information as the target touch area, the method further comprises:
generating a keyboard hiding instruction so as to hide a virtual keyboard interface in the target touch area;
or generating a keyboard disabling instruction so as to close the virtual keyboard interface in the target touch area.
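Claims 4 and 5 establish the target touch area from a trigger instruction and then hide (or disable) the virtual keyboard within it. A sketch under assumptions: the rectangular region, the trigger's `"region"` field, and the `hide_keyboard` callback are all hypothetical, as the claims do not fix a data format.

```python
from dataclasses import dataclass

@dataclass
class Region:
    left: float
    top: float
    right: float
    bottom: float

    def contains(self, x, y):
        # Is a touch point inside the target touch area?
        return self.left <= x <= self.right and self.top <= y <= self.bottom

def on_trigger(trigger, hide_keyboard):
    # The trigger instruction carries the area position information
    # (hypothetical "region" field: left, top, right, bottom).
    target_area = Region(*trigger["region"])
    hide_keyboard(target_area)   # claim 5: hide the virtual keyboard there
    return target_area
```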
6. The device control method according to claim 1, wherein the target instruction includes a first type of instruction, a second type of instruction, or a third type of instruction;
the first class of instructions are instructions for jumping to a target page, the second class of instructions are instructions for executing operations in a current page, and the third class of instructions are instructions for jumping to the target page and executing operations in the target page.
7. The device control method according to claim 1, wherein the target instruction set includes instruction information of the target instruction; wherein the instruction information comprises command content and an execution path;
correspondingly, controlling the device to execute the operation corresponding to the target instruction comprises:
determining instruction information corresponding to the target instruction, and controlling the device to execute the operation corresponding to the target instruction according to the instruction information.
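Claim 7's instruction information (command content plus an execution path) can be represented as a small record. The field names and the `navigate`/`perform` callbacks below are hypothetical, since the claim names only the two components of the instruction information.

```python
def execute_target_instruction(name, instruction_set, navigate, perform):
    """Look up the instruction information and drive the device.

    instruction_set maps each instruction name to its instruction
    information: the command content and the execution path.
    navigate/perform stand in for device-control callbacks.
    """
    info = instruction_set[name]
    for page in info["path"]:     # follow the execution path page by page
        navigate(page)
    perform(info["command"])      # then carry out the command content

# Hypothetical target instruction set with one entry.
instruction_set = {
    "open_settings": {"command": "open settings",
                      "path": ["home", "menu", "settings"]},
}
```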
8. The device control method according to claim 1, further comprising, when recording touch information of the target touch area:
judging whether a clearing instruction is received;
and if so, deleting the touch information of the target touch area so as to record the touch information of the target touch area again.
9. The device control method according to any one of claims 1 to 8, further comprising, before controlling the device to execute the operation corresponding to the target instruction:
playing voice information corresponding to the target instruction, and receiving feedback information from the user on the voice information;
when the feedback information meets a preset standard, controlling the device to execute the operation corresponding to the target instruction;
when the feedback information does not meet the preset standard, playing voice information of the preset instructions corresponding to the preset touch point sequences in descending order of similarity, and setting the preset instruction confirmed by the user as a new target instruction, so as to control the device to execute the operation corresponding to the new target instruction;
wherein the feedback information corresponding to the preset instruction confirmed by the user meets the preset standard.
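The confirmation flow of claim 9 amounts to walking the candidates in descending similarity order until the user's feedback meets the preset standard. A sketch with hypothetical voice and feedback callbacks; "meets the preset standard" is modelled simply as the feedback callback returning True.

```python
def confirm_and_execute(candidates, play_voice, get_feedback, execute):
    """candidates: preset instructions sorted by similarity, best first.

    Plays each candidate's voice information; the first candidate whose
    feedback meets the preset standard becomes the target instruction
    and is executed.
    """
    for instruction in candidates:
        play_voice(instruction)
        if get_feedback(instruction):
            execute(instruction)
            return instruction
    return None   # no candidate was confirmed
```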
10. A device control system, comprising:
the input method determining module is used for determining a target input method currently used by the device;
the information acquisition module is used for recording the touch information of the target touch area; the touch information comprises position information and touch time of all touch points;
the sequence matching module is used for generating a target touch point sequence according to the position information and the touch time and calculating the similarity information between the target touch point sequence and a preset touch point sequence; the preset touch point sequence is a touch point sequence of text content corresponding to a preset instruction in a target instruction set under the target input method;
the instruction determining module is used for selecting a target instruction corresponding to the touch information from all the preset instructions according to the similarity information;
and the device control module is used for controlling the device to execute the operation corresponding to the target instruction.
11. An electronic device, comprising a memory in which a computer program is stored and a processor which, when calling the computer program in the memory, implements the steps of the device control method according to any one of claims 1 to 9.
12. A storage medium having stored thereon computer-executable instructions which, when loaded and executed by a processor, carry out the steps of a device control method according to any one of claims 1 to 9.
CN201911403064.5A 2019-12-30 2019-12-30 Equipment control method, system, electronic equipment and storage medium Active CN111176545B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911403064.5A CN111176545B (en) 2019-12-30 2019-12-30 Equipment control method, system, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111176545A (en) 2020-05-19
CN111176545B (en) 2021-05-04

Family

ID=70649047

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911403064.5A Active CN111176545B (en) 2019-12-30 2019-12-30 Equipment control method, system, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111176545B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0340117A (en) * 1989-07-07 1991-02-20 Matsushita Electric Ind Co Ltd Keyboard device for character input
JPH07134633A (en) * 1993-11-11 1995-05-23 Daio Denshi Kk Input device, piercing member, input auxiliary device and input method
CN103235696A (en) * 2013-04-12 2013-08-07 白春荣 Fast Chinese pinyin input method based on touch sensitive device and achieving system thereof
CN103870192A (en) * 2014-01-24 2014-06-18 白春荣 Input method and device based on touch screen as well as Pinyin input method and system
CN106445369A (en) * 2015-08-10 2017-02-22 北京搜狗科技发展有限公司 Input method and device
CN106453823A (en) * 2016-08-31 2017-02-22 腾讯科技(深圳)有限公司 Method and device for sending messages rapidly, and terminal
CN106774991A (en) * 2017-02-20 2017-05-31 深圳市华林奇科技有限公司 The processing method of input data, device and keyboard

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112363669A (en) * 2020-11-19 2021-02-12 北京元心科技有限公司 Operation behavior determination method and device, electronic equipment and computer-readable storage medium
CN112363669B (en) * 2020-11-19 2021-07-27 北京元心科技有限公司 Operation behavior determination method and device, electronic equipment and computer-readable storage medium

Similar Documents

Publication Publication Date Title
US10156981B2 (en) User-centric soft keyboard predictive technologies
CN105683874B (en) Method for using emoji for text prediction
US8918739B2 (en) Display-independent recognition of graphical user interface control
CN106325688B (en) Text processing method and device
CN101681198A (en) Providing relevant text auto-completions
CN104090652A (en) Voice input method and device
EP2713255A1 (en) Method and electronic device for prompting character input
CN102414994B (en) Input processing method of mobile terminal and device for performing same
CN107861932B (en) Text editing method, device and system and terminal equipment
CN110058775A (en) Display and update application view group
US20120246591A1 (en) Process and Apparatus for Selecting an Item From a Database
US9405558B2 (en) Display-independent computerized guidance
KR101394874B1 (en) Device and method implementing for particular function based on writing
CN104809174A (en) Opening method of terminal application
JP2012238295A (en) Handwritten character input device and handwritten character input method
KR20150023151A (en) Electronic device and method for executing application thereof
CN101287026A (en) System and method for executing quick dialing by hand-write recognition function
CN104808899A (en) Terminal
CN111176545B (en) Equipment control method, system, electronic equipment and storage medium
CN106970899B (en) Text processing method and device
CN109189243B (en) Input method switching method and device and user terminal
CN108737634B (en) Voice input method and device, computer device and computer readable storage medium
CN112732379A (en) Operation method of application program on intelligent terminal, terminal and storage medium
CN110569501A (en) user account generation method, device, medium and computer equipment
CN113672154B (en) Page interaction method, medium, device and computing equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant